Mithun Bhattacharya wrote:

>--- Yashpal Nagar <[EMAIL PROTECTED]> wrote:
>
>
>>Hi
>>We use smbmount to mount certain windows shares as by rc.local on a 
>>Redhat FC3 box as:
>>/bin/mount -t smbfs -o 
>>username=Administrator,password=xxx,uid=userid,gid=ftpusers 
>>//xxx.xxx.xxx.xxx/share /data/share/ &
>>Now the problem is after sometime this get hanged due to either
>>windows 
>>m/c gets rebooted or some time-out don't exactly know .
>>Is there any nice other way around to get sync again smb shares in
>>such 
>>cases?
>
>Firstly, put it under /etc/fstab - you get better control over the
>mounting and unmounting process, including unmounting shares while
>shutting down.
>
You could have better administration in ideal cases, but I have been
very annoyed with this fstab/vfstab, which screams loudly when your NFS
server is down.
Both of my boxes are remote; I can't intervene if the NFS server is
down, and it waits indefinitely for the server to come up. Anyway...
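One way to keep the manageability of /etc/fstab without the boot-time hang is the `noauto` option, so the share is never mounted automatically at boot and rc.local (or a script) mounts it explicitly. A sketch, reusing the share and IDs from the original command; the credentials file path is an assumption:

```
# /etc/fstab sketch: noauto keeps boot from blocking on a dead server.
# /etc/samba/smbcred is a hypothetical credentials file holding
# username=Administrator / password=xxx so they stay out of fstab.
//xxx.xxx.xxx.xxx/share  /data/share  smbfs  noauto,credentials=/etc/samba/smbcred,uid=userid,gid=ftpusers  0 0
```

rc.local then needs only `mount /data/share`, and a dead server no longer stalls the boot sequence.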

>Secondly, I think your problem stems from the fact that smbfs, like nfs,
>will retry indefinitely if a share becomes unavailable - this can have
>serious implications on the server. The way around it is to understand
>how the hard and soft mount options work. Read the details under the nfs
>section in mount(8) and decide which one is more important. Remember,
>hard mount is the default, and that is what you would usually want
>unless your apps really know how to handle unreliable network file
>systems.
Is the hard mount option the default? If yes, then it doesn't seem to be
working with smbfs, because when the hang occurs I just umount and
mount again and it becomes fine...
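Since a plain umount/mount cycle recovers the share, one stopgap (not from the thread; every path and option below is a placeholder) is a small cron-driven script that remounts only when the share has disappeared from the mount table:

```shell
#!/bin/sh
# Hypothetical watchdog sketch: if MNT is no longer in the mounts table,
# lazily unmount any stale handle and remount the share.
MOUNTS_FILE=${MOUNTS_FILE:-/proc/mounts}   # overridable for testing
SHARE=//xxx.xxx.xxx.xxx/share
MNT=/data/share

# True if the given mount point appears in the mounts table.
is_mounted() {
    grep -q " $1 " "$MOUNTS_FILE"
}

remount_if_needed() {
    if is_mounted "$MNT"; then
        return 0
    fi
    umount -l "$MNT" 2>/dev/null           # clear any stale/hung handle
    mount -t smbfs -o username=Administrator,password=xxx,uid=userid,gid=ftpusers \
        "$SHARE" "$MNT"
}
```

Run it from cron every few minutes, e.g. `*/5 * * * * root /usr/local/sbin/smb-watchdog` (path hypothetical).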
Googling and gllug posts suggest autofs should work, with some
problems in some cases.
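For reference, an autofs setup for such a share might look like the sketch below; the /mnt/smb mount point, map file name, and timeout are assumptions, with the credentials reused from the original command. autofs mounts the share on first access and unmounts it after the idle timeout, which sidesteps stale long-lived mounts:

```
# /etc/auto.master: mount SMB shares on demand under /mnt/smb,
# unmounting after 60 seconds of inactivity
/mnt/smb  /etc/auto.smbshares  --timeout=60

# /etc/auto.smbshares (hypothetical map file): the key "share"
# becomes the directory /mnt/smb/share
share  -fstype=smbfs,username=Administrator,password=xxx,uid=userid,gid=ftpusers  ://xxx.xxx.xxx.xxx/share
```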

Regards
Yash

_______________________________________________
ilugd mailinglist -- ilugd@lists.linux-delhi.org
http://frodo.hserus.net/mailman/listinfo/ilugd
Archives at: http://news.gmane.org/gmane.user-groups.linux.delhi 
http://www.mail-archive.com/ilugd@lists.linux-delhi.org/