I've run into a situation where Samba is having problems coping.  I have
filesystems on a SunCluster that my Samba server NFS-mounts and then shares
to our Windows PCs.  Twice now, when the cluster has failed over during
production time, Samba has begun to use up all swap space, the system load
has gone extremely high (it hit 98 when this happened the other day), and
the only way to recover is to reboot the Samba server.  That works for a
while, but then I have a problem where some people can map their drives and
others can't.  At that point I have to cat /dev/null into the sessionid.tdb
file, because my log.smbd file fills with messages like:

"tdb(/var/opt/samba/var/locks/sessionid.tdb): tdb_oob len 1530015816 beyond eof at 
24576".

Truncating the file clears everything up, but it's not the right solution.
I'm curious why swap space gets eaten so suddenly and so quickly, and why
the sessionid.tdb file gets corrupted.  This only happens during a cluster
failover in production; if I manually fail the cluster over during
off-production time, I don't see the same behavior.  During production I can
have between 800 and 900 shares mapped at any given time, with up to 300
files open at any given time.
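
For what it's worth, a rough way to snapshot that load from the server side
(smbstatus ships with Samba; the wc counts are approximate because they
include header lines):

    smbstatus -S | wc -l    # connections to shares
    smbstatus -L | wc -l    # locked/open files
    smbstatus -p | wc -l    # smbd processes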

I'm running Samba 2.2.2 on a Sun E3000 with four 250 MHz processors and
512 MB of memory under Solaris 8.

Thanks for any help or insight anyone can give -

Mike