> I bought a less expensive IDE disk (25 GB), installed it in my Linux box
> as hdb1, and mounted it. All works fine.

> Then I tried to let other machines (not running Linux) share that
> storage space and NFS-exported the directory hdb1 is mounted on.

> Now the problem starts. Every time an NFS client does an ls (or other
> I/O) on that exported directory, the heads of the system/boot disk
> (LVD SCSI) start rattling in a manner never heard before.

> The noise is really terrible, and the system becomes nearly unusable
> because of the heavy I/O traffic.

> There seems to be no caching: if I do repeated ls from the NFS client,
> the noisy and resource-hungry I/O is repeated as well.

> What is causing such heavy I/O on the system disk, and what can I do
> to overcome it?

> The problem does not appear if I export a directory from the system
> disk, i.e. one that is not a separate mount point.

> I run Linux 2.3.38 SMP. The other machines are HPs (HP-UX 10.20) and
> did not show the same behavior with NFS-exported mounted disks.
 
> I also compiled Linux 2.2.14, and it behaves the same way.

> Any help welcome.

> klaus

> [EMAIL PROTECTED]


I solved the problem. Playing around a little, I saw that the heavy
system-disk I/O started only when I accessed the mounted directory from
the NFS client as user root.

I changed my /etc/exports 
from
/disc df1tl.local.here(rw) df1tlb.local.here(rw)
to 
/disc df1tl.local.here(rw,no_root_squash) df1tlb.local.here(rw,no_root_squash)

ran killall -HUP /usr/sbin/rpc.mountd; killall -HUP /usr/sbin/rpc.nfsd,
and the I/O noise disappeared.
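A note for anyone trying this on a newer setup (an assumption about recent
nfs-utils, not what I ran here): the export list can be re-read and checked
with exportfs instead of HUPing the daemons:

```shell
# re-export everything listed in /etc/exports (newer nfs-utils)
exportfs -ra
# show what is currently exported, with the active options
exportfs -v
# or ask the mount daemon directly
showmount -e localhost
```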

In /var/log/messages and /var/log/warn I got these entries:

Jan 12 17:29:08 df1tlpc nfsd[107]: Unable to setgroups: Invalid argument 
Jan 12 17:29:08 df1tlpc nfsd[107]: Unable to setfsuid -2: Invalid argument

several hundred times.
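Those messages match the root-squashing path: by default nfsd maps requests
from root to the anonymous uid/gid -2, which in the old 16-bit uid space is
65534 ("nobody"). If that mapping can't be applied (my guess: the negative
value is rejected somewhere between libc and the kernel), the
setfsuid/setgroups calls fail as logged, and no_root_squash simply skips
the mapping. A quick sketch of the wrap-around (just the arithmetic, not
the nfsd code itself):

```shell
# -2 stored in a 16-bit uid field wraps around to 65534,
# the classic anonymous/"nobody" uid used when squashing root
anon_uid=$(( 65536 - 2 ))
echo "$anon_uid"    # 65534
```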

Is there something wrong with my /etc/group ?

klaus
[EMAIL PROTECTED]

-
Linux SMP list: FIRST see FAQ at http://www.irisa.fr/prive/dmentre/smp-howto/
To Unsubscribe: send "unsubscribe linux-smp" to [EMAIL PROTECTED]
