> Are we still trying to solve the starvation problem?
I would argue the disk I/O model is fundamentally broken on Solaris if there is
no fair I/O scheduling between multiple read sources; until that is fixed,
individual I_am_systemstalled_while_doing_xyz problems will crop up. Started a
new thread.
> For example, you could set it to half your (8GB) memory so that 4GB is
> immediately available for other uses.
>
> * Set maximum ZFS ARC size to 4GB
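For reference, on OpenSolaris the ARC cap is usually set in /etc/system; a sketch of that tunable, assuming the 4GB figure suggested above (takes effect after a reboot):

```
* /etc/system fragment: cap the ZFS ARC at 4 GB (0x100000000 bytes).
* Requires a reboot to take effect.
set zfs:zfs_arc_max = 0x100000000
```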
Capping the max sounds like a good idea.
thanks
banks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
vmstat does show something interesting: free memory shrinks from around 10G
down to roughly 1.5G while the first dd runs (generating the 8G file). The copy
operations thereafter don't consume much, and it stays at 1.2G after all
operations have completed. (BTW, at the point of system sluggishness there ...)
Should I place a sync call after dd?
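For reference, one way to watch that shrink as it happens (sizes scaled down here; file names are just placeholders):

```shell
# Sample memory stats every 5 seconds while the write runs.
vmstat 5 > vmstat.log 2>&1 &
VMSTAT_PID=$!

# Generate a test file with many 128K writes (~64 MB here, not 8G).
dd if=/dev/urandom of=largefile.txt bs=128k count=512

sync                 # flush dirty data so on-disk usage settles
kill $VMSTAT_PID     # stop the sampler, then inspect vmstat.log
```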
--
This message posted from opensolaris.org
Hi Phil
You make some interesting points here:
-> yes, bs=1G was a lazy choice
-> the GNU cp I'm using does __not__ appear to use mmap;
open64, open64, read, write, close, close is the relevant syscall sequence
-> replacing cp with dd (128K blocks * 64K count) does not help; no new apps
can be launched until the copies finish
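For concreteness, the dd-based replacement for cp mentioned in the last point would look something like this (128K block size times 64K blocks ≈ 8 GB; paths are illustrative):

```shell
# Copy the file with explicit 128K reads/writes instead of cp.
dd if=largefile.txt of=./test/1.txt bs=128k &
dd if=largefile.txt of=./test/2.txt bs=128k &
wait    # block until both background copies complete
```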
BTW, FWIW, if I redo the dd + two-cp experiment on /tmp the result is far more
disastrous: the GUI stops moving, and even Caps Lock stops responding for large
intervals. No clue why.
Hi Henrik
I have 16GB RAM on my system; on a system with less RAM, dd alone does cause
problems, as I mentioned above. My __guess__ is that dd's output is probably
sitting in some in-memory cache, since du -sh doesn't show the full file size
until I do a sync.
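That guess is straightforward to check, since du reports allocated blocks and lags until dirty data is flushed. A small sketch (file name and size are illustrative):

```shell
# Write buffered data, then compare du output before and after sync.
dd if=/dev/zero of=cachetest.dat bs=1024k count=256   # 256 MB of writes
du -sh cachetest.dat   # may under-report while data sits in memory
sync                   # force the dirty pages out to disk
du -sh cachetest.dat   # now reflects the full on-disk allocation
```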
At this point I'm less looking for QA-type repro questions and/or
> I am confused. Are you talking about ZFS under
> OpenSolaris, or are
> you talking about ZFS under Linux via Fuse?
???
> Do you have compression or deduplication enabled on
> the zfs
> filesystem?
Compression: no. I'm guessing 2009.06 doesn't have dedup.
> What sort of system are you using
> Probably not, but ZFS only runs in userspace on Linux
> with fuse so it
> will be quite different.
I wasn't clear in my description: I'm referring to ext4 on Linux. In fact, on a
system with low RAM, even the dd command alone makes the system horribly
unresponsive.
IMHO not having fairshare or times
dd if=/dev/urandom of=largefile.txt bs=1G count=8
cp largefile.txt ./test/1.txt &
cp largefile.txt ./test/2.txt &
That's it; now the system is totally unusable after launching the two 8G
copies. Until these copies finish, no other application is able to launch
completely.
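One rough way to quantify "unable to launch" while the copies run is to time a trivial exec (a sketch; on a healthy system this returns in milliseconds):

```shell
# With the two 8G copies running in the background...
cp largefile.txt ./test/1.txt &
cp largefile.txt ./test/2.txt &
time /bin/true   # measures fork+exec latency under the I/O load
wait             # let both copies finish before cleaning up
```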
Checking prstat shows the