On Tuesday, March 26, 2013 04:55 AM, Sašo Kiselkov wrote:
On 03/25/2013 09:48 PM, Dan McDonald wrote:
On Mon, Mar 25, 2013 at 09:43:31PM +0100, Sašo Kiselkov wrote:
On 03/25/2013 06:39 PM, Franz Schober wrote:
Hi,
A bunch more things came to mind; see comments below:

Test: Writing a 1 GB File
time dd if=/dev/zero of=/tmp/testfile1 bs=128k count=8k
Mean value of 3 runs on each system.
(I also tried IOzone, but the simple dd test seems to show the problem.)
Make sure you write a lot more data per test run. Give it like 50-60 GB
at least. Btw: "/tmp" is tmpfs, which resides on your swap device, which
is decidedly not a high-performance file system store.
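For example, roughly 60 GiB at the same block size would look something
like this (the target path is hypothetical; point it at a dataset on the
pool under test rather than /tmp):

  # 128k block size x 480k blocks = 60 GiB; /testpool is a placeholder path
  time dd if=/dev/zero of=/testpool/testfile bs=128k count=480k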
/tmp is memory first, and if there's no swap device, it uses available system
memory.

I *thought* that's what the original poster was decrying, his memory
throughput, as measured via in-memory tmpfs.
I understand that that's what his issue was; however, I'm not aware of
tmpfs having been designed with hundreds of MB/s of throughput in
mind... If he wants to test raw memory throughput, why not simply
create a ramdisk (ramdiskadm) and do a dd to it? It's a heck of a lot
thinner in the number of layers you need to traverse before you hit the
VM layer. Plus, tmpfs isn't pre-allocated; it eats pages from the free
pagelist, possibly contending with the ZFS ARC.

In short, if a VM test is needed, ramdisk is a lot simpler to manage
than tmpfs.
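For completeness, a minimal sketch of that (the name rdtest and the 2g
size are just examples; the raw device shows up under /dev/rramdisk on
illumos):

  # create a 2 GB ramdisk, then time a 1 GB write to its raw device
  ramdiskadm -a rdtest 2g
  time dd if=/dev/zero of=/dev/rramdisk/rdtest bs=128k count=8k
  # tear the ramdisk down afterwards
  ramdiskadm -d rdtest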

The OP did say that he tried ramdisks and that they were even slower than tmpfs or /tmp....

