On 03/26/2013 05:59 AM, Garrett D'Amore wrote:
> I just saw this.
> 
> Someone thought testing ramdisks was a good way to test memory
> bandwidth?!?! Obviously not a driver / kernel guy.

Ouch, guess that one's on me. Yeah, once I thought about it a bit
more, it was obvious this wasn't going to be very good (I take it that
ramdisks commit the same "everything is SCSI" sin).

> I can think of several tests that are at least an order of magnitude
> better, but admittedly there are serious challenges in isolating memory
> performance, since it's difficult to isolate all the caches without
> also incurring CPU overhead.

What I was mostly trying to get at was whether there is a performance
pathology at all. The quoted 750 MB/s write throughput to a local
2x6-drive raidz2 pool seems in line with expected performance, so it was
really the FC bit that was the oddball there. As I said, I've seen
200 MB/s of write throughput on a 4GFC interface before, so I suspect
it's more likely a fabric issue.
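
For what it's worth, the sort of streaming-write test I have in mind is
roughly the sketch below. The target path, block size and total size are
just placeholders, not anything from the original report:

/*
 * Purely a sketch of the sort of streaming-write test I mean.  The
 * target path, block size and total size are placeholders; timing
 * includes the final fsync so we're not just measuring dirty pages.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define	BLKSZ	(1024 * 1024)	/* 1 MiB writes, dd bs=1M style */
#define	NBLKS	(8 * 1024)	/* 8 GiB total */

int
main(int argc, char **argv)
{
	const char *path = (argc > 1) ? argv[1] : "/testpool/fs/bigfile";
	char *buf = malloc(BLKSZ);
	struct timespec t0, t1;
	int fd, i;

	if (buf == NULL)
		return (1);
	memset(buf, 0xa5, BLKSZ);	/* non-zero so compression can't cheat */

	fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return (1);
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NBLKS; i++) {
		if (write(fd, buf, BLKSZ) != BLKSZ) {
			perror("write");
			return (1);
		}
	}
	(void) fsync(fd);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.1f MB/s\n", (double)NBLKS * BLKSZ / sec / 1e6);
	(void) close(fd);
	free(buf);
	return (0);
}

Run it once against a file on the local pool and once against something
reached over FC; if the local number stays in the expected range while
the FC one doesn't, that points at the fabric rather than the pool.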

> Nonetheless, ramdisks were *never* designed to be performant. tmpfs
> however gets a lot more use, and accesses memory much more 'naturally'
> than ramdisks. Still far from a "clean" test, of course.

Once I gave it a bit more thought, I realized tmpfs *should* be faster,
since it doesn't traverse the block device/SCSI interface and instead
intercepts calls pretty high up in the VFS stack. That said, I suspect
the tmpfs implementation isn't really designed for multi-GB/s throughput
(it's a filesystem for /tmp FFS, it's supposed to hold a couple of kB of
data anyway).
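
And if what we actually want to measure is raw memory bandwidth, a
STREAM-style copy loop sidesteps the filesystem question entirely. A
rough sketch (the buffer size is just a guess; it only needs to be well
past the last-level cache):

/*
 * Again just a sketch.  No filesystem at all; BUFSZ is a guess and only
 * needs to be much larger than the last-level cache so the copy loop
 * actually hits DRAM rather than cache.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define	BUFSZ	(1UL << 30)	/* 1 GiB */
#define	REPS	8

int
main(void)
{
	char *src = malloc(BUFSZ);
	char *dst = malloc(BUFSZ);
	struct timespec t0, t1;
	int i;

	if (src == NULL || dst == NULL)
		return (1);
	memset(src, 1, BUFSZ);	/* touch the pages up front */
	memset(dst, 2, BUFSZ);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < REPS; i++)
		memcpy(dst, src, BUFSZ);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	/* every byte is read once and written once, hence the 2x */
	printf("%.1f MB/s\n", 2.0 * REPS * BUFSZ / sec / 1e6);
	free(src);
	free(dst);
	return (0);
}

It still pays the CPU cost of the copy itself, which is the overhead
Garrett mentions above, but at least there's no block layer or VFS in
the middle.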

--
Saso

