I just saw this.
Someone thought testing ramdisks was a good way to test memory bandwidth?!?!
Obviously not a driver / kernel guy. I can think of several tests that are at
least an order of magnitude better, but admittedly there are serious challenges
in isolating memory performance, since it's difficult to factor out all the
caches without also incurring CPU overhead. Nonetheless, ramdisks were *never*
designed to be performant. tmpfs, however, gets a lot more use, and accesses
memory much more 'naturally' than ramdisks do. Still far from a "clean" test, of
course.
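
For what it's worth, the kind of baseline I have in mind is just a big
streaming copy in userland, roughly the "simple memcpy test" Saso mentions
below. A rough sketch only (the buffer size, iteration count and timing
method are arbitrary, and it makes no attempt at NUMA placement, CPU binding
or multiple threads):

/*
 * Crude streaming-copy bandwidth check: memcpy between two buffers that are
 * much larger than any CPU cache, so most of the traffic has to go to DRAM.
 * The result is a rough lower bound on memory bandwidth, not a rigorous
 * benchmark.
 *
 * Build: cc -O2 -o memtest memtest.c   (older systems may need -lrt)
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_SIZE   (256UL * 1024 * 1024)  /* 256 MiB, well past typical caches */
#define ITERATIONS 8

int main(void)
{
    char *src = malloc(BUF_SIZE);
    char *dst = malloc(BUF_SIZE);
    struct timespec start, end;
    unsigned long sum = 0;
    double secs, gbps;
    size_t i;
    int iter;

    if (src == NULL || dst == NULL) {
        perror("malloc");
        return 1;
    }

    /* Touch every page up front so page faults aren't counted as "bandwidth". */
    memset(src, 0xa5, BUF_SIZE);
    memset(dst, 0, BUF_SIZE);

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (iter = 0; iter < ITERATIONS; iter++)
        memcpy(dst, src, BUF_SIZE);
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Read the destination back so the copies can't be optimized away. */
    for (i = 0; i < BUF_SIZE; i += 4096)
        sum += (unsigned char)dst[i];

    secs = (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
    /* Each memcpy reads BUF_SIZE bytes and writes BUF_SIZE bytes. */
    gbps = (2.0 * BUF_SIZE * ITERATIONS) / secs / 1e9;

    printf("copied %d x %lu MiB in %.2f s: %.2f GB/s (checksum %lu)\n",
        ITERATIONS, BUF_SIZE / (1024 * 1024), secs, gbps, sum);

    free(src);
    free(dst);
    return 0;
}

Something like that at least keeps the filesystem and block layers out of the
picture, even if a single-threaded copy won't necessarily saturate every
memory channel.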
- Garrett
On Mar 25, 2013, at 3:30 PM, Sašo Kiselkov <[email protected]> wrote:
> On 03/25/2013 11:20 PM, Christopher Chan wrote:
>> On Tuesday, March 26, 2013 04:55 AM, Sašo Kiselkov wrote:
>>> On 03/25/2013 09:48 PM, Dan McDonald wrote:
>>>> On Mon, Mar 25, 2013 at 09:43:31PM +0100, Sašo Kiselkov wrote:
>>>>> On 03/25/2013 06:39 PM, Franz Schober wrote:
>>>>>> Hi,
>>>>> A bunch more things came to my mind, see comments below:
>>>>>
>>>>>> Test: Writing a 1 GB File
>>>>>> time dd if=/dev/zero of=/tmp/testfile1 bs=128k count=8k
>>>>>> Mean value of 3 tests on every system.
>>>>>> (I also tried IOZone but the simple dd test seems to show the problem)
>>>>> Make sure you write a lot more data per test run. Give it like 50-60GB
>>>>> at least. Btw: "/tmp" is tmpfs, which resides on your swap device, which
>>>>> is decidedly not a high-performance file system store.
>>>> /tmp is memory first, and if there's no swap device, it uses available
>>>> system memory.
>>>>
>>>> I *thought* that's what the original poster was decrying, his memory
>>>> throughput, as measured via in-memory tmpfs.
>>> I understand that that's what his issue was; however, I'm not aware of
>>> tmpfs having been designed with hundreds of MB/s of throughput in
>>> mind... If he wants to test raw memory throughput, why not simply
>>> create a ramdisk (ramdiskadm) and do a dd to it? A heck of a lot fewer
>>> layers to traverse before you hit the VM layer. Plus, tmpfs isn't
>>> pre-allocated; it eats pages from the disk pagelist, possibly
>>> contending with the ZFS ARC.
>>>
>>> In short, if a VM test is needed, ramdisk is a lot simpler to manage
>>> than tmpfs.
>>
>> The OP did say that he tried ramdisks and that they were even slower
>> than tmpfs or /tmp....
>
> As I read it:
>
> "I then tried to export RAM disks (they are slower then the tmpfs) and
> then tmpfs files, but the performance over FC was the same."
>
> is not the same as "I created a ramdisk and dd'd to it; here are the
> numbers I got". When attempting to isolate performance problems, always
> test the smallest system unit you can, to remove complex layering and
> interaction which can itself skew the results.
>
> In fact, ramdisks may have other pathologies that make them unusable
> as a tool for testing VM performance. Here's what I got when I created a
> zpool on a ramdisk on my dual-Opteron machine:
>
> dd if=/dev/zero of=/test/ttt bs=1M count=700
> 700+0 records in
> 700+0 records out
> 734003200 bytes (734 MB) copied, 2.42307 s, 303 MB/s
>
> And yet this machine can manage several GB/s in a simple memcpy test.
> Clearly, some other performance measurement approaches might be needed.
>
> Cheers,
> --
> Saso