Which I/O library do you use? If you use stdio, you could use the libast
stdio implementation, which lets you set the block size via
environment variables.
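(For illustration only: one way to try this is to preload the ast stdio
implementation in front of the unmodified binary. The library path and the
variable name STDIO_BLOCKSIZE below are placeholders, not the real names --
check the libast documentation for the actual environment variable.)

    # Hedged sketch: preload libast's stdio and raise the buffer size.
    # Both the library path and STDIO_BLOCKSIZE are assumed placeholder names.
    export LD_PRELOAD=/usr/lib/libast.so
    export STDIO_BLOCKSIZE=16384    # placeholder for the real libast variable
    ./your_application              # unmodified binary, now buffered in larger blocks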

Olga

On Tue, Mar 9, 2010 at 7:55 PM, Matt Cowger <mcow...@salesforce.com> wrote:
> That's a very good point - in this particular case, there is no option to
> change the blocksize for the application.
>
>
> On 3/9/10 10:42 AM, "Roch Bourbonnais" <roch.bourbonn...@sun.com> wrote:
>
>>
>> I think this is highlighting that there is extra CPU overhead to
>> manage small blocks in ZFS.
>> The table would probably turn over if you go to 16K ZFS records and
>> 16K reads/writes from the application.
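(A quick way to test that, using the pool name "ram" from the original post;
note that recordsize only affects files written after the change, so the
iozone test files need to be re-created.)

    zfs set recordsize=16k ram
    iozone -e -i 0 -i 1 -i 2 -n 5120 -O -q 16k -r 16k -s 5g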
>>
>> The next step for you is to figure out how many read/write IOPS you
>> expect in the real workload, and whether or not the filesystem portion
>> will represent a significant drain on CPU resources.
>>
>> -r
>>
>>
>> On Mar 8, 2010, at 17:57, Matt Cowger wrote:
>>
>>> Hi Everyone,
>>>
>>> It looks like I've got something weird going on with ZFS performance on
>>> a ramdisk... ZFS is performing at not even a third of what UFS is doing.
>>>
>>> Short version:
>>>
>>> Create an 80+ GB ramdisk (ramdiskadm); the system has 96GB, so we aren't
>>> swapping.
>>> Create a zpool on it (zpool create ram ...).
>>> Change the zfs options to turn off checksumming (don't want or need
>>> it), atime, and compression, and set a 4K record size (this is the
>>> application's native block size), etc.
>>> Run a simple iozone benchmark (seq. write, seq. read, random write,
>>> random read).
>>>
>>> Same deal for UFS, replacing the ZFS steps with newfs and mounting the
>>> UFS filesystem forcedirectio (no point in using buffer cache memory
>>> for something that's already in memory). A sketch of the setup
>>> commands appears below.
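(For reference, a minimal sketch of the setup described above; the ramdisk
name rd1 and the mount point are assumptions -- only the pool name "ram"
comes from the post.)

    # 80 GB ramdisk; Solaris exposes it as /dev/ramdisk/rd1 (block) and /dev/rramdisk/rd1 (raw)
    ramdiskadm -a rd1 80g

    # ZFS variant, tuned as described above
    zpool create ram /dev/ramdisk/rd1
    zfs set checksum=off ram
    zfs set atime=off ram
    zfs set compression=off ram
    zfs set recordsize=4k ram

    # UFS variant (after zpool destroy ram, or on a second ramdisk)
    newfs /dev/rramdisk/rd1
    mount -F ufs -o forcedirectio /dev/ramdisk/rd1 /mnt/ufs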
>>>
>>> Measure IOPS performance using iozone:
>>>
>>> iozone  -e -i 0 -i 1 -i 2 -n 5120 -O -q 4k -r 4k -s 5g
>>>
>>> With the ZFS filesystem I get around:
>>>
>>> ZFS: (seq write) 42360    (seq read) 31010     (random read) 20953     (random write) 32525
>>>
>>> Not SOO bad, but here's UFS:
>>>
>>> UFS: (seq write) 42853    (seq read) 100761    (random read) 100471    (random write) 101141
>>>
>>> For all tests besides the seq write, UFS utterly destroys ZFS.
>>>
>>> I'm curious if anyone has any clever ideas on why this huge
>>> disparity in performance exists.  At the end of the day, my
>>> application will run on either filesystem; it just surprises me how
>>> much worse ZFS performs in this (admittedly edge-case) scenario.
>>>
>>> --M
>>
>



-- 
      ,   _                                    _   ,
     { \/`o;====-    Olga Kryzhanovska   -====;o`\/ }
.----'-/`-/     olga.kryzhanov...@gmail.com   \-`\-'----.
 `'-..-| /     Solaris/BSD//C/C++ programmer   \ |-..-'`
      /\/\                                     /\/\
      `--`                                      `--`