I’m using the default value of 128K on Linux and OmniOS. I tried with 
recordsize=4k, but there is no difference in IOPS…
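
For reference, the change itself was roughly this (against the same pool0 
dataset used in my fio test; recordsize only applies to newly written blocks, 
so the test file has to be rewritten afterwards):

  zfs set recordsize=4k pool0
  rm /pool0/test01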

Matej


> On 22 Oct 2015, at 21:36, Min Kim <mink...@gmail.com> wrote:
> 
> Are you using the same 4K record size on your ZFS pool as you used on your 
> Linux test system?
> 
> If the record size for the zpool and slog is set at the default value of 
> 128K, it will greatly reduce the measured IOPS relative to that measured with 
> a recordsize of 4K.
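> 
> As a quick check (assuming the pool0 dataset from your fio command), something 
> like this should show the recordsize actually in effect for the dataset the 
> benchmark writes to:
> 
>   zfs get recordsize pool0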
> 
> Min Kim
> 
> 
> 
> 
>> On Oct 22, 2015, at 12:26 PM, Matej Zerovnik <ma...@zunaj.si> wrote:
>> 
>> Interesting…
>> 
>> I’m not sure this is really the problem, though.
>> 
>> As a test, I booted Linux, put both ZeusRAMs into a software RAID1 array, and 
>> repeated the test. I got the full 48k IOPS, meaning 96k IOPS were sent to the 
>> JBOD (48k IOPS for each drive).
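>> 
>> Roughly what I did on the Linux side (the device names below are just 
>> placeholders for the two ZeusRAMs, not the real ones):
>> 
>>   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
>> 
>> and then I pointed the same fio job at the md device.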
>> 
>> On the OmniOS test bed, there are 28k IOPS sent to the ZIL and X amount to the 
>> spindles when flushing the write cache, but no more than 1000 IOPS (100 
>> IOPS/drive * 10). Comparing that to the case above, IOPS shouldn’t be the 
>> limit.
>> 
>> Maybe I could try building my pools with hard drives that aren’t near the ZIL 
>> drive, which is in bay 0. I could take hard drives from bays 4-15, which 
>> probably use different SAS lanes.
>> 
>> Best regards, Matej
>> 
>> 
>>> On 22 Oct 2015, at 21:10, Min Kim <mink...@gmail.com> wrote:
>>> 
>>> I believe this is a known issue with SAS expanders.
>>> 
>>> Please see here:
>>> 
>>> http://serverfault.com/questions/242336/sas-expanders-vs-direct-attached-sas
>>> 
>>> When you are stress-testing the ZeusRAM by itself, all the IOPS and 
>>> bandwidth of the expander are allocated to that device alone. Once you add 
>>> all the other drives, you lose some of that, since it has to be shared with 
>>> the other disks.
>>> 
>>> Min Kim
>>> 
>>> 
>>> 
>>>> On Oct 22, 2015, at 12:02 PM, Matej Zerovnik <ma...@zunaj.si> wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> I'm building a new system and I'm having a bit of a performance problem. 
>>>> Well, it's either that or I'm not getting the whole ZIL idea :)
>>>> 
>>>> My system is the following:
>>>> - IBM xServer 3550 M4 server (dual CPU with 160GB memory)
>>>> - LSI 9207 HBA (P19 firmware)
>>>> - Supermicro JBOD with SAS expander
>>>> - 4TB SAS3 drives
>>>> - ZeusRAM for ZIL
>>>> - OmniOS LTS (all patches applied)
>>>> 
>>>> If I benchmark ZeusRAM on its own with random 4k sync writes, I can get 
>>>> 48k IOPS out of it, no problem there.
>>>> 
>>>> If I create a new raidz2 pool with 10 hard drives, mirrored ZeusRAMs for the 
>>>> ZIL, and set sync=always, I can only squeeze 14k IOPS out of the system. 
>>>> Is that normal, or should I be getting 48k IOPS on the 2nd pool as well, 
>>>> since that is the performance the ZeusRAM can deliver?
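>>>> 
>>>> The pool is created roughly like this (the disk names below are placeholders, 
>>>> not the actual device IDs):
>>>> 
>>>>   zpool create pool0 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
>>>>     c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
>>>>     log mirror c2t0d0 c2t1d0
>>>>   zfs set sync=always pool0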
>>>> 
>>>> I'm testing with fio:
>>>> fio --filename=/pool0/test01 --size=5g --rw=randwrite --refill_buffers 
>>>> --norandommap --randrepeat=0 --ioengine=solarisaio --bs=4k --iodepth=16 
>>>> --numjobs=16 --runtime=60 --group_reporting --name=4ktest
>>>> 
>>>> thanks, Matej
>>> 
>> 
> 


