Thanks Till,

I followed your advice and throughput actually increased to 1.8 GB/s.

Now it's all defined as /dev/disk/by-id/* and ashift=12
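In case anyone wants to reproduce this, here is roughly the shape of the
create command. This is a sketch only -- the by-id device names are
placeholders, not my actual disks:

```shell
# Sketch: create a raidz1 vdev with stable by-id paths and explicit ashift.
# The device ids below are placeholders; substitute your own from
# /dev/disk/by-id/.  -o ashift=12 forces 4 KiB sector alignment rather
# than trusting autodetection.
zpool create -o ashift=12 datapool \
  raidz1 \
    /dev/disk/by-id/ata-DISK_SERIAL_1 \
    /dev/disk/by-id/ata-DISK_SERIAL_2 \
    /dev/disk/by-id/ata-DISK_SERIAL_3 \
    /dev/disk/by-id/ata-DISK_SERIAL_4
```

Further raidz1 groups can be appended on the same command line to build the
striped raidz1+0 layout, and the resulting ashift can be checked afterwards
with `zdb -C datapool | grep ashift`.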

Log and cache are now on partitions on the two SSDs.
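Roughly what that looks like, as a sketch -- the partition paths are
placeholders, and this assumes a small first partition on each SSD for the
log and a large second one for cache:

```shell
# Sketch: attach a mirrored log (SLOG) and two cache (L2ARC) devices.
# Partition paths are placeholders for the two SSDs' by-id names.
zpool add datapool log mirror \
  /dev/disk/by-id/ata-SSD_SERIAL_1-part1 \
  /dev/disk/by-id/ata-SSD_SERIAL_2-part1
zpool add datapool cache \
  /dev/disk/by-id/ata-SSD_SERIAL_1-part2 \
  /dev/disk/by-id/ata-SSD_SERIAL_2-part2
```

Mirroring the log protects in-flight synchronous writes if one SSD dies;
cache devices cannot be mirrored, so both cache partitions are simply added
side by side.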

Thank you so much for your input.

/tony


On Wed, 2019-05-08 at 11:48 +0200, Till Wegmüller wrote:
> Hi Tony
> 
> Two points about your setup.
> Do not use /dev/sdb on Linux-based servers; use /dev/disk/by-id/*.
> The reason is that the disk subsystem on Linux considers /dev/sd[a-z]
> to be ephemeral names, whereas Solaris never did. ZFS expects the
> devices to stay in the same slot all the time; if they switch, they
> can kill your pool.
> 
> Secondly, you do not need that much space for logs; a few GB at most.
> Best is to partition the SSDs into two partitions: one for the log and
> one for cache (L2ARC).
> 
> Anything else you will need to optimize at the volume/filesystem
> level, e.g. compression and/or block size.
> 
> Also be aware of the ashift parameter on Linux, as it is not set
> properly automagically.
> 
> Greetings
> Till
> On 2019-05-08 11:01, Tony Brian Albers wrote:
> > On Wed, 2019-05-08 at 07:20 +0000, Tony Brian Albers wrote:
> > > Hi guys,
> > > 
> > > I have a new server with 16 4 TB SATA disks and 2 480 GB SSDs.
> > > 
> > > What kind of ZFS layout would you suggest for optimal
> > > performance? Fault tolerance is needed, but not as important as
> > > performance.
> > > 
> > > My idea is to use the two SSDs as cache for ZFS, and I was
> > > thinking about using raidz1+0 for the others.
> > > 
> > > Any suggestions?
> > > 
> > > TIA
> > > 
> > > /tony
> > > 
> > > 
> > 
> > Nevermind, I think this does the trick:
> > 
> >         NAME        STATE     READ WRITE CKSUM
> >         datapool    ONLINE       0     0     0
> >           raidz1-0  ONLINE       0     0     0
> >             sdb     ONLINE       0     0     0
> >             sdc     ONLINE       0     0     0
> >             sdd     ONLINE       0     0     0
> >             sde     ONLINE       0     0     0
> >           raidz1-1  ONLINE       0     0     0
> >             sdf     ONLINE       0     0     0
> >             sdg     ONLINE       0     0     0
> >             sdh     ONLINE       0     0     0
> >             sdi     ONLINE       0     0     0
> >           raidz1-2  ONLINE       0     0     0
> >             sdj     ONLINE       0     0     0
> >             sdk     ONLINE       0     0     0
> >             sdl     ONLINE       0     0     0
> >             sdm     ONLINE       0     0     0
> >           raidz1-3  ONLINE       0     0     0
> >             sdn     ONLINE       0     0     0
> >             sdo     ONLINE       0     0     0
> >             sdp     ONLINE       0     0     0
> >             sdq     ONLINE       0     0     0
> >         logs
> >           sda       ONLINE       0     0     0
> >         cache
> >           sdr       ONLINE       0     0     0
> > 
> > 
> > 
> > 
> > time dd if=/dev/zero of=/datapool/testfile bs=4M count=200000
> > 200000+0 records in
> > 200000+0 records out
> > 838860800000 bytes (839 GB) copied, 498.546 s, 1.7 GB/s
> > 
> > real    8m20.459s
> > user    0m0.669s
> > sys     7m9.366s
> > 
> > 
> > I guess that'll have to do for now :)
> > 
> > /tony
> > 
> 
> _______________________________________________
> openindiana-discuss mailing list
> [email protected]
> https://openindiana.org/mailman/listinfo/openindiana-discuss

-- 
Tony Albers - Systems Architect - IT Development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +45 2566 2383 - CVR/SE: 2898 8842 - EAN: 5798000792142
