A few thoughts from another ZFS backend user:

ZFS:
Use arcstats to look at your ARC usage over time, and consider:
        Don't mirror your cache drives; use them as two separate cache
devices to double the available L2ARC.
        Add more RAM. Lots more RAM (if I'm reading that right, you have
32 GB of RAM per ZFS server).
        Adjust ZFS's maximum ARC size upwards if you have lots of RAM.
        Try more metadata caching and less content caching if your
workload is find-heavy. (A sketch of these knobs follows the list.)
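
Roughly like this on Linux; the pool name "tank", the dataset
"tank/gluster", and the device paths are placeholders for whatever yours
are actually called:

    # Per-second view of ARC hits/misses and size (ships with OpenZFS)
    arcstat 1

    # Or read the raw counters directly
    grep -E '^(hits|misses|size|c_max) ' /proc/spl/kstat/zfs/arcstats

    # Add both SSDs as independent L2ARC devices; cache vdevs hold no
    # unique data, so there is nothing worth mirroring
    zpool add tank cache /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B

    # Raise the ARC ceiling, e.g. to 24 GiB; to persist it, put
    # "options zfs zfs_arc_max=25769803776" in /etc/modprobe.d/zfs.conf
    echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max

    # Cache only metadata (not file contents) in ARC for this dataset
    zfs set primarycache=metadata tank/gluster
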
Compression on these volumes could help improve IO on the RAIDZ2 vdevs,
but compression only applies to newly written blocks, so you'll have to
copy the data on with compression enabled if you didn't already have it
on. Different zstd levels are worth evaluating here.
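
Something like this, again assuming a dataset named "tank/gluster":

    # zstd-3 is the default zstd level; lower levels cost less CPU,
    # higher levels compress harder
    zfs set compression=zstd-3 tank/gluster

    # Existing data stays uncompressed until rewritten, e.g. by copying
    # it into a fresh dataset
    zfs create tank/gluster-new
    rsync -a /tank/gluster/ /tank/gluster-new/

    # Check what the data actually compresses to
    zfs get compressratio tank/gluster-new
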
Read up on recordsize and consider whether you would get any performance
benefit from 64K, or maybe something larger for your large files; it
depends on where the reads are being done.
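
For example (same placeholder dataset names; recordsize only affects
files written after the change):

    # Larger records mean fewer blocks per big file
    zfs set recordsize=64K tank/gluster

    # A separate dataset for the multi-GB files could go bigger still
    zfs set recordsize=1M tank/gluster-big
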
Use relatime, or turn atime tracking off entirely.
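
Either is a one-liner on the same placeholder dataset:

    # relatime batches atime updates (at most one per day)
    zfs set atime=on tank/gluster
    zfs set relatime=on tank/gluster

    # or skip atime updates altogether
    zfs set atime=off tank/gluster
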
Upgrade to ZFS 2.0.6 if you aren't already on the 2.0 or 2.1 series.
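To see what you're on:

    # Prints both the userland tools and kernel module versions
    zfs version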

For Gluster, it sounds like Gluster 10 would be a good fit for your use
case. Without knowing what your workload is (VMs, Gluster mounts, NFS
mounts?), I don't have much else on that level, but you can probably
play with cluster.read-hash-mode (try 3) to spread the read load out
among your servers. Search the list archives for general performance
hints too: server.event-threads and client.event-threads are probably
good targets, and the various performance.* thread options may or may
not help, depending on how the volumes are being used. A sketch of the
kind of settings I mean follows.
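
The volume name "myvol" and the thread counts below are placeholders to
experiment with, not recommendations:

    # Mode 3 routes each read to the replica with the least outstanding
    # read requests
    gluster volume set myvol cluster.read-hash-mode 3

    # More event threads on both sides of the connection (default is 2)
    gluster volume set myvol client.event-threads 4
    gluster volume set myvol server.event-threads 4

    # IO thread count is another common knob (default is 16)
    gluster volume set myvol performance.io-thread-count 32

    # Review what's currently applied
    gluster volume info myvol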

More details (ZFS version, Gluster version, volume options currently
applied, specifics of the workload) may help if others use similar
setups. You may be getting into the territory where you just need to set
up your environment for some A/B testing with different options, though.

Good luck!

  -Darrell


> On Dec 11, 2021, at 5:27 PM, Arman Khalatyan <arm2...@gmail.com> wrote:
> 
> Hello everybody,
> I was looking for some performance considerations on glusterfs with zfs.
> The data diversity is as follows: 90% of files are <50 KB and 10% are
> 10 GB-100 GB; in total over 100 million files, about 100 TB.
> 3 replicated JBODs, each one with:
> 2x 8-disk RAIDZ2 + special device mirror (2x 1 TB NVMe) + cache mirror
> (2x SSD) + 32 GB RAM.
> 
> Most operations are reads and "find file" lookups.
> I put some parameters on zfs, like: xattr=sa, primarycache=all,
> secondarycache=all.
> What else could be tuned?
> Thank you in advance.
> Greetings from Potsdam,
> Arman.

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
