Re: [Gluster-users] glusterfs on zfs on rockylinux

2021-12-15 Thread Arman Khalatyan
Thank you Darrell, now I have clear steps for what to do. The data is very
valuable, so 2x mirror + arbiter, or 3 replica nodes, would be the setup.
Just for clarification: right now we have LustreFS; it is nice but has no
redundancy. I am not using it for VMs. The workload is as follows: gluster
should be mounted on multiple nodes, the connection is InfiniBand or 10 Gbit.
The clients pull the data and run some data analysis on it, so the IO pattern
varies a lot - 26 MB blocks or random 1k IO, different codes, different
projects. I am thinking of putting all <128K files on the special device (yes,
I am on the ZFS 2.0.6 branch). On gluster I have seen that the .glusterfs
folder contains a lot of small folders and files - would it improve performance
if I moved them to NVMe as well, or is it better to increase the RAM (I can't
right now, but for the future)?
Unfortunately I cannot add more RAM, but your tuning considerations are an
important note.
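
Just to make the special-device idea concrete for myself - a minimal sketch,
assuming a hypothetical pool "tank" with a brick dataset "tank/brick" (names
and the 64K threshold are placeholders, not a recommendation):

  # blocks of 64K or smaller (plus all metadata, so also the metadata of the
  # .glusterfs tree) land on the special vdev; setting this equal to recordsize
  # would push everything there, so keep it below recordsize
  zfs set special_small_blocks=64K tank/brick

  # verify the property and watch how the special vdev fills up
  zfs get special_small_blocks,recordsize tank/brick
  zpool list -v tank
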
   a.


On Tue, Dec 14, 2021 at 12:25 AM Darrell Budic 
wrote:

> A few thoughts from another ZFS backend user:
>
> ZFS:
> use arcstats to look at your cache use over time and consider:
> Don’t mirror your cache drives, use them as 2x cache volumes to increase
> available cache.
> Add more RAM. Lots more RAM (if I’m reading that right and you have 32Gb
> ram per zfs server).
> Adjust ZFS’s max arc caching upwards if you have lots of RAM.
> Try more metadata caching & less content caching if you’re find-heavy.
> Compression on these volumes could help improve IO on the RAIDZ2s, but
> you’ll have to copy the data back on with compression enabled if you didn’t
> already have it enabled. Different zstd levels are worth evaluating here.
> Read up on recordsize and consider if you would get any performance
> benefits from 64K or maybe something larger for your large data, depends on
> where the reads are being done.
> Use relatime or no atime tracking.
> Upgrade to ZFS 2.0.6 if you aren’t already at 2 or 2.1
>
> For gluster, sounds like gluster 10 would be good for your use case.
> Without knowing what your workload is (VMs, gluster mounts, nfs mounts?), I
> don’t have much else on that level, but you can probably play with the
> cluster.read-hash-mode (try 3) to spread the read load out amongst your
> servers. Search the list archives for general performance hints too;
> server.event-threads & client.event-threads are probably good targets, and the various
> performance.*threads may/may not help depending on how the volumes are
> being used.
>
> More details (zfs version, gluster version, volume options currently
> applied, more details on the workload) may help if others use similar
> setups. You may be getting into the area where you just need to get your
> environment setup to try some A/B testing with different options though.
>
> Good luck!
>
>   -Darrell
>
>
> On Dec 11, 2021, at 5:27 PM, Arman Khalatyan  wrote:
>
> Hello everybody,
> I was looking for some performance considerations for glusterfs on zfs.
> The data diversity is as follows: 90% <50 KB and 10% 10 GB-100 GB; in total
> over 100 million files, about 100 TB.
> 3 replicated JBODs, each with:
> 2x 8-disk RAIDZ2 + special device mirror (2x 1 TB NVMe) + cache mirror (2x SSD)
> + 32 GB RAM.
>
> Most operations are reads and "find file".
> I set some parameters on zfs, such as xattr=sa, primarycache=all,
> secondarycache=all.
> What else could be tuned?
> Thank you in advance.
> greetings from Potsdam,
> Arman.
>
> 
>
>
>
>
>
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] glusterfs on zfs on rockylinux

2021-12-13 Thread Darrell Budic
A few thoughts from another ZFS backend user:

ZFS:
use arcstats to look at your cache use over time and consider:
Don’t mirror your cache drives, use them as 2x cache volumes to 
increase available cache.
Add more RAM. Lots more RAM (if I’m reading that right and you have 
32Gb ram per zfs server).
Adjust ZFS’s max arc caching upwards if you have lots of RAM.
Try more metadata caching & less content caching if you’re find-heavy.
Compression on these volumes could help improve IO on the RAIDZ2s, but you’ll
have to copy the data back on with compression enabled if you didn’t already have it
enabled. Different zstd levels are worth evaluating here.
Read up on recordsize and consider if you would get any performance benefits 
from 64K or maybe something larger for your large data, depends on where the 
reads are being done. 
Use relatime or no atime tracking.
Upgrade to ZFS 2.0.6 if you aren’t already at 2 or 2.1
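
Purely as illustration, the same suggestions as commands - assuming a
hypothetical pool "tank" and dataset "tank/brick"; the numbers are placeholders
to A/B test, not recommendations:

  # two independent (unmirrored) L2ARC devices instead of a mirrored pair
  zpool add tank cache /dev/nvme0n1 /dev/nvme1n1
  # raise the ARC ceiling (here 24 GiB, in bytes); persist via /etc/modprobe.d/zfs.conf
  echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max
  # favour metadata over content caching if the workload is mostly "find"
  zfs set primarycache=metadata tank/brick
  # compression, recordsize, atime
  zfs set compression=zstd-3 tank/brick
  zfs set recordsize=64K tank/brick
  zfs set relatime=on tank/brick
  # watch ARC behaviour over time before locking anything in
  arcstat 5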

For gluster, sounds like gluster 10 would be good for your use case. Without 
knowing what your workload is (VMs, gluster mounts, nfs mounts?), I don’t have 
much else on that level, but you can probably play with the 
cluster.read-hash-mode (try 3) to spread the read load out amongst your 
servers. Search the list archives for general performance hints too;
server.event-threads & client.event-threads are probably good targets, and the various
performance.*threads may/may not help depending on how the volumes are being 
used.
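
Again only a sketch of where those knobs live, for a hypothetical volume "gvol"
(values are starting points to test, not tuned numbers):

  gluster volume set gvol cluster.read-hash-mode 3
  gluster volume set gvol client.event-threads 4
  gluster volume set gvol server.event-threads 4
  # the performance.* thread options may or may not help, test both ways
  gluster volume set gvol performance.io-thread-count 32
  # review what is currently applied
  gluster volume get gvol all | grep -E 'event-threads|read-hash|io-thread'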

More details (zfs version, gluster version, volume options currently applied, 
more details on the workload) may help if others use similar setups. You may be 
getting into the area where you just need to get your environment setup to try 
some A/B testing with different options though.

Good luck!

  -Darrell


> On Dec 11, 2021, at 5:27 PM, Arman Khalatyan  wrote:
> 
> Hello everybody,
> I was looking for some performance considerations for glusterfs on zfs.
> The data diversity is as follows: 90% <50 KB and 10% 10 GB-100 GB; in total
> over 100 million files, about 100 TB.
> 3 replicated JBODs, each with:
> 2x 8-disk RAIDZ2 + special device mirror (2x 1 TB NVMe) + cache mirror (2x SSD)
> + 32 GB RAM.
>
> Most operations are reads and "find file".
> I set some parameters on zfs, such as xattr=sa, primarycache=all,
> secondarycache=all.
> What else could be tuned?
> Thank you in advance.
> greetings from Potsdam,
> Arman.
> 
> 
> 
> 
> 


[Gluster-users] glusterfs on zfs on rockylinux

2021-12-11 Thread Arman Khalatyan
Hello everybody,
I was looking for some performance considerations for glusterfs on zfs.
The data diversity is as follows: 90% <50 KB and 10% 10 GB-100 GB; in total
over 100 million files, about 100 TB.
3 replicated JBODs, each with:
2x 8-disk RAIDZ2 + special device mirror (2x 1 TB NVMe) + cache mirror (2x SSD)
+ 32 GB RAM.

Most operations are reads and "find file".
I set some parameters on zfs, such as xattr=sa, primarycache=all,
secondarycache=all.
What else could be tuned?
Thank you in advance.
greetings from Potsdam,
Arman.






Re: [Gluster-users] GlusterFS on ZFS

2019-07-18 Thread Cody Hill
Thanks Amar.

I’m going to see what kind of performance I get with just ZFS cache using Intel 
Optane and RaidZ10 with 12x drives.
If this performs better than AWS GP2, I’m good. If not I’ll look into dmcache.

Has anyone used bcache? Have any experience there? 

Thank you,
Cody Hill  |  Director of Technology  |  Platform9
Direct: (650) 567-3107
c...@platform9.com  |  Platform9.com  |  Public Calendar

> On May 1, 2019, at 7:34 AM, Amar Tumballi Suryanarayan wrote:
> 
> 
> 
> On Tue, Apr 23, 2019 at 11:38 PM Cody Hill wrote:
> 
> Thanks for the info Karli,
> 
> I wasn’t aware ZFS Dedup was such a dog. I guess I’ll leave that off. My data
> gets 3.5:1 savings on compression alone. I was aware of striped sets. I
> will be doing 6x striped sets across 12x disks.
>
> On top of this design I’m going to try and test Intel Optane DIMM (512GB) as
> a “Tier” for GlusterFS to try and get further write acceleration. Any issues
> with GlusterFS “Tier” functionality that anyone is aware of?
> 
> 
> Hi Cody, I wanted to be honest about GlusterFS 'Tier' functionality. While it
> is functional and works, we did not see the actual benefit we expected from
> the feature, and noticed it is better to do the tiering on each host machine
> (i.e., on the bricks) and use those bricks as glusterfs bricks (like dmcache).
> 
> Also note that from glusterfs-6.x releases, Tier feature is deprecated.
> 
> -Amar
>  
> Thank you,
> Cody Hill 
> 
>> On Apr 18, 2019, at 2:32 AM, Karli Sjöberg wrote:
>> 
>> 
>> 
>> On 17 Apr 2019, 16:30, Cody Hill wrote:
>> Hey folks.
>> 
>> I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of reading 
>> and would like to implement Deduplication and Compression in this setup. My 
>> thought would be to run ZFS to handle the Compression and Deduplication.
>> 
>> You _really_ don't want ZFS doing dedup for any reason.
>> 
>> 
>> ZFS would give me the following benefits:
>> 1. If a single disk fails rebuilds happen locally instead of over the network
>> 2. Zil & L2Arc should add a slight performance increase
>> 
>> Adding two really good NVME SSD's as a mirrored SLOG vdev does a huge deal 
>> for synchronous write performance, turning every random write into large 
>> streams that the spinning drives handle better.
>> 
>> Don't know how picky Gluster is about synchronicity though, most 
>> "performance" tweaking suggests setting stuff to async, which I wouldn't 
>> recommend, but it's a huge boost for throughput obviously; not having to 
>> wait for stuff to actually get written, but it's dangerous.
>> 
>> With mirrored NVME SLOG's, you could probably get that throughput without 
>> going asynchronous, which saves you from potential data corruption in a 
>> sudden power loss.
>> 
>> L2ARC on the other hand does a bit for read latency, but for a general 
>> purpose file server- in practice- not a huge difference, the working set is 
>> just too large. Also keep in mind that L2ARC isn't "free". You need more RAM 
>> to know where you've cached stuff...
>> 
>> 3. Deduplication and Compression are inline and have pretty good performance 
>> with modern hardware (Intel Skylake)
>> 
>> ZFS deduplication has terrible performance. Watch your throughput 
>> automatically drop from hundreds or thousands of MB/s down to, like 5. It's 
>> a feature;)
>> 
>> 4. Automated Snapshotting
>> 
>> I can then layer GlusterFS on top to handle distribution to allow 3x 
>> Replicas of my storage.
>> My question is… Why aren’t more people doing this? Is this a horrible idea 
>> for some reason that I’m missing?
>> 
>> While it could save a lot of space in some hypothetical instance, the 
>> drawbacks can never motivate it. E.g. if you want one node to suddenly die 
>> and never recover because of RAM exhaustion, go with ZFS dedup ;)
>> 
>> I’d be very interested to hear your thoughts.
>> 
>> Avoid ZFS dedup at all costs. LZ4 compression on the other hand is awesome,
>> definitely use that! It's basically a free performance enhancer that also
>> saves space :)
>> 
>> As another person has said, the best performance layout is RAID10- striped 
>> mirrors. I understand you'd want to get as much volume as possible with 
>> RAID-Z/RAID(5|6) since gluster also replicates/distributes, but it has a 
>> huge impact on IOPS. If performance is the main concern, do striped mirrors 
>> with replica 3 in Gluster. My advice is to test thoroughly with different 
>> pool layouts to see what gives acceptable performance against your volume 
>> requirements.
>> 
>> /K
>> 
>> 
>> Additional thoughts:
>> I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
>> I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
>> this correct?)
>> I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane 
>> 

Re: [Gluster-users] GlusterFS on ZFS

2019-05-01 Thread Amar Tumballi Suryanarayan
On Tue, Apr 23, 2019 at 11:38 PM Cody Hill  wrote:

>
> Thanks for the info Karli,
>
> I wasn’t aware ZFS Dedup was such a dog. I guess I’ll leave that off. My
> data gets 3.5:1 savings on compression alone. I was aware of striped
> sets. I will be doing 6x striped sets across 12x disks.
>
> On top of this design I’m going to try and test Intel Optane DIMM (512GB)
> as a “Tier” for GlusterFS to try and get further write acceleration. Any
> issues with GlusterFS “Tier” functionality that anyone is aware of?
>
>
Hi Cody, I wanted to be honest about GlusterFS 'Tier' functionality. While
it is functional and works, we did not see the actual benefit we expected
from the feature, and noticed it is better to do the tiering on each host
machine (i.e., on the bricks) and use those bricks as glusterfs bricks (like
dmcache).

Also note that from glusterfs-6.x releases, Tier feature is deprecated.

-Amar


> Thank you,
> Cody Hill
>
> On Apr 18, 2019, at 2:32 AM, Karli Sjöberg  wrote:
>
>
>
> On 17 Apr 2019, 16:30, Cody Hill wrote:
>
> Hey folks.
>
> I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of
> reading and would like to implement Deduplication and Compression in this
> setup. My thought would be to run ZFS to handle the Compression and
> Deduplication.
>
>
> You _really_ don't want ZFS doing dedup for any reason.
>
>
> ZFS would give me the following benefits:
> 1. If a single disk fails rebuilds happen locally instead of over the
> network
> 2. Zil & L2Arc should add a slight performance increase
>
>
> Adding two really good NVME SSD's as a mirrored SLOG vdev does a huge deal
> for synchronous write performance, turning every random write into large
> streams that the spinning drives handle better.
>
> Don't know how picky Gluster is about synchronicity though, most
> "performance" tweaking suggests setting stuff to async, which I wouldn't
> recommend, but it's a huge boost for throughput obviously; not having to
> wait for stuff to actually get written, but it's dangerous.
>
> With mirrored NVME SLOG's, you could probably get that throughput without
> going asynchronous, which saves you from potential data corruption in a
> sudden power loss.
>
> L2ARC on the other hand does a bit for read latency, but for a general
> purpose file server- in practice- not a huge difference, the working set is
> just too large. Also keep in mind that L2ARC isn't "free". You need more
> RAM to know where you've cached stuff...
>
> 3. Deduplication and Compression are inline and have pretty good
> performance with modern hardware (Intel Skylake)
>
>
> ZFS deduplication has terrible performance. Watch your throughput
> automatically drop from hundreds or thousands of MB/s down to, like 5. It's
> a feature;)
>
> 4. Automated Snapshotting
>
> I can then layer GlusterFS on top to handle distribution to allow 3x
> Replicas of my storage.
> My question is… Why aren’t more people doing this? Is this a horrible idea
> for some reason that I’m missing?
>
>
> While it could save a lot of space in some hypothetical instance, the
> drawbacks can never motivate it. E.g. if you want one node to suddenly die
> and never recover because of RAM exhaustion, go with ZFS dedup ;)
>
> I’d be very interested to hear your thoughts.
>
>
> Avoid ZFS dedup at all costs. LZ4 compression on the other hand is awesome,
> definitely use that! It's basically a free performance enhancer that also
> saves space :)
>
> As another person has said, the best performance layout is RAID10- striped
> mirrors. I understand you'd want to get as much volume as possible with
> RAID-Z/RAID(5|6) since gluster also replicates/distributes, but it has a
> huge impact on IOPS. If performance is the main concern, do striped mirrors
> with replica 3 in Gluster. My advice is to test thoroughly with different
> pool layouts to see what gives acceptable performance against your volume
> requirements.
>
> /K
>
>
> Additional thoughts:
> I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
> I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB
> (Is this correct?)
> I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane
> DIMM to really smooth out write latencies… Any issues here?
>
> Thank you,
> Cody Hill



-- 
Amar Tumballi (amarts)

Re: [Gluster-users] GlusterFS on ZFS

2019-04-23 Thread Cody Hill

Thanks for the info Karli,

I wasn’t aware ZFS Dedup was such a dog. I guess I’ll leave that off. My data
gets 3.5:1 savings on compression alone. I was aware of striped sets. I will
be doing 6x striped sets across 12x disks.
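
For what it's worth, a purely hypothetical sketch of that 12-disk layout as six
striped mirror vdevs (pool and device names are placeholders):

  zpool create tank \
    mirror sda sdb  mirror sdc sdd  mirror sde sdf \
    mirror sdg sdh  mirror sdi sdj  mirror sdk sdl
  zpool status tank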

On top of this design I’m going to try and test Intel Optane DIMM (512GB) as a
“Tier” for GlusterFS to try and get further write acceleration. Any issues with
GlusterFS “Tier” functionality that anyone is aware of?

Thank you,
Cody Hill 

> On Apr 18, 2019, at 2:32 AM, Karli Sjöberg  wrote:
> 
> 
> 
> On 17 Apr 2019, 16:30, Cody Hill wrote:
> Hey folks.
> 
> I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of reading 
> and would like to implement Deduplication and Compression in this setup. My 
> thought would be to run ZFS to handle the Compression and Deduplication.
> 
> You _really_ don't want ZFS doing dedup for any reason.
> 
> 
> ZFS would give me the following benefits:
> 1. If a single disk fails rebuilds happen locally instead of over the network
> 2. Zil & L2Arc should add a slight performance increase
> 
> Adding two really good NVME SSD's as a mirrored SLOG vdev does a huge deal 
> for synchronous write performance, turning every random write into large 
> streams that the spinning drives handle better.
> 
> Don't know how picky Gluster is about synchronicity though, most 
> "performance" tweaking suggests setting stuff to async, which I wouldn't 
> recommend, but it's a huge boost for throughput obviously; not having to wait 
> for stuff to actually get written, but it's dangerous.
> 
> With mirrored NVME SLOG's, you could probably get that throughput without 
> going asynchronous, which saves you from potential data corruption in a 
> sudden power loss.
> 
> L2ARC on the other hand does a bit for read latency, but for a general 
> purpose file server- in practice- not a huge difference, the working set is 
> just too large. Also keep in mind that L2ARC isn't "free". You need more RAM 
> to know where you've cached stuff...
> 
> 3. Deduplication and Compression are inline and have pretty good performance 
> with modern hardware (Intel Skylake)
> 
> ZFS deduplication has terrible performance. Watch your throughput 
> automatically drop from hundreds or thousands of MB/s down to, like 5. It's a 
> feature;)
> 
> 4. Automated Snapshotting
> 
> I can then layer GlusterFS on top to handle distribution to allow 3x Replicas 
> of my storage.
> My question is… Why aren’t more people doing this? Is this a horrible idea 
> for some reason that I’m missing?
> 
> While it could save a lot of space in some hypothetical instance, the 
> drawbacks can never motivate it. E.g. if you want one node to suddenly die 
> and never recover because of RAM exhaustion, go with ZFS dedup ;)
> 
> I’d be very interested to hear your thoughts.
> 
> Avoid ZFS dedup at all costs. LZ4 compression on the other hand is awesome,
> definitely use that! It's basically a free performance enhancer that also
> saves space :)
> 
> As another person has said, the best performance layout is RAID10- striped 
> mirrors. I understand you'd want to get as much volume as possible with 
> RAID-Z/RAID(5|6) since gluster also replicates/distributes, but it has a huge 
> impact on IOPS. If performance is the main concern, do striped mirrors with 
> replica 3 in Gluster. My advice is to test thoroughly with different pool 
> layouts to see what gives acceptable performance against your volume 
> requirements.
> 
> /K
> 
> 
> Additional thoughts:
> I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
> I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
> this correct?)
> I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane 
> DIMM to really smooth out write latencies… Any issues here?
> 
> Thank you,
> Cody Hill

Re: [Gluster-users] GlusterFS on ZFS

2019-04-18 Thread Dave Pedu

Do check this doc:

https://docs.gluster.org/en/latest/Administrator%20Guide/Gluster%20On%20ZFS/#build-install-zfs

In particular, the bit regarding xattr=sa. In the past, Gluster would 
cause extremely poor performance on zfs datasets without this option 
set. I'm not sure if this is still the case.
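
For reference, a minimal sketch of those properties on a hypothetical brick
dataset "tank/brick" (acltype is optional, but often set alongside it):

  zfs set xattr=sa tank/brick
  zfs set acltype=posixacl tank/brick
  # confirm
  zfs get xattr,acltype tank/brick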


- Dave

On 2019-04-16 15:09, Cody Hill wrote:

Hey folks.

I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of
reading and would like to implement Deduplication and Compression in
this setup. My thought would be to run ZFS to handle the Compression
and Deduplication.

ZFS would give me the following benefits:
1. If a single disk fails rebuilds happen locally instead of over the 
network

2. Zil & L2Arc should add a slight performance increase
3. Deduplication and Compression are inline and have pretty good
performance with modern hardware (Intel Skylake)
4. Automated Snapshotting

I can then layer GlusterFS on top to handle distribution to allow 3x
Replicas of my storage.
My question is… Why aren’t more people doing this? Is this a horrible
idea for some reason that I’m missing? I’d be very interested to hear
your thoughts.

Additional thoughts:
I’d like to use Ganesha pNFS to connect to this storage. (Any issues 
here?)

I think I’d need KeepAliveD across these 3x nodes to store in the
FSTAB (Is this correct?)
I’m also thinking about creating a “Gluster Tier” of 512GB of Intel
Optane DIMM to really smooth out write latencies… Any issues here?

Thank you,
Cody Hill

Re: [Gluster-users] GlusterFS on ZFS

2019-04-18 Thread Strahil
About those Optanes -> if you decide to go LVM, you can use them as cache
pools for your largest bricks.
The brick LVs can be converted to cached LVs using such a cache pool, and you
can then create bricks out of them.

You have a lot of options...
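
Roughly like this, with hypothetical names (VG "vg_brick", brick LV "brick1",
Optane at /dev/nvme0n1) - a sketch, not a tested recipe:

  # add the Optane to the brick VG and carve a cache pool out of it
  vgextend vg_brick /dev/nvme0n1
  lvcreate --type cache-pool -L 400G -n brick1_cache vg_brick /dev/nvme0n1
  # attach the cache pool to the brick LV; the cached LV is then used as the brick
  lvconvert --type cache --cachepool vg_brick/brick1_cache vg_brick/brick1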

Best Regards,
Strahil Nikolov

On Apr 18, 2019 19:19, Darrell Budic wrote:
>
> I use ZFS over VDO because I’m more familiar with it and it suits my use
> case better. I got similar results from performance tests, with VDO
> outperforming slightly on writes and ZFS outperforming on reads. That was before I
> added some ZIL and cache to my ZFS disks, too. I also don’t like that you
> have to specify estimated sizes with VDO for compression, I prefer the ZFS
> approach. Don’t forget to set the appropriate zfs attributes, the parts of
> the Gluster doc with those are still valid.
>
> Few more comments inline:
>
> > On Apr 16, 2019, at 5:09 PM, Cody Hill  wrote:
> > 
> > Hey folks.
> > 
> > I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of 
> > reading and would like to implement Deduplication and Compression in this 
> > setup. My thought would be to run ZFS to handle the Compression and 
> > Deduplication.
> > 
> > ZFS would give me the following benefits:
> > 1. If a single disk fails rebuilds happen locally instead of over the 
> > network
>
> I actually run mine in a pure stripe for best performance, if a disk fails 
> and smart warnings didn’t give me enough time to replace it inline first, 
> I’ll rebuild over the network. I have 10G of course, and currently < 10TB of 
> data so I consider it reasonable. I also decided I’d rather present one large 
> brick over many smaller bricks, in some tests others have done, it has shown 
> benefits for gluster healing.
>
> > 2. Zil & L2Arc should add a slight performance increase
>
> Yes. Get the absolute fastest ZIL you can, but any modern enterprise SSD will
> still give you some benefits. Over-provision these, you probably need 4-15Gb
> for the Zil (1G networking vs 10G), and I use 90% of the cache drive to allow
> the SSD to work its best. Cache effectiveness depends on your workload, so
> monitor and/or test with/without.
>
> > 3. Deduplication and Compression are inline and have pretty good 
> > performance with modern hardware (Intel Skylake)
>
> LZ4 compression is great. As others have said, I’d avoid deduplication 
> altogether. Especially in a gluster environment, why waste the RAM and do the 
> work multiple times?
>
> > 4. Automated Snapshotting
>
> Be careful doing this “underneath” the gluster layer, you’re snapshotting 
> only that replica and it’s not guaranteed to be in sync with the others. At 
> best, you’re making a point in time backup of one node, maybe useful for 
> off-system backups with zfs streaming, but I’d consider gluster geo-rep 
> first. And won’t work at all if you are not running a pure replica.
>
> > I can then layer GlusterFS on top to handle distribution to allow 3x 
> > Replicas of my storage.
> > My question is… Why aren’t more people doing this? Is this a horrible idea 
> > for some reason that I’m missing? I’d be very interested to hear your 
> > thoughts.
> > 
> > Additional thoughts:
> > I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
>
> I’d just use glusterfs glfsapi mounts, but if you want to go NFS, sure. Make 
> sure you’re ready to support Ganesha, it doesn’t seem to be as well 
> integrated in the latest gluster releases. Caveat, I don’t use it myself.
>
> > I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
> > this correct?)
>
> There are easier ways. I use a simple DNS round robin to a name (that I can
> put in the host files for the servers/clients to avoid bootstrap issues when 
> the local DNS is a vm ;)), and set the backup-server option so nodes can 
> switch automatically if one fails. Or you can mount localhost: with a 
> converged cluster, again with backup-server options for best results.
>
> > I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane 
> > DIMM to really smooth out write latencies… Any issues here?
>
> Gluster tiering is currently being dropped from support, until/unless it 
> comes back, I’d use the optanes as cache/zil or just make a separate fast 
> pool out of them.
>

Re: [Gluster-users] GlusterFS on ZFS

2019-04-18 Thread Darrell Budic
I use ZFS over VDO because I’m more familiar with it and it suits my use case
better. I got similar results from performance tests, with VDO outperforming
slightly on writes and ZFS outperforming on reads. That was before I added some
ZIL and cache to my ZFS disks, too. I also don’t like that you have to specify
estimated sizes with VDO for compression, I prefer the ZFS approach. Don’t
forget to set the appropriate zfs attributes, the parts of the Gluster doc with
those are still valid.

Few more comments inline:

> On Apr 16, 2019, at 5:09 PM, Cody Hill  wrote:
> 
> Hey folks.
> 
> I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of reading 
> and would like to implement Deduplication and Compression in this setup. My 
> thought would be to run ZFS to handle the Compression and Deduplication.
> 
> ZFS would give me the following benefits:
> 1. If a single disk fails rebuilds happen locally instead of over the network

I actually run mine in a pure stripe for best performance, if a disk fails and 
smart warnings didn’t give me enough time to replace it inline first, I’ll 
rebuild over the network. I have 10G of course, and currently < 10TB of data so 
I consider it reasonable. I also decided I’d rather present one large brick 
over many smaller bricks, in some tests others have done, it has shown benefits 
for gluster healing.

> 2. Zil & L2Arc should add a slight performance increase

Yes. Get the absolute fastest ZIL you can, but any modern enterprise SSD will
still give you some benefits. Over-provision these, you probably need 4-15Gb
for the Zil (1G networking vs 10G), and I use 90% of the cache drive to allow
the SSD to work its best. Cache effectiveness depends on your workload, so
monitor and/or test with/without.
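
As a rough illustration only - hypothetical pool "tank", with small
over-provisioned NVMe partitions for the SLOG and the rest as cache (device
names and sizes are placeholders):

  # mirrored SLOG for sync writes; a few GB of over-provisioned partition is plenty
  zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
  # L2ARC cache devices are striped, not mirrored (losing one is harmless)
  zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2
  # check whether the cache actually earns its keep for your workload
  arc_summary | less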

> 3. Deduplication and Compression are inline and have pretty good performance 
> with modern hardware (Intel Skylake)

LZ4 compression is great. As others have said, I’d avoid deduplication 
altogether. Especially in a gluster environment, why waste the RAM and do the 
work multiple times?

> 4. Automated Snapshotting

Be careful doing this “underneath” the gluster layer, you’re snapshotting only 
that replica and it’s not guaranteed to be in sync with the others. At best, 
you’re making a point in time backup of one node, maybe useful for off-system 
backups with zfs streaming, but I’d consider gluster geo-rep first. And won’t 
work at all if you are not running a pure replica.

> I can then layer GlusterFS on top to handle distribution to allow 3x Replicas 
> of my storage.
> My question is… Why aren’t more people doing this? Is this a horrible idea 
> for some reason that I’m missing? I’d be very interested to hear your 
> thoughts.
> 
> Additional thoughts:
> I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)

I’d just use glusterfs glfsapi mounts, but if you want to go NFS, sure. Make 
sure you’re ready to support Ganesha, it doesn’t seem to be as well integrated 
in the latest gluster releases. Caveat, I don’t use it myself.

> I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
> this correct?)

There are easier ways. I use a simple DNS round robin to a name (that I can put
in the host files for the servers/clients to avoid bootstrap issues when the 
local DNS is a vm ;)), and set the backup-server option so nodes can switch 
automatically if one fails. Or you can mount localhost: with a converged 
cluster, again with backup-server options for best results. 
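
For example, a hedged sketch of such an fstab entry - hypothetical servers
gluster1/2/3 and volume "gvol"; backup-volfile-servers is the mount option I
mean by "backup-server", check that your glusterfs version supports it:

  # mount from gluster1, fall back to gluster2/gluster3 for fetching the volfile
  gluster1:/gvol  /mnt/gvol  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2:gluster3  0 0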

> I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane 
> DIMM to really smooth out write latencies… Any issues here?

Gluster tiering is currently being dropped from support, until/unless it comes 
back, I’d use the optanes as cache/zil or just make a separate fast pool out of 
them.


Re: [Gluster-users] GlusterFS on ZFS

2019-04-18 Thread Karli Sjöberg
On 17 Apr 2019, 16:30, Cody Hill wrote:

> Hey folks.
>
> I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of reading
> and would like to implement Deduplication and Compression in this setup. My
> thought would be to run ZFS to handle the Compression and Deduplication.

You _really_ don't want ZFS doing dedup for any reason.

> ZFS would give me the following benefits:
> 1. If a single disk fails rebuilds happen locally instead of over the network
> 2. Zil & L2Arc should add a slight performance increase

Adding two really good NVME SSD's as a mirrored SLOG vdev does a huge deal for
synchronous write performance, turning every random write into large streams
that the spinning drives handle better.

Don't know how picky Gluster is about synchronicity though, most "performance"
tweaking suggests setting stuff to async, which I wouldn't recommend, but it's
a huge boost for throughput obviously; not having to wait for stuff to actually
get written, but it's dangerous.

With mirrored NVME SLOG's, you could probably get that throughput without going
asynchronous, which saves you from potential data corruption in a sudden power
loss.

L2ARC on the other hand does a bit for read latency, but for a general purpose
file server- in practice- not a huge difference, the working set is just too
large. Also keep in mind that L2ARC isn't "free". You need more RAM to know
where you've cached stuff...

> 3. Deduplication and Compression are inline and have pretty good performance
> with modern hardware (Intel Skylake)

ZFS deduplication has terrible performance. Watch your throughput automatically
drop from hundreds or thousands of MB/s down to, like 5. It's a feature;)

> 4. Automated Snapshotting
>
> I can then layer GlusterFS on top to handle distribution to allow 3x Replicas
> of my storage.
> My question is… Why aren’t more people doing this? Is this a horrible idea
> for some reason that I’m missing?

While it could save a lot of space in some hypothetical instance, the drawbacks
can never motivate it. E.g. if you want one node to suddenly die and never
recover because of RAM exhaustion, go with ZFS dedup ;)

> I’d be very interested to hear your thoughts.

Avoid ZFS dedup at all costs. LZ4 compression on the other hand is awesome,
definitely use that! It's basically a free performance enhancer that also saves
space :)

As another person has said, the best performance layout is RAID10- striped
mirrors. I understand you'd want to get as much volume as possible with
RAID-Z/RAID(5|6) since gluster also replicates/distributes, but it has a huge
impact on IOPS. If performance is the main concern, do striped mirrors with
replica 3 in Gluster. My advice is to test thoroughly with different pool
layouts to see what gives acceptable performance against your volume
requirements.

/K

> Additional thoughts:
> I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
> I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is
> this correct?)
> I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane
> DIMM to really smooth out write latencies… Any issues here?
>
> Thank you,
> Cody Hill

Re: [Gluster-users] GlusterFS on ZFS

2019-04-17 Thread Strahil
Hi Cody,

Keep in mind that if you like the thin LVM approach, you can still use VDO (Red 
Hat-based systems) and get that deduplication/compression.
VDO most probably will require some tuning to get the writes fast enough, but
the reads can be way faster.
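
A hedged sketch of what that looks like on a RHEL-family box - hypothetical
device /dev/sdb, VDO volume "vdo_brick", and an illustrative logical size:

  # dedup + compression below the filesystem, presenting more logical than physical space
  vdo create --name=vdo_brick --device=/dev/sdb --vdoLogicalSize=10T
  # XFS on top (-K skips the initial discard), then use it under a gluster brick
  mkfs.xfs -K /dev/mapper/vdo_brick
  mount /dev/mapper/vdo_brick /bricks/vdo_brick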

Best Regards,
Strahil Nikolov

On Apr 17, 2019 18:34, Pascal Suter wrote:
>
> Hi Cody
>
> i'm still new to Gluster myself, so take my input with the necessary 
> skepticism:
>
> if you care about performance (and it looks like you do), use zfs mirror 
> pairs and not raidz volumes. in my experience (outside of gluster), 
> raidz pools perform significantly worse than a hardware raid5 or 6. if 
> you combine a mirror on zfs with a 3x replication on gluster, you need 
> 6x the amount of raw disk space to get your desired redundancy.. you 
> could do with 3x the amount of diskspace, if you left the zfs mirror 
> away and accept the rebuild of a lost disk over the network or you could 
> end up somewhere between 3x and 6x if you used hardware raid6 instead of 
> zfs on the bricks. When using hardware raid6 make sure you align your 
> lvm volumes properly, it makes a huge difference in performance. Okay, 
> deduplication might give you some of it back, but benchmark the zfs 
> deduplication process first before deciding on it. in theory it could 
> add to your write performance, but i'm not sure if that's going to 
> happen in reality.
>
> snapshotting might be tricky.. afaik gluster natively supports 
> snapshotting with thin provisioned lvm volumes only. this lets you 
> create snapshots with the "gluster" cli tool. gluster will then handle 
> consistency across all your bricks so that each snapshot (as a whole, 
> across all bricks) is consistent in itself. this includes some 
> challenges about handling open file sessions etc. I'm not familiar with 
> what gluster actually does but by reading the documentation and some 
> discussion about snapshots it seems that there is more to it than simply 
> automate a couple of lvcreate statements. so i would expect some 
> challenges when doing it yourself on zfs rather than letting gluster 
> handle it. Restoring a single file from a snapshot also seems a lot 
> easier if you go with the lvm thin setup.. you can then mount a snapshot 
> (of your entire gluster volume, not just of a brick) and simply copy the 
> file.. while with zfs it seems you need to find out which bricks your 
> file resided on, then copy the necessary raw data to your live bricks 
> which is something i would not feel comfortable doing and it is a lot 
> more work and prone to error.
>
> also, if things go wrong (for example when dealing with the snapshots), 
> there are probably not so many people around to help you.
>
> again, i am no expert, that's just what i'd be concerned about with the 
> little knowledge i have at the moment :)
>
> cheers
>
> Pascal
>
> On 17.04.19 00:09, Cody Hill wrote:
> > Hey folks.
> >
> > I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of 
> > reading and would like to implement Deduplication and Compression in this 
> > setup. My thought would be to run ZFS to handle the Compression and 
> > Deduplication.
> >
> > ZFS would give me the following benefits:
> > 1. If a single disk fails rebuilds happen locally instead of over the 
> > network
> > 2. Zil & L2Arc should add a slight performance increase
> > 3. Deduplication and Compression are inline and have pretty good 
> > performance with modern hardware (Intel Skylake)
> > 4. Automated Snapshotting
> >
> > I can then layer GlusterFS on top to handle distribution to allow 3x 
> > Replicas of my storage.
> > My question is… Why aren’t more people doing this? Is this a horrible idea 
> > for some reason that I’m missing? I’d be very interested to hear your 
> > thoughts.
> >
> > Additional thoughts:
> > I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
> > I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
> > this correct?)
> > I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane 
> > DIMM to really smooth out write latencies… Any issues here?
> >
> > Thank you,
> > Cody Hill

Re: [Gluster-users] GlusterFS on ZFS

2019-04-17 Thread Pascal Suter

Hi Cody

i'm still new to Gluster myself, so take my input with the necessary 
skepticism:


if you care about performance (and it looks like you do), use zfs mirror 
pairs and not raidz volumes. in my experience (outside of gluster), 
raidz pools perform significantly worse than a hardware raid5 or 6. if 
you combine a mirror on zfs with a 3x replication on gluster, you need 
6x the amount of raw disk space to get your desired redundancy.. you 
could do with 3x the amount of diskspace, if you left the zfs mirror 
away and accept the rebuild of a lost disk over the network or you could 
end up somewhere between 3x and 6x if you used hardware raid6 instead of 
zfs on the bricks. When using hardware raid6 make sure you align your 
lvm volumes properly, it makes a huge difference in performance. Okay, 
deduplication might give you some of it back, but benchmark the zfs 
deduplication process first before deciding on it. in theory it could 
add to your write performance, but i'm not sure if that's going to 
happen in reality.


snapshotting might be tricky.. afaik gluster natively supports 
snapshotting with thin provisioned lvm volumes only. this lets you 
create snapshots with the "gluster" cli tool. gluster will then handle 
consistency across all your bricks so that each snapshot (as a whole, 
across all bricks) is consistent in itself. this includes some 
challenges about handling open file sessions etc. I'm not familiar with 
what gluster actually does but by reading the documentation and some 
discussion about snapshots it seems that there is more to it than simply 
automate a couple of lvcreate statements. so i would expect some 
challenges when doing it yourself on zfs rather than letting gluster 
handle it. Restoring a single file from a snapshot also seems a lot 
easier if you go with the lvm thin setup.. you can then mount a snapshot 
(of your entire gluster volume, not just of a brick) and simply copy the 
file.. while with zfs it seems you need to find out which bricks your 
file resided on, then copy the necessary raw data to your live bricks 
which is something i would not feel comfortable doing and it is a lot 
more work and prone to error.
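
For illustration, a rough sketch of the thin-LVM route - hypothetical VG
"vg_brick", thin pool "tp_brick", brick "brick1" and gluster volume "gvol";
sizes are placeholders:

  # thin pool plus a thin LV for the brick
  lvcreate -L 1T --thinpool tp_brick vg_brick
  lvcreate -V 1T --thin -n brick1 vg_brick/tp_brick
  mkfs.xfs /dev/vg_brick/brick1 && mount /dev/vg_brick/brick1 /bricks/brick1
  # with thin-LVM bricks, gluster snapshots the whole volume consistently itself
  gluster snapshot create snap1 gvol
  # a snapshot can be activated and mounted read-only to copy single files back
  gluster snapshot activate snap1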


also, if things go wrong (for example when dealing with the snapshots), 
there are probably not so many people around to help you.


again, i am no expert, that's just what i'd be concerned about with the 
little knowledge i have at the moment :)


cheers

Pascal

On 17.04.19 00:09, Cody Hill wrote:

Hey folks.

I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of reading 
and would like to implement Deduplication and Compression in this setup. My 
thought would be to run ZFS to handle the Compression and Deduplication.

ZFS would give me the following benefits:
1. If a single disk fails rebuilds happen locally instead of over the network
2. Zil & L2Arc should add a slight performance increase
3. Deduplication and Compression are inline and have pretty good performance 
with modern hardware (Intel Skylake)
4. Automated Snapshotting

I can then layer GlusterFS on top to handle distribution to allow 3x Replicas 
of my storage.
My question is… Why aren’t more people doing this? Is this a horrible idea for 
some reason that I’m missing? I’d be very interested to hear your thoughts.

Additional thoughts:
I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
this correct?)
I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane DIMM 
to really smooth out write latencies… Any issues here?

Thank you,
Cody Hill

[Gluster-users] GlusterFS on ZFS

2019-04-17 Thread Cody Hill
Hey folks.

I’m looking to deploy GlusterFS to host some VMs. I’ve done a lot of reading 
and would like to implement Deduplication and Compression in this setup. My 
thought would be to run ZFS to handle the Compression and Deduplication.

ZFS would give me the following benefits:
1. If a single disk fails rebuilds happen locally instead of over the network
2. Zil & L2Arc should add a slight performance increase
3. Deduplication and Compression are inline and have pretty good performance 
with modern hardware (Intel Skylake)
4. Automated Snapshotting

I can then layer GlusterFS on top to handle distribution to allow 3x Replicas 
of my storage.
My question is… Why aren’t more people doing this? Is this a horrible idea for 
some reason that I’m missing? I’d be very interested to hear your thoughts.

Additional thoughts:
I’d like to use Ganesha pNFS to connect to this storage. (Any issues here?)
I think I’d need KeepAliveD across these 3x nodes to store in the FSTAB (Is 
this correct?)
I’m also thinking about creating a “Gluster Tier” of 512GB of Intel Optane DIMM 
to really smooth out write latencies… Any issues here?

Thank you,
Cody Hill

Re: [Gluster-users] glusterfs and zfs and freeBSD - not an option

2010-03-17 Thread Oliver Hoffmann
Hi all,


question answered by searching the list:

http://www.mail-archive.com/gluster-users@gluster.org/msg02299.html


Regards,
 
Oliver Hoffmann

 Hi list,
 
 I used zfs on FreeBSD 8.0 for a while now which is just fine. The
 next step would be to combine it with glusterfs and even heartbeat.
 I installed fuse (from the ports) already and while compiling
 glusterfs 2.0.9 or 3.0.0 or 3.0.2 I always get this:
 
 Making all in posix
 Making all in src
 if /bin/sh ../../../../libtool --tag=CC --mode=compile gcc
 -DHAVE_CONFIG_H  -I. -I. -I../../../.. -fPIC -fno-strict-aliasing
 -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -DGF_BSD_HOST_OS -Wall
 -I../../../../libglusterfs/src -shared -nostartfiles
 -I../../../../argp-standalone -g -O2 -MT posix.lo -MD -MP -MF
 .deps/posix.Tpo -c -o posix.lo posix.c;  then mv -f
 .deps/posix.Tpo .deps/posix.Plo; else rm -f .deps/posix.Tpo;
 exit 1; fi libtool: compile:  gcc -DHAVE_CONFIG_H -I. -I.
 -I../../../.. -fPIC -fno-strict-aliasing -D_FILE_OFFSET_BITS=64
 -D_GNU_SOURCE -DGF_BSD_HOST_OS -Wall -I../../../../libglusterfs/src
 -nostartfiles -I../../../../argp-standalone -g -O2 -MT posix.lo -MD
 -MP -MF .deps/posix.Tpo -c posix.c  -fPIC -DPIC -o .libs/posix.o
 posix.c: In function 'janitor_walker': posix.c:1348: error:
 'FTW_CONTINUE' undeclared (first use in this function) posix.c:1348:
 error: (Each undeclared identifier is reported only once
 posix.c:1348: error: for each function it appears in.) posix.c: In
 function 'posix_readv': posix.c:2436: warning: format '%lu' expects
 type 'long unsigned int', but argument 2 has type 'size_t' *** Error
 code 1
 
 Stop in /tmp/glusterfs-3.0.2/xlators/storage/posix/src.
 *** Error code 1
 
 Stop in /tmp/glusterfs-3.0.2/xlators/storage/posix.
 *** Error code 1
 
 Stop in /tmp/glusterfs-3.0.2/xlators/storage.
 *** Error code 1
 
 Stop in /tmp/glusterfs-3.0.2/xlators.
 *** Error code 1
 
 Stop in /tmp/glusterfs-3.0.2.
 *** Error code 1
 
 Stop in /tmp/glusterfs-3.0.2.
 
 I had success with version 1.3.12 though. FreeBSD 7.0 or 7.2 seems to
 be the better choice here but zfs is not as mature as on 8.0. 
 
 Is there a chance to have FreeBSD 8.0 running zfs and glusterfs 3.x or
 2.x?
 
 Apart from that Linux plus zfs is not what I want but opensolaris
 might be an option. Should I consider this and drop FreeBSD?
 
 Regards,
 
 Oliver Hoffmann
 
 
 






[Gluster-users] glusterfs and zfs and freeBSD

2010-03-11 Thread Oliver Hoffmann
Hi list,

I used zfs on FreeBSD 8.0 for a while now which is just fine. The
next step would be to combine it with glusterfs and even heartbeat.
I installed fuse (from the ports) already and while compiling glusterfs
2.0.9 or 3.0.0 or 3.0.2 I always get this:

Making all in posix
Making all in src
if /bin/sh ../../../../libtool --tag=CC --mode=compile gcc
-DHAVE_CONFIG_H  -I. -I. -I../../../.. -fPIC -fno-strict-aliasing
-D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE -DGF_BSD_HOST_OS -Wall
-I../../../../libglusterfs/src -shared -nostartfiles
-I../../../../argp-standalone -g -O2 -MT posix.lo -MD -MP -MF
.deps/posix.Tpo -c -o posix.lo posix.c;  then mv -f .deps/posix.Tpo
.deps/posix.Plo; else rm -f .deps/posix.Tpo; exit 1; fi libtool:
compile:  gcc -DHAVE_CONFIG_H -I. -I. -I../../../.. -fPIC
-fno-strict-aliasing -D_FILE_OFFSET_BITS=64 -D_GNU_SOURCE
-DGF_BSD_HOST_OS -Wall -I../../../../libglusterfs/src -nostartfiles
-I../../../../argp-standalone -g -O2 -MT posix.lo -MD -MP
-MF .deps/posix.Tpo -c posix.c  -fPIC -DPIC -o .libs/posix.o posix.c:
In function 'janitor_walker': posix.c:1348: error: 'FTW_CONTINUE'
undeclared (first use in this function) posix.c:1348: error: (Each
undeclared identifier is reported only once posix.c:1348: error: for
each function it appears in.) posix.c: In function 'posix_readv':
posix.c:2436: warning: format '%lu' expects type 'long unsigned int',
but argument 2 has type 'size_t' *** Error code 1

Stop in /tmp/glusterfs-3.0.2/xlators/storage/posix/src.
*** Error code 1

Stop in /tmp/glusterfs-3.0.2/xlators/storage/posix.
*** Error code 1

Stop in /tmp/glusterfs-3.0.2/xlators/storage.
*** Error code 1

Stop in /tmp/glusterfs-3.0.2/xlators.
*** Error code 1

Stop in /tmp/glusterfs-3.0.2.
*** Error code 1

Stop in /tmp/glusterfs-3.0.2.

I had success with version 1.3.12 though. FreeBSD 7.0 or 7.2 seems to
be the better choice here but zfs is not as mature as on 8.0. 

Is there a chance to have FreeBSD 8.0 running zfs and glusterfs 3.x or
2.x?

Apart from that Linux plus zfs is not what I want but opensolaris might
be an option. Should I consider this and drop FreeBSD?

Regards,

Oliver Hoffmann

