Re: [Gluster-users] State of the gluster project

2023-10-29 Thread Dmitry Melekhov


On 29.10.2023 00:07, Zakhar Kirpichenko wrote:
I don't think it's worth it for anyone. It's a dead project since 
about 9.0, if not earlier.


Well, really earlier.

The attempt to get a better Gluster as "gluster2" in 4.0 failed...








Re: [Gluster-users] State of the gluster project

2023-10-28 Thread Alexander Schreiber
On Sat, Oct 28, 2023 at 11:07:52PM +0300, Zakhar Kirpichenko wrote:
> I don't think it's worth it for anyone. It's a dead project since about
> 9.0, if not earlier. It's time to embrace the truth and move on.

Which is a shame, because I chose GlusterFS for one of my storage clusters
_specifically_ for the ease of emergency data recovery (for purely
replicated volumes), even in case of complete failure of the software
stack and system disks: just grab the data disks, mount them on a suitable
machine and copy the data off.
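
For what it's worth, the copy-off step really is that simple on a replica volume: each brick holds the files at their normal paths, plus a .glusterfs directory of internal metadata that can be skipped. A rough sketch of the idea in Python, where both paths are placeholders for wherever the rescued disk is mounted and where the data should land:

#!/usr/bin/env python3
# Rough sketch only: copy user data off a rescued replica brick,
# skipping Gluster's internal .glusterfs metadata tree.
# BRICK and DEST are placeholders, not real paths.
import shutil
from pathlib import Path

BRICK = Path("/mnt/rescued-brick")   # the mounted data disk (brick root)
DEST = Path("/srv/recovered")        # where the plain files should land

for src in BRICK.rglob("*"):
    rel = src.relative_to(BRICK)
    if rel.parts and rel.parts[0] == ".glusterfs":
        continue                     # internal gfid store, not user data
    target = DEST / rel
    if src.is_dir():
        target.mkdir(parents=True, exist_ok=True)
    elif src.is_file():
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, target)    # preserves mtimes/permissions

rsync does the same job just as well, of course; the point is only that the bricks are plain directories.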

Does anyone know of a distributed FS with similarly easy emergency recovery?

(I also run Ceph, but Bluestore seems to be pretty much a black box.)

Kind regards,
   Alex.


-- 
"Opportunity is missed by most people because it is dressed in overalls and
 looks like work."  -- Thomas A. Edison






Re: [Gluster-users] State of the gluster project

2023-10-28 Thread Zakhar Kirpichenko
I don't think it's worth it for anyone. It's a dead project since about
9.0, if not earlier. It's time to embrace the truth and move on.

/Z







Re: [Gluster-users] State of the gluster project

2023-10-28 Thread Strahil Nikolov
Well,

After the IBM acquisition, RH discontinued their support in many projects, including
GlusterFS (certification exams were removed, the paid product went EOL, etc.).

The only way to get it back on track is with a sponsor company that has the
capability to drive it. Kadalu is relying on GlusterFS, but they are not as big
as Red Hat and, based on one of the previous e-mails, they will need sponsorship
to dedicate resources.

Best Regards,
Strahil Nikolov













Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Ronny Adsetts
Hi Aravinda,

Interesting, I had no idea you were trying to do this.

We've used Gluster since the v3 days and have had few problems over the years
(well, performance, but there are ways of dealing with that to a certain
extent). We have no short-term plans to migrate away from Gluster, but we are
obviously concerned by the lack of visible activity in the project.

Hopefully the companies that have built products on Gluster can come together
and share the load, and those using Gluster across their systems can help,
either financially or technically, to support that.

It would be a real shame to see the project abandoned.

Ronny



Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Aravinda
It is very unfortunate that Gluster is not maintained. From Kadalu
Technologies, we are trying to set up a small team dedicated to maintaining
GlusterFS for the next three years. This will only be possible if we get
funding from the community and companies. The details of the proposal are here:
https://kadalu.tech/gluster/

About Kadalu Technologies: Kadalu Technologies was started in 2019 by a few
Gluster maintainers to provide persistent storage for applications running in
Kubernetes. The solution (https://github.com/kadalu/kadalu) is based on
GlusterFS and doesn't use the management layer Glusterd (it is natively
integrated using Kubernetes APIs). Kadalu Technologies also maintains many of
the GlusterFS tools, like gdash (https://github.com/kadalu/gdash),
gluster-metrics-exporter (https://github.com/kadalu/gluster-metrics-exporter),
etc.





Aravinda


https://kadalu.tech





Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Diego Zuccato

Maybe a bit OT...

I'm no expert on either, but the concepts are quite similar.
Both require "extra" nodes (metadata and monitor), but those can be 
virtual machines or you can host the services on OSD machines.


We don't use snapshots, so I can't comment on that.

My experience with Ceph is limited to having it working on Proxmox. No 
experience yet with CephFS.


BeeGFS is more like a "freemium" FS: the base functionality is free, but 
if you need "enterprise" features (quota, replication...) you have to 
pay (quite a lot... probably not to compromise lucrative GPFS licensing).


We also saw more than 30 minutes for an ls on a Gluster directory
containing about 50 files when we had many millions of files on the fs
(with one disk per brick, which also led to many memory issues). After
the last rebuild I created 5-disk RAID5 bricks (about 44TB each) and memory
pressure went down drastically, but desyncs still happen even if the
nodes are connected via IPoIB links that are really rock-solid (and in
the worst case they could fall back to 1Gbps Ethernet connectivity).


Diego


Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Marcus Pedersén
Hi Diego,
I have had a look at BeeGFS and it seems more similar
to Ceph than to Gluster. It requires extra management
nodes similar to Ceph, right?
Secondly, there are no snapshots in BeeGFS, as
I understand it.
I know Ceph has snapshots, so for us this seems a
better alternative. What is your experience of Ceph?

I am sorry to hear about your problems with Gluster;
from my experience we had quite some issues with Gluster
when it was "young", I think the first version we installed
was 3.5 or so. It was also extremely slow; an ls took forever.
But later versions have been "kind" to us and worked quite well,
and file access has become really comfortable.

Best regards
Marcus


Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Diego Zuccato

Hi.

I'm also migrating to BeeGFS and CephFS (depending on usage).

What I liked most about Gluster was that files were easily recoverable
from bricks even in case of disaster, and that it said it supported RDMA.
But I soon found that RDMA was being phased out, and I always find
entries that are not healing after a couple of months of (not really heavy)
use, directories that can't be removed because not all files have been
deleted from all the bricks, and files or directories that become
inaccessible for no apparent reason.
Given that I currently have 3 nodes with 30 12TB disks each in replica 3
arbiter 1, it's become a major showstopper: I can't stop production, back up
everything and restart from scratch every 3-4 months. And there are no
tools helping, just log digging :( Even at version 9.6 it seems it's not
really "production ready"... More like v0.9.6 IMVHO. And now it being
EOLed makes it way worse.
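
About the only thing resembling tooling here is polling "gluster volume heal <volname> info" and counting what is still pending. A rough sketch, where the volume name is a placeholder and the parsing assumes the usual "Brick ..." / "Number of entries: N" lines in the output:

#!/usr/bin/env python3
# Rough sketch: summarise pending self-heal entries per brick by parsing
# `gluster volume heal <vol> info`. VOLUME is a placeholder name.
import re
import subprocess

VOLUME = "gv0"  # placeholder

out = subprocess.run(
    ["gluster", "volume", "heal", VOLUME, "info"],
    capture_output=True, text=True, check=True,
).stdout

brick = None
for line in out.splitlines():
    if line.startswith("Brick "):
        brick = line[len("Brick "):].strip()
    match = re.match(r"Number of entries:\s*(\d+)", line)
    if match and brick:
        print(f"{brick}: {match.group(1)} entries pending heal")

It still doesn't tell you why an entry is stuck, so the log digging doesn't go away.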


Diego



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786






Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Zakhar Kirpichenko
Hi,

Red Hat Gluster Storage is EOL, Red Hat moved Gluster devs to other
projects, so Gluster doesn't get much attention. From my experience, it has
deteriorated since about version 9.0, and we're migrating to alternatives.

/Z

On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén 
wrote:

> Hi all,
> I just have a general thought about the gluster
> project.
> I have got the feeling that things have slowed down
> in the gluster project.
> I have had a look at github, and to me the project
> seems to be slowing down: for gluster version 11 there have
> been no minor releases, we are still on 11.0 and I have
> not found any references to 11.1.
> There is a milestone called 12 but it seems to be
> stale.
> I have hit the issue:
> https://github.com/gluster/glusterfs/issues/4085
> which seems to have no solution.
> I noticed when version 11 was released that you
> could not bump the OP version to 11 and reported this,
> but this is still not available.
>
> I am just wondering if I am missing something here?
>
> We have been using gluster for many years in production
> and I think that gluster is great!! It has served us well over
> the years and we have seen some great improvements
> in stability and speed.
>
> So is there something going on, or have I got
> the wrong impression (and feeling)?
>
> Best regards
> Marcus
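
On the op-version point above: the usual procedure is to check cluster.max-op-version and raise cluster.op-version to it, and the report here is that 11 never became available as the maximum. A rough sketch of that check/bump, assuming the gluster CLI is on PATH and the usual two-column "volume get" output, and meant to be run only after all peers are upgraded:

#!/usr/bin/env python3
# Rough sketch: read the cluster's current and maximum op-version via the
# gluster CLI and bump cluster.op-version if they differ.
import subprocess

def volume_get_all(option):
    out = subprocess.run(
        ["gluster", "volume", "get", "all", option],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == option:
            return parts[1]
    raise RuntimeError(f"could not find {option} in gluster output")

current = volume_get_all("cluster.op-version")
maximum = volume_get_all("cluster.max-op-version")
print(f"op-version {current}, maximum supported {maximum}")

if current != maximum:
    subprocess.run(
        ["gluster", "volume", "set", "all", "cluster.op-version", maximum],
        check=True,
    )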



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users