Re: [Gluster-users] [EXT] Re: [Glusterusers] State of the gluster project

2023-10-31 Thread Dmitry Melekhov

01.11.2023 08:32, Joe Julian wrote:

They only own the trademark.



And developers :-)






Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [EXT] Re: [Glusterusers] State of the gluster project

2023-10-31 Thread Dmitry Melekhov

01.11.2023 02:25, W Kern wrote:
Well, if what you mean by 'dead project' is that there haven't been
significant improvements, then yes.


No, I mean that the current owner, IBM/Red Hat, has abandoned it.








Re: [Gluster-users] [EXT] Re: [Glusterusers] State of the gluster project

2023-10-31 Thread W Kern
Well, if what you mean by 'dead project' is that there haven't been
significant improvements, then yes. Maybe, given how Gluster's
architecture works, there isn't a lot that can be done to re-architect it.


If you mean dead project because Gluster is broken, then no. At least for
its initial feature set it works really well. We have never used the
more advanced features, nor did we even try GFAPI: just vanilla
replication with FUSE.


We started with Gluster 3.x and it worked well and was easy to manage.
Recovering from a failure was a bummer, though, due to the need to heal
whole VM files, especially on the 1 GbE network connections of those days.


We then migrated to 6.x and got sharding and the arbiter, both of which
made huge improvements to speed and recovery in our replication
environments.
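(A hedged sketch, not from the original mail: both features mentioned above
are set up with the standard Gluster CLI; the volume and host names below
are hypothetical.)

```shell
# Enable sharding on an existing volume so large VM images are split
# into fixed-size shards; after a failure only the changed shards
# need healing, not the whole image file.
gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB

# Create a replica-3 volume whose third brick is a metadata-only
# arbiter, preventing split-brain without a full third data copy.
gluster volume create arbvol replica 3 arbiter 1 \
    host1:/bricks/b1 host2:/bricks/b2 host3:/bricks/arb
```

These commands only configure a cluster, so they are shown as a config
fragment rather than something runnable standalone.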


Again, we never had Gluster issues with 6.x. The problems we did see
were bad networks, drives, etc., and Gluster handled those, including the
FUSE mount keeping the images up during a hardware failure. Then it was
a matter of swapping out drives, reassigning volumes, etc., all of which
was pretty straightforward and didn't involve downtime.


We are now on 10.1 and have yet to see any issues. Speed seems a little
faster than 6.x, but that is subjective. We haven't upgraded beyond
that because we have seen people report issues with 10.2/3/4 and it
ain't broke, so we have a wait-and-see attitude.


We have used other distributed file systems and still use MooseFS for
archiving, which is quite nice and also easy to use, but, as was mentioned
with BeeGFS, it's freemium.


To get the important pieces you have to pay up. In the MooseFS case, that
means the free version has a single point of failure in the mfsmaster;
only the enterprise version can fail over to another mfsmaster. So it's
not as resilient as Gluster, and we did lose some files during one
particularly ugly outage (totally our fault, but those files would have
survived on Gluster).


Gluster is open source and on GitHub. I hope it stays that way.

-wk


On 10/29/23 3:54 AM, Dmitry Melekhov wrote:


29.10.2023 00:07, Zakhar Kirpichenko wrote:
I don't think it's worth it for anyone. It's been a dead project since
about 9.0, if not earlier.


Well, really earlier.

The attempt to get a better Gluster, as "gluster2" in 4.0, failed...













Re: [Gluster-users] [EXT] Re: [Glusterusers] State of the gluster project

2023-10-28 Thread wk


On 10/28/23 1:30 PM, Alexander Schreiber wrote:


Which is a shame, because I chose GlusterFS for one of my storage clusters
_specifically_ due to the ease of emergency data recovery (for purely
replicated volumes) even in case of complete failure of the software
stack and system disks: just grab the data disks, mount them on a suitable
machine and copy the data off.

And that is the beauty of Gluster: you know you always have your data as
long as just one disk survives.
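(A minimal sketch of that recovery path, not from the original mail; the
device and paths are hypothetical. On a purely replicated volume each
brick stores user files as plain files, while the brick's .glusterfs
directory holds internal metadata and can be skipped.)

```shell
# Mount a surviving data disk (brick) on any Linux machine ...
mount /dev/sdb1 /mnt/brick

# ... then copy the user data off, skipping Gluster's internal metadata.
rsync -a --exclude='.glusterfs' /mnt/brick/ /mnt/rescue/
```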


My hope is that Gluster survives due to its open source nature and the
fact that it is rather mature.


And, yes, hopefully Kadalu will keep it (or their fork of it) going
if no one else steps up.


Certainly there won't be much in the way of improvements, but it's been
pretty feature-stable for a while, especially for replicated bricks.


So I hope that the remaining work would be adapting it to newer
OS distros.



Does anyone know of a distributed FS with similarly easy emergency recovery?

(I also run Ceph, but BlueStore seems to be pretty much a black box.)


What has been your impression of the performance difference between Ceph
and Gluster, especially with regard to VMs?


It's been a while since I tried Ceph. At the time, I found that it
required more moving parts (compared to Gluster, which is of course silly
simple to get going) for not much improvement, but I'd assume they have
improved that since then.


-wk




Kind regards,
Alex.


/Z

On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov  wrote:


Well,

After the IBM acquisition, RH discontinued their support of many projects,
including GlusterFS (certification exams were removed, the paid product
went EOL, etc.).

The only way to get it back on track is with a sponsor company that has
the capability to drive it.
Kadalu relies on GlusterFS, but they are not as big as Red Hat and,
based on one of the previous e-mails, they will need sponsorship to
dedicate resources.

Best Regards,
Strahil Nikolov



On Saturday, October 28, 2023, 9:57 AM, Marcus Pedersén <
marcus.peder...@slu.se> wrote:

Hi all,
I just have a general thought about the Gluster
project.
I have got the feeling that things have slowed down
in the Gluster project.
I have had a look at GitHub and to me the project
seems to be slowing down: for Gluster version 11 there
have been no minor releases, we are still on 11.0 and I have
not found any references to 11.1.
There is a milestone called 12 but it seems to be
stale.
I have hit the issue
https://github.com/gluster/glusterfs/issues/4085,
which seems to have no solution.
I noticed when version 11 was released that you
could not bump the op-version to 11 and reported this,
but this is still not available.
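(For context, a hedged illustration of what "bump the op-version" refers
to, not from the original mail: after every node is upgraded, the
cluster-wide operating version is normally raised with the standard CLI,
where 110000 would be the value corresponding to release 11.0.)

```shell
# Raise the cluster-wide op-version after upgrading all nodes;
# this is the step reported above as not yet working for release 11.
gluster volume set all cluster.op-version 110000
```

This only reconfigures a running cluster, so it is shown as a config
fragment rather than something runnable standalone.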

I am just wondering if I am missing something here?

We have been using gluster for many years in production
and I think that gluster is great!! It has served as well over
the years and we have seen some great improvments
of stabilility and speed increase.

So is there something going on or have I got
the wrong impression (and feeling)?

Best regards
Marcus
---
När du skickar e-post till SLU så innebär detta att SLU behandlar dina
personuppgifter. För att läsa mer om hur detta går till, klicka här <
https://www.slu.se/om-slu/kontakta-slu/personuppgifter/>
E-mailing SLU will result in SLU processing your personal data. For more
information on how this is done, click here <
https://www.slu.se/en/about-slu/contact-slu/personal-data/>





















