Re: [Gluster-users] State of the gluster project

2023-10-28 Thread Zakhar Kirpichenko
I don't think it's worth it for anyone. It's been a dead project since about
9.0, if not earlier. It's time to embrace the truth and move on.

/Z

On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov  wrote:

> Well,
>
> After the IBM acquisition, RH discontinued its support for many projects,
> including GlusterFS (certification exams were removed, the paid product
> went EOL, etc.).
>
> The only way to get it back on track is with a sponsor company that has
> the capability to drive it.
> Kadalu relies on GlusterFS, but they are not as big as Red Hat, and based
> on one of the previous e-mails they will need sponsorship to dedicate
> resources.
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Saturday, October 28, 2023, 9:57 AM, Marcus Pedersén <
> marcus.peder...@slu.se> wrote:
>
> Hi all,
> I just have a general thought about the gluster
> project.
> I have got the feeling that things have slowed down
> in the gluster project.
> I have had a look at GitHub and to me the project
> seems to be slowing down: for gluster version 11 there have
> been no minor releases, we are still on 11.0 and I have
> not found any references to 11.1.
> There is a milestone called 12 but it seems to be
> stale.
> I have hit the issue:
> https://github.com/gluster/glusterfs/issues/4085
> which seems to have no solution.
> I noticed when version 11 was released that you
> could not bump the op-version to 11 and reported this,
> but this is still not available.
>
> I am just wondering if I am missing something here?
>
> We have been using gluster for many years in production
> and I think that gluster is great!! It has served us well over
> the years and we have seen some great improvements
> in stability and speed.
>
> So is there something going on or have I got
> the wrong impression (and feeling)?
>
> Best regards
> Marcus
> ---
> E-mailing SLU will result in SLU processing your personal data. For more
> information on how this is done, click here <
> https://www.slu.se/en/about-slu/contact-slu/personal-data/>
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
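A side note on the op-version question raised above: the cluster op-version is
checked and raised with the standard gluster CLI, roughly as follows (a sketch;
the value to pass depends on the installed release, and as Marcus notes an
11.x value was not being accepted at the time):

  # currently active cluster op-version
  gluster volume get all cluster.op-version

  # highest op-version the installed binaries support
  gluster volume get all cluster.max-op-version

  # raise the op-version once every peer runs the new release
  gluster volume set all cluster.op-version <value reported as max-op-version>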


Re: [Gluster-users] State of the gluster project

2023-10-27 Thread Zakhar Kirpichenko
Hi,

Red Hat Gluster Storage is EOL, Red Hat moved Gluster devs to other
projects, so Gluster doesn't get much attention. From my experience, it has
deteriorated since about version 9.0, and we're migrating to alternatives.

/Z

On Fri, 27 Oct 2023 at 10:29, Marcus Pedersén 
wrote:

> Hi all,
> I just have a general thought about the gluster
> project.
> I have got the feeling that things have slowed down
> in the gluster project.
> I have had a look at GitHub and to me the project
> seems to be slowing down: for gluster version 11 there have
> been no minor releases, we are still on 11.0 and I have
> not found any references to 11.1.
> There is a milestone called 12 but it seems to be
> stale.
> I have hit the issue:
> https://github.com/gluster/glusterfs/issues/4085
> which seems to have no solution.
> I noticed when version 11 was released that you
> could not bump the op-version to 11 and reported this,
> but this is still not available.
>
> I am just wondering if I am missing something here?
>
> We have been using gluster for many years in production
> and I think that gluster is great!! It has served us well over
> the years and we have seen some great improvements
> in stability and speed.
>
> So is there something going on or have I got
> the wrong impression (and feeling)?
>
> Best regards
> Marcus
> ---
> E-mailing SLU will result in SLU processing your personal data. For more
> information on how this is done, click here <
> https://www.slu.se/en/about-slu/contact-slu/personal-data/>
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-04-02 Thread Zakhar Kirpichenko
I see. Looks like there's no interest from the development team to address
the issue.

Zakhar

On Sat, Apr 2, 2022 at 2:32 PM Strahil Nikolov 
wrote:

> Sadly, I can't help, but you can join the regular Gluster meeting and ask
> for feedback on the topic.
>
> Best Regards,
> Strahil Nikolov
>
> On Thu, Mar 31, 2022 at 9:57, Zakhar Kirpichenko
>  wrote:
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-03-31 Thread Zakhar Kirpichenko
Hi,

Any news about this? I provided very detailed test results and proof of the
issue in https://github.com/gluster/glusterfs/issues/3206 on 6 February 2022,
but haven't heard back since.

Best regards,
Zakhar

On Tue, Feb 8, 2022 at 7:14 AM Zakhar Kirpichenko  wrote:

> Hi,
>
> I've updated the github issue with more details:
> https://github.com/gluster/glusterfs/issues/3206#issuecomment-1030770617
>
> Looks like there's a memory leak.
>
> /Z
>
> On Sat, Feb 5, 2022 at 8:45 PM Zakhar Kirpichenko 
> wrote:
>
>> Hi Strahil,
>>
>> Many thanks for your reply! I've updated the Github issue with statedump
>> files taken before and after the tar operation:
>> https://github.com/gluster/glusterfs/files/8008635/glusterdump.19102.dump.zip
>>
>> Please disregard that the path= entries are empty; in the original dumps
>> there are real paths, but I deleted them as they might contain sensitive
>> information.
>>
>> The odd thing is that the dump file is full of:
>>
>> 1) xlator.performance.write-behind.wb_inode entries, but the tar
>> operation does not write to these files. The whole backup process is
>> read-only.
>>
>> 2) xlator.performance.quick-read.inodectx entries, which never go away.
>>
>> None of this happens on other clients, which read and write from/to the
>> same volume in a much more intense manner.
>>
>> Best regards,
>> Z
>>
>> On Sat, Feb 5, 2022 at 11:23 AM Strahil Nikolov 
>> wrote:
>>
>>> Can you generate a statedump before and after the tar?
>>> For statedump generation, you can follow
>>> https://github.com/gluster/glusterfs/issues/1440#issuecomment-674051243 .
>>>
>>> Best Regards,
>>> Strahil Nikolov
>>>
>>>
>>> On Saturday, 5 February 2022 at 07:54:22 GMT+2, Zakhar Kirpichenko <
>>> zak...@gmail.com> wrote:
>>>
>>>
>>> Hi!
>>>
>>> I opened a Github issue, https://github.com/gluster/glusterfs/issues/3206,
>>> but I'm not sure how much attention issues get there, so I'm re-posting
>>> here just in case someone has any ideas.
>>>
>>> Description of problem:
>>>
>>> GlusterFS 9.5, 3-node cluster (2 bricks + arbiter), an attempt to tar
>>> the whole filesystem (35-40 GB, 1.6 million files) on a client succeeds but
>>> causes the glusterfs fuse mount process to consume 0.5+ GB of RAM. The
>>> usage never goes down after tar exits.
>>>
>>> The exact command to reproduce the issue:
>>>
>>> /usr/bin/tar --use-compress-program="/bin/pigz" -cf
>>> /path/to/archive.tar.gz --warning=no-file-changed /glusterfsmount
>>>
>>> The output of the gluster volume info command:
>>>
>>> Volume Name: gvol1
>>> Type: Replicate
>>> Volume ID: 0292ac43-89bd-45a4-b91d-799b49613e60
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 192.168.0.31:/gluster/brick1/gvol1
>>> Brick2: 192.168.0.32:/gluster/brick1/gvol1
>>> Brick3: 192.168.0.5:/gluster/brick1/gvol1 (arbiter)
>>> Options Reconfigured:
>>> performance.open-behind: off
>>> cluster.readdir-optimize: off
>>> cluster.consistent-metadata: on
>>> features.cache-invalidation: on
>>> diagnostics.count-fop-hits: on
>>> diagnostics.latency-measurement: on
>>> storage.fips-mode-rchecksum: on
>>> performance.cache-size: 256MB
>>> client.event-threads: 8
>>> server.event-threads: 4
>>> storage.reserve: 1
>>> performance.cache-invalidation: on
>>> cluster.lookup-optimize: on
>>> transport.address-family: inet
>>> nfs.disable: on
>>> performance.client-io-threads: on
>>> features.cache-invalidation-timeout: 600
>>> performance.md-cache-timeout: 600
>>> network.inode-lru-limit: 5
>>> cluster.shd-max-threads: 4
>>> cluster.self-heal-window-size: 8
>>> performance.enable-least-priority: off
>>> performance.cache-max-file-size: 2MB
>>>
>>> The output of the gluster volume status command:
>>>
>>> Status of volume: gvol1
>>> Gluster process                            TCP Port  RDMA Port  Online  Pid
>>> --
>>> Brick 192.168.0.31:/gluster/brick1/gvol1   49152     0          Y       1767
>>> Brick 192.168.0.32:/glust

Re: [Gluster-users] [Gluster-devel] Announcing Gluster release 10.1

2022-02-13 Thread Zakhar Kirpichenko
Yes, that was my point.

On Sun, Feb 13, 2022 at 10:40 AM Strahil Nikolov 
wrote:

> Not really.
> Debian takes care of packaging on Debian, Ubuntu of their debs, OpenSuSE
> of their rpms; CentOS is part of Red Hat, and they can decide for it.
>
> Best Regards,
> Strahil Nikolov
>
> On Sun, Feb 13, 2022 at 10:13, Zakhar Kirpichenko
>  wrote:
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-12 Thread Zakhar Kirpichenko
> Maintenance updates != new feature releases (and never has).

Thanks for this, but what's your point exactly? Feature updates for CentOS
7 ended in August 2020, 1.5 years ago. That did not affect the release of
8.x updates, or the 9.x release and its updates, for CentOS 7. Dropping
CentOS 7 builds from 10.x onwards seems more in line with what RedHat/IBM
did to CentOS than with the state of CentOS 7 updates.

/Z

On Sun, Feb 13, 2022 at 9:38 AM Eliyahu Rosenberg 
wrote:

> Maintenance updates != new feature releases (and never has).
>
> On Fri, Feb 11, 2022 at 2:14 PM Zakhar Kirpichenko 
> wrote:
>
>> An interesting decision not to support GlusterFS 10.x on CentOS 7, which
>> I'm sure is in use by many and will be supported with maintenance updates
>> for another 2 years.
>>
>> /Z
>>
>> On Fri, Feb 11, 2022 at 12:16 PM Shwetha Acharya 
>> wrote:
>>
>>> Hi Alan,
>>>
>>> Please refer to [1]
>>> As per community guidelines, we will not be supporting CentOS 7 from
>>> GlusterFS 10 onwards.
>>>
>>> Also, thanks for letting us know about https://www.gluster.org/install/. We
>>> will update it at the earliest.
>>>
>>> [1] https://docs.gluster.org/en/latest/Install-Guide/Community-Packages/
>>>
>>> Regards,
>>> Shwetha
>>>
>>> On Fri, Feb 11, 2022 at 3:21 PM Alan Orth  wrote:
>>>
>>>> Hi,
>>>>
>>>> I don't see the GlusterFS 10.x packages for CentOS 7. Normally they are
>>>> available from the storage SIG via a metapackage. For example, I'm
>>>> currently using centos-release-gluster9. I thought I might just need to
>>>> wait a few days, but now I am curious and thought I'd send a message to the
>>>> list to check...
>>>>
>>>> Thanks,
>>>>
>>>> P.S. the gluster.org website still lists gluster9 as the latest:
>>>> https://www.gluster.org/install/
>>>>
>>>> On Tue, Feb 1, 2022 at 8:34 AM Shwetha Acharya 
>>>> wrote:
>>>>
>>>>> Hi All,
>>>>>
>>>>> The Gluster community is pleased to announce the release of Gluster 10.1.
>>>>> Packages are available at [1].
>>>>> Release notes for the release can be found at [2].
>>>>>
>>>>>
>>>>> *Highlights of Release:*
>>>>> - Fix missing stripe count issue with upgrade from 9.x to 10.x
>>>>> - Fix IO failure when shrinking distributed dispersed volume with
>>>>> ongoing IO
>>>>> - Fix log spam introduced with glusterfs 10.0
>>>>> - Enable ltcmalloc_minimal instead of ltcmalloc
>>>>>
>>>>> NOTE: Please expect the CentOS 9 Stream packages to land in the coming
>>>>> days.
>>>>>
>>>>> Thanks,
>>>>> Shwetha
>>>>>
>>>>> References:
>>>>>
>>>>> [1] Packages for 10.1:
>>>>> https://download.gluster.org/pub/gluster/glusterfs/10/10.1/
>>>>>
>>>>> [2] Release notes for 10.1:
>>>>> https://docs.gluster.org/en/latest/release-notes/10.1/
>>>>> 
>>>>
>>>>
>>>> --
>>>> Alan Orth
>>>> alan.o...@gmail.com
>>>> https://picturingjordan.com
>>>> https://englishbulgaria.net
>>>> https://mjanja.ch
>>>>
>>> 
>




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Announcing Gluster release 10.1

2022-02-11 Thread Zakhar Kirpichenko
An interesting decision not to support GlusterFS 10.x on CentOS 7, which
I'm sure is in use by many and will be supported with maintenance updates
for another 2 years.

/Z

On Fri, Feb 11, 2022 at 12:16 PM Shwetha Acharya 
wrote:

> Hi Alan,
>
> Please refer to [1]
> As per community guidelines, we will not be supporting CentOS 7 from
> GlusterFS 10 onwards.
>
> Also, thanks for letting us know about https://www.gluster.org/install/. We
> will update it at the earliest.
>
> [1] https://docs.gluster.org/en/latest/Install-Guide/Community-Packages/
>
> Regards,
> Shwetha
>
> On Fri, Feb 11, 2022 at 3:21 PM Alan Orth  wrote:
>
>> Hi,
>>
>> I don't see the GlusterFS 10.x packages for CentOS 7. Normally they are
>> available from the storage SIG via a metapackage. For example, I'm
>> currently using centos-release-gluster9. I thought I might just need to
>> wait a few days, but now I am curious and thought I'd send a message to the
>> list to check...
>>
>> Thanks,
>>
>> P.S. the gluster.org website still lists gluster9 as the latest:
>> https://www.gluster.org/install/
>>
>> On Tue, Feb 1, 2022 at 8:34 AM Shwetha Acharya 
>> wrote:
>>
>>> Hi All,
>>>
>>> The Gluster community is pleased to announce the release of Gluster 10.1.
>>> Packages are available at [1].
>>> Release notes for the release can be found at [2].
>>>
>>>
>>> *Highlights of Release:*
>>> - Fix missing stripe count issue with upgrade from 9.x to 10.x
>>> - Fix IO failure when shrinking distributed dispersed volume with
>>> ongoing IO
>>> - Fix log spam introduced with glusterfs 10.0
>>> - Enable ltcmalloc_minimal instead of ltcmalloc
>>>
>>> NOTE: Please expect the CentOS 9 Stream packages to land in the coming
>>> days.
>>>
>>> Thanks,
>>> Shwetha
>>>
>>> References:
>>>
>>> [1] Packages for 10.1:
>>> https://download.gluster.org/pub/gluster/glusterfs/10/10.1/
>>>
>>> [2] Release notes for 10.1:
>>> https://docs.gluster.org/en/latest/release-notes/10.1/
>>> 
>>
>>
>> --
>> Alan Orth
>> alan.o...@gmail.com
>> https://picturingjordan.com
>> https://englishbulgaria.net
>> https://mjanja.ch
>>
> 




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-02-05 Thread Zakhar Kirpichenko
Hi Strahil,

Many thanks for your reply! I've updated the Github issue with statedump
files taken before and after the tar operation:
https://github.com/gluster/glusterfs/files/8008635/glusterdump.19102.dump.zip
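In case it helps anyone trying to reproduce this: a statedump of a fuse client
can be triggered roughly like this (a sketch, assuming the default statedump
directory /var/run/gluster; the issue comment linked in Strahil's reply below
has the full steps):

  # ask the fuse client process for a statedump
  kill -USR1 <pid of the glusterfs mount process>

  # the dump shows up as glusterdump.<pid>.dump.<timestamp> in /var/run/gluster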

Please disregard that the path= entries are empty; in the original dumps there
are real paths, but I deleted them as they might contain sensitive
information.

The odd thing is that the dump file is full of:

1) xlator.performance.write-behind.wb_inode entries, but the tar operation
does not write to these files. The whole backup process is read-only.

2) xlator.performance.quick-read.inodectx entries, which never go away.

None of this happens on other clients, which read and write from/to the
same volume in a much more intense manner.
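One quick way to quantify that is to count the entry types named above in the
before/after dumps (filenames here are illustrative; the actual dumps are in
the zip linked earlier):

  grep -c 'xlator.performance.write-behind.wb_inode' glusterdump.19102.dump
  grep -c 'xlator.performance.quick-read.inodectx' glusterdump.19102.dump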

Best regards,
Z

On Sat, Feb 5, 2022 at 11:23 AM Strahil Nikolov 
wrote:

> Can you generate a statedump before and after the tar?
> For statedump generation, you can follow
> https://github.com/gluster/glusterfs/issues/1440#issuecomment-674051243 .
>
> Best Regards,
> Strahil Nikolov
>
>
> On Saturday, 5 February 2022 at 07:54:22 GMT+2, Zakhar Kirpichenko <
> zak...@gmail.com> wrote:
>
>
> Hi!
>
> I opened a Github issue, https://github.com/gluster/glusterfs/issues/3206,
> but I'm not sure how much attention issues get there, so I'm re-posting here
> just in case someone has any ideas.
>
> Description of problem:
>
> GlusterFS 9.5, 3-node cluster (2 bricks + arbiter), an attempt to tar the
> whole filesystem (35-40 GB, 1.6 million files) on a client succeeds but
> causes the glusterfs fuse mount process to consume 0.5+ GB of RAM. The
> usage never goes down after tar exits.
>
> The exact command to reproduce the issue:
>
> /usr/bin/tar --use-compress-program="/bin/pigz" -cf
> /path/to/archive.tar.gz --warning=no-file-changed /glusterfsmount
>
> The output of the gluster volume info command:
>
> Volume Name: gvol1
> Type: Replicate
> Volume ID: 0292ac43-89bd-45a4-b91d-799b49613e60
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.0.31:/gluster/brick1/gvol1
> Brick2: 192.168.0.32:/gluster/brick1/gvol1
> Brick3: 192.168.0.5:/gluster/brick1/gvol1 (arbiter)
> Options Reconfigured:
> performance.open-behind: off
> cluster.readdir-optimize: off
> cluster.consistent-metadata: on
> features.cache-invalidation: on
> diagnostics.count-fop-hits: on
> diagnostics.latency-measurement: on
> storage.fips-mode-rchecksum: on
> performance.cache-size: 256MB
> client.event-threads: 8
> server.event-threads: 4
> storage.reserve: 1
> performance.cache-invalidation: on
> cluster.lookup-optimize: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: on
> features.cache-invalidation-timeout: 600
> performance.md-cache-timeout: 600
> network.inode-lru-limit: 5
> cluster.shd-max-threads: 4
> cluster.self-heal-window-size: 8
> performance.enable-least-priority: off
> performance.cache-max-file-size: 2MB
>
> The output of the gluster volume status command:
>
> Status of volume: gvol1
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> --
> Brick 192.168.0.31:/gluster/brick1/gvol1   49152     0          Y       1767
> Brick 192.168.0.32:/gluster/brick1/gvol1   49152     0          Y       1696
> Brick 192.168.0.5:/gluster/brick1/gvol1    49152     0          Y       1318
> Self-heal Daemon on localhost              N/A       N/A        Y       1329
> Self-heal Daemon on 192.168.0.31           N/A       N/A        Y       1778
> Self-heal Daemon on 192.168.0.32           N/A       N/A        Y       1707
>
> Task Status of Volume gvol1
>
> --
> There are no active volume tasks
>
> The output of the gluster volume heal command:
>
> Brick 192.168.0.31:/gluster/brick1/gvol1
> Status: Connected
> Number of entries: 0
>
> Brick 192.168.0.32:/gluster/brick1/gvol1
> Status: Connected
> Number of entries: 0
>
> Brick 192.168.0.5:/gluster/brick1/gvol1
> Status: Connected
> Number of entries: 0
>
> The operating system / glusterfs version:
>
> CentOS Linux release 7.9.2009 (Core), fully up to date
> glusterfs 9.5
> kernel 3.10.0-1160.53.1.el7.x86_64
>
> The logs are basically empty since the last mount except for the
> mount-related messages.
>
> Additional info: a statedump from the client is attached to the Github
> issue,
> https://github.com/gluster/glusterfs/files/8004792/glusterdump.18906.dump.1643991007.gz,
> in case someone wants to have a look.
>
> There 

[Gluster-users] GlusterFS 9.5 fuse mount excessive memory usage

2022-02-04 Thread Zakhar Kirpichenko
Hi!

I opened a Github issue, https://github.com/gluster/glusterfs/issues/3206,
but I'm not sure how much attention issues get there, so I'm re-posting here
just in case someone has any ideas.

Description of problem:

GlusterFS 9.5, 3-node cluster (2 bricks + arbiter), an attempt to tar the
whole filesystem (35-40 GB, 1.6 million files) on a client succeeds but
causes the glusterfs fuse mount process to consume 0.5+ GB of RAM. The
usage never goes down after tar exits.

The exact command to reproduce the issue:

/usr/bin/tar --use-compress-program="/bin/pigz" -cf /path/to/archive.tar.gz
--warning=no-file-changed /glusterfsmount
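To watch the growth while the archive runs, the fuse client's resident memory
can be sampled with something as simple as this (a sketch; the pgrep pattern
assumes the mount point name used above):

  # RSS (in KB) of the fuse client for /glusterfsmount, sampled once a minute
  while true; do
    ps -o pid,rss,cmd -p "$(pgrep -of 'glusterfs.*glusterfsmount')"
    sleep 60
  done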

The output of the gluster volume info command:

Volume Name: gvol1
Type: Replicate
Volume ID: 0292ac43-89bd-45a4-b91d-799b49613e60
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.0.31:/gluster/brick1/gvol1
Brick2: 192.168.0.32:/gluster/brick1/gvol1
Brick3: 192.168.0.5:/gluster/brick1/gvol1 (arbiter)
Options Reconfigured:
performance.open-behind: off
cluster.readdir-optimize: off
cluster.consistent-metadata: on
features.cache-invalidation: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
storage.fips-mode-rchecksum: on
performance.cache-size: 256MB
client.event-threads: 8
server.event-threads: 4
storage.reserve: 1
performance.cache-invalidation: on
cluster.lookup-optimize: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: on
features.cache-invalidation-timeout: 600
performance.md-cache-timeout: 600
network.inode-lru-limit: 5
cluster.shd-max-threads: 4
cluster.self-heal-window-size: 8
performance.enable-least-priority: off
performance.cache-max-file-size: 2MB

The output of the gluster volume status command:

Status of volume: gvol1
Gluster process                            TCP Port  RDMA Port  Online  Pid
--
Brick 192.168.0.31:/gluster/brick1/gvol1   49152     0          Y       1767
Brick 192.168.0.32:/gluster/brick1/gvol1   49152     0          Y       1696
Brick 192.168.0.5:/gluster/brick1/gvol1    49152     0          Y       1318
Self-heal Daemon on localhost              N/A       N/A        Y       1329
Self-heal Daemon on 192.168.0.31           N/A       N/A        Y       1778
Self-heal Daemon on 192.168.0.32           N/A       N/A        Y       1707

Task Status of Volume gvol1
--
There are no active volume tasks

The output of the gluster volume heal command:

Brick 192.168.0.31:/gluster/brick1/gvol1
Status: Connected
Number of entries: 0

Brick 192.168.0.32:/gluster/brick1/gvol1
Status: Connected
Number of entries: 0

Brick 192.168.0.5:/gluster/brick1/gvol1
Status: Connected
Number of entries: 0

The operating system / glusterfs version:

CentOS Linux release 7.9.2009 (Core), fully up to date
glusterfs 9.5
kernel 3.10.0-1160.53.1.el7.x86_64

The logs are basically empty since the last mount except for the
mount-related messages.

Additional info: a statedump from the client is attached to the Github
issue,
https://github.com/gluster/glusterfs/files/8004792/glusterdump.18906.dump.1643991007.gz,
in case someone wants to have a look.

There was also an issue with other clients, running PHP applications with
lots of small files, where the glusterfs fuse mount process would very quickly
balloon to ~2 GB over the course of 24 hours and its performance would slow
to a crawl. This happened very consistently with glusterfs 8.x and 9.5; I
managed to resolve it at least partially by disabling
performance.open-behind: the memory usage either remains consistent or
increases at a much slower rate, which is acceptable for this use case.
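For reference, that workaround is just the standard volume-set option (shown
here for the volume above, and already reflected in the volume info output):

  gluster volume set gvol1 performance.open-behind off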

Now the issue remains on this single client, which doesn't do much other
than reading and archiving all files from the gluster volume once per day.
The glusterfs fuse mount process balloons to 0.5+ GB during the first tar
run and remains more or less consistent afterwards, including subsequent
tar runs.

I would very much appreciate any advice or suggestions.

Best regards,
Zakhar




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users