[Gluster-users] Announcing Gluster release 7

2019-11-12 Thread Rinku Kothiya
Hi,

The Gluster community is pleased to announce the release of 7.0, our latest
release.

This is a major release that includes a range of code improvements and
stability fixes along with a few features, as noted below. A selection of
the key features and bugs addressed is documented in the release notes [1].

*Announcements:*

1. Releases that continue to receive maintenance updates after release 7: 6 and 5 [2]

2. Release 7 will receive maintenance updates around the 20th of every
month for the first 3 months after release (i.e. Dec '19, Jan '20, Feb '20).
After the initial 3 months, it will receive maintenance updates every 2
months until EOL [3].

A number of features/xlators have been deprecated in release 7, as listed
below. For procedures to upgrade volumes that use these features to
release 7, refer to the release 7 upgrade guide [4].

*Features deprecated:*
- Glupy

*Highlights of this release:*
- Several stability fixes addressing issues reported by Coverity, clang-scan,
  AddressSanitizer, and Valgrind
- Removal of unused and hence deprecated code and features
- Performance improvements

*Features:*
1. Rpcbind not required in glusterd.service when gnfs isn't built.
2. Latency-based read-child selection to improve read workload latency in a
cluster, especially in a cloud setup. This also provides load balancing based
on outstanding pending requests.
3. Glusterfind: integration with gfid2path to improve performance.
4. Issue #532: Work towards implementing global thread pooling has started.
5. This release includes extra coverage for the glfs public APIs in our
regression tests, so that we don't break anything.

*Major issues:*

https://bugzilla.redhat.com/show_bug.cgi?id=1771308
We have come across CentOS 6 packaging issues and have decided to mark them
as a known issue and proceed with release 7. This CentOS 6 build issue will
be fixed in release 7.1, which is due on Dec 20th.

Bugs addressed are listed towards the end of the release notes [1].



*Thank you,*
*Gluster community*

*References:*
[1] Release notes:
 https://docs.gluster.org/en/latest/release-notes/7.0/

[2] Release schedule:
 https://www.gluster.org/release-schedule/

[3] Gluster release cadence and version changes:
 https://lists.gluster.org/pipermail/announce/2018-July/000103.html

[4] Upgrade guide to release-7:
 https://docs.gluster.org/en/latest/Upgrade-Guide/upgrade_to_7/

[5] Packages at :
 https://download.gluster.org/pub/gluster/glusterfs/7/7.0/



Re: [Gluster-users] Client disconnections, memory use

2019-11-12 Thread Nithya Balachandran
Hi,

For the memory increase, please capture statedumps of the process at
intervals of an hour and send them across.
https://docs.gluster.org/en/latest/Troubleshooting/statedump/ describes how
to generate a statedump for the client process.
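
If it helps, here is a minimal sketch of how the hourly capture could be
scripted (an illustrative helper, not part of Gluster; it assumes a single
glusterfs client process on the box and the default statedump directory
/var/run/gluster):

    # Send SIGUSR1 to the glusterfs client once an hour; each signal makes
    # the client write a statedump into the statedump directory.
    import os, signal, subprocess, time

    def client_pid():
        # pidof lists all glusterfs PIDs; with a single mount there is one.
        return int(subprocess.check_output(["pidof", "glusterfs"]).split()[0])

    for _ in range(4):                         # four samples, an hour apart
        os.kill(client_pid(), signal.SIGUSR1)  # triggers a statedump
        print("statedump requested; look for glusterdump.* in /var/run/gluster")
        time.sleep(3600)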

Regards,
Nithya

On Wed, 13 Nov 2019 at 05:18, Jamie Lawrence wrote:

> Glusternauts,
>
> I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients
> from a different, much older, cluster. Those clients are running the 5.9
> client and spontaneously disconnect. It was signal 15, but no user killed
> it, and I can't imagine why another daemon would have.
>
>
> [2019-11-12 22:52:42.790687] I [fuse-bridge.c:5144:fuse_thread_proc]
> 0-fuse: initating unmount of /mnt/informatica/sftp/dectools
> [2019-11-12 22:52:42.791414] W [glusterfsd.c:1500:cleanup_and_exit]
> (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f141e4466ba]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55711c79994d]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55711c7997b4] ) 0-:
> received signum (15), shutting down
> [2019-11-12 22:52:42.791435] I [fuse-bridge.c:5914:fini] 0-fuse:
> Unmounting '/mnt/informatica/sftp/dectools'.
> [2019-11-12 22:52:42.791444] I [fuse-bridge.c:5919:fini] 0-fuse: Closing
> fuse connection to '/mnt/informatica/sftp/dectools'.
>
> Nothing in the log for about 12 minutes previously.
>
> Volume info:
>
> Volume Name: sc5_informatica_prod_shared
> Type: Distributed-Replicate
> Volume ID: db5d2693-59e1-40e0-9c28-7a2385b2524f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 3 = 9
> Transport-type: tcp
> Bricks:
> Brick1: sc5-storage-1:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick2: sc5-storage-2:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick3: sc5-storage-3:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick4: sc5-storage-4:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick5: sc5-storage-5:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick6: sc5-storage-6:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick7: sc5-storage-7:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick8: sc5-storage-8:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Brick9: sc5-storage-9:/gluster-bricks/pool-1/sc5_informatica_prod_shared
> Options Reconfigured:
> performance.readdir-ahead: disable
> performance.quick-read: disable
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
>
> One very disturbing thing I'm noticing is that memory use on the client
> seems to be growing at a rate of about 1 MB per 10 minutes of active use.
> One glusterfs process I'm looking at is consuming about 2.4 GB right now
> and growing. Does 5.9 have a memory leak, too?
>
>
> -j




[Gluster-users] Client disconnections, memory use

2019-11-12 Thread Jamie Lawrence
Glusternauts,

I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients from a
different, much older, cluster. Those clients are running the 5.9 client and
spontaneously disconnect. It was signal 15, but no user killed it, and I can't
imagine why another daemon would have.


[2019-11-12 22:52:42.790687] I [fuse-bridge.c:5144:fuse_thread_proc] 0-fuse: 
initating unmount of /mnt/informatica/sftp/dectools
[2019-11-12 22:52:42.791414] W [glusterfsd.c:1500:cleanup_and_exit] 
(-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7f141e4466ba] 
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xed) [0x55711c79994d] 
-->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55711c7997b4] ) 0-: received 
signum (15), shutting down
[2019-11-12 22:52:42.791435] I [fuse-bridge.c:5914:fini] 0-fuse: Unmounting 
'/mnt/informatica/sftp/dectools'.
[2019-11-12 22:52:42.791444] I [fuse-bridge.c:5919:fini] 0-fuse: Closing fuse 
connection to '/mnt/informatica/sftp/dectools'.

Nothing in the log for about 12 minutes previously.

Volume info:

Volume Name: sc5_informatica_prod_shared
Type: Distributed-Replicate
Volume ID: db5d2693-59e1-40e0-9c28-7a2385b2524f
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: sc5-storage-1:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick2: sc5-storage-2:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick3: sc5-storage-3:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick4: sc5-storage-4:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick5: sc5-storage-5:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick6: sc5-storage-6:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick7: sc5-storage-7:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick8: sc5-storage-8:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Brick9: sc5-storage-9:/gluster-bricks/pool-1/sc5_informatica_prod_shared
Options Reconfigured:
performance.readdir-ahead: disable
performance.quick-read: disable
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


One very disturbing thing I'm noticing is that memory use on the client seems
to be growing at a rate of about 1 MB per 10 minutes of active use. One
glusterfs process I'm looking at is consuming about 2.4 GB right now and
growing. Does 5.9 have a memory leak, too?
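
For reference, one rough way to quantify that growth is to sample the
client's resident set size over time; a minimal sketch (an illustrative
helper, assuming a Linux client with /proc and a single glusterfs process):

    # Print the glusterfs client's VmRSS every 10 minutes so the growth rate
    # (the ~1 MB per 10 minutes mentioned above) can be tracked over hours.
    import subprocess, time

    def rss_kb(pid):
        # The VmRSS line in /proc/<pid>/status reads like "VmRSS:  2457600 kB".
        with open(f"/proc/{pid}/status") as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])

    pid = int(subprocess.check_output(["pidof", "glusterfs"]).split()[0])
    while True:
        print(time.strftime("%H:%M:%S"), rss_kb(pid), "kB")
        time.sleep(600)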


-j





Re: [Gluster-users] socket.so: undefined symbol: xlator_api - bd.so: cannot open shared object file & crypt.so: cannot open shared object file: No such file or directory

2019-11-12 Thread Paolo Margara
Hi all,

I have the same problem while upgrading to Gluster 6.6, in one case from
Gluster 5 and in the other from Gluster 3.12.

Is it safe to ignore these messages, or is there some issue in our
configuration? Or is it a bug, a packaging issue, or something else?

Any suggestions are appreciated.


Greetings,

    Paolo
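
One quick check that may narrow this down is to see which of the shared
objects named in those warnings are actually installed. A minimal sketch (an
illustrative helper, not part of Gluster; it assumes the 6.5 paths from the
quoted log below, so adjust the version directory for a 6.6 install):

    # Report whether each object mentioned in the warnings exists on disk.
    # crypt, bd and gnfs are deprecated and not shipped in these packages,
    # which is why glusterd reports them as missing when it probes options.
    import os

    XLATOR_DIR = "/usr/lib64/glusterfs/6.5"
    candidates = [
        "xlator/encryption/crypt.so",   # encryption xlator (removed)
        "xlator/storage/bd.so",         # block-device xlator (removed)
        "xlator/nfs/server.so",         # gnfs server (not built by default)
        "rpc-transport/socket.so",      # a transport, not an xlator itself
    ]

    for rel in candidates:
        path = os.path.join(XLATOR_DIR, rel)
        print(path, "->", "present" if os.path.exists(path) else "missing")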


On 07/10/19 12:21, lejeczek wrote:
> hi everyone
>
> I've been running glusterfs 6 for a while, and either I did not notice it
> before or it has just started to pop up:
>
> [2019-10-07 09:17:37.071409] I [run.c:242:runner_log]
> (-->/usr/lib64/glusterfs/6.5/xlator/mgmt/glusterd.so(+0xe8faa)
> [0x7fd6204d3faa]
> -->/usr/lib64/glusterfs/6.5/xlator/mgmt/glusterd.so(+0xe8a75)
> [0x7fd6204d3a75] -->/lib64/libglusterfs.so.0(runner_log+0x115)
> [0x7fd62c360495] ) 0-management: Ran script:
> /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
> --volname=IT-RELATED --first=no --version=1 --volume-op=start
> --gd-workdir=/var/lib/glusterd
> [2019-10-07 09:17:37.099416] I [run.c:242:runner_log]
> (-->/usr/lib64/glusterfs/6.5/xlator/mgmt/glusterd.so(+0xe8faa)
> [0x7fd6204d3faa]
> -->/usr/lib64/glusterfs/6.5/xlator/mgmt/glusterd.so(+0xe8a75)
> [0x7fd6204d3a75] -->/lib64/libglusterfs.so.0(runner_log+0x115)
> [0x7fd62c360495] ) 0-management: Ran script:
> /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
> --volname=IT-RELATED --first=no --version=1 --volume-op=start
> --gd-workdir=/var/lib/glusterd
> [2019-10-07 09:42:26.314045] W [MSGID: 101095]
> [xlator.c:210:xlator_volopt_dynload] 0-xlator:
> /usr/lib64/glusterfs/6.5/xlator/encryption/crypt.so: cannot open shared
> object file: No such file or directory
> [2019-10-07 09:42:26.328413] E [MSGID: 101097]
> [xlator.c:218:xlator_volopt_dynload] 0-xlator: dlsym(xlator_api)
> missing: /usr/lib64/glusterfs/6.5/rpc-transport/socket.so: undefined
> symbol: xlator_api
> [2019-10-07 09:42:26.330640] W [MSGID: 101095]
> [xlator.c:210:xlator_volopt_dynload] 0-xlator:
> /usr/lib64/glusterfs/6.5/xlator/nfs/server.so: cannot open shared object
> file: No such file or directory
> [2019-10-07 09:42:26.348399] W [MSGID: 101095]
> [xlator.c:210:xlator_volopt_dynload] 0-xlator:
> /usr/lib64/glusterfs/6.5/xlator/storage/bd.so: cannot open shared object
> file: No such file or directory
> The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload]
> 0-xlator: /usr/lib64/glusterfs/6.5/xlator/encryption/crypt.so: cannot
> open shared object file: No such file or directory" repeated 2 times
> between [2019-10-07 09:42:26.314045] and [2019-10-07 09:42:26.314307]
> The message "E [MSGID: 101097] [xlator.c:218:xlator_volopt_dynload]
> 0-xlator: dlsym(xlator_api) missing:
> /usr/lib64/glusterfs/6.5/rpc-transport/socket.so: undefined symbol:
> xlator_api" repeated 7 times between [2019-10-07 09:42:26.328413] and
> [2019-10-07 09:42:26.328590]
> The message "W [MSGID: 101095] [xlator.c:210:xlator_volopt_dynload]
> 0-xlator: /usr/lib64/glusterfs/6.5/xlator/nfs/server.so: cannot open
> shared object file: No such file or directory" repeated 30 times between
> [2019-10-07 09:42:26.330640] and [2019-10-07 09:42:26.331499]
>
> These are not available from gluster's yum repositories.
>
> Any suggestions on how to troubleshoot & solve this are very much
> appreciated.
>
> many thanks, L
>


Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users