[Gluster-users] Pre-Historic Gluster RPM's

2019-03-07 Thread Ersen E.
Hi,

I still have some RHEL5 clients. I will update their OSes, but not right
now. In the meantime, I am looking for a way to update them to at least
the latest version available.
Is there any website that still hosts RHEL5/CentOS RPMs?

Regards,
Ersen E.

Re: [Gluster-users] Cannot write more than 512 bytes to gluster vol

2019-03-07 Thread Poornima Gurusiddaiah
From the client log, it looks like the host is null and the port is 0, so
the client is not able to connect to the bricks (the Gluster volume). The
client first connects to the glusterd daemon on the host specified in the
mount command to fetch the volfile, i.e. the hosts and ports on which the
bricks are running.
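
For instance, a quick way to confirm that glusterd is reachable and that
the bricks actually have ports assigned (a minimal sketch; the host and
volume names are taken from your mail quoted below):

# On a server node: glusterd listens on TCP 24007 and serves the volfile
ss -tlnp | grep 24007
# Ask glusterd which port each brick process was assigned
gluster volume status la1_db_1
# From the client: verify the management port is reachable at all
nc -zv gluster-10g-1 24007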

Have you set the firewall rules to open the ports required by Gluster?
Also, can you share the complete client log, preferably at TRACE log level?

# gluster vol set volname diagnostics.client-log-level TRACE

Please reset it once you have collected the log; otherwise TRACE can slow
things down and fill up the logs directory.

# gluster vol reset volname diagnostics.client-log-level
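
For reference, once TRACE is enabled the log to collect is the FUSE client
log (a sketch; the path is the usual client-side default, and the firewalld
line assumes a firewalld install that ships the glusterfs service
definition):

# FUSE client logs normally land under /var/log/glusterfs/, named after
# the mount point, e.g. mnt-point.log for a volume mounted on /mnt/point
cp /var/log/glusterfs/mnt-point.log /tmp/client-trace.log
# For the firewall question: open 24007 plus the brick port range
firewall-cmd --permanent --add-service=glusterfs && firewall-cmd --reload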

Regards,
Poornima

On Fri, Mar 8, 2019, 1:03 AM Jamie Lawrence wrote:

> I just stood up a new cluster running 4.1.7, my first experience with
> version 4. It is a simple replica 3 volume:
>
> gluster v create la1_db_1 replica 3 \
> gluster-10g-1:/gluster-bricks/la1_db_1/la1_db_1 \
> gluster-10g-2:/gluster-bricks/la1_db_1/la1_db_1 \
> gluster-10g-3:/gluster-bricks/la1_db_1/la1_db_1
>
> gluster v set la1_db_1 storage.owner-uid 130
> gluster v set la1_db_1 storage.owner-gid 130
> gluster v set la1_db_1 server.allow-insecure on
> gluster v set la1_db_1 auth.allow [various IPs]
>
> After mounting on a client, everything appears fine until you try to use
> it.
>
> dd if=/dev/zero of=/path/on/client/foo
>
> will write 512 bytes and then hang until timeout, at which point it
> declares "Transport not connected".
>
> Notably, if I mount the volume on one of the gluster machines over the
> same interface, it behaves as it should. That led me to investigate packet
> filtering, which is configured correctly; in any case, even after flushing
> all rules on all involved machines, the issue persists.
>
> cli.log contains a lot of:
>
> [2019-03-06 17:00:02.893553] I [cli.c:773:main] 0-cli: Started running
> /sbin/gluster with version 4.1.7
> [2019-03-06 17:00:02.897199] I
> [cli-cmd-volume.c:2375:cli_check_gsync_present] 0-: geo-replication not
> installed
> [2019-03-06 17:00:02.897545] I [MSGID: 101190]
> [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2019-03-06 17:00:02.897617] I [socket.c:2632:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2019-03-06 17:00:02.897678] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-glusterfs: error returned while attempting to connect to host:(null),
> port:0
> [2019-03-06 17:00:02.898244] I [input.c:31:cli_batch] 0-: Exiting with: 0
> [2019-03-06 17:00:02.922637] I [cli.c:773:main] 0-cli: Started running
> /sbin/gluster with version 4.1.7
> [2019-03-06 17:00:02.926599] I
> [cli-cmd-volume.c:2375:cli_check_gsync_present] 0-: geo-replication not
> installed
> [2019-03-06 17:00:02.926906] I [MSGID: 101190]
> [event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread
> with index 1
> [2019-03-06 17:00:02.926956] I [socket.c:2632:socket_event_handler]
> 0-transport: EPOLLERR - disconnecting now
> [2019-03-06 17:00:02.927113] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-glusterfs: error returned while attempting to connect to host:(null),
> port:0
> [2019-03-06 17:00:02.927573] I [input.c:31:cli_batch] 0-: Exiting with: 0
>
> The client log is more interesting; I just don't know what to make of it:
>
> [2019-03-07 19:18:36.674687] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-0: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.674726] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-1: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.674752] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-2: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.674806] I [rpc-clnt.c:2105:rpc_clnt_reconfig]
> 0-la1_db_1-client-0: changing port to 49152 (from 0)
> [2019-03-07 19:18:36.674815] I [rpc-clnt.c:2105:rpc_clnt_reconfig]
> 0-la1_db_1-client-1: changing port to 49152 (from 0)
> [2019-03-07 19:18:36.674927] I [rpc-clnt.c:2105:rpc_clnt_reconfig]
> 0-la1_db_1-client-2: changing port to 49152 (from 0)
> [2019-03-07 19:18:36.675012] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-1: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.675054] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-0: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.675155] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-2: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.675203] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-1: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.675243] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-0: error returned while attempting to connect to
> host:(null), port:0
> [2019-03-07 19:18:36.675306] W [rpc-clnt.c:1753:rpc_clnt_submit]
> 0-la1_db_1-client-2: error returned while attempting to connect to
> host:(null), port:0

[Gluster-users] Gluster Monthly Newsletter, February 2019

2019-03-07 Thread Amye Scavarda
Thank you all for giving us feedback in our user survey for February!

Help us test Gluster 6!
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055876.html

Contributors
Top Contributing Companies: Red Hat, Comcast, DataLab, Gentoo Linux,
Facebook, BioDec, Samsung, Etersoft

Top Contributors in February: Yaniv Kaul, Raghavendra G, Nithya B,
Amar Tumballi, Sanju Rakonde, Shyamsundar R

Noteworthy Threads:
[Gluster-users] Memory management, OOM kills and glusterfs
https://lists.gluster.org/pipermail/gluster-users/2019-February/035782.html
[Gluster-users] Code of Conduct Update
https://lists.gluster.org/pipermail/gluster-users/2019-February/035895.html
[Gluster-users] Disabling read-ahead and io-cache for native fuse
mounts 
https://lists.gluster.org/pipermail/gluster-users/2019-February/035848.html
[Gluster-users] Gluster Container Storage: Release Update
https://lists.gluster.org/pipermail/gluster-users/2019-February/035860.html
[Gluster-devel] I/O performance
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055855.html
[Gluster-devel] Path based Geo-replication
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055836.html
[Gluster-devel] Failing test case
./tests/bugs/distribute/bug-1161311.t
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055842.html
[Gluster-devel] GlusterFs v4.1.5: Need help on bitrot detection
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055859.html
[Gluster-devel] md-cache: May bug found in md-cache.c
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055862.html
[Gluster-devel] [Gluster-Maintainers] glusterfs-6.0rc0 released
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055875.html
[Gluster-devel] GlusterFS - 6.0RC - Test days (27th, 28th Feb)
https://lists.gluster.org/pipermail/gluster-devel/2019-February/055876.html
https://lists.gluster.org/pipermail/gluster-users/2019-March/035938.html
[Gluster-users] Release 6: Release date update
https://lists.gluster.org/pipermail/gluster-users/2019-March/035961.html

Events:
Red Hat Summit, May 4-6, 2019 - https://www.redhat.com/en/summit/2019
Open Source Summit and KubeCon + CloudNativeCon Shanghai, June 24-26,
2019 https://www.lfasiallc.com/events/kubecon-cloudnativecon-china-2019/

-- 
Amye Scavarda | a...@redhat.com | Gluster Community Lead


[Gluster-users] Cannot write more than 512 bytes to gluster vol

2019-03-07 Thread Jamie Lawrence
I just stood up a new cluster running 4.1.7, my first experience with version 
4. It is a simple replica 3 volume:

gluster v create la1_db_1 replica 3 \
gluster-10g-1:/gluster-bricks/la1_db_1/la1_db_1 \
gluster-10g-2:/gluster-bricks/la1_db_1/la1_db_1 \
gluster-10g-3:/gluster-bricks/la1_db_1/la1_db_1

gluster v set la1_db_1 storage.owner-uid 130
gluster v set la1_db_1 storage.owner-gid 130
gluster v set la1_db_1 server.allow-insecure on
gluster v set la1_db_1 auth.allow [various IPs]
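
For reference, the start-and-mount sequence after the settings above is the
stock one (a minimal sketch; the client mount point is illustrative):

# Start the volume and confirm every brick reports a port and is online
gluster volume start la1_db_1
gluster volume status la1_db_1
# FUSE-mount from the client against any one of the servers
mount -t glusterfs gluster-10g-1:/la1_db_1 /mnt/la1_db_1
# The client log for this mount lands under /var/log/glusterfs/
tail /var/log/glusterfs/mnt-la1_db_1.log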

After mounting on a client, everything appears fine until you try to use it.

dd if=/dev/zero of=/path/on/client/foo

will write 512 bytes and then hang until timeout, at which point it declares 
"Transport not connected".

Notably, if I mount the volume on one of the gluster machines over the same
interface, it behaves as it should. That led me to investigate packet
filtering, which is configured correctly; in any case, even after flushing
all rules on all involved machines, the issue persists.
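
For reference, "flushing all rules" on an iptables host typically amounts
to the following (a sketch, assuming plain iptables rather than nftables or
firewalld):

# Default-accept all chains, then flush rules and delete custom chains
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
iptables -X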

cli.log contains a lot of:

[2019-03-06 17:00:02.893553] I [cli.c:773:main] 0-cli: Started running 
/sbin/gluster with version 4.1.7
[2019-03-06 17:00:02.897199] I [cli-cmd-volume.c:2375:cli_check_gsync_present] 
0-: geo-replication not installed
[2019-03-06 17:00:02.897545] I [MSGID: 101190] 
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2019-03-06 17:00:02.897617] I [socket.c:2632:socket_event_handler] 
0-transport: EPOLLERR - disconnecting now
[2019-03-06 17:00:02.897678] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: 
error returned while attempting to connect to host:(null), port:0
[2019-03-06 17:00:02.898244] I [input.c:31:cli_batch] 0-: Exiting with: 0
[2019-03-06 17:00:02.922637] I [cli.c:773:main] 0-cli: Started running 
/sbin/gluster with version 4.1.7
[2019-03-06 17:00:02.926599] I [cli-cmd-volume.c:2375:cli_check_gsync_present] 
0-: geo-replication not installed
[2019-03-06 17:00:02.926906] I [MSGID: 101190] 
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread with 
index 1
[2019-03-06 17:00:02.926956] I [socket.c:2632:socket_event_handler] 
0-transport: EPOLLERR - disconnecting now
[2019-03-06 17:00:02.927113] W [rpc-clnt.c:1753:rpc_clnt_submit] 0-glusterfs: 
error returned while attempting to connect to host:(null), port:0
[2019-03-06 17:00:02.927573] I [input.c:31:cli_batch] 0-: Exiting with: 0

The client log is more interesting; I just don't know what to make of it:

[2019-03-07 19:18:36.674687] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-0: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.674726] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-1: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.674752] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-2: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.674806] I [rpc-clnt.c:2105:rpc_clnt_reconfig] 
0-la1_db_1-client-0: changing port to 49152 (from 0)
[2019-03-07 19:18:36.674815] I [rpc-clnt.c:2105:rpc_clnt_reconfig] 
0-la1_db_1-client-1: changing port to 49152 (from 0)
[2019-03-07 19:18:36.674927] I [rpc-clnt.c:2105:rpc_clnt_reconfig] 
0-la1_db_1-client-2: changing port to 49152 (from 0)
[2019-03-07 19:18:36.675012] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-1: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.675054] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-0: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.675155] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-2: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.675203] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-1: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.675243] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-0: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.675306] W [rpc-clnt.c:1753:rpc_clnt_submit] 
0-la1_db_1-client-2: error returned while attempting to connect to host:(null), 
port:0
[2019-03-07 19:18:36.675563] I [MSGID: 114046] 
[client-handshake.c:1095:client_setvolume_cbk] 0-la1_db_1-client-1: Connected 
to la1_db_1-client-1, attached to remote volume 
'/gluster-bricks/la1_db_1/la1_db_1'.
[2019-03-07 19:18:36.675573] I [MSGID: 108005] 
[afr-common.c:5336:__afr_handle_child_up_event] 0-la1_db_1-replicate-0: 
Subvolume 'la1_db_1-client-1' came back up; going online.
[2019-03-07 19:18:36.675722] I [MSGID: 114046] 
[client-handshake.c:1095:client_setvolume_cbk] 0-la1_db_1-client-0: Connected 
to la1_db_1-client-0, attached to remote volume 
'/gluster-bricks/la1_db_1/la1_db_1'.
[2019-03-07 19:18:36.675728] I [MSGID: 114046] 
[client-handshake.c:1095:client_setvolume_cbk] 0-la1_db_1-client-2: Connected 
to la1_db_1-client-2, attached to remote volume
'/gluster-bricks/la1_db_1/la1_db_1'.

Re: [Gluster-users] Experiences with FUSE in real world - Presentation at Vault 2019

2019-03-07 Thread Raghavendra Gowdappa
On Thu, Mar 7, 2019 at 4:51 PM Strahil  wrote:

> Thanks,
>
> I have nothing in mind, but I know from experience that live sessions are
> much more interesting and go into more depth.
>

I'll schedule a Bluejeans session on this and will update the thread with a
date and time.

Best Regards,
> Strahil Nikolov
> On Mar 7, 2019 08:54, Raghavendra Gowdappa  wrote:
>
> Unfortunately, there is no recording. However, we are willing to discuss
> our findings if you have specific questions. We can do that in this thread.
>
> On Thu, Mar 7, 2019 at 10:33 AM Strahil  wrote:
>
> Thanks a lot.
> Is there a recording of that ?
>
> Best Regards,
> Strahil Nikolov
> On Mar 5, 2019 11:13, Raghavendra Gowdappa  wrote:
>
> All,
>
> Recently Manoj, Csaba, and I presented on the positives and negatives of
> implementing file systems in userspace using FUSE [1]. We based the talk
> on our experiences with GlusterFS, which uses FUSE as its native
> interface. The slides can also be found at [1].
>
> [1] https://www.usenix.org/conference/vault19/presentation/pillai
>
> regards,
> Raghavendra
>
>

Re: [Gluster-users] Release 6: Release date update

2019-03-07 Thread Shyam Ranganathan
Bug fixes are always welcome; features or big-ticket changes at this
point in the release cycle are not.

I checked the patch; it is a two-liner in readdir-ahead, and hence I
would backport it (once it gets merged into master).

Thanks for checking,
Shyam
On 3/7/19 6:33 AM, Raghavendra Gowdappa wrote:
> I just found a fix for
> https://bugzilla.redhat.com/show_bug.cgi?id=1674412. Since it's a
> deadlock, I am wondering whether it should be in 6.0. What do you think?
> 
> On Tue, Mar 5, 2019 at 11:47 PM Shyam Ranganathan wrote:
> 
> Hi,
> 
> Release 6 was to be an early-March release; due to bugs found while
> performing upgrade testing, it is now expected in the week of March
> 18th, 2019.
> 
> RC1 builds are expected this week and will contain the required fixes;
> next week we will be testing RC1 for release fitness before the release.
> 
> As always, we request that users test the RC builds and report back any
> issues they encounter, to help improve the quality of the release.
> 
> Shyam


Re: [Gluster-users] Release 6: Release date update

2019-03-07 Thread Raghavendra Gowdappa
I just found a fix for https://bugzilla.redhat.com/show_bug.cgi?id=1674412.
Since it's a deadlock, I am wondering whether it should be in 6.0. What do
you think?

On Tue, Mar 5, 2019 at 11:47 PM Shyam Ranganathan wrote:

> Hi,
>
> Release 6 was to be an early-March release; due to bugs found while
> performing upgrade testing, it is now expected in the week of March
> 18th, 2019.
>
> RC1 builds are expected this week and will contain the required fixes;
> next week we will be testing RC1 for release fitness before the release.
>
> As always, we request that users test the RC builds and report back any
> issues they encounter, to help improve the quality of the release.
>
> Shyam