Re: [Gluster-users] Node count constraints with EC?

2017-03-30 Thread Ashish Pandey
Terry, 

It is (data/parity) >= 2, so you can very well create a 4+2 or 8+4 volume. 
Are you seeing an error message saying that you cannot create a 4+2 config? (4 = 
data bricks, 2 = redundancy bricks) 
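
For example, a 4+2 dispersed volume would be created along these lines (the 
hostnames and brick paths below are only placeholders): 

# gluster volume create dispvol disperse 6 redundancy 2 \ 
  server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \ 
  server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1 

Here "disperse 6 redundancy 2" gives 4 data bricks plus 2 redundancy bricks. 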

Ashish 

- Original Message -

From: "Terry McGuire"  
To: gluster-users@gluster.org 
Sent: Friday, March 31, 2017 3:34:35 AM 
Subject: Re: [Gluster-users] Node count constraints with EC? 

Thanks Ashish, Cedric, for your comments. 

I’m no longer concerned about my choice of 4 nodes to start, but, I realize 
that there’s an issue with my subvolume config options. Turns out only my 8+3 
choice is permitted, as the 4+2 and 8+4 options violate the data/parity>2 rule. 
So, 8+3 it is, as 8+2 isn’t quite enough redundancy for me. 

Regards, 
Terry 





On Mar 30, 2017, at 02:14, yipik...@gmail.com wrote: 

On 30/03/2017 08:35, Ashish Pandey wrote: 



Good point Cedric!! 
The only thing is that I would prefer to say "bricks" instead of "nodes" in 
your statement. 

"starting with 4 bricks (3+1) can only evolve by adding 4 bricks (3+1)" 


Oh right, thanks for correcting me! 

Cheers 





- Original Message -

From: "Cedric Lemarchand"  
To: "Terry McGuire"  
Cc: gluster-users@gluster.org 
Sent: Thursday, March 30, 2017 11:57:27 AM 
Subject: Re: [Gluster-users] Node count constraints with EC? 


> On 29 Mar 2017, at 20:29, Terry McGuire  wrote: 
> 
> I was thinking I’d spread these over 4 nodes, and add single nodes over time, 
> with subvolumes rearranged over new nodes to maintain protection from whole 
> node failures. 

Also keep in mind that a dispersed cluster can only be expanded by the number of 
initial nodes, e.g. starting with 4 nodes (3+1) it can only evolve by adding 4 
nodes (3+1); you cannot change the initial 3+1 policy to 4+1. So the granularity 
of the cluster's evolution is fixed at the beginning. 

Cheers 
___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] TLS support

2017-03-30 Thread Yong Zhang
Hi, all

Does anyone know which SSL protocol glusterfs uses? Does glusterfs support TLS? 
Thanks.
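
For reference, TLS in gluster is enabled per volume for the I/O path and via a 
marker file for the management path; a rough sketch, assuming the default 
certificate locations and a placeholder volume name "myvol" (please correct me 
if this is wrong for your version): 

# gluster volume set myvol client.ssl on 
# gluster volume set myvol server.ssl on 
# touch /var/lib/glusterd/secure-access 

The secure-access file goes on every node and glusterd needs a restart; the 
certificates are expected at /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key and 
/etc/ssl/glusterfs.ca by default. 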

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Unable to Mount on Non-Server Machines

2017-03-30 Thread Atin Mukherjee
This issue is now fixed in 3.10.1.

On Tue, 21 Mar 2017 at 19:07, David Chin  wrote:

> I'm facing the same issue as well. I'm running the version 3.10.0-2 for
> both server and client.
>
> Works fine when the client and server are on the same machine.
>
>
> I did a telnet from the client-only instances to the gluster ports opened on
> the server, e.g.:
>
>
> # netstat -antop | grep gluster | grep LISTEN
> tcp  0  0 0.0.0.0:49153  0.0.0.0:*  LISTEN  19443/glusterfsd  off (0.00/0/0)
> tcp  0  0 0.0.0.0:24007  0.0.0.0:*  LISTEN  19288/glusterd    off (0.00/0/0)
>
>
> I'm able to establish a connection with all of them, so firewall is not
> the cause here (my firewall rules are empty anyway).
>
>
> Only after I set "auth.allow = *" are the client-only instances able to
> connect, but this has severe security implications.
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

-- 
- Atin (atinm)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] adding arbiter

2017-03-30 Thread Laura Bailey
I can't answer all of these, but I think the only way to shard existing
files is to create a new volume with sharding enabled and copy the files
over into it.
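
A minimal sketch of that approach, with purely placeholder volume and brick
names and the usual defaults (verify against your version's docs):

# gluster volume create newvol replica 3 arbiter 1 \
  host1:/bricks/newvol host2:/bricks/newvol host3:/bricks/newvol
# gluster volume set newvol features.shard on
# gluster volume set newvol features.shard-block-size 64MB
# gluster volume start newvol

Then mount newvol, copy the data across (rsync or similar), and switch clients
over once the copy is verified.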

Cheers,
Laura B

On Friday, March 31, 2017, Alessandro Briosi  wrote:

> Hi I need some advice.
>
> I'm currently on 3.8.10 and would like to know the following:
>
> 1. If I add an arbiter to an existing volume should I also run a rebalance?
> 2. If I had sharding enabled would adding the arbiter trigger the
> corruption bug?
> 3. What's the procedure to enable sharding on an existing volume so that
> it shards already existing files?
> 4. Suppose I have sharding disabled, then add an arbiter brick, then
> enable sharding and execute the procedure for point 3, would this still
> trigger the corruption bug?
>
> Thanks,
> Alessandro
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-users
>


-- 
Laura Bailey
Senior Technical Writer
Customer Content Services BNE
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Release 3.10.1: Scheduled for the 30th of March

2017-03-30 Thread Shyam

Glusterfs 3.10.1 has been tagged.

Packages for the various distributions will be available in a few days, 
and with that a more formal release announcement will be made.


For those who are itching to get started,
- Tagged code: https://github.com/gluster/glusterfs/tree/v3.10.1
- Release notes: 
https://github.com/gluster/glusterfs/blob/release-3.10/doc/release-notes/3.10.1.md


Thanks,
Shyam

NOTE: The tracker bug for 3.10.1 will be closed in a couple of days, the 
tracker for 3.10.2 will be opened, and an announcement for 3.10.2 will be 
sent with the details.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Error occurs when mounting gluster fuse over TLS

2017-03-30 Thread Joseph Lorenzini
Hi all,

I have gluster 3.9, with MTLS set up for both management traffic and
volumes. The gluster fuse client successfully mounts the gluster volume.
However, I see the following error in the gluster server logs whenever a mount
or unmount happens on the gluster client. Is this a bug? Is it anything to
be concerned about? Everything seems to be functioning fine.

[2017-03-30 17:24:50.728098] I [socket.c:343:ssl_setup_connection]
0-socket.management: peer CN = dfsclient1.local

[2017-03-30 17:24:50.728161] I [socket.c:346:ssl_setup_connection]
0-socket.management: SSL verification succeeded (client: )

[2017-03-30 17:24:50.731084] E [socket.c:2547:socket_poller]
0-socket.management: error in polling loop

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] adding arbiter

2017-03-30 Thread Alessandro Briosi
Hi I need some advice.

I'm currently on 3.8.10 and would like to know the following:

1. If I add an arbiter to an existing volume should I also run a rebalance?
2. If I had sharding enabled would adding the arbiter trigger the
corruption bug?
3. What's the procedure to enable sharding on an existing volume so that
it shards already existing files?
4. Suppose I have sharding disabled, then add an arbiter brick, then
enable sharding and execute the procedure for point 3, would this still
trigger the corruption bug?

Thanks,
Alessandro

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster really very slow, like painful performance

2017-03-30 Thread Travis Eddy
Is it me, or is it Gluster? I feel like there is (hopefully) a simple
setting that needs to be changed (from my Google searches, I'm not the only
one). I've used GlusterFS on and off for years, and even with KVM it's always
been really slow. (It's been OK for generic file storage.)

I know that with NFS there are some options that make it 10 times faster than
the defaults. Is it the same for Gluster, and my Google-fu just isn't finding
it?

Simple test:
1Gb network (this should be the bottleneck, or at least close... NOT the
6MB/sec max I'm seeing).
Go to Microcenter, buy several AMD 8-core chip & motherboard specials, 16GB
RAM for each, some 1TB disks too, and some of those laptop SSHDs (for the OS).
(Don't blame the parts; the gigabit network should still be the choke point,
but it's nowhere close.)

Install CentOS 7 minimal, make a BTRFS storage area, single-node gluster setup:
mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
Install glusterfs according to
https://wiki.centos.org/HowTos/GlusterFSonCentOS (using the CentOS
packages), then turn off SELinux and firewalld:
$ sudo gluster volume create GlusterVol7 192.168.3.16:/mnt/tmp/brick
$ sudo gluster volume set GlusterVol7 nfs.disable off

Easy; now restart and let's do some work...

On the XenServer host, connect to Gluster as an NFS SR:
New Storage -> NFS, type in 192.168.3.16:/GlusterVol7, and so on.
Now copy over some VMs and
wait half a day, or a whole day or two, depending on OS drive size. (I am
not exaggerating.)

Start a VM (windows or linux).

Now try to copy data from samba/nfs/gluster/internet, and save to disk...

6MB/sec is the fastest I've seen once the VM's OS cache fills.

I know that if I use plain NFS with
options (rw,async,fsid=0,insecure,no_subtree_check,no_root_squash),
this hardware will saturate a gigabit network.
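
For anyone in a similar spot, the options I've seen suggested as starting points
for VM-image workloads are below; I can't vouch that they fix this particular
setup, and the values are only examples to experiment with:

$ sudo gluster volume set GlusterVol7 performance.cache-size 256MB
$ sudo gluster volume set GlusterVol7 performance.write-behind-window-size 4MB
$ sudo gluster volume set GlusterVol7 server.event-threads 4
$ sudo gluster volume set GlusterVol7 client.event-threads 4

Newer packages also ship a "virt" option group ("gluster volume set GlusterVol7
group virt") that bundles similar settings for virtualization workloads.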


Thank you


Travis Eddy
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-Replication not detecting changes

2017-03-30 Thread Jeremiah Rothschild
On Thu, Mar 30, 2017 at 04:58:53AM -0700, Jeremiah Rothschild wrote:
> Well, at any rate, here you can see that both servers can talk on 49152/tcp:

Here is the list of ports that were explicitly opened for gfs:

24007/tcp
24008/tcp
24009/tcp
24010/tcp
49152/tcp
49153/tcp
111/tcp
111/udp
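
For reference, opening that same set permanently with firewalld (assuming
firewalld is what's in use) would look roughly like:

# firewall-cmd --permanent --add-port=24007-24010/tcp
# firewall-cmd --permanent --add-port=49152-49153/tcp
# firewall-cmd --permanent --add-port=111/tcp --add-port=111/udp
# firewall-cmd --reload

(The 49152-49153 brick-port range would need widening if more bricks are added
per node.)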

> [root@ill ~]# telnet aws 49152
> Trying 54.165.144.9...
> Connected to aws.
> Escape character is '^]'.
> 
> and
> 
> [root@aws jeremiah]# telnet ill 49152
> Trying 67.207.112.66...
> Connected to ill.
> Escape character is '^]'.
> 
> > Thanks and Regards,
> > Kotresh H R
> > 
> > - Original Message -
> > > From: "Jeremiah Rothschild" 
> > > To: "Kotresh Hiremath Ravishankar" 
> > > Cc: gluster-users@gluster.org
> > > Sent: Thursday, March 30, 2017 1:16:03 PM
> > > Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> > > 
> > > On Thu, Mar 30, 2017 at 12:51:23AM -0400, Kotresh Hiremath Ravishankar 
> > > wrote:
> > > > Hi Jeremiah,
> > > 
> > > Hi Kotresh! Thanks for the follow-up!
> > > 
> > > > That's really strange. Please enable DEBUG logs for geo-replication as
> > > > below and send
> > > > us the logs under "/var/log/glusterfs/geo-replication//*.log"
> > > > from master node
> > > > 
> > > > gluster vol geo-rep  :: config log-level
> > > > DEBUG
> > > 
> > > Ok.
> > > 
> > > I started from scratch & enabled debug level logging. The logs have been
> > > attached to Bugzilla #1327244.
> > > 
> > > > Geo-rep has two ways to detect changes.
> > > > 
> > > > 1. changelog (Changelog Crawl)
> > > > 2. xsync (Hybrid Crawl):
> > > >This is good for initial sync. It has the limitation of not
> > > >detecting unlinks and renames.
> > > >So the slave would end up having unlinked files and renamed src file 
> > > > if
> > > >it is used after initial sync.
> > > 
> > > FYI I did try changing the changelog_detector to xsync but it made no
> > > difference. Note that I also detailed this in the "Additional Info" 
> > > section
> > > of the Bugzilla bug.
> > > 
> > > > Thanks and Regards,
> > > 
> > > Thanks again!
> > > 
> > > j
> > > 
> > > > Kotresh H R
> > > > 
> > > > - Original Message -
> > > > > From: "Jeremiah Rothschild" 
> > > > > To: gluster-users@gluster.org
> > > > > Sent: Wednesday, March 29, 2017 12:39:11 AM
> > > > > Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> > > > > 
> > > > > Following up on my own thread...
> > > > > 
> > > > > I have spent hours and hours setting up, re-setting up, screwing with
> > > > > undocumented variables, upgrading from LTS to non-LTS, etc etc.
> > > > > 
> > > > > Nothing seems to give.
> > > > > 
> > > > > This is very much an out-of-the-box setup and core functionality just
> > > > > isn't
> > > > > working.
> > > > > 
> > > > > Can anyone throw me a bone here? Please? Do I file a bug for such an
> > > > > open-ended issue? Is everyone assuming I've just screwed a step up? I
> > > > > must
> > > > > say the documentation is pretty clear & simple. Do you want more logs?
> > > > > 
> > > > > If this is going to be a dead end then so be it but I at least need to
> > > > > make
> > > > > sure I've tried my hardest to get a working deployment.
> > > > > 
> > > > > Thanks for your time and understanding!
> > > > > 
> > > > > j
> > > > > 
> > > > > On Thu, Mar 23, 2017 at 11:47:03AM -0700, Jeremiah Rothschild wrote:
> > > > > > Hey all,
> > > > > > 
> > > > > > I have a vanilla geo-replication setup running. It is comprised of 
> > > > > > two
> > > > > > servers, both CentOS 7 and GlusterFS 3.8.10:
> > > > > > 
> > > > > > * server1: Local server. Master volume named "foo".
> > > > > > * server2: Remote server. Slave volume named "foo".
> > > > > > 
> > > > > > Everything went fine including the initial sync. However, no new
> > > > > > changes
> > > > > > are
> > > > > > being seen or synced.
> > > > > > 
> > > > > > Geo-rep status looks clean:
> > > > > > 
> > > > > > # gluster volume geo-replication foo server2.franz.com::foo status
> > > > > > MASTER NODE: server1.x.com
> > > > > > MASTER VOL: foo
> > > > > > MASTER BRICK: /gv0/foo
> > > > > > SLAVE USER: root
> > > > > > SLAVE NODE: server2.x.com::foo
> > > > > > STATUS: Active
> > > > > > CRAWL STATUS: Changelog Crawl
> > > > > > LAST_SYNCED: 2017-03-23 10:12:57
> > > > > > 
> > > > > > In the geo-rep master log, I see these being triggered:
> > > > > > 
> > > > > > # tail -n3
> > > > > > foo/ssh%3A%2F%2Froot%401.2.3.4%3Agluster%3A%2F%2F127.0.0.1%3Afoo.log
> > > > > > [2017-03-23 18:33:34.697525] I [master(/gv0/foo):534:crawlwrap]
> > > > > > _GMaster:
> > > > > > 20
> > > > > > crawls, 0 turns
> > > > > > [2017-03-23 18:34:37.441982] I [master(/gv0/foo):534:crawlwrap]
> > > > > > _GMaster:
> > > > > > 20
> > > > > > crawls, 0 turns
> > > > > > [2017-03-23 18:35:40.242851] I [master(/gv0/foo):534:crawlwrap]
> > > > > > _GMaster:
> > > > > > 20
> > > > > > crawls, 0 turns
> > > > > > 

Re: [Gluster-users] Geo-Replication not detecting changes

2017-03-30 Thread Jeremiah Rothschild
On Thu, Mar 30, 2017 at 05:49:32AM -0400, Kotresh Hiremath Ravishankar wrote:
> Hi Jeremiah,
> 
> I believe the bug ID is #1437244 and not #1327244.

Oops! You are correct.

> From the geo-rep logs, the master volume failed with "Transport Endpoint 
> Not Connected"
> ...
> [2017-03-30 07:40:57.150348] E [resource(/gv0/foo):234:errlog] Popen: command 
> "/usr/sbin/glusterfs --aux-gfid-mount --acl 
> --log-file=/var/log/glusterfs/geo-replication/foo/ssh%3A%2F%2Froot%4054.165.144.9%3Agluster%3A%2F%2F127.0.0.1%3Afoo.%2Fgv0%2Ffoo.gluster.log
>  --volfile-server=localhost --volfile-id=foo --client-pid=-1 
> /tmp/gsyncd-aux-mount-K1j3ZD" returned with 107
> ..
> 
> Could you try flushing iptables on both master and slave nodes and check 
> again?
> #iptables -F

Done. I then restarted glusterd on both servers and waited for a sync to
happen, but there was no change.

Also, I believe that networking was verified as OK, because the initial sync
worked? Is that not true?

Well, at any rate, here you can see that both servers can talk on 49152/tcp:

[root@ill ~]# telnet aws 49152
Trying 54.165.144.9...
Connected to aws.
Escape character is '^]'.

and

[root@aws jeremiah]# telnet ill 49152
Trying 67.207.112.66...
Connected to ill.
Escape character is '^]'.
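
Since brick ports are assigned dynamically, it's probably also worth confirming
the actual port of each brick (and comparing it against what the firewall
allows) with something like:

# gluster volume status foo

which lists the listening port per brick.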

> Thanks and Regards,
> Kotresh H R
> 
> - Original Message -
> > From: "Jeremiah Rothschild" 
> > To: "Kotresh Hiremath Ravishankar" 
> > Cc: gluster-users@gluster.org
> > Sent: Thursday, March 30, 2017 1:16:03 PM
> > Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> > 
> > On Thu, Mar 30, 2017 at 12:51:23AM -0400, Kotresh Hiremath Ravishankar 
> > wrote:
> > > Hi Jeremiah,
> > 
> > Hi Kotresh! Thanks for the follow-up!
> > 
> > > That's really strange. Please enable DEBUG logs for geo-replication as
> > > below and send
> > > us the logs under "/var/log/glusterfs/geo-replication//*.log"
> > > from master node
> > > 
> > > gluster vol geo-rep  :: config log-level
> > > DEBUG
> > 
> > Ok.
> > 
> > I started from scratch & enabled debug level logging. The logs have been
> > attached to Bugzilla #1327244.
> > 
> > > Geo-rep has two ways to detect changes.
> > > 
> > > 1. changelog (Changelog Crawl)
> > > 2. xsync (Hybrid Crawl):
> > >This is good for initial sync. It has the limitation of not
> > >detecting unlinks and renames.
> > >So the slave would end up having unlinked files and renamed src file if
> > >it is used after initial sync.
> > 
> > FYI I did try changing the changelog_detector to xsync but it made no
> > difference. Note that I also detailed this in the "Additional Info" section
> > of the Bugzilla bug.
> > 
> > > Thanks and Regards,
> > 
> > Thanks again!
> > 
> > j
> > 
> > > Kotresh H R
> > > 
> > > - Original Message -
> > > > From: "Jeremiah Rothschild" 
> > > > To: gluster-users@gluster.org
> > > > Sent: Wednesday, March 29, 2017 12:39:11 AM
> > > > Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> > > > 
> > > > Following up on my own thread...
> > > > 
> > > > I have spent hours and hours setting up, re-setting up, screwing with
> > > > undocumented variables, upgrading from LTS to non-LTS, etc etc.
> > > > 
> > > > Nothing seems to give.
> > > > 
> > > > This is very much an out-of-the-box setup and core functionality just
> > > > isn't
> > > > working.
> > > > 
> > > > Can anyone throw me a bone here? Please? Do I file a bug for such an
> > > > open-ended issue? Is everyone assuming I've just screwed a step up? I
> > > > must
> > > > say the documentation is pretty clear & simple. Do you want more logs?
> > > > 
> > > > If this is going to be a dead end then so be it but I at least need to
> > > > make
> > > > sure I've tried my hardest to get a working deployment.
> > > > 
> > > > Thanks for your time and understanding!
> > > > 
> > > > j
> > > > 
> > > > On Thu, Mar 23, 2017 at 11:47:03AM -0700, Jeremiah Rothschild wrote:
> > > > > Hey all,
> > > > > 
> > > > > I have a vanilla geo-replication setup running. It is comprised of two
> > > > > servers, both CentOS 7 and GlusterFS 3.8.10:
> > > > > 
> > > > > * server1: Local server. Master volume named "foo".
> > > > > * server2: Remote server. Slave volume named "foo".
> > > > > 
> > > > > Everything went fine including the initial sync. However, no new
> > > > > changes
> > > > > are
> > > > > being seen or synced.
> > > > > 
> > > > > Geo-rep status looks clean:
> > > > > 
> > > > > # gluster volume geo-replication foo server2.franz.com::foo status
> > > > > MASTER NODE: server1.x.com
> > > > > MASTER VOL: foo
> > > > > MASTER BRICK: /gv0/foo
> > > > > SLAVE USER: root
> > > > > SLAVE NODE: server2.x.com::foo
> > > > > STATUS: Active
> > > > > CRAWL STATUS: Changelog Crawl
> > > > > LAST_SYNCED: 2017-03-23 10:12:57
> > > > > 
> > > > > In the geo-rep master log, I see these being triggered:
> > > > > 
> > > > > # tail -n3
> > > > > 

Re: [Gluster-users] Geo-Replication not detecting changes

2017-03-30 Thread Kotresh Hiremath Ravishankar
Hi Jeremiah,

I believe the bug ID is #1437244 and not #1327244.
From the geo-rep logs, the master volume failed with "Transport Endpoint 
Not Connected"
...
[2017-03-30 07:40:57.150348] E [resource(/gv0/foo):234:errlog] Popen: command 
"/usr/sbin/glusterfs --aux-gfid-mount --acl 
--log-file=/var/log/glusterfs/geo-replication/foo/ssh%3A%2F%2Froot%4054.165.144.9%3Agluster%3A%2F%2F127.0.0.1%3Afoo.%2Fgv0%2Ffoo.gluster.log
 --volfile-server=localhost --volfile-id=foo --client-pid=-1 
/tmp/gsyncd-aux-mount-K1j3ZD" returned with 107
..


Could you try flushing iptables on both master and slave nodes and check again?
#iptables -F


Thanks and Regards,
Kotresh H R

- Original Message -
> From: "Jeremiah Rothschild" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: gluster-users@gluster.org
> Sent: Thursday, March 30, 2017 1:16:03 PM
> Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> 
> On Thu, Mar 30, 2017 at 12:51:23AM -0400, Kotresh Hiremath Ravishankar wrote:
> > Hi Jeremiah,
> 
> Hi Kotresh! Thanks for the follow-up!
> 
> > That's really strange. Please enable DEBUG logs for geo-replication as
> > below and send
> > us the logs under "/var/log/glusterfs/geo-replication//*.log"
> > from master node
> > 
> > gluster vol geo-rep  :: config log-level
> > DEBUG
> 
> Ok.
> 
> I started from scratch & enabled debug level logging. The logs have been
> attached to Bugzilla #1327244.
> 
> > Geo-rep has two ways to detect changes.
> > 
> > 1. changelog (Changelog Crawl)
> > 2. xsync (Hybrid Crawl):
> >This is good for initial sync. It has the limitation of not
> >detecting unlinks and renames.
> >So the slave would end up having unlinked files and renamed src file if
> >it is used after initial sync.
> 
> FYI I did try changing the changelog_detector to xsync but it made no
> difference. Note that I also detailed this in the "Additional Info" section
> of the Bugzilla bug.
> 
> > Thanks and Regards,
> 
> Thanks again!
> 
> j
> 
> > Kotresh H R
> > 
> > - Original Message -
> > > From: "Jeremiah Rothschild" 
> > > To: gluster-users@gluster.org
> > > Sent: Wednesday, March 29, 2017 12:39:11 AM
> > > Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> > > 
> > > Following up on my own thread...
> > > 
> > > I have spent hours and hours setting up, re-setting up, screwing with
> > > undocumented variables, upgrading from LTS to non-LTS, etc etc.
> > > 
> > > Nothing seems to give.
> > > 
> > > This is very much an out-of-the-box setup and core functionality just
> > > isn't
> > > working.
> > > 
> > > Can anyone throw me a bone here? Please? Do I file a bug for such an
> > > open-ended issue? Is everyone assuming I've just screwed a step up? I
> > > must
> > > say the documentation is pretty clear & simple. Do you want more logs?
> > > 
> > > If this is going to be a dead end then so be it but I at least need to
> > > make
> > > sure I've tried my hardest to get a working deployment.
> > > 
> > > Thanks for your time and understanding!
> > > 
> > > j
> > > 
> > > On Thu, Mar 23, 2017 at 11:47:03AM -0700, Jeremiah Rothschild wrote:
> > > > Hey all,
> > > > 
> > > > I have a vanilla geo-replication setup running. It is comprised of two
> > > > servers, both CentOS 7 and GlusterFS 3.8.10:
> > > > 
> > > > * server1: Local server. Master volume named "foo".
> > > > * server2: Remote server. Slave volume named "foo".
> > > > 
> > > > Everything went fine including the initial sync. However, no new
> > > > changes
> > > > are
> > > > being seen or synced.
> > > > 
> > > > Geo-rep status looks clean:
> > > > 
> > > > # gluster volume geo-replication foo server2.franz.com::foo status
> > > > MASTER NODE: server1.x.com
> > > > MASTER VOL: foo
> > > > MASTER BRICK: /gv0/foo
> > > > SLAVE USER: root
> > > > SLAVE NODE: server2.x.com::foo
> > > > STATUS: Active
> > > > CRAWL STATUS: Changelog Crawl
> > > > LAST_SYNCED: 2017-03-23 10:12:57
> > > > 
> > > > In the geo-rep master log, I see these being triggered:
> > > > 
> > > > # tail -n3
> > > > foo/ssh%3A%2F%2Froot%401.2.3.4%3Agluster%3A%2F%2F127.0.0.1%3Afoo.log
> > > > [2017-03-23 18:33:34.697525] I [master(/gv0/foo):534:crawlwrap]
> > > > _GMaster:
> > > > 20
> > > > crawls, 0 turns
> > > > [2017-03-23 18:34:37.441982] I [master(/gv0/foo):534:crawlwrap]
> > > > _GMaster:
> > > > 20
> > > > crawls, 0 turns
> > > > [2017-03-23 18:35:40.242851] I [master(/gv0/foo):534:crawlwrap]
> > > > _GMaster:
> > > > 20
> > > > crawls, 0 turns
> > > > 
> > > > I don't see any errors in any of the other logs.
> > > > 
> > > > Not sure what else to poke at here. What are the possible values for
> > > > the
> > > > "change_detector" config variable? Would it be worthwhile to test with
> > > > a
> > > > method other than "changelog"? Other thoughts/ideas?
> > > > 
> > > > Thanks in advance!
> > > > 
> > > > j
> > > > 

Re: [Gluster-users] Node count constraints with EC?

2017-03-30 Thread yipik...@gmail.com

On 30/03/2017 08:35, Ashish Pandey wrote:

Good point Cedric!!
The only thing is that I would prefer to say "bricks" instead of 
"nodes" in your statement.


"starting with 4 bricks (3+1) can only evolve by adding 4 bricks (3+1)"

Oh right, thanks for correcting me!

Cheers




*From: *"Cedric Lemarchand" 
*To: *"Terry McGuire" 
*Cc: *gluster-users@gluster.org
*Sent: *Thursday, March 30, 2017 11:57:27 AM
*Subject: *Re: [Gluster-users] Node count constraints with EC?


> On 29 Mar 2017, at 20:29, Terry McGuire  wrote:
>
> I was thinking I’d spread these over 4 nodes, and add single nodes 
over time, with subvolumes rearranged over new nodes to maintain 
protection from whole node failures.


Also keep in mind that a dispersed cluster can only be expanded by the 
number of initial nodes, e.g. starting with 4 nodes (3+1) it can only evolve 
by adding 4 nodes (3+1); you cannot change the initial 3+1 policy to 4+1. 
So the granularity of the cluster's evolution is fixed at the beginning.


Cheers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-Replication not detecting changes

2017-03-30 Thread Jeremiah Rothschild
On Thu, Mar 30, 2017 at 12:51:23AM -0400, Kotresh Hiremath Ravishankar wrote:
> Hi Jeremiah,

Hi Kotresh! Thanks for the follow-up!

> That's really strange. Please enable DEBUG logs for geo-replication as below 
> and send
> us the logs under "/var/log/glusterfs/geo-replication//*.log" from 
> master node
> 
> gluster vol geo-rep  :: config log-level DEBUG

Ok.

I started from scratch & enabled debug level logging. The logs have been
attached to Bugzilla #1327244.

> Geo-rep has two ways to detect changes.
> 
> 1. changelog (Changelog Crawl)
> 2. xsync (Hybrid Crawl):
>This is good for initial sync. It has the limitation of not 
> detecting unlinks and renames.
>So the slave would end up having unlinked files and renamed src file if it 
> is used after initial sync.

FYI I did try changing the changelog_detector to xsync but it made no
difference. Note that I also detailed this in the "Additional Info" section
of the Bugzilla bug.

> Thanks and Regards,

Thanks again!

j

> Kotresh H R
> 
> - Original Message -
> > From: "Jeremiah Rothschild" 
> > To: gluster-users@gluster.org
> > Sent: Wednesday, March 29, 2017 12:39:11 AM
> > Subject: Re: [Gluster-users] Geo-Replication not detecting changes
> > 
> > Following up on my own thread...
> > 
> > I have spent hours and hours setting up, re-setting up, screwing with
> > undocumented variables, upgrading from LTS to non-LTS, etc etc.
> > 
> > Nothing seems to give.
> > 
> > This is very much an out-of-the-box setup and core functionality just isn't
> > working.
> > 
> > Can anyone throw me a bone here? Please? Do I file a bug for such an
> > open-ended issue? Is everyone assuming I've just screwed a step up? I must
> > say the documentation is pretty clear & simple. Do you want more logs?
> > 
> > If this is going to be a dead end then so be it but I at least need to make
> > sure I've tried my hardest to get a working deployment.
> > 
> > Thanks for your time and understanding!
> > 
> > j
> > 
> > On Thu, Mar 23, 2017 at 11:47:03AM -0700, Jeremiah Rothschild wrote:
> > > Hey all,
> > > 
> > > I have a vanilla geo-replication setup running. It is comprised of two
> > > servers, both CentOS 7 and GlusterFS 3.8.10:
> > > 
> > > * server1: Local server. Master volume named "foo".
> > > * server2: Remote server. Slave volume named "foo".
> > > 
> > > Everything went fine including the initial sync. However, no new changes
> > > are
> > > being seen or synced.
> > > 
> > > Geo-rep status looks clean:
> > > 
> > > # gluster volume geo-replication foo server2.franz.com::foo status
> > > MASTER NODE: server1.x.com
> > > MASTER VOL: foo
> > > MASTER BRICK: /gv0/foo
> > > SLAVE USER: root
> > > SLAVE NODE: server2.x.com::foo
> > > STATUS: Active
> > > CRAWL STATUS: Changelog Crawl
> > > LAST_SYNCED: 2017-03-23 10:12:57
> > > 
> > > In the geo-rep master log, I see these being triggered:
> > > 
> > > # tail -n3
> > > foo/ssh%3A%2F%2Froot%401.2.3.4%3Agluster%3A%2F%2F127.0.0.1%3Afoo.log
> > > [2017-03-23 18:33:34.697525] I [master(/gv0/foo):534:crawlwrap] _GMaster:
> > > 20
> > > crawls, 0 turns
> > > [2017-03-23 18:34:37.441982] I [master(/gv0/foo):534:crawlwrap] _GMaster:
> > > 20
> > > crawls, 0 turns
> > > [2017-03-23 18:35:40.242851] I [master(/gv0/foo):534:crawlwrap] _GMaster:
> > > 20
> > > crawls, 0 turns
> > > 
> > > I don't see any errors in any of the other logs.
> > > 
> > > Not sure what else to poke at here. What are the possible values for the
> > > "change_detector" config variable? Would it be worthwhile to test with a
> > > method other than "changelog"? Other thoughts/ideas?
> > > 
> > > Thanks in advance!
> > > 
> > > j
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> > 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Node count constraints with EC?

2017-03-30 Thread Ashish Pandey

Good point Cedric!! 
The only thing is that I would prefer to say "bricks" instead of "nodes" in 
your statement. 

"starting with 4 bricks (3+1) can only evolve by adding 4 bricks (3+1)" 

- Original Message -

From: "Cedric Lemarchand"  
To: "Terry McGuire"  
Cc: gluster-users@gluster.org 
Sent: Thursday, March 30, 2017 11:57:27 AM 
Subject: Re: [Gluster-users] Node count constraints with EC? 


> On 29 Mar 2017, at 20:29, Terry McGuire  wrote: 
> 
> I was thinking I’d spread these over 4 nodes, and add single nodes over time, 
> with subvolumes rearranged over new nodes to maintain protection from whole 
> node failures. 

Also keep in mind that a dispersed cluster can only be expanded by the number of 
initial nodes, e.g. starting with 4 nodes (3+1) it can only evolve by adding 4 
nodes (3+1); you cannot change the initial 3+1 policy to 4+1. So the granularity 
of the cluster's evolution is fixed at the beginning. 

Cheers 
___ 
Gluster-users mailing list 
Gluster-users@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users 

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Node count constraints with EC?

2017-03-30 Thread Cedric Lemarchand

> On 29 Mar 2017, at 20:29, Terry McGuire  wrote:
> 
> I was thinking I’d spread these over 4 nodes, and add single nodes over time, 
> with subvolumes rearranged over new nodes to maintain protection from whole 
> node failures.

Also keep in mind that a dispersed cluster can only be expanded by the number of 
initial nodes, e.g. starting with 4 nodes (3+1) it can only evolve by adding 4 
nodes (3+1); you cannot change the initial 3+1 policy to 4+1. So the granularity 
of the cluster's evolution is fixed at the beginning. 
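
As a purely hypothetical example (placeholder names), a 3+1 dispersed volume 
built from 4 bricks would later be grown by adding another complete set of 4 
bricks, which creates a second 3+1 subvolume in a distributed-dispersed layout: 

# gluster volume add-brick dispvol \ 
  node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 node4:/bricks/b2 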

Cheers
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-Maintainers] [Gluster-devel] Release 3.10.1: Scheduled for the 30th of March

2017-03-30 Thread Atin Mukherjee
On Wed, Mar 29, 2017 at 10:50 PM, Shyam  wrote:

> On 03/27/2017 12:59 PM, Shyam wrote:
>
>> Hi,
>>
>> It's time to prepare the 3.10.1 release, which falls on the 30th of each
>> month, and hence would be Mar-30th-2017 this time around.
>>
>> We have one blocker issue for the release, which is [1] "auth failure
>> after upgrade to GlusterFS 3.10", that we are tracking using the release
>> tracker bug [2]. @Atin, can we have this fixed in a day or 2, or does it
>> look like we may slip beyond that?
>>
>
> This looks almost complete, I assume that in the next 24h we should be
> able to have this backported and merged into 3.10.1.
>
> This means we will tag 3.10.1 in all probability tomorrow and packages for
> various distributions will follow.
>
>
Master patch is merged now. I've a backport
https://review.gluster.org/#/c/16967 ready for review.


>
>> This mail is to call out the following,
>>
>> 1) Are there any pending *blocker* bugs that need to be tracked for
>> 3.10.1? If so mark them against the provided tracker [2] as blockers for
>> the release, or at the very least post them as a response to this mail
>>
>
> I have not heard of any other issue (other than the rebalance+shard case,
> for which root cause is still in progress). So I will assume nothing else
> blocks the minor update.
>
>
>> 2) Pending reviews in the 3.10 dashboard will be part of the release,
>> *iff* they pass regressions and have the review votes, so use the
>> dashboard [3] to check on the status of your patches to 3.10 and get
>> these going
>>
>> 3) I have made checks on what went into 3.8 post 3.10 release and if
>> these fixes are included in 3.10 branch, the status on this is *green*
>> as all fixes ported to 3.8, are ported to 3.10 as well
>>
>
> This is still green.
>
>
>> 4) First cut of the release notes are posted here [4], if there are any
>> specific call outs for 3.10 beyond bugs, please update the review, or
>> leave a comment in the review, for me to pick it up
>>
>> Thanks,
>> Shyam
>>
>> [1] Pending blocker bug for 3.10.1:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1429117
>>
>> [2] Release bug tracker:
>> https://bugzilla.redhat.com/show_bug.cgi?id=glusterfs-3.10.1
>>
>> [3] 3.10 review dashboard:
>> https://review.gluster.org/#/projects/glusterfs,dashboards/d
>> ashboard:3-10-dashboard
>>
>>
>> [4] Release notes WIP: https://review.gluster.org/16957
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-devel
>>
> ___
> maintainers mailing list
> maintain...@gluster.org
> http://lists.gluster.org/mailman/listinfo/maintainers
>



-- 

Atin Mukherjee

Associate Manager, RHGS Development

Red Hat

amukh...@redhat.com
M: +919739491377
IM: IRC: atinm, twitter: @mukherjee_atin

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users