Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Alessandro,

That will address the failover issue, but it will not address configuring
the glusterfs client to connect to the bricks over TLS. I would be happy to
be wrong. I was only able to get both by specifying that in the config
file. What's curious is why the config file doesn't handle replication the
same way as when the volume is mounted with the mount command. I'd figure
they should behave the same.

Here's my config file if anyone is interested. Perhaps I don't have
something set properly?

 volume gv0-client-0
   type protocol/client
   option ping-timeout 42
   option remote-host host1
   option remote-subvolume /data/glusterfs/gv0/brick1/brick
   option transport-type socket
   option transport.address-family inet
   option send-gids true
   option transport.socket.ssl-enabled on
 end-volume

 volume gv0-client-1
   type protocol/client
   option ping-timeout 42
   option remote-host host2
   option remote-subvolume /data/glusterfs/gv0/brick1/brick
   option transport-type socket
   option transport.address-family inet
   option send-gids true
   option transport.socket.ssl-enabled on
 end-volume

 volume gv0-client-2
   type protocol/client
   option ping-timeout 42
   option remote-host host3
   option remote-subvolume /data/glusterfs/gv0/brick1/brick
   option transport-type socket
   option transport.address-family inet
   option send-gids true
   option transport.socket.ssl-enabled on
 end-volume

 volume gv0-replicate-0
   type cluster/replicate
   subvolumes gv0-client-0 gv0-client-1 gv0-client-2
 end-volume
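
For reference, this is roughly how I point /etc/fstab at a client volfile
like the one above (the volfile path and mount point below are
illustrative, not necessarily what you have):

 /etc/glusterfs/gv0-client.vol /mnt/gv0 glusterfs defaults,_netdev 0 0

When mount.glusterfs is given a local path instead of a host:/volname
pair, it loads that graph directly rather than fetching one from glusterd.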

Joe

On Fri, Feb 24, 2017 at 11:40 AM, Alessandro Briosi wrote:

> On 24/02/2017 14:50, Joseph Lorenzini wrote:
>
> 1. I want the mount in /etc/fstab to be able to fail over to any one of the
> three servers that I have, so if one server is down, the client can still
> mount from servers 2 and 3.
>
> The backupvolfile-server option should do the job, or use the config file.
>
> It's mentioned in the blog you linked...
>
> If you need more dynamic failover, rrdns could probably be a solution.
>
>
> Alessandro
>

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Alessandro Briosi
On 24/02/2017 14:50, Joseph Lorenzini wrote:
> 1. I want the mount in /etc/fstab to be able to fail over to any one of
> the three servers that I have, so if one server is down, the client
> can still mount from servers 2 and 3.
The backupvolfile-server option should do the job, or use the config file.
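
For example, a sketch of the mount command (hostnames borrowed from
Joseph's volume; newer clients also accept a plural
backup-volfile-servers=host2:host3 form):

 mount -t glusterfs -o backupvolfile-server=host2 host1:/gv0 /mnt/gv0

or the /etc/fstab equivalent:

 host1:/gv0 /mnt/gv0 glusterfs defaults,_netdev,backupvolfile-server=host2 0 0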

It's mentioned in the blog you linked...

If you need more dynamic failover, rrdns could probably be a solution.


Alessandro

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Mohammed,

You are right that mounting it this way will do the appropriate
replication. However, there are problems with that for my use case:

1. I want the mount in /etc/fstab to be able to fail over to any one of the
three servers that I have, so if one server is down, the client can still
mount from servers 2 and 3.
2. I have configured SSL on the I/O path and I need to be able to configure
the client to use TLS when it connects to the bricks. I was only able to
get that to work with transport.socket.ssl-enabled on in the configuration
file.

In other words, I was only able to get HA at mount time and TLS to work by
using the volume config file and referencing it in /etc/fstab.

https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume

Is there a better way to handle this?

Thanks,
Joe

On Fri, Feb 24, 2017 at 6:24 AM, Mohammed Rafi K C wrote:

> Hi Joseph,
>
> I think there is a gap in understanding your problem. Let me try to give a
> clearer picture.
>
> First, a couple of clarification points:
>
> 1) The client graph is an internally generated configuration file based on
> your volume; you don't need to create or edit your own. If you want a 3-way
> replicated volume, you have to specify that when you create the volume.
>
> 2) When you mount a gluster volume, you don't need to provide any client
> graph; you just need to give a server hostname and the volume name. The
> client automatically fetches the graph and starts working with it (so it
> does the replication based on the graph generated by the gluster management
> daemon).
>
>
> Now let me briefly describe the procedure for creating a 3-way replicated
> volume:
>
> 1) gluster volume create <volname> replica 3 <host1>:/<brick>
> <host2>:/<brick> <host3>:/<brick>
>
>  Note: if you give 3 more bricks, it will create a 2-way distributed,
> 3-way replicated volume (you can increase the distribution by adding
> bricks in multiples of 3)
>
>  this step automatically creates the configuration file in
> /var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol
>
> 2) Now start the volume using gluster volume start <volname>
>
> 3) FUSE mount the volume on the client machine using the command mount -t
> glusterfs <host>:/<volname> <mount-point>
>
> this automatically fetches the configuration file and does the
> replication. You don't need to do anything else.
>
>
> Let me know if this helps.
>
>
> Regards
>
> Rafi KC
>
>
> On 02/24/2017 05:13 PM, Joseph Lorenzini wrote:
>
> Hi Mohammed,
>
> It's not a bug per se; it's a configuration and documentation issue. I
> searched the gluster documentation pretty thoroughly and I did not find
> anything that discussed 1) the client's call graph and 2) how to
> specifically configure a native glusterfs client to properly specify that
> call graph so that replication will happen across multiple bricks. If it's
> there, then there's a pretty severe organization issue in the documentation
> (I am pretty sure I ended up reading almost every page).
>
> As a result, because I was new to gluster, my initial setup really
> confused me. I would follow the instructions as documented in the official
> gluster docs (execute the mount command), write data on the mount...and
> then only see it replicated to a single brick. It was only after much
> furious googling that I managed to figure out that 1) I needed a client
> configuration file which should be specified in /etc/fstab and 2) the
> configuration block mentioned above was the key.
>
> I am actually planning on submitting a PR to the documentation to cover
> all this. To be clear, I am sure this is obvious to a seasoned gluster user
> -- but it is not at all obvious to someone who is new to gluster such as
> myself.
>
> So I am an operations engineer. I like reproducible deployments and I like
> monitoring to alert me when something is wrong. Due to human error or a bug
> in our deployment code, it's possible that something like not setting the
> client call graph properly could happen. I wanted a way to detect this
> problem so that if it does happen, it can be remediated immediately.
>
> Your suggestion sounds promising. I shall definitely look into that.
> Though that might be useful information to surface in a CLI command in
> a future gluster release, IMHO.
>
> Joe
>
>
>
> On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C
> <rkavu...@redhat.com> wrote:
>
>>
>>
>> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>>
>> Hi all,
>>
>> I have a simple replicated volume with a replica count of 3. To ensure
>> any file changes (create/delete/modify) are replicated to all bricks, I
>> have this setting in my client configuration.
>>
>>  volume gv0-replicate-0
>>    type cluster/replicate
>>    subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>>  end-volume
>>
>> And that works as expected. My question is how one could detect if this
>> was not happening, which could pose a severe problem with data consistency
>> and replication. For example, those settings could be omitted from the
>> client config and then the client will only write data to one brick.

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Mohammed Rafi K C
Hi Joseph,

I think there is a gap in understanding your problem. Let me try to give a
clearer picture.

First, a couple of clarification points:

1) The client graph is an internally generated configuration file based on
your volume; you don't need to create or edit your own. If you want a
3-way replicated volume, you have to specify that when you create the
volume.

2) When you mount a gluster volume, you don't need to provide any client
graph; you just need to give a server hostname and the volume name. The
client automatically fetches the graph and starts working with it (so it
does the replication based on the graph generated by the gluster
management daemon).


Now let me briefly describe the procedure for creating a 3-way
replicated volume:

1) gluster volume create <volname> replica 3 <host1>:/<brick>
<host2>:/<brick> <host3>:/<brick>

 Note: if you give 3 more bricks, it will create a 2-way distributed,
3-way replicated volume (you can increase the distribution by adding
bricks in multiples of 3)

 this step automatically creates the configuration file in
/var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol

2) Now start the volume using gluster volume start <volname>

3) FUSE mount the volume on the client machine using the command mount -t
glusterfs <host>:/<volname> <mount-point>

this automatically fetches the configuration file and does the
replication. You don't need to do anything else.
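
For example, with hypothetical hostnames host1/host2/host3, volume name
gv0, and the brick path used elsewhere in this thread, the whole
procedure would look like:

 gluster volume create gv0 replica 3 \
     host1:/data/glusterfs/gv0/brick1/brick \
     host2:/data/glusterfs/gv0/brick1/brick \
     host3:/data/glusterfs/gv0/brick1/brick
 gluster volume start gv0
 mount -t glusterfs host1:/gv0 /mnt/gv0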


Let me know if this helps.


Regards

Rafi KC


On 02/24/2017 05:13 PM, Joseph Lorenzini wrote:
> Hi Mohammed,
>
> It's not a bug per se; it's a configuration and documentation issue. I
> searched the gluster documentation pretty thoroughly and I did not
> find anything that discussed 1) the client's call graph and 2) how to
> specifically configure a native glusterfs client to properly specify
> that call graph so that replication will happen across multiple
> bricks. If it's there, then there's a pretty severe organization issue
> in the documentation (I am pretty sure I ended up reading almost every
> page).
>
> As a result, because I was new to gluster, my initial setup really
> confused me. I would follow the instructions as documented in the
> official gluster docs (execute the mount command), write data on the
> mount...and then only see it replicated to a single brick. It was only
> after much furious googling that I managed to figure out that 1) I
> needed a client configuration file which should be specified in
> /etc/fstab and 2) the configuration block mentioned above was the key.
>
> I am actually planning on submitting a PR to the documentation to
> cover all this. To be clear, I am sure this is obvious to a seasoned
> gluster user -- but it is not at all obvious to someone who is new to
> gluster such as myself.
>
> So I am an operations engineer. I like reproducible deployments and I
> like monitoring to alert me when something is wrong. Due to human
> error or a bug in our deployment code, it's possible that something
> like not setting the client call graph properly could happen. I wanted
> a way to detect this problem so that if it does happen, it can be
> remediated immediately.
>
> Your suggestion sounds promising. I shall definitely look into that.
> Though that might be useful information to surface in a CLI command
> in a future gluster release, IMHO.
>
> Joe
>
>
>
> On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C
> <rkavu...@redhat.com> wrote:
>
>
>
> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>> Hi all,
>>
>> I have a simple replicated volume with a replica count of 3. To
>> ensure any file changes (create/delete/modify) are replicated to
>> all bricks, I have this setting in my client configuration.
>>
>>  volume gv0-replicate-0
>>    type cluster/replicate
>>    subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>>  end-volume
>>
>> And that works as expected. My question is how one could detect
>> if this was not happening, which could pose a severe problem with
>> data consistency and replication. For example, those settings
>> could be omitted from the client config and then the client will
>> only write data to one brick and all kinds of terrible things
>> will start happening. I have not found a way in the gluster volume
>> CLI to detect when that kind of problem is occurring. For example,
>> gluster volume heal <volname> info does not detect this problem.
>>
>> Is there any programmatic way to detect when this problem is
>> occurring?
>>
>
> I couldn't understand how you would end up in this situation. There
> is only one possibility (assuming there is no bug :) ), i.e. you
> changed the client graph in a way that leaves only one subvolume
> under the replica translator.
>
> The simple way to check is via an xlator called meta, which
> exposes metadata through the mount point, similar to the Linux
> proc file system. So you can check the active graph through meta
> and see the number of subvolumes for the replica xlator.
>
> for example: the directory
> <mount-point>/.meta/graphs/active/<volname>-replicate-0/subvolumes
> will have an entry for each replica client, so in your case you
> should see 3 directories.

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Mohammed,

It's not a bug per se; it's a configuration and documentation issue. I
searched the gluster documentation pretty thoroughly and I did not find
anything that discussed 1) the client's call graph and 2) how to
specifically configure a native glusterfs client to properly specify that
call graph so that replication will happen across multiple bricks. If it's
there, then there's a pretty severe organization issue in the documentation
(I am pretty sure I ended up reading almost every page).

As a result, because I was new to gluster, my initial setup really
confused me. I would follow the instructions as documented in the official
gluster docs (execute the mount command), write data on the mount...and
then only see it replicated to a single brick. It was only after much
furious googling that I managed to figure out that 1) I needed a client
configuration file which should be specified in /etc/fstab and 2) the
configuration block mentioned above was the key.

I am actually planning on submitting a PR to the documentation to cover all
this. To be clear, I am sure this is obvious to a seasoned gluster user --
but it is not at all obvious to someone who is new to gluster such as
myself.

So I am an operations engineer. I like reproducible deployments and I like
monitoring to alert me when something is wrong. Due to human error or a bug
in our deployment code, it's possible that something like not setting the
client call graph properly could happen. I wanted a way to detect this
problem so that if it does happen, it can be remediated immediately.

Your suggestion sounds promising. I shall definitely look into that. Though
that might be useful information to surface in a CLI command in a future
gluster release, IMHO.
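
In the meantime, here is a minimal sketch of the kind of check I have in
mind, built on your meta suggestion (the /mnt/gv0 mount point and gv0
names are assumptions for illustration):

 #!/bin/sh
 # Alert if the active client graph has fewer than 3 replica subvolumes.
 count=$(ls /mnt/gv0/.meta/graphs/active/gv0-replicate-0/subvolumes | wc -l)
 if [ "$count" -ne 3 ]; then
     echo "CRITICAL: gv0 replica subvolume count is $count, expected 3"
     exit 2
 fi
 echo "OK: gv0 has $count replica subvolumes"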

Joe



On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C wrote:

>
>
> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>
> Hi all,
>
> I have a simple replicated volume with a replica count of 3. To ensure any
> file changes (create/delete/modify) are replicated to all bricks, I have
> this setting in my client configuration.
>
>  volume gv0-replicate-0
>    type cluster/replicate
>    subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>  end-volume
>
> And that works as expected. My question is how one could detect if this
> was not happening, which could pose a severe problem with data consistency
> and replication. For example, those settings could be omitted from the
> client config and then the client will only write data to one brick and all
> kinds of terrible things will start happening. I have not found a way in the
> gluster volume CLI to detect when that kind of problem is occurring. For
> example, gluster volume heal <volname> info does not detect this problem.
>
> Is there any programmatic way to detect when this problem is occurring?
>
>
> I couldn't understand how you would end up in this situation. There is only
> one possibility (assuming there is no bug :) ), i.e. you changed the client
> graph in a way that leaves only one subvolume under the replica translator.
>
> The simple way to check is via an xlator called meta, which exposes
> metadata through the mount point, similar to the Linux proc file
> system. So you can check the active graph through meta and see the
> number of subvolumes for the replica xlator.
>
> for example: the directory
> <mount-point>/.meta/graphs/active/<volname>-replicate-0/subvolumes
> will have an entry for each replica client, so in your case you should
> see 3 directories.
>
>
> Let me know if this helps.
>
> Regards
> Rafi KC
>
>
> Thanks,
> Joe

Re: [Gluster-users] detecting replication issues

2017-02-23 Thread Mohammed Rafi K C


On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
> Hi all,
>
> I have a simple replicated volume with a replica count of 3. To ensure
> any file changes (create/delete/modify) are replicated to all bricks,
> I have this setting in my client configuration.
>
>  volume gv0-replicate-0
>    type cluster/replicate
>    subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>  end-volume
>
> And that works as expected. My question is how one could detect if
> this was not happening, which could pose a severe problem with data
> consistency and replication. For example, those settings could be
> omitted from the client config and then the client will only write
> data to one brick and all kinds of terrible things will start
> happening. I have not found a way in the gluster volume CLI to detect
> when that kind of problem is occurring. For example, gluster volume
> heal <volname> info does not detect this problem.
>
> Is there any programmatic way to detect when this problem is occurring?
>

I couldn't understand how you would end up in this situation. There is
only one possibility (assuming there is no bug :) ), i.e. you changed the
client graph in a way that leaves only one subvolume under the replica
translator.

The simple way to check is via an xlator called meta, which exposes
metadata through the mount point, similar to the Linux proc file system.
So you can check the active graph through meta and see the number of
subvolumes for the replica xlator.

for example: the directory
<mount-point>/.meta/graphs/active/<volname>-replicate-0/subvolumes will
have an entry for each replica client, so in your case you should see 3
directories.
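
Concretely, something along these lines (assuming the volume is
fuse-mounted at /mnt/gv0; names follow your config):

 ls /mnt/gv0/.meta/graphs/active/gv0-replicate-0/subvolumes

With your client graph this should list three entries, one per replica
client (gv0-client-0, gv0-client-1, gv0-client-2).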


Let me know if this helps.

Regards
Rafi KC


> Thanks,
> Joe

[Gluster-users] detecting replication issues

2017-02-23 Thread Joseph Lorenzini
Hi all,

I have a simple replicated volume with a replica count of 3. To ensure any
file changes (create/delete/modify) are replicated to all bricks, I have
this setting in my client configuration.

 volume gv0-replicate-0
   type cluster/replicate
   subvolumes gv0-client-0 gv0-client-1 gv0-client-2
 end-volume

And that works as expected. My question is how one could detect if this was
not happening, which could pose a severe problem with data consistency and
replication. For example, those settings could be omitted from the client
config and then the client will only write data to one brick and all kinds
of terrible things will start happening. I have not found a way in the
gluster volume CLI to detect when that kind of problem is occurring. For
example, gluster volume heal <volname> info does not detect this problem.

Is there any programmatic way to detect when this problem is occurring?

Thanks,
Joe