[Gluster-users] severe security vulnerability in glusterfs with remote-hosts option

2017-05-03 Thread Joseph Lorenzini
Hi all,

I came across this blog entry. It seems that there's an undocumented
command line option that allows someone to execute a gluster cli command on
a remote host.

https://joejulian.name/blog/one-more-reason-that-glusterfs-should-not-be-used-as-a-saas-offering/

I am on gluster 3.9 and the option is still supported. I'd really like to
understand why this option is still supported and what someone could do to
actually mitigate this vulnerability. Is there some configuration option I
can set to turn this off, for example?
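
In the meantime, to make the concern concrete (hostname and subnet below are
illustrative): anyone who can reach glusterd's management port (24007) can
point the cli at it, and firewalling that port is the only stopgap I have
come up with so far:

    # run a cli command against another host's glusterd
    gluster --remote-host=gluster1.example.com volume info

    # stopgap: restrict the management port to trusted peers
    iptables -A INPUT -p tcp --dport 24007 -s 10.0.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 24007 -j DROP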

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] TLS support

2017-03-31 Thread Joseph Lorenzini
Try connecting OpenSSL's s_client to a volume's brick port. Note you can control
the allowed SSL/TLS versions by setting a gluster vol option.
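
For example (hostname, brick port, and cipher string are illustrative; the
real brick port is shown by gluster volume status):

    # see which protocol version and cipher a brick negotiates
    openssl s_client -connect server1:49152 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'

    # restrict the allowed versions via the volume's cipher list
    gluster volume set gv0 ssl.cipher-list 'HIGH:!SSLv2:!SSLv3'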

Joe

On Fri, Mar 31, 2017 at 8:33 AM Darren Zhang <his...@126.com> wrote:

So how can I know which ssl protocol is currently used between the server
and client? (gluster 3.10.0 on ubuntu 16.04)


Yong Zhang


On 2017-03-31 20:56, Niels de Vos <nde...@redhat.com> wrote:

On Fri, Mar 31, 2017 at 07:01:14AM -0500, Joseph Lorenzini wrote:
> Hi Yong,
>
> Gluster uses the openssl library, which supports SSL 3.0 and TLS versions
> 1.0, 1.1, and 1.2. I actually don't know if it's dynamically linked against
> the openssl library nor what version of the openssl lib gluster has been
> tested with. That is important info to know that is currently undocumented.

It is dynamically linked and the version that is used is the openssl
version that is provided by the distribution where the different
glusterfs packages are built.

Niels
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] TLS support

2017-03-31 Thread Joseph Lorenzini
Hi Yong,

Gluster uses the openssl library, which supports SSL 3.0 and TLS versions
1.0, 1.1, and 1.2. I actually don't know if it's dynamically linked against the
openssl library nor what version of the openssl lib gluster has been tested
with. That is important info to know that is currently undocumented.

But in regard to your specific question, it would support SSL (which no
one should use anymore) and all versions of TLS (everyone should be using
at least 1.1).

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Error occurs when mounting gluster fuse over TLS

2017-03-30 Thread Joseph Lorenzini
Hi all,

I have gluster 3.9. I have MTLS set up for both management traffic and
volumes. The gluster fuse client successfully mounts the gluster volume.
However, I see the following error in the gluster server logs when a mount or
unmount happens on the gluster client. Is this a bug? Is it anything to
be concerned about? Everything seems to be functioning fine.

[2017-03-30 17:24:50.728098] I [socket.c:343:ssl_setup_connection]
0-socket.management: peer CN = dfsclient1.local

[2017-03-30 17:24:50.728161] I [socket.c:346:ssl_setup_connection]
0-socket.management: SSL verification succeeded (client: )

[2017-03-30 17:24:50.731084] E [socket.c:2547:socket_poller]
0-socket.management: error in polling loop

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-19 Thread Joseph Lorenzini
Hi Deepak,

Sorta. I think it depends on what we mean by I/O path and performance.

If we are referring to disk I/O for the gluster servers, then no. If we are
referring to the network I/O between a gluster client and server, then yes,
there will by definition be some additional overhead. That, however, is true
of any security layer one chooses to pick for any application, especially a
distributed system. In practice, security of any kind, whether it's
encryption, ACLs, or even iptables, will degrade the performance of an
application. And since distributed systems by definition handle their state
through network I/O, that means security + distributed system = network
latency. There's a reason people say security is where performance goes to
die. :)

Now, that all said, frequently the issue is not whether there will be
network latency, but how much, and does it matter? Moreover, what are the
specific performance requirements for your gluster pool, and have they been
weighed against the costs of meeting those requirements? Additionally, how
does meeting those performance requirements weigh against all your other
requirements, like, for example, having basic network security around a
distributed system?

I would be quite surprised if openssl MTLS were any slower than
some other key-based authentication scheme. Most of the cost of TLS is
the TLS handshake, which is a one-time hit when the gluster client
mounts the volume. Since the client maintains a persistent TLS
connection, most of the remaining overhead is openssl code performing
symmetric encryption, which openssl, despite all its warts, is really,
really good at doing really, really fast (understand this is all relative
to an arbitrary baseline :). Bottom line: with modern hardware, the
performance impact of MTLS should be negligible. IMHO, if the performance
requirement can't tolerate MTLS, then it is in practice preventing you from
implementing any reasonable security scheme at all. In that case, you'd be
better off just setting up an isolated network and skipping any type of
authentication.

I'd recommend setting up MTLS with gluster and running your performance
tests against it. That will definitively answer your question of whether the
performance is acceptable. The MTLS setup is not that hard, and the gluster
documentation is reasonable, though it could be improved (I need to submit
some PRs against it). If you have any questions about setup and
configuration, I am sure I can help.
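
For reference, here is a minimal sketch of the setup, following the upstream
SSL/TLS docs (volume name, CN, and cert layout are illustrative):

    # on each server and client: generate a key and a self-signed cert
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
      -subj '/CN=dfsclient1.local' -days 365 -out /etc/ssl/glusterfs.pem

    # distribute a CA file containing everyone's certs to every machine
    cat server*.pem client*.pem > /etc/ssl/glusterfs.ca

    # enable TLS on the I/O path
    gluster volume set gv0 client.ssl on
    gluster volume set gv0 server.ssl on

    # enable TLS on the management path (on all nodes, then restart glusterd)
    touch /var/lib/glusterd/secure-access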

Joe

On Sat, Mar 18, 2017 at 2:25 PM, Deepak Naidu <dna...@nvidia.com> wrote:

> Hi Joe, thanks for taking the time to explain. I have a basic set of
> requirements, with IO performance as a key factor; my reply below should
> justify what I am trying to achieve.
>
> >>If I am understanding your use case properly, you want to ensure that a
> client may mount a gluster volume if and only if it presents a key or
> secret that attests to the client's identity, which the gluster server can
> use to verify that client's identity.
>
> Yes, this is the exact use case for my requirements.
>
>
>
> >>That's exactly what gluster MTLS is doing since the gluster server
> performs chain-of-trust validation on the client's leaf certificate.
>
> That's good, but my confusion here is: does this MTLS also encrypt IO
> traffic like TLS? If yes, then it's not what I am looking for. The reason
> is that IO encryption/decryption is extra overhead for my use case, as
> IO performance is also a factor in why we're looking at GlusterFS --
> unless my understanding is incorrect and IO encryption has no overhead.
>
>
>
> >> I don't understand why I/O path encryption is something you want to
> avoid -- seems like an essential part of basic network security that you
> get for "free" with the authentication.
>
> If I understand the term IO path encryption correctly, all the storage IO
> will go through the extra latency of encryption & decryption, which is not
> needed for my requirements; that extra IO latency is why I
> am trying to avoid IO path encryption & just need basic secret-based
> authentication.
>
>
>
>
> --
> Deepak
>
> > On Mar 18, 2017, at 10:46 AM, Joseph Lorenzini <jalo...@gmail.com>
> wrote:
> >
> > I am a little confused about what you are trying to accomplish here. If I
> am understanding your use case properly, you want to ensure that a client
> may mount a gluster volume if and only if it presents a key or secret
> that attests to the client's identity, which the gluster server can use to
> verify that client's identity. That's exactly what gluster MTLS is doing,
> since the gluster server performs chain-of-trust validation on the client's
> leaf certificate.
> >
> > Of course this will necessarily force encryption of the I/O path since
>

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Joseph Lorenzini
Hi Deepak,

I am a little confused about what you are trying to accomplish here. If I am
understanding your use case properly, you want to ensure that a client may
mount a gluster volume if and only if it presents a key or secret that
attests to the client's identity, which the gluster server can use to
verify that client's identity. That's exactly what gluster MTLS is doing,
since the gluster server performs chain-of-trust validation on the client's
leaf certificate.

Of course this will necessarily force encryption of the I/O path, since it's
TLS. I don't understand why I/O path encryption is something you want to
avoid -- it seems like an essential part of basic network security that you
get for "free" with the authentication. It is true that if you want
key-based authentication of a gluster client, you will need gluster MTLS.
You could treat encryption as the "cost" of getting authentication, if you
will.

Now, I am personally pretty negative on X.509 and chain-of-trust in general,
since the trust model has been proven not to scale and is frequently broken
by malicious and incompetent CAs. I also think it's a completely
inappropriate security model for something like gluster, where all endpoints
are known and controlled by a single entity: it forces a massive amount of
unnecessary complexity around certificate management with no real added
security. I have thought about making a feature request that gluster
support a simple public-key scheme implemented the way SSH does it. But all
that said, MTLS is a well-tested, well-known security protocol, and the
gluster team built it on top of openssl, so it does get the security job
done in an acceptable way. The fact that the I/O path is encrypted is not
the thing that bothers me about the implementation, though.


Joe

On Sat, Mar 18, 2017 at 11:57 AM, Deepak Naidu <dna...@nvidia.com> wrote:

> Thanks Joseph for info.
>
> >>In addition, gluster uses MTLS (each endpoint validates the other's
> chain-of-trust), so you get authentication as well.
>
> Does it only do authentication of mounts? I am not interested at this
> moment in IO path encryption; I am only looking for authentication.
>
> >>you can set the auth.allow and auth.reject options to whitelist and
> blacklist clients based on their source IPs.
>
> I have used this, but unfortunately I don't see IP-based/host-based ACLs as
> a mature method, unless GlusterFS supports Kerberos completely. The reason I
> am looking for key- or secret-based secured mounts is that an entire
> subnet will be granted to the service, and a more elegant way to allow only
> particular clients on that subnet to mount gluster would be keys/secrets, as
> a client might get a different IP on its next cycle/reboot. I can find a
> workaround related to IPs, but it seems really weird that gluster only uses
> SSL to encrypt IO traffic and not the same mechanism for authenticated mounts.
>
>
>
> --
> Deepak
>
> > On Mar 18, 2017, at 9:14 AM, Joseph Lorenzini <jalo...@gmail.com> wrote:
> >
> >
> > Hi Deepak,
> >
> > Here's the TLDR
> >
> > If you don't want the I/O path to be encrypted but you want to control
> access to a gluster volume, you can set the auth.allow and auth.reject
> options to whitelist and blacklist clients based on their source IPs.
> There's also always iptables rules if you don't want to do that.
> >
> > Note this only addresses the ability of a client (i.e., a system where
> multiple unix users can exist) to mount a gluster volume. This does not
> address access by different unix users on that mounted gluster volume --
> that's a separate and much more complicated issue. I can elaborate on that
> more if you want.
> >
> > Here's the longer explanation on the TLS piece.
> >
> > So there are a couple of different security layers here. TLS will in fact
> encrypt the I/O path -- that's one of its key selling points, which is to
> ensure confidentiality of the data sent between the gluster server and
> gluster client. In addition, gluster uses MTLS (each endpoint validates
> the other's chain-of-trust), so you get authentication as well.
> Additionally, if you set the auth.ssl-allow option on the gluster volume,
> you can specify whether an authenticated TLS client is permitted to access
> the volume based on the common name in the client's certificate. This
> provides an inflexible but reasonably strong form of authorization.
> >
> >
> 

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Joseph Lorenzini
Hi Deepak,

Here's the TLDR

If you don't want the I/O path to be encrypted but you want to control
access to a gluster volume, you can set the auth.allow and auth.reject
options to whitelist and blacklist clients based on their source IPs.
There are also always iptables rules if you don't want to do that.

Note this only addresses the ability of a client (i.e., a system where
multiple unix users can exist) to mount a gluster volume. This does not
address access by different unix users on that mounted gluster volume --
that's a separate and much more complicated issue. I can elaborate on that
more if you want.

Here's the longer explanation on the TLS piece.

So there are a couple of different security layers here. TLS will in fact
encrypt the I/O path -- that's one of its key selling points, which is to
ensure confidentiality of the data sent between the gluster server and
gluster client. In addition, gluster uses MTLS (each endpoint validates
the other's chain-of-trust), so you get authentication as well.
Additionally, if you set the auth.ssl-allow option on the gluster volume,
you can specify whether an authenticated TLS client is permitted to access
the volume based on the common name in the client's certificate. This
provides an inflexible but reasonably strong form of authorization.
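
For example (volume name, addresses, and CNs are illustrative):

    # IP-based whitelist/blacklist; no encryption involved
    gluster volume set gv0 auth.allow '192.168.10.*'
    gluster volume set gv0 auth.reject '192.168.10.99'

    # CN-based authorization for TLS-authenticated clients
    gluster volume set gv0 auth.ssl-allow 'dfsclient1.local,dfsclient2.local'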
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] documentation on georeplication failover

2017-03-06 Thread Joseph Lorenzini
Hi all,

I found this doc on georeplication. I am on gluster 3.9. I am looking for
documentation that explains how to fail over between the master and slave
volumes.

http://gluster.readthedocs.io/en/latest/Administrator%20Guide/Geo%20Replication/

How would someone handle the following scenario?

   1. Two datacenters, A and B. Each is running a gluster pool.
   2. In DC A, create a gluster volume named gv0-dcA.
   3. In DC B, create a gluster volume named gv0-dcB.
   4. Gluster volume gv0-dcA is the master and volume gv0-dcB is the slave.
   5. A couple GB of data is written to gv0-dcA and replicated to gv0-dcB.
   6. Initiate a failover process (my guess at the commands is sketched
      below) so that:
      1. all writes are completed on gv0-dcA
      2. gv0-dcA is now read-only
      3. gv0-dcB becomes master and data can now be written to this volume
      4. gv0-dcA is now the slave volume and gv0-dcB is the master volume;
         consequently, georeplication now happens from gv0-dcB to gv0-dcA.
   7. Repeat all the steps described in the previous step, but with the
      roles switched, so that gv0-dcA becomes master and gv0-dcB becomes
      slave again.
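
To make step 6 concrete, here is my naive guess at the command sequence,
based on the geo-replication cli (hostnames are illustrative, and I don't
know whether this is the sanctioned procedure -- that is exactly what I am
asking):

    # quiesce the master and drain pending changes
    gluster volume set gv0-dcA features.read-only on
    gluster volume geo-replication gv0-dcA dcb-host::gv0-dcB config checkpoint now
    # ...wait for the checkpoint to complete, then stop the session
    gluster volume geo-replication gv0-dcA dcb-host::gv0-dcB stop

    # promote DC B and replicate back the other way
    gluster volume set gv0-dcB features.read-only off
    gluster volume geo-replication gv0-dcB dca-host::gv0-dcA create push-pem force
    gluster volume geo-replication gv0-dcB dca-host::gv0-dcA start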


Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] managing lifecycle of a gluster volume in openshift

2017-02-26 Thread Joseph Lorenzini
Hi all,

I am happy to report that I finally got a container in an openshift pod to
mount a gluster volume successfully. The difficulty had nothing to do with
gluster, which works fine, and everything to do with openshift interfaces
being less than ideal. Note to self: turn off the settings in openshift that
prevent containers from running as root (what a silly restriction).

So now I need to tackle a much more complicated problem: how to handle the
lifecycle of a gluster volume in openshift.

Here are the things I am considering, and I'd be interested to see how
others have addressed this problem. Let's assume for the purposes of this
conversation that we have a single gluster cluster that uses replicated
volumes with a replica count of 3 (nothing is distributed) and that the
cluster consists of three nodes. So each volume minimally has three bricks,
where each server has only one of the bricks. Total available disk across
gluster is 1 terabyte.


   - Do you use a single gluster volume for multiple pods, or one gluster
   volume for each pod? Until gluster supports mounting a subdirectory of a
   volume natively (can't wait for that feature!!), it seems like you'd want
   to go the route of volume-per-pod for reasons of multi-tenancy and security.
   - If you do a gluster volume per pod, how do you handle the physical
   storage that backs the gluster cluster? For example, let's say each gluster
   server has three devices (/dev/sdb, /dev/sdc, /dev/sdd) that can be used by
   bricks. Would it be a good idea to create a volume for each openshift pod,
   where there are multiple brick processes writing to the same device on the
   same disk for different volumes (see the sketch below)? Or would that have
   unacceptable performance implications? The reason I ask is that the gluster
   docs seem to recommend having physical devices dedicated to a single
   volume only.
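
To make the second bullet concrete, the layout I am contemplating looks
something like this (hostnames, paths, and volume names are illustrative):

    # one filesystem per device; bricks for different volumes live in
    # subdirectories of the same device on each server
    mkdir -p /data/sdb/pod1 /data/sdb/pod2
    gluster volume create pod1-vol replica 3 server1:/data/sdb/pod1/brick \
      server2:/data/sdb/pod1/brick server3:/data/sdb/pod1/brick
    gluster volume create pod2-vol replica 3 server1:/data/sdb/pod2/brick \
      server2:/data/sdb/pod2/brick server3:/data/sdb/pod2/brick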


I did take a look at heketi but have a variety of concerns/questions on
that, which are probably more appropriate for whatever email list discusses
heketi.

https://github.com/screeley44/openshift-docs/blob/ce684e3c4c581db3b4aa27ecc1dba2ea65f51eda/install_config/storage_examples/external_gluster_dynamic_example.adoc

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Alessandro,

That will address the failover issue, but it will not address configuring
the glusterfs client to connect to the bricks over TLS. I would be happy to
be wrong. I was only able to get both by specifying that in the config
file. What's curious is why the config file doesn't handle replication the
same way as when the volume is mounted with the mount command. I'd figure
they should be the same.

Here's my config file if anyone is interested. Perhaps I don't have
something set properly?

 volume gv0-client-0
     type protocol/client
     option ping-timeout 42
     option remote-host host1
     option remote-subvolume /data/glusterfs/gv0/brick1/brick
     option transport-type socket
     option transport.address-family inet
     option send-gids true
     option transport.socket.ssl-enabled on
 end-volume

 volume gv0-client-1
     type protocol/client
     option ping-timeout 42
     option remote-host host2
     option remote-subvolume /data/glusterfs/gv0/brick1/brick
     option transport-type socket
     option transport.address-family inet
     option send-gids true
     option transport.socket.ssl-enabled on
 end-volume

 volume gv0-client-2
     type protocol/client
     option ping-timeout 42
     option remote-host host3
     option remote-subvolume /data/glusterfs/gv0/brick1/brick
     option transport-type socket
     option transport.address-family inet
     option send-gids true
     option transport.socket.ssl-enabled on
 end-volume

 volume gv0-replicate-0
     type cluster/replicate
     subvolumes gv0-client-0 gv0-client-1 gv0-client-2
 end-volume

Joe

On Fri, Feb 24, 2017 at 11:40 AM, Alessandro Briosi <a...@metalit.com> wrote:

> On 24/02/2017 14:50, Joseph Lorenzini wrote:
>
> 1. I want the mount /etc/fstab to be able to fail over to any one of the
> three servers that I have. so if one server is down, the client can still
> mount from servers 2 and 3.
>
> The *backupvolfile-server* option
>
> should do the job, or use the config file.
>
> It's mentioned in the blog you linked...
>
> If you need more dynamic failover, rrdns could probably be a solution.
>
>
> Alessandro
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Mohammed,

You are right that mounting it this way will do the appropriate
replication. However, there are problems with that for my use case:

1. I want the /etc/fstab mount to be able to fail over to any one of the
three servers that I have, so if one server is down, the client can still
mount from servers 2 and 3.
2. I have configured SSL on the I/O path, and I need to be able to configure
the client to use TLS when it connects to the bricks. I was only able to get
that to work with transport.socket.ssl-enabled on in the configuration
file.

In other words, I was only able to get both HA at mount time and TLS to work
by using the volume config file and referencing it in /etc/fstab.
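
Concretely, the two /etc/fstab approaches look like this (paths and
hostnames are illustrative; the second form is the one I could not get to
use TLS):

    # what I do now: mount from a local client volfile
    /etc/glusterfs/gv0.vol  /mnt/gv0  glusterfs  defaults,_netdev  0 0

    # what I'd prefer: fetch the volfile from a server, with failover
    host1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=host2  0 0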

https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume

Is there a better way to handle this?

Thanks,
Joe

On Fri, Feb 24, 2017 at 6:24 AM, Mohammed Rafi K C <rkavu...@redhat.com>
wrote:

> Hi Joseph,
>
> I think there is a gap in understanding your problem. Let me try to give a
> clearer picture of this.
>
> First, a couple of clarification points here
>
> 1) The client graph is an internally generated configuration file based on
> your volume; that said, you don't need to create or edit your own. If you
> want a 3-way replicated volume, you have to mention that when you create the
> volume.
>
> 2) When you mount a gluster volume, you don't need to provide any client
> graph; you just need to give the server hostname and volname, and it will
> automatically fetch the graph and start working on it (so it does the
> replication based on the graph generated by the gluster management daemon)
>
>
> Now let me briefly describe the procedure for creating a 3-way replicated
> volume
>
> 1) gluster volume create <volname> replica 3 <host1>:/<brick>
> <host2>:/<brick> <host3>:/<brick>
>
>  Note: if you give 3 more bricks, then it will create a 2-way
> distributed, 3-way replicated volume (you can increase the distribution by
> adding bricks in multiples of 3)
>
>  This step will automatically create the configuration file in
> /var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol
>
> 2) Now start the volume using: gluster volume start <volname>
>
> 3) Fuse mount the volume on the client machine using the command: mount -t
> glusterfs <host>:/<volname> /<mountpoint>
>
> This will automatically fetch the configuration file and will do the
> replication. You don't need to do anything
>
>
> Let me know if this helps.
>
>
> Regards
>
> Rafi KC
>
>
> On 02/24/2017 05:13 PM, Joseph Lorenzini wrote:
>
> HI Mohammed,
>
> It's not a bug per se; it's a configuration and documentation issue. I
> searched the gluster documentation pretty thoroughly and I did not find
> anything that discussed 1) the client's call graph and 2) how to
> specifically configure a native glusterfs client to properly specify that
> call graph so that replication will happen across multiple bricks. If it's
> there, then there's a pretty severe organization issue in the documentation
> (I am pretty sure I ended up reading almost every page actually).
>
> As a result, because I was new to gluster, my initial set up really
> confused me. I would follow the instructions as documented in the official
> gluster docs (execute the mount command), write data on the mount... and
> then see it replicated to only a single brick. It was only after much
> furious googling that I managed to figure out that 1) I needed a client
> configuration file, which should be specified in /etc/fstab, and 2) that the
> configuration block mentioned above was the key.
>
> I am actually planning on submitting a PR to the documentation to cover
> all this. To be clear, I am sure this is obvious to a seasoned gluster user
> -- but it is not at all obvious to someone who is new to gluster, such as
> myself.
>
> So I am an operations engineer. I like reproducible deployments and I like
> monitoring to alert me when something is wrong. Due to human error or a bug
> in our deployment code, it's possible that something like not setting the
> client call graph properly could happen. I wanted a way to detect this
> problem so that if it does happen, it can be remediated immediately.
>
> Your suggestion sounds promising. I shall definitely look into that,
> though that might be useful information to surface in a CLI command in
> a future gluster release, IMHO.
>
> Joe
>
>
>
> On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C
> <rkavu...@redhat.com> wrote:
>
>>
>>
>> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>>
>> Hi all,
>>
>> I have a simple replicated volume with a replica count of 3. To ensure
>> any file changes (create/delete/modify) are replicated to all bricks, I
>> have this setting in my client configuration.
>>
>>  volume gv0-repli

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
HI Mohammed,

It's not a bug per se; it's a configuration and documentation issue. I
searched the gluster documentation pretty thoroughly and I did not find
anything that discussed 1) the client's call graph and 2) how to
specifically configure a native glusterfs client to properly specify that
call graph so that replication will happen across multiple bricks. If it's
there, then there's a pretty severe organization issue in the documentation
(I am pretty sure I ended up reading almost every page actually).

As a result, because I was new to gluster, my initial set up really
confused me. I would follow the instructions as documented in the official
gluster docs (execute the mount command), write data on the mount... and
then see it replicated to only a single brick. It was only after much
furious googling that I managed to figure out that 1) I needed a client
configuration file, which should be specified in /etc/fstab, and 2) that the
configuration block mentioned above was the key.

I am actually planning on submitting a PR to the documentation to cover all
this. To be clear, I am sure this is obvious to a seasoned gluster user --
but it is not at all obvious to someone who is new to gluster, such as
myself.

So I am an operations engineer. I like reproducible deployments and I like
monitoring to alert me when something is wrong. Due to human error or a bug
in our deployment code, it's possible that something like not setting the
client call graph properly could happen. I wanted a way to detect this
problem so that if it does happen, it can be remediated immediately.

Your suggestion sounds promising. I shall definitely look into that, though
that might be useful information to surface in a CLI command in a future
gluster release, IMHO.
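
For the record, my understanding of the check you are describing, as a shell
one-liner (mount point and volume name are illustrative):

    # count the replica subvolumes in the active client graph; expect 3
    ls /mnt/gv0/.meta/graphs/active/gv0-replicate-0/subvolumes | wc -l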

Joe



On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C <rkavu...@redhat.com>
wrote:

>
>
> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>
> Hi all,
>
> I have a simple replicated volume with a replica count of 3. To ensure any
> file changes (create/delete/modify) are replicated to all bricks, I have
> this setting in my client configuration.
>
>  volume gv0-replicate-0
> type cluster/replicate
> subvolumes gv0-client-0 gv0-client-1 gv0-client-2
> end-volume
>
> And that works as expected. My question is how one could detect if this
> were not happening, which would pose a severe problem with data consistency
> and replication. For example, those settings could be omitted from the
> client config, and then the client would only write data to one brick and
> all kinds of terrible things would start happening. I have not found a way
> in the gluster volume cli to detect when that kind of problem is occurring.
> For example, gluster volume heal <volname> info does not detect this problem.
>
> Is there any programmatic way to detect when this problem is occurring?
>
>
> I couldn't understand how you would end up in this situation. There is only
> one possibility (assuming there is no bug :) ), i.e., you changed the client
> graph in a way that there is only one subvolume under the replica xlator.
>
> The simple way to check that is: there is an xlator called meta, which
> provides metadata information through the mount point, similar to the linux
> proc file system. So you can check the active graph through meta and see the
> number of subvolumes for the replica xlator.
>
> For example: the directory
> /<mountpoint>/.meta/graphs/active/<volname>-replicate-0/subvolumes
> will have entries for each replica client, so in your case you should see
> 3 directories.
>
>
> Let me know if this helps.
>
> Regards
> Rafi KC
>
>
> Thanks,
> Joe
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] detecting replication issues

2017-02-23 Thread Joseph Lorenzini
Hi all,

I have a simple replicated volume with a replica count of 3. To ensure any
file changes (create/delete/modify) are replicated to all bricks, I have
this setting in my client configuration.

 volume gv0-replicate-0
type cluster/replicate
subvolumes gv0-client-0 gv0-client-1 gv0-client-2
end-volume

And that works as expected. My question is how one could detect if this were
not happening, which would pose a severe problem with data consistency and
replication. For example, those settings could be omitted from the client
config, and then the client would only write data to one brick and all kinds
of terrible things would start happening. I have not found a way in the
gluster volume cli to detect when that kind of problem is occurring. For
example, gluster volume heal <volname> info does not detect this problem.

Is there any programmatic way to detect when this problem is occurring?

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] difference between a pool and peers

2017-02-13 Thread Joseph Lorenzini
All:

I can see through the API that there's a distinction between a pool and a
peer. My question is: what distinguishes a pool member from a peer? If a node
is one, it always seems to be the other. Can I ever have a node be "online"
in a pool while being an offline peer? What about the reverse? Can I ever
have a node that is a peer but not in a pool?
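
For reference, these are the two views I am comparing:

    gluster peer status   # lists the other nodes in the trusted pool
    gluster pool list     # lists pool members, including the local node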

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] lvm layout for gluster -- using multiple physical volumes

2017-02-10 Thread Joseph Lorenzini
Hi all,

I want to use lvm for two reasons:

- gluster snapshots
- ability to dynamically add space to a brick.


Here's what I'd like to do (sketched below):

1. create two more physical volumes
2. create a single volume group from those physical volumes
3. create a single logical volume
4. make the single logical volume a brick
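
A sketch of that plan (device names and sizes are illustrative; I am using a
thin pool because, as I read the docs, gluster volume snapshots require
thinly provisioned LVs):

    pvcreate /dev/sdb /dev/sdc
    vgcreate vg_gluster /dev/sdb /dev/sdc
    lvcreate -L 500G -T vg_gluster/thinpool
    lvcreate -V 400G -T vg_gluster/thinpool -n brick1
    mkfs.xfs /dev/vg_gluster/brick1
    mkdir -p /data/glusterfs/gv0/brick1
    mount /dev/vg_gluster/brick1 /data/glusterfs/gv0/brick1

    # later, to add space: add a PV and grow the LV and filesystem
    vgextend vg_gluster /dev/sdd
    lvextend -L +100G /dev/vg_gluster/brick1
    xfs_growfs /data/glusterfs/gv0/brick1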

So my questions are:

1. Is there any issue with adding and removing LVM physical volumes from the
volume group that backs the gluster brick?
2. Is there any issue with having multiple physical volumes in the LVM
volume group?
3. Will disk usage increase more with gluster because LVM is being used
instead of just a filesystem like XFS on the raw device?

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] chances of split brain with a distributed replicated volume where replica is 3

2017-02-09 Thread Joseph Lorenzini
All:

I read this in the gluster docs. Note I am not using arbiter -- I am
setting up volumes with full 3 replicas. In this case, is this split-brain
scenario theoretical, or has it actually occurred? If so, what are the
chances that it could happen? In other words, aside from doing regular
snapshots, is this type of split-brain scenario something I should be
planning for as an unlikely disaster recovery scenario *or* as part of daily
maintenance? Since the docs say it's a corner case, I am inferring that it
is pretty unlikely.

"There is a corner case even with replica 3 volumes where the file can end
up in a split-brain. AFR usually takes range locks for the {offset, length}
of the write. If 3 writes happen on the same file at non-overlapping
{offset, length} and each write fails on (only) one different brick, then
we have AFR xattrs of the file blaming each other."

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/#replica-2-and-replica-3-volumes

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] initial pool setup -- bidirectional probe required?

2017-02-03 Thread Joseph Lorenzini
All:

According to the docs, when you initially set up a gluster storage pool,
the first two servers need to probe each other. However, after that, you
add additional servers by probing from a node that's already in the
pool.

However, when I follow the directions with gluster 3.8, the behavior
doesn't seem to match up when I do the initial setup of two nodes. I probe
from server 1 to server 2, but I do not probe from server 2 to server 1. My
expectation would be that either the pool or peer commands would indicate
that server 2 does not "trust" server 1, but in fact server 2 just
indicates it's successfully connected in a pool and server 1 is a trusted
peer.

In addition, if I do a probe from server 2 to server 1, it does not just
say probe success. Instead it says, "probe successful: host already in peer
list".

So here are my questions: is this initial probe-each-server-from-the-other
dance actually required? And if it is, is there a way to tell, through a
command or a log, whether that's occurred or not?
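
In concrete terms (hostnames illustrative), this is the sequence I am
describing:

    # from server1 only:
    gluster peer probe server2

    # on server2, this already reports server1 as a connected peer:
    gluster peer status

    # and re-probing from server2 reports the host is already in the peer list:
    gluster peer probe server1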

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Storage%20Pools/

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] possible kernel panic with glusterd

2017-01-25 Thread Joseph Lorenzini
That file exists but is empty.

Joe

On Wed, Jan 25, 2017 at 7:44 AM Samikshan Bairagya <sbair...@redhat.com>
wrote:

>
>
> On 01/25/2017 06:57 PM, Joseph Lorenzini wrote:
> > Hi Atin,
> >
> > I assume you are referring to the /var/log/glustershd.log. If so, that
> file
> > never gets created.
>
> You'd find the glusterd log file at
> /var/log/glusterfs/etc-glusterfs-glusterd.vol.log, AFAIR, for glusterfs-3.8.
>
> ~ Samikshan
>
> >
> > Joe
> >
> > On Wed, Jan 25, 2017 at 6:14 AM Atin Mukherjee <amukh...@redhat.com>
> wrote:
> >
> >> On Wed, Jan 25, 2017 at 5:15 PM, Joseph Lorenzini <jalo...@gmail.com>
> >> wrote:
> >>
> >> Hi all,
> >>
> >> I have recently started exploring the DFS solution space and was doing
> >> some basic setup and testing with gluster. I set up a pool of three
> >> nodes following the quick start guide. That seemed to work fine.
> >>
> >> However, shortly after that, I noticed that one of the servers in the
> >> pool was becoming non-responsive -- as in the entire VM was completely
> >> hung and I had to use the hypervisor to force a reboot. I sshed into the
> >> server and started poking around. glusterd was shut off. I started it up
> >> and the following happened:
> >>
> >> Message from syslogd at Jan 25 05:20:47 ...
> >>  kernel:[  288.145027] NMI watchdog: BUG: soft lockup - CPU#1 stuck for
> >> 22s! [glusterd:2374]
> >>
> >>
> >> Could you attach the glusterd log file to enable us to look at why
> >> glusterd got shut down?
> >>
> >>
> >>
> >> At which point, the VM became completely unresponsive again.
> >>
> >> All servers are the same. They are running centos 7.3, linux kernel
> >> 3.10.0-514.2.2.el7.x86_64. The glusterfs-server is 3.8.
> >>
> >> Since I just started investigating gluster, it is certainly possible
> >> that I misconfigured something on that one node. However, a kernel
> >> hang/panic seems like an excessive response :). If anyone has any ideas
> >> or suggestions about what may be happening here or additional places I
> >> should look into to find out what is going on, I am all ears.
> >>
> >> Thanks,
> >> Joe
> >>
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >>
> >>
> >>
> >>
> >> --
> >>
> >> ~ Atin (atinm)
> >>
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
> >
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] possible kernel panic with glusterd

2017-01-25 Thread Joseph Lorenzini
Hi Atin,

I assume you are referring to the /var/log/glustershd.log. If so, that file
never gets created.

Joe

On Wed, Jan 25, 2017 at 6:14 AM Atin Mukherjee <amukh...@redhat.com> wrote:

> On Wed, Jan 25, 2017 at 5:15 PM, Joseph Lorenzini <jalo...@gmail.com>
> wrote:
>
> Hi all,
>
> I have recently started exploring the DFS solution space and was doing
> some basic setup and testing with gluster. I set up a pool of three nodes
> following the quick start guide. That seemed to work fine.
>
> However, shortly after that, I noticed that one of the servers in the pool
> was becoming non-responsive -- as in the entire VM was completely hung and
> I had to use the hypervisor to force a reboot. I sshed into the server and
> started poking around. glusterd was shut off. I started it up and the
> following happened:
>
> Message from syslogd at Jan 25 05:20:47 ...
>  kernel:[  288.145027] NMI watchdog: BUG: soft lockup - CPU#1 stuck for
> 22s! [glusterd:2374]
>
>
> Could you attach the glusterd log file to enable us to look at why
> glusterd got shut down?
>
>
>
> At which point, the VM became completely unresponsive again.
>
> All servers are the same. They are running centos 7.3, linux kernel
> 3.10.0-514.2.2.el7.x86_64. The glusterfs-server is 3.8.
>
> Since I just started investigating gluster, it is certainly possible that
> I misconfigured something on that one node. However, a kernel hang/panic
> seems like an excessive response :). If anyone has any ideas or
> suggestions about what may be happening here or additional places I should
> look into to find out what is going on, I am all ears.
>
> Thanks,
> Joe
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
> --
>
> ~ Atin (atinm)
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] possible kernel panic with glusterd

2017-01-25 Thread Joseph Lorenzini
Hi all,

I have recently started exploring the DFS solution space and was doing some
basic setup and testing with gluster. I set up a pool of three nodes
following the quick start guide. That seemed to work fine.

However, shortly after that, I noticed that one of the servers in the pool
was becoming non-responsive -- as in the entire VM was completely hung and
I had to use the hypervisor to force a reboot. I sshed into the server and
started poking around. glusterd was shut off. I started it up and the
following happened:

Message from syslogd at Jan 25 05:20:47 ...
 kernel:[  288.145027] NMI watchdog: BUG: soft lockup - CPU#1 stuck for
22s! [glusterd:2374]


At which point, the VM became completely unresponsive again.

All servers are the same. They are running centos 7.3, linux kernel
3.10.0-514.2.2.el7.x86_64. The glusterfs-server is 3.8.

Since I just started investigating gluster, it is certainly possible that I
misconfigured something on that one node. However, a kernel hang/panic
seems like an excessive response :). If anyone has any ideas or
suggestions about what may be happening here or additional places I should
look into to find out what is going on, I am all ears.

Thanks,
Joe
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users