Re: [Gluster-users] Issue in installing Gluster 3.9.0

2017-02-24 Thread Niklas Hambüchen
I also found that the Ubuntu PPAs maintained by the gluster team, when
unpacked, contain a patch in the debian/patches directory that addresses
these issues (but of course it'd be better to have it fixed upstream).
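For reference, the --disable-events workaround mentioned below boils down to
roughly the following when building from source (a sketch only; run
./autogen.sh first if you are building from a git checkout rather than a
release tarball):

  ./configure --disable-events
  make
  sudo make install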

On 22/02/17 18:42, Shyam wrote:
> Optionally try patching the sources with this commit and building,
> 
> https://review.gluster.org/#/c/15737/2
> 
> Shyam
> 
> On 02/09/2017 02:26 AM, Amudhan P wrote:
>> Hi All,
>>
>> Using 'configure --disable-events' fixes the above problem.
>>
>> Thank you, Niklas for forwarding this info.
>>
>> regards
>> Amudhan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Gluster performance is paramount!

2017-02-24 Thread Ernie Dunbar

  
  
Hi everyone!
We have a Gluster array of three servers supporting a large mail server
with about 10,000 e-mail accounts in the Maildir format, which means lots
of random small-file reads and writes. Gluster's performance hasn't been
great since we switched to it from a local disk on a single server, but
we're aiming for high availability here, since simply restoring that mail
from backups (or even backing it up in the first place) takes a day or
two. Clearly, some kind of network drive is what we need, and Gluster does
the job better than every other solution we've looked at so far.

The problem comes from the fact that when I set out on this project, I'd
never done any kind of performance tuning before; we didn't need it. All
three of our Gluster servers are set up in a RAID5 array with a hot spare.
I'm starting to think that our performance woes all stem from this, and
one of my colleagues suggested that Gluster can handle data integrity just
fine on its own, so why not switch to the fastest possible layout, RAID0,
and completely give up data integrity on each individual node in the
cluster?

While this sounds good in theory, I'd like to know how well it works in
practice before subjecting our 10,000 e-mail clients to this experiment.
The other possibility is to switch our Gluster nodes to RAID1 or RAID10,
which might be faster than RAID5 while still keeping some semblance of
data integrity.

  

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Alessandro,

That will address the failover issue, but it will not address configuring
the glusterfs client to connect to the bricks over TLS. I would be happy to
be wrong. I was only able to get both by specifying that in the config
file. What's curious is why the config file doesn't handle replication the
same way as when the volume is mounted with the mount command; I'd figure
they should be the same.

Here's my config file if anyone is interested. Perhaps I don't have
something set properly?

 volume gv0-client-0
     type protocol/client
     option ping-timeout 42
     option remote-host host1
     option remote-subvolume /data/glusterfs/gv0/brick1/brick
     option transport-type socket
     option transport.address-family inet
     option send-gids true
     option transport.socket.ssl-enabled on
 end-volume

 volume gv0-client-1
     type protocol/client
     option ping-timeout 42
     option remote-host host2
     option remote-subvolume /data/glusterfs/gv0/brick1/brick
     option transport-type socket
     option transport.address-family inet
     option send-gids true
     option transport.socket.ssl-enabled on
 end-volume

 volume gv0-client-2
     type protocol/client
     option ping-timeout 42
     option remote-host host3
     option remote-subvolume /data/glusterfs/gv0/brick1/brick
     option transport-type socket
     option transport.address-family inet
     option send-gids true
     option transport.socket.ssl-enabled on
 end-volume

 volume gv0-replicate-0
     type cluster/replicate
     subvolumes gv0-client-0 gv0-client-1 gv0-client-2
 end-volume

Joe

On Fri, Feb 24, 2017 at 11:40 AM, Alessandro Briosi  wrote:

> On 24/02/2017 14:50, Joseph Lorenzini wrote:
>
> 1. I want the mount in /etc/fstab to be able to fail over to any one of the
> three servers that I have, so if one server is down, the client can still
> mount from servers 2 and 3.
>
> The backupvolfile-server mount option should do the trick, or use the
> config file.
>
> It's mentioned in the blog you linked...
>
> If you need more dynamic failover, round-robin DNS (rrdns) could be a solution.
>
>
> Alessandro
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Alessandro Briosi
On 24/02/2017 14:50, Joseph Lorenzini wrote:
> 1. I want the mount in /etc/fstab to be able to fail over to any one of
> the three servers that I have, so if one server is down, the client
> can still mount from servers 2 and 3.
The backupvolfile-server mount option should do the trick, or use the
config file.
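
For example (hostnames, volume name and mount point here are illustrative
placeholders, not taken from this thread), a mount with a backup volfile
server might look roughly like:

  mount -t glusterfs -o backupvolfile-server=server2 server1:/gv0 /mnt/gv0

or, as an /etc/fstab entry:

  server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0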

It's mentioned in the blog you linked...

If you need more dynamic failover, round-robin DNS (rrdns) could be a solution.


Alessandro
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Mohammed,

You are right that mounting it this way will do the appropriate
replication. However, there are problems with that for my use case:

1. I want the mount in /etc/fstab to be able to fail over to any one of the
three servers that I have, so if one server is down, the client can still
mount from servers 2 and 3.
2. I have configured SSL on the I/O path and I need to be able to configure
the client to use TLS when it connects to the bricks. I was only able to get
that to work with transport.socket.ssl-enabled on in the configuration
file.

In other words, I was only able to get HA during a mount and TLS to work by
using the volume config file and referencing it in /etc/fstab.

https://www.jamescoyle.net/how-to/439-mount-a-glusterfs-volume
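
For what it's worth, mounting from a client volfile via /etc/fstab might look
roughly like this (the volfile path and mount point are hypothetical):

  /etc/glusterfs/gv0-client.vol  /mnt/gv0  glusterfs  defaults,_netdev  0 0

where the volfile is the client graph shown in this thread.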

Is there a better way to handle this?

Thanks,
Joe

On Fri, Feb 24, 2017 at 6:24 AM, Mohammed Rafi K C 
wrote:

> Hi Joseph,
>
> I think there is a gap in understanding your problem. Let me try to give a
> clearer picture of this.
>
> First, a couple of clarification points:
>
> 1) The client graph is an internally generated configuration file based on
> your volume; you don't need to create or edit your own. If you want a 3-way
> replicated volume, you have to specify that when you create the volume.
>
> 2) When you mount a gluster volume, you don't need to provide any client
> graph; you just give a server hostname and the volume name, and the client
> automatically fetches the graph and starts working with it (so it does the
> replication based on the graph generated by the gluster management daemon).
>
>
> Now let me briefly describe the procedure for creating a 3-way replicated
> volume:
>
> 1) gluster volume create <volname> replica 3 <host1>:/<brick path>
> <host2>:/<brick path> <host3>:/<brick path>
>
>      Note: if you give 3 more bricks, it will create a 2-way distributed,
> 3-way replicated volume (you can increase the distribution by adding
> bricks in multiples of 3).
>
>      This step automatically creates the configuration file in
> /var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol
>
> 2) Now start the volume using gluster volume start <volname>
>
> 3) FUSE-mount the volume on the client machine using the command mount -t
> glusterfs <hostname>:/<volname> /<mount point>
>
>      This automatically fetches the configuration file and does the
> replication. You don't need to do anything else.
>
>
> Let me know if this helps.
>
>
> Regards
>
> Rafi KC
>
>
> On 02/24/2017 05:13 PM, Joseph Lorenzini wrote:
>
> Hi Mohammed,
>
> It's not a bug per se; it's a configuration and documentation issue. I
> searched the gluster documentation pretty thoroughly and I did not find
> anything that discussed 1) the client's call graph and 2) how to
> specifically configure a native glusterfs client to properly specify that
> call graph so that replication will happen across multiple bricks. If it's
> there, then there's a pretty severe organization issue in the documentation
> (I am pretty sure I ended up reading almost every page, actually).
>
> As a result, because I was new to gluster, my initial setup really
> confused me. I would follow the instructions as documented in the official
> gluster docs (execute the mount command), write data on the mount... and
> then only see it replicated to a single brick. It was only after much
> furious googling that I managed to figure out that 1) I needed a client
> configuration file which should be specified in /etc/fstab and 2) the
> configuration block mentioned above was the key.
>
> I am actually planning on submitting a PR to the documentation to cover
> all this. To be clear, I am sure this is obvious to a seasoned gluster user
> -- but it is not at all obvious to someone who is new to gluster, such as
> myself.
>
> So I am an operations engineer. I like reproducible deployments and I like
> monitoring to alert me when something is wrong. Due to human error or a bug
> in our deployment code, it's possible that something like not setting the
> client call graph properly could happen. I wanted a way to detect this
> problem so that if it does happen, it can be remediated immediately.
>
> Your suggestion sounds promising. I shall definitely look into that.
> Though that might be useful information to surface in a CLI command in a
> future gluster release, IMHO.
>
> Joe
>
>
>
> On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C <
> rkavu...@redhat.com> wrote:
>
>>
>>
>> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>>
>> Hi all,
>>
>> I have a simple replicated volume with a replica count of 3. To ensure
>> any file changes (create/delete/modify) are replicated to all bricks, I
>> have this setting in my client configuration.
>>
>>  volume gv0-replicate-0
>> type cluster/replicate
>> subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>> end-volume
>>
>> And that works as expected. My question is how one could detect if this
>> was not happening, which could pose a severe problem with data consistency
>> and replication. For example, those settings could be omitted from the
>> client config and then the client will only write data to one brick and
>> all kinds of terrible things will start happening.

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Mohammed Rafi K C
Hi Joseph,

I think there is a gap in understanding your problem. Let me try to give a
clearer picture of this.

First, a couple of clarification points:

1) The client graph is an internally generated configuration file based on
your volume; you don't need to create or edit your own. If you want a
3-way replicated volume, you have to specify that when you create the
volume.

2) When you mount a gluster volume, you don't need to provide any client
graph; you just give a server hostname and the volume name, and the client
automatically fetches the graph and starts working with it (so it does the
replication based on the graph generated by the gluster management daemon).


Now let me briefly describe the procedure for creating a 3-way
replicated volume:

1) gluster volume create <volname> replica 3 <host1>:/<brick path>
<host2>:/<brick path> <host3>:/<brick path>

     Note: if you give 3 more bricks, it will create a 2-way distributed,
3-way replicated volume (you can increase the distribution by adding
bricks in multiples of 3).

     This step automatically creates the configuration file in
/var/lib/glusterd/vols/<volname>/trusted-<volname>.tcp-fuse.vol

2) Now start the volume using gluster volume start <volname>

3) FUSE-mount the volume on the client machine using the command mount -t
glusterfs <hostname>:/<volname> /<mount point>

     This automatically fetches the configuration file and does the
replication. You don't need to do anything else.
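
To make the steps concrete, here is a rough sketch using illustrative
hostnames, brick paths and a volume name (an example of the commands above,
not taken verbatim from any setup in this thread):

  gluster volume create gv0 replica 3 \
      host1:/data/glusterfs/gv0/brick1/brick \
      host2:/data/glusterfs/gv0/brick1/brick \
      host3:/data/glusterfs/gv0/brick1/brick
  gluster volume start gv0
  mount -t glusterfs host1:/gv0 /mnt/gv0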


Let me know if this helps.


Regards

Rafi KC


On 02/24/2017 05:13 PM, Joseph Lorenzini wrote:
> Hi Mohammed,
>
> It's not a bug per se; it's a configuration and documentation issue. I
> searched the gluster documentation pretty thoroughly and I did not
> find anything that discussed 1) the client's call graph and 2) how to
> specifically configure a native glusterfs client to properly specify
> that call graph so that replication will happen across multiple
> bricks. If it's there, then there's a pretty severe organization issue
> in the documentation (I am pretty sure I ended up reading almost every
> page, actually).
>
> As a result, because I was new to gluster, my initial setup really
> confused me. I would follow the instructions as documented in official
> gluster docs (execute the mount command), write data on the
> mount... and then only see it replicated to a single brick. It was only
> after much furious googling that I managed to figure out that 1) I
> needed a client configuration file which should be specified in
> /etc/fstab and 2) the configuration block mentioned above was the key.
>
> I am actually planning on submitting a PR to the documentation to
> cover all this. To be clear, I am sure this is obvious to a seasoned
> gluster user -- but it is not at all obvious to someone who is new to
> gluster, such as myself.
>
> So I am an operations engineer. I like reproducible deployments and I
> like monitoring to alert me when something is wrong. Due to human
> error or a bug in our deployment code, it's possible that something
> like not setting the client call graph properly could happen. I wanted
> a way to detect this problem so that if it does happen, it can be
> remediated immediately.
>
> Your suggestion sounds promising. I shall definitely look into that.
> Though that might be useful information to surface in a CLI
> command in a future gluster release, IMHO.
>
> Joe
>
>
>
> On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C
> <rkavu...@redhat.com> wrote:
>
>
>
> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>> Hi all,
>>
>> I have a simple replicated volume with a replica count of 3. To
>> ensure any file changes (create/delete/modify) are replicated to
>> all bricks, I have this setting in my client configuration.
>>
>>  volume gv0-replicate-0
>> type cluster/replicate
>> subvolumes gv0-client-0 gv0-client-1 gv0-client-2
>> end-volume
>>
>> And that works as expected. My question is how one could detect
>> if this was not happening, which could pose a severe problem with
>> data consistency and replication. For example, those settings
>> could be omitted from the client config and then the client will
>> only write data to one brick and all kinds of terrible things
>> will start happening. I have not found a way with the gluster volume
>> CLI to detect when that kind of problem is occurring. For example,
>> gluster volume heal <volname> info does not detect this problem.
>>
>> Is there any programmatic way to detect when this problem is
>> occurring?
>>
>
> I couldn't understand how you would end up in this situation. There
> is only one possibility (assuming there is no bug :) ), i.e. you
> changed the client graph in a way that there is only one subvolume
> under the replica xlator.
>
> The simple way to check that: there is an xlator called meta,
> which provides metadata information through the mount point, similar
> to the Linux proc file system. So you can check the active graph
> through meta and see the number of subvolumes for the replica xlator.
>
> For example: the directory
> <mountpoint>/.meta/graphs/active/<volname>-replicate-0/subvolumes
> will have an entry for each replica client, so in your case you
> should see 3 directories.

Re: [Gluster-users] detecting replication issues

2017-02-24 Thread Joseph Lorenzini
Hi Mohammed,

It's not a bug per se; it's a configuration and documentation issue. I
searched the gluster documentation pretty thoroughly and I did not find
anything that discussed 1) the client's call graph and 2) how to
specifically configure a native glusterfs client to properly specify that
call graph so that replication will happen across multiple bricks. If it's
there, then there's a pretty severe organization issue in the documentation
(I am pretty sure I ended up reading almost every page, actually).

As a result, because I was new to gluster, my initial setup really
confused me. I would follow the instructions as documented in the official
gluster docs (execute the mount command), write data on the mount... and
then only see it replicated to a single brick. It was only after much
furious googling that I managed to figure out that 1) I needed a client
configuration file which should be specified in /etc/fstab and 2) the
configuration block mentioned above was the key.

I am actually planning on submitting a PR to the documentation to cover all
this. To be clear, I am sure this is obvious to a seasoned gluster user --
but it is not at all obvious to someone who is new to gluster, such as
myself.

So I am an operations engineer. I like reproducible deployments and I like
monitoring to alert me when something is wrong. Due to human error or a bug
in our deployment code, it's possible that something like not setting the
client call graph properly could happen. I wanted a way to detect this
problem so that if it does happen, it can be remediated immediately.

Your suggestion sounds promising. I shall definitely look into that. Though
that might be useful information to surface in a CLI command in a future
gluster release, IMHO.
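
For reference, a quick check along the lines Rafi describes below might look
like this (the mount point is hypothetical; the volume name matches the gv0
volume discussed in this thread):

  ls /mnt/gv0/.meta/graphs/active/gv0-replicate-0/subvolumes

A replica-3 client graph should show three entries there, one per brick
client.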

Joe



On Thu, Feb 23, 2017 at 11:51 PM, Mohammed Rafi K C 
wrote:

>
>
> On 02/23/2017 11:12 PM, Joseph Lorenzini wrote:
>
> Hi all,
>
> I have a simple replicated volume with a replica count of 3. To ensure any
> file changes (create/delete/modify) are replicated to all bricks, I have
> this setting in my client configuration.
>
>  volume gv0-replicate-0
> type cluster/replicate
> subvolumes gv0-client-0 gv0-client-1 gv0-client-2
> end-volume
>
> And that works as expected. My question is how one could detect if this
> was not happening, which could pose a severe problem with data consistency
> and replication. For example, those settings could be omitted from the
> client config and then the client will only write data to one brick and all
> kinds of terrible things will start happening. I have not found a way with
> the gluster volume CLI to detect when that kind of problem is occurring. For
> example, gluster volume heal <volname> info does not detect this problem.
>
> Is there any programmatic way to detect when this problem is occurring?
>
>
> I couldn't understand how you would end up in this situation. There is only
> one possibility (assuming there is no bug :) ), i.e. you changed the client
> graph in a way that there is only one subvolume under the replica xlator.
>
> The simple way to check that: there is an xlator called meta, which
> provides metadata information through the mount point, similar to the Linux
> proc file system. So you can check the active graph through meta and see the
> number of subvolumes for the replica xlator.
>
> For example: the directory <mountpoint>/.meta/graphs/active/
> <volname>-replicate-0/subvolumes will have an entry for each replica
> client, so in your case you should see 3 directories.
>
>
> Let me know if this helps.
>
> Regards
> Rafi KC
>
>
> Thanks,
> Joe
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Some advice

2017-02-24 Thread Gandalf Corvotempesta
I have 3 storage servers that I would like to use as gluster servers
for VM hosting and some Maildir hosting.

These 3 servers will be connected to a bunch of hypervisor servers.

I'll create a distributed replicated volume with SATA disks and ZFS
(to use SLOG) and another distributed replicated volume shared with
NFS-Ganesha to use as Dovecot mailbox storage.

The dovecot server is a VM running on the same gluster nodes where ganesha is.
Is this an issue? I could try to store the Dovecot VM on local disks on the
hypervisors, without using the same Gluster nodes for VM hosting, and then
access ganesha directly from inside the VM.
By using a shared NFS server, I can bring up multiple dovecot servers
even without live migration, because there is always at least another
dovecot VM ready to run on a different server accessing the same NFS
share (stored on gluster).

Any advice?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] volume start: data0: failed: Commit failed on localhost.

2017-02-24 Thread Mohammed Rafi K C
It looks like it has ended up in a split-brain kind of situation. To find
the root cause we need the logs from the first failure of volume start
or volume stop.

Or, to work around it, you can do a volume start force.
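
That is, using the volume name from this thread:

  gluster volume start data0 force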


Regards

Rafi KC


On 02/24/2017 01:36 PM, Deepak Naidu wrote:
>
> I keep on getting this error when my config.transport is set to both
> tcp,rdma. The volume doesn’t start. I get the below error during
> volume start.
>
>  
>
> To get around this, I end up deleting the volume, then configuring either
> only rdma or tcp. Maybe I am missing something; I'm just trying to get
> the volume up.
>
>  
>
> root@hostname:~# gluster volume start data0
>
> volume start: data0: failed: Commit failed on localhost. Please check
> log file for details.
>
> root@hostname:~#
>
>  
>
> root@ hostname:~# gluster volume status data0
>
> Staging failed on storageN2. Error: Volume data0 is not started
>
> root@ hostname:~
>
>  
>
> =
>
> [2017-02-24 08:00:29.923516] I [MSGID: 106499]
> [glusterd-handler.c:4349:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume data0
>
> [2017-02-24 08:00:29.926140] E [MSGID: 106153]
> [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed
> on storageN2. Error: Volume data0 is not started
>
> [2017-02-24 08:00:33.770505] I [MSGID: 106499]
> [glusterd-handler.c:4349:__glusterd_handle_status_volume]
> 0-management: Received status volume req for volume data0
>
> [2017-02-24 08:00:33.772824] E [MSGID: 106153]
> [glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed
> on storageN2. Error: Volume data0 is not started
>
> =
>
> [2017-02-24 08:01:36.305165] E [MSGID: 106537]
> [glusterd-volume-ops.c:1660:glusterd_op_stage_start_volume]
> 0-management: Volume data0 already started
>
> [2017-02-24 08:01:36.305191] W [MSGID: 106122]
> [glusterd-mgmt.c:198:gd_mgmt_v3_pre_validate_fn] 0-management: Volume
> start prevalidation failed.
>
> [2017-02-24 08:01:36.305198] E [MSGID: 106122]
> [glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre
> Validation failed for operation Start on local node
>
> [2017-02-24 08:01:36.305205] E [MSGID: 106122]
> [glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases]
> 0-management: Pre Validation Failed
>
>  
>
>  
>
> --
>
> Deepak
>
>  
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] self heal failed, on /

2017-02-24 Thread Mohammed Rafi K C


On 02/24/2017 01:25 PM, max.degr...@kpn.com wrote:
>
> Any advice on the sequence of updating? Server or client first?
>
>  
>
> I assume it’s a simple in-place update where configuration is
> preserved. Right?
>

Configuration will be preserved. I don't know the exact procedure for a
rolling upgrade (other than upgrading the servers first, one after the
other). Maybe somebody else on this mailing list can provide more info, or
a google search will find you some blogs.
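
For what it's worth, a quick way to confirm which release each server and
client is actually running (a trivial check, independent of the upgrade
procedure itself) is to run:

  glusterfs --version

on every node and client.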


Regards
Rafi KC


>  
>
>  
>
> From: Mohammed Rafi K C [mailto:rkavu...@redhat.com]
> Sent: Friday, 24 February 2017 08:49
> To: Graaf, Max de; gluster-users@gluster.org
> Subject: Re: [Gluster-users] self heal failed, on /
>
>  
>
>  
>
>  
>
> On 02/24/2017 11:47 AM, max.degr...@kpn.com
>  wrote:
>
> The version on the server of this specific mount is 3.7.11. The
> client is running version 3.4.2.
>
>
> It is always better to have everything on one version, all clients and
> all servers. In this case there is a huge gap between the versions, 3.7
> and 3.4.
>
> An additional point: the code running on 3.4 is replica v1 code and
> on 3.7 it is v2, meaning there is a huge difference in the logic of
> replication/healing. So I recommend keeping all the gluster instances
> on the same version.
>
>
> ~Rafi
>
>
>
>  
>
> There is more to it. This client is actually mounting volumes
> from another server that is running 3.4.2 as well. What’s your advice:
> update that other server to 3.7.11 (or higher) first, or start with
> the client update?
>
>  
>
> From: Mohammed Rafi K C [mailto:rkavu...@redhat.com]
> Sent: Friday, 24 February 2017 07:02
> To: Graaf, Max de; gluster-users@gluster.org
> Subject: Re: [Gluster-users] self heal failed, on /
>
>  
>
>  
>
>  
>
> On 02/23/2017 12:18 PM, max.degr...@kpn.com
>  wrote:
>
> Hi,
>
>  
>
> We have a 4 node glusterfs setup that seems to be running without
> any problems. We can’t find any problems with replication or whatever.
>
>  
>
> We also have 4 machines running the glusterfs client. On all 4
> machines we see the following error in the logs at random moments:
>
>  
>
> [2017-02-23 00:04:33.168778] I
> [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status]
> 0-aab-replicate-0:  metadata self heal  is successfully
> completed,   metadata self heal from source aab-client-0 to
> aab-client-1,  aab-client-2,  aab-client-3,  metadata - Pending
> matrix:  [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /
>
> [2017-02-23 00:09:34.431089] E
> [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status]
> 0-aab-replicate-0:  metadata self heal  failed,   on /
>
> [2017-02-23 00:14:34.948975] I
> [afr-self-heal-common.c:2869:afr_log_self_heal_completion_status]
> 0-aab-replicate-0:  metadata self heal  is successfully
> completed,   metadata self heal from source aab-client-0 to
> aab-client-1,  aab-client-2,  aab-client-3,  metadata - Pending
> matrix:  [ [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] [ 0 0 0 0 ] ], on /
>
>  
>
> The content within the glusterfs filesystems is rather static, with
> only minor changes to it. This “self heal failed” message is printed
> randomly in the logs on the glusterfs client. It’s printed even at
> moments when nothing has changed within the glusterfs filesystem.
> When it is printed, it's never on multiple servers at the same
> time. What we also don’t understand: the error indicates self
> heal failed on root “/”. In the root of this glusterfs mount there
> are only 2 folders, and no files are ever written at the root level.
>
>  
>
> Any thoughts?
>
>
> From the logs, it looks like an older version of gluster, probably
> 3.5. Please confirm your glusterfs version. That version is pretty old
> and may have reached End of Life. And this is AFR v1, whereas the latest
> stable version runs AFR v2.
>
> So I would suggest you upgrade to a later version, maybe 3.8.
>
> If you still want to go with this version, I can give it a try. Let me
> know the version, volume info and volume status. Still, I would suggest
> upgrading ;)
>
>
> Regards
> Rafi KC
>
>
>
>
>
>  
>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>  
>
>  
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] volume start: data0: failed: Commit failed on localhost.

2017-02-24 Thread Deepak Naidu
I keep on getting this error when my config.transport is set to both tcp,rdma. 
The volume doesn't start. I get the below error during volume start.

To get around this, I end up deleting the volume, then configuring either only
rdma or tcp. Maybe I am missing something; I'm just trying to get the volume up.
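
For context, one way to confirm how the transport is currently configured on
the volume (a sketch using the volume name from this report) is:

  gluster volume info data0 | grep -i transport

which should report the configured transport type (tcp, rdma, or tcp,rdma).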

root@hostname:~# gluster volume start data0
volume start: data0: failed: Commit failed on localhost. Please check log file 
for details.
root@hostname:~#

root@ hostname:~# gluster volume status data0
Staging failed on storageN2. Error: Volume data0 is not started
root@ hostname:~

=
[2017-02-24 08:00:29.923516] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume data0
[2017-02-24 08:00:29.926140] E [MSGID: 106153] 
[glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on 
storageN2. Error: Volume data0 is not started
[2017-02-24 08:00:33.770505] I [MSGID: 106499] 
[glusterd-handler.c:4349:__glusterd_handle_status_volume] 0-management: 
Received status volume req for volume data0
[2017-02-24 08:00:33.772824] E [MSGID: 106153] 
[glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on 
storageN2. Error: Volume data0 is not started
=
[2017-02-24 08:01:36.305165] E [MSGID: 106537] 
[glusterd-volume-ops.c:1660:glusterd_op_stage_start_volume] 0-management: 
Volume data0 already started
[2017-02-24 08:01:36.305191] W [MSGID: 106122] 
[glusterd-mgmt.c:198:gd_mgmt_v3_pre_validate_fn] 0-management: Volume start 
prevalidation failed.
[2017-02-24 08:01:36.305198] E [MSGID: 106122] 
[glusterd-mgmt.c:884:glusterd_mgmt_v3_pre_validate] 0-management: Pre 
Validation failed for operation Start on local node
[2017-02-24 08:01:36.305205] E [MSGID: 106122] 
[glusterd-mgmt.c:2009:glusterd_mgmt_v3_initiate_all_phases] 0-management: Pre 
Validation Failed


--
Deepak


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users