[Gluster-users] vm has paused due to unknown storage error

2014-12-10 Thread Punit Dambiwal
Hi,

Suddenly all of my VMs on one host paused with the following error :-

vm has paused due to unknown storage error

I am using GlusterFS storage with distributed replicate (replica=2). My
storage and compute are both running on the same node...

engine logs :- http://ur1.ca/j31iu
Host logs :- http://ur1.ca/j31kk (I grepped it for one failed VM)

Thanks,
Punit
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Geo-Replication Issue

2014-12-10 Thread Kotresh Hiremath Ravishankar
Hi Dave,

Two things.

1. I see that Gluster has been upgraded from 3.4.2 to 3.5.3.
   Geo-rep has undergone design changes between these releases
   to make it distributed
(https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_distributed_geo_rep.md).
   Have you followed all the upgrade steps w.r.t. geo-rep
   mentioned in the following link?
   
   http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.5

2. Is the output of the command 'gluster vol info  --xml' proper (well-formed XML)?
   Please paste the output.
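
   For reference, a quick well-formedness check of that output can be done
   with xmllint (this assumes the libxml2 tools are installed; the volume
   name "shares" is taken from elsewhere in this thread):

       # prints nothing if the XML parses cleanly, otherwise shows the first parse error
       gluster volume info shares --xml | xmllint --noout -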


Thanks and Regards,
Kotresh H R

- Original Message -
From: "David Gibbons" 
To: "Kotresh Hiremath Ravishankar" 
Cc: "gluster-users" , vno...@stonefly.com
Sent: Wednesday, December 10, 2014 6:12:00 PM
Subject: Re: [Gluster-users] Geo-Replication Issue

Symlinking gluster to /usr/bin/ seems to have resolved the path issue.
Thanks for the tip there.
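
For anyone searching the archives later, a symlink along the lines of the
sketch below does the job (the /usr/local/sbin source path is taken from the
gluster_command_dir mentioned further down; the glusterd link is only an
assumption and not strictly needed for this error):

    # make the gluster CLI visible on the default PATH that gsyncd uses
    ln -s /usr/local/sbin/gluster /usr/bin/gluster
    # optionally do the same for glusterd, if it also lives under /usr/local/sbin
    ln -s /usr/local/sbin/glusterd /usr/bin/glusterd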

Now there's a different error thrown in the geo-rep/ssh...log:

> [2014-12-10 07:32:42.609031] E
>> [syncdutils(monitor):240:log_raise_exception] : FAIL:
>
> Traceback (most recent call last):
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> 150, in main
>
> main_i()
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> 530, in main_i
>
> return monitor(*rscs)
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> 243, in monitor
>
> return Monitor().multiplex(*distribute(*resources))
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> 205, in distribute
>
> mvol = Volinfo(master.volume, master.host)
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> 22, in __init__
>
> vi = XET.fromstring(vix)
>
>   File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 963, in XML
>
> parser.feed(text)
>
>   File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 1245, in feed
>
> self._parser.Parse(data, 0)
>
> ExpatError: syntax error: line 2, column 0
>
> [2014-12-10 07:32:42.610858] I [syncdutils(monitor):192:finalize] :
>> exiting.
>
>
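
The ExpatError above points at line 2, column 0 of whatever gsyncd read back
from the gluster CLI, so a useful next step is to look at the first few lines
of that XML by hand. A minimal sketch, assuming the master volume is the
"shares" volume from the status output in this thread:

    # show the first lines of the XML that the geo-rep monitor tries to parse
    gluster volume info shares --xml 2>&1 | head -n 3

If anything other than XML appears there (for example a warning printed ahead
of the <?xml ...?> header), the parse will fail in the same way.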
I also get a bunch of these errors but have been assuming that they are
being thrown because geo-replication hasn't started successfully yet. There
is one for each brick:

> [2014-12-10 12:33:33.539737] E
>> [glusterd-geo-rep.c:2685:glusterd_gsync_read_frm_status] 0-: Unable to read
>> gsyncd status file
>
> [2014-12-10 12:33:33.539742] E
>> [glusterd-geo-rep.c:2999:glusterd_read_status_file] 0-: Unable to read the
>> statusfile for /mnt/a-3-shares-brick-4/brick brick for  shares(master),
>> gfs-a-bkp::bkpshares(slave) session
>
>
Do I have a config file error somewhere that I need to track down? This
volume *was* upgraded from 3.4.2 a few weeks ago.

Cheers,
Dave

On Wed, Dec 10, 2014 at 7:29 AM, David Gibbons 
wrote:

> Hi Kotresh,
>
> Thanks for the tip. Unfortunately that does not seem to have any effect.
> The path to the gluster binaries was already in $PATH. I did try adding the
> path to the gsyncd binary, but same result. Contents of $PATH are:
>
>>
>> /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/libexec/glusterfs/
>>
>
> It seems like perhaps one of the remote gsyncd processes cannot find the
> gluster binary, because I see the following in the
> geo-replication/shares/ssh...log. Can you point me toward how I can find
> out what is throwing this log entry?
>
>> [2014-12-10 07:20:53.886676] E
>>> [syncdutils(monitor):218:log_raise_exception] : execution of "gluster"
>>> failed with ENOENT (No such file or directory)
>>
>> [2014-12-10 07:20:53.886883] I [syncdutils(monitor):192:finalize] :
>>> exiting.
>>
>>
> I think that whatever process is trying to use the gluster command has the
> incorrect path to access it. Do you know how I could modify *that* path?
>
> I've manually tested the ssh_command and ssh_command_tar variables in the
> relevant gsyncd.conf; both connect to the slave server successfully and
> appear to execute the command they're supposed to.
>
> gluster_command_dir in gsyncd.conf is also the correct directory
> (/usr/local/sbin).
>
> In summary: I think we're on to something with setting the path, but I
> think I need to set it somewhere other than my shell.
>
> Thanks,
> Dave
>
>
> On Tue, Dec 9, 2014 at 11:52 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> If that is the case, as a workaround, try adding 'gluster' path
>> to PATH environment variable or creating symlinks to gluster,
>> glusterd binaries.
>>
>> 1. export PATH=$PATH:
>>
>> The above should work; let me know if it doesn't.
>>
>> Thanks and Regards,
>> Kotresh H R
>>
>> - Original Message -
>> From: "David Gibbons" 
>> To: "Kotresh Hiremath Ravishankar" 
>> Cc: "gluster-users" , vno...@stonefly.com
>> Sent: Tuesday, December 9, 2014 6:16:03 PM
>> Subject: Re: [Gluster-users] Geo-Replication Issue
>>
>> Hi Kotresh,
>>
>> Yes, I believe that I am. Can you tell me which 

Re: [Gluster-users] # of replica != number of bricks?

2014-12-10 Thread Jeff Darcy
> What happens if I have 3 peers for quorum. I create 3 bricks and want to have
> only two replicas in my volume.

The number of *bricks* must be a multiple of the replica count, but
quorum is based on the number of *servers* and there can be multiple
bricks per server.  Therefore, if you have servers A, B, and C with two
bricks each, you can do this:

   volume create foo replica 2 \
  A:/brick0 B:/brick0 C:/brick0 A:/brick1 B:/brick1 C:/brick1

First we'll combine this into the following two-way replica sets:

   A:/brick0 and B:/brick0
   C:/brick0 and A:/brick1
   B:/brick1 and C:/brick1

Then we'll distribute files among those three sets.  If one server fails
then we'll still have quorum (2/3) and each replica set will have at
least one surviving replica.  If two fail then neither of those things
will be true and we'll disable the volume.
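
For completeness, server-side quorum is enabled per volume; a minimal sketch
(option names as used in recent 3.x releases, volume name "foo" from the
example above):

   # stop the volume's bricks on a server that falls out of quorum
   gluster volume set foo cluster.server-quorum-type server
   # cluster-wide percentage of servers that must be up (51% here is just an example)
   gluster volume set all cluster.server-quorum-ratio 51%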

In 4.0 we plan to improve on this by splitting bricks and creating the
necessary replica sets from the pieces ourselves.  Besides making
configuration simpler, this should remove the restriction on the number
of bricks being a multiple of the replica count, and also redistribute
load more evenly during or after a failure.  4.0 is a long way off,
though, so I probably shouldn't even be talking about it.  ;)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] # of replica != number of bricks?

2014-12-10 Thread Juan José Pavlik Salles
According to the docs (Link), "The number of bricks should be a multiple of
the replica count for a distributed replicated volume." What I understand is
that creating a replica 2 volume with 3 bricks should not work, but I haven't
really tried it.

Regarding quorum, unfortunately, I've got no experience with it.

2014-12-10 18:14 GMT-03:00 Michael Schwartzkopff :

> On Wednesday, 10 December 2014 at 18:02:57, Juan José Pavlik Salles wrote:
> > Yes it's totally possible. If you are going to use replica 2, you will
> > need an even number of bricks in your volume, for instance 2, 4, 6, 8, etc.
> > Gluster will set the volume as a distributed replicated volume.
> >
> > 2014-12-10 17:54 GMT-03:00 Michael Schwartzkopff :
> > > Hi,
> > >
> > > What happens if the number of replicas in a volume is not equal to the
> > > number of bricks?
> > >
> > > Example: I have a volume with 4 bricks (on four peers) and only want to
> > > have two replicas of every file. Is that possible?
>
> What happens if I have 3 peers for quorum. I create 3 bricks and want to
> have
> only two replicas in my volume.
>
> Kind regards,
>
> Michael Schwartzkopff
>
> --
> [*] sys4 AG
>
> http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
> Franziskanerstraße 15, 81669 München
>
> Registered office: München, Amtsgericht München: HRB 199263
> Executive board: Patrick Ben Koetter, Marc Schiffbauer
> Chairman of the supervisory board: Florian Kirstein
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] # of replica != number of bricks?

2014-12-10 Thread Michael Schwartzkopff
On Wednesday, 10 December 2014 at 18:02:57, Juan José Pavlik Salles wrote:
> Yes it's totally possible. If you are going to use replica 2, you will need
> an even number of bricks in your volume, for instance 2, 4, 6, 8, etc.
> Gluster will set the volume as a distributed replicated volume.
> 
> 2014-12-10 17:54 GMT-03:00 Michael Schwartzkopff :
> > Hi,
> > 
> > What happens if the number of replicas in a volume is not equal to the
> > number of bricks?
> >
> > Example: I have a volume with 4 bricks (on four peers) and only want to
> > have two replicas of every file. Is that possible?

What happens if I have 3 peers for quorum. I create 3 bricks and want to have 
only two replicas in my volume.

Kind regards,

Michael Schwartzkopff

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
Franziskanerstraße 15, 81669 München

Registered office: München, Amtsgericht München: HRB 199263
Executive board: Patrick Ben Koetter, Marc Schiffbauer
Chairman of the supervisory board: Florian Kirstein

signature.asc
Description: This is a digitally signed message part.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] # of replica != number of bricks?

2014-12-10 Thread Juan José Pavlik Salles
Yes it's totally possible. If you are going to use replica 2, you will need
an even number of bricks in your volume, for instance 2, 4, 6, 8, etc.
Gluster will set the volume as a distributed replicated volume.
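
As a quick sketch (the hostnames here are made up), a replica 2 volume built
from four bricks comes up as a 2 x 2 distributed-replicated volume, with
consecutive brick pairs forming the replica sets:

gluster volume create testvol replica 2 \
  server1:/bricks/b1 server2:/bricks/b1 \
  server1:/bricks/b2 server2:/bricks/b2
# "gluster volume info testvol" should then report Type: Distributed-Replicate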

2014-12-10 17:54 GMT-03:00 Michael Schwartzkopff :

> Hi,
>
> What happens if the number of replicas in a volume is not equal to the
> number of bricks?
>
> Example: I have a volume with 4 bricks (on four peers) and only want to
> have two replicas of every file. Is that possible?
>
> Kind regards,
>
> Michael Schwartzkopff
>
> --
> [*] sys4 AG
>
> http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
> Franziskanerstraße 15, 81669 München
>
> Registered office: München, Amtsgericht München: HRB 199263
> Executive board: Patrick Ben Koetter, Marc Schiffbauer
> Chairman of the supervisory board: Florian Kirstein
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
>



-- 
Pavlik Salles Juan José
Blog - http://viviendolared.blogspot.com
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] # of replica != number of bricks?

2014-12-10 Thread Michael Schwartzkopff
Hi,

What happens if the number of replicas in a volume is not equal to the number
of bricks?

Example: I have a volume with 4 bricks (on four peers) and only want to have
two replicas of every file. Is that possible?

Kind regards,

Michael Schwartzkopff

-- 
[*] sys4 AG

http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044
Franziskanerstraße 15, 81669 München

Registered office: München, Amtsgericht München: HRB 199263
Executive board: Patrick Ben Koetter, Marc Schiffbauer
Chairman of the supervisory board: Florian Kirstein

signature.asc
Description: This is a digitally signed message part.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Add space to a volume

2014-12-10 Thread Paul Robert Marino
Of course you can always add space to the volume; that works well. The reason
you may want to consider adding bricks instead is that if you enable striping
in Gluster along with the mirroring, you will probably see better performance
on your reads and writes.

-- Sent from my HP Pre3

On Dec 10, 2014 11:21 AM, wodel youchi  wrote:

> Hi, I am learning GlusterFS 3.6.
> I did some testing, using the Red Hat documentation. They use thin LVM with
> XFS on top of the brick.
> I have two questions about adding more space to a volume.
> Adding space can be done by adding a new brick (distributed configuration),
> but it could also be done by adding a new pv to the vg in use and then
> lvextend and xfs_growfs. I tested the two methods and they work perfectly
> without any manipulation on the client.
> 1- What is the benefit of using thin LVM rather than plain LVM, besides
> snapshots? My target use is VM storage.
> 2- Which is preferable when adding space to a volume, add-brick or lvextend?
> Thanks in advance.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster mount problem

2014-12-10 Thread Mitja Mihelič

Per your suggestion I tried this:
env -i LC_NUMERIC="en_US.UTF-8" mount -t glusterfs -o transport=tcp 
GLUSTER-1.NAME.SI://wp-vol-1 /mnt/volume-1


And it works.

Mounting of volumes via fstab works also.
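
For reference, the matching fstab entry looks something like the line below
(the mount options shown are just the common ones, not necessarily the exact
set used here):

GLUSTER-1.NAME.SI:/wp-vol-1  /mnt/volume-1  glusterfs  defaults,transport=tcp,_netdev  0 0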

Regards, Mitja

--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78

On 10. 12. 2014 17:17, Jan-Hendrik Zab wrote:

On 10/12/14 16:57 +0100, Mitja Mihelič wrote:

Hi!

I am having trouble mounting a Gluster volume. After issuing the mount
command nothing happens. It is the same for CentOS6 and CentOS7 clients.
Both are updated to the latest package versions. I am using the official
Gluster repository in both clients:
http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
CentOS6 packages installed:
glusterfs-3.6.1-1.el6.x86_64
glusterfs-api-3.6.1-1.el6.x86_64
glusterfs-fuse-3.6.1-1.el6.x86_64
glusterfs-libs-3.6.1-1.el6.x86_64

CentOS7 packages installed:
glusterfs-3.6.1-1.el7.x86_64
glusterfs-api-3.6.1-1.el7.x86_64
glusterfs-fuse-3.6.1-1.el7.x86_64
glusterfs-libs-3.6.1-1.el7.x86_64

I monitored traffic on the Gluster node and on the client node and there was
no communication between them. Telnet to the Cluster node works fine. Both
machines are in the same network and their firewalls and SELinux are turned
off.

I looked into it with strace, and here are the results.
For CentOS6: http://pastebin.com/vcqTh2Hi
For CentOS7: http://pastebin.com/s7MuTbXb

Hey,
please set your LC_NUMERIC locale to en_US.UTF-8 and try again. You
might be hitting a known bug.

-jhz


___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Add space to a volume

2014-12-10 Thread wodel youchi
Hi, I am learning GlusterFS 3.6.
I did some testing, using the Red Hat documentation. They use thin LVM with XFS on
top of the brick.
I have two questions about adding more space to a volume.
Adding space can be done by adding a new brick (distributed configuration), but 
it could also be done by adding a new pv to the vg in use and then lvextend and 
xfs_growfs. I tested the two methods and they work perfectly without any 
manipulation on the client.
1- What is the benefit of using thin LVM rather than plain LVM, besides
snapshots? My target use is VM storage.
2- Which is preferable when adding space to a volume, add-brick or lvextend?
Thanks in advance.
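
As a rough sketch of the two approaches (volume, host, device and path names
below are made up; plain LVM assumed rather than thin LVM):

# Option 1: grow the existing brick with LVM + XFS
pvcreate /dev/sdX                         # prepare the new disk
vgextend vg_bricks /dev/sdX               # grow the volume group behind the brick
lvextend -L +100G /dev/vg_bricks/brick1   # grow the logical volume
xfs_growfs /export/brick1                 # grow XFS online; the volume sees the space immediately

# Option 2: add another brick and spread data onto it
gluster volume add-brick myvol server3:/export/brick1
gluster volume rebalance myvol start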

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster mount problem

2014-12-10 Thread Jan-Hendrik Zab
On 10/12/14 16:57 +0100, Mitja Mihelič wrote:
> Hi!
> 
> I am having trouble mounting a Gluster volume. After issuing the mount
> command nothing happens. It is the same for CentOS6 and CentOS7 clients.
> Both are updated to the latest package versions. I am using the official
> Gluster repository in both clients:
> http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
> CentOS6 packages installed:
> glusterfs-3.6.1-1.el6.x86_64
> glusterfs-api-3.6.1-1.el6.x86_64
> glusterfs-fuse-3.6.1-1.el6.x86_64
> glusterfs-libs-3.6.1-1.el6.x86_64
> 
> CentOS7 packages installed:
> glusterfs-3.6.1-1.el7.x86_64
> glusterfs-api-3.6.1-1.el7.x86_64
> glusterfs-fuse-3.6.1-1.el7.x86_64
> glusterfs-libs-3.6.1-1.el7.x86_64
> 
> I monitored traffic on the Gluster node and on the client node and there was
> no communication between them. Telnet to the Cluster node works fine. Both
> machines are in the same network and their firewalls and SELinux are turned
> off.
> 
> I looked into it with strace, and here are the results.
> For CentOS6: http://pastebin.com/vcqTh2Hi
> For CentOS7: http://pastebin.com/s7MuTbXb

Hey,
please set your LC_NUMERIC locale to en_US.UTF-8 and try again. You
might be hitting a known bug.

-jhz
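
One way to make that setting stick for every login shell, rather than only
the current session (the file name below is arbitrary):

echo 'export LC_NUMERIC="en_US.UTF-8"' > /etc/profile.d/lc_numeric.sh

For a one-off test, the variable can also be set just for the mount command
itself, e.g. LC_NUMERIC="en_US.UTF-8" mount -t glusterfs ...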


pgpDOXVmvANJk.pgp
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster mount problem

2014-12-10 Thread Mitja Mihelič

Hi!

I am having trouble mounting a Gluster volume. After issuing the mount 
command nothing happens. It is the same for CentOS6 and CentOS7 clients. 
Both are updated to the latest package versions. I am using the official 
Gluster repository in both clients:

http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
CentOS6 packages installed:
glusterfs-3.6.1-1.el6.x86_64
glusterfs-api-3.6.1-1.el6.x86_64
glusterfs-fuse-3.6.1-1.el6.x86_64
glusterfs-libs-3.6.1-1.el6.x86_64

CentOS7 packages installed:
glusterfs-3.6.1-1.el7.x86_64
glusterfs-api-3.6.1-1.el7.x86_64
glusterfs-fuse-3.6.1-1.el7.x86_64
glusterfs-libs-3.6.1-1.el7.x86_64

I monitored traffic on the Gluster node and on the client node and there 
was no communication between them. Telnet to the Cluster node works 
fine. Both machines are in the same network and their firewalls and 
SELinux are turned off.


I looked into it with strace, and here are the results.
For CentOS6: http://pastebin.com/vcqTh2Hi
For CentOS7: http://pastebin.com/s7MuTbXb

What could be the problem?

Kind regards,
Mitja

--
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Minutes and summary of todays weekly Gluster Community meeting

2014-12-10 Thread Niels de Vos
On Wed, Dec 10, 2014 at 12:09:49PM +0100, Niels de Vos wrote:
> 
> Hi all,
> 
> In about one hour we will have the regular weekly Gluster Community
> meeting.
> 
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
> - date: every Wednesday
> - time: 12:00 UTC, 13:00 CET (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-community-meetings
> 
> Currently the following items are listed:
> * Roll Call
> * Status of last weeks action items
> * GlusterFS 3.6
> * GlusterFS 3.5
> * GlusterFS 3.4
> * GlusterFS Next
> * Open Floor
> 
> The last topic has space for additions. If you have a suitable topic to
> discuss, please add it to the agenda.


Meeting ended Wed Dec 10 13:00:42 2014 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot .
Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2014-12-10/gluster-meeting.2014-12-10-12.00.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2014-12-10/gluster-meeting.2014-12-10-12.00.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2014-12-10/gluster-meeting.2014-12-10-12.00.log.html

Meeting summary
---
* Roll Call  (ndevos, 12:00:43)

* Last week action items  (ndevos, 12:03:32)

* davemc to run a "what should it say?" through users ML  (ndevos,
  12:03:51)
  * HELP: Need to find out what the "what should it say?" action item is
about  (ndevos, 12:05:29)

* Announcement about GlusterFS releases and its EOL  (ndevos, 12:05:36)
  * HELP: Need a volunteer to document the EOL process/versions in a
blogpost and email  (ndevos, 12:08:56)

* hagarth to update maintainers file  (ndevos, 12:09:15)
  * ACTION: hchiramm sends a patch that adds the new
maintainers/components to the MAINTAINERS file  (ndevos, 12:14:20)
  * ACTION: hagarth needs to send a cleanup patch for the MAINTAINERS
file to remove inactive maintainers (add an alumni section?)
(ndevos, 12:15:06)

* JustinClift to create initial GlusterFS Release Checklist page on the
  wiki  (ndevos, 12:15:46)
  * ACTION: JustinClift to create initial GlusterFS Release Checklist
page on the wiki  (ndevos, 12:16:48)

* davemc to add pointer to wiki from site  (ndevos, 12:17:35)
  * ACTION: JustinClift fix the "wiki" link on gluster.org to redirect
to the wiki and not to the blog  (ndevos, 12:19:21)
  * HELP: Need a volunteer to add a link to the blog on the top of
gluster.org  (ndevos, 12:20:35)
  * ACTION: hchiramm will try to fix the duplicate syndication of posts
from blog.nixpanic.net  (ndevos, 12:22:26)

* JustinClift to respond to Soumya Deb on gluster-infra about website
  (ndevos, 12:22:51)

* Discuss stability issues and directions  (ndevos, 12:27:20)
  * ACTION: Debloper to write  a blog post about complete revamp
(hchiramm, 12:28:48)
  * AGREED: we're not aware of stability issues, dropping it from the
agenda unless someone (re)adds more details  (ndevos, 12:30:53)

* hagarth to set up small file performance meeting  (ndevos, 12:31:06)
  * ACTION: jdarcy schedules a "small file performance" meeting
(ndevos, 12:31:48)
  * ACTION: davemc to chat semiosis on ubuntu release  (ndevos,
12:32:01)
  * ACTION: kkeithley and partner will keep an eye on getting debian and
ubuntu packages updated  (ndevos, 12:37:05)

* follow up on Rackspace regressions  (ndevos, 12:37:23)
  * ACTION: atinmu continues looking into the mgmt_v3 regression test
issue  (ndevos, 12:40:34)
  * ACTION: atinmu will look for someone that can fix the spurious
fop-sanity test  (ndevos, 12:41:01)
  * ACTION: atinmu will contact JustinClift to update the regression.sh
script and capture /var/log/messages too  (ndevos, 12:42:40)

* GlusterFS 3.6  (ndevos, 12:43:05)
  * ACTION: raghu` creates a 3.6.2 beta today (10 december) or tomorrow,
release follows later this week or early next week  (ndevos,
12:49:08)

* GlusterFS 3.5  (ndevos, 12:51:00)

* GlusterFS 3.4  (ndevos, 12:52:01)

* GlusterFS.next (is that the correct name?)  (ndevos, 12:53:46)

* Other Agenda Items  (ndevos, 12:56:17)
  * LINK:

http://www.gluster.org/community/documentation/index.php/Features/Feature_Smallfile_Perf
(ndevos, 12:57:29)
  * LINK:
http://www.gluster.org/community/documentation/index.php/Performance_Testing
(hchiramm, 12:58:13)
  * ACTION: JustinClift and misc should post an update or plan about
upgrading Gerrit  (ndevos, 12:59:29)

Meeting ended at 13:00:42 UTC.




Action Items

* hchiramm sends a patch that adds the new maintainers/components to the
  MAINTAINERS file
* hagarth needs to send a cleanup patch for the MAINTAINERS file to
  remove inactive maintainers (add an alumni section?)
* JustinClift to create initial GlusterFS Release Checklist page on the
  wiki
* JustinClift fix the "wiki" link on gluster.org to redirect to the wiki
  and not to the blog
* hchiramm will try to fix the duplicate syndication of posts from
  blog.nixpanic.net
* Debloper

Re: [Gluster-users] Geo-Replication Issue

2014-12-10 Thread David Gibbons
Symlinking gluster to /usr/bin/ seems to have resolved the path issue.
Thanks for the tip there.

Now there's a different error thrown in the geo-rep/ssh...log:

> [2014-12-10 07:32:42.609031] E
>> [syncdutils(monitor):240:log_raise_exception] : FAIL:
>
> Traceback (most recent call last):
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> 150, in main
>
> main_i()
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line
>> 530, in main_i
>
> return monitor(*rscs)
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> 243, in monitor
>
> return Monitor().multiplex(*distribute(*resources))
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> 205, in distribute
>
> mvol = Volinfo(master.volume, master.host)
>
>   File "/usr/local/libexec/glusterfs/python/syncdaemon/monitor.py", line
>> 22, in __init__
>
> vi = XET.fromstring(vix)
>
>   File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 963, in XML
>
> parser.feed(text)
>
>   File "/usr/lib64/python2.6/xml/etree/ElementTree.py", line 1245, in feed
>
> self._parser.Parse(data, 0)
>
> ExpatError: syntax error: line 2, column 0
>
> [2014-12-10 07:32:42.610858] I [syncdutils(monitor):192:finalize] :
>> exiting.
>
>
I also get a bunch of these errors but have been assuming that they are
being thrown because geo-replication hasn't started successfully yet. There
is one for each brick:

> [2014-12-10 12:33:33.539737] E
>> [glusterd-geo-rep.c:2685:glusterd_gsync_read_frm_status] 0-: Unable to read
>> gsyncd status file
>
> [2014-12-10 12:33:33.539742] E
>> [glusterd-geo-rep.c:2999:glusterd_read_status_file] 0-: Unable to read the
>> statusfile for /mnt/a-3-shares-brick-4/brick brick for  shares(master),
>> gfs-a-bkp::bkpshares(slave) session
>
>
Do I have a config file error somewhere that I need to track down? This
volume *was* upgraded from 3.4.2 a few weeks ago.

Cheers,
Dave

On Wed, Dec 10, 2014 at 7:29 AM, David Gibbons 
wrote:

> Hi Kotresh,
>
> Thanks for the tip. Unfortunately that does not seem to have any effect.
> The path to the gluster binaries was already in $PATH. I did try adding the
> path to the gsyncd binary, but same result. Contents of $PATH are:
>
>>
>> /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/libexec/glusterfs/
>>
>
> It seems like perhaps one of the remote gsyncd processes cannot find the
> gluster binary, because I see the following in the
> geo-replication/shares/ssh...log. Can you point me toward how I can find
> out what is throwing this log entry?
>
>> [2014-12-10 07:20:53.886676] E
>>> [syncdutils(monitor):218:log_raise_exception] : execution of "gluster"
>>> failed with ENOENT (No such file or directory)
>>
>> [2014-12-10 07:20:53.886883] I [syncdutils(monitor):192:finalize] :
>>> exiting.
>>
>>
> I think that whatever process is trying to use the gluster command has the
> incorrect path to access it. Do you know how I could modify *that* path?
>
> I've manually tested the ssh_command and ssh_command_tar variables in the
> relevant gsyncd.conf; both connect to the slave server successfully and
> appear to execute the command they're supposed to.
>
> gluster_command_dir in gsyncd.conf is also the correct directory
> (/usr/local/sbin).
>
> In summary: I think we're on to something with setting the path, but I
> think I need to set it somewhere other than my shell.
>
> Thanks,
> Dave
>
>
> On Tue, Dec 9, 2014 at 11:52 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> If that is the case, as a workaround, try adding 'gluster' path
>> to PATH environment variable or creating symlinks to gluster,
>> glusterd binaries.
>>
>> 1. export PATH=$PATH:
>>
>> The above should work; let me know if it doesn't.
>>
>> Thanks and Regards,
>> Kotresh H R
>>
>> - Original Message -
>> From: "David Gibbons" 
>> To: "Kotresh Hiremath Ravishankar" 
>> Cc: "gluster-users" , vno...@stonefly.com
>> Sent: Tuesday, December 9, 2014 6:16:03 PM
>> Subject: Re: [Gluster-users] Geo-Replication Issue
>>
>> Hi Kotresh,
>>
>> Yes, I believe that I am. Can you tell me which symlinks are missing/cause
>> geo-replication to fail to start? I can create them manually.
>>
>> Thank you,
>> Dave
>>
>> On Tue, Dec 9, 2014 at 3:54 AM, Kotresh Hiremath Ravishankar <
>> khire...@redhat.com> wrote:
>>
>> > Hi Dave,
>> >
>> > Are you hitting the below bug and so not able to sync symlinks ?
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1105283
>> >
>> > Does geo-rep status say "Not Started" ?
>> >
>> > Thanks and Regards,
>> > Kotresh H R
>> >
>> > - Original Message -
>> > From: "David Gibbons" 
>> > To: "gluster-users" 
>> > Cc: vno...@stonefly.com
>> > Sent: Monday, December 8, 2014 7:03:31 PM
>> > Subject: Re: [Gluster-users] Geo-Replication Issue
>> >
>> > Apologies for sending so many messages about this! I think I may be
>> > running into this bug:
>> > https://bugzilla.redha

Re: [Gluster-users] Geo-Replication Issue

2014-12-10 Thread David Gibbons
Hi Kotresh,

Thanks for the tip. Unfortunately that does not seem to have any effect.
The path to the gluster binaries was already in $PATH. I did try adding the
path to the gsyncd binary, but same result. Contents of $PATH are:

>
> /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/libexec/glusterfs/
>

It seems like perhaps one of the remote gsyncd processes cannot find the
gluster binary, because I see the following in the
geo-replication/shares/ssh...log. Can you point me toward how I can find
out what is throwing this log entry?

> [2014-12-10 07:20:53.886676] E
>> [syncdutils(monitor):218:log_raise_exception] : execution of "gluster"
>> failed with ENOENT (No such file or directory)
>
> [2014-12-10 07:20:53.886883] I [syncdutils(monitor):192:finalize] :
>> exiting.
>
>
I think that whatever process is trying to use the gluster command has the
incorrect path to access it. Do you know how I could modify *that* path?

I've manually tested the ssh_command and ssh_command_tar variables in the
relevant gsyncd.conf; both connect to the slave server successfully and
appear to execute the command they're supposed to.

gluster_command_dir in gsyncd.conf is also the correct directory
(/usr/local/sbin).

In summary: I think we're on to something with setting the path, but I
think I need to set it somewhere other than my shell.

Thanks,
Dave


On Tue, Dec 9, 2014 at 11:52 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> If that is the case, as a workaround, try adding 'gluster' path
> to PATH environment variable or creating symlinks to gluster,
> glusterd binaries.
>
> 1. export PATH=$PATH:
>
> The above should work; let me know if it doesn't.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> From: "David Gibbons" 
> To: "Kotresh Hiremath Ravishankar" 
> Cc: "gluster-users" , vno...@stonefly.com
> Sent: Tuesday, December 9, 2014 6:16:03 PM
> Subject: Re: [Gluster-users] Geo-Replication Issue
>
> Hi Kotresh,
>
> Yes, I believe that I am. Can you tell me which symlinks are missing/cause
> geo-replication to fail to start? I can create them manually.
>
> Thank you,
> Dave
>
> On Tue, Dec 9, 2014 at 3:54 AM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
> > Hi Dave,
> >
> > Are you hitting the below bug and so not able to sync symlinks ?
> > https://bugzilla.redhat.com/show_bug.cgi?id=1105283
> >
> > Does geo-rep status say "Not Started" ?
> >
> > Thanks and Regards,
> > Kotresh H R
> >
> > - Original Message -
> > From: "David Gibbons" 
> > To: "gluster-users" 
> > Cc: vno...@stonefly.com
> > Sent: Monday, December 8, 2014 7:03:31 PM
> > Subject: Re: [Gluster-users] Geo-Replication Issue
> >
> > Apologies for sending so many messages about this! I think I may be
> > running into this bug:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1105283
> >
> > Would someone be so kind as to let me know which symlinks are missing
> when
> > this bug manifests, so that I can create them?
> >
> > Thank you,
> > Dave
> >
> >
> > On Sun, Dec 7, 2014 at 11:01 AM, David Gibbons <
> david.c.gibb...@gmail.com
> > > wrote:
> >
> >
> >
> > Ok,
> >
> > I was able to get geo-replication configured by changing
> > /usr/local/libexec/glusterfs/gverify.sh to use ssh to access the local
> > machine, instead of accessing bash -c directly. I then found that the
> hook
> > script was missing for geo-replication, so I copied that over manually. I
> > now have what appears to be a "configured" geo-rep setup:
> >
> >
> >
> >
> > # gluster volume geo-replication shares gfs-a-bkp::bkpshares status
> >
> > MASTER NODE    MASTER VOL    MASTER BRICK                     SLAVE                   STATUS         CHECKPOINT STATUS    CRAWL STATUS
> > -----------------------------------------------------------------------------------------------------------------------------------
> > gfs-a-3        shares        /mnt/a-3-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-3        shares        /mnt/a-3-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-3        shares        /mnt/a-3-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-3        shares        /mnt/a-3-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-2        shares        /mnt/a-2-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-2        shares        /mnt/a-2-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-2        shares        /mnt/a-2-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-2        shares        /mnt/a-2-shares-brick-4/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-4        shares        /mnt/a-4-shares-brick-1/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-4        shares        /mnt/a-4-shares-brick-2/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-4        shares        /mnt/a-4-shares-brick-3/brick    gfs-a-bkp::bkpshares    Not Started    N/A                  N/A
> > gfs-a-4        shares        /mnt/a-4-shares-b

[Gluster-users] REMINDER: Weekly Gluster Community meeting today at 12:00 UTC

2014-12-10 Thread Niels de Vos

Hi all,

In about one hour we will have the regular weekly Gluster Community
meeting.

Meeting details:
- location: #gluster-meeting on Freenode IRC
- date: every Wednesday
- time: 12:00 UTC, 13:00 CET (in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-community-meetings

Currently the following items are listed:
* Roll Call
* Status of last weeks action items
* GlusterFS 3.6
* GlusterFS 3.5
* GlusterFS 3.4
* GlusterFS Next
* Open Floor

The last topic has space for additions. If you have a suitable topic to
discuss, please add it to the agenda.

Thanks,
Niels


pgpcrBVNJvxM7.pgp
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Proposal for more sub-maintainers

2014-12-10 Thread Humble Chirammal


- Original Message -
| From: "Vijay Bellur" 
| To: "gluster-users Discussion List" , "Gluster 
Devel" , "Humble
| Chirammal" 
| Sent: Tuesday, December 9, 2014 3:21:47 PM
| Subject: Re: [Gluster-users] Proposal for more sub-maintainers
| 
| On 11/28/2014 01:08 PM, Vijay Bellur wrote:
| > Hi All,
| >
| > To supplement our ongoing effort of better patch management, I am
| > proposing the addition of more sub-maintainers for various components.
| > The rationale behind this proposal & the responsibilities of maintainers
| > continue to be the same as discussed in these lists a while ago [1].
| > Here is the proposed list:
| >
| > Build - Kaleb Keithley & Niels de Vos
| >
| > DHT   - Raghavendra Gowdappa & Shyam Ranganathan
| >
| > docs  - Humble Chirammal & Lalatendu Mohanty
| >
| > gfapi - Niels de Vos & Shyam Ranganathan
| >
| > index & io-threads - Pranith Karampuri
| >
| > posix - Pranith Karampuri & Raghavendra Bhat
| >
| > We intend to update Gerrit with this list by 8th of December. Please let
| > us know if you have objections, concerns or feedback on this process by
| > then.
| 
| 
| We have not seen any objections to this list. Kudos to the new
| maintainers and good luck for maintaining these components!
| 
| Humble - can you please update gerrit to reflect this?
| 

Done. Please cross-check.  

--Humble
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users