[Gluster-users] SLES 11 geo-replication packages for gluster 3.5 or 3.6

2015-05-19 Thread Óscar Menéndez Sánchez
Hi,

I'd like to run some tests on the new geo-replication mechanism in gluster 3.5
onwards, but I find no RPM generated for SLES 11. These packages
(glusterfs-geo-replication-.rpm) are available for other distributions.

Does anyone know the reason for not generating these packages?

Thanks in advance,
Oscar
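
In the meantime, one possible workaround may be to build the packages from the
source tarball on a SLES 11 build host; the release tarballs should ship an RPM
spec, so something like the following may produce the geo-replication
subpackage, assuming the build dependencies (notably python and rsync) are
installable (the version number below is illustrative):

  # a sketch, not a tested procedure; substitute the tarball you downloaded
  rpmbuild -ta glusterfs-3.6.3.tar.gz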



[Gluster-users] Packaging error

2015-05-19 Thread Lars Hanke
I just installed the current Debian packages from gluster and got the 
following error:


Unpacking glusterfs-server (3.7.0-2) over (3.6.3-1) ...
dpkg: error processing archive
/var/cache/apt/archives/glusterfs-server_3.7.0-2_amd64.deb (--unpack):
 trying to overwrite '/usr/lib/ocf/resource.d/heartbeat/ganesha_nfsd',
which is also in package glusterfs-common 3.7.0-2
dpkg-deb: error: subprocess paste was killed by signal
(Broken pipe)


So obviously the same file has been packed into two different packages.

Furthermore - maybe as a consequence of the failed installation - gluster 
no longer works correctly:


gluster volume info
Segmentation fault (= SIGSEGV)

which should of course never happen.
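
For the file conflict itself, a common dpkg-level workaround while the
packaging bug gets fixed is to force the overwrite; a sketch (use with care,
and only on the assumption - worth verifying first - that the duplicate file
is identical in both packages):

  apt-get -o Dpkg::Options::="--force-overwrite" install glusterfs-server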

[Gluster-users] Continuous Error "readv failed (No data available)"

2015-05-19 Thread Jason Riffel
We have a two node Gluster file system which seems to be functioning
properly, but one node is continuously logging the following error:

[2015-05-19 23:00:33.946887] W [socket.c:514:__socket_rwv]
0-transient-client-0: readv failed (No data available)
[2015-05-19 23:00:33.946970] I [client.c:2098:client_rpc_notify]
0-transient-client-0: disconnected

The other node seems happy.  We have tried the standard troubleshooting
steps, from verifying peer status and volume info all the way down to
restarting the node.  Everything appears to be working fine except this
rolling error on one of the two nodes.  We are using Gluster version 3.4.3.

Has anyone else experienced this and should I be concerned?

Thanks in advance,

Jason Riffel
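
Since the disconnects recur on only one node, checking the brick process and
its TCP port on that node may narrow things down; a sketch (the volume name
"transient" is taken from the 0-transient-client-0 log prefix, and the brick
port is whatever the status command reports):

  gluster volume status transient
  # then, using the Port column from the output:
  netstat -tn | grep <brick-port>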

Re: [Gluster-users] Can a gluster server be an NFS client ?

2015-05-19 Thread Prasun Gera
I'll have to understand the effects of Lock=False better. From gluster's
standpoint, though, this appears to be a limitation/bug. Autofs continues to
work fine if it is started after gluster's NFS server; i.e., autofs can adapt
to gluster's NFS server, but not the other way around.

On Tue, May 19, 2015 at 1:07 PM, Jason Brooks  wrote:

>
>
> - Original Message -
> > From: "Prasun Gera" 
> > To: "Jason Brooks" 
> > Cc: gluster-users@gluster.org
> > Sent: Monday, May 18, 2015 3:26:02 PM
> > Subject: Re: [Gluster-users] Can a gluster server be an NFS client ?
> >
> > Thanks. Can you tell me what this achieves, and what the side-effects, if
> > any, are ? Btw, the NFS mounts on the gluster server are unrelated to
> what
> > the gluster server itself is exporting. They are regular nfs mounts from
> a
> > different NFS server.
>
> Your nfs client and the gluster nfs server are both vying for portmap,
> turning off locking for the client resolves this. The side effects will
> depend on your environment -- in my converged ovirt+gluster setup,
> it isn't causing an issue, but I'm looking forward to dropping this
> workaround when ovirt gets support for hosting its engine vm from
> native gluster.
>
> For you, this may or may not be a workable workaround.
>
> Jason
>
> >
> > On Mon, May 18, 2015 at 5:35 PM, Jason Brooks 
> wrote:
> >
> > >
> > >
> > > - Original Message -
> > > > From: "Prasun Gera" 
> > > > To: gluster-users@gluster.org
> > > > Sent: Monday, May 18, 2015 1:47:32 PM
> > > > Subject: [Gluster-users] Can a gluster server be an NFS client ?
> > > >
> > > > I am seeing some erratic behavior w.r.t. the NFS service on the
> gluster
> > > > servers (RHS 3.0). The nfs service fails to start occasionally and
> > > randomly
> > > > with
> > > >
> > > > Could not register with portmap 100021 4 38468
> > > > Program  NLM4 registration failed
> > > >
> > >
> > > I've encountered this before -- I had to disable file locking,
> > > adding Lock=False to /etc/nfsmount.conf
> > >
> > > > This appears to be related to
> > > >
> http://www.gluster.org/pipermail/gluster-users/2014-October/019215.html
> > > ,
> > > > although I'm not sure what the resolution is.
> > > >
> > > > The gluster servers use autofs to mount user home directories and
> other
> > > > sundry directories. I could verify that stopping autofs and then
> starting
> > > > the gluster volume seems to solve the problem. Starting autofs after
> > > > gluster seems to work fine too. What's the right way to handle this ?
> > > >

[Gluster-users] 3rd party projects associated with Gluster

2015-05-19 Thread Joe Julian
I need contacts for 3rd party projects. If you are involved, or know 
somebody involved, with a project that interfaces with glusterfs: ovirt, 
qemu, samba, etc., please send me an email with the relevant information.


Thanks.


Re: [Gluster-users] epel.repo is empty

2015-05-19 Thread Vijay Bellur

On 05/19/2015 02:07 PM, Kingsley wrote:
> On Mon, 2015-05-18 at 08:31 -0700, Joe Julian wrote:
>> Might be a good idea *not* to change the symlink until it can actually
>> point to something.

The symlink did point to the release tarball, but the repos were not
populated when the switch happened.

> I was just thinking the same thing ...

Noted. This happened by mistake and we will ensure that this does not
happen again.


Thanks,
Vijay


Re: [Gluster-users] Can a gluster server be an NFS client ?

2015-05-19 Thread Jason Brooks


- Original Message -
> From: "Prasun Gera" 
> To: "Jason Brooks" 
> Cc: gluster-users@gluster.org
> Sent: Monday, May 18, 2015 3:26:02 PM
> Subject: Re: [Gluster-users] Can a gluster server be an NFS client ?
> 
> Thanks. Can you tell me what this achieves, and what the side-effects, if
> any, are ? Btw, the NFS mounts on the gluster server are unrelated to what
> the gluster server itself is exporting. They are regular nfs mounts from a
> different NFS server.

Your nfs client and the gluster nfs server are both vying for portmap,
turning off locking for the client resolves this. The side effects will
depend on your environment -- in my converged ovirt+gluster setup, 
it isn't causing an issue, but I'm looking forward to dropping this 
workaround when ovirt gets support for hosting its engine vm from
native gluster.

For you, this may or may not be a workable workaround.

Jason
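
For reference, the workaround is a one-line change; a sketch of the relevant
stanza in /etc/nfsmount.conf (section and option names per nfsmount.conf(5);
remount the NFS shares afterwards for it to take effect):

  [ NFSMount_Global_Options ]
  Lock=False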

> 
> On Mon, May 18, 2015 at 5:35 PM, Jason Brooks  wrote:
> 
> >
> >
> > - Original Message -
> > > From: "Prasun Gera" 
> > > To: gluster-users@gluster.org
> > > Sent: Monday, May 18, 2015 1:47:32 PM
> > > Subject: [Gluster-users] Can a gluster server be an NFS client ?
> > >
> > > I am seeing some erratic behavior w.r.t. the NFS service on the gluster
> > > servers (RHS 3.0). The nfs service fails to start occasionally and
> > randomly
> > > with
> > >
> > > Could not register with portmap 100021 4 38468
> > > Program  NLM4 registration failed
> > >
> >
> > I've encountered this before -- I had to disable file locking,
> > adding Lock=False to /etc/nfsmount.conf
> >
> > > This appears to be related to
> > > http://www.gluster.org/pipermail/gluster-users/2014-October/019215.html
> > ,
> > > although I'm not sure what the resolution is.
> > >
> > > The gluster servers use autofs to mount user home directories and other
> > > sundry directories. I could verify that stopping autofs and then starting
> > > the gluster volume seems to solve the problem. Starting autofs after
> > > gluster seems to work fine too. What's the right way to handle this ?
> > >


Re: [Gluster-users] Distributed volume going to Read only mode if any of the Brick is not available

2015-05-19 Thread Varadharajan S
FYI
On 19 May 2015 20:25, "Varadharajan S"  wrote:

> Hi,
> Replication means I won't get the full space. Distribution is not like
> striping, right? If one brick in the volume is not available, can the
> other bricks distribute data among themselves? Is there any tuning that
> would solve this?
>  On 19 May 2015 20:02, "Atin Mukherjee" 
> wrote:
>
>>
>> On 19 May 2015 17:10, "Varadharajan S"  wrote:
>> >
>> > Hi,
>> >
>> > We are using Ubuntu 14.04 Server, and for storage purposes we configured
>> > gluster 3.5 as a distributed volume. Details below:
>> >
>> > 1). 4 servers - Ubuntu 14.04 Server, with each server's free disk space
>> > configured as a ZFS raidz2 volume
>> >
>> > 2). Each server has a /pool/gluster ZFS volume, with capacities of 5 TB,
>> > 8 TB, 6 TB and 10 TB
>> >
>> > 3). The bricks are rep1, rep2, rep3 and st1; all bricks are combined as a
>> > distributed volume and mounted on each system as,
>> >
>> >   e.g. on rep1 -> mount -t glusterfs  rep1:/glustervol  /data
>> >   rep2  -> mount  -t glusterfs  rep2:/glustervol  /data
>> >   rep3  -> mount  -t glusterfs  rep3:/glustervol  /data
>> >   st1   -> mount  -t glusterfs  st1:/glustervol  /data
>> >
>> > So /data gives us around 29 TB in total, and all our application data is
>> > stored under the /data mount point.
>> >
>> > Details about volume:
>> >
>> > volume glustervol-client-0
>> > type protocol/client
>> > option send-gids true
>> > option password b217da9d1d8b-bb55
>> > option username 9d76-4553-8c75
>> > option transport-type tcp
>> > option remote-subvolume /pool/gluster
>> > option remote-host rep1
>> > option ping-timeout 42
>> > end-volume
>> >
>> > volume glustervol-client-1
>> > type protocol/client
>> > option send-gids true
>> > option password b217da9d1d8b-bb55
>> > option username jkd76-4553-5347
>> > option transport-type tcp
>> > option remote-subvolume /pool/gluster
>> > option remote-host rep2
>> > option ping-timeout 42
>> > end-volume
>> >
>> > volume glustervol-client-2
>> > type protocol/client
>> > option send-gids true
>> > option password b217da9d1d8b-bb55
>> > option username 19d7-5a190c2
>> > option transport-type tcp
>> > option remote-subvolume /pool/gluster
>> > option remote-host rep3
>> > option ping-timeout 42
>> > end-volume
>> >
>> > volume glustervol-client-3
>> > type protocol/client
>> > option send-gids true
>> > option password b217da9d1d8b-bb55
>> > option username c75-5436b5a168347
>> > option transport-type tcp
>> > option remote-subvolume /pool/gluster
>> > option remote-host st1
>> >
>> > option ping-timeout 42
>> > end-volume
>> >
>> > volume glustervol-dht
>> > type cluster/distribute
>> > subvolumes glustervol-client-0 glustervol-client-1
>> glustervol-client-2 glustervol-client-3
>> > end-volume
>> >
>> > volume glustervol-write-behind
>> > type performance/write-behind
>> > subvolumes glustervol-dht
>> > end-volume
>> >
>> > volume glustervol-read-ahead
>> > type performance/read-ahead
>> > subvolumes glustervol-write-behind
>> > end-volume
>> >
>> > volume glustervol-io-cache
>> > type performance/io-cache
>> > subvolumes glustervol-read-ahead
>> > end-volume
>> >
>> > volume glustervol-quick-read
>> > type performance/quick-read
>> > subvolumes glustervol-io-cache
>> > end-volume
>> >
>> > volume glustervol-open-behind
>> > type performance/open-behind
>> > subvolumes glustervol-quick-read
>> > end-volume
>> >
>> > volume glustervol-md-cache
>> > type performance/md-cache
>> > subvolumes glustervol-open-behind
>> > end-volume
>> >
>> > volume glustervol
>> > type debug/io-stats
>> > option count-fop-hits off
>> > option latency-measurement off
>> > subvolumes glustervol-md-cache
>> > end-volume
>> >
>> >
>> > ap@rep3:~$ sudo gluster volume info
>> >
>> > Volume Name: glustervol
>> > Type: Distribute
>> > Volume ID: 165b-X
>> > Status: Started
>> > Number of Bricks: 4
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: rep1:/pool/gluster
>> > Brick2: rep2:/pool/gluster
>> > Brick3: rep3:/pool/gluster
>> > Brick4: st1:/pool/gluster
>> >
>> > Problem:
>> >
>> > If we shut down any of the bricks, the volume size is reduced (this is
>> > OK), but on the other servers, while I can still see the /data mount
>> > point, it only lists contents; I can't write to or edit any files/folders.
>> >
>> > Solution Required:
>> >
>> > If any one brick is not available, the other servers should still allow
>> > write and edit operations.
>> This is expected, since you are using a distributed volume: you won't be
>> able to write to or edit files belonging to the brick which is down. The
>> solution would be to migrate to a distributed-replicate volume.
>> >
>> > Please let us know what I can try further.
>> >
>> > Regards,
>> > Varad

[Gluster-users] glusterfsd do not starts automatically after rebooting

2015-05-19 Thread Sharad Shukla
Hi Niels,

I am using glusterfs 3.6.2, and I observed that glusterfsd does not start on
its own, due to which the volume is also not automatically mounted.

Also, "gluster volume status" shows the Self-heal daemon as N/A, even when I
start glusterfsd manually. Further, due to this problem one of the nodes
always shows up as "Disconnected" in "gluster peer status".

I tried to check the glusterfsd.service usage, but I did not find a systemd
directory, nor is systemctl available.

I would appreciate your help regarding glusterfsd.

Thanks
Sharad
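
Since systemctl is absent, this is presumably a non-systemd distribution, so
the init script has to be enabled at boot. A minimal sketch - note that it is
the glusterd management daemon (which in turn spawns the glusterfsd brick
processes) that needs to start at boot; pick the command matching your
distribution:

  # Debian/Ubuntu (sysvinit):
  update-rc.d glusterd defaults
  # RHEL/CentOS 6:
  chkconfig glusterd on
  # verify after a reboot:
  service glusterd status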

Re: [Gluster-users] Distributed volume going to Read only mode if any of the Brick is not available

2015-05-19 Thread Atin Mukherjee
On 19 May 2015 17:10, "Varadharajan S"  wrote:
>
> Hi,
>
> We are using Ubuntu 14.04 Server, and for storage purposes we configured
> gluster 3.5 as a distributed volume. Details below:
>
> 1). 4 servers - Ubuntu 14.04 Server, with each server's free disk space
> configured as a ZFS raidz2 volume
>
> 2). Each server has a /pool/gluster ZFS volume, with capacities of 5 TB,
> 8 TB, 6 TB and 10 TB
>
> 3). The bricks are rep1, rep2, rep3 and st1; all bricks are combined as a
> distributed volume and mounted on each system as,
>
>   e.g. on rep1 -> mount -t glusterfs  rep1:/glustervol  /data
>   rep2  -> mount  -t glusterfs  rep2:/glustervol  /data
>   rep3  -> mount  -t glusterfs  rep3:/glustervol  /data
>   st1   -> mount  -t glusterfs  st1:/glustervol  /data
>
> So /data gives us around 29 TB in total, and all our application data is
> stored under the /data mount point.
>
> Details about volume:
>
> volume glustervol-client-0
> type protocol/client
> option send-gids true
> option password b217da9d1d8b-bb55
> option username 9d76-4553-8c75
> option transport-type tcp
> option remote-subvolume /pool/gluster
> option remote-host rep1
> option ping-timeout 42
> end-volume
>
> volume glustervol-client-1
> type protocol/client
> option send-gids true
> option password b217da9d1d8b-bb55
> option username jkd76-4553-5347
> option transport-type tcp
> option remote-subvolume /pool/gluster
> option remote-host rep2
> option ping-timeout 42
> end-volume
>
> volume glustervol-client-2
> type protocol/client
> option send-gids true
> option password b217da9d1d8b-bb55
> option username 19d7-5a190c2
> option transport-type tcp
> option remote-subvolume /pool/gluster
> option remote-host rep3
> option ping-timeout 42
> end-volume
>
> volume glustervol-client-3
> type protocol/client
> option send-gids true
> option password b217da9d1d8b-bb55
> option username c75-5436b5a168347
> option transport-type tcp
> option remote-subvolume /pool/gluster
> option remote-host st1
>
> option ping-timeout 42
> end-volume
>
> volume glustervol-dht
> type cluster/distribute
> subvolumes glustervol-client-0 glustervol-client-1
glustervol-client-2 glustervol-client-3
> end-volume
>
> volume glustervol-write-behind
> type performance/write-behind
> subvolumes glustervol-dht
> end-volume
>
> volume glustervol-read-ahead
> type performance/read-ahead
> subvolumes glustervol-write-behind
> end-volume
>
> volume glustervol-io-cache
> type performance/io-cache
> subvolumes glustervol-read-ahead
> end-volume
>
> volume glustervol-quick-read
> type performance/quick-read
> subvolumes glustervol-io-cache
> end-volume
>
> volume glustervol-open-behind
> type performance/open-behind
> subvolumes glustervol-quick-read
> end-volume
>
> volume glustervol-md-cache
> type performance/md-cache
> subvolumes glustervol-open-behind
> end-volume
>
> volume glustervol
> type debug/io-stats
> option count-fop-hits off
> option latency-measurement off
> subvolumes glustervol-md-cache
> end-volume
>
>
> ap@rep3:~$ sudo gluster volume info
>
> Volume Name: glustervol
> Type: Distribute
> Volume ID: 165b-X
> Status: Started
> Number of Bricks: 4
> Transport-type: tcp
> Bricks:
> Brick1: rep1:/pool/gluster
> Brick2: rep2:/pool/gluster
> Brick3: rep3:/pool/gluster
> Brick4: st1:/pool/gluster
>
> Problem:
>
> If we shut down any of the bricks, the volume size is reduced (this is OK),
> but on the other servers, while I can still see the /data mount point, it
> only lists contents; I can't write to or edit any files/folders.
>
> Solution Required:
>
> If any one brick is not available, the other servers should still allow
> write and edit operations.
This is expected, since you are using a distributed volume: you won't be
able to write to or edit files belonging to the brick which is down. The
solution would be to migrate to a distributed-replicate volume (a sketch
follows at the end of this message).
>
> Please let us know what I can try further.
>
> Regards,
> Varad
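
As a concrete illustration of the suggestion above - a minimal sketch that
reuses the four hostnames from this thread (the replica pairing is
illustrative, and migrating an existing volume needs add-brick/rebalance
steps that are not shown here):

  gluster volume create glustervol replica 2 \
      rep1:/pool/gluster rep2:/pool/gluster \
      rep3:/pool/gluster st1:/pool/gluster

Note that with replica 2 the usable capacity is roughly halved, which is the
trade-off Varad was asking about.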

Re: [Gluster-users] Glusterfs 3.7 Compile Error

2015-05-19 Thread Mohamed Pakkeer
It is working. Thanks, Kaleb.

Regards
Backer

On Tue, May 19, 2015 at 7:50 PM, Kaleb KEITHLEY  wrote:

> On 05/19/2015 10:17 AM, Mohamed Pakkeer wrote:
>
>> Hi GlusterFS experts,
>>
>> I am trying to compile GlusterFS 3.7 on Ubuntu 14.04 and am getting the
>> following error
>>
>> checking sys/acl.h usability... no
>> checking sys/acl.h presence... no
>> checking for sys/acl.h... no
>> configure: error: Support for POSIX ACLs is required
>> node001:~/glusterfs-3.7.0$
>>
>> Ubuntu 14.04 enables ACLs by default on the root partition. I enabled ACLs
>> manually on the root partition, and it still shows the same error.
>>
>
> You don't have the libacl1-dev pkg installed.
>
> % apt-get install libacl1-dev
>
> --
>
> Kaleb
>
>
>

Re: [Gluster-users] Glusterfs 3.7 Compile Error

2015-05-19 Thread Kaleb KEITHLEY

On 05/19/2015 10:17 AM, Mohamed Pakkeer wrote:

> Hi GlusterFS experts,
>
> I am trying to compile GlusterFS 3.7 on Ubuntu 14.04 and am getting the
> following error
>
> checking sys/acl.h usability... no
> checking sys/acl.h presence... no
> checking for sys/acl.h... no
> configure: error: Support for POSIX ACLs is required
> node001:~/glusterfs-3.7.0$
>
> Ubuntu 14.04 enables ACLs by default on the root partition. I enabled ACLs
> manually on the root partition, and it still shows the same error.


You don't have the libacl1-dev pkg installed.

% apt-get install libacl1-dev

--

Kaleb




[Gluster-users] Glusterfs 3.7 Compile Error

2015-05-19 Thread Mohamed Pakkeer
Hi GlusterFS experts,

I am trying to compile GlusterFS 3.7 on Ubuntu 14.04 and am getting the
following error

checking sys/acl.h usability... no
checking sys/acl.h presence... no
checking for sys/acl.h... no
configure: error: Support for POSIX ACLs is required
node001:~/glusterfs-3.7.0$

Ubuntu 14.04 enables ACLs by default on the root partition. I enabled ACLs
manually on the root partition, and it still shows the same error.

Any help would be greatly appreciated.

-- 
Regards
Backer

[Gluster-users] Meeting minutes of todays Gluster Community Bug Triage meeting

2015-05-19 Thread Raghavendra Talur

On Tuesday 19 May 2015 05:08 PM, Niels de Vos wrote:

> Hi all,
>
> This meeting is scheduled for anyone who is interested in learning more
> about, or assisting with, the Bug Triage.
>
> Meeting details:
> - location: #gluster-meeting on Freenode IRC
>   ( https://webchat.freenode.net/?channels=gluster-meeting )
> - date: every Tuesday
> - time: 12:00 UTC
>   (in your terminal, run: date -d "12:00 UTC")
> - agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
>
> Currently the following items are listed:
> * Roll Call
> * Status of last week's action items
> * Group Triage
> * Open Floor
>
> The last two topics have space for additions. If you have a suitable bug
> or topic to discuss, please add it to the agenda.
>
> Appreciate your participation.
>
> Niels




Minutes: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-05-19/gluster-meeting.2015-05-19-12.01.html
Minutes (text): 
http://meetbot.fedoraproject.org/gluster-meeting/2015-05-19/gluster-meeting.2015-05-19-12.01.txt
Log: 
http://meetbot.fedoraproject.org/gluster-meeting/2015-05-19/gluster-meeting.2015-05-19-12.01.log.html


Meeting summary

* Agenda: https://public.pad.fsfe.org/p/gluster-bug-triage (ndevos, 12:01:58)
* Roll Call (ndevos, 12:02:03)
* Group Triage (ndevos, 12:06:44)
* Bugs for 3.4 got a comment, asking for retesting and updating of the
  version in case the problem exists on newer versions (ndevos, 12:07:51)
* Bugs for 3.4 will get closed at the end of this month if there are no
  updates/corrections (ndevos, 12:08:24)
* 111 bugs were updated with the 3.4 re-confirm note (ndevos, 12:08:52)
* There are 40 untriaged bugs since the last meeting: http://goo.gl/WuDQun
  (ndevos, 12:13:07)



Meeting ended at 13:34:56 UTC (full logs).

Action items

(none)


People present (lines said)

ndevos (70)
rafi (41)
kkeithley_ (24)
RaSTar (24)
zodbot (3)
rafi1 (1)







[Gluster-users] Distributed volume going to Read only mode if any of the Brick is not available

2015-05-19 Thread Varadharajan S
Hi,

We are using Ubuntu 14.04 Server, and for storage purposes we configured
gluster 3.5 as a distributed volume. Details below:

1). 4 servers - Ubuntu 14.04 Server, with each server's free disk space
configured as a ZFS raidz2 volume

2). Each server has a /pool/gluster ZFS volume, with capacities of 5 TB,
8 TB, 6 TB and 10 TB

3). The bricks are rep1, rep2, rep3 and st1; all bricks are combined as a
distributed volume and mounted on each system as,

  e.g. on rep1 -> mount -t glusterfs  rep1:/glustervol  /data
       rep2   -> mount -t glusterfs  rep2:/glustervol  /data
       rep3   -> mount -t glusterfs  rep3:/glustervol  /data
       st1    -> mount -t glusterfs  st1:/glustervol   /data

So /data gives us around 29 TB in total, and all our application data is
stored under the /data mount point.

*Details about volume:*

volume glustervol-client-0
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username 9d76-4553-8c75
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host rep1
option ping-timeout 42
end-volume

volume glustervol-client-1
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username jkd76-4553-5347
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host rep2
option ping-timeout 42
end-volume

volume glustervol-client-2
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username 19d7-5a190c2
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host rep3
option ping-timeout 42
end-volume

volume glustervol-client-3
type protocol/client
option send-gids true
option password b217da9d1d8b-bb55
option username c75-5436b5a168347
option transport-type tcp
option remote-subvolume /pool/gluster
option remote-host st1

option ping-timeout 42
end-volume

volume glustervol-dht
type cluster/distribute
subvolumes glustervol-client-0 glustervol-client-1 glustervol-client-2
glustervol-client-3
end-volume

volume glustervol-write-behind
type performance/write-behind
subvolumes glustervol-dht
end-volume

volume glustervol-read-ahead
type performance/read-ahead
subvolumes glustervol-write-behind
end-volume

volume glustervol-io-cache
type performance/io-cache
subvolumes glustervol-read-ahead
end-volume

volume glustervol-quick-read
type performance/quick-read
subvolumes glustervol-io-cache
end-volume

volume glustervol-open-behind
type performance/open-behind
subvolumes glustervol-quick-read
end-volume

volume glustervol-md-cache
type performance/md-cache
subvolumes glustervol-open-behind
end-volume

volume glustervol
type debug/io-stats
option count-fop-hits off
option latency-measurement off
subvolumes glustervol-md-cache
end-volume


ap@rep3:~$ sudo gluster volume info

Volume Name: glustervol
Type: Distribute
Volume ID: 165b-X
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: rep1:/pool/gluster
Brick2: rep2:/pool/gluster
Brick3: rep3:/pool/gluster
Brick4: st1:/pool/gluster

*Problem:*

If we shut down any of the bricks, the volume size is reduced (this is OK),
but on the other servers, while I can still see the /data mount point, it
only lists contents; I can't write to or edit any files/folders.

*Solution Required:*

If any one brick is not available, the other servers should still allow
write and edit operations.

Please let us know what I can try further.

Regards,
Varad

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC (~in 20 minutes)

2015-05-19 Thread Niels de Vos
Hi all,

This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.

Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
(in your terminal, run: date -d "12:00 UTC")
- agenda: https://public.pad.fsfe.org/p/gluster-bug-triage

Currently the following items are listed:
* Roll Call
* Status of last week's action items
* Group Triage
* Open Floor

The last two topics have space for additions. If you have a suitable bug
or topic to discuss, please add it to the agenda.

Appreciate your participation.

Niels


[Gluster-users] GlusterFS keeps crashing

2015-05-19 Thread Brett Gillett
Morning everyone,

Hoping someone can help me out with this.  I've been running GlusterFS for a
while now and everything was great.  For about the last month, though, I'm
lucky if it runs for a few days without crashing and bringing all the
servers down.

Here's what I can see in the logs when a failure occurs.  I see this across
all three hosts in the cluster.

[2015-05-19 04:12:33.761831] C [rpc-clnt-ping.c:109:rpc_clnt_ping_timer_expired] 0-www-client-0: server x.x.x.x:49157 has not responded in the last 42 seconds, disconnecting.
[2015-05-19 04:12:33.762269] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7ff0ae43c550] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7ff0ae211787] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff0ae21189e] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7ff0ae211951] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15f)[0x7ff0ae211f1f] ) 0-www-client-0: forced unwinding frame type(GlusterFS 3.3) op(OPENDIR(20)) called at 2015-05-19 04:11:51.000813 (xid=0x4a67)
[2015-05-19 04:12:33.762302] E [client-rpc-fops.c:2686:client3_3_opendir_cbk] 0-www-client-0: remote operation failed: Transport endpoint is not connected. Path:  (a1fb01c7-bc8e-4854-9760-8da8d62519bc)
[2015-05-19 04:12:33.762436] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7ff0ae43c550] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7ff0ae211787] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff0ae21189e] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7ff0ae211951] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15f)[0x7ff0ae211f1f] ) 0-www-client-0: forced unwinding frame type(GF-DUMP) op(NULL(2)) called at 2015-05-19 04:11:51.000832 (xid=0x4a68)
[2015-05-19 04:12:33.762455] W [rpc-clnt-ping.c:154:rpc_clnt_ping_cbk] 0-www-client-0: socket disconnected
[2015-05-19 04:16:45.804515] C [rpc-clnt-ping.c:109:rpc_clnt_ping_timer_expired] 0-www-conf-client-0: server x.x.x.x:49156 has not responded in the last 42 seconds, disconnecting.
[2015-05-19 04:16:45.804884] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7ff0ae43c550] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7ff0ae211787] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7ff0ae21189e] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x91)[0x7ff0ae211951] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x15f)[0x7ff0ae211f1f] ) 0-www-conf-client-0: forced unwinding frame type(GlusterFS 3.3) op(OPENDIR(20)) called at 2015-05-19 04:16:03.000774 (xid=0x4a83)

Here's info about the version I'm running:

glusterfs 3.6.3 built on Apr 23 2015 16:12:23
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. 
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


Any insight would be appreciated,
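
The critical ("C") messages show the 42-second ping timer expiring before
anything else fails, so the network path (or a stalled brick process) between
the clients and x.x.x.x is one angle to chase. The timeout itself is tunable;
a sketch (the volume names "www" and "www-conf" are inferred from the
0-www-client-0 / 0-www-conf-client-0 log prefixes, so adjust to your actual
volume names):

  gluster volume info www                        # shows any reconfigured options
  gluster volume set www network.ping-timeout 60

Raising the timeout only papers over whatever is stalling the server, but it
can confirm whether the ping timer is the trigger.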

Re: [Gluster-users] epel.repo is empty

2015-05-19 Thread Kingsley
On Mon, 2015-05-18 at 08:31 -0700, Joe Julian wrote:
> Might be a good idea *not* to change the symlink until it can actually 
> point to something.

I was just thinking the same thing ...

-- 
Cheers,
Kingsley.



Re: [Gluster-users] Can a gluster server be an NFS client ?

2015-05-19 Thread A Ghoshal
Check whether another NFS service is already running. Basically,

ps -ef | grep nfs 

at the time of failure should tell you something.
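
rpcinfo can also show what is already registered with the portmapper -
program 100021 in the error is NLM, the lock manager - so a conflicting
registration should be visible there. A sketch:

  rpcinfo -p | grep -E '100021|nlockmgr'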



From:   Prasun Gera 
To: "gluster-users@gluster.org" 
Date:   05/19/2015 02:17 AM
Subject:[Gluster-users] Can a gluster server be an NFS client ?
Sent by:gluster-users-boun...@gluster.org



I am seeing some erratic behavior w.r.t. the NFS service on the gluster 
servers (RHS 3.0). The nfs service fails to start occasionally and 
randomly with 

Could not register with portmap 100021 4 38468
Program  NLM4 registration failed

This appears to be related to 
http://www.gluster.org/pipermail/gluster-users/2014-October/019215.html , 
although I'm not sure what the resolution is.

The gluster servers use autofs to mount user home directories and other 
sundry directories. I could verify that stopping autofs and then starting 
the gluster volume seems to solve the problem. Starting autofs after 
gluster seems to work fine too. What's the right way to handle this ?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

