Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

01.07.2016 09:23, Lindsay Mathieson wrote:

How did you do your downgrade?


downloaded packages from
https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/CentOS/epel-7.2/x86_64/
and
https://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/CentOS/epel-7.2/noarch/

and
yum downgrade *
from this directory
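
Roughly like this, assuming the downloaded RPMs sit together in one local
directory (the path is just an example):

   # hypothetical directory holding the downloaded 3.7.11 RPMs
   cd /root/glusterfs-3.7.11
   yum downgrade *.rpm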




Had you changed the op-version to 3071?

It's strange, but we have
operating-version=30710
in /var/lib/glusterd/glusterd.info

so I decided not to change it.
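
For reference, this is how we check it, and how it could be bumped once every
node runs 3.7.12 (a sketch; 30712 is the op-version matching 3.7.12):

   grep operating-version /var/lib/glusterd/glusterd.info
   # only after all nodes are on 3.7.12:
   gluster volume set all cluster.op-version 30712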


I can mount volumes using FUSE, but can't use libvirt; strange but true.

If I put a wrong pool name in the VM config, then I get
 failed to initialize gluster connection to server: 'localhost': No
such file or directory


But if I use the right pool name, with any file name, then I get
 failed to initialize gluster connection to server: 'localhost':
Invalid argument


It looks like the upgrade to 3.7.12 changed something in the volume config, and
this change is not compatible with gfapi,

but I don't see any changes.

Here is volume info for volume with VM images:

Volume Name: pool
Type: Replicate
Volume ID: 6748e47c-4a2e-4a80-a42d-61b6917f4a41
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: father:/wall/pool/brick
Brick2: son:/wall/pool/brick
Brick3: spirit:/wall/pool/brick
Options Reconfigured:
network.ping-timeout: 10
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.eager-lock: enable
nfs.disable: on
storage.owner-uid: 107
storage.owner-gid: 107
auth.allow: 127.0.0.1,192.168.22.26,192.168.22.27,192.168.22.28
features.barrier: disable


Are there any ideas about what we could try to change?


On 1 July 2016 at 15:03, Dmitry Melekhov  wrote:

30.06.2016 15:42, Dmitry Melekhov wrote:


30.06.2016 15:40, Lindsay Mathieson wrote:

On 30/06/2016 6:54 PM, Dmitry Melekhov wrote:

After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM
shutdown:

virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 'localhost':
Invalid argument


This looks similar to the errors we are seeing in the "3.7.12 disaster"
thread, with the Debian 3.7.12 packages. For myself things were fine with a
rolling upgrade and running VMs, until we shut them down.


Thank you!

I guess we have the same issue here, although I see no errors in logs.

As I see you reverted back to 3.7.11, we are planning to go this way
tomorrow :-(


Downgrade to 3.7.11 doesn't help in our case, don't know why yet.
fuse mount works OK. But virsh still says error: failed to initialize
gluster connection to server: 'localhost': Invalid argument.
Looks like upgrade to 3.7.12 changed some lib, which was not upgraded
before.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users





___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Non Shared Persistent Gluster Storage with Kubernetes

2016-06-30 Thread B.K.Raghuram
I have not gone through this implementation, nor the new iscsi
implementation being worked on for 3.9, but I thought I'd share the design
behind a distributed iscsi implementation that we worked on some time
back, based on the istgt code with a libgfapi hook.

The implementation used the idea of one file representing one block
(of a chosen size), allowing us to use gluster as the backend to store
these files while presenting a single block device of possibly infinite
size. We used a fixed file-naming convention based on the block number,
which allows the system to determine which file(s) need to be operated on
for the requested byte offset. This gave us the advantage of automatically
accessing all of gluster's file-based functionality underneath to provide a
fully distributed iscsi implementation.
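
As a minimal sketch of that mapping (the 4 MiB block size and the
blk-<number> naming are hypothetical, just to illustrate the idea):

   BLOCK_SIZE=$((4 * 1024 * 1024))      # chosen block size, 4 MiB here
   OFFSET=123456789                     # byte offset requested by the initiator
   BLOCK_NO=$((OFFSET / BLOCK_SIZE))    # index of the backing file to open
   printf 'backing file: blk-%010d, offset within file: %d\n' \
       "$BLOCK_NO" $((OFFSET % BLOCK_SIZE))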

Would this be similar to the new iscsi implementation that's being worked on
for 3.9?


On Wed, Jun 29, 2016 at 4:04 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> Prasanna explains how Gluster can be used as a distributed block store
> with Kubernetes cluster at:
>
>
> https://pkalever.wordpress.com/2016/06/29/non-shared-persistent-gluster-storage-with-kubernetes
>
> Please note that the current version of kubernetes doesn't have support
> for multi-path.
> We will be sending out an updated version of this post once Kubernetes
> 1.3.0. is released.
>
> This is a follow-up post to:
> "Gluster Solution for Non Shared Persistent Storage in Docker Container "
>
> https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
>
> We would love to hear your feedback to both of these solutions.
>
> --
> Pranith
>
> ___
> Gluster-devel mailing list
> gluster-de...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-devel
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Lindsay Mathieson
On 1 July 2016 at 15:23, Lindsay Mathieson  wrote:
> Had you changed the op-version to 3071?


30712 I mean

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Lindsay Mathieson
How did you do your downgrade?

Had you changed the op-version to 3071?

On 1 July 2016 at 15:03, Dmitry Melekhov  wrote:
> 30.06.2016 15:42, Dmitry Melekhov wrote:
>
>> 30.06.2016 15:40, Lindsay Mathieson wrote:
>>>
>>> On 30/06/2016 6:54 PM, Dmitry Melekhov wrote:

 After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM
 shutdown:

 virsh create w8test.xml
 error: Failed to create domain from w8test.xml
 error: failed to initialize gluster connection to server: 'localhost':
 Invalid argument
>>>
>>>
>>> This looks similar to the errors we are seeing in the "3.7.12 disaster"
>>> thread, with the Debian 3.7.12 packages. For myself things were fine with a
>>> rolling upgrade and running VMs, until we shut them down.
>>>
>> Thank you!
>>
>> I guess we have the same issue here, although I see no errors in logs.
>>
>> As I see you reverted back to 3.7.11, we are planning to go this way
>> tomorrow :-(
>>
>
> Downgrade to 3.7.11 doesn't help in our case, don't know why yet.
> fuse mount works OK. But virsh still says error: failed to initialize
> gluster connection to server: 'localhost': Invalid argument.
> Looks like upgrade to 3.7.12 changed some lib, which was not upgraded
> before.
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users



-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

30.06.2016 15:42, Dmitry Melekhov wrote:

30.06.2016 15:40, Lindsay Mathieson wrote:

On 30/06/2016 6:54 PM, Dmitry Melekhov wrote:
After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM 
shutdown:


virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 
'localhost': Invalid argument


This looks similar to the errors we are seeing in the "3.7.12 
disaster" thread, with the Debian 3.7.12 packages. For myself things 
were fine with a rolling upgrade and running VMs, until we shut them 
down.



Thank you!

I guess we have the same issue here, although I see no errors in logs.

As I see you reverted back to 3.7.11, we are planning to go this way 
tomorrow :-(




Downgrading to 3.7.11 doesn't help in our case; we don't know why yet.
The FUSE mount works OK, but virsh still says "error: failed to initialize
gluster connection to server: 'localhost': Invalid argument".
It looks like the upgrade to 3.7.12 changed some lib which was not downgraded
back.
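
It may be worth checking whether anything from 3.7.12 was left behind, e.g.:

   rpm -qa | grep -i gluster | sort
   # every gluster package, including glusterfs-libs and glusterfs-api,
   # should show 3.7.11 after the downgrade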


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] 3.7.12/3.8.qemu/proxmox testing

2016-06-30 Thread Lindsay Mathieson
Started a new thread for this to get away from the somewhat panicky
subject line ...

Some more test results. I built pve-qemu-kvm against gluster 3.8 and
installed it, which I hoped would remove any libglusterfs version
issues.

Unfortunately it made no difference - same problems emerged.

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Is there a way to manage data location manually?

2016-06-30 Thread Joe Julian
Isn't that what tiering is for?
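
Something along these lines, as a sketch of the 3.7 tiering CLI (volume name
and bricks are placeholders, and the exact syntax can differ between releases):

   # attach a fast (e.g. SSD-backed) tier to an existing volume
   gluster volume tier VOLNAME attach replica 2 fast1:/ssd/brick fast2:/ssd/brick
   # and to remove it later
   gluster volume tier VOLNAME detach start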

On June 30, 2016 4:54:42 PM PDT, Serg Gulko  wrote:
>Hello!
>
>We are running purely distributed(no replication) gluster storage.
>Is there a way to "bind" files to certain brick? Reason why I need it
>is
>very simple - I prefer to keep most recent data on more faster storage
>pods
>and offload stale files into slower pods(read - less expensive).
>
>I tried to copy files behind gluster back into target bricks directory
>but
>expectantly failed.
>
>Serg
>
>
>
>
>___
>Gluster-users mailing list
>Gluster-users@gluster.org
>http://www.gluster.org/mailman/listinfo/gluster-users

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is there a way to manage data location manually?

2016-06-30 Thread s . gulko
Hello Russell!

Great idea, thank you very much!

  Original Message  
From: Russell Purinton
Sent: Thursday, June 30, 2016 20:00
To: Serg Gulko
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Is there a way to manage data location manually?

I do this using separate volumes… One fast volume, one slow volume, and just 
move the data between them as necessary…

I don’t believe there’s a way to do it as you describe.

Russ

> On Jun 30, 2016, at 7:54 PM, Serg Gulko  wrote:
> 
> Hello! 
> 
> We are running purely distributed(no replication) gluster storage. 
> Is there a way to "bind" files to certain brick? Reason why I need it is very 
> simple - I prefer to keep most recent data on more faster storage pods and 
> offload stale files into slower pods(read - less expensive). 
> 
> I tried to copy files behind gluster back into target bricks directory but 
> expectantly failed. 
> 
> Serg
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] GlusterFS client mounted by fuse didn't cache inode/dentry?

2016-06-30 Thread Kviw John
Hi,
I have run into a problem after mounting a GlusterFS client as a local 
filesystem via FUSE. With "mount -t glusterfs -o attribute-timeout=86400 -o 
entry-timeout=86400 -o direct-io-mode=enable server1:/testvol /mnt/glusterfs" 
I mounted GlusterFS on Client1; Client1 does not host any bricks. On Client1 
I executed an ls shell script to list the dentries. According to the 
"glusterfs(8) - Linux man page", options like attribute-timeout and 
entry-timeout should affect the metadata cache in the FUSE kernel module.
1. I monitored Client1's memory with "free -m" while the shell script 
was running; it did not change at all.
2. I also monitored Server1's memory with "free -m"; it grew by about 
4 GB (3 million files).
3. How can this be explained? If the metadata cache is provided by the FUSE 
kernel, is there something wrong with my mount command?
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Is there a way to manage data location manually?

2016-06-30 Thread Russell Purinton
I do this using separate volumes…  One fast volume, one slow volume, and just 
move the data between them as necessary…

I don’t believe there’s a way to do it as you describe.

Russ

> On Jun 30, 2016, at 7:54 PM, Serg Gulko  wrote:
> 
> Hello! 
> 
> We are running purely distributed(no replication) gluster storage. 
> Is there a way to "bind" files to certain brick? Reason why I need it is very 
> simple - I prefer to keep most recent data on more faster storage pods and 
> offload stale files into slower pods(read - less expensive). 
> 
> I tried to copy files behind gluster back into target bricks directory but 
> expectantly failed. 
> 
> Serg
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Is there a way to manage data location manually?

2016-06-30 Thread Serg Gulko
Hello!

We are running purely distributed (no replication) gluster storage.
Is there a way to "bind" files to a certain brick? The reason why I need it is
very simple: I prefer to keep the most recent data on faster storage pods
and offload stale files onto slower pods (read: less expensive).

I tried to copy files, behind gluster's back, into the target brick's directory, but
that failed, as expected.

Serg
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Lindsay Mathieson

On 1/07/2016 9:15 AM, Kaleb KEITHLEY wrote:

There isn't a libglusterfs.a that it could static link to.


Then it shouldn't matter what version it was built against, should it? 
Unless the function signatures have changed since 3.5.


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 06:53 PM, Lindsay Mathieson wrote:
> On 30/06/2016 10:31 PM, Kaushal M wrote:
>> The pve-qemu-kvm package was last built or updated in January this
>> year[1]. And I think it was built against glusterfs-3.5.2, which is
>> the latest version of glusterfs in the proxmox sources [2].
>> Maybe the pve-qemu-kvm package needs a rebuild.
> 
> Does qemu static link libglusterfs?
> 

There isn't a libglusterfs.a that it could static link to.

So no.

--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Lindsay Mathieson

On 30/06/2016 10:31 PM, Kaushal M wrote:

The pve-qemu-kvm package was last built or updated in January this
year[1]. And I think it was built against glusterfs-3.5.2, which is
the latest version of glusterfs in the proxmox sources [2].
Maybe the pve-qemu-kvm package needs a rebuild.


Does qemu static link libglusterfs?

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Bit rot disabled as default

2016-06-30 Thread Gandalf Corvotempesta
On 15 Jun 2016 at 18:13, "Дмитрий Глушенок"  wrote:
>
> Hello.
>
> Maybe because of the current implementation of rotten-bit detection: one
hash for the whole file. Imagine a 40 GB VM image: a few parts of the image are
modified continuously (VM log files and application data are constantly
changing). Those writes keep invalidating the checksum, and BitD has to recalculate
it endlessly. As a result, the checksum of the VM image can never be verified.
>

And what about enabling bitrot for small files like emails in a maildir?
In this case the bitrot feature with one hash for the whole file would be OK.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 11:23 AM, Kaleb KEITHLEY wrote:
> On 06/30/2016 11:18 AM, Vijay Bellur wrote:
>> On Thu, Jun 30, 2016 at 8:31 AM, Kaushal M  wrote:
>>> On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier  
>>> wrote:
>
> Replicated the problem with 3.7.12 *and* 3.8.0 :(
>

 Yeah, I tried 3.8 when it came out too and I had to use the fuse mount 
 point
 to get the VMs to work. I just assumed proxmox wasn't compatible yet with 
 3.8 (since
 the menu were a bit wonky anyway) but I guess it was the same bug.

>>>
>>> I was able to reproduce the hang as well against 3.7.12.
>>>
>>> I tested by installing the pve-qemu-kvm package from the Proxmox
 repositories in a Debian Jessie container, as the default Debian qemu 
>>> packages don't link with glusterfs.
>>> I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.
>>>
>>> I tried to create an image on a simple 1 brick gluster volume using 
>>> qemu-img.
>>> The qemu-img command succeeded against a 3.7.11 volume, but hung
>>> against 3.7.12 to finally timeout and fail after ping-timeout.
>>>
>>> We can at-least be happy that this issue isn't due to any bugs in AFR.
>>>
>>> I was testing this with Raghavendra, and we are wondering if this is
>>> probably a result of changes to libglusterfs and libgfapi that have
>>> been introduced in 3.7.12 and 3.8.
>>> Any app linking with libgfapi also needs to link with libglusterfs.
>>> While we have some sort of versioning for libgfapi, we don't have any
>>> for libglusterfs.
>>> This has caused problems before (I cannot find any links for this
>>> right now though).
>>>
>>
>> Did any function signatures change between 3.7.11 and 3.7.12?
> 
> In gfapi? No. And (as I'm sure you're aware) they're all versioned, so
> things that linked with the old version-signature continue to do so.
> 
> I don't know about libglusterfs.
> 

And I'm not sure I want to suggest that we version libglusterfs for 4.0;
but perhaps we ought to?

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 11:18 AM, Vijay Bellur wrote:
> On Thu, Jun 30, 2016 at 8:31 AM, Kaushal M  wrote:
>> On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier  
>> wrote:

 Replicated the problem with 3.7.12 *and* 3.8.0 :(

>>>
>>> Yeah, I tried 3.8 when it came out too and I had to use the fuse mount point
>>> to get the VMs to work. I just assumed proxmox wasn't compatible yet with 
>>> 3.8 (since
>>> the menu were a bit wonky anyway) but I guess it was the same bug.
>>>
>>
>> I was able to reproduce the hang as well against 3.7.12.
>>
>> I tested by installing the pve-qemu-kvm package from the Proxmox
>> repositories in a Debian Jessie container, as the default Debian qemu
>> packages don't link with glusterfs.
>> I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.
>>
>> I tried to create an image on a simple 1 brick gluster volume using qemu-img.
>> The qemu-img command succeeded against a 3.7.11 volume, but hung
>> against 3.7.12 to finally timeout and fail after ping-timeout.
>>
>> We can at-least be happy that this issue isn't due to any bugs in AFR.
>>
>> I was testing this with Raghavendra, and we are wondering if this is
>> probably a result of changes to libglusterfs and libgfapi that have
>> been introduced in 3.7.12 and 3.8.
>> Any app linking with libgfapi also needs to link with libglusterfs.
>> While we have some sort of versioning for libgfapi, we don't have any
>> for libglusterfs.
>> This has caused problems before (I cannot find any links for this
>> right now though).
>>
> 
> Did any function signatures change between 3.7.11 and 3.7.12?

In gfapi? No. And (as I'm sure you're aware) they're all versioned, so
things that linked with the old version-signature continue to do so.

I don't know about libglusterfs.

--

Kaleb


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Pranith Kumar Karampuri
Kaushal and Raghavendra Talur (CCed) are looking into why libgfapi could be
causing a problem. We will get in touch with you as soon as they have
something. Please keep the test node until they reach you. Thanks again,
Lindsay.

On Thu, Jun 30, 2016 at 5:34 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> On 30/06/2016 2:42 PM, Pranith Kumar Karampuri wrote:
>
>> Glad that for both of you, things are back to normal. Could one of you
>> help us find what is the problem you are facing with libgfapi, if you have
>> any spare test machines. Otherwise we need to understand proxmox etc which
>> may take a bit more time.
>>
>
> I got a test node running, with a replica 3 volumes (3 bricks on same
> node).
>
> Replicated the problem with 3.7.12 *and* 3.8.0 :(
>
>
> I can trash this node as needed, happy to build from src and apply patches.
>
> --
> Lindsay Mathieson
>
>


-- 
Pranith
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Vijay Bellur
On Thu, Jun 30, 2016 at 8:31 AM, Kaushal M  wrote:
> On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier  wrote:
>>>
>>> Replicated the problem with 3.7.12 *and* 3.8.0 :(
>>>
>>
>> Yeah, I tried 3.8 when it came out too and I had to use the fuse mount point
>> to get the VMs to work. I just assumed proxmox wasn't compatible yet with 
>> 3.8 (since
>> the menu were a bit wonky anyway) but I guess it was the same bug.
>>
>
> I was able to reproduce the hang as well against 3.7.12.
>
> I tested by installing the pve-qemu-kvm package from the Proxmox
> repositories in a Debian Jessie container, as the default Debian qemu
> packages don't link with glusterfs.
> I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.
>
> I tried to create an image on a simple 1 brick gluster volume using qemu-img.
> The qemu-img command succeeded against a 3.7.11 volume, but hung
> against 3.7.12 to finally timeout and fail after ping-timeout.
>
> We can at-least be happy that this issue isn't due to any bugs in AFR.
>
> I was testing this with Raghavendra, and we are wondering if this is
> probably a result of changes to libglusterfs and libgfapi that have
> been introduced in 3.7.12 and 3.8.
> Any app linking with libgfapi also needs to link with libglusterfs.
> While we have some sort of versioning for libgfapi, we don't have any
> for libglusterfs.
> This has caused problems before (I cannot find any links for this
> right now though).
>

Did any function signatures change between 3.7.11 and 3.7.12?

-Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] About Gluster cluster availability when one of out of two nodes is down

2016-06-30 Thread Atin Mukherjee
On Thursday 30 June 2016, Ted Miller  wrote:

> Is it not the default behavior that if a volume loses quorum, the files
> are still available in read-only mode?  If so, it makes sense to me that
> this behavior would continue after a reboot.  Otherwise the user is going
> to be very confused.
>
I must say that I should have mentioned it as server-side quorum to avoid
this confusion. In this specific case the brick process(es) do not come up,
which means the volume is completely inaccessible and you don't get to access
the files hosted by the volume.


> Ted Miller
> Elkhart, IN, USA
>
> On 6/30/2016 2:10 AM, Atin Mukherjee wrote:
>
> Currently on a two node set up, if node B goes down and node A is rebooted
> brick process(es) on node A doesn't come up to avoid split brains. However
> we have had concerns/bugs from different gluster users on the availability
> with this configuration. So we can solve this issue by starting the brick
> process(es) if quorum is not enabled. If quorum is enabled we'd not.
> Although quorum option really doesn't make sense in a two node cluster, but
> we can leverage this option to get rid of this specific situation.
>
> I'd like to know your feedback on this and then I push a patch right away.
>
> ~Atin
>
>
> ___
> Gluster-users mailing list
> gluster-us...@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>

-- 
--Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Lindsay Mathieson

On 30/06/2016 10:31 PM, Kaushal M wrote:

Any app linking with libgfapi also needs to link with libglusterfs.
While we have some sort of versioning for libgfapi, we don't have any
for libglusterfs.
This has caused problems before (I cannot find any links for this
right now though).

The pve-qemu-kvm package was last built or updated in January this
year[1]. And I think it was built against glusterfs-3.5.2, which is
the latest version of glusterfs in the proxmox sources [2].
Maybe the pve-qemu-kvm package needs a rebuild.


Tricky problem.

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] selinux status on RHEL/Centos 7

2016-06-30 Thread Niels de Vos
On Wed, Jun 29, 2016 at 01:32:24PM -0400, Ted Miller wrote:
> What is the status of selinux tagging on Centos 7?  I have read enough to
> know that this is a chain-like process requiring changes in the client, the
> server, FUSE, and the kernel to make it all work.  What is the current
> status of this process on Centos 7?
> 
> My use-case: I need to allow Apache to access files that are stored on
> gluster and mounted using FUSE.  What are my options (besides shutting down
> selinux for the Apache process)?

It is not possible yet to change the SELinux labels over FUSE. There are
some changes needed in Gluster to really support that, in the FUSE
kernel module and also in the SELinux part of the kernel. Possibly even
some selinux-policy changes...

Until then, you should be able to mount a Gluster volume with the
"context" option. This might work for you:

   # mount -t glusterfs \
-o context="unconfined_u:object_r:httpd_sys_content_t:s0" \
storage.example.com:/website /var/www/html

Or, you can allow Apache to access FUSE filesystems with a boolean:

  # setsebool httpd_use_fusefs on


The main bug that we use for tracking progress on different fronts is
currently https://bugzilla.redhat.com/show_bug.cgi?id=1318100 . Maybe
some parts of this can be made available in GlusterfS 3.9 (September),
but it is likely that additional components (like kernel) need more
time.

HTH,
Niels


signature.asc
Description: PGP signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 5:47 PM, Kevin Lemonnier  wrote:
>>
>> Replicated the problem with 3.7.12 *and* 3.8.0 :(
>>
>
> Yeah, I tried 3.8 when it came out too and I had to use the fuse mount point
> to get the VMs to work. I just assumed proxmox wasn't compatible yet with 3.8 
> (since
> the menu were a bit wonky anyway) but I guess it was the same bug.
>

I was able to reproduce the hang as well against 3.7.12.

I tested by installing the pve-qemu-kvm package from the Proxmox
repositories in a Debian Jessie container, as the default Debian qemu
packages don't link with glusterfs.
I used the 3.7.11 and 3.7.12 gluster repos from download.gluster.org.

I tried to create an image on a simple 1 brick gluster volume using qemu-img.
The qemu-img command succeeded against a 3.7.11 volume, but hung
against 3.7.12, finally timing out and failing after the ping-timeout.
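
The reproducer was essentially of this shape (host, volume and image name are
placeholders):

   qemu-img create -f qcow2 gluster://server1/testvol/test.qcow2 1G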

We can at least be happy that this issue isn't due to any bugs in AFR.

I was testing this with Raghavendra, and we are wondering if this is
probably a result of changes to libglusterfs and libgfapi that have
been introduced in 3.7.12 and 3.8.
Any app linking with libgfapi also needs to link with libglusterfs.
While we have some sort of versioning for libgfapi, we don't have any
for libglusterfs.
This has caused problems before (I cannot find any links for this
right now though).

The pve-qemu-kvm package was last built or updated in January this
year[1]. And I think it was built against glusterfs-3.5.2, which is
the latest version of glusterfs in the proxmox sources [2].
Maybe the pve-qemu-kvm package needs a rebuild.
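
A rough way to see which gluster libraries a given qemu binary actually pulls
in at runtime (paths are examples):

   # which gluster libraries does qemu-img load?
   ldd /usr/bin/qemu-img | grep -i gluster
   # and which package versions provide them on Debian/Proxmox
   dpkg -l | grep -i glusterfs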

We'll continue to try to figure out what the actual issue is though.

~kaushal

> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kevin Lemonnier
> 
> Replicated the problem with 3.7.12 *and* 3.8.0 :(
> 

Yeah, I tried 3.8 when it came out too and I had to use the fuse mount point
to get the VMs to work. I just assumed Proxmox wasn't compatible yet with 3.8 
(since
the menus were a bit wonky anyway) but I guess it was the same bug.

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Lindsay Mathieson

On 30/06/2016 2:42 PM, Pranith Kumar Karampuri wrote:
Glad that for both of you, things are back to normal. Could one of you 
help us find what is the problem you are facing with libgfapi, if you 
have any spare test machines. Otherwise we need to understand proxmox 
etc which may take a bit more time.


I got a test node running, with a replica 3 volume (3 bricks on the same node).

Replicated the problem with 3.7.12 *and* 3.8.0 :(


I can trash this node as needed, happy to build from src and apply patches.

--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

30.06.2016 15:40, Lindsay Mathieson wrote:

On 30/06/2016 6:54 PM, Dmitry Melekhov wrote:
After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM 
shutdown:


virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 
'localhost': Invalid argument


This looks similar to the errors we are seeing in the "3.7.12 
disaster" thread, with the debian 3.7.12 packages. For myself things 
were fine with a rolling ugrade and running VM's, until we shut them 
down.



Thank you!

I guess we have the same issue here, although I see no errors in logs.

As I see you reverted back to 3.7.11, we are planning to go this way 
tomorrow :-(


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Lindsay Mathieson

On 30/06/2016 6:54 PM, Dmitry Melekhov wrote:

After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM shutdown:

virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 'localhost': 
Invalid argument


This looks similar to the errors we are seeing in the "3.7.12 disaster" 
thread, with the Debian 3.7.12 packages. For myself things were fine 
with a rolling upgrade and running VMs, until we shut them down.


--
Lindsay Mathieson

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Disappearance of glusterfs-3.7.11-2.el6.x86_64 and dependencies

2016-06-30 Thread Kaleb KEITHLEY
On 06/30/2016 07:03 AM, Milos Kurtes wrote:
> Hi,
> 
> yesterday and day before package was there but now it is not.
> 
> http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm:
^^^
3.7.x packages are (still) at

http://download.gluster.org/pub/gluster/glusterfs/3.7/


> [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not
> Found"
> 
> The package is still in yum list getting from the repository.
> 
> What happened?
> 
> When will be available again?

And after the release of 3.7.12, LATEST is now

http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.12-1.el6.x86_64.rpm

If you absolutely want 3.7.11, you can still get it from

http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.11/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm

--

Kaleb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Disappearance of glusterfs-3.7.11-2.el6.x86_64 and dependencies

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 4:33 PM, Milos Kurtes  wrote:
> Hi,
>
> yesterday and day before package was there but now it is not.
>
> http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm:
> [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not
> Found"

To get the latest 3.7 packages you should be using
http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/EPEL.repo

>
> The package is still in yum list getting from the repository.
>
> What happened?
>
> When will be available again?
>
> Milos Kurtes, System Administrator, Alison Co.
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Disappearance of glusterfs-3.7.11-2.el6.x86_64 and dependencies

2016-06-30 Thread Milos Kurtes
Hi,

Yesterday and the day before, the package was there, but now it is not.

http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/EPEL.repo/epel-6/x86_64/glusterfs-3.7.11-2.el6.x86_64.rpm:
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404 Not
Found"

The package is still in yum list getting from the repository.

What happened?

When will be available again?

Milos Kurtes, System Administrator, Alison Co.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

30.06.2016 13:22, Dmitry Melekhov wrote:

30.06.2016 13:09, Dmitry Melekhov wrote:

30.06.2016 12:54, Dmitry Melekhov wrote:

Hello!

After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM 
shutdown:


virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 
'localhost': Invalid argument



Disk config:

  [libvirt <disk> XML stripped by the list archive]

It worked with 3.7.11 and still works for VMs still running.

Could you tell me is there any solution for this problem?


btw, I see in  brick log:

[2016-06-30 09:06:05.002914] E [MSGID: 113091] 
[posix.c:178:posix_lookup] 0-pool-posix: null gfid for path (null)
[2016-06-30 09:06:05.002971] E [MSGID: 113018] 
[posix.c:196:posix_lookup] 0-pool-posix: lstat on null failed 
[Invalid argument]


No, this is not related, because there are no new messages in log if I 
retry.



And, yes, it is related: I see this on other bricks too.
Looks like a bug in 3.7.12.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

30.06.2016 13:09, Dmitry Melekhov wrote:

30.06.2016 12:54, Dmitry Melekhov wrote:

Hello!

After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM 
shutdown:


virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 
'localhost': Invalid argument



Disk config:

  [libvirt <disk> XML stripped by the list archive]

It worked with 3.7.11 and still works for VMs still running.

Could you tell me is there any solution for this problem?


btw, I see in  brick log:

[2016-06-30 09:06:05.002914] E [MSGID: 113091] 
[posix.c:178:posix_lookup] 0-pool-posix: null gfid for path (null)
[2016-06-30 09:06:05.002971] E [MSGID: 113018] 
[posix.c:196:posix_lookup] 0-pool-posix: lstat on null failed [Invalid 
argument]


No, this is not related, because there are no new messages in the log if I 
retry.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

30.06.2016 12:54, Dmitry Melekhov wrote:

Hello!

After upgrade from 3.7.11 to 3.7.12 we can't start VMs after VM shutdown:

virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 'localhost': 
Invalid argument



Disk config:

  [libvirt <disk> XML stripped by the list archive]

It worked with 3.7.11 and still works for VMs still running.

Could you tell me is there any solution for this problem?


BTW, I see this in the brick log:

[2016-06-30 09:06:05.002914] E [MSGID: 113091] 
[posix.c:178:posix_lookup] 0-pool-posix: null gfid for path (null)
[2016-06-30 09:06:05.002971] E [MSGID: 113018] 
[posix.c:196:posix_lookup] 0-pool-posix: lstat on null failed [Invalid 
argument]


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] 3.7.12, centos 7 , libvirt problem

2016-06-30 Thread Dmitry Melekhov

Hello!

After upgrading from 3.7.11 to 3.7.12 we can't start VMs after a VM shutdown:

virsh create w8test.xml
error: Failed to create domain from w8test.xml
error: failed to initialize gluster connection to server: 'localhost': 
Invalid argument



Disk config:

  [libvirt <disk> XML stripped by the list archive]

It worked with 3.7.11 and still works for VMs that are still running.

Could you tell me if there is any solution for this problem?

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kevin Lemonnier
> 
> I need a quick info from you guys, which packages are you using? Are
> you using any of the packages built by the community (ie. on
> download.gluster.org/launchpad/CentOS-storage-sig etc.).
> We are wondering if the issues you are facing are the same that have
> been fixed by https://review.gluster.org/14822 .
> The packages that have been built by us, contain this patch. So if you
> are facing problems with these packages, we can be sure its a new
> issue.
>


r...@s2.name [hostname]:~ # cat /etc/apt/sources.list.d/gluster.list
deb 
http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.12/Debian/jessie/apt 
jessie main

Should include the patch, right?

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kaushal M
On Thu, Jun 30, 2016 at 12:29 PM, Kevin Lemonnier  wrote:
>>Glad that for both of you, things are back to normal. Could one of you
>>help us find what is the problem you are facing with libgfapi, if you have
>>any spare test machines. Otherwise we need to understand proxmox etc which
>>may take a bit more time.
>
> Sure, I have my test cluster working now using NFS but I can create other VMs
> using the lib to test if needed. What would you need ? Unfortunatly creating
> a VM on gluster through the lib doesn't work and I don't know how to get the 
> logs of that,
> the only error I get is this in proxmox logs :
>
> Jun 29 13:26:25 s2.name pvedaemon[2803]: create failed - unable to create 
> image: got lock timeout - aborting command
> Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485296] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-3: server 
> 172.16.0.2:49153 has not responded in the last 42 seconds, disconnecting.
> Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485407] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-4: server 
> 172.16.0.3:49153 has not responded in the last 42 seconds, disconnecting.
> Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485443] C 
> [rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-5: server 
> 172.16.0.50:49153 has not responded in the last 42 seconds, disconnecting.
>

I need some quick info from you guys: which packages are you using? Are
you using any of the packages built by the community (i.e. on
download.gluster.org/launchpad/CentOS-storage-sig etc.)?
We are wondering if the issues you are facing are the same ones that have
been fixed by https://review.gluster.org/14822 .
The packages that have been built by us contain this patch. So if you
are facing problems with these packages, we can be sure it's a new
issue.

>
> --
> Kevin Lemonnier
> PGP Fingerprint : 89A5 2283 04A0 E6E9 0111
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Replica 3 bricks on same node?

2016-06-30 Thread Atin Mukherjee
On Thu, Jun 30, 2016 at 1:13 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> How about if I setup 3 different IP's on the same node?
>

GlusterD supports multiple network interfaces. If you configure the interfaces and
peer-probe them, this should be possible.
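
Alternatively, on a single host gluster only warns about replicas sharing a
server and lets you proceed with force; a sketch with placeholder names:

   gluster volume create testvol replica 3 \
       node1:/data/brick1 node1:/data/brick2 node1:/data/brick3 force
   gluster volume start testvol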


> On 30 June 2016 at 17:42, Lindsay Mathieson 
> wrote:
> > As asked earlier I have a test proxmox node setup, but only the one.
> > Is it possible to setup a replica 3 (or 2) volume with the bricks all
> > on the same node?
> >
> > --
> > Lindsay
>
>
>
> --
> Lindsay
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Replica 3 bricks on same node?

2016-06-30 Thread Lindsay Mathieson
How about if I setup 3 different IP's on the same node?

On 30 June 2016 at 17:42, Lindsay Mathieson  wrote:
> As asked earlier I have a test proxmox node setup, but only the one.
> Is it possible to setup a replica 3 (or 2) volume with the bricks all
> on the same node?
>
> --
> Lindsay



-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Replica 3 bricks on same node?

2016-06-30 Thread Lindsay Mathieson
As asked earlier I have a test proxmox node setup, but only the one.
Is it possible to setup a replica 3 (or 2) volume with the bricks all
on the same node?

-- 
Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] 3.7.12 disaster

2016-06-30 Thread Kevin Lemonnier
>Glad that for both of you, things are back to normal. Could one of you
>help us find what is the problem you are facing with libgfapi, if you have
>any spare test machines. Otherwise we need to understand proxmox etc which
>may take a bit more time.

Sure, I have my test cluster working now using NFS, but I can create other VMs
using the lib to test if needed. What would you need? Unfortunately, creating
a VM on gluster through the lib doesn't work and I don't know how to get the 
logs of that;
the only error I get is this in the Proxmox logs:

Jun 29 13:26:25 s2.name pvedaemon[2803]: create failed - unable to create 
image: got lock timeout - aborting command
Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485296] C 
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-3: server 
172.16.0.2:49153 has not responded in the last 42 seconds, disconnecting.
Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485407] C 
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-4: server 
172.16.0.3:49153 has not responded in the last 42 seconds, disconnecting.
Jun 29 13:26:52 s2.name qemu-img[2811]: [2016-06-29 11:26:52.485443] C 
[rpc-clnt-ping.c:165:rpc_clnt_ping_timer_expired] 0-gluster-client-5: server 
172.16.0.50:49153 has not responded in the last 42 seconds, disconnecting.


-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111


signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] About Gluster cluster availability when one of out of two nodes is down

2016-06-30 Thread Atin Mukherjee
On Thu, Jun 30, 2016 at 11:57 AM, Ravishankar N 
wrote:

> On 06/30/2016 11:40 AM, Atin Mukherjee wrote:
>
> Currently on a two node set up, if node B goes down and node A is rebooted
> brick process(es) on node A doesn't come up to avoid split brains.
>
>
> This has always been the case. A patch I had sent quite some time back (
> http://review.gluster.org/#/c/8034/) was eventually abandoned, I think
> `volume start force` should suffice instead of adding checks in code.
>

This is exactly what I thought of earlier as a workaround. The problem with
this approach is the manual intervention, which users may not like to
apply. I got to know from Joe Julian that this is a departure from prior
behaviour, and I got feedback that we should think about having a solution
where no manual intervention/workaround is required.


-Ravi
>
> However we have had concerns/bugs from different gluster users on the
> availability with this configuration. So we can solve this issue by
> starting the brick process(es) if quorum is not enabled. If quorum is
> enabled we'd not. Although quorum option really doesn't make sense in a two
> node cluster, but we can leverage this option to get rid of this specific
> situation.
>
> I'd like to know your feedback on this and then I push a patch right away.
>
> ~Atin
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] About Gluster cluster availability when one of out of two nodes is down

2016-06-30 Thread Ravishankar N

On 06/30/2016 11:40 AM, Atin Mukherjee wrote:
Currently on a two node set up, if node B goes down and node A is 
rebooted brick process(es) on node A doesn't come up to avoid split 
brains.


This has always been the case. A patch I had sent quite some time back 
(http://review.gluster.org/#/c/8034/) was eventually abandoned; I think 
`volume start force` should suffice instead of adding checks in code.
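
I.e. once the surviving node is back up without its peer, something like the
following brings the bricks up manually (VOLNAME is a placeholder):

   gluster volume start VOLNAME force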

-Ravi

However we have had concerns/bugs from different gluster users on the 
availability with this configuration. So we can solve this issue by 
starting the brick process(es) if quorum is not enabled. If quorum is 
enabled we'd not. Although quorum option really doesn't make sense in 
a two node cluster, but we can leverage this option to get rid of this 
specific situation.


I'd like to know your feedback on this and then I push a patch right away.

~Atin


___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] About Gluster cluster availability when one of out of two nodes is down

2016-06-30 Thread Atin Mukherjee
Currently, on a two-node setup, if node B goes down and node A is rebooted,
the brick process(es) on node A don't come up, to avoid split brains. However
we have had concerns/bugs from different gluster users about the availability
with this configuration. So we can solve this issue by starting the brick
process(es) if quorum is not enabled; if quorum is enabled we would not.
Although the quorum option really doesn't make sense in a two-node cluster,
we can leverage it to get rid of this specific situation.
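
For reference, server-side quorum is toggled per volume roughly like this
(VOLNAME and the ratio are just examples):

   gluster volume set VOLNAME cluster.server-quorum-type server   # or: none
   gluster volume set all cluster.server-quorum-ratio 51%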

I'd like to know your feedback on this and then I push a patch right away.

~Atin
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users