[Gluster-users] Weird behavior on just one folder (any ideas how that could happen?)

2016-12-14 Thread Andreas Ferrari

Dear Gluster-Users

Today we found something very strange on our production setup. We have two 
Gluster servers (mirrored) connected with FC (HA). On the volume that holds 
all our mailboxes we found one directory which returned I/O errors. The 
strange thing about it was that we had mounted the same volume on the 
Gluster server itself for analysis, and there we got no I/O errors. The 
share holds over 1'000 mailboxes and this was the only broken one. The fix 
was very easy, but my question is: how could that happen?
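
For reference, when a single directory on a replica volume throws I/O 
errors on clients but not on a server-side mount, the usual first suspects 
are pending heals or a split-brain on that directory. A rough sketch of the 
checks (only a sketch; the volume name "mail" below is just an example, 
substitute your own):

  gluster volume heal mail info               # entries still pending heal
  gluster volume heal mail info split-brain   # entries in split-brain
  gluster volume status mail                  # confirm all bricks are online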


Regards
Andreas

PS: sorry for my poor English

[Gluster-users] Weekly community meeting - 2016-12-14

2016-12-14 Thread Kaushal M
Hi all,

The community meeting wasn't held this week either, because of a lack of
volunteers to host it and a lack of attendance.

Considering this, we (kkeithley and I) have decided to tentatively
cancel the remaining meetings for the year (on 21 and 28 December). If
anyone wants the meetings to happen, please feel free to host.

See you all in the new year for the next meeting on 4th January.

Thanks,
Kaushal


Re: [Gluster-users] Fwd: Replica brick not working

2016-12-14 Thread Atin Mukherjee
On Wed, Dec 14, 2016 at 1:34 PM, Miloš Čučulović - MDPI wrote:

> Atin,
>
> I was able to move forward a bit. Initially, I had this:
>
> sudo gluster peer status
> Number of Peers: 1
>
> Hostname: storage2
> Uuid: 32bef70a-9e31-403e-b9f3-ec9e1bd162ad
> State: Peer Rejected (Connected)
>
> Then, on storage2, I removed everything from /var/lib/glusterd except the
> info file.
>
> Now I am getting another error message:
>
> sudo gluster peer status
> Number of Peers: 1
>
> Hostname: storage2
> Uuid: 32bef70a-9e31-403e-b9f3-ec9e1bd162ad
> State: Sent and Received peer request (Connected)
>


Please edit the /var/lib/glusterd/peers/32bef70a-9e31-403e-b9f3-ec9e1bd162ad
file on storage1, set the state to 3, and restart the glusterd instance.
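
For example, a minimal sketch of one way to apply that (assuming a
systemd-based system and that the peer file contains the usual "state="
line; please check the file and keep a backup before editing):

  # on storage1: back up, set the peer state to 3, then restart glusterd
  sudo cp /var/lib/glusterd/peers/32bef70a-9e31-403e-b9f3-ec9e1bd162ad{,.bak}
  sudo sed -i 's/^state=.*/state=3/' \
      /var/lib/glusterd/peers/32bef70a-9e31-403e-b9f3-ec9e1bd162ad
  sudo systemctl restart glusterd
  sudo gluster peer status   # should now show "State: Peer in Cluster (Connected)"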


> But the add-brick is still not working. I checked the hosts file and all
> seems OK; ping is also working well.
>
> The thing I also need to know: when adding a new replicated brick, do I
> need to sync all the files first, or does the new brick server need to be
> empty? Also, do I first need to create the same volume on the new server,
> or will adding it to the volume on server1 do that automatically?
>
>
> - Kindest regards,
>
> Milos Cuculovic
> IT Manager
>
> ---
> MDPI AG
> Postfach, CH-4020 Basel, Switzerland
> Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
> Tel. +41 61 683 77 35
> Fax +41 61 302 89 18
> Email: cuculo...@mdpi.com
> Skype: milos.cuculovic.mdpi
>
> On 14.12.2016 05:13, Atin Mukherjee wrote:
>
>> Milos,
>>
>> I just managed to take a look at a similar issue and my analysis is at
>> [1]. I remember you mentioning some incorrect /etc/hosts entries which
>> led to this same problem in an earlier case; do you mind rechecking the
>> same?
>>
>> [1]
>> http://www.gluster.org/pipermail/gluster-users/2016-December/029443.html
>>
>> On Wed, Dec 14, 2016 at 2:57 AM, Miloš Čučulović - MDPI wrote:
>>
>> Hi All,
>>
>> Moving forward with my issue, sorry for the late reply!
>>
>> I had some issues with the storage2 server (original volume), then
>> decided to use 3.9.0, so I have the latest version.
>>
>> For that, I manually synced all the files to the storage server. I
>> installed Gluster 3.9.0 there, started it, created a new volume called
>> storage, and all seems to work OK.
>>
>> Now, I need to create my replicated volume (add new brick on
>> storage2 server). Almost all the files are there. So, I was adding
>> on storage server:
>>
>> * sudo gluster peer probe storage2
>> * sudo gluster volume add-brick storage replica 2
>> storage2:/data/data-cluster force
>>
>> But there I am receiving "volume add-brick: failed: Host storage2 is
>> not in 'Peer in Cluster' state"
>>
>> Any idea?
>>
>> - Kindest regards,
>>
>> Milos Cuculovic
>> IT Manager
>>
>> ---
>> MDPI AG
>> Postfach, CH-4020 Basel, Switzerland
>> Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
>> Tel. +41 61 683 77 35
>> Fax +41 61 302 89 18
>> Email: cuculo...@mdpi.com 
>> Skype: milos.cuculovic.mdpi
>>
>> On 08.12.2016 17:52, Ravishankar N wrote:
>>
>> On 12/08/2016 09:44 PM, Miloš Čučulović - MDPI wrote:
>>
>> I was able to fix the sync by rsync-ing all the directories, and then the
>> heal started. The next problem :) : as soon as there are files on the new
>> brick, the gluster mount will also serve this brick to clients, and since
>> the new brick is not ready yet (the sync is not done), this results in
>> missing files on the client side. I temporarily removed the new brick; I
>> am now running a manual rsync and will add the brick again, and I hope
>> this will work.

>> What mechanism manages this? I guess there is something built in to make
>> a replica brick available only once the data is completely synced.
>>
>> This mechanism was introduced in 3.7.9 or 3.7.10
>> (http://review.gluster.org/#/c/13806/). Before that version, you needed
>> to manually set some xattrs on the bricks so that healing could happen in
>> parallel while the client would still serve reads from the original
>> brick. I can't find the link to the doc which describes these steps for
>> setting xattrs. :-(
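
>> (As a rough sketch, and not the old xattr procedure: on current releases
>> the standard CLI for kicking off and watching the heal once the new brick
>> has been added would be something like the following, with "storage" being
>> the volume name from this thread.)
>>
>>   gluster volume heal storage full   # trigger a full self-heal crawl
>>   gluster volume heal storage info   # list entries still pending heal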
>>
>> Calling it a day,
>> Ravi
>>
>>
>> - Kindest regards,
>>
>> Milos Cuculovic
>> IT Manager
>>
>> ---
>> MDPI AG
>> Postfach, CH-4020 Basel, Switzerland
>> Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
>> Tel. +41 61 683 77 35
>>  

Re: [Gluster-users] [ovirt-users] gluster 3.7.17-1.el7.x86_64 fails libvirt/qemu from centos-qemu-ev

2016-12-14 Thread lejeczek



On 14/12/16 06:37, Sahina Bose wrote:

[+ gluster-users]

Could you be more specific about the error?

Not that I really troubleshot it, and it was not much of an error: the KVM 
guest would not boot up, spitting out something like "no boot disk/device 
found".

Downgrade, and the guest would boot again, no problem.



On Wed, Dec 14, 2016 at 1:50 AM, lejeczek wrote:


libvirt/qemu cannot access the gluster vols when one has these:

Upgraded:
  glusterfs.x86_64 3.7.17-1.el7
  glusterfs-api.x86_64 3.7.17-1.el7
  glusterfs-cli.x86_64 3.7.17-1.el7
  glusterfs-client-xlators.x86_64 3.7.17-1.el7
  glusterfs-fuse.x86_64 3.7.17-1.el7
  glusterfs-ganesha.x86_64 3.7.17-1.el7
  glusterfs-libs.x86_64 3.7.17-1.el7
  glusterfs-server.x86_64 3.7.17-1.el7
  qemu-img-ev.x86_64 10:2.3.0-31.0.el7_2.21.1
  qemu-kvm-common-ev.x86_64 10:2.3.0-31.0.el7_2.21.1
  qemu-kvm-ev.x86_64 10:2.3.0-31.0.el7_2.21.1

Downgrading glusterfs* to 3.7.16-1.el7 fixes the problem.
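
For completeness, a sketch of the downgrade (assuming the 3.7.16 packages
are still available in the enabled repos):

  yum downgrade glusterfs-3.7.16-1.el7 glusterfs-libs-3.7.16-1.el7 \
      glusterfs-api-3.7.16-1.el7 glusterfs-cli-3.7.16-1.el7 \
      glusterfs-client-xlators-3.7.16-1.el7 glusterfs-fuse-3.7.16-1.el7 \
      glusterfs-ganesha-3.7.16-1.el7 glusterfs-server-3.7.16-1.el7
  systemctl restart glusterd
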
regards,
L.






Re: [Gluster-users] Fwd: Replica brick not working

2016-12-14 Thread Miloš Čučulović - MDPI

Atin,

I was able to move forward a bit. Initially, I had this:

sudo gluster peer status
Number of Peers: 1

Hostname: storage2
Uuid: 32bef70a-9e31-403e-b9f3-ec9e1bd162ad
State: Peer Rejected (Connected)

Then, on storage2, I removed everything from /var/lib/glusterd except the info file.

Now I am getting another error message:

sudo gluster peer status
Number of Peers: 1

Hostname: storage2
Uuid: 32bef70a-9e31-403e-b9f3-ec9e1bd162ad
State: Sent and Received peer request (Connected)

But the add-brick is still not working. I checked the hosts file and all 
seems OK; ping is also working well.


The thing I also need to know: when adding a new replicated brick, do I 
need to sync all the files first, or does the new brick server need to be 
empty? Also, do I first need to create the same volume on the new server, 
or will adding it to the volume on server1 do that automatically?



- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculo...@mdpi.com
Skype: milos.cuculovic.mdpi

On 14.12.2016 05:13, Atin Mukherjee wrote:

Milos,

I just managed to take a look at a similar issue and my analysis is at
[1]. I remember you mentioning some incorrect /etc/hosts entries which
led to this same problem in an earlier case; do you mind rechecking the
same?

[1]
http://www.gluster.org/pipermail/gluster-users/2016-December/029443.html

On Wed, Dec 14, 2016 at 2:57 AM, Miloš Čučulović - MDPI wrote:

Hi All,

Moving forward with my issue, sorry for the late reply!

I had some issues with the storage2 server (original volume), then
decided to use 3.9.0, so I have the latest version.

For that, I manually synced all the files to the storage server. I
installed Gluster 3.9.0 there, started it, created a new volume called
storage, and all seems to work OK.

Now, I need to create my replicated volume (add new brick on
storage2 server). Almost all the files are there. So, I was adding
on storage server:

* sudo gluster peer probe storage2
* sudo gluster volume add-brick storage replica 2
storage2:/data/data-cluster force

But there I am receiving "volume add-brick: failed: Host storage2 is
not in 'Peer in Cluster' state"

Any idea?

- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculo...@mdpi.com 
Skype: milos.cuculovic.mdpi

On 08.12.2016 17:52, Ravishankar N wrote:

On 12/08/2016 09:44 PM, Miloš Čučulović - MDPI wrote:

I was able to fix the sync by rsync-ing all the directories, and then the
heal started. The next problem :) : as soon as there are files on the new
brick, the gluster mount will also serve this brick to clients, and since
the new brick is not ready yet (the sync is not done), this results in
missing files on the client side. I temporarily removed the new brick; I
am now running a manual rsync and will add the brick again, and I hope
this will work.

What mechanism manages this? I guess there is something built in to make
a replica brick available only once the data is completely synced.

This mechanism was introduced in 3.7.9 or 3.7.10
(http://review.gluster.org/#/c/13806/). Before that version, you needed to
manually set some xattrs on the bricks so that healing could happen in
parallel while the client would still serve reads from the original brick.
I can't find the link to the doc which describes these steps for setting
xattrs. :-(

Calling it a day,
Ravi


- Kindest regards,

Milos Cuculovic
IT Manager

---
MDPI AG
Postfach, CH-4020 Basel, Switzerland
Office: St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 35
Fax +41 61 302 89 18
Email: cuculo...@mdpi.com 
Skype: milos.cuculovic.mdpi

On 08.12.2016 16:17, Ravishankar N wrote:

On 12/08/2016 06:53 PM, Atin Mukherjee wrote:



On Thu, Dec 8, 2016 at 6:44 PM, Miloš Čučulović - MDPI wrote:

Ah, damn! I found the issue. On the storage