Pawan - I haven't reached any conclusive analysis so far. But, looking at
the client (nfs) and glusterd log files, it does look like there is an
issue with peer connections. Does restarting all the glusterd instances one
by one solve this?
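For reference, I mean something like this on each node in turn (assuming
systemd and the default service name):

    systemctl restart glusterd
    gluster peer status    # each peer should show "Peer in Cluster (Connected)"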
On Mon, May 29, 2017 at 4:50 PM, Pawan Alwandi
On Mon, May 29, 2017 at 9:52 PM, Merwan Ouddane wrote:
> Hello,
>
> I wanted to play around with Gluster and I made a two-node replicated
> cluster; then I wanted to add a third replica "on the fly".
>
> I managed to probe my third server from the cluster, but when I try to add
>
On 5/28/2017 9:24 PM, Ravishankar N wrote:
I think you should try to find out whether there were self-heals pending to
gluster1 before you brought gluster2 down; otherwise the VMs should not have
paused.
Yes, if I watch for and then force outstanding heals (if the self-heal
hasn't kicked in) prior to
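For example, watching and forcing heals could look like this ("myvol" is a
placeholder, not an actual volume name from the thread):

    gluster volume heal myvol info    # list entries still pending heal
    gluster volume heal myvol         # trigger an index heal if shd hasn't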
Hi everybody,
I suppose there are a lot more people affected by the removal of the
driver from Cinder who do not know about it. I am running production
clusters on Mitaka and Newton and did not know about the issue;
OpenStack is quite a beast to keep pace with.
Is there any news
Hello,
I wanted to play around with Gluster and I made a two-node replicated cluster;
then I wanted to add a third replica "on the fly".
I managed to probe my third server from the cluster, but when I try to add the
new brick to the volume, I get a "Request timed out".
My command:
gluster
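For reference, growing a replica 2 volume to replica 3 usually looks
something like this; the volume name and brick path below are placeholders,
not the poster's actual command:

    gluster peer probe server3
    gluster volume add-brick myvol replica 3 server3:/data/brick1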
Hello,
Yes, I forgot to upgrade the client as well.
I did the upgrade and created a new volume, same options as before, with one VM
running and doing lots of I/O. I started the rebalance with force, and after it
completed I rebooted the VM, and it started normally without
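For reference, the rebalance sequence described here would be something like
this ("myvol" is a placeholder):

    gluster volume rebalance myvol start force
    gluster volume rebalance myvol status    # wait until it reports completed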
Hi all,
I love this project, Gluster and Ganesha are amazing. Thank you for this
great work!
The only thing that I miss is IPv6 support. I know that there are some
challenges and that's OK. For me it's not important whether Gluster servers
use IPv4 or IPv6 to talk to each other and replicate data.
Thanks for that update. Very happy to hear it ran fine without any issues.
:)
Yeah, so you can ignore those 'No such file or directory' errors. They
represent a transient state where DHT in the client process is yet to
figure out the new location of the file.
-Krutika
On Mon, May 29, 2017 at
I was stupid enough to copy an extra newline from the email, so sorry for
the noise.
Works so far.
Thanks for getting that solved. Best, Chris
Raghavendra Talur wrote on Mon, May 29, 2017 at
13:18:
>
>
> On 29-May-2017 3:49 PM, "Christopher Schmidt"
>> Healing can be triggered from the client side (access of a file) or the server side
>> (shd).
>> However, in both cases the actual heal starts from the "ec_heal_do" function.
If I do a recursive getfattr operation from the clients, then all heal
operations are done on the clients, right? The client reads the chunks, calculates
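For illustration, such a client-side crawl could be done with something like
this (the mount path is a placeholder); each lookup gives the client a chance
to heal the file:

    find /mnt/myvol -exec getfattr -d -m . {} + > /dev/null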
- Original Message -
From: "Serkan Çoban"
To: "Gluster Users"
Sent: Monday, May 29, 2017 5:13:06 PM
Subject: [Gluster-users] Heal operation detail of EC volumes
Hi,
When a brick fails in EC, what is the healing read/write data
Hi,
When a brick fails in EC, what is the healing read/write data path?
Which processes perform the operations?
Assume a 2GB file is being healed in a 16+4 EC configuration. I was
thinking that the SHD daemon on the failed brick's host would read 2GB from the
network, reconstruct its 100MB chunk, and write it on to
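(A quick check on the arithmetic, assuming each fragment is the file size
divided by the number of data bricks: 2048MB / 16 = 128MB per brick, for data
and parity fragments alike, rather than 100MB.)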
On 29-May-2017 3:49 PM, "Christopher Schmidt" wrote:
Hi Raghavendra Talur,
this does not work for me, most certainly because I forgot something.
So just put the file in the folder, make it executable, and create a volume?
That's all?
If I am doing this, there is no
Hi Raghavendra Talur,
this does not work for me, most certainly because I forgot something.
So just put the file in the folder, make it executable, and create a volume?
That's all?
If I am doing this, there is no /var/lib/glusterd/hooks/1/create/post/log
file and the Performance Translator is
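For reference, a minimal create/post hook sketch, assuming the usual glusterd
hook conventions (the script name and log path below are made up for
illustration). The script must be executable and its name must start with "S"
for glusterd to run it:

    #!/bin/bash
    # Hypothetical /var/lib/glusterd/hooks/1/create/post/S29example.sh
    # glusterd invokes post hooks with arguments such as --volname=<name>
    echo "$(date) create/post hook ran with: $*" >> /tmp/gluster-hook.log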
Did you mount the snapshot bricks after you reconfigured the VGs?
Regards,
Rafi KC
On 05/29/2017 01:08 PM, WoongHee Han wrote:
> Right, I had reconfigured the VG on one node, activated the brick
> path, and then restored the snapshot.
>
>
> 2017-05-29 15:54 GMT+09:00 Mohammed Rafi K C
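For illustration, re-activating and mounting a snapshot brick by hand might
look like this (the VG/LV names and snapshot name are assumptions):

    lvchange -ay myvg/mysnap_lv
    mount /dev/myvg/mysnap_lv /run/gluster/snaps/mysnap/brick1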
On 05/27/2017 09:22 AM, WoongHee Han wrote:
> Hi, I'm sorry for my late reply.
>
> I've tried to solve it using your answer. It worked, thanks; it
> means the snapshot was activated,
> and then I restored the snapshot.
>
> But after I restored the snapshot, there was nothing in the
>
Hi,
I took a look at your logs.
It very much seems like an issue caused by a mismatch between the glusterfs
client and server packages.
Your client (mount) seems to still be running 3.7.20, as confirmed by
the following log message:
[2017-05-26 08:58:23.647458] I [MSGID:
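For reference, the version skew is easy to confirm on both ends:

    glusterfs --version    # on the client doing the mount
    glusterd --version     # on each server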
On 05/29/2017 10:45 AM, wk wrote:
OK, can I assume SOME pause is expected when Gluster first sees
gluster2 go down, which would unpause after a timeout period? I have
seen that behaviour as well.
Yes, when you power off/shutdown/reboot a node, the mount hangs for a
bit due to not receiving
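The length of that hang is governed by the network.ping-timeout volume option
(42 seconds by default); for example, it can be inspected and lowered like
this, with a placeholder volume name:

    gluster volume get myvol network.ping-timeout
    gluster volume set myvol network.ping-timeout 10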