On 01/19/2016 10:36 PM, Bishoy Mikhael wrote:
So what about IPv6?!
We are still tracking this for 3.8. The patch which provides this
support does look good and after some more reviews I expect it to land
in time for 3.8.
Regards,
Vijay
On Tuesday, January 19, 2016, Vijay Bellur wrote:
So what about IPv6?!
Bishoy
On Tuesday, January 19, 2016, Vijay Bellur wrote:
> On 01/11/2016 04:22 PM, Vijay Bellur wrote:
>
>> Hi All,
>>
>> We discussed the following proposal for 3.8 in the maintainers mailing
>> list and there was general consensus about the changes being a step in
>> the
On 01/11/2016 04:22 PM, Vijay Bellur wrote:
Hi All,
We discussed the following proposal for 3.8 in the maintainers mailing
list and there was general consensus about the changes being a step in
the right direction. I would like to hear your thoughts about the same.
Changes to 3.8 Plan:
--
And here is another statedump of a FUSE mount client consuming more than 7 GiB of RAM:
https://gist.github.com/136d7c49193c798b3ade
DHT-related leak?
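A quick, rough way to see which translator's allocations dominate in such a dump (assuming the usual 3.7 statedump layout with per-translator size= and num_allocs= counters; the file path below is only an example) is something like:
===
grep -E '^(\[|size=|num_allocs=)' /var/run/gluster/glusterdump.32495.dump.* | less
===
Sections whose num_allocs keeps growing between two dumps are the usual suspects for a leak.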
On Wednesday, January 13, 2016, 16:26:59 EET Soumya Koduri wrote:
> On 01/13/2016 04:08 PM, Soumya Koduri wrote:
> > On 01/12/2016 12:46 PM, Oleksandr Natalenko
Here are more RAM usage stats and a statedump of a GlusterFS mount approaching
yet another OOM:
===
root 32495 1.4 88.3 4943868 1697316 ? Ssl Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
===
https://gist.github.com/86198201c79e
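For reference, a statedump of the fuse client like the one above is normally produced by sending SIGUSR1 to the glusterfs mount process; the dump ends up under /var/run/gluster by default (the PID below is the one from the ps line above):
===
kill -USR1 32495
ls /var/run/gluster/glusterdump.32495.dump.*
===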
Hello,
I have a GlusterFS distributed volume and had a RAID problem. After
solving this I could start the volume but had problems with files. Now I
want to move working files directly from one node to a new distributed
volume. How does the link system work? If I have a distribute-replicate
with
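As a general pointer on the link files (not specific to this setup, and the brick path below is only a placeholder): DHT link files on a brick are zero-byte files with only the sticky bit set, carrying a trusted.glusterfs.dht.linkto xattr that names the subvolume holding the real data, so they can be identified and skipped when copying data straight off a brick:
===
find /bricks/brick1 -type f -perm -1000 -size 0c
getfattr -n trusted.glusterfs.dht.linkto -e text /bricks/brick1/path/to/file
===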
Hello,
Thank you for your quick reply, but I got the same result:
Unable to fetch slave volume details. Please check the slave cluster and
slave volume.
geo-replication command failed
I just tried testing with gverify.sh and got no errors. All seems OK.
Kind regards.
On Tue, Jan 19, 2016 at
Can you try,
gluster volume geo-replication datastore-master server2::datastore-slave create push-pem
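If that goes through, the session should show up with the usual status command (names as above):
===
gluster volume geo-replication datastore-master server2::datastore-slave status
===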
regards
Aravinda
On 01/19/2016 07:16 PM, Curro Rodriguez wrote:
Hello,
I am trying to deploy distributed geo-replication between 2 distributed volumes
with 24 bricks.
Operating system Ubuntu 14
Hello,
I am trying to deploy distributed geo-replication between 2 distributed volumes
with 24 bricks.
Operating system: Ubuntu 14.03.3 LTS
I have the same version on both servers:
glusterfs 3.7.6 built on Nov 9 2015 15:17:09
I have set up passwordless SSH and I can log in via SSH using secret.pem.pub
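For completeness, the usual 3.7 setup flow (not taken from this mail; volume and host names below are only placeholders matching this thread) generates the shared pem keys on the master before the create step:
===
gluster system:: execute gsec_create
gluster volume geo-replication datastore-master server2::datastore-slave create push-pem
===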
On 19/01/2016 10:06 PM, Krutika Dhananjay wrote:
Just to be sure we are not missing any steps here, you did invoke
'gluster volume heal datastore1 full' after adding the third brick,
before the heal could begin, right?
Possibly not. First I immediately ran 'gluster volume heal datastore1
info
Hi Lindsay,
Just to be sure we are not missing any steps here, you did invoke 'gluster
volume heal datastore1 full' after adding the third brick, before the heal
could begin, right?
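Assuming the standard commands, with datastore1 being the volume from this thread, the full heal and its progress are driven by:
===
gluster volume heal datastore1 full
gluster volume heal datastore1 info
===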
As far as the reverse heal is concerned, there is one issue with add-brick
where replica count is increased, w
gluster 3.7.6
I seem to be able to reliably reproduce this. I have a replica 2 volume
with 1 test VM image. While the VM is running with heavy disk
reads/writes (a disk benchmark), I add a 3rd brick for replica 3:
gluster volume add-brick datastore1 replica 3
vng.proxmox.softlog:/vmdata/datast
Hi Dietmar,
After discussion with Aravinda we realized that unfortunately the
suggestion to:
setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
setfattr -n glusterfs.geo-rep.trigger-sync -v "1"
won't work with 3.6.7, since support for that workaround was added only after
3.6.7.
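On releases that do support it, the workaround is applied by setting that virtual xattr on a mount of the master volume, typically on the affected directory and/or file; the paths below are placeholders, not from this mail:
===
setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master/some/dir
setfattr -n glusterfs.geo-rep.trigger-sync -v "1" /mnt/master/some/dir/file
===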
There's
On 18/01/16 22:24, Krutika Dhananjay wrote:
However if I run it on VNA, it succeeds.
Yes, there is a bug report for this @
https://bugzilla.redhat.com/show_bug.cgi?id=1112158.
The workaround, as you figured out yourself, is to run the command on
the node with the highest UUID.
Steps:
1) C
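The node UUIDs being compared there can be looked up with standard commands (this is generic usage, not the exact steps that were cut off above):
===
gluster system:: uuid get
gluster peer status
===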