On Thursday, 24 September 2015 at 07:59 -0400, Kotresh Hiremath Ravishankar
wrote:
> Thank you :) and also please check that the script I gave passes on all
> machines
So it worked everywhere except on slave0 and slave1. Not sure what is
wrong, or whether they are in use; I will check later.
--
Michael Scherer
Hi,
glusterfs-3.6.6 has been released and the packages for RHEL/Fedora/CentOS
can be found here:
http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/
Requesting people running 3.6.x to please try it out and let us know if
there are any issues.
This release supposedly fixes the bugs list
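For those upgrading, a minimal sketch of the update steps (assuming a yum
repository pointing at the URL above is already configured; commands are
illustrative for CentOS/RHEL):
===
# Assumes the 3.6 repo from download.gluster.org is already set up
yum clean metadata
yum update glusterfs\*       # should pull in 3.6.6
service glusterd restart     # restart the management daemon after the upgrade
===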
Thank you :) and also please check that the script I gave passes on all machines.
Thanks and Regards,
Kotresh H R
----- Original Message -----
> From: "Michael Scherer"
> To: "Kotresh Hiremath Ravishankar"
> Cc: "Krutika Dhananjay", "Atin Mukherjee", "Gaurav Garg", "Aravinda",
> "Gluster Devel"
On Thursday, 24 September 2015 at 06:50 -0400, Kotresh Hiremath Ravishankar
wrote:
> >>> OK, this definitely requires some tests and thoughts. Does it only use
> >>> IPv4 too?
> >>> (I guess yes, since IPv6 is removed from the rackspace build slaves)
>
> Yes!
>
> Could we know
>>> OK, this definitely requires some tests and thoughts. Does it only use
>>> IPv4 too?
>>> (I guess yes, since IPv6 is removed from the rackspace build slaves)
Yes!
Could we know when these settings can be applied on all Linux slave machines?
If it takes some time, we shou
I've checked the statedump of the volume in question and haven't found lots
of iobufs as mentioned in that bug report.
However, I've noticed that there are lots of LRU records like this:
===
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
fd-coun
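For reference, a minimal sketch of how such statedumps are typically
generated (standard gluster tooling; the dump directory below is the common
default and an assumption for this setup):
===
# Server side: dump the state of a volume's brick processes
gluster volume statedump <volname>

# Client side: a FUSE client's glusterfs process dumps its state on SIGUSR1
kill -USR1 $(pidof glusterfs)

# Dumps usually land under /var/run/gluster/ as *.dump.* files
===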
On Thursday, 24 September 2015 at 02:24 -0400, Kotresh Hiremath Ravishankar
wrote:
> Hi,
>
> >>> So, is it OK if I restrict that to be used only on 127.0.0.1?
> I think not; test cases use 'H0' to create volumes:
> H0=${H0:=`hostname`};
> Geo-rep expects password-less
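To illustrate the concern, a minimal sketch in the style of the regression
tests (variable defaults follow the test framework's conventions; the volume
name and brick path are illustrative):
===
#!/bin/bash
# H0 expands to the machine's real hostname, not 127.0.0.1, so volume
# creation would break if gluster were restricted to the loopback address.
H0=${H0:=`hostname`}
V0=${V0:=patchy}
B0=${B0:=/d/backends}

gluster volume create $V0 $H0:$B0/${V0}1
gluster volume start $V0
===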
We use a bare GlusterFS installation with no oVirt involved.
24.09.2015 10:29, Gabi C wrote:
Google "vdsm memory leak"; it's been discussed on the list last year and
earlier this one...
In our GlusterFS deployment we've encountered something like a memory leak
in the GlusterFS FUSE client.
We use a replicated (×2) GlusterFS volume to store mail (exim+dovecot,
maildir format). Here are the inode stats for both bricks and the mountpoint:
===
Brick 1 (Server 1):
Filesystem
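A minimal sketch of how such inode stats can be collected, assuming df -i
was the tool; the brick path is taken from the statedump above and the
mountpoint path is hypothetical:
===
# Brick backing filesystem
df -i /bricks/r6sdLV07_vd0_mail

# GlusterFS FUSE mountpoint (illustrative path)
df -i /mnt/mail
===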