Re: [Gluster-users] Gluster volume heal info healed shows the healed entries under wrong brick

2016-05-24 Thread Lindsay Mathieson
On 24/05/2016 7:39 PM, Qiu Jie QJ Li wrote: checked the self-healing information by using 'gluster volume heal volume_name info healed'. I found that the healed entries are all listed under node_h, whereas I expected them to be listed under node_f. Also, the old healed entries on node_f previo

Re: [Gluster-users] libgfapi using unix domain sockets

2016-05-24 Thread Poornima Gurusiddaiah
Hi, Whenever a new fd is created it is allocated from the mem-pool; if the mem-pool is full it will be calloc'd. The current limit for the fd mem-pool is 1024, so if there are more than 1024 fds open, performance may be affected. Also, the unix socket used with glfs_set_volfile_server() is only f
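
A minimal sketch of the pattern being discussed, assuming the standard libgfapi calls (glfs_new, glfs_set_volfile_server, glfs_init, glfs_creat, glfs_close, glfs_fini); the volume name "testvol", the localhost volfile server and the file paths are placeholders, not taken from the thread:

/* One glfs_t, many glfs_fd_t: per the note above, the first 1024 fds come
 * from the fd mem-pool and anything beyond that is calloc'd.
 * Build (typically): gcc many_fds.c -lgfapi -o many_fds
 */
#include <glusterfs/api/glfs.h>
#include <stdio.h>
#include <fcntl.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");               /* placeholder volume name */
    if (!fs)
        return 1;

    glfs_set_volfile_server(fs, "tcp", "localhost", 24007);
    glfs_set_logging(fs, "/tmp/gfapi.log", 7);
    if (glfs_init(fs) != 0)
        return 1;

    enum { NFDS = 2048 };
    glfs_fd_t *fds[NFDS];
    int n;

    /* Open more fds than the fd mem-pool holds. */
    for (n = 0; n < NFDS; n++) {
        char path[64];
        snprintf(path, sizeof(path), "/file_%d", n); /* placeholder paths */
        fds[n] = glfs_creat(fs, path, O_RDWR, 0644);
        if (!fds[n])
            break;
    }

    for (int i = 0; i < n; i++)
        glfs_close(fds[i]);

    glfs_fini(fs);
    return 0;
}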

Re: [Gluster-users] Re: Re: Re: Re: Re: Re: geo-replication status partial faulty

2016-05-24 Thread Kotresh Hiremath Ravishankar
Hi, Verify the below before proceeding further. 1. Run the following command on all the master nodes; you should find only one directory (the session directory) and the rest are all files. If you find two directories, a clean-up is needed on all master nodes so that they have the same session directory

[Gluster-users] Weekly Community Meeting - 18/May/2016

2016-05-24 Thread Raghavendra Talur
The meeting minutes for this week's meeting are available at Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-18/weekly_community_meeting_18may2016.2016-05-18-12.06.html Minutes (text): https://meetbot-raw.fedoraproject.org/gluster-meeting/2016-05-18/weekly_community_meeting_18may

Re: [Gluster-users] Create gluster volume on machines with one hard disc

2016-05-24 Thread Chen Chen
Do you mean one blade enclosure with 7 blades in it, each running an individual OS? Chen On 5/24/2016 3:22 PM, David Comeyne wrote: 1 node = 1 physical mini computer. I am working with some sort of HPC (high-performance computing) machine. So I have 7 computers (nodes) stored inside one case.

[Gluster-users] Gluster volume heal info healed shows the healed entries under wrong brick

2016-05-24 Thread Qiu Jie QJ Li
Hello Gluster developers, I have a Gluster cluster with 2 nodes configured as a replicated volume; each node has one brick. I did a single-node failure test, including killing all gluster processes and powering off one node. The failed node is node_f and the other node is node_h. During node_f's f

Re: [Gluster-users] Create gluster volume on machines with one hard disc

2016-05-24 Thread David Comeyne
1 node = 1 physical mini computer. I am working with some sort of HPC (high-performance computing) machine. So I have 7 computers (nodes) stored inside one case. David From: Lindsay Mathieson [mailto:lindsay.mathie...@gmail.com] Sent: Tuesday, 24 May 2016 6:35 To: David Comeyne Cc: gluster-users

[Gluster-users] Re: Re: Re: Re: Re: Re: geo-replication status partial faulty

2016-05-24 Thread vyyy杨雨阳
Command output is as follows. Thanks [root@SVR8048HW2285 ~]# gluster volume geo-replication filews glusterfs01.sh3.ctripcorp.com::filews_slave status MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE    STATUS    CHECKPOINT STATUS    CRAWL STATUS

Re: [Gluster-users] libgfapi using unix domain sockets

2016-05-24 Thread Ankireddypalle Reddy
Is there any suggested best practice for the number of glfs_fd_t handles that can be associated with a glfs_t? Does having a single glfs_t in an application with a large number of glfs_fd_t handles cause any resource contention issues? Thanks and Regards, Ram From: gluster-users-boun...@gluster.org [mailto:glus

Re: [Gluster-users] libgfapi using unix domain sockets

2016-05-24 Thread Ankireddypalle Reddy
I figured it out: Protocol: unix, Hostname: /var/run/glusterd.socket, Port: 0. Thanks and Regards, Ram From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Ankireddypalle Reddy Sent: Tuesday, May 24, 2016 10:20 AM To: gluster-users@gluster.org Subject: [Gl
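
Those values appear to map onto the glfs_set_volfile_server() arguments as in the minimal sketch below; "testvol" is a placeholder volume name, and the socket path is the one quoted above, which may differ per distribution, so treat it as an assumption:

#include <glusterfs/api/glfs.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");   /* placeholder volume name */
    if (!fs)
        return 1;

    /* Protocol "unix", hostname = glusterd socket path, port 0. */
    glfs_set_volfile_server(fs, "unix", "/var/run/glusterd.socket", 0);

    if (glfs_init(fs) == 0) {
        /* ...normal glfs_open()/glfs_read()/glfs_write() usage... */
        glfs_fini(fs);
    }
    return 0;
}

Per Poornima's note earlier in the thread, this unix socket appears to be used only for fetching the volfile from the local glusterd; subsequent I/O goes to the bricks described by that volfile.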

[Gluster-users] libgfapi using unix domain sockets

2016-05-24 Thread Ankireddypalle Reddy
Hi, I am trying to use libgfapi to connect to a gluster volume over a unix domain socket. I am not able to find the socket path that should be provided when making the "glfs_set_volfile_server" function call. ps -eaf | grep gluster root 15178 31450 0 09:52 pts/1 00:00:00 gr

[Gluster-users] Meeting minutes: Gluster Community Bug Triage

2016-05-24 Thread Saravanakumar Arumugam
Hi, Please find the meeting minutes and summary: Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.html Minutes (text): https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.txt

Re: [Gluster-users] Possible error not being returned

2016-05-24 Thread Ankireddypalle Reddy
Xavier, Thanks for checking this. Will test this in 3.7.12. Thanks and Regards, Ram -Original Message- From: Xavier Hernandez [mailto:xhernan...@datalab.es] Sent: Tuesday, May 24, 2016 2:24 AM To: Ankireddypalle Reddy; gluster-users@gluster.org Subject: Re: [Gluster-users]

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-24 Thread Nicolas Ecarnot
On 24/05/2016 12:54, Lindsay Mathieson wrote: On 24/05/2016 8:24 PM, Kevin Lemonnier wrote: So the VMs were configured with cache set to none; I just tried with cache=directsync and it seems to be fixing the issue. Still need to run more tests, but did a couple already with that option and no I

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-24 Thread Lindsay Mathieson
On 24/05/2016 8:24 PM, Kevin Lemonnier wrote: So the VMs were configured with cache set to none; I just tried with cache=directsync and it seems to be fixing the issue. Still need to run more tests, but did a couple already with that option and no I/O errors. Never had to do this before, is it kno

Re: [Gluster-users] Re: Re: Re: Re: Re: geo-replication status partial faulty

2016-05-24 Thread Kotresh Hiremath Ravishankar
Ok, it looks like there is a problem with ssh key distribution. Before I suggest cleaning those up and doing the setup again, could you share the output of the following commands? 1. gluster vol geo-rep ::slave status 2. ls -l /var/lib/glusterd/geo-replication/ Are there multiple geo-rep sessions from thi

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-24 Thread Kevin Lemonnier
So the VMs were configured with cache set to none; I just tried with cache=directsync and it seems to be fixing the issue. Still need to run more tests, but did a couple already with that option and no I/O errors. Never had to do this before, is it known? Found the clue in some old mail from this m

[Gluster-users] REMINDER: Gluster Community Bug Triage meeting today at 12:00 UTC ~(in 2.0 hours)

2016-05-24 Thread Saravanakumar Arumugam
Hi, This meeting is scheduled for anyone interested in learning more about, or assisting with the Bug Triage. Meeting details: - location: #gluster-meeting on Freenode IRC ( https://webchat.freenode.net/?channels=gluster-meeting ) - date: every Tuesday - time: 12:00 UTC (in your terminal, run:

[Gluster-users] Re: Re: Re: Re: Re: geo-replication status partial faulty

2016-05-24 Thread vyyy杨雨阳
We can establish passwordless ssh directly with the 'ssh' command, but when creating push-pem it shows 'Passwordless ssh login has not been setup' unless we copy secret.pem to *id_rsa.pub [root@SVR8048HW2285 ~]# ssh -i /var/lib/glusterd/geo-replication/secret.pem r...@glusterfs01.sh3.ctripcorp.com

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-24 Thread Kevin Lemonnier
Hi, Some news on this. I actually don't need to trigger a heal to get corruption, so the problem is not the healing. Live-migrating the VM seems to trigger corruption every time, and even without that, just doing a database import, rebooting, then doing another import seems to corrupt as well. To c

Re: [Gluster-users] Re: Re: Re: Re: geo-replication status partial faulty

2016-05-24 Thread Kotresh Hiremath Ravishankar
Hi, Could you try the following command from the corresponding masters to the faulty slave nodes and share the output? The command below should not ask for a password and should run gsync.py. ssh -i /var/lib/glusterd/geo-replication/secret.pem root@ To establish passwordless ssh, it is not necessary to copy s

Re: [Gluster-users] performance issue in gluster volume

2016-05-24 Thread Anuradha Talur
- Original Message - > From: "Ramavtar" > To: gluster-users@gluster.org > Sent: Friday, May 20, 2016 11:12:43 PM > Subject: [Gluster-users] performance issue in gluster volume > > Hi Ravi, > > I am using a gluster volume and it has 2.7 TB of data (mp4 and jpeg > files) with an nginx webse