On 24/05/2016 7:39 PM, Qiu Jie QJ Li wrote:
checked the self-healing information by using 'gluster volume heal
volume_name info healed'. I found that the healed entries are all
listed under node_h, though I expected them to be listed under
node_f. Also the old healed entries on node_f previo
Hi,
Whenever a new fd is created it is allocated from the mem-pool; if the mem-pool
is full it will be calloc'd. The current limit for the fd mem-pool is 1024, so if
there are more than 1024 fds open, performance may be affected.
Also, the unix socket used while glfs_set_volfile_server() is only f
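Not from the original mail, but a minimal sketch of the pattern being discussed: one glfs_t with short-lived glfs_fd_t handles, closed promptly so the number of simultaneously open fds stays under the 1024 fd mem-pool limit mentioned above. The volume name "testvol", host "gluster-host", and file paths are placeholders.

/* One glfs_t, many short-lived glfs_fd_t. Build with: gcc example.c -lgfapi */
#include <glusterfs/api/glfs.h>
#include <stdio.h>
#include <fcntl.h>

int main(void)
{
    glfs_t *fs = glfs_new("testvol");               /* placeholder volume name */
    if (!fs)
        return 1;
    glfs_set_volfile_server(fs, "tcp", "gluster-host", 24007);
    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        return 1;
    }

    /* Keeping open fds below the fd mem-pool size (1024, per the note above)
     * avoids the calloc fallback on every new fd. */
    for (int i = 0; i < 10; i++) {
        char path[64];
        snprintf(path, sizeof(path), "/file-%d.txt", i);
        glfs_fd_t *fd = glfs_creat(fs, path, O_WRONLY, 0644);
        if (fd) {
            glfs_write(fd, "hello\n", 6, 0);
            glfs_close(fd);                         /* release the fd back to the pool */
        }
    }

    glfs_fini(fs);
    return 0;
}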
Hi,
Verify the below before proceeding further.
1. Run the following command on all the master nodes.
You should find only one directory (the session directory)
and the rest are all files. If you find two directories, a
cleanup is needed on all master nodes so they all have the same
session directory.
The meeting minutes for this week's meeting are available at
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-18/weekly_community_meeting_18may2016.2016-05-18-12.06.html
Minutes (text):
https://meetbot-raw.fedoraproject.org/gluster-meeting/2016-05-18/weekly_community_meeting_18may
Do you mean one blade enclosure with 7 blades in it, each running an
individual OS?
Chen
On 5/24/2016 3:22 PM, David Comeyne wrote:
1 node = 1 physical mini computer.
I am working with some sort of HPC (high-performance computing) machine.
So I have 7 computers (nodes) stored inside one case.
Hello Gluster developer
I have a gluster cluster with 2 nodes configured as replicated volumes;
each node has one brick.
I did a single-node failure test, including killing all gluster processes and
powering off one node.
The failed node is node_f and the other node is node_h. During
node_f's f
1 node = 1 physical mini computer.
I am working with some sort of HPC (high-performance computing) machine. So I
have 7 computers (nodes) stored inside one case.
David
From: Lindsay Mathieson [mailto:lindsay.mathie...@gmail.com]
Sent: Tuesday, 24 May 2016 6:35
To: David Comeyne
Cc: gluster-users
Command output as follows. Thanks.
[root@SVR8048HW2285 ~]# gluster volume geo-replication filews
glusterfs01.sh3.ctripcorp.com::filews_slave status
MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE    STATUS    CHECKPOINT STATUS    CRAWL STATUS
Is there any suggested best practice for the number of glfs_fd_t that can be
associated with a glfs_t? Does having a single glfs_t in an application with a
large number of glfs_fd_t cause any resource contention issues?
Thanks and Regards,
Ram
From: gluster-users-boun...@gluster.org
[mailto:glus
I figured it out.
Protocol: unix
Hostname: /var/run/glusterd.socket
Port: 0
Thanks and Regards,
Ram
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Ankireddypalle Reddy
Sent: Tuesday, May 24, 2016 10:20 AM
To: gluster-users@gluster.org
Subject: [Gl
Hi,
I am trying to use libgfapi to connect to a gluster volume over
unix domain sockets. I am not able to find the socket path that should be
provided when making the "glfs_set_volfile_server" function call.
ps -eaf | grep gluster
root     15178 31450  0 09:52 pts/1    00:00:00 gr
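Putting together the parameters from the reply earlier in this digest ("Protocol: unix, Hostname: /var/run/glusterd.socket, Port: 0"), a minimal sketch of how the call might look; the volume name "myvol" is a placeholder.

/* Fetch the volfile over glusterd's local unix socket instead of TCP.
 * Build with: gcc unix_socket.c -lgfapi */
#include <glusterfs/api/glfs.h>
#include <stdio.h>

int main(void)
{
    glfs_t *fs = glfs_new("myvol");    /* placeholder volume name */
    if (!fs)
        return 1;

    /* transport "unix", glusterd socket path, port 0 */
    if (glfs_set_volfile_server(fs, "unix", "/var/run/glusterd.socket", 0) != 0) {
        fprintf(stderr, "glfs_set_volfile_server failed\n");
        return 1;
    }

    if (glfs_init(fs) != 0) {
        fprintf(stderr, "glfs_init failed\n");
        return 1;
    }

    printf("connected over unix socket\n");
    glfs_fini(fs);
    return 0;
}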
Hi,
Please find the meeting minutes and summary:
Minutes:
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.html
Minutes (text):
https://meetbot.fedoraproject.org/gluster-meeting/2016-05-24/gluster_bug_triage.2016-05-24-12.00.txt
Xavier,
Thanks for checking this. Will test this in 3.7.12.
Thanks and Regards,
Ram
-Original Message-
From: Xavier Hernandez [mailto:xhernan...@datalab.es]
Sent: Tuesday, May 24, 2016 2:24 AM
To: Ankireddypalle Reddy; gluster-users@gluster.org
Subject: Re: [Gluster-users]
On 24/05/2016 12:54, Lindsay Mathieson wrote:
On 24/05/2016 8:24 PM, Kevin Lemonnier wrote:
So the VMs were configured with cache set to none; I just tried with
cache=directsync and it seems to be fixing the issue. Still need to run
more tests, but I did a couple already with that option and no I
On 24/05/2016 8:24 PM, Kevin Lemonnier wrote:
So the VMs were configured with cache set to none; I just tried with
cache=directsync and it seems to be fixing the issue. Still need to run
more tests, but I did a couple already with that option and no I/O errors.
Never had to do this before, is it kno
Ok, it looks like there is a problem with ssh key distribution.
Before I suggest cleaning those up and doing the setup again, could you share
the output of the following commands?
1. gluster vol geo-rep <mastervol> <slavehost>::<slavevol> status
2. ls -l /var/lib/glusterd/geo-replication/
Are there multiple geo-rep sessions from thi
So the VMs were configured with cache set to none; I just tried with
cache=directsync and it seems to be fixing the issue. Still need to run
more tests, but I did a couple already with that option and no I/O errors.
Never had to do this before, is it known? Found the clue in some old mail
from this m
Hi, This meeting is scheduled for anyone interested in learning more
about, or assisting with, Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
  ( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC (in your terminal, run:
We can establish passwordless ssh directly with the 'ssh' command, but when
creating push-pem it shows 'Passwordless ssh login has not been setup' unless we
copy secret.pem to *id_rsa.pub
[root@SVR8048HW2285 ~]# ssh -i /var/lib/glusterd/geo-replication/secret.pem
r...@glusterfs01.sh3.ctripcorp.com
Hi,
Some news on this.
I actually don't need to trigger a heal to get corruption, so the problem
is not the healing. Live migrating the VM seems to trigger corruption every
time, and even without that, just doing a database import, rebooting, then
doing another import seems to corrupt as well.
To c
Hi
Could you try the following command from the corresponding masters to the
faulty slave nodes and share the output?
The command below should not ask for a password and should run gsyncd.py.
ssh -i /var/lib/glusterd/geo-replication/secret.pem root@
To establish passwordless ssh, it is not necessary to copy s
- Original Message -
> From: "Ramavtar"
> To: gluster-users@gluster.org
> Sent: Friday, May 20, 2016 11:12:43 PM
> Subject: [Gluster-users] performance issue in gluster volume
>
> Hi Ravi,
>
> I am using a gluster volume and it has 2.7 TB of data (mp4 and jpeg
> files) with nginx webse