Re: [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-25 Thread ABHISHEK PALIWAL
Please reply. On Wed, May 25, 2016 at 3:55 PM, ABHISHEK PALIWAL wrote: > Hi, > > I am using a replicated volume. On one board the gluster volume mount point is > working fine, but on the other board the volume is mounted and the files are present, > yet it is not displaying them. > > When

[Gluster-users] Re: geo-replication status partial faulty

2016-05-25 Thread vyyy杨雨阳
I retried many times and found that when I set the slave volume's bricks or nodes below 6, the geo-replication volume status is OK. I am not sure if this is a bug. Whether on normal or faulty nodes, the test result is the same. [root@SVR8049HW2285 ~]# bash -x /usr/libexec/glusterfs/gverify.sh filews root
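A minimal sketch of checking the session state described above; the master volume name filews appears in the thread, but the slave host and slave volume below are placeholders:

    gluster volume geo-replication filews <slavehost>::<slavevol> status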

[Gluster-users] Version compatibility

2016-05-25 Thread Dmitriy Lock
Hello all! Can you tell me - can a client with version 3.5 connect to a server with version 3.2? I ask because I have two servers based on Debian 7 and a new one on Debian 8, and I can't connect the Debian 8 client to the Debian 7 servers. Thank you very much! -- Sincerely yours,
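A quick sketch for confirming the exact versions on each side before testing connectivity (package layout may differ between Debian 7 and 8):

    glusterfs --version       # on the Debian 8 client
    glusterfsd --version      # on the Debian 7 servers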

Re: [Gluster-users] file permissions for shared access

2016-05-25 Thread Niklaas Baudet von Gersdorff
Ryan Brothers [2016-05-25 14:38 -0400] : > I am mounting a gluster volume on several clients for shared access. > The uid's and gid's of the users and groups do not match on each of > the clients, so a given user could have a different uid on each > client. I am not a professional, so take the

[Gluster-users] file permissions for shared access

2016-05-25 Thread Ryan Brothers
I am mounting a gluster volume on several clients for shared access. The uid's and gid's of the users and groups do not match on each of the clients, so a given user could have a different uid on each client. I plan to control access to the volume via IP addresses in auth.allow. If an IP address
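A minimal sketch of restricting access by IP with auth.allow; the volume name and addresses below are placeholders, not from the thread:

    gluster volume set myvol auth.allow "192.168.10.11,192.168.10.12"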

[Gluster-users] issue reading gluster area

2016-05-25 Thread Pat Haley
We had a machine with 2 hardware RAID drives/partitions combined into a single gluster drive. It was running CentOS 6.7 and Gluster 3.5(?) at the time of the OS disk crash. The 2 RAID disks that made up the gluster bricks were not affected by the OS reinstall. We had to reinstall CentOS 6.7 and glusterfs
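One way to sanity-check that the untouched bricks still carry their volume identity after the OS reinstall (a sketch; the brick path is a placeholder):

    getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1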

Re: [Gluster-users] 0-management: argument invalid

2016-05-25 Thread Niklaas Baudet von Gersdorff
Atin Mukherjee [2016-05-25 18:30 +0530] : > It seems like port 24007 is blocked by the firewall. Also, the brick process > starts binding from port 49152, so you may have to open up those ports too. I can confirm that it was related to a firewall/network issue. I could eventually solve the
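A hypothetical pf.conf fragment along those lines for the FreeBSD node; the source network and the upper end of the port range are assumptions and should be adjusted to the actual brick count:

    pass in quick proto tcp from 10.1.0.0/24 to any port 24007
    pass in quick proto tcp from 10.1.0.0/24 to any port 49152:49200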

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Kevin Lemonnier
There, I re-created the VM from scratch and still got the same errors. Attached are the logs. I created the VM on node 50 and it worked fine. I tried to reboot it and start my import again; it still worked fine. I powered off the VM, then started it again on node 2, rebooted it a bunch, and just got the

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Kevin Lemonnier
Just did that, below is the output. It didn't seem to move after the boot, and there were no new lines when the I/O errors appeared. Also, as mentioned, I tried moving the disk to NFS and had the exact same errors, so it doesn't look like it's a libgfapi problem ... I should probably re-create the VM, maybe

[Gluster-users] Weekly Community Meeting - 25/May/2016

2016-05-25 Thread Raghavendra Talur
The meeting minutes for this week's meeting are available at Minutes: https://meetbot.fedoraproject.org/gluster-meeting/2016-05-25/weekly_community_meeting_25may2016.2016-05-25-12.02.html Minutes (text):

Re: [Gluster-users] 0-management: argument invalid

2016-05-25 Thread Atin Mukherjee
On 05/25/2016 05:29 PM, Niklaas Baudet von Gersdorff wrote: > Atin Mukherjee [2016-05-25 15:50 +0530] : > >> Could you check the brick log? > > Sorry, I was not aware of the fact that there is an additional log. Here > it is: > > [2016-05-25 08:26:11.830795] I [MSGID: 100030]

Re: [Gluster-users] 0-management: argument invalid

2016-05-25 Thread Niklaas Baudet von Gersdorff
Niklaas Baudet von Gersdorff [2016-05-25 13:59 +0200] : > To me it looks like a firewall issue since there's a problem > establishing a connection (see lines 2 and 3). Yet, IP 10.1.0.1 is > on an internal interface that is not protected by the firewall: I just figured out that it's indeed a

Re: [Gluster-users] 0-management: argument invalid

2016-05-25 Thread Niklaas Baudet von Gersdorff
Atin Mukherjee [2016-05-25 15:50 +0530] : > Could you check the brick log? Sorry, I was not aware that there is an additional log. Here it is: [2016-05-25 08:26:11.830795] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/local/sbin/glusterfsd: Started running
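For reference, the per-brick logs are separate from the glusterd log; on a default install they can typically be found as sketched below (the path prefix may differ for FreeBSD builds installed under /usr/local):

    ls /var/log/glusterfs/bricks/
    less /var/log/glusterfs/bricks/<brick-path>.log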

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Lindsay Mathieson
On 25/05/2016 5:58 PM, Kevin Lemonnier wrote: I use XFS, I read that was recommended. What are you using? Since yours seems to work, I'm not opposed to changing! ZFS - RAID10 (4 * WD Red 3TB) - 8GB RAM dedicated to ZFS - SSD for log and cache (10GB and 100GB partitions respectively)
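A rough sketch of creating a similar pool; the device and partition names are made up, only the layout follows the description above:

    zpool create -o ashift=12 tank mirror sda sdb mirror sdc sdd
    zpool add tank log sde1       # ~10GB SSD partition
    zpool add tank cache sde2     # ~100GB SSD partition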

[Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-25 Thread ABHISHEK PALIWAL
Hi, I am using a replicated volume. On one board the gluster volume mount point is working fine, but on the other board the volume is mounted and the files are present, yet it is not displaying them. When I checked the brick log file of this board, I found the following error logs: [2016-05-24 10:40:34.177887] W [MSGID: 113006]
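Two commands often used to narrow this kind of problem down, assuming the volume is named myvol (the real name is not shown in the thread):

    gluster volume status myvol       # confirm all bricks are online
    gluster volume heal myvol info    # list entries pending heal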

Re: [Gluster-users] 0-management: argument invalid

2016-05-25 Thread Atin Mukherjee
On 05/25/2016 02:07 PM, Niklaas Baudet von Gersdorff wrote: > Hello, > > on a FreeBSD 10.3 machine I get the following error if I try to start > a volume with `gluster volume start volume1`: > > [2016-05-25 08:26:11.807056] W [common-utils.c:1685:gf_string2boolean] > (-->0x8048d84e4

Re: [Gluster-users] libgfapi using unix domain sockets

2016-05-25 Thread Ankireddypalle Reddy
Poornima, Thanks for checking this. We are using disperse volumes. Unix domain sockets will be used for communication between libgfapi and the brick daemons on the local server. The communication to brick daemons on the other nodes of the volume would be through tcp/rdma. Is my

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Krutika Dhananjay
Also, it seems Lindsay knows a way to get the gluster client logs when using proxmox and libgfapi. Would it be possible for you to get that sorted with Lindsay's help before recreating this issue next time and share the glusterfs client logs from all the nodes when you do hit the issue? It is

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Kevin Lemonnier
Hi, Not that I know of, no. It doesn't look like the bricks have trouble communicating, but is there a simple way to check that in GlusterFS, some sort of brick uptime? Who knows, maybe the bricks are flickering and I don't notice; that's entirely possible. As mentioned, the problem occurs on
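There is no per-brick uptime as such, but one quick way to see whether each brick process is currently online (and its PID) is sketched below; the volume name is a placeholder:

    gluster volume status myvol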

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Krutika Dhananjay
Hi Kevin, If you actually ran into a 'read-only filesystem' issue, then it could possibly be because of a bug in AFR that Pranith recently fixed. To confirm whether that is indeed the case, could you tell me if you saw the pause after a brick (a single brick) was down while IO was going on? -Krutika On

Re: [Gluster-users] geo-replication status partial faulty

2016-05-25 Thread Kotresh Hiremath Ravishankar
Answers inline Thanks and Regards, Kotresh H R - Original Message - > From: "vyyy杨雨阳" > To: "Kotresh Hiremath Ravishankar" > Cc: "Saravanakumar Arumugam" , > Gluster-users@gluster.org, "Aravinda Vishwanathapura Krishna >

[Gluster-users] 0-management: argument invalid

2016-05-25 Thread Niklaas Baudet von Gersdorff
Hello, on a FreeBSD 10.3 machine I get the following error if I try to start a volume with `gluster volume start volume1`: [2016-05-25 08:26:11.807056] W [common-utils.c:1685:gf_string2boolean] (-->0x8048d84e4 at

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Kevin Lemonnier
> What's the underlying filesystem under the bricks? I use XFS, I read that was recommended. What are you using? Since yours seems to work, I'm not opposed to changing! -- Kevin Lemonnier PGP Fingerprint : 89A5 2283 04A0 E6E9 0111 signature.asc Description: Digital signature

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Lindsay Mathieson
On 25/05/2016 5:36 PM, Kevin Lemonnier wrote: Nope, not solved! Looks like directsync just delays the problem; this morning the VM had thrown a bunch of I/O errors again. Tried writethrough and it seems to behave exactly like cache=none: the errors appear in a few minutes. Trying again with

Re: [Gluster-users] VM disks corruption on 3.7.11

2016-05-25 Thread Kevin Lemonnier
Nope, not solved! Looks like directsync just delays the problem; this morning the VM had thrown a bunch of I/O errors again. Tried writethrough and it seems to behave exactly like cache=none: the errors appear in a few minutes. Trying again with directsync and no errors for now, so it looks like
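For reference, a hypothetical qemu invocation showing where the cache mode being compared is set; the gluster volume, node, and image names are placeholders, not from the thread:

    qemu-system-x86_64 -m 4096 \
      -drive file=gluster://node50/datastore/vm-100-disk-1.qcow2,format=qcow2,cache=directsync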

[Gluster-users] Re: geo-replication status partial faulty

2016-05-25 Thread vyyy杨雨阳
Hi, Verify below before proceeding further. 1. There is only one session directory in all master nodes. ls -l /var/lib/glusterd/geo-replication/ 2. I can find a "*.status" file in those nodes where geo-replication status shows active or passive, but there is no "*.status" file when the node

Re: [Gluster-users] libgfapi using unix domain sockets

2016-05-25 Thread Poornima Gurusiddaiah
Hi, Whenever a new fd is created, it is allocated from the mem-pool; if the mem-pool is full, it will be calloc'd. The current limit for the fd-mem-pool is 1024; if there are more than 1024 fds open, then performance may be affected. Also, the unix socket used with glfs_set_volfile_server() is only

Re: [Gluster-users] geo-replication status partial faulty

2016-05-25 Thread Kotresh Hiremath Ravishankar
Hi, Verify below before proceeding further. 1. Run the following command in all the master nodes; you should find only one directory (the session directory) and the rest are all files. If you find two directories, it needs a cleanup in all master nodes to have the same session
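The command referred to above (quoted back in the earlier reply in this thread), run on every master node:

    ls -l /var/lib/glusterd/geo-replication/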