Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
Ok, here is one more hint that points in the direction of libgfapi not re-establishing the connections to the bricks after they come back online: if I migrate the KVM machine (live) from one node to another after the bricks are back online, and I kill the second brick, the KVM will not suffer from

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Samuli Heinonen
Hi, Could you send the output of gluster volume info, the exact command you are using to start the VMs, and the cache settings you are using with KVM? -samuli Paul Penev ppqu...@gmail.com wrote on 21.4.2014 at 10.47: Ok, here is one more hint that points in the direction of libgfapi not
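
A minimal sketch of the information being requested, assuming the volume name "gtest" used later in this thread; the qemu invocation is illustrative, since the reporter's actual command line is not shown in the preview:

    # Volume layout and options:
    gluster volume info gtest
    # Example libgfapi-backed VM start with an explicit cache mode;
    # host, image path and cache setting here are assumptions.
    qemu-system-x86_64 -m 2048 \
        -drive file=gluster://server15/gtest/vm1.img,format=raw,cache=none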

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Joe Julian
Now that you have a simple repro, let's get clean logs for this failure. Truncate the logs, reproduce the error, post the logs. Let's see if they're already telling us what the problem is. On 4/21/2014 12:47 AM, Paul Penev wrote: Ok, here is one more hint that points in the direction of libgfapi *client*
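
A sketch of the suggested procedure, assuming the default /var/log/glusterfs location for the brick logs:

    # Truncate the brick logs on both servers:
    for f in /var/log/glusterfs/bricks/*.log; do : > "$f"; done
    # ... reproduce the failure (kill the brick, write from the VM, restart) ...
    # then collect and post the freshly written logs.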

[Gluster-users] Scaling for repository purposes

2014-04-21 Thread Peter Milanese
Greetings- I'm relatively new to the Gluster community, and would like to investigate Gluster as a solution to augment our current storage systems. My use of Gluster has been limited to niche use cases. Is there anybody in the Library/Digital Repository space that has implemented this for mass

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
Here is the session log from the testing. Unfortunately there is little in the logs. 1. KVM running on server 15; bricks are on servers 14 and 15. Killing glusterfsd on server 14. 1.1. killall -KILL glusterfsd. It had no chance to log anything, so the logs below are from the gluster server restart.
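
In outline, the repro sequence described above looks like this (host names are from the thread; the service name is a Debian/Ubuntu-style assumption):

    # On server 14, kill the brick process outright:
    killall -KILL glusterfsd
    # The KVM guest on server 15 keeps running against the surviving brick.
    # Later, bring the brick back and watch whether the libgfapi client reconnects:
    service glusterfs-server restart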

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
Sorry, I forgot to mention that at this point, restarting gluster on server 15 leads to a reconnect of the KVM client. [2014-04-21 16:29:40.634713] I [server-handshake.c:567:server_setvolume] 0-gtest-server: accepted client from s15-213620-2014/04/21-09:53:17:688030-gtest-client-1-0 (version:

Re: [Gluster-users] Conflicting entries for symlinks between bricks (Trusted.gfid not consistent)

2014-04-21 Thread PEPONNET, Cyril N (Cyril)
Hi, I will try to reproduce this in a vagrant-cluster environment. If it can help, here is the timeline of the events. t0: 2 servers in replicate mode, no issue. t1: server1 powered down due to a hardware issue. t2: server2 still continues to serve files through NFS and FUSE, and still continues to

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
I sent the brick logs earlier, but I'm not able to produce logs from events in KVM. I can't find any logging or debugging interface. It is somewhat weird. Paul 2014-04-21 18:30 GMT+02:00 Joe Julian j...@julianfamily.org: I don't expect much from the bricks either, but in combination with the

[Gluster-users] Upgrade from 3.3 to 3.4.3 now can't add bricks

2014-04-21 Thread Brandon Mackie
Good afternoon folks, I'm not sure if this is an occurrence of https://bugzilla.redhat.com/show_bug.cgi?id=1072720, but I cannot add new bricks to my existing cluster running under Ubuntu 12. The new bricks are completely blank. I peer-probed the new one from a trusted member. Both sides say “Peer Rejected

Re: [Gluster-users] Upgrade from 3.3 to 3.4.3 now can't add bricks

2014-04-21 Thread Brandon Mackie
Replace “bricks” with “servers”, as IRC just informed me my vocabulary was crossed. Anyway, I wanted to add that by following http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected I can get one server to see it as not rejected (the server that I peer-probe in step 4; see the outline below), but the rest
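
For reference, an outline of the linked Resolving_Peer_Rejected procedure (hedged; the wiki page above is authoritative). It is run on the rejected server:

    service glusterd stop
    # Remove everything under /var/lib/glusterd EXCEPT glusterd.info:
    find /var/lib/glusterd -mindepth 1 -maxdepth 1 \
        ! -name glusterd.info -exec rm -rf {} +
    service glusterd start
    gluster peer probe <good-peer>     # the "step 4" referred to above
    service glusterd restart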

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Joe Julian
As was pointed out to me by Andy5_ on the IRC channel, qemu outputs the glusterfs log to stdout: https://github.com/qemu/qemu/blob/stable-1.7/block/gluster.c#L211 On 04/21/2014 09:51 AM, Paul Penev wrote: I sent the brick logs earlier, but I'm not able to produce logs from events in KVM. I
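
Given that, one way to capture the libgfapi log is simply to redirect qemu's stdout when starting the guest; the command line below is illustrative, not the reporter's actual one:

    qemu-system-x86_64 \
        -drive file=gluster://server15/gtest/vm1.img,format=raw \
        > /var/log/qemu-gtest.log 2>&1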

Re: [Gluster-users] Upgrade from 3.3 to 3.4.3 now can't add bricks

2014-04-21 Thread Brandon Mackie
A conversation with myself, but JoeJulian was kind enough to point out that it was the info file, and not the vol file, that didn't match. Confirmed that the new servers have two extra lines: op-version=2 and client-op-version=2, which of course would cause a mismatch. I'll take everything down tonight
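
A quick way to see the mismatch described above, assuming the default /var/lib/glusterd layout ("volname" and "newserver" are placeholders):

    ssh newserver cat /var/lib/glusterd/vols/<volname>/info | \
        diff /var/lib/glusterd/vols/<volname>/info -
    # On the new servers the info file carries two extra lines:
    #   op-version=2
    #   client-op-version=2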

Re: [Gluster-users] Scaling for repository purposes

2014-04-21 Thread James
On Mon, Apr 21, 2014 at 9:54 AM, Peter Milanese petermilan...@nypl.org wrote: Greetings- Hey, I'm relatively new to the Gluster community, and would like to investigate Gluster as a solution to augment our current storage systems. My use of Gluster has been limited to niche use cases. Is

Re: [Gluster-users] Scaling for repository purposes

2014-04-21 Thread Joop
Peter Milanese wrote: Greetings- I'm relatively new to the Gluster community, and would like to investigate Gluster as a solution to augment our current storage systems. My use of Gluster has been limited to niche use cases. Is there anybody in the Library/Digital Repository space that has

Re: [Gluster-users] libgfapi failover problem on replica bricks

2014-04-21 Thread Paul Penev
2014-04-21 21:12 GMT+02:00 Joe Julian j...@julianfamily.org: qemu outputs the glusterfs log to stdout: https://github.com/qemu/qemu/blob/stable-1.7/block/gluster.c#L211 I modified that line so that it would log to a file instead (as stdout was not available in my case). At the end of the logs,

[Gluster-users] New volume with existing glusterfs data?

2014-04-21 Thread Peter B.
Hello, I'm looking for a way to migrate an existing glusterfs volume from a KVM client to the KVM host. The data on the disks was written and used by glusterfs previously, so I guess it should be possible to just use it as-is. Unfortunately I can't find any documentation on how to re-create that
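
One approach commonly described on community blogs (unofficial, and all paths and names here are placeholders) is to strip the old volume markers from each brick and create a fresh volume over the existing data:

    # On every brick directory, remove the markers left by the old volume:
    setfattr -x trusted.glusterfs.volume-id /path/to/brick
    setfattr -x trusted.gfid /path/to/brick
    rm -rf /path/to/brick/.glusterfs
    # Then re-create the volume over the same paths:
    gluster volume create newvol replica 2 \
        host1:/path/to/brick host2:/path/to/brick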

Re: [Gluster-users] Slow healing times on large cinder and nova volumes

2014-04-21 Thread Pranith Kumar Karampuri
Could you attach the log files, please? You said the bricks were replaced. In the case of brick replacement, index-based self-heal doesn't work, so a full self-heal needs to be triggered using gluster volume heal volname full. Could you confirm whether that command was issued? Pranith - Original Message -
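
The command in question, with "volname" as a placeholder:

    # Trigger a full self-heal (index-based healing does not cover replaced bricks):
    gluster volume heal <volname> full
    # Check progress afterwards:
    gluster volume heal <volname> info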

Re: [Gluster-users] Slow healing times on large cinder and nova volumes

2014-04-21 Thread Schmid, Larry
Thanks for your reply. I will attach logs as soon as I can. We are on 3.4.1 and I followed this process: http://goo.gl/hFwCcB, which essentially details how to set attributes so that the new disk space is identified as the brick, after which full-heal commands are issued on the volumes to start
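
The attribute step referenced above, sketched with placeholder paths (the linked page is authoritative): the replacement brick has to carry the volume's trusted.glusterfs.volume-id xattr before glusterd will treat it as the brick:

    # Read the id from a surviving brick:
    getfattr -n trusted.glusterfs.volume-id -e hex /path/to/old/brick
    # Stamp it onto the new brick directory:
    setfattr -n trusted.glusterfs.volume-id -v 0x<hex-id-from-above> /path/to/new/brick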

Re: [Gluster-users] server.allow-insecure doesn't work in 3.4.2?

2014-04-21 Thread Mingfan Lu
I saw something in https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes and I wonder whether I should restart glusterd. Known Issues: - The following configuration changes are necessary for qemu and samba integration with libgfapi to work seamlessly:
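
For reference, the changes those release notes describe are, in outline (volume name is a placeholder; the glusterd.vol edit does require a glusterd restart, which answers the question above):

    gluster volume set <volname> server.allow-insecure on
    # In /etc/glusterfs/glusterd.vol add:
    #   option rpc-auth-allow-insecure on
    # then restart glusterd (init-system dependent), e.g.:
    service glusterd restart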