Ok, here is one more hint that points in the direction of libgfapi not
re-establishing the connections to the bricks after they come back
online: if I migrate the KVM machine (live) from one node to another
after the bricks are back online, and I kill the second brick, the KVM
will not suffer from
Hi,
Could you send the output of gluster volume info, the exact command you are
using to start VMs, and the cache settings you are using with KVM?
-samuli
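For context on the question above, a typical libgfapi invocation looks like the following; the gluster:// drive syntax is what routes the disk through libgfapi rather than FUSE. Host, volume, and image names are placeholders, not taken from this thread:

```shell
# Illustrative only -- server14, gtest, and vm1.qcow2 are placeholders.
# cache=none is one of the cache settings Samuli is asking about.
qemu-system-x86_64 \
  -drive file=gluster://server14/gtest/vm1.qcow2,if=virtio,cache=none \
  -m 1024 -enable-kvm
```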
Paul Penev ppqu...@gmail.com wrote on 21.4.2014 at 10.47:
Ok, here is one more hint that points in the direction of libgfapi not
Now that you have a simple repro, let's get clean logs for this failure.
Truncate the logs, reproduce the error, and post them. Let's see if they're
already telling us what the problem is.
On 4/21/2014 12:47 AM, Paul Penev wrote:
Ok, here is one more hint that points in the direction of libgfapi *client*
Greeting-
I'm relatively new to the Gluster community, and would like to investigate
Gluster as a solution to augment our current storage systems. My use of
Gluster has been limited to niche use cases. Is there anybody in the
Library/Digital Repository space that has implemented this for mass
Here is the session log from the testing. Unfortunately there is
little in the logs.
1. KVM running on server 15. Bricks are on servers 14 and 15. Killing
glusterfsd on server 14
1.1. killall -KILL glusterfsd
It had no chance to log anything, so the logs below are from the
gluster server restart.
Sorry, I forgot to mention that at this point, restarting gluster on
server 15 leads to reconnect of the KVM client.
[2014-04-21 16:29:40.634713] I
[server-handshake.c:567:server_setvolume] 0-gtest-server: accepted
client from s15-213620-2014/04/21-09:53:17:688030-gtest-client-1-0
(version:
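Condensed, the repro in this session looks like the following (ops fragment; the service name is the Ubuntu-era one and may differ on other distros):

```shell
# On server 14: hard-kill the brick process (it gets no chance to log).
killall -KILL glusterfsd
# ...bring the brick back; the KVM guest on server 15 stays disconnected...
# On server 15: restarting gluster makes the KVM client reconnect.
service glusterfs-server restart
```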
Hi,
I will try to reproduce this in a vagrant-cluster environment.
If it helps, here is the timeline of the events:
t0: 2 servers in replicate mode, no issue
t1: power down server1 due to a hardware issue
t2: server2 still continues to serve files through NFS and FUSE, and still
continues to
I sent the brick logs earlier. But I'm not able to produce logs from
events in KVM. I can't find any logging or debugging interface. It is
somewhat weird.
Paul
2014-04-21 18:30 GMT+02:00 Joe Julian j...@julianfamily.org:
I don't expect much from the bricks either, but in combination with the
Good afternoon folks,
Not sure if this is an occurrence of
https://bugzilla.redhat.com/show_bug.cgi?id=1072720, but I cannot add new bricks
to my existing cluster running under Ubuntu 12. The new bricks are completely blank.
I peer probed the new one from a trusted member. Both sides say “Peer Rejected
Replace “bricks” with “servers” as IRC just informed me my vocab was crossed.
Anyway, I wanted to add that following
http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
I can get one brick to see it as not rejected (the brick that I peer probe in
step 4), but the rest
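The procedure on that page boils down to resetting the rejected peer's local state; a rough sketch from memory (destructive -- verify against the page before running, and note that service names vary by distro):

```shell
# Ops fragment: run on the REJECTED peer only. Back up /var/lib/glusterd first.
service glusterd stop
# Keep the node's own identity (glusterd.info), clear peer/volume state:
find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
service glusterd start
gluster peer probe <good-peer>    # re-probe a healthy trusted member
service glusterd restart          # pick up the re-synced volume definitions
```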
As was pointed out to me by Andy5_ on the IRC channel, qemu outputs the
glusterfs log on stdout:
https://github.com/qemu/qemu/blob/stable-1.7/block/gluster.c#L211
On 04/21/2014 09:51 AM, Paul Penev wrote:
I sent the brick logs earlier. But I'm not able to produce logs from
events in KVM. I
A conversation with myself, but JoeJulian was kind enough to point out that it
was the info file, and not the vol file, that didn't match. Confirmed that the
new servers have two extra lines:
op-version=2
client-op-version=2
Which of course would cause a mismatch. I’ll take down everything tonight
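Those two lines are easy to check for directly. A small self-contained sketch: the real file lives at /var/lib/glusterd/vols/&lt;volname&gt;/info on each node; the /tmp paths below just stand in for two peers' copies:

```shell
# Stand-ins for two peers' copies of /var/lib/glusterd/vols/<volname>/info
mkdir -p /tmp/peer-old /tmp/peer-new
printf 'type=2\ncount=2\n' > /tmp/peer-old/info
printf 'type=2\ncount=2\nop-version=2\nclient-op-version=2\n' > /tmp/peer-new/info
# The two extra lines that cause the info-file mismatch:
grep -E '^(client-)?op-version=' /tmp/peer-new/info
```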
On Mon, Apr 21, 2014 at 9:54 AM, Peter Milanese petermilan...@nypl.org wrote:
Greeting-
Hey,
I'm relatively new to the Gluster community, and would like to investigate
Gluster as a solution to augment our current storage systems. My use of
Gluster has been limited to niche use cases. Is
Peter Milanese wrote:
Greeting-
I'm relatively new to the Gluster community, and would like to
investigate Gluster as a solution to augment our current storage
systems. My use of Gluster has been limited to niche use cases. Is
there anybody in the Library/Digital Repository space that has
2014-04-21 21:12 GMT+02:00 Joe Julian j...@julianfamily.org:
qemu outputs the glusterfs log to stdout:
https://github.com/qemu/qemu/blob/stable-1.7/block/gluster.c#L211
I modified that line so that it would log to a file instead (as stdout
was not available in my case).
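The change Paul describes can be sketched as a one-line edit before building qemu. The exact shape of the call at that line is an assumption based on the 1.7-era source (glfs_set_logging(glfs, "-", ...), where "-" means log to stdout); the sketch below mocks the source line so it is self-contained -- on a real tree, run the sed against block/gluster.c:

```shell
# Mock the qemu source line (assumed shape), then patch it to log to a file.
mkdir -p /tmp/qemu/block
printf 'ret = glfs_set_logging(glfs, "-", 4);\n' > /tmp/qemu/block/gluster.c
sed -i 's|glfs_set_logging(glfs, "-"|glfs_set_logging(glfs, "/var/log/qemu-gluster.log"|' \
    /tmp/qemu/block/gluster.c
cat /tmp/qemu/block/gluster.c
```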
At the end of the logs,
Hello,
I'm looking for a way to migrate an existing glusterfs volume from a KVM
client to the KVM host.
The data on the disks was written and used by glusterfs previously, so I
guess it should be possible to just use it as-is.
Unfortunately I can't find any documentation on how to re-create that
Could you attach the log files, please?
You said the bricks were replaced. In the case of brick replacement, index-based
self-heal doesn't work, so a full self-heal needs to be triggered using gluster
volume heal volname full. Could you confirm whether that command was issued?
Pranith
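For reference, the full-heal trigger and a way to watch it (ops fragment; "volname" is the placeholder used above, and both commands need a live cluster):

```shell
# Full crawl of the volume; index-based heal won't cover a replaced brick.
gluster volume heal volname full
# Check what is still pending heal:
gluster volume heal volname info
```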
- Original Message -
Thanks for your reply. I will attach logs as soon as I can.
We are on 3.4.1 and I followed this process: http://goo.gl/hFwCcB
which essentially details how to set attributes so the new disk space is
identified as the brick, after which full heal commands are issued on the
volumes to start
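The goo.gl link above is opaque, but the usual shape of that "set attributes" step is the volume-id xattr trick; a rough sketch with placeholder paths and IDs, assuming that is the process the write-up describes -- the linked write-up is authoritative:

```shell
# Run as root. /data/brick and the hex id are placeholders.
# On a node with a surviving brick, read the volume id:
getfattr -n trusted.glusterfs.volume-id -e hex /data/brick
# On the node with the new, empty brick, stamp the same id on it so
# glusterd accepts the directory as the brick:
setfattr -n trusted.glusterfs.volume-id -v 0x<hex-id-from-above> /data/brick
# Then trigger the full heal as described:
gluster volume heal volname full
```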
I saw something in
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes
I wonder whether I should restart glusterd?
Known Issues:
- The following configuration changes are necessary for qemu and samba
integration with libgfapi to work seamlessly:
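From memory, the changes those release notes list are the allow-insecure settings; treat this as a sketch and confirm against the notes themselves ("volname" is a placeholder):

```shell
# Config fragment: allow unprivileged client ports for libgfapi clients.
gluster volume set volname server.allow-insecure on
# And in /etc/glusterfs/glusterd.vol, inside the management volume block:
#     option rpc-auth-allow-insecure on
# Then restart glusterd on each server:
service glusterd restart
```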