Yes. After I restart the volume, it works. But that can only be a workaround,
because it is sometimes impossible to restart the volume in a production
environment.
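(For reference, the restart here is just the standard stop/start cycle from the
gluster CLI; the volume name below is a placeholder:)

  # restart the affected volume (placeholder name)
  gluster volume stop <VOLNAME>
  gluster volume start <VOLNAME>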
On Tue, Apr 22, 2014 at 2:09 PM, Humble Devassy Chirammal <
humble.deva...@gmail.com> wrote:
> Hi Mingfan,
>
> Can you please try to restart the
Hi Mingfan,
Can you please try restarting the affected volume [1] and report the result?
http://review.gluster.org/#/c/7412/7/doc/release-notes/3.5.0.md
--Humble
On Tue, Apr 22, 2014 at 10:46 AM, Mingfan Lu wrote:
> I saw something in
> https://forge.gluster.org/gluster-docs-project/pages/Gl
I saw something in
https://forge.gluster.org/gluster-docs-project/pages/GlusterFS_34_Release_Notes
I wonder whether I should restart glusterd?
Known Issues:
- The following configuration changes are necessary for qemu and samba
  integration with libgfapi to work seamlessly:
  1.
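For completeness, the steps behind that known issue are roughly the following
(summarized from memory, so treat the linked release notes as authoritative;
the volume name is a placeholder):

  # 1. allow connections from unprivileged client ports on the volume
  gluster volume set <VOLNAME> server.allow-insecure on

  # 2. in /etc/glusterfs/glusterd.vol add:
  #      option rpc-auth-allow-insecure on

  # 3. restart the management daemon (or the distro's equivalent)
  service glusterd restart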
Thanks for your reply. I will attach logs as soon as I can.
We are on 3.4.1 and I followed this process: http://goo.gl/hFwCcB
which essentially details how to set attributes so the new disk space is
identified as the brick, after which full heal commands are issued on the
volumes to start repl
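In sketch form, the procedure amounts to something like this (brick paths and
the volume name are placeholders; the linked post has the exact steps):

  # copy the trusted.glusterfs.volume-id xattr from a healthy brick...
  getfattr -d -m . -e hex /bricks/healthy_brick | grep volume-id
  # ...and stamp it onto the replacement brick directory
  setfattr -n trusted.glusterfs.volume-id -v 0x<volume-id> /bricks/new_brick

  # then restart the brick and trigger a full self-heal
  gluster volume heal <VOLNAME> full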
I have created a volume named test_auth and set server.allow-insecure on
Volume Name: test_auth
Type: Distribute
Volume ID: d9bdc43e-15ce-4072-8d89-a34063e82427
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/xfsd/test_auth
Brick2: server2:/mnt/xfsd/test_auth
B
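(For reference, that option is set with the usual CLI:)

  gluster volume set test_auth server.allow-insecure on
  gluster volume info test_auth   # the option shows up under "Options Reconfigured"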
Could you attach log files please.
You said the bricks are replaced. In case of brick replacement, index-based
self-heal doesn't work, so a full self-heal needs to be triggered using "gluster
volume heal <VOLNAME> full". Could you confirm whether that command was issued?
Pranith
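(For the record, the heal CLI takes the volume name; with a placeholder it
looks like:)

  gluster volume heal <VOLNAME> full    # crawl all files, not just the change-log index
  gluster volume heal <VOLNAME> info    # check what is still pending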
Hi guys,
x-posted from irc.
We're having an issue in our prod OpenStack environment, which is backed by
gluster using two replicas (I know. I wasn't given a choice.)
We lost storage on one of the replica servers and so had to replace failed
bricks. The heal operation on Cinder and Nova volum
Hello,
I'm looking for a way to migrate an existing glusterfs volume from a KVM
client to the KVM host.
The data on the disks was written and used by glusterfs previously, so I
guess it should be possible to just use it as-is.
Unfortunately I can't find any documentation on how to "re-create" that
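One approach that is often suggested (untested here, and it gives the data a
brand-new volume identity) is to strip the old volume's xattrs from each brick
directory so a fresh create will accept it; brick paths and names below are
placeholders:

  # WARNING: untested sketch; back up the bricks first
  setfattr -x trusted.glusterfs.volume-id /bricks/old_brick
  setfattr -x trusted.gfid /bricks/old_brick
  gluster volume create <NEWVOL> <host>:/bricks/old_brick
  gluster volume start <NEWVOL>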
2014-04-21 21:12 GMT+02:00 Joe Julian :
> qemu outputs the glusterfs log to stdout:
> https://github.com/qemu/qemu/blob/stable-1.7/block/gluster.c#L211
I modified that line so that it would log to a file instead (as stdout
was not available in my case).
At the end of the logs, there is a pointer
Peter Milanese wrote:
Greeting-
I'm relatively new to the Gluster community, and would like to
investigate Gluster as a solution to augment our current storage
systems. My use of Gluster has been limited to niche use cases. Is
there anybody in the Library/Digital Repository space that has
i
On Mon, Apr 21, 2014 at 9:54 AM, Peter Milanese wrote:
> Greeting-
Hey,
>
> I'm relatively new to the Gluster community, and would like to investigate
> Gluster as a solution to augment our current storage systems. My use of
> Gluster has been limited to niche use cases. Is there anybody in the
A conversation with myself, but JoeJulian was kind enough to point out that it
was the info file and not the vol file that didn't match. Confirmed that the new
servers have two extra lines:
op-version=2
client-op-version=2
Which of course would cause a mismatch. I’ll take down everything tonight
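(The file in question lives under glusterd's working directory; a quick way to
compare across servers, with the volume name as a placeholder:)

  grep -E '^(client-)?op-version' /var/lib/glusterd/vols/<VOLNAME>/info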
As was pointed out to me by Andy5_ on the IRC channel, qemu outputs the
glusterfs log on stdout:
https://github.com/qemu/qemu/blob/stable-1.7/block/gluster.c#L211
On 04/21/2014 09:51 AM, Paul Penev wrote:
I sent the brick logs earlier. But I'm not able to produce logs from
events in KVM. I can
Replace “bricks” with “servers” as IRC just informed me my vocab was crossed.
Anyway, I wanted to add that following
http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
I can get one server to see it as not rejected (the server that I peer probe in
step 4), but the rest still
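For anyone else hitting this, that page boils down to roughly the following on
the rejected peer (paraphrased from memory, so verify against the page itself):

  service glusterd stop
  # keep glusterd.info (the node's UUID), wipe the rest of the state
  mv /var/lib/glusterd/glusterd.info /tmp/
  rm -rf /var/lib/glusterd/*
  mv /tmp/glusterd.info /var/lib/glusterd/
  service glusterd start
  gluster peer probe <good-peer>
  service glusterd restart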
Good afternoon folks,
Not sure if this is an occurrence of
https://bugzilla.redhat.com/show_bug.cgi?id=1072720 but I cannot add new bricks
to my existing cluster running under Ubuntu 12. The new bricks are completely blank.
I peer probed the new node from a trusted member. Both sides say “Peer Rejected
(Connec
I sent the brick logs earlier. But I'm not able to produce logs from
events in KVM. I can't find any logging or debugging interface. It is
somewhat weird.
Paul
2014-04-21 18:30 GMT+02:00 Joe Julian :
> I don't expect much from the bricks either, but in combination with the
> client log they might
Hi,
I will try to reproduce this in a vagrant-cluster environment.
If it can help, here is the timeline of the events.
t0: 2 servers in replicate mode, no issue
t1: power down server1 due to a hardware issue
t2: server2 still continues to serve files through NFS and FUSE, and still
continues to b
Sorry, I forgot to mention that at this point, restarting gluster on
server 15 makes the KVM client reconnect.
[2014-04-21 16:29:40.634713] I
[server-handshake.c:567:server_setvolume] 0-gtest-server: accepted
client from s15-213620-2014/04/21-09:53:17:688030-gtest-client-1-0
(version: 3.4.3)
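("Restarting gluster" here presumably means something along these lines on that
server; the exact service name depends on the distro:)

  service glusterd restart
  gluster volume start <VOLNAME> force   # respawn any brick processes that are down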
Here is the session log from the testing. Unfortunately there is
little in the logs.
1. KVM running on server 15. Bricks are on servers 14 and 15. Killing
glusterfsd on server 14
1.1. killall -KILL glusterfsd
It has no chance to log anything, so the logs below are from the
gluster server restart.
Greeting-
I'm relatively new to the Gluster community, and would like to investigate
Gluster as a solution to augment our current storage systems. My use of
Gluster has been limited to niche use cases. Is there anybody in the
Library/Digital Repository space that has implemented this for mass sto
Now that you have a simple repro, let's get clean logs for this failure.
Truncate logs, produce the error, post logs. Let's see if it's already
telling us what the problem is.
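(Concretely, something like this on each node, then reproduce the failure and
attach the fresh files:)

  # empty the existing client and brick logs without restarting anything
  truncate -s 0 /var/log/glusterfs/*.log /var/log/glusterfs/bricks/*.log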
On 4/21/2014 12:47 AM, Paul Penev wrote:
Ok, here is one more hint that points in the direction of libgfapi *client* n
Hi,
Could you send the output of gluster volume info, the exact command you are
using to start the VMs, and the cache settings you are using with KVM?
-samuli
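(For anyone following along, a libgfapi-backed drive is typically passed to
qemu along these lines; host, volume, and image names are placeholders:)

  qemu-system-x86_64 -m 2048 \
    -drive file=gluster://server1/VOLNAME/vm1.img,if=virtio,cache=none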
Paul Penev wrote on 21.4.2014 at 10.47:
> Ok, here is one more hint that points in the direction of libgfapi not
> re-establishing th
Ok, here is one more hint that points in the direction of libgfapi not
re-establishing the connections to the bricks after they come back
online: if I migrate the KVM machine (live) from one node to another
after the bricks are back online, and I kill the second brick, the KVM
will not suffer from d