Re: [Gluster-users] moving drives containing bricks from one server to another

2017-07-18 Thread Ted Miller
It would help if you said what kind of volumes you have, as they all work a little differently. You should NOT destroy the old volumes. It should be quite possible to move the bricks to a new server and get them running as part of THE SAME gluster
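A minimal sketch of the kind of commands such a move can involve (hypothetical volume, host, and brick names; the exact steps depend on the volume type, which this reply asks about):

    # Hypothetical names: volume gv0, old server old1, new server new1,
    # brick remounted at the same path on the new server.
    gluster peer probe new1
    # Point the volume at the moved brick; "commit force" skips data
    # migration because the brick already contains the data.
    gluster volume replace-brick gv0 old1:/data/brick1 new1:/data/brick1 commit force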

Re: [Gluster-users] set owner:group on root of volume

2017-07-18 Thread mabi
Unfortunately the root directory of my volume still gets its owner and group reset to root. Can someone explain why, or help with this issue? I need it to be set to UID/GID 1000 and stay like that. Thanks > Original Message > Subject: Re: set owner:group on root of volume >
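For reference, GlusterFS has volume options that pin the ownership of the volume root, which is one way to keep the UID/GID from being reset ("myvol" is a placeholder name, not from this thread):

    gluster volume set myvol storage.owner-uid 1000
    gluster volume set myvol storage.owner-gid 1000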

Re: [Gluster-users] Sporadic Bus error on mmap() on FUSE mount

2017-07-18 Thread Niels de Vos
On Tue, Jul 18, 2017 at 01:55:17PM +0200, Jan Wrona wrote: > On 18.7.2017 12:17, Niels de Vos wrote: > > On Tue, Jul 18, 2017 at 10:48:45AM +0200, Jan Wrona wrote: > > > Hi, > > > > > > I need to use rrdtool on top of a Gluster FUSE mount, rrdtool uses > > > memory-mapped file IO extensively (I

Re: [Gluster-users] Sporadic Bus error on mmap() on FUSE mount

2017-07-18 Thread Jan Wrona
On 18.7.2017 12:17, Niels de Vos wrote: On Tue, Jul 18, 2017 at 10:48:45AM +0200, Jan Wrona wrote: Hi, I need to use rrdtool on top of a Gluster FUSE mount, rrdtool uses memory-mapped file IO extensively (I know I can recompile rrdtool with mmap() disabled, but that is just a workaround). I

Re: [Gluster-users] moving drives containing bricks from one server to another

2017-07-18 Thread Andy Tai
Hi, I did not see a reply to my problem. Let me ask it in a different way... If I have bricks from a previous glusterfs volume, and that volume is now gone because the old machine was replaced, and I now tried to create a new volume and add the old bricks to the new volume with the "force"
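One commonly cited recipe (an assumption here, not something confirmed in this thread) for why "force" can still fail: bricks keep gluster's identity xattrs from the old volume, which have to be cleared before the brick path can be reused (brick path is a placeholder):

    # Clear the old volume's identity xattrs and metadata directory.
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs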

Re: [Gluster-users] Sporadic Bus error on mmap() on FUSE mount

2017-07-18 Thread Niels de Vos
On Tue, Jul 18, 2017 at 10:48:45AM +0200, Jan Wrona wrote: > Hi, > > I need to use rrdtool on top of a Gluster FUSE mount, rrdtool uses > memory-mapped file IO extensively (I know I can recompile rrdtool with > mmap() disabled, but that is just a workaround). I have three FUSE mount > points on

Re: [Gluster-users] Remove and re-add bricks/peers

2017-07-18 Thread Atin Mukherjee
Wipe off /var/lib/glusterd/* On Tue, 18 Jul 2017 at 12:48, Tom Cannaerts - INTRACTO <tom.cannae...@intracto.com> wrote: > We'll definitely look into upgrading this, but it's an older, legacy system > so we need to see what we can do without breaking it. > > Returning to the re-adding question,
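A cautious reading of "wipe off /var/lib/glusterd/*" as a sketch (keeping glusterd.info so the peer retains its UUID is an assumption here, not stated in this reply):

    # Stop the daemon first; older systems may use "service glusterd stop".
    systemctl stop glusterd
    # Remove everything under /var/lib/glusterd except glusterd.info.
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd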

[Gluster-users] Sporadic Bus error on mmap() on FUSE mount

2017-07-18 Thread Jan Wrona
Hi, I need to use rrdtool on top of a Gluster FUSE mount, rrdtool uses memory-mapped file IO extensively (I know I can recompile rrdtool with mmap() disabled, but that is just a workaround). I have three FUSE mount points on three different servers, on one of them the command "rrdtool create
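A hedged way to narrow down such SIGBUS failures (an assumption, not this thread's confirmed diagnosis) is to rule out the client-side caching translators one at a time and retry the mmap() workload ("myvol" is a placeholder name):

    gluster volume set myvol performance.write-behind off
    gluster volume set myvol performance.open-behind off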

Re: [Gluster-users] Bug 1374166 or similar

2017-07-18 Thread Bernhard Dübi
Hi Jiffin, thank you for the explanation. Kind Regards, Bernhard 2017-07-18 8:53 GMT+02:00 Jiffin Tony Thottan: > > > On 16/07/17 20:11, Bernhard Dübi wrote: >> >> Hi, >> >> both Gluster servers were rebooted and now the unlink directory is clean. > > > The following should have

Re: [Gluster-users] Remove and re-add bricks/peers

2017-07-18 Thread Tom Cannaerts - INTRACTO
We'll definitely look into upgrading this, but it's an older, legacy system, so we need to see what we can do without breaking it. Returning to the re-adding question, what steps do I need to take to clear the config of the failed peers? Do I just wipe the data directory of the volume, or do I need

Re: [Gluster-users] Bug 1374166 or similar

2017-07-18 Thread Jiffin Tony Thottan
On 16/07/17 20:11, Bernhard Dübi wrote: Hi, both Gluster servers were rebooted and now the unlink directory is clean. The following should have happened: if a delete operation is performed, gluster keeps the file in the .unlink directory if it has an open fd. In this case, since a lazy umount was performed,
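For illustration, the unlink directory referred to here lives inside each brick's .glusterfs metadata tree, so its contents can be inspected directly on a brick (brick path is a placeholder):

    ls -l /data/brick1/.glusterfs/unlink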