All,
Running glusterfs-4.0.2-1 on CentOS 7.5.1804
I have 10 servers running in a pool. All show as connected when I do
gluster peer status and gluster pool list.
There is 1 volume running that is distributed on servers 1-5.
I try using a brick on server7 and it always gives me:
/volume creat
Hi and thanks,
Well, I definitely don't want to ruin my change log (the extended attributes),
so
I'll have to think of a new way to integrate glusterfs in our use case.
Thanks for confirming my suspicion.
Regards
Andreas
On 04/13/15 06:16, Atin Mukherjee wrote:
On 04/11/2015 02:25 PM, Andreas Hollaus wrote:
Hi and thanks,
Well, I guess I didn't explain my problem particularly well. Sorry for that.
I guess that a normal gluster user would in the beginning add the required
replica bricks and then, whenever the servers restart, let gluster restart
using the same configuration files as before the restart.
On 04/11/2015 01:21 AM, Andreas Hollaus wrote:
Hi,
I wonder what happens when the command 'gluster volume create...' is
executed? How is the file system on the brick affected by that command,
data and meta-data (extended attributes)?
The reason for this question is that I have a strange use case where my
2 (mirrored) servers are restarted …
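As for the question itself: in the versions I'm familiar with, 'gluster volume create' does not touch existing file data on the brick, but it does stamp the brick root with extended attributes, notably trusted.glusterfs.volume-id (the volume's UUID), which is also how gluster later recognizes a path as already belonging to a volume. A way to inspect that on a real brick (the command is only printed here, since reading trusted.* xattrs needs root and an actual brick; the path is an example):

```shell
# The brick root gets extended attributes when it becomes part of a
# volume; trusted.glusterfs.volume-id is the one used to detect reuse.
# Printed rather than executed: the brick path below is hypothetical.
brick=/export/brick1/myvol
echo "getfattr -d -m trusted -e hex $brick"
```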
Hi,
Thanks.
I think I'm sticking to using '/export/lvmvolumename/' as the
mountpoint, with a subdirectory 'brick' in it for glusterfs:
/export/glusterfslv1/brick
This has the same advantage as you mention, and tells me which lvm2
volumes are part of the gluster volume. I think I'm not going to use
numbered names …
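That layout can be sketched like this (a temp directory stands in for the real /export mount, and the names are examples from this thread):

```shell
# Stand-in for /export; each LVM logical volume would be mounted at
# /export/<lvname>, with the brick living in a 'brick' subdirectory.
export_root=$(mktemp -d)
mkdir -p "$export_root/glusterfslv1/brick"

# The brick path handed to 'gluster volume create' would then be:
echo "$export_root/glusterfslv1/brick"
```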
On Tue, May 14, 2013 at 08:56:02PM +0200, John Smith wrote:
> Thanks. Is there a preferred naming convention to go along with it ?
It's up to you, but I used /exports/brick1/myvol, /exports/brick2/myvol
and it seemed logical to me.
There's an important benefit to doing this: if the filesystem doesn't
mount, the brick directory is missing, so the brick can't start and
nothing gets written to the root filesystem under the empty mount point.
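A minimal sketch of that check (illustrative paths; a temp directory stands in for the real mount point):

```shell
# If the brick filesystem mounted correctly, the subdirectory exists
# and the brick may start; if the mount failed, the subdirectory is
# absent and nothing is written to the root fs by mistake.
mnt=$(mktemp -d)            # stand-in for /exports/brick1
mkdir "$mnt/myvol"          # created once, after the first mount

if [ -d "$mnt/myvol" ]; then
    echo "brick dir present, safe to start brick"
else
    echo "mount missing, refusing to start brick"
fi
```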
Hi,
I realize there is no 'hard' reason to do it a certain way, but my
guess is that people are gonna do pretty much what the docs say, at
least for their first cluster.
I really like this approach:
doing /exports/brick1 as mountpoint, and the subdir as the disk name
or lvm2 volume (which then can …
There is no hard rule/convention. Ideally you would want the brick
directory name to match the volume name, rather than something as isolated
as "brick1".
For example, if you have a volume called 'music', the brick name could be
/export/LVM/music; similarly, you could share the same LVM for video with a brick
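That convention might look like this on disk (a temp directory stands in for the shared LVM mount; 'music' and 'video' are the example volume names from above):

```shell
# One shared mount (e.g. /export/LVM), one brick directory per volume,
# each named after the volume it belongs to.
lvm_mount=$(mktemp -d)
mkdir -p "$lvm_mount/music" "$lvm_mount/video"
ls "$lvm_mount"
```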
Hrm.
/exports/lvm2volname as the mountpoint, with 'brick' as the subdir
(resulting in /exports/lvm2volname/brick) seems nice ...
Guess I'll go and do that. Maybe nice for the docs, too?
;)
- John
On Tue, May 14, 2013 at 8:56 PM, John Smith wrote:
Hi,
Thanks. Is there a preferred naming convention to go along with it ?
Doing /exports/brick1/brick1 seems a bit silly. Also, the /export
mountpoint you suggest makes things a little ugly too, as it will
result in /export1, /export2, etc.
Any ideas?
Thanks,
John Smith.
On Tue, May 14
Fixed the doc. Thanks for pointing out!
On Tue, May 14, 2013 at 11:51 AM, Anand Avati wrote:
The quick start guide needs to be updated. The brick directory should
ideally be a sub-directory of a mount point (and not a mount point
directory itself) for ease of administration. We recently added code to
warn about this (and it looks like the code check exposed a documentation bug,
which you just hit).
Hi,
I'm trying to follow this guide to set up a simple cluster:
http://www.gluster.org/community/documentation/index.php/QuickStart
But when I issue this command :
gluster volume create glustervol01 replica 2
192.168.126.128:/export/brick1 192.168.126.129:/export/brick1
I get the following error:
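The error text is cut off here, but given the rest of the thread it is most likely the new mount-point check: the bricks should be subdirectories of the mount points, not the mount points themselves. The corrected command would be constructed along these lines (only printed, since gluster isn't assumed to be installed here; the 'brick' subdirectory name is an example):

```shell
# Bricks as subdirectories of the mount points, per the advice above.
b1="192.168.126.128:/export/brick1/brick"
b2="192.168.126.129:/export/brick1/brick"
echo "gluster volume create glustervol01 replica 2 $b1 $b2"
```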