Re: [Gluster-users] cluster of 3 nodes and san

2018-04-27 Thread Ricky Gutierrez
2018-04-27 7:13 GMT-06:00 Serkan Çoban :
>>but the doubt is whether I can use GlusterFS with a SAN connected by FC?
> Yes, just format the volumes with XFS and you're ready to go.
>

OK, excellent.


>
> For a replica across different DCs, be careful about latency. What is the
> connection between the DCs?
> It can work if latency is low.
>
Right now we have a 1 Gb Ethernet connection between both clusters; all my
switches are Cisco, and we have plans to upgrade to 10 Gb Ethernet.



-- 
rickygm

http://gnuforever.homelinux.com

Re: [Gluster-users] Reconstructing files from shards

2018-04-27 Thread Jamie Lawrence

> On Apr 26, 2018, at 9:00 PM, Krutika Dhananjay  wrote:

> But anyway, why is copying data into a new unsharded volume disruptive for you?

The copy itself isn't; blowing away the existing volume and recreating it is.

That is for the usual reasons: storage on the cluster machines is not
infinite, the cluster serves a purpose that humans rely on, and downtime is
expensive.

-j



Re: [Gluster-users] Problem adding replicated bricks on FreeBSD

2018-04-27 Thread Mark Staudinger


Kaushal M wrote:

On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger 
wrote:


Hi Folks,
I'm trying to debug an issue that I've found while attempting to qualify
GlusterFS for potential distributed storage projects on the FreeBSD-11.1
server platform, using the existing package of GlusterFS v3.11.1_4.
The main issue I've encountered is that I cannot add new bricks while
setting/increasing the replica count.
If I create a replicated volume "poc" on two hosts, say s1:/gluster/1/poc
and s2:/gluster/1/poc, the volume is created properly, shows replicated
status, and files are written to both bricks.
If I create a volume with s1:/gluster/1/poc as a single (distributed)
brick and then try to run
gluster volume add-brick poc replica 2 s2:/gluster/1/poc
it always fails (sometimes after a pause, sometimes not). The only
error I'm seeing on the server hosting the new brick, aside from the
generic "Unable to add bricks" message, is this:
I [MSGID: 106578]
[glusterd-brick-ops.c:1352:glusterd_op_perform_add_bricks] 0-management:
replica-count is set 2
I [MSGID: 106578]
[glusterd-brick-ops.c:1362:glusterd_op_perform_add_bricks] 0-management:
type is set 2, need to change it
E [MSGID: 106054]
[glusterd-utils.c:12974:glusterd_handle_replicate_brick_ops] 0-management:
Failed to set extended attribute trusted.add-brick : Operation not
supported [Operation not supported]

The log here indicates that the filesystem on the new brick being added
doesn't support setting xattrs.
Maybe check the new brick again?


E [MSGID: 106074] [glusterd-brick-ops.c:2565:glusterd_op_add_brick]
0-glusterd: Unable to add bricks
E [MSGID: 106123] [glusterd-mgmt.c:311:gd_mgmt_v3_commit_fn] 0-management:
Add-brick commit failed.
I was initially using ZFS and noted that ZFS on FreeBSD does not support
xattr, so I reverted to using UFS as the storage type for the brick, and
still encounter this behavior.
I also recompiled the port (again, GlusterFS v3.11.1) with the patch from
https://bugzilla.redhat.com/show_bug.cgi?id=1484246 as this deals
specifically with xattr handling in FreeBSD.
To recap: I'm able to create any type of volume (2- or 3-way replicated or
distributed), but I'm unable to add replicated bricks to a volume.
I was, however, able to add a second distributed brick (gluster volume
add-brick poc s2:/gluster/1/poc), so the issue seems specific to adding
and/or changing the replica count while adding a new brick.
Please let me know if there are any other issues in addition to bug
#1452961 I should be aware of, or additional log or debug info I can
provide.
Best Regards,
Mark

Hi Kaushal,

That's what the error message would indicate, but UFS2 does support
xattrs, so that's a bit unexpected. Since I'm able to add a distributed
brick, I can rule out basic setup issues.


All the same, I can see the failure via 'truss':

extattr_set_link(0x8081ff890,0x1,0x803b27886,0x805258c40,0x400) ERR#45
'Operation not supported'
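
For anyone who wants to reproduce the check without truss, a quick manual
test of extended-attribute support on the brick path would be something
like the following (the attribute name is just a throwaway example):

setextattr user testkey testval /gluster/1/poc
getextattr user testkey /gluster/1/poc
rmextattr user testkey /gluster/1/poc
# the system namespace needs root and, I believe, is where the trusted.*
# attributes end up on FreeBSD
setextattr system testkey testval /gluster/1/poc

If those fail as well, it would point at a filesystem or mount issue rather
than anything gluster-specific.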


Is anyone else using a similar setup (FreeBSD >= 11.1-RELEASE with the
official pkg) who has tested adding bricks in this way? Are there any
changes to the newfs or mount settings required for this to work that
aren't obvious?


Cheers,
-=Mark

Re: [Gluster-users] RMAN backups on Glusters

2018-04-27 Thread Anand Paladugu
Hi Hemant

I am not sure if Gluster is supported for RMAN.  Please check on that.

But I did find a couple of articles online related to the ORA-19510 errors
that point to either no space left or improper RMAN configuration causing
this. They may be of help. Check them out:

https://oradbps.wordpress.com/2017/02/11/ora-19510-failed-to-set-size-of-blocks-for-file-block-size/

https://thejunkfiles.com/rman-backup-errors-ora-19510-and-ora-27045-on-nfs-share/

Best

Anand



Anand Paladugu

Principal Product Manager - Gluster Storage

314 Littleton Road, Westford, MA 01886

apala...@redhat.com  M: 1 774-922-0036


On Wed, Apr 25, 2018 at 1:03 PM, Hemant Mamtora 
wrote:

> Sending again.
>
> Can somebody please take a look and let me know is this is doable?
>
> Folks,
>
> We have glusters with version 3.8.13 and we are using that for RMAN
> backups.
>
> We get errors/warnings,
>
> RMAN-03009: failure of backup command on C1 channel at 03/28/2018 16:55:43
>
> ORA-19510: failed to set size of 184820 blocks for file "/gfsnet1/bkp/
> db065.prod.bos.netledger.com/n0030120/N0030120_4246744146_20180328_
> frsuu9op_9723_1_lvl_1" (block size=8192)
>
> ORA-27046: file size is not a multiple of logical block size
>
> Additional information: 2
>
> channel C1 disabled, job failed on it will be run on another channel
>
> channel C2: finished piece 1 at 28-MAR-18 16:55:43
>
> Has anybody used glusters for RMAN backups?
> Is there any specific setting that needs to be done on the glusters to be
> able to write on glusters and not see the above errors?
>
> Has anyone seen the above errors?
>
> Any more help will be appreciated.
>
> -Hemant Mamtora
>
> --
> *From:* Hemant Mamtora
> *Sent:* Wednesday, April 18, 2018 12:45 PM
> *To:* gluster-users@gluster.org
> *Subject:* RMAN backups on Glusters
>
> Folks,
>
> We have glusters with version 3.8.13 and we are using that for RMAN
> backups.
>
> We get errors/warnings,
>
> RMAN-03009: failure of backup command on C1 channel at 03/28/2018 16:55:43
>
> ORA-19510: failed to set size of 184820 blocks for file "/gfsnet1/bkp/
> db065.prod.bos.netledger.com/n0030120/N0030120_4246744146_20180328_
> frsuu9op_9723_1_lvl_1" (block size=8192)
>
> ORA-27046: file size is not a multiple of logical block size
>
> Additional information: 2
>
> channel C1 disabled, job failed on it will be run on another channel
>
> channel C2: finished piece 1 at 28-MAR-18 16:55:43
>
> Has anybody used glusters for RMAN backups?
> Is there any specific setting that needs to be done on the glusters to be
> able to write on glusters and not see the above errors?
>
> Has anyone seen the above errors?
>
> Any more help will be appreciated.
>
> -Hemant Mamtora
>
>
>
>
>

Re: [Gluster-users] RMAN backups on Glusters

2018-04-27 Thread Hemant Mamtora
Sending again.

Can somebody please take a look and let me know if this is doable?

Folks,

We have Gluster version 3.8.13 and we are using it for RMAN backups.

We get errors/warnings,


RMAN-03009: failure of backup command on C1 channel at 03/28/2018 16:55:43

ORA-19510: failed to set size of 184820 blocks for file 
"/gfsnet1/bkp/db065.prod.bos.netledger.com/n0030120/N0030120_4246744146_20180328_frsuu9op_9723_1_lvl_1"
 (block size=8192)

ORA-27046: file size is not a multiple of logical block size

Additional information: 2

channel C1 disabled, job failed on it will be run on another channel

channel C2: finished piece 1 at 28-MAR-18 16:55:43

Has anybody used Gluster for RMAN backups?
Is there any specific setting that needs to be done on Gluster to be able
to write to it and not see the above errors?

Has anyone seen the above errors?

Any more help will be appreciated.

-Hemant Mamtora


From: Hemant Mamtora
Sent: Wednesday, April 18, 2018 12:45 PM
To: gluster-users@gluster.org
Subject: RMAN backups on Glusters

Folks,

We have glusters with version 3.8.13 and we are using that for RMAN backups.

We get errors/warnings,


RMAN-03009: failure of backup command on C1 channel at 03/28/2018 16:55:43

ORA-19510: failed to set size of 184820 blocks for file 
"/gfsnet1/bkp/db065.prod.bos.netledger.com/n0030120/N0030120_4246744146_20180328_frsuu9op_9723_1_lvl_1"
 (block size=8192)

ORA-27046: file size is not a multiple of logical block size

Additional information: 2

channel C1 disabled, job failed on it will be run on another channel

channel C2: finished piece 1 at 28-MAR-18 16:55:43

Has anybody used glusters for RMAN backups?
Is there any specific setting that needs to be done on the glusters to be able 
to write on glusters and not see the above errors?

Has anyone seen the above errors?

Any more help will be appreciated.

-Hemant Mamtora




Re: [Gluster-users] How to set up a 4 way gluster file system

2018-04-27 Thread Dave Sherohman
On Fri, Apr 27, 2018 at 07:22:29PM +1200, Thing wrote:
> I have 4 nodes, so a quorum would be 3 of 4.

Nope, gluster doesn't work quite the way you're looking at it.
(Incidentally, I started off with the same expectation that you have.)

When you create a 4-brick replica 2 volume, you don't get a single
cluster with a quorum of 3 out of 4 bricks.  You get two subvolumes,
each of which consists of two mirrored bricks.

Each individual subvolume is susceptible to split-brain if one of the
two bricks in that pair goes down, regardless of how many bricks in
other subvolumes are still up.  Thus, quorum has to be handled on the
subvolume level rather than only being a consideration for the overall
volume as a whole.

One small wrinkle here is that, for calculating quorum, gluster treats
the brick in each pair which was listed first when you created the
volume as "one plus epsilon", so the subvolume will continue to operate
normally if the second brick goes down, but not if the first brick is
missing.

The easy solution to this is to switch from replica 2 to replica 2 +
arbiter.  Arbiter bricks don't need to be nearly as large as data bricks
because they store only file metadata, not file contents, so you can
just scrape up a little spare disk space on two of your boxes, call that
space an arbiter, and run with it.  In my case, I have 10T data bricks
and 100G arbiter bricks; I'm using a total of under 1G across all
arbiter bricks for 3T of data in the volume.
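
For what it's worth, converting an existing 2 x 2 volume to arbiters is a
single add-brick call (names below are made up, not my real layout; you
need one arbiter brick per replica pair, ideally on a node outside that
pair):

# adds one arbiter to each of the two subvolumes, in brick-list order
gluster volume add-brick myvol replica 3 arbiter 1 \
  node3:/bricks/arbiter/myvol-sub0 node1:/bricks/arbiter/myvol-sub1

After that, self-heal populates the arbiter bricks with metadata only,
which is why they can be so much smaller than the data bricks.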

-- 
Dave Sherohman


Re: [Gluster-users] cluster of 3 nodes and san

2018-04-27 Thread Serkan Çoban
>but the doubt is whether I can use GlusterFS with a SAN connected by FC?
Yes, just format the volumes with XFS and you're ready to go.


For a replica across different DCs, be careful about latency. What is the
connection between the DCs?
It can work if latency is low.
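
As a rough sketch (device, host, and volume names are placeholders, not
taken from your setup), preparing an FC LUN as a brick would look
something like this:

# assuming the SAN LUN shows up via multipath as /dev/mapper/mpatha
mkfs.xfs -i size=512 /dev/mapper/mpatha
mkdir -p /bricks/brick1
mount /dev/mapper/mpatha /bricks/brick1
mkdir /bricks/brick1/gv0
gluster volume create gv0 replica 3 \
  node1:/bricks/brick1/gv0 node2:/bricks/brick1/gv0 node3:/bricks/brick1/gv0
gluster volume start gv0

The -i size=512 inode size is the commonly recommended mkfs.xfs option for
gluster bricks, so the extended attributes gluster stores fit inside the
inode.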

On Fri, Apr 27, 2018 at 4:02 PM, Ricky Gutierrez  wrote:
> Hi, any advice?
>
> On Wed, Apr 25, 2018 at 7:56 PM, Ricky Gutierrez
> wrote:
>>
>> Hi list, I need a little help. I currently have a VMware cluster with
>> 3 nodes and a storage array (Dell PowerVault) connected by FC with
>> redundancy, and I'm thinking of migrating it to Proxmox since the
>> maintenance costs are very high. The doubt is whether I can use
>> GlusterFS with a SAN connected by FC, and whether it is advisable. One
>> more detail: at another site I have another Proxmox cluster with a
>> different FC-attached storage array (HP EVA 6000), and I am also
>> considering GlusterFS there. The idea is to be able to replicate one
>> cluster to the other and keep a replica, but I'm not sure whether I can
>> do that with GlusterFS.
>>
>> I await your comments!
>>
>>
>> --
>> rickygm
>>
>> http://gnuforever.homelinux.com
>
>

Re: [Gluster-users] cluster of 3 nodes and san

2018-04-27 Thread Ricky Gutierrez
Hi, any advice?

On Wed, Apr 25, 2018 at 7:56 PM, Ricky Gutierrez
wrote:

> Hi list, I need a little help. I currently have a VMware cluster with
> 3 nodes and a storage array (Dell PowerVault) connected by FC with
> redundancy, and I'm thinking of migrating it to Proxmox since the
> maintenance costs are very high. The doubt is whether I can use
> GlusterFS with a SAN connected by FC, and whether it is advisable. One
> more detail: at another site I have another Proxmox cluster with a
> different FC-attached storage array (HP EVA 6000), and I am also
> considering GlusterFS there. The idea is to be able to replicate one
> cluster to the other and keep a replica, but I'm not sure whether I can
> do that with GlusterFS.
>
> I await your comments!
>
>
> --
> rickygm
>
> http://gnuforever.homelinux.com
>

Re: [Gluster-users] Reconstructing files from shards

2018-04-27 Thread Jim Kinney
For me, the process of copying out the drive file from oVirt is a tedious, very
manual process. Each VM has a single drive file made up of tens of thousands of
shards. Typical VM size is 100G for me, and it's all mostly sparse. So,
yes, a copy out from the Gluster share is best.
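
Roughly what I mean by a copy out, as a sketch (paths are hypothetical;
both volumes FUSE-mounted on the same client):

# -a preserves ownership/permissions/times, -S keeps sparse regions sparse
rsync -aS --progress /mnt/old-sharded-vol/ /mnt/new-unsharded-vol/

The -S flag is the important part for VM images; without sparse handling,
a mostly empty 100G image gets written out at its full size on the
destination.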

Did the outstanding bug where adding bricks to a sharded domain causes data loss
get fixed in release 3.12?

On April 27, 2018 12:00:15 AM EDT, Krutika Dhananjay  
wrote:
>The short answer is: no, there exists no script currently that can piece
>the shards together into a single file.
>
>Long answer:
>IMO the safest way to convert from sharded to a single file _is_ by
>copying the data out into a new volume at the moment.
>Picking up the files from the individual bricks directly and joining them,
>although fast, is a strict no-no for many reasons - for example, when you
>have a replicated volume, the good copy needs to be carefully selected and
>must remain a good copy through the course of the copying process. There
>could be other consistency issues with file attributes changing while they
>are being copied. None of this is possible unless you're open to taking
>the volume down.
>
>The other option is to have the gluster client (perhaps in the shard
>translator itself) do the conversion in the background within the gluster
>translator stack, which is safer but would require that shard lock the
>file until the copying is complete, and until then no IO can happen on
>this file.
>(I haven't found the time to work on this, as there exists a workaround
>and I've been busy with other tasks. If anyone wants to volunteer to get
>this done, I'll be happy to help.)
>
>But anyway, why is copying data into a new unsharded volume disruptive
>for you?
>
>-Krutika
>
>
>On Sat, Apr 21, 2018 at 1:14 AM, Jamie Lawrence wrote:
>
>> Hello,
>>
>> So I have a volume on a gluster install (3.12.5) on which sharding was
>> enabled at some point recently. (Don't know how it happened, it may have
>> been an accidental run of an old script.) So it has been happily sharding
>> behind our backs and it shouldn't have.
>>
>> I'd like to turn sharding off and reverse the files back to normal. Some
>> of these are sparse files, so I need to account for holes. There are more
>> than enough that I need to write a tool to do it.
>>
>> I saw notes ca. 3.7 saying the only way to do it was to read off on the
>> client side, blow away the volume and start over. This would be extremely
>> disruptive for us, and language I've seen reading tickets and old messages
>> to this list makes me think that isn't needed anymore, but confirmation of
>> that would be good.
>>
>> The only discussions I can find are these videos[1]:
>> http://opensource-storage.blogspot.com/2016/07/de-mystifying-gluster-shards.html
>> and some hints[2] that are old enough that I don't trust them without
>> confirmation that nothing's changed. The videos don't acknowledge the
>> existence of file holes. Also, the hint in [2] mentions using
>> trusted.glusterfs.shard.file-size to get the size of a partly filled
>> hole; that value looks like base64, but when I attempt to decode it,
>> base64 complains about invalid input.
>>
>> In short, I can't find sufficient information to reconstruct these. Has
>> anyone written a current, step-by-step guide on reconstructing sharded
>> files? Or has someone written a tool so I don't have to?
>>
>> Thanks,
>>
>> -j
>>
>>
>> [1] Why one would choose to annoy the crap out of their fellow gluster
>> users by using video to convey about 80 bytes of ASCII-encoded
>> information, I have no idea.
>> [2] http://lists.gluster.org/pipermail/gluster-devel/2017-March/052212.html

-- 
Sent from my Android device with K-9 Mail. All tyopes are thumb related and 
reflect authenticity.

Re: [Gluster-users] Turn off replication

2018-04-27 Thread Hari Gowtham
Hi Jose,

Why are all the bricks visible in volume info if the pre-validation
for add-brick failed? I suspect that the remove brick wasn't done
properly.

You can provide the cmd_history.log to verify this. It would be better to
get the other log messages as well.

Also, I need to know which bricks were actually removed, the command used,
and its output.
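
To see what state the brick root is in now, the output of something like
this on gluster02 (path taken from your earlier mail) would also help:

getfattr -d -m . -e hex /gdata/brick2/scratch
ls -a /gdata/brick2/scratch

If the trusted.glusterfs.volume-id or trusted.gfid xattrs, or a .glusterfs
directory, are still present there, the brick was not fully cleaned up
after the remove-brick.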

On Thu, Apr 26, 2018 at 3:47 AM, Jose Sanchez  wrote:
> Looking at the logs , it seems that it is trying to add using the same port
> was assigned for gluster01ib:
>
>
> Any Ideas??
>
> Jose
>
>
>
> [2018-04-25 22:08:55.169302] I [MSGID: 106482]
> [glusterd-brick-ops.c:447:__glusterd_handle_add_brick] 0-management:
> Received add brick req
> [2018-04-25 22:08:55.186037] I [run.c:191:runner_log]
> (-->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0x33045)
> [0x7f5464b9b045]
> -->/usr/lib64/glusterfs/3.8.15/xlator/mgmt/glusterd.so(+0xcbd85)
> [0x7f5464c33d85] -->/lib64/libglusterfs.so.0(runner_log+0x115)
> [0x7f54704cf1e5] ) 0-management: Ran script:
> /var/lib/glusterd/hooks/1/add-brick/pre/S28Quota-enable-root-xattr-heal.sh
> --volname=scratch --version=1 --volume-op=add-brick
> --gd-workdir=/var/lib/glusterd
> [2018-04-25 22:08:55.309534] I [MSGID: 106143]
> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick
> /gdata/brick1/scratch on port 49152
> [2018-04-25 22:08:55.309659] I [MSGID: 106143]
> [glusterd-pmap.c:250:pmap_registry_bind] 0-pmap: adding brick
> /gdata/brick1/scratch.rdma on port 49153
> [2018-04-25 22:08:55.310231] E [MSGID: 106005]
> [glusterd-utils.c:4877:glusterd_brick_start] 0-management: Unable to start
> brick gluster02ib:/gdata/brick1/scratch
> [2018-04-25 22:08:55.310275] E [MSGID: 106074]
> [glusterd-brick-ops.c:2493:glusterd_op_add_brick] 0-glusterd: Unable to add
> bricks
> [2018-04-25 22:08:55.310304] E [MSGID: 106123]
> [glusterd-mgmt.c:294:gd_mgmt_v3_commit_fn] 0-management: Add-brick commit
> failed.
> [2018-04-25 22:08:55.310316] E [MSGID: 106123]
> [glusterd-mgmt.c:1427:glusterd_mgmt_v3_commit] 0-management: Commit failed
> for operation Add brick on local node
> [2018-04-25 22:08:55.310330] E [MSGID: 106123]
> [glusterd-mgmt.c:2018:glusterd_mgmt_v3_initiate_all_phases] 0-management:
> Commit Op Failed
> [2018-04-25 22:09:11.678141] E [MSGID: 106452]
> [glusterd-utils.c:6064:glusterd_new_brick_validate] 0-management: Brick:
> gluster02ib:/gdata/brick1/scratch not available. Brick may be containing or
> be contained by an existing brick
> [2018-04-25 22:09:11.678184] W [MSGID: 106122]
> [glusterd-mgmt.c:188:gd_mgmt_v3_pre_validate_fn] 0-management: ADD-brick
> prevalidation failed.
> [2018-04-25 22:09:11.678200] E [MSGID: 106122]
> [glusterd-mgmt-handler.c:337:glusterd_handle_pre_validate_fn] 0-management:
> Pre Validation failed on operation Add brick
> [root@gluster02 glusterfs]# gluster volume status scratch
> Status of volume: scratch
> Gluster process TCP Port  RDMA Port  Online  Pid
> --
> Brick gluster01ib:/gdata/brick1/scratch 49152 49153  Y
> 1819
> Brick gluster01ib:/gdata/brick2/scratch 49154 49155  Y
> 1827
> Brick gluster02ib:/gdata/brick1/scratch N/A   N/AN   N/A
>
>
>
> Task Status of Volume scratch
> --
> There are no active volume tasks
>
>
>
> [root@gluster02 glusterfs]#
>
>
>
> On Apr 25, 2018, at 3:23 PM, Jose Sanchez  wrote:
>
> Hello Karthik
>
>
> Im having trouble adding the two bricks back online.  Any help is
> appreciated
>
> thanks
>
>
> when i try to add-brick command this is what i get
>
> [root@gluster01 ~]# gluster volume add-brick scratch
> gluster02ib:/gdata/brick2/scratch/
> volume add-brick: failed: Pre Validation failed on gluster02ib. Brick:
> gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or
> be contained by an existing brick
>
> I have run the following commands and remove the .glusterfs hidden
> directories
>
> [root@gluster02 ~]# setfattr -x trusted.glusterfs.volume-id
> /gdata/brick2/scratch/
> setfattr: /gdata/brick2/scratch/: No such attribute
> [root@gluster02 ~]# setfattr -x trusted.gfid /gdata/brick2/scratch/
> setfattr: /gdata/brick2/scratch/: No such attribute
> [root@gluster02 ~]#
>
>
> this is what I get when I run status and info
>
>
> [root@gluster01 ~]# gluster volume info scratch
>
> Volume Name: scratch
> Type: Distribute
> Volume ID: 23f1e4b1-b8e0-46c3-874a-58b4728ea106
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4
> Transport-type: tcp,rdma
> Bricks:
> Brick1: gluster01ib:/gdata/brick1/scratch
> Brick2: gluster01ib:/gdata/brick2/scratch
> Brick3: gluster02ib:/gdata/brick1/scratch
> Brick4: gluster02ib:/gdata/brick2/scratch
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> [root@gluster01 ~]#
>
>
> [root@gluster02 ~]# gluster 

Re: [Gluster-users] How to set up a 4 way gluster file system

2018-04-27 Thread Sunil Kumar Heggodu Gopala Acharya
Hi,

>>gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0

This command will create a distributed-replicate volume (yes, you have to
enter 'y' at the warning message to get it created). We will have two
distribution legs, each containing a replica pair (made of two bricks). When
a file is placed on the volume by a user, it will be placed in one of the
distribution legs, which as mentioned earlier will have only two copies.
With replica 2 volumes (volume type: replicate/distributed-replicate) you
might hit a split-brain situation. So we recommend a replica 3 or arbiter
volume to provide consistency and availability.
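
As an illustration only (reusing your brick paths and assuming a little
spare space on glusterp1 and glusterp3 for the arbiters), a 2 x (2 + 1)
layout on your four nodes could be created like this:

gluster volume create gv0 replica 3 arbiter 1 \
  glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/arbiter1/gv0 \
  glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0 glusterp1:/bricks/arbiter2/gv0

Every third brick in the list becomes an arbiter; it stores only metadata,
and placing it on a node outside its replica pair means no single node
failure drops a subvolume below quorum.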

Regards,

Sunil Kumar Acharya

Senior Software Engineer

Red Hat



T: +91-8067935170 


TRIED. TESTED. TRUSTED. 


On Fri, Apr 27, 2018 at 12:52 PM, Thing  wrote:

> Hi,
>
> I have 4 nodes, so a quorum would be 3 of 4.  The Q is I suppose why does
> the documentation give this command as an example without qualifying it?
>
> SO I am running the wrong command?   I want a "raid10"
>
> On 27 April 2018 at 18:05, Karthik Subrahmanya 
> wrote:
>
>> Hi,
>>
>> With replica 2 volumes one can easily end up in split-brains if there are
>> frequent disconnects and high IOs going on.
>> If you use replica 3 or arbiter volumes, it will guard you by using the
>> quorum mechanism giving you both consistency and availability.
>> But in replica 2 volumes, quorum does not make sense since it needs both
>> the nodes up to guarantee consistency, which costs availability.
>>
>> If you can consider having a replica 3 or arbiter volumes it would be
>> great. Otherwise you can anyway go ahead and continue with the replica 2
>> volume
>> by selecting  *y* for the warning message. It will create the replica 2
>> configuration as you wanted.
>>
>> HTH,
>> Karthik
>>
>> On Fri, Apr 27, 2018 at 10:56 AM, Thing  wrote:
>>
>>> Hi,
>>>
>>> I have 4 servers each with 1TB of storage set as /dev/sdb1, I would like
>>> to set these up in a raid 10 which will? give me 2TB useable.  So Mirrored
>>> and concatenated?
>>>
>>> The command I am running is as per documents but I get a warning error,
>>> how do I get this to proceed please as the documents do not say.
>>>
>>> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
>>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>>> glusterp4:/bricks/brick1/gv0
>>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>>> avoid this. See: http://docs.gluster.org/en/lat
>>> est/Administrator%20Guide/Split%20brain%20and%20ways%20to%20
>>> deal%20with%20it/.
>>> Do you still want to continue?
>>>  (y/n) n
>>>
>>> Usage:
>>> volume create  [stripe ] [replica  [arbiter
>>> ]] [disperse []] [disperse-data ] [redundancy ]
>>> [transport ] ?... [force]
>>>
>>> [root@glustep1 ~]#
>>>
>>> thanks
>>>
>>>
>>
>>
>
>

Re: [Gluster-users] How to set up a 4 way gluster file system

2018-04-27 Thread Thing
Hi,

I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is
why does the documentation give this command as an example without
qualifying it?

So am I running the wrong command? I want a "RAID 10".

On 27 April 2018 at 18:05, Karthik Subrahmanya  wrote:

> Hi,
>
> With replica 2 volumes one can easily end up in split-brains if there are
> frequent disconnects and high IOs going on.
> If you use replica 3 or arbiter volumes, it will guard you by using the
> quorum mechanism giving you both consistency and availability.
> But in replica 2 volumes, quorum does not make sense since it needs both
> the nodes up to guarantee consistency, which costs availability.
>
> If you can consider having a replica 3 or arbiter volumes it would be
> great. Otherwise you can anyway go ahead and continue with the replica 2
> volume
> by selecting  *y* for the warning message. It will create the replica 2
> configuration as you wanted.
>
> HTH,
> Karthik
>
> On Fri, Apr 27, 2018 at 10:56 AM, Thing  wrote:
>
>> Hi,
>>
>> I have 4 servers each with 1TB of storage set as /dev/sdb1, I would like
>> to set these up in a raid 10 which will? give me 2TB useable.  So Mirrored
>> and concatenated?
>>
>> The command I am running is as per documents but I get a warning error,
>> how do I get this to proceed please as the documents do not say.
>>
>> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
>> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
>> glusterp4:/bricks/brick1/gv0
>> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
>> avoid this. See: http://docs.gluster.org/en/lat
>> est/Administrator%20Guide/Split%20brain%20and%20ways%20to%
>> 20deal%20with%20it/.
>> Do you still want to continue?
>>  (y/n) n
>>
>> Usage:
>> volume create  [stripe ] [replica  [arbiter
>> ]] [disperse []] [disperse-data ] [redundancy ]
>> [transport ] ?... [force]
>>
>> [root@glustep1 ~]#
>>
>> thanks
>>
>>
>
>

Re: [Gluster-users] How to set up a 4 way gluster file system

2018-04-27 Thread Karthik Subrahmanya
Hi,

With replica 2 volumes one can easily end up in split-brains if there are
frequent disconnects and high IOs going on.
If you use replica 3 or arbiter volumes, it will guard you by using the
quorum mechanism giving you both consistency and availability.
But in replica 2 volumes, quorum does not make sense since it needs both
the nodes up to guarantee consistency, which costs availability.

If you can consider having a replica 3 or arbiter volume, it would be
great. Otherwise you can go ahead and continue with the replica 2 volume
by selecting *y* at the warning message. It will create the replica 2
configuration as you wanted.

HTH,
Karthik

On Fri, Apr 27, 2018 at 10:56 AM, Thing  wrote:

> Hi,
>
> I have 4 servers each with 1TB of storage set as /dev/sdb1, I would like
> to set these up in a raid 10 which will? give me 2TB useable.  So Mirrored
> and concatenated?
>
> The command I am running is as per documents but I get a warning error,
> how do I get this to proceed please as the documents do not say.
>
> gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
> glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
> glusterp4:/bricks/brick1/gv0
> Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to
> avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/
> Split%20brain%20and%20ways%20to%20deal%20with%20it/.
> Do you still want to continue?
>  (y/n) n
>
> Usage:
> volume create  [stripe ] [replica  [arbiter
> ]] [disperse []] [disperse-data ] [redundancy ]
> [transport ] ?... [force]
>
> [root@glustep1 ~]#
>
> thanks
>
>