Re: [Gluster-users] Volume to store vm

2019-09-06 Thread Ramon Selga

Hi Cristian,

Both approaches are correct but they have different usable capacity and 
tolerance to node failures.


The first one is a full replica 3, meaning you get your total node capacity divided 
by 3 (because of replica 3). It tolerates a simultaneous failure of two nodes and is 
very good for split-brain avoidance.


The second one is a replica 2 with arbiter, also very good for split-brain avoidance 
(that's the purpose of arbiter bricks). In this case you get your total capacity 
divided by two, minus the small amount of space going to the arbiter bricks, typically 
less than 1% of the size of the data bricks. It tolerates one node failure at a time.
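
As a back-of-the-envelope illustration (the 4 nodes x 60TB figure below is purely an assumption for the example, not something you stated):

```shell
# Purely illustrative numbers: 4 nodes x 60 TB of raw brick space.
total_tb=240

# replica 3: three full copies of every file -> usable = total / 3
replica3_tb=$(( total_tb / 3 ))

# replica 2 + arbiter: two full copies (arbiter bricks hold only metadata,
# under 1% overhead, ignored here) -> usable ~ total / 2
arbiter_tb=$(( total_tb / 2 ))

echo "replica 3:           ${replica3_tb} TB usable"
echo "replica 2 + arbiter: ${arbiter_tb} TB usable"
```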


For VM usage, remember to enable sharding, with a shard size of at least 256MB, 
before putting any data on the volume.
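
As a sketch (the volume name test1 is just carried over from your examples; double-check the option names against your Gluster version's documentation):

```shell
# Enable sharding BEFORE writing any VM images: changing shard settings
# on a volume that already holds data is not safe.
gluster volume set test1 features.shard on
gluster volume set test1 features.shard-block-size 256MB

# Optionally apply the predefined "virt" option group, which tunes the
# volume for VM image workloads.
gluster volume set test1 group virt
```

These commands need a running Gluster cluster, so treat them as a template rather than something verified here.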


If the ratio between raw and usable capacity is a concern for you, and you think 
you can tolerate only one node failure at a time, may I suggest a distributed 
dispersed volume with disperse 3 and redundancy 1? You get two thirds of your total 
capacity (raw capacity divided by 3, times 2), and this configuration still 
tolerates one node failure at a time.
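
Reusing the server and brick names from your own examples (a sketch, not tested on a live cluster), such a volume could be created like this:

```shell
# 12 bricks -> 4 disperse subvolumes of 3 bricks each. In every subvolume,
# 2 bricks' worth of data plus 1 of erasure code, spread over 3 different
# servers, so any single node can fail without data loss.
gluster volume create test1 disperse 3 redundancy 1 \
  server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 \
  server2:/bricks/brick2 server3:/bricks/brick2 server4:/bricks/brick2 \
  server3:/bricks/brick3 server4:/bricks/brick3 server1:/bricks/brick3 \
  server4:/bricks/brick4 server1:/bricks/brick4 server2:/bricks/brick4
```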


Hope this helps.

*Ramon Selga*




On 06/09/19 at 17:11, Cristian Del Carlo wrote:

Hi,

I have an environment consisting of 4 nodes (with large disks).
I have to create a volume to hold virtual machine images.

In the documentation I read:
"Hosting virtual machine images requires the consistency of three-way
replication, which is provided by three-way replicated volumes, three-way
distributed replicated volumes, arbitrated replicated volumes, and
distributed arbitrated replicated volumes."

So I am confused about how to configure this volume.
I have 4 nodes and I don't want to lose space by dedicating one of them to
the arbiter function.


Would it be reasonable to configure the volume as in these two examples?

# gluster volume create test1 replica 3 \
server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 \
server2:/bricks/brick2 server3:/bricks/brick2 server4:/bricks/brick2 \
server3:/bricks/brick3 server4:/bricks/brick3 server1:/bricks/brick3 \
server4:/bricks/brick4 server1:/bricks/brick4 server2:/bricks/brick4

# gluster volume create test1 replica 3 arbiter 1 \
server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter_brick1 \
server2:/bricks/brick2 server3:/bricks/brick2 server4:/bricks/arbiter_brick2 \
server3:/bricks/brick3 server4:/bricks/brick3 server1:/bricks/arbiter_brick3 \
server4:/bricks/brick4 server1:/bricks/brick4 server2:/bricks/arbiter_brick4

Thanks,

--

*Cristian Del Carlo*



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users



[Gluster-users] Volume to store vm

2019-09-06 Thread Cristian Del Carlo
Hi,

I have an environment consisting of 4 nodes (with large disks).
I have to create a volume to hold virtual machine images.

In the documentation I read:

"Hosting virtual machine images requires the consistency of three-way
replication, which is provided by three-way replicated volumes, three-way
distributed replicated volumes, arbitrated replicated volumes, and
distributed arbitrated replicated volumes."

So I am confused about how to configure this volume.
I have 4 nodes and I don't want to lose space by dedicating one of them to
the arbiter function.

Would it be reasonable to configure the volume as in these two examples?

# gluster volume create test1 replica 3 \
server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1 \
server2:/bricks/brick2 server3:/bricks/brick2 server4:/bricks/brick2 \
server3:/bricks/brick3 server4:/bricks/brick3 server1:/bricks/brick3 \
server4:/bricks/brick4 server1:/bricks/brick4 server2:/bricks/brick4

# gluster volume create test1 replica 3 arbiter 1 \
server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter_brick1 \
server2:/bricks/brick2 server3:/bricks/brick2 server4:/bricks/arbiter_brick2 \
server3:/bricks/brick3 server4:/bricks/brick3 server1:/bricks/arbiter_brick3 \
server4:/bricks/brick4 server1:/bricks/brick4 server2:/bricks/arbiter_brick4

Thanks,

-- 


*Cristian Del Carlo*

Re: [Gluster-users] Rebalancing newly added bricks

2019-09-06 Thread Herb Burnswell
On Thu, Sep 5, 2019 at 9:56 PM Nithya Balachandran 
wrote:

>
>
> On Thu, 5 Sep 2019 at 02:41, Herb Burnswell 
> wrote:
>
>> Thanks for the replies.  The rebalance is running and the brick
>> percentages are not adjusting as expected:
>>
>> # df -hP |grep data
>> /dev/mapper/gluster_vg-gluster_lv1_data   60T   49T   11T  83%
>> /gluster_bricks/data1
>> /dev/mapper/gluster_vg-gluster_lv2_data   60T   49T   11T  83%
>> /gluster_bricks/data2
>> /dev/mapper/gluster_vg-gluster_lv3_data   60T  4.6T   55T   8%
>> /gluster_bricks/data3
>> /dev/mapper/gluster_vg-gluster_lv4_data   60T  4.6T   55T   8%
>> /gluster_bricks/data4
>> /dev/mapper/gluster_vg-gluster_lv5_data   60T  4.6T   55T   8%
>> /gluster_bricks/data5
>> /dev/mapper/gluster_vg-gluster_lv6_data   60T  4.6T   55T   8%
>> /gluster_bricks/data6
>>
>> At the current pace it looks like this will continue to run for another
>> 5-6 days.
>>
>> I appreciate the guidance..
>>
>>
> What is the output of the rebalance status command?
> Can you check if there are any errors in the rebalance logs on the node
> on which you see rebalance activity?
> If there are a lot of small files on the volume, the rebalance is expected
> to take time.
>
> Regards,
> Nithya
>

My apologies, that was a typo.  I meant to say:

"The rebalance is running and the brick percentages are NOW adjusting as
expected"

I did expect the rebalance to take several days.  The rebalance log is not
showing any errors.  Status output:

# gluster vol rebalance tank status
     Node       Rebalanced-files     size    scanned   failures   skipped     status        run time in h:m:s
    ---------   ----------------   -------   -------   --------   -------   -----------   -------------------
    localhost            1251320   35.5TB    2079527          0         0   in progress              139:9:46
    serverB                    0   0Bytes          7          0         0   completed                63:47:55
volume rebalance: tank: success

Thanks again for the guidance.

HB



>
>>
>> On Mon, Sep 2, 2019 at 9:08 PM Nithya Balachandran 
>> wrote:
>>
>>>
>>>
>>> On Sat, 31 Aug 2019 at 22:59, Herb Burnswell <
>>> herbert.burnsw...@gmail.com> wrote:
>>>
 Thank you for the reply.

 I started a rebalance with force on serverA as suggested.  Now I see
 'activity' on that node:

 # gluster vol rebalance tank status
      Node       Rebalanced-files    size    scanned   failures   skipped     status        run time in h:m:s
     ---------   ----------------   ------   -------   --------   -------   -----------   -------------------
     localhost               6143   6.1GB       9542          0         0   in progress                 0:4:5
     serverB                    0   0Bytes         7          0         0   in progress                 0:4:5
 volume rebalance: tank: success

 But I am not seeing any activity on serverB.  Is this expected?  Does
 the rebalance need to run on each node even though it says both nodes are
 'in progress'?


>>> It looks like this is a replicate volume. If that is the case then yes,
>>> you are running an old version of Gluster for which this was the default
>>> behaviour.
>>>
>>> Regards,
>>> Nithya
>>>
>>> Thanks,

 HB

 On Sat, Aug 31, 2019 at 4:18 AM Strahil  wrote:

> The rebalance status show 0 Bytes.
>
> Maybe you should try with the 'gluster volume rebalance 
> start force' ?
>
> Best Regards,
> Strahil Nikolov
>
> Source:
> https://docs.gluster.org/en/latest/Administrator%20Guide/Managing%20Volumes/#rebalancing-volumes
> On Aug 30, 2019 20:04, Herb Burnswell 
> wrote:
>
> All,
>
> RHEL 7.5
> Gluster 3.8.15
> 2 Nodes: serverA & serverB
>
> I am not deeply knowledgeable about Gluster and it's administration
> but we have a 2 node cluster that's been running for about a year and a
> half.  All has worked fine to date.  Our main volume has consisted of two
> 60TB bricks on each of the cluster nodes.  As we reached capacity on the
> volume we needed to expand.  So, we've added four new 60TB bricks to each
> of the cluster nodes.  The bricks are now seen, and the total size of the
> volume is as expected:
>
> # gluster vol status tank
> Status of volume: tank
> Gluster process TCP Port  RDMA Port
>  Online  Pid
>
> --
> Brick serverA:/gluster_bricks/data1   49162 0  Y
> 20318
> Brick serverB:/

[Gluster-users] Setting up volume for virtualization

2019-09-06 Thread Cristian Del Carlo
Hi,

I have an environment consisting of 4 nodes (with large disks).
I have to create a volume to hold virtual machine images.

Is it better to set up a Distributed Replicated Volume like this?

# gluster volume create test-volume replica 2 transport tcp server1:/exp1
server2:/exp2 server3:/exp3 server4:/exp4

Or a Replicated Volume like this?

# gluster volume create test-volume replica 4 transport tcp server1:/exp1
server2:/exp2 server3:/exp3 server4:/exp4

Thanks,
-- 


*Cristian Del Carlo*