Re: [Gluster-users] Arbiter node in slow network

2023-01-04 Thread Strahil Nikolov
As Alan mentioned, latency is more important than raw bandwidth.

Also consider using an SSD for all arbiter bricks, and set maxpct (see man 8
mkfs.xfs) to a high value (I prefer to use '90').

Best Regards,
Strahil Nikolov

Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users





Re: [Gluster-users] Arbiter node in slow network

2022-12-31 Thread Alan Orth
Hi Filipe,

I think it would probably be fine. The Red Hat Storage docs say the
important thing is *5 ms latency*, not link speed:

https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/creating_arbitrated_replicated_volumes

I haven't used an arbiter configuration yet (still stuck on distribute +
replicate, and not sure how to migrate). Let us know how it goes.
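A quick, hedged way to sanity-check that latency figure before deploying is to measure the round-trip time from each data node to the planned arbiter (the hostname here is a placeholder, not from the thread):

```shell
# Hedged sketch: 'arbiter-node' is a hypothetical hostname.
# The average RTT in the summary line should stay comfortably under 5 ms.
ping -c 20 -q arbiter-node
```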

Regards,



-- 
Alan Orth
alan.o...@gmail.com
https://picturingjordan.com
https://englishbulgaria.net
https://mjanja.ch






Re: [Gluster-users] Arbiter node in slow network

2022-12-03 Thread Strahil Nikolov
The arbiter doesn't receive or provide any data to the clients - just
metadata - so bandwidth is not critical, but latency is. Ensure that latency
is the same or lower for the arbiter node, and use an SSD/NVMe so that
storage latency won't be a bottleneck.
Also, don't forget to specify isize=512 and bump 'maxpct' to a bigger
number. Usually I set it to a minimum of 80%.

Best Regards,
Strahil Nikolov
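For reference, those XFS options could be applied at brick-creation time roughly like this (the device path is a placeholder, not from the thread):

```shell
# Hedged sketch: format a metadata-heavy arbiter brick.
# isize=512  - 512-byte inodes, leaving room for Gluster's xattrs
# maxpct=80  - allow up to 80% of the filesystem space to hold inodes
mkfs.xfs -i size=512,maxpct=80 /dev/sdb1
```

Since an arbiter brick stores only file names and metadata, inode capacity rather than data blocks is usually what runs out first, hence the unusually high maxpct.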
 
 


[Gluster-users] Arbiter node in slow network

2022-12-02 Thread Filipe Alvarez
Hi glusters,

I'm close to deploying my first GlusterFS replica 3 arbiter 1 volume.

Below I will describe my hardware / plans:

Node1: two bricks, 2 x raid0 arrays 40gbe network
Node2: two bricks, 2 x raid0 arrays 40gbe network
Node3: Arbiter 1gbe network

Between Node1 and Node2, I have a 40gbe network.

But the arbiter has 1 gbe network.

The question is: can the arbiter run on a slow network? Will it affect the
overall performance of the volume?

Thank you
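The layout described above would translate into something like the following volume-create sketch (hostnames and brick paths are hypothetical; in each group of three bricks the last one is the metadata-only arbiter):

```shell
# Hedged sketch: one arbiter brick per replica pair, both hosted on node3.
gluster volume create myvol replica 3 arbiter 1 \
  node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/arb1 \
  node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/arb2
gluster volume start myvol
```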






Re: [Gluster-users] Arbiter

2022-02-08 Thread Diego Zuccato

On 08/02/2022 12:17, Karthik Subrahmanya wrote:

Since there are 4 nodes available here, and based on the configuration 
of the available volumes (requested volume info for the same) I was 
thinking whether the arbiter brick can be hosted on one of those nodes 
itself, or a new node is required.
We're using replica 3 arbiter 1, with quorum balanced between the 3 
servers. No need for an extra server.


When we add a 4th server, there'll be a lot of brick juggling (luckily
they're connected by IB100 :) ). The simplest thing you can do to balance
load across 4 servers is laying down data as:

S1  S2  S3  S4
0a  0b  0q  1a
1b  1q  2a  2b
2q  3a  3b  3q

... and so on: it requires adding 8 disks at a time, 2 per server -- as
long as you have enough blocks *and inodes* available on an SSD for the
metadata.


Hope the layout is clear: Xa and Xb are the replicated bricks, Xq is 
quorum brick for bricks Xa  and Xb.


For a 3 servers setup the layout we're using is
S1 S2 S3
0a 0b 0q
1a 1q 1b
2q 2a 2b
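That 3-server layout could be expressed as a volume-create command along these lines (hostnames and brick paths are hypothetical; each group of three bricks forms one subvolume, and the third brick of each group is the arbiter):

```shell
# Hedged sketch: the q brick of each subvolume is listed third,
# so it becomes that subvolume's arbiter.
gluster volume create myvol replica 3 arbiter 1 \
  s1:/bricks/0a s2:/bricks/0b s3:/bricks/0q \
  s1:/bricks/1a s3:/bricks/1b s2:/bricks/1q \
  s2:/bricks/2a s3:/bricks/2b s1:/bricks/2q
```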

HTH.

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786






Re: [Gluster-users] Arbiter

2022-02-08 Thread Gilberto Ferreira
Yes! That's what I meant. Two nodes plus the arbiter to achieve quorum.
Sorry if I made some confusion.
Thanks a lot.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram












Re: [Gluster-users] Arbiter

2022-02-08 Thread Karthik Subrahmanya
On Tue, Feb 8, 2022 at 4:28 PM Gilberto Ferreira 
wrote:

> Forgive me if I am wrong, but AFAIK, arbiter is for a two-node
> configuration, isn't it?
>
Arbiter gives the same consistency as replica 3 with 3 nodes, without
the need to have a full-sized 3rd brick [1]. It will store the files and
their metadata but no data. This acts as a quorum brick to
avoid split-brains.
Since there are 4 nodes available here, and based on the configuration of
the available volumes (requested volume info for the same) I was thinking
whether the arbiter brick can be hosted on one of those nodes itself, or a
new node is required.

[1]
https://docs.gluster.org/en/latest/Administrator-Guide/arbiter-volumes-and-quorum/

Regards,
Karthik







Re: [Gluster-users] Arbiter

2022-02-08 Thread Diego Zuccato

IIUC it always requires 3 servers.
Lightweight arbiter is just to avoid split brain (a client needs to 
reach two servers out of three to be able to write data).
"Full" arbiter is a third replica of metadata while there are only two 
copies of the data.
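The "two servers out of three" behaviour described above is client quorum; on replica/arbiter volumes it can be inspected or tuned with the usual volume options (volume name hypothetical; to the best of my knowledge `auto` is the default for replica 3 and arbiter volumes):

```shell
# Hedged sketch: with quorum-type auto, a client may write only while it
# sees a majority of the bricks in each replica set.
gluster volume set myvol cluster.quorum-type auto
gluster volume get myvol cluster.quorum-type
```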


On 08/02/2022 11:58, Gilberto Ferreira wrote:
> Forgive me if I am wrong, but AFAIK, arbiter is for a two-node
> configuration, isn't it?


Re: [Gluster-users] Arbiter

2022-02-08 Thread Gilberto Ferreira
Forgive me if I am wrong, but AFAIK, arbiter is for a two-node
configuration, isn't it?
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram












Re: [Gluster-users] Arbiter

2022-02-08 Thread Karthik Subrahmanya
Hi Andre,

Striped volumes were deprecated long ago, see [1] & [2]. It seems you are
using a very old version. May I know which version of Gluster you are
running, and could you share the gluster volume info please?
Release schedule and the maintained branches can be found at [3].


[1] https://docs.gluster.org/en/latest/release-notes/6.0/
[2] https://lists.gluster.org/pipermail/gluster-users/2018-July/034400.html
[3] https://www.gluster.org/release-schedule/

Regards,
Karthik







[Gluster-users] Arbiter

2022-02-07 Thread Andre Probst
I have a striped and replicated volume with 4 nodes. How do I add an
arbiter to this volume?


--
André Probst
Consultor de Tecnologia
43 99617 8765






Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Ben Turner
One thing to remember with arbiters is that they need IOPS more than capacity.
With a VM use case this is less impactful, but workloads with lots of
smallfiles can become heavily bottlenecked at the arbiter. Arbiters only save
metadata, not data, but metadata needs lots of small reads and writes. I have
seen many instances where the arbiter had considerably less IOPS than the
other bricks and it led to perf issues. With VMs you don't have thousands of
files so it's probably not a big deal, but in more general-purpose workloads
it's important to remember this.
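A hedged way to gauge whether an arbiter brick can keep up with small random writes is a quick fio run against the brick's filesystem (the directory and job parameters are placeholders, not from the thread):

```shell
# Hedged sketch: measure 4K random-write IOPS on the arbiter brick mount.
fio --name=arb-iops --directory=/bricks/arbiter \
    --rw=randwrite --bs=4k --size=256m \
    --ioengine=libaio --iodepth=16 --direct=1 \
    --runtime=30 --time_based --group_reporting
```

Comparing the resulting IOPS against the same run on a data brick gives a rough idea of whether the arbiter will become the metadata bottleneck described above.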

HTH!

-b



Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Gandalf Corvotempesta
Il giorno lun 7 mag 2018 alle ore 13:22 Dave Sherohman 
ha scritto:
> I'm pretty sure that you can only have one arbiter per subvolume, and
> I'm not even sure what the point of multiple arbiters over the same data
> would be.

Multiple arbiters add availability. I can safely shut down one hypervisor
node (where an arbiter is located) and still have a 100% working cluster with
quorum.

Is it possible to add an arbiter on the fly, or must it be configured during
volume creation?
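For what it's worth, later Gluster releases do allow converting an existing replica 2 volume on the fly by adding an arbiter brick with add-brick (volume name, hostname, and path are hypothetical):

```shell
# Hedged sketch: convert replica 2 -> replica 3 arbiter 1 on a live volume.
gluster volume add-brick myvol replica 3 arbiter 1 node3:/bricks/arb
# Self-heal then populates the new arbiter brick with metadata.
```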


Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Dave Sherohman
On Sun, May 06, 2018 at 11:15:32AM +, Gandalf Corvotempesta wrote:
> is possible to add an arbiter node on the client?

I've been running in that configuration for a couple months now with no
problems.  I have 6 data + 3 arbiter bricks hosting VM disk images and
all three of my arbiter bricks are on one of the kvm hosts.

> Can I use multiple arbiter for the same volume ? In example, one arbiter on
> each client.

I'm pretty sure that you can only have one arbiter per subvolume, and
I'm not even sure what the point of multiple arbiters over the same data
would be.

In my case, I have three subvolumes (three replica pairs), which means I
need three arbiters and those could be spread across multiple nodes, of
course, but I don't think saying "I want 12 arbiters instead of 3!"
would be supported.

-- 
Dave Sherohman


[Gluster-users] arbiter node on client?

2018-05-06 Thread Gandalf Corvotempesta
Is it possible to add an arbiter node on the client?

Let's assume a gluster storage made with 2 storage servers. This is prone to
split-brains. An arbiter node can be added, but can I put the arbiter on one
of the clients?

Can I use multiple arbiters for the same volume? For example, one arbiter on
each client.


Re: [Gluster-users] Arbiter and geo-replication

2017-09-22 Thread Ravishankar N



On 09/22/2017 02:25 AM, Kotresh Hiremath Ravishankar wrote:
The volume layout of the geo-replication slave volume can be different
from the master volume.
It's not mandatory that if the master volume is arbiter type, the
slave also needs to be arbiter.
But if it's decided to use the arbiter both at master and slave, then
the expansion rules are applicable both at master and slave.

Adding Ravi for arbiter related question

On Thu, Sep 21, 2017 at 2:07 PM, Marcus wrote:

> When I scale this, say that I just have 2 replica and 1 arbiter,
> when I add another two machines can I still use the same physical
> machine as the arbiter?
> Or when I add additional two machines do I have to add another
> arbiter machine as well?

You can use the same physical machine for hosting multiple arbiter 
bricks. The general rule of brick placement is the same for any type of 
replica volume: No 2 bricks of the same replica subvolume should be on 
the same node.


Thanks,
Ravi




[Gluster-users] Arbiter and geo-replication

2017-09-21 Thread Marcus

Hi all!

Today I have a small gluster replication on 2 machines.
My plan is to scale this, though I need some feedback on whether I'm
planning things in the right direction.


First of all I have understood the need of an arbiter.
When I scale this, say that I just have 2 replica and 1 arbiter, when I 
add another two machines can I still use the same physical machine as 
the arbiter?
Or when I add additional two machines I have to add another arbiter 
machine as well?


My second question is about geo-replication.
If I want to set up geo-replication on the above gluster cluster, do I need
to have the exact "same" machines in the geo-replication?
I know that disk size should be same size on both the brick and on the 
geo-replication side.
So if I have 2 replica and 1 arbiter, do I need 2 replica and 1 arbiter 
for the geo-replication?
Or is it sufficient for a 2 replica and 1 arbiter to use 1 replica for 
the geo-replication?
What I wonder is when I scale my gluster with additional 2 machines, do 
I need 2 machines for geo-replication or 1 machine for geo-replication?
So adding 2 machines means adding 4 machines in total or do I just need 
3 in total?

Is there a need for a arbiter in the geo-replication?

Many questions, but I hope that you can help me out!

Many thanks in advance!

Best regards
Marcus Pedersén


--

*Marcus Pedersén*
/System administrator/


*Interbull Centre*
Department of Animal Breeding & Genetics — SLU
Box 7023, SE-750 07
Uppsala, Sweden

Visiting address:
Room 55614, Ulls väg 26, Ultuna
Uppsala
Sweden

Tel: +46-(0)18-67 1962

Re: [Gluster-users] Arbiter node as VM

2017-06-30 Thread mabi
Thanks for the hints.

Now I added the arbiter to my replica 2 volume using the add-brick command,
and it is now in the healing process, copying all the metadata files to my
arbiter node.

On one of my replica nodes, in the brick log file for that particular volume,
I notice a lot of the following warning messages during the ongoing heal:

[2017-06-30 14:04:42.050120] W [MSGID: 101088]
[common-utils.c:3894:gf_backtrace_save] 0-myvolume-index: Failed to save the
backtrace.

Does anyone have an idea what this is about? The only hint here is the word
"index", which for me means it has something to do with indexing. But is this
warning normal? Anything I can do about it?

Regards,
M.
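As an aside, the progress of that metadata sync onto the arbiter can be watched with the standard heal counters (volume name hypothetical):

```shell
# Hedged sketch: list entries still pending heal, per brick.
gluster volume heal myvolume info
# Summary counts only, useful for watching the backlog shrink.
gluster volume heal myvolume statistics heal-count
```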


Re: [Gluster-users] Arbiter node as VM

2017-06-29 Thread Gambit15
As long as the VM isn't hosted on one of the two Gluster nodes, that's
perfectly fine. One of my smaller clusters uses the same setup.

As for your other questions, as long as it supports Unix file permissions,
Gluster doesn't care what filesystem you use. Mix & match as you wish. Just
try to keep matching Gluster versions across your nodes.


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Arbiter node as VM

2017-06-29 Thread mabi
Hello,

I have a replica 2 GlusterFS 3.8.11 cluster on 2 Debian 8 physical servers 
using ZFS as filesystem. Now in order to avoid a split-brain situation I would 
like to add a third node as arbiter.
Regarding the arbiter node I have a few questions:
- can the arbiter node be a virtual machine? (I am planning to use Xen as 
hypervisor)
- can I use ext4 as file system on my arbiter? or does it need to be ZFS as the 
two other nodes?
- or should I use XFS with LVM thin provisioning here, as mentioned in the documentation?
- is it OK that my arbiter runs Debian 9 (Linux kernel v4) and my other two 
nodes run Debian 8 (kernel v3)?
- what about thin provisioning of my volume on the arbiter node 
(https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/)
 is this required? On my two other nodes I do not use thin provisioning or 
LVM, just ZFS.
Thanks in advance for your input.
Best regards,
Mabi

Re: [Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Walter Deignan
Thank you very much for the assistance.

-Walter Deignan
-Uline IT, Systems Architect



From:   Ravishankar N <ravishan...@redhat.com>
To: Walter Deignan <wdeig...@uline.com>
Cc: gluster-users@gluster.org
Date:   05/05/2017 11:43 AM
Subject:    Re: [Gluster-users] Arbiter and Hot Tier



Okay, just tried it in 3.9. Even though attaching a non-arbiter volume 
(say a replica 2) as a hot tier succeeds at the CLI level, the brick 
volfiles seem to be generated incorrectly (I see that the arbiter 
translator is getting loaded in one of the replica-2 bricks too, which is 
incorrect). I'd recommend not using arbiter volumes for tiering, 
irrespective of hot or cold tier.

 
On 05/05/2017 09:42 PM, Walter Deignan wrote:
Did that change between 3.9 and 3.10? When I originally saw some 
references on the Redhat storage packaged solution about a possible 
incompatibility I assumed it just meant that the hot tier itself couldn't 
be an arbiter volume. 

I was tripped up by the change in apparent support for the cold tier 
between 3.9 and 3.10. But maybe that was just a fixed oversight which 
never should have worked in the first place? 

-Walter Deignan
-Uline IT, Systems Architect 



From:Ravishankar N <ravishan...@redhat.com> 
To:Walter Deignan <wdeig...@uline.com>, gluster-users@gluster.org 
Date:05/05/2017 11:08 AM 
Subject:    Re: [Gluster-users] Arbiter and Hot Tier 



Hi Walter,
Yes, arbiter volumes are currently not supported with tiering.
-Ravi

On 05/05/2017 08:54 PM, Walter Deignan wrote: 
I've been googling this to no avail so apologies if this is explained 
somewhere I missed. 

Is there a known incompatibility between using arbiters and hot tiering? 

Experience on 3.9 

Original volume - replica 3 arbiter 1 
Attach replica 2 arbiter 1 hot tier - failure 
Attach replica 3 hot tier - success 

Experience on 3.10 

Original volume - replica 3 arbiter 1 
Attach replica 2 arbiter 1 hot tier - failure 
Attach replica 3 hot tier - failure 
Attach hot tier without specifying replica - success but comes in as a 
distributed tier which I would assume totally negates the point of having 
a replicated cold tier? 

The specific error message I get is "volume attach-tier: failed: 
Increasing replica count for arbiter volumes is not supported." 

-Walter Deignan
-Uline IT, Systems Architect 


Re: [Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Ravishankar N
Okay, just tried it in 3.9. Even though attaching a non-arbiter volume 
(say a replica 2) as a hot tier succeeds at the CLI level, the brick 
volfiles seem to be generated incorrectly (I see that the arbiter 
translator is getting loaded in one of the replica-2 bricks too, which 
is incorrect). I'd recommend not using arbiter volumes for tiering, 
irrespective of hot or cold tier.



On 05/05/2017 09:42 PM, Walter Deignan wrote:
Did that change between 3.9 and 3.10? When I originally saw some 
references on the Redhat storage packaged solution about a possible 
incompatibility I assumed it just meant that the hot tier itself 
couldn't be an arbiter volume.


I was tripped up by the change in apparent support for the cold tier 
between 3.9 and 3.10. But maybe that was just a fixed oversight which 
never should have worked in the first place?


-Walter Deignan
-Uline IT, Systems Architect



From: Ravishankar N <ravishan...@redhat.com>
To: Walter Deignan <wdeig...@uline.com>, gluster-users@gluster.org
Date: 05/05/2017 11:08 AM
Subject: Re: [Gluster-users] Arbiter and Hot Tier




Hi Walter,
Yes, arbiter volumes are currently not supported with tiering.
-Ravi

On 05/05/2017 08:54 PM, Walter Deignan wrote:
I've been googling this to no avail so apologies if this is explained 
somewhere I missed.


Is there a known incompatibility between using arbiters and hot tiering?

Experience on 3.9

Original volume - replica 3 arbiter 1
Attach replica 2 arbiter 1 hot tier - failure
Attach replica 3 hot tier - success

Experience on 3.10

Original volume - replica 3 arbiter 1
Attach replica 2 arbiter 1 hot tier - failure
Attach replica 3 hot tier - failure
Attach hot tier without specifying replica - success but comes in as a 
distributed tier which I would assume totally negates the point of 
having a replicated cold tier?


The specific error message I get is "volume attach-tier: failed: 
Increasing replica count for arbiter volumes is not supported."


-Walter Deignan
-Uline IT, Systems Architect


Re: [Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Walter Deignan
Did that change between 3.9 and 3.10? When I originally saw some 
references on the Redhat storage packaged solution about a possible 
incompatibility I assumed it just meant that the hot tier itself couldn't 
be an arbiter volume.

I was tripped up by the change in apparent support for the cold tier 
between 3.9 and 3.10. But maybe that was just a fixed oversight which 
never should have worked in the first place?

-Walter Deignan
-Uline IT, Systems Architect



From:   Ravishankar N <ravishan...@redhat.com>
To: Walter Deignan <wdeig...@uline.com>, gluster-users@gluster.org
Date:   05/05/2017 11:08 AM
Subject:    Re: [Gluster-users] Arbiter and Hot Tier



Hi Walter,
Yes, arbiter volumes are currently not supported with tiering.
-Ravi

On 05/05/2017 08:54 PM, Walter Deignan wrote:
I've been googling this to no avail so apologies if this is explained 
somewhere I missed. 

Is there a known incompatibility between using arbiters and hot tiering? 

Experience on 3.9 

Original volume - replica 3 arbiter 1 
Attach replica 2 arbiter 1 hot tier - failure 
Attach replica 3 hot tier - success 

Experience on 3.10 

Original volume - replica 3 arbiter 1 
Attach replica 2 arbiter 1 hot tier - failure 
Attach replica 3 hot tier - failure 
Attach hot tier without specifying replica - success but comes in as a 
distributed tier which I would assume totally negates the point of having 
a replicated cold tier? 

The specific error message I get is "volume attach-tier: failed: 
Increasing replica count for arbiter volumes is not supported." 

-Walter Deignan
-Uline IT, Systems Architect 


Re: [Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Ravishankar N

Hi Walter,
Yes, arbiter volumes are currently not supported with tiering.
-Ravi

On 05/05/2017 08:54 PM, Walter Deignan wrote:
I've been googling this to no avail so apologies if this is explained 
somewhere I missed.


Is there a known incompatibility between using arbiters and hot tiering?

Experience on 3.9

Original volume - replica 3 arbiter 1
Attach replica 2 arbiter 1 hot tier - failure
Attach replica 3 hot tier - success

Experience on 3.10

Original volume - replica 3 arbiter 1
Attach replica 2 arbiter 1 hot tier - failure
Attach replica 3 hot tier - failure
Attach hot tier without specifying replica - success but comes in as a 
distributed tier which I would assume totally negates the point of 
having a replicated cold tier?


The specific error message I get is "volume attach-tier: failed: 
Increasing replica count for arbiter volumes is not supported."


-Walter Deignan
-Uline IT, Systems Architect



[Gluster-users] Arbiter and Hot Tier

2017-05-05 Thread Walter Deignan
I've been googling this to no avail so apologies if this is explained 
somewhere I missed.

Is there a known incompatibility between using arbiters and hot tiering?

Experience on 3.9

Original volume - replica 3 arbiter 1
Attach replica 2 arbiter 1 hot tier - failure
Attach replica 3 hot tier - success

Experience on 3.10

Original volume - replica 3 arbiter 1
Attach replica 2 arbiter 1 hot tier - failure
Attach replica 3 hot tier - failure
Attach hot tier without specifying replica - success but comes in as a 
distributed tier which I would assume totally negates the point of having 
a replicated cold tier?

The specific error message I get is "volume attach-tier: failed: 
Increasing replica count for arbiter volumes is not supported."

-Walter Deignan
-Uline IT, Systems Architect

Re: [Gluster-users] arbiter node sharing

2017-01-16 Thread Ravishankar N

On 01/16/2017 09:42 PM, p...@email.cz wrote:

Hello dears,

How can I share an arbiter node between two or three Gluster clusters?

I've got two clusters (CentOS 7.2) with the Gluster (3.8) filesystem, and 
I'd like to share an arbiter node between them to save server nodes.

Example:
gluster volume create SDAP1 replica 3 arbiter 1 
16.0.0.161:/GLUSTER/sdaP1/GFS 16.0.0.162:/GLUSTER/sdaP1/GFS 
16.0.0.159:/GLUSTER/1KVM12-sda1/GFS  force


but gluster peer returns error:
peer probe: failed: 16.0.0.159 is either already part of another 
cluster or having volumes configured  ( YES, it IS , I know)


So, is there any way to make this work?


No, peer probing a node that is a part of another cluster is not 
supported in gluster. All volumes need to be a part of the same trusted 
storage pool (i.e. cluster).

Regards,
Ravi


regards
Paf1




[Gluster-users] arbiter node sharing

2017-01-16 Thread p...@email.cz

Hello dears,

How can I share an arbiter node between two or three Gluster clusters?

I've got two clusters (CentOS 7.2) with the Gluster (3.8) filesystem, and 
I'd like to share an arbiter node between them to save server nodes.

Example:
gluster volume create SDAP1 replica 3 arbiter 1 
16.0.0.161:/GLUSTER/sdaP1/GFS 16.0.0.162:/GLUSTER/sdaP1/GFS 
16.0.0.159:/GLUSTER/1KVM12-sda1/GFS  force


but gluster peer returns error:
peer probe: failed: 16.0.0.159 is either already part of another cluster 
or having volumes configured  ( YES, it IS , I know)


So, is there any way to make this work?

regards
Paf1


Re: [Gluster-users] Arbiter Addition in Replicated environment

2016-12-06 Thread Ravishankar N

On 12/06/2016 01:33 PM, Atul Yadav wrote:

Hi Team,


Can we add an arbiter brick to a running 2-node replicated environment?


Yes, this should work on the 3.8 release with the command you mentioned. It 
is recommended to add the brick when no I/O is happening on the volume.


For an example
Glusterfs 2 node replication
Current glusterfs storage size 4 TB.
After adding an arbiter brick in this environment, what will be the result?

The volume would be converted from 1x2 to 1x(2+1).
-Ravi


#gluster volume add-brick test replica 3 arbiter 1 server3:/glusterfs/arbi


Thank You
Atul Yadav
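Putting the reply and the command together, the conversion can be sketched end to end. This is a hedged sketch, not an official procedure: the peer name `server3`, volume name `test`, and brick path `/glusterfs/arbi` follow the example command above, and the existing replica is assumed healthy with no pending heals and no client I/O.

```shell
# Make the new node part of the trusted storage pool (illustrative hostname).
gluster peer probe server3

# Convert the 1x2 replica to 1x(2+1) by adding the arbiter brick.
gluster volume add-brick test replica 3 arbiter 1 server3:/glusterfs/arbi

# Confirm the new layout and watch the arbiter pick up metadata via self-heal.
gluster volume info test
gluster volume heal test info
```

These commands require a live Gluster cluster; adjust names and paths to your environment.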




[Gluster-users] Arbiter Addition in Replicated environment

2016-12-06 Thread Atul Yadav
Hi Team,


Can we add an arbiter brick to a running 2-node replicated environment?

For an example
Glusterfs 2 node replication
Current glusterfs storage size 4 TB.
After adding an arbiter brick in this environment, what will be the result?

#gluster volume add-brick test replica 3 arbiter 1 server3:/glusterfs/arbi


Thank You
Atul Yadav

Re: [Gluster-users] Arbiter performance issue

2016-04-05 Thread Ravishankar N

On 03/30/2016 06:36 AM, Ravishankar N wrote:

On 03/30/2016 01:03 AM, Russell Purinton wrote:
Hi all, sorry for 2 threads today, but I felt like this deserved a 
separate thread…


I was trying to replace my replica 2 volumes with replica 3 arbiter 1 
volumes…  The new volumes though are 10x slower on direct writes than 
their replica 2 counterparts. I'm wondering if this is to be expected 
or if I might have done something wrong? I confirmed that no data 
is being written to the arbiter bricks, just metadata.





It does look like a bug Russel. Another user had reported the same 
behavior. I'll take a look and update.

Thanks,
Ravi


Hi,
I've raised https://bugzilla.redhat.com/show_bug.cgi?id=1324004 and sent 
a fix @ http://review.gluster.org/#/c/13906/.
Once it gets accepted in master, I'll backport it to 3.7 branch. If 
everything goes well, it should make it to the 3.7.11 release. Feel free 
to test the patch if you like.


Thanks,
Ravi



Here are the tests I ran. I did the dd tests multiple times with 
different file names, and they all had the same speeds:


[root@fs134 wtg002]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.49974 s, 11.0 MB/s
[root@fs134 wtg002]# cd ..
[root@fs134 home]# cd wtg001
[root@fs134 wtg001]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.888929 s, 118 MB/s
[root@fs134 wtg001]# gluster volume info wtg001

Volume Name: wtg001
Type: Distributed-Replicate
Volume ID: 53179cfe-9896-4c94-9f1d-01dd474e027e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: xs141:/brick1/wtg001p0r0
Brick2: xs138:/brick1/wtg001p0r1
Brick3: xs139:/brick1/wtg001p1r0
Brick4: xs140:/brick1/wtg001p1r1
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@fs134 wtg001]# gluster volume info wtg002

Volume Name: wtg002
Type: Distributed-Replicate
Volume ID: 410b67ad-bc1e-473b-b98f-ad431d7c9831
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: xs141:/brick1/wtg001d2r0
Brick2: xs138:/brick1/wtg001d2r1
Brick3: xs139:/brick1/wtg001d2ra
Brick4: xs139:/brick1/wtg001d3r0
Brick5: xs140:/brick1/wtg001d3r1
Brick6: xs141:/brick1/wtg001d3ra
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@fs134 wtg001]# mount | grep wtg
0:/wtg001 on /home/wtg001 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
0:/wtg002 on /home/wtg002 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)

[root@fs134 wtg001]#





Re: [Gluster-users] Arbiter performance issue

2016-03-29 Thread Ravishankar N

On 03/30/2016 01:03 AM, Russell Purinton wrote:
Hi all, sorry for 2 threads today, but I felt like this deserved a 
separate thread…


I was trying to replace my replica 2 volumes with replica 3 arbiter 1 
volumes…  The new volumes though are 10x slower on direct writes than 
their replica 2 counterparts. I'm wondering if this is to be expected 
or if I might have done something wrong? I confirmed that no data is 
being written to the arbiter bricks, just metadata.





It does look like a bug Russel. Another user had reported the same 
behavior. I'll take a look and update.

Thanks,
Ravi



Here are the tests I ran. I did the dd tests multiple times with 
different file names, and they all had the same speeds:


[root@fs134 wtg002]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.49974 s, 11.0 MB/s
[root@fs134 wtg002]# cd ..
[root@fs134 home]# cd wtg001
[root@fs134 wtg001]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.888929 s, 118 MB/s
[root@fs134 wtg001]# gluster volume info wtg001

Volume Name: wtg001
Type: Distributed-Replicate
Volume ID: 53179cfe-9896-4c94-9f1d-01dd474e027e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: xs141:/brick1/wtg001p0r0
Brick2: xs138:/brick1/wtg001p0r1
Brick3: xs139:/brick1/wtg001p1r0
Brick4: xs140:/brick1/wtg001p1r1
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@fs134 wtg001]# gluster volume info wtg002

Volume Name: wtg002
Type: Distributed-Replicate
Volume ID: 410b67ad-bc1e-473b-b98f-ad431d7c9831
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: xs141:/brick1/wtg001d2r0
Brick2: xs138:/brick1/wtg001d2r1
Brick3: xs139:/brick1/wtg001d2ra
Brick4: xs139:/brick1/wtg001d3r0
Brick5: xs140:/brick1/wtg001d3r1
Brick6: xs141:/brick1/wtg001d3ra
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@fs134 wtg001]# mount | grep wtg
0:/wtg001 on /home/wtg001 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
0:/wtg002 on /home/wtg002 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)

[root@fs134 wtg001]#





[Gluster-users] Arbiter performance issue

2016-03-29 Thread Russell Purinton
Hi all, sorry for 2 threads today, but I felt like this deserved a separate 
thread…

I was trying to replace my replica 2 volumes with replica 3 arbiter 1 volumes. 
The new volumes, though, are 10x slower on direct writes than their replica 2 
counterparts. I'm wondering if this is to be expected or if I might have done 
something wrong? I confirmed that no data is being written to the arbiter 
bricks, just metadata.



Here are the tests I ran. I did the dd tests multiple times with different 
file names, and they all had the same speeds:

[root@fs134 wtg002]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.49974 s, 11.0 MB/s
[root@fs134 wtg002]# cd ..
[root@fs134 home]# cd wtg001
[root@fs134 wtg001]# dd if=/dev/zero of=test bs=1M count=100 oflag=direct
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.888929 s, 118 MB/s
[root@fs134 wtg001]# gluster volume info wtg001

Volume Name: wtg001
Type: Distributed-Replicate
Volume ID: 53179cfe-9896-4c94-9f1d-01dd474e027e
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: xs141:/brick1/wtg001p0r0
Brick2: xs138:/brick1/wtg001p0r1
Brick3: xs139:/brick1/wtg001p1r0
Brick4: xs140:/brick1/wtg001p1r1
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@fs134 wtg001]# gluster volume info wtg002

Volume Name: wtg002
Type: Distributed-Replicate
Volume ID: 410b67ad-bc1e-473b-b98f-ad431d7c9831
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: xs141:/brick1/wtg001d2r0
Brick2: xs138:/brick1/wtg001d2r1
Brick3: xs139:/brick1/wtg001d2ra
Brick4: xs139:/brick1/wtg001d3r0
Brick5: xs140:/brick1/wtg001d3r1
Brick6: xs141:/brick1/wtg001d3ra
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
[root@fs134 wtg001]# mount | grep wtg
0:/wtg001 on /home/wtg001 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
0:/wtg002 on /home/wtg002 type fuse.glusterfs 
(rw,default_permissions,allow_other,max_read=131072)
[root@fs134 wtg001]#



Re: [Gluster-users] Arbiter doesn't create

2016-03-23 Thread André Bauer
The third brick should be the arbiter.
Not sure if it should be marked as arbiter in volume info.

Try to put data on it.
Brick 3 should be empty and get only metadata.
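One quick way to check that behaviour, sketched below with hypothetical mount and brick paths (`/mnt/gv0` for the client mount, `/data/brick0` for each brick):

```shell
# Write a test file through the client mount point.
dd if=/dev/urandom of=/mnt/gv0/probe.bin bs=1M count=10

# On node 1 or node 2 the brick copy occupies real space (~10M).
du -h /data/brick0/probe.bin

# On the arbiter node the file exists but is zero-length; only the
# replication metadata (AFR xattrs) is stored there.
du -h /data/brick0/probe.bin
getfattr -d -m . -e hex /data/brick0/probe.bin
```

Run the `du`/`getfattr` checks on each node; the commands assume direct access to the brick filesystems.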

Regards
André

Am 23.03.2016 um 14:33 schrieb Ralf Simon:
> Hello,
> 
> I've installed 
> 
> # yum info glusterfs-server
> Loaded plugins: fastestmirror
> Loading mirror speeds from cached hostfile
> Installed Packages
> Name: glusterfs-server
> Arch: x86_64
> Version : 3.7.6
> Release : 1.el7
> Size: 4.3 M
> Repo: installed
> From repo   : latest
> Summary : Clustered file-system server
> URL : http://www.gluster.org/docs/index.php/GlusterFS
> License : GPLv2 or LGPLv3+
> Description : GlusterFS is a distributed file-system capable of scaling
> to several
> : petabytes. It aggregates various storage bricks over
> Infiniband RDMA
> : or TCP/IP interconnect into one large parallel network file
> : system. GlusterFS is one of the most sophisticated file
> systems in
> : terms of features and extensibility.  It borrows a
> powerful concept
> : called Translators from GNU Hurd kernel. Much of the code
> in GlusterFS
> : is in user space and easily manageable.
> :
> : This package provides the glusterfs server daemon.
> 
> I wanted to build a ...
> 
> # gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0
> d90031:/data/brick0 d90034:/data/brick0
> volume create: gv0: success: please start the volume to access data
> 
> ... but I got a ...
> 
> # gluster volume info
> 
> Volume Name: gv0
> Type: Replicate
> Volume ID: 329325fc-ceed-4dee-926f-038f44281678
> Status: Created
> Number of Bricks: *1 x 3 = 3*
> Transport-type: tcp
> Bricks:
> Brick1: d90029:/data/brick0
> Brick2: d90031:/data/brick0
> Brick3: d90034:/data/brick0
> Options Reconfigured:
> performance.readdir-ahead: on
> 
> ... without the requested arbiter !
> 
> The same situation with 6 bricks ...
> 
> # gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0
> d90031:/data/brick0 d90034:/data/brick0 d90029:/data/brick1
> d90031:/data/brick1 d90034:/data/brick1
> volume create: gv0: success: please start the volume to access data
> [root@d90029 ~]# gluster vol info
> 
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: 2b8dbcc0-c4bb-41e3-a870-e164d8d10c49
> Status: Created
> Number of Bricks: *2 x 3 = 6*
> Transport-type: tcp
> Bricks:
> Brick1: d90029:/data/brick0
> Brick2: d90031:/data/brick0
> Brick3: d90034:/data/brick0
> Brick4: d90029:/data/brick1
> Brick5: d90031:/data/brick1
> Brick6: d90034:/data/brick1
> Options Reconfigured:
> performance.readdir-ahead: on
> 
> 
> In contrast, the documentation says:
> 
> 
> *Arbiter configuration*
> 
> The arbiter configuration a.k.a. the arbiter volume is the perfect sweet
> spot between a 2-way replica and 3-way replica to avoid files getting
> into split-brain, */without the 3x storage space/* as mentioned earlier.
> The syntax for creating the volume is:
> 
> *gluster volume create replica 3 arbiter 1 host1:brick1 host2:brick2
> host3:brick3*
> 
> For example:
> 
> *gluster volume create testvol replica 3 arbiter 1
> 127.0.0.2:/bricks/brick{1..6} force*
> 
> volume create: testvol: success: please start the volume to access data
> 
> *gluster volume info*
> 
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: ae6c4162-38c2-4368-ae5d-6bad141a4119
> Status: Created
> Number of Bricks: *2 x (2 + 1) = 6*
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/bricks/brick1
> Brick2: 127.0.0.2:/bricks/brick2
> Brick3: 127.0.0.2:/bricks/brick3 *(arbiter)*
> Brick4: 127.0.0.2:/bricks/brick4
> Brick5: 127.0.0.2:/bricks/brick5
> Brick6: 127.0.0.2:/bricks/brick6 *(arbiter)*
> Options Reconfigured : transport.address-family: inet
> performance.readdir-ahead: on `
> 
> 
> 
> What's going wrong? Can anybody help?
> 
> Kind Regards
> Ralf Simon
> 
> 
> 
> 
> 


-- 
Mit freundlichen Grüßen
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

[Gluster-users] Arbiter doesn't create

2016-03-23 Thread Ralf Simon
Hello,

I've installed 

# yum info glusterfs-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
Name: glusterfs-server
Arch: x86_64
Version : 3.7.6
Release : 1.el7
Size: 4.3 M
Repo: installed
From repo   : latest
Summary : Clustered file-system server
URL : http://www.gluster.org/docs/index.php/GlusterFS
License : GPLv2 or LGPLv3+
Description : GlusterFS is a distributed file-system capable of scaling to 
several
: petabytes. It aggregates various storage bricks over 
Infiniband RDMA
: or TCP/IP interconnect into one large parallel network file
: system. GlusterFS is one of the most sophisticated file 
systems in
: terms of features and extensibility.  It borrows a powerful 
concept
: called Translators from GNU Hurd kernel. Much of the code in 
GlusterFS
: is in user space and easily manageable.
:
: This package provides the glusterfs server daemon.

I wanted to build a ...

# gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0 
d90031:/data/brick0 d90034:/data/brick0
volume create: gv0: success: please start the volume to access data

... but I got a ...

# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 329325fc-ceed-4dee-926f-038f44281678
Status: Created
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: d90029:/data/brick0
Brick2: d90031:/data/brick0
Brick3: d90034:/data/brick0
Options Reconfigured:
performance.readdir-ahead: on

... without the requested arbiter !

The same situation with 6 bricks ...

# gluster volume create gv0 replica 3 arbiter 1 d90029:/data/brick0 
d90031:/data/brick0 d90034:/data/brick0 d90029:/data/brick1 
d90031:/data/brick1 d90034:/data/brick1
volume create: gv0: success: please start the volume to access data
[root@d90029 ~]# gluster vol info

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 2b8dbcc0-c4bb-41e3-a870-e164d8d10c49
Status: Created
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: d90029:/data/brick0
Brick2: d90031:/data/brick0
Brick3: d90034:/data/brick0
Brick4: d90029:/data/brick1
Brick5: d90031:/data/brick1
Brick6: d90034:/data/brick1
Options Reconfigured:
performance.readdir-ahead: on


In contrast, the documentation says:


Arbiter configuration
The arbiter configuration a.k.a. the arbiter volume is the perfect sweet 
spot between a 2-way replica and 3-way replica to avoid files getting into 
split-brain, without the 3x storage space as mentioned earlier. The syntax 
for creating the volume is:
gluster volume create replica 3 arbiter 1 host1:brick1 host2:brick2 
host3:brick3
For example:
gluster volume create testvol replica 3 arbiter 1 
127.0.0.2:/bricks/brick{1..6} force
volume create: testvol: success: please start the volume to access data
gluster volume info
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: ae6c4162-38c2-4368-ae5d-6bad141a4119
Status: Created
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: 127.0.0.2:/bricks/brick1
Brick2: 127.0.0.2:/bricks/brick2
Brick3: 127.0.0.2:/bricks/brick3 (arbiter)
Brick4: 127.0.0.2:/bricks/brick4
Brick5: 127.0.0.2:/bricks/brick5
Brick6: 127.0.0.2:/bricks/brick6 (arbiter)
Options Reconfigured : transport.address-family: inet
performance.readdir-ahead: on `



What's going wrong? Can anybody help?

Kind Regards
Ralf Simon




Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Oleksandr Natalenko
And for 256b inode:

(597904 - 33000) / (1066036 - 23) == 530 bytes per inode.

So I still consider 1k to be good estimation for average workload.
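Wrapped up as a small helper, the sizing arithmetic from these tests looks like this. A sketch only: the 1 KiB-per-inode figure and the 30% reserve are this thread's rules of thumb, not an official formula, and exact rounding differs slightly from the 530-byte figure quoted above.

```python
def arbiter_bytes_per_inode(used_kib, initial_kib, inodes, initial_inodes):
    """Observed arbiter-brick overhead per inode from one test run."""
    return (used_kib - initial_kib) * 1024 / (inodes - initial_inodes)

def arbiter_brick_gib(expected_inodes, bytes_per_inode=1024):
    """Estimate arbiter brick size using the thread's 1 KiB/inode rule."""
    return expected_inodes * bytes_per_inode / 2**30

# Numbers from the 256-byte-inode test above (brick usage in KiB, inode counts).
print(round(arbiter_bytes_per_inode(597904, 33000, 1066036, 23)))  # 543

# Brick size needed for an expected 10 million files and directories:
print(round(arbiter_brick_gib(10_000_000), 1))  # 9.5
```

The helper takes brick usage in KiB as reported by df; plug in your own before/after measurements to derive a per-workload figure.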

Regards,
  Oleksandr.

On четвер, 17 березня 2016 р. 09:58:14 EET Ravishankar N wrote:
> Looks okay to me Oleksandr. You might want to make a github gist of your
> tests+results as a reference for others.

Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Ravishankar N

On 03/16/2016 10:57 PM, Oleksandr Natalenko wrote:

OK, I've repeated the test with the following hierarchy:

* 10 top-level folders with 10 second-level folders each;
* 10 000 files in each second-level folder.

So, this composes 10×10×10 000 = 1M files and 110 folders

Initial brick used space: 33 M
Initial inodes count: 24

After test:

* each brick in replica took 18G, and the arbiter brick took 836M;
* inodes count: 1066036

So:

(836 - 33) / (1066036 - 24) == 790 bytes per inode.

So, yes, it is slightly bigger value than with previous test due to, I guess,
lots of files in one folder, but it is still too far from 4k. Given a good
engineer should consider 30% reserve, the ratio is about 1k per stored inode.

Correct me if I'm missing something (regarding average workload and not corner
cases).


Looks okay to me Oleksandr. You might want to make a github gist of your 
tests+results as a reference for others.

Regards,
Ravi



Test script is here: [1]

Regards,
   Oleksandr.

[1] http://termbin.com/qlvz

On вівторок, 8 березня 2016 р. 19:13:05 EET Ravishankar N wrote:

On 03/05/2016 03:45 PM, Oleksandr Natalenko wrote:

In order to estimate GlusterFS arbiter brick size, I've deployed test
setup
with replica 3 arbiter 1 volume within one node. Each brick is located on
separate HDD (XFS with inode size == 512). Using GlusterFS v3.7.6 +
memleak
patches. Volume options are kept default.

Here is the script that creates files and folders in mounted volume: [1]

The script creates 1M of files of random size (between 1 and 32768 bytes)
and some amount of folders. After running it I've got 1036637 folders.
So, in total it is 2036637 files and folders.

The initial used space on each brick is 42M . After running script I've
got:

replica brick 1 and 2: 19867168 kbytes == 19G
arbiter brick: 1872308 kbytes == 1.8G

The amount of inodes on each brick is 3139091. So here goes estimation.

Dividing arbiter used space by files+folders we get:

(1872308 - 42000)/2036637 == 899 bytes per file or folder

Dividing arbiter used space by inodes we get:

(1872308 - 42000)/3139091 == 583 bytes per inode

Not sure which calculation is correct.

I think the first one is right because you still haven't used up all the
inodes (2036637 used vs. the maximum permissible 3139091). But again this
is an approximation, because not all files would be 899 bytes. For example,
if there are a thousand files present in a directory, then `du` of the
directory would be more than the sum of `du` of the individual files,
because the directory itself takes some disk space to store the dentries.
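A quick way to see this dentry overhead on a local filesystem (a sketch added for illustration, not from the thread; results vary by filesystem):

```shell
#!/bin/sh
# Create a scratch directory, fill it with 1000 empty files, and compare
# the directory's own disk usage before and after. Any growth (on e.g.
# ext4/XFS) is the space taken by the dentries alone, since every file is
# zero bytes. On some filesystems (tmpfs) directories report 0 either way.
d=$(mktemp -d)
before=$(du -sk "$d" | awk '{print $1}')
i=1
while [ "$i" -le 1000 ]; do : > "$d/file$i"; i=$((i + 1)); done
after=$(du -sk "$d" | awk '{print $1}')
echo "empty dir: ${before}K, with 1000 empty files: ${after}K"
rm -rf "$d"
```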


I guess we should consider the one that accounts for inodes because of the
.glusterfs/ folder data.

Nevertheless, in contrast, documentation [2] says it should be 4096 bytes
per file. Am I wrong with my calculations?

The 4KB is a conservative estimate considering the fact that though the
arbiter brick does not store data, it still keeps a copy of both user
and gluster xattrs. For example, if the application sets a lot of
xattrs, it can consume a data block if they cannot be accommodated on
the inode itself.  Also there is the .glusterfs folder like you said
which would take up some space. Here is what I tried on an XFS brick:
[root@ravi4 brick]# touch file
[root@ravi4 brick]# ls -l file
-rw-r--r-- 1 root root 0 Mar  8 12:54 file
[root@ravi4 brick]# du file
0	file
[root@ravi4 brick]# for i in {1..100}
> do
> setfattr -n user.value$i -v value$i file
> done
[root@ravi4 brick]# ll -l file
-rw-r--r-- 1 root root 0 Mar  8 12:54 file
[root@ravi4 brick]# du -h file
4.0K	file
Hope this helps,
Ravi


Pranith?

[1] http://termbin.com/ka9x
[2] http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/






Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Ravishankar N
Thanks Oleksandr! I'll update 
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/ 
with a link to your gist.


On 03/18/2016 04:24 AM, Oleksandr Natalenko wrote:

Ravi,

here is the summary: [1]

Regards,
   Oleksandr.

[1] https://gist.github.com/e8265ca07f7b19f30bb3


Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Oleksandr Natalenko
OK, I've repeated the test with the following hierarchy:

* 10 top-level folders with 10 second-level folders each;
* 10 000 files in each second-level folder.

So, this composes 10 × 10 × 10 000 = 1M files and 100 folders

Initial brick used space: 33 M
Initial inodes count: 24
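The layout above can be sketched like this (an illustrative stand-in for the termbin script, with deterministic file sizes in place of random ones):

```shell
#!/bin/sh
# make_tree ROOT TOP SUB N: create TOP x SUB second-level folders with N
# files each, sizes spread over 1..32768 bytes. The test above corresponds
# to: make_tree /mnt/testvol 10 10 10000 (1M files, 100 folders); the
# mount path is illustrative.
make_tree() {
  root=$1; top=$2; sub=$3; n=$4
  for i in $(seq 1 "$top"); do
    for j in $(seq 1 "$sub"); do
      d="$root/top$i/sub$j"
      mkdir -p "$d"
      k=1
      while [ "$k" -le "$n" ]; do
        # deterministic size in 1..32768 standing in for a random size
        head -c $(( (i * j * k * 331) % 32768 + 1 )) /dev/zero > "$d/f$k"
        k=$((k + 1))
      done
    done
  done
}
```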

After test:

* each brick in replica took 18G, and the arbiter brick took 836M;
* inodes count: 1066036

So:

(836 - 33) / (1066036 - 24) == 790 bytes per inode.

So, yes, the value is slightly bigger than in the previous test, I guess due to
the large number of files per folder, but it is still far from 4 KB. Given that a
good engineer should keep a 30% reserve, the ratio comes to about 1 KB per stored inode.

Correct me if I'm missing something (regarding average workload and not corner 
cases).
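As a sizing rule of thumb, the estimate above can be applied like this (a sketch; ENTRIES is an assumed workload figure, not a number from this thread):

```shell
#!/bin/sh
# ~1 KB per stored inode already includes the 30% reserve on top of the
# measured ~790 bytes.
ENTRIES=1000000            # expected files + folders on the volume (assumed)
BYTES_PER_INODE=1024       # measured ~790 B, rounded up to 1 KB with reserve
arbiter_bytes=$((ENTRIES * BYTES_PER_INODE))
echo "plan for at least $((arbiter_bytes / 1024 / 1024)) MiB on the arbiter brick"
```

For 1M entries this gives 976 MiB, in line with the 836M actually consumed in the test above.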

Test script is here: [1]

Regards,
  Oleksandr.

[1] http://termbin.com/qlvz




Re: [Gluster-users] Arbiter brick size estimation

2016-03-19 Thread Oleksandr Natalenko
Ravi, I will definitely arrange the results into some short handy 
document and post it here.


Also, @JoeJulian on IRC suggested me to perform this test on XFS bricks 
with inode size of 256b and 1k:


===
22:38 <@JoeJulian> post-factum: Just wondering what 256 byte inodes 
might look like for that. And, by the same token, 1k inodes.

22:39 < post-factum> JoeJulian: should I try 1k inodes instead?
22:41 <@JoeJulian> post-factum: Doesn't hurt to try. My expectation is 
that disk usage will go up despite inode usage going down.

22:41 < post-factum> JoeJulian: ok, will check that
22:41 <@JoeJulian> post-factum: and with 256, I'm curious if inode usage 
will stay close to the same while disk usage goes down.

===

Here are the results for 1k:

(1171336 - 33000) / (1066036 - 23) == 1068 bytes per inode.

Disk usage is indeed higher (1.2G), but inodes usage is the same.

Will test with 256b inode now.

17.03.2016 06:28, Ravishankar N wrote:

Looks okay to me Oleksandr. You might want to make a github gist of
your tests+results as a reference for others.



Re: [Gluster-users] Arbiter brick size estimation

2016-03-18 Thread Oleksandr Natalenko
Ravi,

here is the summary: [1]

Regards,
  Oleksandr.

[1] https://gist.github.com/e8265ca07f7b19f30bb3




Re: [Gluster-users] Arbiter brick size estimation

2016-03-08 Thread Oleksandr Natalenko
Hi.

On Tuesday, 8 March 2016 at 19:13:05 EET, Ravishankar N wrote:
> I think the first one is right because you still haven't used up all the
> inodes (2036637 used vs. the maximum permissible 3139091). But again this
> is an approximation, because not all files would be 899 bytes. For example,
> if there are a thousand files present in a directory, then `du` of the
> directory would be more than the sum of `du` of the individual files,
> because the directory itself takes some disk space to store the dentries.

I believe you've got me wrong. 2036637 is the number of files+folders, while
3139091 is the number of inodes actually allocated on the underlying FS
(according to `df -i`). The maximum number of inodes is much higher than that,
and I do not take it into account.

Also, I should probably recheck the results for 1000 files per folder to make sure.

> The 4KB is a conservative estimate considering the fact that though the
> arbiter brick does not store data, it still keeps a copy of both user
> and gluster xattrs. For example, if the application sets a lot of
> xattrs, it can consume a data block if they cannot be accommodated on
> the inode itself.  Also there is the .glusterfs folder like you said
> which would take up some space. Here is what I tried on an XFS brick:

4 KB as an upper bound sounds reasonable to me, thanks. But the average value
will still be lower, I believe, as it is uncommon for apps to set lots of
xattrs, especially in an ordinary deployment.
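The gap between the documented conservative figure and the measured average can be put in numbers, e.g. for a hypothetical 10M-file volume (pure arithmetic; nothing from the thread beyond the two per-inode figures):

```shell
#!/bin/sh
# est_gib BYTES_PER_INODE FILES: projected arbiter brick size in GiB
est_gib() { awk "BEGIN { printf \"%.1f\", $1 * $2 / 1073741824 }"; }
conservative=$(est_gib 4096 10000000)   # 4 KB per file, per the docs
thumb=$(est_gib 1024 10000000)          # ~1 KB per inode, per this thread
echo "10M files: ${conservative} GiB (conservative) vs ${thumb} GiB (measured + 30%)"
```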

Regards,
  Oleksandr.

[Gluster-users] Arbiter brick size estimation

2016-03-05 Thread Oleksandr Natalenko
In order to estimate GlusterFS arbiter brick size, I've deployed a test setup
with a replica 3 arbiter 1 volume within one node. Each brick is located on a
separate HDD (XFS with inode size == 512). Using GlusterFS v3.7.6 + memleak
patches. Volume options are kept at defaults.

Here is the script that creates files and folders in mounted volume: [1]

The script creates 1M files of random size (between 1 and 32768 bytes) and
some number of folders. After running it I've got 1036637 folders. So, in
total it is 2036637 files and folders.

The initial used space on each brick is 42M. After running the script I've got:

replica brick 1 and 2: 19867168 kbytes == 19G
arbiter brick: 1872308 kbytes == 1.8G

The amount of inodes on each brick is 3139091. So here goes estimation.

Dividing arbiter used space by files+folders we get:

(1872308 - 42000)/2036637 == 899 bytes per file or folder

Dividing arbiter used space by inodes we get:

(1872308 - 42000)/3139091 == 583 bytes per inode
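The two divisions above can be re-run as follows; note they divide a KB difference by a count and read the result as bytes, i.e. 1 KB is effectively treated as 1000 bytes here:

```shell
#!/bin/sh
# Numbers are taken verbatim from the measurements above.
USED_KB=1872308; INITIAL_KB=42000
ENTRIES=2036637; INODES=3139091
per_entry=$(awk "BEGIN { printf \"%.0f\", ($USED_KB - $INITIAL_KB) * 1000 / $ENTRIES }")
per_inode=$(awk "BEGIN { printf \"%.0f\", ($USED_KB - $INITIAL_KB) * 1000 / $INODES }")
echo "$per_entry bytes per file or folder"   # 899
echo "$per_inode bytes per inode"            # 583
```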

Not sure which calculation is correct. I guess we should consider the one
that accounts for inodes because of the .glusterfs/ folder data.

Nevertheless, in contrast, documentation [2] says it should be 4096 bytes per 
file. Am I wrong with my calculations?

Pranith?

[1] http://termbin.com/ka9x
[2] 
http://gluster.readthedocs.org/en/latest/Administrator%20Guide/arbiter-volumes-and-quorum/


Re: [Gluster-users] Arbiter vs Dummy Node Details

2015-12-29 Thread Ravishankar N

On 12/30/2015 04:20 AM, Kyle Harris wrote:

Hello All,

Forgive the duplicate but I forgot to give the first post a title so 
this corrects that.  Anyway, I recently discovered the new arbiter 
functionality of the 3.7 branch so I decided to give it a try.  First 
off, I too am looking forward to the ability to add an arbiter to an 
already existing volume as discussed in the following thread: 
https://www.gluster.org/pipermail/gluster-users/2015-August/023030.html.



This is not implemented yet, Kyle. We are targeting it for 3.8.
However, my first question for now is can someone perhaps go into a 
bit of detail regarding the difference between using this new arbiter 
functionality versus adding a dummy node with regards to helping to 
eliminate split-brain?  In other words, a bit of information on which 
is best and why?


See this thread 
https://www.gluster.org/pipermail/gluster-users/2015-October/023915.html 
for more information. Client quorum is a better way to avoid split-brain 
of files.
Arbiter is a type of replicate volume which uses client quorum ( plus 
some arbitration logic)  to avoid split-brains.


Second, I noticed at the following URL where it discusses this new
functionality it says, and I quote: "By default, client quorum
(cluster.quorum-type) is set to auto . . .", which I found a bit
confusing. After setting up my new cluster I noticed that none of the
quorum settings, including cluster.quorum-type, seem to have a value set?
The client-quorum is indeed enabled and set to auto; it is just that the
`volume info` output does not display it correctly. It is being fixed
(http://review.gluster.org/#/c/11872/).


-Ravi

Arbiter Link
https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/afr-arbiter-volumes.md

Thank you.




[Gluster-users] Arbiter vs Dummy Node Details

2015-12-29 Thread Kyle Harris
Hello All,

Forgive the duplicate but I forgot to give the first post a title so this
corrects that.  Anyway, I recently discovered the new arbiter functionality
of the 3.7 branch so I decided to give it a try.  First off, I too am
looking forward to the ability to add an arbiter to an already existing
volume as discussed in the following thread:
https://www.gluster.org/pipermail/gluster-users/2015-August/023030.html.

However, my first question for now is can someone perhaps go into a bit of
detail regarding the difference between using this new arbiter
functionality versus adding a dummy node with regards to helping to
eliminate split-brain?  In other words, a bit of information on which is
best and why?

Second, I noticed at the following URL where it discusses this new
functionality it says, and I quote: "By default, client quorum
(cluster.quorum-type) is set to auto . . .", which I found a bit
confusing. After setting up my new cluster I noticed that none of the
quorum settings, including cluster.quorum-type, seem to have a value set?

Arbiter Link
https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/afr-arbiter-volumes.md

Thank you.

Re: [Gluster-users] Arbiter vs Dummy Node Details

2015-12-29 Thread Pranith Kumar Karampuri



On 12/30/2015 06:42 AM, Ravishankar N wrote:

On 12/30/2015 04:20 AM, Kyle Harris wrote:

Hello All,

Forgive the duplicate but I forgot to give the first post a title so 
this corrects that. Anyway, I recently discovered the new arbiter 
functionality of the 3.7 branch so I decided to give it a try.  First 
off, I too am looking forward to the ability to add an arbiter to an 
already existing volume as discussed in the following thread: 
https://www.gluster.org/pipermail/gluster-users/2015-August/023030.html.



This is not implemented yet, Kyle. We are targeting it for 3.8.
However, my first question for now is can someone perhaps go into a 
bit of detail regarding the difference between using this new arbiter 
functionality versus adding a dummy node with regards to helping to 
eliminate split-brain?  In other words, a bit of information on which 
is best and why?


See this thread 
https://www.gluster.org/pipermail/gluster-users/2015-October/023915.html 
for more information. Client quorum is a better way to avoid 
split-brain of files.
Arbiter is a type of replicate volume which uses client quorum ( plus 
some arbitration logic)  to avoid split-brains.


Second, I noticed at the following URL where it discusses this new
functionality it says, and I quote: "By default, client quorum
(cluster.quorum-type) is set to auto . . .", which I found a bit
confusing. After setting up my new cluster I noticed that none of the
quorum settings, including cluster.quorum-type, seem to have a value set?
The client-quorum is indeed enabled and set to auto; it is just that
the `volume info` output does not display it correctly. It is being
fixed (http://review.gluster.org/#/c/11872/).
client-quorum is set to auto only for 3-way replication. For 2-way
replication there is no quorum.


Pranith


-Ravi

Arbiter Link
https://github.com/gluster/glusterfs/blob/release-3.7/doc/features/afr-arbiter-volumes.md

Thank you.




Re: [Gluster-users] Arbiter volume

2015-08-06 Thread Ravishankar N



On 08/06/2015 09:17 AM, Pranith Kumar Karampuri wrote:



On 08/06/2015 02:41 AM, Fredrik Brandt wrote:

Arbiter volume

Hi,


Gave arbiter volume a shot today but ran into some problems with it:


1. My arbiter brick is running on a much smaller disk, which presents a
problem during a df: it shows the smaller disk size. If I understand the
arbiter correctly, there is no need to match the size of the real bricks,
so is this by design?


Good point. I think for statfs, i.e. the syscall used by 'df', we need to
ignore the output from the arbiter brick. Thanks for this input; we will
raise a bug and work on it. You shouldn't see any problems because of
this issue, though.


statfs in AFR picks the brick with the least available free space for 
displaying the output. Assuming all 3 bricks are up, the statfs never 
picks the arbiter brick, so this problem won't be hit?




2. Using libgfapi with qemu libvirt did not work for me; it just
hangs with high CPU load on libvirtd. Should this work?


Just to isolate the problem does it work if you try to use the VMs 
through fuse mount?



Running Centos 7 (qemu libvirt) and FreeBSD 10.1 (as arbiter) with 
gluster 3.7.2.


Could you share all the logs in /var/log/glusterfs and also the
libvirtd logs?



Fredrik Brandt




[Gluster-users] Arbiter volume

2015-08-05 Thread Fredrik Brandt
Hi,



Gave arbiter volume a shot today but ran into some problems with it:



1. My arbiter brick is running on a much smaller disk, which presents a
problem during a df: it shows the smaller disk size. If I understand the
arbiter correctly, there is no need to match the size of the real bricks,
so is this by design?



2. Using libgfapi with qemu libvirt did not work for me; it just hangs with
high CPU load on libvirtd. Should this work?



Running Centos 7 (qemu libvirt) and FreeBSD 10.1 (as arbiter) with gluster 
3.7.2.


Fredrik Brandt


Re: [Gluster-users] Arbiter volume

2015-08-05 Thread Pranith Kumar Karampuri



On 08/06/2015 02:41 AM, Fredrik Brandt wrote:

Arbiter volume

Hi,


Gave arbiter volume a shot today but ran into some problems with it:


1. My arbiter brick is running on a much smaller disk, which presents a
problem during a df: it shows the smaller disk size. If I understand the
arbiter correctly, there is no need to match the size of the real bricks,
so is this by design?


Good point. I think for statfs, i.e. the syscall used by 'df', we need to
ignore the output from the arbiter brick. Thanks for this input; we will
raise a bug and work on it. You shouldn't see any problems because of
this issue, though.



2. Using libgfapi with qemu libvirt did not work for me; it just hangs
with high CPU load on libvirtd. Should this work?


Just to isolate the problem does it work if you try to use the VMs 
through fuse mount?



Running Centos 7 (qemu libvirt) and FreeBSD 10.1 (as arbiter) with 
gluster 3.7.2.


Could you share all the logs in /var/log/glusterfs and also the libvirtd
logs?



Fredrik Brandt


