Re: [Gluster-users] Rebalancing newly added bricks

2019-09-11 Thread Nithya Balachandran
On Wed, 11 Sep 2019 at 09:47, Strahil  wrote:

> Hi Nithya,
>
> I was just reminded of your previous e-mail, which left me with the
> impression that old volumes need that.
> This is the one I mean:
>
> >It looks like this is a replicate volume. If that is the case then yes,
> >you are running an old version of Gluster for which this was the default
>

Hi Strahil,

I'm providing a little more detail here which I hope will explain things.
Rebalance has always been a volume-wide operation: a *rebalance start*
operation starts rebalance processes on all nodes of the volume. However,
those processes did not all behave the same way. In earlier releases, all
nodes would crawl the bricks and update the directory layouts, but only one
node in each replica/disperse set would actually migrate files, so the
rebalance status would show only one node doing any "work" (scanning,
rebalancing, etc.). That one node would process all the files in its replica
sets, and rerunning rebalance on other nodes would make no difference, as it
would always be the same node that ended up migrating files.
For instance, for a replicate volume with server1:/brick1, server2:/brick2
and server3:/brick3 in that order, only the rebalance process on server1
would migrate files. In newer releases, all 3 nodes migrate files.
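
To make that concrete, here is a minimal sketch (the volume name "testvol" is
hypothetical; the servers and bricks follow the example above):

# gluster volume create testvol replica 3 server1:/brick1 server2:/brick2 server3:/brick3
# gluster volume start testvol
# gluster volume rebalance testvol start
# gluster volume rebalance testvol status

On the older releases described above, only the process on server1 would
report scanned/rebalanced files in the status output; the processes on
server2 and server3 would still crawl the bricks and update directory layouts.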

The rebalance status does not capture the directory operations involved in
fixing layouts, which is why it looks like the other nodes are not doing
anything.
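
If you want to confirm that the "idle" nodes are in fact updating layouts,
the rebalance log on each node is the place to check. For example (assuming
the default log location, /var/log/glusterfs/<volname>-rebalance.log; the
exact message text varies between versions):

# grep -ci "layout" /var/log/glusterfs/testvol-rebalance.log
# grep -c " E " /var/log/glusterfs/testvol-rebalance.log

The first count reflects layout-related activity, the second any error-level
messages.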

Hope this helps.

Regards,
Nithya

> >behaviour.
>
> >Regards,
> >Nithya
>
>
> Best Regards,
> Strahil Nikolov
> On Sep 9, 2019 06:36, Nithya Balachandran  wrote:
>
>
>
> On Sat, 7 Sep 2019 at 00:03, Strahil Nikolov 
> wrote:
>
> As was mentioned, you might have to run rebalance on the other node -
> but it is better to wait until this node is done.
>
>
> Hi Strahil,
>
> Rebalance does not need to be run on the other node - the operation is a
> volume-wide one. Only a single node per replica set would migrate files in
> the version used in this case.
>
> Regards,
> Nithya
>
> Best Regards,
> Strahil Nikolov
>
> On Friday, 6 September 2019 at 15:29:20 GMT+3, Herb Burnswell <
> herbert.burnsw...@gmail.com> wrote:
>
>
>
>
> On Thu, Sep 5, 2019 at 9:56 PM Nithya Balachandran 
> wrote:
>
>
>
> On Thu, 5 Sep 2019 at 02:41, Herb Burnswell 
> wrote:
>
> Thanks for the replies.  The rebalance is running and the brick
> percentages are not adjusting as expected:
>
> # df -hP |grep data
> /dev/mapper/gluster_vg-gluster_lv1_data   60T   49T   11T  83%  /gluster_bricks/data1
> /dev/mapper/gluster_vg-gluster_lv2_data   60T   49T   11T  83%  /gluster_bricks/data2
> /dev/mapper/gluster_vg-gluster_lv3_data   60T  4.6T   55T   8%  /gluster_bricks/data3
> /dev/mapper/gluster_vg-gluster_lv4_data   60T  4.6T   55T   8%  /gluster_bricks/data4
> /dev/mapper/gluster_vg-gluster_lv5_data   60T  4.6T   55T   8%  /gluster_bricks/data5
> /dev/mapper/gluster_vg-gluster_lv6_data   60T  4.6T   55T   8%  /gluster_bricks/data6
>
> At the current pace it looks like this will continue to run for another
> 5-6 days.
>
> I appreciate the guidance..
>
>
> What is the output of the rebalance status command?
> Can you check if there are any errors in the rebalance logs on the node
> on which you see rebalance activity?
> If there are a lot of small files on the volume, the rebalance is expected
> to take time.
>
> Regards,
> Nithya
>
>
> My apologies, that was a typo.  I meant to say:
>
> "The rebalance is running and the brick percentages are NOW adjusting as
> expected"
>
> I did expect the rebalance to take several days.  The rebalance log is not
> showing any errors.  Status output:
>
> # gluster vol rebalance tank status
>                 Node   Rebalanced-files     size     scanned   failures   skipped        status   run time in h:m:s
>            ---------   ----------------   ------   ---------   --------   -------   -----------   -----------------
>            localhost            1251320   35.5TB     2079527          0         0   in progress            139:9:46
>              serverB                  0   0Bytes           7          0         0     completed            63:47:55
> volume rebalance: tank: success
>
> Thanks again for the guidance.
>
> HB
>
>
>
>
>
> On Mon, Sep 2, 2019 at 9:08 PM Nithya Balachandran 
> wrote:
>
>
>
> On Sat, 31 Aug 2019 at 22:59, Herb Burnswell 
> wrote:
>
> Thank you for the reply.
>
> I started a rebalance with force on serverA as suggested.  Now I see
> 'activity' on that node:
>
> # gluster vol rebalance tank status
>                 Node   Rebalanced-files     size   scanned   failures   skipped   status   run time in h:m:s
>            ---------   ----------------   ------   -------   --------   -------   ------   -----------------

[Gluster-users] VM settings

2019-09-11 Thread Cristian Del Carlo
Hi list,

I have configured a Gluster volume (three-way replicated and sharded) to
hold virtual machine images (with libvirt).
Which image type is better to use, qcow2 or raw?
I don't use live migration, so for the VM's cache mode is it better to use
writeback or directsync?
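
For reference, this is roughly how I would specify those options when
defining a guest; just a sketch with made-up paths, showing qcow2 and
writeback as one of the possible combinations:

# qemu-img create -f qcow2 /virt/images/vm1.qcow2 50G
# virt-install --name vm1 --memory 4096 --vcpus 2 \
    --disk path=/virt/images/vm1.qcow2,format=qcow2,cache=writeback,bus=virtio \
    --import --os-variant generic

Swapping cache=writeback for cache=directsync (or format=qcow2 for
format=raw on a raw image) gives the other combinations I am asking about.
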
Best regards,
-- 

*Cristian Del Carlo*
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users