Thank you so much Ravi, very helpful!

On Tue, Apr 16, 2019, 12:11 Ravishankar N <ravishan...@redhat.com> wrote:

>
> On 16/04/19 2:20 PM, Sahina Bose wrote:
>
> On Tue, Apr 16, 2019 at 1:39 PM Leo David <leoa...@gmail.com> wrote:
>
>
> Hi Everyone,
> I have wrongly configured the main gluster volume (12 identical 1TB SSD
> disks, replica 3 distributed-replicated across 6 nodes - 2 per node) with an
> arbiter.
> Obviously I am wasting storage space in this scenario with the arbiter bricks,
> and I would like to convert the volume to a non-arbitrated one, so that all
> the data is spread evenly across all the disks.
> Considering that the storage is used by about 40 VMs in production, what
> would the steps be, or is there any chance to change the volume type to
> non-arbitrated on the fly and then rebalance?
> Thank you very much!
>
>
> Ravi, can you help here - to change from arbiter to replica 3?
>
> The general steps are:
>
> 1. Ensure there are no pending heals.
>
> 2. Use the `remove-brick` command to reduce the volume to replica 2.
>
> 3. Use the `add-brick` command to convert it to replica 3.
>
> 4. Monitor and check that the heal eventually completes on the newly added
> bricks (a simple polling loop for this is sketched below).
>
> The steps are best done when the VMs are offline, so that self-heal traffic
> does not compete with the VMs for I/O.
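>
> For steps 1 and 4, a simple way to check is to keep polling
> `gluster volume heal <volname> info` until every brick reports zero entries.
> A minimal sketch (the volume name is just a placeholder, substitute your own):
>
> # rough sketch - waits until no brick reports pending heal entries
> # (assumes all bricks are up and reporting)
> VOL=testvol
> while gluster volume heal "$VOL" info | grep -q 'Number of entries: [1-9]'; do
>     echo "heals still pending on $VOL, waiting..."
>     sleep 60
> done
> echo "no pending heals on $VOL"
>
> The same loop can be reused after step 3 to wait for the newly added bricks
> to be fully healed.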
>
> Example:
> [root@tuxpad ravi]# gluster volume info
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: e3fc6ea5-a48c-4918-8a4b-0a7859f3a182
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/home/ravi/bricks/brick1
> Brick2: 127.0.0.2:/home/ravi/bricks/brick2
> Brick3: 127.0.0.2:/home/ravi/bricks/brick3 (arbiter)
> Brick4: 127.0.0.2:/home/ravi/bricks/brick4
> Brick5: 127.0.0.2:/home/ravi/bricks/brick5
> Brick6: 127.0.0.2:/home/ravi/bricks/brick6 (arbiter)
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@tuxpad ravi]#
>
> [root@tuxpad ravi]# gluster volume heal testvol info
> Brick 127.0.0.2:/home/ravi/bricks/brick1
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick2
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick3
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick4
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick5
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick6
> Status: Connected
> Number of entries: 0
>
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume remove-brick testvol replica 2
> 127.0.0.2:/home/ravi/bricks/brick3  127.0.0.2:/home/ravi/bricks/brick6
> force
> Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
> volume remove-brick commit force: success
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume info
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: e3fc6ea5-a48c-4918-8a4b-0a7859f3a182
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/home/ravi/bricks/brick1
> Brick2: 127.0.0.2:/home/ravi/bricks/brick2
> Brick3: 127.0.0.2:/home/ravi/bricks/brick4
> Brick4: 127.0.0.2:/home/ravi/bricks/brick5
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume add-brick testvol replica 3 
> 127.0.0.2:/home/ravi/bricks/brick3_new
> 127.0.0.2:/home/ravi/bricks/brick6_new
> volume add-brick: success
> [root@tuxpad ravi]#
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume info
>
> Volume Name: testvol
> Type: Distributed-Replicate
> Volume ID: e3fc6ea5-a48c-4918-8a4b-0a7859f3a182
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: 127.0.0.2:/home/ravi/bricks/brick1
> Brick2: 127.0.0.2:/home/ravi/bricks/brick2
> Brick3: 127.0.0.2:/home/ravi/bricks/brick3_new
> Brick4: 127.0.0.2:/home/ravi/bricks/brick4
> Brick5: 127.0.0.2:/home/ravi/bricks/brick5
> Brick6: 127.0.0.2:/home/ravi/bricks/brick6_new
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@tuxpad ravi]#
> [root@tuxpad ravi]#
> [root@tuxpad ravi]# gluster volume heal testvol info
> Brick 127.0.0.2:/home/ravi/bricks/brick1
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick2
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick3_new
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick4
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick5
> Status: Connected
> Number of entries: 0
>
> Brick 127.0.0.2:/home/ravi/bricks/brick6_new
> Status: Connected
> Number of entries: 0
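>
> In your case (4 x (2 + 1) = 12 bricks, if I read your setup correctly), the
> remove-brick and add-brick commands would simply list all four arbiter bricks
> (respectively all four new data bricks) at once - one per replica set, in the
> same order as shown by `gluster volume info`. The hostnames and paths below
> are only placeholders:
>
> gluster volume remove-brick <volname> replica 2 \
>     node1:/gluster/arbiter1 node2:/gluster/arbiter2 \
>     node3:/gluster/arbiter3 node4:/gluster/arbiter4 force
> gluster volume add-brick <volname> replica 3 \
>     node1:/gluster/newbrick1 node2:/gluster/newbrick2 \
>     node3:/gluster/newbrick3 node4:/gluster/newbrick4
>
> If you plan to reuse the disks that currently hold the arbiter bricks as full
> data bricks, note that gluster will normally refuse to add a brick directory
> that was already part of a volume. One common approach (only after the
> remove-brick has succeeded) is to reformat the brick filesystem, or to clear
> the old gluster metadata from it, e.g.:
>
> # only run this on a brick that has already been removed from the volume
> setfattr -x trusted.glusterfs.volume-id /path/to/old/arbiter/brick
> setfattr -x trusted.gfid /path/to/old/arbiter/brick
> rm -rf /path/to/old/arbiter/brick/.glusterfs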
>
>
> HTH,
> Ravi
>
>
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/AOGWSENMRV6KQWMQV5HBBANXQBF7NMAO/
