Re: [Gluster-users] [ovirt-users] Re: Single instance scaleup.

2020-06-05 Thread Krist van Besien
Hi all.

I actually did something like that myself.

I started out with a single node HC cluster. I then added another node (and 
plan to add a third). This is what I did:

1) Set up the new node. Make sure that you have all dependencies. (In my case I 
started with a CentOS 8 machine and installed vdsm-gluster and gluster-ansible.)
2) Configure the bricks. For this I just copied hc_wizard_inventory.yml over 
from the first node, edited it to fit the second node, and ran the 
gluster.infra role.
3) Expand the volume, in this case with the following command (the general 
form is sketched after this list):
gluster volume add-brick engine replica 2 :/gluster_bricks/engine/engine
4) Now just add the host as a hypervisor using the management console.
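
For reference, a minimal sketch of the commands behind steps 2 and 3, assuming 
the new node is reachable as "node2" and mirrors the brick layout of the first 
one (the hostname, playbook name and brick path are placeholders, not my actual 
values):

# apply the gluster.infra role (from gluster-ansible) against the edited inventory
ansible-playbook -i hc_wizard_inventory.yml prepare_bricks.yml

# peer the new node in, then grow the single-brick volume to replica 2
gluster peer probe node2
gluster volume add-brick engine replica 2 node2:/gluster_bricks/engine/engine

# watch the initial sync before relying on the new copy
gluster volume heal engine info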

I plan on adding a third node. Then I want to have a full replica 3 on the 
engine volume, and replica 2 + arbiter (i.e. replica 3 arbiter 1) on the vmstore 
volume.
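
Roughly, once the third node is peered in, that expansion should look something 
like the following, assuming both volumes are already at replica 2 by then (the 
"node3" name and the brick paths are placeholders; the newly added brick in the 
second command acts as the arbiter):

gluster volume add-brick engine replica 3 node3:/gluster_bricks/engine/engine
gluster volume add-brick vmstore replica 3 arbiter 1 node3:/gluster_bricks/vmstore/arbiter

# confirm the heals finish on both volumes
gluster volume heal engine info
gluster volume heal vmstore info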

Expanding Gluster volumes, migrating from distributed to replicated, and even 
replacing bricks is rather easy in Gluster once you know how it works. I have 
even replaced all the servers on a live Gluster cluster without service 
interruption…
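
For the brick replacement part, the usual form is something like this (hostnames 
and paths are placeholders); Gluster heals the data onto the new brick while the 
volume stays online:

gluster volume replace-brick engine oldnode:/gluster_bricks/engine/engine newnode:/gluster_bricks/engine/engine commit force

# wait for pending heals to drop to zero before replacing the next brick
gluster volume heal engine info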

Krist

On Jul 18, 2019, 09:58 +0200, Leo David , wrote:
> Hi,
> Looks like the only way around would be to create a brand-new volume as 
> replicated on other disks, and start moving the VMs all around the place 
> between volumes?
> Cheers,
>
> Leo
>
> > On Mon, May 27, 2019 at 1:53 PM Leo David  wrote:
> > > Hi,
> > > Any suggestions?
> > > Thank you very much!
> > >
> > > Leo
> > >
> > > > On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov  
> > > > wrote:
> > > > > Yeah,
> > > > > it seems different from the docs.
> > > > > I'm adding the Gluster users list, as they are more experienced with 
> > > > > that.
> > > > >
> > > > > @Gluster-users,
> > > > >
> > > > > can you provide some hints on how to add additional replicas to the below 
> > > > > volumes, so they become 'replica 2 arbiter 1' or 'replica 3' type volumes?
> > > > >
> > > > >
> > > > > Best Regards,
> > > > > Strahil Nikolov
> > > > >
> > > > > On Sunday, 26 May 2019 at 15:16:18 GMT+3, Leo David 
> > > > > wrote:
> > > > >
> > > > >
> > > > > Thank you Strahil,
> > > > > The engine and ssd-samsung are distributed...
> > > > > So these are the ones that I need to have replicated across new 
> > > > > nodes.
> > > > > I am not very sure about the procedure to accomplish this.
> > > > > Thanks,
> > > > >
> > > > > Leo
> > > > >
> > > > > On Sun, May 26, 2019, 13:04 Strahil  wrote:
> > > > > > Hi Leo,
> > > > > > As you do not have a distributed volume , you can easily switch to 
> > > > > > replica 2 arbiter 1 or replica 3 volumes.
> > > > > > You can use the following for adding the bricks:
> > > > > > https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
> > > > > > Best Regards,
> > > > > > Strahil Nikolov
> > > > > > On May 26, 2019 10:54, Leo David  wrote:
> > > > > > > Hi Strahil,
> > > > > > > Thank you so much for your input!
> > > > > > >
> > > > > > >  gluster volume info
> > > > > > >
> > > > > > >
> > > > > > > Volume Name: engine
> > > > > > > Type: Distribute
> > > > > > > Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> > > > > > > Status: Started
> > > > > > > Snapshot Count: 0
> > > > > > > Number of Bricks: 1
> > > > > > > Transport-type: tcp
> > > > > > > Bricks:
> > > > > > > Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> > > > > > > Options Reconfigured:
> > > > > > > nfs.disable: on
> > > > > > > transport.address-family: inet
> > > > > > > storage.owner-uid: 36
> > > > > > > storage.owner-gid: 36
> > > > > > > features.shard: on
> > > > > > > performance.low-prio-threads: 32
> > > > > > > performance.strict-o-direct: off
> > > > > > > network.remote-dio: off
> > > > > > > network.ping-timeout: 30
> > > > > > > user.cifs: off
> > > > > > > performance.quick-read: off
> > > > > > > performance.read-ahead: off
> > > > > > > performance.io-cache: off
> > > > > > > cluster.eager-lock: enable
> > > > > > > Volume Name: ssd-samsung
> > > > > > > Type: Distribute
> > > > > > > Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> > > > > > > Status: Started
> > > > > > > Snapshot Count: 0
> > > > > > > Number of Bricks: 1
> > > > > > > Transport-type: tcp
> > > > > > > Bricks:
> > > > > > > Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> > > > > > > Options Reconfigured:
> > > > > > > cluster.eager-lock: enable
> > > > > > > performance.io-cache: off
> > > > > > > performance.read-ahead: off
> > > > > > > performance.quick-read: off
> > > > > > > user.cifs: off
> > > > > > > network.ping-timeout: 30
> > > > > > > network.remote-dio: off
> > > > > > > performance.strict-o-direct: on
> > > > > > > performance.low-prio-threads: 32
> > > > > > > features.shard: on
> > > > > > > storage.owner-gid: 36
> > > > > > > storage.owner-uid: 36
> > > > > > > transport.address-family: inet
> > > > > > > nfs.disable: on
> > > > > > >
> > > > > > > The other two hosts will be 

Re: [Gluster-users] [ovirt-users] Re: Single instance scaleup.

2019-07-18 Thread Leo David
Hi,
Looks like the only way around would be to create a brand-new volume as
replicated on other disks, and start moving the VMs all around the place
between volumes?
Cheers,

Leo

On Mon, May 27, 2019 at 1:53 PM Leo David  wrote:

> Hi,
> Any suggestions?
> Thank you very much!
>
> Leo
>
> On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov 
> wrote:
>
>> Yeah,
>> it seems different from the docs.
>> I'm adding the Gluster users list, as they are more experienced with that.
>>
>> @Gluster-users,
>>
>> can you provide some hints on how to add additional replicas to the below
>> volumes, so they become 'replica 2 arbiter 1' or 'replica 3' type volumes?
>>
>>
>> Best Regards,
>> Strahil Nikolov
>>
>> On Sunday, 26 May 2019 at 15:16:18 GMT+3, Leo David <
>> leoa...@gmail.com> wrote:
>>
>>
>> Thank you Strahil,
>> The engine and ssd-samsung are distributed...
>> So these are the ones that I need to have replicated across new nodes.
>> I am not very sure about the procedure to accomplish this.
>> Thanks,
>>
>> Leo
>>
>> On Sun, May 26, 2019, 13:04 Strahil  wrote:
>>
>> Hi Leo,
>> As you do not have a distributed volume , you can easily switch to
>> replica 2 arbiter 1 or replica 3 volumes.
>>
>> You can use the following for adding the bricks:
>>
>>
>> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 26, 2019 10:54, Leo David  wrote:
>>
>> Hi Strahil,
>> Thank you so much for your input!
>>
>>  gluster volume info
>>
>>
>> Volume Name: engine
>> Type: Distribute
>> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard: on
>> performance.low-prio-threads: 32
>> performance.strict-o-direct: off
>> network.remote-dio: off
>> network.ping-timeout: 30
>> user.cifs: off
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> cluster.eager-lock: enable
>> Volume Name: ssd-samsung
>> Type: Distribute
>> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
>> Options Reconfigured:
>> cluster.eager-lock: enable
>> performance.io-cache: off
>> performance.read-ahead: off
>> performance.quick-read: off
>> user.cifs: off
>> network.ping-timeout: 30
>> network.remote-dio: off
>> performance.strict-o-direct: on
>> performance.low-prio-threads: 32
>> features.shard: on
>> storage.owner-gid: 36
>> storage.owner-uid: 36
>> transport.address-family: inet
>> nfs.disable: on
>>
>> The other two hosts will be 192.168.80.192/193 - this is a dedicated
>> Gluster network over a 10 Gb SFP+ switch.
>> - host 2 will have an identical hardware configuration to host 1 (each disk
>> is actually a RAID 0 array)
>> - host 3 has:
>>   - 1 SSD for the OS
>>   - 1 SSD for adding to the engine volume in a full replica 3
>>   - 2 SSDs in a RAID 1 array to be added as the arbiter for the data
>> volume (ssd-samsung)
>> So the plan is to have "engine" scaled to a full replica 3, and
>> "ssd-samsung" scaled to a replica 3 arbitrated volume.
>>
>>
>>
>>
>> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>>
>> Hi Leo,
>>
>> Gluster is quite smart, but in order to provide any hints, can you
>> provide the output of 'gluster volume info'.
>> If you have 2 more systems, keep in mind that it is best to mirror the
>> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
>> machine), while for the arbiter this is not necessary.
>>
>> What are your network and NICs? Based on my experience, I can recommend
>> at least 10 Gbit/s interface(s).
>>
>> Best Regards,
>> Strahil Nikolov
>> On May 26, 2019 07:52, Leo David  wrote:
>>
>> Hello Everyone,
>> Can someone help me clarify this?
>> I have a single-node 4.2.8 installation (only two Gluster storage
>> domains - distributed single-drive volumes). Now I just got two
>> identical servers and I would like to go for a 3-node setup.
>> Is it possible (after joining the new nodes to the cluster) to expand
>> the existing volumes across the new nodes and change them to replica 3
>> arbitrated?
>> If so, could you share with me what the procedure would be?
>> Thank you very much!
>>
>> Leo
>>
>>
>>
>> --
>> Best regards, Leo David
>>
>>
>
> --
> Best regards, Leo David
>


-- 
Best regards, Leo David

Re: [Gluster-users] [ovirt-users] Re: Single instance scaleup.

2019-07-18 Thread Leo David
Hi,
Any suggestions?
Thank you very much!

Leo

On Sun, May 26, 2019 at 4:38 PM Strahil Nikolov 
wrote:

> Yeah,
> it seems different from the docs.
> I'm adding the Gluster users list, as they are more experienced with that.
>
> @Gluster-users,
>
> can you provide some hints on how to add additional replicas to the below
> volumes, so they become 'replica 2 arbiter 1' or 'replica 3' type volumes?
>
>
> Best Regards,
> Strahil Nikolov
>
> On Sunday, 26 May 2019 at 15:16:18 GMT+3, Leo David <
> leoa...@gmail.com> wrote:
>
>
> Thank you Strahil,
> The engine and ssd-samsung are distributed...
> So these are the ones that I need to have replicated across new nodes.
> I am not very sure about the procedure to accomplish this.
> Thanks,
>
> Leo
>
> On Sun, May 26, 2019, 13:04 Strahil  wrote:
>
> Hi Leo,
> As you do not have a distributed volume , you can easily switch to replica
> 2 arbiter 1 or replica 3 volumes.
>
> You can use the following for adding the bricks:
>
>
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Expanding_Volumes.html
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 10:54, Leo David  wrote:
>
> Hi Strahil,
> Thank you so much for your input!
>
>  gluster volume info
>
>
> Volume Name: engine
> Type: Distribute
> Volume ID: d7449fc2-cc35-4f80-a776-68e4a3dbd7e1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/engine/engine
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard: on
> performance.low-prio-threads: 32
> performance.strict-o-direct: off
> network.remote-dio: off
> network.ping-timeout: 30
> user.cifs: off
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> cluster.eager-lock: enable
> Volume Name: ssd-samsung
> Type: Distribute
> Volume ID: 76576cc6-220b-4651-952d-99846178a19e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.80.191:/gluster_bricks/sdc/data
> Options Reconfigured:
> cluster.eager-lock: enable
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> user.cifs: off
> network.ping-timeout: 30
> network.remote-dio: off
> performance.strict-o-direct: on
> performance.low-prio-threads: 32
> features.shard: on
> storage.owner-gid: 36
> storage.owner-uid: 36
> transport.address-family: inet
> nfs.disable: on
>
> The other two hosts will be 192.168.80.192/193 - this is a dedicated
> Gluster network over a 10 Gb SFP+ switch.
> - host 2 will have an identical hardware configuration to host 1 (each disk
> is actually a RAID 0 array)
> - host 3 has:
>   - 1 SSD for the OS
>   - 1 SSD for adding to the engine volume in a full replica 3
>   - 2 SSDs in a RAID 1 array to be added as the arbiter for the data volume
> (ssd-samsung)
> So the plan is to have "engine" scaled to a full replica 3, and
> "ssd-samsung" scaled to a replica 3 arbitrated volume.
>
>
>
>
> On Sun, May 26, 2019 at 10:34 AM Strahil  wrote:
>
> Hi Leo,
>
> Gluster is quite smart, but in order to provide any hints, can you
> provide the output of 'gluster volume info'.
> If you have 2 more systems, keep in mind that it is best to mirror the
> storage on the second replica (2 disks on 1 machine -> 2 disks on the new
> machine), while for the arbiter this is not necessary.
>
> What are your network and NICs? Based on my experience, I can recommend
> at least 10 Gbit/s interface(s).
>
> Best Regards,
> Strahil Nikolov
> On May 26, 2019 07:52, Leo David  wrote:
>
> Hello Everyone,
> Can someone help me clarify this?
> I have a single-node 4.2.8 installation (only two Gluster storage domains
> - distributed single-drive volumes). Now I just got two identical
> servers and I would like to go for a 3-node setup.
> Is it possible (after joining the new nodes to the cluster) to expand
> the existing volumes across the new nodes and change them to replica 3
> arbitrated?
> If so, could you share with me what the procedure would be?
> Thank you very much!
>
> Leo
>
>
>
> --
> Best regards, Leo David
>
>

-- 
Best regards, Leo David