Re: Secondary Storage timeout for ISO registration

2022-05-03 Thread Suresh Anaparti
Hi Peter,

Did you notice any errors in the secondary storage VM logs when downloading the 
ISO? Are you facing this issue with a specific ISO? Can you reach that ISO URL 
using curl / wget from the secondary storage VM?

Also, try to register the ISO with a different URL and see if you notice the 
same issue. From the code, I see the timeout is 30s, and if the download 
command doesn't progress, it will fail with a timeout error.
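
For reference, a quick way to check this from inside the SSVM (a rough sketch: 
the key path, port and link-local IP are assumptions that depend on your 
hypervisor and setup, and the ISO URL is a placeholder):

    # from the hypervisor host that runs the SSVM
    ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<ssvm-link-local-ip>
    # inside the SSVM: run the built-in health check, then test the ISO URL
    /usr/local/cloud/systemvm/ssvm-check.sh
    wget --spider <iso-url>
    curl -I <iso-url>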


Regards,
Suresh

On 04/05/22, 2:19 AM, "Peter Stine"  wrote:

Hello all,

I am running CloudStack 4.16.1.0 on an Ubuntu 20.04 system. I am also using 
a Ceph array for primary storage, with an NFS share also located on it. The 
management servers are also in a cluster.
I am having issues downloading new ISOs. When I enter the information for 
the ISO, it says it registers successfully, but then after a short wait shows 
"Timeout waiting for response from storage host". I can still create VMs from 
templates in secondary storage and live migrate VMs. 

Any thoughts on what might be causing this?

I checked the firewall and iptables to see if anything was being blocked 
and it does not appear so.

Here is the management log: 
https://gist.github.com/PeterS-gd/3baf67c2fbe9dc984246055e85fdd23c
iptables for both: 
https://gist.github.com/PeterS-gd/269070bc663a47f1eb67788cd1fcbe56

NB: This was working before. There was some network trouble, but I 
restarted the servers one by one and that cleared up most of the issues except 
for this one.

Thanks!

Peter Stine



 



Secondary Storage timeout for ISO registration

2022-05-03 Thread Peter Stine
Hello all,

I am running CloudStack 4.16.1.0 on an Ubuntu 20.04 system. I am also using a 
Ceph array for primary storage, with an NFS share also located on it. The 
management servers are also in a cluster.
I am having issues downloading new ISOs. When I enter the information for the 
ISO, it says it registers successfully, but then after a short wait shows 
"Timeout waiting for response from storage host". I can still create VMs from 
templates in secondary storage and live migrate VMs.

Any thoughts on what might be causing this?

I checked the firewall and iptables to see if anything was being blocked and it 
does not appear so.

Here is the management log: 
https://gist.github.com/PeterS-gd/3baf67c2fbe9dc984246055e85fdd23c
iptables for both: 
https://gist.github.com/PeterS-gd/269070bc663a47f1eb67788cd1fcbe56

NB: This was working before. There was some network trouble, but I restarted 
the servers one by one and that cleared up most of the issues except for this 
one.

Thanks!

Peter Stine



Re: ACS 4.16.1 ::XCP-ng 8.2.1 CS Guest VM can't communicate with virtual routers when they are on different hosts

2022-05-03 Thread benoit lair
Hello Midhun,

I faced an issue during the last update: updating to XCP-ng 8.2.1 causes bugs
in some features.

Take a look at the issue I filed about it. There is a fix I tried that let me
continue working with XCP-ng 8.2.1:

https://github.com/apache/cloudstack/issues/6349

Hope this helps


On Tue, 12 Apr 2022 at 09:01, Midhun Jose wrote:

> Hi Vivek/Nux,
>
> Our network department informed us that the Ethernet ports on our switch are
> access ports, not trunk ports, and hence no VLANs are allowed.
> They asked us to check the configuration in the virtual router and make
> sure the VLANs are allowed.
> Could you please suggest anything on this?
>
>
> Midhun Jose
>
>
> - Original Message -
> From: "Vivek Kumar" 
> To: "users" 
> Sent: Thursday, April 7, 2022 1:17:07 PM
> Subject: Re: ACS 4.16.1 ::XCP-ng 8.2.1 CS Guest VM can't communicate with
> virtual routers when they are on different hosts
>
> Hello Midhun,
>
> This typically happens when your guest VLAN range is not allowed on the
> backend switch ports. So allow all of your VLAN range on the ports where
> you have defined your guest traffic.
>
>
>
> Regards,
> Vivek Kumar
>
>
> > On 07-Apr-2022, at 12:13 PM, Midhun Jose 
> wrote:
> >
> > Hi @All,
> >
> > I'm using CloudStack 4.16.1 with an XCP-ng cluster having 2 hosts.
> > I am facing an issue: when the virtual router is created on host 1 and a
> > guest VM that uses that virtual router is created on host 2, there is no
> > connectivity between the VM and the VR.
> > (refer to the attached screenshot.)
> > But when both the virtual router and the guest VM are created on the same
> > host, everything works as normal.
> > Did I miss something in configuring the network?
> >
> > Best Regards,
> > Midhun Jose
> >
>
>


Re: ACS 4.16 and xcp-ng - can't live storage migration

2022-05-03 Thread benoit lair
Hello Wei,

The issue is open here: https://github.com/apache/cloudstack/issues/6349

Have a nice day

On Tue, 3 May 2022 at 11:19, benoit lair wrote:

> Hello Wei,
>
> Yes, I'm going to open an issue :)
> I am doing some unit tests on xcp-ng 8.2.1 with ACS 4.16
>
> On Tue, 3 May 2022 at 10:54, Wei ZHOU wrote:
>
>> Good, you have solved the problem.
>>
>> CloudStack supports 8.2.0 but not 8.2.1.
>>
>> Can you add a github issue ? we could support it in future releases.
>>
>> -Wei
>>
>>
>> On Tue, 3 May 2022 at 10:02, benoit lair  wrote:
>>
>> > I precise after adding these 2 two lines into  hypervisor_capabilities
>> and
>> > guest_os_hypervisor this fixed the feature of live storage migration
>> for me
>> >
>> > Le mar. 3 mai 2022 à 10:01, benoit lair  a
>> écrit :
>> >
>> > > Hello Antoine,
>> > >
>> > > I saw that this time my yum update upgraded me to 8.2.1
>> > > You were in 8.2.1 too ?
>> > >
>> > > I tried this fix in ACS :
>> > >
>> > > #add hypervsisor xcp 8.2.1 to acs 4.16
>> > > INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
>> > > hypervisor_type,
>> > > hypervisor_version, max_guests_limit, max_data_volumes_limit,
>> > > max_hosts_per_cluster, storage_motion_supported) values (UUID(),
>> > > 'XenServer',
>> > > '8.2.1', 1000, 253, 64, 1);
>> > >
>> > > +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer
>> 8.2.1
>> > > +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor`
>> (uuid,hypervisor_type,
>> > > hypervisor_version, guest_os_name, guest_os_id, created,
>> is_user_defined)
>> > > SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
>> > > utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
>> > > hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
>> > >
>> > > Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
>> > >
>> > > Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
>> > > capabilities
>> > >
>> > > I tried to reboot and this is not working on another Cloudstack mgmt
>> > > instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
>> > >
>> > >
>> > > Regards, Benoit
>> > >
>> > > Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
>> > > écrit :
>> > >
>> > >> Bonjour Benoit,
>> > >>
>> > >> I had similar issues after I did a yum update and I was only able to
>> > fitx
>> > >> the issue by rebooting my hosts.
>> > >>
>> > >> -Antoine
>> > >>
>> > >> > On May 2, 2022, at 12:04 PM, benoit lair 
>> > wrote:
>> > >> >
>> > >> > Hello all,
>> > >> >
>> > >> > This is surely due to my yum update which updated to xcp 8.2.1
>> > >> >
>> > >> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would
>> it
>> > be
>> > >> > possible to add hypervisor capabilities without doing it in beta
>> mode
>> > ?
>> > >> >
>> > >> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
>> > >> écrit :
>> > >> >
>> > >> >> Hello folks,
>> > >> >>
>> > >> >> I have a several issue
>> > >> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster
>> > and i
>> > >> >> cant live migrate
>> > >> >> When clicking on the "Migrate volume" button, i have the following
>> > >> message
>> > >> >> :
>> > >> >>
>> > >> >> No primary storage pools available for migration
>> > >> >>
>> > >> >> and  it generates this in logs : "the hypervisor doesn't support
>> > >> storage
>> > >> >> motion."
>> > >> >>
>> > >> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
>> > >> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
>> > >> >> 192.168.4.30 -- GET
>> > >> >>
>> > >>
>> >
>> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
>> > >> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
>> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> > CIDRs
>> > >> >> from which account
>> 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin]
>> > --
>> > >> >> Account {"id": 2, "name": "admin", "uuid":
>> > >> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
>> > >> calls:
>> > >> >> 0.0.0.0/0,::/0
>> > >> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
>> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> > >> Volume
>> > >> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking
>> for
>> > >> storage
>> > >> >> pools in the cluster to which this volumes can be migrated.
>> > >> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
>> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> > >> >> Capabilities for host Host {"id": "2", "name":
>> "xcp-cluster1-node2",
>> > >> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
>> > >> couldn't
>> > >> >> be retrieved.
>> > >> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
>> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> > >> Volume
>> > >> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the
>> > hypervisor
>> > >> >> doesn't support

Re: [VOTE] Apache CloudStack 4.17.0.0 RC1

2022-05-03 Thread Nicolas Vazquez
Thanks Wei – looking forward to getting the fix ready and cutting RC2

Regards,
Nicolas Vazquez


From: Wei ZHOU 
Date: Tuesday, 3 May 2022 at 04:34
To: users , d...@cloudstack.apache.org 

Subject: Re: [VOTE] Apache CloudStack 4.17.0.0 RC1
Hi Nicolas,

Thank you for the hard work!

Unfortunately, I have to vote -1 on RC1. We have found a blocker issue with
IPv6 on non-redundant isolated networks.
We are working on the fix: https://github.com/apache/cloudstack/pull/6343

Kind regards,
Wei

On Fri, 29 Apr 2022 at 20:36, Nicolas Vazquez 
wrote:

> Hi all,
>
> I have created a 4.17.0.0 release (RC1) with the following artefacts up
> for testing and a vote:
>
> Git Branch and Commit SH:
> https://github.com/apache/cloudstack/tree/4.17.0.0-RC20220429T1412
> Commit: 9ec270aa63a0ac479322a6f95146d20aa811ea23
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.17.0.0/
>
> PGP release keys (signed using 239A653975E13A0EEF5122A1656E1BCC8CB54F84):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> For testing purposes, I have uploaded the different distro packages to:
> https://download.cloudstack.org/testing/4.17.0.0-RC1/
>
> Since 4.16 the system VM template registration is no longer mandatory
> prior to upgrading, however, it can be downloaded from here if needed:
> https://download.cloudstack.org/systemvm/4.17/
>
> The vote will be open until 4th May 2022.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> Regards,
> Nicolas Vazquez
>
>
>
>
>

 



REMINDER - Travel Assistance available for ApacheCon NA New Orleans 2022

2022-05-03 Thread Gavin McDonald
Hi All Contributors and Committers,

This is a first reminder email that travel
assistance applications for ApacheCon NA 2022 are now open!

We will be supporting ApacheCon North America in New Orleans, Louisiana,
on October 3rd through 6th, 2022.

TAC exists to help those who would like to attend ApacheCon events but
are unable to do so for financial reasons. This year, we are supporting
both committers and non-committers involved with projects at the
Apache Software Foundation, or open source projects in general.

For more info on this year's applications and qualifying criteria, please
visit the TAC website at http://www.apache.org/travel/
Applications are open and will close on the 1st of July 2022.

Important: Applicants have until the closing date above to submit their
applications (which should contain as much supporting material as required
to efficiently and accurately process their request); this will enable TAC
to announce successful awards shortly afterwards.

As usual, TAC expects to deal with a range of applications from a diverse
range of backgrounds. We therefore encourage (as always) anyone thinking
about sending in an application to do so ASAP.

Why should you attend as a TAC recipient? We encourage you to read stories
from
past recipients at https://apache.org/travel/stories/ . Also note that
previous TAC recipients have gone on to become Committers, PMC Members, ASF
Members, Directors of the ASF Board and Infrastructure Staff members.
Others have gone from Committer to full time Open Source Developers!

How far can you go! - Let TAC help get you there.


Re: ACS 4.16 and xcp-ng - can't live storage migration

2022-05-03 Thread benoit lair
Hello Wei,

Yes, I'm going to open an issue :)
I am doing some unit tests on xcp-ng 8.2.1 with ACS 4.16

On Tue, 3 May 2022 at 10:54, Wei ZHOU wrote:

> Good, you have solved the problem.
>
> CloudStack supports 8.2.0 but not 8.2.1.
>
> Can you add a github issue ? we could support it in future releases.
>
> -Wei
>
>
> On Tue, 3 May 2022 at 10:02, benoit lair  wrote:
>
> > I precise after adding these 2 two lines into  hypervisor_capabilities
> and
> > guest_os_hypervisor this fixed the feature of live storage migration for
> me
> >
> > Le mar. 3 mai 2022 à 10:01, benoit lair  a écrit
> :
> >
> > > Hello Antoine,
> > >
> > > I saw that this time my yum update upgraded me to 8.2.1
> > > You were in 8.2.1 too ?
> > >
> > > I tried this fix in ACS :
> > >
> > > #add hypervsisor xcp 8.2.1 to acs 4.16
> > > INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
> > > hypervisor_type,
> > > hypervisor_version, max_guests_limit, max_data_volumes_limit,
> > > max_hosts_per_cluster, storage_motion_supported) values (UUID(),
> > > 'XenServer',
> > > '8.2.1', 1000, 253, 64, 1);
> > >
> > > +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer
> 8.2.1
> > > +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor`
> (uuid,hypervisor_type,
> > > hypervisor_version, guest_os_name, guest_os_id, created,
> is_user_defined)
> > > SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
> > > utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
> > > hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
> > >
> > > Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
> > >
> > > Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
> > > capabilities
> > >
> > > I tried to reboot and this is not working on another Cloudstack mgmt
> > > instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
> > >
> > >
> > > Regards, Benoit
> > >
> > > Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
> > > écrit :
> > >
> > >> Bonjour Benoit,
> > >>
> > >> I had similar issues after I did a yum update and I was only able to
> > fitx
> > >> the issue by rebooting my hosts.
> > >>
> > >> -Antoine
> > >>
> > >> > On May 2, 2022, at 12:04 PM, benoit lair 
> > wrote:
> > >> >
> > >> > Hello all,
> > >> >
> > >> > This is surely due to my yum update which updated to xcp 8.2.1
> > >> >
> > >> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would it
> > be
> > >> > possible to add hypervisor capabilities without doing it in beta
> mode
> > ?
> > >> >
> > >> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
> > >> écrit :
> > >> >
> > >> >> Hello folks,
> > >> >>
> > >> >> I have a several issue
> > >> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster
> > and i
> > >> >> cant live migrate
> > >> >> When clicking on the "Migrate volume" button, i have the following
> > >> message
> > >> >> :
> > >> >>
> > >> >> No primary storage pools available for migration
> > >> >>
> > >> >> and  it generates this in logs : "the hypervisor doesn't support
> > >> storage
> > >> >> motion."
> > >> >>
> > >> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
> > >> >> 192.168.4.30 -- GET
> > >> >>
> > >>
> >
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
> > >> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > CIDRs
> > >> >> from which account
> 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin]
> > --
> > >> >> Account {"id": 2, "name": "admin", "uuid":
> > >> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
> > >> calls:
> > >> >> 0.0.0.0/0,::/0
> > >> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> Volume
> > >> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
> > >> storage
> > >> >> pools in the cluster to which this volumes can be migrated.
> > >> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> >> Capabilities for host Host {"id": "2", "name":
> "xcp-cluster1-node2",
> > >> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
> > >> couldn't
> > >> >> be retrieved.
> > >> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> Volume
> > >> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the
> > hypervisor
> > >> >> doesn't support storage motion.
> > >> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
> > >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> > >> ===END===
> > >> >> 192.168.4.30 -- GET
> > >> >>
> > >>
> >
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=j

Re: ACS 4.16 and xcp-ng - can't live storage migration

2022-05-03 Thread Wei ZHOU
Good, you have solved the problem.

CloudStack supports 8.2.0 but not 8.2.1.

Can you add a GitHub issue? We could support it in future releases.

-Wei


On Tue, 3 May 2022 at 10:02, benoit lair  wrote:

> I precise after adding these 2 two lines into  hypervisor_capabilities and
> guest_os_hypervisor this fixed the feature of live storage migration for me
>
> Le mar. 3 mai 2022 à 10:01, benoit lair  a écrit :
>
> > Hello Antoine,
> >
> > I saw that this time my yum update upgraded me to 8.2.1
> > You were in 8.2.1 too ?
> >
> > I tried this fix in ACS :
> >
> > #add hypervsisor xcp 8.2.1 to acs 4.16
> > INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
> > hypervisor_type,
> > hypervisor_version, max_guests_limit, max_data_volumes_limit,
> > max_hosts_per_cluster, storage_motion_supported) values (UUID(),
> > 'XenServer',
> > '8.2.1', 1000, 253, 64, 1);
> >
> > +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer 8.2.1
> > +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor` (uuid,hypervisor_type,
> > hypervisor_version, guest_os_name, guest_os_id, created, is_user_defined)
> > SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
> > utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
> > hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
> >
> > Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
> >
> > Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
> > capabilities
> >
> > I tried to reboot and this is not working on another Cloudstack mgmt
> > instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
> >
> >
> > Regards, Benoit
> >
> > Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
> > écrit :
> >
> >> Bonjour Benoit,
> >>
> >> I had similar issues after I did a yum update and I was only able to
> fitx
> >> the issue by rebooting my hosts.
> >>
> >> -Antoine
> >>
> >> > On May 2, 2022, at 12:04 PM, benoit lair 
> wrote:
> >> >
> >> > Hello all,
> >> >
> >> > This is surely due to my yum update which updated to xcp 8.2.1
> >> >
> >> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would it
> be
> >> > possible to add hypervisor capabilities without doing it in beta mode
> ?
> >> >
> >> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
> >> écrit :
> >> >
> >> >> Hello folks,
> >> >>
> >> >> I have a several issue
> >> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster
> and i
> >> >> cant live migrate
> >> >> When clicking on the "Migrate volume" button, i have the following
> >> message
> >> >> :
> >> >>
> >> >> No primary storage pools available for migration
> >> >>
> >> >> and  it generates this in logs : "the hypervisor doesn't support
> >> storage
> >> >> motion."
> >> >>
> >> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
> >> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
> >> >> 192.168.4.30 -- GET
> >> >>
> >>
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
> >> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
> >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> CIDRs
> >> >> from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin]
> --
> >> >> Account {"id": 2, "name": "admin", "uuid":
> >> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
> >> calls:
> >> >> 0.0.0.0/0,::/0
> >> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
> >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> >> Volume
> >> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
> >> storage
> >> >> pools in the cluster to which this volumes can be migrated.
> >> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
> >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> >> >> Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
> >> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
> >> couldn't
> >> >> be retrieved.
> >> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
> >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> >> Volume
> >> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the
> hypervisor
> >> >> doesn't support storage motion.
> >> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
> >> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> >> ===END===
> >> >> 192.168.4.30 -- GET
> >> >>
> >>
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
> >> >>
> >>
> >>
>


Re: ACS 4.16 and xcp-ng - can't live storage migration

2022-05-03 Thread benoit lair
To be precise: after adding these two statements into hypervisor_capabilities and
guest_os_hypervisor, this fixed live storage migration for me.

On Tue, 3 May 2022 at 10:01, benoit lair wrote:

> Hello Antoine,
>
> I saw that this time my yum update upgraded me to 8.2.1
> You were in 8.2.1 too ?
>
> I tried this fix in ACS :
>
> #add hypervsisor xcp 8.2.1 to acs 4.16
> INSERT IGNORE INTO `cloud`.`hypervisor_capabilities`(uuid,
> hypervisor_type,
> hypervisor_version, max_guests_limit, max_data_volumes_limit,
> max_hosts_per_cluster, storage_motion_supported) values (UUID(),
> 'XenServer',
> '8.2.1', 1000, 253, 64, 1);
>
> +-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer 8.2.1
> +INSERT IGNORE INTO `cloud`.`guest_os_hypervisor` (uuid,hypervisor_type,
> hypervisor_version, guest_os_name, guest_os_id, created, is_user_defined)
> SELECT UUID(),'Xenserver', '8.2.1', guest_os_name, guest_os_id,
> utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
> hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
>
> Theses are the fix used to add xcp-ng 8.2.0 to ACS 4.15
>
> Here i adapted the fix to copy guest os mapping from xcp-ng 8.2.0
> capabilities
>
> I tried to reboot and this is not working on another Cloudstack mgmt
> instance with xcp-ng 8.2 freshly patched to 8.2.1 with yum update
>
>
> Regards, Benoit
>
> Le lun. 2 mai 2022 à 19:46, Antoine Boucher  a
> écrit :
>
>> Bonjour Benoit,
>>
>> I had similar issues after I did a yum update and I was only able to fitx
>> the issue by rebooting my hosts.
>>
>> -Antoine
>>
>> > On May 2, 2022, at 12:04 PM, benoit lair  wrote:
>> >
>> > Hello all,
>> >
>> > This is surely due to my yum update which updated to xcp 8.2.1
>> >
>> > Do anybody know how to fix this ? xcp 8.2.1 is compatible ? would it be
>> > possible to add hypervisor capabilities without doing it in beta mode ?
>> >
>> > Le lun. 2 mai 2022 à 16:15, benoit lair  a
>> écrit :
>> >
>> >> Hello folks,
>> >>
>> >> I have a several issue
>> >> I try to live migrate my storage vm disks on a xcp-ng 8.2 cluster and i
>> >> cant live migrate
>> >> When clicking on the "Migrate volume" button, i have the following
>> message
>> >> :
>> >>
>> >> No primary storage pools available for migration
>> >>
>> >> and  it generates this in logs : "the hypervisor doesn't support
>> storage
>> >> motion."
>> >>
>> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
>> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
>> >> 192.168.4.30 -- GET
>> >>
>> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
>> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) CIDRs
>> >> from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin] --
>> >> Account {"id": 2, "name": "admin", "uuid":
>> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
>> calls:
>> >> 0.0.0.0/0,::/0
>> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> Volume
>> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
>> storage
>> >> pools in the cluster to which this volumes can be migrated.
>> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> >> Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
>> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
>> couldn't
>> >> be retrieved.
>> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> Volume
>> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the hypervisor
>> >> doesn't support storage motion.
>> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
>> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
>> ===END===
>> >> 192.168.4.30 -- GET
>> >>
>> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
>> >>
>>
>>


Re: ACS 4.16 and xcp-ng - can't live storage migration

2022-05-03 Thread benoit lair
Hello Antoine,

I saw that this time my yum update upgraded me to 8.2.1.
Were you on 8.2.1 too?

I tried this fix in ACS:

-- add hypervisor XCP-ng 8.2.1 to ACS 4.16
INSERT IGNORE INTO `cloud`.`hypervisor_capabilities` (uuid, hypervisor_type,
hypervisor_version, max_guests_limit, max_data_volumes_limit,
max_hosts_per_cluster, storage_motion_supported)
VALUES (UUID(), 'XenServer', '8.2.1', 1000, 253, 64, 1);

-- Copy XenServer 8.2.0 hypervisor guest OS mappings to XenServer 8.2.1
INSERT IGNORE INTO `cloud`.`guest_os_hypervisor` (uuid, hypervisor_type,
hypervisor_version, guest_os_name, guest_os_id, created, is_user_defined)
SELECT UUID(), 'Xenserver', '8.2.1', guest_os_name, guest_os_id,
utc_timestamp(), 0 FROM `cloud`.`guest_os_hypervisor` WHERE
hypervisor_type='Xenserver' AND hypervisor_version='8.2.0';
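
(If anyone reuses the above, a quick sanity check against the same tables; just
a sketch:

SELECT hypervisor_type, hypervisor_version, storage_motion_supported
FROM `cloud`.`hypervisor_capabilities` WHERE hypervisor_version = '8.2.1';

SELECT COUNT(*) FROM `cloud`.`guest_os_hypervisor`
WHERE hypervisor_type = 'Xenserver' AND hypervisor_version = '8.2.1';

The first should return a row with storage_motion_supported = 1, the second a
non-zero count, once the INSERTs have been applied.)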

These are the fixes that were used to add XCP-ng 8.2.0 to ACS 4.15.

Here I adapted the fix to copy the guest OS mappings from the XCP-ng 8.2.0
capabilities.

I tried to reboot, and this is not working on another CloudStack management
instance with XCP-ng 8.2 freshly patched to 8.2.1 via yum update.


Regards, Benoit

On Mon, 2 May 2022 at 19:46, Antoine Boucher wrote:

> Bonjour Benoit,
>
> I had similar issues after I did a yum update, and I was only able to fix
> the issue by rebooting my hosts.
>
> -Antoine
>
> > On May 2, 2022, at 12:04 PM, benoit lair  wrote:
> >
> > Hello all,
> >
> > This is surely due to my yum update, which updated to XCP-ng 8.2.1.
> >
> > Does anybody know how to fix this? Is XCP-ng 8.2.1 compatible? Would it be
> > possible to add hypervisor capabilities without doing it in beta mode?
> >
> > On Mon, 2 May 2022 at 16:15, benoit lair wrote:
> >
> >> Hello folks,
> >>
> > I have a serious issue:
> > I am trying to live migrate my VM storage disks on an xcp-ng 8.2 cluster and
> > I can't live migrate.
> > When clicking on the "Migrate volume" button, I get the following message:
> >>
> >> No primary storage pools available for migration
> >>
> > and it generates this in the logs: "the hypervisor doesn't support storage
> > motion."
> >>
> >> 2022-05-02 15:52:33,120 DEBUG [c.c.a.ApiServlet]
> >> (qtp1850777594-186961:ctx-2ee90dcf) (logid:1b094155) ===START===
> >> 192.168.4.30 -- GET
> >>
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
> >> 2022-05-02 15:52:33,136 DEBUG [c.c.a.ApiServer]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) CIDRs
> >> from which account 'Acct[a6441eae-68b8-11ec-acb6-96264736f9a1-admin] --
> >> Account {"id": 2, "name": "admin", "uuid":
> >> "a6441eae-68b8-11ec-acb6-96264736f9a1"}' is allowed to perform API
> calls:
> >> 0.0.0.0/0,::/0
> >> 2022-05-02 15:52:33,151 INFO [c.c.s.ManagementServerImpl]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
> >> Vol[320|vm=191|DATADISK] is attached to any running vm. Looking for
> storage
> >> pools in the cluster to which this volumes can be migrated.
> >> 2022-05-02 15:52:33,157 ERROR [c.c.s.ManagementServerImpl]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> >> Capabilities for host Host {"id": "2", "name": "xcp-cluster1-node2",
> >> "uuid": "ae51578b-928c-4d25-9164-3bd7ca0afed4", "type"="Routing"}
> couldn't
> >> be retrieved.
> >> 2022-05-02 15:52:33,157 INFO [c.c.s.ManagementServerImpl]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155) Volume
> >> Vol[320|vm=191|DATADISK] is attached to a running vm and the hypervisor
> >> doesn't support storage motion.
> >> 2022-05-02 15:52:33,164 DEBUG [c.c.a.ApiServlet]
> >> (qtp1850777594-186961:ctx-2ee90dcf ctx-d6c062ae) (logid:1b094155)
> ===END===
> >> 192.168.4.30 -- GET
> >>
> id=b8d15b4c-93e9-4931-81ab-26a47ada32d5&command=findStoragePoolsForMigration&response=json
> >>
>
>


Re: [VOTE] Apache CloudStack 4.17.0.0 RC1

2022-05-03 Thread Wei ZHOU
Hi Nicolas,

Thank you for the hard work!

Unfortunately, I have to vote -1 on RC1. We have found a blocker issue with
IPv6 on non-redundant isolated networks.
We are working on the fix: https://github.com/apache/cloudstack/pull/6343

Kind regards,
Wei

On Fri, 29 Apr 2022 at 20:36, Nicolas Vazquez 
wrote:

> Hi all,
>
> I have created a 4.17.0.0 release (RC1) with the following artefacts up
> for testing and a vote:
>
> Git Branch and Commit SH:
> https://github.com/apache/cloudstack/tree/4.17.0.0-RC20220429T1412
> Commit: 9ec270aa63a0ac479322a6f95146d20aa811ea23
>
> Source release (checksums and signatures are available at the same
> location):
> https://dist.apache.org/repos/dist/dev/cloudstack/4.17.0.0/
>
> PGP release keys (signed using 239A653975E13A0EEF5122A1656E1BCC8CB54F84):
> https://dist.apache.org/repos/dist/release/cloudstack/KEYS
>
> For testing purposes, I have uploaded the different distro packages to:
> https://download.cloudstack.org/testing/4.17.0.0-RC1/
>
> Since 4.16 the system VM template registration is no longer mandatory
> prior to upgrading, however, it can be downloaded from here if needed:
> https://download.cloudstack.org/systemvm/4.17/
>
> The vote will be open until 4th May 2022.
>
> For sanity in tallying the vote, can PMC members please be sure to
> indicate "(binding)" with their vote?
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> Regards,
> Nicolas Vazquez
>
>
>
>
>


Re: Database High Availability

2022-05-03 Thread Jayanth Reddy
Sure, thanks a lot!

On Tue, May 3, 2022 at 12:53 PM Ivan Kudryavtsev  wrote:

> Well,
>
> You would better consult with real-life mysql experts, as for me, I
> referred to great severalnines.com articles like:
>
> https://severalnines.com/resources/database-management-tutorials/galera-cluster-mysql-tutorial
>
> https://severalnines.com/database-blog/avoiding-deadlocks-galera-setting-haproxy-single-node-writes-and-multi-node-reads
>
> Just take a look for best practices there.
>
>
>
> On Tue, May 3, 2022 at 10:18 AM Jayanth Reddy 
> wrote:
>
> > Hi,
> >
> >  Thanks again for the tips! Below is the current configuration, please
> > suggest changes if any.
> >
> >  HAProxy 
> >
> > frontend galera-fe
> > mode tcp
> > bind 10.231.4.112:3306
> > use_backend galera-be
> >
> > backend galera-be
> > balance source
> > mode tcp
> > option tcpka
> > option mysql-check user haproxy
> > server galera-0 10.231.4.36:3306 check
> > server galera-1 10.231.4.37:3306 check
> > server galera-2 10.231.4.38:3306 check
> >
> >  Keepalived 
> >
> > vrrp_script check_backend {
> > script "killall -0 haproxy"
> > weight -20
> > interval 2
> > rise 2
> > fall 2
> > }
> >
> > vrrp_instance DB_0 {
> >   state MASTER  # BACKUP on others
> >   priority 100
> >   interface enp1s0
> >   virtual_router_id 50
> >   advert_int 1
> >   unicast_peer {
> > 10.231.4.87 # Relevant on others
> > 10.231.4.88 # Relevant on others
> >   }
> >   virtual_ipaddress {
> > 10.231.4.112/24
> >   }
> >   track_script {
> >   check_backend
> >   }
> > }
> >
> > Best Regards,
> > Jayanth
> >
> > On Tue, May 3, 2022 at 12:33 PM Ivan Kudryavtsev  wrote:
> >
> > > Sounds cool,
> > >
> > > Just ensure that in any failure case (db, haproxy, OS or hardware
> crash)
> > > all the Management servers are switched to the same Galera instance,
> > > otherwise, this could lead to operational problems.
> > > Also, backups are still mandatory, recommend doing them from one of
> > > Galera's hot-swap nodes, not from the main operational node.
> > >
> > > Best wishes, Ivan
> > >
> > > On Tue, May 3, 2022 at 9:53 AM Jayanth Reddy <
> jayanthreddy5...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Thank you. Have set up MariaDB Galera Cluster with the required
> > > HAProxy
> > > > configuration with MYSQL health checks.  Everything is working fine.
> > > >
> > > > On Mon, May 2, 2022 at 10:48 AM Ivan Kudryavtsev 
> > wrote:
> > > >
> > > > > Hi, I use MariaDB Galera cluster.
> > > > >
> > > > > But you have to pin all the CS management to the same galera node
> to
> > > make
> > > > > cloudstack transactioned operations work correctly. HAproxy or
> shared
> > > > > common ip solve that.
> > > > >
> > > > > пн, 2 мая 2022 г., 7:34 AM Jayanth Reddy <
> jayanthreddy5...@gmail.com
> > >:
> > > > >
> > > > > > Hello guys,
> > > > > >
> > > > > > How are you doing database High Availability? Any inputs on
> DB
> > > > > > Clustering and CloudStack configuration would really help me.
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Database High Availability

2022-05-03 Thread Ivan Kudryavtsev
Well,

You would be better off consulting real-life MySQL experts; as for me, I
referred to these great severalnines.com articles:
https://severalnines.com/resources/database-management-tutorials/galera-cluster-mysql-tutorial
https://severalnines.com/database-blog/avoiding-deadlocks-galera-setting-haproxy-single-node-writes-and-multi-node-reads

Just take a look for best practices there.



On Tue, May 3, 2022 at 10:18 AM Jayanth Reddy 
wrote:

> Hi,
>
>  Thanks again for the tips! Below is the current configuration, please
> suggest changes if any.
>
>  HAProxy 
>
> frontend galera-fe
> mode tcp
> bind 10.231.4.112:3306
> use_backend galera-be
>
> backend galera-be
> balance source
> mode tcp
> option tcpka
> option mysql-check user haproxy
> server galera-0 10.231.4.36:3306 check
> server galera-1 10.231.4.37:3306 check
> server galera-2 10.231.4.38:3306 check
>
>  Keepalived 
>
> vrrp_script check_backend {
> script "killall -0 haproxy"
> weight -20
> interval 2
> rise 2
> fall 2
> }
>
> vrrp_instance DB_0 {
>   state MASTER  # BACKUP on others
>   priority 100
>   interface enp1s0
>   virtual_router_id 50
>   advert_int 1
>   unicast_peer {
> 10.231.4.87 # Relevant on others
> 10.231.4.88 # Relevant on others
>   }
>   virtual_ipaddress {
> 10.231.4.112/24
>   }
>   track_script {
>   check_backend
>   }
> }
>
> Best Regards,
> Jayanth
>
> On Tue, May 3, 2022 at 12:33 PM Ivan Kudryavtsev  wrote:
>
> > Sounds cool,
> >
> > Just ensure that in any failure case (db, haproxy, OS or hardware crash)
> > all the Management servers are switched to the same Galera instance,
> > otherwise, this could lead to operational problems.
> > Also, backups are still mandatory, recommend doing them from one of
> > Galera's hot-swap nodes, not from the main operational node.
> >
> > Best wishes, Ivan
> >
> > On Tue, May 3, 2022 at 9:53 AM Jayanth Reddy  >
> > wrote:
> >
> > > Hi,
> > >
> > > Thank you. Have set up MariaDB Galera Cluster with the required
> > HAProxy
> > > configuration with MYSQL health checks.  Everything is working fine.
> > >
> > > On Mon, May 2, 2022 at 10:48 AM Ivan Kudryavtsev 
> wrote:
> > >
> > > > Hi, I use MariaDB Galera cluster.
> > > >
> > > > But you have to pin all the CS management to the same galera node to
> > make
> > > > cloudstack transactioned operations work correctly. HAproxy or shared
> > > > common ip solve that.
> > > >
> > > > пн, 2 мая 2022 г., 7:34 AM Jayanth Reddy  >:
> > > >
> > > > > Hello guys,
> > > > >
> > > > > How are you doing database High Availability? Any inputs on DB
> > > > > Clustering and CloudStack configuration would really help me.
> > > > >
> > > >
> > >
> >
>


Re: Database High Availability

2022-05-03 Thread Jayanth Reddy
Hi,

 Thanks again for the tips! Below is the current configuration, please
suggest changes if any.

 HAProxy 

frontend galera-fe
mode tcp
bind 10.231.4.112:3306
use_backend galera-be

backend galera-be
balance source
mode tcp
option tcpka
option mysql-check user haproxy
server galera-0 10.231.4.36:3306 check
server galera-1 10.231.4.37:3306 check
server galera-2 10.231.4.38:3306 check
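
Side note on "option mysql-check user haproxy": this assumes a matching,
password-less MySQL user exists on each Galera node for the health check to
succeed, e.g. (a sketch; adjust the host pattern to your network):

CREATE USER 'haproxy'@'10.231.4.%';
FLUSH PRIVILEGES;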

 Keepalived 

vrrp_script check_backend {
script "killall -0 haproxy"
weight -20
interval 2
rise 2
fall 2
}

vrrp_instance DB_0 {
  state MASTER  # BACKUP on others
  priority 100
  interface enp1s0
  virtual_router_id 50
  advert_int 1
  unicast_peer {
10.231.4.87 # Relevant on others
10.231.4.88 # Relevant on others
  }
  virtual_ipaddress {
10.231.4.112/24
  }
  track_script {
  check_backend
  }
}

Best Regards,
Jayanth

On Tue, May 3, 2022 at 12:33 PM Ivan Kudryavtsev  wrote:

> Sounds cool,
>
> Just ensure that in any failure case (db, haproxy, OS or hardware crash)
> all the Management servers are switched to the same Galera instance,
> otherwise, this could lead to operational problems.
> Also, backups are still mandatory, recommend doing them from one of
> Galera's hot-swap nodes, not from the main operational node.
>
> Best wishes, Ivan
>
> On Tue, May 3, 2022 at 9:53 AM Jayanth Reddy 
> wrote:
>
> > Hi,
> >
> > Thank you. Have set up MariaDB Galera Cluster with the required
> HAProxy
> > configuration with MYSQL health checks.  Everything is working fine.
> >
> > On Mon, May 2, 2022 at 10:48 AM Ivan Kudryavtsev  wrote:
> >
> > > Hi, I use MariaDB Galera cluster.
> > >
> > > But you have to pin all the CS management to the same galera node to
> make
> > > cloudstack transactioned operations work correctly. HAproxy or shared
> > > common ip solve that.
> > >
> > > пн, 2 мая 2022 г., 7:34 AM Jayanth Reddy :
> > >
> > > > Hello guys,
> > > >
> > > > How are you doing database High Availability? Any inputs on DB
> > > > Clustering and CloudStack configuration would really help me.
> > > >
> > >
> >
>


Re: Database High Availability

2022-05-03 Thread Ivan Kudryavtsev
Sounds cool,

Just ensure that in any failure case (db, haproxy, OS or hardware crash)
all the Management servers are switched to the same Galera instance,
otherwise, this could lead to operational problems.
Also, backups are still mandatory, recommend doing them from one of
Galera's hot-swap nodes, not from the main operational node.
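
One common way to enforce that with the HAProxy setup shown earlier in this
thread (just a sketch reusing the same addresses) is to keep a single active
server and mark the others as backup, so every management server writes to the
same node and they only fail over together:

backend galera-be
mode tcp
option tcpka
option mysql-check user haproxy
server galera-0 10.231.4.36:3306 check
server galera-1 10.231.4.37:3306 check backup
server galera-2 10.231.4.38:3306 check backup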

Best wishes, Ivan

On Tue, May 3, 2022 at 9:53 AM Jayanth Reddy 
wrote:

> Hi,
>
> Thank you. I have set up a MariaDB Galera Cluster with the required HAProxy
> configuration and MySQL health checks. Everything is working fine.
>
> On Mon, May 2, 2022 at 10:48 AM Ivan Kudryavtsev  wrote:
>
> > Hi, I use MariaDB Galera cluster.
> >
> > But you have to pin all the CS management servers to the same Galera node to
> > make CloudStack transactional operations work correctly. HAProxy or a shared
> > common IP solves that.
> >
> > On Mon, 2 May 2022 at 7:34 AM, Jayanth Reddy wrote:
> >
> > > Hello guys,
> > >
> > > How are you doing database High Availability? Any inputs on DB
> > > Clustering and CloudStack configuration would really help me.
> > >
> >
>