Re: CKS Storage Provisioner Info

2024-02-26 Thread Jayanth Babu A
Hello Bharat,
You have some well-known options, listed below:
https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
https://github.com/rook/rook - CephFS & NFS can fit
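
For ReadWriteMany specifically, the NFS route is the usual answer; a minimal
sketch, assuming the nfs-subdir-external-provisioner is installed with its
default "nfs-client" storage class (the claim name here is illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-shared
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF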

Thanks,
Jayanth Reddy

From: Bharat Bhushan Saini 
Date: Tuesday, 27 February 2024 at 11:01
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info
Hi All,

Thanks for helping out with the storage provisioner.

In the CKS service I am able to bind a PVC as ReadWriteOnce with the Leaseweb
CloudStack CSI driver.
Is there any option in CloudStack, with another driver, to bind storage as
ReadWriteMany?

I tried with Leaseweb and got the below warning:
Warning  ProvisioningFailed  6s (x5 over 21s)  csi.cloudstack.apache.org_cloudstack-csi-controller-7f89c8cd47-ztllw_c6112933-3587-4445-8702-74b255b1e56f
failed to provision volume with StorageClass "cloudstack-custom": rpc error: code = InvalidArgument desc = Volume capabilities not supported. Only SINGLE_NODE_WRITER supported.

Thanks and Regards,
Bharat Saini


From: Kiran Chavala 
Date: Monday, 26 February 2024 at 5:54 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info


Hi Bharath

Note that the CKS provisioner works on KVM-based CloudStack environments

Regards
Kiran

From: Kiran Chavala 
Date: Monday, 26 February 2024 at 5:45 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info
Hi Bharat Bhusan

Please follow these steps


1. Deploy a Kubernetes cluster on CloudStack


NAME                     STATUS   ROLES           AGE     VERSION
ty-control-18de52c04f7   Ready    control-plane   5m11s   v1.28.4
ty-node-18de52c4185      Ready    <none>          4m55s   v1.28.4

kubectl get secrets -A
NAMESPACE     NAME                TYPE     DATA   AGE
kube-system   cloudstack-secret   Opaque   1      10m



2. Check the deployment; it should be in Pending state

~ kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   0/1     1            0           46s


3. Edit the deployment and remove the nodeSelector part

~kubectl edit deployment/cloudstack-csi-controller -n kube-system


  nodeSelector:
kubernetes.io/os: linux
node-role.kubernetes.io/master: ""
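
As a non-interactive alternative to kubectl edit, a JSON patch along these
lines should drop the selector (an untested sketch):

kubectl patch deployment cloudstack-csi-controller -n kube-system \
  --type=json -p='[{"op": "remove", "path": "/spec/template/spec/nodeSelector"}]'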

4. Check the deployment again; it should be in Running state

~kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   1/1     1            1           2m39s

5. Replace the disk offering in the storage class yaml

Provide the custom disk offering UUID (Service Offerings > Disk Offerings >
custom disk offering):

4c518474-5d7b-4285-a07c-c57e214abb3b
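
If CloudMonkey (cmk) is configured against the management server, the UUID
can also be looked up from the CLI; a hedged sketch:

cmk list diskofferings filter=id,name,iscustomized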


vi 0-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-custom
provisioner: csi.cloudstack.apache.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  csi.cloudstack.apache.org/disk-offering-id: 4c518474-5d7b-4285-a07c-c57e214abb3b


kubectl apply -f 0-storageclass.yaml

k8s git:(master) ✗ kubectl get sc
NAME                PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cloudstack-custom   csi.cloudstack.apache.org   Delete          WaitForFirstConsumer   false                  72s



6. Apply the pvc yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-block
spec:
  storageClassName: cloudstack-custom
  volumeMode: Block
  accessModes:
- ReadWriteOnce
  resources:
requests:
  storage: 1Gi


kubectl apply -f pvc-block.yaml

k8s git:(master) ✗ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
example-pvc-block   Pending                                      cloudstack-custom   31s

7. Apply the pod block yaml

apiVersion: v1
kind: Pod
metadata:
  name: example-pod-block
spec:
  containers:
- name: example
  image: ubuntu
  volumeDevices:
- devicePath: "/dev/example-block"
  name: example-volume
  stdin: true
  stdinOnce: true
  tty: true
  volumes:
- name: example-volume
  persistentVolumeClaim:
claimName: example-pvc-block


kubectl apply -f pod-block.yaml


➜  k8s git:(master) ✗ k get pods -A
NAMESPACE   NAME                READY   STATUS    RESTARTS   AGE
default     example-pod-block   1/1     Running   0          107s

Events:
  Type    Reason    Age   From    Message
  ----    ------    ----  ----    -------

Re: CKS Storage Provisioner Info

2024-02-26 Thread Bharat Bhushan Saini
Hi All,

Thanks for helping out with the storage provisioner.

In the CKS service I am able to bind a PVC as ReadWriteOnce with the Leaseweb
CloudStack CSI driver.
Is there any option in CloudStack, with another driver, to bind storage as
ReadWriteMany?

I tried with Leaseweb and got the below warning:
Warning  ProvisioningFailed  6s (x5 over 21s)  csi.cloudstack.apache.org_cloudstack-csi-controller-7f89c8cd47-ztllw_c6112933-3587-4445-8702-74b255b1e56f
failed to provision volume with StorageClass "cloudstack-custom": rpc error: code = InvalidArgument desc = Volume capabilities not supported. Only SINGLE_NODE_WRITER supported.

Thanks and Regards,
Bharat Saini


From: Kiran Chavala 
Date: Monday, 26 February 2024 at 5:54 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info


Hi Bharath

Note that the CKS provisioner works on KVM-based CloudStack environments

Regards
Kiran

From: Kiran Chavala 
Date: Monday, 26 February 2024 at 5:45 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info
Hi Bharat Bhusan

Please follow these steps


1. Deploy a Kubernetes cluster on CloudStack


NAME                     STATUS   ROLES           AGE     VERSION
ty-control-18de52c04f7   Ready    control-plane   5m11s   v1.28.4
ty-node-18de52c4185      Ready    <none>          4m55s   v1.28.4

kubectl get secrets -A
NAMESPACE     NAME                TYPE     DATA   AGE
kube-system   cloudstack-secret   Opaque   1      10m



2. Check the deployment; it should be in Pending state

~ kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   0/1     1            0           46s


3. Edit the deployment and remove the nodeSelector part

~kubectl edit deployment/cloudstack-csi-controller -n kube-system


  nodeSelector:
kubernetes.io/os: linux
node-role.kubernetes.io/master: ""

4. Check the deployment again; it should be in Running state

~kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   1/1     1            1           2m39s

5. Replace the disk offering in the storage class yaml

Provide the custom disk offering UUID (Service Offerings > Disk Offerings >
custom disk offering):

4c518474-5d7b-4285-a07c-c57e214abb3b


vi 0-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-custom
provisioner: csi.cloudstack.apache.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  csi.cloudstack.apache.org/disk-offering-id: 4c518474-5d7b-4285-a07c-c57e214abb3b


kubectl apply -f 0-storageclass.yaml

k8s git:(master) ✗ kubectl get sc
NAME                PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cloudstack-custom   csi.cloudstack.apache.org   Delete          WaitForFirstConsumer   false                  72s



6. Apply the pvc yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-block
spec:
  storageClassName: cloudstack-custom
  volumeMode: Block
  accessModes:
- ReadWriteOnce
  resources:
requests:
  storage: 1Gi


kubectl apply -f pvc-block.yaml

k8s git:(master) ✗ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
example-pvc-block   Pending                                      cloudstack-custom   31s

7. Apply the pod block yaml

apiVersion: v1
kind: Pod
metadata:
  name: example-pod-block
spec:
  containers:
- name: example
  image: ubuntu
  volumeDevices:
- devicePath: "/dev/example-block"
  name: example-volume
  stdin: true
  stdinOnce: true
  tty: true
  volumes:
- name: example-volume
  persistentVolumeClaim:
claimName: example-pvc-block


kubectl apply -f pod-block.yaml


➜  k8s git:(master) ✗ k get pods -A
NAMESPACE   NAME                READY   STATUS    RESTARTS   AGE
default     example-pod-block   1/1     Running   0          107s

Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               9s    default-scheduler        Successfully assigned default/example-pod-block to ty-node-18de52c4185
  Normal  SuccessfulAttachVolume  6s    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-02999f04-dd0a-407c-8805-125c7c56d51b"
  Normal  SuccessfulMountVolume   4s    kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-02999f04-dd0a-407c-8805-125c7c56d51b"

Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Jithin Raju
Congratulations Vishesh.

-Jithin

From: Daan Hoogland 
Date: Monday, 26 February 2024 at 7:35 PM
To: users , dev 
Subject: new committer: Vishesh Jindal (vishesh)
users and devs,

The Project Management Committee (PMC) for Apache CloudStack
has invited Vishesh Jindal to become a committer and we are pleased
to announce that they have accepted.

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.

Please join me in congratulating Vishesh.

--
on behalf of the PMC, Daan

 



Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Kiran Chavala

Congratulations Vishesh

Regards
Kiran

From: Daan Hoogland 
Date: Monday, 26 February 2024 at 7:35 PM
To: users , dev 
Subject: new committer: Vishesh Jindal (vishesh)
users and devs,

The Project Management Committee (PMC) for Apache CloudStack
has invited Vishesh Jindal to become a committer and we are pleased
to announce that they have accepted.

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.

Please join me in congratulating Vishesh.

--
on behalf of the PMC, Daan

 



Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Suresh Anaparti
Congratulations Vishesh!

Regards,
Suresh

From: Daan Hoogland 
Date: Monday, 26 February 2024 at 7:35 PM
To: users , dev 
Subject: new committer: Vishesh Jindal (vishesh)
users and devs,

The Project Management Committee (PMC) for Apache CloudStack
has invited Vishesh Jindal to become a committer and we are pleased
to announce that they have accepted.

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.

Please join me in congratulating Vishesh.

--
on behalf of the PMC, Daan

 



Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Harikrishna Patnala
Congrats Vishesh!

Regards,
Harikrishna

From: Daan Hoogland 
Date: Monday, 26 February 2024 at 7:35 PM
To: users , dev 
Subject: new committer: Vishesh Jindal (vishesh)
users and devs,

The Project Management Committee (PMC) for Apache CloudStack
has invited Vishesh Jindal to become a committer and we are pleased
to announce that they have accepted.

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.

Please join me in congratulating Vishesh.

--
on behalf of the PMC, Daan

 



Re: How to create one network per project using as few public addresses as possible?

2024-02-26 Thread Wei ZHOU
Unfortunately, the VPC source NAT IP cannot be used by the VPC tiers for any
other purpose (load balancing, port forwarding, or static NAT).
You need to acquire a new public IP.
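
For reference, a hedged CloudMonkey sketch of that flow (all IDs are
placeholders): acquire an extra public IP on the VPC, then create the
forwarding rule against the tier network:

cmk associate ipaddress vpcid=<vpc-uuid>
cmk create portforwardingrule ipaddressid=<new-ip-uuid> protocol=tcp \
    publicport=443 privateport=443 virtualmachineid=<vm-uuid> \
    networkid=<tier-network-uuid>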

-Wei


On Monday, February 26, 2024, Jorge Luiz Correa
 wrote:

> Returning to this topic with the 4.19 release, I can create a domain VPC
> and tiers in each project connected to this domain VPC. Each tier has its
> ACL rules. This is ok to filter egress traffic, for example. But I
> couldn't find a way to configure port forwarding in the VPC (ingress). Is
> there a way in the GUI?
>
> For example, in Networks > Public IP addresses -> choose any isolated
> network. I can see options like "Details, Firewall, Port forwarding, Load
> balancing, VPN, Events, Comments".
>
> When a tier is created its public IP is also listed in Networks > Public IP
> addresses. But, when I click on the public IP address from the VPC the
> options are only "Details, VPN".
>
> How can I configure ingress options, as port forwarding? For example, I
> need to forward ports 80 and 443 to a specific VM in some tier.
>
> Thank you!
>
>
> Em qua., 29 de nov. de 2023 às 14:50, Jorge Luiz Correa <
> jorge.l.cor...@embrapa.br> escreveu:
>
> > Hi Gabriel! This is exactly what I was looking for. I couldn't find this
> > request in github when looking for something. Thank you for sharing.
> >
> > No problem in creating through the API. So, I'll wait for the test
> > results. If you could share with us, I would appreciate. And thank you so
> > much for these tests!
> >
> > :)
> >
> > Em qua., 29 de nov. de 2023 às 10:01, Gabriel Ortiga Fernandes <
> > gabriel.ort...@hotmail.com> escreveu:
> >
> >> Hello Jorge,
> >>
> >> As soon as release 4.19 is launched, the feature of Domain VPCs (
> >> https://github.com/apache/cloudstack/pull/7153) will be available,
> which
> >> will allow users and operators to create tiers to VPCs for any account
> (or
> >> in your case project) to which the VPC owner has access, regardless of
> >> domain, thus, allowing all the projects to share a single VR.
> >>
> >> For now, this feature is not available in the GUI; however, you can
> >> create a tier through the API 'createNetwork', informing both the
> projectId
> >> and vpcId.
> >>
> >> This feature has been tested using accounts, but not projects, so I will
> >> run some tests in the next few days and give you an answer regarding its
> >> viability.
> >>
> >> Kind regards,
> >>
> >> GaOrtiga
> >>
> >> PS: This email will probably be a duplicate since I tried sending it
> >> through a different provider, but it took too long, so I am sending this
> >> again to save time.
> >>
> >
>


Re: Slow Metrics output in GUI

2024-02-26 Thread Andrei Mikhailovsky
Interesting.

Joan, do you mind sharing how you are doing it?

Thanks

- Original Message -
> From: "Joan g" 
> To: "users" 
> Sent: Monday, 26 February, 2024 18:06:58
> Subject: Re: Slow Metrics output in GUI

> I am facing the same problem in my 4.17.2 version. We are manually clearing
> the stats table to  make the instance list page load faster :(
> 
> 
> Joan
> 
> On Mon, 26 Feb, 2024, 22:24 Andrei Mikhailovsky, 
> wrote:
> 
>> Hello everyone,
>>
>> My setup: ACS 4.18.1.0 on Ubuntu 20.04.6. Two management servers and mysql
>> active-active replication.
>>
>>
>> I seem to have a very slow response on viewing vms. It takes about 20
>> seconds for the vm data to show when I click on any vm under Compute >
>> Instances. When I click on various vm tabs (like NICs, Disks, Details, etc)
>> the only tab that takes about 15-20 seconds to refresh is the Metrics tab.
>> When the spinner stops I get the following message: "No data to show for
>> the selected period." Also this information is shown in red colour: The
>> Control Plane Status of this instance is Offline. Some actions on this
>> instance will fail, if so please wait a while and retry. When I click on
>> the 12 or 24 hours tab it takes a bit of time, but it does show the tables
>> and the message in red colour is not shown.
>> On mysql server I see the mysql process is using over 100% cpu (with 0%
>> iowait) while ACS tries to retrieve the Metrics data. Also, the
>> cloudstack-management server cpu usage goes to 200-400%.
>>
>>
>> I've tried all the obvious (restarting management servers, stopping one of
>> the management servers, restarting host servers).
>>
>> Does anyone know what is the issue? why does it take so long to retrieve
>> the vm data and metrics? I don't remember having this problem before 4.18.
>>
>> Many thanks for any pointers.
>>
>> cheers
>>
>> Andrei
>>
>>
>>
>>
>>
>>
>>


Re: How to create one network per project using as few public addresses as possible?

2024-02-26 Thread Jorge Luiz Correa
Returning to this topic with the 4.19 release, I can create a domain VPC
and tiers in each project connected to this domain VPC. Each tier has its
ACL rules. This is ok to filter egress traffic, for example. But I
couldn't find a way to configure port forwarding in the VPC (ingress). Is
there a way in the GUI?

For example, in Networks > Public IP addresses -> choose any isolated
network. I can see options like "Details, Firewall, Port forwarding, Load
balancing, VPN, Events, Comments".

When a tier is created its public IP is also listed in Networks > Public IP
addresses. But, when I click on the public IP address from the VPC the
options are only "Details, VPN".

How can I configure ingress options, as port forwarding? For example, I
need to forward ports 80 and 443 to a specific VM in some tier.

Thank you!


Em qua., 29 de nov. de 2023 às 14:50, Jorge Luiz Correa <
jorge.l.cor...@embrapa.br> escreveu:

> Hi Gabriel! This is exactly what I was looking for. I couldn't find this
> request in github when looking for something. Thank you for sharing.
>
> No problem in creating through the API. So, I'll wait for the test
> results. If you could share with us, I would appreciate. And thank you so
> much for these tests!
>
> :)
>
> Em qua., 29 de nov. de 2023 às 10:01, Gabriel Ortiga Fernandes <
> gabriel.ort...@hotmail.com> escreveu:
>
>> Hello Jorge,
>>
>> As soon as release 4.19 is launched, the feature of Domain VPCs (
>> https://github.com/apache/cloudstack/pull/7153) will be available, which
>> will allow users and operators to create tiers to VPCs for any account (or
>> in your case project) to which the VPC owner has access, regardless of
>> domain, thus, allowing all the projects to share a single VR.
>>
>> For now, this feature is not available in the GUI; however, you can
>> create a tier through the API 'createNetwork', informing both the projectId
>> and vpcId.
>>
>> This feature has been tested using accounts, but not projects, so I will
>> run some tests in the next few days and give you an answer regarding its
>> viability.
>>
>> Kind regards,
>>
>> GaOrtiga
>>
>> PS: This email will probably be a duplicate since I tried sending it
>> through a different provider, but it took too long, so I am sending this
>> again to save time.
>>
>
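
Following Gabriel's pointer above, a hedged CloudMonkey sketch of creating a
tier in the domain VPC on behalf of a project (all IDs and the addressing are
placeholders):

cmk create network name=project-tier1 displaytext=project-tier1 \
    networkofferingid=<vpc-tier-offering-uuid> zoneid=<zone-uuid> \
    vpcid=<domain-vpc-uuid> projectid=<project-uuid> aclid=<acl-uuid> \
    gateway=10.1.1.1 netmask=255.255.255.0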



Re: Slow Metrics output in GUI

2024-02-26 Thread Joan g
I am facing the same problem on my 4.17.2 version. We are manually clearing
the stats table to make the instance list page load faster :(
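
A hedged sketch of that cleanup, assuming the retained samples live in the
vm_stats table of the cloud database on 4.17+ (verify the table name on your
version before running anything, and take a backup):

mysql -u root -p cloud -e "TRUNCATE TABLE vm_stats;"

If your version has it, the global setting vm.stats.max.retention.time
(minutes) may avoid the manual cleanup altogether, e.g.:

cmk update configuration name=vm.stats.max.retention.time value=720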


Joan

On Mon, 26 Feb, 2024, 22:24 Andrei Mikhailovsky, 
wrote:

> Hello everyone,
>
> My setup: ACS 4.18.1.0 on Ubuntu 20.04.6. Two management servers and mysql
> active-active replication.
>
>
> I seem to have a very slow response on viewing vms. It takes about 20
> seconds for the vm data to show when I click on any vm under Compute >
> Instances. When I click on various vm tabs (like NICs, Disks, Details, etc)
> the only tab that takes about 15-20 seconds to refresh is the Metrics tab.
> When the spinner stops I get the following message: "No data to show for
> the selected period." Also this information is shown in red colour: The
> Control Plane Status of this instance is Offline. Some actions on this
> instance will fail, if so please wait a while and retry. When I click on
> the 12 or 24 hours tab it takes a bit of time, but it does show the tables
> and the message in red colour is not shown.
> On mysql server I see the mysql process is using over 100% cpu (with 0%
> iowait) while ACS tries to retrieve the Metrics data. Also, the
> cloudstack-management server cpu usage goes to 200-400%.
>
>
> I've tried all the obvious (restarting management servers, stopping one of
> the management servers, restarting host servers).
>
> Does anyone know what is the issue? why does it take so long to retrieve
> the vm data and metrics? I don't remember having this problem before 4.18.
>
> Many thanks for any pointers.
>
> cheers
>
> Andrei
>
>
>
>
>
>
>
>


Re: Cloudstack DB using 3 Node Galera Cluster.

2024-02-26 Thread Joan g
Thank you Kiran for the detailed information. For me the replication is
fine; it's failing only on a new install or upgrade of CloudStack, which
calls for many schema changes. I think during install or upgrade we may
need to disable Percona replication, as sketched below.
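
A minimal sketch of that idea, assuming PXC strict mode is what rejects the
schema statements (run on the write node before the install/upgrade, then
revert afterwards):

mysql -u root -p -e "SET GLOBAL pxc_strict_mode=PERMISSIVE;"
# ... run the CloudStack install/upgrade (schema migration) ...
mysql -u root -p -e "SET GLOBAL pxc_strict_mode=ENFORCING;"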

Regards Joan

On Mon, 26 Feb, 2024, 18:10 Kiran Chavala, 
wrote:

> Hi Joan
>
> You can refer this article
>
>
> https://severalnines.com/blog/how-deploy-high-availability-cloudstackcloudplatform-mariadb-galera-cluster/
>
>
> I had these in my notes when I tried setting it up percona-xtradb, hope
> its useful to you.
>
> 
> Install 2 ubuntu nodes for percona-xtradb cluster
>
> On ubuntu node 1
>
> $ sudo apt update
>
> $ sudo apt install gnupg2
>
> $ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release
> -sc)_all.deb
>
> $ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
>
> $ sudo apt update
>
> $ sudo apt install percona-server-server-5.7
>
>
>
> cat >>/etc/mysql/my.cnf<<EOF
> [mysqld]
>
> wsrep_provider=/usr/lib/libgalera_smm.so
> wsrep_cluster_name=democluster
> wsrep_cluster_address=gcomm://
> wsrep_node_name=ubuntuvm01
> wsrep_node_address=172.42.42.101
> wsrep_sst_method=xtrabackup-v2
> wsrep_sst_auth=repuser:reppassword
> pxc_strict_mode=ENFORCING
> binlog_format=ROW
> default_storage_engine=InnoDB
> innodb_autoinc_lock_mode=2
>
> EOF
>
>
>
> $systemctl start mysql
>
> login to mysql on node 1 and execute the following commands
>
>
> mysql -uroot -p -e "create user repuser@localhost identified by
> 'reppassword'"
> mysql -uroot -p -e "grant reload, replication client, process, lock tables
> on *.* to repuser@localhost"
> mysql -uroot -p -e "flush privileges"
>
>
>
> On Ubuntu Node 2
>
>
> $ sudo apt update
>
> $ sudo apt install gnupg2
>
> $ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release
> -sc)_all.deb
>
> $ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
>
> $ sudo apt update
>
> $ sudo apt install percona-server-server-5.7
>
>
> cat >>/etc/mysql/my.cnf<<EOF
> [mysqld]
>
> wsrep_provider=/usr/lib/libgalera_smm.so
>
> wsrep_cluster_name=democluster
>
> wsrep_cluster_address= gcomm://172.42.42.101,172.42.42.102
>
> wsrep_node_name=ubuntuvm02
>
> wsrep_node_address=172.42.42.102
>
> wsrep_sst_method=xtrabackup-v2
>
> wsrep_sst_auth=repuser:reppassword
>
> pxc_strict_mode=ENFORCING
>
> binlog_format=ROW
>
> default_storage_engine=InnoDB
>
> innodb_autoinc_lock_mode=2
>
> EOF
>
>
>
> $systemctl start mysql
>
>
>
>
> Login back to node 1 and check the status of the xtradb cluster
>
> mysql >show status like 'wsrep%';
>
>
> mysql>use mysql
> mysql>GRANT ALL ON *.* to root@'%' IDENTIFIED BY 'password';
> mysql>GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password'
> WITH GRANT OPTION;
> mysql>FLUSH PRIVILEGES;
> mysql> SELECT host FROM mysql.user WHERE User = 'root';
> mysql>SET GLOBAL pxc_strict_mode=PERMISSIVE
>
>
>
> Regards
> Kiran
>
> From: Joan g 
> Date: Saturday, 24 February 2024 at 12:29 AM
> To: users@cloudstack.apache.org 
> Subject: Cloudstack DB using 3 Node Galera Cluster.
> Hi Community,
>
> I need some suggestions  on using 3 node Mariadb *Galera Cluster or percona
> xtradb* for Cloudstack Databases.
>
> In My setup the Databases are behind a LB and write happens only to a
> single node
>
> With new Cloudstack 4.18.1 install  initial database migration is always
> failing because of schema update/sync issues with other DB nodes.
>
> Logs in Mysql err::
> 2024-02-23T12:55:15.521278Z 17 [ERROR] [MY-010584] [Repl] Replica SQL:
> Error 'Duplicate column name 'display'' on query. Default
>  database: 'cloud'. Query: 'ALTER TABLE cloud.guest_os ADD COLUMN display
> tinyint(1) DEFAULT '1' COMMENT 'should this guest_os b
> e shown to the end user'', Error_code: MY-001060
>
> Due to this Cloudstack initialisation is always failing.
>
> Can someone point me with a suggested method for DB HA
>
> Jon
>
>
>
>


GPU discovery in the hypervisor

2024-02-26 Thread Douglas Oliveira
Hello,
How does the GPU discovery process work on the hypervisor with ACS,
something similar to what OpenNebula does (through lspci)?
I currently have a service offering created via API for an Nvidia A16 GPU,
which does not work because it reports that there are no hosts
available to serve the resource. So I'm unsure whether the problem is the
service offering or the non-detection of the GPU on the host.

Regards

-- 
--
Att.
Douglas Oliveira


Slow Metrics output in GUI

2024-02-26 Thread Andrei Mikhailovsky
Hello everyone, 

My setup: ACS 4.18.1.0 on Ubuntu 20.04.6. Two management servers and mysql 
active-active replication. 


I seem to have a very slow response on viewing vms. It takes about 20 seconds 
for the vm data to show when I click on any vm under Compute > Instances. When 
I click on various vm tabs (like NICs, Disks, Details, etc) the only tab that 
takes about 15-20 seconds to refresh is the Metrics tab. When the spinner stops 
I get the following message: "No data to show for the selected period." Also 
this information is shown in red colour: The Control Plane Status of this 
instance is Offline. Some actions on this instance will fail, if so please wait 
a while and retry. When I click on the 12 or 24 hours tab it takes a bit of 
time, but it does show the tables and the message in red colour is not shown. 
On mysql server I see the mysql process is using over 100% cpu (with 0% iowait) 
while ACS tries to retrieve the Metrics data. Also, the cloudstack-management 
server cpu usage goes to 200-400%. 


I've tried all the obvious (restarting management servers, stopping one of the 
management servers, restarting host servers). 

Does anyone know what is the issue? why does it take so long to retrieve the vm 
data and metrics? I don't remember having this problem before 4.18. 

Many thanks for any pointers. 

cheers 

Andrei 









RE: corrupt RVR causing host agent issues

2024-02-26 Thread Gary Dixon
Hi Daan
It seems CloudStack did know the host had died, because it tried to fence the
host but couldn't, as we have host HA disabled. It also reported that an OOB
stop had occurred on the HA-enabled VMs and started them all again on the same
host. We then had to put the host into maintenance mode because the iDRAC logs
were showing issues with 2 memory DIMMs.

All I know is that whichever host the corrupt VR was running on, we could not
console to it or any other running VM on the same host, because the agent
comms were messed up.

We have found in the agent log on the host a line stating that public-key
authentication to the VR had failed (because the VR was corrupt at the guest
OS level). At the time we did not see this, and any command sent from ACS
mgmt. to either reboot the VR or restart the VPC with cleanup resulted in the
host agent not servicing the request or any other request, such as viewing the
console of any VM or live migrating any VM to another host. We're still
sifting through both agent and mgmt. logs to try and determine what exactly
happened that was causing this behaviour. All other running VMs on the host
were actually fine, as we could connect by external methods.
We are hoping to upgrade the environment ASAP so we can get better Host HA with 
StorPool Primary storage.

BR

Gary


Gary Dixon
Quadris Cloud Manager
0161 537 4980 +44 7989717661
gary.di...@quadris.co.uk
www.quadris.com
Innovation House, 12-13 Bredbury Business Park
Bredbury Park Way, Bredbury, Stockport, SK6 2SN
-Original Message-
From: Daan Hoogland 
Sent: Monday, February 26, 2024 1:03 PM
To: users@cloudstack.apache.org
Subject: Re: corrupt RVR causing host agent issues

Gary, the mail does not display the screenshot for me. Also, this is an old
version (4.15); I think you should upgrade.

What might be the root of your issue is that *you* have seen that the physical
host crashed, but CloudStack could not determine that. To prevent starting the
same VM twice it would withhold taking any action in such situations.

You may call this a bug or a "lack of feature", but the bottom line is that 
this is expected behaviour.

I do not think a corrupt VR would crash a host.


On Mon, Feb 26, 2024 at 1:25 PM Gary Dixon 
wrote:

> ACS 4.15.2
>
> KVM
>
> Ubuntu 20.04
>
>
>
> Hi all
>
>
>
> We had a physical host crash on Friday due to hardware failure. This
> appeared to have caused issues with some RVR’s going into an ‘unknown’
> state.
>
>
>
> The strange thing was that on any host where a RVR in an unknown state
> was running – we could not console onto any VM’s on that host – nor
> could we SSH directly to the RVR from the host.
>
> The UI was showing all hosts agent state as ‘UP’
>
>
>
> Only when we restarted the ACS mgmt. service did we notice that the
> host agent where a RVR was running in an ‘unknown’ state then was in a
> ‘connecting’ state for some time – there were no networking issues
> either – host was pingable from the mgmt. server.
>
>
>
> We were then briefly able to console onto one of the RVR’s in an
> unknown state and then discovered that the RVR was indeed corrupt –
> this is the screenshot of the RVR terminal :
>
>
>
> We then marked the RVR in the DB as ‘stopped’ and virsh destroyed it
> directly on the host. We were then able to restart the VPC with
> cleanup which then re-created the corrupt RVR.
>
> It then appeared that once the corrupt RVR had gone – all other RVR’s
> in an unknown state transitioned to ‘backup’ state
>
>
>
> We are wondering if we have encountered a bug where if a corrupt RVR
> crashes the host cloudstack agent if ACS tries to do anything with the
> RVR – like restart it
>
>
>
> BR
>
>
>
> Gary
>
>
>
>
>
>
> Gary Dixon
> Quadris Cloud Manager
> 0161 537 4980 +44 7989717661
> gary.di...@quadris.co.uk
> www.quadris.com
> Innovation House, 12‑13 Bredbury Business Park Bredbury Park Way,
> Bredbury, Stockport, SK6 2SN
>


--
Daan


Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Nicolas Vazquez
Congratulations Vishesh!

Regards,
Nicolas Vazquez


From: Daan Hoogland 
Date: Monday, 26 February 2024 at 11:05
To: users , dev 
Subject: new committer: Vishesh Jindal (vishesh)
users and devs,

The Project Management Committee (PMC) for Apache CloudStack
has invited Vishesh Jindal to become a committer and we are pleased
to announce that they have accepted.

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.

Please join me in congratulating Vishesh.

--
on behalf of the PMC, Daan

 



Re: Cloudstack 4.19 / Ubuntu 22.04 fresh install fails

2024-02-26 Thread Wei ZHOU
Can you check if there are SQLExceptions in the log?
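
Assuming the default log location, something like this should surface them:

grep -i sqlexception /var/log/cloudstack/management/management-server.log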

-Wei

On Mon, Feb 26, 2024 at 2:45 PM Ishan Talathi  wrote:

> Hello,
>
> Setup is as follows -
>
> Ubuntu 22.04 fresh VM
> Cloudstack 4.19
> MySQL 8 ( same node as management )
>
> After deploying, cloudstack-management fails to start with recurring errors
> like below -
>
>
> 2024-02-26 13:28:52,565 WARN  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
> (main:null) (logid:) Failed to start module [vmware-storage] due to: [Error
> creating bean with name
> 'org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0'
> defined in URL
>
> [jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.19.0.0.jar!/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml]:
> Cannot resolve reference to bean 'DefaultConfigResources' while setting
> bean property 'locations'; nested exception is
> org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean
> named 'DefaultConfigResources' available].
> 2024-02-26 13:28:52,566 DEBUG [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
> (main:null) (logid:) module start failure of module [vmware-storage] was
> due to:
> org.springframework.beans.factory.BeanCreationException: Error creating
> bean with name
> 'org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0'
> defined in URL
>
> [jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.19.0.0.jar!/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml]:
> Cannot resolve reference to bean 'DefaultConfigResources' while setting
> bean property 'locations'; nested exception is
> org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean
> named 'DefaultConfigResources' available
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:342)
> at
>
> org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:113)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1707)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1452)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:619)
> at
>
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542)
> at
>
> org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
> at
>
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
> at
>
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
> at
>
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213)
> at
>
> org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:171)
> at
>
> org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:748)
> at
>
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:564)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContext(DefaultModuleDefinitionSet.java:171)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$2.with(DefaultModuleDefinitionSet.java:140)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:271)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:259)
> at
>
> org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContexts(DefaultModuleDefinitionSet.java:128)
> at
>
> 

Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Pearl d'Silva
Congratulations Vishesh.


 



From: Wei ZHOU 
Sent: February 26, 2024 9:14 AM
To: users@cloudstack.apache.org 
Cc: dev 
Subject: Re: new committer: Vishesh Jindal (vishesh)

Congratulations Vishesh!



On Monday, February 26, 2024, Daan Hoogland  wrote:

> users and devs,
>
> The Project Management Committee (PMC) for Apache CloudStack
> has invited Vishesh Jindal to become a committer and we are pleased
> to announce that they have accepted.
>
> Being a committer enables easier contribution to the
> project since there is no need to go via the patch
> submission process. This should enable better productivity.
>
> Please join me in congratulating Vishesh.
>
> --
> on behalf of the PMC, Daan
>


Re: new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Wei ZHOU
Congratulations Vishesh!



On Monday, February 26, 2024, Daan Hoogland  wrote:

> users and devs,
>
> The Project Management Committee (PMC) for Apache CloudStack
> has invited Vishesh Jindal to become a committer and we are pleased
> to announce that they have accepted.
>
> Being a committer enables easier contribution to the
> project since there is no need to go via the patch
> submission process. This should enable better productivity.
>
> Please join me in congratulating Vishesh.
>
> --
> on behalf of the PMC, Daan
>


new committer: Vishesh Jindal (vishesh)

2024-02-26 Thread Daan Hoogland
users and devs,

The Project Management Committee (PMC) for Apache CloudStack
has invited Vishesh Jindal to become a committer and we are pleased
to announce that they have accepted.

Being a committer enables easier contribution to the
project since there is no need to go via the patch
submission process. This should enable better productivity.

Please join me in congratulating Vishesh.

-- 
on behalf of the PMC, Daan


Cloudstack 4.19 / Ubuntu 22.04 fresh install fails

2024-02-26 Thread Ishan Talathi
Hello,

Setup is as follows -

Ubuntu 22.04 fresh VM
Cloudstack 4.19
MySQL 8 ( same node as management )

After deploying, cloudstack-management fails to start with recurring errors
like below -


2024-02-26 13:28:52,565 WARN  [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) Failed to start module [vmware-storage] due to: [Error
creating bean with name
'org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0'
defined in URL
[jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.19.0.0.jar!/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml]:
Cannot resolve reference to bean 'DefaultConfigResources' while setting
bean property 'locations'; nested exception is
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean
named 'DefaultConfigResources' available].
2024-02-26 13:28:52,566 DEBUG [o.a.c.s.m.m.i.DefaultModuleDefinitionSet]
(main:null) (logid:) module start failure of module [vmware-storage] was
due to:
org.springframework.beans.factory.BeanCreationException: Error creating
bean with name
'org.springframework.beans.factory.config.PropertyPlaceholderConfigurer#0'
defined in URL
[jar:file:/usr/share/cloudstack-management/lib/cloudstack-4.19.0.0.jar!/META-INF/cloudstack/bootstrap/spring-bootstrap-context-inheritable.xml]:
Cannot resolve reference to bean 'DefaultConfigResources' while setting
bean property 'locations'; nested exception is
org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean
named 'DefaultConfigResources' available
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveReference(BeanDefinitionValueResolver.java:342)
at
org.springframework.beans.factory.support.BeanDefinitionValueResolver.resolveValueIfNecessary(BeanDefinitionValueResolver.java:113)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1707)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1452)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:619)
at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:542)
at
org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:335)
at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234)
at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:333)
at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:213)
at
org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:171)
at
org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:748)
at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:564)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContext(DefaultModuleDefinitionSet.java:171)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet$2.with(DefaultModuleDefinitionSet.java:140)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:271)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:276)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.withModule(DefaultModuleDefinitionSet.java:259)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.loadContexts(DefaultModuleDefinitionSet.java:128)
at
org.apache.cloudstack.spring.module.model.impl.DefaultModuleDefinitionSet.load(DefaultModuleDefinitionSet.java:78)
at
org.apache.cloudstack.spring.module.factory.ModuleBasedContextFactory.loadModules(ModuleBasedContextFactory.java:37)
at

Re: corrupt RVR causing host agent issues

2024-02-26 Thread Daan Hoogland
Gary, the mail does not display the screenshot for me. Also, this is an old
version (4.15); I think you should upgrade.

What might be the root of your issue is that *you* have seen that the physical
host crashed, but CloudStack could not determine that. To prevent starting
the same VM twice it would withhold taking any action in such situations.

You may call this a bug or a "lack of feature", but the bottom line is that
this is expected behaviour.

I do not think a corrupt VR would crash a host.


On Mon, Feb 26, 2024 at 1:25 PM Gary Dixon 
wrote:

> ACS 4.15.2
>
> KVM
>
> Ubuntu 20.04
>
>
>
> Hi all
>
>
>
> We had a physical host crash on Friday due to hardware failure. This
> appeared to have caused issues with some RVR’s going into an ‘unknown’
> state.
>
>
>
> The strange thing was that on any host where a RVR in an unknown state was
> running – we could not console onto any VM’s on that host – nor could we
> SSH directly to the RVR from the host.
>
> The UI was showing all hosts agent state as ‘UP’
>
>
>
> Only when we restarted the ACS mgmt. service did we notice that the host
> agent where a RVR was running in an ‘unknown’ state then was in a
> ‘connecting’ state for some time – there were no networking issues either –
> host was pingable from the mgmt. server.
>
>
>
> We were then briefly able to console onto one of the RVR’s in an unknown
> state and then discovered that the RVR was indeed corrupt – this is the
> screenshot of the RVR terminal :
>
>
>
> We then marked the RVR in the DB as ‘stopped’ and virsh destroyed it
> directly on the host. We were then able to restart the VPC with cleanup
> which then re-created the corrupt RVR.
>
> It then appeared that once the corrupt RVR had gone – all other RVR’s in
> an unknown state transitioned to ‘backup’ state
>
>
>
> We are wondering if we have encountered a bug where if a corrupt RVR
> crashes the host cloudstack agent if ACS tries to do anything with the RVR
> – like restart it
>
>
>
> BR
>
>
>
> Gary
>
>
>
>
>
>
> Gary Dixon
> Quadris Cloud Manager
> 0161 537 4980 +44 7989717661
> gary.di...@quadris.co.uk
> www.quadris.com
> Innovation House, 12‑13 Bredbury Business Park
> Bredbury Park Way, Bredbury, Stockport, SK6 2SN
>


-- 
Daan


Re: Console Proxy VM has high CPU usage

2024-02-26 Thread Daan Hoogland
hey Leo, you are @kohrar, are you?
(https://github.com/apache/cloudstack/pull/8694)
As discussed, further improvements might be desirable.

On Wed, Feb 21, 2024 at 7:36 PM Leo Leung  wrote:
>
> I did some basic performance analysis on the Java process and it appears the 
> high CPU usage stems from the NIOSocketInputStream class which is part of the 
> new secure KVM VNC feature released with ACS 4.18.
>
> In the meantime, I've sized up the Console Proxy VM compute offering with
> more CPUs (1 vCPU for each potential connection) as each connection gobbles 
> up an entire core's worth of CPU.
>
> -Leo
>
>
> > On 02/20/2024 4:51 PM MST Leo Leung  wrote:
> >
> >
> > Just a quick update:
> >
> > - I see 100% CPU on the console proxy's java process if one or more VNC 
> > session is in use, even if nothing is happening in the session (such as a 
> > blank screen). Is this normal?
> > - The persistent 100% CPU appears to be triggered when I try to VNC to the 
> > console proxy itself. Connecting the console proxy to itself somehow causes 
> > the VNC connection to persist until I run 'systemctl restart cloud' on the 
> > console proxy and immediately disconnect from the console. Since the VNC 
> > connection happens to the underlying KVM process on the hypervisor, I'm not 
> > quite sure why this is even a problem.
> >
> > Is this a potential bug with the cloud/proxy service?
> >
> > -Leo
> >
> >
> > > On 02/20/2024 4:16 PM MST Leo Leung  wrote:
> > >
> > >
> > > Hello everyone,
> > >
> > > I am running CloudStack 4.18 and 4.19 in two separate environments and 
> > > notice that in both environments, the Console Proxy SystemVM is pretty 
> > > much pegging its single CPU. Logging in to the SystemVM, top reports the 
> > > java process that handles the VNC connections is constantly using ~100% 
> > > CPU. This behaviour happens even on a cleanly provisioned console proxy 
> > > (after deleting/recreating it) with only a 4-5 established VNC 
> > > connections (as reported by netstat -ntp).
> > >
> > > Is this normal? Does anyone else experience this behaviour? Should I 
> > > assign a larger console proxy compute offering?
> > >
> > > Thank-you in advance.
> > > -Leo



-- 
Daan


Re: Cloudstack DB using 3 Node Galera Cluster.

2024-02-26 Thread Kiran Chavala
Hi Joan

You can refer this article

https://severalnines.com/blog/how-deploy-high-availability-cloudstackcloudplatform-mariadb-galera-cluster/


I had these in my notes from when I tried setting up percona-xtradb; hope it's
useful to you.


Install 2 ubuntu nodes for percona-xtradb cluster

On ubuntu node 1

$ sudo apt update

$ sudo apt install gnupg2

$ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release 
-sc)_all.deb

$ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

$ sudo apt update

$ sudo apt install percona-server-server-5.7



cat >>/etc/mysql/my.cnf<<EOF

[mysqld]

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=democluster
wsrep_cluster_address=gcomm://
wsrep_node_name=ubuntuvm01
wsrep_node_address=172.42.42.101
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=repuser:reppassword
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

EOF

$ systemctl start mysql

Login to mysql on node 1 and execute the following commands:

mysql -uroot -p -e "create user repuser@localhost identified by 'reppassword'"
mysql -uroot -p -e "grant reload, replication client, process, lock tables on *.* to repuser@localhost"
mysql -uroot -p -e "flush privileges"


On Ubuntu Node 2

$ sudo apt update

$ sudo apt install gnupg2

$ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb

$ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb

$ sudo apt update

$ sudo apt install percona-server-server-5.7


cat >>/etc/mysql/my.cnf<<EOF

[mysqld]

wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=democluster
wsrep_cluster_address=gcomm://172.42.42.101,172.42.42.102
wsrep_node_name=ubuntuvm02
wsrep_node_address=172.42.42.102
wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth=repuser:reppassword
pxc_strict_mode=ENFORCING
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2

EOF

$ systemctl start mysql


Login back to node 1 and check the status of the xtradb cluster

mysql >show status like 'wsrep%';

mysql>use mysql
mysql>GRANT ALL ON *.* to root@'%' IDENTIFIED BY 'password';
mysql>GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH 
GRANT OPTION;
mysql>FLUSH PRIVILEGES;
mysql> SELECT host FROM mysql.user WHERE User = 'root';
mysql>SET GLOBAL pxc_strict_mode=PERMISSIVE



Regards
Kiran

From: Joan g 
Date: Saturday, 24 February 2024 at 12:29 AM
To: users@cloudstack.apache.org 
Subject: Cloudstack DB using 3 Node Galera Cluster.
Hi Community,

I need some suggestions  on using 3 node Mariadb *Galera Cluster or percona
xtradb* for Cloudstack Databases.

In My setup the Databases are behind a LB and write happens only to a
single node

With new Cloudstack 4.18.1 install  initial database migration is always
failing because of schema update/sync issues with other DB nodes.

Logs in Mysql err::
2024-02-23T12:55:15.521278Z 17 [ERROR] [MY-010584] [Repl] Replica SQL:
Error 'Duplicate column name 'display'' on query. Default
 database: 'cloud'. Query: 'ALTER TABLE cloud.guest_os ADD COLUMN display
tinyint(1) DEFAULT '1' COMMENT 'should this guest_os b
e shown to the end user'', Error_code: MY-001060

Due to this Cloudstack initialisation is always failing.

Can someone point me with a suggested method for DB HA

Jon

 



Re: readyForShutdown api call every second

2024-02-26 Thread Daan Hoogland
swen,
Not sure if I answered this already (I think I did somewhere), so for the
innocent bystanders: this will happen for root users that have access
to the shutdown APIs, not for regular users. The repetition is programmed
thus in the UI.

On Mon, Feb 19, 2024 at 11:47 AM  wrote:
>
> Hi all,
>
>
>
> CS 4.19.0 upgrade from 4.18.1
>
>
>
> I observe that my UI is doing the following api call every second:
>
> /client/api/?command=readyForShutdown&response=json
>
>
>
> Is this expected? I found this api docu:
>
> https://cloudstack.apache.org/api/apidocs-4.19/apis/readyForShutdown.html
>
>
>
> But I do not really understand it. I already restarted cloudstack-management
> service on the management server.
>
>
>
> Any idea what triggers that api call?
>
>
>
> Regards,
>
> Swen
>


-- 
Daan


Re: Cloudstack with Managed Databases?

2024-02-26 Thread Kiran Chavala
+1 for using cloud-init to install and configure your DBMS; a sketch follows
the links below.

Or you can use the Packer plugin for CloudStack to create a golden image
template with the DBMS preinstalled:

https://developer.hashicorp.com/packer/integrations/hashicorp/cloudstack/latest/components/builder/cloudstack
https://copyprogramming.com/howto/how-to-setup-mysql-with-cloud-init
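
As a trivial illustration of the cloud-init route, a user-data sketch that
installs and starts MySQL on an Ubuntu template (a minimal sketch, not a full
"managed" offering):

#cloud-config
packages:
  - mysql-server
runcmd:
  - systemctl enable --now mysql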


Regards
Kiran

From: Jayanth Reddy 
Date: Monday, 26 February 2024 at 8:15 AM
To: users@cloudstack.apache.org 
Subject: Re: Cloudstack with Managed Databases?
Hello,
One of the ways I can think of is to make use of cloud-init's functionality
to install and configure your DBMS. However, you may have less visibility into
the DBMS later, which might not exactly fit the "managed" model.

I've seen people who have their own CMP handle all the integration there.

Thanks,
Jayanth

 



From: Hunter Yap <123qwqqw...@gmail.com>
Sent: Monday, February 26, 2024 8:04:39 am
To: users@cloudstack.apache.org 
Subject: Cloudstack with Managed Databases?

Hi Guys,

We are exploring offering Managed Databases as a service on our Cloudstack
Public Cloud.

Has anyone done this before? What method did you use and what was the
experience like?

Regards,
Hunter


corrupt RVR causing host agent issues

2024-02-26 Thread Gary Dixon
ACS 4.15.2
KVM
Ubuntu 20.04

Hi all

We had a physical host crash on Friday due to hardware failure. This appeared
to have caused issues with some RVRs going into an ‘unknown’ state.

The strange thing was that on any host where a RVR in an unknown state was
running – we could not console onto any VMs on that host – nor could we SSH
directly to the RVR from the host.
The UI was showing all hosts' agent state as ‘UP’

Only when we restarted the ACS mgmt. service did we notice that the host agent 
where a RVR was running in an ‘unknown’ state then was in a ‘connecting’ state 
for some time – there were no networking issues either – host was pingable from 
the mgmt. server.

We were then briefly able to console onto one of the RVRs in an unknown state
and then discovered that the RVR was indeed corrupt. This is the screenshot of
the RVR terminal:
[screenshot omitted]

We then marked the RVR in the DB as ‘stopped’ and virsh destroyed it directly 
on the host. We were then able to restart the VPC with cleanup which then 
re-created the corrupt RVR.
It then appeared that once the corrupt RVR had gone, all other RVRs in an
unknown state transitioned to ‘backup’ state.

We are wondering if we have encountered a bug where a corrupt RVR crashes
the host CloudStack agent when ACS tries to do anything with the RVR, like
restart it.

BR

Gary




Gary Dixon
Quadris Cloud Manager
0161 537 4980 +44 7989717661
gary.di...@quadris.co.uk
www.quadris.com
Innovation House, 12-13 Bredbury Business Park
Bredbury Park Way, Bredbury, Stockport, SK6 2SN


Re: CKS Storage Provisioner Info

2024-02-26 Thread Kiran Chavala
Hi Bharath

Note that the CKS provisioner works on KVM-based CloudStack environments

Regards
Kiran

From: Kiran Chavala 
Date: Monday, 26 February 2024 at 5:45 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info
Hi Bharat Bhusan

Please follow these steps


1. Deploy a Kubernetes cluster on CloudStack


NAME                     STATUS   ROLES           AGE     VERSION
ty-control-18de52c04f7   Ready    control-plane   5m11s   v1.28.4
ty-node-18de52c4185      Ready    <none>          4m55s   v1.28.4

kubectl get secrets -A
NAMESPACE     NAME                TYPE     DATA   AGE
kube-system   cloudstack-secret   Opaque   1      10m



2. Check the deployment; it should be in Pending state

~ kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   0/1     1            0           46s


3. Edit the deployment and remove the nodeSelector part

~kubectl edit deployment/cloudstack-csi-controller -n kube-system


  nodeSelector:
kubernetes.io/os: linux
node-role.kubernetes.io/master: ""

4. Check the deployment again; it should be in Running state

~kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   1/1     1            1           2m39s

5. Replace the disk offering in the storage class yaml

Provide the custom disk offering UUID (Service Offerings > Disk Offerings >
custom disk offering):

4c518474-5d7b-4285-a07c-c57e214abb3b


vi 0-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-custom
provisioner: csi.cloudstack.apache.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  csi.cloudstack.apache.org/disk-offering-id: 4c518474-5d7b-4285-a07c-c57e214abb3b


kubectl apply -f 0-storageclass.yaml

k8s git:(master) ✗ kubectl get sc
NAME                PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cloudstack-custom   csi.cloudstack.apache.org   Delete          WaitForFirstConsumer   false                  72s



6. Apply the pvc yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-block
spec:
  storageClassName: cloudstack-custom
  volumeMode: Block
  accessModes:
- ReadWriteOnce
  resources:
requests:
  storage: 1Gi


kubectl apply -f pvc-block.yaml

k8s git:(master) ✗ kubectl get pvc
NAMESTATUSVOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   
 AGE
example-pvc-block   Pending  
cloudstack-custom   31s
7.  Apply the pod block yaml

apiVersion: v1
kind: Pod
metadata:
  name: example-pod-block
spec:
  containers:
- name: example
  image: ubuntu
  volumeDevices:
- devicePath: "/dev/example-block"
  name: example-volume
  stdin: true
  stdinOnce: true
  tty: true
  volumes:
- name: example-volume
  persistentVolumeClaim:
claimName: example-pvc-block


kubectl apply -f pod-block.yaml


➜  k8s git:(master) ✗ k get pods -A
NAMESPACE  NAME READY   
STATUSRESTARTS  AGE
defaultexample-pod-block1/1 
Running   0 107s

Events:
  TypeReason  Age   From Message
  --     ---
  Normal  Scheduled   9sdefault-schedulerSuccessfully 
assigned default/example-pod-block to ty-node-18de52c4185
  Normal  SuccessfulAttachVolume  6sattachdetach-controller  
AttachVolume.Attach succeeded for volume 
"pvc-02999f04-dd0a-407c-8805-125c7c56d51b"
  Normal  SuccessfulMountVolume   4skubelet  
MapVolume.MapPodDevice succeeded for volume 
"pvc-02999f04-dd0a-407c-8805-125c7c56d51b" globalMapPath 
"/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-02999f04-dd0a-407c-8805-125c7c56d51b/dev"
  Normal  SuccessfulMountVolume   4skubelet  
MapVolume.MapPodDevice succeeded for volume 
"pvc-02999f04-dd0a-407c-8805-125c7c56d51b" volumeMapPath 
"/var/lib/kubelet/pods/076bd828-7130-4f72-a0ee-29f93043bbb1/volumeDevices/kubernetes.io~csi"
  Normal  Pulling 3skubelet  Pulling image 
"ubuntu"

Regards
Kiran





From: Jayanth Reddy 
Date: Monday, 26 February 2024 at 4:35 PM
To: users@cloudstack.apache.org 
Subject: Re: CKS Storage Provisioner Info
Hello Bharat,
If that is the case, please update your "api-url" variable for the cloud-config.

FWIW, please also visit "endpoint.url" in global configuration to include the 
HTTPS endpoint.

Thanks,
Jayanth

Sent from Outlook for 

Re: CKS Storage Provisioner Info

2024-02-26 Thread Kiran Chavala
Hi Bharat Bhushan

Please follow these steps


1. Deploy a Kubernetes cluster through CKS and verify the nodes and the 
CloudStack secret:

kubectl get nodes
NAME                     STATUS   ROLES           AGE     VERSION
ty-control-18de52c04f7   Ready    control-plane   5m11s   v1.28.4
ty-node-18de52c4185      Ready    <none>          4m55s   v1.28.4

kubectl get secrets -A
NAMESPACE     NAME                TYPE     DATA   AGE
kube-system   cloudstack-secret   Opaque   1      10m



2. Check the cloudstack-csi-controller deployment; it should initially be 
stuck at 0/1 ready:

~ kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   0/1     1            0           46s


3. Edit the deployment and remove the nodeSelector block shown below. (Newer 
Kubernetes releases label control-plane nodes with 
node-role.kubernetes.io/control-plane rather than 
node-role.kubernetes.io/master, so this selector can never match and the 
controller pod stays Pending.)

~ kubectl edit deployment/cloudstack-csi-controller -n kube-system


  nodeSelector:
kubernetes.io/os: linux
node-role.kubernetes.io/master: ""
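
If editing interactively is awkward, the same change can be applied 
non-interactively (a sketch; this drops the whole selector and assumes the 
default deployment name and namespace):

kubectl patch deployment cloudstack-csi-controller -n kube-system \
  --type json -p '[{"op": "remove", "path": "/spec/template/spec/nodeSelector"}]'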

4. Check the deployment again; it should now be ready and running:

~ kubectl get deployments -A
NAMESPACE     NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   cloudstack-csi-controller   1/1     1            1           2m39s


5. Replace the disk offering ID in the storage class yaml

Provide the custom disk offering UUID (Service Offerings > Disk Offerings > 
Custom disk offering), for example:

4c518474-5d7b-4285-a07c-c57e214abb3b
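
The UUID can also be looked up from the API (a sketch using CloudMonkey; the 
filter parameter just trims the columns):

cmk list diskofferings filter=id,name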


vi 0-storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloudstack-custom
provisioner: csi.cloudstack.apache.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
parameters:
  csi.cloudstack.apache.org/disk-offering-id: 4c518474-5d7b-4285-a07c-c57e214abb3b


kubectl apply -f 0-storageclass.yaml

k8s git:(master) ✗ kubectl get sc
NAME                PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
cloudstack-custom   csi.cloudstack.apache.org   Delete          WaitForFirstConsumer   false                  72s



6. Apply the PVC yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc-block
spec:
  storageClassName: cloudstack-custom
  volumeMode: Block
  accessModes:
- ReadWriteOnce
  resources:
requests:
  storage: 1Gi


kubectl apply -f pvc-block.yaml

k8s git:(master) ✗ kubectl get pvc
NAME                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS        AGE
example-pvc-block   Pending                                      cloudstack-custom   31s

(The PVC stays Pending at this point because the storage class uses 
WaitForFirstConsumer; the volume is only provisioned once a pod claims it.)

7. Apply the pod block yaml:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod-block
spec:
  containers:
- name: example
  image: ubuntu
  volumeDevices:
- devicePath: "/dev/example-block"
  name: example-volume
  stdin: true
  stdinOnce: true
  tty: true
  volumes:
- name: example-volume
  persistentVolumeClaim:
claimName: example-pvc-block


kubectl apply -f pod-block.yaml


➜  k8s git:(master) ✗ k get pods -A
NAMESPACE   NAME                READY   STATUS    RESTARTS   AGE
default     example-pod-block   1/1     Running   0          107s

Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               9s    default-scheduler        Successfully assigned default/example-pod-block to ty-node-18de52c4185
  Normal  SuccessfulAttachVolume  6s    attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-02999f04-dd0a-407c-8805-125c7c56d51b"
  Normal  SuccessfulMountVolume   4s    kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-02999f04-dd0a-407c-8805-125c7c56d51b" globalMapPath "/var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/pvc-02999f04-dd0a-407c-8805-125c7c56d51b/dev"
  Normal  SuccessfulMountVolume   4s    kubelet                  MapVolume.MapPodDevice succeeded for volume "pvc-02999f04-dd0a-407c-8805-125c7c56d51b" volumeMapPath "/var/lib/kubelet/pods/076bd828-7130-4f72-a0ee-29f93043bbb1/volumeDevices/kubernetes.io~csi"
  Normal  Pulling                 3s    kubelet                  Pulling image "ubuntu"

Regards
Kiran



Re: CKS Storage Provisioner Info

2024-02-26 Thread Jayanth Reddy
Hello Bharat,
If that is the case, please update the "api-url" variable in your cloud-config.

FWIW, please also set "endpoint.url" in the global configuration to the HTTPS 
endpoint.
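
For reference, the driver's cloud-config would then look roughly like this (a 
sketch; the INI layout follows the cloudstack-csi-driver README, and every 
value here is a placeholder):

[Global]
api-url    = https://cloudstack.example.com/client/api
api-key    = <your-api-key>
secret-key = <your-secret-key>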

Thanks,
Jayanth

Sent from Outlook for Android



Re: CKS Storage Provisioner Info

2024-02-26 Thread Bharat Bhushan Saini
Hi Jayanth,

To secure communication, I have already enabled HTTPS on the management server 
and turned off HTTP.
Does the API URL only listen over HTTP?

Thanks and Regards,
Bharat Saini



Re: CKS Storage Provisioner Info

2024-02-26 Thread Jayanth Reddy
Are you able to access the CloudStack web UI at the URL
http://10.1.10.2:8080/client ? As long as the k8s nodes have connectivity to
the path /client/api on your management server, this should work fine. Perhaps
there is a host-level firewall on your management server?
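
A quick way to verify reachability from one of the k8s nodes (a sketch; 
substitute your own management IP or hostname):

curl -sI http://10.1.10.2:8080/client/api
nc -vz 10.1.10.2 8080    # plain TCP check if curl is unavailable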

Thanks


Re: CKS Storage Provisioner Info

2024-02-26 Thread Bharat Bhushan Saini
Hi Vivek,

Please check the findings:

ping 10.1.x.2
PING 10.1.x.2 (10.1.x.2): 56 data bytes
64 bytes from 10.1.x.2: icmp_seq=0 ttl=64 time=0.616 ms
64 bytes from 10.1.x.2: icmp_seq=1 ttl=64 time=0.716 ms
^C--- 10.1.x.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.616/0.666/0.716/0.050 ms

ping cloudstack.internal.com
PING cloudstack.internal.com (10.1.x.2): 56 data bytes
64 bytes from 10.1.x.2: icmp_seq=0 ttl=64 time=0.555 ms
64 bytes from 10.1.x.2: icmp_seq=1 ttl=64 time=0.620 ms
64 bytes from 10.1.x.2: icmp_seq=2 ttl=64 time=0.664 ms
^C--- cloudstack.internal.com ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.555/0.613/0.664/0.045 ms

telnet 10.1.x.2 8080
Trying 10.1.x.2...
telnet: Unable to connect to remote host: Connection refused


I am able to ping the management IP and hostname, but I cannot connect on port 
8080; it does not appear to be reachable from the cluster.
NOTE: I use the management IP in the API URL.
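
It may also be worth checking, on the management server itself, what is 
actually listening on 8080 and whether a host firewall is filtering it (a 
sketch; firewalld shown, adjust for your distro):

ss -tlnp | grep 8080
firewall-cmd --list-all    # or: iptables -L -n | grep 8080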

Thanks and Regards,
Bharat Saini



Re: CKS Storage Provisioner Info

2024-02-26 Thread Vivek Kumar
Hello Bharat,

Is the CloudStack URL reachable from your cluster? Can you manually check, 
e.g. ping the host and telnet to that port?





Re: CKS Storage Provisioner Info

2024-02-26 Thread Bharat Bhushan Saini
Hi Wei/Jayanth,

Thanks for sharing the details. I was able to fetch the API and secret keys 
and deployed the driver as suggested by @vivek and the GitHub page.

Now I have encountered one more issue: the cloudstack-csi-node pods go into 
CrashLoopBackOff. I am trying to get some more info; the relevant log line is 
below.

{"level":"error","ts":1708932622.5365772,"caller":"zap/options.go:212","msg":"finished
 unary call with code 
Internal","grpc.start_time":"2024-02-26T07:30:22Z","grpc.request.deadline":"2024-02-26T07:32:22Z","system":"grpc","span.kind":"server","grpc.service":"csi.v1.Node","grpc.method":"NodeGetInfo","error":"rpc
 error: code = Internal desc = Get 
\"http://10.1.10.2:8080/client/api?apiKey=k83H56KFdhFqpv7cXPU11nkwxPt8f2rXnm1WWVIRdeErqZr72Pzp7ySmricPWs7FQQuMmClznDhMz7uqnRD2wA=listVirtualMachines=cf4940eb-52a4-4205-b056-1575926cb488=json=t4jdPVL7jqhGt5pWC0kjx%2Bxzr3o%3D\":
 dial tcp 10.1.10.2:8080: connect: connection 
refused","grpc.code":"Internal","grpc.time_ms":1.138,"stacktrace":"github.com/grpc-ecosystem/go-grpc-middleware/logging/zap.DefaultMessageProducer\n\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/options.go:212\ngithub.com/grpc-ecosystem/go-grpc-middleware/logging/zap.UnaryServerInterceptor.func1\n\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/server_interceptors.go:39\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1\n\t/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1183\ngithub.com/container-storage-interface/spec/lib/go/csi._Node_NodeGetInfo_Handler\n\t/home/runner/go/pkg/mod/github.com/container-storage-interface/spec@v1.9.0/lib/go/csi/csi.pb.go:7351\ngoogle.golang.org/grpc.(*Server).processUnaryRPC\n\t/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1372\ngoogle.golang.org/grpc.(*Server).handleStream\n\t/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1783\ngoogle.golang.org/grpc.(*Server).serveStreams.func2.1\n\t/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1016"}

kubectl get pods -A
NAMESPACE              NAME                                                    READY   STATUS             RESTARTS        AGE
default                example-pod                                             0/1     Pending            0               87m
kube-system            cloud-controller-manager-574bcb86c-vzp4m                1/1     Running            0               155m
kube-system            cloudstack-csi-controller-7f89c8cd47-ftgnf              5/5     Running            0               150m
kube-system            cloudstack-csi-controller-7f89c8cd47-j4s4z              5/5     Running            0               150m
kube-system            cloudstack-csi-controller-7f89c8cd47-ptvss              5/5     Running            0               150m
kube-system            cloudstack-csi-node-56hxg                               2/3     CrashLoopBackOff   34 (99s ago)    150m
kube-system            cloudstack-csi-node-98cf2                               2/3     CrashLoopBackOff   34 (39s ago)    150m
kube-system            coredns-5dd5756b68-5wwxk                                1/1     Running            0               4h17m
kube-system            coredns-5dd5756b68-mbpwt                                1/1     Running            0               4h17m
kube-system            etcd-kspot-app-control-18de3ee6b6f                      1/1     Running            0               4h17m
kube-system            kube-apiserver-kspot-app-control-18de3ee6b6f            1/1     Running            0               4h17m
kube-system            kube-controller-manager-kspot-app-control-18de3ee6b6f   1/1     Running            0               4h17m
kube-system            kube-proxy-56r4l                                        1/1     Running            0               4h17m
kube-system            kube-proxy-mf6cc                                        1/1     Running            0               4h17m
kube-system            kube-scheduler-kspot-app-control-18de3ee6b6f            1/1     Running            0               4h17m
kube-system            weave-net-59t9z                                         2/2     Running            1 (4h17m ago)   4h17m
kube-system            weave-net-7xvpp                                         2/2     Running            0               4h17m
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-g89lq              1/1     Running            0               4h17m
kubernetes-dashboard   kubernetes-dashboard-5b749d9495-fqplb                   1/1     Running            0               4h17m

kubectl get csinode
NAME   DRIVERS   AGE