Re: CloudFest Promo Code for Free Registrations

2024-02-15 Thread Ivet Petrova
Hey David, are you planning to join the event?


 

> On 16 Feb 2024, at 1:38, David Amorín  wrote:
> 
> 18-21 March
> 
> David Amorín
> 
> david.amo...@jotelulu.com  | 
> jotelulu.com
> 
> This message and the attached documents are confidential and are addressed 
> exclusively to the referenced recipient. If it is not and you have received 
> this email in error, please notify me by this means and proceed to delete it. 
> In accordance with current regulations, you are informed that the personal 
> data included in this communication will be processed by JOTELULU (the data 
> controller) for the purpose of managing professional communications, and that 
> it will not be transferred to third parties except under legal obligation or 
> with your consent. The legal basis for the processing is your consent or the 
> performance of the contractual relationship. You can exercise your rights of 
> access, rectification, portability and deletion of data, as well as those of 
> restriction and objection, through d...@jotelulu.com or by written 
> communication.
> 
> Message produced and distributed by JOTELULU. © 2023, JOTELULU. All rights 
> reserved.
> 
> 
> 
> From: Ivet Petrova 
> Sent: Wednesday, January 24, 2024 10:56:27 AM
> To: users@cloudstack.apache.org ; dev 
> 
> Subject: CloudFest Promo Code for Free Registrations
> 
> Caution: This email originated from outside the organization. Do not click 
> links or open attachments unless you recognize the sender and know the 
> content is safe.
> 
> 
> Hi all,
> 
> I am happy to announce that for a second year, CloudStack will be exhibiting 
> at CloudFest, the biggest cloud expo in Europe.
> I would like to share a code for free registration for our community members: 
> c3SY3Zu2
> 
> You can register here: https://registration.cloudfest.com/?code=c3SY3Zu2
> 
> Also, I am searching for volunteers who would like to support the project at 
> the booth.
> We already have a few people from ShapeBlue and Wido will be also at the 
> event. So is there anybody who would like to participate as booth staff?
> 
> Best regards,
> 
> 
> 
> 



Re: Shallow Provision and K8s Docs

2024-02-15 Thread Bharat Bhushan Saini
Hi Jithin,

Let me clarify my context with an example: if I allocate 6 CPU cores per 
instance and run 2 instances, the CloudStack dashboard reports 12 cores in use. 
But a single instance with 6 cores does not actually use all 6 cores at once, 
even though the dashboard still counts the full allocation as used.
What I am after is something like thin provisioning of CPU cores, so that we 
can create more instances by overcommitting cores.
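The effect described above comes down to overcommit arithmetic. A minimal sketch (the factor name mirrors CloudStack's cpu.overprovisioning.factor setting, assumed here; the numbers are illustrative, not from this thread):

```python
# Minimal sketch of CPU overcommit accounting. The factor name mirrors
# CloudStack's cpu.overprovisioning.factor setting (verify for your
# version); the numbers are illustrative.
physical_cores = 12
overprovisioning_factor = 4.0   # e.g. 4x overcommit
schedulable_cores = physical_cores * overprovisioning_factor  # 48.0

cores_per_instance = 6
max_instances = int(schedulable_cores // cores_per_instance)
print(max_instances)  # 8 instances of 6 vCPUs fit on 12 physical cores
```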

Thanks and Regards,
Bharat Saini


From: Jithin Raju 
Date: Friday, 16 February 2024 at 12:36 PM
To: users@cloudstack.apache.org 
Subject: Re: Shallow Provision and K8s Docs
EXTERNAL EMAIL: Please verify the sender email address before taking any 
action, replying, clicking any link or opening any attachment.


Hi Bharat,

I believe you are looking for CPU and Memory overcommit.

-Jithin

From: Bharat Bhushan Saini 
Date: Friday, 16 February 2024 at 11:27 AM
To: users@cloudstack.apache.org 
Subject: Shallow Provision and K8s Docs
Hi All,

Is there any feature available in CloudStack that lets us thin-provision CPU 
and RAM? The reason is that CloudStack reserves all the CPU cores I assign, but 
in reality the instances are not using that many cores. As a result, I can't 
create more instances in CloudStack.

If possible, please also share documentation on enabling and configuring 
Kubernetes in CloudStack.
Thanks in advance.

Thanks and Regards,
Bharat Saini


--- Disclaimer: --
This message and its contents are intended solely for the designated addressee 
and are proprietary to Kloudspot. The information in this email is meant 
exclusively for Kloudspot business use. Any use by individuals other than the 
addressee constitutes misuse and an infringement of Kloudspot's proprietary 
rights. If you are not the intended recipient, please return this email to the 
sender. Kloudspot cannot guarantee the security or error-free transmission of 
e-mail communications. Information could be intercepted, corrupted, lost, 
destroyed, arrive late or incomplete, or contain viruses. Therefore, Kloudspot 
shall not be liable for any issues arising from the transmission of this email.





Re: Shallow Provision and K8s Docs

2024-02-15 Thread Jithin Raju
Hi Bharat,

I believe you are looking for CPU and Memory overcommit.

-Jithin


 



Re: Account limits on object storage

2024-02-15 Thread Levin Ng
https://github.com/apache/cloudstack/issues/8638


Shallow Provision and K8s Docs

2024-02-15 Thread Bharat Bhushan Saini
Hi All,

Is there any feature available in CloudStack that lets us thin-provision CPU 
and RAM? The reason is that CloudStack reserves all the CPU cores I assign, but 
in reality the instances are not using that many cores. As a result, I can't 
create more instances in CloudStack.

If possible, please also share documentation on enabling and configuring 
Kubernetes in CloudStack.
Thanks in advance.

Thanks and Regards,
Bharat Saini



Account limits on object storage

2024-02-15 Thread Leo Leung
Hello,

Congrats on your 4.19 release. I upgraded it on my test instance without any 
major issues.

Can anyone tell me if it's possible to set an account limit on the new object 
storage feature? (i.e. similar to how you can set limits for primary/secondary 
storage in GBs). I don't see this type of limit when looking on the limits page 
of an account and no mention of it in the documentation. Is it a work in 
progress?

Thanks in advance.
-Leo


Re: CloudFest Promo Code for Free Registrations

2024-02-15 Thread David Amorín
18-21 March

David Amorín

david.amo...@jotelulu.com  | 
jotelulu.com









Re: ACL List Order

2024-02-15 Thread Wally B
So, I removed the xxx.xxx.xxx.170/32 from the source so it's just
xxx.xxx.xxx.235/32,
and it works.

Can we not use a comma-separated list? That was my understanding, so if not,
this is my bad.
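For what it's worth, membership in a comma-separated CIDR list just means that any entry contains the source IP. A small illustration (a hypothetical helper, not CloudStack code; whether the ACL form accepts such a list depends on your version):

```python
import ipaddress

# Hypothetical helper (not CloudStack code): an IP is "in" a
# comma-separated CIDR list if any entry's network contains it.
def in_cidr_list(src_ip, cidr_list):
    return any(
        ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr.strip())
        for cidr in cidr_list.split(",")
    )

print(in_cidr_list("203.0.113.235", "203.0.113.235/32,203.0.113.170/32"))  # True
print(in_cidr_list("203.0.113.99",  "203.0.113.235/32,203.0.113.170/32"))  # False
```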

Thanks!
Wally



Always getting Expired Token on Vm Console

2024-02-15 Thread Ricardo Pertuz
Hi all,

We are getting "Failed to Connect to Server" / "Access Token Expired" with this 
configuration:

consoleproxy.url.domain: a public name resolving to a public IP
consoleproxy.sslEnabled: false

SSL offloading is done on the external load balancer, which forwards ports 80 
and 8080 (wss) to the console proxy system VM.

Is anything missing? Thanks!
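One thing worth checking: when TLS terminates on the external load balancer, the proxy in front of the console proxy must also forward the WebSocket upgrade headers, or the wss connection drops. A minimal sketch in nginx syntax (hostnames, IPs, and ports are placeholders, not taken from this setup):

```nginx
server {
    listen 443 ssl;
    server_name console.example.com;   # what consoleproxy.url.domain resolves to

    location / {
        proxy_pass http://CONSOLEPROXY_IP:8080;
        proxy_http_version 1.1;                   # required for WebSocket upgrade
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;                 # keep idle console sessions alive
    }
}
```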


BR,

Ricardo Pertuz




Re: ACL List Order

2024-02-15 Thread Wally B
It seems the Address is correct

17:30:26.747007 eth1  In  IP xxx.xxx.xxx.235.61700 > xxx.xxx.xxx.153.:
Flags [S], seq 3211985522, win 64240, options [mss 1460,nop,wscale
8,nop,nop,sackOK], length 0
17:30:27.749514 eth1  In  IP xxx.xxx.xxx.235.61700 > xxx.xxx.xxx.153.:
Flags [S], seq 3211985522, win 64240, options [mss 1460,nop,wscale
8,nop,nop,sackOK], length 0
17:30:29.758959 eth1  In  IP xxx.xxx.xxx.235.61700 > xxx.xxx.xxx.153.:
Flags [S], seq 3211985522, win 64240, options [mss 1460,nop,wscale
8,nop,nop,sackOK], length 0
17:30:33.766394 eth1  In  IP xxx.xxx.xxx.235.61700 > xxx.xxx.xxx.153.:
Flags [S], seq 3211985522, win 64240, options [mss 1460,nop,wscale
8,nop,nop,sackOK], length 0
17:30:41.779309 eth1  In  IP xxx.xxx.xxx.235.61700 > xxx.xxx.xxx.153.:
Flags [S], seq 3211985522, win 64240, options [mss 1460,nop,wscale
8,nop,nop,sackOK], length 0


cidrlist is

xxx.xxx.xxx.235/32,xxx.xxx.xxx.170/32
I'm coming from .235







Re: ACL List Order

2024-02-15 Thread Wei ZHOU
Yes.

I suspect the source IP of the packets to the VR is not the IP `x.x.x.x/32`
in the rule.
You can use tcpdump in the VR to capture the packets and check the source
of the packets.

-Wei



ACL List Order

2024-02-15 Thread Wally B
I'm trying to add an allow rule for management into my ACL. I have a Deny
All inbound at the bottom of the ACL and the allow management at the top.
Yet I cannot SSH into Virtual Machines in the Subnet. If I change the Deny
All Inbound to Allow or just remove it everything works.

My understanding is that if I have an allow-all from x.x.x.x/32 at rule
number 1 it would supersede any deny rules. Is that not correct?
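The expected first-match semantics can be sketched like this (a toy model, not CloudStack's implementation; addresses are placeholders mirroring the exported ACL below):

```python
import ipaddress

# Toy model of first-match ACL evaluation (not CloudStack's code):
# rules are visited in ascending rule-number order and the first rule
# whose protocol and source CIDR both match decides the action.
def first_match(rules, src_ip, proto):
    for number, action, rule_proto, cidr in sorted(rules):
        if rule_proto in ("all", proto) and \
                ipaddress.ip_address(src_ip) in ipaddress.ip_network(cidr):
            return action
    return "deny"  # implicit default when nothing matches

acl = [
    (1,     "allow", "all",  "203.0.113.235/32"),  # management allow
    (10998, "deny",  "icmp", "0.0.0.0/0"),         # Deny All ICMP Inbound
    (11000, "deny",  "all",  "0.0.0.0/0"),         # Deny All Inbound
]
print(first_match(acl, "203.0.113.235", "tcp"))  # allow (rule 1 wins)
print(first_match(acl, "198.51.100.7", "tcp"))   # deny (falls to 11000)
```

The catch in this thread: the low-numbered allow only supersedes the deny if the packet's source actually falls inside the rule's CIDR.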

Here's my acl exported


6b7f371d-3dc4-469e-b5cf-6b74c1762195 all Ingress Active x.x.x.x/32
2d3758c6-2b98-433b-b507-c038ad03f33b test-acl-1 1 Allow TRUE SYSTEM:
MANAGEMENT INBOUND
5baa2be8-39d1-4c6f-b2ee-e42b69f52242 icmp Ingress Active 0.0.0.0/0
2d3758c6-2b98-433b-b507-c038ad03f33b test-acl-1 10998 Deny TRUE Deny All
ICMP Inbound
90801df9-3dcc-4406-8cf6-2923b70ce46a all Ingress Active 0.0.0.0/0
2d3758c6-2b98-433b-b507-c038ad03f33b test-acl-1 11000 Deny TRUE Deny All
Inbound


Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wally B
Wei,

It did work before, so a routing change at our core must have messed it up. I
assume the routing issue was the real problem; everything else was ancillary.

Thanks for all the help, the clusters are working now!
-Wally


Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wei ZHOU
Hi,

As I understand,
1. After upgrading, you need to patch the system VMs or recreate them; not a
bug, I think.
2. A minor issue which does not impact the provisioning and operation of the
CKS cluster.
3. Looks like a network misconfiguration, but did it work before?


-Wei



Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wally B
As a quick add-on: after running those commands and getting the kubectl
commands working, the error in the management log is

tail -f /var/log/cloudstack/management/management-server.log | grep ERROR

2024-02-15 14:09:41,124 ERROR [c.c.k.c.a.KubernetesClusterActionWorker]
(API-Job-Executor-4:ctx-29ed2b8e job-12348 ctx-333d) (logid:ae448a2e)
Failed to setup Kubernetes cluster : pz-dev-k8s-ncus-1 in usable state
as unable to access control node VMs of the cluster

2024-02-15 14:09:41,129 ERROR [c.c.a.ApiAsyncJobDispatcher]
(API-Job-Executor-4:ctx-29ed2b8e job-12348) (logid:ae448a2e) Unexpected
exception while executing
org.apache.cloudstack.api.command.user.kubernetes.cluster.CreateKubernetesClusterCmd

2024-02-15 14:33:01,117 ERROR [c.c.k.c.a.KubernetesClusterActionWorker]
(API-Job-Executor-17:ctx-0685d548 job-12552 ctx-997de847) (logid:fda8fc82)
Failed to setup Kubernetes cluster : pz-dev-k8s-ncus-1 in usable state
as unable to access control node VMs of the cluster


did a quick test-netconnection from my pc to the control node and got



Test-NetConnection 99.xx.xx.xxx -p 6443



ComputerName     : 99.xx.xx.xxx
RemoteAddress    : 99.xx.xx.xxx
RemotePort       : 6443
InterfaceAlias   : Ethernet
SourceAddress    : xxx.xxx.xxx.xxx
TcpTestSucceeded : True


Then I tested whether I could reach it from my management hosts (on the same
public IP range as the virtual router's public IP), and I got a TTL Expired.
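A Linux-side analogue of the Test-NetConnection probe above, as a small sketch (the host and port in the usage comment are placeholders):

```python
import socket

# Simple TCP port-reachability probe, analogous to PowerShell's
# Test-NetConnection TcpTestSucceeded check.
def tcp_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. probe a Kubernetes API server endpoint (placeholder address):
# tcp_reachable("99.0.2.1", 6443)
```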




To wrap it up, there were 3 issues:


1. Needed to delete and re-provision the secondary storage system virtual
machine after upgrading from 4.18.1 to 4.19.0.
2. Needed to fix additional control nodes not getting kubeadm.conf copied
correctly (Wei's PR).
3. Needed to fix some routing on our end, since we were bouncing between
our L3 TOR -> Firewall <- ISP routers.

Thanks again for all the help, everyone!
Wally

On Thu, Feb 15, 2024 at 7:24 AM Wally B  wrote:

> Thanks Wei ZHOU!
>
> That fixed the kubectl command issue but the cluster still just sits at
>
> Create Kubernetes cluster k8s-cluster-1 in progress
>
> Maybe this is just a UI issue? Unfortunately If I stop the k8s cluster
> after it errors out it just stays in the error state.
>
> 1. Click Stop Kubernetes cluster
> 2. UI Says it successfully stopped.
> 3. Try to Start the Cluster but the power button just says  Stop
> Kubernetes cluster and the UI Status stays in the error state.
>
>
> On Thu, Feb 15, 2024 at 7:02 AM Wei ZHOU  wrote:
>
>> Hi,
>>
>> Please run the following commands as root:
>>
>> mkdir -p /root/.kube
>> cp -i /etc/kubernetes/admin.conf /root/.kube/config
>>
>> After then the kubectl commands should work
>>
>> -Wei
>>
>> On Thu, 15 Feb 2024 at 13:53, Wally B  wrote:
>>
>> > What command do you suggest I run?
>> >
>> > kubeconfig returns command not found
>> >
>> > on your PR I see
>> >
>> > kubeadm join is being called out as well but I wanted to verify what you
>> > wanted me to test first.
>> >
>> > On Thu, Feb 15, 2024 at 2:41 AM Wei ZHOU  wrote:
>> >
>> > > Hi Wally,
>> > >
>> > > I think the cluster is working fine.
>> > > The kubeconfig is missing in extra nodes. I have just created a PR for
>> > it:
>> > > https://github.com/apache/cloudstack/pull/8658
>> > > You can run the command on the control nodes which should fix the
>> > problem.
>> > >
>> > >
>> > > -Wei
>> > >
>> > > On Thu, 15 Feb 2024 at 09:31, Wally B  wrote:
>> > >
>> > > > 3 Nodes
>> > > >
>> > > > Control 1 -- No Errors
>> > > >
>> > > > kubectl get nodes
>> > > > NAMESTATUS   ROLES
>> >  AGE
>> > > >  VERSION
>> > > > pz-dev-k8s-ncus-1-control-18dabdb141b   Readycontrol-plane
>> >  2m6s
>> > > > v1.28.4
>> > > > pz-dev-k8s-ncus-1-control-18dabdb6ad6   Readycontrol-plane
>> >  107s
>> > > > v1.28.4
>> > > > pz-dev-k8s-ncus-1-control-18dabdbc0a8   Readycontrol-plane
>> >  108s
>> > > > v1.28.4
>> > > > pz-dev-k8s-ncus-1-node-18dabdc1644  Ready
>> > 115s
>> > > > v1.28.4
>> > > > pz-dev-k8s-ncus-1-node-18dabdc6c16  Ready
>> > 115s
>> > > > v1.28.4
>> > > >
>> > > >
>> > > > kubectl get pods --all-namespaces
>> > > > NAMESPACE  NAME
>> > > >READY   STATUSRESTARTSAGE
>> > > > kube-systemcoredns-5dd5756b68-g84vk
>> > > >1/1 Running   0   2m46s
>> > > > kube-systemcoredns-5dd5756b68-kf92x
>> > > >1/1 Running   0   2m46s
>> > > > kube-system
>> etcd-pz-dev-k8s-ncus-1-control-18dabdb141b
>> > > >1/1 Running   0   2m50s
>> > > > kube-system
>> etcd-pz-dev-k8s-ncus-1-control-18dabdb6ad6
>> > > >1/1 Running   0   2m16s
>> > > > kube-system
>> etcd-pz-dev-k8s-ncus-1-control-18dabdbc0a8
>> > > >1/1 Running   0   2m37s
>> > > > kube-system
>> > > >  

Secondary storage cannot be deleted after migration completed

2024-02-15 Thread Mark Winnemueller

Hello CloudStack users,

Using CloudStack v4.18.1.0, I cannot delete my secondary storage store1 
due to "Cannot delete image store with active templates backup!". I have 
tried migrating the data to store3, and CloudStack says that it succeeded, 
but the data is still on the existing store1 and I'm not seeing all of the 
data on store3.


Background:

I think this is what confused things. I have gluster NFS across three 
machines with ganesha and was using that as secondary storage (picking 
one of the IP addresses of the gluster cluster). Because this is fragile, 
I added keepalived to the gluster machines and added that as secondary 
storage. To be clear, they reference the same physical media. That's 
when I found out that CloudStack would not delete store1 (because there 
is still data there).


My first attempt to fix this was to create a third secondary storage 
on our TrueNAS box. Migrating data to that storage yields "Successfully 
completed migrating Image store data", yet "Cannot delete image store 
with active templates backup!" still appears when trying to delete the 
original secondary storage, store1.


It seems that there are two ways to solve this: one is to manually move 
the data to store3 and then attempt to remove store1. The second is to 
change the removed value in the MySQL image_store table. The 
cloud.image_store.parent value is NULL for the two image stores, so I'm 
a little leery of that solution.
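Before editing the removed column by hand, it may be safer to first see which template copies CloudStack still counts against store1, since that is what appears to trigger "Cannot delete image store with active templates backup!". Below is a minimal, hedged sketch: the table and column names (template_store_ref, vm_template, image_store) are assumptions based on the cloud schema and should be verified against your database before running anything.

```shell
# Hedged diagnostic sketch -- the schema names below are assumptions
# (template_store_ref appears to track per-store template copies);
# verify them on your own cloud database first.
QUERY="
SELECT ref.template_id, t.name, ref.state, ref.destroyed
FROM cloud.template_store_ref ref
JOIN cloud.vm_template t ON t.id = ref.template_id
WHERE ref.store_id = (SELECT id FROM cloud.image_store WHERE name = 'store1')
  AND ref.destroyed = 0;
"
# Print the query; on the management server you would instead pipe it
# into the mysql client, e.g.:  printf '%s' "$QUERY" | mysql -u cloud -p
printf '%s\n' "$QUERY"
```

Any rows returned would be the template copies store1 still "owns"; if the list is empty yet deletion still fails, that would point at stale bookkeeping rather than real data.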


Any help you can provide on the proper way to solve this will be 
appreciated.


Thanks,

Mark

--
Mark Winnemueller
reThought DevOps Engineer
719.480.4609
mark.winnemuel...@rethoughtinsurance.com






Re: Web UI on Safari 4.19.0

2024-02-15 Thread Niclas Lindblom
Thanks,

Yes, I did clear cookies and data, but I'm still unable to load it. However, 
if it works for others like yourself, I suppose it must be something local 
to my browser.

Thanks for the response though

Niclas

> On 14 Feb 2024, at 19:17, Jimmy Huybrechts  wrote:
>
> Did you clear your cookies and data from the website running your portal? 
> That was my issue at first, after cleaning that it was solved.
>
> --
> Jimmy
>
> Op 14-02-2024 17:50 heeft Niclas Lindblom 
>  geschreven:
> Hi all,
>
> I upgraded to 4.19 this weekend and noticed that I can no longer load the Web 
> UI using Safari on my Mac; I only get the CloudStack spinning wheel and the 
> login page never loads. However, using Chrome it works fine. Has anyone else 
> seen this, or is it something with my laptop?
>
> Thanks
>
> Niclas





Re: Error while migrating the instances

2024-02-15 Thread Wei ZHOU
Hi,

Do the hosts have the same number of CPU cores and threads?

-Wei
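Wei's question is the likely lead: the '0-87' in the error suggests the VM carries a cpuset spanning 88 CPUs from its source host, which the destination cannot satisfy. A minimal sketch for comparing CPU counts follows; run locally it reports this machine's range, and the commented loop (with placeholder hostnames) shows how one might compare the cluster hosts.

```shell
# Hedged sketch: libvirt rejects a cpuset such as 0-87 when the destination
# host exposes fewer online CPUs than the source host the VM came from.
CPUS=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)  # online CPUs here
LAST=$((CPUS - 1))
echo "online cpu range on this host: 0-$LAST"   # cpuset must fit inside this

# On the real cluster (hostnames below are placeholders):
#   for h in kvm01 kvm02 kvm03 kvm04 kvm05; do
#     printf '%s: ' "$h"; ssh "$h" nproc
#   done
```

If two hosts report 88 CPUs and the other three report fewer, that would match migration working only between the two matching hosts.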

On Thu, 15 Feb 2024 at 15:24, Vivek Kumar 
wrote:

> Hello,
>
> I have just deployed a new ACS 4.18.1 with KVM ( Ubuntu 22 ). I have 5
> hosts in a cluster but only on 2 hosts migration is working, on another 3
> it’s not working. Logs says below -
>
> 2024-02-15 12:34:09,831 ERROR [c.c.v.VmWorkJobHandlerProxy]
> (Work-Job-Executor-11:ctx-28d97c02 job-53/job-54 ctx-779439d6)
> (logid:0d5e6412) Invocation exception, caused by:
> com.cloud.utils.exception.CloudRuntimeException: Exception during migrate:
> org.libvirt.LibvirtException: Invalid value '0-87' for 'cpuset.cpus':
> Invalid argument
> 2024-02-15 12:34:09,831 INFO  [c.c.v.VmWorkJobHandlerProxy]
> (Work-Job-Executor-11:ctx-28d97c02 job-53/job-54 ctx-779439d6)
> (logid:0d5e6412) Rethrow exception
> com.cloud.utils.exception.CloudRuntimeException: Exception during migrate:
> org.libvirt.LibvirtException: Invalid value '0-87' for 'cpuset.cpus':
> Invalid argument
> 2024-02-15 12:34:09,831 DEBUG [c.c.v.VmWorkJobDispatcher]
> (Work-Job-Executor-11:ctx-28d97c02 job-53/job-54) (logid:0d5e6412) Done
> with run of VM work job: com.cloud.vm.VmWorkMigrate for VM 5, job origin: 53
> 2024-02-15 12:34:09,831 ERROR [c.c.v.VmWorkJobDispatcher]
> (Work-Job-Executor-11:ctx-28d97c02 job-53/job-54) (logid:0d5e6412) Unable
> to complete AsyncJobVO: {id:54, userId: 2, accountId: 2, instanceType:
> null, instanceId: null, cmd: com.cloud.vm.VmWorkMigrate, cmdInfo:
> rO0ABXNyABpjb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZRdxQXtPtzYqAgAGSgAJc3JjSG9zdElkTAAJY2x1c3RlcklkdAAQTGphdmEvbGFuZy9Mb25nO0wABmhvc3RJZHEAfgABTAAFcG9kSWRxAH4AAUwAB3N0b3JhZ2V0AA9MamF2YS91dGlsL01hcDtMAAZ6b25lSWRxAH4AAXhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAACAAIABXQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAXNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAAXNxAH4ABwAGcQB-AAlwcQB-AAk,
> cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0,
> result: null, initMsid: 12079656927289, completeMsid: null, lastUpdated:
> null, lastPolled: null, created: Thu Feb 15 12:34:07 UTC 2024, removed:
> null}, job origin:53
> com.cloud.utils.exception.CloudRuntimeException: Exception during migrate:
> org.libvirt.LibvirtException: Invalid value '0-87' for 'cpuset.cpus':
> Invalid argument
> at
> com.cloud.vm.VirtualMachineManagerImpl.migrate(VirtualMachineManagerImpl.java:2795)
> at
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrate(VirtualMachineManagerImpl.java:2659)
> at
> com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrate(VirtualMachineManagerImpl.java:5439)
> at
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> at
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at
> com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
> at
> com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5536)
> at
> com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
> at
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:620)
> at
> org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
> at
> org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
> at
> org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
> at
> org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:568)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> 2024-02-15 12:34:09,837 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
> (Work-Job-Executor-11:ctx-28d97c02 job-53/job-54) (logid:0d5e6412) Complete
> async job-54, jobStatus: FAILED, resultCode: 0, result: rO0ABXNy
>
>
> All the 

Error while migrating the instances

2024-02-15 Thread Vivek Kumar
Hello,

I have just deployed a new ACS 4.18.1 with KVM (Ubuntu 22). I have 5 hosts in 
a cluster, but migration works on only 2 of them; on the other 3 it's not 
working. The logs say:

2024-02-15 12:34:09,831 ERROR [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-11:ctx-28d97c02 job-53/job-54 ctx-779439d6) (logid:0d5e6412) 
Invocation exception, caused by: 
com.cloud.utils.exception.CloudRuntimeException: Exception during migrate: 
org.libvirt.LibvirtException: Invalid value '0-87' for 'cpuset.cpus': Invalid 
argument
2024-02-15 12:34:09,831 INFO  [c.c.v.VmWorkJobHandlerProxy] 
(Work-Job-Executor-11:ctx-28d97c02 job-53/job-54 ctx-779439d6) (logid:0d5e6412) 
Rethrow exception com.cloud.utils.exception.CloudRuntimeException: Exception 
during migrate: org.libvirt.LibvirtException: Invalid value '0-87' for 
'cpuset.cpus': Invalid argument
2024-02-15 12:34:09,831 DEBUG [c.c.v.VmWorkJobDispatcher] 
(Work-Job-Executor-11:ctx-28d97c02 job-53/job-54) (logid:0d5e6412) Done with 
run of VM work job: com.cloud.vm.VmWorkMigrate for VM 5, job origin: 53
2024-02-15 12:34:09,831 ERROR [c.c.v.VmWorkJobDispatcher] 
(Work-Job-Executor-11:ctx-28d97c02 job-53/job-54) (logid:0d5e6412) Unable to 
complete AsyncJobVO: {id:54, userId: 2, accountId: 2, instanceType: null, 
instanceId: null, cmd: com.cloud.vm.VmWorkMigrate, cmdInfo: 
rO0ABXNyABpjb20uY2xvdWQudm0uVm1Xb3JrTWlncmF0ZRdxQXtPtzYqAgAGSgAJc3JjSG9zdElkTAAJY2x1c3RlcklkdAAQTGphdmEvbGFuZy9Mb25nO0wABmhvc3RJZHEAfgABTAAFcG9kSWRxAH4AAUwAB3N0b3JhZ2V0AA9MamF2YS91dGlsL01hcDtMAAZ6b25lSWRxAH4AAXhyABNjb20uY2xvdWQudm0uVm1Xb3Jrn5m2VvAlZ2sCAARKAAlhY2NvdW50SWRKAAZ1c2VySWRKAAR2bUlkTAALaGFuZGxlck5hbWV0ABJMamF2YS9sYW5nL1N0cmluZzt4cAACAAIABXQAGVZpcnR1YWxNYWNoaW5lTWFuYWdlckltcGwAAXNyAA5qYXZhLmxhbmcuTG9uZzuL5JDMjyPfAgABSgAFdmFsdWV4cgAQamF2YS5sYW5nLk51bWJlcoaslR0LlOCLAgAAeHAAAXNxAH4ABwAGcQB-AAlwcQB-AAk,
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 12079656927289, completeMsid: null, lastUpdated: null, 
lastPolled: null, created: Thu Feb 15 12:34:07 UTC 2024, removed: null}, job 
origin:53
com.cloud.utils.exception.CloudRuntimeException: Exception during migrate: 
org.libvirt.LibvirtException: Invalid value '0-87' for 'cpuset.cpus': Invalid 
argument
at 
com.cloud.vm.VirtualMachineManagerImpl.migrate(VirtualMachineManagerImpl.java:2795)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrate(VirtualMachineManagerImpl.java:2659)
at 
com.cloud.vm.VirtualMachineManagerImpl.orchestrateMigrate(VirtualMachineManagerImpl.java:5439)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.cloud.vm.VmWorkJobHandlerProxy.handleVmWorkJob(VmWorkJobHandlerProxy.java:107)
at 
com.cloud.vm.VirtualMachineManagerImpl.handleVmWorkJob(VirtualMachineManagerImpl.java:5536)
at com.cloud.vm.VmWorkJobDispatcher.runJob(VmWorkJobDispatcher.java:102)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.runInContext(AsyncJobManagerImpl.java:620)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:48)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:55)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:102)
at 
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:52)
at 
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:45)
at 
org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$5.run(AsyncJobManagerImpl.java:568)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
2024-02-15 12:34:09,837 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(Work-Job-Executor-11:ctx-28d97c02 job-53/job-54) (logid:0d5e6412) Complete 
async job-54, jobStatus: FAILED, resultCode: 0, result: rO0ABXNy


All the hypervisor nodes are similar, and the agent was installed and set up 
using the automation scripts. Any suggestions on what to look at?







Vivek Kumar
Sr. Manager - Cloud & DevOps
TechOps | Indiqus Technologies

vivek.ku...@indiqus.com 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wally B
Thanks Wei ZHOU!

That fixed the kubectl command issue but the cluster still just sits at

Create Kubernetes cluster k8s-cluster-1 in progress

Maybe this is just a UI issue? Unfortunately, if I stop the k8s cluster
after it errors out, it just stays in the error state:

1. Click Stop Kubernetes cluster.
2. The UI says it stopped successfully.
3. Try to start the cluster, but the power button still says Stop Kubernetes
cluster and the UI status stays in the error state.


On Thu, Feb 15, 2024 at 7:02 AM Wei ZHOU  wrote:

> Hi,
>
> Please run the following commands as root:
>
> mkdir -p /root/.kube
> cp -i /etc/kubernetes/admin.conf /root/.kube/config
>
> After then the kubectl commands should work
>
> -Wei
>
> On Thu, 15 Feb 2024 at 13:53, Wally B  wrote:
>
> > What command do you suggest I run?
> >
> > kubeconfig returns command not found
> >
> > on your PR I see
> >
> > kubeadm join is being called out as well but I wanted to verify what you
> > wanted me to test first.
> >
> > On Thu, Feb 15, 2024 at 2:41 AM Wei ZHOU  wrote:
> >
> > > Hi Wally,
> > >
> > > I think the cluster is working fine.
> > > The kubeconfig is missing in extra nodes. I have just created a PR for
> > it:
> > > https://github.com/apache/cloudstack/pull/8658
> > > You can run the command on the control nodes which should fix the
> > problem.
> > >
> > >
> > > -Wei
> > >
> > > On Thu, 15 Feb 2024 at 09:31, Wally B  wrote:
> > >
> > > > 3 Nodes
> > > >
> > > > Control 1 -- No Errors
> > > >
> > > > kubectl get nodes
> > > > NAMESTATUS   ROLES
> >  AGE
> > > >  VERSION
> > > > pz-dev-k8s-ncus-1-control-18dabdb141b   Readycontrol-plane
> >  2m6s
> > > > v1.28.4
> > > > pz-dev-k8s-ncus-1-control-18dabdb6ad6   Readycontrol-plane
> >  107s
> > > > v1.28.4
> > > > pz-dev-k8s-ncus-1-control-18dabdbc0a8   Readycontrol-plane
> >  108s
> > > > v1.28.4
> > > > pz-dev-k8s-ncus-1-node-18dabdc1644  Ready
> > 115s
> > > > v1.28.4
> > > > pz-dev-k8s-ncus-1-node-18dabdc6c16  Ready
> > 115s
> > > > v1.28.4
> > > >
> > > >
> > > > kubectl get pods --all-namespaces
> > > > NAMESPACE  NAME
> > > >READY   STATUSRESTARTSAGE
> > > > kube-systemcoredns-5dd5756b68-g84vk
> > > >1/1 Running   0   2m46s
> > > > kube-systemcoredns-5dd5756b68-kf92x
> > > >1/1 Running   0   2m46s
> > > > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdb141b
> > > >1/1 Running   0   2m50s
> > > > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> > > >1/1 Running   0   2m16s
> > > > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> > > >1/1 Running   0   2m37s
> > > > kube-system
> > > >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb141b
> > 1/1
> > > >   Running   0   2m52s
> > > > kube-system
> > > >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> > 1/1
> > > >   Running   1 (2m16s ago)   2m15s
> > > > kube-system
> > > >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> > 1/1
> > > >   Running   0   2m37s
> > > > kube-system
> > > >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb141b
> >  1/1
> > > >   Running   1 (2m25s ago)   2m51s
> > > > kube-system
> > > >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> >  1/1
> > > >   Running   0   2m18s
> > > > kube-system
> > > >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> >  1/1
> > > >   Running   0   2m37s
> > > > kube-systemkube-proxy-445qx
> > > >1/1 Running   0   2m37s
> > > > kube-systemkube-proxy-8swdg
> > > >1/1 Running   0   2m2s
> > > > kube-systemkube-proxy-bl9rx
> > > >1/1 Running   0   2m47s
> > > > kube-systemkube-proxy-pv8gj
> > > >1/1 Running   0   2m43s
> > > > kube-systemkube-proxy-v7cw2
> > > >1/1 Running   0   2m43s
> > > > kube-system
> > > >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb141b
> > 1/1
> > > >   Running   1 (2m22s ago)   2m50s
> > > > kube-system
> > > >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> > 1/1
> > > >   Running   0   2m15s
> > > > kube-system
> > > >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> > 1/1
> > > >   Running   0   2m37s
> > > > kube-systemweave-net-8dvl5
> > > > 2/2 Running   0   2m37s
> > > > kube-systemweave-net-c54bz
> > > > 2/2 Running   0   2m43s
> > > > kube-systemweave-net-lv8l4
> > > > 2/2 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wei ZHOU
Hi,

Please run the following commands as root:

mkdir -p /root/.kube
cp -i /etc/kubernetes/admin.conf /root/.kube/config

After that, the kubectl commands should work.

-Wei
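For anyone who wants to rehearse this before running it as root on a control node, here is a sketch of the same copy exercised against throwaway temp directories; the real paths on a node are /etc/kubernetes/admin.conf and /root/.kube/config, shown as comments.

```shell
# Sketch of the fix above, dry-runnable anywhere; only the commented
# paths change on a real control node.
SRC_DIR=$(mktemp -d)     # stands in for /etc/kubernetes
HOME_DIR=$(mktemp -d)    # stands in for /root
printf 'apiVersion: v1\nkind: Config\n' > "$SRC_DIR/admin.conf"

mkdir -p "$HOME_DIR/.kube"                         # mkdir -p /root/.kube
cp "$SRC_DIR/admin.conf" "$HOME_DIR/.kube/config"  # cp -i /etc/kubernetes/admin.conf /root/.kube/config

# kubectl reads $HOME/.kube/config by default, which is why the
# "connection refused" errors against localhost:8080 stop after the copy.
head -n 1 "$HOME_DIR/.kube/config"   # -> apiVersion: v1
```

This also explains the symptom on the broken nodes: without a kubeconfig, kubectl falls back to the anonymous http://localhost:8080 endpoint, hence the "dial tcp 127.0.0.1:8080: connect: connection refused" messages.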

On Thu, 15 Feb 2024 at 13:53, Wally B  wrote:

> What command do you suggest I run?
>
> kubeconfig returns command not found
>
> on your PR I see
>
> kubeadm join is being called out as well but I wanted to verify what you
> wanted me to test first.
>
> On Thu, Feb 15, 2024 at 2:41 AM Wei ZHOU  wrote:
>
> > Hi Wally,
> >
> > I think the cluster is working fine.
> > The kubeconfig is missing in extra nodes. I have just created a PR for
> it:
> > https://github.com/apache/cloudstack/pull/8658
> > You can run the command on the control nodes which should fix the
> problem.
> >
> >
> > -Wei
> >
> > On Thu, 15 Feb 2024 at 09:31, Wally B  wrote:
> >
> > > 3 Nodes
> > >
> > > Control 1 -- No Errors
> > >
> > > kubectl get nodes
> > > NAMESTATUS   ROLES
>  AGE
> > >  VERSION
> > > pz-dev-k8s-ncus-1-control-18dabdb141b   Readycontrol-plane
>  2m6s
> > > v1.28.4
> > > pz-dev-k8s-ncus-1-control-18dabdb6ad6   Readycontrol-plane
>  107s
> > > v1.28.4
> > > pz-dev-k8s-ncus-1-control-18dabdbc0a8   Readycontrol-plane
>  108s
> > > v1.28.4
> > > pz-dev-k8s-ncus-1-node-18dabdc1644  Ready
> 115s
> > > v1.28.4
> > > pz-dev-k8s-ncus-1-node-18dabdc6c16  Ready
> 115s
> > > v1.28.4
> > >
> > >
> > > kubectl get pods --all-namespaces
> > > NAMESPACE  NAME
> > >READY   STATUSRESTARTSAGE
> > > kube-systemcoredns-5dd5756b68-g84vk
> > >1/1 Running   0   2m46s
> > > kube-systemcoredns-5dd5756b68-kf92x
> > >1/1 Running   0   2m46s
> > > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdb141b
> > >1/1 Running   0   2m50s
> > > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> > >1/1 Running   0   2m16s
> > > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> > >1/1 Running   0   2m37s
> > > kube-system
> > >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb141b
> 1/1
> > >   Running   0   2m52s
> > > kube-system
> > >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> 1/1
> > >   Running   1 (2m16s ago)   2m15s
> > > kube-system
> > >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> 1/1
> > >   Running   0   2m37s
> > > kube-system
> > >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb141b
>  1/1
> > >   Running   1 (2m25s ago)   2m51s
> > > kube-system
> > >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb6ad6
>  1/1
> > >   Running   0   2m18s
> > > kube-system
> > >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdbc0a8
>  1/1
> > >   Running   0   2m37s
> > > kube-systemkube-proxy-445qx
> > >1/1 Running   0   2m37s
> > > kube-systemkube-proxy-8swdg
> > >1/1 Running   0   2m2s
> > > kube-systemkube-proxy-bl9rx
> > >1/1 Running   0   2m47s
> > > kube-systemkube-proxy-pv8gj
> > >1/1 Running   0   2m43s
> > > kube-systemkube-proxy-v7cw2
> > >1/1 Running   0   2m43s
> > > kube-system
> > >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb141b
> 1/1
> > >   Running   1 (2m22s ago)   2m50s
> > > kube-system
> > >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> 1/1
> > >   Running   0   2m15s
> > > kube-system
> > >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> 1/1
> > >   Running   0   2m37s
> > > kube-systemweave-net-8dvl5
> > > 2/2 Running   0   2m37s
> > > kube-systemweave-net-c54bz
> > > 2/2 Running   0   2m43s
> > > kube-systemweave-net-lv8l4
> > > 2/2 Running   1 (2m42s ago)   2m47s
> > > kube-systemweave-net-vg6td
> > > 2/2 Running   0   2m2s
> > > kube-systemweave-net-vq9s4
> > > 2/2 Running   0   2m43s
> > > kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-4k886
> > >1/1 Running   0   2m46s
> > > kubernetes-dashboard   kubernetes-dashboard-5b749d9495-jpbxl
> > > 1/1 Running   1 (2m22s ago)   2m46s
> > >
> > >
> > >
> > >
> > > Control 2: Errors at the CLI
> > > Failed to start Execute cloud user/final scripts.
> > >
> > > kubectl get nodes
> > > E0215 08:27:07.7978252772 memcache.go:265] couldn't get current
> > server
> > > API group list: Get 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wally B
What command do you suggest I run?

kubeconfig returns "command not found".

On your PR, I see that

kubeadm join is being called out as well, but I wanted to verify what you
wanted me to test first.

On Thu, Feb 15, 2024 at 2:41 AM Wei ZHOU  wrote:

> Hi Wally,
>
> I think the cluster is working fine.
> The kubeconfig is missing in extra nodes. I have just created a PR for it:
> https://github.com/apache/cloudstack/pull/8658
> You can run the command on the control nodes which should fix the problem.
>
>
> -Wei
>
> On Thu, 15 Feb 2024 at 09:31, Wally B  wrote:
>
> > 3 Nodes
> >
> > Control 1 -- No Errors
> >
> > kubectl get nodes
> > NAMESTATUS   ROLES   AGE
> >  VERSION
> > pz-dev-k8s-ncus-1-control-18dabdb141b   Readycontrol-plane   2m6s
> > v1.28.4
> > pz-dev-k8s-ncus-1-control-18dabdb6ad6   Readycontrol-plane   107s
> > v1.28.4
> > pz-dev-k8s-ncus-1-control-18dabdbc0a8   Readycontrol-plane   108s
> > v1.28.4
> > pz-dev-k8s-ncus-1-node-18dabdc1644  Ready  115s
> > v1.28.4
> > pz-dev-k8s-ncus-1-node-18dabdc6c16  Ready  115s
> > v1.28.4
> >
> >
> > kubectl get pods --all-namespaces
> > NAMESPACE  NAME
> >READY   STATUSRESTARTSAGE
> > kube-systemcoredns-5dd5756b68-g84vk
> >1/1 Running   0   2m46s
> > kube-systemcoredns-5dd5756b68-kf92x
> >1/1 Running   0   2m46s
> > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdb141b
> >1/1 Running   0   2m50s
> > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdb6ad6
> >1/1 Running   0   2m16s
> > kube-systemetcd-pz-dev-k8s-ncus-1-control-18dabdbc0a8
> >1/1 Running   0   2m37s
> > kube-system
> >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb141b1/1
> >   Running   0   2m52s
> > kube-system
> >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb6ad61/1
> >   Running   1 (2m16s ago)   2m15s
> > kube-system
> >  kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdbc0a81/1
> >   Running   0   2m37s
> > kube-system
> >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb141b   1/1
> >   Running   1 (2m25s ago)   2m51s
> > kube-system
> >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb6ad6   1/1
> >   Running   0   2m18s
> > kube-system
> >  kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdbc0a8   1/1
> >   Running   0   2m37s
> > kube-systemkube-proxy-445qx
> >1/1 Running   0   2m37s
> > kube-systemkube-proxy-8swdg
> >1/1 Running   0   2m2s
> > kube-systemkube-proxy-bl9rx
> >1/1 Running   0   2m47s
> > kube-systemkube-proxy-pv8gj
> >1/1 Running   0   2m43s
> > kube-systemkube-proxy-v7cw2
> >1/1 Running   0   2m43s
> > kube-system
> >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb141b1/1
> >   Running   1 (2m22s ago)   2m50s
> > kube-system
> >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb6ad61/1
> >   Running   0   2m15s
> > kube-system
> >  kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdbc0a81/1
> >   Running   0   2m37s
> > kube-systemweave-net-8dvl5
> > 2/2 Running   0   2m37s
> > kube-systemweave-net-c54bz
> > 2/2 Running   0   2m43s
> > kube-systemweave-net-lv8l4
> > 2/2 Running   1 (2m42s ago)   2m47s
> > kube-systemweave-net-vg6td
> > 2/2 Running   0   2m2s
> > kube-systemweave-net-vq9s4
> > 2/2 Running   0   2m43s
> > kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-4k886
> >1/1 Running   0   2m46s
> > kubernetes-dashboard   kubernetes-dashboard-5b749d9495-jpbxl
> > 1/1 Running   1 (2m22s ago)   2m46s
> >
> >
> >
> >
> > Control 2: Errors at the CLI
> > Failed to start Execute cloud user/final scripts.
> >
> > kubectl get nodes
> > E0215 08:27:07.7978252772 memcache.go:265] couldn't get current
> server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection refused
> > E0215 08:27:07.7987592772 memcache.go:265] couldn't get current
> server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection refused
> > E0215 08:27:07.8010392772 memcache.go:265] couldn't get current
> server
> > API 

Re: [PROPOSAL] version naming : drop the 4.

2024-02-15 Thread Rohit Yadav
(+ users)

All,

Generally speaking, any versioning/styling change can be perceived as a big or 
concerning change by users (those existing or new ones trying/adopting). So, we 
must get our message across properly and correctly.

I'm not for or against a cosmetic change in versioning, but I'm really keen to 
discuss whether we can use this opportunity to streamline our LTS releases, 
improve how we upgrade CloudStack (i.e. relook at our DB/upgrade approach), 
make releases more linear and faster (avoiding forking branches, for example), 
and try to change some defaults and drop some old API/arch things (such as 
making the default API response type JSON), while largely staying backward 
compatible. Some of these suggestions may be too large an undertaking and may 
not be worth it.


Overall, I've no objections if the consensus is to drop the "4." version 
prefix. I also want to hear from our users if they've any feedback for us.


Regards.

 



From: Guto Veronezi 
Sent: Tuesday, February 13, 2024 18:34
To: d...@cloudstack.apache.org 
Subject: Re: [PROPOSAL] version naming : drop the 4.

Daan,

As we still plan to introduce disruptive changes (in a cautious and
structured way) in the major versions, all my concerns are met; I do not
have further technical reasons to keep the "4.".

Best regards,
Daniel Salvador (gutoveronezi)

On 2/12/24 11:55, Daan Hoogland wrote:
> bump,
> @Daniel Salvador is there any technical reason to keep the 4? any
> reason why there must be a 5 instead of a 21, 22 or 23? We are
> maintaining 4 number semantic versioning for no reason, as I see it.
>
> On Tue, Jan 30, 2024 at 12:02 PM Daan Hoogland  
> wrote:
>> Daniel, "technical" reasons for dropping the 4 are all in the field of
>> social engineering. In practice (as I think Wei also described) we are
>> already treating the "minor" version number as the major version. Since
>> 4.0 or 4.1 (don't remember) there has been renewed talk of a 5, but
>> never enough reason and/or commitment to make it real. We could argue
>> about it a lot.
>>
>> so
>> ```
>> The main point is: we have to understand the technical reasons for
>> the proposal and what we expect from it before deciding anything.
>> ```
>> The most important point is that we expect that people understand that
>> we treat the number that now seems to be "minor" as major release
>> numbers.
>>
>>
>> On Fri, Jan 26, 2024 at 7:42 PM Wei ZHOU  wrote:
>>> Hi Daniel,
>>>
>>> If we are discussing 5.0, I would have the same concern as you.
>>> What we are discussing is dropping the 4.x. The fact is, we will never
>>> release 5.0 (does anyone disagree?).
>>> In this case, the major version 4.x becomes useless.
>>> If we compare 4.20.0/4.21.0 with 20.0/21.0, it is obvious which is better.
>>> IMHO, for a similar reason, Java's versioning was changed from 1.x
>>> to Java 1.7/1.8 (= Java 7/8) and then to Java 11/14/17.
>>> Of course there will be some issues if the semantics change, but I think
>>> they are under control.
>>>
>>>
>>>
>>> Regarding compatibility, I think we can change the APIs gradually.
>>> I noticed the following recently when I tested the VR upgrade to
>>> Debian 12 / Python 3:
>>>
>>> root@r-431-VM:~# python
>>> Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
>>> Type "help", "copyright", "credits" or "license" for more information.
>>> >>> import cgi
>>> <stdin>:1: DeprecationWarning: 'cgi' is deprecated and slated for removal
>>> in Python 3.13
>>>
>>> For the API changes you mentioned, we could try something similar:
>>> - in version X, add new APIs and mark the old APIs as deprecated
>>> - tell users the old APIs will be removed in version Y and that they
>>> should use the new APIs instead
>>> - in version Y, remove the old APIs.
>>>
>>> This can be done in each major/minor release. No need to wait for 5.0.
>>>
>>>
>>> -Wei
>>>
>>> On Fri, 26 Jan 2024 at 18:51, Guto Veronezi  wrote:
>>>
 Exactly, so you understand now why we must discuss what we intend.
 Although, incompatibilities are needed sometimes so we can evolve,
 leaving old ways and deprecated technologies and techniques in the past.

 *The main point is: *we have to understand the technical reasons for the
 proposal and what we expect from it before deciding anything.

 Best regards,
 Daniel Salvador (gutoveronezi)



>>
>>
>> --
>> Daan
>
>


Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wei ZHOU
Hi Wally,

I think the cluster is working fine.
The kubeconfig is missing on the extra nodes. I have just created a PR for it:
https://github.com/apache/cloudstack/pull/8658
You can run the command on the control nodes, which should fix the problem.


-Wei

On Thu, 15 Feb 2024 at 09:31, Wally B  wrote:

> 3 Nodes
>
> Control 1 -- No Errors
>
> kubectl get nodes
> NAMESTATUS   ROLES   AGE
>  VERSION
> pz-dev-k8s-ncus-1-control-18dabdb141b   Readycontrol-plane   2m6s
> v1.28.4
> pz-dev-k8s-ncus-1-control-18dabdb6ad6   Readycontrol-plane   107s
> v1.28.4
> pz-dev-k8s-ncus-1-control-18dabdbc0a8   Readycontrol-plane   108s
> v1.28.4
> pz-dev-k8s-ncus-1-node-18dabdc1644  Ready  115s
> v1.28.4
> pz-dev-k8s-ncus-1-node-18dabdc6c16  Ready  115s
> v1.28.4
>
>
> kubectl get pods --all-namespaces
> NAMESPACE              NAME                                                            READY   STATUS    RESTARTS        AGE
> kube-system            coredns-5dd5756b68-g84vk                                        1/1     Running   0               2m46s
> kube-system            coredns-5dd5756b68-kf92x                                        1/1     Running   0               2m46s
> kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabdb141b                      1/1     Running   0               2m50s
> kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabdb6ad6                      1/1     Running   0               2m16s
> kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabdbc0a8                      1/1     Running   0               2m37s
> kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb141b            1/1     Running   0               2m52s
> kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb6ad6            1/1     Running   1 (2m16s ago)   2m15s
> kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdbc0a8            1/1     Running   0               2m37s
> kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb141b   1/1     Running   1 (2m25s ago)   2m51s
> kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb6ad6   1/1     Running   0               2m18s
> kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdbc0a8   1/1     Running   0               2m37s
> kube-system            kube-proxy-445qx                                                1/1     Running   0               2m37s
> kube-system            kube-proxy-8swdg                                                1/1     Running   0               2m2s
> kube-system            kube-proxy-bl9rx                                                1/1     Running   0               2m47s
> kube-system            kube-proxy-pv8gj                                                1/1     Running   0               2m43s
> kube-system            kube-proxy-v7cw2                                                1/1     Running   0               2m43s
> kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb141b            1/1     Running   1 (2m22s ago)   2m50s
> kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb6ad6            1/1     Running   0               2m15s
> kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdbc0a8            1/1     Running   0               2m37s
> kube-system            weave-net-8dvl5                                                 2/2     Running   0               2m37s
> kube-system            weave-net-c54bz                                                 2/2     Running   0               2m43s
> kube-system            weave-net-lv8l4                                                 2/2     Running   1 (2m42s ago)   2m47s
> kube-system            weave-net-vg6td                                                 2/2     Running   0               2m2s
> kube-system            weave-net-vq9s4                                                 2/2     Running   0               2m43s
> kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-4k886                      1/1     Running   0               2m46s
> kubernetes-dashboard   kubernetes-dashboard-5b749d9495-jpbxl                           1/1     Running   1 (2m22s ago)   2m46s
>
>
>
>
> Control 2: Errors at the CLI
> Failed to start Execute cloud user/final scripts.
>
> kubectl get nodes
> E0215 08:27:07.797825    2772 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 08:27:07.798759    2772 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 08:27:07.801039    2772 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 08:27:07.801977    2772 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 08:27:07.804029    2772 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wally B
3 Nodes

Control 1 -- No Errors

kubectl get nodes
NAME                                    STATUS   ROLES           AGE    VERSION
pz-dev-k8s-ncus-1-control-18dabdb141b   Ready    control-plane   2m6s   v1.28.4
pz-dev-k8s-ncus-1-control-18dabdb6ad6   Ready    control-plane   107s   v1.28.4
pz-dev-k8s-ncus-1-control-18dabdbc0a8   Ready    control-plane   108s   v1.28.4
pz-dev-k8s-ncus-1-node-18dabdc1644      Ready    <none>          115s   v1.28.4
pz-dev-k8s-ncus-1-node-18dabdc6c16      Ready    <none>          115s   v1.28.4


kubectl get pods --all-namespaces
NAMESPACE              NAME                                                            READY   STATUS    RESTARTS        AGE
kube-system            coredns-5dd5756b68-g84vk                                        1/1     Running   0               2m46s
kube-system            coredns-5dd5756b68-kf92x                                        1/1     Running   0               2m46s
kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabdb141b                      1/1     Running   0               2m50s
kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabdb6ad6                      1/1     Running   0               2m16s
kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabdbc0a8                      1/1     Running   0               2m37s
kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb141b            1/1     Running   0               2m52s
kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdb6ad6            1/1     Running   1 (2m16s ago)   2m15s
kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabdbc0a8            1/1     Running   0               2m37s
kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb141b   1/1     Running   1 (2m25s ago)   2m51s
kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdb6ad6   1/1     Running   0               2m18s
kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabdbc0a8   1/1     Running   0               2m37s
kube-system            kube-proxy-445qx                                                1/1     Running   0               2m37s
kube-system            kube-proxy-8swdg                                                1/1     Running   0               2m2s
kube-system            kube-proxy-bl9rx                                                1/1     Running   0               2m47s
kube-system            kube-proxy-pv8gj                                                1/1     Running   0               2m43s
kube-system            kube-proxy-v7cw2                                                1/1     Running   0               2m43s
kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb141b            1/1     Running   1 (2m22s ago)   2m50s
kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdb6ad6            1/1     Running   0               2m15s
kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabdbc0a8            1/1     Running   0               2m37s
kube-system            weave-net-8dvl5                                                 2/2     Running   0               2m37s
kube-system            weave-net-c54bz                                                 2/2     Running   0               2m43s
kube-system            weave-net-lv8l4                                                 2/2     Running   1 (2m42s ago)   2m47s
kube-system            weave-net-vg6td                                                 2/2     Running   0               2m2s
kube-system            weave-net-vq9s4                                                 2/2     Running   0               2m43s
kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-4k886                      1/1     Running   0               2m46s
kubernetes-dashboard   kubernetes-dashboard-5b749d9495-jpbxl                           1/1     Running   1 (2m22s ago)   2m46s




Control 2: Errors at the CLI
Failed to start Execute cloud user/final scripts.

kubectl get nodes
E0215 08:27:07.797825    2772 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
127.0.0.1:8080: connect: connection refused
E0215 08:27:07.798759    2772 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
127.0.0.1:8080: connect: connection refused
E0215 08:27:07.801039    2772 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
127.0.0.1:8080: connect: connection refused
E0215 08:27:07.801977    2772 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
127.0.0.1:8080: connect: connection refused
E0215 08:27:07.804029    2772 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify
the right host or port?

kubectl get pods --all-namespaces
E0215 08:29:41.818452    2811 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
127.0.0.1:8080: connect: connection refused
E0215 08:29:41.819935    2811 memcache.go:265] couldn't get current server
API group list: Get "http://localhost:8080/api?timeout=32s": dial 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wei ZHOU
Can you try with 3 control nodes?

-Wei

On Thu, 15 Feb 2024 at 09:13, Wally B  wrote:

> - zone type :
> Core
> - network type:
> Advanced
> Isolated Network inside a Redundant VPC (same results in just an
> Isolated network without VPC)
> - number of control nodes:
> 2 Control Nodes (HA Cluster)
>
> We were able to deploy k8s in the past, not sure what changed.
>
> Thanks!
> -Wally
>
> On Thu, Feb 15, 2024 at 2:04 AM Wei ZHOU  wrote:
>
> > Hi,
> >
> > can you share
> > - zone type
> > - network type
> > - number of control nodes
> >
> >
> > -Wei
> >
> > On Thu, 15 Feb 2024 at 08:52, Wally B  wrote:
> >
> > > So
> > >
> > > Recreating the Sec Storage VM Fixed the Cert issue and I was able to
> > > install K8s 1.28.4 Binaries. --- THANKS Wei ZHOU !
> > >
> > >
> > > I'm still getting
> > >
> > > [FAILED] Failed to start Execute cloud user/final scripts.
> > >
> > > on 1 control and 1 worker.
> > >
> > > *Control 1 --  pz-dev-k8s-ncus-1-control-18dabaf66c1  --:* No
> > > errors at the CLI
> > >
> > > kubectl get nodes
> > > NAME                                    STATUS   ROLES           AGE     VERSION
> > > pz-dev-k8s-ncus-1-control-18dabaf0edb   Ready    control-plane   5m2s    v1.28.4
> > > pz-dev-k8s-ncus-1-control-18dabaf66c1   Ready    control-plane   4m44s   v1.28.4
> > > pz-dev-k8s-ncus-1-node-18dabafb0bd      Ready    <none>          4m47s   v1.28.4
> > > pz-dev-k8s-ncus-1-node-18dabb006bc      Ready    <none>          4m47s   v1.28.4
> > >
> > >
> > > kubectl get pods --all-namespaces
> > > NAMESPACE              NAME                                                            READY   STATUS    RESTARTS        AGE
> > > kube-system            coredns-5dd5756b68-295gb                                        1/1     Running   0               5m32s
> > > kube-system            coredns-5dd5756b68-cdwvw                                        1/1     Running   0               5m33s
> > > kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabaf0edb                      1/1     Running   0               5m36s
> > > kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabaf66c1                      1/1     Running   0               5m23s
> > > kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabaf0edb            1/1     Running   0               5m36s
> > > kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabaf66c1            1/1     Running   0               5m23s
> > > kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabaf0edb   1/1     Running   1 (5m13s ago)   5m36s
> > > kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabaf66c1   1/1     Running   0               5m23s
> > > kube-system            kube-proxy-2m8zb                                                1/1     Running   0               5m26s
> > > kube-system            kube-proxy-cwpjg                                                1/1     Running   0               5m33s
> > > kube-system            kube-proxy-l2vbf                                                1/1     Running   0               5m26s
> > > kube-system            kube-proxy-qhlqt                                                1/1     Running   0               5m23s
> > > kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabaf0edb            1/1     Running   1 (5m8s ago)    5m36s
> > > kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabaf66c1            1/1     Running   0               5m23s
> > > kube-system            weave-net-5cs26                                                 2/2     Running   1 (5m9s ago)    5m26s
> > > kube-system            weave-net-9zqrw                                                 2/2     Running   1 (5m28s ago)   5m33s
> > > kube-system            weave-net-fcwtr                                                 2/2     Running   0               5m23s
> > > kube-system            weave-net-lh2dh                                                 2/2     Running   1 (4m41s ago)   5m26s
> > > kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-r284t                      1/1     Running   0               5m32s
> > > kubernetes-dashboard   kubernetes-dashboard-5b749d9495-vtwdd                           1/1     Running   0               5m32s
> > >
> > >
> > >
> > > *Control 2 ---  pz-dev-k8s-ncus-1-control-18dabaf66c1   :*  [FAILED]
> > > Failed to start Execute cloud user/final scripts.
> > >
> > > kubectl get nodes
> > > E0215 07:38:33.314561    2643 memcache.go:265] couldn't get current server
> > > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > > 127.0.0.1:8080: connect: connection refused
> > > E0215 07:38:33.316751    2643 memcache.go:265] couldn't get current server
> > > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > > 127.0.0.1:8080: connect: connection refused
> > > E0215 07:38:33.317754    2643 memcache.go:265] couldn't get current server
> > > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > > 127.0.0.1:8080: connect: connection refused
> > > E0215 07:38:33.319181    2643 memcache.go:265] couldn't get current server
> > > API 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wally B
- zone type :
Core
- network type:
Advanced
Isolated Network inside a Redundant VPC (same results in just an
Isolated network without VPC)
- number of control nodes:
2 Control Nodes (HA Cluster)

We were able to deploy k8s in the past, not sure what changed.

Thanks!
-Wally

On Thu, Feb 15, 2024 at 2:04 AM Wei ZHOU  wrote:

> Hi,
>
> can you share
> - zone type
> - network type
> - number of control nodes
>
>
> -Wei
>
> On Thu, 15 Feb 2024 at 08:52, Wally B  wrote:
>
> > So
> >
> > Recreating the Sec Storage VM Fixed the Cert issue and I was able to
> > install K8s 1.28.4 Binaries. --- THANKS Wei ZHOU !
> >
> >
> > I'm still getting
> >
> > [FAILED] Failed to start Execute cloud user/final scripts.
> >
> > on 1 control and 1 worker.
> >
> > *Control 1 --  pz-dev-k8s-ncus-1-control-18dabaf66c1  --:* No
> > errors at the CLI
> >
> > kubectl get nodes
> > NAME                                    STATUS   ROLES           AGE     VERSION
> > pz-dev-k8s-ncus-1-control-18dabaf0edb   Ready    control-plane   5m2s    v1.28.4
> > pz-dev-k8s-ncus-1-control-18dabaf66c1   Ready    control-plane   4m44s   v1.28.4
> > pz-dev-k8s-ncus-1-node-18dabafb0bd      Ready    <none>          4m47s   v1.28.4
> > pz-dev-k8s-ncus-1-node-18dabb006bc      Ready    <none>          4m47s   v1.28.4
> >
> >
> > kubectl get pods --all-namespaces
> > NAMESPACE              NAME                                                            READY   STATUS    RESTARTS        AGE
> > kube-system            coredns-5dd5756b68-295gb                                        1/1     Running   0               5m32s
> > kube-system            coredns-5dd5756b68-cdwvw                                        1/1     Running   0               5m33s
> > kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabaf0edb                      1/1     Running   0               5m36s
> > kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabaf66c1                      1/1     Running   0               5m23s
> > kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabaf0edb            1/1     Running   0               5m36s
> > kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabaf66c1            1/1     Running   0               5m23s
> > kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabaf0edb   1/1     Running   1 (5m13s ago)   5m36s
> > kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabaf66c1   1/1     Running   0               5m23s
> > kube-system            kube-proxy-2m8zb                                                1/1     Running   0               5m26s
> > kube-system            kube-proxy-cwpjg                                                1/1     Running   0               5m33s
> > kube-system            kube-proxy-l2vbf                                                1/1     Running   0               5m26s
> > kube-system            kube-proxy-qhlqt                                                1/1     Running   0               5m23s
> > kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabaf0edb            1/1     Running   1 (5m8s ago)    5m36s
> > kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabaf66c1            1/1     Running   0               5m23s
> > kube-system            weave-net-5cs26                                                 2/2     Running   1 (5m9s ago)    5m26s
> > kube-system            weave-net-9zqrw                                                 2/2     Running   1 (5m28s ago)   5m33s
> > kube-system            weave-net-fcwtr                                                 2/2     Running   0               5m23s
> > kube-system            weave-net-lh2dh                                                 2/2     Running   1 (4m41s ago)   5m26s
> > kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-r284t                      1/1     Running   0               5m32s
> > kubernetes-dashboard   kubernetes-dashboard-5b749d9495-vtwdd                           1/1     Running   0               5m32s
> >
> >
> >
> > *Control 2 ---  pz-dev-k8s-ncus-1-control-18dabaf66c1   :*  [FAILED]
> > Failed to start Execute cloud user/final scripts.
> >
> > kubectl get nodes
> > E0215 07:38:33.314561    2643 memcache.go:265] couldn't get current server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection refused
> > E0215 07:38:33.316751    2643 memcache.go:265] couldn't get current server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection refused
> > E0215 07:38:33.317754    2643 memcache.go:265] couldn't get current server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection refused
> > E0215 07:38:33.319181    2643 memcache.go:265] couldn't get current server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection refused
> > E0215 07:38:33.319975    2643 memcache.go:265] couldn't get current server
> > API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> > 127.0.0.1:8080: connect: connection 

Re: Kubernetes Clusters Failing to Start 4.19.0

2024-02-15 Thread Wei ZHOU
Hi,

can you share
- zone type
- network type
- number of control nodes


-Wei

On Thu, 15 Feb 2024 at 08:52, Wally B  wrote:

> So
>
> Recreating the Sec Storage VM Fixed the Cert issue and I was able to
> install K8s 1.28.4 Binaries. --- THANKS Wei ZHOU !
>
>
> I'm still getting
>
> [FAILED] Failed to start Execute cloud user/final scripts.
>
> on 1 control and 1 worker.
>
> *Control 1 --  pz-dev-k8s-ncus-1-control-18dabaf66c1  --:* No
> errors at the CLI
>
> kubectl get nodes
> NAME                                    STATUS   ROLES           AGE     VERSION
> pz-dev-k8s-ncus-1-control-18dabaf0edb   Ready    control-plane   5m2s    v1.28.4
> pz-dev-k8s-ncus-1-control-18dabaf66c1   Ready    control-plane   4m44s   v1.28.4
> pz-dev-k8s-ncus-1-node-18dabafb0bd      Ready    <none>          4m47s   v1.28.4
> pz-dev-k8s-ncus-1-node-18dabb006bc      Ready    <none>          4m47s   v1.28.4
>
>
> kubectl get pods --all-namespaces
> NAMESPACE              NAME                                                            READY   STATUS    RESTARTS        AGE
> kube-system            coredns-5dd5756b68-295gb                                        1/1     Running   0               5m32s
> kube-system            coredns-5dd5756b68-cdwvw                                        1/1     Running   0               5m33s
> kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabaf0edb                      1/1     Running   0               5m36s
> kube-system            etcd-pz-dev-k8s-ncus-1-control-18dabaf66c1                      1/1     Running   0               5m23s
> kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabaf0edb            1/1     Running   0               5m36s
> kube-system            kube-apiserver-pz-dev-k8s-ncus-1-control-18dabaf66c1            1/1     Running   0               5m23s
> kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabaf0edb   1/1     Running   1 (5m13s ago)   5m36s
> kube-system            kube-controller-manager-pz-dev-k8s-ncus-1-control-18dabaf66c1   1/1     Running   0               5m23s
> kube-system            kube-proxy-2m8zb                                                1/1     Running   0               5m26s
> kube-system            kube-proxy-cwpjg                                                1/1     Running   0               5m33s
> kube-system            kube-proxy-l2vbf                                                1/1     Running   0               5m26s
> kube-system            kube-proxy-qhlqt                                                1/1     Running   0               5m23s
> kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabaf0edb            1/1     Running   1 (5m8s ago)    5m36s
> kube-system            kube-scheduler-pz-dev-k8s-ncus-1-control-18dabaf66c1            1/1     Running   0               5m23s
> kube-system            weave-net-5cs26                                                 2/2     Running   1 (5m9s ago)    5m26s
> kube-system            weave-net-9zqrw                                                 2/2     Running   1 (5m28s ago)   5m33s
> kube-system            weave-net-fcwtr                                                 2/2     Running   0               5m23s
> kube-system            weave-net-lh2dh                                                 2/2     Running   1 (4m41s ago)   5m26s
> kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-r284t                      1/1     Running   0               5m32s
> kubernetes-dashboard   kubernetes-dashboard-5b749d9495-vtwdd                           1/1     Running   0               5m32s
>
>
>
> *Control 2 ---  pz-dev-k8s-ncus-1-control-18dabaf66c1   :*  [FAILED]
> Failed to start Execute cloud user/final scripts.
>
> kubectl get nodes
> E0215 07:38:33.314561    2643 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 07:38:33.316751    2643 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 07:38:33.317754    2643 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 07:38:33.319181    2643 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 07:38:33.319975    2643 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> The connection to the server localhost:8080 was refused - did you specify
> the right host or port?
>
>
> kubectl get pods --all-namespaces
> E0215 07:42:23.786704    2700 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 07:42:23.787455    2700 memcache.go:265] couldn't get current server
> API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp
> 127.0.0.1:8080: connect: connection refused
> E0215 07:42:23.789529    2700