Are you able to access the CloudStack web UI at
http://10.1.10.2:8080/client ? As long as the k8s nodes have connectivity
to /client/api on your management server, this should work fine. Perhaps a
host-level firewall on your management server is blocking the port?
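
If it helps, this is roughly what I would check on the management server
itself (a sketch; assumes a systemd-based install, and firewalld if that is
what the host uses — adjust for iptables/ufw):

ss -tlnp | grep 8080                      # is anything listening on 8080?
systemctl status cloudstack-management    # is the management service up?
firewall-cmd --list-all                   # if firewalld is active, is 8080/tcp allowed?
curl -s http://localhost:8080/client/api  # does the API answer locally?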

Thanks

On Mon, Feb 26, 2024 at 4:08 PM Bharat Bhushan Saini
<bharat.sa...@kloudspot.com.invalid> wrote:

> Hi Vivek,
>
>
>
> Please check the findings
>
>
>
> ping 10.1.x.2
>
> PING 10.1.x.2 (10.1.x.2): 56 data bytes
>
> 64 bytes from 10.1.x.2: icmp_seq=0 ttl=64 time=0.616 ms
>
> 64 bytes from 10.1.x.2: icmp_seq=1 ttl=64 time=0.716 ms
>
> ^C--- 10.1.x.2 ping statistics ---
>
> 2 packets transmitted, 2 packets received, 0% packet loss
>
> round-trip min/avg/max/stddev = 0.616/0.666/0.716/0.050 ms
>
>
>
> ping cloudstack.internal.com
>
> PING cloudstack.internal.com (10.1.x.2): 56 data bytes
>
> 64 bytes from 10.1.x.2: icmp_seq=0 ttl=64 time=0.555 ms
>
> 64 bytes from 10.1.x.2: icmp_seq=1 ttl=64 time=0.620 ms
>
> 64 bytes from 10.1.x.2: icmp_seq=2 ttl=64 time=0.664 ms
>
> ^C--- cloudstack.internal.com ping statistics ---
>
> 3 packets transmitted, 3 packets received, 0% packet loss
>
> round-trip min/avg/max/stddev = 0.555/0.613/0.664/0.045 ms
>
>
>
> telnet 10.1.x.2 8080
>
> Trying 10.1.x.2...
>
> telnet: Unable to connect to remote host: Connection refused
>
>
>
>
>
> I am able to ping the management IP and the hostname, but I cannot connect
> on port 8080; the port does not appear to be reachable from the cluster.
> NOTE: I use the management IP in the API URL.
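>
> An equivalent check against the API endpoint itself would be (assuming
> curl is available on the node):
>
> curl -v http://10.1.x.2:8080/client/api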
>
>
>
> Thanks and Regards,
>
> Bharat Saini
>
>
>
> *From: *Vivek Kumar <vivek.ku...@indiqus.com.INVALID>
> *Date: *Monday, 26 February 2024 at 3:49 PM
> *To: *users@cloudstack.apache.org <users@cloudstack.apache.org>
> *Subject: *Re: CKS Storage Provisioner Info
>
> Hello Bharat,
>
> Is the CloudStack URL reachable from your cluster? Can you check manually,
> e.g. ping and telnet on that port?
>
>
>
>
> > On 26-Feb-2024, at 3:43 PM, Bharat Bhushan Saini
> > <bharat.sa...@kloudspot.com.INVALID> wrote:
> >
> > Hi Wei/Jayanth,
> >
> > Thanks for sharing the details. I was able to fetch the API and secret
> > keys and deployed the driver as suggested by @vivek and the GitHub page.
> >
> > Now I have run into one more issue: the cloudstack-csi-node pods go into
> > a CrashLoopBackOff error. I am trying to get some more information; the
> > relevant log from the node pod is below.
> >
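> > For reference, the log was fetched with something like the following
> > (the exact container name in the DaemonSet may differ):
> >
> > kubectl -n kube-system logs cloudstack-csi-node-56hxg \
> >   -c cloudstack-csi-node --previous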
> >
> {"level":"error","ts":1708932622.5365772,"caller":"zap/options.go:212","msg":"finished
> unary call with code
> Internal","grpc.start_time":"2024-02-26T07:30:22Z","grpc.request.deadline":"2024-02-26T07:32:22Z","system":"grpc","span.kind":"server","grpc.service":"csi.v1.Node","grpc.method":"NodeGetInfo","error":"rpc
> error: code = Internal desc = Get \"
> http://10.1.10.2:8080/client/api?apiKey=k83H56KFdhFqpv7cXPU11nkwxPt8f2rXnm1WWVIRdeErqZr72Pzp7ySmricPWs7FQQuMmClznDhMz7uqnRD2wA&command=listVirtualMachines&id=cf4940eb-52a4-4205-b056-1575926cb488&response=json&signature=t4jdPVL7jqhGt5pWC0kjx%2Bxzr3o%3D\
> <http://10.1.10.2:8080/client/api?apiKey=k83H56KFdhFqpv7cXPU11nkwxPt8f2rXnm1WWVIRdeErqZr72Pzp7ySmricPWs7FQQuMmClznDhMz7uqnRD2wA&command=listVirtualMachines&id=cf4940eb-52a4-4205-b056-1575926cb488&response=json&signature=t4jdPVL7jqhGt5pWC0kjx%2Bxzr3o%3D%5C>":
> <
> http://10.1.10.2:8080/client/api?apiKey=k83H56KFdhFqpv7cXPU11nkwxPt8f2rXnm1WWVIRdeErqZr72Pzp7ySmricPWs7FQQuMmClznDhMz7uqnRD2wA&command=listVirtualMachines&id=cf4940eb-52a4-4205-b056-1575926cb488&response=json&signature=t4jdPVL7jqhGt5pWC0kjx%2Bxzr3o%3D\%22:
> <http://10.1.10.2:8080/client/api?apiKey=k83H56KFdhFqpv7cXPU11nkwxPt8f2rXnm1WWVIRdeErqZr72Pzp7ySmricPWs7FQQuMmClznDhMz7uqnRD2wA&command=listVirtualMachines&id=cf4940eb-52a4-4205-b056-1575926cb488&response=json&signature=t4jdPVL7jqhGt5pWC0kjx%2Bxzr3o%3D%5C%22:>>
> dial tcp 10.1.10.2:8080: connect: connection
> refused","grpc.code":"Internal","grpc.time_ms":1.138,"stacktrace":"
> github.com/grpc-ecosystem/go-grpc-middleware/logging/zap.DefaultMessageProducer\n\t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/options.go:212\ngithub.com/grpc-ecosystem
> <http://github.com/grpc-ecosystem/go-grpc-middleware/logging/zap.DefaultMessageProducer%5Cn%5Ct/home/runner/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/options.go:212%5Cngithub.com/grpc-ecosystem>
> <
> http://github.com/grpc-ecosystem/go-grpc-middleware/logging/zap.DefaultMessageProducer/n/t/home/runner/go/pkg/mod/github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/options.go:212/ngithub.com/grpc-ecosystem
> >/go-grpc-middleware/logging/zap.UnaryServerInterceptor.func1\n\t/home/runner/go/pkg/mod/
> github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/server_interceptors.go:39\ngoogle.golang.org/grpc.chainUnaryInterceptors.func1\n\t/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1183\ngithub.com/container-storage-interface/spec/lib/go/csi
> <http://github.com/grpc-ecosystem/go-grpc-middleware@v1.4.0/logging/zap/server_interceptors.go:39%5Cngoogle.golang.org/grpc.chainUnaryInterceptors.func1%5Cn%5Ct/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1183%5Cngithub.com/container-storage-interface/spec/lib/go/csi>
> <
> http://ngoogle.golang.org/grpc.chainUnaryInterceptors.func1/n/t/home/runner/go/pkg/mod/google.golang.org/grpc@v1.60.1/server.go:1183/ngithub.com/container-storage-interface/spec/lib/go/csi
> >._Node_NodeGetInfo_Handler\n\t/home/runner/go/pkg/mod/
> github.com/container-storage-interface/spec@v1.9.0/lib/go/csi/csi.pb.go:7351\ngoogle.golang.org/grpc
> <http://github.com/container-storage-interface/spec@v1.9.0/lib/go/csi/csi.pb.go:7351%5Cngoogle.golang.org/grpc>
> <
> http://github.com/container-storage-interface/spec@v1.9.0/lib/go/csi/csi.pb.go:7351/ngoogle.golang.org/grpc>.(*Server
> ).processUnaryRPC\n\t/home/runner/go/pkg/mod/
> google.golang.org/grpc@v1.60.1/server.go:1372\ngoogle.golang.org/grpc
> <http://google.golang.org/grpc@v1.60.1/server.go:1372%5Cngoogle.golang.org/grpc>
> <
> http://google.golang.org/grpc@v1.60.1/server.go:1372/ngoogle.golang.org/grpc>.(*Server
> ).handleStream\n\t/home/runner/go/pkg/mod/
> google.golang.org/grpc@v1.60.1/server.go:1783\ngoogle.golang.org/grpc
> <http://google.golang.org/grpc@v1.60.1/server.go:1783%5Cngoogle.golang.org/grpc>
> <
> http://google.golang.org/grpc@v1.60.1/server.go:1783/ngoogle.golang.org/grpc>.(*Server
> ).serveStreams.func2.1\n\t/home/runner/go/pkg/mod/
> google.golang.org/grpc@v1.60.1/server.go:1016 <
> http://google.golang.org/grpc@v1.60.1/server.go:1016>"}
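> >
> > A quick way to re-check the dial error from inside the node pod itself
> > (a sketch; the container name and the presence of wget in the image are
> > assumptions):
> >
> > kubectl -n kube-system exec cloudstack-csi-node-56hxg \
> >   -c cloudstack-csi-node -- wget -qO- http://10.1.10.2:8080/client/api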
> >
> > kubectl get pods -A
> > NAMESPACE              NAME                                                    READY   STATUS             RESTARTS        AGE
> > default                example-pod                                             0/1     Pending            0               87m
> > kube-system            cloud-controller-manager-574bcb86c-vzp4m                1/1     Running            0               155m
> > kube-system            cloudstack-csi-controller-7f89c8cd47-ftgnf              5/5     Running            0               150m
> > kube-system            cloudstack-csi-controller-7f89c8cd47-j4s4z              5/5     Running            0               150m
> > kube-system            cloudstack-csi-controller-7f89c8cd47-ptvss              5/5     Running            0               150m
> > kube-system            cloudstack-csi-node-56hxg                               2/3     CrashLoopBackOff   34 (99s ago)    150m
> > kube-system            cloudstack-csi-node-98cf2                               2/3     CrashLoopBackOff   34 (39s ago)    150m
> > kube-system            coredns-5dd5756b68-5wwxk                                1/1     Running            0               4h17m
> > kube-system            coredns-5dd5756b68-mbpwt                                1/1     Running            0               4h17m
> > kube-system            etcd-kspot-app-control-18de3ee6b6f                      1/1     Running            0               4h17m
> > kube-system            kube-apiserver-kspot-app-control-18de3ee6b6f            1/1     Running            0               4h17m
> > kube-system            kube-controller-manager-kspot-app-control-18de3ee6b6f   1/1     Running            0               4h17m
> > kube-system            kube-proxy-56r4l                                        1/1     Running            0               4h17m
> > kube-system            kube-proxy-mf6cc                                        1/1     Running            0               4h17m
> > kube-system            kube-scheduler-kspot-app-control-18de3ee6b6f            1/1     Running            0               4h17m
> > kube-system            weave-net-59t9z                                         2/2     Running            1 (4h17m ago)   4h17m
> > kube-system            weave-net-7xvpp                                         2/2     Running            0               4h17m
> > kubernetes-dashboard   dashboard-metrics-scraper-5657497c4c-g89lq              1/1     Running            0               4h17m
> > kubernetes-dashboard   kubernetes-dashboard-5b749d9495-fqplb                   1/1     Running            0               4h17m
> >
> > kubectl get csinode
> > NAME                            DRIVERS   AGE
> > kspot-app-control-18de3ee6b6f   0         4h23m
> > kspot-app-node-18de3eeb7b7      0         4h23m
> >
> > kubectl describe csinode
> > Name:               kspot-app-control-18de3ee6b6f
> > Labels:             <none>
> > Annotations:        storage.alpha.kubernetes.io/migrated-plugins:
> >                       kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/vsphere-vo...
> > CreationTimestamp:  Mon, 26 Feb 2024 05:42:57 +0000
> > Spec:
> > Events:  <none>
> >
> > Name:               kspot-app-node-18de3eeb7b7
> > Labels:             <none>
> > Annotations:        storage.alpha.kubernetes.io/migrated-plugins:
> >                       kubernetes.io/aws-ebs,kubernetes.io/azure-disk,kubernetes.io/azure-file,kubernetes.io/cinder,kubernetes.io/gce-pd,kubernetes.io/vsphere-vo...
> > CreationTimestamp:  Mon, 26 Feb 2024 05:43:12 +0000
> > Spec:
> > Events:  <none>
> >
> >
> > Thanks and Regards,
> > Bharat Saini
> >
> >
> >
> > From: Wei ZHOU <ustcweiz...@gmail.com>
> > Date: Monday, 26 February 2024 at 1:52 AM
> > To: users@cloudstack.apache.org <users@cloudstack.apache.org>
> > Subject: Re: CKS Storage Provisioner Info
> >
> > +1
> >
> > Or use the API key of the "admin" user.
> >
> > -Wei
> >
> > On Sun, Feb 25, 2024 at 7:57 PM Jayanth Reddy <jayanthreddy5...@gmail.com>
> > wrote:
> >
> > > Hello Bharat,
> > > With your login as the "admin" user, you should be able to generate
> > > keys for any user. Please do the following:
> > >
> > > 1. Go to "Accounts"
> > > 2. Select the account named "admin"
> > > 3. Scroll down and click "users"
> > > 4. Select the "admin-kubeadmin" user
> > > 5. Then click the button to generate the keys (a CLI alternative is
> > > sketched below).
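> > >
> > > For reference, the same should be possible from the command line with
> > > CloudMonkey (a sketch; assumes cmk is installed and configured against
> > > your management server):
> > >
> > > cmk list users username=admin-kubeadmin  # note the user's id
> > > cmk register userkeys id=<user-uuid>     # returns the apikey/secretkey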
> > >
> > > Please let me know if that helps.
> > >
> > > Thanks,
> > > Jayanth
> > >
>
>
>