Re: Juju Kubernetes vSphere storage

2017-09-07 Thread Micheal B
Yup, that’s the one I followed. I will go back through it again, though, and make sure 
I did not miss anything.

 

Thanks for all the support! Much appreciated!

 

Cheers

Micheal

 

From: Tim Van Steenburgh <tim.van.steenbu...@canonical.com>
Date: Thursday, September 7, 2017 at 2:02 PM
To: Micheal B <tic...@tictoc.us>
Cc: juju <Juju@lists.ubuntu.com>
Subject: Re: Juju Kubernetes vSphere storage

 

 

 

On Thu, Sep 7, 2017 at 1:31 PM, Micheal B <tic...@tictoc.us> wrote:

Thanks!

 

Stuck on 

 

Step-6: Add flags to the controller-manager, API server, and kubelet to enable the 
vSphere Cloud Provider. Add the following flags to the kubelet running on every node 
and to the controller-manager and API server pod manifest files.

--cloud-provider=vsphere

--cloud-config=
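
For reference, the value for --cloud-config is the path to an INI-style vsphere.conf read by the kubelet, API server, and controller-manager. A minimal sketch with placeholder credentials and server details (only the datastore name comes from this thread):

[Global]
user = "administrator@vsphere.local"
password = "changeme"
server = "vcenter.example.com"
port = "443"
insecure-flag = "1"
datacenter = "Datacenter1"
datastore = "DS_TICTOC01"
working-dir = "kubernetes"

[Disk]
scsicontrollertype = pvscsi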

 

 

Tried this; it did not make a difference.

 

 

It's difficult to help because I don't know what steps were performed. You may 
find it helpful to follow along with the steps here: 
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/402. This 
gives more detail about configuring the vSphere cloud provider in the context of 
CDK.
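
One quick sanity check, assuming the components run as ordinary processes on the machines (adjust for your particular install), is to confirm the flags actually reached the running API server, controller-manager, and kubelet:

# Run on a master and on a worker; prints nothing if the flags were not applied
ps -ef | egrep 'kube-apiserver|kube-controller-manager|kubelet' | grep -oE 'cloud-(provider|config)=[^ ]*'

If nothing comes back, the edited manifests or kubelet arguments were never picked up, which would explain why adding the flags made no visible difference.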

 

Configuring Masters

Edit or create the master configuration file on all masters 
(/etc/origin/master/master-config.yaml by default) and update the contents of 
the apiServerArguments and controllerArguments sections with the following:

 

kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      {}
  apiServerArguments:
    cloud-provider:
    - "vsphere"
    cloud-config:
    - "/etc/vsphere/vsphere.conf"
  controllerArguments:
    cloud-provider:
    - "vsphere"
    cloud-config:
    - "/etc/vsphere/vsphere.conf"

When triggering a containerized installation, only the /etc/origin and 
/var/lib/origin directories are mounted to the master and node container. 
Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.

Configuring Nodes

Edit or create the node configuration file on all nodes 
(/etc/origin/node/node-config.yaml by default) and update the contents of the 
kubeletArguments section:

 

kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/vsphere/vsphere.conf"

When triggering a containerized installation, only the /etc/origin and 
/var/lib/origin directories are mounted to the master and node container. 
Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.

From: Tim Van Steenburgh <tim.van.steenbu...@canonical.com>
Date: Thursday, September 7, 2017 at 6:33 AM
To: Micheal B <tic...@tictoc.us>
Cc: juju <Juju@lists.ubuntu.com>
Subject: Re: Juju Kubernetes vSphere storage

 

Hi Micheal,

 

Have you enabled the vsphere cloud provider for kubernetes as documented here: 
https://kubernetes.io/docs/getting-started-guides/vsphere/ ?

 

Tim

 

On Thu, Sep 7, 2017 at 4:06 AM, Micheal B <tic...@tictoc.us> wrote:

While working through 
https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere 
to test the different storage types on my vSphere lab, I seem to either have a 
bug or am not able to copy and paste some code ☺

 

None of them work; all fail with pretty much the same error.

 

 

MountVolume.SetUp failed for volume "test-volume" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk /var/lib/kubelet/pods/aa94ec10-9349-11e7-a663-005056a192ad/volumes/kubernetes.io~vsphere-volume/test-volume [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk does not exist

 

Unable to mount volumes for pod 
"test-vmdk_default(aa94ec10-9349-11e7-a663-005056a192ad)": timeout expired 
waiting for volumes to attach/mount for pod "default"/"test-vmdk". list of 
unattached/unmounted volumes=[test-volume]
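
To narrow down where the attach/mount is failing, the usual places to look are the pod's events and the kubelet log on the node the pod was scheduled to (the kubelet unit name below is a guess; it differs between deb, snap, and manifest-based installs):

kubectl describe pod test-vmdk                         # shows the volume attach/mount events for the failing pod
journalctl -u kubelet --no-pager | grep -i vsphere     # kubelet-side errors from the vSphere volume plugin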

 

 

 

The volume is there (/volume/myDisk.vmdk) for the first test, and auto-creating the 
volume also fails. Tested using the paths, from datastore cluster + datastore down to 
/vmfs/volumes/55b828da-b978a6d4-6619-002655e59984/volumes:

datastore: DS_TICTOC01/volumes

datastore: ticsdata/DS_TICTOC01/volumes
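
For comparison, the upstream example references the VMDK with a "[datastore] path/to/disk" style volumePath. A sketch of that test pod with the datastore and path from this thread substituted in (the image and exact field values are assumptions, not copied from the thread):

apiVersion: v1
kind: Pod
metadata:
  name: test-vmdk
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/test-webserver
    volumeMounts:
    - name: test-volume
      mountPath: /test-vmdk
  volumes:
  - name: test-volume
    vsphereVolume:
      # The VMDK must already exist on the datastore, e.g. created on an ESXi host with:
      #   vmkfstools -c 2G /vmfs/volumes/DS_TICTOC01/volumes/myDisk.vmdk
      volumePath: "[DS_TICTOC01] volumes/myDisk"
      fsType: ext4

The "special device ... does not exist" output generally means the node never saw a disk attached for that volumePath, so the datastore name and the path relative to the datastore root are the first things to double-check.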

 

The user making the connection is the default Administrator, and I have used it 
for years to create assorted other VMs, so I know that's good.  No errors in 
vSphere vCenter either.

 

I am using vSphere 6.1 / Kubernetes 1.7; vSphere DRS is enabled, using local 
drives to create the DS cluster. Juju had no issues deploying to them.

 

What am I missing, or what could I try?

 

Cheers

Micheal


Re: Juju Kubernetes vSphere storage

2017-09-07 Thread Tim Van Steenburgh
On Thu, Sep 7, 2017 at 1:31 PM, Micheal B <tic...@tictoc.us> wrote:

> Thanks!
>
>
>
> Stuck on
>
>
>
> Step-6: Add flags to the controller-manager, API server, and kubelet to enable
> the vSphere Cloud Provider. Add the following flags to the kubelet running on every
> node and to the controller-manager and API server pod manifest files.
>
> --cloud-provider=vsphere
>
> --cloud-config=
>
>
>
>
>
> Tried this; it did not make a difference.
>
>
>

It's difficult to help because I don't know what steps were performed. You
may find it helpful to follow along with the steps here:
https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/402.
This gives more detail about configuring the vSphere cloud provider in the
context of CDK.


> Configuring Masters
>
> Edit or create the master configuration file on all masters
> (/etc/origin/master/master-config.yaml by default) and update the
> contents of the apiServerArguments and controllerArguments sections with
> the following:
>
>
>
> kubernetesMasterConfig:
>   admissionConfig:
>     pluginConfig:
>       {}
>   apiServerArguments:
>     cloud-provider:
>     - "vsphere"
>     cloud-config:
>     - "/etc/vsphere/vsphere.conf"
>   controllerArguments:
>     cloud-provider:
>     - "vsphere"
>     cloud-config:
>     - "/etc/vsphere/vsphere.conf"
>
> When triggering a containerized installation, only the /etc/origin and
> /var/lib/origin directories are mounted to the master and node container.
> Therefore, master-config.yaml must be in /etc/origin/master rather than
> /etc/.
>
> Configuring Nodes
>
> Edit or create the node configuration file on all nodes
> (/etc/origin/node/node-config.yaml by default) and update the contents of
> the kubeletArguments section:
>
>
>
> kubeletArguments:
>   cloud-provider:
>     - "vsphere"
>   cloud-config:
>     - "/etc/vsphere/vsphere.conf"
>
> When triggering a containerized installation, only the /etc/origin and
> /var/lib/origin directories are mounted to the master and node container.
> Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.
>
> *From: *Tim Van Steenburgh <tim.van.steenbu...@canonical.com>
> *Date: *Thursday, September 7, 2017 at 6:33 AM
> *To: *Micheal B <tic...@tictoc.us>
> *Cc: *juju <Juju@lists.ubuntu.com>
> *Subject: *Re: Juju Kubernetes vSphere storage
>
>
>
> Hi Micheal,
>
>
>
> Have you enabled the vsphere cloud provider for kubernetes as documented
> here: https://kubernetes.io/docs/getting-started-guides/vsphere/ ?
>
>
>
> Tim
>
>
>
> On Thu, Sep 7, 2017 at 4:06 AM, Micheal B <tic...@tictoc.us> wrote:
>
> While working through https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere
> to test the different storage types on my vSphere lab, I seem to either
> have a bug or am not able to copy and paste some code ☺
>
>
>
> None of them work; all fail with pretty much the same error.
>
>
>
>
>
> MountVolume.SetUp failed for volume "test-volume" : mount failed: exit status 32
> Mounting command: mount
> Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk /var/lib/kubelet/pods/aa94ec10-9349-11e7-a663-005056a192ad/volumes/kubernetes.io~vsphere-volume/test-volume [bind]
> Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk does not exist
>
>
>
> Unable to mount volumes for pod 
> "test-vmdk_default(aa94ec10-9349-11e7-a663-005056a192ad)":
> timeout expired waiting for volumes to attach/mount for pod
> "default"/"test-vmdk". list of unattached/unmounted volumes=[test-volume]
>
>
>
>
>
>
>
> The volume is there (/volume/myDisk.vmdk) for the first test, and auto-creating
> the volume also fails. Tested using the paths, from datastore cluster + datastore
> down to /vmfs/volumes/55b828da-b978a6d4-6619-002655e59984/volumes:
>
>
>
> datastore: DS_TICTOC01/volumes
>
> datastore: ticsdata/DS_TICTOC01/volumes
>
>
>
> The user making the connection is the default Administrator, and I have used
> it for years to create assorted other VMs, so I know that's good.  No errors
> in vSphere vCenter either.
>
>
>
> I am using vSphere 6.1 / Kubernetes 1.7; vSphere DRS is enabled, using
> local drives to create the DS cluster. Juju had no issues deploying to
> them.
>
>
>
> What am I missing, or what could I try?
>
>
>
> Cheers
>
>
>
>
>
> Micheal


Re: Juju Kubernetes vSphere storage

2017-09-07 Thread Micheal B
Thanks!

 

Stuck on 

 

Step-6: Add flags to the controller-manager, API server, and kubelet to enable the 
vSphere Cloud Provider. Add the following flags to the kubelet running on every node 
and to the controller-manager and API server pod manifest files.

--cloud-provider=vsphere

--cloud-config=

 

 

Tried this; it did not make a difference.

 

Configuring Masters

Edit or create the master configuration file on all masters 
(/etc/origin/master/master-config.yaml by default) and update the contents of 
the apiServerArguments and controllerArguments sections with the following:

 

kubernetesMasterConfig:
  admissionConfig:
    pluginConfig:
      {}
  apiServerArguments:
    cloud-provider:
    - "vsphere"
    cloud-config:
    - "/etc/vsphere/vsphere.conf"
  controllerArguments:
    cloud-provider:
    - "vsphere"
    cloud-config:
    - "/etc/vsphere/vsphere.conf"

When triggering a containerized installation, only the /etc/origin and 
/var/lib/origin directories are mounted to the master and node container. 
Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/.

Configuring Nodes

Edit or create the node configuration file on all nodes 
(/etc/origin/node/node-config.yaml by default) and update the contents of the 
kubeletArguments section:

 

kubeletArguments:
  cloud-provider:
    - "vsphere"
  cloud-config:
    - "/etc/vsphere/vsphere.conf"

When triggering a containerized installation, only the /etc/origin and 
/var/lib/origin directories are mounted to the master and node container. 
Therefore, node-config.yaml must be in /etc/origin/node rather than /etc/.

From: Tim Van Steenburgh <tim.van.steenbu...@canonical.com>
Date: Thursday, September 7, 2017 at 6:33 AM
To: Micheal B <tic...@tictoc.us>
Cc: juju <Juju@lists.ubuntu.com>
Subject: Re: Juju Kubernetes vSphere storage

 

Hi Micheal,

 

Have you enabled the vsphere cloud provider for kubernetes as documented here: 
https://kubernetes.io/docs/getting-started-guides/vsphere/ ?

 

Tim

 

On Thu, Sep 7, 2017 at 4:06 AM, Micheal B <tic...@tictoc.us> wrote:

While working through 
https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere 
to test the different storage types on my vSphere lab, I seem to either have a 
bug or am not able to copy and paste some code ☺

 

None of them work; all fail with pretty much the same error.

 

 

MountVolume.SetUp failed for volume "test-volume" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk /var/lib/kubelet/pods/aa94ec10-9349-11e7-a663-005056a192ad/volumes/kubernetes.io~vsphere-volume/test-volume [bind]
Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk does not exist

 

Unable to mount volumes for pod 
"test-vmdk_default(aa94ec10-9349-11e7-a663-005056a192ad)": timeout expired 
waiting for volumes to attach/mount for pod "default"/"test-vmdk". list of 
unattached/unmounted volumes=[test-volume]

 

 

 

The volume is there (/volume/myDisk.vmdk) for the first test, and auto-creating the 
volume also fails. Tested using the paths, from datastore cluster + datastore down to 
/vmfs/volumes/55b828da-b978a6d4-6619-002655e59984/volumes:

 

datastore: DS_TICTOC01/volumes

datastore: ticsdata/DS_TICTOC01/volumes

 

The user making the connection is the default Administrator, and I have used it 
for years to create assorted other VMs, so I know that's good.  No errors in 
vSphere vCenter either.

 

I am using vSphere 6.1 / Kubernetes 1.7; vSphere DRS is enabled, using local 
drives to create the DS cluster. Juju had no issues deploying to them.

 

What am I missing, or what could I try?

 

Cheers

Micheal


Re: Juju Kubernetes vSphere storage

2017-09-07 Thread Tim Van Steenburgh
Hi Micheal,

Have you enabled the vsphere cloud provider for kubernetes as documented
here: https://kubernetes.io/docs/getting-started-guides/vsphere/ ?

Tim

On Thu, Sep 7, 2017 at 4:06 AM, Micheal B  wrote:

> While working through https://github.com/kubernetes/examples/tree/master/staging/volumes/vsphere
> to test the different storage types on my vSphere lab, I seem to either
> have a bug or am not able to copy and paste some code ☺
>
>
>
> None of them work; all fail with pretty much the same error.
>
>
>
>
>
> MountVolume.SetUp failed for volume "test-volume" : mount failed: exit status 32
> Mounting command: mount
> Mounting arguments: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk /var/lib/kubelet/pods/aa94ec10-9349-11e7-a663-005056a192ad/volumes/kubernetes.io~vsphere-volume/test-volume [bind]
> Output: mount: special device /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[DS_TICTOC01] volumes/myDisk does not exist
>
>
>
> Unable to mount volumes for pod 
> "test-vmdk_default(aa94ec10-9349-11e7-a663-005056a192ad)":
> timeout expired waiting for volumes to attach/mount for pod
> "default"/"test-vmdk". list of unattached/unmounted volumes=[test-volume]
>
>
>
>
>
>
>
> The volume is there (/volume/myDisk.vmdk) for the first test, and auto-creating
> the volume also fails. Tested using the paths, from datastore cluster + datastore
> down to /vmfs/volumes/55b828da-b978a6d4-6619-002655e59984/volumes:
>
>
>
> datastore: DS_TICTOC01/volumes
>
> datastore: ticsdata/DS_TICTOC01/volumes
>
>
>
> The user making the connection is the default Administrator, and I have used
> it for years to create assorted other VMs, so I know that's good.  No errors
> in vSphere vCenter either.
>
>
>
> I am using vSphere 6.1 / Kubernetes 1.7; vSphere DRS is enabled, using
> local drives to create the DS cluster. Juju had no issues deploying to
> them.
>
>
>
> What am I missing, or what could I try?
>
>
>
> Cheers
>
>
>
>
>
> Micheal
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju