Re: Kubernetes charms now support 1.6!

2017-04-13 Thread Samuel Cozannet
Yooohoo!! Congrats!

I tested the GPU deployment and it worked flawlessly. I have another team
using it now; it also worked OOTB.

I'll be updating all my blog posts to outline the changes.

Good job on snaps as well; they make reconfiguration of the cluster so
much easier. I intend to do a short write-up about this in the near
future.

++
Sam




--
Samuel Cozannet
Cloud, Big Data and IoT Strategy Team
Business Development - Cloud and ISV Ecosystem
Changing the Future of Cloud
Ubuntu   / Canonical UK LTD  / Juju

samuel.cozan...@canonical.com
mob: +33 616 702 389
skype: samnco
Twitter: @SaMnCo_23



Kubernetes charms now support 1.6!

2017-04-12 Thread Matt Bruzek
We are proud to release the latest Charms supporting Kubernetes version
1.6.1!


Kubernetes 1.6 is a major milestone for the community; we’ve got a full
write-up of features and support on our blog
<https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/>
Getting Started

Here’s the simplest way to get a Kubernetes cluster up and running on an
Ubuntu 16.04 system:

sudo snap install conjure-up --classic
conjure-up kubernetes


During the installation, conjure-up will ask which cloud you want to
deploy on and prompt you for the proper credentials. If you’re deploying
to local containers (LXD), see these instructions
<https://kubernetes.io/docs/getting-started-guides/ubuntu/local/> for
localhost-specific considerations.
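
If you already know your target, conjure-up can also be run headlessly by
naming the spell and cloud on the command line; a minimal sketch, assuming
the localhost (LXD) provider is already set up:

# Headless deploy against local LXD containers:
conjure-up kubernetes localhost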

For production-grade deployments and cluster lifecycle management, it is
recommended to read the full Canonical Distribution of Kubernetes
documentation.
Upgrading an existing cluster

If you’ve got a cluster already deployed, we’ve got instructions to help
get you upgraded. If possible, deploying a new cluster will be the easiest
route. Otherwise, the instructions for upgrading are outlined here:
https://insights.ubuntu.com/2017/04/12/general-availability-of-kubernetes-1-6-on-ubuntu/#upgrades
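
For reference, the in-place path boils down to a juju upgrade-charm call
per application; a minimal sketch (follow the linked instructions for the
exact order and any additional steps):

# Upgrade each charm in place; see the blog post above for the full sequence:
juju upgrade-charm etcd
juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
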
Changes in this release

   - Support for Kubernetes v1.6, with the current release being 1.6.1.

   - Installation of components via snaps: kubectl, kube-apiserver,
     kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. To
     learn more about snaps: https://snapcraft.io

   - Added an ‘allow-privileged’ config option on the kubernetes-master
     and kubernetes-worker charms (see the sketch after this list). Valid
     values are true|false|auto (default: auto). If the value is ‘auto’,
     containers will run in unprivileged mode unless GPU hardware is
     detected on a worker node. If there are GPUs, or the value is true,
     Kubernetes will set `--allow-privileged=true`. Otherwise the flag is
     set to false.

   - Added GPU support (beta). If Nvidia GPU hardware is detected on a
     worker node, Nvidia drivers and CUDA packages will be installed, and
     kubelet will be restarted with the flags required to use the GPU
     hardware. The ‘allow-privileged’ config option must be ‘true’ or
     ‘auto’.

      - Nvidia driver version = 375.26; CUDA version = 8.0.61; these will
        be configurable in future charm releases.

      - GPU support does not currently work on LXD.

      - This feature is beta; feedback on the implementation is welcome.

   - Added support for running your own private registry; see the docs for
     instructions.
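
As a quick illustration of the new option, ‘allow-privileged’ is toggled
with a one-line juju config call on either charm; a minimal sketch:

# Force privileged containers on (valid values: true, false, auto):
juju config kubernetes-master allow-privileged=true
juju config kubernetes-worker allow-privileged=true

# Check the current value:
juju config kubernetes-master allow-privileged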

General Fixes:

   - Fixed a bug in the kubeapi-load-balancer not properly forwarding
     SPDY/HTTP2 traffic for `kubectl exec` commands.
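
A quick way to confirm the fix on an existing cluster is to run an exec
through the load-balanced API endpoint; a minimal sketch, assuming a
running pod named my-pod (hypothetical name):

# `kubectl exec` streams through the kubeapi-load-balancer; this now works:
kubectl exec -it my-pod -- /bin/sh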

Etcd-specific changes:

   - Installation of etcd and etcdctl is now done using the `snap install`
     command.

   - We support upgrading the previous etcd charm to the latest charm with
     the snap delivery mechanism. See the manual upgrade process for
     updating existing etcd clusters.
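
After the upgrade you can confirm the snap-based delivery straight from a
unit; a minimal sketch using juju run (assumes an etcd unit named etcd/0):

# List the snaps installed on the first etcd unit:
juju run --unit etcd/0 'snap list'

# etcdctl ships alongside etcd:
juju run --unit etcd/0 'etcdctl --version'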

Changes to the bundles and layers:

   - Added a registry action to the kubernetes-worker layer, which deploys
     a Docker registry in Kubernetes (both additions are sketched below).

   - Added support for the kube-proxy cluster-cidr option.
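
Both additions are driven through the usual Juju verbs; a minimal sketch.
The registry action name comes from this release, while the cluster-cidr
config key shown here is a hypothetical illustration; check the charm’s
config for the exact name:

# Deploy a Docker registry in the cluster via the new worker action:
juju run-action kubernetes-worker/0 registry
juju show-action-status

# Hypothetical key name; consult `juju config kubernetes-worker`:
juju config kubernetes-worker cluster-cidr=10.1.0.0/16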

Test results

The Canonical Distribution of Kubernetes runs daily tests to verify it
works with the upstream code. As part of the Kubernetes test
infrastructure we upload daily test runs. The test results are available
on the dashboard. Follow along with our progress here:

https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node/
How to contact us

We're normally found in the Kubernetes Slack channels and attend these
Special Interest Group (SIG) meetings regularly:

  - sig-cluster-lifecycle <https://kubernetes.slack.com/messages/sig-cluster-lifecycle/>
  - sig-cluster-ops
  - sig-onprem

Operators are an important part of Kubernetes; we encourage you to
participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels
<http://kubernetes.io/community/>; feel free to reach out to us. As
always, PRs, recommendations, and bug reports are welcome:

https://github.com/juju-solutions/bundle-canonical-kubernetes
-- 
Juju mailing list
Juju@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/juju