[openstack-dev] [containers][magnum] Make certs insecure in magnum drivers

2017-02-10 Thread Kevin Lefevre
Hi,

This change (https://review.openstack.org/#/c/383493/) makes certificate 
requests to the Magnum API insecure, since that is a common use case.

In the Swarm drivers, the make-cert.py script is written in Python, whereas 
for Kubernetes on CoreOS and Atomic it is a shell script.

I wanted to make the same change (https://review.openstack.org/#/c/430755/), but 
it gets flagged by Bandit because of the insecure TLS usage in the Python 
requests package.

I know that we should support custom CAs in the future, but if insecure 
requests are the default right now (as in the previously merged change), what 
should we do?

Do we disable Bandit for the Swarm drivers? Or do we use the same scripts 
(and keep things as simple as possible) for all the drivers, possibly without 
Python, as it is not included in CoreOS?
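For reference, a minimal sketch of how the TLS options could be kept in one place so the insecure default stays explicit (names like build_request_kwargs are hypothetical, not existing Magnum code; Bandit's check for this pattern is B501, request_with_no_cert_validation):

```python
def build_request_kwargs(insecure=True, ca_file=None):
    """TLS options for python-requests calls to the Magnum API.

    Bandit's B501 check fires on requests.get(url, verify=False);
    appending '# nosec' to that call site silences it once the
    insecure default is deliberate.
    """
    # A CA bundle path enables validation against a custom CA
    # (the future use case mentioned above); verify=False disables
    # certificate validation entirely.
    if ca_file:
        return {"timeout": 30, "verify": ca_file}
    return {"timeout": 30, "verify": not insecure}
```

Usage would then be something like `requests.get(url, **build_request_kwargs())  # nosec`, keeping the Bandit exemption on a single, clearly intentional line.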


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Swarm Mode template

2017-02-01 Thread Kevin Lefevre
On Tue, 2017-01-31 at 21:15 +0100, Spyros Trigazis wrote:
> Hi,
> 
> The hack-ish way is to check whether the current master has a different
> IP than the swarm_api_ip and, based on that, decide whether to swarm
> init or join. The proper way is to have two resource groups (as you
> said), one for the primary master and one for the secondary masters.
> This requires some plumbing though.
> 
> We decided to have a _v2 driver in /contrib initially. I have a working
> prototype based on fedora-25 (docker 1.12.6). I can push it and we can
> work on it together, if you want.

Yes I'd be happy to. I think this is something we should push forward
as swarm mode brings a lot more COE-like features (in comparison with
Kubernetes) than swarm legacy.

I'd also like to do the same with CoreOS :) 
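The hack-ish check Spyros describes could look roughly like this (just a sketch; the parameter name swarm_api_ip is an assumption about what the Heat template exposes to each node):

```python
def choose_swarm_action(node_ip, swarm_api_ip):
    """Decide whether this master should initialize the swarm or join it.

    The primary master is the one whose address the Heat template wired
    up as the swarm API endpoint; every other master joins that endpoint.
    """
    if node_ip == swarm_api_ip:
        return "init"  # primary master: run 'docker swarm init'
    return "join"      # secondary master: run 'docker swarm join'
```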

> 
> Spyros
> 
> On 31 January 2017 at 20:52, Kevin Lefevre wrote:
> > On Tue, 2017-01-31 at 17:01 +0100, Spyros Trigazis wrote:
> > > Hi.
> > >
> > > I have done it by checking the IP address of the master. The
> > > current state of the Heat drivers doesn't allow the distinction
> > > between master > 1 and master = 1.
> > 
> > Please, could you elaborate on this?
> > 
> > Also, what is your opinion on starting a new swarm driver for swarm
> > mode?
> > 
> > > Spyros
> > >
> > > On 31 January 2017 at 16:33, Kevin Lefevre wrote:
> > > > Hi, Docker 1.13 has been released with several improvements that
> > > > bring swarm mode principles closer to Kubernetes, such as
> > > > docker-compose service deployment in swarm mode.
> > > >
> > > > I'd like to implement a v2 swarm template. I don't know if it
> > > > has already been discussed.
> > > >
> > > > Swarm mode is a bit different from, but a lot simpler to deploy
> > > > than, swarm legacy.
> > > >
> > > > In Kubernetes you can deploy multiple masters at the same time,
> > > > but in swarm mode you have to:
> > > > - bootstrap a first Docker node
> > > > - run docker swarm init
> > > > - get a token (worker or manager)
> > > > - bootstrap the other nodes
> > > > - use the manager or worker token depending on the manager count.
> > > >
> > > > I don't know what the best way to do this in Heat is. I'm sure
> > > > there are multiple options (I'm not an expert in Heat, so I
> > > > don't know if they are feasible):
> > > >
> > > > - Bootstrap a first server.
> > > > - Wait for it to be ready, run docker swarm init, and get both
> > > > the manager and worker tokens.
> > > > - If the manager count is > 1, bootstrap another resource group
> > > > for the extra managers, which will use the manager token.
> > > > - Bootstrap the rest of the workers using the worker token.
> > > >
> > > > The difficulty is handling multiple masters properly; I'd like
> > > > to hear your ideas about that.
> > > >
> > > > --
> > > > Kevin Lefevre
> > 
> > --
> > Kevin Lefevre
-- 
Kevin Lefevre



Re: [openstack-dev] [containers][magnum] Swarm Mode template

2017-01-31 Thread Kevin Lefevre
On Tue, 2017-01-31 at 17:01 +0100, Spyros Trigazis wrote:
> Hi.
> 
> I have done it by checking the IP address of the master. The current
> state of the Heat drivers doesn't allow the distinction between
> master > 1 and master = 1.

Please, could you elaborate on this?

Also, what is your opinion on starting a new swarm driver for swarm
mode?

> Spyros
> 
> On 31 January 2017 at 16:33, Kevin Lefevre wrote:
> > Hi, Docker 1.13 has been released with several improvements that
> > bring swarm mode principles closer to Kubernetes, such as
> > docker-compose service deployment in swarm mode.
> > 
> > I'd like to implement a v2 swarm template. I don't know if it has
> > already been discussed.
> > 
> > Swarm mode is a bit different from, but a lot simpler to deploy
> > than, swarm legacy.
> > 
> > In Kubernetes you can deploy multiple masters at the same time, but
> > in swarm mode you have to:
> > - bootstrap a first Docker node
> > - run docker swarm init
> > - get a token (worker or manager)
> > - bootstrap the other nodes
> > - use the manager or worker token depending on the manager count.
> > 
> > I don't know what the best way to do this in Heat is. I'm sure there
> > are multiple options (I'm not an expert in Heat, so I don't know if
> > they are feasible):
> > 
> > - Bootstrap a first server.
> > - Wait for it to be ready, run docker swarm init, and get both the
> > manager and worker tokens.
> > - If the manager count is > 1, bootstrap another resource group for
> > the extra managers, which will use the manager token.
> > - Bootstrap the rest of the workers using the worker token.
> > 
> > The difficulty is handling multiple masters properly; I'd like to
> > hear your ideas about that.
> > 
> > --
> > Kevin Lefevre
-- 
Kevin Lefevre



[openstack-dev] [containers][magnum] Swarm Mode template

2017-01-31 Thread Kevin Lefevre
Hi, Docker 1.13 has been released with several improvements that bring
swarm mode principles closer to Kubernetes, such as docker-compose
service deployment in swarm mode.

I'd like to implement a v2 swarm template. I don't know if it has
already been discussed.

Swarm mode is a bit different from, but a lot simpler to deploy than,
swarm legacy.

In Kubernetes you can deploy multiple masters at the same time, but in
swarm mode you have to:
- bootstrap a first Docker node
- run docker swarm init
- get a token (worker or manager)
- bootstrap the other nodes
- use the manager or worker token depending on the manager count.

I don't know what the best way to do this in Heat is. I'm sure there are
multiple options (I'm not an expert in Heat, so I don't know if they are
feasible):

- Bootstrap a first server.
- Wait for it to be ready, run docker swarm init, and get both the
manager and worker tokens.
- If the manager count is > 1, bootstrap another resource group for the
extra managers, which will use the manager token.
- Bootstrap the rest of the workers using the worker token.

The difficulty is handling multiple masters properly; I'd like to hear
your ideas about that.
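The token-selection step in the flow above could be sketched as follows (a sketch only; the resource-group index and parameter names are assumptions for illustration, not existing Magnum parameters):

```python
def select_join_token(node_index, manager_count, manager_token, worker_token):
    """Pick the swarm join token for the node at resource-group position
    node_index.

    Node 0 is the primary master that ran 'docker swarm init' and
    produced both tokens, so it does not join anything.
    """
    if node_index == 0:
        return None
    # Nodes 1..manager_count-1 become the extra managers; everyone
    # else joins as a worker.
    return manager_token if node_index < manager_count else worker_token
```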


-- 
Kevin Lefevre



Re: [openstack-dev] [magnum] CoreOS template v2

2017-01-25 Thread Kevin Lefevre
Hi,

I did write a blueprint a while ago but did not start to implement it.

https://blueprints.launchpad.net/magnum/+spec/coreos-best-pratice


> On 24 Jan 2017 at 23:16, Spyros Trigazis wrote:
> 
> Or start writing down (in the BP) what you want to put in the driver.
> Network, lbaas, scripts, the order of the scripts and then we can see
> if it's possible to adapt to the current coreos driver.
> 
> Spyros
> 
> On Jan 24, 2017 22:54, "Hongbin Lu" wrote:
> As Spyros mentioned, an option is to start by cloning the existing
> templates. However, I have a concern with this approach because it will
> incur a lot of duplication. An alternative approach is to modify the
> existing CoreOS templates in place. It might be a little harder to
> implement, but it saves you the overhead of deprecating the old version
> and rolling out the new one.
> 
> Best regards,
> Hongbin
> 
> From: Spyros Trigazis [mailto:strig...@gmail.com]
> Sent: January-24-17 3:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] CoreOS template v2
> 
> 
> 
> Hi.
> 
> IMO, you should add a BP and start by adding a v2 driver in /contrib.
> 
> Cheers,
> Spyros
> 
> On Jan 24, 2017 20:44, "Kevin Lefevre" wrote:
> 
> Hi,
> 
> The CoreOS template is not really up to date and in sync with upstream
> CoreOS « Best Practice » (https://github.com/coreos/coreos-kubernetes);
> it is more a port of the fedora atomic template, but CoreOS has its own
> Kubernetes deployment method.
> 
> I'd like to implement the changes to sync the Kubernetes deployment on
> CoreOS to the latest Kubernetes version (1.5.2), along with the standard
> components according to the CoreOS Kubernetes guide:
>   - « default » add-ons like kube-dns, heapster and kube-dashboard
> (kube-ui has been deprecated for a long time and is obsolete)
>   - Canal for network policy (Calico and Flannel)
>   - support for rkt as the container engine
>   - sane default options recommended by Kubernetes upstream (admission
> control: https://kubernetes.io/docs/admin/admission-controllers/, using
> service accounts…)
>   - and of course every new parameter added to the HOT templates.
> 
> These changes are difficult to implement as is (due to the fragment
> concept; everything is a bit messy between the common and the
> driver-specific template fragments, especially for CoreOS).
> 
> I'm wondering whether it is better to clone the CoreOS v1 template to a
> new v2 template and build from there?





[openstack-dev] [magnum] CoreOS template v2

2017-01-24 Thread Kevin Lefevre
Hi,

The CoreOS template is not really up to date and in sync with upstream
CoreOS « Best Practice » (https://github.com/coreos/coreos-kubernetes);
it is more a port of the fedora atomic template, but CoreOS has its own
Kubernetes deployment method.

I'd like to implement the changes to sync the Kubernetes deployment on
CoreOS to the latest Kubernetes version (1.5.2), along with the standard
components according to the CoreOS Kubernetes guide:
  - « default » add-ons like kube-dns, heapster and kube-dashboard
(kube-ui has been deprecated for a long time and is obsolete)
  - Canal for network policy (Calico and Flannel)
  - support for rkt as the container engine
  - sane default options recommended by Kubernetes upstream (admission
control: https://kubernetes.io/docs/admin/admission-controllers/, using
service accounts…)
  - and of course every new parameter added to the HOT templates.

These changes are difficult to implement as is (due to the fragment
concept; everything is a bit messy between the common and the
driver-specific template fragments, especially for CoreOS).

I'm wondering whether it is better to clone the CoreOS v1 template to a
new v2 template and build from there?