Re: [openstack-dev] Stepping down from Neutron core team

2018-12-03 Thread Slawomir Kaplonski
Hi,

Thanks for all your work in Neutron and good luck in your new role.

— 
Slawek Kaplonski
Senior software engineer
Red Hat

> Message written by Hirofumi Ichihara on 02.12.2018, at 15:08:
> 
> Hi all,
> 
> I’m stepping down from the core team because my role has changed and I can
> no longer take on the responsibilities of a Neutron core.
> 
> I started with Neutron 5 years ago and had many good experiences with the
> Neutron team.
> Today Neutron is a great project; it keeps gaining new reviewers,
> contributors, and users.
> Keep on being a great community.
> 
> Thanks,
> Hirofumi



Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-12-03 Thread Bogdan Dobrelya

Hi Kevin.
Puppet not only creates config files but also executes service-dependent
steps, like db sync, so neither '[base] -> [puppet]' nor
'[base] -> [service]' would be enough on its own. That requires some
service-specific code to be included in the *config* images as well.
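As a rough illustration only (example service and commands, not the exact
TripleO wiring), the config stage does both kinds of work:

  # sketch: render a config file via a puppet module, then run a
  # service-dependent step (schema migration); names are illustrative
  puppet apply -e 'include ::keystone'                    # writes keystone.conf
  su -s /bin/sh -c 'keystone-manage db_sync' keystone     # db sync step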


P.S. There is a related spec [0] created by Dan; please take a look and
provide your feedback.


[0] https://review.openstack.org/620062

On 11/30/18 6:48 PM, Fox, Kevin M wrote:

Still confused by:
[base] -> [service] -> [+ puppet]
not:
[base] -> [puppet]
and
[base] -> [service]
?

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Friday, November 30, 2018 5:31 AM
To: Dan Prince; openstack-dev@lists.openstack.org; 
openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On 11/30/18 1:52 PM, Dan Prince wrote:

On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:

On 11/29/18 6:42 PM, Jiří Stránský wrote:

On 28. 11. 18 18:29, Bogdan Dobrelya wrote:

On 11/28/18 6:02 PM, Jiří Stránský wrote:




Reiterating previous points:

- I'd be fine removing systemd. But let's do it properly and not via
  'rpm -ev --nodeps'.
- Puppet and Ruby *are* required for configuration. We can certainly put
  them in a separate container outside of the runtime service containers,
  but doing so would actually cost you much more space/bandwidth for each
  service container. As both of these have to get downloaded to each node
  anyway in order to generate config files with our current mechanisms,
  I'm not sure this buys you anything.


+1. I was actually under the impression that we concluded yesterday on IRC
that this is the only thing that makes sense to seriously consider. But even
then it's not a win-win -- we'd gain some security by leaner production
images, but pay for it with space+bandwidth by duplicating image content
(IOW we can help achieve one of the goals we had in mind by worsening the
situation w/r/t the other goal we had in mind).

Personally I'm not sold yet, but it's something that I'd consider if we got
measurements of how much more space/bandwidth usage this would consume, and
if we got some further details/examples about how serious the security
concerns are if we leave config mgmt tools in runtime images.

IIRC the other options (that were brought forward so far) were already
dismissed in yesterday's IRC discussion and on the reviews: bin/lib bind
mounting being too hacky and fragile, and nsenter not really solving the
problem (because it allows us to switch to having different bins/libs
available, but it does not allow merging the availability of bins/libs from
two containers into a single context).


We are going in circles here, I think.


+1. I think too much of the discussion focuses on "why it's bad to have
config tools in runtime images", but IMO we all sorta agree that it would be
better not to have them there, if it came at no cost.

I think to move forward, it would be interesting to know: if we do this
(I'll borrow Dan's drawing):

|base container| --> |service container| --> |service container w/ Puppet installed|

How much more space and bandwidth would this consume per node (e.g.
separately per controller, per compute)? This could help with decision
making.


As I've already evaluated in the related bug, that is:

puppet-* modules and manifests ~ 16MB
puppet with dependencies ~ 61MB
dependencies of the seemingly largest dependency, systemd ~ 190MB

that would be an extra layer size for each of the container images to be
downloaded/fetched into registries.
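(Not from the thread -- just a sketch of how such numbers could be
reproduced on an image or host that already has puppet installed; the
package list is an assumption, not the exact set measured in the bug:)

  # sum the installed sizes of puppet and a few of its typical dependencies
  rpm -q --queryformat '%{NAME} %{SIZE}\n' puppet ruby facter hiera \
    | awk '{mb += $2/1024/1024} END {printf "%.0f MB\n", mb}'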


Thanks, I tried to do the math of the reduction vs. inflation in sizes as
follows. I think the crucial point here is the layering. If we do this image
layering:

|base| --> |+ service| --> |+ Puppet|

we'd drop ~267 MB from the base image, but we'd be installing that at the
topmost level, per-component, right?
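A rough per-node estimate for illustration (the per-layer sizes are from
above; the number of unique service images is an assumption, not a measured
figure):

  puppet layer added per service image   ~ 61MB + 16MB   = ~77MB
  unique service images per controller   (assumed)       = ~20
  extra download per controller          ~ 77MB x 20     = ~1.5GB
  one-time saving in the shared base     ~ 77MB (or ~267MB if systemd goes too)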


Given we detached systemd from puppet, cronie et al., that would be
267-190 MB, so the math below would be looking much better.


Would it be worth writing a spec that summarizes what action items are
being taken to optimize our base image with regard to systemd?


Perhaps it would be. But honestly, I see nothing big enough to require a
full-blown spec; it's just changing RPM deps and layers for container
images. I'm tracking the systemd changes here [0],[1],[2], btw (if accepted,
it should be working as of Fedora 28 (or 29), I hope).

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672




It seems like the general consensus is that cleaning up some of the RPM
dependencies so that we don't install systemd is the biggest win.

What confuses me is why there are still patches posted to move Puppet out of
the base layer when we agree moving it out of the base layer would actually

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-12-03 Thread Bogdan Dobrelya

On 12/3/18 10:34 AM, Bogdan Dobrelya wrote:

Hi Kevin.
Puppet not only creates config files but also executes service-dependent
steps, like db sync, so neither '[base] -> [puppet]' nor
'[base] -> [service]' would be enough on its own. That requires some
service-specific code to be included in the *config* images as well.


P.S. There is a related spec [0] created by Dan; please take a look and
provide your feedback.


[0] https://review.openstack.org/620062


I'm terribly sorry, but here is the corrected link [0] to that spec.

[0] https://review.openstack.org/620909




Re: [openstack-dev] Stepping down from Neutron core team

2018-12-03 Thread Ranjan Krchaubey
Hi all,

Can anyone help me resolve error 111 on Keystone?

Thanks & Regards
Ranjan Kumar 
Mob: 9284158762

> On 03-Dec-2018, at 1:39 PM, Slawomir Kaplonski  wrote:
> 
> Hi,
> 
> Thanks for all your work in Neutron and good luck in your new role.
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 
>> Message written by Hirofumi Ichihara on 02.12.2018, at 15:08:
>> 
>> Hi all,
>> 
>> I’m stepping down from the core team because my role has changed and I can
>> no longer take on the responsibilities of a Neutron core.
>> 
>> I started with Neutron 5 years ago and had many good experiences with the
>> Neutron team.
>> Today Neutron is a great project; it keeps gaining new reviewers,
>> contributors, and users.
>> Keep on being a great community.
>> 
>> Thanks,
>> Hirofumi
> 
> 


Re: [openstack-dev] Stepping down from Neutron core team

2018-12-03 Thread Nate Johnston
On Sun, Dec 02, 2018 at 11:08:25PM +0900, Hirofumi Ichihara wrote:
 
> I’m stepping down from the core team because my role has changed and I can
> no longer take on the responsibilities of a Neutron core.

Thank you very much for all of the insightful reviews over the years.
Good luck on your next adventure!

Nate Johnston (njohnston)


[openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Hello all,

Following the vulnerability [0], with magnum rocky and the kubernetes driver
on fedora atomic you can use the tag "v1.11.5-1" [1] for new clusters. To
upgrade the apiserver in existing clusters, on the master node(s) you can run:

  sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
  sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver

You can upgrade the other k8s components with similar commands.
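For example, for the controller manager this would presumably look like the
following (the image and container names are assumed to follow the same
pattern as above -- please verify them on your deployment):

  sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1
  sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1 kube-controller-manager
  sudo atomic containers list   # check that the new tag is in use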

I'll share instructions for magnum queens tomorrow morning CET time.

Cheers,
Spyros

[0] https://github.com/kubernetes/kubernetes/issues/71411
[1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/

Re: [openstack-dev] [magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Magnum queens uses kubernetes 1.9.3 by default. You can upgrade to
v1.10.11-1. From a quick test, v1.11.5-1 is also compatible with 1.9.x.
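Presumably (not verified here) the same commands as in the rocky
announcement apply on the master node(s), just with the v1.10.11-1 tag:

  sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1
  sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1 kube-apiserver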

We are working to make this painless; sorry you have to ssh to the nodes
for now.

Cheers,
Spyros

On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis  wrote:

> Hello all,
>
> Following the vulnerability [0], with magnum rocky and the kubernetes driver
> on fedora atomic you can use the tag "v1.11.5-1" [1] for new clusters. To
> upgrade the apiserver in existing clusters, on the master node(s) you can run:
>
>   sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
>   sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver
>
> You can upgrade the other k8s components with similar commands.
>
> I'll share instructions for magnum queens tomorrow morning CET time.
>
> Cheers,
> Spyros
>
> [0] https://github.com/kubernetes/kubernetes/issues/71411
> [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/
>

[openstack-dev] IMPORTANT: This list is retired

2018-12-03 Thread Jeremy Stanley
This mailing list was replaced by a new
openstack-disc...@lists.openstack.org mailing list[0] as of Monday,
November 19, 2018, and starting now it will no longer receive any new
messages. The archive of prior messages will remain published in the
expected location indefinitely for future reference.

For convenience, posts to the old list address will be rerouted to the new
list for an indeterminate period of time, but please correct the address in
your replies if you notice this.

See my original notice[1] (and the many reminders sent in the months since)
for an explanation of this change.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
-- 
Jeremy Stanley

