[Openstack] IMPORTANT: This list is retired

2018-12-03 Thread Jeremy Stanley
This mailing list was replaced by a new
openstack-disc...@lists.openstack.org mailing list[0] as of Monday,
November 19, 2018, and starting now it will no longer receive any new
messages. The archive of prior messages will remain published in the
expected location indefinitely for future reference.

For convenience, posts to the old list address will be rerouted to
the new list for an indeterminate period of time, but please correct
the address in your replies if you notice this.

See my original notice[1] (and the many reminders sent in months
since) for an explanation of this change.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] http://lists.openstack.org/pipermail/openstack/2018-September/047005.html
-- 
Jeremy Stanley



[Openstack-operators] IMPORTANT: This list is retired

2018-12-03 Thread Jeremy Stanley
This mailing list was replaced by a new
openstack-disc...@lists.openstack.org mailing list[0] as of Monday,
November 19, 2018, and starting now it will no longer receive any new
messages. The archive of prior messages will remain published in the
expected location indefinitely for future reference.

For convenience, posts to the old list address will be rerouted to
the new list for an indeterminate period of time, but please correct
the address in your replies if you notice this.

See my original notice[1] (and the many reminders sent in months
since) for an explanation of this change.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] 
http://lists.openstack.org/pipermail/openstack-operators/2018-September/015919.html
-- 
Jeremy Stanley



[openstack-dev] IMPORTANT: This list is retired

2018-12-03 Thread Jeremy Stanley
This mailing list was replaced by a new
openstack-disc...@lists.openstack.org mailing list[0] as of Monday,
November 19, 2018, and starting now it will no longer receive any new
messages. The archive of prior messages will remain published in the
expected location indefinitely for future reference.

For convenience, posts to the old list address will be rerouted to
the new list for an indeterminate period of time, but please correct
the address in your replies if you notice this.

See my original notice[1] (and the many reminders sent in months
since) for an explanation of this change.

[0] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
-- 
Jeremy Stanley



Re: [Openstack-operators] [openstack-dev][magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Magnum queens uses kubernetes 1.9.3 by default.
You can upgrade to v1.10.11-1; from a quick test,
v1.11.5-1 is also compatible with 1.9.x.

We are working to make this painless; sorry you
have to ssh to the nodes for now.
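
Until I post the full queens instructions, a rough sketch for the
apiserver would be the same pattern as the rocky commands quoted below,
assuming the v1.10.11-1 tag is published for the same openstackmagnum
images and that queens also runs the apiserver as an atomic system
container on the master node(s):

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.10.11-1 kube-apiserver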

Cheers,
Spyros

On Mon, 3 Dec 2018 at 23:24, Spyros Trigazis  wrote:

> Hello all,
>
> Following the vulnerability [0], with magnum rocky and the kubernetes
> driver on fedora atomic you can use this tag "v1.11.5-1" [1] for new
> clusters. To upgrade the apiserver in existing clusters, on the master
> node(s) you can run:
>
> sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
> sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver
>
> You can upgrade the other k8s components with similar commands.
>
> I'll share instructions for magnum queens tomorrow morning CET time.
>
> Cheers,
> Spyros
>
> [0] https://github.com/kubernetes/kubernetes/issues/71411
> [1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/
>


[Openstack-operators] [openstack-dev][magnum] kubernetes images for magnum rocky

2018-12-03 Thread Spyros Trigazis
Hello all,

Following the vulnerability [0], with magnum rocky and the kubernetes driver
on fedora atomic you can use this tag "v1.11.5-1" [1] for new clusters. To
upgrade the apiserver in existing clusters, on the master node(s) you can run:

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-apiserver:v1.11.5-1 kube-apiserver

You can upgrade the other k8s components with similar commands.
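
For example, a sketch of the matching commands for the controller
manager (the image and system-container names here are assumptions on
my side; check the names on your nodes with 'sudo atomic containers list'):

sudo atomic pull --storage ostree docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1
sudo atomic containers update --rebase docker.io/openstackmagnum/kubernetes-controller-manager:v1.11.5-1 kube-controller-manager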

I'll share instructions for magnum queens tomorrow morning CET time.

Cheers,
Spyros

[0] https://github.com/kubernetes/kubernetes/issues/71411
[1] https://hub.docker.com/r/openstackmagnum/kubernetes-apiserver/tags/


Re: [Openstack] unexpected distribution of compute instances in queens

2018-12-03 Thread Mike Carden
> Presuming you are deploying Rocky or Queens,

Yep, it's Queens.


> It goes in the nova.conf file under the [placement] section:
>
> randomize_allocation_candidates = true

In TripleO land it seems the setting may need to go into something like
nova-scheduler.yaml and be laid down via a re-deploy, though I'm not yet
sure of the exact mechanism.

The nova_scheduler runs in a container on a 'controller' host.
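
Purely as a sketch of what I might try (I haven't verified the exact
hiera keys), an extra environment file along these lines, followed by a
re-deploy, ought to lay the option down for the scheduler via the
puppet-nova nova::config hook:

parameter_defaults:
  ControllerExtraConfig:
    nova::config::nova_config:
      placement/randomize_allocation_candidates:
        value: true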

-- 
MC

Re: [OpenStack-Infra] Adding index and views/dashboards for Kata to ELK stack

2018-12-03 Thread Whaley, Graham
Hi Clark,

> There is more to it than that. This service is part of the CI system we 
> operate.
> The way you consume it is through the use of Zuul jobs. If you want to inject
> data into our Logstash/Elasticsearch system you do that by configuring your 
> jobs
> in Zuul to do so. We are not in the business of operating one off solutions to
> problems. We support a large variety of users and projects and using generic
> flexible systems like this one is how we make that viable.
> 
> Additionally these systems are community managed so that we can work
> together to solve these problems in a way that gives the infra team 
> appropriate
> administrative access while still allowing you and others to get specific work
> done. Rather than avoid this tooling can we please attempt to use it when it 
> has
> preexisting solutions to problems like this? We will happily do our best to 
> make
> re-consumption of existing systems a success, but building one off solutions 
> to
> solve problems that are already solved does not scale.
> 

Sure, OK, understood...

[snip]
> 
> I wasn't directly involved with the decision making at the time but back at 
> the
> beginning of the year my understanding was that Jenkins was chosen over Zuul
> for expediency. This wasn't a bad choice as the Github support in Zuul was 
> still
> quite new (though having more users would likely have pushed it along more
> quickly). It probably would be worthwhile to decide separately if Jenkins is 
> the
> permanent solution to the Kata CI tooling problem, or if we should continue to
> push for Zuul. If we want to push for Zuul then I think we need to stop 
> choosing
> Jenkins as a default and start implementing new stuff in Zuul then move the
> existing CI as Kata is able.
> 
> As for who has Zuul access, the Infra team has administrative access to the
> service. Zuul configuration for the existing Kata jobs is done through a repo
> managed by the infra team, but anyone can push and propose changes to this
> repo. The reason for this is Zuul wants to gate its config updates to prevent 
> new
> configs from being merged without being tested. Bypassing this testing does
> allow you to break your Zuul configuration. Currently we aren't gating Kata 
> with
> Zuul so the configs live in the Infra repo. If we started gating Kata changes 
> with
> Zuul we could move the configs into Kata repos and Kata could self manage
> them.
> 
> Looking ahead Zuul is multitenant aware, and we could deploy a Kata tenant.
> This would give Kata a bit more freedom to configure its Zuul pipeline 
> behavior
> as desired, though gating is still strongly recommended as that will prevent
> broken configs from merging.

I spoke with some of the other Kata folks - we agreed I'd try to move the Kata 
metrics CI into Zuul utilizing the packet.net hardware, and we'll see how that 
pans out. I think that will help both sides understand the current state of 
kata/zuul so we can move things forward there.

Wrt the packet.net slaves, I believe we can do that using some of the 
packet.net/zuul integration work done by John Studarus - John and I had some 
chats at the Summit in Berlin.
https://opensource.com/article/18/10/building-zuul-cicd-cloud

I'll do some Zuul reading and work out how I need to PR the additional 
ansible/yaml items to the infra repos to add the metrics build/runs (I see the 
repos and code, and a metrics run is very similar to a normal kata CI run - and 
to begin with we can do those runs in the VM builders to test out the flows 
before moving to the packet.net hardware).
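
As a first pass, I imagine the job definition itself would look roughly
like the below (job name, playbook paths and nodeset are purely
illustrative on my part, just to check I have the shape of it right):

- job:
    name: kata-containers-metrics
    parent: base
    description: Run the Kata Containers metrics suite and collect its JSON results.
    run: playbooks/kata-metrics/run.yaml
    post-run: playbooks/kata-metrics/post.yaml
    nodeset: ubuntu-bionic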

[move this down to the end...]
> 
> No, we would inject the data through the existing test node -> Zuul ->
> Logstash -> Elasticsearch path.

This might be one bit we have to work out. The metrics runs generate raw JSON 
results, and the best method I found previously for landing that directly into 
logstash (and thus elastic) was the socket filebeat. It is not clear in my head 
how that ties in with Zuul - will it fit in with the infra?

Graham


Re: [openstack-dev] Stepping down from Neutron core team

2018-12-03 Thread Nate Johnston
On Sun, Dec 02, 2018 at 11:08:25PM +0900, Hirofumi Ichihara wrote:
 
> I’m stepping down from the core team because my role changed and I can no
> longer take on the responsibilities of a neutron core.

Thank you very much for all of the insightful reviews over the years.
Good luck on your next adventure!

Nate Johnston (njohnston)


Re: [Openstack] unexpected distribution of compute instances in queens

2018-12-03 Thread Jay Pipes

On 11/30/2018 05:52 PM, Mike Carden wrote:
>> Have you set the placement_randomize_allocation_candidates CONF option
>> and are still seeing the packing behaviour?
>
> No I haven't. Where would be the place to do that? In a nova.conf
> somewhere that the nova-scheduler containers on the controller hosts
> could pick it up?
>
> Just about to deploy for realz with about forty x86 compute nodes, so it
> would be really nice to sort this first. :)


Presuming you are deploying Rocky or Queens,

It goes in the nova.conf file under the [placement] section:

randomize_allocation_candidates = true

The nova.conf file should be the one used by nova-scheduler.

Best,
-jay


Re: [openstack-dev] Stepping down from Neutron core team

2018-12-03 Thread Ranjan Krchaubey
Hi all,

Can anyone help me to resolve error 111 on keystone?

Thanks & Regards
Ranjan Kumar 
Mob: 9284158762

> On 03-Dec-2018, at 1:39 PM, Slawomir Kaplonski  wrote:
> 
> Hi,
> 
> Thanks for all Your work in Neutron and good luck in Your new role.
> 
> — 
> Slawek Kaplonski
> Senior software engineer
> Red Hat
> 
>> Message written by Hirofumi Ichihara on 02.12.2018, at 15:08:
>> 
>> Hi all,
>> 
>> I’m stepping down from the core team because my role changed and I can no
>> longer take on the responsibilities of a neutron core.
>> 
>> My start with neutron was 5 years ago. I had many good experiences with the
>> neutron team.
>> Today neutron is a great project. Neutron gets new reviewers, contributors
>> and users.
>> Keep on being a great community.
>> 
>> Thanks,
>> Hirofumi

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-12-03 Thread Bogdan Dobrelya

On 12/3/18 10:34 AM, Bogdan Dobrelya wrote:

Hi Kevin.
Puppet not only creates config files but also executes service-dependent 
steps, like db sync, so neither '[base] -> [puppet]' nor 
'[base] -> [service]' would be enough on its own. That requires some 
service-specific code to be included into the *config* images as well.


PS. There is a related spec [0] created by Dan, please take a look and 
propose your feedback.


[0] https://review.openstack.org/620062


I'm terribly sorry, but that's a corrected link [0] to that spec.

[0] https://review.openstack.org/620909



On 11/30/18 6:48 PM, Fox, Kevin M wrote:

Still confused by:
[base] -> [service] -> [+ puppet]
not:
[base] -> [puppet]
and
[base] -> [service]
?

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Friday, November 30, 2018 5:31 AM
To: Dan Prince; openstack-dev@lists.openstack.org; 
openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of 
containers for security and size of images (maintenance) sakes


On 11/30/18 1:52 PM, Dan Prince wrote:

On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:

On 11/29/18 6:42 PM, Jiří Stránský wrote:

On 28. 11. 18 18:29, Bogdan Dobrelya wrote:

On 11/28/18 6:02 PM, Jiří Stránský wrote:




Reiterating again on previous points:

-I'd be fine removing systemd. But lets do it properly and
not via 'rpm
-ev --nodeps'.
-Puppet and Ruby *are* required for configuration. We can
certainly put
them in a separate container outside of the runtime service
containers
but doing so would actually cost you much more
space/bandwidth for each
service container. As both of these have to get downloaded to
each node
anyway in order to generate config files with our current
mechanisms
I'm not sure this buys you anything.


+1. I was actually under the impression that we concluded
yesterday on
IRC that this is the only thing that makes sense to seriously
consider.
But even then it's not a win-win -- we'd gain some security by
leaner
production images, but pay for it with space+bandwidth by
duplicating
image content (IOW we can help achieve one of the goals we had
in mind
by worsening the situation w/r/t the other goal we had in
mind.)

Personally i'm not sold yet but it's something that i'd
consider if we
got measurements of how much more space/bandwidth usage this
would
consume, and if we got some further details/examples about how
serious
are the security concerns if we leave config mgmt tools in
runtime
images.

IIRC the other options (that were brought forward so far) were
already
dismissed in yesterday's IRC discussion and on the reviews.
Bin/lib bind
mounting being too hacky and fragile, and nsenter not really
solving the
problem (because it allows us to switch to having different
bins/libs
available, but it does not allow merging the availability of
bins/libs
from two containers into a single context).


We are going in circles here I think


+1. I think too much of the discussion focuses on "why it's bad
to have
config tools in runtime images", but IMO we all sorta agree
that it
would be better not to have them there, if it came at no cost.

I think to move forward, it would be interesting to know: if we
do this
(i'll borrow Dan's drawing):


|base container| --> |service container| --> |service container w/ Puppet installed|

How much more space and bandwidth would this consume per node
(e.g.
separately per controller, per compute). This could help with
decision
making.


As I've already evaluated in the related bug, that is:

puppet-* modules and manifests ~ 16MB
puppet with dependencies ~61MB
dependencies of the seemingly largest dependency, systemd ~190MB

that would be an extra layer size for each of the container
images to be
downloaded/fetched into registries.


Thanks, i tried to do the math of the reduction vs. inflation in
sizes
as follows. I think the crucial point here is the layering. If we
do
this image layering:


base| --> |+ service| --> |+ Puppet|


we'd drop ~267 MB from base image, but we'd be installing that to
the
topmost level, per-component, right?


Given we detached systemd from puppet, cronie et al, that would be
267-190MB, so the math below would be looking much better


Would it be worth writing a spec that summarizes what action items are
being taken to optimize our base image with regards to systemd?


Perhaps it would be. But honestly, I see nothing big enough to require a
full-blown spec; it is just changing RPM deps and layers for container
images. I'm tracking the systemd changes here [0],[1],[2], btw (if
accepted, it should be working as of Fedora 28 (or 29), I hope)

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672




It seems like the general consensus is that cleaning up some of the RPM
dependencies so that we don't install systemd is the biggest win.

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-12-03 Thread Bogdan Dobrelya

Hi Kevin.
Puppet not only creates config files but also executes service-dependent 
steps, like db sync, so neither '[base] -> [puppet]' nor 
'[base] -> [service]' would be enough on its own. That requires some 
service-specific code to be included into the *config* images as well.


PS. There is a related spec [0] created by Dan, please take a look and 
propose your feedback.


[0] https://review.openstack.org/620062
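
To put rough numbers on that trade-off (the sizes are the ones I measured
in the related bug and quoted further down this thread; the per-node image
count is only an assumption for illustration):

  puppet kept in the shared base layer:  ~267 MB pulled once per node
                                         (systemd ~190 + puppet ~61 + modules ~16)
  puppet moved to a per-service layer:   ~77 MB extra per service image
                                         (puppet ~61 + modules ~16, systemd dropped)
  with ~20 service images per controller: 20 x 77 MB ~= 1.5 GB of duplicated
                                         layer content to fetch and store per node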

On 11/30/18 6:48 PM, Fox, Kevin M wrote:

Still confused by:
[base] -> [service] -> [+ puppet]
not:
[base] -> [puppet]
and
[base] -> [service]
?

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Friday, November 30, 2018 5:31 AM
To: Dan Prince; openstack-dev@lists.openstack.org; 
openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On 11/30/18 1:52 PM, Dan Prince wrote:

On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:

On 11/29/18 6:42 PM, Jiří Stránský wrote:

On 28. 11. 18 18:29, Bogdan Dobrelya wrote:

On 11/28/18 6:02 PM, Jiří Stránský wrote:




Reiterating again on previous points:

-I'd be fine removing systemd. But lets do it properly and
not via 'rpm
-ev --nodeps'.
-Puppet and Ruby *are* required for configuration. We can
certainly put
them in a separate container outside of the runtime service
containers
but doing so would actually cost you much more
space/bandwidth for each
service container. As both of these have to get downloaded to
each node
anyway in order to generate config files with our current
mechanisms
I'm not sure this buys you anything.


+1. I was actually under the impression that we concluded
yesterday on
IRC that this is the only thing that makes sense to seriously
consider.
But even then it's not a win-win -- we'd gain some security by
leaner
production images, but pay for it with space+bandwidth by
duplicating
image content (IOW we can help achieve one of the goals we had
in mind
by worsening the situation w/r/t the other goal we had in
mind.)

Personally i'm not sold yet but it's something that i'd
consider if we
got measurements of how much more space/bandwidth usage this
would
consume, and if we got some further details/examples about how
serious
are the security concerns if we leave config mgmt tools in
runtime
images.

IIRC the other options (that were brought forward so far) were
already
dismissed in yesterday's IRC discussion and on the reviews.
Bin/lib bind
mounting being too hacky and fragile, and nsenter not really
solving the
problem (because it allows us to switch to having different
bins/libs
available, but it does not allow merging the availability of
bins/libs
from two containers into a single context).


We are going in circles here I think


+1. I think too much of the discussion focuses on "why it's bad
to have
config tools in runtime images", but IMO we all sorta agree
that it
would be better not to have them there, if it came at no cost.

I think to move forward, it would be interesting to know: if we
do this
(i'll borrow Dan's drawing):


|base container| --> |service container| --> |service container w/ Puppet installed|

How much more space and bandwidth would this consume per node
(e.g.
separately per controller, per compute). This could help with
decision
making.


As I've already evaluated in the related bug, that is:

puppet-* modules and manifests ~ 16MB
puppet with dependencies ~61MB
dependencies of the seemingly largest dependency, systemd ~190MB

that would be an extra layer size for each of the container
images to be
downloaded/fetched into registries.


Thanks, i tried to do the math of the reduction vs. inflation in
sizes
as follows. I think the crucial point here is the layering. If we
do
this image layering:


base| --> |+ service| --> |+ Puppet|


we'd drop ~267 MB from base image, but we'd be installing that to
the
topmost level, per-component, right?


Given we detached systemd from puppet, cronie et al, that would be
267-190MB, so the math below would be looking much better


Would it be worth writing a spec that summarizes what action items are
being taken to optimize our base image with regards to systemd?


Perhaps it would be. But honestly, I see nothing big enough to require a
full-blown spec; it is just changing RPM deps and layers for container
images. I'm tracking the systemd changes here [0],[1],[2], btw (if
accepted, it should be working as of Fedora 28 (or 29), I hope)

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672




It seems like the general consensus is that cleaning up some of the RPM
dependencies so that we don't install systemd is the biggest win.

What confuses me is why are there still patches posted to move Puppet
out of the base layer when we agree moving it out of the base layer
would 

Re: [openstack-dev] Stepping down from Neutron core team

2018-12-03 Thread Slawomir Kaplonski
Hi,

Thanks for all Your work in Neutron and good luck in Your new role.

— 
Slawek Kaplonski
Senior software engineer
Red Hat

> Message written by Hirofumi Ichihara on 02.12.2018, at 15:08:
> 
> Hi all,
> 
> I’m stepping down from the core team because my role changed and I can no
> longer take on the responsibilities of a neutron core.
> 
> My start with neutron was 5 years ago. I had many good experiences with the
> neutron team.
> Today neutron is a great project. Neutron gets new reviewers, contributors
> and users.
> Keep on being a great community.
> 
> Thanks,
> Hirofumi