Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Monty Taylor

On 05/16/2017 05:39 AM, Sean Dague wrote:

On 05/15/2017 10:00 PM, Adrian Turjak wrote:



On 16/05/17 13:29, Lance Bragstad wrote:



On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak wrote:



Based on the specs that are currently up in Keystone-specs, I
would highly recommend not doing this per user.

The scenario I imagine is you have a sysadmin at a company who
created a ton of these for various jobs and then leaves. The
company then needs to keep his user account around, or create tons
of new API keys and disable his user only once all the scripts he
had keys for are replaced. Or, more often than not, disable his
user and then cry as everything breaks and no one really knows why,
because no one fully documented it all or read the docs.
Keeping them per project and unrelated to the user makes more
sense, as then someone else on your team can regenerate the
secrets for the specific keys as they want. Sure, we can advise
them to use generic user accounts within which to create these API
keys, but that implies password sharing, which is bad.


That said, I'm curious why we would make these a thing separate
from users. In reality, if you can create users, you can create
API-specific users. Would this be a different authentication
mechanism? Why? Why not just continue the work on better access
control and let people create users for this? Because, let's be
honest, isn't a user already an API key? The issue (and Ron's
spec mentions this) is a user having too much access; how would
this fix that, when the issue is that we don't have fine-grained
policy in the first place? How does a new auth mechanism fix that?
Both specs mention roles, so I assume it really doesn't. If we had
fine-grained policy we could just create users specific to a
service with only the roles it needs, and the same problem is
solved without any special API, new auth, or different 'user-lite'
object model. It feels like this is trying to solve an issue that
is better solved by fixing the existing problems.

I like the idea behind these specs, but... I'm curious what
exactly they are trying to solve. Not to mention that if you
wanted to automate anything larger, such as creating sub-projects
and setting up a basic network for each new developer to get
access to your team, this wouldn't work unless you could have your
API key inherit to subprojects or something more complex, at which
point they may as well be users. Users already work for all of
this; why reinvent the wheel when really the issue isn't the wheel
itself, but the steering mechanism (access control/policy in this case)?


All valid points, but IMO the discussions around API keys didn't set
out to fix deep-rooted issues with policy. We have several specs in
flight across projects to help mitigate the real issues with policy
[0] [1] [2] [3] [4].

I see an API key implementation as something that provides a cleaner
fit and finish once we've addressed the policy bits. It's also a
familiar concept for application developers, which was the use case
the session was targeting.

I probably should have laid out the related policy work before jumping
into API keys. We've already committed a bunch of keystone resources to
policy improvements this cycle, but I'm hoping we can work API keys
and policy improvements in parallel.

[0] https://review.openstack.org/#/c/460344/
[1] https://review.openstack.org/#/c/462733/
[2] https://review.openstack.org/#/c/464763/
[3] https://review.openstack.org/#/c/433037/
[4] https://review.openstack.org/#/c/427872/


I'm well aware of the policy work, and it is fantastic to see it
progressing! I can't wait to actually be able to play with that stuff!
We've been painstakingly tweaking the JSON policy files, which is a
giant mess.

I'm just concerned that this feels like a feature we don't really need,
when really it's just a slight variant of a user with a new auth model
(that is really just another flavour of username/password). The sole
reason most of the other cloud services have API keys is because a user
can't talk to the API directly. OpenStack does not have that problem:
users are API keys. So I think what we really need to consider is what
exact benefit API keys actually give us that won't be provided by
users and better policy.
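To make the fine-grained-policy alternative concrete, here is a toy, purely illustrative sketch (this is not oslo.policy, and all action and role names are made up) of mapping API actions to required roles, so that a service-specific user can hold only the roles it needs:

```python
# Toy illustration of fine-grained policy: every API action maps to the
# set of roles allowed to perform it. Action and role names are hypothetical.

POLICY = {
    "compute:server:create": {"member", "server_admin"},
    "compute:server:delete": {"server_admin"},
    "identity:user:create": {"domain_admin"},
}

def is_authorized(action, user_roles):
    """Return True if any of the user's roles may perform the action."""
    allowed = POLICY.get(action, set())
    return bool(allowed & set(user_roles))

# A service-specific user holding only the single role its scripts need:
ci_bot_roles = ["member"]
print(is_authorized("compute:server:create", ci_bot_roles))  # True
print(is_authorized("identity:user:create", ci_bot_roles))   # False
```

With rules this granular, a "user created just for one service" leaks nothing beyond the actions its roles grant, which is the point Adrian is making.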


The benefits of API keys come if they work the same across all
deployments, so your applications can depend on them. That means the
application has to be able to:

1. provision an API key with normal user credentials
2. set/reduce permissions on it with those same user credentials
3. operate with those credentials at the project level (so that when you
leave, someone else in your dept can take over)
4. have all its resources built in the same project that you are in, so
API-key-created resources could interact with 
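A minimal sketch of what those properties could look like, as a toy in-memory model — this is hypothetical and not Keystone's actual API; the class and method names are invented for illustration:

```python
import secrets

class ProjectApiKeys:
    """Toy model of project-owned API keys (hypothetical, not Keystone)."""

    def __init__(self, project_id):
        self.project_id = project_id
        self.keys = {}  # key_id -> {"secret": ..., "permissions": set()}

    def provision(self, user_roles, permissions):
        # 1. provisioned with normal user credentials; the key can never
        #    exceed the permissions the creating user holds
        if not set(permissions) <= set(user_roles):
            raise PermissionError("key cannot exceed creator's permissions")
        key_id = secrets.token_hex(8)
        self.keys[key_id] = {"secret": secrets.token_hex(32),
                             "permissions": set(permissions)}
        return key_id

    def reduce(self, key_id, permissions):
        # 2. permissions can later be reduced, never expanded
        current = self.keys[key_id]["permissions"]
        self.keys[key_id]["permissions"] = current & set(permissions)

    def rotate(self, key_id):
        # 3. project-level ownership: any teammate can rotate the secret
        #    when the original creator leaves
        self.keys[key_id]["secret"] = secrets.token_hex(32)
        return self.keys[key_id]["secret"]

team_keys = ProjectApiKeys("proj-123")
key_id = team_keys.provision(user_roles=["compute", "network"],
                             permissions=["compute"])
team_keys.rotate(key_id)  # a teammate rotates the secret after the creator leaves
```

Because the key lives on the project rather than on a user, the departure scenario from earlier in the thread reduces to a secret rotation rather than an account archaeology exercise.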

Re: [openstack-dev] [keystone] [Pike] Need Exemption On Submitted Spec for the Keystone

2017-05-16 Thread Lance Bragstad
That sounds good - I'll review the spec before today's meeting [0]. Will
someone be around to answer questions about the spec if there are any?


[0] http://eavesdrop.openstack.org/#Keystone_Team_Meeting

On Mon, May 15, 2017 at 11:24 PM, Mh Raies  wrote:

> Hi Lance,
>
>
>
> We submitted a blueprint and its spec last week.
>
> Blueprint - https://blueprints.launchpad.net/keystone/+spec/api-implemetation-required-to-download-identity-policies
>
> Spec - https://review.openstack.org/#/c/463547/
>
>
>
> As the Keystone Pike proposal freeze was already completed on April 14th,
> 2017, we need your help to proceed with this spec.
>
> Implementation of this spec has also started and is being addressed by -
> https://review.openstack.org/#/c/463543/
>
>
>
> So, if we can get an exemption to proceed with the Spec review and
> approval process, it will be a great help for us.
>
>
>
>
>
>
>
>
> *Mh Raies*
>
> *Senior Solution Integrator*
> *Ericsson** Consulting and Systems Integration*
>
> *Gurgaon, India | Mobile **+91 9901555661*
>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Lance Bragstad
On Tue, May 16, 2017 at 8:54 AM, Monty Taylor  wrote:

> On 05/16/2017 05:39 AM, Sean Dague wrote:
>
>> On 05/15/2017 10:00 PM, Adrian Turjak wrote:
>>
>> The benefits of API key are 

Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Chris Dent

On Sun, 14 May 2017, Sean Dague wrote:

So, the basic idea is, services will optionally take an inbound
X-OpenStack-Request-ID, which will be strictly validated against the
format (req-$uuid). They will continue to always generate one as well.
When the context is built (which is typically about 3 more steps down the
paste pipeline), we'll check that the service user was involved, and if
not, reset the request_id to the locally generated one. We'll log both
the global and local request ids. All of these changes happen in
oslo.middleware, oslo.context, and oslo.log, and most projects won't need
any changes to get this infrastructure.
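A rough sketch of the validate-and-maybe-reset logic described above (the helper names are invented for illustration; the real implementation would live in oslo.middleware/oslo.context):

```python
import re
import uuid

# Strict format check: "req-" followed by a canonical lowercase UUID.
REQUEST_ID_RE = re.compile(
    r"^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

def local_request_id():
    """Generate a fresh local request id."""
    return "req-%s" % uuid.uuid4()

def effective_request_id(inbound_id, caller_is_service_user):
    """Pick the request id to propagate.

    Keep the inbound global id only when it is well-formed AND the call
    came in via the service user; otherwise fall back to a locally
    generated id (the real code would log both ids).
    """
    if (inbound_id and REQUEST_ID_RE.match(inbound_id)
            and caller_is_service_user):
        return inbound_id
    return local_request_id()
```

The service-user check is what keeps an arbitrary end user from smuggling in a chosen request id, which is the concern Chris probes below in his reply.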


I may not be understanding this paragraph, but this sounds like you
are saying: accept a valid and authentic incoming request id, but
only use it in ongoing requests if the service user was involved in
those requests.

If that's correct, I'd suggest not doing that because it confuses
traceability of a series of things. Instead, always use the request
id if it is valid and authentic.

But maybe you mean "if the request id could not be proven authentic,
don't use it"?

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [ironic] Ironic-UI review requirements - single core reviews

2017-05-16 Thread Dmitry Tantsur

On 05/15/2017 09:10 PM, Julia Kreger wrote:

All,

In our new reality, in order to maximize velocity, I propose that we
loosen the review requirements for ironic-ui to allow faster
iteration. To this end, I suggest we move ironic-ui to using a single
core reviewer for code approval, along the same lines as Horizon[0].


OK, the Horizon example makes me feel a bit better about this :)



Our new reality is a fairly grim one, but there is always hope. We
have several distinct active core reviewers. The problem is available
time to review, and then getting any two reviewers to be on the same
page, at the same time, with the same patch set. Reducing the
requirements will help us iterate faster and reduce the time a revision
waits for approval to land, which should ultimately help everyone
contributing.

If there are no objections from my fellow ironic folk, then I propose
we move to this for ironic-ui immediately.


I'm fine with that. As I mentioned to you, it's clearly more important
to be able to move forward than to make sure we never merge a
sub-perfect patch. Especially for leaf projects like the UI.




Thanks,

-Julia

[0]: 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/113029.html







Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Flavio Percoco

On 16/05/17 04:22 +, Steven Dake (stdake) wrote:

Flavio,

Forgive the top post – outlook ftw.

I understand the concerns raised in this thread.  It is unclear whether this 
thread reflects the feeling of two TC members, or whether enough TC members 
care deeply about this issue to permanently limit OpenStack big tent projects’ 
ability to generate container images in various external artifact storage 
systems.  The point of discussion I see effectively raised in this thread is 
“OpenStack infra will not push images to dockerhub”.

I’d like clarification on whether this is a ruling from the TC, or simply an 
exploratory discussion.

If it is exploratory, it is prudent that OpenStack projects not be blocked by 
debate on this issue until the TC has made a ruling banning the creation of 
container images via OpenStack infrastructure.


Hey Steven,

It's nothing to do with the TC. It's a release management concern and I just
happen to have an opinion on it. :)

As Doug mentioned, OpenStack has (almost) never released binaries in any form.
This doesn't mean we can't revisit this "rule" but until that happens, the
concern stands.

Flavio


Regards
-steve

-Original Message-
From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 15, 2017 at 7:00 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

   On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
   >On 15 May 2017 at 12:12, Doug Hellmann  wrote:

   [huge snip]

   >>> > I'm raising the issue here to get some more input into how to
   >>> > proceed. Do other people think this concern is overblown? Can we
   >>> > mitigate the risk by communicating through metadata for the images?
   >>> > Should we stick to publishing build instructions (Dockerfiles, or
   >>> > whatever) instead of binary images? Are there other options I haven't
   >>> > mentioned?
   >>>
   >>> Today we do publish build instructions, that's what Kolla is. We also
   >>> publish built containers already, just we do it manually on release
   >>> today. If we decide to block it, I assume we should stop doing that
   >>> too? That will hurt users who use this piece of Kolla, and I'd hate
   >>> to hurt our users:(
   >>
   >> Well, that's the question. Today we have teams publishing those
   >> images themselves, right? And the proposal is to have infra do it?
   >> That change could be construed to imply that there is more of a
   >> relationship with the images and the rest of the community (remember,
   >> folks outside of the main community activities do not always make
   >> the same distinctions we do about teams). So, before we go ahead
   >> with that, I want to make sure that we all have a chance to discuss
   >> the policy change and its implications.
   >
   >Infra as vm running with infra, but team to publish it can be Kolla
   >team. I assume we'll be responsible to keep these images healthy...

   I think this is the gist of the concern and I'd like us to focus on it.

   As someone that used to consume these images from kolla's dockerhub account
   directly, I can confirm they are useful. However, I do share Doug's concern
   and the impact this may have on the community.

   From a release perspective, as Doug mentioned, we've avoided releasing
   projects in any kind of built form. This was also one of the concerns I
   raised when working on the proposal to support other programming languages.
   The problem of releasing built images goes beyond the infrastructure
   requirements. It's the message and the guarantees implied with the built
   product itself that are the concern here. And I tend to agree with Doug
   that this might be a problem for us as a community. Unfortunately, putting
   your name, Michal, as contact point is not enough. Kolla is not the only
   project producing container images and we need to be consistent in the way
   we release these images.

   Nothing prevents people from building their own images and uploading them
   to dockerhub. Having this as part of OpenStack's pipeline is a problem.

   Flavio

   P.S: note this goes against my container(ish) interests but it's a
   community-wide problem.

   --
   @flaper87
   Flavio Percoco




--
@flaper87
Flavio Percoco



Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Masayuki Igawa
+1!

-- 
  Masayuki Igawa
  masay...@igawa.me



On Tue, May 16, 2017, at 05:22 PM, Andrea Frittoli wrote:
> Hello team,
> 
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
> 
> Over the past two cycles Fanglei has been steadily contributing to
> Tempest and its community. She's done a great deal of work in making
> Tempest code cleaner, easier to read, maintain, and debug, fixing bugs
> and removing cruft. Both her code and her reviews demonstrate a very
> good understanding of Tempest internals and of the project's future
> direction. I believe Fanglei will make an excellent addition to the team.
> 
> As per the usual, if the current Tempest core team members would please
> vote +1 or -1 (veto) on the nomination when you get a chance. We'll keep
> the polls open for 5 days or until everyone has voted.
> 
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl
> 
> Thank you,
> 
> Andrea (andreaf)


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> > 
> > > On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > > > I'm raising the issue here to get some more input into how to
> > > > proceed. Do other people think this concern is overblown? Can we
> > > > mitigate the risk by communicating through metadata for the images?
> > > > Should we stick to publishing build instructions (Dockerfiles, or
> > > > whatever) instead of binary images? Are there other options I haven't
> > > > mentioned?
> > > 
> > > Today we do publish build instructions, that's what Kolla is. We also
> > > publish built containers already, just we do it manually on release
> > > today. If we decide to block it, I assume we should stop doing that
> > > too? That will hurt users who use this piece of Kolla, and I'd hate
> > > to hurt our users:(
> > 
> > Well, that's the question. Today we have teams publishing those
> > images themselves, right? And the proposal is to have infra do it?
> > That change could be construed to imply that there is more of a
> > relationship with the images and the rest of the community (remember,
> > folks outside of the main community activities do not always make
> > the same distinctions we do about teams). So, before we go ahead
> > with that, I want to make sure that we all have a chance to discuss
> > the policy change and its implications.
> 
> Sorry for hijacking the thread, but we have a similar scenario, for example,
> in Sahara. It is about full VM images containing Hadoop/Spark/other big data
> stuff, and not containers, but it looks really the same.
> So far, ready-made images have been published under
> http://sahara-files.mirantis.com/images/upstream/, but we are looking to
> have them hosted on openstack.org, just like other artifacts.
> 
> We asked about this a few days ago on openstack-infra@, but no answer so far
> (the Summit didn't help):
> 
> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
> 
> I think that the answer to the question raised in this thread is definitely
> going to be relevant for our use case.
> 
> Ciao

Thanks for raising this. I think the same concerns apply to VM images.

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sam Yaple
I would like to bring up a subject that hasn't really been discussed in
this thread yet, forgive me if I missed an email mentioning this.

What I personally would like to see is a publishing infrastructure to allow
pushing built images to an internal infra mirror/repo/registry for
consumption of internal infra jobs (deployment tools like kolla-ansible and
openstack-ansible). The images built from infra mirrors with security
turned off are perfect for testing internally to infra.

If you build images properly in infra, then you will have an image that is
not security checked (no GPG verification of packages) and completely
unverifiable.
DockerHub/quay for obvious reasons. Security and verification being chief
among them. They are absolutely not images that should ever be run in
production and are only suited for testing. These are the only types of
images that can come out of infra.

Thanks,
SamYaple

On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
wrote:

> On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
> >> Flavio Percoco wrote:
> >> > From a release perspective, as Doug mentioned, we've avoided
> releasing projects
> >> > in any kind of built form. This was also one of the concerns I raised
> when
> >> > working on the proposal to support other programming languages. The
> problem of
> >> > releasing built images goes beyond the infrastructure requirements.
> It's the
> >> > message and the guarantees implied with the built product itself that
> are the
> >> > concern here. And I tend to agree with Doug that this might be a
> problem for us
> >> > as a community. Unfortunately, putting your name, Michal, as contact
> point is
> >> > not enough. Kolla is not the only project producing container images
> and we need
> >> > to be consistent in the way we release these images.
> >> >
> >> > Nothing prevents people for building their own images and uploading
> them to
> >> > dockerhub. Having this as part of the OpenStack's pipeline is a
> problem.
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >>
> >
> > At the forum we talked about putting test images on a "private"
> > repository hosted on openstack.org somewhere. I think that's option
> > 3 from your list?
> >
> > Paul may be able to shed more light on the details of the technology
> > (maybe it's just an Apache-served repo, rather than a full blown
> > instance of Docker's service, for example).
>
> Issues with that:
>
> 1. An Apache-served repo is harder to use, because we want to follow the
> Docker API and we'd have to reimplement it
> 2. Running a registry is a single command
> 3. If we host it in infra, in case someone actually uses it (there
> will be people like that), it will potentially eat up a lot of
> network traffic
> 4. With local caching of images (already working) in nodepools, we
> lose the complexity of mirroring registries across nodepools
>
> So bottom line: having dockerhub/quay.io is simply easier.
>
> > Doug
> >

Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Eric Fried
> The idea is that a regular user calling into a service should not
> be able to set the request id, but outgoing calls from that service
> to other services as part of the same request would.

Yeah, so can anyone explain to me why this is a real problem?  If a
regular user wanted to be a d*ck and inject a bogus (or worse, I
imagine, duplicated) request-id, can any actual harm come out of it?  Or
does it just cause confusion to the guy reading the logs later?

(I'm assuming, of course, that the format will still be validated
strictly (req-$UUID) to preclude code injection kind of stuff.)

Thanks,
Eric (efried)
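For illustration, a strict format check along the lines Eric assumes would indeed reject injection-style values — a hedged sketch, not the actual oslo.middleware code:

```python
import re

# Strict req-$UUID validation; only "req-" plus a canonical lowercase
# UUID passes, so free-form text can never be smuggled into the logs.
REQUEST_ID_RE = re.compile(
    r"^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

attempts = [
    "req-3adc0b38-5d2c-4b7a-9c0f-111111111111",  # well-formed: accepted
    "req-deadbeef\ninjected log line",           # log-injection attempt: rejected
    "req-<script>alert(1)</script>",             # markup injection: rejected
]
for candidate in attempts:
    print(bool(REQUEST_ID_RE.match(candidate)))  # True, False, False
```

That leaves only the bogus-but-well-formed (or deliberately duplicated) id case, which is exactly the traceability-confusion question raised above.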


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean Dague
On 05/16/2017 11:17 AM, Sean McGinnis wrote:
> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>> Folks,
>>
>> See $TITLE :)
>>
>> Thanks,
>> Dims
>>
> 
> My preference would be to have an #openstack-tc channel.
> 
> One thing I like about the dedicated meeting time was if I was not able to
> attend, or when I was just a casual observer, it was easy to catch up on
> what was discussed because it was all in one place and did not have any
> non TC conversations interlaced.
> 
> If we just use -dev, there is a high chance there will be a lot of cross-
> talk during discussions. There would also be a lot of effort to grep
> through the full day of activity to find things relevant to TC
> discussions. If we have a dedicated channel for this, it makes it very
> easy for anyone to know where to go to get a clean, easy to read capture
> of all relevant discussions. I think that will be important with the
> lack of a captured and summarized meeting to look at.

The thing is, IRC should never be a summary or long-term storage medium.
IRC is a discussion medium. It is a hallway track. It's where ideas
bounce around and lots are left on the floor, and there are lots of
misstatements as people explore things. It's not store-and-forward
messaging; it's realtime chat.

If we want digestible summaries with context, that's never IRC, and we
shouldn't expect people to look to IRC for that. It's source material at
best. I'm not sure of any IRC conversation that's ever been clean, easy
to read, and captures the entire context within it without jumping to
assumptions of shared background that the conversation participants
already have.

Summaries with context need to emerge from here for people to be able to
follow along (out to email or web), and work their way back into the
conversations.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Waines, Greg
thanks for the pointers Sam.

I took a quick look.
I agree that the VM Heartbeat / Health-check looks like a good fit into 
Masakari.

Currently your instance monitoring looks like it is strictly black-box
type monitoring through libvirt events.
Is that correct?
i.e. you do not do any intrusive type monitoring of the instance through
the QEMU Guest Agent facility, correct?

I think this is what VM Heartbeat / Health-check would add to Masakari.
Let me know if you agree.
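To make the heartbeat idea concrete, here is a toy sketch of a
heartbeat-timeout monitor. This is purely illustrative and not code from
Masakari or any proposed implementation; the class name, the `record_heartbeat`
API, and the timeout value are all assumptions made up for this example.

```python
import time

# Toy sketch of the VM heartbeat / health-check idea: a monitor records
# the last heartbeat seen per instance and flags instances whose last
# heartbeat is older than a configurable timeout.

class HeartbeatMonitor:
    def __init__(self, timeout_seconds=30.0, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock
        self.last_seen = {}  # instance id -> timestamp of last heartbeat

    def record_heartbeat(self, instance_id):
        """Called whenever a heartbeat arrives from the guest."""
        self.last_seen[instance_id] = self.clock()

    def unhealthy_instances(self):
        """Return instances whose last heartbeat exceeded the timeout."""
        now = self.clock()
        return [i for i, t in self.last_seen.items()
                if now - t > self.timeout]
```

A real implementation would feed `record_heartbeat` from an in-guest channel
(e.g. the QEMU Guest Agent) and report unhealthy instances to a recovery
service.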

Greg.

From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, May 15, 2017 at 9:36 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck 
Monitoring

Hi Greg,

In Masakari [0] for VMHA, we have already implemented a somewhat
similar function in masakari-monitors.
Masakari-monitors runs on the nova-compute node and monitors host,
process, and instance failures.
The Masakari instance monitor has functionality similar to what you
have described.
Please see [1] for more details on instance monitoring.
[0] https://wiki.openstack.org/wiki/Masakari
[1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

Once masakari-monitors detects a failure, it sends a notification to
masakari-api, which takes the appropriate recovery actions to recover
that VM.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>>
>> Flavio Percoco wrote:
>>>
>>> From a release perspective, as Doug mentioned, we've avoided releasing
>>> projects
>>> in any kind of built form. This was also one of the concerns I raised
>>> when
>>> working on the proposal to support other programming languages. The
>>> problem of
>>> releasing built images goes beyond the infrastructure requirements. It's
>>> the
>>> message and the guarantees implied with the built product itself that are
>>> the
>>> concern here. And I tend to agree with Doug that this might be a problem
>>> for us
>>> as a community. Unfortunately, putting your name, Michal, as contact
>>> point is
>>> not enough. Kolla is not the only project producing container images and
>>> we need
>>> to be consistent in the way we release these images.
>>>
>>> Nothing prevents people for building their own images and uploading them
>>> to
>>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>>
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>
>
> Agreed #4 is a bit unrealistic.
>
> Not sure I understand the difference between #2 and #3. Is it just the
> cadence?
>
> I'd prefer for these builds to have a daily cadence because it sets the
> expectations w.r.t maintenance right: "These images are daily builds and not
> certified releases. For stable builds you're better off building it
> yourself"

And daily builds are exactly what I wanted in the first place :) We
will probably keep publishing release packages too, but we can do that
as a so-called 3rd party. I also agree that [4] is completely
unrealistic, and I would be against putting such a heavy burden of
responsibility on any community, including Kolla.

While a daily cadence will send the message that these builds aren't
stable, the truth is that they will be more stable than what people
would normally build locally (again, they pass more gates). But I'm
totally fine with not saying that and letting people decide how they
want to use them.

So, can we move on with implementation?

Thanks!
Michal

>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> Flavio Percoco wrote:
>> > From a release perspective, as Doug mentioned, we've avoided releasing 
>> > projects
>> > in any kind of built form. This was also one of the concerns I raised when
>> > working on the proposal to support other programming languages. The 
>> > problem of
>> > releasing built images goes beyond the infrastructure requirements. It's 
>> > the
>> > message and the guarantees implied with the built product itself that are 
>> > the
>> > concern here. And I tend to agree with Doug that this might be a problem 
>> > for us
>> > as a community. Unfortunately, putting your name, Michal, as contact point 
>> > is
>> > not enough. Kolla is not the only project producing container images and 
>> > we need
>> > to be consistent in the way we release these images.
>> >
>> > Nothing prevents people for building their own images and uploading them to
>> > dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>>
>
> At the forum we talked about putting test images on a "private"
> repository hosted on openstack.org somewhere. I think that's option
> 3 from your list?
>
> Paul may be able to shed more light on the details of the technology
> (maybe it's just an Apache-served repo, rather than a full blown
> instance of Docker's service, for example).
>
> Doug
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Edward Leafe
On May 15, 2017, at 9:00 PM, Flavio Percoco  wrote:

> [huge snip]

Thank you! We don’t need 50K of repeated text in every response.

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Project updates - OpenStack Summit

2017-05-16 Thread Emilien Macchi
If you missed the TripleO project updates presentation, feel free to
watch the recording:
https://www.openstack.org/videos/boston-2017/project-update-triple0

and the slides:
https://docs.google.com/presentation/d/1knOesCs3HTqKvIl9iUZciUtE006ff9I3zhxCtbLZz4c

If you have any question or feedback regarding our roadmap, feel free
to use this thread to discuss about it on the public forum.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:12, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >>
>> >> Flavio Percoco wrote:
>> >>>
>> >>> From a release perspective, as Doug mentioned, we've avoided releasing
>> >>> projects
>> >>> in any kind of built form. This was also one of the concerns I raised
>> >>> when
>> >>> working on the proposal to support other programming languages. The
>> >>> problem of
>> >>> releasing built images goes beyond the infrastructure requirements. It's
>> >>> the
>> >>> message and the guarantees implied with the built product itself that are
>> >>> the
>> >>> concern here. And I tend to agree with Doug that this might be a problem
>> >>> for us
>> >>> as a community. Unfortunately, putting your name, Michal, as contact
>> >>> point is
>> >>> not enough. Kolla is not the only project producing container images and
>> >>> we need
>> >>> to be consistent in the way we release these images.
>> >>>
>> >>> Nothing prevents people for building their own images and uploading them
>> >>> to
>> >>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>> >>
>> >>
>> >> I totally subscribe to the concerns around publishing binaries (under
>> >> any form), and the expectations in terms of security maintenance that it
>> >> would set on the publisher. At the same time, we need to have images
>> >> available, for convenience and testing. So what is the best way to
>> >> achieve that without setting strong security maintenance expectations
>> >> for the OpenStack community ? We have several options:
>> >>
>> >> 1/ Have third-parties publish images
>> >> It is the current situation. The issue is that the Kolla team (and
>> >> likely others) would rather automate the process and use OpenStack
>> >> infrastructure for it.
>> >>
>> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> This would allow to automate the process, but it would be a bit weird to
>> >> use common infra resources to publish in a private repo.
>> >>
>> >> 3/ Publish transient (per-commit or daily) images
>> >> A "daily build" (especially if you replace it every day) would set
>> >> relatively-limited expectations in terms of maintenance. It would end up
>> >> picking up security updates in upstream layers, even if not immediately.
>> >>
>> >> 4/ Publish images and own them
>> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> those images and publish them officially.
>> >>
>> >> Personally I think (4) is not realistic. I think we could make (3) work,
>> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >
>> >
>> > Agreed #4 is a bit unrealistic.
>> >
>> > Not sure I understand the difference between #2 and #3. Is it just the
>> > cadence?
>> >
>> > I'd prefer for these builds to have a daily cadence because it sets the
>> > expectations w.r.t maintenance right: "These images are daily builds and 
>> > not
>> > certified releases. For stable builds you're better off building it
>> > yourself"
>>
>> And daily builds are exactly what I wanted in the first place:) We
>> probably will keep publishing release packages too, but we can be so
>> called 3rd party. I also agree [4] is completely unrealistic and I
>> would be against putting such heavy burden of responsibility on any
>> community, including Kolla.
>>
>> While daily cadence will send message that it's not stable, truth will
>> be that it will be more stable than what people would normally build
>> locally (again, it passes more gates), but I'm totally fine in not
>> saying that and let people decide how they want to use it.
>>
>> So, can we move on with implementation?
>
> I don't want the images published to docker hub. Are they still useful
> to you if they aren't published?

What do you mean? We need images available. Whether it's Dockerhub, an
infra-hosted registry, or any other mechanism, we need images that are
available and fresh without building them ourselves. Dockerhub/quay.io
is the least work for the infra team/resources.

> Doug
>
>>
>> Thanks!
>> Michal
>>
>> >
>> > Flavio
>> >
>> > --
>> > @flaper87
>> > Flavio Percoco
>> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2017-05-16 15:16:08 +0100:
> On Tue, 16 May 2017, Monty Taylor wrote:
> 
> > FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll with 
> > that until someone has a better idea. I'm uncrazy about it for two reasons:
> >
> > a) the word "key" implies things to people that may or may not be true 
> > here. 
> > If we do stick with it - we need some REALLY crisp language about what it 
> > is 
> > and what it isn't.
> >
> > b) Rackspace Public Cloud (and back in the day HP Public Cloud) have a 
> > thing 
> > called by this name. While what's written in the spec is quite similar in 
> > usage to that construct, I'm wary of re-using the name without the 
> > semantics 
> > actually being fully the same for risk of user confusion. "This uses 
> > api-key... which one?" Sean's email uses "APPKey" instead of "APIKey" - 
> > which 
> > may be a better term. Maybe just "ApplicationAuthorization"?
> 
> "api key" is a fairly common and generic term for "this magical
> thingie I can create to delegate my authority to some automation".
> It's also sometimes called "token", perhaps that's better (that's
> what GitHub uses, for example)? In either case the "api" bit is
> pretty important because it is the thing used to talk to the API.
> 
> I really hope we can avoid creating yet more special language for
> OpenStack. We've got an API. We want to send keys or tokens. Let's
> just call them that.
> 

+1

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Sean Dague
On 05/16/2017 11:28 AM, Eric Fried wrote:
>> The idea is that a regular user calling into a service should not
>> be able to set the request id, but outgoing calls from that service
>> to other services as part of the same request would.
> 
> Yeah, so can anyone explain to me why this is a real problem?  If a
> regular user wanted to be a d*ck and inject a bogus (or worse, I
> imagine, duplicated) request-id, can any actual harm come out of it?  Or
> does it just cause confusion to the guy reading the logs later?
> 
> (I'm assuming, of course, that the format will still be validated
> strictly (req-$UUID) to preclude code injection kind of stuff.)

Honestly, I don't know. I know it was once a concern. I'm totally happy
to remove the trust checking, knowing we could add it back in later if
required.

Maybe reach out to some public cloud providers to find out whether they
have any issues with it?
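For concreteness, the strict format check mentioned above could look like the
sketch below. The exact rule (a "req-" prefix followed by a canonical lowercase
UUID) is an assumption for illustration; the real middleware's validation may
differ.

```python
import re
import uuid

# Hypothetical strict validator for request ids in the req-$UUID form.
# Rejecting anything else precludes log/code-injection style abuse via
# user-supplied request ids.
REQUEST_ID_RE = re.compile(
    r"^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

def is_valid_request_id(value):
    """Return True if value looks like req-$UUID."""
    return bool(REQUEST_ID_RE.match(value))

def new_request_id():
    """Generate a request id in the req-$UUID form."""
    return "req-%s" % uuid.uuid4()
```

A service would accept an inbound id only when `is_valid_request_id` passes
(and, if trust checking stays, only from trusted callers), generating a fresh
one otherwise.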

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [devstack] [deployment] nova-api and meta-api under uwsgi

2017-05-16 Thread Chris Dent


(This is a follow-up to
http://lists.openstack.org/pipermail/openstack-dev/2017-May/116267.html
but I no longer have the original around to reply to properly.)

In a devstack change:

https://review.openstack.org/#/c/457715/

nova-api and nova-metadata will be changed to run as WSGI
applications with a uwsgi server by default. This helps to enable
a few recent goals:

* everything under systemd in devstack
* minimizing custom ports for HTTP in devstack
* is part of a series of changes[1] which gets the compute api working
  under WSGI, including some devref for wsgi use:
  https://docs.openstack.org/developer/nova/wsgi.html
* helps enforce the idea that any WSGI server is okay

This last point is an important consideration for deployers: although
devstack will (once the change merges) default to using a
combination of apache2, mod_proxy_uwsgi, and uwsgi, there is zero
requirement that deployments replicate that arrangement. The new
'nova-api-wsgi' and 'nova-metadata-wsgi' scripts provide a
module-level 'application' that can be run by any WSGI-compliant
server.

In those contexts things like the path-prefix of an application and
the port used (if any) to host the application are entirely in the
domain of the web server's config, not the application. This is
a good thing, but it does mean that any deployment automation needs
to make some decisions about how to manipulate the web server's
configuration.
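The "module-level 'application'" contract is just a callable with the standard
WSGI signature. The sketch below is illustrative only — the real nova-api-wsgi
script builds its application from nova's configuration and paste pipeline —
but it shows why any WSGI-compliant server can host it, with path prefix and
port left entirely to the server.

```python
# Minimal sketch of a module-level WSGI "application" callable, per
# PEP 3333. A server (uwsgi, mod_wsgi, gunicorn, wsgiref, ...) imports
# this name and serves it; the application itself knows nothing about
# which port or path prefix it is mounted on.

def application(environ, start_response):
    """Handle one request: emit a tiny JSON body with a 200 status."""
    body = b'{"status": "ok"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

For example, uwsgi could serve this with something like
`uwsgi --http-socket :8774 --wsgi-file app.py`, while apache2 with
mod_proxy_uwsgi would mount it under whatever prefix the vhost config chooses.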

Some other details which might be relevant:

In the devstack change the compute service is registered to run on a
default port of either 80 or 443 at '/compute' and _not_ on a custom
port.

The metadata API, however, continues to run as its own service on
its own port. In fact, it runs using solely uwsgi, without apache2
being involved at all.

Please follow up if there any questions.


[1] https://review.openstack.org/#/c/457283/
https://review.openstack.org/#/c/459413/
https://review.openstack.org/#/c/461289/

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Davanum Srinivas
Steve,

We should not always ask "is this a ruling from the TC"; the default
is that it's a discussion/exploration. If it were a "ruling", it
wouldn't be on a ML thread.

Thanks,
Dims

On Tue, May 16, 2017 at 9:22 AM, Steven Dake (stdake)  wrote:
> Dims,
>
> The [tc] was in the subject tag, and the message was represented as 
> indicating some TC directive and has had several tc members comment on the 
> thread.  I did nothing wrong.
>
> Regards
> -steve
>
>
> -Original Message-
> From: Davanum Srinivas 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Tuesday, May 16, 2017 at 4:34 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] 
> [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
>  do we want to be publishing binary container images?
>
> Why drag TC into this discussion Steven? If the TC has something to
> say, it will be in the form of a resolution with topic "formal-vote".
> So please Stop!
>
> Thanks,
> Dims
>
> On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake)  
> wrote:
> > Flavio,
> >
> > Forgive the top post – outlook ftw.
> >
> > I understand the concerns raised in this thread.  It is unclear if this 
> thread is the feeling of two TC members or enough TC members care deeply 
> about this issue to permanently limit OpenStack big tent projects’ ability to 
> generate container images in various external artifact storage systems.  The 
> point of discussion I see effectively raised in this thread is “OpenStack 
> infra will not push images to dockerhub”.
> >
> > I’d like clarification if this is a ruling from the TC, or simply an 
> exploratory discussion.
> >
> > If it is exploratory, it is prudent that OpenStack projects not be 
> blocked by debate on this issue until the TC has made such ruling as to 
> banning the creation of container images via OpenStack infrastructure.
> >
> > Regards
> > -steve
> >
> > -Original Message-
> > From: Flavio Percoco 
> > Reply-To: "OpenStack Development Mailing List (not for usage 
> questions)" 
> > Date: Monday, May 15, 2017 at 7:00 PM
> > To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> > Subject: Re: [openstack-dev] 
> [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
>  do we want to be publishing binary container images?
> >
> > On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
> > >On 15 May 2017 at 12:12, Doug Hellmann  
> wrote:
> >
> > [huge snip]
> >
> > >>> > I'm raising the issue here to get some more input into how to
> > >>> > proceed. Do other people think this concern is overblown? Can 
> we
> > >>> > mitigate the risk by communicating through metadata for the 
> images?
> > >>> > Should we stick to publishing build instructions 
> (Dockerfiles, or
> > >>> > whatever) instead of binary images? Are there other options I 
> haven't
> > >>> > mentioned?
> > >>>
> > >>> Today we do publish build instructions, that's what Kolla is. 
> We also
> > >>> publish built containers already, just we do it manually on 
> release
> > >>> today. If we decide to block it, I assume we should stop doing 
> that
> > >>> too? That will hurt users who uses this piece of Kolla, and I'd 
> hate
> > >>> to hurt our users:(
> > >>
> > >> Well, that's the question. Today we have teams publishing those
> > >> images themselves, right? And the proposal is to have infra do 
> it?
> > >> That change could be construed to imply that there is more of a
> > >> relationship with the images and the rest of the community 
> (remember,
> > >> folks outside of the main community activities do not always make
> > >> the same distinctions we do about teams). So, before we go ahead
> > >> with that, I want to make sure that we all have a chance to 
> discuss
> > >> the policy change and its implications.
> > >
> > >Infra as vm running with infra, but team to publish it can be Kolla
> > >team. I assume we'll be responsible to keep these images healthy...
> >
> > I think this is the gist of the concern and I'd like us to focus on 
> it.
> >
> > As someone that used to consume these images from kolla's dockerhub 
> account
> > directly, I can confirm they are useful. However, I do share Doug's 
> concern and
> > the impact this may have on the community.
> >
> > From a release 

Re: [openstack-dev] [tripleo] Validations before upgrades and updates

2017-05-16 Thread Florian Fuchs
On Mon, May 15, 2017 at 6:27 PM, Steven Hardy  wrote:
> On Mon, May 08, 2017 at 02:45:08PM +0300, Marios Andreou wrote:
>>Hi folks, after some discussion locally with colleagues about improving
>>the upgrades experience, one of the items that came up was pre-upgrade and
>>update validations. I took an AI to look at the current status of
>>tripleo-validations [0] and posted a simple WIP [1] intended to be run
>>before an undercloud update/upgrade and which just checks service status.
>>It was pointed out by shardy that for such checks it is better to instead
>>continue to use the per-service  manifests where possible like [2] for
>>example where we check status before N..O major upgrade. There may still
>>be some undercloud specific validations that we can land into the
>>tripleo-validations repo (thinking about things like the neutron
>>networks/ports, validating the current nova nodes state etc?).
>>So do folks have any thoughts about this subject - for example the kinds
>>of things we should be checking - Steve said he had some reviews in
>>progress for collecting the overcloud ansible puppet/docker config into an
>>ansible playbook that the operator can invoke for upgrade of the 'manual'
>>nodes (for example compute in the N..O workflow) - the point being that we
>>can add more per-service ansible validation tasks into the service
>>manifests for execution when the play is run by the operator - but I'll
>>let Steve point at and talk about those.Â
>
> Thanks for starting this thread Marios, sorry for the slow reply due to
> Summit etc.
>
> As we discussed, I think adding validations is great, but I'd prefer we
> kept any overcloud validations specific to services in t-h-t instead of
> trying to manage service specific things over multiple repos.
>
> This would also help with the idea of per-step validations I think, where
> e.g you could have a "is service active" test and run it after the step
> where we expect the service to start, a blueprint was raised a while back
> asking for exactly that:
>
> https://blueprints.launchpad.net/tripleo/+spec/step-by-step-validation
>
> One way we could achive this is to add ansible tasks that perform some
> validation after each step, where we combine the tasks for all services,
> similar to how we already do upgrade_tasks and host_prep_tasks:
>
> https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/database/redis.yaml#L92
>
> With the benefit of hindsight using ansible tags for upgrade_tasks wasn't
> the best approach, because you can't change the tags via SoftwareDeployment
> (e.g you need a SoftwareConfig per step), it's better if we either generate
> the list of tasks by merging maps e.g
>
>   validation_tasks:
> step3:
>   - sometask
>
> Or via ansible conditionals where we pass a step value in to each run of
> the tasks:
>
>   validation_tasks:
> - sometask
>   when: step == 3
>
> The latter approach is probably my preference, because it'll require less
> complex merging in the heat layer.
>
> As you mentioned, I've been working on ways to make the deployment steps
> more ansible driven, so having these tasks integrated with the t-h-t model
> would be well aligned with that I think:
>
> https://review.openstack.org/#/c/454816/
>
> https://review.openstack.org/#/c/462211/
>
> Happy to discuss further when you're ready to start integrating some
> overcloud validations.

Maybe there are two kinds of pre-upgrade validations here, serving
different purposes.

The more general validations (like checking connectivity, making sure
the stack is in good shape, repos are available, etc.) should give
operators a fair amount of confidence that all basic prerequisites to
start an update are met *before* the upgrade is started. They could be
run from the UI or CLI and would fit well into the tripleo-validations
repo. As with the existing tripleo-validations, failures are advisory
and don't prevent operators from proceeding.

The service-specific validations, on the other hand, are closely tied
to the upgrade process and will stop further progress when they fail.
They are fundamentally different from the tripleo-validations and could
therefore live in t-h-t.

I personally don't see why we shouldn't have pre-upgrade validations
both in tripleo-validations and in t-h-t, as long as we know which
ones go where. If everything that's tied to a specific overcloud
service or upgrade step goes into t-h-t, I could see these two groups
(using the validations suggested earlier in this thread):

tripleo-validations:
- Undercloud service check
- Verify that the stack is in a *_COMPLETE state
- Verify undercloud disk space. For node replacement we recommended a
minimum of 10 GB free.
- Network/repo availability check (undercloud and overcloud)
- Verify we're at the latest version of the current release
- ...

tripleo-heat-templates:
- Pacemaker cluster health
- Ceph health
- APIs 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sean Dague
On 05/16/2017 09:24 AM, Doug Hellmann wrote:
> Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
>> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
>>> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>>>
 On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

 Today we do publish build instructions, that's what Kolla is. We also
 publish built containers already, just we do it manually on release
 today. If we decide to block it, I assume we should stop doing that
 too? That will hurt users who uses this piece of Kolla, and I'd hate
 to hurt our users:(
>>>
>>> Well, that's the question. Today we have teams publishing those
>>> images themselves, right? And the proposal is to have infra do it?
>>> That change could be construed to imply that there is more of a
>>> relationship with the images and the rest of the community (remember,
>>> folks outside of the main community activities do not always make
>>> the same distinctions we do about teams). So, before we go ahead
>>> with that, I want to make sure that we all have a chance to discuss
>>> the policy change and its implications.
>>
>> Sorry for hijacking the thread, but we have a similar scenario, for
>> example in Sahara. It is about full VM images containing
>> Hadoop/Spark/other_big_data stuff, and not containers, but it looks
>> really the same.
>> So far ready-made images have been published under
>> http://sahara-files.mirantis.com/images/upstream/, but we are looking
>> to have them hosted on openstack.org, just like other artifacts.
>>
>> We asked about this a few days ago on openstack-infra@, but no answer
>> so far (the Summit didn't help):
>>
>> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
>>
>> I think that the answer to the question raised in this thread is definitely 
>> going to be relevant for our use case.
>>
>> Ciao
> 
> Thanks for raising this. I think the same concerns apply to VM images.

Agreed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> >> Sorry for the top post, Michal, Can you please clarify a couple of things:
> >>
> >> 1) Can folks install just one or two services for their specific scenario?
> >
> >Yes, that's more of a kolla-ansible feature and requires a little bit
> >of ansible know-how, but entirely possible. Kolla-k8s is built to
> >allow maximum flexibility in that space.
> >
> >> 2) Can the container images from kolla be run on bare docker daemon?
> >
> >Yes, but they need to either override our default CMD (kolla_start) or
> >provide the ENVs required by it, not a huge deal
> >
> >> 3) Can someone take the kolla container images from say dockerhub and
> >> use it without the Kolla framework?
> >
> >Yes, there is no such thing as kolla framework really. Our images
> >follow stable ABI and they can be deployed by any deploy mechanism
> >that will follow it. We have several users who wrote their own deploy
> >mechanism from scratch.
> >
> >Containers are just blobs with binaries in it. Little things that we
> >add are kolla_start script to allow our config file management and
> >some custom startup scripts for things like mariadb to help with
> >bootstrapping, both are entirely optional.
> 
> Just as a bonus example, TripleO is currently using kolla images. They used to
> be vanilla and they are not anymore but only because TripleO depends on puppet
> being in the image, which has nothing to do with kolla.
> 
> Flavio
> 

When you say "using kolla images," what do you mean? In upstream
CI tests? On contributors' dev/test systems? Production deployments?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Chris Dent

On Tue, 16 May 2017, Monty Taylor wrote:

FWIW - I'm un-crazy about the term API Key - but I'm gonna just roll with 
that until someone has a better idea. I'm un-crazy about it for two reasons:


a) the word "key" implies things to people that may or may not be true here. 
If we do stick with it - we need some REALLY crisp language about what it is 
and what it isn't.


b) Rackspace Public Cloud (and back in the day HP Public Cloud) have a thing 
called by this name. While what's written in the spec is quite similar in 
usage to that construct, I'm wary of re-using the name without the semantics 
actually being fully the same for risk of user confusion. "This uses 
api-key... which one?" Sean's email uses "APPKey" instead of "APIKey" - which 
may be a better term. Maybe just "ApplicationAuthorization"?


"api key" is a fairly common and generic term for "this magical
thingie I can create to delegate my authority to some automation".
It's also sometimes called "token", perhaps that's better (that's
what GitHub uses, for example)? In either case the "api" bit is
pretty important because it is the thing used to talk to the API.

I really hope we can avoid creating yet more special language for
OpenStack. We've got an API. We want to send keys or tokens. Let's
just call them that.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Heat template example repository

2017-05-16 Thread Lance Haig

On 15.05.17 19:01, Zane Bitter wrote:

On 15/05/17 12:10, Steven Hardy wrote:

On Mon, May 15, 2017 at 04:46:28PM +0200, Lance Haig wrote:

Hi Steve,

I am happy to assist in any way to be honest.


It was great to meet you in Boston, and thanks very much for 
volunteering to help out.


BTW one issue I'm aware of is that the autoscaling template examples 
we have all use OS::Ceilometer::* resources for alarms. We have a 
global environment thingy that maps those to OS::Aodh::*, so at least 
in theory those templates should continue to work, but there are 
actually no examples that I can find of autoscaling templates doing 
things the way we want everyone to do them.
I think we can perhaps come up with some standard scenarios that we want
to showcase, and then we can work on getting this set up.
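For reference, the global environment mapping mentioned above is essentially a resource_registry alias from the old alarm types to the new ones. A sketch of such an environment file might look like this (the exact set of mapped types here is an assumption, not taken from the thread):

```yaml
# Environment file aliasing deprecated Ceilometer alarm resource types
# to their Aodh equivalents, so older templates keep working.
resource_registry:
  OS::Ceilometer::Alarm: OS::Aodh::Alarm
  OS::Ceilometer::CombinationAlarm: OS::Aodh::CombinationAlarm
```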


I might suggest that you look at the repo that my colleague Florin and I 
setup for our library and training material.

https://github.com/heat-extras

In the lib repo we have a test directory that tests each library
template; it might be an idea as to how to achieve test coverage of the
different resources.
We currently just run yamllint testing with the script in there, but I
am sure we can add other tests as needed.
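As a sketch, that lint step amounts to something like the following (paths and config level are assumptions; yamllint must be installed):

```shell
# Lint all templates in the repo; a non-zero exit would fail CI.
yamllint -d relaxed lib/ templates/
```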



The backwards compatibility is not always correct, as I have seen when
developing our library of templates on Liberty and then trying to
deploy it on Mitaka, for example.


Yeah, I guess it's true that there are sometimes deprecated resource
interfaces that get removed on upgrade to a new OpenStack version, and
that is independent of the HOT version.


What if instead of a directory per release, we just had a 'deprecated' 
directory where we move stuff that is going away (e.g. anything 
relying on OS::Glance::Image), and then delete it when it 
disappears from any supported release (e.g. LBaaSv1 must be close if 
it isn't gone already).


I agree in general this would be good. How would we deal with users who 
are running older versions of OpenStack?
Most of the customers I support have Liberty and newer, so I would 
perhaps like to have these available as tested.
The challenge for us is that the newer the OpenStack version, the more 
features are available, e.g. conditionals etc.
Supporting that in a backwards-compatible fashion is going to be tough, 
I think. Unless I am missing something.
As we've proven, maintaining these templates has been a challenge given
the available resources, so I guess I'm still in favor of not
duplicating a bunch of templates, e.g. perhaps we could focus on a
target of CI testing templates on the current stable release as a first
step?


I'd rather do CI against Heat master, I think, but yeah that sounds 
like the first step. Note that if we're doing CI on old stuff then 
we'd need to do heat-templates stable branches rather than 
directory-per-release.


With my suggestion above, we could just not check anything in the 
'deprecated' directory maybe?

I agree in part.
If we are using the heat examples to test the functionality of the 
master branch then that would be a good idea.
If we want to provide useable templates for users to reference and use 
then I would suggest we test against stable.


I am sure we could find a way to do both.
I would suggest that we first get reliable CICD running on the current 
templates and fix what we can in there.

Then we can look at what would be a good way forward.

I am just brain dumping so any other ideas would also be good.


As you guys mentioned in our discussions, the Networking example I
quoted is not something you guys can deal with, as the source project
affects this.

If we can use this exercise to test these and fix them, then I am
happier.

My vision would be to have a set of templates and examples that are
tested regularly against a running OS deployment so that we can make
sure the combinations still run. I am sure we can agree on a way to do
this with CICD so that we test the feature set.


Agreed, getting the approach to testing agreed seems like the first step -
FYI we do already have automated scenario tests in the main heat tree
that consume templates similar to many of the examples:

https://github.com/openstack/heat/tree/master/heat_integrationtests/scenario

So, in theory, getting a similar test running on heat_templates should
be fairly simple, but getting all the existing templates working is
likely to be a bigger challenge.


Even if we just ran the 'template validate' command on them to check 
that all of the resource types & properties still exist, that would be 
pretty helpful. It'd catch a lot of the times when we break backwards 
compatibility, so we can decide to either fix it or deprecate/remove 
the obsolete template. (Note that you still need all of the services 
installed, or at least endpoints in the catalog, for the validation to 
work.)
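As a sketch, that validation pass could be scripted with the heat client against each example template (the directory layout is assumed, and cloud credentials must already be sourced):

```shell
# Validate every template against a live Heat endpoint; this catches
# resource types and properties that no longer exist on the cloud.
for t in $(find . -name '*.yaml' -not -path './deprecated/*'); do
    openstack orchestration template validate -t "$t" \
        || echo "BROKEN: $t"
done
```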


Actually creating all of the stuff would be nice, but it'll likely be 
difficult (just keeping up-to-date OS images to boot from is a giant 

Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean Dague
On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
> Folks,
> 
> See $TITLE :)
> 
> Thanks,
> Dims

I'd rather avoid #openstack-tc and just use #openstack-dev.
#openstack-dev is a pretty lightly used channel (compared to, say,
#openstack-infra or #openstack-nova). I've personally been trying to
make it my go-to way to hit up members of other teams, instead
of diving into project specific channels, because typically it means we
can get a broader conversation around the item in question.

Our fragmentation of shared understanding on many issues is definitely
exacerbated by many project channels, and the assumption that people
need to watch 20+ different channels, with different context, to stay up
on things.

I would love us to have the problem that too many interesting topics are
being discussed in #openstack-dev that we feel the need to parallelize
them with a different channel. But I would say we should wait until
that's actually a problem.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> I would like to bring up a subject that hasn't really been discussed in
> this thread yet, forgive me if I missed an email mentioning this.
> 
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security
> turned off are perfect for testing internally to infra.
> 
> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.
> 
> Thanks,
> SamYaple

This sounds like an implementation detail of option 3? I think not
signing the images does help indicate that they're not meant to be used
in production environments.

Is some sort of self-hosted solution a reasonable compromise between
building images in test jobs (which I understand makes them take
extra time) and publishing images to public registries (which is
the thing I object to)?

If self-hosting is reasonable, then we can work out which tool to
use to do it as a second question.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Dnyaneshwar Pawar
Hi Steve,

Thanks for your reply.

Out of interest, where did you find OS::TripleO::ControllerServer, do we
have a mistake in our docs somewhere?

I referred below template. 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/controller-role.yaml

resources:

  Controller:
type: OS::TripleO::ControllerServer
metadata:
  os-collect-config:


OS::Heat::SoftwareDeployment is referred to instead of 
OS::Heat::SoftwareDeployments in the following places:

1. 
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/11/pdf/partner_integration/Red_Hat_OpenStack_Platform-11-Partner_Integration-en-US.pdf
   Section 2.1.4. TripleO and TripleO Heat Templates (page #13 in pdf)
   Section 5.4. CUSTOMIZING CONFIGURATION BEFORE OVERCLOUD CONFIGURATION 
(Page #32 in pdf)
2. http://hardysteven.blogspot.in/2015/05/heat-softwareconfig-resources.html
   Section: Heat SoftwareConfig resources
   Section: SoftwareDeployment HOT template definition
3. 
http://hardysteven.blogspot.in/2015/05/tripleo-heat-templates-part-2-node.html
Section: Initial deployment flow, step by step


Thanks and Regards,
Dnyaneshwar


On 5/16/17, 4:40 PM, "Steven Hardy"  wrote:

On Tue, May 16, 2017 at 04:33:33AM +, Dnyaneshwar Pawar wrote:
> Hi TripleO team,
> 
> I am trying to apply custom configuration to an existing overcloud. 
(using openstack overcloud deploy command)
> Though there is no error, the configuration is in not applied to 
overcloud.
> Am I missing anything here?
> http://paste.openstack.org/show/609619/

In your paste you have the resource_registry like this:

OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml

The problem is OS::TripleO::ControllerServer isn't a resource type we use,
e.g. it's not a valid hook to enable additional node configuration.

Instead try something like this:

OS::TripleO::NodeExtraConfigPost: /home/stack/test/heat3_ocata.yaml

Which will run the script on all nodes, as documented here:


https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html

Out of interest, where did you find OS::TripleO::ControllerServer, do we
have a mistake in our docs somewhere?

Also in your template the type: OS::Heat::SoftwareDeployment should be
either type: OS::Heat::SoftwareDeployments (as in the docs) or type:
OS::Heat::SoftwareDeploymentGroup (the newer name for SoftwareDeployments,
we should switch the docs to that..).
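For reference, a minimal template wired to the NodeExtraConfigPost hook along those lines might look like this (the script body is illustrative; the `servers` parameter is passed in by TripleO):

```yaml
heat_template_version: ocata

description: Example extra config applied after overcloud deployment

parameters:
  servers:
    type: json

resources:
  ExtraConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "extra configuration applied" > /tmp/extra_config.log

  ExtraDeployments:
    type: OS::Heat::SoftwareDeploymentGroup
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraConfig}
```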

Hope that helps!

-- 
Steve Hardy
Red Hat Engineering, Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean McGinnis
On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
> Folks,
> 
> See $TITLE :)
> 
> Thanks,
> Dims
> 

My preference would be to have an #openstack-tc channel.

One thing I like about the dedicated meeting time was that if I was not able to
attend, or when I was just a casual observer, it was easy to catch up on
what was discussed because it was all in one place and did not have any
non TC conversations interlaced.

If we just use -dev, there is a high chance there will be a lot of cross-
talk during discussions. There would also be a lot of effort to grep
through the full day of activity to find things relevant to TC
discussions. If we have a dedicated channel for this, it makes it very
easy for anyone to know where to go to get a clean, easy to read capture
of all relevant discussions. I think that will be important with the
lack of a captured and summarized meeting to look at.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sean McGinnis
On Tue, May 16, 2017 at 02:08:07PM +0200, Thierry Carrez wrote:
> 
> I totally subscribe to the concerns around publishing binaries (under
> any form), and the expectations in terms of security maintenance that it
> would set on the publisher. At the same time, we need to have images
> available, for convenience and testing. So what is the best way to
> achieve that without setting strong security maintenance expectations
> for the OpenStack community ? We have several options:
> 
> 1/ Have third-parties publish images
> It is the current situation. The issue is that the Kolla team (and
> likely others) would rather automate the process and use OpenStack
> infrastructure for it.
> 
> 2/ Have third-parties publish images, but through OpenStack infra
> This would allow to automate the process, but it would be a bit weird to
> use common infra resources to publish in a private repo.
> 
> 3/ Publish transient (per-commit or daily) images
> A "daily build" (especially if you replace it every day) would set
> relatively-limited expectations in terms of maintenance. It would end up
> picking up security updates in upstream layers, even if not immediately.
> 

I share the concerns around implying support for any of these. But I
also think they could be incredibly useful, and if we don't do it,
there is even more of a chance of multiple "bad" images being published
by others.

I agree having an automated daily image published should give a
reasonable expectation that there is not long term maintenance for
these.

> 4/ Publish images and own them
> Staff release / VMT / stable team in a way that lets us properly own
> those images and publish them officially.
> 
> Personally I think (4) is not realistic. I think we could make (3) work,
> and I prefer it to (2). If all else fails, we should keep (1).
> 
> -- 
> Thierry Carrez (ttx)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Matthew Treinish

On Tue, May 16, 2017 at 08:22:44AM +, Andrea Frittoli wrote:
> Hello team,
> 
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
> 
> Over the past two cycles Fanglei has been steadily contributing to Tempest
> and its community.
> She's done a great deal of work in making Tempest code cleaner, easier to
> read, maintain and
> debug, fixing bugs and removing cruft. Both her code as well as her reviews
> demonstrate a
> very good understanding of Tempest internals and of the project future
> direction.
> I believe Fanglei will make an excellent addition to the team.
> 
> As per the usual, if the current Tempest core team members would please
> vote +1
> or -1(veto) to the nomination when you get a chance. We'll keep the polls
> open
> for 5 days or until everyone has voted.

+1

-Matt Treinish

> 
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 07:11, Sam Yaple  wrote:
> I would like to bring up a subject that hasn't really been discussed in this
> thread yet, forgive me if I missed an email mentioning this.
>
> What I personally would like to see is a publishing infrastructure to allow
> pushing built images to an internal infra mirror/repo/registry for
> consumption of internal infra jobs (deployment tools like kolla-ansible and
> openstack-ansible). The images built from infra mirrors with security turned
> off are perfect for testing internally to infra.
>
> If you build images properly in infra, then you will have an image that is
> not security checked (no gpg verification of packages) and completely
> unverifiable. These are absolutely not images we want to push to
> DockerHub/quay for obvious reasons. Security and verification being chief
> among them. They are absolutely not images that should ever be run in
> production and are only suited for testing. These are the only types of
> images that can come out of infra.

So I guess we need a new feature :) since we can test gpg packages...

> Thanks,
> SamYaple
>
> On Tue, May 16, 2017 at 1:57 PM, Michał Jastrzębski 
> wrote:
>>
>> On 16 May 2017 at 06:22, Doug Hellmann  wrote:
>> > Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> >> Flavio Percoco wrote:
>> >> > From a release perspective, as Doug mentioned, we've avoided
>> >> > releasing projects
>> >> > in any kind of built form. This was also one of the concerns I raised
>> >> > when
>> >> > working on the proposal to support other programming languages. The
>> >> > problem of
>> >> > releasing built images goes beyond the infrastructure requirements.
>> >> > It's the
>> >> > message and the guarantees implied with the built product itself that
>> >> > are the
>> >> > concern here. And I tend to agree with Doug that this might be a
>> >> > problem for us
>> >> > as a community. Unfortunately, putting your name, Michal, as contact
>> >> > point is
>> >> > not enough. Kolla is not the only project producing container images
>> >> > and we need
>> >> > to be consistent in the way we release these images.
>> >> >
>> >> > Nothing prevents people for building their own images and uploading
>> >> > them to
>> >> > dockerhub. Having this as part of the OpenStack's pipeline is a
>> >> > problem.
>> >>
>> >> I totally subscribe to the concerns around publishing binaries (under
>> >> any form), and the expectations in terms of security maintenance that
>> >> it
>> >> would set on the publisher. At the same time, we need to have images
>> >> available, for convenience and testing. So what is the best way to
>> >> achieve that without setting strong security maintenance expectations
>> >> for the OpenStack community ? We have several options:
>> >>
>> >> 1/ Have third-parties publish images
>> >> It is the current situation. The issue is that the Kolla team (and
>> >> likely others) would rather automate the process and use OpenStack
>> >> infrastructure for it.
>> >>
>> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> This would allow to automate the process, but it would be a bit weird
>> >> to
>> >> use common infra resources to publish in a private repo.
>> >>
>> >> 3/ Publish transient (per-commit or daily) images
>> >> A "daily build" (especially if you replace it every day) would set
>> >> relatively-limited expectations in terms of maintenance. It would end
>> >> up
>> >> picking up security updates in upstream layers, even if not
>> >> immediately.
>> >>
>> >> 4/ Publish images and own them
>> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> those images and publish them officially.
>> >>
>> >> Personally I think (4) is not realistic. I think we could make (3)
>> >> work,
>> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >>
>> >
>> > At the forum we talked about putting test images on a "private"
>> > repository hosted on openstack.org somewhere. I think that's option
>> > 3 from your list?
>> >
>> > Paul may be able to shed more light on the details of the technology
>> > (maybe it's just an Apache-served repo, rather than a full blown
>> > instance of Docker's service, for example).
>>
>> Issue with that is
>>
>> 1. An Apache-served repo is harder to use because we want to follow the docker API
>> and we'd have to reimplement it
>> 2. Running registry is single command
>> 3. If we host in in infra, in case someone actually uses it (there
>> will be people like that), that will eat up lot of network traffic
>> potentially
>> 4. With local caching of images (working already) in nodepools we
>> lose the complexity of mirroring registries across nodepools
>>
>> So bottom line, having dockerhub/quay.io is simply easier.
>>
>> > Doug
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Thierry Carrez
Flavio Percoco wrote:
> On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
> 
> Agreed #4 is a bit unrealistic.
> 
> Not sure I understand the difference between #2 and #3. Is it just the
> cadence?

In #3 the infrastructure ends up publishing to an official
"openstack-daily" repository. In #2 the infrastructure job ends up
publishing to some "flavios-garage" repository.

-- 
Thierry Carrez (ttx)



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
> >>
> >> Flavio Percoco wrote:
> >>>
> >>> From a release perspective, as Doug mentioned, we've avoided releasing
> >>> projects
> >>> in any kind of built form. This was also one of the concerns I raised
> >>> when
> >>> working on the proposal to support other programming languages. The
> >>> problem of
> >>> releasing built images goes beyond the infrastructure requirements. It's
> >>> the
> >>> message and the guarantees implied with the built product itself that are
> >>> the
> >>> concern here. And I tend to agree with Doug that this might be a problem
> >>> for us
> >>> as a community. Unfortunately, putting your name, Michal, as contact
> >>> point is
> >>> not enough. Kolla is not the only project producing container images and
> >>> we need
> >>> to be consistent in the way we release these images.
> >>>
> >>> Nothing prevents people for building their own images and uploading them
> >>> to
> >>> dockerhub. Having this as part of the OpenStack's pipeline is a problem.
> >>
> >>
> >> I totally subscribe to the concerns around publishing binaries (under
> >> any form), and the expectations in terms of security maintenance that it
> >> would set on the publisher. At the same time, we need to have images
> >> available, for convenience and testing. So what is the best way to
> >> achieve that without setting strong security maintenance expectations
> >> for the OpenStack community ? We have several options:
> >>
> >> 1/ Have third-parties publish images
> >> It is the current situation. The issue is that the Kolla team (and
> >> likely others) would rather automate the process and use OpenStack
> >> infrastructure for it.
> >>
> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> This would allow to automate the process, but it would be a bit weird to
> >> use common infra resources to publish in a private repo.
> >>
> >> 3/ Publish transient (per-commit or daily) images
> >> A "daily build" (especially if you replace it every day) would set
> >> relatively-limited expectations in terms of maintenance. It would end up
> >> picking up security updates in upstream layers, even if not immediately.
> >>
> >> 4/ Publish images and own them
> >> Staff release / VMT / stable team in a way that lets us properly own
> >> those images and publish them officially.
> >>
> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> and I prefer it to (2). If all else fails, we should keep (1).
> >
> >
> > Agreed #4 is a bit unrealistic.
> >
> > Not sure I understand the difference between #2 and #3. Is it just the
> > cadence?
> >
> > I'd prefer for these builds to have a daily cadence because it sets the
> > expectations w.r.t maintenance right: "These images are daily builds and not
> > certified releases. For stable builds you're better off building it
> > yourself"
> 
> And daily builds are exactly what I wanted in the first place:) We
> probably will keep publishing release packages too, but we can be so
> called 3rd party. I also agree [4] is completely unrealistic and I
> would be against putting such heavy burden of responsibility on any
> community, including Kolla.
> 
> While daily cadence will send message that it's not stable, truth will
> be that it will be more stable than what people would normally build
> locally (again, it passes more gates), but I'm totally fine in not
> saying that and let people decide how they want to use it.
> 
> So, can we move on with implementation?

I don't want the images published to docker hub. Are they still useful
to you if they aren't published?

Doug

> 
> Thanks!
> Michal
> 
> >
> > Flavio
> >
> > --
> > @flaper87
> > Flavio Percoco
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
> >> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
> >> >> things:
> >> >>
> >> >> 1) Can folks install just one or two services for their specific 
> >> >> scenario?
> >> >
> >> >Yes, that's more of a kolla-ansible feature and requires a little bit
> >> >of ansible know-how, but entirely possible. Kolla-k8s is built to
> >> >allow maximum flexibility in that space.
> >> >
> >> >> 2) Can the container images from kolla be run on bare docker daemon?
> >> >
> >> >Yes, but they need to either override our default CMD (kolla_start) or
> >> >provide ENVs required by it, not a huge deal
> >> >
> >> >> 3) Can someone take the kolla container images from say dockerhub and
> >> >> use it without the Kolla framework?
> >> >
> >> >Yes, there is no such thing as kolla framework really. Our images
> >> >follow stable ABI and they can be deployed by any deploy mechanism
> >> >that will follow it. We have several users who wrote their own deploy
> >> >mechanism from scratch.
> >> >
> >> >Containers are just blobs with binaries in it. Little things that we
> >> >add are kolla_start script to allow our config file management and
> >> >some custom startup scripts for things like mariadb to help with
> >> >bootstrapping, both are entirely optional.
> >>
> >> Just as a bonus example, TripleO is currently using kolla images. They 
> >> used to
> >> be vanilla and they are not anymore but only because TripleO depends on 
> >> puppet
> >> being in the image, which has nothing to do with kolla.
> >>
> >> Flavio
> >>
> >
> >When you say "using kolla images," what do you mean? In upstream
> >CI tests? On contributors' dev/test systems? Production deployments?
> 
> All of them. Note that TripleO now builds its own "kolla images" (it uses the
> kolla Dockerfiles and kolla-build) because of the puppet dependency. When I
> said TripleO uses kolla images, it was intended to answer Dims' question on
> whether
> these images (or Dockerfiles) can be consumed by other projects.
> 
> Flavio
> 

Ah, OK. So TripleO is using the build instructions for kolla images, but
not the binary images being produced today?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:30, Emilien Macchi  wrote:
> On Tue, May 16, 2017 at 11:12 AM, Doug Hellmann  wrote:
>> Excerpts from Flavio Percoco's message of 2017-05-16 10:07:52 -0400:
>>> On 16/05/17 09:45 -0400, Doug Hellmann wrote:
>>> >Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:
>>> >> On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
>>> >> >On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
>>> >> >> Sorry for the top post, Michal, Can you please clarify a couple of 
>>> >> >> things:
>>> >> >>
>>> >> >> 1) Can folks install just one or two services for their specific 
>>> >> >> scenario?
>>> >> >
>>> >> >Yes, that's more of a kolla-ansible feature and requires a little bit
>>> >> >of ansible know-how, but entirely possible. Kolla-k8s is built to
>>> >> >allow maximum flexibility in that space.
>>> >> >
>>> >> >> 2) Can the container images from kolla be run on bare docker daemon?
>>> >> >
>>> >> >Yes, but they need to either override our default CMD (kolla_start) or
>>> >> >provide ENVs required by it, not a huge deal
>>> >> >
>>> >> >> 3) Can someone take the kolla container images from say dockerhub and
>>> >> >> use it without the Kolla framework?
>>> >> >
>>> >> >Yes, there is no such thing as kolla framework really. Our images
>>> >> >follow stable ABI and they can be deployed by any deploy mechanism
>>> >> >that will follow it. We have several users who wrote their own deploy
>>> >> >mechanism from scratch.
>>> >> >
>>> >> >Containers are just blobs with binaries in it. Little things that we
>>> >> >add are kolla_start script to allow our config file management and
>>> >> >some custom startup scripts for things like mariadb to help with
>>> >> >bootstrapping, both are entirely optional.
>>> >>
>>> >> Just as a bonus example, TripleO is currently using kolla images. They 
>>> >> used to
>>> >> be vanilla and they are not anymore but only because TripleO depends on 
>>> >> puppet
>>> >> being in the image, which has nothing to do with kolla.
>>> >>
>>> >> Flavio
>>> >>
>>> >
>>> >When you say "using kolla images," what do you mean? In upstream
>>> >CI tests? On contributors' dev/test systems? Production deployments?
>>>
>>> All of them. Note that TripleO now builds its own "kolla images" (it uses
>>> the
>>> kolla Dockerfiles and kolla-build) because of the puppet dependency. When I
>>> said TripleO uses kolla images, it was intended to answer Dims' question on
>>> whether
>>> these images (or Dockerfiles) can be consumed by other projects.
>>>
>>> Flavio
>>>
>>
>> Ah, OK. So TripleO is using the build instructions for kolla images, but
>> not the binary images being produced today?
>
> Exactly. We have to add Puppet packaging into the list of things we
> want in the binary, that's why we don't consume the binary directly.

And frankly, if we get this thing agreed on, I don't see why TripleO
couldn't publish their images too. If we build technical infra in
Kolla, everyone else can benefit from it.

>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] after create cluster for kubernetes, kubect create command was failed.

2017-05-16 Thread KiYoun Sung
Hello,
Magnum team.

I installed OpenStack Newton and Magnum.
I installed Magnum from source (master branch).

I have two questions.

1.
After installation,
I created a kubernetes cluster and it reached CREATE_COMPLETE,
and now I want to create a kubernetes pod.

My create script is below.
--
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
--

I tried "kubectl create -f nginx.yaml",
but an error occurred.

Error message is below.
error validating "pod-nginx-with-label.yaml": error validating data:
unexpected type: object; if you choose to ignore these errors, turn
validation off with --validate=false

Why did this error occur?

2.
I want to access this kubernetes cluster service (like nginx), running on top
of the OpenStack magnum environment, from the outside world.

I referred to this guide (
https://docs.openstack.org/developer/magnum/dev/kubernetes-load-balancer.html#how-it-works),
but it didn't work.

Openstack: newton
Magnum: 4.1.1 (master branch)

What should I do?
Do I need to install LBaaSv2?

Thank you.
Best regards.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [User] Achieving Resiliency at Scales of 1000+

2017-05-16 Thread Arkady.Kanevsky
Team,
We managed to have a productive discussion on resiliency for 1000+ nodes.
Many thanks to Adam Spiers for helping with it.
https://etherpad.openstack.org/p/Achieving_Resiliency_at_Scales_of_1000+
There are several concrete actions, especially for current gate testing.
Will bring these up at the next user committee meeting.
Thanks,
Arkady

Arkady Kanevsky, Ph.D.
Director of SW Development
Dell EMC CPSD
Dell Inc. One Dell Way, MS PS2-91
Round Rock, TX 78682, USA
Phone: 512 723 5264

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Ed Leafe
On May 16, 2017, at 3:06 PM, Jeremy Stanley  wrote:
> 
>> It's pretty clear now some see drawbacks in reusing #openstack-dev, and
>> so far the only benefit expressed (beyond not having to post the config
>> change to make it happen) is that "everybody is already there". By that
>> rule, we should not create any new channel :)
> 
> That was not the only concern expressed. It also silos us away from
> the community who has elected us to represent them, potentially
> creating another IRC echo chamber.

Unless you somehow restrict access to the channel, it isn't much of a silo. I 
see TC members in many other channels, so it isn't as if there will be no 
interaction between TC members and the community that they serve.

I also think that a channel like #openstack-tc is more discoverable to people 
who might want to interact with the TC, as it follows the same naming 
convention as #openstack-nova, #openstack-ironic, etc.

-- Ed Leafe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] OpenStack Bug Smash for Pike Release

2017-05-16 Thread Fred Li
Hi all,

The OpenStack Bug Smash for Pike is ongoing now. It will last from Wednesday
to Friday, May 17 to 19, in Suzhou, China.

Around 60 engineers are working on Nova, Cinder, Neutron, Keystone, Heat,
Telemetry, Ironic, Oslo, OSC, Kolla, Trove, Dragonflow, Karbor, Manila,
Zaqar, Tricircle, Cloudkitty, Cyborg, Mogan, etc.

Your reviews of the patches in the coming days would be appreciated.

Please find the homepage of the bug smash in [1] and the list of bugs we
are working on in [2].

[1] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike-Suzhou
[2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike-Suzhou-Bug-List

Fred

On Wednesday, March 29, 2017, ChangBo Guo  wrote:

> I attended the bug smash two times before. It's really like what we did at
> the PTG, but just fixing bugs in the same room for 3 days.
> It would be appreciated if core reviewers could help review online.
>
> 2017-03-28 21:35 GMT+08:00 Sean McGinnis  >:
>
>> I can say from my experience being involved in the last event that these
>> can be very productive. It was great seeing a room full of devs just
>> focused on getting bugs fixed!
>>
>> I highly encourage anyone interested to attend. I would also recommend
>> cores for each project to pay some attention to getting these reviewed.
>> It can be a great way to build up momentum and really get a lot fixed in
>> a short amount of time.
>>
>> Sean
>>
>> On Tue, Mar 28, 2017 at 07:15:02AM +, Liyongle (Fred) wrote:
>> > Hi all,
>> >
>> > We are planning to have the Bug Smash for Pike release from Wednesday
>> to Friday, May 17 to 19 in Suzhou, China.
>> > After considering summit Boston (May 8 to 11) and Pike-2 milestone (Jun
>> 5 to 9), we finalized the schedule.
>> >
>> > Bug Smash China will probably cover Nova, Neutron, Cinder, Keystone,
>> Manila, Heat, Telemetry, Karbor, Tricircle, which finally depends on the
>> attendees.
>> >
>> > If you want to set up bug smash in your city, please share the
>> information at [1].
>> > If you are planning to join the 6th Bug Smash in China, please register
>> at [2].
>> >
>> > [1] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike
>> > [2] https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Pike-Suzhou
>> >
>> > Fred (李永乐)
>> >
>> > 
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: openstack-dev-requ...@lists.op
>> enstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> ChangBo Guo(gcb)
>


-- 
Regards
Fred Li (李永乐)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][concurrency] lockutils lock fairness / starvation

2017-05-16 Thread Joshua Harlow

So fyi,

If you really want something like this:

Just use:

http://fasteners.readthedocs.io/en/latest/api/lock.html#fasteners.lock.ReaderWriterLock

And always get a write lock.

It is a slightly different way of getting those locks (via a context 
manager) but the implementation underneath is a deque; so fairness 
should be assured in FIFO order...


https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L139

and

https://github.com/harlowja/fasteners/blob/master/fasteners/lock.py#L220

-Josh

Chris Friesen wrote:

On 05/15/2017 03:42 PM, Clint Byrum wrote:


In order to implement fairness you'll need every lock request to happen
in a FIFO queue. This is often implemented with a mutex-protected queue
of condition variables. Since the mutex for the queue is only held while
you append to the queue, you will always get the items from the queue
in the order they were written to it.

So you have lockers add themselves to the queue and wait on their
condition variable, and then a thread running all the time that reads
the queue and acts on each condition to make sure only one thread is
activated at a time (or that one thread can just always do all the work
if the arguments are simple enough to put in a queue).


Do you even need the extra thread? The implementations I've seen for a
ticket lock (in C at least) usually have the unlock routine wake up the
next pending locker.
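A minimal sketch of the ticket-lock idea described above, using a single condition variable. This is illustrative only: it is not oslo.concurrency's actual implementation, and the `TicketLock` name is made up here. Each locker takes a numbered ticket; release advances the "now serving" counter so waiters are served strictly in FIFO order.

```python
import threading

class TicketLock:
    """FIFO-fair lock: lockers take numbered tickets and are served in order."""

    def __init__(self):
        self._cond = threading.Condition()
        self._next_ticket = 0   # next ticket number to hand out
        self._now_serving = 0   # ticket currently allowed to hold the lock

    def acquire(self):
        with self._cond:
            ticket = self._next_ticket
            self._next_ticket += 1
            while ticket != self._now_serving:
                self._cond.wait()

    def release(self):
        with self._cond:
            self._now_serving += 1
            # Wake everyone; only the thread holding the next ticket proceeds.
            self._cond.notify_all()

order = []
lock = TicketLock()

def worker(i):
    lock.acquire()
    order.append(i)
    lock.release()

lock.acquire()  # hold ticket 0 so the workers queue up behind us
threads = []
for i in range(5):
    t = threading.Thread(target=worker, args=(i,))
    t.start()
    threads.append(t)
    while lock._next_ticket != i + 2:  # wait until worker i has taken ticket i+1
        pass
lock.release()
for t in threads:
    t.join()
print(order)  # served strictly in ticket order: [0, 1, 2, 3, 4]
```

Note the `notify_all()` in release wakes every waiter just so the right one can proceed; a per-ticket condition variable (as in Clint's queue-of-conditions description) avoids that thundering herd at the cost of more bookkeeping.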

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 09:40, Clint Byrum  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>> > Container images introduce some extra complexity, over the basic
>> > operating system style packages mentioned above. Due to the way
>> > they are constructed, they are likely to include content we don't
>> > produce ourselves (either in the form of base layers or via including
>> > build tools or other things needed when assembling the full image).
>> > That extra content means there would need to be more tracking of
>> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
>> > as needed.
>>
>> We can do this by building daily, which was the plan in fact. If we
>> build every day you have at most 24hrs old packages, CVEs and things
>> like that on non-openstack packages are still maintained by distro
>> maintainers.
>>
>
> What's at stake isn't so much "how do we get the bits to the users" but
> "how do we only get bits to users that they need". If you build and push
> daily, do you expect all of your users to also _pull_ daily? Redeploy
> all their containers? How do you detect that there's new CVE-fixing
> stuff in a daily build?
>
> This is really the realm of distributors that have full-time security
> teams tracking issues and providing support to paying customers.
>
> So I think this is a fine idea, however, it needs to include a commitment
> for a full-time paid security team who weighs in on every change to
> the manifest. Otherwise we're just lobbing time bombs into our users'
> data-centers.

One thing I struggle with is...well...how does *not having* built
containers help with that? If your company has a full-time security
team, they can check our containers prior to deployment. If your
company doesn't, then building locally will be subject to the same risks
as downloading from dockerhub. The difference is, dockerhub containers
were tested in our CI to the extent that our CI allows. Whether or not
you have your own security team, local CI, or staging env, that CI run
is just a little bit of testing you get for free on top, and I think
that's value enough for users to push for this.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-16 Thread Joshua Harlow

Fine with me,

I'd personally rather get down to say 2 'great' drivers for RPC,

And say 1 (or 2?) for notifications.

So ya, wfm.

-Josh

Mehdi Abaakouk wrote:

+1 too, I haven't seen its contributors in a while.

On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15
14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco





__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Davanum Srinivas
On Tue, May 16, 2017 at 11:52 AM, Michał Jastrzębski  wrote:
> On 16 May 2017 at 08:32, Doug Hellmann  wrote:
>> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
>>> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>>> > Folks,
>>> >
>>> > See $TITLE :)
>>> >
>>> > Thanks,
>>> > Dims
>>> >
>>>
>>> My preference would be to have an #openstack-tc channel.
>>>
>>> One thing I like about the dedicated meeting time was if I was not able to
>>> attend, or when I was just a casual observer, it was easy to catch up on
>>> what was discussed because it was all in one place and did not have any
>>> non TC conversations interlaced.
>>>
>>> If we just use -dev, there is a high chance there will be a lot of cross-
>>> talk during discussions. There would also be a lot of effort to grep
>>> through the full day of activity to find things relevant to TC
>>> discussions. If we have a dedicated channel for this, it makes it very
>>> easy for anyone to know where to go to get a clean, easy to read capture
>>> of all relevant discussions. I think that will be important with the
>>> lack of a captured and summarized meeting to look at.
>>>
>>> Sean
>>>
>>
>> I definitely understand this desire. I think, though, that any
>> significant conversations should be made discoverable via an email
>> thread summarizing them. That honors the spirit of moving our
>> "decision making" to asynchronous communication tools.
>
> To both this and Dims's concerns, I actually think we need some place
> to just come and ask "guys, is this fine?". If the answer would be "let's
> talk on the ML because it's important", that's cool, but on the other hand
> sometimes a simple "yes" would suffice. Not all conversations with the TC
> require a mailing thread, but I'd love to have some "semi-official" TC
> space where I can drop a question, quickly discuss cross-project issues
> and such.

Michal,

Let's try using the ping list on #openstack-dev channel:
cdent dhellmann dims dtroyer emilienm flaper87 fungi johnthetubaguy
mordred sdague smcginnis stevemar ttx

The IRC nicks are here:
https://governance.openstack.org/tc/#current-members

Looks like the foundation page needs refreshing, will ping folks about it.
https://www.openstack.org/foundation/tech-committee/

Thanks,
Dims

>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 10:41, Jeremy Stanley  wrote:
> On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
>> Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> [...]
>> > If you build images properly in infra, then you will have an image that is
>> > not security checked (no gpg verification of packages) and completely
>> > unverifiable. These are absolutely not images we want to push to
>> > DockerHub/quay for obvious reasons. Security and verification being chief
>> > among them. They are absolutely not images that should ever be run in
>> > production and are only suited for testing. These are the only types of
>> > images that can come out of infra.
>>
>> This sounds like an implementation detail of option 3? I think not
>> signing the images does help indicate that they're not meant to be used
>> in production environments.
> [...]
>
> I'm pretty sure Sam wasn't talking about whether or not the images
> which get built are signed, but whether or not the package manager
> used when building the images vets the distro packages it retrieves
> (the Ubuntu package mirror we maintain in our CI doesn't have
> "secure APT" signatures available for its indices so we disable that
> security measure by default in the CI system to allow us to use
> those mirrors). Point being, if images are built in the upstream CI
> with packages from our Ubuntu package mirror then they are (at least
> at present) not suitable for production use from a security
> perspective for this particular reason even in absence of the other
> concerns expressed.
> --
> Jeremy Stanley

This is a valid concern, but also particularly easy to solve. If we
decide to use nightly builds (or midday in Hawaii? Any timezone with the
least traffic would do), we can skip infra mirrors. In fact, that
approach would help us in a different sense as well. Since these builds
wouldn't be bound to any particular patchset, we could test them to an
extreme, with voting gates for both kolla-ansible and kolla-kubernetes
deployment. I was reluctant to have deploy gates voting inside Kolla,
but that would allow us to do it. In fact, net uplink consumption from
infra would go down, as we won't need to publish tarballs of the registry
every commit; we'll do it once a day at the most convenient hour.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
So another consideration. Do you think the whole rule of "not building
binaries" should be reconsidered? We are kind of a new use case here. We
aren't a distro, but we are packagers (kind of). I don't think putting us
on an equal footing with Red Hat, Canonical or other companies is correct
here.

K8s is something we want to work with, and what we are discussing is
central to how k8s is used. The k8s community creates this culture of
"organic packages" built by anyone; most companies/projects already
have semi-official container images, and I think the expectations on
the quality of these are, well...none? You get what you're given, and if
you don't agree, there is always a way to reproduce this yourself.

[Another huge snip]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [devstack] [deployment] nova-api and meta-api under uwsgi

2017-05-16 Thread Matt Riedemann

On 5/16/2017 11:11 AM, Chris Dent wrote:


(This is a followup to
http://lists.openstack.org/pipermail/openstack-dev/2017-May/116267.html
but I don't have that around anymore to make a proper response to.)

In a devstack change:

https://review.openstack.org/#/c/457715/

nova-api and nova-metadata will be changed to run as WSGI
applications with a uwsgi server by default. This helps to enable
a few recent goals:

* everything under systemd in devstack
* minimizing custom ports for HTTP in devstack
* is part of a series of changes[1] which gets the compute api working
  under WSGI, including some devref for wsgi use:
  https://docs.openstack.org/developer/nova/wsgi.html
* helps enforce the idea that any WSGI server is okay

This last point is important consideration for deployers: Although
devstack will (once the change merges) default to using a
combination of apache2, mod_proxy_uwsgi, and uwsgi there is zero
requirement that deployments replicate that arrangement. The new
'nova-api-wsgi' and 'nova-metadata-wsgi' scripts provide a
module-level 'application' that can be run by any WSGI compliant
server.
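To illustrate that last point, here is a minimal sketch of hosting such a module-level `application` with the stdlib's wsgiref server. The `application` below is a hypothetical stand-in, not nova's real one (which this sketch does not import).

```python
from wsgiref.simple_server import make_server

def application(environ, start_response):
    """Stand-in for the module-level WSGI app a nova-api-wsgi script exposes."""
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'compute API placeholder']

# Port 0 lets the OS pick a free port. The bind address, port, and any path
# prefix live in the server's configuration, not in the application -- which
# is exactly the point made above.
server = make_server('127.0.0.1', 0, application)
```

Swapping in apache2 + mod_proxy_uwsgi, uwsgi, or any other WSGI-compliant server is then purely a server-side configuration change; the `application` object stays the same.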

In those contexts things like the path-prefix of an application and
the port used (if any) to host the application are entirely in the
domain of the web server's config, not the application. This is
a good thing, but it does mean that any deployment automation needs
to make some decisions about how to manipulate the web server's
configuration.

Some other details which might be relevant:

In the devstack change the compute service is registered to run on a
default port of either 80 or 443 at '/compute' and _not_ on a custom
port.

The metadata API, however, continues to run as its own service on
its own port. In fact, it runs using solely uwsgi, without apache2
being involved at all.

Please follow up if there any questions.


[1] https://review.openstack.org/#/c/457283/
https://review.openstack.org/#/c/459413/
https://review.openstack.org/#/c/461289/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Thanks for taking the lead on this work Chris (with credit to sdague as 
well for the assist).


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Thierry Carrez
Davanum Srinivas wrote:
> On Tue, May 16, 2017 at 11:52 AM, Michał Jastrzębski  wrote:
>> On 16 May 2017 at 08:32, Doug Hellmann  wrote:
>>> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
>>>> My preference would be to have an #openstack-tc channel.
>>>>
>>>> One thing I like about the dedicated meeting time was if I was not able to
>>>> attend, or when I was just a casual observer, it was easy to catch up on
>>>> what was discussed because it was all in one place and did not have any
>>>> non TC conversations interlaced.
>>>>
>>>> If we just use -dev, there is a high chance there will be a lot of cross-
>>>> talk during discussions. There would also be a lot of effort to grep
>>>> through the full day of activity to find things relevant to TC
>>>> discussions. If we have a dedicated channel for this, it makes it very
>>>> easy for anyone to know where to go to get a clean, easy to read capture
>>>> of all relevant discussions. I think that will be important with the
>>>> lack of a captured and summarized meeting to look at.
>>>
>>> I definitely understand this desire. I think, though, that any
>>> significant conversations should be made discoverable via an email
>>> thread summarizing them. That honors the spirit of moving our
>>> "decision making" to asynchronous communication tools.

I also prefer we opt for an #openstack-tc channel. A channel is defined
by the topic of its discussions, and #openstack-dev is a catch-all, a
default channel. Reusing it for topical discussions will force everyone
to filter through random discussions in order to get to the
really-TC-related ones. Yes, it's not used much. But that isn't a good
reason to recycle it.

>> To both this and Dims's concerns, I actually think we need some place
>> to just come and ask "guys, is this fine?". If answer would be "let's
>> talk on ML because it's important", that's cool, but on the other hand
>> sometimes simple "yes" would suffice. Not all conversations with TC
>> requires mailing thread, but I'd love to have some "semi-official" TC
>> space where I can drop question, quickly discuss cross-project issues
>> and such.
> 
> Michal,
> 
> Let's try using the ping list on #openstack-dev channel:
> cdent dhellmann dims dtroyer emilienm flaper87 fungi johnthetubaguy
> mordred sdague smcginnis stevemar ttx
> 
> The IRC nicks are here:
> https://governance.openstack.org/tc/#current-members

If you need a ping list to reuse the default channel as a TC "office
hours" channel, that kind of proves that a dedicated channel would be
more appropriate :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Clint Byrum
Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> > Container images introduce some extra complexity, over the basic
> > operating system style packages mentioned above. Due to the way
> > they are constructed, they are likely to include content we don't
> > produce ourselves (either in the form of base layers or via including
> > build tools or other things needed when assembling the full image).
> > That extra content means there would need to be more tracking of
> > upstream issues (bugs, CVEs, etc.) to ensure the images are updated
> > as needed.
> 
> We can do this by building daily, which was in fact the plan. If we
> build every day, you have packages that are at most 24 hours old; CVEs
> and the like in non-OpenStack packages are still maintained by distro
> maintainers.
> 

What's at stake isn't so much "how do we get the bits to the users" but
"how do we only get bits to users that they need". If you build and push
daily, do you expect all of your users to also _pull_ daily? Redeploy
all their containers? How do you detect that there's new CVE-fixing
stuff in a daily build?

This is really the realm of distributors that have full-time security
teams tracking issues and providing support to paying customers.

So I think this is a fine idea, however, it needs to include a commitment
for a full-time paid security team who weighs in on every change to
the manifest. Otherwise we're just lobbing time bombs into our users'
data-centers.



Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Anita Kuno

On 2017-05-16 11:46 AM, Sean Dague wrote:

On 05/16/2017 11:17 AM, Sean McGinnis wrote:

On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:

Folks,

See $TITLE :)

Thanks,
Dims


My preference would be to have an #openstack-tc channel.

One thing I like about the dedicated meeting time was if I was not able to
attend, or when I was just a casual observer, it was easy to catch up on
what was discussed because it was all in one place and did not have any
non-TC conversations interlaced.

If we just use -dev, there is a high chance there will be a lot of cross-
talk during discussions. There would also be a lot of effort to grep
through the full day of activity to find things relevant to TC
discussions. If we have a dedicated channel for this, it makes it very
easy for anyone to know where to go to get a clean, easy to read capture
of all relevant discussions. I think that will be important with the
lack of a captured and summarized meeting to look at.

The thing is, IRC should never be a summary or long-term storage medium.
IRC is a discussion medium. It is a hallway track. It's where ideas
bounce around, lots are left on the floor, and there are plenty of
misstatements as people explore things. It's not store-and-forward
messaging, it's realtime chat.

If we want digestible summaries with context, that's never IRC, and we
shouldn't expect people to look to IRC for that. It's source material at
best. I'm not sure of any IRC conversation that's ever been clean, easy
to read, and captures the entire context within it without jumping to
assumptions of shared background that the conversation participants
already have.

Summaries with context need to emerge from here for people to be able to
follow along (out to email or web), and work their way back into the
conversations.

-Sean


I'll disagree on this point.

I do agree IRC is a discussion medium. I further agree that any 
agreements decided upon need to be further disseminated via other media. 
However, I disagree that the only value for those trying to catch up 
with items that took place in the past lies in a digestible summary. The 
conversation of how that agreement was arrived at holds great value.


Feel free to disregard what I have to say, because I'm not really 
involved right now. But I would like to feel that should occasion arise 
I could step back in, do my homework reading past conversations and have 
a reasonable understanding of the current state of things.


For me OpenStack is about people, foibles and mistakes included. I think 
there is a huge value in seeing how a conversation develops and how an 
agreement came into being, sometimes this is far more valuable to me 
than the agreement itself. Agreements and policies are constantly 
changing, but the process of discussion and how we reach this agreement 
is often more important both to me and as a demonstration to others of 
how to interact effectively than the final agreement, which will likely 
change within the next release or two.


If you are going to do away with tc meetings and I can't find the 
backstory in an IRC tc meeting log then at least let me find the 
backstory in a channel somewhere.


I am in favour of using #openstack-dev for this purpose. I appreciate 
Sean McGinnis' point about keeping the conversation focused, but I don't 
think you would get that even if you had a dedicated #openstack-tc 
channel. Either channel would include side conversations and unrelated 
chat; I don't see any way for that not to happen. So for me I would go 
with using what we already have, also keeping in mind Sean and Doug's 
previous points that we are already fractured enough; it sure would be 
nice to see some good use of already existing public spaces.


Thank you,
Anita.



Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread John Dickinson


On 14 May 2017, at 4:04, Sean Dague wrote:

> One of the things that came up in a logging Forum session is how much effort 
> operators are having to put into reconstructing flows for things like server 
> boot when they go wrong, as every time we jump a service barrier the 
> request-id is reset to something new. The back and forth between Nova / 
> Neutron and Nova / Glance would be definitely well served by this. Especially 
> if this is something that's easy to query in elastic search.
>
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from random 
> users. We're going to assume that's still a concern by some. However, since 
> the last time that came up, we've introduced the concept of "service users", 
> which are a set of higher-privilege service accounts that we use to wrap 
> user requests between services so that long running request chains (like 
> image snapshots) keep working. We trust these service users enough to 
> keep on trucking even after the user token has expired for these long 
> running operations. We could use this same trust path for request-id 
> chaining.
>
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, reset 
> the request_id to the local generated one. We'll log both the global and 
> local request ids. All of these changes happen in oslo.middleware, 
> oslo.context, oslo.log, and most projects won't need anything to get this 
> infrastructure.
>
> The python clients, and callers, will then need to be augmented to pass the 
> request-id in on requests. Servers will effectively decide when they want to 
> opt into calling other services this way.
>
> This only ends up logging the top line global request id as well as the last 
> leaf for each call. This does mean that full tree construction will take more 
> work if you are bouncing through 3 or more servers, but it's a step which I 
> think can be completed this cycle.
>
> I've got some more detailed notes, but before going through the process of 
> putting this into an oslo spec I wanted more general feedback on it so that 
> any objections we didn't think about yet can be raised before going through 
> the detailed design.
>
>   -Sean
>
> -- 
> Sean Dague
> http://dague.net
>
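The validation step Sean proposes (accept an inbound X-OpenStack-Request-ID
only when it matches the `req-$uuid` shape and the caller is a trusted
service user, otherwise fall back to the locally generated id) could be
sketched roughly as below. The helper names are illustrative, not the
actual oslo.middleware/oslo.context API:

```python
import re
import uuid

# req- followed by a canonical lowercase UUID (hex groups 8-4-4-4-12).
REQUEST_ID_RE = re.compile(
    r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')


def generate_request_id():
    return 'req-%s' % uuid.uuid4()


def pick_request_ids(inbound_header, caller_is_service_user):
    """Return (global_id, local_id) for logging.

    The local id is always freshly generated; the inbound id is kept as
    the global id only when it validates strictly and arrived via a
    trusted service user, mirroring the trust model described above.
    """
    local_id = generate_request_id()
    if (inbound_header and caller_is_service_user
            and REQUEST_ID_RE.match(inbound_header)):
        return inbound_header, local_id
    return local_id, local_id
```

Logging both values per request gives the "top line global request id
plus the last leaf" behaviour the proposal describes.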


I'm not sure the best place to respond (mailing list or gerrit), so
I'll write this up and post it to both places.

I think the idea behind this proposal is great. It has the potential
to bring a lot of benefit to users who are tracing a request across
many different services, in part by making it easy to search in an
indexing system like ELK.

The current proposal has some elements that won't work with the way
Swift currently solves this problem. This is mostly due to the
proposed uuid-ish check for validation. However, the Swift solution
has a few aspects that I believe would be very helpful for the entire
community.

NB: Swift returns both an `X-OpenStack-Request-ID` and an `X-Trans-ID`
header in every response. The `X-Trans-ID` was implemented before the
OpenStack request ID was proposed, and so we've kept the `X-Trans-ID` so
as not to break existing clients. The value of `X-OpenStack-Request-ID`
in any response from Swift is simply a mirror of the `X-Trans-ID` value.

The request id in Swift is made up of a few parts:

X-Openstack-Request-Id: txbea0071df2b0465082501-00591b3077saio-extraextra


In the code, this is generated from:

'tx%s-%010x%s' % (uuid.uuid4().hex[:21], time.time(), quote(trans_id_suffix))

...meaning that there are three parts to the request id. Let's take
each in turn.

The first part always starts with 'tx' (originally from the
"transaction id") and then is the first 21 hex characters of a uuid4.
The truncation is to limit the overall length of the value.

The second part is the hex value of the current time, padded to 10
characters.

Finally, the third part is the quoted suffix, and it defaults to the
empty string. The suffix itself can be made of two parts. The first is
configured in the Swift proxy server itself (ie the service that does
the logging) via the `trans_id_suffix` config. This allows an operator
to set a different suffix for each API endpoint or each region or each
cluster in order to help distinguish them in logs. For example, if a
deployment with multiple clusters uses centralized log aggregation, a
different trans_id_suffix value for each 
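A self-contained sketch of the construction John walks through, plus the
reverse decomposition. The helper names are mine, not Swift's, and
`int()` is added so the `'%x'` format works on Python 3:

```python
import time
import uuid
from urllib.parse import quote


def generate_trans_id(trans_id_suffix=''):
    # 'tx' + first 21 hex chars of a uuid4, then '-', then the current
    # time as a zero-padded 10-char hex value, then the quoted suffix.
    return 'tx%s-%010x%s' % (
        uuid.uuid4().hex[:21], int(time.time()), quote(trans_id_suffix))


def split_trans_id(trans_id):
    # Reverse the construction above.  The random part is pure hex and
    # contains no '-', so splitting on the first '-' is safe; the
    # timestamp is always exactly 10 hex characters.
    random_part, rest = trans_id[2:].split('-', 1)
    return random_part, int(rest[:10], 16), rest[10:]
```

Decomposed this way, the example id above splits into the random part
`bea0071df2b0465082501`, the hex timestamp `00591b3077`, and the suffix
`saio-extraextra`.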

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
> Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
[...]
> > If you build images properly in infra, then you will have an image that is
> > not security checked (no gpg verification of packages) and completely
> > unverifiable. These are absolutely not images we want to push to
> > DockerHub/quay for obvious reasons. Security and verification being chief
> > among them. They are absolutely not images that should ever be run in
> > production and are only suited for testing. These are the only types of
> > images that can come out of infra.
> 
> This sounds like an implementation detail of option 3? I think not
> signing the images does help indicate that they're not meant to be used
> in production environments.
[...]

I'm pretty sure Sam wasn't talking about whether or not the images
which get built are signed, but whether or not the package manager
used when building the images vets the distro packages it retrieves
(the Ubuntu package mirror we maintain in our CI doesn't have
"secure APT" signatures available for its indices so we disable that
security measure by default in the CI system to allow us to use
those mirrors). Point being, if images are built in the upstream CI
with packages from our Ubuntu package mirror then they are (at least
at present) not suitable for production use from a security
perspective for this particular reason even in absence of the other
concerns expressed.
-- 
Jeremy Stanley




Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 09:38:34 -0400 (-0400), Davanum Srinivas wrote:
> See $TITLE :)

Trying not to rehash other points, I'm in favor of using
#openstack-dev for now until we see it's not working out. Creating a
new channel for this purpose before we've even undertaken the
experiment seems like a social form of premature optimization.

If the concern is that it's hard to get the attention of (I hate to
say "ping" since contextlessly highlighting people in channel to
find out whether they're around is especially annoying to me at
least) members of the OpenStack Technical Committee, the Infra
team's root sysadmins already solved this issue by all configuring
their clients to highlight on a specific keyword (in that case,
"infra-root" mentioned in channel gets the attention of most of our
rooters these days). Something like "tc-members" can be used to
address a question specifically to those on the TC who happen to be
around and paying attention and also gives people looking at the
logs a useful string to grep/search/whatever. I've gone ahead and
configured my client to highlight on that now.

As for losing context when a discussion transfers from informal
temperature taking, brainstorming and bikeshedding in IRC to a less
synchronous thread on the ML, simply making sure to include a URL to
the point in the channel log where the discussion began ought to be
sufficient (and should be encouraged _regardless_ of which channel
that was).
-- 
Jeremy Stanley




Re: [openstack-dev] [networking-sfc] pep8 failing

2017-05-16 Thread Ihar Hrachyshka
Make sure you have the latest neutron-lib in your tree: neutron-lib==1.6.0

On Tue, May 16, 2017 at 3:05 AM, Vikash Kumar
 wrote:
> Hi Team,
>
>   pep8 is failing on master. Translation hint helpers are removed from
> LOG messages. Is this done on purpose? Let me know if it is not, and I
> will change it.
>
> ./networking_sfc/db/flowclassifier_db.py:342:13: N531  Log messages require
> translation hints!
> LOG.info("Deleting a non-existing flow classifier.")
> ^
> ./networking_sfc/db/sfc_db.py:383:13: N531  Log messages require translation
> hints!
> LOG.info("Deleting a non-existing port chain.")
> ^
> ./networking_sfc/db/sfc_db.py:526:13: N531  Log messages require translation
> hints!
> LOG.info("Deleting a non-existing port pair.")
> ^
> ./networking_sfc/db/sfc_db.py:658:13: N531  Log messages require translation
> hints!
> LOG.info("Deleting a non-existing port pair group.")
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:38:9: N531  Log
> messages require translation hints!
> LOG.info("Configured Flow Classifier drivers: %s", names)
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:44:9: N531  Log
> messages require translation hints!
> LOG.info("Loaded Flow Classifier drivers: %s",
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:80:9: N531  Log
> messages require translation hints!
> LOG.info("Registered Flow Classifier drivers: %s",
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:87:13: N531  Log
> messages require translation hints!
> LOG.info("Initializing Flow Classifier driver '%s'",
> ^
> ./networking_sfc/services/flowclassifier/driver_manager.py:107:17: N531  Log
> messages require translation hints!
> LOG.error(
> ^
> ./networking_sfc/services/flowclassifier/plugin.py:63:17: N531  Log messages
> require translation hints!
> LOG.error("Create flow classifier failed, "
> ^
> ./networking_sfc/services/flowclassifier/plugin.py:87:17: N531  Log messages
> require translation hints!
> LOG.error("Update flow classifier failed, "
> ^
> ./networking_sfc/services/flowclassifier/plugin.py:102:17: N531  Log
> messages require translation hints!
> LOG.error("Delete flow classifier failed, "
> ^
> ./networking_sfc/services/sfc/driver_manager.py:38:9: N531  Log messages
> require translation hints!
> LOG.info("Configured SFC drivers: %s", names)
> ^
> ./networking_sfc/services/sfc/driver_manager.py:43:9: N531  Log messages
> require translation hints!
> LOG.info("Loaded SFC drivers: %s", self.names())
> ^
> ./networking_sfc/services/sfc/driver_manager.py:78:9: N531  Log messages
> require translation hints!
> LOG.info("Registered SFC drivers: %s",
> ^
> ./networking_sfc/services/sfc/driver_manager.py:85:13: N531  Log messages
> require translation hints!
> LOG.info("Initializing SFC driver '%s'", driver.name)
> ^
> ./networking_sfc/services/sfc/driver_manager.py:104:17: N531  Log messages
> require translation hints!
> LOG.error(
> ^
> ./networking_sfc/services/sfc/plugin.py:57:17: N531  Log messages require
> translation hints!
> LOG.error("Create port chain failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:82:17: N531  Log messages require
> translation hints!
> LOG.error("Update port chain failed, port_chain '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:97:17: N531  Log messages require
> translation hints!
> LOG.error("Delete port chain failed, portchain '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:122:17: N531  Log messages require
> translation hints!
> LOG.error("Create port pair failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:144:17: N531  Log messages require
> translation hints!
> LOG.error("Update port pair failed, port_pair '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:159:17: N531  Log messages require
> translation hints!
> LOG.error("Delete port pair failed, port_pair '%s'",
> ^
> ./networking_sfc/services/sfc/plugin.py:185:17: N531  Log messages require
> translation hints!
> LOG.error("Create port pair group failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:213:17: N531  Log messages require
> translation hints!
> LOG.error("Update port pair group failed, "
> ^
> ./networking_sfc/services/sfc/plugin.py:229:17: N531  Log messages require
> translation hints!
> LOG.error("Delete port pair 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> So another consideration. Do you think the whole rule of "not building
> binaries" should be reconsidered? We are kind of a new use case here. We
> aren't a distro but we are packagers (kind of). I don't think putting us
> on equal footing with Red Hat, Canonical or other companies is correct
> here.
> 
> K8s is something we want to work with, and what we are discussing is
> central to how k8s is used. K8s community creates this culture of
> "organic packages" built by anyone, most companies/projects already
> have semi-official container images and I think expectations on the
> quality of these are well...none? You get what you're given and if you
> don't agree, there is always a way to reproduce it yourself.
> 
> [Another huge snip]
> 

I wanted to have the discussion, but my position for now is that
we should continue as we have been and not change the policy.

I don't have a problem with any individual or group of individuals
publishing their own organic packages. The issue I have is with
making sure it is clear those *are* "organic" and not officially
supported by the broader community. One way to do that is to say
they need to be built somewhere other than on our shared infrastructure.
There may be other ways, though, so I'm looking for input on that.

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
> >> >>
> >> >> Flavio Percoco wrote:
> >> >>>
> >> >>> From a release perspective, as Doug mentioned, we've avoided releasing
> >> >>> projects
> >> >>> in any kind of built form. This was also one of the concerns I raised
> >> >>> when
> >> >>> working on the proposal to support other programming languages. The
> >> >>> problem of
> >> >>> releasing built images goes beyond the infrastructure requirements. 
> >> >>> It's
> >> >>> the
> >> >>> message and the guarantees implied with the built product itself that 
> >> >>> are
> >> >>> the
> >> >>> concern here. And I tend to agree with Doug that this might be a 
> >> >>> problem
> >> >>> for us
> >> >>> as a community. Unfortunately, putting your name, Michal, as contact
> >> >>> point is
> >> >>> not enough. Kolla is not the only project producing container images 
> >> >>> and
> >> >>> we need
> >> >>> to be consistent in the way we release these images.
> >> >>>
> >> >>> Nothing prevents people for building their own images and uploading 
> >> >>> them
> >> >>> to
> >> >>> dockerhub. Having this as part of the OpenStack's pipeline is a 
> >> >>> problem.
> >> >>
> >> >>
> >> >> I totally subscribe to the concerns around publishing binaries (under
> >> >> any form), and the expectations in terms of security maintenance that it
> >> >> would set on the publisher. At the same time, we need to have images
> >> >> available, for convenience and testing. So what is the best way to
> >> >> achieve that without setting strong security maintenance expectations
> >> >> for the OpenStack community ? We have several options:
> >> >>
> >> >> 1/ Have third-parties publish images
> >> >> It is the current situation. The issue is that the Kolla team (and
> >> >> likely others) would rather automate the process and use OpenStack
> >> >> infrastructure for it.
> >> >>
> >> >> 2/ Have third-parties publish images, but through OpenStack infra
> >> >> This would allow to automate the process, but it would be a bit weird to
> >> >> use common infra resources to publish in a private repo.
> >> >>
> >> >> 3/ Publish transient (per-commit or daily) images
> >> >> A "daily build" (especially if you replace it every day) would set
> >> >> relatively-limited expectations in terms of maintenance. It would end up
> >> >> picking up security updates in upstream layers, even if not immediately.
> >> >>
> >> >> 4/ Publish images and own them
> >> >> Staff release / VMT / stable team in a way that lets us properly own
> >> >> those images and publish them officially.
> >> >>
> >> >> Personally I think (4) is not realistic. I think we could make (3) work,
> >> >> and I prefer it to (2). If all else fails, we should keep (1).
> >> >
> >> >
> >> > Agreed #4 is a bit unrealistic.
> >> >
> >> > Not sure I understand the difference between #2 and #3. Is it just the
> >> > cadence?
> >> >
> >> > I'd prefer for these builds to have a daily cadence because it sets the
> >> > expectations w.r.t maintenance right: "These images are daily builds and 
> >> > not
> >> > certified releases. For stable builds you're better off building it
> >> > yourself"
> >>
> >> And daily builds are exactly what I wanted in the first place:) We
> >> probably will keep publishing release packages too, but we can be so
> >> called 3rd party. I also agree [4] is completely unrealistic and I
> >> would be against putting such heavy burden of responsibility on any
> >> community, including Kolla.
> >>
> >> While daily cadence will send message that it's not stable, truth will
> >> be that it will be more stable than what people would normally build
> >> locally (again, it passes more gates), but I'm totally fine in not
> >> saying that and let people decide how they want to use it.
> >>
> >> So, can we move on with implementation?
> >
> > I don't want the images published to docker hub. Are they still useful
> > to you if they aren't published?
> 
> What do you mean? We need images available... whether it's dockerhub,
> an infra-hosted registry or any other way to have them, we need to be
> able to have images that are available and fresh without building.
> Dockerhub/quay.io is the least work for the infra team/resources.

There are 2 separate concerns.

The first concern is whether this is a good idea at all, from a
policy perspective. Do we have the people to maintain the images,
track CVEs, etc.? Do we have the response time to update or remove
bad images? Can we, as a community, actually staff the support to
an appropriate level? Or, can we clearly communicate that we do not
support the images for production use and effectively avoid having
someone start to rely on them?

The 

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-05-16 17:41:28 +:
> On 2017-05-16 11:17:31 -0400 (-0400), Doug Hellmann wrote:
> > Excerpts from Sam Yaple's message of 2017-05-16 14:11:18 +:
> [...]
> > > If you build images properly in infra, then you will have an image that is
> > > not security checked (no gpg verification of packages) and completely
> > > unverifiable. These are absolutely not images we want to push to
> > > DockerHub/quay for obvious reasons. Security and verification being chief
> > > among them. They are absolutely not images that should ever be run in
> > > production and are only suited for testing. These are the only types of
> > > images that can come out of infra.
> > 
> > This sounds like an implementation detail of option 3? I think not
> > signing the images does help indicate that they're not meant to be used
> > in production environments.
> [...]
> 
> I'm pretty sure Sam wasn't talking about whether or not the images
> which get built are signed, but whether or not the package manager
> used when building the images vets the distro packages it retrieves
> (the Ubuntu package mirror we maintain in our CI doesn't have
> "secure APT" signatures available for its indices so we disable that
> security measure by default in the CI system to allow us to use
> those mirrors). Point being, if images are built in the upstream CI
> with packages from our Ubuntu package mirror then they are (at least
> at present) not suitable for production use from a security
> perspective for this particular reason even in absence of the other
> concerns expressed.

Thanks for clarifying; that makes more sense.



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
> >> So another consideration. Do you think the whole rule of "not building
> >> binaries" should be reconsidered? We are kind of a new use case here. We
> >> aren't a distro but we are packagers (kind of). I don't think putting us
> >> on equal footing with Red Hat, Canonical or other companies is correct
> >> here.
> >>
> >> K8s is something we want to work with, and what we are discussing is
> >> central to how k8s is used. K8s community creates this culture of
> >> "organic packages" built by anyone, most companies/projects already
> >> have semi-official container images and I think expectations on the
> >> quality of these are well...none? You get what you're given and if you
> >> don't agree, there is always a way to reproduce it yourself.
> >>
> >> [Another huge snip]
> >>
> >
> > I wanted to have the discussion, but my position for now is that
> > we should continue as we have been and not change the policy.
> >
> > I don't have a problem with any individual or group of individuals
> > publishing their own organic packages. The issue I have is with
> > making sure it is clear those *are* "organic" and not officially
> > supported by the broader community. One way to do that is to say
> > they need to be built somewhere other than on our shared infrastructure.
> > There may be other ways, though, so I'm looking for input on that.
> 
> What I was trying to say here is, current discussion aside, maybe we
> should revise this "not supported by broader community" rule. They may
> very well be supported to a certain point. Support is not just yes or
> no, it's all the levels in between. I think we can afford *some* level
> of official support, even if that some level means best effort made by
> community. If Kolla community, not an individual like myself, would
> like to support these images best to our ability, why aren't we
> allowed? As long as we are crystal clear what is scope of our support,
> why can't we do it? I think we've already proven that it's going to be
> tremendously useful for a lot of people, even in a shape we discuss
> today, that is "best effort, you still need to validate it for
> yourself"...

Right, I understood that. So far I haven't heard anything to change
my mind, though.

I think you're underestimating the amount of risk you're taking on
for yourselves and by extension the rest of the community, and
introducing to potential consumers of the images, by promising to
support production deployments with a small team of people without
the economic structure in place to sustain the work.

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Sean Dague
On 05/16/2017 02:39 PM, Doug Hellmann wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
>> On 16 May 2017 at 09:40, Clint Byrum  wrote:
>>>
>>> What's at stake isn't so much "how do we get the bits to the users" but
>>> "how do we only get bits to users that they need". If you build and push
>>> daily, do you expect all of your users to also _pull_ daily? Redeploy
>>> all their containers? How do you detect that there's new CVE-fixing
>>> stuff in a daily build?
>>>
>>> This is really the realm of distributors that have full-time security
>>> teams tracking issues and providing support to paying customers.
>>>
>>> So I think this is a fine idea, however, it needs to include a commitment
>>> for a full-time paid security team who weighs in on every change to
>>> the manifest. Otherwise we're just lobbing time bombs into our users'
>>> data-centers.
>>
>> One thing I struggle with is...well...how does *not having* built
>> containers help with that? If your company has a full-time security
>> team, they can check our containers prior to deployment. If your
>> company doesn't, then building locally will be subject to the same
>> risks as downloading from dockerhub. The difference is, the dockerhub
>> containers were tested in our CI to the extent that our CI allows.
>> Whether or not you have your own security team, local CI, or staging
>> env, that will be just a little bit of testing on top of what you get
>> for free, and I think that's value enough for users to push for this.
> 
> The benefit of not building images ourselves is that we are clearly
> communicating that the responsibility for maintaining the images
> falls on whoever *does* build them. No user should be left thinking
> that the community somehow needs to maintain the content of the
> images for them, just because we're publishing new images at some
> regular cadence.

+1. It is really easy to think that saying "don't use this in
production" prevents people from using it in production. See: User
Survey 2017 and the number of folks reporting DevStack as their
production deployment tool.

We need to not only manage artifacts, but expectations. And with all the
confusion of projects in the openstack git namespace being officially
blessed openstack projects over the past few years, I can't imagine
people not thinking that openstack infra generated content in dockerhub
is officially supported content.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Flavio Percoco

On 16/05/17 14:08 +0200, Thierry Carrez wrote:

Flavio Percoco wrote:

From a release perspective, as Doug mentioned, we've avoided releasing projects
in any kind of built form. This was also one of the concerns I raised when
working on the proposal to support other programming languages. The problem of
releasing built images goes beyond the infrastructure requirements. It's the
message and the guarantees implied with the built product itself that are the
concern here. And I tend to agree with Doug that this might be a problem for us
as a community. Unfortunately, putting your name, Michal, as contact point is
not enough. Kolla is not the only project producing container images and we need
to be consistent in the way we release these images.

Nothing prevents people from building their own images and uploading them to
dockerhub. Having this as part of OpenStack's pipeline is a problem.


I totally subscribe to the concerns around publishing binaries (under
any form), and the expectations in terms of security maintenance that it
would set on the publisher. At the same time, we need to have images
available, for convenience and testing. So what is the best way to
achieve that without setting strong security maintenance expectations
for the OpenStack community? We have several options:

1/ Have third-parties publish images
It is the current situation. The issue is that the Kolla team (and
likely others) would rather automate the process and use OpenStack
infrastructure for it.

2/ Have third-parties publish images, but through OpenStack infra
This would allow the process to be automated, but it would be a bit weird to
use common infra resources to publish to a private repo.

3/ Publish transient (per-commit or daily) images
A "daily build" (especially if you replace it every day) would set
relatively-limited expectations in terms of maintenance. It would end up
picking up security updates in upstream layers, even if not immediately.

4/ Publish images and own them
Staff release / VMT / stable team in a way that lets us properly own
those images and publish them officially.

Personally I think (4) is not realistic. I think we could make (3) work,
and I prefer it to (2). If all else fails, we should keep (1).


Agreed #4 is a bit unrealistic.

Not sure I understand the difference between #2 and #3. Is it just the cadence?

I'd prefer for these builds to have a daily cadence because it sets the
expectations w.r.t maintenance right: "These images are daily builds and not
certified releases. For stable builds you're better off building it yourself"

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Flavio Percoco

On 16/05/17 09:45 -0400, Doug Hellmann wrote:

Excerpts from Flavio Percoco's message of 2017-05-15 21:50:23 -0400:

On 15/05/17 11:49 -0700, Michał Jastrzębski wrote:
>On 15 May 2017 at 11:19, Davanum Srinivas  wrote:
>> Sorry for the top post, Michal, Can you please clarify a couple of things:
>>
>> 1) Can folks install just one or two services for their specific scenario?
>
>Yes, that's more of a kolla-ansible feature and requires a little bit
>of ansible know-how, but it's entirely possible. Kolla-kubernetes is
>built to allow maximum flexibility in that space.
>
>> 2) Can the container images from kolla be run on bare docker daemon?
>
>Yes, but they need to either override our default CMD (kolla_start) or
>provide the ENVs required by it; not a huge deal.
>
>> 3) Can someone take the kolla container images from say dockerhub and
>> use it without the Kolla framework?
>
>Yes, there is no such thing as a kolla framework, really. Our images
>follow a stable ABI and can be deployed by any deploy mechanism that
>follows it. We have several users who wrote their own deploy mechanism
>from scratch.
>
>Containers are just blobs with binaries in them. The little things we
>add are the kolla_start script, to allow our config file management,
>and some custom startup scripts for things like mariadb to help with
>bootstrapping; both are entirely optional.

Just as a bonus example, TripleO is currently using kolla images. They used to
be vanilla images but are not anymore, only because TripleO depends on puppet
being in the image, which has nothing to do with kolla.

Flavio



When you say "using kolla images," what do you mean? In upstream
CI tests? On contributors' dev/test systems? Production deployments?


All of them. Note that TripleO now builds its own "kolla images" (it uses the
kolla Dockerfiles and kolla-build) because of the puppet dependency. When I
said TripleO uses kolla images, that was intended to answer Dims' question on
whether these images (or Dockerfiles) can be consumed by other projects.

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 07:49, Sean Dague  wrote:
> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
>> Folks,
>>
>> See $TITLE :)
>>
>> Thanks,
>> Dims
>
> I'd rather avoid #openstack-tc and just use #openstack-dev.
> #openstack-dev is a pretty lightly used channel (compared to, say,
> #openstack-infra or #openstack-nova). I've personally been trying to
> make it my go-to way to reach members of other teams instead of
> diving into project-specific channels, because typically it means we
> can get a broader conversation around the item in question.
>
> Our fragmentation of shared understanding on many issues is definitely
> exacerbated by many project channels, and the assumption that people
> need to watch 20+ different channels, with different context, to stay up
> on things.
>
> I would love us to have the problem that too many interesting topics are
> being discussed in #openstack-dev that we feel the need to parallelize
> them with a different channel. But I would say we should wait until
> that's actually a problem.
>
> -Sean

I, on the flip side, would be all for #openstack-tc. First,
#openstack-dev is not an obvious place to look for TC members.
#openstack-tc would be a channel for TC-related topics, which in large
part would be significant and worth coming back to, so having this
"filtered" space just for cross-community discussions would make
digging through the logs much easier.

> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-16 10:49:54 -0400:
> On 05/16/2017 09:38 AM, Davanum Srinivas wrote:
> > Folks,
> > 
> > See $TITLE :)
> > 
> > Thanks,
> > Dims
> 
> I'd rather avoid #openstack-tc and just use #openstack-dev.
> #openstack-dev is a pretty lightly used channel (compared to, say,
> #openstack-infra or #openstack-nova). I've personally been trying to
> make it my go-to way to reach members of other teams instead of
> diving into project-specific channels, because typically it means we
> can get a broader conversation around the item in question.
> 
> Our fragmentation of shared understanding on many issues is definitely
> exacerbated by many project channels, and the assumption that people
> need to watch 20+ different channels, with different context, to stay up
> on things.
> 
> I would love us to have the problem that too many interesting topics are
> being discussed in #openstack-dev that we feel the need to parallelize
> them with a different channel. But I would say we should wait until
> that's actually a problem.
> 
> -Sean
> 

+1, let's start with just the -dev channel and see if volume becomes
an issue.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Thierry Carrez
Michał Jastrzębski wrote:
> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> I'd prefer for these builds to have a daily cadence because it sets the
>> expectations w.r.t maintenance right: "These images are daily builds and not
>> certified releases. For stable builds you're better off building it
>> yourself"
> 
> And daily builds are exactly what I wanted in the first place :) We
> will probably keep publishing release packages too, but we can be a
> so-called 3rd party. I also agree [4] is completely unrealistic, and I
> would be against putting such a heavy burden of responsibility on any
> community, including Kolla.
> 
> While a daily cadence will send the message that it's not stable, the
> truth is that it will be more stable than what people would normally
> build locally (again, it passes more gates), but I'm totally fine with
> not saying that and letting people decide how they want to use it.
> 
> So, can we move on with implementation?

I'm just listing options to help frame the discussion. I still think we
need a global answer on this (for container images and VMs) so I think
it would be great to have a clear TC resolution (picking one of those
options) before moving on with implementation.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread Sean Dague
On 05/16/2017 10:28 AM, Chris Dent wrote:
> On Sun, 14 May 2017, Sean Dague wrote:
> 
>> So, the basic idea is, services will optionally take an inbound
>> X-OpenStack-Request-ID which will be strongly validated to the format
>> (req-$uuid). They will continue to always generate one as well. When
>> the context is built (which is typically about 3 more steps down the
>> paste pipeline), we'll check that the service user was involved, and
>> if not, reset the request_id to the local generated one. We'll log
>> both the global and local request ids. All of these changes happen in
>> oslo.middleware, oslo.context, oslo.log, and most projects won't need
>> anything to get this infrastructure.
> 
> I may not be understanding this paragraph, but this sounds like you
> are saying: accept a valid and authentic incoming request id, but
> only use it in ongoing requests if the service user was involved in
> those requests.
> 
> If that's correct, I'd suggest not doing that because it confuses
> traceability of a series of things. Instead, always use the request
> id if it is valid and authentic.
> 
> But maybe you mean "if the request id could not be proven authentic,
> don't use it"?

It is a little clearer in the detailed spec. The issue is that the
place where this is generated comes before we know enough to decide
whether we should be allowed to use it (it's actually before keystone
auth). I put some annotations of paste pipelines inline to help explain.

We either assume success, or assume failure, and fix later. We don't
actually have a functional logger using the request-id until we've got
keystone auth (bootstrapping is fun!) so assuming success, and reverting
if auth says no, actually should cause less confusion (and require less
code) than the other way around.
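The strict validation described above (accept an inbound id only if it matches the `req-$uuid` format, otherwise fall back to the locally generated one) can be sketched in a few lines. This is only an illustration: the helper name and the assumption that the UUID portion is lowercase hex are mine, not oslo.middleware's actual implementation.

```python
import re

# Accepts only the strict "req-" + UUID form; anything else would be
# discarded in favor of the locally generated request id.
_REQUEST_ID_RE = re.compile(
    r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')


def is_valid_request_id(value):
    """Return True if value matches the req-$uuid format."""
    return bool(_REQUEST_ID_RE.match(value or ''))


print(is_valid_request_id('req-3d5bd2b6-bbc3-4a40-9c0f-69b1e7b4e3a5'))  # True
print(is_valid_request_id('req-not-a-uuid'))                            # False
```

A header failing this check would simply be ignored, and the logs would then carry only the locally generated id.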

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 08:32, Doug Hellmann  wrote:
> Excerpts from Sean McGinnis's message of 2017-05-16 10:17:35 -0500:
>> On Tue, May 16, 2017 at 09:38:34AM -0400, Davanum Srinivas wrote:
>> > Folks,
>> >
>> > See $TITLE :)
>> >
>> > Thanks,
>> > Dims
>> >
>>
>> My preference would be to have an #openstack-tc channel.
>>
>> One thing I liked about the dedicated meeting time was that if I was not
>> able to attend, or was just a casual observer, it was easy to catch up on
>> what was discussed, because it was all in one place and did not have any
>> non-TC conversations interlaced.
>>
>> If we just use -dev, there is a high chance there will be a lot of cross-
>> talk during discussions. There would also be a lot of effort to grep
>> through the full day of activity to find things relevant to TC
>> discussions. If we have a dedicated channel for this, it makes it very
>> easy for anyone to know where to go to get a clean, easy to read capture
>> of all relevant discussions. I think that will be important with the
>> lack of a captured and summarized meeting to look at.
>>
>> Sean
>>
>
> I definitely understand this desire. I think, though, that any
> significant conversations should be made discoverable via an email
> thread summarizing them. That honors the spirit of moving our
> "decision making" to asynchronous communication tools.

To both this and Dims's concerns, I actually think we need some place
to just come and ask "guys, is this fine?". If the answer is "let's
talk on the ML because it's important", that's cool, but on the other
hand sometimes a simple "yes" would suffice. Not every conversation
with the TC requires a mailing list thread, but I'd love to have some
"semi-official" TC space where I can drop a question, quickly discuss
cross-project issues, and such.

> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Steven Hardy
On Tue, May 16, 2017 at 04:33:33AM +, Dnyaneshwar Pawar wrote:
> Hi TripleO team,
> 
> I am trying to apply custom configuration to an existing overcloud (using
> the openstack overcloud deploy command).
> Though there is no error, the configuration is not applied to the overcloud.
> Am I missing anything here?
> http://paste.openstack.org/show/609619/

In your paste you have the resource_registry like this:

OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml

The problem is that OS::TripleO::ControllerServer isn't a resource type we
use, i.e. it's not a valid hook to enable additional node configuration.

Instead try something like this:

OS::TripleO::NodeExtraConfigPost: /home/stack/test/heat3_ocata.yaml

Which will run the script on all nodes, as documented here:

https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html

Out of interest, where did you find OS::TripleO::ControllerServer, do we
have a mistake in our docs somewhere?

Also in your template the type: OS::Heat::SoftwareDeployment should be
either type: OS::Heat::SoftwareDeployments (as in the docs) or type:
OS::Heat::SoftwareDeploymentGroup (the newer name for SoftwareDeployments,
we should switch the docs to that..).

Hope that helps!

-- 
Steve Hardy
Red Hat Engineering, Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Davanum Srinivas
Why drag the TC into this discussion, Steven? If the TC has something to
say, it will be in the form of a resolution with the topic "formal-vote".
So please stop!

Thanks,
Dims

On Tue, May 16, 2017 at 12:22 AM, Steven Dake (stdake)  wrote:
> Flavio,
>
> Forgive the top post – outlook ftw.
>
> I understand the concerns raised in this thread.  It is unclear whether this 
> thread reflects the feeling of two TC members, or whether enough TC members 
> care deeply about this issue to permanently limit OpenStack big tent 
> projects’ ability to generate container images in various external artifact 
> storage systems.  The point of discussion I see effectively raised in this 
> thread is “OpenStack infra will not push images to dockerhub”.
>
> I’d like clarification if this is a ruling from the TC, or simply an 
> exploratory discussion.
>
> If it is exploratory, it is prudent that OpenStack projects not be blocked by 
> debate on this issue until the TC has made such a ruling banning the 
> creation of container images via OpenStack infrastructure.
>
> Regards
> -steve
>
> -Original Message-
> From: Flavio Percoco 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Monday, May 15, 2017 at 7:00 PM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: Re: [openstack-dev] 
> [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
>  do we want to be publishing binary container images?
>
> On 15/05/17 12:32 -0700, Michał Jastrzębski wrote:
> >On 15 May 2017 at 12:12, Doug Hellmann  wrote:
>
> [huge snip]
>
> >>> > I'm raising the issue here to get some more input into how to
> >>> > proceed. Do other people think this concern is overblown? Can we
> >>> > mitigate the risk by communicating through metadata for the images?
> >>> > Should we stick to publishing build instructions (Dockerfiles, or
> >>> > whatever) instead of binary images? Are there other options I 
> haven't
> >>> > mentioned?
> >>>
> >>> Today we do publish build instructions, that's what Kolla is. We also
> >>> publish built containers already, just we do it manually on release
> >>> today. If we decide to block it, I assume we should stop doing that
> >>> too? That will hurt users who uses this piece of Kolla, and I'd hate
> >>> to hurt our users:(
> >>
> >> Well, that's the question. Today we have teams publishing those
> >> images themselves, right? And the proposal is to have infra do it?
> >> That change could be construed to imply that there is more of a
> >> relationship with the images and the rest of the community (remember,
> >> folks outside of the main community activities do not always make
> >> the same distinctions we do about teams). So, before we go ahead
> >> with that, I want to make sure that we all have a chance to discuss
> >> the policy change and its implications.
> >
> >Infra as vm running with infra, but team to publish it can be Kolla
> >team. I assume we'll be responsible to keep these images healthy...
>
> I think this is the gist of the concern and I'd like us to focus on it.
>
> As someone that used to consume these images from kolla's dockerhub 
> account
> directly, I can confirm they are useful. However, I do share Doug's 
> concern and
> the impact this may have on the community.
>
> From a release perspective, as Doug mentioned, we've avoided releasing 
> projects
> in any kind of built form. This was also one of the concerns I raised when
> working on the proposal to support other programming languages. The 
> problem of
> releasing built images goes beyond the infrastructure requirements. It's 
> the
> message and the guarantees implied with the built product itself that are 
> the
> concern here. And I tend to agree with Doug that this might be a problem 
> for us
> as a community. Unfortunately, putting your name, Michal, as contact 
> point is
> not enough. Kolla is not the only project producing container images and 
> we need
> to be consistent in the way we release these images.
>
> Nothing prevents people from building their own images and uploading them 
> to
> dockerhub. Having this as part of OpenStack's pipeline is a problem.
>
> Flavio
>
> P.S: note this goes against my container(ish) interests but it's a
> community-wide problem.
>
> --
> @flaper87
> Flavio Percoco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims


Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Afek, Ifat (Nokia - IL/Kfar Sava)


On 16/05/2017, 4:36, "Sam P"  wrote:

Hi Greg,

 In Masakari [0], for VMHA, we have already implemented a somewhat
similar function in masakari-monitors.
 Masakari-monitors runs on the nova-compute node, and monitors host,
process, or instance failures.
 The Masakari instance monitor has functionality similar to what you
have described.
 Please see [1] for more details on instance monitoring.
 [0] https://wiki.openstack.org/wiki/Masakari
 [1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

 Once masakari-monitors detects a failure, it will send a notification to
masakari-api to take the appropriate recovery actions to recover that VM.

 
Hi Greg, Sam,

As Vitrage is about correlating alarms that come from different sources, and is 
not a monitor by itself – I think that it can benefit from information 
retrieved by both Masakari and Zabbix monitors. 

Zabbix is already integrated into Vitrage. I don’t know if there are specific 
tests for VM heartbeat, but I think it is very likely that there are. 
Regarding Masakari – looking at your documents, I believe that integrating your 
monitoring information into Vitrage could be quite straightforward. 

Best Regards,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Consolidating web themes

2017-05-16 Thread Alexandra Settle
This all sounds really great ☺ thanks for taking it on board, Anne!

No questions at present ☺ looking forward to seeing the new design!

From: Anne Gentle 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, May 15, 2017 at 2:33 PM
To: "openstack-d...@lists.openstack.org" , 
OpenStack Development Mailing List 
Subject: [openstack-dev] [all] Consolidating web themes

Hi all,

I wanted to make you all aware of some consolidation efforts I'll be working on 
this release. You may have noticed a new logo for OpenStack, and perhaps you 
saw the update to the web design and headers on 
docs.openstack.org as well.

To continue these efforts, I'll also be working on having all docs pages use 
one theme, the openstackdocstheme, that has these latest updates. Currently we 
are using version 1.8.0, and I'll do more releases as we complete the UI 
consolidation.

I did an analysis to compare oslosphinx to openstackdocstheme, and I wanted to 
let this group know about the upcoming changes so you can keep an eye out for 
reviews. This effort will take a while, and I'd welcome help, of course.

There are a few UI items that I don't plan to port from oslosphinx to 
openstackdocstheme:

- Quick search form at the bottom of the left-hand navigation bar (though I'd 
  welcome a way to unify that UI and UX across the themes).
- Previous topic and Next topic shown in the left-hand navigation bar (these 
  are available in the openstackdocstheme in a different location).
- Return to project home page link in the left-hand navigation bar (also would 
  welcome a design that fits well in the openstackdocstheme left-hand nav).
- Customized list of links in the header. For example, the page at 
  https://docs.openstack.org/infra/system-config/ contains a custom header.
- When a landing page like https://docs.openstack.org/infra/ uses oslosphinx, 
  the page should be redesigned with the new theme in mind.

I welcome input on these changes, as I'm sure I haven't caught every scenario, 
and this is my first wider communication about the theme changes. The spec for 
this work is detailed here: 
http://specs.openstack.org/openstack/docs-specs/specs/pike/consolidating-themes.html

Let me know what I've missed, what you cannot live without, and please reach 
out if you'd like to help.

Thanks,
Anne

--
Technical Product Manager, Cisco Metacloud
annegen...@justwriteclick.com
@annegentle


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]cancellation of weekly meeting(May 17) due to bug smash

2017-05-16 Thread joehuang
Hello, team,

The bug smash will be held May 17-19, so the weekly meeting of May 17 will be 
cancelled.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] etcd tarballs for CI use

2017-05-16 Thread Jesse Pretorius
On 5/15/17, 11:20 PM, "Davanum Srinivas"  wrote:

> At this moment, though Fedora has 3.1.7 [1], Xenial is way too old, So
> we will need to pull down tar balls from either [2] or [3]. proposing
> backports is a possibility, but then we need some flexibility if we
> end up picking up some specific version (say 3.0.17 vs 3.1.7). So a
> download location would be good to have so we can request infra to
> push versions we can experiment with.

Hi Dims,

I can’t help but ask - how old is too old? By what measure are we saying
something is too old?

I think we need to be careful with what we do here and ensure that the
distribution partners we have are on board with the criteria, and whether
they’re ready to include more recent package versions in their extra
archives (e.g. RDO / UCA).

As developers we want the most recent things because reasons… but
distributions and operators are then stuck with raised complexity in
their release and quality management processes.
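One trap worth flagging when picking between versions like 3.0.17 and 3.1.7: comparing version strings lexically gets the answer wrong. A minimal sketch of numeric comparison for choosing a tarball version; the helper names are illustrative and not part of any actual infra tooling.

```python
def parse_version(version):
    """Turn '3.1.7' into (3, 1, 7) so comparison is numeric, not lexical."""
    return tuple(int(part) for part in version.split('.'))


def pick_version(available, minimum):
    """Return the newest available version that is >= the pinned minimum."""
    candidates = [v for v in available if parse_version(v) >= parse_version(minimum)]
    return max(candidates, key=parse_version) if candidates else None


# As strings, '3.0.17' < '3.0.9'; numeric comparison gets it right.
print(pick_version(['3.0.17', '3.1.7', '2.3.8'], '3.0.17'))  # 3.1.7
```

In practice a library such as Python's `packaging.version` also handles pre-release and post-release tags; the point here is only that lexical comparison is a trap when pinning versions.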






Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Marios Andreou
On Tue, May 16, 2017 at 7:33 AM, Dnyaneshwar Pawar <
dnyaneshwar.pa...@veritas.com> wrote:

> Hi TripleO team,
>
> I am trying to apply custom configuration to an existing overcloud (using
> the openstack overcloud deploy command).
> Though there is no error, the configuration is not applied to the overcloud.
> Am I missing anything here?
> http://paste.openstack.org/show/609619/
>
>
>

[stack@h-uc test]$ cat tripleo_ocata.yaml
resource_registry:
  OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml

^^^ this bit won't work for you. The 'normal' ControllerServer points
to 'OS::TripleO::Server' and then 'OS::Nova::Server'
https://github.com/openstack/tripleo-heat-templates/blob/66b39c2c21b6629222c0d212642156437119e977/overcloud-resource-registry-puppet.j2.yaml#L44-L47

You're overriding it with something that defines a 'normal'
SoftwareConfig (afaics it is 'correct' heat template syntax, fwiw), but
I don't think it is going to run on any servers, and I'm surprised you
don't get an error for the properties being passed in here:
https://github.com/openstack/tripleo-heat-templates/blob/ef82c3a010cf6161f1da1020698dbd38257f5a12/puppet/controller-role.yaml#L168-L175

[stack@h-uc test]$ openstack overcloud deploy --templates -e
tripleo_ocata.yaml 2>&1 |tee dny4.log


^^^ here, be aware that you should re-specify all the environment files
you used on the original deploy, in addition to your customization
environments at the end (tripleo_ocata.yaml). Otherwise you'll get
all the defaults specified by
/usr/share/openstack-tripleo-heat-templates.


Have you seen this
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html
there are some examples there that do what you want.

Instead of overriding the ControllerServer try
"OS::TripleO::NodeUserData" for example
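A minimal sketch of that NodeUserData approach (the file names and the script body are hypothetical; the structure follows the firstboot pattern in the tripleo-docs page linked above):

```yaml
# Environment file passed with -e, mapping the firstboot hook
# (hypothetical path):
resource_registry:
  OS::TripleO::NodeUserData: /home/stack/test/firstboot.yaml

# /home/stack/test/firstboot.yaml -- runs once on each node at first boot
heat_template_version: 2016-10-14

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: example_config}

  example_config:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        echo "extra config ran" > /root/firstboot.log

outputs:
  # NodeUserData templates must expose the MultipartMime resource this way
  OS::stack_id:
    value: {get_resource: userdata}
```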



hope it helps





> Thanks and Regards,
> Dnyaneshwar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [POC] Introduce an auto-converge policy to speedup migration

2017-05-16 Thread Chao Fan
Hi Chris,

Sorry for not Cc'ing you; I thought I had added the Cc.

Thanks,
Chao Fan
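As a side note for readers, the kind of policy described in the quoted thread below (derive a CPU throttle value from the observed dirty-page rate) can be sketched in pure Python. The constants and the damping factor here are invented purely for illustration; the real implementation would drive libvirt's cpu-throttle-initial/cpu-throttle-increment knobs instead.

```python
def throttle_for(dirty_pages_rate, bandwidth, initial=20, increment=10, cap=99):
    """Hypothetical auto-converge policy: keep raising the CPU throttle
    while the guest dirties pages faster than migration can copy them."""
    throttle = initial
    while dirty_pages_rate > bandwidth and throttle < cap:
        # Each step, QEMU would be asked to throttle guest vCPUs harder,
        # which (we assume here) damps the dirty-page rate.
        throttle = min(throttle + increment, cap)
        dirty_pages_rate *= 0.7
    return throttle

print(throttle_for(1000, 100))  # heavy workload: escalates to 90
print(throttle_for(50, 100))    # light workload: stays at the initial 20
```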

On Mon, May 15, 2017 at 01:30:39PM +0800, Chao Fan wrote:
>On Thu, May 11, 2017 at 02:34:16PM -0400, Chris Friesen wrote:
>>On 05/11/2017 05:58 AM, Chao Fan wrote:
>>> Hi all,
>>> 
>>> We plan to develop a policy about auto-converge, which can set cpu
>>> throttle value automatically according to the workload
>>> (dirty-pages-rate). It uses the API of libvirt to set the
>>> cpu-throttle-initial and cpu-throttle-increment.
>>> But the spec file of nova shows the dependent API is not accepted
>>> by OpenStack:
>>> 
>>> The initial decrease and increment size can be adjusted during
>>> the live migration process via the libvirt API. However these API calls
>>> are experimental so nova will not be using them.
>>> 
>>> So I am wondering if OpenStack is willing to use this API and accept
>>> the policy mentioned above.
>>
>>Just to clarify, as I understand it:
>>
>>1) You are pointing out that the auto-live-migration spec from Newton[1] says
>>that the libvirt APIs to set the initial and increment throttle values are
>>experimental and thus won't be using them.
>
>Hi Chris,
>
>Thank you for your reply, and really sorry for the delay. Since I was
>not Cc'd, I did not notice this mail.
>
>The spec file is cloned from https://github.com/openstack/nova-specs.git. 
>It looks the same as the one at your link.
>
>>
>>2) You are asking whether these APIs are now stable enough to be used in
>>nova, since you want to propose some mechanism to allow them to be changed.
>>
>>Is that accurate?
>
>Yes, your understanding is right.
>
>Thanks,
>Chao Fan
>
>>
>>Chris
>>
>>
>>[1] 
>>https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/auto-live-migration-completion.html
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





[openstack-dev] [storlets] No team meeting today

2017-05-16 Thread Eran Rom
Hi All,
There will be no team meeting today.
As usual, if you have something please ping at #openstack-storlets

Thanks, 
Eran



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Thierry Carrez
Flavio Percoco wrote:
> From a release perspective, as Doug mentioned, we've avoided releasing 
> projects
> in any kind of built form. This was also one of the concerns I raised when
> working on the proposal to support other programming languages. The problem of
> releasing built images goes beyond the infrastructure requirements. It's the
> message and the guarantees implied with the built product itself that are the
> concern here. And I tend to agree with Doug that this might be a problem for 
> us
> as a community. Unfortunately, putting your name, Michal, as contact point is
> not enough. Kolla is not the only project producing container images and we 
> need
> to be consistent in the way we release these images.
> 
> Nothing prevents people from building their own images and uploading them to
> dockerhub. Having this as part of the OpenStack's pipeline is a problem.

I totally subscribe to the concerns around publishing binaries (under
any form), and the expectations in terms of security maintenance that it
would set on the publisher. At the same time, we need to have images
available, for convenience and testing. So what is the best way to
achieve that without setting strong security maintenance expectations
for the OpenStack community? We have several options:

1/ Have third-parties publish images
It is the current situation. The issue is that the Kolla team (and
likely others) would rather automate the process and use OpenStack
infrastructure for it.

2/ Have third-parties publish images, but through OpenStack infra
This would allow the process to be automated, but it would be a bit weird to
use common infra resources to publish in a private repo.

3/ Publish transient (per-commit or daily) images
A "daily build" (especially if you replace it every day) would set
relatively-limited expectations in terms of maintenance. It would end up
picking up security updates in upstream layers, even if not immediately.

4/ Publish images and own them
Staff release / VMT / stable team in a way that lets us properly own
those images and publish them officially.

Personally I think (4) is not realistic. I think we could make (3) work,
and I prefer it to (2). If all else fails, we should keep (1).

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Andreas Jaeger
On 2017-05-16 12:10, Julien Danjou wrote:
> On Tue, May 16 2017, Andreas Jaeger wrote:
> 
>> what exactly happened with Babel?
>>
>> I see in global-requirements the following:
>> Babel>=2.3.4,!=2.4.0  # BSD
>>
>> that shouldn't cause a problem - or does it? Or what's the problem?
> 
> Damn, at the moment I pressed the `Sent' button I thought "You just
> complained without including much detail idiot". Sorry about that!

no worries.

> One of the log that fails:
> 
>  
> http://logs.openstack.org/13/464713/2/check/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/db61bdf/console.html
> 
> 
> Basically oslo.policy pulls oslo.i18n which pulls Babel!=2.4.0
> But Babel is already pulled by os-testr which depends on >=2.3.4.

and os-testr is not importing global-requirements:
https://review.openstack.org/#/c/454511/

> So pip does not solve that (unfortunately) and then the failure is:
> 
> 2017-05-16 05:08:43.629772 | 2017-05-16 05:08:43.503 10699 ERROR gnocchi
> ContextualVersionConflict: (Babel 2.4.0
> (/home/jenkins/workspace/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/upgrade/lib/python2.7/site-packages),
> Requirement.parse('Babel!=2.4.0,>=2.3.4'), set(['oslo.i18n']))
> 
> I'm pretty sure Babel should not even be in the requirements list of
> oslo.i18n since it's not a runtime dependency AFAIU.

It is needed to generate the translations, but can't we move it into
oslo.i18n's test-requirements?

But os-testr does not need Babel at all - let's remove it,
https://review.openstack.org/465023
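For anyone reproducing the clash Julien quotes: the conflict can be shown in miniature with pkg_resources (the same machinery that raised the ContextualVersionConflict in the gate log). This is only an illustration of the pin, not part of the fix:

```python
import pkg_resources

# oslo.i18n's pin, as synced from global-requirements
req = pkg_resources.Requirement.parse("Babel!=2.4.0,>=2.3.4")

# pip had already installed Babel 2.4.0 to satisfy os-testr's looser
# 'Babel>=2.3.4', so the environment violates oslo.i18n's requirement:
print("2.4.0" in req)   # False -> ContextualVersionConflict at import time
print("2.3.4" in req)   # True  -> this version would have been fine
```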

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [all][keystone][product] api keys/application specific passwords

2017-05-16 Thread Sean Dague
On 05/15/2017 10:00 PM, Adrian Turjak wrote:
> 
> 
> On 16/05/17 13:29, Lance Bragstad wrote:
>>
>>
>> On Mon, May 15, 2017 at 7:07 PM, Adrian Turjak
>> > wrote:

>> Based on the specs that are currently up in Keystone-specs, I
>> would highly recommend not doing this per user.
>>
>> The scenario I imagine is you have a sysadmin at a company who
>> created a ton of these for various jobs and then leaves. The
>> company then needs to keep his user account around, or create tons
>> of new API keys, and then disable his user once all the scripts he
>> had keys for are replaced. Or, more often than not, disable his
>> user and then cry as everything breaks and no one really knows why
>> or no one fully documented it all, or didn't read the docs.
>> Keeping them per project and unrelated to the user makes more
>> sense, as then someone else on your team can regenerate the
>> secrets for the specific Keys as they want. Sure we can advise
>> them to use generic user accounts within which to create these API
>> keys but that implies password sharing which is bad.
>>
>>
>> That said, I'm curious why we would make these as a thing separate
>> to users. In reality, if you can create users, you can create API
>> specific users. Would this be a different authentication
>> mechanism? Why? Why not just continue the work on better access
>> control and let people create users for this. Because let's be
>> honest, isn't a user already an API key? The issue (and Ron's
>> spec mentions this) is a user having too much access; how would
>> this fix that when the issue is that we don't have fine grained
>> policy in the first place? How does a new auth mechanism fix that?
>> Both specs mention roles so I assume it really doesn't. If we had
>> fine grained policy we could just create users specific to a
>> service with only the roles it needs, and the same problem is
>> solved without any special API, new auth, or different 'user-lite'
>> object model. It feels like this is trying to solve an issue that
>> is better solved by fixing the existing problems.
>>
>> I like the idea behind these specs, but... I'm curious what
>> exactly they are trying to solve. Not to mention if you wanted to
>> automate anything larger such as creating sub-projects and setting
>> up a basic network for each new developer to get access to your
>> team, this wouldn't work unless you could have your API key
>> inherit to subprojects or something more complex, at which point
>> they may as well be users. Users already work for all of this, why
>> reinvent the wheel when really the issue isn't the wheel itself,
>> but the steering mechanism (access control/policy in this case)?
>>
>>
>> All valid points, but IMO the discussions around API keys didn't set
>> out to fix deep-rooted issues with policy. We have several specs in
>> flights across projects to help mitigate the real issues with policy
>> [0] [1] [2] [3] [4].
>>
>> I see an API key implementation as something that provides a cleaner
>> fit and finish once we've addressed the policy bits. It's also a
>> familiar concept for application developers, which was the use case
>> the session was targeting.
>>
>> I probably should have laid out the related policy work before jumping
>> into API keys. We've already committed a bunch of keystone resource to
>> policy improvements this cycle, but I'm hoping we can work API keys
>> and policy improvements in parallel.
>>
>> [0] https://review.openstack.org/#/c/460344/
>> [1] https://review.openstack.org/#/c/462733/
>> [2] https://review.openstack.org/#/c/464763/
>> [3] https://review.openstack.org/#/c/433037/
>> [4] https://review.openstack.org/#/c/427872/
>>
> I'm well aware of the policy work, and it is fantastic to see it
> progressing! I can't wait to actually be able to play with that stuff!
> We've been painstakingly tweaking the json policy files which is a giant
> mess.
> 
> I'm just concerned that this feels like a feature we don't really need
> when really it's just a slight variant of a user with a new auth model
> (that is really just another flavour of username/password). The sole
> reason most of the other cloud services have API keys is because a user
> can't talk to the API directly. OpenStack does not have that problem,
> users are API keys. So I think what we really need to consider is what
> exact benefit does API keys actually give us that won't be solved with
> users and better policy?

The benefit of API keys comes when they work the same across all
deployments, so your applications can depend on them. That means the
application has to be able to:

1. provision an API Key with normal user credentials
2. set/reduce permissions with those same user credentials
3. operate with those credentials at the project level (so that 

Re: [openstack-dev] [tripleo] Issue while applying customs configuration to overcloud.

2017-05-16 Thread Dnyaneshwar Pawar
Hi Marios,
Thanks for your reply.
Referred example mentioned at 
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html
 , it is failing with error mentioned at http://paste.openstack.org/show/609644/


Regards,
Dnyaneshwar

From: Marios Andreou 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, May 16, 2017 at 11:59 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tripleo] Issue while applying customs 
configuration to overcloud.



On Tue, May 16, 2017 at 7:33 AM, Dnyaneshwar Pawar 
> wrote:
Hi TripleO team,

I am trying to apply custom configuration to an existing overcloud. (using 
openstack overcloud deploy command)
Though there is no error, the configuration is not applied to the overcloud.
Am I missing anything here?
http://paste.openstack.org/show/609619/




[stack@h-uc test]$ cat tripleo_ocata.yaml

resource_registry:

  OS::TripleO::ControllerServer: /home/stack/test/heat3_ocata.yaml



^^^ this bit won't work for you. The 'normal' ControllerServer points to 
'OS::TripleO::Server' and then 'OS::Nova::Server' 
https://github.com/openstack/tripleo-heat-templates/blob/66b39c2c21b6629222c0d212642156437119e977/overcloud-resource-registry-puppet.j2.yaml#L44-L47

You're overriding it with something that defines a 'normal' SoftwareConfig
(afaics it is 'correct' heat template syntax fwiw), but I don't think it is
going to run on any servers, and I'm surprised you don't get an error for the
properties being passed in here
https://github.com/openstack/tripleo-heat-templates/blob/ef82c3a010cf6161f1da1020698dbd38257f5a12/puppet/controller-role.yaml#L168-L175

[stack@h-uc test]$ openstack overcloud deploy --templates -e  
tripleo_ocata.yaml 2>&1 |tee dny4.log



^^^ here, be aware that you should re-specify all the environment files you
used on the original deploy, in addition to your customization environments at
the end (tripleo_ocata.yaml). Otherwise you'll get only the defaults specified
by /usr/share/openstack-tripleo-heat-templates



Have you seen this 
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/extra_config.html
 there are some examples there that do what you want.

Instead of overriding the ControllerServer try "OS::TripleO::NodeUserData" for 
example



hope it helps




Thanks and Regards,
Dnyaneshwar



Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Andreas Jaeger
On 2017-05-16 11:42, Julien Danjou wrote:
> On Wed, Apr 19 2017, Julien Danjou wrote:
> 
>> So Gnocchi gate is all broken (agan) because it depends on "pbr" and
>> some new release of oslo.* depends on pbr!=2.1.0.
> 
> Same things happened today with Babel. As far as Gnocchi is concerned,
> we're going to take the easiest route and remove all our oslo
> dependencies over the next months in favour of sanely maintained
> alternatives at this point.

what exactly happened with Babel?

I see in global-requirements the following:
Babel>=2.3.4,!=2.4.0  # BSD

that shouldn't cause a problem - or does it? Or what's the problem?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Wed, Apr 19 2017, Julien Danjou wrote:

> So Gnocchi gate is all broken (agan) because it depends on "pbr" and
> some new release of oslo.* depends on pbr!=2.1.0.

Same things happened today with Babel. As far as Gnocchi is concerned,
we're going to take the easiest route and remove all our oslo
dependencies over the next months in favour of sanely maintained
alternatives at this point.

Cheers,
-- 
Julien Danjou
-- Free Software hacker
-- https://julien.danjou.info




Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Luigi Toscano
On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
> 
> > On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> > > I'm raising the issue here to get some more input into how to
> > > proceed. Do other people think this concern is overblown? Can we
> > > mitigate the risk by communicating through metadata for the images?
> > > Should we stick to publishing build instructions (Dockerfiles, or
> > > whatever) instead of binary images? Are there other options I haven't
> > > mentioned?
> > 
> > Today we do publish build instructions, that's what Kolla is. We also
> > publish built containers already, just we do it manually on release
> > today. If we decide to block it, I assume we should stop doing that
> > too? That will hurt users who use this piece of Kolla, and I'd hate
> > to hurt our users:(
> 
> Well, that's the question. Today we have teams publishing those
> images themselves, right? And the proposal is to have infra do it?
> That change could be construed to imply that there is more of a
> relationship with the images and the rest of the community (remember,
> folks outside of the main community activities do not always make
> the same distinctions we do about teams). So, before we go ahead
> with that, I want to make sure that we all have a chance to discuss
> the policy change and its implications.

Sorry for hijacking the thread, but we have a similar scenario for example in 
Sahara. It is about full VM images containing Hadoop/Spark/other_big_data
stuff, and not containers, but it looks really similar.
So far ready-made images have been published under
http://sahara-files.mirantis.com/images/upstream/, but we are looking to have
them hosted on openstack.org, just like other artifacts.

We asked about this a few days ago on openstack-infra@, but no answer so far
(the Summit didn't help):

http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html

I think that the answer to the question raised in this thread is definitely 
going to be relevant for our use case.

Ciao
-- 
Luigi



Re: [openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Ghanshyam Mann
+1. Nice work done by Fanglei and good to have her in team.

-gmann


On Tue, May 16, 2017 at 5:22 PM, Andrea Frittoli
 wrote:
> Hello team,
>
> I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.
>
> Over the past two cycle Fanglei has been steadily contributing to Tempest
> and its community.
> She's done a great deal of work in making Tempest code cleaner, easier to
> read, maintain and
> debug, fixing bugs and removing cruft. Both her code as well as her reviews
> demonstrate a
> very good understanding of Tempest internals and of the project future
> direction.
> I believe Fanglei will make an excellent addition to the team.
>
> As per the usual, if the current Tempest core team members would please vote
> +1
> or -1(veto) to the nomination when you get a chance. We'll keep the polls
> open
> for 5 days or until everyone has voted.
>
> References:
> https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
> https://review.openstack.org/#/q/reviewer:zhufl
>
> Thank you,
>
> Andrea (andreaf)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



[openstack-dev] [networking-sfc] pep8 failing

2017-05-16 Thread Vikash Kumar
Hi Team,

  pep8 is failing on master. *Translation hint helpers* have been removed
from LOG messages. Was this done on purpose? Let me know if it was not,
and I will change it.

./networking_sfc/db/flowclassifier_db.py:342:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing flow classifier.")
^
./networking_sfc/db/sfc_db.py:383:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing port chain.")
^
./networking_sfc/db/sfc_db.py:526:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing port pair.")
^
./networking_sfc/db/sfc_db.py:658:13: N531  Log messages require
translation hints!
LOG.info("Deleting a non-existing port pair group.")
^
./networking_sfc/services/flowclassifier/driver_manager.py:38:9: N531  Log
messages require translation hints!
LOG.info("Configured Flow Classifier drivers: %s", names)
^
./networking_sfc/services/flowclassifier/driver_manager.py:44:9: N531  Log
messages require translation hints!
LOG.info("Loaded Flow Classifier drivers: %s",
^
./networking_sfc/services/flowclassifier/driver_manager.py:80:9: N531  Log
messages require translation hints!
LOG.info("Registered Flow Classifier drivers: %s",
^
./networking_sfc/services/flowclassifier/driver_manager.py:87:13: N531  Log
messages require translation hints!
LOG.info("Initializing Flow Classifier driver '%s'",
^
./networking_sfc/services/flowclassifier/driver_manager.py:107:17: N531
Log messages require translation hints!
LOG.error(
^
./networking_sfc/services/flowclassifier/plugin.py:63:17: N531  Log
messages require translation hints!
LOG.error("Create flow classifier failed, "
^
./networking_sfc/services/flowclassifier/plugin.py:87:17: N531  Log
messages require translation hints!
LOG.error("Update flow classifier failed, "
^
./networking_sfc/services/flowclassifier/plugin.py:102:17: N531  Log
messages require translation hints!
LOG.error("Delete flow classifier failed, "
^
./networking_sfc/services/sfc/driver_manager.py:38:9: N531  Log messages
require translation hints!
LOG.info("Configured SFC drivers: %s", names)
^
./networking_sfc/services/sfc/driver_manager.py:43:9: N531  Log messages
require translation hints!
LOG.info("Loaded SFC drivers: %s", self.names())
^
./networking_sfc/services/sfc/driver_manager.py:78:9: N531  Log messages
require translation hints!
LOG.info("Registered SFC drivers: %s",
^
./networking_sfc/services/sfc/driver_manager.py:85:13: N531  Log messages
require translation hints!
LOG.info("Initializing SFC driver '%s'", driver.name)
^
./networking_sfc/services/sfc/driver_manager.py:104:17: N531  Log messages
require translation hints!
LOG.error(
^
./networking_sfc/services/sfc/plugin.py:57:17: N531  Log messages require
translation hints!
LOG.error("Create port chain failed, "
^
./networking_sfc/services/sfc/plugin.py:82:17: N531  Log messages require
translation hints!
LOG.error("Update port chain failed, port_chain '%s'",
^
./networking_sfc/services/sfc/plugin.py:97:17: N531  Log messages require
translation hints!
LOG.error("Delete port chain failed, portchain '%s'",
^
./networking_sfc/services/sfc/plugin.py:122:17: N531  Log messages require
translation hints!
LOG.error("Create port pair failed, "
^
./networking_sfc/services/sfc/plugin.py:144:17: N531  Log messages require
translation hints!
LOG.error("Update port pair failed, port_pair '%s'",
^
./networking_sfc/services/sfc/plugin.py:159:17: N531  Log messages require
translation hints!
LOG.error("Delete port pair failed, port_pair '%s'",
^
./networking_sfc/services/sfc/plugin.py:185:17: N531  Log messages require
translation hints!
LOG.error("Create port pair group failed, "
^
./networking_sfc/services/sfc/plugin.py:213:17: N531  Log messages require
translation hints!
LOG.error("Update port pair group failed, "
^
./networking_sfc/services/sfc/plugin.py:229:17: N531  Log messages require
translation hints!
LOG.error("Delete port pair group failed, "
^
./networking_sfc/services/sfc/agent/extensions/sfc.py:111:13: N531  Log
messages require translation hints!
LOG.error("SFC L2 extension handle_port failed")
^
./networking_sfc/services/sfc/agent/extensions/sfc.py:124:9: N531  Log
messages require translation hints!
LOG.info("a device %s is removed", port_id)
 

Re: [openstack-dev] [oslo] Can we stop global requirements update?

2017-05-16 Thread Julien Danjou
On Tue, May 16 2017, Andreas Jaeger wrote:

> what exactly happened with Babel?
>
> I see in global-requirements the following:
> Babel>=2.3.4,!=2.4.0  # BSD
>
> that shouldn't cause a problem - or does it? Or what's the problem?

Damn, at the moment I pressed the `Sent' button I thought "You just
complained without including much detail idiot". Sorry about that!

One of the log that fails:

 
http://logs.openstack.org/13/464713/2/check/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/db61bdf/console.html


Basically oslo.policy pulls oslo.i18n which pulls Babel!=2.4.0
But Babel is already pulled by os-testr which depends on >=2.3.4.
So pip does not solve that (unfortunately) and then the failure is:

2017-05-16 05:08:43.629772 | 2017-05-16 05:08:43.503 10699 ERROR gnocchi
ContextualVersionConflict: (Babel 2.4.0
(/home/jenkins/workspace/gate-gnocchi-tox-py27-mysql-ceph-upgrade-from-3.1-ubuntu-xenial/upgrade/lib/python2.7/site-packages),
Requirement.parse('Babel!=2.4.0,>=2.3.4'), set(['oslo.i18n']))

I'm pretty sure Babel should not even be in the requirements list of
oslo.i18n since it's not a runtime dependency AFAIU.

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [infra][all] etcd tarballs for CI use

2017-05-16 Thread Davanum Srinivas
Jesse,

Great question :) We need the version that has the grpc gateway v3alpha API:
https://github.com/coreos/etcd/pull/5669

Since we want to standardize on the etcd v3 API (to avoid migration of
data from /v2 to /v3). Unfortunately the v3 API is gRPC based and has
trouble with eventlet based processes. So we need the /v3alpha HTTP
API. You can see the prior discussion and list of bugs from Jay in
https://review.openstack.org/#/c/446983/

the etcd in xenial is 2.x which does not have either the gRPC v3 or
the gRPC+gateway HTTP API.
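For illustration of why the gateway matters to eventlet services: the /v3alpha HTTP API takes plain JSON over HTTP (no gRPC client needed), with keys and values base64-encoded. A sketch of building such a request body (the endpoint path is from the etcd grpc-gateway docs; no running etcd is assumed here):

```python
import base64
import json

def kv_put_body(key: bytes, value: bytes) -> str:
    # The v3alpha grpc-gateway expects key/value bytes as base64 strings.
    return json.dumps({
        "key": base64.b64encode(key).decode("ascii"),
        "value": base64.b64encode(value).decode("ascii"),
    })

body = kv_put_body(b"foo", b"bar")
print(body)  # {"key": "Zm9v", "value": "YmFy"}
# A client would POST this to http://<etcd-host>:2379/v3alpha/kv/put
```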

Thanks,
Dims

On Tue, May 16, 2017 at 5:02 AM, Jesse Pretorius
 wrote:
> On 5/15/17, 11:20 PM, "Davanum Srinivas"  wrote:
>
>> At this moment, though Fedora has 3.1.7 [1], Xenial is way too old, So
>> we will need to pull down tar balls from either [2] or [3]. proposing
>> backports is a possibility, but then we need some flexibility if we
>> end up picking up some specific version (say 3.0.17 vs 3.1.7). So a
>> download location would be good to have so we can request infra to
>> push versions we can experiment with.
>
> Hi Dims,
>
> I can’t help but ask - how old is too old? By what measure are we saying
> something is too old?
>
> I think we need to be careful with what we do here and ensure that the
> distribution partners we have are on board with the criteria and whether
> they're ready to include more recent package versions in their extra
> archives (e.g. RDO / UCA).
>
> As developers we want the most recent things because reasons… but
> distributions and operators are then stuck with raised complexity in
> their release and quality management processes.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-16 Thread Mehdi Abaakouk

+1 too, I haven't seen its contributors in a while.

On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

On 15/05/17 15:29 -0500, Ben Nemec wrote:



On 05/15/2017 01:55 PM, Doug Hellmann wrote:

Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15 14:27:36 -0400:

On Mon, May 15, 2017 at 2:08 PM, Ken Giusti  wrote:

Folks,

It was decided at the oslo.messaging forum at summit that the pika
driver will be marked as deprecated [1] for removal.


[dims} +1 from me.


+1


Also +1


+1

Flavio

--
@flaper87
Flavio Percoco








--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht




[openstack-dev] [tempest] Proposing Fanglei Zhu for Tempest core

2017-05-16 Thread Andrea Frittoli
Hello team,

I'm very pleased to propose Fanglei Zhu (zhufl) for Tempest core.

Over the past two cycles Fanglei has been steadily contributing to Tempest
and its community.
She's done a great deal of work in making Tempest code cleaner, easier to
read, maintain, and debug, fixing bugs and removing cruft. Both her code and
her reviews demonstrate a very good understanding of Tempest internals and
of the project's future direction.
I believe Fanglei will make an excellent addition to the team.

As usual, current Tempest core team members, please vote +1 or -1 (veto)
on the nomination when you get a chance. We'll keep the polls open for
5 days or until everyone has voted.

References:
https://review.openstack.org/#/q/owner:zhu.fanglei%2540zte.com.cn
https://review.openstack.org/#/q/reviewer:zhufl

Thank you,

Andrea (andreaf)


Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck Monitoring

2017-05-16 Thread Waines, Greg
Sam,

Two other higher-level points I wanted to discuss with you about Masakari.


First,
I notice that you are doing monitoring, auto-recovery, and even
host-maintenance type functionality as part of the Masakari architecture.

Are you open to some configurability (enabling/disabling) of these
capabilities?

e.g. the OPNFV guys would NOT want auto-recovery; they would prefer that
fault events get reported to Vitrage, and eventually filter up to Aodh
alarms that get received by VNF Managers, which would be responsible for
the recovery.

e.g. some deployers of OpenStack might want to disable parts or all of your
monitoring, if they are using other mechanisms such as Zabbix or Nagios for
the host monitoring (say).


Second,
are you open to configurably having fault events reported to Vitrage?


Greg.


From: Sam P 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Monday, May 15, 2017 at 9:36 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [vitrage] [nova] VM Heartbeat / Healthcheck 
Monitoring

Hi Greg,

In Masakari [0], for VMHA, we have already implemented somewhat similar
functionality in masakari-monitors.
Masakari-monitors runs on the nova-compute node and monitors host, process,
and instance failures.
The Masakari instance monitor has functionality similar to what you have
described.
Please see [1] for more details on instance monitoring.
[0] https://wiki.openstack.org/wiki/Masakari
[1] 
https://github.com/openstack/masakari-monitors/tree/master/masakarimonitors/instancemonitor

Once masakari-monitors detects a failure, it sends a notification to
masakari-api, which takes the appropriate recovery actions to recover that
VM from the failure.
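To make the flow concrete, here is a minimal Python sketch of the kind of failure notification a monitor such as masakari-monitors might assemble and send to masakari-api. The field names and values here are illustrative assumptions, not the actual Masakari API schema — see the linked docs [1] for the real interface.

```python
# Hypothetical sketch: building an instance-failure notification of the sort
# a compute-node monitor could POST to a VMHA API. All field names below are
# illustrative, not the real Masakari schema.
import json
import uuid
from datetime import datetime, timezone


def build_instance_failure_notification(host, instance_uuid, event="LIFECYCLE"):
    """Assemble a minimal instance-failure notification payload."""
    return {
        "notification": {
            "type": "VM",                      # monitor type: host/process/instance
            "hostname": host,                  # compute node reporting the failure
            "generated_time": datetime.now(timezone.utc).isoformat(),
            "payload": {
                "event": event,
                "instance_uuid": instance_uuid,
                "vir_domain_event": "STOPPED_FAILED",
            },
        }
    }


payload = build_instance_failure_notification("compute-01", str(uuid.uuid4()))
print(json.dumps(payload, indent=2))
```

In a real deployment the monitor would POST this to the API service, which then decides on (and drives) the recovery action, e.g. an evacuation.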



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 06:22, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2017-05-16 14:08:07 +0200:
>> Flavio Percoco wrote:
>> > From a release perspective, as Doug mentioned, we've avoided releasing 
>> > projects
>> > in any kind of built form. This was also one of the concerns I raised when
>> > working on the proposal to support other programming languages. The 
>> > problem of
>> > releasing built images goes beyond the infrastructure requirements. It's 
>> > the
>> > message and the guarantees implied with the built product itself that are 
>> > the
>> > concern here. And I tend to agree with Doug that this might be a problem 
>> > for us
>> > as a community. Unfortunately, putting your name, Michal, as contact point 
>> > is
>> > not enough. Kolla is not the only project producing container images and 
>> > we need
>> > to be consistent in the way we release these images.
>> >
>> > Nothing prevents people for building their own images and uploading them to
>> > dockerhub. Having this as part of the OpenStack's pipeline is a problem.
>>
>> I totally subscribe to the concerns around publishing binaries (under
>> any form), and the expectations in terms of security maintenance that it
>> would set on the publisher. At the same time, we need to have images
>> available, for convenience and testing. So what is the best way to
>> achieve that without setting strong security maintenance expectations
>> for the OpenStack community ? We have several options:
>>
>> 1/ Have third-parties publish images
>> It is the current situation. The issue is that the Kolla team (and
>> likely others) would rather automate the process and use OpenStack
>> infrastructure for it.
>>
>> 2/ Have third-parties publish images, but through OpenStack infra
>> This would allow to automate the process, but it would be a bit weird to
>> use common infra resources to publish in a private repo.
>>
>> 3/ Publish transient (per-commit or daily) images
>> A "daily build" (especially if you replace it every day) would set
>> relatively-limited expectations in terms of maintenance. It would end up
>> picking up security updates in upstream layers, even if not immediately.
>>
>> 4/ Publish images and own them
>> Staff release / VMT / stable team in a way that lets us properly own
>> those images and publish them officially.
>>
>> Personally I think (4) is not realistic. I think we could make (3) work,
>> and I prefer it to (2). If all else fails, we should keep (1).
>>
>
> At the forum we talked about putting test images on a "private"
> repository hosted on openstack.org somewhere. I think that's option
> 3 from your list?
>
> Paul may be able to shed more light on the details of the technology
> (maybe it's just an Apache-served repo, rather than a full blown
> instance of Docker's service, for example).

The issues with that are:

1. An Apache-served repo is harder to use, because we want to follow the
Docker registry API and we'd have to reimplement it.
2. Running a registry is a single command.
3. If we host it in infra and someone actually uses it (there will be
people like that), it could potentially eat up a lot of network traffic.
4. With local caching of images in nodepools (working already), we lose
the complexity of mirroring registries across nodepools.

So bottom line, having dockerhub/quay.io is simply easier.
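The first point — that a plain file server would have to reimplement the Docker registry API before `docker pull` could talk to it — can be illustrated with a toy sketch of a sliver of that API's route surface. This is a simplified illustration under assumed data, not a working registry:

```python
# Toy sketch of a few Docker Registry HTTP API (v2) endpoints that a plain
# Apache-served file tree would have to reimplement. Repo contents are
# made-up example data.
import json

REPOS = {"kolla/nova-compute": {"tags": ["4.0.0", "latest"]}}


def handle(path):
    """Route a GET path roughly the way a v2 registry would."""
    if path == "/v2/":
        return 200, {}                              # API version handshake
    if path == "/v2/_catalog":
        return 200, {"repositories": sorted(REPOS)}
    if path.endswith("/tags/list"):                 # /v2/<name>/tags/list
        parts = path.split("/")
        name = "/".join(parts[2:-2])
        if name in REPOS:
            return 200, {"name": name, "tags": REPOS[name]["tags"]}
        return 404, {"errors": [{"code": "NAME_UNKNOWN"}]}
    # Real clients also need manifests, blobs, content negotiation, auth...
    return 404, {"errors": [{"code": "UNSUPPORTED"}]}


status, body = handle("/v2/kolla/nova-compute/tags/list")
print(status, json.dumps(body))
```

By contrast, the reference registry really is close to a one-liner to run (`docker run -d -p 5000:5000 registry:2`), which is the asymmetry being argued here.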

> Doug
>


Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 21:53:43 +0200 (+0200), Thierry Carrez wrote:
[...]
> I wouldn't say it's premature optimization, we create channels all the
> time. #openstack-dev is a general discussion channel, which is used for
> anything that doesn't fit anywhere else. If you look at recent logs,
> you'll see that it is used for general community pings, but also lengthy
> inter-project discussions.

Those sound entirely on-topic for #openstack-dev to me. Like others
on this thread I also worry that a TC-specific channel will seem
"exclusive" when really we're just members of the community having
discussions with other members of the community (elected or not).

> Here we have a clear topic, and TC members need to pay a certain level
> of attention to whatever is said. Mixing it with other community
> discussions (which I have to prioritize lower) just makes it harder to
> pay the right level of attention to the channel. Basically I have
> trouble to see how we can repurpose a general discussion channel into a
> specific group office-hours channel (different topics, different level
> of attention to be paid). Asking people to use a ping list when they
> *really* need to get TC members attention feels like a band-aid.

If we go with office hours as proposed I for one would pay close
attention to whatever's said on #openstack-dev during those
scheduled timeframes, highlighted or not. Having a common highlight
is merely a possible means of getting the attention of specific
people in a channel without them needing to pay close attention to
everything said in that channel.

> It's pretty clear now some see drawbacks in reusing #openstack-dev, and
> so far the only benefit expressed (beyond not having to post the config
> change to make it happen) is that "everybody is already there". By that
> rule, we should not create any new channel :)

That was not the only concern expressed. It also silos us away from
the community who has elected us to represent them, potentially
creating another IRC echo chamber.
-- 
Jeremy Stanley



Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Sean Dague
On 05/16/2017 03:59 PM, Thierry Carrez wrote:
> Thierry Carrez wrote:
>> Here we have a clear topic, and TC members need to pay a certain level
>> of attention to whatever is said. Mixing it with other community
>> discussions (which I have to prioritize lower) just makes it harder to
>> pay the right level of attention to the channel. Basically I have
>> trouble to see how we can repurpose a general discussion channel into a
>> specific group office-hours channel (different topics, different level
>> of attention to be paid). Asking people to use a ping list when they
>> *really* need to get TC members attention feels like a band-aid.
> 
> To summarize, I fear that using a general development discussion channel
> as the TC discussion channel will turn *every* development discussion
> into a TC discussion. I don't think that's desirable.

Maybe we have different ideas of what we expect to be the kinds of
discussions and asks. What do you think would be in #openstack-tc that
would not be appropriate for #openstack-dev, and why?

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [tc][all] Do we need a #openstack-tc IRC channel

2017-05-16 Thread Thierry Carrez
Thierry Carrez wrote:
> Here we have a clear topic, and TC members need to pay a certain level
> of attention to whatever is said. Mixing it with other community
> discussions (which I have to prioritize lower) just makes it harder to
> pay the right level of attention to the channel. Basically I have
> trouble to see how we can repurpose a general discussion channel into a
> specific group office-hours channel (different topics, different level
> of attention to be paid). Asking people to use a ping list when they
> *really* need to get TC members attention feels like a band-aid.

To summarize, I fear that using a general development discussion channel
as the TC discussion channel will turn *every* development discussion
into a TC discussion. I don't think that's desirable.

-- 
Thierry Carrez (ttx)





Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
+1. Ironic and Trove have the same issues as well. Lowering the bar to
kick the tires will help OpenStack a lot in adoption.

From: Sean Dague [s...@dague.net]
Sent: Tuesday, May 16, 2017 6:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 05/16/2017 09:24 AM, Doug Hellmann wrote:
> Excerpts from Luigi Toscano's message of 2017-05-16 11:50:53 +0200:
>> On Monday, 15 May 2017 21:12:16 CEST Doug Hellmann wrote:
>>> Excerpts from Michał Jastrzębski's message of 2017-05-15 10:52:12 -0700:
>>>
 On 15 May 2017 at 10:34, Doug Hellmann  wrote:
> I'm raising the issue here to get some more input into how to
> proceed. Do other people think this concern is overblown? Can we
> mitigate the risk by communicating through metadata for the images?
> Should we stick to publishing build instructions (Dockerfiles, or
> whatever) instead of binary images? Are there other options I haven't
> mentioned?

 Today we do publish build instructions, that's what Kolla is. We also
 publish built containers already, just we do it manually on release
 today. If we decide to block it, I assume we should stop doing that
 too? That will hurt users who uses this piece of Kolla, and I'd hate
 to hurt our users:(
>>>
>>> Well, that's the question. Today we have teams publishing those
>>> images themselves, right? And the proposal is to have infra do it?
>>> That change could be construed to imply that there is more of a
>>> relationship with the images and the rest of the community (remember,
>>> folks outside of the main community activities do not always make
>>> the same distinctions we do about teams). So, before we go ahead
>>> with that, I want to make sure that we all have a chance to discuss
>>> the policy change and its implications.
>>
>> Sorry for hijacking the thread, but we have a similar scenario for example in
>> Sahara. It is about full VM images containing Hadoop/Spark/other_big_data
>> stuff, and not containers, but it's looks really the same.
>> So far ready-made images have been published under 
>> http://sahara-files.mirantis.com/images/upstream/, but we are looking to 
>> have them hosted on
>> openstack.org, just like other artifacts.
>>
>> We asked about this few days ago on openstack-infra@, but no answer so far
>> (the Summit didn't help):
>>
>> http://lists.openstack.org/pipermail/openstack-infra/2017-April/005312.html
>>
>> I think that the answer to the question raised in this thread is definitely
>> going to be relevant for our use case.
>>
>> Ciao
>
> Thanks for raising this. I think the same concerns apply to VM images.

Agreed.

-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Doug Hellmann
Excerpts from Michał Jastrzębski's message of 2017-05-16 09:51:00 -0700:
> On 16 May 2017 at 09:40, Clint Byrum  wrote:
> >
> > What's at stake isn't so much "how do we get the bits to the users" but
> > "how do we only get bits to users that they need". If you build and push
> > daily, do you expect all of your users to also _pull_ daily? Redeploy
> > all their containers? How do you detect that there's new CVE-fixing
> > stuff in a daily build?
> >
> > This is really the realm of distributors that have full-time security
> > teams tracking issues and providing support to paying customers.
> >
> > So I think this is a fine idea, however, it needs to include a commitment
> > for a full-time paid security team who weighs in on every change to
> > the manifest. Otherwise we're just lobbing time bombs into our users'
> > data-centers.
> 
> One thing I struggle with is...well...how does *not having* built
> containers help with that? If your company has a full-time security
> team, they can check our containers prior to deployment. If your
> company doesn't, then building locally will be subject to the same risks
> as downloading from Dockerhub. The difference is, the Dockerhub containers
> were tested in our CI to the extent that our CI allows. No matter whether
> or not you have your own security team, local CI, or staging env, that
> will be just a little bit of testing on top of what you get for
> free, and I think that's value enough for users to push for this.

The benefit of not building images ourselves is that we are clearly
communicating that the responsibility for maintaining the images
falls on whoever *does* build them. There can be no question in any
user's mind that the community somehow needs to maintain the content
of the images for them, just because we're publishing new images
at some regular cadence.

Doug



Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Joshua Harlow

My guess is same with octavia.

https://github.com/openstack/octavia/tree/master/diskimage-create#diskimage-builder-script-for-creating-octavia-amphora-images

-Josh

Fox, Kevin M wrote:

+1. ironic and trove have the same issues as well. lowering the bar in order to 
kick the tires will help OpenStack a lot in adoption.

From: Sean Dague [s...@dague.net]
Sent: Tuesday, May 16, 2017 6:28 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 05/16/2017 09:24 AM, Doug Hellmann wrote:






Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Michał Jastrzębski
On 16 May 2017 at 11:49, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 11:38:19 -0700:
>> On 16 May 2017 at 11:27, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 09:46:19 -0700:
>> >> So another consideration. Do you think whole rule of "not building
>> >> binares" should be reconsidered? We are kind of new use case here. We
>> >> aren't distro but we are packagers (kind of). I don't think putting us
>> >> on equal footing as Red Hat, Canonical or other companies is correct
>> >> here.
>> >>
>> >> K8s is something we want to work with, and what we are discussing is
>> >> central to how k8s is used. K8s community creates this culture of
>> >> "organic packages" built by anyone, most of companies/projects already
>> >> have semi-official container images and I think expectations on
>> >> quality of these are well...none? You get what you're given and if you
>> >> don't agree, there is always way to reproduce this yourself.
>> >>
>> >> [Another huge snip]
>> >>
>> >
>> > I wanted to have the discussion, but my position for now is that
>> > we should continue as we have been and not change the policy.
>> >
>> > I don't have a problem with any individual or group of individuals
>> > publishing their own organic packages. The issue I have is with
>> > making sure it is clear those *are* "organic" and not officially
>> > supported by the broader community. One way to do that is to say
>> > they need to be built somewhere other than on our shared infrastructure.
>> > There may be other ways, though, so I'm looking for input on that.
>>
>> What I was trying to say here is, current discussion aside, maybe we
>> should revise this "not supported by broader community" rule. They may
>> very well be supported to a certain point. Support is not just yes or
>> no, it's all the levels in between. I think we can afford *some* level
>> of official support, even if that some level means best effort made by
>> community. If Kolla community, not an individual like myself, would
>> like to support these images best to our ability, why aren't we
>> allowed? As long as we are crystal clear what is scope of our support,
>> why can't we do it? I think we've already proven that it's going to be
>> tremendously useful for a lot of people, even in a shape we discuss
>> today, that is "best effort, you still need to validate it for
>> yourself"...
>
> Right, I understood that. So far I haven't heard anything to change
> my mind, though.
>
> I think you're underestimating the amount of risk you're taking on
> for yourselves and by extension the rest of the community, and
> introducing to potential consumers of the images, by promising to
> support production deployments with a small team of people without
> the economic structure in place to sustain the work.

Again, we tell people what it is and what it is not. I think "support" is
a loaded term here. Instead we can create thorough documentation explaining
in detail the lifecycle and testing a given container had to pass before it
lands on Dockerhub, and maybe add a link to the particular set of jobs the
container passed. The only thing we can offer is an automated and
transparent publishing process. On top of that? You are on your
own. But even within these boundaries, a lot of people could have a
better experience of running OpenStack...

> Doug
>


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Fox, Kevin M
And bandwidth can be conserved by only uploading images that actually changed
in non-trivial ways (packages were updated, not just a logfile with a new
timestamp).
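One way to realize this idea — sketched here in Python under assumed inputs, not any existing Kolla tooling — is to fingerprint only the meaningful content of an image (its package manifest) and skip the push when the fingerprint matches the last published one:

```python
# Sketch of change detection for image publishing: hash only the package
# manifest, so rebuilds that differ merely in timestamps or logfiles do not
# trigger a new upload. Package data is illustrative.
import hashlib


def manifest_fingerprint(packages):
    """Stable digest over sorted (name, version) pairs."""
    canon = "\n".join(f"{name}={ver}" for name, ver in sorted(packages.items()))
    return hashlib.sha256(canon.encode()).hexdigest()


def should_push(new_packages, last_pushed_digest):
    """Push only if the meaningful content actually changed."""
    return manifest_fingerprint(new_packages) != last_pushed_digest


pkgs = {"openssl": "1.0.2g-1", "python-nova": "15.0.0-1"}
digest = manifest_fingerprint(pkgs)

# Rebuild with identical packages (only timestamps changed):
print(should_push(pkgs, digest))                              # → False
# Rebuild after a security update bumped openssl:
print(should_push({**pkgs, "openssl": "1.0.2h-1"}, digest))   # → True
```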

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Tuesday, May 16, 2017 11:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] 
[tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes]
 do we want to be publishing binary container images?

On 16 May 2017 at 11:33, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-05-16 08:20:17 -0700:
>> On 16 May 2017 at 08:12, Doug Hellmann  wrote:
>> > Excerpts from Michał Jastrzębski's message of 2017-05-16 06:52:12 -0700:
>> >> On 16 May 2017 at 06:20, Flavio Percoco  wrote:
>> >> > On 16/05/17 14:08 +0200, Thierry Carrez wrote:
>> >> >>
>> >> >> Flavio Percoco wrote:
>> >> >>>
>> >> >>> From a release perspective, as Doug mentioned, we've avoided releasing
>> >> >>> projects
>> >> >>> in any kind of built form. This was also one of the concerns I raised
>> >> >>> when
>> >> >>> working on the proposal to support other programming languages. The
>> >> >>> problem of
>> >> >>> releasing built images goes beyond the infrastructure requirements. 
>> >> >>> It's
>> >> >>> the
>> >> >>> message and the guarantees implied with the built product itself that 
>> >> >>> are
>> >> >>> the
>> >> >>> concern here. And I tend to agree with Doug that this might be a 
>> >> >>> problem
>> >> >>> for us
>> >> >>> as a community. Unfortunately, putting your name, Michal, as contact
>> >> >>> point is
>> >> >>> not enough. Kolla is not the only project producing container images 
>> >> >>> and
>> >> >>> we need
>> >> >>> to be consistent in the way we release these images.
>> >> >>>
>> >> >>> Nothing prevents people for building their own images and uploading 
>> >> >>> them
>> >> >>> to
>> >> >>> dockerhub. Having this as part of the OpenStack's pipeline is a 
>> >> >>> problem.
>> >> >>
>> >> >>
>> >> >> I totally subscribe to the concerns around publishing binaries (under
>> >> >> any form), and the expectations in terms of security maintenance that 
>> >> >> it
>> >> >> would set on the publisher. At the same time, we need to have images
>> >> >> available, for convenience and testing. So what is the best way to
>> >> >> achieve that without setting strong security maintenance expectations
>> >> >> for the OpenStack community ? We have several options:
>> >> >>
>> >> >> 1/ Have third-parties publish images
>> >> >> It is the current situation. The issue is that the Kolla team (and
>> >> >> likely others) would rather automate the process and use OpenStack
>> >> >> infrastructure for it.
>> >> >>
>> >> >> 2/ Have third-parties publish images, but through OpenStack infra
>> >> >> This would allow to automate the process, but it would be a bit weird 
>> >> >> to
>> >> >> use common infra resources to publish in a private repo.
>> >> >>
>> >> >> 3/ Publish transient (per-commit or daily) images
>> >> >> A "daily build" (especially if you replace it every day) would set
>> >> >> relatively-limited expectations in terms of maintenance. It would end 
>> >> >> up
>> >> >> picking up security updates in upstream layers, even if not 
>> >> >> immediately.
>> >> >>
>> >> >> 4/ Publish images and own them
>> >> >> Staff release / VMT / stable team in a way that lets us properly own
>> >> >> those images and publish them officially.
>> >> >>
>> >> >> Personally I think (4) is not realistic. I think we could make (3) 
>> >> >> work,
>> >> >> and I prefer it to (2). If all else fails, we should keep (1).
>> >> >
>> >> >
>> >> > Agreed #4 is a bit unrealistic.
>> >> >
>> >> > Not sure I understand the difference between #2 and #3. Is it just the
>> >> > cadence?
>> >> >
>> >> > I'd prefer for these builds to have a daily cadence because it sets the
>> >> > expectations w.r.t maintenance right: "These images are daily builds 
>> >> > and not
>> >> > certified releases. For stable builds you're better off building it
>> >> > yourself"
>> >>
>> >> And daily builds are exactly what I wanted in the first place:) We
>> >> probably will keep publishing release packages too, but we can be so
>> >> called 3rd party. I also agree [4] is completely unrealistic and I
>> >> would be against putting such heavy burden of responsibility on any
>> >> community, including Kolla.
>> >>
>> >> While daily cadence will send message that it's not stable, truth will
>> >> be that it will be more stable than what people would normally build
>> >> locally (again, it passes more gates), but I'm totally fine in not
>> >> saying that and let people decide how they want to use it.
>> >>
>> >> So, can we move on with implementation?
>> >
>> > I don't want the images published to docker hub. Are they still useful
>> > to you if they aren't published?
>>

Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-16 Thread Jeremy Stanley
On 2017-05-16 11:46:14 -0700 (-0700), Michał Jastrzębski wrote:
[...]
> So CVE tracking might not be required by us. Since we still use
> distro packages under the hood, we can just use these.
[...]

I think the question is how I, as a semi-clueful downstream user of
your images, can tell whether the image I'm deploying has fixes for
some specific recently disclosed vulnerability. It sounds like your
answer is that I should compare the package manifest against the
versions listed on the distro's CVE tracker or similar service? That
should be prominently documented, perhaps in a highly visible FAQ
list.
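The comparison described above can be sketched in a few lines of Python. The CVE feed format here is hypothetical, and a real implementation would use the distro's own version-comparison rules (dpkg/rpm semantics) rather than this naive numeric split:

```python
# Sketch: flag packages in an image manifest that are still below the
# "fixed in" version listed by a distro CVE tracker. Feed format and data
# are made up for illustration; real version comparison needs dpkg/rpm rules.
def vtuple(version):
    """Naive numeric version key -- placeholder for proper distro comparison."""
    return tuple(int(p) for p in version.replace("-", ".").split(".") if p.isdigit())


def vulnerable_packages(image_manifest, cve_feed):
    """Return (cve, package, installed, fixed_in) for unpatched packages."""
    findings = []
    for cve, (pkg, fixed_in) in sorted(cve_feed.items()):
        installed = image_manifest.get(pkg)
        if installed and vtuple(installed) < vtuple(fixed_in):
            findings.append((cve, pkg, installed, fixed_in))
    return findings


manifest = {"openssl": "1.0.2.7", "bash": "4.3.48"}
feed = {"CVE-2017-XXXX": ("openssl", "1.0.2.9")}   # placeholder CVE id
print(vulnerable_packages(manifest, feed))
```

A published manifest per image plus a script of this shape would let a downstream user answer "does this image have the fix?" without rebuilding anything.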

> Since we'd rebuild daily, that alone would ensure timely updates to
> our containers. What we can promise to potential users is that the
> containers out there were built recently (within 24hrs)
[...]

As outlined elsewhere in the thread, there are a myriad of reasons
why this could end up not being the case from time to time so I can
only assume your definition of "promise" differs from mine (and
unfortunately, from most people who might be trying to decide
whether it's safe to rely on these images in a sensitive/production
environment).
-- 
Jeremy Stanley




  1   2   >