[openstack-dev] [murano] No meeting next Tuesday

2017-09-29 Thread Rong Zhu
Hi Teams,

No meeting next Tuesday(Oct 3) due to China's National Day holiday.

Thanks,
Rong Zhu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-09-29 Thread ChangBo Guo
pylockfile was deprecated about two years ago in [1] and is no longer used
in any OpenStack project [2], so we would like to retire it following the
steps for retiring a project [3].


[1]c8798cedfbc4d738c99977a07cde2de54687ac6c#diff-88b99bb28683bd5b7e3a204826ead112
[2] http://codesearch.openstack.org/?q=pylockfile=nope==
[3]https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project
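For readers who haven't retired a repository before, the steps in [3] essentially
amount to removing all content and leaving behind a README that explains the
retirement. A rough, illustrative sketch (the local path, stub file, and commit
messages are made up for the example; the real change goes through Gerrit):

```shell
# Illustrative sketch of the content-removal step from the infra
# manual's project-retirement process; paths and messages are
# examples only, not the actual Gerrit workflow.
set -e
rm -rf /tmp/pylockfile && mkdir -p /tmp/pylockfile && cd /tmp/pylockfile
git init -q
git config user.email you@example.com && git config user.name you
echo "stub" > lockfile.py && git add . && git commit -qm "initial"
# Remove everything, then leave a single README explaining why.
git rm -rq '*'
echo "This project is no longer maintained." > README.rst
git add README.rst
git commit -qm "Retire openstack/pylockfile"
git ls-files   # only README.rst remains
```

In the real process this commit is proposed to the repository via Gerrit and is
followed by project-config cleanup, per the manual.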
-- 
ChangBo Guo(gcb)
Community Director @EasyStack


[openstack-dev] [nova][stable] attn: No approvals for stable/newton right now

2017-09-29 Thread Dan Smith

Hi all,

Due to a zuulv3 bug, we're running an old nova-network test job on 
master and, as you would expect, failing hard. As a workaround in the 
meantime, we're[0] going to disable that job entirely so that it runs 
nowhere. This makes it not run on master (good) but also not run on 
stable/newton (not so good).


So, please don't approve anything new for stable/newton until we turn 
this job back on. That will happen when this patch lands:


  https://review.openstack.org/#/c/508638

Thanks!

--Dan

[0]: Note that this is all magic and dedication from the infra people, 
all I did was stand around and applaud. I'm including myself in the "we" 
here because I like to feel included by standing next to smart people, 
not because I did any work.




Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Fox, Kevin M
https://review.openstack.org/#/c/93/

From: Giuseppe de Candia [giuseppe.decan...@gmail.com]
Sent: Friday, September 29, 2017 1:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Supporting SSH host certificates

Ihar, thanks for pointing that out - I'll definitely take a close look.

Jon, I'm not very familiar with Barbican, but I did assume the full 
implementation would use Barbican to store private keys. However, in terms of 
actually getting a private key (or SSH host cert) into a VM instance, Barbican 
doesn't help. The instance needs permission to access secrets stored in 
Barbican. The main question of my e-mail is: how do you inject a credential in 
an automated but secure way? I'd love to hear ideas - in the meantime I'll 
study Ihar's link.

thanks,
Pino
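For the injection step Pino asks about, one hedged possibility is cloud-init's
`ssh_keys` cloud-config, delivered via ConfigDrive rather than the metadata
service. Whether a given cloud-init release accepts the certificate field is an
assumption to verify, and all key material below is placeholder text:

```yaml
#cloud-config
# Hedged sketch: assumes cloud-init's ssh_keys module and ConfigDrive
# delivery; the rsa_certificate field may not exist in every
# cloud-init release. Never embed real private keys in cleartext.
ssh_keys:
  rsa_private: |
    -----BEGIN RSA PRIVATE KEY-----
    <placeholder>
    -----END RSA PRIVATE KEY-----
  rsa_public: ssh-rsa <placeholder> vm1.example.com
  rsa_certificate: ssh-rsa-cert-v01@openssh.com <placeholder> vm1.example.com
```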



On Fri, Sep 29, 2017 at 2:49 PM, Ihar Hrachyshka wrote:
What you describe (at least the use case) seems to resemble
https://review.openstack.org/#/c/456394/ This work never moved
anywhere since the spec was posted though. You may want to revive the
discussion in scope of the spec.

Ihar

On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia wrote:
> Hi Folks,
>
>
>
> My intent in this e-mail is to solicit advice for how to inject SSH host
> certificates into VM instances, with minimal or no burden on users.
>
>
>
> Background (skip if you're already familiar with SSH certificates): without
> host certificates, when clients ssh to a host for the first time (or after
> the host has been re-installed), they have to hope that there's no man in
> the middle and that the public key being presented actually belongs to the
> host they're trying to reach. The host's public key is stored in the
> client's known_hosts file. SSH host certificates eliminate the possibility of
> a man-in-the-middle attack: a Certificate Authority public key is distributed
> to clients (and written to their known_hosts file with a special syntax and
> options); the host public key is signed by the CA, generating an SSH
> certificate that contains the hostname and validity period (among other
> things). When negotiating the ssh connection, the host presents its SSH host
> certificate and the client verifies that it was signed by the CA.
>
>
>
> How to support SSH host certificates in OpenStack?
>
>
>
> First, let's consider doing it by hand, instance by instance. The only
> solution I can think of is to VNC to the instance, copy the public key to my
> CA server, sign it, and then write the certificate back into the host (again
> via VNC). I cannot ssh without risking a MITM attack. What about using Nova
> user-data? User-data is exposed via the metadata service. Metadata is
> queried via http (reply transmitted in the clear, susceptible to snooping),
> and any compute node can query for any instance's meta-data/user-data.
>
>
>
> At this point I have to admit I'm ignorant of details of cloud-init. I know
> cloud-init allows specifying SSH private keys (both for users and for SSH
> service). I have not yet studied how such information is securely injected
> into an instance. I assume it should only be made available via ConfigDrive
> rather than metadata-service (again, that service transmits in the clear).
>
>
>
> What about providing SSH host certificates as a service in OpenStack? Let's
> keep out of scope issues around choosing and storing the CA keys, but the CA
> key is per project. What design supports setting up the SSH host certificate
> automatically for every VM instance?
>
>
>
> I have looked at Vendor Data and I don't see a way to use that, mainly
> because 1) it doesn't take parameters, so you can't pass the public key out;
> and 2) it's queried over http, not https.
>
>
>
> Just as a feasibility argument, one solution would be to modify Nova compute
> instance boot code. Nova compute can securely query a CA service asking for
> a triplet (private key, public key, SSH certificate) for the specific
> hostname. It can then inject the triplet using ConfigDrive. I believe this
> securely gets the private key into the instance.
>
>
>
> I cannot figure out how to get the equivalent functionality without
> modifying Nova compute and the boot process. Every solution I can think of
> risks either exposing the private key or vulnerability to a MITM attack
> during the signing process.
>
>
>
> Your help is appreciated.
>
>
>
> --Pino
>
>
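The certificate flow described in the quoted background can be sketched with
stock OpenSSH tooling; the file paths, certificate identity, and hostname below
are illustrative only:

```shell
# Illustrative SSH host-certificate issuance with ssh-keygen; all
# names are made up for the example.
set -e
rm -f /tmp/ssh_ca /tmp/ssh_ca.pub /tmp/ssh_host_key /tmp/ssh_host_key.pub \
  /tmp/ssh_host_key-cert.pub
# 1. CA key pair -- generated once, kept on the CA server.
ssh-keygen -q -t ed25519 -N '' -f /tmp/ssh_ca
# 2. Host key pair -- normally generated on (or for) the instance.
ssh-keygen -q -t ed25519 -N '' -f /tmp/ssh_host_key
# 3. CA signs the host public key, embedding the hostname (principal)
#    and a validity window in the resulting certificate.
ssh-keygen -q -s /tmp/ssh_ca -I vm1-host -h -n vm1.example.com \
  -V -5m:+52w /tmp/ssh_host_key.pub
# 4. Clients trust any host in the domain whose key this CA signed.
echo "@cert-authority *.example.com $(cat /tmp/ssh_ca.pub)" >> /tmp/known_hosts
ssh-keygen -L -f /tmp/ssh_host_key-cert.pub
```

The hard part, as the thread notes, is step 3: getting the signed certificate
(and private key) onto the instance without exposing it in transit.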


[openstack-dev] Developer Mailing List Digest September 23-29

2017-09-29 Thread Mike Perez
HTML version: 
https://www.openstack.org/blog/2017/09/developer-mailing-list-digest-september-23-29-2017/

# Summaries
* [TC Report 
39](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122679.html)
* [Release countdown for week R-21, September 29 - October 6](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122819.html)
* [Technical committee status update, September 
29](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122879.html)
* [Placement/resource providers update 
36](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122883.html)
* [POST 
/api-sig/news](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122805.html)

## Sydney Forum

### General Links
* [What the heck is the Forum?](https://wiki.openstack.org/wiki/Forum)
* When: November 6-8, 2017
* Where: OpenStack Summit in Sydney Australia
* Register for [The OpenStack Sydney 
Summit](https://www.openstack.org/summit/sydney-2017/) and show up!
* Deadline for topic sessions was September 29th UTC by this [submission 
form](http://forumtopics.openstack.org/).
* [All Sydney Forum etherpads](https://wiki.openstack.org/wiki/Forum/Sydney2017)

### Etherpads (copied from Sydney Forum wiki)

 Catch-alls
If you want to post an idea, but aren’t working with a specific team or working 
group, you can use these:

* [Technical Committee 
Catch-all](https://etherpad.openstack.org/p/SYD-TC-brainstorming) 
* [User Committee 
Catch-all](https://etherpad.openstack.org/p/SYD-UC-brainstorming) 

 Etherpads from Teams and Working Groups
* [Nova](https://etherpad.openstack.org/p/SYD-nova-brainstorming) 
* [Cinder](https://etherpad.openstack.org/p/cinder-sydney-forum-topics) 
* [Ops Meetups 
Team](https://etherpad.openstack.org/p/SYD-ops-session-ideas) 
* [OpenStack 
Ansible](https://etherpad.openstack.org/p/osa-sydney-summit-planning) 
* [Self-healing 
SIG](https://etherpad.openstack.org/p/self-healing-rocky-forum) 
* [Neutron Quality-Of-Service 
Discussion](https://etherpad.openstack.org/p/qos-talk-sydney) 
* [QA Team](https://etherpad.openstack.org/p/qa-sydney-forum-topics) 
* [Watcher](https://etherpad.openstack.org/p/watcher-Sydney-meetings) 
* [SIG K8s](https://etherpad.openstack.org/p/sig-k8s-sydney-forum-topics) 
* [Kolla](https://etherpad.openstack.org/p/kolla-sydney-forum-topics) 


## Garbage Patches for Simple Typo Fixes
* There is some agreement that we as a community have to do something beyond 
mentoring new developers.
* Others have mentioned that some companies are doing this to game the 
system in other communities besides OpenStack.
* Gain: showing a high contribution level by way of “low quality” patches.
* Some people in the community want to put a stop to this figuratively with 
a stop sign, otherwise things will never improve. If we don't do something now 
we are hurting everyone, including those developers who could have done more 
meaningful contributions.
* Others would like us to collect data showing that earlier attempts to provide 
guidance have not worked, before we create harsh processes.
* We have a lot of anecdotal information right now that we need to collect 
and summarize.
* If the results show that there are clear abuses, rather than 
misunderstandings, then we can use that data to design effective blocks without 
hurting other contributors or creating a reputation that our community is not 
welcoming.
* Some are unclear why there is so much outrage about these patches to begin 
with. They are fixing real things.
* Maybe there is a CI cost, but the faster they are merged the less likely 
someone is to propose it in the future which keeps the CI cost down.
* If people are deeply concerned about CI resources, step one is to give us 
a better accounting into their existing system to see where resources are 
currently spent. 
* 
[Thread](http://lists.openstack.org/pipermail/openstack-dev/2017-September/thread.html#122472)


## Status of the Stewardship Working Group
* The stewardship working group was created after the first session of 
leadership training, which the Technical Committee, User Committee, Board, and 
other community members were invited to participate in during 2016.
* Follow-up on what we learned at ZingTrain and push adoption of the tools we 
discovered there.
* While we did (and continue)
* The activity of the workgroup mostly died when we decided to experiment 
with getting rid of weekly meetings for greater inclusion.
* Lost original leadership.
* The workgroup is dormant, until someone steps up and leads it again.
* Join us on Freenode IRC in the openstack-swg channel if interested.
* 
[Message](http://lists.openstack.org/pipermail/openstack-dev/2017-September/122868.html)


## Improving the Process for Release Marketing
* Release marketing is a critical part of sharing what's new in each release.
* Let's work together on reworking how the 

Re: [openstack-dev] [release][ptl] Improving the process for release marketing

2017-09-29 Thread Mike Perez
On 14:33 Sep 26, Anne Bertucio wrote:
> Release marketing is a critical part of sharing what’s new in each release,
> and we want to rework how the marketing community and projects work together
> to make the release communications happen. 
> 
> Having multiple, repetitive demands to summarize "top features" during
> release time can be pestering and having to recollect the information each
> time isn't an effective use of time. Being asked to make polished,
> "press-friendly" messages out of release notes can feel too far outside of
> the PTL's focus areas or skills. At the same time, for technical content
> marketers, attempting to find the key features from release notes, ML posts,
> specs, Roadmap, etc., means interesting features are sometimes overlooked.
> Marketing teams don't have the latest on what features landed and with what
> caveats.
> 
> To address this gap, the Release team and Foundation marketing team propose
> collecting information as part of the release tagging process. Similar to the
> existing (unused) "highlights" field for an individual tag, we will collect
> some text in the deliverable file to provide highlights for the series (about
> 3 items). That text will then be used to build a landing page on
> release.openstack.org that shows the "key features" flagged by PTLs that
> marketing teams should be looking at during release communication times. The
> page will link to the release notes, so marketers can start there to gather
> additional information, eliminating repetitive asks of PTLs. The "pre
> selection" of features means marketers can spend more time diving into
> release note details and less sifting through them.
> 
> To supplement the written information, the marketing community is also going
> to work together to consolidate follow up questions and deliver them in
> "press corps" style (i.e. a single phone call to be asked questions from
> multiple parties vs. multiple phone calls from individuals).
> 
> We will provide more details about the implementation for the highlights page
> when that is ready, but want to gather feedback about both aspects of the
> plan early.

As someone who participates in building out that page, I welcome this as a way
to better represent highlights from the community itself.

-- 
Mike Perez
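As a sketch of what the proposed highlights text might look like in a
deliverable file -- the field name and layout here are assumptions about the
proposal, not a settled schema:

```yaml
# Hypothetical deliverable-file fragment; "cycle-highlights" is an
# assumed field name, and the bullet text is placeholder.
launchpad: example-project
release-model: cycle-with-milestones
cycle-highlights:
  - <short, marketing-ready summary of a key feature>
  - <another highlight; about three per series>
```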




[openstack-dev] [keystone] office hours report 2017-09-26 and plans for next week

2017-09-29 Thread Lance Bragstad
Office hours were a little slow this week. Most people seem to be getting
back in the groove from the PTG. No bugs were closed during this week's
office hours.

FWIW - I plan to go through and start cleaning up v2.0 bugs that are no
longer relevant now that v2.0 is being removed. This will be a good
opportunity to reassess bugs for next week. We can also dedicate time to
specs next week since Queens-1 is just around the corner.

Let me know if you have thoughts. Thanks!






Re: [openstack-dev] [docs][ptls][install] Install guide vs. tutorial

2017-09-29 Thread Doug Hellmann
Excerpts from Petr Kovar's message of 2017-09-29 21:09:02 +0200:
> On Tue, 26 Sep 2017 18:53:04 -0500
> Jay S Bryant wrote:
> 
> > 
> > 
> > On 9/25/2017 3:47 AM, Alexandra Settle wrote:
> > >   
> > >  > I completely agree consistency is more important than bike-shedding
> > > over the
> > >  > name :)
> > >  > To be honest, it would be easier to change everything to ‘guide’ – 
> > > seeing as
> > >  > all our URLs are ‘install-guide’.
> > >  > But that’s the lazy in me speaking.
> > >  >
> > >  > Industry wise – there does seem to be more of a trend towards 
> > > ‘guide’ rather
> > >  > than ‘tutorial’. Although, that is at a cursory glance.
> > >  >
> > >  > I am happy to investigate further, if this matter is of some 
> > > contention to
> > >  > people?
> > >  
> > >  This is the first time I'm hearing about "Install Tutorial". I'm 
> > > also lazy, +1
> > >  with sticking to install guide.
> > >  
> > > Just to clarify: https://docs.openstack.org/install-guide/ The link is 
> > > “install-guide” but the actual title on the page is “OpenStack 
> > > Installation Tutorial”.
> > >
> > > Apologies if I haven’t been clear enough in this thread! Context always 
> > > helps :P
> > >
> > Oy!  The URL says guide but the page says tutorial?  That is even more 
> > confusing.  I think it would be good to make it consistent and just with 
> > guide then.  All for your laziness when it leads to consistency.  :-)
> 
> Yes, this inconsistency in document naming is totally something we need
> to change, hopefully based on the outcome of this discussion.
> 
> At the PTG, I was leaning towards "tutorial" because previously, the docs
> team chose that term to distinguish an installation HOWTO (describing
> installing a PoC environment from packages) from a more general guide on
> installation (possibly documenting different methods that different
> audiences can use).
> 
> But I could go with both.
> 
> Cheers,
> pk
> 

Everyone seems rather flexible on this, so I think we just need someone
to pick a name.

Using "tutorial" will involve renaming the directory containing all of
the content and updating the way that is published in openstack-manuals.

Using "guide" will involve changing the contents of the documents
inside a few files in openstack-manuals to match where it is already
published, and we can do that with sed.

My vote is to do the simple thing, and change the file contents:
https://review.openstack.org/508608

Doug
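Doug's "simple thing" can be sketched as a one-shot sed pass; the directory and
title strings below are illustrative, not the exact openstack-manuals layout:

```shell
# Hypothetical sketch of the sed-based retitle; the path is
# illustrative, not the real openstack-manuals tree.
set -e
mkdir -p /tmp/install-guide
printf 'OpenStack Installation Tutorial\n===============================\n' \
  > /tmp/install-guide/index.rst
# Rewrite the title everywhere it appears under the guide sources.
grep -rl 'Installation Tutorial' /tmp/install-guide \
  | xargs sed -i 's/Installation Tutorial/Installation Guide/g'
head -n 1 /tmp/install-guide/index.rst   # OpenStack Installation Guide
```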




Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Giuseppe de Candia
Hi Ihar,

I have reviewed https://review.openstack.org/#/c/456394/  (Fetch hostkey
from port) and noted that:
1) that discussion is likely to stay among the Neutron developers only
(whereas I would like a wider audience, especially including Nova
developers)
2) that proposal does not consider SSH certificates, which are becoming a
standard approach in the industry (see [1] and [2])

Therefore, I would like to keep the discussion here.

thanks,
Pino


[1] -
https://code.facebook.com/posts/365787980419535/scalable-and-secure-access-with-ssh/
[2] -
https://medium.com/uber-security-privacy/introducing-the-uber-ssh-certificate-authority-4f840839c5cc



On Fri, Sep 29, 2017 at 3:05 PM, Giuseppe de Candia <
giuseppe.decan...@gmail.com> wrote:

> Ihar, thanks for pointing that out - I'll definitely take a close look.
>
> Jon, I'm not very familiar with Barbican, but I did assume the full
> implementation would use Barbican to store private keys. However, in terms
> of actually getting a private key (or SSH host cert) into a VM instance,
> Barbican doesn't help. The instance needs permission to access secrets
> stored in Barbican. The main question of my e-mail is: how do you inject a
> credential in an automated but secure way? I'd love to hear ideas - in the
> meantime I'll study Ihar's link.
>
> thanks,
> Pino
>
>
>
> On Fri, Sep 29, 2017 at 2:49 PM, Ihar Hrachyshka wrote:
>
>> What you describe (at least the use case) seems to resemble
>> https://review.openstack.org/#/c/456394/ This work never moved
>> anywhere since the spec was posted though. You may want to revive the
>> discussion in scope of the spec.
>>
>> Ihar
>>
>> On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia wrote:
>> > [quoted text snipped]

Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Giuseppe de Candia
Ihar, thanks for pointing that out - I'll definitely take a close look.

Jon, I'm not very familiar with Barbican, but I did assume the full
implementation would use Barbican to store private keys. However, in terms
of actually getting a private key (or SSH host cert) into a VM instance,
Barbican doesn't help. The instance needs permission to access secrets
stored in Barbican. The main question of my e-mail is: how do you inject a
credential in an automated but secure way? I'd love to hear ideas - in the
meantime I'll study Ihar's link.

thanks,
Pino



On Fri, Sep 29, 2017 at 2:49 PM, Ihar Hrachyshka wrote:

> What you describe (at least the use case) seems to resemble
> https://review.openstack.org/#/c/456394/ This work never moved
> anywhere since the spec was posted though. You may want to revive the
> discussion in scope of the spec.
>
> Ihar
>
> On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia wrote:
> > [quoted text snipped]

Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-29 Thread Arkady.Kanevsky
There are some loose ends that Saverio is correctly bringing up.
These are perfect points to discuss at the Forum.
I suggest we start an etherpad to collect an agenda for it.

-Original Message-
From: Lee Yarwood [mailto:lyarw...@redhat.com] 
Sent: Friday, September 29, 2017 7:04 AM
To: Saverio Proto 
Cc: OpenStack Development Mailing List (not for usage questions) 
; openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] 
[skip-level-upgrades][fast-forward-upgrades] PTG summary

On 29-09-17 11:40:21, Saverio Proto wrote:
> Hello,
> 
> sorry I could not make it to the PTG.
> 
> I have an idea that I want to share with the community. I hope this is 
> a good place to start the discussion.
> 
> After years of Openstack operations, upgrading releases from Icehouse 
> to Newton, the feeling is that the control plane upgrade is doable.
> 
> But it is also a lot of pain to upgrade all the compute nodes. This 
> really causes downtime to the VMs that are running.
> I can't always make live migrations, sometimes the VMs are just too 
> big or too busy.
> 
> It would be nice to guarantee the ability to run an updated control 
> plane with compute nodes up to N-3 Release.
> 
> This way even if we have to upgrade the control plane every 6 months, 
> we can keep a longer lifetime for compute nodes. Basically we can 
> never upgrade them until we decommission the hardware.
> 
> If there are new features that require updated compute nodes, we can 
> always organize our datacenter in availability zones, not scheduling 
> new VMs to those compute nodes.
> 
> To my understanding this means having compatibility at least for the 
> nova-compute agent and the neutron-agents running on the compute node.
> 
> Is it a very bad idea ?
> 
> Do other people feel like me that upgrading all the compute nodes is 
> also a big part of the burden regarding the upgrade ?

Yeah, I don't think the Nova community would ever be able or willing to verify 
and maintain that level of backward compatibility. Ultimately there's nothing 
stopping you from upgrading Nova on the computes while also keeping instances
running.

You only run into issues with kernel, OVS and QEMU (for n-cpu with
libvirt) etc upgrades that require reboots or instances to be restarted (either 
hard or via live-migration). If you're unable or just unwilling to take 
downtime for instances that can't be moved when these components require an 
update then you have bigger problems IMHO.

Regards,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76


Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Ihar Hrachyshka
What you describe (at least the use case) seems to resemble
https://review.openstack.org/#/c/456394/ This work never moved
anywhere since the spec was posted though. You may want to revive the
discussion in scope of the spec.

Ihar

On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia wrote:
> [quoted text snipped]
> instance boot code. Nova compute can securely query a CA service asking for
> a triplet (private key, public key, SSH certificate) for the specific
> hostname. It can then inject the triplet using ConfigDrive. I believe this
> securely gets the private key into the instance.
>
>
>
> I cannot figure out how to get the equivalent functionality without
> modifying Nova compute and the boot process. Every solution I can think of
> risks either exposing the private key or vulnerability to a MITM attack
> during the signing process.
>
>
>
> Your help is appreciated.
>
>
>
> --Pino
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Jonathan Proulx
Giuseppe ,

I'm pretty sure this is the project you want to look into:

http://git.openstack.org/cgit/openstack/barbican/

"Barbican is a ReST API designed for the secure storage, provisioning
and management of secrets, including in OpenStack environments."

-Jon


On Fri, Sep 29, 2017 at 02:21:06PM -0500, Giuseppe de Candia wrote:
:Hi Folks,
:
:
:
:My intent in this e-mail is to solicit advice for how to inject SSH host
:certificates into VM instances, with minimal or no burden on users.
:
:
:
:Background (skip if you're already familiar with SSH certificates): without
:host certificates, when clients ssh to a host for the first time (or after
:the host has been re-installed), they have to hope that there's no man in
:the middle and that the public key being presented actually belongs to the
:host they're trying to reach. The host's public key is stored in the
:client's known_hosts file. SSH host certificates eliminate the possibility of a
:Man-in-the-Middle attack: a Certificate Authority public key is distributed
:to clients (and written to their known_hosts file with a special syntax and
:options); the host public key is signed by the CA, generating an SSH
:certificate that contains the hostname and validity period (among other
:things). When negotiating the ssh connection, the host presents its SSH
:host certificate and the client verifies that it was signed by the CA.
:
:
:
:How to support SSH host certificates in OpenStack?
:
:
:
:First, let's consider doing it by hand, instance by instance. The only
:solution I can think of is to VNC to the instance, copy the public key to
:my CA server, sign it, and then write the certificate back into the host
:(again via VNC). I cannot ssh without risking a MITM attack. What about
:using Nova user-data? User-data is exposed via the metadata service.
:Metadata is queried via http (reply transmitted in the clear, susceptible
:to snooping), and any compute node can query for any instance's
:meta-data/user-data.
:
:
:
:At this point I have to admit I'm ignorant of details of cloud-init. I know
:cloud-init allows specifying SSH private keys (both for users and for SSH
:service). I have not yet studied how such information is securely injected
:into an instance. I assume it should only be made available via ConfigDrive
:rather than metadata-service (again, that service transmits in the clear).
:
:
:
:What about providing SSH host certificates as a service in OpenStack? Let's
:keep out of scope issues around choosing and storing the CA keys, but the
:CA key is per project. What design supports setting up the SSH host
:certificate automatically for every VM instance?
:
:
:
:I have looked at Vendor Data and I don't see a way to use that, mainly
:because 1) it doesn't take parameters, so you can't pass the public key
:out; and 2) it's queried over http, not https.
:
:
:
:Just as a feasibility argument, one solution would be to modify Nova
:compute instance boot code. Nova compute can securely query a CA service
:asking for a triplet (private key, public key, SSH certificate) for the
:specific hostname. It can then inject the triplet using ConfigDrive. I
:believe this securely gets the private key into the instance.
:
:
:
:I cannot figure out how to get the equivalent functionality without
:modifying Nova compute and the boot process. Every solution I can think of
:risks either exposing the private key or vulnerability to a MITM attack
:during the signing process.
:
:
:
:Your help is appreciated.
:
:
:
:--Pino

:__
:OpenStack Development Mailing List (not for usage questions)
:Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Giuseppe de Candia
Hi Folks,



My intent in this e-mail is to solicit advice for how to inject SSH host
certificates into VM instances, with minimal or no burden on users.



Background (skip if you're already familiar with SSH certificates): without
host certificates, when clients ssh to a host for the first time (or after
the host has been re-installed), they have to hope that there's no man in
the middle and that the public key being presented actually belongs to the
host they're trying to reach. The host's public key is stored in the
client's known_hosts file. SSH host certificates eliminate the possibility of a
Man-in-the-Middle attack: a Certificate Authority public key is distributed
to clients (and written to their known_hosts file with a special syntax and
options); the host public key is signed by the CA, generating an SSH
certificate that contains the hostname and validity period (among other
things). When negotiating the ssh connection, the host presents its SSH
host certificate and the client verifies that it was signed by the CA.
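As a concrete illustration of the signing flow just described, here is a small
sketch driving OpenSSH's own tooling from Python. The function and host names
are hypothetical, and in a real deployment the CA key pair would live on a
per-project CA service rather than next to the host key:

```python
import os
import shutil
import subprocess
import tempfile

def make_host_certificate(identity, principals):
    # Returns the path of a signed host certificate, or None when the
    # local ssh-keygen binary is unavailable.
    if shutil.which("ssh-keygen") is None:
        return None
    tmp = tempfile.mkdtemp()
    ca = os.path.join(tmp, "ca")
    host_key = os.path.join(tmp, "ssh_host_ed25519_key")
    # CA key pair: in this discussion it would be held by a per-project
    # CA service, never on the instance itself.
    subprocess.run(["ssh-keygen", "-q", "-t", "ed25519", "-N", "", "-f", ca],
                   check=True)
    # Host key pair: normally generated on (or injected into) the VM.
    subprocess.run(["ssh-keygen", "-q", "-t", "ed25519", "-N", "", "-f",
                    host_key], check=True)
    # Sign the host *public* key: -h marks a host (not user) certificate,
    # -n limits it to the given principals (hostnames), -I names it.
    subprocess.run(["ssh-keygen", "-q", "-s", ca, "-I", identity, "-h",
                    "-n", ",".join(principals), host_key + ".pub"],
                   check=True)
    return host_key + "-cert.pub"

cert = make_host_certificate("vm-1", ["vm1.example.com"])
# Clients then trust the CA via a known_hosts entry of the form:
#   @cert-authority *.example.com ssh-ed25519 AAAA...
```

The hard part, as discussed below, is not the signing itself but getting the
signed triplet into the instance without exposing the private key.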



How to support SSH host certificates in OpenStack?



First, let's consider doing it by hand, instance by instance. The only
solution I can think of is to VNC to the instance, copy the public key to
my CA server, sign it, and then write the certificate back into the host
(again via VNC). I cannot ssh without risking a MITM attack. What about
using Nova user-data? User-data is exposed via the metadata service.
Metadata is queried via http (reply transmitted in the clear, susceptible
to snooping), and any compute node can query for any instance's
meta-data/user-data.



At this point I have to admit I'm ignorant of details of cloud-init. I know
cloud-init allows specifying SSH private keys (both for users and for SSH
service). I have not yet studied how such information is securely injected
into an instance. I assume it should only be made available via ConfigDrive
rather than metadata-service (again, that service transmits in the clear).



What about providing SSH host certificates as a service in OpenStack? Let's
keep out of scope issues around choosing and storing the CA keys, but the
CA key is per project. What design supports setting up the SSH host
certificate automatically for every VM instance?



I have looked at Vendor Data and I don't see a way to use that, mainly
because 1) it doesn't take parameters, so you can't pass the public key
out; and 2) it's queried over http, not https.



Just as a feasibility argument, one solution would be to modify Nova
compute instance boot code. Nova compute can securely query a CA service
asking for a triplet (private key, public key, SSH certificate) for the
specific hostname. It can then inject the triplet using ConfigDrive. I
believe this securely gets the private key into the instance.



I cannot figure out how to get the equivalent functionality without
modifying Nova compute and the boot process. Every solution I can think of
risks either exposing the private key or vulnerability to a MITM attack
during the signing process.



Your help is appreciated.



--Pino
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Did you know archive_deleted_rows isn't super terrible anymore?

2017-09-29 Thread melanie witt

On Fri, 29 Sep 2017 13:49:55 -0500, Matt Riedemann wrote:

For a while now, actually.

Someone was asking about when archive_deleted_rows would actually work, 
and the answer is, it should work since at least Mitaka:


https://review.openstack.org/#/q/I77255c77780f0c2b99d59a9c20adecc85335bb18

And starting in Ocata there is the --until-complete option, which lets 
you run it continuously until it's done, rather than the weird manual 
batching from before:


https://review.openstack.org/#/c/378718/

So this shouldn't be news, but it might be. So FYI.


True that. However, I want to give people a heads up about something I 
learned recently (today actually). I think problems with archive can 
arise if you've restarted your database after archiving, and attempt to 
do a future archive. The InnoDB engine in MySQL keeps the AUTO_INCREMENT 
counter only in memory, so after a restart it selects the maximum value 
and adds 1 to use as the next value [1].


So if you had soft-deleted rows with primary keys 1 through 10 in the 
main table and ran archive_deleted_rows, those rows would get inserted 
into the shadow table and be hard-deleted from the main table. Then, if 
you restarted the database, the primary key AUTO_INCREMENT counter would 
be initialized to 1 again and the primary keys you had archived would be 
reused. If those new rows with primary keys 1 through 10 were eventually 
soft-deleted and then you ran archive_deleted_rows, the archive would 
fail with something like, "DBDuplicateEntry: 
(pymysql.err.IntegrityError) (1062, u"Duplicate entry '1' for key 
'PRIMARY'")". The workaround would be to delete or otherwise move the 
archived rows containing duplicate keys out of the shadow table.
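The failure mode above can be reproduced in miniature with SQLite, whose rowid
allocation (one more than the largest surviving id) happens to mirror how a
restarted InnoDB re-derives its AUTO_INCREMENT counter. This is a sketch, not
nova's real schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted INTEGER)")
db.execute("CREATE TABLE shadow_instances "
           "(id INTEGER PRIMARY KEY, deleted INTEGER)")

def archive_deleted_rows():
    # Move soft-deleted rows into the shadow table, then purge them
    # from the main table.
    db.execute("INSERT INTO shadow_instances "
               "SELECT id, deleted FROM instances WHERE deleted = 1")
    db.execute("DELETE FROM instances WHERE deleted = 1")

# Rows with primary keys 1-3 are soft-deleted and archived.
db.executemany("INSERT INTO instances (deleted) VALUES (?)", [(1,)] * 3)
archive_deleted_rows()

# After the "restart" the id counter resets, so ids 1-3 get reused...
db.executemany("INSERT INTO instances (deleted) VALUES (?)", [(1,)] * 3)
try:
    archive_deleted_rows()  # ...and the second archive hits duplicate keys
except sqlite3.IntegrityError as exc:
    print("archive failed:", exc)
```

The second archive raises an IntegrityError, just like the MySQL
DBDuplicateEntry described above, leaving the reused rows stranded in the
main table.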


-melanie

[1] 
https://dev.mysql.com/doc/refman/5.7/en/innodb-auto-increment-handling.html#innodb-auto-increment-initialization



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][ptls][install] Install guide vs. tutorial

2017-09-29 Thread Petr Kovar
On Tue, 26 Sep 2017 18:53:04 -0500
Jay S Bryant  wrote:

> 
> 
> On 9/25/2017 3:47 AM, Alexandra Settle wrote:
> >   
> >  > I completely agree consistency is more important, than bike shedding 
> > over the
> >  > name :)
> >  > To be honest, it would be easier to change everything to ‘guide’ – 
> > seeing as
> >  > all our URLs are ‘install-guide’.
> >  > But that’s the lazy in me speaking.
> >  >
> >  > Industry wise – there does seem to be more of a trend towards 
> > ‘guide’ rather
> >  > than ‘tutorial’. Although, that is at a cursory glance.
> >  >
> >  > I am happy to investigate further, if this matter is of some 
> > contention to
> >  > people?
> >  
> >  This is the first time I'm hearing about "Install Tutorial". I'm also 
> > lazy, +1
> >  with sticking to install guide.
> >  
> > Just to clarify: https://docs.openstack.org/install-guide/ The link is 
> > “install-guide” but the actual title on the page is “OpenStack Installation 
> > Tutorial”.
> >
> > Apologies if I haven’t been clear enough in this thread! Context always 
> > helps :P
> >
> Oy!  The URL says guide but the page says tutorial?  That is even more 
> confusing.  I think it would be good to make it consistent and just with 
> guide then.  All for your laziness when it leads to consistency.  :-)

Yes, this inconsistency in document naming is totally something we need
to change, hopefully based on the outcome of this discussion.

At the PTG, I was leaning towards "tutorial" because previously, the docs
team chose that term to distinguish an installation HOWTO (describing
installing a PoC environment from packages) from a more general guide on
installation (possibly documenting different methods that different
audiences can use).

But I could go with both.

Cheers,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Did you know archive_deleted_rows isn't super terrible anymore?

2017-09-29 Thread Matt Riedemann

For a while now, actually.

Someone was asking about when archive_deleted_rows would actually work, 
and the answer is, it should work since at least Mitaka:


https://review.openstack.org/#/q/I77255c77780f0c2b99d59a9c20adecc85335bb18

And starting in Ocata there is the --until-complete option, which lets 
you run it continuously until it's done, rather than the weird manual 
batching from before:


https://review.openstack.org/#/c/378718/

So this shouldn't be news, but it might be. So FYI.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Dell Ironic CI Migration

2017-09-29 Thread Rajini.Karthik
Hi all
Dell Ironic 3rd party CI is gearing up for hardware migration next week. We 
expect a week of down time as it is a larger change for us.

Will send out another mail when it is done.

Thank you
Rajini
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-29 Thread Luke Hinds
On Fri, Sep 29, 2017 at 5:31 PM, Jay Pipes  wrote:

> On 09/29/2017 06:19 AM, Luke Hinds wrote:
>
>> On Thu, Sep 28, 2017 at 8:38 PM, McClymont Jr, Scott <
>> scott.mcclym...@verizonwireless.com> wrote:
>>
>> Hey All,
>>
>> I've got a spec up for a change I want to implement in Glance for
>> Queens to enhance the current checksum (md5) functionality with a
>> stronger hash algorithm. I'm going to do this in such a way that it
>> is easily altered in the future for new algorithms as they are
>> released.  I'd appreciate it if someone on the security team could
>> look it over and comment. Thanks.
>>
>> Review: https://review.openstack.org/#/c/507568/
>> 
>>
>>
>> +1 , thanks for undertaking this work. Strong support from the security
>> projects side.
>>
>> Would be good to see all projects move on from MD5 use now; it's been
>> known to be insecure for some time and clashes with FIPS 140-2 compliance.
>>
>
> In the case of Glance's use of MD5 for checksums, it is used to identify
> whether a particular array of bytes that represents an image has changed.
> The client uploads a bytestream to Glance, which does a rolling checksum of
> that byte data for each chunk received and writes the checksum to the
> database upon completion of the upload.
>
> That checksum number never changes since Glance images are immutable once
> uploaded.
>
> Can someone please inform me how changing the checksum algorithm for this
> operation to SHA-1 or something else would improve the security of this
> operation?
>


As I understand it, MD5 has been proven to be susceptible to collision
attacks, so it's possible to generate the same hash from two different blobs.
The same is also true for SHA-1, and it can't be ruled out that this may one
day be the case for stronger algorithms (SHA-256, SHA-512, etc.) as well.


> As someone who recently had to go through thousands of (mostly bogus)
> entries in a spreadsheet generated from the Bandit "security scanning
> tool", I'd like to ask that we approach these kinds of things with some
> common sense and not just as a checking-the-box-off activity.
>

Understood. You may already know this, but you can add a "# nosec" comment on
the line using hashlib.md5 and Bandit will not report a false positive. It
might seem a pain that it reports on md5, but it has highlighted a few
occurrences of people using weak hashes for salting, integrity checks, etc.
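For readers unfamiliar with that annotation, a minimal sketch (the function
name is made up; the point is the trailing comment):

```python
import hashlib

def cache_key(name):
    # md5 here is a fast, stable name-hashing helper, not a security
    # control; the trailing "nosec" marker tells Bandit to skip the line.
    return hashlib.md5(name.encode("utf-8")).hexdigest()  # nosec

print(cache_key("instance-0001"))  # 32 hex chars; same input, same key
```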

>
> md5 is used in a number of places in many OpenStack services, and often
> those uses have nothing to do with cryptography. Rather, in those cases md5
> is used as a simple mechanism to generate a hash from a name. [1]
>
> All I ask is that we don't have an army of people going out and replacing
> blindly all uses of the MD5 algorithm everywhere, since (as I learned
> recently) that will just lead to a lot of busywork for little gain.
>

I do see your point; some network protocols also use md5 purely for error
checking, not security. In OpenStack, Swift uses MD5 for a non-security case,
and it's no simple swap-out for them.

If anything, it would be ideal to ensure that anyone implementing new
functionality does not use md5, and if existing uses of md5/sha1 can easily be
swapped out then it's worth doing, as there may be an edge case not yet seen
that opens up a hole.


>
> Best,
> -jay
>
> [1] https://github.com/openstack/nova/blob/master/nova/utils.py#L1067
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Luke Hinds | NFV Partner Engineering | Office of Technology | Red Hat
e: lhi...@redhat.com | irc: lhinds @freenode | m: +44 77 45 63 98 84 | t: +44
12 52 36 2483
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-29 Thread Jeremy Stanley
On 2017-09-29 12:31:21 -0400 (-0400), Jay Pipes wrote:
[...]
> Can someone please inform me how changing the checksum algorithm
> for this operation to SHA-1 or something else would improve the
> security of this operation?
[...]

The current known flaws in MD5 pretty much boil down to this one
potential exploit scenario:

As a devious malcontent, I construct two images which are specially
engineered to result in the same MD5 checksum (this part alone may
not even be possible depending on the nature of the image protocol
and its metadata headers, but let's leave that aside for the
moment). One image is benign, and the other is malicious in nature.

I upload the benign image and get people to trust it. Later I
(again, exercise left to the imagination of the reader... leveraging
optional external image locations functionality in Glance?)
substitute the malicious image and people begin booting it instead,
continuing to trust it because it has the same checksum.

This example is, of course, contrived and riddled with gaping plot
holes; it would never make for a mystery bestseller. Who or what is
even validating these checksums to begin with? If you can get people
to run images you've uploaded, odds are it's game over anyway
regardless of whether or not the checksums change, and the known
avenues for that involve either an inside job or dangerous
configuration options.

The simpler explanation is that people hear "MD5 is broken" and so
anyone writing policies and auditing security/compliance just tells
you it's verboten. That, and uninformed alarmists who freak out when
they find uses of MD5 and think that means the software will be
hax0red the moment you put it into production. Sometimes it's easier
to just go through the pain of replacing unpopular cryptographic
primitives so you can avoid having this same discussion over and
over with people whose eyes glaze over as soon as you start to try
and tell them anything which disagrees with their paranoid
sensationalist media experts.

Oh, also, SHA-1 isn't much better in this regard.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-29 Thread Jeremy Stanley
On 2017-09-29 18:39:18 +0200 (+0200), Thomas Bechtold wrote:
> On 29.09.2017 12:56, Jesse Pretorius wrote:
> > On 9/29/17, 7:18 AM, "Thomas Bechtold"  wrote:
> >
> > This will still install the files into usr/etc :
> > It's not nice but packagers can workaround that.
> >
> > Yes, that is true. Is there a ‘better’ location to have them? I
> > noticed that Sahara was placing the files into share, resulting
> > in them being installed into /usr/share – is that better?
> 
> There is /etc [1]
[...]

Not really, no, because the system-context data_files path has to be
relative to /usr or /usr/local unless we want to have modules going
into /lib and entrypoints in /bin now. There was a somewhat thorough
discussion on the distutils-sig ML a couple years ago (I think? my
sense of time is pretty terrible) of a potential solution involving
setting up a taxonomy of data file types and allowing
installers/package maintainers to map those to whatever they want
(FHS for distro packages maybe, relative to the install root for
virtualenvs, and so on). I can't seem to find that proposal now, nor
does it appear to have gotten as far as becoming a PEP, but I'll
admit I only spent a few minutes digging around for it.

The current situation with data_files behavior is complicated by the
multitude of ways and places Python packages can be installed,
resulting in bug reports like:

https://github.com/pypa/setuptools/issues/130
https://github.com/pypa/setuptools/issues/460
https://github.com/pypa/setuptools/issues/807

Also there's the problem that it's just a single bucket which could
be used to house example configuration files, shared datasets,
manpages or similar documentation... so dumping all those into a
single path is pretty messy anyway.
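For readers following along, the data_files stanza under discussion looks
roughly like this in a pbr-style setup.cfg (project and file names here are
hypothetical). Relative destination paths are resolved against the install
prefix, which is exactly why "etc/..." lands in /usr/etc (or /usr/local/etc)
on a system-wide install:

```ini
# setup.cfg -- sketch of the sample-config convention, hypothetical names
[files]
data_files =
    etc/myservice =
        etc/myservice.conf.sample
        etc/policy.json.sample
```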
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Bob Ball
Hi Sahid,

> > a second device emulator along-side QEMU.  There is no mdev 
> > integration.  I'm concerned about how much mdev-specific functionality 
> > would have to be faked up in the XenServer-specific driver for vGPU to 
> > be used in this way.
>
> What you are referring to with your DEMU is what QEMU/KVM has with its 
> vfio-pci. XenServer is
> reading through MDEV since the vendors provide drivers on *Linux* using the 
> MDEV framework.
> MDEV is a kernel layer used to expose hardware; it's not hypervisor 
> specific.

It is possible that the vendor's userspace libraries use mdev; however, DEMU has 
no concept of mdev at all.  If the vendor's userspace libraries do use mdev, 
then this is entirely abstracted away from XenServer's integration.
While I don't have access to the vendor's source for the userspace libraries or 
the kernel module, my understanding was that, in XenServer's integration, the 
userspace libraries talk to the kernel module directly via IOCTLs.

My reading of mdev implies that /sys/class/mdev_bus should exist for it to be 
used.  It does not exist in XenServer, which to me implies that the vendor's 
drivers for XenServer do not use mdev?

Bob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-29 Thread Thomas Bechtold

hi,

On 29.09.2017 12:56, Jesse Pretorius wrote:

On 9/29/17, 7:18 AM, "Thomas Bechtold"  wrote:

 This will still install the files into usr/etc :
 It's not nice but packagers can workaround that.

Yes, that is true. Is there a ‘better’ location to have them? I noticed that 
Sahara was placing the files into share, resulting in them being installed into 
/usr/share – is that better?


There is /etc [1]

Best,

Tom

[1] 
http://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.html#etcHostspecificSystemConfiguration


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-29 Thread Jay Pipes

On 09/29/2017 06:19 AM, Luke Hinds wrote:
On Thu, Sep 28, 2017 at 8:38 PM, McClymont Jr, Scott 
> wrote:


Hey All,

I've got a spec up for a change I want to implement in Glance for
Queens to enhance the current checksum (md5) functionality with a
stronger hash algorithm. I'm going to do this in such a way that it
is easily altered in the future for new algorithms as they are
released.  I'd appreciate it if someone on the security team could
look it over and comment. Thanks.

Review: https://review.openstack.org/#/c/507568/



+1 , thanks for undertaking this work. Strong support from the security 
projects side.


Would be good to see all projects move on from MD5 use now; it's been 
known to be insecure for some time and clashes with FIPS 140-2 compliance.


In the case of Glance's use of MD5 for checksums, it is used to identify 
whether a particular array of bytes that represents an image has 
changed. The client uploads a bytestream to Glance, which does a rolling 
checksum of that byte data for each chunk received and writes the 
checksum to the database upon completion of the upload.
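That chunked hashing pattern, the part the spec wants to generalize beyond
md5, can be sketched in a few lines (this is an illustration, not Glance's
actual code):

```python
import hashlib

def rolling_checksum(chunks, algorithm="md5"):
    # Hash the image bytestream chunk by chunk, never holding the whole
    # image in memory, and return the final hex digest.
    hasher = hashlib.new(algorithm)
    for chunk in chunks:
        hasher.update(chunk)
    return hasher.hexdigest()

image = [b"chunk-one", b"chunk-two", b"chunk-three"]
# The rolling digest equals the digest of the concatenated stream.
assert rolling_checksum(image) == hashlib.md5(b"".join(image)).hexdigest()
# Swapping in a stronger algorithm is a one-argument change:
print(rolling_checksum(image, "sha256"))
```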


That checksum number never changes since Glance images are immutable 
once uploaded.


Can someone please inform me how changing the checksum algorithm for 
this operation to SHA-1 or something else would improve the security of 
this operation?


As someone who recently had to go through thousands of (mostly bogus) 
entries in a spreadsheet generated from the Bandit "security scanning 
tool", I'd like to ask that we approach these kinds of things with some 
common sense and not just as a checking-the-box-off activity.


md5 is used in a number of places in many OpenStack services, and often 
those uses have nothing to do with cryptography. Rather, in those cases 
md5 is used as a simple mechanism to generate a hash from a name. [1]


All I ask is that we don't have an army of people going out and 
replacing blindly all uses of the MD5 algorithm everywhere, since (as I 
learned recently) that will just lead to a lot of busywork for little gain.


Best,
-jay

[1] https://github.com/openstack/nova/blob/master/nova/utils.py#L1067


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-29 Thread Fox, Kevin M
It's easier to convince a developer's employer to keep paying the developer 
when their users (operators) want to use their stuff. It's a longer term 
strategic investment, but a critical one. I think this has been one of the 
things holding OpenStack back of late. The developers continuously push off 
hard issues to operators that may have other, better solutions. I don't feel 
this is out of malice but more out of a lack of understanding of what operators 
do. The operators are starting to push back and are looking at alternatives 
now. We need to break this trend before it accelerates and more developers can 
no longer afford to work on OpenStack. I'd be happy as an operator to work with 
developers to identify pain points so they can be resolved in more 
operator-friendly ways.

Thanks,
Kevin

From: Ben Nemec [openst...@nemebean.com]
Sent: Friday, September 29, 2017 6:43 AM
To: OpenStack Development Mailing List (not for usage questions); Rochelle 
Grober
Subject: Re: [openstack-dev] [ptg] Simplification in OpenStack

On 09/26/2017 09:13 PM, Rochelle Grober wrote:
> Clint Byrum wrote:
>> Excerpts from Jonathan Proulx's message of 2017-09-26 16:01:26 -0400:
>>> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>>>
>>> :OpenStack is big. Big enough that a user will likely be fine with
>>> learning :a new set of tools to manage it.
>>>
>>> New users in the startup sense of new, probably.
>>>
>>> People with entrenched environments, I doubt it.
>>>
>>
>> Sorry no, I mean everyone who doesn't have an OpenStack already.
>>
>> It's nice and all, if you're a Puppet shop, to get to use the puppet modules.
>> But it doesn't bring you any closer to the developers as a group. Maybe a few
>> use Puppet, but most don't. And that means you are going to feel like
>> OpenStack gets thrown over the wall at you once every
>> 6 months.
>>
>>> But OpenStack is big. Big enough I think all the major config systems
>>> are fairly well represented, so whether I'm right or wrong this
>>> doesn't seem like an issue to me :)
>>>
>>
>> They are. We've worked through it. But that doesn't mean potential users
>> are getting our best solution or feeling well integrated into the community.
>>
>>> Having common targets (constellations, reference architectures,
>>> whatever) so all the config systems build the same things (or a subset
>>> or superset of the same things) seems like it would have benefits all
>>> around.
>>>
>>
>> It will. It's a good first step. But I'd like to see a world where 
>> developers are
>> all well versed in how operators actually use OpenStack.
>
> Hear, hear!  +1000  Take a developer to work during peak operations.

Or anytime really.  One of the best experiences I had was going on-site
to some of our early TripleO users and helping them through the install
process.  It was eye-opening to see someone who wasn't already immersed
in the project try to use it.  In a relatively short time they pointed
out a number of easy opportunities for simplification (why is this two
steps instead of one?  Umm, no good reason actually.).

I've pushed for us to do more of that sort of thing, but unfortunately
it's a hard sell to take an already overworked developer away from their
day job for a week to focus on one specific user. :-/

>
> For Walmart, that would be Black Friday/Cyber Monday.
> For schools, usually a few days into the new session.
> For others, each has a time when things break more.  Having a developer 
> experience what operators do to predict/avoid/recover/work around the normal 
> state of operations would help each to understand the macro work flows.  
> Those are important, too.  Full stack includes Ops.
>
> < Snark off />
>
> --Rocky
>
>>
>> __
>> 
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Forum topics brainstorming

2017-09-29 Thread Matt Riedemann

On 9/28/2017 4:45 PM, Matt Riedemann wrote:

2. Placement update and direction

Same as the Cells v2 discussion - a Pike update and the focus items for 
Queens. This would also be a place we can mention the Ironic flavor 
migration to custom resource classes that happens in Pike.


Someone else proposed something very similar sounding to this:

http://forumtopics.openstack.org/cfp/details/50

But it's unclear what overlap there is, or if ^ is proposing just 
talking about future use cases for nested resource providers and custom 
resource classes for all the external-to-nova thingies people would like 
to do.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-09-29 Thread Doug Hellmann
The Glance team has weekly meetings just like the Oslo team. You’ll find the 
details about the time and agenda on eavesdrop.openstack.org. I think it 
would make sense to add an item 
to the agenda for their next meeting to discuss this issue, and ask for someone 
to help guide you in fixing it. If the Oslo team needs to get involved after 
there is someone from Glance helping, then we can find the right person.

Brian Rosmaita (rosmaita on IRC) is the Glance team PTL. I’ve copied him on 
this email to make sure he notices this thread.

Doug

> On Sep 29, 2017, at 11:24 AM, ruan...@orange.com wrote:
> 
> Not yet, we are not familiar with the Glance team.
> Ruan
> 
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com] 
> Sent: vendredi 29 septembre 2017 16:26
> To: openstack-dev
> Subject: Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't 
> send correctly authorization request to Oslo policy
> 
> Excerpts from ruan.he's message of 2017-09-29 12:56:12 +:
>> Hi folks,
>> We are testing the http_check function in Oslo policy, and we figure out a 
>> bug: https://bugs.launchpad.net/glance/+bug/1720354.
>> We believe that this is due to the Glance part since it doesn't well prepare 
>> the authorization request (body) to Oslo policy.
>> Can we put this topic for the next Oslo meeting?
>> Thanks,
>> Ruan HE
>> 
> 
> Do you have someone from the Glance team helping already?
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> _
> 
> Ce message et ses pieces jointes peuvent contenir des informations 
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu 
> ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou 
> falsifie. Merci.
> 
> This message and its attachments may contain confidential or privileged 
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete 
> this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been 
> modified, changed or falsified.
> Thank you.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Sahid Orentino Ferdjaoui
On Fri, Sep 29, 2017 at 12:26:07PM +, Bob Ball wrote:
> Hi Sahid,
> 
> > Please consider the support of MDEV for the /pci framework which provides 
> > support for vGPUs [0].
> 

> From my understanding, this MDEV implementation for vGPU would be
> entirely specific to libvirt, is that correct?

No, but it is Linux-specific, yes. Windows supports SR-IOV.

> XenServer's implementation for vGPU is based on a pooled device
> model (as described in
> http://lists.openstack.org/pipermail/openstack-dev/2017-September/122702.html)

That topic is referring to something which I guess everyone understands
now; it's basically why I have added support for MDEV in /pci: to make
it work regardless of how the virtual devices are exposed, SR-IOV or
MDEV.

> a second device emulator along-side QEMU.  There is no mdev
> integration.  I'm concerned about how much mdev-specific
> functionality would have to be faked up in the XenServer-specific
> driver for vGPU to be used in this way.

What you are referring to with your DEMU is what QEMU/KVM has with its
vfio-pci. XenServer still reads through MDEV, since the vendors provide
drivers on *Linux* using the MDEV framework.

MDEV is a kernel layer used to expose hardware; it's not
hypervisor-specific.

> I'm not familiar with mdev, but it looks Linux specific, so would not be 
> usable by Hyper-V?
> I've also not been able to find suggestions that VMWare can make use of mdev, 
> although I don't know the architecture of VMWare's integration.
> 
> The concepts of PCI and SR-IOV are, of course, generic, but I think out of 
> principle we should avoid a hypervisor-specific integration for vGPU (indeed 
> Citrix has been clear from the beginning that the vGPU integration we are 
> proposing is intentionally hypervisor agnostic)
> I also think there is value in exposing vGPU in a generic way, irrespective 
> of the underlying implementation (whether it is DEMU, mdev, SR-IOV or 
> whatever approach Hyper-V/VMWare use).
> 
> It's quite difficult for me to see how this will work for other
> hypervisors.  Do you also have a draft alternate spec where more
> details can be discussed?

I would expect that XenServer provides the MDEV UUID, then it's easy
to ask sysfs if you need to get the NUMA node of the physical device
or the mdev_type.
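For what it's worth, both lookups are a couple of sysfs reads on Linux. A minimal sketch, assuming the standard mdev sysfs layout; the sysfs_root parameter (so the function can be exercised against a fake tree) and the 'nvidia-35' type name in the docstring are illustrative, not taken from any actual driver:

```python
import os


def mdev_info(mdev_uuid, sysfs_root="/sys"):
    """Look up an mdev's type name and the NUMA node of its parent device.

    Assumes the standard Linux mdev sysfs layout: the mediated device
    appears at <sysfs_root>/bus/mdev/devices/<uuid> (a symlink into the
    parent physical device's directory), its 'mdev_type' entry is a
    symlink whose basename is the type (e.g. 'nvidia-35'), and the
    parent device directory carries a 'numa_node' file.
    """
    dev = os.path.join(sysfs_root, "bus/mdev/devices", mdev_uuid)
    # The 'mdev_type' symlink points at the supported-type directory;
    # its basename is the type name.
    mdev_type = os.path.basename(
        os.path.realpath(os.path.join(dev, "mdev_type")))
    # The NUMA node is an attribute of the parent physical device.
    parent_dir = os.path.dirname(os.path.realpath(dev))
    with open(os.path.join(parent_dir, "numa_node")) as f:
        numa_node = int(f.read().strip())
    return mdev_type, numa_node
```

On a real host this would be called with the UUID the hypervisor driver reports; note that a numa_node of -1 means the device has no locality information.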

> Bob
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [relmgt] Libraries published to pypi with YYYY.X.Z versions

2017-09-29 Thread Thierry Carrez
OK, I got individual PTL confirmation that it was OK to remove all of
those from PyPI:

mistral-extra 2015.1.0
mistral-dashboard 2015.1.*

networking-odl, 2015.1.1 and 2015.1.dev986

networking-midonet, 2014.2.2 and various 2015.1.* versions
(might have been removed by networking-midonet folks by now)

sahara-image-elements 2014.*.*
sahara-dashboard 2014.*.*

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-09-29 Thread ruan.he
Not yet, we are not familiar with the Glance team.
Ruan

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: vendredi 29 septembre 2017 16:26
To: openstack-dev
Subject: Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't 
send correctly authorization request to Oslo policy

Excerpts from ruan.he's message of 2017-09-29 12:56:12 +:
> Hi folks,
> We are testing the http_check function in Oslo policy, and we figure out a 
> bug: https://bugs.launchpad.net/glance/+bug/1720354.
> We believe that this is due to the Glance part since it doesn't well prepare 
> the authorization request (body) to Oslo policy.
> Can we put this topic for the next Oslo meeting?
> Thanks,
> Ruan HE
> 

Do you have someone from the Glance team helping already?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Jay Pipes

Hi Sahid, comments inline. :)

On 09/29/2017 04:53 AM, Sahid Orentino Ferdjaoui wrote:

On Thu, Sep 28, 2017 at 05:06:16PM -0400, Jay Pipes wrote:

On 09/28/2017 11:37 AM, Sahid Orentino Ferdjaoui wrote:

Please consider the support of MDEV for the /pci framework which
provides support for vGPUs [0].

Accordingly to the discussion [1]

With this first implementation which could be used as a skeleton for
implementing PCI Devices in Resource Tracker


I'm not entirely sure what you're referring to above as "implementing PCI
devices in Resource Tracker". Could you elaborate? The resource tracker
already embeds a PciManager object that manages PCI devices, as you know.
Perhaps you meant "implement PCI devices as Resource Providers"?


A PciManager? I know that we have a field PCI_DEVICE :) - I guess a
virt driver can return inventory with total of PCI devices. Talking
about manager, not sure.


I'm referring to this:

https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L33

The PciDevTracker class is instantiated in the resource tracker when the 
first ComputeNode object managed by the resource tracker is init'd:


https://github.com/openstack/nova/blob/master/nova/compute/resource_tracker.py#L578

On initialization, the PciDevTracker inventories the compute node's 
collection of PCI devices by grabbing a list of records from the 
pci_devices table in the cell database:


https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L69

and then comparing those DB records with information the hypervisor 
returns about PCI devices:


https://github.com/openstack/nova/blob/master/nova/pci/manager.py#L160

Each hypervisor returns something different for the list of pci devices, 
as you know. For libvirt, the call that returns PCI device information 
is here:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/host.py#L842

The results of that are jammed into a "pci_passthrough_devices" key in 
the returned result of the virt driver's get_available_resource() call. 
For libvirt, that's here:


https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L5809

It is that piece that Eric and myself have been talking about 
standardizing into a "generic device management" interface that would 
have an update_inventory() method that accepts a ProviderTree object [1]


[1] 
https://github.com/openstack/nova/blob/master/nova/compute/provider_tree.py


and would add resource providers corresponding to devices that are made 
available to guests for use.



You still have to define "traits". Basically, for physical network
devices, users want to select a device according to its physical
network, its placement on the host (NUMA), its bandwidth
capability... For GPUs it's the same story. *And I have not even
mentioned devices which support virtual functions.*


Yes, the generic device manager would be responsible for associating 
traits to the resource providers it adds to the ProviderTree provided to 
it in the update_inventory() call.
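As a rough sketch of the shape this generic device manager interface could take (the names are illustrative, not the actual Nova API, and ProviderTree is reduced here to a toy flat dict):

```python
class ProviderTree:
    """Toy stand-in for Nova's ProviderTree: just a flat dict of providers."""

    def __init__(self):
        self.providers = {}

    def new_child(self, name, parent, inventory=None, traits=None):
        # Record a child resource provider under `parent` with its
        # inventory (resource class -> counts) and associated traits.
        self.providers[name] = {
            "parent": parent,
            "inventory": inventory or {},
            "traits": set(traits or ()),
        }


class PGPUDeviceManager:
    """Hypothetical 'generic device manager' for a single physical GPU.

    update_inventory() receives the provider tree and adds a child
    provider for the pGPU, with VGPU inventory and a trait, mirroring
    the interface being discussed above.
    """

    def update_inventory(self, tree, compute_node_name):
        tree.new_child(
            "pGPU_0000:84:00.0",                 # illustrative PCI address
            parent=compute_node_name,
            inventory={"VGPU": {"total": 16}},   # 16 vGPUs on this pGPU
            traits={"CUSTOM_NVIDIA_GRID"},       # illustrative trait name
        )
```

The point is only the separation of concerns: the virt layer describes what exists (providers, inventories, traits) and the scheduler/placement side does the selection, instead of filters re-deriving device topology on every pass.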



So that is what you plan to do for this release :) - realistically, I
don't think we are close to having something ready for production.


I don't disagree with you that this is a huge amount of refactoring to 
undertake over the next couple releases. :)



Jay, I have a question: why don't you start by exposing NUMA?


I believe you're asking here why we don't start by modeling NUMA nodes 
as child resource providers of the compute node? Instead of starting by 
modeling PCI devices as child providers of the compute node? If that's 
not what you're asking, please do clarify...


We're starting with modeling PCI devices as child providers of the 
compute node because they are easier to deal with as a whole than NUMA 
nodes and we have the potential of being able to remove the 
PciPassthroughFilter from the scheduler in Queens.


I don't see us being able to remove the NUMATopologyFilter from the 
scheduler in Queens because of the complexity involved in how coupled 
the NUMA topology resource handling is to CPU pinning, huge page 
support, and IO emulation thread pinning.


Hope that answers that question; again, lemme know if that's not the 
question you were asking! :)



For the record, I have zero confidence in any existing "functional" tests
for NUMA, SR-IOV, CPU pinning, huge pages, and the like. Unfortunately,
these features often require hardware that either the upstream community
CI lacks or that depends on libraries, drivers and kernel versions that
really aren't available to non-bleeding-edge users (or users with very
deep pockets).


It's a good point. If you are not confident, don't you think it's
premature to move forward on implementing something new without having
well-trusted functional tests?


Completely agree with you. I would rather see functional integration 
tests that are proven to actually test these complex hardware devices 
*gating* Nova patches before adding any 

[openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-09-29 Thread Monty Taylor

Hey everybody!

tl;dr - If you're having issues with your jobs, check the FAQ, this 
email and followups on this thread for mentions of them. If it's an 
issue with your job and you can spot it (bad config) just submit a patch 
with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like 
to ask that you send a follow up email to this thread so that we can 
ensure we've got them all and so that others can see it too.


** Zuul v3 Migration Status **

If you haven't noticed the Zuul v3 migration - awesome, that means it's 
working perfectly for you.


If you have - sorry for the disruption. It turns out we have a REALLY 
complicated array of job content you've all created. Hopefully the pain 
of the moment will be offset by the ability for you to all take direct 
ownership of your awesome content... so bear with us, your patience is 
appreciated.


If you find yourself with some extra time on your hands while you wait 
on something, you may find it helpful to read:


  https://docs.openstack.org/infra/manual/zuulv3.html

We're adding content to it as issues arise. Unfortunately, one of the 
issues is that the infra manual publication job stopped working.


While the infra manual publication is being fixed, we're collecting FAQ 
content for it in an etherpad:


  https://etherpad.openstack.org/p/zuulv3-migration-faq

If you have a job issue, check it first to see if we've got an entry for 
it. Once manual publication is fixed, we'll update the etherpad to point 
to the FAQ section of the manual.


** Global Issues **

There are a number of outstanding issues that are being worked. As of 
right now, there are a few major/systemic ones that we're looking in to 
that are worth noting:


* Zuul Stalls

If you say to yourself "zuul doesn't seem to be doing anything, did I do 
something wrong?", we're having an issue that jeblair and Shrews are 
currently tracking down with intermittent connection issues in the 
backend plumbing.


When it happens it's an across the board issue, so fixing it is our 
number one priority.


* Incorrect node type

We've got reports of things running on trusty that should be running on 
xenial. The job definitions look correct, so this is also under 
investigation.


* Multinode jobs having POST FAILURE

There is a bug in the log collection trying to collect from all nodes 
while the old jobs were designed to only collect from the 'primary'. 
Patches are up to fix this and should be fixed soon.


* Branch Exclusions being ignored

This has been reported and its cause is currently unknown.

Thank you all again for your patience! This is a giant rollout with a 
bunch of changes in it, so we really do appreciate everyone's 
understanding as we work through it all.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Nova docker replaced by zun?

2017-09-29 Thread ADAMS, STEVEN E
Can anyone point me to some background on why nova docker was discontinued and 
how zun is the heir?
Thx,
Steve Adams
AT&T
https://github.com/openstack/nova-docker/blob/master/README.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Zuul v3 migration update

2017-09-29 Thread Doug Hellmann
Excerpts from Clark Boylan's message of 2017-09-28 18:40:41 -0700:
> On Wed, Sep 27, 2017, at 03:24 PM, Monty Taylor wrote:
> > Hey everybody,
> > 
> > We're there. It's ready.
> > 
> > We've worked through all of the migration script issues and are happy 
> > with the results. The cutover trigger is primed and ready to go.
> > 
> > But as it's 21:51 UTC / 16:52 US Central it's a short day to be 
> > available to respond to the questions folks may have... so we're going 
> > to postpone one more day.
> > 
> > Since it's all ready to go we'll be looking at flipping the switch first 
> > thing in the morning. (basically as soon as the West Coast wakes up and 
> > is ready to go)
> > 
> > The project-config repo should still be considered frozen except for 
> > migration-related changes. Hopefully we'll be able to flip the final 
> > switch early tomorrow.
> > 
> > If you haven't yet, please see [1] for information about the transition.
> > 
> > [1] https://docs.openstack.org/infra/manual/zuulv3.html
> > 
> 
> It's done! Except for all the work to make jobs run properly. Early today

Woo!

> (PDT) we converted everything over to our auto generated Zuulv3 config.
> Since then we've been working to address problems in job configs.
> 
> These problems include:
> Missing inclusion of the requirements repo for constraints in some
> jobs
> Configuration of python35 unittest jobs in some cases
> Use of sudo checking not working properly
> Multinode jobs not having multinode nodesets
> 
> Known issues we will continue to work on:
> Multinode devstack and grenade jobs are not working quite right
> Releasenote jobs not working due to use of origin/ refs in git

This problem should be fixed by the reno enhancement in
https://review.openstack.org/#/c/508324/, which I will release when I'm
given the all-clear for the release tagging jobs.

> It looks like we may not have job branch exclusions in place for all
> cases
> The zuul-cloner shim may not work in all cases. We are tracking down
> and fixing the broken corner cases.
> 
> Keep in mind that with things in flux, there is a good chance that
> changes enqueued to the gate will fail. It is a good idea to check
> recent check queue results before approving changes.
> 
> I don't think we've found any deal breaker problems at this point. I am
> sure there are many more than I have listed above. Please feel free to
> ask us about any errors. For the adventurous, fixing problems is likely
> a great way to get familiar with the new system. You'll want to start by
> fixing errors in openstack-infra/openstack-zuul-jobs/playbooks/legacy.
> Once that stabilizes the next step is writing native job configs within
> your project tree. Documentation can be found at
> https://docs.openstack.org/infra/manual/zuulv3.html. I expect we'll
> spend the next few days ironing out the transition.
> 
> Thank you for your patience,
> Clark
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 36

2017-09-29 Thread Chris Dent


Update 36, accelerating into the cycle, is thinking about specs.

# Most Important

There are several specs outstanding for the main placement-related
work that is prioritized for this cycle. And some of those specs have
spin off specs inspired by them. Since a spec sprint is planned for
early next week, I'll break with tradition and format things
differently this time to put some emphasis on specs to be clear that
we need to get those out of the way.

The three main priorities are migration uuid for allocations,
alternate hosts, and nested providers.

## Nested Resource Providers

The nested resource providers spec is at

https://review.openstack.org/#/c/505209/

It was previously accepted, but with all the recent talk about dealing
with traits on nested providers there's some discussion happening
there. There's a passel of related specs, about implementing traits in
various ways:

* https://review.openstack.org/#/c/497713/
  Add trait support in the allocation candidates API

* https://review.openstack.org/#/c/468797/
  Request traits in Nova

John has started a spec about using traits with Ironic:

https://review.openstack.org/#/c/507052/

The NRP implementation is at:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

## Migration allocations

The migration allocations spec has already merged

https://review.openstack.org/#/c/498510/

and the work for it is ongoing at:

https://review.openstack.org/#/q/topic:bp/migration-allocations

Management of those allocations currently involves some raciness,
plans to address that are captured in:

* https://review.openstack.org/#/c/499259/
  Add a spec for POST /allocations in placement

but that proposes a change in the allocation representation which
ought to first be reflected in PUT /allocations/{consumer_uuid},
that's at:

* https://review.openstack.org/#/c/508164/
  Add spec for symmetric GET and PUT of allocations

## Alternate Hosts

We want to be able to do retries within cells, so we need some
alternate hosts when returning a destination, the spec for that
is:

https://review.openstack.org/#/c/504275/

We want that data to be formatted in a way that causes neither fear
nor despair, so a spec for "Selection" objects exists:

https://review.openstack.org/#/c/498830/

Implementation ongoing at:


https://review.openstack.org/#/q/topic:bp/placement-allocation-requests+status:open

## Other Specs

* https://review.openstack.org/#/c/496853/
  Add a spec for minimal cache headers in placement

* https://review.openstack.org/#/c/504540/
  Spec for limiting GET /allocation_candidates
  (This one needs some discussion about what the priorities are, lots
  of good but different ideas on the spec)

* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider

# End

Next time we'll go back to the usual format.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] Zuul v3 and you: FAQs and how to troubleshoot your new (and legacy) jobs

2017-09-29 Thread David Moreau Simard
Jim Blair rightfully pointed out that fleshed out documentation of the
Zuul v3 documentation can be found in the infra-manual docs [1]
which everyone should definitely read.

The etherpad is not meant as a replacement for the documentation but
rather as a collaborative effort around the questions which
are most often asked or are not covered (yet) in the infra-manual.

[1]: https://docs.openstack.org/infra/manual/zuulv3.html

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]


On Fri, Sep 29, 2017 at 10:16 AM, David Moreau Simard  wrote:
> Hi !
>
> So Zuul v3, years in the making, is now live.
>
> Congrats to everyone involved in making it happen and thanks everyone
> else for their patience, even now as we work towards fully migrating and
> fixing problems.
>
> In the interest of keeping this as short as possible, Zuul jobs are now driven
> by Ansible playbooks and roles.
>
> We realize that not everyone is familiar with Ansible, much less Zuul v3,
> so I've started an etherpad with some FAQs and tips on how to get started.
>
> You'll find the etherpad here [1].
>
> [1]: https://etherpad.openstack.org/p/zuulv3-migration-faq
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-09-29 Thread Doug Hellmann
Excerpts from ruan.he's message of 2017-09-29 12:56:12 +:
> Hi folks,
> We are testing the http_check function in Oslo policy, and we figure out a 
> bug: https://bugs.launchpad.net/glance/+bug/1720354.
> We believe that this is due to the Glance part since it doesn't well prepare 
> the authorization request (body) to Oslo policy.
> Can we put this topic for the next Oslo meeting?
> Thanks,
> Ruan HE
> 

Do you have someone from the Glance team helping already?

Doug
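
For anyone debugging this outside a full deployment: as I understand oslo.policy's HttpCheck (worth verifying against the oslo.policy source), the check POSTs a form-encoded body with 'target' and 'credentials' fields, each a JSON string, and the server authorizes only by replying with the literal body 'True'. The bug report is essentially that Glance populates 'target' incorrectly. A sketch of the expected body, as an assumption rather than an authoritative spec:

```python
import json
from urllib.parse import urlencode


def build_http_check_body(target, credentials):
    """Build the POST body oslo.policy's HttpCheck sends (as I read it).

    The body is form-encoded with two fields, 'target' and 'credentials',
    each a JSON-serialized dict. The external policy server is expected
    to reply with the literal body 'True' to authorize the request.
    """
    return urlencode({
        "target": json.dumps(target),
        "credentials": json.dumps(credentials),
    })
```

If Glance passes a malformed or empty target dict, the external server receives it verbatim, which would explain the symptom described in the bug.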

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Zuul v3 and you: FAQs and how to troubleshoot your new (and legacy) jobs

2017-09-29 Thread David Moreau Simard
Hi !

So Zuul v3, years in the making, is now live.

Congrats to everyone involved in making it happen and thanks everyone
else for their patience, even now as we work towards fully migrating and
fixing problems.

In the interest of keeping this as short as possible, Zuul jobs are now driven
by Ansible playbooks and roles.

We realize that not everyone is familiar with Ansible, much less Zuul v3,
so I've started an etherpad with some FAQs and tips on how to get started.

You'll find the etherpad here [1].

[1]: https://etherpad.openstack.org/p/zuulv3-migration-faq

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, September 29th

2017-09-29 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something) that is
not on the tracker, feel free to add to it !


== Recently-approved changes ==

NB: Some changes (marked with ** below) have not formally merged yet at
the time of this email due to some jobs stuck in the Zuulv3 transition,
but they have been approved nevertheless.

* New project team: Blazar (Resource Reservation Service) [1]
* New project team: Cyborg (Accelerator Lifecycle Management) [2] **
* Put searchlight into maintenance mode [3] **
* Adding "Infra sysadmins" to help-wanted list [4]
* Update wording in Glance "help-wanted" item [5]
* Updates to Queens goals status

[1] https://review.openstack.org/#/c/482860/
[2] https://review.openstack.org/#/c/504940/
[3] https://review.openstack.org/#/c/502849/
[4] https://review.openstack.org/#/c/502259/
[5] https://review.openstack.org/#/c/503168/

Lots of change this week, with the adoption of two new project teams to
the OpenStack family: Blazar (Resource Reservation Service), and Cyborg
(Accelerator Lifecycle Management).

Searchlight was placed in maintenance-mode: lower activity is expected
as it is essentially feature-complete and functional, just needs more
projects to implement plugins.

Our infrastructure team was pretty busy this week with the migration to
Zuul v3, but they are still struggling with resources and timezone
coverage. To reflect the constant need for more resources there, it was
added to the "Top 5" help-wanted list:

https://governance.openstack.org/tc/reference/top-5-help-wanted.html


== New project teams ==

We still have 3 project teams applying for inclusion.

Stackube (https://review.openstack.org/#/c/462460/) voting is in
progress. They provided extra information in the commit message, which
pointed to a few necessary actions (like holding their meeting in
publicly-logged IRC channels, or more active cooperation with the Kuryr
folks).

Glare (https://review.openstack.org/479285) application is still under
heavy discussion. While Glare has pledged a narrower scope and
coexistence with Glance lately, most TC members would like to let some
time pass before approving the team.

Masakari (https://review.openstack.org/#/c/500118/) voting is in
progress, no objection recorded so far.


== An update on SIGs ==

SIGs are a new form of workgroups, more explicitly operating beyond
governance boundaries and open to anyone interested in improving the
state of OpenStack on a specific topic.

We now have 3 active SIGs (Meta, API and Scientific), and 4 are being
formed as we speak (Ansible, K8s, Public Cloud, Self-healing).

We'll continue on-boarding new SIGs in the coming weeks, but the Meta
SIG will also discuss the next steps: active promotion of existing SIGs
as a way to contribute to OpenStack, setting up a governance website
around SIGs to replace the current wiki page, setting clearer
expectations in terms of regular status reports, etc. Join us on the
openstack-sigs ML if interested:

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs


== Tag changes ==

smcginnis's proposal to remove assert:supports-zero-impact-upgrade has
not met any serious objection yet. Please review it at:

https://review.openstack.org/506241

The removal of assert:supports-accessible-upgrade is under discussion
though, as some projects could just claim it now. So if this tag
provides a useful piece of information (and projects apply to it), it
should probably be kept around:

https://review.openstack.org/506263

A number of tag applications were proposed and could use some review
attention:

* supports-rolling-upgrade for Heat: https://review.openstack.org/503145
* api-interop assert for Nova: https://review.openstack.org/506255
* api-interop assert for Ironic: https://review.openstack.org/482759
* Remove stable:follows-policy for TripleO:
https://review.openstack.org/507924


== Voting in progress ==

A new patchset was posted for adding Designate to the top-5 help wanted
list. It's now ready for vote pile-up:

https://review.openstack.org/498719

fungi proposed to make Infra contributors #2 on the list; please vote here:

https://review.openstack.org/#/c/507637/


== TC member actions for the coming week(s) ==

Monty should answer the feedback on the supported database version
resolution (https://review.openstack.org/493932) so that we can make
progress there.


== Need for a TC meeting next Tuesday ==

The Glare application is the main standing disagreement at this point,
but the discussion still seems to be making progress on the review and
during office hours. I propose we give it another week before
calling a more formal meeting to discuss this. You can find TC office
hours at:

https://governance.openstack.org/tc/#office-hours

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [ocata] [nova-api] Nova api stopped working after a yum update

2017-09-29 Thread Avery Rozar
So I was able to get things working. I added the following config options
to all configs requiring [keystone_authtoken]:

I added '/v3' to:

auth_uri = https://host.domain.com:5000
auth_url = https://host.domain.com:35357

and then had to add:

cafile = /etc/pki/tls/certs/gd_bundle-g2-g1.crt

After these changes it's working again.
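
Pulled together, the resulting section looks roughly like this in each
service's config (hostnames and the CA bundle path are specific to this
deployment; adjust to match yours):

```ini
[keystone_authtoken]
# Keystone v3 endpoints -- the '/v3' suffix was the missing piece
auth_uri = https://host.domain.com:5000/v3
auth_url = https://host.domain.com:35357/v3
# CA bundle so the middleware can verify the TLS certificates
cafile = /etc/pki/tls/certs/gd_bundle-g2-g1.crt
```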

Thanks,
Avery

On Fri, Sep 29, 2017 at 9:49 AM, Ben Nemec  wrote:

> You may want to ask this on rdo-list, assuming RDO is where you got your
> packages: https://www.redhat.com/mailman/listinfo/rdo-list
>
> Generally speaking, a minor update like that should not bring in any new
> required configuration options.
>
> On 09/27/2017 04:51 AM, Avery Rozar wrote:
>
>> Hello all,
>> I ran "yum update" on my OpenStack controller and now any request to the
>> nova.api service (port 8774) results in an error in
>> "/var/log/nova/nova-api.log".
>>
>> A simple get request,
>>
>> GET /v2.1/os-hypervisors/detail HTTP/1.1
>> Host: host.domain.com :8774
>>
>> User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:54.0)
>> Gecko/20100101 Firefox/54.0
>> X-Auth-Token: 
>> Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
>> Accept-Language: en-US,en;q=0.5
>> Content-Type: application/json
>> Content-Length: 0
>> DNT: 1
>> Connection: close
>> Upgrade-Insecure-Requests: 1
>>
>>
>> Results in an error logged to "/var/log/nova/nova-api.log":
>>
>> WARNING keystoneauth.identity.generic.base [-] Discovering versions from
>> the identity service failed when creating the password plugin. Attempting
>> to determine version from URL.
>> ERROR nova.api.openstack [-] Caught error: Could not determine a suitable
>> URL for the plugin
>> ERROR nova.api.openstack Traceback (most recent call last):
>> ERROR nova.api.openstack   File "/usr/lib/python2.7/site-packa
>> ges/nova/api/openstack/__init__.py", line 88, in __call__
>> ERROR nova.api.openstack return req.get_response(self.application)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/request.py",
>> line 1299, in send
>> ERROR nova.api.openstack application, catch_exc_info=False)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/request.py",
>> line 1263, in call_application
>> ERROR nova.api.openstack app_iter = application(self.environ,
>> start_response)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/dec.py",
>> line 144, in __call__
>> ERROR nova.api.openstack return resp(environ, start_response)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/dec.py",
>> line 130, in __call__
>> ERROR nova.api.openstack resp = self.call_func(req, *args,
>> **self.kwargs)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/dec.py",
>> line 195, in call_func
>> ERROR nova.api.openstack return self.func(req, *args, **kwargs)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/osprofiler/web.py",
>> line 108, in __call__
>> ERROR nova.api.openstack return request.get_response(self.appl
>> ication)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/request.py",
>> line 1299, in send
>> ERROR nova.api.openstack application, catch_exc_info=False)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/request.py",
>> line 1263, in call_application
>> ERROR nova.api.openstack app_iter = application(self.environ,
>> start_response)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/dec.py",
>> line 130, in __call__
>> ERROR nova.api.openstack resp = self.call_func(req, *args,
>> **self.kwargs)
>> ERROR nova.api.openstack   File 
>> "/usr/lib/python2.7/site-packages/webob/dec.py",
>> line 195, in call_func
>> ERROR nova.api.openstack return self.func(req, *args, **kwargs)
>> ERROR nova.api.openstack   File "/usr/lib/python2.7/site-packa
>> ges/keystonemiddleware/auth_token/__init__.py", line 332, in __call__
>> ERROR nova.api.openstack response = self.process_request(req)
>> ERROR nova.api.openstack   File "/usr/lib/python2.7/site-packa
>> ges/keystonemiddleware/auth_token/__init__.py", line 623, in
>> process_request
>> ERROR nova.api.openstack resp = super(AuthProtocol,
>> self).process_request(request)
>> ERROR nova.api.openstack   File "/usr/lib/python2.7/site-packa
>> ges/keystonemiddleware/auth_token/__init__.py", line 405, in
>> process_request
>> ERROR nova.api.openstack allow_expired=allow_expired)
>> ERROR nova.api.openstack   File "/usr/lib/python2.7/site-packa
>> ges/keystonemiddleware/auth_token/__init__.py", line 435, in
>> _do_fetch_token
>> ERROR nova.api.openstack data = self.fetch_token(token, **kwargs)
>> ERROR nova.api.openstack   File "/usr/lib/python2.7/site-packa
>> ges/keystonemiddleware/auth_token/__init__.py", line 762, in fetch_token
>> ERROR 

Re: [openstack-dev] [Oslo][oslo.policy] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-09-29 Thread Lance Bragstad
++ it'd be great to come up with some sort of pattern here that other
projects can follow if they need to implement the same thing. Some sort
of consistency would be great when/if we start seeing more http_check
adoption.


On 09/29/2017 07:56 AM, ruan...@orange.com wrote:
>
> Hi folks,
>
> We are testing the http_check function in Oslo policy, and we figure
> out a bug: https://bugs.launchpad.net/glance/+bug/1720354.
>
> We believe that this is due to the Glance part since it doesn't well
> prepare the authorization request (body) to Oslo policy.
>
> Can we put this topic for the next Oslo meeting?
>
> Thanks,
>
> Ruan HE
>
>  
>
>  
>
> _
>
> Ce message et ses pieces jointes peuvent contenir des informations 
> confidentielles ou privilegiees et ne doivent donc
> pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu 
> ce message par erreur, veuillez le signaler
> a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
> electroniques etant susceptibles d'alteration,
> Orange decline toute responsabilite si ce message a ete altere, deforme ou 
> falsifie. Merci.
>
> This message and its attachments may contain confidential or privileged 
> information that may be protected by law;
> they should not be distributed, used or copied without authorisation.
> If you have received this email in error, please notify the sender and delete 
> this message and its attachments.
> As emails may be altered, Orange is not liable for messages that have been 
> modified, changed or falsified.
> Thank you.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [ocata] [nova-api] Nova api stopped working after a yum update

2017-09-29 Thread Ben Nemec
You may want to ask this on rdo-list, assuming RDO is where you got your 
packages: https://www.redhat.com/mailman/listinfo/rdo-list


Generally speaking, a minor update like that should not bring in any new 
required configuration options.


On 09/27/2017 04:51 AM, Avery Rozar wrote:

Hello all,
I ran "yum update" on my OpenStack controller and now any request to the 
nova.api service (port 8774) results in an error in 
"/var/log/nova/nova-api.log".


A simple get request,

GET /v2.1/os-hypervisors/detail HTTP/1.1
Host: host.domain.com :8774
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:54.0) 
Gecko/20100101 Firefox/54.0

X-Auth-Token: 
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Content-Type: application/json
Content-Length: 0
DNT: 1
Connection: close
Upgrade-Insecure-Requests: 1


Results in an error logged to "/var/log/nova/nova-api.log":

WARNING keystoneauth.identity.generic.base [-] Discovering versions from 
the identity service failed when creating the password plugin. 
Attempting to determine version from URL.
ERROR nova.api.openstack [-] Caught error: Could not determine a 
suitable URL for the plugin

ERROR nova.api.openstack Traceback (most recent call last):
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/__init__.py", line 
88, in __call__

ERROR nova.api.openstack return req.get_response(self.application)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send

ERROR nova.api.openstack application, catch_exc_info=False)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1263, in 
call_application
ERROR nova.api.openstack app_iter = application(self.environ, 
start_response)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 144, in __call__

ERROR nova.api.openstack return resp(environ, start_response)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
ERROR nova.api.openstack resp = self.call_func(req, *args, 
**self.kwargs)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func

ERROR nova.api.openstack return self.func(req, *args, **kwargs)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/osprofiler/web.py", line 108, in __call__

ERROR nova.api.openstack return request.get_response(self.application)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1299, in send

ERROR nova.api.openstack application, catch_exc_info=False)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/request.py", line 1263, in 
call_application
ERROR nova.api.openstack app_iter = application(self.environ, 
start_response)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 130, in __call__
ERROR nova.api.openstack resp = self.call_func(req, *args, 
**self.kwargs)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/webob/dec.py", line 195, in call_func

ERROR nova.api.openstack return self.func(req, *args, **kwargs)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 332, in __call__

ERROR nova.api.openstack response = self.process_request(req)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 623, in process_request
ERROR nova.api.openstack resp = super(AuthProtocol, 
self).process_request(request)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 405, in process_request

ERROR nova.api.openstack allow_expired=allow_expired)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 435, in _do_fetch_token

ERROR nova.api.openstack data = self.fetch_token(token, **kwargs)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/__init__.py", 
line 762, in fetch_token

ERROR nova.api.openstack allow_expired=allow_expired)
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/_identity.py", 
line 217, in verify_token

ERROR nova.api.openstack auth_ref = self._request_strategy.verify_token(
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/_identity.py", 
line 168, in _request_strategy

ERROR nova.api.openstack strategy_class = self._get_strategy_class()
ERROR nova.api.openstack   File 
"/usr/lib/python2.7/site-packages/keystonemiddleware/auth_token/_identity.py", 
line 190, in _get_strategy_class
ERROR nova.api.openstack if 

Re: [openstack-dev] [keystone] [keystoneauth] Debug data isn't sanitized - bug 1638978

2017-09-29 Thread Lance Bragstad


On 09/27/2017 06:38 AM, Bhor, Dinesh wrote:
>
> Hi Team,
>
>  
>
> There are four solutions to fix the below bug:
>
> https://bugs.launchpad.net/keystoneauth/+bug/1638978
>
>  
>
> 1) Carry a copy of mask_password() method to keystoneauth from
> oslo_utils [1]:
>
> *Pros:*
>
> A. keystoneauth will use already tested and used version of mask_password.
>
>    
>
> *Cons:*
>
> A. keystoneauth will have to keep the version of mask_password()
> method sync with oslo_utils version.
>
>  If there are any new "_SANITIZE_KEYS" added to oslo_utils
> mask_password then those should be added in keystoneauth mask_password
> also.
>
> B. Copying the "mask_password" will also require to copy its
> supporting code [2] which is huge.
>
>  
>

I'm having flashbacks of the oslo-incubator days...

>  
>
> 2) Use Oslo.utils mask_password() method in keystoneauth:
>
> *Pros:*
>
> A) No synching issue as described in solution #1. keystoneauth will
> directly use mask_password() method from Oslo.utils.
>
>    
>
> *Cons:*
>
> A) You will need oslo.utils library to use keystoneauth.
>
> Objection by community:
>
> - keystoneauth community don't want any dependency on any of OpenStack
> common oslo libraries.
>
> Please refer to the comment from Morgan:
> https://bugs.launchpad.net/keystoneauth/+bug/1700751/comments/3
>
>  
>
>  
>
> 3) Add a custom logging filter in oslo logger
>
> Please refer to POC sample here:
> http://paste.openstack.org/show/617093/
> 
>
> OpenStack core services using any OpenStack individual python-*client
> (for e.g python-cinderclient used in nova service) will need to pass
> oslo_logger object during it’s
>
> initialization which will do the work of masking sensitive information.
>
> Note: In nova, oslo.logger object is not passed during cinder client
> initialization
> (https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L135-L141),
>
>
> In this case, sensitive information will not be masked as it isn’t
> using Oslo.logger.
>
>    
>
> *Pros:*
>
> A) No changes required in oslo.logger or any OpenStack services if
> mask_password method is modified in oslo.utils.
>
>    
>
> *Cons:*
>
> A) Every log message will be scanned for certain password fields
> degrading the performance.
>
> B) If consumer of keystoneauth doesn’t use oslo_logger, then the
> sensitive information will not be masked.
>
> C) Will need to make changes wherever applicable to the OpenStack core
> services to pass oslo.logger object during python-novaclient
> initialization.
>
>  
>
>  
>
> 4) Add mask_password formatter parameter in oslo_log:
>
> Add "mask_password" formatter to sanitize sensitive data and pass it
> as a keyword argument to the log statement.
>
> If the mask_password is set, then only the sensitive information will
> be masked at the time of logging.
>
> The log statement will look like below:
>
>  
>
> logger.debug("'adminPass': 'Now you see me'", mask_password=True)
>
>  
>
> Please refer to the POC code here:
> http://paste.openstack.org/show/618019/
> 
>
>    
>
> *Pros:  *
>
> A) No changes required in oslo.logger or any OpenStack services if
> mask_password method is modified in oslo.utils.
>
>  
>
> *Cons:*
>
> A) If consumer of keystoneauth doesn’t use oslo_logger, then the
> sensitive information will not be masked.
>
> B) If you forget to pass mask_password=True for logging messages where
> sensitive information is present, then those fields won't be masked
> with ***.
>
>  But this can be clearly documented as suggested by Morgan and Lance.
>
> C) This solution requires you to add the below check in keystoneauth to
> avoid an exception being raised in case the logger is a pure Python
> Logger, as it
>
>   doesn’t accept mask_password keyword argument.
>
>  
>
> if isinstance(logger, logging.Logger):
>
>     logger.debug(' '.join(string_parts))
>
> else:
>
>     logger.debug(' '.join(string_parts), mask_password=True)
>
>    
>
> This check assumes that the logger instance will be oslo_log only if
> it is not of python default logging.Logger.
>
> Keystoneauth community is not ready to have any dependency on any
> oslo-* lib, so it seems this solution has low acceptance chances.
>

Options 2, 3, and 4 all require dependencies on oslo in order to work,
which is a non-starter according to Morgan's comment in the bug [0].
Options 3 and 4 will require a refactor to get keystoneauth to use
oslo.log (today it uses the logging module from Python's standard library).

[0] https://bugs.launchpad.net/keystoneauth/+bug/1700751/comments/3
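
For a sense of scale on option 1, the heart of what would be copied is a
set of regular-expression substitutions. Below is a heavily simplified,
hypothetical sketch, nothing close to the full oslo_utils.strutils
implementation, just to illustrate the sync burden being discussed:

```python
import re

# Heavily simplified, hypothetical sketch of an oslo.utils-style
# mask_password().  The real oslo_utils.strutils version handles many
# more key spellings, quoting styles, and dict/XML forms.
_SANITIZE_KEYS = ['adminPass', 'admin_pass', 'password', 'auth_token']

# One pattern per key, matching "'key': 'value'" and "key = 'value'" forms.
_PATTERNS = [
    re.compile(r"(%s['\"]?\s*[:=]\s*['\"])[^'\"]*(['\"])" % key,
               re.IGNORECASE)
    for key in _SANITIZE_KEYS
]


def mask_password(message, secret='***'):
    """Replace password values in a log message with a placeholder."""
    for pattern in _PATTERNS:
        message = pattern.sub(r'\g<1>%s\g<2>' % secret, message)
    return message


print(mask_password("'adminPass': 'Now you see me'"))  # 'adminPass': '***'
```

Every new sensitive key added to the oslo.utils list would have to be
mirrored here by hand, which is exactly the con listed above.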

>  
>
> Please let me know your opinions about the above four approaches.
> Which one should we adopt?
>
>  
>
> [1]
> https://github.com/openstack/oslo.utils/blob/master/oslo_utils/strutils.py#L248-L313
>
> [2]
> 

Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-29 Thread Ben Nemec



On 09/26/2017 09:13 PM, Rochelle Grober wrote:

Clint Byrum wrote:

Excerpts from Jonathan Proulx's message of 2017-09-26 16:01:26 -0400:

On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:

:OpenStack is big. Big enough that a user will likely be fine with
learning :a new set of tools to manage it.

New users in the startup sense of new, probably.

People with entrenched environments, I doubt it.



Sorry no, I mean everyone who doesn't have an OpenStack already.

It's nice and all, if you're a Puppet shop, to get to use the puppet modules.
But it doesn't bring you any closer to the developers as a group. Maybe a few
use Puppet, but most don't. And that means you are going to feel like
OpenStack gets thrown over the wall at you once every
6 months.


But OpenStack is big. Big enough I think all the major config systems
are fairly well represented, so whether I'm right or wrong this
doesn't seem like an issue to me :)



They are. We've worked through it. But that doesn't mean potential users
are getting our best solution or feeling well integrated into the community.


Having common targets (constellations, reference architectures,
whatever) so all the config systems build the same things (or a subset
or superset of the same things) seems like it would have benefits all
around.



It will. It's a good first step. But I'd like to see a world where developers 
are
all well versed in how operators actually use OpenStack.


Hear, hear!  +1000  Take a developer to work during peak operations.


Or anytime really.  One of the best experiences I had was going on-site 
to some of our early TripleO users and helping them through the install 
process.  It was eye-opening to see someone who wasn't already immersed 
in the project try to use it.  In a relatively short time they pointed 
out a number of easy opportunities for simplification (why is this two 
steps instead of one?  Umm, no good reason actually.).


I've pushed for us to do more of that sort of thing, but unfortunately 
it's a hard sell to take an already overworked developer away from their 
day job for a week to focus on one specific user. :-/




For Walmart, that would be Black Friday/Cyber Monday.
For schools, usually a few days into the new session.
For others, each has a time when things break more.  Having a developer 
experience what operators do to predict/avoid/recover/work around the normal 
state of operations would help each to understand the macro work flows.  Those 
are important, too.  Full stack includes Ops.

< Snark off />

--Rocky





Re: [openstack-dev] [designate] multi domain usage for handlers

2017-09-29 Thread Graham Hayes


On 29/09/17 11:39, Kim-Norman Sahm wrote:
> Hi Graham,
> 
> thanks for your answer.
> I want try to extent the nova_fixed handler for our requirements.
> 
> how can i enable a new handler in designate?
> I've defined a new handler section in designate.conf and copied my
> handlername.py file to
> /usr/lib/python2.7/dist-packages/designate/notification_handler but
> designate sink should not use it:
> 
> 2017-09-29 12:19:26.180 12177 WARNING designate.sink.service [-] No
> designate-sink handlers enabled or loaded
> 
> regards
> Kim
> 

If you look at this folder:
https://github.com/openstack/designate/tree/master/contrib/designate-ext-samplehandler


it has an example handler.

You would need to copy that folder, and then write your custom handler.

Then update
https://github.com/openstack/designate/blob/master/contrib/designate-ext-samplehandler/setup.cfg
with the new details, and `pip install` the handler.

Then you can set the enabled handlers to whatever names you registered here:

https://github.com/openstack/designate/blob/master/contrib/designate-ext-samplehandler/setup.cfg#L28
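
To make the wiring concrete, the two pieces that tie a custom handler
into designate-sink are the entry point in the handler package's
setup.cfg and the sink section of designate.conf. The names below are
illustrative (a hypothetical `mycompany_fixed` handler), not anything
shipped with designate:

```ini
# setup.cfg of your handler package (hypothetical names)
[entry_points]
designate.notification.handler =
    mycompany_fixed = mycompany_handler.handler:MyCompanyFixedHandler

# designate.conf on the host running designate-sink
[service:sink]
enabled_notification_handlers = mycompany_fixed

[handler:mycompany_fixed]
# whatever options your handler reads, e.g. the zone to populate
# zone_id = <zone uuid>
```

If the entry point isn't installed (pip install) or the name in
enabled_notification_handlers doesn't match, sink logs the "No
designate-sink handlers enabled or loaded" warning seen above.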

Thanks,

Graham
> 
> 
> Kim-Norman Sahm
> Cloud & Infrastructure(OCI)
> 
> noris network AG
> Thomas-Mann-Straße 16-20
> 90471 Nürnberg
> Deutschland
> 
> Tel +49 911 9352 1433
> Fax +49 911 9352 100
> 
> kim-norman.s...@noris.de
> 
> https://www.noris.de - Mehr Leistung als Standard
> Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
> Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689
> 
>  
> 
>  
> 
>  
> 
>  
> 
> Am 28.09.2017 um 18:54 schrieb Graham Hayes:
>>
>> On 28/09/17 17:06, Kim-Norman Sahm wrote:
>>> Hi,
>>>
>>> i'm currently testing designate and i have a question about the
>>> architecture.
>>> We're using openstack newton with keystone v3 and thus the keystone
>>> domain/project structure.
>>>
>>> I've tried the global nova_fixed and neutron_floating_ip handlers but
>>> all dns records (for each domains/projects) are stored in the same dns
>>> domain (instance1.novafixed.example.com and
>>> anotherinstance.neutronfloatingip.example.com).
>>> is is possible to define a seperate DNS domain for each keystone
>>> domain/project and auto-assign the instances to this domain?
>>> example: openstack domain "customerA.com" with projects "prod" and
>>> "dev". instance1 starts in project "dev" and the dns record is
>>> instance1.dev.customerA.com
>>>
>>> Best regards
>>> Kim
>> Hi Kim,
>>
>> Unfortunately, with the default handlers, there is no way of assigning
>> them to different projects.
>>
>> We also mark any recordsets created by designate-sink as "managed" -
>> this means that normal users cannot modify them, an admin has to update
>> them, with the `--all-projects` and `--edit-managed` flags.
>>
>> The modules provided are only designed to be examples. We expected any
>> users would end up writing their own handlers [0].
>>
>> You should also look at the neutron / designate integration [1] as it
>> may do what you need.
>>
>> Thanks,
>>
>> Graham
>>
>> 0 
>> -https://github.com/openstack/designate/tree/master/contrib/designate-ext-samplehandler
>>
>> 1 
>> -https://docs.openstack.org/ocata/networking-guide/config-dns-int.html#integration-with-an-external-dns-service
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 




Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Dan Smith

The concepts of PCI and SR-IOV are, of course, generic


They are, although the PowerVM guys have already pointed out that they
don't even refer to virtual devices by PCI address and thus anything 
based on that subsystem isn't going to help them.



but I think out of principle we should avoid a hypervisor-specific
integration for vGPU (indeed Citrix has been clear from the beginning
that the vGPU integration we are proposing is intentionally
hypervisor agnostic) I also think there is value in exposing vGPU in
a generic way, irrespective of the underlying implementation (whether
it is DEMU, mdev, SR-IOV or whatever approach Hyper-V/VMWare use).


I very much agree, of course.

--Dan



Re: [openstack-dev] [all][infra] Zuul v3 migration update

2017-09-29 Thread Luke Hinds
On Fri, Sep 29, 2017 at 2:40 AM, Clark Boylan  wrote:

> On Wed, Sep 27, 2017, at 03:24 PM, Monty Taylor wrote:
> > Hey everybody,
> >
> > We're there. It's ready.
> >
> > We've worked through all of the migration script issues and are happy
> > with the results. The cutover trigger is primed and ready to go.
> >
> > But as it's 21:51 UTC / 16:52 US Central it's a short day to be
> > available to respond to the questions folks may have... so we're going
> > to postpone one more day.
> >
> > Since it's all ready to go we'll be looking at flipping the switch first
> > thing in the morning. (basically as soon as the West Coast wakes up and
> > is ready to go)
> >
> > The project-config repo should still be considered frozen except for
> > migration-related changes. Hopefully we'll be able to flip the final
> > switch early tomorrow.
> >
> > If you haven't yet, please see [1] for information about the transition.
> >
> > [1] https://docs.openstack.org/infra/manual/zuulv3.html
> >
>
> It's done! Except for all the work to make jobs run properly. Early today
> (PDT) we converted everything over to our auto generated Zuulv3 config.
> Since then we've been working to address problems in job configs.
>
> These problems include:
> Missing inclusion of the requirements repo for constraints in some
> jobs
> Configuration of python35 unittest jobs in some cases
> Use of sudo checking not working properly
> Multinode jobs not having multinode nodesets
>
> Known issues we will continue to work on:
> Multinode devstack and grenade jobs are not working quite right
> Releasenote jobs not working due to use of origin/ refs in git
> It looks like we may not have job branch exclusions in place for all
> cases
> The zuul-cloner shim may not work in all cases. We are tracking down
> and fixing the broken corner cases.
>
> Keep in mind that with things in flux, there is a good chance that
> changes enqueued to the gate will fail. It is a good idea to check
> recent check queue results before approving changes.
>
> I don't think we've found any deal breaker problems at this point. I am
> sure there are many more than I have listed above. Please feel free to
> ask us about any errors. For the adventurous, fixing problems is likely
> a great way to get familiar with the new system. You'll want to start by
> fixing errors in openstack-infra/openstack-zuul-jobs/playbooks/legacy.
> Once that stabilizes the next step is writing native job configs within
> your project tree. Documentation can be found at
> https://docs.openstack.org/infra/manual/zuulv3.html. I expect we'll
> spend the next few days ironing out the transition.
>
> Thank you for your patience,
> Clark
>
>
Hi,

We have a couple of failures in Bandit's gate that I wanted to flag up (fungi
believes it might be from missing openstack/requirements as a required repo
in the job definition):

Glance Store:

http://logs.openstack.org/44/504544/6/check/legacy-bandit-integration-glance_store/900f88a/job-output.txt.gz


Magnum client:

http://logs.openstack.org/44/504544/6/check/legacy-bandit-integration-python-magnumclient/ba50a54/job-output.txt.gz

Thanks, and do let me know if I need to do anything on Bandit's side.

Luke
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Oslo][oslo.policy] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-09-29 Thread ruan.he
Hi folks,
We are testing the http_check function in Oslo policy, and we found a bug:
https://bugs.launchpad.net/glance/+bug/1720354.
We believe that this is due to the Glance side, since it doesn't correctly prepare
the authorization request (body) sent to Oslo policy.
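For readers unfamiliar with it, `http_check` lets a policy rule delegate the authorization decision to an external server; a minimal policy-file sketch (the rule name and URL here are illustrative, not from the bug report):

```json
{
    "get_image": "http://127.0.0.1:8080/glance-authz"
}
```

With a rule like this, oslo.policy POSTs the JSON-encoded `target` and `credentials` dictionaries to the URL and authorizes only when the server replies with a body of "True" — so a malformed target dict on the Glance side would break exactly this exchange.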
Can we put this topic for the next Oslo meeting?
Thanks,
Ruan HE



_

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Queens spec review sprint next week

2017-09-29 Thread Jay Pipes

Either one works for me.

On 09/28/2017 07:45 PM, Matt Riedemann wrote:

Let's do a Queens spec review sprint.

What day works for people that review specs?

Monday came up in the team meeting today, but Tuesday could be good too 
since Monday's are generally evil.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][swg] Status of the Stewardship WG

2017-09-29 Thread Thierry Carrez
Hi everyone,

In Denver we had a room for the TC and the Stewardship working group,
where we discussed the current state of the Stewardship WG. I took the
action to write a follow-up thread to communicate the group status.

The Stewardship working group was created after the first session of
leadership training that TC/UC/Board and other community members were
invited to participate in in 2016. The idea was to follow-up on what we
learned at ZingTrain and push adoption of the tools we discovered there.
While we did (and do continue to) apply what we learned there, the
activity of the workgroup mostly died when we decided to experiment
with getting rid of weekly meetings (for greater inclusion) and Colette could
no longer dedicate time leading the workgroup. It's proven possible to
have workgroups without regular meetings, but you need to replace them
with continuous status updates, which are difficult to produce without
team leaders.

Currently the workgroup is dormant, until someone steps up to lead it
again. If we resurrected it, we'd likely use the new SIG format, as
there is no reason that the Stewardship activities should be solely
under the Technical Committee governance. Join us on #openstack-swg if
interested!

Regards,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Bob Ball
Hi Sahid,

> Please consider the support of MDEV for the /pci framework which provides 
> support for vGPUs [0].

From my understanding, this MDEV implementation for vGPU would be entirely 
specific to libvirt, is that correct?

XenServer's implementation for vGPU is based on a pooled device model (as 
described in 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/122702.html) 
and directly interfaces with the card using DEMU ("Discrete EMU") as a second 
device emulator along-side QEMU.  There is no mdev integration.  I'm concerned 
about how much mdev-specific functionality would have to be faked up in the 
XenServer-specific driver for vGPU to be used in this way.

I'm not familiar with mdev, but it looks Linux-specific, so it would not be usable
by Hyper-V?
I've also not been able to find suggestions that VMWare can make use of mdev, 
although I don't know the architecture of VMWare's integration.

The concepts of PCI and SR-IOV are, of course, generic, but I think out of 
principle we should avoid a hypervisor-specific integration for vGPU (indeed 
Citrix has been clear from the beginning that the vGPU integration we are 
proposing is intentionally hypervisor agnostic).
I also think there is value in exposing vGPU in a generic way, irrespective of 
the underlying implementation (whether it is DEMU, mdev, SR-IOV or whatever 
approach Hyper-V/VMWare use).

It's quite difficult for me to see how this will work for other hypervisors.  
Do you also have a draft alternate spec where more details can be discussed?

Bob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-29 Thread Attila Fazekas
I have overlay2 and super fast disk I/O (memory cheat + SSD),
just the CPU freq is not high. The CPU is a Broadwell
and actually it has a lot more cores (E5-2630V4). Even a 5-year-old gamer CPU
can be 2 times
faster on a single core, but cannot compete with all of the cores ;-)

This machine has seen faster setup times, but I'll return to this in
another topic.
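For anyone checking their own setup against the storage-driver advice in the quoted reply below: `docker info` reports the active storage driver, and on Docker versions that read `/etc/docker/daemon.json` overlay2 can be selected with a fragment like this (a configuration sketch; the daemon needs a restart afterwards):

```json
{
  "storage-driver": "overlay2"
}
```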

On Tue, Sep 26, 2017 at 6:16 PM, Michał Jastrzębski 
wrote:

> On 26 September 2017 at 07:34, Attila Fazekas  wrote:
> > decompressing those registry tar.gz takes ~0.5 min on a 2.2 GHz CPU.
> >
> > Fully pulling all containers takes something like ~4.5 min (from
> localhost,
> > one leaf request at a time),
> > but on the gate vm we usually have 4 cores,
> > so it is possible to go below 2 min with a better pulling strategy,
> > unless we hit some disk limit.
>
> Check your $docker info. If you kept defaults, storage driver will be
> devicemapper on loopback, which is awfully slow and not very reliable.
> Overlay2 is much better and should speed things up quite a bit. For me
> deployment of 5 node openstack on vms similar to gate took 6min (I had
> registry available in same network). Also if you pull single image it
> will download all base images as well, so next one will be
> significantly faster.
>
> >
> > On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
> > wrote:
> >>
> >> On 22 September 2017 at 17:21, Paul Belanger 
> >> wrote:
> >> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> >> >> > "if DevStack gets custom images prepped to make its jobs
> >> >> > run faster, won't Triple-O, Kolla, et cetera want the same and
> where
> >> >> > do we draw that line?). "
> >> >> >
> >> >> > IMHO we can try to have only one big image per distribution,
> >> >> > where the packages are the union of the packages requested by all
> >> >> > team,
> >> >> > minus the packages blacklisted by any team.
> >> >> [...]
> >> >>
> >> >> Until you realize that some projects want packages from UCA, from
> >> >> RDO, from EPEL, from third-party package repositories. Version
> >> >> conflicts mean they'll still spend time uninstalling the versions
> >> >> they don't want and downloading/installing the ones they do so we
> >> >> have to optimize for one particular set and make the rest
> >> >> second-class citizens in that scenario.
> >> >>
> >> >> Also, preinstalling packages means we _don't_ test that projects
> >> >> actually properly declare their system-level dependencies any
> >> >> longer. I don't know if anyone's concerned about that currently, but
> >> >> it used to be the case that we'd regularly add/break the package
> >> >> dependency declarations in DevStack because of running on images
> >> >> where the things it expected were preinstalled.
> >> >> --
> >> >> Jeremy Stanley
> >> >
> >> > +1
> >> >
> >> > We spend a lot of effort trying to keep the 6 images we have in
> nodepool
> >> > working
> >> > today, I can't imagine how much work it would be to start adding more
> >> > images per
> >> > project.
> >> >
> >> > Personally, I'd like to audit things again once we roll out zuulv3, I
> am
> >> > sure
> >> > there are some tweaks we could make to help speed up things.
> >>
> >> I don't understand, why would you add images per project? We have all
> >> the images there.. What I'm talking about is to leverage what we'll
> >> have soon (registry) to lower time of gates/DIB infra requirements
> >> (DIB would hardly need to refresh images...)
> >>
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)

[openstack-dev] [oslo] No meeting next Monday( Oct 2)

2017-09-29 Thread ChangBo Guo
Hi Oslo folks,


Next week is China's National Day holiday and I can't be online the whole
week.
If we can't find anyone else to chair the meeting, let's skip the weekly meeting.

I will prepare the weekly release patch tomorrow, so that won't block others'
work.

-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Sylvain Bauza
On Fri, Sep 29, 2017 at 2:32 AM, Dan Smith  wrote:

> In this series of patches we are generalizing the PCI framework to
>>> handle MDEV devices. Admittedly it's a lot of patches, but most of them
>>> are small and the logic behind them is basically to make it understand two
>>> new fields, MDEV_PF and MDEV_VF.
>>>
>>
>> That's not really "generalizing the PCI framework to handle MDEV devices"
>> :) More like it's just changing the /pci module to understand a different
>> device management API, but ok.
>>
>
> Yeah, the series is adding more fields to our PCI structure to allow for
> more variations in the kinds of things we lump into those tables. This is
> my primary complaint with this approach, and has been since the topic first
> came up. I really want to avoid building any more dependency on the
> existing pci-passthrough mechanisms and focus any new effort on using
> resource providers for this. The existing pci-passthrough code is almost
> universally hated, poorly understood and tested, and something we should
> not be further building upon.
>
> In this series of patches we make the libvirt driver, as usual,
>>> return resources and attach devices returned by the pci manager. This
>>> part can be reused for Resource Providers.
>>>
>>
>> Perhaps, but the idea behind the resource providers framework is to treat
>> devices as generic things. Placement doesn't need to know about the
>> particular device attachment status.
>>
>
> I quickly went through the patches and left a few comments. The base work
> of pulling some of this out of libvirt is there, but it's all focused on
> the act of populating pci structures from the vgpu information we get from
> libvirt. That code could be made to instead populate a resource inventory,
> but that's about the most of the set that looks applicable to the
> placement-based approach.
>
>
I'll review them too.

As mentioned in IRC and the previous ML discussion, my focus is on the
>> nested resource providers work and reviews, along with the other two
>> top-priority scheduler items (move operations and alternate hosts).
>>
>> I'll do my best to look at your patch series, but please note it's lower
>> priority than a number of other items.
>>
>
> FWIW, I'm not really planning to spend any time reviewing it until/unless
> it is retooled to generate an inventory from the virt driver.
>
> With the two patches that report vgpus and then create guests with them
> when asked converted to resource providers, I think that would be enough to
> have basic vgpu support immediately. No DB migrations, model changes, etc
> required. After that, helping to get the nested-rps and traits work landed
> gets us the ability to expose attributes of different types of those vgpus
> and opens up a lot of possibilities. IMHO, that's work I'm interested in
> reviewing.
>

That's exactly what I would like to provide for Queens, so operators
would be able to have flavors asking for vGPU resources in
Queens, even if they couldn't yet ask for a specific vGPU type (or
ask to be in the same NUMA cell as the CPU). The latter definitely
needs nested resource providers, but the former (just having vGPU
resource classes provided by the virt driver) is possible for Queens.
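To make the "virt driver reports an inventory" idea concrete, here is a rough sketch — the function name is illustrative, not Nova's actual virt-driver interface — of the kind of per-resource-class inventory record a driver would hand to placement:

```python
# Sketch of a placement-style inventory record for a VGPU resource class.
# The field names mirror the placement inventory API; the surrounding
# function is illustrative, not Nova's actual virt-driver interface.

def get_vgpu_inventory(total_vgpus):
    """Build an inventory entry for the VGPU resource class."""
    return {
        'VGPU': {
            'total': total_vgpus,      # vGPUs the device(s) can expose
            'reserved': 0,             # held back from scheduling
            'min_unit': 1,
            'max_unit': total_vgpus,   # max a single guest may claim
            'step_size': 1,
            'allocation_ratio': 1.0,   # no overcommit for vGPUs
        },
    }

inv = get_vgpu_inventory(16)
print(inv['VGPU']['total'])  # 16
```

The attraction of this shape is exactly what Dan describes above: no new PCI tables or DB migrations, just an inventory the scheduler can place against.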



> One thing that would be very useful, Sahid, if you could get with Eric
>> Fried (efried) on IRC and discuss with him the "generic device management"
>> system that was discussed at the PTG. It's likely that the /pci module is
>> going to be overhauled in Rocky and it would be good to have the mdev
>> device management API requirements included in that discussion.
>>
>
> Definitely this.
>

++


> --Dan
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM

2017-09-29 Thread Sahid Orentino Ferdjaoui
On Thu, Sep 28, 2017 at 11:10:38PM +0200, Premysl Kouril wrote:
> >
> > Only the memory mapped for the guest is strictly allocated from the
> > NUMA node selected. The QEMU overhead should float on the host NUMA
> > nodes. So it seems that the "reserved_host_memory_mb" is enough.
> >
> 
> Even if that were true and overhead memory could float across NUMA
> nodes, it generally doesn't prevent us from running into OOM troubles.
> No matter where (in which NUMA node) the overhead memory gets
> allocated, it is not included in the available memory calculation for that
> NUMA node when provisioning a new instance and thus can cause OOM (once
> the guest operating system of the newly provisioned instance actually
> starts allocating memory, which can only be allocated from its assigned
> NUMA node).

That is why you need to use Huge Pages. The memory will be reserved
and locked for the guest.
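For reference, this is driven from the flavor side with the `hw:mem_page_size` extra spec; a configuration sketch (the flavor name is just an example):

```shell
# Back guests of this flavor with huge pages, reserved up front on the host
openstack flavor set m1.numa.large --property hw:mem_page_size=large

# Or request an explicit page size in KiB, e.g. 2 MiB pages:
openstack flavor set m1.numa.large --property hw:mem_page_size=2048
```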

> Prema
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-29 Thread Lee Yarwood
On 29-09-17 11:40:21, Saverio Proto wrote:
> Hello,
> 
> sorry I could not make it to the PTG.
> 
> I have an idea that I want to share with the community. I hope this is a
> good place to start the discussion.
> 
> After years of Openstack operations, upgrading releases from Icehouse to
> Newton, the feeling is that the control plane upgrade is doable.
> 
> But it is also a lot of pain to upgrade all the compute nodes. This
> really causes downtime to the VMs that are running.
> I can't always make live migrations, sometimes the VMs are just too big
> or too busy.
> 
> It would be nice to guarantee the ability to run an updated control
> plane with compute nodes up to N-3 Release.
> 
> This way even if we have to upgrade the control plane every 6 months, we
> can keep a longer lifetime for compute nodes. Basically we can never
> upgrade them until we decommission the hardware.
> 
> If there are new features that require updated compute nodes, we can
> always organize our datacenter in availability zones, not scheduling new
> VMs to those compute nodes.
> 
> To my understanding this means having compatibility at least for the
> nova-compute agent and the neutron-agents running on the compute node.
> 
> Is it a very bad idea ?
> 
> Do other people feel like me that upgrading all the compute nodes is
> also a big part of the burden regarding the upgrade ?

Yeah, I don't think the Nova community would ever be able or willing to
verify and maintain that level of backward compatibility. Ultimately
there's nothing stopping you from upgrading Nova on the computes while
also keeping instances running.

You only run into issues with kernel, OVS and QEMU (for n-cpu with
libvirt) etc upgrades that require reboots or instances to be restarted
(either hard or via live-migration). If you're unable or just unwilling
to take downtime for instances that can't be moved when these components
require an update then you have bigger problems IMHO.

Regards,

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 5:41 GMT+00:00 Ian Wienand :
> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>
>> I'm not aware of issues other than these at this time
>
>
> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
> also failing for unknown reasons.  Any debugging would be helpful,
> thanks.

It seems there are multiple issues with the multinode jobs:

a) post_failures due to an error in log collection, sample fix at
https://review.openstack.org/508473
b) jobs are being run as two identical tasks on primary and subnodes,
triggering https://bugs.launchpad.net/zun/+bug/1720240

Other issues:
- openstack-tox-py27 is being run on trusty nodes instead of xenial
- unit tests are missing in at least neutron gate runs
- some patches are not getting any results from zuul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] multi threads with swift backend

2017-09-29 Thread Erno Kuvaja
On Fri, Sep 29, 2017 at 6:35 AM, Arnaud MORIN  wrote:
> My objective is to be able to download and upload from glance/computes to
> swift in a faster way.
> I was thinking that if glance could parallelize the connections to swift
> for a single image (with chunks), it would be faster.
> Am I wrong ?
> Is there any other way I am not thinking of?
>
> Arnaud.

Let's take an example: image download, as everyone wants their instances
booting really quickly. We have a cold image. What happens now is Nova
requests the image from Glance, glance-api requests the bits from
Swift and streams them through to Nova. If you have caching enabled in
Glance (which should be your first point of improvement to get you hot
images quickly), the glance-api will tee that data to its local
cache, where later requests will be served from. If your Swift and the
networking between Glance and Swift are scaled appropriately, I have a
hard time believing that you would get a performance improvement by
waiting for Glance to first cache all the segments, compose them back into
an image and then send that image over to Nova. Your really hot
images will be delivered directly from the Glance cache anyway, easing the
load on Swift.
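For completeness, turning that cache on is a glance-api.conf change, roughly like this (values below are illustrative, not recommendations):

```ini
[paste_deploy]
# Add the caching middleware to the API pipeline
flavor = keystone+cachemanagement

[DEFAULT]
image_cache_dir = /var/lib/glance/image-cache/
# Prune target for the cache, in bytes (10 GiB here)
image_cache_max_size = 10737418240
```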

Then let's look at image upload. We're again in the same situation:
currently we stream the image to Swift as soon as we start receiving
the bits from the client. With your proposal, to gain any benefit we
would need to have Glance caching roughly (the first segment x the
number of threads) worth of data before we initiate the
upload to Swift, and the upload to Glance would need to be able to keep
up with the Swift sink, not just throttling the connections there.
Based on the real-world data we've seen so far, user clients suffer
rather from slow connections to their Glance nodes than from Glance
throttling their transfers down due to Swift not keeping up.

Now let's say we have 10 concurrent image operations per glance-api
node, of which 7 are actually interfacing with Swift. The proposed
multi-threading would put totally different scaling pressure on the
Glance nodes, for example to cope with all this short-term caching
that needs to happen. That needs either ridiculous amounts of memory
or an array of very fast disks. Do you have any data you can share that
would show us getting a performance boost worth the overhead
cost? Just remember the connection between Glance and the client (that
being user, nova, cinder...) is
>
> Le 28 sept. 2017 6:30 PM, "Erno Kuvaja"  a écrit :
>>
>> On Thu, Sep 28, 2017 at 4:27 PM, Arnaud MORIN 
>> wrote:
>> > Hey all,
>> > So I finally tested your pull requests, it does not work.
>> > 1 - For uploads, swiftclient is not using threads when source is given
>> > by
>> > glance:
>> >
>> > https://github.com/openstack/python-swiftclient/blob/master/swiftclient/service.py#L1847
>> >
>> > 2 - For downloads, when requesting the file from swift, it is
>> > recomposing
>> > the chunks into one big file.
>> >
>> >
>> > So patch is not so easy.
>> >
>> > IMHO, for uploads, we should try to upload chunks using multiple threads.
>> > Sounds doable.
>> > For downloads, I need to dig a little bit more in glance store code to
>> > be
>> > sure, but maybe we can try to download the chunks separately and
>> > recompose
>> > them locally before sending it to the requester (compute / cli).
>> >
>> > Cheers,
>> >
>>
>> So I'm still trying to understand (without success) why we want to
>> do this at all?
>>
>> - jokke
>>
>> >
>> > On 6 September 2017 at 21:19, Arnaud MORIN 
>> > wrote:
>> >>
>> >> Hey,
>> >> I would love to see that reviving!
>> >>
>> >> Cheers,
>> >> Arnaud
>> >>
>> >> On 6 September 2017 at 21:00, Mikhail Fedosin 
>> >> wrote:
>> >>>
>> >>> Hey! As you said it's not possible now.
>> >>>
>> >>> I implemented the support several years ago, bit unfortunately no one
>> >>> wanted to review it: https://review.openstack.org/#/c/218993
>> >>> If you want, we can revive it.
>> >>>
>> >>> Best,
>> >>> Mike
>> >>>
>> >>> On Wed, Sep 6, 2017 at 9:05 PM, Clay Gerrard 
>> >>> wrote:
>> 
>>  I'm pretty sure that would only be possible with a code change in
>>  glance
>>  to move the consumption of the swiftclient abstraction up a layer
>>  from the
>>  client/connection objects to swiftclient's service objects [1].  I'm
>>  not
>>  sure if that'd be something that would make a lot of sense to the
>>  Image
>>  Service team.
>> 
>>  -Clay
>> 
>>  1.
>>  https://docs.openstack.org/python-swiftclient/latest/service-api.html
>> 
>>  On Wed, Sep 6, 2017 at 9:02 AM, Arnaud MORIN 
>>  wrote:
>> >
>> > Hi all,
>> >
>> > Is there any chance that glance can use the multiprocessing from
>> > swiftclient library (equivalent of xxx-threads options from cli)?
>> > If yes, 

Re: [openstack-dev] l2gw

2017-09-29 Thread Lajos Katona

Hi Ricardo,

That is the exception which gives us the trouble.

If you have ideas, as you mentioned, about in which cases a gateway should be
updated and in which it shouldn't, that would be really nice.
Actually we now have a kind of development environment with devstack and the
vtep emulator (http://docs.openvswitch.org/en/latest/howto/vtep/) on the
same host; do you think that is enough to work on this problem?
I am not so sure whether with the vtep emulator we can cover all the good and bad
scenarios (I mean, for example, cases where we mustn't do the update).


Regards
Lajos

On 2017-09-28 14:12, Ricardo Noriega De Soto wrote:

I see the exception now Lajos:

class L2GatewayInUse(exceptions.InUse):
    message = _("L2 Gateway '%(gateway_id)s' still has active mappings "
                "with one or more neutron networks.")

:-)

On Wed, Sep 27, 2017 at 6:40 PM, Ricardo Noriega De Soto 
> wrote:


Hey Lajos,

Is this the exception you are encountering?

(neutron) l2-gateway-update --device
name=hwvtep,interface_names=eth0,eth1 gw1
L2 Gateway 'b8ef7f98-e901-4ef5-b159-df53364ca996' still has active
mappings with one or more neutron networks.
Neutron server returns request_ids:
['req-f231dc53-cb7d-4221-ab74-fa8715f85869']

I don't see the L2GatewayInUse exception you're talking about, but
I guess it's the same situation.

We should discuss in which case the l2gw instance could be
updated, and in which cases it shouldn't.

Please, let me know!



On Wed, Aug 16, 2017 at 11:14 AM, Lajos Katona
> wrote:

Hi,

We faced an issue with l2-gw-update: if there are
connections for a gw, the update will throw an
exception (L2GatewayInUse), and the update is only possible
after first deleting the connections, doing the update, and adding
the connections back.

It is not exactly clear why this restriction is there in the
code (at least I can't find it in docs or comments in the
code, or review).
As I see the check for network connections was introduced in
this patch:
https://review.openstack.org/#/c/144097


(https://review.openstack.org/#/c/144097/21..22/networking_l2gw/db/l2gateway/l2gateway_db.py

)

Could you please give me a little background why the update
operation is not allowed on an l2gw with network connections?

Thanks in advance for the help.

Regards
Lajos


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





-- 
Ricardo Noriega


Senior Software Engineer - NFV Partner Engineer | Office of
Technology  | Red Hat
irc: rnoriega @freenode




--
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology 
 | Red Hat

irc: rnoriega @freenode



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [skip-level-upgrades][fast-forward-upgrades] PTG summary

2017-09-29 Thread Saverio Proto
Hello,

sorry I could not make it to the PTG.

I have an idea that I want to share with the community. I hope this is a
good place to start the discussion.

After years of OpenStack operations, upgrading releases from Icehouse to
Newton, the feeling is that the control plane upgrade is doable.

But it is also a lot of pain to upgrade all the compute nodes. This
really causes downtime to the VMs that are running.
I can't always make live migrations, sometimes the VMs are just too big
or too busy.

It would be nice to guarantee the ability to run an updated control
plane with compute nodes up to N-3 Release.

This way even if we have to upgrade the control plane every 6 months, we
can keep a longer lifetime for compute nodes. Basically we can never
upgrade them until we decommission the hardware.

If there are new features that require updated compute nodes, we can
always organize our datacenter in availability zones, not scheduling new
VMs to those compute nodes.

To my understanding this means having compatibility at least for the
nova-compute agent and the neutron-agents running on the compute node.

Is it a very bad idea?

Do other people feel, like me, that upgrading all the compute nodes is
also a big part of the burden of the upgrade?

thanks!

Saverio


On 28.09.17 17:38, arkady.kanev...@dell.com wrote:
> Erik,
> 
> Thanks for setting up a session for it.
> 
> Glad it is driven by Operators.
> 
> I will be happy to work with you on the session and run it with you.
> 
> Thanks,
> 
> Arkady
> 
>  
> 
> *From:*Erik McCormick [mailto:emccorm...@cirrusseven.com]
> *Sent:* Thursday, September 28, 2017 7:40 AM
> *To:* Lee Yarwood 
> *Cc:* OpenStack Development Mailing List
> ; openstack-operators
> 
> *Subject:* Re: [openstack-dev] [Openstack-operators]
> [skip-level-upgrades][fast-forward-upgrades] PTG summary
> 
>  
> 
>  
> 
> On Sep 28, 2017 4:31 AM, "Lee Yarwood"  > wrote:
> 
> On 20-09-17 14:56:20, arkady.kanev...@dell.com
>  wrote:
> > Lee,
> > I can chair meeting in Sydney.
> > Thanks,
> > Arkady
> 
> Thanks Arkady!
> 
> FYI I see that emccormickva has created the following Forum session to
> discuss FF upgrades:
> 
> http://forumtopics.openstack.org/cfp/details/19
> 
> You might want to reach out to him to help craft the agenda for the
> session based on our discussions in Denver.
> 
> .
> 
> I just didn't want to risk it not getting in, and it was on our etherpad
> as well. I'm happy to help, but would love for you guys to lead.
> 
>  
> 
> Thanks,
> 
> Erik
> 
>  
> 
> 
> Thanks again,
> 
> Lee
> 
> --
> Lee Yarwood                 A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33
> F672 2D76
> 
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
>  
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] multi threads with swift backend

2017-09-29 Thread Erno Kuvaja
On Fri, Sep 29, 2017 at 6:35 AM, Arnaud MORIN  wrote:
> My objective is to be able to download and upload from glance/computes to
> swift in a faster way.
> I was thinking that if glance could parallelize the connections to swift
> for a single image (with chunks), it would be faster.
> Am I wrong ?
> Is there any other way I am not thinking of?
>
> Arnaud.

Let's take an example: image download, as everyone wants their instances
booting really quickly. We have a cold image. What happens now is Nova
requests the image from Glance, glance-api requests the bits from
Swift and streams them through to Nova. If you have caching enabled in
Glance (which should be your first point of improvement to get your hot
images quickly), glance-api will tee that data into its local
cache, from which later requests will be served. If your Swift and the
networking between Glance and Swift are scaled appropriately, I have a
hard time believing that you would get a performance improvement by
waiting for Glance to first cache all the segments, compose them back into
an image and then send that image over to Nova. This would likely
speed up caching, depending on how fast your cache storage is, but the
first instance would likely still boot slower. Your really hot images
will be delivered directly from the Glance cache anyway, easing the load
on Swift.

Then let's look at image upload. We're again in the same situation:
currently we stream the image to Swift as soon as we start receiving
the bits from the client. With your proposal, to gain any benefit we
would need Glance to cache roughly the first segment times the
number of threads worth of data before we initiate the
upload to Swift, and the upload to Glance would need to be able to keep
up with the Swift sink rather than just throttling the connections there.
Based on the real-world data we've seen so far, user clients suffer
from a rather slow connection to their Glance nodes more often than from
Glance throttling their transfers down due to Swift not keeping up.

Now let's say we have 10 concurrent image operations per glance-api
node, of which 7 are actually interfacing with Swift. The proposed
multi-threading would put a totally different scaling pressure on the
Glance nodes, for example just to cope with all this short-term caching
that needs to happen. That needs either ridiculous amounts of memory
or an array of very fast disks. Do you have any data you can share that
would show the performance boost being worth the overhead
cost? Just remember that the connection between Glance and the client
(be it a user, nova, cinder...) is a single stream anyway.
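(Illustrative sketch of the chunk-parallel download idea under discussion — the helper names and the in-memory segment map are hypothetical, not Glance's actual code:)

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for one Swift segment fetch; in a real deployment
# this would be an HTTP GET against the segment object in the container.
SEGMENTS = {"img-00001": b"aaaa", "img-00002": b"bbbb", "img-00003": b"cc"}

def fetch_segment(name):
    return SEGMENTS[name]

def download_image(segment_names, workers=4):
    # Fetch all segments in parallel, then reassemble them in manifest
    # order before streaming the image on to the requester (nova, cli).
    # Note: every in-flight segment is buffered here -- this is exactly
    # the memory/disk pressure discussed above.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(fetch_segment, segment_names))
    return b"".join(parts)
```

The buffering in `parts` is the trade-off being debated: parallel fetches only help if the reassembly storage keeps up.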

- jokke
>
> Le 28 sept. 2017 6:30 PM, "Erno Kuvaja"  a écrit :
>>
>> On Thu, Sep 28, 2017 at 4:27 PM, Arnaud MORIN 
>> wrote:
>> > Hey all,
>> > So I finally tested your pull requests, it does not work.
>> > 1 - For uploads, swiftclient is not using threads when source is given
>> > by
>> > glance:
>> >
>> > https://github.com/openstack/python-swiftclient/blob/master/swiftclient/service.py#L1847
>> >
>> > 2 - For downloads, when requesting the file from swift, it is
>> > recomposing
>> > the chunks into one big file.
>> >
>> >
>> > So patch is not so easy.
>> >
>> > IMHO, for uploads, we should try to upload chunks using multiple
>> > threads. Sounds doable.
>> > For downloads, I need to dig a little bit more in glance store code to
>> > be
>> > sure, but maybe we can try to download the chunks separately and
>> > recompose
>> > them locally before sending it to the requester (compute / cli).
>> >
>> > Cheers,
>> >
>>
>> So I'm still trying to understand (without success) why we want to
>> do this at all?
>>
>> - jokke
>>
>> >
>> > On 6 September 2017 at 21:19, Arnaud MORIN 
>> > wrote:
>> >>
>> >> Hey,
>> >> I would love to see that reviving!
>> >>
>> >> Cheers,
>> >> Arnaud
>> >>
>> >> On 6 September 2017 at 21:00, Mikhail Fedosin 
>> >> wrote:
>> >>>
>> >>> Hey! As you said it's not possible now.
>> >>>
>> >>> I implemented the support several years ago, bit unfortunately no one
>> >>> wanted to review it: https://review.openstack.org/#/c/218993
>> >>> If you want, we can revive it.
>> >>>
>> >>> Best,
>> >>> Mike
>> >>>
>> >>> On Wed, Sep 6, 2017 at 9:05 PM, Clay Gerrard 
>> >>> wrote:
>> 
>>  I'm pretty sure that would only be possible with a code change in
>>  glance
>>  to move the consumption of the swiftclient abstraction up a layer
>>  from the
>>  client/connection objects to swiftclient's service objects [1].  I'm
>>  not
>>  sure if that'd be something that would make a lot of sense to the
>>  Image
>>  Service team.
>> 
>>  -Clay
>> 
>>  1.
>>  https://docs.openstack.org/python-swiftclient/latest/service-api.html
>> 
>>  On Wed, Sep 6, 2017 at 9:02 AM, Arnaud MORIN 
>>  wrote:
>> >
>> > Hi all,
>> >
>> 

Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-29 Thread Jesse Pretorius
On 9/29/17, 7:18 AM, "Thomas Bechtold"  wrote:

This will still install the files into usr/etc :
It's not nice, but packagers can work around it.

Yes, that is true. Is there a ‘better’ location to have them? I noticed that 
Sahara was placing the files into share, resulting in them being installed into 
/usr/share – is that better?

For OSA as a project it’s not really a problem where it goes, just that the 
files are there and ideally in a consistent place.
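(For reference, a hedged sketch of what such a pbr `data_files` stanza in setup.cfg looks like — the paths are illustrative, loosely based on the Sahara example mentioned above:)

```ini
[files]
data_files =
    etc/sahara =
        etc/sahara/api-paste.ini
        etc/sahara/rootwrap.conf
```

A relative target like `etc/sahara` is installed under the prefix, which is where the `usr/etc` result noted in this thread comes from.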



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-29 Thread Adam Heczko
Thanks Scott, makes sense.

On Fri, Sep 29, 2017 at 12:19 PM, Luke Hinds  wrote:

>
>
> On Thu, Sep 28, 2017 at 8:38 PM, McClymont Jr, Scott  verizonwireless.com> wrote:
>
>> Hey All,
>>
>> I've got a spec up for a change I want to implement in Glance for Queens
>> to enhance the current checksum (md5) functionality with a stronger hash
>> algorithm. I'm going to do this in such a way that it is easily altered in
>> the future for new algorithms as they are released.  I'd appreciate it if
>> someone on the security team could look it over and comment. Thanks.
>>
>> Review: https://review.openstack.org/#/c/507568/
>>
>>
> +1 , thanks for undertaking this work. Strong support from the security
> projects side.
>
> Would be good to see all projects move on from MD5 use now; it's been
> known to be insecure for some time and clashes with FIPS 140-2 compliance.
>
>
>
>> --
>> Scott McClymont
>> Sr. Software Engineer
>> Verizon Cloud Platform
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][Security] Secure Hash Algorithm Spec

2017-09-29 Thread Luke Hinds
On Thu, Sep 28, 2017 at 8:38 PM, McClymont Jr, Scott <
scott.mcclym...@verizonwireless.com> wrote:

> Hey All,
>
> I've got a spec up for a change I want to implement in Glance for Queens
> to enhance the current checksum (md5) functionality with a stronger hash
> algorithm. I'm going to do this in such a way that it is easily altered in
> the future for new algorithms as they are released.  I'd appreciate it if
> someone on the security team could look it over and comment. Thanks.
>
> Review: https://review.openstack.org/#/c/507568/
>
>
+1 , thanks for undertaking this work. Strong support from the security
projects side.

Would be good to see all projects move on from MD5 use now; it's been known
to be insecure for some time and clashes with FIPS 140-2 compliance.
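(Illustrative only — the algorithm choice and configuration plumbing are exactly what the spec under review decides. Computing a stronger digest alongside the legacy md5 checksum is straightforward with Python's hashlib; `multihash` is a hypothetical helper name:)

```python
import hashlib

def multihash(data, algorithm="sha512"):
    # Compute both the legacy checksum and a configurable secure hash.
    # Keeping the algorithm a parameter mirrors the spec's goal of being
    # easily altered for new algorithms as they are released.
    legacy = hashlib.md5(data).hexdigest()
    secure = hashlib.new(algorithm, data).hexdigest()
    return legacy, secure
```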



> --
> Scott McClymont
> Sr. Software Engineer
> Verizon Cloud Platform
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api-wg][glance] call for comments on Glance spec for Queens

2017-09-29 Thread Adam Heczko
Thank you Brian!
+1 for solving this, I left my comments in review.


On Fri, Sep 29, 2017 at 12:00 PM, Luke Hinds  wrote:

>
>
> On Fri, Sep 29, 2017 at 3:08 AM, Brian Rosmaita <
> rosmaita.foss...@gmail.com> wrote:
>
>> Hello API WG,
>>
>> I've got a patch up for a proposal to fix OSSN-0075 by introducing a
>> new policy.  There are concerns that this will introduce an
>> interoperability problem in that an API call that works in one
>> OpenStack cloud may not work in other OpenStack clouds.  As author of
>> the spec, I think this is an OK trade-off to fix the security issue,
>> but not all members of the Glance community agree, so we're trying to
>> get some wider perspective.  We'd appreciate it if some API-WG members
>> could take a look and leave a comment:
>>
>> https://review.openstack.org/#/c/468179/
>>
>> If you could respond by Tuesday 3 October, that would give us time to
>> get this worked out before the spec freeze (6 October).
>>
>> thanks,
>> brian
>>
>>
> +1 for efforts to take this forward and find a resolution, from a security
> standpoint it would be good to see this solved.
>
> Luke
>
> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> 
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api-wg][glance] call for comments on Glance spec for Queens

2017-09-29 Thread Luke Hinds
On Fri, Sep 29, 2017 at 3:08 AM, Brian Rosmaita 
wrote:

> Hello API WG,
>
> I've got a patch up for a proposal to fix OSSN-0075 by introducing a
> new policy.  There are concerns that this will introduce an
> interoperability problem in that an API call that works in one
> OpenStack cloud may not work in other OpenStack clouds.  As author of
> the spec, I think this is an OK trade-off to fix the security issue,
> but not all members of the Glance community agree, so we're trying to
> get some wider perspective.  We'd appreciate it if some API-WG members
> could take a look and leave a comment:
>
> https://review.openstack.org/#/c/468179/
>
> If you could respond by Tuesday 3 October, that would give us time to
> get this worked out before the spec freeze (6 October).
>
> thanks,
> brian
>
>
+1 for efforts to take this forward and find a resolution, from a security
standpoint it would be good to see this solved.

Luke

__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
> 
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Forum topics brainstorming

2017-09-29 Thread Sylvain Bauza
2017-09-28 23:45 GMT+02:00 Matt Riedemann :

> On 9/21/2017 4:01 PM, Matt Riedemann wrote:
>
>> So this shouldn't be news now that I've read back through a few emails in
>> the mailing list (I've been distracted with the Pike release, PTG planning,
>> etc) [1][2][3] but we have until Sept 29 to come up with whatever forum
>> sessions we want to propose.
>>
>> There is already an etherpad for Nova [4].
>>
>> The list of proposed topics is here [5]. The good news is we're not the
>> last ones to this party.
>>
>> So let's start throwing things on the etherpad and figure out what we
>> want to propose as forum session topis. If memory serves me, in Pike we
>> were pretty liberal in what we proposed.
>>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-Sept
>> ember/121783.html
>> [2] http://lists.openstack.org/pipermail/openstack-dev/2017-Sept
>> ember/122143.html
>> [3] http://lists.openstack.org/pipermail/openstack-dev/2017-Sept
>> ember/122454.html
>> [4] https://etherpad.openstack.org/p/SYD-nova-brainstorming
>> [5] http://forumtopics.openstack.org/
>>
>>
> The deadline for Queens Forum topic submissions is tomorrow. Based on our
> etherpad:
>
> https://etherpad.openstack.org/p/SYD-nova-brainstorming
>
> I plan to propose something like:
>
> 1. Cells v2 update and direction
>
> This would be an update on what happened in Pike, upgrade impacts, known
> issues, etc and what we're doing in Queens. I think we'd also lump the Pike
> quota behavior changes in here too if possible.
>
> 2. Placement update and direction
>
> Same as the Cells v2 discussion - a Pike update and the focus items for
> Queens. This would also be a place we can mention the Ironic flavor
> migration to custom resource classes that happens in Pike.
>
> 3. Queens development focus and checkpoint
>
> This would be a session to discuss anything in flight for Queens, what
> we're working on, and have a chance to ask questions of operators/users for
> feedback. For example, we plan to add vGPU support but it will be quite
> simple to start, similar with volume multi-attach.
>
> 4. Michael Still had an item in the etherpad about privsep. That could be
> a cross-project educational session on its own if he's going to give a
> primer on what privsep is again and how it's integrated into projects. This
> session could be lumped into #3 above but is probably better on its own if
> it's going to include discussion about operational impacts. I'm going to
> ask that mikal runs with this though.
>
> 
>
> There are some other things in the etherpad about hardware acceleration
> features and documentation, and I'll leave it up to others if they want to
> propose those sessions.
>
>

Yup, I provided two proposals:
http://forumtopics.openstack.org/cfp/details/47 for discussing
documentation and release notes
http://forumtopics.openstack.org/cfp/details/48 for talking about how
operators could use OSC and making sure it works for the maximum version
(at least knowing the gaps we have).

-Sylvain



-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-09-29 Thread Sahid Orentino Ferdjaoui
On Thu, Sep 28, 2017 at 05:06:16PM -0400, Jay Pipes wrote:
> On 09/28/2017 11:37 AM, Sahid Orentino Ferdjaoui wrote:
> > Please consider the support of MDEV for the /pci framework which
> > provides support for vGPUs [0].
> > 
> > Accordingly to the discussion [1]
> > 
> > With this first implementation which could be used as a skeleton for
> > implementing PCI Devices in Resource Tracker
> 
> I'm not entirely sure what you're referring to above as "implementing PCI
> devices in Resource Tracker". Could you elaborate? The resource tracker
> already embeds a PciManager object that manages PCI devices, as you know.
> Perhaps you meant "implement PCI devices as Resource Providers"?

A PciManager? I know that we have a field PCI_DEVICE :) - I guess a
virt driver can return an inventory with the total of PCI devices. As
for a manager, I'm not sure.

You still have to define "traits". Basically, for physical network
devices, users want to select a device according to the physical
network, according to its placement on the host (NUMA), or according
to its bandwidth capability... For GPUs it's the same story. *And I
have not even mentioned devices which support virtual functions.*

So that is what you plan to do for this release :) - Reasonably, I
don't think we are close to having something ready for production.

Jay, I have a question: why don't you start by exposing NUMA?

> > we provide support for
> > attaching vGPUs to guests. And also to provide affinity per NUMA
> > nodes. An other important point is that that implementation can take
> > advantage of the ongoing specs like PCI NUMA policies.
> > 
> > * The Implementation [0]
> > 
> > [PATCH 01/13] pci: update PciDevice object field 'address' to accept
> > [PATCH 02/13] pci: add for PciDevice object new field mdev
> > [PATCH 03/13] pci: generalize object unit-tests for different
> > [PATCH 04/13] pci: add support for mdev device type request
> > [PATCH 05/13] pci: generalize stats unit-tests for different
> > [PATCH 06/13] pci: add support for mdev devices type devspec
> > [PATCH 07/13] pci: add support for resource pool stats of mdev
> > [PATCH 08/13] pci: make manager to accept handling mdev devices
> > 
> > In this series of patches we are generalizing the PCI framework to
> > handle MDEV devices. We know it's a lot of patches, but most of them
> > are small and the logic behind them is basically to make it understand
> > two new fields, MDEV_PF and MDEV_VF.
> 
> That's not really "generalizing the PCI framework to handle MDEV devices" :)
> More like it's just changing the /pci module to understand a different
> device management API, but ok.

If you prefer to call it that :) - the point is that /pci manages
physical devices; it can pass through the whole device or its virtual
functions exposed through SR-IOV or MDEV.

> > [PATCH 09/13] libvirt: update PCI node device to report mdev devices
> > [PATCH 10/13] libvirt: report mdev resources
> > [PATCH 11/13] libvirt: add support to start vm with using mdev (vGPU)
> > 
> > In this series of patches we make the libvirt driver, as usual,
> > return resources and attach devices returned by the pci manager. This
> > part can be reused for Resource Provider.
> 
> Perhaps, but the idea behind the resource providers framework is to treat
> devices as generic things. Placement doesn't need to know about the
> particular device attachment status.
> 
> > [PATCH 12/13] functional: rework fakelibvirt host pci devices
> > [PATCH 13/13] libvirt: resuse SRIOV funtional tests for MDEV devices
> > 
> > Here we reuse 100/100 of the functional tests used for SR-IOV
> > devices. Again here, this part can be reused for Resource Provider.
> 
> Probably not, but I'll take a look :)
> 
> For the record, I have zero confidence in any existing "functional" tests
> for NUMA, SR-IOV, CPU pinning, huge pages, and the like. Unfortunately,
> these features often require hardware that the upstream community CI
> lacks, or depend on libraries, drivers and kernel versions that really
> aren't available to non-bleeding-edge users (or users with very deep
> pockets).

It's a good point. If you are not confident, don't you think it's
premature to move forward on implementing a new thing without having
well-trusted functional tests?

> > * The Usage
> > 
> > There is no difference between SR-IOV and MDEV from the operator's
> > point of view: those who know how to expose SR-IOV devices in Nova
> > already know how to expose MDEV devices (vGPUs).
> > 
> > Operators will be able to expose MDEV devices in the same manner as
> > they expose SR-IOV:
> > 
> >   1/ Configure whitelist devices
> > 
> >   ['{"vendor_id":"10de"}']
> > 
> >   2/ Create aliases
> > 
> >   [{"vendor_id":"10de", "name":"vGPU"}]
> > 
> >   3/ Configure the flavor
> > 
> >   openstack flavor set --property "pci_passthrough:alias"="vGPU:1"
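(Editor's illustrative sketch, not part of the original mail: in nova.conf terms, the first two steps correspond roughly to the `[pci]` options used for the existing SR-IOV workflow — the option names and the vendor id value are assumptions mirroring the example above:)

```ini
[pci]
# 1/ whitelist the devices (hypothetical NVIDIA vendor id from the example)
passthrough_whitelist = {"vendor_id": "10de"}
# 2/ create an alias the flavor can reference via "pci_passthrough:alias"
alias = {"vendor_id": "10de", "name": "vGPU"}
```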
> > 
> > * Limitations
> > 
> > The mdev does not provide 'product_id' but 'mdev_type' which should be
> > 

Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
>>
>> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>>
>>> I'm not aware of issues other than these at this time
>>
>>
>> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
>> also failing for unknown reasons.  Any debugging would be helpful,
>> thanks.
>
>
> We also have our legacy-telemetry-dsvm-integration-ceilometer broken:
>
> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.

[1] https://review.openstack.org/#/c/508396

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Newton End-Of-Life (EOL) next month (reminder #1)

2017-09-29 Thread Tony Breeds
On Wed, Sep 27, 2017 at 08:14:19PM -0700, Emilien Macchi wrote:
> On Wed, Sep 27, 2017 at 5:37 PM, Tony Breeds  wrote:
> > On Wed, Sep 27, 2017 at 10:39:13AM -0600, Alex Schultz wrote:
> >
> >> One idea would be to allow trailing projects additional trailing on
> >> the phases as well.  Honestly 2 weeks for trailing for just GA is hard
> >> enough. Let alone the fact that the actual end-users are 18+ months
> >> behind.  For some deployment project like tripleo, there are sections
> >> that should probably follow stable-policy as it exists today but
> >> elements where there's 3rd party integration or upgrade implications
> >> (in the case of tripleo, THT/puppet-tripleo) and they need to be more
> >> flexible to modify things as necessary.  The word 'feature' isn't
> >> necessarily the same for these projects than something like
> >> nova/neutron/etc.
> >
> > There are 2 separate aspects here:
> > 1) What changes are appropriate on stable/* branches ; and
> > 2) How long to stable/* stay around for.
> >
> > Looking at 1.  I totally get that deployment projects have a different
> > threshold on the bugfix/feature line.  That's actually the easy part to
> > fix.  The point of the stable policy is to give users some assurance
> > that moving from version x.y.z -> x.Y.Z will be a smooth process.  We
> > just need to capture that intent in a policy that works in the context
> > of a deployment project.
> 
> It makes total sense to me. BTW we have CI coverage for upgrades from
> Newton to Ocata (and Ocata to Pike is ongoing but super close; also
> Pike to Queens is targeted to Queens-1 milestone) so you can see our
> efforts on that front are pretty heavy.

Yup, I hope nothing I said implied a lack of commitment from the tripleo
team.  If I understand the context of the 'minor update' work, once it's
done you'll be ahead of the curve ;P
 
> > Looking at 2.  The stable policy doesn't say you *need* to EOL on
> > Oct-11th  by default any project that asserts that tag is included but
> > you're also free to opt out as long as there is a good story around CI
> > and impact on human and machine resources.  We re-evaluate that from
> > time to time.  As an example, group-based-policy opted out of the
> > kilo?, liberty and mitaka EOLs, recently dropped everything before
> > mitaka.  I get that GBP has a different footprint in CI than tripleo
> > does but it illustrates that there is scope to support your users within
> > the current policy.
> 
> Again, it makes a lot of sense here. We don't want to burn too much CI
> resources and keep strict minimum - also make sure we don't burn any
> external team (e.g. stable-maint).

Cool.
 
> > I'm still advocating for crafting a more appropriate policy for
> > deployment projects.
> 
> Cool, it's aligned with what Ben and Alex are proposing, iiuc.

Yup.
 
> >> >> What proposing Giulio probably comes from the real world, the field,
> >> >> who actually manage OpenStack at scale and on real environments (not
> >> >> in devstack from master). If we can't have this code in-tree, we'll
> >> >> probably carry this patch downstream (which is IMHO bad because of
> >> >> maintenance and lack of CI). In that case, I'll vote to give up
> >> >> stable:follows-policy so we can do what we need.
> >> >
> >> > Rather than give up on the stable:follows policy tag it is possibly
> >> > worth looking at which portions of tripleo make that assertion.
> >> >
> >> > In this specific case, there isn't anything in the bug that indicates
> >> > it comes from a user report which is all the stable team has to go on
> >> > when making these types of decisions.
> >> >
> >>
> >> We'll need to re-evaluate what stable-policy means for tripleo.  We
> >> don't want to allow the world for backporting but we also want to
> >> reduce the patches carried downstream for specific use cases.  I think
> >> in the case of 3rd party integrations we need a better definition of
> >> what that means and perhaps creating a new repository like THT-extras
> >> that doesn't follow stable-policy while the main one does.
> >
> > Right, I don't pretend to understand the ins-and-outs of tripleo but yes
> > I think we're mostly agreeing on that point.
> >
> > https://review.openstack.org/#/c/507924/ buys everyone the space to make
> > that evaluation.
> >
> > Yours Tony.
> 
> Thanks Tony for being open on the ideas; I find our discussion very
> productive despite the fact we want to give up the tag for now.

You're all welcome.  This community is good at trying to see problems
from all sides and acting with maturity :)

\o/
 
> So as a summary:
> 
> 1) We discuss on 507924 to figure out yes/no we give up the tag and
> which repos we do it.

Yup, as I said on the review I think removing the tag is the right thing
to do right now.

> 2) Someone to propose an amendment to the existing stable policy or
> propose a new policy.

Yup, though to level-set, this won't be an immediate action.  I'd like to
have a draft 

Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.


We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] using different dns domains in neutron networks

2017-09-29 Thread Kim-Norman Sahm

Hi,

I'm currently testing the integration of Designate and I found the DNS
integration in Neutron:
https://docs.openstack.org/newton/networking-guide/config-dns-int.html

In this example the value "dns_domain = example.org." is set in
neutron.conf.
If I create a port with "--dns_name fancyname", it is assigned to the
domain example.org: fancyname.example.org.

If I set another domain name on another network with "neutron net-update
--dns-domain anotherdomain.org. net2" and create a port in this network,
the DNS record is still in the example.org. domain.
Is there a way to override the global dns domain for a network and have
the ports in this network inherit the network's dns-domain?

Best regards
Kim


Kim-Norman Sahm
Cloud & Infrastructure(OCI)

noris network AG
Thomas-Mann-Straße 16-20
90471 Nürnberg
Deutschland

Tel +49 911 9352 1433
Fax +49 911 9352 100

kim-norman.s...@noris.de

https://www.noris.de - Mehr Leistung als Standard
Vorstand: Ingo Kraupa (Vorsitzender), Joachim Astel
Vorsitzender des Aufsichtsrats: Stefan Schnabel - AG Nürnberg HRB 17689










smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-Ansible testing with OpenVSwitch

2017-09-29 Thread Gyorgy Szombathelyi

> 
> Hello JP,
> 
> Ok, I will do some more testing against the blog post and then hit up the
> #openstack-ansible channel.
> 
> I need to finish a presentation on SFC first which is why I am looking into
> OpenVSwitch.

Hi Michael,

If your goal is not openstack-ansible, here's an AIO installer for Pike with 
OpenVSwitch:
https://github.com/DoclerLabs/openstack
(needs vagrant and VirtualBox)

Br,
György

> 
> Thanks
> Michael
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-09-29 Thread Thomas Bechtold

Hi,

On 28.09.2017 16:50, Jesse Pretorius wrote:
[...]
Do any packagers or deployment projects have issues with this 
implementation? If there are any issues, what’re your suggestions to 
resolve them?


This will still install the files into usr/etc :

$ python setup.py install --skip-build --root /tmp/sahara-install > 
/dev/null

$ ls /tmp/sahara-install/usr/
bin  etc  lib

It's not nice, but packagers can work around it.
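(Illustrative sketch of the kind of relocation a package spec's install step might do — the buildroot layout is simulated here, and the `sahara` paths are just the example from this thread:)

```shell
# Simulate the layout produced by `python setup.py install --root ...`
buildroot=$(mktemp -d)
mkdir -p "$buildroot/usr/etc/sahara"
touch "$buildroot/usr/etc/sahara/sahara.conf.sample"

# Packager workaround: relocate the prefix-installed config tree to /etc
mkdir -p "$buildroot/etc"
mv "$buildroot/usr/etc/"* "$buildroot/etc/"
rmdir "$buildroot/usr/etc"
```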

Best,

Tom

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev