Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues

2017-10-03 Thread Rikimaru Honjo

Hello,

I'm trying to run jobs with Zuul v3 in my local environment.[1]
I prepared a sample job that runs the sleep command on zuul's host.
This job doesn't use Nodepool. [2]

As a result, Zuul v3 reported "SUCCESS" to Gerrit when a Gerrit event occurred,
but error logs were generated and my job was not actually run.

I'd appreciate it if you could help me.
(Should I write this topic on Zuul Storyboard?)

[1]I use Ubuntu 16.04 and zuul==2.5.3.dev1374.

[2]In my understanding, I can use Zuul v3 without Nodepool.
https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#attr-job.nodeset

If a job has an empty or no nodeset definition, it will still run and may be 
able to perform actions on the Zuul executor.


[Conditions]
* Target project is defined as config-project in tenant configuration file.
* I didn't write nodeset in .zuul.yaml.
  Because my job doesn't use Nodepool.
* I configured the playbook's hosts as "- hosts: all" or "- hosts: localhost".
  (I referred to project-config repository.)
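
For reference, a minimal sketch of the kind of configuration I mean (job,
project, and playbook names are illustrative):

    # .zuul.yaml -- no nodeset, so per the documentation quoted above
    # the job should run on the Zuul executor
    - job:
        name: sleep-job
        run: playbooks/sleep.yaml

    - project:
        check:
          jobs:
            - sleep-job

    # playbooks/sleep.yaml
    - hosts: localhost
      tasks:
        - command: sleep 10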

[Error logs]
"no hosts matched" or "list index out of range" were generated.
Please see the attached file.


On 2017/09/29 23:58, Monty Taylor wrote:

Hey everybody!

tl;dr - If you're having issues with your jobs, check the FAQ, this email and 
followups on this thread for mentions of them. If it's an issue with your job 
and you can spot it (bad config) just submit a patch with topic 'zuulv3'. If 
it's bigger/weirder/you don't know - we'd like to ask that you send a follow up 
email to this thread so that we can ensure we've got them all and so that 
others can see it too.
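
For a config fix in your own repo, that can be as simple as pushing the change
to Gerrit with the topic set (a sketch, using git-review):

    # from your project's repo, with the fix committed locally
    git review -t zuulv3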

** Zuul v3 Migration Status **

If you haven't noticed the Zuul v3 migration - awesome, that means it's working 
perfectly for you.

If you have - sorry for the disruption. It turns out we have a REALLY 
complicated array of job content you've all created. Hopefully the pain of the 
moment will be offset by the ability for you all to take direct ownership of 
your awesome content... so bear with us; your patience is appreciated.

If you find yourself with some extra time on your hands while you wait on 
something, you may find it helpful to read:

   https://docs.openstack.org/infra/manual/zuulv3.html

We're adding content to it as issues arise. Unfortunately, one of the issues is 
that the infra manual publication job stopped working.

While the infra manual publication is being fixed, we're collecting FAQ content 
for it in an etherpad:

   https://etherpad.openstack.org/p/zuulv3-migration-faq

If you have a job issue, check it first to see if we've got an entry for it. 
Once manual publication is fixed, we'll update the etherpad to point to the FAQ 
section of the manual.

** Global Issues **

There are a number of outstanding issues that are being worked. As of right 
now, there are a few major/systemic ones that we're looking in to that are 
worth noting:

* Zuul Stalls

If you say to yourself "zuul doesn't seem to be doing anything, did I do something 
wrong?", you may be hitting an intermittent connection issue in the backend 
plumbing that jeblair and Shrews are currently tracking down.

When it happens it's an across the board issue, so fixing it is our number one 
priority.

* Incorrect node type

We've got reports of things running on trusty that should be running on xenial. 
The job definitions look correct, so this is also under investigation.

* Multinode jobs having POST FAILURE

There is a bug in the log collection: it tries to collect from all nodes, while 
the old jobs were designed to only collect from the 'primary'. Patches are up 
and this should be fixed soon.

* Branch Exclusions being ignored

This has been reported and its cause is currently unknown.

Thank you all again for your patience! This is a giant rollout with a bunch of 
changes in it, so we really do appreciate everyone's understanding as we work 
through it all.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp


Case 1) I configured the playbook's hosts as "- hosts: all".

2017-09-29 16:18:40,247 DEBUG zuul.AnsibleJob: [build: 
e56656cd5d1444619c01755e6f858be0] Writing logging config for job 
/tmp/e56656cd5d1444619c01755e6f858be0/work/logs/job-output.txt 
/tmp/e56656cd5d1444619c01755e6f858be0/ansible/logging.json
2017-09-29 16:18:40,249 DEBUG zuul.BubblewrapExecutionContext: Bubblewrap 
command: bwrap --dir /tmp --tmpfs /tmp --dir /var --dir /var/tmp --dir 
/run/user/1000 --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /bin /bin 
--ro-bind /sbin /sbin --ro-bind /etc/resolv.conf /etc/resolv.conf --ro-bind 
/etc/hosts /etc/hosts --ro-bind /tmp/ssh-WDBthw8Kiv3s/agent.8420 
/tmp/ssh-WDBthw8Kiv3s/agent.8420 --bind 

Re: [openstack-dev] [nova] A way to delete a record in 'host_mappings' table

2017-10-03 Thread Takashi Natsume

On 2017/10/03 23:12, Dan Smith wrote:

But the record in the 'host_mappings' table of the API database is not deleted
(I tried it with nova master 8ca24bf1ff80f39b14726aca22b5cf52603ea5a0).
The cell cannot be deleted if records for the cell remain in the 
'host_mappings' table.
(An error occurs with the message "There are existing hosts mapped to cell with uuid 
...".)

Is there any way (CLI or API) to delete the host record from the 
'host_mappings' table?
I couldn't find one.


Hmm, yeah, I bet this is a gap. Can you file a bug for this?


Dan, thank you for your reply.
I have filed the bug.

https://bugs.launchpad.net/nova/+bug/1721179
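
(Until a fix lands, the only workaround appears to be editing the API database
directly -- a hedged sketch, assuming direct access to the nova_api database
and a hypothetical host name 'compute-1':)

    # remove the stale mapping row so the cell can be deleted; use with care
    mysql nova_api -e "DELETE FROM host_mappings WHERE host = 'compute-1';"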

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] key_pair update on rebuild (a whole lot of conversations)

2017-10-03 Thread Clint Byrum
Excerpts from Sean Dague's message of 2017-10-03 16:16:48 -0400:
> There is currently a spec up for being able to specify a new key_pair
> name during the rebuild operation in Nova -
> https://review.openstack.org/#/c/375221/
> 
> For those not completely familiar with Nova operations, rebuild triggers
> the "reset this vm to initial state" by throwing out all the disks, and
> rebuilding them from the initial glance images. It does however keep the
> IP address and device models when you do that. So it's useful for
> ephemeral but repeating workloads, where you'd rather not have the
> network information change out from under you.
> 
> The spec is a little vague about when this becomes really useful,
> because this will not save you from "I lost my private key, and I have
> important data on that disk". Because the disk is destroyed. That's the
> point of rebuild. We once added this preserve_ephemeral flag to rebuild
for TripleO on ironic, but it's so nasty we've scoped it to only work
> with ironic backends. Ephemeral should mean ephemeral.
> 

Let me take a moment to apologize for that feature. It was the worst idea
we had in TripleO, even worse than the name. ;)

> Rebuild bypasses the scheduler. A rebuilt server stays on the same host
> as it was before, which means the operation has a good chance of being
> faster than a DELETE + CREATE, as the image cache on that host should
already have the base image for your instance.
> 

There are some pros, but for the most part I'd rather train my users
to be creating new instances than train them to cling to fixed IPs and
single compute node resources. It's a big feature, and obviously we've
given it to users so they use it. But that doesn't mean it's the best
use of Nova development's time to be supporting it, nor is it the most
scalable way for users to interact with a cloud.

A trade-off for instance, is that a rebuilding server is unavailable while
rebuilding. The user cannot choose how long that server is unavailable,
or choose to roll back and make it available if something goes wrong. It's
rebuilding until it isn't. A new server, spun up somewhere else, can be
fully prepared before any switch is made. One of the best things about
being a cloud operator is that you put more onus on the users to fix
their own problems, and give them lots of tools to do it. But while a
server is being rebuilt it is entirely _the operator's problem_.
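
(To make that concrete, a sketch of the replace-rather-than-rebuild pattern
with the standard CLI; server names, image, flavor, and address are all
illustrative:)

    # spin up the replacement alongside the old server
    openstack server create --image new-image --flavor m1.small app-v2
    # verify app-v2 is healthy, then move the floating IP over
    openstack server remove floating ip app-v1 203.0.113.10
    openstack server add floating ip app-v2 203.0.113.10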

Also as an operator, while I appreciate that it's quick on that compute
node, I'd rather new servers be scheduled to the places that my scheduler
rules say they should go. I will at times want to drain a compute node,
and the longer the pet servers stick around and are rebuilt, the more
likely I am to have to migrate them forcibly.

> = Where I think we are? =
> 
> I think with all this data we're at the following:
> 
> Q: Should we add this to rebuild
> A: Yes, probably - after some enhancement to the spec *
> 
> * - we really should have much better use cases about the situations it
> is expected to be used in. We spend a lot of time 2 and 3 years out
> trying to figure out how anyone would ever use a feature, and adding
> another one without this doesn't seem good
> 
> Q: should this also be on reboot?
> A: NO - it would be too fragile
> 
> 
> I also think figuring out a way to get Nova out of the key storage
> business (which it really shouldn't be in) would be good. So if anyone
> wants to tackle Nova using Barbican for keys, that would be ++. Rebuild
> doesn't wait on that, but Barbican urls for keys seems like a much
> better world to be in.
> 

The keys in question are public, though. Barbican is a fantastic tool for
storing _secret_ keys, but it feels like a massive amount of overkill for this
tiny blob of public data.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] key_pair update on rebuild (a whole lot of conversations)

2017-10-03 Thread Ben Nemec



On 10/03/2017 03:16 PM, Sean Dague wrote:

= Where I think we are? =

I think with all this data we're at the following:

Q: Should we add this to rebuild
A: Yes, probably - after some enhancement to the spec *

* - we really should have much better use cases about the situations it
is expected to be used in. We spend a lot of time 2 and 3 years out
trying to figure out how anyone would ever use a feature, and adding
another one without this doesn't seem good


Here's an example from my use: I create a Heat stack, then realize I 
deployed some of the instances with the wrong keypair.  I'd rather not 
tear down the entire stack just to fix that, and being able to change 
keys on rebuild would allow me to avoid doing so.  I can rebuild a 
Heat-owned instance without causing any trouble, but I can't re-create it.


I don't know how common this is, but it's definitely something that has 
happened to me in the past.




Q: should this also be on reboot?
A: NO - it would be too fragile


I also think figuring out a way to get Nova out of the key storage
business (which it really shouldn't be in) would be good. So if anyone
wants to tackle Nova using Barbican for keys, that would be ++. Rebuild
doesn't wait on that, but Barbican urls for keys seems like a much
better world to be in.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Security of Meta-Data

2017-10-03 Thread Joshua Harlow

I would treat the metadata service as not secure.

From the Amazon docs (the equivalent can be said about OpenStack):

'''
Important

Although you can only access instance metadata and user data from within 
the instance itself, the data is not protected by cryptographic methods. 
Anyone who can access the instance can view its metadata. Therefore, you 
should take suitable precautions to protect sensitive data (such as 
long-lived encryption keys). You should not store sensitive data, such 
as passwords, as user data.

'''

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

So private keys would be a no-no, public keys would be ok (since they 
are public anyway).
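
(It's easy to see why: any process on the instance can fetch the whole blob
with no credentials at all, e.g.:)

    curl http://169.254.169.254/openstack/latest/meta_data.json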


Giuseppe de Candia wrote:

Hi Folks,


Are there any documented conventions regarding the security model for
MetaData?


Note that CloudInit allows passing user and ssh service public/private
keys via MetaData service (or ConfigDrive). One assumes it must be
secure, but I have not found a security model or documentation.


My understanding of the Neutron reference implementation is that
MetaData requests are HTTP (not HTTPS) and go from the VM to the
MetaData proxy on the Network Node (after which they are proxied to Nova
meta-data API server). The path from VM to Network Node using HTTP
cannot guarantee confidentiality and is also susceptible to
Man-in-the-Middle attacks.

Some Neutron drivers proxy Metadata requests locally from the node
hosting the VM that makes the query. I have mostly seen this
presented/motivated as a way of removing dependency on the Network node,
but it should also increase security. Yet, I have not seen explicit
discussions of the security model, nor any attempt to set a standard for
security of the meta-data.

Finally, there do not seem to be granular controls over what meta-data
is presented over ConfigDrive (when enabled) vs. meta-data REST API. As
an example, Nova vendor data is presented over both, if both are
enabled; config drive is presumably more secure.

thanks,
Pino


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable] Prepping for the stable/newton EOL

2017-10-03 Thread Tony Breeds
Hi All,
This is a quick update on the process for tagging stable/newton as
EOL:

The published[1][2] timeline is:
Sep 29 : Final newton library releases
Oct 09 : stable/newton branches enter Phase III
Oct 11 : stable/newton branches get tagged EOL

Given that those key dates were a little disrupted I'm proposing adding
a week to each so the new timeline looks like:
Oct 08 : Final newton library releases
Oct 16 : stable/newton branches enter Phase III
Oct 18 : stable/newton branches get tagged EOL

The transition to Phase III is important to set expectations about which
backports are applicable while we process the EOL.

I'll prep the list of repos that will be tagged EOL real soon now for
review.

Yours Tony.

[1] https://releases.openstack.org/index.html 
[2] https://releases.openstack.org/queens/schedule.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][election] non-candidacy for TC

2017-10-03 Thread Steve Martinelli
Folks, due to changes in my day job I will not be running in the next TC
election. I still intend to contribute to OpenStack whenever possible. I
look forward to seeing how the community continues to grow, change, and
approach new challenges.

I'd like to encourage others to step up to the challenge and run. It's an
excellent experience to learn more about yourself, OpenStack, and
governance of a large open source project.

Thanks for your time,
Steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [simplification] PTG Recap

2017-10-03 Thread Mike Perez
# Simplification PTG Recap

## Introduction
This goal was kicked off at the May 2017 leadership workshop [1]. We have been
collecting feedback from the community that OpenStack can be complex for
deployers, contributors, and ultimately the people we’re all supporting: our
consumers of clouds. This goal is purposely broad in response to that feedback.
As a community, we must work together and, from an objective standpoint, set
proper goals for this never-ending effort.

[1] - https://wiki.openstack.org/wiki/Governance/Foundation/8Mar2017BoardMeeting

## Moving Forward
We have a growing thread [1] on this topic, and the dev digest summary [2].
Let's move the discussion to this thread for better focus.

Let's recognize we’re not going to solve this problem with just some group or
code. It’s going to be never-ending.

So far with the etherpad, we have allowed the community to identify some of the
known things that make OpenStack complex. Some areas have more information than
others. Let's start research on the better-identified areas first. We can
always revisit the other areas as interest grows and more information is
brought forward.

The three areas are Installation, Operation, and Upgrade … otherwise known as
I.O.U.

Below are the areas, some snippets from the etherpad and then also from our
2017 user survey [3].


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-September/thread.html#122075
[2] - 
https://www.openstack.org/blog/2017/09/developer-mailing-list-digest-september-23-29-2017/
[3] - https://www.openstack.org/assets/survey/April2017SurveyReport.pdf

## Installation

### Etherpad summary
* Our documentation team is moving towards an effort of decentralizing install
guides and more.
* We’ve been bridging the gap between project names and service names with the
project navigator [1], and service-type-authority repository [2].

### User Survey Feedback
* What we have today is a variety of installation/deployment models.
* Need the installation to become easier—the architecture is still too complex
  right now.
* Installation, particularly around TripleO and HA UPGRADES deployments, are
  very complicated.
* A common deployment and lifecycle management tool/framework would make things
  easier. Having every distribution use its own tools (TripleO, Fuel, Crowbar,
  ...) really doesn’t help. And yes, I know that this is not OpenStack’s fault,
  but if the community unites behind one tool (or maybe two), we could put some
  pressure on the vendors.
* Automate installation. Require consistent installation between projects.
* Standardized automated deployment methods to minimize the risk of splitting
  the developments in vendor-specific branches.
* Deployment is still a nightmare of complexity and riddled with failure unless
  you are covered in scars from previous deployments.
* Initial build-up needs to be much easier, such as using a simple scripted
  installer that analyzes the hardware and then can build a working OpenStack.
  When upgrades become available, it can do a rolling upgrade with 0 down time.

[1] - https://www.openstack.org/software/project-navigator/
[2] - http://git.openstack.org/cgit/openstack/service-types-authority/

## Upgrades

### Etherpad summary
* It’s easier to burn down clouds than to upgrade from Newton -> Ocata -> etc.
* It’s recognized things are getting better and will continue to improve,
  assuming operators partner with Dev, as with the skip-level upgrade effort.
* More requests on publishing binaries. Let's refer back to our discussion on
  publishing binary images [2] and the dev digest version [3].

### User Survey Feedback

 End of Life Upstream
The lifecycle could use a lot of attention. Most large customers move slowly
and thus are running older versions, which are EOL upstream sometimes before
they even deploy them. Doing in-place upgrades is risky business with just
a one or two release jumps, so the prospect of trying to jump 4 or 5 releases
to get to a current, non-EOL version is daunting and results in either a lot of
outages or simply green-fielding new releases and letting the old die on the
vine. This causes significant operational overhead as getting tenants to move
to a new deploy entirely is a big ask, and you end up operating multiple
versions.

 Containerizing OpenStack Itself
Many organizations appear to be moving toward containerizing their OpenStack
control plane. Continued work on multi-version interoperability would allow
organizations to upgrade a lot more seamlessly and rapidly by deploying
newer-versioned containers in parallel with their existing older-versioned
containers. And it may have a profoundly positive effect on the upgrade and
lifecycle for larger deployments.

 Bugs
* The biggest challenge is to upgrade the production system since there are
  a lot of dependencies and bugs that we are facing.
* Releases need more feature and bugfix backporting.

 Longer Development Cycles
Stop coming out with all 

Re: [openstack-dev] [nova] key_pair update on rebuild (a whole lot of conversations)

2017-10-03 Thread Michael Still
I think new-keypair-on-rebuild makes sense for some forms of key rotation
as well. For example, I've worked with a big data ironic customer who uses
rebuild to deploy new OS images onto their ironic managed machines.
Presumably if they wanted to do a keypair rotation they'd do it in a very
similar way.

So yes, I think you've reached the right conclusion here. Thanks for your
work Sean.

Michael




On Wed, Oct 4, 2017 at 9:06 AM, Matt Riedemann  wrote:

> On 10/3/2017 3:16 PM, Sean Dague wrote:
>
>> There is currently a spec up for being able to specify a new key_pair
>> name during the rebuild operation in Nova -
>> https://review.openstack.org/#/c/375221/
>>
>> For those not completely familiar with Nova operations, rebuild triggers
>> the "reset this vm to initial state" by throwing out all the disks, and
>> rebuilding them from the initial glance images. It does however keep the
>> IP address and device models when you do that. So it's useful for
>> ephemeral but repeating workloads, where you'd rather not have the
>> network information change out from under you.
>>
>
> We also talked quite a bit about rebuild with volume-backed instances
> today, and the fact the root disk isn't replaced during rebuild in that
> case, for which there are many reported bugs...
>
>
>> The spec is a little vague about when this becomes really useful,
>> because this will not save you from "I lost my private key, and I have
>> important data on that disk". Because the disk is destroyed. That's the
>> point of rebuild. We once added this preserve_ephemeral flag to rebuild
>> for TripleO on ironic, but it's so nasty we've scoped it to only work
>> with ironic backends. Ephemeral should mean ephemeral.
>>
>> Rebuild bypasses the scheduler. A rebuilt server stays on the same host
>> as it was before, which means the operation has a good chance of being
>> faster than a DELETE + CREATE, as the image cache on that host should
>> already have the base image for your instance.
>>
>
> It also means no chances for NoValidHost or resource claim failures.
>
>
>
>> A bunch of data was collected today in a lot of different IRC channels
>> (#openstack-nova, #openstack-infra, #openstack-operators).
>>
>> = OpenStack Operators =
>>
>> mnaser said that for their customers this would be useful. Keys get lost
>> often, but keeping the IP is actually valuable. They would also like this.
>>
>> penick said that for their existing environment, they have a workflow
>> where this would be useful. But they are moving away from using nova for
>> key distribution because in Nova keys are user owned, which actually
>> works poorly given that everything else is project owned. So they are
>> building something to do key distribution after boot in the guest not
>> using nova's metadata.
>>
>> Lots of people said they didn't use nova's keypair interfaces, they just
>> did it all in config management after the fact.
>>
>> = Also on reboot? =
>>
>> Because the reason people said they wanted it was: "I lost my private
>> key", the question at PTG was "does that mean you want it on reboot?"
>>
>> But as we dive through the constraints of that, people that build "pet"
>> VMs typically delete or disable cloud-init (or similar systems) after
>> first boot. Without that kind of agent, this isn't going to work anyway.
>>
>> So also on reboot seems very fragile and unuseful.
>>
>> = Infra =
>>
>> We asked the infra team if this is useful to them, the answer was no.
>> What would be useful to them is if keypairs could be updated. They use a
>> symbolic name for a keypair but want to do regular key rotation. Right
>> now they do this by deleting then recreating keypairs, but that does
>> mean there is a window where there is no keypair with that name, so
>> server creates fail.
>>
>> It is agreed that something supporting key rotation in the future would
>> be handy, that's not in this scope.
>>
>> = Barbican =
>>
>> In the tradition of making a simple fix a generic one, it does look like
>> there is a longer term part of this where Nova should really be able to
>> specify a Barbican resource url for a key so that things like rotation
>> could be dealt with in a system that specializes in that. It also would
>> address the very weird oddity of user vs. project scoping.
>>
>> That's a bigger more nebulous thing. Other folks would need to be
>> engaged on that one.
>>
>>
>> = Where I think we are? =
>>
>> I think with all this data we're at the following:
>>
>> Q: Should we add this to rebuild
>> A: Yes, probably - after some enhancement to the spec *
>>
>> * - we really should have much better use cases about the situations it
>> is expected to be used in. We spend a lot of time 2 and 3 years out
>> trying to figure out how anyone would ever use a feature, and adding
>> another one without this doesn't seem good
>>
>> Q: should this also be on reboot?
>> A: NO - it would be too fragile
>>
>>
>> I also think figuring out a way to get Nova out of 

Re: [openstack-dev] [TripleO] Configure SR-IOV VFs in tripleo

2017-10-03 Thread Moshe Levi


> -Original Message-
> From: Saravanan KR [mailto:skram...@redhat.com]
> Sent: Tuesday, October 3, 2017 1:36 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [TripleO] Configure SR-IOV VFs in tripleo
> 
> On Tue, Sep 26, 2017 at 3:37 PM, Moshe Levi 
> wrote:
> > Hi  all,
> >
> >
> >
> > While working on the tripleo-ovs-hw-offload work, I encountered the
> > following issue with SR-IOV.
> >
> >
> >
> > I added -e ~/heat-templates/environments/neutron-sriov.yaml -e
> > ~/heat-templates/environments/host-config-and-reboot.yaml to the
> > overcloud-deploy.sh.
> >
> > The compute nodes are configured with the intel_iommu=on kernel option
> > and the computes are rebooted as expected;
> >
> > then tripleo::host::sriov will create /etc/sysconfig/allocate_vfs
> > to configure the SR-IOV VFs. It seems to require an additional reboot for
> > the SR-IOV VFs to be created. Is that expected behavior? Am I doing
> > something wrong?
> 
> The file allocate_vfs is required for the subsequent reboots, but during the
> deployment, the vfs are created by puppet-tripleo [1]. No additional reboot
> required for creating VFs.
Yes, I am not sure what went wrong in my first attempt, but now it is working 
as expected.
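
(For context, creating the VFs ultimately amounts to a sysfs write, which is
what puppet-tripleo does at deploy time -- a sketch with an illustrative NIC
name and VF count:)

    # allocate 8 VFs on the ens1f0 PF; takes effect without a reboot
    echo 8 > /sys/class/net/ens1f0/device/sriov_numvfs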
 
> 
> Regards,
> Saravanan KR
> 
> [1] https://github.com/openstack/puppet-tripleo/blob/master/manifests/host/sriov.pp#L19
> 
> >
> > [1] https://github.com/openstack/puppet-tripleo/blob/80e646ff779a0f8e201daec0c927809224ed5fdb/manifests/host/sriov.pp
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc] [glance] backport patch doesn't seem to be applied to doc site

2017-10-03 Thread Clark Boylan
On Tue, Oct 3, 2017, at 04:30 PM, Ken'ichi Ohmichi wrote:
> Hi
> 
> I tried to install glance manually according to docs site[1] for Pike
> release, and current doc site doesn't show how to create glance
> database.
> The bug has been already fixed and the backport patch[2] also has been
> merged into Pike branch.
> So how do we get these backport patches applied to the actual docs site?
> 
> Thanks
> Kenichi Omichi
> 
> ---
> [1]: https://docs.openstack.org/glance/pike/install/install-ubuntu.html
> [2]: https://review.openstack.org/#/c/508279
> 

This likely failed to publish due to bugs in the Zuul v3 jobs. Merging
new changes to the branches should retrigger doc publishing. Otherwise
an infra root will need to manually retrigger the jobs, but we are fairly
swamped right now with various migration-related items, so it would be
great if merging a subsequent change could be used instead.

Hope this helps,
Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [doc] [glance] backport patch doesn't seem to be applied to doc site

2017-10-03 Thread Ken'ichi Ohmichi
Hi

I tried to install glance manually according to docs site[1] for Pike
release, and current doc site doesn't show how to create glance
database.
The bug has been already fixed and the backport patch[2] also has been
merged into Pike branch.
So how do we get these backport patches applied to the actual docs site?

Thanks
Kenichi Omichi

---
[1]: https://docs.openstack.org/glance/pike/install/install-ubuntu.html
[2]: https://review.openstack.org/#/c/508279
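
(For reference, the glance database-creation step is the standard sequence
used across the install guides -- presumably what the backport restores:)

    mysql -u root -p
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost'
      IDENTIFIED BY 'GLANCE_DBPASS';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'
      IDENTIFIED BY 'GLANCE_DBPASS';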

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread Monty Taylor

On 10/03/2017 11:17 AM, Dean Troyer wrote:

On Mon, Oct 2, 2017 at 9:13 PM, Jamie Lennox  wrote:

I'm really sad to announce that I'll be leaving the OpenStack community (at
least for a while), I've accepted a new position unrelated to OpenStack
that'll begin in a few weeks, and am going to be mostly on holiday until
then.


No, this just will not do. -2


I concur. Will a second -2 help?


Seriously, it has been a great pleasure to 'try to take over the
world' with you, at least that is what I recall as the goal we set in
Hong Kong.  The entire interaction of Python-based clients with
OpenStack has been made so much better with your contributions and
OpenStackClient would not have gotten as far as it has without them.


Your contributions and impact around these parts cannot be overstated. I 
have enjoyed our time working together and hold your work and 
contributions in extremely high regard.


Best of luck in your next endeavor - they are lucky to have you!

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] key_pair update on rebuild (a whole lot of conversations)

2017-10-03 Thread Matt Riedemann

On 10/3/2017 3:16 PM, Sean Dague wrote:

There is currently a spec up for being able to specify a new key_pair
name during the rebuild operation in Nova -
https://review.openstack.org/#/c/375221/

For those not completely familiar with Nova operations, rebuild triggers
the "reset this vm to initial state" by throwing out all the disks, and
rebuilding them from the initial glance images. It does however keep the
IP address and device models when you do that. So it's useful for
ephemeral but repeating workloads, where you'd rather not have the
network information change out from under you.
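
(For the CLI-inclined, today's rebuild is the sketch below; the spec would
additionally let you pass a new keypair name in the rebuild request.
Placeholders, not real values:)

    # reset the server to a fresh image; IP and host placement are kept
    openstack server rebuild --image <image-uuid> <server-name-or-uuid>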


We also talked quite a bit about rebuild with volume-backed instances 
today, and the fact the root disk isn't replaced during rebuild in that 
case, for which there are many reported bugs...




The spec is a little vague about when this becomes really useful,
because this will not save you from "I lost my private key, and I have
important data on that disk". Because the disk is destroyed. That's the
point of rebuild. We once added this preserve_ephemeral flag to rebuild
for TripleO on ironic, but it's so nasty we've scoped it to only work
with ironic backends. Ephemeral should mean ephemeral.

Rebuild bypasses the scheduler. A rebuilt server stays on the same host
as it was before, which means the operation has a good chance of being
faster than a DELETE + CREATE, as the image cache on that host should
already have the base image for your instance.


It also means no chances for NoValidHost or resource claim failures.



A bunch of data was collected today in a lot of different IRC channels
(#openstack-nova, #openstack-infra, #openstack-operators).

= OpenStack Operators =

mnaser said that for their customers this would be useful. Keys get lost
often, but keeping the IP is actually valuable. They would also like this.

penick said that for their existing environment, they have a workflow
where this would be useful. But they are moving away from using nova for
key distribution because in Nova keys are user owned, which actually
works poorly given that everything else is project owned. So they are
building something to do key distribution after boot in the guest not
using nova's metadata.

Lots of people said they didn't use nova's keypair interfaces, they just
did it all in config management after the fact.

= Also on reboot? =

Because the reason people said they wanted it was: "I lost my private
key", the question at PTG was "does that mean you want it on reboot?"

But as we dive through the constraints of that, people that build "pet"
VMs typically delete or disable cloud-init (or similar systems) after
first boot. Without that kind of agent, this isn't going to work anyway.

So also on reboot seems very fragile and unuseful.

= Infra =

We asked the infra team if this is useful to them, the answer was no.
What would be useful to them is if keypairs could be updated. They use a
symbolic name for a keypair but want to do regular key rotation. Right
now they do this by deleting then recreating keypairs, but that does
mean there is a window where there is no keypair with that name, so
server creates fail.
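
(Concretely, the rotation they do today is roughly the following, and the gap
between the two commands is the window in question; the keypair name and key
path are illustrative:)

    openstack keypair delete jenkins
    # window: server creates referencing "jenkins" fail here
    openstack keypair create --public-key ~/.ssh/new_key.pub jenkins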

It is agreed that something supporting key rotation in the future would
be handy, that's not in this scope.

= Barbican =

In the tradition of making a simple fix a generic one, it does look like
there is a longer term part of this where Nova should really be able to
specify a Barbican resource url for a key so that things like rotation
could be dealt with in a system that specializes in that. It also would
address the very weird oddity of user vs. project scoping.

That's a bigger more nebulous thing. Other folks would need to be
engaged on that one.


= Where I think we are? =

I think with all this data we're at the following:

Q: Should we add this to rebuild
A: Yes, probably - after some enhancement to the spec *

* - we really should have much better use cases about the situations it
is expected to be used in. We spend a lot of time 2 and 3 years out
trying to figure out how anyone would ever use a feature, and adding
another one without this doesn't seem good

Q: should this also be on reboot?
A: NO - it would be too fragile


I also think figuring out a way to get Nova out of the key storage
business (which it really shouldn't be in) would be good. So if anyone
wants to tackle Nova using Barbican for keys, that would be ++. Rebuild
doesn't wait on that, but Barbican urls for keys seems like a much
better world to be in.

-Sean



Sean, thanks for summarizing the various discussions had today. I've 
also included the operators list on this.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Alex Schultz
On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince  wrote:
>
>
> On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz  wrote:
>>
>> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince  wrote:
>> > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>> >> Hey Dan,
>> >>
>> >> Thanks for sending out a note about this. I have a few questions
>> >> inline.
>> >>
>> >> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince 
>> >> wrote:
>> >> > One of the things the TripleO containers team is planning on
>> >> > tackling
>> >> > in Queens is fully containerizing the undercloud. At the PTG we
>> >> > created
>> >> > an etherpad [1] that contains a list of features that need to be
>> >> > implemented to fully replace instack-undercloud.
>> >> >
>> >>
>> >> I know we talked about this at the PTG and I was skeptical that this
>> >> will land in Queens. With the exception of the Container's team
>> >> wanting this, I'm not sure there is an actual end user who is looking
>> >> for the feature so I want to make sure we're not just doing more work
>> >> because we as developers think it's a good idea.
>> >
>> > I've heard from several operators that they were actually surprised we
>> > implemented containers in the Overcloud first. Validating a new
>> > deployment framework on a single node Undercloud (for operators) before
>> > overtaking their entire cloud deployment has a lot of merit to it IMO.
>> > When you share the same deployment architecture across the
>> > overcloud/undercloud it puts us in a better position to decide where to
>> > expose new features to operators first (when creating the undercloud or
>> > overcloud for example).
>> >
>> > Also, if you read my email again I've explicitly listed the
>> > "Containers" benefit last. While I think moving the undercloud to
>> > containers is a great benefit all by itself this is more of a
>> > "framework alignment" in TripleO and gets us out of maintaining huge
>> > amounts of technical debt. Re-using the same framework for the
>> > undercloud and overcloud has a lot of merit. It effectively streamlines
>> > the development process for service developers, and 3rd parties wishing
>> > to integrate some of their components on a single node. Why be forced
>> > to create a multi-node dev environment if you don't have to (aren't
>> > using HA for example).
>> >
>> > Let's be honest. While instack-undercloud helped solve the old "seed" VM
>> > issue it was outdated the day it landed upstream. The entire premise of
>> > the tool is that it uses old style "elements" to create the undercloud
>> > and we moved away from those as the primary means driving the creation
>> > of the Overcloud years ago at this point. The new 'undercloud_deploy'
>> > installer gets us back to our roots by once again sharing the same
>> > architecture to create the over and underclouds. A demo from long ago
>> > expands on this idea a bit: https://www.youtube.com/watch?v=y1qMDLAf26Q&t=5s
>> >
>> > In short, we aren't just doing more work because developers think it is
>> > a good idea. This has potential to be one of the most useful
>> > architectural changes in TripleO that we've made in years. Could
>> > significantly decrease our CI resources if we use it to replace the
>> > existing scenarios jobs which take multiple VMs per job. Is a building
>> > block we could use for other features like an HA undercloud. And yes,
>> > it does also have a huge impact on developer velocity in that many of
>> > us already prefer to use the tool as a means of streamlining our
>> > dev/test cycles to minutes instead of hours. Why spend hours running
>> > quickstart Ansible scripts when in many cases you can just doit.sh:
>> > https://github.com/dprince/undercloud_containers/blob/master/doit.sh
>> >
>>
>> So like I've repeatedly said, I'm not completely against it as I agree
>> what we have is not ideal.  I'm not -2, I'm -1 pending additional
>> information. I'm trying to be realistic and reduce our risk for this
>> cycle.
>
>
> This reduces our complexity greatly, I think, in that once it is completed it
> will allow us to eliminate two projects (instack and instack-undercloud) and
> the maintenance thereof. Furthermore, this dovetails nicely with the
> Ansible
>

I agree. So I think there are some misconceptions here about my thoughts
on this effort. I am not against this effort. I am for this effort and
wish to see more of it. I want to see the effort communicated publicly
via the ML and IRC meetings.  What I am against is switching the default
undercloud method until the containerization of the undercloud has the
appropriate test coverage and documentation to ensure it is on par
with what it is replacing.  Does this make sense?

>>
>>  IMHO doit.sh is not acceptable as an undercloud installer and
>> this is what I've been trying to point out as the actual impact to the
>> end user who has to use this thing.
>
>
> doit.sh is an example of where the effort is today. 

[openstack-dev] Security of Meta-Data

2017-10-03 Thread Giuseppe de Candia
Hi Folks,


Are there any documented conventions regarding the security model for
MetaData?


Note that CloudInit allows passing user and ssh service public/private keys
via MetaData service (or ConfigDrive). One assumes it must be secure, but I
have not found a security model or documentation.


My understanding of the Neutron reference implementation is that MetaData
requests are HTTP (not HTTPS) and go from the VM to the MetaData proxy on
the Network Node (after which they are proxied to Nova meta-data API
server). The path from VM to Network Node using HTTP cannot guarantee
confidentiality and is also susceptible to Man-in-the-Middle attacks.


Some Neutron drivers proxy Metadata requests locally from the node hosting
the VM that makes the query. I have mostly seen this presented/motivated as
a way of removing dependency on the Network node, but it should also
increase security. Yet, I have not seen explicit discussions of the
security model, nor any attempt to set a standard for security of the
meta-data.

Finally, there do not seem to be granular controls over what meta-data is
presented over ConfigDrive (when enabled) vs. meta-data REST API. As an
example, Nova vendor data is presented over both, if both are enabled;
config drive is presumably more secure.

thanks,
Pino
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] AWS IAM session

2017-10-03 Thread William M Edmonds
+1

Lance Bragstad  wrote on 10/03/2017 04:08:31 PM:
> Hey all,
>
> It was mentioned in today's keystone meeting [0] that it would be useful
> to go through AWS IAM (or even GKE) as a group. With all the recent
> policy discussions and work, it seems useful to get our eyes on another
> system. The idea would be to spend time using a video conference/screen
> share to go through and play with policy together. The end result should
> keep us focused on the implementations we're working on today, but also
> provide clarity for the long-term vision of OpenStack's RBAC system.
>
> Are you interested in attending? If so, please respond to the thread.
> Once we have some interest, we can gauge when to hold the meeting, which
> tools to use, and how to set up a test IAM account.
>
> Thanks,
>
> Lance
>
> [0]
> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.
> 2017-10-03-18.00.log.html#l-119
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][docs][i18n][ptls] PDFs for project-specific docs with unified doc builds

2017-10-03 Thread Ian Y. Choi

Hello Doug,

Thanks a lot for the details on the unified docs builds on the infra side; your 
links are really helpful for pointing out where to implement this.

I have written a first WIP spec: 
https://review.openstack.org/#/c/509297/
I would like to discuss it at the upcoming Docs team IRC meeting and update 
the spec with the feedback :)



With many thanks,

/Ian

Doug Hellmann wrote on 9/25/2017 10:50 PM:

[Topic tags added to subject line]

Excerpts from Ian Y. Choi's message of 2017-09-22 07:29:23 +0900:

Hello,

"Build PDF docs from rst-based guide documents" [1] was implemented in Ocata
cycle, and I have heard that there were a small conversation at the
Denver PTG
regarding getting PDFs for project-specific docs setup to help translations.

In my opinion, it would be a nice idea to extend [1] to project-specific
docs with unified doc builds. It seems that unified doc builds have been
much enhanced with [2]. Now I think having PDF build functionalities in
unified
doc builds would be a way to easily have PDFs for project-specific docs.

Would someone have any idea on this or help it with some good pointers?

The job-template for the unified doc build job is in
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/openstack-publish-jobs.yaml#n22

It uses the "docs" macro, which invokes the script
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/run-docs.sh

I think we would want to place any new logic for extending the build in
that script, although we should coordinate any changes with the Zuul v3
rollout because as part of that I have seen some suggestions to change
the expected interface for building documentation and we want to make
sure any changes we make will work with the updated interface.
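
A sketch of the kind of addition that could go into that script -- building a
PDF alongside the HTML docs. The build directory and Makefile target are
assumptions based on standard Sphinx behavior, not a settled interface:

    # after the existing HTML build, produce LaTeX sources and a PDF
    sphinx-build -b latex doc/source doc/build/latex
    make -C doc/build/latex all-pdf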

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Queens spec review sprint next week

2017-10-03 Thread Matt Riedemann

On 10/2/2017 8:28 AM, Matt Riedemann wrote:

On 9/28/2017 6:45 PM, Matt Riedemann wrote:

Let's do a Queens spec review sprint.

What day works for people that review specs?

Monday came up in the team meeting today, but Tuesday could be good 
too since Mondays are generally evil.




Let's do the Queens spec review sprint on Tuesday, October 3rd. If you have a 
spec up for review, please try to be in the #openstack-nova channel on 
freenode IRC in case reviewers have questions about your proposal.




Thanks to everyone that helped to review specs today.

We approved 9 specs which is a nice chunk, and a lot of other specs got 
review feedback on them.


Remember that October 19th is the spec freeze.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [storyboard] Should New Projects Be Using Storyboard?

2017-10-03 Thread Kendall Nelson
Yes we definitely need to update these places to stop pointing new projects
to Launchpad. Luckily the last few I've seen created haven't all gone to
Launchpad. That being said, I have a work item to go through and update all
mentions of Launchpad to point to Storyboard instead.

If anyone sees other places that need updating please let myself or other
Storyboard members know in the #storyboard channel, or, if you have a few
spare minutes, feel free to update it yourself. We'd greatly appreciate it.

Thanks Mike for bringing attention to this!

-Kendall Nelson (diablo_rojo)

On Tue, Oct 3, 2017, 3:57 PM Mike Perez  wrote:

> I noticed that the project creator [1] and cookiecutter [2] promote using
> launchpad. If we're migrating projects to storyboard today, should we stop
> promoting launchpad for new projects?
>
> [1] - https://docs.openstack.org/infra/manual/creators.html
> [2] -
> https://git.openstack.org/cgit/openstack-dev/cookiecutter/tree/cookiecutter.json#n6
>
> --
> Mike Perez
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Dan Prince
On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz  wrote:

> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince  wrote:
> > On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
> >> Hey Dan,
> >>
> >> Thanks for sending out a note about this. I have a few questions
> >> inline.
> >>
> >> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince 
> >> wrote:
> >> > One of the things the TripleO containers team is planning on
> >> > tackling
> >> > in Queens is fully containerizing the undercloud. At the PTG we
> >> > created
> >> > an etherpad [1] that contains a list of features that need to be
> >> > implemented to fully replace instack-undercloud.
> >> >
> >>
> >> I know we talked about this at the PTG and I was skeptical that this
> >> will land in Queens. With the exception of the Container's team
> >> wanting this, I'm not sure there is an actual end user who is looking
> >> for the feature so I want to make sure we're not just doing more work
> >> because we as developers think it's a good idea.
> >
> > I've heard from several operators that they were actually surprised we
> > implemented containers in the Overcloud first. Validating a new
> > deployment framework on a single node Undercloud (for operators) before
> > overtaking their entire cloud deployment has a lot of merit to it IMO.
> > When you share the same deployment architecture across the
> > overcloud/undercloud it puts us in a better position to decide where to
> > expose new features to operators first (when creating the undercloud or
> > overcloud for example).
> >
> > Also, if you read my email again I've explicitly listed the
> > "Containers" benefit last. While I think moving the undercloud to
> > containers is a great benefit all by itself this is more of a
> > "framework alignment" in TripleO and gets us out of maintaining huge
> > amounts of technical debt. Re-using the same framework for the
> > undercloud and overcloud has a lot of merit. It effectively streamlines
> > the development process for service developers, and 3rd parties wishing
> > to integrate some of their components on a single node. Why be forced
> > to create a multi-node dev environment if you don't have to (aren't
> > using HA for example).
> >
> > Let's be honest. While instack-undercloud helped solve the old "seed" VM
> > issue it was outdated the day it landed upstream. The entire premise of
> > the tool is that it uses old style "elements" to create the undercloud
> > and we moved away from those as the primary means driving the creation
> > of the Overcloud years ago at this point. The new 'undercloud_deploy'
> > installer gets us back to our roots by once again sharing the same
> > architecture to create the over and underclouds. A demo from long ago
> > expands on this idea a bit: https://www.youtube.com/watch?v=y1qMDLAf26Q&t=5s
> >
> > In short, we aren't just doing more work because developers think it is
> > a good idea. This has potential to be one of the most useful
> > architectural changes in TripleO that we've made in years. Could
> > significantly decrease our CI resources if we use it to replace the
> > existing scenarios jobs which take multiple VMs per job. Is a building
> > block we could use for other features like an HA undercloud. And yes,
> > it does also have a huge impact on developer velocity in that many of
> > us already prefer to use the tool as a means of streamlining our
> > dev/test cycles to minutes instead of hours. Why spend hours running
> > quickstart Ansible scripts when in many cases you can just doit.sh:
> > https://github.com/dprince/undercloud_containers/blob/master/doit.sh
> >
>
> So like I've repeatedly said, I'm not completely against it as I agree
> what we have is not ideal.  I'm not -2, I'm -1 pending additional
> information. I'm trying to be realistic and reduce our risk for this
> cycle.


This reduces our complexity greatly, I think, in that once it is completed it
will allow us to eliminate two projects (instack and instack-undercloud) and
the maintenance thereof. Furthermore, this dovetails nicely with the
Ansible


>  IMHO doit.sh is not acceptable as an undercloud installer and
> this is what I've been trying to point out as the actual impact to the
> end user who has to use this thing.


doit.sh is an example of where the effort is today. It is essentially the
same stuff we document online here:
http://tripleo.org/install/containers_deployment/undercloud.html.
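
(For the curious, the documented flow boils down to something like the
following -- flags are illustrative, see the link above for the real
invocation:)

    sudo openstack undercloud deploy \
      --templates=/usr/share/openstack-tripleo-heat-templates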

Similar to quickstart, it is just something meant to help you set up a dev
environment.


> We have an established
> installation method for the undercloud, that while isn't great, isn't
> a bash script with git fetches, etc.  So as for the implementation,
> this is what I want to see properly fleshed out prior to accepting
> this feature as complete for Queens (and the new default).


Of course the feature would need to prove itself before it becomes the new
default Undercloud. I'm trying to build consensus 

Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Alex Schultz
On Tue, Oct 3, 2017 at 1:50 PM, Alex Schultz  wrote:
> On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince  wrote:
>> On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>>> Hey Dan,
>>>
>>> Thanks for sending out a note about this. I have a few questions
>>> inline.
>>>
>>> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince 
>>> wrote:
>>> > One of the things the TripleO containers team is planning on
>>> > tackling
>>> > in Queens is fully containerizing the undercloud. At the PTG we
>>> > created
>>> > an etherpad [1] that contains a list of features that need to be
>>> > implemented to fully replace instack-undercloud.
>>> >
>>>
>>> I know we talked about this at the PTG and I was skeptical that this
>>> will land in Queens. With the exception of the Container's team
>>> wanting this, I'm not sure there is an actual end user who is looking
>>> for the feature so I want to make sure we're not just doing more work
>>> because we as developers think it's a good idea.
>>
>> I've heard from several operators that they were actually surprised we
>> implemented containers in the Overcloud first. Validating a new
>> deployment framework on a single node Undercloud (for operators) before
>> overtaking their entire cloud deployment has a lot of merit to it IMO.
>> When you share the same deployment architecture across the
>> overcloud/undercloud it puts us in a better position to decide where to
>> expose new features to operators first (when creating the undercloud or
>> overcloud for example).
>>
>> Also, if you read my email again I've explicitly listed the
>> "Containers" benefit last. While I think moving the undercloud to
>> containers is a great benefit all by itself this is more of a
>> "framework alignment" in TripleO and gets us out of maintaining huge
>> amounts of technical debt. Re-using the same framework for the
>> undercloud and overcloud has a lot of merit. It effectively streamlines
>> the development process for service developers, and 3rd parties wishing
>> to integrate some of their components on a single node. Why be forced
>> to create a multi-node dev environment if you don't have to (aren't
>> using HA for example).
>>
>> Let's be honest. While instack-undercloud helped solve the old "seed" VM
>> issue it was outdated the day it landed upstream. The entire premise of
>> the tool is that it uses old style "elements" to create the undercloud
>> and we moved away from those as the primary means driving the creation
>> of the Overcloud years ago at this point. The new 'undercloud_deploy'
>> installer gets us back to our roots by once again sharing the same
>> architecture to create the over and underclouds. A demo from long ago
>> expands on this idea a bit: https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
>>
>> In short, we aren't just doing more work because developers think it is
>> a good idea. This has potential to be one of the most useful
>> architectural changes in TripleO that we've made in years. Could
>> significantly decrease our CI resources if we use it to replace the
>> existing scenarios jobs which take multiple VMs per job. It is a building
>> block we could use for other features like an HA undercloud. And yes,
>> it does also have a huge impact on developer velocity in that many of
>> us already prefer to use the tool as a means of streamlining our
>> dev/test cycles to minutes instead of hours. Why spend hours running
>> quickstart Ansible scripts when in many cases you can just doit.sh?
>> https://github.com/dprince/undercloud_containers/blob/master/doit.sh
>>
>
> So like I've repeatedly said, I'm not completely against it as I agree
> what we have is not ideal.  I'm not -2, I'm -1 pending additional
> information. I'm trying to be realistic and reduce our risk for this
> cycle.   IMHO doit.sh is not acceptable as an undercloud installer and
> this is what I've been trying to point out as the actual impact to the
> end user who has to use this thing. We have an established
> installation method for the undercloud which, while it isn't great, isn't
> a bash script with git fetches, etc.  So as for the implementation,
> this is what I want to see properly fleshed out prior to accepting
> this feature as complete for Queens (and the new default).  I would
> like to see a plan of what features need to be added (eg. the stuff on
> the etherpad), folks assigned to do this work, and estimated
> timelines.  Given that we shouldn't be making major feature changes
> after M2 (~9 weeks), I want to get an understanding of what is
> realistically going to make it.  If after reviewing the initial
> details we find that it's not actually going to make M2, then let's
> agree to this now rather than trying to force it in at the end.
>
> I know you've been a great proponent of the containerized undercloud
> and I agree it offers a lot more for development efforts. But I just
> want to make sure that we are getting all the feedback we can 

[openstack-dev] [nova] key_pair update on rebuild (a whole lot of conversations)

2017-10-03 Thread Sean Dague
There is currently a spec up for being able to specify a new key_pair
name during the rebuild operation in Nova -
https://review.openstack.org/#/c/375221/

For those not completely familiar with Nova operations, rebuild triggers
the "reset this vm to initial state" by throwing out all the disks, and
rebuilding them from the initial glance images. It does however keep the
IP address and device models when you do that. So it's useful for
ephemeral but repeating workloads, where you'd rather not have the
network information change out from under you.

The spec is a little vague about when this becomes really useful,
because this will not save you from "I lost my private key, and I have
important data on that disk". Because the disk is destroyed. That's the
point of rebuild. We once added this preserve_ephemeral flag to rebuild
for TripleO on ironic, but it's so nasty we've scoped it to only work
with ironic backends. Ephemeral should mean ephemeral.

Rebuild bypasses the scheduler. A rebuilt server stays on the same host
as it was before, which means the operation has a good chance of being
faster than a DELETE + CREATE, as the image cache on that host should
already have the base image for your instance.
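
For concreteness, here is a minimal sketch of what today's rebuild call
looks like through python-novaclient. The auth URL, credentials, and UUIDs
are placeholder assumptions, and the key_name parameter the spec proposes
does not exist yet:

    from keystoneauth1 import loading, session
    from novaclient import client

    # Placeholder credentials and endpoint -- adjust for a real cloud.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='demo', password='secret', project_name='demo',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.get('SERVER_UUID')  # hypothetical server UUID
    # Disks are thrown away and re-imaged from the given image, while the
    # IP address, device models, and host placement are all preserved.
    nova.servers.rebuild(server, 'IMAGE_UUID')
    # The spec under review would add a key_name kwarg here; today the
    # keypair cannot be changed on rebuild.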

A bunch of data was collected today in a lot of different IRC channels
(#openstack-nova, #openstack-infra, #openstack-operators).

= OpenStack Operators =

mnaser said that for their customers this would be useful. Keys get lost
often, but keeping the IP is actually valuable. They would also like this.

penick said that for their existing environment, they have a workflow
where this would be useful. But they are moving away from using nova for
key distribution because in Nova keys are user owned, which actually
works poorly given that everything else is project owned. So they are
building something to do key distribution after boot in the guest not
using nova's metadata.

Lots of people said they didn't use nova's keypair interfaces, they just
did it all in config management after the fact.

= Also on reboot? =

Because the reason people said they wanted it was: "I lost my private
key", the question at PTG was "does that mean you want it on reboot?"

But as we dive through the constraints of that, people that build "pet"
VMs typically delete or disable cloud-init (or similar systems) after
first boot. Without that kind of agent, this isn't going to work anyway.

So updating the key on reboot as well seems very fragile and not useful.

= Infra =

We asked the infra team if this is useful to them, the answer was no.
What would be useful to them is if keypairs could be updated. They use a
symbolic name for a keypair but want to do regular key rotation. Right
now they do this by deleting then recreating keypairs, but that does
mean there is a window where there is no keypair with that name, so
server creates fail.
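
To make that window concrete, here is a minimal sketch of the current
delete-then-recreate rotation via python-novaclient; the credentials, key
name, and key path are all placeholder assumptions:

    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',  # placeholder cloud
        username='infra', password='secret', project_name='infra',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    new_pub = open('/path/to/new_key.pub').read()  # hypothetical new key

    # Rotation today: delete, then recreate under the same symbolic name.
    nova.keypairs.delete('infra-root')
    # ...any server create referencing 'infra-root' in this window fails,
    # which is exactly the gap described above...
    nova.keypairs.create('infra-root', public_key=new_pub)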

It is agreed that something supporting key rotation in the future would
be handy, but that's not in this scope.

= Barbican =

In the tradition of making a simple fix a generic one, it does look like
there is a longer term part of this where Nova should really be able to
specify a Barbican resource url for a key so that things like rotation
could be dealt with in a system that specializes in that. It also would
address the very weird oddity of user vs. project scoping.

That's a bigger more nebulous thing. Other folks would need to be
engaged on that one.


= Where I think we are? =

I think with all this data we're at the following:

Q: Should we add this to rebuild?
A: Yes, probably - after some enhancement to the spec *

* - we really should have much better use cases for the situations in
which it is expected to be used. We already spend a lot of time, 2 and 3
years out, trying to figure out how anyone would ever use a feature, and
adding another one without clear use cases doesn't seem good

Q: Should this also be on reboot?
A: NO - it would be too fragile


I also think figuring out a way to get Nova out of the key storage
business (which it really shouldn't be in) would be good. So if anyone
wants to tackle Nova using Barbican for keys, that would be ++. Rebuild
doesn't wait on that, but Barbican urls for keys seems like a much
better world to be in.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread Mike Perez
On 11:17 Oct 03, Dean Troyer wrote:
> On Mon, Oct 2, 2017 at 9:13 PM, Jamie Lennox  wrote:
> > I'm really sad to announce that I'll be leaving the OpenStack community (at
> > least for a while), I've accepted a new position unrelated to OpenStack
> > that'll begin in a few weeks, and am going to be mostly on holiday until
> > then.
> 
> No, this just will not do. -2
> 
> Seriously, it has been a great pleasure to 'try to take over the
> world' with you, at least that is what I recall as the goal we set in
> Hong Kong.  The entire interaction of Python-based clients with
> OpenStack has been made so much better with your contributions and
> OpenStackClient would not have gotten as far as it has without them.
> Thank You.
> 
> dt
> 
> /me looking for one more post-Summit beer-debrief in the hotel lobby
> next month...

Yes, add me for some bourbon. Thank you, Jamie, for helping me with various
things in the keystone APIs. It has been a pleasure working with you.

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [policy] AWS IAM session

2017-10-03 Thread Lance Bragstad
Hey all,

It was mentioned in today's keystone meeting [0] that it would be useful
to go through AWS IAM (or even GKE) as a group. With all the recent
policy discussions and work, it seems useful to get our eyes on another
system. The idea would be to spend time using a video conference/screen
share to go through and play with policy together. The end result should
keep us focused on the implementations we're working on today, but also
provide clarity for the long-term vision of OpenStack's RBAC system.

Are you interested in attending? If so, please respond to the thread.
Once we have some interest, we can gauge when to hold the meeting, which
tools to use, and how to set up a test IAM account.

Thanks,

Lance

[0]
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-10-03-18.00.log.html#l-119




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-03 Thread milanisko k
+1 :)

--
milan

út 3. 10. 2017 v 18:02 odesílatel Vladyslav Drok 
napsal:

> +1 for Shiv
>
>
> On 3 Oct 2017 11:20 a.m., "Sam Betts (sambetts)" 
> wrote:
>
> +1,
>
>
>
> Sam
>
>
>
> On 03/10/2017, 08:21, "tua...@vn.fujitsu.com" 
> wrote:
>
>
>
> +1 , Yes, I definitely agree with you.
>
>
>
> Regards
>
> Tuan
>
>
>
> *From:* Nisha Agarwal [mailto:agarwalnisha1...@gmail.com]
> *Sent:* Tuesday, October 03, 2017 12:28 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for
> ironic-core
>
>
>
> +1
>
>
>
> Regards
>
> Nisha
>
>
>
> On Mon, Oct 2, 2017 at 11:13 PM, Loo, Ruby  wrote:
>
> +1, Thx Dmitry for the proposal and Shiv for doing all the work :D
>
>
>
> --ruby
>
>
>
> *From: *Dmitry Tantsur 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Monday, October 2, 2017 at 10:17 AM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *[openstack-dev] [ironic] Proposing Shivanand Tendulker for
> ironic-core
>
>
>
> Hi all!
>
> I would like to propose Shivanand (stendulker) to the core team.
>
> His stats have been consistently high [1]. He has given a lot of
> insightful reviews recently, and his expertise in the iLO driver is also
> very valuable for the team.
>
> As usual, please respond with your comments and objections.
>
> Thanks,
>
> Dmitry
>
>
> [1] http://stackalytics.com/report/contribution/ironic-group/90
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> The Secret Of Success is learning how to use pain and pleasure, instead
> of having pain and pleasure use you. If You do that you are in control
> of your life. If you don't life controls you.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Mike Perez
On 19:38 Oct 03, Jean-Philippe Evrard wrote:
> On Tue, Oct 3, 2017 at 5:40 PM, Monty Taylor  wrote:
> > Hey everybody!


 
> Hello,
> 
> I'd like to first thank everyone involved, doing this hard work.

Yes, thank you all very much for your hard work. I think we can all agree
"computers happen."

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storyboard] Should New Projects Be Using Storyboard?

2017-10-03 Thread Mike Perez
I noticed that the project creator [1] and cookiecutter [2] promote using
launchpad. If we're migrating projects to storyboard today, should we stop
promoting launchpad for new projects?

[1] - https://docs.openstack.org/infra/manual/creators.html
[2] - 
https://git.openstack.org/cgit/openstack-dev/cookiecutter/tree/cookiecutter.json#n6

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Alex Schultz
On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince  wrote:
> On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
>> Hey Dan,
>>
>> Thanks for sending out a note about this. I have a few questions
>> inline.
>>
>> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince 
>> wrote:
>> > One of the things the TripleO containers team is planning on
>> > tackling
>> > in Queens is fully containerizing the undercloud. At the PTG we
>> > created
>> > an etherpad [1] that contains a list of features that need to be
>> > implemented to fully replace instack-undercloud.
>> >
>>
>> I know we talked about this at the PTG and I was skeptical that this
>> will land in Queens. With the exception of the containers team
>> wanting this, I'm not sure there is an actual end user who is looking
>> for the feature so I want to make sure we're not just doing more work
>> because we as developers think it's a good idea.
>
> I've heard from several operators that they were actually surprised we
> implemented containers in the Overcloud first. Validating a new
> deployment framework on a single node Undercloud (for operators) before
> overtaking their entire cloud deployment has a lot of merit to it IMO.
> When you share the same deployment architecture across the
> overcloud/undercloud it puts us in a better position to decide where to
> expose new features to operators first (when creating the undercloud or
> overcloud for example).
>
> Also, if you read my email again I've explicitly listed the
> "Containers" benefit last. While I think moving the undercloud to
> containers is a great benefit all by itself this is more of a
> "framework alignment" in TripleO and gets us out of maintaining huge
> amounts of technical debt. Re-using the same framework for the
> undercloud and overcloud has a lot of merit. It effectively streamlines
> the development process for service developers, and 3rd parties wishing
> to integrate some of their components on a single node. Why be forced
> to create a multi-node dev environment if you don't have to (aren't
> using HA for example).
>
> Let's be honest. While instack-undercloud helped solve the old "seed" VM
> issue it was outdated the day it landed upstream. The entire premise of
> the tool is that it uses old style "elements" to create the undercloud
> and we moved away from those as the primary means driving the creation
> of the Overcloud years ago at this point. The new 'undercloud_deploy'
> installer gets us back to our roots by once again sharing the same
> architecture to create the over and underclouds. A demo from long ago
> expands on this idea a bit: https://www.youtube.com/watch?v=y1qMDLAf26Q=5s
>
> In short, we aren't just doing more work because developers think it is
> a good idea. This has potential to be one of the most useful
> architectural changes in TripleO that we've made in years. Could
> significantly decrease our CI resources if we use it to replace the
> existing scenarios jobs which take multiple VMs per job. It is a building
> block we could use for other features like an HA undercloud. And yes,
> it does also have a huge impact on developer velocity in that many of
> us already prefer to use the tool as a means of streamlining our
> dev/test cycles to minutes instead of hours. Why spend hours running
> quickstart Ansible scripts when in many cases you can just doit.sh?
> https://github.com/dprince/undercloud_containers/blob/master/doit.sh
>

So like I've repeatedly said, I'm not completely against it as I agree
what we have is not ideal.  I'm not -2, I'm -1 pending additional
information. I'm trying to be realistic and reduce our risk for this
cycle.   IMHO doit.sh is not acceptable as an undercloud installer and
this is what I've been trying to point out as the actual impact to the
end user who has to use this thing. We have an established
installation method for the undercloud which, while it isn't great, isn't
a bash script with git fetches, etc.  So as for the implementation,
this is what I want to see properly fleshed out prior to accepting
this feature as complete for Queens (and the new default).  I would
like to see a plan of what features need to be added (eg. the stuff on
the etherpad), folks assigned to do this work, and estimated
timelines.  Given that we shouldn't be making major feature changes
after M2 (~9 weeks), I want to get an understanding of what is
realistically going to make it.  If after reviewing the initial
details we find that it's not actually going to make M2, then let's
agree to this now rather than trying to force it in at the end.

I know you've been a great proponent of the containerized undercloud
and I agree it offers a lot more for development efforts. But I just
want to make sure that we are getting all the feedback we can before
continuing down this path.  Since, as you point out, a bunch of this
work is already available for consumption by developers, I don't see
making it the new default as a 

[openstack-dev] [all] Important information for people with in-repo Zuul v3 config

2017-10-03 Thread Monty Taylor

Hi everybody,

The partial rollback of Zuulv3 is in place now. Zuulv2 is acting as your 
gatekeeper once again. The status page for Zuulv2 can be found at
http://status.openstack.org/zuul and Zuulv3 can be found at 
http://zuulv3.openstack.org.


With the partial rollback of v3, we've left the v3 check pipeline 
configured for everyone so that new v3 jobs can be iterated on in 
preparation for rolling forward. Doing so leaves open a potential hole 
for breakage, so ...


If you propose any changes to your repos that include changes to zuul 
config files (.zuul.yaml or .zuul.d) - PLEASE make sure that Zuul v3 
runs check jobs and responds before approving the patch.


If you don't do that, you could have zuul v2 land a patch that
contains a syntax error that would result in invalid config for v3.
Note that this would break not only your repo - but all testing using 
Zuul v3 (in which case we would have to temporarily remove your 
repository from the v3 configuration or ask for immediate revert)!
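
A cheap local precaution is to run the config through a YAML parser before
pushing. This is a sketch of a hypothetical helper, not an official infra
tool, and it only catches parse errors; Zuul-level semantic problems will
still only show up in the v3 check results, so waiting for Zuul v3's vote
remains essential:

    # check_zuul_yaml.py -- parse in-repo Zuul config files with PyYAML.
    import glob
    import sys

    import yaml

    paths = glob.glob('.zuul.yaml') + glob.glob('.zuul.d/*.yaml')
    ok = True
    for path in paths:
        try:
            with open(path) as f:
                yaml.safe_load(f)
        except yaml.YAMLError as exc:
            print('%s failed to parse: %s' % (path, exc))
            ok = False
    sys.exit(0 if ok else 1)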


Keep in mind that as we work on diagnosing the issue that caused the 
rollback, we could be restarting v3, shutting it down for a bit or it 
could be wedged - so v3 might not respond.


Make sure you get a response from v3 on any v3 related patches. Please.

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 40

2017-10-03 Thread Chris Dent


( rendered: https://anticdent.org/tc-report-40.html )

This week opens OpenStack Technical Committee (TC) election season.
There's an [announcement email
thread](http://lists.openstack.org/pipermail/openstack-dev/2017-October/122933.html)
(note the followup with some corrections). Individuals in the
OpenStack community may self-nominate up until 2017-10-08, 23:45 UTC.
There are instructions for [how to submit your
candidacy](https://governance.openstack.org/election/#how-to-submit-your-candidacy).

If you are interested you should put yourself forward to run. The TC
is better when it has a mixture of voices and experiences. The
absolute time commitment is less than you probably think (you can
make it much more if you like) and no one is expected to be a world
leading expert in coding and deploying OpenStack. The required
experience is being engaged in, with, and by the OpenStack community.

Election season inevitably leads to questions of:

* what the TC _is designed_ to do
* what the TC _should_ do
* what the TC _actually_ did lately

A year ago Thierry published [What is the Role of the OpenStack
Technical Committee](https://ttx.re/role-of-the-tc.html):


Part of the reason why there are so many misconceptions about the
role of the TC is that its name is pretty misleading. The Technical
Committee is not primarily technical: most of the issues that the TC
tackles are open source project governance issues.


Then this year he wrote [Report on TC activity for the May-Oct 2017
membership](http://lists.openstack.org/pipermail/openstack-dev/2017-October/122962.html).

Combined, these go some distance to answering the design and actuality
questions.

The "should" question can be answered by the people who are able and
choose to run for the TC. Throughout the years people have taken
different approaches, some considering the TC a sort of reactive
judiciary that mediates and adjudicates disagreements while others
take the view that the TC should have a more active and executive
leadership role.

Some of this came up in [today's office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-03.log.html#t2017-10-03T09:01:27)
where I reported participating in a few conversations with people who
felt the TC was not relevant, so why run? The ensuing conversation may
be of interest if you're curious about the intersection of economics,
group dynamics, individualism versus consensualism in collaborative
environments, perception versus reality, and the need for leadership
and hard work.

# Other Topics

Conversations on Wednesday and Thursday of last week hit a couple of other
topics.

## LTS

On Wednesday the topic of [Long Term
Support](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-09-27.log.html#t2017-09-27T17:15:24)
came up again. There are effectively two camps:

* Those who wonder why this should be an upstream problem at all, as
  long as we are testing upgrades from N-1 we're doing what needs to
  be done.

* Those who think that if multiple companies are going to be working
  on LTS solutions anyway, wouldn't it be great to not duplicate
  effort?

And we hear reports of organizations that want LTS to exist, but are
not willing to dedicate resources to see it happen, evidently still
confusing large-scale open source with "yay! I get free stuff!".

## Overlapping Projects

On Thursday we discussed some of the mechanics and challenges when
dealing with [overlapping
projects](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-09-28.log.html#t2017-09-28T15:01:35)
in the form of Trove and a potential new database-related project with
the working title of "Hoard". Amongst other things there's discussion of
properly using the [service types 
authority](https://service-types.openstack.org/)
and effectively naming resources when there may be another thing that
wants to use a similar name for not quite the same purpose.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread William M Edmonds

Jamie Lennox  wrote on 10/02/2017 10:13:49 PM:
>
> Hi All,

> I'm really sad to announce that I'll be leaving the OpenStack
> community (at least for a while), I've accepted a new position
> unrelated to OpenStack that'll begin in a few weeks, and am going to
> be mostly on holiday until then.
>
> I want to thank everyone I've had the pleasure of working with over
> the last few years - but particularly the Keystone community. I feel
> we as a team and I personally grew a lot over that time, we made
> some amazing achievements, and I couldn't be prouder to have worked
> with all of you.
>
> Obviously I'll be around at least during the night for some of the
> Sydney summit and will catch up with some of you there, and
> hopefully see some of you at linux.conf.au. To everyone else, thank
> you and I hope we'll meet again.
>
> Jamie Lennox, Stacker.
>

Boo! You will be greatly missed.

But I hope you enjoy your new position. Whatever it is, they are getting a
good one.

Will look for you in Sydney!

-matthew
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptl][tc] Accessible upgrade support

2017-10-03 Thread Sean McGinnis
I'm hoping this will get a little more attention.

We recently started discussing removing governance tags that did not have any
projects asserting them. I think this makes a lot of sense. Some tags were
defined apparently in the hope that it would get projects thinking about them
and wanting to either apply for the tag, or do the work to be able to meet the
requirements for that tag.

While this may have worked in some cases, we do have a few that are a little
unclear and not really getting much attention. We will likely clean up that
tag list a little, but there was some push back on at least one tag.

The supports-accessible-upgrade tag basically states that a service can be
upgraded without affecting access to the resources that the service manages
[1]. This actually fits with how at least Nova and Cinder work, so a patch is
now out there to assert this for those two projects [2].

I would bet there are several other projects out there that work in this same
way. Since we are deciding between removing tags and using them (use it or lose
it), I would actually love to see more projects asserting this tag. If your
project manages a set of resources that are available even when your service
is down and/or being upgraded, please consider submitting a patch like [2] to
add your project to the list.

And just a general call out to raise awareness - please take a look through
the other defined tags and see if there are any others that are applicable to
your projects [3].

Thanks!

Sean (smcginnis)


[1] 
https://governance.openstack.org/tc/reference/tags/assert_supports-accessible-upgrade.html
[2] https://review.openstack.org/#/c/509170/
[3] https://governance.openstack.org/tc/reference/tags/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Jean-Philippe Evrard
On Tue, Oct 3, 2017 at 5:40 PM, Monty Taylor  wrote:
> Hey everybody!
>
> Thank you all for your patience over the last week as we've worked through
> the v3 rollout.  While we anticipated some problems, we've run into a number
> of unexpected issues, some of which will take some time to resolve.  It has
> become unreasonable to continue in our current state while they are being
> worked.
>
> With that in mind, we are going to perform a partial rollback to Zuul v2
> while we work on it so that teams can get work done. The details of that are
> as follows:
>
> The project-config repo remains frozen.  Generally we don't want to make
> changes to v2 jobs.  If a change must be made, it will need to be made to
> both v2 and v3 versions.  We will not run the migration script again.
>
> Zuul v3 will continue to run check and periodic jobs on all repos.  It will
> leave review messages, including +1/-1 votes.
>
> Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul v3.
> This will slow v2 down slightly, but allow us to continue to exercise v3
> enough to find problems.
>
> Zuul v2 and v3 can not both gate a project or set of projects.  In general,
> Zuul v2 will be gating all projects, except the few projects that are
> specifically v3-only: zuul-jobs, openstack-zuul-jobs, project-config, and
> zuul itself.
>
> We appreciate that some projects would prefer to use v3 exclusively, even
> while we continue to work to stabilize it.  However, in order to complete
> our work as quickly as possible, we may need to restart frequently or take
> extended v3 downtimes.  Because of this, and the reduced capacity that v3
> will be running with, we will keep the set of projects gating under v3
> limited to only those necessary.  But keep in mind, v3 will still be running
> check jobs on all projects, so you can continue to iterate on v3 job content
> in check.
>
> If you modified a script in your repo that is called from a job to work in
> v3, you may need to modify it to be compatible with both.  If you need to
> determine whether you are running under Zuul v2 or under v3 with legacy
> compatibility shims, check for the LOG_PATH environment variable.  It will
> only be present when running under Zuul v2 (and it is the variable that we
> are least likely to add to the v3 compatibility shim).
>
> Again - thank you all for your patience, and for all of the great work
> people have done working through the issues we've uncovered. As soon as
> we've got a handle on the most critical issues, we'll plan another
> roll-forward ... hopefully in a week or two, but we'll send out status
> updates as we have them.
>
> Thanks!
> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hello,

I'd like to first thank everyone involved, doing this hard work.

As an extra comment, please remember that Newton EOL is next week, so
it may be worth waiting until after that for the next full
rollout/roll-forward (whatever the term is!) to avoid overlapping
critical events in the same timeframe.

Best regards,
Jean-Philippe Evrard (evrardjp)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][election] TC candidacy

2017-10-03 Thread Doug Hellmann
I am announcing my candidacy for a position on the OpenStack Technical
Committee.

I started contributing to OpenStack in 2012, and I am currently
employed by Red Hat to work on OpenStack with a focus on long-term
project concerns.  I have served on the Technical Committee for the
last four years and as PTL of the Oslo and Release Management teams at
different points in the past. I won't repeat all of the information
about my history with the project (see last year's nomination email if
you don't know me [1]).

Most recently I have been working with the Documentation team to
reorganize how we manage docs for OpenStack [2][3]. After more than
1200 reviews [4], we are well on our way to a healthy future.

Most of my contributions have been focused on enabling others in the
community.  From the documentation migration, to release automation,
and the community goals process, I have worked on tools, processes,
and patterns to make incremental improvements in our ability to
collaborate while building OpenStack. I view serving on the TC as an
extension of that work.

Earlier this year the TC met to work on our vision [5] for the future
of the TC. Two of the themes from the vision resonated with me
strongly: "Embracing Community Diversity" and "Growing New Leaders.

Most of our project teams have seen the effects of companies
refocusing or reducing their support of upstream development.  For the
community to thrive, we need to continue seeking new contributors. In
addition to looking at new companies, and new parts of our global
community, we need to encourage participation by more people who are
not spending significant amounts of their time on upstream work --
where OpenStack is not their full-time job. That will mean adjusting
the way we do code reviews, to place more emphasis on giving help
along with the usual review feedback. The reward for making this
cultural change will be more engagement with and contributions by our
users. I will be working on my ideas in this area through the First
Contact or Contributor Welcoming SIG [6].

The theme of finding new leaders is equally important. As Sean rightly
points out [7], the long term health of our community depends on our
ability to transition between leaders. We do this regularly in project
teams where the responsibilities are fairly well defined. We have
fewer people available to lead community-wide initiatives.  Regardless
of the outcome of the election, I will be working with the TC on
establishing a mentoring program for inter-project work to encourage
new people to step into community-wide leadership roles.

The OpenStack community is the most exciting and welcoming group I
have interacted with in more than 25 years of contributing to open
source projects. I look forward to continuing to be a part of the
community and serving the project.

Thank you,
Doug

Review history: https://review.openstack.org/#/q/reviewer:2472,n,z
Commit history: https://review.openstack.org/#/q/owner:2472,n,z
Foundation Profile: http://www.openstack.org/community/members/profile/359
Freenode: dhellmann
Website: https://doughellmann.com

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-September/104643.html
[2] 
http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html
[3] https://review.openstack.org/#/c/507629/
[4] https://review.openstack.org/#/q/intopic:doc-migration
[5] https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html
[6] 
http://lists.openstack.org/pipermail/openstack-sigs/2017-September/84.html
[7] http://lists.openstack.org/pipermail/openstack-dev/2017-October/122979.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Emilien Macchi
On Tue, Oct 3, 2017 at 10:12 AM, Dan Prince  wrote:
[...]
> I would let others chime in, but the feedback I've gotten has mostly been
>  that it improves the dev/test cycle greatly.

[...]

I like both aschultz & dprince thoughts here, I agree with both of you
on most of the points made here.
I think we need to focus more effort on CI (see what Alex wrote
about milestones; we should try to respect those, it has proved to be
helpful) & documentation as well (let's push docs before the end of m2).

If we can get CI & docs working before the end of m2, it's at least a good
step forward.
Also, we could ship this feature as "experimental" in Queens and
"stable" in Rocky (even if it has been developed since more than one
cycle by dprince & co), I think it's a reasonable path for our users.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Non-critical, non-gate patches freeze during migration to zuul3

2017-10-03 Thread Michał Jastrzębski
Since infra is splitting zuul into zuulv2 and v3 [1], we get our gates
back, and it allows us to work on zuulv3 gates at the same time. Therefore
we can un-freeze the repo. Feel free to merge patches :)

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123049.html

On 2 October 2017 at 10:13, Michał Jastrzębski  wrote:
> Hello,
>
> As you all know, Zuul v3 is on! An unfortunate side effect was that it
> broke our gates. For that reason I submitted a patch removing the legacy
> jobs entirely, and we will do a quick migration to zuulv3-compatible,
> local jobs. That means that between when this patch merges [1] and when we
> finish that, we will be without effective CI. For that reason I want us to
> not merge any patches that aren't critical bugfixes or gate-related work.
>
> Patches that migrate us to zuul v3 are [2] [3]; please prioritize them.
>
> Regards,
> Michal
>
> [1] https://review.openstack.org/#/c/508944/
> [2] https://review.openstack.org/#/c/508661/
> [3] https://review.openstack.org/#/c/508376/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Doug Hellmann
Excerpts from Jesse Pretorius's message of 2017-10-03 15:57:17 +:
> On 10/3/17, 3:01 PM, "Doug Hellmann"  wrote:
> 
> >> Given that this topic has gone through several cycles of discussion and 
> >> has never gone anywhere, does it perhaps merit definition as a project 
> >> interface so that we can define the problem this is trying to solve and 
> >> set a standard formally once and for all?
> 
> >Maybe a couple of the various packaging projects can agree and just
> >set a de facto rule (and document it). That worked out OK for us
> >with the doc reorganization when we updated the docs.o.o site
> >templates.
> 
> I’m happy to facilitate that. Is there some sort of place where such 
> standards are recorded? Ie Where do I submit a review to and is there an 
> example to reference for the sort of information that should be in it?
> 

The docs team put that info in the spec for the migration. Do we
have a packaging SIG yet? That seems like an ideal body to own a
standard like this long term. Short term, just getting some agreement
on the mailing list would be a good start.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Sean Dague
On 10/03/2017 01:06 PM, Jeremy Stanley wrote:
> On 2017-10-03 12:00:44 -0500 (-0500), Matt Riedemann wrote:
>> On 10/3/2017 11:40 AM, Monty Taylor wrote:
>>> Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul v3.
>>> This will slow v2 down slightly, but allow us to continue to exercise v3
>>> enough to find problems.
>>>
>>> Zuul v2 and v3 can not both gate a project or set of projects.  In
>>> general, Zuul v2 will be gating all projects, except the few projects that
>>> are specifically v3-only: zuul-jobs, openstack-zuul-jobs, project-config,
>>> and zuul itself.
>>
>> So if v3 is in check and periodic, and will be restarted and offline at
>> times, doesn't that mean we could have patches waiting for an extended
>> period of time on v3 results when the v2 jobs are long done? Or do the v3
>> jobs just timeout and being non-voting shouldn't impact the overall score?
> 
> Not at all, since v2 will also be running check jobs itself and it
> only pays attention to its own results for that puropose. So
> basically while we're rolled back to v2 you'll see both "Jenkins"
> (v2 account) and "Zuul" (v3 account) reporting check results most of
> the time but you can consider the v3 results merely an advisory
> indicator of whether your v3 jobs are working correctly.

Right, so zuul v3 will effectively be treated like 3rd party CI during
the transition. Sounds good.

The 80 / 20 split seems very reasonable, and a good way to let teams get
back to work while letting the v3 effort make forward progress with real
load to smoke out the issues.

Thanks for flipping over to this model.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-03 Thread Dan Prince
On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
> Hey Dan,
> 
> Thanks for sending out a note about this. I have a few questions
> inline.
> 
> On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince 
> wrote:
> > One of the things the TripleO containers team is planning on
> > tackling
> > in Queens is fully containerizing the undercloud. At the PTG we
> > created
> > an etherpad [1] that contains a list of features that need to be
> > implemented to fully replace instack-undercloud.
> > 
> 
> I know we talked about this at the PTG and I was skeptical that this
> will land in Queens. With the exception of the containers team
> wanting this, I'm not sure there is an actual end user who is looking
> for the feature so I want to make sure we're not just doing more work
> because we as developers think it's a good idea.

I've heard from several operators that they were actually surprised we
implemented containers in the Overcloud first. Validating a new
deployment framework on a single node Undercloud (for operators) before
overtaking their entire cloud deployment has a lot of merit to it IMO.
When you share the same deployment architecture across the
overcloud/undercloud it puts us in a better position to decide where to
expose new features to operators first (when creating the undercloud or
overcloud for example).

Also, if you read my email again I've explicitly listed the
"Containers" benefit last. While I think moving the undercloud to
containers is a great benefit all by itself this is more of a
"framework alignment" in TripleO and gets us out of maintaining huge
amounts of technical debt. Re-using the same framework for the
undercloud and overcloud has a lot of merit. It effectively streamlines
the development process for service developers, and 3rd parties wishing
to integrate some of their components on a single node. Why be forced
to create a multi-node dev environment if you don't have to (aren't
using HA for example).

Let's be honest. While instack-undercloud helped solve the old "seed" VM
issue it was outdated the day it landed upstream. The entire premise of
the tool is that it uses old style "elements" to create the undercloud
and we moved away from those as the primary means driving the creation
of the Overcloud years ago at this point. The new 'undercloud_deploy'
installer gets us back to our roots by once again sharing the same
architecture to create the over and underclouds. A demo from long ago
expands on this idea a bit: https://www.youtube.com/watch?v=y1qMDLAf26Q=5s

In short, we aren't just doing more work because developers think it is
a good idea. This has potential to be one of the most useful
architectural changes in TripleO that we've made in years. Could
significantly decrease our CI resources if we use it to replace the
existing scenarios jobs which take multiple VMs per job. It is a building
block we could use for other features like an HA undercloud. And yes,
it does also have a huge impact on developer velocity in that many of
us already prefer to use the tool as a means of streamlining our
dev/test cycles to minutes instead of hours. Why spend hours running
quickstart Ansible scripts when in many cases you can just doit.sh?
https://github.com/dprince/undercloud_containers/blob/master/doit.sh

Lastly, this isn't just a containers team thing. We've been using the
undercloud_deploy architecture across many teams to help develop for
almost an entire cycle now. Huge benefits. I would go as far as saying
that undercloud_deploy was *the* biggest feature in Pike that enabled
us to bang out a majority of the docker/service templates in tripleo-
heat-templates.

>  Given that etherpad
> appears to contain a pretty big list of features, are we going to be
> able to land all of them by M2?  Would it be beneficial to craft a
> basic spec related to this to ensure we are not missing additional
> things?

I'm not sure there is a lot of value in creating a spec at this point.
We've already got an approved blueprint for the feature in Pike here:
https://blueprints.launchpad.net/tripleo/+spec/containerized-undercloud

I think we might get more velocity out of grooming the etherpad and
perhaps dividing this work among the appropriate teams.

> 
> > Benefits of this work:
> > 
> >  -Alignment: aligning the undercloud and overcloud installers gets
> > rid
> > of dual maintenance of services.
> > 
> 
> I like reusing existing stuff. +1
> 
> >  -Composability: tripleo-heat-templates and our new Ansible
> > architecture around it are composable. This means any set of
> > services
> > can be used to build up your own undercloud. In other words the
> > framework here isn't just useful for "underclouds". It is really
> > the
> > ability to deploy Tripleo on a single node with no external
> > dependencies. Single node TripleO installer. The containers team
> > has
> > already been leveraging existing (experimental) undercloud_deploy
> > installer to develop services for Pike.
> > 
> 

Re: [openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Jeremy Stanley
On 2017-10-03 12:00:44 -0500 (-0500), Matt Riedemann wrote:
> On 10/3/2017 11:40 AM, Monty Taylor wrote:
> >Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul v3.
> >This will slow v2 down slightly, but allow us to continue to exercise v3
> >enough to find problems.
> >
> >Zuul v2 and v3 can not both gate a project or set of projects.  In
> >general, Zuul v2 will be gating all projects, except the few projects that
> >are specifically v3-only: zuul-jobs, openstack-zuul-jobs, project-config,
> >and zuul itself.
> 
> So if v3 is in check and periodic, and will be restarted and offline at
> times, doesn't that mean we could have patches waiting for an extended
> period of time on v3 results when the v2 jobs are long done? Or do the v3
> jobs just timeout and being non-voting shouldn't impact the overall score?

Not at all, since v2 will also be running check jobs itself and it
only pays attention to its own results for that purpose. So
basically while we're rolled back to v2 you'll see both "Jenkins"
(v2 account) and "Zuul" (v3 account) reporting check results most of
the time but you can consider the v3 results merely an advisory
indicator of whether your v3 jobs are working correctly.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Clark Boylan
On Tue, Oct 3, 2017, at 10:00 AM, Matt Riedemann wrote:
> On 10/3/2017 11:40 AM, Monty Taylor wrote:
> > Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul
> > v3.  This will slow v2 down slightly, but allow us to continue to 
> > exercise v3 enough to find problems.
> > 
> > Zuul v2 and v3 can not both gate a project or set of projects.  In 
> > general, Zuul v2 will be gating all projects, except the few projects 
> > that are specifically v3-only: zuul-jobs, openstack-zuul-jobs, 
> > project-config, and zuul itself.
> 
> So if v3 is in check and periodic, and will be restarted and offline at 
> times, doesn't that mean we could have patches waiting for an extended 
> period of time on v3 results when the v2 jobs are long done? Or do the 
> v3 jobs just timeout and being non-voting shouldn't impact the overall 
> score?

v2 will do all of the gating and will for the most part completely
ignore what v3 is doing. The exception to this is any -2's that zuul has
left will need to be cleared out (fungi is working on this as I write
this email).

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Matt Riedemann

On 10/3/2017 11:40 AM, Monty Taylor wrote:
Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul
v3.  This will slow v2 down slightly, but allow us to continue to 
exercise v3 enough to find problems.


Zuul v2 and v3 can not both gate a project or set of projects.  In 
general, Zuul v2 will be gating all projects, except the few projects 
that are specifically v3-only: zuul-jobs, openstack-zuul-jobs, 
project-config, and zuul itself.


So if v3 is in check and periodic, and will be restarted and offline at 
times, doesn't that mean we could have patches waiting for an extended 
period of time on v3 results when the v2 jobs are long done? Or do the 
v3 jobs just timeout and being non-voting shouldn't impact the overall 
score?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Matt Riedemann

On 10/3/2017 10:53 AM, Matt Riedemann wrote:
However, if the only reason one would need to pass personality files 
during rebuild is because we don't persist them during the initial 
server create, do we really need to also allow passing user_data for 
rebuild?


Given personality files were added to the rebuild API back in Diablo [1] 
with no explanation in the commit message why, my assumption above is 
just that, an assumption.


[1] 
https://github.com/openstack/nova/commit/cebc98176926f57016a508d5c59b11f55dfcf2b3
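
For concreteness, a minimal sketch of the asymmetry being discussed,
assuming an authenticated python-novaclient Client named nova (built as in
the earlier sketches in this digest) and placeholder UUIDs and contents:

    server = nova.servers.get('SERVER_UUID')  # hypothetical server UUID

    # Personality files can be (re)injected on rebuild...
    nova.servers.rebuild(server, 'IMAGE_UUID',
                         files={'/etc/motd': 'rebuilt 2017-10-03\n'})

    # ...but user_data can only be supplied at create time:
    nova.servers.create('pet-vm', 'IMAGE_UUID', 'FLAVOR_ID',
                        userdata='#cloud-config\n...')
    # There is no userdata argument on rebuild today; whether to add one
    # is exactly the question above.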


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Zuul v3 Status - and Rollback Information

2017-10-03 Thread Monty Taylor

Hey everybody!

Thank you all for your patience over the last week as we've worked 
through the v3 rollout.  While we anticipated some problems, we've run 
into a number of unexpected issues, some of which will take some time to 
resolve.  It has become unreasonable to continue in our current state 
while they are being worked.


With that in mind, we are going to perform a partial rollback to Zuul v2 
while we work on it so that teams can get work done. The details of that 
are as follows:


The project-config repo remains frozen.  Generally we don't want to make 
changes to v2 jobs.  If a change must be made, it will need to be made 
to both v2 and v3 versions.  We will not run the migration script again.


Zuul v3 will continue to run check and periodic jobs on all repos.  It 
will leave review messages, including +1/-1 votes.


Our nodepool quota will be allocated 80% to Zuul v2, and 20% to Zuul
v3.  This will slow v2 down slightly, but allow us to continue to 
exercise v3 enough to find problems.


Zuul v2 and v3 can not both gate a project or set of projects.  In 
general, Zuul v2 will be gating all projects, except the few projects 
that are specifically v3-only: zuul-jobs, openstack-zuul-jobs, 
project-config, and zuul itself.


We appreciate that some projects would prefer to use v3 exclusively, 
even while we continue to work to stabilize it.  However, in order to 
complete our work as quickly as possible, we may need to restart 
frequently or take extended v3 downtimes.  Because of this, and the 
reduced capacity that v3 will be running with, we will keep the set of 
projects gating under v3 limited to only those necessary.  But keep in 
mind, v3 will still be running check jobs on all projects, so you can 
continue to iterate on v3 job content in check.


If you modified a script in your repo that is called from a job to work 
in v3, you may need to modify it to be compatible with both.  If you 
need to determine whether you are running under Zuul v2 or under v3 with 
legacy compatibility shims, check for the LOG_PATH environment 
variable.  It will only be present when running under Zuul v2 (and it is 
the variable that we are least likely to add to the v3 compatibility shim).
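
Most of the affected repo scripts are shell, but the check is trivial in
any language; here is a minimal Python sketch of that test (treating an
unset variable as "v3 or running locally" is an assumption on our part):

    import os

    # LOG_PATH is only set in the Zuul v2 environment; the v3
    # compatibility shim deliberately omits it.
    if os.environ.get('LOG_PATH'):
        print('running under Zuul v2')
    else:
        print('running under Zuul v3 (or outside CI)')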


Again - thank you all for your patience, and for all of the great work 
people have done working through the issues we've uncovered. As soon as 
we've got a handle on the most critical issues, we'll plan another 
roll-forward ... hopefully in a week or two, but we'll send out status 
updates as we have them.


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread Anita Kuno

On 2017-10-02 10:42 PM, Lance Bragstad wrote:

+1,000 to all of what Steve said. It's still tough for me to wrap my
head around all the client/library work you shouldered. Your experience,
perspective, and insight will certainly be missed.

Thanks for being an awesome member of this community and best of luck on
the new gig, they're lucky to have you!

See you in Sydney

On 10/02/2017 09:22 PM, Steve Martinelli wrote:

It was great working with and getting to know you over the years,
Jamie; you did tremendous work in keystone, particularly
maintaining the libraries. I'm sure you'll succeed in your new
position, I'll miss our late-night-east-coast, early-morning-aus
chats. Keep in touch.

On Mon, Oct 2, 2017 at 10:13 PM, Jamie Lennox wrote:

 Hi All,

 I'm really sad to announce that I'll be leaving the OpenStack
 community (at least for a while), I've accepted a new position
 unrelated to OpenStack that'll begin in a few weeks, and am going
 to be mostly on holiday until then.

 I want to thank everyone I've had the pleasure of working with
 over the last few years - but particularly the Keystone community.
 I feel we as a team and I personally grew a lot over that time, we
 made some amazing achievements, and I couldn't be prouder to have
 worked with all of you.

 Obviously I'll be around at least during the night for some of the
 Sydney summit and will catch up with some of you there, and
 hopefully see some of you at linux.conf.au.
 To everyone else, thank you and I hope we'll meet again.


 Jamie Lennox, Stacker.





I thank you for everything Jamie. You and your wife are incredible 
hosts, thank you.


I'm glad to hear you are doing well for yourself.

If our paths cross again, I'll be delighted.

Be well,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread Dean Troyer
On Mon, Oct 2, 2017 at 9:13 PM, Jamie Lennox  wrote:
> I'm really sad to announce that I'll be leaving the OpenStack community (at
> least for a while), I've accepted a new position unrelated to OpenStack
> that'll begin in a few weeks, and am going to be mostly on holiday until
> then.

No, this just will not do. -2

Seriously, it has been a great pleasure to 'try to take over the
world' with you, at least that is what I recall as the goal we set in
Hong Kong.  The entire interaction of Python-based clients with
OpenStack has been made so much better with your contributions and
OpenStackClient would not have gotten as far as it has without them.
Thank You.

dt

/me looking for one more post-Summit beer-debrief in the hotel lobby
next month...

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-03 Thread Vladyslav Drok
+1 for Shiv

On 3 Oct 2017 11:20 a.m., "Sam Betts (sambetts)"  wrote:

+1,



Sam



On 03/10/2017, 08:21, "tua...@vn.fujitsu.com"  wrote:



+1 , Yes, I definitely agree with you.



Regards

Tuan



*From:* Nisha Agarwal [mailto:agarwalnisha1...@gmail.com]
*Sent:* Tuesday, October 03, 2017 12:28 PM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for
ironic-core



+1



Regards

Nisha



On Mon, Oct 2, 2017 at 11:13 PM, Loo, Ruby  wrote:

+1, Thx Dmitry for the proposal and Shiv for doing all the work :D



--ruby



*From: *Dmitry Tantsur 
*Reply-To: *"OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
*Date: *Monday, October 2, 2017 at 10:17 AM
*To: *"OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
*Subject: *[openstack-dev] [ironic] Proposing Shivanand Tendulker for
ironic-core



Hi all!

I would like to propose Shivanand (stendulker) to the core team.

His stats have been consistently high [1]. He has given a lot of insightful
reviews recently, and his expertise in the iLO driver is also very valuable
for the team.

As usual, please respond with your comments and objections.

Thanks,

Dmitry


[1] http://stackalytics.com/report/contribution/ironic-group/90


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





-- 

The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If you do that, you are in control
of your life. If you don't, life controls you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - Adding Takashi Yamamoto to the neutron-drivers team

2017-10-03 Thread Miguel Lavalle
Hi,

After the departure of Kevin Benton, I am adding Takashi Yamamoto to the
Neutron drivers team [1]. Takashi has been an active member of the Neutron
Core Reviewer team for more than 2 years, providing advice across the
reference implementation code.  He also leads the networking-midonet
sub-project team. As such, Takashi has a very good high-level architectural
view of Neutron, necessary for deciding what features are feasible for the
platform. He will also provide valuable insights from the perspective of
the Neutron sub-projects.


1.
https://docs.openstack.org/neutron/latest/contributor/policies/neutron-teams.html#drivers-team
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Jesse Pretorius
On 10/3/17, 3:01 PM, "Doug Hellmann"  wrote:

>> Given that this topic has gone through several cycles of discussion and has 
>> never gone anywhere, does it perhaps merit definition as a project interface 
>> so that we can define the problem this is trying to solve and set a standard 
>> formally once and for all?

>Maybe a couple of the various packaging projects can agree and just
>set a de facto rule (and document it). That worked out OK for us
>with the doc reorganization when we updated the docs.o.o site
>templates.

I’m happy to facilitate that. Is there some sort of place where such standards 
are recorded? I.e., where do I submit a review to, and is there an example to 
reference for the sort of information that should be in it?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Matt Riedemann
We plan on deprecating personality files from the compute API in a new 
microversion. The spec for that is here:


https://review.openstack.org/#/c/509013/

Today you can pass new personality files to inject during rebuild, and 
at the PTG we said we'd allow passing new user_data to rebuild as a 
replacement for the personality files.


However, if the only reason one would need to pass personality files 
during rebuild is because we don't persist them during the initial 
server create, do we really need to also allow passing user_data for 
rebuild? The initial user_data is stored with the instance during 
create, and re-used during rebuild, so do we need to allow updating it 
during rebuild?
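
(For illustration only: the change under discussion would permit a rebuild
request body along these lines in some future microversion. This does not
exist today; the user_data field below is the proposal, not current API:

    POST /servers/{server_id}/action
    {
        "rebuild": {
            "imageRef": "<image uuid>",
            "user_data": "<base64-encoded user data>"
        }
    }
)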


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification subteam meeting is canceled

2017-10-03 Thread Balazs Gibizer

Hi,

Today's notification subteam meeting is canceled so as not to disturb 
the spec review focus.


Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-10-03 Thread John Dennis

On 10/02/2017 10:13 PM, Jamie Lennox wrote:

Hi All,

I'm really sad to announce that I'll be leaving the OpenStack community 
(at least for a while), I've accepted a new position unrelated to 
OpenStack that'll begin in a few weeks, and am going to be mostly on 
holiday until then.


It's a shame to see you go, Jamie. Aside from the OpenStack community, you 
were also a co-worker. I have high regard for your technical skills, and 
you were a pleasure to work with. I wish you all the best in your 
future endeavors, and based on past experience I expect you'll succeed at 
whatever it is.


--
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] policy community goal progress

2017-10-03 Thread Lance Bragstad
Hey all,

According to our burndown chart [0], just over half the projects have
started implementing the goal [1]. I've been proposing patches for some
of the projects in the not-started column. Most patches I've been
working on would benefit from a review from someone more experienced
with the project. Some projects have policies that aren't documented in their
respective API reference, so providing useful descriptions is going to
have to come from project developers. Other patches are tripping over
fake policies that are inconsistent with the packaged policy file. Those
could require additional testing logic to ensure the tests are actually
testing the defaults and not just empty policies ("").
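
(For anyone picking one of these up: registering a default in code with
oslo.policy looks roughly like the sketch below. The rule name and
operation are hypothetical examples, not any project's real policy:

    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            name='example:get_widget',
            check_str='rule:admin_or_owner',
            description='Show details for a widget.',
            operations=[{'path': '/v2/widgets/{widget_id}',
                         'method': 'GET'}]),
    ]
)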

As always, if you have questions about how to get started, please feel
free to come find me. If I've proposed an implementation for your
project [2], don't hesitate to pick it up and run with it. This will
free me up to continue helping other projects that have yet to get started.

Thanks!

Lance

[0] https://www.lbragstad.com/policy-burndown/
[1] https://governance.openstack.org/tc/goals/queens/policy-in-code.html
[2]
https://review.openstack.org/#/q/topic:policy-and-docs-in-code+status:open




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Nova docker replaced by zun?

2017-10-03 Thread Hongbin Lu


> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: October-03-17 5:44 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Nova docker replaced by zun?
> 
> On 09/29/2017 10:48 AM, ADAMS, STEVEN E wrote:
> > Can anyone point me to some background on why nova docker was
> > discontinued and how zun is the heir?
> >
> > Thx,
> >
> > Steve Adams
> >
> > AT&T
> >
> > https://github.com/openstack/nova-docker/blob/master/README.rst
> 
> The nova-docker driver was discontinued because it was not maintained. In
> the entire OpenStack community we could not find a second person to
> help with the maintenance of it (it was only Dims doing any needed
> fixes).
> This was even though the driver was known to be running in multiple
> production clouds.
> 
> The project was shut down for that reason so that no one would
> mistakenly assume there was any maintenance or support on it. If you or
> others want to revive the project, that would be fine, as long as we
> can identify 2 individuals who will step up as maintainers.
> 
>   -Sean

[Hongbin Lu] A possibility is to revive nova-docker and engineer it as a thin 
layer to Zun. Zun has implemented several important functionalities, such as 
container lifecycle management, container networking with neutron, 
bind-mounting cinder volumes, etc. If nova-docker is engineered as a proxy to 
Zun, the burden of maintenance would be significantly reduced. I believe the Zun team 
would be happy to help to get the virt driver working well with Nova.
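
For concreteness, a rough sketch of that idea (the class below and the Zun
client calls are hypothetical illustrations, not the actual Nova virt driver
interface or Zun client API):

    class ZunProxyDriver(object):
        # Hypothetical thin Nova driver delegating container work to Zun.

        def __init__(self, zun_client):
            self.zun = zun_client

        def spawn(self, context, instance, image_meta):
            # Lifecycle, neutron networking and cinder volumes are all
            # handled by Zun; the driver only translates the request.
            self.zun.containers.create(name=instance.uuid,
                                       image=image_meta.name)

        def destroy(self, context, instance):
            self.zun.containers.delete(instance.uuid)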

Best regards,
Hongbin

> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A way to delete a record in 'host_mappings' table

2017-10-03 Thread Dan Smith
> But the record in 'host_mappings' table of api database is not deleted
> (I tried it with nova master 8ca24bf1ff80f39b14726aca22b5cf52603ea5a0).
> The cell cannot be deleted if the records for the cell remains in 
> 'host_mappings' table.
> (An error occurs with a message "There are existing hosts mapped to cell with 
> uuid ...".)
> 
> Are there any ways (CLI, API) to delete the host record in 'host_mappings' 
> table?
> I couldn't find it.

Hmm, yeah, I bet this is a gap. Can you file a bug for this?

I think making the cell delete check for instances=0 in the cell and then 
deleting the host mapping along with the cell would be a good idea. We could 
also add a command to clean up orphaned host records, although hopefully that’s 
an exceptional situation.
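
A rough sketch of that check (the helper names are hypothetical, for
illustration only):

    def delete_cell(ctxt, cell_uuid):
        cell = get_cell_mapping(ctxt, cell_uuid)      # hypothetical lookup
        if count_instances_in_cell(ctxt, cell) > 0:   # hypothetical counter
            raise Exception('There are existing instances mapped to cell')
        for hm in get_host_mappings_by_cell(ctxt, cell):
            hm.destroy()    # drop the host_mappings rows first
        cell.destroy()      # then drop the cell mapping itself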

—Dan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-10-03 Thread Dan Smith
> Any update on where we stand on issues now? Because every single patch I
> tried to land yesterday was killed by POST_FAILURE in various ways.
> Including some really small stuff - https://review.openstack.org/#/c/324720/

Yeah, Nova has only landed eight patches since Thursday. Most of those are 
test-only patches that run a subset of jobs, and a couple that landed in the 
wee hours when overall system load was low.

> Do we have a defined point on the calendar for getting the false
> negatives back below the noise threshold otherwise a rollback is
> implemented so that some of these issues can be addressed in parallel
> without holding up community development?

On Friday I was supportive of the decision to keep steaming forward instead of 
rolling back. Today, I’m a bit more concerned about light at the end of the 
tunnel. The infra folks have been hitting this hard for a long time, and for 
that I’m very appreciative. I too hope that we’re going to revisit mitigation 
strategies as we approach the weekiversary of being stuck.

—Dan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Doug Hellmann
Excerpts from Jesse Pretorius's message of 2017-10-03 09:02:19 +:
> On 10/2/17, 1:45 PM, "Doug Hellmann"  wrote:
> 
> >etc implies they should be edited, though, and we're trying to move away
> >from that at least for the paste.ini files in most projects. So we may
> >need to decide on a case-by-case basis, unless we declare all of these
> >files as "sample" files that should be copied into the right place
> >before being edited.
> 
> For ‘sample’ files, where would be an appropriate placement? The relative 
> path ‘share’ instead of ‘etc’?
> 
> The placement of the files really should be focused more on the problem it’s 
> trying to solve.
> 
> The use-cases exposed so far are:
> 
> 1. For OpenStack-Ansible or any other deployment project deploying from 
> source, the problem is that we'd like to have the configuration files for 
> services included in a compiled wheel. The path is irrelevant to us as we can 
> move the files to where they need to be, but we would like to cut out a bunch 
> of code which we now use to fetch these files from the git source, or 
> alternatively the vendored copies of the files we carry.
> 2. Packagers I’ve had discussions with also have implementations which fetch 
> these files from the git source. For them the sentiment appears to be largely 
> the same – consistency of placement for the files is important.
> 3. For anyone installing the software via a compiled wheel for whatever 
> reason, things get a little muddy – some want the files in the default 
> locations that the software looks for it so that after installation ‘it just 
> works’.
> 4. Some packagers want the files to be placed in the system root path 
> appropriate for the file when it is installed via a package.
> 
> To me the third use-case is a nice-to-have, given that if the files are 
> consistently placed then it can be worked with and anyone doing that already 
> has something to cover that need.
> 
> To me the fourth use-case is out of scope. It needs resolving via setuptools 
> and/or pep 491 before that can move forward.

I think I agree with all of that analysis.

My gut says put the files in "share" but maybe the folks more
familiar with package layout have better insight.
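
For concreteness, the pbr data_files stanza under discussion looks like
this in setup.cfg (the paths are illustrative, not a decided standard):

    [files]
    data_files =
        share/nova =
            etc/nova/api-paste.ini
            etc/nova/rootwrap.conf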

> 
> Given that this topic has gone through several cycles of discussion and has 
> never gone anywhere, does it perhaps merit definition as a project interface 
> so that we can define the problem this is trying to solve and set a standard 
> formally once and for all?
> 

Maybe a couple of the various packaging projects can agree and just
set a de facto rule (and document it). That worked out OK for us
with the doc reorganization when we updated the docs.o.o site
templates.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Jean-Philippe Evrard
On Tue, Oct 3, 2017 at 2:07 PM, Luigi Toscano  wrote:
> On Tuesday, 3 October 2017 14:31:05 CEST Thomas Goirand wrote:
>> On 10/02/2017 02:04 PM, Luigi Toscano wrote:
>> > Why not? Even if it does not fix the issue for proper installations,
>> > - it does not provent people from copying the files somewhere else (it
>> > happened in sahara for how long I can remember, we have been using
>> > data_files) - it fixes the deployment when the package is installed in a
>> > virtualenv; - it introduces consistency: the day data_files starts to do
>> > the right thing, everything will work; if it's not possible to fix it
>> > with data_files, it's easy to spot which files should be fixed because
>> > all handled by data_files.
>> >
>> > So definitely go for it.
>> >
>> > Ciao
>>
>> Why not? Simply because installing config files in /usr/etc is silly.
>> The question would rather be: why not accepting the PBR patch...
>
> It is silly, but again, people consuming from deb or RPM won't notice it.
> People using pip and virtualenv will now get those files, which were
> previously not available at all.
>
> Sure, having the python tools install the files in the right directory is the
> ideal final solution. My point is that the proposed solution is not worse than
> the previous one and fixes at least one use case that was not previously
> covered (the one that can be easily fixed).
>
>
> --
> Luigi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Agreed, let's continue and go ahead.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Monasca] Query for the official document of Monasca manual installation on openstack

2017-10-03 Thread manik bindlish
Hi,

I have a query regarding the manual installation of Monasca on OpenStack.

Please point me to the official document or link for a complete, manual,
node-wise installation of Monasca on OpenStack.

I have to integrate Monasca with Congress (OpenStack Newton).
The configuration is: 1 controller node + 3 compute nodes.


Thanks,
Manik

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-03 Thread Shamail Tahir


> On Oct 3, 2017, at 8:16 AM, Thierry Carrez  wrote:
> 
> Sean Dague wrote:
>> I'd like to announce that after 4 years serving on the OpenStack
>> Technical Committee, I will not be running in this fall's
>> election. Over the last 4 years we've navigated some rough seas
>> together, including the transition to more inclusion of projects, the
>> dive off the hype curve, the emergence of real interoperability
>> between clouds, and the beginnings of a new vision of OpenStack
>> pairing with more technologies beyond our community.
>> 
>> There remains a ton of good work to be done. But it's also important
>> that we have a wide range of leaders to do that. Those opportunities
>> only exist if we make space for new leaders to emerge. Rotational
>> leadership is part of what makes OpenStack great, and is part of what
>> will ensure that this community lasts far beyond any individuals
>> within it.
>> 
>> I plan to still be around in the community, and contribute where
>> needed. So this is not farewell. However it will be good to see new
>> faces among the folks leading the next steps in the community.
>> 
>> I would encourage all members of the community that are interested in
>> contributing to the future of OpenStack to step forward and run. It's
>> important to realize what the TC is and can be. This remains a
>> community driven by consensus, and the TC reflects that. Being a
>> member of the TC does raise your community visibility, but it does not
>> replace the need to listen, understand, communicate clearly, and
>> realize that hard work comes through compromise.
>> 
>> Good luck to all our candidates this fall, and thanks for letting me
>> represent you the past 4 years.
> 
> Thanks Sean for your service on the TC.
> 
> Your experience and knowledge of open source development dynamics will
> be missed ! I really appreciated your insights and grounded perspective
> in difficult governance discussions that we had to navigate together. I
> hope you'll still be able to weigh in on some of those, even if you
> won't participate in formal votes anymore.
+1

Thank you for demonstrating leadership through your actions. We really 
appreciated your involvement in ops meetups and your active solicitation 
of user feedback.
> 
> Cheers,
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Luigi Toscano
On Tuesday, 3 October 2017 14:31:05 CEST Thomas Goirand wrote:
> On 10/02/2017 02:04 PM, Luigi Toscano wrote:
> > Why not? Even if it does not fix the issue for proper installations,
> > - it does not prevent people from copying the files somewhere else (it
> > happened in sahara for how long I can remember, we have been using
> > data_files) - it fixes the deployment when the package is installed in a
> > virtualenv; - it introduces consistency: the day data_files starts to do
> > the right thing, everything will work; if it's not possible to fix it
> > with data_files, it's easy to spot which files should be fixed because
> > all handled by data_files.
> > 
> > So definitely go for it.
> > 
> > Ciao
> 
> Why not? Simply because installing config files in /usr/etc is silly.
> The question would rather be: why not accepting the PBR patch...

It is silly, but again, people consuming from deb or RPM won't notice it.
People using pip and virtualenv will now get those files, which were 
previously not available at all.

Sure, having the python tools install the files in the right directory is the 
ideal final solution. My point is that the proposed solution is not worse than 
the previous one and fixes at least one use case that was not previously 
covered (the one that can be easily fixed).


-- 
Luigi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Odp.: [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-03 Thread mihaela.balas
Hi,

I appreciate the help. In neutron-server I have the following service providers 
enabled:

service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider = 
LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver

With Octavia provider L7 policy works fine. With haproxy (agent provider) I 
receive the error below.

On the haproxy agent I have the following setting (however, the neutron-server 
throws that error and not even sends any request to agent):

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Mihaela

From: Pawel Suder [mailto:pawel.su...@corp.ovh.com]
Sent: Tuesday, October 03, 2017 3:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Odp.: [neutron][lbaasv2][agent implementation] L7 
policy support


Hello Mihaela,



It seems that you are referring to that part of code 
https://github.com/openstack/neutron-lbaas/blob/mitaka-eol/neutron_lbaas/drivers/driver_base.py#L36

I found that document for Mitaka 
https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html

It might be related to incorrectly configured driver for LBaaS (or indeed not 
implemented driver for L7 policy for specific driver).

Questions:

* What do you have configured in neutron configuration in section 
[service_providers]?
* Which driver do you want to use?

Example line

service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Cheers,
Paweł

From: mihaela.ba...@orange.com
Sent: 3 October 2017 11:13:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy 
support

Hello,

Does the agent implementation of LBaaSv2 support L7 policies? I am testing with 
Mitaka version and I get "Not Implemented Error".

{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname": 
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"Calling driver operation 
NotImplementedManager.create"}
{"asctime": "2017-10-03 07:34:42.765","process": "18","levelname": 
"ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"There was an error in the 
driver"}
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin Traceback (most recent call last):
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 486, in _call_driver_operation
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin     driver_method(context, db_entity)
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/drivers/driver_base.py", line 36, in create
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin     raise NotImplementedError()
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin NotImplementedError
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin
{"asctime": "2017-10-03 07:34:42.800","process": "18","levelname": 
"ERROR","name": "neutron.api.v2.resource", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"create failed"}
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource Traceback (most recent call last):
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, in resource
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     return

Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-10-03 Thread Gary Kotton
We have patches stuck for hours – the only info is:
http://zuulv3.openstack.org/static/stream.html?uuid=128746a70c1843d7a94e887120ba381c&logfile=console.log
At the moment we are unable to do anything.

On 10/3/17, 3:36 PM, "Boden Russell"  wrote:

On 10/3/17 5:17 AM, Sean Dague wrote:
> 
> Do we have a defined point on the calendar for getting the false
> negatives back below the noise threshold otherwise a rollback is
> implemented so that some of these issues can be addressed in parallel
> without holding up community development?

Along the same lines; where is the best place to get help with zuul v3
issues? The neutron-lib gate is on the floor with multiple problems; 2
broken gating jobs preventing patches from landing and all periodic jobs
broken preventing (safe) releases of neutron-lib. I've been adding the
issues to the etherpad [1] and trying to work through them solo, but
progress is very slow.


[1] https://etherpad.openstack.org/p/zuulv3-migration-faq

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-10-03 Thread Boden Russell
On 10/3/17 5:17 AM, Sean Dague wrote:
> 
> Do we have a defined point on the calendar for getting the false
> negatives back below the noise threshold otherwise a rollback is
> implemented so that some of these issues can be addressed in parallel
> without holding up community development?

Along the same lines; where is the best place to get help with zuul v3
issues? The neutron-lib gate is on the floor with multiple problems; 2
broken gating jobs preventing patches from landing and all periodic jobs
broken preventing (safe) releases of neutron-lib. I've been adding the
issues to the etherpad [1] and trying to work through them solo, but
progress is very slow.


[1] https://etherpad.openstack.org/p/zuulv3-migration-faq

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Meetings change

2017-10-03 Thread Jean-Philippe Evrard
Hello everyone,

Some people on this planet are more aware than others of this fact:
we have too many meetings in our lives.

I don't think OpenStack-Ansible should be so greedy as to take 8 hours of
your life a month for meetings. I therefore propose reducing this to 4
meetings/month: 3 bug triages and 1 community meeting.

On top of that, attendance in meetings is low, so I'd rather we find,
all together, a timeslot that matches the majority of us.

I started this etherpad [1], to list timeslots. I'd like you to:
1) (Optionally) Add timeslot that would suit you best
2) Vote for a timeslot in which you can regularly attend
OpenStack-Ansible meetings

Please give your irc nick too, that would help.

Thank you in advance.

Best regards,
Jean-Philippe Evrard (evrardjp)

[1] https://etherpad.openstack.org/p/osa-meetings-planification

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Thomas Goirand
On 10/02/2017 02:04 PM, Luigi Toscano wrote:
> Why not? Even if it does not fix the issue for proper installations,
> - it does not prevent people from copying the files somewhere else (it 
> happened in sahara for how long I can remember, we have been using data_files)
> - it fixes the deployment when the package is installed in a virtualenv;
> - it introduces consistency: the day data_files starts to do the right thing, 
> everything will work; if it's not possible to fix it with data_files, it's 
> easy to spot which files should be fixed because all handled by data_files.
> 
> So definitely go for it.
> 
> Ciao

Why not? Simply because installing config files in /usr/etc is silly.
The question would rather be: why not accepting the PBR patch...

I'm deeply thinking about carrying the --sysconfigdir patch in Debian,
though I would very much prefer if I could avoid it. This would bring
inconsistency, which is always better to avoid.

Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [election] NON nomination for TC

2017-10-03 Thread Thierry Carrez
Sean Dague wrote:
> I'd like to announce that after 4 years serving on the OpenStack
> Technical Committee, I will not be running in this fall's
> election. Over the last 4 years we've navigated some rough seas
> together, including the transition to more inclusion of projects, the
> dive off the hype curve, the emergence of real interoperability
> between clouds, and the beginnings of a new vision of OpenStack
> pairing with more technologies beyond our community.
> 
> There remains a ton of good work to be done. But it's also important
> that we have a wide range of leaders to do that. Those opportunities
> only exist if we make space for new leaders to emerge. Rotational
> leadership is part of what makes OpenStack great, and is part of what
> will ensure that this community lasts far beyond any individuals
> within it.
> 
> I plan to still be around in the community, and contribute where
> needed. So this is not farewell. However it will be good to see new
> faces among the folks leading the next steps in the community.
> 
> I would encourage all members of the community that are interested in
> contributing to the future of OpenStack to step forward and run. It's
> important to realize what the TC is and can be. This remains a
> community driven by consensus, and the TC reflects that. Being a
> member of the TC does raise your community visibility, but it does not
> replace the need to listen, understand, communicate clearly, and
> realize that hard work comes through compromise.
> 
> Good luck to all our candidates this fall, and thanks for letting me
> represent you the past 4 years.

Thanks Sean for your service on the TC.

Your experience and knowledge of open source development dynamics will
be missed ! I really appreciated your insights and grounded perspective
in difficult governance discussions that we had to navigate together. I
hope you'll still be able to weigh in on some of those, even if you
won't participate in formal votes anymore.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Odp.: [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-03 Thread Pawel Suder
Hello Mihaela,


It seems that you are referring to that part of code 
https://github.com/openstack/neutron-lbaas/blob/mitaka-eol/neutron_lbaas/drivers/driver_base.py#L36

I found that document for Mitaka 
https://docs.openstack.org/mitaka/networking-guide/config-lbaas.html

It might be related to incorrectly configured driver for LBaaS (or indeed not 
implemented driver for L7 policy for specific driver).

Questions:

* What do you have configured in neutron configuration in section 
[service_providers]?
* Which driver do you want to use?

Example line

service_provider = 
LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

Cheers,
Paweł

From: mihaela.ba...@orange.com
Sent: 3 October 2017 11:13:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy 
support

Hello,

Does the agent implementation of LBaaSv2 support L7 policies? I am testing with 
Mitaka version and I get “Not Implemented Error”.

{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname": 
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"Calling driver operation 
NotImplementedManager.create"}
{"asctime": "2017-10-03 07:34:42.765","process": "18","levelname": 
"ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"There was an error in the 
driver"}
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin Traceback (most recent call last):
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py", line 486, in _call_driver_operation
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin     driver_method(context, db_entity)
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin   File "/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/drivers/driver_base.py", line 36, in create
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin     raise NotImplementedError()
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin NotImplementedError
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin
{"asctime": "2017-10-03 07:34:42.800","process": "18","levelname": 
"ERROR","name": "neutron.api.v2.resource", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"create failed"}
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource Traceback (most recent call last):
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, in resource
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     return self._create(request, body, **kwargs)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     self.force_reraise()
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource   File "/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource     return

Re: [openstack-dev] [keystone] [keystoneauth] Debug data isn't sanitized - bug 1638978

2017-10-03 Thread Jamie Lennox
Another option: pass log=False, which we currently do for all the auth
requests. This will prevent debug-printing the body at all. The con is
that, by default, you can't see that message, but it's there because I
never wanted to mess around with masking individual services' secrets
like this.
On 29 Sep. 2017 11:49 pm, "Lance Bragstad"  wrote:

>
>
> On 09/27/2017 06:38 AM, Bhor, Dinesh wrote:
>
> Hi Team,
>
>
>
> There are four solutions to fix the below bug:
>
> https://bugs.launchpad.net/keystoneauth/+bug/1638978
>
>
>
> 1) Carry a copy of mask_password() method to keystoneauth from oslo_utils
> [1]:
>
> *Pros:*
>
> A. keystoneauth will use already tested and used version of mask_password.
>
>
>
> *Cons:*
>
> A. keystoneauth will have to keep its version of the mask_password() method
> in sync with the oslo_utils version.
>
>  If there are any new "_SANITIZE_KEYS" added to oslo_utils
> mask_password then those should be added in keystoneauth mask_password also.
>
> B. Copying "mask_password" will also require copying its supporting
> code [2], which is huge.
>
>
>
>
> I'm having flashbacks of the oslo-incubator days...
>
>
>
> 2) Use Oslo.utils mask_password() method in keystoneauth:
>
> *Pros:*
>
> A) No synching issue as described in solution #1. keystoneauth will
> directly use mask_password() method from Oslo.utils.
>
>
>
> *Cons:*
>
> A) You will need oslo.utils library to use keystoneauth.
>
> Objection by community:
>
> - keystoneauth community don't want any dependency on any of OpenStack
> common oslo libraries.
>
> Please refer to the comment from Morgan: https://bugs.launchpad.net/
> keystoneauth/+bug/1700751/comments/3
>
>
>
>
>
> 3) Add a custom logging filter in oslo logger
>
> Please refer to POC sample here: http://paste.openstack.org/show/617093/
>
> OpenStack core services using any individual OpenStack python-*client
> (e.g. python-cinderclient used in the nova service) will need to pass an
> oslo_logger object during initialization, which will do the work of
> masking sensitive information.
>
> Note: In nova, oslo.logger object is not passed during cinder client
> initialization (https://github.com/openstack/nova/blob/master/nova/volume/
> cinder.py#L135-L141),
>
> In this case, sensitive information will not be masked as it isn’t using
> Oslo.logger.
>
>
>
> *Pros:*
>
> A) No changes required in oslo.logger or any OpenStack services if
> mask_password method is modified in oslo.utils.
>
>
>
> *Cons:*
>
> A) Every log message will be scanned for certain password fields degrading
> the performance.
>
> B) If consumer of keystoneauth doesn’t use oslo_logger, then the sensitive
> information will not be masked.
>
> C) Will need to make changes wherever applicable to the OpenStack core
> services to pass oslo.logger object during python-novaclient initialization.
>
>
>
>
>
> 4) Add mask_password formatter parameter in oslo_log:
>
> Add "mask_password" formatter to sanitize sensitive data and pass it as a
> keyword argument to the log statement.
>
> If the mask_password is set, then only the sensitive information will be
> masked at the time of logging.
>
> The log statement will look like below:
>
>
>
> logger.debug("'adminPass': 'Now you see me'", mask_password=True)
>
>
>
> Please refer to the POC code here: http://paste.openstack.org/show/618019/
>
>
>
> *Pros:  *
>
> A) No changes required in oslo.logger or any OpenStack services if
> mask_password method is modified in oslo.utils.
>
>
>
> *Cons:*
>
> A) If consumer of keystoneauth doesn’t use oslo_logger, then the sensitive
> information will not be masked.
>
> B) If you forget to pass mask_password=True for logging messages where
> sensitive information is present, then those fields won't be masked with
> ***.
>
>  But this can be clearly documented as suggested by Morgan and Lance.
>
> C) This solution requires you to add the below check in keystoneauth to
> avoid an exception being raised in case the logger is a pure Python
> Logger, as it doesn't accept the mask_password keyword argument.
>
>
>
> if isinstance(logger, logging.Logger):
>     logger.debug(' '.join(string_parts))
> else:
>     logger.debug(' '.join(string_parts), mask_password=True)
>
>
>
> This check assumes that the logger instance will be oslo_log only if it is
> not of python default logging.Logger.
>
> Keystoneauth community is not ready to have any dependency on any oslo-*
> lib, so it seems this solution has low acceptance chances.
>
>
> Options 2, 3, and 4 all require dependencies on oslo in order to work,
> which is a non-starter according to Morgan's comment in the bug [0].
> Options 3 and 4 will require a refactor to get keystoneauth to use oslo.log
> (today it uses the logging module from Python's standard library).
>
> [0] https://bugs.launchpad.net/keystoneauth/+bug/1700751/comments/3
>
>
>
> Please let me know your opinions about the above four approaches. Which
> one should we adopt?
>
>
>
> [1] 
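
(For reference, oslo.utils' mask_password, the function at the center of
options 1 and 2 above, behaves like this:

    >>> from oslo_utils import strutils
    >>> strutils.mask_password("'adminPass' : 'Now you see me'")
    "'adminPass' : '***'"
)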

Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-10-03 Thread John Fulton
On Oct 3, 2017 4:35 AM, "Giulio Fidente"  wrote:

On 09/18/2017 08:50 PM, Alex Schultz wrote:
> Hey folks,
>
> We started off our PTG with a retrospective for Pike. The output of
> which can be viewed here[0][1].
>
> One of the recurring themes from the retrospective and the PTG was the
> need for better communication during the cycle.  One of the ideas that
> was mentioned was adding a section to the weekly meeting calling for
> current status from the various tripleo squads[2].  Starting next week
> (Sept 26th), I would like for folks who are members of one of the
> squads be able to provide a brief status or a link to the current
> status during the weekly meeting.  There will be a spot added to the
> agenda to do a status roll call.  It was mentioned that folks may
> prefer to send a message to the ML and just be able to link to it
> similar to what the CI squad currently does[3].  We'll give this a few
> weeks and review how it works.
hi,

I drafted an etherpad for the Integration squad which I hope we can use
during the meeting to report about our status [1], primarily consisting
of Ceph and IPA integration for now.

Juan, John, feel free to make any change or add there anything you feel
is useful/necessary.

1. https://etherpad.openstack.org/p/tripleo-integration-squad-status


Thanks Giullio. I will update. --John



--
Giulio Fidente
GPG KEY: 08D733BA
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Update on Zuul v3 Migration - and what to do about issues

2017-10-03 Thread Sean Dague
Any update on where we stand on issues now? Because every single patch I
tried to land yesterday was killed by POST_FAILURE in various ways.
Including some really small stuff - https://review.openstack.org/#/c/324720/

That also includes the patch I'm told fixes some issues with zuul v3 in
the base devstack jobs - https://review.openstack.org/#/c/508344/3

It also appears that many of the skips stopped being a thing -
https://review.openstack.org/#/c/507527/ got a Tempest test run
attempted on it (though everything ended in Node failure).

Do we have a defined point on the calendar for getting the false
negatives back below the noise threshold otherwise a rollback is
implemented so that some of these issues can be addressed in parallel
without holding up community development?

-Sean

On 09/29/2017 10:58 AM, Monty Taylor wrote:
> Hey everybody!
> 
> tl;dr - If you're having issues with your jobs, check the FAQ, this
> email and followups on this thread for mentions of them. If it's an
> issue with your job and you can spot it (bad config) just submit a patch
> with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like
> to ask that you send a follow up email to this thread so that we can
> ensure we've got them all and so that others can see it too.
> 
> ** Zuul v3 Migration Status **
> 
> If you haven't noticed the Zuul v3 migration - awesome, that means it's
> working perfectly for you.
> 
> If you have - sorry for the disruption. It turns out we have a REALLY
> complicated array of job content you've all created. Hopefully the pain
> of the moment will be offset by the ability for you to all take direct
> ownership of your awesome content... so bear with us, your patience is
> appreciated.
> 
> If you find yourself with some extra time on your hands while you wait
> on something, you may find it helpful to read:
> 
>   https://docs.openstack.org/infra/manual/zuulv3.html
> 
> We're adding content to it as issues arise. Unfortunately, one of the
> issues is that the infra manual publication job stopped working.
> 
> While the infra manual publication is being fixed, we're collecting FAQ
> content for it in an etherpad:
> 
>   https://etherpad.openstack.org/p/zuulv3-migration-faq
> 
> If you have a job issue, check it first to see if we've got an entry for
> it. Once manual publication is fixed, we'll update the etherpad to point
> to the FAQ section of the manual.
> 
> ** Global Issues **
> 
> There are a number of outstanding issues that are being worked. As of
> right now, there are a few major/systemic ones that we're looking in to
> that are worth noting:
> 
> * Zuul Stalls
> 
> If you say to yourself "zuul doesn't seem to be doing anything, did I do
> something wrong?", we're having an issue that jeblair and Shrews are
> currently tracking down with intermittent connection issues in the
> backend plumbing.
> 
> When it happens it's an across the board issue, so fixing it is our
> number one priority.
> 
> * Incorrect node type
> 
> We've got reports of things running on trusty that should be running on
> xenial. The job definitions look correct, so this is also under
> investigation.
> 
> * Multinode jobs having POST FAILURE
> 
> There is a bug in the log collection trying to collect from all nodes
> while the old jobs were designed to only collect from the 'primary'.
> Patches are up to fix this and should be fixed soon.
> 
> * Branch Exclusions being ignored
> 
> This has been reported and its cause is currently unknown.
> 
> Thank you all again for your patience! This is a giant rollout with a
> bunch of changes in it, so we really do appreciate everyone's
> understanding as we work through it all.
> 
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Configure SR-IOV VFs in tripleo

2017-10-03 Thread Saravanan KR
On Tue, Sep 26, 2017 at 3:37 PM, Moshe Levi  wrote:
> Hi  all,
>
>
>
> While working on the tripleo-ovs-hw-offload work, I encountered the following
> issue with SR-IOV.
>
>
>
> I added -e ~/heat-templates/environments/neutron-sriov.yaml -e
> ~/heat-templates/environments/host-config-and-reboot.yaml to the
> overcloud-deploy.sh.
>
> The compute nodes are configured with the intel_iommu=on kernel option and
> the computes are rebooted as expected,
>
> then tripleo::host::sriov will create /etc/sysconfig/allocate_vfs to
> configure the SR-IOV VFs. It seems an additional reboot is required for the
> SR-IOV VFs to be created. Is that the expected behavior? Am I doing something
> wrong?

The allocate_vfs file is required for subsequent reboots, but during
the deployment the VFs are created by puppet-tripleo [1]. No additional
reboot is required for creating the VFs.
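
(For reference, on Linux creating the VFs ultimately boils down to a sysfs
write of this form, with a hypothetical interface name and VF count:

    echo 10 > /sys/class/net/ens1f0/device/sriov_numvfs

This is what the puppet code performs at deploy time and what the
allocate_vfs file repeats on later boots.)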

Regards,
Saravanan KR

[1] 
https://github.com/openstack/puppet-tripleo/blob/master/manifests/host/sriov.pp#L19

>
>
>
>
>
>
>
>
>
> [1]
> https://github.com/openstack/puppet-tripleo/blob/80e646ff779a0f8e201daec0c927809224ed5fdb/manifests/host/sriov.pp
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] May I run iscsiadm --op show & update 100 times?

2017-10-03 Thread Rikimaru Honjo

Hello Gorka,

On 2017/10/02 20:37, Gorka Eguileor wrote:

On 02/10, Rikimaru Honjo wrote:

Hello,

I'd like to discuss about the following bug of os-brick.

* os-brick's iscsi initiator unexpectedly reverts node.startup from "automatic" to 
"manual".
   https://bugs.launchpad.net/os-brick/+bug/1670237

The important point of this bug is:

When os-brick initializes iscsi connections:
1. os-brick will run "iscsiadm -m discovery" command if we use iscsi multipath.


This only happens with a small number of cinder drivers, since most
drivers try to avoid the discovery path due to the number of
disadvantages it presents for a reliable deployment.  The most notorious
issue is that if the path to the discovery portal from the attaching node is
down, you cannot attach the volume no matter how many of the other paths
are up.




2. os-brick will update node.startup values to "automatic" if we use iscsi.
3. "iscsiadm -m discovery" command will recreate iscsi node repositories.[1]
As a result, node.startup values of already attached volumes will be reverted
to the default (=manual).

Gorka Eguileor and I discussed how do I fix this bug[2].
Our idea is this:

1. Confirm node.startup values of all the iscsi targets before running 
discovery.
2. Re-update node.startup values of all the iscsi targets after running 
discovery.

But I'm afraid that this operation will take a long time.
To investigate, I ran showing & updating node.startup values 100 times.
As a result, it took about 4 seconds.
When I ran it 200 times, it took about 8 seconds.
I think this is a little long.

If we use multipath and attach 25 volumes, 100 targets will be created.
I think that updating 100 times is a possible use case.

How do you think about it?
Can I implement the above idea?



The approach I proposed on the review is valid; the flaw is in the
specific implementation: you are doing 100 requests where 4 would
suffice.

You don't need to do a request for each target-portal tuple, you only
need to do 1 request per portal, which reduces the number of calls to
iscsiadm from 100 to 4 in the case you mention.

You can check all targets for an IP with:
   iscsiadm -m node -p IP

This means that the performance hit from having 100 or 200 targets
should be negligible.


I have one question.

I can see the node.startup values with 1 request per portal, as you said.

But may I also update the values with 1 request per portal?
Until now, updating has been done with 1 request per target,
so I think my patch should update the values in the same way (=1 request per target).
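
For concreteness, a rough sketch of the flow I have in mind (reading per
portal as you suggest, re-updating per target; function names are
illustrative, and a real patch would run iscsiadm through os-brick's
rootwrap):

    import subprocess

    def _iscsiadm(args):
        # Sketch only; os-brick would execute this via rootwrap/privsep.
        return subprocess.check_output(['iscsiadm', '-m', 'node'] + args,
                                       universal_newlines=True)

    def get_node_startup_values(portal):
        # One call per portal: parse the node.name / node.startup pairs
        # from the full records printed by --op show.
        values, target = {}, None
        for line in _iscsiadm(['--op', 'show', '-p', portal]).splitlines():
            if line.startswith('node.name'):
                target = line.split('=', 1)[1].strip()
            elif line.startswith('node.startup') and target:
                values[target] = line.split('=', 1)[1].strip()
        return values

    def recover_node_startup_values(portal, old_values):
        # After discovery, re-update only the targets whose value reverted.
        for target, value in get_node_startup_values(portal).items():
            if old_values.get(target, value) != value:
                _iscsiadm(['-p', portal, '-T', target, '--op', 'update',
                           '-n', 'node.startup', '-v', old_values[target]])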


Cheers,
Gorka.




[1] This is the correct behavior of iscsiadm.
https://github.com/open-iscsi/open-iscsi/issues/58#issuecomment-325528315
[2]https://bugs.launchpad.net/os-brick/+bug/1670237
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntt-tx.co.jp







--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
* Note: our company name and e-mail addresses have changed.

NTT TechnoCross Corporation
Cloud & Security Business Division, 2nd Business Unit (CS2BU)
Rikimaru Honjo
TEL.  :045-212-7539
E-mail:honjo.rikim...@po.ntt-tx.co.jp
4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012
Yokohama i-Mark Place, 13F





Re: [openstack-dev] Nova docker replaced by zun?

2017-10-03 Thread Sean Dague
On 09/29/2017 10:48 AM, ADAMS, STEVEN E wrote:
> Can anyone point me to some background on why nova docker was
> discontinued and how zun is the heir?
> 
> Thx,
> 
> Steve Adams
> 
> AT&T
> 
> https://github.com/openstack/nova-docker/blob/master/README.rst

The nova-docker driver was discontinued because it was not maintained. In
the entire OpenStack community we could not find a second person to help
maintain it (only Dims was doing any needed fixes). This was even though
the driver was known to be running in multiple production clouds.

The project was shut down for that reason so that no one would
mistakenly assume there was any maintenance or support on it. If you or
others want to revive the project, that would be fine, as long as we can
identify 2 individuals who will step up as maintainers.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-03 Thread Sam Betts (sambetts)
+1,

Sam

On 03/10/2017, 08:21, "tua...@vn.fujitsu.com" wrote:

+1 , Yes, I definitely agree with you.

Regards
Tuan

From: Nisha Agarwal [mailto:agarwalnisha1...@gmail.com]
Sent: Tuesday, October 03, 2017 12:28 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for 
ironic-core

+1

Regards
Nisha

On Mon, Oct 2, 2017 at 11:13 PM, Loo, Ruby wrote:
+1, Thx Dmitry for the proposal and Shiv for doing all the work :D

--ruby

From: Dmitry Tantsur
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, October 2, 2017 at 10:17 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

Hi all!
I would like to propose Shivanand (stendulker) to the core team.

His stats have been consistently high [1]. He has given a lot of insightful 
reviews recently, and his expertise in the iLO driver is also very valuable for 
the team.
As usual, please respond with your comments and objections.
Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90




--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.


[openstack-dev] [neutron][lbaasv2][agent implementation] L7 policy support

2017-10-03 Thread mihaela.balas
Hello,

Does the agent implementation of LBaaSv2 support L7 policies? I am testing with
the Mitaka version and I get a "NotImplementedError".
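
(Judging from the trace below, the create call lands in neutron_lbaas'
driver_base, where a placeholder manager raises the error; roughly, as a
sketch inferred from the traceback rather than the exact source:)

    class NotImplementedManager(object):
        # Placeholder manager wired in for operations that a provider
        # driver does not implement; create simply raises.
        def create(self, context, obj):
            raise NotImplementedError()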

{"asctime": "2017-10-03 07:34:42.764","process": "18","levelname": 
"INFO","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"Calling driver operation 
NotImplementedManager.create"}
{"asctime": "2017-10-03 07:34:42.765","process": "18","levelname": 
"ERROR","name": "neutron_lbaas.services.loadbalancer.plugin", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"There was an error in the 
driver"}
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>Traceback (most recent call last):
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/services/loadbalancer/plugin.py",
 line 486, in _call_driver_operation
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>driver_method(context, db_entity)
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>  File 
"/opt/neutron/lib/python2.7/site-packages/neutron_lbaas/drivers/driver_base.py",
 line 36, in create
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>raise NotImplementedError()
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>NotImplementedError
2017-10-03 07:34:42.765 18 TRACE neutron_lbaas.services.loadbalancer.plugin  
>
{"asctime": "2017-10-03 07:34:42.800","process": "18","levelname": 
"ERROR","name": "neutron.api.v2.resource", "request_id": 
"req-186bf812-1cdf-496b-a117-711f1e42c6bd", "user_identity": {"user_id": 
"44364a07de754daa9ffeb2911fe3620a", "project_id": 
"a5f15235c0714365b98a50a11ec956e7", "domain_id": "-", "user_domain_id": "-", 
"project_domain_id": "-"},"instance": {},"message":"create failed"}
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >Traceback (most 
recent call last):
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 84, 
in resource
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >result = 
method(request=request, **args)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 410, in 
create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >return 
self._create(request, body, **kwargs)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 148, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >ectxt.value 
= e.inner_exc
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
self.force_reraise()
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
six.reraise(self.type_, self.value, self.tb)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >return 
f(*args, **kwargs)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 521, in 
_create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >obj = 
do_create(body)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/neutron/api/v2/base.py", line 503, in 
do_create
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
request.context, reservation.reservation_id)
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >
self.force_reraise()
2017-10-03 07:34:42.800 18 TRACE neutron.api.v2.resource  >  File 
"/opt/neutron/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2017-10-03 

Re: [openstack-dev] [packaging][all] Sample Config Files in setup.cfg

2017-10-03 Thread Jesse Pretorius
On 10/2/17, 1:45 PM, "Doug Hellmann"  wrote:

>etc implies they should be edited, though, and we're trying to move away
>from that at least for the paste.ini files in most projects. So we may
>need to decide on a case-by-case basis, unless we declare all of these
>files as "sample" files that should be copied into the right place
>before being edited.

For ‘sample’ files, where would be an appropriate placement? The relative path 
‘share’ instead of ‘etc’?
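
For illustration, a pbr-style data_files stanza along these lines could
express either choice (the paths and file names here are assumptions, not
an agreed standard):

    [files]
    data_files =
        share/myservice = etc/myservice.conf.sample
        etc/myservice = etc/api-paste.ini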

The placement of the files really should be driven by the problem it’s
trying to solve.

The use-cases exposed so far are:

1. For OpenStack-Ansible or any other deployment project deploying from source,
the problem we’d like solved is to have the configuration files for services
included in a compiled wheel. The path is irrelevant to us as we can move the
files to where they need to be, but we would like to cut out a bunch of code 
which we now use to fetch these files from the git source, or alternatively the 
vendored copies of the files we carry.
2. Packagers I’ve had discussions with also have implementations which fetch 
these files from the git source. For them the sentiment appears to be largely 
the same – consistency of placement for the files is important.
3. For anyone installing the software via a compiled wheel for whatever reason, 
things get a little muddy – some want the files in the default locations where
the software looks for them, so that after installation ‘it just works’.
4. Some packagers want the files to be placed in the system root path 
appropriate for the file when it is installed via a package.

To me the third use-case is a nice-to-have: if the files are consistently
placed then it can be worked with, and anyone doing that already has something
to cover that need.

To me the fourth use-case is out of scope. It needs resolving via setuptools
and/or PEP 491 before it can move forward.

Given that this topic has gone through several cycles of discussion and has 
never gone anywhere, does it perhaps merit definition as a project interface so 
that we can define the problem this is trying to solve and set a standard 
formally once and for all?





[openstack-dev] [os-upstream-institute] Meeting reminder

2017-10-03 Thread Ildiko Vancsa
Hi Training Team,

This is a friendly reminder that we are having our meeting in 15 minutes (0900 
UTC) on #openstack-meeting-3.

You can find the agenda here: 
https://etherpad.openstack.org/p/openstack-upstream-institute-meetings

See you soon! :)

Thanks,
Ildikó


Re: [openstack-dev] [rdo-list] [rdo][tripleo][kolla] Routine patch maintenance on trunk.rdoproject.org, Tue Oct 3rd

2017-10-03 Thread Javier Pena
> Hi,
> 
> We need to do some routine patching on trunk.rdoproject.org on Oct 3rd, at
> 8:00 UTC. There will be a brief downtime for a reboot, where jobs using
> packages from RDO Trunk can fail. Sorry for the inconvenience.
> 

Hi,

The maintenance is now complete. Everything should be back to normal, please 
contact us if you find any issue.

Regards,
Javier

> If you need additional information, please do not hesitate to contact us.
> 
> Regards,
> Javier
> 



Re: [openstack-dev] [tripleo] Pike Retrospective & Status reporting

2017-10-03 Thread Giulio Fidente
On 09/18/2017 08:50 PM, Alex Schultz wrote:
> Hey folks,
> 
> We started off our PTG with a retrospective for Pike. The output of
> which can be viewed here[0][1].
> 
> One of the recurring themes from the retrospective and the PTG was the
> need for better communication during the cycle.  One of the ideas that
> was mentioned was adding a section to the weekly meeting calling for
> current status from the various tripleo squads[2].  Starting next week
> (Sept 26th), I would like for folks who are members of one of the
> squads be able to provide a brief status or a link to the current
> status during the weekly meeting.  There will be a spot added to the
> agenda to do a status roll call.  It was mentioned that folks may
> prefer to send a message to the ML and just be able to link to it
> similar to what the CI squad currently does[3].  We'll give this a few
> weeks and review how it works.
Hi,

I drafted an etherpad for the Integration squad which I hope we can use
during the meeting to report on our status [1]; for now it primarily
covers Ceph and IPA integration.

Juan, John, feel free to make any change or add there anything you feel
is useful/necessary.

1. https://etherpad.openstack.org/p/tripleo-integration-squad-status
-- 
Giulio Fidente
GPG KEY: 08D733BA



Re: [openstack-dev] [all][oslo] Retiring openstack/pylockfile

2017-10-03 Thread Julien Danjou
On Mon, Oct 02 2017, Doug Hellmann wrote:

> Or https://github.com/jazzband 
>
> Now we need a project to list all of the organizations full of unmaintained 
> software...

Who's up for maintaining that list?

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't send correctly authorization request to Oslo policy

2017-10-03 Thread ruan.he
Hi Brian and Doug,
I've added this bug to the next Glance meeting agenda.
My colleague Thomas will attend the meeting.
Thanks,
Ruan HE
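
For context, the http_check is triggered by a policy rule of the following
form (the rule name and URL are just examples); oslo.policy POSTs the
JSON-encoded target and credentials to that URL and grants access only on
the literal response "True":

    "add_member": "http://127.0.0.1:8088/authz"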

-Original Message-
From: Brian Rosmaita [mailto:rosmaita.foss...@gmail.com] 
Sent: lundi 2 octobre 2017 13:18
To: Doug Hellmann
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance doesn't 
send correctly authorization request to Oslo policy

Thanks Doug.

Ruan, please put an item on the Glance meeting agenda.  The meeting is
14:00 UTC on Thursday [0].

thanks,
brian

[0] http://eavesdrop.openstack.org/#Glance_Team_Meeting

On Fri, Sep 29, 2017 at 11:49 AM, Doug Hellmann  wrote:
> The Glance team has weekly meetings just like the Oslo team. You’ll 
> find the details about the time and agenda on eavesdrop.openstack.org. 
> I think it would make sense to add an item to the agenda for their 
> next meeting to discuss this issue, and ask for someone to help guide 
> you in fixing it. If the Oslo team needs to get involved after there 
> is someone from Glance helping, then we can find the right person.
>
> Brian Rosmaita (rosmaita on IRC) is the Glance team PTL. I’ve copied 
> him on this email to make sure he notices this thread.
>
> Doug
>
> On Sep 29, 2017, at 11:24 AM, ruan...@orange.com wrote:
>
> Not yet, we are not familiar with the Glance team.
> Ruan
>
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com]
> Sent: vendredi 29 septembre 2017 16:26
> To: openstack-dev
> Subject: Re: [openstack-dev] [Oslo][oslo.policy][glance] Bug: Glance 
> doesn't send correctly authorization request to Oslo policy
>
> Excerpts from ruan.he's message of 2017-09-29 12:56:12 +:
>
> Hi folks,
> We are testing the http_check function in Oslo policy, and we found a
> bug: https://bugs.launchpad.net/glance/+bug/1720354.
> We believe that this is due to the Glance part, since it doesn't properly
> prepare the authorization request (body) sent to Oslo policy.
> Can we put this topic for the next Oslo meeting?
> Thanks,
> Ruan HE
>
>
> Do you have someone from the Glance team helping already?
>
> Doug
>



Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

2017-10-03 Thread tua...@vn.fujitsu.com
+1 , Yes, I definitely agree with you.

Regards
Tuan

From: Nisha Agarwal [mailto:agarwalnisha1...@gmail.com]
Sent: Tuesday, October 03, 2017 12:28 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [ironic] Proposing Shivanand Tendulker for 
ironic-core

+1

Regards
Nisha

On Mon, Oct 2, 2017 at 11:13 PM, Loo, Ruby wrote:
+1, Thx Dmitry for the proposal and Shiv for doing all the work :D

--ruby

From: Dmitry Tantsur
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, October 2, 2017 at 10:17 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [ironic] Proposing Shivanand Tendulker for ironic-core

Hi all!
I would like to propose Shivanand (stendulker) to the core team.

His stats have been consistently high [1]. He has given a lot of insightful 
reviews recently, and his expertise in the iLO driver is also very valuable for 
the team.
As usual, please respond with your comments and objections.
Thanks,
Dmitry

[1] http://stackalytics.com/report/contribution/ironic-group/90




--
The Secret Of Success is learning how to use pain and pleasure, instead
of having pain and pleasure use you. If You do that you are in control
of your life. If you don't life controls you.


Re: [openstack-dev] [Zun] Nova docker replaced by zun?

2017-10-03 Thread Kumari, Madhuri
Hi,

nova-docker was discontinued because the lifecycles of containers and VMs are
different and the Nova APIs don't accommodate containers well.
Zun is a project independent of Nova, with its own set of APIs for
managing containers on top of OpenStack.

For more information, you can read the FAQ section of Zun wiki page [1].

[1] https://wiki.openstack.org/wiki/Zun

Regards,
Madhuri


From: ADAMS, STEVEN E [mailto:sa2...@att.com]
Sent: Friday, September 29, 2017 8:18 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Nova docker replaced by zun?

Can anyone point me to some background on why nova docker was discontinued and 
how zun is the heir?
Thx,
Steve Adams
AT&T
https://github.com/openstack/nova-docker/blob/master/README.rst



[openstack-dev] [nova] A way to delete a record in 'host_mappings' table

2017-10-03 Thread Takashi Natsume

Hi. Nova developers.

In a cells v2 environment, when deleting a host (compute node) with 'nova
service-delete', the records are soft-deleted in the 'services' table and
the 'compute_nodes' table of the cell database.


But the record in the 'host_mappings' table of the API database is not deleted
(I tried it with nova master 8ca24bf1ff80f39b14726aca22b5cf52603ea5a0).
The cell cannot be deleted while records for the cell remain in the
'host_mappings' table.
(An error occurs with the message "There are existing hosts mapped to cell
with uuid ...".)


Is there any way (CLI, API) to delete the host record in the
'host_mappings' table?

I couldn't find it.
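
The only workaround I can think of is to touch the record via nova's
objects layer, along these lines (a purely hypothetical sketch, not a
supported interface; I am not sure HostMapping actually exposes destroy()
on this commit):

    # Hypothetical, unsupported workaround sketch only.
    from nova import context, objects

    objects.register_all()
    ctxt = context.get_admin_context()
    # Host name is a placeholder; assumes HostMapping.get_by_host() and
    # destroy() both exist.
    hm = objects.HostMapping.get_by_host(ctxt, 'compute-1')
    hm.destroy()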

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp

