Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Mark Kirkwood

On 21/06/17 02:08, Jay Pipes wrote:


On 06/20/2017 09:42 AM, Doug Hellmann wrote:

Does "service VM" need to be a first-class thing?  Akanda creates
them, using a service user. The VMs are tied to a "router" which
is the billable resource that the user understands and interacts with
through the API.


Frankly, I believe all of these types of services should be built as 
applications that run on OpenStack (or other) infrastructure. In other 
words, they should not be part of the infrastructure itself.


There's really no need for a user of a DBaaS to have access to the 
host or hosts the DB is running on. If the user really wanted that, 
they would just spin up a VM/baremetal server and install the thing 
themselves.




Yes, I think this area is where some hard thinking would be rewarded. I 
recall when I first met Trove, in my mind I expected to be 'carving off 
a piece of a database'... and was a bit surprised to discover that it 
(essentially) leveraged Nova VM + OS + DB (no criticism intended - just 
saying I was surprised). Of course, after delving into how it worked I 
realized that it did make sense to make use of the various Nova things 
(schedulers etc.), *but* now that we are thinking about re-architecting 
(plus more options exist now), it would make sense to revisit this area.


Best wishes

Mark



[openstack-dev] [neutron][horizon][fwaas][vpnaas] fwaas/vpnaas dashboard split out

2017-06-20 Thread Akihiro Motoki
Hi neutron and horizon teams (especially fwaas and vpnaas folks),

As we discussed so far, I prepared separate git repositories for FWaaS
and VPNaaS dashboards.
http://git.openstack.org/cgit/openstack/neutron-fwaas-dashboard/
http://git.openstack.org/cgit/openstack/neutron-vpnaas-dashboard/

All new features will be implemented in the new repositories, for
example, FWaaS v2 support.
The initial core members consist of neutron-fwaas/vpnaas-core
(respectively) + horizon-core.

There are several things to do to complete the split out.
I gathered a list of work items at the etherpad and we will track the
progress here.
https://etherpad.openstack.org/p/horizon-fwaas-vpnaas-splitout
If you are interested in helping the efforts, sign up on the etherpad
or contact me.

I would like to make an initial release which is compatible with
the current horizon FWaaS/VPNaaS dashboard (with no new features).
I hope we can release it around the R-8 week (Jul 3) or R-7 (Jul 10).

It will also be a good example for neutron stadium/related projects
which are interested in adding dashboard support. AFAIK, networking-sfc
and tap-as-a-service are interested in it.

Thanks,
Akihiro



Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-20 Thread Kumari, Madhuri
+1 from me as well.

Thanks Dims and Yanyan for your contribution to Zun ☺

Regards,
Madhuri

From: Kevin Zhao [mailto:kevin.z...@linaro.org]
Sent: Wednesday, June 21, 2017 6:37 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [Zun] Propose addition of Zun core team and 
removal notice

+1 for me.
Thx!

On 20 June 2017 at 13:50, Pradeep Singh wrote:
+1 from me,
Thanks Shunli for your great work :)

On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu wrote:
Hi all,

I would like to propose the following change to the Zun core team:

+ Shunli Zhou (shunliz)

Shunli has been contributing to Zun for a while and has done a lot of work. He has 
completed the BP for supporting resource claims and is close to finishing the 
filter scheduler BP. He has shown a good understanding of Zun's code base and 
expertise in other OpenStack projects. The quantity [1] and quality of his 
submitted code also show his qualification. Therefore, I think he will be a 
good addition to the core team.

In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan Hu 
requested to be removed from the core team. Dims has been helping us since the 
inception of the project. I treated him as a mentor and his guidance has always been 
helpful for the whole team. As the project becomes mature and stable, I agree 
with him that it is time to relieve him of the core reviewer responsibility, 
because he has many other important responsibilities in the OpenStack 
community. Yanyan is leaving because he has been relocated and is now focused on 
an area outside of OpenStack. I would like to take this chance to thank Dims and 
Yanyan for their contribution to Zun.

Core reviewers, please cast your vote on this proposal.

Best regards,
Hongbin






Re: [openstack-dev] [Keystone][Mistral][Devstack] Confusion between auth_url and auth_uri in keystone middleware

2017-06-20 Thread Jamie Lennox
On 16 June 2017 at 00:44, Mikhail Fedosin  wrote:

> Thanks György!
>
> On Thu, Jun 15, 2017 at 1:55 PM, Gyorgy Szombathelyi wrote:
>
>> Hi Mikhail,
>>
>> (I'm not from the Keystone team, but did some patches for using
>> keystoneauth1).
>>
>> >
>> > 2. Even if auth_url is set, it can't be used later, because it is not
>> registered in
>> > oslo_config [5]
>>
>> auth_url is actually a dynamic parameter and depends on the keystone auth
>> plugin used (auth_type=xxx). The plugin which needs this parameter
>> registers it.
>>
>
> Based on this http://paste.openstack.org/show/612664/ I would say that
> the plugin doesn't register it :(
> It could either be a bug, or it was done intentionally; I don't know.
>
>
>>
>> >
>> > So I would like to get advice from the keystone team and understand what I
>> > should do in such cases. Official documentation doesn't add clarity on the
>> > matter because it recommends using auth_uri in some cases and auth_url in
>> > others.
>> > My suggestion is to add auth_url to the list of keystone authtoken
>> > middleware config options, so that the parameter can be used by the others.
>>
>> Yep, this causes some confusion, but adding auth_url will clash with
>> most (all?) authentication plugins. auth_url can be considered an
>> 'internal' option for the keystoneauth1 modules, and is not used by
>> anything else (like keystonemiddleware itself). However, if there were a
>> more elegant solution, I would also like to hear about it.
>>
>> >
>> > Best,
>> > Mike
>> >
>> Br,
>> György
>
>
> My final thought is that we have to use both (auth_url and auth_uri) options
> in the mistral config, which looks ugly but is necessary.
>
> Best,
> Mike
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
Hi,

I feel like the question has been answered in the thread, but as I'm
largely responsible for this I thought I'd pipe up here.

It's annoying and unfortunate that auth_uri and auth_url look so similar.
They've actually existed side by side for some time and ended up like that
out of evolution rather than any deliberate thought. Interestingly, the
first result for auth_uri on Google is [1]. I'd be happy to rename it to
something else if we can agree on what.

Regarding your paste (and the reason I popped up), I would consider this a
bug in mistral. The auth options aren't registered into oslo.config until
just before the plugin is loaded, because depending on what you put in for
auth_type the options may be different. In practice pretty much every
plugin has an auth_url, but mistral shouldn't be assuming anything about
the structure of [keystone_authtoken]. That's the sole responsibility of
keystonemiddleware, and it does change over time.
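
For anyone hitting the same confusion, here is a rough sketch of how the two
options typically end up side by side in a service's config today (the
endpoint URLs, credentials, and the choice of the 'password' plugin are
placeholders for illustration, not a recommendation):

    [keystone_authtoken]
    # Read by keystonemiddleware itself: the public identity endpoint
    # advertised to unauthenticated clients.
    auth_uri = http://controller:5000/v3
    # Everything below is registered dynamically by the keystoneauth1 plugin
    # selected via auth_type; auth_url is where the middleware itself
    # authenticates its service user.
    auth_type = password
    auth_url = http://controller:35357/v3
    username = mistral
    password = secret
    project_name = service
    user_domain_name = Default
    project_domain_name = Default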

Jamie


[1] https://adam.younglogic.com/2016/06/auth_uri-vs-auth_url/


Re: [openstack-dev] [Zun] Propose addition of Zun core team and removal notice

2017-06-20 Thread Kevin Zhao
+1 for me.
Thx!

On 20 June 2017 at 13:50, Pradeep Singh  wrote:

> +1 from me,
> Thanks Shunli for your great work :)
>
> On Tue, Jun 20, 2017 at 10:02 AM, Hongbin Lu wrote:
>
>> Hi all,
>>
>>
>>
>> I would like to propose the following change to the Zun core team:
>>
>>
>>
>> + Shunli Zhou (shunliz)
>>
>>
>>
>> Shunli has been contributing to Zun for a while and has done a lot of work. He
>> has completed the BP for supporting resource claims and is close to finishing
>> the filter scheduler BP. He has shown a good understanding of Zun's code
>> base and expertise in other OpenStack projects. The quantity [1] and
>> quality of his submitted code also show his qualification. Therefore, I
>> think he will be a good addition to the core team.
>>
>>
>>
>> In addition, I have a removal notice. Davanum Srinivas (Dims) and Yanyan
>> Hu requested to be removed from the core team. Dims has been helping us
>> since the inception of the project. I treated him as a mentor and his
>> guidance has always been helpful for the whole team. As the project becomes
>> mature and stable, I agree with him that it is time to relieve him of the
>> core reviewer responsibility, because he has many other important
>> responsibilities in the OpenStack community. Yanyan is leaving because he has
>> been relocated and is now focused on an area outside of OpenStack. I would like
>> to take this chance to thank Dims and Yanyan for their contribution to Zun.
>>
>>
>>
>> Core reviewers, please cast your vote on this proposal.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>>
>>
>>
>>
>>
>>
>
>
>


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Chris Friesen

On 06/20/2017 09:51 AM, Eric Fried wrote:

Nice Stephen!

For those who aren't aware, the rendered version (pretty, so pretty) can
be accessed via the gate-nova-docs-ubuntu-xenial jenkins job:

http://docs-draft.openstack.org/10/475810/1/check/gate-nova-docs-ubuntu-xenial/25e5173//doc/build/html/scheduling.html?highlight=scheduling


Can we teach it to not put line breaks in the middle of words in the text boxes?

Chris



Re: [openstack-dev] [neutron] [nova] [os-vif] OVS plugin assumes an incorrect datapath_type in os-vif

2017-06-20 Thread Kevin Benton
vif_details has always been a bag of goodies for mech drivers to pack in
information relevant to wiring up the vif_type. This sounds like a pretty
standard addition so I don't see any blockers.
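
To make that concrete, here's roughly what such a dict could look like once a
mech driver packs the new key in (illustrative only: 'datapath_type' is the
addition being discussed here, and the other keys are just examples of
existing goodies):

    vif_details = {
        'port_filter': True,
        'ovs_hybrid_plug': False,
        'datapath_type': 'netdev',  # new key proposed in this thread
    }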

On Tue, Jun 20, 2017 at 9:16 AM, Alonso Hernandez, Rodolfo <
rodolfo.alonso.hernan...@intel.com> wrote:

> Hello fellows:
>
>
>
> Currently there is a bug in os-vif [1]. When os-vif tries to
> plug an OVS interface, the datapath type is hardcoded:
>
> -  https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L100-L101
>
> -  https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L127-L128
>
> -  https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L135-L136
>
> -  https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L149-L150
>
>
>
> The problem is os-vif doesn’t have this information now. I’m proposing the
> following solution:
>
> -  Nova: https://review.openstack.org/#/c/474892/
>
> -  Neutron: https://review.openstack.org/#/c/474588/
>
> -  Neutron-lib: https://review.openstack.org/#/c/474248/
>
> -  os-vif: https://review.openstack.org/#/c/474914/
>
>
>
> 1)  Neutron will add the datapath type to the vif details dict. If this
> information is not given in the config file, the default value written will
> be OVS_DATAPATH_SYSTEM. The change in neutron-lib is needed for Neutron to
> keep the same dict key name (matching the name set in nova.network.model).
>
> 2)  Nova will receive this information (if given in the dict),
> getting the value stored in vif['details']. If the key is not set, the
> default datapath will be None. Because currently no information is passed
> and Nova doesn't know about the different datapath types (this information
> is in Neutron), it makes sense not to assign any value. Nova is protected
> in case the dict doesn't have this information.
>
>
>
> Finally, os-vif will receive the VIF information given by Nova. If the
> datapath_type is not given in the variable (dict) or the value is None, the
> default value (OVS_DATAPATH_SYSTEM) will be set.
>
>
>
> As you can see, it's indeed an API change, but the projects affected are
> protected in case the information expected in the variable passed is not
> present.
>
>
>
> What do you think?
>
>
>
> Thank you in advance.
>
>
>
> [1] https://bugs.launchpad.net/os-vif/+bug/1632372
>
>
>
>
>
>
>
>
>
>
>


Re: [openstack-dev] [all] Policy rules for APIs based on "domain_id"

2017-06-20 Thread Lance Bragstad
Domain support hasn't really been adopted across various OpenStack
projects, yet. Ocata was the first release where we had a v3-only
jenkins job set up for projects to run against (domains are a v3-only
concept in keystone and don't really exist in v2.0).

I think it would be great to push on some of that work so that we can
start working the concept of domain-scope into various services. I'd be
happy to help here. John Garbutt had some good ideas on this track, too.

https://review.openstack.org/#/c/433037/
https://review.openstack.org/#/c/427872/
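
To make the gap concrete: a policy file can already express a domain-scoped
rule like the sketch below, but it only works if the service puts a domain_id
into the policy target, which today essentially only keystone does. The nova
rule name is just an example target; nova does not currently populate
domain_id, which is exactly what the bug describes.

    {
        "domain_admin": "role:admin and domain_id:%(domain_id)s",
        "os_compute_api:servers:index": "rule:domain_admin"
    }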

On 06/20/2017 08:59 AM, Valeriy Ponomaryov wrote:
> Also, one additional kind of "feature request" is to be able to
> filter each project's entities per domain, just as we can do with
> project/tenant now.
>
> So, as a result, we will be able to configure different "list" APIs to
> return objects grouped by either domain or project.
>
> Thoughts?
>
> On Tue, Jun 20, 2017 at 1:07 PM, Adam Heczko  > wrote:
>
> Hello Valeriy,
> agree, that would be very useful. I think that this deserves
> attention and cross project discussion.
> Maybe a community goal process [2] is a valid path forward in this
> regard.
>
> [2] https://governance.openstack.org/tc/goals/
> 
>
> On Tue, Jun 20, 2017 at 11:15 AM, Valeriy Ponomaryov wrote:
>
> Hello OpenStackers,
>
> I wanted to draw some attention to one of the restrictions in OpenStack.
> It turns out that it is impossible to define policy rules for
> API services based on "domain_id".
> As far as I know, only Keystone supports it.
>
> So, it is unclear whether this is intended or whether it is just
> technical debt that each OpenStack project should
> eliminate.
>
> For the moment, I filed bug [1].
>
> The use case is the following: Keystone API v3 is used all over the
> cloud and the level of trust is the domain, not the project.
>
> And if it is technical debt, how interested are the different teams
> in having such a possibility?
>
> [1] https://bugs.launchpad.net/nova/+bug/1699060
> 
>
> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com 
> vponomar...@mirantis.com 
>
>
>
>
>
> -- 
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
>
>
>
>
> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com 
> vponomar...@mirantis.com 
>
>





Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Zane Bitter

On 20/06/17 11:45, Jay Pipes wrote:

Good discussion, Zane. Comments inline.


++


On 06/20/2017 11:01 AM, Zane Bitter wrote:

On 20/06/17 10:08, Jay Pipes wrote:

On 06/20/2017 09:42 AM, Doug Hellmann wrote:

Does "service VM" need to be a first-class thing?  Akanda creates
them, using a service user. The VMs are tied to a "router" which
is the billable resource that the user understands and interacts with
through the API.


Frankly, I believe all of these types of services should be built as 
applications that run on OpenStack (or other) infrastructure. In 
other words, they should not be part of the infrastructure itself.


There's really no need for a user of a DBaaS to have access to the 
host or hosts the DB is running on. If the user really wanted that, 
they would just spin up a VM/baremetal server and install the thing 
themselves.


Hey Jay,
I'd be interested in exploring this idea with you, because I think 
everyone agrees that this would be a good goal, but at least in my 
mind it's not obvious what the technical solution should be. 
(Actually, I've read your email a bunch of times now, and I go back 
and forth on which one you're actually advocating for.) The two 
options, as I see it, are as follows:


1) The database VMs are created in the user's tena^W project. They 
connect directly to the tenant's networks, are governed by the user's 
quota, and are billed to the project as Nova VMs (on top of whatever 
additional billing might come along with the management services). A 
[future] feature in Nova (https://review.openstack.org/#/c/438134/) 
allows the Trove service to lock down access so that the user cannot 
actually interact with the server using Nova, but must go through the 
Trove API. On a cloud that doesn't include Trove, a user could run 
Trove as an application themselves and all it would have to do 
differently is not pass the service token to lock down the VM.


alternatively:

2) The database VMs are created in a project belonging to the operator 
of the service. They're connected to the user's network through 
, and isolated from other users' databases running in the same 
project through . 
Trove has its own quota management and billing. The user cannot 
interact with the server using Nova since it is owned by a different 
project. On a cloud that doesn't include Trove, a user could run Trove 
as an application themselves, by giving it credentials for their own 
project and disabling all of the cross-tenant networking stuff.


None of the above :)

Don't think about VMs at all. Or networking plumbing. Or volume storage 
or any of that.


OK, but somebody has to ;)

Think only in terms of what a user of a DBaaS really wants. At the end 
of the day, all they want is an address in the cloud where they can 
point their application to write and read data from.


Do they want that data connection to be fast and reliable? Of course, 
but how that happens is irrelevant to them


Do they want that data to be safe and backed up? Of course, but how that 
happens is irrelevant to them.


Fair enough. The world has changed a lot since RDS (which was the model 
for Trove) was designed, it's certainly worth reviewing the base 
assumptions before embarking on a new design.


The problem with many of these high-level *aaS projects is that they 
consider their user to be a typical tenant of general cloud 
infrastructure -- focused on launching VMs and creating volumes and 
networks etc. And the discussions around the implementation of these 
projects always comes back to minutia about how to set up secure 
communication channels between a control plane message bus and the 
service VMs.


Incidentally, the reason that discussions always come back to that is 
because OpenStack isn't very good at it, which is a huge problem not 
only for the *aaS projects but for user applications in general running 
on OpenStack.


If we had fine-grained authorisation and ubiquitous multi-tenant 
asynchronous messaging in OpenStack then I firmly believe that we, and 
application developers, would be in much better shape.


If you create these projects as applications that run on cloud 
infrastructure (OpenStack, k8s or otherwise),


I'm convinced there's an interesting idea here, but the terminology 
you're using doesn't really capture it. When you say 'as applications 
that run on cloud infrastructure', it sounds like you mean they should 
run in a Nova VM, or in a Kubernetes cluster somewhere, rather than on 
the OpenStack control plane. I don't think that's what you mean though, 
because you can (and IIUC Rackspace does) deploy OpenStack services that 
way already, and it has no real effect on the architecture of those 
services.


then the discussions focus 
instead on how the real end-users -- the ones that actually call the 
APIs and utilize the service -- would interact with the APIs and not the 
underlying infrastructure itself.


Here's an example to think about...

What if a provider of this DBaaS 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2017-06-20 10:08:54 -0400:
> On 06/20/2017 09:42 AM, Doug Hellmann wrote:
> > Does "service VM" need to be a first-class thing?  Akanda creates
> > them, using a service user. The VMs are tied to a "router" which
> > is the billable resource that the user understands and interacts with
> > through the API.
> 
> Frankly, I believe all of these types of services should be built as 
> applications that run on OpenStack (or other) infrastructure. In other 
> words, they should not be part of the infrastructure itself.
> 
> There's really no need for a user of a DBaaS to have access to the host 
> or hosts the DB is running on. If the user really wanted that, they 
> would just spin up a VM/baremetal server and install the thing themselves.
> 

There's one reason, and that is specialized resources that we don't
trust to be multi-tenant.

Baremetal done multi-tenant is hard, just ask our friends who were/are
running OnMetal. But baremetal done for the purposes of running MySQL
clusters that only allow users to access MySQL and control everything
via an agent of sorts is a lot simpler. You can let them all share a
layer 2 with no MAC filtering for instance, since you are in control at
the OS level.



Re: [openstack-dev] [kuryr][Release-job-failures] Release of openstack/kuryr-tempest-plugin failed

2017-06-20 Thread Kirill Zaitsev
Looks like kuryr-tempest-plugin doesn’t have a voting docs job for regular 
commits. I’ve added https://review.openstack.org/#/c/475901/ and 
https://review.openstack.org/#/c/475904/ to fix the issue.


Regards, Kirill

On 20 June 2017 at 21:51:49, Doug Hellmann (d...@doughellmann.com) wrote:

It looks like the kuryr-tempest-plugin repository has a documentation
job set up but no documentation.

http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-docs-ubuntu-xenial/7f096fd/console.html#_2017-06-20_17_03_00_590629

--- Begin forwarded message from jenkins ---
From: jenkins 
To: release-job-failures 
Date: Tue, 20 Jun 2017 17:08:35 +
Subject: [Release-job-failures] Release of openstack/kuryr-tempest-plugin failed

Build failed.

- kuryr-tempest-plugin-tarball 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-tarball/791b9b2/
 : SUCCESS in 3m 00s
- kuryr-tempest-plugin-tarball-signing 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-tarball-signing/f74ada9/
 : SUCCESS in 42s
- kuryr-tempest-plugin-pypi-both-upload 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-pypi-both-upload/43ac1e9/
 : SUCCESS in 30s
- kuryr-tempest-plugin-announce-release 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-announce-release/65d7a48/
 : SUCCESS in 5m 07s
- propose-kuryr-tempest-plugin-update-constraints 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/propose-kuryr-tempest-plugin-update-constraints/8b85c19/
 : SUCCESS in 23s
- kuryr-tempest-plugin-docs-ubuntu-xenial 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-docs-ubuntu-xenial/7f096fd/
 : FAILURE in 3m 32s

--- End forwarded message ---



[openstack-dev] [keystone] New Office Hours Proposal

2017-06-20 Thread Harry Rybacki
Greetings All,

We would like to foster a more interactive community within Keystone
focused on fixing bugs on a regular basis! On a regular datetime (to
be voted upon) we will have "office hours"[1] where Keystone cores
will be available specifically to advise, help and review your efforts
in squashing bugs. We want to aggressively attack our growing list of
bugs and make sure Keystone is as responsive as possible to fixing
them. The best way to do this is get people working on them and have
the resources to get the fixes reviewed and merged.

Please take a few moments to fill out our Doodle poll[2] to select the
time block(s) that work best for you. We will tally the results and
announce the official Keystone Office hours on Friday, 23-June-2017,
by 2100 (UTC).

[1] - https://etherpad.openstack.org/p/keystone-office-hours
[2] - https://beta.doodle.com/poll/epvs95npfvrd3h5e


/R

Harry Rybacki
Software Engineer, Red Hat



Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Mike Bayer



On 06/20/2017 11:45 AM, Jay Pipes wrote:

Good discussion, Zane. Comments inline.

On 06/20/2017 11:01 AM, Zane Bitter wrote:


2) The database VMs are created in a project belonging to the operator 
of the service. They're connected to the user's network through 
, and isolated from other users' databases running in the same 
project through . 
Trove has its own quota management and billing. The user cannot 
interact with the server using Nova since it is owned by a different 
project. On a cloud that doesn't include Trove, a user could run Trove 
as an application themselves, by giving it credentials for their own 
project and disabling all of the cross-tenant networking stuff.


None of the above :)

Don't think about VMs at all. Or networking plumbing. Or volume storage 
or any of that.


Think only in terms of what a user of a DBaaS really wants. At the end 
of the day, all they want is an address in the cloud where they can 
point their application to write and read data from.


Do they want that data connection to be fast and reliable? Of course, 
but how that happens is irrelevant to them


Do they want that data to be safe and backed up? Of course, but how that 
happens is irrelevant to them.


Hi, I'm just a newb trying to follow along... isn't that what #2 is 
proposing?  It's just talking about the implementation a bit.


(Guess this comes down to the terms "user" and "operator" - e.g. 
"operator" has the VMs w/ the DBs, "user" gets a login to a DB.  "user" 
is the person who pushes the trove button to "give me a database")






The problem with many of these high-level *aaS projects is that they 
consider their user to be a typical tenant of general cloud 
infrastructure -- focused on launching VMs and creating volumes and 
networks etc. And the discussions around the implementation of these 
projects always comes back to minutia about how to set up secure 
communication channels between a control plane message bus and the 
service VMs.


If you create these projects as applications that run on cloud 
infrastructure (OpenStack, k8s or otherwise), then the discussions focus 
instead on how the real end-users -- the ones that actually call the 
APIs and utilize the service -- would interact with the APIs and not the 
underlying infrastructure itself.


Here's an example to think about...

What if a provider of this DBaaS service wanted to jam 100 database 
instances on a single VM and provide connectivity to those database 
instances to 100 different tenants?


Would those tenants know if those databases were all serviced from a 
single database server process running on the VM? Or 100 containers each 
running a separate database server process? Or 10 containers running 10 
database server processes each?


No, of course not. And the tenant wouldn't care at all, because the 
point of the DBaaS service is to get a database. It isn't to get one or 
more VMs/containers/baremetal servers.


At the end of the day, I think Trove is best implemented as a hosted 
application that exposes an API to its users that is entirely separate 
from the underlying infrastructure APIs like Cinder/Nova/Neutron.


This is similar to Kevin's k8s Operator idea, which I support but in a 
generic fashion that isn't specific to k8s.


In the same way that k8s abstracts the underlying infrastructure (via 
its "cloud provider" concept), I think that Trove and similar projects 
need to use a similar abstraction and focus on providing a different API 
to their users that doesn't leak the underlying infrastructure API 
concepts out.


Best,
-jay

Of course the current situation, as Amrith alluded to, where the 
default is option (1) except without the lock-down feature in Nova, 
though some operators are deploying option (2) but it's not tested 
upstream... clearly that's the worst of all possible worlds, and AIUI 
nobody disagrees with that.


To my mind, (1) sounds more like "applications that run on OpenStack 
(or other) infrastructure", since it doesn't require stuff like the 
admin-only cross-project networking that makes it effectively "part of 
the infrastructure itself" - as evidenced by the fact that 
unprivileged users can run it standalone with little more than a 
simple auth middleware change. But I suspect you are going to use 
similar logic to argue for (2)? I'd be interested to hear your thoughts.


cheers,
Zane.






Re: [openstack-dev] [nova] How to handle nova show --minimal with embedded flavors

2017-06-20 Thread Dean Troyer
On Tue, Jun 20, 2017 at 12:07 PM, Chris Friesen
 wrote:
> In the existing novaclient code for show/rebuild, the --minimal option just
> skips doing the lookups on the flavor/image as described in the help text.
> It doesn't affect the other ~40 fields in the instance.  After the new
> microversion we already have the flavor details without doing the flavor
> lookup so I thought it made sense to display them.
>
> I suppose an argument could be made that for consistency we should keep the
> output with --minimal similar to what it was before.  If we want to go that
> route I'm happy to do so.

I would keep the output fields the same. If --minimal used to show
empty fields because the lookup was skipped, it would be OK to now
populate those, but I wouldn't add them just because you have the data
now without the lookup.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [tripleo][ci] where to find the CI backlog and issues we're tracking

2017-06-20 Thread Emilien Macchi
On Tue, Jun 20, 2017 at 12:49 PM, Wesley Hayutin  wrote:
> Greetings,
>
> It's become apparent that everyone in the tripleo community may not be aware
> of where CI specific work is tracked.
>
> To find out which CI related features or bug fixes are in progress or to see
> the backlog please consult [1].
>
> To find out what issues have been found in OpenStack via CI please consult
> [2].
>
> Thanks!

Thanks Wes for this information. I was about to start adding more
links and information when I realized that monitoring TripleO CI might
deserve a little bit of training and documentation.
I'll take some time this week to create a new section in the TripleO docs
with useful information that we can easily share with our community,
so everyone can learn how to stay aware of CI status.


>
> [1] https://trello.com/b/U1ITy0cu/tripleo-ci-squad
> [2] https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status
>
>



-- 
Emilien Macchi



[openstack-dev] [kuryr][Release-job-failures] Release of openstack/kuryr-tempest-plugin failed

2017-06-20 Thread Doug Hellmann
It looks like the kuryr-tempest-plugin repository has a documentation
job set up but no documentation.

http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-docs-ubuntu-xenial/7f096fd/console.html#_2017-06-20_17_03_00_590629

--- Begin forwarded message from jenkins ---
From: jenkins 
To: release-job-failures 
Date: Tue, 20 Jun 2017 17:08:35 +
Subject: [Release-job-failures] Release of openstack/kuryr-tempest-plugin failed

Build failed.

- kuryr-tempest-plugin-tarball 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-tarball/791b9b2/
 : SUCCESS in 3m 00s
- kuryr-tempest-plugin-tarball-signing 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-tarball-signing/f74ada9/
 : SUCCESS in 42s
- kuryr-tempest-plugin-pypi-both-upload 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-pypi-both-upload/43ac1e9/
 : SUCCESS in 30s
- kuryr-tempest-plugin-announce-release 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-announce-release/65d7a48/
 : SUCCESS in 5m 07s
- propose-kuryr-tempest-plugin-update-constraints 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/propose-kuryr-tempest-plugin-update-constraints/8b85c19/
 : SUCCESS in 23s
- kuryr-tempest-plugin-docs-ubuntu-xenial 
http://logs.openstack.org/1e/1ee40b0f0ee4e92209b8ccff6d74f4980f6234ab/release/kuryr-tempest-plugin-docs-ubuntu-xenial/7f096fd/
 : FAILURE in 3m 32s

--- End forwarded message ---



Re: [openstack-dev] [glance] nominating Abhishek Kekane for glance core

2017-06-20 Thread Mikhail Fedosin
Wasn't Abhishek a glance core before? What a surprise for me o_O
I thought that he was just being modest and did not put -2 on the patches.

Undoubtedly, we need to correct this misunderstanding as quickly as
possible and invite Abhishek to the core team.

On Mon, Jun 19, 2017 at 5:40 PM, Erno Kuvaja  wrote:

> On Fri, Jun 16, 2017 at 3:26 PM, Brian Rosmaita
>  wrote:
> > I'm nominating Abhishek Kekane (abhishekk on IRC) to be a Glance core
> > for the Pike cycle.  Abhishek has been around the Glance community for
> > a long time and is familiar with the architecture and design patterns
> > used in Glance and its related projects.  He's contributed code,
> > triaged bugs, provided bugfixes, and done quality reviews for Glance.
> >
> > Abhishek has been proposed for Glance core before, but some members of
> > the community were concerned that he wasn't able to devote sufficient
> > time to Glance.  Given the current situation with the project,
> > however, it would be an enormous help to have someone as knowledgeable
> > about Glance as Abhishek to have +2 powers.  I discussed this with
> > Abhishek, he's aware that some in the community have that concern, and
> > he's agreed to be a core reviewer for the Pike cycle.  The community
> > can revisit his status early in Queens.
> >
> > Now that I've written that down, that puts Abhishek in the same boat
> > as all core reviewers, i.e., their levels of participation and
> > commitment are assessed at the beginning of each cycle and adjustments
> > made.
> >
> > In any case, I'd like to put Abhishek to work as soon as possible!  So
> > please reply to this message with comments or concerns before 23:59
> > UTC on Monday 19 June.  I'd like to confirm Abhishek as a core on
> > Tuesday 20 June.
> >
> > thanks,
> > brian
> >
>
> +2 from me! This sounds like a great solution for our immediate
> staffing issues and I'm happy to hear Abhishek would have the cycles
> to help us. Let's hope we get to enjoy his knowledge and good-quality
> reviews for many cycles to come.
>
> - Erno
>
>
>


Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-20 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, June 20, 2017 5:59 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-dev[[nova] Simple question
> about sorting CPU topologies
> 
> On 06/20/2017 12:53 PM, Chris Friesen wrote:
> > On 06/20/2017 06:29 AM, Jay Pipes wrote:
> >> On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:
> >>> Sorry, The mail sent accidentally by mis-typing ...
> >>>
> >>> My question is, what is the benefit of the above preference?
> >>
> >> Hi Kevin!
> >>
> >> I believe the benefit is so that the compute node prefers CPU
> >> topologies that do not have hardware threads over CPU topologies
> that
> >> do include hardware threads.
[Mooney, Sean K] If you have not expressed that you want the 'require' or
'isolate' policy, then you really can't infer which is better: for some
workloads, preferring hyperthread siblings will improve performance (2 threads
sharing data via the L2 cache), and for others it will reduce it (2 threads
that do not share data).
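
(For reference, and as an assumption about how operators usually express this
rather than a prescription: the flavor extra specs let the preference be
stated explicitly, so the sorting discussed here should only matter when
nothing was expressed. The flavor name below is just a placeholder.

    openstack flavor set rt.small \
        --property hw:cpu_policy=dedicated \
        --property hw:cpu_thread_policy=isolate   # or 'require' / 'prefer'
)
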
> >>
> >> I'm not sure exactly of the reason for this preference, but perhaps
> >> it is due to assumptions that on some hardware, threads will compete
> >> for the same cache resources as other siblings on a core whereas
> >> cores may have their own caches (again, on some specific hardware).
> >
> > Isn't the definition of hardware threads basically the fact that the
> > sibling threads share the resources of a single core?
> >
> > Are there architectures that OpenStack runs on where hardware threads
> > don't compete for cache/TLB/execution units?  (And if there are, then
> > why are they called threads and not cores?)
[Mooney, Sean K] Well, on x86, when you turn on hyperthreading your L1 data and
instruction cache is partitioned in two, with each half allocated to a thread
sibling. The L2 cache, which is also per core, is shared between the two thread
siblings, so on Intel's x86 implementation the threads do not compete for L1
cache but do share L2. That could easily change though in new generations.

On pre-Zen architectures, I believe AMD shared the floating point units between
the SMT threads but had separate integer execution units that were not shared.
That meant that for integer-heavy workloads their SMT implementation approached
2X performance, limited by the shared load and store units, and scaling dropped
to zero if both threads tried to access the floating point execution unit
concurrently.

So it's not quite as clean-cut as saying the threads do or don't share
resources. Each vendor addresses this differently; even within x86 you are not
required to partition the cache as Intel did, or the execution units. On other
architectures I'm sure they have come up with equally inventive ways to make
this an interesting shade of grey when describing the difference between a
hardware thread and a full core.

> 
> I've learned over the years not to make any assumptions about hardware.
> 
> Thus my "not sure exactly" bet-hedging ;)
[Mooney, Sean K] yep hardware is weird and will always find ways to break your 
assumptions :)
> 
> Best,
> -jay
> 


Re: [openstack-dev] [nova] How to handle nova show --minimal with embedded flavors

2017-06-20 Thread Chris Friesen

On 06/20/2017 07:59 AM, Matt Riedemann wrote:


Personally I think that if I specify --minimal I want minimal output, which
would just be the flavor's original name after the new microversion, which is
closer in behavior to how --minimal works today before the 2.47 microversion.


In the existing novaclient code for show/rebuild, the --minimal option just 
skips doing the lookups on the flavor/image as described in the help text.  It 
doesn't affect the other ~40 fields in the instance.  After the new microversion 
we already have the flavor details without doing the flavor lookup so I thought 
it made sense to display them.
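
For context, this is roughly the embedded flavor blob that 2.47 returns as part
of the server body (field names per the microversion, values made up):

    "flavor": {
        "original_name": "m1.small",
        "vcpus": 1,
        "ram": 2048,
        "disk": 20,
        "ephemeral": 0,
        "swap": 0,
        "extra_specs": {}
    }

so with --minimal the question is really whether we print just original_name or
the whole dict.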


I suppose an argument could be made that for consistency we should keep the 
output with --minimal similar to what it was before.  If we want to go that 
route I'm happy to do so.


Chris



Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-20 Thread Jay Pipes

On 06/20/2017 12:53 PM, Chris Friesen wrote:

On 06/20/2017 06:29 AM, Jay Pipes wrote:

On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:

Sorry, The mail sent accidentally by mis-typing ...

My question is, what is the benefit of the above preference?


Hi Kevin!

I believe the benefit is so that the compute node prefers CPU 
topologies that do
not have hardware threads over CPU topologies that do include hardware 
threads.


I'm not sure exactly of the reason for this preference, but perhaps it 
is due to
assumptions that on some hardware, threads will compete for the same 
cache
resources as other siblings on a core whereas cores may have their own 
caches

(again, on some specific hardware).


Isn't the definition of hardware threads basically the fact that the 
sibling threads share the resources of a single core?


Are there architectures that OpenStack runs on where hardware threads 
don't compete for cache/TLB/execution units?  (And if there are, then 
why are they called threads and not cores?)


I've learned over the years not to make any assumptions about hardware.

Thus my "not sure exactly" bet-hedging ;)

Best,
-jay



Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-20 Thread Chris Friesen

On 06/20/2017 06:29 AM, Jay Pipes wrote:

On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:

Sorry, The mail sent accidentally by mis-typing ...

My question is, what is the benefit of the above preference?


Hi Kevin!

I believe the benefit is so that the compute node prefers CPU topologies that do
not have hardware threads over CPU topologies that do include hardware threads.

I'm not sure exactly of the reason for this preference, but perhaps it is due to
assumptions that on some hardware, threads will compete for the same cache
resources as other siblings on a core whereas cores may have their own caches
(again, on some specific hardware).


Isn't the definition of hardware threads basically the fact that the sibling 
threads share the resources of a single core?


Are there architectures that OpenStack runs on where hardware threads don't 
compete for cache/TLB/execution units?  (And if there are, then why are they 
called threads and not cores?)


Chris



[openstack-dev] [tripleo][ci] where to find the CI backlog and issues we're tracking

2017-06-20 Thread Wesley Hayutin
Greetings,

It's become apparent that everyone in the tripleo community may not be
aware of where CI specific work is tracked.

To find out which CI related features or bug fixes are in progress or to
see the backlog please consult [1].

To find out what issues have been found in OpenStack via CI please consult
[2].

Thanks!


[1] https://trello.com/b/U1ITy0cu/tripleo-ci-squad
[2] https://trello.com/b/WXJTwsuU/tripleo-and-rdo-ci-status


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-20 Thread Chris Friesen

On 06/20/2017 01:48 AM, Henning Schild wrote:

Hi,

We are using OpenStack for managing realtime guests. We modified
it and contributed to discussions on how to model the realtime
feature. More recent versions of OpenStack have support for realtime,
and there are a few proposals on how to improve that further.

But there is still no full answer on how to distribute threads across
host-cores. The vcpus are easy but for the emulation and io-threads
there are multiple options. I would like to collect the constraints
from a qemu/kvm perspective first, and then possibly influence the
OpenStack development.

I will put the summary/questions first, the text below provides more
context to where the questions come from.
- How do you distribute your threads when reaching the really low
   cyclictest results in the guests? In [3] Rik talked about problems
   like lock holder preemption, starvation etc. but not where/how to
   schedule emulators and io
- Is it ok to put a vcpu and emulator thread on the same core as long as
   the guest knows about it? Any funny behaving guest, not just Linux.
- Is it ok to make the emulators potentially slow by running them on
   busy best-effort cores, or will they quickly be on the critical path
   if you do more than just cyclictest? - our experience says we don't
   need them reactive even with rt-networking involved


Our goal is to reach a high packing density of realtime VMs. Our
pragmatic first choice was to run all non-vcpu-threads on a shared set
of pcpus where we also run best-effort VMs and host load.
Now the OpenStack guys are not too happy with that because that is load
outside the assigned resources, which leads to quota and accounting
problems.


If you wanted to go this route, you could just edit the "vcpu_pin_set" entry in 
nova.conf on the compute nodes so that nova doesn't actually know about all of 
the host vCPUs.  Then you could run host load and emulator threads on the pCPUs 
that nova doesn't know about, and there will be no quota/accounting issues in nova.
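
A minimal sketch of what that looks like in nova.conf on the compute node (the
exact CPU ranges are obviously deployment-specific):

    [DEFAULT]
    # Only pCPUs 4-15 are exposed to nova for guest vCPUs; 0-3 stay invisible
    # to nova and can host emulator/io threads and other host load.
    vcpu_pin_set = 4-15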


Chris



[openstack-dev] [MassivelyDistributed] [FEMDC] IRC Meeting tomorrow 15:00 UTC

2017-06-20 Thread lebre . adrien
Dear all, 

A gentle reminder for our meeting tomorrow. 
As usual, the agenda is available at: 
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 
810)
Please feel free to add items.

Best, 
ad_rien_
PS: as you may have seen, we decided during our last meeting to switch from the 
 [MassivelyDistributed] tag to [FEMDC] (shorter and thus better ;)).



[openstack-dev] [neutron] [nova] [os-vif] OVS plugin assumes an incorrect datapath_type in os-vif

2017-06-20 Thread Alonso Hernandez, Rodolfo
Hello fellows:

Currently there is a bug in os-vif 
[1]. When os-vif tries to plug 
an OVS interface, the datapath type is hardcoded:

- https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L100-L101

- https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L127-L128

- https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L135-L136

- https://github.com/openstack/os-vif/blob/9fb7fe512902a37432e38d884b8be59ce91582db/vif_plug_ovs/ovs.py#L149-L150

The problem is os-vif doesn’t have this information now. I’m proposing the 
following solution:

-  Nova: https://review.openstack.org/#/c/474892/

-  Neutron: https://review.openstack.org/#/c/474588/

-  Neutron-lib: https://review.openstack.org/#/c/474248/

-  os-vif: https://review.openstack.org/#/c/474914/

1)  Neutron will add the datapath type to the vif details dict. If this 
information is not given in the config file, the default value written will be 
OVS_DATAPATH_SYSTEM. The change in neutron-lib is needed for Neutron to keep 
the same dict key name (matching the name set in nova.network.model).

2)  Nova will receive this information (if given in the dict), getting the 
value stored in vif['details']. If the key is not set, the default datapath 
will be None. Because currently no information is passed and Nova doesn't know 
about the different datapath types (this information is in Neutron), it makes 
sense not to assign any value. Nova is protected in case the dict doesn't have 
this information.

Finally, os-vif will receive the VIF information given by Nova. If the 
datapath_type is not given in the variable (dict) or the value is None, the 
default value (OVS_DATAPATH_SYSTEM) will be set.
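
In pseudo-code, the os-vif side of that fallback would look something like the
sketch below (names are approximations of the constants and fields involved,
not the final patch):

    OVS_DATAPATH_SYSTEM = 'system'

    def _get_datapath_type(vif):
        # Nova copies Neutron's binding:vif_details into vif.details (a plain
        # dict), so a missing key or a None value simply means "use the default".
        details = getattr(vif, 'details', None) or {}
        return details.get('datapath_type') or OVS_DATAPATH_SYSTEM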

As you can see, it's indeed an API change, but the projects affected are 
protected in case the information expected in the variable passed is not 
present.

What do you think?

Thank you in advance.

[1] https://bugs.launchpad.net/os-vif/+bug/1632372






Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Eric Fried
Nice Stephen!

For those who aren't aware, the rendered version (pretty, so pretty) can
be accessed via the gate-nova-docs-ubuntu-xenial jenkins job:

http://docs-draft.openstack.org/10/475810/1/check/gate-nova-docs-ubuntu-xenial/25e5173//doc/build/html/scheduling.html?highlight=scheduling

On 06/20/2017 09:09 AM, sfinu...@redhat.com wrote:

> 
> I have a document (with a nifty activity diagram in tow) for all the above
> available here:
> 
>   https://review.openstack.org/475810 
> 
> Should be more Google'able than mailing list posts for future us :)
> 
> Stephen
> 
> 



Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-20 Thread Bogdan Dobrelya
On 20.06.2017 17:27, Michał Jastrzębski wrote:
> On 19 June 2017 at 06:05, Doug Hellmann  wrote:
>> Excerpts from Michał Jastrzębski's message of 2017-06-16 15:50:54 -0700:
>>> So I'm trying to figure out how to actually use it.
>>>
>>> We (and any other container based deploy..) will run into some
>>> chicken/egg problem - you need to deploy container to generate big
>>> yaml with defaults, then you need to overload it with your
>>
>> The config schema file (the "big YAML with defaults") should be part of
>> the packaged software, so the deployment tool shouldn't need to generate
>> it unless you're handling drivers that are not included in tree.
> 
> Right that's what I was missing, I guess we can generate these during
> buildtime without big issues, then it will be embedded into container,
> shouldn't be too hard a change and would work for both source and
> binary.
>>> configurations, validate if they're not deprecated, run container with
>>
>> It doesn't do it today, but the thing that converts the input data to
>> the INI file could automatically translate old option names to their new
>> names.
>>
>>> this ansible role (or module...really doesn't matter), spit out final
>>
>> Why does the config file need to be generated inside a container?
> 
> Outside of container you don't have oslo or nova (python libs), so to
> get access to these you need to do it inside container.

That could be another container I suppose. Like those containers used
for build deps, there could be as well a container for config management
deps. Docker multi-stage [0] could help to achieve that smooth, w/o
impacting the service containers.

> 
>>> confg, lay it down, deploy container again. And that will have to be
>>> done for every host class (as configs might differ host to host). Imho
>>> a bit too much for this to be appealing (but I might be wrong). I'd
>>> much rather have:
>>> 1. Yaml as input to oslo.config instead of broken ini
>>
>> I'm not opposed to switching to YAML, but it's a bit more involved than
>> just adding support in the parser. All of the work that has been done on
>> generating sample default files and documentation needs to be updated to
>> support YAML. We need a migration path to move everyone from INI to
>> YAML. And we need to update devstack and all of its plugins to edit the
>> new file format. There are probably more tasks involved in the
>> migration. I'm dealing with a couple of other projects right now, and
>> don't have time to plan all of that out myself. If someone else wants to
>> pick it up, I can help with reviews on the spec and code changes.
> 
> Switching is a big no, everyone would hate us with emotion pure as
> mountain spring water. It's to support both at the same time, which makes
> it slightly more complex. We could make a full switch after a few releases
> of deprecation I guess. Anyway, agree, lots of work.
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Jay Pipes

Good discussion, Zane. Comments inline.

On 06/20/2017 11:01 AM, Zane Bitter wrote:

On 20/06/17 10:08, Jay Pipes wrote:

On 06/20/2017 09:42 AM, Doug Hellmann wrote:

Does "service VM" need to be a first-class thing?  Akanda creates
them, using a service user. The VMs are tied to a "router" which
is the billable resource that the user understands and interacts with
through the API.


Frankly, I believe all of these types of services should be built as 
applications that run on OpenStack (or other) infrastructure. In other 
words, they should not be part of the infrastructure itself.


There's really no need for a user of a DBaaS to have access to the 
host or hosts the DB is running on. If the user really wanted that, 
they would just spin up a VM/baremetal server and install the thing 
themselves.


Hey Jay,
I'd be interested in exploring this idea with you, because I think 
everyone agrees that this would be a good goal, but at least in my mind 
it's not obvious what the technical solution should be. (Actually, I've 
read your email a bunch of times now, and I go back and forth on which 
one you're actually advocating for.) The two options, as I see it, are 
as follows:


1) The database VMs are created in the user's tena^W project. They 
connect directly to the tenant's networks, are governed by the user's 
quota, and are billed to the project as Nova VMs (on top of whatever 
additional billing might come along with the management services). A 
[future] feature in Nova (https://review.openstack.org/#/c/438134/) 
allows the Trove service to lock down access so that the user cannot 
actually interact with the server using Nova, but must go through the 
Trove API. On a cloud that doesn't include Trove, a user could run Trove 
as an application themselves and all it would have to do differently is 
not pass the service token to lock down the VM.


alternatively:

2) The database VMs are created in a project belonging to the operator 
of the service. They're connected to the user's network through , 
and isolated from other users' databases running in the same project 
through . Trove has its 
own quota management and billing. The user cannot interact with the 
server using Nova since it is owned by a different project. On a cloud 
that doesn't include Trove, a user could run Trove as an application 
themselves, by giving it credentials for their own project and disabling 
all of the cross-tenant networking stuff.


None of the above :)

Don't think about VMs at all. Or networking plumbing. Or volume storage 
or any of that.


Think only in terms of what a user of a DBaaS really wants. At the end 
of the day, all they want is an address in the cloud where they can 
point their application to write and read data from.


Do they want that data connection to be fast and reliable? Of course, 
but how that happens is irrelevant to them


Do they want that data to be safe and backed up? Of course, but how that 
happens is irrelevant to them.


The problem with many of these high-level *aaS projects is that they 
consider their user to be a typical tenant of general cloud 
infrastructure -- focused on launching VMs and creating volumes and 
networks etc. And the discussions around the implementation of these 
projects always comes back to minutia about how to set up secure 
communication channels between a control plane message bus and the 
service VMs.


If you create these projects as applications that run on cloud 
infrastructure (OpenStack, k8s or otherwise), then the discussions focus 
instead on how the real end-users -- the ones that actually call the 
APIs and utilize the service -- would interact with the APIs and not the 
underlying infrastructure itself.


Here's an example to think about...

What if a provider of this DBaaS service wanted to jam 100 database 
instances on a single VM and provide connectivity to those database 
instances to 100 different tenants?


Would those tenants know if those databases were all serviced from a 
single database server process running on the VM? Or 100 containers each 
running a separate database server process? Or 10 containers running 10 
database server processes each?


No, of course not. And the tenant wouldn't care at all, because the 
point of the DBaaS service is to get a database. It isn't to get one or 
more VMs/containers/baremetal servers.


At the end of the day, I think Trove is best implemented as a hosted 
application that exposes an API to its users that is entirely separate 
from the underlying infrastructure APIs like Cinder/Nova/Neutron.


This is similar to Kevin's k8s Operator idea, which I support but in a 
generic fashion that isn't specific to k8s.


In the same way that k8s abstracts the underlying infrastructure (via 
its "cloud provider" concept), I think that Trove and similar projects 
need to use a similar abstraction and focus on providing a different API 
to their users that doesn't leak the underlying infrastructure API 
concepts 

Re: [openstack-dev] [deployment][kolla][openstack-ansible][openstack-helm][tripleo] ansible role to produce oslo.config files for openstack services

2017-06-20 Thread Michał Jastrzębski
On 19 June 2017 at 06:05, Doug Hellmann  wrote:
> Excerpts from Michał Jastrzębski's message of 2017-06-16 15:50:54 -0700:
>> So I'm trying to figure out how to actually use it.
>>
>> We (and any other container based deploy..) will run into some
>> chicken/egg problem - you need to deploy container to generate big
>> yaml with defaults, then you need to overload it with your
>
> The config schema file (the "big YAML with defaults") should be part of
> the packaged software, so the deployment tool shouldn't need to generate
> it unless you're handling drivers that are not included in tree.

Right that's what I was missing, I guess we can generate these during
buildtime without big issues, then it will be embedded into container,
shouldn't be too hard a change and would work for both source and
binary.
>> configurations, validate if they're not deprecated, run container with
>
> It doesn't do it today, but the thing that converts the input data to
> the INI file could automatically translate old option names to their new
> names.
>
>> this ansible role (or module...really doesn't matter), spit out final
>
> Why does the config file need to be generated inside a container?

Outside of container you don't have oslo or nova (python libs), so to
get access to these you need to do it inside container.

>> confg, lay it down, deploy container again. And that will have to be
>> done for every host class (as configs might differ host to host). Imho
>> a bit too much for this to be appealing (but I might be wrong). I'd
>> much rather have:
>> 1. Yaml as input to oslo.config instead of broken ini
>
> I'm not opposed to switching to YAML, but it's a bit more involved than
> just adding support in the parser. All of the work that has been done on
> generating sample default files and documentation needs to be updated to
> support YAML. We need a migration path to move everyone from INI to
> YAML. And we need to update devstack and all of its plugins to edit the
> new file format. There are probably more tasks involved in the
> migration. I'm dealing with a couple of other projects right now, and
> don't have time to plan all of that out myself. If someone else wants to
> pick it up, I can help with reviews on the spec and code changes.

Switching is a big no, everyone would hate us with emotion pure as
mountain spring water. It's to support both at the same time, which makes
it slightly more complex. We could make a full switch after a few releases
of deprecation I guess. Anyway, agree, lots of work.

>
>> 2. Validator to throw an error if one of our regular,
>> template-rendered, configs is deprecated
>>
>> We can run this validator in gate to have quick feedback when
>> something gets deprecated.
>>
>> Thoughts?
>> Michal
>>
>> On 16 June 2017 at 13:24, Emilien Macchi  wrote:
>> > On Fri, Jun 16, 2017 at 11:09 AM, Jiří Stránský  wrote:
>> >> On 15.6.2017 19:06, Emilien Macchi wrote:
>> >>>
>> >>> I missed [tripleo] tag.
>> >>>
>> >>> On Thu, Jun 15, 2017 at 12:09 PM, Emilien Macchi 
>> >>> wrote:
>> 
>>  If you haven't followed the "Configuration management with etcd /
>>  confd" thread [1], Doug found out that using confd to generate
>>  configuration files wouldn't work for the Cinder case where we don't
>>  know in advance of the deployment what settings to tell confd to look
>>  at.
>>  We are still looking for a generic way to generate *.conf files for
>>  OpenStack, that would be usable by Deployment tools and operators.
>>  Right now, Doug and I are investigating some tooling that would be
>>  useful to achieve this goal.
>> 
>>  Doug has prototyped an Ansible role that would generate configuration
>>  files by consumming 2 things:
>> 
>>  * Configuration schema, generated by Ben's work with Machine Readable
>>  Sample Config.
>> $ oslo-config-generator --namespace cinder --format yaml >
>>  cinder-schema.yaml
>> 
>>  It also needs: https://review.openstack.org/#/c/474306/ to generate
>>  some extra data not included in the original version.
>> 
>>  * Parameters values provided in config_data directly in the playbook:
>>  config_data:
>>    DEFAULT:
>>  transport_url: rabbit://user:password@hostname
>>  verbose: true
>> 
>>  There are 2 options disabled by default but which would be useful for
>>  production environments:
>>  * Set to true to always show all configuration values:
>>  config_show_defaults
>>  * Set to true to show the help text: config_show_help: true
>> 
>>  The Ansible module is available on github:
>>  https://github.com/dhellmann/oslo-config-ansible
>> 
>>  To try this out, just run:
>> $ ansible-playbook ./playbook.yml
>> 
>>  You can quickly see the output of cinder.conf:
>>   https://clbin.com/HmS58
>> 
>> 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Zane Bitter

On 20/06/17 10:08, Jay Pipes wrote:

On 06/20/2017 09:42 AM, Doug Hellmann wrote:

Does "service VM" need to be a first-class thing?  Akanda creates
them, using a service user. The VMs are tied to a "router" which
is the billable resource that the user understands and interacts with
through the API.


Frankly, I believe all of these types of services should be built as 
applications that run on OpenStack (or other) infrastructure. In other 
words, they should not be part of the infrastructure itself.


There's really no need for a user of a DBaaS to have access to the host 
or hosts the DB is running on. If the user really wanted that, they 
would just spin up a VM/baremetal server and install the thing themselves.


Hey Jay,
I'd be interested in exploring this idea with you, because I think 
everyone agrees that this would be a good goal, but at least in my mind 
it's not obvious what the technical solution should be. (Actually, I've 
read your email a bunch of times now, and I go back and forth on which 
one you're actually advocating for.) The two options, as I see it, are 
as follows:


1) The database VMs are created in the user's tena^W project. They 
connect directly to the tenant's networks, are governed by the user's 
quota, and are billed to the project as Nova VMs (on top of whatever 
additional billing might come along with the management services). A 
[future] feature in Nova (https://review.openstack.org/#/c/438134/) 
allows the Trove service to lock down access so that the user cannot 
actually interact with the server using Nova, but must go through the 
Trove API. On a cloud that doesn't include Trove, a user could run Trove 
as an application themselves and all it would have to do differently is 
not pass the service token to lock down the VM.


alternatively:

2) The database VMs are created in a project belonging to the operator 
of the service. They're connected to the user's network through , 
and isolated from other users' databases running in the same project 
through . Trove has its 
own quota management and billing. The user cannot interact with the 
server using Nova since it is owned by a different project. On a cloud 
that doesn't include Trove, a user could run Trove as an application 
themselves, by giving it credentials for their own project and disabling 
all of the cross-tenant networking stuff.


Of course the current situation, as Amrith alluded to, where the default 
is option (1) except without the lock-down feature in Nova, though some 
operators are deploying option (2) but it's not tested upstream... 
clearly that's the worst of all possible worlds, and AIUI nobody 
disagrees with that.


To my mind, (1) sounds more like "applications that run on OpenStack (or 
other) infrastructure", since it doesn't require stuff like the 
admin-only cross-project networking that makes it effectively "part of 
the infrastructure itself" - as evidenced by the fact that unprivileged 
users can run it standalone with little more than a simple auth 
middleware change. But I suspect you are going to use similar logic to 
argue for (2)? I'd be interested to hear your thoughts.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Zane Bitter

On 18/06/17 07:35, Amrith Kumar wrote:
Trove has evolved rapidly over the past several years, since integration 
in IceHouse when it only supported single instances of a few databases. 
Today it supports a dozen databases including clusters and replication.


The user survey [1] indicates that while there is strong interest in the 
project, there are few large production deployments that are known of 
(by the development team).


Recent changes in the OpenStack community at large (company 
realignments, acquisitions, layoffs) and the Trove community in 
particular, coupled with a mounting burden of technical debt have 
prompted me to make this proposal to re-architect Trove.


This email summarizes several of the issues that face the project, both 
structurally and architecturally. This email does not claim to include a 
detailed specification for what the new Trove would look like, merely 
the recommendation that the community should come together and develop 
one so that the project can be sustainable and useful to those who wish 
to use it in the future.


TL;DR

Trove, with support for a dozen or so databases today, finds itself in a 
bind because there are few developers, and a code-base with a 
significant amount of technical debt.


Some architectural choices which the team made over the years have 
consequences which make the project less than ideal for deployers.


Given that there are no major production deployments of Trove at 
present, this provides us an opportunity to reset the project, learn 
from our v1 and come up with a strong v2.


An important aspect of making this proposal work is that we seek to 
eliminate the effort (planning, and coding) involved in migrating 
existing Trove v1 deployments to the proposed Trove v2. Effectively, 
with work beginning on Trove v2 as proposed here, Trove v1 as released 
with Pike will be marked as deprecated and users will have to migrate to 
Trove v2 when it becomes available.


I'm personally fine with not having a migration path (because I'm not 
personally running Trove v1 ;) although Thierry's point about choosing a 
different name is valid and surely something the TC will want to weigh 
in on.


However, I am always concerned about throwing out working code and 
rewriting from scratch. I'd be more comfortable if I saw some value 
being salvaged from the existing Trove project, other than as just an 
extended PoC/learning exercise. Would the API be similar to the current 
Trove one? Can at least some tests be salvaged to rapidly increase 
confidence that the new code works as expected?


While I would very much like to continue to support the users on Trove 
v1 through this transition, the simple fact is that absent community 
participation this will be impossible. Furthermore, given that there are 
no production deployments of Trove at this time, it seems pointless to 
build that upgrade path from Trove v1 to Trove v2; it would be the 
proverbial bridge from nowhere.


This (previous) statement is, I realize, contentious. There are those 
who have told me that an upgrade path must be provided, and there are 
those who have told me of unnamed deployments of Trove that would 
suffer. To this, all I can say is that if an upgrade path is of value to 
you, then please commit the development resources to participate in the 
community to make that possible. But equally, preventing a v2 of Trove 
or delaying it will only make the v1 that we have today less valuable.


We have learned a lot from v1, and the hope is that we can address that 
in v2. Some of the more significant things that I have learned are:


- We should adopt a versioned front-end API from the very beginning; 
making the REST API versioned is not a ‘v2 feature’


- A guest agent running on a tenant instance, with connectivity to a 
shared management message bus is a security loophole; encrypting 
traffic, per-tenant-passwords, and any other scheme is merely lipstick 
on a security hole


Totally agree here, any component of the architecture that is accessed 
directly by multiple tenants needs to be natively multi-tenant. I 
believe this has been one of the barriers to adoption.


- Reliance on Nova for compute resources is fine, but dependence on Nova 
VM specific capabilities (like instance rebuild) is not; it makes things 
like containers or bare-metal second class citizens


- A fair portion of what Trove does is resource orchestration; don’t 
reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as 
far along when Trove got started but that’s not the case today and we 
have an opportunity to fix that now


+1, obviously ;)

Although I also think Kevin's suggestion is worthy of serious consideration.

- A similarly significant portion of what Trove does is to implement a 
state-machine that will perform specific workflows involved in 
implementing database specific operations. This makes the Trove 
taskmanager a stateful entity. Some of the operations could take a fair 

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Jay Pipes

On 06/20/2017 09:51 AM, Alex Xu wrote:
2017-06-19 22:17 GMT+08:00 Jay Pipes >:

* Scheduler then creates a list of N of these data structures,
with the first being the data for the selected host, and the the
rest being data structures representing alternates consisting of
the next hosts in the ranked list that are in the same cell as
the selected host.

Yes, this is the proposed solution for allowing retries within a cell.

Is that possible we use trait to distinguish different cells? Then the 
retry can be done in the cell by query the placement directly with trait 
which indicate the specific cell.


Those traits will be some custom traits, and generate by the cell name.


No, we're not going to use traits in this way, for a couple reasons:

1) Placement doesn't and shouldn't know about Nova's internals. Cells 
are internal structures of Nova. Users don't know about them, neither 
should placement.


2) Traits describe a resource provider. A cell ID doesn't describe a 
resource provider, just like an aggregate ID doesn't describe a resource 
provider.



* Scheduler returns that list to conductor.
* Conductor determines the cell of the selected host, and sends
that list to the target cell.
* Target cell tries to build the instance on the selected host.
If it fails, it uses the allocation data in the data structure
to unclaim the resources for the selected host, and tries to
claim the resources for the next host in the list using its
allocation data. It then tries to build the instance on the next
host in the list of alternates. Only when all alternates fail
does the build request fail.

In the compute node, will we get rid of the allocation update in the 
periodic task "update_available_resource"? Otherwise, we will have race 
between the claim in the nova-scheduler and that periodic task.


Yup, good point, and yes, we will be removing the call to PUT 
/allocations in the compute node resource tracker. Only DELETE 
/allocations/{instance_uuid} will be called if something goes terribly 
wrong on instance launch.
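For what it's worth, a rough sketch of that remaining call (not the actual 
resource tracker code; it assumes an already authenticated keystoneauth1 
Session is passed in) looks like:

def delete_instance_allocations(sess, instance_uuid):
    # DELETE /allocations/{consumer_uuid} drops the consumer's allocations
    # against every resource provider in one call; placement answers 204.
    resp = sess.delete('/allocations/%s' % instance_uuid,
                       endpoint_filter={'service_type': 'placement'})
    return resp.status_code == 204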


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread sfinucan
On Mon, 2017-06-19 at 09:36 -0500, Matt Riedemann wrote:
> On 6/19/2017 9:17 AM, Jay Pipes wrote:
> > On 06/19/2017 09:04 AM, Edward Leafe wrote:
> > > Current flow:
> 
> As noted in the nova-scheduler meeting this morning, this should have 
> been called "original plan" rather than "current flow", as Jay pointed 
> out inline.
> 
> > > * Scheduler gets a req spec from conductor, containing resource 
> > > requirements
> > > * Scheduler sends those requirements to placement
> > > * Placement runs a query to determine the root RPs that can satisfy 
> > > those requirements
> > 
> > Not root RPs. Non-sharing resource providers, which currently 
> > effectively means compute node providers. Nested resource providers 
> > isn't yet merged, so there is currently no concept of a hierarchy of 
> > providers.
> > 
> > > * Placement returns a list of the UUIDs for those root providers to 
> > > scheduler
> > 
> > It returns the provider names and UUIDs, yes.
> > 
> > > * Scheduler uses those UUIDs to create HostState objects for each
> > 
> > Kind of. The scheduler calls ComputeNodeList.get_all_by_uuid(), passing 
> > in a list of the provider UUIDs it got back from the placement service. 
> > The scheduler then builds a set of HostState objects from the results of 
> > ComputeNodeList.get_all_by_uuid().
> > 
> > The scheduler also keeps a set of AggregateMetadata objects in memory, 
> > including the association of aggregate to host (note: this is the 
> > compute node's *service*, not the compute node object itself, thus the 
> > reason aggregates don't work properly for Ironic nodes).
> > 
> > > * Scheduler runs those HostState objects through filters to remove 
> > > those that don't meet requirements not selected for by placement
> > 
> > Yep.
> > 
> > > * Scheduler runs the remaining HostState objects through weighers to 
> > > order them in terms of best fit.
> > 
> > Yep.
> > 
> > > * Scheduler takes the host at the top of that ranked list, and tries 
> > > to claim the resources in placement. If that fails, there is a race, 
> > > so that HostState is discarded, and the next is selected. This is 
> > > repeated until the claim succeeds.
> > 
> > No, this is not how things work currently. The scheduler does not claim 
> > resources. It selects the top (or random host depending on the selection 
> > strategy) and sends the launch request to the target compute node. The 
> > target compute node then attempts to claim the resources and in doing so 
> > writes records to the compute_nodes table in the Nova cell database as 
> > well as the Placement API for the compute node resource provider.
> 
> Not to nit pick, but today the scheduler sends the selected destinations 
> to the conductor. Conductor looks up the cell that a selected host is 
> in, creates the instance record and friends (bdms) in that cell and then 
> sends the build request to the compute host in that cell.
> 
> > 
> > > * Scheduler then creates a list of N UUIDs, with the first being the 
> > > selected host, and the the rest being alternates consisting of the 
> > > next hosts in the ranked list that are in the same cell as the 
> > > selected host.
> > 
> > This isn't currently how things work, no. This has been discussed, however.
> > 
> > > * Scheduler returns that list to conductor.
> > > * Conductor determines the cell of the selected host, and sends that 
> > > list to the target cell.
> > > * Target cell tries to build the instance on the selected host. If it 
> > > fails, it unclaims the resources for the selected host, and tries to 
> > > claim the resources for the next host in the list. It then tries to 
> > > build the instance on the next host in the list of alternates. Only 
> > > when all alternates fail does the build request fail.
> > 
> > This isn't currently how things work, no. There has been discussion of 
> > having the compute node retry alternatives locally, but nothing more 
> > than discussion.
> 
> Correct that this isn't how things currently work, but it was/is the 
> original plan. And the retry happens within the cell conductor, not on 
> the compute node itself. The top-level conductor is what's getting 
> selected hosts from the scheduler. The cell-level conductor is what's 
> getting a retry request from the compute. The cell-level conductor would 
> deallocate from placement for the currently claimed providers, and then 
> pick one of the alternatives passed down from the top and then make 
> allocations (a claim) against those, then send to an alternative compute 
> host for another build attempt.
> 
> So with this plan, there are two places to make allocations - the 
> scheduler first, and then the cell conductors for retries. This 
> duplication is why some people were originally pushing to move all 
> allocation-related work happen in the conductor service.
> 
> > > Proposed flow:
> > > * Scheduler gets a req spec from conductor, containing resource 
> > > requirements
> > > * Scheduler sends those 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Jay Pipes

On 06/20/2017 09:42 AM, Doug Hellmann wrote:

Does "service VM" need to be a first-class thing?  Akanda creates
them, using a service user. The VMs are tied to a "router" which
is the billable resource that the user understands and interacts with
through the API.


Frankly, I believe all of these types of services should be built as 
applications that run on OpenStack (or other) infrastructure. In other 
words, they should not be part of the infrastructure itself.


There's really no need for a user of a DBaaS to have access to the host 
or hosts the DB is running on. If the user really wanted that, they 
would just spin up a VM/baremetal server and install the thing themselves.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] How to handle nova show --minimal with embedded flavors

2017-06-20 Thread Matt Riedemann
Microversion 2.47 embeds the instance.flavor in the server response 
body. Chris Friesen is adding support for this microversion to 
novaclient [1] and a question has come up over how to deal with the 
--minimal option which before this microversion would just show the 
flavor id. When --minimal is not specified today, the flavor name and id 
are shown.
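For reference, a rough sketch of the difference (field values are made up; 
the shapes follow the API reference for microversion 2.47) as Python dicts:

# Before 2.47 the server body only carries the flavor id and links.
flavor_pre_2_47 = {
    "id": "1",
    "links": [{"href": "http://openstack.example.com/flavors/1",
               "rel": "bookmark"}],
}

# At 2.47 the flavor details are embedded, including the original name.
flavor_at_2_47 = {
    "original_name": "m1.tiny",
    "vcpus": 1,
    "ram": 512,
    "disk": 1,
    "ephemeral": 0,
    "swap": 0,
    "extra_specs": {},
}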


In Chris' change, he's showing the full flavor information regardless of 
the --minimal option.


The help for the --minimal option is different between show/rebuild 
commands and list.


show/rebuild: "Skips flavor/image lookups when showing servers."

list: "Get only UUID and name."

Personally I think that if I specify --minimal I want minimal output, 
which would just be the flavor's original name after the new 
microversion, which is closer in behavior to how --minimal works today 
before the 2.47 microversion.


I'm posting this in the mailing list for wider discussion/input.

[1] https://review.openstack.org/#/c/435141/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Policy rules for APIs based on "domain_id"

2017-06-20 Thread Valeriy Ponomaryov
Also, one additional kind of "feature request" is to be able to filter
each project's entities per domain, just as we can do it with
project/tenant now.

So, as a result, we will be able to configure different "list" APIs to
return objects grouped by either domain or project.

Thoughts?
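To make the ask concrete, here is a small sketch (the rule strings are 
illustrative examples, not copied from any project's shipped policy files):

from oslo_policy import policy

# Keystone's policy language already supports domain checks of this style.
keystone_style_rule = policy.RuleDefault(
    name='identity:list_projects',
    check_str='rule:admin_required or domain_id:%(domain_id)s')

# What deployers would like to be able to write for other services, e.g.:
#   "os_compute_api:servers:index": "domain_id:%(domain_id)s"
# Outside Keystone this does not work today, apparently because the context
# handed to the policy enforcer does not carry a usable domain_id.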

On Tue, Jun 20, 2017 at 1:07 PM, Adam Heczko  wrote:

> Hello Valeriy,
> agree, that would be very useful. I think that this deserves attention and
> cross project discussion.
> Maybe a community goal process [2] is a valid path forward in this regard.
>
> [2] https://governance.openstack.org/tc/goals/
>
> On Tue, Jun 20, 2017 at 11:15 AM, Valeriy Ponomaryov <
> vponomar...@mirantis.com> wrote:
>
>> Hello OpenStackers,
>>
>> Wanted to pay some attention to one of restrictions in OpenStack.
>> It came out, that it is impossible to define policy rules for API
>> services based on "domain_id".
>> As far as I know, only Keystone supports it.
>>
>> So, it is unclear whether it is intended or it is just technical debt
>> that each OpenStack project should
>> eliminate?
>>
>> For the moment, I filed bug [1].
>>
>> Use case is following: usage of Keystone API v3 all over the cloud and
>> level of trust is domain, not project.
>>
>> And if it is technical debt how much different teams are interested in
>> having such possibility?
>>
>> [1] https://bugs.launchpad.net/nova/+bug/1699060
>>
>> --
>> Kind Regards
>> Valeriy Ponomaryov
>> www.mirantis.com
>> vponomar...@mirantis.com
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Adam Heczko
> Security Engineer @ Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Edward Leafe
On Jun 20, 2017, at 8:38 AM, Jay Pipes  wrote:
> 
>>> The example I posted used 3 resource providers. 2 compute nodes with no 
>>> local disk and a shared storage pool.
>> Now I’m even more confused. In the straw man example 
>> (https://review.openstack.org/#/c/471927/) I see only one variable 
>> ($COMPUTE_NODE_UUID) referencing a compute node in the response.
> 
> I'm referring to the example I put in this email thread on 
> paste.openstack.org with numbers showing 1600 
> bytes for 3 resource providers:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2017-June/118593.html 
> 


And I’m referring to the comment I made on the spec back on June 13 that was 
never corrected/clarified. I’m glad you gave an example yesterday after I 
expressed my confusion; that was the whole purpose of starting this thread. 
Things may be clear to you, but they have confused me and others. We can’t help 
if we don’t understand.


-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Alex Xu
2017-06-19 22:17 GMT+08:00 Jay Pipes :

> On 06/19/2017 09:04 AM, Edward Leafe wrote:
>
>> Current flow:
>> * Scheduler gets a req spec from conductor, containing resource
>> requirements
>> * Scheduler sends those requirements to placement
>> * Placement runs a query to determine the root RPs that can satisfy those
>> requirements
>>
>
> Not root RPs. Non-sharing resource providers, which currently effectively
> means compute node providers. Nested resource providers isn't yet merged,
> so there is currently no concept of a hierarchy of providers.
>
> * Placement returns a list of the UUIDs for those root providers to
>> scheduler
>>
>
> It returns the provider names and UUIDs, yes.
>
> * Scheduler uses those UUIDs to create HostState objects for each
>>
>
> Kind of. The scheduler calls ComputeNodeList.get_all_by_uuid(), passing
> in a list of the provider UUIDs it got back from the placement service. The
> scheduler then builds a set of HostState objects from the results of
> ComputeNodeList.get_all_by_uuid().
>
> The scheduler also keeps a set of AggregateMetadata objects in memory,
> including the association of aggregate to host (note: this is the compute
> node's *service*, not the compute node object itself, thus the reason
> aggregates don't work properly for Ironic nodes).
>
> * Scheduler runs those HostState objects through filters to remove those
>> that don't meet requirements not selected for by placement
>>
>
> Yep.
>
> * Scheduler runs the remaining HostState objects through weighers to order
>> them in terms of best fit.
>>
>
> Yep.
>
> * Scheduler takes the host at the top of that ranked list, and tries to
>> claim the resources in placement. If that fails, there is a race, so that
>> HostState is discarded, and the next is selected. This is repeated until
>> the claim succeeds.
>>
>
> No, this is not how things work currently. The scheduler does not claim
> resources. It selects the top (or random host depending on the selection
> strategy) and sends the launch request to the target compute node. The
> target compute node then attempts to claim the resources and in doing so
> writes records to the compute_nodes table in the Nova cell database as well
> as the Placement API for the compute node resource provider.
>
> * Scheduler then creates a list of N UUIDs, with the first being the
>> selected host, and the the rest being alternates consisting of the next
>> hosts in the ranked list that are in the same cell as the selected host.
>>
>
> This isn't currently how things work, no. This has been discussed, however.
>
> * Scheduler returns that list to conductor.
>> * Conductor determines the cell of the selected host, and sends that list
>> to the target cell.
>> * Target cell tries to build the instance on the selected host. If it
>> fails, it unclaims the resources for the selected host, and tries to claim
>> the resources for the next host in the list. It then tries to build the
>> instance on the next host in the list of alternates. Only when all
>> alternates fail does the build request fail.
>>
>
> This isn't currently how things work, no. There has been discussion of
> having the compute node retry alternatives locally, but nothing more than
> discussion.
>
> Proposed flow:
>> * Scheduler gets a req spec from conductor, containing resource
>> requirements
>> * Scheduler sends those requirements to placement
>> * Placement runs a query to determine the root RPs that can satisfy those
>> requirements
>>
>
> Yes.
>
> * Placement then constructs a data structure for each root provider as
>> documented in the spec. [0]
>>
>
> Yes.
>
> * Placement returns a number of these data structures as JSON blobs. Due
>> to the size of the data, a page size will have to be determined, and
>> placement will have to either maintain that list of structured datafor
>> subsequent requests, or re-run the query and only calculate the data
>> structures for the hosts that fit in the requested page.
>>
>
> "of these data structures as JSON blobs" is kind of redundant... all our
> REST APIs return data structures as JSON blobs.
>
> While we discussed the fact that there may be a lot of entries, we did not
> say we'd immediately support a paging mechanism.
>
> * Scheduler continues to request the paged results until it has them all.
>>
>
> See above. Was discussed briefly as a concern but not work to do for first
> patches.
>
> * Scheduler then runs this data through the filters and weighers. No
>> HostState objects are required, as the data structures will contain all the
>> information that scheduler will need.
>>
>
> No, this isn't correct. The scheduler will have *some* of the information
> it requires for weighing from the returned data from the GET
> /allocation_candidates call, but not all of it.
>
> Again, operators have insisted on keeping the flexibility currently in the
> Nova scheduler to weigh/sort compute nodes by things like thermal metrics
> and kinds of data that the 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Doug Hellmann
Excerpts from Curtis's message of 2017-06-19 18:56:25 -0600:
> On Sun, Jun 18, 2017 at 5:35 AM, Amrith Kumar  wrote:
> > Trove has evolved rapidly over the past several years, since integration in
> > IceHouse when it only supported single instances of a few databases. Today
> > it supports a dozen databases including clusters and replication.
> >
> > The user survey [1] indicates that while there is strong interest in the
> > project, there are few large production deployments that are known of (by
> > the development team).
> >
> > Recent changes in the OpenStack community at large (company realignments,
> > acquisitions, layoffs) and the Trove community in particular, coupled with a
> > mounting burden of technical debt have prompted me to make this proposal to
> > re-architect Trove.
> >
> > This email summarizes several of the issues that face the project, both
> > structurally and architecturally. This email does not claim to include a
> > detailed specification for what the new Trove would look like, merely the
> > recommendation that the community should come together and develop one so
> > that the project can be sustainable and useful to those who wish to use it
> > in the future.
> >
> > TL;DR
> >
> > Trove, with support for a dozen or so databases today, finds itself in a
> > bind because there are few developers, and a code-base with a significant
> > amount of technical debt.
> >
> > Some architectural choices which the team made over the years have
> > consequences which make the project less than ideal for deployers.
> >
> > Given that there are no major production deployments of Trove at present,
> > this provides us an opportunity to reset the project, learn from our v1 and
> > come up with a strong v2.
> >
> > An important aspect of making this proposal work is that we seek to
> > eliminate the effort (planning, and coding) involved in migrating existing
> > Trove v1 deployments to the proposed Trove v2. Effectively, with work
> > beginning on Trove v2 as proposed here, Trove v1 as released with Pike will
> > be marked as deprecated and users will have to migrate to Trove v2 when it
> > becomes available.
> >
> > While I would very much like to continue to support the users on Trove v1
> > through this transition, the simple fact is that absent community
> > participation this will be impossible. Furthermore, given that there are no
> > production deployments of Trove at this time, it seems pointless to build
> > that upgrade path from Trove v1 to Trove v2; it would be the proverbial
> > bridge from nowhere.
> >
> > This (previous) statement is, I realize, contentious. There are those who
> > have told me that an upgrade path must be provided, and there are those who
> > have told me of unnamed deployments of Trove that would suffer. To this, all
> > I can say is that if an upgrade path is of value to you, then please commit
> > the development resources to participate in the community to make that
> > possible. But equally, preventing a v2 of Trove or delaying it will only
> > make the v1 that we have today less valuable.
> >
> > We have learned a lot from v1, and the hope is that we can address that in
> > v2. Some of the more significant things that I have learned are:
> >
> > - We should adopt a versioned front-end API from the very beginning; making
> > the REST API versioned is not a ‘v2 feature’
> >
> > - A guest agent running on a tenant instance, with connectivity to a shared
> > management message bus is a security loophole; encrypting traffic,
> > per-tenant-passwords, and any other scheme is merely lipstick on a security
> > hole
> >
> > - Reliance on Nova for compute resources is fine, but dependence on Nova VM
> > specific capabilities (like instance rebuild) is not; it makes things like
> > containers or bare-metal second class citizens
> >
> > - A fair portion of what Trove does is resource orchestration; don’t
> > reinvent the wheel, there’s Heat for that. Admittedly, Heat wasn’t as far
> > along when Trove got started but that’s not the case today and we have an
> > opportunity to fix that now
> >
> > - A similarly significant portion of what Trove does is to implement a
> > state-machine that will perform specific workflows involved in implementing
> > database specific operations. This makes the Trove taskmanager a stateful
> > entity. Some of the operations could take a fair amount of time. This is a
> > serious architectural flaw
> >
> > - Tenants should not ever be able to directly interact with the underlying
> > storage and compute used by database instances; that should be the default
> > configuration, not an untested deployment alternative
> >
> 
> As an operator I wouldn't run Trove as it is, unless I absolutely had to.
> 
> I think it is a good idea to reboot the project. I really think the
> concept of "service VMs" should be a thing. I'm not sure where the
> OpenStack community has landed on that, my fault for not paying close
> 

Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Jay Pipes

On 06/20/2017 08:43 AM, Edward Leafe wrote:
On Jun 20, 2017, at 6:54 AM, Jay Pipes > wrote:



It was the "per compute host" that I objected to.
I guess it would have helped to see an example of the data returned 
for multiple compute nodes. The straw man example was for a single 
compute node with SR-IOV, NUMA and shared storage. There was no 
indication how multiple hosts meeting the requested resources would 
be returned.


The example I posted used 3 resource providers. 2 compute nodes with 
no local disk and a shared storage pool.


Now I’m even more confused. In the straw man example 
(https://review.openstack.org/#/c/471927/) I see only one variable 
($COMPUTE_NODE_UUID) referencing a compute node in the response.


I'm referring to the example I put in this email thread on 
paste.openstack.org with numbers showing 1600 bytes for 3 resource 
providers:


http://lists.openstack.org/pipermail/openstack-dev/2017-June/118593.html

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-20 Thread Zane Bitter

On 19/06/17 20:56, Curtis wrote:

I really think the
concept of "service VMs" should be a thing. I'm not sure where the
OpenStack community has landed on that, my fault for not paying close
attention, but we should be able to create VMs for a tenant that are
not managed by the tenant but that could be billed to them in some
fashion. At least that's my opinion.


https://review.openstack.org/#/c/438134/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [openstack-docs][dev][all] Documentation repo freeze

2017-06-20 Thread Alexandra Settle


On 6/20/17, 2:12 PM, "Anne Gentle"  wrote:

On Tue, Jun 20, 2017 at 3:13 AM, Alexandra Settle  
wrote:
>
>
> On 6/19/17, 6:19 PM, "Petr Kovar"  wrote:
>
> On Mon, 19 Jun 2017 15:56:35 +
> Alexandra Settle  wrote:
>
> > Hi everyone,
> >
> > As of today - Monday, the 19th of June – please do NOT merge any 
patches into
> > the openstack-manuals repository that is not related to the topic:
> > “doc-migration”.
> >
> > We are currently in the phase of setting up for our MASSIVE 
migration and we
> > need to ensure that there will be minimal conflicts.
> >
> > You can find all patches related to that topic here:
> > 
https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals+branch:master+topic:doc-migration
> >
> > The only other patches that should be passed is the Zanata 
translation patches.
> >
> > If there are any concerns or questions, please do not hesitate to 
contact either
> > myself or Doug Hellmann for further clarification.
>
> Can we still merge into stable branches? As the migration only affects
> content in master, I think there's no need to freeze stable branches.
>
>
> Good question. I would say yes, as we are only porting over everything 
that is currently in master for now.
>

Yep, that sounds right to me, if we need to change content in a stable
branch, go ahead.

> I would also like to clarify that this only affects the documentation 
suite, not the tooling.

Yes, speaking of tooling, can anyone review
https://review.openstack.org/#/c/468021/ -- changes to the theme?

On it! Thanks Anne ( 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][designate] Recommended way to inject the rndc.key into the designate container when using Bind9

2017-06-20 Thread Andy McCrae
Hi Lawrence,

Thanks for providing the feedback!

I am using OpenStack Designate with Bind9 as the slave and have managed to
> set it up with openstack-ansible in all respect bar one, I am unable to
> automatically inject the rndc.key file into the Designate container.
>
Is there a recognised way to do this (and similar things elsewhere across
> the OpenStack family) within the openstack-ansible framework without
> branching the repo and making modifications?
>

We don't currently have a set way to do that, although after talking with
Graham and a few others, it seems this is something the designate role
should do, so I'd label that a bug. That said, rather than maintaining a
fork with modifications, this seems like functionality that would be useful
to most deployers of Designate, so it would be great to create a patch
adding it. I'm imagining it would just be a templated rndc.key file with a
"designate_rndc_key_value" variable (or something along those lines!).
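Purely as a sketch of the idea (the template content, key algorithm and
variable handling would be up to the role; jinja2 is shown only to
illustrate the rendering):

from jinja2 import Template

# Hypothetical template body the role could ship; the deployer supplies
# designate_rndc_key_value in their user variables.
RNDC_KEY_TEMPLATE = Template(
    'key "rndc-key" {\n'
    '    algorithm hmac-md5;\n'
    '    secret "{{ designate_rndc_key_value }}";\n'
    '};\n')

print(RNDC_KEY_TEMPLATE.render(designate_rndc_key_value='<base64-secret>'))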

If that sounds like something you'd like to give a go there is some good
documentation around what to do to get started here:
https://docs.openstack.org/infra/manual/developers.html

Also! Feel free to jump into the #openstack-ansible channel on Freenode
irc, we're a pretty helpful bunch, and we'd love to help you get involved.

Hopefully that helps!
Andy



> WIth apologies in advance in the event that I have overlooked the
> essential piece of documentation on how to do this.
>
> Kind regards, Lawrence
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [openstack-docs][dev][all] Documentation repo freeze

2017-06-20 Thread Anne Gentle
On Tue, Jun 20, 2017 at 3:13 AM, Alexandra Settle  wrote:
>
>
> On 6/19/17, 6:19 PM, "Petr Kovar"  wrote:
>
> On Mon, 19 Jun 2017 15:56:35 +
> Alexandra Settle  wrote:
>
> > Hi everyone,
> >
> > As of today - Monday, the 19th of June – please do NOT merge any 
> patches into
> > the openstack-manuals repository that is not related to the topic:
> > “doc-migration”.
> >
> > We are currently in the phase of setting up for our MASSIVE migration 
> and we
> > need to ensure that there will be minimal conflicts.
> >
> > You can find all patches related to that topic here:
> > 
> https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals+branch:master+topic:doc-migration
> >
> > The only other patches that should be passed is the Zanata translation 
> patches.
> >
> > If there are any concerns or questions, please do not hesitate to 
> contact either
> > myself or Doug Hellmann for further clarification.
>
> Can we still merge into stable branches? As the migration only affects
> content in master, I think there's no need to freeze stable branches.
>
>
> Good question. I would say yes, as we are only porting over everything that 
> is currently in master for now.
>

Yep, that sounds right to me, if we need to change content in a stable
branch, go ahead.

> I would also like to clarify that this only affects the documentation suite, 
> not the tooling.

Yes, speaking of tooling, can anyone review
https://review.openstack.org/#/c/468021/ -- changes to the theme?

Thanks -
Anne

>
> Cheers,
>
> Alex
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 

Read my blog: justwrite.click
Subscribe to Docs|Code: docslikecode.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Edward Leafe
On Jun 20, 2017, at 6:54 AM, Jay Pipes  wrote:
> 
>>> It was the "per compute host" that I objected to.
>> I guess it would have helped to see an example of the data returned for 
>> multiple compute nodes. The straw man example was for a single compute node 
>> with SR-IOV, NUMA and shared storage. There was no indication how multiple 
>> hosts meeting the requested resources would be returned.
> 
> The example I posted used 3 resource providers. 2 compute nodes with no local 
> disk and a shared storage pool.


Now I’m even more confused. In the straw man example 
(https://review.openstack.org/#/c/471927/) I see only one variable 
($COMPUTE_NODE_UUID) referencing a compute node in the response.

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-20 Thread Jay Pipes

On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:

Sorry, the mail was sent accidentally by mis-typing ...

My question is, what is the benefit of the above preference?


Hi Kevin!

I believe the benefit is so that the compute node prefers CPU topologies 
that do not have hardware threads over CPU topologies that do include 
hardware threads.


I'm not sure exactly of the reason for this preference, but perhaps it 
is due to assumptions that on some hardware, threads will compete for 
the same cache resources as other siblings on a core whereas cores may 
have their own caches (again, on some specific hardware).
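To see the effect of that sort key, here is a small self-contained example 
(a namedtuple stands in for nova's topology objects):

from collections import namedtuple

Topo = namedtuple('Topo', 'sockets cores threads')
possible = [Topo(1, 4, 2), Topo(2, 4, 1), Topo(4, 2, 1), Topo(8, 1, 1)]

# Same key nova uses: larger sockets*cores first, then more sockets,
# then more threads -- so threadless layouts win and sockets beat cores.
ranked = sorted(possible, reverse=True,
                key=lambda x: (x.sockets * x.cores, x.sockets, x.threads))
# ranked == [Topo(8, 1, 1), Topo(4, 2, 1), Topo(2, 4, 1), Topo(1, 4, 2)]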


Best,
-jay

On Tue, Jun 20, 2017 at 10:43 AM, Zhenyu Zheng 
> wrote:


Hi,

In
https://github.com/openstack/nova/blob/master/nova/virt/hardware.py#L396

we calculated every possible CPU topologies and sorted by:
# We want to
# - Minimize threads (ie larger sockets * cores is best)
# - Prefer sockets over cores
possible = sorted(possible, reverse=True,
                  key=lambda x: (x.sockets * x.cores,
                                 x.sockets,
                                 x.threads))




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] api.fault notification is never emitted

2017-06-20 Thread Balazs Gibizer

Hi,

I came across a questionable behavior of nova while trying to use the 
notify_on_api_faults configuration option [0] while testing the related 
versioned notification transformation patch [1]. Based on the 
description of the config option and the code that uses it [2], nova 
sends an api.fault notification if the nova-api service encounters an 
unhandled exception. There is a FaultWrapper class [3] added to the 
pipeline of the REST request which catches every exception and triggers 
the notification sending.
Based on some debugging in devstack, this FaultWrapper never catches any 
exception. I injected a ValueError at the beginning of the 
nova.objects.aggregate.Aggregate.create method. This resulted in an 
HTTPInternalServerError exception and an HTTP 500 error code, but the 
exception handling part of the FaultWrapper [4] was never reached. So I 
dug a bit deeper and I think I found the reason. Every REST API method 
is decorated with the expected_errors decorator [5], which as a last resort 
translates the unexpected exception to HTTPInternalServerError. In the 
wsgi stack the actual REST API call is guarded by the 
ResourceExceptionHandler context manager [7], which translates 
HTTPException to a Fault [8]. The Fault is then caught and translated to 
the REST response [7]. This way the exception never propagates back to 
the FaultWrapper in [6], and therefore the api.fault notification is 
never emitted.
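
A toy sketch of the layering problem (this is not nova code, just an
illustration of why the outer wrapper's except branch never runs):

    def rest_method():
        raise ValueError('injected fault')       # e.g. Aggregate.create

    def expected_errors(func):                   # inner decorator, like [5]
        def wrapper():
            try:
                return func()
            except Exception:
                return '500 Internal Server Error'   # exception swallowed here
        return wrapper

    def fault_wrapper(func):                     # outer middleware, like [3]
        def wrapper():
            try:
                return func()
            except Exception:
                print('api.fault notification sent')  # never reached
                raise
        return wrapper

    handler = fault_wrapper(expected_errors(rest_method))
    print(handler())   # prints only '500 Internal Server Error'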


You can see the api logs here [9] and the patch that I used to add the 
extra traces here [10]. Please note that there is a compute.exception 
notification visible in the log but that is a different notification 
emitted from wrap_exception decorator [11] used in compute.manager [12] 
and compute.api [13] only.


So my questions are:
1) Is it a bug in the nova wsgi code, or is it expected that the wsgi code 
catches everything?
2) Do we need FaultWrapper at all if the wsgi stack catches every 
exception?
3) Do we need api.fault notification at all? It seems nobody missed it 
so far.
4) If we want to have api.fault notification then what would be the 
good place to emit it? Maybe ResourceExceptionHandler at [8]?


I filed a bug for tracking purposes [14].

Cheers,
gibi


[0] 
https://github.com/openstack/nova/blob/e66e5822abf0e9f933cf6bd1b4c63007b170/nova/conf/notifications.py#L49

[1] https://review.openstack.org/#/c/469038
[2] 
https://github.com/openstack/nova/blob/d68626595ed54698c7eb013a788ee3b98e068cdd/nova/notifications/base.py#L83
[3] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/__init__.py#L79
[4] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/__init__.py#L87
[5] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/extensions.py#L325
[6] 
https://github.com/openstack/nova/blob/efde7a5dfad24cca361989eadf482899a5cab5db/nova/api/openstack/extensions.py#L368
[7] 
https://github.com/openstack/nova/blob/4a0fb6ae79acedabf134086d4dce6aae0e4f6209/nova/api/openstack/wsgi.py#L637
[8] 
https://github.com/openstack/nova/blob/4a0fb6ae79acedabf134086d4dce6aae0e4f6209/nova/api/openstack/wsgi.py#L418

[9] https://pastebin.com/Eu6rBjNN
[10] https://pastebin.com/en4aFutc
[11] 
https://github.com/openstack/nova/blob/master/nova/exception_wrapper.py#L57
[12] 
https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L105
[13] 
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L92

[14] https://bugs.launchpad.net/nova/+bug/1699115


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Jay Pipes

On 06/19/2017 09:26 PM, Boris Pavlovic wrote:

Hi,

Does this look too complicated and a bit over designed.


Is that a question?

For example, why can't we store all data in memory of a single python 
application with a simple REST API and have a
simple mechanism for plugins that do the filtering? Basically there is no 
problem with storing it all on a single host.


You mean how things currently work minus the REST API?

Even if we have 100k hosts and every host has about 10KB of data, that is 
only about 1GB of RAM (I could just use a phone)


There are easy ways to copy the state across different instances (sharing 
updates)


We already do this. It isn't as easy as you think. It's introduced a 
number of race conditions that we're attempting to address by doing 
claims in the scheduler.


And I thought that the Placement project was going to be such a centralized 
small simple app for collecting all
resource information and doing this very, very simple and easy placement 
selection...


1) Placement doesn't collect anything.
2) Placement is indeed a simple small app with a global view of resources
3) Placement doesn't do the sorting/weighing of destinations. The 
scheduler does that. See this thread for reasons why this is the case 
(operators didn't want to give up their complexity/flexibility in how 
they tweak selection decisions)
4) Placement simply tells the scheduler which providers have enough 
capacity for a requested set of resource amounts and required 
qualitative traits. It actually is pretty simple.
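
As a purely illustrative example (the exact parameter names are still being
discussed in the straw man spec mentioned elsewhere in this thread, so treat
this as a sketch rather than the final API), the scheduler's question to
placement is essentially of the form:

    GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:2048,DISK_GB:100

and the response enumerates the resource providers -- or combinations of
providers, e.g. a compute node plus a shared storage pool -- that can satisfy
those amounts, leaving the sorting/weighing of that list to the scheduler.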


Best,
-jay


Best regards,
Boris Pavlovic

On Mon, Jun 19, 2017 at 5:05 PM, Edward Leafe > wrote:


On Jun 19, 2017, at 5:27 PM, Jay Pipes > wrote:



It was from the straw man example. Replacing the $FOO_UUID with
UUIDs, and then stripping out all whitespace resulted in about
1500 bytes. Your example, with whitespace included, is 1600 bytes.


It was the "per compute host" that I objected to.


I guess it would have helped to see an example of the data returned
for multiple compute nodes. The straw man example was for a single
compute node with SR-IOV, NUMA and shared storage. There was no
indication how multiple hosts meeting the requested resources would
be returned.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][designate][bind9] Looking for ways to limit users to adding hosts within fixed personal domain

2017-06-20 Thread Graham Hayes
On 20/06/17 12:37, Lawrence J. Albinson wrote:
> I am trying to find pointers to how I might limit non-privileged users
> to a single domain when adding hosts to Designate.
> 
> It is a private OpenStack cloud and each user will have a personal
> sub-domain of a common organisational domain, like so:
> fred.organisation.com. and will be able to add hosts such as:
> www.fred.organisation.com.
> 
> (The designate back-end is Bind9.)
> 
> Any pointers about how to do this would be very gratefully received.
> 
> Kind regards, Lawrence
> 
> Lawrence J Albinson

Sure - there are a few ways to do this, but the simplest would be the
following:

(I am assuming the zone is pre-created by the admin when provisioning
the project)

In the policy.json file we have controls for what users can do to zones
[1]

I would suggest changing

`create_zone`, `delete_zone`, and `update_zone` to `rule:admin`
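
For illustration, the relevant entries in policy.json would then look roughly
like this (same format as the file in [1] below):

    "create_zone": "rule:admin",
    "update_zone": "rule:admin",
    "delete_zone": "rule:admin",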

then the admin can create the zone by running

`openstack zone create --sudo-project-id  --email
t...@example.com subdomain.example.com.`

And the zone should be created in the project, and they will have full
control of the recordsets inside that zone.

If that does not work, we support "zone transfers"[2] (it's a terrible
name) where the admin can create the new sub zone in the admin project
and then transfer ownership to the new project.

1 -
https://github.com/openstack/designate/blob/master/etc/designate/policy.json#L43-L56

2 -
https://docs.openstack.org/developer/python-designateclient/shell-v2-examples.html#working-with-zone-transfer
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



0x23BA8E2E.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Trying to understand the proposed direction

2017-06-20 Thread Jay Pipes

On 06/19/2017 08:05 PM, Edward Leafe wrote:
On Jun 19, 2017, at 5:27 PM, Jay Pipes > wrote:


It was from the straw man example. Replacing the $FOO_UUID with 
UUIDs, and then stripping out all whitespace resulted in about 1500 
bytes. Your example, with whitespace included, is 1600 bytes.


It was the "per compute host" that I objected to.


I guess it would have helped to see an example of the data returned for 
multiple compute nodes. The straw man example was for a single compute 
node with SR-IOV, NUMA and shared storage. There was no indication how 
multiple hosts meeting the requested resources would be returned.


The example I posted used 3 resource providers. 2 compute nodes with no 
local disk and a shared storage pool.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][designate][bind9] Looking for ways to limit users to adding hosts within fixed personal domain

2017-06-20 Thread Lawrence J. Albinson
I am trying to find pointers to how I might limit non-privileged users to a 
single domain when adding hosts to Designate.

It is a private OpenStack cloud and each user will have a personal sub-domain 
of a common organisational domain, like so: fred.organisation.com. and will be 
able to add hosts such as: 
www.fred.organisation.com.

(The designate back-end is Bind9.)

Any pointers about how to do this would be very gratefully received.

Kind regards, Lawrence

Lawrence J Albinson

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][designate] Recommended way to inject the rndc.key into the designate container when using Bind9

2017-06-20 Thread Lawrence J. Albinson
I am using OpenStack Designate with Bind9 as the slave and have managed to set 
it up with openstack-ansible in all respects bar one: I am unable to 
automatically inject the rndc.key file into the Designate container.

Is there a recognised way to do this (and similar things elsewhere across the 
OpenStack family) within the openstack-ansible framework without branching the 
repo and making modifications?

With apologies in advance in the event that I have overlooked the essential 
piece of documentation on how to do this.

Kind regards, Lawrence

Lawrence J Albinson

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][ec2-api] How about using boto3instead of boto in requirements

2017-06-20 Thread jiaopengju
Hi,
Thanks! I will try to use botocore instead of boto3 to find out whether the 
code runs or not.
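
For reference, a minimal botocore sketch of the kind of change involved (the
endpoint URL and credentials below are placeholders, not values from this
thread):

    import botocore.session

    session = botocore.session.get_session()
    client = session.create_client(
        'ec2',
        region_name='RegionOne',
        endpoint_url='http://controller:8788/',   # ec2-api endpoint (example)
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY')

    # botocore generates the EC2 operations from the service model,
    # so the call style differs from boto's connection objects.
    print(client.describe_instances())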


jiaopengju
mail: jiaopen...@cmss.chinamobile.com


Original message
From: Andrey pavlov andrey...@gmail.com
To: OpenStack Development Mailing List (not for usage 
questions) openstack-dev@lists.openstack.org; andrey.mpandrey...@gmail.com; 
Alexandre levine alexandrelev...@gmail.com
Sent: Tuesday, 20 June 2017, 18:25
Subject: Re: [openstack-dev] [requirements][ec2-api] How about using boto3 instead of 
boto in requirements


Hi,


We (the ec2-api team) are now in the middle of some investigations that should lead us 
either to remove boto from the code or to change it to botocore, as we 
decided previously.
We'll be done with it by the middle of July.


Regards,
Andrey Pavlov.


On Tue, Jun 20, 2017 at 10:15 AM, Tony Breeds t...@bakeyournoodle.com wrote:

On Mon, Jun 19, 2017 at 09:33:02PM +0800, jiaopengju wrote:
  Hi Dims,
  I got response from core member of ec2-api. What do you think about it?
 
 
  --
  Hi,
 
 
  I don't treat adding new library as a problem.
 
 
  - I see that you don't remove boto - so your change doesn't affect ec2-api 
code.
 
 Part of the role of the requirements team is to ensure that we don't end
 up with several libraries that have significant overlap in
 functionality. Clearly boto and boto3 fall squarely in that camp.
 
 What the requirements team needs is some assurance that switching to
 boto3 is something that the ec2-api team would be able to do. Running
 on boto which has been deprecated in favor of boto3 make sense from a
 lot of levels. We're far enough into the Queens cycle that I doubt it'd
 happen this cycle :(
 
 Yours Tony.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][ec2-api] How about using boto3 instead of boto in requirements

2017-06-20 Thread Andrey Pavlov
Hi,

We (the ec2-api team) are now in the middle of some investigations that should lead
us either to remove boto from the code or to change it to botocore, as we
decided previously.
We'll be done with it by the middle of July.

Regards,
Andrey Pavlov.

On Tue, Jun 20, 2017 at 10:15 AM, Tony Breeds 
wrote:

> On Mon, Jun 19, 2017 at 09:33:02PM +0800, jiaopengju wrote:
> > Hi Dims,
> > I got response from core member of ec2-api. What do you think about it?
> >
> >
> > --
> > Hi,
> >
> >
> > I don't treat adding new library as a problem.
> >
> >
> > - I see that you don't remove boto - so your change doesn't affect
> ec2-api code.
>
> Part of the role of the requirements team is to ensure that we don't end
> up with several libraries that have significant overlap in
> functionality.  Clearly boto and boto3 fall squarely in that camp.
>
> What the requirements team needs is some assurance that switching to
> boto3 is something that the ec2-api team would be able to do.  Running
> on boto which has been deprecated in favor of boto3 make sense from a
> lot of levels.  We're far enough into the Queens cycle that I doubt it'd
> happen this cycle :(
>
> Yours Tony.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Stepping down from core

2017-06-20 Thread Flavio Percoco

On 20/06/17 09:31 +1200, feilong wrote:

Hi there,

I've been a Glance core since 2013 and been involved in the Glance community 
even longer, so I care deeply about Glance. My situation right now is such that 
I cannot devote sufficient time to Glance, and while as you've seen elsewhere 
on the mailing list, Glance needs reviewers, I'm afraid that keeping my name on 
the core list is giving people a false impression of how dire the current 
Glance personnel situation is. So after discussing it with the Glance PTL, I'd like to 
offer my resignation as a member of the Glance core reviewer team. Thank you 
for your understanding.


Thanks for being honest and open about the situation. I agree with you that this
is the right move.

I'd like to thank you for all these years of service and I think it goes without
saying that you're welcome back in the team anytime you want.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-20 Thread Thierry Carrez
Thanks for the initial feedback everyone.
I proposed the matching governance change at:

https://review.openstack.org/475721

Please comment there if you think it's a good or bad idea.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Policy rules for APIs based on "domain_id"

2017-06-20 Thread Adam Heczko
Hello Valeriy,
agree, that would be very useful. I think that this deserves attention and
cross project discussion.
Maybe a community goal process [2] is a valid path forward in this regard.

[2] https://governance.openstack.org/tc/goals/

On Tue, Jun 20, 2017 at 11:15 AM, Valeriy Ponomaryov <
vponomar...@mirantis.com> wrote:

> Hello OpenStackers,
>
> Wanted to pay some attention to one of restrictions in OpenStack.
> It came out, that it is impossible to define policy rules for API services
> based on "domain_id".
> As far as I know, only Keystone supports it.
>
> So, it is unclear whether it is intended or it is just technical debt that
> each OpenStack project should
> eliminate?
>
> For the moment, I filed bug [1].
>
> Use case is following: usage of Keystone API v3 all over the cloud and
> level of trust is domain, not project.
>
> And if it is technical debt how much different teams are interested in
> having such possibility?
>
> [1] https://bugs.launchpad.net/nova/+bug/1699060
>
> --
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-20 Thread Flavio Percoco

On 20/06/17 00:33 +, joehuang wrote:

I think openstack community  provides a flat project market place for 
infrastructure is good enough:

all projects are just some "goods" in the market place, let the cloud operators 
to select projects
from the project market place for his own infrastructure.

We don't have to mark a project a core project or not, only need to tag 
attribute of a project, for
example how mature it is, how many "like" they have, what the cloud operator 
said for the project. etc.

All flat, just let people make decision by themselves, they are not idiot, they 
have wisdom
on building infrastructure.

Not all people need a package: you bought a package of ice-cream, but not all 
you will like it,
If they want package, distribution provider can help them to define and 
customize a package, if
you want customization, you will decide which ball of cream you want, isn't it?


The flavors you see in an ice-cream shop counter are not there by accident. Those
flavors have gone through a creation process, they have been tested and they
have also survived over the years. Some flavors are removed with time and some
others stay there forever.

Unfortunately, tagging those flavors won't cut it, which is why you don't see
tags in their labels when you go to an ice-cream shop. Some tags are implied,
other tags are inferred and other tags are subjective.

Experimenting with new flavors doesn't happen overnight in some person's
bedroom. The new flavors are tested using the *same* infrastructure as the other
flavors and once they reach a level of maturity, they are exposed in the counter
so that customers will able to consume them.

Ultimately, experimentation is part of the ice-cream shop's mission and it
requires time, effort and resources but not all experiments end well. At the
end, though, what really matters is that all these flavors serve the same
mission and that's why they are sold at the ice-cream shop, that's why they are
exposed in the counter. Customer's of the ice-cream shop know they can trust
what's in the counter. They know the exposed flavors serve their needs at a high
level and they can now focus on their specific needs.

So, do you really think it's just a set of flavors and it doesn't really matter
how those flavors got there?

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Policy rules for APIs based on "domain_id"

2017-06-20 Thread Valeriy Ponomaryov
Hello OpenStackers,

Wanted to pay some attention to one of the restrictions in OpenStack.
It turns out that it is impossible to define policy rules for API services
based on "domain_id".
As far as I know, only Keystone supports it.

So, it is unclear whether this is intended or just technical debt that
each OpenStack project should
eliminate.

For the moment, I filed bug [1].

The use case is the following: Keystone API v3 is used all over the cloud and
the level of trust is the domain, not the project.
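
To make it concrete, this is the kind of rule that works in Keystone's policy
today but, as far as I can tell, not in the other services; the second line is
hypothetical and only illustrates what we would like to be able to write:

    "identity:list_projects": "rule:admin_required and domain_id:%(domain_id)s"
    "os_compute_api:servers:index": "rule:admin_required and domain_id:%(domain_id)s"

The first style works because Keystone hands the target domain to the policy
engine; the equivalent in other services does not, because the
credentials/target passed to oslo.policy never contain a usable domain_id.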

And if it is technical debt, how interested are the different teams in
having such a possibility?

[1] https://bugs.launchpad.net/nova/+bug/1699060

-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Jun.21

2017-06-20 Thread joehuang
Hello, team,

Agenda of Jun.21 weekly meeting:

  1.  summary of tricircle demo in OPNFV summit
  3.  feature implementation review
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 1:00.


Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Jun.21

2017-06-20 Thread joehuang
Hello, team,

Agenda of Jun.20 weekly meeting:

  1.  summary of tricircle demo in OPNFV summit
  3.  feature implementation review
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 1:00.


Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators][all][Ironic][Nova] The /service API endpoint

2017-06-20 Thread milanisko k
Hi,


> 1) create a new API endpoint e.g. 'v1/service' that can report which
conductor is managing given node. Additionally it can also report aliveness
of all Ironic conductors and on which hosts they are running (similar to
nova service-list)



> 2) expose conductor_affinity in node-show (but resolve it to hostname
first).


IMO both; these things are better "cross-linked" to be able to explore the
API


--

milan

út 20. 6. 2017 v 8:39 odesílatel Kumari, Madhuri 
napsal:

> Hi All,
>
>
>
> I am working on a bug [1] in Ironic which talks exposing the state of
> conductor service running in OpenStack environment.
>
> There are two ways to do this:
>
>
>
> 1) create a new API endpoint e.g. 'v1/service' that can report which
> conductor is managing given node. Additionally it can also report aliveness
> of all Ironic conductors and on which hosts they are running (similar to
> nova service-list)
>
>
>
> 2) expose conductor_affinity in node-show (but resolve it to hostname
> first).
>
>
>
> Option #2 is probably quicker to implement, but option #1 has more
> benefits for operators.
>
>
>
> So I would like to know from the OpenStack operators and project teams who
> has this API:
>
>
>
> 1. What are the other use-case of this API?
>
> 2. Which option is better to implement? Is it worth adding a new API
> endpoint for the purpose?
>
> 2. Also why this API only expose the state of RPC servers and not the API
> server in the environment?
>
>
>
> [1] https://bugs.launchpad.net/ironic/+bug/1616878
>
>
>
> Regards,
>
> Madhuri
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [openstack-docs][dev][all] Documentation repo freeze

2017-06-20 Thread Alexandra Settle


On 6/19/17, 6:19 PM, "Petr Kovar"  wrote:

On Mon, 19 Jun 2017 15:56:35 +
Alexandra Settle  wrote:

> Hi everyone,
> 
> As of today - Monday, the 19th of June – please do NOT merge any patches 
into
> the openstack-manuals repository that is not related to the topic:
> “doc-migration”.
> 
> We are currently in the phase of setting up for our MASSIVE migration and 
we
> need to ensure that there will be minimal conflicts.
> 
> You can find all patches related to that topic here:
> 
https://review.openstack.org/#/q/status:open+project:openstack/openstack-manuals+branch:master+topic:doc-migration
> 
> The only other patches that should be passed is the Zanata translation 
patches.
> 
> If there are any concerns or questions, please do not hesitate to contact 
either
> myself or Doug Hellmann for further clarification.

Can we still merge into stable branches? As the migration only affects
content in master, I think there's no need to freeze stable branches. 


Good question. I would say yes, as we are only porting over everything that is 
currently in master for now.

I would also like to clarify that this only affects the documentation suite, 
not the tooling.

Cheers,

Alex 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Next weekly IRC meeting cancelled

2017-06-20 Thread Rob Cresswell (rcresswe)
Hey everyone,

The next weekly IRC meeting (2017-06-21) is cancelled, as I’m away for a week.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][ec2-api] How about using boto3 instead of boto in requirements

2017-06-20 Thread Tony Breeds
On Tue, Jun 20, 2017 at 05:15:59PM +1000, Tony Breeds wrote:
> On Mon, Jun 19, 2017 at 09:33:02PM +0800, jiaopengju wrote:
> > Hi Dims,
> > I got response from core member of ec2-api. What do you think about it?
> > 
> > 
> > --
> > Hi,
> > 
> > 
> > I don't treat adding new library as a problem.
> > 
> > 
> > - I see that you don't remove boto - so your change doesn't affect ec2-api 
> > code.
> 
> Part of the role of the requirements team is to ensure that we don't end
> up with several libraries that have significant overlap in
> functionality.  Clearly boto and boto3 fall squarely in that camp.
> 
> What the requirements team needs is some assurance that switching to
> boto3 is something that the ec2-api team would be able to do.  Running
> on boto which has been deprecated in favor of boto3 make sense from a
> lot of levels.  We're far enough into the Queens cycle that I doubt it'd
> happen this cycle :(

s/Queens/Pike/  I'm getting my cycles mixed up; that's not a good sign.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] realtime kvm cpu affinities

2017-06-20 Thread Henning Schild
Hi,

We are using OpenStack for managing realtime guests. We modified
it and contributed to discussions on how to model the realtime
feature. More recent versions of OpenStack have support for realtime,
and there are a few proposals on how to improve that further.

But there is still no full answer on how to distribute threads across
host-cores. The vcpus are easy, but for the emulation and io-threads
there are multiple options. I would like to collect the constraints
from a qemu/kvm perspective first, and then possibly influence the
OpenStack development.

I will put the summary/questions first, the text below provides more
context to where the questions come from.
- How do you distribute your threads when reaching the really low
  cyclictest results in the guests? In [3] Rik talked about problems
  like lock holder preemption, starvation etc. but not where/how to
  schedule emulators and io
- Is it ok to put a vcpu and emulator thread on the same core as long as
  the guest knows about it? Any funny behaving guest, not just Linux.
- Is it ok to make the emulators potentially slow by running them on
  busy best-effort cores, or will they quickly be on the critical path
  if you do more than just cyclictest? - our experience says we don't
  need them reactive even with rt-networking involved


Our goal is to reach a high packing density of realtime VMs. Our
pragmatic first choice was to run all non-vcpu-threads on a shared set
of pcpus where we also run best-effort VMs and host load.
Now the OpenStack guys are not too happy with that because that is load
outside the assigned resources, which leads to quota and accounting
problems.

So the current OpenStack model is to run those threads next to one
or more vcpu-threads. [1] You will need to remember that the vcpus in
question should not be your rt-cpus in the guest. I.e. if vcpu0 shares
its pcpu with the hypervisor noise your preemptrt-guest would use
isolcpus=1.
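
For reference, with the extra specs from [1] that model is expressed roughly
like this (the flavor name is just an example):

    openstack flavor set rt-flavor \
        --property hw:cpu_policy=dedicated \
        --property hw:cpu_realtime=yes \
        --property hw:cpu_realtime_mask=^0

i.e. all vcpus get dedicated pcpus, vcpu0 is excluded from the realtime set
and is expected to absorb the emulator/io threads and other noise, so the
guest runs its rt workload on vcpu1..N only.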

Is that kind of sharing a pcpu really a good idea? I could imagine
things like smp housekeeping (cache invalidation etc.) to eventually
cause vcpu1 having to wait for the emulator stuck in IO.
Or maybe a busy polling vcpu0 starving its own emulator causing high
latency or even deadlocks.
Even if it happens to work for Linux guests it seems like a strong
assumption that an rt-guest that has noise cores can deal with even more
noise one scheduling level below.

More recent proposals [2] suggest a scheme where the emulator and io
threads are on a separate core. That sounds more reasonable /
conservative but dramatically increases the per VM cost. And the pcpus
hosting the hypervisor threads will probably be idle most of the time.
I guess in this context the most important question is whether qemu is
ever involved in "regular operation" if you avoid the obvious IO
problems on your critical path.

My guess is that just [1] has serious hidden latency problems and [2]
is taking it a step too far by wasting whole cores for idle emulators.
We would like to suggest some other way in between, that is a little
easier on the core count. Our current solution seems to work fine but
has the mentioned quota problems.
With this mail i am hoping to collect some constraints to derive a
suggestion from. Or maybe collect some information that could be added
to the current blueprints as reasoning/documentation.

Sorry if you receive this mail a second time, I was not subscribed to
openstack-dev the first time.

best regards,
Henning

[1]
https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/libvirt-real-time.html
[2]
https://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/libvirt-emulator-threads-policy.html
[3]
http://events.linuxfoundation.org/sites/events/files/slides/kvmforum2015-realtimekvm.pdf

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] security group OVO change

2017-06-20 Thread Kevin Benton
Does this fix your issue? https://review.openstack.org/#/c/475445/

On Mon, Jun 19, 2017 at 12:21 AM, Gary Kotton  wrote:

> Sorry for being vague – have been debugging.
>
> We overwrite the base method:
>
>
>
> with db_api.context_manager.writer.using(context):
>     secgroup_db = (
>         super(NsxV3Plugin, self).create_security_group(
>             context, security_group, default_sg))
>     nsx_db.save_sg_mappings(context,
>                             secgroup_db['id'],
>                             ns_group['id'],
>                             firewall_section['id'])
>     self._process_security_group_properties_create(context,
>                                                     secgroup_db,
>                                                     secgroup,
>                                                     default_sg)
>
>
>
> The secgroup_db that returns always has empty rules. If I remove the
> transaction then it works.
>
> Still trying to figure out why when we call:
>
> with db_api.context_manager.writer.using(context):
>
> secgroup_db = (
>
> super(NsxV3Plugin, self).create_security_group(…
>
>
>
> The rules are not populated. The db_api.context_manager.writer.using is
> what is causing the problem.
>
>
>
> As a work around we reread the object when we need to process the rules.
> Not sure if anyone else has hit this
>
> Thanks
>
> Gary
>
>
>
> *From: *Kevin Benton 
> *Reply-To: *OpenStack List 
> *Date: *Monday, June 19, 2017 at 10:01 AM
> *To: *OpenStack List 
> *Cc: *"isaku.yamah...@gmail.com" 
> *Subject: *Re: [openstack-dev] [neutron] security group OVO change
>
>
>
> Do you mean the callback event for AFTER_CREATE is missing the rules when
> it's for default security groups?
>
>
>
> On Sun, Jun 18, 2017 at 4:44 AM, Gary Kotton  wrote:
>
> Hi,
> That patch looks good. We still have an issue in that the create security
> groups does not return the list of the default rules.
> Thanks
> Gary
>
>
> On 6/17/17, 2:33 AM, "Isaku Yamahata"  wrote:
>
> It also broke networking-odl.
> The patch[1] is needed to unbreak.
> [1] https://review.openstack.org/#/c/448420/
>
> necessary db info is taken from context.session.new.
> But with OVO, those expunge themselves with create method.
> Those info needs to be passed as callback argument.
>
> Thanks,
>
> On Fri, Jun 16, 2017 at 01:25:28PM -0700,
> Ihar Hrachyshka  wrote:
>
> > To close the loop here,
> >
> > - this also broke heat py3 job (https://launchpad.net/bugs/1698355)
> > - we polished https://review.openstack.org/474575 to fix both
> > vmware-nsx and heat issues
> > - I also posted a patch for oslo.serialization for the bug that
> > triggered MemoryError in heat gate:
> > https://review.openstack.org/475052
> > - the vmware-nsx adoption patch is at:
> > https://review.openstack.org/#/c/474608/ and @boden is working on
> it,
> > should be ready to go in due course.
> >
> > Thanks and sorry for inconveniences,
> > Ihar
> >
> > On Thu, Jun 15, 2017 at 6:17 AM, Gary Kotton 
> wrote:
> > > Hi,
> > >
> > > The commit https://review.openstack.org/284738 has broken
> decomposed plugins
> > > (those that extend security groups and rules). The reason for this
> is that
> > > there is a extend callback that we use which expects to get a
> database
> > > object and the aforementioned patch passes a new neutron object.
> > >
> > > I have posted [i] to temporarily address the issue. An alternative
> is to
> > > revert the patch until the decomposed plugins can figure out how to
> > > correctly address this.
> > >
> > > Thanks
> > >
> > > Gary
> > >
> > > [i] https://review.openstack.org/474575
> > >
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Isaku Yamahata 
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [requirements][ec2-api] How about using boto3 instead of boto in requirements

2017-06-20 Thread Tony Breeds
On Mon, Jun 19, 2017 at 09:33:02PM +0800, jiaopengju wrote:
> Hi Dims,
> I got response from core member of ec2-api. What do you think about it?
> 
> 
> --
> Hi,
> 
> 
> I don't treat adding new library as a problem.
> 
> 
> - I see that you don't remove boto - so your change doesn't affect ec2-api 
> code.

Part of the role of the requirements team is to ensure that we don't end
up with several libraries that have significant overlap in
functionality.  Clearly boto and boto3 fall squarely in that camp.

What the requirements team needs is some assurance that switching to
boto3 is something that the ec2-api team would be able to do.  Running
on boto which has been deprecated in favor of boto3 make sense from a
lot of levels.  We're far enough into the Queens cycle that I doubt it'd
happen this cycle :(

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][all][Ironic][Nova] The /service API endpoint

2017-06-20 Thread Kumari, Madhuri
Hi All,

I am working on a bug [1] in Ironic which talks about exposing the state of the conductor 
service running in an OpenStack environment.
There are two ways to do this:

1) create a new API endpoint, e.g. 'v1/service', that can report which conductor 
is managing a given node. Additionally it can also report the aliveness of all Ironic 
conductors and on which hosts they are running (similar to nova service-list)

2) expose conductor_affinity in node-show (but resolve it to hostname first).

Option #2 is probably quicker to implement, but option #1 has more benefits for 
operators.
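
To make option #1 a bit more concrete, a purely hypothetical response could
look something like the following (field names are illustrative only, not a
settled design):

    GET /v1/service
    {
        "services": [
            {"hostname": "ironic-cond-1", "alive": true, "nodes": 42},
            {"hostname": "ironic-cond-2", "alive": false, "nodes": 0}
        ]
    }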

So I would like to know from the OpenStack operators and project teams who have 
such an API:

1. What are the other use-cases of this API?
2. Which option is better to implement? Is it worth adding a new API endpoint 
for the purpose?
3. Also, why does this kind of API only expose the state of RPC servers and not the API 
server in the environment?

[1] https://bugs.launchpad.net/ironic/+bug/1616878

Regards,
Madhuri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev