[openstack-dev] [neutron] autodoc and tox-siblings

2018-07-02 Thread Takashi Yamamoto
hi,

- networking-midonet uses autodoc in its docs;
build-openstack-sphinx-docs runs it.
- build-openstack-sphinx-docs doesn't use tox-siblings, so the job
uses released versions of dependencies, e.g. neutron, neutron-XXXaas,
os-vif, etc.
- released versions of dependencies and networking-midonet master are
not necessarily compatible.
- a consequence: https://bugs.launchpad.net/networking-midonet/+bug/1779801
  (in this case, neutron-lib and neutron are not compatible)

Possible solutions I can think of:
- stop using autodoc (I suspect I have to do this for now)
- make intermediate releases of neutron and friends
- finish the neutron-lib work and stop importing neutron etc. (ideal, but we
have not reached this stage yet)
- make the doc job use tox-siblings (as it used to do in the tox_install era)
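Short of dropping autodoc entirely, one stopgap (assuming the incompatibility bites at import time, as in the linked bug) is to let Sphinx mock the heavyweight imports. `autodoc_mock_imports` is standard sphinx.ext.autodoc configuration; the module list below is only a guess at what networking-midonet actually pulls in:

```python
# conf.py sketch: stub out dependencies whose released versions may be
# incompatible with master, so autodoc's imports cannot fail at build time.
# The module names here are assumptions; adjust to what the docs import.
autodoc_mock_imports = [
    'neutron',
    'neutron_lib',
    'os_vif',
]
```

The trade-off is that anything autodoc needs from the mocked modules (base classes, module-level constants) is stubbed too, so some signatures may render less completely.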

any suggestions?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Continuously growing request_specs table

2018-07-02 Thread Zhenyu Zheng
Thanks, I may have missed that one.

On Mon, Jul 2, 2018 at 10:29 PM Matt Riedemann  wrote:

> On 7/2/2018 2:47 AM, Zhenyu Zheng wrote:
> > It seems that the current request_specs record does not get removed even
> > when the related instance is gone, which leads to a continuously growing
> > request_specs table. Why is that?
> >
> > Is it because the delete process could fail and we would have to recover
> > the request_spec if we had deleted it?
> >
> > How about adding a nova-manage CLI command for operators to clean up
> > out-dated request specs records from the table by comparing the request
> > specs and existence of related instance?
>
> Already fixed in Rocky:
>
> https://review.openstack.org/#/c/515034/
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Jay Pipes

On 06/27/2018 07:23 PM, Zane Bitter wrote:

On 27/06/18 07:55, Jay Pipes wrote:
Above, I was saying that the scope of the *OpenStack* community is 
already too broad (IMHO). Examples of projects that have made the 
*OpenStack* community too broad are purpose-built telco applications 
like Tacker [1] and Service Function Chaining [2].


I've also argued in the past that all distro- or vendor-specific 
deployment tools (Fuel, Triple-O, etc. [3]) should live outside of 
OpenStack, because these projects are more like products, and the relentless 
drive of vendor product management (rightfully) pushes the scope of 
these applications to gobble up more and more feature space that may 
or may not have anything to do with the core OpenStack mission (and 
more to do with those companies' product roadmaps).


I'm still sad that we've never managed to come up with a single way to 
install OpenStack. The amount of duplicated effort expended on that 
problem is mind-boggling. At least we tried though. Excluding those 
projects from the community would have just meant giving up from the 
beginning.


You have to have motivation from vendors in order to achieve said single 
way of installing OpenStack. I gave up long ago on getting distros and 
vendors behind such an effort.


Where vendors see $$$, they will attempt to carve out value 
differentiation. And value differentiation leads to, well, differences, 
naturally.


And, despite what some might misguidedly think, Kubernetes has no single 
installation method. Their *official* setup/install page is here:


https://kubernetes.io/docs/setup/pick-right-solution/

It lists no fewer than *37* (!) different ways of installing Kubernetes, 
and I'm not even including anything listed in the "Custom Solutions" 
section.


I think Thierry's new map, which collects installer services in a 
separate bucket (one that may eventually come with a separate git namespace), 
is a helpful way of communicating to users what's happening without 
forcing those projects outside of the community.


Sure, I agree the separate bucket is useful, particularly when paired 
with information that allows operators to know how stable and/or 
bleeding edge the code is expected to be -- you know, those "tags" that 
the TC spent time curating.



So to answer your question:

 zaneb: yeah... nobody I know who argues for a small stable 
core (in Nova) has ever said there should be fewer higher layer 
services.

 zaneb: I'm not entirely sure where you got that idea from.


Note the emphasis on *Nova* above?

Also note that when I've said that *OpenStack* should have a smaller 
mission and scope, that doesn't mean that higher-level services aren't 
necessary or wanted.


Thank you for saying this, and could I please ask you to repeat this 
disclaimer whenever you talk about a smaller scope for OpenStack.


Yes. I shall shout it from the highest mountains. [1]

Because for those of us working on higher-level services it feels like 
there has been a non-stop chorus (both inside and outside the project) 
of people wanting to redefine OpenStack as something that doesn't 
include us.


I've said in the past (on Twitter, can't find the link right now, but 
it's out there somewhere) something to the effect of "at some point, 
someone just needs to come out and say that OpenStack is, at its core, 
Nova, Neutron, Keystone, Glance and Cinder".


Perhaps this is what you were recollecting. I would use a different 
phrase nowadays to describe what I was thinking with the above.


I would say instead "Nova, Neutron, Cinder, Keystone and Glance [2] are 
a definitive lower level of an OpenStack deployment. They represent a 
set of required integrated services that supply the most basic 
infrastructure for datacenter resource management when deploying OpenStack."


Note the difference in wording. Instead of saying "OpenStack is X", I'm 
saying "These particular services represent a specific layer of an 
OpenStack deployment".


Nowadays, I would further add something to the effect of "Depending on 
the particular use cases and workloads the OpenStack deployer wishes to 
promote, an additional layer of services provides workload orchestration 
and workflow management capabilities. This layer of services includes 
Heat, Mistral, Tacker, Service Function Chaining, Murano, etc.".


Does that provide you with some closure on this feeling of "non-stop 
chorus" of exclusion that you mentioned above?


The reason I haven't dropped this discussion is because I really want to 
know if _all_ of those people were actually talking about something else 
(e.g. a smaller scope for Nova), or if it's just you. Because you and I 
are in complete agreement that Nova has grown a lot of obscure 
capabilities that make it fiendishly difficult to maintain, and that in 
many cases might never have been requested if we'd had higher-level 
tools that could meet the same use cases by composing simpler operations.


IMHO some of the contributing factors 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Jay Pipes

On 07/02/2018 03:12 PM, Fox, Kevin M wrote:

I think a lot of the pushback around not adding more common/required services 
is the extra load it puts on ops though. Hence these:

  * Consider abolishing the project walls.
  * simplify the architecture for ops


IMO, those need to change to break free from the pushback and make progress on 
the commons again.


What *specifically* would you do, Kevin?

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Zane Bitter

On 28/06/18 15:09, Fox, Kevin M wrote:

I'll weigh in a bit with my operator hat on, as recent experience pertains to 
the current conversation.

Kubernetes has largely succeeded in common distribution tools where OpenStack 
has not been able to.
kubeadm was created as a way to centralize deployment best practices, config, 
and upgrade stuff into a common code base that other deployment tools can 
build on.

I think this has been successful for a few reasons:
  * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. 
(Eating its own dogfood)


This is also TripleO's philosophy :)


  * was willing to make their api robust enough to handle that self 
enhancement. (secrets are a thing, orchestration is not optional, etc)


I don't even think that self-upgrading was the most important 
consequence of that. Fundamentally, they understood how applications 
would use it and made sure that the batteries were included. I think the 
fact that they conceived it explicitly as an application operation 
technology made this an obvious choice. I suspect that the reason we've 
lagged in standardising those things in OpenStack is that there's so 
many other ways to think of OpenStack before you get to that one.



  * they decided to produce a reference product (very important to adoption IMO. You 
don't have to "build from source" to kick the tires.)
  * made the barrier to testing/development as low as 'curl 
http://..minikube; minikube start' (this spurs adoption and contribution)


That's not so different from devstack though.


  * not having large silos in deployment projects allowed better communication 
on common tooling.
  * Operator focused architecture, not project based architecture. This 
simplifies the deployment situation greatly.
  * try whenever possible to focus on just the commons and push vendor specific 
needs to plugins so vendors can deal with vendor issues directly and not 
corrupt the core.


I agree with all of those, but to be fair to OpenStack, you're leaving 
out arguably the most important one:


* Installation instructions start with "assume a working datacenter"

They have that luxury; we do not. (To be clear, they are 100% right to 
take full advantage of that luxury. Although if there are still folks 
who go around saying that it's a trivial problem and OpenStackers must 
all be idiots for making it look so difficult, they should really stop 
embarrassing themselves.)



I've upgraded many OpenStacks since Essex and usually it is multiple weeks of 
prep, and a 1-2 day outage to perform the deed. In about 50% of the upgrades, 
something breaks only on the production system and needs hot patching on the 
spot. About 10% of the time, I've had to write the patch personally.

I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, 
what did I have to do? A couple hours of looking at release notes and trying to 
dig up examples of where things broke for others. Nothing popped up. Then:

on the controller, I ran:
yum install -y kubeadm #get the newest kubeadm
kubeadm upgrade plan #check things out

It told me I had 2 choices. I could:
  * kubeadm upgrade v1.9.8
  * kubeadm upgrade v1.10.5

I ran:
kubeadm upgrade v1.10.5

The control plane was down for under 60 seconds and then the cluster was 
upgraded. The rest of the services did a rolling upgrade live and took a few 
more minutes.

I can take my time to upgrade kubelets, as mixed kubelet versions work well.

Upgrading kubelet is about as easy.

Done.

There's a lot of things to learn from the governance / architecture of 
Kubernetes..


+1


Fundamentally, there aren't huge differences in what Kubernetes and OpenStack 
try to provide users. Scheduling a VM or a Container via an API with some 
kind of networking and storage is the same kind of thing in either case.


Yes, from a user perspective that is (very) broadly accurate. But again, 
Kubernetes assumes that somebody else has provided the bottom few layers 
of implementation, while OpenStack *is* the somebody else.



How to get the software (OpenStack or k8s) running, though, is about as polar 
opposite as you can get.

I think if OpenStack wants to gain back some of the steam it had before, it 
needs to adjust to the new world it is living in. This means:
  * Consider abolishing the project walls. They are driving bad architecture 
(not intentionally, but as a side effect of structure)


In the spirit of cdent's blog post about random ideas: one idea I keep 
coming back to (and it's been around for a while, I don't remember who 
it first came from) is to start treating the compute node as a single 
project (I guess the k8s equivalent would be a kubelet). Have a single 
API - commands go in, events come out.


Note that this would not include just the compute-node functionality of 
Nova, Neutron and Cinder, but ultimately also that of Ceilometer, 
Watcher, Freezer, Masakari (and possibly Congress and Vitrage?) as well. 
Some of those 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Fox, Kevin M
I think Keystone is one of the exceptions currently, as it is the 
quintessential common service in all of OpenStack: since the rule was made that 
all things auth belong to Keystone, the other projects don't waver from it. The 
same can not be said of, say, Barbican. Steps have been made recently to get 
farther down that path, but it is still not there. Until it is blessed as a 
common, required component, other silos are still disincentivized to depend on 
it.

I think a lot of the pushback around not adding more common/required services 
is the extra load it puts on ops though. Hence these:
>  * Consider abolishing the project walls.
>  * simplify the architecture for ops

IMO, those need to change to break free from the pushback and make progress on 
the commons again.

Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Monday, July 02, 2018 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
>  * focus on the commons first.

Nearly all the work we've been doing from an identity perspective over
the last 18 months has enabled or directly improved the commons (or what
I would consider the commons). I agree that it's important, but we're
already focusing on it to the point where we're out of bandwidth.

Is the problem that it doesn't appear that way? Do we have different
ideas of what the "commons" are?


Re: [openstack-dev] [Puppet] Requirements for running puppet unit tests?

2018-07-02 Thread Lars Kellogg-Stedman
On Thu, Jun 28, 2018 at 8:04 PM, Lars Kellogg-Stedman wrote:

> What is required to successfully run the rspec tests?


On the odd chance that it might be useful to someone else, here's the
Docker image I'm using to successfully run the rspec tests for
puppet-keystone:

  https://github.com/larsks/docker-image-rspec

Available on Docker Hub as larsks/rspec.

Cheers,

-- 
Lars Kellogg-Stedman 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Lance Bragstad


On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
>  * focus on the commons first.

Nearly all the work we've been doing from an identity perspective over
the last 18 months has enabled or directly improved the commons (or what
I would consider the commons). I agree that it's important, but we're
already focusing on it to the point where we're out of bandwidth.

Is the problem that it doesn't appear that way? Do we have different
ideas of what the "commons" are?

>  * simplify the architecture for ops:
>* make as much as possible stateless and centralize remaining state.
>* stop moving config options around with every release. Make it promote 
> automatically and persist it somewhere.
>* improve serial performance before sharding. k8s can do 5000 nodes on one 
> control plane. No reason to do nova cells and make ops deal with it except 
> for the most huge of clouds
>  * consider a reference product (think Linux vanilla kernel. distros can 
> provide their own variants. that's ok)
>  * come up with an architecture team for the whole, not the subsystem. The 
> whole thing needs to work well.
>  * encourage current OpenStack devs to test/deploy Kubernetes. It has some 
> very good ideas that OpenStack could benefit from. If you don't know what 
> they are, you can't adopt them.
>
> And I know its hard to talk about, but consider just adopting k8s as the 
> commons and build on top of it. OpenStack's api's are good. The 
> implementations right now are very very heavy for ops. You could tie in K8s's 
> pod scheduler with vm stuff running in containers and get a vastly simpler 
> architecture for operators to deal with. Yes, this would be a major 
> disruptive change to OpenStack. But long term, I think it would make for a 
> much healthier 

Re: [openstack-dev] [sahara][ptg] Sahara schedule

2018-07-02 Thread Jeremy Freudberg
Tuesday+Wednesday positive: gives time on Monday for the API SIG (I
personally would like to be there) and the Ask-me-anything/goal help
room

Tuesday+Wednesday negative: less time for Luigi (if he is at PTG) to
do QA things (but QA will also be there on Thursday)

Tuesday+Wednesday negative: the further we go into the week, the more
there is a risk that I am needed back at school (although from what I
can see now this will not be a problem)


Basically you can pick whatever days, and then I will make it work. I
don't want to be accountable for such a decision.


On Mon, Jul 2, 2018 at 12:50 PM, Telles Nobrega  wrote:
> Hi Saharans,
>
> as previously discussed, we are scheduled for Monday and Tuesday at the PTG
> in Denver. I would like to hear from folks who are planning to be there
> which days work best for you. Options are Monday and Tuesday, or Tuesday
> and Wednesday.
>
> Keep in mind that I can't guarantee a switch, I can only propose to the
> organizers and see what we can do.
>
> Thanks all,
> --
>
> TELLES NOBREGA
>
> SOFTWARE ENGINEER
>
> Red Hat Brasil
>
> Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
>
> tenob...@redhat.com
>
> TRIED. TESTED. TRUSTED.
>  Red Hat is recognized among the best companies to work for in Brazil
> by Great Place to Work.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][ptg] Sahara schedule

2018-07-02 Thread Telles Nobrega
Hi Saharans,

as previously discussed, we are scheduled for Monday and Tuesday at the PTG
in Denver. I would like to hear from folks who are planning to be there
which days work best for you. Options are Monday and Tuesday, or Tuesday
and Wednesday.

Keep in mind that I can't guarantee a switch, I can only propose to the
organizers and see what we can do.

Thanks all,
-- 

TELLES NOBREGA

SOFTWARE ENGINEER

Red Hat Brasil  

Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo

tenob...@redhat.com

TRIED. TESTED. TRUSTED. 
 Red Hat is recognized among the best companies to work for in Brazil
by Great Place to Work.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about filter the flavor

2018-07-02 Thread Matt Riedemann

On 7/2/2018 2:43 AM, 李杰 wrote:
Oh, sorry, that's not what I meant. In my opinion, we could filter the flavors 
in the flavor list, e.g. via the CLI: openstack flavor list --property key:value.


There is no support for natively filtering flavors by extra specs in the 
compute REST API so that would have to be added with a microversion (if 
we wanted to add that support). So it would require a nova spec, which 
would be reviewed for consideration at the earliest in the Stein 
release. OSC could do client-side filtering if it wanted.
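The client-side filtering mentioned above could be sketched roughly as below. The flavor dicts are a stand-in for whatever the client returns (python-novaclient exposes extra specs via flavor.get_keys()); treat the shapes and names as assumptions:

```python
def filter_flavors(flavors, key, value):
    """Client-side filter: keep flavors whose extra specs contain key=value.

    Each flavor is represented here as a dict with 'name' and 'extra_specs'
    (a plain mapping); with python-novaclient you would build that mapping
    from flavor.get_keys(), which returns the flavor's extra specs.
    """
    return [f for f in flavors if f.get('extra_specs', {}).get(key) == value]
```

This is O(n) over all flavors on the client side, which is fine for typical flavor counts but is why a native API-side filter would still be worth a microversion.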


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Continuously growing request_specs table

2018-07-02 Thread Matt Riedemann

On 7/2/2018 2:47 AM, Zhenyu Zheng wrote:
It seems that the current request_specs record does not get removed even 
when the related instance is gone, which leads to a continuously growing 
request_specs table. Why is that?


Is it because the delete process could fail and we would have to recover 
the request_spec if we had deleted it?


How about adding a nova-manage CLI command for operators to clean up 
out-dated request specs records from the table by comparing the request 
specs and existence of related instance?


Already fixed in Rocky:

https://review.openstack.org/#/c/515034/
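For what it's worth, the proposed cleanup amounts to finding request_specs rows whose instance no longer exists. A conceptual sketch only, not Nova's actual implementation (which also has to respect soft-deleted instances rather than just missing rows); the table and column names follow the Nova schema but should be treated as assumptions here:

```python
import sqlite3  # stand-in engine for the sketch; Nova uses its own DB layer


def find_orphaned_request_specs(conn):
    """Return instance UUIDs of request_specs rows with no matching instance.

    A LEFT JOIN from request_specs to instances leaves i.uuid NULL exactly
    when the instance row is gone, which is the orphan condition described
    in the thread.
    """
    rows = conn.execute(
        "SELECT rs.instance_uuid FROM request_specs rs "
        "LEFT JOIN instances i ON i.uuid = rs.instance_uuid "
        "WHERE i.uuid IS NULL"
    )
    return [r[0] for r in rows]
```

The Rocky fix linked above makes this unnecessary going forward; the sketch only illustrates what the one-off cleanup would check.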

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Chris Dent

On Thu, 28 Jun 2018, Fox, Kevin M wrote:


I think if OpenStack wants to gain back some of the steam it had before, it 
needs to adjust to the new world it is living in. This means:
* Consider abolishing the project walls. They are driving bad architecture (not 
intentionally, but as a side effect of structure)
* focus on the commons first.
* simplify the architecture for ops:
  * make as much as possible stateless and centralize remaining state.
  * stop moving config options around with every release. Make it promote 
automatically and persist it somewhere.
  * improve serial performance before sharding. k8s can do 5000 nodes on one 
control plane. No reason to do nova cells and make ops deal with it except for 
the most huge of clouds
* consider a reference product (think Linux vanilla kernel. distros can 
provide their own variants. that's ok)
* come up with an architecture team for the whole, not the subsystem. The whole 
thing needs to work well.
* encourage current OpenStack devs to test/deploy Kubernetes. It has some very 
good ideas that OpenStack could benefit from. If you don't know what they are, 
you can't adopt them.


These are ideas worth thinking about. We may not be able to do them
(unclear) but they are stimulating and interesting, and we need to
keep the conversation going. Thank you.

I referenced this thread from a blog post I just made
https://anticdent.org/some-opinions-on-openstack.html
which is just a bunch of random ideas on tweaking OpenStack in the
face of growth and change. It's quite likely it's junk, but there
may be something useful to extract as we try to achieve some focus.


--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [barbican][heat] Identifying secrets in Barbican

2018-07-02 Thread Ade Lee
On Thu, 2018-06-28 at 17:32 -0400, Zane Bitter wrote:
> On 28/06/18 15:00, Douglas Mendizabal wrote:
> > Replying inline.
> 
> [snip]
> > IIRC, using URIs instead of UUIDs was a federation pre-optimization
> > done many years ago when Barbican was brand new and we knew we
> > wanted
> > federation but had no idea how it would work.  The rationale was
> > that
> > the URI would contain both the ID of the secret as well as the
> > location
> > of where it was stored.
> > 
> > In retrospect, that was a terrible idea, and using UUIDs for
> > consistency with the rest of OpenStack would have been a better
> > choice.
> >   I've added a story to the python-barbicanclient storyboard to
> > enable
> > usage of UUIDs instead of URLs:
> > 
> > https://storyboard.openstack.org/#!/story/2002754
> 
> Cool, thanks for clearing that up. If UUID is going to become the/a 
> standard way to reference stuff in the future then we'll just use
> the 
> UUID for the property value.
> 
> > I'm sure you've noticed, but the URI that identifies the secret
> > includes the UUID that Barbican uses to identify the secret
> > internally:
> > 
> > http://{barbican-host}:9311/v1/secrets/{UUID}
> > 
> > So you don't actually need to store the URI, since it can be
> > reconstructed by just saving the UUID and then using whatever URL
> > Barbican has in the service catalog.
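Following that point, splitting the UUID back out of a stored reference is a one-liner; a minimal sketch, assuming refs keep the `http://{barbican-host}:9311/v1/secrets/{UUID}` shape quoted above:

```python
def secret_uuid_from_ref(secret_ref):
    """Return the trailing UUID of a Barbican secret URI.

    e.g. http://{host}:9311/v1/secrets/{UUID} -> {UUID}.
    Tolerates a trailing slash; assumes the UUID is the last path segment.
    """
    return secret_ref.rstrip('/').rsplit('/', 1)[-1]
```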
> > 
> > > 
> > > In a tangentially related question, since secrets are immutable
> > > once
> > > they've been uploaded, what's the best way to handle a case where
> > > you
> > > need to rotate a secret without causing a temporary condition
> > > where
> > > there is no version of the secret available? (The fact that
> > > there's
> > > no
> > > way to do this for Nova keypairs is a perpetual problem for
> > > people,
> > > and
> > > I'd anticipate similar use cases for Barbican.) I'm going to
> > > guess
> > > it's:
> > > 
> > > * Create a new secret with the same name
> > > * GET /v1/secrets/?name={name}&sort=created:desc&limit=1 to find out
> > > the URL for the newest secret with that name
> > > * Use that URL when accessing the secret
> > > * Once the new secret is created, delete the old one
> > > 
> > > Should this, or whatever the actual recommended way of doing it
> > > is,
> > > be
> > > baked in to the client somehow so that not every user needs to
> > > reimplement it?
> > > 
> > 
> > When you store a secret (e.g. using POST /v1/secrets), the response
> > includes the URI both in the JSON body and in the Location: header.
> >   
> > There is no need for you to mess around with searching by name,
> > since
> > Barbican does not use the name to identify a secret.  You should
> > just
> > save the URI (or UUID) from the response, and then update the
> > resource
> > using the old secret to point to the new secret instead.
> 
> Sometimes users will want to be able to rotate secrets without updating
> all of the places that they're referenced from, though.
> 

The way you've described seems like the easiest way to do this, and I
agree that this seems like a reasonable and common use case for the
client.  I've added https://storyboard.openstack.org/#!/story/2002786 .
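
For what it's worth, the create-before-delete ordering can be sketched in a
few lines. Barbican is replaced here by a toy in-memory store (all names are
invented for illustration; real code would go through python-barbicanclient),
but the point is that the new secret exists before the old one is deleted, so
a secret with that name is always resolvable:

```python
import itertools

class ToyStore:
    """Stand-in for Barbican: just enough to show the rotation ordering."""
    _ids = itertools.count(1)

    def __init__(self):
        self.secrets = {}  # id -> (name, payload, creation_order)

    def create(self, name, payload):
        sid = next(self._ids)
        self.secrets[sid] = (name, payload, sid)
        return sid

    def newest(self, name):
        """Mimics GET /v1/secrets/?name=...&sort=created:desc&limit=1."""
        matches = [(order, sid) for sid, (n, _, order) in self.secrets.items()
                   if n == name]
        return max(matches)[1] if matches else None

    def delete(self, sid):
        del self.secrets[sid]

def rotate(store, name, new_payload):
    old = store.newest(name)
    new = store.create(name, new_payload)  # the new secret exists first...
    if old is not None:
        store.delete(old)                  # ...then the old one goes away
    return new

store = ToyStore()
rotate(store, "db-password", "s3cret-1")
rotate(store, "db-password", "s3cret-2")
print(store.secrets[store.newest("db-password")][1])  # s3cret-2
```

Baking something like this into the client would keep every consumer from
re-implementing the ordering themselves.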

> cheers,
> Zane.
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [barbican] default devstack barbican secret store ? and big picture question ?

2018-07-02 Thread Ade Lee
On Mon, 2018-06-18 at 17:23 +, Waines, Greg wrote:
> Hey ... a couple of NEWBY question for the Barbican Team.
>  
> I just setup a devstack with Barbican @ stable/queens .
>  
> Ran through the “Verify operation” commands (
> https://docs.openstack.org/barbican/latest/install/verify.html ) ...
> Everything worked.
> stack@barbican:~/devstack$ openstack secret list
>
> stack@barbican:~/devstack$ openstack secret store --name mysecret --payload j4=]d21
> +---------------+--------------------------------------------------------------------------------+
> | Field         | Value                                                                          |
> +---------------+--------------------------------------------------------------------------------+
> | Secret href   | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 |
> | Name          | mysecret                                                                       |
> | Created       | None                                                                           |
> | Status        | None                                                                           |
> | Content types | None                                                                           |
> | Algorithm     | aes                                                                            |
> | Bit length    | 256                                                                            |
> | Secret type   | opaque                                                                         |
> | Mode          | cbc                                                                            |
> | Expiration    | None                                                                           |
> +---------------+--------------------------------------------------------------------------------+
> stack@barbican:~/devstack$
> stack@barbican:~/devstack$
> stack@barbican:~/devstack$ openstack secret list
> +--------------------------------------------------------------------------------+----------+---------------------------+--------+-----------------------------+-----------+------------+-------------+------+------------+
> | Secret href                                                                    | Name     | Created                   | Status | Content types               | Algorithm | Bit length | Secret type | Mode | Expiration |
> +--------------------------------------------------------------------------------+----------+---------------------------+--------+-----------------------------+-----------+------------+-------------+------+------------+
> | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 | mysecret | 2018-06-18T14:47:45+00:00 | ACTIVE | {u'default': u'text/plain'} | aes       |        256 | opaque      | cbc  | None       |
> +--------------------------------------------------------------------------------+----------+---------------------------+--------+-----------------------------+-----------+------------+-------------+------+------------+
> stack@barbican:~/devstack$ openstack secret get http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1
> +---------------+--------------------------------------------------------------------------------+
> | Field         | Value                                                                          |
> +---------------+--------------------------------------------------------------------------------+
> | Secret href   | http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 |
> | Name          | mysecret                                                                       |
> | Created       | 2018-06-18T14:47:45+00:00                                                      |
> | Status        | ACTIVE                                                                         |
> | Content types | {u'default': u'text/plain'}                                                    |
> | Algorithm     | aes                                                                            |
> | Bit length    | 256                                                                            |
> | Secret type   | opaque                                                                         |
> | Mode          | cbc                                                                            |
> | Expiration    | None                                                                           |
> +---------------+--------------------------------------------------------------------------------+
> stack@barbican:~/devstack$ openstack secret get http://10.10.10.17/key-manager/v1/secrets/87eb0f18-e417-45a8-ae49-187f8d8c98d1 --payload
> +---------+---------+
> | Field   | Value   |
> +---------+---------+
> | Payload | j4=]d21 |
> +---------+---------+
> 

Re: [openstack-dev] [Edge-computing] [edge][glance][mixmatch]: Wiki of the possible architectures for image synchronisation

2018-07-02 Thread Csatari, Gergely (Nokia - HU/Budapest)
Hi,

Going inline.

From: Waines, Greg [mailto:greg.wai...@windriver.com]
Sent: Friday, June 29, 2018 4:25 AM

In-lined comments / questions below,
Greg.

From: "Csatari, Gergely (Nokia - HU/Budapest)" 
mailto:gergely.csat...@nokia.com>>
Date: Thursday, June 28, 2018 at 3:35 AM


Hi,

I’ve added the following pros and cons to the different options:



  *   One Glance with multiple backends [1]
[Greg]
I’m not sure I understand this option.

Is each Glance Backend completely independent ?   e.g. when I do a “glance 
image-create ...” am I specifying a backend and that’s where the image is to be 
stored ?
This is what I was originally thinking.
So I was thinking that synchronization of images to Edge Clouds is simply done 
by doing “glance image-create ...” to the appropriate backends.

But then you say “The syncronisation of the image data is the responsibility of 
the backend (eg.: CEPH).” ... which makes it sound like my thinking above is 
wrong and the Backends are NOT completely independent, but instead in some sort 
of replication configuration ... is this leveraging ceph replication factor or 
something (for example) ?
[G0]: According to my understanding the backends are in a replication 
configuration in this case. Jokke, am I right?

 *   Pros:
*   Relatively easy to implement based on the current Glance 
architecture
 *   Cons:
*   Requires the same Glance backend in every edge cloud instance
*   Requires the same OpenStack version in every edge cloud instance 
(apart from during upgrade)
*   Sensitivity to network connection loss is not clear
[Greg] I could be wrong, but even though the OpenStack services in the edge
clouds are using the images in their glance backend with a direct URL,
I think the OpenStack services (e.g. nova) still need to get the direct URL via
the Glance API, which is ONLY available at the central site.
So I don't think this option supports autonomy of an edge subcloud when
connectivity to the central site is lost.
[G0]: Can't the URL point to the local Glance backend somehow?

  *   Several Glances with an independent synchronisation service, sync via Glance API [2]
 *   Pros:
*   Every edge cloud instance can have a different Glance backend
*   Can support multiple OpenStack versions in the different edge cloud 
instances
*   Can be extended to support multiple VIM types
 *   Cons:
*   Needs a new synchronisation service
[Greg] Don’t believe this is a big con ... suspect we are going to need this 
new synchronization service for synchronizing resources of a number of other 
openstack services ... not just glance.
[G0]: I agree, it is not a big con, but it is a con. Should I add a note
saying that a sync service is most probably needed anyway?
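
A sketch of the core decision such a synchronisation service would make, to
make the option concrete (dicts stand in for paging through the Glance v2
image API on each side; the helper name is made up for illustration):

```python
def images_to_sync(central_images, edge_images):
    """Return {image_id: checksum} for images missing or stale at the edge.

    central_images / edge_images map image_id -> checksum, standing in for
    what GET /v2/images would return from each Glance.
    """
    return {
        image_id: checksum
        for image_id, checksum in central_images.items()
        if edge_images.get(image_id) != checksum  # missing or stale copy
    }

central = {"img-1": "aaa", "img-2": "bbb"}
edge = {"img-1": "aaa", "img-2": "old"}
print(sorted(images_to_sync(central, edge)))  # ['img-2']
```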

  *   Several Glances with an independent synchronisation service, sync using the backend [3]
[Greg] This option seems a little odd to me.
We are synching the GLANCE DB via some new synchronization service, but 
synching the Images themselves via the backend ... I think that would be tricky 
to ensure consistency.
[G0]: Yes, there is a place for errors here.

 *   Pros:
*   I could not find any
 *   Cons:
*   Needs a new synchronisation service


  *   One Glance and multiple Glance API servers [4]
 *   Pros:
*   Implicitly location aware
 *   Cons:
*   First usage of an image always takes a long time
*   In case of a network connection error to the central Glance, Nova will
have access to the images, but will not be able to figure out if the user has
rights to use the image and will not have a path to the image data
[Greg] Yeah we tripped over the issue that although the Glance API can cache
the image itself, it does NOT cache the image metadata (which I am guessing
has info like "user access" etc.) ... so this option improves latency of access
to the image itself but does NOT provide autonomy.

We plan on looking at options to resolve this, as we like the “implicit 
location awareness” of this option ... and believe it is an option that some 
customers will like.
If anyone has any ideas?

Are these correct? Did I miss anything?

Thanks,
Gerg0

[1]: 
https://wiki.openstack.org/wiki/Image_handling_in_edge_environment#One_Glance_with_multiple_backends
[2]: 

[openstack-dev] [taas] LP project changes

2018-07-02 Thread Takashi Yamamoto
hi,

I created an LP team "tap-as-a-service-drivers",
whose initial members are the same as the existing tap-as-a-service-core
group on gerrit.
I made the team the Maintainer and Driver of the tap-as-a-service project.
This way, someone in the team can take it over even if I disappear
suddenly. :-)



[openstack-dev] [mistral][ptl] PTL On Vacation 3rd - 6th July

2018-07-02 Thread Dougal Matthews
Hey all,

I'll be out for the rest of the week after today. I don't anticipate
anything coming up but Renat Akhmerov is standing in as PTL while I'm out.

See you all on Monday next week.

Cheers,
Dougal


[openstack-dev] [mistral] Mistral Monthly July 2018

2018-07-02 Thread Dougal Matthews
Hey Mistralites!

Here is your monthly recap of what's what in the Mistral community. Arriving
to you a day late as the 1st was a Sunday. When that happens I'll just aim
to send it as close to the 1st as I can. Either slightly early or slightly
late.


# General News

Vitalii Solodilov joined the Mistral core team. He has been contributing
regularly with high quality patches and reviews for a while now. Welcome
aboard!


# Releases

No releases this month. Rocky-3 is at the end of July, so we will see more
release activity this month.


# Notable Changes and Additions

- The action-execution-reporting blueprint was completed. This work sees a
heartbeat used to check that action executions are still running. If they
have stopped they will be closed. Previously they would be stuck in the
RUNNING state.
- A number of configuration options were added to change settings in the
YAQL engine.
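
The heartbeat check behind that blueprint can be pictured roughly like this
(names and the timeout value are illustrative, not Mistral's actual
implementation):

```python
import time

HEARTBEAT_TIMEOUT = 60  # seconds; illustrative value

def find_dead_executions(executions, now=None):
    """executions: {execution_id: last_heartbeat_timestamp (epoch seconds)}.

    Anything whose last heartbeat is older than the timeout is considered
    dead and should be moved out of the RUNNING state.
    """
    now = time.time() if now is None else now
    return [ex_id for ex_id, last_beat in executions.items()
            if now - last_beat > HEARTBEAT_TIMEOUT]

running = {"ex-1": 1000.0, "ex-2": 1055.0}
print(find_dead_executions(running, now=1100.0))  # ['ex-1']
```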


# Milestones, Reviews, Bugs and Blueprints

- 26 commits and 222 reviews
- 105 Open bugs (no change from last month).
- Rocky-3 numbers:
Blueprints: 1 Unknown, 4 Not started, 3 Started, 1 Slow progress, 2
Implemented
Bugs: 2 Incomplete, 2 Invalid, 16 Confirmed, 7 Triaged, 13 In Progress,
3 Fix Released



That's all I have for this month! We have lots to do for Rocky-3, so back
to work! :-)

Dougal


[openstack-dev] [nova] Continuously growing request_specs table

2018-07-02 Thread Zhenyu Zheng
Hi,

It seems that request_specs records do not get removed even
when the related instance is gone, which leads to a continuously growing
request_specs table. Why is that?

Is it because the delete process could fail, so we would have to be able to
recover the request_spec if we had deleted it?

How about adding a nova-manage CLI command for operators to clean up
outdated request_specs records from the table by comparing the request
specs against the existence of the related instances?
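
A rough sketch of what such a cleanup command would compute (database access
is abstracted away; the helper name is hypothetical, not Nova code):

```python
def find_orphaned_request_specs(spec_instance_uuids, live_instance_uuids):
    """Return instance UUIDs referenced by request_specs but no longer alive."""
    live = set(live_instance_uuids)
    return [uuid for uuid in spec_instance_uuids if uuid not in live]

specs = ["a1", "b2", "c3", "d4"]  # instance_uuid column of request_specs
instances = ["b2", "d4"]          # uuids of instances that still exist
print(find_orphaned_request_specs(specs, instances))  # ['a1', 'c3']
```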

BR,

Kevin Zheng


Re: [openstack-dev] [nova] about filter the flavor

2018-07-02 Thread 李杰
Oh, sorry, that's not what I meant. In my opinion, we could filter flavors in
the flavor list, e.g. with the CLI: openstack flavor list --property key:value.
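
A client-side sketch of what that filtering could do (the flavor dicts are
simplified stand-ins for API responses; the helper is invented for
illustration):

```python
def filter_flavors_by_property(flavors, key, value):
    """Keep flavors whose extra_specs contain the given key/value pair."""
    return [f for f in flavors
            if f.get("extra_specs", {}).get(key) == value]

flavors = [
    {"name": "m1.small", "extra_specs": {"hw:cpu_policy": "shared"}},
    {"name": "m1.pinned", "extra_specs": {"hw:cpu_policy": "dedicated"}},
]
print([f["name"]
       for f in filter_flavors_by_property(flavors, "hw:cpu_policy", "dedicated")])
# ['m1.pinned']
```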
 
 
-- Original --
From: "Sahid Orentino Ferdjaoui"; 
Date: Monday, July 2, 2018, 3:20 PM
To: "OpenStack Developmen"; 
Subject: Re: [openstack-dev] [nova] about filter the flavor

 
On Mon, Jul 02, 2018 at 11:08:51AM +0800, Rambo wrote:
> Hi all,
> 
> I have an idea. Currently we can't filter flavors by a specific
> property. Can we achieve this? If so, we could filter flavors
> according to a property's key and value. What do you think of the
> idea? Can you tell me more about this? Thank you very much.

Is that not the aim of AggregateTypeAffinityFilter and/or
AggregateInstanceExtraSpecsFilter? Based on flavor or flavor properties
the instances can only be scheduled on a specific set of hosts.

https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/type_filter.py
https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/aggregate_instance_extra_specs.py

Thanks,
s.

> 
> Best Regards
> Rambo

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [nova] about filter the flavor

2018-07-02 Thread Sahid Orentino Ferdjaoui
On Mon, Jul 02, 2018 at 11:08:51AM +0800, Rambo wrote:
> Hi all,
> 
> I have an idea. Currently we can't filter flavors by a specific
> property. Can we achieve this? If so, we could filter flavors
> according to a property's key and value. What do you think of the
> idea? Can you tell me more about this? Thank you very much.

Is that not the aim of AggregateTypeAffinityFilter and/or
AggregateInstanceExtraSpecsFilter? Based on flavor or flavor properties
the instances can only be scheduled on a specific set of hosts.

https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/type_filter.py
https://git.openstack.org/cgit/openstack/nova/tree/nova/scheduler/filters/aggregate_instance_extra_specs.py
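
Roughly, AggregateInstanceExtraSpecsFilter does something like this (heavily
simplified — the real filter linked above also handles unscoped keys and
operator syntax such as `<in>`):

```python
SCOPE = "aggregate_instance_extra_specs:"

def host_passes(flavor_extra_specs, host_aggregate_metadata):
    """Host passes only if every scoped extra spec matches aggregate metadata."""
    for key, value in flavor_extra_specs.items():
        if key.startswith(SCOPE):
            plain_key = key[len(SCOPE):]
            if host_aggregate_metadata.get(plain_key) != value:
                return False
    return True

specs = {"aggregate_instance_extra_specs:ssd": "true"}
print(host_passes(specs, {"ssd": "true"}))   # True
print(host_passes(specs, {"ssd": "false"}))  # False
```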

Thanks,
s.

> 
> Best Regards
> Rambo

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

