[openstack-dev] [tacker] tacker rocky vPTG summary

2018-03-16 Thread 龚永生


hi,
the Tacker team has held a vPTG via Zoom; notes are recorded at
https://etherpad.openstack.org/p/Tacker-PTG-Rocky


in summary:
P1 tasks:
1. web studio for VNF resources
2. SFC across k8s and OpenStack VIMs
3. make the tacker server with monitoring features scalable
other tasks:
1. policy for placement
2. VNFs from open-source and vendor providers
3. cluster features


Thanks to all participants.


regards,


gongysh
tacker ptl
99cloud
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [refstack] Full list of API Tests versus 'OpenStack Powered' Tests

2018-03-16 Thread Rochelle Grober
Submission is no longer anonymous, but the results are still not public. The 
submitter decides whether the guideline results are public, but if they do, 
only the guideline tests are made public. If the submitter does not actively 
select public availability for the test results, all results default to 
private.

--Rocky

> -----Original Message-----
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Thursday, March 15, 2018 7:41 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [refstack] Full list of API Tests versus
> 'OpenStack Powered' Tests
> 
> On 2018-03-15 14:16:30 + (+), arkady.kanev...@dell.com wrote:
> [...]
> > This can be submitted anonymously if you like.
> 
> Anonymous submissions got disabled (and the existing set of data from them
> deleted). See the announcement from a month ago for
> details:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2018-
> February/127103.html
> 
> --
> Jeremy Stanley



Re: [openstack-dev] [tripleo] storyboard evaluation

2018-03-16 Thread Jason E. Rist
On 03/02/2018 02:24 AM, Emilien Macchi wrote:
> A quick update:
> 
> - Discussed with Jiri Tomasek from TripleO UI squad and he agreed that his
> squad would start to use Storyboard, and experiment it.
> - I told him I would take care of making sure all UI bugs created in
> Launchpad would be moved to Storyboard.
> - Talked with Kendall and we agreed that we would move forward and migrate
> TripleO UI bugs to Storyboard.
> - TripleO UI Squad would report feedback about storyboard to the storyboard
> team with the help of other TripleO folks (me at least, I'm willing to
> help).
> 
> Hopefully this is progress and we can move forward. More updates to come
> about migration during the next days...
> 
> Thanks everyone involved in these productive discussions.
> 
> On Wed, Jan 17, 2018 at 12:33 PM, Thierry Carrez 
> wrote:
> 
>> Clint Byrum wrote:
>>> [...]
>>> That particular example board was built from tasks semi-automatically,
>>> using a tag, by this script running on a cron job somewhere:
>>>
>>> https://git.openstack.org/cgit/openstack-infra/zuul/
>> tree/tools/update-storyboard.py?h=feature/zuulv3
>>>
>>> We did this so that we could have a rule "any task that is open with
>>> the zuulv3 tag must be on this board". Jim very astutely noticed that
>>> I was not very good at being a robot that did this and thus created the
>>> script to ease me into retirement from zuul project management.
>>>
>>> The script adds new things in New, and moves tasks automatically to
>>> In Progress, and then removes them when they are completed. We would
>>> periodically groom the "New" items into an appropriate lane with the
>> hopes
>>> of building what you might call a rolling-sprint in Todo, and calling
>>> out blocked tasks in a regular meeting. Stories were added manually as
>>> a way to say "look in here and add tasks", and manually removed when
>>> the larger effort of the story was considered done.
>>>
>>> I rather like the semi-automatic nature of it, and would definitely
>>> suggest that something like this be included in Storyboard if other
>>> groups find the board building script useful. This made a cross-project
>>> effort between Nodepool and Zuul go more smoothly as we had some more
>>> casual contributors to both, and some more full-time.
>>
>> That's a great example that illustrates StoryBoard design: rather than
>> do too much upfront feature design, focus on primitives and expose them
>> fully through a strong API, then let real-world usage dictate patterns
>> that might result in future features.
>>
>> The downside of this approach is of course getting enough usage on a
>> product that appears a bit "raw" in terms of features. But I think we
>> are closing on getting that critical mass :)
>>
>> --
>> Thierry Carrez (ttx)
>>
> 
> 
I just tried this but I think I might be doing something wrong...

http://storyboard.macchi.pro:9000/

This URL mentioned in the previous storyboard evaluation email does not
seem to work.

http://lists.openstack.org/pipermail/openstack-dev/2018-January/126258.html

Are you still evaluating this? Is the UI squad still expected to
contribute?  Do we have a better place to go for storyboard usage?  I
just ran into a bug and thought to myself "hey, I'll go drop this at the
storyboard spot, since that's what had been the plan" but, alas, I could
not continue.

Can you enlighten me to the status?

-J

-- 
Jason E. Rist
Senior Software Engineer
OpenStack User Interfaces
Red Hat, Inc.
Freenode: jrist
github/twitter: knowncitizen
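The semi-automatic board script Clint describes upthread reduces to a small sync rule: every open task carrying the tracked tag must appear on the board, moving from New to In Progress and dropping off when completed. A rough sketch of that rule follows — the data model and status names are hypothetical illustrations, not the real update-storyboard.py or the StoryBoard API:

```python
# Sketch of the tag-to-board sync rule described upthread. Task dicts
# and status labels ("todo"/"review"/"merged") are hypothetical stand-ins
# for whatever the tracker actually returns.

def sync_board(tasks, board, tag="zuulv3"):
    """tasks: list of dicts with 'id', 'tags', 'status';
       board: dict mapping lane name -> list of task ids."""
    tagged = {t["id"]: t for t in tasks if tag in t["tags"]}
    on_board = {tid for lane in board.values() for tid in lane}

    # Any open tagged task not yet on the board lands in the New lane.
    for tid, task in tagged.items():
        if tid not in on_board and task["status"] != "merged":
            board["New"].append(tid)

    # Started tasks move to In Progress; completed or untagged ones drop off.
    for lane_name, lane in board.items():
        for tid in list(lane):
            task = tagged.get(tid)
            if task is None or task["status"] == "merged":
                lane.remove(tid)           # done (or tag removed): retire it
            elif task["status"] == "review" and lane_name == "New":
                lane.remove(tid)
                board["In Progress"].append(tid)
    return board
```

The appeal of the approach is that humans only groom the New lane; everything else is mechanical, which is exactly what the cron job automated.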



Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Mohammed Naser
On Fri, Mar 16, 2018 at 5:34 PM, Jeremy Stanley  wrote:
> On 2018-03-16 21:22:51 + (+), Jim Rollenhagen wrote:
> [...]
>> It seems mod_wsgi doesn't want python applications catching SIGHUP,
>> as Apache expects to be able to catch that. By default, it even ensures
>> signal handlers do not get registered.[0]
> [...]
>> Given we just had a goal to make all API services runnable as a WSGI
>> application, it seems wrong to enable mutable config for API services.
>> It's a super useful thing though, so I'd love to figure out a way we can do
>> it.
> [...]
>
> Given these are API services, can the APIs grow a (hopefully
> standardized) method to trigger this in lieu of signal handling? Or
> if the authentication requirements are too much, Zuul and friends
> have grown RPC sockets which can be used to inject these sorts of
> low-level commands over localhost to their service daemons (or could
> probably also do similar things over UNIX sockets if you don't want
> listeners on the loopback interface).

Throwing an idea out there, but maybe listening to file modification
events using something like inotify could be a possibility?
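A rough sketch of that file-watching idea — note real inotify needs a third-party binding such as pyinotify or watchdog, so this portable stand-in just polls the file's mtime, and `reload_cb` is an assumed hook for something like oslo.config's mutate_config_files():

```python
# Portable stand-in for an inotify watcher: poll the config file's
# modification time and invoke a reload callback when it changes.
# reload_cb is a placeholder for the actual config-mutation hook.
import os
import time

def watch_config(path, reload_cb, interval=1.0, max_polls=None):
    """Poll `path`; call reload_cb() whenever its mtime changes."""
    last = os.stat(path).st_mtime_ns
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(interval)
        polls += 1
        mtime = os.stat(path).st_mtime_ns
        if mtime != last:        # file was rewritten: trigger the reload
            last = mtime
            reload_cb()
```

With a real inotify binding the loop becomes event-driven instead of polled, but the shape — filesystem event in, config mutation out, no signals involved — is the same, which is what makes it attractive under mod_wsgi.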

> --
> Jeremy Stanley
>



Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Peter Penchev
On Fri, Mar 16, 2018 at 09:39:14AM -0700, Dan Smith wrote:
> > Can you be more specific about what is limiting you when you use
> > volume-backed instances?
> 
> Presumably it's because you're taking a trip over iscsi instead of using
> the native attachment mechanism for the technology that you're using? If
> so, that's a valid argument, but it's hard to see the tradeoff working
> in favor of adding all these drivers to nova as well.
> 
> If cinder doesn't support backend-specific connectors, maybe that's
> something we could work on? People keep saying that "cinder is where I
> put my storage, that's how I want to back my instances" when it comes to
> justifying BFV, and that argument is starting to resonate with me more
> and more.

Um, that's what we have os-brick for, isn't it?  And yes, we also have
an os-brick connector for the "STORPOOL" connection type that is also
part of the Queens release.

Best regards,
Peter



Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Jeremy Stanley
On 2018-03-16 21:22:51 + (+), Jim Rollenhagen wrote:
[...]
> It seems mod_wsgi doesn't want python applications catching SIGHUP,
> as Apache expects to be able to catch that. By default, it even ensures
> signal handlers do not get registered.[0]
[...]
> Given we just had a goal to make all API services runnable as a WSGI
> application, it seems wrong to enable mutable config for API services.
> It's a super useful thing though, so I'd love to figure out a way we can do
> it.
[...]

Given these are API services, can the APIs grow a (hopefully
standardized) method to trigger this in lieu of signal handling? Or
if the authentication requirements are too much, Zuul and friends
have grown RPC sockets which can be used to inject these sorts of
low-level commands over localhost to their service daemons (or could
probably also do similar things over UNIX sockets if you don't want
listeners on the loopback interface).
-- 
Jeremy Stanley




Re: [openstack-dev] [Release-job-failures][neutron][arista] Release of openstack/networking-arista failed

2018-03-16 Thread Doug Hellmann
This Arista release is failing because the packaging job can't run "tox
-e venv": neutron is listed both in the requirements.txt of the Arista
code and in the constraints file.

Excerpts from zuul's message of 2018-03-16 19:50:48 +:
> Build failed.
> 
> - release-openstack-python 
> http://logs.openstack.org/25/25ac528d6771d3440fac428294194e08939fb5aa/release/release-openstack-python/e550904/
>  : FAILURE in 3m 30s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
> 



Re: [openstack-dev] Zuul project evolution

2018-03-16 Thread Joshua Harlow

Awesome!

IMHO it might be useful to also start doing this with other projects.

James E. Blair wrote:

Hi,

To date, Zuul has (perhaps rightly) often been seen as an
OpenStack-specific tool.  That's only natural since we created it
explicitly to solve problems we were having in scaling the testing of
OpenStack.  Nevertheless, it is useful far beyond OpenStack, and even
before v3, it has found adopters elsewhere.  Though as we talk to more
people about adopting it, it is becoming clear that the less experience
they have with OpenStack, the more likely they are to perceive that Zuul
isn't made for them.

At the same time, the OpenStack Foundation has identified a number of
strategic focus areas related to open infrastructure in which to invest.
CI/CD is one of these.  The OpenStack project infrastructure team, the
Zuul team, and the Foundation staff recently discussed these issues and
we feel that establishing Zuul as its own top-level project with the
support of the Foundation would benefit everyone.

It's too early in the process for me to say what all the implications
are, but here are some things I feel confident about:

* The folks supporting the Zuul running for OpenStack will continue to
   do so.  We love OpenStack and it's just way too fun running the
   world's most amazing public CI system to do anything else.

* Zuul will be independently promoted as a CI/CD tool.  We are
   establishing our own website and mailing lists to facilitate
   interacting with folks who aren't otherwise interested in OpenStack.
   You can expect to hear more about this over the coming months.

* We will remain just as open as we have been -- the "four opens" are
   intrinsic to what we do.

As a first step in this process, I have proposed a change[1] to remove
Zuul from the list of official OpenStack projects.  If you have any
questions, please don't hesitate to discuss them here, or privately
contact me or the Foundation staff.

-Jim

[1] https://review.openstack.org/552637





Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Jim Rollenhagen
knikolla brought up an interesting wedge in this goal in #openstack-keystone
today.

It seems mod_wsgi doesn't want python applications catching SIGHUP,
as Apache expects to be able to catch that. By default, it even ensures
signal handlers do not get registered.[0]

I can't quickly find uwsgi's recommendations on this, but I'd assume
it would be similar, as uwsgi uses SIGHUP as a signal to gracefully
reload all workers and the master process.

Given we just had a goal to make all API services runnable as a WSGI
application, it seems wrong to enable mutable config for API services.
It's a super useful thing though, so I'd love to figure out a way we can do
it.

Thoughts?

[0]
http://modwsgi.readthedocs.io/en/develop/configuration-directives/WSGIRestrictSignal.html
[1]
http://uwsgi-docs.readthedocs.io/en/latest/Management.html#signals-for-controlling-uwsgi
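For reference, the signal-based pattern that mod_wsgi restricts looks roughly like this — `mutate_config` is a stand-in for oslo.config's mutate_config_files(), and under WSGIRestrictSignal the signal.signal() registration would simply be ignored:

```python
# Sketch of the SIGHUP-driven mutable-config pattern. The service
# registers a handler that re-reads configuration in place; mod_wsgi's
# WSGIRestrictSignal directive prevents this registration from taking
# effect. mutate_config is a placeholder for the real reload hook.
import os
import signal

def make_mutable(mutate_config):
    def _on_hup(signum, frame):
        mutate_config()          # re-read config files without a restart
    signal.signal(signal.SIGHUP, _on_hup)

# Demonstration: flip a fake "debug" flag when the process gets SIGHUP.
config = {"debug": False}
make_mutable(lambda: config.update(debug=True))
os.kill(os.getpid(), signal.SIGHUP)   # simulates an operator's `kill -HUP`
```

Outside a WSGI container this works fine, which is why the goal is easy for standalone services and awkward only for the API services that just moved under Apache/uwsgi.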


// jim

On Wed, Feb 28, 2018 at 5:27 AM, ChangBo Guo  wrote:

> Hi ALL,
>
> TC approved the goal [0] a week ago, so it's time to finish the work.
> We also had a short discussion in the oslo meeting at the PTG; find more
> details in [1].
> We use storyboard to track the goal at
> https://storyboard.openstack.org/#!/story/2001545. It would be
> appreciated if PTLs set the owner in time.
> Feel free to reach me (gcb) on IRC if you have any questions.
>
>
> [0] https://review.openstack.org/#/c/534605/
> [1] https://etherpad.openstack.org/p/oslo-ptg-rocky  From line 175
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>


Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Duncan Thomas
On 16 March 2018 at 16:39, Dan Smith  wrote:
>> Can you be more specific about what is limiting you when you use
>> volume-backed instances?
>
> Presumably it's because you're taking a trip over iscsi instead of using
> the native attachment mechanism for the technology that you're using? If
> so, that's a valid argument, but it's hard to see the tradeoff working
> in favor of adding all these drivers to nova as well.
>
> If cinder doesn't support backend-specific connectors, maybe that's
> something we could work on?

Cinder supports a range of connectors, and there has never been any
opposition in principle to supporting more.

I suggest looking at the RBD (Ceph) support in cinder as an example of a
strongly supported native attachment method.



-- 
Duncan Thomas



Re: [openstack-dev] [charms] [tripleo] [puppet] [fuel] [kolla] [openstack-ansible] [cloudcafe] [magnum] [mogan] [sahara] [shovel] [watcher] [helm] [rally] Heads up: ironic classic drivers deprecation

2018-03-16 Thread Jean-Philippe Evrard
Hello,

Thanks for the notice!

JP

On 16 March 2018 at 12:09, Dmitry Tantsur  wrote:
> Hi all,
>
> If you see your project name in the subject that is because a global search
> revived usage of "pxe_ipmitool", "agent_ipmitool" or "pxe_ssh" drivers in
> the non-unit-test context in one or more of your repositories.
>
> The classic drivers, such as pxe_ipmitool, were deprecated in Queens, and
> we're on track with removing them in Rocky. Please read [1] about
> differences between classic drivers and newer hardware types. Please refer
> to [2] on how to update your code.
>
> Finally, the pxe_ssh driver was removed some time ago. Please use the
> standard IPMI driver with the virtualbmc project [3] instead.
>
> Please reach out to the ironic team (here or on #openstack-ironic) if you
> have any questions or need help with the transition.
>
> Dmitry
>
> [1] https://docs.openstack.org/ironic/latest/install/enabling-drivers.html
> [2]
> https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
> [3] https://github.com/openstack/virtualbmc
>



[openstack-dev] [Qos]Unable to apply qos policy with dscp marking rule to a port

2018-03-16 Thread A Vamsikrishna
Hi Manjeet / Isaku,

I am unable to apply a QoS policy with a DSCP marking rule to a port.


1. Create a QoS policy
2. Create a DSCP marking rule on the created QoS policy
3. Apply the above created policy to a port
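For context, the three steps above map roughly onto these CLI calls — the policy name and port ID are placeholders, and this assumes a cloud with the QoS extension enabled and a backend driver that supports DSCP marking:

```shell
# 1. Create a QoS policy (name is a placeholder)
openstack network qos policy create dscp-policy

# 2. Add a DSCP marking rule to that policy
openstack network qos rule create --type dscp-marking --dscp-mark 22 dscp-policy

# 3. Attach the policy to the port
openstack port set --qos-policy dscp-policy <port-uuid>
```

The HTTP 409 below is the server rejecting step 2/3 because the port's bound mechanism driver does not advertise dscp_marking as a supported rule type.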

openstack network qos rule set --dscp-mark 22 dscp-marking 115e4f70-8034-41768fe9-2c47f8878a7d

HttpException: Conflict (HTTP 409) (Request-ID: 
req-da7d8998-9d8c-4aea-a10b-326cc21b608e), Rule dscp_marking is not supported 
by port 115e4f70-8034-41768fe9-2c47f8878a7d

stack@pike-ctrl:~/devstack$

Seeing above error during the qos policy application on a port.

Any suggestions on this ?

I see the review below, "Allow networking-odl to support DSCP Marking rule
for qos driver", has been abandoned:

https://review.openstack.org/#/c/460470/

Is DSCP marking supported in Pike? Can you please confirm?

I have raised the bug below to track this issue:

https://bugs.launchpad.net/networking-odl/+bug/1756132



Thanks,
Vamsi


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Jean-Philippe Evrard
Thanks!

On 16 March 2018 at 16:56, Doug Hellmann  wrote:
> Excerpts from Jeremy Stanley's message of 2018-03-16 13:43:00 +:
>> On 2018-03-16 08:34:28 -0500 (-0500), Sean McGinnis wrote:
>> > On Mar 16, 2018, at 04:02, Jean-Philippe Evrard  
>> > wrote:
>> >
>> > > For OpenStack-Ansible, we don't need to do anything for that
>> > > community goal.  I am not sure how we can remove our name from
>> > > the storyboard, so I just inform you here.
>> >
>> > I believe you can just mark the task as done if there is no
>> > additional work required.
>>
>> Yeah, either "merged" or "invalid" states should work. I'd lean
>> toward suggesting "invalid" in this case since the task did not
>> require any changes merged to your source code.
>
> Yes, we've been using "invalid" to indicate that no work was needed.
>
> Doug
>



[openstack-dev] [kolla] Dropping off kolla-kubernetes core reviewer team

2018-03-16 Thread Steven Dake (stdake)
Hey folks,

As many core reviewers in Kolla core teams may already know, I am focused on 
OpenStack board of director work and adjacent community work.  This involves 
bridging the OpenStack ecosystem and its various strategic focus areas with 
adjacent community projects that make sense.  My work in this area has led to 
my technical involvement in a Layer 7 networking project (https://istio.io) – 
specifically around the multicloud use case and connecting OpenStack 
public/private clouds with other cloud providers.  As a result, I don’t have 
time to commit to properly furthering the development of kolla-kubernetes nor 
providing reviews for this specific Kolla project deliverable.

I do plan to stay involved in Kolla as a reviewer in the other Kolla core teams 
and I am deeply committed to furthering OpenStack’s strategic focus areas in my 
board of director’s service.

If you are curious about these SFAs, you might consider reading:
https://blogs.cisco.com/cloud/openstack-solving-for-integration-in-open-source-adjacent-communities

Regards,
-steve



Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Peter Penchev
On Fri, Mar 16, 2018 at 09:23:11AM -0700, melanie witt wrote:
> On Fri, 16 Mar 2018 17:33:30 +0200, Peter Penchev wrote:
> > Would there be any major opposition to adding a StorPool shared
> > storage image backend, so that our customers are not limited to
> > volume-backed instances?  Right now, creating a StorPool volume and
> > snapshot from a Glance image and then booting instances from that
> > snapshot works great, but in some cases, including some provisioning
> > and accounting systems on top of OpenStack, it would be preferable to
> > go the Nova way and let the hypervisor think that it has a local(ish)
> > image to work with, even though it's on shared storage anyway.
> 
> Can you be more specific about what is limiting you when you use
> volume-backed instances?

It's not a problem for our current customers, but we had an OpenStack
PoC last year for a customer who was using some proprietary
provisioning+accounting system on top of OpenStack (sorry, I really
can't remember the name).  That particular system simply couldn't be
bothered to create a volume-backed instance, so we "helped" by doing
an insane hack: writing an almost-pass-through Compute API that would
intercept the boot request and DTRT behind the scenes (send a modified
request to the real Compute API), and then also writing
an almost-pass-through Identity API that would intercept the requests to
get the Compute API's endpoint and slip our API's address there.
The customer ended up not using OpenStack for completely unrelated
reasons, but there was certainly at least one instance of this.

> We've been kicking around the idea of beefing up
> support of boot-from-volume in nova such that "automatic boot-from-volume
> for instance create" works well enough that we could consider
> boot-from-volume the first-class way to support the vast variety of cinder
> storage backends and let cinder handle the details instead of trying to
> re-implement support of various storage backends in nova on a selective
> basis. I'd like to better understand what is lacking for you when you use
> boot-from-volume to leverage StorPool and determine whether it's something
> we could address in nova.

I'll see if I can remember anything more (ISTR also another case of
something that couldn't boot a volume-backed instance, but I really
cannot remember even what it was).  The problem was certainly not with
OpenStack proper, but with other systems built on top of it.

Best regards,
Peter



[openstack-dev] [keystone] Keystone Team Update - Week of 12 March 2018

2018-03-16 Thread Colleen Murphy
# Keystone Team Update - Week of 12 March 2018

## News

### Keystone Admin-ness: the Future

At the Denver PTG, while grappling with the concept of admin-ness, we had a 
moment of clarity when we realized that there were some classes of admin 
actions that could be described as "global" across keystone projects, like 
listing all servers in all projects, and other admin actions that were better 
classified as "system" actions that operated on no project at all, like 
creating endpoints. From this came the new system scope[1] for operating on 
system-level APIs. But we have yet to properly deal with the 
global-across-projects case. There are conflicting views within the keystone 
team on how best to support this going forward[2], and whether we should enable 
system-scoped tokens to work on project-level operations or if we can lean on 
Hierarchical Multitenancy to enable this. Somewhat intermixed in this issue is 
how, or whether, to deal with cleaning up resources in other services that are 
tied to keystone projects when the service has no insight into keystone 
internals. If you have thoughts on these issues, please discuss on Adam's 
thread[3].

[1] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-13.log.html#t2018-03-13T22:42:44
[3] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128302.html

### Edge Computing

We've previously gotten requests to support syncing data across different 
keystone deployments at the application level rather than at the data storage 
level[4]. As Edge Computing gains stronger footing in our community[5], we need 
to start thinking about use cases like this and how to support them. We 
discussed this a bit[6] but we are a ways off from having a concrete plan. If 
you have thoughts on this, please reach out to us!

[4] https://review.openstack.org/#/c/323499/
[5] http://markvoelker.github.io/blog/dublin-ptg-edge-sessions/
[6] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-13.log.html#t2018-03-13T13:50:03

### JWT

We have a spec proposed[7] to implement JSON Web Tokens as a new token format 
similar to fernet. We discussed some of the particulars[8] with regard to 
whether the token needs to be encrypted and token size considerations. 
Implementing this might make a good Outreachy project since it is interesting 
and reasonably self-contained, but we will want to nail down these details 
before dumping it on an intern.

[7] https://review.openstack.org/#/c/541903/
[8] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-13.log.html#t2018-03-13T20:03:56
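For background on the encryption and size questions being discussed: a JWT is just three base64url-encoded segments, header.payload.signature. A minimal HS256 sketch using only the stdlib — purely illustrative, not the implementation the spec proposes:

```python
# Minimal JWT (HS256) sketch showing the header.payload.signature
# structure. Illustrative only: key management, claims validation, and
# expiry handling are all elided.
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def jwt_encode(payload: dict, key: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    signing_input = header + b"." + body
    sig = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def jwt_verify(token: str, key: bytes) -> bool:
    signing_input, _, sig = token.rpartition(".")
    expected = _b64url(hmac.new(key, signing_input.encode(),
                                hashlib.sha256).digest()).decode()
    return hmac.compare_digest(sig, expected)
```

Note that the payload is merely encoded, not encrypted, so anything placed in it is readable by the token bearer — one of the particulars under discussion in the review.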

### Milestone Planning Meeting

We had a conference call meeting to organize our Rocky roadmap[9] and do some 
sprint-like planning for the first milestone. If you're working on something in 
the roadmap, please feel free to make updates to the Trello board as needed.

[9] https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap

### Outreachy projects

OpenStack didn't get into GSOC this year, but we still have a chance to submit 
applications for Outreachy[10]. We have some internship ideas[11] that we 
should add to and/or finalize ASAP. We need to have mentors assigned up-front 
who should submit the project idea themselves, but even if there is only one 
name attached to a project, we found last round that co-mentoring can be pretty 
successful for both the intern and the mentors.

[10] https://www.outreachy.org/communities/cfp/openstack/
[11] https://etherpad.openstack.org/p/keystone-internship-ideas

## Open Specs

Search query: https://goo.gl/eyTktx

Since last week, a new spec has been proposed to provide proper usable 
multi-factor auth[12]. In total we have five specs proposed for Rocky that are 
awaiting feedback.

We've also had a revival of a spec currently proposed to the backlog to improve 
OpenIDC support[13].

[12] https://review.openstack.org/#/c/553670
[13] https://review.openstack.org/#/c/373983

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 13 changes this week. One of these was a significant bugfix to the 
template catalog backend[14]. We had postponed merging this with the idea that 
we might create a whole new, better, file-based catalog backend[15] but work on 
that had stalled (and is being picked up again).

[14] https://review.openstack.org/#/c/482364/
[15] https://review.openstack.org/#/c/483514/

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 36 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

We added our milestone goals to the release schedule[16]. The next deadline is 
the spec proposal freeze the week of April 16.

[16] https://review.openstack.org/#/c/553502/


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-03-16 13:43:00 +:
> On 2018-03-16 08:34:28 -0500 (-0500), Sean McGinnis wrote:
> > On Mar 16, 2018, at 04:02, Jean-Philippe Evrard  
> > wrote:
> > 
> > > For OpenStack-Ansible, we don't need to do anything for that
> > > community goal.  I am not sure how we can remove our name from
> > > the storyboard, so I just inform you here.
> > 
> > I believe you can just mark the task as done if there is no
> > additional work required.
> 
> Yeah, either "merged" or "invalid" states should work. I'd lean
> toward suggesting "invalid" in this case since the task did not
> require any changes merged to your source code.

Yes, we've been using "invalid" to indicate that no work was needed.

Doug



Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Dan Smith
> Can you be more specific about what is limiting you when you use
> volume-backed instances?

Presumably it's because you're taking a trip over iscsi instead of using
the native attachment mechanism for the technology that you're using? If
so, that's a valid argument, but it's hard to see the tradeoff working
in favor of adding all these drivers to nova as well.

If cinder doesn't support backend-specific connectors, maybe that's
something we could work on? People keep saying that "cinder is where I
put my storage, that's how I want to back my instances" when it comes to
justifying BFV, and that argument is starting to resonate with me more
and more.

--Dan



Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread young, eric
I can provide some insights from the Dell EMC ScaleIO side.

As you can see from the patch that Matt pointed to, it is possible to add 
ephemeral/image backend support to Nova. That said, it is not easy and [IMHO] 
prone to error. There is no ‘driver model’ like there is in Cinder, where you 
just implement a spec and run tests. You have to go into the Nova code itself 
and add a whole bunch of logic specific to your backend. Once complete and you 
have your CI setup to run the Nova test suite, getting reviews complete is 
tough due to all of the different priorities. You’ll also need to keep an eye 
on other things going into Nova and be sure that your CI continues to report 
success.

If I had it to do all over again, I would strongly suggest that developers 
looking to add ‘yet another backend’ combine their resources and tackle the 
cleanup of the existing libvirt code as well as the generic Cinder backend 
support that Matt mentioned. 

The patch below, which adds ephemeral/image support for ScaleIO, has not yet 
merged upstream; I am currently working with ScaleIO customers to determine how 
important it really is. I may find myself volunteering for the generic approach 
as I think it is a much better route.

Eric 





On 3/16/18, 12:00 PM, "Matt Riedemann"  wrote:

>On 3/16/2018 10:33 AM, Peter Penchev wrote:
>> Would there be any major opposition to adding a StorPool shared
>> storage image backend, so that our customers are not limited to
>> volume-backed instances?  Right now, creating a StorPool volume and
>> snapshot from a Glance image and then booting instances from that
>> snapshot works great, but in some cases, including some provisioning
>> and accounting systems on top of OpenStack, it would be preferable to
>> go the Nova way and let the hypervisor think that it has a local(ish)
>> image to work with, even though it's on shared storage anyway.  This
>> will go hand-in-hand with our planned Glance image driver, so that
>> creating a new instance from a Glance image would happen
>> instantaneously (create a StorPool volume from the StorPool snapshot
>> corresponding to the Glance image).
>> 
>
>Ask the EMC ScaleIO team how well this has gone for them:
>
>https://review.openstack.org/#/c/407440/
>
>There has been a lot of discussion about a generic Cinder image backend 
>driver in nova so that we don't need to have the same storage backend 
>driver explosion that Cinder has, and we could also then replace the 
>nova lvm/rbd image backends and just use Cinder volumes for 
>volume-backed instances.
>
>I could find lots of discussion references about this, but basically no 
>one is planning to step up to work on that, and the existing libvirt 
>imagebackend code is a mess, so piling more backends into the mix isn't 
>very attractive.
>
>Anyway, just FYI on all of that history.
>
>> If this will help the decision, we do have plans for adding a
>> full-blown Nova third-party CI in the near future, so that both our
>> volume attachment driver, this driver, and our upcoming Glance image
>> driver will see some more testing.
>
>3rd party CI would be a requirement to get it added anyway, it's not 
>really an option.
>
>-- 
>
>Thanks,
>
>Matt
>


Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Peter Penchev
On Fri, Mar 16, 2018 at 11:04:06AM -0500, Sean McGinnis wrote:
> Just updating the subject line tag to glance. ;)

Errr, sorry, but no, this is for a Nova image backend (yes, namespace
overload, I know) - the driver that lets a Nova host create "local"
images for non-volume-backed instances.

> On Fri, Mar 16, 2018 at 05:33:30PM +0200, Peter Penchev wrote:
> > Hi,
> > 
> > A couple of years ago I created a Nova spec for the StorPool image
> > backend: https://review.openstack.org/#/c/137830/  There was some
> > discussion, but then our company could not immediately allocate the
> > resources to write the driver itself, so the spec languished and was
> > eventually abandoned.
> > 
> > Now that StorPool has a fully maintained Cinder driver and also a
> > fully maintained Nova volume attachment driver, both included in the
> > Queens release, and a Cinder third-party CI that runs all the tests
> > tagged with "volume", including some simple Nova tests, we'd like to
> > resurrect this spec and implement a Nova image backend, too.
> > Actually, it looks like due to customer demand we will write the
> > driver anyway and possibly maintain it outside the tree, but it would
> > be preferable (and, obviously, easier to catch up with wide-ranging
> > changes) to have it in.
> > 
> > Would there be any major opposition to adding a StorPool shared
> > storage image backend, so that our customers are not limited to
> > volume-backed instances?  Right now, creating a StorPool volume and
> > snapshot from a Glance image and then booting instances from that
> > snapshot works great, but in some cases, including some provisioning
> > and accounting systems on top of OpenStack, it would be preferable to
> > go the Nova way and let the hypervisor think that it has a local(ish)
> > image to work with, even though it's on shared storage anyway.  This
> > will go hand-in-hand with our planned Glance image driver, so that
> > creating a new instance from a Glance image would happen
> > instantaneously (create a StorPool volume from the StorPool snapshot
> > corresponding to the Glance image).
> > 
> > If this will help the decision, we do have plans for adding a
> > full-blown Nova third-party CI in the near future, so that both our
> > volume attachment driver, this driver, and our upcoming Glance image
> > driver will see some more testing.
> > 
> > Thanks in advance, and keep up the great work!

Best regards,
Peter



Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread melanie witt

On Fri, 16 Mar 2018 17:33:30 +0200, Peter Penchev wrote:

Would there be any major opposition to adding a StorPool shared
storage image backend, so that our customers are not limited to
volume-backed instances?  Right now, creating a StorPool volume and
snapshot from a Glance image and then booting instances from that
snapshot works great, but in some cases, including some provisioning
and accounting systems on top of OpenStack, it would be preferable to
go the Nova way and let the hypervisor think that it has a local(ish)
image to work with, even though it's on shared storage anyway.


Can you be more specific about what is limiting you when you use 
volume-backed instances? We've been kicking around the idea of beefing 
up support of boot-from-volume in nova such that "automatic 
boot-from-volume for instance create" works well enough that we could 
consider boot-from-volume the first-class way to support the vast 
variety of cinder storage backends and let cinder handle the details 
instead of trying to re-implement support of various storage backends in 
nova on a selective basis. I'd like to better understand what is lacking 
for you when you use boot-from-volume to leverage StorPool and determine 
whether it's something we could address in nova.


Cheers,
-melanie



Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Chris Hoge

> On Mar 16, 2018, at 7:40 AM, Simon Leinen  wrote:
> 
> Joe Topjian writes:
>> Terraform hat! I want to slightly nit-pick this one since the words
>> "leak" and "admin-priv" can sound scary: Terraform technically wasn't
>> doing anything wrong. The problem was that Octavia was creating
>> resources but not setting ownership to the tenant. When it came time
>> to delete the resources, Octavia was correctly refusing, though it
>> incorrectly created said resources.
> 
> I dunno... if Octavia created those lower-layer resources on behalf of
> the user, then Octavia shouldn't refuse to remove those resources when
> the same user later asks it to - independent of what ownership Octavia
> chose to apply to those resources.  (It would be different it Neutron or
> Nova were asked by the user directly to remove the resources created by
> Octavia.)
> 
>> From reviewing the discussion, other parties were discovering this
>> issue and patching in parallel to your discovery. Both xgerman and
>> Vexxhost jumped in to confirm the behavior seen by Terraform. Vexxhost
>> quickly applied the patch. It was a really awesome collaboration
>> between yourself, dims, xgerman, and Vexxhost.
> 
> Speaking as another operator: Does anyone seriously expect us to deploy
> a service (Octavia) in production at a stage where it exhibits this kind
> of behavior? Having to clean up leftover resources because the users who
> created them cannot remove them is not my idea of fun.  (And note that
> like most operators, we're a few releases behind, so we might not even
> get access to backports IF this gets fixed.)

Simon and Joe, one thing that I was not clear on (again, going back to the
statement that mistakes I make are my own) is that this behavior, admin-scoped
resources being created and then not released, was seen in the Neutron LBaaSv2
service. The fix _was_ to deploy Octavia and not use the Neutron API. As such,
I'm reluctant to use Terraform (or really, any other SDK) to deploy load
balancers against the Neutron API. I don't want to be leaking a bunch of
resources I can't delete. It's not good for the apps I'm trying to run and it's
definitely not good for the cloud provider. I have much more confidence
developing against the Octavia service.

We figured this out as a group effort between Vexxhost, Joe, and the Octavia
team, and I'm exceptionally grateful to all of them for helping me to sort
those issues out.

Now, I ultimately dropped it in my own code because I can't rely on the
existence of Octavia across all clouds. It had nothing to do with either
the reliability of the GopherCloud/Terraform SDKs or Octavia itself.

So, to repeat, leaking admin-scoped resources is a Neutron LBaaSv2 bug,
not an Octavia bug.

> In our case we're not a compute-oriented cloud provider, and some of our
> customers would really like to have a good LBaaS as part of our IaaS
> offering.  But our experience with this was so-so in the past - for
> example, we had to help customers migrate from LBaaSv1 to LBaaSv2.  Our
> resources (people, tolerance to user-affecting bugs and forced upgrades
> etc.) are limited, so we've become careful.
> 
> For users who want to use Kubernetes on our OpenStack service, we rather
> point them to Kubernetes's Ingress controller, which performs the LB
> function without requiring much from the underlying cloud.  Seems like a
> fine solution.
> -- 
> Simon.
> 




Re: [openstack-dev] [glance] New image backend: StorPool

2018-03-16 Thread Sean McGinnis
Just updating the subject line tag to glance. ;)

On Fri, Mar 16, 2018 at 05:33:30PM +0200, Peter Penchev wrote:
> Hi,
> 
> A couple of years ago I created a Nova spec for the StorPool image
> backend: https://review.openstack.org/#/c/137830/  There was some
> discussion, but then our company could not immediately allocate the
> resources to write the driver itself, so the spec languished and was
> eventually abandoned.
> 
> Now that StorPool has a fully maintained Cinder driver and also a
> fully maintained Nova volume attachment driver, both included in the
> Queens release, and a Cinder third-party CI that runs all the tests
> tagged with "volume", including some simple Nova tests, we'd like to
> resurrect this spec and implement a Nova image backend, too.
> Actually, it looks like due to customer demand we will write the
> driver anyway and possibly maintain it outside the tree, but it would
> be preferable (and, obviously, easier to catch up with wide-ranging
> changes) to have it in.
> 
> Would there be any major opposition to adding a StorPool shared
> storage image backend, so that our customers are not limited to
> volume-backed instances?  Right now, creating a StorPool volume and
> snapshot from a Glance image and then booting instances from that
> snapshot works great, but in some cases, including some provisioning
> and accounting systems on top of OpenStack, it would be preferable to
> go the Nova way and let the hypervisor think that it has a local(ish)
> image to work with, even though it's on shared storage anyway.  This
> will go hand-in-hand with our planned Glance image driver, so that
> creating a new instance from a Glance image would happen
> instantaneously (create a StorPool volume from the StorPool snapshot
> corresponding to the Glance image).
> 
> If this will help the decision, we do have plans for adding a
> full-blown Nova third-party CI in the near future, so that both our
> volume attachment driver, this driver, and our upcoming Glance image
> driver will see some more testing.
> 
> Thanks in advance, and keep up the great work!
> 
> Best regards,
> Peter
> 



Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Fox, Kevin M
What about the other way around? An Octavia plugin that simply manages k8s 
Ingress objects on a k8s cluster? Depending on how operators are deploying 
openstack, this might be a much easier way to deploy Octavia.

Thanks,
Kevin

From: Lingxian Kong [anlin.k...@gmail.com]
Sent: Friday, March 16, 2018 5:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB 
APIs with K8s

Just FYI, l7 policy/rule support for Neutron LBaaS V2 and Octavia is on its 
way[1], because we will have both octavia and magnum deployed on our openstack 
based public cloud this year, an ingress controller for openstack(octavia) is 
also on our TODO list; any kind of collaboration is welcome :-)

[1]: https://github.com/gophercloud/gophercloud/pull/833


Cheers,
Lingxian Kong (Larry)

On Fri, Mar 16, 2018 at 5:01 PM, Joe Topjian wrote:
Hi Chris,

I wear a number of hats related to this discussion, so I'll add a few points of 
view :)

It turns out that with
Terraform, it's possible to tear down resources in a way that causes Neutron to
leak administrator-privileged resources that can not be deleted by a
non-privileged users. In discussions with the Neutron and Octavia teams, it was
strongly recommended that I move away from the Neutron LBaaSv2 API and instead
adopt Octavia. Vexxhost graciously installed Octavia at my request and I was
able to move past this issue.

Terraform hat! I want to slightly nit-pick this one since the words "leak" and 
"admin-priv" can sound scary: Terraform technically wasn't doing anything 
wrong. The problem was that Octavia was creating resources but not setting 
ownership to the tenant. When it came time to delete the resources, Octavia was 
correctly refusing, though it incorrectly created said resources.

From reviewing the discussion, other parties were discovering this issue and 
patching in parallel to your discovery. Both xgerman and Vexxhost jumped in to 
confirm the behavior seen by Terraform. Vexxhost quickly applied the patch. It 
was a really awesome collaboration between yourself, dims, xgerman, and 
Vexxhost.

This highlights the first call to action for our public and private cloud
community: encouraging the rapid migration from older, unsupported APIs to
Octavia.

Operator hat! The clouds my team and I run are more compute-based. Our users 
would be more excited if we increased our GPU pool than enhanced the networking 
services. With that in mind, when I hear it said that "Octavia is 
backwards-compatible with Neutron LBaaS v2", I think "well, cool, that means we 
can keep running Neutron LBaaS v2 for now" and focus our efforts elsewhere.

I totally get why Octavia is advertised this way and it's very much 
appreciated. When I learned about Octavia, my knee-jerk reaction was "oh no, 
not another load balancer" but that was remedied when I learned it's more like 
LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not our primary 
focus and we can still squeak by with Neutron's LBaaS v2.

If you *really* wanted us to deploy Octavia ASAP, then a migration guide would 
be wonderful. I read over the "Developer / Operator Quick Start Guide" and 
found it very well written! I groaned over having to build an image but I also 
really appreciate the image builder script. If there can't be pre-built images 
available for testing, the second-best option is that script.

This highlights a second call to action for the SDK and provider developers:
recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
support for more advanced Octavia features.

Gophercloud hat! We've supported Octavia for a few months now, but purely by 
having the load-balancer client piggyback off of the Neutron LBaaS v2 API. We 
made the decision this morning, coincidentally enough, to have Octavia be a 
first-class service peered with Neutron rather than think of Octavia as a 
Neutron/network child. This will allow Octavia to fully flourish without worry 
of affecting the existing LBaaS v2 API (which we'll still keep around 
separately).

Thanks,
Joe





Re: [openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Matt Riedemann

On 3/16/2018 10:33 AM, Peter Penchev wrote:

Would there be any major opposition to adding a StorPool shared
storage image backend, so that our customers are not limited to
volume-backed instances?  Right now, creating a StorPool volume and
snapshot from a Glance image and then booting instances from that
snapshot works great, but in some cases, including some provisioning
and accounting systems on top of OpenStack, it would be preferable to
go the Nova way and let the hypervisor think that it has a local(ish)
image to work with, even though it's on shared storage anyway.  This
will go hand-in-hand with our planned Glance image driver, so that
creating a new instance from a Glance image would happen
instantaneously (create a StorPool volume from the StorPool snapshot
corresponding to the Glance image).



Ask the EMC ScaleIO team how well this has gone for them:

https://review.openstack.org/#/c/407440/

There has been a lot of discussion about a generic Cinder image backend 
driver in nova so that we don't need to have the same storage backend 
driver explosion that Cinder has, and we could also then replace the 
nova lvm/rbd image backends and just use Cinder volumes for 
volume-backed instances.


I could find lots of discussion references about this, but basically no 
one is planning to step up to work on that, and the existing libvirt 
imagebackend code is a mess, so piling more backends into the mix isn't 
very attractive.


Anyway, just FYI on all of that history.


If this will help the decision, we do have plans for adding a
full-blown Nova third-party CI in the near future, so that both our
volume attachment driver, this driver, and our upcoming Glance image
driver will see some more testing.


3rd party CI would be a requirement to get it added anyway, it's not 
really an option.


--

Thanks,

Matt



[openstack-dev] [chef] State of the Kitchen - 2nd Edition

2018-03-16 Thread Samuel Cassiba
This is the second edition of what is going on in Chef OpenStack. The
goal is to give a quick overview to see our progress and what is on
the menu. Feedback is always welcome, as this is an iterative thing.

Appetizers
==========
=> Pike has been branched! Supermarket has also received a round of
updates. https://supermarket.chef.io/users/openstack
=> chef-client 13.8 has been released, allowing the scenarios to
continue tracking the latest 13 series.
https://discourse.chef.io/t/chef-client-13-8-released/12652

Entrees
=======
=> Queens development has commenced. Preliminary lab testing has
yielded positive results in Test Kitchen. Most changes seem to revolve
around deprecation chasing. https://review.openstack.org/550963 &
https://review.openstack.org/#/q/status:open+topic:queens_updates
=> Nova is continuing the trend of operating as an Apache web service.
https://review.openstack.org/552299

Desserts
========
=> The client (fog wrapper) and dns (Designate) cookbooks will be
coming home after stabilizing in Pike.
=> Chef 14 and ChefDK 3 is a thing next month. A heads-up will be sent
to this ML before this enters the gate.
https://blog.chef.io/2018/02/16/preparing-for-chef-14-and-chef-12-end-of-life/
=> More to come with upgrades. Stay tuned for specs and patches.

On The Menu
===========
=> Buffalo Chicken Dip
-- 3-4 raw chicken breasts (flash-frozen gives a slightly different
mouth feel. it still makes food, so, you do you, boo)
-- 8 ounces (226g) cream cheese / Neufchatel
-- 1 cup (128g) hot sauce (Frank's RedHot recommended. substitute for
your own preferred pepper sauce)
-- 1 ounce (28g) dry ranch seasoning (substitute for store-bought
powder, or salad dressing from a bottle, if you must - ranch or bleu
cheese works here)
-- 4 ounces (113g) butter (grass-fed recommended because delicious)
Optional:
-- 4 slices cooked and crumbled (streaky) bacon
-- Cheese (shredded or cubed for melting consistency)

Add the chicken to a slowcooker in a single layer, if you have room.
Add hot sauce, butter, ranch right on top of the chicken. Cook on high
for 4 hours. Remove heat, drain juices, reserving juices. Shred
chicken. Add cream cheese, incorporate thoroughly. Reincorporate the
juices, gradually and thoroughly, taking care not to obliterate the
chicken, unless you like tangy, cheesy chicken mash. Serve as an
appetizer, or dig in with a fork.

Your humble cook,
Samuel Cassiba



[openstack-dev] [glance] bug squad meetings start Monday

2018-03-16 Thread Brian Rosmaita
The Glance Bug Squad will meet biweekly on Monday of even-numbered ISO
weeks at 10:00 UTC in #openstack-glance.  The meeting will last 45
minutes.

The first meeting will be 19 March 2018.

Agenda and notes: https://etherpad.openstack.org/p/glance-bug-squad-meeting
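For anyone checking which Mondays qualify, Python's `date.isocalendar()` makes the ISO week number easy to verify; the announced first meeting date of 19 March 2018 falls in even-numbered ISO week 12 (a quick illustrative check):

```python
import datetime

# The bug squad meets on Mondays of even-numbered ISO weeks; verify that
# the first announced meeting date lands in an even-numbered week.
first_meeting = datetime.date(2018, 3, 19)
week = first_meeting.isocalendar()[1]
print(week)           # 12
print(week % 2 == 0)  # True
```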



Re: [openstack-dev] [api] APAC-friendly API-SIG meeting times

2018-03-16 Thread Ed Leafe
On Mar 15, 2018, at 10:31 PM, Gilles Dubreuil  wrote:
> 
> Any chance we can progress on this one?
> 
> I believe there are not enough participants to split the API SIG meeting in
> two, and with the same lack of people across both meetings it could become
> pretty inefficient. Therefore I think moving the main meeting to another
> time might be better, but I could be wrong.
> 
> Anyway, in any case I can't make progress with a meeting that is in the
> middle of the night for me, so I would appreciate it if we could re-activate
> this discussion.

What range of times would work for you? 

-- Ed Leafe








[openstack-dev] [nova] New image backend: StorPool

2018-03-16 Thread Peter Penchev
Hi,

A couple of years ago I created a Nova spec for the StorPool image
backend: https://review.openstack.org/#/c/137830/  There was some
discussion, but then our company could not immediately allocate the
resources to write the driver itself, so the spec languished and was
eventually abandoned.

Now that StorPool has a fully maintained Cinder driver and also a
fully maintained Nova volume attachment driver, both included in the
Queens release, and a Cinder third-party CI that runs all the tests
tagged with "volume", including some simple Nova tests, we'd like to
resurrect this spec and implement a Nova image backend, too.
Actually, it looks like due to customer demand we will write the
driver anyway and possibly maintain it outside the tree, but it would
be preferable (and, obviously, easier to catch up with wide-ranging
changes) to have it in.

Would there be any major opposition to adding a StorPool shared
storage image backend, so that our customers are not limited to
volume-backed instances?  Right now, creating a StorPool volume and
snapshot from a Glance image and then booting instances from that
snapshot works great, but in some cases, including some provisioning
and accounting systems on top of OpenStack, it would be preferable to
go the Nova way and let the hypervisor think that it has a local(ish)
image to work with, even though it's on shared storage anyway.  This
will go hand-in-hand with our planned Glance image driver, so that
creating a new instance from a Glance image would happen
instantaneously (create a StorPool volume from the StorPool snapshot
corresponding to the Glance image).

If this will help the decision, we do have plans for adding a
full-blown Nova third-party CI in the near future, so that both our
volume attachment driver, this driver, and our upcoming Glance image
driver will see some more testing.

Thanks in advance, and keep up the great work!

Best regards,
Peter



Re: [openstack-dev] [nova] Does not hook for validating resource name (name/hostname for instance) required?

2018-03-16 Thread Matt Riedemann

On 3/16/2018 1:22 AM, 양유석 wrote:
Our company operates OpenStack clusters and we have a legacy DNS system 
that needs to check hostnames more strictly, including RFC 952 compliance. 
Our operators also demand unique hostnames within a region (we do not 
have tenant networks yet; we use an L3-only network). So for those reasons, we 
maintained custom validation logic for instance names.


But as everyone knows, maintaining custom code is a burden, so I am 
trying to find an appropriate place to hook in this validation.


IMHO, since there is schema validation for every resource, if a validation 
hook API were provided we could happily use it. Has anyone experienced a 
similar issue? Any advice will be appreciated.


There is a config option, "osapi_compute_unique_server_name_scope", 
which you can set to 'global' to enforce at the DB layer that 
instance hostnames are unique.


However, thinking about this now, it's validated down in the cell DB 
layer, which is not global, so this likely doesn't work if you're using 
multiple cells, but I doubt you are right now.
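For reference, that option would be set in nova.conf roughly like this (a minimal sketch, subject to the per-cell caveat above):

```ini
[DEFAULT]
# Enforce unique instance display names / hostnames.
# Valid values: '' (no enforcement), 'project', 'global'.
osapi_compute_unique_server_name_scope = global
```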


Another related option is "multi_instance_display_name_template" but I 
see that's deprecated now, but I'm not aware of a proposed alternative 
for that option.


The names used should conform to RFC952, see:

https://github.com/openstack/nova/blob/7cbb5764d499dfdc90ef4a963daf217d58c840d4/nova//utils.py#L543
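For readers unfamiliar with RFC 952-style restrictions, here is a rough illustration of that kind of host-label check. This is a hedged sketch, not nova's actual sanitize logic:

```python
import re

# Hedged sketch of an RFC 952-style host-label check: must start with a
# letter, contain only letters, digits, and hyphens, not end with a
# hyphen, and stay within 63 characters. Not nova's actual code.
RFC952_LABEL = re.compile(r"^[A-Za-z][A-Za-z0-9-]{0,62}(?<!-)$")

def is_rfc952_label(name: str) -> bool:
    """Return True if ``name`` is a valid RFC 952-style host label."""
    return bool(RFC952_LABEL.match(name))

print(is_rfc952_label("web-01"))    # True
print(is_rfc952_label("1badname"))  # False: must start with a letter
print(is_rfc952_label("ends-"))     # False: must not end with a hyphen
```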

--

Thanks,

Matt



Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-16 Thread Matt Riedemann

On 3/16/2018 9:29 AM, Kwan, Louie wrote:

In the stable/queens branch, openstacksdk===0.11.3 and os-service-types===1.1.0 
are pinned in openstack's upper-constraints.txt:

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older versions. How do we configure devstack to 
use openstacksdk===0.11.3 and os-service-types===1.1.0?



You could try setting this in your local.conf:

https://github.com/openstack-dev/devstack/blob/master/stackrc#L547

GITBRANCH["python-openstacksdk"]=0.11.3

But I don't see a similar entry for os-service-types.

I don't know if ^ will work, but it's what I'd try.
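If you want to try that, the override would go in the localrc section of your local.conf, roughly like this (an untested sketch, per the caveat above):

```ini
[[local|localrc]]
# Pin the openstacksdk checkout to the 0.11.3 tag (untested sketch)
GITBRANCH["python-openstacksdk"]=0.11.3
```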

--

Thanks,

Matt



Re: [openstack-dev] [nova][placement] update_provider_tree design updates

2018-03-16 Thread Jim Rollenhagen
>
> ...then there's no way I can know ahead of time what all those might be.
>  (In particular, if I want to support new devices without updating my
> code.)  I.e. I *can't* write the corresponding
> provider_tree.remove_trait(...) condition.  Maybe that never becomes a
> real problem because we'll never need to remove a dynamic trait.  Or
> maybe we can tolerate "leakage".  Or maybe we do something
> clever-but-ugly with namespacing (if
> trait.startswith('CUSTOM_DEV_VENDORID_')...).  We're consciously kicking
> this can down the road.
>
> And note that this "dynamic" problem is likely to be a much larger
> portion (possibly all) of the domain when we're talking about aggregates.
>
> Then there's ironic, which is currently set up to get its traits blindly
> from Inspector.  So Inspector not only needs to maintain the "owned
> traits" list (with all the same difficulties as above), but it must also
> either a) communicate that list to ironic virt so the latter can manage
> the add/remove logic; or b) own the add/remove logic and communicate the
> individual traits with a +/- on them so virt knows whether to add or
> remove them.


Just a nit, Ironic doesn't necessarily get its traits from inspector.
Ironic gets them from *some* API client, which may be an operator, or
inspector, or something else. Inspector is totally optional.

Anyway, I'm inclined to kick this can down the road a bit, as you mention.
I imagine that the ideal situation is for Ironic to remove traits from
placement on the fly when they are removed in Ironic. Any other traits
that nova-compute knows about (but Ironic doesn't), nova-compute can
manage the removal of the same way as any other virt driver.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Simon Leinen
Joe Topjian writes:
> Terraform hat! I want to slightly nit-pick this one since the words
> "leak" and "admin-priv" can sound scary: Terraform technically wasn't
> doing anything wrong. The problem was that Octavia was creating
> resources but not setting ownership to the tenant. When it came time
> to delete the resources, Octavia was correctly refusing, though it
> incorrectly created said resources.

I dunno... if Octavia created those lower-layer resources on behalf of
the user, then Octavia shouldn't refuse to remove those resources when
the same user later asks it to - independent of what ownership Octavia
chose to apply to those resources.  (It would be different if Neutron or
Nova were asked by the user directly to remove the resources created by
Octavia.)

> From reviewing the discussion, other parties were discovering this
> issue and patching in parallel to your discovery. Both xgerman and
> Vexxhost jumped in to confirm the behavior seen by Terraform. Vexxhost
> quickly applied the patch. It was a really awesome collaboration
> between yourself, dims, xgerman, and Vexxhost.

Speaking as another operator: Does anyone seriously expect us to deploy
a service (Octavia) in production at a stage where it exhibits this kind
of behavior? Having to clean up leftover resources because the users who
created them cannot remove them is not my idea of fun.  (And note that
like most operators, we're a few releases behind, so we might not even
get access to backports IF this gets fixed.)

In our case we're not a compute-oriented cloud provider, and some of our
customers would really like to have a good LBaaS as part of our IaaS
offering.  But our experience with this was so-so in the past - for
example, we had to help customers migrate from LBaaSv1 to LBaaSv2.  Our
resources (people, tolerance to user-affecting bugs and forced upgrades
etc.) are limited, so we've become careful.

For users who want to use Kubernetes on our OpenStack service, we rather
point them to Kubernetes's Ingress controller, which performs the LB
function without requiring much from the underlying cloud.  Seems like a
fine solution.
-- 
Simon.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-16 Thread Kwan, Louie
In the stable/queens branch, since openstacksdk===0.11.3 and
os-service-types===1.1.0 are described in openstack's upper-constraints.txt,

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do 

> git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens

And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older versions. How do we configure devstack to
use openstacksdk===0.11.3 and os-service-types===1.1.0?

Thanks.
Louie


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Jeremy Stanley
On 2018-03-16 08:34:28 -0500 (-0500), Sean McGinnis wrote:
> On Mar 16, 2018, at 04:02, Jean-Philippe Evrard  
> wrote:
> 
> > For OpenStack-Ansible, we don't need to do anything for that
> > community goal.  I am not sure how we can remove our name from
> > the storyboard, so I just inform you here.
> 
> I believe you can just mark the task as done if there is no
> additional work required.

Yeah, either "merged" or "invalid" states should work. I'd lean
toward suggesting "invalid" in this case since the task did not
require any changes merged to your source code.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Sean McGinnis

> On Mar 16, 2018, at 04:02, Jean-Philippe Evrard  
> wrote:
> 
> Hello,
> 
> For OpenStack-Ansible, we don't need to do anything for that community
> goal.  I am not sure how we can remove our name from the storyboard,
> so I just inform you here.
> 
> Jean-Philippe Evrard (evrardjp)

I believe you can just mark the task as done if there is no additional work 
required.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Carlos Goncalves
On Fri, Mar 16, 2018 at 5:01 AM, Joe Topjian  wrote:

> Hi Chris,
>
> I wear a number of hats related to this discussion, so I'll add a few
> points of view :)
>
> It turns out that with
>> Terraform, it's possible to tear down resources in a way that causes
>> Neutron to
>> leak administrator-privileged resources that cannot be deleted by
>> non-privileged users. In discussions with the Neutron and Octavia teams,
>> it was
>> strongly recommended that I move away from the Neutron LBaaSv2 API and
>> instead
>> adopt Octavia. Vexxhost graciously installed Octavia at my request and I
>> was
>> able to move past this issue.
>>
>
> Terraform hat! I want to slightly nit-pick this one since the words "leak"
> and "admin-priv" can sound scary: Terraform technically wasn't doing
> anything wrong. The problem was that Octavia was creating resources but not
> setting ownership to the tenant. When it came time to delete the resources,
> Octavia was correctly refusing, though it incorrectly created said
> resources.
>
> From reviewing the discussion, other parties were discovering this issue
> and patching in parallel to your discovery. Both xgerman and Vexxhost
> jumped in to confirm the behavior seen by Terraform. Vexxhost quickly
> applied the patch. It was a really awesome collaboration between yourself,
> dims, xgerman, and Vexxhost.
>
>
>> This highlights the first call to action for our public and private cloud
>> community: encouraging the rapid migration from older, unsupported APIs to
>> Octavia.
>>
>
> Operator hat! The clouds my team and I run are more compute-based. Our
> users would be more excited if we increased our GPU pool than enhanced the
> networking services. With that in mind, when I hear it said that "Octavia
> is backwards-compatible with Neutron LBaaS v2", I think "well, cool, that
> means we can keep running Neutron LBaaS v2 for now" and focus our efforts
> elsewhere.
>
> I totally get why Octavia is advertised this way and it's very much
> appreciated. When I learned about Octavia, my knee-jerk reaction was "oh
> no, not another load balancer" but that was remedied when I learned it's
> more like LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not
> our primary focus and we can still squeak by with Neutron's LBaaS v2.
>
> If you *really* wanted us to deploy Octavia ASAP, then a migration guide
> would be wonderful. I read over the "Developer / Operator Quick Start
> Guide" and found it very well written! I groaned over having to build an
> image but I also really appreciate the image builder script. If there can't
> be pre-built images available for testing, the second-best option is that
> script.
>


Periodic builds of Ubuntu and CentOS pre-built test images coming soon:
https://review.openstack.org/#/c/549259/

Periodic builds by the RDO project:
https://images.rdoproject.org/octavia/master/ (
https://review.rdoproject.org/r/#/c/11805/)


>
>> This highlights a second call to action for the SDK and provider
>> developers:
>> recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
>> support for more advanced Octavia features.
>>
>
> Gophercloud hat! We've supported Octavia for a few months now, but purely
> by having the load-balancer client piggyback off of the Neutron LBaaS v2
> API. We made the decision this morning, coincidentally enough, to have
> Octavia be a first-class service peered with Neutron rather than think of
> Octavia as a Neutron/network child. This will allow Octavia to fully
> flourish without worry of affecting the existing LBaaS v2 API (which we'll
> still keep around separately).
>
> Thanks,
> Joe
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Andreas Jaeger
On 2018-03-16 13:22, Jeffrey Zhang wrote:
> Kolla installs OpenStack packages from the master tarball on the kolla
> master branch[0].
> 
> On stable branch, kolla installs from the neutron tag tarball. But I think
> there will also be an issue here: what if I want to install
> neutron-12.0.1.tar.gz while neutron===12.0.0 exists in the
> upper-constraints.txt file?
> 
> [0] http://tarballs.openstack.org/neutron/neutron-master.tar.gz

I see, thanks.

Let me restore https://review.openstack.org/#/c/553030, it should get us
moving forward here - and then we can figure out whether there are other
options,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][neutron][kolla] tools/tox_install changes - breakage with constraints

2018-03-16 Thread Jeffrey Zhang
Kolla installs OpenStack packages from the master tarball on the kolla
master branch[0], like:

  pip install -c upper-constraints.txt neutron-master.tar.gz

On stable branch, kolla installs from the neutron tag tarball, so it should
work. But I think there will also be an issue here: what if I want to install
neutron-12.0.1.tar.gz while neutron===12.0.0 exists in the
upper-constraints.txt file?

[0] http://tarballs.openstack.org/neutron/neutron-master.tar.gz
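One way around that kind of pin (a sketch with illustrative paths and sample constraint entries, not kolla's actual implementation) is to filter the conflicting entry out of a local copy of the constraints file before installing:

```shell
# Work on a local copy so the upstream constraints file stays untouched
# (the two entries here are a small illustrative sample).
printf 'neutron===12.0.0\nhorizon===13.0.0\n' > /tmp/upper-constraints.txt

# Drop the neutron pin so a different neutron tarball can be installed.
sed -i '/^neutron===/d' /tmp/upper-constraints.txt

cat /tmp/upper-constraints.txt
# prints: horizon===13.0.0
# afterwards: pip install -c /tmp/upper-constraints.txt neutron-12.0.1.tar.gz
```

The remaining constraints still apply to neutron's dependencies, which is usually the behavior you want.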


On Fri, Mar 16, 2018 at 6:57 PM, Andreas Jaeger  wrote:

> On 2018-03-16 11:49, Jeffrey Zhang wrote:
> > Now it breaks kolla's master branch jobs, and we have to remove
> > "horizon"
> > and "neutron" in the upper-constraints.txt file. Check [1][2].
> >
> > I want to know the correct way to install the horizon development
> > branch with the upper-constraints.txt file.
> >
> >
> > [1] https://review.openstack.org/#/c/549456/4/docker/
> neutron/neutron-base/Dockerfile.j2
> >  neutron-base/Dockerfile.j2>
> > [2] https://review.openstack.org/#/c/549456/4/docker/
> horizon/Dockerfile.j2
> > 
>
> Sorry, that is too much magic for me to be able to help you.
>
> What are those doing? How do you install today? Please give me some
> instructions
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Jeffrey Zhang
Kolla installs OpenStack packages from the master tarball on the kolla
master branch[0].

On stable branch, kolla installs from the neutron tag tarball. But I think
there will also be an issue here: what if I want to install
neutron-12.0.1.tar.gz while neutron===12.0.0 exists in the
upper-constraints.txt file?

[0] http://tarballs.openstack.org/neutron/neutron-master.tar.gz

On Fri, Mar 16, 2018 at 6:53 PM, Andreas Jaeger  wrote:

> On 2018-03-16 11:42, Thomas Morin wrote:
> > This is related to the topic in "[horizon][neutron] tools/tox_install
> > changes - breakage with constraints".
> >
> > proposes to remove these projects from upper-constraints (for a
> > different reason)
> > https://review.openstack.org/#/c/552865
> >  that adds other projects to
> > global-requirements, explicitly postpone their addition to
> > upper-constraints to a later step
> >
> > Perhaps neutron and horizon should be removed from upper-constraints for
> > now ? (ie restore https://review.openstack.org/#/c/553030 ?)
>
> Yes, that would be one option. but I like to understand whether that
> would be a temporary solution - or the end solution.
>
> Jeffrey, how exactly are you installing neutron? From git? From tarballs?
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Lingxian Kong
Just FYI, l7 policy/rule support for Neutron LBaaS V2 and Octavia is on its
way[1]. Because we will have both octavia and magnum deployed on our
openstack-based public cloud this year, an ingress controller for
openstack (octavia) is also on our TODO list; any kind of collaboration is
welcome :-)

[1]: https://github.com/gophercloud/gophercloud/pull/833


Cheers,
Lingxian Kong (Larry)

On Fri, Mar 16, 2018 at 5:01 PM, Joe Topjian  wrote:

> Hi Chris,
>
> I wear a number of hats related to this discussion, so I'll add a few
> points of view :)
>
> It turns out that with
>> Terraform, it's possible to tear down resources in a way that causes
>> Neutron to
>> leak administrator-privileged resources that cannot be deleted by
>> non-privileged users. In discussions with the Neutron and Octavia teams,
>> it was
>> strongly recommended that I move away from the Neutron LBaaSv2 API and
>> instead
>> adopt Octavia. Vexxhost graciously installed Octavia at my request and I
>> was
>> able to move past this issue.
>>
>
> Terraform hat! I want to slightly nit-pick this one since the words "leak"
> and "admin-priv" can sound scary: Terraform technically wasn't doing
> anything wrong. The problem was that Octavia was creating resources but not
> setting ownership to the tenant. When it came time to delete the resources,
> Octavia was correctly refusing, though it incorrectly created said
> resources.
>
> From reviewing the discussion, other parties were discovering this issue
> and patching in parallel to your discovery. Both xgerman and Vexxhost
> jumped in to confirm the behavior seen by Terraform. Vexxhost quickly
> applied the patch. It was a really awesome collaboration between yourself,
> dims, xgerman, and Vexxhost.
>
>
>> This highlights the first call to action for our public and private cloud
>> community: encouraging the rapid migration from older, unsupported APIs to
>> Octavia.
>>
>
> Operator hat! The clouds my team and I run are more compute-based. Our
> users would be more excited if we increased our GPU pool than enhanced the
> networking services. With that in mind, when I hear it said that "Octavia
> is backwards-compatible with Neutron LBaaS v2", I think "well, cool, that
> means we can keep running Neutron LBaaS v2 for now" and focus our efforts
> elsewhere.
>
> I totally get why Octavia is advertised this way and it's very much
> appreciated. When I learned about Octavia, my knee-jerk reaction was "oh
> no, not another load balancer" but that was remedied when I learned it's
> more like LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not
> our primary focus and we can still squeak by with Neutron's LBaaS v2.
>
> If you *really* wanted us to deploy Octavia ASAP, then a migration guide
> would be wonderful. I read over the "Developer / Operator Quick Start
> Guide" and found it very well written! I groaned over having to build an
> image but I also really appreciate the image builder script. If there can't
> be pre-built images available for testing, the second-best option is that
> script.
>
>
>> This highlights a second call to action for the SDK and provider
>> developers:
>> recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
>> support for more advanced Octavia features.
>>
>
> Gophercloud hat! We've supported Octavia for a few months now, but purely
> by having the load-balancer client piggyback off of the Neutron LBaaS v2
> API. We made the decision this morning, coincidentally enough, to have
> Octavia be a first-class service peered with Neutron rather than think of
> Octavia as a Neutron/network child. This will allow Octavia to fully
> flourish without worry of affecting the existing LBaaS v2 API (which we'll
> still keep around separately).
>
> Thanks,
> Joe
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] [tripleo] [puppet] [fuel] [kolla] [openstack-ansible] [cloudcafe] [magnum] [mogan] [sahara] [shovel] [watcher] [helm] [rally] Heads up: ironic classic drivers deprecation

2018-03-16 Thread Dmitry Tantsur

Hi all,

If you see your project name in the subject, that is because a global search 
revealed usage of "pxe_ipmitool", "agent_ipmitool" or "pxe_ssh" drivers in the 
non-unit-test context in one or more of your repositories.


The classic drivers, such as pxe_ipmitool, were deprecated in Queens, and we're 
on track with removing them in Rocky. Please read [1] about differences between 
classic drivers and newer hardware types. Please refer to [2] on how to update 
your code.


Finally, the pxe_ssh driver was removed some time ago. Please use the standard 
IPMI driver with the virtualbmc project [3] instead.


Please reach out to the ironic team (here or on #openstack-ironic) if you have 
any questions or need help with the transition.


Dmitry

[1] https://docs.openstack.org/ironic/latest/install/enabling-drivers.html
[2] 
https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html
[3] https://github.com/openstack/virtualbmc
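For the pxe_ipmitool/agent_ipmitool cases, the usual destination is the ipmi hardware type. A hedged ironic.conf sketch of enabling it follows; the interface lists shown are common defaults rather than a prescription, so check the upgrade guide [2] for what your deployment actually needs:

```ini
[DEFAULT]
enabled_hardware_types = ipmi
enabled_power_interfaces = ipmitool
enabled_management_interfaces = ipmitool
enabled_boot_interfaces = pxe
enabled_deploy_interfaces = iscsi,direct
```

Existing nodes can then be moved over with something like `openstack baremetal node set <node> --driver ipmi` (again, see [2] for the full procedure, including interface settings per node).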

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Thomas Morin
Hi Andreas,
In the documentation for networking-bgpvpn, we suggest installing these
packages with "pip install -c
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
networking-bgpvpn==8.0.0". In many cases this can work well enough for
people wanting to try this component on top of an existing installation,
assuming they follow a few extra steps explained in the rest of the doc.
Adding networking-bgpvpn to upper-constraints.txt will break this way of
doing things.
-Thomas
Andreas Jaeger, 2018-03-16 11:53:
> On 2018-03-16 11:42, Thomas Morin wrote:
> > This is related to the topic in "[horizon][neutron]
> > tools/tox_install
> > changes - breakage with constraints".
> > 
> > proposes to remove these projects from upper-constraints (for a
> > different reason)
> > https://review.openstack.org/#/c/552865
> >  that adds other projects
> > to
> > global-requirements, explicitly postpone their addition to
> > upper-constraints to a later step
> > 
> > Perhaps neutron and horizon should be removed from upper-
> > constraints for
> > now ? (ie restore https://review.openstack.org/#/c/553030 ?)
> 
> Yes, that would be one option. but I like to understand whether that
> would be a temporary solution - or the end solution.
> 
> Jeffrey, how exactly are you installing neutron? From git? From
> tarballs?
> 
> Andreas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] placement update 18-11

2018-03-16 Thread Chris Dent


Here's a placement update!

# Most Important

While work has started on some of the already approved specs, there
are still a fair few under review, and a couple yet to be written.
Given the number of specs we've got going it's entirely likely we've
bitten off more than we can chew, but we'll see. Getting specs
landed early makes it easier to get the functionality merged sooner,
so: review some specs.

In active code reviews, the update provider tree stuff remains very
important as it's the keystone in getting the nova-side of placement
interaction working best.

# What's Changed

All the resource provider objects (the file resource_provider.py)
have moved under nova/api/openstack/placement and now inherit
directly of OVO. This is to harden and signal the boundary between
nova and placement, helping not just in the eventual extraction of
placement, but also in making placement lighter.  More on related
code in the extraction section below.

Standard resource class fields have been moved to a top level file,
rc_fields.py. This is a stopgap until os-resource-classes is
created.

A series of conversations, nicely summarized by Eric on this list

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128383.html

, showed that the way we are managing the addition and removal of
traits and aggregates in the compute environment needs some tweaks
to control how and by whom changes can be made. Code is in progress
to deal with that, but the posting is worth a read to catch up on
the reasoning. It's not simple, but neither is the situation.

Aggregates can be managed with a generation now, and code will probably
merge today that allows a generation in the response when POSTing to
create a resource provider.
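As a rough sketch of what generation-aware aggregate handling looks like on the wire (the request shape follows the placement API with generations; the UUID and generation value here are illustrative, not taken from the source):

```
PUT /resource_providers/{uuid}/aggregates
{
    "aggregates": ["42896e0d-205d-4fe3-bd1e-100924931787"],
    "resource_provider_generation": 5
}
```

The expectation is that a stale generation gets a conflict response, and the caller refreshes its view and retries.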

The nova-scheduler process can run with multiple workers if the
driver is the filter scheduler. It will be run that way in devstack,
henceforth.

# Questions

[Add yours here?]

# Bugs

* Placement related bugs without owners:  https://goo.gl/TgiPXb
  15, +1 on last week
* In progress placement bugs: https://goo.gl/vzGGDQ
  11, no data for last week (because I only realized today I should
  do this)

# Specs

* https://review.openstack.org/#/c/550244/
   Propose standardized provider descriptor file

* https://review.openstack.org/#/c/549067/
   VMware: place instances on resource pool
   (using update_provider_tree)

* https://review.openstack.org/#/c/549184/
   Spec: report client placement version discovery

* https://review.openstack.org/#/c/548237/
   Update placement aggregates spec to clarify generation handling

* https://review.openstack.org/#/c/418393/
   Provide error codes for placement API

* https://review.openstack.org/#/c/545057/
   mirror nova host aggregates to placement API

* https://review.openstack.org/#/c/552924/
  Proposes NUMA topology with RPs

* https://review.openstack.org/#/c/544683/
  Account for host agg allocation ratio in placement

* https://review.openstack.org/#/c/552927/
  Spec for isolating configuration of placement database

* https://review.openstack.org/#/c/552105/
  Support default allocation ratios

* https://review.openstack.org/#/c/438640/4
  Spec on preemptible servers

# Main Themes

## Update Provider Tree

The ability of virt drivers to represent what resource providers
they know about--whether that be numa, or clustered resources--is
supported by the update_provider_tree method. Part of it is done,
but some details remain:

 https://review.openstack.org/#/q/topic:bp/update-provider-tree

There's new stuff in here for the add/remove traits and aggregates
stuff discussed above.

## Request Filters

These are a way for the nova scheduler to doctor the request being
sent to placement, using a sane interface.

https://review.openstack.org/#/q/topic:bp/placement-req-filter

That is waiting on the member_of functionality to merge:

https://review.openstack.org/#/c/552098/

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done
"placement side" by mirroring nova host aggregates into placement
aggregates.

 https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

It's part of what will make the req filters above useful.

## Forbidden Traits

A way of expressing "I'd like resources that do _not_ have trait X".
Spec for this has been approved, but the code hasn't been started
yet.

## Consumer Generations

In discussion yesterday it was agreed that edleafe will start the
ball rolling on this and I (cdent) will be his virtual pair.

# Extraction

As mentioned above there's been some progress here: objects have
moved under the placement hierarchy. The next patch in that stack is
to move some exceptions

https://review.openstack.org/#/c/549862/

followed by code to use a different configuration setting and setup
for the placement database connection. This has an old -2 on it,
requesting a spec to describe what's going on. That spec is here:


Re: [openstack-dev] [mistral][tempest][congress] import or retain mistral tempest service client

2018-03-16 Thread Dougal Matthews
On 13 March 2018 at 18:51, Eric K  wrote:

> Hi Mistral folks and others,
>
> I'm working on Congress tempest tests [1] for integration with Mistral. In
> the tests, we use a Mistral service client to call Mistral APIs and
> compare results against those obtained by Mistral driver for Congress.
>
> Regarding the service client, Congress can either import directly from
> Mistral tempest plugin [2] or maintain its own copy within Congress
> tempest plugin. I'm not sure whether Mistral team expects the service
> client to be internal use only, so I hope to hear folks' thoughts on which
> approach is preferred. Thanks very much!
>

I don't have a strong opinion here. I am happy for you to use the Mistral
service client, but it will be hard to guarantee stability. It has been
stable (since it hasn't changed), but we have a tempest refactor planned
(once we move the final tempest tests from mistralclient to
mistral-tempest-plugin). So there is a fair chance we will break the API at
that point; however, I don't know when it will happen, as nobody is
currently working on it.

I have cc'ed Chandan - hopefully he can provide some input. He has advised
me and the Mistral team regarding tempest before.


>
> Eric
>
> [1] https://review.openstack.org/#/c/538336/
> [2]
> https://github.com/openstack/mistral-tempest-plugin/blob/
> master/mistral_tem
> pest_tests/services/v2/mistral_client.py
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, March 16th

2018-03-16 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something)
governance-related that is not reflected on the tracker yet, please feel
free to add to it !


== Recently-approved changes ==

Nothing merged this week, but we are getting closer to final approval on
several long-standing proposals, see below!


== Voting in progress ==

One of the options for clarifying testing for interop programs seems to
have enough consensus to pass. It extends potential test locations to
match the current state of things and what teams agree to support.
Please review and chime in on:

https://review.openstack.org/#/c/550571/

The definition of "extended maintenance" is also reaching its
conclusion, with a proposal that has been consensual so far. Under this
proposal stable branches shall remain open to accept fixes as long as
reasonably possible, enabling people to step up and maintain branches
beyond the minimal maintenance window guaranteed by the stable branch
maintenance team. Please review and comment on:

https://review.openstack.org/#/c/548916/

Tony Breeds proposed to clarify the scope for the
recently-added PowerStackers team, to reflect its focus on PowerVM
(rather than all things POWER). We are still missing a number of votes
to pass that change. Please see:

https://review.openstack.org/#/c/551413/

Finally, we have a change to move Zuul out of the OpenStack Infra
project team repositories and OpenStack governance. This is the first
step in establishing our infrastructure tooling under its own brand, to
make it easier to promote it beyond OpenStack. The rationale was
explained by Jim Blair in more detail at:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128396.html

While the decision is more of an infra team internal decision on which
repositories they directly maintain, feel free to chime in on the thread
or the review with your thoughts on this. This already has majority
support from the TC, and will be approved on Tuesday unless new
objections are posted:

https://review.openstack.org/#/c/552637/


== Under discussion ==

There is a proposal to split the Kolla-kubernetes team out of the
Kolla/Kolla-ansible team, to reflect the fact that the teams are
actually separate. This obviously creates naming/namespace questions,
which, as everyone knows, are the hardest problem in computer science.
Please chime in on:

https://review.openstack.org/#/c/552531/

A new project team was just proposed to make the Adjutant project an
official OpenStack deliverable. Adjutant is a service built to help
manage certain elements of operations processes by providing micro APIs
around complex underlying workflows. Please review the proposal at:

https://review.openstack.org/#/c/553643/


== TC member actions/focus/discussions for the coming week(s) ==

For this week I expect final votes on the Extended maintenance proposal
and the interop tests locations, which may trigger additional
last-minute discussions.

I'll create stories in StoryBoard to track high-level TC initiatives. A
general overview of the stories to create is available for review at:

https://etherpad.openstack.org/p/rocky-tc-stories

We should also establish an etherpad to discuss potential Forum sessions
we'd like to file for the Vancouver Summit.


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [horizon][neutron][kolla] tools/tox_install changes - breakage with constraints

2018-03-16 Thread Andreas Jaeger
On 2018-03-16 11:49, Jeffrey Zhang wrote:
> Now it breaks the kolla's master branch jobs. And have to remove the
> "horizon"
> and "neutron" in the upper-constraints.txt file. check[1][2]. 
> 
> i wanna know what's the correct way to install horizon develop
> branch with upper-constraints.txt file?
> 
> 
> [1] 
> https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
> 
> [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
> 

Sorry, that is too much magic for me to be able to help you.

What are those doing? How do you install today? Please give me some
instructions.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Andreas Jaeger
On 2018-03-16 11:42, Thomas Morin wrote:
> This is related to the topic in "[horizon][neutron] tools/tox_install
> changes - breakage with constraints".
> 
> proposes to remove these projects from upper-constraints (for a
> different reason)
> https://review.openstack.org/#/c/552865
>  that adds other projects to
> global-requirements, explicitly postpone their addition to
> upper-constraints to a later step
> 
> Perhaps neutron and horizon should be removed from upper-constraints for
> now ? (ie restore https://review.openstack.org/#/c/553030 ?)

Yes, that would be one option, but I'd like to understand whether that
would be a temporary solution or the end solution.

Jeffrey, how exactly are you installing neutron? From git? From tarballs?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [horizon][neutron][kolla] tools/tox_install changes - breakage with constraints

2018-03-16 Thread Jeffrey Zhang
Now this breaks Kolla's master branch jobs, and we have to remove
"horizon" and "neutron" from the upper-constraints.txt file; see [1][2].

I want to know: what is the correct way to install the horizon development
branch with the upper-constraints.txt file?


[1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
[2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2



On Thu, Mar 15, 2018 at 9:28 PM, Doug Hellmann 
wrote:

> Excerpts from Thomas Morin's message of 2018-03-15 10:15:38 +0100:
> > Hi Doug,
> >
> > Doug Hellmann, 2018-03-14 23:42:
> > > We keep doing lots of infra-related work to make it "easy" to do
> > >  when it comes to
> > > managing dependencies.  There are three ways to address the issue
> > > with horizon and neutron, and none of them involve adding features
> > > to pbr.
> > >
> > > 1. Things that are being used like libraries need to release like
> > >libraries. Real releases. With appropriate version numbers. So
> > >that other things that depend on them can express valid
> > > dependencies.
> > >
> > > 2. Extract the relevant code into libraries and release *those*.
> > >
> > > 3. Things that are not stable enough to be treated as a library
> > >shouldn't be used that way. Move the things that use the
> > > application
> > >code as library code back into the repo with the thing that they
> > >are tied to but that we don't want to (or can't) treat like a
> > >library.
> >
> > What about the case where there is co-development of features across
> > repos ? One specific case I have in mind is the Neutron stadium where
>
> We do that all the time with the Oslo libraries. It's not as easy as
> having everything in one repo, but we manage.
>
> > we sometimes have features in neutron repo that are worked on as a pre-
> > requisite for things that will be done in a neutron-* or networking-*
> > project. Another is a case for instance where we need to add in project
> > X a tempest test to validate the resolution of a bug for which the fix
> > actually happened in project B (and where B is not a library).
>
> If the tempest test can't live in B because it uses part of X, then I
> think X and B are really one thing and you're doing more work than you
> need to be doing to keep them in separate libraries.
>
> > My intuition is that it is not illegitimate to expect this kind of
> > development workflow to be feasible; but at the same time I read your
> > suggestion above as meaning that it belongs to the real of "things we
> > shouldn't be doing in the first place".  The only way I can reconcile
>
> You read me correctly.
>
> We install a bunch of components from source for integration tests
> in devstack-gate because we want the final releases to work together.
> But those things only interact via REST APIs, and don't import each
> other.  The cases with neutron and horizon are different. Even the
> *unit* tests of the add-ons require code from the "parent" app. That
> indicates a level of coupling that is not being properly addressed by
> the release model and code management practices for the parent apps.
>
> > the two would be to conclude we should collapse all the module in
> > neutron-*/networking-* into neutron, but doing that would have quite a
> > lot of side effects (yes, this is an understatement).
>
> That's not the only way to do it. The other way would be to properly
> decompose the shared code into a library and then provide *stable
> APIs* so code can be consumed by the add-on modules. That will make
> evolving things a little more difficult because of the stability
> requirement. So it's a trade off. I think the teams involved should
> make that trade off (in one direction or another), instead of
> building tools to continue to avoid dealing with it.
>
> So let's start by examining the root of the problem: Why are the things
> that need to import neutron/horizon not part of the neutron/horizon
> repositories in the first place?
>
> Doug
>
>



-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Jeffrey Zhang
Thanks Thomas, I will move my question into that topic.

Anyone who is interested in this issue, please reply in "[horizon][neutron]
tools/tox_install changes - breakage with constraints".

On Fri, Mar 16, 2018 at 6:42 PM, Thomas Morin 
wrote:

> This is related to the topic in "[horizon][neutron] tools/tox_install
> changes - breakage with constraints".
>
> proposes to remove these projects from upper-constraints (for a different
> reason)
> https://review.openstack.org/#/c/552865 that adds other projects to
> global-requirements, explicitly postpone their addition to
> upper-constraints to a later step
>
> Perhaps neutron and horizon should be removed from upper-constraints for
> now ? (ie restore https://review.openstack.org/#/c/553030 ?)
>
> -Thomas
>
>
> Jeffrey Zhang, 2018-03-16 18:31:
>
> recently, a new patch is merged[0]. It adds neutron and horizon itself into
> upper-constraints.txt. But this will break installing horizon and neutron
> with
> upper-constraints.txt.
>
> Now it breaks the kolla's master branch patch. And have to remove the
> "horizon"
> and "neutron" in the files. check[1][2].
>
> The easier way to re-produce this is
>
>   git clone https://github.com/openstack/horizon.git
>   cd horizon
>   pip install -c https://git.openstack.org/cgit/openstack/requirements/
> plain/upper-constraints.txt .
>
> So the question is, is this expected? if so, what's the correct way to
> install horizon develop
> branch with upper-constraints.txt file?
>
>
> [0] https://review.openstack.org/#/c/550475/
> [1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/
> Dockerfile.j2
> [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
>
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


Re: [openstack-dev] [Neutron] Dublin PTG Summary

2018-03-16 Thread Thomas Morin
Miguel Lavalle, 2018-03-12 13:45:
> * Ruijing Guo proposed to support VLAN transparency in Neutron OVS
> agent.
> 
> [...]   - While on this topic, the conversation temporarily forked to
> the use of registers instead of ovsdb port tags in L2 agent br-int
> and possibly remove br-tun. Thomas Morin committed to draft a RFE for
> this.

Here is the RFE: https://bugs.launchpad.net/neutron/+bug/1756296

It does not yet cover a possible following step where br-tun would be
removed.

Cheers,

-Thomas




Re: [openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Thomas Morin
This is related to the topic in "[horizon][neutron] tools/tox_install
changes - breakage with constraints".

A change proposes to remove these projects from upper-constraints (for a
different reason); https://review.openstack.org/#/c/552865, which adds
other projects to global-requirements, explicitly postpones their
addition to upper-constraints to a later step.

Perhaps neutron and horizon should be removed from upper-constraints
for now? (ie restore https://review.openstack.org/#/c/553030 ?)
Jeffrey Zhang, 2018-03-16 18:31:
> recently, a new patch is merged[0]. It adds neutron and horizon
> itself into
> upper-constraints.txt. But this will break installing horizon and
> neutron with
> upper-constraints.txt. 
> 
> Now it breaks the kolla's master branch patch. And have to remove the
> "horizon"
> and "neutron" in the files. check[1][2]. 
> 
> The easier way to re-produce this is
> 
>   git clone https://github.com/openstack/horizon.git
>   cd horizon
> pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt .
> 
> So the question is, is this expected? if so, what's the correct way
> to install horizon develop
> branch with upper-constraints.txt file?
> 
> 
> [0] https://review.openstack.org/#/c/550475/
> [1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
> [2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
> 
> 
> -- 
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me

-Thomas



[openstack-dev] [requirements][kolla] new requirements change on upper-constraints.txt break horizon & neutron

2018-03-16 Thread Jeffrey Zhang
Recently, a new patch was merged[0]. It adds neutron and horizon themselves
to upper-constraints.txt, but this breaks installing horizon and neutron
with upper-constraints.txt.

Now it breaks Kolla's master branch patches, and we have to remove
"horizon" and "neutron" from the files; see [1][2].

The easiest way to reproduce this is:

  git clone https://github.com/openstack/horizon.git
  cd horizon
  pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt .

So the question is: is this expected? If so, what is the correct way to
install the horizon development branch with the upper-constraints.txt file?


[0] https://review.openstack.org/#/c/550475/
[1] https://review.openstack.org/#/c/549456/4/docker/neutron/neutron-base/Dockerfile.j2
[2] https://review.openstack.org/#/c/549456/4/docker/horizon/Dockerfile.j2
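[Editorial note: one common style of workaround, sketched here in Python for clarity, is to filter the package being installed out of a local copy of the constraints file before passing that copy to pip with -c. The pins and helper name below are illustrative, not the real file contents or any kolla-approved fix.]

```python
# Sketch: drop a project's own pin from a copy of upper-constraints
# before using that copy with "pip install -c <file> .".
# The constraint lines here are invented for illustration.
constraints = ["horizon===13.0.0", "sphinx===1.6.6"]

def without_pin(lines, package):
    """Return the constraint lines minus the exact pin for `package`."""
    prefix = package + "==="
    return [line for line in lines if not line.startswith(prefix)]

filtered = without_pin(constraints, "horizon")
print("\n".join(filtered))
# then write `filtered` to a file and run: pip install -c that-file .
```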


-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me


[openstack-dev] [ptg][sig][upgrades] Upgrade SIG

2018-03-16 Thread James Page
Hi All

I finally got round to writing up my summary of the Upgrades session at the
PTG in Dublin (see [0]).

One outcome of that session was to form a new SIG centered on Upgrading
OpenStack - I'm pleased to announce that the SIG has been formally accepted!

The objective of the Upgrade SIG is to improve the overall upgrade process
for OpenStack Clouds, covering both offline ‘fast-forward’ upgrades and
online ‘rolling’ upgrades, by providing a forum for cross-project
collaboration between operators and developers to document and codify best
practice for upgrading OpenStack.

If you are interested in participating in the SIG please add your details
to the wiki page under 'Interested Parties':

  https://wiki.openstack.org/wiki/Upgrade_SIG

I'll be working with the other SIG leads to setup regular IRC meetings in
the next week or so - we expect to alternate between slots that are
compatible with all time zones.

Regards

James

[0]
https://javacruft.wordpress.com/2018/03/16/winning-with-openstack-upgrades/
[1] https://governance.openstack.org/sigs/


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-03-16 Thread Andreas Jaeger
Thanks for the proposal, Doug. I need an example to understand how
things will work out...

so, let me use a real-life example (version numbers are made up):

openstackdocstheme uses sphinx and needs sphinx 1.6.0 or higher but
knows version 1.6.7 is broken.

So, openstackdocstheme would add to its requirements file:
sphinx>=1.6.0,!=1.6.7

Any project might assume they work with an older version, and have in
their requirements file:
Sphinx>=1.4.0
openstackdocstheme

The global requirements file would just contain:
openstackdocstheme
sphinx!=1.6.7

The upper-constraints file would contain:
sphinx===1.7.1

If we need to block sphinx 1.7.x - as we do right now - , we only update
requirements repo to have in global requirements file:
openstackdocstheme
sphinx!=1.6.7,<1.7.0

and have in upper-constraints:
sphinx===1.6.6

But projects should *not* add the cap to their own requirements, like:
sphinx>=1.6.0,!=1.6.7,<1.7.0

Is that all correct?
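[Editorial note: as a quick sanity check on the specifier arithmetic in the example above, the logic can be sketched in plain Python. This is a toy for simple numeric versions only, not full PEP 440 semantics.]

```python
# Toy check that the upper-constraints pin satisfies both the
# project's and the global-requirements specifiers for sphinx.
def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(version, op, bound):
    """Evaluate one specifier; only the operators used above."""
    v, b = parse(version), parse(bound)
    return {">=": v >= b, "!=": v != b, "<": v < b, "===": v == b}[op]

project_spec = [(">=", "1.4.0")]                  # Sphinx>=1.4.0
global_spec = [("!=", "1.6.7"), ("<", "1.7.0")]   # sphinx!=1.6.7,<1.7.0

pin = "1.6.6"  # upper-constraints: sphinx===1.6.6
assert all(satisfies(pin, op, b) for op, b in project_spec + global_spec)

# 1.6.7 and 1.7.1 would both be rejected by the capped global spec:
assert not all(satisfies("1.6.7", op, b) for op, b in global_spec)
assert not all(satisfies("1.7.1", op, b) for op, b in global_spec)
```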

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] OpenStack Ansible Disk requirements [docs] [osa]

2018-03-16 Thread Miguel Angel Ajo Pelayo
Right, that's a little absurd, 1 TB? :-) I completely agree.

They could live with almost anything, but I'd try to estimate minimums
across distributions. For example, an RDO test deployment with containers
looks like:

(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.8 "sudo df -h ; sudo free -h;"

Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2        50G  7.4G   43G  15% /
devtmpfs        2.9G     0  2.9G   0% /dev
[...]
tmpfs           581M     0  581M   0% /run/user/1000

              total    used    free  shared  buff/cache  available
Mem:           5.7G    1.1G    188M    2.4M        4.4G       4.1G
Swap:            0B      0B      0B

That looks rather lightweight. We need to consider logging space, etc.
I'd say 20 GB could be enough, without considering instance disks?



On Fri, Mar 16, 2018 at 9:39 AM Jean-Philippe Evrard <
jean-phili...@evrard.me> wrote:

> Hello,
>
> That's what it always was, but it was hidden in the pages. Now that I
> refactored the pages to be more visible, you spotted it :)
> Congratulations!
>
> More seriously, I'd like to remove that requirement, showing people
> can do whatever they like. It all depends on how/where they store
> images, ephemeral storage...
>
> Will commit a patch today.
>
> Best regards,
> Jean-Philippe Evrard
>
>
>
> On 15 March 2018 at 18:31, Gordon, Kent S
>  wrote:
> > Compute host disk requirements for Openstack Ansible seem high in the
> > documentation.
> >
> > I think I have used smaller compute hosts in the past.
> > Did something change in Queens?
> >
> >
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html
> >
> >
> > Compute hosts
> >
> > Disk space requirements depend on the total number of instances running
> on
> > each host and the amount of disk space allocated to each instance.
> >
> > Compute hosts must have a minimum of 1 TB of disk space available.
> >
> >
> >
> >
> > --
> > Kent S. Gordon
> kent.gor...@verizonwireless.com Work: 682-831-3601 Mobile: 817-905-6518
> >
> >


Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Claudiu Belu
Interesting.

I'll take a look as well (Winstackers). Just an FYI: SIGHUP doesn't exist on
Windows, so for services like nova-compute, neutron-hyperv-agent,
neutron-ovs-agent, and ceilometer-polling we'd have to use something else.
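[Editorial note: a rough sketch of the kind of cross-platform fallback this implies; signal names are probed with hasattr since SIGHUP is POSIX-only and Python on Windows exposes SIGBREAK (Ctrl+Break) instead. This is illustrative only; the actual oslo mechanism may differ.]

```python
import signal

DEBUG = {"enabled": False}

def toggle_debug(signum, frame):
    # Handler invoked when the chosen "toggle debug" signal arrives.
    DEBUG["enabled"] = not DEBUG["enabled"]

if hasattr(signal, "SIGHUP"):
    # POSIX platforms: the conventional "reload config" signal.
    signal.signal(signal.SIGHUP, toggle_debug)
elif hasattr(signal, "SIGBREAK"):
    # Windows: SIGBREAK is one possible stand-in for console services.
    signal.signal(signal.SIGBREAK, toggle_debug)
```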

Best regards,

Claudiu Belu


From: Jean-Philippe Evrard [jean-phili...@evrard.me]
Sent: Friday, March 16, 2018 11:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the 
debug option at runtime

Hello,

For OpenStack-Ansible, we don't need to do anything for that community
goal.  I am not sure how we can remove our name from the storyboard,
so I just inform you here.

Jean-Philippe Evrard (evrardjp)

On 28 February 2018 at 05:27, ChangBo Guo  wrote:
> Hi ALL,
>
> TC approved the  goal [0]  a week ago ,  so it's time to finish the work. we
> also have a short discussion in oslo meeting  at PTG, find more details in
> [1] ,
> we use storyboard to check the goal in
> https://storyboard.openstack.org/#!/story/2001545.  It's appreciated PTL set
> the owner in time .
> Feel free to reach me( gcb) in IRC if you have any questions.
>
>
> [0] https://review.openstack.org/#/c/534605/
> [1] https://etherpad.openstack.org/p/oslo-ptg-rocky  From line 175
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>



Re: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime

2018-03-16 Thread Jean-Philippe Evrard
Hello,

For OpenStack-Ansible, we don't need to do anything for that community
goal.  I am not sure how we can remove our name from the storyboard,
so I just inform you here.

Jean-Philippe Evrard (evrardjp)

On 28 February 2018 at 05:27, ChangBo Guo  wrote:
> Hi ALL,
>
> TC approved the  goal [0]  a week ago ,  so it's time to finish the work. we
> also have a short discussion in oslo meeting  at PTG, find more details in
> [1] ,
> we use storyboard to check the goal in
> https://storyboard.openstack.org/#!/story/2001545.  It's appreciated PTL set
> the owner in time .
> Feel free to reach me( gcb) in IRC if you have any questions.
>
>
> [0] https://review.openstack.org/#/c/534605/
> [1] https://etherpad.openstack.org/p/oslo-ptg-rocky  From line 175
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>



Re: [openstack-dev] [all][api] POST /api-sig/news

2018-03-16 Thread Chris Dent


Meta: When responding to lists, please do not cc individuals; just
respond to the list. Thanks, response within.

On Fri, 16 Mar 2018, Gilles Dubreuil wrote:

In order to continue and progress on the API Schema guideline [1], as
mentioned in [2], to make APIs more machine-discoverable, as also
discussed during [3]:

Unfortunately, until a new or a second meeting time slot has been
allocated, this will, inconveniently for everyone, have to be done by email.


I'm sorry that the meeting time is excluding you and others, but our
efforts to have either a second meeting or to change the time have
met with limited response (except from you).

In any case, the meetings are designed to be checkpoints where we
resolve stuck questions and note where we are on things. It is
better for most of the work to be done in emails and on reviews, as
that's the most inclusive, and is less dependent on time-related
variables.

So moving the discussion about schemas here is the right thing and
the fact that it hasn't happened (until now) is the reason for what
appears to be a rather lukewarm reception from the people writing
the API-SIG newsletter: if there's no traffic on either the gerrit
review or here in email then there's no evidence of demand. You're
asserting here that there is; that's great.

Of course new features have to be decided (voted on) by the community, but
how does that work when there are not enough people voting?
It seems unfair to decide not to move forward and ignore the request because
the other people interested are not participating at this level.


In a world of limited resources we can't impose work on people. The
SIG is designed to be a place where people can come to make progress
on API-related issues. If people don't show up, progress can't be
made. Showing up doesn't have to mean show up at an IRC meeting. In
fact I very much hope that it never means that. Instead it means
writing things (like your email message) and seeking out
collaborators to push your idea(s) forward.

It's very important to consider the fact that "I" am representing more than
just myself: an OpenStack integration team, whose members are supporting me,
and our work impacts other teams involved in their open source product
consuming OpenStack. I'm sorry if I haven't made this clearer from the
beginning; I guess I'm still learning the participation process. So from
now on, I'm going to use "us" instead.


Can some of those "us" show up on the mailing list, the gerrit
reviews, and the prototype work that Graham has done?

Also, from discussions with other developers from AT&T (OpenStack Summit in
Sydney) and SAP (Misty project) who are already using automation to consume
APIs, this is really needed.


Them too.

I've also mentioned the now well-known fact that no SDK has full-time
resources to maintain it (which was the initial trigger for us); more
automation is the only sustainable way to continue the journey.


Finally, how can we dare say no to more automation? Unless, of course, only
artisan work done by real hipsters is allowed ;)


Nobody is saying no to automation (as far as I'm aware). Some people
(e.g., me, but not just me) are saying "unless there's an active
community to do this work and actively publish about it and the
related use cases that drive it, it's impossible to make it a
priority". Some other people (also me, but not just me) are also
saying "schematizing API client generation is not my favorite thing"
but that's just a personal opinion and essentially meaningless
because yet other people are saying "I love API schema!".

What's missing, though, is continuous engagement on producing
children of that love.

Furthermore, API-Schema will be problematic for services that use 
microversions. If you have some insight or opinions on this, please add your 
comments to that review.


I understand microversion standardization (in OpenAPI) has not happened yet,
and may never, but that shouldn't preclude making progress.


Of course, but who are you expecting to make that progress? The
API-SIG's statement of "not something we're likely to pursue as a
part of guidance" is about the apparent unavailability of interested
people. If that changes, then the guidance situation probably changes
too.

But not writing guidance is different from providing a place to talk
about it. That's what a SIG is for. Think of it as a room with
coffee and snacks where it is safe to talk about anything related to
APIs. And that room exists in email just as much as it does in IRC
and at the PTG. Ideally it exists _most_ in email.

To summarize and clarify: we are talking about SDKs being able to build
their interfaces to OpenStack APIs in an automated but static way, from an
API schema generated by every project. Such an API schema is already built
in memory during API reference documentation generation and could be saved
in JSON format, for instance (see [5]).
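[Editorial note: purely to illustrate the idea of serializing the in-memory doc model, a dumped per-endpoint schema might look like the sketch below. Every field name here is invented, not any project's actual format.]

```python
import json

# Hypothetical serialization of one endpoint from an in-memory doc model.
endpoint = {
    "path": "/v2.1/servers/{server_id}",
    "method": "GET",
    "parameters": [{"name": "server_id", "in": "path", "type": "string"}],
    "min_version": "2.1",
    "max_version": None,
}
print(json.dumps(endpoint, indent=2))
```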


What do you see as the current roadblocks preventing this work from
continuing to 

Re: [openstack-dev] [infra][all] Anyone using our ubuntu-mariadb mirror?

2018-03-16 Thread Jean-Philippe Evrard
Hello,

We were using it until a couple of weeks ago, when 10.1.31 came out.
10.1.31 had issues with clustering, so we moved to mirror a specific
point release (here 10.1.30) instead of tracking 10.1.
We haven't decided whether we'll move back to 10.1 when 10.1.32 is out.

You can remove it for now; I think we can discuss this again when
10.1.32 is out.

Best regards,
JP

On 14 March 2018 at 22:50, Ian Wienand  wrote:
> Hello,
>
> We discovered an issue with our mariadb package mirroring that
> suggests it hasn't been updating for some time.
>
> This would be packages from
>
>  http://mirror.X.Y.openstack.org/ubuntu-mariadb/10.<1|2>
>
> This was originally added in [1].  AFAICT from codesearch, it is
> currently unused.  We export the top-level directory in the mirror
> config scripts as NODEPOOL_MARIADB_MIRROR, which is not referenced in
> any jobs [2], and I couldn't find anything setting up apt repos
> pointing to it.
>
> Thus since it's not updating and nothing seems to reference it, I am
> going to assume it is unused and remove it next week.  If not, please
> respond and we can organise a fix.
>
> -i
>
> [1] https://review.openstack.org/#/c/307831/
> [2] 
> http://codesearch.openstack.org/?q=NODEPOOL_MARIADB_MIRROR=nope==
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack Ansible Disk requirements [docs] [osa]

2018-03-16 Thread Jean-Philippe Evrard
Hello,

That's what it always was, but it was hidden in the pages. Now that I
refactored the pages to be more visible, you spotted it :)
Congratulations!

More seriously, I'd like to remove that requirement and show that people
can size hosts however they like. It all depends on how/where they store
images, ephemeral storage, and so on.

Will commit a patch today.

Best regards,
Jean-Philippe Evrard



On 15 March 2018 at 18:31, Gordon, Kent S
 wrote:
> Compute host disk requirements for OpenStack Ansible seem high in the
> documentation.
>
> I think I have used smaller compute hosts in the past.
> Did something change in Queens?
>
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/overview-requirements.html
>
>
> Compute hosts
>
> Disk space requirements depend on the total number of instances running on
> each host and the amount of disk space allocated to each instance.
>
> Compute hosts must have a minimum of 1 TB of disk space available.
>
>
>
>
> --
> Kent S. Gordon
> kent.gor...@verizonwireless.com Work:682-831-3601 Mobile: 817-905-6518
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][vote] core nomination for caoyuan

2018-03-16 Thread duon...@vn.fujitsu.com
+1

From: Jeffrey Zhang [mailto:zhang.lei@gmail.com]
Sent: Monday, March 12, 2018 9:07 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [kolla][vote] core nomination for caoyuan

​​Kolla core reviewer team,

It is my pleasure to nominate caoyuan for kolla core team.

caoyuan's output has been fantastic over the last cycle, and he is the most
active non-core contributor on the Kolla project for the last 180 days[1]. He
focuses on configuration optimization and improving the pre-checks feature.

Consider this nomination a +1 vote from me.

A +1 vote indicates you are in favor of caoyuan as a candidate, a -1
is a veto. Voting is open for 7 days until Mar 12th, unless a unanimous
response is reached or a veto vote occurs sooner.

[1] http://stackalytics.com/report/contribution/kolla-group/180
--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tatu][Nova] Handling instance destruction

2018-03-16 Thread Juan Antonio Osorio
Having a vendordata interface that receives delete events would be quite nice.
Right now for novajoin we listen to the nova notifications for updates and
deletes; if this could be handled natively by vendordata, it would simplify
our codebase.

BR

On Fri, Mar 16, 2018 at 7:34 AM, Michael Still  wrote:

> Thanks for this. I read the README for the project after this and I do now
> realise you're using notifications for some of these events.
>
> I guess I'm still pondering whether it's reasonable to have everyone listen to
> notifications to build systems like these, or if we should send messages to
> vendordata to handle these actions. Vendordata is aimed at deployers, so
> having a simple and complete interface seems important.
>
> There were also comments in the README about wanting to change the data
> that appears in the metadata server over time. I'm wondering how that maps
> into the configdrive universe. Could you explain those comments a bit more
> please?
>
> Thanks for your quick reply,
> Michael
>
>
>
>
> On Fri, Mar 16, 2018 at 2:18 PM, Pino de Candia <
> giuseppe.decan...@gmail.com> wrote:
>
>> Hi Michael,
>>
>> Thanks for your message... and thanks for your vendordata work!
>>
>> About your question, Tatu listens to events on the oslo message bus.
>> Specifically, it reacts to compute.instance.delete.end by cleaning up
>> per-instance resources. It also listens to project creation and user role
>> assignment changes. The code is at:
>> https://github.com/openstack/tatu/blob/master/tatu/notifications.py
>>
>> best,
>> Pino
>>
>>
>> On Thu, Mar 15, 2018 at 3:42 PM, Michael Still  wrote:
>>
>>> Heya,
>>>
>>> I've just stumbled across Tatu and the design presentation [1], and I am
>>> wondering how you handle cleaning up instances when they are deleted given
>>> that nova vendordata doesn't expose a "delete event".
>>>
>>> Specifically I'm wondering if we should add support for such an event to
>>> vendordata somehow, given I can now think of a couple of use cases for it.
>>>
>>> Thanks,
>>> Michael
>>>
>>> 1: https://docs.google.com/presentation/d/1HI5RR3SNUu1If-A5Z
>>> i4EMvjl-3TKsBW20xEUyYHapfM/edit#slide=id.p
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
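The notification handling Pino describes (reacting to compute.instance.delete.end on the oslo message bus) can be sketched roughly as below. The endpoint shape follows the usual oslo.messaging notification-listener pattern, but the cleanup logic and the direct invocation at the bottom are placeholders for illustration, not Tatu's actual code (that lives at the URL above).

```python
# Rough sketch of reacting to nova lifecycle notifications, in the
# style of an oslo.messaging notification endpoint. Only the
# compute.instance.delete.end event name comes from the thread; the
# cleanup behaviour here is illustrative.

class InstanceCleanupEndpoint:
    """Dispatches nova notifications to per-instance cleanup."""

    def __init__(self):
        self.cleaned_up = []  # instance IDs whose resources were released

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # oslo.messaging invokes info() for INFO-level notifications.
        if event_type == "compute.instance.delete.end":
            self._cleanup_instance(payload["instance_id"])

    def _cleanup_instance(self, instance_id):
        # A real service would revoke SSH certificates, DNS records, etc.
        self.cleaned_up.append(instance_id)

# In a real deployment this endpoint would be registered via
# oslo.messaging's get_notification_listener() on the notifications
# topic; here we invoke it directly just to show the dispatch.
if __name__ == "__main__":
    ep = InstanceCleanupEndpoint()
    ep.info({}, "compute.host1", "compute.instance.create.end",
            {"instance_id": "i-1"}, {})
    ep.info({}, "compute.host1", "compute.instance.delete.end",
            {"instance_id": "i-1"}, {})
    print(ep.cleaned_up)  # ['i-1']
```

This also illustrates Michael's point: every project wanting delete handling currently has to wire up a listener like this itself, which is what a delete event in the vendordata interface would avoid.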


[openstack-dev] [nova] Is a hook for validating resource names (instance name/hostname) required?

2018-03-16 Thread 양유석
Hi.

I have a question about operating an OpenStack cluster. Since this is my first
mail to the mailing list, if it's not the right place for this, sorry, and
please let me know the right one. :)

Our company operates OpenStack clusters and we have a legacy DNS system that
needs hostnames checked more strictly, including RFC 952 compliance. Also, our
operators demand unique hostnames within a region (we do not have tenant
networks yet, using an L3-only network). So for those reasons, we maintained
custom validation logic for instance names.

But as everyone knows, maintaining custom code is a burden, so I am
trying to find the right place to meet this need.

IMHO, since there is schema validation for every resource, if any
validation hook API were provided we could happily use it. Has anyone
experienced a similar issue? Any advice would be appreciated.

Thanks.
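As a rough illustration of the kind of check being described, a strict hostname validator in the spirit of RFC 952 (as amended by RFC 1123) might look like the sketch below. The exact rules the legacy DNS enforces are not specified in the mail, so this is only an assumption:

```python
# Hedged sketch of strict hostname validation in the spirit of
# RFC 952 / RFC 1123: labels of letters, digits and hyphens that
# neither start nor end with a hyphen, at most 63 chars per label,
# 255 chars total. A given legacy DNS may enforce different rules.
import re

_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name):
    if not name or len(name) > 255:
        return False
    return all(_LABEL.match(label) for label in name.split("."))

if __name__ == "__main__":
    print(is_valid_hostname("web-01.example.com"))  # True
    print(is_valid_hostname("-bad.example.com"))    # False
    print(is_valid_hostname("a" * 64))              # False: label too long
```

A validation hook in nova's schema layer could call a deployer-supplied function like this, which would also be the natural place to enforce the per-region uniqueness check.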
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev