Re: [Openstack-operators] Milan Ops Midcycle - Cells v2 session

2017-03-17 Thread Arne Wiebalck
Hi Matt,

> On 17 Mar 2017, at 01:41, Matt Riedemann  wrote:
> 
> On 3/14/2017 4:11 AM, Arne Wiebalck wrote:
>> A first list of topics for the Cells v2 session is available here:
>> 
>> https://etherpad.openstack.org/p/MIL-ops-cellsv2
>> 
>> Please feel free to add items you’d like to see discussed.
>> 
>> Thanks!
>> Belmiro & Arne
>> 
>> --
>> Arne Wiebalck
>> CERN IT
>> 
>> 
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> 
> 
> Hi,
> 
> I've gone through the MIL ops midcycle etherpad for cells v2 [1] and left 
> some notes, answers, links to the PTG cells v2 recap, and some 
> questions/feedback of my own.

Thanks for updating the etherpad.

> Specifically, there was a request that some nova developers could be at the 
> ops meetup session and as noted in the etherpad, the fact this was happening 
> came as a late surprise to several of us. The developers are already trying 
> to get funding to the PTG and the summit (if they are lucky), and throwing in 
> a third travel venue is tough, especially with little to no direct advance 
> notice. Please ping us in IRC or direct email, or put it on the weekly nova 
> meeting agenda as a reminder. Then we can try and get someone there if 
> possible.

Great, thanks. I think the cells v2 session at the MIL ops meetup was somewhat 
special in that none of the attendees (except for me) were using cells v1, and 
only one site was already on Mitaka and had hence seen the first signs of v2 in 
its deployment. So, while these sessions usually live from people sharing their 
experiences, this one was more about the concept of cells and their advantages 
in general, plus some theory about v2 that I had prepared by reading through 
the release notes. That’s where I thought that for changes that are 2 or 3 
releases away for most operators, but that will be mandatory, a developer would 
be in a much better position to give that overview and answer specific 
questions than I was. This is of course not limited to nova, and I gave that 
feedback to Melvin as well for future ops meetups.

Maybe it was simply a little too early for a cells v2 session :-)

> 
> If you're going to be in Boston for the Forum and are interested in talking 
> about Nova, our topic brainstorming etherpad is here [2].
> 
> [1] https://etherpad.openstack.org/p/MIL-ops-cellsv2
> [2] https://etherpad.openstack.org/p/BOS-Nova-brainstorming


As you probably saw on the etherpad, there is interest from the operators’ side 
in a discussion in Boston about cells v2; it would be great if we could make 
this happen.

Cheers,
 Arne

--
Arne Wiebalck
CERN IT




[Openstack-operators] [all] [quotas] Unified Limits Conceptual Spec RFC

2017-03-17 Thread Sean Dague
Background:

At the Atlanta PTG there was yet another attempt to get hierarchical
quotas more generally addressed in OpenStack. A proposal was put forward
that considered storing the limit information in Keystone
(https://review.openstack.org/#/c/363765/). While there were some
concerns on details that emerged out of that spec, the concept of the
move to Keystone was actually really well received in that room by a
wide range of parties, and it seemed to solve some interesting questions
around project hierarchy validation. We were perilously close to having
a path forward for a community request that's had a hard time making
progress over the last couple of years.

Let's keep that flame alive!


Here is the proposal for the Unified Limits in Keystone approach -
https://review.openstack.org/#/c/440815/. It is intentionally a high
level spec that largely lays out where the conceptual levels of control
will be. It intentionally does not talk about specific quota models
(there is a follow-on spec that does some of that, under the assumption
that the exact model(s) supported will take a while, and that the
keystone interfaces are probably not going to change substantially based
on the model).
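The spec itself deliberately stays at the conceptual level, but the kind of hierarchy validation Keystone could take over is easy to sketch. The model below — a strict model in which children's limits may not sum past their parent's — is only one illustrative possibility, not something the spec mandates, and the names and structure are my own assumptions:

```python
# Illustrative sketch only: one possible hierarchical limit model
# (children's limits must not sum past their parent's). The spec
# under review deliberately does not commit to a specific model.

class Project:
    def __init__(self, name, limit, children=None):
        self.name = name
        self.limit = limit          # e.g. max cores for this subtree
        self.children = children or []

def validate_tree(project):
    """Return True if, at every node, the children's limits fit
    within the parent's limit."""
    child_total = sum(c.limit for c in project.children)
    if child_total > project.limit:
        return False
    return all(validate_tree(c) for c in project.children)

root = Project("cloud", 100, [
    Project("dept-a", 60, [Project("team-a1", 40), Project("team-a2", 20)]),
    Project("dept-b", 40),
])
print(validate_tree(root))  # True: 60 + 40 <= 100 and 40 + 20 <= 60
```

One appeal of centralizing limits in Keystone is exactly that this kind of tree-wide validation can happen in one place instead of being re-implemented per service.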

We're shooting for a 2 week comment cycle here to then decide if we can
merge and move forward during this cycle or not. So please
comment/question now (either in the spec or here on the mailing list).

It is especially important that we get feedback from teams that have
limits implementations internally, as well as any that have started on
hierarchical limits/quotas (Cinder is, I believe, the only one so far).

Thanks for your time, and look forward to seeing comments on this.

-Sean

-- 
Sean Dague
http://dague.net



[Openstack-operators] Thank you for the ops-midcycle in Milano

2017-03-17 Thread Saverio Proto
Hello !

Thank you for the great event. Mariano & all the people from Milano did an
excellent job.

Thanks to all of you who helped moderate the sessions and contribute to the
etherpads.

It was really a great event and I am looking forward to the next ones.

Cheers,

Saverio



[Openstack-operators] Fwd: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Shamail Tahir
Hi Operators and Working Groups,

Please see the thread below... Keystone is looking for use cases related to the 
"reseller"/hierarchical multi-tenancy capabilities.  I'm sure your input would 
be appreciated!

Regards,
Shamail 


Begin forwarded message:

> From: Lance Bragstad 
> Date: March 16, 2017 at 10:10:03 PM GMT+1
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Subject: [openstack-dev] [keystone][all] Reseller - do we need it?
> Reply-To: "OpenStack Development Mailing List \(not for usage questions\)" 
> 
> 
> Hey folks,
> 
> The reseller use case [0] has been popping up frequently in various 
> discussions [1], including unified limits.
> 
> For those who are unfamiliar with the reseller concept, it came out of early 
> discussions regarding hierarchical multi-tenancy (HMT). It essentially allows 
> a certain level of opaqueness within project trees. This opaqueness would 
> make it easier for providers to "resell" infrastructure, without having 
> customers/providers see all the way up and down the project tree, hence it 
> was termed reseller. Keystone originally had some ideas of how to implement 
> this after the HMT implementation laid the ground work, but it was never 
> finished.
> 
> With it popping back up in conversations, I'm looking for folks who are 
> willing to represent the idea. Participating in this thread doesn't mean 
> you're on the hook for implementing it or anything like that. 
> 
> Are you interested in reseller and willing to provide use-cases?
> 
> 
> 
> [0] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Milan Ops Midcycle

2017-03-17 Thread Melvin Hillsman
Hey everyone,

I want to send a big thank you to everyone who participated in the Midcycle!

To our sponsors: we appreciate you providing the venue, food, and all the 
other logistics that went into making the event a success.

Thank you to every moderator who dealt with me nagging them about their 
sessions and getting some actionable items out of their discussions; I have one 
more request coming :)

Additionally thank you to all the companies who sent their folks to the 
Midcycle:

Enter, Bloomberg, Nuage Networks, Intel, Cloudbase, Switch, and OpenStack

And to you folks who attended in person and remotely: we are very grateful 
that we sold every ticket, and for all the Italian food, the courtyard side 
chats, the networking, the collaboration, and so much more. Be sure to 
communicate the value you got out of attending, but most importantly let us 
work together outside of the Midcycle to accomplish what we can.

Remember OSOps meeting in #openstack-meeting-5 on 03/27/17 at 1400 UTC

--
Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center
mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
Learner | Ideation | Belief | Responsibility | Command
http://osic.org



Re: [Openstack-operators] ceph rbd root disk unexpected deletion

2017-03-17 Thread Saverio Proto
Hello Mike,

What version of OpenStack? Is the instance booting from an ephemeral disk or
from a Cinder volume?

When you boot from volume, that volume is the root disk of your instance. The
user could have selected "Delete Volume on Instance Delete", which can be
chosen when creating a new instance.

Saverio

2017-03-13 15:47 GMT+01:00 Mike Lowe :
> Over the weekend a user reported that his instance was in a stopped state and 
> could not be started, on further examination it appears that the vm had 
> crashed and the strange thing is that the root disk is now gone.  Has anybody 
> come across anything like this before?
>
> And why on earth is it attempting deletion of the rbd device without deletion 
> of the instance?
>
> 2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
> failed
> 2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
> failed
> 2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
> failed


Re: [Openstack-operators] ceph rbd root disk unexpected deletion

2017-03-17 Thread Mike Lowe
This was Newton, booting from an ephemeral disk. There were no delete events in 
the nova api database, just an unexpected stop when the kernel OOM killer got 
qemu.


> On Mar 17, 2017, at 8:28 AM, Saverio Proto  wrote:
> 
> Hello Mike,
> 
> what version of openstack ?
> is the instance booting from ephemeral disk or booting from cinder volume ?
> 
> When you boot from volume, that will be the root disk of your
> instance. The user could have clicked on "Delete Volume on Instance
> Delete". It can be selected when creating a new instance.
> 
> Saverio
> 
> 2017-03-13 15:47 GMT+01:00 Mike Lowe :
>> Over the weekend a user reported that his instance was in a stopped state 
>> and could not be started, on further examination it appears that the vm had 
>> crashed and the strange thing is that the root disk is now gone.  Has 
>> anybody come across anything like this before?
>> 
>> And why on earth is it attempting deletion of the rbd device without 
>> deletion of the instance?
>> 
>> 2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
>> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
>> failed
>> 2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
>> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
>> failed
>> 2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] 
>> rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms 
>> failed


Re: [Openstack-operators] [nova] [neutron] Hooks for instance actions like creation, deletion

2017-03-17 Thread Jay Pipes

On 03/16/2017 10:48 PM, Masha Atakova wrote:

Hi everyone,

Is there any up-to-date functionality in nova / neutron which allows running
some additional code triggered by instance changes, like creating or deleting
an instance?

I see that nova hooks are deprecated as of Nova 13:

https://github.com/openstack/nova/blob/master/nova/hooks.py#L19

While it's hard to find the reason for this deprecation, I also struggle
to find whether there's any up-to-date alternative to those hooks.


The hooks were for internal (to Nova) code triggers and made the 
behaviour of Nova potentially inconsistent between deployments, which 
limited interoperability.


The way to trigger additional code running on changes to instance state 
is to listen on the outbound Nova notifications message queue topic.


Listen to the notifications queue topic for instance.create[.start|end] 
and instance.delete events. You can read more about the notifications 
queue and how to set up a subscriber here:


http://alesnosek.com/blog/2015/05/25/openstack-nova-notifications-subscriber/
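A subscriber along those lines mostly boils down to unwrapping the oslo envelope and filtering on event_type. The exact on-wire shape varies by notification format and release, so treat the following as a hedged sketch of the parsing step rather than the canonical consumer (the blog post above covers wiring up the actual AMQP listener):

```python
import json

# Hedged sketch: unwrap a Nova notification as it arrives off the
# queue and pick out instance create/delete events. Legacy
# (unversioned) notifications are often wrapped in an 'oslo.message'
# envelope whose value is itself a JSON string.

def parse_notification(raw_body):
    body = raw_body
    if "oslo.message" in body:
        body = json.loads(body["oslo.message"])
    return body.get("event_type", ""), body.get("payload", {})

def is_instance_lifecycle_event(event_type):
    return event_type.startswith(("compute.instance.create",
                                  "compute.instance.delete"))

# Example envelope, shaped like a legacy notification (the payload
# fields here are assumptions for illustration):
raw = {"oslo.message": json.dumps({
    "event_type": "compute.instance.create.end",
    "payload": {"instance_id": "4367a2e4-d704-490d-b3a6-129b9465cd0d"},
})}
event, payload = parse_notification(raw)
print(event, is_instance_lifecycle_event(event))
```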

Best,
-jay



Re: [Openstack-operators] [nova] [neutron] Hooks for instance actions like creation, deletion

2017-03-17 Thread Matt Riedemann

On 3/17/2017 8:50 AM, Jay Pipes wrote:

On 03/16/2017 10:48 PM, Masha Atakova wrote:

Hi everyone,

Is there any up-to-date functionality in nova / neutron which allows running
some additional code triggered by instance changes, like creating or deleting
an instance?

I see that nova hooks are deprecated as of Nova 13:

https://github.com/openstack/nova/blob/master/nova/hooks.py#L19

While it's hard to find the reason for this deprecation, I also struggle
to find whether there's any up-to-date alternative to those hooks.


The hooks were for internal (to Nova) code triggers and made the
behaviour of Nova potentially inconsistent between deployments, which
limited interoperability.

The way to trigger additional code running on changes to instance state
is to listen on the outbound Nova notifications message queue topic.

Listen to the notifications queue topic for instance.create[.start|end]
and instance.delete events. You can read more about the notifications
queue and how to set up a subscriber here:

http://alesnosek.com/blog/2015/05/25/openstack-nova-notifications-subscriber/


Best,
-jay



There is also dynamic vendordata v2 which was added in Newton:

https://docs.openstack.org/developer/nova/vendordata.html

We got feedback during the Pike PTG from some people using hooks during 
instance create that dynamic vendordata now serves their needs.
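A dynamic vendordata v2 backend is just an HTTP service that Nova POSTs instance context to, and whose JSON answer ends up under the instance's metadata. The sketch below is a minimal stand-in, not a production service; the request field name (project-id) follows the nova vendordata docs, while the reply keys are made up for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VendordataHandler(BaseHTTPRequestHandler):
    """Answers a vendordata-style POST with extra metadata for the
    booting instance (reply keys here are illustrative)."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        context = json.loads(self.rfile.read(length) or b"{}")
        reply = {
            "backup_policy": "daily",
            "owning_project": context.get("project-id", "unknown"),
        }
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Exercise the handler the way Nova would, on a random free port:
server = HTTPServer(("127.0.0.1", 0), VendordataHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = urllib.request.Request(
    "http://127.0.0.1:%d/" % server.server_address[1],
    data=json.dumps({"project-id": "demo", "instance-id": "42"}).encode(),
    headers={"Content-Type": "application/json"})
answer = json.loads(urllib.request.urlopen(req).read())
print(answer)  # {'backup_policy': 'daily', 'owning_project': 'demo'}
server.shutdown()
```

Whatever JSON the service returns shows up to the guest via the metadata API, which is why this covers many of the create-time hook use cases.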


If vendordata or notifications do not serve your use case, we suggest 
you explain your use case in the open so the community can try to see if 
it's something that has already been solved or is something worth 
upstreaming because it's a common problem shared by multiple deployments.


--

Thanks,

Matt



Re: [Openstack-operators] Milan Ops Midcycle

2017-03-17 Thread Mariano Cunietti
Thank you Melvin,
great job organizing all of this!
And thanks Robert for sharing this: https://www.youtube.com/watch?v=U6RGO0zfndw
See you in Boston!

M.


NOTICE: my email address has changed from mcunie...@enter.it to 
mcunie...@enter.eu. The former address will keep working for a few weeks 
before it is shut down.

Sent from mobile device

> On 17 Mar 2017, at 13:20, Melvin Hillsman  wrote:
> 
> Hey everyone,
> 
> I want to send a big thank you to everyone who participated in the Midcycle!
> 
> To our sponsors we appreciate you making a venue, food, and all the other 
> logistics that went into making the event a success.
> 
> Thank you to every moderator who dealt with me nagging them about their 
> sessions and getting some actionable items out of their discussions; I have 
> one more request coming :)
> 
> Additionally thank you to all the companies who sent their folks to the 
> Midcycle:
> 
> Enter, Bloomberg, Nuage Networks, Intel, Cloudbase, Switch, and OpenStack
> 
> And to you folks who attended in person and remotely, we are very greatful 
> that we sold every ticket, ate lots of Italian food, the courtyard side 
> chats, networking, collaboration, and so much more. Be sure to communicate 
> the value you got out of attending but most importantly let us work together 
> outside of the Midcycle to accomplish what we can.
> 
> Remember OSOps meeting in #openstack-meeting-5 on 03/27/17 at 1400 UTC
> 
> --
> Melvin Hillsman
> Ops Technical Lead
> OpenStack Innovation Center
> mrhills...@gmail.com
> phone: (210) 312-1267
> mobile: (210) 413-1659
> Learner | Ideation | Belief | Responsibility | Command
> http://osic.org


[Openstack-operators] Third Bi-Annual Community Contributor Awards

2017-03-17 Thread Kendall Nelson
Hello All!

As we approach the Boston Summit and Forum, we also approach another round
of Community Contributor Awards (CCAs)! Nominations are open now through
April 23rd. So please nominate those you look to for guidance, those who
ask the challenging questions, those who don't get enough recognition for
their hard work, or anyone else you think deserves a medal! You can nominate
more than one person as well :)

Winners will be announced publicly at the ceremony, but also notified
individually a week or so prior to the Summit.

Here is the nomination form:
https://openstackfoundation.formstack.com/forms/cca_nominations_boston

See you all in Boston!

-Kendall Nelson (diablo_rojo)


[Openstack-operators] [nova] Should we delete the (unexposed) os-pci API?

2017-03-17 Thread Matt Riedemann
I was working on writing a spec for a blueprint [1] that would have touched 
on the os-pci API [2], and got as far as documenting that it's not even 
documented [3] when Alex pointed out that the API is not even enabled [4][5].


It turns out that the os-pci API was added in the Nova V3 API and pulled 
back out, and [5] was a tracking bug to add it back in with a 
microversion, and that never happened.


Given the ugliness described in [3], and that I think our views on 
exposing this type of information have changed [6] since it was 
originally added, I'm proposing that we just delete the API code.


The API code itself was added back in Icehouse [7].

I tend to think that if someone cared about needing this information in the 
REST API, they would have asked for it by now. As it stands, it's just 
technical debt, and even if we did expose it, there are existing issues in 
the API, like the fact that the os-hypervisors extension just takes the 
compute_nodes.pci_stats dict and dumps it to JSON out of the REST API with no 
control over the keys in the response. That means if we ever change the 
fields in the PciDevicePool object, we implicitly introduce a 
backward-incompatible change in the REST API.
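The hazard of dumping an internal object's dict straight into an API response is easy to demonstrate: any field added to the object later leaks onto the wire, whereas an explicit serializer pins the contract. The class and helper names below are illustrative, not Nova's actual code:

```python
# Illustrative only: why serializing an internal object by dumping
# its dict turns every internal field change into an API change.

class PciDevicePoolish:
    def __init__(self, **fields):
        self.__dict__.update(fields)

def implicit_response(pool):
    # The os-hypervisors style dump: every attribute, whatever it
    # is, ends up in the response.
    return dict(pool.__dict__)

def explicit_response(pool):
    # A pinned contract: only these keys, no matter what fields the
    # object grows internally.
    return {"vendor_id": pool.vendor_id,
            "product_id": pool.product_id,
            "count": pool.count}

old = PciDevicePoolish(vendor_id="8086", product_id="1520", count=4)
new = PciDevicePoolish(vendor_id="8086", product_id="1520", count=4,
                       numa_node=1)  # internal field added later

print(sorted(implicit_response(new)))  # numa_node leaks into the API
print(sorted(explicit_response(new)))  # contract unchanged
```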


So I move that we delete the (dead) code. Are there good reasons not to?

[1] 
https://blueprints.launchpad.net/nova/+spec/service-hyper-pci-uuid-in-api
[2] 
https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/pci.py

[3] https://bugs.launchpad.net/nova/+bug/1673869
[4] https://github.com/openstack/nova/blob/15.0.0/setup.cfg#L132
[5] https://bugs.launchpad.net/nova/+bug/1426241
[6] 
https://docs.openstack.org/developer/nova/policies.html?highlight=metrics#metrics-gathering

[7] https://review.openstack.org/#/c/51135/

--

Thanks,

Matt



[Openstack-operators] Boston Forum Brainstorming

2017-03-17 Thread Melvin Hillsman
Hey everyone!

Please be aware of the following dates if you have not heard already. I
know we mentioned this extensively at the Midcycle this week and yep, here
we are again! Take note that the deadline for the brainstorming phase is
fast approaching.

March 20: end of brainstorming phase, opening of formal submission tool
EOD April 2: deadline for topic submission
April 10: publication of the schedule

Take just a few moments to drop your thoughts in the etherpad(s) found at:

https://wiki.openstack.org/wiki/Forum/Boston2017

In particular:
https://etherpad.openstack.org/p/BOS-UC-brainstorming

If you have any questions, reply back to this thread or reach out to any of
us.

-- 
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command


[Openstack-operators] Fwd: [Openstack] nova-network -> neutron migration docs and stories?

2017-03-17 Thread Melvin Hillsman
-- Forwarded message --
From: Joe Topjian 
Date: Fri, Mar 17, 2017 at 10:52 PM
Subject: Re: [Openstack] nova-network -> neutron migration docs and stories?
To: Andrew Bogott 
Cc: "openst...@lists.openstack.org" 


Hi Andrew,

NeCTAR published a suite of scripts for doing a nova-network to neutron
migration: https://github.com/NeCTAR-RC/novanet2neutron

IIRC, another organization reported success with these scripts a few months
ago on the openstack-operators list.

I'm currently doing some trial runs and all looks good. I had to make some
slight modifications to account for IPv6 and floating IPs, but the scripts
are very simple and readable, so it was easy to do. I'll probably post
those modifications to Github in the next week or two.

We'll be doing the actual migration in May.

Hope that helps,
Joe


On Fri, Mar 17, 2017 at 2:18 PM, Andrew Bogott 
wrote:

> Googling for nova-network migration advice gets me a lot of hits but
> many are fragmentary and/or incomplete[1][2]  I know that lots of people
> have gone through this process, though, and that there are probably as many
> different solutions as there are migration stories.
>
> So:  If you have done this migration, please send me links! Blog
> posts, docpages that you found useful, whatever you have to offer.  We have
> lots of ideas about how to move forward, but it's always nice to not repeat
> other people's mistakes.  We're running Liberty with flat dhcp and floating
> IPs.
>
> Thanks!
>
> -Andrew
>
> [1] https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo#How_to_test_migration_process
>  ' TODO - fill in the migration process script here'
>
> [2] https://www.slideshare.net/julienlim/openstack-nova-network-to-neutron-migration-survey-results
>  Slide 4: 'Develop tools to facilitate migration.'  Did they?
>


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openst...@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack




-- 
Kind regards,

Melvin Hillsman
Ops Technical Lead
OpenStack Innovation Center

mrhills...@gmail.com
phone: (210) 312-1267
mobile: (210) 413-1659
http://osic.org

Learner | Ideation | Belief | Responsibility | Command