Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Juvonen, Tomi (Nokia - FI/Espoo)
+1 
Good work indeed.
>From: EXT John Garbutt [mailto:j...@johngarbutt.com] 
>Sent: Friday, November 06, 2015 5:32 PM
>To: OpenStack Development Mailing List
>Subject: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core
>
>Hi,
>
>I propose we add Alex Xu[1] to nova-core.
>
>Over the last few cycles he has consistently been doing great work,
>including some quality reviews, particularly around the API.
>
>Please respond with comments, +1s, or objections within one week.
>
>Many thanks,
>John
>
>[1]http://stackalytics.com/?module=nova-group&user_id=xuhj&release=all
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Qiao, Liyong
+1. Alex has worked on the Nova project for a long time, pushed a lot of API features in 
the last few cycles,
and spent lots of time doing reviews. I am glad to add my +1 for him.

BR, Eli(Li Yong)Qiao

-Original Message-
From: Ed Leafe [mailto:e...@leafe.com] 
Sent: Saturday, November 07, 2015 2:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

On Nov 6, 2015, at 9:32 AM, John Garbutt  wrote:

> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work, 
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.

I'm not a core, but would like to add my hearty +1.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-06 Thread Vikas Choudhary
+1 for "container-in-vm"

On Fri, Nov 6, 2015 at 10:48 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Fri, Nov 6, 2015 at 1:20 PM, Baohua Yang  wrote:
>
>> It does cause confusion to call container-inside-vm a nested
>> container.
>>
>> The "nested" term in container area usually means
>> container-inside-container.
>>
>
> I try to always put it as VM-nested container. But I probably slipped in
> some mentions.
>
>
>> we may refer to this (container-inside-vm) explicitly as a vm-holding
>> container.
>>
>
> container-in-vm?
>
>
>>
>> On Fri, Nov 6, 2015 at 12:13 PM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> @Gal, I was asking about the "container in nova vm" case.
>>> Not sure if you were referring to this case as the nested containers case. I
>>> guess the nested containers case would be "containers inside containers", and
>>> this could be hosted on a nova vm or a nova bm node. Is my understanding
>>> correct?
>>>
>>> Thanks Gal and Toni, for now I got the answer to my query related to the
>>> "container in vm" case.
>>>
>>> -Vikas
>>>
>>> On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:
>>>
 The current OVS binding proposals are not for nested containers.
 I am not sure if you are asking about that case or about the nested
 containers inside a VM case.

 For the nested containers, we will use Neutron solutions that support
 this kind of configuration, for example
 if you look at OVN you can define "parent" and "sub" ports, so OVN
 knows to perform the logical pipeline in the compute host
 and only perform VLAN tagging inside the VM (as Toni mentioned)

 If you need more clarification you can catch me on IRC as well and we
 can talk.

 On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
 choudharyvika...@gmail.com> wrote:

> Hi All,
>
> I would appreciate inputs on following queries:
> 1. Are we assuming nova bm nodes to be docker host for now?
>
> If Not:
>  - Assuming nova vm as docker host and ovs as networking
> plugin:
> This line is from the etherpad[1]: "Each driver would have
> an executable that receives the name of the veth pair that has to be bound
> to the overlay".
> Query 1:  As per the current OVS binding proposals by
> Feisky[2] and Diga[3], the vif seems to be bound to br-int on the VM. I am
> unable to understand how the overlay will work. AFAICT, neutron will
> configure br-tun only on the compute machine's OVS. How will the
> overlay (br-tun) configuration happen inside the VM?
>
>  Query 2: Are we having double encapsulation (both at the VM
> and the compute host)? Isn't it possible to bind the vif to the compute host's br-int?
>
>  Query 3: I did not see subnet tags for the network plugin
> being passed in any of the binding patches[2][3][4]. Don't we need that?
>
>
> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
> [2]  https://review.openstack.org/#/c/241558/
> [3]  https://review.openstack.org/#/c/232948/1
> [4]  https://review.openstack.org/#/c/227972/
>
>
> -Vikas Choudhary
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 --
 Best Regards ,

 The G.


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Mark Baker
Certainly the aim is to support upgrades between LTS releases.
Getting a meaningful keynote slot at an OpenStack summit is more of a
challenge.

On 6 Nov 2015 9:27 pm, "Jonathan Proulx"  wrote:
>
> On Fri, Nov 06, 2015 at 05:28:13PM +, Mark Baker wrote:
> :Worth mentioning that OpenStack releases that come out at the same time as
> :Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> :supported for 5 years by Canonical so are already kind of an LTS. Support
> :in this context means patches, updates and commercial support (for a fee).
> :For paying customers 3 years of patches, updates and commercial support for
> :April releases, (Kilo, O, Q etc..) is also available.
>
> 
> And Canonical will support a live upgrade directly from Essex to
> Icehouse and Icehouse to Mitaka?
>
> I'd love to see Shuttleworth do that as a live keynote, but only
> on a system with at least hundreds of nodes and many VMs...
> 
>
> That's where LTS falls down conceptually: we're struggling to make
> single-release upgrades work at this point.
>
> I do agree an LTS release would be great, but honestly OpenStack isn't
> mature enough for that yet.
>
> -Jon
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] Mitaka Priorities discussion mapped to launchpad

2015-11-06 Thread Tripp, Travis S
Hello Searchlighters,

I just wanted to let everybody know that I’ve captured the results of our 
priorities discussion on the Searchlight launchpad blueprints page.  In some 
cases, this meant creating a new blueprint (sometimes with only a little 
information).  Next week, it would be great if we can review this in our weekly 
meeting.

Thanks,
Travis



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Reminder: Team meeting on Monday at 2100 UTC

2015-11-06 Thread Armando M.
A kind reminder for next week's meeting.

Please add agenda items to the meeting here [1].

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Network/Meetings
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][security] Next steps: openstack-ansible-security

2015-11-06 Thread Jesse Pretorius
On Friday, 6 November 2015, Major Hayden  wrote:
>
> At this moment, openstack-ansible-security[1] is feature complete and all
> of the Ansible tasks and documentation for the STIGs are merged.  Exciting!


Excellent work, thank you!


> I've done lots of work to ensure that the role uses sane defaults so that
> it can be applied to the majority of OpenStack deployments without
> disrupting services.  It only supports Ubuntu 14.04 for now, but that's
> openstack-ansible's supported platform as well.


We're on a trajectory to get other platforms supported too, so I think that
work in this regard may as well get going. If there are parties interested
in adding role support for Fedora, Gentoo and others then I'd say that it
should be spec'd and can go ahead!


> I'd like to start by adding it to the gate-check-commit.sh script so that
> the security configurations are applied prior to running tempest.


While I applaud the idea, changing the current commit integration test is
probably not the best approach. We're in the middle of splitting the roles
out into their own repositories and also extending the gate checks into
multiple use-cases.

I think that the best option for now will be to add the implementation of
the security role as an additional use-case. Depending on the results there
we can figure out whether the role should be a default in all use cases.


-- 
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-06 Thread Tim Hinrichs
Congress allows users to write a policy that executes an action under
certain conditions.

The conditions can be based on any data Congress has access to, which
includes nova servers, neutron networks, cinder storage, keystone users,
etc.  We also have some Ceilometer statistics; I'm not sure about whether
it's easy to get the Keystone notifications that you're talking about
today, but notifications are on our roadmap.  If the user's login is
reflected in the Keystone API, we may already be getting that event.

The action could in theory be a mistral/heat API or an arbitrary script.
Right now we're set up to invoke any method on any of the python-clients
we've integrated with.  We've got an integration with heat but not
mistral.  New integrations are typically easy.

Happy to talk more.

Tim
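
For reference, a rough sketch (not Congress itself, and not existing code) of
the kind of notification-bus listener the quoted thread below discusses; the
event type check and the provision_user_project() helper are assumptions:

    from oslo_config import cfg
    import oslo_messaging


    def provision_user_project(user_id):
        # Hypothetical: call Heat/Mistral/keystoneclient here to create the
        # per-user project, role assignment and quota.
        pass


    class AuthEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Keystone emits CADF notifications; 'identity.authenticate' is the
            # login event (check the exact type against your deployment).
            if event_type == 'identity.authenticate':
                provision_user_project(payload.get('initiator', {}).get('id'))


    transport = oslo_messaging.get_notification_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [AuthEndpoint()], executor='threading')
    listener.start()
    listener.wait()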



On Fri, Nov 6, 2015 at 9:17 AM Doug Hellmann  wrote:

> Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
> > On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann 
> wrote:
> >
> > > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > > > > Can people help me work through the right set of tools for this
> use
> > > case
> > > > > > (has come up from several Operators) and map out a plan to
> implement
> > > it:
> > > > > >
> > > > > > Large cloud with many users coming from multiple Federation
> sources
> > > has
> > > > > > a policy of providing a minimal setup for each user upon first
> visit
> > > to
> > > > > > the cloud:  Create a project for the user with a minimal quota,
> and
> > > > > > provide them a role assignment.
> > > > > >
> > > > > > Here are the gaps, as I see it:
> > > > > >
> > > > > > 1.  Keystone provides a notification that a user has logged in,
> but
> > > > > > there is nothing capable of executing on this notification at the
> > > > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > > > >
> > > > > > 2.  Keystone does not have a workflow engine, and should not be
> > > > > > auto-creating projects.  This is something that should be
> performed
> > > via
> > > > > > a Heat template, and Keystone does not know about Heat, nor
> should
> > > it.
> > > > > >
> > > > > > 3.  The Mapping code is pretty static; it assumes a user entry
> or a
> > > > > > group entry in identity when creating a role assignment, and
> neither
> > > > > > will exist.
> > > > > >
> > > > > > We can assume a special domain for Federated users to have
> per-user
> > > > > > projects.
> > > > > >
> > > > > > So; lets assume a Heat Template that does the following:
> > > > > >
> > > > > > 1. Creates a user in the per-user-projects domain
> > > > > > 2. Assigns a role to the Federated user in that project
> > > > > > 3. Sets the minimal quota for the user
> > > > > > 4. Somehow notifies the user that the project has been set up.
> > > > > >
> > > > > > This last probably assumes an email address from the Federated
> > > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> > > authenticated
> > > > > > for any projects" error, and is stumped.
> > > > > >
> > > > > > How is quota assignment done in the other projects now?  What
> happens
> > > > > > when a project is created in Keystone?  Does that information
> gets
> > > > > > transferred to the other services, and, if so, how?  Do most
> people
> > > use
> > > > > > a custom provisioning tool for this workflow?
> > > > > >
> > > > >
> > > > > I know at Dreamhost we built some custom integration that was
> triggered
> > > > > when someone turned on the Dreamcompute service in their account
> in our
> > > > > existing user management system. That integration created the
> account
> > > in
> > > > > keystone, set up a default network in neutron, etc. I've long
> thought
> > > we
> > > > > needed a "new tenant creation" service of some sort, that sits
> outside
> > > > > of our existing services and pokes them to do something when a new
> > > > > tenant is established. Using heat as the implementation makes
> sense,
> > > for
> > > > > things that heat can control, but we don't want keystone to depend
> on
> > > > > heat and we don't want to bake such a specialized feature into heat
> > > > > itself.
> > > > >
> > > >
> > > > I agree, an automation piece that is built-in and easy to add to
> > > > OpenStack would be great.
> > > >
> > > > I do not agree that it should be Heat. Heat is for managing stacks
> that
> > > > live on and change over time and thus need the complexity of the
> graph
> > > > model Heat presents.
> > > >
> > > > I'd actually say that Mistral or Ansible are better choices for
> this. A
> > > > service which listens to the notification bus and triggered a
> workflow
> > > > defined somewhere in either Ansible playbooks or Mistral's workflow
> > > > language would simply run through the "skel" workflow for each user.
> > > >
> > > > The actual wor

Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-11-06 Thread Jesse Pretorius
On Friday, 6 November 2015, Major Hayden  wrote:
>
> I found a CA role[1] for Ansible on Galaxy, but it appears to be GPLv3
> code. :/


Considering that the role would not be imported into the OpenStack-Ansible
code tree, I don't think the license for this role would be an issue.
What matters more is whether the role is functional for the purpose of
building an integration test use-case.


-- 
Jesse Pretorius
mobile: +44 7586 906045
email: jesse.pretor...@gmail.com
skype: jesse.pretorius
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-11-06 Thread Doug Hellmann
One thing I forgot to mention in my original email was the discussion
about when we would have this themes conversation for the N cycle.
I had originally hoped we would discuss the themes online before
the summit, and that those would inform decisions about summit
sessions. Several other folks in the room made the point that we
were unlikely to come up with a theme so surprising that we would
add or drop a summit session from any existing planning, so having
the discussion in person at the summit to add background to the
other sessions for the week was more constructive. I'd like to hear
from some folks about whether that worked out this time, and then
we can decide closer to the N summit whether to use an email thread
or some other venue instead of (or in addition to) a summit session
in Austin.

I also plan to start some email threads this cycle after each
milestone to re-consider the themes and get feedback about how we're
making progress.  I hope the release liaisons, at least, will
participate in those discussions, and it would be great to have the
product working group involved as well.

As far as rolling upgrades, I know a couple of projects are thinking
about that this cycle. As I said in the summary of that part of the
session, it's not really a feature that we're going to implement
and call "done" so much as a shift in thinking about how we design
things in the future. Tracking the specs and blueprints for work
related to that across all projects would be helpful, especially
early in the cycle like this where feedback on requirements will
make the most difference.

Doug

Excerpts from Barrett, Carol L's message of 2015-11-06 21:32:11 +:
> Doug - Thanks for leading the session and this summary. What is your view on 
> next steps to establish themes for the N-release? And specifically around 
> rolling upgrades (my personal favorite).
> 
> Thanks
> Carol
> 
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com] 
> Sent: Friday, November 06, 2015 1:11 PM
> To: openstack-dev
> Subject: [openstack-dev] [all] summarizing the cross-project summit session 
> on Mitaka themes
> 
> At the summit last week one of the early cross-project sessions tried to 
> identify some common “themes” or “goals” for the Mitaka cycle. I proposed the 
> session to talk about some of the areas of work that all of our teams need to 
> do, but that fall by the wayside when we don't pull the whole community 
> together to focus attention on them. We had several ideas proposed, and some 
> lively discussion about them. The notes are in the etherpad [1], and I will 
> try to summarize the discussion here.
> 
> 1. Functional testing, especially of client libraries, came up as a result of 
> a few embarrassingly broken client releases during Liberty.  Those issues 
> were found and fixed quickly, but they exposed a gap in our test coverage.
> 
> 2. Adding tests useful for DefCore and similar interoperability testing was 
> suggested in part because of our situation in Glance, where many of the 
> image-related API tests actually talk to the Nova API instead of the Glance 
> API. We may have other areas where additional tests in tempest could 
> eventually find their way into the DefCore definition, ensuring more 
> interoperability between deployed OpenStack clouds.
> 
> 3. We talked for a while about being more opinionated in things like 
> architecture and deployment dependencies. I don’t think we resolved this one, 
> but I’m sure the discussion fed into the DLM discussion later that day in a 
> separate session.
> 
> 4. Improving consistency of quota management across projects came up.  We’ve 
> talked in the past about a separate quota management library or service, but 
> no one has yet stepped up to spearhead the effort to launch such a project.
> 
> 5. Rolling upgrades was a very popular topic, in the room and on the product 
> working group’s priority list. The point was made that this requires a shift 
> in thinking about how to design and implement projects, not just some simple 
> code changes that can be rolled out in a single cycle. I know many teams are 
> looking at addressing rolling upgrades.
> 
> 6. os-cloud-config support in clients was raised. There is a cross-project 
> spec at https://review.openstack.org/#/c/236712/ to cover this.
> 
> 7. "Fixing existing things as a priority over features” came up, and has been 
> a recurring topic of discussion for a few cycles now.
> The idea of having a “maintenance” cycle for all teams was floated, though 
> it might be tough to get everyone aligned to doing that at the same time.  
> Alternately, if we work out a way to support individual teams doing that we 
> could let teams schedule them as they feel they are useful. We could also 
> dedicate more review time to maintenance than features, without excluding 
> features entirely.
> There seemed to be quite a bit of support in the room for the general idea, 
> though making it actionable will take some more thought.

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Replies inline.


On Fri, Nov 6, 2015 at 1:48 PM, Salvatore Orlando 
wrote:

> More comments inline.
> I shall stop trying to be ironic (pun intended) in my posts.
>

:(


>
> Salvatore
>
> On 5 November 2015 at 18:37, Kyle Mestery  wrote:
>
>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
 Hi Salvatore,

 Thanks for the feedback. I agree with you that arbitrary JSON blobs will
 make IPAM much more powerful. Some other projects already do things like
 this.

>>>
>>> :( Actually, though "powerful" it also leads to implementation details
>>> leaking directly out of the public REST API. I'm very negative on this and
>>> would prefer an actual codified REST API that can be relied on regardless
>>> of backend driver or implementation.
>>>
>>
>> I agree with Jay here. We've had people propose similar things in Neutron
>> before, and I've been against them. The entire point of the Neutron REST
>> API is to not leak these details out. It dampens the strength of the
>> logical model, and it tends to have users become reliant on backend
>> implementations.
>>
>
> I see I did not manage to convey accurately irony and sarcasm in my
> previous post ;)
> The point was that thanks to a blooming number of extensions the Neutron
> API is already hardly portable. Blob attributes (or dict attributes, or
> key/value list attributes, or whatever does not have a precise schema) are
> a nail in the coffin, and also violate the only tenet Neutron has somehow
> managed to honour, which is being backend agnostic.
> And the fact that the port binding extension is pretty much that is not a
> valid argument, imho.
> On the other hand, I'm all in for extending DB schema and driver logic to
> suit all IPAM needs; at the end of the day that's what we do with plugins for
> all sorts of stuff.
>


Agreed. Filed an rfe bug: https://bugs.launchpad.net/neutron/+bug/1513981.
Spec coming up for review.



>
>
>
>>
>>
>>>
>>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
 'extras' arbitrary JSON field. This allows us to put any information in
 there that we think is important for us.

>>>
>>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>>> structured, not a Wild West free-for-all. The biggest problem with using
>>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>>> ability to evolve the API in a structured, versioned way. Instead of
>>> evolving the API using microversions, instead every vendor just jams
>>> whatever they feel like into the JSON blob over time. There's no way for
>>> clients to know what the server will return at any given time.
>>>
>>> Achieving consensus on a REST API that meets the needs of a variety of
>>> backend implementations is *hard work*, yes, but it's what we need to do if
>>> we are to have APIs that are viewed in the industry as stable,
>>> discoverable, and reliably useful.
>>>
>>
>> ++, this is the correct way forward.
>>
>
> Cool, but let me point out that experience has taught us that anything
> that is a result of a compromise between several parties following
> different agendas is bound to fail as it does not fully satisfy the
> requirements of any stakeholder.
> If this information is needed for making scheduling decisions based on
> network requirements, then it makes sense to expose this information also
> at the API layer (I assume there are also plans for making the scheduler
> *seriously* network aware). However, this information should have a
> well-defined schema with no leeway for 'extensions'; such a schema can evolve
> over time.
>
>
>> Thanks,
>> Kyle
>>
>>
>>>
>>> Best,
>>> -jay
>>>
>>> Best,
>>> -jay
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.


 On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
 mailto:salv.orla...@gmail.com>> wrote:

 Arbitrary blobs are a powerful tool to circumvent limitations of an
 API, as well as other constraints which might be imposed for
 versioning or portability purposes.
 The parameters that should end up in such blob are typically
 specific for the target IPAM driver (to an extent they might even
 identify a specific driver to use), and therefore an API consumer
 who knows what backend is performing IPAM can surely leverage it.

 Therefore this would make a lot of sense, assuming API portability
 and not leaking backend details are not a concern.
 The Neutron team API & DB lieutenants will be able to provide more
 input on this regard.

 In this case other approaches such as a vendor specific extension
 are not a solution - assuming your granularity level is the
 allocation pool; indeed allocation pools are not first-class neutron
 resources, and it is not therefore possible to have APIs which
 associate vendor specific properties to allocation pools.

Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Richard Jones
+1 this would definitely be the best approach from Horizon's perspective
since we can cache the capabilities per-flavour. Having to request
capabilities per-instance would be an unreasonable burden on the poor users
of Horizon.
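
As a purely illustrative sketch (assuming the hypothetical GET
/flavors/<flavor>/capabilities endpoint Sean describes below, which does not
exist today), client-side gating with per-flavour caching could look roughly
like this:

    import requests

    NOVA_ENDPOINT = 'http://nova.example.com/v2.1'   # placeholder endpoint
    _capability_cache = {}                           # cached per flavour


    def flavor_capabilities(flavor_id, token):
        # Fetch and cache the hypothetical capabilities document for a flavour.
        if flavor_id not in _capability_cache:
            resp = requests.get(
                '%s/flavors/%s/capabilities' % (NOVA_ENDPOINT, flavor_id),
                headers={'X-Auth-Token': token})
            resp.raise_for_status()
            _capability_cache[flavor_id] = resp.json().get('actions', {})
        return _capability_cache[flavor_id]


    def allowed_actions(instance, token):
        # Return only the actions the instance's flavour claims to support.
        actions = flavor_capabilities(instance['flavor']['id'], token)
        return [name for name, supported in actions.items() if supported]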

On 6 November 2015 at 23:09, Sean Dague  wrote:

> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
> > On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
> >> Hello all,
> >> I came across [1] which is notionally an ironic bug in that horizon
> >> presents VM operations (like suspend) to users.  Clearly these options
> >> don't make sense for ironic, which can be confusing.
> >>
> >> There is a horizon fix that just disables migrate/suspend and other
> >> functions if the operator sets a flag saying ironic is present.  Clearly
> >> this is suboptimal for a mixed hypervisor environment.
> >>
> >> The data needed (hypervisor type) is currently available only to admins;
> >> a quick hack to remove this policy restriction is functional.
> >>
> >> There are a few ways to solve this.
> >>
> >>  1. Change the default from "rule:admin_api" to "" (for
> >> os_compute_api:os-extended-server-attributes and
> >> os_compute_api:os-hypervisors), and set a list of values we're
> >> comfortable exposing to the user (hypervisor_type and
> >> hypervisor_hostname).  So a user can get the hypervisor_name as part of
> >> the instance details and get the hypervisor_type from the
> >> os-hypervisors.  This would work for horizon but increases the API load
> >> on nova and kinda implies that horizon would have to cache the data and
> >> open-code assumptions that hypervisor_type can/can't do action $x
> >>
> >>  2. Include the hypervisor_type with the instance data.  This would place the
> >> burden on nova.  It makes looking up instance details slightly more
> >> complex but doesn't result in additional API queries, nor caching
> >> overhead in horizon.  This has the same open-coding issues as Option 1.
> >>
> >>  3. Define a service user and have horizon look up the hypervisor details via
> >> that role.  Has all the drawbacks of option 1 and I'm struggling to
> >> think of many benefits.
> >>
> >>  4. Create a capabilities API of some description that can be queried so that
> >> consumers (horizon) can know what the capabilities are.
> >>
> >>  5. Some other way for users to know what kind of hypervisor they're on. Perhaps
> >> there is an established image property that would work here?
> >>
> >> If we're okay with exposing the hypervisor_type to users, then #2 is pretty
> >> quick and easy, and could be done in Mitaka.  Option 4 is probably the best
> >> long-term solution but I think it is best done in 'N' as it needs lots of
> >> discussion.
> >
> > I think that exposing hypervisor_type is very much the *wrong* approach
> > to this problem. The set of allowed actions varies based on much more
> than
> > just the hypervisor_type. The hypervisor version may affect it, as may
> > the hypervisor architecture, and even the version of Nova. If horizon
> > restricted its actions based on hypervisor_type alone, then it is going
> > to inevitably prevent the user from performing otherwise valid actions
> > in a number of scenarios.
> >
> > IMHO, a capabilities based approach is the only viable solution to
> > this kind of problem.
>
> Right, we just had a super long conversation about this in #openstack-qa
> yesterday with mordred, jroll, and deva around what it's going to take
> to get upgrade tests passing with ironic.
>
> Capabilities is the right approach, because it means we're
> future-proofing our interface by telling users what they can do, not giving
> them some arbitrary string for which they need to carry around a separate
> library to figure those things out.
>
> It seems like capabilities need to exist on flavor, and by proxy instance.
>
> GET /flavors/bm.large/capabilities
>
> {
>  "actions": {
>  'pause': False,
>  'unpause': False,
>  'rebuild': True
>  ..
>   }
>
> A starting point would definitely be the set of actions that you can
> send to the flavor/instance. There may be features beyond that we'd like
> to classify as capabilities, but actions would be a very concrete and
> attainable starting point. With microversions we don't have to solve
> this all at once, start with a concrete thing and move forward.
>
> Sending an action that was "False" for the instance/flavor would return
> a 400 BadRequest high up at the API level, much like input validation
> via jsonschema.
>
> This is nothing new, we've talked about it in the abstract in the Nova
> space for a while. We've yet had anyone really take this on. If you
> wanted to run with a spec and code, it would be welcome.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-11-06 Thread Adam Young

On 11/06/2015 04:41 PM, Major Hayden wrote:

The dogtag service looks interesting, but it has quite a few dependencies that 
may be a bit heavy resource-wise within the average openstack-ansible 
environment.


Nah, it's not any worse than any other application; Java, Tomcat, and the 
code itself.  NSS is part of the base distro.


Dogtag should be the goto for anything not self-signed upstream.  In the 
case where people already have CAs, they can use whatever their 
organization provides.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Hi Neil,

Please find my reply inline.

On Fri, Nov 6, 2015 at 1:08 PM, Neil Jerram 
wrote:

> Yes, maybe. I'm interested in a pluggable IPAM module that will allocate
> an IP address for a VM that depends on where that VM's host is in the
> physical data center network. Is that similar to your requirement?
>
We have a similar requirement where we want to pick a network that's
accessible in the rack the VM belongs to. We have L3 top-of-rack, so the
network is confined to the rack. Right now, we are achieving this by naming
the physical network in a certain way, but that's not going to scale.

We also want to be able to make scheduling decisions based on IP
availability, so we need to know the rack <-> network mapping.  We can't
embed all factors in a name. It will be impossible to make scheduling
decisions by parsing names and comparing them. GoDaddy has also been doing
something similar [1], [2].


> I don't yet know whether that might lead me to want to store additional
> data in the Neutron DB. My intuition though is that it shouldn't, and that
> any additional data or state that I need for this IPAM module should be
> stored separately from the Neutron DB.
>

 Where are you planning to store that information? If we need similar
information, and if more folks need it, we can add it to Neutron DB in IPAM
tables.

[1]
http://www.dorm.org/blog/openstack-architecture-at-go-daddy-part-3-nova/#Scheduler_Customization
[2]
http://www.dorm.org/blog/openstack-architecture-at-go-daddy-part-2-neutron/#Customizations_to_Abstract_Away_Layer_2
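
Purely as an illustration of what "specific columns instead of a blob" could
mean for the rack/backplane/DHCP-helper data mentioned in this thread (table
and column names below are hypothetical, not an existing or proposed Neutron
schema), an IPAM driver could key off something like:

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class SubnetPhysicalInfo(Base):
        # Per-subnet physical-network facts an IPAM driver could consume.
        __tablename__ = 'subnet_physical_info'

        id = sa.Column(sa.String(36), primary_key=True)
        subnet_id = sa.Column(sa.String(36), nullable=False)    # FK to the subnet
        rack_switch = sa.Column(sa.String(255))                  # e.g. ToR switch id
        backplane = sa.Column(sa.String(255))
        dhcp_ip_helper = sa.Column(sa.String(64))
        gateway_addresses = sa.Column(sa.String(255))             # comma-separated list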


> Regards,
>Neil
>
>


>
> *From: *Shraddha Pandhe
> *Sent: *Friday, 6 November 2015 20:23
> *To: *OpenStack Development Mailing List (not for usage questions)
> *Reply To: *OpenStack Development Mailing List (not for usage questions)
> *Subject: *Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in
> ipam db tables
>
> Bumping this up :)
>
>
> Folks, does anyone else have a similar requirement to ours? Are folks
> making scheduling decisions based on networking?
>
>
>
> On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <
> spandhe.openst...@gmail.com> wrote:
>
>> Hi,
>>
>> I agree with all of you about the REST Apis.
>>
>> As I said before, I had to bring up the idea of a JSON blob because, based
>> on previous discussions, it looked like the neutron community was not willing
>> to enhance the schemas for different ipam dbs. The entire rationale behind
>> pluggable IPAM is to provide flexibility. So, the community should be open to
>> ideas for enhancing the schema to incorporate more information in the db
>> tables. I would be extremely happy if use cases for different companies were
>> considered and the schema were enhanced to include specific columns in the db
>> schemas instead of a column with a random JSON blob.
>>
>> Let's take the subnets db table, for example. We have some use cases where
>> it would be great if the following information were associated with the subnet db
>> table:
>>
>> 1. Rack switch info
>> 2. Backplane info
>> 3. DHCP ip helpers
>> 4. Option to tag allocation pools inside subnets
>> 5. Multiple gateway addresses
>>
>> We also want to store some information about the backplanes locally, so a
>> different table might be useful.
>>
>> In a way, this information is not specific to our company. It's generic
>> information which ought to go with the subnets. Different companies can use
>> this information differently in their IPAM drivers. But the information
>> needs to be made available to justify the flexibility of ipam.
>>
>> At Yahoo!, OpenStack is still not the source of truth for this kind of
>> information, and the database limitation is one of the reasons. I would prefer
>> to avoid having our own database, to make sure that our use-cases are always
>> shared with the community.
>>
>>
>>
>>
>>
>>
>>
>>
>> On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery  wrote:
>>
>>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>>>
 On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:

> Hi Salvatore,
>
> Thanks for the feedback. I agree with you that arbitrary JSON blobs
> will
> make IPAM much more powerful. Some other projects already do things
> like
> this.
>

 :( Actually, though "powerful" it also leads to implementation details
 leaking directly out of the public REST API. I'm very negative on this and
 would prefer an actual codified REST API that can be relied on regardless
 of backend driver or implementation.

>>>
>>> I agree with Jay here. We've had people propose similar things in
>>> Neutron before, and I've been against them. The entire point of the Neutron
>>> REST API is to not leak these details out. It dampens the strength of the
>>> logical model, and it tends to have users become reliant on backend
>>> implementations.
>>>
>>>

 e.g. In Ironic, node has driver_info, which is JSON. it also has an
> 'extras' arbitrary JSON field. This allows us to put any information in
> there that we think is important for us.

[openstack-dev] [openstack-ansible][security] Next steps: openstack-ansible-security

2015-11-06 Thread Major Hayden

Hello there,

At this moment, openstack-ansible-security[1] is feature complete and all of 
the Ansible tasks and documentation for the STIGs are merged.  Exciting!

I've done lots of work to ensure that the role uses sane defaults so that it 
can be applied to the majority of OpenStack deployments without disrupting 
services.  It only supports Ubuntu 14.04 for now, but that's 
openstack-ansible's supported platform as well.

I'd like to start by adding it to the gate-check-commit.sh script so that the 
security configurations are applied prior to running tempest.  This should 
hopefully catch any defaults that could be disruptive in an openstack-ansible 
environment.  If that works, I'd like to add it to the run-playbooks.sh script 
so that it runs for all deployments (toggled via a configuration option, of 
course).

Does that seem like a decent plan?  Let me know if that makes sense and I'll 
get to work.

[1] http://docs.openstack.org/developer/openstack-ansible-security/

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Salvatore Orlando
More comments inline.
I shall stop trying to be ironic (pun intended) in my posts.

Salvatore

On 5 November 2015 at 18:37, Kyle Mestery  wrote:

> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>
>> :( Actually, though "powerful" it also leads to implementation details
>> leaking directly out of the public REST API. I'm very negative on this and
>> would prefer an actual codified REST API that can be relied on regardless
>> of backend driver or implementation.
>>
>
> I agree with Jay here. We've had people propose similar things in Neutron
> before, and I've been against them. The entire point of the Neutron REST
> API is to not leak these details out. It dampens the strength of the
> logical model, and it tends to have users become reliant on backend
> implementations.
>

I see I did not manage to convey accurately irony and sarcasm in my
previous post ;)
The point was that thanks to a blooming number of extensions the Neutron
API is already hardly portable. Blob attributes (or dict attributes, or
key/value list attributes, or whatever does not have a precise schema) are
a nail in the coffin, and also violate the only tenet Neutron has somehow
managed to honour, which is being backend agnostic.
And the fact that the port binding extension is pretty much that is not a
valid argument, imho.
On the other hand, I'm all in for extending DB schema and driver logic to
suit all IPAM needs; at the end of the day that's what we do with plugins for
all sorts of stuff.



>
>
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>> structured, not a Wild West free-for-all. The biggest problem with using
>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>> ability to evolve the API in a structured, versioned way. Instead of
>> evolving the API using microversions, instead every vendor just jams
>> whatever they feel like into the JSON blob over time. There's no way for
>> clients to know what the server will return at any given time.
>>
>> Achieving consensus on a REST API that meets the needs of a variety of
>> backend implementations is *hard work*, yes, but it's what we need to do if
>> we are to have APIs that are viewed in the industry as stable,
>> discoverable, and reliably useful.
>>
>
> ++, this is the correct way forward.
>

Cool, but let me point out that experience has taught us that anything
that is a result of a compromise between several parties following
different agendas is bound to fail as it does not fully satisfy the
requirements of any stakeholder.
If this information is needed for making scheduling decisions based on
network requirements, then it makes sense to expose this information also
at the API layer (I assume there are also plans for making the scheduler
*seriously* network aware). However, this information should have a
well-defined schema with no leeway for 'extensions'; such a schema can evolve
over time.


> Thanks,
> Kyle
>
>
>>
>> Best,
>> -jay
>>
>> Best,
>> -jay
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>>> mailto:salv.orla...@gmail.com>> wrote:
>>>
>>> Arbitrary blobs are a powerful tool to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for
>>> versioning or portability purposes.
>>> The parameters that should end up in such blob are typically
>>> specific for the target IPAM driver (to an extent they might even
>>> identify a specific driver to use), and therefore an API consumer
>>> who knows what backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability
>>> and not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more
>>> input on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension
>>> are not a solution - assuming your granularity level is the
>>> allocation pool; indeed allocation pools are not first-class neutron
>>> resources, and it is not therefore possible to have APIs which
>>> associate vendor specific properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe
>>> mailto:spandhe.openst...@gmail.com>>
>>> wrote:
>>>
>>> Hi folks,
>>>
>>> I have a small question/suggestion about IPAM.
>>>
>>> With IPAM, we are allowing users to have their own IPAM dr

Re: [openstack-dev] [openstack-ansible][security] Creating a CA for openstack-ansible deployments?

2015-11-06 Thread Major Hayden
On 10/29/2015 08:42 AM, Clark, Robert Graham wrote:
> It sounds like what you probably need is a lightweight CA, without 
> revocation, that gives you some basic constraints by which you can restrict 
> certificate issuance to just your ansible tasks and that could potentially be 
> thrown away when it’s no longer required. Particularly something light enough 
> that it could live on any deployment/installer node.
> 
> This sounds like it _might_ be a good fit for Anchor[1], though possibly not 
> if I’ve misunderstood your use-case.
> 
> [1] https://wiki.openstack.org/wiki/Security#Anchor_-_Ephemeral_PKI

Thanks, Robert.  After talking a bit in the last OpenStack Security IRC meeting 
and doing a deep dive into Anchor, I'm not sure I'm looking for a CA that 
issues ephemeral certificates.

For example, issuing ephemeral certificates for RabbitMQ or MySQL would involve 
frequent restarts of each service to apply new certificates on a regular basis 
(if I'm understanding Anchor correctly).  I could see how this wouldn't be a 
big issue on a web/API front-end, like horizon, but it would definitely cause 
some disruptions for services that are slower to start, like RabbitMQ and MySQL.

I found a CA role[1] for Ansible on Galaxy, but it appears to be GPLv3 code. :/

Another suggestion was to use Letsencrypt, but it's in a limited access period 
at the moment.  It also supplies ephemeral certs, as Anchor does.

The dogtag service looks interesting, but it has quite a few dependencies that 
may be a bit heavy resource-wise within the average openstack-ansible 
environment.

I'm still on the hunt for a good solution but I appreciate the input so far!

[1] https://github.com/debops/ansible-pki
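
For what it's worth, the kind of throwaway, deployment-node CA being discussed
can be sketched in a few lines with the Python 'cryptography' library
(illustrative only, not a proposed openstack-ansible implementation):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID


    def make_throwaway_ca(common_name=u'openstack-ansible deploy CA', days=365):
        # Generate a CA key and a self-signed CA certificate; the deployment
        # host would then sign per-service certificates with this key.
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
        now = datetime.datetime.utcnow()
        cert = (x509.CertificateBuilder()
                .subject_name(name)
                .issuer_name(name)               # self-signed
                .public_key(key.public_key())
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(days=days))
                .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                               critical=True)
                .sign(key, hashes.SHA256()))
        key_pem = key.private_bytes(serialization.Encoding.PEM,
                                    serialization.PrivateFormat.TraditionalOpenSSL,
                                    serialization.NoEncryption())
        return key_pem, cert.public_bytes(serialization.Encoding.PEM)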

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-06 Thread Augustina Ragwitz
I totally second that on the low-hanging-fruit tag! And I'm happy to 
help with some of the triaging. I'm still pretty new to Nova so I'm not 
sure how helpful I'll be but it seems like a good task for getting to 
know more about things at least from a high level.


Would bug triaging be a meta low-hanging-fruit item? ;)
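
For anyone who wants to pull those bugs from a script, a small sketch with
launchpadlib (consumer name, tag and status filters chosen purely for
illustration):

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_anonymously('nova-lhf-report', 'production')
    nova = lp.projects['nova']
    tasks = nova.searchTasks(tags=['low-hanging-fruit'],
                             status=['New', 'Confirmed', 'Triaged', 'In Progress'])
    for task in tasks:
        print(task.title)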

On 11/06/2015 12:54 PM, Diana Clarke wrote:

On Fri, Nov 6, 2015 at 11:54 AM, Markus Zoeller  wrote:

below is the first report of bug stats I intend to post weekly.
We discussed briefly during the Mitaka summit that this report
could be useful to keep attention on the open bugs at a certain
level. Let me know if you think it's missing something.

Thanks Markus!

On the topic of triaging bugs, I'd love to see more of the Nova bugs
tagged with low-hanging-fruit (when appropriate) for those of us
looking for new-contributor-friendly bugs to work on.

 https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit

Thanks again & have a great weekend!

Cheers,

--diana

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Converting the AIO bootstrap script to Ansible

2015-11-06 Thread Major Hayden
Hey folks,

After 51 patch sets, I feel that the AIO bootstrap conversion to Ansible is 
worth reviewing[1].  There was a bunch of logic within the bootstrap-aio.sh 
script that took a bunch of tries to get right.  Also, I ended up with some ssh 
timing issues in the dsvm tests that caused some serious head-scratching.

I've tried to copy the exact functionality from bootstrap-aio.sh without making 
many improvements.  There were some areas where Ansible made things much 
simpler, which was nice.  This should also make it easier to support more than 
one operating system (the multi-platform-host blueprint) and I've stubbed out 
some initial support for RPM-based distributions within a variables file in the 
playbook.

Feel free to critique it and I'll get to work on making the changes.  The 
spec[2] should answer most of the questions about the effort.

Thanks! :)

[1] https://review.openstack.org/#/c/239525/
[2] 
http://specs.openstack.org/openstack/openstack-ansible-specs/specs/mitaka/convert-aio-bootstrap-to-ansible.html

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-11-06 Thread Barrett, Carol L
Doug - Thanks for leading the session and this summary. What is your view on 
next steps to establish themes for the N-release? And specifically around 
rolling upgrades (my personal favorite).

Thanks
Carol

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: Friday, November 06, 2015 1:11 PM
To: openstack-dev
Subject: [openstack-dev] [all] summarizing the cross-project summit session on 
Mitaka themes

At the summit last week one of the early cross-project sessions tried to 
identify some common “themes” or “goals” for the Mitaka cycle. I proposed the 
session to talk about some of the areas of work that all of our teams need to 
do, but that fall by the wayside when we don't pull the whole community 
together to focus attention on them. We had several ideas proposed, and some 
lively discussion about them. The notes are in the etherpad [1], and I will try 
to summarize the discussion here.

1. Functional testing, especially of client libraries, came up as a result of a 
few embarrassingly broken client releases during Liberty.  Those issues were 
found and fixed quickly, but they exposed a gap in our test coverage.

2. Adding tests useful for DefCore and similar interoperability testing was 
suggested in part because of our situation in Glance, where many of the 
image-related API tests actually talk to the Nova API instead of the Glance 
API. We may have other areas where additional tests in tempest could eventually 
find their way into the DefCore definition, ensuring more interoperability 
between deployed OpenStack clouds.

3. We talked for a while about being more opinionated in things like 
architecture and deployment dependencies. I don’t think we resolved this one, 
but I’m sure the discussion fed into the DLM discussion later that day in a 
separate session.

4. Improving consistency of quota management across projects came up.  We’ve 
talked in the past about a separate quota management library or service, but no 
one has yet stepped up to spearhead the effort to launch such a project.

5. Rolling upgrades was a very popular topic, in the room and on the product 
working group’s priority list. The point was made that this requires a shift in 
thinking about how to design and implement projects, not just some simple code 
changes that can be rolled out in a single cycle. I know many teams are looking 
at addressing rolling upgrades.

6. os-cloud-config support in clients was raised. There is a cross-project spec 
at https://review.openstack.org/#/c/236712/ to cover this.

7. "Fixing existing things as a priority over features” came up, and has been a 
recurring topic of discussion for a few cycles now.
The idea of having a “maintenance” cycle for all teams was floated, though it 
might be tough to get everyone aligned to doing that at the same time.  
Alternately, if we work out a way to support individual teams doing that we 
could let teams schedule them as they feel they are useful. We could also 
dedicate more review time to maintenance than features, without excluding 
features entirely.
There seemed to be quite a bit of support in the room for the general idea, 
though making it actionable will take some more thought.

8. Mike Perez is working with teams to increase our third-party CI for 
vendor-specific drivers and other deployment choices. This theme wouldn’t 
necessarily apply to every team, but there was a lot of support for it.

9. Training more contributors to debug gate issues came up late in the session. 
Anita Kuno has taken up this challenge, and has started collecting useful 
resources in a mailing list thread, the archives for which are split across 2 
months, so see both [2] and [3] if you missed it in your email client.

10. We wrapped up with a short discussion of making sure we have all the 
necessary cross-project liaisons in place to ensure good communication and 
coordination. Liaisons for Mitaka are listed in 
https://wiki.openstack.org/wiki/CrossProjectLiaisons

[1] https://etherpad.openstack.org/p/mitaka-crossproject-themes
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077913.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-November/078173.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Jonathan Proulx
On Fri, Nov 06, 2015 at 05:28:13PM +, Mark Baker wrote:
:Worth mentioning that OpenStack releases that come out at the same time as
:Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
:supported for 5 years by Canonical so are already kind of an LTS. Support
:in this context means patches, updates and commercial support (for a fee).
:For paying customers 3 years of patches, updates and commercial support for
:April releases, (Kilo, O, Q etc..) is also available.


And Canonical will support a live upgrade directly from Essex to
Icehouse and Icehouse to Mitaka?

I'd love to see Shuttleworth do that as a live keynote, but only
on a system with at least hundreds of nodes and many VMs...


That's where LTS falls down conceptually: we're struggling to make
single-release upgrades work at this point.

I do agree an LTS release would be great, but honestly OpenStack isn't
mature enough for that yet.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-11-06 Thread Doug Hellmann
At the summit last week one of the early cross-project sessions
tried to identify some common “themes” or “goals” for the Mitaka
cycle. I proposed the session to talk about some of the areas of
work that all of our teams need to do, but that fall by the wayside
when we don't pull the whole community together to focus attention
on them. We had several ideas proposed, and some lively discussion
about them. The notes are in the etherpad [1], and I will try to
summarize the discussion here.

1. Functional testing, especially of client libraries, came up as
a result of a few embarrassingly broken client releases during
Liberty.  Those issues were found and fixed quickly, but they exposed
a gap in our test coverage.

2. Adding tests useful for DefCore and similar interoperability
testing was suggested in part because of our situation in Glance,
where many of the image-related API tests actually talk to the Nova
API instead of the Glance API. We may have other areas where
additional tests in tempest could eventually find their way into
the DefCore definition, ensuring more interoperability between
deployed OpenStack clouds.

3. We talked for a while about being more opinionated in things
like architecture and deployment dependencies. I don’t think we
resolved this one, but I’m sure the discussion fed into the DLM
discussion later that day in a separate session.

4. Improving consistency of quota management across projects came
up.  We’ve talked in the past about a separate quota management
library or service, but no one has yet stepped up to spearhead the
effort to launch such a project.

5. Rolling upgrades was a very popular topic, in the room and on
the product working group’s priority list. The point was made that
this requires a shift in thinking about how to design and implement
projects, not just some simple code changes that can be rolled out
in a single cycle. I know many teams are looking at addressing
rolling upgrades.

6. os-cloud-config support in clients was raised. There is a
cross-project spec at https://review.openstack.org/#/c/236712/ to
cover this.

7. "Fixing existing things as a priority over features” came up,
and has been a recurring topic of discussion for a few cycles now.
The idea of having a “maintenance” cycle for all teams was floated,
though it might be tough to get everyone aligned to doing that at
the same time.  Alternately, if we work out a way to support
individual teams doing that we could let teams schedule them as
they feel they are useful. We could also dedicate more review time
to maintenance than features, without excluding features entirely.
There seemed to be quite a bit of support in the room for the general
idea, though making it actionable will take some more thought.

8. Mike Perez is working with teams to increase our third-party CI
for vendor-specific drivers and other deployment choices. This theme
wouldn’t necessarily apply to every team, but there was a lot of
support for it.

9. Training more contributors to debug gate issues came up late in
the session. Anita Kuno has taken up this challenge, and has started
collecting useful resources in a mailing list thread, the archives
for which are split across 2 months, so see both [2] and [3] if you
missed it in your email client.

10. We wrapped up with a short discussion of making sure we have
all the necessary cross-project liaisons in place to ensure good
communication and coordination. Liaisons for Mitaka are listed in
https://wiki.openstack.org/wiki/CrossProjectLiaisons

[1] https://etherpad.openstack.org/p/mitaka-crossproject-themes
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077913.html
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-November/078173.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Neil Jerram
Yes, maybe. I'm interested in a pluggable IPAM module that will allocate an IP 
address for a VM that depends on where that VM's host is in the physical data 
center network. Is that similar to your requirement?

I don't yet know whether that might lead me to want to store additional data in 
the Neutron DB. My intuition though is that it shouldn't, and that any 
additional data or state that I need for this IPAM module should be stored 
separately from the Neutron DB.
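
For concreteness, the kind of host-aware decision I have in mind is roughly the
following. This is only an illustrative sketch with made-up rack names, not the
actual Neutron pluggable-IPAM driver interface:

# Pick an allocation pool for a port based on which rack its host sits in.
# The rack-to-pool mapping and the default pool are invented for illustration.
RACK_POOLS = {
    "rack-a": ("10.1.0.10", "10.1.0.250"),
    "rack-b": ("10.2.0.10", "10.2.0.250"),
}

def pool_for_host(host_rack, default=("10.0.0.10", "10.0.0.250")):
    # host_rack would come from whatever inventory knows the physical topology
    return RACK_POOLS.get(host_rack, default)

print(pool_for_host("rack-a"))

The interesting question is where that rack-to-pool knowledge should live.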

Regards,
   Neil


From: Shraddha Pandhe
Sent: Friday, 6 November 2015 20:23
To: OpenStack Development Mailing List (not for usage questions)
Reply To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db 
tables


Bumping this up :)


Folks, does anyone else have a similar requirement to ours? Are folks making 
scheduling decisions based on networking?



On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <spandhe.openst...@gmail.com> wrote:
Hi,

I agree with all of you about the REST Apis.

As I said before, I had to bring up the idea of JSON blob because based on 
previous discussions, it looked like neutron community was not willing to 
enhance the schemas for different ipam dbs. Entire rationale behind pluggable 
IPAM is to provide flexibility. So, community should be open to ideas for 
enhancing the schema to incorporate more information in the db tables. I would 
be extremely happy if use cases for different companies are considered and 
schema is enhanced to include specific columns in db  schemas instead of a 
column with random JSON blob.

Lets pick up subnets db table for example. We have some use cases where it 
would be great if following information is associated with the subnet db table

1. Rack switch info
2. Backplane info
3. DHCP ip helpers
4. Option to tag allocation pools inside subnets
5. Multiple gateway addresses

We also want to store some information about the backplanes locally, so a 
different table might be useful.

In a way, this information is not specific to our company. Its generic 
information which ought to go with the subnets. Different companies can use 
this information differently in their IPAM drivers. But, the information needs 
to be made available to justify the flexibility of ipam

In Yahoo! OpenStack is still not the source of truth for this kind of 
information and database limitation is one of the reasons. I would prefer to 
avoid having our own database to make sure that our use-cases are always shared 
with the community.








On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery <mest...@mestery.com> wrote:
On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes <jaypi...@gmail.com> wrote:
On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary JSON blobs will
make IPAM much more powerful. Some other projects already do things like
this.

:( Actually, though "powerful" it also leads to implementation details leaking 
directly out of the public REST API. I'm very negative on this and would prefer 
an actual codified REST API that can be relied on regardless of backend driver 
or implementation.

I agree with Jay here. We've had people propose similar things in Neutron 
before, and I've been against them. The entire point of the Neutron REST API is 
to not leak these details out. It dampens the strength of the logical model, 
and it tends to have users become reliant on backend implementations.


e.g. In Ironic, node has driver_info, which is JSON. it also has an
'extras' arbitrary JSON field. This allows us to put any information in
there that we think is important for us.

Yeah, and this is a bad thing, IMHO. Public REST APIs should be structured, not 
a Wild West free-for-all. The biggest problem with using free-form JSON blobs 
in RESTful APIs like this is that you throw away the ability to evolve the API 
in a structured, versioned way. Instead of evolving the API using 
microversions, instead every vendor just jams whatever they feel like into the 
JSON blob over time. There's no way for clients to know what the server will 
return at any given time.

Achieving consensus on a REST API that meets the needs of a variety of backend 
implementations is *hard work*, yes, but it's what we need to do if we are to 
have APIs that are viewed in the industry as stable, discoverable, and reliably 
useful.

++, this is the correct way forward.

Thanks,
Kyle


Best,
-jay

Best,
-jay

Hoping to get some positive feedback from API and DB lieutenants too.


On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <salv.orla...@gmail.com> wrote:

Arbitrary blobs are a powerful tools to circumvent limitations of an
API, as well as other constraints which might be imposed for
versioning or portability purposes.
The parameters that should end up in such blob are typically

Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-06 Thread Diana Clarke
On Fri, Nov 6, 2015 at 11:54 AM, Markus Zoeller  wrote:
> below is the first report of bug stats I intend to post weekly.
> We discussed it shortly during the Mitaka summit that this report
> could be useful to keep the attention of the open bugs at a certain
> level. Let me know if you think it's missing something.

Thanks Markus!

On the topic of triaging bugs, I'd love to see more of the Nova bugs
tagged with low-hanging-fruit (when appropriate) for those of us
looking for new-contributor-friendly bugs to work on.

https://bugs.launchpad.net/nova/+bugs?field.tag=low-hanging-fruit

Thanks again & have a great weekend!

Cheers,

--diana

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Donald Talton
I agree, but to use your argument: how hard would it be to set up a small group 
to do this for the community? I’m sure there would be a few people interested 
in maintaining it…

From: matt [mailto:m...@nycresistor.com]
Sent: Friday, November 06, 2015 1:18 PM
To: Fox, Kevin M
Cc: Jesse Keating; OpenStack Development Mailing List (not for usage 
questions); openstack-operat...@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno 
"alive" for longer.

backporting patches isn't too terribly hard to be honest.  you could probably 
hire a consultant to do it if need be.  mirantis would probably quote you a 
price.

On Fri, Nov 6, 2015 at 3:10 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
Kind of related, as an op, we see a lot of 3rd party repositories that recently 
only supported rhel5 move to finally supporting rhel6 because rhel7 came out 
and rhel5 went to long term support contract only. This caused us to have to 
support rhel5 way longer then we would have liked. Now, we're stuck at 6 
instead of 7. :/

Some number of users will stick with juno until it is EOL and then move. 
Sometimes its because its a desire to not make a change. Sometimes its 
considered a good thing by the ops that they finally have a "good enough" 
excuse (EOL) to move forward "finally" (sigh of relief). :)

Thanks,
Kevin

From: Jesse Keating [j...@bluebox.net]
Sent: Friday, November 06, 2015 10:14 AM
To: Dan Smith
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno 
"alive" for longer.
We (Blue Box, an IBM company) do have a lot of installs on Juno, however we'll 
be aggressively moving to Kilo, so we are not interested in keeping Juno alive.


- jlk

On Fri, Nov 6, 2015 at 9:37 AM, Dan Smith <d...@danplanet.com> wrote:
> Worth mentioning that OpenStack releases that come out at the same time
> as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> are supported for 5 years by Canonical so are already kind of an LTS.
> Support in this context means patches, updates and commercial support
> (for a fee).
> For paying customers 3 years of patches, updates and commercial support
> for April releases, (Kilo, O, Q etc..) is also available.

Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
maintaining an older release for so long is a good use of people or CI
resources, especially given how hard it can be for us to keep even
recent stable releases working and maintained.

--Dan

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-06 Thread Davanum Srinivas
Jeremy,

Yes, I went to re-read that thread :) yes, it was about cassandra NOT zookeeper
http://markmail.org/message/lzsr72cideli5qvt

On Fri, Nov 6, 2015 at 3:14 PM, Jeremy Stanley  wrote:
> On 2015-11-06 10:55:14 -0800 (-0800), Joshua Harlow wrote:
> [...]
>> Basically here is the TLDR of the question/complaint:
>>
>> '''
>> Zookeeper, a java application, will force you to install oracles
>> virtual machine implementation for it to work, and it doesn't work
>> with the openjdk,
> [...]
>
> Also, for those of you who are, like me, easily confused and think
> all Java applications look the same: it's easy to conflate this
> discussion with the recent thread (sorry for lack of citation, I
> can't find the corresponding subject line at the moment) about
> Cassandra which _did_ in fact have some confirmation from their
> developer community--at least in the form of code comments--of only
> being reliable/supported in conjunction with a non-free JDK. Thanks
> be to Joshua for setting me straight in #openstack-infra on that
> matter! ;)
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-06 Thread Shraddha Pandhe
Bumping this up :)


Folks, does anyone else have a similar requirement to ours? Are folks
making scheduling decisions based on networking?



On Thu, Nov 5, 2015 at 12:24 PM, Shraddha Pandhe <
spandhe.openst...@gmail.com> wrote:

> Hi,
>
> I agree with all of you about the REST Apis.
>
> As I said before, I had to bring up the idea of JSON blob because based on
> previous discussions, it looked like neutron community was not willing to
> enhance the schemas for different ipam dbs. Entire rationale behind
> pluggable IPAM is to provide flexibility. So, community should be open to
> ideas for enhancing the schema to incorporate more information in the db
> tables. I would be extremely happy if use cases for different companies are
> considered and schema is enhanced to include specific columns in db
>  schemas instead of a column with random JSON blob.
>
> Lets pick up subnets db table for example. We have some use cases where it
> would be great if following information is associated with the subnet db
> table
>
> 1. Rack switch info
> 2. Backplane info
> 3. DHCP ip helpers
> 4. Option to tag allocation pools inside subnets
> 5. Multiple gateway addresses
>
> We also want to store some information about the backplanes locally, so a
> different table might be useful.
>
> In a way, this information is not specific to our company. Its generic
> information which ought to go with the subnets. Different companies can use
> this information differently in their IPAM drivers. But, the information
> needs to be made available to justify the flexibility of ipam
>
> In Yahoo! OpenStack is still not the source of truth for this kind of
> information and database limitation is one of the reasons. I would prefer
> to avoid having our own database to make sure that our use-cases are always
> shared with the community.
>
>
>
>
>
>
>
>
> On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery  wrote:
>
>> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>>
>>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>>
 Hi Salvatore,

 Thanks for the feedback. I agree with you that arbitrary JSON blobs will
 make IPAM much more powerful. Some other projects already do things like
 this.

>>>
>>> :( Actually, though "powerful" it also leads to implementation details
>>> leaking directly out of the public REST API. I'm very negative on this and
>>> would prefer an actual codified REST API that can be relied on regardless
>>> of backend driver or implementation.
>>>
>>
>> I agree with Jay here. We've had people propose similar things in Neutron
>> before, and I've been against them. The entire point of the Neutron REST
>> API is to not leak these details out. It dampens the strength of the
>> logical model, and it tends to have users become reliant on backend
>> implementations.
>>
>>
>>>
>>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
 'extras' arbitrary JSON field. This allows us to put any information in
 there that we think is important for us.

>>>
>>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>>> structured, not a Wild West free-for-all. The biggest problem with using
>>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>>> ability to evolve the API in a structured, versioned way. Instead of
>>> evolving the API using microversions, instead every vendor just jams
>>> whatever they feel like into the JSON blob over time. There's no way for
>>> clients to know what the server will return at any given time.
>>>
>>> Achieving consensus on a REST API that meets the needs of a variety of
>>> backend implementations is *hard work*, yes, but it's what we need to do if
>>> we are to have APIs that are viewed in the industry as stable,
>>> discoverable, and reliably useful.
>>>
>>
>> ++, this is the correct way forward.
>>
>> Thanks,
>> Kyle
>>
>>
>>>
>>> Best,
>>> -jay
>>>
>>> Best,
>>> -jay
>>>
>>> Hoping to get some positive feedback from API and DB lieutenants too.


 On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando <salv.orla...@gmail.com> wrote:

 Arbitrary blobs are a powerful tools to circumvent limitations of an
 API, as well as other constraints which might be imposed for
 versioning or portability purposes.
 The parameters that should end up in such blob are typically
 specific for the target IPAM driver (to an extent they might even
 identify a specific driver to use), and therefore an API consumer
 who knows what backend is performing IPAM can surely leverage it.

 Therefore this would make a lot of sense, assuming API portability
 and not leaking backend details are not a concern.
 The Neutron team API & DB lieutenants will be able to provide more
 input on this regard.

 In this case other approaches such as a vendor specific extension
 are not a solution - assuming your granularity level

Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-06 Thread Fox, Kevin M
Awesome. Thanks for tracking this down. Knowing that other big players are 
running production systems in that configuration makes us much more comfortable 
with the idea. :)

Kevin

From: Joshua Harlow [harlo...@fastmail.com]
Sent: Friday, November 06, 2015 10:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

I just wanted to bring this up in its own thread as I know it was a
concern of (some) folks at the DLM session in tokyo[0] and I'd like to
try to bust this myth using hopefully objective people/users of
zookeeper (besides myself and yahoo, the company I work for) so that
this myth can be put to bed.

Basically here is the TLDR of the question/complaint:

'''
Zookeeper, a java application, will force you to install Oracle's virtual
machine implementation for it to work, and it doesn't work with the
openjdk, and if tooz (an oslo library) has a capable driver that uses
zookeeper internally (via kazoo @ http://kazoo.readthedocs.org) then it
will force deployers of openstack and its components that will use more
of tooz to install Oracle's virtual machine implementation.

This will not work!!
There is no way I can do that!!
Yell!! Shout!! Cry!!
'''

That's the *gist* of it (with additional dramatization included).

So in order to dispel this, I tried in that session to say 'actually I
have heard nothing saying it doesn't work with openjdk', but the voices
did not seem to hear that (or they were unable to listen due to their
emotions running high). Either way I wanted to ensure that
people do know it does work with the openjdk and here is a set of
testimonials from real users of zookeeper + openjdk that it does work there:

 From Min Pae[1] on the Cue[2] team:

'''
<@sputnik13> harlowja for what it's worth we use zookeeper with openjdk
'''

 From Greg Hill[3] who works on the rackspace bigdata[4] team:

'''
 and yes, we run Zookeeper on openjdk, and we haven't
heard of any problems with it
'''

 From Joe Smith[5][6] (who is at twitter, and is the Mesos/Aurora SRE
Tech Lead there):

'''
 and yep, we (twitter) use zookeeper for service discovery
 someone asked me that question back at mesoscon in seattle,
fwiw https://youtu.be/nNrh-gdu9m4?t=34m43s
 Yasumoto do u know if u use openjdk or oraclejdk?
 harlowja: yep, openjdk7
 but we're migrating up to 8
'''

 From Martijn Verburg who is an openjdk developer (and CEO)[7][8]
that has some insightful info as well:

'''
So OpenJDK and Oracle JDK are almost identical in their make up
*especially* on the server side. Many, many orgs like Google, Twitter,
the biggest investment bank in the world, all use OpenJDK as opposed to
Oracle's JDK.

---

The difference is the quality of the OpenJDK binaries built and released
by package maintainers.

If you are getting IcedTea from RedHat (their supported OpenJDK binary)
or Azul's Zulu (Fully supported OpenJDK) then you're *absolutely fine*.

If you're relying on the Debian or Fedora packages then *occasionally*
those package maintainers don't put out a great binary as they don't run
the TCK tests (partly because they can't as they are unwilling/unable to
pay Oracle for that TCK).

Hope that all makes sense...
'''

So I hope the above is enough of *proof* that yes the openjdk is fine,
there may have been some bugs in the past, but those afaik have all been
resolved and there are major contributors stepping up (and continuing to
step up) to make sure that zookeeper + openjdk continue to work (because
companies/projects/people... like mentioned above depend on it).

-Josh

[0] https://etherpad.openstack.org/p/mitaka-cross-project-dlm
[1] https://launchpad.net/~sputnik13
[2] https://wiki.openstack.org/wiki/Cue
[3] https://launchpad.net/~greg-hill
[4] http://www.rackspace.com/cloud/big-data
[5] http://www.bjoli.com/
[6] https://github.com/Yasumoto
[7] http://martijnverburg.blogspot.com/
[8] http://www.infoq.com/interviews/verburg-ljc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread matt
backporting patches isn't too terribly hard to be honest.  you could
probably hire a consultant to do it if need be.  mirantis would probably
quote you a price.

On Fri, Nov 6, 2015 at 3:10 PM, Fox, Kevin M  wrote:

> Kind of related, as an op, we see a lot of 3rd party repositories that
> recently only supported rhel5 move to finally supporting rhel6 because
> rhel7 came out and rhel5 went to long term support contract only. This
> caused us to have to support rhel5 way longer then we would have liked.
> Now, we're stuck at 6 instead of 7. :/
>
> Some number of users will stick with juno until it is EOL and then move.
> Sometimes its because its a desire to not make a change. Sometimes its
> considered a good thing by the ops that they finally have a "good enough"
> excuse (EOL) to move forward "finally" (sigh of relief). :)
>
> Thanks,
> Kevin
>
> --
> *From:* Jesse Keating [j...@bluebox.net]
> *Sent:* Friday, November 06, 2015 10:14 AM
> *To:* Dan Smith
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> *Subject:* Re: [Openstack-operators] [openstack-dev] [stable][all]
> Keeping Juno "alive" for longer.
>
> We (Blue Box, an IBM company) do have a lot of installs on Juno, however
> we'll be aggressively moving to Kilo, so we are not interested in keeping
> Juno alive.
>
>
> - jlk
>
> On Fri, Nov 6, 2015 at 9:37 AM, Dan Smith  wrote:
>
>> > Worth mentioning that OpenStack releases that come out at the same time
>> > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
>> > are supported for 5 years by Canonical so are already kind of an LTS.
>> > Support in this context means patches, updates and commercial support
>> > (for a fee).
>> > For paying customers 3 years of patches, updates and commercial support
>> > for April releases, (Kilo, O, Q etc..) is also available.
>>
>> Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
>> maintaining an older release for so long is a good use of people or CI
>> resources, especially given how hard it can be for us to keep even
>> recent stable releases working and maintained.
>>
>> --Dan
>>
>> ___
>> OpenStack-operators mailing list
>> openstack-operat...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-06 Thread Jeremy Stanley
On 2015-11-06 10:55:14 -0800 (-0800), Joshua Harlow wrote:
[...]
> Basically here is the TLDR of the question/complaint:
> 
> '''
> Zookeeper, a java application, will force you to install oracles
> virtual machine implementation for it to work, and it doesn't work
> with the openjdk,
[...]

Also, for those of you who are, like me, easily confused and think
all Java applications look the same: it's easy to conflate this
discussion with the recent thread (sorry for lack of citation, I
can't find the corresponding subject line at the moment) about
Cassandra which _did_ in fact have some confirmation from their
developer community--at least in the form of code comments--of only
being reliable/supported in conjunction with a non-free JDK. Thanks
be to Joshua for setting me straight in #openstack-infra on that
matter! ;)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][neutron][ipam][networking-infoblox] networking-infoblox 1.0.0

2015-11-06 Thread John Belamaric
I am happy to announce that we have released version 1.0.0 of the Infoblox IPAM 
driver for OpenStack. This driver uses the pluggable IPAM framework delivered 
in Neutron's Liberty release, enabling the use of Infoblox for allocating 
subnets and IP addresses.

More information and the code may be found at the networking-infoblox Launchpad 
page [1]. Bugs may also be reported on the same page.

[1] https://launchpad.net/networking-infoblox

We are continuing to develop this driver to offer additional functionality, so 
please do provide any feedback you may have.

Thank you,

John


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Fox, Kevin M
Kind of related, as an op, we see a lot of 3rd party repositories that recently 
only supported rhel5 move to finally supporting rhel6 because rhel7 came out 
and rhel5 went to long term support contract only. This caused us to have to 
support rhel5 way longer than we would have liked. Now, we're stuck at 6 
instead of 7. :/

Some number of users will stick with juno until it is EOL and then move. 
Sometimes it's because of a desire to not make a change. Sometimes it's 
considered a good thing by the ops that they finally have a "good enough" 
excuse (EOL) to move forward "finally" (sigh of relief). :)

Thanks,
Kevin


From: Jesse Keating [j...@bluebox.net]
Sent: Friday, November 06, 2015 10:14 AM
To: Dan Smith
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno 
"alive" for longer.

We (Blue Box, an IBM company) do have a lot of installs on Juno, however we'll 
be aggressively moving to Kilo, so we are not interested in keeping Juno alive.


- jlk

On Fri, Nov 6, 2015 at 9:37 AM, Dan Smith <d...@danplanet.com> wrote:
> Worth mentioning that OpenStack releases that come out at the same time
> as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> are supported for 5 years by Canonical so are already kind of an LTS.
> Support in this context means patches, updates and commercial support
> (for a fee).
> For paying customers 3 years of patches, updates and commercial support
> for April releases, (Kilo, O, Q etc..) is also available.

Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
maintaining an older release for so long is a good use of people or CI
resources, especially given how hard it can be for us to keep even
recent stable releases working and maintained.

--Dan

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Juvonen, Tomi (Nokia - FI/Espoo)
+1 :)
>From: EXT John Garbutt [mailto:j...@johngarbutt.com] 
>Sent: Friday, November 06, 2015 5:32 PM
>To: OpenStack Development Mailing List
>Subject: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core
>
>Hi,
>
>I propose we add Sylvain Bauza[1] to nova-core.
>
>Over the last few cycles he has consistently been doing great work,
>including some quality reviews, particularly around the Scheduler.
>
>Please respond with comments, +1s, or objections within one week.
>
>Many thanks,
>John
>
>[1] 
>http://stackalytics.com/?module=nova-group&user_id=sylvain-bauza&release=all
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Sean Dague
On 11/06/2015 01:15 AM, Tony Breeds wrote:
> Hello all,
> 
> I'll start by acknowledging that this is a big and complex issue and I
> do not claim to be across all the view points, nor do I claim to be
> particularly persuasive ;P
> 
> Having stated that, I'd like to seek constructive feedback on the idea of
> keeping Juno around for a little longer.  During the summit I spoke to a
> number of operators, vendors and developers on this topic.  There was some
> support and some "That's crazy pants!" responses.  I clearly didn't make it
> around to everyone, hence this email.
> 
> Acknowledging my affiliation/bias:  I work for Rackspace in the private
> cloud team.  We support a number of customers currently running Juno that are,
> for a variety of reasons, challenged by the Kilo upgrade.

The upstream strategy has been to make upgrades unexciting, so that folks
can move forward easily.

I would really like to unpack the various reasons that people feel
trapped, because figuring out why they feel that way is important
data for what needs to be done better on upgrade support and testing.

-Sean

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Doug Hellmann
Excerpts from Joshua Harlow's message of 2015-11-06 11:11:02 -0800:
> Clint Byrum wrote:
> > Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> >> Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> >>> Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > Worth mentioning that OpenStack releases that come out at the same time
> > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > are supported for 5 years by Canonical so are already kind of an LTS.
> > Support in this context means patches, updates and commercial support
> > (for a fee).
> > For paying customers 3 years of patches, updates and commercial support
> > for April releases, (Kilo, O, Q etc..) is also available.
>  Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
>  maintaining an older release for so long is a good use of people or CI
>  resources, especially given how hard it can be for us to keep even
>  recent stable releases working and maintained.
> 
> >>> The argument in the original post, I think, is that we should not
> >>> stand in the way of the vendors continuing to collaborate on stable
> >>> maintenance in the upstream context after the EOL date. We already have
> >>> distro vendors doing work in the stable branches, but at EOL we push
> >>> them off to their respective distro-specific homes.
> >>>
> >>> As much as I'd like everyone to get on the CD train, I think it might
> >>> make sense to enable the vendors to not diverge, but instead let them
> >>> show up with people and commitment and say "Hey we're going to keep
> >>> Juno/Mitaka/etc alive!".
> >>>
> >>> So perhaps what would make sense is defining a process by which they can
> >>> make that happen.
> >> Do we need a new process? Aren't the existing stable maintenance
> >> and infrastructure teams clearly defined?
> >>
> >> We have this discussion whenever a release is about to go EOL, and
> >> the result is more or less the same each time. The burden of
> >> maintaining stable branches for longer than we do is currently
> >> greater than the resources being applied upstream to do that
> >> maintenance. Until that changes, I don't realistically see us being
> >> able to increase the community's commitment. That's not a lack of
> >> willingness, just an assessment of our current resources.
> >
> > I tend to agree with you. I only bring up a new process because I wonder
> > if the distro vendors would even be interested in collaborating on this,
> > or if this is just sort of "what they do" and we should accept that
> > they're going to do it outside upstream no matter how easy we make it.
> >
> > If we do believe that, and are OK with that, then we should not extend
> > EOL's, and we should make sure users understand that when they choose
> > the source of their OpenStack software.
> 
> Except for the fact that you are now forcing deployers that may or may 
> not be ok with paying for paid support to now pay for it... What is the 
> adoption rate/expected adoption rate of someone transitioning there 
> current cloud (which they did not pay support for) to a paid support model?

I'm not sure where that's coming from. We're not talking about
shortening any of the timelines we set for releases so far, only
sticking with them rather than extending them.

> 
> Does that require them to redeploy/convert there whole cloud using 
> vendors provided packages/deployment model... If so, jeez, that sounds 
> iffy...
> 
> And if a large majority of deployers aren't able to do that conversion 
> (or aren't willing to pay for support) and those same deployers are 
> willing to provide developers/others to ensure the old branches continue 
> to work and they know the issues of CI and they are willing to stay 
> on-top of that (a old-branch-dictator/leader may be needed to ensure 
> this?) then meh, I think we as a community should just let those 
> deployers have at it (ensuring they keep on working on the old branches 
> via what 'old-branch-dictator/leader/group' says is broken/needs fixing...)

That's basically my position. If someone actually shows up to do
the stable branch work, we can discuss changing timelines. So far
we have not had a stampede of new assistance, as far as I can tell.
We do still have a small committed group working on it, but I'm not
prepared to ask them to take on more of a commitment without
additional help.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Joshua Harlow's message of 2015-11-06 11:11:02 -0800:
> Clint Byrum wrote:
> > Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> >> Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> >>> Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > Worth mentioning that OpenStack releases that come out at the same time
> > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > are supported for 5 years by Canonical so are already kind of an LTS.
> > Support in this context means patches, updates and commercial support
> > (for a fee).
> > For paying customers 3 years of patches, updates and commercial support
> > for April releases, (Kilo, O, Q etc..) is also available.
>  Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
>  maintaining an older release for so long is a good use of people or CI
>  resources, especially given how hard it can be for us to keep even
>  recent stable releases working and maintained.
> 
> >>> The argument in the original post, I think, is that we should not
> >>> stand in the way of the vendors continuing to collaborate on stable
> >>> maintenance in the upstream context after the EOL date. We already have
> >>> distro vendors doing work in the stable branches, but at EOL we push
> >>> them off to their respective distro-specific homes.
> >>>
> >>> As much as I'd like everyone to get on the CD train, I think it might
> >>> make sense to enable the vendors to not diverge, but instead let them
> >>> show up with people and commitment and say "Hey we're going to keep
> >>> Juno/Mitaka/etc alive!".
> >>>
> >>> So perhaps what would make sense is defining a process by which they can
> >>> make that happen.
> >> Do we need a new process? Aren't the existing stable maintenance
> >> and infrastructure teams clearly defined?
> >>
> >> We have this discussion whenever a release is about to go EOL, and
> >> the result is more or less the same each time. The burden of
> >> maintaining stable branches for longer than we do is currently
> >> greater than the resources being applied upstream to do that
> >> maintenance. Until that changes, I don't realistically see us being
> >> able to increase the community's commitment. That's not a lack of
> >> willingness, just an assessment of our current resources.
> >
> > I tend to agree with you. I only bring up a new process because I wonder
> > if the distro vendors would even be interested in collaborating on this,
> > or if this is just sort of "what they do" and we should accept that
> > they're going to do it outside upstream no matter how easy we make it.
> >
> > If we do believe that, and are OK with that, then we should not extend
> > EOL's, and we should make sure users understand that when they choose
> > the source of their OpenStack software.
> 
> Except for the fact that you are now forcing deployers that may or may 
> not be ok with paying for paid support to now pay for it... What is the 
> adoption rate/expected adoption rate of someone transitioning there 
> current cloud (which they did not pay support for) to a paid support model?
> 
> Does that require them to redeploy/convert there whole cloud using 
> vendors provided packages/deployment model... If so, jeez, that sounds 
> iffy...
> 
> And if a large majority of deployers aren't able to do that conversion 
> (or aren't willing to pay for support) and those same deployers are 
> willing to provide developers/others to ensure the old branches continue 
> to work and they know the issues of CI and they are willing to stay 
> on-top of that (a old-branch-dictator/leader may be needed to ensure 
> this?) then meh, I think we as a community should just let those 
> deployers have at it (ensuring they keep on working on the old branches 
> via what 'old-branch-dictator/leader/group' says is broken/needs fixing...)

Right, I think where this leads, though, is that those who have
developers converge on CD, and those who have no developers have to pay
for support anyway. Running without developers and without a support
entity that can actually fix things is an interesting combination and
I'd be very curious to hear if there are any deployers having a positive
experience working that way.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-11-06 10:50:23 -0800:
> Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> > Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> > > Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > > > > Worth mentioning that OpenStack releases that come out at the same 
> > > > > time
> > > > > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + 
> > > > > Mitaka)
> > > > > are supported for 5 years by Canonical so are already kind of an LTS.
> > > > > Support in this context means patches, updates and commercial support
> > > > > (for a fee).
> > > > > For paying customers 3 years of patches, updates and commercial 
> > > > > support
> > > > > for April releases, (Kilo, O, Q etc..) is also available.
> > > > 
> > > > Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> > > > maintaining an older release for so long is a good use of people or CI
> > > > resources, especially given how hard it can be for us to keep even
> > > > recent stable releases working and maintained.
> > > > 
> > > 
> > > The argument in the original post, I think, is that we should not
> > > stand in the way of the vendors continuing to collaborate on stable
> > > maintenance in the upstream context after the EOL date. We already have
> > > distro vendors doing work in the stable branches, but at EOL we push
> > > them off to their respective distro-specific homes.
> > > 
> > > As much as I'd like everyone to get on the CD train, I think it might
> > > make sense to enable the vendors to not diverge, but instead let them
> > > show up with people and commitment and say "Hey we're going to keep
> > > Juno/Mitaka/etc alive!".
> > > 
> > > So perhaps what would make sense is defining a process by which they can
> > > make that happen.
> > 
> > Do we need a new process? Aren't the existing stable maintenance
> > and infrastructure teams clearly defined?
> > 
> > We have this discussion whenever a release is about to go EOL, and
> > the result is more or less the same each time. The burden of
> > maintaining stable branches for longer than we do is currently
> > greater than the resources being applied upstream to do that
> > maintenance. Until that changes, I don't realistically see us being
> > able to increase the community's commitment. That's not a lack of
> > willingness, just an assessment of our current resources.
> 
> I tend to agree with you. I only bring up a new process because I wonder
> if the distro vendors would even be interested in collaborating on this,
> or if this is just sort of "what they do" and we should accept that
> they're going to do it outside upstream no matter how easy we make it.
> 
> If we do believe that, and are OK with that, then we should not extend
> EOL's, and we should make sure users understand that when they choose
> the source of their OpenStack software.

OK, sure. If we can improve the process, then we should discuss that. We
did accommodate distro requests to continue tagging stable releases for
Liberty, but I'm not sure that compromise was made as the result of
promises of more resources.

Thierry did bring up the idea that the stable maintenance team should
stand alone, rather than being part of the release management team. That
would give the team its own PTL, and give them more autonomy about
deciding stable processes. I support the idea, but no one has come
forward and offered to drive it, yet.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-06 Thread Fox, Kevin M


> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: Thursday, November 05, 2015 3:19 PM
> To: openstack-dev
> Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager
> discussion @ the summit
> 
> Excerpts from Fox, Kevin M's message of 2015-11-05 13:18:13 -0800:
> > Your assuming there are only 2 choices,  zk or db+rabbit. I'm claiming
> > both hare suboptimal at present. a 3rd might be needed. Though even
> with its flaws, the db+rabbit choice has a few benefits too.
> >
> 
> Well, I'm assuming it is zk/etcd/consul, because while the java argument is
> rather religious, the reality is all three are significantly different from
> databases and message queues and thus will be "snowflakes". But yes, I
> _am_ assuming that Zookeeper is a natural, logical, simple choice, and that
> fact that it runs in a jvm is a poor reason to avoid it.

Yes. Having a snowflake there is probably unavoidable, but how much of one it is matters.

I've had to tune jvm stuff like the java stack size when things spontaneously 
break, and then they tell you, oh, yeah, when that happens, go tweak such and 
such in the jvm... Unix sysadmins usually know the common things for c apps 
without much effort, and tend to know to look in advance. In my somewhat 
limited experience with go, the runtime seems closer to regular unix programs 
than jvm ones.

The term 'java' is often conflated to mean both the java language and the jvm 
runtime. When people talk about java, often they are talking about the jvm. I 
think this is one of those cases. It's easier to debug c/go for unix admins not 
trained specifically in jvm behaviors/tunables.

> 
> > You also seem to assert that to support large clouds, the default must be
> something that can scale that large. While that would be nice, I don't think
> its a requirement if its overly burdensome on deployers of non huge clouds.
> >
> 
> I think the current solution even scales poorly for medium sized clouds.
> Only the tiniest of clouds with the fewest nodes can really sustain all of 
> that
> polling without incurring cost for that overhead that would be better spent
> on serviceing users.

While not ideal, I've run clouds with around 100 nodes on a single controller. 
If it's doable today, it should be doable with the new system. It's not ideal, 
but if it's a zero-effort deploy, and easy to debug, that has something going 
for it.

> 
> > I don't have metrics, but I would be surprised if most deployments today
> (production + other) used 3 controllers with a full ha setup. I would guess
> that the majority are single controller setups. With those, the overhead of
> maintaining a whole dlm like zk seems like overkill. If db+rabbit would work
> for that one case, that would be one less thing to have to setup for an op.
> They already have to setup db+rabbit. Or even a clm plugin of some sort,
> that won't scale, but would be very easy to deploy, and change out later
> when needed would be very useful.
> >
> 
> We do have metrics:
> 
> http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
> 
> Page 35, "How many physical compute nodes do OpenStack clouds have?"
> 

Not what I was asking. I was asking how many controllers, not how many compute 
nodes. Like I said above, 1 controller can handle quite a few compute nodes.

> 
> 10-99:42%
> 1-9:  36%
> 100-999:  15%
> 1000-: 7%
> 
> So for respondents to that survey, yes, "most" are running less than 100
> nodes. However, by compute node count, if we extrapolate a bit:
> 
> There were 154 respondents so:
> 
> 10-99 * 42% = 640 - 6403 nodes
> 1-9 * 36% = 55 - 498 nodes
> 100-999 * 15% = 2300 - 23076 nodes
> 1000- * 7% = 10780 - 107789 nodes
>
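
As an aside, those ranges fall straight out of the survey arithmetic. A quick
sketch, where the 9999 cap on the open-ended ">= 1000 nodes" bucket is an
assumption rather than survey data:

# Rough reconstruction of the extrapolation quoted above.
respondents = 154
buckets = [((10, 99), 0.42), ((1, 9), 0.36), ((100, 999), 0.15), ((1000, 9999), 0.07)]
for (low, high), share in buckets:
    clouds = respondents * share
    print("%d-%d nodes/cloud: %d - %d total nodes" % (low, high, clouds * low, clouds * high))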

This is good, but I believe this is biased towards the top end.

Respondents are much more likely to respond if they have a larger cloud to brag 
about. Folks doing it for development, testing, and other reasons may not 
respond because it's not worth the effort.

> So in terms of the number of actual computers running OpenStack compute,
> as an example, from the survey respondents, there are more computes
> running in *one* of the clouds with more than 1000 nodes than there are in
> *all* of the clouds with less than 10 nodes, and certainly more in all of the
> clouds over 1000 nodes, than in all of the clouds with less than 100 nodes.

For the reason listed above, I don't think we have enough evidence to draw too 
strong a conclusion from this.

> 
> What this means, to me, is that the investment in OpenStack should focus
> on those with > 1000, since those orgs are definitely investing a lot more
> today. We shouldn't make it _hard_ to do a tiny cloud, but I think it's ok to
> make the tiny cloud less efficient if it means we can grow it into a monster
> cloud at any point and we continue to garner support from orgs who need to
> build large scale clouds.

Yeah, I'd say, we for sure need a solution for 1000+.

We also need a really 

Re: [openstack-dev] [All][Glance Mitaka Priorities

2015-11-06 Thread Louis Taylor
On Fri, Nov 06, 2015 at 06:31:23PM +, Bhandaru, Malini K wrote:
> Hello Glance Team/Flavio
> 
> Would you please provide link to Glance priorities at 
> https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Glance
> 
> [ Malini] Regards
> Malini

I don't believe there was an etherpad for this session. We were discussing the
list of priorities for Mitaka located here:


https://specs.openstack.org/openstack/glance-specs/priorities/mitaka-priorities.html

I've updated the wiki page with a link to the spec.

Cheers,
Louis


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:

Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:

Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:

Worth mentioning that OpenStack releases that come out at the same time
as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
are supported for 5 years by Canonical so are already kind of an LTS.
Support in this context means patches, updates and commercial support
(for a fee).
For paying customers 3 years of patches, updates and commercial support
for April releases, (Kilo, O, Q etc..) is also available.

Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
maintaining an older release for so long is a good use of people or CI
resources, especially given how hard it can be for us to keep even
recent stable releases working and maintained.


The argument in the original post, I think, is that we should not
stand in the way of the vendors continuing to collaborate on stable
maintenance in the upstream context after the EOL date. We already have
distro vendors doing work in the stable branches, but at EOL we push
them off to their respective distro-specific homes.

As much as I'd like everyone to get on the CD train, I think it might
make sense to enable the vendors to not diverge, but instead let them
show up with people and commitment and say "Hey we're going to keep
Juno/Mitaka/etc alive!".

So perhaps what would make sense is defining a process by which they can
make that happen.

Do we need a new process? Aren't the existing stable maintenance
and infrastructure teams clearly defined?

We have this discussion whenever a release is about to go EOL, and
the result is more or less the same each time. The burden of
maintaining stable branches for longer than we do is currently
greater than the resources being applied upstream to do that
maintenance. Until that changes, I don't realistically see us being
able to increase the community's commitment. That's not a lack of
willingness, just an assessment of our current resources.


I tend to agree with you. I only bring up a new process because I wonder
if the distro vendors would even be interested in collaborating on this,
or if this is just sort of "what they do" and we should accept that
they're going to do it outside upstream no matter how easy we make it.

If we do believe that, and are OK with that, then we should not extend
EOL's, and we should make sure users understand that when they choose
the source of their OpenStack software.


Except for the fact that you are now forcing deployers that may or may 
not be ok with paying for paid support to now pay for it... What is the 
adoption rate/expected adoption rate of someone transitioning their 
current cloud (which they did not pay support for) to a paid support model?


Does that require them to redeploy/convert their whole cloud using 
vendor-provided packages/deployment model... If so, jeez, that sounds 
iffy...


And if a large majority of deployers aren't able to do that conversion 
(or aren't willing to pay for support) and those same deployers are 
willing to provide developers/others to ensure the old branches continue 
to work and they know the issues of CI and they are willing to stay 
on top of that (an old-branch-dictator/leader may be needed to ensure 
this?) then meh, I think we as a community should just let those 
deployers have at it (ensuring they keep on working on the old branches 
via what 'old-branch-dictator/leader/group' says is broken/needs fixing...)


My 2 cents,

Josh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [dlm] Zookeeper and openjdk, mythbusted

2015-11-06 Thread Joshua Harlow
I just wanted to bring this up in its own thread as I know it was a 
concern of (some) folks at the DLM session in tokyo[0] and I'd like to 
try to bust this myth using hopefully objective people/users of 
zookeeper (besides myself and yahoo, the company I work for) so that 
this myth can be put to bed.


Basically here is the TLDR of the question/complaint:

'''
Zookeeper, a java application, will force you to install Oracle's virtual 
machine implementation for it to work, and it doesn't work with the 
openjdk, and if tooz (an oslo library) has a capable driver that uses 
zookeeper internally (via kazoo @ http://kazoo.readthedocs.org) then it 
will force deployers of openstack and its components that will use more 
of tooz to install Oracle's virtual machine implementation.


This will not work!!
There is no way I can do that!!
Yell!! Shout!! Cry!!
'''

That's the *gist* of it (with additional dramatization included).

So in order to dispel this, I tried in that session to say 'actually I 
have heard nothing saying it doesn't work with openjdk', but the voices 
did not seem to hear that (or they were unable to listen because emotions 
were running high). Either way I wanted to ensure that people do know it 
does work with the openjdk, and here is a set of testimonials from real 
users of zookeeper + openjdk confirming that it does:


From Min Pae[1] on the Cue[2] team:

'''
<@sputnik13> harlowja for what it's worth we use zookeeper with openjdk
'''

From Greg Hill[3] who works on the rackspace bigdata[4] team:

'''
 and yes, we run Zookeeper on openjdk, and we haven't 
heard of any problems with it

'''

From Joe Smith[5][6] (who is at twitter, and is the Mesos/Aurora SRE 
Tech Lead there):


'''
 and yep, we (twitter) use zookeeper for service discovery
 someone asked me that question back at mesoscon in seattle, 
fwiw https://youtu.be/nNrh-gdu9m4?t=34m43s

 Yasumoto do u know if u use openjdk or oraclejdk?
 harlowja: yep, openjdk7
 but we're migrating up to 8
'''

From Martijn Verburg, who is an openjdk developer (and CEO)[7][8] 
and has some insightful info as well:


'''
So OpenJDK and Oracle JDK are almost identical in their make up 
*especially* on the server side. Many, many orgs like Google, Twitter, 
the biggest investment bank in the world, all use OpenJDK as opposed to 
Oracle's JDK.


---

The difference is the quality of the OpenJDK binaries built and released 
by package maintainers.


If you are getting IcedTea from RedHat (their supported OpenJDK binary) 
or Azul's Zulu (Fully supported OpenJDK) then you're *absolutely fine*.


If you're relying on the Debian or Fedora packages then *occasionally* 
those package maintainers don't put out a great binary as they don't run 
the TCK tests (partly because they can't as they are unwilling/unable to 
pay Oracle for that TCK).


Hope that all makes sense...
'''

So I hope the above is enough *proof* that yes, the openjdk is fine; 
there may have been some bugs in the past, but those afaik have all been 
resolved, and there are major contributors stepping up (and continuing to 
step up) to make sure that zookeeper + openjdk continue to work (because 
companies/projects/people like those mentioned above depend on it).
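
For anyone curious what this looks like from the OpenStack side, here is a
minimal sketch of taking a distributed lock through tooz's zookeeper driver
(the connection string, member id and lock name are made up for illustration;
kazoo does the actual zookeeper talking underneath):

# Minimal sketch: acquire a distributed lock via tooz's zookeeper driver.
# Assumes a zookeeper ensemble reachable at localhost:2181.
from tooz import coordination

coordinator = coordination.get_coordinator(
    'zookeeper://localhost:2181', b'worker-1')
coordinator.start()

lock = coordinator.get_lock(b'my-critical-section')
with lock:
    # only one member across the deployment runs this block at a time
    print("doing exclusive work")

coordinator.stop()

Nothing in that snippet knows or cares which JVM the zookeeper ensemble
happens to be running on, which is rather the point.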


-Josh

[0] https://etherpad.openstack.org/p/mitaka-cross-project-dlm
[1] https://launchpad.net/~sputnik13
[2] https://wiki.openstack.org/wiki/Cue
[3] https://launchpad.net/~greg-hill
[4] http://www.rackspace.com/cloud/big-data
[5] http://www.bjoli.com/
[6] https://github.com/Yasumoto
[7] http://martijnverburg.blogspot.com/
[8] http://www.infoq.com/interviews/verburg-ljc

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2015-11-06 10:28:41 -0800:
> Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> > Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > > > Worth mentioning that OpenStack releases that come out at the same time
> > > > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > > > are supported for 5 years by Canonical so are already kind of an LTS.
> > > > Support in this context means patches, updates and commercial support
> > > > (for a fee).
> > > > For paying customers 3 years of patches, updates and commercial support
> > > > for April releases, (Kilo, O, Q etc..) is also available.
> > > 
> > > Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> > > maintaining an older release for so long is a good use of people or CI
> > > resources, especially given how hard it can be for us to keep even
> > > recent stable releases working and maintained.
> > > 
> > 
> > The argument in the original post, I think, is that we should not
> > stand in the way of the vendors continuing to collaborate on stable
> > maintenance in the upstream context after the EOL date. We already have
> > distro vendors doing work in the stable branches, but at EOL we push
> > them off to their respective distro-specific homes.
> > 
> > As much as I'd like everyone to get on the CD train, I think it might
> > make sense to enable the vendors to not diverge, but instead let them
> > show up with people and commitment and say "Hey we're going to keep
> > Juno/Mitaka/etc alive!".
> > 
> > So perhaps what would make sense is defining a process by which they can
> > make that happen.
> 
> Do we need a new process? Aren't the existing stable maintenance
> and infrastructure teams clearly defined?
> 
> We have this discussion whenever a release is about to go EOL, and
> the result is more or less the same each time. The burden of
> maintaining stable branches for longer than we do is currently
> greater than the resources being applied upstream to do that
> maintenance. Until that changes, I don't realistically see us being
> able to increase the community's commitment. That's not a lack of
> willingness, just an assessment of our current resources.

I tend to agree with you. I only bring up a new process because I wonder
if the distro vendors would even be interested in collaborating on this,
or if this is just sort of "what they do" and we should accept that
they're going to do it outside upstream no matter how easy we make it.

If we do believe that, and are OK with that, then we should not extend
EOL's, and we should make sure users understand that when they choose
the source of their OpenStack software.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Jeremy Stanley
On 2015-11-06 10:12:21 -0800 (-0800), Clint Byrum wrote:
[...]
> Note that it's not just backporters though. It's infra resources too.

Aye, there's the rub. We don't just EOL these branches for fun or
because we hate old things or because ooh shiny squirrel. We EOL
them at a cadence where the community has demonstrated it loses its
ability to keep them healthy and testable (evaluated based on
performance over recent prior cycles because we have to warn
downstreams well in advance as to when they should expect upstream
support to cease).

Downstream maintainers regularly claim they will step up their
assistance upstream to keep stable branches alive if only we'll
extend the lifespan on them, so we tried that with Icehouse and,
based on our experience there, scaled back the lifespan of Juno
again accordingly. Keep in mind that extending support of stable
branches necessarily implies supporting a larger _number_ of stable
branches in parallel. If we switched from 12 months after release to
18 then we're maintaining at least 3 stable branches at any point in
time. If we extend it to 24 months then that's 4 stable branches.

To those who suggest solving this by claiming one is a LTS release
every couple years, you're implying a vastly different upgrade model
than we have now. If we declare Juno is a LTS and leave it supported
another 12 months, then 6 months from now when we EOL stable/kilo
we'll be telling deployers that they have to upgrade from supported
stable/juno through unsupported stable/kilo to supported
stable/liberty before running stable/mitaka. Or else you're saying
you intend to fix the current inability of our projects to skip
intermediate releases entirely during upgrades (a great idea, and so
I'm thrilled by those of you who intend to make it a reality, we can
revisit the LTS discussion once you finish that).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zaqar] OpenStack Tokyo Summit Summary

2015-11-06 Thread Doug Hellmann
Excerpts from Fei Long Wang's message of 2015-11-07 01:31:09 +1300:
> Greetings,
> 
> Firstly, thank you to everyone who joined the Zaqar sessions at the Tokyo summit. 
> We definitely made some great progress in those working sessions. Here 
> is the high level summary, and these are basically our Mitaka 
> priorities. I may have missed something, so please feel free to comment on/reply 
> to this mail.
> 
> Sahara + Zaqar
> -
> 
> We had a great discussion with Ethan Gafford from the Sahara team. The Sahara 
> team is happy to use Zaqar to fix some potential security issues. The 
> main use case that will be covered in Mitaka is protecting tenant guests and 
> data from administrative users. So what the Zaqar team needs to do in Mitaka 
> is to complete the zaqar client function gaps for v2 to support signed 
> URL, which will be used by the Sahara guest agent. Ethan will create a spec 
> in Sahara to track this work. This is a POC of what it'd look like to 
> have a guest agent in Sahara on top of Zaqar. The Sahara team has not 
> decided to use Zaqar yet, but this would be the basis for that discussion.
> 
> Horizon + Zaqar
> --
> 
> We used 1 horizon work session and 1 Zaqar work session to discuss this 
> topic. The main use case we would like to address is async 
> notification, so that Horizon won't have to poll the other OpenStack 
> components (e.g. Nova, Glance or Cinder) every second to get the latest 
> status. And I'm really happy to see we worked out a basic plan by 
> leveraging Zaqar's notification and websocket.
> 
> 1. Implement a basic filter for Zaqar subscription, so that Zaqar can 
> decide if the message should be posted/forwarded to the subscriber when 
> there is a new message posted to the queue. With this feature, Horizon will 
> only be notified of the notifications it is interested in.
> 
> https://blueprints.launchpad.net/zaqar/+spec/suport-filter-for-subscription
> 
> 2. Listen OpenStack notifications
> 
> We may need more discussion about this to decide whether it should be in 
> the scope of Zaqar's services. It could be a separate process/service of 
> Zaqar that listens for/collects interesting notifications/messages and posts them 
> into particular Zaqar queues. It sounds very interesting and useful but 
> we need to define the scope carefully for sure.

The Telemetry team discussed splitting out the code in Ceilometer that
listens for notifications to make it a more generic service that accepts
plugins to process events. One such plugin might be an interface to
filter events and republish them to Zaqar, so if you're interested in
working on that you might want to coordinate with the Telemetry team to
avoid duplicating effort.
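
Just to make the idea concrete (this is only a sketch, not an agreed design:
the filter condition, queue name, endpoint URL and Zaqar client setup below
are all illustrative), such a plugin could be little more than an
oslo.messaging notification endpoint that republishes matching events into a
queue:

import oslo_messaging
from oslo_config import cfg
from zaqarclient.queues import client as zaqar_client


class RepublishEndpoint(object):
    """Forward selected notifications into a Zaqar queue."""

    def __init__(self, queue):
        self.queue = queue

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # hypothetical filter: only forward instance lifecycle events
        if event_type.startswith('compute.instance.'):
            self.queue.post({'body': {'event_type': event_type,
                                      'payload': payload},
                             'ttl': 300})


# auth options elided; assumes a reachable Zaqar endpoint and the usual
# transport settings in the loaded oslo.config configuration
transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
zaqar = zaqar_client.Client('http://localhost:8888', version=1.1)
endpoints = [RepublishEndpoint(zaqar.queue('horizon-events'))]

listener = oslo_messaging.get_notification_listener(transport, targets,
                                                    endpoints)
listener.start()
listener.wait()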

> 
> 
> Pool Group and Flavor
> -
> 
> Thanks to MD MADEEM for proposing this topic so that we have a chance to review 
> the design of pool, pool group and flavor. Now the pool group and flavor 
> have a 1:1 mapping relationship and the pool group and pool have a 1:n 
> mapping relationship. But end users don't know about the existence of pools, so 
> the flavor is the way for an end user to select what kind of storage (based on 
> capabilities) they want to use. Since a pool group can't provide more 
> information than a flavor, it's not really necessary, so we decided to 
> deprecate/remove it in Mitaka. Given this is hidden from users (done 
> automatically by Zaqar), there won't be an impact on the end user and 
> the API backwards compatibility will be kept.
> 
> https://blueprints.launchpad.net/zaqar/+spec/deprecate-pool-group
> 
> Zaqar Client
> 
> 
> Some function gaps need to be filled in Mitaka. Personally, I would rate 
> the client work as the 1st priority of M since it's very key for the 
> integration with other OpenStack components. For v1.1, the support for 
> pool and flavor hasn't been completed. For v2, we're still missing 
> the support for subscription and signed URL.
> 
> https://blueprints.launchpad.net/zaqar/+spec/finish-client-support-for-v1.1-features
> 
> SqlAlchemy Migration
> -
> 
> Now we're missing the db migration support for SqlAlchemy, the control 
> plane driver. We will fix it in M as well.
> 
> https://blueprints.launchpad.net/zaqar/+spec/sqlalchemy-migration 
> 
> 
> 
> Guys, please contribute to this thread to fill in the points/things I missed, 
> or pop up in the #openstack-zaqar channel directly with questions and 
> suggestions.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Ed Leafe
On Nov 6, 2015, at 9:32 AM, John Garbutt  wrote:

> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.

Well deserved! +1 from me.


-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Ed Leafe
On Nov 6, 2015, at 9:32 AM, John Garbutt  wrote:

> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.

I'm not a core, but would like to add my hearty +1.

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All][Glance] Mitaka Priorities

2015-11-06 Thread Bhandaru, Malini K
Hello Glance Team/Flavio

Would you please provide a link to the Glance priorities at 
https://wiki.openstack.org/wiki/Design_Summit/Mitaka/Etherpads#Glance

[ Malini] Regards
Malini

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-11-06 10:12:21 -0800:
> Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > > Worth mentioning that OpenStack releases that come out at the same time
> > > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > > are supported for 5 years by Canonical so are already kind of an LTS.
> > > Support in this context means patches, updates and commercial support
> > > (for a fee).
> > > For paying customers 3 years of patches, updates and commercial support
> > > for April releases, (Kilo, O, Q etc..) is also available.
> > 
> > Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> > maintaining an older release for so long is a good use of people or CI
> > resources, especially given how hard it can be for us to keep even
> > recent stable releases working and maintained.
> > 
> 
> The argument in the original post, I think, is that we should not
> stand in the way of the vendors continuing to collaborate on stable
> maintenance in the upstream context after the EOL date. We already have
> distro vendors doing work in the stable branches, but at EOL we push
> them off to their respective distro-specific homes.
> 
> As much as I'd like everyone to get on the CD train, I think it might
> make sense to enable the vendors to not diverge, but instead let them
> show up with people and commitment and say "Hey we're going to keep
> Juno/Mitaka/etc alive!".
> 
> So perhaps what would make sense is defining a process by which they can
> make that happen.

Do we need a new process? Aren't the existing stable maintenance
and infrastructure teams clearly defined?

We have this discussion whenever a release is about to go EOL, and
the result is more or less the same each time. The burden of
maintaining stable branches for longer than we do is currently
greater than the resources being applied upstream to do that
maintenance. Until that changes, I don't realistically see us being
able to increase the community's commitment. That's not a lack of
willingness, just an assessment of our current resources.

Doug

> 
> Note that it's not just backporters though. It's infra resources too.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Erik McCormick's message of 2015-11-06 09:36:44 -0800:
> On Fri, Nov 6, 2015 at 12:28 PM, Mark Baker  wrote:
> > Worth mentioning that OpenStack releases that come out at the same time as
> > Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> > supported for 5 years by Canonical so are already kind of an LTS. Support in
> > this context means patches, updates and commercial support (for a fee).
> > For paying customers 3 years of patches, updates and commercial support for
> > April releases, (Kilo, O, Q etc..) is also available.
> >
> 
> Does that mean that you are actually backporting and gate testing
> patches downstream that aren't being done upstream? I somehow doubt
> it, but if so, then it would be great if you could lead some sort of
> initiative to push those patches back upstream.
> 

If Canonical and Ubuntu still work the way they worked when I was
involved, then yes and no. The initial patches still happen upstream,
in trunk. But the difference is the backporting can't happen upstream in
stable branches after EOL, because those branches are shut down. That
seems a shame, as the community at large would likely be better served
if the vendors can continue to land their stable patches for as long as
they're working on them.

That said, I think it would take a bit of a shift in participation to get
the needed resources in the right place (like infra) to make that happen.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Clint Byrum
Excerpts from Dan Smith's message of 2015-11-06 09:37:44 -0800:
> > Worth mentioning that OpenStack releases that come out at the same time
> > as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> > are supported for 5 years by Canonical so are already kind of an LTS.
> > Support in this context means patches, updates and commercial support
> > (for a fee).
> > For paying customers 3 years of patches, updates and commercial support
> > for April releases, (Kilo, O, Q etc..) is also available.
> 
> Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
> maintaining an older release for so long is a good use of people or CI
> resources, especially given how hard it can be for us to keep even
> recent stable releases working and maintained.
> 

The argument in the original post, I think, is that we should not
stand in the way of the vendors continuing to collaborate on stable
maintenance in the upstream context after the EOL date. We already have
distro vendors doing work in the stable branches, but at EOL we push
them off to their respective distro-specific homes.

As much as I'd like everyone to get on the CD train, I think it might
make sense to enable the vendors to not diverge, but instead let them
show up with people and commitment and say "Hey we're going to keep
Juno/Mitaka/etc alive!".

So perhaps what would make sense is defining a process by which they can
make that happen.

Note that it's not just backporters though. It's infra resources too.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Tokyo summit recap

2015-11-06 Thread John Dickinson
Below are some high-level notes from the topics discussed at the summit by the 
Swift community.


encryption
 We're continuing the data-at-rest encryption work. For our first 
deliverable, we'll have a feature that allows operators to set an encryption 
key for the cluster and all encryption will be transparent to the end-user. 
There's still a lot of work to be done, but we've made very good progress, and 
we think we know the missing parts.

container sync
 IBM in particular is interested in improving container sync. Specifically, 
there are two ways to improve it. First, improve throughput by increasing 
concurrency and parallelism. Second, reduce cluster load by improving the ways 
in which we find objects and containers that need to be synced.

hummingbird
 Rackspace has some impressive results from their hummingbird experiment. 
It still doesn't have feature parity, but Rackspace has some internal tests to 
suss out where the gaps are. They should be releasing those soon. Additionally, 
there are some good lessons to learn from the way hummingbird is doing 
replication that can be applied to the Python implementation. There is still 
some anticipated work on hummingbird to further improve replication, and that 
may invalidate any Golang/Python integration work now.

ops feedback
 Operators asked for better support around finding what accounts are in the 
cluster. This is used for utilization. Everyone needs it, and everyone 
implements it slightly differently. Operators would like support upstream for 
an approved way to find accounts.
 Additional ops requests included more info around per-disk usage metrics 
and better account/container replication.

swiftclient
 Much of the pain in using swiftclient is around auth. It's hard to find 
what the right parameters are and how they are used. We're investigating better 
keystone integration, and we will be raising the default auth version used from 
v1 to v3.
 Much of the rest of the pain around using swiftclient is from a lack of 
docs. We'll be building a docs outline that will be used as a TODO list where 
we will add additional docs, including examples. These docs are intended to 
be used by end-users (CLI and SDK consumers).
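
For the SDK side of that, connecting with Keystone v3 today looks roughly like the
sketch below (endpoint and credential values are placeholders; the point is mostly
how many knobs a user currently has to discover on their own):

from swiftclient import client as swift_client

conn = swift_client.Connection(
    authurl='http://keystone.example.com:5000/v3',
    user='demo', key='secret',
    auth_version='3',
    os_options={'project_name': 'demo',
                'user_domain_name': 'Default',
                'project_domain_name': 'Default'})

# list containers in the account to confirm the auth worked
headers, containers = conn.get_account()
print('containers: %d' % len(containers))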

ring placement
 Clay is improving the way partitions are placed in the ring during a 
rebalance. This matters a *lot* for smaller clusters and clusters that are 
adding significant capacity (eg whole new failure domains).

EC patches
 We've got ongoing work in EC to improve the way it works and fix bugs. 
Most of this is already proposed to gerrit.
 We've also spent time characterizing EC performance in various clusters 
with different configs. We'll be using this data to further improve EC support 
in Swift.

symlinks
 There's a lot of interest in a symlink functionality in Swift. Any sort of 
data migration or changing policies would likely make use of this. Although 
there is a very basic amount of functionality implemented as a PoC today, 
there's a lot more to be done before this is "done". Mostly it's been a design 
exercise so far.

container sharding
 Matt is working on container sharding and shared some impressive results 
from his current dev work.

fast-post
 Alistair has been working on getting fast-POST to work properly. It's 
nearly done and pretty much needs some reviews for the patch (albeit with a few 
outstanding comments addressed).

Data migration and policy management
 There's ongoing discussion about migrating data between policies, changing 
policies, and using different media for auto-manged storage tiers. Much of this 
work is happening in the 3rd party ecosystem and/or is dependent on other 
not-yet-completed work (like symlinks, notifications, etc)

python3 plan
 We've got a plan to slowly get py3 support in Swift. This matters because 
distros will start shipping py3 as a default (or py3 only) next year. The 
current plan will end up being a long slog, though.

defcore
 For further integration with DefCore, we must get the testr support for 
our functional tests merged soon so they can be run via tempest. Then we can 
start categorizing the newly-available-to-defcore tests into defcore 
capabilities. Anything that goes into existing capabilities becomes a test that 
clusters must pass. Any new capability goes through the defcore process to 
become a required capability in 2017.

Keystone policy support
 We are moving towards adding oslo.policy support in swift keystoneauth 
middleware. First step is to ensure we have comprehensive functional test 
coverage of the auth mechanisms before modifying the middleware.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Openstack-operators] [logs] Neutron not logging user information on wsgi requests by default

2015-11-06 Thread Kris G. Lindgren
Fixes to below:

logging_context_format_string is the correct config option name.

The exact link that I wanted for [1] below is actually: 
https://review.openstack.org/#/c/172508/2/lib/neutron-legacy
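
For reference, with the corrected name the snippet from the original mail below
reads (one option in the [DEFAULT] section of neutron.conf; the value is a single
line, wrapped here for mail):

[DEFAULT]
logging_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s 
[%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s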

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren" mailto:klindg...@godaddy.com>>
Date: Friday, November 6, 2015 at 10:27 AM
To: 
"openstack-dev@lists.openstack.org" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [Openstack-operators] [logs] Neutron not logging user information on 
wsgi requests by default

Hello all,

I noticed the other day that in our Openstack install (Kilo) Neutron seems to be 
the only project that was not logging the username/tenant information on every 
wsgi request.  Nova/Glance/heat all log a username and/or project on each 
request.  Our wsgi logs from neutron look like the following:

2015-11-05 13:45:24.302 14549 INFO neutron.wsgi 
[req-ab633261-da6d-4ac7-8a35-5d321a8b4a8f ] 10.224.48.132 - - [05/Nov/2015 
13:45:24]
"GET /v2.0/networks.json?id=2d5fe344-4e98-4ccc-8c91-b8064d17c64c HTTP/1.1" 200 
655 0.027550

I did a fair amount of digging and it seems that devstack is by default 
overriding the context log format for neutron to add the username/tenant 
information into the logs.  However, there is active work to remove this 
override from devstack[1].  Using the devstack approach, I was able to true 
up our neutron wsgi logs to be in line with what other services are providing.

If you add:
loggin_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s 
[%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s

To the [DEFAULT] section of neutron.conf and restart neutron-server.  You will 
now get log output like the following:

 2015-11-05 18:07:31.033 INFO neutron.wsgi 
[req-ebf1d3c9-b556-48a7-b1fa-475dd9df0bf7  ] 10.224.48.132 - - [05/Nov/2015 18:07:31]
"GET /v2.0/networks.json?id=55e1b92a-a2a3-4d64-a2d8-4b0bee46f3bf HTTP/1.1" 200 
617 0.035515

So go forth and check your logs before you need to use them to debug who 
did what, when, and where, and get the information that you need added to the wsgi 
logs.  If you are not seeing wsgi logs for your projects, try enabling 
verbose=true in the [DEFAULT] section as well.

Adding [logs] tag since it would be nice to have all projects logging to a 
standard wsgi format out of the gate.

[1] - https://review.openstack.org/#/c/172510/2
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Andrew Laski

On 11/06/15 at 09:18am, Dan Smith wrote:

I propose we add Sylvain Bauza[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the Scheduler.

Please respond with comments, +1s, or objections within one week.


+1 for tasty cheese.


+1 for Sylvain, and a second +1 for tasty cheese.


--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Erik McCormick
On Fri, Nov 6, 2015 at 12:28 PM, Mark Baker  wrote:
> Worth mentioning that OpenStack releases that come out at the same time as
> Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
> supported for 5 years by Canonical so are already kind of an LTS. Support in
> this context means patches, updates and commercial support (for a fee).
> For paying customers 3 years of patches, updates and commercial support for
> April releases, (Kilo, O, Q etc..) is also available.
>

Does that mean that you are actually backporting and gate testing
patches downstream that aren't being done upstream? I somehow doubt
it, but if so, then it would be great if you could lead some sort of
initiative to push those patches back upstream.


-Erik

>
>
> Best Regards
>
>
> Mark Baker
>
> On Fri, Nov 6, 2015 at 5:03 PM, James King  wrote:
>>
>> +1 for some sort of LTS release system.
>>
>> Telcos and risk-averse organizations working with sensitive data might not
>> be able to upgrade nearly as fast as the releases keep coming out. From the
>> summit in Japan it sounds like companies running some fairly critical public
>> infrastructure on Openstack aren’t going to be upgrading to Kilo any time
>> soon.
>>
>> Public clouds might even benefit from this. I know we (Dreamcompute) are
>> working towards tracking the upstream releases closer… but it’s not feasible
>> for everyone.
>>
>> I’m not sure whether the resources exist to do this but it’d be a nice to
>> have, imho.
>>
>> > On Nov 6, 2015, at 11:47 AM, Donald Talton 
>> > wrote:
>> >
>> > I like the idea of LTS releases.
>> >
>> > Speaking to my own deployments, there are many new features we are not
>> > interested in, and wouldn't be, until we can get organizational (cultural)
>> > change in place, or see stability and scalability.
>> >
>> > We can't rely on, or expect, that orgs will move to the CI/CD model for
>> > infra, when they aren't even ready to do that for their own apps. It's 
>> > still
>> > a new "paradigm" for many of us. CI/CD requires a considerable engineering
>> > effort, and given that the decision to "switch" to OpenStack is often 
>> > driven
>> > by cost-savings over enterprise virtualization, adding those costs back in
>> > via engineering salaries doesn't make fiscal sense.
>> >
>> > My big argument is that if Icehouse/Juno works and is stable, and I
>> > don't need newer features from subsequent releases, why would I expend the
>> > effort until such a time that I do want those features? Thankfully there 
>> > are
>> > vendors that understand this. Keeping up with the release cycle just for 
>> > the
>> > sake of keeping up with the release cycle is exhausting.
>> >
>> > -Original Message-
>> > From: Tony Breeds [mailto:t...@bakeyournoodle.com]
>> > Sent: Thursday, November 05, 2015 11:15 PM
>> > To: OpenStack Development Mailing List
>> > Cc: openstack-operat...@lists.openstack.org
>> > Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for
>> > longer.
>> >
>> > Hello all,
>> >
>> > I'll start by acknowledging that this is a big and complex issue and I
>> > do not claim to be across all the view points, nor do I claim to be
>> > particularly persuasive ;P
>> >
>> > Having stated that, I'd like to seek constructive feedback on the idea
>> > of keeping Juno around for a little longer.  During the summit I spoke to a
>> > number of operators, vendors and developers on this topic.  There was some
>> > support and some "That's crazy pants!" responses.  I clearly didn't make it
>> > around to everyone, hence this email.
>> >
>> > Acknowledging my affiliation/bias:  I work for Rackspace in the private
>> > cloud team.  We support a number of customers currently running Juno that
>> > are, for a variety of reasons, challenged by the Kilo upgrade.
>> >
>> > Here is a summary of the main points that have come up in my
>> > conversations, both for and against.
>> >
>> > Keep Juno:
>> > * According to the current user survey[1] Icehouse still has the
>> >   biggest install base in production clouds.  Juno is second, which
>> > makes
>> >   sense. If we EOL Juno this month that means ~75% of production clouds
>> >   will be running an EOL'd release.  Clearly many of these operators
>> > have
>> >   support contracts from their vendor, so those operators won't be left
>> >   completely adrift, but I believe it's the vendors that benefit from
>> > keeping
>> >   Juno around. By working together *in the community* we'll see the best
>> >   results.
>> >
>> > * We only recently EOL'd Icehouse[2].  Sure it was well communicated,
>> > but we
>> >   still have a huge Icehouse/Juno install base.
>> >
>> > For me this is pretty compelling but for balance 
>> >
>> > Keep the current plan and EOL Juno Real Soon Now:
>> > * There is also no ignoring the elephant in the room that with HP
>> > stepping
>> >   back from public cloud there are questions about our CI capacity, and
>> >   keeping Juno will have an impact on that crit

Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Dan Smith
> Worth mentioning that OpenStack releases that come out at the same time
> as Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka)
> are supported for 5 years by Canonical so are already kind of an LTS.
> Support in this context means patches, updates and commercial support
> (for a fee).
> For paying customers 3 years of patches, updates and commercial support
> for April releases, (Kilo, O, Q etc..) is also available.

Yeah. IMHO, this is what you pay your vendor for. I don't think upstream
maintaining an older release for so long is a good use of people or CI
resources, especially given how hard it can be for us to keep even
recent stable releases working and maintained.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][bugs] Weekly Status Report

2015-11-06 Thread Kashyap Chamarthy
On Fri, Nov 06, 2015 at 05:54:59PM +0100, Markus Zoeller wrote:
> Hey folks,
> 
> below is the first report of bug stats I intend to post weekly.
> We discussed briefly during the Mitaka summit that this report
> could be useful for keeping attention on the open bugs at a certain
> level. Let me know if you think it's missing something.

Nice.  Thanks for this super useful report (especially the queries)!

For cadence, I feel a week flies by too quickly, which is likely to
cause people to train their muscle memory to mark these emails as read.
Maybe bi-weekly?
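
(As an aside, for anyone who would rather script these numbers than follow the
bit.ly links, a rough launchpadlib sketch is below. The status/tag/importance
filters are only my guess at what those queries encode, so treat it as a
starting point rather than the canonical definition of the report.)

from launchpadlib.launchpad import Launchpad

# Anonymous, read-only access to production Launchpad.
lp = Launchpad.login_anonymously('nova-bug-stats', 'production')
nova = lp.projects['nova']

# e.g. new, untriaged bugs carrying the libvirt tag
new_libvirt = nova.searchTasks(status=['New'], tags=['libvirt'])

# e.g. high priority bugs that nobody has started working on yet
high_open = nova.searchTasks(importance=['High'],
                             status=['New', 'Confirmed', 'Triaged'])

print('new libvirt bugs: %d' % len(new_libvirt))
print('high prio, not in progress: %d' % len(high_open))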

> 
> New bugs which are *not* assigned to any subteam
> 
> count: 19
> query: http://bit.ly/1WF68Iu
> 
> 
> New bugs which are *not* triaged
> 
> subteam: libvirt 
> count: 14 
> query: http://bit.ly/1Hx3RrL
> subteam: volumes 
> count: 11
> query: http://bit.ly/1NU2DM0
> subteam: network : 
> count: 4
> query: http://bit.ly/1LVAQdq
> subteam: db : 
> count: 4
> query: http://bit.ly/1LVATWG
> subteam: 
> count: 67
> query: http://bit.ly/1RBVZLn
> 
> 
> High prio bugs which are *not* in progress
> --
> count: 39
> query: http://bit.ly/1MCKoHA
> 
> 
> Critical bugs which are *not* in progress
> -
> count: 0
> query: http://bit.ly/1kfntfk
> 
> 
> Readings
> 
> * https://wiki.openstack.org/wiki/BugTriage
> * https://wiki.openstack.org/wiki/Nova/BugTriage
> * 
> http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Mark Baker
Worth mentioning that OpenStack releases that come out at the same time as
Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
supported for 5 years by Canonical so are already kind of an LTS. Support
in this context means patches, updates and commercial support (for a fee).
For paying customers 3 years of patches, updates and commercial support for
April releases, (Kilo, O, Q etc..) is also available.



Best Regards


Mark Baker

On Fri, Nov 6, 2015 at 5:03 PM, James King  wrote:

> +1 for some sort of LTS release system.
>
> Telcos and risk-averse organizations working with sensitive data might not
> be able to upgrade nearly as fast as the releases keep coming out. From the
> summit in Japan it sounds like companies running some fairly critical
> public infrastructure on Openstack aren’t going to be upgrading to Kilo any
> time soon.
>
> Public clouds might even benefit from this. I know we (Dreamcompute) are
> working towards tracking the upstream releases closer… but it’s not
> feasible for everyone.
>
> I’m not sure whether the resources exist to do this but it’d be a nice to
> have, imho.
>
> > On Nov 6, 2015, at 11:47 AM, Donald Talton 
> wrote:
> >
> > I like the idea of LTS releases.
> >
> > Speaking to my own deployments, there are many new features we are not
> interested in, and wouldn't be, until we can get organizational (cultural)
> change in place, or see stability and scalability.
> >
> > We can't rely on, or expect, that orgs will move to the CI/CD model for
> infra, when they aren't even ready to do that for their own apps. It's
> still a new "paradigm" for many of us. CI/CD requires a considerable
> engineering effort, and given that the decision to "switch" to OpenStack is
> often driven by cost-savings over enterprise virtualization, adding those
> costs back in via engineering salaries doesn't make fiscal sense.
> >
> > My big argument is that if Icehouse/Juno works and is stable, and I
> don't need newer features from subsequent releases, why would I expend the
> effort until such a time that I do want those features? Thankfully there
> are vendors that understand this. Keeping up with the release cycle just
> for the sake of keeping up with the release cycle is exhausting.
> >
> > -Original Message-
> > From: Tony Breeds [mailto:t...@bakeyournoodle.com]
> > Sent: Thursday, November 05, 2015 11:15 PM
> > To: OpenStack Development Mailing List
> > Cc: openstack-operat...@lists.openstack.org
> > Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for
> longer.
> >
> > Hello all,
> >
> > I'll start by acknowledging that this is a big and complex issue and I
> do not claim to be across all the view points, nor do I claim to be
> particularly persuasive ;P
> >
> > Having stated that, I'd like to seek constructive feedback on the idea
> of keeping Juno around for a little longer.  During the summit I spoke to a
> number of operators, vendors and developers on this topic.  There was some
> support and some "That's crazy pants!" responses.  I clearly didn't make it
> around to everyone, hence this email.
> >
> > Acknowledging my affiliation/bias:  I work for Rackspace in the private
> cloud team.  We support a number of customers currently running Juno that
> are, for a variety of reasons, challenged by the Kilo upgrade.
> >
> > Here is a summary of the main points that have come up in my
> conversations, both for and against.
> >
> > Keep Juno:
> > * According to the current user survey[1] Icehouse still has the
> >   biggest install base in production clouds.  Juno is second, which makes
> >   sense. If we EOL Juno this month that means ~75% of production clouds
> >   will be running an EOL'd release.  Clearly many of these operators have
> >   support contracts from their vendor, so those operators won't be left
> >   completely adrift, but I believe it's the vendors that benefit from
> keeping
> >   Juno around. By working together *in the community* we'll see the best
> >   results.
> >
> > * We only recently EOL'd Icehouse[2].  Sure it was well communicated,
> but we
> >   still have a huge Icehouse/Juno install base.
> >
> > For me this is pretty compelling but for balance 
> >
> > Keep the current plan and EOL Juno Real Soon Now:
> > * There is also no ignoring the elephant in the room that with HP
> stepping
> >   back from public cloud there are questions about our CI capacity, and
> >   keeping Juno will have an impact on that critical resource.
> >
> > * Juno (and other stable/*) resources have a non-zero impact on *every*
> >   project, esp. @infra and release management.  We need to ensure this
> >   isn't too much of a burden.  This mostly means we need enough
> trustworthy
> >   volunteers.
> >
> > * Juno is also tied up with Python 2.6 support. When
> >   Juno goes, so will Python 2.6 which is a happy feeling for a number of
> >   people, and more importantly reduces complexity in our project
> >   infrastructure.
> >
> > * 

[openstack-dev] [Openstack-operators] [logs] Neutron not logging user information on wsgi requests by default

2015-11-06 Thread Kris G. Lindgren
Hello all,

I noticed the other day that in our Openstack install (Kilo) Neutron seems to be 
the only project that was not logging the username/tenant information on every 
wsgi request.  Nova/Glance/heat all log a username and/or project on each 
request.  Our wsgi logs from neutron look like the following:

2015-11-05 13:45:24.302 14549 INFO neutron.wsgi 
[req-ab633261-da6d-4ac7-8a35-5d321a8b4a8f ] 10.224.48.132 - - [05/Nov/2015 
13:45:24]
"GET /v2.0/networks.json?id=2d5fe344-4e98-4ccc-8c91-b8064d17c64c HTTP/1.1" 200 
655 0.027550

I did a fair amount of digging and it seems that devstack is by default 
overriding the context log format for neutron to add the username/tenant 
information into the logs.  However, there is active work to remove this 
override from devstack[1].  Using the devstack approach, I was able to true 
up our neutron wsgi logs to be in line with what other services are providing.

If you add:
loggin_context_format_string = %(asctime)s.%(msecs)03d %(levelname)s %(name)s 
[%(request_id)s %(user_name)s %(project_name)s] %(instance)s%(message)s

To the [DEFAULT] section of neutron.conf and restart neutron-server.  You will 
now get log output like the following:

 2015-11-05 18:07:31.033 INFO neutron.wsgi 
[req-ebf1d3c9-b556-48a7-b1fa-475dd9df0bf7  ] 10.224.48.132 - - [05/Nov/2015 18:07:31]
"GET /v2.0/networks.json?id=55e1b92a-a2a3-4d64-a2d8-4b0bee46f3bf HTTP/1.1" 200 
617 0.035515

So go forth and check your logs before you need to use them to debug who 
did what, when, and where, and get the information that you need added to the wsgi 
logs.  If you are not seeing wsgi logs for your projects, try enabling 
verbose=true in the [DEFAULT] section as well.

Adding [logs] tag since it would be nice to have all projects logging to a 
standard wsgi format out of the gate.

[1] - https://review.openstack.org/#/c/172510/2
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Vladimir Kuklin
Just my 2 cents here - let's do a docker backup and roll it up onto a brand new
Fuel 8 node.

On Fri, Nov 6, 2015 at 7:54 PM, Oleg Gelbukh  wrote:

> Matt,
>
> You are talking about this part of Operations guide [1], or you mean
> something else?
>
> If yes, then we still need to extract data from backup containers. I'd
> prefer backup of DB in simple plain text file, since our DBs are not that
> big.
>
> [1]
> https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn 
> wrote:
>
>> Oleg,
>>
>> All the volatile information, including a DB dump, is contained in the
>> small Fuel Master backup. There should be no information lost unless there
>> was manual customization done inside the containers (such as puppet
>> manifest changes). There shouldn't be a need to back up the entire
>> containers.
>>
>> The information we would lose would include the IP configuration of
>> interfaces besides the one used for the Fuel PXE network and any custom
>> configuration done on the Fuel Master.
>>
>> I want #1 to work smoothly, but #2 should also be a safe route.
>>
>> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh 
>> wrote:
>>
>>> Evgeniy,
>>>
>>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>>>
 Also we should decide when to run containers
 upgrade + host upgrade? Before or after new CentOS is installed?
 Probably
 it should be done before we run backup, in order to get the latest
 scripts for
 backup/restore actions.

>>>
>>> We're working to determine if we need to backup/upgrade containers at
>>> all. My expectation is that we should be OK with just backup of DB, IP
>>> addresses settings from astute.yaml for the master node, and credentials
>>> from configuration files for the services.
>>>
>>> --
>>> Best regards,
>>> Oleg Gelbukh
>>>
>>>

 Thanks,

 On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> At the moment I'm working on deprecating Fuel upgrade tarball.
> Currently, it includes the following:
>
> * RPM repository (upstream + mos)
> * DEB repository (mos)
> * openstack.yaml
> * version.yaml
> * upgrade script itself (+ virtualenv)
>
> Apart from upgrading docker containers this upgrade script makes
> copies of the RPM/DEB repositories and puts them on the master node naming
> these repository directories depending on what is written in 
> openstack.yaml
> and version.yaml. My plan was something like:
>
> 1) deprecate version.yaml (move all fields from there to various
> places)
> 2) deliver openstack.yaml with fuel-openstack-metadata package
> 3) do not put new repos on the master node (instead we should use
> online repos or use fuel-createmirror to make local mirrors)
> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>
> Then UX was supposed to be roughly like:
>
> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
> 2) yum install fuel-upgrade
> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
> there should have not be parts coping RPM/DEB repos)
>
> However, it turned out that Fuel 8.0 is going to be run on Centos 7
> and it is not enough to just do things which we usually did during
> upgrades. Now there are two ways to upgrade:
> 1) to use the official Centos upgrade script for upgrading from 6 to 7
> 2) to backup the master node, then reinstall it from scratch and then
> apply backup
>
> Upgrade team is trying to understand which way is more appropriate.
> Regarding to my tarball related activities, I'd say that this package 
> based
> upgrade approach can be aligned with (1) (fuel-upgrade would use official
> Centos upgrade script as a first step for upgrade), but it definitely can
> not be aligned with (2), because it assumes reinstalling the master node
> from scratch.
>
> Right now, I'm finishing the work around deprecating version.yaml and
> my further steps would be to modify fuel-upgrade script so it does not 
> copy
> RPM/DEB repos, but those steps make little sense taking into account 
> Centos
> 7 feature.
>
> Colleagues, let's make a decision about how we are going to upgrade
> the master node ASAP. Probably my tarball related work should be reduced 
> to
> just throwing tarball away.
>
>
> Vladimir Kozhukalov
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 

Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Dan Smith
> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.

+1 for tasty cheese.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-06 Thread Antoni Segura Puimedon
On Fri, Nov 6, 2015 at 1:20 PM, Baohua Yang  wrote:

> It does cause confusing by calling container-inside-vm as nested
> container.
>
> The "nested" term in container area usually means
> container-inside-container.
>

I try to always put it as VM-nested container. But I probably slipped in
some mentions.


> we may refer this (container-inside-vm) explicitly as vm-holding container.
>

container-in-vm?


>
> On Fri, Nov 6, 2015 at 12:13 PM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> @Gal, I was asking about "container in nova vm" case.
>> Not sure if you were referring to this case as nested containers case. I
>> guess nested containers case would be "containers inside containers" and
>> this could be hosted on nova vm and nova bm node. Is my understanding
>> correct?
>>
>> Thanks Gal and Toni, for now i got answer to my query related to
>> "container in vm" case.
>>
>> -Vikas
>>
>> On Thu, Nov 5, 2015 at 6:00 PM, Gal Sagie  wrote:
>>
>>> The current OVS binding proposals are not for nested containers.
>>> I am not sure if you are asking about that case or about the nested
>>> containers inside a VM case.
>>>
>>> For the nested containers, we will use Neutron solutions that support
>>> this kind of configuration, for example
>>> if you look at OVN you can define "parent" and "sub" ports, so OVN knows
>>> to perform the logical pipeline in the compute host
>>> and only perform VLAN tagging inside the VM (as Toni mentioned)
>>>
>>> If you need more clarification you can catch me on IRC as well and we
>>> can talk.
>>>
>>> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi All,

 I would appreciate inputs on following queries:
 1. Are we assuming nova bm nodes to be docker host for now?

 If Not:
  - Assuming nova vm as docker host and ovs as networking plugin:
 This line is from the etherpad[1], "Eachdriver would have
 an executable that receives the name of the veth pair that has to be bound
 to the overlay" .
 Query 1:  As per current ovs binding proposals by Feisky[2]
 and Diga[3], vif seems to be binding with br-int on vm. I am unable to
 understand how overlay will work. AFAICT , neutron will configure br-tun of
 compute machines ovs only. How overlay(br-tun) configuration will happen
 inside vm ?

  Query 2: Are we having double encapsulation(both at vm and
 compute)? Is not it possible to bind vif into compute host br-int?

  Query3: I did not see subnet tags for network plugin being
 passed in any of the binding patches[2][3][4]. Dont we need that?


 [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
 [2]  https://review.openstack.org/#/c/241558/
 [3]  https://review.openstack.org/#/c/232948/1
 [4]  https://review.openstack.org/#/c/227972/


 -Vikas Choudhary


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Best Regards ,
>>>
>>> The G.
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-06 Thread Doug Hellmann
Excerpts from Dolph Mathews's message of 2015-11-05 16:31:28 -0600:
> On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann  wrote:
> 
> > Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > > > Can people help me work through the right set of tools for this use
> > case
> > > > > (has come up from several Operators) and map out a plan to implement
> > it:
> > > > >
> > > > > Large cloud with many users coming from multiple Federation sources
> > has
> > > > > a policy of providing a minimal setup for each user upon first visit
> > to
> > > > > the cloud:  Create a project for the user with a minimal quota, and
> > > > > provide them a role assignment.
> > > > >
> > > > > Here are the gaps, as I see it:
> > > > >
> > > > > 1.  Keystone provides a notification that a user has logged in, but
> > > > > there is nothing capable of executing on this notification at the
> > > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > > >
> > > > > 2.  Keystone does not have a workflow engine, and should not be
> > > > > auto-creating projects.  This is something that should be performed
> > via
> > > > > a Heat template, and Keystone does not know about Heat, nor should
> > it.
> > > > >
> > > > > 3.  The Mapping code is pretty static; it assumes a user entry or a
> > > > > group entry in identity when creating a role assignment, and neither
> > > > > will exist.
> > > > >
> > > > > We can assume a special domain for Federated users to have per-user
> > > > > projects.
> > > > >
> > > > > So; lets assume a Heat Template that does the following:
> > > > >
> > > > > 1. Creates a user in the per-user-projects domain
> > > > > 2. Assigns a role to the Federated user in that project
> > > > > 3. Sets the minimal quota for the user
> > > > > 4. Somehow notifies the user that the project has been set up.
> > > > >
> > > > > This last probably assumes an email address from the Federated
> > > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> > authenticated
> > > > > for any projects" error, and is stumped.
> > > > >
> > > > > How is quota assignment done in the other projects now?  What happens
> > > > > when a project is created in Keystone?  Does that information gets
> > > > > transferred to the other services, and, if so, how?  Do most people
> > use
> > > > > a custom provisioning tool for this workflow?
> > > > >
> > > >
> > > > I know at Dreamhost we built some custom integration that was triggered
> > > > when someone turned on the Dreamcompute service in their account in our
> > > > existing user management system. That integration created the account
> > in
> > > > keystone, set up a default network in neutron, etc. I've long thought
> > we
> > > > needed a "new tenant creation" service of some sort, that sits outside
> > > > of our existing services and pokes them to do something when a new
> > > > tenant is established. Using heat as the implementation makes sense,
> > for
> > > > things that heat can control, but we don't want keystone to depend on
> > > > heat and we don't want to bake such a specialized feature into heat
> > > > itself.
> > > >
> > >
> > > I agree, an automation piece that is built-in and easy to add to
> > > OpenStack would be great.
> > >
> > > I do not agree that it should be Heat. Heat is for managing stacks that
> > > live on and change over time and thus need the complexity of the graph
> > > model Heat presents.
> > >
> > > I'd actually say that Mistral or Ansible are better choices for this. A
> > > service which listens to the notification bus and triggered a workflow
> > > defined somewhere in either Ansible playbooks or Mistral's workflow
> > > language would simply run through the "skel" workflow for each user.
> > >
> > > The actual workflow would probably almost always be somewhat site
> > > specific, but it would make sense for Keystone to include a few basic
> > ones
> > > as "contrib" elements. For instance, the "notify the user" piece would
> > > likely be simplest if you just let the workflow tool send an email. But
> > > if your cloud has Zaqar, you may want to use that as well or instead.
> > >
> > > Adding Mistral here to see if they have some thoughts on how this
> > > might work.
> > >
> > > BTW, if this does form into a new project, I suggest naming it
> > > Skeleton[1]
> >
> > Following the pattern of Kite's naming, I think a Dirigible is a
> > better way to get users into the cloud. :-)
> >
> 
> lol +1
> 
> Is this use case specifically for keystone-to-keystone, or for federation
> in general?

The use case I had in mind was actually signing up a new user for
a cloud (at Dreamhost that meant enabling a paid service in their
account in the existing management tool outside of OpenStack). I'm not
sure how it relates to federation, but it seems like that might just be
another trigger for the same sort of provisioning workflow.
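
For illustration, here is a minimal sketch of the "listen for the keystone
notification, then run a skel workflow" idea discussed above. It is not an
existing implementation: it assumes oslo.messaging for the notification bus
and python-keystoneclient for the provisioning calls, and the domain id,
rabbit URL, role name and payload layout are placeholders.

    # Hypothetical sketch: react to keystone auth notifications and run a
    # minimal per-user provisioning step. Not production code.
    from oslo_config import cfg
    import oslo_messaging

    from keystoneclient import session as ks_session
    from keystoneclient.auth.identity import v3
    from keystoneclient.v3 import client as ks_client

    PER_USER_DOMAIN_ID = 'REPLACE_WITH_DOMAIN_ID'   # placeholder

    class AuthEndpoint(object):
        """Handles notifications emitted with 'info' priority."""

        def __init__(self, keystone):
            self.keystone = keystone

        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type != 'identity.authenticate':
                return
            # payload layout assumed (CADF-style initiator id)
            user_id = payload.get('initiator', {}).get('id')
            if not user_id or self._already_provisioned(user_id):
                return
            project = self.keystone.projects.create(
                name='user-%s' % user_id,
                domain=PER_USER_DOMAIN_ID,
                description='auto-provisioned personal project')
            member = self.keystone.roles.find(name='Member')
            self.keystone.roles.grant(member, user=user_id,
                                      project=project.id)
            # quota setup and user notification would be further steps here

        def _already_provisioned(self, user_id):
            # placeholder: check whatever state store the deployment uses
            return False

    def main():
        auth = v3.Password(auth_url='http://keystone:5000/v3',
                           username='provisioner', password='secret',
                           project_name='admin',
                           user_domain_id='default',
                           project_domain_id='default')
        keystone = ks_client.Client(session=ks_session.Session(auth=auth))
        transport = oslo_messaging.get_notification_transport(
            cfg.CONF, url='rabbit://guest:guest@controller:5672/')
        targets = [oslo_messaging.Target(topic='notifications')]
        listener = oslo_messaging.get_notification_listener(
            transport, targets, [AuthEndpoint(keystone)])
        listener.start()
        listener.wait()

    if __name__ == '__main__':
        main()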

Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread James King
+1 for some sort of LTS release system.

Telcos and risk-averse organizations working with sensitive data might not be 
able to upgrade nearly as fast as the releases keep coming out. From the summit 
in Japan it sounds like companies running some fairly critical public 
infrastructure on Openstack aren’t going to be upgrading to Kilo any time soon.

Public clouds might even benefit from this. I know we (Dreamcompute) are 
working towards tracking the upstream releases closer… but it’s not feasible 
for everyone.

I’m not sure whether the resources exist to do this but it’d be a nice to have, 
imho.

> On Nov 6, 2015, at 11:47 AM, Donald Talton  wrote:
> 
> I like the idea of LTS releases. 
> 
> Speaking to my own deployments, there are many new features we are not 
> interested in, and wouldn't be, until we can get organizational (cultural) 
> change in place, or see stability and scalability. 
> 
> We can't rely on, or expect, that orgs will move to the CI/CD model for 
> infra, when they aren't even ready to do that for their own apps. It's still 
> a new "paradigm" for many of us. CI/CD requires a considerable engineering 
> effort, and given that the decision to "switch" to OpenStack is often driven 
> by cost-savings over enterprise virtualization, adding those costs back in 
> via engineering salaries doesn't make fiscal sense.
> 
> My big argument is that if Icehouse/Juno works and is stable, and I don't 
> need newer features from subsequent releases, why would I expend the effort 
> until such a time that I do want those features? Thankfully there are vendors 
> that understand this. Keeping up with the release cycle just for the sake of 
> keeping up with the release cycle is exhausting.
> 
> -Original Message-
> From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
> Sent: Thursday, November 05, 2015 11:15 PM
> To: OpenStack Development Mailing List
> Cc: openstack-operat...@lists.openstack.org
> Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.
> 
> Hello all,
> 
> I'll start by acknowledging that this is a big and complex issue and I do not 
> claim to be across all the view points, nor do I claim to be particularly 
> persuasive ;P
> 
> Having stated that, I'd like to seek constructive feedback on the idea of 
> keeping Juno around for a little longer.  During the summit I spoke to a 
> number of operators, vendors and developers on this topic.  There was some 
> support and some "That's crazy pants!" responses.  I clearly didn't make it 
> around to everyone, hence this email.
> 
> Acknowledging my affiliation/bias:  I work for Rackspace in the private cloud 
> team.  We support a number of customers currently running Juno that are, for 
> a variety of reasons, challenged by the Kilo upgrade.
> 
> Here is a summary of the main points that have come up in my conversations, 
> both for and against.
> 
> Keep Juno:
> * According to the current user survey[1] Icehouse still has the
>   biggest install base in production clouds.  Juno is second, which makes
>   sense. If we EOL Juno this month that means ~75% of production clouds
>   will be running an EOL'd release.  Clearly many of these operators have
>   support contracts from their vendor, so those operators won't be left 
>   completely adrift, but I believe it's the vendors that benefit from keeping
>   Juno around. By working together *in the community* we'll see the best
>   results.
> 
> * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but we
>   still have a huge Icehouse/Juno install base.
> 
> For me this is pretty compelling but for balance  
> 
> Keep the current plan and EOL Juno Real Soon Now:
> * There is also no ignoring the elephant in the room that with HP stepping
>   back from public cloud there are questions about our CI capacity, and
>   keeping Juno will have an impact on that critical resource.
> 
> * Juno (and other stable/*) resources have a non-zero impact on *every*
>   project, esp. @infra and release management.  We need to ensure this
>   isn't too much of a burden.  This mostly means we need enough trustworthy
>   volunteers.
> 
> * Juno is also tied up with Python 2.6 support. When
>   Juno goes, so will Python 2.6 which is a happy feeling for a number of
>   people, and more importantly reduces complexity in our project
>   infrastructure.
> 
> * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
>   that are "on the hook" for multiple years of support, so for that case
>   we're really only delaying the inevitable.
> 
> * Some number of the production clouds may never migrate from $version, in
>   which case longer support for Juno isn't going to help them.
> 
> 
> I'm sure these question were well discussed at the VYR summit where we set 
> the EOL date for Juno, but I was new then :) What I'm asking is:
> 
> 1) Is it even possible to keep Juno alive (is the impact on the project as
>   a whole acceptable)?
> 
> Assuming a positi

Re: [openstack-dev] [release] Release countdown for week R-21, Nov 9-13

2015-11-06 Thread Doug Hellmann
Excerpts from Neil Jerram's message of 2015-11-06 12:15:54 +:
> On 05/11/15 14:22, Doug Hellmann wrote:
> > All deliverables should have reno configured before Mitaka 1. See
> > http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
> > for details, and follow up on that thread with questions.
> 
> I guess that 'deliverables' do not include projects with
> release:independent.  Is that correct?

We use the term "deliverables" for the things we package because
some are produced from multiple repositories.

> 
> Nevertheless, would use of reno be recommended for release:independent
> projects too?

Yes, certainly. Even release:none repos might benefit from standardizing
on release notes management.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Gareth
+11

Alex, good news for you !

On Sat, Nov 7, 2015 at 12:36 AM, Daniel P. Berrange  wrote:
> On Fri, Nov 06, 2015 at 03:32:04PM +, John Garbutt wrote:
>> Hi,
>>
>> I propose we add Alex Xu[1] to nova-core.
>>
>> Over the last few cycles he has consistently been doing great work,
>> including some quality reviews, particularly around the API.
>>
>> Please respond with comments, +1s, or objections within one week.
>
> +1 from me, the tireless API patch & review work has been very helpful
> to our efforts in this area.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email
from Mar 1 2013, notify me
and I'll donate $1 or ¥1 to an open organization you specify.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][bugs] Weekly Status Report

2015-11-06 Thread Markus Zoeller
Hey folks,

below is the first report of bug stats I intend to post weekly.
We discussed it shortly during the Mitaka summit that this report
could be useful to keep the attention of the open bugs at a certain
level. Let me know if you think it's missing something.

Stats
=

New bugs which are *not* assigned to any subteam

count: 19
query: http://bit.ly/1WF68Iu


New bugs which are *not* triaged

subteam: libvirt 
count: 14 
query: http://bit.ly/1Hx3RrL
subteam: volumes 
count: 11
query: http://bit.ly/1NU2DM0
subteam: network
count: 4
query: http://bit.ly/1LVAQdq
subteam: db
count: 4
query: http://bit.ly/1LVATWG
subteam: 
count: 67
query: http://bit.ly/1RBVZLn


High prio bugs which are *not* in progress
--
count: 39
query: http://bit.ly/1MCKoHA


Critical bugs which are *not* in progress
-
count: 0
query: http://bit.ly/1kfntfk


Readings

* https://wiki.openstack.org/wiki/BugTriage
* https://wiki.openstack.org/wiki/Nova/BugTriage
* 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078252.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Matt,

Are you talking about this part of the Operations guide [1], or do you mean
something else?

If yes, then we still need to extract data from the backup containers. I'd
prefer a backup of the DB in a simple plain-text file, since our DBs are not
that big.

[1]
https://docs.mirantis.com/openstack/fuel/fuel-7.0/operations.html#howto-backup-and-restore-fuel-master
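
For what it's worth, the plain-text route could be as small as something like
the sketch below. This is only an illustration, not the actual Fuel backup
tooling: the DB name, the postgres invocation and the file paths are
assumptions.

    # Illustrative only: plain-text dump of the Nailgun DB plus the master
    # node settings file. Names and paths are assumed, not taken from Fuel.
    import shutil
    import subprocess

    def backup_master(dest='/var/backup'):
        with open('%s/nailgun.sql' % dest, 'w') as out:
            subprocess.check_call(
                ['sudo', '-u', 'postgres', 'pg_dump', '--format=plain',
                 'nailgun'],
                stdout=out)
        # keep the master node IP/network settings alongside the dump
        shutil.copy('/etc/fuel/astute.yaml', dest)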

--
Best regards,
Oleg Gelbukh

On Fri, Nov 6, 2015 at 6:03 PM, Matthew Mosesohn 
wrote:

> Oleg,
>
> All the volatile information, including a DB dump, are contained in the
> small Fuel Master backup. There should be no information lost unless there
> was manual customization done inside the containers (such as puppet
> manifest changes). There shouldn't be a need to back up the entire
> containers.
>
> The information we would lose would include the IP configuration
> interfaces besides the one used for the Fuel PXE network and any custom
> configuration done on the Fuel Master.
>
> I want #1 to work smoothly, but #2 should also be a safe route.
>
> On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh 
> wrote:
>
>> Evgeniy,
>>
>> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>>
>>> Also we should decide when to run containers
>>> upgrade + host upgrade? Before or after new CentOS is installed? Probably
>>> it should be done before we run backup, in order to get the latest
>>> scripts for
>>> backup/restore actions.
>>>
>>
>> We're working to determine if we need to backup/upgrade containers at
>> all. My expectation is that we should be OK with just backup of DB, IP
>> addresses settings from astute.yaml for the master node, and credentials
>> from configuration files for the services.
>>
>> --
>> Best regards,
>> Oleg Gelbukh
>>
>>
>>>
>>> Thanks,
>>>
>>> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 At the moment I'm working on deprecating Fuel upgrade tarball.
 Currently, it includes the following:

 * RPM repository (upstream + mos)
 * DEB repository (mos)
 * openstack.yaml
 * version.yaml
 * upgrade script itself (+ virtualenv)

 Apart from upgrading docker containers this upgrade script makes copies
 of the RPM/DEB repositories and puts them on the master node naming these
 repository directories depending on what is written in openstack.yaml and
 version.yaml. My plan was something like:

 1) deprecate version.yaml (move all fields from there to various places)
 2) deliver openstack.yaml with fuel-openstack-metadata package
 3) do not put new repos on the master node (instead we should use
 online repos or use fuel-createmirror to make local mirrors)
 4) deliver fuel-upgrade package (throw away upgrade virtualenv)

 Then UX was supposed to be roughly like:

 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
 2) yum install fuel-upgrade
 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
 there should have not be parts coping RPM/DEB repos)

 However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
 it is not enough to just do things which we usually did during upgrades.
 Now there are two ways to upgrade:
 1) to use the official Centos upgrade script for upgrading from 6 to 7
 2) to backup the master node, then reinstall it from scratch and then
 apply backup

 Upgrade team is trying to understand which way is more appropriate.
 Regarding to my tarball related activities, I'd say that this package based
 upgrade approach can be aligned with (1) (fuel-upgrade would use official
 Centos upgrade script as a first step for upgrade), but it definitely can
 not be aligned with (2), because it assumes reinstalling the master node
 from scratch.

 Right now, I'm finishing the work around deprecating version.yaml and
 my further steps would be to modify fuel-upgrade script so it does not copy
 RPM/DEB repos, but those steps make little sense taking into account Centos
 7 feature.

 Colleagues, let's make a decision about how we are going to upgrade the
 master node ASAP. Probably my tarball related work should be reduced to
 just throwing tarball away.


 Vladimir Kozhukalov


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> _

Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Sean Dague
On 11/06/2015 10:32 AM, John Garbutt wrote:
> Hi,
> 
> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.
> 
> Many thanks,
> John
> 
> [1] 
> http://stackalytics.com/?module=nova-group&user_id=sylvain-bauza&release=all
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

+1


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Sean Dague
On 11/06/2015 10:32 AM, John Garbutt wrote:
> Hi,
> 
> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.
> 
> Many thanks,
> John
> 
> [1]http://stackalytics.com/?module=nova-group&user_id=xuhj&release=all
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

+1

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Jay Pipes

+1

On 11/06/2015 10:32 AM, John Garbutt wrote:

Hi,

I propose we add Alex Xu[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the API.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1]http://stackalytics.com/?module=nova-group&user_id=xuhj&release=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Jay Pipes

+1

On 11/06/2015 10:32 AM, John Garbutt wrote:

Hi,

I propose we add Sylvain Bauza[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the Scheduler.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1] http://stackalytics.com/?module=nova-group&user_id=sylvain-bauza&release=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Donald Talton
I like the idea of LTS releases. 

Speaking to my own deployments, there are many new features we are not 
interested in, and wouldn't be, until we can get organizational (cultural) 
change in place, or see stability and scalability. 

We can't rely on, or expect, that orgs will move to the CI/CD model for infra, 
when they aren't even ready to do that for their own apps. It's still a new 
"paradigm" for many of us. CI/CD requires a considerable engineering effort, 
and given that the decision to "switch" to OpenStack is often driven by 
cost-savings over enterprise virtualization, adding those costs back in via 
engineering salaries doesn't make fiscal sense.

My big argument is that if Icehouse/Juno works and is stable, and I don't need 
newer features from subsequent releases, why would I expend the effort until 
such a time that I do want those features? Thankfully there are vendors that 
understand this. Keeping up with the release cycle just for the sake of keeping 
up with the release cycle is exhausting.

-Original Message-
From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
Sent: Thursday, November 05, 2015 11:15 PM
To: OpenStack Development Mailing List
Cc: openstack-operat...@lists.openstack.org
Subject: [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

Hello all,

I'll start by acknowledging that this is a big and complex issue and I do not 
claim to be across all the view points, nor do I claim to be particularly 
persuasive ;P

Having stated that, I'd like to seek constructive feedback on the idea of 
keeping Juno around for a little longer.  During the summit I spoke to a number 
of operators, vendors and developers on this topic.  There was some support and 
some "That's crazy pants!" responses.  I clearly didn't make it around to 
everyone, hence this email.

Acknowledging my affiliation/bias:  I work for Rackspace in the private cloud 
team.  We support a number of customers currently running Juno that are, for a 
variety of reasons, challenged by the Kilo upgrade.

Here is a summary of the main points that have come up in my conversations, 
both for and against.

Keep Juno:
 * According to the current user survey[1] Icehouse still has the
   biggest install base in production clouds.  Juno is second, which makes
   sense. If we EOL Juno this month that means ~75% of production clouds
   will be running an EOL'd release.  Clearly many of these operators have
   support contracts from their vendor, so those operators won't be left 
   completely adrift, but I believe it's the vendors that benefit from keeping
   Juno around. By working together *in the community* we'll see the best
   results.

 * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but we
   still have a huge Icehouse/Juno install base.

For me this is pretty compelling but for balance  

Keep the current plan and EOL Juno Real Soon Now:
 * There is also no ignoring the elephant in the room that with HP stepping
   back from public cloud there are questions about our CI capacity, and
   keeping Juno will have an impact on that critical resource.

 * Juno (and other stable/*) resources have a non-zero impact on *every*
   project, esp. @infra and release management.  We need to ensure this
   isn't too much of a burden.  This mostly means we need enough trustworthy
   volunteers.

 * Juno is also tied up with Python 2.6 support. When
   Juno goes, so will Python 2.6 which is a happy feeling for a number of
   people, and more importantly reduces complexity in our project
   infrastructure.

 * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
   that are "on the hook" for multiple years of support, so for that case
   we're really only delaying the inevitable.

 * Some number of the production clouds may never migrate from $version, in
   which case longer support for Juno isn't going to help them.


I'm sure these question were well discussed at the VYR summit where we set the 
EOL date for Juno, but I was new then :) What I'm asking is:

1) Is it even possible to keep Juno alive (is the impact on the project as
   a whole acceptable)?

Assuming a positive answer:

2) Who's going to do the work?
- Me, who else?
3) What do we do if people don't actually do the work but we as a community
   have made a commitment?
4) If we keep Juno alive for $some_time, does that imply we also bump the
   life cycle on Kilo and liberty and Mitaka etc?

Yours Tony.

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
(page 20)
[2] http://git.openstack.org/cgit/openstack/nova/tag/?h=icehouse-eol


This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [Fuel][UI] Fuel UI switched to a new build/module system

2015-11-06 Thread Vitaly Kramskikh
Hi,

I'd like to inform you that Fuel UI has migrated from require.js to webpack.
It will give us lots of benefits, like a significant improvement in developer
experience, and will allow us to easily separate Fuel UI from Nailgun. For
more information please read the spec.

For those who use Nailgun in fake mode, this means they need to take some
extra actions to make Fuel UI work - we no longer ship an uncompressed UI
version which compiles itself in the browser (dropping it allowed us to
resolve a huge amount of tech debt - we now have to support only one
environment). You need to run npm install to fetch the new modules and then
proceed in one of two ways:

   - If you don't plan to modify Fuel UI, it is better to just compile Fuel
   UI by running gulp build - after that the compiled UI will be served by
   Nailgun as usual. Don't forget to rerun npm install && gulp build after
   fetching new changes.
   - If you plan to modify Fuel UI, there is another option - use the
   development server. It watches files for changes and automatically
   recompiles Fuel UI (using incremental compilation, which is usually much
   faster than gulp build) and triggers a refresh in the browser. You can run
   it via gulp dev-server.

If you have issues with the new code, feel free to contact us in #fuel-ui
or #fuel-dev channels.

-- 
Vitaly Kramskikh,
Fuel UI Tech Lead,
Mirantis, Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread Daniel P. Berrange
On Fri, Nov 06, 2015 at 03:32:04PM +, John Garbutt wrote:
> Hi,
> 
> I propose we add Alex Xu[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the API.
> 
> Please respond with comments, +1s, or objections within one week.

+1 from me, the tireless API patch & review work has been very helpful
to our efforts in this area.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread Daniel P. Berrange
On Fri, Nov 06, 2015 at 03:32:00PM +, John Garbutt wrote:
> Hi,
> 
> I propose we add Sylvain Bauza[1] to nova-core.
> 
> Over the last few cycles he has consistently been doing great work,
> including some quality reviews, particularly around the Scheduler.
> 
> Please respond with comments, +1s, or objections within one week.

+1 from me, I think Sylvain will be a valuable addition to the team
for his scheduler expertise.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Understanding stable/branch process for Neutron subprojects

2015-11-06 Thread John Belamaric
>>> 
>> All new features must go to master only. Your master should always be  
>> tested and work with neutron master (meaning, your master should target  
>> Mitaka, not Liberty).
>> 
>>> 

We have a very similar situation in networking-infoblox to what Neil was saying 
about Calico. In our case, we needed the framework for pluggable IPAM to 
produce our driver. We are now targeting Liberty, but based on the above our 
plan is:

1) Release 1.0 from master (soon)
2) Create stable/liberty based on 1.0
3) Continue to add features in master, but *maintain compatibility with 
stable/liberty*.

It is important that our next version works with stable/liberty, not just 
master/Mitaka.

John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] shotgun code freeze

2015-11-06 Thread Dmitry Pyzhov
Great job! We are much closer to removal of fuel-web repo.

On Tue, Oct 27, 2015 at 7:35 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Dear colleagues,
>
> I am glad to announce that since now shotgun is a separate project.
> fuel-web/shotgun directory has been deprecated. There is yet another patch
> that has not been merged https://review.openstack.org/238525 (adds
> .gitignore file to the new shotgun repo). Please review it.
>
> Shotgun
>
>- Launchpad bug https://bugs.launchpad.net/fuel/+bug/1506894
>- project-config patch https://review.openstack.org/235355 (DONE)
>- pypi (DONE)
>- run_tests.sh https://review.openstack.org/235368 (DONE)
>- rpm/deb specs https://review.openstack.org/#/c/235382/ (DONE)
>- fuel-ci verification jobs https://review.fuel-infra.org/12872 (DONE)
>- label jenkins slaves for verification (DONE)
>- directory freeze (DONE)
>- prepare upstream (DONE)
>- waiting for project-config patch to be merged (DONE)
>- .gitreview https://review.openstack.org/238476 (DONE)
>- .gitignore https://review.openstack.org/238525 (ON REVIEW)
>- custom jobs parameters https://review.fuel-infra.org/13209 (DONE)
>- fix core group (DONE)
>- fuel-main https://review.openstack.org/#/c/238953/ (DONE)
>- packaging-ci  https://review.fuel-infra.org/13181 (DONE)
>- MAINTAINERS https://review.openstack.org/239410 (DONE)
>- deprecate shotgun directory https://review.openstack.org/239407
>(DONE)
>- fix verify-fuel-web-docs job (it installs shotgun for some reason)
>https://review.fuel-infra.org/#/c/13194/ (DONE)
>- remove old shotgun package (DONE)
>
>
>
> Vladimir Kozhukalov
>
> On Wed, Oct 21, 2015 at 2:46 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> As you might know I'm working on splitting multiproject fuel-web
>> repository into several sub-projects. Shotgun is one of directories that
>> are to be moved to a separate git project.
>>
>> Checklist for this to happen is as follows:
>>
>>- Launchpad bug https://bugs.launchpad.net/fuel/+bug/1506894
>>- project-config patch  https://review.openstack.org/#/c/235355 (ON
>>REVIEW)
>>- pypi project
>>https://pypi.python.org/pypi?%3Aaction=pkg_edit&name=Shotgun (DONE)
>>- run_tests.sh  https://review.openstack.org/235368 (DONE)
>>- rpm/deb specs  https://review.openstack.org/#/c/235382 (DONE)
>>- fuel-ci verification jobs https://review.fuel-infra.org/#/c/12872/ (ON
>>REVIEW)
>>- label jenkins slaves for verification jobs (ci team)
>>- directory freeze (WE ARE HERE)
>>- prepare upstream (TODO)
>>- waiting for project-config patch to be merged (ON REVIEW)
>>- fuel-main patch (TODO)
>>- packaging-ci patch (TODO)
>>- deprecate fuel-web/shotgun directory (TODO)
>>
>> Now we are at the point where we need to freeze fuel-web/shotgun
>> directory. So, I'd like to announce code freeze for this directory and all
>> patches that make changes in the directory and are currently on review will
>> need to be backported to the new git repository.
>>
>> Vladimir Kozhukalov
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [inspector] Auto discovery extension for Ironic Inspector

2015-11-06 Thread Bruno Cornec

Hello,

Pavlo Shchelokovskyy said on Tue, Nov 03, 2015 at 09:41:51PM +:

For auto-setting driver options on enrollment, I would vote for option 2
with default being fake driver + optional CMDB integration. This would ease
managing a homogeneous pool of BMs, but still (using fake driver or data
from CMDB) work reasonably well in heterogeneous case.

As for setting a random password, CMDB integration is crucial IMO. Large
deployments usually have some sort of it already, and it must serve as a
single source of truth for the deployment. So if inspector is changing the
ipmi password, it should not only notify/update Ironic's knowledge on that
node, but also notify/update the CMDB on that change - at least there must
be a possibility (a ready-to-use plug point) to do that before we roll out
such feature.


wrt interaction with a CMDB, we have been investigating some ideas that
we have gathered at https://github.com/uggla/alexandria/wiki

Some code has been written to try to model some of these aspects, but
having more contributors and patches to enhance that integration would
be great! It is likewise available at https://github.com/uggla/alexandria

We had planned to talk about these ideas at the previous OpenStack
summit but didn't get enough votes, it seems. So now we are aiming at
presenting at the next one ;-)

HTH,
Bruno.
--
Open Source Profession, Linux Community Lead WW  http://hpintelco.net
HPE EMEA EG Open Source Technology Strategist http://hp.com/go/opensource
FLOSS projects: http://mondorescue.org http://project-builder.org
Musique ancienne? http://www.musique-ancienne.org http://www.medieval.org

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Change VIP address via API

2015-11-06 Thread Aleksey Kasatkin
Mike, Vladimir,

Yes,
1. We need to add IPs on the fly (i.e. add POST functionality), otherwise it
will work the same way VIPs do now (changing network roles in a plugin or
release).
2. We should allow the fields 'network_role', 'node_roles' and 'namespace' to
be left empty, so the validation should be changed.

So, answer here

> Q. Any allocated IP could be accessible via these handlers, so now we can
> restrict user to access VIPs only
> and answer with some error to other ip_addrs ids.
>
should be "Any allocated IP is accessible via these handlers", so URLs can
be changed to
/clusters//network_configuration/ips/
/clusters//network_configuration/ips//
Nodes IPs maybe the different story though.
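
To make the proposed handlers a bit more concrete, here is a rough usage
sketch. These endpoints are only the proposal above, not an existing API; the
base URL, auth header, ids and payload fields are all illustrative.

    # Rough sketch against the *proposed* handlers; everything here
    # (cluster id, ip_addr id, port, token, payload) is illustrative.
    import json
    import requests

    BASE = 'http://fuel-master:8000/api'
    HEADERS = {'X-Auth-Token': 'admin-token',
               'Content-Type': 'application/json'}

    # list all allocated IPs (VIPs included) for cluster 1
    ips = requests.get(BASE + '/clusters/1/network_configuration/ips/',
                       headers=HEADERS).json()

    # manually pin one of them to a specific address
    ip_id = ips[0]['id']
    requests.put(BASE + '/clusters/1/network_configuration/ips/%s/' % ip_id,
                 headers=HEADERS,
                 data=json.dumps({'ip_addr': '10.20.0.10', 'manual': True}))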

Alex,

'node_roles' determines in what node group to allocate IP. So, it will be
group with controller nodes for our base VIPs
(they all have node_roles=['controller'] which is default setting).
It can be some other node group for nodes with different role. E.g. ceph
nodes use some ceph/vip network role and VIP is defined
for this network role (with 'network_role'='ceph/vip' and
'node_roles'=['ceph/osd']).
This VIP will be allocated
in the network that 'ceph/vip' is mapped to and in the node group where
ceph nodes are located. ceph nodes cannot be located
in more than one node group then (as VIP cannot migrate between node groups
now).



Aleksey Kasatkin


On Fri, Nov 6, 2015 at 10:20 AM, Vladimir Kuklin 
wrote:

> +1 to Mike
>
> It would be awesome to get an API handler that allows one to actually add
> an ip address to IP_addrs table. As well as an IP range to ip_ranges table.
>
> On Fri, Nov 6, 2015 at 6:15 AM, Mike Scherbakov 
> wrote:
>
>> Is there a way to make it more generic, not "VIP" specific? Let's say I
>> want to reserve address(-es) for something for whatever reason, and then I
>> want to use them by some tricky way.
>> More specifically, can we reserve IP address(-es) with some codename, and
>> use it later?
>> 12.12.12.12 - my-shared-ip
>> 240.0.0.2 - my-multicast
>> and then use them in puppet / whatever deployment code by $my-shared-ip,
>> $my-multicast?
>>
>> Thanks,
>>
>> On Tue, Nov 3, 2015 at 8:49 AM Aleksey Kasatkin 
>> wrote:
>>
>>> Folks,
>>>
>>> Here is a resume of our recent discussion:
>>>
>>> 1. Add new URLs for processing VIPs:
>>>
>>> /clusters/<cluster_id>/network_configuration/vips/ (GET)
>>> /clusters/<cluster_id>/network_configuration/vips/<ip_addr_id>/ (GET, PUT)
>>>
>>> where <ip_addr_id> is the id in the ip_addrs table.
>>> So, user can get all VIPS, get one VIP by id, change parameters (IP
>>> address) for one VIP by its id.
>>> More possibilities can be added later.
>>>
>>> Q. Any allocated IP could be accessible via these handlers, so now we
>>> can restrict user to access VIPs only
>>> and answer with some error to other ip_addrs ids.
>>>
>>> 2. Add current VIP meta into ip_addrs table.
>>>
>>> Create new field in ip_addrs table for placing VIP metadata there.
>>> Current set of ip_addrs fields:
>>> id (int),
>>> network (FK),
>>> node (FK),
>>> ip_addr (string),
>>> vip_type (string),
>>> network_data (relation),
>>> node_data (relation)
>>>
>>> Q. We could replace vip_type (it contains VIP name now) with vip_info.
>>>
>>> 3. Allocate VIPs on cluster creation and seek VIPs at all network
>>> changes.
>>>
>>> So, VIPs will be checked (via network roles descriptions) and
>>> re-allocated in ip_addrs table
>>> at these points:
>>> a. create cluster
>>> b. modify networks configuration
>>> c. modify one network
>>> d. modify network template
>>> e. change nodes set for cluster
>>> f. change node roles set on nodes
>>> g. modify cluster attributes (change set of plugins)
>>> h. modify release
>>>
>>> 4. Add 'manual' field into VIP meta to indicate whether it is
>>> auto-allocated or not.
>>>
>>> So, whole VIP description may look like:
>>> {
>>> 'name': 'management'
>>> 'network_role': 'mgmt/vip',
>>> 'namespace': 'haproxy',
>>> 'node_roles': ['controller'],
>>> 'alias': 'management_vip',
>>> 'manual': True,
>>> }
>>>
>>> Example of current VIP description:
>>>
>>> https://github.com/openstack/fuel-web/blob/master/nailgun/nailgun/fixtures/openstack.yaml#L207
>>>
>>> Nailgun will re-allocate VIP address if 'manual' == False.
>>>
>>> 5. Q. what to do when the given address overlaps with the network from
>>> another
>>> environment? overlaps with the network of current environment which does
>>> not match the
>>> network role of the VIP?
>>>
>>> Use '--force' parameter to change it. PUT will fail otherwise.
>>>
>>>
>>> Guys, please review this and share your comments here,
>>>
>>> Thanks,
>>>
>>>
>>>
>>> Aleksey Kasatkin
>>>
>>>
>>> On Tue, Nov 3, 2015 at 10:47 AM, Aleksey Kasatkin <
>>> akasat...@mirantis.com> wrote:
>>>
 Igor,

 > For VIP allocation we should use POST request. It's ok to use PUT for
 setting (changing) IP address.

 My proposal is about setting IP addresses for VIPs only (auto and
 manual).
 No any other allocations.
 Do you propose to use POST

Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Matt Riedemann



On 11/6/2015 9:20 AM, Matt Riedemann wrote:



On 11/6/2015 4:43 AM, Thierry Carrez wrote:

Tony Breeds wrote:

[...]
1) Is it even possible to keep Juno alive (is the impact on the
project as
a whole acceptable)?


It is *technically* possible, imho. The main cost to keep it is that the
branches get regularly broken by various other changes, and those breaks
are non-trivial to fix (we have taken steps to make branches more
resilient, but those only started to appear in stable/liberty). The
issues sometimes propagate (through upgrade testing) to master, at which
point it becomes everyone's problem to fix it. The burden ends up
falling on the usual gate fixers heroes, a rare resource we need to
protect.

So it's easy to say "we should keep the branch since so many people
still use it", but unless we have significantly more people working on
(and capable of) fixing it when it's broken, the impact on the project
is just not acceptable.

It's not the first time this has been suggested, and every time our
answer was "push more resources in fixing existing stable branches and
we might reconsider it". We got promised lots of support. But I don't
think we have yet seen real change in that area (I still see the same
usual suspects fixing stable gates), and things can still barely keep
afloat with our current end-of-life policy...

Stable branches also come with security support, so keeping more
branches opened mechanically adds to the work of the Vulnerability
Management Team, another rare resource.

There are other hidden costs on the infrastructure side (we can't get
rid of a number of things that we have moved away from until the old
branch still needing those things is around), but I'll let someone
closer to the metal answer that one.


Assuming a positive answer:

2) Who's going to do the work?
 - Me, who else?
3) What do we do if people don't actually do the work but we as a
community
have made a commitment?


In the past, that generally meant people opposed to the idea of
extending support periods having to stand up for the community promise
and fix the mess in the end.

PS: stable gates are currently broken for horizon/juno, trove/kilo, and
neutron-lbaas/liberty.



__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



In general I'm in favor of trying to keep the stable branches available
as long as possible because of (1) lots of production deployments not
upgrading as fast as we (the dev team) assume they are and (2)
backporting security fixes upstream is much nicer as a community than
doing it out of tree when you support 5+ years of releases.

Having said that, the downside points above are very valid, i.e. not
enough resources to help, we want to drop py26, things get wedged easily
and there aren't people around to monitor or fix it, or understand how
all of the stable branch + infra + QA stuff fits together.

It also extends the life and number of tests that need to be run against
things in Tempest, which already runs several dozen jobs per change
proposed today (since Tempest is branchless).

At this point stable/juno is pretty much a goner, IMO. The last few
months of activity that I've been involved in have been dealing with
requirements capping issues, which as we've seen you fix one issue to
unwedge a project and with the g-r syncs we end up breaking 2 other
projects, and the cycle never ends.

This is not as problematic in stable/kilo because we've done a better
job of isolating versions in g-r from the start, but things won't get
really good until stable/liberty when we've got upper-constraints in
action.

So I'm optimistic that we can keep stable/kilo around and working longer
than what we've normally done in the past, but I don't hold out much
hope for stable/juno at this point given it's current state.



Didn't mean to break the cross-list chain.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Proposal to add Alex Xu to nova-core

2015-11-06 Thread John Garbutt
Hi,

I propose we add Alex Xu[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the API.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1]http://stackalytics.com/?module=nova-group&user_id=xuhj&release=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Proposal to add Sylvain Bauza to nova-core

2015-11-06 Thread John Garbutt
Hi,

I propose we add Sylvain Bauza[1] to nova-core.

Over the last few cycles he has consistently been doing great work,
including some quality reviews, particularly around the Scheduler.

Please respond with comments, +1s, or objections within one week.

Many thanks,
John

[1] http://stackalytics.com/?module=nova-group&user_id=sylvain-bauza&release=all

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Matt Riedemann



On 11/6/2015 4:43 AM, Thierry Carrez wrote:

Tony Breeds wrote:

[...]
1) Is it even possible to keep Juno alive (is the impact on the project as
a whole acceptable)?


It is *technically* possible, imho. The main cost to keep it is that the
branches get regularly broken by various other changes, and those breaks
are non-trivial to fix (we have taken steps to make branches more
resilient, but those only started to appear in stable/liberty). The
issues sometimes propagate (through upgrade testing) to master, at which
point it becomes everyone's problem to fix it. The burden ends up
falling on the usual gate fixers heroes, a rare resource we need to protect.

So it's easy to say "we should keep the branch since so many people
still use it", but unless we have significantly more people working on
(and capable of) fixing it when it's broken, the impact on the project
is just not acceptable.

It's not the first time this has been suggested, and every time our
answer was "push more resources in fixing existing stable branches and
we might reconsider it". We got promised lots of support. But I don't
think we have yet seen real change in that area (I still see the same
usual suspects fixing stable gates), and things can still barely keep
afloat with our current end-of-life policy...

Stable branches also come with security support, so keeping more
branches opened mechanically adds to the work of the Vulnerability
Management Team, another rare resource.

There are other hidden costs on the infrastructure side (we can't get
rid of a number of things that we have moved away from until the old
branch still needing those things is around), but I'll let someone
closer to the metal answer that one.


Assuming a positive answer:

2) Who's going to do the work?
 - Me, who else?
3) What do we do if people don't actually do the work but we as a community
have made a commitment?


In the past, that generally meant people opposed to the idea of
extending support periods having to stand up for the community promise
and fix the mess in the end.

PS: stable gates are currently broken for horizon/juno, trove/kilo, and
neutron-lbaas/liberty.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



In general I'm in favor of trying to keep the stable branches available 
as long as possible because of (1) lots of production deployments not 
upgrading as fast as we (the dev team) assume they are and (2) 
backporting security fixes upstream is much nicer as a community than 
doing it out of tree when you support 5+ years of releases.


Having said that, the downside points above are very valid, i.e. not 
enough resources to help, we want to drop py26, things get wedged easily 
and there aren't people around to monitor or fix it, or understand how 
all of the stable branch + infra + QA stuff fits together.


It also extends the life and number of tests that need to be run against 
things in Tempest, which already runs several dozen jobs per change 
proposed today (since Tempest is branchless).


At this point stable/juno is pretty much a goner, IMO. The last few 
months of activity that I've been involved in have been dealing with 
requirements capping issues, which as we've seen you fix one issue to 
unwedge a project and with the g-r syncs we end up breaking 2 other 
projects, and the cycle never ends.


This is not as problematic in stable/kilo because we've done a better 
job of isolating versions in g-r from the start, but things won't get 
really good until stable/liberty when we've got upper-constraints in action.


So I'm optimistic that we can keep stable/kilo around and working longer 
than what we've normally done in the past, but I don't hold out much 
hope for stable/juno at this point given it's current state.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Matthew Mosesohn
Oleg,

All the volatile information, including a DB dump, is contained in the
small Fuel Master backup. There should be no information lost unless there
was manual customization done inside the containers (such as puppet
manifest changes). There shouldn't be a need to back up the entire
containers.

The information we would lose would include the IP configuration of the
interfaces besides the one used for the Fuel PXE network, and any custom
configuration done on the Fuel Master.

I want #1 to work smoothly, but #2 should also be a safe route.

On Fri, Nov 6, 2015 at 5:39 PM, Oleg Gelbukh  wrote:

> Evgeniy,
>
> On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:
>
>> Also we should decide when to run containers
>> upgrade + host upgrade? Before or after new CentOS is installed? Probably
>> it should be done before we run backup, in order to get the latest
>> scripts for
>> backup/restore actions.
>>
>
> We're working to determine if we need to backup/upgrade containers at all.
> My expectation is that we should be OK with just backup of DB, IP addresses
> settings from astute.yaml for the master node, and credentials from
> configuration files for the services.
>
> --
> Best regards,
> Oleg Gelbukh
>
>
>>
>> Thanks,
>>
>> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> At the moment I'm working on deprecating Fuel upgrade tarball.
>>> Currently, it includes the following:
>>>
>>> * RPM repository (upstream + mos)
>>> * DEB repository (mos)
>>> * openstack.yaml
>>> * version.yaml
>>> * upgrade script itself (+ virtualenv)
>>>
>>> Apart from upgrading docker containers this upgrade script makes copies
>>> of the RPM/DEB repositories and puts them on the master node naming these
>>> repository directories depending on what is written in openstack.yaml and
>>> version.yaml. My plan was something like:
>>>
>>> 1) deprecate version.yaml (move all fields from there to various places)
>>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>>> 3) do not put new repos on the master node (instead we should use online
>>> repos or use fuel-createmirror to make local mirrors)
>>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>>
>>> Then UX was supposed to be roughly like:
>>>
>>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>>> 2) yum install fuel-upgrade
>>> 3) /usr/bin/fuel-upgrade (script was going to become lighter, because
>>> there should have not be parts coping RPM/DEB repos)
>>>
>>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>>> it is not enough to just do things which we usually did during upgrades.
>>> Now there are two ways to upgrade:
>>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>>> 2) to backup the master node, then reinstall it from scratch and then
>>> apply backup
>>>
>>> Upgrade team is trying to understand which way is more appropriate.
>>> Regarding to my tarball related activities, I'd say that this package based
>>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>>> Centos upgrade script as a first step for upgrade), but it definitely can
>>> not be aligned with (2), because it assumes reinstalling the master node
>>> from scratch.
>>>
>>> Right now, I'm finishing the work around deprecating version.yaml and my
>>> further steps would be to modify fuel-upgrade script so it does not copy
>>> RPM/DEB repos, but those steps make little sense taking into account Centos
>>> 7 feature.
>>>
>>> Colleagues, let's make a decision about how we are going to upgrade the
>>> master node ASAP. Probably my tarball related work should be reduced to
>>> just throwing tarball away.
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread John Garbutt
On 6 November 2015 at 14:22, Anne Gentle  wrote:
> I'm pretty sure microversions are hard to document no matter what we do so
> we just need to pick a way and move forward.
> Here's what is in the spec:
> For microversions, we'll need at least 2 copies of the previous reference
> info (enable a dropdown for the user to choose a prior version or one that
> matches theirs)

+1

> Need to keep deprecated options.

That's not really a thing in microversion land.
Things are present or deleted, in a particular version.
That should be simpler.

> An example of version
> comparisons https://libgit2.github.com/libgit2/#HEAD

:)
That the example I couldn't find.
I feel that maps (almost) perfectly to microversions.
I could be missing something obvious though.

> Let's discuss weekly at both the Nova API meeting and the API Working group
> meeting to refine the design. I'm back next week and plan to update the
> spec.

+1

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Alex Xu
2015-11-06 20:46 GMT+08:00 John Garbutt :

> On 6 November 2015 at 03:31, Alex Xu  wrote:
> > Hi, folks
> >
> > Nova API sub-team is working on the swagger generation. And there is PoC
> > https://review.openstack.org/233446
> >
> > But before we are going to next step, I really hope we can get agreement
> > with how to support Microversions and Actions. The PoC have demo about
> > Microversions. It generates min version action as swagger spec standard,
> for
> > the other version actions, it named as extended attribute, like:
> >
> > {
> > '/os-keypairs': {
> > "get": {
> > 'x-start-version': '2.1',
> > 'x-end-version': '2.1',
> > 'description': '',
> >
> > },
> > "x-get-2.2-2.9": {
> > 'x-start-version': '2.2',
> > 'x-end-version': '2.9',
> > 'description': '',
> > .
> > }
> > }
> > }
> >
> > x-start-version and x-end-version are the metadata for Microversions,
> which
> > should be used by UI code to parse.
> >
> > This is just based on my initial thought, and there is another thought is
> > generating a set full swagger specs for each Microversion. But I think
> how
> > to show Microversions and Actions should be depended how the doc UI to
> parse
> > that also.
> >
> > As there is doc project to turn swagger to UI:
> > https://github.com/russell/fairy-slipper  But it didn't support
> > Microversions. So hope doc team can work with us and help us to find out
> > format to support Microversions and Actions which good for UI parse and
> > swagger generation.
> >
> > Any thoughts folks?
>
> I can't find the URL to the example, but I thought the plan was each
> microversion generates a full doc tree.
>

Yea, we said that in the Nova API meeting, and this is an example of what we
expect the UI to look like: https://libgit2.github.com/libgit2/#HEAD

I just want to confirm with the doc team and Russell that this is good for
them for the implementation of fairy-slipper.


>
> It also notes the changes between the versions, so if you look at the
> latest version, you can tell between which versions the API was
> modified.
>
> I remember annegentle had a great example of this style, will try to ping
> her about that next week.
>


yea, let's talk about it in the meeting.


>
> Thanks,
> John
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
On Fri, Nov 6, 2015 at 3:32 PM, Alexander Kostrikov  wrote:

> Hi, Vladimir!
> I think that option (2) 'to backup the master node, then reinstall it
> from scratch and then apply backup' is a better way for upgrade.
> In that way we are concentrating on two problems in one feature:
> backups and upgrades.
>
> That will ease development, testing and also reduce feature creep.
>

Alexander, +1 on this.

--
Best regards,
Oleg Gelbukh

>
> P.S.
> It is hard to refer to (2) because you have three (2)-s.
>
> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
>> it includes the following:
>>
>> * RPM repository (upstream + mos)
>> * DEB repository (mos)
>> * openstack.yaml
>> * version.yaml
>> * upgrade script itself (+ virtualenv)
>>
>> Apart from upgrading docker containers this upgrade script makes copies
>> of the RPM/DEB repositories and puts them on the master node naming these
>> repository directories depending on what is written in openstack.yaml and
>> version.yaml. My plan was something like:
>>
>> 1) deprecate version.yaml (move all fields from there to various places)
>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>> 3) do not put new repos on the master node (instead we should use online
>> repos or use fuel-createmirror to make local mirrors)
>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>
>> Then UX was supposed to be roughly like:
>>
>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>> 2) yum install fuel-upgrade
>> 3) /usr/bin/fuel-upgrade (the script was going to become lighter, because
>> there would no longer be parts copying RPM/DEB repos)
>>
>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>> it is not enough to just do things which we usually did during upgrades.
>> Now there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>> Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense given the move to
>> CentOS 7.
>>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Kind Regards,
>
> Alexandr Kostrikov,
>
> Mirantis, Inc.
>
> 35b/3, Vorontsovskaya St., 109147, Moscow, Russia
>
>
> Tel.: +7 (495) 640-49-04
> Tel.: +7 (925) 716-64-52
>
> Skype: akostrikov_mirantis
>
> E-mail: akostri...@mirantis.com 
>
> www.mirantis.com
> www.mirantis.ru
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Alex Xu
2015-11-06 22:22 GMT+08:00 Anne Gentle :

>
>
> On Thu, Nov 5, 2015 at 9:31 PM, Alex Xu  wrote:
>
>> Hi, folks
>>
>> Nova API sub-team is working on the swagger generation. And there is PoC
>> https://review.openstack.org/233446
>>
>> But before we are going to next step, I really hope we can get agreement
>> with how to support Microversions and Actions. The PoC have demo about
>> Microversions. It generates min version action as swagger spec standard,
>> for the other version actions, it named as extended attribute, like:
>>
>> {
>> '/os-keypairs': {
>> "get": {
>> 'x-start-version': '2.1',
>> 'x-end-version': '2.1',
>> 'description': '',
>>
>> },
>> "x-get-2.2-2.9": {
>> 'x-start-version': '2.2',
>> 'x-end-version': '2.9',
>> 'description': '',
>> .
>> }
>> }
>> }
>>
>> x-start-version and x-end-version are the metadata for Microversions,
>> which should be used by UI code to parse.
>>
>
> The swagger.io editor will not necessarily recognize extended attributes
> (x- are extended attributes), right? I don't think we intend for these
> files to be hand-edited once they are generated, though, so I consider it a
> non-issue that the editor can't edit microversioned source.
>
>

Yes, right. The editor can just ignore the extended attributes. My point is
that if we need something more than the standard Swagger spec to support
Microversions and Actions, we should extend it in a way the Swagger spec
supports.
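
To make that concrete, here is a rough sketch (not fairy-slipper code, just
an illustration of the PoC output above) of how UI code could pick the
operation that applies to a requested microversion from the x-start-version /
x-end-version metadata:

  # illustrative only -- mirrors the PoC structure, not fairy-slipper itself
  path_item = {
      'get':           {'x-start-version': '2.1', 'x-end-version': '2.1'},
      'x-get-2.2-2.9': {'x-start-version': '2.2', 'x-end-version': '2.9'},
  }

  def parse(version):
      # '2.9' -> (2, 9), so versions compare numerically rather than as strings
      return tuple(int(part) for part in version.split('.'))

  def operation_for(path_item, method, requested):
      # return the (name, operation) pair whose version range covers `requested`
      for name, op in path_item.items():
          if name != method and not name.startswith('x-%s-' % method):
              continue
          if parse(op['x-start-version']) <= parse(requested) <= parse(op['x-end-version']):
              return name, op
      return None

  print(operation_for(path_item, 'get', '2.4'))  # picks the 2.2-2.9 variant

Whether the UI consumes this metadata directly or a full per-microversion
spec tree, it should be enough to drive a libgit2-style version picker.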


>
>> This is just based on my initial thought, and there is another thought is
>> generating a set full swagger specs for each Microversion. But I think how
>> to show Microversions and Actions should be depended how the doc UI to
>> parse that also.
>>
>> As there is doc project to turn swagger to UI:
>> https://github.com/russell/fairy-slipper  But it didn't support
>> Microversions. So hope doc team can work with us and help us to find out
>> format to support Microversions and Actions which good for UI parse and
>> swagger generation.
>>
>
> Last release was a proof of concept for being able to generate Swagger.
> Next we'll bring fairy-slipper into OpenStack and work with the API working
> group and the Nova API team to enhance it.
>
> This release we can further enhance with microversions. Nothing's
> preventing that to my knowledge, other than Russell needs more input to
> make the output what we want. This email is a good start.
>

Yea, I'd really appreciate it if Russell could give some input, as he works
on fairy-slipper.


>
> I'm pretty sure microversions are hard to document no matter what we do so
> we just need to pick a way and move forward. Here's what is in the spec:
> For microversions, we'll need at least 2 copies of the previous reference
> info (enable a dropdown for the user to choose a prior version or one that
> matches theirs) Need to keep deprecated options.  An example of version
> comparisons https://libgit2.github.com/libgit2/#HEAD
>
> Let's discuss weekly at both the Nova API meeting and the API Working
> group meeting to refine the design. I'm back next week and plan to update
> the spec.
>

yea, let's talk more at next meeting, thanks!


> Anne
>
>
>
>>
>> Any thoughts folks?
>>
>> Thanks
>> Alex
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] BIOS Configuration

2015-11-06 Thread Serge Kovaleff
Mea culpa. It was suggested to me that the new REST API entry will be added
to the Ironic API and NOT to IPA (Ironic Python Agent), which I had
misunderstood from the beginning.
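
Just to illustrate the distinction (the endpoint and payload below are
hypothetical -- the BIOS interface is still only a spec under review, so
nothing like this exists in the Ironic API today), the agent-less flow would
boil down to a plain call against the Ironic API rather than anything running
on the node:

  import requests

  IRONIC = 'http://controller:6385/v1'                 # assumed endpoint
  HEADERS = {'X-Auth-Token': '...'}                     # token obtained elsewhere
  node = '1be26c0b-03f2-4d2e-ae87-c02d7f33c123'         # example node UUID

  settings = [{'name': 'hyperthreading', 'value': 'Enabled'}]   # made-up schema
  resp = requests.patch('%s/nodes/%s/bios' % (IRONIC, node),    # made-up URL
                        headers=HEADERS, json=settings)
  print(resp.status_code)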

Cheers,
Serge Kovaleff
http://www.mirantis.com
cell: +38 (063) 83-155-70

On Fri, Nov 6, 2015 at 1:33 PM, Serge Kovaleff 
wrote:

> Hi Lucas,
>
> 
> I meant if it's possible to access/update BIOS configuration without any
> agent.
> Something similar to remote execution engine via Ansible.
> I am inspired by agent-less "Ansible-deploy-driver"
> https://review.openstack.org/#/c/241946/
>
> There is definitely benefits of using the agent e.g. Heartbeats.
> Nevertheless, the idea of minimal agent-less environment is quite
> appealing for me.
>
> Cheers,
> Serge Kovaleff
>
>
> On Fri, Oct 23, 2015 at 4:58 PM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
>
>> Hi,
>>
>> > I am interested in remote BIOS configuration.
>> > There is "New driver interface for BIOS configuration specification"
>> > https://review.openstack.org/#/c/209612/
>> >
>> > Is it possible to implement this without REST API endpoint?
>> >
>>
>> I may be missing something here but without the API how will the user
>> set the configurations? We need the ReST API so we can abstract the
>> interface for this for all the different drivers in Ironic.
>>
>> Also, feel free to add suggestions in the spec patch itself.
>>
>> Cheers,
>> Lucas
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Evgeniy,

On Fri, Nov 6, 2015 at 3:27 PM, Evgeniy L  wrote:

> Also we should decide when to run containers
> upgrade + host upgrade? Before or after new CentOS is installed? Probably
> it should be done before we run backup, in order to get the latest scripts
> for
> backup/restore actions.
>

We're working to determine if we need to back up/upgrade containers at all.
My expectation is that we should be OK with just a backup of the DB, the IP
addressing settings from astute.yaml for the master node, and the credentials
from the services' configuration files.

--
Best regards,
Oleg Gelbukh


>
> Thanks,
>
> On Fri, Nov 6, 2015 at 1:29 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> At the moment I'm working on deprecating Fuel upgrade tarball. Currently,
>> it includes the following:
>>
>> * RPM repository (upstream + mos)
>> * DEB repository (mos)
>> * openstack.yaml
>> * version.yaml
>> * upgrade script itself (+ virtualenv)
>>
>> Apart from upgrading docker containers this upgrade script makes copies
>> of the RPM/DEB repositories and puts them on the master node naming these
>> repository directories depending on what is written in openstack.yaml and
>> version.yaml. My plan was something like:
>>
>> 1) deprecate version.yaml (move all fields from there to various places)
>> 2) deliver openstack.yaml with fuel-openstack-metadata package
>> 3) do not put new repos on the master node (instead we should use online
>> repos or use fuel-createmirror to make local mirrors)
>> 4) deliver fuel-upgrade package (throw away upgrade virtualenv)
>>
>> Then UX was supposed to be roughly like:
>>
>> 1) configure /etc/yum.repos.d/nailgun.repo (add new RPM MOS repo)
>> 2) yum install fuel-upgrade
>> 3) /usr/bin/fuel-upgrade (the script was going to become lighter, because
>> there would no longer be parts copying RPM/DEB repos)
>>
>> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and
>> it is not enough to just do things which we usually did during upgrades.
>> Now there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>> Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense given the move to
>> CentOS 7.
>>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api]

2015-11-06 Thread Everett Toews
On Nov 6, 2015, at 6:30 AM, John Garbutt <j...@johngarbutt.com> wrote:

On 6 November 2015 at 12:11, Sean Dague <s...@dague.net> wrote:
On 11/06/2015 04:13 AM, Salvatore Orlando wrote:
It makes sense to have a single point where response pagination is handled
in API processing, rather than scattering pagination across Nova REST
controllers; unfortunately I am not really able to comment on how feasible
that would be in Nova's WSGI framework.

However, I'd just like to add that there is an approved guideline for
API response pagination [1], and it would be good if all these efforts
followed the guideline.

Salvatore

[1] 
https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html

The pagination part is just a TODO in there.

Ideally, I would like us to fill out that pagination part first.

If we can't get global agreement quickly, we should at least get a
Nova API wide standard pattern.

Am I missing something here?

When I sent my initial reply to this thread, I Cc'd the author of the 
pagination guideline at wu...@unitedstack.com. 
However, I got a bounce message so it's a bit unclear if wuhao is still working 
on this. If someone knows this person, can you please highlight this thread?

If we don't hear a response on this thread or the review, we can move forward 
another way.
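
For reference, the pattern Nova already exposes (and which the guideline's
pagination section would presumably formalize) is limit/marker. A minimal
client-side sketch, with the endpoint and token as placeholders and error
handling omitted:

  import requests

  NOVA = 'http://controller:8774/v2.1'    # assumed endpoint
  HEADERS = {'X-Auth-Token': '...'}       # token obtained elsewhere

  def list_all_servers(page_size=100):
      servers, marker = [], None
      while True:
          params = {'limit': page_size}
          if marker:
              params['marker'] = marker
          page = requests.get(NOVA + '/servers', headers=HEADERS,
                              params=params).json()['servers']
          servers.extend(page)
          if len(page) < page_size:        # short page: no more results
              return servers
          marker = page[-1]['id']          # next page starts after this id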

Everett
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Master node upgrade

2015-11-06 Thread Oleg Gelbukh
Hi

> We should think about separating packages for the master node and OpenStack.
> I guess we should use two repositories:
> 1. MOS - repository for OpenStack related nodes
> 2. MasterNode - repository for packages that are used for master node only.
>
>
At the moment, this is pretty simple as we only support Ubuntu as target
node system as of 7.0 and 8.0, and our Master node runs on CentOS. Thus,
our CentOS repo is for Fuel node, and Ubuntu repo is for OpenStack.


> However, it turned out that Fuel 8.0 is going to be run on Centos 7 and it
>> is not enough to just do things which we usually did during upgrades. Now
>> there are two ways to upgrade:
>> 1) to use the official Centos upgrade script for upgrading from 6 to 7
>> 2) to backup the master node, then reinstall it from scratch and then
>> apply backup
>>
>
> +1 for 2. We cannot guarantee that #1 will work smoothly. Also, there is
> some technical debt we cannot solve with #1 (i.e. - Docker device mapper).
> Also, the customer might have environments running on CentOS 6 so
> supporting all scenarios is quite hard. If we do this we can redesign the
> docker-related part so we'll have a huge benefit later on.
>
>
In the Upgrade team, we researched these two options. Option #1 allows us to
keep the procedure close to what we had in previous versions, but it won't be
automatic, as there are too many changes in our flavor of CentOS 6.6. Option
#2, on the other hand, will require developing essentially a new workflow:
1. backup the DB and settings,
2. prepare custom config for bootstrap_master_node script (to retain IP
addressing),
3. reinstall Fuel node with 8.0,
4. upload and upgrade DB,
5. restore keystone/db credentials

This sequence of steps is high level, of course, and might change during
development. An additional benefit is that the backup/restore parts of it
could be used separately to create backups of the Fuel node.
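
As a very rough sketch of the backup/restore portion (steps 1-2 and 4-5) --
the file, container and DB names here are assumptions (the usual PostgreSQL
"nailgun" database and /etc/fuel/astute.yaml), not the final tooling:

  # 1. on the old master: dump the nailgun DB (it lives in the postgres
  #    container on 7.x, so run pg_dump from inside that container) and
  #    keep the node's settings file
  pg_dump -U postgres nailgun > nailgun.sql
  cp /etc/fuel/astute.yaml astute.yaml.bak
  # 2./3. reinstall the master node from the 8.0 ISO, re-using the saved
  #       IP addressing for bootstrap_master_node
  # 4. load the dump into the new DB and run the nailgun DB migrations
  # 5. restore keystone/service credentials from the saved configs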

Our current plan is to pursue option #2 in the following 3 weeks. I will
keep this list updated on our progress as soon as we have any.

--
Best regards,
Oleg Gelbukh


> As a company we will help the clients who might want to upgrade from
> 5.1-7.0 to 8.0, but that will include analysing the environment/plugins and
> making a personalized upgrade scenario. It might be 'fuel-octane' to migrate
> the workload to a new cloud or some script/documentation to perform the upgrade.
>
>
>>
>> Upgrade team is trying to understand which way is more appropriate.
>> Regarding my tarball-related activities, I'd say that this package-based
>> upgrade approach can be aligned with (1) (fuel-upgrade would use official
>> Centos upgrade script as a first step for upgrade), but it definitely can
>> not be aligned with (2), because it assumes reinstalling the master node
>> from scratch.
>>
>> Right now, I'm finishing the work around deprecating version.yaml and my
>> further steps would be to modify fuel-upgrade script so it does not copy
>> RPM/DEB repos, but those steps make little sense given the move to
>> CentOS 7.
>>
>
> +1.
>
>
>> Colleagues, let's make a decision about how we are going to upgrade the
>> master node ASAP. Probably my tarball related work should be reduced to
>> just throwing tarball away.
>>
>
> +2. That will allow us to:
> 1. Reduce ISO size
> 2. Speed up ISO compilation by including -j8
> 3. Speed up CI
>
>
>>
>> Vladimir Kozhukalov
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] Understanding stable/branch process for Neutron subprojects

2015-11-06 Thread Neil Jerram
On 06/11/15 13:46, Ihar Hrachyshka wrote:
> Neil Jerram  wrote:
>
>> Prompted by the thread about maybe allowing subproject teams to do their
>> own stable maint, I have some questions about what I should be doing in
>> networking-calico; and I guess the answers may apply generally to
>> subprojects.
>>
>> Let's start from the text at
>> http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html:
>>
>>> Stable branches for libraries should be created at the same time when
>> "libraries"?  Should that say "subprojects”?
> Yes. Please send a patch to fix wording.

https://review.openstack.org/#/c/242506/

>> I think I understand the point here.  However, networking-calico doesn't
>> yet have a stable/liberty branch, and in practice its master branch
>> currently targets Neutron stable/liberty code.  (For example, its
>> DevStack setup instructions say "git checkout stable/liberty".)
>
> Well that’s unfortunate. You should allow devstack to check out the needed  
> branch for neutron instead of overwriting its choice.

I'm afraid I don't understand, could you explain further?  Here's what
the setup instructions [1] currently say:

  # Clone the DevStack repository.
  git clone https://git.openstack.org/openstack-dev/devstack

  # Use the stable/liberty branch.
  cd devstack
  git checkout stable/liberty

What should they say instead?

[1]
https://git.openstack.org/cgit/openstack/networking-calico/tree/devstack/bootstrap.sh

>
>> To get networking-calico into a correct state per the above guideline, I
>> think I'd need/want to
>>
>> - create a stable/liberty branch (from the current master, as there is
>> nothing in master that actually depends on Neutron changes since
>> stable/liberty)
>>
>> - continue developing useful enhancements on the stable/liberty branch -
>> because my primary target for now is the released Liberty - and then
>> merge those to master
>>
> Once spun off, stable branches should receive bug fixes only. No new
> features, db migrations, or configuration changes are allowed in stable
> branches.
>
>> - eventually, develop on the master branch also, to take advantage of
>> and keep current with changes in Neutron master.
>>
> All new features must go to master only. Your master should always be  
> tested and work with neutron master (meaning, your master should target  
> Mitaka, not Liberty).
>
>> But is that compatible with the permitted stable branch process?  It
>> sounds like the permitted process requires me to develop everything on
>> master first, then (ask to) cherry-pick specific changes to the stable
>> branch - which isn't actually natural for the current situation (or
>> targeting Liberty releases).
>>
> Yes, that’s what current stable branch process implies. All stadium  
> projects must follow the same stable branch process.
>
> Now, you may also not see any value in supporting Liberty, then you can  
> avoid creating a branch for it; but it seems it’s not the case here.
>
> All that said, we already have stadium projects that violate the usual  
> process for master (f.e. GBP project targets its master development to kilo  
> - sic!) I believe that’s something to clear up as part of discussion of  
> what it really means to be a stadium project. I believe following general  
> workflow that is common to the project as a whole is one of the  
> requirements that we should impose.

Thanks for these clear answers.  I'll work towards getting all this correct.

Regards,
Neil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [doc] How to support Microversions and Actions in Swagger Spec

2015-11-06 Thread Anne Gentle
On Thu, Nov 5, 2015 at 9:31 PM, Alex Xu  wrote:

> Hi, folks
>
> Nova API sub-team is working on the swagger generation. And there is PoC
> https://review.openstack.org/233446
>
> But before we are going to next step, I really hope we can get agreement
> with how to support Microversions and Actions. The PoC have demo about
> Microversions. It generates min version action as swagger spec standard,
> for the other version actions, it named as extended attribute, like:
>
> {
> '/os-keypairs': {
> "get": {
> 'x-start-version': '2.1',
> 'x-end-version': '2.1',
> 'description': '',
>
> },
> "x-get-2.2-2.9": {
> 'x-start-version': '2.2',
> 'x-end-version': '2.9',
> 'description': '',
> .
> }
> }
> }
>
> x-start-version and x-end-version are the metadata for Microversions,
> which should be used by UI code to parse.
>

The swagger.io editor will not necessarily recognize extended attributes
(x- are extended attributes), right? I don't think we intend for these
files to be hand-edited once they are generated, though, so I consider it a
non-issue that the editor can't edit microversioned source.


>
> This is just based on my initial thought, and there is another thought is
> generating a set full swagger specs for each Microversion. But I think how
> to show Microversions and Actions should be depended how the doc UI to
> parse that also.
>
> As there is doc project to turn swagger to UI:
> https://github.com/russell/fairy-slipper  But it didn't support
> Microversions. So hope doc team can work with us and help us to find out
> format to support Microversions and Actions which good for UI parse and
> swagger generation.
>

Last release was a proof of concept for being able to generate Swagger.
Next we'll bring fairy-slipper into OpenStack and work with the API working
group and the Nova API team to enhance it.

This release we can further enhance with microversions. Nothing's
preventing that to my knowledge, other than Russell needs more input to
make the output what we want. This email is a good start.

I'm pretty sure microversions are hard to document no matter what we do so
we just need to pick a way and move forward. Here's what is in the spec:
* For microversions, we'll need at least 2 copies of the previous reference
  info (enable a dropdown for the user to choose a prior version or one that
  matches theirs).
* Need to keep deprecated options.
* An example of version comparisons: https://libgit2.github.com/libgit2/#HEAD

Let's discuss weekly at both the Nova API meeting and the API Working group
meeting to refine the design. I'm back next week and plan to update the
spec.
Anne



>
> Any thoughts folks?
>
> Thanks
> Alex
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread John Garbutt
On 6 November 2015 at 14:09, Sean Dague  wrote:
> On 11/06/2015 08:44 AM, Alex Xu wrote:
>>
>>
>> 2015-11-06 20:59 GMT+08:00 Sean Dague > >:
>>
>> On 11/06/2015 07:28 AM, John Garbutt wrote:
>> > On 6 November 2015 at 12:09, Sean Dague > > wrote:
>> >> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
>> >>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>>  Hello all,
>>  I came across [1] which is notionally an ironic bug in that
>> horizon presents
>>  VM operations (like suspend) to users.  Clearly these options
>> don't make sense
>>  to ironic which can be confusing.
>> 
>>  There is a horizon fix that just disables migrate/suspened and
>> other functaions
>>  if the operator sets a flag say ironic is present.  Clealy this
>> is sub optimal
>>  for a mixed hv environment.
>> 
>>  The data needed (hpervisor type) is currently avilable only to
>> admins, a quick
>>  hack to remove this policy restriction is functional.
>> 
>>  There are a few ways to solve this.
>> 
>>   1. Change the default from "rule:admin_api" to "" (for
>>  os_compute_api:os-extended-server-attributes and
>>  os_compute_api:os-hypervisors), and set a list of values we're
>>  comfortbale exposing the user (hypervisor_type and
>>  hypervisor_hostname).  So a user can get the
>> hypervisor_name as part of
>>  the instance deatils and get the hypervisor_type from the
>>  os-hypervisors.  This would work for horizon but increases
>> the API load
>>  on nova and kinda implies that horizon would have to cache
>> the data and
>>  open-code assumptions that hypervisor_type can/can't do
>> action $x
>> 
>>   2. Include the hypervisor_type with the instance data.  This
>> would place the
>>  burdon on nova.  It makes the looking up instance details
>> slightly more
>>  complex but doesn't result in additional API queries, nor
>> caching
>>  overhead in horizon.  This has the same opencoding issues
>> as Option 1.
>> 
>>   3. Define a service user and have horizon look up the
>> hypervisors details via
>>  that role.  Has all the drawbacks as option 1 and I'm
>> struggling to
>>  think of many benefits.
>> 
>>   4. Create a capabilitioes API of some description, that can be
>> queried so that
>>  consumers (horizon) can known
>> 
>>   5. Some other way for users to know what kind of hypervisor
>> they're on, Perhaps
>>  there is an established image property that would work here?
>> 
>>  If we're okay with exposing the hypervisor_type to users, then
>> #2 is pretty
>>  quick and easy, and could be done in Mitaka.  Option 4 is
>> probably the best
>>  long term solution but I think is best done in 'N' as it needs
>> lots of
>>  discussion.
>> >>>
>> >>> I think that exposing hypervisor_type is very much the *wrong*
>> approach
>> >>> to this problem. The set of allowed actions varies based on much
>> more than
>> >>> just the hypervisor_type. The hypervisor version may affect it,
>> as may
>> >>> the hypervisor architecture, and even the version of Nova. If
>> horizon
>> >>> restricted its actions based on hypevisor_type alone, then it is
>> going
>> >>> to inevitably prevent the user from performing otherwise valid
>> actions
>> >>> in a number of scenarios.
>> >>>
>> >>> IMHO, a capabilities based approach is the only viable solution to
>> >>> this kind of problem.
>> >>
>> >> Right, we just had a super long conversation about this in
>> #openstack-qa
>> >> yesterday with mordred, jroll, and deva around what it's going to
>> take
>> >> to get upgrade tests passing with ironic.
>> >>
>> >> Capabilities is the right approach, because it means we're future
>> >> proofing our interface by telling users what they can do, not some
>> >> arbitrary string that they need to cary around a separate library to
>> >> figure those things out.
>> >>
>> >> It seems like capabilities need to exist on flavor, and by proxy
>> instance.
>> >>
>> >> GET /flavors/bm.large/capabilities
>> >>
>> >> {
>> >>  "actions": {
>> >>  'pause': False,
>> >>  'unpause': False,
>> >>  'rebuild': True
>> >>  ..
>> >>   }
>> >>
>>
>>
>> Does this need admin to set the capabilities? If yes, that looks like
>> pain to admin to set capabilities for all the flavors. This should be
>> the capabilities the instance

Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging that?

2015-11-06 Thread John Garbutt
On 6 November 2015 at 13:38, Markus Zoeller  wrote:
> Jeremy Stanley  wrote on 11/05/2015 07:11:37 PM:
>
>> From: Jeremy Stanley 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: 11/05/2015 07:17 PM
>> Subject: Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging
> that?
>>
>> On 2015-11-05 16:23:56 +0100 (+0100), Markus Zoeller wrote:
>> > some months ago I wrote down all the things a developer should know
>> > about the bug handling process in general [1]. It is written as a
>> > project agnostic thing and got some +1s but it isn't merged yet.
>> > It would be helpful when I could use it to give this as a pointer
>> > to new contributors as I'm under the impression that the mental image
>> > differs a lot among the contributors. So, my questions are:
>> >
>> > 1) Who's in charge of merging such non-project-specific things?
>> [...]
>>
>> This is a big part of the problem your addition is facing, in my
>> opinion. The OpenStack Infrastructure Manual is an attempt at a
>> technical manual for interfacing with the systems written and
>> maintained by the OpenStack Project Infrastructure team. It has,
>> unfortunately, also grown some sections which contain cultural
>> background and related recommendations because until recently there
>> was no better venue for those topics, but we're going to be ripping
>> those out and proposing them to documents maintained by more
>> appropriate teams at the earliest opportunity.
>
> I've written this for the Nova docs originally, but it got sent to the
> infra-manual as the "project agnostic thing".
>
>> Bug management falls into a grey area currently, where a lot of the
>> information contributors need is cultural background mixed with
>> workflow information on using Launchpad (which is not really managed
>> by the Infra team). [...]
>
> True, that's what I try to contribute here. I'm aware of the intended
> change in our issue tracker and tried to write the text so it needs
> only a few changes when this transition is done.
>
>> Cultural content about the lifecycle of bugs, standard practices for
>> triage, et cetera are likely better suited to the newly created
>> Project Team Guide;[...]
>
> The Project Team Guide was news to me, I'm going to have a look if
> it would fit.

+1 for trying to see how this fits into the Project Team Guide.

Possibly somewhere in here, add something about having an open bug tracker?
http://docs.openstack.org/project-team-guide/open-development.html#specifications

You can see the summit discussions on the project team guide here:
https://etherpad.openstack.org/p/mitaka-crossproject-doc-the-way

Thanks,
johnthetubaguy

>> So anyway, to my main point, topics in collaboratively-maintained
>> documentation are going to end up being closely tied to the
>> expertise of the review team for the document being targeted. In the
>> case of the Infra Manual that's the systems administrators who
>> configure and maintain our community infrastructure. I won't speak
>> for others on the team, but I don't personally feel comfortable
>> deciding what details a user should include in a bug report for
>> python-novaclient, or how the Cinder team should triage their bug
>> reports.
>>
>> I expect that the lack of core reviews are due to:
>>
>> 1. Few of the core reviewers feel they can accurately judge much of
>> the content you've proposed in that change.
>>
>> 2. Nobody feels empowered to tell you that this large and
>> well-written piece of documentation you've spent a lot of time
>> putting together is a poor fit and should be split up and much of it
>> put somewhere else more suitable (especially without a suggestion as
>> to where that might be).
>>
>> 3. The core review team for this is the core review team for all our
>> infrastructure systems, and we're all unfortunately very behind in
>> handling the current review volume.
>
> Maybe the time has come for me to think about starting a blog...
> Thanks Stanley, for your time and feedback.
>
> Regards, Markus Zoeller (markus_z)
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Sean Dague
On 11/06/2015 08:44 AM, Alex Xu wrote:
> 
> 
> 2015-11-06 20:59 GMT+08:00 Sean Dague  >:
> 
> On 11/06/2015 07:28 AM, John Garbutt wrote:
> > On 6 November 2015 at 12:09, Sean Dague  > wrote:
> >> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
> >>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>  Hello all,
>  I came across [1] which is notionally an ironic bug in that
> horizon presents
>  VM operations (like suspend) to users.  Clearly these options
> don't make sense
>  to ironic which can be confusing.
> 
>  There is a horizon fix that just disables migrate/suspened and
> other functaions
>  if the operator sets a flag say ironic is present.  Clealy this
> is sub optimal
>  for a mixed hv environment.
> 
>  The data needed (hpervisor type) is currently avilable only to
> admins, a quick
>  hack to remove this policy restriction is functional.
> 
>  There are a few ways to solve this.
> 
>   1. Change the default from "rule:admin_api" to "" (for
>  os_compute_api:os-extended-server-attributes and
>  os_compute_api:os-hypervisors), and set a list of values we're
>  comfortbale exposing the user (hypervisor_type and
>  hypervisor_hostname).  So a user can get the
> hypervisor_name as part of
>  the instance deatils and get the hypervisor_type from the
>  os-hypervisors.  This would work for horizon but increases
> the API load
>  on nova and kinda implies that horizon would have to cache
> the data and
>  open-code assumptions that hypervisor_type can/can't do
> action $x
> 
>   2. Include the hypervisor_type with the instance data.  This
> would place the
>  burdon on nova.  It makes the looking up instance details
> slightly more
>  complex but doesn't result in additional API queries, nor
> caching
>  overhead in horizon.  This has the same opencoding issues
> as Option 1.
> 
>   3. Define a service user and have horizon look up the
> hypervisors details via
>  that role.  Has all the drawbacks as option 1 and I'm
> struggling to
>  think of many benefits.
> 
>   4. Create a capabilitioes API of some description, that can be
> queried so that
>  consumers (horizon) can known
> 
>   5. Some other way for users to know what kind of hypervisor
> they're on, Perhaps
>  there is an established image property that would work here?
> 
>  If we're okay with exposing the hypervisor_type to users, then
> #2 is pretty
>  quick and easy, and could be done in Mitaka.  Option 4 is
> probably the best
>  long term solution but I think is best done in 'N' as it needs
> lots of
>  discussion.
> >>>
> >>> I think that exposing hypervisor_type is very much the *wrong*
> approach
> >>> to this problem. The set of allowed actions varies based on much
> more than
> >>> just the hypervisor_type. The hypervisor version may affect it,
> as may
> >>> the hypervisor architecture, and even the version of Nova. If
> horizon
> >>> restricted its actions based on hypevisor_type alone, then it is
> going
> >>> to inevitably prevent the user from performing otherwise valid
> actions
> >>> in a number of scenarios.
> >>>
> >>> IMHO, a capabilities based approach is the only viable solution to
> >>> this kind of problem.
> >>
> >> Right, we just had a super long conversation about this in
> #openstack-qa
> >> yesterday with mordred, jroll, and deva around what it's going to
> take
> >> to get upgrade tests passing with ironic.
> >>
> >> Capabilities is the right approach, because it means we're future
> >> proofing our interface by telling users what they can do, not some
> >> arbitrary string that they need to cary around a separate library to
> >> figure those things out.
> >>
> >> It seems like capabilities need to exist on flavor, and by proxy
> instance.
> >>
> >> GET /flavors/bm.large/capabilities
> >>
> >> {
> >>  "actions": {
> >>  'pause': False,
> >>  'unpause': False,
> >>  'rebuild': True
> >>  ..
> >>   }
> >>
> 
> 
> Does this need admin to set the capabilities? If yes, that looks like
> pain to admin to set capabilities for all the flavors. This should be
> the capabilities the instance required. And hypervisor should report
> their capabilities, and reflect to instance.

No, in version 1 there should be some mechanism that pulls this up from
the driver.
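
To make the horizon side concrete, a rough sketch (the capabilities call is
the proposed interface from this thread, not something Nova exposes today)
could be as simple as:

  import requests

  NOVA = 'http://controller:8774/v2.1'    # assumed endpoint
  HEADERS = {'X-Auth-Token': '...'}       # token obtained elsewhere

  def allowed_actions(flavor_id):
      # hypothetical endpoint, per the proposal above
      url = '%s/flavors/%s/capabilities' % (NOVA, flavor_id)
      caps = requests.get(url, headers=HEADERS).json()
      return {name for name, allowed in caps.get('actions', {}).items() if allowed}

  actions = allowed_actions('bm.large')
  show_suspend_button = 'suspend' in actions   # e.g. hidden for ironic-backed flavors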

Re: [openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-06 Thread Alex Xu
2015-11-06 20:59 GMT+08:00 Sean Dague :

> On 11/06/2015 07:28 AM, John Garbutt wrote:
> > On 6 November 2015 at 12:09, Sean Dague  wrote:
> >> On 11/06/2015 04:49 AM, Daniel P. Berrange wrote:
> >>> On Fri, Nov 06, 2015 at 05:08:59PM +1100, Tony Breeds wrote:
>  Hello all,
>  I came across [1] which is notionally an ironic bug in that
> horizon presents
>  VM operations (like suspend) to users.  Clearly these options don't
> make sense
>  to ironic which can be confusing.
> 
>  There is a horizon fix that just disables migrate/suspened and other
> functaions
>  if the operator sets a flag say ironic is present.  Clealy this is
> sub optimal
>  for a mixed hv environment.
> 
>  The data needed (hpervisor type) is currently avilable only to
> admins, a quick
>  hack to remove this policy restriction is functional.
> 
>  There are a few ways to solve this.
> 
>   1. Change the default from "rule:admin_api" to "" (for
>  os_compute_api:os-extended-server-attributes and
>  os_compute_api:os-hypervisors), and set a list of values we're
>  comfortbale exposing the user (hypervisor_type and
>  hypervisor_hostname).  So a user can get the hypervisor_name as
> part of
>  the instance deatils and get the hypervisor_type from the
>  os-hypervisors.  This would work for horizon but increases the
> API load
>  on nova and kinda implies that horizon would have to cache the
> data and
>  open-code assumptions that hypervisor_type can/can't do action $x
> 
>   2. Include the hypervisor_type with the instance data.  This would
> place the
>  burdon on nova.  It makes the looking up instance details
> slightly more
>  complex but doesn't result in additional API queries, nor caching
>  overhead in horizon.  This has the same opencoding issues as
> Option 1.
> 
>   3. Define a service user and have horizon look up the hypervisors
> details via
>  that role.  Has all the drawbacks as option 1 and I'm struggling
> to
>  think of many benefits.
> 
>   4. Create a capabilitioes API of some description, that can be
> queried so that
>  consumers (horizon) can known
> 
>   5. Some other way for users to know what kind of hypervisor they're
> on, Perhaps
>  there is an established image property that would work here?
> 
>  If we're okay with exposing the hypervisor_type to users, then #2 is
> pretty
>  quick and easy, and could be done in Mitaka.  Option 4 is probably
> the best
>  long term solution but I think is best done in 'N' as it needs lots of
>  discussion.
> >>>
> >>> I think that exposing hypervisor_type is very much the *wrong* approach
> >>> to this problem. The set of allowed actions varies based on much more
> than
> >>> just the hypervisor_type. The hypervisor version may affect it, as may
> >>> the hypervisor architecture, and even the version of Nova. If horizon
> >>> restricted its actions based on hypevisor_type alone, then it is going
> >>> to inevitably prevent the user from performing otherwise valid actions
> >>> in a number of scenarios.
> >>>
> >>> IMHO, a capabilities based approach is the only viable solution to
> >>> this kind of problem.
> >>
> >> Right, we just had a super long conversation about this in #openstack-qa
> >> yesterday with mordred, jroll, and deva around what it's going to take
> >> to get upgrade tests passing with ironic.
> >>
> >> Capabilities is the right approach, because it means we're future
> >> proofing our interface by telling users what they can do, not some
> >> arbitrary string that they need to cary around a separate library to
> >> figure those things out.
> >>
> >> It seems like capabilities need to exist on flavor, and by proxy
> instance.
> >>
> >> GET /flavors/bm.large/capabilities
> >>
> >> {
> >>  "actions": {
> >>  'pause': False,
> >>  'unpause': False,
> >>  'rebuild': True
> >>  ..
> >>   }
> >>
>

Does this need an admin to set the capabilities? If yes, that looks like a
pain for the admin to set capabilities for all the flavors. These should be
the capabilities the instance requires, and the hypervisor should report its
capabilities, which are then reflected onto the instance.


> >> A starting point would definitely be the set of actions that you can
> >> send to the flavor/instance. There may be features beyond that we'd like
> >> to classify as capabilities, but actions would be a very concrete and
> >> attainable starting point. With microversions we don't have to solve
> >> this all at once, start with a concrete thing and move forward.
>

+1, Microversions give us a way to improve our API! And capabilities API is
really important.


> >>
> >> Sending an action that was "False" for the instance/flavor would return
> >> a 400 BadRequest high up at the API level, much like input validation
> >> via jsonschema.

Re: [openstack-dev] [neutron][stable] Understanding stable/branch process for Neutron subprojects

2015-11-06 Thread Ihar Hrachyshka

Neil Jerram  wrote:


Prompted by the thread about maybe allowing subproject teams to do their
own stable maint, I have some questions about what I should be doing in
networking-calico; and I guess the answers may apply generally to
subprojects.

Let's start from the text at
http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html:


Stable branches for libraries should be created at the same time when


"libraries"?  Should that say "subprojects”?


Yes. Please send a patch to fix wording.




corresponding neutron stable branches are cut off. This is to avoid
situations when a postponed cut-off results in a stable branch that
contains some patches that belong to the next release. This would
require reverting patches, and this is something you should avoid.


(Textually, I think "created" would be clearer here than "cut off", if
that is the intended meaning.  "cut off" could also mean "deleted" or
"stop being used”.)


Same.



I think I understand the point here.  However, networking-calico doesn't
yet have a stable/liberty branch, and in practice its master branch
currently targets Neutron stable/liberty code.  (For example, its
DevStack setup instructions say "git checkout stable/liberty".)



Well that’s unfortunate. You should allow devstack to check out the needed  
branch for neutron instead of overwriting its choice.



To get networking-calico into a correct state per the above guideline, I
think I'd need/want to

- create a stable/liberty branch (from the current master, as there is
nothing in master that actually depends on Neutron changes since
stable/liberty)

- continue developing useful enhancements on the stable/liberty branch -
because my primary target for now is the released Liberty - and then
merge those to master



Once spun off, stable branches should receive bug fixes only. No new
features, db migrations, or configuration changes are allowed in stable
branches.



- eventually, develop on the master branch also, to take advantage of
and keep current with changes in Neutron master.



All new features must go to master only. Your master should always be  
tested and work with neutron master (meaning, your master should target  
Mitaka, not Liberty).



But is that compatible with the permitted stable branch process?  It
sounds like the permitted process requires me to develop everything on
master first, then (ask to) cherry-pick specific changes to the stable
branch - which isn't actually natural for the current situation (or
targeting Liberty releases).



Yes, that’s what current stable branch process implies. All stadium  
projects must follow the same stable branch process.
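
For a subproject following that process, the mechanics are the usual gerrit
ones (a sketch of standard practice, nothing networking-calico specific):

  # land the change on master first
  git checkout master
  # ...commit the fix, push it with git-review, wait for it to merge...

  # then propose the backport to the stable branch
  git checkout stable/liberty
  git cherry-pick -x <sha-of-the-merged-master-commit>
  git review stable/liberty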


Now, you may also not see any value in supporting Liberty, then you can  
avoid creating a branch for it; but it seems it’s not the case here.


All that said, we already have stadium projects that violate the usual  
process for master (f.e. GBP project targets its master development to kilo  
- sic!) I believe that’s something to clear up as part of discussion of  
what it really means to be a stadium project. I believe following general  
workflow that is common to the project as a whole is one of the  
requirements that we should impose.


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging that?

2015-11-06 Thread Markus Zoeller
Jeremy Stanley  wrote on 11/05/2015 07:11:37 PM:

> From: Jeremy Stanley 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/05/2015 07:17 PM
> Subject: Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging 
that?
> 
> On 2015-11-05 16:23:56 +0100 (+0100), Markus Zoeller wrote:
> > some months ago I wrote down all the things a developer should know
> > about the bug handling process in general [1]. It is written as a
> > project agnostic thing and got some +1s but it isn't merged yet.
> > It would be helpful when I could use it to give this as a pointer
> > to new contributors as I'm under the impression that the mental image
> > differs a lot among the contributors. So, my questions are:
> > 
> > 1) Who's in charge of merging such non-project-specific things?
> [...]
> 
> This is a big part of the problem your addition is facing, in my
> opinion. The OpenStack Infrastructure Manual is an attempt at a
> technical manual for interfacing with the systems written and
> maintained by the OpenStack Project Infrastructure team. It has,
> unfortunately, also grown some sections which contain cultural
> background and related recommendations because until recently there
> was no better venue for those topics, but we're going to be ripping
> those out and proposing them to documents maintained by more
> appropriate teams at the earliest opportunity.

I've written this for the Nova docs originally, but it got sent to the
infra-manual as the "project agnostic thing". 

> Bug management falls into a grey area currently, where a lot of the
> information contributors need is cultural background mixed with
> workflow information on using Launchpad (which is not really managed
> by the Infra team). [...]

True, that's what I try to contribute here. I'm aware of the intended
change in our issue tracker and tried to write the text so it needs
only a few changes when this transition is done.
 
> Cultural content about the lifecycle of bugs, standard practices for
> triage, et cetera are likely better suited to the newly created
> Project Team Guide;[...]

The Project Team Guide was news to me, I'm going to have a look if
it would fit.
 
> So anyway, to my main point, topics in collaboratively-maintained
> documentation are going to end up being closely tied to the
> expertise of the review team for the document being targeted. In the
> case of the Infra Manual that's the systems administrators who
> configure and maintain our community infrastructure. I won't speak
> for others on the team, but I don't personally feel comfortable
> deciding what details a user should include in a bug report for
> python-novaclient, or how the Cinder team should triage their bug
> reports.
> 
> I expect that the lack of core reviews are due to:
> 
> 1. Few of the core reviewers feel they can accurately judge much of
> the content you've proposed in that change.
> 
> 2. Nobody feels empowered to tell you that this large and
> well-written piece of documentation you've spent a lot of time
> putting together is a poor fit and should be split up and much of it
> put somewhere else more suitable (especially without a suggestion as
> to where that might be).
> 
> 3. The core review team for this is the core review team for all our
> infrastructure systems, and we're all unfortunately very behind in
> handling the current review volume.

Maybe the time has come for me to think about starting a blog...
Thanks Stanley, for your time and feedback.

Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

