[openstack-dev] [Nova] MS Update

2014-12-05 Thread Gary Kotton
Hi,
MS (Minesweeper, the VMware CI) has been down for a few days. The following
patches will help us get it up and running again:
 - requirements - https://review.openstack.org/139545
 - oslo.vmware - https://review.openstack.org/139296 (depends on the patch 
above)
 - devstack - https://review.openstack.org/139515
Hopefully once the above are in we will be back to business as usual.
Thanks
Gary


Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-05 Thread Kevin Benton
I see the difference now.
The main concern I see with the NOOP type is that creating the virtual
interface could require different logic for certain hypervisors. In that
case Neutron would now have to know things about Nova, and to me it seems
like that's slightly too far in the other direction.
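
(To make the distinction being discussed concrete -- a hypothetical sketch
with invented names, not actual Nova code:)

# VIF_TYPE_TAP: Nova creates the TAP device and nothing else.
# VIF_TYPE_NOOP: Nova does nothing at all; a Neutron agent owns everything.
VIF_TYPE_TAP = 'tap'
VIF_TYPE_NOOP = 'noop'

class GenericVIFDriver(object):
    def plug(self, instance, vif):
        if vif['type'] == VIF_TYPE_NOOP:
            # Device creation and vSwitch plugging are entirely the
            # responsibility of a Neutron-side agent.
            return
        if vif['type'] == VIF_TYPE_TAP:
            # The device name is derived from the port ID (as noted later
            # in this thread); Nova creates it and leaves it unplugged.
            self._create_tap_device('tap%s' % vif['id'][:11])
            return
        # ... existing per-type plugging logic (OVS, bridge, ...) ...

    def _create_tap_device(self, dev_name):
        pass  # e.g. 'ip tuntap add <dev_name> mode tap'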

On Thu, Dec 4, 2014 at 8:00 AM, Neil Jerram wrote:

> Kevin Benton  writes:
>
> > What you are proposing sounds very reasonable. If I understand
> > correctly, the idea is to make Nova just create the TAP device and get
> > it attached to the VM and leave it 'unplugged'. This would work well
> > and might eliminate the need for some drivers. I see no reason to
> > block adding a VIF type that does this.
>
> I was actually floating a slightly more radical option than that: the
> idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does
> absolutely _nothing_, not even create the TAP device.
>
> (My pending Nova spec at https://review.openstack.org/#/c/130732/
> proposes VIF_TYPE_TAP, for which Nova _does_ create the TAP device, but
> then does nothing else - i.e. exactly what you've described just above.
> But in this email thread I was musing about going even further, towards
> providing a platform for future networking experimentation where Nova
> isn't involved at all in the networking setup logic.)
>
> > However, there is a good reason that the VIF type for some OVS-based
> > deployments require this type of setup. The vSwitches are connected to
> > a central controller using openflow (or ovsdb) which configures
> > forwarding rules/etc. Therefore they don't have any agents running on
> > the compute nodes from the Neutron side to perform the step of getting
> > the interface plugged into the vSwitch in the first place. For this
> > reason, we will still need both types of VIFs.
>
> Thanks.  I'm not advocating that existing VIF types should be removed,
> though - rather wondering if similar function could in principle be
> implemented without Nova VIF plugging - or what that would take.
>
> For example, suppose someone came along and wanted to implement a new
> OVS-like networking infrastructure?  In principle could they do that
> without having to enhance the Nova VIF driver code?  I think at the
> moment they couldn't, but that they would be able to if VIF_TYPE_NOOP
> (or possibly VIF_TYPE_TAP) was already in place.  In principle I think
> it would then be possible for the new implementation to specify
> VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does the kind
> of configuration and vSwitch plugging that you've described above.
>
> Does that sound correct, or am I missing something else?
>
> >> 1. When the port is created in the Neutron DB, and handled (bound
> > etc.)
> > by the plugin and/or mechanism driver, the TAP device name is already
> > present at that time.
> >
> > This is backwards. The tap device name is derived from the port ID, so
> > the port has already been created in Neutron at that point. It is just
> > unbound. The steps are roughly as follows: Nova calls neutron for a
> > port, Nova creates/plugs VIF based on port, Nova updates port on
> > Neutron, Neutron binds the port and notifies agent/plugin/whatever to
> > finish the plumbing, Neutron notifies Nova that port is active, Nova
> > unfreezes the VM.
> >
> > None of that should be affected by what you are proposing. The only
> > difference is that your Neutron agent would also perform the
> > 'plugging' operation.
>
> Agreed - but thanks for clarifying the exact sequence of events.
>
> I wonder if what I'm describing (either VIF_TYPE_NOOP or VIF_TYPE_TAP)
> might fit as part of the "Nova-network/Neutron Migration" priority
> that's just been announced for Kilo.  I'm aware that a part of that
> priority is concerned with live migration, but perhaps it could also
> include the goal of future networking work not having to touch Nova
> code?
>
> Regards,
> Neil
>



-- 
Kevin Benton


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-05 Thread Erik Moe

One reason for trying to get a more complete API into Neutron is to have a
standardized API. So users know what to expect and for providers to have 
something to comply to. Do you suggest we bring this standardization work to 
some other forum, OPNFV for example? Neutron provides low level hooks and the 
rest is defined elsewhere. Maybe this could work, but there would probably be 
other issues if the actual implementation is not on the edge or outside Neutron.

/Erik


From: Ian Wells [mailto:ijw.ubu...@cack.org.uk]
Sent: den 4 december 2014 20:19
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

On 1 December 2014 at 21:26, Mohammad Hanif <mha...@brocade.com> wrote:
I hope we all understand how edge VPN works and what interactions are
introduced as part of this spec.  I see references to a neutron-network mapping
to the tunnel, which is not at all the case, and the edge-VPN spec doesn’t
propose it.  At a very high level, there are two main concepts:

  1.  Creation of a per-tenant VPN “service” on a PE (physical router) which
has connectivity to other PEs using some tunnel (not known to the tenant, nor
tenant-facing).  An attachment circuit for this VPN service is also created,
which carries a “list” of tenant networks (the list is initially empty).
  2.  The tenant “updates” the list of tenant networks in the attachment
circuit, which essentially allows the VPN “service” to add or remove the
network from being part of that VPN.
A service plugin implements what is described in (1) and provides an API which
is called by what is described in (2).  The Neutron driver only “updates” the
attachment circuit using an API (the attachment circuit is also part of the
service plugin’s data model).  I don’t see where we are introducing large data
model changes to Neutron.
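
(Purely as an illustration of step (2) above; the resource and field names
in this sketch are invented, not taken from the edge-VPN spec:)

import requests

NEUTRON = 'http://controller:9696/v2.0'            # hypothetical endpoint
HEADERS = {'X-Auth-Token': 'TENANT_SCOPED_TOKEN'}  # placeholder token

# The tenant "updates" the list of networks carried by an attachment
# circuit; the VPN service plugin reacts by adding/removing those networks
# to/from the VPN.
body = {'attachment_circuit': {'networks': ['net-uuid-a', 'net-uuid-b']}}
resp = requests.put('%s/attachment_circuits/%s' % (NEUTRON, 'ac-uuid'),
                    json=body, headers=HEADERS)
resp.raise_for_status()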

Well, you have attachment types, tunnels, and so on - these are all objects 
with data models, and your spec is on Neutron so I'm assuming you plan on 
putting them into the Neutron database - where they are, for ever more, a 
Neutron maintenance overhead both on the dev side and also on the ops side, 
specifically at upgrade.

How else does one introduce a network service in OpenStack if not through a
service plugin?

Again, I've missed something here, so can you define 'service plugin' for me?  
How similar is it to a Neutron extension - which we agreed at the summit we 
should take pains to avoid, per Salvatore's session?
And the answer to that is to stop talking about plugins or trying to integrate 
this into the Neutron API or the Neutron DB, and make it an independent service 
with a small and well defined interaction with Neutron, which is what the 
edge-id proposal suggests.  If we do incorporate it into Neutron then there are 
probably 90% of OpenStack users and developers who don't want or need it but 
care a great deal if it breaks the tests.  If it isn't in Neutron they simply 
don't install it.

As we can see, the tenant needs to communicate (explicitly or otherwise) to
add/remove its networks to/from the VPN.  There has to be a channel and APIs
to achieve this.

Agreed.  I'm suggesting it should be a separate service endpoint.
--
Ian.


Re: [openstack-dev] [all] OpenStack Bootstrapping Hour - Keystone - Friday Dec 5th 20:00 UTC (15:00 Americas/New_York)

2014-12-05 Thread Boris Bobrov
On Thursday 04 December 2014 19:08:06 Sean Dague wrote:
> Sorry for the late announce, too much turkey and pie
> 
> This Friday, Dec 5th, we'll be talking with Steve Martinelli and David
> Stanek about Keystone Authentication in OpenStack.

Wiki page says that the event will be Friday Dec 5th - 19:00 UTC (15:00 
Americas/New_York), while the subject in your mail has 20:00 UTC. Could you 
please clarify that?



Re: [openstack-dev] Session length on wiki.openstack.org

2014-12-05 Thread Thierry Carrez
I agree, and I cross-posted that question to openstack-infra to make
sure the infra team sees it:

http://lists.openstack.org/pipermail/openstack-infra/2014-December/002215.html

Carl Baldwin wrote:
> +1  I've been meaning to say something like this but never got around
> to it.  Thanks for speaking up.
> 
> On Thu, Dec 4, 2014 at 6:03 PM, Tony Breeds  wrote:
>> Hello Wiki masters,
>> Is there any way to extend the session length on the wiki?  In my current
>> workflow I log in to the wiki, do work, and then get distracted by code/IRC;
>> when I go back to the wiki I'm almost always logged out (I'm guessing due to
>> inactivity).  It feels like this is about 30 mins but I could be wrong.
>>
>> Is there any way for me to tweak this session length for myself?
>> If not, can it be increased to, say, 2 hours?
>>
>> Yours Tony.
>>

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [TripleO] Alternate meeting time

2014-12-05 Thread Sullivan, Jon Paul
From: James Polley [mailto:j...@jamezpolley.com]
Sent: 04 December 2014 17:17
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Alternate meeting time



On Thu, Dec 4, 2014 at 11:40 AM, marios <mar...@redhat.com> wrote:
On 04/12/14 11:40, James Polley wrote:
> Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've
> had 10 people respond so far. The winning time so far is Monday 2100UTC
> - 7 "yes" and one "If I have to".

for me it currently shows 1200 UTC as the preferred time.

You're the 11th responder :) And yes, 1200/1400/1500 are now all leading with
8/0/3.

So to be clear, we are voting here for the alternate meeting. The
'original' meeting is at 1900UTC. If in fact 2100UTC ends up being the
most popular, what would be the point of an alternating meeting that is
only 2 hours apart in time?

To me the point would be to get more people able to come along to the meeting. 
But if the difference *was* that small, I'd be wanting to ask if changing the 
format or content of the meeting could convince more people to join the 1900UTC 
meeting - I think that having just one meeting for the whole team would be 
preferable, if we could manage it.

But at present, it looks like if we want to maximise attendance, we should be 
focusing on European early afternoon. That unfortunately means that it's going 
to be very hard for those of us in Australia/New Zealand/China/Japan to make it 
- 1400UTC is 1am Sydney, 10pm Beijing. It's 7:30pm New Delhi, which might be 
doable, but I don't know of anyone working there who would regularly attend.


[JP] - how about we rethink both meeting times then?  19:00 UTC seems like a
time that is convenient for only one timezone, and ideally each meeting time
should be convenient for at least 2 major geographies.  If 15:00 UTC were one
meeting, that should be a time convenient for all of Europe, and also OK as
far back as the US West coast.  Then a meeting at 21:00 UTC should cover
most of Australasia and also provide a good alternate time through to the US
East coast.

Even if those aren’t the 2 times chosen, maybe that is the thinking we need 
here?

Thanks,
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud

Postal Address: Hewlett-Packard Galway Limited, Ballybrit Business Park, Galway.
Registered Office: Hewlett-Packard Galway Limited, 63-74 Sir John Rogerson's 
Quay, Dublin 2.
Registered Number: 361933

The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error you should 
delete it from your system immediately and advise the sender.

To any recipient of this message within HP, unless otherwise stated, you should 
consider this message and attachments as "HP CONFIDENTIAL".



Re: [openstack-dev] [nova] global or per-project specific ssl config options, or both?

2014-12-05 Thread Matthew Gilliard
Hi Matt, Nova,

  I'll look into this.
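
As a starting point, here is a minimal sketch (not a real Nova patch; the
option names are invented for illustration) of how the DictOpt-with-global-
fallback idea could look:

# Sketch of markmc's suggestion: one global [ssl] option plus an optional
# per-service override held in a DictOpt.  Option names are invented.
from oslo.config import cfg  # 'oslo_config' in later releases

CONF = cfg.CONF
CONF.register_opts([
    cfg.StrOpt('ca_file',
               help='Global CA certificate file for SSL verification'),
    cfg.DictOpt('ca_file_overrides', default={},
                help='Per-service overrides, '
                     'e.g. glance:/etc/ssl/glance-ca.pem'),
], group='ssl')

def ca_file_for(service):
    """Return the per-service CA file, falling back to the global value."""
    return CONF.ssl.ca_file_overrides.get(service, CONF.ssl.ca_file)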

Gilliard

On Thu, Dec 4, 2014 at 9:51 PM, Matt Riedemann wrote:
>
>
> On 12/4/2014 6:02 AM, Davanum Srinivas wrote:
>>
>> +1 to @markmc's "default is global value and override for project
>> specific key" suggestion.
>>
>> -- dims
>>
>>
>>
>> On Wed, Dec 3, 2014 at 11:57 PM, Matt Riedemann wrote:
>>>
>>> I've posted this to the 12/4 nova meeting agenda but figured I'd
>>> socialize
>>> it here also.
>>>
>>> SSL options - do we make them per-project or global, or both? Neutron and
>>> Cinder have config-group specific SSL options in nova, Glance is using
>>> oslo
>>> sslutils global options since Juno which was contentious for a time in a
>>> separate review in Icehouse [1].
>>>
>>> Now [2] wants to break that out for Glance, but we also have a patch [3]
>>> for
>>> Keystone to use the global oslo SSL options, we should be consistent, but
>>> does that require a blueprint now?
>>>
>>> In the Icehouse patch, markmc suggested using a DictOpt where the default
>>> value is the global value, which could be coming from the oslo [ssl]
>>> group
>>> and then you could override that with a project-specific key, e.g.
>>> cinder,
>>> neutron, glance, keystone.
>>>
>>> [1] https://review.openstack.org/#/c/84522/
>>> [2] https://review.openstack.org/#/c/131066/
>>> [3] https://review.openstack.org/#/c/124296/
>>>
>>> --
>>>
>>> Thanks,
>>>
>>> Matt Riedemann
>>>
>>>
>>
>>
>>
>>
>
> The consensus in the nova meeting today, I think, was that we generally like
> the idea of the DictOpt with global oslo ssl as the default and then be able
> to configure that per-service if needed.
>
> Does anyone want to put up a POC on how that would work to see how ugly
> and/or usable that would be?  I haven't dug into the DictOpt stuff yet and
> am kind of time-constrained at the moment.
>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>



[openstack-dev] Openstack setup in Data Center with two Dell Blade Server

2014-12-05 Thread dhanesh1212121212
Hi All,


We have a requirement to configure an OpenStack Juno setup in a data center
with two Dell blade servers.

We are planning to use one blade server to install CentOS 7, which will be
our hypervisor.

On the second machine we will install XenServer.

Inside XenServer we will create two CentOS 7 machines: one for Management,
Object Storage, and Block Storage, and a second one (CentOS 7) for the
Network Node.

Please guide me on this.



Thanks and regards,
Dhanesh M.


Re: [openstack-dev] [nova] policy on old / virtually abandoned patches

2014-12-05 Thread Daniel P. Berrange
On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote:
> Nova currently has 197 patches that have seen no activity in the last 4
> weeks (project:openstack/nova age:4weeks status:open).

On a somewhat related note, nova-specs currently has 17 specs
open against specs/juno, most with -2 votes. I think we should
just mass-abandon anything still touching the specs/juno directory.
If people cared about them still they would have submitted for
specs/kilo.

So any objection to killing everything in the list below:

+-------------------------------------+--------------------------------------------------------+----------+-------+---------+----------+
| URL                                 | Subject                                                | Created  | Tests | Reviews | Workflow |
+-------------------------------------+--------------------------------------------------------+----------+-------+---------+----------+
| https://review.openstack.org/86938  | Add tasks to the v3 API                                | 237 days |   1   |   -2    |          |
| https://review.openstack.org/88334  | Add support for USB controller                         | 231 days |   1   |   -2    |          |
| https://review.openstack.org/89766  | Add useful metrics into utilization based scheduli... | 226 days |   1   |   -2    |          |
| https://review.openstack.org/90239  | Blueprint for Cinder Multi attach volumes              | 224 days |   1   |   -2    |          |
| https://review.openstack.org/90647  | Add utilization based weighers on top of MetricsWe... | 221 days |   1   |   -2    |          |
| https://review.openstack.org/96543  | Smart Scheduler (Solver Scheduler) - Constraint ba... | 189 days |   1   |   -2    |          |
| https://review.openstack.org/97441  | Add nova spec for bp/isnot-operator                    | 185 days |   1   |   -2    |          |
| https://review.openstack.org/99476  | Dedicate aggregates for specific tenants               | 176 days |   1   |   -2    |          |
| https://review.openstack.org/99576  | Add client token to CreateServer                       | 176 days |   1   |   -2    |          |
| https://review.openstack.org/101921 | Spec for Neutron migration feature                     | 164 days |   1   |   -2    |          |
| https://review.openstack.org/103617 | Support Identity V3 API                                | 157 days |   1   |   -1    |          |
| https://review.openstack.org/105385 | Leverage the features of IBM GPFS to store cached ...  | 150 days |   1   |   -2    |          |
| https://review.openstack.org/108582 | Add ironic boot mode filters                           | 136 days |   1   |   -2    |          |
| https://review.openstack.org/110639 | Blueprint for the implementation of Nested Quota D...  | 127 days |   1   |   -2    |          |
| https://review.openstack.org/111308 | Added VirtProperties object blueprint                  | 125 days |   1   |   -2    |          |
| https://review.openstack.org/111745 | Improve instance boot time                             | 122 days |   1   |   -2    |          |
| https://review.openstack.org/116280 | Add a new filter to implement project isolation fe...  | 104 days |   1   |   -2    |          |
+-------------------------------------+--------------------------------------------------------+----------+-------+---------+----------+


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] policy on old / virtually abandoned patches

2014-12-05 Thread Matthew Oliver
I have a script that does 95% of what you want:

https://github.com/matthewoliver/swift_abandon_notifier

We are using it for swift reviews. At the moment the only thing it doesn't
do is actually abandon; instead it sends a warning email and waits n days
(2 weeks by default) for action. If a change still turns up after that, it
is added to a list of abandoned changes.

Eg: http://abandoner.oliver.net.au

So anything that appears in that list can be abandoned by a core.

Feel free to use it (it just uses a YAML file for configuration) and we can
all benefit from the enhancements made ;)

Matt
On Dec 5, 2014 11:06 PM, "Daniel P. Berrange"  wrote:

> On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote:
> > Nova currently has 197 patches that have seen no activity in the last 4
> > weeks (project:openstack/nova age:4weeks status:open).
>
> On a somewhat related note, nova-specs currently has 17 specs
> open against specs/juno, most with -2 votes. I think we should
> just mass-abandon anything still touching the specs/juno directory.
> If people cared about them still they would have submitted for
> specs/kilo.
>
> So any objection to killing everything in the list below:
>
>
> [spec list snipped]
>
>
> Regards,
> Daniel
>
>


Re: [openstack-dev] [all] OpenStack Bootstrapping Hour - Keystone - Friday Dec 5th 20:00 UTC (15:00 Americas/New_York)

2014-12-05 Thread Sean Dague
On 12/05/2014 03:37 AM, Boris Bobrov wrote:
> On Thursday 04 December 2014 19:08:06 Sean Dague wrote:
>> Sorry for the late announce, too much turkey and pie
>>
>> This Friday, Dec 5th, we'll be talking with Steve Martinelli and David
>> Stanek about Keystone Authentication in OpenStack.
> 
> Wiki page says that the event will be Friday Dec 5th - 19:00 UTC (15:00 
> Americas/New_York), while the subject in your mail has 20:00 UTC. Could you 
> please clarify that?

It's 20:00 UTC, sorry about that. With the DST switch it looks like
there was a bad copy/paste somewhere. Also, the YouTube link should give
you a real-time countdown to when it is:
https://www.youtube.com/watch?v=Th61TgUVnzU

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] policy on old / virtually abandoned patches

2014-12-05 Thread Daniel P. Berrange
On Fri, Dec 05, 2014 at 11:16:28PM +1100, Matthew Oliver wrote:
> I have a script that does 95% of what you want:
> 
> https://github.com/matthewoliver/swift_abandon_notifier
> 
> We are using it for swift reviews. At the moment the only thing it doesn't
> do is actually abandon, it instead sends a warning email and waits n days
> (2 weeks by default) for action, if it still turns up it adds it to a list
> of abandoned changes.

Nova already has a similar script that does the abandon too, but both
yours & nova's are based on activity / review feedback. I'm explicitly
considering abandoning based on the file path of the spec.
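
Something along these lines could implement a path-based sweep (a sketch
against the standard Gerrit REST API; the credentials and abandon message
below are placeholders):

# Find open nova-specs changes that still touch specs/juno, then abandon.
import json
import requests

GERRIT = 'https://review.openstack.org'
QUERY = 'project:openstack/nova-specs status:open file:^specs/juno/.*'

resp = requests.get(GERRIT + '/changes/', params={'q': QUERY})
# Gerrit prefixes JSON responses with ")]}'" to defeat XSSI
changes = json.loads(resp.text.split('\n', 1)[1])

for change in changes:
    print(change['_number'], change['subject'])
    # Abandoning needs authentication, e.g.:
    # requests.post('%s/a/changes/%s/abandon' % (GERRIT, change['id']),
    #               auth=('user', 'http-password'),
    #               json={'message': 'specs/juno is closed; '
    #                                'please resubmit for Kilo.'})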

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-12-05 Thread Ihar Hrachyshka

On 04/12/14 16:59, Vadivel Poonathan wrote:
> Hi Kyle and all,
> 
> Was there any conclusion at the design summit or in the meetings
> afterward about splitting the vendor plugins/drivers from
> mainstream Neutron and documenting out-of-tree
> plugins/drivers?...

It's expected that the following spec, which covers the plugins split,
will be approved and implemented during Kilo:
https://review.openstack.org/134680

> 
> Thanks, Vad --
> 
> 
> On Thu, Oct 23, 2014 at 11:27 AM, Kyle Mestery <mest...@mestery.com> wrote:
> 
> On Thu, Oct 23, 2014 at 12:35 PM, Vadivel Poonathan
> <vadivel.openst...@gmail.com> wrote:
>> Hi Kyle and Anne,
>> 
>> Thanks for the clarifications... understood and it makes sense.
>> 
>> However, per my understanding, the drivers (aka plugins) are
>> meant to be developed and supported by third-party vendors,
>> outside of the OpenStack community, and they are supposed to work
>> as plug-n-play... they are not part of the core OpenStack
>> development, nor any of its components. If that is the case, then
>> why should the OpenStack community include and maintain them as part
>> of it, for every release?...  Wouldn't it be enough to limit the
>> scope with the plugin framework and built-in drivers such as
>> LinuxBridge or OVS etc?... not extending to commercial
>> vendors?...  (It is just a curious question, forgive me if i
>> missed something and correct me!).
>> 
> You haven't misunderstood anything, we're in the process of
> splitting these things out, and this will be a prime focus of the
> Neutron design summit track at the upcoming summit.
> 
> Thanks, Kyle
> 
>> At the same time, IMHO, there must be some reference or a page within the
>> scope of OpenStack documentation (not necessarily the core docs, but some
>> wiki page or reference link or so - as Anne suggested) to mention the list
>> of the drivers/plugins supported as of a given release, and maybe an
>> external link to learn more details about the driver, if the link is
>> provided by the respective vendor.
>> 
>> 
>> Anyway, besides my opinion, a wiki page similar to the hypervisor
>> driver one would be good for now at least, until the direction/policy
>> level decision is made on maintaining out-of-tree plugins/drivers.
>> 
>> 
>> Thanks, Vad --
>> 
>> 
>> 
>> 
>> On Thu, Oct 23, 2014 at 9:46 AM, Edgar Magana
>> <edgar.mag...@workday.com> wrote:
>>> 
>>> I second Anne’s and Kyle's comments. Actually, I like very much the
>>> wiki part to provide some visibility for out-of-tree plugins/drivers,
>>> but not in the official documentation.
>>> 
>>> Thanks,
>>> 
>>> Edgar
>>> 
>>> From: Anne Gentle
>>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>>> Date: Thursday, October 23, 2014 at 8:51 AM
>>> To: Kyle Mestery <mest...@mestery.com>
>>> Cc: "OpenStack Development Mailing List (not for usage questions)"
>>> Subject: Re: [openstack-dev] [Neutron] Neutron documentation to update
>>> about new vendor plugin, but without code in repository?
>>> 
>>> 
>>> 
>>> On Thu, Oct 23, 2014 at 10:31 AM, Kyle Mestery
>>> <mest...@mestery.com> wrote:
 
 Vad:
 
 The third-party CI is required for your upstream driver. I think what's
 different from my reading of this thread is the question of what the
 requirement is to have a driver listed in the upstream documentation
 which is not in the upstream codebase. To my knowledge, we haven't done
 this. Thus, IMHO, we should NOT be utilizing upstream documentation to
 document drivers which are themselves not upstream. When we split out
 the drivers which are currently upstream in neutron into a separate
 repo, they will still be upstream. So my opinion here is that if your
 driver is not upstream, it shouldn't be in the upstream documentation.
 But I'd like to hear others' opinions as well.
 
>>> 
>>> This is my sense as well.
>>> 
>>> The hypervisor drivers are documented on the wiki; sometimes
>>> they're in-tree, sometimes they're not, but the state of testing
>>> is documented on the wiki. I think we could take this approach
>>> for network and storage drivers as well.
>>> 
>>> https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>>> 
>>> Anne
>>> 
 
 Thanks, Kyle
 
 On Thu, Oct 23, 2014 at 9:44 AM, Vadivel Poonathan wrote:
> Kyle, gentle reminder... when you get a chance!..
> 
> Anne, in case I need to send this to a different group or email ID
> to reach Kyle Mestery, please let me know. Thanks for your help.
> 
> Regards, Vad --
> 
> 
> On Tue, Oct 21, 2014 at 8:51 AM, Vadivel Poonathan 
>  

[openstack-dev] [devstack] set -o nounset in devstack?

2014-12-05 Thread Sean Dague
I got bit by another bug yesterday caused by a variable name typo in the
source tree. So I started down the path of set -o nounset to see how bad
it would be to prevent that in the future.

There are 2 major classes of issues where the code is functioning fine,
but is caught by nounset:

FOO=$(trueorfalse True $FOO)

if [[ -n "$FOO" ]]; ...


The trueorfalse issue can be fixed if we change the function to be:

function trueorfalse {
    local xtrace=$(set +o | grep xtrace)
    set +o xtrace
    local default=$1
    # ${!2:-} dereferences the variable *named* by $2, expanding to the
    # empty string when it is unset, so it is safe under nounset
    local testval="${!2:-}"

    [[ -z "$testval" ]] && { echo "$default"; return; }
    [[ "0 no No NO false False FALSE" =~ "$testval" ]] && { echo "False"; return; }
    [[ "1 yes Yes YES true True TRUE" =~ "$testval" ]] && { echo "True"; return; }
    echo "$default"
    # restore the caller's xtrace setting
    $xtrace
}


FOO=$(trueorfalse True FOO)

... then works.

the -z and -n bits can be addressed with either FOO=${FOO:-} or an isset
function that interpolates. FOO=${FOO:-} actually feels better to me
because it's part of the spirit of things.

I've found a few bugs already even though I'm probably only about 20% of
the way to a complete working run.


So... the question is, is this worth it? It's going to have fallout in
lesser-used parts of the code where we don't catch things (like -o
errexit did). However, it should help flush out a class of bugs in the
process.

Opinions from devstack contributors / users welcomed.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [nova] policy on old / virtually abandoned patches

2014-12-05 Thread Joe Gordon
On Dec 5, 2014 7:07 AM, "Daniel P. Berrange"  wrote:
>
> On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote:
> > Nova currently has 197 patches that have seen no activity in the last 4
> > weeks (project:openstack/nova age:4weeks status:open).
>
> On a somewhat related note, nova-specs currently has 17 specs
> open against specs/juno, most with -2 votes. I think we should
> just mass-abandon anything still touching the specs/juno directory.
> If people cared about them still they would have submitted for
> specs/kilo.
>
> So any objection to killing everything in the list below:

+1, makes sense to me.

>
> [spec list snipped]
>
>
> Regards,
> Daniel
>


Re: [openstack-dev] [nova] policy on old / virtually abandoned patches

2014-12-05 Thread Sean Dague
On 12/05/2014 08:05 AM, Joe Gordon wrote:
> 
> On Dec 5, 2014 7:07 AM, "Daniel P. Berrange"  > wrote:
>>
>> On Tue, Nov 18, 2014 at 07:06:59AM -0500, Sean Dague wrote:
>> > Nova currently has 197 patches that have seen no activity in the last 4
>> > weeks (project:openstack/nova age:4weeks status:open).
>>
>> On a somewhat related note, nova-specs currently has 17 specs
>> open against specs/juno, most with -2 votes. I think we should
>> just mass-abandon anything still touching the specs/juno directory.
>> If people cared about them still they would have submitted for
>> specs/kilo.
>>
>> So any objection to killing everything in the list below:
> 
> +1, makes sense to me.

Agreed. +1.

> 
>>
>> [spec list snipped]
>>
>>
>> Regards,
>> Daniel
>>
> 


-- 
Sean Dague
http://dague.net



[openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-05 Thread joehuang
Dear all & TC & PTL,

In the 40-minute cross-project summit session “Approaches for scaling out”[1],
almost 100 people attended the meeting, and the conclusion was that cells
cannot cover the use cases and requirements which the OpenStack cascading
solution[2] aims to address. The background, including use cases and
requirements, is also described in this mail.

After the summit, we ported the PoC[3] source code from an Icehouse base to
a Juno base.

Now, let's move forward:

The major task is to introduce new drivers/agents into the existing core
projects, for the core idea of cascading is to add Nova as the hypervisor
backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the
backend of Neutron, Glance as one image location of Glance, and Ceilometer as
the store of Ceilometer.
a). We need a cross-program decision on whether to run cascading as an
incubated project or to register blueprints separately in each involved
project. CI for cascading is quite different from a traditional test
environment: at least 3 OpenStack instances are required for cross-OpenStack
networking test cases.
b). We need a volunteer as the cross-project coordinator.
c). We need volunteers for implementation and CI.

Background of OpenStack cascading vs cells:

1. Use cases
a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to 12'30"):
establishing globally addressable tenants, which results in efficient service
deployment.
b). Telefonica use case[5]: create a virtual DC (data center) across multiple
physical DCs with a seamless experience.
c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. An NFV
cloud is by nature distributed across, but interconnected among, many data
centers.

2. Requirements
a). The operator has a multi-site cloud; each site can use one or multiple
vendors’ OpenStack distributions.
b). Each site has its own requirements and upgrade schedule while maintaining
the standard OpenStack API.
c). The multi-site cloud must provide unified resource management with a
global open API exposed, for example to create a virtual DC across multiple
physical DCs with a seamless experience.
Although a proprietary orchestration layer could be developed for the
multi-site cloud, it would expose a proprietary API on the northbound
interface. Cloud operators want an ecosystem-friendly, global, open API for
the multi-site cloud for global access.

3. What problems does cascading solve that cells doesn't cover:
OpenStack cascading solution is "OpenStack orchestrate OpenStacks". The core 
architecture idea of OpenStack cascading is to add Nova as the hypervisor 
backend of Nova, Cinder as the block storage backend of Cinder, Neutron as the 
backend of Neutron, Glance as one image location of Glance, Ceilometer as the 
store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks (from
different vendors’ distributions, or different versions) which may be located
in different sites (or data centers) through the OpenStack API, while the
cloud still exposes the OpenStack API as the northbound API at the cloud level.
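
(As a purely illustrative sketch of “OpenStack orchestrating OpenStacks” --
class and method names here are invented, not the actual tricircle PoC code:)

from novaclient import client as nova_client

class CascadingComputeDriver(object):
    """A Nova 'hypervisor' driver whose backend is a child OpenStack."""

    def __init__(self, username, password, project, auth_url):
        # One client per child site; the child is itself a full OpenStack.
        self.child = nova_client.Client('2', username, password,
                                        project, auth_url)

    def spawn(self, instance, image_ref, flavor_ref):
        # "Booting a VM" in the parent means booting a real instance in
        # the child site through the standard OpenStack API.
        return self.child.servers.create(instance['display_name'],
                                         image_ref, flavor_ref)

    def destroy(self, instance):
        self.child.servers.delete(instance['uuid'])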

4. Why cells can’t do that:
Cells provide scale-out capability to Nova, but from the point of view of
OpenStack as a whole, it still works like one OpenStack instance.
a). Cells deployed with shared Cinder, Neutron, Glance, and Ceilometer. This
approach provides the multi-site cloud with one unified API endpoint and
unified resource management, but consolidation of multi-vendor/multi-version
OpenStack instances across one or more data centers cannot be fulfilled.
b). Each site installs one child cell and accompanying standalone Cinder,
Neutron (or Nova-network), Glance, and Ceilometer. This approach makes
multi-vendor/multi-version OpenStack distribution co-existence across
multiple sites seem feasible, but the requirement for a unified API endpoint
and unified resource management cannot be fulfilled. Cross-Neutron networking
automation is also missing, and would otherwise have to be done manually or
through a proprietary orchestration layer.

For more information about cascading and cells, please refer to the discussion 
thread before Paris Summit [7].

[1]Approaches for scaling out: 
https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
[2]OpenStack cascading solution: 
https://wiki.openstack.org/wiki/OpenStack_cascading_solution
[3]Cascading PoC: https://github.com/stackforge/tricircle
[4]Vodafone use case (9'02" to 12'30"): 
https://www.youtube.com/watch?v=-KOJYvhmxQI
[5]Telefonica use case: 
http://www.telefonica.com/en/descargas/mwc/present_20140224.pdf
[6]ETSI NFV use cases: 
http://www.etsi.org/deliver/etsi_gs/nfv/001_099/001/01.01.01_60/gs_nfv001v010101p.pdf
[7]Cascading thread before design summit: 
http://openstack.10931.n7.nabble.com/all-tc-Multi-clouds-integration-by-OpenStack-cascading-td54115.html

Best Regards
Chaoyi Huang (joehuang)

[openstack-dev] [MagnetoDB] Intercycle release package versioning

2014-12-05 Thread Aleksei Chuprin (CS)
Hello everyone,

Because the MagnetoDB project releases more frequently than other OpenStack
projects, I propose the following versioning strategy for MagnetoDB packages:

1:2014.2-0ubuntu1
1:2014.2~rc2-0ubuntu1
1:2014.2~rc1-0ubuntu1
1:2014.2~b2-0ubuntu1
1:2014.2~b2.dev{MMDD}_{GIT_SHA1}-0ubuntu1
1:2014.2~b1-0ubuntu1
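
The '~' suffix is what makes this sort correctly: dpkg orders '~' before
anything else, including the empty string, so pre-release packages upgrade
cleanly to the final release. A quick sanity check of the ordering (a
sketch, assuming the python-apt bindings are installed):

import apt_pkg

apt_pkg.init()
versions = ['1:2014.2~b1-0ubuntu1',
            '1:2014.2~rc1-0ubuntu1',
            '1:2014.2-0ubuntu1']
for older, newer in zip(versions, versions[1:]):
    # version_compare() is negative when its first argument is older
    assert apt_pkg.version_compare(older, newer) < 0, (older, newer)
print('ordering OK: ~b1 < ~rc1 < final release')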

What do you think about this?


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Daniel P. Berrange
On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
> One of the things that happens over time is that some of our core
> reviewers move on to other projects. This is a normal and healthy
> thing, especially as nova continues to spin out projects into other
> parts of OpenStack.
> 
> However, it is important that our core reviewers be active, as it
> keeps them up to date with the current ways we approach development in
> Nova. I am therefore removing some no longer sufficiently active cores
> from the nova-core group.
> 
> I’d like to thank the following people for their contributions over the years:
> 
> * cbehrens: Chris Behrens
> * vishvananda: Vishvananda Ishaya
> * dan-prince: Dan Prince
> * belliott: Brian Elliott
> * p-draigbrady: Padraig Brady
> 
> I’d love to see any of these cores return if they find their available
> time for code reviews increases.

What stats did you use to decide whether to cull these reviewers? Looking
at the stats over a 6-month period, I think Padraig Brady is still having
a significant positive impact on Nova - on a par with both cerberus and
alaski, whom you're not proposing to cut. I think we should keep Padraig
on the team, but would suggest cutting markmc instead:

  http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

+---------------------+------------------------------------------------+----------------+
| Reviewer            | Reviews   -2   -1   +1    +2    +A    +/- %    | Disagreements* |
+---------------------+------------------------------------------------+----------------+
| berrange **         |    1766   26  435   12  1293   357    73.9%    |  157 (  8.9%)  |
| jaypipes **         |    1359   11  378  436   534   133    71.4%    |  109 (  8.0%)  |
| jogo **             |    1053  131  326    7   589   353    56.6%    |   47 (  4.5%)  |
| danms **            |     921   67  381   23   450   167    51.4%    |   32 (  3.5%)  |
| oomichi **          |     889    4  306   55   524   182    65.1%    |   40 (  4.5%)  |
| johngarbutt **      |     808  319  227   10   252   145    32.4%    |   37 (  4.6%)  |
| mriedem **          |     642   27  279   25   311   136    52.3%    |   17 (  2.6%)  |
| klmitch **          |     606    1   90    2   513    70    85.0%    |   67 ( 11.1%)  |
| ndipanov **         |     588   19  179   10   380   113    66.3%    |   62 ( 10.5%)  |
| mikalstill **       |     564   31   34    3   496   207    88.5%    |   20 (  3.5%)  |
| cyeoh-0 **          |     546   12  207   30   297   103    59.9%    |   35 (  6.4%)  |
| sdague **           |     511   23   89    6   393   229    78.1%    |   25 (  4.9%)  |
| russellb **         |     465    6   83    0   376   158    80.9%    |   23 (  4.9%)  |
| alaski **           |     415    1   65   21   328   149    84.1%    |   24 (  5.8%)  |
| cerberus **         |     405    6   25   48   326   102    92.3%    |   33 (  8.1%)  |
| p-draigbrady **     |     376    2   40    9   325    64    88.8%    |   49 ( 13.0%)  |
| markmc **           |     243    2   54    3   184    69    77.0%    |   14 (  5.8%)  |
| belliott **         |     231    1   68    5   157    35    70.1%    |   19 (  8.2%)  |
| dan-prince **       |     178    2   48    9   119    29    71.9%    |   11 (  6.2%)  |
| cbehrens **         |     132    2   49    2    79    19    61.4%    |    6 (  4.5%)  |
| vishvananda **      |      54    0    5    3    46    15    90.7%    |    5 (  9.3%)  |
+---------------------+------------------------------------------------+----------------+


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Matt Riedemann



On 12/5/2014 7:41 AM, Daniel P. Berrange wrote:

On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:

One of the things that happens over time is that some of our core
reviewers move on to other projects. This is a normal and healthy
thing, especially as nova continues to spin out projects into other
parts of OpenStack.

However, it is important that our core reviewers be active, as it
keeps them up to date with the current ways we approach development in
Nova. I am therefore removing some no longer sufficiently active cores
from the nova-core group.

I’d like to thank the following people for their contributions over the years:

* cbehrens: Chris Behrens
* vishvananda: Vishvananda Ishaya
* dan-prince: Dan Prince
* belliott: Brian Elliott
* p-draigbrady: Padraig Brady

I’d love to see any of these cores return if they find their available
time for code reviews increases.


What stats did you use to decide whether to cull these reviewers? Looking
at the stats over a 6-month period, I think Padraig Brady is still having
a significant positive impact on Nova - on a par with both cerberus and
alaski, whom you're not proposing to cut. I think we should keep Padraig
on the team, but would suggest cutting markmc instead:

   http://russellbryant.net/openstack-stats/nova-reviewers-180.txt

[review stats table snipped]


Regards,
Daniel



FWIW, markmc is already off the list [1].

[1] https://review.openstack.org/#/admin/groups/25,members

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Nikola Đipanov
On 12/05/2014 01:05 AM, Michael Still wrote:
> One of the things that happens over time is that some of our core
> reviewers move on to other projects. This is a normal and healthy
> thing, especially as nova continues to spin out projects into other
> parts of OpenStack.
> 
> However, it is important that our core reviewers be active, as it
> keeps them up to date with the current ways we approach development in
> Nova. I am therefore removing some no longer sufficiently active cores
> from the nova-core group.
> 
> I’d like to thank the following people for their contributions over the years:
> 
> * cbehrens: Chris Behrens
> * vishvananda: Vishvananda Ishaya
> * dan-prince: Dan Prince
> * belliott: Brian Elliott
> * p-draigbrady: Padraig Brady
> 

I am personally -1 on Padraig and Vish, especially Padraig. As one of
the coreutils maintainers, his contribution to Nova is invaluable,
regardless of whatever metrics applied to his reviews make him appear on
this list (hint - quality should really be the only one). Removing him
from core will probably not affect that, but I personally definitely
trust him not to vote +2 on stuff he is not in touch with, and I view
his +2s, when I see them, as a sign of thorough reviews. Also, he has not
exactly been inactive lately by any measure.

Vish has not been active for some time now, but he is still on IRC and in
the community (as opposed to Chris, for example), so I'm not sure why we'd
do this now.

N.


> I’d love to see any of these cores return if they find their available
> time for code reviews increases.
> 
> Thanks,
> Michael
> 




Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Sahid Orentino Ferdjaoui
On Fri, Dec 05, 2014 at 01:41:59PM +, Daniel P. Berrange wrote:
> On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
> > One of the things that happens over time is that some of our core
> > reviewers move on to other projects. This is a normal and healthy
> > thing, especially as nova continues to spin out projects into other
> > parts of OpenStack.
> > 
> > However, it is important that our core reviewers be active, as it
> > keeps them up to date with the current ways we approach development in
> > Nova. I am therefore removing some no longer sufficiently active cores
> > from the nova-core group.
> > 
> > I’d like to thank the following people for their contributions over the 
> > years:
> > 
> > * cbehrens: Chris Behrens
> > * vishvananda: Vishvananda Ishaya
> > * dan-prince: Dan Prince
> > * belliott: Brian Elliott
> > * p-draigbrady: Padraig Brady
> > 
> > I’d love to see any of these cores return if they find their available
> > time for code reviews increases.
> 
> What stats did you use to decide whether to cull these reviewers? Looking
> at the stats over a 6-month period, I think Padraig Brady is still having
> a significant positive impact on Nova - on a par with both cerberus and
> alaski, whom you're not proposing to cut. I think we should keep Padraig
> on the team, but would suggest cutting markmc instead:
> 
>   http://russellbryant.net/openstack-stats/nova-reviewers-180.txt
> 
> [review stats table snipped]
> 

+1

Padraig has given us several robust reviews on important topics. Losing him
will make the work on Nova more difficult.

s.



Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Daniel P. Berrange
On Fri, Dec 05, 2014 at 07:44:24AM -0600, Matt Riedemann wrote:
> 
> 
> On 12/5/2014 7:41 AM, Daniel P. Berrange wrote:
> >On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
> >>One of the things that happens over time is that some of our core
> >>reviewers move on to other projects. This is a normal and healthy
> >>thing, especially as nova continues to spin out projects into other
> >>parts of OpenStack.
> >>
> >>However, it is important that our core reviewers be active, as it
> >>keeps them up to date with the current ways we approach development in
> >>Nova. I am therefore removing some no longer sufficiently active cores
> >>from the nova-core group.
> >>
> >>I’d like to thank the following people for their contributions over the 
> >>years:
> >>
> >>* cbehrens: Chris Behrens
> >>* vishvananda: Vishvananda Ishaya
> >>* dan-prince: Dan Prince
> >>* belliott: Brian Elliott
> >>* p-draigbrady: Padraig Brady
> >>
> >>I’d love to see any of these cores return if they find their available
> >>time for code reviews increases.
> >
> >What stats did you use to decide whether to cull these reviewers? Looking
> >at the stats over a 6-month period, I think Padraig Brady is still having
> >a significant positive impact on Nova - on a par with both cerberus and
> >alaski, whom you're not proposing to cut. I think we should keep Padraig
> >on the team, but would suggest cutting markmc instead.
> >
> >   http://russellbryant.net/openstack-stats/nova-reviewers-180.txt
> >
> >+---------------------+--------------------------------------+----------------+
> >|       Reviewer      | Reviews   -2  -1  +1   +2  +A  +/- % | Disagreements* |
> >+---------------------+--------------------------------------+----------------+
> >|     berrange **     |    1766   26 435  12 1293 357  73.9% |  157 (  8.9%)  |
> >|     jaypipes **     |    1359   11 378 436  534 133  71.4% |  109 (  8.0%)  |
> >|       jogo **       |    1053  131 326   7  589 353  56.6% |   47 (  4.5%)  |
> >|       danms **      |     921   67 381  23  450 167  51.4% |   32 (  3.5%)  |
> >|      oomichi **     |     889    4 306  55  524 182  65.1% |   40 (  4.5%)  |
> >|    johngarbutt **   |     808  319 227  10  252 145  32.4% |   37 (  4.6%)  |
> >|      mriedem **     |     642   27 279  25  311 136  52.3% |   17 (  2.6%)  |
> >|      klmitch **     |     606    1  90   2  513  70  85.0% |   67 ( 11.1%)  |
> >|     ndipanov **     |     588   19 179  10  380 113  66.3% |   62 ( 10.5%)  |
> >|    mikalstill **    |     564   31  34   3  496 207  88.5% |   20 (  3.5%)  |
> >|      cyeoh-0 **     |     546   12 207  30  297 103  59.9% |   35 (  6.4%)  |
> >|      sdague **      |     511   23  89   6  393 229  78.1% |   25 (  4.9%)  |
> >|     russellb **     |     465    6  83   0  376 158  80.9% |   23 (  4.9%)  |
> >|      alaski **      |     415    1  65  21  328 149  84.1% |   24 (  5.8%)  |
> >|     cerberus **     |     405    6  25  48  326 102  92.3% |   33 (  8.1%)  |
> >|   p-draigbrady **   |     376    2  40   9  325  64  88.8% |   49 ( 13.0%)  |
> >|      markmc **      |     243    2  54   3  184  69  77.0% |   14 (  5.8%)  |
> >|     belliott **     |     231    1  68   5  157  35  70.1% |   19 (  8.2%)  |
> >|    dan-prince **    |     178    2  48   9  119  29  71.9% |   11 (  6.2%)  |
> >|     cbehrens **     |     132    2  49   2   79  19  61.4% |    6 (  4.5%)  |
> >|    vishvananda **   |      54    0   5   3   46  15  90.7% |    5 (  9.3%)  |
> >
> >
> >Regards,
> >Daniel
> >
> 
> FWIW, markmc is already off the list [1].

Ah yes, must be that russell needs to update the config file for his script
to stop marking markmc as core.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-05 Thread Davanum Srinivas
Joe,

Related to this topic: at the summit there was a session on Cells v2,
and following up on that there have been BPs filed in Nova,
championed by Andrew -
https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z

thanks,
dims

On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
> Dear all & TC & PTL,
>
> In the 40-minute cross-project summit session “Approaches for scaling
> out”[1], almost 100 people attended, and the conclusion was that cells
> cannot cover the use cases and requirements which the OpenStack cascading
> solution[2] aims to address; the background, including use cases and
> requirements, is also described in this mail.
>
> After the summit, we ported the PoC[3] source code from an Icehouse base
> to a Juno base.
>
> Now, let's move forward:
>
> The major task is to introduce new drivers/agents to the existing core
> projects, for the core idea of cascading is to add Nova as the hypervisor
> backend of Nova, Cinder as the block storage backend of Cinder, Neutron as
> the backend of Neutron, Glance as one image location of Glance, and
> Ceilometer as the store of Ceilometer.
> a). We need a cross-program decision on whether to run cascading as an
> incubated project or to register blueprints separately in each involved
> project. CI for cascading is quite different from a traditional test
> environment; at least 3 OpenStack instances are required for cross-OpenStack
> networking test cases.
> b). We need a volunteer as the cross-project coordinator.
> c). We need volunteers for implementation and CI.
>
> Background of OpenStack cascading vs cells:
>
> 1. Use cases
> a). Vodafone use case[4] (OpenStack summit speech video from 9'02" to
> 12'30"): establishing globally addressable tenants, which results in
> efficient service deployment.
> b). Telefonica use case[5]: create a virtual DC (data center) across
> multiple physical DCs with a seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, and #8.
> An NFV cloud is by nature distributed across, yet inter-connected among,
> many data centers.
>
> 2.requirements
> a). The operator has a multi-site cloud; each site can use one or multiple
> vendors’ OpenStack distributions.
> b). Each site has its own requirements and upgrade schedule while
> maintaining a standard OpenStack API.
> c). The multi-site cloud must provide unified resource management with a
> global open API exposed, for example to create a virtual DC across multiple
> physical DCs with a seamless experience.
> Although a proprietary orchestration layer could be developed for the
> multi-site cloud, that would put a proprietary API on the north-bound
> interface. The cloud operators want an ecosystem-friendly global open API
> for the multi-site cloud for global access.
>
> 3. What problems does cascading solve that cells doesn't cover:
> OpenStack cascading solution is "OpenStack orchestrate OpenStacks". The core 
> architecture idea of OpenStack cascading is to add Nova as the hypervisor 
> backend of Nova, Cinder as the block storage backend of Cinder, Neutron as 
> the backend of Neutron, Glance as one image location of Glance, Ceilometer as 
> the store of Ceilometer. Thus OpenStack is able to orchestrate OpenStacks
> (from different vendors' distributions, or different versions) which may be
> located in different sites (or data centers) through the OpenStack API,
> while the cloud still exposes the OpenStack API as the north-bound API at
> the cloud level.
>
> 4. Why cells can’t do that:
> Cells provide scale-out capability to Nova, but from the point of view of
> OpenStack as a whole, it is still working like one OpenStack instance.
> a). If cells are deployed with shared Cinder, Neutron, Glance, and
> Ceilometer, this approach provides the multi-site cloud with one unified
> API endpoint and unified resource management, but consolidation of
> multi-vendor/multi-version OpenStack instances across one or more data
> centers cannot be fulfilled.
> b). If each site installs one child cell and accompanying standalone Cinder,
> Neutron (or Nova-network), Glance, and Ceilometer, this approach makes
> multi-vendor/multi-version OpenStack distribution co-existence across
> multiple sites seem feasible, but the requirement for a unified API endpoint
> and unified resource management cannot be fulfilled. Cross-Neutron
> networking automation is also missing; it would otherwise have to be done
> manually or via a proprietary orchestration layer.
>
> For more information about cascading and cells, please refer to the 
> discussion thread before Paris Summit [7].
>
> [1]Approaches for scaling out: 
> https://etherpad.openstack.org/p/kilo-crossproject-scale-out-openstack
> [2]OpenStack cascading solution: 
> https://wiki.openstack.org/wiki/OpenStack_cascading_solution
> [3]Cascading PoC: https://github.com/stackforge/tricircle
> [4]Vodafone use case (9'02" to 12'30"): 
> https://www.youtube.com/watch?v=-KOJYvhmxQI
> [5]Telefonica use case: 
> http://www.telefonica.com/en/descargas/mwc/prese

Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Brant Knudson
On Fri, Dec 5, 2014 at 7:56 AM, Daniel P. Berrange 
wrote:

>
> > FWIW, markmc is already off the list [1].
>
> Ah yes, must be that russell needs to update the config file for his script
> to stop marking markmc as core.
>
>
> Regards,
> Daniel
> --
>


Anyone can do it: https://review.openstack.org/#/c/139637/

 - Brant



> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/
> :|
> |: http://libvirt.org  -o- http://virt-manager.org
> :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/
> :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc
> :|
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session length on wiki.openstack.org

2014-12-05 Thread Jeremy Stanley
On 2014-12-04 18:37:48 -0700 (-0700), Carl Baldwin wrote:
> +1  I've been meaning to say something like this but never got
> around to it.  Thanks for speaking up.

https://storyboard.openstack.org/#!/story/1172753

I think Ryan said it might be a bug in the OpenID plug-in, but if so
he didn't put that comment in the bug.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Russell Bryant
On 12/05/2014 08:41 AM, Daniel P. Berrange wrote:
> On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
>> One of the things that happens over time is that some of our core
>> reviewers move on to other projects. This is a normal and healthy
>> thing, especially as nova continues to spin out projects into other
>> parts of OpenStack.
>>
>> However, it is important that our core reviewers be active, as it
>> keeps them up to date with the current ways we approach development in
>> Nova. I am therefore removing some no longer sufficiently active cores
>> from the nova-core group.
>>
>> I’d like to thank the following people for their contributions over the 
>> years:
>>
>> * cbehrens: Chris Behrens
>> * vishvananda: Vishvananda Ishaya
>> * dan-prince: Dan Prince
>> * belliott: Brian Elliott
>> * p-draigbrady: Padraig Brady
>>
>> I’d love to see any of these cores return if they find their available
>> time for code reviews increases.
> 
> What stats did you use to decide whether to cull these reviewers? Looking
> at the stats over a 6-month period, I think Padraig Brady is still having
> a significant positive impact on Nova - on a par with both cerberus and
> alaski, whom you're not proposing to cut. I think we should keep Padraig
> on the team, but would suggest cutting markmc instead.
> 
>   http://russellbryant.net/openstack-stats/nova-reviewers-180.txt
> 
> +---------------------+--------------------------------------+----------------+
> |       Reviewer      | Reviews   -2  -1  +1   +2  +A  +/- % | Disagreements* |
> +---------------------+--------------------------------------+----------------+
> |     berrange **     |    1766   26 435  12 1293 357  73.9% |  157 (  8.9%)  |
> |     jaypipes **     |    1359   11 378 436  534 133  71.4% |  109 (  8.0%)  |
> |       jogo **       |    1053  131 326   7  589 353  56.6% |   47 (  4.5%)  |
> |       danms **      |     921   67 381  23  450 167  51.4% |   32 (  3.5%)  |
> |      oomichi **     |     889    4 306  55  524 182  65.1% |   40 (  4.5%)  |
> |    johngarbutt **   |     808  319 227  10  252 145  32.4% |   37 (  4.6%)  |
> |      mriedem **     |     642   27 279  25  311 136  52.3% |   17 (  2.6%)  |
> |      klmitch **     |     606    1  90   2  513  70  85.0% |   67 ( 11.1%)  |
> |     ndipanov **     |     588   19 179  10  380 113  66.3% |   62 ( 10.5%)  |
> |    mikalstill **    |     564   31  34   3  496 207  88.5% |   20 (  3.5%)  |
> |      cyeoh-0 **     |     546   12 207  30  297 103  59.9% |   35 (  6.4%)  |
> |      sdague **      |     511   23  89   6  393 229  78.1% |   25 (  4.9%)  |
> |     russellb **     |     465    6  83   0  376 158  80.9% |   23 (  4.9%)  |
> |      alaski **      |     415    1  65  21  328 149  84.1% |   24 (  5.8%)  |
> |     cerberus **     |     405    6  25  48  326 102  92.3% |   33 (  8.1%)  |
> |   p-draigbrady **   |     376    2  40   9  325  64  88.8% |   49 ( 13.0%)  |
> |      markmc **      |     243    2  54   3  184  69  77.0% |   14 (  5.8%)  |
> |     belliott **     |     231    1  68   5  157  35  70.1% |   19 (  8.2%)  |
> |    dan-prince **    |     178    2  48   9  119  29  71.9% |   11 (  6.2%)  |
> |     cbehrens **     |     132    2  49   2   79  19  61.4% |    6 (  4.5%)  |
> |    vishvananda **   |      54    0   5   3   46  15  90.7% |    5 (  9.3%)  |
> 

Yeah, I was pretty surprised to see pbrady on this list, as well.  The
above was 6 months, but even if you drop it to the most recent 3 months,
he's still active ...


> Reviews for the last 90 days in nova
> ** -- nova-core team member
> +---------------------+------------------------------------+----------------+
> |       Reviewer      | Reviews   -2  -1  +1  +2  +A +/- % | Disagreements* |
> +---------------------+------------------------------------+----------------+
> |     berrange **     |     708   13 145   1 549 200 77.7% |   47 (  6.6%)  |
> |       jogo **       |     594   40 218   4 332 174 56.6% |   27 (  4.5%)  |
> |     jaypipes **     |     509   10 180  17 302  77 62.7% |   33 (  6.5%)  |
> |      oomichi **     |     392    1 136  10 245  74 65.1% |    6 (  1.5%)  |
> |       danms **      |     386   38 155  16 177  77 50.0% |   16 (  4.1%)  |
> |     ndipanov **     |     345   17 118   7 203  61 60.9% |   32 (  9.3%)  |
> |      mriedem **     |     304   12 136  12 144  56 51.3% |   12 (  3.9%)  |
> |      klmitch **     |     281    1  42   0 238  19 84.7% |   32 ( 11.4%)  |
> |      cyeoh-0 **     |     270   11 112  12 135  47

Re: [openstack-dev] [Fuel] Change diagnostic snapshot compression algoritm

2014-12-05 Thread Dmitry Pyzhov
I've moved the bug to 6.1. And I'm going to add it to our roadmap as a
separate item.
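
For reference when we run that comparison, here is a minimal sketch of the
level-4, multithreaded xz invocation discussed below (the snapshot path is
hypothetical, and -T requires the newer xz builds mentioned in the thread):

    # Compress a diagnostic snapshot with xz at level 4, using all
    # available cores (-T0). Assumes the xz binary is on PATH; the
    # snapshot path is made up.
    import subprocess

    snapshot = '/var/log/fuel-snapshot.tar'
    subprocess.check_call(['xz', '-4', '-T0', snapshot])
    # Produces /var/log/fuel-snapshot.tar.xz next to the original.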

On Wed, Nov 26, 2014 at 1:31 PM, Mike Scherbakov 
wrote:

> Can we put it as a work item for diagnostic snapshot improvements, so we
> won't forget about this in 6.1?
>
>
> On Tuesday, November 25, 2014, Dmitry Pyzhov  wrote:
>
>> Thank you all for your feedback. Request postponed to the next release.
>> We will compare available solutions.
>>
>> On Mon, Nov 24, 2014 at 2:36 PM, Vladimir Kuklin 
>> wrote:
>>
>>> guys, there is already pxz utility in ubuntu repos. let's test it
>>>
>>> On Mon, Nov 24, 2014 at 2:32 PM, Bartłomiej Piotrowski <
>>> bpiotrow...@mirantis.com> wrote:
>>>
 On 24 Nov 2014, at 12:25, Matthew Mosesohn 
 wrote:
 > I did this exercise over many iterations during Docker container
 > packing and found that as long as the data is under 1gb, it's going to
 > compress really well with xz. Over 1gb and lrzip looks more attractive
 > (but only on high memory systems). In reality, we're looking at log
 > footprints from OpenStack environments on the order of 500mb to 2gb.
 >
 > xz is very slow on single-core systems with 1.5gb of memory, but it's
 > quite a bit faster if you run it on a more powerful system. I've found
 > level 4 compression to be the best compromise that works well enough
 > that it's still far better than gzip. If increasing compression time
 > by 3-5x is too much for you guys, why not just go to bzip? You'll
 > still improve compression but be able to cut back on time.
 >
 > Best Regards,
 > Matthew Mosesohn

 Alpha release of xz supports multithreading via -T (or --threads)
 parameter.
 We could also use pbzip2 instead of regular bzip to cut some time on
 multi-core
 systems.

 Regards,
 Bartłomiej Piotrowski
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 45bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> --
> Mike Scherbakov
> #mihgen
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] bug triage and review backlog sprint retrospective

2014-12-05 Thread Doug Hellmann
The Oslo team spent yesterday working on our backlog. We triaged all of our 
“New” bugs and re-triaged many of the other existing open bugs, closing some as 
obsolete or fixed. We reviewed many patches in the backlog as well, and landed 
quite a few. We did not clear as much of the backlog for our “big” libraries as 
I had hoped, but we still made good progress on the other libraries.

The oslo.db, oslo.messaging, and taskflow libraries all have quite large 
backlogs of reviews to be done. These libraries require some specialized 
knowledge, and so it was a little more difficult to include them in the bulk 
review sprint. I would like to schedule separate review days for each of these 
libraries in turn. We will talk about this at the next team meeting and see if 
we can schedule the first for a few weeks from now.

We used https://etherpad.openstack.org/p/oslo-kilo-sprint for coordinating, and 
you’ll find more detailed notes there if you’re interested.

Thanks to everyone who participated!
Doug


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for Additional Meeting for third-party

2014-12-05 Thread Kurt Taylor
In my opinion, further discussion is needed. The proposal on the table is
to have 2 weekly meetings: one at the existing time of 1800 UTC on Monday
and, in the same week, another at 0800 UTC on Tuesday.

Here are some of the problems that I see with this approach:

1. Meeting content: Having 2 meetings per week is more than is needed at
this stage of the working group. There just isn't enough meeting content to
justify having two meetings every week.

2. Decisions: Any decision made at one meeting will potentially be undone
at the next, or at least not fully explained. It will be difficult to keep
consistent direction with the overall work group.

3. Meeting chair(s): Currently we do not have a commitment for a long-term
chair of this new second weekly meeting. I will not be able to attend this
new meeting at the proposed time.

4. Current meeting time: I am not aware of anyone that likes the current
time of 1800 UTC on Monday. The current time is the main reason it is hard
for EU and APAC CI Operators to attend.

My proposal was to have only 1 meeting per week at alternating times, just
as other work groups have done to solve this problem. (See examples at:
https://wiki.openstack.org/wiki/Meetings)  I volunteered to chair, then ask
other CI Operators to chair as the meetings evolved. The meeting times
could be anywhere between 1300 and 0300 UTC. That way, one week we are good for US
and Europe, the next week for APAC.

Kurt Taylor (krtaylor)


On Wed, Dec 3, 2014 at 11:10 PM, trinath.soman...@freescale.com <
trinath.soman...@freescale.com> wrote:

> +1.
>
> --
> Trinath Somanchi - B39208
> trinath.soman...@freescale.com | extn: 4048
>
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: Thursday, December 04, 2014 3:55 AM
> To: openstack-in...@lists.openstack.org
> Subject: Re: [OpenStack-Infra] [openstack-dev] [third-party]Time for
> Additional Meeting for third-party
>
> On 12/03/2014 03:15 AM, Omri Marcovitch wrote:
> > Hello Anteaya,
> >
> > A meeting between 8:00 - 16:00 UTC time will be great (Israel).
> >
> >
> > Thanks
> > Omri
> >
> > -Original Message-
> > From: Joshua Hesketh [mailto:joshua.hesk...@rackspace.com]
> > Sent: Wednesday, December 03, 2014 9:04 AM
> > To: He, Yongli; OpenStack Development Mailing List (not for usage
> > questions); openstack-in...@lists.openstack.org
> > Subject: Re: [openstack-dev] [OpenStack-Infra] [third-party]Time for
> > Additional Meeting for third-party
> >
> > Hey,
> >
> > 0700 -> 1000 UTC would work for me most weeks fwiw.
> >
> > Cheers,
> > Josh
> >
> > Rackspace Australia
> >
> > On 12/3/14 11:17 AM, He, Yongli wrote:
> >> anteaya,
> >>
> >> UTC 7:00 to UTC 9:00, or UTC 11:30 to UTC 13:00, is an ideal time for China.
> >>
> >> If there is no time slot there, just pick any time between UTC
> >> 7:00 and UTC 13:00. (UTC 9:00 to UTC 11:30 is on the road home and
> >> dinner.)
> >>
> >> Yongi He
> >> -Original Message-
> >> From: Anita Kuno [mailto:ante...@anteaya.info]
> >> Sent: Tuesday, December 02, 2014 4:07 AM
> >> To: openstack Development Mailing List;
> >> openstack-in...@lists.openstack.org
> >> Subject: [openstack-dev] [third-party]Time for Additional Meeting for
> >> third-party
> >>
> >> One of the actions from the Kilo Third-Party CI summit session was to
> start up an additional meeting for CI operators to participate from
> non-North American time zones.
> >>
> >> Please reply to this email with times/days that would work for you. The
> current third party meeting is on Mondays at 1800 utc which works well
> since Infra meetings are on Tuesdays. If we could find a time that works
> for Europe and APAC that is also on Monday that would be ideal.
> >>
> >> Josh Hesketh has said he will try to be available for these meetings,
> he is in Australia.
> >>
> >> Let's get a sense of what days and timeframes work for those interested
> and then we can narrow it down and pick a channel.
> >>
> >> Thanks everyone,
> >> Anita.
> >>
>
> Okay first of all thanks to everyone who replied.
>
> Again, to clarify, the purpose of this thread has been to find a suitable
> additional third-party meeting time geared towards folks in EU and APAC. We
> live on a sphere, there is no time that will suit everyone.
>
> It looks like we are converging on 0800 UTC as a time and I am going to
> suggest Tuesdays. We have very little competition for space at that date
> + time combination so we can use #openstack-meeting (I have already
> booked the space on the wikipage).
>
> So barring further discussion, see you then!
>
> Thanks everyone,
> Anita.
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac

[openstack-dev] [poppy] Kilo-2 Priorities

2014-12-05 Thread Amit Gandhi
Thanks everyone who attended the weekly meeting yesterday.

Great job in delivering on the kilo-1 deliverables for Poppy CDN.  
https://launchpad.net/poppy/+milestone/kilo-1

We were able to reach dev-complete on the following items:
- The ability to configure a service containing domains, origins, caching 
rules, and restrictions with a CDN provider
- The ability to purge content from a CDN provider
- The ability to define flavors
- The ability to check the health of the system
- The following Transport drivers - Pecan
- The following Storage drivers - Cassandra
- The following DNS drivers - Rackspace Cloud DNS
- The following CDN providers - Akamai, Fastly, MaxCDN, Amazon CloudFront


As we move our focus to the Kilo-2 milestone, let's put emphasis on testing,
bug fixing, refactoring, and making the work done up to now reliable and well
tested, before we move on to the next set of major features.

I have updated the kilo-2 milestone deliverables to reflect these goals. 
https://launchpad.net/poppy/+milestone/kilo-2

Thanks

Amit Gandhi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Managing no-mergepy template duplication

2014-12-05 Thread Dan Prince
On Wed, 2014-12-03 at 14:42 +0100, Tomas Sedovic wrote:
> On 12/03/2014 11:11 AM, Steven Hardy wrote:
> > Hi all,
> >
> > Lately I've been spending more time looking at tripleo and doing some
> > reviews. I'm particularly interested in helping the no-mergepy and
> > subsequent puppet-software-config implementations mature (as well as
> > improving overcloud updates via heat).
> >
> > Since Tomas's patch landed[1] to enable --no-mergepy in
> > tripleo-heat-templates, it's become apparent that frequently patches are
> > submitted which only update overcloud-source.yaml, so I've been trying to
> > catch these and ask for a corresponding change to e.g controller.yaml.
> >
> 
> You beat me to this. Thanks for writing it up!
> 
> > This raises the following questions:
> >
> > 1. Is it reasonable to -1 a patch and ask folks to update in both places?
> 
> I'm in favour.
> 
> > 2. How are we going to handle this duplication and divergence?


To follow this up: we are in really bad shape with divergence already. I
found 3 sets of Rabbit, Keystone, and Neutron DVR parameters which, due to
the merge window, have not yet been properly ported into
overcloud-without-mergepy.yaml.

https://review.openstack.org/#/c/139649/ (missing Rabbit parameters)

https://review.openstack.org/#/c/139656/ (missing Keystone parameters)

https://review.openstack.org/#/c/139671/ (missing Neutron DVR
parameters)

We need to be very careful at this point not to continue merging things
into overcloud-source.yaml which don't have the equivalent bits for
overcloud-without-mergepy.yaml.
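
As an illustration only, a trivial pre-review check could flag such patches
early; this is a hypothetical sketch, not something that exists in tripleo-ci:

    # Warn when a change touches overcloud-source.yaml without touching
    # the --no-mergepy templates. File names are from this thread; the
    # check itself is made up.
    import subprocess

    changed = subprocess.check_output(
        ['git', 'diff', '--name-only', 'HEAD~1..HEAD']).decode().splitlines()

    touched_mergepy = any(f.endswith('overcloud-source.yaml') for f in changed)
    touched_no_mergepy = any(
        f.endswith(name) for f in changed
        for name in ('controller.yaml', 'overcloud-without-mergepy.yaml'))

    if touched_mergepy and not touched_no_mergepy:
        print('Warning: overcloud-source.yaml changed without a matching '
              'update to the --no-mergepy templates.')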

Dan

> 
> I'm not sure we can. get_file doesn't handle structured data and I don't 
> know what else we can do. Maybe we could split out all SoftwareConfig 
> resources to separate files (like Dan did in [nova config])? But the 
> SoftwareDeployments, nova servers, etc. have a different structure.
> 
> [nova config] https://review.openstack.org/#/c/130303/
> 
> > 3. What's the status of getting gating CI on the --no-mergepy templates?
> 
> Derek, can we add a job that's identical to 
> "check-tripleo-ironic-overcloud-{f20,precise}-nonha" except it passes 
> "--no-mergepy" to devtest.sh?
> 
> > 4. What barriers exist (now that I've implemented[2] the eliding 
> > functionality
> > requested[3] for ResourceGroup) to moving to the --no-mergepy
> > implementation by default?
> 
> I'm about to post a patch that moves us from ResourceGroup to 
> AutoScalingGroup (for rolling updates), which is going to complicate 
> this a bit.
> 
> Barring that, I think you've identified all the requirements: CI job, 
> parity between the merge/non-merge templates and a process that 
> maintains it going forward (or puts the old ones in a maintanence-only 
> mode).
> 
> Anyone have anything else that's missing?
> 
> >
> > Thanks for any clarification you can provide! :)
> >
> > Steve
> >
> > [1] https://review.openstack.org/#/c/123100/
> > [2] https://review.openstack.org/#/c/128365/
> > [3] https://review.openstack.org/#/c/123713/
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Joe Gordon
On Fri, Dec 5, 2014 at 4:39 PM, Russell Bryant  wrote:

> On 12/05/2014 08:41 AM, Daniel P. Berrange wrote:
> > On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
> >> One of the things that happens over time is that some of our core
> >> reviewers move on to other projects. This is a normal and healthy
> >> thing, especially as nova continues to spin out projects into other
> >> parts of OpenStack.
> >>
> >> However, it is important that our core reviewers be active, as it
> >> keeps them up to date with the current ways we approach development in
> >> Nova. I am therefore removing some no longer sufficiently active cores
> >> from the nova-core group.
> >>
> >> I’d like to thank the following people for their contributions over the
> years:
> >>
> >> * cbehrens: Chris Behrens
> >> * vishvananda: Vishvananda Ishaya
> >> * dan-prince: Dan Prince
> >> * belliott: Brian Elliott
> >> * p-draigbrady: Padraig Brady
> >>
> >> I’d love to see any of these cores return if they find their available
> >> time for code reviews increases.
> >
> > What stats did you use to decide whether to cull these reviewers? Looking
> > at the stats over a 6-month period, I think Padraig Brady is still having
> > a significant positive impact on Nova - on a par with both cerberus and
> > alaski, whom you're not proposing to cut. I think we should keep Padraig
> > on the team, but would suggest cutting markmc instead.
> >
> >   http://russellbryant.net/openstack-stats/nova-reviewers-180.txt
> >
> >
> > +---------------------+--------------------------------------+----------------+
> > |       Reviewer      | Reviews   -2  -1  +1   +2  +A  +/- % | Disagreements* |
> > +---------------------+--------------------------------------+----------------+
> > |     berrange **     |    1766   26 435  12 1293 357  73.9% |  157 (  8.9%)  |
> > |     jaypipes **     |    1359   11 378 436  534 133  71.4% |  109 (  8.0%)  |
> > |       jogo **       |    1053  131 326   7  589 353  56.6% |   47 (  4.5%)  |
> > |       danms **      |     921   67 381  23  450 167  51.4% |   32 (  3.5%)  |
> > |      oomichi **     |     889    4 306  55  524 182  65.1% |   40 (  4.5%)  |
> > |    johngarbutt **   |     808  319 227  10  252 145  32.4% |   37 (  4.6%)  |
> > |      mriedem **     |     642   27 279  25  311 136  52.3% |   17 (  2.6%)  |
> > |      klmitch **     |     606    1  90   2  513  70  85.0% |   67 ( 11.1%)  |
> > |     ndipanov **     |     588   19 179  10  380 113  66.3% |   62 ( 10.5%)  |
> > |    mikalstill **    |     564   31  34   3  496 207  88.5% |   20 (  3.5%)  |
> > |      cyeoh-0 **     |     546   12 207  30  297 103  59.9% |   35 (  6.4%)  |
> > |      sdague **      |     511   23  89   6  393 229  78.1% |   25 (  4.9%)  |
> > |     russellb **     |     465    6  83   0  376 158  80.9% |   23 (  4.9%)  |
> > |      alaski **      |     415    1  65  21  328 149  84.1% |   24 (  5.8%)  |
> > |     cerberus **     |     405    6  25  48  326 102  92.3% |   33 (  8.1%)  |
> > |   p-draigbrady **   |     376    2  40   9  325  64  88.8% |   49 ( 13.0%)  |
> > |      markmc **      |     243    2  54   3  184  69  77.0% |   14 (  5.8%)  |
> > |     belliott **     |     231    1  68   5  157  35  70.1% |   19 (  8.2%)  |
> > |    dan-prince **    |     178    2  48   9  119  29  71.9% |   11 (  6.2%)  |
> > |     cbehrens **     |     132    2  49   2   79  19  61.4% |    6 (  4.5%)  |
> > |    vishvananda **   |      54    0   5   3   46  15  90.7% |    5 (  9.3%)  |
> >
>
> Yeah, I was pretty surprised to see pbrady on this list, as well.  The
> above was 6 months, but even if you drop it to the most recent 3 months,
> he's still active ...
>

As you are more than aware, our policy for removing people from core is
to leave that up to the PTL (I believe you wrote that) [0]. And I don't
think numbers alone are a good metric for sorting out who to remove. That
being said, no matter what happens, with our fast-track policy, if pbrady is
dropped it shouldn't be hard to re-add him.


[0]
https://wiki.openstack.org/wiki/Nova/CoreTeam#Adding_or_Removing_Members



>
>
> > Reviews for the last 90 days in nova
> > ** -- nova-core team member
> > +---------------------+------------------------------------+----------------+
> > |       Reviewer      | Reviews   -2  -1  +1  +2  +A +/- % | Disagreements* |
> > +---------------------+------------------------------------+----------------+
> > |     berrange **     |     708   13 145   1 549 200 77.7% |   47 (  6.6%)  |
> > |       jogo **       |     594   40 218   4 332 174 56

Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Russell Bryant
On 12/05/2014 11:23 AM, Joe Gordon wrote:
> As you are more than aware, our policy for removing people from core
> is to leave that up to the PTL (I believe you wrote that) [0]. And I
> don't think numbers alone are a good metric for sorting out who to
> remove. That being said, no matter what happens, with our fast-track
> policy, if pbrady is dropped it shouldn't be hard to re-add him.

Yes, I'm aware of and not questioning the policy.  Usually drops are
pretty obvious.  This one wasn't.  It seems reasonable to discuss.
Maybe we don't have a common set of expectations.  Anyway, I'll follow
up in private.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] global or per-project specific ssl config options, or both?

2014-12-05 Thread Matthew Gilliard
I just put up a quick pre-weekend POC at
https://review.openstack.org/#/c/139672/ - comments welcome on that
patch.

Thanks :)
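
For those skimming the thread, the shape of the DictOpt approach discussed
below is roughly the following; the option names here are illustrative only
and may not match the POC patch:

    # Sketch of markmc's suggestion: a global [ssl] value plus optional
    # per-service overrides via a DictOpt. Option names are made up.
    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('ca_file', help='Global CA certificate file path'),
        cfg.DictOpt('ca_file_per_service', default={},
                    help='Per-service overrides, e.g. '
                         'glance:/etc/ssl/glance-ca.pem'),
    ], group='ssl')

    def ca_file_for(service):
        # Fall back to the global [ssl] value when no per-service key is set.
        return CONF.ssl.ca_file_per_service.get(service, CONF.ssl.ca_file)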

On Fri, Dec 5, 2014 at 10:07 AM, Matthew Gilliard
 wrote:
> Hi Matt, Nova,
>
>   I'll look into this.
>
> Gilliard
>
> On Thu, Dec 4, 2014 at 9:51 PM, Matt Riedemann
>  wrote:
>>
>>
>> On 12/4/2014 6:02 AM, Davanum Srinivas wrote:
>>>
>>> +1 to @markmc's "default is global value and override for project
>>> specific key" suggestion.
>>>
>>> -- dims
>>>
>>>
>>>
>>> On Wed, Dec 3, 2014 at 11:57 PM, Matt Riedemann
>>>  wrote:

 I've posted this to the 12/4 nova meeting agenda but figured I'd
 socialize
 it here also.

 SSL options - do we make them per-project or global, or both? Neutron and
 Cinder have config-group-specific SSL options in nova; Glance has been
 using the oslo sslutils global options since Juno, which was contentious
 for a time in a separate review in Icehouse [1].

 Now [2] wants to break that out for Glance, but we also have a patch [3]
 for Keystone to use the global oslo SSL options. We should be consistent,
 but does that require a blueprint now?

 In the Icehouse patch, markmc suggested using a DictOpt where the default
 value is the global value, which could be coming from the oslo [ssl] group,
 and then you could override that with a project-specific key, e.g. cinder,
 neutron, glance, keystone.

 [1] https://review.openstack.org/#/c/84522/
 [2] https://review.openstack.org/#/c/131066/
 [3] https://review.openstack.org/#/c/124296/

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>
>> The consensus in the nova meeting today, I think, was that we generally like
>> the idea of the DictOpt with global oslo ssl as the default and then be able
>> to configure that per-service if needed.
>>
>> Does anyone want to put up a POC on how that would work to see how ugly
>> and/or usable that would be?  I haven't dug into the DictOpt stuff yet and
>> am kind of time-constrained at the moment.
>>
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Joe Gordon
On Dec 5, 2014 11:39 AM, "Russell Bryant"  wrote:
>
> On 12/05/2014 11:23 AM, Joe Gordon wrote:
> > As you are more than aware, our policy for removing people from core
> > is to leave that up to the PTL (I believe you wrote that) [0]. And I
> > don't think numbers alone are a good metric for sorting out who to
> > remove. That being said, no matter what happens, with our fast-track
> > policy, if pbrady is dropped it shouldn't be hard to re-add him.
>
> Yes, I'm aware of and not questioning the policy.  Usually drops are
> pretty obvious.  This one wasn't.  It seems reasonable to discuss.
> Maybe we don't have a common set of expectations.  Anyway, I'll follow
> up in private.
>

Agreed

> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] - the setup of a DHCP sub-group

2014-12-05 Thread Chuck Carlino

On 11/26/2014 08:55 PM, Don Kehn wrote:
Sure, will try and get to it over the holiday. Do you have a link to
the spec repo?



Hi Don,

Has there been any progress on a DHCP sub-group?

Regards,
Chuck


On Mon, Nov 24, 2014 at 3:27 PM, Carl Baldwin wrote:


Don,

Could the spec linked to your BP be moved to the specs repository?
I'm hesitant to start reading it as a google doc when I know I'm going
to want to make comments and ask questions.

Carl

On Thu, Nov 13, 2014 at 9:19 AM, Don Kehn <dek...@gmail.com> wrote:
> If this shows up twice sorry for the repeat:
>
> Armando, Carl:
> During the Summit, Armando and I had a very quick conversation
concern a
> blue print that I submitted,
>
https://blueprints.launchpad.net/neutron/+spec/dhcp-cpnr-integration
and
> Armando had mention the possibility of getting together a
sub-group tasked
> with DHCP Neutron concerns. I have talk with Infoblox folks (see
> https://blueprints.launchpad.net/neutron/+spec/neutron-ipam),
and everyone
> seems to be in agreement that there is synergy especially
concerning the
> development of a relay and potentially looking into how DHCP is
handled. In
> addition during the Fridays meetup session on DHCP that I gave
there seems
> to be some general interest by some of the operators as well.
>
> So what would be the formality in going forth to start a
sub-group and
> getting this underway?
>
> DeKehn
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org

> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Don Kehn
303-442-0060


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-12-05 Thread Eichberger, German
Hi Brandon + Stephen,

Having all those permutations (and potentially testing them) made us lean 
against the sharing case in the first place. It’s just a lot of extra work for 
only a small number of our customers.

German

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Thursday, December 04, 2014 9:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
Cases that led us to adopt this.

Hi Brandon,

Yeah, in your example, member1 could potentially have 8 different statuses (and 
this is a small example!)...  If that member starts flapping, it means that 
every time it flaps there are 8 notifications being passed upstream.

Note that this problem actually doesn't get any better if we're not sharing 
objects but are just duplicating them (ie. not sharing objects but the user 
makes references to the same back-end machine as 8 different members.)

To be honest, I don't see sharing entities at many levels like this being the 
rule for most of our installations-- maybe a few percentage points of 
installations will do an excessive sharing of members, but I doubt it. So 
really, even though reporting status like this is likely to generate a pretty 
big tree of data, I don't think this is actually a problem, eh. And I don't see 
sharing entities actually reducing the workload of what needs to happen behind 
the scenes. (It just allows us to conceal more of this work from the user.)

Stephen



On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan
<brandon.lo...@rackspace.com> wrote:
Sorry it's taken me a while to respond to this.

So I wasn't thinking about this correctly.  I was afraid you would have
to pass in a full tree of parent-child representations to /loadbalancers
to update anything a load balancer is associated with (including down
to members).  However, after thinking about it, a user would just make
an association call on each object.  For example: associate member1 with
pool1, associate pool1 with listener1, then associate loadbalancer1 with
listener1.  Updating is just as simple as updating each entity.

This does bring up another problem though.  If a listener can live on
many load balancers, and a pool can live on many listeners, and a member
can live on many pools, there's lot of permutations to keep track of for
status.  you can't just link a member's status to a load balancer bc a
member can exist on many pools under that load balancer, and each pool
can exist under many listeners under that load balancer.  For example,
say I have these:

lb1
lb2
listener1
listener2
pool1
pool2
member1
member2

lb1 -> [listener1, listener2]
lb2 -> [listener1]
listener1 -> [pool1, pool2]
listener2 -> [pool1]
pool1 -> [member1, member2]
pool2 -> [member1]

member1 can now have different statuses under pool1 and pool2.  Since
listener1 and listener2 both have pool1, this means member1 will now
have a different status for the listener1 -> pool1 and listener2 -> pool1
combinations.  And so forth for load balancers.
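
(To make the combinatorics concrete, a quick illustration in plain Python
using the hypothetical names above; each printed path would need its own
status entry:)

    lbs = {'lb1': ['listener1', 'listener2'], 'lb2': ['listener1']}
    listeners = {'listener1': ['pool1', 'pool2'], 'listener2': ['pool1']}
    pools = {'pool1': ['member1', 'member2'], 'pool2': ['member1']}

    # Enumerate every (lb, listener, pool, member) path.
    paths = [(lb, lsn, pool, member)
             for lb in sorted(lbs)
             for lsn in lbs[lb]
             for pool in listeners[lsn]
             for member in pools[pool]]

    for path in paths:
        print(' -> '.join(path))  # 8 paths even for this small example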

Basically there's a lot of permutations and combinations to keep track
of with this model for statuses.  Showing these in the body of load
balancer details can get quite large.

I hope this makes sense because my brain is ready to explode.

Thanks,
Brandon

On Thu, 2014-11-27 at 08:52 +, Samuel Bercovici wrote:
> Brandon, can you please explain further (1) below?
>
> -Original Message-
> From: Brandon Logan 
> [mailto:brandon.lo...@rackspace.com]
> Sent: Tuesday, November 25, 2014 12:23 AM
> To: 
> openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use 
> Cases that led us to adopt this.
>
> My impression is that the statuses of each entity will be shown on a detailed 
> info request of a loadbalancer.  The root level objects would not have any 
> statuses.  For example a user makes a GET request to /loadbalancers/{lb_id} 
> and the status of every child of that load balancer is show in a 
> "status_tree" json object.  For example:
>
> {"name": "loadbalancer1",
>  "status_tree":
>   {"listeners":
> [{"name": "listener1", "operating_status": "ACTIVE",
>   "default_pool":
> {"name": "pool1", "status": "ACTIVE",
>  "members":
>[{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}
>
> Sam, correct me if I am wrong.
>
> I generally like this idea.  I do have a few reservations with this:
>
> 1) Creating and updating a load balancer requires a full tree configuration 
> with the current extension/plugin logic in neutron.  Since updates will 
> require a full tree, it means the user would have to know the full tree 
> configuration just to simply update a name.  Solving this would require 
> nested child resources in the URL, which the current neutron extension/plugin 
> does not allow.  Maybe the new one will.
>
> 2) The status_

Re: [openstack-dev] [all] bugs with paste pipelines and multiple projects and upgrading

2014-12-05 Thread Lance Bragstad
A review has been posted allowing proper upgrades to the Keystone paste
file in grenade, and the XML references have been removed for the upgrade
case [1]. There is also documentation in the Kilo Release Notes detailing
the upgrade process for XML removal from Juno to Kilo [2].

[1] https://review.openstack.org/#/c/139051/
[2] https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#Upgrade_Notes
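
For illustration, the kind of deprecation stub discussed below (keeping the
classes in keystone/middleware/core.py) could look something like this
sketch; it is a mock-up of the idea, not the actual keystone code:

    # No-op stand-in for the removed XML body middleware, so old
    # keystone-paste.ini files keep working while emitting a warning.
    import logging

    LOG = logging.getLogger(__name__)

    class XmlBodyMiddleware(object):
        """Deprecation stub: XML support has been removed."""

        def __init__(self, app, conf=None):
            self.app = app
            LOG.warning('XML support has been removed; this paste filter '
                        'does nothing and should be dropped from '
                        'keystone-paste.ini.')

        def __call__(self, environ, start_response):
            # Pass every request straight through to the wrapped app.
            return self.app(environ, start_response)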

On Wed, Dec 3, 2014 at 10:26 AM, Sean Dague  wrote:

> On 12/03/2014 10:57 AM, Lance Bragstad wrote:
> >
> >
> > On Wed, Dec 3, 2014 at 9:18 AM, Sean Dague wrote:
> >
> > We've hit two interesting issues this week around multiple projects
> > installing into the paste pipeline of a server.
> >
> > 1) the pkg_resources explosion in grenade. Basically ceilometer modified
> > swift paste.ini to add its own code into swift (that's part of the normal
> > ceilometer install in devstack -
> >
> https://github.com/openstack-dev/devstack/blob/master/lib/swift#L376-L381
> >
> > This meant when we upgraded and started swift, it turns out that we're
> > actually running old ceilometer code. A requirements mismatch caused an
> > explosion (which we've since worked around); however, this demonstrates a
> > clear problem with installing code in another project's pipeline.
> >
> > 2) keystone is having issues dropping XML API support. It turns out that
> > parts of its paste pipeline are actually provided by keystone
> > middleware, which means that keystone can't provide a sane "this is not
> > supported" message in a proxy class for older paste config files.
> >
> >
> > I made an attempt to capture some of the information on the specific
> > grenade case we were hitting for XML removal in a bug report [1]. We can
> > keep the classes in keystone/middleware/core.py as a workaround for now
> > with essentially another deprecation message, but at some point we
> > should pull the plug on defining XmlBodyMiddleware in our
> > keystone-paste.ini [2] as it won't do anything anyway and shouldn't be
> > in the configuration. Since this deals with a configuration change, this
> > could "always" break a customer. What criteria should we follow for
> > cases like this?
> >
> > From visiting with Sean in -qa, typically service configurations don't
> > change for the grenade target on upgrade, but if we have to make a
> > change on upgrade (to clean out old cruft for example), how do we go
> > about that?
>
> Add content here -
> https://github.com/openstack-dev/grenade/tree/master/from-juno
>
> Note: you'll get a -2 unless you provide a link to Release Notes
> somewhere that highlights this as an Upgrade Impact for users for the
> next release.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session length on wiki.openstack.org

2014-12-05 Thread Clark Boylan
On Fri, Dec 5, 2014, at 06:26 AM, Jeremy Stanley wrote:
> On 2014-12-04 18:37:48 -0700 (-0700), Carl Baldwin wrote:
> > +1  I've been meaning to say something like this but never got
> > around to it.  Thanks for speaking up.
> 
> https://storyboard.openstack.org/#!/story/1172753
> 
> I think Ryan said it might be a bug in the OpenID plug-in, but if so
> he didn't put that comment in the bug.
> -- 
> Jeremy Stanley
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I dug up an upstream bug for this [0]; unfortunately it is quite terse
and not very helpful. Does anyone with existing MediaWiki account credentials
want to ping that thread and see if we can get more info?

[0]
http://www.mediawiki.org/w/index.php?title=Thread:Extension_talk:OpenID/Is_it_possible_to_change_the_expiration_of_the_session/cookie_for_logged-in_users%3F&lqt_method=thread_history

Clark

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] set -o nounset in devstack?

2014-12-05 Thread Dean Troyer
On Fri, Dec 5, 2014 at 6:45 AM, Sean Dague  wrote:

> I got bit by another bug yesterday where there was a typo between
> variables in the source tree. So I started down the process of set -o
> nounset to see how bad it would be to prevent that in the future.
>

[...]


> The trueorfalse issue can be fixed if we change the function to be:
>
> function trueorfalse {
>     local xtrace=$(set +o | grep xtrace)
>     set +o xtrace
>     local default=$1
>     local testval="${!2+x}"
>
>     [[ -z "$testval" ]] && { echo "$default"; return; }
>

There should be an $xtrace in that return path


> [[ "0 no No NO false False FALSE" =~ "$testval" ]] && { echo
> "False"; return; }
> [[ "1 yes Yes YES true True TRUE" =~ "$testval" ]] && { echo "True";
> return; }
> echo "$default"
> $xtrace
> }
>
>
> FOO=$(trueorfalse True FOO)
>
> ... then works.
>

I'm good with this.


> the -z and -n bits can be addressed with either FOO=${FOO:-} or an isset
> function that interpolates. FOO=${FOO:-} actually feels better to me
> because it's part of the spirit of things.
>

I think I agree, but we have a lot of is_*() functions, so that wouldn't be
too far of a departure; I could be convinced either way, I suppose.  This is
going to be the hard part of the cleanup and ongoing enforcement.


> So... the question is, is this worth it? It's going to have fallout in
> lesser-used parts of the code where we don't catch things (like -o
> errexit did). However, it should help flush out a class of bugs in the
> process.
>

This is going to be a long process to do the change; I think we will need
to bracket parts of the code as they get cleaned up to avoid regressions
slipping in.

dt

-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.

2014-12-05 Thread Stephen Balukoff
German-- but the point is that sharing apparently has no effect on the
number of permutations for status information. The only difference here is
that without sharing it's more work for the user to maintain and modify
trees of objects.

On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German  wrote:

>  Hi Brandon + Stephen,
>
>
>
> Having all those permutations (and potentially testing them) made us lean
> against the sharing case in the first place. It’s just a lot of extra work
> for only a small number of our customers.
>
>
>
> German
>
>
>
> *From:* Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> *Sent:* Thursday, December 04, 2014 9:17 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
>
>
> Hi Brandon,
>
>
>
> Yeah, in your example, member1 could potentially have 8 different statuses
> (and this is a small example!)...  If that member starts flapping, it means
> that every time it flaps there are 8 notifications being passed upstream.
>
>
>
> Note that this problem actually doesn't get any better if we're not
> sharing objects but are just duplicating them (ie. not sharing objects but
> the user makes references to the same back-end machine as 8 different
> members.)
>
>
>
> To be honest, I don't see sharing entities at many levels like this being
> the rule for most of our installations-- maybe a few percentage points of
> installations will do an excessive sharing of members, but I doubt it. So
> really, even though reporting status like this is likely to generate a
> pretty big tree of data, I don't think this is actually a problem, eh. And
> I don't see sharing entities actually reducing the workload of what needs
> to happen behind the scenes. (It just allows us to conceal more of this
> work from the user.)
>
>
>
> Stephen
>
>
>
>
>
>
>
> On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan 
> wrote:
>
> Sorry it's taken me a while to respond to this.
>
> So I wasn't thinking about this correctly.  I was afraid you would have
> to pass in a full tree of parent child representations to /loadbalancers
> to update anything a load balancer it is associated to (including down
> to members).  However, after thinking about it, a user would just make
> an association call on each object.  For Example, associate member1 with
> pool1, associate pool1 with listener1, then associate loadbalancer1 with
> listener1.  Updating is just as simple as updating each entity.
>
> This does bring up another problem though.  If a listener can live on
> many load balancers, and a pool can live on many listeners, and a member
> can live on many pools, there's lot of permutations to keep track of for
> status.  you can't just link a member's status to a load balancer bc a
> member can exist on many pools under that load balancer, and each pool
> can exist under many listeners under that load balancer.  For example,
> say I have these:
>
> lb1
> lb2
> listener1
> listener2
> pool1
> pool2
> member1
> member2
>
> lb1 -> [listener1, listener2]
> lb2 -> [listener1]
> listener1 -> [pool1, pool2]
> listener2 -> [pool1]
> pool1 -> [member1, member2]
> pool2 -> [member1]
>
> member1 can now have different statuses under pool1 and pool2.  Since
> listener1 and listener2 both have pool1, this means member1 will now
> have a different status for the listener1 -> pool1 and listener2 -> pool1
> combinations.  And so forth for load balancers.
>
> Basically there's a lot of permutations and combinations to keep track
> of with this model for statuses.  Showing these in the body of load
> balancer details can get quite large.
>
> I hope this makes sense because my brain is ready to explode.
>
> Thanks,
> Brandon
>
>
> On Thu, 2014-11-27 at 08:52 +, Samuel Bercovici wrote:
> > Brandon, can you please explain further (1) below?
> >
> > -Original Message-
> > From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
> > Sent: Tuesday, November 25, 2014 12:23 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
> >
> > My impression is that the statuses of each entity will be shown on a
> detailed info request of a loadbalancer.  The root level objects would not
> have any statuses.  For example a user makes a GET request to
> /loadbalancers/{lb_id} and the status of every child of that load balancer
> is show in a "status_tree" json object.  For example:
> >
> > {"name": "loadbalancer1",
> >  "status_tree":
> >   {"listeners":
> > [{"name": "listener1", "operating_status": "ACTIVE",
> >   "default_pool":
> > {"name": "pool1", "status": "ACTIVE",
> >  "members":
> >[{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}
> >
> > Sam, correct me if I am wrong.
> >
> > I generally like this idea.  I do have a few reservations with this:
> >
> > 1) Creating and updating 

Re: [openstack-dev] [TripleO] Alternate meeting time

2014-12-05 Thread Clint Byrum
Excerpts from marios's message of 2014-12-04 02:40:23 -0800:
> On 04/12/14 11:40, James Polley wrote:
> > Just taking a look at http://doodle.com/27ffgkdm5gxzr654 again - we've
> > had 10 people respond so far. The winning time so far is Monday 2100UTC
> > - 7 "yes" and one "If I have to".
> 
> for me it currently shows 1200 UTC as the preferred time.
> 
> So to be clear, we are voting here for the alternate meeting. The
> 'original' meeting is at 1900UTC. If in fact 2100UTC ends up being the
> most popular, what would be the point of an alternating meeting that is
> only 2 hours apart in time?
> 

Actually that's a good point. I didn't really think about it before I
voted, but the regular time is perfect for me, so perhaps I should
remove my vote, and anyone else who does not need the alternate time
should consider doing so as well.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Managing no-mergepy template duplication

2014-12-05 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2014-12-04 01:09:18 -0800:
> On Wed, Dec 03, 2014 at 06:54:48PM -0800, Clint Byrum wrote:
> > Excerpts from Dan Prince's message of 2014-12-03 18:35:15 -0800:
> > > On Wed, 2014-12-03 at 10:11 +, Steven Hardy wrote:
> > > > Hi all,
> > > > 
> > > > Lately I've been spending more time looking at tripleo and doing some
> > > > reviews. I'm particularly interested in helping the no-mergepy and
> > > > subsequent puppet-software-config implementations mature (as well as
> > > > improving overcloud updates via heat).
> > > > 
> > > > Since Tomas's patch landed[1] to enable --no-mergepy in
> > > > tripleo-heat-templates, it's become apparent that frequently patches are
> > > > submitted which only update overcloud-source.yaml, so I've been trying 
> > > > to
> > > > catch these and ask for a corresponding change to e.g controller.yaml.
> > > > 
> > > > This raises the following questions:
> > > > 
> > > > 1. Is it reasonable to -1 a patch and ask folks to update in both 
> > > > places?
> > > 
> > > Yes! In fact until we abandon merge.py we shouldn't land anything that
> > > doesn't make the change in both places. Probably more important to make
> > > sure things go into the new (no-mergepy) templates though.
> > > 
> > > > 2. How are we going to handle this duplication and divergence?
> > > 
> > > Move as quickly as possible to the new without-mergepy varients? That is
> > > my vote anyways.
> > > 
> > > > 3. What's the status of getting gating CI on the --no-mergepy templates?
> > > 
> > > Devtest already supports it by simply setting an option (which sets an
> > > ENV variable). Just need to update tripleo-ci to do that and then make
> > > the switch.
> > > 
> > > > 4. What barriers exist (now that I've implemented[2] the eliding 
> > > > functionality
> > > > requested[3] for ResourceGroup) to moving to the --no-mergepy
> > > > implementation by default?
> > > 
> > > None that I know of.
> > > 
> > 
> > I concur with Dan. Elide was the last reason not to use this.
> 
> That's great news! :)
> 
> > One thing to consider is that there is no actual upgrade path from
> > non-autoscaling-group based clouds, to auto-scaling-group based
> > templates. We should consider how we'll do that before making it the
> > default. So, I suggest we discuss possible upgrade paths and then move
> > forward with switching one of the CI jobs to using the new templates.
> 
> This is probably going to be really hard :(
> 
> The sort of pattern which might work is:
> 
> 1. Abandon mergepy based stack
> 2. Have helper script to reformat abandon data into nomergepy based adopt
> data
> 3. Adopt stack
> 
> Unfortunately there are several abandon/adopt bugs we'll have to fix if we
> decide this is the way to go (original author hasn't maintained it, but we
> can pick up the slack if it's on the critical path for TripleO).
> 
> An alternative could be the external resource feature Angus is looking at:
> 
> https://review.openstack.org/#/c/134848/
> 
> This would be more limited (we just reference rather than manage the
> existing resources), but potentially safer.
> 
> The main risk here is import (or subsequent update) operations becoming
> destructive and replacing things, but I guess to some extent this is a risk
> with any change to tripleo-heat-templates.
> 

So you and I talked on IRC, but I want to socialize what we talked about
more.

The abandon/adopt pipeline is a bit broken in Heat and hasn't proven to be
as useful as I'd hoped when it was first specced out. It seems too broad,
and relies on any tools understanding how to morph a whole new format
(the abandon json).

With external_reference, the external upgrade process just needs to know
how to morph the template. So if we're combining 8 existing servers into
an autoscaling group, we just need to know how to make an autoscaling
group with 8 servers as the external reference ids. This is, I think,
the shortest path to a working solution, as I feel the external
reference work in Heat is relatively straight forward and the spec has
widescale agreement.
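
To make "morph the template" concrete, here is roughly what such a helper
could do. This is a thought experiment in code only: the
'external_references' property name is an assumption based on the review
above, and may well differ in whatever finally merges.

import yaml


def morph_to_group(template, server_ids):
    resources = template.setdefault('resources', {})
    # Drop the individually-managed server resources...
    for name in [n for n, r in resources.items()
                 if r.get('type') == 'OS::Nova::Server']:
        del resources[name]
    # ...and reference the existing servers from one scaling group instead.
    resources['compute_group'] = {
        'type': 'OS::Heat::ResourceGroup',
        'properties': {
            'count': len(server_ids),
            'resource_def': {'type': 'OS::Nova::Server'},
            'external_references': server_ids,  # assumed property name
        },
    }
    return template


with open('overcloud.yaml') as f:
    template = yaml.safe_load(f)
print(yaml.safe_dump(morph_to_group(template, ['srv-%d' % i for i in range(8)])))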

There was another approach I mentioned, which is that we can teach Heat
how to morph resources. So we could teach Heat that servers can be made
into autoscaling groups, and vice-versa. This is a whole new feature
though, and IMO, something that should be tackled _after_ we make it
work with the external_reference feature, as this is basically a
superset of what we'll do externally.

> Has any thought been given to upgrade CI testing?  I'm thinking grenade or
> grenade-style testing here where we test maintaining a deployed overcloud
> over an upgrade of (some subset of) changes.
> 
> I know the upgrade testing thing will be hard, but to me it's a key
> requirement to mature heat-driven updates vs those driven by external
> tooling.

Upgrade testing is vital to the future of the project IMO. We really
haven't validated the image based update method upstream yet. In Helion,
we're using tripl

[openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-05 Thread Matt Riedemann
In Juno we effectively disabled live snapshots with libvirt due to bug 
1334398 [1] failing the gate about 25% of the time.


I was going through the Juno release notes today and saw this as a known 
issue, which reminded me of it and was wondering if there is anything 
being done about it?


As I recall, it *works* but it wasn't working under the stress our 
check/gate system puts on that code path.


One thing I'm thinking is, couldn't we make this an experimental config 
option and by default it's disabled but we could run it in the 
experimental queue, or people could use it without having to patch the 
code to remove the artificial minimum version constraint put in the code.


Something like:

if CONF.libvirt.live_snapshot_supported:
   # do your thing

[1] https://bugs.launchpad.net/nova/+bug/1334398
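
For completeness, the oslo.config side of that is only a few lines; this is
a sketch, not a patch:

from oslo.config import cfg

CONF = cfg.CONF

libvirt_opts = [
    cfg.BoolOpt('live_snapshot_supported',
                default=False,
                help='Enable libvirt live snapshots despite bug 1334398. '
                     'Off by default; meant to be exercised in the '
                     'experimental queue.'),
]

CONF.register_opts(libvirt_opts, group='libvirt')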

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-05 Thread Sean Dague
On 12/05/2014 01:50 PM, Matt Riedemann wrote:
> In Juno we effectively disabled live snapshots with libvirt due to bug
> 1334398 [1] failing the gate about 25% of the time.
> 
> I was going through the Juno release notes today and saw this as a known
> issue, which reminded me of it and was wondering if there is anything
> being done about it?
> 
> As I recall, it *works* but it wasn't working under the stress our
> check/gate system puts on that code path.
> 
> One thing I'm thinking is, couldn't we make this an experimental config
> option and by default it's disabled but we could run it in the
> experimental queue, or people could use it without having to patch the
> code to remove the artificial minimum version constraint put in the code.
> 
> Something like:
> 
> if CONF.libvirt.live_snapshot_supported:
># do your thing
> 
> [1] https://bugs.launchpad.net/nova/+bug/1334398

So, it works. If you aren't booting / shutting down guests at exactly
the same time as snapshotting. I believe cburgess said in IRC yesterday
he was going to take another look at it next week.

I'm happy to put this into dansmith's patented [workarounds] config
group (coming soon to fix the qemu-convert bug). But I don't think this
should be a normal libvirt option.

-Sean

-- 
Sean Dague
http://dague.net

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docker][containers][qa] nova-docker CI failing a lot on unrelated nova patches

2014-12-05 Thread Ian Main
Sean Dague wrote:
> On 12/04/2014 05:38 PM, Matt Riedemann wrote:
> > 
> > 
> > On 12/4/2014 4:06 PM, Michael Still wrote:
> >> +Eric and Ian
> >>
> >> On Fri, Dec 5, 2014 at 8:31 AM, Matt Riedemann
> >>  wrote:
> >>> This came up in the nova meeting today, I've opened a bug [1] for it.
> >>> Since
> >>> this isn't maintained by infra we don't have log indexing so I can't use
> >>> logstash to see how pervasive it is, but multiple people are
> >>> reporting the
> >>> same thing in IRC.
> >>>
> >>> Who is maintaining the nova-docker CI and can look at this?
> >>>
> >>> It also looks like the log format for the nova-docker CI is a bit
> >>> weird, can
> >>> that be cleaned up to be more consistent with other CI log results?
> >>>
> >>> [1] https://bugs.launchpad.net/nova-docker/+bug/1399443
> >>>
> >>> -- 
> >>>
> >>> Thanks,
> >>>
> >>> Matt Riedemann
> >>>
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> > 
> > Also, according to the 3rd party CI requirements [1] I should see
> > nova-docker CI in the third party wiki page [2] so I can get details on
> > who to contact when this fails but that's not done.
> > 
> > [1] http://ci.openstack.org/third_party.html#requirements
> > [2] https://wiki.openstack.org/wiki/ThirdPartySystems
> 
> It's not the 3rd party CI job we are talking about, it's the one in the
> check queue which is run by infra.
> 
> But, more importantly, jobs in those queues need shepherds that will fix
> them. Otherwise they will get deleted.
> 
> Clarkb provided the fix for the log structure right now -
> https://review.openstack.org/#/c/139237/1 so at least it will look
> vaguely sane on failures
> 
>   -Sean

This is one of the reasons we might like to have this in nova core.  Otherwise
we will just keep addressing issues as they come up.  We would likely be
involved in doing this if it were part of nova core anyway.

Ian

> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database

2014-12-05 Thread Andrew Laski
The cells v2 effort is going to be introducing a new database into 
Nova.  This has been an opportunity to rethink and approach a few things 
differently, including how we should handle migrations. There have been 
discussions for a long time now about switching over to alembic for 
migrations so I want to ask, should we start using alembic from the 
start for this new database?


The question was first raised by Dan Smith on 
https://review.openstack.org/#/c/135424/


I do have some concern about having two databases managed in two 
different ways, but if the details are well hidden behind a nova-manage 
command I'm not sure it will actually matter in practice.
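
For context, a first Alembic revision for the new database would look
roughly like the sketch below; the revision ids and the table are invented
for illustration, not the actual cells v2 schema.

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic
revision = '1a2b3c4d5e6f'
down_revision = None


def upgrade():
    op.create_table(
        'cell_mappings',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('uuid', sa.String(36), nullable=False),
        sa.Column('transport_url', sa.Text),
        sa.Column('database_connection', sa.Text))


def downgrade():
    op.drop_table('cell_mappings')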


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] bug 1334398 and libvirt live snapshot support

2014-12-05 Thread Matt Riedemann



On 12/5/2014 1:32 PM, Sean Dague wrote:

On 12/05/2014 01:50 PM, Matt Riedemann wrote:

In Juno we effectively disabled live snapshots with libvirt due to bug
1334398 [1] failing the gate about 25% of the time.

I was going through the Juno release notes today and saw this as a known
issue, which reminded me of it and was wondering if there is anything
being done about it?

As I recall, it *works* but it wasn't working under the stress our
check/gate system puts on that code path.

One thing I'm thinking is, couldn't we make this an experimental config
option and by default it's disabled but we could run it in the
experimental queue, or people could use it without having to patch the
code to remove the artificial minimum version constraint put in the code.

Something like:

if CONF.libvirt.live_snapshot_supported:
# do your thing

[1] https://bugs.launchpad.net/nova/+bug/1334398


So, it works. If you aren't booting / shutting down guests at exactly
the same time as snapshotting. I believe cburgess said in IRC yesterday
he was going to take another look at it next week.

I'm happy to put this into dansmith's patented [workarounds] config
group (coming soon to fix the qemu-convert bug). But I don't think this
should be a normal libvirt option.

-Sean



Yeah, the [workarounds] group is what got me thinking about it too as a 
config option; otherwise, I think the idea of an [experimental] config 
group has come up before as a place to put 'not tested, here be dragons' 
type stuff.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database

2014-12-05 Thread Matt Riedemann



On 12/5/2014 1:45 PM, Andrew Laski wrote:

The cells v2 effort is going to be introducing a new database into
Nova.  This has been an opportunity to rethink and approach a few things
differently, including how we should handle migrations. There have been
discussions for a long time now about switching over to alembic for
migrations so I want to ask, should we start using alembic from the
start for this new database?

The question was first raised by Dan Smith on
https://review.openstack.org/#/c/135424/

I do have some concern about having two databases managed in two
different ways, but if the details are well hidden behind a nova-manage
command I'm not sure it will actually matter in practice.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't have experience with Alembic but I'd think we should use Alembic 
for the new database unless there is a compelling reason not to. Maybe 
we need Mike Bayer (or other oslo.db people) to give us an idea of what 
kinds of problems we might have with managing two databases with two 
different migration schemes.


But the last part you said is key for me, if we can abstract it well 
then hopefully it's not very painful.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database

2014-12-05 Thread Johannes Erdfelt
On Fri, Dec 05, 2014, Andrew Laski  wrote:
> The cells v2 effort is going to be introducing a new database into
> Nova.  This has been an opportunity to rethink and approach a few
> things differently, including how we should handle migrations. There
> have been discussions for a long time now about switching over to
> alembic for migrations so I want to ask, should we start using
> alembic from the start for this new database?
> 
> The question was first raised by Dan Smith on
> https://review.openstack.org/#/c/135424/
> 
> I do have some concern about having two databases managed in two
> different ways, but if the details are well hidden behind a
> nova-manage command I'm not sure it will actually matter in
> practice.

This would be a good time for people to review my proposed spec:

https://review.openstack.org/#/c/102545/

Not only does it help operators, but it also helps developers, since all
they would need to do in the future is update the model; DDL statements
are then generated by comparing the running schema with the model.

BTW, it uses Alembic under the hood for most of the heavy lifting.
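
For those who haven't seen that part of Alembic, the schema-vs-model
comparison it exposes looks roughly like this (the engine URL and the empty
model metadata are placeholders):

from sqlalchemy import create_engine, MetaData
from alembic.migration import MigrationContext
from alembic.autogenerate import compare_metadata

model_metadata = MetaData()  # in practice, the project's declarative models

engine = create_engine('mysql://user:secret@localhost/nova')
conn = engine.connect()
diff = compare_metadata(MigrationContext.configure(conn), model_metadata)
conn.close()

# diff is a list of entries like ('add_table', Table(...)) or
# ('add_column', None, 'instances', Column(...)) from which the DDL
# statements can be generated.
for entry in diff:
    print(entry)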

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database

2014-12-05 Thread Sylvain Bauza


Le 05/12/2014 21:14, Matt Riedemann a écrit :



On 12/5/2014 1:45 PM, Andrew Laski wrote:

The cells v2 effort is going to be introducing a new database into
Nova.  This has been an opportunity to rethink and approach a few things
differently, including how we should handle migrations. There have been
discussions for a long time now about switching over to alembic for
migrations so I want to ask, should we start using alembic from the
start for this new database?

The question was first raised by Dan Smith on
https://review.openstack.org/#/c/135424/

I do have some concern about having two databases managed in two
different ways, but if the details are well hidden behind a nova-manage
command I'm not sure it will actually matter in practice.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't have experience with Alembic but I'd think we should use 
Alembic for the new database unless there is a compelling reason not 
to. Maybe we need Mike Bayer (or other oslo.db people) to give us an 
idea of what kinds of problems we might have with managing two 
databases with two different migration schemes.


But the last part you said is key for me, if we can abstract it well 
then hopefully it's not very painful.





I had some experience with Alembic in a previous Stackforge project and 
I'm definitely +1 on using it for the Cells V2 database.


We can just provide a nova-manage cell-db service that would facade the 
migration backend, whatever it is.
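
Something like the following sketch, say (module paths and names are
assumptions; nova-manage would simply dispatch to these):

import alembic.command
import alembic.config


def main_db_sync(version=None):
    # The existing main DB keeps using sqlalchemy-migrate under the hood.
    from nova.db.sqlalchemy import migration
    migration.db_sync(version)


def cell_db_sync(revision='head'):
    # The new cells v2 DB uses Alembic under the hood.
    config = alembic.config.Config('nova/db/cell/alembic.ini')  # assumed path
    alembic.command.upgrade(config, revision)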




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Spring cleaning nova-core

2014-12-05 Thread Michael Still
I used Russell's 60-day stats in making this decision. I can't find a
documented historical precedent on what period the stats should be
generated over; however, 60 days seems entirely reasonable to me.

2014-12-05 15:41:11.212927

Reviews for the last 60 days in nova
** -- nova-core team member
+-----------------+-------------------------------------------+----------------+
|     Reviewer    | Reviews   -2   -1   +1   +2   +A    +/- % | Disagreements* |
+-----------------+-------------------------------------------+----------------+
| berrange **     |     669   13  134    1  521  194    78.0% |   47 (  7.0%)  |
| jogo **         |     431   38  161    2  230  117    53.8% |   19 (  4.4%)  |
| oomichi **      |     309    1  106    4  198   58    65.4% |    3 (  1.0%)  |
| danms **        |     293   34  133   15  111   43    43.0% |   12 (  4.1%)  |
| jaypipes **     |     290   10  108   14  158   42    59.3% |   15 (  5.2%)  |
| ndipanov **     |     192   10   78    6   98   24    54.2% |   24 ( 12.5%)  |
| klmitch **      |     190    1   22    0  167   12    87.9% |   21 ( 11.1%)  |
| cyeoh-0 **      |     184    0   70   10  104   41    62.0% |    9 (  4.9%)  |
| mriedem **      |     173    3   86    8   76   31    48.6% |    8 (  4.6%)  |
| johngarbutt **  |     164   19   79    6   60   24    40.2% |    7 (  4.3%)  |
| cerberus **     |     151    0    9   40  102   38    94.0% |    7 (  4.6%)  |
| mikalstill **   |     145    2    8    1  134   48    93.1% |    3 (  2.1%)  |
| alaski **       |     104    0    7    6   91   54    93.3% |    5 (  4.8%)  |
| sdague **       |      98    6   21    2   69   40    72.4% |    4 (  4.1%)  |
| russellb **     |      86    1   10    0   75   29    87.2% |    5 (  5.8%)  |
| p-draigbrady ** |      60    0   12    1   47   10    80.0% |    4 (  6.7%)  |
| belliott **     |      32    0    8    1   23   15    75.0% |    4 ( 12.5%)  |
| vishvananda **  |       8    0    2    0    6    1    75.0% |    2 ( 25.0%)  |
| dan-prince **   |       7    0    0    0    7    3   100.0% |    4 ( 57.1%)  |
| cbehrens **     |       4    0    2    0    2    0    50.0% |    1 ( 25.0%)  |
+-----------------+-------------------------------------------+----------------+

The previously held standard for core reviewer activity has been an
_average_ of two reviews per day, which is why I used the 60-day stats
(to eliminate vacations and so forth). It should be noted that the top
ten or so reviewers are doing a lot more than that.

All of the reviewers I dropped are valued members of the team, and I
am sad to see all of them go. However, it is important that reviewers
remain active.

It should also be noted that with the exception of one person (who
hasn't been under discussion in this thread) I discussed doing this
with all of these people on 12 June 2014. This was not a sudden move,
and shouldn't be a surprise to the reviewers involved.

One final point to reiterate -- we have always said as a project that
former cores can be re-added if their review rate picks up again. This
isn't a punishment, it's a recognition that those people have gone off
to work on other things and that nova is no longer their focus. I'd
welcome an increased review rate from all involved.

Michael

On Sat, Dec 6, 2014 at 1:39 AM, Russell Bryant  wrote:
> On 12/05/2014 08:41 AM, Daniel P. Berrange wrote:
>> On Fri, Dec 05, 2014 at 11:05:28AM +1100, Michael Still wrote:
>>> One of the things that happens over time is that some of our core
>>> reviewers move on to other projects. This is a normal and healthy
>>> thing, especially as nova continues to spin out projects into other
>>> parts of OpenStack.
>>>
>>> However, it is important that our core reviewers be active, as it
>>> keeps them up to date with the current ways we approach development in
>>> Nova. I am therefore removing some no longer sufficiently active cores
>>> from the nova-core group.
>>>
>>> I’d like to thank the following people for their contributions over the 
>>> years:
>>>
>>> * cbehrens: Chris Behrens
>>> * vishvananda: Vishvananda Ishaya
>>> * dan-prince: Dan Prince
>>> * belliott: Brian Elliott
>>> * p-draigbrady: Padraig Brady
>>>
>>> I’d love to see any of these cores return if they find their available
>>> time for code reviews increases.
>>
>> What stats did you use to decide whether to cull these reviewers ? Looking
>> at the stats over a 6 month period, I think Padraig Brady is still having
>> a significant positive impact on Nova - on a par with both cerberus and
>> alaski who you're not proposing to cut. I think we should keep Padraig
>> on the team, but probably suggest cutting Markmc instead
>>
>>   http://russellbryant.net/openstack-stats/nova-reviewers-180.txt
>>
>> +-+++
>> |   Reviewer  | Reviews   -2  -1  +1  +2  

[openstack-dev] [[Openstack-dev] [Ironic] Ironic-conductor fails to start - "AttributeError '_keepalive_evt'"

2014-12-05 Thread Lohit Valleru
Hello All,

I am trying to deploy bare-metal nodes using openstack-ironic. It is a
2-node architecture with controller/keystone/mysql on a virtual machine, and
cinder/compute/nova network on a physical machine on a CentOS 7 environment.

openstack-ironic-common-2014.2-2.el7.centos.noarch
openstack-ironic-api-2014.2-2.el7.centos.noarch
openstack-ironic-conductor-2014.2-2.el7.centos.noarch

I have followed this document,
http://docs.openstack.org/developer/ironic/deploy/install-guide.html#ipmi-support

and installed ironic. But when I start ironic-conductor, I get the below
error:

ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service
 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 ERROR
ironic.common.service [-] Service error occurred when cleaning up the RPC
manager. Error: 'ConductorManager' object has no attribute '_keepalive_evt'
 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service Traceback (most recent call last):
 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service   File
"/usr/lib/python2.7/site-packages/ironic/common/service.py", line 91, in
stop
 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service self.manager.del_host()
ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service   File
"/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 235,
in del_host
 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service self._keepalive_evt.set()
 hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service AttributeError: 'ConductorManager' object has no
attribute '_keepalive_evt'
 hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
ironic.common.service
 hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 INFO
ironic.common.service [-] Stopped RPC server for service
ironic.conductor_manager on host hc004.

A look at the source code tells me that it is something related to the RPC
service being started/stopped.
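
Reading the trace, the cleanup path seems to run even though startup never
finished: del_host() touches _keepalive_evt, which presumably only exists
once the conductor has started successfully. Purely as a sketch (not the
actual Ironic code), a guard along these lines would avoid the secondary
AttributeError and let the real startup failure surface:

def del_host(self):
    # Sketch only: names are taken from the traceback above, but the guard
    # itself is hypothetical; _keepalive_evt is presumably created during
    # startup, which seems never to have completed here.
    if not hasattr(self, '_keepalive_evt'):
        return
    self._keepalive_evt.set()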

Also, I cannot debug this further, as I do not see any logs being created
for ironic.
Do I have to explicitly enable the logging properties in ironic.conf, or
are they expected to work by default?

Here is the configuration from ironic.conf

#

[DEFAULT]
verbose=true
rabbit_host=172.18.246.104
auth_strategy=keystone
debug=true

[keystone_authtoken]
auth_host=172.18.246.104
auth_uri=http://172.18.246.104:5000/v2.0
admin_user=ironic
admin_password=
admin_tenant_name=service

[database]
connection = mysql://ironic:x@172.18.246.104/ironic?charset=utf8

[glance]
glance_host=172.18.246.104

#

I understand that I did not give the neutron URL as required by the
documentation. The reason: I have architecture limitations that prevent
installing neutron networking, and I would like to experiment with whether
nova-network and a DHCP PXE server will serve the purpose, although I highly
doubt that.

However, I wish to know whether the above issue is in any way related to the
non-existent neutron network, or whether it is related to something else.

Please do let me know.

Thank you,

Lohit
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [[Openstack-dev] [Ironic] Ironic-conductor fails to start - "AttributeError '_keepalive_evt'"

2014-12-05 Thread Devananda van der Veen
Hi Lohit,

In the future, please do not cross-post or copy-and-paste usage questions
on the development list. Since you posted this question on the general list
(*) -- which is exactly where you should post it -- I will respond there.

Regards,
Devananda

(*) http://lists.openstack.org/pipermail/openstack/2014-December/010698.html



On Fri Dec 05 2014 at 1:15:44 PM Lohit Valleru 
wrote:

> Hello All,
>
> I am trying to deploy bare-metal nodes using openstack-ironic. It is a
> 2-node architecture with controller/keystone/mysql on a virtual machine, and
> cinder/compute/nova network on a physical machine on a CentOS 7 environment.
>
> openstack-ironic-common-2014.2-2.el7.centos.noarch
> openstack-ironic-api-2014.2-2.el7.centos.noarch
> openstack-ironic-conductor-2014.2-2.el7.centos.noarch
>
> I have followed this document,
>
> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#ipmi-support
>
> and installed ironic. But when I start ironic-conductor, I get the below
> error:
>
> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service
>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 ERROR
> ironic.common.service [-] Service error occurred when cleaning up the RPC
> manager. Error: 'ConductorManager' object has no attribute '_keepalive_evt'
>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service Traceback (most recent call last):
>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service   File
> "/usr/lib/python2.7/site-packages/ironic/common/service.py", line 91, in
> stop
>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service self.manager.del_host()
> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service   File
> "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 235,
> in del_host
>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service self._keepalive_evt.set()
>  hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service AttributeError: 'ConductorManager' object has no
> attribute '_keepalive_evt'
>  hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
> ironic.common.service
>  hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 INFO
> ironic.common.service [-] Stopped RPC server for service
> ironic.conductor_manager on host hc004.
>
> A look at the source code tells me that it is something related to the RPC
> service being started/stopped.
>
> Also, I cannot debug this further, as I do not see any logs being created
> for ironic.
> Do I have to explicitly enable the logging properties in ironic.conf, or
> are they expected to work by default?
>
> Here is the configuration from ironic.conf
>
> #
>
> [DEFAULT]
> verbose=true
> rabbit_host=172.18.246.104
> auth_strategy=keystone
> debug=true
>
> [keystone_authtoken]
> auth_host=172.18.246.104
> auth_uri=http://172.18.246.104:5000/v2.0
> admin_user=ironic
> admin_password=
> admin_tenant_name=service
>
> [database]
> connection = mysql://ironic:x@172.18.246.104/ironic?charset=utf8
>
> [glance]
> glance_host=172.18.246.104
>
> #
>
> I understand that I did not give the neutron URL as required by the
> documentation. The reason: I have architecture limitations that prevent
> installing neutron networking, and I would like to experiment with whether
> nova-network and a DHCP PXE server will serve the purpose, although I highly
> doubt that.
>
> However, I wish to know whether the above issue is in any way related to the
> non-existent neutron network, or whether it is related to something else.
>
> Please do let me know.
>
> Thank you,
>
> Lohit
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] sqlalchemy-migrate vs alembic for new database

2014-12-05 Thread Mike Bayer

> On Dec 5, 2014, at 3:14 PM, Matt Riedemann  wrote:
> 
> 
> 
> On 12/5/2014 1:45 PM, Andrew Laski wrote:
>> The cells v2 effort is going to be introducing a new database into
>> Nova.  This has been an opportunity to rethink and approach a few things
>> differently, including how we should handle migrations. There have been
>> discussions for a long time now about switching over to alembic for
>> migrations so I want to ask, should we start using alembic from the
>> start for this new database?
>> 
>> The question was first raised by Dan Smith on
>> https://review.openstack.org/#/c/135424/
>> 
>> I do have some concern about having two databases managed in two
>> different ways, but if the details are well hidden behind a nova-manage
>> command I'm not sure it will actually matter in practice.
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> I don't have experience with Alembic but I'd think we should use Alembic for 
> the new database unless there is a compelling reason not to. Maybe we need 
> Mike Bayer (or other oslo.db people) to give us an idea of what kinds of 
> problems we might have with managing two databases with two different 
> migration schemes.
> 
> But the last part you said is key for me, if we can abstract it well then 
> hopefully it's not very painful.

sqlalchemy-migrate doesn’t really have a dedicated maintainer anymore, AFAICT.  
It’s pretty much on stackforge life support.   So while the issue of merging 
together a project with migrate and alembic at the same time seems to be 
something for which there are some complexity and some competing ideas (I have 
one that’s pretty fancy, but I haven’t spec’ed or implemented it yet, so for 
now there are “wrappers” that run both), it sort of has to happen regardless.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [ha] potential issue with implicit async-compatible mysql drivers

2014-12-05 Thread Mike Bayer
Hey list -

I’m posting this here just to get some ideas on what might be happening here, 
as it may or may not have some impact on Openstack if and when we move to MySQL 
drivers that are async-patchable, like MySQL-connector or PyMySQL.  I had a 
user post this issue a few days ago which I’ve since distilled into test cases 
for PyMySQL and MySQL-connector separately.   It uses gevent, not eventlet, so 
I’m not really sure if this applies.  But there’s plenty of very smart people 
here so if anyone can shed some light on what is actually happening here, that 
would help.

The program essentially illustrates code that performs several steps upon a 
connection; however, if the greenlet is suddenly killed, the state from the 
connection, while damaged, is still being allowed to continue on in some way, 
and what’s super-catastrophic here is that you see a transaction actually being 
committed *without* all the statements having proceeded on it. 

In my work with MySQL drivers, I’ve noted for years that they are all very, 
very bad at dealing with concurrency-related issues.  The whole “MySQL has gone 
away” and “commands out of sync” errors are ones that we’ve all just drowned 
in, and so often these are due to the driver getting mixed up due to concurrent 
use of a connection.  However this one seems more insidious.   Though at the 
same time, the script has some complexity happening (like a simplistic 
connection pool) and I’m not really sure where the core of the issue lies.

The script is at https://gist.github.com/zzzeek/d196fa91c40cb515365e and also 
below.  If you run it for a few seconds, go over to your MySQL command line and 
run this query:

SELECT * FROM table_b WHERE a_id not in (SELECT id FROM table_a) ORDER BY a_id 
DESC;

and what you’ll see is tons of rows in table_b where the “a_id” is zero 
(because cursor.lastrowid fails), but the *rows are committed*.   If you read 
the segment of code that does this, it should be impossible:

connection = pool.get()
rowid = execute_sql(
connection,
"INSERT INTO table_a (data) VALUES (%s)", ("a",)
)

gevent.sleep(random.random() * 0.2)
 
try:
execute_sql(
connection,
"INSERT INTO table_b (a_id, data) VALUES (%s, %s)",
(rowid, "b",)
)
 
connection.commit()
 
pool.return_conn(connection) 

except Exception:
connection.rollback()
pool.return_conn(connection)

so if the gevent.sleep() throws a timeout error, somehow we are getting thrown 
back in there, with the connection in an invalid state, but not invalid enough 
to commit.

If a simple check for “SELECT connection_id()” is added, this query fails and 
the whole issue is prevented.  Additionally, if you put a foreign key 
constraint on that b_table.a_id, then the issue is prevented, and you see that 
the constraint violation is happening all over the place within the commit() 
call.   The connection is being used such that its state just started after the 
gevent.sleep() call.  
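
To be concrete, the check I mean is a pre-ping of roughly this form, done
as the connection is checked out of the pool (a sketch against the plain
DBAPI, not SQLAlchemy pool code):

def is_alive(connection):
    # Cheap round trip before handing a pooled connection out; if the
    # previous greenlet was killed mid-conversation, this fails fast
    # instead of letting a half-poisoned connection reach commit().
    try:
        cursor = connection.cursor()
        cursor.execute("SELECT connection_id()")
        cursor.fetchall()
        cursor.close()
        return True
    except Exception:
        return False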

Now, there’s also a very rudimentary connection pool here.   That is also part 
of what’s going on.  If I try to run without the pool, the whole script just 
runs out of connections, fast, which suggests that this gevent timeout cleans 
itself up very, very badly.   However, SQLAlchemy’s pool works a lot like this 
one, so if folks here can tell me if the connection pool is doing something 
bad, then that’s key, because I need to make a comparable change in 
SQLAlchemy’s pool.   Otherwise I worry our eventlet use could have big problems 
under high load.





# -*- coding: utf-8 -*-
import gevent.monkey
gevent.monkey.patch_all()

import collections
import threading
import time
import random
import sys

import logging
logging.basicConfig()
log = logging.getLogger('foo')
log.setLevel(logging.DEBUG)

#import pymysql as dbapi
from mysql import connector as dbapi


class SimplePool(object):
def __init__(self):
self.checkedin = collections.deque([
self._connect() for i in range(50)
])
self.checkout_lock = threading.Lock()
self.checkin_lock = threading.Lock()

def _connect(self):
return dbapi.connect(
user="scott", passwd="tiger",
host="localhost", db="test")

    def get(self):
        # Busy-wait until a connection is free; the monkeypatched
        # time.sleep() yields to other greenlets while we wait.
        with self.checkout_lock:
            while not self.checkedin:
                time.sleep(.1)
            return self.checkedin.pop()

    def return_conn(self, conn):
        # Returned connections are never reused: roll back and close
        # whatever state they have, then check a brand new one back in.
        try:
            conn.rollback()
        except:
            log.error("Exception during rollback", exc_info=True)
        try:
            conn.close()
        except:
            log.error("Exception during close", exc_info=True)

        # recycle to a new connection
        conn = self._connect()
with s

Re: [openstack-dev] [[Openstack-dev] [Ironic] Ironic-conductor fails to start - "AttributeError '_keepalive_evt'"

2014-12-05 Thread Lohit Valleru
I apologize. I was not sure about where to post the errors.

I will post to the general list from next time.

Thank you,

Lohit

On Friday, December 5, 2014, Devananda van der Veen 
wrote:

> Hi Lohit,
>
> In the future, please do not cross-post or copy-and-paste usage questions
> on the development list. Since you posted this question on the general list
> (*) -- which is exactly where you should post it -- I will respond there.
>
> Regards,
> Devananda
>
> (*)
> http://lists.openstack.org/pipermail/openstack/2014-December/010698.html
>
>
>
> On Fri Dec 05 2014 at 1:15:44 PM Lohit Valleru  > wrote:
>
>> Hello All,
>>
>> I am trying to deploy bare-metal nodes using openstack-ironic. It is a
>> 2-node architecture with controller/keystone/mysql on a virtual machine,
>> and cinder/compute/nova network on a physical machine on a CentOS 7
>> environment.
>>
>> openstack-ironic-common-2014.2-2.el7.centos.noarch
>> openstack-ironic-api-2014.2-2.el7.centos.noarch
>> openstack-ironic-conductor-2014.2-2.el7.centos.noarch
>>
>> I have followed this document,
>>
>> http://docs.openstack.org/developer/ironic/deploy/install-guide.html#ipmi-support
>>
>> and installed ironic. But when I start ironic-conductor, I get the below
>> error:
>>
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service
>>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 ERROR
>> ironic.common.service [-] Service error occurred when cleaning up the RPC
>> manager. Error: 'ConductorManager' object has no attribute '_keepalive_evt'
>>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service Traceback (most recent call last):
>>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service   File
>> "/usr/lib/python2.7/site-packages/ironic/common/service.py", line 91, in
>> stop
>>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service self.manager.del_host()
>> ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service   File
>> "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 235,
>> in del_host
>>  ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service self._keepalive_evt.set()
>>  hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service AttributeError: 'ConductorManager' object has no
>> attribute '_keepalive_evt'
>>  hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 TRACE
>> ironic.common.service
>>  hc004 ironic-conductor[15997]: 2014-12-05 15:38:12.457 15997 INFO
>> ironic.common.service [-] Stopped RPC server for service
>> ironic.conductor_manager on host hc004.
>>
>> A look at the source code tells me that it is something related to the RPC
>> service being started/stopped.
>>
>> Also, I cannot debug this further, as I do not see any logs being created
>> for ironic.
>> Do I have to explicitly enable the logging properties in ironic.conf, or
>> are they expected to work by default?
>>
>> Here is the configuration from ironic.conf
>>
>> #
>>
>> [DEFAULT]
>> verbose=true
>> rabbit_host=172.18.246.104
>> auth_strategy=keystone
>> debug=true
>>
>> [keystone_authtoken]
>> auth_host=172.18.246.104
>> auth_uri=http://172.18.246.104:5000/v2.0
>> admin_user=ironic
>> admin_password=
>> admin_tenant_name=service
>>
>> [database]
>> connection = mysql://ironic:x@172.18.246.104/ironic?charset=utf8
>>
>> [glance]
>> glance_host=172.18.246.104
>>
>> #
>>
>> I understand that I did not give the neutron URL as required by the
>> documentation. The reason: I have architecture limitations that prevent
>> installing neutron networking, and I would like to experiment with whether
>> nova-network and a DHCP PXE server will serve the purpose, although I highly
>> doubt that.
>>
>> However, I wish to know whether the above issue is in any way related to the
>> non-existent neutron network, or whether it is related to something else.
>>
>> Please do let me know.
>>
>> Thank you,
>>
>> Lohit
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Core/Vendor code decomposition

2014-12-05 Thread Armando M.
Hi folks,

For a few weeks now the Neutron team has worked tirelessly on [1].

This initiative stems from the fact that, as the project matures, its
processes and contribution guidelines need to evolve with it. This is to
ensure that the project can keep on thriving in order to meet the needs of
an ever-growing community.

The effort of documenting intentions, and fleshing out the various details
of the proposal is about to reach an end, and we'll soon kick the tires to
put the proposal into practice. Since the spec has grown pretty big, I'll
try to capture the tl;dr below.

If you have any comment please do not hesitate to raise them here and/or
reach out to us.

tl;dr >>>

From the Kilo release, we'll initiate a set of steps to change the
following areas:

   - Code structure: every plugin or driver that exists or wants to exist
     as part of the Neutron project is decomposed into a slim vendor
     integration (which lives in the Neutron repo), plus a bulkier vendor
     library (which lives in an independent publicly available repo); a
     sketch of what such a split can look like follows this list;
   - Contribution process: this extends to the following aspects:
      - Design and Development: the process is largely unchanged for the
        part that pertains to the vendor integration; the maintainer team
        is fully self-governed for the design and development of the
        vendor library;
      - Testing and Continuous Integration: maintainers will be required
        to support their vendor integration with 3rd party CI testing; the
        requirements for 3rd party CI testing are largely unchanged;
      - Defect management: the process is largely unchanged; issues
        affecting the vendor library can be tracked with whichever
        tool/process the maintainer sees fit. In cases where vendor
        library fixes need to be reflected in the vendor integration, the
        usual OpenStack defect management applies.
      - Documentation: there will be some changes to the way plugins and
        drivers are documented, with the intention of promoting
        discoverability of the integrated solutions.
   - Adoption and transition plan: we strongly advise maintainers to stay
     abreast of the developments of this effort, as their code, their CI,
     etc. will be affected. The core team will provide guidelines and
     support throughout this cycle to ensure a smooth transition.
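
As an illustration of the code structure split above (the vendor package
and class names are invented, this is not a real driver), a slim vendor
integration could be little more than a shim:

from neutron.plugins.ml2 import driver_api as api

from networking_acme import driver as acme_driver  # external vendor library


class AcmeMechanismDriver(api.MechanismDriver):

    def initialize(self):
        self._backend = acme_driver.AcmeBackend()

    def create_port_postcommit(self, context):
        # All vendor-specific logic lives in the external library.
        self._backend.create_port(context.current)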

To learn more, please refer to [1].

Many thanks,
Armando

[1] https://review.openstack.org/#/c/134680
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Neutron documentation to update about new vendor plugin, but without code in repository?

2014-12-05 Thread Armando M.
For anyone who had an interest in following this thread, they might want to
have a look at [1] and [2] (which is the tl;dr version of [1]).

HTH
Armando

[1] https://review.openstack.org/#/c/134680
[2]
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052346.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] reminder: alternate meeting time

2014-12-05 Thread Devananda van der Veen
This is a friendly reminder that our weekly IRC meetings have begun
alternating times every week to try to accommodate more of our contributors.

Next week's meeting will be at 0500 UTC Tuesday (9pm PST Monday) in the
#openstack-meeting-3 channel. Details, as always, are on the wiki [0].

Regards,
Devananda

[0] https://wiki.openstack.org/wiki/Meetings/Ironic
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mike Bayer 20141205

2014-12-05 Thread Mike Bayer
1. Alembic release - I worked through some regressions introduced by Alembic 
0.7.0 and the subsequent 0.7.1 with the Neutron folks.  This started on Monday 
with https://review.openstack.org/#/c/137989/, and by Wednesday I had 
identified enough small regressions in 0.7.0 that I had to put 0.7.1 out, so 
that review got expedited with https://review.openstack.org/#/c/138998/ 
following from Neutron devs to continue fixing.   Version 0.7.1 includes the 
foreign key autogenerate support first proposed by Ann Kamyshnikova.  Changelog 
at http://alembic.readthedocs.org/en/latest/changelog.html#change-0.7.1.

2. MySQL driver stuff.   I have a SQLAlchemy user who is running some kind of 
heavy load with gevent and PyMySQL.  While this user is not openstack-specific, 
the thing he is doing is a lot like what we might be doing if and when we move 
our MySQL drivers to MySQL-connector-Python, which is compatible with eventlet 
in that it is pure Python and can be monkeypatched.The issue observed by 
this user applies to both PyMySQL and MySQL-connector, and I can reproduce it 
*without* using SQLAlchemy, though it does use a very makeshift connection pool 
designed to approximate what SQLAlchemy’s does.   The issue is scary because it 
illustrates Python code that should have been killed being invoked on a 
database connection that should have been dead, calling commit(), and then 
actually *succeeding* in committing only *part* of the data.   This is not an 
issue that impacts Openstack right now but if the same thing applies to 
eventlet, then this would definitely be something we’d need to worry about if 
we start using MySQL-connector in a high load scenario (which has been the 
plan) so I’ve forwarded my findings onto openstack-dev to see if anyone can 
help me understand it.  The intro + test case for this issue starts at 
http://lists.openstack.org/pipermail/openstack-dev/2014-December/052344.html. 

3. enginefacade - The engine facade as I described in 
https://review.openstack.org/#/c/125181/, which we also talked about on the 
Nova compute call this week, is now built!  I spent monday and tuesday on the 
buildout for this, and that can be seen and reviewed here: 
https://review.openstack.org/#/c/138215/  As of today I’m still nursing it 
through CI, as even with projects using the “legacy” APIs, they are still 
finding lots of little silly things that I keep having to fix (people calling 
the old EngineFacade with arguments I didn’t expect, people importing from 
oslo.db in an order I did not expect, etc).  While these consuming projects 
could be fixed to not have these little issues, for now I am trying to push 
everything to work as identically as possible to how it was earlier, when the 
new API is not explicitly invoked.   I’ll be continuing to get this to pass all 
tempest runs through next week.

For enginefacade I’d like the folks from the call to take a look, and in 
particular if Matthew Booth wants to look into it, this is ready to start being 
used for prototyping Nova with it.

4. Connectivity stuff - today I worked a bunch with Viktor Sergeyev who has 
been trying to fix an issue with MySQL OperationalErrors that are raised when 
the database is shut off entirely; in oslo.db we have logic that wraps all 
exceptions unconditionally, including that it identifies disconnect exceptions. 
 In the case where the DB throws a disconnect, and we loop around to “retry” 
this query in order to get it to reconnect, then that reconnect continues to 
fail, the second run doesn’t get wrapped.   So today I’ve fixed both the 
upstream issue for SQLAlchemy 1.0, and also made a series of adjustments to 
oslo.db to accommodate SQLAlchemy 1.0’s system correctly as well as to work 
around the issue when SQLAlchemy < 1.0 is present.   That’s a series of three 
patches that are unsurprisingly going to take some nursing to get through the 
gate, so I’ll be continuing with that next week.  This series starts at 
https://review.openstack.org/139725 https://review.openstack.org/139733 
https://review.openstack.org/139738 .

5. SQLA 1.0 stuff. - getting SQLAlchemy 1.0 close to release is becoming 
critical so I’ve been moving around issues and priorities to expedite this.  
There’s many stability enhancements oslo.db would benefit from as well as some 
major performance-related features that I’ve been planning all along to 
introduce to projects.   1.0 is very full of lots of changes that aren’t really 
being tested outside of my own CI, so getting something out the door on it is 
key, otherwise it will just be too different from 0.9 in order for people to 
have smooth upgrades.   I do run SQLA 1.0 in CI against a subset of Neutron, 
Nova, Keystone and Oslo tests so we should be in OK shape, but there is still a 
lot to go.  Work completed so far can be seen at 
http://docs.sqlalchemy.org/en/latest/changelog/migration_10.html.  


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] Mike Bayer 20141205

2014-12-05 Thread Mike Bayer
this was sent to the wrong list!   please ignore.   (or if you find it 
interesting, then great!)


> On Dec 5, 2014, at 6:13 PM, Mike Bayer  wrote:
> 
> 1. Alembic release - I worked through some regressions introduced by Alembic 
> 0.7.0 and the subsequent 0.7.1 with the Neutron folks.  This started on 
> Monday with https://review.openstack.org/#/c/137989/, and by Wednesday I had 
> identified enough small regressions in 0.7.0 that I had to put 0.7.1 out, so 
> that review got expedited with https://review.openstack.org/#/c/138998/ 
> following from Neutron devs to continue fixing.   Version 0.7.1 includes the 
> foreign key autogenerate support first proposed by Ann Kamyshnikova.  
> Changelog at 
> http://alembic.readthedocs.org/en/latest/changelog.html#change-0.7.1.
> 
> 2. MySQL driver stuff.   I have a SQLAlchemy user who is running some kind of 
> heavy load with gevent and PyMySQL.  While this user is not 
> openstack-specific, the thing he is doing is a lot like what we might be 
> doing if and when we move our MySQL drivers to MySQL-connector-Python, which 
> is compatible with eventlet in that it is pure Python and can be 
> monkeypatched.The issue observed by this user applies to both PyMySQL and 
> MySQL-connector, and I can reproduce it *without* using SQLAlchemy, though it 
> does use a very makeshift connection pool designed to approximate what 
> SQLAlchemy’s does.   The issue is scary because it illustrates Python code 
> that should have been killed being invoked on a database connection that 
> should have been dead, calling commit(), and then actually *succeeding* in 
> committing only *part* of the data.   This is not an issue that impacts 
> Openstack right now but if the same thing applies to eventlet, then this 
> would definitely be something we’d need to worry about if we start using 
> MySQL-connector in a high load scenario (which has been the plan) so I’ve 
> forwarded my findings onto openstack-dev to see if anyone can help me 
> understand it.  The intro + test case for this issue starts at 
> http://lists.openstack.org/pipermail/openstack-dev/2014-December/052344.html. 
> 
> 3. enginefacade - The engine facade as I described in 
> https://review.openstack.org/#/c/125181/, which we also talked about on the 
> Nova compute call this week, is now built!  I spent monday and tuesday on the 
> buildout for this, and that can be seen and reviewed here: 
> https://review.openstack.org/#/c/138215/  As of today I’m still nursing it 
> through CI, as even with projects using the “legacy” APIs, they are still 
> finding lots of little silly things that I keep having to fix (people calling 
> the old EngineFacade with arguments I didn’t expect, people importing from 
> oslo.db in an order I did not expect, etc).  While these consuming projects 
> could be fixed to not have these little issues, for now I am trying to push 
> everything to work as identically as possible to how it was earlier, when the 
> new API is not explicitly invoked.   I’ll be continuing to get this to pass 
> all tempest runs through next week.
> 
> For enginefacade I’d like the folks from the call to take a look, and in 
> particular if Matthew Booth wants to look into it, this is ready to start 
> being used for prototyping Nova with it.
> 
> 4. Connectivity stuff - today I worked a bunch with Viktor Sergeyev who has 
> been trying to fix an issue with MySQL OperationalErrors that are raised when 
> the database is shut off entirely; in oslo.db we have logic that wraps all 
> exceptions unconditionally, including that it identifies disconnect 
> exceptions.  In the case where the DB throws a disconnect, and we loop around 
> to “retry” this query in order to get it to reconnect, then that reconnect 
> continues to fail, the second run doesn’t get wrapped.   So today I’ve fixed 
> both the upstream issue for SQLAlchemy 1.0, and also made a series of 
> adjustments to oslo.db to accommodate SQLAlchemy 1.0’s system correctly as 
> well as to work around the issue when SQLAlchemy < 1.0 is present.   That’s a 
> three-series of patches that are unsurprisingly going to take some nursing to 
> get through the gate, so I’ll be continuing with that next week.  This series 
> starts at https://review.openstack.org/139725 
> https://review.openstack.org/139733 https://review.openstack.org/139738 .
> 
> 5. SQLA 1.0 stuff. - getting SQLAlchemy 1.0 close to release is becoming 
> critical so I’ve been moving around issues and priorities to expedite this.  
> There’s many stability enhancements oslo.db would benefit from as well as 
> some major performance-related features that I’ve been planning all along to 
> introduce to projects.   1.0 is very full of lots of changes that aren’t 
> really being tested outside of my own CI, so getting something out the door 
> on it is key, otherwise it will just be too different from 0.9 in order for 
> people to have smooth upgrades.   I do run SQLA 1.0 in CI against a subset of 
>

Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit recap and move forward

2014-12-05 Thread joehuang
Hello, Davanum,

Thanks for your reply.

Cells can't meet the demands of the use cases and requirements described in
the mail.

> 1. Use cases
> a). Vodafone use case[4](OpenStack summit speech video from 9'02" to 12'30" 
> ), establishing globally addressable tenants which result in efficient 
> services deployment.
> b). Telefonica use case[5], create virtual DC( data center) cross multiple 
> physical DCs with seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. For
> an NFV cloud, it is in its nature that the cloud will be distributed but
> inter-connected across many data centers.
>
> 2.requirements
> a). The operator has multiple sites cloud; each site can use one or multiple 
> vendor’s OpenStack distributions.
> b). Each site with its own requirements and upgrade schedule while 
> maintaining standard OpenStack API
> c). The multi-site cloud must provide unified resource management with global 
> Open API exposed, for example create virtual DC cross multiple physical DCs 
> with seamless experience.
> Although a proprietary orchestration layer could be developed for the
> multi-site cloud, it would be a proprietary API on the northbound interface.
> The cloud operators want an ecosystem-friendly global open API for the
> multi-site cloud for global access.
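
To illustrate the core idea in code: conceptually, the cascading Nova virt
driver is just a proxy to the child OpenStack's Nova API, roughly like the
sketch below (all names are invented; this is not the PoC code).

from novaclient.v1_1 import client


class CascadedNovaDriver(object):
    """Treat an entire child OpenStack as if it were one hypervisor."""

    def __init__(self, username, password, tenant, child_auth_url):
        self._child = client.Client(username, password, tenant,
                                    child_auth_url)

    def spawn(self, context, instance, image_meta, *args, **kwargs):
        # Boot in the child cloud; the child's own scheduler then picks
        # the real host for the instance.
        self._child.servers.create(
            name=instance['display_name'],
            image=image_meta['id'],
            flavor=instance['instance_type_id'])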


Best Regards

Chaoyi Huang ( joehuang )


From: Davanum Srinivas [dava...@gmail.com]
Sent: 05 December 2014 21:56
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [tc] [PTL] Cascading vs. Cells – summit 
recap and move forward

Joe,

Related to this topic, At the summit, there was a session on Cells v2
and following up on that there have been BP(s) filed in Nova
championed by Andrew -
https://review.openstack.org/#/q/owner:%22Andrew+Laski%22+status:open,n,z

thanks,
dims

On Fri, Dec 5, 2014 at 8:23 AM, joehuang  wrote:
> Dear all & TC & PTL,
>
> In the 40 minutes cross-project summit session “Approaches for scaling 
> out”[1], almost 100 peoples attended the meeting, and the conclusion is that 
> cells can not cover the use cases and requirements which the OpenStack 
> cascading solution[2] aim to address, the background including use cases and 
> requirements is also described in the mail.
>
> After the summit, we just ported the PoC[3] source code from IceHouse based 
> to Juno based.
>
> Now, let's move forward:
>
> The major task is to introduce new driver/agent to existing core projects, 
> for the core idea of cascading is to add Nova as the hypervisor backend of 
> Nova, Cinder as the block storage backend of Cinder, Neutron as the backend 
> of Neutron, Glance as one image location of Glance, Ceilometer as the store 
> of Ceilometer.
> a). Need cross-program decision to run cascading as an incubated project mode 
> or register BP separately in each involved project. CI for cascading is quite 
> different from traditional test environment, at least 3 OpenStack instance 
> required for cross OpenStack networking test cases.
> b). Volunteer as the cross project coordinator.
> c). Volunteers for implementation and CI.
>
> Background of OpenStack cascading vs cells:
>
> 1. Use cases
> a). Vodafone use case[4](OpenStack summit speech video from 9'02" to 12'30" 
> ), establishing globally addressable tenants which result in efficient 
> services deployment.
> b). Telefonica use case[5], create virtual DC( data center) cross multiple 
> physical DCs with seamless experience.
> c). ETSI NFV use cases[6], especially use cases #1, #2, #3, #5, #6, #8. For
> an NFV cloud, it is in its nature that the cloud will be distributed but
> inter-connected across many data centers.
>
> 2. Requirements
> a). The operator has a multi-site cloud; each site can use one or multiple 
> vendors' OpenStack distributions.
> b). Each site has its own requirements and upgrade schedule while 
> maintaining the standard OpenStack API.
> c). The multi-site cloud must provide unified resource management with a 
> global open API exposed, for example creating a virtual DC across multiple 
> physical DCs with a seamless experience.
> Although a proprietary orchestration layer could be developed for the 
> multi-site cloud, it would be a proprietary API on the north-bound 
> interface. Cloud operators want an ecosystem-friendly, global open API to 
> the multi-site cloud for global access.
>
> 3. What problems does cascading solve that cells don't cover:
> The OpenStack cascading solution is "OpenStack orchestrating OpenStacks". 
> The core architectural idea of OpenStack cascading is to add Nova as the 
> hypervisor backend of Nova, Cinder as the block storage backend of Cinder, 
> Neutron as the backend of Neutron, Glance as one image location of Glance, 
> and Ceilometer as the store of Ceilometer. Thus OpenStack is able to 
> orchestrate OpenStacks (from different vendors' distributions, or different 
> versions) which may be located in different sites (or data centers) through 
> the OpenStack API,
Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-05 Thread Neil Jerram
Ian Wells  writes:

> On 4 December 2014 at 08:00, Neil Jerram 
> wrote:
>
> Kevin Benton  writes:
> I was actually floating a slightly more radical option than that:
> the
> idea that there is a VIF type (VIF_TYPE_NOOP) for which Nova does
> absolutely _nothing_, not even create the TAP device.
> 
>
> Nova always does something, and that something amounts to 'attaches
> the VM to where it believes the endpoint to be'. Effectively you
> should view the VIF type as the form that's decided on during
> negotiation between Neutron and Nova - Neutron says 'I will do this
> much and you have to take it from there'. (In fact, I would prefer
> that it was *more* of a negotiation, in the sense that the hypervisor
> driver had a say to Neutron of what VIF types it supported and
> preferred, and Neutron could choose from a selection, but I don't
> think it adds much value at the moment and I didn't want to propose a
> change just for the sake of it.) I think you're just proposing that
> the hypervisor driver should do less of the grunt work of connection.
>
> Also, libvirt is not the only hypervisor driver and I've found it
> interesting to nose through the others for background reading, even if
> you're not using them much.
>
> For example, suppose someone came along and wanted to implement a
> new
> OVS-like networking infrastructure? In principle could they do
> that
> without having to enhance the Nova VIF driver code? I think at the
> moment they couldn't, but that they would be able to if
> VIF_TYPE_NOOP
> (or possibly VIF_TYPE_TAP) was already in place. In principle I
> think
> it would then be possible for the new implementation to specify
> VIF_TYPE_NOOP to Nova, and to provide a Neutron agent that does
> the kind
> of configuration and vSwitch plugging that you've described above.
> 
>
> At the moment, the rule is that *if* you create a new type of
> infrastructure then *at that point* you create your new VIF plugging
> type to support it - vhostuser being a fine example, having been
> rejected on the grounds that it was, at the end of Juno, speculative.
> I'm not sure I particularly like this approach but that's how things
> are at the moment - largely down to not wanting to add code that isn't
> used and therefore isn't tested.
>
> None of this is criticism of your proposal, which sounds reasonable; I
> was just trying to provide a bit of context.

Many thanks for your explanations; I think I'm understanding this more
fully now.  For example, I now see that, when using libvirt, Nova has to
generate config that describes all aspects of the VM to launch,
including how the VNIC is implemented and how it's bound to networking
on the host.  Also different hypervisors, or layers like libvirt, may go
to different lengths as regards how far they connect the VNIC to some
form of networking on the host, and I can see that Nova would want to
normalize that, i.e. to ensure that a predictable level of connectivity
has always been achieved, regardless of hypervisor, by the time that
Nova hands over to someone else such as Neutron.
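
(To make that concrete, here is a minimal sketch, emphatically not the
actual Nova driver code, of the kind of libvirt interface config that
VIF_TYPE_TAP implies; the device name is made up:)

def tap_interface_xml(dev='tapabc12345-67'):
    # With type='ethernet' and an empty script path, libvirt creates
    # the TAP device itself and leaves it unplugged on the host, i.e.
    # the predictable level of connectivity discussed above.
    return ('<interface type="ethernet">\n'
            '  <target dev="%s"/>\n'
            '  <model type="virtio"/>\n'
            '  <script path=""/>\n'
            '</interface>' % dev)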

Therefore I see now that Nova _must_ be involved to some extent in VIF
plugging, and hence that VIF_TYPE_NOOP doesn't fly.

For a minimal, generic implementation of an unbridged TAP interface,
then, we're back to VIF_TYPE_TAP as I've proposed in
https://review.openstack.org/#/c/130732/.  I've just revised and
reuploaded this, based on the insight provided by this ML thread, and
hope people will take a look.

Many thanks,
 Neil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] Boundary between Nova and Neutron involvement in network setup?

2014-12-05 Thread Neil Jerram
Kevin Benton  writes:

> I see the difference now. 
> The main concern I see with the NOOP type is that creating the virtual
> interface could require different logic for certain hypervisors. In
> that case Neutron would now have to know things about nova and to me
> it seems like that's slightly too far the other direction. 

Many thanks, Kevin.  I see this now too, as I've just written more fully
in my response to Ian.

Based on your and others' insight, I've revised and reuploaded my
VIF_TYPE_TAP spec, and hope it's a lot clearer now.

Regards,
 Neil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] Code pointer for processing cinder backend config

2014-12-05 Thread Pradip Mukhopadhyay
Hello,


Suppose I have a backend specification in cinder.conf as follows:

[nfs_pradipm]
volume_backend_name=nfs_pradipm
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=IP
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=admin
netapp_password=password
netapp_vserver=my_vs1
nfs_shares_config=/home/ubuntu/nfs.shares



Where is this config info getting parsed in the cinder code?
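
My rough understanding (a sketch, not cinder's actual code) is that each
backend section becomes an oslo.config option group, something like:

from oslo.config import cfg

CONF = cfg.ConfigOpts()
# Register the driver's options under the backend's section name, so
# values are read from [nfs_pradipm] rather than [DEFAULT].
CONF.register_opts([cfg.StrOpt('volume_backend_name'),
                    cfg.StrOpt('volume_driver')],
                   group='nfs_pradipm')
CONF(['--config-file', '/etc/cinder/cinder.conf'])
print(CONF.nfs_pradipm.volume_driver)

But I haven't been able to find where cinder does the equivalent
registration for each backend.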



Thanks,
Pradip
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id

2014-12-05 Thread Ian Wells
I have no problem with standardising the API, and I would suggest that a
service that provided nothing but endpoints could begin as the next
phase of the 'advanced services' broken-out projects, to standardise that
API.  I just don't want it in Neutron itself.

On 5 December 2014 at 00:33, Erik Moe  wrote:

>
>
> One reason for trying to get a more complete API into Neutron is to have
> a standardized API, so users know what to expect and providers have
> something to comply with. Do you suggest we bring this standardization work
> to some other forum, OPNFV for example? Neutron provides low-level hooks
> and the rest is defined elsewhere. Maybe this could work, but there would
> probably be other issues if the actual implementation is not in Neutron but
> at the edge or outside.
>
>
>
> /Erik
>
>
>
>
>
> *From:* Ian Wells [mailto:ijw.ubu...@cack.org.uk]
> *Sent:* den 4 december 2014 20:19
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Neutron] Edge-VPN and Edge-Id
>
>
>
> On 1 December 2014 at 21:26, Mohammad Hanif  wrote:
>
>   I hope we all understand how edge VPN works and what interactions are
> introduced as part of this spec.  I see references to neutron-network
> mapping to the tunnel, which is not at all the case, and the edge-VPN spec
> doesn’t propose it.  At a very high level, there are two main concepts:
>
>1. Creation of a per-tenant VPN “service” on a PE (physical router)
>which has connectivity to other PEs using some tunnel (neither known to
>the tenant nor tenant-facing).  An attachment circuit for this VPN service
>is also created, which carries a “list” of tenant networks (the list is
>initially empty).
>2. Tenant “updates” the list of tenant networks in the attachment
>circuit, which essentially allows the VPN “service” to add or remove a
>network from being part of that VPN.
>
>  A service plugin implements what is described in (1) and provides an API
> which is called by what is described in (2).  The Neutron driver only
> “updates” the attachment circuit using an API (attachment circuit is also
> part of the service plugin's data model).  I don't see where we are
> introducing large data model changes to Neutron?
>
>
>
> Well, you have attachment types, tunnels, and so on - these are all
> objects with data models, and your spec is on Neutron so I'm assuming you
> plan on putting them into the Neutron database - where they are, for ever
> more, a Neutron maintenance overhead both on the dev side and also on the
> ops side, specifically at upgrade.
>
>
>
>   How else does one introduce a network service in OpenStack if it is not
> through a service plugin?
>
>
>
> Again, I've missed something here, so can you define 'service plugin' for
> me?  How similar is it to a Neutron extension - which we agreed at the
> summit we should take pains to avoid, per Salvatore's session?
>
> And the answer to that is to stop talking about plugins or trying to
> integrate this into the Neutron API or the Neutron DB, and make it an
> independent service with a small and well-defined interaction with Neutron,
> which is what the edge-id proposal suggests.  If we do incorporate it into
> Neutron then there are probably 90% of OpenStack users and developers who
> don't want or need it but care a great deal if it breaks the tests.  If it
> isn't in Neutron they simply don't install it.
>
>
>
>   As we can see, the tenant needs to communicate (explicitly or otherwise) to
> add/remove its networks to/from the VPN.  There has to be a channel and the
> APIs to achieve this.
>
>
>
> Agreed.  I'm suggesting it should be a separate service endpoint.
> --
>
> Ian.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Fixing the console.log grows forever bug.

2014-12-05 Thread Tony Breeds
Hi All,
In the most recent team meeting we briefly discussed [1], the bug where the
console.log grows indefinitely, eventually causing guest stalls.  I mentioned
that I was working on a spec to fix this issue.

My original plan was fairly similar to [2], in that we'd switch libvirt/qemu to
using a unix domain socket and write a simple helper to read from that socket
and write to disk.  That helper would close and reopen the on-disk file upon
receiving a HUP (so logrotate just works).  Life would be good, and we could
all move on.
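
As a strawman, the helper could be as simple as this sketch (paths are
made up and EINTR handling is elided):

import signal
import socket

SOCK_PATH = '/var/lib/nova/instances/<uuid>/console.sock'  # made-up path
LOG_PATH = '/var/lib/nova/instances/<uuid>/console.log'    # made-up path

state = {'reopen': False}

def on_hup(signum, frame):
    # logrotate sends HUP after rotating the file out from under us.
    state['reopen'] = True

signal.signal(signal.SIGHUP, on_hup)

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect(SOCK_PATH)
log = open(LOG_PATH, 'ab')
while True:
    data = sock.recv(4096)
    if not data:
        break                       # qemu closed its end
    if state['reopen']:
        log.close()                 # release the rotated file
        log = open(LOG_PATH, 'ab')  # start a fresh file at the same path
        state['reopen'] = False
    log.write(data)
    log.flush()
log.close()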

However I was encouraged to investigate fixing this in qemu, such that qemu
could process the HUP and make life better for all.  This is certainly doable
and I'm happy[3] to do this work.  I've floated the idea past qemu-devel and
they seem okay with it.  My main concern is the lag, and supporting
qemu/libvirt versions that can't handle this option.

For the sake of discussion, I'll lay out my best guess right now on fixing
this in qemu.

qemu 2.2.0 /should/ be released this year (the ETA is 2014-12-09[4]), so the
fix I'm proposing would be available in qemu 2.3.0, which I think will be
available in June/July 2015.  So we'd be into 'L' development before this fix
is available, possibly 'M' before the community distros (Fedora and Ubuntu)[5]
include it, and almost certainly longer for enterprise distros.  Along with
the qemu development I expect there to be some libvirt development as well,
but right now I don't think that's critical to the feature or this discussion.

So if that timeline is approximately correct:

- Can we wait this long to fix the bug, as opposed to having it squashed in 
Kilo?
- What do we do in nova for the next ~12 months while we know there isn't a 
qemu that fixes this?
- Then once there is a qemu that fixes the issue, do we just say 'thou must use
  qemu 2.3.0', or would nova still need to support both old and new qemus?

[1] https://bugs.launchpad.net/nova/+bug/832507
[2] https://review.openstack.org/#/c/80865/
[3] For some value of happy ;P
[4] From http://wiki.qemu.org/Planning/2.2
[5] Debian and Gentoo are a little harder to quantify in this scenario but no
less important.

Yours Tony.

PS: If any of you have a secret laundry list of things qemu should do to make
life easier for nova.  Put them on a wiki page so we can discuss them.
PPS: If this is going to be a thing we do (write features and fixes in qemu)
 we're going to need a consistent plan on how we cope with that.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Session length on wiki.openstack.org

2014-12-05 Thread Tony Breeds
On Fri, Dec 05, 2014 at 02:26:46PM +, Jeremy Stanley wrote:
> On 2014-12-04 18:37:48 -0700 (-0700), Carl Baldwin wrote:
> > +1  I've been meaning to say something like this but never got
> > around to it.  Thanks for speaking up.
> 
> https://storyboard.openstack.org/#!/story/1172753
> 
> I think Ryan said it might be a bug in the OpenID plug-in, but if so
> he didn't put that comment in the bug.

Thanks.  I'll try to track that.
 
Yours Tony.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev