Re: [openstack-dev] [Neutron] VIP for LBaaS on same port?

2013-09-05 Thread Itsuro ODA
Hi,

Please consider making the following use case available too:
the IP address/subnet of the VIPs is the same but the protocol_ports are
different.
This is not possible currently either.
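To make the use case concrete, here is a minimal sketch of the two vip-create request bodies involved (the UUIDs are placeholders and the helper is purely illustrative): both VIPs share one address and differ only in protocol_port, which is exactly what the current code rejects.

```python
# Sketch of two Neutron LBaaS VIPs that share one address but listen on
# different protocol_ports -- the use case that is not yet supported.
# All UUIDs are placeholders and the helper is purely illustrative.

def make_vip_body(name, address, protocol, protocol_port,
                  subnet_id, pool_id):
    """Build a vip-create request body in the LBaaS v1 style."""
    return {"vip": {
        "name": name,
        "address": address,              # same address for both VIPs
        "protocol": protocol,
        "protocol_port": protocol_port,  # this is what differs
        "subnet_id": subnet_id,
        "pool_id": pool_id,
    }}

http_vip = make_vip_body("web-http", "10.0.0.10", "HTTP", 80,
                         "subnet-uuid", "pool-a-uuid")
https_vip = make_vip_body("web-https", "10.0.0.10", "TCP", 443,
                          "subnet-uuid", "pool-b-uuid")

# With python-neutronclient this would map to two create_vip() calls;
# today the second is rejected because each VIP gets its own port.
```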

Thanks.

On Thu, 5 Sep 2013 10:26:36 +0400
Eugene Nikanorov  wrote:

> Hi Stephen,
> 
> Currently it's not possible, but we're planning to change this in the near
> future.
> There is a blueprint that proposes changing the vip-pool relationship
> (from 1:1 to m:n); as part of its implementation, the unconditional L2 port
> creation will be removed from loadbalancer_db.py.
> 
> Here's a blueprint:
> https://blueprints.launchpad.net/neutron/+spec/lbaas-multiple-vips-per-pool
> Here's a link to review which removes port creation and moves it into the
> plugin driver instead: https://review.openstack.org/#/c/41396/
> Currently the patch is abandoned until Icehouse, but you can experiment
> with it.
> 
> Feel free to ask any further questions.
> 
> Thanks,
> Eugene.
> 
> 
> On Thu, Sep 5, 2013 at 10:13 AM, Stephen Gran
> wrote:
> 
> > Hi,
> >
> > One of the things I'll be looking at in the near future is writing a
> > driver for the neutron lbaas service to talk to a bit of hardware we have.
> >  The normal idiom with this hardware is to have a single interface, with
> > multiple IP addresses attached.  It doesn't look like this is currently
> > possible in the lbaas model - loadbalancer_db.py unconditionally creates a
> > port.
> >
> > What I am hoping to be able to do is create instances within openstack
> > based on appliance images, give them a neutron port on the right subnet,
> > and then add secondary IPs as people create loadbalancers.  This would also
> > give us the flexibility to attach security groups to that single port more
> > easily, but that's a nice side effect.
> >
> > Does this sound possible?  What would be the best way of achieving this,
> > given the way things work currently?
> >
> > Cheers,
> > --
> > Stephen Gran
> > Senior Systems Integrator - theguardian.com
> > Please consider the environment before printing this email.
> > --
> > Visit theguardian.com
> > On your mobile, download the Guardian iPhone app theguardian.com/iphone and
> > our iPad edition theguardian.com/iPad
> > Save up to 32% by subscribing to the Guardian and
> > Observer - choose the papers you want and get full digital access.
> > Visit subscribe.theguardian.com
> >
> > This e-mail and all attachments are confidential and may also
> > be privileged. If you are not the named recipient, please notify
> > the sender and delete the e-mail and all attachments immediately.
> > Do not disclose the contents to another person. You may not use
> > the information for any purpose, or store, or copy, it in any way.
> >
> > Guardian News & Media Limited is not liable for any computer
> > viruses or other material transmitted with or as part of this
> > e-mail. You should employ virus checking software.
> >
> > Guardian News & Media Limited
> >
> > A member of Guardian Media Group plc
> > Registered Office
> > PO Box 68164
> > Kings Place
> > 90 York Way
> > London
> > N1P 2AP
> >
> > Registered in England Number 908396
> >
> > --
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

-- 
Itsuro ODA 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VIP for LBaaS on same port?

2013-09-05 Thread Stephen Gran

On 05/09/13 08:27, Itsuro ODA wrote:

Hi,

Please consider making the following use case available too:
the IP address/subnet of the VIPs is the same but the protocol_ports are
different.
This is not possible currently either.


Oh yes, I'll certainly need that ability as well.  Well, one more thing 
to put on the list to look at :)


Cheers,
--
Stephen Gran
Senior Systems Integrator - theguardian.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] VIP for LBaaS on same port?

2013-09-05 Thread Eugene Nikanorov
Itsuro, Stephen,

Support for this ability is planned in the scope of the blueprint I've
mentioned.

Thanks,
Eugene.


On Thu, Sep 5, 2013 at 11:39 AM, Stephen Gran
wrote:

> On 05/09/13 08:27, Itsuro ODA wrote:
>
>> Hi,
>>
>> Please consider making the following use case available too:
>> the IP address/subnet of the VIPs is the same but the protocol_ports are
>> different.
>> This is not possible currently either.
>>
>
> Oh yes, I'll certainly need that ability as well.  Well, one more thing to
> put on the list to look at :)
>
>
> Cheers,
> --
> Stephen Gran
> Senior Systems Integrator - theguardian.com
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [brick] Status and plans for the brick shared volume code

2013-09-05 Thread Thierry Carrez
John Griffith wrote:
> The code currently is and will be maintained in Cinder, and the Cinder
> team will sync changes across to Nova.  The first order of business for
> Icehouse will be to get the library built up and usable, then convert
> over to using that so as to avoid the syncing issues.

This may have been discussed before, but is there any reason to avoid
the Oslo incubator for such a library?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Revert Baremetal v3 API extension?

2013-09-05 Thread John Garbutt
+1 I meant to raise that myself when I saw some changes there the other day.

On 4 September 2013 15:52, Thierry Carrez  wrote:
> Russell Bryant wrote:
>> On 09/04/2013 10:26 AM, Dan Smith wrote:
>>> Hi all,
>>>
>>> As someone who has felt about as much pain as possible from the
>>> dual-maintenance of the v2 and v3 API extensions, I felt compelled to
>>> bring up one that I think we can drop. The baremetal extension was
>>> ported to v3 API before (I think) the decision was made to make v3
>>> experimental for Havana. There are a couple of patches up for review
>>> right now that make obligatory changes to one or both of the versions,
>>> which is what made me think about this.
>>>
>>> Since Ironic is on the horizon and was originally slated to deprecate
>>> the in-nova-tree baremetal support for Havana, and since v3 is only
>>> experimental in Havana, I think we can drop the baremetal extension for
>>> the v3 API for now. If Nova's baremetal support isn't ready for
>>> deprecation by the time we're ready to promote the v3 API, we can
>>> re-introduce it at that time. Until then, I propose we avoid carrying
>>> it for a soon-to-be-deprecated feature.
>>>
>>> Thoughts?
>>
>> Sounds reasonable to me.  Anyone else have a differing opinion about it?
>
> +1
>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Revert Baremetal v3 API extension?

2013-09-05 Thread Alex Xu

+1
On 2013-09-05 17:51, John Garbutt wrote:

+1 I meant to raise that myself when I saw some changes there the other day.

On 4 September 2013 15:52, Thierry Carrez  wrote:

Russell Bryant wrote:

On 09/04/2013 10:26 AM, Dan Smith wrote:

Hi all,

As someone who has felt about as much pain as possible from the
dual-maintenance of the v2 and v3 API extensions, I felt compelled to
bring up one that I think we can drop. The baremetal extension was
ported to v3 API before (I think) the decision was made to make v3
experimental for Havana. There are a couple of patches up for review
right now that make obligatory changes to one or both of the versions,
which is what made me think about this.

Since Ironic is on the horizon and was originally slated to deprecate
the in-nova-tree baremetal support for Havana, and since v3 is only
experimental in Havana, I think we can drop the baremetal extension for
the v3 API for now. If Nova's baremetal support isn't ready for
deprecation by the time we're ready to promote the v3 API, we can
re-introduce it at that time. Until then, I propose we avoid carrying
it for a soon-to-be-deprecated feature.

Thoughts?

Sounds reasonable to me.  Anyone else have a differing opinion about it?

+1

--
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] Meeting agenda for Thu Sep 5th at 1500 UTC

2013-09-05 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Thu Sep 5th at 1500 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Review Havana-3 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-3
* State of DB2 driver (dhellmann)
  * https://bugs.launchpad.net/ceilometer/+bug/1208547 
  * The DB2 driver does not return the right data for get_resources(), and
it doesn't look like we know how to fix it. On IRC on Aug 3 we discussed
the idea of removing the driver from the release by RC1 if we don't have
a solution. We need to discuss and agree on the approach we want to take
to address the issue.
* Release python-ceilometerclient? 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Savanna] Guidance for adding a new plugin (CDH)

2013-09-05 Thread Andrei Savu
Thanks Matt!

I've added the following blueprint (check the full specification for more
details):
https://blueprints.launchpad.net/savanna/+spec/cdh-plugin

I'm now working on some code to get early feedback.

Regards,

-- Andrei Savu / axemblr.com

On Wed, Sep 4, 2013 at 11:35 PM, Matthew Farrellee  wrote:

> On 09/04/2013 04:06 PM, Andrei Savu wrote:
>
>> Hi guys -
>>
>> I have just started to play with Savanna a few days ago - I'm still
>> going through the code. Next week I want to start to work on a plugin
>> that will deploy CDH using Cloudera Manager.
>>
>> What process should I follow? I'm new to launchpad / Gerrit. Should I
>> start by creating a blueprint and a bug / improvement request?
>>
>
> Savanna is following all OpenStack community practices so you can check
> out https://wiki.openstack.org/wiki/How_To_Contribute to
> get a good idea of what to do.
>
> In short, yes please use launchpad and gerrit and create a blueprint.
>
>
>  Is there any public OpenStack deployment that I can use for testing?
>> Should 0.2 work with Grizzly at trystack.org ?
>>
>
> 0.2 will work with Grizzly. I've not tried trystack so let us know if it
> works.
>
>
> Best,
>
>
> matt
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Feature Freeze Notes

2013-09-05 Thread Russell Bryant
Greetings,

We have now passed the freeze.  The great news is that we were able to
merge the code for 40 blueprints [4]!  That's roughly the same that we
completed in h1 and h2 combined.  (We're also trying to get a couple
more through the gate before making the havana-3 branch)

The bad news is that *way* too much code came in during havana-3 as
opposed to more evenly spread out throughout the development cycle.
This caused a bit of a bottleneck in the review process, so some
features didn't make it.  We had hard choices to make, but reviewed and
merged as much as we could.

Thank you to everyone that put in extra effort over the last few weeks.
 I really appreciate it.  There were 4815 code reviews done by 196
people over the last 30 days in Nova and 15637 reviews by 499 people
across all projects [1][2] !

There is a documented procedure for requesting a feature freeze
exception [3].  For Nova, I would add that I would also like to see the
request posted to the mailing list with a subject that looks something
like "[Nova] FFE Request: ".  That way, interested parties
can chime in easily.  It will also help get my (and others') attention
to the request.

Thanks!

[1] http://russellbryant.net/openstack-stats/nova-reviewers-30.txt
[2] http://russellbryant.net/openstack-stats/all-reviewers-30.txt
[3] https://wiki.openstack.org/wiki/FeatureFreeze
[4] https://launchpad.net/nova/+milestone/havana-3

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenLdap for Keystone

2013-09-05 Thread Brad Topol
devstack has the ability to install Keystone with OpenLDAP and configure 
the two together.  Look at the online documentation for stack.sh to see 
how to enable that configuration.
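For what it's worth, the devstack side is usually just a couple of localrc lines; treat the variable and service names below as a sketch to verify against your devstack checkout rather than gospel:

```shell
# localrc sketch: have devstack configure Keystone against OpenLDAP.
# Variable and service names are assumptions -- check stack.sh's
# documentation for the ones your devstack version expects.
enable_service ldap
KEYSTONE_IDENTITY_BACKEND=ldap
LDAP_PASSWORD=secret
```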

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Cindy Willman (919) 268-5296



From:   "Miller, Mark M (EB SW Cloud - R&D - Corvallis)" 

To: OpenStack Development Mailing List 

Date:   09/04/2013 06:32 PM
Subject:[openstack-dev] OpenLdap for Keystone



Hello,
 
I have been struggling trying to configure OpenLdap to work with Keystone. 
I have found a gazillion snippets about this topic, but no step-by-step 
documents on how to install and configure OpenLdap so it will work with 
current Keystone releases. I am hoping that someone has a tested answer 
for me.
 
Thanks in advance,
 
Mark Miller

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift data serialization

2013-09-05 Thread CHABANI Mohamed El Hadi
Hi all,

I'm using an integration tool that works on top of my Swift API; this tool
requires that the data coming back from Swift be structured in XML or JSON
format. To get container/object lists, the operation is quite simple: I
just add 'format=xml' or 'format=json' to get the data serialized.

But for some operations, such as authentication or getting picture
metadata, and lots of other use cases in Swift, I don't know how to convert
the plain text into XML/JSON. If someone could help me with this, it would
be really great.
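Those cases behave quite differently, and a short sketch may help (the URLs and token below are placeholders): listings can be serialized server-side via the format query parameter, whereas auth responses and metadata come back as HTTP headers with no body, so there is nothing for Swift to serialize; you build the JSON yourself from the headers:

```python
import json

# 1) Listings: Swift will serialize these itself -- just append the
#    query parameter, e.g.:
#      GET /v1/AUTH_account/container?format=json
listing_url = ("http://swift.example.com/v1/AUTH_account/"
               "container?format=json")

# 2) Auth responses and object metadata arrive as HTTP *headers*,
#    not as a body, so convert the header dict to JSON client-side.
auth_headers = {                       # placeholder values
    "X-Auth-Token": "AUTH_tk0123456789",
    "X-Storage-Url": "http://swift.example.com/v1/AUTH_account",
}
auth_as_json = json.dumps(auth_headers, sort_keys=True)
```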

Thank you
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] Weekly IRC meetings

2013-09-05 Thread Tomas Sedovic

Hey everyone,

after much wrangling, we've come to something resembling a consensus: 
the weekly IRC meetings will be held on Tuesdays 19:00 UTC. This should 
accommodate the US folks (who always have it easy), and both lifeless 
and devs in Europe.


The details are documented here:

https://wiki.openstack.org/wiki/Meetings#Tuskar_meeting

I'll send out the agenda for the first meeting later (at most 24 hours 
before the meeting starts).


Three cheers for timezones!

--
Tomas Sedovic
Tuskar PTL

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] Small cluster size

2013-09-05 Thread Snider, Tim
I'd like to get input from the community on a 'realistic' size of a small Swift 
cluster that might be deployed & used in the field for production. SAIO / test 
/ lab setups aren't a consideration. I'm interested in hearing about both 
private and public cluster sizes that are deployed for production use.  4 nodes 
or fewer seems pretty small - 6 or 8 seems like a more realistic size 
of a small cluster. But I don't have any actual data/customer experience behind 
those assumptions.
Followup questions:
Given that cluster size,  do all nodes act as both Swift proxy and storage 
nodes? I assume they do.
How big does a cluster get before node roles are separated?
Thanks for the input,
Tim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Issue with bulk create when custom collection name is used

2013-09-05 Thread Purandhar Sairam Mannidi
Hi,

I'm observing an issue in the prepare_request_body function in the
api/v2/base.py file when using the bulk create option with a custom resource
name (like proxy) and collection name (proxies).

It checks for "proxys" as the collection name. Can't we use PLURALS in
api/v2/attributes.py, where we maintain the collection name to resource name
mapping?
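Something like the following toy sketch shows the difference (illustrative only; the real PLURALS map in api/v2/attributes.py is keyed collection -> resource and is used a bit differently by the API layer):

```python
# Toy contrast between naive pluralization (what bites the "proxy"
# resource) and an explicit irregular-plural map.  Illustrative only.
PLURALS = {"proxies": "proxy"}   # collection -> resource

def naive_collection(resource):
    """Guess the collection name by appending 's'."""
    return resource + "s"

def mapped_collection(resource):
    """Prefer the irregular-plural map, fall back to the naive guess."""
    for collection, res in PLURALS.items():
        if res == resource:
            return collection
    return naive_collection(resource)

print(naive_collection("proxy"))   # proxys  (the reported bug)
print(mapped_collection("proxy"))  # proxies
```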

If this is an issue, then I'll raise a bug. Please comment.


Thanks,

Sairam
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Issue with bulk create when custom collection name is used

2013-09-05 Thread Purandhar Sairam Mannidi
Sorry, the collection name is proxies and the resource name is proxy. It's a typo.


On Thu, Sep 5, 2013 at 7:10 PM, Purandhar Sairam Mannidi <
sairam...@gmail.com> wrote:

> Hi,
>
>
>
> I'm observing an issue in the prepare_request_body function in the
> api/v2/base.py file when using the bulk create option with a custom resource
> name (like proxy) and collection name (proxies).
>
> It checks for "proxys" as the collection name. Can't we use PLURALS in
> api/v2/attributes.py, where we maintain the collection name to resource name
> mapping?
>
> If this is an issue, then I'll raise a bug. Please comment.
>
>
> Thanks,
>
> Sairam
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [brick] Status and plans for the brick shared volume code

2013-09-05 Thread John Griffith
On Thu, Sep 5, 2013 at 2:04 AM, Thierry Carrez wrote:

> John Griffith wrote:
> > The code currently is and will be maintained in Cinder, and the Cinder
> > team will sync changes across to Nova.  The first order of business for
> > Icehouse will be to get the library built up and usable, then convert
> > over to using that so as to avoid the syncing issues.
>
> This may have been discussed before, but is there any reason to avoid
> the Oslo incubator for such a library?
>
Not really, no; in fact that's always been a consideration
(https://blueprints.launchpad.net/oslo/+spec/shared-block-storage-library).

> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-05 Thread Mark McLoughlin
Hi

I'd like to request a feature freeze exception for the final (and
admittedly the largest) patch in the series of 40 patches to port Nova to
oslo.messaging:

  https://review.openstack.org/39929

While this change doesn't provide any immediate user-visible benefit, it
would be massively helpful in maintaining momentum behind the effort all
through the Havana cycle to move the RPC code from oslo-incubator into a
library.

In terms of risk of regression, there is certainly some risk but that risk
is mitigated by the fact that the core code of each of the transport
drivers has been modified minimally. The idea was to delay re-factoring
these drivers until we were sure that we hadn't caused any regressions in
Nova. The code has been happily passing the devstack/tempest based
integration tests for 10 days now.

Thanks,
Mark.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Weekly IRC meetings

2013-09-05 Thread Oleg Gelbukh
Tomas,

Thanks for a very interesting project, and for this meeting as an
opportunity to follow its progress!

Just to clarify: it looks like this time slot is already booked for the Savanna
meeting on the #openstack-meeting-alt channel, isn't it?

--
Best regards,
Oleg Gelbukh
Mirantis, Inc.


On Thu, Sep 5, 2013 at 5:37 PM, Tomas Sedovic  wrote:

> Hey everyone,
>
> after much wrangling, we've come to something resembling a consensus: the
> weekly IRC meetings will be held on Tuesdays 19:00 UTC. This should
> accommodate the US folks (who always have it easy), and both lifeless and
> devs in Europe.
>
> The details are documented here:
>
> https://wiki.openstack.org/wiki/Meetings#Tuskar_meeting
>
> I'll send out the agenda for the first meeting later (at most 24 hours
> before the meeting starts).
>
> Three cheers for timezones!
>
> --
> Tomas Sedovic
> Tuskar PTL
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tuskar] Weekly IRC meetings

2013-09-05 Thread Ilya Shakhat
Oleg,

Savanna meeting is on Thursdays at 18:00 UTC (
https://wiki.openstack.org/wiki/Meetings#Savanna_.28Hadoop.29_meeting)

Ilya.


2013/9/5 Oleg Gelbukh 

> Tomas,
>
> Thanks for a very interesting project, and for this meeting as an
> opportunity to follow its progress!
>
> Just to clarify: it looks like this time slot is already booked for the
> Savanna meeting on the #openstack-meeting-alt channel, isn't it?
>
> --
> Best regards,
> Oleg Gelbukh
> Mirantis, Inc.
>
>
> On Thu, Sep 5, 2013 at 5:37 PM, Tomas Sedovic  wrote:
>
>> Hey everyone,
>>
>> after much wrangling, we've come to something resembling a consensus: the
>> weekly IRC meetings will be held on Tuesdays 19:00 UTC. This should
>> accommodate the US folks (who always have it easy), and both lifeless and
>> devs in Europe.
>>
>> The details are documented here:
>>
>> https://wiki.openstack.org/wiki/Meetings#Tuskar_meeting
>>
>> I'll send out the agenda for the first meeting later (at most 24 hours
>> before the meeting starts).
>>
>> Three cheers for timezones!
>>
>> --
>> Tomas Sedovic
>> Tuskar PTL
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (some) Feature freezes / Havana-3 milestone candidates available

2013-09-05 Thread Thierry Carrez
Hi everyone,

Milestone-proposed branches were created for Keystone, Glance, Horizon,
Cinder, Ceilometer and Heat in preparation for the havana-3 milestone
publication tomorrow. Nova and Neutron should follow in the next hour.

Those projects are now feature-frozen. You should no longer merge
featureful code unless it's being given an explicit feature freeze
exception by the PTL. The goal is to strictly limit the amount of new
code and disruption as we enter a test-heavy and documentation-heavy
phase of our development cycle. This is critical in delivering a timely
and high-quality final release.

Please test proposed deliveries to ensure no critical regression found
its way in. Milestone-critical fixes will be backported to the
milestone-proposed branch until final delivery of the milestone, and
will be tracked using the "havana-3" milestone targeting.

You can find candidate tarballs at:
http://tarballs.openstack.org/keystone/keystone-milestone-proposed.tar.gz
http://tarballs.openstack.org/glance/glance-milestone-proposed.tar.gz
http://tarballs.openstack.org/horizon/horizon-milestone-proposed.tar.gz
http://tarballs.openstack.org/cinder/cinder-milestone-proposed.tar.gz
http://tarballs.openstack.org/ceilometer/ceilometer-milestone-proposed.tar.gz
http://tarballs.openstack.org/heat/heat-milestone-proposed.tar.gz

You can also access the milestone-proposed branches directly at:
https://github.com/openstack/keystone/tree/milestone-proposed
https://github.com/openstack/glance/tree/milestone-proposed
https://github.com/openstack/horizon/tree/milestone-proposed
https://github.com/openstack/cinder/tree/milestone-proposed
https://github.com/openstack/ceilometer/tree/milestone-proposed
https://github.com/openstack/heat/tree/milestone-proposed

Regards,

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-05 Thread Davanum Srinivas
Mark,

Has this changeset gotten through a full Tempest run with QPid enabled?

thanks,
dims


On Thu, Sep 5, 2013 at 10:17 AM, Mark McLoughlin  wrote:

> Hi
>
> I'd like to request a feature freeze exception for the final (and
> admittedly the largest) patch in the series of 40 patches to port Nova to
> oslo.messaging:
>
>   https://review.openstack.org/39929
>
> While this change doesn't provide any immediate user-visible benefit, it
> would be massively helpful in maintaining momentum behind the effort all
> through the Havana cycle to move the RPC code from oslo-incubator into a
> library.
>
> In terms of risk of regression, there is certainly some risk but that risk
> is mitigated by the fact that the core code of each of the transport
> drivers has been modified minimally. The idea was to delay re-factoring
> these drivers until we were sure that we hadn't caused any regressions in
> Nova. The code has been happily passing the devstack/tempest based
> integration tests for 10 days now.
>
> Thanks,
> Mark.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Davanum Srinivas :: http://davanum.wordpress.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [brick] Status and plans for the brick shared volume code

2013-09-05 Thread Russell Bryant
On 09/05/2013 09:46 AM, John Griffith wrote:
> 
> 
> 
> On Thu, Sep 5, 2013 at 2:04 AM, Thierry Carrez  > wrote:
> 
> John Griffith wrote:
> > The code currently is and will be maintained in Cinder, and the Cinder
> > team will sync changes across to Nova.  The first order of
> business for
> > Icehouse will be to get the library built up and usable, then convert
> > over to using that so as to avoid the syncing issues.
> 
> This may have been discussed before, but is there any reason to avoid
> the Oslo incubator for such a library?
> 
> Not really no, in fact that's always been a consideration
> (https://blueprints.launchpad.net/oslo/+spec/shared-block-storage-library)

I figured it just made sense from a team perspective to have Cinder
maintain this.  That's where the relevant domain expertise is.

The mechanics would certainly be easier as far as syncing code, since it
would be with the other code in the same situation.  However, that's
short term anyway.  Hopefully the real library is out ASAP in Icehouse.

Btw, these changes for Nova didn't make for the feature freeze.  So, we
will have to discuss whether an exception makes sense.  The alternative
is to just defer Nova's use of brick to Icehouse and when it's released
as a library.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Small cluster size

2013-09-05 Thread Tom Fifield
Here's a straw-man:

* 5 storage nodes
* 2 proxy servers

5 storage nodes gives a reasonable zone breakdown for 3 replicas; separate
proxy nodes give security segregation (working to avoid unencrypted,
unauthenticated rsync having any chance of leaking through the net) and
network segregation (separating replication traffic); and 2 proxies
give HA.
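The zone arithmetic behind that straw-man can be sketched in a few lines (a toy model, not how the ring builder actually works):

```python
# Toy replica placement: with 3 replicas you want at least 3 distinct
# zones, and a couple of spare zones so a failed or drained node does
# not leave replicas doubled up in one zone.
def place_replicas(zones, replicas=3):
    """Pick one distinct zone per replica, or fail loudly."""
    if len(set(zones)) < replicas:
        raise ValueError("need at least %d distinct zones" % replicas)
    return sorted(set(zones))[:replicas]

five_zones = ["z1", "z2", "z3", "z4", "z5"]
print(place_replicas(five_zones))        # survives losing up to 2 zones
print(place_replicas(five_zones[:-2]))   # 3 zones left: still placeable
```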


Regards,

Tom


On 05/09/13 06:38, Snider, Tim wrote:
> I'd like to get input from the community on a 'realistic' size of a small 
> Swift cluster that might be deployed & used in the field for production. SAIO 
> / test / lab setups aren't a consideration. I'm interested in hearing about 
> both private and public cluster sizes that are deployed for production use.  
> 4 nodes or fewer seems pretty small - 6 or 8 seems like a more 
> realistic size of a small cluster. But I don't have any actual data/customer 
> experience behind those assumptions.
> Followup questions:
> Given that cluster size,  do all nodes act as both Swift proxy and storage 
> nodes? I assume they do.
> How big does a cluster get before node roles are separated?
> Thanks for the input,
> Tim
> 




Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-05 Thread Mark McLoughlin
Hi

On Thu, 2013-09-05 at 10:43 -0400, Davanum Srinivas wrote:
> Mark,
> 
> Has this changeset gotten through a full tempest with QPid enabled?

No, I've only done local testing with the qpid transport to date.

I think Smokestack is the only CI tool actively testing the qpid driver.
I ran out of time adding oslo.messaging to Smokestack before heading off
on vacation, but I expect I'll get to it next week.

Cheers,
Mark.


> 
> thanks,
> dims
> 
> 
> On Thu, Sep 5, 2013 at 10:17 AM, Mark McLoughlin  wrote:
> 
> > Hi
> >
> > I'd like to request a feature freeze exception for the final (and
> > admittedly the largest) patch in the series of 40 patches to port Nova to
> > oslo.messaging:
> >
> >   https://review.openstack.org/39929
> >
> > While this change doesn't provide any immediate user-visible benefit, it
> > would be massively helpful in maintaining momentum behind the effort all
> > through the Havana cycle to move the RPC code from oslo-incubator into a
> > library.
> >
> > In terms of risk of regression, there is certainly some risk but that risk
> > is mitigated by the fact that the core code of each of the transport
> > drivers has been modified minimally. The idea was to delay re-factoring
> > these drivers until we were sure that we hadn't caused any regressions in
> > Nova. The code has been happily passing the devstack/tempest based
> > integration tests for 10 days now.
> >
> > Thanks,
> > Mark.
> >
> 
> 





Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-05 Thread Russell Bryant
On 09/05/2013 10:17 AM, Mark McLoughlin wrote:
> Hi
> 
> I'd like to request a feature freeze exception for the final (and
> admittedly the largest) patch in the series of 40 patches to port Nova
> to oslo.messaging:
> 
>   https://review.openstack.org/39929
> 
> While this change doesn't provide any immediate user-visible benefit, it
> would be massively helpful in maintaining momentum behind the effort all
> through the Havana cycle to move the RPC code from oslo-incubator into a
> library.
> 
> In terms of risk of regression, there is certainly some risk but that
> risk is mitigated by the fact that the core code of each of the
> transport drivers has been modified minimally. The idea was to delay
> re-factoring these drivers until we were sure that we hadn't caused any
> regressions in Nova. The code has been happily passing the
> devstack/tempest based integration tests for 10 days now.

When do you expect major refactoring to happen in oslo.messaging?  I get
that the current code was minimally modified, but I just want to
understand how the timelines line up with the release and ongoing
maintenance of the Havana release.

-- 
Russell Bryant



Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-05 Thread Davanum Srinivas
Thanks Mark. Looks like we need to get someone to manually
trigger Smokestack to run against this review at least once, since I don't
see any +1s from Smokestack for some reason.


On Thu, Sep 5, 2013 at 11:49 AM, Mark McLoughlin  wrote:

> Hi
>
> On Thu, 2013-09-05 at 10:43 -0400, Davanum Srinivas wrote:
> > Mark,
> >
> > Has this changeset gotten through a full tempest with QPid enabled?
>
> No, I've only done local testing with the qpid transport to date.
>
> I think Smokestack is the only CI tool actively testing the qpid driver.
> I ran out of time adding oslo.messaging to Smokestack before heading off
> on vacation, but I expect I'll get to it next week.
>
> Cheers,
> Mark.
>
>
> >
> > thanks,
> > dims
> >
> >
> > On Thu, Sep 5, 2013 at 10:17 AM, Mark McLoughlin 
> wrote:
> >
> > > Hi
> > >
> > > I'd like to request a feature freeze exception for the final (and
> > > admittedly the largest) patch in the series of 40 patches to port Nova
> to
> > > oslo.messaging:
> > >
> > >   https://review.openstack.org/39929
> > >
> > > While this change doesn't provide any immediate user-visible benefit,
> it
> > > would be massively helpful in maintaining momentum behind the effort
> all
> > > through the Havana cycle to move the RPC code from oslo-incubator into
> a
> > > library.
> > >
> > > In terms of risk of regression, there is certainly some risk but that
> risk
> > > is mitigated by the fact that the core code of each of the transport
> > > drivers has been modified minimally. The idea was to delay re-factoring
> > > these drivers until we were sure that we hadn't caused any regressions
> in
> > > Nova. The code has been happily passing the devstack/tempest based
> > > integration tests for 10 days now.
> > >
> > > Thanks,
> > > Mark.
> > >
> >
> >
>
>
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com


[openstack-dev] [qa] qa meeting cancelled today

2013-09-05 Thread Sean Dague
It's freeze week, and most of the core team is out on vacation / holiday 
today, so I'm going to cancel the QA meeting today. See you guys next 
week and in #openstack-qa.


-Sean

--
Sean Dague
http://dague.net



Re: [openstack-dev] OpenLdap for Keystone

2013-09-05 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Thanks Brad for the pointer. Is there any way to just install the OpenLdap 
piece and not the entire OpenStack?

Mark

From: Brad Topol [mailto:bto...@us.ibm.com]
Sent: Thursday, September 05, 2013 5:37 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] OpenLdap for Keystone

devstack has the ability to install keystone with openldap and configure them 
together.  Look at the online doc for stack.sh on how to configure devstack to 
install keystone with openldap.
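
For reference, the DevStack knobs involved are roughly the following localrc fragment. The variable names below are from the DevStack of that era; treat them as illustrative and check the stack.sh documentation for your branch before relying on them:

```shell
# localrc fragment (illustrative -- verify against your DevStack branch)
enable_service ldap
KEYSTONE_IDENTITY_BACKEND=ldap
LDAP_PASSWORD=secret
```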

Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Cindy Willman (919) 268-5296



From: "Miller, Mark M (EB SW Cloud - R&D - Corvallis)" <mark.m.mil...@hp.com>
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: 09/04/2013 06:32 PM
Subject: [openstack-dev] OpenLdap for Keystone




Hello,

I have been struggling trying to configure OpenLdap to work with Keystone. I 
have found a gazillion snippets about this topic, but no step-by-step documents 
on how to install and configure OpenLdap so it will work with current Keystone 
releases. I am hoping that someone has a tested answer for me.

Thanks in advance,

Mark Miller


Re: [openstack-dev] Swift data serialization

2013-09-05 Thread Pete Zaitcev
On Thu, 5 Sep 2013 15:05:55 +0200
CHABANI Mohamed El Hadi  wrote:

> But for some operations such as authentication, or getting picture
> metadata, and a lot of use cases in Swift, I don't know how to convert the
> plain text into XML/JSON. If someone could help me with this, it would be
> really great.

I don't understand enough to answer this. Authentication data
seems properly encoded: it's either headers (v1) or JSON (v2).
As for metadata, it's also wrapped into headers or JSON.
You are free to nest encodings, of course, but Swift does
not need to get involved, it would seem to me.

Are you accessing Swift from a language binding or trying to
script the CLI client? I imagine the latter may cause the confusion
inherent in the question.
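
To make the v2 case concrete, here is a sketch of the JSON auth body a client POSTs to Keystone v2.0 at /v2.0/tokens. The structure is the standard Keystone v2 password-credentials format; the username/password/tenant values are illustrative:

```python
import json

# Keystone v2.0 password-auth request body (values are illustrative).
auth_request = {
    "auth": {
        "passwordCredentials": {
            "username": "demo",
            "password": "secret",
        },
        "tenantName": "demo",
    }
}

def encode_auth(request):
    """Serialize the auth payload as a client would before POSTing it."""
    return json.dumps(request)
```

The response is likewise JSON, carrying the token and service catalog; no extra plain-text-to-JSON conversion is needed on the caller's side.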

-- Pete



[openstack-dev] Nova/Neutron Havana-3 milestone candidates available

2013-09-05 Thread Thierry Carrez
Hi everyone,

Last but not least, milestone-proposed branches were just created for
Nova and Neutron in preparation for the havana-3 milestone publication
tomorrow. Two feature patches still need to land there after having
fought the merge queue all day... but they will be in the branch soon.

You can find candidate tarballs at:
http://tarballs.openstack.org/nova/nova-milestone-proposed.tar.gz
http://tarballs.openstack.org/neutron/neutron-milestone-proposed.tar.gz

You can also access the milestone-proposed branches directly at:
https://github.com/openstack/nova/tree/milestone-proposed
https://github.com/openstack/neutron/tree/milestone-proposed

Regards,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [brick] Status and plans for the brick shared volume code

2013-09-05 Thread John Griffith
On Thu, Sep 5, 2013 at 9:04 AM, Russell Bryant  wrote:

> On 09/05/2013 09:46 AM, John Griffith wrote:
> >
> >
> >
> > On Thu, Sep 5, 2013 at 2:04 AM, Thierry Carrez  > > wrote:
> >
> > John Griffith wrote:
> > > The code currently is and will be maintained in Cinder, and the
> Cinder
> > > team will sync changes across to Nova.  The first order of
> > business for
> > > Icehouse will be to get the library built up and usable, then
> convert
> > > over to using that so as to avoid the syncing issues.
> >
> > This may have been discussed before, but is there any reason to avoid
> > the Oslo incubator for such a library ?
> >
> > Not really no, in fact that's always been a consideration
> > (
> https://blueprints.launchpad.net/oslo/+spec/shared-block-storage-library)
>
> I figured it just made sense from a team perspective to have Cinder
> maintain this.  That's where the relevant domain expertise is.
>
> The mechanics would certainly be easier as far as syncing code, since it
> would be with the other code in the same situation.  However, that's
> short term anyway.  Hopefully the real library is out ASAP in Icehouse.
>
> Btw, these changes for Nova didn't make it in before the feature freeze.  So, we
>

BO :)

> will have to discuss whether an exception makes sense.  The alternative
> is to just defer Nova's use of brick to Icehouse and when it's released
> as a library.
>
> --
> Russell Bryant
>

Ok, that being the case I think I'll work on getting it into library form
straight away in Icehouse and go from there.  I'm not sure the exception
makes sense at this stage, to be honest; I'll double-check on the issues
that were fixed and see where we stand.

Thanks,
John


Re: [openstack-dev] OpenLdap for Keystone

2013-09-05 Thread Dean Troyer
On Thu, Sep 5, 2013 at 11:18 AM, Miller, Mark M (EB SW Cloud - R&D -
Corvallis)  wrote:

>  Thanks Brad for the pointer. Is there any way to just install the
> OpenLdap piece and not the entire OpenStack?
>

You can install a Keystone-only DevStack, but I suspect you just want the
OpenLDAP bits. If that is the case, look in lib/keystone[1] and lib/ldap[2]
for the steps DevStack takes to perform the installation.  The
configure_keystone()[3] function has all of the bits to configure Keystone.

dt

[1] https://github.com/openstack-dev/devstack/blob/master/lib/keystone
[2] https://github.com/openstack-dev/devstack/blob/master/lib/ldap
[3] https://github.com/openstack-dev/devstack/blob/master/lib/keystone#L102
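
The end result of those scripts is essentially a keystone.conf pointing the identity backend at LDAP. A minimal sketch, with illustrative values and option names from the Keystone of that era (verify them against your release's config reference):

```ini
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://localhost
user = dc=Manager,dc=openstack,dc=org
password = secret
suffix = dc=openstack,dc=org
```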

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [Nova] FFE Request: oslo-messaging

2013-09-05 Thread Mark McLoughlin
On Thu, 2013-09-05 at 11:00 -0400, Russell Bryant wrote:
> On 09/05/2013 10:17 AM, Mark McLoughlin wrote:
> > Hi
> > 
> > I'd like to request a feature freeze exception for the final (and
> > admittedly the largest) patch in the series of 40 patches to port Nova
> > to oslo.messaging:
> > 
> >   https://review.openstack.org/39929
> > 
> > While this change doesn't provide any immediate user-visible benefit, it
> > would be massively helpful in maintaining momentum behind the effort all
> > through the Havana cycle to move the RPC code from oslo-incubator into a
> > library.
> > 
> > In terms of risk of regression, there is certainly some risk but that
> > risk is mitigated by the fact that the core code of each of the
> > transport drivers has been modified minimally. The idea was to delay
> > re-factoring these drivers until we were sure that we hadn't caused any
> > regressions in Nova. The code has been happily passing the
> > devstack/tempest based integration tests for 10 days now.
> 
> When do you expect major refactoring to happen in oslo.messaging?  I get
> that the current code was minimally modified, but I just want to
> understand how the timelines line up with the release and ongoing
> maintenance of the Havana release.

Yep, good question.

AFAIR we discussed this at the last Oslo IRC meeting and decided that
re-factoring will wait until Icehouse so we can more easily sync fixes
from oslo-incubator to oslo.messaging.

Porting Quantum, Cinder, Ceilometer and Heat, removing the code from
oslo-incubator and re-factoring the drivers in oslo.messaging would be
goals for early on in the Icehouse cycle.

Cheers,
Mark.




[openstack-dev] [State-Management] Agenda for today meeting at 2000 UTC

2013-09-05 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in 
#openstack-meeting on thursdays, 2000 UTC. The next meeting is today, 
2013-09-05!!!

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Discuss ongoing status of the overall effort and any needed coordination.
- Talk about unification of blocks/flows (and any missing pieces).
- Talk about moving toward a distributed "engine" instead of distributed flow.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, problems, open-reviews, issues, solutions, 
questions (and more).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


[openstack-dev] Etherpad for IceHouse Scheduler Sessions

2013-09-05 Thread Day, Phil
Hi Folks,

As per the meeting this week, I've started an Etherpad to help plan out the  
scheduler sessions ahead of the Design Summit:

https://etherpad.openstack.org/IceHouse-Nova-Scheduler-Sessions

Phil

> -Original Message-
> From: Dugger, Donald D [mailto:donald.d.dug...@intel.com]
> Sent: 03 September 2013 05:11
> To: OpenStack Development Mailing List
> Subject: [openstack-dev] Scheduler sub-group meeting on 9/3
> 
> Let's try and have a meeting this week (hopefully a few can attend).  Topics
> that come to mind:
> 
> 1) Multiple-scheduler-drivers (seems to be some open issues from the mailing
> list)
> 2) Scheduler design sessions at the Icehouse summit
> 3) Perspective for Nova scheduler (Boris' proposal at
> https://docs.google.com/document/d/1_DRv7it_mwalEZzLy5WO92TJcummpm
> WL4NWsWf0UWiQ/edit#heading=h.6ixj0ctv4rwu  )
> 
> 
> --
> Don Dugger
> "Censeo Toto nos in Kansa esse decisse." - D. Gale
> Ph: 303/443-3786
> 
> 
> 



[openstack-dev] [Nova] No meeting today, focus on bugs and reviewing FFE requests

2013-09-05 Thread Russell Bryant
Greetings,

Let's cancel the Nova meeting today.  I know of at least a few people
that are out for various reasons.  Those that aren't are really worn out
from the feature freeze week.  Let's break for this week and meet again
next week as we go hard on fixing bugs.

In the meantime, feel free to start ramping up on bug triage, as we have
some catching up to do there.

https://wiki.openstack.org/wiki/Nova/BugTriage

Also, please be on the lookout for Nova feature freeze exceptions.  They
should all be posted to the dev mailing list.  I would like input from
others if you have an opinion on them.

Thanks!

-- 
Russell Bryant



[openstack-dev] [savanna] team meeting minutes September 5

2013-09-05 Thread Sergey Lukjanov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:

Minutes: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-05-18.05.html
Minutes (text): 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-05-18.05.txt
Log: 
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-05-18.05.log.html

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.




[openstack-dev] Murano Release 0.2 announcement

2013-09-05 Thread Georgy Okrokvertskhov
Hello folks,

Murano Team is happy to announce that after 3 months of development the
stable version of Murano, v0.2, has been released. A full list of changes
and other necessary information can be found in the Release Notes on the
project Wiki:

https://wiki.openstack.org/wiki/Murano/ReleaseNotes_v0.2

What's new since v0.1:
* Workflow diagnostics in Murano Conductor.
* Dynamic UI (UI forms built on YAML definitions without any custom code).
* Support for SSL both in REST API and RabbitMQ communications.
* Ability to select Windows image, Availability Zone and instance flavor.
* Additional Services:
   - MS SQL Single Instance.
   - MS SQL Server AlwaysOn Cluster.

Improvements:

* Support for External Active Directory.
* Detailed documentation for writing XML Workflows.
* Improved HA for Murano Conductor.
* REST API generalization.

Fixed Bugs:

A complete list of bugs fixed in Murano v0.2 can be found here.

Also, for release 0.2 we prepared several screencasts:
1. Introduction: common information about Murano with deployment of ASP.NET Web
Farm as an example http://www.youtube.com/watch?v=W-8x0lz5hO8
2. Murano v0.2 features: Deployment of MS SQL AlwaysOn with
Active Directory http://www.youtube.com/watch?v=Y_CmrZfKy18

Common information about Murano:
Murano Wiki: https://wiki.openstack.org/wiki/Murano/
Launchpad Project: https://launchpad.net/murano
IRC channel: #murano at FreeNode.

All necessary documentation (including Getting Started Guide,
Administrator Guide, Developer Guide and so on) can be found on the project
Wiki.

Enjoy!


[openstack-dev] [Nova] FFE Request: bp/instance-group-api-extension

2013-09-05 Thread Debojyoti Dutta
Hi

As per my IRC chats with dansmith, russellb, this feature needs the
user auth checks (being taken care of in
https://bugs.launchpad.net/nova/+bug/1221396).

Dan has some more comments.

Could we please do an FFE for this one? It has been waiting for a long
time, and we did all that we were asked relatively quickly;
in H2 it was gated by the API object refactor. Most of the current
comments (pre the last one by Dan) are due to
https://bugs.launchpad.net/nova/+bug/1221396

-- 
-Debo~



Re: [openstack-dev] OpenLdap for Keystone

2013-09-05 Thread Miller, Mark M (EB SW Cloud - R&D - Corvallis)
Thanks Dean. I was able to combine sections of each script to make one that 
installs OpenLdap for Keystone.

Mark

From: Dean Troyer [mailto:dtro...@gmail.com]
Sent: Thursday, September 05, 2013 9:45 AM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] OpenLdap for Keystone

On Thu, Sep 5, 2013 at 11:18 AM, Miller, Mark M (EB SW Cloud - R&D - Corvallis) 
mailto:mark.m.mil...@hp.com>> wrote:
Thanks Brad for the pointer. Is there any way to just install the OpenLdap 
piece and not the entire OpenStack?

You can install a Keystone-only DevStack, but I suspect you just want the 
OpenLDAP bits...if that is the case look in lib/keystone[1] and lib/ldap[2] for 
the steps DevStack takes to perform the installation.  The 
configure_keystone()[3] function has all of the bits to configure Keystone.

dt

[1] https://github.com/openstack-dev/devstack/blob/master/lib/keystone
[2] https://github.com/openstack-dev/devstack/blob/master/lib/ldap
[3] https://github.com/openstack-dev/devstack/blob/master/lib/keystone#L102

--

Dean Troyer
dtro...@gmail.com


[openstack-dev] Swift account auditor duplicated code

2013-09-05 Thread Pete Zaitcev
Hi, Guys:

Here's a weird piece of duplicated call to account_audit() in
swift/account/auditor.py:

for path, device, partition in all_locs:
self.account_audit(path)
if time.time() - reported >= 3600:  # once an hour
self.logger.info(_('Since %(time)s: Account audits: ' ...)
self.account_audit(path)
dump_recon_cache({'account_audits_since': reported, ...)
reported = time.time()

This was apparently caused by Florian's ccb6334c going on top of
Darrell's 3d3ed34f. Is this intentional, and if not, should we be
fixing it?
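
Assuming the duplication is unintentional, the likely fix is simply dropping the second call so each location is audited exactly once per pass, with the hourly block only logging and reporting. A stripped-down, self-contained sketch (MiniAuditor and its stubs are illustrative, not the real swift/account/auditor.py classes):

```python
import time

class MiniAuditor:
    """Toy stand-in for the account auditor loop, with the duplicated
    account_audit() call removed."""

    def __init__(self):
        self.audits = 0

    def account_audit(self, path):
        # The real code would open and verify the account DB here.
        self.audits += 1

    def audit_all(self, all_locs, report_interval=3600):
        reported = time.time()
        for path, device, partition in all_locs:
            self.account_audit(path)  # audit once per location
            if time.time() - reported >= report_interval:
                # Hourly branch: log stats and dump the recon cache only;
                # note there is no second account_audit(path) here.
                reported = time.time()
```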

-- Pete



[openstack-dev] [Nova] FFE Request: hyper-v-rdp-console

2013-09-05 Thread Alessandro Pilotti
This is an FFE request for adding console support for Hyper-V. Unlike most 
other hypervisors, Hyper-V guest console access is based on RDP instead of VNC. 
This blueprint adds RDP support in Nova, implemented in a way consistent with 
the existing VNC and SPICE protocols.

It's an essential feature for Hyper-V, requiring a relatively small
implementation in the Hyper-V driver and a Nova public API.

Blueprint:

https://blueprints.launchpad.net/nova/+spec/hyper-v-rdp-console

Reviews:

https://review.openstack.org/#/c/41265/
https://review.openstack.org/#/c/43312/
https://review.openstack.org/#/c/43502/

Here's also a link for the related python-novaclient commit:
https://review.openstack.org/#/c/44250/


This blueprint, implemented during the H3 timeframe has been temporarily 
blocked with a -2 between Aug 12th and Aug 27th due to the dependency on the 
admin-api blueprint which has subsequently been cancelled 
(https://blueprints.launchpad.net/nova/+spec/admin-api).
This led to not being able to commit the final implementation until very late 
in the H3 cycle, thus the FFE request now.

The commits related to this blueprint have remained basically unreviewed since
then, but I definitely cannot complain.
The Nova team has been amazingly helpful in getting most of the Havana Hyper-V 
blueprints merged in time for H3 against all odds, which definitely deserves a 
big THANK YOU from the Hyper-V sub-project team! :-)

Due to its importance for Hyper-V support in OpenStack, it'd be great if
this Havana blueprint could be given a chance to be reviewed as well, if
possible.

Beside the ML and direct emails, I'll be also available anytime to talk about 
this on IRC: alexpilotti


Thanks!

Alessandro



Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-05 Thread Baldwin, Carl (HPCS Neutron)
Brian,

As far as I know, no consensus was reached.

A problem was discovered that happens when spawning multiple processes.
The mysql connection seems to "go away" after 10-60 seconds in my
testing, causing a seemingly random API call to fail.  After that, it is
okay.  This must be due to some interaction between forking the process
and the mysql connection pool.  This needs to be solved, but I haven't had
the time to look into it this week.
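
This symptom is the classic pattern of child processes inheriting a database connection (or pool) created in the parent before the fork: both sides then share one socket and corrupt each other's protocol stream. The usual remedy is to make each worker build its own connections after forking (with SQLAlchemy, calling engine.dispose() in the child so its pool is rebuilt). A minimal stdlib sketch of the per-process-connection pattern, using sqlite3 as a stand-in for MySQL:

```python
import multiprocessing
import os
import sqlite3

def _worker(_):
    # Each child opens its OWN connection after the fork, rather than
    # inheriting a pooled connection created in the parent -- the likely
    # cause of the "connection goes away" symptom described above.
    # (With SQLAlchemy, the equivalent is engine.dispose() in the child.)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    (val,) = conn.execute("SELECT x FROM t").fetchone()
    conn.close()
    return os.getpid(), val

def run_workers(n=2):
    # Fork n workers; each performs a query on its private connection.
    with multiprocessing.Pool(n) as pool:
        return pool.map(_worker, range(n))
```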

I'm not sure if the other proposal suffers from this problem.

Carl

On 9/4/13 3:34 PM, "Brian Cline"  wrote:

>Was any consensus on this ever reached? It appears both reviews are still
>open. I'm partial to review 37131 as it attacks the problem more
>concisely and, as mentioned, combined the efforts of the two more
>effective patches. I would echo Carl's sentiments that it's an easy
>review minus the few minor behaviors discussed on the review thread today.
>
>We feel very strongly about these making it into Havana -- being confined
>to a single neutron-server instance per cluster or region is a huge
>bottleneck--essentially the only controller process with massive CPU
>churn in environments with constant instance churn, or sudden large
>batches of new instance requests.
>
>In Grizzly, this behavior caused addresses not to be issued to some
>instances during boot, due to quantum-server thinking the DHCP agents
>timed out and were no longer available, when in reality they were just
>backlogged (waiting on quantum-server, it seemed).
>
>Is it realistically looking like this patch will be cut for h3?
>
>--
>Brian Cline
>Software Engineer III, Product Innovation
>
>SoftLayer, an IBM Company
>4849 Alpha Rd, Dallas, TX 75244
>214.782.7876 direct  |  bcl...@softlayer.com
> 
>
>-Original Message-
>From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
>Sent: Wednesday, August 28, 2013 3:04 PM
>To: Mark McClain
>Cc: OpenStack Development Mailing List
>Subject: [openstack-dev] [Neutron] The three API server multi-worker
>process patches.
>
>All,
>
>We've known for a while now that some duplication of work happened with
>respect to adding multiple worker processes to the neutron-server.  There
>were a few mistakes made which led to three patches being done
>independently of each other.
>
>Can we settle on one and accept it?
>
>I have changed my patch at the suggestion of one of the other 2 authors,
>Peter Feiner, in attempt to find common ground.  It now uses openstack
>common code and therefore it is more concise than any of the original
>three and should be pretty easy to review.  I'll admit to some bias toward
>my own implementation but most importantly, I would like for one of these
>implementations to land and start seeing broad usage in the community
>earlier than later.
>
>Carl Baldwin
>
>PS Here are the two remaining patches.  The third has been abandoned.
>
>https://review.openstack.org/#/c/37131/
>https://review.openstack.org/#/c/36487/
>
>




Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-05 Thread Nachi Ueno
Hi Folks

We have chosen https://review.openstack.org/#/c/37131/ as the patch to go
forward with. Discussion is also continuing on that patch.

Best
Nachi



2013/9/5 Baldwin, Carl (HPCS Neutron) :
> Brian,
>
> As far as I know, no consensus was reached.
>
> A problem was discovered that happens when spawning multiple processes.
> The mysql connection seems to "go away" after between 10-60 seconds in my
> testing causing a seemingly random API call to fail.  After that, it is
> okay.  This must be due to some interaction between forking the process
> and the mysql connection pool.  This needs to be solved but I haven't had
> the time to look in to it this week.
>
> I'm not sure if the other proposal suffers from this problem.
>
> Carl
>
> On 9/4/13 3:34 PM, "Brian Cline"  wrote:
>
>>Was any consensus on this ever reached? It appears both reviews are still
>>open. I'm partial to review 37131 as it attacks the problem more
>>concisely and, as mentioned, combined the efforts of the two more
>>effective patches. I would echo Carl's sentiments that it's an easy
>>review minus the few minor behaviors discussed on the review thread today.
>>
>>We feel very strongly about these making it into Havana -- being confined
>>to a single neutron-server instance per cluster or region is a huge
>>bottleneck--essentially the only controller process with massive CPU
>>churn in environments with constant instance churn, or sudden large
>>batches of new instance requests.
>>
>>In Grizzly, this behavior caused addresses not to be issued to some
>>instances during boot, due to quantum-server thinking the DHCP agents
>>timed out and were no longer available, when in reality they were just
>>backlogged (waiting on quantum-server, it seemed).
>>
>>Is it realistically looking like this patch will be cut for h3?
>>
>>--
>>Brian Cline
>>Software Engineer III, Product Innovation
>>
>>SoftLayer, an IBM Company
>>4849 Alpha Rd, Dallas, TX 75244
>>214.782.7876 direct  |  bcl...@softlayer.com
>>
>>
>>-Original Message-
>>From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
>>Sent: Wednesday, August 28, 2013 3:04 PM
>>To: Mark McClain
>>Cc: OpenStack Development Mailing List
>>Subject: [openstack-dev] [Neutron] The three API server multi-worker
>>process patches.
>>
>>All,
>>
>>We've known for a while now that some duplication of work happened with
>>respect to adding multiple worker processes to the neutron-server.  There
>>were a few mistakes made which led to three patches being done
>>independently of each other.
>>
>>Can we settle on one and accept it?
>>
>>I have changed my patch at the suggestion of one of the other 2 authors,
>>Peter Feiner, in attempt to find common ground.  It now uses openstack
>>common code and therefore it is more concise than any of the original
>>three and should be pretty easy to review.  I'll admit to some bias toward
>>my own implementation but most importantly, I would like for one of these
>>implementations to land and start seeing broad usage in the community
>>earlier than later.
>>
>>Carl Baldwin
>>
>>PS Here are the two remaining patches.  The third has been abandoned.
>>
>>https://review.openstack.org/#/c/37131/
>>https://review.openstack.org/#/c/36487/
>>
>>
>
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Api samples and the feature freeze

2013-09-05 Thread Christopher Yeoh
Hi,

I'd just like to clarify whether adding API samples for the V3 API
is considered a feature, and whether they can be added during the freeze.
Adding API samples just adds extra testcases, plus the output from those
testcases in the doc subtree.

The risk is very low, as neither addition can affect the normal
operation of the Nova services. If anything, the extra testcases
help pick up bugs, whether in existing code or in new changes going
through the gate.
It also makes it much easier to generate API documentation.
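
For anyone unfamiliar with them: an API sample is a checked-in request/response
template that the testcase verifies the live API output against. A hypothetical,
minimal response sample might look like the following (the values here are
illustrative, not taken from the tree):

```json
{
    "version": {
        "id": "v3.0",
        "status": "EXPERIMENTAL",
        "links": [
            {"href": "http://openstack.example.com/v3/", "rel": "self"}
        ]
    }
}
```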

Regards,

Chris



[openstack-dev] [Nova] FFE request: unix domain socket consoles for libvirt

2013-09-05 Thread Michael Still
Hi. This code has been in review since July 29, but a combination of
my focus on code reviews for others and having a baby has resulted in
it not landing. This feature is important to libvirt and closes a
critical bug we've had open for way too long. The reviews:

https://review.openstack.org/#/c/39048/
https://review.openstack.org/#/c/43099/

I'd appreciate people's thoughts on an FFE for this feature.

Thanks,
Michael

-- 
Rackspace Australia



[openstack-dev] [Nova] FFE Request: utilization aware scheduling

2013-09-05 Thread Wang, Shane
Hi core developers and everyone,

Please allow me to make an FFE request for adding utilization-aware scheduling 
support in Havana.

The blueprint: 
https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling.
The patches are:
[1] https://review.openstack.org/#/c/35759/
[2] https://review.openstack.org/#/c/35764/
[3] https://review.openstack.org/#/c/35765/
[4] https://review.openstack.org/#/c/35760/
[5] https://review.openstack.org/#/c/35767/

The other one, https://review.openstack.org/#/c/44007/, is optional and can be 
skipped, because users can write their own monitors with the framework.
The five patches above are essential to make the feature work.

The patches have been out since the beginning of July, and have been reviewed 
by several developers and revised.
Recently, cores got some time to look at them, and some of the patches received 
two +2s from cores (e.g. Brian, Joe, Daniel).
However, due to the commit dependency, they were not approved to merge and are 
still in the waiting list.

Could we please have an FFE for this? Would you please evaluate whether the 
patches are close to merge quality?
If they are far from it, we will continue to work on them in Icehouse to 
improve them.
Thanks a lot in advance!

In any case, thank you very much for any feedback or comments on them, 
whether or not they are granted an exception.

Thanks and Best Regards.
--
Shane Wang






Re: [openstack-dev] [Neutron] The three API server multi-worker process patches.

2013-09-05 Thread Yingjun Li
+1 for Carl's patch; I have abandoned my patch.

About the `MySQL server has gone away` problem, I fixed it by setting
'pool_recycle' to 1 in db/api.py.
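
For readers hitting the same thing: `MySQL server has gone away` usually means
the pool is handing back connections that the server (or a fork) has already
invalidated, and a short `pool_recycle` forces SQLAlchemy to reopen them. Here
is a minimal sketch of the setting (illustrative only, not the actual
db/api.py change; SQLite stands in for MySQL so the snippet is self-contained):

```python
from sqlalchemy import create_engine

# pool_recycle=1 makes SQLAlchemy discard and reopen any pooled
# connection older than one second, so a connection invalidated by
# the server's wait_timeout (or by a fork) is never reused.
engine = create_engine(
    "sqlite://",   # stand-in URL; a real deployment would use mysql://...
    pool_recycle=1,
)

# Each worker should also create its engine *after* forking, so pooled
# connections are never shared between the parent and child processes.
```

With MySQL, recycling every second is aggressive; setting pool_recycle just
below the server's wait_timeout is the more usual choice.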


On Friday, September 6, 2013, Nachi Ueno wrote:

> Hi Folks
>
> We choose https://review.openstack.org/#/c/37131/ <-- This patch to go on.
> We are also discussing in this patch.
>
> Best
> Nachi
>
>
>
> 2013/9/5 Baldwin, Carl (HPCS Neutron) :
> > Brian,
> >
> > As far as I know, no consensus was reached.
> >
> > A problem was discovered that happens when spawning multiple processes.
> > The mysql connection seems to "go away" after between 10-60 seconds in my
> > testing causing a seemingly random API call to fail.  After that, it is
> > okay.  This must be due to some interaction between forking the process
> > and the mysql connection pool.  This needs to be solved but I haven't had
> > the time to look in to it this week.
> >
> > I'm not sure if the other proposal suffers from this problem.
> >
> > Carl
> >
> > On 9/4/13 3:34 PM, "Brian Cline"  wrote:
> >
> >>Was any consensus on this ever reached? It appears both reviews are still
> >>open. I'm partial to review 37131 as it attacks the problem a more
> >>concisely, and, as mentioned, combined the efforts of the two more
> >>effective patches. I would echo Carl's sentiments that it's an easy
> >>review minus the few minor behaviors discussed on the review thread
> today.
> >>
> >>We feel very strongly about these making it into Havana -- being confined
> >>to a single neutron-server instance per cluster or region is a huge
> >>bottleneck--essentially the only controller process with massive CPU
> >>churn in environments with constant instance churn, or sudden large
> >>batches of new instance requests.
> >>
> >>In Grizzly, this behavior caused addresses not to be issued to some
> >>instances during boot, due to quantum-server thinking the DHCP agents
> >>timed out and were no longer available, when in reality they were just
> >>backlogged (waiting on quantum-server, it seemed).
> >>
> >>Is it realistically looking like this patch will be cut for h3?
> >>
> >>--
> >>Brian Cline
> >>Software Engineer III, Product Innovation
> >>
> >>SoftLayer, an IBM Company
> >>4849 Alpha Rd, Dallas, TX 75244
> >>214.782.7876 direct  |  bcl...@softlayer.com
> >>
> >>
> >>-Original Message-
> >>From: Baldwin, Carl (HPCS Neutron) [mailto:carl.bald...@hp.com]
> >>Sent: Wednesday, August 28, 2013 3:04 PM
> >>To: Mark McClain
> >>Cc: OpenStack Development Mailing List
> >>Subject: [openstack-dev] [Neutron] The three API server multi-worker
> >>process patches.
> >>
> >>All,
> >>
> >>We've known for a while now that some duplication of work happened with
> >>respect to adding multiple worker processes to the neutron-server.  There
> >>were a few mistakes made which led to three patches being done
> >>independently of each other.
> >>
> >>Can we settle on one and accept it?
> >>
> >>I have changed my patch at the suggestion of one of the other 2 authors,
> >>Peter Feiner, in attempt to find common ground.  It now uses openstack
> >>common code and therefore it is more concise than any of the original
> >>three and should be pretty easy to review.  I'll admit to some bias
> toward
> >>my own implementation but most importantly, I would like for one of these
> >>implementations to land and start seeing broad usage in the community
> >>earlier than later.
> >>
> >>Carl Baldwin
> >>
> >>PS Here are the two remaining patches.  The third has been abandoned.
> >>
> >>https://review.openstack.org/#/c/37131/
> >>https://review.openstack.org/#/c/36487/
> >>
> >>
> >>___
> >>OpenStack-dev mailing list
> >>OpenStack-dev@lists.openstack.org
> >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
>


[openstack-dev] [barbican] Havana M3 Release

2013-09-05 Thread John Wood
Hello folks,

The Barbican team is proud to announce the third milestone delivery with the 
OpenStack project: Havana-3.

This milestone can be found at:

https://launchpad.net/cloudkeep/havana/havana-3/+download/barbican-2013.2.b3.tar.gz

With this milestone, 3 blueprints have been implemented. Here is a quick 
summary of the new features:

  * Added a variety of refinements to the API and crypto plugin contracts.
  * Added PKCS #11 interface support for HSMs, currently operating with 
SafeNet's Luna SA.
  * Added Role-Based Access Control (RBAC) utilizing four roles: admin, creator, 
audit, observer.
  * Added an error reason to 'orders' resource creations that fail asynchronously.

More details can be found at: https://launchpad.net/cloudkeep/havana/havana-3,
and here: https://github.com/cloudkeep/barbican/wiki/Release-Notes

An updated Python client library and command-line interface for Barbican are 
also available on PyPI: https://pypi.python.org/pypi/python-barbicanclient/

Thanks to Jarret Raim, Andrew Hartnett, Douglas Sims, Sheena Gregson, John 
Vrbanac, Douglas Mendizabal, Paul Kehrer, Melissa Kam, Arash Ghoreyshi, Malini 
Bhandaru and John Wood for their contributions to this milestone.

Thanks,
John



[openstack-dev] I will be on vacation from 9/5 to 9/15, urgent call: 13811509950

2013-09-05 Thread Yang XY Yu

I will be out of the office starting  2013-09-05 and will not return until
2013-09-15.

I will be on my marriage leave from 9/5 to 9/15, for any urgent issue
please call me before 9/7.

For daily work, please ask my scrum master Zhu Zhu for help.
For glance issue, please ask glance SME Feilong Wang for help.
For defect reports, there will be no report next week.


[openstack-dev] [State-Management] Proposal to add Ivan Melnikov to taskflow-core

2013-09-05 Thread Joshua Harlow
Greetings all stackers,

I propose that we add Ivan Melnikov to the taskflow-core team [1].

Ivan has been actively contributing to taskflow for a while now, both in
code and reviews.  He provides superb quality reviews and is doing an awesome 
job
with the engine concept. So I think he would make a great addition to the core
review team.

Please respond with +1/-1.

Thanks much!

[1] https://wiki.openstack.org/wiki/TaskFlow/CoreTeam