[openstack-dev] [neutron][neutron-lbaas][stable] Cherry pick a patch from master to stable/liberty

2016-08-07 Thread zhi
Hi,
Recently I saw a patch that fixes the listener's admin_state_up
status [1]. This patch has already been merged into the master branch, but it
has not been merged into stable/liberty.

So I have uploaded a new patch [2] to cherry-pick that change into stable/liberty.



[1]: https://review.openstack.org/#/c/266816/
[2]: https://review.openstack.org/#/c/352238/


Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] entity graph layout

2016-08-07 Thread Afek, Ifat (Nokia - IL)
There is no such blueprint at the moment.
You are more than welcome to add one, in case you have some ideas for 
improvements.

Ifat.

From: Yujun Zhang
Date: Monday, 8 August 2016 at 09:21


Great, it works.
But it would be better if we could improve the default layout. Is there any 
blueprint in progress?
--
Yujun

On Sun, Aug 7, 2016 at 1:09 PM Afek, Ifat (Nokia - IL) 
<ifat.a...@nokia.com> wrote:
Hi,

It is possible to adjust the layout of the graph. You can double-click on a 
vertex and it will remain pinned to its place. You can then move the pinned 
vertices around to adjust the graph layout.

Hope it helped, and let us know if you need additional help with your demo.

Best Regards,
Ifat.


From: Yujun Zhang
Date: Friday, 5 August 2016 at 09:32
Hi, all,

I'm building a demo of vitrage. The dynamic entity graph looks interesting.

But when more entities are added, things become crowded and the links cross 
over each other. Dragging the items does not help much.

Is it possible to adjust the layout so I can get a more regular/stable tree 
view of the entities?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] entity graph layout

2016-08-07 Thread Yujun Zhang
Great, it works.
But it would be better if we could improve the default layout. Is there any
blueprint in progress?
--
Yujun

On Sun, Aug 7, 2016 at 1:09 PM Afek, Ifat (Nokia - IL) 
wrote:

> Hi,
>
> It is possible to adjust the layout of the graph. You can double-click on
> a vertex and it will remain pinned to its place. You can then move the
> pinned vertices around to adjust the graph layout.
>
> Hope it helped, and let us know if you need additional help with your demo.
>
> Best Regards,
> Ifat.
>
>
> From: Yujun Zhang
> Date: Friday, 5 August 2016 at 09:32
>
> Hi, all,
>
> I'm building a demo of vitrage. The dynamic entity graph looks
> interesting.
>
> But when more entities are added, things become crowded and the links
> cross over each other. Dragging the items does not help much.
>
> Is it possible to adjust the layout so I can get a more regular/stable
> tree view of the entities?
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team

2016-08-07 Thread Thomas Bechtold
+1  Tom will be a great addition!

On Thu, Aug 04, 2016 at 01:02:06PM +0300, Valeriy Ponomaryov wrote:
> Yeah, Tom has a wealth of experience. +1
> 
> On Thu, Aug 4, 2016 at 12:35 PM, Ramana Raja  wrote:
> 
> > +1. Tom's reviews and guidance are helpful
> > and spot-on.
> >
> > -Ramana
> >
> > On Thursday, August 4, 2016 7:52 AM, Zhongjun (A) 
> > wrote:
> > > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> > reviewer team
> > >
> > > +1 Tom will be a great addition to the core team.
> > >
> > > From: Dustin Schoenbrun [mailto:dscho...@redhat.com]
> > > Sent: August 4, 2016 4:55
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core reviewer team
> > >
> > > +1
> > >
> > > Tom will be a marvelous resource for us to learn from!
> > >
> > > Dustin Schoenbrun
> > > OpenStack Quality Engineer
> > > Red Hat, Inc.
> > > dscho...@redhat.com
> > >
> > > On Wed, Aug 3, 2016 at 4:19 PM, Knight, Clinton < clinton.kni...@netapp.com > wrote:
> > >
> > > +1
> > >
> > > Tom will be a great asset for Manila.
> > >
> > > Clinton
> > >
> > > _
> > > From: Ravi, Goutham < goutham.r...@netapp.com >
> > > Sent: Wednesday, August 3, 2016 3:01 PM
> > > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> > reviewer
> > > team
> > > To: OpenStack Development Mailing List (not for usage questions) <
> > > openstack-dev@lists.openstack.org >
> > >
> > >
> > >
> > > (Not a core member, so plus 0.02)
> > >
> > >
> > >
> > > I’ve learned a ton of things from Tom and continue to do so!
> > >
> > >
> > >
> > >
> > > From: Rodrigo Barbieri < rodrigo.barbieri2...@gmail.com >
> > > Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> > <
> > > openstack-dev@lists.openstack.org >
> > > Date: Wednesday, August 3, 2016 at 2:48 PM
> > > To: "OpenStack Development Mailing List (not for usage questions)" <
> > > openstack-dev@lists.openstack.org >
> > > Subject: Re: [openstack-dev] [Manila] Nominate Tom Barron for core
> > reviewer
> > > team
> > >
> > > +1
> > >
> > > Tom contributes a lot to the Manila project.
> > >
> > > --
> > > Rodrigo Barbieri
> > > Computer Scientist
> > > OpenStack Manila Core Contributor
> > > Federal University of São Carlos
> > >
> > > On Aug 3, 2016 15:42, "Ben Swartzlander" < b...@swartzlander.org > wrote:
> > >
> > >
> > >
> > > Tom (tbarron on IRC) has been working on OpenStack (both cinder and
> > manila)
> > > for more than 2 years and has spent a great deal of time on Manila
> > reviews
> > > in the last release. Tom brings another package/distro point of view to
> > the
> > > community as well as former storage vendor experience.
> > >
> > > -Ben Swartzlander
> > > Manila PTL
> > >
> > >
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> 
> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomar...@mirantis.com

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Ton Ngo

Hi Ricardo,
 Great to have feedback from a real use case.  Spyros and I had a
discussion on this in Austin
and we sketched out the implementation.  Once you open the blueprint, we
will add the details
and consider additional scenarios.
Ton,



From:   Ricardo Rocha 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   08/07/2016 12:59 PM
Subject:Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
nodes



Hi Ton.

I think we should. Also in cases where multiple volume types are available
(in our case with different iops) there would be additional parameters
required to select the volume type. I'll add it this week.

It's a detail though, spawning container clusters with Magnum is now super
easy (and fast!).

Cheers,
  Ricardo

On Fri, Aug 5, 2016 at 5:11 PM, Ton Ngo  wrote:
  Hi Ricardo,
  For your question 1, you can modify the Heat template to not create the
  Cinder volume and tweak the call to
  configure-docker-storage.sh to use local storage. It should be fairly
  straightforward. You just need to make
  sure the local storage of the flavor is sufficient to host the containers
  in the benchmark.
  If you think this is a common scenario, we can open a blueprint for this
  option.
  Ton,


  From: Ricardo Rocha 
  To: "OpenStack Development Mailing List (not for usage questions)" <
  openstack-dev@lists.openstack.org>
  Date: 08/05/2016 04:51 AM



  Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
  nodes



  Hi.

  Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of
  requests should be higher but we had some internal issues. We have a
  submission for barcelona to provide a lot more details.

  But a couple questions came during the exercise:

  1. Do we really need a volume in the VMs? On large clusters this is a
  burden, and local storage only should be enough?

  2. We observe a significant delay (~10min, which is half the total time
  to deploy the cluster) on heat when it seems to be crunching the
  kube_minions nested stacks. Once it's done, it still adds new stacks
  gradually, so it doesn't look like it precomputed all the info in advance

  Anyone tried to scale Heat to stacks this size? We end up with a stack
  with:
  * 1000 nested stacks (depth 2)
  * 22000 resources
  * 47008 events

  And already changed most of the timeout/retrial values for rpc to get
  this working.

  This delay is already visible in clusters of 512 nodes, but 40% of the
  time in 1000 nodes seems like something we could improve. Any hints on
  Heat configuration optimizations for large stacks very welcome.

  Cheers,
    Ricardo

  On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol  wrote:
Thanks Ricardo! This is very exciting progress!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet: bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680


From: Ton Ngo/Watson/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date: 06/17/2016 12:10 PM
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec,
100s of nodes








Thanks Ricardo for sharing the data, this is really encouraging!
Ton,


From: Ricardo Rocha 
To: "OpenStack Development Mailing List (not for usage questions)"

Date: 06/17/2016 08:16 AM
Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of
nodes



Hi.

Just thought the Magnum team would be happy to hear :)

We had access to some hardware the last couple days, and tried some
tests with Magnum and Kubernetes - following an original blog post
from the kubernetes team.

Got a 200 node kubernetes bay (800 cores) reaching 2 million
requests / sec.

Check here for some details:

https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html


We'll try bigger in a couple weeks, also using the Rally work from
Winnie, Ton and Spyros to see where it breaks.

[openstack-dev] [tacker] Weekly meeting agenda

2016-08-07 Thread Sridhar Ramaswamy
Tackers,

Here is the agenda for this week's irc meeting,

https://wiki.openstack.org/wiki/Meetings/Tacker

- Announcements
- Newton release deadlines
- Newton priorities - feature / RFE go/no-go check
- Mistral workflow BP
- Open Discussion

Let me know if you have anything else to discuss.

thanks,
Sridhar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-07 Thread Adam Young

On 08/06/2016 08:44 AM, John Dennis wrote:

On 08/05/2016 06:06 PM, Adam Young wrote:

Ah...just noticed the redirect is to :5000, not port :13000 which is
the HA Proxy port.


OK, this is due to the SAML request:


<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse">
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>
  <samlp:NameIDPolicy
      Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
      AllowCreate="true" />
</samlp:AuthnRequest>



My guess is HA Proxy is not passing on the proper port, and
mod_auth_mellon does not know to rewrite it from 5000 to 13000.


You can't change the contents of a SAML AuthnRequest, often they are 
signed. Also, the AssertionConsumerServiceURL's and other URL's in 
SAML messages are validated to assure they match the metadata 
associated with EntityID (issuer). The addresses used inbound and 
outbound have to be correctly handled by the proxy configuration 
without modifying the content of the message being passed on the 
transport.



Got a little further by tweaking HA Proxy settings.  Added in

  redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
  rsprep ^Location:\ http://(.*) Location:\ https://\1

which tells HA Proxy to translate Location headers (used in redirects) 
from http to https.
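
To illustrate the effect (a rough sketch only; the hostname is made up and this
just mimics the rewrite, it is not how HA Proxy applies the rule internally):

  import re

  # What the rsprep rule above does to a backend redirect: a Location header
  # pointing at plain http is rewritten to https before the response goes
  # back to the client.
  location = "Location: http://openstack.example.test:5000/v3/auth/OS-FEDERATION/websso/saml2"
  rewritten = re.sub(r"^Location:\s+http://(.*)", r"Location: https://\1", location)
  print(rewritten)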



As of now, it looks good up until the response comes back from the IdP 
and mod_mellon rejects it.  I think this is due to Mellon issuing a 
request for http://<host>:<port>, but it gets translated through the 
proxy as https://<host>:<port>.



mod_auth_mellon is failing the following check in auth_mellon_handler.c


  url = am_reconstruct_url(r);

  ...

  if (response->parent.Destination) {
      if (strcmp(response->parent.Destination, url)) {
          ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                        "Invalid Destination on Response. Should be: %s",
                        url);
          lasso_login_destroy(login);
          return HTTP_BAD_REQUEST;
      }
  }

It does not spit out the parent.Destination value, but considering I am 
seeing http and not https in the error message, I assume that at least 
the protocol does not match.  Full error message at the bottom.
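
To make that concrete, here is a small, hypothetical Python sketch of the same
comparison the C code above performs. It is only meant to show why a proxy that
terminates TLS and forwards plain http makes the check fail; it is not how
mod_auth_mellon actually reconstructs the URL.

  from urllib.parse import urlsplit

  def reconstruct_url(scheme, host, port, path):
      # Rough stand-in for am_reconstruct_url(): the URL as the SP sees the request.
      return f"{scheme}://{host}:{port}{path}"

  # Behind HA Proxy, TLS is terminated at the proxy, so the backend request
  # arrives as plain http.
  reconstructed = reconstruct_url("http", "openstack.ayoung-dell-t1700.test",
                                  5000, "/v3/mellon/postResponse")

  # The Destination the IdP puts in the Response comes from the SP metadata,
  # which advertises https.
  destination = "https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"

  if destination != reconstructed:
      # Mirrors the strcmp() branch above: mellon returns HTTP_BAD_REQUEST.
      print("Invalid Destination on Response. Should be:", reconstructed)

  # The two URLs differ only in scheme, i.e. the http/https mismatch.
  print(urlsplit(destination).scheme, "!=", urlsplit(reconstructed).scheme)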


Assuming the problem is just that the URL is http and not https, I have 
an approach that should work.  I need to test it out, but want to record 
it here, and also get feedback:


I can clone the current 10-keystone_wsgi_main.conf which listens for 
straight http on port 5000.  If I make a file 
11-keystone_wsgi_main.conf  that listens on port 13000 (not on the 
external VIP)  but that enables SSL, I should be able to make HA proxy 
talk to that port and re-encrypt traffic, maintaining the 'https://' 
protocol.



However, I am not certain that Destination means the SP URL.  It seems 
like it should mean the IdP.  Further on in auth_mellon_handler.c


  destination_url = lasso_provider_get_metadata_one(
      provider, "SingleSignOnService HTTP-Redirect");
  if (destination_url == NULL) {
      /* HTTP-Redirect unsupported - try HTTP-POST. */
      http_method = LASSO_HTTP_METHOD_POST;
      destination_url = lasso_provider_get_metadata_one(
          provider, "SingleSignOnService HTTP-POST");
  }

Looking in the metadata, it seems that this value should be:

  <SingleSignOnService Location="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml" />


So maybe something has rewritten the value used as the url ?


Here is the full error message


Invalid Destination on Response. Should be: 
http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse, 
referer: 
https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml?SAMLRequest=nZJba%2BMwEEb%2FitG7I%2BXi1Igk4OYCge5S0m4f%2BlKEM2lFLcmrGWc3%2F35HDu22D22hIDCMZ%2FTpHGmGxjWtrjp68jv43QFS9tc1HnX%2FYy666HUwaFF74wA11fqm%2BnGlRwOl2xgo1KERb0Y%2BnzCIEMkGL7Ltai4e1LoYq%2FFoXapJWU2GhSouN5vhelpNyqIcX2xEdgcRuX8ueJyHEDvYeiTjiUtqOM1VmavprRppXkVxL7IVM1hvqJ96ImpRS2n34MnSaWBOofOP%2BR6aJqfhhVID4n5pWICMYBqHMrSQEupn%2BQIoE5nIlsEjpODPEOtzk667GPmbW9c2trYksk2INfSm5%2BJgGoTEc81K7BFeK9WLoRTWOYg3EI%2B2hl%2B7q%2F80ryf8AEcXSil5HEvH9eBlG5B2gG06mljMEo3uVcbFd7d0QGZvyMzk291m5%2Bf0k61sV9eBwU8J25kvpKWK3eeHvlVTNB4ty2MdHPZnyRdDrIhiB0IuzpHvH%2B3iHw%3D%3D&RelayState=http%3A%2F%2Fopenstack.ayoung-dell-t1700.test%3A5000%2Fv3%2Fauth%2FOS-FEDERATION%2Fwebsso%2Fsaml2%3Forigin%3Dhttp%3A%2F%2Fopenstack.ayoung-dell-t1700.test%2Fdashboard%2Fauth%2Fwebsso%2F&SigAlg=http%3A%2F%2Fwww.w3.org%2F2000%2F09%2Fxmldsig%23rsa-sha1&Signature=oJzAwE7ma3m0gZtO%2FvPQKCnk18u4OsjKcRQ3wiDu7txUGiPr4Cc9XIzKIGwzSGPSaWi8j1qbN76XwdNICOk! 

HI5RsTdeS2Yeufw5Q5Ahol5cJHGEQOKa84iMzxkW9OtWgoYZnnXH3n2SCZkhLebabvJ72wfxskZ9iJ9JlVogHO8V%2BXUZ891sX1Rpm3UKH

Re: [openstack-dev] [nova] [scheduler] Use ResourceProviderTags instead of ResourceClass?

2016-08-07 Thread Alex Xu
Chris, thanks for the blog post explaining your idea! It helps me understand
your idea better.

I agree with the goals for API interface design in your blog post. One point I
guess you also agree with is that the interface should be easy for an API user
to understand. Looking at the example API request flow with gabbi, it is pretty
clear to me even though I didn't spend any time learning gabbi. That means
gabbi is cool and the interface is clear! The only confusing part is "total:
∞". The related ResourceClass is "ssd"; does that mean the disk size is
infinite? A user who is learning our API has to search the documentation just
to find out what this special usage means. If users can understand our API
without any documentation, that is perfect.

I agree with all the other points you made: limiting resources, a unified
concept. If we want to reach that goal, though, I think the way to do it is
"Use ResourceProviderTags instead of ResourceClass", not "Use ResourceClass
instead of ResourceProviderTags".
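
To make the contrast concrete, a rough, purely illustrative sketch follows; the
dict layouts and field names are assumptions for comparison only, not the
actual placement API:

  # Option A: a qualitative capability modelled as a ResourceClass with an
  # effectively infinite inventory (the confusing "total: ∞" case).
  provider_with_capability_class = {
      "uuid": "aaaa-bbbb",
      "inventories": {
          "DISK_GB": {"total": 2048},
          "SSD": {"total": float("inf")},
      },
  }

  # Option B: the same capability attached to the resource provider as a tag,
  # with nothing to consume or allocate.
  provider_with_tags = {
      "uuid": "aaaa-bbbb",
      "inventories": {
          "DISK_GB": {"total": 2048},
      },
      "tags": ["SSD"],
  }

  def has_capability(provider, capability):
      # A qualitative check then reads naturally and needs no inventory math.
      return capability in provider.get("tags", [])

  print(has_capability(provider_with_tags, "SSD"))  # True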

2016-08-05 21:16 GMT+08:00 Chris Dent :

> On Tue, 2 Aug 2016, Alex Xu wrote:
>
> Chris have a thought about using ResourceClass to describe Capabilities
>> with an infinite inventory. In the beginning we brain storming the idea of
>> Tags, Tan Lin have same thought, but we say no very quickly, due to the
>> ResourceClass is really about Quantitative stuff. But Chris give very good
>> point about simplify the ResourceProvider model and the API.
>>
>
> I'm still leaning in this direction. I realized I wasn't explaining
> myself very well and "because I like it" isn't really a good enough
> for doing anything, so I wrote something up about it:
>
>https://anticdent.org/simple-resource-provision.html
>
> --
> Chris Dent   ┬─┬ノ( º _ ºノ) http://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [scheduler] Use ResourceProviderTags instead of ResourceClass?

2016-08-07 Thread Yingxin Cheng
2016-08-05 21:16 GMT+08:00 Chris Dent :

> On Tue, 2 Aug 2016, Alex Xu wrote:
>
> Chris have a thought about using ResourceClass to describe Capabilities
>> with an infinite inventory. In the beginning we brain storming the idea of
>> Tags, Tan Lin have same thought, but we say no very quickly, due to the
>> ResourceClass is really about Quantitative stuff. But Chris give very good
>> point about simplify the ResourceProvider model and the API.
>>
>
> I'm still leaning in this direction. I realized I wasn't explaining
> myself very well and "because I like it" isn't really a good enough
> for doing anything, so I wrote something up about it:
>
>https://anticdent.org/simple-resource-provision.html
>
>
Reusing the existing infrastructure of resource classes, inventories and
allocations does make implementation easier with capabilities as well as
their calculations and representations, at least at the beginning.

But I'm still not convinced by this direction, because it introduces
unnecessary reuse as well as overhead for capabilities. Instead of
attaching a capability directly to a resource provider, it has to create
an inventory and assign the capability to that inventory, indirectly. Moreover,
it reuses allocations and even the "compare-and-swap" strategy built around
the "generation" field in the resource provider. And it introduces further
complexity and obscurity if we decide to disable the unnecessary consumable
features for capabilities.

The existing resource provider architecture is mainly for consumable
resources, and we don't want capabilities to become consumable by mistake.
Non-consumable capabilities are an inherently different implementation, so I
tend to agree with implementing the qualitative part of resource providers
from a fresh start, to keep it simple and direct, and adding features
incrementally if they turn out to be necessary.


---
Regards
Yingxin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat

2016-08-07 Thread Ryan Hallisey
Hi,

There are a few additional items needed here before you can use the container 
work
from tht in the undercloud.

First, we deploy on Atomic.  Atomic already has docker started when it boots. 
We can't
use Atomic for the undercloud because there is no yum to install the clients.  
The clients
would have to be in another container. Instead of using Atomic, Docker could be 
set up and
configured by a script before the deployment. Second, you need to build the 
images locally
and push them to a local Docker registry. Therefore, there needs to be 
additional bits
that configure and set up the registry followed by cloning Kolla and building 
the
container images.  Lastly, configs are generated using puppet tags. I'm not 
sure every
service has a _config tag in the puppet scripts currently.

Thanks,
-Ryan


- Original Message -
From: "Dan Prince" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Friday, August 5, 2016 7:34:32 AM
Subject: Re: [openstack-dev] [TripleO] a new Undercloud install driven by Heat


Lastly, there is container work ongoing for the Overcloud. Again, I'd
like to see us adopt a format that would allow it to be used in the
Undercloud as well, as opposed to having to re-implement features in the
Over and Under clouds all the time.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Clint Byrum
Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
> On 05/08/16 21:48, Ricardo Rocha wrote:
> > Hi.
> >
> > Quick update is 1000 nodes and 7 million reqs/sec :) - and the number 
> > of requests should be higher but we had some internal issues. We have 
> > a submission for barcelona to provide a lot more details.
> >
> > But a couple questions came during the exercise:
> >
> > 1. Do we really need a volume in the VMs? On large clusters this is a 
> > burden, and local storage only should be enough?
> >
> > 2. We observe a significant delay (~10min, which is half the total 
> > time to deploy the cluster) on heat when it seems to be crunching the 
> > kube_minions nested stacks. Once it's done, it still adds new stacks 
> > gradually, so it doesn't look like it precomputed all the info in advance
> >
> > Anyone tried to scale Heat to stacks this size? We end up with a stack 
> > with:
> > * 1000 nested stacks (depth 2)
> > * 22000 resources
> > * 47008 events
> >
> > And already changed most of the timeout/retrial values for rpc to get 
> > this working.
> >
> > This delay is already visible in clusters of 512 nodes, but 40% of the 
> > time in 1000 nodes seems like something we could improve. Any hints on 
> > Heat configuration optimizations for large stacks very welcome.
> >
> Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
> max_resources_per_stack = -1
> 
> Enforcing this for large stacks has a very high overhead, we make this 
> change in the TripleO undercloud too.
> 

Wouldn't this necessitate having a private Heat just for Magnum? Not
having a resource limit per stack would leave your Heat engines
vulnerable to being DoS'd by malicious users, since one can create many
many thousands of resources, and thus python objects, in just a couple
of cleverly crafted templates (which is why I added the setting).
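
As a back-of-the-envelope illustration (made-up numbers, and only the
multiplication effect of nested groups, not an actual template):

  # A top-level group of `count` nested stacks, each containing `count`
  # resources, expands to count**depth resources (and python objects) that
  # the engine has to track. Without a per-stack resource limit this grows
  # unchecked.
  count = 1000
  depth = 2
  print(count ** depth)  # 1,000,000 resources from a single crafted stack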

This makes perfect sense in the undercloud of TripleO, which is a
private, single tenant OpenStack. But, for Magnum.. now you're talking
about the Heat that users have access to.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptl][requirements] nomination period started

2016-08-07 Thread Tony Breeds
On Wed, Jul 27, 2016 at 11:13:03AM -0500, Tony Breeds wrote:

> I'd like to nominate for PTL of the, to be formed, requirements project.

I'd just like to clarify something as I've been asked by several people
privately.  The gist of the question is:
You're the Stable PTL if elected to requirements PTL what happens to Stable?

Frankly, unless the community has a major objection to doing so, I'm quite
willing to do the PTL role for both projects.  There is considerable overlap
between the skills and (pun intended) requirements in both projects/roles.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Steve Baker

On 05/08/16 21:48, Ricardo Rocha wrote:

Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number 
of requests should be higher but we had some internal issues. We have 
a submission for barcelona to provide a lot more details.


But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a 
burden, and local storage only should be enough?


2. We observe a significant delay (~10min, which is half the total 
time to deploy the cluster) on heat when it seems to be crunching the 
kube_minions nested stacks. Once it's done, it still adds new stacks 
gradually, so it doesn't look like it precomputed all the info in advance


Anyone tried to scale Heat to stacks this size? We end up with a stack 
with:

* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And already changed most of the timeout/retrial values for rpc to get 
this working.


This delay is already visible in clusters of 512 nodes, but 40% of the 
time in 1000 nodes seems like something we could improve. Any hints on 
Heat configuration optimizations for large stacks very welcome.



Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
max_resources_per_stack = -1

Enforcing this for large stacks has a very high overhead, we make this 
change in the TripleO undercloud too.
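
For reference, a rough sketch (assuming python-heatclient and keystoneauth1,
the usual OS_* environment variables, and a made-up stack name "my-bay") of how
to measure the size of such a stack tree before and after tuning:

  import os
  from keystoneauth1 import identity, session
  from heatclient import client as heat_client

  auth = identity.Password(
      auth_url=os.environ["OS_AUTH_URL"],
      username=os.environ["OS_USERNAME"],
      password=os.environ["OS_PASSWORD"],
      project_name=os.environ["OS_PROJECT_NAME"],
      user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
      project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
  )
  heat = heat_client.Client("1", session=session.Session(auth=auth))

  stack = heat.stacks.get("my-bay")

  # Resources across the tree (nested_depth=2 matches the depth-2 layout
  # described above) and events on the top-level stack.
  resources = list(heat.resources.list(stack.id, nested_depth=2))
  events = list(heat.events.list(stack.id))

  print("resources in tree:", len(resources))
  print("top-level events:", len(events))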



Cheers,
  Ricardo

On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol > wrote:


Thanks Ricardo! This is very exciting progress!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet: bto...@us.ibm.com 
Assistant: Kendra Witherspoon (919) 254-0680


From: Ton Ngo/Watson/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage
questions\)" mailto:openstack-dev@lists.openstack.org>>
Date: 06/17/2016 12:10 PM
Subject: Re: [openstack-dev] [magnum] 2 million requests / sec,
100s of nodes






Thanks Ricardo for sharing the data, this is really encouraging!
Ton,


From: Ricardo Rocha mailto:rocha.po...@gmail.com>>
To: "OpenStack Development Mailing List (not for usage questions)"
mailto:openstack-dev@lists.openstack.org>>
Date: 06/17/2016 08:16 AM
Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s
of nodes




Hi.

Just thought the Magnum team would be happy to hear :)

We had access to some hardware the last couple days, and tried some
tests with Magnum and Kubernetes - following an original blog post
from the kubernetes team.

Got a 200 node kubernetes bay (800 cores) reaching 2 million
requests / sec.

Check here for some details:

https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html



We'll try bigger in a couple weeks, also using the Rally work from
Winnie, Ton and Spyros to see where it breaks. Already identified a
couple issues, will add bugs or push patches for those. If you have
ideas or suggestions for the next tests let us know.

Magnum is looking pretty good!

Cheers,
Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-d

Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Roman Vasilets
Hi,
  Great to hear it! From the view of Rally team=)

-Best regards, Roman Vasylets

On Sun, Aug 7, 2016 at 10:55 PM, Ricardo Rocha 
wrote:

> Hi Ton.
>
> I think we should. Also in cases where multiple volume types are available
> (in our case with different iops) there would be additional parameters
> required to select the volume type. I'll add it this week.
>
> It's a detail though, spawning container clusters with Magnum is now super
> easy (and fast!).
>
> Cheers,
>   Ricardo
>
> On Fri, Aug 5, 2016 at 5:11 PM, Ton Ngo  wrote:
>
>> Hi Ricardo,
>> For your question 1, you can modify the Heat template to not create the
>> Cinder volume and tweak the call to
>> configure-docker-storage.sh to use local storage. It should be fairly
>> straightforward. You just need to make
>> sure the local storage of the flavor is sufficient to host the containers
>> in the benchmark.
>> If you think this is a common scenario, we can open a blueprint for this
>> option.
>> Ton,
>>
>>
>> From: Ricardo Rocha 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: 08/05/2016 04:51 AM
>>
>> Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
>> nodes
>> --
>>
>>
>>
>> Hi.
>>
>> Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of
>> requests should be higher but we had some internal issues. We have a
>> submission for barcelona to provide a lot more details.
>>
>> But a couple questions came during the exercise:
>>
>> 1. Do we really need a volume in the VMs? On large clusters this is a
>> burden, and local storage only should be enough?
>>
>> 2. We observe a significant delay (~10min, which is half the total time
>> to deploy the cluster) on heat when it seems to be crunching the
>> kube_minions nested stacks. Once it's done, it still adds new stacks
>> gradually, so it doesn't look like it precomputed all the info in advance
>>
>> Anyone tried to scale Heat to stacks this size? We end up with a stack
>> with:
>> * 1000 nested stacks (depth 2)
>> * 22000 resources
>> * 47008 events
>>
>> And already changed most of the timeout/retrial values for rpc to get
>> this working.
>>
>> This delay is already visible in clusters of 512 nodes, but 40% of the
>> time in 1000 nodes seems like something we could improve. Any hints on Heat
>> configuration optimizations for large stacks very welcome.
>>
>> Cheers,
>>   Ricardo
>>
>> On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol <*bto...@us.ibm.com*
>> > wrote:
>>
>>Thanks Ricardo! This is very exciting progress!
>>
>>--Brad
>>
>>
>>Brad Topol, Ph.D.
>>IBM Distinguished Engineer
>>OpenStack
>>(919) 543-0646
>>Internet: *bto...@us.ibm.com* 
>>Assistant: Kendra Witherspoon (919) 254-0680
>>
>>
>>From: Ton Ngo/Watson/IBM@IBMUS
>>To: "OpenStack Development Mailing List \(not for usage questions\)" <
>>*openstack-dev@lists.openstack.org*
>>>
>>Date: 06/17/2016 12:10 PM
>>Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s
>>of nodes
>>
>>
>>--
>>
>>
>>
>>Thanks Ricardo for sharing the data, this is really encouraging!
>>Ton,
>>
>>
>>From: Ricardo Rocha <*rocha.po...@gmail.com* >
>>To: "OpenStack Development Mailing List (not for usage questions)" <
>>*openstack-dev@lists.openstack.org*
>>>
>>Date: 06/17/2016 08:16 AM
>>Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of
>>nodes
>>--
>>
>>
>>
>>Hi.
>>
>>Just thought the Magnum team would be happy to hear :)
>>
>>We had access to some hardware the last couple days, and tried some
>>tests with Magnum and Kubernetes - following an original blog post
>>from the kubernetes team.
>>
>>Got a 200 node kubernetes bay (800 cores) reaching 2 million requests
>>/ sec.
>>
>>Check here for some details:
>>
>>
>> *https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html*
>>
>> 

Re: [openstack-dev] [rally] Support executing multi scenarios in a single task file

2016-08-07 Thread Roman Vasilets
Hi,
  This feature is still being worked on. We have already made good progress:
we introduced a task format converter [1] and a patch for the related DB
refactoring [2]. After we merge patch [2] we will be able to move on and
implement the next work items. [2] was the hardest part, so it has taken a lot
of time. We need to implement not only this spec (see our road map [3]) but
also fix some bugs [4], so we will be happy to get any help, and I believe that
will speed up implementing the new task format.

- Best regards, Roman Vasylets.


[1]
https://github.com/openstack/rally/commit/d0f079230602705251512448a135f8598c0a29b3
[2] https://review.openstack.org/#/c/297020/
[3]
https://docs.google.com/spreadsheets/d/16DXpfbqvlzMFaqaXAcJsBzzpowb_XpymaK2aFY2gA2g/edit#gid=0
[4] https://bugs.launchpad.net/rally

On Sun, Aug 7, 2016 at 5:50 PM, Hui Kang  wrote:

> Hi, OpenStack Developers,
> I came across this rally spec [1]. The spec mentions defining and
> executing multiple scenarios
> in a single rally task file. However, as this was written about 8month
> ago, may I know if this
> feature is implemented in rally? If not, is there any related patch?
> Thanks in advance.
>
> - Hui
>
> [1] https://github.com/openstack/rally/blob/master/doc/specs/
> in-progress/new_rally_input_task_format.rst
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of nodes

2016-08-07 Thread Ricardo Rocha
Hi Ton.

I think we should. Also in cases where multiple volume types are available
(in our case with different iops) there would be additional parameters
required to select the volume type. I'll add it this week.

It's a detail though, spawning container clusters with Magnum is now super
easy (and fast!).

Cheers,
  Ricardo

On Fri, Aug 5, 2016 at 5:11 PM, Ton Ngo  wrote:

> Hi Ricardo,
> For your question 1, you can modify the Heat template to not create the
> Cinder volume and tweak the call to
> configure-docker-storage.sh to use local storage. It should be fairly
> straightforward. You just need to make
> sure the local storage of the flavor is sufficient to host the containers
> in the benchmark.
> If you think this is a common scenario, we can open a blueprint for this
> option.
> Ton,
>
>
> From: Ricardo Rocha 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 08/05/2016 04:51 AM
>
> Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s of
> nodes
> --
>
>
>
> Hi.
>
> Quick update is 1000 nodes and 7 million reqs/sec :) - and the number of
> requests should be higher but we had some internal issues. We have a
> submission for barcelona to provide a lot more details.
>
> But a couple questions came during the exercise:
>
> 1. Do we really need a volume in the VMs? On large clusters this is a
> burden, and local storage only should be enough?
>
> 2. We observe a significant delay (~10min, which is half the total time to
> deploy the cluster) on heat when it seems to be crunching the kube_minions
> nested stacks. Once it's done, it still adds new stacks gradually, so it
> doesn't look like it precomputed all the info in advance
>
> Anyone tried to scale Heat to stacks this size? We end up with a stack
> with:
> * 1000 nested stacks (depth 2)
> * 22000 resources
> * 47008 events
>
> And already changed most of the timeout/retrial values for rpc to get this
> working.
>
> This delay is already visible in clusters of 512 nodes, but 40% of the
> time in 1000 nodes seems like something we could improve. Any hints on Heat
> configuration optimizations for large stacks very welcome.
>
> Cheers,
>   Ricardo
>
> On Sun, Jun 19, 2016 at 10:59 PM, Brad Topol <*bto...@us.ibm.com*
> > wrote:
>
>Thanks Ricardo! This is very exciting progress!
>
>--Brad
>
>
>Brad Topol, Ph.D.
>IBM Distinguished Engineer
>OpenStack
>(919) 543-0646
>Internet: *bto...@us.ibm.com* 
>Assistant: Kendra Witherspoon (919) 254-0680
>
>
>From: Ton Ngo/Watson/IBM@IBMUS
>To: "OpenStack Development Mailing List \(not for usage questions\)" <
>*openstack-dev@lists.openstack.org* 
>>
>Date: 06/17/2016 12:10 PM
>Subject: Re: [openstack-dev] [magnum] 2 million requests / sec, 100s
>of nodes
>
>
>--
>
>
>
>Thanks Ricardo for sharing the data, this is really encouraging!
>Ton,
>
>
>From: Ricardo Rocha <*rocha.po...@gmail.com* >
>To: "OpenStack Development Mailing List (not for usage questions)" <
>*openstack-dev@lists.openstack.org* 
>>
>Date: 06/17/2016 08:16 AM
>Subject: [openstack-dev] [magnum] 2 million requests / sec, 100s of
>nodes
>--
>
>
>
>Hi.
>
>Just thought the Magnum team would be happy to hear :)
>
>We had access to some hardware the last couple days, and tried some
>tests with Magnum and Kubernetes - following an original blog post
>from the kubernetes team.
>
>Got a 200 node kubernetes bay (800 cores) reaching 2 million requests
>/ sec.
>
>Check here for some details:
>
>
> *https://openstack-in-production.blogspot.ch/2016/06/scaling-magnum-and-kubernetes-2-million.html*
>
> 
>
>We'll try bigger in a couple weeks, also using the Rally work from
>Winnie, Ton and Spyros to see where it breaks. Already identified a
>couple issues, will add bugs or push patches for those. If you have
>ideas or suggestions for the next tests let us know.
>
>Magnum is looking pretty good!

Re: [openstack-dev] [Magnum] Adding opensuse as new driver to Magnum

2016-08-07 Thread Hongbin Lu
Added to the agenda of next team meeting: 
https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-08-09_1600_UTC
 .

Best regards,
Hongbin

> -Original Message-
> From: Murali Allada [mailto:murali.all...@rackspace.com]
> Sent: August-04-16 12:38 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Magnum] Adding opensuse as new driver to
> Magnum
> 
> Michal,
> 
> The right place for drivers is the /drivers folder.
> 
> Take a look at the existing drivers as examples. You'll also need to
> update this file
> https://github.com/openstack/magnum/blob/master/setup.cfg#L60
> and add a new entry point for the driver.
> 
> I would encourage you to hold off on this patch. We are currently
> working on using stevedore to load drivers and moving all the heat
> stack creation and update operations to each driver.
> 
> -Murali
> 
> 
> From: Michal Jura 
> Sent: Thursday, August 4, 2016 3:26 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Magnum] Adding opensuse as new driver to
> Magnum
> 
> Hi,
> 
> I would like to put up for discussion adding a new driver to Magnum. We
> would like to propose openSUSE with Kubernetes as the driver. I did some
> initial work in bug
> 
> https://launchpad.net/bugs/1600197
> and changes
> https://review.openstack.org/339480
> https://review.openstack.org/349994
> 
> I've also got some comments from you about how this should proceed.
> 
> As maintainer for this change I can propose myself.
> 
> I have a couple of questions about moving this driver to the /contrib directory.
> If I do this, how should the driver be installed from there?
> 
> Thank you for all answers and help with doing this,
> 
> Best regards,
> Michal
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [osic][kolla] Start of OSIC scale testing documentation [WIP]

2016-08-07 Thread Steven Dake (stdake)
Hey Koalaians,

See review:
https://review.openstack.org/352101

What I'd like to see happen is everyone working on scale testing commit their 
work by pulling down that review, modifying it, and submitting it to Gerrit.  
Let's not merge it until the OSIC cluster is no longer in our hands.  If you want 
to review the documentation along the way, that works too, so we can fix any 
problems found.  I think this will be better than an etherpad->git conversion 4 
weeks from now.  Let's do the conversion along the way.

TIA :)
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-07 Thread Colette Alexander
On Tue, Aug 2, 2016 at 11:29 AM, Thierry Carrez  wrote:
>
> Doug Hellmann wrote:
> > [...]
> >> Likewise, what if the Manila project team decides they aren't interested
> >> in supporting Python 3.5 or a particular greenlet library du jour that
> >> has been mandated upon them? Is the only filesystem-as-a-service project
> >> going to be booted from the tent?
> >
> > I hardly think "move off of the EOL-ed version of our language" and
> > "use a library du jour" are in the same class.  All of the topics
> > discussed so far are either focused on eliminating technical debt
> > that project teams have not prioritized consistently or adding
> > features that, again for consistency, are deemed important by the
> > overall community (API microversioning falls in that category,
> > though that's an example and not in any way an approved goal right
> > now).
>
> Right, the proposal is pretty clearly about setting a number of
> reasonable, small goals for a release cycle that would be awesome to
> collectively reach. Not really invasive top-down design mandates that we
> would expect teams to want to resist.
>
> IMHO if a team has a good reason for not wanting or not being able to
> fulfill a common goal that's fine -- it just needs to get documented and
> should not result in itself in getting kicked out from anything. If a
> team regularly skips on common goals (and/or misses releases, and/or
> doesn't fix security issues) that's a general sign that it's not really
> behaving like an OpenStack project and then a case could be opened for
> removal, but there is nothing new here.
>

+1 to all of this.

As someone who was in leadership training with both Thierry and Doug,
I just want to point out that 'reasonable, small goals' and 'support a
particular library du jour or get kicked out of OpenStack' are
immensely different reads on the potential of this situation.

OpenStack suffers from a perception that it is not a usable, cohesive
set of projects that work together. Like many perceptions out in the
wild about OSS projects, that is part PR/spin/haters-gonna-hate, and
part reality. Apart from PR-land, the idea of having some basic
standards and cross-project goals to meet not just before a project is
accepted, but also as it matures, hopefully serves the ultimate
purpose of presenting an OpenStack with enough consistency and
usability that it doesn't make a user/operator want to throw a server
out the window.

Where does the technical independence of a project begin encroaching
on the ability of other projects to effectively work for their users
or their developer participation? Where do the desires of a single
project begin to negatively impact the entirety of OpenStack, and how
does the community respond when faced with tough situations like that?

The TC, as a body that oversees the overarching technical direction of
the entire group of projects, has identified areas where OpenStack
could be *highly* improved in its functionality and usability - those
certainly, on their surface, seem like noble goals for any person
creating any project with an end user that is not themselves. That the
TC is the mechanism to do that unifying, helpful, future-preserving
thing for OpenStack and also the same mechanism that, in the wrong
hands, could be used to ask impossible things of projects (and use
their failure to kick them out of the community, eventually) is
totally true, but it's also always been true of the TC. That's why
preserving our culture of leadership by election is so important -
leaders are elected to serve their constituents - *not* the other way
around.

-colette/gothicmindfood

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] Support executing multi scenarios in a single task file

2016-08-07 Thread Hui Kang
Hi, OpenStack Developers,
I came across this rally spec [1]. The spec mentions defining and
executing multiple scenarios
in a single rally task file. However, as this was written about 8month
ago, may I know if this
feature is implemented in rally? If not, is there any related patch?
Thanks in advance.

- Hui

[1] 
https://github.com/openstack/rally/blob/master/doc/specs/in-progress/new_rally_input_task_format.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev