Re: [openstack-dev] [all][api][tc][performance] API for getting only status of resources

2015-11-04 Thread Chris Friesen

On 11/03/2015 11:45 PM, John Griffith wrote:



On Tue, Nov 3, 2015 at 3:20 PM, Boris Pavlovic <bo...@pavlovic.me> wrote:

Hi stackers,

Projects such as Heat, Tempest, Rally, Scalar, and other tools that work with
OpenStack usually handle resources (e.g. VMs, volumes, images,
...) in the following way:

 >>> resource = api.resource_do_some_stuff()
 >>> while api.resource_get(resource["uuid"])["status"] != expected_status:
 ...     sleep(a_bit)

For each async operation they poll, calling resource_get() many times,
which creates significant load on the API and DB layers due to the nature of this
request. (Getting full information about a resource usually produces SQL
queries containing multiple JOINs; e.g. for a Nova VM it is 6 joins.)

What if we add a new API method that just returns the resource status by
UUID? Or even extend the GET request with a new argument that returns
only the status?
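To make the proposal concrete, here is a hedged sketch of the client-side polling helper; `resource_get_status` models the proposed lightweight status-only call and is an assumption, not an existing OpenStack client method:

```python
import time


def wait_for_status(api, uuid, expected_status, poll_interval=1.0, timeout=300.0):
    """Poll a hypothetical status-only endpoint until the resource settles.

    'api.resource_get_status' is assumed to return just the status string,
    avoiding the multi-JOIN query behind a full resource GET.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = api.resource_get_status(uuid)
        if status == expected_status:
            return status
        time.sleep(poll_interval)
    raise RuntimeError("resource %s did not reach %s within %ss"
                       % (uuid, expected_status, timeout))
```

Compared to polling the full resource, the number of API calls stays the same; the gain would come from the backing query selecting the status column alone.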



​Hey Boris,

As I asked in IRC, I'm kinda curious what the difference is here in terms of API
and DB calls.  I may well be missing an idea here, but currently we do a
get by ID in that loop that you mention; the only difference I see in what
you're suggesting is a reduced payload, maybe?  A response that only includes the
status?

I may be missing an important idea here, but it seems to me that you would still
have the same number of API calls and DB requests, just a possibly slightly
smaller payload.  Let me know if I'm missing the idea here.



I think the idea is that we would only retrieve resource status rather than the 
full information about the resource.  In doing so we would:


1) Reduce the load on the DB due to doing fewer JOINs and retrieving less data.
2) Reduce the message payload.

I suspect that the first one is more important.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] SR-IOV subteam

2015-11-04 Thread Shinobu Kinjo
That's good to know.
Thank you!

 Shinobu

- Original Message -
From: "Moshe Levi" 
To: "OpenStack Development Mailing List (not for usage questions)" 

Sent: Wednesday, November 4, 2015 4:56:05 PM
Subject: Re: [openstack-dev] [Nova] SR-IOV subteam

Maybe we can use the PCI passthrough meeting slot
http://eavesdrop.openstack.org/#PCI_Passthrough_Meeting
It has been a long time since we had a meeting.


> -Original Message-
> From: Nikola Đipanov [mailto:ndipa...@redhat.com]
> Sent: Tuesday, November 03, 2015 6:53 AM
> To: OpenStack Development Mailing List  d...@lists.openstack.org>
> Subject: [openstack-dev] [Nova] SR-IOV subteam
> 
> Hello Nova,
> 
> Looking at Mitaka specs, but also during the Tokyo design summit sessions,
> we've seen several discussions and requests for enhancements to the Nova
> SR-IOV functionality.
> 
> It has been brought up during the Summit that we may want to organize as a
> subteam to track all of the efforts better and make sure we get all the expert
> reviews on stuff more quickly.
> 
> I have already added an entry on the subteams page [1] and on the reviews
> etherpad for Mitaka [2]. We may also want to have a meeting slot. As I am
> out for the week, I'll let others propose a time for it (that will hopefully
> work for all interested parties and their
> timezones) and we can take it from there next week.
> 
> As always - comments and suggestions much appreciated.
> 
> Many thanks,
> Nikola
> 
> [1] https://wiki.openstack.org/wiki/Nova#Nova_subteams
> [2] https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
> 




Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-04 Thread Louis Taylor
On Wed, Nov 04, 2015 at 03:03:29PM +1300, Fei Long Wang wrote:
> Hi Doug,
> 
> Thanks for posting this. I'm working on this for Zaqar now and there is a
> question. As for the stable/liberty patch, where does the "60fdcaba00e30d02"
> in [1] come from? Thanks.
> 
> [1] 
> https://review.openstack.org/#/c/241322/1/releasenotes/notes/60fdcaba00e30d02-start-using-reno.yaml

This is from running the reno command to create a uniquely named release note
file. See http://docs.openstack.org/developer/reno/usage.html
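For illustration, the workflow is roughly as follows (the slug `start-using-reno` is inferred from the linked file name; the hex prefix is generated by reno itself):

```shell
# Run from the repository root of the project being released.
# reno generates the random hex part (e.g. 60fdcaba00e30d02) so that
# note filenames never collide across branches.
reno new start-using-reno
# The new YAML file appears under releasenotes/notes/ and is then
# edited and committed like any other change.
```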

Cheers,
Louis




Re: [openstack-dev] [Heat] Enabling and Reading notifications

2015-11-04 Thread Thomas Herve
On Tue, Nov 3, 2015 at 11:22 PM, Pratik Mallya 
wrote:

> Hello,
>
> I was looking for guidance as to how to enable notifications in heat and
> if there is already a tool that can read those events? Looking through the
> code, it gives somewhat conflicting information as to the extent to which
> notifications are supported. e.g. [1] says its not supported, but there is
> an integration test [2] available.
>
>
Hi,

[1] is something totally different: it's the notification property of stack
objects, for AWS compatibility. It was never implemented.

To enable Heat notifications, you simply need to set notification_driver,
messagingv2 being a good value. Notifications will then be published on your
message bus (depending on how it's configured). Ceilometer will consume them,
or you can plug in any tool to read them on the appropriate exchange.
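For reference, a minimal heat.conf sketch (option names follow the Liberty-era oslo.messaging layout; verify against your release, and the topic setting is shown only as an optional assumption):

```ini
[DEFAULT]
# Publish notifications on the message bus using the 2.0 message format.
notification_driver = messagingv2
# Optional: topic(s) the notifications are published on.
# notification_topics = notifications
```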

-- 
Thomas


[openstack-dev] [Horizon] Handling of 202 response code

2015-11-04 Thread Saravanan KR

Hello,

How is an HTTP status code 202 response (from Nova) handled in
Horizon, to know when the asynchronous operation has completed?


Background:
I am working on Bug #1506429 [1], where invoking 'Detach Interface'
initiates the detach and refreshes the page. But the interface which
is detached is not removed from the 'IP Address' list in the instance
panel view. It is removed only after a manual page refresh (in the browser).


Why:
In Horizon, the 'Detach Interface' action triggers the Nova API [2], which
returns status code 202 (request accepted and processing
asynchronously). Without checking the asynchronous result, Horizon
reports the request as 'Detached' and refreshes the page. Since the
interface detach is still in progress and not completed, the interface is
listed again.


There may be multiple solutions:
1) Wait for the operation to complete and then respond
2) Do not trigger a page refresh; respond with 'operation in progress'
3) If there is a mechanism to know a delete is in progress, do not list the
interface


To decide on a solution, it is important to know how 202 responses
should be handled. Can anyone help with understanding this?
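As an illustration of option (1), here is a hedged sketch of client-side polling after the 202; `list_interfaces` is a stand-in for the real novaclient interface-list call, not the actual Horizon code path:

```python
import time


def wait_for_detach(list_interfaces, server_id, port_id,
                    interval=1.0, timeout=60.0):
    """Poll until port_id disappears from the server's interface list.

    Returns True once the 202-accepted detach has actually completed,
    False if the port is still attached when the timeout expires.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        attached = [i["port_id"] for i in list_interfaces(server_id)]
        if port_id not in attached:
            return True  # safe to refresh the IP address table now
        time.sleep(interval)
    return False
```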


Regards,
Saravanan KR

[1] https://bugs.launchpad.net/horizon/+bug/1506429
[2] 
http://developer.openstack.org/api-ref-compute-v2.1.html#deleteAttachedInterface




Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-04 Thread Michał Dubiel
Ok, I see. Thanks for all the answers.

Regards,
Michal

On 3 November 2015 at 22:50, Matt Riedemann 
wrote:

>
>
> On 11/3/2015 11:57 AM, Michał Dubiel wrote:
>
>> Hi all,
>>
>> We have a simple patch allowing to use OpenContrail's vrouter with
>> vhostuser vif types (currently only OVS has support for that). We would
>> like to contribute it.
>>
>> However, We would like this change to land in the next maintenance
>> release of Kilo. Is it possible? What should be the process for this?
>> Should we prepare a blueprint and review request for the 'master' branch
>> first? It is small self contained change so I believe it does not need a
>> nova-spec.
>>
>> Regards,
>> Michal
>>
>>
>>
>>
> The short answer is 'no' to backporting features to stable branches.
>
> As the other reply said, feature changes are targeted to master.
>
> The full stable branch policy is here:
>
> https://wiki.openstack.org/wiki/StableBranch
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
>


Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-04 Thread Sean M. Collins
On Tue, Nov 03, 2015 at 02:08:26PM EST, Vikram Choudhary wrote:
> Thanks for all your efforts Sean.
> 
> I was actually thinking a separate IRC for this effort would be great and
> will help all the interested people to come together and develop.
> 
> Any thoughts on this?

Unless it becomes super popular I think it's fine to just discuss on
#openstack-neutron. 

-- 
Sean M. Collins



Re: [openstack-dev] [Neutron] Question about Microsoft Hyper-V CI tests

2015-11-04 Thread Ihar Hrachyshka

Sławek Kapłoński  wrote:



It is strange to me because it looks like the error is somewhere in
create_network. I didn't change anything in the code which creates
networks.

Other tests are fine IMHO.
So my question is: should I check the reason for these errors and try to fix
them in my patch as well? Or how should I proceed with this kind of error?


It is usual for some third-party CI jobs to misbehave from time to time. I
would check if the same failure occurs in other patches, and if so, I
wouldn't be bothered too much. There are plenty of reasons why it could
fail, but in any case those jobs are not under the neutron team's control, so
often we cannot easily figure out what goes wrong there. In that case
just leave CI debugging to the Hyper-V folks who set up the job.


Ihar



Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-04 Thread Ihar Hrachyshka

Sean M. Collins  wrote:


Hi Ihar,

This sounds good. I actually had a draft e-mail that I've been saving
until I got back, that may be relevant. Some contributors met on Friday
to discuss the packet classification framework, mostly centered around
just building a reusable library that can be shared among multiple
services.

It was my view that just getting the different APIs to share a common
data model would be a big first step, since we can refactor a lot of
common internal data structures without any user facing API changes.



Yes, code reuse could help a lot. Though I believe that convergence should
also be achieved at the API level. There is still no API merged for either
the traffic classifier or service groups, so I would be glad if we stepped back
and thought about how to cover both cases with a single API.


APIs are not that easy to refactor or deprecate since they are user
visible. I would rather take a bit more time to polish a single API than
throw in multiple application-specific APIs as needed.


I quickly went back to my hotel room on Friday (after stealing some red  
bulls from the

dev lounge) to start hacking on a shared library for packet
classification, that can be re-used by other projects.

At this point, the code is mostly SQLAlchemy models, but the objective is  
to
try and see if the models are actually useful, and can be re-used by  
multiple services.


On the FwaaS side I plan on proving out the models by attempting to
replace some of the FwaaS database models with models from the
common-classifier. I also plan on putting together some simple tests to
see if it can also handle classifiers for security groups in the future,
since there has already been some ideas about creating a common backend
for both FwaaS and the Security Group implementation.

Anyway, the code is currently up on GitHub - I just threw it on there
because I wanted to scratch my hacking itch quickly.

https://github.com/sc68cal/neutron-classifier

Hopefully this can help spur more discussion.



That’s a good start, and I like the approach with inheritance. Let’s
iterate on the neutron-specs review for the traffic classifier to get to
some result.


In the meantime, I highly recommend we don’t merge anything for service  
groups.


Ihar



Re: [openstack-dev] [Nova] notification subteam

2015-11-04 Thread Balázs Gibizer
> From: Michael Davies [mailto:mich...@the-davies.net]
> Sent: November 04, 2015 00:36
> 
> On Wed, Nov 4, 2015 at 8:49 AM, Michael Still  wrote:
> 
> 
>   I'd be interested in being involved with this, and I know Paul
> Murray is interested as well.
> 
>   I went to make a doodle, but then realised the only non-
> terrible timeslot for Australia / UK / US Central is 8pm UTC (7am Australia,
> 8pm London, 2pm Central US). So what do people think of that time slot?
> 
> 
> I'm interested, along with Mario, in making sure Ironic and Nova
> notifications follow similar paths, so I'd probably lurk along to this as 
> well (so
> the proposed time slot works for me).

Michael, thanks for the proposal. UTC 20:00 is a bit late but I will manage it. 
I've proposed a patch [1] to book a meeting slot for Tuesday. 

Cheers,
Gibi

[1] https://review.openstack.org/#/c/241546
>
> --
> 
> Michael Davies   mich...@the-davies.net
> Rackspace Cloud Builders Australia



[openstack-dev] [Neutron][Infra] Neutron gate issues

2015-11-04 Thread Gary Kotton
Hi,
At the moment the neutron gate is broken due to LBaaSv1 issues. This is meant 
to be deprecated in favor of LBaaSV2 
(https://review.openstack.org/#/c/239863).
Is there any chance that we can get that approved?
Thanks
Gary


[openstack-dev] [watcher] weekly meeting #5

2015-11-04 Thread Antoine Cabot
Hello!

Here's an initial agenda for our weekly meeting, today at 1600 UTC
in #openstack-meeting-3:

https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda#11.2F4.2F2015_Agenda:

Feel free to add any items you'd like to discuss.

Thanks,

Antoine


[openstack-dev] [infra][dsvm] openstack packages pre-installed dsvm node?

2015-11-04 Thread Gareth
Hey

When choosing nodes to run a gate job, is there a liberty-trusty node,
i.e. one where the Liberty core services are installed and well set up? It
would be helpful for development on non-core projects and would save much time.

-- 
Gareth

Cloud Computing, OpenStack, Distributed Storage, Fitness, Basketball
OpenStack contributor, kun_huang@freenode
My promise: if you find any spelling or grammar mistakes in my email
from Mar 1 2013, notify me
and I'll donate $1 or ¥1 to an open organization you specify.



Re: [openstack-dev] [Horizon] Handling of 202 response code

2015-11-04 Thread Matthias Runge
On 04/11/15 09:25, Saravanan KR wrote:

> There may be multiple solutions:
> 1) Waiting for the synchronous and then respond
> 2) Do not trigger page refresh and respond with Operation in progress
> 3) If there is a mechanism to know delete in progress, do not list the
> interface
> 
> To decide on the solution, it is important to know how 202 responses
> should be handled. Can anyone can help with understanding?

Asynchronous operations are handled in Horizon as if they were synchronous.
To illustrate: launch an instance and you'll immediately get
feedback ("launch instance issued").
But you don't get a status feedback directly. Horizon polls the Nova API
for status updates via AJAX calls.

So: there is no solution for this yet. You could duplicate the same
update strategy as in launch instance (on the instances page), create
volume (on the volumes table), etc.

In the ideal case, one would use something like a message bus to get
notified of changes.

Matthias





[openstack-dev] [Fuel] [Plugins] Calamari plugin project has been moved to the OpenStack workspace

2015-11-04 Thread Alessandro Martellone
Hi all,

we have moved the Calamari plugin project for Fuel (7.0 and 6.1) to
https://github.com/openstack/fuel-plugin-calamari.

The user guide is available at:
https://github.com/openstack/fuel-plugin-calamari/blob/master/doc/source/user-guide.rst

The specification is available at:
https://github.com/openstack/fuel-specs/blob/master/specs/7.0/calamari.rst


Best regards,
Alessandro
-- 


Alessandro M. Martellone



CREATE-NET

Smart Infrastructures Area

Via alla Cascata 56/D

38123 Povo Trento (Italy)



T  (+39) 0461 312437

E  alessandro.martell...@create-net.org

W  www.create-net.org




Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic classifiers

2015-11-04 Thread Vikram Choudhary
I am fine with this!

-Original Message-
From: Sean M. Collins [mailto:s...@coreitpro.com] 
Sent: 04 November 2015 14:28
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][qos][fwaas] service groups vs. traffic 
classifiers

On Tue, Nov 03, 2015 at 02:08:26PM EST, Vikram Choudhary wrote:
> Thanks for all your efforts Sean.
> 
> I was actually thinking a separate IRC for this effort would be great 
> and will help all the interested people to come together and develop.
> 
> Any thoughts on this?

Unless it becomes super popular I think it's fine to just discuss on 
#openstack-neutron. 

--
Sean M. Collins



Re: [openstack-dev] Is there any incubating Data Backup projects?

2015-11-04 Thread Marzi, Fausto
Hi Thomas,
As per your request, the binaries have been removed in:
- https://review.openstack.org/#/c/241325/

Thanks,
Fausto

-Original Message-
From: Thomas Goirand [mailto:z...@debian.org] 
Sent: 31 October 2015 16:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Is there any incubating Data Backup projects?

On 10/31/2015 01:07 PM, Marzi, Fausto wrote:
> Hi Yitao,
> 
> That's the idea and the direction we are taking. Please refer to the 
> following wiki for more information:
> 
> -  https://wiki.openstack.org/wiki/Freezer
> 
> Currently there's an application to include Freezer in the big tent, 
> it is available here:
> 
> -  https://review.openstack.org/#/c/239668/
> 
> Every Thursday we have a public meeting on IRC Freenode 
> #openstack-freezer at 4:00 pm GMT, but we can have a conversation on 
> hangout anytime you want.

Hi,

I'm happy to see such project as Freezer, though look here:
https://github.com/openstack/freezer/tree/master/freezer/bin

It's full of .dll, .exe and such, without even the source code available in the 
repository.

Please clean this up: remove all binaries and replace them with dependencies. If
this cannot be done because it's Windows-specific, then remove the Windows part
and host it in another (non-free) git repository.

Yes, it will be more painful for you to work on it. Though we have no
choice: we're doing free software.

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] [NFV][Telco] Resigning TelcoWG core team

2015-11-04 Thread Calum Loudon
I'm sorry to hear that, Marc.  Thanks for your efforts and best wishes for the
future.

Calum


Calum Loudon 
Director, Architecture
+44 (0)208 366 1177
 
METASWITCH NETWORKS 
THE BRAINS OF THE NEW GLOBAL NETWORK
www.metaswitch.com



-Original Message-
From: Marc Koderer [mailto:m...@koderer.com] 
Sent: 03 November 2015 08:18
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: [openstack-dev] [NFV][Telco] Resigning TelcoWG core team

Hello TelcoWG,

Due to personal reasons I have to resign my TelcoWG core team membership.
I will remove myself from the core reviewer group.

Thanks for all the support!

Regards
Marc




Re: [openstack-dev] [keystone] Implementation and policy of keystone

2015-11-04 Thread Toshiya Shiga
Hi, Adam,

I see.
I will check the policy that was decided at the summit.

Thanks and Regards,

---
Toshiya Shiga

> -Original Message-
> From: Adam Young [mailto:ayo...@redhat.com]
> Sent: Tuesday, November 03, 2015 1:06 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [keystone] Implementation and policy of
> keystone
> 
> On 11/01/2015 08:19 AM, Toshiya Shiga wrote:
> > Hi.
> >
> > We want to implement a way for project users to manage the users
> > of their own project.
> > We are considering some proposals for that.
> >
> > Which policy does the Keystone project use: one using "domain" (e.g.
> > reseller) or one not using "domain"?
> > We want a proposal that matches the project's policy.
> 
> Big topic on the Keystone side of this past summit.
> 
> The short answer:
> Don't use domain-scoped tokens.  Use project-scoped tokens.  We are
> finishing up the "domain is a project" work and also adding a way to set
> the "admin" project for a deployment.
> 
> 
> 
> >
> >
> > Thanks and Regards,
> >
> > ---
> >
> > Toshiya Shiga
> >
> >
> 
> 



Re: [openstack-dev] [kolla] Mesos orchestration as discussed at mid cycle (action required from core reviewers)

2015-11-04 Thread Paul Bourke
+1 - the below sums up my impressions from the summit, which I was good
with. Looking forward to seeing what Angus and the guys come up with.


-Paul

On 02/11/15 17:02, Steven Dake (stdake) wrote:

Hey folks,

We had an informal vote at the mid cycle from the core reviewers, and it
was a majority vote, so we went ahead and started the process of the
introduction of mesos orchestration into Kolla.

For background for our few core reviewers that couldn’t make it and the
broader community, Angus Salkeld has committed himself and 3 other
Mirantis engineers full time to investigate if Mesos could be used as an
orchestration engine in place of Ansible.  We are NOT dropping our
Ansible implementation in the short or long term.  Kolla will continue
to lead with Ansible.  At some point in Mitaka or the N cycle we may
move the ansible bits to a repository called “kolla-ansible” and the
kolla repository would end up containing the containers only.

The general consensus was that if folks wanted to add additional
orchestration systems for Kolla, they were free to do so, provided they did the
development and committed to maintaining one core reviewer team with broad
expertise in how these various systems work.

Angus has agreed to the following

 1. A new team called “kolla-mesos-core” with 2 members.  One of the
members is Angus Salkeld, the other is selected by Angus Salkeld
since this is a cookie cutter empty repository.  This is typical of
how new projects would operate, but we don’t want a code dump and
instead want an integrated core team.  To prevent a situation which
the current Ansible expertise shy away from the Mesos
implementation, the core reviewer team has committed to reviewing
the mesos code to get a feel for it.
 2. Over the next 6-8 weeks these two folks will strive to join the
Kolla core team by typical means 1) irc participation 2) code
generation 3) effective and quality reviews 4) mailing list
participation
 3. Angus will create a technical specification which will we will
roll-call voted and only accepted once a majority of core review
team is satisfied with the solution.
 4. The kolla-mesos deliverable will be under Kolla governance and be
managed by the Kolla core reviewer team after the kolla-mesos-core
team is deprecated.
 5. If the experiment fails, kolla-mesos will be placed in the attic.
  There is no specific window for the experiments, it is really up
to Angus to decide if the technique is viable down the road.
 6. For the purpose of voting, the kolla-mesos-core team won’t be
permitted to vote (on things like this or other roll-call votes in
the community) until they are “promoted” to the kolla-core reviewer
team.


The core reviewer team has agreed to the following

 1. Review patches in kolla-mesos repository
 2. Actively learn how the mesos orchestration system works in the
context of Kolla
 3. Actively support Angus’s effort in the existing Kolla code base as
long as it is not harmful to the Kolla code base

We all believe this will lead to a better outcome than Mirantis
developing some code on their own and later dumping it into the Kolla
governance or operating as a fork.

I’d like to give the core reviewers another chance to vote since the
voting was semi-rushed.

I am +1 given the above constraints.  I think this will help Kolla grow
and potentially provide a better (or arguably different) orchestration
system, and is worth the investigation.  At no time will we put the
existing Kolla Ansible + Docker goodness into harm's way, so I see no
harm in an independent repository, especially if the core reviewer team
strives to work as one team (rather than two independent teams with the
same code base).

Abstaining is the same as voting –1, so please vote one way or the
other, with a couple of lines about your thoughts on the idea.

Note: of the core reviewers, we had 7 +1 votes (and we have a 9-person
core reviewer team, so there is already a majority, but I'd
like to give everyone an opportunity to weigh in).

Regards
-steve





Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-04 Thread John Garbutt
In terms of adding this into master, we can go for a spec-less
blueprint in Nova.

Reach out to me on IRC if I can help you through the process.

Thanks,
johnthetubaguy

PS
We are working on making this easier in the future, by using OS VIF Lib.

On 4 November 2015 at 08:56, Michał Dubiel  wrote:
> Ok, I see. Thanks for all the answers.
>
> Regards,
> Michal
>
> On 3 November 2015 at 22:50, Matt Riedemann 
> wrote:
>>
>>
>>
>> On 11/3/2015 11:57 AM, Michał Dubiel wrote:
>>>
>>> Hi all,
>>>
>>> We have a simple patch allowing to use OpenContrail's vrouter with
>>> vhostuser vif types (currently only OVS has support for that). We would
>>> like to contribute it.
>>>
>>> However, We would like this change to land in the next maintenance
>>> release of Kilo. Is it possible? What should be the process for this?
>>> Should we prepare a blueprint and review request for the 'master' branch
>>> first? It is small self contained change so I believe it does not need a
>>> nova-spec.
>>>
>>> Regards,
>>> Michal
>>>
>>>
>>>
>>>
>>
>> The short answer is 'no' to backporting features to stable branches.
>>
>> As the other reply said, feature changes are targeted to master.
>>
>> The full stable branch policy is here:
>>
>> https://wiki.openstack.org/wiki/StableBranch
>>
>> --
>>
>> Thanks,
>>
>> Matt Riedemann
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Default PostgreSQL server encoding is 'ascii'

2015-11-04 Thread Artem Roma
Hi, folks!

Recently I've been working on this bug [1] and have found that the default
encoding of the database server used by Fuel infrastructure components
(Nailgun, OSTF, etc.) is ascii. At least this is true for environments set up
via the VirtualBox scripts. This situation may cause (and, per the bug,
already does cause) hard-to-diagnose problems when dealing with non-ASCII
string data supplied by users, such as names for nodes, clusters, etc.
Nailgun encodes such data in UTF-8 before sending it to the database, so
misinterpretation by the latter while saving it is a sure thing.

I wonder whether we have this situation on all Fuel environments or only on
those set up by the VirtualBox scripts, because to me it looks like a pretty
serious flaw in our infrastructure. It would be great to have comments from
people more competent in the areas related to the matter.

[1] https://bugs.launchpad.net/fuel/+bug/1472275

-- 
Regards!)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Infra] "gate-neutron-lbaasv1-dsvm-api" blocking patch merge

2015-11-04 Thread Mohan Kumar
Hi,

Jenkins is blocking a patch merge due to "gate-neutron-lbaasv1-dsvm-api"
failures which are unrelated to the patch-set changes.

Patch: https://review.openstack.org/#/c/237896/

Please help to resolve this issue.

Regards,
Mohankumar.N
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Sean Dague
Thanks Dims,

+2

On 11/03/2015 07:45 AM, Davanum Srinivas wrote:
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims
> 
> 
> On Mon, Nov 2, 2015 at 11:05 PM, Joshua Harlow  wrote:
>> Thanks robert,
>>
>> I've started to tweak https://review.openstack.org/#/c/209661/ with regard
>> to the outcome of that (at least to cover the basics)... Should be finished
>> up soon (I hope).
>>
>>
>> Robert Collins wrote:
>>>
>>> Hi, at the summit we had a big session on distributed lock managers
>>> (DLMs).
>>>
>>> I'd just like to highlight the conclusions we came to in the session (
>>>  https://etherpad.openstack.org/p/mitaka-cross-project-dlm
>>>  )
>>>
>>> Firstly OpenStack projects that want to use a DLM can make it a hard
>>> dependency. Previously we've had an unwritten policy that DLMs should
>>> be optional, which has led to us writing poor DLM-like things backed
>>> by databases :(. So this is a huge and important step forward in our
>>> architecture.
>>>
>>> As in our existing pattern of usage for database and message-queues,
>>> we'll use an oslo abstraction layer: tooz. This doesn't preclude a
>>> different answer in special cases - but they should be considered
>>> special and exceptional, not the general case.
>>>
>>> Based on the project requirements surfaced in the discussion, it seems
>>> likely that all of consul, etcd and zookeeper will be able to have
>>> suitable production ready drivers written for tooz. Specifically no
>>> project required a fair locking implementation in the DLM.
>>>
>>> After our experience with oslo.messaging however, we wanted to avoid
>>> the situation of having unmaintained drivers and no signalling to
>>> users about them.
>>>
>>> So, we resolved to adopt roughly the oslo.messaging requirements for
>>> drivers, with a couple of tweaks...
>>>
>>> Production drivers in-tree will need:
>>>   - two nominated developers responsible for it
>>>   - gating functional tests that use dsvm
>>> Test drivers in-tree will need:
>>>   - clear identification that the driver is a test driver - in the
>>> module name at minimum
>>>
>>> All hail our new abstraction overlords.
>>>
>>> -Rob
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Sean Dague
On 11/03/2015 05:45 AM, Julien Danjou wrote:
> On Tue, Nov 03 2015, Jay Pipes wrote:
> 
>> My suggestion was to add a new POST /servers/search URI resource that can 
>> take
>> a request body containing large numbers of filter arguments, encoded in a 
>> JSON
>> object.
>>
>> API working group, what thoughts do you have about this? Please add your
>> comments to the Gerrit spec patch if you have time.
> 
> FWIW, we already have an extensive support for that in both Ceilometer
> and Gnocchi. It looks like a small JSON query DSL that we're able to
> "compile" down to SQL Alchemy filters.
> 
> A few examples are:
>   
> http://docs.openstack.org/developer/gnocchi/rest.html#searching-for-resources
> 
> I've planned for a long time to move this code to a library, so if Nova's
> interested, I can try to move that forward eagerly.

I guess I wonder what the expected interaction with things like
Searchlight is? Searchlight was largely created for providing this kind
of fast access to subsets of resources based on arbitrary attribute search.

-Sean

-- 
Sean Dague
http://dague.net
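For illustration only, the kind of small JSON query DSL Julien mentions could
be handled roughly like this. This is a hypothetical sketch, not Gnocchi's
actual code, and it compiles to in-memory predicates rather than to the SQL
Alchemy filters Gnocchi produces:

```python
import operator

# Comparison operators the tiny DSL understands.
OPS = {"=": operator.eq, "!=": operator.ne,
       "<": operator.lt, "<=": operator.le,
       ">": operator.gt, ">=": operator.ge}

def compile_query(query):
    """Turn {"and": [...]}, {"or": [...]}, {"not": {...}} or a leaf like
    {"=": {"field": value}} into a predicate over a resource dict."""
    (op, arg), = query.items()
    if op in ("and", "or"):
        preds = [compile_query(q) for q in arg]
        combine = all if op == "and" else any
        return lambda r: combine(p(r) for p in preds)
    if op == "not":
        pred = compile_query(arg)
        return lambda r: not pred(r)
    (field, value), = arg.items()
    return lambda r: OPS[op](r.get(field), value)

servers = [{"status": "ACTIVE", "vcpus": 4}, {"status": "ERROR", "vcpus": 1}]
q = {"and": [{"=": {"status": "ACTIVE"}}, {">=": {"vcpus": 2}}]}
matches = [s for s in servers if compile_query(q)(s)]
print(matches)  # [{'status': 'ACTIVE', 'vcpus': 4}]
```

Sent as the body of a POST, such a structure sidesteps the URL-length limits
that motivated the /servers/search discussion in the first place.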

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-04 Thread Gary Kotton


From: "mest...@mestery.com" 
mailto:mest...@mestery.com>>
Reply-To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, November 3, 2015 at 7:09 PM
To: OpenStack List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [neutron][stable] should we open gate for per 
sub-project stable-maint teams?

On Tue, Nov 3, 2015 at 10:49 AM, Ihar Hrachyshka 
mailto:ihrac...@redhat.com>> wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi all,

currently we have a single neutron-wide stable-maint gerrit group that
maintains all stable branches for all stadium subprojects. I believe
that in lots of cases it would be better to have subproject members to
run their own stable maintenance programs, leaving
neutron-stable-maint folks to help them in non-obvious cases, and to
periodically validate that project wide stable policies are still honored.

I suggest we open gate to creating subproject stable-maint teams where
current neutron-stable-maint members feel those subprojects are ready
for that and can be trusted to apply stable branch policies in
consistent way.

Note that I don't suggest we grant those new permissions completely
automatically. If neutron-stable-maint team does not feel safe to give
out those permissions to some stable branches, their feeling should be
respected.

I believe it will be beneficial both for subprojects that would be
able to iterate on backports in more efficient way; as well as for
neutron-stable-maint members who are often busy with other stuff, and
often times are not the best candidates to validate technical validity
of backports in random stadium projects anyway. It would also be in
line with general 'open by default' attitude we seem to embrace in
Neutron.

If we decide it's the way to go, there are alternatives on how we
implement it. For example, we can grant those subproject teams all
permissions to merge patches; or we can leave +W votes to
neutron-stable-maint group.

I vote for opening the gates, *and* for granting +W votes where
projects showed reasonable quality of proposed backports before; and
leaving +W to neutron-stable-maint in those rare cases where history
showed backports could get more attention and safety considerations
[with expectation that those subprojects will eventually own +W votes
as well, once quality concerns are cleared].

If we indeed decide to bootstrap subproject stable-maint teams, I
volunteer to reach the candidate teams for them to decide on initial
lists of stable-maint members, and walk them thru stable policies.

Comments?


As someone who spends a considerable amount of time reviewing stable backports 
on a regular basis across all the sub-projects, I'm in favor of this approach. 
I'd like to be included when selecting teams which are appropriate to have 
their own stable teams as well. Please include me when doing that.

+1


Thanks,
Kyle

Ihar
-BEGIN PGP SIGNATURE-

iQEcBAEBAgAGBQJWOOWkAAoJEC5aWaUY1u57sVIIALrnqvuj3t7c25DBHvywxBZV
tCMlRY4cRCmFuVy0VXokM5DxGQ3VRwbJ4uWzuXbeaJxuVWYT2Kn8JJ+yRjdg7Kc4
5KXy3Xv0MdJnQgMMMgyjJxlTK4MgBKEsCzIRX/HLButxcXh3tqWAh0oc8WW3FKtm
wWFZ/2Gmf4K9OjuGc5F3dvbhVeT23IvN+3VkobEpWxNUHHoALy31kz7ro2WMiGs7
GHzatA2INWVbKfYo2QBnszGTp4XXaS5KFAO8+4H+HvPLxOODclevfKchOIe6jthH
F1z4JcJNMmQrQDg1WSqAjspAlne1sqdVLX0efbvagJXb3Ju63eSLrvUjyCsZG4Q=
=HE+y
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] Visibility consistency for packages and images

2015-11-04 Thread Olivier Lemasle
Hi all,

Ekaterina Chernova suggested last week to discuss the matter of
visibility consistency for murano packages and glance images,
following my bug report on that subject [1].

The general idea is to make sure that if a murano package is public,
it should be really available for all projects, which mean that:
- if it depends on other murano packages, these packages must be public,
- if it depends on glance images, these images must be public.
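A sketch of that consistency rule (using made-up dict shapes purely for
illustration, not actual Murano or Glance data structures): a public package
must have no private dependencies, whether packages or images.

```python
def visibility_violations(package, packages, images):
    """Return the private dependencies of a public package.

    `package`, `packages` and `images` use hypothetical dict shapes:
    each entry carries a "public" flag, and a package lists its
    dependencies under "package_deps" and "image_deps".
    """
    if not package["public"]:
        return []  # a private package may depend on anything
    violations = []
    for dep in package.get("package_deps", []):
        if not packages[dep]["public"]:
            violations.append("package:" + dep)
    for img in package.get("image_deps", []):
        if not images[img]["public"]:
            violations.append("image:" + img)
    return violations

packages = {"lib": {"public": False}}
images = {"ubuntu-app": {"public": False}}
app = {"public": True, "package_deps": ["lib"], "image_deps": ["ubuntu-app"]}
print(visibility_violations(app, packages, images))
# ['package:lib', 'image:ubuntu-app']
```

A check like this could run either at import time or when a package's
visibility is changed to public.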

In fact, I created this bug report after Alexander Tivelkov's
suggestion on a review request [2] I did to fix a related bug [3]. In
this other bug report, I focused on image visibility during the
initial import of a package, because dependent murano packages are
already imported with the same visibility. It seemed to me very
confusing that packages are made public if the images are private. So
I did a fix in murano-dashboard, which is already merged [4], and
another one for python-muranoclient, still in review ([2]).

What are your thoughts on this subject? Do we need to address first
the general dependency issue? Is this a murano, glance or glare
subject?

Do we still need to do something specific for the initial import
(currently, dependency resolution for packages and images is done both
in murano-dashboard and in python-muranoclient)?

Thank you for your inputs,

[1] https://bugs.launchpad.net/murano/+bug/1509208
[2] https://review.openstack.org/#/c/236834/
[3] https://bugs.launchpad.net/murano/+bug/1507139
[4] https://review.openstack.org/#/c/236830/

-- 
Olivier Lemasle
Software Engineer
Apalia™
Mobile: +33-611-69-12-11
http://www.apalia.net
olivier.lema...@apalia.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Logging - filling up my tiny SSDs

2015-11-04 Thread Sean Dague
On 11/02/2015 10:36 AM, Sean M. Collins wrote:
> On Sun, Nov 01, 2015 at 10:12:10PM EST, Davanum Srinivas wrote:
>> Sean,
>>
>> I typically switch off screen and am able to redirect logs to a specified
>> directory. Does this help?
>>
>> USE_SCREEN=False
>> LOGDIR=/opt/stack/logs/
> 
> It's not that I want to disable screen. I want screen to run, and not
> log the output to files, since I have a tiny 16GB ssd card on these NUCs
> and it fills it up if I leave it running for a week or so. 

If you write a patch, I think it's fine to include, however it's a
pretty edge case. Super small disks (I didn't even realize they made SSDs
that small, I thought 120 GB was about the floor), and running devstack for
long times without rebuild.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Sean Dague
On 10/28/2015 07:15 PM, Anne Gentle wrote:
> Hi all, 
> 
> I wanted to write up some of the discussion points from a cross-project
> session here at the Tokyo Summit about cross-project communications. The
> etherpad is here: 
> https://etherpad.openstack.org/p/mitaka-crossproject-comms
> 
> One item that came up that I wanted to ensure we gather feedback on is
> evolving the cross-project meeting to an "as needed" rotation, held at
> any time on Tuesdays or Wednesdays. We can set up meetbot in a new
> meeting room, #cross-project-meeting, and then bring in the necessary
> participants while also inviting everyone to attend. 
> 
> I sense this helps with the timezone issues we all face, as well as
> brings together the relevant projects in a moment, while allowing other
> projects to filter out unnecessary discussions to help everyone focus
> further on solving cross-project issues.
> 
> The rest of the action items are in the etherpad, but since I originally
> suggested changing the meeting time, I wanted to circle back on this new
> idea specifically. Let us know your thoughts.

Has anyone considered using #openstack-dev, instead of a new meeting
room? #openstack-dev is mostly a ghost town at this point, and deciding
that instead it would be the dedicated cross project space, including
meetings support, might be interesting.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-powervm] Please create networking-powervm on PyPI

2015-11-04 Thread Andrew Thorstensen
Hi Kyle,

My team owns the networking-powervm project.  When we moved from the 
StackForge to OpenStack namespace we changed the name from neutron-powervm 
to networking-powervm.  There is no reason for the PyPI project to have a 
different name and we were planning to publish an update shortly with the 
networking-powervm name.

We were planning to do this sometime next week.  Do we need it done 
sooner?


Thanks!

Drew Thorstensen
Power Systems / Cloud Software



From:   Kyle Mestery 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   11/03/2015 10:09 PM
Subject: [openstack-dev] [neutron] [networking-powervm] Please create 
networking-powervm on PyPI



I'm reaching out to whoever owns the networking-powervm project [1]. I 
have a review out [2] which updates the PyPI publishing jobs so we can 
push releases for networking-powervm. However, in looking at PyPI, I don't 
see a networking-powervm project, but instead a neutron-powervm project. 
Is there a reason for the PyPI project to have a different name? I believe 
this will not allow us to push releases, as the names of the projects need 
to match. Further, the project creation guide recommends naming them the 
same [4].

Can someone from the PowerVM team look at registering networking-powervm 
on PyPI and correcting this please?

Thanks!
Kyle

[1] https://launchpad.net/neutron-powervm
[2] https://review.openstack.org/#/c/233466/
[3] https://pypi.python.org/pypi/neutron-powervm/0.1.0
[4] http://docs.openstack.org/infra/manual/creators.html#pypi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] "gate-neutron-lbaasv1-dsvm-api" blocking patch merge

2015-11-04 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 04/11/15 12:48, Mohan Kumar wrote:
> Hi,
> 
> Jenkins is blocking a patch merge due to
> "gate-neutron-lbaasv1-dsvm-api" failures which are unrelated to the
> patch-set changes.
> 
> Patch: https://review.openstack.org/#/c/237896/
> 
> Please help to resolve this issue .
> 

It's a known issue: https://launchpad.net/bugs/1512937

There is already a lbaas patch for that:
https://review.openstack.org/#/c/241481/

Ihar
-BEGIN PGP SIGNATURE-

iQEcBAEBAgAGBQJWOfw5AAoJEC5aWaUY1u57+98IAMPqiaGwqUfaPyMWiD7GVNi0
qhCSh/YRR8UlS45Lsf+FMSkcqnYg5pRkjY7FE0wksnExLNFnq+bZT8zcutUHmaIv
x8ONhFd0fD8N6XvdCBidAKOncglCCwi0IVNLH8BlKwsYgW8r/QerP8br1h1Xjp91
ZdvvCccpGna26xTgiErNbjhxALUENv7aBvCh52sq7XWFkdqZUz/ePUmX4W0Jo20k
M821EQTBwlPAPVicLqaSbV/AxW+X7OthNiR2BYlMorrMmycTK9FpclSztjFI9Gta
2GS6XIDLF4owhhPy0SG6zhD7O/GDIsB/VjJQBGQiAtj42OXToYxmIQhfsHD4j5Y=
=JyN7
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Davanum Srinivas
+1000 Sean

On Wed, Nov 4, 2015 at 6:30 AM, Sean Dague  wrote:
> On 10/28/2015 07:15 PM, Anne Gentle wrote:
>> Hi all,
>>
>> I wanted to write up some of the discussion points from a cross-project
>> session here at the Tokyo Summit about cross-project communications. The
>> etherpad is here:
>> https://etherpad.openstack.org/p/mitaka-crossproject-comms
>>
>> One item that came up that I wanted to ensure we gather feedback on is
>> evolving the cross-project meeting to an "as needed" rotation, held at
>> any time on Tuesdays or Wednesdays. We can set up meetbot in a new
>> meeting room, #cross-project-meeting, and then bring in the necessary
>> participants while also inviting everyone to attend.
>>
>> I sense this helps with the timezone issues we all face, as well as
>> brings together the relevant projects in a moment, while allowing other
>> projects to filter out unnecessary discussions to help everyone focus
>> further on solving cross-project issues.
>>
>> The rest of the action items are in the etherpad, but since I originally
>> suggested changing the meeting time, I wanted to circle back on this new
>> idea specifically. Let us know your thoughts.
>
> Has anyone considered using #openstack-dev, instead of a new meeting
> room? #openstack-dev is mostly a ghost town at this point, and deciding
> that instead it would be the dedicated cross project space, including
> meetings support, might be interesting.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
I was spot checking the grenade multinode job to make sure it looks like
it was doing the correct thing. In doing so I found that ~15minutes of
it's hour long build time is compiling lxml and numpy 3 times each.

Due to our exact calculations by upper-constraints.txt we ensure exactly
the right version of each of those in old & new & subnode (old).

Is there a nodepool cache strategy where we could pre build these? A 25%
performance win comes out the other side if there is a strategy here.

-Sean

-- 
Sean Dague
http://dague.net
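In outline, the pre-build step asked about above could pick the pinned
versions of known-slow packages straight out of upper-constraints.txt and
build wheels for just those once per image. A rough sketch of the selection
logic only, not an actual nodepool change; the package list and versions are
illustrative:

```python
HEAVY = {"lxml", "numpy"}  # packages whose C-extension builds dominate

def wheels_to_prebuild(constraints_text, heavy=HEAVY):
    """Return the pinned requirement lines (e.g. 'numpy===1.9.3') from an
    upper-constraints.txt style file for packages worth pre-building."""
    picks = []
    for line in constraints_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # upper-constraints pins use '==='; environment markers follow ';'.
        name = line.split("===")[0].split(";")[0].strip().lower()
        if name in heavy:
            picks.append(line)
    return picks

sample = "lxml===3.4.4\nnumpy===1.9.3\nrequests===2.8.1\n"
print(wheels_to_prebuild(sample))  # ['lxml===3.4.4', 'numpy===1.9.3']
```

The resulting list could feed `pip wheel -w <cachedir>` at image-build time,
so the three devstack runs install from cached wheels instead of compiling.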

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Thierry Carrez
Davanum Srinivas wrote:
>> Has anyone considered using #openstack-dev, instead of a new meeting
>> room? #openstack-dev is mostly a ghost town at this point, and deciding
>> that instead it would be the dedicated cross project space, including
>> meetings support, might be interesting.

+1

Originally #openstack-dev was the only dev channel, the one we ask every
dev to join by default. Then it was the channel that teams would use if
they didn't have their own. Now that most/all teams have their own,
nobody discusses in it anymore, but it still is our most crowded channel
(by virtue of being the old default dev channel).

So officially repurposing it for cross-project discussions /
announcements sounds like a good idea. Think of it as a permanent
cross-project meeting / announcement space.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Salvatore Orlando
Regarding Jay's proposal, this would be tantamount to defining an API
action for retrieving instances, something currently being discussed here
[1].
The only comment I have is that I am not entirely sure whether using the
POST verb for operations which do not alter the server-side representation
of any object is in accordance with RFC 7231.
A search API like the one pointed out by Julien is interesting; at first
glance I'm not able to comment on its RESTfulness. It definitely has
plenty of use cases and enables users to run complex queries; one possible
downside is that it increases the complexity of simple queries.

For the purpose of the Nova spec I think it might be OK to limit the
functionality to a "small number of instance ids" as expressed in the spec.
On the other hand, how crazy would it be to limit the number of bytes in
the URL by allowing clients to specify a contracted form of instance UUIDs,
in a way similar to git commit hashes?

[1] https://review.openstack.org/#/c/234994/
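The git-style abbreviation idea would work like git's short commit hashes:
the server accepts any unambiguous UUID prefix and rejects ambiguous ones.
A hypothetical sketch for illustration, not an existing Nova API; the sample
UUIDs are made up:

```python
def resolve_uuid_prefix(prefix, known_uuids):
    """Expand an abbreviated instance UUID, git-style: exactly one match
    succeeds; zero or multiple matches raise LookupError."""
    wanted = prefix.replace("-", "").lower()
    matches = [u for u in known_uuids
               if u.replace("-", "").lower().startswith(wanted)]
    if not matches:
        raise LookupError("no instance matches %r" % prefix)
    if len(matches) > 1:
        raise LookupError("ambiguous prefix %r: %d matches"
                          % (prefix, len(matches)))
    return matches[0]

uuids = ["49dc6c5f-2d4b-4c3a-9a7e-0f1d2e3a4b5c",
         "49e17a30-8b21-4f66-b5a1-6c7d8e9f0a1b"]
print(resolve_uuid_prefix("49dc", uuids))
# 49dc6c5f-2d4b-4c3a-9a7e-0f1d2e3a4b5c
```

As with git, the minimum unambiguous prefix length depends on how many
instances exist, so clients would need to handle the ambiguity error.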

On 4 November 2015 at 13:17, Sean Dague  wrote:

> On 11/03/2015 05:45 AM, Julien Danjou wrote:
> > On Tue, Nov 03 2015, Jay Pipes wrote:
> >
> >> My suggestion was to add a new POST /servers/search URI resource that
> can take
> >> a request body containing large numbers of filter arguments, encoded in
> a JSON
> >> object.
> >>
> >> API working group, what thoughts do you have about this? Please add your
> >> comments to the Gerrit spec patch if you have time.
> >
> > FWIW, we already have an extensive support for that in both Ceilometer
> > and Gnocchi. It looks like a small JSON query DSL that we're able to
> > "compile" down to SQL Alchemy filters.
> >
> > A few examples are:
> >
> http://docs.openstack.org/developer/gnocchi/rest.html#searching-for-resources
> >
> > I've planed for a long time to move this code to a library, so if Nova's
> > interested, I can try to move that forward eagerly.
>
> I guess I wonder what the expected interaction with things like
> Searchlight is? Searchlight was largely created for providing this kind
> of fast access to subsets of resources based on arbitrary attribute search.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Anne Gentle
On Wed, Nov 4, 2015 at 7:00 AM, Thierry Carrez 
wrote:

> Davanum Srinivas wrote:
> >> Has anyone considered using #openstack-dev, instead of a new meeting
> >> room? #openstack-dev is mostly a ghost town at this point, and deciding
> >> that instead it would be the dedicated cross project space, including
> >> meetings support, might be interesting.
>
> +1
>
> Originally #openstack-dev was the only dev channel, the one we ask every
> dev to join by default. Then it was the channel that teams would use if
> they didn't have their own. Now that most/all teams have their own,
> nobody discusses in it anymore, but it still is our most crowded channel
> (by virtue of being the old default dev channel).
>
> So officially repurposing it for cross-project discussions /
> announcements sounds like a good idea. Think of it as a permanent
> cross-project meeting / announcement space.
>
>
Sure, though I have to point out cross-project means more than devs... but
I think it's a good idea to re-use a channel lots of OpenStack contributors
are in already.

Anne


> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Antoni Segura Puimedon
Hi Kuryrs,

Last Friday, as part of the contributors meetup, we also discussed code
contribution etiquette. Like other OpenStack projects (Magnum comes to
mind), the etiquette for what to do when there is disagreement on the way
to code a blueprint or fix a bug is as follows:

1.- Try to reach out so that the original implementation gets closer to a
compromise by having the discussion in gerrit (and Mailing list if it
requires a wider range of arguments).
2.- If a compromise can't be reached, feel free to make a separate
implementation, arguing its differences, virtues and comparative
disadvantages well. We trust the whole community of reviewers to be able to
judge which is the best implementation, and I expect that often the
reviewers will steer both submissions closer than they originally were.
3.- If both competing implementations get the necessary support, the core
reviewers will take a specific decision on which to take, based on technical
merit. Important factors are:
* conciseness,
* simplicity,
* loose coupling,
* logging and error reporting,
* test coverage,
* extensibility (when an immediate pending and blueprinted feature can
better be built on top of it).
* documentation,
* performance.

It is important to remember that technical disagreement is a healthy thing
and should be tackled with civility. If we follow the rules above, it will
lead to a healthier project and a more friendly community in which
everybody can propose their vision with equal standing. Of course,
sometimes there may be a feeling of duplicated effort, but even in the case
where one's solution is not selected (and I can assure you I've been there
and know how it can feel awkward) it usually still enriches the discussion
and constitutes a contribution that improves the project.

Regards,

Toni
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Baohua Yang
+1, Antoni!
btw, is our weekly meeting still on the meeting-4 channel?
I didn't find it there yesterday.

On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

> Hi Kuryrs,
>
> Last Friday, as part of the contributors meetup, we discussed also code
> contribution etiquette. Like other OpenStack project (Magnum comes to
> mind), the etiquette for what to do when there is disagreement in the way
> to code a blueprint of fix a bug is as follows:
>
> 1.- Try to reach out so that the original implementation gets closer to a
> compromise by having the discussion in gerrit (and Mailing list if it
> requires a wider range of arguments).
> 2.- If a compromise can't be reached, feel free to make a separate
> implementation arguing well its difference, virtues and comparative
> disadvantages. We trust the whole community of reviewers to be able to
> judge which is the best implementation and I expect that often the
> reviewers will steer both submissions closer than they originally were.
> 3.- If both competing implementations get the necessary support, the core
> reviewers will take a specific decision on which to take based on technical
> merit. Important factor are:
> * conciseness,
> * simplicity,
> * loose coupling,
> * logging and error reporting,
> * test coverage,
> * extensibility (when an immediate pending and blueprinted feature can
> better be built on top of it).
> * documentation,
> * performance.
>
> It is important to remember that technical disagreement is a healthy thing
> and should be tackled with civility. If we follow the rules above, it will
> lead to a healthier project and a more friendly community in which
> everybody can propose their vision with equal standing. Of course,
> sometimes there may be a feeling of duplicity, but even in the case where
> one's solution it is not selected (and I can assure you I've been there and
> know how it can feel awkward) it usually still enriches the discussion and
> constitutes a contribution that improves the project.
>
> Regards,
>
> Toni
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jay Pipes

On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:

Hi,

The change in https://review.openstack.org/237122 touches a feature from
ironic that has not been released in any tag yet.

At first, we on the team who wrote the patch thought that, as it has not
been part of any release, we could make backwards-incompatible changes to
that part of the code. As it turned out from discussion with the community,
ironic commits to keeping the master branch backwards compatible, and a
deprecation process is needed in that case.

That stated, the question at hand is: How long should this deprecation
process last?

This spec specifies the deprecation policy we should follow:
https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst


As from its excerpt below, the minimum obsolescence period must be
max(next_release, 3 months).

"""
Based on that data, an obsolescence date will be set. At the very
minimum the feature (or API, or configuration option) should be marked
deprecated (and still be supported) in the next stable release branch,
and for at least three months linear time. For example, a feature
deprecated in November 2015 should still appear in the Mitaka release
and stable/mitaka stable branch and cannot be removed before the
beginning of the N development cycle in April 2016. A feature deprecated
in March 2016 should still appear in the Mitaka release and
stable/mitaka stable branch, and cannot be removed before June 2016.
"""

This spec, however, only covers released and/or tagged code.

tl;dr:

How should we proceed regarding code/features/configs/APIs that have not
even been tagged yet?

Isn't waiting for the next OpenStack release in this case too long?
Otherwise, we are going to have features/configs/APIs/etc. that are
deprecated from their very first tag/release.

How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
months? max(next_tag, 3 months)?


-1

The reason the wording is that way is because lots of people deploy 
OpenStack services in a continuous deployment model, from the master 
source branches (sometimes minus X number of commits as these deployers 
run the code through their test platforms).


Not everyone uses tagged releases, and OpenStack as a community has 
committed (pun intended) to serving these continuous deployment scenarios.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Antoni Segura Puimedon
On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang  wrote:

> +1, Antoni!
> btw, is our weekly meeting still on meeting-4 channel?
> Not found it there yesterday.
>

Yes, it is still on openstack-meeting-4, but this week we skipped it, since
some of us were
traveling and we already held the meeting on Friday. Next Monday it will be
held as usual
and the following week we start alternating (we have yet to get a room for
that one).

>
> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
> toni+openstac...@midokura.com> wrote:
>
>> Hi Kuryrs,
>>
>> Last Friday, as part of the contributors meetup, we discussed also code
>> contribution etiquette. Like other OpenStack projects (Magnum comes to
>> mind), the etiquette for what to do when there is disagreement in the way
>> to code a blueprint or fix a bug is as follows:
>>
>> 1.- Try to reach out so that the original implementation gets closer to a
>> compromise by having the discussion in gerrit (and Mailing list if it
>> requires a wider range of arguments).
>> 2.- If a compromise can't be reached, feel free to make a separate
>> implementation arguing well its difference, virtues and comparative
>> disadvantages. We trust the whole community of reviewers to be able to
>> judge which is the best implementation and I expect that often the
>> reviewers will steer both submissions closer than they originally were.
>> 3.- If both competing implementations get the necessary support, the core
>> reviewers will take a specific decision on which to take based on technical
>> merit. Important factors are:
>> * conciseness,
>> * simplicity,
>> * loose coupling,
>> * logging and error reporting,
>> * test coverage,
>> * extensibility (when an immediate pending and blueprinted feature
>> can better be built on top of it).
>> * documentation,
>> * performance.
>>
>> It is important to remember that technical disagreement is a healthy
>> thing and should be tackled with civility. If we follow the rules above, it
>> will lead to a healthier project and a more friendly community in which
>> everybody can propose their vision with equal standing. Of course,
>> sometimes there may be a feeling of duplicity, but even in the case where
>> one's solution is not selected (and I can assure you I've been there and
>> know how it can feel awkward) it usually still enriches the discussion and
>> constitutes a contribution that improves the project.
>>
>> Regards,
>>
>> Toni
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best wishes!
> Baohua
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Jay Pipes

On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

Hi stackers,

Projects like Heat, Tempest, Rally, Scalar, and other tools that work
with OpenStack usually handle resources (e.g. VMs, volumes,
images, ...) as follows:

 >>> resource = api.resource_do_some_stuff()
 >>> while api.resource_get(resource["uuid"]) != expected_status
 >>>     sleep(a_bit)

For each async operation they poll, calling resource_get() many
times, which creates significant load on the API and DB layers due to
the nature of this request. (Usually getting full information about
resources produces SQL requests that contain multiple JOINs; e.g. for
a Nova VM it's 6 joins.)

What if we add a new API method that just returns the resource status by
UUID? Or even just extend the get request with a new argument that returns
only the status?


+1

All APIs should have an HTTP HEAD call on important resources for 
retrieving quick status information for the resource.


In fact, I proposed exactly this in my Compute "vNext" API proposal:

http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

Swift's API supports HEAD for accounts:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta

containers:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta

and objects:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta

So, yeah, I agree.
-jay
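The polling pattern quoted above can be made concrete with a stub client. The status-only call is hypothetical (it is what Boris proposes, not an existing API); the stub only illustrates the difference in payload between the two calls:

```python
import time

class FakeComputeClient:
    """Stand-in for a real OpenStack client; method names are illustrative."""
    def __init__(self, transitions):
        self._transitions = list(transitions)
        self.full_gets = 0
        self.status_gets = 0

    def resource_get(self, uuid):
        # Today's pattern: the server runs multi-JOIN queries and
        # serializes the full representation on every poll.
        self.full_gets += 1
        return {"uuid": uuid, "status": self._transitions.pop(0),
                "name": "vm-1", "addresses": {}, "flavor": {"id": "1"}}

    def resource_get_status(self, uuid):
        # The proposed lightweight call: just the status string.
        self.status_gets += 1
        return self._transitions.pop(0)

def wait_for_status(poll, uuid, expected, interval=0.0):
    while poll(uuid) != expected:
        time.sleep(interval)

client = FakeComputeClient(["BUILD", "BUILD", "ACTIVE"])
wait_for_status(client.resource_get_status, "abc-123", "ACTIVE")
print(client.status_gets)  # 3 polls, each returning only a tiny payload
```

The number of round trips is the same either way; what shrinks is the work per request on the DB and serialization layers.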

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-04 Thread Baohua Yang
Sure, thanks!
And I suggest adding the time and channel information to the kuryr wiki page.


On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon <
toni+openstac...@midokura.com> wrote:

>
>
> On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang  wrote:
>
>> +1, Antoni!
>> btw, is our weekly meeting still on meeting-4 channel?
>> Not found it there yesterday.
>>
>
> Yes, it is still on openstack-meeting-4, but this week we skipped it,
> since some of us were
> traveling and we already held the meeting on Friday. Next Monday it will
> be held as usual
> and the following week we start alternating (we have yet to get a room for
> that one).
>
>>
>> On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon <
>> toni+openstac...@midokura.com> wrote:
>>
>>> Hi Kuryrs,
>>>
>>> Last Friday, as part of the contributors meetup, we discussed also code
>>> contribution etiquette. Like other OpenStack projects (Magnum comes to
>>> mind), the etiquette for what to do when there is disagreement in the way
>>> to code a blueprint or fix a bug is as follows:
>>>
>>> 1.- Try to reach out so that the original implementation gets closer to
>>> a compromise by having the discussion in gerrit (and Mailing list if it
>>> requires a wider range of arguments).
>>> 2.- If a compromise can't be reached, feel free to make a separate
>>> implementation arguing well its difference, virtues and comparative
>>> disadvantages. We trust the whole community of reviewers to be able to
>>> judge which is the best implementation and I expect that often the
>>> reviewers will steer both submissions closer than they originally were.
>>> 3.- If both competing implementations get the necessary support, the
>>> core reviewers will take a specific decision on which to take based on
>>> technical merit. Important factors are:
>>> * conciseness,
>>> * simplicity,
>>> * loose coupling,
>>> * logging and error reporting,
>>> * test coverage,
>>> * extensibility (when an immediate pending and blueprinted feature
>>> can better be built on top of it).
>>> * documentation,
>>> * performance.
>>>
>>> It is important to remember that technical disagreement is a healthy
>>> thing and should be tackled with civility. If we follow the rules above, it
>>> will lead to a healthier project and a more friendly community in which
>>> everybody can propose their vision with equal standing. Of course,
>>> sometimes there may be a feeling of duplicity, but even in the case where
>>> one's solution is not selected (and I can assure you I've been there and
>>> know how it can feel awkward) it usually still enriches the discussion and
>>> constitutes a contribution that improves the project.
>>>
>>> Regards,
>>>
>>> Toni
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best wishes!
>> Baohua
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best wishes!
Baohua
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Cory Benfield

> On 4 Nov 2015, at 13:13, Salvatore Orlando  wrote:
> 
> Regarding Jay's proposal, this would be tantamount to defining an API action 
> for retrieving instances, something currently being discussed here [1].
> The only comment I have is that I am not entirely sure whether using the 
> POST verb for operations which do not alter the server representation 
> of any object at all is in accordance with RFC 7231.

It’s totally fine, so long as you define things appropriately. Jay’s suggestion 
does exactly that, and is entirely in line with RFC 7231.

The analogy here is to things like complex search forms. Many search engines 
allow you to construct very complex search queries (consider something like 
Amazon or eBay, where you can filter on all kinds of interesting criteria). 
These forms are often submitted to POST endpoints rather than GET.

This is totally fine. In fact, the first example from RFC 7231 Section 4.3.3 
(POST) applies here: “POST is used for the following functions (among others): 
Providing a block of data […] to a data-handling process”. In this case, the 
data-handling function is the search function on the server.

The *only* downside of Jay’s approach is that the response cannot really be 
cached. It’s not clear to me whether anyone actually deploys a cache in this 
kind of role though, so it may not hurt too much.

Cory
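As an illustration of the search-form analogy, a POST-based query would carry a structured document in the request body. Everything below is invented for the example (the path and field names are not a released OpenStack API):

```python
import json

# Hypothetical search document; field names are illustrative only.
query = {
    "filters": {"status": "ACTIVE", "flavor": "m1.large"},
    "sort": [{"key": "created_at", "dir": "desc"}],
    "limit": 100,
}
body = json.dumps(query)
# A client would then send something like:
#   POST /v2/{tenant}/servers/search
#   Content-Type: application/json
print(json.loads(body)["filters"]["status"])  # ACTIVE
```

Expressing nesting and sorting like this is exactly what gets awkward in GET query strings, which is why complex search forms tend to POST.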




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [networking-powervm] Please create networking-powervm on PyPI

2015-11-04 Thread Kyle Mestery
On Wed, Nov 4, 2015 at 6:29 AM, Andrew Thorstensen 
wrote:

> Hi Kyle,
>
> My team owns the networking-powervm project.  When we moved from the
> StackForge to OpenStack namespace we changed the name from neutron-powervm
> to networking-powervm.  There is no reason for the PyPI project to have a
> different name and we were planning to publish an update shortly with the
> networking-powervm name.
>
> We were planning to do this sometime next week.  Do we need it done sooner?
>
That should be perfect, let me know when it's done so I can remove the WIP
on the patch below. Thanks!


>
> Thanks!
>
> Drew Thorstensen
> Power Systems / Cloud Software
>
>
>
> From: Kyle Mestery 
> To: "OpenStack Development Mailing List (not for usage questions)"
> 
> Date: 11/03/2015 10:09 PM
> Subject: [openstack-dev] [neutron] [networking-powervm] Please
> create networking-powervm on PyPI
> --
>
>
>
> I'm reaching out to whoever owns the networking-powervm project [1]. I
> have a review out [2] which updates the PyPI publishing jobs so we can push
> releases for networking-powervm. However, in looking at PyPI, I don't see a
> networking-powervm project, but instead a neutron-powervm project. Is there
> a reason for the PyPI project to have a different name? I believe this will
> not allow us to push releases, as the name of the projects need to match.
> Further, the project creation guide recommends naming them the same [4].
>
> Can someone from the PowerVM team look at registering networking-powervm
> on PyPI and correcting this please?
>
> Thanks!
> Kyle
>
> [1] https://launchpad.net/neutron-powervm
> [2] https://review.openstack.org/#/c/233466/
> [3] https://pypi.python.org/pypi/neutron-powervm/0.1.0
> [4] http://docs.openstack.org/infra/manual/creators.html#pypi
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Sean Dague
On 11/04/2015 09:00 AM, Jay Pipes wrote:
> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
>> Hi stackers,
>>
>> Projects like Heat, Tempest, Rally, Scalar, and other tools that work
>> with OpenStack usually handle resources (e.g. VMs, volumes,
>> images, ...) as follows:
>>
>>  >>> resource = api.resource_do_some_stuff()
>>  >>> while api.resource_get(resource["uuid"]) != expected_status
>>  >>>     sleep(a_bit)
>>
>> For each async operation they poll, calling resource_get() many
>> times, which creates significant load on the API and DB layers due to
>> the nature of this request. (Usually getting full information about
>> resources produces SQL requests that contain multiple JOINs; e.g. for
>> a Nova VM it's 6 joins.)
>>
>> What if we add a new API method that just returns the resource status by
>> UUID? Or even just extend the get request with a new argument that returns
>> only the status?
> 
> +1
> 
> All APIs should have an HTTP HEAD call on important resources for
> retrieving quick status information for the resource.
> 
> In fact, I proposed exactly this in my Compute "vNext" API proposal:
> 
> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
> 
> Swift's API supports HEAD for accounts:
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
> 
> 
> containers:
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
> 
> 
> and objects:
> 
> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
> 
> So, yeah, I agree.
> -jay

How would you expect this to work on "servers"? HEAD specifically
forbids returning a body, and, unlike swift, we don't return very much
information in our headers.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
> On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
> >Hi,
> >
> >The change in https://review.openstack.org/237122 touches a feature from
> >ironic that has not been released in any tag yet.
> >
> >At first, we from the team who has written the patch thought that, as it
> >has not been part of any release, we could do backwards incompatible
> >changes on that part of the code. As it turned out from discussing with
> >the community, ironic commits to keeping the master branch backwards
> >compatible and a deprecation process is needed in that case.
> >
> >That stated, the question at hand is: How long should this deprecation
> >process last?
> >
> >This spec specifies the deprecation policy we should follow:
> >https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
> >
> >
> >As from its excerpt below, the minimum obsolescence period must be
> >max(next_release, 3 months).
> >
> >"""
> >Based on that data, an obsolescence date will be set. At the very
> >minimum the feature (or API, or configuration option) should be marked
> >deprecated (and still be supported) in the next stable release branch,
> >and for at least three months linear time. For example, a feature
> >deprecated in November 2015 should still appear in the Mitaka release
> >and stable/mitaka stable branch and cannot be removed before the
> >beginning of the N development cycle in April 2016. A feature deprecated
> >in March 2016 should still appear in the Mitaka release and
> >stable/mitaka stable branch, and cannot be removed before June 2016.
> >"""
> >
> >This spec, however, only covers released and/or tagged code.
> >
> >tl;dr:
> >
> >How should we proceed regarding code/features/configs/APIs that have not
> >even been tagged yet?
> >
> >Isn't waiting for the next OpenStack release in this case too long?
> >Otherwise, we are going to have features/configs/APIs/etc. that are
> >deprecated from their very first tag/release.
> >
> >How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
> >months? max(next_tag, 3 months)?
> 
> -1
> 
> The reason the wording is that way is because lots of people deploy
> OpenStack services in a continuous deployment model, from the master source
> branches (sometimes minus X number of commits as these deployers run the
> code through their test platforms).
> 
> Not everyone uses tagged releases, and OpenStack as a community has
> committed (pun intended) to serving these continuous deployment scenarios.

Right, so I asked Gabriel to send this because it's an odd case, and I'd
like to clear up the governance doc on this, since it doesn't seem to
say much about code that was never released.

The rule is a cycle boundary *and* at least 3 months. However, in this
case, the code was never in a release at all, much less a stable
release. So looking at the two types of deployers:

1) CD from trunk: 3 months is fine, we do that, done.

2) Deploying stable releases: if we only wait three months and not a
cycle boundary, they'll never see it. If we do wait for a cycle
boundary, we're pushing deprecated code to them for (seemingly to me) no
benefit.

So, it makes sense to me to not introduce the cycle boundary thing in
this case. But there is value in keeping the rule simple, and if we want
this one to pass a cycle boundary to optimize for that, I'm okay with
that too. :)

(Side note: there's actually a third type of deployer for Ironic; one
that deploys intermediate releases. I think if we give them at least one
release and three months, they're okay, so the general standard
deprecation rule covers them.)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Infra] "gate-neutron-lbaasv1-dsvm-api" blocking patch merge

2015-11-04 Thread Smigiel, Dariusz
> 
> On 04/11/15 12:48, Mohan Kumar wrote:
> > Hi,
> >
> > Jenkins blocking patch merge due to "
> > gate-neutron-lbaasv1-dsvm-api
> >  dsvm-
> api/f631adf/>"
> >
> >
> failures which is unrelated to patch-set changes .
> >
> > Patch: https://review.openstack.org/#/c/237896/
> >
> > Please help to resolve this issue .
> >
> 
> It's a known issue: https://launchpad.net/bugs/1512937
> 
> There is already a lbaas patch for that:
> https://review.openstack.org/#/c/241481/
> 
> Ihar

It's currently merged, so it should be fixed in the next ~30 minutes.

-- 
 Dariusz Smigiel
 Intel Technology Poland


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Mitaka priorities

2015-11-04 Thread Jim Rollenhagen
Hi folks,

I posted a review to add our Mitaka priorities to our specs repo:
https://review.openstack.org/#/c/241223/

Ruby made a good point that not everyone was at the summit and she'd
like buyoff on the patch from all cores before we land it. I tend to
agree, so I ask that cores that were not in the planning session please
review this ASAP.

There are some cores that were there and still in Tokyo, so in the
interest of landing this quickly, I'm okay with moving forward without
them.

Thanks!

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Matthew Thode
On 11/04/2015 06:47 AM, Sean Dague wrote:
> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15 minutes of
> its hour-long build time is spent compiling lxml and numpy 3 times each.
> 
> Due to our exact calculations by upper-constraints.txt we ensure exactly
> the right version of each of those in old & new & subnode (old).
> 
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.
> 
>   -Sean
> 
python wheel repo could help maybe?

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][ceilometer][aodh][gnocchi] no (in person) mid-cycle for Mitaka

2015-11-04 Thread gord chung

hi all,

after discussions on the usefulness of a Telemetry mid-cycle, we've 
decided to forego having an in-person mid-cycle for Mitaka. to avoid 
rehashing already discussed items, see: same reasons as Glance[1]. much 
thanks to jasonamyers for offering a venue for a Telemetry mid-cycle.


that said we will try to have a virtual one similar to Liberty[2] should 
any items pop up over the development cycle. we would target some time 
during January.


looking forward to N*, an in-person mid-cycle might be beneficial if all 
data/telemetry-related projects were to meet up. we had great 
participation from projects such as CloudKitty, Vitrage, etc...[3] which 
leverage parts of Ceilometer/Aodh/Gnocchi. if we all came together, i 
think that would make a worthwhile mid-cycle. this is something we can 
discuss over the coming cycle.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078239.html

[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/068911.html
[3] https://wiki.openstack.org/wiki/Ceilometer#Ceilometer_Extensions

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Jay Pipes

On 11/04/2015 09:32 AM, Sean Dague wrote:

On 11/04/2015 09:00 AM, Jay Pipes wrote:

On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

Hi stackers,

Projects like Heat, Tempest, Rally, Scalar, and other tools that work
with OpenStack usually handle resources (e.g. VMs, volumes,
images, ...) as follows:

  >>> resource = api.resource_do_some_stuff()
  >>> while api.resource_get(resource["uuid"]) != expected_status
  >>>     sleep(a_bit)

For each async operation they poll, calling resource_get() many
times, which creates significant load on the API and DB layers due to
the nature of this request. (Usually getting full information about
resources produces SQL requests that contain multiple JOINs; e.g. for
a Nova VM it's 6 joins.)

What if we add a new API method that just returns the resource status by
UUID? Or even just extend the get request with a new argument that returns
only the status?


+1

All APIs should have an HTTP HEAD call on important resources for
retrieving quick status information for the resource.

In fact, I proposed exactly this in my Compute "vNext" API proposal:

http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

Swift's API supports HEAD for accounts:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta


containers:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta


and objects:

http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta

So, yeah, I agree.
-jay


How would you expect this to work on "servers"? HEAD specifically
forbids returning a body, and, unlike swift, we don't return very much
information in our headers.


I didn't propose doing it on a collection resource like "servers". Only 
on an entity resource like a single "server".


HEAD /v2/{tenant}/servers/{uuid}
HTTP/1.1 200 OK
Content-Length: 1022
Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
Content-Type: application/json
Date: Thu, 16 Jan 2014 21:13:19 GMT
OpenStack-Compute-API-Server-VM-State: ACTIVE
OpenStack-Compute-API-Server-Power-State: RUNNING
OpenStack-Compute-API-Server-Task-State: NONE

Best,
-jay
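A header-only status call like the one sketched above can be exercised end to end with only the standard library. The toy server below fakes the proposed headers; they exist only in this proposal, not in any released Nova API:

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class _FakeNova(BaseHTTPRequestHandler):
    """Toy server answering HEAD with the proposed (hypothetical) headers."""
    def do_HEAD(self):
        self.send_response(200)
        self.send_header("OpenStack-Compute-API-Server-VM-State", "ACTIVE")
        self.send_header("OpenStack-Compute-API-Server-Power-State", "RUNNING")
        self.send_header("OpenStack-Compute-API-Server-Task-State", "NONE")
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), _FakeNova)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("HEAD", "/v2/demo-tenant/servers/abc-123")
resp = conn.getresponse()
resp.read()  # a HEAD response carries no body
vm_state = resp.getheader("OpenStack-Compute-API-Server-VM-State")
print(resp.status, vm_state)  # 200 ACTIVE
server.shutdown()
```

Since HEAD forbids a body, all state has to travel in headers, which answers Sean's question for the single-server case but not for collections.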

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about Microsoft Hyper-V CI tests

2015-11-04 Thread Johnston, Nate
I noticed the same failure in the neutron-dsvm-tempest test for the Neutron DVR 
HA change, https://review.openstack.org/#/c/143169

I have not yet been able to determine the cause.

Thanks,

—N.

On Nov 3, 2015, at 3:57 PM, sla...@kaplonski.pl 
wrote:

Hello,

I'm now working on a patch to neutron to add QoS in linuxbridge:
https://review.openstack.org/#/c/236210/
The patch is not finished yet but I have a "problem" with some tests. For
example, the Microsoft Hyper-V CI checks are failing. When I checked the logs
of these tests in the file http://64.119.130.115/neutron/236210/7/results.html.gz
I found an error like:

ft1.1: setUpClass
(tempest.api.network.test_networks.NetworksIpV6TestAttrs)_StringException:
Traceback (most recent call last):
 File "tempest/test.py", line 274, in setUpClass
   six.reraise(etype, value, trace)
 File "tempest/test.py", line 267, in setUpClass
   cls.resource_setup()
 File "tempest/api/network/test_networks.py", line 65, in resource_setup
   cls.network = cls.create_network()
 File "tempest/api/network/base.py", line 152, in create_network
   body = cls.networks_client.create_network(name=network_name)
 File "tempest/services/network/json/networks_client.py", line 21, in
create_network
   return self.create_resource(uri, post_data)
 File "tempest/services/network/json/base.py", line 59, in create_resource
   resp, body = self.post(req_uri, req_post_data)
 File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 259, in post
   return self.request('POST', url, extra_headers, headers, body)
 File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 639, in request
   resp, resp_body)
 File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
rest_client.py", line 757, in _error_checker
   resp=resp)
tempest_lib.exceptions.UnexpectedResponseCode: Unexpected response code
received
Details: 503


It seems strange to me because it looks like the error is somewhere in
create_network. I didn't change anything in the code which creates networks.
The other tests are fine IMHO.
So my question is: should I check the reason for these errors and try to fix
them also in my patch? Or how should I proceed with such errors?

--
Pozdrawiam / Best regards
Sławek Kapłoński
slawek@kaplonski.pl
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Marian Horban
Hi guys,

Unfortunately I wasn't at the Tokyo summit, but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services
,
https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss agreements reached on the summit and
discuss
implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understand it, the 'mutable' parameter must indicate whether the service
contains code responsible for reloading this option or not.
And this parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
The SSL options ca_file, cert_file and key_file taken from the oslo.service
library could be reloaded in nova-api, so these options should be mutable...
But for projects that don't need SSL support, reloading the SSL options
doesn't make sense; for such projects these options should be non-mutable.
The problem is that there is a single oslo.service, and there are many
different projects which use it in different ways.
The same option could be mutable in one context and non-mutable in another.
2. Support of config options on some platforms.
The 'mutable' parameter could differ between platforms. Some options make
sense only on specific platforms; if we mark such options as mutable, that
could be misleading on the other platforms.
3. Dependencies between options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
the OpenStack API services.
If the value of a 'workers' option is greater than 1, an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is
created.
When ProcessLauncher receives SIGHUP, it reloads its own configuration,
gracefully terminates its children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals 1, an instance of
ServiceLauncher is created. ServiceLauncher runs everything in a single
process, and in that case we don't get such implicit reloading.
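The ProcessLauncher/ServiceLauncher split described above can be sketched as
follows (an illustration of the behaviour, not the real oslo.service code):

```python
# Illustrative sketch: with one worker a ServiceLauncher runs everything in
# a single process (no implicit reload via respawn); with more workers a
# ProcessLauncher respawns children on SIGHUP, which implicitly reloads
# their configuration.

class ServiceLauncher(object):
    reloads_children_on_sighup = False  # single process, no respawn

class ProcessLauncher(object):
    reloads_children_on_sighup = True   # SIGHUP -> respawn with new config

def pick_launcher(workers):
    """Mirror the workers-option behaviour: more than 1 -> ProcessLauncher."""
    if workers and workers > 1:
        return ProcessLauncher()
    return ServiceLauncher()
```

This is why the same set of options ends up implicitly reloadable in one
deployment and not in another, depending only on the workers value.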

I think that mutability of options is a complicated feature, and that adding
a 'mutable' parameter to the cfg.Opt constructor could just add mess.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks to
reload config options. oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order.
Discussion of such an implementation was started on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.

3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
-
Some config options can be changed via the API (for example quotas); that's
why oslo.config doesn't know the actual configuration of the service and
can't log configuration changes.

Regards, Marian Horban
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Sean Dague
On 11/04/2015 09:49 AM, Jay Pipes wrote:
> On 11/04/2015 09:32 AM, Sean Dague wrote:
>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
>>> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
 Hi stackers,

 Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
 that works with OpenStack are working with resources (e.g. VM, Volumes,
 Images, ..) in the next way:

   >>> resource = api.resouce_do_some_stuff()
   >>> while api.resource_get(resource["uuid"]) != expected_status
   >>>sleep(a_bit)

 For each async operation they are polling and call many times
 resource_get() which creates significant load on API and DB layers due
 the nature of this request. (Usually getting full information about
 resources produces SQL requests that contains multiple JOINs, e,g for
 nova vm it's 6 joins).

 What if we add new API method that will just resturn resource status by
 UUID? Or even just extend get request with the new argument that
 returns
 only status?
>>>
>>> +1
>>>
>>> All APIs should have an HTTP HEAD call on important resources for
>>> retrieving quick status information for the resource.
>>>
>>> In fact, I proposed exactly this in my Compute "vNext" API proposal:
>>>
>>> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
>>>
>>> Swift's API supports HEAD for accounts:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
>>>
>>>
>>>
>>> containers:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
>>>
>>>
>>>
>>> and objects:
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
>>>
>>>
>>> So, yeah, I agree.
>>> -jay
>>
>> How would you expect this to work on "servers"? HEAD specifically
>> forbids returning a body, and, unlike swift, we don't return very much
>> information in our headers.
> 
> I didn't propose doing it on a collection resource like "servers". Only
> on an entity resource like a single "server".
> 
> HEAD /v2/{tenant}/servers/{uuid}
> HTTP/1.1 200 OK
> Content-Length: 1022
> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> Content-Type: application/json
> Date: Thu, 16 Jan 2014 21:13:19 GMT
> OpenStack-Compute-API-Server-VM-State: ACTIVE
> OpenStack-Compute-API-Server-Power-State: RUNNING
> OpenStack-Compute-API-Server-Task-State: NONE

Right, but these headers aren't in the normal resource. They are
returned in the body only.

The point of HEAD is give me the same thing as GET without the body,
because I only care about the headers. Swift resources are structured in
a way where this information is useful.

Our resources are not. We've also had specific requests to prevent
header bloat because it impacts the HTTP caching systems. Also, it's
pretty clear that headers are really not where you want to put volatile
information, which this is.

I think we should step back here and figure out what the actual problem
is, and what ways we might go about solving it. This has jumped directly
to a point-in-time-optimized fast poll loop. It will shave a few cycles
off right now on our current implementation, but will still be orders of
magnitude more costly than consuming the Nova notifications if the only
thing that is cared about is task state transitions. And it's an API
change we have to live with largely *forever*, so short-term optimization
is not what we want to go for. We should focus on the long-term game here.
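For comparison, consuming notifications means reacting to events rather than
polling. A sketch of the endpoint-side logic follows; wiring it to the bus
would go through oslo.messaging's notification listener, and the event type
and payload keys here are assumptions for illustration:

```python
# Sketch of a notification endpoint that records instance state transitions
# instead of polling GET /servers/{uuid}. The method signature matches the
# shape oslo.messaging expects from notification endpoints; the
# 'compute.instance.update' event type and payload keys are assumptions.

class InstanceStateEndpoint(object):
    def __init__(self):
        self.transitions = []

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # Only instance updates carry the state change we care about;
        # everything else on the bus is ignored.
        if event_type == "compute.instance.update":
            self.transitions.append(
                (payload.get("instance_id"), payload.get("state")))
```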

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Question about Microsoft Hyper-V CI tests

2015-11-04 Thread Andrei Bacos
Hi,

Since the response code was 503 (Service Unavailable) it might have been
a timeout/problem in our CI, we usually debug failed builds and recheck
if necessary.

I rechecked https://review.openstack.org/#/c/236210/ and it was
successful on the same patchset.

Going to take a look at https://review.openstack.org/#/c/143169 too.

Thanks,
Andrei


On 11/04/2015 04:48 PM, Johnston, Nate wrote:
> I noticed the same failure in the neutron-dsvm-tempest test for the
> Neutron DVR HA change, https://review.openstack.org/#/c/143169
> 
> I have not yet been able to determine the cause.
> 
> Thanks,
> 
> —N.
> 
>> On Nov 3, 2015, at 3:57 PM, sla...@kaplonski.pl
>>  wrote:
>>
>> Hello,
>>
>> I'm now working on patch to neutron to add QoS in linuxbridge: https://
>> review.openstack.org/#/c/236210/ 
>> Patch is not finished yet but I have some "problem" with some tests. For
>> example Microsoft Hyper-V CI check are failing. When I checked logs of
>> this
>> tests in http://64.119.130.115/neutron/236210/7/results.html.gz file I
>> found
>> error like:
>>
>> ft1.1: setUpClass
>> (tempest.api.network.test_networks.NetworksIpV6TestAttrs)_StringException:
>>
>> Traceback (most recent call last):
>>  File "tempest/test.py", line 274, in setUpClass
>>six.reraise(etype, value, trace)
>>  File "tempest/test.py", line 267, in setUpClass
>>cls.resource_setup()
>>  File "tempest/api/network/test_networks.py", line 65, in resource_setup
>>cls.network = cls.create_network()
>>  File "tempest/api/network/base.py", line 152, in create_network
>>body = cls.networks_client.create_network(name=network_name)
>>  File "tempest/services/network/json/networks_client.py", line 21, in
>> create_network
>>return self.create_resource(uri, post_data)
>>  File "tempest/services/network/json/base.py", line 59, in create_resource
>>resp, body = self.post(req_uri, req_post_data)
>>  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
>> rest_client.py", line 259, in post
>>return self.request('POST', url, extra_headers, headers, body)
>>  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
>> rest_client.py", line 639, in request
>>resp, resp_body)
>>  File "/usr/local/lib/python2.7/dist-packages/tempest_lib/common/
>> rest_client.py", line 757, in _error_checker
>>resp=resp)
>> tempest_lib.exceptions.UnexpectedResponseCode: Unexpected response code
>> received
>> Details: 503
>>
>>
>> It is strange for me because it looks that error is somewhere in
>> create_network. I didn't change anything in code which is creating
>> networks.
>> Other tests are fine IMHO.
>> So my question is: should I check reason of this errors and try to fix
>> it also
>> in my patch? Or how should I proceed with such kind of errors?
>>
>> --
>> Pozdrawiam / Best regards
>> Sławek Kapłoński
>> slawek@kaplonski.pl
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][keystone][release][docs] cross-project liaison update

2015-11-04 Thread Steve Martinelli

In Tokyo the Keystone team decided to make a few changes to its
cross-project liaisons

  - Lance Bragstad will be the new Docs liaison
  - I'll be taking over Morgan's duties as the Release liaison

The following folks will continue to act as liaisons:

  - Brant Knudson for Oslo
  - David Stanek for QA
  - Dolph Matthews for Stable and VMT

I've updated https://wiki.openstack.org/wiki/CrossProjectLiaisons
accordingly

Thanks,

Steve Martinelli
OpenStack Keystone Project Team Lead
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread John Garbutt
On 4 November 2015 at 14:49, Jay Pipes  wrote:
> On 11/04/2015 09:32 AM, Sean Dague wrote:
>>
>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
>>>
>>> On 11/03/2015 05:20 PM, Boris Pavlovic wrote:

 Hi stackers,

 Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
 that works with OpenStack are working with resources (e.g. VM, Volumes,
 Images, ..) in the next way:

   >>> resource = api.resouce_do_some_stuff()
   >>> while api.resource_get(resource["uuid"]) != expected_status
   >>>sleep(a_bit)

 For each async operation they are polling and call many times
 resource_get() which creates significant load on API and DB layers due
 the nature of this request. (Usually getting full information about
 resources produces SQL requests that contains multiple JOINs, e,g for
 nova vm it's 6 joins).

 What if we add new API method that will just resturn resource status by
 UUID? Or even just extend get request with the new argument that returns
 only status?
>>>
>>>
>>> +1
>>>
>>> All APIs should have an HTTP HEAD call on important resources for
>>> retrieving quick status information for the resource.
>>>
>>> In fact, I proposed exactly this in my Compute "vNext" API proposal:
>>>
>>> http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head
>>>
>>> Swift's API supports HEAD for accounts:
>>>
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta
>>>
>>>
>>> containers:
>>>
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta
>>>
>>>
>>> and objects:
>>>
>>>
>>> http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta
>>>
>>> So, yeah, I agree.
>>> -jay
>>
>>
>> How would you expect this to work on "servers"? HEAD specifically
>> forbids returning a body, and, unlike swift, we don't return very much
>> information in our headers.
>
>
> I didn't propose doing it on a collection resource like "servers". Only on
> an entity resource like a single "server".
>
> HEAD /v2/{tenant}/servers/{uuid}
> HTTP/1.1 200 OK
> Content-Length: 1022
> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> Content-Type: application/json
> Date: Thu, 16 Jan 2014 21:13:19 GMT
> OpenStack-Compute-API-Server-VM-State: ACTIVE
> OpenStack-Compute-API-Server-Power-State: RUNNING
> OpenStack-Compute-API-Server-Task-State: NONE

For polling, that sounds quite efficient and handy.

For "servers" we could do this (I think there was a spec up that wanted this):

HEAD /v2/{tenant}/servers
HTTP/1.1 200 OK
Content-Length: 1022
Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
Content-Type: application/json
Date: Thu, 16 Jan 2014 21:13:19 GMT
OpenStack-Compute-API-Server-Count: 13
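A client-side poll loop over such a HEAD endpoint might look like this
sketch, where fetch_status stands in for issuing the HEAD request and
reading the proposed (hypothetical) OpenStack-Compute-API-Server-VM-State
header:

```python
import time

def wait_for_status(fetch_status, expected="ACTIVE", interval=0.01, timeout=5):
    """Poll fetch_status() until it returns `expected` or the timeout expires.

    fetch_status would issue the HEAD request and read the proposed
    (hypothetical) state header, e.g. with python-requests:
        requests.head(url, headers={"X-Auth-Token": token}) \
            .headers.get("OpenStack-Compute-API-Server-VM-State")
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if fetch_status() == expected:
            return True
        time.sleep(interval)
    return False
```

The point being: the per-iteration cost is one HEAD round trip instead of a
full GET body, which is exactly the trade-off being debated in this thread.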

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread John Garbutt
On 4 November 2015 at 15:00, Sean Dague  wrote:
> On 11/04/2015 09:49 AM, Jay Pipes wrote:
>> On 11/04/2015 09:32 AM, Sean Dague wrote:
>>> On 11/04/2015 09:00 AM, Jay Pipes wrote:
 On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
> Hi stackers,
>
> Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
> that works with OpenStack are working with resources (e.g. VM, Volumes,
> Images, ..) in the next way:
>
>   >>> resource = api.resouce_do_some_stuff()
>   >>> while api.resource_get(resource["uuid"]) != expected_status
>   >>>sleep(a_bit)
>
> For each async operation they are polling and call many times
> resource_get() which creates significant load on API and DB layers due
> the nature of this request. (Usually getting full information about
> resources produces SQL requests that contains multiple JOINs, e,g for
> nova vm it's 6 joins).
>
> What if we add new API method that will just resturn resource status by
> UUID? Or even just extend get request with the new argument that
> returns
> only status?

 +1

 All APIs should have an HTTP HEAD call on important resources for
 retrieving quick status information for the resource.

 In fact, I proposed exactly this in my Compute "vNext" API proposal:

 http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

 Swift's API supports HEAD for accounts:

 http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta



 containers:

 http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta



 and objects:

 http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta


 So, yeah, I agree.
 -jay
>>>
>>> How would you expect this to work on "servers"? HEAD specifically
>>> forbids returning a body, and, unlike swift, we don't return very much
>>> information in our headers.
>>
>> I didn't propose doing it on a collection resource like "servers". Only
>> on an entity resource like a single "server".
>>
>> HEAD /v2/{tenant}/servers/{uuid}
>> HTTP/1.1 200 OK
>> Content-Length: 1022
>> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
>> Content-Type: application/json
>> Date: Thu, 16 Jan 2014 21:13:19 GMT
>> OpenStack-Compute-API-Server-VM-State: ACTIVE
>> OpenStack-Compute-API-Server-Power-State: RUNNING
>> OpenStack-Compute-API-Server-Task-State: NONE
>
> Right, but these headers aren't in the normal resource. They are
> returned in the body only.
>
> The point of HEAD is give me the same thing as GET without the body,
> because I only care about the headers. Swift resources are structured in
> a way where this information is useful.

I guess we would have to add this to GET requests, for consistency,
which feels like duplication.

> Our resources are not. We've also had specific requests to prevent
> header bloat because it impacts the HTTP caching systems. Also, it's
> pretty clear that headers are really not where you want to put volatile
> information, which this is.

Hmm, you do make a good point about caching.

> I think we should step back here and figure out what the actual problem
> is, and what ways we might go about solving it. This has jumped directly
> to a point in time optimized fast poll loop. It will shave a few cycles
> off right now on our current implementation, but will still be orders of
> magnitude more costly that consuming the Nova notifications if the only
> thing that is cared about is task state transitions. And it's an API
> change we have to live with largely *forever* so short term optimization
> is not what we want to go for.

I do agree with that.

> We should focus on the long term game here.

The long term plan being the end user async API? Maybe using
websockets, or similar?
https://etherpad.openstack.org/p/liberty-cross-project-user-notifications

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] acceptance: run WSGI for API services

2015-11-04 Thread Jason Guiditta

On 14/08/15 09:45 -0400, Emilien Macchi wrote:

So far we have WSGI support for puppet-keystone & pupper-ceilometer.
I'm currently working on other components to easily deploy OpenStack
running API services using apache/wsgi instead of eventlet.

I would like to propose some change in our beaker tests:

stable/kilo:
* puppet-{ceilometer,keystone}: test both cases so we validate the
upgrade with beaker
* puppet-*: no wsgi support now, but eventually could be backported from
master (liberty) once pushed.

master (future stable/liberty):
* puppet-{ceilometer,keystone}: keep only WSGI scenario
* puppet-*: push WSGI support in manifests, test them in beaker,
eventually backport them to stable/kilo, and if on time (before
stable/libery), drop non-WSGI scenario.

The goal here is to:
* test upgrade from non-WSGI to WSGI setup in stable/kilo for a maximum
of modules
* keep WSGI scenario only for Liberty

Thoughts?
--
Emilien Macchi


Sorry for the late reply, but I am wondering if anyone knows how we
(via puppet, pacemaker, or whatever else - even the services
themselves, within apache) would handle start order if all these
services become wsgi apps running under apache?  In other words, for
an HA deployment, as an example, we typically set ceilometer-central
to start _after_ keystone.  If they are both in apache, how could this
be done?  Is it truly not needed? If not, is this something new, or
have those of us working on deployments with the pacemaker
architecture been misinformed all this time?

-j



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Sean McGinnis
On Wed, Nov 04, 2015 at 07:30:51AM -0500, Sean Dague wrote:
> On 10/28/2015 07:15 PM, Anne Gentle wrote:
> 
> Has anyone considered using #openstack-dev, instead of a new meeting
> room? #openstack-dev is mostly a ghost town at this point, and deciding
> that instead it would be the dedicated cross project space, including
> meetings support, might be interesting.
> 
>   -Sean

+1 - That makes a lot of sense to me.

> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Adding a new feature in Kilo. Is it possible?

2015-11-04 Thread Michał Dubiel
Hi John,

Just to let you know, I have just filled in the blueprint here:
https://blueprints.launchpad.net/nova/+spec/libvirt-vhostuser-vrouter-vif

Regards,
Michal

On 4 November 2015 at 12:20, John Garbutt  wrote:

> In terms of adding this into master, we can go for a spec-less
> blueprint in Nova.
>
> Reach out to me on IRC if I can help you through the process.
>
> Thanks,
> johnthetubaguy
>
> PS
> We are working on making this easier in the future, by using OS VIF Lib.
>
> On 4 November 2015 at 08:56, Michał Dubiel  wrote:
> > Ok, I see. Thanks for all the answers.
> >
> > Regards,
> > Michal
> >
> > On 3 November 2015 at 22:50, Matt Riedemann 
> > wrote:
> >>
> >>
> >>
> >> On 11/3/2015 11:57 AM, Michał Dubiel wrote:
> >>>
> >>> Hi all,
> >>>
> >>> We have a simple patch allowing to use OpenContrail's vrouter with
> >>> vhostuser vif types (currently only OVS has support for that). We would
> >>> like to contribute it.
> >>>
> >>> However, We would like this change to land in the next maintenance
> >>> release of Kilo. Is it possible? What should be the process for this?
> >>> Should we prepare a blueprint and review request for the 'master'
> branch
> >>> first? It is small self contained change so I believe it does not need
> a
> >>> nova-spec.
> >>>
> >>> Regards,
> >>> Michal
> >>>
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> The short answer is 'no' to backporting features to stable branches.
> >>
> >> As the other reply said, feature changes are targeted to master.
> >>
> >> The full stable branch policy is here:
> >>
> >> https://wiki.openstack.org/wiki/StableBranch
> >>
> >> --
> >>
> >> Thanks,
> >>
> >> Matt Riedemann
> >>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Doug Hellmann
Excerpts from Marian Horban's message of 2015-11-04 17:00:55 +0200:
> Hi guys,
> 
> Unfortunately I haven't been on Tokio summit but I know that there was
> discussion about dynamic reloading of configuration.
> Etherpad refs:
> https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services
> ,
> https://etherpad.openstack.org/p/mitaka-oslo-security-logging
> 
> In this thread I want to discuss agreements reached on the summit and
> discuss
> implementation details.
> 
> Some notes taken from etherpad and my remarks:
> 
> 1. "Adding "mutable" parameter for each option."
> "Do we have an option mutable=True on CfgOpt? Yes"
> -
> As I understood 'mutable' parameter must indicate whether service contains
> code responsible for reloading of this option or not.
> And this parameter should be one of the arguments of cfg.Opt constructor.
> Problems:
> 1. Library's options.
> SSL options ca_file, cert_file, key_file taken from oslo.service library
> could be reloaded in nova-api so these options should be mutable...
> But for some projects that don't need SSL support reloading of SSL options
> doesn't make sense. For such projects this option should be non mutable.
> Problem is that oslo.service - single and there are many different projects
> which use it in different way.
> The same options could be mutable and non mutable in different contexts.

No, that would not be allowed. An option would either always be mutable,
or never. Library options, such as logging levels, would be marked
mutable and the library would need to provide a callback of some sort to
be invoked when the configuration is reloaded. If we can't do this for
SSL-related options, those are not mutable.

> 2. Support of config options on some platforms.
> Parameter "mutable" could be different for different platforms. Some
> options
> make sense only for specific platforms. If we mark such options as mutable
> it could be misleading on some platforms.

Again, if the option cannot be made mutable everywhere it is not
mutable.

> 3. Dependency of options.
> There are many 'workers' options(osapi_compute_workers, ec2_workers,
> metadata_workers, workers). These options specify number of workers for
> OpenStack API services.
> If value of the 'workers' option is greater than '1' instance of
> ProcessLauncher is created otherwise instance of ServiceLauncher is created.
> When ProcessLauncher receives SIGHUP it reloads it own configuration,
> gracefully terminates children and respawns new children.
> This mechanism allows to reload many config options implicitly.
> But if value of the 'workers' option equals '1' instance of ServiceLauncher
> is created.
> ServiceLauncher starts everything in single process and in this case we
> don't have such implicit reloading.
> 
> I think that mutability of options is a complicated feature and I think that
> adding of 'mutable' parameter into cfg.Opt constructor could just add mess.

The idea is to start with a very small number of options. Most of the
ones identified in the summit session are owned by the application,
if I remember correctly. After the configuration changes, the same
function that calls the reload for oslo.config would call the necessary
reload functions in the libraries and application modules that have
mutable options.
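The model described above — only options flagged mutable change on reload,
while ignored changes are reported so they can be logged — can be sketched
like this (an illustration, not oslo.config's actual implementation):

```python
# Illustrative sketch of the mutable-option model: mutate() applies new
# values to mutable options and returns the names of changes that were
# ignored because the option is immutable, so the caller can log them.

class Opt(object):
    def __init__(self, name, default, mutable=False):
        self.name, self.value, self.mutable = name, default, mutable

class Conf(object):
    def __init__(self, opts):
        self._opts = {o.name: o for o in opts}

    def mutate(self, new_values):
        """Apply new values; return names of ignored (immutable) changes."""
        ignored = []
        for name, value in new_values.items():
            opt = self._opts[name]
            if opt.mutable:
                opt.value = value
            elif opt.value != value:
                ignored.append(name)
        return ignored
```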

> 
> 2. "oslo.service catches SIGHUP and calls oslo.config"
> -
> From my point of view every service should register list of hooks to reload
> config options. oslo.service should catch SIGHUP and call list of
> registered
> hooks one by one with specified order.
> Discussion of such implementation was started in ML:
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html

The reload hooks may need to be called in a specific order, so for
now we're leaving it up to the application to do that, rather than
having a generic registry.

> .
> Raw reviews:
> https://review.openstack.org/#/c/228892/,
> https://review.openstack.org/#/c/223668/.
> 
> 3. "oslo.config is responsible to log changes which were ignored on SIGHUP"
> -
> Some config options could be changed using API(for example quotas)
> that's
> why
> oslo.config doesn't know actual configuration of service and can't log
> changes of configuration.

This proposal only applies to options defined by oslo.config, which
should not be duplicated in the database.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Sean Dague
On 11/04/2015 10:13 AM, John Garbutt wrote:
> On 4 November 2015 at 14:49, Jay Pipes  wrote:
>> On 11/04/2015 09:32 AM, Sean Dague wrote:
>>>
>>> On 11/04/2015 09:00 AM, Jay Pipes wrote:

 On 11/03/2015 05:20 PM, Boris Pavlovic wrote:
>
> Hi stackers,
>
> Usually such projects like Heat, Tempest, Rally, Scalar, and other tool
> that works with OpenStack are working with resources (e.g. VM, Volumes,
> Images, ..) in the next way:
>
>   >>> resource = api.resouce_do_some_stuff()
>   >>> while api.resource_get(resource["uuid"]) != expected_status
>   >>>sleep(a_bit)
>
> For each async operation they are polling and call many times
> resource_get() which creates significant load on API and DB layers due
> the nature of this request. (Usually getting full information about
> resources produces SQL requests that contains multiple JOINs, e,g for
> nova vm it's 6 joins).
>
> What if we add new API method that will just resturn resource status by
> UUID? Or even just extend get request with the new argument that returns
> only status?


 +1

 All APIs should have an HTTP HEAD call on important resources for
 retrieving quick status information for the resource.

 In fact, I proposed exactly this in my Compute "vNext" API proposal:

 http://docs.oscomputevnext.apiary.io/#reference/server/serversid/head

 Swift's API supports HEAD for accounts:


 http://developer.openstack.org/api-ref-objectstorage-v1.html#showAccountMeta


 containers:


 http://developer.openstack.org/api-ref-objectstorage-v1.html#showContainerMeta


 and objects:


 http://developer.openstack.org/api-ref-objectstorage-v1.html#showObjectMeta

 So, yeah, I agree.
 -jay
>>>
>>>
>>> How would you expect this to work on "servers"? HEAD specifically
>>> forbids returning a body, and, unlike swift, we don't return very much
>>> information in our headers.
>>
>>
>> I didn't propose doing it on a collection resource like "servers". Only on
>> an entity resource like a single "server".
>>
>> HEAD /v2/{tenant}/servers/{uuid}
>> HTTP/1.1 200 OK
>> Content-Length: 1022
>> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
>> Content-Type: application/json
>> Date: Thu, 16 Jan 2014 21:13:19 GMT
>> OpenStack-Compute-API-Server-VM-State: ACTIVE
>> OpenStack-Compute-API-Server-Power-State: RUNNING
>> OpenStack-Compute-API-Server-Task-State: NONE
> 
> For polling, that sounds quite efficient and handy.
> 
> For "servers" we could do this (I think there was a spec up that wanted this):
> 
> HEAD /v2/{tenant}/servers
> HTTP/1.1 200 OK
> Content-Length: 1022
> Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT
> Content-Type: application/json
> Date: Thu, 16 Jan 2014 21:13:19 GMT
> OpenStack-Compute-API-Server-Count: 13

This seems like a fundamental abuse of HTTP honestly. If you find
yourself creating a ton of new headers, you are probably doing it wrong.

I do think the near term work around is to actually use Searchlight.
They're monitoring the notifications bus for nova, and refreshing
resources when they see a notification which might have changed it. It
still means that Searchlight is hitting our API more than is ideal, but at
least only one service is doing so, and if the rest hit that instead
they'll get the resource without any db hits (it's all through an
Elasticsearch cluster).

I think longer term we probably need a dedicated event service in
OpenStack. A few of us actually had an informal conversation about this
during the Nova notifications session to figure out if there was a way
to optimize the Searchlight path. Nearly everyone wants websockets,
which is good. The problem is, that means you've got to anticipate
10,000+ open websockets as soon as we expose this. Which means the stack
to deliver that sanely isn't just a bit of python code, it's also the
highly optimized server underneath.

So, I feel like with Searchlight we've got a workaround that's more
efficient than anything we'd get from an API that we really don't want
to support down the road. I definitely don't want to make
general-purpose search a thing inside every service, because making it
efficient would mean reimplementing most of Searchlight in each of the
services.

Instead of spending the energy on this path, it would be much better to
push forward on the end user events path, which is really the long term
model we want.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Brant Knudson
On Wed, Nov 4, 2015 at 6:47 AM, Sean Dague  wrote:

> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15minutes of
> its hour-long build time is compiling lxml and numpy 3 times each.
>
>
I've always wondered why lxml is used rather than Python's built-in XML
support. Is there some function that xml.etree is missing that lxml.etree
provides? The only thing I know about is that lxml has better support for
some XPath features.

:: Brant


> Due to our exact pinning via upper-constraints.txt we ensure exactly
> the right version of each of those in old & new & subnode (old).
>
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.
>
> -Sean
>
>



> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][nova] Handling lots of GET query string parameters?

2015-11-04 Thread Salvatore Orlando
Inline,
Salvatore

On 4 November 2015 at 15:11, Cory Benfield  wrote:

>
> > On 4 Nov 2015, at 13:13, Salvatore Orlando 
> wrote:
> >
> > Regarding Jay's proposal, this would be tantamount to defining an API
> action for retrieving instances, something currently being discussed here
> [1].
> > The only comment I have is that I am not entirely sure whether using
> the POST verb for operations which do not alter the server's
> representation of any object at all is in accordance with RFC 7231.
>
> It’s totally fine, so long as you define things appropriately. Jay’s
> suggestion does exactly that, and is entirely in line with RFC 7231.
>
> The analogy here is to things like complex search forms. Many search
> engines allow you to construct very complex search queries (consider
> something like Amazon or eBay, where you can filter on all kinds of
> interesting criteria). These forms are often submitted to POST endpoints
> rather than GET.
>
> This is totally fine. In fact, the first example from RFC 7231 Section
> 4.3.3 (POST) applies here: “POST is used for the following functions (among
> others): Providing a block of data […] to a data-handling process”. In this
> case, the data-handling function is the search function on the server.
>

I looked back at the RFC and indeed it does not state anywhere that a POST
operation is required to change the state of any object in any way, so the
approach is entirely fine from this aspect as well.
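For concreteness, such a search-style POST would carry its filters in the request body rather than the query string. The payload shape and the `/servers/search` action below are hypothetical, not an actual Nova API:

```python
import json

# Complex filters that would be awkward to express as a GET query string.
search_request = {
    "filters": {
        "status": ["ACTIVE", "ERROR"],
        "flavor": "m1.small",
        "metadata": {"owner": "team-a"},
    },
    "sort": [{"key": "created_at", "dir": "desc"}],
    "limit": 100,
}

# The body a client would POST to a hypothetical /servers/search action.
body = json.dumps(search_request)
assert json.loads(body)["filters"]["flavor"] == "m1.small"
```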


>
> The *only* downside of Jay’s approach is that the response cannot really
> be cached. It’s not clear to me whether anyone actually deploys a cache in
> this kind of role though, so it may not hurt too much.
>

I believe there would not be a great advantage in caching these kinds of
responses, as the cache hit rate would be very low anyway.


> Cory
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] new change management tools and processes for stable/liberty and mitaka

2015-11-04 Thread Doug Hellmann
Excerpts from Louis Taylor's message of 2015-11-04 08:05:44 +:
> On Wed, Nov 04, 2015 at 03:03:29PM +1300, Fei Long Wang wrote:
> > Hi Doug,
> > 
> > Thanks for posting this. I'm working on this for Zaqar now and there is a
> > question. As for the stable/liberty patch, where does the "60fdcaba00e30d02"
> > in [1] come from? Thanks.
> > 
> > [1] 
> > https://review.openstack.org/#/c/241322/1/releasenotes/notes/60fdcaba00e30d02-start-using-reno.yaml
> 
> This is from running the reno command to create a uniquely named release note
> file. See http://docs.openstack.org/developer/reno/usage.html

Right, we need the files to have unique names, so reno generates a unique
part of the file name and combines it with the partial filename you give
it on the command line.
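reno's naming scheme can be mimicked with a random hex prefix; this is a simplified sketch of the idea, not reno's actual code:

```python
import os
import uuid

def new_note_filename(slug, notes_dir="releasenotes/notes"):
    """Mimic `reno new <slug>`: prefix the slug with a random
    16-hex-digit string so note files never collide."""
    unique = uuid.uuid4().hex[:16]
    return os.path.join(notes_dir, "%s-%s.yaml" % (unique, slug))

# e.g. releasenotes/notes/60fdcaba00e30d02-start-using-reno.yaml
name = new_note_filename("start-using-reno")
assert os.path.basename(name).endswith("-start-using-reno.yaml")
```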

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Creating puppet-keystone-core and proposing Richard Megginson core-reviewer

2015-11-04 Thread Emilien Macchi


On 11/03/2015 03:56 PM, Matt Fischer wrote:
> Sorry I replied to this right away but used the wrong email address and
> it bounced!
> 
>> I've appreciated all of Rich's v3 contributions to keystone. +1 from me.

Two positive votes from our core-reviewer team.
No negative votes at all.

I guess that's a 'yes', welcome Rich, you're the first
puppet-keystone-core member!

Note: anyone who is already a core reviewer on the Puppet modules is also
core on puppet-keystone, by the way.

Congrats Rich!

> On Tue, Nov 3, 2015 at 4:38 AM, Sofer Athlan-Guyot  > wrote:
> 
> He's a very good reviewer with deep knowledge of Keystone and Puppet.
> Thank you Richard for your help.
> 
> +1
> 
> Emilien Macchi  > writes:
> 
> > At the Summit we discussed scaling up our team.
> > We decided to investigate the creation of sub-groups specific to our
> > modules that would have +2 power.
> >
> > I would like to start with puppet-keystone:
> > https://review.openstack.org/240666
> >
> > And propose Richard Megginson part of this group.
> >
> > Rich has been leading puppet-keystone work since the Juno cycle. Without his
> > leadership and skills, I'm not sure we would have Keystone v3 support
> > in our modules.
> > He's a good Puppet reviewer and takes care of backward compatibility.
> > He also has strong knowledge of how Keystone works. He's always
> > willing to lead our roadmap regarding identity deployment in
> > OpenStack.
> >
> > Having him on board is an awesome opportunity for us to be ahead of
> > other deployment tools and to support many features in Keystone that
> > real deployments actually need.
> >
> > I would like to propose him part of the new puppet-keystone-core
> > group.
> >
> > Thank you Rich for your work, which is very appreciated.
> 
> --
> Sofer Athlan-Guyot
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 10:50 AM, Brant Knudson wrote:
> 
> 
> On Wed, Nov 4, 2015 at 6:47 AM, Sean Dague  > wrote:
> 
> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15minutes of
> its hour-long build time is compiling lxml and numpy 3 times each.
> 
> 
> I've always wondered why lxml is used rather than python's built-in XML
> support. Is there some function that xml.etree is missing that
> lxml.etree provides? The only thing I know about is that lxml has better
> support for some XPATH features.

It's all xpath semantics as far as I know.
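The gap is easy to see with the standard library alone: xml.etree implements only a small XPath subset, while lxml's xpath() method supports full XPath 1.0, including functions like contains() that the snippet below can only demonstrate failing:

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    "<servers>"
    "<server name='web-1' status='ACTIVE'/>"
    "<server name='db-1' status='ERROR'/>"
    "</servers>"
)

# Supported by xml.etree: tag paths and attribute-equality predicates.
active = doc.findall("./server[@status='ACTIVE']")
assert [s.get("name") for s in active] == ["web-1"]

# NOT supported by xml.etree -- full XPath 1.0 functions such as
# contains() need lxml:  doc.xpath("//server[contains(@name,'web')]")
try:
    doc.findall("./server[contains(@name,'web')]")
    raised = False
except SyntaxError:   # ElementPath rejects the predicate
    raised = True
assert raised
```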

I was told that SAML needs a thing the built-in XML library doesn't
support. I doubt that any other projects really need it. However, there
would be a ton of unwinding in nova to get rid of it, given how
extensively it's used in the libvirt driver.

I don't know about other projects. It also only benefits us if we can
remove it from g-r entirely.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposing Ian Wienand as core reviewer on diskimage-builder

2015-11-04 Thread James Slagle
On Wed, Nov 4, 2015 at 12:25 AM, Gregory Haynes  wrote:
> Hello everyone,
>
> I would like to propose adding Ian Wienand as a core reviewer on the
> diskimage-builder project. Ian has been making a significant number of
> contributions for some time to the project, and has been a great help in
> reviews lately. Thus, I think we could benefit greatly by adding him as
> a core reviewer.
>
> Current cores - Please respond with any approvals/objections by next Friday
> (November 13th).

+1

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 07:47 AM, Sean Dague wrote:
> I was spot checking the grenade multinode job to make sure it looks like
> it was doing the correct thing. In doing so I found that ~15minutes of
> its hour-long build time is compiling lxml and numpy 3 times each.
> 
> Due to our exact pinning via upper-constraints.txt we ensure exactly
> the right version of each of those in old & new & subnode (old).
> 
> Is there a nodepool cache strategy where we could pre build these? A 25%
> performance win comes out the other side if there is a strategy here.

Also, if anyone needs more data -
http://paste.openstack.org/show/477989/ is the current cost during a
devstack tempest run (not grenade, which makes this 3x worse).

This was calculated with this parser -
https://review.openstack.org/#/c/241676/

Building cryptography twice looks like a fun one, and cffi 3 times.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins][Ironic] Deploy Ironic with fuel ?

2015-11-04 Thread Stanisław Dałek

Pavlo,

I tried to deploy Ironic from Fuel 8.0 release (iso build 
fuel-community-8.0-83-2015-11-02_05-42-11)


4b1f91ad496c571e4cbc5931134db6479e582c8b

After creating a fuel network-group as you suggested to Loic and 
configuring an Ironic deployment (4 nodes including controller, compute, 
cinder and Ironic), I am getting an error during the deployment of 
firewall.pp on the controller node:


$$network_metadata["vips"]["baremetal"] is :undef, not a hash or array 
at /etc/puppet/modules/osnailyfacter/modular/firewall/firewall.pp:52 on 
node node-36.domain.tld


It seems that the VIP for the baremetal network hasn't been created.
Please note, I didn't deploy the Fuel-Plugin-Ironic Loic refers to (I 
tried it in 7.0, but it doesn't work anyway), but used the Ironic 
functionality available in 8.0.


Is there any way I can make it work?

Best regards
Stanislaw



Subject: 	Re: [openstack-dev] [Fuel][Plugins][Ironic] Deploy Ironic with 
fuel ?


From:   Pavlo Shchelokovskyy (pshc...@mirantis.com)
Date:   Oct 20, 2015 9:36:37 am
List:   org.openstack.lists.openstack-dev

Hi Loic,

the story of this plugin is a bit complicated. We did it as a PoC of 
integrating Ironic into Mirantis OpenStack/Fuel during the 7.0 release. 
Currently we are working on integrating Ironic into the core of Fuel, 
targeting its 8.0 release. Given that, the plugin is not official in any 
sense, is not certified according to the Fuel plugin guidelines, is not 
supported at all, and has had only limited testing on a small in-house lab.


To successfully deploy Ironic with this plugin "as-is" you'd most 
probably need access to Mirantis package repositories, as it relies on 
some patches to fuel-agent that we use for bootstrapping; some of those 
are not merged yet, so we use repos created by our CI from Gerrit 
changes. Alternatively, you could hack on the code to disable those 
dependencies (building and uploading the custom bootstrap image), 
activate the plain upstream Ironic drivers, and then use upstream images 
with e.g. ironic-python-agent for bootstrapping baremetal nodes.


As to your network setup question - the baremetal network is somewhat 
similar to the public network in Fuel, which needs two IP ranges 
defined: one for service nodes, and the other to assign to actual VMs 
as floating IPs. Thus the networking setup for the plugin should be done 
as follows (naming it "baremetal" is mandatory):


fuel network-group --name baremetal --cidr 192.168.3.0/24 -c --nodegroup 
1 --meta='{ip_range: ["192.168.3.2", "192.168.3.50"], notation: 
"ip_ranges"}'


where the ip range (I've put some example values) is for those service 
OpenStack nodes that host Ironic services and need to have access to 
this provider network where BM nodes do live (this range is then 
auto-filled to network.baremetal section of Networking settings tab in 
Fuel UI). The range for the actual BM nodes is defined then on the 
"Settings->Ironic" tab in Fuel UI once Ironic checkbox there is activated.


I admit we do need to make some effort and document the plugin a bit 
better (actually, at all :) ) so as not to confuse people wishing to try it out.


Best regards,

On Mon, Oct 19, 2015 at 6:45 AM,  wrote:

Hello,

I'm currently searching for information about the Ironic Fuel plugin: 
https://github.com/openstack/fuel-plugin-ironic I can't find any 
documentation on it.


I've tried to install and deploy an OpenStack environment with Fuel 7.0 
and the Ironic plugin, but it failed. After adding the ironic role to a 
node, the Fuel UI crashed due to a missing "baremetal" network. When 
creating a network group


fuel network-group --create --node-group 1 --name \

"baremetal" --cidr 192.168.3.0/24

the UI works again, but I got some errors in the deployment during 
network configuration. So I think I have to configure a network 
template; has anyone already done this for this plugin?


Regards,

Loic

_ 



Ce message et ses pieces jointes peuvent contenir des informations
confidentielles ou privilegiees et ne doivent donc pas etre diffuses, 
exploites ou copies sans autorisation. Si vous avez recu ce
message par erreur, veuillez le signaler a l'expediteur et le detruire 
ainsi que les pieces jointes. Les messages
electroniques etant susceptibles d'alteration, Orange decline toute 
responsabilite si ce message a ete altere, deforme ou

falsifie. Merci.

This message and its attachments may contain confidential or privileged
information that may be protected by law; they should not be 
distributed, used or copied without authorisation. If you have received 
this email in error, please notify the sender and delete
this message and its attachments. As emails may be altered, Orange is 
not liable for messages that have been

modified, changed or falsified. Thank you.

_

Re: [openstack-dev] [infra][dsvm] openstack packages pre-installed dsvm node?

2015-11-04 Thread Jeremy Stanley
On 2015-11-04 18:14:43 +0800 (+0800), Gareth wrote:
> When choosing nodes to run a gate job, is there a liberty-trusty node,
> where the Liberty core services are installed and well set up? It would
> be helpful for development on non-core projects and would save much time.

Integration tests really do need to install the services from source
during the job, not in advance. Otherwise you would be unable to
have your project's change declare a cross-repository dependency
(depends-on) to a pending change in another one of those projects.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Jeremy Stanley
On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> On 11/04/2015 06:47 AM, Sean Dague wrote:
[...]
> > Is there a nodepool cache strategy where we could pre build these? A 25%
> > performance win comes out the other side if there is a strategy here.
> 
> python wheel repo could help maybe?

That's along the lines of how I expect we'd need to solve it.
Basically add a new DIB element to openstack-infra/project-config in
nodepool/elements (or extend the cache-devstack element already
there) to figure out which version(s) it needs to prebuild and then
populate a wheelhouse which can be leveraged by the jobs running on
the resulting diskimage. The test scripts in the
openstack/requirements repo may already have much of this logic
implemented for the purpose of testing that we can build sane wheels
of all our requirements.

This of course misses situations where the requirements change and
the diskimages haven't been rebuilt or in jobs testing proposed
changes which explicitly alter these requirements, but could be
augmented by similar mechanisms in devstack itself to avoid building
them more than once.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
>> On 11/04/2015 06:47 AM, Sean Dague wrote:
> [...]
>>> Is there a nodepool cache strategy where we could pre build these? A 25%
>>> performance win comes out the other side if there is a strategy here.
>>
>> python wheel repo could help maybe?
> 
> That's along the lines of how I expect we'd need to solve it.
> Basically add a new DIB element to openstack-infra/project-config in
> nodepool/elements (or extend the cache-devstack element already
> there) to figure out which version(s) it needs to prebuild and then
> populate a wheelhouse which can be leveraged by the jobs running on
> the resulting diskimage. The test scripts in the
> openstack/requirements repo may already have much of this logic
> implemented for the purpose of testing that we can build sane wheels
> of all our requirements.
> 
> This of course misses situations where the requirements change and
> the diskimages haven't been rebuilt or in jobs testing proposed
> changes which explicitly alter these requirements, but could be
> augmented by similar mechanisms in devstack itself to avoid building
> them more than once.

Ok, so given that pip automatically builds a local wheel cache now when
it installs this... is it as simple as
https://review.openstack.org/#/c/241692/ ?
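The mechanism in question: since pip 7, every sdist pip builds is cached as a wheel and reused on later installs of the same version. A sketch of the commands involved (paths are pip defaults; the explicit wheelhouse variant is one possible image-build-time approach, not what the review necessarily does):

```shell
# First install compiles lxml/numpy from source and stores the
# resulting wheels in pip's cache (~/.cache/pip/wheels by default).
pip install -c upper-constraints.txt lxml numpy

# Later installs of the same pinned versions -- e.g. the "old" and
# "new" sides of a grenade run -- reuse the cached wheels and skip
# compilation entirely.
pip install -c upper-constraints.txt lxml numpy

# A wheelhouse can also be prebuilt explicitly at image-build time:
pip wheel -w /opt/wheelhouse -c upper-constraints.txt lxml numpy
pip install --find-links /opt/wheelhouse lxml numpy
```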

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Morgan Fainberg
On Nov 4, 2015 09:14, "Sean Dague"  wrote:
>
> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> > On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> >> On 11/04/2015 06:47 AM, Sean Dague wrote:
> > [...]
> >>> Is there a nodepool cache strategy where we could pre build these? A 25%
> >>> performance win comes out the other side if there is a strategy here.
> >>
> >> python wheel repo could help maybe?
> >
> > That's along the lines of how I expect we'd need to solve it.
> > Basically add a new DIB element to openstack-infra/project-config in
> > nodepool/elements (or extend the cache-devstack element already
> > there) to figure out which version(s) it needs to prebuild and then
> > populate a wheelhouse which can be leveraged by the jobs running on
> > the resulting diskimage. The test scripts in the
> > openstack/requirements repo may already have much of this logic
> > implemented for the purpose of testing that we can build sane wheels
> > of all our requirements.
> >
> > This of course misses situations where the requirements change and
> > the diskimages haven't been rebuilt or in jobs testing proposed
> > changes which explicitly alter these requirements, but could be
> > augmented by similar mechanisms in devstack itself to avoid building
> > them more than once.
>
> Ok, so given that pip automatically builds a local wheel cache now when
> it installs this... is it as simple as
> https://review.openstack.org/#/c/241692/ ?
>

If it is that easy, what a fantastic win in speeding things up!

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread gord chung

apologies if the below was mentioned at some point in this thread.

On 04/11/2015 10:42 AM, Sean Dague wrote:

> This seems like a fundamental abuse of HTTP honestly. If you find
> yourself creating a ton of new headers, you are probably doing it wrong.

If we want to explore the HTTP path, did we consider using ETags [1] to 
check whether resources have changed? It's something Gnocchi's API uses 
to handle resource changes.
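For illustration, the ETag pattern can be sketched like this (a hypothetical server-side helper, not Gnocchi's actual code):

```python
import hashlib
import json

def make_etag(resource):
    """Derive a validator from the resource representation."""
    body = json.dumps(resource, sort_keys=True).encode()
    return hashlib.sha1(body).hexdigest()

def handle_get(resource, if_none_match=None):
    """Return (status, body, etag); 304 with no body when unchanged."""
    etag = make_etag(resource)
    if if_none_match == etag:
        return 304, None, etag      # client copy is still fresh
    return 200, resource, etag

# First poll: full 200 response plus an ETag to remember.
server = {"id": "abc", "status": "BUILD"}
status, body, etag = handle_get(server)
assert status == 200

# Second poll with If-None-Match: nothing changed, so 304 and no payload.
status, body, _ = handle_get(server, if_none_match=etag)
assert status == 304 and body is None
```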


> I do think the near-term workaround is to actually use Searchlight.
> They're monitoring the notifications bus for nova, and refreshing
> resources when they see a notification which might have changed them. It
> still means that Searchlight is hitting our API more than ideal, but at
> least only one service is doing so, and if the rest hit that instead
> they'll get the resource without any db hits (it's all through an
> Elasticsearch cluster).
>
> I think longer term we probably need a dedicated event service in
> OpenStack. A few of us actually had an informal conversation about this
> during the Nova notifications session to figure out if there was a way
> to optimize the Searchlight path. Nearly everyone wants websockets,
> which is good. The problem is, that means you've got to anticipate
> 10,000+ open websockets as soon as we expose this. Which means the stack
> to deliver that sanely isn't just a bit of python code, it's also the
> highly optimized server underneath.
as part of the StackTach integration efforts, Ceilometer (as of Juno) 
listens to all notifications in the OpenStack ecosystem and builds a 
normalised event model[2] from it. the normalised event data is stored 
in a backend (elasticsearch, sql, mongodb, hbase) and from this you can 
query based on required attributes. in addition to storing events, in 
Liberty, Aodh (alarming service) added support to take events and create 
alarms based on change of state[3] with expanded functionality to be 
added. this was added to handle the NFV use case but may also be 
relevant here as it seems like we want to have an action based on status 
changes.


i should mention that we discussed splitting out the event logic in 
Ceilometer to create a generic listener[4] service which could convert 
notification data to meters, events, and anything else. this isn't a 
high priority item but might be an integration point for those looking 
to leverage notifications in OpenStack.


[1] https://en.wikipedia.org/wiki/HTTP_ETag
[2] http://docs.openstack.org/admin-guide-cloud/telemetry-events.html
[3] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/event-alarm-evaluator.html

[4] https://etherpad.openstack.org/p/mitaka-telemetry-split

cheers,

--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-04 Thread Eichberger, German
It seems this will give us some more velocity, which is good!
+1

German

From: Gary Kotton <gkot...@vmware.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, November 4, 2015 at 5:24 AM
To: OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][stable] should we open gate for per 
sub-project stable-maint teams?



From: "mest...@mestery.com" <mest...@mestery.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, November 3, 2015 at 7:09 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][stable] should we open gate for per 
sub-project stable-maint teams?

On Tue, Nov 3, 2015 at 10:49 AM, Ihar Hrachyshka 
<ihrac...@redhat.com> wrote:

Hi all,

currently we have a single neutron-wide stable-maint gerrit group that
maintains all stable branches for all stadium subprojects. I believe
that in lots of cases it would be better to have subproject members run
their own stable maintenance programs, leaving neutron-stable-maint
folks to help them in non-obvious cases, and to periodically validate
that project-wide stable policies are still honored.

I suggest we open the gate to creating subproject stable-maint teams where
current neutron-stable-maint members feel those subprojects are ready
for that and can be trusted to apply stable branch policies in a
consistent way.

Note that I don't suggest we grant those new permissions completely
automatically. If the neutron-stable-maint team does not feel safe
giving out those permissions for some stable branches, that feeling
should be respected.

I believe it will be beneficial both for subprojects, which would be
able to iterate on backports in a more efficient way, and for
neutron-stable-maint members, who are often busy with other stuff and
oftentimes are not the best candidates to validate the technical
validity of backports in random stadium projects anyway. It would also
be in line with the general 'open by default' attitude we seem to
embrace in Neutron.

If we decide it's the way to go, there are alternatives on how we
implement it. For example, we can grant those subproject teams all
permissions to merge patches; or we can leave +W votes to
neutron-stable-maint group.

I vote for opening the gates, *and* for granting +W votes where
projects showed reasonable quality of proposed backports before; and
leaving +W to neutron-stable-maint in those rare cases where history
showed backports could get more attention and safety considerations
[with expectation that those subprojects will eventually own +W votes
as well, once quality concerns are cleared].

If we indeed decide to bootstrap subproject stable-maint teams, I
volunteer to reach the candidate teams for them to decide on initial
lists of stable-maint members, and walk them thru stable policies.

Comments?


As someone who spends a considerable amount of time reviewing stable backports 
on a regular basis across all the sub-projects, I'm in favor of this approach. 
I'd like to be included when selecting which teams are appropriate to have 
their own stable teams.

+1


Thanks,
Kyle

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread gord chung

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey [2], we found that most 
operators *never* update configuration files after the initial setup, and 
those who do update them very rarely (monthly at most). the question 
related to Ceilometer and its pipeline configuration file, so the results 
might be specific to Ceilometer. I think you should definitely query 
operators before undertaking any work. the last thing you want to do is 
implement a feature no one really needs/wants.


[1] 
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2] 
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I wasn't at the Tokyo summit, but I know that there was
a discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services, 


https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and
their implementation details.

Some notes taken from the etherpad, with my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understand it, the 'mutable' parameter must indicate whether a
service contains code responsible for reloading this option or not.
And this parameter should be one of the arguments of cfg.Opt constructor.
Problems:
1. Library options.
SSL options ca_file, cert_file, key_file taken from the oslo.service
library could be reloaded in nova-api, so these options should be
mutable. But for projects that don't need SSL support, reloading SSL
options doesn't make sense; for such projects these options should be
non-mutable. The problem is that oslo.service is a single library, and
many different projects use it in different ways.
The same options could be mutable or non-mutable in different contexts.
2. Support of config options on some platforms.
The "mutable" parameter could be different for different platforms. Some
options make sense only for specific platforms. If we mark such options as
mutable, it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
OpenStack API services.
If the value of the 'workers' option is greater than 1, an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is
created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates its children, and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals 1, an instance of
ServiceLauncher is created.
ServiceLauncher starts everything in a single process, and in that case we
don't have such implicit reloading.

I think that option mutability is a complicated feature, and adding a
'mutable' parameter to the cfg.Opt constructor could just add mess.
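For concreteness, the flag under debate would look something like this. This is a hypothetical sketch: cfg.Opt is stood in by a tiny class, and whether oslo.config actually grows this argument is exactly what is being discussed above.

```python
# Hypothetical sketch of a per-option 'mutable' flag. The Opt class here
# is a stand-in for oslo.config's cfg.Opt, not the real implementation.

class Opt:
    def __init__(self, name, default=None, mutable=False):
        self.name = name
        self.default = default
        self.mutable = mutable  # True => the service has reload code for it

opts = [
    Opt("debug", default=False, mutable=True),   # safe to flip at runtime
    Opt("workers", default=1, mutable=False),    # changes process topology
]

reloadable = [o.name for o in opts if o.mutable]
print(reloadable)  # -> ['debug']
```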


2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view, every service should register a list of hooks to
reload config options. oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order.
Discussion of such implementation was started in ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.
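The hook-registry approach sketched above could look roughly like this (names are illustrative, not the actual oslo.service API; the reviews linked above are the real proposals):

```python
import signal

# Sketch: services register reload callbacks, and a single SIGHUP handler
# runs them one by one in registration order.

_reload_hooks = []

def register_reload_hook(hook):
    _reload_hooks.append(hook)

def _on_sighup(signum, frame):
    for hook in _reload_hooks:   # called in a defined (registration) order
        hook()

signal.signal(signal.SIGHUP, _on_sighup)

register_reload_hook(lambda: print("re-reading config files"))
register_reload_hook(lambda: print("re-opening log files"))

# Simulate `kill -HUP <pid>` against the current process:
signal.raise_signal(signal.SIGHUP)
```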

3. "oslo.config is responsible for logging changes which were ignored on
SIGHUP"
-
Some config options can be changed via the API (for example quotas); that's
why oslo.config doesn't know the actual configuration of the service and
can't log configuration changes.

Regards, Marian Horban


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] mitaka release schedule

2015-11-04 Thread Doug Hellmann
PTLs and release liaisons,

The mitaka release schedule is in the wiki at
https://wiki.openstack.org/wiki/Mitaka_Release_Schedule

Please note that there are only 5 weeks between Feature Freeze and
the final release, instead of the usual 6. This means we have more
time before the freeze for feature development, but it also means
that we need to be more strict about limiting Feature Freeze
Exceptions (FFEs) this cycle than we were for Liberty because we
will have less time to finish them and fix release-blocking bugs.

The Feature Freeze date for the Mitaka3 milestone is March 3, with
FFEs to be completed by March 11 so we can produce initial Release
Candidates (RCs) by March 18.

Remember that non-client libraries should freeze a week earlier
around February 26 and client libraries should freeze with the
services on March 3. If you have service work that will require
client library work, the service work will need to land early enough
to allow the new features in the client to land by the final deadline.

If any of the freeze conditions or dates are not clear, please ask
questions via a follow-up on this thread so everyone can benefit
from the answers.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-04 Thread Murray, Paul (HP Cloud)


> From: Jay Pipes [mailto:jaypi...@gmail.com]
> On 10/27/2015 01:16 PM, Chris Friesen wrote:
> > On 10/26/2015 06:02 PM, Jay Pipes wrote:
> >
> >> I believe strongly that we should deprecate the existing migrate,
> >> resize, and live-migrate APIs in favor of a single consolidated,
> >> consistent "move"
> >> REST API
> >> that would have the following characteristics:
> >>
> >> * No manual or wait-input states in any FSM graph
> >
> > Sounds good.
> >
> >> * Removal of the term "resize" from the API entirely (the target
> >> resource sizing is an attribute of the move operation, not a
> >> different type of API operation in and of itself)
> >
> > I disagree on this one.
> >
> > As an end-user, if my goal is to take an existing instance and give it
> > bigger disks, my first instinct isn't going to be to look at the "move"
> > operation.  I'm going to look for "scale", or "resize", or something
> > like that.
> >
> > And if an admin wants to migrate an instance away from its current
> > host, why would they want to change its disk size in the process?
> 
> A fair point. However, I think that a generic update VM API, which would
> allow changes to the resources consumed by the VM along with capabilities
> like CPU model or local disk performance (SSD) is a better way to handle this
> than a resize-specific API.


Sorry I am so late to this - but this stuck out for me. 

Resize is an operation that a cloud user would do to his VM. Usually the
cloud user does not know what host the VM is running on so a resize does 
not appear to be a move at all.

Migrate is an operation that a cloud operator does to a VM that is not normally
available to a cloud user. A cloud operator does not change the VM because 
the operator just provides what the user asked for. He only chooses where he is 
going to put it.

It seems clear to me that resize and migrate are very definitely different
things, even if they are implemented using the same code path internally for
convenience. At the very least I believe they need to be kept separate at the
API so we can apply different policy to control access to them.



> 
> So, in other words, I'd support this:
> 
> PATCH /servers/
> 
> with some corresponding request payload that would indicate the required
> changes.
> 
> > I do think it makes sense to combine the external APIs for live and
> > cold migration.  Those two are fundamentally similar, logically
> > separated only by whether the instance stays running or not.
> >
> > And I'm perfectly fine with having the internal implementation of all
> > three share a code path, I just don't think it makes sense for the
> > *external* API.
> 
> I think you meant to say you don't think it makes sense to have three
> separate external APIs for what is fundamentally the same operation (move
> a VM), right?
> 
> Best,
> -jay
> 
> >> * Transition to a task-based API for poll-state requests. This means
> >> that in order for a caller to determine the state of a VM the caller
> >> would call something like GET /servers//tasks/ in order
> >> to see the history of state changes or subtask operations for a
> >> particular request to move a VM
> >
> > Sounds good.
> >
> > Chris
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Robert Collins
On 5 November 2015 at 06:21, Morgan Fainberg  wrote:
>
...
>
> If it is that easy, what a fantastic win in speeding things up!

It'll help, but skew as new things are released - so e.g. after a
release of numpy until the next image builds and is enabled
successfully.

We could have a network mirror with prebuilt wheels with some care,
but that's a bit more work. The upside is we could be refreshing it
hourly or so without a multi-GB upload.

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Match type checking from oslo.config.

2015-11-04 Thread Sofer Athlan-Guyot
Hunter Haugen  writes:

> I have some code that is similar to this in the F5 and Netscaler
> modules. I make a generic "truthy" property that accepts various
> truthy/falsy values
> (https://github.com/puppetlabs/puppetlabs-netscaler/blob/master/lib/puppet/property/netscaler_
> truthy.rb) then just define that as the parent of the property
> (https://github.com/puppetlabs/puppetlabs-netscaler/blob/master/lib/puppet/type/netscaler_
> csvserver.rb#L73-L75)

Oh! I didn't know that a property could have a parent class defined.
This is nice. Does it also work for parameters?

The NetScalerTruthy is more or less what would be needed for truthy stuff.

On my side I came up with this solution (for different stuff, but the
same principle could be used here as well):

https://review.openstack.org/#/c/238954/10/lib/puppet_x/keystone/type/read_only.rb

And I call it like that:

  newproperty(:id) do
include PuppetX::Keystone::Type::ReadOnly
  end

I was thinking of extending this scheme to have needed types (Boolean,
...):

  newproperty(:truth) do
include PuppetX::Openstack::Type::Boolean
  end

Your solution in NetScalerTruthy is nice and integrated with Puppet, but
requires a function call.

My "solution" requires no function call unless you have to pass
parameters. If you do, the interface I used is a preset function. Here is
an example:

https://review.openstack.org/#/c/239434/8/lib/puppet_x/keystone/type/required.rb

and you use it like this:

  newparam(:type) do
isnamevar
def required_custom_message
  'Not specifying type parameter in Keystone_endpoint is a bug. ' \
'See bug https://bugs.launchpad.net/puppet-keystone/+bug/1506996 ' \
"and https://review.openstack.org/#/c/238954/ for more information.\n"
end
include PuppetX::Keystone::Type::Required
  end

So, provided a parameter can have a parent, both solutions could be
used.  Which one should it be:
 - one solution (NetScalerTruthy) is based on inheritance, mine on composition;
 - you have a function call to make with NetScalerTruthy no matter what;
 - you have to define a function to pass parameters with my solution (but
   that shouldn't be required very often).

I tend to prefer my resulting syntax, but that's really me ... I may be
biased.

What do you think?
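For reference, the truth-value semantics that a Puppet Boolean property would need to mirror — per the TRUE_VALUES/FALSE_VALUES lists from oslo.config quoted below — boil down to something like this (a Python sketch; the to_bool function name is ours, not oslo.config's):

```python
# Sketch of the boolean matching from oslo.config's types module, using the
# accepted-value lists quoted in this thread.

TRUE_VALUES = ['true', '1', 'on', 'yes']
FALSE_VALUES = ['false', '0', 'off', 'no']

def to_bool(value):
    v = str(value).strip().lower()
    if v in TRUE_VALUES:
        return True
    if v in FALSE_VALUES:
        return False
    raise ValueError("invalid boolean value: %r" % (value,))

print(to_bool("Yes"))   # -> True
print(to_bool("off"))   # -> False
```

So a regex like /[tT]rue|[fF]alse/ would reject '1', 'on', 'yes', '0', 'off', and 'no', all of which oslo.config accepts.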

>
> On Mon, Nov 2, 2015 at 12:06 PM Cody Herriges 
> wrote:
>
> Sofer Athlan-Guyot wrote:
> > Hi,
> >
> > The idea would be to have some of the types defined oslo config
> >
> 
> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/types.
> py
> > ported to a puppet type. Those that look like good candidates
> are:
> > - Boolean;
> > - IPAddress;
> > and to a lesser extent:
> > - Integer;
> > - Float;
> >
> > For instance in puppet type requiring a Boolean, we may test
> > "/[tT]rue|[fF]alse/", but the real thing is :
> >
> > TRUE_VALUES = ['true', '1', 'on', 'yes']
> > FALSE_VALUES = ['false', '0', 'off', 'no']
> >
> 
> Good idea. I'd only add that we should convert 'true' and 'false'
> to
> real booleans for Puppet's purposes since the Puppet language is
> now typed.
> 
> --
> Cody
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Sofer Athlan-Guyot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Sean Dague
On 11/04/2015 01:26 PM, Robert Collins wrote:
> On 5 November 2015 at 06:21, Morgan Fainberg  
> wrote:
>>
> ...
>>
>> If it is that easy, what a fantastic win in speeding things up!
> 
> It'll help, but skew as new things are released - so e.g. after a
> release of numpy until the next image builds and is enabled
> successfully.
> 
> We could have a network mirror with prebuilt wheels with some care,
> but thats a bit more work. The upside is we could be refreshing it
> hourly or so without a multi-GB upload.

It will only really skew when upper-constraints.txt gets updated on a
branch.

I honestly think it's ok to not be perfect here. In the base case we'll
speed up a good chunk, and we'll be slower (though not as slow as today)
for a day after we bump upper-constraints for something expensive (like
numpy). It seems like a reasonable trade off for not much complexity.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-04 Thread Jonathan D. Proulx
On Wed, Nov 04, 2015 at 06:17:17PM +, Murray, Paul (HP Cloud) wrote:
:> From: Jay Pipes [mailto:jaypi...@gmail.com]
:> A fair point. However, I think that a generic update VM API, which would
:> allow changes to the resources consumed by the VM along with capabiities
:> like CPU model or local disk performance (SSD) is a better way to handle this
:> than a resize-specific API.
:
:
:Sorry I am so late to this - but this stuck out for me. 
:
:Resize is an operation that a cloud user would do to his VM. Usually the
:cloud user does not know what host the VM is running on so a resize does 
:not appear to be a move at all.
:
:Migrate is an operation that a cloud operator does to a VM that is not normally
:available to a cloud user. A cloud operator does not change the VM because 
:the operator just provides what the user asked for. He only choses where he is 
:going to put it.
:
:It seems clear to me that resize and migrate are very definitely different 
things,
:even if they are implemented using the same code path internally for 
convenience.
:At the very least I believe they need to be kept separate at the API so we can 
apply
:different policy to control access to them.

As an operator I'm with Paul on this.

By all means use the same code path because behind the scenes it *is*
the same thing.

BUT, at the API level we do need the distinction, particularly for access
control policy. The UX 'findability' is important too, but if that
were the only issue, a bit of syntactic sugar in the UI could take care
of it.

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Match type checking from oslo.config.

2015-11-04 Thread Hunter Haugen
> Ouha!  I didn't know that property could have parent class defined.
> This is nice.  Does it work also for parameter ?

I haven't tried, but property is just a subclass of parameter so
truthy could probably be made a parameter then become a parent of
either a property or a parameter.

>
> The NetScalerTruthy is more or less what would be needed for thruthy stuff.
>
> On my side I came up with this solution (for different stuff, but the
> same principle could be used here as well):
>
> https://review.openstack.org/#/c/238954/10/lib/puppet_x/keystone/type/read_only.rb
>
> And I call it like that:
>
>   newproperty(:id) do
> include PuppetX::Keystone::Type::ReadOnly
>   end
>
> I was thinking of extending this scheme to have needed types (Boolean,
> ...):
>
>   newproperty(:truth) do
> include PuppetX::Openstack::Type::Boolean
>   end
>
> Your solution in NetScalerTruthy is nice, integrated with puppet, but
> require a function call.

The function call is to a) pass documentation inline (since I assume
every attribute has different documentation, so I didn't want to hardcode
it in the truthy class), and b) pass the default truthy/falsy values
that should be exposed to the provider (i.e., allow you to cast all
truthy values to `"enable"` and `"disable"` instead of only supporting
`true` and `false`).

The truthy class could obviously be implemented such that if no block
is passed to the attribute then the method is automatically called
with default values, then you wouldn't even need the `include` mixin.
>
> My "solution" require no function call unless you have to pass
> parameters. If you have to pass parameter, the interface I used is a
> preset function.  Here is an example:
>
> https://review.openstack.org/#/c/239434/8/lib/puppet_x/keystone/type/required.rb
>
> and you use it like this:
>
>   newparam(:type) do
> isnamevar
> def required_custom_message
>   'Not specifying type parameter in Keystone_endpoint is a bug. ' \
> 'See bug https://bugs.launchpad.net/puppet-keystone/+bug/1506996 '
> \
> "and https://review.openstack.org/#/c/238954/ for more
> information.\n"
> end
> include PuppetX::Keystone::Type::Required
>   end
>
> So, modulo you can have parameter with parent, both solutions could be
> used.  Which one will it be:
>  - one solution (NetScalerTruthy) is based on inheritance, mine on
> composition.
>  - you have a function call to make with NetScalerTruthy no matter what;
>  - you have to define function to pass parameter with my solution (but
>that shouldn't be required very often)
>
> I tend to prefer my resulting syntax, but that's really me ... I may be
> biased.
>
> What do you think ?
>
>>
>> On Mon, Nov 2, 2015 at 12:06 PM Cody Herriges 
>> wrote:
>>
>> Sofer Athlan-Guyot wrote:
>> > Hi,
>> >
>> > The idea would be to have some of the types defined oslo config
>> >
>>
>> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/types.
>> py
>> > ported to a puppet type. Those that look like good candidates
>> are:
>> > - Boolean;
>> > - IPAddress;
>> > and to a lesser extent:
>> > - Integer;
>> > - Float;
>> >
>> > For instance in puppet type requiring a Boolean, we may test
>> > "/[tT]rue|[fF]alse/", but the real thing is :
>> >
>> > TRUE_VALUES = ['true', '1', 'on', 'yes']
>> > FALSE_VALUES = ['false', '0', 'off', 'no']
>> >
>>
>> Good idea. I'd only add that we should convert 'true' and 'false'
>> to
>> real booleans for Puppet's purposes since the Puppet language is
>> now typed.
>>
>> --
>> Cody
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Sofer Athlan-Guyot
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 


-Hunter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Brad Topol

+1  That's an extremely good suggestion!!!

--Brad


Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Sean McGinnis 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   11/04/2015 10:36 AM
Subject:Re: [openstack-dev] Troubleshooting cross-project comms



On Wed, Nov 04, 2015 at 07:30:51AM -0500, Sean Dague wrote:
> On 10/28/2015 07:15 PM, Anne Gentle wrote:
>
> Has anyone considered using #openstack-dev, instead of a new meeting
> room? #openstack-dev is mostly a ghost town at this point, and deciding
> that instead it would be the dedicated cross project space, including
> meetings support, might be interesting.
>
>-Sean

+1 - That makes a lot of sense to me.

>
> --
> Sean Dague
> http://dague.net
>
>
__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Troubleshooting cross-project comms

2015-11-04 Thread Lee Calcote
Especially given the pervasiveness of discussion topics. +1

Lee

> On Nov 4, 2015, at 12:44 PM, Brad Topol  wrote:
> 
> +1 That's an extremely good suggestion!!!
> 
> --Brad
> 
> 
> Brad Topol, Ph.D.
> IBM Distinguished Engineer
> OpenStack
> (919) 543-0646
> Internet: bto...@us.ibm.com
> Assistant: Kendra Witherspoon (919) 254-0680
> 
> 
> From: Sean McGinnis 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 11/04/2015 10:36 AM
> Subject: Re: [openstack-dev] Troubleshooting cross-project comms
> 
> 
> 
> 
> On Wed, Nov 04, 2015 at 07:30:51AM -0500, Sean Dague wrote:
> > On 10/28/2015 07:15 PM, Anne Gentle wrote:
> > 
> > Has anyone considered using #openstack-dev, instead of a new meeting
> > room? #openstack-dev is mostly a ghost town at this point, and deciding
> > that instead it would be the dedicated cross project space, including
> > meetings support, might be interesting.
> > 
> > -Sean
> 
> +1 - That makes a lot of sense to me.
> 
> > 
> > -- 
> > Sean Dague
> > http://dague.net 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> > 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][api][tc][perfromance] API for getting only status of resources

2015-11-04 Thread Robert Collins
On 5 November 2015 at 04:42, Sean Dague  wrote:
> On 11/04/2015 10:13 AM, John Garbutt wrote:

> I think longer term we probably need a dedicated event service in
> OpenStack. A few of us actually had an informal conversation about this
> during the Nova notifications session to figure out if there was a way
> to optimize the Searchlight path. Nearly everyone wants websockets,
> which is good. The problem is, that means you've got to anticipate
> 10,000+ open websockets as soon as we expose this. Which means the stack
> to deliver that sanely isn't just a bit of python code, it's also the
> highly optimized server underneath.

So any decent epoll implementation should let us hit that without a
super optimised server - eventlet being in that category. I totally
get that we're going to expect thundering herds, but websockets isn't
new and the stacks we have - apache, eventlet - have been around long
enough to adjust to the rather different scaling pattern.

So - let's not panic, get a proof of concept up somewhere, and then run
an actual baseline test. If that's shockingly bad *then* let's panic.
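A rough illustration of why epoll-style pollers change the math here — a stdlib Python sketch (selectors.DefaultSelector picks epoll on Linux) showing that a poll wakeup costs in proportion to the *ready* connections, not the registered ones:

```python
import selectors
import socket

# Register many idle in-process connections with the platform's best
# poller; this is what makes thousands of mostly-idle websockets cheap.
sel = selectors.DefaultSelector()
pairs = [socket.socketpair() for _ in range(400)]
for reader, _ in pairs:
    reader.setblocking(False)
    sel.register(reader, selectors.EVENT_READ)

for _, writer in pairs[:3]:   # only three connections actually send data
    writer.send(b"ping")

# Despite 400 registrations, the poller reports exactly the three ready ones.
ready = sel.select(timeout=0.1)
print(len(ready))  # -> 3

for reader, writer in pairs:
    reader.close()
    writer.close()
```

This is the mechanism eventlet and a tuned apache sit on top of, which is why 10,000 open-but-idle sockets need not imply 10,000 busy threads.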

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Robert Collins
On 5 November 2015 at 07:37, Sean Dague  wrote:

> It only really will screw when upper-constraints.txt gets updated on a
> branch.

Bah, yes.

> I honestly think it's ok to not be perfect here. In the base case we'll
> speed up a good chunk, and we'll be slower (though not as slow as today)
> for a day after we bump upper-constraints for something expensive (like
> numpy). It seems like a reasonable trade off for not much complexity.

Oh, I clearly wasn't clear. I think your patch is a good thing. I'm
highlighting the corner case and proposing a down-the-track way to
address it.

And the reason I'm doing that is that Clark has said that we have lots
and lots of trouble updating images, so I'm expecting the corner case
to be fairly common :/.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Gabriel Bezerra

On 04.11.2015 11:32, Jim Rollenhagen wrote:

On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
>Hi,
>
>The change in https://review.openstack.org/237122 touches a feature from
>ironic that has not been released in any tag yet.
>
>At first, we from the team who has written the patch thought that, as it
>has not been part of any release, we could do backwards incompatible
>changes on that part of the code. As it turned out from discussing with
>the community, ironic commits to keeping the master branch backwards
>compatible and a deprecation process is needed in that case.
>
>That stated, the question at hand is: How long should this deprecation
>process last?
>
>This spec specifies the deprecation policy we should follow:
>https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
>
>
>As from its excerpt below, the minimum obsolescence period must be
>max(next_release, 3 months).
>
>"""
>Based on that data, an obsolescence date will be set. At the very
>minimum the feature (or API, or configuration option) should be marked
>deprecated (and still be supported) in the next stable release branch,
>and for at least three months linear time. For example, a feature
>deprecated in November 2015 should still appear in the Mitaka release
>and stable/mitaka stable branch and cannot be removed before the
>beginning of the N development cycle in April 2016. A feature deprecated
>in March 2016 should still appear in the Mitaka release and
>stable/mitaka stable branch, and cannot be removed before June 2016.
>"""
>
>This spec, however, only covers released and/or tagged code.
>
>tl;dr:
>
>How should we proceed regarding code/features/configs/APIs that have not
>even been tagged yet?
>
>Isn't waiting for the next OpenStack release in this case too long?
>Otherwise, we are going to have features/configs/APIs/etc. that are
>deprecated from their very first tag/release.
>
>How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
>months? max(next_tag, 3 months)?

-1

The reason the wording is that way is because lots of people deploy
OpenStack services in a continuous deployment model, from the master source
branches (sometimes minus X number of commits, as these deployers run the
code through their test platforms).

Not everyone uses tagged releases, and OpenStack as a community has
committed (pun intended) to serving these continuous deployment 
scenarios.


Right, so I asked Gabriel to send this because it's an odd case, and I'd
like to clear up the governance doc on this, since it doesn't seem to
say much about code that was never released.

The rule is a cycle boundary *and* at least 3 months. However, in this
case, the code was never in a release at all, much less a stable
release. So looking at the two types of deployers:

1) CD from trunk: 3 months is fine, we do that, done.

2) Deploying stable releases: if we only wait three months and not a
cycle boundary, they'll never see it. If we do wait for a cycle
boundary, we're pushing deprecated code to them for (seemingly to me) no
benefit.

So, it makes sense to me to not introduce the cycle boundary thing in
this case. But there is value in keeping the rule simple, and if we want
this one to pass a cycle boundary to optimize for that, I'm okay with
that too. :)

(Side note: there's actually a third type of deployer for Ironic; one
that deploys intermediate releases. I think if we give them at least one
release and three months, they're okay, so the general standard
deprecation rule covers them.)

// jim


So, summarizing that:

* untagged/master: 3 months

* tagged/intermediate release: max(next tag/intermediate release, 3 months)

* stable release: max(next release, 3 months)

Is it correct?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow

gord chung wrote:

we actually had a solution implemented in Ceilometer to handle this[1].

that said, based on the results of our survey[2], we found that most
operators *never* update configuration files after the initial setup and
if they did it was very rarely (monthly updates). the question related
to Ceilometer and its pipeline configuration file so the results might
be specific to Ceilometer. I think you should definitely query operators
before undertaking any work. the last thing you want to do is implement
a feature no one really needs/wants.

[1]
http://specs.openstack.org/openstack/ceilometer-specs/specs/liberty/reload-file-based-pipeline-configuration.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075628.html


So my general thought on the above is yes, definitely consult operators 
to see if they would use this, although if a feature doesn't exist and 
has never existed (say, outside of Ceilometer) then it's sort of hard to 
get an accurate survey result from a group of people who have never had 
the feature in the first place... Either way it should be done, just to 
get more knowledge...


I know operators (at Yahoo!) want to be able to dynamically change the 
logging level, and that's not a monthly task, but more of an 'as-needed' 
one that would be very helpful when things start going badly... So 
perhaps the set of reloadable configuration options should start out small 
and not encompass all the things...
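The log-level case is small enough to sketch end to end: a SIGHUP handler that re-applies a desired level to a live logger. (The level is hardcoded here for illustration; a real service would re-read it from its config file.)

```python
import logging
import signal

# Sketch of "bump the log level at runtime": catch SIGHUP and re-apply
# the configured level without restarting the service.

LOG = logging.getLogger("myservice")
LOG.setLevel(logging.WARNING)

def _reload_log_level(signum, frame):
    # Pretend the config file now says DEBUG; a real handler would
    # re-parse the file here.
    LOG.setLevel(logging.DEBUG)

signal.signal(signal.SIGHUP, _reload_log_level)

print(LOG.isEnabledFor(logging.DEBUG))  # -> False
signal.raise_signal(signal.SIGHUP)      # simulate `kill -HUP <pid>`
print(LOG.isEnabledFor(logging.DEBUG))  # -> True
```

Starting the reloadable set with just the logging options, as suggested above, keeps the tricky cases (like 'workers') out of scope entirely.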




On 04/11/2015 10:00 AM, Marian Horban wrote:

Hi guys,

Unfortunately I haven't been on Tokio summit but I know that there was
discussion about dynamic reloading of configuration.
Etherpad refs:
https://etherpad.openstack.org/p/mitaka-cross-project-dynamic-config-services,

https://etherpad.openstack.org/p/mitaka-oslo-security-logging

In this thread I want to discuss the agreements reached at the summit and
the implementation details.

Some notes taken from etherpad and my remarks:

1. "Adding "mutable" parameter for each option."
"Do we have an option mutable=True on CfgOpt? Yes"
-
As I understand it, the 'mutable' parameter must indicate whether the
service contains code responsible for reloading this option or not,
and this parameter should be one of the arguments of the cfg.Opt constructor.
Problems:
1. Library options.
SSL options ca_file, cert_file, key_file taken from the oslo.service library
could be reloaded in nova-api, so these options should be mutable...
But for projects that don't need SSL support, reloading of SSL options
doesn't make sense; for such projects this option should be non-mutable.
The problem is that oslo.service is a single library, and there are many
different projects which use it in different ways.
The same options could be mutable and non-mutable in different contexts.
2. Support of config options on some platforms.
The 'mutable' parameter could differ between platforms. Some options
make sense only on specific platforms. If we mark such options as mutable,
it could be misleading on some platforms.
3. Dependency of options.
There are many 'workers' options (osapi_compute_workers, ec2_workers,
metadata_workers, workers). These options specify the number of workers for
OpenStack API services.
If the value of the 'workers' option is greater than 1, an instance of
ProcessLauncher is created; otherwise an instance of ServiceLauncher is
created.
When ProcessLauncher receives SIGHUP it reloads its own configuration,
gracefully terminates children and respawns new children.
This mechanism allows many config options to be reloaded implicitly.
But if the value of the 'workers' option equals 1, an instance of
ServiceLauncher is created.
ServiceLauncher starts everything in a single process, and in this case we
don't have such implicit reloading.
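The launcher selection described above can be paraphrased roughly as follows (stand-in class names; the real selection logic lives in oslo.service):

```python
class ServiceLauncher:
    """Runs everything in a single process; no implicit reload via
    child respawn on SIGHUP."""

class ProcessLauncher:
    """Forks worker children; on SIGHUP it reloads its own config,
    gracefully terminates the children and respawns them."""

def pick_launcher(workers):
    # More than one worker -> ProcessLauncher (implicit reload via
    # respawn); otherwise ServiceLauncher (no such mechanism).
    if workers and workers > 1:
        return ProcessLauncher
    return ServiceLauncher
```

This is why the reload behaviour ends up depending on the value of 'workers': the same SIGHUP means two different things depending on which launcher was picked.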

I think that mutability of options is a complicated feature, and adding a
'mutable' parameter to the cfg.Opt constructor could just add mess.

2. "oslo.service catches SIGHUP and calls oslo.config"
-
From my point of view every service should register a list of hooks to
reload config options. oslo.service should catch SIGHUP and call the
registered hooks one by one in a specified order.
Discussion of such an implementation was started on the ML:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074558.html.
Raw reviews:
https://review.openstack.org/#/c/228892/,
https://review.openstack.org/#/c/223668/.

3. "oslo.config is responsible to log changes which were ignored on
SIGHUP"
-
Some config options can be changed via the API (for example, quotas);
that's why oslo.config doesn't know the actual configuration of a service
and can't log changes to that configuration.

Regards, Marian Horban



Re: [openstack-dev] [oslo][oslo.config][oslo.service] Dynamic Reconfiguration of OpenStack Services

2015-11-04 Thread Joshua Harlow
Along this line, things like the following are likely more changeable 
(and my guess is operators would want to change them when things start 
going badly), for example from a nova.conf that I have lying around...


[DEFAULT]

rabbit_hosts=...
rpc_response_timeout=...
default_notification_level=...
default_log_levels=...

[glance]

api_servers=...

(and more)

Some of those, I think, should have higher priority as being 
reconfigurable, but operators should be asked what they think would be 
useful, and those options prioritized.


Some of those really are service discovery 'types' (rabbit_hosts, 
glance/api_servers, keystone/api_servers), but fixing this is likely a 
longer-term goal (see conversations in keystone).



[openstack-dev] Cinder mid-cycle planning survey

2015-11-04 Thread Duncan Thomas
Hi Folks

The Cinder team is trying to plan our mid-cycle meetup again.

Can anybody interested in attending please fill out this quick survey to
help with planning?

https://www.surveymonkey.com/r/Q5FZX68

Closing date is 11th November.

Thanks
-- 
-- 
Duncan Thomas


Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Jim Rollenhagen
On Wed, Nov 04, 2015 at 04:08:18PM -0300, Gabriel Bezerra wrote:
> Em 04.11.2015 11:32, Jim Rollenhagen escreveu:
> >On Wed, Nov 04, 2015 at 08:44:36AM -0500, Jay Pipes wrote:
> >On 11/03/2015 11:40 PM, Gabriel Bezerra wrote:
> >>Hi,
> >>
> >>The change in https://review.openstack.org/237122 touches a feature from
> >>ironic that has not been released in any tag yet.
> >>
> >>At first, we, the team who wrote the patch, thought that, as it
> >>has not been part of any release, we could make backwards-incompatible
> >>changes to that part of the code. As it turned out from discussing with
> >>the community, ironic commits to keeping the master branch backwards
> >>compatible, and a deprecation process is needed in that case.
> >>
> >>That stated, the question at hand is: How long should this deprecation
> >>process last?
> >>
> >>This spec specifies the deprecation policy we should follow:
> >>https://github.com/openstack/governance/blob/master/reference/tags/assert_follows-standard-deprecation.rst
> >>
> >>
> >>As from its excerpt below, the minimum obsolescence period must be
> >>max(next_release, 3 months).
> >>
> >>"""
> >>Based on that data, an obsolescence date will be set. At the very
> >>minimum the feature (or API, or configuration option) should be marked
> >>deprecated (and still be supported) in the next stable release branch,
> >>and for at least three months linear time. For example, a feature
> >>deprecated in November 2015 should still appear in the Mitaka release
> >>and stable/mitaka stable branch and cannot be removed before the
> >>beginning of the N development cycle in April 2016. A feature deprecated
> >>in March 2016 should still appear in the Mitaka release and
> >>stable/mitaka stable branch, and cannot be removed before June 2016.
> >>"""
> >>
> >>This spec, however, only covers released and/or tagged code.
> >>
> >>tl;dr:
> >>
> >>How should we proceed regarding code/features/configs/APIs that have not
> >>even been tagged yet?
> >>
> >>Isn't waiting for the next OpenStack release in this case too long?
> >>Otherwise, we are going to have features/configs/APIs/etc. that are
> >>deprecated from their very first tag/release.
> >>
> >>How about sticking to min(next_release, 3 months)? Or next_tag? Or 3
> >>months? max(next_tag, 3 months)?
> >
> >-1
> >
> >The reason the wording is that way is because lots of people deploy
> >OpenStack services in a continuous deployment model, from the master
> >source
> >branches (sometimes minus X number of commits as these deployers run the
> >code through their test platforms).
> >
> >Not everyone uses tagged releases, and OpenStack as a community has
> >committed (pun intended) to serving these continuous deployment scenarios.
> >
> >Right, so I asked Gabriel to send this because it's an odd case, and I'd
> >like to clear up the governance doc on this, since it doesn't seem to
> >say much about code that was never released.
> >
> >The rule is a cycle boundary *and* at least 3 months. However, in this
> >case, the code was never in a release at all, much less a stable
> >release. So looking at the two types of deployers:
> >
> >1) CD from trunk: 3 months is fine, we do that, done.
> >
> >2) Deploying stable releases: if we only wait three months and not a
> >cycle boundary, they'll never see it. If we do wait for a cycle
> >boundary, we're pushing deprecated code to them for (seemingly to me) no
> >benefit.
> >
> >So, it makes sense to me to not introduce the cycle boundary thing in
> >this case. But there is value in keeping the rule simple, and if we want
> >this one to pass a cycle boundary to optimize for that, I'm okay with
> >that too. :)
> >
> >(Side note: there's actually a third type of deployer for Ironic; one
> >that deploys intermediate releases. I think if we give them at least one
> >release and three months, they're okay, so the general standard
> >deprecation rule covers them.)
> >
> >// jim
> 
> So, summarizing that:
> 
> * untagged/master: 3 months
> 
> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
> 
> * stable release: max(next release, 3 months)
> 
> Is it correct?

No, my proposal is that, but s/max/AND/.

This also needs buyoff from other folks in the community, and an update
to the document in the governance repo which requires TC approval.

For now we must assume a cycle boundary and three months, and/or hold off on
the patch until this is decided.

// jim
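Jim's "cycle boundary AND three months" rule amounts to taking whichever of the two dates is later; a small sketch, approximating "three months" as 90 days (the spec speaks in calendar months, so this is an illustration, not the governance wording):

```python
from datetime import date, timedelta

def earliest_removal(deprecated_on, next_cycle_start):
    """Removal may happen only once BOTH conditions hold:
    the next cycle boundary has passed AND >= ~3 months have elapsed.
    That is simply the later of the two dates."""
    three_months_out = deprecated_on + timedelta(days=90)
    return max(three_months_out, next_cycle_start)
```

With the spec's own example: a feature deprecated in November 2015 can't go before the N cycle opens in April 2016, while one deprecated in March 2016 must wait until roughly June 2016 even though the cycle boundary has already passed.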



Re: [openstack-dev] [all][ironic] How to proceed about deprecation of untagged code?

2015-11-04 Thread Sean Dague
On 11/04/2015 02:42 PM, Jim Rollenhagen wrote:
>> So, summarizing that:
>>
>> * untagged/master: 3 months
>>
>> * tagged/intermediate release: max(next tag/intermediate release, 3 months)
>>
>> * stable release: max(next release, 3 months)
>>
>> Is it correct?
> 
> No, my proposal is that, but s/max/AND/.
> 
> This also needs buyoff from other folks in the community, and an update
> to the document in the governance repo which requires TC approval.
> 
> For now we must assume a cycle boundary and three months, and/or hold off on
> the patch until this is decided.

The AND version of this seems to respect the spirit of the original
intent. The 3-month window was designed to push back a little on
last-minute deprecations made just before a release and then deleted the
second master opened, which looked very different for stable release vs.
CD-consuming folks.

The intermediate release or no-release model just wasn't considered
initially.

-Sean

-- 
Sean Dague
http://dague.net

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-04 Thread Ed Leafe
On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
> 
> Here's a Devstack review for zookeeper in support of this initiative:
> 
> https://review.openstack.org/241040
> 
> Thanks,
> Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe









Re: [openstack-dev] [All][Glance] Feedback on the proposed refactor to the image import process required

2015-11-04 Thread Brian Rosmaita
Thanks to everyone who has commented on the spec and/or participated in
the discussions at the summit last week.

I've uploaded a new patch set that describes my interpretation of the
image import workflow and API calls that were discussed.

Please take a look and leave comments.

--
cheers,
brian


On 10/20/15, 1:06 PM, "Brian Rosmaita" 
wrote:

>Hello,
>
>I've updated the image import spec [4] to incorporate the discussion thus
>far.
>
>The fishbowl session [5] is scheduled for Thursday, October 29,
>2:40pm-3:20pm.
>
>If you read through the spec and the current discussion on the review,
>you'll be in a good position to help us get this worked out during the
>summit.
>
>--
>cheers,
>brian 
>
>On 10/9/15, 3:39 AM, "Flavio Percoco"  wrote:
>
>>Greetings,
>>
>>There was recently a discussion[0] on the mailing list, started by Doug
>>Hellman, to discuss some issues related to Glance's API, the conflicts
>>between v1 and v2 and how this is making some pandas sad.
>>
>>The above served as a starting point for a discussion around the
current API, how it can be improved, etc. These discussions happened on
IRC[1], on a call (sorry, I forgot to record this call, this is entirely
>>my fault) and on an etherpad[2]. Later on, Brian Rosmaita summarized
>>all this in a document[3], which became a spec[4]. :D
>>
>>The spec is the central point of discussion now and it contains a more
>>structured, more organized and more concrete proposal that needs to be
>>discussed. Nevertheless, I believe there's still lot to do there and I
>>also believe - I'm sure others do as well - this spec could use
>>opinions from a broader audience. Therefore, I'd really appreciate
>>your opinion on this thread.
>>
>>This will also be discussed at the summit[5] in a fishbowl session and
>>I hope to see you all there as well.
>>
>>I'd like to thank everyone that has participated in this discussion so
>>far and I hope to see others chime in as well.
>>
>>Flavio
>>
>>[0] 
>>http://lists.openstack.org/pipermail/openstack-dev/2015-September/074360.html
>>[1] 
>>http://eavesdrop.openstack.org/irclogs/%23openstack-glance/%23openstack-glance.2015-09-22.log.html#t2015-09-22T14:31:00
>>[2] https://etherpad.openstack.org/p/glance-upload-mechanism-reloaded
>>[3] 
>>https://docs.google.com/document/d/1_mQZlUN_AtqhH6qh3ANz-m1zCOYkp1GyxndLtYMFRb0
>>[4] https://review.openstack.org/#/c/232371/
>>[5] 
>>http://mitakadesignsummit.sched.org/event/398b1f44af7a4ae3dde9cb47d4d52d9a
>>
>>-- 
>>@flaper87
>>Flavio Percoco
>
>


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-04 Thread Javeria Khan
Thanks Igor, Alex. I guess there isn't any support for running tasks
directly on the Fuel master node for now.

I did try moving to deployment_tasks.yaml, but it leads to other issues,
such as "/etc/fuel/plugins// does not exist" failing on
deployments.

I'm trying to move back to using the former tasks.yaml, but
fuel-plugin-builder keeps looking for deployment_tasks.yaml now. Is there
some build source list I can remove?


--
Javeria

On Wed, Nov 4, 2015 at 12:44 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> please note that such tasks are executed inside 'mcollective' docker
> container, not on the Fuel master host system.
>
> Regards,
> Alex
>
> On Tue, Nov 3, 2015 at 10:41 PM, Igor Kalnitsky 
> wrote:
>
>> Hi Javeria,
>>
>> Try to use 'master' in 'role' field. Example:
>>
>> - role: 'master'
>>   stage: pre_deployment
>>   type: shell
>>   parameters:
>>     cmd: echo all > /tmp/plugin.all
>>     timeout: 42
>>
>> Let me know if you need additional help.
>>
>> Thanks,
>> Igor
>>
>> P.S: Since Fuel 7.0 it's recommended to use deployment_tasks.yaml
>> instead of tasks.yaml. Please see Fuel Plugins wiki page for details.
>>
>> On Tue, Nov 3, 2015 at 10:26 PM, Javeria Khan 
>> wrote:
>> > Hey everyone,
>> >
>> > I've been working on a fuel plugin and for some reason just cant figure
>> out
>> > how to run a task on the fuel master node through the tasks.yaml. Is
>> there
>> > even a role for it?
>> >
>> > Something similar to what ansible does with localhost would work.
>> >
>> > Thanks,
>> > Javeria
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [requirements] [infra] speeding up gate runs?

2015-11-04 Thread Clark Boylan
On Wed, Nov 4, 2015, at 09:14 AM, Sean Dague wrote:
> On 11/04/2015 12:10 PM, Jeremy Stanley wrote:
> > On 2015-11-04 08:43:27 -0600 (-0600), Matthew Thode wrote:
> >> On 11/04/2015 06:47 AM, Sean Dague wrote:
> > [...]
> >>> Is there a nodepool cache strategy where we could pre build these? A 25%
> >>> performance win comes out the other side if there is a strategy here.
> >>
> >> python wheel repo could help maybe?
> > 
> > That's along the lines of how I expect we'd need to solve it.
> > Basically add a new DIB element to openstack-infra/project-config in
> > nodepool/elements (or extend the cache-devstack element already
> > there) to figure out which version(s) it needs to prebuild and then
> > populate a wheelhouse which can be leveraged by the jobs running on
> > the resulting diskimage. The test scripts in the
> > openstack/requirements repo may already have much of this logic
> > implemented for the purpose of testing that we can build sane wheels
> > of all our requirements.
> > 
> > This of course misses situations where the requirements change and
> > the diskimages haven't been rebuilt or in jobs testing proposed
> > changes which explicitly alter these requirements, but could be
> > augmented by similar mechanisms in devstack itself to avoid building
> > them more than once.
> 
> Ok, so given that pip automatically builds a local wheel cache now when
> it installs this... is it as simple as
> https://review.openstack.org/#/c/241692/ ?
It is not that simple, and this change will probably need to be reverted.
We don't install the build deps for these packages during the dib run;
we only add them to the appropriate apt/yum caches. This means that the
image builds will start to fail, because lxml won't find libxml2-dev and
whatever other header packages it needs in order to link against the
appropriate libs.

The issue here is that we do our best to force devstack to do the work at
run time, to make sure that devstack-gate or our images aren't masking some
bug or becoming a required part of the devstack process. This means that
none of these packages are installed, and they won't be available to the
pip install.

We have already had to revert a similar change in the past, and at the
time the basic agreement was that we should go back to building wheel
package mirrors that jobs could take advantage of. That work floundered due
to a lack of reviews, but I still think it is the correct way to solve this
problem. The basic idea is to have periodic jobs build a
distro/arch/release-specific wheel cache, then rsync that over to all our
pypi mirrors for use by the jobs.

Clark
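The periodic wheel-cache job described above might compose its commands roughly like this (paths, the mirror host, and the requirements file name are all hypothetical — this only illustrates the build-then-rsync shape, not any real project-config job):

```python
def wheel_cache_commands(requirements="upper-constraints.txt",
                         wheelhouse="/opt/wheelhouse",
                         mirror="mirror.example.org:/srv/wheels"):
    # Build wheels for everything pinned in the requirements file into a
    # local wheelhouse, then publish the wheelhouse to the pypi mirrors
    # so jobs can pip-install prebuilt wheels instead of compiling.
    build = ["pip", "wheel", "-r", requirements, "-w", wheelhouse]
    publish = ["rsync", "-a", wheelhouse + "/", mirror]
    return [build, publish]
```

Because the heavy compiles (lxml, etc.) happen in the periodic job — where build deps like libxml2-dev can be installed — the devstack runs themselves stay honest about what they require at run time.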

  1   2   >