[openstack-dev] Re: [heat] autoscaling across regions and availability zones

2014-07-03 Thread Huangtianhua
I have registered a bp about this:
https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

And I have been thinking recently about how to implement it.

According to the AWS autoscaling documentation, the implementation "attempts to distribute
instances evenly between the Availability Zones that are enabled for your Auto
Scaling group. Auto Scaling does this by attempting to launch new instances in the
Availability Zone with the fewest instances. If the attempt fails, however,
Auto Scaling will attempt to launch in other zones until it succeeds."

But there is a question about "fewest instances", e.g.:

There are two AZs:
   az1: has two instances
   az2: has three instances

If we then create an ASG with 4 instances, I think we should
create two instances each in az1 and az2, right? Now, if the ASG needs to grow
to 5 instances, in which AZ should the new instance be launched?
If you are interested in this bp, I think we can
discuss this :)
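
For illustration, a minimal sketch of the AWS-style policy quoted above
(the helper names are hypothetical, and launch_in_az stands in for the
real launch call):

    def azs_by_fewest_instances(instance_counts):
        """Order AZs by current instance count, fewest first."""
        return sorted(instance_counts, key=instance_counts.get)

    def launch_instance(instance_counts, launch_in_az):
        # Try the least-populated AZ first; if the launch fails there,
        # fall back to the next zone until one succeeds, per the AWS docs.
        for az in azs_by_fewest_instances(instance_counts):
            if launch_in_az(az):
                instance_counts[az] += 1
                return az
        raise RuntimeError("no availability zone could launch the instance")

    # With az1=2 and az2=3 as above, the next instance goes to az1; once
    # the counts are equal, the tie is broken by sort order.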


Thanks
From: Mike Spreitzer [mailto:mspre...@us.ibm.com]
Sent: July 2, 2014 4:23
To: OpenStack Development Mailing List
Subject: [openstack-dev] [heat] autoscaling across regions and availability zones

An AWS autoscaling group can span multiple availability zones in one region.  
What is the thinking about how to get analogous functionality in OpenStack?

Warmup question: what is the thinking about how to get the levels of isolation 
seen between AWS regions when using OpenStack?  What is the thinking about how 
to get the level of isolation seen between AWS AZs in the same AWS Region when 
using OpenStack?  Do we use OpenStack Region and AZ, respectively?  Do we 
believe that OpenStack AZs can really be as independent as we want them (note 
that this is phrased to not assume we only want as much isolation as AWS 
provides --- they have had high profile outages due to lack of isolation 
between AZs in a region)?

I am going to assume that the answer to the question about ASG spanning 
involves spanning OpenStack regions and/or AZs.  In the case of spanning AZs, 
Heat has already got one critical piece: the OS::Heat::InstanceGroup and 
AWS::AutoScaling::AutoScalingGroup types of resources take a list of AZs as an 
optional parameter.  Presumably all four kinds of scaling group (i.e., also 
OS::Heat::AutoScalingGroup and OS::Heat::ResourceGroup) should have such a 
parameter.  We would need to change the code that generates the template for 
the nested stack that is the group, so that it spreads the members across the 
AZs in a way that is as balanced as is possible at the time.

Currently, a stack does not have an AZ.  That makes the case of an 
OS::Heat::AutoScalingGroup whose members are nested stacks interesting --- how 
does one of those nested stacks get into the right AZ?  And what does that 
mean, anyway?  The meaning would have to be left up to the template author.  
But he needs something he can write in his member template to reference the 
desired AZ for the member stack.  I suppose we could stipulate that if the 
member template has a parameter named "availability_zone" and typed "string" 
then the scaling group takes care of providing the right value to that 
parameter.
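
As a rough sketch of that stipulation (hypothetical helper, dict-based
template handling; not actual Heat code):

    def member_stack_params(member_template, assigned_az, user_params):
        """Build the parameter dict for one member's nested stack."""
        params = dict(user_params)
        decl = member_template.get('parameters', {}).get('availability_zone')
        if decl and decl.get('type') == 'string':
            # The scaling group, not the user, supplies each member's AZ.
            params['availability_zone'] = assigned_az
        return params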

To spread across regions adds two things.  First, all four kinds of scaling 
group would need the option to be given a list of regions instead of a list of 
AZs.  More likely, a list of contexts as defined in 
https://review.openstack.org/#/c/53313/ --- that would make this handle 
multi-cloud as well as multi-region.  The other thing this adds is a concern 
for context health.  It is not enough to ask Ceilometer to monitor member 
health --- in multi-region or multi-cloud you also have to worry about the 
possibility that Ceilometer itself goes away.  It would have to be the scaling 
group's responsibility to monitor for context health, and react properly to 
failure of a whole context.

Does this sound about right?  If so, I could draft a spec.

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][specs] listing the entire API in a new spec

2014-07-03 Thread Steve Martinelli
To add to the growing pains of keystone-specs, one thing I've noticed is
that there is inconsistency in the 'REST API Impact' section.

To be clear, I don't mean we shouldn't include what new APIs will be
created; I think that is essential. Rather, I'd remove the need to
specifically spell out the request and response blocks.

Personally, I find it redundant for a few reasons:

1) We already have identity-api, which will need to be updated once the
spec is completed anyway.
2) It's easy to get bogged down in the spec review as it is; I don't want
to have to point out mistakes in the request/response blocks too (as I'll
need to do that when reviewing the identity-api patch anyway).
3) Come time to propose the identity-api patch, there might be differences
from what was proposed in the spec.

Personally I'd be OK with just stating the HTTP method and the endpoint.
Thoughts?
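
For example, a 'REST API Impact' section in that style might be as short
as this (an illustrative sketch; the endpoints are made up):

    REST API Impact
    ---------------
    * POST /v3/OS-EXAMPLE/widgets (new)
    * GET /v3/OS-EXAMPLE/widgets/{widget_id} (new)

    Request and response details are deferred to the identity-api change.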

Many apologies in advance for my pedantic-ness!

Regards,

Steve Martinelli
Software Developer - OpenStack
Keystone Core Member

Phone: 1-905-413-2851
E-mail: steve...@ca.ibm.com

8200 Warden Ave
Markham, ON L6G 1C7
Canada



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-03 Thread Eugene Nikanorov
German,

First of all, the extension list looks LBaaS-centric right now.
Secondly, TLS and L7 are APIs whose objects should not require a
loadbalancer or flavor in order to be created (like pool or healthmonitor, which
are pure DB objects).
Only when you associate those objects with a loadbalancer (or its child
objects) can the driver tell whether it supports them.
This means you can't really turn those on or off; it's a generic API.
From a user's perspective, the flavor description (as an interim measure) is
sufficient to show what is supported by the drivers behind the flavor.

Also, I think that turning "extensions" on/off is a bit of a side problem to
service specification, so let's resolve it separately.


Thanks,
Eugene.


On Fri, Jul 4, 2014 at 3:07 AM, Eichberger, German  wrote:

> I am actually a bit bummed that Extensions are postponed. In LBaaS we are
> working hard on L7 and TLS extensions which we (the operators) would like to
> switch on and off with different flavors...
>
> German
>
> -Original Message-
> From: Kyle Mestery [mailto:mest...@noironetworks.com]
> Sent: Thursday, July 03, 2014 2:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] Flavor framework: Conclusion
>
> Awesome, thanks for working on this Eugene and Mark! I'll still leave an
> item on Monday's meeting agenda to discuss this, hopefully it can be brief.
>
> Thanks,
> Kyle
>
> On Thu, Jul 3, 2014 at 10:32 AM, Eugene Nikanorov 
> wrote:
> > Hi,
> >
> > Mark and I have spent some time today discussing the existing proposals
> > and I think we got to a consensus.
> > Initially I had two concerns about Mark's proposal which are
> > - extension list attribute on the flavor
> > - driver entry point on the service profile
> >
> > The first idea (ext list) needs to be clarified more as we get more
> > drivers that need it.
> > Right now we have FWaaS/VPNaaS which don't have extensions at all and
> > we have LBaaS where all drivers support all extensions.
> > So extension list can be postponed until we clarify how exactly we
> > want this to be exposed to the user and how we want it to function on
> > implementation side.
> >
> > Driver entry point which implies dynamic loading per admin's request
> > is an important discussion point (at least, previously this idea
> > received negative opinions from some cores). We'll implement service
> > profiles, but this exact aspect of how the driver is specified/loaded
> > will be discussed further.
> >
> > So based on that I'm going to start implementing this.
> > I think that implementation result will allow us to develop in
> > different directions (extension list vs tags, dynamic loading and
> > such) depending on more information about how this is utilized by
> deployers and users.
> >
> > Thanks,
> > Eugene.
> >
> >
> >
> > On Thu, Jul 3, 2014 at 5:57 PM, Susanne Balle 
> wrote:
> >>
> >> +1
> >>
> >>
> >> On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery
> >> 
> >> wrote:
> >>>
> >>> We're coming down to the wire here with regards to Neutron BPs in
> >>> Juno, and I wanted to bring up the topic of the flavor framework BP.
> >>> This is a critical BP for things like LBaaS, FWaaS, etc. We need
> >>> this work to land in Juno, as these other work items are dependent on
> it.
> >>> There are still two proposals [1] [2], and after the meeting last
> >>> week [3] it appeared we were close to conclusion on this. I now see
> >>> a bunch of comments on both proposals.
> >>>
> >>> I'm going to again suggest we spend some time discussing this at the
> >>> Neutron meeting on Monday to come to a closure on this. I think
> >>> we're close. I'd like to ask Mark and Eugene to both look at the
> >>> latest comments, hopefully address them before the meeting, and then
> >>> we can move forward with this work for Juno.
> >>>
> >>> Thanks for all the work by all involved on this feature! I think
> >>> we're close and I hope we can close on it Monday at the Neutron
> meeting!
> >>>
> >>> Kyle
> >>>
> >>> [1] https://review.openstack.org/#/c/90070/
> >>> [2] https://review.openstack.org/102723
> >>> [3]
> >>> http://eavesdrop.openstack.org/meetings/networking_advanced_services
> >>> /2014/networking_advanced_services.2014-06-27-17.30.log.html
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org

Re: [openstack-dev] [heat] One more lifecycle plug point - in scaling groups

2014-07-03 Thread Mike Spreitzer
Steven Hardy  wrote on 07/02/2014 06:16:21 AM:

> On Wed, Jul 02, 2014 at 02:41:19AM +, Adrian Otto wrote:
> >Zane,
> >If you happen to have a link to this blueprint, could you replywith 
it? ...
>
> I believe Zane was referring to:
> 
> https://blueprints.launchpad.net/heat/+spec/update-hooks
> 
> This is also related to the action aware software config spec:
> 
> https://review.openstack.org/#/c/98742/
> 
> So in future, you might autoscale nested stacks containing action-aware
> software config resources, then you could define specific actions which
> happen e.g. on scale-down (on action DELETE).

Thanks, those are great pointers.  The second pretty much covers the 
first, right?

I do think the issue these address --- the need to get application logic 
involved in, e.g., shutdown --- is most of what an application needs; 
involvement in selection of which member(s) to delete is much less 
important (provided that clean shutdown mechanism prevents concurrent 
shutdowns).  So that provides a pretty good decoupling between the 
application's concerns and a holistic scheduler's concerns.

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repo not synced

2014-07-03 Thread Andreas Jaeger
On 07/04/2014 01:30 AM, Salvatore Orlando wrote:
> git.openstack.org  has an up-to-date
> log: http://git.openstack.org/cgit/openstack/neutron-specs/log/
> 
> Unfortunately I don't know what the policy is for syncing repos with github.

they should sync automatically; something is wrong on the infra side -
let's tell them.

Compare:
http://git.openstack.org/cgit/openstack/neutron-specs/log/
https://github.com/openstack/neutron-specs

Andreas

> Salvatore
> 
> 
> On 4 July 2014 00:34, Sumit Naiksatam  > wrote:
> 
> Is this still the right repo for this:
> https://github.com/openstack/neutron-specs
> 
> The latest commit on the master branch shows a June 25th timestamp, but
> we have had a lot of patches merge after that. Where are those
> going?
> 
> Thanks,
> ~Sumit.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Building deploy ramdisks with dracut

2014-07-03 Thread Ben Nemec
I've recently been looking into using dracut to build the
deploy-ramdisks that we use for TripleO.  There are a few reasons for
this: 1) dracut is a fairly standard way to generate a ramdisk, so users
are more likely to know how to debug problems with it.  2) If we build
with dracut, we get a lot of the udev/net/etc stuff that we're currently
doing manually for free.  3) (aka the self-serving one ;-) RHEL 7
doesn't include busybox, so we can't currently build ramdisks on that
distribution using the existing ramdisk element.

For the RHEL issue, this could just be an alternate way to build
ramdisks, but given some of the other benefits I mentioned above I
wonder if it would make sense to look at completely replacing the
existing element.  From my investigation thus far, I think dracut can
accommodate all of the functionality in the existing ramdisk element,
and it looks to be available on all of our supported distros.

So that's my pitch in favor of using dracut for ramdisks.  Any thoughts?
 Thanks.

https://dracut.wiki.kernel.org/index.php/Main_Page

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][libvirt] why use domain destroy instead of shutdown?

2014-07-03 Thread melanie witt
Hi all,

I noticed in nova/virt/libvirt/driver.py we use domain destroy instead of 
domain shutdown in most cases (except for soft reboot). Is there a special 
reason we don't use shutdown to do a graceful shutdown of the guest for the 
stop, shelve, migrate, etc. functions? Using destroy can corrupt the guest file
system.
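
For illustration, here is a sketch with the libvirt Python bindings of
what a graceful stop could look like: request a shutdown, wait, and only
fall back to destroy on timeout. This is not what driver.py does today,
and the timeout value is arbitrary:

    import time
    import libvirt  # python-libvirt bindings

    def graceful_stop(dom, timeout=60):
        """Try dom.shutdown() first; fall back to dom.destroy()."""
        try:
            dom.shutdown()      # ask the guest OS to power off cleanly
        except libvirt.libvirtError:
            dom.destroy()       # request failed; hard power-off
            return
        deadline = time.time() + timeout
        while time.time() < deadline:
            if not dom.isActive():
                return          # guest shut down cleanly
            time.sleep(1)
        dom.destroy()           # guest ignored the request; hard power-off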

Thanks,
Melanie


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][qa] Returning 203 in keystone v2 list apis?

2014-07-03 Thread Adam Young

On 07/03/2014 04:48 PM, David Kranz wrote:
While moving success response code checking in tempest to the client, 
I noticed that exactly one of the calls to list users for a tenant 
checked for 200 or 203. Looking at 
http://docs.openstack.org/api/openstack-identity-service/2.0/content/, 
it seems that most of the list apis can return 203. But given that 
almost all of the tempest tests only pass on getting 200, I am 
guessing that 203 is not actually ever being returned. Is the doc just 
wrong? If not, what kind of call would trigger a 203 response?


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
We found some inconsistencies due to how Apache implements the HEAD
call: it seems that HEAD is supposed to return the same value as GET,
only without the body. Apache actually calls GET for a HEAD call, and
then truncates the body.


Not sure if that is what you are seeing.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][qa] Returning 203 in keystone v2 list apis?

2014-07-03 Thread Anne Gentle
On Thu, Jul 3, 2014 at 4:18 PM, Brant Knudson  wrote:

>
> On Thu, Jul 3, 2014 at 3:48 PM, David Kranz  wrote:
>
>> While moving success response code checking in tempest to the client, I
>> noticed that exactly one of the calls to list users for a tenant checked
>> for 200 or 203. Looking at http://docs.openstack.org/api/
>> openstack-identity-service/2.0/content/, it seems that most of the list
>> apis can return 203. But given that almost all of the tempest tests only
>> pass on getting 200, I am guessing that 203 is not actually ever being
>> returned. Is the doc just wrong? If not, what kind of call would trigger a
>> 203 response?
>>
>>  -David
>>
>
> I can't find anyplace where Keystone returns a 203, and if it did it would
> be a strange thing to do.
>
> From the HTTP 1.1 spec, a client could get 203 Non-Authoritative
> Information to any request if the request went through a proxy and the
> proxy decided to muck with the headers. Since we can't stop someone from
> putting a proxy in front of Keystone, I don't think it's wrong to list it
> as a possible successful response. I think it's redundant to list it though
> since this applies to any HTTP request... just as it's redundant to list 500
> and 503 as possible error responses.
>
>
Hi Brant,
Yes, I found that 203 could be returned from a caching proxy depending on
the cloud provider. It is a possible successful response.



> I looked into trying to correct this in the docs once but couldn't figure
> out the wadls -- https://review.openstack.org/#/c/89291/
>
>
I'll take a look at this, sorry that we didn't realize the difficulty
sooner. I can amend your patch if you un-abandon it.
Anne


>  - Brant
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Specs repo

2014-07-03 Thread Salvatore Orlando
git.openstack.org has an up-to-date log:
http://git.openstack.org/cgit/openstack/neutron-specs/log/

Unfortunately I don't know what the policy is for syncing repos with github.

Salvatore


On 4 July 2014 00:34, Sumit Naiksatam  wrote:

> Is this still the right repo for this:
> https://github.com/openstack/neutron-specs
>
> The latest commit on the master branch shows a June 25th timestamp, but
> we have had a lot of patches merge after that. Where are those
> going?
>
> Thanks,
> ~Sumit.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-03 Thread Eichberger, German
I am actually a bit bummed that Extensions are postponed. In LBaaS we are
working hard on L7 and TLS extensions which we (the operators) would like to
switch on and off with different flavors...

German

-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com] 
Sent: Thursday, July 03, 2014 2:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] Flavor framework: Conclusion

Awesome, thanks for working on this Eugene and Mark! I'll still leave an item 
on Monday's meeting agenda to discuss this, hopefully it can be brief.

Thanks,
Kyle

On Thu, Jul 3, 2014 at 10:32 AM, Eugene Nikanorov  
wrote:
> Hi,
>
> Mark and I have spent some time today discussing the existing proposals
> and I think we got to a consensus.
> Initially I had two concerns about Mark's proposal which are
> - extension list attribute on the flavor
> - driver entry point on the service profile
>
> The first idea (ext list) needs to be clarified more as we get more
> drivers that need it.
> Right now we have FWaaS/VPNaaS which don't have extensions at all and 
> we have LBaaS where all drivers support all extensions.
> So extension list can be postponed until we clarify how exactly we 
> want this to be exposed to the user and how we want it to function on 
> implementation side.
>
> Driver entry point which implies dynamic loading per admin's request
> is an important discussion point (at least, previously this idea
> received negative opinions from some cores). We'll implement service
> profiles, but this exact aspect of how the driver is specified/loaded
> will be discussed further.
>
> So based on that I'm going to start implementing this.
> I think that implementation result will allow us to develop in 
> different directions (extension list vs tags, dynamic loading and 
> such) depending on more information about how this is utilized by deployers 
> and users.
>
> Thanks,
> Eugene.
>
>
>
> On Thu, Jul 3, 2014 at 5:57 PM, Susanne Balle  wrote:
>>
>> +1
>>
>>
>> On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery 
>> 
>> wrote:
>>>
>>> We're coming down to the wire here with regards to Neutron BPs in 
>>> Juno, and I wanted to bring up the topic of the flavor framework BP.
>>> This is a critical BP for things like LBaaS, FWaaS, etc. We need 
>>> this work to land in Juno, as these other work items are dependent on it.
>>> There are still two proposals [1] [2], and after the meeting last 
>>> week [3] it appeared we were close to conclusion on this. I now see 
>>> a bunch of comments on both proposals.
>>>
>>> I'm going to again suggest we spend some time discussing this at the 
>>> Neutron meeting on Monday to come to a closure on this. I think 
>>> we're close. I'd like to ask Mark and Eugene to both look at the 
>>> latest comments, hopefully address them before the meeting, and then 
>>> we can move forward with this work for Juno.
>>>
>>> Thanks for all the work by all involved on this feature! I think 
>>> we're close and I hope we can close on it Monday at the Neutron meeting!
>>>
>>> Kyle
>>>
>>> [1] https://review.openstack.org/#/c/90070/
>>> [2] https://review.openstack.org/102723
>>> [3]
>>> http://eavesdrop.openstack.org/meetings/networking_advanced_services
>>> /2014/networking_advanced_services.2014-06-27-17.30.log.html
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-03 Thread Eichberger, German
Hi Jorge,

+1 for QUEUED and DETACHED

I would suggest making the time we keep entities in the DELETED state
configurable. We use something like 30 days, too, but we have made it
configurable to adapt to changes...

German

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Thursday, July 03, 2014 11:59 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not 
exist in a driver backend

+1 to QUEUED status.

For entities that have the concept of being attached/detached, why not have a
'DETACHED' status to indicate that the entity is not provisioned at all (i.e.
the config is just stored in the DB)? When it is attached during provisioning
we can set it to 'ACTIVE' or any of the other provisioning statuses such
as 'ERROR', 'PENDING_UPDATE', etc. Lastly, it wouldn't make much sense to have
a 'DELETED' status on these types of entities until the user actually issues a
DELETE API request (not to be confused with detaching). That raises another
question: when items are deleted, how long should the API return responses for
that resource? We have a 90-day threshold for this in our current
implementation, after which the API returns 404s for the resource.

Cheers,
--Jorge
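
For what it's worth, a sketch of how the proposed lifecycle could be
modeled (the statuses follow the discussion in this thread; this is not
an agreed design):

    # Allowed status transitions for a detachable entity (sketch).
    TRANSITIONS = {
        'QUEUED':         {'PENDING_CREATE'},     # accepted, not provisioned
        'PENDING_CREATE': {'ACTIVE', 'ERROR'},
        'ACTIVE':         {'PENDING_UPDATE', 'DETACHED', 'DELETED'},
        'PENDING_UPDATE': {'ACTIVE', 'ERROR'},
        'DETACHED':       {'ACTIVE', 'DELETED'},  # config exists only in DB
        'ERROR':          {'PENDING_UPDATE', 'DELETED'},
    }

    def change_status(current, new):
        if new not in TRANSITIONS.get(current, set()):
            raise ValueError('illegal transition %s -> %s' % (current, new))
        return new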




On 7/3/14 10:39 AM, "Phillip Toohill" 
wrote:

>If the objects remain in 'PENDING_CREATE' until provisioned, it would
>seem that the process got stuck in that status and may be in a bad
>state from the user's perspective. I like the idea of QUEUED or similar to
>reference that the object has been accepted but not provisioned.
>
>Phil
>
>On 7/3/14 10:28 AM, "Brandon Logan"  wrote:
>
>>With the new API and object model refactor there have been some issues 
>>arising dealing with the status of entities.  The main issue is that 
>>Listener, Pool, Member, and Health Monitor can exist independent of a 
>>Load Balancer.  The Load Balancer is the entity that will contain the 
>>information about which driver to use (through provider or flavor).  
>>If a Listener, Pool, Member, or Health Monitor is created without a 
>>link to a Load Balancer, then what status does it have?  At this point 
>>it only exists in the database and is really just waiting to be 
>>provisioned by a driver/backend.
>>
>>Some possibilities discussed:
>>A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name 
>>Entities just remain in PENDING_CREATE until provisioned by a driver 
>>Entities just remain in ACTIVE until provisioned by a driver
>>
>>Opinions and suggestions?
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Milton Xu (mxu)
Thanks Nader for the extra effort and I am glad you made progress here.

Milton

From: Nader Lahouti [mailto:nader.laho...@gmail.com]
Sent: Thursday, July 03, 2014 3:59 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron][oslo.messaging]

Thanks a lot Doug and Alexei for your prompt response.
I used what Doug suggested and it worked.

Thanks again for the help.
Nader.

On Thu, Jul 3, 2014 at 12:04 PM, Doug Hellmann
<doug.hellm...@dreamhost.com> wrote:
On Thu, Jul 3, 2014 at 1:58 PM, Alexei Kornienko
<alexei.kornie...@gmail.com> wrote:
> Hi,
>
> You can use oslo.messaging._drivers.impl_rabbit instead of impl_kombu
> It was renamed and slightly change but I think it will work as you expect.
You should not depend on using any API defined in that module. The
_drivers package is a private package inside oslo.messaging, and
shouldn't be used directly. Use the public, documented API instead to
ensure that future changes to the internal implementation details of
oslo.messaging do not break your code.

Doug
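
For reference, a minimal sketch of the public notification-listener API
Doug is referring to (the topic and exchange names here are assumptions,
not taken from Nader's setup):

    from oslo.config import cfg
    from oslo import messaging

    class NotificationEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # Called for each 'info'-priority notification received.
            print(event_type, payload)

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications', exchange='keystone')]
    listener = messaging.get_notification_listener(
        transport, targets, [NotificationEndpoint()])
    listener.start()
    listener.wait()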

>
> Regards,
> Alexei Kornienko
>
>
> On 07/03/2014 08:47 PM, Nader Lahouti wrote:
>
> Hi All and Ihar,
>
> As part of blueprint oslo-messaging the neutron/openstack/common/rpc tree is
> removed. I was using impl_kombu module to process notification from keystone
> with this following code sample:
> ...
> from neutron.openstack.common.rpc import impl_kombu
> try:
>     conf = impl_kombu.cfg.CONF
>     topicname = self._topic_name
>     exchange = self._exchange_name
>     connection = impl_kombu.Connection(conf)
>     connection.declare_topic_consumer(topicname,
>                                       self.callback,
>                                       topicname, exchange)
>     connection.consume()
> except Exception:
>     connection.close()
>
>
> Can you please let me know what needs to be done to replace the above code and
> make it work with the current neutron code?
>
>
> Thanks in advance,
> Nader.
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Nader Lahouti
Thanks a lot Doug and Alexei for your prompt response.
I used what Doug suggested and it worked.

Thanks again for the help.
Nader.


On Thu, Jul 3, 2014 at 12:04 PM, Doug Hellmann 
wrote:

> On Thu, Jul 3, 2014 at 1:58 PM, Alexei Kornienko
>  wrote:
> > Hi,
> >
> > You can use oslo.messaging._drivers.impl_rabbit instead of impl_kombu
> > It was renamed and slightly change but I think it will work as you
> expect.
>
> You should not depend on using any API defined in that module. The
> _drivers package is a private package inside oslo.messaging, and
> shouldn't be used directly. Use the public, documented API instead to
> ensure that future changes to the internal implementation details of
> oslo.messaging do not break your code.
>
> Doug
>
> >
> > Regards,
> > Alexei Kornienko
> >
> >
> > On 07/03/2014 08:47 PM, Nader Lahouti wrote:
> >
> > Hi All and Ihar,
> >
> > As part of blueprint oslo-messaging the neutron/openstack/common/rpc tree is
> > removed. I was using impl_kombu module to process notification from keystone
> > with this following code sample:
> > ...
> > from neutron.openstack.common.rpc import impl_kombu
> > try:
> >     conf = impl_kombu.cfg.CONF
> >     topicname = self._topic_name
> >     exchange = self._exchange_name
> >     connection = impl_kombu.Connection(conf)
> >     connection.declare_topic_consumer(topicname,
> >                                       self.callback,
> >                                       topicname, exchange)
> >     connection.consume()
> > except Exception:
> >     connection.close()
> >
> >
> > Can you please let me know what needs to be done to replace the above code
> > and make it work with the current neutron code?
> >
> >
> > Thanks in advance,
> > Nader.
> >
> >
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Discussion of capabilities feature

2014-07-03 Thread Doug Shelley
Iccha,

Thanks for the feedback. I guess I should have been more specific - my intent
here was to lay out use cases and requirements and not talk about specific
implementations. I believe that if we can get agreement on the requirements, it
will be easier to review/discuss design/implementation choices. Some of your
comments are specific to how one might choose to implement against these
requirements - I think we should defer those questions until we gain some
agreement on requirements.

More feedback below...marked with [DAS]

Regards,
Doug

From: Iccha Sethi [mailto:iccha.se...@rackspace.com]
Sent: July-03-14 4:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] Discussion of capabilities feature

Hey Doug,

Thank you so much for putting this together. I have some 
questions/clarifications(inline) which would be useful to be addressed in the 
spec.


From: Doug Shelley <d...@tesora.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, July 3, 2014 at 2:20 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [trove] Discussion of capabilities feature

At yesterday's Trove team meeting [1] there was significant discussion around 
the Capabilities [2] feature. While the community previously approved a BP and 
some of the initial implementation, it is apparent now that there is no 
agreement in the community around the requirements, use cases or proposed 
implementation.

I mentioned in the meeting that I thought it would make sense to adjust the 
current BP and spec to reflect the concerns and hopefully come up with 
something that we can get consensus on. Ahead of this, I thought I would
try to write up some of the key points and get some feedback here before
updating the spec.

First, here are what I think the goals of the Capabilities feature are:
1. Provide other components with a mechanism for understanding which aspects of 
Trove are currently available and/or in use
>> Good point about communicating to other components. We can highlight how 
>> this would help other projects like horizon dynamically modify their UI 
>> based on the api response.
[DAS] Absolutely


[2] "This proposal includes the ability to setup different capabilities for 
different datastore versions. " So capabilities is specific to data 
stores/datastore versions and not for trove in general right?

[DAS] This is from the original spec - I kind of pushed the reset to make sure 
we understand the requirements at this point. Although what the requirements 
below contemplate is certainly oriented around datastore managers/datastores 
and versions.

Also it would be useful for us as a community to maybe lay some ground rules 
for what is a capability and what is not in the spec. For example, how to 
distinguish what goes in 
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L273 as a 
config value and what does not.
[DAS] Hopefully this will become clearer through this process

2. Allow operators the ability to control some aspects of Trove at deployment 
time
>> If we are controlling the aspects at deploy time, what advantages does having
>> tables like capabilities and capabilities_overrides offer over having them in
>> the config file under the config groups for different datastores, like
>> [mysql], [redis], etc.? I think it would be useful to document these answers
>> because they might keep resurfacing in the future.
[DAS] Certainly at the time the design/implementation is fleshed out these 
choices would be relevant to be discussed.
Also want to make sure we are not trying to solve the problem of config 
override during run time here because that is an entirely different problem not 
in scope here.

Use Cases

1. Unimplemented feature - this is the case where one/some datastore managers 
provide support for some specific capability but others don't. A good example 
would be replication support as we are only planning to support the MySQL 
manager in the first version. As other datastore managers gain support for the 
capability, these would be enabled.
2. Unsupported feature - similar to #1 except this would be the case where the 
datastore manager inherently doesn't support the capability. For example, Redis 
doesn't have support for volumes.
3. Operator controllable feature - this would be a capability that can be 
controlled at deployment time at the option of the operator. For example, 
whether to provide access to the root user on instance creation.
>> Are not 1 and 2 set at deploy time as well?
[DAS] I see 1 and 2 and basically baked into a particular version of the 
product and provided at run time.

4. Downstream capabilities addition - basically the ability to use capabilities 
as an extension 

[openstack-dev] [neutron] Specs repo

2014-07-03 Thread Sumit Naiksatam
Is this still the right repo for this:
https://github.com/openstack/neutron-specs

The latest commit on the master branch shows a June 25th timestamp, but
we have had a lot of patches merge after that. Where are those
going?

Thanks,
~Sumit.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 04:34 PM, Kevin Benton wrote:
> Yes, I can propose a spec for that. It probably won't be until Monday.
> Is that okay?
> 
Sure, that's fine. Thanks Kevin, I look forward to your spec once it is
up. Enjoy tomorrow. :D

Thanks Kevin,
Anita.
> 
> On Thu, Jul 3, 2014 at 11:42 AM, Anita Kuno  wrote:
> 
>> On 07/03/2014 02:33 PM, Kevin Benton wrote:
>>> Maybe we can require period checks against the head of the master
>>> branch (which should always pass) and build statistics based on the
>> results
>>> of that.
>> I like this suggestion. I really like this suggestion.
>>
>> Hmmm, what to do with a good suggestion? I wonder if we could capture
>> it in an infra-spec and work on it from there.
>>
>> Would you feel comfortable offering a draft as an infra-spec and then
>> perhaps we can discuss the design through the spec?
>>
>> What do you think?
>>
>> Thanks Kevin,
>> Anita.
>>
>>> Otherwise it seems like we have to take a CI system's word for it
>>> that a particular patch indeed broke that system.
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>> On Thu, Jul 3, 2014 at 11:07 AM, Anita Kuno 
>> wrote:
>>>
 On 07/03/2014 01:27 PM, Kevin Benton wrote:
>> This allows the viewer to see categories of reviews based upon their
>> divergence from OpenStack's Jenkins results. I think evaluating
>> divergence from Jenkins might be a metric worth consideration.
>
> I think the only thing this really reflects though is how much the
>> third
> party CI system is mirroring Jenkins.
> A system that frequently diverges may be functioning perfectly fine and
> just has a vastly different code path that it is integration testing so
 it
> is legitimately detecting failures the OpenStack CI cannot.
 Great.

 How do we measure the degree to which it is legitimately detecting
 failures?

 Thanks Kevin,
 Anita.
>
> --
> Kevin Benton
>
>
> On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno 
>> wrote:
>
>> On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
>>> Apologies for quoting again the top post of the thread.
>>>
>>> Comments inline (mostly thinking aloud)
>>> Salvatore
>>>
>>>
>>> On 30 June 2014 22:22, Jay Pipes  wrote:
>>>
 Hi Stackers,

 Some recent ML threads [1] and a hot IRC meeting today [2] brought
>> up
>> some
 legitimate questions around how a newly-proposed Stackalytics report
>> page
 for Neutron External CI systems [2] represented the results of an
>> external
 CI system as "successful" or not.

 First, I want to say that Ilya and all those involved in the
>> Stackalytics
 program simply want to provide the most accurate information to
>> developers
 in a format that is easily consumed. While there need to be some
>> changes in
 how data is shown (and the wording of things like "Tests
>> Succeeded"),
 I
 hope that the community knows there isn't any ill intent on the part
 of
 Mirantis or anyone who works on Stackalytics. OK, so let's keep the
 conversation civil -- we're all working towards the same goals of
 transparency and accuracy. :)

 Alright, now, Anita and Kurt Taylor were asking a very poignant
>> question:

 "But what does CI tested really mean? just running tests? or tested
>> to
 pass some level of requirements?"

 In this nascent world of external CI systems, we have a set of
>> issues
>> that
 we need to resolve:

 1) All of the CI systems are different.

 Some run Bash scripts. Some run Jenkins slaves and devstack-gate
>> scripts.
 Others run custom Python code that spawns VMs and publishes logs to
 some
 public domain.

 As a community, we need to decide whether it is worth putting in the
 effort to create a single, unified, installable and runnable CI
 system,
>> so
 that we can legitimately say "all of the external systems are
 identical,
 with the exception of the driver code for vendor X being substituted
 in
>> the
 Neutron codebase."

>>>
>>> I think such system already exists, and it's documented here:
>>> http://ci.openstack.org/
>>> Still, understanding it is quite a learning curve, and running it is
 not
>>> exactly straightforward. But I guess that's pretty much
>> understandable
>>> given the complexity of the system, isn't it?
>>>
>>>

 If the goal of the external CI systems is to produce reliable,
>> consistent
 results, I feel the answer to the above is "yes", but I'm interested
 to
 hear what others think. Frankly, in the world of benchmarks, it
>> would
 be
 unthinkable to say "go ahead and everyone run your own benchm

Re: [openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Ihar Hrachyshka

On 03/07/14 19:47, Nader Lahouti wrote:
> Hi All and Ihar,
> 
> As part of blueprint oslo-messaging the
> neutron/openstack/common/rpc tree is removed. I was using
> impl_kombu module to process notification from keystone with this
> following code sample:
> ...
> from neutron.openstack.common.rpc import impl_kombu
> try:
>     conf = impl_kombu.cfg.CONF
>     topicname = self._topic_name
>     exchange = self._exchange_name
>     connection = impl_kombu.Connection(conf)
>     connection.declare_topic_consumer(topicname, self.callback,
>                                       topicname, exchange)
>     connection.consume()
> except Exception:
>     connection.close()
> 

Why expose a broker-specific API in your code? What if notifications
are served by another type of broker (Qpid? ZeroMQ?)? I wouldn't expect
to see any references to the kombu implementation in any code outside
oslo.messaging.

> 
> Can you please let me know what needs to be done to replace the above
> code and make it work with the current neutron code?
> 
> 
> Thanks in advance, Nader.
> 
> 
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-07-03 Thread Jorge Miramontes
I was implying that it applies to all drivers.

Cheers,
--Jorge

From: Eugene Nikanorov <enikano...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, July 3, 2014 3:30 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

> I also don't think it is fair for certain drivers to hold other drivers 
> "hostage"

For some time there was a policy (OpenStack-wide) that a public API should have a
free open source implementation.
In this sense the open source driver may hold other drivers as "hostages".

Eugene.


On Thu, Jul 3, 2014 at 10:37 PM, Jorge Miramontes
<jorge.miramon...@rackspace.com> wrote:
I agree.

Also, since we are planning on having two different API versions run in 
parallel the only driver that needs to be worked on initially is the reference 
implementation. I'm guessing we will have two reference implementations, one 
for v1 and one for v2. The v2 implementation currently seems to be modified 
from v1 in order to get the highest velocity in terms of exposing API 
functionality. There is a reason we aren't working on Octavia right now and I 
think the same rationale holds for other drivers. So, I believe we should 
expose as much functionality possible with a functional open-source driver and 
then other drivers will catch up.

As for drivers that can't implement certain features the only potential issue I 
see is a type of vendor lock-in. For example, let's say I am an operator 
agnostic power API user. I host with operator A and they use a driver that 
implements all functionality exposed via the API. Now, let's say I want to move 
to operator B because operator A isn't working for me. Let's also say that 
operator B doesn't implement all functionality exposed via the API. From the 
user's perspective they are locked out of going to operator B because their API 
integrated code won't port seamlessly. With this example in mind, however, I 
also don't think it is fair for certain drivers to hold other drivers 
"hostage". From my perspective, if users really want a feature then every 
driver implementor should have the incentive to implement said feature, and it
will benefit them in the long run. Anyway, that's my $0.02.

Cheers,
--Jorge

From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 24, 2014 7:30 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>

Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values

Making sure all drivers support the features offered in Neutron LBaaS means we 
are stuck going with the 'least common denominator' in all cases. While this 
ensures all vendors implement the same things in functionally the same way,
it also is probably a big reason the Neutron LBaaS project has been so 
incredibly slow in seeing new features added over the last two years.

In the gerrit review that Dustin linked, it sounds like the people contributing 
to the discussion are in favor of allowing drivers to reject some 
configurations as unsupported through use of exceptions (details on how that 
will work is being hashed out now if you want to participate in that 
discussion).  Let's assume, therefore, that with the LBaaS v2 API and Object 
model we're also going to get this ability-- which of course also means that 
drivers do not have to support every feature exposed by the API.

(And again, as Dustin pointed out, a Linux LVS-based driver definitely wouldn't 
be able to support any L7 features at all, yet it's still a very useful driver 
for many deployments.)

Finally, I do not believe that the LBaaS project should be "held back" because 
one vendor's implementation doesn't work well with a couple features exposed in 
the API. As Dustin said, let the API expose a rich feature set and allow 
drivers to reject certain configurations when they don't support them.

Stephen



On Tue, Jun 24, 2014 at 9:09 AM, Dustin Lundquist
<dus...@null-ptr.net> wrote:
I brought this up on https://review.openstack.org/#/c/101084/.


-Dustin


On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman
<avish...@radware.com> wrote:
Hi Dustin
I agree with the concept you described but as far as I understand it is not 
currently supported in Neutron.
So a driver should be fully compatible with the interface it implements.

Avishay

From: Dustin Lundquist [mailto:dus...@null-ptr.net]
Sent: Tuesday, June 24, 2014 5:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
comapre_type values


Re: [openstack-dev] [keystone][qa] Returning 203 in keystone v2 list apis?

2014-07-03 Thread Brant Knudson
On Thu, Jul 3, 2014 at 3:48 PM, David Kranz  wrote:

> While moving success response code checking in tempest to the client, I
> noticed that exactly one of the calls to list users for a tenant checked
> for 200 or 203. Looking at http://docs.openstack.org/api/
> openstack-identity-service/2.0/content/, it seems that most of the list
> apis can return 203. But given that almost all of the tempest tests only
> pass on getting 200, I am guessing that 203 is not actually ever being
> returned. Is the doc just wrong? If not, what kind of call would trigger a
> 203 response?
>
>  -David
>

I can't find anyplace where Keystone returns a 203, and if it did it would
be a strange thing to do.

From the HTTP 1.1 spec, a client could get 203 Non-Authoritative
Information to any request if the request went through a proxy and the
proxy decided to muck with the headers. Since we can't stop someone from
putting a proxy in front of Keystone, I don't think it's wrong to list it
as a possible successful response. I think it's redundant to list it though,
since this applies to any HTTP request... just as it's redundant to list 500
and 503 as possible error responses.

I looked into trying to correct this in the docs once but couldn't figure
out the wadls -- https://review.openstack.org/#/c/89291/

- Brant
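
In the meantime, a client that wants to follow the doc as written simply
has to treat both codes as success, e.g. (a sketch, not actual tempest
code):

    # 203: the response passed through a transforming proxy.
    SUCCESS_CODES = {200, 203}

    def assert_list_succeeded(status_code):
        if status_code not in SUCCESS_CODES:
            raise AssertionError('unexpected status %d' % status_code)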
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [DevStack] neutron config not working

2014-07-03 Thread Kyle Mestery
On Thu, Jul 3, 2014 at 10:14 AM, Paul Czarkowski
 wrote:
> I'm seeing similar. Instances launch, they show as having IPs in
> `neutron list`, but I cannot access them via IP.
>
> The other thing I've noticed is that doing a `neutron agent-list` gives me an
> empty list; I would assume it should at least show the DHCP agent?
>
Which plugin are you using? For ML2 with OVS or LB, you should have L2
agents on each compute host in addition to the DHCP and L3 agents. I
think perhaps your problem is different than Rob's.

> On 7/1/14, 12:00 PM, "Kyle Mestery"  wrote:
>
>>Hi Rob:
>>
>>Can you try adding the following config to your local.conf? I'd like
>>to see if this gets you going or not. It will force it to use gre
>>tunnels for tenant networks. By default it will not.
>>
>>ENABLE_TENANT_TUNNELS=True
>>
>>On Tue, Jul 1, 2014 at 10:53 AM, Rob Crittenden 
>>wrote:
>>> Rob Crittenden wrote:
 Mark Kirkwood wrote:
> On 25/06/14 10:59, Rob Crittenden wrote:
>> Before I get punted onto the operators list, I post this here because
>> this is the default config and I'd expect the defaults to just work.
>>
>> Running devstack inside a VM with a single NIC configured and this in
>> localrc:
>>
>> disable_service n-net
>> enable_service q-svc
>> enable_service q-agt
>> enable_service q-dhcp
>> enable_service q-l3
>> enable_service q-meta
>> enable_service neutron
>> Q_USE_DEBUG_COMMAND=True
>>
>> Results in a successful install but no DHCP address assigned to
>>hosts I
>> launch and other oddities like no CIDR in nova net-list output.
>>
>> Is this still the default way to set things up for single node? It is
>> according to https://wiki.openstack.org/wiki/NeutronDevstack
>>
>>
>
> That does look ok: I have an essentially equivalent local.conf:
>
> ...
> ENABLED_SERVICES+=,-n-net
> ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest
>
> I don't have 'neutron' specifically enabled... not sure if/why that
> might make any difference tho. However instance launching and ip
>address
> assignment seem to work ok.
>
> However I *have* seen the issue of instances not getting IP addresses in
> single host setups, and it is often due to use of virtio with bridges
> (which is the default I think). Try:
>
> nova.conf:
> ...
> libvirt_use_virtio_for_bridges=False

 Thanks for the suggestion. At least in master this was replaced by a new
 section, libvirt, but even setting it to False didn't do the trick for
 me. I see the same behavior.
>>>
>>> OK, I've tested the havana and icehouse branches in F-20 and they don't
>>> seem to have a working neutron either. I see the same thing. I can
>>> launch a VM but it isn't getting a DHCP address.
>>>
>>> Maybe I'll try in some Ubuntu release to see if this is Fedora-specific.
>>>
>>> rob
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-03 Thread Kyle Mestery
Awesome, thanks for working on this Eugene and Mark! I'll still leave
an item on Monday's meeting agenda to discuss this, hopefully it can
be brief.

Thanks,
Kyle

On Thu, Jul 3, 2014 at 10:32 AM, Eugene Nikanorov
 wrote:
> Hi,
>
> Mark and I have spent some time today discussing the existing proposals and I
> think we got to a consensus.
> Initially I had two concerns about Mark's proposal which are
> - extension list attribute on the flavor
> - driver entry point on the service profile
>
> The first idea (ext list) needs to be clarified more as we get more drivers
> that need it.
> Right now we have FWaaS/VPNaaS which don't have extensions at all and we
> have LBaaS where all drivers support all extensions.
> So extension list can be postponed until we clarify how exactly we want this
> to be exposed to the user and how we want it to function on implementation
> side.
>
> Driver entry point which implies dynamic loading per admin's request is an
> important discussion point (at least, previously this idea received negative
> opinions from some cores).
> We'll implement service profiles, but this exact aspect of how the driver is
> specified/loaded will be discussed further.
>
> So based on that I'm going to start implementing this.
> I think that implementation result will allow us to develop in different
> directions (extension list vs tags, dynamic loading and such) depending on
> more information about how this is utilized by deployers and users.
>
> Thanks,
> Eugene.
>
>
>
> On Thu, Jul 3, 2014 at 5:57 PM, Susanne Balle  wrote:
>>
>> +1
>>
>>
>> On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery 
>> wrote:
>>>
>>> We're coming down to the wire here with regards to Neutron BPs in
>>> Juno, and I wanted to bring up the topic of the flavor framework BP.
>>> This is a critical BP for things like LBaaS, FWaaS, etc. We need this
>>> work to land in Juno, as these other work items are dependent on it.
>>> There are still two proposals [1] [2], and after the meeting last week
>>> [3] it appeared we were close to conclusion on this. I now see a bunch
>>> of comments on both proposals.
>>>
>>> I'm going to again suggest we spend some time discussing this at the
>>> Neutron meeting on Monday to come to a closure on this. I think we're
>>> close. I'd like to ask Mark and Eugene to both look at the latest
>>> comments, hopefully address them before the meeting, and then we can
>>> move forward with this work for Juno.
>>>
>>> Thanks for all the work by all involved on this feature! I think we're
>>> close and I hope we can close on it Monday at the Neutron meeting!
>>>
>>> Kyle
>>>
>>> [1] https://review.openstack.org/#/c/90070/
>>> [2] https://review.openstack.org/102723
>>> [3]
>>> http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.log.html
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] nova bug scrub web page

2014-07-03 Thread Tracy Jones
Hi Folks - I have taken a script from the infra folks and jogo, made some 
tweaks and have put it into a web page.  Please see it here 
http://54.201.139.117/demo.html


This is all of the new, confirmed, triaged, and in progress bugs that we have 
in nova as of a couple of hours ago.  I have added ways to search it, sort it, 
and filter it based on

1.  All bugs
2.  Bugs that have not been updated in the last 30 days
3.  Bugs that have never been updated
4.  Bugs in progress
5.  Bugs without owners.


I chose these as they are things I was interested in seeing, but there are 
obviously a lot of other things I can do here.  I plan on adding a cron job to 
update the data every hour or so.  Take a look and let me know if you have 
feedback.
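
For the curious, the filter categories boil down to something like the
plain-Python sketch below (the bug-record layout is hypothetical; the real
page is driven by Launchpad data):

    from datetime import datetime, timedelta

    def categories(bug, now=None):
        """Yield the filter buckets a bug record falls into."""
        now = now or datetime.utcnow()
        if bug.get('assignee') is None:
            yield 'no owner'
        if bug['status'] == 'In Progress':
            yield 'in progress'
        if bug['last_updated'] == bug['created']:
            yield 'never updated'
        elif now - bug['last_updated'] > timedelta(days=30):
            yield 'not updated in 30 days'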

Tracy


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-03 Thread Dave Walker
Hi,

This is very similar to an issue I encountered with Glance.  For some
unknown reason, we were adding a Location header for 200 responses.

When served behind apache+mod_fcgid, the module saw the Location
header and applied a hard-coded conversion to a 302 Redirect. This caused
glanceclient to follow the redirect loop continually. As you can
imagine, this stealthy change was a real oddity to debug.

More details are here:
https://bugs.launchpad.net/glance/+bug/1299095

Standardisation with standards helps avoid non-standard behaviour. :)
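
(For what it's worth, once fixed, the GET/HEAD agreement this thread is
after is easy to verify from the client side. A rough check with requests,
where the endpoint and tokens are placeholders:)

    import requests

    URL = 'http://keystone.example.com:5000/v3/auth/tokens'  # placeholder
    HEADERS = {'X-Auth-Token': 'ADMIN_TOKEN',
               'X-Subject-Token': 'SOME_TOKEN'}

    get_resp = requests.get(URL, headers=HEADERS)
    head_resp = requests.head(URL, headers=HEADERS)

    # Per the HTTP spec quoted elsewhere in this thread, the status codes
    # should match and the HEAD response should carry no body.
    assert get_resp.status_code == head_resp.status_code
    assert head_resp.content == b''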

--
Kind Regards,
Dave Walker


On 2 July 2014 03:48, Robert Collins  wrote:
> Wearing my HTTP fanatic hat - I think this is actually an important
> change to do. Skew like this can cause all sorts of odd behaviours in
> client libraries.
>
> -Rob
>
> On 2 July 2014 13:13, Morgan Fainberg  wrote:
>> In the endeavor to move from the default deployment of Keystone being 
>> eventlet (in devstack) to Apache + mod_wsgi, I noticed that there was an odd 
>> mis-match on a single set of tempest tests relating to trusts. Under 
>> eventlet a HTTP 204 No Content was being returned, but under mod_wsgi an 
>> HTTP 200 OK was being returned. After some investigation it turned out that 
>> in some cases mod_wsgi will rewrite HEAD requests to GET requests under the 
>> hood; this is to ensure that the response from Apache is properly built when 
>> dealing with filtering. A number of wsgi applications just return nothing on 
>> a HEAD request, which is incorrect, so mod_wsgi tries to compensate.
>>
>> The HTTP spec states: "The HEAD method is identical to GET except that the 
>> server must not return any Entity-Body in the response. The metainformation 
>> contained in the HTTP headers in response to a HEAD request should be 
>> identical to the information sent in response to a GET request. This method 
>> can be used for obtaining metainformation about the resource identified by 
>> the Request-URI without transferring the Entity-Body itself. This method is 
>> often used for testing hypertext links for validity, accessibility, and 
>> recent modification.”
>>
>> Keystone has 3 Routes where HEAD will result in a 204 and GET will result in 
>> a 200.
>>
>> * /v3/auth/tokens
>> * /v2.0/tokens/{token_id}
>> * /OS-TRUST/trusts/{trust_id}/roles/{role_id} <--- This is the only one 
>> tested by Tempest.
>>
>> The easiest solution is to correct the case where we are out of line with 
>> the HTTP spec and ensure these cases return the same status code for GET and 
>> HEAD methods. This however changes the response status of a public REST API. 
>> Before we do this, I wanted to ensure the community, developers, and TC did 
>> not have an issue with this correction. We are not changing the class of 
>> status (e.g. 2xx to 4xx or vice-versa). This would simply be returning the 
>> same response between GET and HEAD requests. The fix for this would be to 
>> modify the 3 tempest tests in question to expect HTTP 200 instead of 204.
>>
>> There are a couple of other cases where Keystone registers a HEAD route but 
>> no GET route (these would be corrected at the same time, to ensure 
>> compatibility). The final correction is to enforce that Keystone would not 
>> send any data on HEAD requests (it is possible to do so, but we have not had 
>> it happen).
>>
>> You can see a proof-of-concept review that shows the tempest failures here: 
>> https://review.openstack.org/#/c/104026
>>
>> If this change (even though it is in violation of 
>> https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable 
>> is acceptable, it will unblock the last of a very few things to have 
>> Keystone default deploy via devstack under Apache (and gate upon it). Please 
>> let me know if anyone has significant issues with this change / concerns as 
>> I would like to finish up this road to mod_wsgi based Keystone as early in 
>> the Juno cycle as possible.
>>
>> Cheers,
>> Morgan Fainberg
>>
>>
>> —
>> Morgan Fainberg
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][qa] Returning 203 in keystone v2 list apis?

2014-07-03 Thread David Kranz
While moving success response code checking in tempest to the client, I 
noticed that exactly one of the calls to list users for a tenant checked 
for 200 or 203. Looking at 
http://docs.openstack.org/api/openstack-identity-service/2.0/content/, 
it seems that most of the list APIs can return 203. But given that 
almost all of the tempest tests only pass on getting 200, I am guessing 
that 203 is not actually ever being returned. Is the doc just wrong? If 
not, what kind of call would trigger a 203 response?
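
(For reference, the looser assertion would look something like this sketch;
the URL and token are placeholders:)

    import requests

    resp = requests.get(
        'http://keystone.example.com:5000/v2.0/tenants/TENANT_ID/users',
        headers={'X-Auth-Token': 'ADMIN_TOKEN'})
    # 203 (Non-Authoritative Information) would normally only appear when a
    # transforming proxy sits between the client and the server.
    assert resp.status_code in (200, 203)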


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack and SDN

2014-07-03 Thread Thulasi ram Valleru
The most challenging part for me over the past few days has been understanding
how SDN can reduce the burden on Neutron.

Consider a single SDN plugin, say the OpenDaylight controller plugin, deployed
on Neutron. I have physical and virtual switches which support OpenFlow. I
know OpenFlow will install the flows on the switches and routers.

When you create an abstract tenant network, how does Neutron configure these
changes on the physical or virtual switches, considering it has only the
OpenDaylight plugin installed?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Kevin Benton
Yes, I can propose a spec for that. It probably won't be until Monday.
Is that okay?
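
(For concreteness, the statistic I have in mind from those periodic checks
could be as simple as the sketch below; the record layout is hypothetical:)

    # Each record is one periodic run of a third-party CI against the tip
    # of master, which should always pass.
    runs = [
        {'date': '2014-07-01', 'result': 'SUCCESS'},
        {'date': '2014-07-02', 'result': 'FAILURE'},
        {'date': '2014-07-03', 'result': 'SUCCESS'},
    ]

    failures = sum(1 for r in runs if r['result'] != 'SUCCESS')
    # Any failure here points at the CI system itself rather than at a
    # proposed patch.
    print('false-failure rate: %.0f%%' % (100.0 * failures / len(runs)))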


On Thu, Jul 3, 2014 at 11:42 AM, Anita Kuno  wrote:

> On 07/03/2014 02:33 PM, Kevin Benton wrote:
> > Maybe we can require period checks against the head of the master
> > branch (which should always pass) and build statistics based on the
> results
> > of that.
> I like this suggestion. I really like this suggestion.
>
> Hmmm, what to do with a good suggestion? I wonder if we could capture
> it in an infra-spec and work on it from there.
>
> Would you feel comfortable offering a draft as an infra-spec and then
> perhaps we can discuss the design through the spec?
>
> What do you think?
>
> Thanks Kevin,
> Anita.
>
> > Otherwise it seems like we have to take a CI system's word for it
> > that a particular patch indeed broke that system.
> >
> > --
> > Kevin Benton
> >
> >
> > On Thu, Jul 3, 2014 at 11:07 AM, Anita Kuno 
> wrote:
> >
> >> On 07/03/2014 01:27 PM, Kevin Benton wrote:
>  This allows the viewer to see categories of reviews based upon their
>  divergence from OpenStack's Jenkins results. I think evaluating
>  divergence from Jenkins might be a metric worth consideration.
> >>>
> >>> I think the only thing this really reflects though is how much the
> third
> >>> party CI system is mirroring Jenkins.
> >>> A system that frequently diverges may be functioning perfectly fine and
> >>> just has a vastly different code path that it is integration testing so
> >> it
> >>> is legitimately detecting failures the OpenStack CI cannot.
> >> Great.
> >>
> >> How do we measure the degree to which it is legitimately detecting
> >> failures?
> >>
> >> Thanks Kevin,
> >> Anita.
> >>>
> >>> --
> >>> Kevin Benton
> >>>
> >>>
> >>> On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno 
> wrote:
> >>>
>  On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
> > Apologies for quoting again the top post of the thread.
> >
> > Comments inline (mostly thinking aloud)
> > Salvatore
> >
> >
> > On 30 June 2014 22:22, Jay Pipes  wrote:
> >
> >> Hi Stackers,
> >>
> >> Some recent ML threads [1] and a hot IRC meeting today [2] brought
> up
>  some
> >> legitimate questions around how a newly-proposed Stackalytics report
>  page
> >> for Neutron External CI systems [2] represented the results of an
>  external
> >> CI system as "successful" or not.
> >>
> >> First, I want to say that Ilya and all those involved in the
>  Stackalytics
> >> program simply want to provide the most accurate information to
>  developers
> >> in a format that is easily consumed. While there need to be some
>  changes in
> >> how data is shown (and the wording of things like "Tests
> Succeeded"),
> >> I
> >> hope that the community knows there isn't any ill intent on the part
> >> of
> >> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
> >> conversation civil -- we're all working towards the same goals of
> >> transparency and accuracy. :)
> >>
> >> Alright, now, Anita and Kurt Taylor were asking a very poignant
>  question:
> >>
> >> "But what does CI tested really mean? just running tests? or tested
> to
> >> pass some level of requirements?"
> >>
> >> In this nascent world of external CI systems, we have a set of
> issues
>  that
> >> we need to resolve:
> >>
> >> 1) All of the CI systems are different.
> >>
> >> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
>  scripts.
> >> Others run custom Python code that spawns VMs and publishes logs to
> >> some
> >> public domain.
> >>
> >> As a community, we need to decide whether it is worth putting in the
> >> effort to create a single, unified, installable and runnable CI
> >> system,
>  so
> >> that we can legitimately say "all of the external systems are
> >> identical,
> >> with the exception of the driver code for vendor X being substituted
> >> in
>  the
> >> Neutron codebase."
> >>
> >
> > I think such system already exists, and it's documented here:
> > http://ci.openstack.org/
> > Still, understanding it is quite a learning curve, and running it is
> >> not
> > exactly straightforward. But I guess that's pretty much
> understandable
> > given the complexity of the system, isn't it?
> >
> >
> >>
> >> If the goal of the external CI systems is to produce reliable,
>  consistent
> >> results, I feel the answer to the above is "yes", but I'm interested
> >> to
> >> hear what others think. Frankly, in the world of benchmarks, it
> would
> >> be
> >> unthinkable to say "go ahead and everyone run your own benchmark
> >> suite",
> >> because you would get wildly different results. A similar problem
> has
> >> emerged here.
> >>
> >
> > I don't think the particular infrastructure which m

Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-03 Thread Kevin Benton
Are these zuul refs publicly accessible so that the third party CI systems
could reference them to guarantee they are testing the same thing?
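
(For context, a rough sketch of what a third-party system would need to do
to test the same thing as upstream; the remote URL and ref below are
illustrative:)

    import subprocess

    def checkout_patch_merged(repo_dir, gerrit_remote, patch_ref,
                              branch='master'):
        """Check out the proposed patch merged into the tip of its target
        branch -- roughly what zuul-merger and devstack-gate produce."""
        def git(*args):
            subprocess.check_call(('git',) + args, cwd=repo_dir)
        git('fetch', 'origin', branch)
        git('checkout', 'FETCH_HEAD')
        git('fetch', gerrit_remote, patch_ref)  # e.g. refs/changes/84/101084/3
        git('merge', '--no-edit', 'FETCH_HEAD')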


On Thu, Jul 3, 2014 at 11:31 AM, Jay Pipes  wrote:

> On 07/03/2014 02:10 PM, Kevin Benton wrote:
>
>> The reason I thought it changed was that this is the first cycle where I
>> have encountered scenarios where my unit tests for the patch run fine
>> locally, but then they fail when they are checked by Jenkins (caused by
>> a change after the parent of my patch). I suppose I was just lucky
>> before and never had anything merge after I proposed a patch that caused
>> a conflict with mine.
>>
>> I suspect this is a problem then for many third-party CI systems because
>> the simple approach of setting [PROJECT]_REPO and [PROJECT]_BRANCH in
>> devstack to point to the gerrit server will not work correctly since it
>> will just test the patch without merging it.
>>
>> Where is this merging process handled in the OpenStack CI? Is that done
>> in Zuul with the custom Zuul branch is passed to devstack?
>>
>
> Yes. The zuul-merger daemon is responsible for managing this, and the
> devstack-gate project handles the checkout and setup of the git repos for
> all of the OpenStack projects.
>
> Best,
> -jay
>
>  --
>> Kevin Benton
>>
>>
>> On Tue, Jul 1, 2014 at 4:00 PM, Jeremy Stanley > > wrote:
>>
>> On 2014-07-01 10:05:45 -0700 (-0700), Kevin Benton wrote:
>> [...]
>>  > As I understand it, this behavior for the main OpenStack CI check
>>  > queue changed to the latter some time over the past few months.
>> [...]
>>
>> I'm not sure what you think changed, but we've (upstream OpenStack
>> CI) been testing proposed patches merged to their target branches
>> for years...
>> --
>> Jeremy Stanley
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> 
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Kevin Benton
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - comapre_type values

2014-07-03 Thread Eugene Nikanorov
> I also don't think it is fair for certain drivers to hold other drivers
"hostage"

For some time there has been a policy (OpenStack-wide) that a public API should
have a free open source implementation.
In this sense the open source driver may hold other drivers as "hostages".
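
(For reference, the rejection mechanism being discussed in the gerrit
review mentioned below amounts to something like this sketch; the class and
exception names are hypothetical:)

    class UnsupportedConfiguration(Exception):
        pass

    class LvsDriver(object):
        """An LVS-style driver: useful for L4, no L7 support."""

        def create_listener(self, context, listener):
            if listener.get('l7_policies'):
                raise UnsupportedConfiguration(
                    'this driver does not support L7 rules')
            # ... proceed with L4-only provisioning ...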

Eugene.


On Thu, Jul 3, 2014 at 10:37 PM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

>   I agree.
>
>  Also, since we are planning on having two different API versions run in
> parallel the only driver that needs to be worked on initially is the
> reference implementation. I'm guessing we will have two reference
> implementations, one for v1 and one for v2. The v2 implementation currently
> seems to be modified from v1 in order to get the highest velocity in terms
> of exposing API functionality. There is a reason we aren't working on
> Octavia right now and I think the same rationale holds for other drivers.
> So, I believe we should expose as much functionality possible with a
> functional open-source driver and then other drivers will catch up.
>
>  As for drivers that can't implement certain features the only potential
> issue I see is a type of vendor lock-in. For example, let's say I am an
> operator agnostic power API user. I host with operator A and they use a
> driver that implements all functionality exposed via the API. Now, let's
> say I want to move to operator B because operator A isn't working for me.
> Let's also say that operator B doesn't implement all functionality exposed
> via the API. From the user's perspective they are locked out of going to
> operator B because their API integrated code won't port seamlessly. With
> this example in mind, however, I also don't think it is fair for certain
> drivers to hold other drivers "hostage". From my perspective, if users
> really want a feature then every driver implementor should have the
> incentive to implement said feature, and it will benefit them in the long run.
> Anyways, that's my $0.02.
>
>  Cheers,
> --Jorge
>
>   From: Stephen Balukoff 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Tuesday, June 24, 2014 7:30 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule
> - comapre_type values
>
>   Making sure all drivers support the features offered in Neutron LBaaS
> means we are stuck going with the 'least common denominator' in all cases.
> While this ensures all vendors implement the same things in functionally
> the same way, it also is probably a big reason the Neutron
> LBaaS project has been so incredibly slow in seeing new features added over
> the last two years.
>
>  In the gerrit review that Dustin linked, it sounds like the people
> contributing to the discussion are in favor of allowing drivers to reject
> some configurations as unsupported through use of exceptions (details on
> how that will work is being hashed out now if you want to participate in
> that discussion).  Let's assume, therefore, that with the LBaaS v2 API and
> Object model we're also going to get this ability-- which of course also
> means that drivers do not have to support every feature exposed by the API.
>
>  (And again, as Dustin pointed out, a Linux LVS-based driver definitely
> wouldn't be able to support any L7 features at all, yet it's still a very
> useful driver for many deployments.)
>
>  Finally, I do not believe that the LBaaS project should be "held back"
> because one vendor's implementation doesn't work well with a couple of
> features exposed in the API. As Dustin said, let the API expose a rich
> feature set and allow drivers to reject certain configurations when they
> don't support them.
>
>  Stephen
>
>
>
> On Tue, Jun 24, 2014 at 9:09 AM, Dustin Lundquist 
> wrote:
>
>> I brought this up on https://review.openstack.org/#/c/101084/.
>>
>>
>>  -Dustin
>>
>>
>> On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman 
>> wrote:
>>
>>>  Hi Dustin
>>>
>>> I agree with the concept you described but as far as I understand it is
>>> not currently supported in Neutron.
>>>
>>> So a driver should be fully compatible with the interface it implements.
>>>
>>>
>>>
>>> Avishay
>>>
>>>
>>>
>>> *From:* Dustin Lundquist [mailto:dus...@null-ptr.net]
>>> *Sent:* Tuesday, June 24, 2014 5:41 PM
>>> *To:* OpenStack Development Mailing List (not for usage questions)
>>> *Subject:* Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7
>>> Rule - comapre_type values
>>>
>>>
>>>
>>> I think the API should provide an richly featured interface, and
>>> individual drivers should indicate if they support the provided
>>> configuration. For example there is a spec for a Linux LVS LBaaS driver,
>>> this driver would not support TLS termination or any layer 7 features, but
>>> would still be valuable for some deployments. The user experience of such a
>>> solution could be improved i

Re: [openstack-dev] [trove] Discussion of capabilities feature

2014-07-03 Thread Iccha Sethi
Hey Doug,

Thank you so much for putting this together. I have some 
questions/clarifications(inline) which would be useful to be addressed in the 
spec.


From: Doug Shelley mailto:d...@tesora.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Thursday, July 3, 2014 at 2:20 PM
To: "OpenStack Development Mailing List (not for usage questions) 
(openstack-dev@lists.openstack.org)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [trove] Discussion of capabilities feature

At yesterday's Trove team meeting [1] there was significant discussion around 
the Capabilities [2] feature. While the community previously approved a BP and 
some of the initial implementation, it is apparent now that there is no 
agreement in the community around the requirements, use cases or proposed 
implementation.

I mentioned in the meeting that I thought it would make sense to adjust the 
current BP and spec to reflect the concerns and hopefully come up with 
something that we can get consensus on. Ahead of this, I thought it would help to 
try to write up some of the key points and get some feedback here before 
updating the spec.

First, here are what I think the goals of the Capabilities feature are:
1. Provide other components with a mechanism for understanding which aspects of 
Trove are currently available and/or in use
>> Good point about communicating to other components. We can highlight how 
>> this would help other projects like horizon dynamically modify their UI 
>> based on the API response.

[2] "This proposal includes the ability to setup different capabilities for 
different datastore versions. “ So capabilities is specific to data 
stores/datastore versions and not for trove in general right?

Also it would be useful for us as a community to maybe lay some ground rules 
for what is a capability and what is not in the spec. For example, how to 
distinguish what goes in 
https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L273 as a 
config value and what does not.

2. Allow operators the ability to control some aspects of Trove at deployment 
time
>> If we are controlling these aspects at deploy time, what advantages do 
>> tables like capabilities and capabilities_overrides offer over entries in 
>> the config file under the config groups for different datastores like 
>> [mysql], [redis], etc.? I think it would be useful to document these answers 
>> because they might keep resurfacing in the future.
Also, I want to make sure we are not trying to solve the problem of config 
override at run time here, because that is an entirely different problem, not 
in scope here.

Use Cases

1. Unimplemented feature - this is the case where one/some datastore managers 
provide support for some specific capability but others don't. A good example 
would be replication support as we are only planning to support the MySQL 
manager in the first version. As other datastore managers gain support for the 
capability, these would be enabled.
2. Unsupported feature - similar to #1 except this would be the case where the 
datastore manager inherently doesn't support the capability. For example, Redis 
doesn't have support for volumes.
3. Operator controllable feature - this would be a capability that can be 
controlled at deployment time at the option of the operator. For example, 
whether to provide access to the root user on instance creation.
>> Are not 1 and 2 set at deploy time as well?
4. Downstream capabilities addition - basically the ability to use capabilities 
as an extension point. Allow downstream implementations to add capabilities 
that aren't present in upstream Trove.

Requirements

1. There are a well known set of capabilities that are provided with upstream 
Trove. Each capability is either read-only (basically use cases 1 & 2) or 
read-write (use case 3). Use case #4 capabilities are not part of the "well 
known" set.
2. Each capability can be over-ridden at the datastore manager level, the 
datastore level or the datastore version level. The datastore manager level 
would be used for the read only capabilities and specified by a given version 
of Trove. Datastore/Datastore version overrides would be for Operator 
controllable capabilities that are read-write.
>> Is there going to be a distinction at the design level between read-write 
>> and read-only capabilities? For example, are operators going to be forbidden 
>> from changing certain capabilities?

3. The datastore/datastore version overrides are only present if created by the 
Operator at deployment time.
>> Again, if this is deployment time only, should we be having config files for 
>> different datastores? And instead of having admins populate the 
>> databases, could this be taken care of by config management tools in 
>> deployments?

4. A clean Trove install should create the domain of known capabilities and the

Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-03 Thread Dmitry Borodaenko
On Thu, Jul 3, 2014 at 7:05 AM, Aleksandr Didenko  wrote:
>> I think we should allow the user to delete unneeded releases.
>
> In this case the user won't be able to add new nodes to the existing
> environments of the same version. So we should check and warn the user about
> it, or simply not allow deleting releases if there are live envs with the
> same version.

+1

-DmitryB

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on resulting HTTP status (proposed REST API Response Status Changes)

2014-07-03 Thread Morgan Fainberg
Here is the list of patches pending to resolve this issue (Keystone Master, 
Keystone Stable/Icehouse, and Tempest)

https://review.openstack.org/#/q/status:open+topic:bug/1334368,n,z 


—
Morgan Fainberg


--
From: Nathan Kinder nkin...@redhat.com
Reply: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: July 1, 2014 at 20:02:45
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject:  Re: [openstack-dev] [Keystone] HTTP Get and HEAD requests mismatch on 
resulting HTTP status (proposed REST API Response Status Changes)

>  
>  
> On 07/01/2014 07:48 PM, Robert Collins wrote:
> > Wearing my HTTP fanatic hat - I think this is actually an important
> > change to do. Skew like this can cause all sorts of odd behaviours in
> > client libraries.
>  
> +1. The current behavior of inconsistent response codes between the two
> recommended methods of deploying Keystone should definitely be
> considered as a bug IMHO. Consistency in responses is important
> regardless of how Keystone is deployed, and it seems obvious that we
> should modify the responses that are out of spec to achieve consistency.
>  
> -NGK
> >
> > -Rob
> >
> > On 2 July 2014 13:13, Morgan Fainberg wrote:
> >> In the endeavor to move from the default deployment of Keystone being 
> >> eventlet (in  
> devstack) to Apache + mod_wsgi, I noticed that there was an odd mis-match on 
> a single set  
> of tempest tests relating to trusts. Under eventlet a HTTP 204 No Content was 
> being returned,  
> but under mod_wsgi an HTTP 200 OK was being returned. After some 
> investigation it turned  
> out that in some cases mod_wsgi will rewrite HEAD requests to GET requests 
> under the hood;  
> this is to ensure that the response from Apache is properly built when 
> dealing with filtering.  
> A number of wsgi applications just return nothing on a HEAD request, which is 
> incorrect,  
> so mod_wsgi tries to compensate.
> >>
> >> The HTTP spec states: "The HEAD method is identical to GET except that the 
> >> server must  
> not return any Entity-Body in the response. The metainformation contained in 
> the HTTP  
> headers in response to a HEAD request should be identical to the information 
> sent in response  
> to a GET request. This method can be used for obtaining metainformation about 
> the resource  
> identified by the Request-URI without transferring the Entity-Body itself. 
> This method  
> is often used for testing hypertext links for validity, accessibility, and 
> recent modification.”  
> >>
> >> Keystone has 3 Routes where HEAD will result in a 204 and GET will result 
> >> in a 200.
> >>
> >> * /v3/auth/tokens
> >> * /v2.0/tokens/{token_id}
> >> * /OS-TRUST/trusts/{trust_id}/roles/{role_id} <--- This is the only one 
> >> tested  
> by Tempest.
> >>
> >> The easiest solution is to correct the case where we are out of line with 
> >> the HTTP spec  
> and ensure these cases return the same status code for GET and HEAD methods. 
> This however  
> changes the response status of a public REST API. Before we do this, I wanted 
> to ensure  
> the community, developers, and TC did not have an issue with this correction. 
> We are not  
> changing the class of status (e.g. 2xx to 4xx or vice-versa). This would 
> simply be returning  
> the same response between GET and HEAD requests. The fix for this would be to 
> modify the  
> 3 tempest tests in question to expect HTTP 200 instead of 204.
> >>
> >> There are a couple of other cases where Keystone registers a HEAD route 
> >> but no GET route  
> (these would be corrected at the same time, to ensure compatibility). The 
> final correction  
> is to enforce that Keystone would not send any data on HEAD requests (it is 
> possible to  
> do so, but we have not had it happen).
> >>
> >> You can see a proof-of-concept review that shows the tempest failures 
> >> here: https://review.openstack.org/#/c/104026  
> >>
> >> If this change (even though it is in violation of 
> >> https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Not_Acceptable
> >>   
> is acceptable, it will unblock the last of a very few things to have Keystone 
> default deploy  
> via devstack under Apache (and gate upon it). Please let me know if anyone 
> has significant  
> issues with this change / concerns as I would like to finish up this road to 
> mod_wsgi based  
> Keystone as early in the Juno cycle as possible.
> >>
> >> Cheers,
> >> Morgan Fainberg
> >>
> >>
> >> —
> >> Morgan Fainberg
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openst

[openstack-dev] Announcing git-review 1.24

2014-07-03 Thread Jeremy Stanley
I am pleased to announce git-review 1.24 is officially released
today (Thursday, July 3, 2014). This version brings together 77 new
changes from 20 different collaborators including fixes for 15
bugs and a variety of other improvements:

https://git.openstack.org/cgit/openstack-infra/git-review/log/?id=1.23..1.24&showmsg=1

Some brief highlights:

* The warning message about the availability of new releases has
been removed--these days a majority of modern Linux distributions
and other Unix derivatives provide usably recent versions of
git-review and that's the expected source for most users (also
packagers were typically just patching that part out of the code
anyway).

* The Python packaging for git-review has been revamped to use
tag-based versioning with PBR (which has now been added as a
build-time dependency, though not a run-time dependency).

* Fairly significant refactoring replaced most error handling in
git-review with exceptions and specific exit codes, making it much
more friendly when used as a backend to other tools. However this
cleanup brought in dependencies on the argparse and requests Python
modules, so it can no longer be run in-place as a standalone script
without first being installed.

* A comprehensive functional test suite is included, which will
download/run Gerrit on a loopback interface and then exercise
git-review functions with it.

* Several breakages involving extended UTF-8 codepoints and
operation under non-English locales were addressed.

* Initial support has been implemented for interacting with newer
Gerrit versions over HTTP(S) as an alternative to SSH, improving
flexibility for users in more restrictive network environments.

The release tarball can be installed from PyPI as usual, and can
also be found at:

http://tarballs.openstack.org/git-review/git-review-1.24.tar.gz

Its checksums are...

md5sum: 537c78591e74853c204c5b3d88c0c4fd

sha256: 20fa8be4b86430b41153c270f770dd270bde06ff70c60c411aa9adc9db2f512a

It's also available as a Python wheel:

http://tarballs.openstack.org/git-review/git_review-1.24-py2-none-any.whl

md5sum: 4bb6b9c7042120d508e13ff0abe52e87

sha256: 8fa88ce99c50de1509e2b7944cb5272a91d3c354eacffc9aed9641af15b1d6d0

A huge thank-you to everyone who contributed!
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [blazar] py26 and py27 jobs failing

2014-07-03 Thread Fuente, Pablo A
Yes, I know that, but we need the big patch in order to get our V2 REST
API working, so I asked for reviews of this patch instead of submitting a new
one.
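
(For anyone hitting the same thing: under the newer Pecan, a controller
method that returns an empty body yields an HTTP 204, while returning real
content keeps the 200. A minimal sketch, with a hypothetical payload:)

    import pecan

    class RootController(object):
        @pecan.expose('json')
        def index(self):
            # Returning {} here produces a 204 under the new Pecan;
            # returning actual version data produces a 200.
            return {'versions': [{'id': 'v2.0', 'status': 'CURRENT'}]}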

On Wed, 2014-07-02 at 17:24 -0400, Ryan Petrello wrote:
> That's a pretty notable review to accommodate the pecan change.  In the
> meantime, wouldn't something like this get the py26 and py27 tests
> passing?
> 
> index 9aced3f..4051fad 100644
> --- a/climate/tests/api/test_root.py
> +++ b/climate/tests/api/test_root.py
> @@ -22,8 +22,7 @@ class TestRoot(api.APITest):
>  response = self.get_json('/',
>   expect_errors=True,
>   path_prefix='')
> -self.assertEqual(response.status_int, 200)
> -self.assertEqual(response.content_type, "text/html")
> +self.assertEqual(response.status_int, 204)
>  self.assertEqual(response.body, '')
> 
> On 07/02/14 09:08:35 PM, Fuente, Pablo A wrote:
> > Blazar cores,
> > Please review https://review.openstack.org/99389. We need this merged
> > ASAP in order to get a +1 from Jenkins in our py26 and py27 jobs. The new
> > Pecan version returns a 204 instead of 200 in one case (when the API returns
> > an empty dictionary) and one test case is failing for this reason. This
> > patch solves the bug as a side effect, because it returns the versions
> > instead of an empty dictionary.
> > 
> > Pablo.
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [QA][Infra] Mid-Cycle Meet-up Registration Closed

2014-07-03 Thread James E. Blair
Matthew Treinish  writes:

> Hi Everyone,
>
> Just a quick update, we have to close registration for the Infra/QA mid-cycle
> meet-up. Based on the number of people who have signed up on the wiki page [1]
> we are basically at the maximum capacity for the rooms we reserved. So if you
> had intended to come but didn't sign up on the wiki unfortunately there isn't
> any space left.

We've had a few people contact us after registration closed.  I've added
their names, in order, to a waitlist on the wiki page:

  https://wiki.openstack.org/wiki/Qa_Infra_Meetup_2014

If you have already registered and find that you can no longer attend,
please let us know ASAP.

If you can only attend some days, please note that in the comments
field.

If you would like to attend but have not registered, you may add your
name to the end of the waitlist in case there are cancellations; but we
can't guarantee anything.

Thanks to everyone who has expressed interest!  And I'm sorry we can't
accommodate everyone.

-Jim

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-03 Thread Russell Bryant
On 07/03/2014 01:53 PM, Sylvain Bauza wrote:
> Hi,
> 
> ==
> tl; dr: A decision has been made to split out the scheduler to a
> separate project not on a feature parity basis with nova-scheduler, your
> comments are welcome.
> ==

...

> During the last Gantt meeting held Tuesday, we discussed about the
> status and the problems we have. As we are close to Juno-2, there are
> some concerns about which blueprints would be implemented by Juno, so
> Gantt would be updated after. Due to the problems raised in the
> different blueprints (please see the links there), it has been agreed to
> follow a path a bit different from the one agreed at the Summit : once
> B/ is merged, Gantt will be updated and work will happen in there while
> work with C/ will happen in parallel. That means we need to backport in
> Gantt all changes happening to the scheduler, but (and this is the most
> important point) until C/ is merged into Gantt, Gantt won't support
> filters which decide on aggregates or instance groups. In other words,
> until C/ happens (but also A/), Gantt won't be feature-parity with
> Nova-scheduler.
> 
> That doesn't mean Gantt will move forward and leave all missing features
> out of it; we will be dedicated to feature parity as the top priority, but
> that implies that the first releases of Gantt will be experimental and
> considered for testing purposes only.

I don't think this sounds like the best approach.  It sounds like effort
will go into maintaining two schedulers instead of continuing to focus
effort on the refactoring necessary to decouple the scheduler from Nova.
 It's heading straight for a "nova-network and Neutron" scenario, where
we're maintaining both for much longer than we want to.

I strongly prefer not starting a split until it's clear that the switch
to the new scheduler can be done as quickly as possible.  That means
that we should be able to start a deprecation and removal timer on
nova-scheduler.  Proceeding with a split now will only make it take even
longer to get there, IMO.

This was the primary reason the last Gantt split was scrapped.  I don't
understand why we'd go at it again without finishing the job first.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Discussion of capabilities feature

2014-07-03 Thread Doug Shelley
At yesterday's Trove team meeting [1] there was significant discussion around 
the Capabilities [2] feature. While the community previously approved a BP and 
some of the initial implementation, it is apparent now that there is no 
agreement in the community around the requirements, use cases or proposed 
implementation.

I mentioned in the meeting that I thought it would make sense to adjust the 
current BP and spec to reflect the concerns and hopefully come up with 
something that we can get consensus on. Ahead of this, I thought it would help to 
try to write up some of the key points and get some feedback here before 
updating the spec.

First, here are what I think the goals of the Capabilities feature are:
1. Provide other components with a mechanism for understanding which aspects of 
Trove are currently available and/or in use
2. Allow operators the ability to control some aspects of Trove at deployment 
time

Use Cases

1. Unimplemented feature - this is the case where one/some datastore managers 
provide support for some specific capability but others don't. A good example 
would be replication support as we are only planning to support the MySQL 
manager in the first version. As other datastore managers gain support for the 
capability, these would be enabled.
2. Unsupported feature - similar to #1 except this would be the case where the 
datastore manager inherently doesn't support the capability. For example, Redis 
doesn't have support for volumes.
3. Operator controllable feature - this would be a capability that can be 
controlled at deployment time at the option of the operator. For example, 
whether to provide access to the root user on instance creation.
4. Downstream capabilities addition - basically the ability to use capabilities 
as an extension point. Allow downstream implementations to add capabilities 
that aren't present in upstream Trove.

Requirements

1. There are a well known set of capabilities that are provided with upstream 
Trove. Each capability is either read-only (basically use cases 1 & 2) or 
read-write (use case 3). Use case #4 capabilities are not part of the "well 
known" set.
2. Each capability can be overridden at the datastore manager level, the 
datastore level or the datastore version level (see the sketch after this 
list). The datastore manager level would be used for the read-only 
capabilities and specified by a given version of Trove. Datastore/datastore 
version overrides would be for operator-controllable capabilities that are 
read-write.
3. The datastore/datastore version overrides are only present if created by the 
Operator at deployment time.
4. A clean Trove install should create the domain of known capabilities and the 
datastore manager overrides relevant to the installed version of Trove.
5. Upgrades - need to provide a mechanism to migrate from a version of Trove 
where:
a. A capability is being moved from a historical config file into the 
capability mechanism
b. A previously non-existent capability is being introduced.
c. Capability adjustments have occurred in the newer version that affect the 
datastore manager level capabilities. This likely has some impact on 
old-version guest agents running against capability upgrades.
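
To make requirement 2 concrete, the override precedence could work roughly 
like the sketch below (the data layout is hypothetical, not a proposed 
schema):

    def effective_capability(name, manager_defaults, datastore_overrides,
                             version_overrides):
        """The most specific scope wins: version > datastore > manager."""
        for scope in (version_overrides, datastore_overrides,
                      manager_defaults):
            if name in scope:
                return scope[name]
        raise KeyError('unknown capability: %s' % name)

    # Example: replication off by default for the manager, enabled for one
    # specific datastore version.
    effective_capability('replication',
                         {'replication': False}, {},
                         {'replication': True})  # -> True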

Any feedback is welcome. Hopefully, based on the feedback we can update the 
spec and move forward on adjusting the implementation.

Regards,
Doug

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2014-07-02.log
 starting at 18:05
[2] https://wiki.openstack.org/wiki/Trove/trove-capabilities


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting minutes July 3

2014-07-03 Thread Dmitry Mescheryakov
Again, thanks everyone who have joined Sahara meeting. Below are the
logs from the meeting.

Minutes: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-03-18.06.html
Logs: 
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-03-18.06.log.html

Thanks,

Dmitry

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Doug Hellmann
On Thu, Jul 3, 2014 at 1:58 PM, Alexei Kornienko
 wrote:
> Hi,
>
> You can use oslo.messaging._drivers.impl_rabbit instead of impl_kombu.
> It was renamed and slightly changed, but I think it will work as you expect.

You should not depend on using any API defined in that module. The
_drivers package is a private package inside oslo.messaging, and
shouldn't be used directly. Use the public, documented, API instead to
ensure that future changes to the internal implementation details of
oslo.messaging do not break your code.
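
For example, a notification listener built on the public API might look
roughly like this (the topic and exchange values are placeholders, and the
endpoint methods depend on the notification priorities you expect):

    import oslo.messaging as messaging
    from oslo.config import cfg

    class NotificationHandler(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # handle the keystone notification here
            pass

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications', exchange='keystone')]
    listener = messaging.get_notification_listener(
        transport, targets, [NotificationHandler()])
    listener.start()
    listener.wait()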

Doug

>
> Regards,
> Alexei Kornienko
>
>
> On 07/03/2014 08:47 PM, Nader Lahouti wrote:
>
> Hi All and Ihar,
>
> As part of the oslo-messaging blueprint, the neutron/openstack/common/rpc tree
> was removed. I was using the impl_kombu module to process notifications from
> keystone with the following code sample:
> ...
> from neutron.openstack.common.rpc import impl_kombu
> try:
>     conf = impl_kombu.cfg.CONF
>     topic = self._topic_name
>     exchange = self._exchange_name
>     connection = impl_kombu.Connection(conf)
>     connection.declare_topic_consumer(topic,
>                                       self.callback,
>                                       topic, exchange)
>     connection.consume()
> except Exception:
>     connection.close()
>
>
> Can you please let me know what needs to be done to replace the above code and
> make it work with current neutron code?
>
>
> Thanks in advance,
> Nader.
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-03 Thread Jorge Miramontes
+1 to QUEUED status.

For entities that have the concept of being attached/detached, why not have
a 'DETACHED' status to indicate that the entity is not provisioned at all
(i.e. the config is just stored in the DB)? When it is attached during
provisioning then we can set it to 'ACTIVE' or any of the other
provisioning statuses such as 'ERROR', 'PENDING_UPDATE', etc. Lastly, it
wouldn't make much sense to have a 'DELETED' status on these types of
entities until the user actually issues a DELETE API request (not to be
confused with detaching). Which begs another question: when items are
deleted, how long should the API return responses for that resource? We
have a 90 day threshold for this in our current implementation after which
the API returns 404's for the resource.
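
(To Brandon's original question, the proposals amount to a transition table
along these lines; the exact states are what's being debated, so treat this
as a sketch:)

    VALID_TRANSITIONS = {
        'DETACHED': {'QUEUED', 'DELETED'},       # config only in the DB
        'QUEUED': {'PENDING_CREATE', 'ERROR'},   # accepted, not provisioned
        'PENDING_CREATE': {'ACTIVE', 'ERROR'},
        'ACTIVE': {'PENDING_UPDATE', 'ERROR', 'DELETED'},
        'PENDING_UPDATE': {'ACTIVE', 'ERROR'},
        'ERROR': {'ACTIVE', 'DELETED'},          # allow recovery, per above
    }

    def set_status(entity, new_status):
        if new_status not in VALID_TRANSITIONS.get(entity['status'], set()):
            raise ValueError('invalid transition %s -> %s'
                             % (entity['status'], new_status))
        entity['status'] = new_status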

Cheers,
--Jorge




On 7/3/14 10:39 AM, "Phillip Toohill" 
wrote:

>If the objects remain in 'PENDING_CREATE' until provisioned it would seem
>that the process got stuck in that status and may be in a bad state from
>user perspective. I like the idea of QUEUED or similar to reference that
>the object has been accepted but not provisioned.
>
>Phil
>
>On 7/3/14 10:28 AM, "Brandon Logan"  wrote:
>
>>With the new API and object model refactor there have been some issues
>>arising dealing with the status of entities.  The main issue is that
>>Listener, Pool, Member, and Health Monitor can exist independent of a
>>Load Balancer.  The Load Balancer is the entity that will contain the
>>information about which driver to use (through provider or flavor).  If
>>a Listener, Pool, Member, or Health Monitor is created without a link to
>>a Load Balancer, then what status does it have?  At this point it only
>>exists in the database and is really just waiting to be provisioned by a
>>driver/backend.
>>
>>Some possibilities discussed:
>>A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name
>>Entities just remain in PENDING_CREATE until provisioned by a driver
>>Entities just remain in ACTIVE until provisioned by a driver
>>
>>Opinions and suggestions?
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Which entities need status

2014-07-03 Thread Jorge Miramontes
Hey German,

We have similar statuses. I have been wanting to add a 'QUEUED' status
however. The reason is that we currently use 'BUILD' which indicates
active provisioning when in reality it is actually queued first and then
provisioned. Thus, there are potential issues when trying to determine
average provisioning times. Furthermore, customers are accustomed to
certain provisioning times and if those times seem longer than usual they 
tend to complain. If we had a 'QUEUED' status then customers would most 
likely not get upset (or as upset). I would also like the ability to move 
from 'ERROR' back to an 'ACTIVE' state. An 'ERROR' status for us means 
something didn't happen correctly during provisioning or updating. 
However, most of the time the load balancer is still servicing traffic.
Forcing a customer to re-create a load balancer that is serving web
traffic is a bad thing, especially in our case since we have static ip
addresses. We have monitoring on load balancers that go into an 'ERROR'
status and take action to correct the issue.

Cheers,
--Jorge




On 6/24/14 11:30 PM, "Eichberger, German"  wrote:

>Hi,
>
>I second Stephen's suggestion with the status matrix. I have heard of
>(provisional) status, operational status, admin status -- I am curious
>what states exist and how an object would transition between them.
>
>Libra uses only one status field and it transitions as follows:
>
>BUILDING -> ACTIVE|ERROR
>ACTIVE -> DEGRADED|ERROR|DELETED
>DEGRADED -> ACTIVE|ERROR|DELETED
>ERROR -> DELETED
>
>That said I don't think admin status is that important for me as an
>operator since my users usually delete LBs and re-create them. But I am
>curious how other operators feel.
>
>Thanks,
>German
>
>-Original Message-
>From: Brandon Logan [mailto:brandon.lo...@rackspace.com]
>Sent: Tuesday, June 24, 2014 8:46 PM
>To: openstack-dev@lists.openstack.org
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need status
>
>Alright y'all have convinced me for now.  How the status is shown on
>shared entities is yet to be determined.  However, we don't have
>any shared entities (unless we really want health monitors shareable
>right now) at this point so the status won't get complicated for this
>first iteration. 
>
>Thanks,
>Brandon
>
>On Wed, 2014-06-25 at 01:10 +, Doug Wiegley wrote:
>> Hi Stephen,
>> 
>> 
>> > Ultimately, as we will have several objects which have many-to-many
>> relationships with other objects, the 'status' of an object that is
>> shared between what will ultimately be two separate physical entities
>> on the back-end should be represented by a dictionary, and any
>> 'reduction' of this on behalf of the user should happen within the UI.
>> It does make things more complex to deal with in certain kinds of
>> failure scenarios, but we don't help ourselves at all by trying to
>> hide, say, when a member of a pool referenced by one listener is 'UP'
>> and the same member of the same pool referenced by a different
>> listener is 'DOWN'.  :/
>> 
>> 
>> For M:N, that’s actually an additional status field that rightly
>> belongs as another column in the join table, if at all (allow me to
>> queue up all of my normal M:N objections here in this case, I’m just
>> talking normal db representation.)  The bare object itself still has
>> status of its own.
>> 
>> 
>> Doug
>> 
>> 
>> 
>> 
>> 
>> 
>> From: Stephen Balukoff 
>> Reply-To: "OpenStack Development Mailing List (not for usage
>> questions)" 
>> Date: Tuesday, June 24, 2014 at 6:02 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Which entities need
>> status
>> 
>> 
>> 
>> Ultimately, as we will have several objects which have many-to-many
>> relationships with other objects, the 'status' of an object that is
>> shared between what will ultimately be two separate physical entities
>> on the back-end should be represented by a dictionary, and any
>> 'reduction' of this on behalf of the user should happen within the UI.
>> It does make things more complex to deal with in certain kinds of
>> failure scenarios, but we don't help ourselves at all by trying to
>> hide, say, when a member of a pool referenced by one listener is 'UP'
>> and the same member of the same pool referenced by a different
>> listener is 'DOWN'.  :/
>> 
>> 
>> Granted, our version 1 implementation of these objects is going to be
>> simplified, but it doesn't hurt to think about where we're headed with
>> this API and object model.
>> 
>> 
>> I think it would be worthwhile for someone to produce a status matrix
>> showing which kinds of status are available for each object type, and
>> what the possible values of those statuses are, and what they mean.
>> Given the question of what 'status' means is very complicated indeed,
>> I think this is the only way we're going to actually make forward
>> progress in this discussion.
>> 
>> 
>> Stephen
>> 
>> 
>> 
>> 
>> On Tue, Jun 24, 2014

Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 02:33 PM, Kevin Benton wrote:
> Maybe we can require period checks against the head of the master
> branch (which should always pass) and build statistics based on the results
> of that. 
I like this suggestion. I really like this suggestion.

Hmmm, what to do with a good suggestion? I wonder if we could capture
it in an infra-spec and work on it from there.

Would you feel comfortable offering a draft as an infra-spec and then
perhaps we can discuss the design through the spec?

What do you think?

Thanks Kevin,
Anita.

> Otherwise it seems like we have to take a CI system's word for it
> that a particular patch indeed broke that system.
> 
> --
> Kevin Benton
> 
> 
> On Thu, Jul 3, 2014 at 11:07 AM, Anita Kuno  wrote:
> 
>> On 07/03/2014 01:27 PM, Kevin Benton wrote:
 This allows the viewer to see categories of reviews based upon their
 divergence from OpenStack's Jenkins results. I think evaluating
 divergence from Jenkins might be a metric worth consideration.
>>>
>>> I think the only thing this really reflects though is how much the third
>>> party CI system is mirroring Jenkins.
>>> A system that frequently diverges may be functioning perfectly fine and
>>> just has a vastly different code path that it is integration testing so
>> it
>>> is legitimately detecting failures the OpenStack CI cannot.
>> Great.
>>
>> How do we measure the degree to which it is legitimately detecting
>> failures?
>>
>> Thanks Kevin,
>> Anita.
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>> On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno  wrote:
>>>
 On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
> Apologies for quoting again the top post of the thread.
>
> Comments inline (mostly thinking aloud)
> Salvatore
>
>
> On 30 June 2014 22:22, Jay Pipes  wrote:
>
>> Hi Stackers,
>>
>> Some recent ML threads [1] and a hot IRC meeting today [2] brought up
 some
>> legitimate questions around how a newly-proposed Stackalytics report
 page
>> for Neutron External CI systems [2] represented the results of an
 external
>> CI system as "successful" or not.
>>
>> First, I want to say that Ilya and all those involved in the
 Stackalytics
>> program simply want to provide the most accurate information to
 developers
>> in a format that is easily consumed. While there need to be some
 changes in
>> how data is shown (and the wording of things like "Tests Succeeded"),
>> I
>> hope that the community knows there isn't any ill intent on the part
>> of
>> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
>> conversation civil -- we're all working towards the same goals of
>> transparency and accuracy. :)
>>
>> Alright, now, Anita and Kurt Taylor were asking a very poignant
 question:
>>
>> "But what does CI tested really mean? just running tests? or tested to
>> pass some level of requirements?"
>>
>> In this nascent world of external CI systems, we have a set of issues
 that
>> we need to resolve:
>>
>> 1) All of the CI systems are different.
>>
>> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
 scripts.
>> Others run custom Python code that spawns VMs and publishes logs to
>> some
>> public domain.
>>
>> As a community, we need to decide whether it is worth putting in the
>> effort to create a single, unified, installable and runnable CI
>> system,
 so
>> that we can legitimately say "all of the external systems are
>> identical,
>> with the exception of the driver code for vendor X being substituted
>> in
 the
>> Neutron codebase."
>>
>
> I think such system already exists, and it's documented here:
> http://ci.openstack.org/
> Still, understanding it is quite a learning curve, and running it is
>> not
> exactly straightforward. But I guess that's pretty much understandable
> given the complexity of the system, isn't it?
>
>
>>
>> If the goal of the external CI systems is to produce reliable,
 consistent
>> results, I feel the answer to the above is "yes", but I'm interested
>> to
>> hear what others think. Frankly, in the world of benchmarks, it would
>> be
>> unthinkable to say "go ahead and everyone run your own benchmark
>> suite",
>> because you would get wildly different results. A similar problem has
>> emerged here.
>>
>
> I don't think the particular infrastructure which might range from an
> openstack-ci clone to a 100-line bash script would have an impact on
>> the
> "reliability" of the quality assessment regarding a particular driver
>> or
> plugin. This is determined, in my opinion, by the quantity and nature
>> of
> tests one runs on a specific driver. In Neutron for instance, there is
>> a
> wide range of choices - from a few test cases in tempest.api.network to

Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Fawad Khaliq
On Thu, Jul 3, 2014 at 10:27 AM, Kevin Benton  wrote:

> >This allows the viewer to see categories of reviews based upon their
> >divergence from OpenStack's Jenkins results. I think evaluating
> >divergence from Jenkins might be a metric worth consideration.
>
> I think the only thing this really reflects though is how much the third
> party CI system is mirroring Jenkins.
>  A system that frequently diverges may be functioning perfectly fine and
> just has a vastly different code path that it is integration testing so it
> is legitimately detecting failures the OpenStack CI cannot.
>
Exactly. +1

>
> --
> Kevin Benton
>
>
> On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno  wrote:
>
>> On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
>> > Apologies for quoting again the top post of the thread.
>> >
>> > Comments inline (mostly thinking aloud)
>> > Salvatore
>> >
>> >
>> > On 30 June 2014 22:22, Jay Pipes  wrote:
>> >
>> >> Hi Stackers,
>> >>
>> >> Some recent ML threads [1] and a hot IRC meeting today [2] brought up
>> some
>> >> legitimate questions around how a newly-proposed Stackalytics report
>> page
>> >> for Neutron External CI systems [2] represented the results of an
>> external
>> >> CI system as "successful" or not.
>> >>
>> >> First, I want to say that Ilya and all those involved in the
>> Stackalytics
>> >> program simply want to provide the most accurate information to
>> developers
>> >> in a format that is easily consumed. While there need to be some
>> changes in
>> >> how data is shown (and the wording of things like "Tests Succeeded"), I
>> >> hope that the community knows there isn't any ill intent on the part of
>> >> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
>> >> conversation civil -- we're all working towards the same goals of
>> >> transparency and accuracy. :)
>> >>
>> >> Alright, now, Anita and Kurt Taylor were asking a very poignant
>> question:
>> >>
>> >> "But what does CI tested really mean? just running tests? or tested to
>> >> pass some level of requirements?"
>> >>
>> >> In this nascent world of external CI systems, we have a set of issues
>> that
>> >> we need to resolve:
>> >>
>> >> 1) All of the CI systems are different.
>> >>
>> >> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
>> scripts.
>> >> Others run custom Python code that spawns VMs and publishes logs to
>> some
>> >> public domain.
>> >>
>> >> As a community, we need to decide whether it is worth putting in the
>> >> effort to create a single, unified, installable and runnable CI
>> system, so
>> >> that we can legitimately say "all of the external systems are
>> identical,
>> >> with the exception of the driver code for vendor X being substituted
>> in the
>> >> Neutron codebase."
>> >>
>> >
>> > I think such system already exists, and it's documented here:
>> > http://ci.openstack.org/
>> > Still, understanding it is quite a learning curve, and running it is not
>> > exactly straightforward. But I guess that's pretty much understandable
>> > given the complexity of the system, isn't it?
>> >
>> >
>> >>
>> >> If the goal of the external CI systems is to produce reliable,
>> consistent
>> >> results, I feel the answer to the above is "yes", but I'm interested to
>> >> hear what others think. Frankly, in the world of benchmarks, it would
>> be
>> >> unthinkable to say "go ahead and everyone run your own benchmark
>> suite",
>> >> because you would get wildly different results. A similar problem has
>> >> emerged here.
>> >>
>> >
>> > I don't think the particular infrastructure which might range from an
>> > openstack-ci clone to a 100-line bash script would have an impact on the
>> > "reliability" of the quality assessment regarding a particular driver or
>> > plugin. This is determined, in my opinion, by the quantity and nature of
>> > tests one runs on a specific driver. In Neutron for instance, there is a
>> > wide range of choices - from a few test cases in tempest.api.network to
>> the
>> > full smoketest job. As long there is no minimal standard here, then it
>> > would be difficult to assess the quality of the evaluation from a CI
>> > system, unless we explicitly keep into account coverage into the
>> evaluation.
>> >
>> > On the other hand, different CI infrastructures will have different
>> levels
>> > in terms of % of patches tested and % of infrastructure failures. I
>> think
>> > it might not be a terrible idea to use these parameters to evaluate how
>> > good a CI is from an infra standpoint. However, there are still open
>> > questions. For instance, a CI might have a low patch % score because it
>> > only needs to test patches affecting a given driver.
>> >
>> >
>> >> 2) There is no mediation or verification that the external CI system is
>> >> actually testing anything at all
>> >>
>> >> As a community, we need to decide whether the current system of
>> >> self-policing should continue. If it should, then language on reports
>> like
>> >> [3] should be ve

Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - compare_type values

2014-07-03 Thread Jorge Miramontes
I agree.

Also, since we are planning on having two different API versions run in 
parallel, the only driver that needs to be worked on initially is the reference 
implementation. I'm guessing we will have two reference implementations, one 
for v1 and one for v2. The v2 implementation currently seems to be modified 
from v1 in order to get the highest velocity in terms of exposing API 
functionality. There is a reason we aren't working on Octavia right now, and I 
think the same rationale holds for other drivers. So, I believe we should 
expose as much functionality as possible with a functional open-source driver, 
and then other drivers will catch up.

As for drivers that can't implement certain features, the only potential issue I 
see is a type of vendor lock-in. For example, let's say I am an 
operator-agnostic power API user. I host with operator A, and they use a driver 
that implements all functionality exposed via the API. Now, let's say I want to 
move to operator B because operator A isn't working for me. Let's also say that 
operator B doesn't implement all functionality exposed via the API. From the 
user's perspective, they are locked out of going to operator B because their 
API-integrated code won't port seamlessly. With this example in mind, however, I 
also don't think it is fair for certain drivers to hold other drivers 
"hostage". From my perspective, if users really want a feature then every 
driver implementor should have an incentive to implement said feature, and 
doing so will benefit them in the long run. Anyways, that's my $0.02.

Cheers,
--Jorge

From: Stephen Balukoff <sbaluk...@bluebox.net>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, June 24, 2014 7:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
compare_type values

Making sure all drivers support the features offered in Neutron LBaaS means we 
are stuck going with the 'least common denominator' in all cases. While this 
ensures all vendors implement the same things in functionally the same way, it 
also is probably a big reason the Neutron LBaaS project has been so incredibly 
slow in seeing new features added over the last two years.

In the gerrit review that Dustin linked, it sounds like the people contributing 
to the discussion are in favor of allowing drivers to reject some 
configurations as unsupported through use of exceptions (details on how that 
will work are being hashed out now, if you want to participate in that 
discussion). Let's assume, therefore, that with the LBaaS v2 API and object 
model we're also going to get this ability -- which of course also means that 
drivers do not have to support every feature exposed by the API.

(And again, as Dustin pointed out, a Linux LVS-based driver definitely wouldn't 
be able to support any L7 features at all, yet it's still a very useful driver 
for many deployments.)

Finally, I do not believe that the LBaaS project should be "held back" because 
one vendor's implementation doesn't work well with a couple of features exposed in 
the API. As Dustin said, let the API expose a rich feature set and allow 
drivers to reject certain configurations when they don't support them.

Stephen



On Tue, Jun 24, 2014 at 9:09 AM, Dustin Lundquist 
<dus...@null-ptr.net> wrote:
I brought this up on https://review.openstack.org/#/c/101084/.


-Dustin


On Tue, Jun 24, 2014 at 7:57 AM, Avishay Balderman 
<avish...@radware.com> wrote:
Hi Dustin
I agree with the concept you described but as far as I understand it is not 
currently supported in Neutron.
So a driver should be fully compatible with the interface it implements.

Avishay

From: Dustin Lundquist [mailto:dus...@null-ptr.net]
Sent: Tuesday, June 24, 2014 5:41 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Layer7 Switching - L7 Rule - 
compare_type values

I think the API should provide a richly featured interface, and individual 
drivers should indicate if they support the provided configuration. For example, 
there is a spec for a Linux LVS LBaaS driver; this driver would not support TLS 
termination or any layer 7 features, but would still be valuable for some 
deployments. The user experience of such a solution could be improved if the 
driver propagated up a message specifically identifying the unsupported 
feature.
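
For illustration, a minimal sketch of that rejection pattern (the class and
exception names here are made up for the example, not an existing Neutron
API):

    class UnsupportedConfiguration(Exception):
        """Raised by a driver for a configuration it cannot realize."""

    class LinuxLVSDriver(object):
        # LVS balances at layer 4, so no L7 rule support at all.
        supports_l7 = False

        def create_l7_rule(self, rule):
            if not self.supports_l7:
                raise UnsupportedConfiguration(
                    'L7 rules are not supported by the LVS driver')
            # ... otherwise program the backend ...

The API layer could then catch such an exception and return a clear
"unsupported by this provider" error instead of a generic failure.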


-Dustin

On Tue, Jun 24, 2014 at 4:28 AM, Avishay Balderman 
<avish...@radware.com> wrote:
Hi
One of the L7 Rule attributes is ‘compare_type’.
This field is the match operator that the rule applies against the 
value found in the request.
Below is list of the possible values:
- Regexp
- StartsWith
- EndsWith
- Contains
- EqualTo (*)
- GreaterThan (*)
- LessThan (*)

The last
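
For illustration, here is a minimal sketch of how the operators listed above
could be evaluated against a value extracted from the request (the
dispatch-table layout is my assumption, not an existing implementation):

    import re

    COMPARE_OPS = {
        'Regexp':      lambda value, arg: re.search(arg, value) is not None,
        'StartsWith':  lambda value, arg: value.startswith(arg),
        'EndsWith':    lambda value, arg: value.endswith(arg),
        'Contains':    lambda value, arg: arg in value,
        'EqualTo':     lambda value, arg: value == arg,
        'GreaterThan': lambda value, arg: float(value) > float(arg),
        'LessThan':    lambda value, arg: float(value) < float(arg),
    }

    def rule_matches(compare_type, rule_arg, request_value):
        # e.g. rule_matches('StartsWith', '/api/', request_path)
        return COMPARE_OPS[compare_type](request_value, rule_arg)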

Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Kevin Benton
Maybe we can require periodic checks against the head of the master
branch (which should always pass) and build statistics based on the results
of those runs. Otherwise it seems like we have to take a CI system's word for
it that a particular patch indeed broke that system.
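
A rough sketch of the statistic such periodic runs would enable (the data
layout is an assumption for the example):

    def master_pass_rate(periodic_results):
        # periodic_results: one boolean per periodic run against master HEAD,
        # which should always pass for a healthy CI system.
        if not periodic_results:
            return None  # no data; we'd be taking the CI's word for it
        passed = sum(1 for ok in periodic_results if ok)
        return float(passed) / len(periodic_results)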

--
Kevin Benton


On Thu, Jul 3, 2014 at 11:07 AM, Anita Kuno  wrote:

> On 07/03/2014 01:27 PM, Kevin Benton wrote:
> >> This allows the viewer to see categories of reviews based upon their
> >> divergence from OpenStack's Jenkins results. I think evaluating
> >> divergence from Jenkins might be a metric worth consideration.
> >
> > I think the only thing this really reflects though is how much the third
> > party CI system is mirroring Jenkins.
> > A system that frequently diverges may be functioning perfectly fine and
> > just has a vastly different code path that it is integration testing so
> it
> > is legitimately detecting failures the OpenStack CI cannot.
> Great.
>
> How do we measure the degree to which it is legitimately detecting
> failures?
>
> Thanks Kevin,
> Anita.
> >
> > --
> > Kevin Benton
> >
> >
> > On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno  wrote:
> >
> >> On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
> >>> Apologies for quoting again the top post of the thread.
> >>>
> >>> Comments inline (mostly thinking aloud)
> >>> Salvatore
> >>>
> >>>
> >>> On 30 June 2014 22:22, Jay Pipes  wrote:
> >>>
>  Hi Stackers,
> 
>  Some recent ML threads [1] and a hot IRC meeting today [2] brought up
> >> some
>  legitimate questions around how a newly-proposed Stackalytics report
> >> page
>  for Neutron External CI systems [2] represented the results of an
> >> external
>  CI system as "successful" or not.
> 
>  First, I want to say that Ilya and all those involved in the
> >> Stackalytics
>  program simply want to provide the most accurate information to
> >> developers
>  in a format that is easily consumed. While there need to be some
> >> changes in
>  how data is shown (and the wording of things like "Tests Succeeded"),
> I
>  hope that the community knows there isn't any ill intent on the part
> of
>  Mirantis or anyone who works on Stackalytics. OK, so let's keep the
>  conversation civil -- we're all working towards the same goals of
>  transparency and accuracy. :)
> 
>  Alright, now, Anita and Kurt Taylor were asking a very poignant
> >> question:
> 
>  "But what does CI tested really mean? just running tests? or tested to
>  pass some level of requirements?"
> 
>  In this nascent world of external CI systems, we have a set of issues
> >> that
>  we need to resolve:
> 
>  1) All of the CI systems are different.
> 
>  Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> >> scripts.
>  Others run custom Python code that spawns VMs and publishes logs to
> some
>  public domain.
> 
>  As a community, we need to decide whether it is worth putting in the
>  effort to create a single, unified, installable and runnable CI
> system,
> >> so
>  that we can legitimately say "all of the external systems are
> identical,
>  with the exception of the driver code for vendor X being substituted
> in
> >> the
>  Neutron codebase."
> 
> >>>
> >>> I think such system already exists, and it's documented here:
> >>> http://ci.openstack.org/
> >>> Still, understanding it is quite a learning curve, and running it is
> not
> >>> exactly straightforward. But I guess that's pretty much understandable
> >>> given the complexity of the system, isn't it?
> >>>
> >>>
> 
>  If the goal of the external CI systems is to produce reliable,
> >> consistent
>  results, I feel the answer to the above is "yes", but I'm interested
> to
>  hear what others think. Frankly, in the world of benchmarks, it would
> be
>  unthinkable to say "go ahead and everyone run your own benchmark
> suite",
>  because you would get wildly different results. A similar problem has
>  emerged here.
> 
> >>>
> >>> I don't think the particular infrastructure which might range from an
> >>> openstack-ci clone to a 100-line bash script would have an impact on
> the
> >>> "reliability" of the quality assessment regarding a particular driver
> or
> >>> plugin. This is determined, in my opinion, by the quantity and nature
> of
> >>> tests one runs on a specific driver. In Neutron for instance, there is
> a
> >>> wide range of choices - from a few test cases in tempest.api.network to
> >> the
> >>> full smoketest job. As long there is no minimal standard here, then it
> >>> would be difficult to assess the quality of the evaluation from a CI
> >>> system, unless we explicitly keep into account coverage into the
> >> evaluation.
> >>>
> >>> On the other hand, different CI infrastructures will have different
> >> levels
> >>> in terms of % of patches tested and % of infrastructure failures. I
> think
> >>> it

Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-03 Thread Jay Pipes

On 07/03/2014 02:10 PM, Kevin Benton wrote:

The reason I thought it changed was that this is the first cycle where I
have encountered scenarios where my unit tests for the patch run fine
locally, but then they fail when they are checked by Jenkins (caused by
a change after the parent of my patch). I suppose I was just lucky
before and never had anything merge after I proposed a patch that caused
a conflict with mine.

I suspect this is a problem then for many third-party CI systems because
the simple approach of setting [PROJECT]_REPO and [PROJECT]_BRANCH in
devstack to point to the gerrit server will not work correctly since it
will just test the patch without merging it.

Where is this merging process handled in the OpenStack CI? Is that done
in Zuul, with the custom Zuul branch that is passed to devstack?


Yes. The zuul-merger daemon is responsible for managing this, and the 
devstack-gate project handles the checkout and setup of the git repos 
for all of the OpenStack projects.
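
For third-party systems that want the same behavior, something along these
lines approximates what zuul-merger does before tests run (a sketch using
plain git via subprocess; the Gerrit change ref format is real, the helper
itself is hypothetical):

    import subprocess

    def prepare_merged_tree(repo_dir, gerrit_url, change_ref, branch='master'):
        # Merge the proposed change onto the tip of its target branch, so
        # the tests see the tree that would result from actually merging it.
        def git(*args):
            subprocess.check_call(('git',) + args, cwd=repo_dir)
        git('fetch', 'origin', branch)
        git('checkout', 'FETCH_HEAD')
        git('fetch', gerrit_url, change_ref)  # e.g. refs/changes/84/101084/1
        git('merge', '--no-edit', 'FETCH_HEAD')  # a failure here is a conflict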


Best,
-jay


--
Kevin Benton


On Tue, Jul 1, 2014 at 4:00 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:

On 2014-07-01 10:05:45 -0700 (-0700), Kevin Benton wrote:
[...]
 > As I understand it, this behavior for the main OpenStack CI check
 > queue changed to the latter some time over the past few months.
[...]

I'm not sure what you think changed, but we've (upstream OpenStack
CI) been testing proposed patches merged to their target branches
for years...
--
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Kevin Benton


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-03 Thread Kevin Benton
>In short, you need to test every single proposed patch to the system fully
and consistently, otherwise there's simply no point in running any tests at
all, as you will spend an inordinate amount of time tracking down what
broke what.

I agree that every patch should be tested. However, since third party
systems aren't involved in the serial gate merge process, there is still a
chance that a patch can break a third party system after it gets merged
into master. To check for this condition with a third-party CI, you also
need a job that runs after every merge into master so the maintainers can
immediately identify a patch that caused a failure after merging and
disable their checks until it is fixed.
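
As a sketch of that post-merge job (the 'ssh ... gerrit stream-events'
command and the change-merged event are real Gerrit features; the job hook
is hypothetical):

    import json
    import subprocess

    def watch_master_merges(project='openstack/neutron'):
        # Listen to Gerrit's event stream and kick off a run against master
        # for every change that merges into the watched project.
        stream = subprocess.Popen(
            ['ssh', '-p', '29418', 'review.openstack.org',
             'gerrit', 'stream-events'],
            stdout=subprocess.PIPE)
        for line in iter(stream.stdout.readline, b''):
            event = json.loads(line)
            if (event.get('type') == 'change-merged'
                    and event.get('change', {}).get('project') == project):
                trigger_master_job(event['change']['number'])  # hypothetical hook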


On Thu, Jul 3, 2014 at 10:06 AM, Jay Pipes  wrote:

> On 07/03/2014 08:42 AM, Luke Gorrie wrote:
>
>> On 3 July 2014 02:44, Michael Still <mi...@stillhq.com> wrote:
>>
>> The main purpose is to let change reviewers know that a change might
>> be problematic for a piece of code not well tested by the gate
>>
>>
>> Just a thought:
>>
>> A "sampling" approach could be a reasonable way to stay responsive under
>> heavy load and still give a strong signal to reviewers about whether a
>> change is likely to be problematic.
>>
>> I mean: Kevin mentions that his CI gets an hours-long queue during peak
>> review season. One way to deal with that could be skipping some events
>> e.g. toss a coin to decide whether to test the next revision of a change
>> that he has already +1'd previously. That would keep responsiveness
>> under control even when throughput is a problem.
>>
>> (A bit like how a router manages a congested input queue or how a
>> sampling profiler keeps overhead low.)
>>
>> Could be worth keeping the rules flexible enough to permit this kind of
>> thing, at least?
>>
>
> The problem with this is that it assumes all patch sets contain equivalent
> levels of change, which is incorrect. One patch set may contain changes
> that significantly affect the SnappCo plugin. A sampling system might miss
> that important patchset, and you'd spend a lot of time trying to figure out
> which patch caused issues for you when a later patchset (that included the
> problematic important patch that was merged) causes failures that seem
> unrelated to the patch currently undergoing tests.
>
> In short, you need to test every single proposed patch to the system fully
> and consistently, otherwise there's simply no point in running any tests at
> all, as you will spend an inordinate amount of time tracking down what
> broke what.
>
> Best,
> -jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] - rebasing patches for CI

2014-07-03 Thread Kevin Benton
The reason I thought it changed was that this is the first cycle where I
have encountered scenarios where my unit tests for the patch run fine
locally, but then they fail when they are checked by Jenkins (caused by a
change after the parent of my patch). I suppose I was just lucky before and
never had anything merge after I proposed a patch that caused a conflict
with mine.

I suspect this is a problem then for many third-party CI systems because
the simple approach of setting [PROJECT]_REPO and [PROJECT]_BRANCH in
devstack to point to the gerrit server will not work correctly since it
will just test the patch without merging it.

Where is this merging process handled in the OpenStack CI? Is that done in
Zuul, with the custom Zuul branch that is passed to devstack?

--
Kevin Benton


On Tue, Jul 1, 2014 at 4:00 PM, Jeremy Stanley  wrote:

> On 2014-07-01 10:05:45 -0700 (-0700), Kevin Benton wrote:
> [...]
> > As I understand it, this behavior for the main OpenStack CI check
> > queue changed to the latter some time over the past few months.
> [...]
>
> I'm not sure what you think changed, but we've (upstream OpenStack
> CI) been testing proposed patches merged to their target branches
> for years...
> --
> Jeremy Stanley
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 01:27 PM, Kevin Benton wrote:
>> This allows the viewer to see categories of reviews based upon their
>> divergence from OpenStack's Jenkins results. I think evaluating
>> divergence from Jenkins might be a metric worth consideration.
> 
> I think the only thing this really reflects though is how much the third
> party CI system is mirroring Jenkins.
> A system that frequently diverges may be functioning perfectly fine and
> just has a vastly different code path that it is integration testing so it
> is legitimately detecting failures the OpenStack CI cannot.
Great.

How do we measure the degree to which it is legitimately detecting failures?

Thanks Kevin,
Anita.
> 
> --
> Kevin Benton
> 
> 
> On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno  wrote:
> 
>> On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
>>> Apologies for quoting again the top post of the thread.
>>>
>>> Comments inline (mostly thinking aloud)
>>> Salvatore
>>>
>>>
>>> On 30 June 2014 22:22, Jay Pipes  wrote:
>>>
 Hi Stackers,

 Some recent ML threads [1] and a hot IRC meeting today [2] brought up
>> some
 legitimate questions around how a newly-proposed Stackalytics report
>> page
 for Neutron External CI systems [2] represented the results of an
>> external
 CI system as "successful" or not.

 First, I want to say that Ilya and all those involved in the
>> Stackalytics
 program simply want to provide the most accurate information to
>> developers
 in a format that is easily consumed. While there need to be some
>> changes in
 how data is shown (and the wording of things like "Tests Succeeded"), I
 hope that the community knows there isn't any ill intent on the part of
 Mirantis or anyone who works on Stackalytics. OK, so let's keep the
 conversation civil -- we're all working towards the same goals of
 transparency and accuracy. :)

 Alright, now, Anita and Kurt Taylor were asking a very poignant
>> question:

 "But what does CI tested really mean? just running tests? or tested to
 pass some level of requirements?"

 In this nascent world of external CI systems, we have a set of issues
>> that
 we need to resolve:

 1) All of the CI systems are different.

 Some run Bash scripts. Some run Jenkins slaves and devstack-gate
>> scripts.
 Others run custom Python code that spawns VMs and publishes logs to some
 public domain.

 As a community, we need to decide whether it is worth putting in the
 effort to create a single, unified, installable and runnable CI system,
>> so
 that we can legitimately say "all of the external systems are identical,
 with the exception of the driver code for vendor X being substituted in
>> the
 Neutron codebase."

>>>
>>> I think such system already exists, and it's documented here:
>>> http://ci.openstack.org/
>>> Still, understanding it is quite a learning curve, and running it is not
>>> exactly straightforward. But I guess that's pretty much understandable
>>> given the complexity of the system, isn't it?
>>>
>>>

 If the goal of the external CI systems is to produce reliable,
>> consistent
 results, I feel the answer to the above is "yes", but I'm interested to
 hear what others think. Frankly, in the world of benchmarks, it would be
 unthinkable to say "go ahead and everyone run your own benchmark suite",
 because you would get wildly different results. A similar problem has
 emerged here.

>>>
>>> I don't think the particular infrastructure which might range from an
>>> openstack-ci clone to a 100-line bash script would have an impact on the
>>> "reliability" of the quality assessment regarding a particular driver or
>>> plugin. This is determined, in my opinion, by the quantity and nature of
>>> tests one runs on a specific driver. In Neutron for instance, there is a
>>> wide range of choices - from a few test cases in tempest.api.network to
>> the
>>> full smoketest job. As long there is no minimal standard here, then it
>>> would be difficult to assess the quality of the evaluation from a CI
>>> system, unless we explicitly keep into account coverage into the
>> evaluation.
>>>
>>> On the other hand, different CI infrastructures will have different
>> levels
>>> in terms of % of patches tested and % of infrastructure failures. I think
>>> it might not be a terrible idea to use these parameters to evaluate how
>>> good a CI is from an infra standpoint. However, there are still open
>>> questions. For instance, a CI might have a low patch % score because it
>>> only needs to test patches affecting a given driver.
>>>
>>>
 2) There is no mediation or verification that the external CI system is
 actually testing anything at all

 As a community, we need to decide whether the current system of
 self-policing should continue. If it should, then language on reports
>> like
 [3] should be very clear that a

Re: [openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Alexei Kornienko

Hi,

You can use /oslo.messaging._drivers.impl_rabbit/ instead of impl_kombu.
It was renamed and slightly changed, but I think it will work as you expect.

Regards,
Alexei Kornienko

On 07/03/2014 08:47 PM, Nader Lahouti wrote:

Hi All and Ihar,

As part of blueprint oslo-messaging the neutron/openstack/common/rpc 
tree is removed. I was using impl_kombu module to process notification 
from keystone with this following code sample:

...
from neutron.openstack.common.rpc import impl_kombu

try:
    conf = impl_kombu.cfg.CONF
    topicname = self._topic_name
    exchange = self._exchange_name
    connection = impl_kombu.Connection(conf)
    connection.declare_topic_consumer(topicname,
                                      self.callback,
                                      topicname, exchange)
    connection.consume()
except Exception:
    connection.close()


Can you please let me know what needs to be done to replace the above code 
and make it work with the current neutron code?



Thanks in advance,
Nader.





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] [Gantt] Scheduler split status

2014-07-03 Thread Sylvain Bauza
Hi,

==
tl;dr: A decision has been made to split the scheduler out into a
separate project that will not initially have feature parity with
nova-scheduler; your comments are welcome.
==

As was agreed a cycle ago, the nova-scheduler will be ported
to a separate OpenStack project, called Gantt [1]. The plan is to do all
the necessary changes in Nova, and then split the code into a separate
project and provide CI against the new project [2].


During the preparation phase, a couple of blueprints were identified
which need to be delivered before the split can happen:

A/
https://blueprints.launchpad.net/nova/+spec/remove-cast-to-schedule-run-instance
(merged): was about removing the possibility for the scheduler to proxy
calls to compute nodes. Now the scheduler can't call compute nodes when
booting. That said, there is still one pending action [3] about cold
migrations that needs to be tackled. Your reviews are welcome on the
spec [4] and implementation [5].


B/ A scheduler library has to be provided, so the interface will be the
same for both nova-scheduler and Gantt. The idea is to define all the
inputs/outputs of the scheduler, in particular how we update the
scheduler's internal state (here the ComputeNode table). The spec has been
approved, and the implementation is waiting for reviews [6]. The main
problem is the ComputeNode (well, compute_nodes, to be precise) table:
it has a foreign key on Service, and the PCI tracker has a foreign key on
the ComputeNode ID primary key, both of which require the table to be
left in Nova (albeit solely for the use of the scheduler).
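
To make the foreign-key tangle concrete, a condensed model sketch (trimmed
to just the keys under discussion; the real tables carry many more columns):

    from sqlalchemy import Column, ForeignKey, Integer
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Service(Base):
        __tablename__ = 'services'
        id = Column(Integer, primary_key=True)

    class ComputeNode(Base):
        __tablename__ = 'compute_nodes'
        id = Column(Integer, primary_key=True)
        # the foreign key on Service that ties the table to Nova
        service_id = Column(Integer, ForeignKey('services.id'))

    class PciDevice(Base):
        __tablename__ = 'pci_devices'
        id = Column(Integer, primary_key=True)
        # the foreign key the PCI tracker holds on ComputeNode's primary key
        compute_node_id = Column(Integer, ForeignKey('compute_nodes.id'))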

C/ Some of the scheduler filters currently access other Nova objects
(aggregates and instance groups), and ServiceGroups are accessed by the
scheduler driver to know the state of each host (is it up or not?), so
we need to port these calls to Nova and update the scheduler state from
a distant perspective. This spec is currently under review [7] and the
changes are currently disputed [8].



During the last Gantt meeting, held Tuesday, we discussed the status
and the problems we have. As we are close to Juno-2, there are
some concerns about which blueprints will be implemented by Juno, and
therefore about when Gantt can be updated. Due to the problems raised in the
different blueprints (please see the links there), it has been agreed to
follow a path a bit different from the one agreed at the Summit: once
B/ is merged, Gantt will be updated and work will happen in there, while
work on C/ will happen in parallel. That means we need to backport to
Gantt all changes happening to the scheduler, but (and this is the most
important point) until C/ is merged into Gantt, Gantt won't support
filters which decide on aggregates or instance groups. In other words,
until C/ happens (but also A/), Gantt won't have feature parity with
nova-scheduler.

That doesn't mean Gantt will move forward and leave the missing features
behind; feature parity will remain the top priority, but
that implies that the first releases of Gantt will be experimental and
considered for testing purposes only.


Your thoughts are welcome here whatever your opinion is.


Thanks,
-Sylvain

[1] https://etherpad.openstack.org/p/icehouse-external-scheduler
[2] https://etherpad.openstack.org/p/juno-nova-gantt-apis
[3]
https://blueprints.launchpad.net/nova/+spec/move-prep-resize-to-conductor
[4] https://review.openstack.org/94916
[5] https://review.openstack.org/103503 (WIP)
[6] https://review.openstack.org/82778
[7] https://review.openstack.org/89893
[8] https://review.openstack.org/101128 and
https://review.openstack.org/101196

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Heat] Reminder: Mid-cycle Meetup - Attendance Confirmation

2014-07-03 Thread Jordan OMara

On 30/06/14 13:02 -0400, Jordan OMara wrote:

On 25/06/14 14:32 -0400, Jordan OMara wrote:

On 25/06/14 18:20 +, Carlino, Chuck (OpenStack TripleO, Neutron) wrote:

Is $179/day the expected rate?

Thanks,
Chuck


Yes, that's the best rate available from both of the downtown
(walkable) hotels.


Just an update that we only have a few rooms left in our block at the
Marriott. Please book ASAP if you haven't.


Final reminder: our group rate expires tomorrow!


--
Jordan O'Mara 
Red Hat Engineering, Raleigh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo incubator logging issues

2014-07-03 Thread Doug Hellmann
On Mon, Jun 30, 2014 at 9:00 AM, Sean Dague  wrote:
> Every time I crack open the nova logs in detail, at least 2 new oslo
> incubator log issues have been introduced.
>
> The current ones is clearly someone is over exploding arrays, as we're
> getting things like:
> 2014-06-29 13:36:41.403 19459 DEBUG nova.openstack.common.processutils
> [-] Running cmd (subprocess): [ ' e n v ' ,   ' L C _ A L L = C ' ,   '
> L A N G = C ' ,   ' q e m u - i m g ' ,   ' i n f o ' ,   ' / o p t / s
> t a c k / d a t a / n o v a / i n s t a n c e s / e f f 7 3 1 3 a - 1 1
> b 2 - 4 0 2 b - 9 c c d - 6 5 7 8 c b 8 7 9 2 d b / d i s k ' ] execute
> /opt/stack/new/nova/nova/openstack/common/processutils.py:160
>
> (yes all those spaces are in there, which now effectively inhibits search).
>
> Also on every wsgi request to Nova API we get something like this:
>
>
> 2014-06-29 13:26:43.836 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute:get will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.837 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:security_groups will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:security_groups will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:keypairs will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:hide_server_addresses will be now enforced
> enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.838 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:extended_volumes will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:config_drive will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:server_usage will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.842 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:extended_status will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:extended_server_attributes will be now enforced
> enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:extended_ips_mac will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:extended_ips will be now enforced enforce
> /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.843 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]
> Rule compute_extension:extended_availability_zone will be now enforced
> enforce /opt/stack/new/nova/nova/openstack/common/policy.py:288
> 2014-06-29 13:26:43.844 DEBUG nova.openstack.common.policy
> [req-86680d63-6d6c-4962-9274-1de7de8ca37d
> FixedIPsNegativeTestJson-768779075 FixedIPsNegativeTestJson-579905596]

Re: [openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Doug Hellmann
You'll find the documentation for using oslo.messaging at
http://docs.openstack.org/developer/oslo.messaging/

Based on the fact that you mention listening for notifications, you
probably want to look at the notification listener documentation in
particular 
(http://docs.openstack.org/developer/oslo.messaging/notification_listener.html).
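
As a rough sketch of the replacement (the topic and exchange values are
assumptions to adapt for your deployment):

    from oslo.config import cfg
    from oslo import messaging

    class NotificationEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            # called for INFO-priority notifications, e.g. from keystone
            print(event_type, payload)

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications', exchange='keystone')]
    listener = messaging.get_notification_listener(transport, targets,
                                                   [NotificationEndpoint()])
    listener.start()
    listener.wait()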

Doug

On Thu, Jul 3, 2014 at 1:47 PM, Nader Lahouti  wrote:
> Hi All and Ihar,
>
> As part of blueprint oslo-messaging the neutron/openstack/common/rpc tree is
> removed. I was using impl_kombu module to process notification from keystone
> with this following code sample:
> ...
> from neutron.openstack.common.rpc import impl_kombu
>
> try:
>     conf = impl_kombu.cfg.CONF
>     topicname = self._topic_name
>     exchange = self._exchange_name
>     connection = impl_kombu.Connection(conf)
>     connection.declare_topic_consumer(topicname,
>                                       self.callback,
>                                       topicname, exchange)
>     connection.consume()
> except Exception:
>     connection.close()
>
>
> Can you please let me know what needs to be done to replace the above code and
> make it work with the current neutron code?
>
>
> Thanks in advance,
> Nader.
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][oslo.messaging]

2014-07-03 Thread Nader Lahouti
Hi All and Ihar,

As part of blueprint oslo-messaging the neutron/openstack/common/rpc tree
is removed. I was using impl_kombu module to process notification from
keystone with this following code sample:
...
from neutron.openstack.common.rpc import impl_kombu

try:
    conf = impl_kombu.cfg.CONF
    topicname = self._topic_name
    exchange = self._exchange_name
    connection = impl_kombu.Connection(conf)
    connection.declare_topic_consumer(topicname,
                                      self.callback,
                                      topicname, exchange)
    connection.consume()
except Exception:
    connection.close()


Can you please let me know what needs to be done to replace the above code and
make it work with the current neutron code?


Thanks in advance,
Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] Nominations for jenkins-job-builder core

2014-07-03 Thread Zaro
Looks like there are no objections to these nominations, so I have made Marc
and Darragh Jenkins Job Builder core reviewers. Congrats guys, and keep the
reviews rolling.


On Mon, Jun 30, 2014 at 10:52 AM, Darragh Bailey 
wrote:

>
>
> On 20 June 2014 22:01, James E. Blair  wrote:
>
>> Hi,
>>
>> The Jenkins Job Builder project (part of the Infrastructure program) is
>> quite popular even outside of OpenStack and has a group of specialist
>> core reviewers supplemental to the rest of the Infrastructure program.
>>
>> To that group I would like to add Darragh Bailey:
>>
>>
>> https://review.openstack.org/#/q/reviewer:%22Darragh+Bailey%22+project:openstack-infra/jenkins-job-builder,n,z
>>
>> -Jim
>>
>>
> Thanks for the nomination Jim, I'm enjoying working on JJB immensely and
> hope that I can continue to contribute in a way that others appreciate.
>
> --
> Darragh Bailey
> "Nothing is foolproof to a sufficiently talented fool"
>
> ___
> OpenStack-Infra mailing list
> openstack-in...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Openstack and SQLAlchemy

2014-07-03 Thread Doug Hellmann
On Mon, Jun 30, 2014 at 12:56 PM, Mike Bayer  wrote:
> Hi all -
>
> For those who don't know me, I'm Mike Bayer, creator/maintainer of
> SQLAlchemy, Alembic migrations and Dogpile caching.   In the past month
> I've become a full time Openstack developer working for Red Hat, given
> the task of carrying Openstack's database integration story forward.
> To that extent I am focused on the oslo.db project which going forward
> will serve as the basis for database patterns used by other Openstack
> applications.
>
> I've summarized what I've learned from the community over the past month
> in a wiki entry at:
>
> https://wiki.openstack.org/wiki/Openstack_and_SQLAlchemy
>
> The page also refers to an ORM performance proof of concept which you
> can see at https://github.com/zzzeek/nova_poc.
>
> The goal of this wiki page is to publish to the community what's come up
> for me so far, to get additional information and comments, and finally
> to help me narrow down the areas in which the community would most
> benefit by my contributions.
>
> I'd like to get a discussion going here, on the wiki, on IRC (where I am
> on freenode with the nickname zzzeek) with the goal of solidifying the
> blueprints, issues, and SQLAlchemy / Alembic features I'll be focusing
> on as well as recruiting contributors to help in all those areas.  I
> would welcome contributors on the SQLAlchemy / Alembic projects directly
> as well, as we have many areas that are directly applicable to Openstack.
>
> I'd like to thank Red Hat and the Openstack community for welcoming me
> on board and I'm looking forward to digging in more deeply in the coming
> months!
>
> - mike

Good stuff, Mike, thanks for writing it all down. I'm looking forward
to seeing how much performance can be improved without drastic
rewrites! :-)

Doug

>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Kevin Benton
>This allows the viewer to see categories of reviews based upon their
>divergence from OpenStack's Jenkins results. I think evaluating
>divergence from Jenkins might be a metric worth consideration.

I think the only thing this really reflects though is how much the third
party CI system is mirroring Jenkins.
A system that frequently diverges may be functioning perfectly fine and
just has a vastly different code path that it is integration testing so it
is legitimately detecting failures the OpenStack CI cannot.

--
Kevin Benton


On Thu, Jul 3, 2014 at 6:49 AM, Anita Kuno  wrote:

> On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
> > Apologies for quoting again the top post of the thread.
> >
> > Comments inline (mostly thinking aloud)
> > Salvatore
> >
> >
> > On 30 June 2014 22:22, Jay Pipes  wrote:
> >
> >> Hi Stackers,
> >>
> >> Some recent ML threads [1] and a hot IRC meeting today [2] brought up
> some
> >> legitimate questions around how a newly-proposed Stackalytics report
> page
> >> for Neutron External CI systems [2] represented the results of an
> external
> >> CI system as "successful" or not.
> >>
> >> First, I want to say that Ilya and all those involved in the
> Stackalytics
> >> program simply want to provide the most accurate information to
> developers
> >> in a format that is easily consumed. While there need to be some
> changes in
> >> how data is shown (and the wording of things like "Tests Succeeded"), I
> >> hope that the community knows there isn't any ill intent on the part of
> >> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
> >> conversation civil -- we're all working towards the same goals of
> >> transparency and accuracy. :)
> >>
> >> Alright, now, Anita and Kurt Taylor were asking a very poignant
> question:
> >>
> >> "But what does CI tested really mean? just running tests? or tested to
> >> pass some level of requirements?"
> >>
> >> In this nascent world of external CI systems, we have a set of issues
> that
> >> we need to resolve:
> >>
> >> 1) All of the CI systems are different.
> >>
> >> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> scripts.
> >> Others run custom Python code that spawns VMs and publishes logs to some
> >> public domain.
> >>
> >> As a community, we need to decide whether it is worth putting in the
> >> effort to create a single, unified, installable and runnable CI system,
> so
> >> that we can legitimately say "all of the external systems are identical,
> >> with the exception of the driver code for vendor X being substituted in
> the
> >> Neutron codebase."
> >>
> >
> > I think such system already exists, and it's documented here:
> > http://ci.openstack.org/
> > Still, understanding it is quite a learning curve, and running it is not
> > exactly straightforward. But I guess that's pretty much understandable
> > given the complexity of the system, isn't it?
> >
> >
> >>
> >> If the goal of the external CI systems is to produce reliable,
> consistent
> >> results, I feel the answer to the above is "yes", but I'm interested to
> >> hear what others think. Frankly, in the world of benchmarks, it would be
> >> unthinkable to say "go ahead and everyone run your own benchmark suite",
> >> because you would get wildly different results. A similar problem has
> >> emerged here.
> >>
> >
> > I don't think the particular infrastructure which might range from an
> > openstack-ci clone to a 100-line bash script would have an impact on the
> > "reliability" of the quality assessment regarding a particular driver or
> > plugin. This is determined, in my opinion, by the quantity and nature of
> > tests one runs on a specific driver. In Neutron for instance, there is a
> > wide range of choices - from a few test cases in tempest.api.network to
> the
> > full smoketest job. As long there is no minimal standard here, then it
> > would be difficult to assess the quality of the evaluation from a CI
> > system, unless we explicitly keep into account coverage into the
> evaluation.
> >
> > On the other hand, different CI infrastructures will have different
> levels
> > in terms of % of patches tested and % of infrastructure failures. I think
> > it might not be a terrible idea to use these parameters to evaluate how
> > good a CI is from an infra standpoint. However, there are still open
> > questions. For instance, a CI might have a low patch % score because it
> > only needs to test patches affecting a given driver.
> >
> >
> >> 2) There is no mediation or verification that the external CI system is
> >> actually testing anything at all
> >>
> >> As a community, we need to decide whether the current system of
> >> self-policing should continue. If it should, then language on reports
> like
> >> [3] should be very clear that any numbers derived from such systems
> should
> >> be taken with a grain of salt. Use of the word "Success" should be
> avoided,
> >> as it has connotations (in English, at least) that the result has 

Re: [openstack-dev] Moving neutron to oslo.db

2014-07-03 Thread Boris Pavlovic
Ben,


As far as I know, the APIs of oslo.db and oslo-incubator/db are almost the
same, so why should it be complicated?


Best regards,
Boris Pavlovic


On Thu, Jul 3, 2014 at 9:10 PM, Ben Nemec  wrote:

> +27, -2401
>
> Wow, that's pretty painless.  Were there earlier patches to Neutron to
> prepare for the transition or was it really that easy?
>
> On 07/03/2014 07:34 AM, Salvatore Orlando wrote:
> > No I was missing everything and kept wasting time because of alembic.
> >
> > This will teach me to keep my mouth shut and don't distract people who
> are
> > actually doing good work.
> >
> > Thanks for doings this work.
> >
> > Salvatore
> >
> >
> > On 3 July 2014 14:15, Roman Podoliaka  wrote:
> >
> >> Hi Salvatore,
> >>
> >> I must be missing something. Hasn't it been done in
> >> https://review.openstack.org/#/c/103519/? :)
> >>
> >> Thanks,
> >> Roman
> >>
> >> On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando 
> >> wrote:
> >>> Hi,
> >>>
> >>> As you surely know, in Juno oslo.db will graduate [1]
> >>> I am currently working on the port. It's already been made clear that
> >>> making alembic migrations "idempotent" and healing the DB schema is a
> >>> requirement for this task.
> >>> These two activities are tracked by the blueprints [2] and [3].
> >>> I think we've seen enough in Openstack to understand that there is no
> >> chance
> >>> of being able to do the port to oslo.db in Juno.
> >>>
> >>> While blueprint [2] is already approved, I suggest to target also [3]
> for
> >>> Juno so that we might be able to port neutron to oslo.db as soon as K
> >> opens.
> >>> I expect this port to be not as invasive as the one for oslo.messaging
> >> which
> >>> required quite a lot of patches.
> >>>
> >>> Salvatore
> >>>
> >>> [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
> >>> [2] https://review.openstack.org/#/c/95738/
> >>> [3] https://review.openstack.org/#/c/101963/
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-03 Thread Doug Hellmann
On Thu, Jul 3, 2014 at 11:27 AM, Mark McLoughlin  wrote:
> Hey
>
> This is an attempt to summarize a really useful discussion that Victor,
> Flavio and I have been having today. At the bottom are some background
> links - basically what I have open in my browser right now thinking
> through all of this.
>
> We're attempting to take baby-steps towards moving completely from
> eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
> first victim.
>
> Ceilometer's code is run in response to various I/O events like REST API
> requests, RPC calls, notifications received, etc. We eventually want the
> asyncio event loop to be what schedules Ceilometer's code in response to
> these events. Right now, it is eventlet doing that.
>
> Now, because we're using eventlet, the code that is run in response to
> these events looks like synchronous code that makes a bunch of
> synchronous calls. For example, the code might do some_sync_op() and
> that will cause a context switch to a different greenthread (within the
> same native thread) where we might handle another I/O event (like a REST
> API request) while we're waiting for some_sync_op() to return:
>
>   def foo(self):
>       result = some_sync_op()  # this may yield to another greenlet
>       return do_stuff(result)
>
> Eventlet's infamous monkey patching is what make this magic happen.
>
> When we switch to asyncio's event loop, all of this code needs to be
> ported to asyncio's explicitly asynchronous approach. We might do:
>
>   @asyncio.coroutine
>   def foo(self):
>       result = yield from some_async_op(...)
>       return do_stuff(result)
>
> or:
>
>   @asyncio.coroutine
>   def foo(self):
>       fut = Future()
>       some_async_op(callback=fut.set_result)
>       ...
>       result = yield from fut
>       return do_stuff(result)
>
> Porting from eventlet's implicit async approach to asyncio's explicit
> async API will be seriously time consuming and we need to be able to do
> it piece-by-piece.
>
> The question then becomes what do we need to do in order to port a
> single oslo.messaging RPC endpoint method in Ceilometer to asyncio's
> explicit async approach?
>
> The plan is:
>
>   - we stick with eventlet; everything gets monkey patched as normal
>
>   - we register the greenio event loop with asyncio - this means that
> e.g. when you schedule an asyncio coroutine, greenio runs it in a
> greenlet using eventlet's event loop
>
>   - oslo.messaging will need a new variant of eventlet executor which
> knows how to dispatch an asyncio coroutine. For example:
>
> while True:
>     incoming = self.listener.poll()
>     method = dispatcher.get_endpoint_method(incoming)
>     if asyncio.iscoroutinefunc(method):
>         result = method()
>         self._greenpool.spawn_n(incoming.reply, result)
>     else:
>         self._greenpool.spawn_n(method)
>
> it's important that even with a coroutine endpoint method, we send
> the reply in a greenthread so that the dispatch greenthread doesn't
> get blocked if the incoming.reply() call causes a greenlet context
> switch
>
>   - when all of ceilometer has been ported over to asyncio coroutines,
> we can stop monkey patching, stop using greenio and switch to the
> asyncio event loop
>
>   - when we make this change, we'll want a completely native asyncio
> oslo.messaging executor. Unless the oslo.messaging drivers support
> asyncio themselves, that executor will probably need a separate
> native thread to poll for messages and send replies.

We tried to keep eventlet out of the drivers. Does it make sense to do
the same for asyncio?

Does this change have any effect on the WSGI services, and the WSGI
container servers we can use to host them?

> If you're confused, that's normal. We had to take several breaks to get
> even this far because our brains kept getting fried.

I won't claim to understand all of the nuances, but it seems like a
good way to stage the changes. Thanks to everyone involved for working
it out!

>
> HTH,
> Mark.
>
> Victor's excellent docs on asyncio and trollius:
>
>   https://docs.python.org/3/library/asyncio.html
>   http://trollius.readthedocs.org/
>
> Victor's proposed asyncio executor:
>
>   https://review.openstack.org/70948
>
> The case for adopting asyncio in OpenStack:
>
>   https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio
>
> A previous email I wrote about an asyncio executor:
>
>  http://lists.openstack.org/pipermail/openstack-dev/2013-June/009934.html
>
> The mock-up of an asyncio executor I wrote:
>
>   
> https://github.com/markmc/oslo-incubator/blob/8509b8b/openstack/common/messaging/_executors/impl_tulip.py
>
> My blog post on async I/O and Python:
>
>   http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/
>
> greenio - greelets support for asyncio:
>
>   https://github.com/1st1/greenio/
>
>
> __

Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-03 Thread Jay Pipes

On 07/03/2014 08:42 AM, Luke Gorrie wrote:

On 3 July 2014 02:44, Michael Still <mi...@stillhq.com> wrote:

The main purpose is to let change reviewers know that a change might
be problematic for a piece of code not well tested by the gate


Just a thought:

A "sampling" approach could be a reasonable way to stay responsive under
heavy load and still give a strong signal to reviewers about whether a
change is likely to be problematic.

I mean: Kevin mentions that his CI gets an hours-long queue during peak
review season. One way to deal with that could be skipping some events
e.g. toss a coin to decide whether to test the next revision of a change
that he has already +1'd previously. That would keep responsiveness
under control even when throughput is a problem.

(A bit like how a router manages a congested input queue or how a
sampling profiler keeps overhead low.)

Could be worth keeping the rules flexible enough to permit this kind of
thing, at least?
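
To make the coin-toss idea concrete, here is a minimal sketch (the names
and the 50% sampling rate are assumptions, not part of any real CI tool):

    import random

    SAMPLE_RATE = 0.5  # fraction of repeat revisions still tested under load

    def should_test(change_id, already_plus_oned, queue_depth, max_depth=20):
        # Always test changes we have never voted on.
        if change_id not in already_plus_oned:
            return True
        # Under light load, keep testing everything.
        if queue_depth < max_depth:
            return True
        # Under heavy load, toss a coin for new revisions of changes
        # we have already +1'd.
        return random.random() < SAMPLE_RATE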


The problem with this is that it assumes all patch sets contain 
equivalent levels of change, which is incorrect. One patch set may 
contain changes that significantly affect the SnappCo plugin. A sampling 
system might miss that important patch set, and you'd then spend a lot of 
time trying to figure out which patch caused issues for you when a later 
patch set (built on the problematic patch that was merged) causes 
failures that seem unrelated to the patch currently undergoing tests.


In short, you need to test every single proposed patch to the system 
fully and consistently, otherwise there's simply no point in running any 
tests at all, as you will spend an inordinate amount of time tracking 
down what broke what.


Best,
-jay



Re: [openstack-dev] Moving neutron to oslo.db

2014-07-03 Thread Ben Nemec
+27, -2401

Wow, that's pretty painless.  Were there earlier patches to Neutron to
prepare for the transition or was it really that easy?

On 07/03/2014 07:34 AM, Salvatore Orlando wrote:
> No, I was missing everything and kept wasting time because of alembic.
> 
> This will teach me to keep my mouth shut and not distract people who are
> actually doing good work.
> 
> Thanks for doing this work.
> 
> Salvatore
> 
> 
> On 3 July 2014 14:15, Roman Podoliaka  wrote:
> 
>> Hi Salvatore,
>>
>> I must be missing something. Hasn't it been done in
>> https://review.openstack.org/#/c/103519/? :)
>>
>> Thanks,
>> Roman
>>
>> On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando 
>> wrote:
>>> Hi,
>>>
>>> As you surely know, in Juno oslo.db will graduate [1].
>>> I am currently working on the port. It has already been made clear that
>>> making alembic migrations "idempotent" and healing the DB schema is a
>>> requirement for this task.
>>> These two activities are tracked by the blueprints [2] and [3].
>>> I think we've seen enough in OpenStack to understand that there is no
>>> chance of being able to do the port to oslo.db in Juno.
>>>
>>> While blueprint [2] is already approved, I suggest also targeting [3] for
>>> Juno so that we might be able to port neutron to oslo.db as soon as K
>>> opens.
>>> I expect this port to be less invasive than the one for oslo.messaging,
>>> which required quite a lot of patches.
>>>
>>> Salvatore
>>>
>>> [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
>>> [2] https://review.openstack.org/#/c/95738/
>>> [3] https://review.openstack.org/#/c/101963/
>>>


Re: [openstack-dev] [neutron][third-party] Simple and robust CI script?

2014-07-03 Thread Luke Gorrie
On 1 July 2014 19:12, Luke Gorrie  wrote:

> It does not yet run devstack/tempest and I hope to reuse that part from
> somebody else's efforts.
>

shellci is happily voting on the sandbox with the Snabb NFV CI account so
far: http://egg.snabb.co:81/shellci/shellci.log

Time to make it start running real tempest tests.

I whipped up a simple Vagrantfile that runs devstack and tempest in a
disposable VM. The idea is that out-of-the-box you get a setup that runs
tempest and votes on the results. Then you customize local.conf,
tempest.conf, and optionally the whole script to do the appropriate testing
for your driver. (Or, if you like, skip this part and supply your own
testing script to do whatever you like.)

Vagrant scripts only in a Gist for now:
https://gist.github.com/lukego/bdefc792b8255d141e4c

I'll see how the performance looks. Vagrant probably slows down serial
performance but should make independent parallel runs easy. I ordered a
hetzner.de server with 128GB RAM and if that comes through tomorrow we'll
see how that plays out.

The plan for parallelism is sharding. Each gerrit-stream event will be
hashed into one of N buckets and then you can run N copies of the testing
script (on whatever machine(s)) and each copy chooses a different hash
bucket to trigger on.
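
As a sketch of that sharding rule (a hypothetical helper, not part of
shellci), each worker would claim one bucket and ignore the rest:

    import hashlib

    def bucket_for(change_number, num_buckets):
        # Stable hash of the gerrit change number into one of N buckets.
        digest = hashlib.md5(str(change_number).encode()).hexdigest()
        return int(digest, 16) % num_buckets

    def is_mine(change_number, my_bucket, num_buckets):
        # Copy K of the testing script only acts on events in bucket K.
        return bucket_for(change_number, num_buckets) == my_bucket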

Let's see how promising (or not) that looks tomorrow :-). If it works out
for me then hopefully somebody else will want to kick the tires next week.

(We'll need a separate 100 line budget for the Vagrant/devstack/tempest
stuff by the look of it! Lies, damned lies, budgets...)

Cheers,
-Luke


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-03 Thread Jay Pipes

On 07/03/2014 03:49 AM, Luke Gorrie wrote:

On 3 July 2014 02:44, Michael Still wrote:

I have seen both. Normally there's a failure, reviewers notice, and
then the developer spins trying out fixes by uploading new patch sets.


Interesting. Yes, I can see that you need fast response from CIs to
support that scenario. A 12-hour edit-compile-run loop will ruin anybody's
day/week/month.

My rule of thumb is three hours by the way. I'd like to say something
like "not significantly slower than jenkins", but that's hard to
quantify.

How do people normally throughput-optimize their CIs?


By using single-use slaves and things like nodepool to manage a "ready 
set" of VMs that are used for these single-use slaves.



I suppose that parallelism is the basic trick, but how do people achieve
it? Concurrent tempest runs on the same host? or on a pool of (virtual)
hosts? or in one-shot disposable VMs?


One-shot disposable VMs, managed by nodepool or some other custom script.


The CI that I operate is currently running tests serially on a "bare
metal" server. This is okay for now (no backlog) but I would like to be
able to ramp up the number of tests performed and then performance could
become an issue.

The idea I'm playing with now is to support N servers each running M
parallel virtual machines for tempest. I'm tempted to use a disposable
Vagrant VM for each tempest run both in order to isolate tests from each
other (those running in parallel and also those that have run before)
and perhaps even make it possible for others (4th parties?) to grab my
Vagrantfile and replicate my test environment (if they want a faster
turn-around than via Gerrit).


I don't think Vagrant will give you anything other than a slow launch 
time, frankly. You will want to have a system that keeps a pool of 
available single-use slave VMs ready to run a test job, report the 
results to Gearman (or Gerrit if you prefer to skip the entire queue 
mechanism), and terminate the VM.



I'd be very curious to know what is working well/badly for others at the
moment so that I can avoid stepping on land mines :-).


devstack-gate works very well for what it is supposed to do:

 * Checkout git repos of all projects related to OpenStack test runs
 * Checkout the SHA1 of the code in Gerrit repo for the project under 
test (Neutron in your case)
 * Configure all of the OpenStack services and start all the services 
in separate processes (screen sessions)

 * Direct logs to a standardized location
 * Run Tempest or another test suite based on environment variables

For your 100 line shell script project (which I can almost guarantee 
will end up being more than 100 lines ;), I would recommend using 
devstack-gate as much as you can, as it's taken dozens of engineers many 
man-months to get it right, and you can be sure that you will run the 
same tests that are executed in the gate, set up in an environment that 
is exactly like the gate, therefore achieving some level of consistency 
with the gate.


Best,
-jay



Re: [openstack-dev] [all] Launchpad comment boxes, not wrapping where you want??

2014-07-03 Thread Jay Pipes

On 07/03/2014 10:48 AM, Derek Higgins wrote:


If you're like me, Launchpad's wrapping of paragraphs in comment boxes gets
under your skin, making it very difficult to follow tracebacks, logs,
etc. See
https://bugs.launchpad.net/launchpad/+bug/545125

I finally got motivated to try and do something about it (at least in
Chrome), as the above bug has been open for 4 years. Anyway, here's one
solution people might be interested in:

1. Use Chrome
2. Install the Stylebot extension
3. Go to a Launchpad bug
4. On the top right, click css->open stylebot->edit css
5. Add the text:
p {
 max-width: 100%;
}
6. Save

You're done.

Hope this helps keep a few people sane,


This really is a nice little hack. :)

Thanks, Derek!

-jay



Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 10:31 AM, Sullivan, Jon Paul wrote:
>> -Original Message-
>> From: Anita Kuno [mailto:ante...@anteaya.info]
>> Sent: 03 July 2014 15:06
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is "Success"
>> exactly?
> 
> I guess you missed this last time - the mail had gotten quite long :D
> 
I had, yes - thanks for drawing my attention to it.
 Hi Jon Paul: (Is it Jon Paul or Jon?)
>>>
>>> Hi Anita - it's Jon-Paul or JP.
>>>
Ah, thanks JP.

>>>
>>> But there is a second side to what you were saying, which was the
>>> developer feedback.  I guess I am suggesting that if you are putting a
>>> system in place for developers to vote on the 3rd party CI, should that
>>> same system be in effect for the OpenStack check/gate jobs?
>>>
>> It already is, it is called #openstack-infra. All day long (the 24 hour
>> day) developers drop in and tell us exactly how they feel about any
>> aspect of OpenStack Infrastructure. They let us know when documentation
>> is confusing, when things are broken, when a patch should have been
>> merged and failed to be, when Zuul is caught in a retest loop and
>> occasionally when we get something right.
> 
> I had presumed this to be the case, and I guess this is the first port of 
> call when developers have questions on 3rd-party CI?  If so, then a very 
> interesting metric that would speak to the reliability of a 3rd party CI 
> might be its responsiveness to IRC questions?
> 
Yes, developers ask questions about what specific 3rd party accounts are
doing when commenting on their patches all the time. Often some version
of "Why is systemx-ci commenting on my patch?" Many of them ask in infra
and many of them ping me directly.

Then we move into some variation of "Systemx-ci is {some behaviour that
does not meet requirements}. {What do I do? | Can someone do something
to fix this? | Can we disable this system?}"
Requirements: http://ci.openstack.org/third_party.html#requirements
Open Patches:
https://review.openstack.org/#/q/status:open+project:openstack-infra/config+branch:master+topic:third-party,n,z
and
https://review.openstack.org/#/c/104565/

Sure, responsiveness to IRC questions would be an interesting metric. Now,
how to collect the data? I suppose you could scrape IRC logs - I don't want
to see the regex that parses what counts as IRC responsiveness.
You could ask the infra team if you like, but that is a subset of
what I have already suggested for all developers, and it puts more work on
infra, which I will not voluntarily do, not if we can avoid it. You could
ask me, but my response would be an aggregation of gut reactions
based on personal experience with individual admins for
different accounts; it doesn't scale, and while I feel it has some
credence, it should not be the sole source of information for any metric,
given the scope of the issue. We currently have 70 gerrit ci accounts;
I'm not going to offer an opinion on accounts I have never interacted
with if everything has been running fine and they have had no reason to
interact with me.

By allowing the developers affected by the third party systems to offer
their feedback, a more diverse source of data is collected. Keep in mind
that as a developer I have never had to splunk logs from third party ci
on my patches, since the majority of my patches are for infra, which has
very little testing by third party ci. I'd like to have input from
developers who do interact with third party ci artifacts.

>>
>> OpenStack Infra logs can be found here:
>> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/
>>
>> I don't think having an irc channel for third party is practical because
>> it simply will split infra resources and I have my doubts about how
>> responsive folks would be in it. Hence my suggestion of the pages to
>> allow developers to share the kind of information they share in
>> openstack-infra all the time.
> 
> Yes - I can understand your viewpoint on this, and it makes sense to have a 
> forum where developers can raise comments or concerns and those responsible 
> for the 3rd party CI can respond.
Thanks, and hopefully they will respond; at the very least it will be
a quick way of seeing how many developers have attempted to give
feedback and the speed, or lack thereof, of any response.

There are some system admins who are very responsive, and some are even
beginning to be proactive: sending an email to the ml (dev and/or
infra) informing us when their system is failing to build (we have
to get faster at disabling systems in those circumstances, but I
appreciate the proactiveness here), as well as posting when they move
their logs to a URL with a DNS name rather than a hard-coded IP address,
since that breaks backward compatibility. Thank you for being proactive.
http://lists.openstack.org/pipermail/openstack-infra/2014-July/001473.html
http://lists.openstack.org/pipermail/openstack-dev/2014-July/039270.html

Thanks JP,
Anita.

[openstack-dev] [Containers] Nova virt driver requirements

2014-07-03 Thread Dmitry Guryanov
Hi, All!

As far as I know, there are some requirements which a virt driver must meet 
to use the OpenStack 'label'. For example, it's not allowed to mount cinder 
volumes inside the host OS.

Are there any documents describing all such things? How can I determine if 
my virtualization driver for nova (developed outside of nova mainline) works 
correctly and meets nova's security requirements?


-- 
Dmitry Guryanov



Re: [openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-03 Thread Phillip Toohill
If the objects remain in 'PENDING_CREATE' until provisioned, it would seem
that the process got stuck in that status and may be in a bad state from the
user's perspective. I like the idea of QUEUED or similar to indicate that
the object has been accepted but not provisioned.

Phil

On 7/3/14 10:28 AM, "Brandon Logan"  wrote:

>With the new API and object model refactor there have been some issues
>arising dealing with the status of entities.  The main issue is that
>Listener, Pool, Member, and Health Monitor can exist independent of a
>Load Balancer.  The Load Balancer is the entity that will contain the
>information about which driver to use (through provider or flavor).  If
>a Listener, Pool, Member, or Health Monitor is created without a link to
>a Load Balancer, then what status does it have?  At this point it only
>exists in the database and is really just waiting to be provisioned by a
>driver/backend.
>
>Some possibilities discussed:
>A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name
>Entities just remain in PENDING_CREATE until provisioned by a driver
>Entities just remain in ACTIVE until provisioned by a driver
>
>Opinions and suggestions?


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-03 Thread Eugene Nikanorov
Hi,

Mark and I have spent some time today discussing the existing proposals and I
think we reached a consensus.
Initially I had two concerns about Mark's proposal, which are:
- the extension list attribute on the flavor
- the driver entry point on the service profile

The first idea (ext list) needs to be clarified more as we get more drivers
that need it.
Right now we have FWaaS/VPNaaS which don't have extensions at all, and we
have LBaaS where all drivers support all extensions.
So the extension list can be postponed until we clarify how exactly we want
this to be exposed to the user and how we want it to function on the
implementation side.

The driver entry point, which implies dynamic loading per the admin's request,
is an important discussion point (at least, previously this idea received
negative opinions from some cores).
We'll implement service profiles, but this exact aspect of how the driver is
specified/loaded will be discussed further.

So based on that, I'm going to start implementing this.
I think the implementation result will allow us to develop in different
directions (extension list vs tags, dynamic loading and such) depending on
more information about how this is utilized by deployers and users.

Thanks,
Eugene.



On Thu, Jul 3, 2014 at 5:57 PM, Susanne Balle  wrote:

> +1
>
>
> On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery 
> wrote:
>
>> We're coming down to the wire here with regards to Neutron BPs in
>> Juno, and I wanted to bring up the topic of the flavor framework BP.
>> This is a critical BP for things like LBaaS, FWaaS, etc. We need this
>> work to land in Juno, as these other work items are dependent on it.
>> There are still two proposals [1] [2], and after the meeting last week
>> [3] it appeared we were close to conclusion on this. I now see a bunch
>> of comments on both proposals.
>>
>> I'm going to again suggest we spend some time discussing this at the
>> Neutron meeting on Monday to come to a closure on this. I think we're
>> close. I'd like to ask Mark and Eugene to both look at the latest
>> comments, hopefully address them before the meeting, and then we can
>> move forward with this work for Juno.
>>
>> Thanks for all the work by all involved on this feature! I think we're
>> close and I hope we can close on it Monday at the Neutron meeting!
>>
>> Kyle
>>
>> [1] https://review.openstack.org/#/c/90070/
>> [2] https://review.openstack.org/102723
>> [3]
>> http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.log.html
>>


[openstack-dev] [Neutron][LBaaS] Status of entities that do not exist in a driver backend

2014-07-03 Thread Brandon Logan
With the new API and object model refactor there have been some issues
arising dealing with the status of entities.  The main issue is that
Listener, Pool, Member, and Health Monitor can exist independent of a
Load Balancer.  The Load Balancer is the entity that will contain the
information about which driver to use (through provider or flavor).  If
a Listener, Pool, Member, or Health Monitor is created without a link to
a Load Balancer, then what status does it have?  At this point it only
exists in the database and is really just waiting to be provisioned by a
driver/backend.

Some possibilities discussed:
A new status of QUEUED, PENDING_ACTIVE, SCHEDULED, or some other name
Entities just remain in PENDING_CREATE until provisioned by a driver
Entities just remain in ACTIVE until provisioned by a driver

Opinions and suggestions?
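
One way to picture the first option (purely illustrative; the status names
and the trigger for leaving QUEUED are assumptions, not an agreed design):

    # Lifecycle for an entity created without a link to a load balancer.
    QUEUED = 'QUEUED'                  # accepted, exists only in the database
    PENDING_CREATE = 'PENDING_CREATE'  # handed to a driver for provisioning
    ACTIVE = 'ACTIVE'                  # provisioned by the driver backend

    def status_for(entity):
        if entity.get('loadbalancer_id') is None:
            return QUEUED              # no driver is involved yet
        if not entity.get('provisioned'):
            return PENDING_CREATE
        return ACTIVE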


[openstack-dev] [oslo] Asyncio and oslo.messaging

2014-07-03 Thread Mark McLoughlin
Hey

This is an attempt to summarize a really useful discussion that Victor,
Flavio and I have been having today. At the bottom are some background
links - basically what I have open in my browser right now thinking
through all of this.

We're attempting to take baby-steps towards moving completely from
eventlet to asyncio/trollius. The thinking is for Ceilometer to be the
first victim.

Ceilometer's code is run in response to various I/O events like REST API
requests, RPC calls, notifications received, etc. We eventually want the
asyncio event loop to be what schedules Ceilometer's code in response to
these events. Right now, it is eventlet doing that.

Now, because we're using eventlet, the code that is run in response to
these events looks like synchronous code that makes a bunch of
synchronous calls. For example, the code might do some_sync_op() and
that will cause a context switch to a different greenthread (within the
same native thread) where we might handle another I/O event (like a REST
API request) while we're waiting for some_sync_op() to return:

  def foo(self):
  result = some_sync_op()  # this may yield to another greenlet
  return do_stuff(result)

Eventlet's infamous monkey patching is what make this magic happen.

When we switch to asyncio's event loop, all of this code needs to be
ported to asyncio's explicitly asynchronous approach. We might do:

  @asyncio.coroutine
  def foo(self):
  result = yield from some_async_op(...)
  return do_stuff(result)

or:

  @asyncio.coroutine
  def foo(self):
  fut = Future()
  some_async_op(callback=fut.set_result)
  ...
  result = yield from fut
  return do_stuff(result)

Porting from eventlet's implicit async approach to asyncio's explicit
async API will be seriously time consuming and we need to be able to do
it piece-by-piece.

The question then becomes what do we need to do in order to port a
single oslo.messaging RPC endpoint method in Ceilometer to asyncio's
explicit async approach?

The plan is:

  - we stick with eventlet; everything gets monkey patched as normal

  - we register the greenio event loop with asyncio - this means that 
e.g. when you schedule an asyncio coroutine, greenio runs it in a 
greenlet using eventlet's event loop

  - oslo.messaging will need a new variant of eventlet executor which 
knows how to dispatch an asyncio coroutine. For example:

    while True:
        incoming = self.listener.poll()
        method = dispatcher.get_endpoint_method(incoming)
        if asyncio.iscoroutinefunction(method):
            result = method()
            self._greenpool.spawn_n(incoming.reply, result)
        else:
            self._greenpool.spawn_n(method)

it's important that even with a coroutine endpoint method, we send 
the reply in a greenthread so that the dispatch greenthread doesn't
get blocked if the incoming.reply() call causes a greenlet context
switch

  - when all of ceilometer has been ported over to asyncio coroutines, 
we can stop monkey patching, stop using greenio and switch to the 
asyncio event loop

  - when we make this change, we'll want a completely native asyncio 
oslo.messaging executor. Unless the oslo.messaging drivers support 
asyncio themselves, that executor will probably need a separate
native thread to poll for messages and send replies.
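
As a minimal sketch of that last step, reusing the listener/dispatcher
names from the example above (everything else here is an assumption, not
actual oslo.messaging code):

    import asyncio
    import threading

    class NativeAsyncioExecutor(object):
        def __init__(self, listener, dispatcher, loop):
            self.listener = listener    # assumed blocking poll() interface
            self.dispatcher = dispatcher
            self.loop = loop            # the asyncio event loop

        def _poll_loop(self):
            # Runs in a separate native thread; poll() may block freely.
            while True:
                incoming = self.listener.poll()
                self.loop.call_soon_threadsafe(self._dispatch, incoming)

        def _dispatch(self, incoming):
            # Runs on the event loop thread.
            method = self.dispatcher.get_endpoint_method(incoming)
            if asyncio.iscoroutinefunction(method):
                task = self.loop.create_task(method())
                # reply() is assumed cheap here; a real executor would
                # likely hand it back to the polling side.
                task.add_done_callback(lambda t: incoming.reply(t.result()))
            else:
                # A blocking endpoint would stall the loop, so run it in
                # a thread pool instead.
                self.loop.run_in_executor(None, method)

        def start(self):
            threading.Thread(target=self._poll_loop, daemon=True).start()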

If you're confused, that's normal. We had to take several breaks to get
even this far because our brains kept getting fried.

HTH,
Mark.

Victor's excellent docs on asyncio and trollius:

  https://docs.python.org/3/library/asyncio.html
  http://trollius.readthedocs.org/

Victor's proposed asyncio executor:

  https://review.openstack.org/70948

The case for adopting asyncio in OpenStack:

  https://wiki.openstack.org/wiki/Oslo/blueprints/asyncio

A previous email I wrote about an asyncio executor:

 http://lists.openstack.org/pipermail/openstack-dev/2013-June/009934.html

The mock-up of an asyncio executor I wrote:

  
https://github.com/markmc/oslo-incubator/blob/8509b8b/openstack/common/messaging/_executors/impl_tulip.py

My blog post on async I/O and Python:

  http://blogs.gnome.org/markmc/2013/06/04/async-io-and-python/

greenio - greenlets support for asyncio:

  https://github.com/1st1/greenio/




Re: [openstack-dev] [DevStack] neutron config not working

2014-07-03 Thread Paul Czarkowski
I'm seeing similar. Instances launch, they show as having IPs in
`neutron list`, but I cannot access them via IP.

The other thing I've noticed is that doing a `neutron agent-list` gives me an
empty list; I would assume it should at least show the DHCP agent?

On 7/1/14, 12:00 PM, "Kyle Mestery"  wrote:

>Hi Rob:
>
>Can you try adding the following config to your local.conf? I'd like
>to see if this gets you going or not. It will force it to use gre
>tunnels for tenant networks. By default it will not.
>
>ENABLE_TENANT_TUNNELS=True
>
>On Tue, Jul 1, 2014 at 10:53 AM, Rob Crittenden 
>wrote:
>> Rob Crittenden wrote:
>>> Mark Kirkwood wrote:
 On 25/06/14 10:59, Rob Crittenden wrote:
> Before I get punted onto the operators list, I post this here because
> this is the default config and I'd expect the defaults to just work.
>
> Running devstack inside a VM with a single NIC configured and this in
> localrc:
>
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
> enable_service neutron
> Q_USE_DEBUG_COMMAND=True
>
> Results in a successful install but no DHCP address assigned to hosts I
> launch, and other oddities like no CIDR in nova net-list output.
>
> Is this still the default way to set things up for single node? It is
> according to https://wiki.openstack.org/wiki/NeutronDevstack
>
>

 That does look ok: I have an essentially equivalent local.conf:

 ...
 ENABLED_SERVICES+=,-n-net
 ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,q-metering,tempest

 I don't have 'neutron' specifically enabled... not sure if/why that
 might make any difference though. However, instance launching and IP
 address assignment seem to work OK.

 However I *have* seen the issue of instances not getting IP addresses in
 single host setups, and it is often due to the use of virtio with bridges
 (which is the default I think). Try:

 nova.conf:
 ...
 libvirt_use_virtio_for_bridges=False
>>>
>>> Thanks for the suggestion. At least in master this was replaced by a new
>>> section, libvirt, but even setting it to False didn't do the trick for
>>> me. I see the same behavior.
>>
>> OK, I've tested the havana and icehouse branches in F-20 and they don't
>> seem to have a working neutron either. I see the same thing. I can
>> launch a VM but it isn't getting a DHCP address.
>>
>> Maybe I'll try in some Ubuntu release to see if this is Fedora-specific.
>>
>> rob
>>
>>


Re: [openstack-dev] [sahara] team meeting July 3 1800 UTC

2014-07-03 Thread Sergey Lukjanov
I'm at the Paris mid-cycle sprint, so Alex Ignatov and Dmitry
Mescheryakov will chair the meeting today.

On Wed, Jul 2, 2014 at 4:35 PM, Sergey Lukjanov  wrote:
> Hi folks,
>
> We'll be having the Sahara team meeting as usual in
> #openstack-meeting-alt channel.
>
> Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings
>
> http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140703T18
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-07-03 Thread Michael Smith
Salvatore,

There is FIP distribution at the agent level, in the sense that the N/S 
traffic for a VM's FIP will be hosted on the same compute node. We centralized 
SNAT based on feedback from others. The current design and code only support 
centralized SNAT for DVR routers. The design could be modified to allow 
distributed SNAT as an option, but that would be a tough task to get in for 
the first release of DVR support. We wanted to come in with the basic support 
first.

Yours,

Michael Smith
Hewlett-Packard Company
HP Networking R&D
8000 Foothills Blvd. M/S 5557
Roseville, CA 95747
PC Phone: 916 540-1884
Ph: 916 785-0918
Fax: 916 785-1199

From: Salvatore Orlando [mailto:sorla...@nicira.com]
Sent: Thursday, July 03, 2014 3:41 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] DVR SNAT shortcut

I would just add that if I'm not mistaken the DVR work would also include the 
features currently offered by nova network's 'multi-host' capability.
While DVR clearly does a lot more than multi host, keeping SNAT centralized 
only might not fully satisfy this requirement.
Indeed nova-network offers SNAT at the compute node thus achieving distribution 
of N-S traffic.

I agree with Zang's point regarding wasting public IPs. On the other hand one 
IP per agent with double SNAT might be a reasonable compromise.
And in that case I'm not sure whether sharing SNAT source IPs among tenants 
would have any security implications, so somebody else might comment there.

Summarizing, I think that distributing N-S traffic is important, but I don't 
think that to achieve this we'd necessarily need to implement SNAT at the 
compute nodes. I have reviewed the l3 agent part of the DVR work; it seems that 
there will be floating IP distribution at the agent level - but I could not 
understand whether there will also be SNAT distribution.

Salvatore


On 3 July 2014 10:45, Zang MingJie wrote:
Although SNAT DVR has some trade-offs, I still think it is
necessary. Here are the pros and cons for consideration:

pros:

save W-E bandwidth
high availability (distributed, no single point failure)

cons:

waste public ips (one ip per compute node vs one ip per l3-agent, if
double-SNAT implemented)
different tenants may share SNAT source ips
compute node requires public interface

Under certain deployments, the cons may not cause problems. Can we
provide SNAT DVR as an alternative option, fully controlled by the
cloud admin? The admin chooses whether to use it or not.

>> To resolve the problem, we are using double-SNAT,
>
>> first, set up one namespace for each router, SNAT tenant ip ranges to
>> a separate range, say 169.254.255.0/24
>
>> then, SNAT from 169.254.255.0/24 to public network.
>
>> We are already using this method, and saved tons of ips in our
>> deployment, only one public ip is required per router agent
>
> Functionally it could work, but it breaks the existing normal OAM pattern, 
> which expects VMs from one tenant to share a public IP while sharing no IP 
> with other tenants. As far as I know, at least some customers don't accept 
> this; they think VMs on different hosts appearing as different public IPs 
> is very strange.
>
> In fact I seriously doubt the value of distributing N-S traffic in a real 
> commercial production environment, including FIP. There are many things 
> that traditional N-S central nodes need to control: security, auditing, 
> logging, and so on; it is not simple forwarding. We need a trade-off 
> between performance and the policy control model:
>
> 1. N-S traffic is usually much less than W-E traffic; do we really need to 
> distribute N-S traffic in addition to W-E traffic?
> 2. With NFV progress like Intel DPDK, we can build a very cost-effective 
> service application on a commodity x86 server (simple SNAT at 10Gbps per 
> core at average Internet packet length)
>
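
To make the double-SNAT addressing above concrete, here is a small sketch
(the addresses and helper names are assumptions; the real implementation
programs iptables rules rather than Python):

    import ipaddress

    # Stage 1: each router namespace SNATs its tenant ranges to one address
    # out of a link-local intermediate range.
    INTERMEDIATE_NET = ipaddress.ip_network('169.254.255.0/24')

    def stage1_source_ip(router_index):
        # The source address tenant traffic carries when it leaves router N.
        return list(INTERMEDIATE_NET.hosts())[router_index]

    # Stage 2: the whole intermediate range is then SNATed to one public IP,
    # so a single public address serves every router behind the agent.
    PUBLIC_IP = ipaddress.ip_address('203.0.113.10')  # example address only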


[openstack-dev] [all] Launchpad comment boxes, not wrapping where you want??

2014-07-03 Thread Derek Higgins

If you're like me, Launchpad's wrapping of paragraphs in comment boxes gets
under your skin, making it very difficult to follow tracebacks, logs,
etc. See
https://bugs.launchpad.net/launchpad/+bug/545125

I finally got motivated to try and do something about it (at least in
Chrome), as the above bug has been open for 4 years. Anyway, here's one
solution people might be interested in:

1. Use Chrome
2. Install the Stylebot extension
3. Go to a Launchpad bug
4. On the top right, click css->open stylebot->edit css
5. Add the text:
p {
max-width: 100%;
}
6. Save

You're done.

Hope this helps keep a few people sane,
Derek.



Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Sullivan, Jon Paul
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: 03 July 2014 15:06
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is "Success"
> exactly?

I guess you missed this last time - the mail had gotten quite long :D

> >> Hi Jon Paul: (Is it Jon Paul or Jon?)
> >
> > Hi Anita - it's Jon-Paul or JP.
> >
> >>
> >
> > But there is a second side to what you were saying which was the
> developer feedback.  I guess I am suggesting that if you are putting a
> system in place for developers to vote on the 3rd party CI, should that
> same system be in effect for the Openstack check/gate jobs?
> >
> It already is, it is called #openstack-infra. All day long (the 24 hour
> day) developers drop in and tell us exactly how they feel about any
> aspect of OpenStack Infrastructure. They let us know when documentation
> is confusing, when things are broken, when a patch should have been
> merged and failed to be, when Zuul is caught in a retest loop and
> occasionally when we get something right.

I had presumed this to be the case, and I guess this is the first port of call 
when developers have questions on 3rd-party CI?  If so, then a very interesting 
metric that would speak to the reliability of a 3rd party CI might be its 
responsiveness to IRC questions?

> 
> OpenStack Infra logs can be found here:
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/
> 
> I don't think having an irc channel for third party is practical because
> it simply will split infra resources and I have my doubts about how
> responsive folks would be in it. Hence my suggestion of the pages to
> allow developers to share the kind of information they share in
> openstack-infra all the time.

Yes - I can understand your viewpoint on this, and it makes sense to have a 
forum where developers can raise comments or concerns and those responsible for 
the 3rd party CI can respond.


Thanks, 
Jon-Paul Sullivan ☺ Cloud Services - @hpcloud



Re: [openstack-dev] [neutron]Performance of security group

2014-07-03 Thread Ihar Hrachyshka

Oh, so you have the enhancement implemented? Great! Any numbers that
show how much we gain from that?

/Ihar

On 03/07/14 02:49, shihanzhang wrote:
> Hi, Miguel Angel Ajo! Yes, the ipset implementation is ready. Today
> I will modify my spec, and when the spec is approved, I will commit the
> code as soon as possible!
> 
> 
> 
> 
> 
> At 2014-07-02 10:12:34, "Miguel Angel Ajo" 
> wrote:
>> 
>> Nice Shihanzhang,
>> 
>> Do you mean the ipset implementation is ready, or just the
>> spec?.
>> 
>> 
>> For the SG group refactor, I don't worry about who does it, or
>> who takes the credit, but I believe it's important we address
>> this bottleneck during Juno trying to match nova's scalability.
>> 
>> Best regards, Miguel Ángel.
>> 
>> 
>> On 07/02/2014 02:50 PM, shihanzhang wrote:
>>> hi Miguel Ángel and Ihar Hrachyshka, I agree with you that we should
>>> split the work into several specs. I have finished the work (the
>>> ipset optimization); you can do 'sg rpc optimization (without
>>> fanout)'. As for the third part (sg rpc optimization (with fanout)),
>>> I think we need to talk about it, because just using ipset to
>>> optimize the security group agent code does not bring the best
>>> results!
>>> 
>>> Best regards, shihanzhang.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> At 2014-07-02 04:43:24, "Ihar Hrachyshka" 
>>> wrote:
> On 02/07/14 10:12, Miguel Angel Ajo wrote:
> 
>> Shihazhang,
> 
>> I really believe we need the RPC refactor done for this cycle,
>> and given the close deadlines we have (July 10 for spec
>> submission and July 20 for spec approval).
> 
>> Don't you think it's going to be better to split the work in 
>> several specs?
> 
>> 1) ipset optimization   (you) 2) sg rpc optimization (without 
>> fanout) (me) 3) sg rpc optimization (with fanout) (edouard, you
>> , me)
> 
> 
>> This way we increase the chances of having part of this for the 
>> Juno cycle. If we go for something too complicated is going to
>> take more time for approval.
> 
> 
> I agree. And it not only increases chances to get at least some of 
> those highly demanded performance enhancements to get into Juno,
> it's also "the right thing to do" (c). It's counterproductive to
> put multiple vaguely related enhancements in a single spec. This
> would dim review focus and put us into the position of getting
> 'all-or-nothing'. We can't afford that.
> 
> Let's leave one spec per enhancement. @Shihazhang, what do you
> think?
> 
> 
>> Also, I proposed the details of "2", trying to bring awareness
>> on the topic, as I have been working with the scale lab in Red
>> Hat to find and understand those issues, I have a very good
>> knowledge of the problem and I believe I could make a very fast
>> advance on the issue at the RPC level.
> 
>> Given that, I'd like to work on this specific part, whether or
>> not we split the specs, as it's something we believe critical
>> for neutron scalability and thus, *nova parity*.
> 
>> I will start a separate spec for "2", later on, if you find it
>> ok, we keep them as separate ones, if you believe having just 1
>> spec (for 1 & 2) is going be safer for juno-* approval, then we
>> can incorporate my spec in yours, but then
>> "add-ipset-to-security" is not a good spec title to put all this
>> together.
> 
> 
>> Best regards, Miguel Ángel.
> 
> 
>> On 07/02/2014 03:37 AM, shihanzhang wrote:
>>> 
>>> hi Miguel Angel Ajo Pelayo! I agree with you and will modify my
>>> spec, but I will also optimize the RPC from the security group
>>> agent to the neutron server. Now the model is
>>> 'port[rule1,rule2...], port...'; I will change it to 'port[sg1,
>>> sg2..]'. This can reduce the size of the RPC response message from
>>> the neutron server to the security group agent.
>>> 
>>> At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo" 
>>>  wrote:
 
 
 Ok, I was talking with Édouard @ IRC, and as I have time to 
 work into this problem, I could file an specific spec for
 the security group RPC optimization, a masterplan in two
 steps:
 
 1) Refactor the current RPC communication for 
 security_groups_for_devices, which could be used for full 
 syncs, etc..
 
 2) Benchmark && make use of a fanout queue per security
 group to make sure only the hosts with instances on a
 certain security group get the updates as they happen.
 
 @shihanzhang do you find it reasonable?
 
 
 
 - Original Message -
> - Original Message -
>> @Nachi: Yes that could a good improvement to factorize
>> the RPC
> mechanism.
>> 
>> Another idea: What about creating a RPC topic per
>> security group (quid of the
> RPC topic
>> scalability) on which an agent subscribes if one of its 
>> ports is
> associated
>> to the security group?
>> 
>> Regards, Édouard.
>> 
>> 
> 
> 
> Hmm, Interesting,
> 
> @Nachi, I'm not sure I fully understood:
> 
> 
> SG_LIST [ SG1, SG2] SG_RULE_LIS

Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-03 Thread Aleksandr Didenko
Hi,

> I think we should allow the user to delete unneeded releases.

In this case the user won't be able to add new nodes to existing
environments of the same version. So we should check and warn the user about
it, or simply not allow deleting releases if there are live envs of the
same version.



On Thu, Jul 3, 2014 at 3:45 PM, Dmitry Pyzhov  wrote:

> So, our releases will have the following release versions in the UI:
> 5.0) "2014.1"
> 5.0.1) "2014.1.1-5.0.1"
> 5.1) "2014.1.1-5.1"
>
> And if someone installs 5.0, upgrades it to 5.0.1 and then upgrades to 5.1,
> he will have three releases for each OS. I think we should allow the user to
> delete unneeded releases. It will also free up space on his master node.
>
>
> On Wed, Jul 2, 2014 at 1:34 PM, Igor Kalnitsky 
> wrote:
>
>> Hello,
>>
>> > Could you please clarify what exactly you mean by  "our patches" /
>> > "our first patch"?
>>
>> I mean which version should we use in 5.0.1, for example? As far as I
>> understand @DmitryB, it has to be "2014.1-5.0.1". Am I right?
>>
>> Thanks,
>> Igor
>>
>>
>>
>> On Tue, Jul 1, 2014 at 8:47 PM, Aleksandr Didenko 
>> wrote:
>>
>>> Hi,
>>>
>>> my 2 cents:
>>>
>>> 1) Fuel version (+1 to Dmitry)
>>> 2) Could you please clarify what exactly you mean by "our patches" /
>>> "our first patch"?
>>>
>>>
>>>
>>>
>>> On Tue, Jul 1, 2014 at 8:04 PM, Dmitry Borodaenko <
>>> dborodae...@mirantis.com> wrote:
>>>
 1) Puppet manifests are part of Fuel so the version of Fuel should be
 used. It is possible to have more than one version of Fuel per
 OpenStack version, but not the other way around: if we upgrade
 OpenStack version we also increase version of Fuel.

 2) Should be a combination of both: it should indicate which OpenStack
 version it is based on (2014.1.1), and version of Fuel it's included
 in (5.0.1), e.g. 2014.1.1-5.0.1. Between Fuel versions, we can have
 additional bugfix patches added to shipped OpenStack components.

 my 2c,
 -DmitryB


 On Tue, Jul 1, 2014 at 9:50 AM, Igor Kalnitsky 
 wrote:
 > Hi fuelers,
 >
 > I'm working on Patching for OpenStack and I have the following
 questions:
 >
 > 1/ We need to save new puppets and repos under some versioned folder:
 >
 > /etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.
 >
 > So the question is which version to use? Fuel or OpenStack?
 >
 > 2/ Which version do we have to use for our patches? We have an OpenStack
 > 2014.1.
 > Should we use 2014.1.1 for our first patch? Or do we have to use another
 > format?
 >
 > I need a quick reply since these questions have to be solved for
 > 5.0.1 too.
 >
 > Thanks,
 > Igor
 >
 >



 --
 Dmitry Borodaenko



Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 09:52 AM, Sullivan, Jon Paul wrote:
>> -Original Message-
>> From: Anita Kuno [mailto:ante...@anteaya.info]
>> Sent: 03 July 2014 13:53
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is "Success"
>> exactly?
>>
>> On 07/03/2014 06:22 AM, Sullivan, Jon Paul wrote:
 -Original Message-
 From: Anita Kuno [mailto:ante...@anteaya.info]
 Sent: 01 July 2014 14:42
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [third-party-ci][neutron] What is
>> "Success"
 exactly?

 On 06/30/2014 09:13 PM, Jay Pipes wrote:
> On 06/30/2014 07:08 PM, Anita Kuno wrote:
>> On 06/30/2014 04:22 PM, Jay Pipes wrote:
>>> Hi Stackers,
>>>
>>> Some recent ML threads [1] and a hot IRC meeting today [2] brought
>>> up some legitimate questions around how a newly-proposed
>>> Stackalytics report page for Neutron External CI systems [2]
>>> represented the results of an external CI system as "successful"
>>> or
 not.
>>>
>>> First, I want to say that Ilya and all those involved in the
>>> Stackalytics program simply want to provide the most accurate
>>> information to developers in a format that is easily consumed.
>>> While there need to be some changes in how data is shown (and the
>>> wording of things like "Tests Succeeded"), I hope that the
>>> community knows there isn't any ill intent on the part of Mirantis
>>> or anyone who works on Stackalytics. OK, so let's keep the
>>> conversation civil -- we're all working towards the same goals of
>>> transparency and accuracy. :)
>>>
>>> Alright, now, Anita and Kurt Taylor were asking a very poignant
>>> question:
>>>
>>> "But what does CI tested really mean? just running tests? or
>>> tested to pass some level of requirements?"
>>>
>>> In this nascent world of external CI systems, we have a set of
>>> issues that we need to resolve:
>>>
>>> 1) All of the CI systems are different.
>>>
>>> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
>>> scripts. Others run custom Python code that spawns VMs and
>>> publishes logs to some public domain.
>>>
>>> As a community, we need to decide whether it is worth putting in
>>> the effort to create a single, unified, installable and runnable
>>> CI system, so that we can legitimately say "all of the external
>>> systems are identical, with the exception of the driver code for
>>> vendor X being substituted in the Neutron codebase."
>>>
>>> If the goal of the external CI systems is to produce reliable,
>>> consistent results, I feel the answer to the above is "yes", but
>>> I'm interested to hear what others think. Frankly, in the world of
>>> benchmarks, it would be unthinkable to say "go ahead and everyone
>>> run your own benchmark suite", because you would get wildly
>>> different results. A similar problem has emerged here.
>>>
>>> 2) There is no mediation or verification that the external CI
>>> system is actually testing anything at all
>>>
>>> As a community, we need to decide whether the current system of
>>> self-policing should continue. If it should, then language on
>>> reports like [3] should be very clear that any numbers derived
>>> from such systems should be taken with a grain of salt. Use of the
>>> word "Success" should be avoided, as it has connotations (in
>>> English, at
>>> least) that the result has been verified, which is simply not the
>>> case as long as no verification or mediation occurs for any
>>> external
 CI system.
>>>
>>> 3) There is no clear indication of what tests are being run, and
>>> therefore there is no clear indication of what "success" is
>>>
>>> I think we can all agree that a test has three possible outcomes:
>>> pass, fail, and skip. The results of a test suite run therefore is
>>> nothing more than the aggregation of which tests passed, which
>>> failed, and which were skipped.
>>>
>>> As a community, we must document, for each project, what are
>>> expected set of tests that must be run for each merged patch into
>>> the project's source tree. This documentation should be
>>> discoverable so that reports like [3] can be crystal-clear on what
>>> the data shown actually means. The report is simply displaying the
>>> data it receives from Gerrit. The community needs to be proactive
>>> in saying "this is what is expected to be tested." This alone
>>> would allow the report to give information such as "External CI
>>> system ABC performed the
 expected tests. X tests passed.
>>> Y tests failed. Z tests were skipped." Likewise, it would also
>>> make it possible for the report to give information such as
>>> "External CI system 

Re: [openstack-dev] Using tmux instead of screen in devstack

2014-07-03 Thread Dean Troyer
On Thu, Jul 3, 2014 at 2:59 AM, Anant Patil  wrote:
>
> >> If we did move from screen to tmux.
> We aren't moving away... I have emphasized this enough. One should be
> able just "choose" tmux explicitly otherwise things will run in screen
> by default, as usual.
>

And I am saying due to the nature of how we use screen, we are not going to
support two.  Adding a third set of semantics to consider when debugging
process problems is not an option.

dt

-- 

Dean Troyer
dtro...@gmail.com


Re: [openstack-dev] [neutron] Flavor framework: Conclusion

2014-07-03 Thread Susanne Balle
+1


On Wed, Jul 2, 2014 at 10:12 PM, Kyle Mestery 
wrote:

> We're coming down to the wire here with regards to Neutron BPs in
> Juno, and I wanted to bring up the topic of the flavor framework BP.
> This is a critical BP for things like LBaaS, FWaaS, etc. We need this
> work to land in Juno, as these other work items are dependent on it.
> There are still two proposals [1] [2], and after the meeting last week
> [3] it appeared we were close to conclusion on this. I now see a bunch
> of comments on both proposals.
>
> I'm going to again suggest we spend some time discussing this at the
> Neutron meeting on Monday to come to a closure on this. I think we're
> close. I'd like to ask Mark and Eugene to both look at the latest
> comments, hopefully address them before the meeting, and then we can
> move forward with this work for Juno.
>
> Thanks for all the work by all involved on this feature! I think we're
> close and I hope we can close on it Monday at the Neutron meeting!
>
> Kyle
>
> [1] https://review.openstack.org/#/c/90070/
> [2] https://review.openstack.org/102723
> [3]
> http://eavesdrop.openstack.org/meetings/networking_advanced_services/2014/networking_advanced_services.2014-06-27-17.30.log.html
>


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Sullivan, Jon Paul
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: 03 July 2014 13:53
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is "Success"
> exactly?
> 
> On 07/03/2014 06:22 AM, Sullivan, Jon Paul wrote:
> >> -Original Message-
> >> From: Anita Kuno [mailto:ante...@anteaya.info]
> >> Sent: 01 July 2014 14:42
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is
> "Success"
> >> exactly?
> >>
> >> On 06/30/2014 09:13 PM, Jay Pipes wrote:
> >>> On 06/30/2014 07:08 PM, Anita Kuno wrote:
>  On 06/30/2014 04:22 PM, Jay Pipes wrote:
> > Hi Stackers,
> >
> > Some recent ML threads [1] and a hot IRC meeting today [2] brought
> > up some legitimate questions around how a newly-proposed
> > Stackalytics report page for Neutron External CI systems [2]
> > represented the results of an external CI system as "successful"
> > or
> >> not.
> >
> > First, I want to say that Ilya and all those involved in the
> > Stackalytics program simply want to provide the most accurate
> > information to developers in a format that is easily consumed.
> > While there need to be some changes in how data is shown (and the
> > wording of things like "Tests Succeeded"), I hope that the
> > community knows there isn't any ill intent on the part of Mirantis
> > or anyone who works on Stackalytics. OK, so let's keep the
> > conversation civil -- we're all working towards the same goals of
> > transparency and accuracy. :)
> >
> > Alright, now, Anita and Kurt Taylor were asking a very poignant
> > question:
> >
> > "But what does CI tested really mean? just running tests? or
> > tested to pass some level of requirements?"
> >
> > In this nascent world of external CI systems, we have a set of
> > issues that we need to resolve:
> >
> > 1) All of the CI systems are different.
> >
> > Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> > scripts. Others run custom Python code that spawns VMs and
> > publishes logs to some public domain.
> >
> > As a community, we need to decide whether it is worth putting in
> > the effort to create a single, unified, installable and runnable
> > CI system, so that we can legitimately say "all of the external
> > systems are identical, with the exception of the driver code for
> > vendor X being substituted in the Neutron codebase."
> >
> > If the goal of the external CI systems is to produce reliable,
> > consistent results, I feel the answer to the above is "yes", but
> > I'm interested to hear what others think. Frankly, in the world of
> > benchmarks, it would be unthinkable to say "go ahead and everyone
> > run your own benchmark suite", because you would get wildly
> > different results. A similar problem has emerged here.
> >
> > 2) There is no mediation or verification that the external CI
> > system is actually testing anything at all
> >
> > As a community, we need to decide whether the current system of
> > self-policing should continue. If it should, then language on
> > reports like [3] should be very clear that any numbers derived
> > from such systems should be taken with a grain of salt. Use of the
> > word "Success" should be avoided, as it has connotations (in
> > English, at
> > least) that the result has been verified, which is simply not the
> > case as long as no verification or mediation occurs for any
> > external
> >> CI system.
> >
> > 3) There is no clear indication of what tests are being run, and
> > therefore there is no clear indication of what "success" is
> >
> > I think we can all agree that a test has three possible outcomes:
> > pass, fail, and skip. The results of a test suite run therefore is
> > nothing more than the aggregation of which tests passed, which
> > failed, and which were skipped.
> >
> > As a community, we must document, for each project, what are
> > expected set of tests that must be run for each merged patch into
> > the project's source tree. This documentation should be
> > discoverable so that reports like [3] can be crystal-clear on what
> > the data shown actually means. The report is simply displaying the
> > data it receives from Gerrit. The community needs to be proactive
> > in saying "this is what is expected to be tested." This alone
> > would allow the report to give information such as "External CI
> > system ABC performed the
> >> expected tests. X tests passed.
> > Y tests failed. Z tests were skipped." Likewise, it would also
> > make it possible for the report to give information such as
> > "External CI system DEF did not perform the expected tests.",
> > which is excellent information in and of itself.

Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 07:12 AM, Salvatore Orlando wrote:
> Apologies for quoting again the top post of the thread.
> 
> Comments inline (mostly thinking aloud)
> Salvatore
> 
> 
> On 30 June 2014 22:22, Jay Pipes  wrote:
> 
>> Hi Stackers,
>>
>> Some recent ML threads [1] and a hot IRC meeting today [2] brought up some
>> legitimate questions around how a newly-proposed Stackalytics report page
>> for Neutron External CI systems [2] represented the results of an external
>> CI system as "successful" or not.
>>
>> First, I want to say that Ilya and all those involved in the Stackalytics
>> program simply want to provide the most accurate information to developers
>> in a format that is easily consumed. While there need to be some changes in
>> how data is shown (and the wording of things like "Tests Succeeded"), I
>> hope that the community knows there isn't any ill intent on the part of
>> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
>> conversation civil -- we're all working towards the same goals of
>> transparency and accuracy. :)
>>
>> Alright, now, Anita and Kurt Taylor were asking a very poignant question:
>>
>> "But what does CI tested really mean? just running tests? or tested to
>> pass some level of requirements?"
>>
>> In this nascent world of external CI systems, we have a set of issues that
>> we need to resolve:
>>
>> 1) All of the CI systems are different.
>>
>> Some run Bash scripts. Some run Jenkins slaves and devstack-gate scripts.
>> Others run custom Python code that spawns VMs and publishes logs to some
>> public domain.
>>
>> As a community, we need to decide whether it is worth putting in the
>> effort to create a single, unified, installable and runnable CI system, so
>> that we can legitimately say "all of the external systems are identical,
>> with the exception of the driver code for vendor X being substituted in the
>> Neutron codebase."
>>
> 
> I think such a system already exists, and it's documented here:
> http://ci.openstack.org/
> Still, understanding it is quite a learning curve, and running it is not
> exactly straightforward. But I guess that's pretty much understandable
> given the complexity of the system, isn't it?
> 
> 
>>
>> If the goal of the external CI systems is to produce reliable, consistent
>> results, I feel the answer to the above is "yes", but I'm interested to
>> hear what others think. Frankly, in the world of benchmarks, it would be
>> unthinkable to say "go ahead and everyone run your own benchmark suite",
>> because you would get wildly different results. A similar problem has
>> emerged here.
>>
> 
> I don't think the particular infrastructure, which might range from an
> openstack-ci clone to a 100-line bash script, would have an impact on the
> "reliability" of the quality assessment regarding a particular driver or
> plugin. This is determined, in my opinion, by the quantity and nature of
> tests one runs on a specific driver. In Neutron for instance, there is a
> wide range of choices - from a few test cases in tempest.api.network to the
> full smoketest job. As long as there is no minimal standard here, it would
> be difficult to assess the quality of the evaluation from a CI system,
> unless we explicitly take coverage into account in the evaluation.
> 
> On the other hand, different CI infrastructures will have different levels
> in terms of % of patches tested and % of infrastructure failures. I think
> it might not be a terrible idea to use these parameters to evaluate how
> good a CI is from an infra standpoint. However, there are still open
> questions. For instance, a CI might have a low patch % score because it
> only needs to test patches affecting a given driver.
> 
> 
>> 2) There is no mediation or verification that the external CI system is
>> actually testing anything at all
>>
>> As a community, we need to decide whether the current system of
>> self-policing should continue. If it should, then language on reports like
>> [3] should be very clear that any numbers derived from such systems should
>> be taken with a grain of salt. Use of the word "Success" should be avoided,
>> as it has connotations (in English, at least) that the result has been
>> verified, which is simply not the case as long as no verification or
>> mediation occurs for any external CI system.
>>
> 
> 
> 
> 
>> 3) There is no clear indication of what tests are being run, and therefore
>> there is no clear indication of what "success" is
>>
>> I think we can all agree that a test has three possible outcomes: pass,
>> fail, and skip. The results of a test suite run therefore is nothing more
>> than the aggregation of which tests passed, which failed, and which were
>> skipped.
>>
>> As a community, we must document, for each project, what are expected set
>> of tests that must be run for each merged patch into the project's source
>> tree. This documentation should be discoverable so that reports like [3]
>> can be crystal-clear on what the data shown actually means.

Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Anita Kuno
On 07/03/2014 06:22 AM, Sullivan, Jon Paul wrote:
>> -Original Message-
>> From: Anita Kuno [mailto:ante...@anteaya.info]
>> Sent: 01 July 2014 14:42
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is "Success"
>> exactly?
>>
>> On 06/30/2014 09:13 PM, Jay Pipes wrote:
>>> On 06/30/2014 07:08 PM, Anita Kuno wrote:
 On 06/30/2014 04:22 PM, Jay Pipes wrote:
> Hi Stackers,
>
> Some recent ML threads [1] and a hot IRC meeting today [2] brought
> up some legitimate questions around how a newly-proposed
> Stackalytics report page for Neutron External CI systems [2]
> represented the results of an external CI system as "successful" or
>> not.
>
> First, I want to say that Ilya and all those involved in the
> Stackalytics program simply want to provide the most accurate
> information to developers in a format that is easily consumed. While
> there need to be some changes in how data is shown (and the wording
> of things like "Tests Succeeded"), I hope that the community knows
> there isn't any ill intent on the part of Mirantis or anyone who
> works on Stackalytics. OK, so let's keep the conversation civil --
> we're all working towards the same goals of transparency and
> accuracy. :)
>
> Alright, now, Anita and Kurt Taylor were asking a very poignant
> question:
>
> "But what does CI tested really mean? just running tests? or tested
> to pass some level of requirements?"
>
> In this nascent world of external CI systems, we have a set of
> issues that we need to resolve:
>
> 1) All of the CI systems are different.
>
> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> scripts. Others run custom Python code that spawns VMs and publishes
> logs to some public domain.
>
> As a community, we need to decide whether it is worth putting in the
> effort to create a single, unified, installable and runnable CI
> system, so that we can legitimately say "all of the external systems
> are identical, with the exception of the driver code for vendor X
> being substituted in the Neutron codebase."
>
> If the goal of the external CI systems is to produce reliable,
> consistent results, I feel the answer to the above is "yes", but I'm
> interested to hear what others think. Frankly, in the world of
> benchmarks, it would be unthinkable to say "go ahead and everyone
> run your own benchmark suite", because you would get wildly
> different results. A similar problem has emerged here.
>
> 2) There is no mediation or verification that the external CI system
> is actually testing anything at all
>
> As a community, we need to decide whether the current system of
> self-policing should continue. If it should, then language on
> reports like [3] should be very clear that any numbers derived from
> such systems should be taken with a grain of salt. Use of the word
> "Success" should be avoided, as it has connotations (in English, at
> least) that the result has been verified, which is simply not the
> case as long as no verification or mediation occurs for any external
>> CI system.
>
> 3) There is no clear indication of what tests are being run, and
> therefore there is no clear indication of what "success" is
>
> I think we can all agree that a test has three possible outcomes:
> pass, fail, and skip. The results of a test suite run therefore is
> nothing more than the aggregation of which tests passed, which
> failed, and which were skipped.
>
> As a community, we must document, for each project, what are
> expected set of tests that must be run for each merged patch into
> the project's source tree. This documentation should be discoverable
> so that reports like [3] can be crystal-clear on what the data shown
> actually means. The report is simply displaying the data it receives
> from Gerrit. The community needs to be proactive in saying "this is
> what is expected to be tested." This alone would allow the report to
> give information such as "External CI system ABC performed the
>> expected tests. X tests passed.
> Y tests failed. Z tests were skipped." Likewise, it would also make
> it possible for the report to give information such as "External CI
> system DEF did not perform the expected tests.", which is excellent
> information in and of itself.
>
> ===
>
> In thinking about the likely answers to the above questions, I
> believe it would be prudent to change the Stackalytics report in
> question [3] in the following ways:
>
> a. Change the "Success %" column header to "% Reported +1 Votes"
> b. Change the phrase " Green cell - tests ran successfully, red cell
> - tests failed" to "Green cell - System voted +1, red cell - System
> voted -1"

Re: [openstack-dev] [Fuel] Few hot questions related to patching for openstack

2014-07-03 Thread Dmitry Pyzhov
So, our releases will have the following versions shown in the UI:
5.0) "2014.1"
5.0.1) "2014.1.1-5.0.1"
5.1) "2014.1.1-5.1"

And if someone installs 5.0, upgrades it to 5.0.1, and then upgrades to 5.1,
they will have three releases for each OS. I think we should allow the user
to delete unneeded releases. It will also free up space on the master node.
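(For illustration only: a combined version string like the above splits
cleanly into its OpenStack and Fuel parts. The helper below is a
hypothetical sketch, not part of Fuel.)

def parse_release_version(version):
    # Hypothetical helper: split "2014.1.1-5.0.1" into the OpenStack
    # release ("2014.1.1") and the Fuel release ("5.0.1").
    if '-' in version:
        openstack_ver, fuel_ver = version.split('-', 1)
    else:
        # Plain "2014.1" style (e.g. 5.0): no Fuel suffix attached.
        openstack_ver, fuel_ver = version, None
    return openstack_ver, fuel_ver

assert parse_release_version('2014.1.1-5.0.1') == ('2014.1.1', '5.0.1')
assert parse_release_version('2014.1') == ('2014.1', None)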


On Wed, Jul 2, 2014 at 1:34 PM, Igor Kalnitsky 
wrote:

> Hello,
>
> > Could you please clarify what exactly you mean by  "our patches" /
> > "our first patch"?
>
> I mean which version should we use in 5.0.1, for example? As far as I
> understand @DmitryB, it has to be "2014.1-5.0.1". Am I right?
>
> Thanks,
> Igor
>
>
>
> On Tue, Jul 1, 2014 at 8:47 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> my 2 cents:
>>
>> 1) Fuel version (+1 to Dmitry)
>> 2) Could you please clarify what exactly you mean by "our patches" / "our
>> first patch"?
>>
>>
>>
>>
>> On Tue, Jul 1, 2014 at 8:04 PM, Dmitry Borodaenko <
>> dborodae...@mirantis.com> wrote:
>>
>>> 1) Puppet manifests are part of Fuel so the version of Fuel should be
>>> used. It is possible to have more than one version of Fuel per
>>> OpenStack version, but not the other way around: if we upgrade
>>> OpenStack version we also increase version of Fuel.
>>>
>>> 2) Should be a combination of both: it should indicate which OpenStack
>>> version it is based on (2014.1.1), and version of Fuel it's included
>>> in (5.0.1), e.g. 2014.1.1-5.0.1. Between Fuel versions, we can have
>>> additional bugfix patches added to shipped OpenStack components.
>>>
>>> my 2c,
>>> -DmitryB
>>>
>>>
>>> On Tue, Jul 1, 2014 at 9:50 AM, Igor Kalnitsky 
>>> wrote:
>>> > Hi fuelers,
>>> >
>>> > I'm working on Patching for OpenStack and I have the following
>>> questions:
>>> >
>>> > 1/ We need to save new puppets and repos under some versioned folder:
>>> >
>>> > /etc/puppet/{version}/ or /var/www/nailgun/{version}/centos.
>>> >
>>> > So the question is which version to use? Fuel or OpenStack?
>>> >
> >>> > 2/ Which version do we have to use for our patches? We have OpenStack
>>> 2014.1.
>>> > Should we use 2014.1.1 for our first patch? Or we have to use another
>>> > format?
>>> >
>>> > I need a quick reply since these questions have to be solved for 5.0.1
>>> too.
>>> >
>>> > Thanks,
>>> > Igor
>>> >
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>>
>>>
>>> --
>>> Dmitry Borodaenko
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party] - minimum response time for 3rd party CI responses

2014-07-03 Thread Luke Gorrie
On 3 July 2014 02:44, Michael Still  wrote:

> The main purpose is to let change reviewers know that a change might
> be problematic for a piece of code not well tested by the gate


Just a thought:

A "sampling" approach could be a reasonable way to stay responsive under
heavy load and still give a strong signal to reviewers about whether a
change is likely to be problematic.

I mean: Kevin mentions that his CI gets an hours-long queue during peak
review season. One way to deal with that could be skipping some events, e.g.
tossing a coin to decide whether to test the next revision of a change it
has already +1'd. That would keep responsiveness under control
even when throughput is a problem.

(A bit like how a router manages a congested input queue or how a sampling
profiler keeps overhead low.)

Could be worth keeping the rules flexible enough to permit this kind of
thing, at least?
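A minimal sketch of that sampling idea, with illustrative threshold and
probability values (none of this is part of any existing CI tooling):

import random

QUEUE_SOFT_LIMIT = 50      # illustrative: above this, start shedding load
RETEST_PROBABILITY = 0.5   # illustrative: the coin toss mentioned above

def should_test(queue_depth, previously_plus_oned):
    # Decide whether to enqueue a Gerrit patchset-created event.
    if queue_depth < QUEUE_SOFT_LIMIT:
        return True  # no congestion: test everything
    if previously_plus_oned:
        # An earlier revision of this change already got our +1;
        # under load, sample instead of testing every new revision.
        return random.random() < RETEST_PROBABILITY
    return True  # never skip a change we haven't vetted yet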
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][third-party-ci] Midokura CI Bot results migration

2014-07-03 Thread Anita Kuno
On 07/03/2014 07:54 AM, Lucas Eznarriaga wrote:
> Hi,
> 
> For maintenance reasons, our CI results server has been moved to a new
> machine.
> As we didn't use a DNS name before, previous links posted on
> review.openstack.org will appear broken, but they're still available
> if you replace the previously hardcoded IP, 119.15.112.63, with the new
> DNS name: 3rdparty-logs.midokura.com
> 
> Sorry for the inconvenience,
> Best regards,
> 
> Lucas
> Midokura
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
Hello Lucas:

Thank you for being proactive and informing everyone of the
backwards-incompatible URL change.

Given that you had previously hardcoded the IP address into your URLs,
are you willing to offer a patch to
http://ci.openstack.org/third_party.html found here:
http://git.openstack.org/cgit/openstack-infra/config/tree/doc/source/third_party.rst
to prevent other third-party CI systems from using backwards-incompatible
URLs in the future?

Let me know your thoughts. If you are willing but don't know how, I can
help you. If you aren't willing, I can do it myself, but I would like to
start helping others offer patches to third-party documentation.

Thanks Lucas,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Moving neutron to oslo.db

2014-07-03 Thread Salvatore Orlando
No, I was missing everything and kept wasting time because of alembic.

This will teach me to keep my mouth shut and not distract people who are
actually doing good work.

Thanks for doing this work.

Salvatore


On 3 July 2014 14:15, Roman Podoliaka  wrote:

> Hi Salvatore,
>
> I must be missing something. Hasn't it been done in
> https://review.openstack.org/#/c/103519/? :)
>
> Thanks,
> Roman
>
> On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando 
> wrote:
> > Hi,
> >
> > As you surely know, in Juno oslo.db will graduate [1]
> > I am currently working on the port. It's already been made clear that
> > making alembic migrations "idempotent" and healing the DB schema is a
> requirement
> > for this task.
> > These two activities are tracked by the blueprints [2] and [3].
> > I think we've seen enough in OpenStack to understand that there is no
> chance
> > of being able to do the port to oslo.db in Juno.
> >
> > While blueprint [2] is already approved, I suggest also targeting [3] for
> > Juno so that we might be able to port neutron to oslo.db as soon as K
> opens.
> > I expect this port to be not as invasive as the one for oslo.messaging
> which
> > required quite a lot of patches.
> >
> > Salvatore
> >
> > [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
> > [2] https://review.openstack.org/#/c/95738/
> > [3] https://review.openstack.org/#/c/101963/
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Moving neutron to oslo.db

2014-07-03 Thread Roman Podoliaka
Hi Salvatore,

I must be missing something. Hasn't it been done in
https://review.openstack.org/#/c/103519/? :)

Thanks,
Roman

On Thu, Jul 3, 2014 at 2:51 PM, Salvatore Orlando  wrote:
> Hi,
>
> As you surely know, in Juno oslo.db will graduate [1]
> I am currently working on the port. It's already been made clear that making
> alembic migrations "idempotent" and healing the DB schema is a requirement
> for this task.
> These two activities are tracked by the blueprints [2] and [3].
> I think we've seen enough in OpenStack to understand that there is no chance
> of being able to do the port to oslo.db in Juno.
>
> While blueprint [2] is already approved, I suggest also targeting [3] for
> Juno so that we might be able to port neutron to oslo.db as soon as K opens.
> I expect this port to be not as invasive as the one for oslo.messaging which
> required quite a lot of patches.
>
> Salvatore
>
> [1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
> [2] https://review.openstack.org/#/c/95738/
> [3] https://review.openstack.org/#/c/101963/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] oslo.i18n 0.1.0 released

2014-07-03 Thread Flavio Percoco
On 07/02/2014 09:11 PM, Doug Hellmann wrote:
> The Oslo team is pleased to announce the first release of oslo.i18n,
> the library that replaces the gettextutils module from oslo-incubator.
> 
> The new library has been uploaded to PyPI, and there is a changeset in
> the queue to update the global requirements list and our package mirror:
> https://review.openstack.org/104304
> 
> Documentation for the library is available on our developer docs site:
> http://docs.openstack.org/developer/oslo.i18n/
> 
> The spec for the graduation blueprint includes some advice for
> migrating to the new library:
> http://git.openstack.org/cgit/openstack/oslo-specs/tree/specs/juno/graduate-oslo-i18n.rst
> 
> Please report bugs using the Oslo bug tracker in launchpad:
> http://bugs.launchpad.net/oslo
> 
> Thanks to everyone who helped with reviews and patches to make this
> release possible!

w0t, and now:

GRADUATE ALL THE THINGS
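For projects starting the migration, the integration module described in
the graduation spec looks roughly like this sketch (the domain name is a
placeholder; with early releases the import may be "from oslo import i18n"
instead — check the spec and docs linked above for the exact layout):

# myproject/_i18n.py -- rough sketch of an oslo.i18n integration module;
# "myproject" is a placeholder domain.
import oslo_i18n

_translators = oslo_i18n.TranslatorFactory(domain='myproject')

# The primary translation function using the well-known name "_"
_ = _translators.primary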


-- 
@flaper87
Flavio Percoco

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] milestone-proposed is dead, long lives proposed/foo

2014-07-03 Thread Jeremy Stanley
On 2014-07-03 11:20:43 +0400 (+0400), Yuriy Taraday wrote:
> I mean other mirrors like we have in our local net. Given the not-so-good
> connection to upstream repos (the reason we have this mirror in the
> first place), I can't think of a reliable way to clean them up. Where
> can I find the scripts that propagate deletions to official mirrors?
> Maybe I can get some ideas from them?

Our official mirrors are "push mirrors" (Gerrit pushes every Git
update to them individually), so I don't think that's going to be
much help for your situation. Probably you could just run a cron job
to compare available remote branches and delete them for you locally
once they no longer exist upstream.
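Such a cron job can be as small as this sketch (the mirror paths are
illustrative; git's --prune flag does the actual deletion of refs that
are gone upstream):

#!/usr/bin/env python
# Sketch of a cron job that drops local refs for branches (e.g. expired
# proposed/* or stable/*) that no longer exist upstream.
import subprocess

MIRRORS = ['/srv/mirrors/openstack/nova.git',
           '/srv/mirrors/openstack/neutron.git']

for path in MIRRORS:
    # "git fetch --prune" removes remote-tracking refs whose upstream
    # branch has been deleted.
    subprocess.check_call(['git', 'fetch', '--prune', 'origin'], cwd=path)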

Keep in mind that this is nothing new. We already delete stable/*
branches when they reach end of support, so the policy change for
new proposed/* branches only really doubles the number of branches
we're deleting per cycle now.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][third-party-ci] Midokura CI Bot results migration

2014-07-03 Thread Lucas Eznarriaga
Hi,

For maintenance reasons, our CI results server has been moved to a new
machine.
As we didn't use a DNS name before, previous links posted on
review.openstack.org will appear broken, but they're still available
if you replace the previously hardcoded IP, 119.15.112.63, with the new
DNS name: 3rdparty-logs.midokura.com
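Rewriting any saved links is mechanical; for example (the log path below
is made up for illustration):

# Rewrite a saved log link from the old hardcoded IP to the new DNS name.
old_link = 'http://119.15.112.63/42/104242/3/check/tempest/logs/'
new_link = old_link.replace('119.15.112.63', '3rdparty-logs.midokura.com')
print(new_link)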

Sorry for the inconvenience,
Best regards,

Lucas
Midokura
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Moving neutron to oslo.db

2014-07-03 Thread Salvatore Orlando
Hi,

As you surely know, in Juno oslo.db will graduate [1]
I am currently working on the port. It's already been made clear that making
alembic migrations "idempotent" and healing the DB schema is a requirement
for this task.
These two activities are tracked by the blueprints [2] and [3].
I think we've seen enough in OpenStack to understand that there is no
chance of being able to do the port to oslo.db in Juno.

While blueprint [2] is already approved, I suggest also targeting [3] for
Juno so that we might be able to port neutron to oslo.db as soon as K
opens. I expect this port to be not as invasive as the one for
oslo.messaging which required quite a lot of patches.

Salvatore

[1] https://wiki.openstack.org/wiki/Oslo/GraduationStatus#oslo.db
[2] https://review.openstack.org/#/c/95738/
[3] https://review.openstack.org/#/c/101963/
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-07-03 Thread Macdonald-Wallace, Matthew
And the spec is now up at https://review.openstack.org/104524 for everyone to 
pull apart... ;)

Matt

> -Original Message-
> From: Macdonald-Wallace, Matthew
> Sent: 03 July 2014 11:18
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [TripleO] os-refresh-config run frequency
> 
> FWIW, I've just registered https://blueprints.launchpad.net/tripleo/+spec/re-
> assert-system-state and I'm about to start work on the spec.
> 
> Matt
> 
> > -Original Message-
> > From: Clint Byrum [mailto:cl...@fewbar.com]
> > Sent: 27 June 2014 17:01
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [TripleO] os-refresh-config run frequency
> >
> > Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-27
> > 00:14:49
> > -0700:
> > > Hi Clint,
> > >
> > > > -Original Message-
> > > > From: Clint Byrum [mailto:cl...@fewbar.com]
> > > > Sent: 26 June 2014 20:21
> > > > To: openstack-dev
> > > > Subject: Re: [openstack-dev] [TripleO] os-refresh-config run
> > > > frequency
> > > >
> > >
> > > > So I see two problems highlighted above.
> > > >
> > > > 1) We don't re-assert ephemeral state set by o-r-c scripts. You're
> > > > right, and we've been talking about it for a while. The right
> > > > thing to do is have os-collect-config re-run its command on boot.
> > > > I don't think a cron job is the right way to go, we should just
> > > > have a file in /var/run that is placed there only on a successful
> > > > run of the command. If
> > that file does not exist, then we run the command.
> > > >
> > > > I've just opened this bug in response:
> > > >
> > > > https://bugs.launchpad.net/os-collect-config/+bug/1334804
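A minimal sketch of that boot-time guard (the sentinel path and command
invocation are illustrative):

# Sketch: re-run the configured command on boot unless a sentinel left
# by a previous successful run exists.
import os
import subprocess

SENTINEL = '/var/run/os-collect-config.ran'

if not os.path.exists(SENTINEL):
    subprocess.check_call(['os-refresh-config'])
    # Only mark success once the command has exited cleanly.
    open(SENTINEL, 'w').close()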
> > >
> > >
> > > Cool, I'm more than happy for this to be done elsewhere, I'm glad
> > > that people
> > are in agreement with me on the concept and that work has already
> > started on this.
> > >
> > > I'll add some notes to the bug if needed later on today.
> > >
> > > > 2) We don't re-assert any state on a regular basis.
> > > >
> > > > So one reason we haven't focused on this, is that we have a
> > > > stretch goal of running with a readonly root partition. It's
> > > > gotten lost in a lot of the craziness of "just get it working",
> > > > but with rebuilds blowing away root now, leading to anything not
> > > > on the state drive (/mnt currently), there's a good chance that this 
> > > > will
> work relatively well.
> > > >
> > > > Now, since people get root, they can always override the readonly
> > > > root and make changes. we hates thiss!.
> > > >
> > > > I'm open to ideas, however, os-refresh-config is definitely not
> > > > the place to solve this. It is intended as a non-resident command
> > > > to be called when it is time to assert state. os-collect-config is
> > > > intended to gather configurations, and expose them to a command
> > > > that it runs, and thus should be the mechanism by which os- 
> > > > refresh-config
> is run.
> > > >
> > > > I'd like to keep this conversation separate from one in which we
> > > > discuss more mechanisms to make os-refresh-config robust. There
> > > > are a bunch of things we can do, but I think we should focus just
> > > > on "how do we
> > re-assert state?".
> > >
> > > OK, that's fair enough.
> > >
> > > > Because we're able to say right now that it is only for running
> > > > when config changes, we can wave our hands and say it's ok that we
> > > > restart everything on every run. As Jan alluded to, that won't
> > > > work so well if we run it every 20 minutes.
> > >
> > > Agreed, and chatting with Jan and a couple of others yesterday we
> > > came to
> > the conclusion that whatever we do here it will require "tweaking" of
> > a number of elements to safely restart services.
> > >
> > > > So, I wonder if we can introduce a config version into 
> > > > os-collect-config.
> > > >
> > > > Basically os-collect-config would keep a version along with its cache.
> > > > Whenever a new version is detected, os-collect-config would set a
> > > > value in the environment that informs the command "this is a new
> > > > version of config". From that, scripts can do things like this:
> > > >
> > > > if [ -n "$OS_CONFIG_NEW_VERSION" ] ; then
> > > >   service X restart
> > > > else
> > > >   if ! service X status ; then service X start ; fi
> > > > fi
> > > >
> > > > This would lay the groundwork for future abilities to compare
> > > > old/new so we can take shortcuts by diffing the two config versions.
> > > > For instance if we look at old vs. new and we don't see any of the
> > > > keys we care about changed, we can skip restarting.
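A sketch of that diff shortcut (the watched key names are illustrative):

# Only restart when one of the keys a service actually cares about has
# changed between the old and new config versions.
WATCHED_KEYS = {'rabbit_host', 'sql_connection'}

def needs_restart(old_cfg, new_cfg):
    return any(old_cfg.get(key) != new_cfg.get(key)
               for key in WATCHED_KEYS)

assert needs_restart({'rabbit_host': 'a'}, {'rabbit_host': 'b'})
assert not needs_restart({'rabbit_host': 'a', 'debug': True},
                         {'rabbit_host': 'a', 'debug': False})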
> > >
> > > I like this approach - does this require a new spec? If so, I'll
> > > start an etherpad
> > to collect thoughts on it before writing it up for approval.
> >
> > I think this should be a tripleo spec. If you're volunteering write
> > it, hooray \o/. It will require several work items. Off the top of my head:
> >
> > - Add version awareness to os-collect-config
> > 

Re: [openstack-dev] [neutron]Performance of security group

2014-07-03 Thread Miguel Angel Ajo

I have created a separate spec for the RPC part.

https://review.openstack.org/104522


On 07/02/2014 05:52 PM, Kyle Mestery wrote:

On Wed, Jul 2, 2014 at 3:43 AM, Ihar Hrachyshka  wrote:


On 02/07/14 10:12, Miguel Angel Ajo wrote:


Shihazhang,

I really believe we need the RPC refactor done for this cycle, given
the close deadlines we have (July 10 for spec submission and
July 20 for spec approval).

Don't you think it's going to be better to split the work in
several specs?

1) ipset optimization (you)
2) sg rpc optimization (without fanout) (me)
3) sg rpc optimization (with fanout) (edouard, you, me)


This way we increase the chances of having part of this for the
Juno cycle. If we go for something too complicated, it is going to take
more time for approval.



I agree. And it not only increases the chances of getting at least some
of those highly demanded performance enhancements into Juno, it's
also "the right thing to do" (c). It's counterproductive to put
multiple vaguely related enhancements in single spec. This would dim
review focus and put us into position of getting 'all-or-nothing'. We
can't afford that.

Let's leave one spec per enhancement. @Shihazhang, what do you think?


+100

File these as separate specs, and lets see how much of this we can get
into Juno.

Thanks for taking this enhancement and performance improvement on everyone!

Kyle



Also, I proposed the details of "2", trying to bring awareness on
the topic, as I have been working with the scale lab in Red Hat to
find and understand those issues. I have very good knowledge of
the problem and I believe I could make very fast progress on the
issue at the RPC level.

Given that, I'd like to work on this specific part, whether or not
we split the specs, as it's something we believe is critical for
neutron scalability and thus, *nova parity*.

I will start a separate spec for "2" later on. If you find it OK,
we keep them as separate ones; if you believe having just one spec
(for 1 & 2) is going to be safer for juno-* approval, then we can
incorporate my spec into yours, but then "add-ipset-to-security" is
not a good spec title to cover all of this.


Best regards, Miguel Ángel.


On 07/02/2014 03:37 AM, shihanzhang wrote:


hi Miguel Angel Ajo Pelayo! I agree with you and will modify my spec,
but I will also optimize the RPC from the security group agent to
the neutron server. Now the model is 'port[rule1,rule2...], port...';
I will change it to 'port[sg1, sg2..]'. This can reduce the size
of the RPC response message from the neutron server to the security
group agent.
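Sketched payload shapes for that change (illustrative, not the actual
wire format):

# Before: every port carries its full, duplicated rule list.
before = {
    'port1': ['rule1', 'rule2', 'rule3'],
    'port2': ['rule1', 'rule2', 'rule3'],
}

# After: ports reference security groups; each group's rules are sent
# once, so rules shared by many ports are no longer repeated per port.
after = {
    'ports': {'port1': ['sg1'], 'port2': ['sg1']},
    'security_groups': {'sg1': ['rule1', 'rule2', 'rule3']},
}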

At 2014-07-01 09:06:17, "Miguel Angel Ajo Pelayo"
 wrote:



Ok, I was talking with Édouard @ IRC, and as I have time to
work on this problem, I could file a specific spec for the
security group RPC optimization, a masterplan in two steps:

1) Refactor the current RPC communication for
security_groups_for_devices, which could be used for full
syncs, etc..

2) Benchmark && make use of a fanout queue per security group
to make sure only the hosts with instances on a certain
security group get the updates as they happen.

@shihanzhang do you find it reasonable?



- Original Message -

- Original Message -

@Nachi: Yes, that could be a good improvement to factorize the RPC
mechanism.


Another idea: What about creating an RPC topic per security group
(quid of the RPC topic scalability) on which an agent subscribes if
one of its ports is associated with the security group?

Regards, Édouard.





Hmm, Interesting,

@Nachi, I'm not sure I fully understood:


SG_LIST = [SG1, SG2]
SG_RULE_LIST = [SG_Rule1, SG_Rule2] ..
port1[SG_ID1, SG_ID2], port2, port3


We may also need to include the SG_IP_LIST =
[SG_IP1, SG_IP2] ...


and let the agent do all the combination work.

Would something like this make sense?

Security_Groups = {SG1: {IPs: [], RULES: []},
                   SG2: {IPs: [], RULES: []}}

Ports = {Port1: [SG1, SG2], Port2: [SG1]}


@Edouard, actually I like the idea of having the agents
subscribe to the security groups they have ports on... That
would remove the need to include all the security groups
information on every call...

But we would need another call to get the full information of a
set of security groups at start/resync if we don't already
have any.




On Fri, Jun 20, 2014 at 4:04 AM, shihanzhang <

ayshihanzh...@126.com >

wrote:



hi Miguel Ángel, I very much agree with you on the
following points:

* physical implementation on the hosts (ipsets, nftables,
... )

-- this can reduce the load on the compute node.

* rpc communication mechanisms.

-- this can reduce the load on the neutron server. Can you help
me review my BP specs?







At 2014-06-19 10:11:34, "Miguel Angel Ajo Pelayo" <

mangel...@redhat.com >

wrote:


Hi, it's a very interesting topic; I was getting ready to
raise the same concerns about our security groups
implementation.

shihanzhang, thank you for starting this topic.

Not only at low level w

Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Salvatore Orlando
Apologies for quoting again the top post of the thread.

Comments inline (mostly thinking aloud)
Salvatore


On 30 June 2014 22:22, Jay Pipes  wrote:

> Hi Stackers,
>
> Some recent ML threads [1] and a hot IRC meeting today [2] brought up some
> legitimate questions around how a newly-proposed Stackalytics report page
> for Neutron External CI systems [2] represented the results of an external
> CI system as "successful" or not.
>
> First, I want to say that Ilya and all those involved in the Stackalytics
> program simply want to provide the most accurate information to developers
> in a format that is easily consumed. While there need to be some changes in
> how data is shown (and the wording of things like "Tests Succeeded"), I
> hope that the community knows there isn't any ill intent on the part of
> Mirantis or anyone who works on Stackalytics. OK, so let's keep the
> conversation civil -- we're all working towards the same goals of
> transparency and accuracy. :)
>
> Alright, now, Anita and Kurt Taylor were asking a very poignant question:
>
> "But what does CI tested really mean? just running tests? or tested to
> pass some level of requirements?"
>
> In this nascent world of external CI systems, we have a set of issues that
> we need to resolve:
>
> 1) All of the CI systems are different.
>
> Some run Bash scripts. Some run Jenkins slaves and devstack-gate scripts.
> Others run custom Python code that spawns VMs and publishes logs to some
> public domain.
>
> As a community, we need to decide whether it is worth putting in the
> effort to create a single, unified, installable and runnable CI system, so
> that we can legitimately say "all of the external systems are identical,
> with the exception of the driver code for vendor X being substituted in the
> Neutron codebase."
>

I think such a system already exists, and it's documented here:
http://ci.openstack.org/
Still, understanding it is quite a learning curve, and running it is not
exactly straightforward. But I guess that's pretty much understandable
given the complexity of the system, isn't it?


>
> If the goal of the external CI systems is to produce reliable, consistent
> results, I feel the answer to the above is "yes", but I'm interested to
> hear what others think. Frankly, in the world of benchmarks, it would be
> unthinkable to say "go ahead and everyone run your own benchmark suite",
> because you would get wildly different results. A similar problem has
> emerged here.
>

I don't think the particular infrastructure, which might range from an
openstack-ci clone to a 100-line bash script, would have an impact on the
"reliability" of the quality assessment regarding a particular driver or
plugin. This is determined, in my opinion, by the quantity and nature of
tests one runs on a specific driver. In Neutron for instance, there is a
wide range of choices - from a few test cases in tempest.api.network to the
full smoketest job. As long as there is no minimal standard here, it would
be difficult to assess the quality of the evaluation from a CI system,
unless we explicitly take coverage into account in the evaluation.

On the other hand, different CI infrastructures will have different levels
in terms of % of patches tested and % of infrastructure failures. I think
it might not be a terrible idea to use these parameters to evaluate how
good a CI is from an infra standpoint. However, there are still open
questions. For instance, a CI might have a low patch % score because it
only needs to test patches affecting a given driver.


> 2) There is no mediation or verification that the external CI system is
> actually testing anything at all
>
> As a community, we need to decide whether the current system of
> self-policing should continue. If it should, then language on reports like
> [3] should be very clear that any numbers derived from such systems should
> be taken with a grain of salt. Use of the word "Success" should be avoided,
> as it has connotations (in English, at least) that the result has been
> verified, which is simply not the case as long as no verification or
> mediation occurs for any external CI system.
>




> 3) There is no clear indication of what tests are being run, and therefore
> there is no clear indication of what "success" is
>
> I think we can all agree that a test has three possible outcomes: pass,
> fail, and skip. The results of a test suite run therefore is nothing more
> than the aggregation of which tests passed, which failed, and which were
> skipped.
>
> As a community, we must document, for each project, what are expected set
> of tests that must be run for each merged patch into the project's source
> tree. This documentation should be discoverable so that reports like [3]
> can be crystal-clear on what the data shown actually means. The report is
> simply displaying the data it receives from Gerrit. The community needs to
> be proactive in saying "this is what is expected to be tested." This alone
> would allow the report to give information such as "External CI system
> ABC performed the expected tests. X tests passed. Y tests failed. Z tests
> were skipped."

Re: [openstack-dev] [Neutron] DVR SNAT shortcut

2014-07-03 Thread Salvatore Orlando
I would just add that if I'm not mistaken the DVR work would also include
the features currently offered by nova network's 'multi-host' capability.
While DVR clearly does a lot more than multi-host, keeping only SNAT
centralized might not fully satisfy this requirement.
Indeed nova-network offers SNAT at the compute node thus achieving
distribution of N-S traffic.

I agree with Zang's point regarding wasting public IPs. On the other hand
one IP per agent with double SNAT might be a reasonable compromise.
And in that case I'm not sure whether sharing SNAT source IPs among tenants
would have any security implications, so somebody else might comment there.

Summarizing, I think that distributing N-S traffic is important, but I
don't think that to achieve this we'd necessarily need to implement SNAT at
the compute nodes. I have reviewed the l3 agent part of the DVR work, it
seems that there will be floating IP distribution at the agent level - but
I could not understand whether there will be also SNAT distribution.

Salvatore



On 3 July 2014 10:45, Zang MingJie  wrote:

> Although SNAT DVR has some trade-offs, I still think it is
> necessary. Here are the pros and cons for consideration:
>
> pros:
>
> save W-E bandwidth
> high availability (distributed, no single point failure)
>
> cons:
>
> waste public ips (one ip per compute node vs one ip per l3-agent, if
> double-SNAT implemented)
> different tenants may share SNAT source ips
> compute node requires public interface
>
> Under certain deployments, the cons may not cause problems. Can we
> provide SNAT DVR as an alternative option, which can be fully
> controlled by the cloud admin? The admin chooses whether to use it or not.
>
> >> To resolve the problem, we are using double-SNAT,
> >
> >> first, set up one namespace for each router, SNAT tenant ip ranges to
> >> a separate range, say 169.254.255.0/24
> >
> >> then, SNAT from 169.254.255.0/24 to public network.
> >
> >> We are already using this method, and saved tons of ips in our
> >> deployment, only one public ip is required per router agent
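A sketch of those two SNAT stages as netfilter rules, driven from Python
for illustration (the addresses, interface, and namespace name are made
up; this is not the actual agent code):

import subprocess

def run(*cmd):
    subprocess.check_call(list(cmd))

# Stage 1, inside the per-router namespace: SNAT the tenant range to an
# address in the intermediate 169.254.255.0/24 range.
run('ip', 'netns', 'exec', 'qrouter-demo',
    'iptables', '-t', 'nat', '-A', 'POSTROUTING',
    '-s', '10.0.0.0/24', '-j', 'SNAT', '--to-source', '169.254.255.10')

# Stage 2, on the agent host: SNAT the intermediate range to the single
# public IP the router agent owns.
run('iptables', '-t', 'nat', '-A', 'POSTROUTING',
    '-s', '169.254.255.0/24', '-o', 'eth0',
    '-j', 'SNAT', '--to-source', '203.0.113.5')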
> >
> > Functionally it could work, but it breaks the existing normal OAM pattern,
> which expects VMs from one tenant to share a public IP while sharing no IP
> with other tenants. As far as I know, at least some customers don't accept
> this; they find it very strange for VMs on different hosts to appear as
> different public IPs.
> >
> > In fact I seriously doubt the value of N-S distribution in a real
> commercial production environment, including FIP. There are many things
> that traditional N-S central nodes need to control: security, auditing,
> logging, and so on; it is not simple forwarding. We need a trade-off
> between performance and the policy control model:
> >
> > 1. N-S traffic is usually much less than W-E traffic; do we really need
> to distribute N-S traffic in addition to W-E traffic?
> > 2. With NFV progress like Intel DPDK, we can build very cost-effective
> service applications on commodity x86 servers (simple SNAT at 10 Gbps per
> core at average Internet packet length)
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API meeting

2014-07-03 Thread Christopher Yeoh
Hi,

Just a reminder that the weekly Nova API meeting is being held tomorrow,
Friday, at 0000 UTC.

We encourage cloud operators and those who use the REST API such as
SDK developers and others who and are interested in the future of the
API to participate. 

In other timezones the meeting is at:

EST 20:00 (Thu)
Japan 09:00 (Fri)
China 08:00 (Fri)
ACDT 9:30 (Fri)

The proposed agenda and meeting details are here: 

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda. 

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party-ci][neutron] What is "Success" exactly?

2014-07-03 Thread Sullivan, Jon Paul
> -Original Message-
> From: Anita Kuno [mailto:ante...@anteaya.info]
> Sent: 01 July 2014 14:42
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [third-party-ci][neutron] What is "Success"
> exactly?
> 
> On 06/30/2014 09:13 PM, Jay Pipes wrote:
> > On 06/30/2014 07:08 PM, Anita Kuno wrote:
> >> On 06/30/2014 04:22 PM, Jay Pipes wrote:
> >>> Hi Stackers,
> >>>
> >>> Some recent ML threads [1] and a hot IRC meeting today [2] brought
> >>> up some legitimate questions around how a newly-proposed
> >>> Stackalytics report page for Neutron External CI systems [2]
> >>> represented the results of an external CI system as "successful" or
> not.
> >>>
> >>> First, I want to say that Ilya and all those involved in the
> >>> Stackalytics program simply want to provide the most accurate
> >>> information to developers in a format that is easily consumed. While
> >>> there need to be some changes in how data is shown (and the wording
> >>> of things like "Tests Succeeded"), I hope that the community knows
> >>> there isn't any ill intent on the part of Mirantis or anyone who
> >>> works on Stackalytics. OK, so let's keep the conversation civil --
> >>> we're all working towards the same goals of transparency and
> >>> accuracy. :)
> >>>
> >>> Alright, now, Anita and Kurt Taylor were asking a very poignant
> >>> question:
> >>>
> >>> "But what does CI tested really mean? just running tests? or tested
> >>> to pass some level of requirements?"
> >>>
> >>> In this nascent world of external CI systems, we have a set of
> >>> issues that we need to resolve:
> >>>
> >>> 1) All of the CI systems are different.
> >>>
> >>> Some run Bash scripts. Some run Jenkins slaves and devstack-gate
> >>> scripts. Others run custom Python code that spawns VMs and publishes
> >>> logs to some public domain.
> >>>
> >>> As a community, we need to decide whether it is worth putting in the
> >>> effort to create a single, unified, installable and runnable CI
> >>> system, so that we can legitimately say "all of the external systems
> >>> are identical, with the exception of the driver code for vendor X
> >>> being substituted in the Neutron codebase."
> >>>
> >>> If the goal of the external CI systems is to produce reliable,
> >>> consistent results, I feel the answer to the above is "yes", but I'm
> >>> interested to hear what others think. Frankly, in the world of
> >>> benchmarks, it would be unthinkable to say "go ahead and everyone
> >>> run your own benchmark suite", because you would get wildly
> >>> different results. A similar problem has emerged here.
> >>>
> >>> 2) There is no mediation or verification that the external CI system
> >>> is actually testing anything at all
> >>>
> >>> As a community, we need to decide whether the current system of
> >>> self-policing should continue. If it should, then language on
> >>> reports like [3] should be very clear that any numbers derived from
> >>> such systems should be taken with a grain of salt. Use of the word
> >>> "Success" should be avoided, as it has connotations (in English, at
> >>> least) that the result has been verified, which is simply not the
> >>> case as long as no verification or mediation occurs for any external
> CI system.
> >>>
> >>> 3) There is no clear indication of what tests are being run, and
> >>> therefore there is no clear indication of what "success" is
> >>>
> >>> I think we can all agree that a test has three possible outcomes:
> >>> pass, fail, and skip. The results of a test suite run therefore is
> >>> nothing more than the aggregation of which tests passed, which
> >>> failed, and which were skipped.
> >>>
> >>> As a community, we must document, for each project, what are
> >>> expected set of tests that must be run for each merged patch into
> >>> the project's source tree. This documentation should be discoverable
> >>> so that reports like [3] can be crystal-clear on what the data shown
> >>> actually means. The report is simply displaying the data it receives
> >>> from Gerrit. The community needs to be proactive in saying "this is
> >>> what is expected to be tested." This alone would allow the report to
> >>> give information such as "External CI system ABC performed the
> expected tests. X tests passed.
> >>> Y tests failed. Z tests were skipped." Likewise, it would also make
> >>> it possible for the report to give information such as "External CI
> >>> system DEF did not perform the expected tests.", which is excellent
> >>> information in and of itself.
> >>>
> >>> ===
> >>>
> >>> In thinking about the likely answers to the above questions, I
> >>> believe it would be prudent to change the Stackalytics report in
> >>> question [3] in the following ways:
> >>>
> >>> a. Change the "Success %" column header to "% Reported +1 Votes"
> >>> b. Change the phrase " Green cell - tests ran successfully, red cell
> >>> - tests failed" to "Green cell - System voted +1, red cell - System
> >>> voted -1"
> >>>
> >>> and then,

Re: [openstack-dev] [TripleO] os-refresh-config run frequency

2014-07-03 Thread Macdonald-Wallace, Matthew
FWIW, I've just registered 
https://blueprints.launchpad.net/tripleo/+spec/re-assert-system-state and I'm 
about to start work on the spec.

Matt

> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com]
> Sent: 27 June 2014 17:01
> To: openstack-dev
> Subject: Re: [openstack-dev] [TripleO] os-refresh-config run frequency
> 
> Excerpts from Macdonald-Wallace, Matthew's message of 2014-06-27 00:14:49
> -0700:
> > Hi Clint,
> >
> > > -Original Message-
> > > From: Clint Byrum [mailto:cl...@fewbar.com]
> > > Sent: 26 June 2014 20:21
> > > To: openstack-dev
> > > Subject: Re: [openstack-dev] [TripleO] os-refresh-config run
> > > frequency
> > >
> >
> > > So I see two problems highlighted above.
> > >
> > > 1) We don't re-assert ephemeral state set by o-r-c scripts. You're
> > > right, and we've been talking about it for a while. The right thing
> > > to do is have os-collect-config re-run its command on boot. I don't
> > > think a cron job is the right way to go, we should just have a file
> > > in /var/run that is placed there only on a successful run of the command. 
> > > If
> that file does not exist, then we run the command.
> > >
> > > I've just opened this bug in response:
> > >
> > > https://bugs.launchpad.net/os-collect-config/+bug/1334804
> >
> >
> > Cool, I'm more than happy for this to be done elsewhere, I'm glad that 
> > people
> are in agreement with me on the concept and that work has already started on
> this.
> >
> > I'll add some notes to the bug if needed later on today.
> >
> > > 2) We don't re-assert any state on a regular basis.
> > >
> > > So one reason we haven't focused on this, is that we have a stretch
> > > goal of running with a readonly root partition. It's gotten lost in
> > > a lot of the craziness of "just get it working", but with rebuilds
> > > blowing away root now, leading to anything not on the state drive
> > > (/mnt currently), there's a good chance that this will work relatively 
> > > well.
> > >
> > > Now, since people get root, they can always override the readonly
> > > root and make changes. we hates thiss!.
> > >
> > > I'm open to ideas, however, os-refresh-config is definitely not the
> > > place to solve this. It is intended as a non-resident command to be
> > > called when it is time to assert state. os-collect-config is
> > > intended to gather configurations, and expose them to a command that
> > > it runs, and thus should be the mechanism by which os-refresh-config is run.
> > > run.
> > >
> > > I'd like to keep this conversation separate from one in which we
> > > discuss more mechanisms to make os-refresh-config robust. There are
> > > a bunch of things we can do, but I think we should focus just on "how do 
> > > we
> re-assert state?".
> >
> > OK, that's fair enough.
> >
> > > Because we're able to say right now that it is only for running when
> > > config changes, we can wave our hands and say it's ok that we
> > > restart everything on every run. As Jan alluded to, that won't work
> > > so well if we run it every 20 minutes.
> >
> > Agreed, and chatting with Jan and a couple of others yesterday we came to
> the conclusion that whatever we do here it will require "tweaking" of a number
> of elements to safely restart services.
> >
> > > So, I wonder if we can introduce a config version into os-collect-config.
> > >
> > > Basically os-collect-config would keep a version along with its cache.
> > > Whenever a new version is detected, os-collect-config would set a
> > > value in the environment that informs the command "this is a new
> > > version of config". From that, scripts can do things like this:
> > >
> > > if [ -n "$OS_CONFIG_NEW_VERSION" ] ; then
> > >   service X restart
> > > else
> > >   if ! service X status ; then service X start ; fi
> > > fi
> > >
> > > This would lay the groundwork for future abilities to compare
> > > old/new so we can take shortcuts by diffing the two config versions.
> > > For instance if we look at old vs. new and we don't see any of the
> > > keys we care about changed, we can skip restarting.
> >
> > I like this approach - does this require a new spec? If so, I'll start an 
> > etherpad
> to collect thoughts on it before writing it up for approval.
> 
> I think this should be a tripleo spec. If you're volunteering write it, 
> hooray \o/. It
> will require several work items. Off the top of my head:
> 
> - Add version awareness to os-collect-config
> - Add version awareness to all os-refresh-config scripts that do
>   disruptive things.
> - Add periodic command run to os-collect-config
> 
> Let's call it 're-assert-system-state'. Sound good?
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

