Re: [openstack-dev] [Cinder] DRBD integration as volume driver

2014-06-09 Thread Philipp Marek
Hello Duncan,

> The best thing to do with the code is push up a gerrit review! No need
> to be shy, and you're very welcome to push up code before the
> blueprint is in, it just won't get merged.
thank you for your encouragement!


I pushed another fix for Cinder last week (2 lines, allowing the services 
to be started via "pudb") by committing


commit 7b6c6685ba3fb40b6ed65d8e3697fa9aac899d85
Author: Philipp Marek 
Date:   Fri Jun 6 11:48:52 2014 +0200

Make starting cinder services possible with "pudb", too.


I had that rebased to be on top of
6ff7d035bf507bf2ec9d066e3fcf81f29b4b481c
(the then-master HEAD), and pushed to
refs/for/master
on
ssh://phma...@review.openstack.org:29418/openstack/cinder

but couldn't find the commit in Gerrit anywhere ..

Even a search
https://review.openstack.org/#/q/owner:self,n,z
is empty.
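
(For reference, what I did by hand should be equivalent to the usual
git-review workflow -- a rough sketch, assuming git-review is installed:

    git remote add gerrit ssh://phma...@review.openstack.org:29418/openstack/cinder
    git review master    # pushes HEAD to Gerrit for review against master

so I would expect the change to show up either way.)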


Clicking around I found
https://review.openstack.org/#/admin/projects/openstack/cinder
which says 
Require a valid contributor agreement to upload: TRUE
but to the best of my knowledge that is already taken care of:
https://review.openstack.org/#/settings/agreements
says
Verified   ICLA   OpenStack Individual Contributor License Agreement


So I'm a bit confused right now - what am I doing wrong?



> I'm very interested in this code.
As soon as I've figured out how this Gerrit thing works you can take a 
look ... (or even sooner, see the github link in my previous mail.)



Regards,

Phil


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer][gate] ceilometer unit test frequently failing in gate

2014-06-09 Thread Eoghan Glynn


> Over the last 7 days ceilometer unit test jobs have an 18% failure rate in the
> gate queue [0]. While we expect to see some failures in integration
> testing, unit tests should not be failing in the gate with such a high
> frequency (and for so long).
> 
> It looks like these failures are due to two bugs [1] [2]. I would like to
> propose that, until these bugs are resolved, ceilometer refrain from
> approving patches so as not to negatively impact the gate queue, which is
> already in a tenuous state.

Hi Joe,

Thanks for raising this.

We have approved patches addressing both persistent failures
in the verification queue:

  https://review.openstack.org/98953
  https://review.openstack.org/98820

BTW these data on per-project unit test failure rates sound
interesting and useful. Are these rates surfaced somewhere
easily consumable (by folks outside of the QA team)?

Cheers,
Eoghan
 
> best,
> Joe
> 
> [0]
> http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiRmluaXNoZWQ6XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svY2VpbG9tZXRlclwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLWNlaWxvbWV0ZXItcHl0aG9uMjdcIiBPUiAgYnVpbGRfbmFtZTpcImdhdGUtY2VpbG9tZXRlci1weXRob24yNlwiKSIsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50Iiwib2Zmc2V0IjowLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwMjM1NjkyMDE2MCwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6ImJ1aWxkX3N0YXR1cyJ9
> [1] https://bugs.launchpad.net/ceilometer/+bug/1323524
> [2] https://bugs.launchpad.net/ceilometer/+bug/1327344
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Multi-tenancy and ceilometer triggers

2014-06-09 Thread Angus Salkeld
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi

I was looking at 
https://blueprints.launchpad.net/mistral/+spec/mistral-ceilometer-integration
and trying to figure out how to implement that.

I can see some problems:
- - at the moment the trust is created when you PUT the workbook definition
  this means that if a totally different user executes the workbook, it will be 
run as the user that
created the workbook :-O
 
https://github.com/stackforge/mistral/blob/master/mistral/services/workbooks.py#L27
 
https://github.com/stackforge/mistral/blob/master/mistral/engine/data_flow.py#L92
- - Workbooks can't be sharable if the trust is created at workbook create time.
- - If the trust is not created at workbook create time, how do you use 
triggers?

It seems to me that it is a mistake putting the "triggers" in the workbook
because there are three entities here:
1) the shareable workbook with tasks (a template really that could be stored in 
glance)
2) the execution entity (keeps track of the running tasks)
3) the person / trigger that initiates the execution
   - execution context
   - authenticated token

if we put "3)" into "1)" we are going to have authentication issues and
potentially give up the idea of sharing workbooks.

I'd suggest we have a new entity (and endpoint) for triggers.
- - This would associate 3 things: trust_id, workbook and trigger rule
- - This could also be then used to generate a URL for ceilometer or solum to 
call
  in an autonomous way.
- - One issue is if your workflow takes a *really* long time and you don't use 
the
  trigger then you won't have a trust, but a normal user token. But maybe if
  the user manually initiates the execution, we can create a "manual trigger" in the
  background?
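
To make the trigger idea a bit more concrete, a rough sketch of what such a
resource could look like (the endpoint and field names here are just
illustrative, not a worked-out API):

  POST /v1/triggers
  {
      "name": "scale_up_on_alarm",
      "workbook_name": "my_workbook",
      "pattern": "<trigger rule, e.g. a ceilometer alarm or cron spec>",
      "trust_id": "<trust created when the trigger itself is created>"
  }

Ceilometer (or solum) would then hit the URL generated for that trigger, and
the execution would run under the trigger's trust.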

I can also help out with: 
https://blueprints.launchpad.net/mistral/+spec/mistral-multitenancy
I believe all that needs to be done is to filter the db items by project_id 
that is
in the user context.
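
Something like this at the db api layer, I think (illustrative only -- the
model and context attribute names here are assumptions, not the actual
mistral code):

  def workbooks_get_all(context, session):
      query = session.query(models.Workbook)
      if not context.is_admin:
          # only return items that belong to the caller's project
          query = query.filter_by(project_id=context.project_id)
      return query.all()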

Any thoughts on the above (or better ways of moving forward)?

- -Angus

-BEGIN PGP SIGNATURE-
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJTlp7HAAoJEFrDYBLxZjWoqtAH/3Un3miZmcPjXCO/klU7jsXw
nEYQhWBI+IJuZ5W9MgSHkLg2PwfL6nFxhzyFjG5GloH7QQjO+jGIeE+sBSwPPF/K
kTkllROUhzOO+VFMTIA3y+c173oklmmUtznbuUvDLgLtxNEgtxOWyvZMF3vHO5sS
VkzfSXhg+VbZdg7lVqkaPOtRY/tJ7uVvtskeGZJRIVbE1iINGtqW0aC0WMXXLb7c
7ek8H9lYuxiQ10++7lU+0g6Yn6Momtcmh5j+dTZvJsZw/XEPCc+aDYsE+Yz9tqwb
blh2tWAqNri+xWtumyIAnfv2teJtiDUkzRqRTwxycBOdrkhQ6Nq0RpTCg15jNsA=
=TXJE
-END PGP SIGNATURE-

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] [Heat] Mid-cycle collaborative meetup

2014-06-09 Thread Robert Collins
That seems pretty solid reasoning; +1 from me on July 21-25th. I know
not everyone can attend - but with the size of the team, every option
will have some folk who can't play.

What do we need to do to lock this in to Red Hat's venue schedule?

-Rob

On 9 June 2014 11:44, Jaromir Coufal  wrote:
> Hi,
>
> it looks that there is no more activity on the survey for mid-cycle dates so
> I went forward to evaluate it.
>
> I created a table view into the etherpad [0] and results are following:
> * option1 (Jul 28 - Aug 1): 27 attendees - collides with Nova/Ironic
> * option2 (Jul 21-25) : 27 attendees
> * option3 (Jul 25-29) : 17 attendees - collides with Nova/Ironic
> * option4 (Aug 11-15) : 13 attendees
>
> I think that we can remove options 3 and 4 from consideration, because
> there are a lot of people who can't make it. So we have option1 and option2
> left. Since Robert and Devananda (PTLs on the projects) can't make option1,
> which also conflicts with Nova/Ironic meetup, I think it is pretty
> straightforward.
>
> Based on the survey the winning date for the mid-cycle meetup is option2:
> July 21st - 25th.
>
> Does anybody have very strong reason why we shouldn't fix the date for
> option2 and proceed forward with the organization for the meetup?
>
> Thanks for all the interest
>
> -- Jarda
>
> [0] https://etherpad.openstack.org/p/juno-midcycle-meetup
>
>
> On 2014/28/05 13:05, Jaromir Coufal wrote:
>>
>> Hi to all,
>>
>> after previous TripleO & Ironic mid-cycle meetup, which I believe was
>> beneficial for all, I would like to suggest that we meet again in the
>> middle of Juno cycle to discuss current progress, blockers, next steps
>> and of course get some beer all together :)
>>
>> Last time, TripleO and Ironic merged their meetings together and I think
>> it was great idea. This time I would like to invite also Heat team if
>> they want to join. Our cooperation is increasing and I think it would be
>> great, if we can discuss all issues together.
>>
>> Red Hat offered to host this event, so I am very happy to invite you all
>> and I would like to ask, who would come if there was a mid-cycle meetup
>> in following dates and place:
>>
>> * July 28 - Aug 1
>> * Red Hat office, Raleigh, North Carolina
>>
>> If you are intending to join, please, fill yourselves into this etherpad:
>> https://etherpad.openstack.org/p/juno-midcycle-meetup
>>
>> Cheers
>> -- Jarda
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Vijay Venkatachalam

My vote is for option #2 (without the registration). It is simpler to start 
with this approach. How is delete handled though?

Ex. What is the expectation when a user attempts to delete a 
certificate/container which is referred to by an entity like an LBaaS listener?


1.   Will there be validation in Barbican to prevent this? OR

2.   Will the LBaaS listener have a dangling reference/pointer to the certificate?

Thanks,
Vijay V.

From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
Sent: Tuesday, June 10, 2014 7:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Weighing in here:

I'm all for option #2 as well.

Stephen

On Mon, Jun 9, 2014 at 4:42 PM, Clint Byrum 
mailto:cl...@fewbar.com>> wrote:
Excerpts from Douglas Mendizabal's message of 2014-06-09 16:08:02 -0700:
> Hi all,
>
> I’m strongly in favor of having immutable TLS-typed containers, and very
> much opposed to storing every revision of changes done to a container.  I
> think that storing versioned containers would add too much complexity to
> Barbican, where immutable containers would work well.
>
Agree completely. Create a new one for new values. Keep the old ones
while they're still active.

>
> I’m still not sold on the idea of registering services with Barbican, even
> though (or maybe especially because) Barbican would not be using this data
> for anything.  I understand the problem that we’re trying to solve by
> associating different resources across projects, but I don’t feel like
> Barbican is the right place to do this.
>
Agreed also, this is simply not Barbican or Neutron's role. Be a REST
API for secrets and networking, not all-dancing, all-singing nannies that
prevent any possibly dangerous behavior with said APIs.

> It seems we’re leaning towards option #2, but I would argue that
> orchestration of services is outside the scope of Barbican’s role as a
> secret-store.  I think this is a problem that may need to be solved at a
> higher level.  Maybe an openstack-wide registry of dependent entities
> across services?
An optional openstack-wide registry of dependent entities is called
"Heat".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] mocking policy

2014-06-09 Thread Lyle, David
I have no problem with this proposal.

David

On 6/4/14, 6:41 AM, "Radomir Dopieralski"  wrote:

>Hello,
>
>I'd like to start a discussion about the use of mocking libraries in
>Horizon's tests, in particular, mox and mock.
>
>As you may know, Mox is the library that has been used so far, and we
>have a lot of tests written using it. It is based on a similar Java
>library and does very strict checking, although its error reporting may
>leave something more to be desired.
>
>Mock is a more pythonic library, included in the stdlib of recent Python
>versions, but also available as a separate library for older pythons. It
>has a much more relaxed approach, allowing you to only test the things
>that you actually care about and to write tests that don't have to be
>rewritten after each and every refactoring.
>
>Some OpenStack projects, such as Nova, seem to have adopted an approach
>that favors Mock in newly written tests, but allows use of Mox for older
>tests, or when it's more suitable for the job.
>
>In Horizon we only use Mox, and Mock is not even in requirements.txt. I
>would like to propose to add Mock to requirements.txt and start using it
>in new tests where it makes more sense than Mox -- in particular, when
>we are writing unit tests only testing a small part of the code.
>
>Thoughts?
>-- 
>Radomir Dopieralski
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gantt] scheduler group meeting agenda 6/10

2014-06-09 Thread Dugger, Donald D
1) Forklift (tasks & status)

2) No-db scheduler discussion

3) Policy based scheduler ( https://review.openstack.org/#/c/61386/ )

4) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic+Nova] Is it recommended to use Nova driver to manage Ironic ?

2014-06-09 Thread 严超
Hi, All:
I've read "GUIDE 2: Deploying with DevStack and Ironic+Nova " on
*https://etherpad.openstack.org/p/IronicDeployDevstack
*
Is it recommended to use Nova driver to manage Ironic ? Or Is it
recommended to use TripleO  to manage Ironic. What is the best practice for
Ironic in openstack to work well with other components like neutron and
horizon.

Best Regards!

Chao Yan
--
My twitter: Andy Yan @yanchao727
My Weibo: http://weibo.com/herewearenow
--
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [driverlog] Tail-f CI and it's lack of running and it's DriverLog status

2014-06-09 Thread Luke Gorrie
Howdy Kyle,

On 9 June 2014 22:37, Kyle Mestery  wrote:

> After talking with various infra folks, we've noticed the Tail-f CI
> system is not voting anymore. According to some informal research, the
> last run for this CI setup was in April [1]. Can you verify this
> system is still running? We will need this to be working by the middle
> of Juno-2, with a history of voting or we may remove the Tail-f driver
> from the tree.
>

The history is that I have debugged the CI setup using the Sandbox repo
hooks. Then I shut that down. The next step is to bring it up and connect
it to the Neutron project Gerrit hook. I'll get on to that -- thanks for
the prod.

I am being very conservative about making changes to the way I interact
with the core CI infrastructure because frankly I am scared of accidentally
creating unintended wide-reaching consequences :).

> Also, along these lines, I'm curious why DriverLog reports this driver
> "Green" and as tested [2]. What is the criteria for this? I'd like to
> propose a patch changing this driver from "Green" to something else
> since it hasn't been running for the past few months.
>

Fair question. I am happy to make the DriverLog reflect reality. Is
DriverLog doing this based on the presence of a 'ci' section in
default_data.json? (Is the needed patch simply to remove that section temporarily?)

I'll focus on getting my CI hooked up to the Neutron project hook in order
to moot this issue anyway.

Cheers,
-Luke
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Stephen Balukoff
Weighing in here:

I'm all for option #2 as well.

Stephen


On Mon, Jun 9, 2014 at 4:42 PM, Clint Byrum  wrote:

> Excerpts from Douglas Mendizabal's message of 2014-06-09 16:08:02 -0700:
> > Hi all,
> >
> > I’m strongly in favor of having immutable TLS-typed containers, and very
> > much opposed to storing every revision of changes done to a container.  I
> > think that storing versioned containers would add too much complexity to
> > Barbican, where immutable containers would work well.
> >
>
> Agree completely. Create a new one for new values. Keep the old ones
> while they're still active.
>
> >
> > I’m still not sold on the idea of registering services with Barbican,
> even
> > though (or maybe especially because) Barbican would not be using this
> data
> > for anything.  I understand the problem that we’re trying to solve by
> > associating different resources across projects, but I don’t feel like
> > Barbican is the right place to do this.
> >
>
> Agreed also, this is simply not Barbican or Neutron's role. Be a REST
> API for secrets and networking, not all-dancing, all-singing nannies that
> prevent any possibly dangerous behavior with said APIs.
>
> > It seems we’re leaning towards option #2, but I would argue that
> > orchestration of services is outside the scope of Barbican’s role as a
> > secret-store.  I think this is a problem that may need to be solved at a
> > higher level.  Maybe an openstack-wide registry of dependent entities
> > across services?
>
> An optional openstack-wide registry of dependent entities is called
> "Heat".
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo-specs approval process

2014-06-09 Thread Doug Hellmann
On Mon, Jun 9, 2014 at 5:56 PM, Ben Nemec  wrote:
> Hi all,
>
> While the oslo-specs repository has been available for a while and a
> number of specs proposed, we hadn't agreed on a process for actually
> approving them (i.e. the normal 2 +2's or something else).  This was
> discussed at the Oslo meeting last Friday and the method decided upon by
> the people present was that only the PTL (Doug Hellmann, dhellmann on
> IRC) would approve specs.
>
> However, he noted that he would still like to see at _least_ 2 +2's on a
> spec, and +1's from interested users are always appreciated as well.
> Basically he's looking for a consensus from the reviewers.
>
> This e-mail is intended to notify anyone interested in the oslo-specs
> process of how it will work going forward, and to provide an opportunity
> for anyone not at the meeting to object if they so desire.  Barring a
> significant concern being raised, the method outlined above will be
> followed from now on.
>
> Meeting discussion log:
> http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.log.html#l-66
>
> Thanks.
>
> -Ben

Thanks, Ben.

As Ben said, everyone is welcome to review the plans but I would
*especially* like all liaisons from other programs to take a look
through the specs with an eye for potential issues.

Thanks in advance for your feedback!
Doug

>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Requirements around statistics and billing

2014-06-09 Thread Stephen Balukoff
Hi Jorge,

This thread was started mostly to try to determine what people are actually
after, as far as stats collection goes. Surprisingly (to me at least), it
doesn't look like we need all that much to meet the needs of those who have
responded to this thread thus far.

In any case, my sights are set on preparation for next week's hack-a-thon,
and I don't see stats gathering as a high priority for this group right
now. (Though basic stats will need to happen before we can consider the
service "minimally viable," IMO.) Either way, I'm seeing nothing anyone is
asking for, stats-wise, that is going to cause problems for us as we
continue on our current course developing LBaaS.

And having this thread to look back on when it comes time to actually solve
the stats problem will be a good thing, eh.

Stephen


On Fri, Jun 6, 2014 at 11:35 AM, Jorge Miramontes <
jorge.miramon...@rackspace.com> wrote:

> Hey Stephen,
>
> What we really care about are the following:
>
> - Inbound bandwidth (bytes)
> - Outbound bandwidth (bytes)
> - "Instance" Uptime (requires create/delete events)
>
> Just to note our current LBaaS implementation at Rackspace keeps track of
> when features are enabled/disabled. For example, we have markers for when
> SSL is turned on/off, markers for when we suspend/unsuspend load
> balancers, etc. Some of this stuff is used for tracking purposes, some of
> it is used for billing purposes and some of it used for both purposes. We
> also keep track of all user initiated API requests to help us out when
> issues arise.
>
> From my experience building usage collection systems, just know it is not a
> trivial task, especially if we need to track events. One good tip is to be
> as explicit as possible and as granular as possible. Being implicit causes
> bad things to happen. Also, if we didn't have UDP as a protocol I would
> recommend using Hadoop's map reduce functionality to get accurate
> statistics by map-reducing request logs.
>
> I would not advocate tracking per node statistics as the user can track
> that information by themselves if they really want to. We currently, don't
> have any customers that have asked for this feature.
>
> If you want to tackle the usage collection problem for Neutron LBaaS I
> would be glad to help as I've got quite a bit of experience in this
> subject matter.
>
> Cheers,
> --Jorge
>
>
>
>
> From:  , German 
> Reply-To:  "OpenStack Development Mailing List (not for usage questions)"
> 
> Date:  Tuesday, June 3, 2014 5:20 PM
> To:  "OpenStack Development Mailing List (not for usage questions)"
> 
> Subject:  Re: [openstack-dev] [Neutron][LBaaS] Requirements around
> statistics and  billing
>
>
> >Hi Stephen,
> >
> >We would like all those numbers as well :)
> >
> >Additionally, we measure:
> >* When a lb instance was created, deleted, etc.
> >* For monitoring we "ping" a load balancer's health check and report/act
> >on the results
> >* For users' troubleshooting we make the haproxy logs available, which
> >contain connection information like from, to, duration, protocol, status
> >(though we frequently have been told that this is not really useful for
> >debugging...) and of course having that more gui-fied would be neat
> >
> >German
> >
> >
> >
> >From: Stephen Balukoff [mailto:sbaluk...@bluebox.net]
> >
> >Sent: Tuesday, May 27, 2014 8:22 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: [openstack-dev] [Neutron][LBaaS] Requirements around statistics
> >and billing
> >
> >
> >Hi folks!
> >
> >
> >We have yet to have any kind of meaningful discussion on this list around
> >load balancer stats (which, I presume to include data that will
> >eventually need to be consumed by a billing system). I'd like to get the
> >discussion started here,
> > as this will have significant meaning for how we both make this data
> >available to users, and how we implement back-end systems to be able to
> >provide this data.
> >
> >
> >
> >So!  What kinds of data are people looking for, as far as load balancer
> >statistics.
> >
> >
> >
> >For our part, as an absolute minimum we need the following per
> >loadbalancer + listener combination:
> >
> >
> >
> >* Total bytes transferred in for a given period
> >
> >* Total bytes transferred out for a given period
> >
> >
> >
> >Our product and billing people I'm sure would like the following as well:
> >
> >
> >
> >* Some kind of peak connections / second data (95th percentile or average
> >over a period, etc.)
> >
> >* Total connections for a given period
> >
> >* Total HTTP / HTTPS requests served for a given period
> >
> >
> >
> >And the people who work on UIs and put together dashboards would like:
> >
> >
> >
> >* Current requests / second (average for last X seconds, either
> >on-demand, or simply dumped regularly).
> >
> >* Current In/Out bytes throughput
> >
> >
> >
> >And our monitoring people would like this:
> >
> >
> >
> >* Errors / second
> >
> >* Current connections / se

Re: [openstack-dev] use of the word certified

2014-06-09 Thread Doug Hellmann
On Mon, Jun 9, 2014 at 6:11 PM, Eoghan Glynn  wrote:
>
>
>> Based on the discussion I'd like to propose these options:
>> 1. Cinder-certified driver - This is an attempt to move the "certification"
>> to the project level.
>> 2. CI-tested driver - This is probably the most accurate, at least for what
>> we're trying to achieve for Juno: Continuous Integration of Vendor-specific
>> Drivers.
>
> Hi Ramy,
>
> Thanks for these constructive suggestions.
>
> The second option is certainly a very direct and specific reflection of
> what is actually involved in getting the Cinder project's imprimatur.

I do like "tested."

I'd like to understand what the foundation is planning for
"certification" as well, to know how big of an issue this really is.
Even if they aren't going to certify drivers, I have heard discussions
around training and possibly other areas so I would hate for us to
introduce confusion by having different uses of that term in similar
contexts. Mark, do you know who is working on that within the board or
foundation?

Doug

>
> The first option is also a bit clearer, in the sense of the scope of the
> certification.
>
> Cheers,
> Eoghan
>
>> Ramy
>>
>> -Original Message-
>> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
>> Sent: Monday, June 09, 2014 4:50 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] use of the word certified
>>
>> On 6 June 2014 18:29, Anita Kuno  wrote:
>> > So there are certain words that mean certain things, most don't, some do.
>> >
>> > If words that mean certain things are used then some folks start using
>> > the word and have expectations around the word and the OpenStack
>> > Technical Committee and other OpenStack programs find themselves on
>> > the hook for behaviours that they didn't agree to.
>> >
>> > Currently the word under discussion is "certified" and its derivatives:
>> > certification, certifying, and others with root word "certificate".
>> >
>> > This came to my attention at the summit with a cinder summit session
>> with one of the certificate words in the title. I had thought my
>> > point had been made but it appears that there needs to be more
>> > discussion on this. So let's discuss.
>> >
>> > Let's start with the definition of certify:
>> > cer·ti·fy
>> > verb (used with object), cer·ti·fied, cer·ti·fy·ing.
>> > 1. to attest as certain; give reliable information of; confirm: He
>> > certified the truth of his claim.
>>
>> So the cinder team are attesting that a set of tests have been run against a
>> driver: a certified driver.
>>
>> > 3. to guarantee; endorse reliably: to certify a document with an
>> > official seal.
>>
>> We (the cinder team) are guaranteeing that the driver has been tested, in at
>> least one configuration, and found to pass all of the tempest tests. This is
>> a far better state than we were at 6 months ago, where many drivers didn't
>> even pass a smoke test.
>>
>> > 5. to award a certificate to (a person) attesting to the completion of
>> > a course of study or the passing of a qualifying examination.
>>
>> The cinder cert process is pretty much an exam.
>>
>>
>> I think the word certification covers exactly what we are doing. Given
>> cinder-core are the people on the hook for any cinder problems (including
>> vendor-specific ones), and the cinder core are the people who get
>> bad-mouthed when there are problems (including vendor-specific ones), I
>> think this level of certification gives us value.
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][gate] ceilometer unit test frequently failing in gate

2014-06-09 Thread Joe Gordon
Over the last 7 days ceilometer unit test jobs have an 18% failure rate in
the gate queue [0]. While we expect to see some failures in integration
testing, unit tests should not be failing in the gate with such a high
frequency (and for so long).

It looks like these failures are due to two bugs [1] [2]. I would like to
propose that, until these bugs are resolved, ceilometer refrain from
approving patches so as not to negatively impact the gate queue, which is
already in a tenuous state.
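
(For reference, the logstash query encoded in the URL at [0] decodes to
roughly the following, scored by build_status over a 7-day window:

    message:"Finished:" AND tags:"console" AND project:"openstack/ceilometer"
    AND build_queue:"gate" AND (build_name:"gate-ceilometer-python27"
    OR build_name:"gate-ceilometer-python26") )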


best,
Joe

[0]
http://logstash.openstack.org/#eyJmaWVsZHMiOltdLCJzZWFyY2giOiJtZXNzYWdlOlwiRmluaXNoZWQ6XCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIgQU5EIHByb2plY3Q6XCJvcGVuc3RhY2svY2VpbG9tZXRlclwiIEFORCBidWlsZF9xdWV1ZTpcImdhdGVcIiBBTkQgKGJ1aWxkX25hbWU6XCJnYXRlLWNlaWxvbWV0ZXItcHl0aG9uMjdcIiBPUiAgYnVpbGRfbmFtZTpcImdhdGUtY2VpbG9tZXRlci1weXRob24yNlwiKSIsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50Iiwib2Zmc2V0IjowLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwMjM1NjkyMDE2MCwibW9kZSI6InNjb3JlIiwiYW5hbHl6ZV9maWVsZCI6ImJ1aWxkX3N0YXR1cyJ9
[1] https://bugs.launchpad.net/ceilometer/+bug/1323524
[2] https://bugs.launchpad.net/ceilometer/+bug/1327344
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] Hacking 0.9.1 released

2014-06-09 Thread Joe Gordon
On Mon, Jun 9, 2014 at 12:24 PM, Joe Gordon  wrote:

> Hi folks,
>
> Hacking 0.9.1 has just been released (hacking 0.9.1 had a minor bug).
> Unlike other dependencies 'OpenStack Proposal Bot' does not automatically
> push out a patch to the new version.
>

Edit: hacking 0.9.0 had a minor bug


> The recommended way to upgrade to hacking 0.9.1 is to add any new failing
> tests to the exclude list in tox.ini and fix those in subsequent patches
> (example: https://review.openstack.org/#/c/98864/).
>
> pep8 1.5.x changed a whole bunch of internals, so when upgrading to the
> new hacking please make sure your local checks still work.
>
>
> best,
> Joe
>
> Release Notes:
>
>
>- New dependency versions, all with new features
>- pep8==1.5.6 [https://github.com/jcrocholl/pep8/blob/master/CHANGES.txt]
>  - Report E129 instead of E125 for visually indented line with
>  same indent as next logical line.
>  - Report E265 for space before block comment.
>  - Report E713 and E714 when operators ``not in`` and ``is not``
>  are  recommended (taken from hacking).
>  - Report E131 instead of E121 / E126 if the hanging indent is
>  not consistent within the same continuation block.  It helps when 
> error
>  E121 or E126 is in the ``ignore`` list.
>  - Report E126 instead of E121 when the continuation line is
>  hanging with extra indentation, even if indentation is not a 
> multiple of 4.
>   - pyflakes==0.8.1
>   - flake8==2.1.0
>- More rules support noqa
>   - Added to: H701, H702, H232, H234, H235, H237
>- Gate on Python3 compatibility
>- Dropped H901,H902 as those are now in pep8 and enforced by E713 and
>E714
>- Support for separate localization catalogs
>- Rule numbers added to http://docs.openstack.org/developer/hacking/
>- Improved performance
>- New Rules:
>   - H104  File contains nothing but comments
>   - H305  imports not grouped correctly
>   - H307  like imports should be grouped together
>   - H405  multi line docstring summary not separated with an empty
>   line
>   - H904  Wrap long lines in parentheses instead of a backslash
>
>
> Thank you to everyone who contributed to hacking 0.9.1:
> * Joe Gordon
> * Ivan A. Melnikov
> * Ben Nemec
> * Chang Bo Guo
> * Nikola Dipanov
> * Clay Gerrard
> * Cyril Roelandt
> * Dirk Mueller
> * James E. Blair
> * Jeremy Stanley
> * Julien Danjou
> * Lei Zhang
> * Marc Abramowitz
> * Mike Perez
> * Radomir Dopieralski
> * Samuel Merritt
> * YAMAMOTO Takashi
> * ZhiQiang Fan
> * fujioka yuuichi
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Clint Byrum
Excerpts from Douglas Mendizabal's message of 2014-06-09 16:08:02 -0700:
> Hi all,
> 
> I’m strongly in favor of having immutable TLS-typed containers, and very
> much opposed to storing every revision of changes done to a container.  I
> think that storing versioned containers would add too much complexity to
> Barbican, where immutable containers would work well.
> 

Agree completely. Create a new one for new values. Keep the old ones
while they're still active.

> 
> I’m still not sold on the idea of registering services with Barbican, even
> though (or maybe especially because) Barbican would not be using this data
> for anything.  I understand the problem that we’re trying to solve by
> associating different resources across projects, but I don’t feel like
> Barbican is the right place to do this.
> 

Agreed also, this is simply not Barbican or Neutron's role. Be a REST
API for secrets and networking, not all-dancing, all-singing nannies that
prevent any possibly dangerous behavior with said APIs.

> It seems we’re leaning towards option #2, but I would argue that
> orchestration of services is outside the scope of Barbican’s role as a
> secret-store.  I think this is a problem that may need to be solved at a
> higher level.  Maybe an openstack-wide registry of dependent entities
> across services?

An optional openstack-wide registry of dependent entities is called
"Heat".

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Douglas Mendizabal
I understand how this could be helpful, but I still don’t understand why
this is Barbican’s problem to solve.

From Jorge’s original email:

>> Using this method requires services, such as LBaaS, to "register" in
>>the form of metadata to a barbican container.

If our assumptions are that the GUI can handle this information and that
power users are savvy, then how does that require Barbican to store the
metadata?  I would argue that the GUI can store its own metadata, and that
power users should be savvy enough to update their LBs (via PUT or
whatever) after uploading a new certificate.


-Doug

On 6/9/14, 6:10 PM, "John Wood"  wrote:

>The impression I have from this thread is that Containers should remain
>immutable, but it would be helpful to allow services like LBaaS to
>register as interested in a given Container. This could be the full URI
>to the load balancer instance for example. This information would allow
>clients to see what services (and load balancer instances in this
>example) are using a Container, so they can update them if a new
>Container replaces the old one. They could also see what services depend
>on a Container before trying to remove the Container.
>
>A blueprint submission to Barbican tomorrow should provide more details
>on this, and let the Barbican and LBaaS communities weigh in on this
>feature.
>
>Thanks,
>John
>
>
>
>From: Tiwari, Arvind [arvind.tiw...@hp.com]
>Sent: Monday, June 09, 2014 2:54 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>As per current implementation, containers are immutable.
>Do we have any use case to make it mutable? Can we live with new
>container instead of updating an existing container?
>
>Arvind
>
>-Original Message-
>From: Samuel Bercovici [mailto:samu...@radware.com]
>Sent: Monday, June 09, 2014 1:31 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>As far as I understand the Current Barbican implementation is immutable.
>Can anyone from Barbican comment on this?
>
>-Original Message-
>From: Jain, Vivek [mailto:vivekj...@ebay.com]
>Sent: Monday, June 09, 2014 8:34 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>+1 for the idea of making certificate immutable.
>However, if Barbican allows updating certs/containers then versioning is
>a must.
>
>Thanks,
>Vivek
>
>
>On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:
>
>>Hi,
>>
>>I think that option 2 should be preferred at this stage.
>>I also think that certificate should be immutable, if you want a new
>>one, create a new one and update the listener to use it.
>>This removes any chance of mistakes, need for versioning etc.
>>
>>-Sam.
>>
>>-Original Message-
>>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>>Sent: Friday, June 06, 2014 10:16 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>>Integration Ideas
>>
>>Hey everyone,
>>
>>Per our IRC discussion yesterday I'd like to continue the discussion on
>>how Barbican and Neutron LBaaS will interact. There are currently two
>>ideas in play and both will work. If you have another idea please free
>>to add it so that we may evaluate all the options relative to each other.
>>Here are the two current ideas:
>>
>>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>>services) consumes to identify when to update/delete updated secrets
>>from Barbican. For those that aren't up to date with the Neutron LBaaS
>>API Revision, the project/tenant/user provides a secret (container?) id
>>when enabling SSL/TLS functionality.
>>
>>* Example: If a user makes a change to a secret/container in Barbican
>>then Neutron LBaaS will see an event and take the appropriate action.
>>
>>PROS:
>> - Barbican is going to create an eventing system regardless so it will
>>be supported.
>> - Decisions are made on behalf of the user which lessens the amount of
>>calls the user has to make.
>>
>>CONS:
>> - An eventing framework can become complex especially since we need to
>>ensure delivery of an event.
>> - Implementing an eventing system will take more time than option #2... I
>>think.
>>
>>2. Push orchestration decisions to API users. This idea comes with two
>>assumptions. The first assumption is that most providers' customers use
>>the cloud via a GUI, which in turn can handle any orchestration
>>decisions that need to be made. The second assumption is that power API
>>users are savvy and can handle their decisions as well. Using this
>>method requires services, such as LBaaS, to "register" in the form of
>>metadata to a barbican container.
>>
>>* Example: If a user 

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
Mike,

For the "typical" case, your proposal sounds reasonable to me. That
should protect against cross-session locking while still getting the
benefits of testing DML without committing to disk.

The issue I was originally raising is, of course, the "special" case
-- testing of migrations -- which, I think, could be solved in much
the same way. Given N test runners, create N empty schemata, hand each
migration-test-runner a schema from that pool. When that test runner
is done, drop and recreate that schema.
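
Roughly, I am picturing something like this (a sketch only, untested, and
not the actual oslo.db code):

    import random
    import string

    import sqlalchemy

    def _random_name():
        return 'test_' + ''.join(random.choice(string.ascii_lowercase)
                                 for _ in range(10))

    class SchemaPool(object):
        """Hand each migration-test runner its own schema from a pool."""

        def __init__(self, admin_url, size):
            self.engine = sqlalchemy.create_engine(admin_url)
            self.free = [self._create() for _ in range(size)]

        def _create(self):
            name = _random_name()
            with self.engine.connect() as conn:
                conn.execute(sqlalchemy.text("CREATE DATABASE %s" % name))
            return name

        def acquire(self):
            # each runner takes one schema and runs its migrations against it
            return self.free.pop()

        def release(self, name):
            # drop and recreate so the next runner starts from an empty schema
            with self.engine.connect() as conn:
                conn.execute(sqlalchemy.text("DROP DATABASE %s" % name))
            self.free.append(self._create())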

AIUI, Nodepool is already doing something similar here:
  
https://git.openstack.org/cgit/openstack-infra/nodepool/tree/nodepool/tests/__init__.py#n71

Regards,
Devananda



On Mon, Jun 9, 2014 at 12:58 PM, Mike Bayer  wrote:
>
> On Jun 9, 2014, at 1:08 PM, Mike Bayer  wrote:
>
>>
>> On Jun 9, 2014, at 12:50 PM, Devananda van der Veen 
>>  wrote:
>>
>>> There may be some problems with MySQL when testing parallel writes in
>>> different non-committing transactions, even in READ COMMITTED mode,
>>> due to InnoDB locking, if the queries use non-unique secondary indexes
>>> for UPDATE or SELECT..FOR UPDATE queries. This is done by the
>>> "with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
>>> in Nova. So I would not recommend this approach, even though, in
>>> principle, I agree it would be a much more efficient way of testing
>>> database reads/writes.
>>>
>>> More details here:
>>> http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
>>> http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
>>
>> OK, but just to clarify my understanding, what is the approach to testing 
>> writes in parallel right now, are we doing CREATE DATABASE for two entirely 
>> distinct databases with some kind of generated name for each one?  
>> Otherwise, if the parallel tests are against the same database, this issue 
>> exists regardless (unless autocommit mode is used, is FOR UPDATE accepted 
>> under those conditions?)
>
> Took a look and this seems to be the case, from oslo.db:
>
> def create_database(engine):
>     """Provide temporary user and database for each particular test."""
>     driver = engine.name
>
>     auth = {
>         'database': ''.join(random.choice(string.ascii_lowercase)
>                             for i in moves.range(10)),
>         # ...
>     }
>
>     sqls = [
>         "drop database if exists %(database)s;",
>         "create database %(database)s;",
>     ]
>
> Just thinking out loud here, I’ll move these ideas to a new wiki page after 
> this post. My idea now is that OK, we provide ad-hoc databases for tests,
> but look into the idea that we create N ad-hoc databases, corresponding to 
> parallel test runs - e.g. if we are running five tests concurrently, we make 
> five databases.   Tests that use a database will be dished out among this 
> pool of available schemas.   In the *typical* case (which means not the case 
> that we’re testing actual migrations, that’s a special case) we build up the 
> schema on each database via migrations or even create_all() just once, run 
> tests within rolled-back transactions one-per-database, then the DBs are torn 
> down when the suite is finished.
>
> Sorry for the thread hijack.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
   The use case was that a cert inside the container could be updated while the 
private key stays the same, i.e. a new cert would be a re-signing of the same old 
key. By immutable we mean that the same UUID would be used on the lbaas 
side. This is a heavy-handed way of expecting the user to manually update their 
lbaas instances when they update a cert. 

Yes, we can live with an immutable container, which seems to be the direction 
we are going now.

On Jun 9, 2014, at 2:54 PM, "Tiwari, Arvind"  wrote:

> As per current implementation, containers are immutable. 
> Do we have any use case to make it mutable? Can we live with new container 
> instead of updating an existing container?
> 
> Arvind 
> 
> -Original Message-
> From: Samuel Bercovici [mailto:samu...@radware.com] 
> Sent: Monday, June 09, 2014 1:31 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
> Integration Ideas
> 
> As far as I understand the Current Barbican implementation is immutable.
> Can anyone from Barbican comment on this?
> 
> -Original Message-
> From: Jain, Vivek [mailto:vivekj...@ebay.com]
> Sent: Monday, June 09, 2014 8:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
> Integration Ideas
> 
> +1 for the idea of making certificate immutable.
> However, if Barbican allows updating certs/containers then versioning is a 
> must.
> 
> Thanks,
> Vivek
> 
> 
> On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:
> 
>> Hi,
>> 
>> I think that option 2 should be preferred at this stage.
>> I also think that certificate should be immutable, if you want a new 
>> one, create a new one and update the listener to use it.
>> This removes any chance of mistakes, need for versioning etc.
>> 
>> -Sam.
>> 
>> -Original Message-
>> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>> Sent: Friday, June 06, 2014 10:16 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
>> Integration Ideas
>> 
>> Hey everyone,
>> 
>> Per our IRC discussion yesterday I'd like to continue the discussion on 
>> how Barbican and Neutron LBaaS will interact. There are currently two 
>> ideas in play and both will work. If you have another idea please free 
>> to add it so that we may evaluate all the options relative to each other.
>> Here are the two current ideas:
>> 
>> 1. Create an eventing system for Barbican that Neutron LBaaS (and other
>> services) consumes to identify when to update/delete updated secrets 
>> from Barbican. For those that aren't up to date with the Neutron LBaaS 
>> API Revision, the project/tenant/user provides a secret (container?) id 
>> when enabling SSL/TLS functionality.
>> 
>> * Example: If a user makes a change to a secret/container in Barbican 
>> then Neutron LBaaS will see an event and take the appropriate action.
>> 
>> PROS:
>> - Barbican is going to create an eventing system regardless so it will 
>> be supported.
>> - Decisions are made on behalf of the user which lessens the amount of 
>> calls the user has to make.
>> 
>> CONS:
>> - An eventing framework can become complex especially since we need to 
>> ensure delivery of an event.
>> - Implementing an eventing system will take more time than option #2... I 
>> think.
>> 
>> 2. Push orchestration decisions to API users. This idea comes with two 
>> assumptions. The first assumption is that most providers' customers use 
>> the cloud via a GUI, which in turn can handle any orchestration 
>> decisions that need to be made. The second assumption is that power API 
>> users are savvy and can handle their decisions as well. Using this 
>> method requires services, such as LBaaS, to "register" in the form of 
>> metadata to a barbican container.
>> 
>> * Example: If a user makes a change to a secret the GUI can see which 
>> services are registered and opt to warn the user of consequences. Power 
>> users can look at the registered services and make decisions how they 
>> see fit.
>> 
>> PROS:
>> - Very simple to implement. The only code needed to make this a 
>> reality is at the control plane (API) level.
>> - This option is more loosely coupled that option #1.
>> 
>> CONS:
>> - Potential for services to not register/unregister. What happens in 
>> this case?
>> - Pushes complexity of decision making on to GUI engineers and power 
>> API users.
>> 
>> 
>> I would like to get a consensus on which option to move forward with 
>> ASAP since the hackathon is coming up and delivering Barbican to 
>> Neutron LBaaS integration is essential to exposing SSL/TLS 
>> functionality, which almost everyone has stated is a #1/#2 priority.
>> 
>> I'll start the decision making process by advocating for option #2. My 
>> reason for choosing option #2 has to deal mostly with the simplicity

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread John Wood
The impression I have from this thread is that Containers should remain 
immutable, but it would be helpful to allow services like LBaaS to register as 
interested in a given Container. This could be the full URI to the load 
balancer instance for example. This information would allow clients to see what 
services (and load balancer instances in this example) are using a Container, 
so they can update them if a new Container replaces the old one. They could 
also see what services depend on a Container before trying to remove the 
Container.

A blueprint submission to Barbican tomorrow should provide more details on 
this, and let the Barbican and LBaaS communities weigh in on this feature.
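
Ahead of that blueprint, a rough strawman of what the registration could look
like (the resource and field names here are placeholders, not the final
design):

    POST /v1/containers/{container_uuid}/consumers
    {
        "name": "lbaas",
        "URL": "<full URI of the load balancer instance using the container>"
    }

    GET /v1/containers/{container_uuid}/consumers
      -> lists the services currently registered against the container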

Thanks,
John



From: Tiwari, Arvind [arvind.tiw...@hp.com]
Sent: Monday, June 09, 2014 2:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

As per current implementation, containers are immutable.
Do we have any use case to make it mutable? Can we live with new container 
instead of updating an existing container?

Arvind

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com]
Sent: Monday, June 09, 2014 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

As far as I understand the Current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com]
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:

>Hi,
>
>I think that option 2 should be preferred at this stage.
>I also think that certificate should be immutable, if you want a new
>one, create a new one and update the listener to use it.
>This removes any chance of mistakes, need for versioning etc.
>
>-Sam.
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 10:16 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>Hey everyone,
>
>Per our IRC discussion yesterday I'd like to continue the discussion on
>how Barbican and Neutron LBaaS will interact. There are currently two
>ideas in play and both will work. If you have another idea please free
>to add it so that we may evaluate all the options relative to each other.
>Here are the two current ideas:
>
>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>services) consumes to identify when to update/delete updated secrets
>from Barbican. For those that aren't up to date with the Neutron LBaaS
>API Revision, the project/tenant/user provides a secret (container?) id
>when enabling SSL/TLS functionality.
>
>* Example: If a user makes a change to a secret/container in Barbican
>then Neutron LBaaS will see an event and take the appropriate action.
>
>PROS:
> - Barbican is going to create an eventing system regardless so it will
>be supported.
> - Decisions are made on behalf of the user which lessens the amount of
>calls the user has to make.
>
>CONS:
> - An eventing framework can become complex especially since we need to
>ensure delivery of an event.
> - Implementing an eventing system will take more time than option #2... I
>think.
>
>2. Push orchestration decisions to API users. This idea comes with two
>assumptions. The first assumption is that most providers' customers use
>the cloud via a GUI, which in turn can handle any orchestration
>decisions that need to be made. The second assumption is that power API
>users are savvy and can handle their decisions as well. Using this
>method requires services, such as LBaaS, to "register" in the form of
>metadata to a barbican container.
>
>* Example: If a user makes a change to a secret the GUI can see which
>services are registered and opt to warn the user of consequences. Power
>users can look at the registered services and make decisions how they
>see fit.
>
>PROS:
> - Very simple to implement. The only code needed to make this a
>reality is at the control plane (API) level.
> - This option is more loosely coupled that option #1.
>
>CONS:
> - Potential for services to not register/unregister. What happens in
>this case?
> - Pushes complexity of decision making on to GUI engineers and power
>API users.
>
>
>I would like to get a consensus on which option to move forward with
>ASAP since the hackathon is coming up and delivering Barbican to
>Neutron LBaaS integration is essential to exp

[openstack-dev] [Nova] [Ironic] [TripleO] Fixing HostManager, take two

2014-06-09 Thread Devananda van der Veen
Last week, we tried to fix a bug in the way that Nova's baremetal and
ironic drivers are using the HostManager / HostState classes --
they're incorrectly reporting capabilities in an older fashion, which
is not in use any more, and thus not exposing the node's "stats" to
the scheduler. The fix actually broke both drivers but went unnoticed
in reviews on the original patch. Reverting that took about a week,
and Ironic patches have been blocked since then, but that's not what
I'm writing about.

I'd like to present my view of all the related patches and propose a
way forward for this fix. I'd also like to thank Hans for looking into
this and proposing a fix in the first place, and thank Hans and many
others for helping to address the resulting issues very quickly.


This is the original bug:
  https://bugs.launchpad.net/nova/+bug/1260265
  BaremetalHostManager cannot distinguish baremetal hosts from other hosts

The original attempted fix (now reverted):
  https://review.openstack.org/#/c/94043

This broke Ironic because it changed the signature of
HostState.__init__(), and it broke Nova-baremetal because it didn't
save "stats" in update_from_compute_node(). A fix was proposed for
each project...

for Nova:
  https://review.openstack.org/#/c/97806/2

for Ironic:
  https://review.openstack.org/#/c/97447/5

If 97806 had been part of the original 94043, this change would
probably not have negatively affected nova's baremetal driver.
However, it still would have broken Ironic until 97447 could have been
landed. I should have noticed this when the
check-tempest-dsvm-virtual-ironic-nv job on that patch failed (I, like
others, have apparently fallen into the bad habit of ignoring test
results which say "non-voting").

So, until such time as the necessary driver and other changes are able
to land in Nova, and at Sean's suggestion, we've proposed a change to
the nova unit tests to "watch" those internal APIs that Ironic depends
on:
  https://review.openstack.org/#/c/98201

This will at least make it very explicit to any Nova reviewer that a
change to these APIs will affect Ironic. We can also set up a watch on
changes to this file, alerting us if there is a patch changing an API
that we depend on.

As for how to proceed, I would like to suggest the following:
- 97447 be reworked to support both the current and proposed HostState
parameter lists
- 94043 and 97806 be squashed and reproposed, but held until after
97447 and 98201 land
- a new patch be proposed to ironic to remove support for the now-old
HostState parameter list
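
For the first point, the kind of compatibility shim I have in mind is
roughly the following (illustrative only, not the actual patch; the extra
keyword arguments are assumptions):

    from nova.scheduler import host_manager

    class IronicNodeState(host_manager.HostState):
        def __init__(self, host, node, **kwargs):
            # The current HostState takes (host, node); the proposed version
            # adds further keyword arguments. Try the new form first and fall
            # back to the old one so the driver works against both.
            try:
                super(IronicNodeState, self).__init__(host, node, **kwargs)
            except TypeError:
                super(IronicNodeState, self).__init__(host, node)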


Thoughts? Suggestions?

Cheers,
Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Douglas Mendizabal
Hi all,

I’m strongly in favor of having immutable TLS-typed containers, and very
much opposed to storing every revision of changes done to a container.  I
think that storing versioned containers would add too much complexity to
Barbican, where immutable containers would work well.


I’m still not sold on the idea of registering services with Barbican, even
though (or maybe especially because) Barbican would not be using this data
for anything.  I understand the problem that we’re trying to solve by
associating different resources across projects, but I don’t feel like
Barbican is the right place to do this.

It seems we’re leaning towards option #2, but I would argue that
orchestration of services is outside the scope of Barbican’s role as a
secret-store.  I think this is a problem that may need to be solved at a
higher level.  Maybe an openstack-wide registry of dependent entities
across services?

-Doug

On 6/9/14, 2:54 PM, "Tiwari, Arvind"  wrote:

>As per current implementation, containers are immutable.
>Do we have any use case to make it mutable? Can we live with new
>container instead of updating an existing container?
>
>Arvind 
>
>-Original Message-
>From: Samuel Bercovici [mailto:samu...@radware.com]
>Sent: Monday, June 09, 2014 1:31 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>As far as I understand the Current Barbican implementation is immutable.
>Can anyone from Barbican comment on this?
>
>-Original Message-
>From: Jain, Vivek [mailto:vivekj...@ebay.com]
>Sent: Monday, June 09, 2014 8:34 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>+1 for the idea of making certificate immutable.
>However, if Barbican allows updating certs/containers then versioning is
>a must.
>
>Thanks,
>Vivek
>
>
>On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:
>
>>Hi,
>>
>>I think that option 2 should be preferred at this stage.
>>I also think that certificate should be immutable, if you want a new
>>one, create a new one and update the listener to use it.
>>This removes any chance of mistakes, need for versioning etc.
>>
>>-Sam.
>>
>>-Original Message-
>>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>>Sent: Friday, June 06, 2014 10:16 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>>Integration Ideas
>>
>>Hey everyone,
>>
>>Per our IRC discussion yesterday I'd like to continue the discussion on
>>how Barbican and Neutron LBaaS will interact. There are currently two
>>ideas in play and both will work. If you have another idea please free
>>to add it so that we may evaluate all the options relative to each other.
>>Here are the two current ideas:
>>
>>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>>services) consumes to identify when to update/delete updated secrets
>>from Barbican. For those that aren't up to date with the Neutron LBaaS
>>API Revision, the project/tenant/user provides a secret (container?) id
>>when enabling SSL/TLS functionality.
>>
>>* Example: If a user makes a change to a secret/container in Barbican
>>then Neutron LBaaS will see an event and take the appropriate action.
>>
>>PROS:
>> - Barbican is going to create an eventing system regardless so it will
>>be supported.
>> - Decisions are made on behalf of the user which lessens the amount of
>>calls the user has to make.
>>
>>CONS:
>> - An eventing framework can become complex especially since we need to
>>ensure delivery of an event.
>> - Implementing an eventing system will take more time than option #2... I
>>think.
>>
>>2. Push orchestration decisions to API users. This idea comes with two
>>assumptions. The first assumption is that most providers' customers use
>>the cloud via a GUI, which in turn can handle any orchestration
>>decisions that need to be made. The second assumption is that power API
>>users are savvy and can handle their decisions as well. Using this
>>method requires services, such as LBaaS, to "register" in the form of
>>metadata to a barbican container.
>>
>>* Example: If a user makes a change to a secret the GUI can see which
>>services are registered and opt to warn the user of consequences. Power
>>users can look at the registered services and make decisions how they
>>see fit.
>>
>>PROS:
>> - Very simple to implement. The only code needed to make this a
>>reality is at the control plane (API) level.
>> - This option is more loosely coupled than option #1.
>>
>>CONS:
>> - Potential for services to not register/unregister. What happens in
>>this case?
>> - Pushes complexity of decision making on to GUI engineers and power
>>API users.
>>
>>
>>I would like to get a consensus on which option to move forward with
>>ASAP since the hackathon is comi

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
   The Barbican team was considering making the container mutable, but I don't 
think it matters now since everyone has chimed in and wants the container to be 
immutable. The current discussion is that the TLS container will be immutable 
but the metadata will not be.

I'm not sure what is meant by "versioning".  If Vivek cares to elaborate, that 
would be helpful.


On Jun 9, 2014, at 2:30 PM, Samuel Bercovici  wrote:

> As far as I understand the Current Barbican implementation is immutable.
> Can anyone from Barbican comment on this?
> 
> -Original Message-
> From: Jain, Vivek [mailto:vivekj...@ebay.com] 
> Sent: Monday, June 09, 2014 8:34 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
> Integration Ideas
> 
> +1 for the idea of making certificate immutable.
> However, if Barbican allows updating certs/containers then versioning is a 
> must.
> 
> Thanks,
> Vivek
> 
> 
> On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:
> 
>> Hi,
>> 
>> I think that option 2 should be preferred at this stage.
>> I also think that certificate should be immutable, if you want a new 
>> one, create a new one and update the listener to use it.
>> This removes any chance of mistakes, need for versioning etc.
>> 
>> -Sam.
>> 
>> -Original Message-
>> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>> Sent: Friday, June 06, 2014 10:16 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
>> Integration Ideas
>> 
>> Hey everyone,
>> 
>> Per our IRC discussion yesterday I'd like to continue the discussion on 
>> how Barbican and Neutron LBaaS will interact. There are currently two 
>> ideas in play and both will work. If you have another idea please free 
>> to add it so that we may evaluate all the options relative to each other.
>> Here are the two current ideas:
>> 
>> 1. Create an eventing system for Barbican that Neutron LBaaS (and other
>> services) consumes to identify when to update/delete updated secrets 
>> from Barbican. For those that aren't up to date with the Neutron LBaaS 
>> API Revision, the project/tenant/user provides a secret (container?) id 
>> when enabling SSL/TLS functionality.
>> 
>> * Example: If a user makes a change to a secret/container in Barbican 
>> then Neutron LBaaS will see an event and take the appropriate action.
>> 
>> PROS:
>> - Barbican is going to create an eventing system regardless so it will 
>> be supported.
>> - Decisions are made on behalf of the user which lessens the amount of 
>> calls the user has to make.
>> 
>> CONS:
>> - An eventing framework can become complex especially since we need to 
>> ensure delivery of an event.
>> - Implementing an eventing system will take more time than option #2... I 
>> think.
>> 
>> 2. Push orchestration decisions to API users. This idea comes with two 
>> assumptions. The first assumption is that most providers' customers use 
>> the cloud via a GUI, which in turn can handle any orchestration 
>> decisions that need to be made. The second assumption is that power API 
>> users are savvy and can handle their decisions as well. Using this 
>> method requires services, such as LBaaS, to "register" in the form of 
>> metadata to a barbican container.
>> 
>> * Example: If a user makes a change to a secret the GUI can see which 
>> services are registered and opt to warn the user of consequences. Power 
>> users can look at the registered services and make decisions how they 
>> see fit.
>> 
>> PROS:
>> - Very simple to implement. The only code needed to make this a 
>> reality is at the control plane (API) level.
>> - This option is more loosely coupled than option #1.
>> 
>> CONS:
>> - Potential for services to not register/unregister. What happens in 
>> this case?
>> - Pushes complexity of decision making on to GUI engineers and power 
>> API users.
>> 
>> 
>> I would like to get a consensus on which option to move forward with 
>> ASAP since the hackathon is coming up and delivering Barbican to 
>> Neutron LBaaS integration is essential to exposing SSL/TLS 
>> functionality, which almost everyone has stated is a #1/#2 priority.
>> 
>> I'll start the decision making process by advocating for option #2. My 
>> reason for choosing option #2 has to deal mostly with the simplicity of 
>> implementing such a mechanism. Simplicity also means we can implement 
>> the necessary code and get it approved much faster which seems to be a 
>> concern for everyone. What option does everyone else want to move 
>> forward with?
>> 
>> 
>> 
>> Cheers,
>> --Jorge
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@l

Re: [openstack-dev] use of the word certified

2014-06-09 Thread Eoghan Glynn


> Based on the discussion I'd like to propose these options:
> 1. Cinder-certified driver - This is an attempt to move the "certification"
> to the project level.
> 2. CI-tested driver - This is probably the most accurate, at least for what
> we're trying to achieve for Juno: Continuous Integration of Vendor-specific
> Drivers.

Hi Ramy,

Thanks for these constructive suggestions.

The second option is certainly a very direct and specific reflection of
what is actually involved in getting the Cinder project's imprimatur.

The first option is also a bit clearer, in the sense of the scope of the
certification.

Cheers,
Eoghan

> Ramy
> 
> -Original Message-
> From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
> Sent: Monday, June 09, 2014 4:50 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] use of the word certified
> 
> On 6 June 2014 18:29, Anita Kuno  wrote:
> > So there are certain words that mean certain things, most don't, some do.
> >
> > If words that mean certain things are used then some folks start using
> > the word and have expectations around the word and the OpenStack
> > Technical Committee and other OpenStack programs find themselves on
> > the hook for behaviours that they didn't agree to.
> >
> > Currently the word under discussion is "certified" and its derivatives:
> > certification, certifying, and others with root word "certificate".
> >
> > This came to my attention at the summit with a cinder summit session
> > with the one of the cerficiate words in the title. I had thought my
> > point had been made but it appears that there needs to be more
> > discussion on this. So let's discuss.
> >
> > Let's start with the definition of certify:
> > cer·ti·fy
> > verb (used with object), cer·ti·fied, cer·ti·fy·ing.
> > 1. to attest as certain; give reliable information of; confirm: He
> > certified the truth of his claim.
> 
> So the cinder team are attesting that a set of tests have been run against a
> driver: a certified driver.
> 
> > 3. to guarantee; endorse reliably: to certify a document with an
> > official seal.
> 
> We (the cinder team) are guaranteeing that the driver has been tested, in at
> least one configuration, and found to pass all of the tempest tests. This is
> a far better state than we were at 6 months ago, where many drivers didn't
> even pass a smoke test.
> 
> > 5. to award a certificate to (a person) attesting to the completion of
> > a course of study or the passing of a qualifying examination.
> 
> The cinder cert process is pretty much an exam.
> 
> 
> I think the word certification covers exactly what we are doing. Given that
> cinder-core are the people on the hook for any cinder problems (including
> vendor-specific ones), and the cinder core are the people who get
> bad-mouthed when there are problems (including vendor-specific ones), I
> think this level of certification gives us value.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo-specs approval process

2014-06-09 Thread Ben Nemec
Hi all,

While the oslo-specs repository has been available for a while and a
number of specs proposed, we hadn't agreed on a process for actually
approving them (i.e. the normal 2 +2's or something else).  This was
discussed at the Oslo meeting last Friday and the method decided upon by
the people present was that only the PTL (Doug Hellmann, dhellmann on
IRC) would approve specs.

However, he noted that he would still like to see at _least_ 2 +2's on a
spec, and +1's from interested users are always appreciated as well.
Basically he's looking for a consensus from the reviewers.

This e-mail is intended to notify anyone interested in the oslo-specs
process of how it will work going forward, and to provide an opportunity
for anyone not at the meeting to object if they so desire.  Barring a
significant concern being raised, the method outlined above will be
followed from now on.

Meeting discussion log:
http://eavesdrop.openstack.org/meetings/oslo/2014/oslo.2014-06-06-16.00.log.html#l-66

Thanks.

-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Janczuk, Tomasz
I could not agree more with the need to re-think Marconi’s current approach to 
scenario breadth and implementation extensibility/flexibility. The broader the 
HTTP API surface area, the more limited the implementation choices, and the 
harder the performance trade-offs. Marconi’s current HTTP APIs have a large 
surface area that aspires to serve too many purposes, which seriously limits 
implementation choices. For example, one cannot fully map Marconi’s HTTP APIs 
onto an AMQP messaging model (I tried last week to write a RabbitMQ plug-in for 
Marconi with miserable results).

I strongly believe Marconi would benefit from a very small  HTTP API surface 
that targets queue based messaging semantics. Queue based messaging is a well 
understood and accepted messaging model with a lot of proven prior art and 
customer demand from SQS, to Azure Storage Queues, to IronMQ, etc. While other 
messaging patterns certainly exist, they are niche compared to the basic, queue 
based, publish/consume pattern. If Marconi aspires to support non-queue 
messaging patterns, it should be done in an optional way (with a “MAY” in the 
HTTP API spec, which corresponds to option A below), or as a separate project 
(option B). Regardless the choice, the key to success is in in keeping the 
“MUST” HTTP API endpoints of Marconi limited in scope to the strict queue based 
messaging semantics.

I would be very interested in helping to flesh out such a minimalistic HTTP 
surface area.
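
As one purely illustrative starting point (the endpoint names are sketched
for the sake of discussion, not a statement of what Marconi exposes today),
the whole "MUST" surface could be as small as:

# Hypothetical sketch of a minimal, queue-only HTTP surface; paths are
# illustrative only.
MINIMAL_QUEUE_API = [
    ("PUT",    "/v1/queues/{queue}",                    "create a queue"),
    ("DELETE", "/v1/queues/{queue}",                    "delete a queue"),
    ("POST",   "/v1/queues/{queue}/messages",           "publish one or more messages"),
    ("POST",   "/v1/queues/{queue}/claims",             "claim up to N messages for a consumer"),
    ("DELETE", "/v1/queues/{queue}/messages/{message}", "acknowledge/delete a claimed message"),
]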

Thanks,
Tomasz Janczuk
@tjanczuk
HP

From: Kurt Griffiths <kurt.griffi...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Mon, 9 Jun 2014 19:31:03 +
To: OpenStack Dev <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [marconi] Reconsidering the unified API model

Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.

Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
conclusive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads

Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).

Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways t

Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Doug Hellmann
On Mon, Jun 9, 2014 at 3:31 PM, Kurt Griffiths
 wrote:
> Folks, this may be a bit of a bombshell, but I think we have been dancing
> around the issue for a while now and we need to address it head on. Let me
> start with some background.
>
> Back when we started designing the Marconi API, we knew that we wanted to
> support several messaging patterns. We could do that using a unified queue
> resource, combining both task distribution and feed semantics. Or we could
> create disjoint resources in the API, or even create two separate services
> altogether, one each for the two semantic groups.
>
> The decision was made to go with a unified API for these reasons:
>
> It would afford hybrid patterns, such as auditing or diagnosing a task
> distribution queue
> Once you implement guaranteed delivery for a message feed over HTTP,
> implementing task distribution is a relatively straightforward addition. If
> you want both types of semantics, you don’t necessarily gain anything by
> implementing them separately.
>
> Lately we have been talking about writing drivers for traditional message
> brokers that will not be able to support the message feeds part of the API.
> I’ve started to think that having a huge part of the API that may or may not
> “work”, depending on how Marconi is deployed, is not a good story for users,
> esp. in light of the push to make different clouds more interoperable.
>
> Therefore, I think we have a very big decision to make here as a team and a
> community. I see three options right now. I’ve listed several—but by no
> means conclusive—pros and cons for each, as well as some counterpoints,
> based on past discussions.
>
> Option A. Allow drivers to only implement part of the API
>
> For:
>
> Allows for a wider variety of backends. (counter: may create subtle
> differences in behavior between deployments)
> May provide opportunities for tuning deployments for specific workloads
>
> Against:
>
> Makes it hard for users to create applications that work across multiple
> clouds, since critical functionality may or may not be available in a given
> deployment. (counter: how many users need cross-cloud compatibility? Can
> they degrade gracefully?)
>
>
> Option B. Split the service in two. Different APIs, different services. One
> would be message feeds, while the other would be something akin to Amazon’s
> SQS.
>
> For:
>
> Same as Option A, plus creates a clean line of functionality for deployment
> (deploy one service or the other, or both, with clear expectations of what
> messaging patterns are supported in any case).
>
> Against:
>
> Removes support for hybrid messaging patterns (counter: how useful are such
> patterns in the first place?)
> Operators now have two services to deploy and support, rather than just one
> (counter: can scale them independently, perhaps leading to gains in
> efficiency)
>
>
> Option C. Require every backend to support the entirety of the API as it now
> stands.
>
> For:
>
> Least disruptive in terms of the current API design and implementation
> Affords a wider variety of messaging patterns (counter: YAGNI?)
> Reuses code in drivers and API between feed and task distribution operations
> (counter: there may be ways to continue sharing some code if the API is
> split)
>
> Against:
>
> Requires operators to deploy a NoSQL cluster (counter: many operators are
> comfortable with NoSQL today)
> Currently requires MongoDB, which is AGPL (counter: a Redis driver is under
> development)
> A unified API is hard to tune for performance (counter: Redis driver should
> be able to handle high-throughput use cases, TBD)

We went with a single large storage API in ceilometer initially, but
we had some discussions at the Juno summit about it being a bad
decision because it resulted in storing some data like alarm
definitions in database formats that just didn't make sense for that.
Julien and Eoghan may want to fill in more details.

Keystone has separate backends for tenants, tokens, the catalog, etc.,
so you have precedent there for splitting up the features in a way
that makes it easier for driver authors and for building features on
appropriate backends.

Doug

>
> I’d love to get everyone’s thoughts on these options; let's brainstorm for a
> bit, then we can home in on the option that makes the most sense. We may
> need to do some POCs or experiments to get enough information to make a good
> decision.
>
> @kgriffs
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [driverlog] Tail-f CI and its lack of running and its DriverLog status

2014-06-09 Thread Kyle Mestery
Hi Luke:

After talking with various infra folks, we've noticed the Tail-f CI
system is not voting anymore. According to some informal research, the
last run for this CI setup was in April [1]. Can you verify this
system is still running? We will need this to be working by the middle
of Juno-2, with a history of voting or we may remove the Tail-f driver
from the tree.

Also, along these lines, I'm curious why DriverLog reports this driver
"Green" and as tested [2]. What is the criteria for this? I'd like to
propose a patch changing this driver from "Green" to something else
since it has not been running for the past few months.

Thanks,
Kyle

[1] https://review.openstack.org/#/c/76002/
[2] http://stackalytics.com/report/driverlog?project_id=openstack%2Fneutron

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Official bug tags

2014-06-09 Thread Devananda van der Veen
Hi all!

Dmitry called it to my attention last week that we lacked any official
guidelines on bug tags, and I've just gotten around to following up on
it. I've created an official list in launchpad and added that to the
OpenStack bug tag tags list wiki page here:
  https://wiki.openstack.org/wiki/Bug_Tags#Ironic

I've also updated the tags on a few bugs that were close-but-not-quite
(eg, s/docs/documentation/).

Regards,
-Devananda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Sam Harwell
Option A can be made usable provided you do the following:


1.   Add an endpoint for determining whether or not the current service 
supports optional feature X.

2.   For each optional feature of the API, clearly document that the 
feature is optional, and name the feature it is part of.

3.   If the optional feature is defined within the core Marconi 
specification, require implementations to return a 501 for affected URIs if the 
feature is not supported (this is in addition to, not in place of, item #1 
above).

A description of some key documentation elements I am looking for when a 
service includes optional functionality is listed under the heading “Conceptual 
Grouping” in the following document:
https://github.com/sharwell/openstack.net/wiki/The-JSON-Checklist
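
A rough client-side sketch of how items 1 and 3 could combine (the
/v1/capabilities path, the feature name and the payload shape are all
hypothetical, used only to illustrate the pattern):

import json

import requests


def supports(base_url, feature):
    # Item 1: ask the service which optional features it implements.
    resp = requests.get(base_url + "/v1/capabilities")
    resp.raise_for_status()
    return feature in resp.json().get("optional-features", [])


def post_feed_message(base_url, queue, body):
    resp = requests.post(
        base_url + "/v1/queues/%s/messages" % queue,
        data=json.dumps([{"body": body}]),
        headers={"Content-Type": "application/json"})
    if resp.status_code == 501:
        # Item 3: this deployment does not implement the optional feature.
        raise NotImplementedError("message feeds are not supported here")
    resp.raise_for_status()
    return resp.json()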

Thank you,
Sam Harwell

From: Kurt Griffiths [mailto:kurt.griffi...@rackspace.com]
Sent: Monday, June 09, 2014 2:31 PM
To: OpenStack Dev
Subject: [openstack-dev] [marconi] Reconsidering the unified API model

Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.
Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
conclusive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads
Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).
Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways to continue sharing some code if the API 
is split)
Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are 
comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is 
under development)
  *   A unified API is hard to tune for performance (counter: Redis driver 
should be able to handle high-throughput use cases, TBD)
I’d love to get everyone’s thoughts on these options; let's brainstorm for a 
bit, then we can home in on the option that makes the most sense. We may need 
to do some POCs or experiments to get enough information to make a good 
decision.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-09 Thread Amir Sadoughi
Paul,

Beyond explicit configuration for the cloud operator, documentation and API 
validation for the end user, is there anything specific you would like to see 
as a “warning label”? Does iptables do TCP sequence number validation? Where we 
can, we should strive to match iptables behavior.

Regarding OVS flows and security groups, we can provide a tool to explain how 
security group rules are mapped to the integration bridge. In the proposed 
solution contained in the blueprint, security group rule flows would be 
distinguished from other agents’ flows via a cookie.

Regarding packet logging, I don’t know if OVS is capable of it. If iptables in 
Neutron does not currently support that feature, I don’t think Neutron should 
explicitly support out-of-tree features.

Amir

On Jun 3, 2014, at 6:59 AM, CARVER, PAUL <pc2...@att.com> wrote:


Amir Sadoughi wrote:

>Specifically, OVS lacks connection tracking so it won’t have a RELATED feature 
>or stateful rules
>for non-TCP flows. (OVS connection tracking is currently under development, to 
>be released by 2015

It definitely needs a big obvious warning label on this. A stateless firewall 
hasn’t been acceptable in serious
security environments for at least a decade. “Real” firewalls do things like 
TCP sequence number validation
to ensure that someone isn’t hi-jacking an existing connection and TCP flag 
validation to make sure that someone
isn’t “fuzzing” by sending invalid combinations of flags in order to uncover 
bugs in servers behind the firewall.


>- debugging OVS is new to users compared to debugging old iptables

This one is very important in my opinion. There absolutely needs to be a 
section in the documentation
on displaying and interpreting the rules generated by Neutron. I’m pretty sure 
that if you tell anyone
with Linux admin experience that Neutron security groups are iptables based, 
they should be able to
figure their way around iptables -L or iptables -S without much help.

If they haven’t touched iptables in a while, five minutes reading “man 
iptables” should be enough
for them to figure out the important options and they can readily see the 
relationship between
what they put in a security group and what shows up in the iptables chain. I 
don’t think there’s
anywhere near that ease of use on how to list the OvS ruleset for a VM and see 
how it corresponds
to the Neutron security group.


Finally, logging of packets (including both dropped and permitted connections) 
is mandatory in many
environments. Does OvS have the ability to do the necessary logging? Although 
Neutron
security groups don’t currently enable logging, the capabilities are present in 
the underlying
iptables and can be enabled with some work. If OvS doesn’t support logging of 
connections then
this feature definitely needs to be clearly marked as “not a firewall 
substitute” so that admins
are clearly informed that they still need a “real” firewall for audit 
compliance and may only
consider OvS based Neutron security groups as an additional layer of protection 
behind the
“real” firewall.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-09 Thread Amir Sadoughi
Carl,

You are correct in both distinctions. Like I mentioned to Paul, beyond explicit 
configuration for the cloud operator, documentation and API validation for the 
end user, is there anything specific you would like to see as a “warning label”?

Amir

On Jun 3, 2014, at 9:01 AM, Carl Baldwin <c...@ecbaldwin.net> wrote:


How does ovs handle tcp flows?  Does it include stateful tracking of tcp -- as 
your wording below implies -- or does it do stateless inspection of returning 
tcp packets?  It appears it is the latter.  This isn't the same as providing a 
stateful ESTABLISHED feature.  Many users may not fully understand the 
differences.

One of the most basic use cases, which is to ping an outside Ip address from 
inside a nova instance would not work without connection tracking with the 
default security groups which don't allow ingress except related and 
established.  This may surprise many.

Carl

Hi all,

In the Neutron weekly meeting today[0], we discussed the ovs-firewall-driver 
blueprint[1]. Moving forward, OVS features today will give us "80%" of the 
iptables security groups behavior. Specifically, OVS lacks connection tracking 
so it won’t have a RELATED feature or stateful rules for non-TCP flows. (OVS 
connection tracking is currently under development, to be released by 2015[2]). 
To make the “20%" difference more explicit to the operator and end user, we 
have proposed feature configuration to provide security group rules API 
validation that would validate based on connection tracking ability, for 
example.

Several ideas floated up during the chat today, I wanted to expand the 
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as “experimental”?
- performance improvements under a new OVS firewall driver untested so far 
(vthapar is working on this)
- incomplete implementation will cause confusion, educational burden
- debugging OVS is new to users compared to debugging old iptables
- waiting for upstream OVS to implement (OpenStack K- or even L- cycle)

In my humble opinion, merging the blueprint for Juno will provide us a viable, 
more performant security groups implementation than what we have available 
today.

Amir


[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] blueprint ovs-firewall-driver: OVS implementation of security groups

2014-06-09 Thread Amir Sadoughi
Salvatore,

The 80% distinction came from a discussion I had at summit, representing that 
the majority of features described by the current security groups could be 
implemented today with OVS without connection tracking. It’s not based on any 
mathematical calculation… more of a pseudo-application of Pareto’s principle. :)

Correct, the OVS tcp_flags feature will be used to implement an emulated 
statefulness for TCP flows whereas non-TCP flows would use the 
source-port-range-min, source-port-range-max extended API to implement 
stateless flows.
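
To make that a bit more concrete, here is a rough sketch of the kind of match
such a driver could build for the reply direction of an egress rule (plain
dicts rather than real flow syntax, and not the blueprint's actual code):

def reply_match_for_rule(rule):
    """Build the match that admits reply traffic for an egress rule."""
    if rule["protocol"] == "tcp":
        # Emulated ESTABLISHED: only admit returning TCP segments with the
        # ACK bit set (OVS tcp_flags matching), keyed on the remote port.
        return {"proto": "tcp",
                "tp_src": rule["port_range_min"],
                "tcp_flags": "+ack"}
    # Non-TCP traffic has no flags to key on, so the reverse direction is
    # purely stateless and relies on the proposed source-port-range-min/max
    # API extension to bound what is allowed back in.
    return {"proto": rule["protocol"],
            "tp_src_min": rule["source_port_range_min"],
            "tp_src_max": rule["source_port_range_max"]}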

Performance measurements would have to come after implementations are made for 
the proposed blueprint. Although, benchmarks of the two existing FirewallDriver 
implementations can be done today. We can measure number of concurrent 
connections until failure, overall bandwidth as percentage of line rate, etc. 
Are there any other specific metrics you would like to see in the benchmark?

Amir

On Jun 3, 2014, at 2:51 AM, Salvatore Orlando <sorla...@nicira.com> wrote:

I would like to understand how we got to this 80%/20% distinction.
In other terms, it seems conntrack's RELATED features won't be supported for 
non-tcp traffic. What about the ESTABLISHED feature? The blueprint specs refers 
to tcp_flags=ack.
Or will that be supported through the source port matching extension which is 
being promoted?

More comments inline.

On 3 June 2014 01:22, Amir Sadoughi <amir.sadou...@rackspace.com> wrote:
Hi all,

In the Neutron weekly meeting today[0], we discussed the ovs-firewall-driver 
blueprint[1]. Moving forward, OVS features today will give us "80%" of the 
iptables security groups behavior. Specifically, OVS lacks connection tracking 
so it won’t have a RELATED feature or stateful rules for non-TCP flows. (OVS 
connection tracking is currently under development, to be released by 2015[2]). 
To make the “20%" difference more explicit to the operator and end user, we 
have proposed feature configuration to provide security group rules API 
validation that would validate based on connection tracking ability, for 
example.

I am stilly generally skeptic of API changes which surface backend details on 
user-facing APIs. I understand why you are proposing this however, and I think 
it would be good to get first an assessment of the benefits brought by such a 
change before making a call on changing API behaviour to reflect security group 
implementation on the backend.


Several ideas floated up during the chat today, I wanted to expand the 
discussion to the mailing list for further debate. Some ideas include:
- marking ovs-firewall-driver as experimental in Juno
- What does it mean to be marked as “experimental”?

In this case "experimental" would be a way to say "not 100% functional". You 
would not expect a public service provider to expose neutron APIs backed by 
this driver, but it could be used in some private deployments where the 
missing features are not a concern.

- performance improvements under a new OVS firewall driver untested so far 
(vthapar is working on this)

From the last comment in your post it seems you already have proof of the 
performance improvement; perhaps you can add those to the "Performance Impact" 
section of the spec.

- incomplete implementation will cause confusion, educational burden

It's more about technical debt in my opinion, but this is not necessarily the 
case.

- debugging OVS is new to users compared to debugging old iptables

This won't be a concern as long as we have good documentation to back the 
implementation.
As Neutron is usually sloppy with documentation, it's a concern.

- waiting for upstream OVS to implement (OpenStack K- or even L- cycle)

In my humble opinion, merging the blueprint for Juno will provide us a viable, 
more performant security groups implementation than what we have available 
today.

Amir


[0] 
http://eavesdrop.openstack.org/meetings/networking/2014/networking.2014-06-02-21.01.log.html
[1] https://review.openstack.org/#/c/89712/
[2] http://openvswitch.org/pipermail/dev/2014-May/040567.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary "extra specs" for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 12:56 PM, Jay Pipes  wrote:
> On 06/09/2014 01:38 PM, Joe Cropper wrote:
>>
>> On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil  wrote:
>>>
>>> Hi Joe,
>>>
>>>
>>>
>>> Can you give some examples of what that data would be used for ?
>>
>>
>> Sure!  For example, in the PowerKVM world, hosts can be dynamically
>> configured to run in split-core processor mode.  This setting can be
>> dynamically changed and it'd be nice to allow the driver to track this
>> somehow -- and it probably doesn't warrant its own explicit field in
>> compute_node.  Likewise, PowerKVM also has a concept of the maximum
>> SMT level in which its guests can run (which can also vary dynamically
>> based on the split-core setting) and it would also be nice to tie such
>> settings to the compute node.
>
>
> That information is typically stored in the compute_node.cpu_info field.
>
>
>> Overall, this would give folks writing compute drivers the ability to
>> attach the "extra spec" style data to a compute node for a variety of
>> purposes -- two simple examples provided above, but there are many
>> more.  :-)
>
>
> If it's something that the driver can discover on its own and that the
> driver can/should use in determining the capabilities that the hypervisor
> offers, then at this point, I believe compute_node.cpu_info is the place to
> put that information. It's probably worth renaming the cpu_info field to
> just "capabilities" instead, to be more generic and indicate that it's a
> place the driver stores discoverable capability information about the
> node...

Thanks, that's a great point!  While that's fair for those items that
are self-discoverable for the driver that also are cpu_info'ish in
nature, there are also some additional use cases I should mention.
Imagine some higher level projects [above nova] want to associate
arbitrary bits of information with the compute host for
project-specific uses.  For example, suppose I have an orchestration
project that does coordinated live migrations and I want to put some
specific restrictions on the # of concurrent migrations that should
occur for the respective compute node (and let the end-user adjust
these values).  Having it directly associated with the compute node in
nova gives us some nice ways to maintain data consistency.  I think this
would be a great way to gain some additional parity with some of the
other nova structures such as flavors' extra_specs and instances'
metadata/system_metadata.
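
For illustration, the kind of data I have in mind is just a small bag of
key/value pairs hanging off the compute node (all of these keys are made up;
none of them exist in Nova today):

# Made-up example of host-level key/value data; keys are not real Nova fields.
compute_node_extra_specs = {
    "split_core_mode": "true",         # PowerKVM split-core setting
    "max_smt_level": "4",              # varies with the split-core setting
    "max_concurrent_migrations": "2",  # hint consumed by an orchestration layer
}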

Thanks,
Joe

>
> Now, for *user-defined* taxonomies, I'm a big fan of simple string tagging,
> as is proposed for the server instance model in this spec:
>
> https://review.openstack.org/#/c/91444/
>
> Best,
> jay
>
>
>>>
>>>
>>>
>>> It sounds on the face of it that what you’re looking for is pretty
>>> similar
>>> to what Extensible Resource Tracker sets out to do
>>> (https://review.openstack.org/#/c/86050
>>> https://review.openstack.org/#/c/71557)
>>
>>
>> Thanks for pointing this out.  I actually ran across these while I was
>> searching the code to see what might already exist in this space.
>> Actually, the compute node 'stats' was always a first guess, but these
>> are clearly heavily reserved for the resource tracker and wind up
>> getting purged/deleted over time since the 'extra specs' I reference
>> above aren't necessarily tied to the spawning/deleting of instances.
>> In other words, they're not really consumable resources, per-se.
>> Unless I'm overlooking a way (perhaps I am) to use this
>> extensible-resource-tracker blueprint for arbitrary key-value pairs
>> **not** related to instances, I think we need something additional?
>>
>> I'd happily create a new blueprint for this as well.
>>
>>>
>>>
>>>
>>> Phil
>>>
>>>
>>>
>>> From: Joe Cropper [mailto:cropper@gmail.com]
>>> Sent: 07 June 2014 07:30
>>> To: openstack-dev@lists.openstack.org
>>> Subject: [openstack-dev] Arbitrary "extra specs" for compute nodes?
>>>
>>>
>>>
>>> Hi Folks,
>>>
>>> I was wondering if there was any such mechanism in the compute node
>>> structure to hold arbitrary key-value pairs, similar to flavors'
>>> "extra_specs" concept?
>>>
>>> It appears there are entries for things like pci_stats, stats and
>>> recently
>>> added extra_resources -- but these all tend to have more specific usages
>>> vs.
>>> just arbitrary data that may want to be maintained about the compute node
>>> over the course of its lifetime.
>>>
>>> Unless I'm overlooking an existing construct for this, would this be
>>> something that folks would welcome a Juno blueprint for--i.e., adding
>>> extra_specs style column with a JSON-formatted string that could be
>>> loaded
>>> as a dict of key-value pairs?
>>>
>>> Thoughts?
>>>
>>> Thanks,
>>>
>>> Joe
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> ___
>> OpenStack-dev 

[openstack-dev] [ironic]Adding support for ManagedFRU in the IPMI driver

2014-06-09 Thread Lemieux, Luc
Hi, I work for Kontron, a hardware company that has been a member of the 
Foundation since this year.

One of our blade products holds 2 complete servers (i7 Haswell chip, 16 GB RAM, 
120 GB SSD each) that are managed by a single IPMI BMC (Baseboard Management 
Controller) using the IPMI "ManagedFRU" concept. This concept allows both 
servers to be individually managed through the one management IPMI address.

However, this concept was not anticipated in the Nova baremetal driver and 
probably is not in Ironic either.

This ManagedFRU concept is common within ATCA hardware and uTCA expansion 
cards, and we think this abstraction may become more and more common in future 
hardware that aims to provide as much processing as possible in as small a 
form factor as possible. Our SYMkloud box offers up to 9 such nodes (so 18 i7 
Haswells) in a 2U rack form factor.

Where should I start looking to see whether the following would be a useful 
long-term feature for Ironic: the ability to detect, through the bootstrap, 
that a server is of type "ManagedFRU" (that is, more than one server behind 
the same IPMI address) and then use "Redirect"-type IPMI commands (so a 
special driver, I guess) to individually manage those servers?
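
To illustrate the idea (the extra driver_info key below is invented; nothing
like it exists in Ironic today): each enrolled node could carry its own FRU
identifier alongside the shared BMC address, and a dedicated driver would use
it to build the "Redirect" IPMI commands.

# Invented example: two Ironic nodes sharing one BMC, distinguished by a FRU
# id that a hypothetical "managed FRU" IPMI driver would use for redirection.
node_a_driver_info = {
    "ipmi_address": "10.0.0.42",   # the single shared BMC
    "ipmi_username": "admin",
    "ipmi_password": "secret",
    "managed_fru_id": "1",         # first server behind this BMC
}
node_b_driver_info = dict(node_a_driver_info, managed_fru_id="2")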

We as a company want to get involved in the community and see this as a 
possible contribution that we could make.

Thank you!

Luc Lemieux | Software Designer, Application Ready Platforms | Kontron Canada | 
T 450 437 4661 | E luc.lemi...@ca.kontron.com
Kontron Canada Inc
4555 Rue Ambroise-Lafortune
Boisbriand (Québec) J7H 0A4

The information contained in this document is confidential and property of 
Kontron Canada Inc. Any unauthorized review, use, disclosure or distribution is 
prohibited without express written consent of Kontron Canada Inc. If you are 
not the intended recipient, please contact the sender and destroy all copies of 
the original message and enclosed attachments.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] Designate Incubation Request

2014-06-09 Thread Mac Innes, Kiall
On Mon, 2014-06-09 at 07:25 -0400, Sean Dague wrote:
> On 06/06/2014 12:06 PM, Mac Innes, Kiall wrote:
> > Several of the TC requested we have an openstack-infra managed DevStack
> > gate enabled before they would cast their vote - I'm happy to say, we've
> > got it :)
> > 
> > With the merge of [1], Designate now has voting devstack /
> > requirements / docs jobs. An example of the DevStack run is at [2].
> > 
> > "Vote Designate" @ [3] :)
> > 
> > Thanks,
> > Kiall
> > 
> > [1]: https://review.openstack.org/#/c/98439/
> > [2]: https://review.openstack.org/#/c/98442/
> > [3]: https://review.openstack.org/#/c/97609/
> 
> I'm seeing in [2] api logs that something was run (at least 1 API
> request was processed), but it's hard to see where that is in the
> console logs. Pointers?
> 
>   -Sean
> 

Hey Sean,

Yes - on Saturday, after sending this email on Friday, I noticed the
exercises were not running - devstack-gate has them disabled by default.

We landed a patch to the job this morning to allow us to run them, and
have a series of patches in the check/gate queues to enable the
exercises for all patches. An example of the output is at [1] - this
will be enabled for all patches once [2] lands.

Thanks,
Kiall

[1]:
http://logs.openstack.org/88/98788/6/check/gate-designate-devstack-dsvm/98b5704/console.html
[2]: https://review.openstack.org/#/c/98788/


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Mike Bayer

On Jun 9, 2014, at 1:08 PM, Mike Bayer  wrote:

> 
> On Jun 9, 2014, at 12:50 PM, Devananda van der Veen  
> wrote:
> 
>> There may be some problems with MySQL when testing parallel writes in
>> different non-committing transactions, even in READ COMMITTED mode,
>> due to InnoDB locking, if the queries use non-unique secondary indexes
>> for UPDATE or SELECT..FOR UPDATE queries. This is done by the
>> "with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
>> in Nova. So I would not recommend this approach, even though, in
>> principle, I agree it would be a much more efficient way of testing
>> database reads/writes.
>> 
>> More details here:
>> http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
>> http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
> 
> OK, but just to clarify my understanding, what is the approach to testing 
> writes in parallel right now, are we doing CREATE DATABASE for two entirely 
> distinct databases with some kind of generated name for each one?  Otherwise, 
> if the parallel tests are against the same database, this issue exists 
> regardless (unless autocommit mode is used, is FOR UPDATE accepted under 
> those conditions?)

Took a look and this seems to be the case, from oslo.db:

def create_database(engine):
"""Provide temporary user and database for each particular test."""
driver = engine.name

auth = {
'database': ''.join(random.choice(string.ascii_lowercase)
for i in moves.range(10)),
# ...

sqls = [
"drop database if exists %(database)s;",
"create database %(database)s;"
]

Just thinking out loud here, I’ll move these ideas to a new wiki page after 
this post.My idea now is that OK, we provide ad-hoc databases for tests, 
but look into the idea that we create N ad-hoc databases, corresponding to 
parallel test runs - e.g. if we are running five tests concurrently, we make 
five databases.   Tests that use a database will be dished out among this pool 
of available schemas.   In the *typical* case (which means not the case that 
we’re testing actual migrations, that’s a special case) we build up the schema 
on each database via migrations or even create_all() just once, run tests 
within rolled-back transactions one-per-database, then the DBs are torn down 
when the suite is finished.
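
A bare-bones sketch of the per-test part of that idea (plain SQLAlchemy; it
assumes each test worker has already been handed its own database URL from
the pool described above and that the schema was built once up front):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


class TransactionalTestMixin(object):
    DB_URL = None  # set per worker from the ad-hoc database pool

    def setUp(self):
        super(TransactionalTestMixin, self).setUp()
        self.engine = create_engine(self.DB_URL)
        self.connection = self.engine.connect()
        self.trans = self.connection.begin()
        # Bind the session to the external transaction so that everything
        # the test does can be rolled back in one shot during tearDown.
        self.session = sessionmaker(bind=self.connection)()

    def tearDown(self):
        self.session.close()
        self.trans.rollback()
        self.connection.close()
        super(TransactionalTestMixin, self).tearDown()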

Sorry for the thread hijack.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Tiwari, Arvind
As per current implementation, containers are immutable. 
Do we have any use case to make it mutable? Can we live with new container 
instead of updating an existing container?

Arvind 

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com] 
Sent: Monday, June 09, 2014 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

As far as I understand the Current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com]
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:

>Hi,
>
>I think that option 2 should be preferred at this stage.
>I also think that certificate should be immutable, if you want a new 
>one, create a new one and update the listener to use it.
>This removes any chance of mistakes, need for versioning etc.
>
>-Sam.
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 10:16 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
>Integration Ideas
>
>Hey everyone,
>
>Per our IRC discussion yesterday I'd like to continue the discussion on 
>how Barbican and Neutron LBaaS will interact. There are currently two 
>ideas in play and both will work. If you have another idea please free 
>to add it so that we may evaluate all the options relative to each other.
>Here are the two current ideas:
>
>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>services) consumes to identify when to update/delete updated secrets 
>from Barbican. For those that aren't up to date with the Neutron LBaaS 
>API Revision, the project/tenant/user provides a secret (container?) id 
>when enabling SSL/TLS functionality.
>
>* Example: If a user makes a change to a secret/container in Barbican 
>then Neutron LBaaS will see an event and take the appropriate action.
>
>PROS:
> - Barbican is going to create an eventing system regardless so it will 
>be supported.
> - Decisions are made on behalf of the user which lessens the amount of 
>calls the user has to make.
>
>CONS:
> - An eventing framework can become complex especially since we need to 
>ensure delivery of an event.
> - Implementing an eventing system will take more time than option #2... I 
>think.
>
>2. Push orchestration decisions to API users. This idea comes with two 
>assumptions. The first assumption is that most providers' customers use 
>the cloud via a GUI, which in turn can handle any orchestration 
>decisions that need to be made. The second assumption is that power API 
>users are savvy and can handle their decisions as well. Using this 
>method requires services, such as LBaaS, to "register" in the form of 
>metadata to a barbican container.
>
>* Example: If a user makes a change to a secret the GUI can see which 
>services are registered and opt to warn the user of consequences. Power 
>users can look at the registered services and make decisions how they 
>see fit.
>
>PROS:
> - Very simple to implement. The only code needed to make this a 
>reality is at the control plane (API) level.
> - This option is more loosely coupled than option #1.
>
>CONS:
> - Potential for services to not register/unregister. What happens in 
>this case?
> - Pushes complexity of decision making on to GUI engineers and power 
>API users.
>
>
>I would like to get a consensus on which option to move forward with 
>ASAP since the hackathon is coming up and delivering Barbican to 
>Neutron LBaaS integration is essential to exposing SSL/TLS 
>functionality, which almost everyone has stated is a #1/#2 priority.
>
>I'll start the decision making process by advocating for option #2. My 
>reason for choosing option #2 has to deal mostly with the simplicity of 
>implementing such a mechanism. Simplicity also means we can implement 
>the necessary code and get it approved much faster which seems to be a 
>concern for everyone. What option does everyone else want to move 
>forward with?
>
>
>
>Cheers,
>--Jorge
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [UX] [Heat] [Mistral] [Murano] [Neutron] [Solum] Cross-project UI library: gathering the requirements

2014-06-09 Thread Timur Sufiev
Hi All,

At the Solum-Murano-Heat cross-project session [1] during the
Openstack Juno Summit it was decided that it would be beneficial for
the Solum, Murano and Heat projects to implement common UX patterns in
separate library. During an early discussion several more projects
were added (Mistral and Neutron), and an initial UI draft was proposed
[2]. That initial concept is just a first step in finding the common
ground between the needs of the aforementioned projects and is quite
likely to be reworked in the future. So I'd like to initiate a discussion
to gather specific use cases from the Solum, Heat and Neutron projects
(Murano and Mistral are already covered to some extent), as well as to
gather in this thread all the people who are interested in the project.

[1] https://etherpad.openstack.org/p/9XQ7Q2NQdv
[2] 
https://docs.google.com/a/mirantis.com/document/d/19Q9JwoO77724RyOp7XkpYmALwmdb7JjoQHcDv4ffZ-I/edit#

-- 
Timur Sufiev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [hacking] Hacking 0.9.1 released

2014-06-09 Thread Joe Gordon
Hi folks,

Hacking 0.9.1 has just been released (hacking 0.9.0 had a minor bug).
Unlike with other dependencies, 'OpenStack Proposal Bot' does not automatically
push out a patch moving projects to the new version.

The recommended way to upgrade to hacking 0.9.1 is to add any newly failing
checks to the exclude list in tox.ini and fix them in subsequent patches
(example: https://review.openstack.org/#/c/98864/).

pep8 1.5.x changed a whole bunch of internals, so when upgrading to the new
hacking please make sure your local checks still work.


best,
Joe

Release Notes:


   - New dependency versions, all with new features
      - pep8==1.5.6 (https://github.com/jcrocholl/pep8/blob/master/CHANGES.txt)
         - Report E129 instead of E125 for a visually indented line with the
           same indent as the next logical line.
         - Report E265 for space before block comment.
         - Report E713 and E714 when the operators ``not in`` and ``is not``
           are recommended (taken from hacking).
         - Report E131 instead of E121 / E126 if the hanging indent is not
           consistent within the same continuation block. It helps when error
           E121 or E126 is in the ``ignore`` list.
         - Report E126 instead of E121 when the continuation line is hanging
           with extra indentation, even if the indentation is not a multiple
           of 4.
      - pyflakes==0.8.1
      - flake8==2.1.0
   - More rules support noqa
      - Added to: H701, H702, H232, H234, H235, H237
   - Gate on Python3 compatibility
   - Dropped H901, H902 as those are now in pep8 and enforced by E713 and E714
   - Support for separate localization catalogs
   - Rule numbers added to http://docs.openstack.org/developer/hacking/
   - Improved performance
   - New Rules:
      - H104  File contains nothing but comments
      - H305  imports not grouped correctly
      - H307  like imports should be grouped together
      - H405  multi line docstring summary not separated with an empty line
      - H904  Wrap long lines in parentheses instead of a backslash
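
For anyone wondering what a few of the new checks look like in practice, here
is a small illustrative snippet (the function and variable names are made up;
the rule codes are the ones listed above):

    def frob(name, allowed_names, handler=None):
        """Frob something by name.

        H405 wants the one-line summary above to be separated from the rest
        of the docstring by an empty line, as is done here.
        """
        # E713 / E714: prefer "not in" / "is not" over negating the whole test.
        if name not in allowed_names:   # rather than: if not name in allowed_names:
            raise ValueError(name)
        if handler is not None:         # rather than: if not handler is None:
            handler(name)

        # H904: wrap long lines in parentheses instead of a trailing backslash.
        return ('frobbed %s with a message long enough that it would '
                'otherwise need a backslash continuation' % name)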


Thank you to everyone who contributed to hacking 0.9.1:
* Joe Gordon
* Ivan A. Melnikov
* Ben Nemec
* Chang Bo Guo
* Nikola Dipanov
* Clay Gerrard
* Cyril Roelandt
* Dirk Mueller
* James E. Blair
* Jeremy Stanley
* Julien Danjou
* Lei Zhang
* Marc Abramowitz
* Mike Perez
* Radomir Dopieralski
* Samuel Merritt
* YAMAMOTO Takashi
* ZhiQiang Fan
* fujioka yuuichi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] Reconsidering the unified API model

2014-06-09 Thread Kurt Griffiths
Folks, this may be a bit of a bombshell, but I think we have been dancing 
around the issue for a while now and we need to address it head on. Let me 
start with some background.

Back when we started designing the Marconi API, we knew that we wanted to 
support several messaging patterns. We could do that using a unified queue 
resource, combining both task distribution and feed semantics. Or we could 
create disjoint resources in the API, or even create two separate services 
altogether, one each for the two semantic groups.

The decision was made to go with a unified API for these reasons:

  *   It would afford hybrid patterns, such as auditing or diagnosing a task 
distribution queue
  *   Once you implement guaranteed delivery for a message feed over HTTP, 
implementing task distribution is a relatively straightforward addition. If you 
want both types of semantics, you don’t necessarily gain anything by 
implementing them separately.

Lately we have been talking about writing drivers for traditional message 
brokers that will not be able to support the message feeds part of the API. 
I’ve started to think that having a huge part of the API that may or may not 
“work”, depending on how Marconi is deployed, is not a good story for users, 
esp. in light of the push to make different clouds more interoperable.

Therefore, I think we have a very big decision to make here as a team and a 
community. I see three options right now. I’ve listed several—but by no means 
exhaustive—pros and cons for each, as well as some counterpoints, based on past 
discussions.

Option A. Allow drivers to only implement part of the API

For:

  *   Allows for a wider variety of backends. (counter: may create subtle 
differences in behavior between deployments)
  *   May provide opportunities for tuning deployments for specific workloads

Against:

  *   Makes it hard for users to create applications that work across multiple 
clouds, since critical functionality may or may not be available in a given 
deployment. (counter: how many users need cross-cloud compatibility? Can they 
degrade gracefully?)

Option B. Split the service in two. Different APIs, different services. One 
would be message feeds, while the other would be something akin to Amazon’s SQS.

For:

  *   Same as Option A, plus creates a clean line of functionality for 
deployment (deploy one service or the other, or both, with clear expectations 
of what messaging patterns are supported in any case).

Against:

  *   Removes support for hybrid messaging patterns (counter: how useful are 
such patterns in the first place?)
  *   Operators now have two services to deploy and support, rather than just 
one (counter: can scale them independently, perhaps leading to gains in 
efficiency)

Option C. Require every backend to support the entirety of the API as it now 
stands.

For:

  *   Least disruptive in terms of the current API design and implementation
  *   Affords a wider variety of messaging patterns (counter: YAGNI?)
  *   Reuses code in drivers and API between feed and task distribution 
operations (counter: there may be ways to continue sharing some code if the API 
is split)

Against:

  *   Requires operators to deploy a NoSQL cluster (counter: many operators are 
comfortable with NoSQL today)
  *   Currently requires MongoDB, which is AGPL (counter: a Redis driver is 
under development)
  *   A unified API is hard to tune for performance (counter: Redis driver 
should be able to handle high-throughput use cases, TBD)

I’d love to get everyone’s thoughts on these options; let's brainstorm for a 
bit, then we can home in on the option that makes the most sense. We may need 
to do some POCs or experiments to get enough information to make a good 
decision.

@kgriffs
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Samuel Bercovici
As far as I understand, the current Barbican implementation is immutable.
Can anyone from Barbican comment on this?

-Original Message-
From: Jain, Vivek [mailto:vivekj...@ebay.com] 
Sent: Monday, June 09, 2014 8:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

+1 for the idea of making certificates immutable.
However, if Barbican allows updating certs/containers then versioning is a must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:

>Hi,
>
>I think that option 2 should be preferred at this stage.
>I also think that certificate should be immutable, if you want a new 
>one, create a new one and update the listener to use it.
>This removes any chance of mistakes, need for versioning etc.
>
>-Sam.
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 10:16 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
>Integration Ideas
>
>Hey everyone,
>
>Per our IRC discussion yesterday I'd like to continue the discussion on 
>how Barbican and Neutron LBaaS will interact. There are currently two 
>ideas in play and both will work. If you have another idea please feel free 
>to add it so that we may evaluate all the options relative to each other.
>Here are the two current ideas:
>
>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>services) consumes to identify when to update/delete updated secrets 
>from Barbican. For those that aren't up to date with the Neutron LBaaS 
>API Revision, the project/tenant/user provides a secret (container?) id 
>when enabling SSL/TLS functionality.
>
>* Example: If a user makes a change to a secret/container in Barbican 
>then Neutron LBaaS will see an event and take the appropriate action.
>
>PROS:
> - Barbican is going to create an eventing system regardless so it will 
>be supported.
> - Decisions are made on behalf of the user which lessens the amount of 
>calls the user has to make.
>
>CONS:
> - An eventing framework can become complex especially since we need to 
>ensure delivery of an event.
> - Implementing an eventing system will take more time than option #2... I 
>think.
>
>2. Push orchestration decisions to API users. This idea comes with two 
>assumptions. The first assumption is that most providers' customers use 
>the cloud via a GUI, which in turn can handle any orchestration 
>decisions that need to be made. The second assumption is that power API 
>users are savvy and can handle their decisions as well. Using this 
>method requires services, such as LBaaS, to "register" in the form of 
>metadata to a barbican container.
>
>* Example: If a user makes a change to a secret the GUI can see which 
>services are registered and opt to warn the user of consequences. Power 
>users can look at the registered services and make decisions how they 
>see fit.
>
>PROS:
> - Very simple to implement. The only code needed to make this a 
>reality is at the control plane (API) level.
> - This option is more loosely coupled than option #1.
>
>CONS:
> - Potential for services to not register/unregister. What happens in 
>this case?
> - Pushes complexity of decision making on to GUI engineers and power 
>API users.
>
>
>I would like to get a consensus on which option to move forward with 
>ASAP since the hackathon is coming up and delivering Barbican to 
>Neutron LBaaS integration is essential to exposing SSL/TLS 
>functionality, which almost everyone has stated is a #1/#2 priority.
>
>I'll start the decision making process by advocating for option #2. My 
>reason for choosing option #2 has to deal mostly with the simplicity of 
>implementing such a mechanism. Simplicity also means we can implement 
>the necessary code and get it approved much faster which seems to be a 
>concern for everyone. What option does everyone else want to move 
>forward with?
>
>
>
>Cheers,
>--Jorge
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Anita Kuno
On 06/09/2014 03:17 PM, Eoghan Glynn wrote:
> 
> 
 So there are certain words that mean certain things, most don't, some do.

 If words that mean certain things are used then some folks start using
 the word and have expectations around the word and the OpenStack
 Technical Committee and other OpenStack programs find themselves on the
 hook for behaviours that they didn't agree to.

 Currently the word under discussion is "certified" and its derivatives:
 certification, certifying, and others with root word "certificate".

 This came to my attention at the summit with a cinder summit session
 with one of the certificate words in the title. I had thought my
 point had been made but it appears that there needs to be more
 discussion on this. So let's discuss.

 Let's start with the definition of certify:
 cer·ti·fy
 verb (used with object), cer·ti·fied, cer·ti·fy·ing.
 1. to attest as certain; give reliable information of; confirm: He
 certified the truth of his claim.
 2. to testify to or vouch for in writing: The medical examiner will
 certify his findings to the court.
 3. to guarantee; endorse reliably: to certify a document with an
 official seal.
 4. to guarantee (a check) by writing on its face that the account
 against which it is drawn has sufficient funds to pay it.
 5. to award a certificate to (a person) attesting to the completion of a
 course of study or the passing of a qualifying examination.
 Source: http://dictionary.reference.com/browse/certify

 The issue I have with the word certify is that it requires someone or a
 group of someones to attest to something. The thing attested to is only
 as credible as the someone or the group of someones doing the attesting.
 We have no process, nor do I feel we want to have a process for
 evaluating the reliability of the someones or groups of someones doing
 the attesting.

 I think that having testing in place in line with other programs testing
 of patches (third party ci) in cinder should be sufficient to address
 the underlying concern, namely reliability of opensource hooks to
 proprietary code and/or hardware. I would like the use of the word
 "certificate" and all its roots to no longer be used in OpenStack
 programs with regard to testing. This won't happen until we get some
 discussion and agreement on this, which I would like to have.

 Thank you for your participation,
 Anita.
>>>
>>> Hi Anita,
>>>
>>> Just a note on cross-posting to both the os-dev and os-tc lists.
>>>
>>> Anyone not on the TC who hits reply-all is likely to see their
>>> post be rejected by the TC list moderator, but go through to the
>>> more open dev list.
>>>
>>> As a result, the thread diverges (as we saw with the recent election
>>> stats/turnout thread).
>>>
>>> Also, moderation rejects are an unpleasant user experience.
>>>
>>> So if a post is intended to reach out for input from the wider dev
>>> community, it's better to post *only* to the -dev list, or vice versa
>>> if you want to interact with a narrower audience.
>> My post was intended to include the tc list in the discussion
>>
>> I have no say in what posts the tc email list moderator accepts or does
>> not, or how those posts not accepted are informed of their status.
> 
> Well the TC list moderation policy isn't so much the issue here, as the
> practice of cross-posting between open- and closed-moderation lists.
> 
> Even when strict moderation isn't being applied, as has been the case for
> this thread, cross-posting still tends to cause divergence of threads due
> to moderator lag and individuals choosing not to cross-post their replies.
> 
> The os-dev subscriber list should be a strict super-set of the os-tc list,
> so anything posted just to the former will naturally be visible to the TC
> membership also.
> 
> Thanks,
> Eoghan
> 
I think you need to start a new topic with your thoughts on how the
email lists should be organized. This particular conversation doesn't
have much to do with the topic at hand anymore.

Thanks Eoghan,
Anita.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Eoghan Glynn


> >> So there are certain words that mean certain things, most don't, some do.
> >>
> >> If words that mean certain things are used then some folks start using
> >> the word and have expectations around the word and the OpenStack
> >> Technical Committee and other OpenStack programs find themselves on the
> >> hook for behaviours that they didn't agree to.
> >>
> >> Currently the word under discussion is "certified" and its derivatives:
> >> certification, certifying, and others with root word "certificate".
> >>
> >> This came to my attention at the summit with a cinder summit session
> >> with one of the certificate words in the title. I had thought my
> >> point had been made but it appears that there needs to be more
> >> discussion on this. So let's discuss.
> >>
> >> Let's start with the definition of certify:
> >> cer·ti·fy
> >> verb (used with object), cer·ti·fied, cer·ti·fy·ing.
> >> 1. to attest as certain; give reliable information of; confirm: He
> >> certified the truth of his claim.
> >> 2. to testify to or vouch for in writing: The medical examiner will
> >> certify his findings to the court.
> >> 3. to guarantee; endorse reliably: to certify a document with an
> >> official seal.
> >> 4. to guarantee (a check) by writing on its face that the account
> >> against which it is drawn has sufficient funds to pay it.
> >> 5. to award a certificate to (a person) attesting to the completion of a
> >> course of study or the passing of a qualifying examination.
> >> Source: http://dictionary.reference.com/browse/certify
> >>
> >> The issue I have with the word certify is that it requires someone or a
> >> group of someones to attest to something. The thing attested to is only
> >> as credible as the someone or the group of someones doing the attesting.
> >> We have no process, nor do I feel we want to have a process for
> >> evaluating the reliability of the someones or groups of someones doing
> >> the attesting.
> >>
> >> I think that having testing in place in line with other programs testing
> >> of patches (third party ci) in cinder should be sufficient to address
> >> the underlying concern, namely reliability of opensource hooks to
> >> proprietary code and/or hardware. I would like the use of the word
> >> "certificate" and all its roots to no longer be used in OpenStack
> >> programs with regard to testing. This won't happen until we get some
> >> discussion and agreement on this, which I would like to have.
> >>
> >> Thank you for your participation,
> >> Anita.
> > 
> > Hi Anita,
> > 
> > Just a note on cross-posting to both the os-dev and os-tc lists.
> > 
> > Anyone not on the TC who hits reply-all is likely to see their
> > post be rejected by the TC list moderator, but go through to the
> > more open dev list.
> > 
> > As a result, the thread diverges (as we saw with the recent election
> > stats/turnout thread).
> > 
> > Also, moderation rejects are an unpleasant user experience.
> > 
> > So if a post is intended to reach out for input from the wider dev
> > community, it's better to post *only* to the -dev list, or vice versa
> > if you want to interact with a narrower audience.
> My post was intended to include the tc list in the discussion
> 
> I have no say in what posts the tc email list moderator accepts or does
> not, or how those posts not accepted are informed of their status.

Well the TC list moderation policy isn't so much the issue here, as the
practice of cross-posting between open- and closed-moderation lists.

Even when strict moderation isn't being applied, as has been the case for
this thread, cross-posting still tends to cause divergence of threads due
to moderator lag and individuals choosing not to cross-post their replies.

The os-dev subscriber list should be a strict super-set of the os-tc list,
so anything posted just to the former will naturally be visible to the TC
membership also.

Thanks,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Alex Glikson
>> So maybe the problem isn't having the flavors so much, but in how the 
user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and then the 
system would find a "best match" based on criteria set by the cloud admin 
then would that be a more user friendly solution?

Interesting idea... Any thoughts on how this can be achieved?

Alex




From:   "Day, Phil" 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   06/06/2014 12:38 PM
Subject:Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
allocation ratio out of scheduler



 
From: Scott Devoid [mailto:dev...@anl.gov] 
Sent: 04 June 2014 17:36
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Proposal: Move CPU and memory 
allocation ratio out of scheduler
 
Not only live upgrades but also dynamic reconfiguration. 

Overcommitting affects the quality of service delivered to the cloud user. 
 In this situation in particular, as in many situations in general, I 
think we want to enable the service provider to offer multiple qualities 
of service.  That is, enable the cloud provider to offer a selectable 
level of overcommit.  A given instance would be placed in a pool that is 
dedicated to the relevant level of overcommit (or, possibly, a better pool 
if the selected one is currently full).  Ideally the pool sizes would be 
dynamic.  That's the dynamic reconfiguration I mentioned preparing for. 
 
+1 This is exactly the situation I'm in as an operator. You can do 
different levels of overcommit with host-aggregates and different flavors, 
but this has several drawbacks:
1.  The nature of this is slightly exposed to the end-user, through 
extra-specs and the fact that two flavors cannot have the same name. One 
scenario we have is that we want to be able to document our flavor 
names--what each name means, but we want to provide different QoS 
standards for different projects. Since flavor names must be unique, we 
have to create different flavors for different levels of service. 
Sometimes you do want to lie to your users!
[Day, Phil] I agree that there is a problem with having every new option 
we add in extra_specs leading to a new set of flavors. There are a 
number of changes up for review to expose more hypervisor capabilities via 
extra_specs that also have this potential problem. What I'd really like 
to be able to ask for as a user is something like "a medium instance with 
a side order of overcommit", rather than have to choose from a long list 
of variations. I did spend some time trying to think of a more elegant 
solution - but as the user wants to know what combinations are available 
it pretty much comes down to needing that full list of combinations 
somewhere. So maybe the problem isn't having the flavors so much, but 
in how the user currently has to specify an exact match from that list.
If the user could say "I want a flavor with these attributes" and then the 
system would find a "best match" based on criteria set by the cloud admin 
(for example I might or might not want to allow a request for an 
overcommitted instance to use my not-overcommitted flavor depending on the 
roles of the tenant) then would that be a more user friendly solution?
 
2.  If I have two pools of nova-compute HVs with different overcommit 
settings, I have to manage the pool sizes manually. Even if I use puppet 
to change the config and flip an instance into a different pool, that 
requires me to restart nova-compute. Not an ideal situation.
[Day, Phil] If the pools are aggregates, and the overcommit is defined by 
aggregate meta-data then I don't see why you need to restart 
nova-compute.
3.  If I want to do anything complicated, like 3 overcommit tiers with 
"good", "better", "best" performance and allow the scheduler to pick 
"better" for a "good" instance if the "good" pool is full, this is very 
hard and complicated to do with the current system.
[Day, Phil] Yep, a combination of filters and weighting functions would 
allow you to do this - it's not really tied to whether the overcommit is 
defined in the scheduler or the host though as far as I can see. 
 
I'm looking forward to seeing this in nova-specs!
~ Scott
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Jay Pipes

On 06/09/2014 02:57 PM, Devananda van der Veen wrote:

On Mon, Jun 9, 2014 at 10:49 AM, Jay Pipes  wrote:

On 06/09/2014 12:50 PM, Devananda van der Veen wrote:


There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
"with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
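
For readers who haven't bumped into it, with_lockmode('update') is just
SQLAlchemy's spelling of SELECT ... FOR UPDATE; a minimal sketch of the
pattern being described (the model, session setup and connection URL are made
up for illustration, not Nova's actual code):

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()


    class Instance(Base):
        __tablename__ = 'instances'
        id = Column(Integer, primary_key=True)
        # A non-unique secondary index: the case where InnoDB's index-record
        # and gap locks can block concurrent writers, even in READ COMMITTED.
        host = Column(String(255), index=True)
        state = Column(String(36))


    engine = create_engine('mysql://user:secret@localhost/example_db')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()

    # Emits SELECT ... FOR UPDATE; rows reached through the non-unique "host"
    # index stay locked until the surrounding transaction ends.
    rows = (session.query(Instance)
            .filter_by(host='compute-1')
            .with_lockmode('update')
            .all())
    for row in rows:
        row.state = 'migrating'
    session.commit()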



Hi Deva,

MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not
READ_COMMITTED... are you saying that somewhere in the Ironic codebase we
are setting the isolation mode manually to READ_COMMITTED for some reason?

Best,
-jay



Jay,

Not saying that at all. I was responding to Mike's suggested approach
for testing DB changes (which was actually off topic from my original
post), in which he suggested using READ_COMMITTED.


Apologies, thx for the clarification, Deva,

-jay


-Deva




On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka 
wrote:


Hi Mike,


However, when testing an application that uses a fixed set of tables,
as should be the case for the majority if not all Openstack apps, there’s no
reason that these tables need to be recreated for every test.



This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach with executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
were tests doing ROLLBACK on purpose. But you've updated the recipe
since then and this can probably be solved by using save points. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach a
try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia probably: [2] needs a spec with [1]
updated accordingly. Using this 'test in a transaction' approach
seems to be a way to go for running all db related tests except the
ones using DDL statements (as any DDL statement commits the current
transaction implicitly on MySQL and SQLite AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2]
https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer  wrote:



On Jun 6, 2014, at 8:12 PM, Devananda van der Veen

wrote:

I think some things are broken in the oslo-incubator db migration code.

Ironic moved to this when Juno opened and things seemed fine, until
recently
when Lucas tried to add a DB migration and noticed that it didn't run...
So
I looked into it a bit today. Below are my findings.

Firstly, I filed this bug and proposed a fix, because I think that tests
that don't run any code should not report that they passed -- they
should
report that they were skipped.
https://bugs.launchpad.net/oslo/+bug/1327397
"No notice given when db migrations are not run due to missing
engine"

Then, I edited the test_migrations.conf file appropriately for my local
mysql service, ran the tests again, and verified that migration tests
ran --
and they passed. Great!

Now, a little background... Ironic's TestMigrations class inherits from
oslo's BaseMigrationTestCase, then "opportunistically" checks each
back-end,
if it's available. This opportunistic checking was inherited from Nova
so
that tests could pass on developer workstations where not all backends
are
present (eg, I have mysql installed, but not postgres), and still
transparently run on all backends in the gate. I couldn't find such
opportunistic testing in the oslo db migration test code, unfortunately
-
but maybe it's well hidden.

Anyhow. When I stopped the local mysql service (leaving the
configuration
unchanged), I expected the tests to be skipped, but instead I got two
surprise failures:
- test_mysql_opportunistically() failed because setUp() raises an
exception
before the test code could call _have_mysql()
- test_mysql_connect_fail() actually failed! Again, because setUp()
raises
an exception before running the test itself

Unfortunately, there's one more problem... when I run the tests in
parallel,
they fail randomly because sometimes two test threads run different
migration te

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
On Mon, Jun 9, 2014 at 10:49 AM, Jay Pipes  wrote:
> On 06/09/2014 12:50 PM, Devananda van der Veen wrote:
>>
>> There may be some problems with MySQL when testing parallel writes in
>> different non-committing transactions, even in READ COMMITTED mode,
>> due to InnoDB locking, if the queries use non-unique secondary indexes
>> for UPDATE or SELECT..FOR UPDATE queries. This is done by the
>> "with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
>> in Nova. So I would not recommend this approach, even though, in
>> principle, I agree it would be a much more efficient way of testing
>> database reads/writes.
>>
>> More details here:
>> http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
>> http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html
>
>
> Hi Deva,
>
> MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not
> READ_COMMITTED... are you saying that somewhere in the Ironic codebase we
> are setting the isolation mode manually to READ_COMMITTED for some reason?
>
> Best,
> -jay
>

Jay,

Not saying that at all. I was responding to Mike's suggested approach
for testing DB changes (which was actually off topic from my original
post), in which he suggested using READ_COMMITTED.

-Deva

>
>> On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka 
>> wrote:
>>>
>>> Hi Mike,
>>>
>> However, when testing an application that uses a fixed set of tables,
>> as should be the case for the majority if not all Openstack apps, 
>> there’s no
>> reason that these tables need to be recreated for every test.
>>>
>>>
>>> This is a very good point. I tried to use the recipe from SQLAlchemy
>>> docs to run Nova DB API tests (yeah, I know, this might sound
>>> confusing, but these are actually methods that access the database in
>>> Nova) on production backends (MySQL and PostgreSQL). The abandoned
>>> patch is here [1]. Julia Varlamova has been working on rebasing this
>>> on master and should upload a new patch set soon.
>>>
>>> Overall, the approach with executing a test within a transaction and
>>> then emitting ROLLBACK worked quite well. The only problem I ran into
>>> were tests doing ROLLBACK on purpose. But you've updated the recipe
>>> since then and this can probably be solved by using save points. I
>>> used a separate DB per test-running process to prevent race
>>> conditions, but we should definitely give the READ COMMITTED approach a
>>> try. If it works, that will be awesome.
>>>
>>> With a few tweaks of PostgreSQL config I was able to run Nova DB API
>>> tests in 13-15 seconds, while SQLite in memory took about 7s.
>>>
>>> Action items for me and Julia probably: [2] needs a spec with [1]
>>> updated accordingly. Using this 'test in a transaction' approach
>>> seems to be a way to go for running all db related tests except the
>>> ones using DDL statements (as any DDL statement commits the current
>>> transaction implicitly on MySQL and SQLite AFAIK).
>>>
>>> Thanks,
>>> Roman
>>>
>>> [1] https://review.openstack.org/#/c/33236/
>>> [2]
>>> https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends
>>>
>>> On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer  wrote:


 On Jun 6, 2014, at 8:12 PM, Devananda van der Veen
 
 wrote:

 I think some things are broken in the oslo-incubator db migration code.

 Ironic moved to this when Juno opened and things seemed fine, until
 recently
 when Lucas tried to add a DB migration and noticed that it didn't run...
 So
 I looked into it a bit today. Below are my findings.

 Firstly, I filed this bug and proposed a fix, because I think that tests
 that don't run any code should not report that they passed -- they
 should
 report that they were skipped.
https://bugs.launchpad.net/oslo/+bug/1327397
"No notice given when db migrations are not run due to missing
 engine"

 Then, I edited the test_migrations.conf file appropriately for my local
 mysql service, ran the tests again, and verified that migration tests
 ran --
 and they passed. Great!

 Now, a little background... Ironic's TestMigrations class inherits from
 oslo's BaseMigrationTestCase, then "opportunistically" checks each
 back-end,
 if it's available. This opportunistic checking was inherited from Nova
 so
 that tests could pass on developer workstations where not all backends
 are
 present (eg, I have mysql installed, but not postgres), and still
 transparently run on all backends in the gate. I couldn't find such
 opportunistic testing in the oslo db migration test code, unfortunately
 -
 but maybe it's well hidden.

 Anyhow. When I stopped the local mysql service (leaving the
 configuration
 unchanged), I expected the tests to be skipped, but instead I got two
 surprise failures:
 - test_mysql_opportunistically() failed because setUp() raises an
 exception

[openstack-dev] [nova] New extra spec operator proposed

2014-06-09 Thread Maldonado, Facundo N
Hi folks,
I submitted a new blueprint proposing the addition of a new operator to the 
existing ones.

BP: 
https://blueprints.launchpad.net/nova/+spec/add-all-in-list-operator-to-extra-spec-ops
Spec review: https://review.openstack.org/#/c/98179/
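
For anyone who hasn't looked at that code: the existing extra-spec operators
are small predicate functions that compare a flavor's extra_specs requirement
string against a host capability value. The snippet below is only a rough
guess at what an "all values present in a list" style check could mean; the
function name and the parsing are hypothetical and not taken from the spec:

    def match_all_in(capability, requirement):
        """Hypothetical 'all-in' style check (illustration only).

        ``capability`` is what the host reports (e.g. a list of features);
        ``requirement`` is the remainder of the extra-spec value after the
        operator token has been stripped, e.g. "AES SSE4".
        """
        wanted = requirement.split()
        available = set(str(item) for item in capability)
        return all(item in available for item in wanted)


    print(match_all_in(['AES', 'SSE4', 'AVX'], 'AES SSE4'))   # True
    print(match_all_in(['AES'], 'AES SSE4'))                  # False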

What do you think?

Thanks,
Facundo.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-09 Thread Sean Dague
On 06/09/2014 01:38 PM, David Kranz wrote:
> On 06/02/2014 06:57 AM, Sean Dague wrote:
>> Towards the end of the summit there was a discussion about us using a
>> shared review dashboard to see if a common view by the team would help
>> accelerate people looking at certain things. I spent some time this
>> weekend working on a tool to make building custom dashboard urls much
>> easier.
>>
>> My current proposal is the following, and would like comments on it:
>> https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash
>>
>> All items in the dashboard are content that you've not voted on in the
>> current patch revision, that you don't own, and that have passing
>> Jenkins test results.
>>
>> 1. QA Specs - these need more eyes, so we highlight them at top of page
>> 2. Patches that are older than 5 days, with no code review
>> 3. Patches that you are listed as a reviewer on, but haven't voting on
>> current version
>> 4. Patches that already have a +2, so should be landable if you agree.
>> 5. Patches that have no negative code review feedback on them
>> 6. Patches older than 2 days, with no code review
> Thanks, Sean. This is working great for me, but I think there is another
> important item that is missing and hope it is possible to add, perhaps
> even among the most important items:
> 
> Patches that you gave a -1, but the response is a comment explaining why
> the -1 should be withdrawn rather than a new patch.

So how does one automatically detect those using the gerrit query language?

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday June 10th at 19:00 UTC

2014-06-09 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting on Tuesday June 10th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][Infra] Mid-Cycle Meet-up

2014-06-09 Thread Matthew Treinish
On Thu, May 29, 2014 at 12:07:07PM -0400, Matthew Treinish wrote:
> 
> Hi Everyone,
> 
> So we'd like to announce to everyone that we're going to be doing a combined
> Infra and QA program mid-cycle meet-up. It will be the week of July 14th in
> Darmstadt, Germany at Deutsche Telekom who has graciously offered to sponsor 
> the
> event. The plan is to use the week as both a time for face to face 
> collaboration
> for both programs respectively as well as having a couple days of 
> bootstrapping
> for new users/contributors. The intent was that this would be useful for 
> people
> who are interested in contributing to either Infra or QA, and those who are
> running third party CI systems.
> 
> The current break down for the week that we're looking at is:
> 
> July 14th: Infra
> July 15th: Infra
> July 16th: Bootstrapping for new users
> July 17th: More bootstrapping
> July 18th: QA
> 
> We still have to work out more details, and will follow up once we have them.
> But, we thought it would be better to announce the event earlier so people can
> start to plan travel if they need it.
> 
> 
> Thanks,
> 
> Matt Treinish
> Jim Blair


Just a quick follow-up, the agenda has changed slightly based on room
availability since I first sent out the announcement. You can find up-to-date
information on the meet-up wiki page:

https://wiki.openstack.org/wiki/Qa_Infra_Meetup_2014

Once we work out a detailed agenda of discussion topics/work items for the 3
discussion days I'll update the wiki page.

Also, if you're intending to attend please put your name on the wiki page's
registration section.

Thanks,

Matt Treinish


pgpXjzH_m1tHt.pgp
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:47 PM, Joe Cropper wrote:

There may also be specific software entitlement issues that make it
useful to deterministically know which host your VM will be placed on.
This can be quite common in large organizations that have certain
software that can be tied to certain hardware or hardware with certain #
of CPU capacity, etc.


Sure, agreed. However the "cloudy" way of doing things (as opposed to 
the enterprise IT/managed hosting way of doing things) is to rely on 
abstractions like host aggregates and not allow details of the physical 
host machine to leak out of the public cloud API.


Best,
-jay



On Mon, Jun 9, 2014 at 11:32 AM, Chris Friesen
mailto:chris.frie...@windriver.com>> wrote:

On 06/09/2014 07:59 AM, Jay Pipes wrote:

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and
testing
purposes.


Pray tell, what is that valid use case?


I find it useful for setting up specific testcases when trying to
validate things... put *this* instance on *this* host, put *those*
instances on *those* hosts, now pull the power plug on *this*
host...etc.

I wouldn't expect the typical openstack end-user to need it though.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:32 PM, Chris Friesen wrote:

On 06/09/2014 07:59 AM, Jay Pipes wrote:

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.


Pray tell, what is that valid use case?


I find it useful for setting up specific testcases when trying to
validate things... put *this* instance on *this* host, put *those*
instances on *those* hosts, now pull the power plug on *this* host...etc.


So, violating the main design tenet of cloud computing: thou shalt not 
care what physical machine your virtual machine lives on. :)



I wouldn't expect the typical openstack end-user to need it though.


Me either :)

I will point out, though, that it is indeed possible to achieve the same 
use case using host aggregates that would not break the main design 
tenet of cloud computing... just make two host aggregates, one for each 
compute node involved in your testing, and then simply supply scheduler 
hints that would only match one aggregate or the other.
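
For anyone wanting to try that, here is a rough python-novaclient sketch. It
pins flavors to aggregates via flavor extra specs matched against aggregate
metadata, which assumes AggregateInstanceExtraSpecsFilter is enabled in the
scheduler (plain scheduler hints would need a custom filter); the hosts, names
and credentials are placeholders:

    from novaclient.v1_1 import client

    nova = client.Client('myuser', 'mypassword', 'myproject',
                         'http://keystone.example.com:5000/v2.0')

    # One single-host aggregate per compute node involved in the test.
    for host, pool in (('compute-1', 'pool-a'), ('compute-2', 'pool-b')):
        agg = nova.aggregates.create('test-%s' % pool, None)
        nova.aggregates.add_host(agg, host)
        nova.aggregates.set_metadata(agg, {'testpool': pool})

    # Flavors whose extra specs only match one aggregate's metadata.
    for pool in ('pool-a', 'pool-b'):
        flavor = nova.flavors.create('m1.small.%s' % pool, 2048, 1, 20)
        flavor.set_keys({'aggregate_instance_extra_specs:testpool': pool})

    # Booting with the pool-a flavor now lands the instance on compute-1 only,
    # without the API request ever naming the host directly.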


Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary "extra specs" for compute nodes?

2014-06-09 Thread Jay Pipes

On 06/09/2014 01:38 PM, Joe Cropper wrote:

On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil  wrote:

Hi Joe,



Can you give some examples of what that data would be used for ?


Sure!  For example, in the PowerKVM world, hosts can be dynamically
configured to run in split-core processor mode.  This setting can be
dynamically changed and it'd be nice to allow the driver to track this
somehow -- and it probably doesn't warrant its own explicit field in
compute_node.  Likewise, PowerKVM also has a concept of the maximum
SMT level in which its guests can run (which can also vary dynamically
based on the split-core setting) and it would also be nice to tie such
settings to the compute node.


That information is typically stored in the compute_node.cpu_info field.


Overall, this would give folks writing compute drivers the ability to
attach the "extra spec" style data to a compute node for a variety of
purposes -- two simple examples provided above, but there are many
more.  :-)


If it's something that the driver can discover on its own and that the 
driver can/should use in determining the capabilities that the 
hypervisor offers, then at this point, I believe compute_node.cpu_info 
is the place to put that information. It's probably worth renaming the 
cpu_info field to just "capabilities" instead, to be more generic and 
indicate that it's a place the driver stores discoverable capability 
information about the node...
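
To make that concrete, the driver could serialize whatever it discovers into
that field as JSON; a purely illustrative sketch (the split-core and SMT field
names are invented, not an existing PowerKVM or libvirt schema):

    import json

    # Hypothetical capability data a virt driver might discover on the host
    # and report for storage in compute_node.cpu_info (or a more generically
    # named "capabilities" field, as suggested above).
    capabilities = {
        'arch': 'ppc64',
        'split_core_mode': True,   # invented key: current split-core setting
        'max_guest_smt': 4,        # invented key: max SMT level for guests
        'features': ['vsx', 'dfp'],
    }

    cpu_info = json.dumps(capabilities)
    print(cpu_info)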


Now, for *user-defined* taxonomies, I'm a big fan of simple string 
tagging, as is proposed for the server instance model in this spec:


https://review.openstack.org/#/c/91444/

Best,
jay





It sounds on the face of it that what you’re looking for is pretty similar
to what Extensible Resource Tracker sets out to do
(https://review.openstack.org/#/c/86050
https://review.openstack.org/#/c/71557)


Thanks for pointing this out.  I actually ran across these while I was
searching the code to see what might already exist in this space.
Actually, the compute node 'stats' was always a first guess, but these
are clearly heavily reserved for the resource tracker and wind up
getting purged/deleted over time since the 'extra specs' I reference
above aren't necessarily tied to the spawning/deleting of instances.
In other words, they're not really consumable resources, per-se.
Unless I'm overlooking a way (perhaps I am) to use this
extensible-resource-tracker blueprint for arbitrary key-value pairs
**not** related to instances, I think we need something additional?

I'd happily create a new blueprint for this as well.





Phil



From: Joe Cropper [mailto:cropper@gmail.com]
Sent: 07 June 2014 07:30
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] Arbitrary "extra specs" for compute nodes?



Hi Folks,

I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
"extra_specs" concept?

It appears there are entries for things like pci_stats, stats and recently
added extra_resources -- but these all tend to have more specific usages vs.
just arbitrary data that may want to be maintained about the compute node
over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something that folks would welcome a Juno blueprint for--i.e., adding
extra_specs style column with a JSON-formatted string that could be loaded
as a dict of key-value pairs?

Thoughts?

Thanks,

Joe


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV] Re: NFV in OpenStack use cases and context

2014-06-09 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: "ITAI MENDELSOHN (ITAI)" , "OpenStack 
> Development Mailing List (not for usage
> 
> Just adding openstack-dev to the CC for now :).
> 
> - Original Message -
> > From: "ITAI MENDELSOHN (ITAI)" 
> > Subject: Re: NFV in OpenStack use cases and context
> > 
> > Can we look at them one by one?
> > 
> > Use case 1 - It's pure IaaS
> > Use case 2 - Virtual network function as a service. It's actually about
> > exposing services to end customers (enterprises) by the service provider.
> > Use case 3 - VNPaaS - is similar to #2 but at the service level. At larger
> > scale and not at the "app" level only.
> > Use case 4 - VNF forwarding graphs. It's actually about dynamic
> > connectivity between apps.
> > Use case 5 - vEPC and vIMS - Those are very specific (good) examples of SP
> > services to be deployed.
> > Use case 6 - virtual mobile base station. Another very specific example,
> > with different characteristics than the other two above.
> > Use case 7 - Home virtualisation.
> > Use case 8 - Virtual CDN
> > 
> > As I see it those have totally different relevancy to OpenStack.
> > Assuming we don't want to boil the ocean here...
> > 
> > 1-3 seems to me less relevant here.
> > 4 seems to be a Neutron area.
> > 5-8 seems to be usefully to understand the needs of the NFV apps. The use
> > case can help to map those needs.
> > 
> > For 4 I guess the main part is about chaining and Neutron between DCs.
> > Some may call it SDN in WAN...
> > 
> > For 5-8 at the end an option is to map all those into:
> > -performance (net BW, storage BW mainly). That can be mapped to SR-IOV,
> > NUMA. Etc'
> > -determinism. Shall we especially minimise "noisy" neighbours. Not sure
> > how NFV is special here, but for sure it's a major concern for lot of SPs.
> > That can be mapped to huge pages, cache QOS, etc'.
> > -overcoming of short term hurdles (just because of apps migrations
> > issues). Small example is the need to define the tick policy of KVM just
> > because that's what the app needs. Again, not sure how NFV special it is,
> > and again a major concern of mainly application owners in the NFV domain.
> > 
> > Make sense?

Hi Itai,

This makes sense to me. I think what we need to expand upon, with the ETSI NFV 
documents as a reference, is a two to three paragraph explanation of each use 
case at a more basic level - ideally on the Wiki page. It seems that 
use case 5 might make a particularly good initial target to work on fleshing 
out as an example? We could then look at linking the use case to concrete 
requirements based on this, I suspect we might want to break them down into:

a) The bare minimum requirements for OpenStack to support the use case at all. 
That is, requirements without which the VNF simply cannot function.

b) The requirements that are not mandatory but would be beneficial for 
OpenStack to support the use case. In particularly that might be requirements 
that would improve VNF performance or reliability by some margin (possibly 
significantly) but which it can function without if absolutely required.

Thoughts?

Steve



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Jay Pipes

On 06/09/2014 12:50 PM, Devananda van der Veen wrote:

There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
"with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html


Hi Deva,

MySQL/InnoDB's default isolation mode is REPEATABLE_READ, not 
READ_COMMITTED... are you saying that somewhere in the Ironic codebase 
we are setting the isolation mode manually to READ_COMMITTED for some 
reason?


Best,
-jay


On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka  wrote:

Hi Mike,


However, when testing an application that uses a fixed set of tables, as should 
be the case for the majority if not all Openstack apps, there’s no reason that 
these tables need to be recreated for every test.


This is a very good point. I tried to use the recipe from SQLAlchemy
docs to run Nova DB API tests (yeah, I know, this might sound
confusing, but these are actually methods that access the database in
Nova) on production backends (MySQL and PostgreSQL). The abandoned
patch is here [1]. Julia Varlamova has been working on rebasing this
on master and should upload a new patch set soon.

Overall, the approach with executing a test within a transaction and
then emitting ROLLBACK worked quite well. The only problem I ran into
were tests doing ROLLBACK on purpose. But you've updated the recipe
since then and this can probably be solved by using save points. I
used a separate DB per test-running process to prevent race
conditions, but we should definitely give the READ COMMITTED approach a
try. If it works, that will be awesome.

With a few tweaks of PostgreSQL config I was able to run Nova DB API
tests in 13-15 seconds, while SQLite in memory took about 7s.

Action items for me and Julia probably: [2] needs a spec with [1]
updated accordingly. Using this 'test in a transaction' approach
seems to be a way to go for running all db related tests except the
ones using DDL statements (as any DDL statement commits the current
transaction implicitly on MySQL and SQLite AFAIK).

Thanks,
Roman

[1] https://review.openstack.org/#/c/33236/
[2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends

On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer  wrote:


On Jun 6, 2014, at 8:12 PM, Devananda van der Veen 
wrote:

I think some things are broken in the oslo-incubator db migration code.

Ironic moved to this when Juno opened and things seemed fine, until recently
when Lucas tried to add a DB migration and noticed that it didn't run... So
I looked into it a bit today. Below are my findings.

Firstly, I filed this bug and proposed a fix, because I think that tests
that don't run any code should not report that they passed -- they should
report that they were skipped.
   https://bugs.launchpad.net/oslo/+bug/1327397
   "No notice given when db migrations are not run due to missing engine"

Then, I edited the test_migrations.conf file appropriately for my local
mysql service, ran the tests again, and verified that migration tests ran --
and they passed. Great!

Now, a little background... Ironic's TestMigrations class inherits from
oslo's BaseMigrationTestCase, then "opportunistically" checks each back-end,
if it's available. This opportunistic checking was inherited from Nova so
that tests could pass on developer workstations where not all backends are
present (eg, I have mysql installed, but not postgres), and still
transparently run on all backends in the gate. I couldn't find such
opportunistic testing in the oslo db migration test code, unfortunately -
but maybe it's well hidden.

Anyhow. When I stopped the local mysql service (leaving the configuration
unchanged), I expected the tests to be skipped, but instead I got two
surprise failures:
- test_mysql_opportunistically() failed because setUp() raises an exception
before the test code could call _have_mysql()
- test_mysql_connect_fail() actually failed! Again, because setUp() raises
an exception before running the test itself

Unfortunately, there's one more problem... when I run the tests in parallel,
they fail randomly because sometimes two test threads run different
migration tests, and the setUp() for one thread (remember, it calls
_reset_databases) blows up the other test.

Out of 10 runs, it failed three times, each with different errors:
   NoSuchTableError: `chassis`
   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
database exists
   ProgrammingError: (ProgrammingError) (1146, "Table
'test_migrations.alembic_ver

Re: [openstack-dev] [qa] shared review dashboard proposal

2014-06-09 Thread David Kranz

On 06/02/2014 06:57 AM, Sean Dague wrote:

Towards the end of the summit there was a discussion about us using a
shared review dashboard to see if a common view by the team would help
accelerate people looking at certain things. I spent some time this
weekend working on a tool to make building custom dashboard urls much
easier.

My current proposal is the following, and would like comments on it:
https://github.com/sdague/gerrit-dash-creator/blob/master/dashboards/qa-program.dash

All items in the dashboard are content that you've not voted on in the
current patch revision, that you don't own, and that have passing
Jenkins test results.

1. QA Specs - these need more eyes, so we highlight them at top of page
2. Patches that are older than 5 days, with no code review
3. Patches that you are listed as a reviewer on, but haven't voted on the
current version
4. Patches that already have a +2, so should be landable if you agree.
5. Patches that have no negative code review feedback on them
6. Patches older than 2 days, with no code review

Thanks, Sean. This is working great for me, but I think there is another 
important item that is missing, and I hope it is possible to add it, perhaps 
even among the most important items:


Patches that you gave a -1, but the response is a comment explaining why 
the -1 should be withdrawn rather than a new patch.


 -David


These are definitely a judgement call on what people should be looking
at, but this seems a pretty reasonable triaging list. I'm happy to have
a discussion on changes to this list.

The url for this is -  http://goo.gl/g4aMjM

(the long url is very long:
https://review.openstack.org/#/dashboard/?foreach=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade+OR+project%3Aopenstack%2Fqa-specs%29+status%3Aopen+NOT+owner%3Aself+NOT+label%3AWorkflow%3C%3D-1+label%3AVerified%3E%3D1%2Cjenkins+NOT+label%3ACode-Review%3C%3D-1%2Cself+NOT+label%3ACode-Review%3E%3D1%2Cself&title=QA+Review+Inbox&QA+Specs=project%3Aopenstack%2Fqa-specs&Needs+Feedback+%28Changes+older+than+5+days+that+have+not+been+reviewed+by+anyone%29=NOT+label%3ACode-Review%3C%3D2+age%3A5d&Your+are+a+reviewer%2C+but+haven%27t+voted+in+the+current+revision=reviewer%3Aself&Needs+final+%2B2=%28project%3Aopenstack%2Ftempest+OR+project%3Aopenstack-dev%2Fgrenade%29+label%3ACode-Review%3E%3D2+limit%3A50&Passed+Jenkins%2C+No+Negative+Feedback=NOT+label%3ACode-Review%3E%3D2+NOT+label%3ACode-Review%3C%3D-1+limit%3A50&Wayward+Changes+%28Changes+with+no+code+review+in+the+last+2days%29=NOT+label%3ACode-Review%3C%3D2+age%3A2d

The url can be regenerated easily using the gerrit-dash-creator.

-Sean



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary "extra specs" for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 5:17 AM, Day, Phil  wrote:
> Hi Joe,
>
>
>
> Can you give some examples of what that data would be used for ?

Sure!  For example, in the PowerKVM world, hosts can be dynamically
configured to run in split-core processor mode.  This setting can be
dynamically changed and it'd be nice to allow the driver to track this
somehow -- and it probably doesn't warrant its own explicit field in
compute_node.  Likewise, PowerKVM also has a concept of the maximum
SMT level in which its guests can run (which can also vary dynamically
based on the split-core setting) and it would also be nice to tie such
settings to the compute node.

Overall, this would give folks writing compute drivers the ability to
attach the "extra spec" style data to a compute node for a variety of
purposes -- two simple examples provided above, but there are many
more.  :-)

>
>
>
> It sounds on the face of it that what you’re looking for is pretty similar
> to what Extensible Resource Tracker sets out to do
> (https://review.openstack.org/#/c/86050
> https://review.openstack.org/#/c/71557)

Thanks for pointing this out.  I actually ran across these while I was
searching the code to see what might already exist in this space.
Actually, the compute node 'stats' was always a first guess, but these
are clearly heavily reserved for the resource tracker and wind up
getting purged/deleted over time since the 'extra specs' I reference
above aren't necessarily tied to the spawning/deleting of instances.
In other words, they're not really consumable resources, per-se.
Unless I'm overlooking a way (perhaps I am) to use this
extensible-resource-tracker blueprint for arbitrary key-value pairs
**not** related to instances, I think we need something additional?

I'd happily create a new blueprint for this as well.
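
For what it's worth, the storage side of the idea is small. A sketch of the
kind of column I mean (illustrative SQLAlchemy only, not actual Nova model
code; table and field names are made up for the example):

# Illustrative "extra_specs"-style JSON text column for a compute node;
# names are invented for the example, not taken from Nova.
import json

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class ComputeNodeExtraSpecs(Base):
    __tablename__ = "compute_node_extra_specs_demo"
    id = sa.Column(sa.Integer, primary_key=True)
    compute_node_id = sa.Column(sa.Integer, nullable=False)
    extra_specs_json = sa.Column(sa.Text, default="{}")

    @property
    def extra_specs(self):
        return json.loads(self.extra_specs_json or "{}")

    @extra_specs.setter
    def extra_specs(self, values):
        self.extra_specs_json = json.dumps(values)


row = ComputeNodeExtraSpecs(compute_node_id=42)
row.extra_specs = {"split_core": True, "max_smt_level": 4}
print(row.extra_specs["max_smt_level"])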

>
>
>
> Phil
>
>
>
> From: Joe Cropper [mailto:cropper@gmail.com]
> Sent: 07 June 2014 07:30
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] Arbitrary "extra specs" for compute nodes?
>
>
>
> Hi Folks,
>
> I was wondering if there was any such mechanism in the compute node
> structure to hold arbitrary key-value pairs, similar to flavors'
> "extra_specs" concept?
>
> It appears there are entries for things like pci_stats, stats and recently
> added extra_resources -- but these all tend to have more specific usages vs.
> just arbitrary data that may want to be maintained about the compute node
> over the course of its lifetime.
>
> Unless I'm overlooking an existing construct for this, would this be
> something that folks would welcome a Juno blueprint for--i.e., adding
> extra_specs style column with a JSON-formatted string that could be loaded
> as a dict of key-value pairs?
>
> Thoughts?
>
> Thanks,
>
> Joe
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Design for Alarming and Alarm Management

2014-06-09 Thread Liz Blanchard
Hi all,

Thanks again for the great comments on the initial cut of wireframes. I’ve 
updated them a fair amount based on feedback in this e-mail thread along with 
the feedback written up here:
https://etherpad.openstack.org/p/alarm-management-page-design-discussion

Here is a link to the new version:
http://people.redhat.com/~lsurette/OpenStack/Alarm%20Management%20-%202014-06-05.pdf

And a quick explanation of the updates that I made from the last version:

1) Removed severity.

2) Added Status column. I also added details around the fact that users can 
enable/disable alerts.

3) Updated the Alarm creation workflow to include choosing the project and user 
(optionally for filtering the resource list), choosing the resource, and letting 
the user choose the amount of time to monitor for alarming.
 -Perhaps we could be even more sophisticated about how we let users filter 
down to find the right resources that they want to monitor for alarms?

4) As for notifying users…I’ve updated the “Alarms” section to be “Alarms 
History”. The point here is to show any Alarms that have occurred to notify the 
user. Other notification ideas could be to allow users to get notified of 
alerts via e-mail (perhaps a user setting?). I’ve added a wireframe for this 
update in User Settings. Then the Alarms Management section would just be where 
the user creates, deletes, enables, and disables alarms. Do you still think we 
don’t need the “alarms” tab? Perhaps this just becomes iteration 2 and is left 
out for now as you mention in your etherpad.

5) Question about combined alarms…currently I’ve designed it so that a user 
could create multiple levels in the “Alarm When…” section. They could combine 
these with AND/ORs. Is this going far enough? Or do we actually need to allow 
users to combine Alarms that might watch different resources?

6) I updated the Actions column to have the “More” drop down which is 
consistent with other tables in Horizon.

7) Added in a section in the “Add Alarm” workflow for “Actions after Alarm”. 
I’m thinking we could have some sort of "if state is X, do Y" type selection, 
but I’m looking to understand more details about how the backend works for this 
feature. Eoghan gave examples of logging and potentially scaling out via Heat. 
Would simple drop downs support these events?

8) I can definitely add in a “scheduling” feature with respect to Alarms. I 
haven’t added it in yet, but I could see this being very useful in future 
revisions of this feature.

9) Another thought is that we could add in some padding for outlier data as 
Eoghan mentioned. Perhaps a setting for “This has happened 3 times over the 
last minute, so now send an alarm.”?  

A new round of feedback is of course welcome :)

Best,
Liz

On Jun 4, 2014, at 1:27 PM, Liz Blanchard  wrote:

> Thanks for the excellent feedback on these, guys! I’ll be working on making 
> updates over the next week and will send a fresh link out when done. Anyone 
> else with feedback, please feel free to fire away.
> 
> Best,
> Liz
> On Jun 4, 2014, at 12:33 PM, Eoghan Glynn  wrote:
> 
>> 
>> Hi Liz,
>> 
>> Two further thoughts occurred to me after hitting send on
>> my previous mail.
>> 
>> First, there is the concept of alarm dimensioning; see my RDO Ceilometer
>> getting started guide[1] for an explanation of that notion.
>> 
>> "A key associated concept is the notion of dimensioning which defines the 
>> set of matching meters that feed into an alarm evaluation. Recall that 
>> meters are per-resource-instance, so in the simplest case an alarm might be 
>> defined over a particular meter applied to all resources visible to a 
>> particular user. More useful however would the option to explicitly select 
>> which specific resources we're interested in alarming on. On one extreme we 
>> would have narrowly dimensioned alarms where this selection would have only 
>> a single target (identified by resource ID). On the other extreme, we'd have 
>> widely dimensioned alarms where this selection identifies many resources 
>> over which the statistic is aggregated, for example all instances booted 
>> from a particular image or all instances with matching user metadata (the 
>> latter is how Heat identifies autoscaling groups)."
>> 
>> We'd have to think about how that concept is captured in the
>> UX for alarm creation/update.
>> 
>> Second, there are a couple of more advanced alarming features 
>> that were added in Icehouse:
>> 
>> 1. The ability to constrain alarms on time ranges, such that they
>>  would only fire say during 9-to-5 on a weekday. This would
>>  allow for example different autoscaling policies to be applied
>>  out-of-hours, when resource usage is likely to be cheaper and
>>  manual remediation less straight-forward.
>> 
>> 2. The ability to exclude low-quality datapoints with anomalously
>>  low sample counts. This allows the leading edge of the trend of
>>  widely dimensioned alarms not to be skewed by eagerly-reporting
>>  outliers.
>> 
>> Perhaps n

Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Jain, Vivek
+1 for the idea of making certificate immutable.
However, if Barbican allows updating certs/containers then versioning is a
must.

Thanks,
Vivek


On 6/8/14, 11:48 PM, "Samuel Bercovici"  wrote:

>Hi,
>
>I think that option 2 should be preferred at this stage.
>I also think that certificate should be immutable, if you want a new one,
>create a new one and update the listener to use it.
>This removes any chance of mistakes, need for versioning etc.
>
>-Sam.
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 10:16 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>Hey everyone,
>
>Per our IRC discussion yesterday I'd like to continue the discussion on
>how Barbican and Neutron LBaaS will interact. There are currently two
>ideas in play and both will work. If you have another idea please free to
>add it so that we may evaluate all the options relative to each other.
>Here are the two current ideas:
>
>1. Create an eventing system for Barbican that Neutron LBaaS (and other
>services) consumes to identify when to update/delete updated secrets from
>Barbican. For those that aren't up to date with the Neutron LBaaS API
>Revision, the project/tenant/user provides a secret (container?) id when
>enabling SSL/TLS functionality.
>
>* Example: If a user makes a change to a secret/container in Barbican
>then Neutron LBaaS will see an event and take the appropriate action.
>
>PROS:
> - Barbican is going to create an eventing system regardless so it will
>be supported.
> - Decisions are made on behalf of the user which lessens the amount of
>calls the user has to make.
>
>CONS:
> - An eventing framework can become complex especially since we need to
>ensure delivery of an event.
> - Implementing an eventing system will take more time than option #2… I
>think.
>
>2. Push orchestration decisions to API users. This idea comes with two
>assumptions. The first assumption is that most providers' customers use
>the cloud via a GUI, which in turn can handle any orchestration decisions
>that need to be made. The second assumption is that power API users are
>savvy and can handle their decisions as well. Using this method requires
>services, such as LBaaS, to "register" in the form of metadata to a
>barbican container.
>
>* Example: If a user makes a change to a secret the GUI can see which
>services are registered and opt to warn the user of consequences. Power
>users can look at the registered services and make decisions how they see
>fit.
>
>PROS:
> - Very simple to implement. The only code needed to make this a reality
>is at the control plane (API) level.
> - This option is more loosely coupled than option #1.
>
>CONS:
> - Potential for services to not register/unregister. What happens in
>this case?
> - Pushes complexity of decision making on to GUI engineers and power API
>users.
>
>
>I would like to get a consensus on which option to move forward with ASAP
>since the hackathon is coming up and delivering Barbican to Neutron LBaaS
>integration is essential to exposing SSL/TLS functionality, which almost
>everyone has stated is a #1/#2 priority.
>
>I'll start the decision making process by advocating for option #2. My
>reason for choosing option #2 has to deal mostly with the simplicity of
>implementing such a mechanism. Simplicity also means we can implement the
>necessary code and get it approved much faster which seems to be a
>concern for everyone. What option does everyone else want to move forward
>with?
>
>
>
>Cheers,
>--Jorge
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Arbitrary "extra specs" for compute nodes?

2014-06-09 Thread Joe Cropper
On Mon, Jun 9, 2014 at 10:07 AM, Chris Friesen
 wrote:
> On 06/07/2014 12:30 AM, Joe Cropper wrote:
>>
>> Hi Folks,
>>
>> I was wondering if there was any such mechanism in the compute node
>> structure to hold arbitrary key-value pairs, similar to flavors'
>> "extra_specs" concept?
>>
>> It appears there are entries for things like pci_stats, stats and
>> recently added extra_resources -- but these all tend to have more
>> specific usages vs. just arbitrary data that may want to be maintained
>> about the compute node over the course of its lifetime.
>>
>> Unless I'm overlooking an existing construct for this, would this be
>> something that folks would welcome a Juno blueprint for--i.e., adding
>> extra_specs style column with a JSON-formatted string that could be
>> loaded as a dict of key-value pairs?
>
>
> If nothing else, you could put the compute node in a host aggregate and
> assign metadata to it.

Yeah, I recognize this could be done, but I think that would be using
the host aggregate metadata a little too loosely since the metadata
I'm after is really tied explicitly to the compute node.  This would
present too many challenges when someone wants to use host
aggregates alongside the compute node-specific metadata.
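
For completeness, the aggregate-based workaround looks roughly like this
with python-novaclient (credentials, names and the exact client signatures
are from memory, so treat them as assumptions to verify):

# Rough sketch of the host-aggregate workaround; credentials, names and the
# exact novaclient signatures are assumptions, not verified code.
from novaclient.v1_1 import client

nova = client.Client("admin", "password", "admin_tenant",
                     "http://controller:5000/v2.0")

agg = nova.aggregates.create("powerkvm-split-core", None)  # no availability zone
nova.aggregates.add_host(agg, "compute-node-01")

# The arbitrary key/value pairs end up on the aggregate, not on the
# compute_nodes record itself, which is exactly the mismatch described above.
nova.aggregates.set_metadata(agg, {"split_core": "true", "max_smt_level": "4"})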

>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-09 Thread Joe Gordon
On Jun 9, 2014 4:12 AM, "Jesse Pretorius"  wrote:
>
> Hi everyone,
>
> We have a need to be able to dedicate a specific host aggregate to a list
of tenants/projects. If the aggregate is marked as such, the aggregate may
only be used by that specified list of tenants and those tenants may only
be scheduled to that aggregate.
>
> The AggregateMultiTenancyIsolation filter almost does what we need - it
pushes all new instances created by a specified tenant to the designated
aggregate. However, it also seems to still see that aggregate as available
for other tenants.
>
> The description in the documentation [1] states: "If a host is in an
aggregate that has the metadata key filter_tenant_id it only creates
instances from that tenant (or list of tenants)."
>
> This would seem to us either as a code bug, or a documentation bug?
>
> If the filter is working as intended, then I'd like to propose working on
a patch to the filter which has an additional metadata field (something
like 'filter_tenant_exclusive') which, when 'true', will treat the
filter_tenant_id list as the only projects/tenants which may be
scheduled onto the host aggregate, and that aggregate as the only place
onto which those projects/tenants may be scheduled.
>
> Note that there has been some similar work done with [2] and [3]. [2]
actually works as we expect, but as is noted in the gerrit comments it
seems rather wasteful to add a new filter when we could use the existing
filter as a base. [3] is a much larger framework to facilitate end-users
being able to request a whole host allocation - while this could be a nice
addition, it's overkill for what we're looking for. We're happy to
facilitate this with a simple admin-only allocation.
>
> So - should I work on a nova-specs proposal for a change, or should I
just log a bug against either nova or docs? :) Guidance would be
appreciated.

This sounds like a very reasonable idea, and we already have precedent for
doing things like this.

As for bug vs. blueprint, it's more of a new feature, and something good to
document, so I'd say this should be a very small blueprint that is very
restricted in scope.
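
To make the proposed semantics concrete, here is a small self-contained
sketch of the pass/fail decision (plain Python for illustration, not actual
nova filter code; 'filter_tenant_exclusive' is just the key proposed above):

# Illustration of the proposed aggregate tenant-isolation semantics; this is
# not the real AggregateMultiTenancyIsolation filter, only the decision logic
# for the host side (the "tenant may only land on its aggregate" half is not
# shown).

def host_passes(aggregate_metadata_list, tenant_id):
    """aggregate_metadata_list: metadata dicts of the aggregates the host is in."""
    host_is_reserved = False
    for metadata in aggregate_metadata_list:
        tenants = metadata.get("filter_tenant_id", "").split(",")
        exclusive = metadata.get("filter_tenant_exclusive", "").lower() == "true"
        if tenant_id in tenants:
            # A listed tenant may always land here.
            return True
        if metadata.get("filter_tenant_id") and exclusive:
            # Host is dedicated to someone else and marked exclusive.
            host_is_reserved = True
    return not host_is_reserved


# Example: a host dedicated exclusively to tenant "abc".
meta = [{"filter_tenant_id": "abc", "filter_tenant_exclusive": "true"}]
assert host_passes(meta, "abc")
assert not host_passes(meta, "xyz")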

>
> [1]
http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
> [2]
https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
> [3] https://blueprints.launchpad.net/nova/+spec/whole-host-allocation
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Third-Party CI Issue: direct access to review.openstack.org port 29418 required

2014-06-09 Thread Asselin, Ramy
All,

I've been working on setting up our Cinder 3rd party CI setup.
I ran into an issue where Zuul requires direct access to review.openstack.org 
port 29418, which is currently blocked in my environment. It should be 
unblocked around the end of June.

Since this will likely affect other vendors, I encourage you to take a few 
minutes and check if this affects you in order to allow sufficient time to 
resolve.

Please follow the instructions in section "Reading the Event Stream" here: [1]
Make sure you can get the event stream ~without~ any tunnels or proxies, etc. 
such as corkscrew [2].
(Double-check that any such configurations are commented out in: ~/.ssh/config 
and /etc/ssh/ssh_config)
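
For example, a quick way to verify direct connectivity from the CI host
(assuming your Gerrit username and SSH key are already set up) is a tiny
script like the one below; it simply shells out to ssh and streams events
until you interrupt it:

# Quick check that review.openstack.org:29418 is reachable directly.
# Replace GERRIT_USER with your Gerrit username; press Ctrl-C once events
# start flowing.
import subprocess

GERRIT_USER = "your-gerrit-username"  # placeholder

subprocess.call([
    "ssh", "-p", "29418",
    "%s@review.openstack.org" % GERRIT_USER,
    "gerrit", "stream-events",
])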

Ramy (irc: asselin)

[1] http://ci.openstack.org/third_party.html
[2] http://en.wikipedia.org/wiki/Corkscrew_(program)




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Carlos Garza
I understand this concern and was advocating that a configuration option be 
available to disable or enable auto-updating of SSL certificates. But since 
everyone is in favor of storing metadata on the barbican container directly, I 
guess this is a moot point now.

On Jun 6, 2014, at 5:52 PM, "Eichberger, German"  
wrote:

> Jorge + John,
> 
> I am most concerned with a user changing his secret in barbican and then the 
> LB trying to update and causing downtime. Some users like to control when the 
> downtime occurs.
> 
> For #1 it was suggested that once the event is delivered it would be up to a 
> user to enable an "auto-update flag".
> 
> In the case of #2 I am a bit worried about error cases: e.g. uploading the 
> certificates succeeds but registering the loadbalancer(s) fails. So using the 
> barbican system for those warnings might not be as foolproof as we are hoping. 
> 
> One thing I like about #2 over #1 is that it pushes a lot of the information 
> to Barbican. I think a user would expect when he uploads a new certificate to 
> Barbican that the system warns him right away about load balancers using the 
> old cert. With #1 he might get an e-mail from LBaaS telling him things 
> changed (and we helpfully updated all affected load balancers) -- which isn't 
> as immediate as #2. 
> 
> If we implement an "auto-update flag" for #1 we can have both. Users who 
> like #2 just hit the flag. Then the discussion changes to what we should 
> implement first and I agree with Jorge + John that this should likely be #2.
> 
> German
> 
> -Original Message-
> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
> Sent: Friday, June 06, 2014 3:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
> Integration Ideas
> 
> Hey John,
> 
> Correct, I was envisioning that the Barbican request would not be affected, 
> but rather, the GUI operator or API user could use the registration 
> information to do so, should they want to.
> 
> Cheers,
> --Jorge
> 
> 
> 
> 
> On 6/6/14 4:53 PM, "John Wood"  wrote:
> 
>> Hello Jorge,
>> 
>> Just noting that for option #2, it seems to me that the registration 
>> feature in Barbican would not be required for the first version of this 
>> integration effort, but we should create a blueprint for it nonetheless.
>> 
>> As for your question about services not registering/unregistering, I 
>> don't see an issue as long as the presence or absence of registered 
>> services on a Container/Secret does not **block** actions from 
>> happening, but rather is information that can be used to warn clients 
>> through their processes. For example, Barbican would still delete a 
>> Container/Secret even if it had registered services.
>> 
>> Does that all make sense though?
>> 
>> Thanks,
>> John
>> 
>> 
>> From: Youcef Laribi [youcef.lar...@citrix.com]
>> Sent: Friday, June 06, 2014 2:47 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
>> Integration Ideas
>> 
>> +1 for option 2.
>> 
>> In addition as an additional safeguard, the LBaaS service could check 
>> with Barbican when failing to use an existing secret to see if the 
>> secret has changed (lazy detection).
>> 
>> Youcef
>> 
>> -Original Message-
>> From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>> Sent: Friday, June 06, 2014 12:16 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
>> Integration Ideas
>> 
>> Hey everyone,
>> 
>> Per our IRC discussion yesterday I'd like to continue the discussion on 
>> how Barbican and Neutron LBaaS will interact. There are currently two 
>> ideas in play and both will work. If you have another idea please free 
>> to add it so that we may evaluate all the options relative to each other.
>> Here are the two current ideas:
>> 
>> 1. Create an eventing system for Barbican that Neutron LBaaS (and other
>> services) consumes to identify when to update/delete updated secrets 
>> from Barbican. For those that aren't up to date with the Neutron LBaaS 
>> API Revision, the project/tenant/user provides a secret (container?) id 
>> when enabling SSL/TLS functionality.
>> 
>> * Example: If a user makes a change to a secret/container in Barbican 
>> then Neutron LBaaS will see an event and take the appropriate action.
>> 
>> PROS:
>> - Barbican is going to create an eventing system regardless so it will 
>> be supported.
>> - Decisions are made on behalf of the user which lessens the amount of 
>> calls the user has to make.
>> 
>> CONS:
>> - An eventing framework can become complex especially since we need to 
>> ensure delivery of an event.
>> - Implementing an eventing system will take more time than option #2… I
>> think.
>> 
>> 2. Push orche

Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Mike Bayer

On Jun 9, 2014, at 12:50 PM, Devananda van der Veen  
wrote:

> There may be some problems with MySQL when testing parallel writes in
> different non-committing transactions, even in READ COMMITTED mode,
> due to InnoDB locking, if the queries use non-unique secondary indexes
> for UPDATE or SELECT..FOR UPDATE queries. This is done by the
> "with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
> in Nova. So I would not recommend this approach, even though, in
> principle, I agree it would be a much more efficient way of testing
> database reads/writes.
> 
> More details here:
> http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
> http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html

OK, but just to clarify my understanding: what is the approach to testing 
writes in parallel right now? Are we doing CREATE DATABASE for two entirely 
distinct databases, with some kind of generated name for each one?  Otherwise, 
if the parallel tests run against the same database, this issue exists 
regardless (unless autocommit mode is used; is FOR UPDATE even accepted under 
those conditions?)
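
(One way to do the distinct-databases variant, purely as a sketch with
placeholder credentials and not a claim about what oslo does today, is to
create a uniquely named database per test process:)

# Sketch only: a throwaway, uniquely named database per test process so that
# parallel workers never collide. Credentials/host are placeholders.
import os
import uuid

import sqlalchemy as sa

db_name = "test_migrations_%s_%s" % (os.getpid(), uuid.uuid4().hex[:8])

# Connect at the server level (no database selected) to issue the DDL.
admin_engine = sa.create_engine("mysql://user:password@localhost/")
admin_engine.execute("CREATE DATABASE %s" % db_name)

# Each worker then points its own engine at its own database.
engine = sa.create_engine("mysql://user:password@localhost/%s" % db_name)
try:
    pass  # run migrations / tests against `engine` here
finally:
    admin_engine.execute("DROP DATABASE %s" % db_name)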




> 
> On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka  
> wrote:
>> Hi Mike,
>> 
> However, when testing an application that uses a fixed set of tables, as 
> should be the case for the majority if not all Openstack apps, there’s no 
> reason that these tables need to be recreated for every test.
>> 
>> This is a very good point. I tried to use the recipe from SQLAlchemy
>> docs to run Nova DB API tests (yeah, I know, this might sound
>> confusing, but these are actually methods that access the database in
>> Nova) on production backends (MySQL and PostgreSQL). The abandoned
>> patch is here [1]. Julia Varlamova has been working on rebasing this
>> on master and should upload a new patch set soon.
>> 
>> Overall, the approach with executing a test within a transaction and
>> then emitting ROLLBACK worked quite well. The only problem I ran into
>> were tests doing ROLLBACK on purpose. But you've updated the recipe
>> since then and this can probably be solved by using savepoints. I
>> used a separate DB per test-running process to prevent race
>> conditions, but we should definitely give the READ COMMITTED approach a
>> try. If it works, that will be awesome.
>> 
>> With a few tweaks of PostgreSQL config I was able to run Nova DB API
>> tests in 13-15 seconds, while SQLite in memory took about 7s.
>> 
>> Action items for me and Julia probably: [2] needs a spec with [1]
>> updated accordingly. Using this 'test in a transaction' approach
>> seems to be a way to go for running all db related tests except the
>> ones using DDL statements (as any DDL statement commits the current
>> transaction implicitly on MySQL and SQLite AFAIK).
>> 
>> Thanks,
>> Roman
>> 
>> [1] https://review.openstack.org/#/c/33236/
>> [2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends
>> 
>> On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer  wrote:
>>> 
>>> On Jun 6, 2014, at 8:12 PM, Devananda van der Veen 
>>> wrote:
>>> 
>>> I think some things are broken in the oslo-incubator db migration code.
>>> 
>>> Ironic moved to this when Juno opened and things seemed fine, until recently
>>> when Lucas tried to add a DB migration and noticed that it didn't run... So
>>> I looked into it a bit today. Below are my findings.
>>> 
>>> Firstly, I filed this bug and proposed a fix, because I think that tests
>>> that don't run any code should not report that they passed -- they should
>>> report that they were skipped.
>>>  https://bugs.launchpad.net/oslo/+bug/1327397
>>>  "No notice given when db migrations are not run due to missing engine"
>>> 
>>> Then, I edited the test_migrations.conf file appropriately for my local
>>> mysql service, ran the tests again, and verified that migration tests ran --
>>> and they passed. Great!
>>> 
>>> Now, a little background... Ironic's TestMigrations class inherits from
>>> oslo's BaseMigrationTestCase, then "opportunistically" checks each back-end,
>>> if it's available. This opportunistic checking was inherited from Nova so
>>> that tests could pass on developer workstations where not all backends are
>>> present (eg, I have mysql installed, but not postgres), and still
>>> transparently run on all backends in the gate. I couldn't find such
>>> opportunistic testing in the oslo db migration test code, unfortunately -
>>> but maybe it's well hidden.
>>> 
>>> Anyhow. When I stopped the local mysql service (leaving the configuration
>>> unchanged), I expected the tests to be skipped, but instead I got two
>>> surprise failures:
>>> - test_mysql_opportunistically() failed because setUp() raises an exception
>>> before the test code could call _have_mysql()
>>> - test_mysql_connect_fail() actually failed! Again, because setUp() raises
>>> an exception before running the test itself
>>> 
>>> Unfortunately, there's one more problem... when I run the tests 

[openstack-dev] [Mistral] Mistral weekly meeting - meeting minutes

2014-06-09 Thread Timur Nurlygayanov
Hi team,

Thank you all for participating in Mistral weekly meeting today,

meeting minutes are available by the following links:
Minutes:
http://eavesdrop.openstack.org/meetings/mistral_weekly_meeting/2014/mistral_weekly_meeting.2014-06-09-15.59.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/mistral_weekly_meeting/2014/mistral_weekly_meeting.2014-06-09-15.59.txt
Log:
http://eavesdrop.openstack.org/meetings/mistral_weekly_meeting/2014/mistral_weekly_meeting.2014-06-09-15.59.log.html


-- 

Timur,
QA Engineer
OpenStack Projects
Mirantis Inc
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread Georgy Okrokvertskhov
Hi Hossein,

In additions you may check the following:
Heat OS::Heat::HARestarter resource
http://docs.openstack.org/developer/heat/template_guide/openstack.html
This blog entry about clustering:
http://vmtrooper.com/openstack-your-windows-cluster-with-neutron-allowed-address-pairs/
Mistral project, specifically for Live migration:
https://wiki.openstack.org/wiki/Mistral#Live_migration
Murano project for legacy app management and composing:
https://wiki.openstack.org/wiki/Murano/ProjectOverview

Thanks,
Georgy


On Mon, Jun 9, 2014 at 9:30 AM, hossein zabolzadeh 
wrote:

> Really, thanks Georgy, for your complete answer. My major concern with
> openstack was HA for my legacy apps (I wanted to use cloudstack instead of
> openstack because of its greater attention to legacy apps and more HA
> features). But now, I will check your listed HA solutions on openstack and
> come back as soon as possible.
>
>
> On Mon, Jun 9, 2014 at 8:53 PM, Georgy Okrokvertskhov <
> gokrokvertsk...@mirantis.com> wrote:
>
>> Hi,
>>
>> You still can run legacy application on OpenStack with HA and DR using
>> the same good old school tools like pacemaker, heartbeat, DRBD etc. There
>> are all necessary features available in latest OpenStack. The most
>> important feature for HA - secondary IP address was implemented in Havana.
>> Now you can assign multiple IP addresses to the single VM port. Secondary
>> IP can be used as a VIP in pacemaker so it is possible to create classic
>> Active-Passive setup for any application. HAProxy is still there an you can
>> use it for any application which uses IP based transport for communication.
>> This secondary IP feature allows you to run even Windows cluster
>> applications without any significant changes in setup in comparison to the
>> running cluster on physical nodes.
>>
>> There are no shared volumes (yet, as far as I know) but you can use DRBD on a VM to
>> sync two volumes attached to two different VMs and shared network
>> filesystems as a service is almost there. Using these approaches it is
>> possible to have data resilience for legacy applications too.
>>
>> There are no automagic things that make legacy apps resilient, but it is
>> still possible using known tools, as there are no limitations
>> from OpenStack infrastructure side for that. As I know there were
>> discussions about exposing HA clusters on hypervisors that will allow some
>> kind of resilience automatically (through automatic migrations or
>> evacuation) but there is no active work on it visible.
>>
>> Thanks
>> Georgy
>>
>>
>>
>>
>>
>> On Mon, Jun 9, 2014 at 7:16 AM, Matthew Farina 
>> wrote:
>>
>>> In my experience building apps that run in OpenStack, you don't give
>>> up state. You shift how you handle state.
>>>
>>> For example, instead of always routing a user to the same instance and
>>> that instance holding the session data there is a common session store
>>> for the app (possibly synced between regions). If you store session on
>>> each instance and lose an instance you'll run into problems. If
>>> session state is more of a service for each instance, then an instance
>>> coming and going isn't a big deal.
>>>
>>> A good database as a service, swift (object storage), and maybe a
>>> microservice architecture may be helpful.
>>>
>>> Legacy applications might have some issues with the architecture
>>> changes and some may not be a good fit for cloud architectures. One
>>> way to help legacy applications is to use block storage, keep the
>>> latest snapshot of the instance in glance (image service), and monitor
>>> an instance. If an instance goes offline you can easily create a new
>>> one from the image and mount block storage with the data.
>>>
>>> - Matt
>>>
>>>
>>>
>>> On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh 
>>> wrote:
>>> > Hi OpenStack Development Community,
>>> > I know that the OpenStack interest is to become a cloud computing
>>> operating
>>> > system. And this simple sentence means: "Say goodbye to Stateful
>>> > Applications".
>>> > But, as you know we are in the transition phase from stateful apps to
>>> > stateless apps (remember the Pets and Cattle example). Legacy apps are
>>> still in
>>> > use, and how can OpenStack address the problems of running stateful
>>> > applications(e.g. HA, DR, FT, R,...)?
>>> > HA: High Availability
>>> > DR: Disaster Recovery
>>> > FT: Fault Tolerance
>>> > R: Resiliancy!
>>> >
>>> > ___
>>> > OpenStack-dev mailing list
>>> > OpenStack-dev@lists.openstack.org
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> >
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Georgy Okrokvertskhov
>> Architect,
>> OpenStack Platform Products,
>> Mirantis
>> http://www.mirantis.com
>> Tel. +1 650 963 9828
>> Mob. +1 650 996 3284
>>
>> __

Re: [openstack-dev] use of the word certified

2014-06-09 Thread Asselin, Ramy
Based on the discussion I'd like to propose these options:
1. Cinder-certified driver - This is an attempt to move the "certification" to 
the project level.
2. CI-tested driver - This is probably the most accurate, at least for what 
we're trying to achieve for Juno: Continuous Integration of Vendor-specific 
Drivers.

Ramy

-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Monday, June 09, 2014 4:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] use of the word certified

On 6 June 2014 18:29, Anita Kuno  wrote:
> So there are certain words that mean certain things, most don't, some do.
>
> If words that mean certain things are used then some folks start using 
> the word and have expectations around the word and the OpenStack 
> Technical Committee and other OpenStack programs find themselves on 
> the hook for behaviours that they didn't agree to.
>
> Currently the word under discussion is "certified" and its derivatives:
> certification, certifying, and others with root word "certificate".
>
> This came to my attention at the summit with a cinder summit session 
> with one of the certificate words in the title. I had thought my 
> point had been made but it appears that there needs to be more 
> discussion on this. So let's discuss.
>
> Let's start with the definition of certify:
> cer·ti·fy
> verb (used with object), cer·ti·fied, cer·ti·fy·ing.
> 1. to attest as certain; give reliable information of; confirm: He 
> certified the truth of his claim.

So the cinder team are attesting that a set of tests have been run against a 
driver: a certified driver.

> 3. to guarantee; endorse reliably: to certify a document with an 
> official seal.

We (the cinder team) are guaranteeing that the driver has been tested, in at 
least one configuration, and found to pass all of the tempest tests. This is a 
far better state than we were at 6 months ago, where many drivers didn't even 
pass a smoke test.

> 5. to award a certificate to (a person) attesting to the completion of 
> a course of study or the passing of a qualifying examination.

The cinder cert process is pretty much an exam.


I think the word certification covers exactly what we are doing. Given that 
cinder-core are the people on the hook for any cinder problems (including 
vendor specific ones), and the cinder core are the people who get bad-mouthed 
when there are problems (including vendor specific ones), I think this level of 
certification gives us value.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Oslo] [Ironic] DB migration woes

2014-06-09 Thread Devananda van der Veen
There may be some problems with MySQL when testing parallel writes in
different non-committing transactions, even in READ COMMITTED mode,
due to InnoDB locking, if the queries use non-unique secondary indexes
for UPDATE or SELECT..FOR UPDATE queries. This is done by the
"with_lockmode('update')" SQLAlchemy phrase, and is used in ~10 places
in Nova. So I would not recommend this approach, even though, in
principle, I agree it would be a much more efficient way of testing
database reads/writes.

More details here:
http://dev.mysql.com/doc/refman/5.5/en/innodb-locks-set.html and
http://dev.mysql.com/doc/refman/5.5/en/innodb-record-level-locks.html

On Sun, Jun 8, 2014 at 8:46 AM, Roman Podoliaka  wrote:
> Hi Mike,
>
 However, when testing an application that uses a fixed set of tables, as 
 should be the case for the majority if not all Openstack apps, there’s no 
 reason that these tables need to be recreated for every test.
>
> This is a very good point. I tried to use the recipe from SQLAlchemy
> docs to run Nova DB API tests (yeah, I know, this might sound
> confusing, but these are actually methods that access the database in
> Nova) on production backends (MySQL and PostgreSQL). The abandoned
> patch is here [1]. Julia Varlamova has been working on rebasing this
> on master and should upload a new patch set soon.
>
> Overall, the approach with executing a test within a transaction and
> then emitting ROLLBACK worked quite well. The only problem I ran into
> were tests doing ROLLBACK on purpose. But you've updated the recipe
> since then and this can probably be solved by using savepoints. I
> used a separate DB per test-running process to prevent race
> conditions, but we should definitely give the READ COMMITTED approach a
> try. If it works, that will be awesome.
>
> With a few tweaks of PostgreSQL config I was able to run Nova DB API
> tests in 13-15 seconds, while SQLite in memory took about 7s.
>
> Action items for me and Julia probably: [2] needs a spec with [1]
> updated accordingly. Using this 'test in a transaction' approach
> seems to be a way to go for running all db related tests except the
> ones using DDL statements (as any DDL statement commits the current
> transaction implicitly on MySQL and SQLite AFAIK).
>
> Thanks,
> Roman
>
> [1] https://review.openstack.org/#/c/33236/
> [2] https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends
>
> On Sat, Jun 7, 2014 at 10:27 PM, Mike Bayer  wrote:
>>
>> On Jun 6, 2014, at 8:12 PM, Devananda van der Veen 
>> wrote:
>>
>> I think some things are broken in the oslo-incubator db migration code.
>>
>> Ironic moved to this when Juno opened and things seemed fine, until recently
>> when Lucas tried to add a DB migration and noticed that it didn't run... So
>> I looked into it a bit today. Below are my findings.
>>
>> Firstly, I filed this bug and proposed a fix, because I think that tests
>> that don't run any code should not report that they passed -- they should
>> report that they were skipped.
>>   https://bugs.launchpad.net/oslo/+bug/1327397
>>   "No notice given when db migrations are not run due to missing engine"
>>
>> Then, I edited the test_migrations.conf file appropriately for my local
>> mysql service, ran the tests again, and verified that migration tests ran --
>> and they passed. Great!
>>
>> Now, a little background... Ironic's TestMigrations class inherits from
>> oslo's BaseMigrationTestCase, then "opportunistically" checks each back-end,
>> if it's available. This opportunistic checking was inherited from Nova so
>> that tests could pass on developer workstations where not all backends are
>> present (eg, I have mysql installed, but not postgres), and still
>> transparently run on all backends in the gate. I couldn't find such
>> opportunistic testing in the oslo db migration test code, unfortunately -
>> but maybe it's well hidden.
>>
>> Anyhow. When I stopped the local mysql service (leaving the configuration
>> unchanged), I expected the tests to be skipped, but instead I got two
>> surprise failures:
>> - test_mysql_opportunistically() failed because setUp() raises an exception
>> before the test code could call _have_mysql()
>> - test_mysql_connect_fail() actually failed! Again, because setUp() raises
>> an exception before running the test itself
>>
>> Unfortunately, there's one more problem... when I run the tests in parallel,
>> they fail randomly because sometimes two test threads run different
>> migration tests, and the setUp() for one thread (remember, it calls
>> _reset_databases) blows up the other test.
>>
>> Out of 10 runs, it failed three times, each with different errors:
>>   NoSuchTableError: `chassis`
>>   ERROR 1007 (HY000) at line 1: Can't create database 'test_migrations';
>> database exists
>>   ProgrammingError: (ProgrammingError) (1146, "Table
>> 'test_migrations.alembic_version' doesn't exist")
>>
>> As far as I can tell, this is all coming from:
>>
>> https://github

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Joe Cropper
There may also be specific software entitlement issues that make it useful
to deterministically know which host your VM will be placed on.  This can
be quite common in large organizations that have certain software that can
be tied to certain hardware or hardware with certain # of CPU capacity, etc.
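
(For reference, the admin-side mechanism under discussion, booting onto a
named host with the zone:host form, looks roughly like this with
python-novaclient; the IDs and the exact client signature are placeholders
and assumptions:)

# Sketch of forcing an instance onto a specific host via the
# "availability_zone:host" form; values and the client signature are
# assumptions to verify, not production code.
from novaclient.v1_1 import client

nova = client.Client("admin", "password", "admin_tenant",
                     "http://controller:5000/v2.0")

nova.servers.create(
    name="licensed-app-vm",
    image="<image-id>",                    # placeholder
    flavor="<flavor-id>",                  # placeholder
    availability_zone="nova:compute-01",   # zone:host pins the instance
)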

Regards,
Joe


On Mon, Jun 9, 2014 at 11:32 AM, Chris Friesen 
wrote:

> On 06/09/2014 07:59 AM, Jay Pipes wrote:
>
>> On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:
>>
>>> Forcing an instance to a specific host is very useful for the
>>> operator - it fulfills a valid use case for monitoring and testing
>>> purposes.
>>>
>>
>> Pray tell, what is that valid use case?
>>
>
> I find it useful for setting up specific testcases when trying to validate
> things... put *this* instance on *this* host, put *those* instances on
> *those* hosts, now pull the power plug on *this* host...etc.
>
> I wouldn't expect the typical openstack end-user to need it though.
>
> Chris
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Chris Friesen

On 06/09/2014 07:59 AM, Jay Pipes wrote:

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.


Pray tell, what is that valid use case?


I find it useful for setting up specific testcases when trying to 
validate things... put *this* instance on *this* host, put *those* 
instances on *those* hosts, now pull the power plug on *this* host...etc.


I wouldn't expect the typical openstack end-user to need it though.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread hossein zabolzadeh
Really, thanks Georgy, for your complete answer. My major concern with
openstack was HA for my legacy apps (I wanted to use cloudstack instead of
openstack because of its greater attention to legacy apps and more HA
features). But now, I will check your listed HA solutions on openstack and
come back as soon as possible.


On Mon, Jun 9, 2014 at 8:53 PM, Georgy Okrokvertskhov <
gokrokvertsk...@mirantis.com> wrote:

> Hi,
>
> You can still run legacy applications on OpenStack with HA and DR using the
> same good old school tools like pacemaker, heartbeat, DRBD etc. There are
> all necessary features available in latest OpenStack. The most important
> feature for HA - secondary IP address was implemented in Havana. Now you
> can assign multiple IP addresses to the single VM port. Secondary IP can be
> used as a VIP in pacemaker so it is possible to create classic
> Active-Passive setup for any application. HAProxy is still there an you can
> use it for any application which uses IP based transport for communication.
> This secondary IP feature allows you to run even Windows cluster
> applications without any significant changes in setup in comparison to the
> running cluster on physical nodes.
>
> There are no shared volumes (yet, as far as I know) but you can use DRBD on a VM to
> sync two volumes attached to two different VMs and shared network
> filesystems as a service is almost there. Using these approaches it is
> possible to have data resilience for legacy applications too.
>
> There are no automagic things that make legacy apps resilient, but it is
> still possible using known tools, as there are no limitations
> from OpenStack infrastructure side for that. As I know there were
> discussions about exposing HA clusters on hypervisors that will allow some
> kind of resilience automatically (through automatic migrations or
> evacuation) but there is no active work on it visible.
>
> Thanks
> Georgy
>
>
>
>
>
> On Mon, Jun 9, 2014 at 7:16 AM, Matthew Farina 
> wrote:
>
>> In my experience building apps that run in OpenStack, you don't give
>> up state. You shift how you handle state.
>>
>> For example, instead of always routing a user to the same instance and
>> that instance holding the session data there is a common session store
>> for the app (possibly synced between regions). If you store session on
>> each instance and lose an instance you'll run into problems. If
>> session state is more of a service for each instance, then an instance
>> coming and going isn't a big deal.
>>
>> A good database as a service, swift (object storage), and maybe a
>> microservice architecture may be helpful.
>>
>> Legacy applications might have some issues with the architecture
>> changes and some may not be a good fit for cloud architectures. One
>> way to help legacy applications is to use block storage, keep the
>> latest snapshot of the instance in glance (image service), and monitor
>> an instance. If an instance goes offline you can easily create a new
>> one from the image and mount block storage with the data.
>>
>> - Matt
>>
>>
>>
>> On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh 
>> wrote:
>> > Hi OpenStack Development Community,
>> > I know that the OpenStack interest is to become a cloud computing
>> operating
>> > system. And this simple sentence means: "Say goodbye to Stateful
>> > Applications".
>> > But, as you know we are in the transition phase from stateful apps to
>> > stateless apps (remember the Pets and Cattle example). Legacy apps are still
>> in
>> > use, and how can OpenStack address the problems of running stateful
>> > applications(e.g. HA, DR, FT, R,...)?
>> > HA: High Availability
>> > DR: Disaster Recovery
>> > FT: Fault Tolerance
>> > R: Resiliancy!
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> Georgy Okrokvertskhov
> Architect,
> OpenStack Platform Products,
> Mirantis
> http://www.mirantis.com
> Tel. +1 650 963 9828
> Mob. +1 650 996 3284
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Spec review request

2014-06-09 Thread Ben Nemec
Please don't send review requests to the list.  The preferred methods of
requesting reviews are explained here:
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015264.html

Thanks.

-Ben

On 06/07/2014 12:31 AM, Kanzhe Jiang wrote:
> The serviceBase and insertion spec has been up for review for a while. It
> would be great if it can be reviewed and moved forward.
> 
> https://review.openstack.org/#/c/93128/
> 
> Thanks,
> Kanzhe
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Jorge Miramontes
Hey German,

I agree with you. I don't really want to go with option #1 because making
decisions on behalf of the user (especially when security is involved) can
be quite tricky and dangerous. Your concerns are valid for option #2 but I
still think it is the better option to go with. I believe Carlos and Adam
are working with our Barbican team on a blueprint for option #2 so it
would be nice if you could take a look at that and see how we can
implement it to mitigate the concerns you laid out. While it would be nice
for us to figure out how to ensure registration/unregistration, at least
the API user has the necessary info to ensure it themselves if need be.

I'm not sure if I like the "auto-update" flag concept after all as it adds
a layer of complexity depending on what the user has set.  I'd prefer
either an "LBaaS makes all decisions on behalf of the user" or "LBaaS
makes no decisions on behalf of the user" approach, with the latter being my
preference. In one of my earlier emails I asked the fundamental question
of whether "flexibility" is worthwhile at the cost of complexity. I prefer
to start off simple since we don't have any real validation on whether
these "flexible" features will actually be used. Once we have a product
that is being widely deployed should "flexible" feature necessity become
evident.
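
To make option #2 a bit more concrete, the kind of registration record I
have in mind looks roughly like this (the field names are purely
illustrative, not an actual Barbican schema):

# Purely illustrative sketch of per-container registration metadata for
# option #2; the field names are invented for the example.
container_registration = {
    "container_id": "<tls-container-id>",          # placeholder
    "registered_consumers": [
        {
            "service": "neutron-lbaas",
            "resource": "<listener-id>",           # placeholder
            "registered_at": "2014-06-09T12:00:00Z",
        },
    ],
}

# A GUI (or a careful API user) reads "registered_consumers" before updating
# or deleting the container and can warn about the listener above.
consumers = container_registration["registered_consumers"]
assert consumers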

Cheers,
--Jorge




On 6/6/14 5:52 PM, "Eichberger, German"  wrote:

>Jorge + John,
>
>I am most concerned with a user changing his secret in barbican and then
>the LB trying to update and causing downtime. Some users like to control
>when the downtime occurs.
>
>For #1 it was suggested that once the event is delivered it would be up
>to a user to enable an "auto-update flag".
>
>In the case of #2 I am a bit worried about error cases: e.g. uploading
>the certificates succeeds but registering the loadbalancer(s) fails. So
>using the barbican system for those warnings might not be as foolproof as
>we are hoping. 
>
>One thing I like about #2 over #1 is that it pushes a lot of the
>information to Barbican. I think a user would expect when he uploads a
>new certificate to Barbican that the system warns him right away about
>load balancers using the old cert. With #1 he might get an e-mail from
>LBaaS telling him things changed (and we helpfully updated all affected
>load balancers) -- which isn't as immediate as #2.
>
>If we implement an "auto-update flag" for #1 we can have both. Users who
>like #2 just hit the flag. Then the discussion changes to what we should
>implement first and I agree with Jorge + John that this should likely be
>#2.
>
>German
>
>-Original Message-
>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>Sent: Friday, June 06, 2014 3:05 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>Integration Ideas
>
>Hey John,
>
>Correct, I was envisioning that the Barbican request would not be
>affected, but rather, the GUI operator or API user could use the
>registration information to do so should they want to do so.
>
>Cheers,
>--Jorge
>
>
>
>
>On 6/6/14 4:53 PM, "John Wood"  wrote:
>
>>Hello Jorge,
>>
>>Just noting that for option #2, it seems to me that the registration
>>feature in Barbican would not be required for the first version of this
>>integration effort, but we should create a blueprint for it nonetheless.
>>
>>As for your question about services not registering/unregistering, I
>>don't see an issue as long as the presence or absence of registered
>>services on a Container/Secret does not **block** actions from
>>happening, but rather is information that can be used to warn clients
>>through their processes. For example, Barbican would still delete a
>>Container/Secret even if it had registered services.
>>
>>Does that all make sense though?
>>
>>Thanks,
>>John
>>
>>
>>From: Youcef Laribi [youcef.lar...@citrix.com]
>>Sent: Friday, June 06, 2014 2:47 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>>Integration Ideas
>>
>>+1 for option 2.
>>
>>In addition, as a safeguard, the LBaaS service could check
>>with Barbican when failing to use an existing secret to see if the
>>secret has changed (lazy detection).
>>
>>Youcef
>>
>>-Original Message-
>>From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com]
>>Sent: Friday, June 06, 2014 12:16 PM
>>To: OpenStack Development Mailing List (not for usage questions)
>>Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS
>>Integration Ideas
>>
>>Hey everyone,
>>
>>Per our IRC discussion yesterday I'd like to continue the discussion on
>>how Barbican and Neutron LBaaS will interact. There are currently two
>>ideas in play and both will work. If you have another idea please free
>>to add it so that we may evaluate all the options relative to each other.
>>Here are the two 

Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread Georgy Okrokvertskhov
Hi,

You still can run legacy application on OpenStack with HA and DR using the
same good old school tools like pacemaker, heartbeat, DRBD etc. There are
all the necessary features available in the latest OpenStack. The most important
feature for HA - a secondary IP address - was implemented in Havana. Now you
can assign multiple IP addresses to a single VM port. A secondary IP can be
used as a VIP in pacemaker, so it is possible to create a classic
Active-Passive setup for any application. HAProxy is still there and you can
use it for any application which uses IP-based transport for communication.
This secondary IP feature allows you to run even Windows cluster
applications without any significant setup changes compared to running the
cluster on physical nodes.
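
One way to do this with python-neutronclient is to allow the extra VIP
address on the ports of both cluster members (a rough sketch only; the
credentials, port IDs and the VIP address below are placeholders):

    # Sketch: allow a pacemaker-managed VIP on the Neutron ports of both
    # cluster members. Credentials, port IDs and the VIP are placeholders.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')

    VIP = '10.0.0.100'
    for port_id in ('PORT_ID_NODE_A', 'PORT_ID_NODE_B'):
        neutron.update_port(port_id,
                            {'port': {'allowed_address_pairs':
                                      [{'ip_address': VIP}]}})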

There are no shared volumes (yet, as far as I know), but you can use DRBD on a VM to
sync two volumes attached to two different VMs, and shared network
filesystems as a service are almost there. Using these approaches it is
possible to have data resilience for legacy applications too.

There are no automagic things which make legacy apps resilient, but it is
still possible to do using known tools, as there are no limitations
on the OpenStack infrastructure side for that. As far as I know there were
discussions about exposing HA clusters on hypervisors that would allow some
kind of resilience automatically (through automatic migration or
evacuation), but there is no visible active work on it.

Thanks
Georgy





On Mon, Jun 9, 2014 at 7:16 AM, Matthew Farina  wrote:

> In my experience building apps that run in OpenStack, you don't give
> up state. You shift how you handle state.
>
> For example, instead of always routing a user to the same instance and
> that instance holding the session data, there is a common session store
> for the app (possibly synced between regions). If you store sessions on
> each instance and lose an instance you'll run into problems. If
> sessions are more of a service for each instance, then an instance
> coming and going isn't a big deal.
>
> A good database as a service, swift (object storage), and maybe a
> microservice architecture may be helpful.
>
> Legacy applications might have some issues with the architecture
> changes and some may not be a good fit for cloud architectures. One
> way to help legacy applications is to use block storage, keep the
> latest snapshot of the instance in glance (image service), and monitor
> an instance. If an instance goes offline you can easily create a new
> one from the image and mount block storage with the data.
>
> - Matt
>
>
>
> On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh 
> wrote:
> > Hi OpenStack Development Community,
> > I know that OpenStack's goal is to become a cloud computing
> > operating system. And this simple sentence means: "Say goodbye to Stateful
> > Applications".
> > But, as you know, we are in the transition phase from stateful apps to
> > stateless apps (remember the Pets and Cattle example). Legacy apps are still
> > in use, so how can openstack address the problems of running stateful
> > applications (e.g. HA, DR, FT, R, ...)?
> > HA: High Availability
> > DR: Disaster Recovery
> > FT: Fault Tolerance
> > R: Resiliency!
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for Capabilities and Tags

2014-06-09 Thread Tripp, Travis S
FYI: We now have the initial Glance spec up for review.  
https://review.openstack.org/#/c/98554/

We generalized a few concepts and will look at how to bring a few of those 
concepts back in potentially via a future spec.

Thanks,
Travis

From: Tripp, Travis S
Sent: Friday, May 30, 2014 4:27 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for 
Capabilities and Tags
Importance: High

Thanks, Zane and Georgy!

We’ll begin getting all the expected sections for the new Glance spec repo into 
this document next week and then will upload in RST format for formal review. 
That is a bit more expedient since there are still several people editing. In 
the meantime, we’ll take any additional comments in the google doc.

Thanks,
Travis

From: Georgy Okrokvertskhov [mailto:gokrokvertsk...@mirantis.com]
Sent: Friday, May 30, 2014 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance] [Heat] Glance Metadata Catalog for 
Capabilities and Tags
Importance: High

I think this is a great feature to have in Glance. A tagging mechanism for 
objects which are not owned by Glance is complementary to the artifact 
catalog/repository in Glance. As long as we keep tags and artifact metadata 
close to each other, the end-user will be able to use them seamlessly.
Artifacts can also use tags to find objects outside of the artifact repository, 
which is always good to have.
In the Murano project we use Glance tags to find the correct images required 
by specific applications. It would be great to extend this to other objects like 
networks, routers and flavors, so that an application writer can specify the 
kind of objects required by his application.

Thanks,
Georgy

On Fri, May 30, 2014 at 11:45 AM, Zane Bitter 
mailto:zbit...@redhat.com>> wrote:
On 29/05/14 18:42, Tripp, Travis S wrote:
Hello everyone!

At the summit in Atlanta we demonstrated the “Graffiti” project
concepts.  We received very positive feedback from members of multiple
dev projects as well as numerous operators.  We were specifically asked
multiple times about getting the Graffiti metadata catalog concepts into
Glance so that we can start to officially support the ideas we
demonstrated in Horizon.

After a number of additional meetings at the summit and working through
ideas the past week, we’ve created the initial proposal for adding a
Metadata Catalog to Glance for capabilities and tags.  This is distinct
from the “Artifact Catalog”, but we do see that capability and tag
catalog can be used with the artifact catalog.

We’ve detailed our initial proposal in the following Google Doc.  Mark
Washenberger agreed that this was a good place to capture the initial
proposal and we can later move it over to the Glance spec repo which
will be integrated with Launchpad blueprints soon.

https://docs.google.com/document/d/1cS2tJZrj748ZsttAabdHJDzkbU9nML5S4oFktFNNd68

Please take a look and let’s discuss!

Also, the following video is a brief recap of what was demo'd at the
summit.  It should help to set a lot of understanding behind the ideas
in the proposal.

https://www.youtube.com/watch?v=Dhrthnq1bnw

Thank you!

Travis Tripp (HP)

Murali Sundar (Intel)
*A Few Related Blueprints *


https://blueprints.launchpad.net/horizon/+spec/instance-launch-using-capability-filtering

https://blueprints.launchpad.net/horizon/+spec/tagging

https://blueprints.launchpad.net/horizon/+spec/faceted-search

https://blueprints.launchpad.net/horizon/+spec/host-aggregate-update-metadata

https://blueprints.launchpad.net/python-cinderclient/+spec/support-volume-image-metadata

+1, this is something that will be increasingly important to orchestration. The 
folks working on the TOSCA (and others) -> HOT translator project might be able 
to comment in more detail, but basically as people start wanting to write 
templates that run on multiple clouds (potentially even non-OpenStack clouds) 
some sort of catalog for capabilities will become crucial.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The possible ways of high availability for non-cloud-ready apps running on openstack

2014-06-09 Thread hossein zabolzadeh
Hi there.
I am dealing with large amount of legacy application(MediaWiki, Joomla,
...) running on openstack. I am looking for the best way to improve high
availability of my instances. All applications are not designed for
fail(Non-Cloud-Ready Apps). So, what is the best way of improving HA on my
non-clustered instances(Stateful Instances)?
Thanks in advance.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-09 Thread Anita Kuno
On 06/09/2014 03:38 AM, Eoghan Glynn wrote:
> 
> 
>> So there are certain words that mean certain things, most don't, some do.
>>
>> If words that mean certain things are used then some folks start using
>> the word and have expectations around the word and the OpenStack
>> Technical Committee and other OpenStack programs find themselves on the
>> hook for behaviours that they didn't agree to.
>>
>> Currently the word under discussion is "certified" and its derivatives:
>> certification, certifying, and others with root word "certificate".
>>
>> This came to my attention at the summit with a cinder summit session
>> with one of the certificate words in the title. I had thought my
>> point had been made but it appears that there needs to be more
>> discussion on this. So let's discuss.
>>
>> Let's start with the definition of certify:
>> cer·ti·fy
>> verb (used with object), cer·ti·fied, cer·ti·fy·ing.
>> 1. to attest as certain; give reliable information of; confirm: He
>> certified the truth of his claim.
>> 2. to testify to or vouch for in writing: The medical examiner will
>> certify his findings to the court.
>> 3. to guarantee; endorse reliably: to certify a document with an
>> official seal.
>> 4. to guarantee (a check) by writing on its face that the account
>> against which it is drawn has sufficient funds to pay it.
>> 5. to award a certificate to (a person) attesting to the completion of a
>> course of study or the passing of a qualifying examination.
>> Source: http://dictionary.reference.com/browse/certify
>>
>> The issue I have with the word certify is that it requires someone or a
>> group of someones to attest to something. The thing attested to is only
>> as credible as the someone or the group of someones doing the attesting.
>> We have no process, nor do I feel we want to have a process for
>> evaluating the reliability of the someones or groups of someones doing
>> the attesting.
>>
>> I think that having testing in place in line with other programs testing
>> of patches (third party ci) in cinder should be sufficient to address
>> the underlying concern, namely reliability of opensource hooks to
>> proprietary code and/or hardware. I would like the use of the word
>> "certificate" and all its roots to no longer be used in OpenStack
>> programs with regard to testing. This won't happen until we get some
>> discussion and agreement on this, which I would like to have.
>>
>> Thank you for your participation,
>> Anita.
> 
> Hi Anita,
> 
> Just a note on cross-posting to both the os-dev and os-tc lists.
> 
> Anyone not on the TC who hits reply-all is likely to see their
> post be rejected by the TC list moderator, but go through to the
> more open dev list.
> 
> As a result, the thread diverges (as we saw with the recent election
> stats/turnout thread).
> 
> Also, moderation rejects are an unpleasant user experience.
> 
> So if a post is intended to reach out for input from the wider dev
> community, it's better to post *only* to the -dev list, or vice versa
> if you want to interact with a narrower audience.
My post was intended to include the tc list in the discussion.

I have no say in what posts the tc email list moderator accepts or does
not accept, or how the senders of rejected posts are informed of their status.

Thanks Eoghan,
Anita.
> 
> Thanks,
> Eoghan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Arbitrary "extra specs" for compute nodes?

2014-06-09 Thread Chris Friesen

On 06/07/2014 12:30 AM, Joe Cropper wrote:

Hi Folks,

I was wondering if there was any such mechanism in the compute node
structure to hold arbitrary key-value pairs, similar to flavors'
"extra_specs" concept?

It appears there are entries for things like pci_stats, stats and
recently added extra_resources -- but these all tend to have more
specific usages vs. just arbitrary data that may want to be maintained
about the compute node over the course of its lifetime.

Unless I'm overlooking an existing construct for this, would this be
something that folks would welcome a Juno blueprint for--i.e., adding
extra_specs style column with a JSON-formatted string that could be
loaded as a dict of key-value pairs?


If nothing else, you could put the compute node in a host aggregate and 
assign metadata to it.
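
For example, with python-novaclient that could look roughly like this (a
sketch only; the credentials, aggregate name, host name and metadata keys
are all made up):

    # Sketch: hang arbitrary key/value pairs about a compute node off a
    # host aggregate. All names and keys below are made up.
    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://keystone:5000/v2.0')

    agg = nova.aggregates.create('rack-42', None)      # no availability zone
    nova.aggregates.add_host(agg, 'compute-node-01')
    nova.aggregates.set_metadata(agg, {'maintenance_window': 'sunday',
                                       'hw_profile': 'highmem'})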


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Promoting healing script to scheme migration script?

2014-06-09 Thread Johannes Erdfelt
On Mon, Jun 09, 2014, Jakub Libosvar  wrote:
> I'd like to get some opinions on the following idea:
> 
> Because we currently have (thanks to Ann) a WIP healing script capable
> of changing the database schema by comparing tables in the database to
> the models in the current codebase, I started to think whether it could be
> used for db upgrades in general instead of "generating" migration scripts.

Do you have a link to these healing scripts?

> If I understand correctly, the purpose of migration scripts used to be to:
> 1) separate changes according to plugins
> 2) upgrade the database schema
> 3) migrate data according to the changed schema
> 
> Since we dropped conditional migrations, we can cross out no.1).
> The healing script is capable of doing no.2) without any manual effort
> and without adding a migration script.
> 
> That means that if we decide to go along with using the script for updating
> the database schema, migration scripts will be needed only for data
> migration (no.3)), which is in my experience rare.
> 
> Another benefit would be that we won't need to store all the database
> models from the Icehouse release, which we probably will need in case we
> want to "heal" a database in order to achieve an idempotent Icehouse
> database schema with the Juno codebase.
> 
> Please share your ideas and reveal potential glitches in the proposal.

I'm actually working on a project to implement declarative schema
migrations for Nova using the existing model we currently maintain.

The main goals for our project are to reduce the amount of work
maintaining the database schema but also to reduce the amount of
downtime during software upgrades by doing schema changes online (where
possible).

I'd like to see what others have done and are working on for the future so
we don't unnecessarily duplicate work :)

JE


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Reasons to use Behat/behavior driven development in an SDK?

2014-06-09 Thread Matthew Farina
Jamie, thanks for sharing those links. They are quite useful and led
me to a couple questions.

1. To quote the first link, "To do this, we need a way to describe the
requirement such that everyone – the business folks, the analyst, the
developer and the tester – have a common understanding of the scope of
the work." Where are the business folks, the analyst, and the tester?
behat does things in human readable language that's really useful for
the non-developer. Where do we have these in the development of this
SDK?

I ask because in that first post the idea of working with these
different types of people is a central point. If you're working on a
client project for non-technical clients, which is common in
consulting circles, and in enterprise apps where you have analysts,
product managers, and others this is definitely useful for them. Where
are they in the loop on developing this SDK?

2. Can you point to end users of the SDK who aren't developers or
engineers? That's not to say that someone developing an application
that uses the SDK isn't working with non-developers. If they have a
story about uploading a file to persistent storage the implementation
by the developer might use the SDK. But, us using BDD doesn't help
that process.

This is really about the people involved in this project and consuming
it. It's different from typical client consulting work or general
consumer facing production. The audience is different. Can you explain
how this technology is useful to this specific audience in a practical
way?

Thanks,
Matt


On Fri, Jun 6, 2014 at 12:55 PM, Jamie Hannaford
 wrote:
> Hey all,
>
> Sorry for the length of reply - but I want to provide as much information as
> possible about this topic.
>
> Instead of enumerating pros and cons, I want to give a bit of context first
> (about what “feature stories” actually are), and then respond to some common
> misconceptions about them. This way, the pros/cons of Behat are a bit more
> substantiated.
>
>
> Would Behat replace PHPUnit?
>
> No - they’re completely different. We’d still use phpunit for unit testing
> because it’s way better at xunit-like assertions. We’d use behat instead for
> functional testing - making sure that features work against a production
> API.
>
>
> Who’s using Behat and is it suitable for us?
>
> From what I’ve heard, we’re using it for some projects at Rackspace and
> possibly some OpenStack projects - but I need to double check that. I’ve
> reached out to some folks about their experiences with it - so I’ll post the
> findings when I hear back.
>
>
> What are BDD feature stories?
>
> Here’s a link to a fantastic article which explains the benefits of BDD
> feature stories: http://dannorth.net/whats-in-a-story/
>
> tl;dr:
>
> BDD takes the position that you can turn an idea for a requirement into
> implemented, tested, production-ready code simply and effectively, as long
> as the requirement is specific enough that everyone knows what’s going on.
> To do this, we need a way to describe the requirement such that everyone –
> end-user, contributor, manager, technical lead (in short, anyone interested
> in using our SDK in their business) – have a common understanding of the
> scope of the work. You are showing them, in human-readable language, the
> features of the SDK and what it offers them. The result is that everyone —
> regardless of proficiency, skill level and familiarity with the codebase —
> is on the same level of understanding. From this they can agree a common
> definition of “done”, and we escape the dual gumption traps of “that’s not
> what I asked for” or “I forgot to tell you about this other thing”.
>
> This, then, is the role of a Story. It is a description of a requirement and
> a set of criteria by which we all agree that it is “done”. It helps us
> understand and satisfy customer use-cases in a well expressed and clear way.
> It also helps us track project progress by having well-established
> acceptance criteria for feature sets.
>
>
> 3 misconceptions about BDD
>
> (Inspired by
> http://www.thoughtworks.com/insights/blog/3-misconceptions-about-bdd)
>
> 1. End-users don’t care about this! They want code
>
> This is actually a completely misdirected point. The purpose of behat is not
> to serve as a public-facing repository of sample code. Its actual purpose is
> twofold: to serve as a functional test suite (i.e. make sure our SDK works
> against an API), and secondly to serve as a communication device - to codify
> features in a human-readable way.
>
> It’s the role of documentation to explain the concepts of the SDK with
> detailed code samples. Another good idea is to provide a “samples” folder
> that contains standalone scripts for common use-cases - this is what we
> offer for our current SDK, and users appreciate it. Both of these will allow
> developers to copy and paste working code for their requirements.
>
> 2. Contributors don’t want to write these specifications!
>
> My response is this: how ca

Re: [openstack-dev] [neutron] Mid-cycle questions for folks

2014-06-09 Thread Rossella Sblendido
I had to call too. I got same conditions as Carl.

cheers,

Rossella

On 06/05/2014 04:45 PM, Kyle Mestery wrote:
> It would be ideal if folks could use the room block I reserved when
> booking, if their company policy allows it. I've gotten word from the
> hotel they may release the block if more people don't use it, just
> FYI.
>
> On Thu, Jun 5, 2014 at 5:46 AM, Paul Michali (pcm)  wrote:
>> I booked through our company travel and got a comparable rate ($111 or $114, 
>> I can’t recall the exact price).
>>
>> Regards,
>>
>> PCM (Paul Michali)
>>
>> MAIL …..…. p...@cisco.com
>> IRC ……..… pcm_ (irc.freenode.com)
>> TW ………... @pmichali
>> GPG Key … 4525ECC253E31A83
>> Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83
>>
>>
>>
>> On Jun 5, 2014, at 12:48 AM, Carl Baldwin  wrote:
>>
>>> Yes, I was able to book it for $114 a night with no prepayment.  I had
>>> to call.  The agent found the block under Cisco and the date range.
>>>
>>> Carl
>>>
>>> On Wed, Jun 4, 2014 at 4:43 PM, Kyle Mestery  
>>> wrote:
 I think it's even cheaper than that. Try calling the hotel to get the
 better rate, I think Carl was able to successfully acquire the room at
 the cheaper rate (something like $115 a night or so).

 On Wed, Jun 4, 2014 at 4:56 PM, Edgar Magana Perdomo (eperdomo)
  wrote:
> I tried to book online and it seems that the pre-payment is 
> non-refundable:
>
> "Hyatt.Com Rate Rate RulesFull prepayment required, non-refundable, no
> date changes."
>
>
> The price is $149 USD per night. Is that what you have blocked?
>
> Edgar
>
> On 6/4/14, 2:47 PM, "Kyle Mestery"  wrote:
>
>> Hi all:
>>
>> I was curious if people are having issues booking the room from the
>> block I have setup. I received word from the hotel that only one (1!)
>> person has booked yet. Given the mid-cycle is approaching in a month,
>> I wanted to make sure that people are making plans for travel. Are
>> people booking in places other than the one I had setup as reserved?
>> If so, I'll remove the room block. Keep in mind the hotel I had a
>> block reserved at is very convenient in that it's literally walking
>> distance to the mid-cycle location at the Bloomington, MN Cisco
>> offices.
>>
>> Thanks!
>> Kyle
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][db] Promoting healing script to scheme migration script?

2014-06-09 Thread Jakub Libosvar
Forgot to add tags, sorry

On 06/09/2014 04:18 PM, Jakub Libosvar wrote:
> Hi all,
> 
> I'd like to get some opinions on the following idea:
> 
> Because we currently have (thanks to Ann) a WIP healing script capable
> of changing the database schema by comparing tables in the database to
> the models in the current codebase, I started to think whether it could be
> used for db upgrades in general instead of "generating" migration scripts.
> 
> If I understand correctly, the purpose of migration scripts used to be to:
> 1) separate changes according to plugins
> 2) upgrade the database schema
> 3) migrate data according to the changed schema
> 
> Since we dropped conditional migrations, we can cross out no.1).
> The healing script is capable of doing no.2) without any manual effort
> and without adding a migration script.
> 
> That means that if we decide to go along with using the script for updating
> the database schema, migration scripts will be needed only for data
> migration (no.3)), which is in my experience rare.
> 
> Another benefit would be that we won't need to store all the database
> models from the Icehouse release, which we probably will need in case we
> want to "heal" a database in order to achieve an idempotent Icehouse
> database schema with the Juno codebase.
> 
> Please share your ideas and reveal potential glitches in the proposal.
> 
> Thank you,
> Kuba
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread Matthew Farina
In my experience building apps that run in OpenStack, you don't give
up state. You shift how you handle state.

For example, instead of always routing a user to the same instance and
that instance holding the session data, there is a common session store
for the app (possibly synced between regions). If you store sessions on
each instance and lose an instance you'll run into problems. If
sessions are more of a service for each instance, then an instance
coming and going isn't a big deal.
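
A minimal sketch of that idea, using Redis purely as an example of a shared
store (the endpoint and key layout here are made up):

    # Sketch: sessions live in a shared store, not on whichever instance
    # happened to serve the request. Redis is just an example backend.
    import json
    import uuid

    import redis

    store = redis.StrictRedis(host='session-store.example.net', port=6379)

    def create_session(user_id):
        sid = uuid.uuid4().hex
        store.setex('session:%s' % sid, 3600, json.dumps({'user_id': user_id}))
        return sid

    def load_session(sid):
        raw = store.get('session:%s' % sid)
        return json.loads(raw) if raw else None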

A good database as a service, swift (object storage), and maybe a
microservice architecture may be helpful.

Legacy applications might have some issues with the architecture
changes and some may not be a good fit for cloud architectures. One
way to help legacy applications is to use block storage, keep the
latest snapshot of the instance in glance (image service), and monitor
an instance. If an instance goes offline you can easily create a new
one from the image and mount block storage with the data.

- Matt



On Mon, Jun 9, 2014 at 7:27 AM, hossein zabolzadeh  wrote:
> Hi OpenStack Development Community,
> I know that OpenStack's goal is to become a cloud computing operating
> system. And this simple sentence means: "Say goodbye to Stateful
> Applications".
> But, as you know, we are in the transition phase from stateful apps to
> stateless apps (remember the Pets and Cattle example). Legacy apps are still in
> use, so how can openstack address the problems of running stateful
> applications (e.g. HA, DR, FT, R, ...)?
> HA: High Availability
> DR: Disaster Recovery
> FT: Fault Tolerance
> R: Resiliency!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Promoting healing script to scheme migration script?

2014-06-09 Thread Jakub Libosvar
Hi all,

I'd like to get some opinions on the following idea:

Because we currently have (thanks to Ann) a WIP healing script capable
of changing the database schema by comparing tables in the database to
the models in the current codebase, I started to think whether it could be
used for db upgrades in general instead of "generating" migration scripts.

If I understand correctly, the purpose of migration scripts used to be to:
1) separate changes according to plugins
2) upgrade the database schema
3) migrate data according to the changed schema

Since we dropped conditional migrations, we can cross out no.1).
The healing script is capable of doing no.2) without any manual effort
and without adding a migration script.
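
For illustration, the heart of such a comparison can be done with Alembic's
autogenerate API, roughly like this (a sketch only, not the actual WIP
script; the engine URL and the models import are placeholders):

    # Sketch: diff the live database against the current models, the way a
    # "healing" upgrade would, instead of replaying migration scripts.
    from alembic.autogenerate import compare_metadata
    from alembic.migration import MigrationContext
    from sqlalchemy import create_engine

    from neutron.db import model_base
    from neutron.db import models_v2  # noqa: imported so tables register

    engine = create_engine('mysql://user:password@localhost/neutron')
    with engine.connect() as conn:
        ctx = MigrationContext.configure(conn)
        diff = compare_metadata(ctx, model_base.BASEV2.metadata)

    for entry in diff:
        print(entry)   # e.g. ('add_column', None, 'ports', Column(...))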

That means that if we decide to go along with using the script for updating
the database schema, migration scripts will be needed only for data
migration (no.3)), which is in my experience rare.

Another benefit would be that we won't need to store all the database
models from the Icehouse release, which we probably will need in case we want
to "heal" a database in order to achieve an idempotent Icehouse database
schema with the Juno codebase.

Please share your ideas and reveal potential glitches in the proposal.

Thank you,
Kuba

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Use of final and private keywords to limit extending

2014-06-09 Thread Matthew Farina
If you don't mind I'd like to step back for a moment and talk about
the end users of this codebase and the types code it will be used in.

We're looking to make application developers successful in PHP. The
top 10% of PHP application developers aren't an issue. If they have an
SDK or not they will build amazing things. It's the long tail of app
devs. Many of these developers don't know things we might take for
granted, like dependency injection. A lot of them may writing
spaghetti procedural code. I use these examples because I've run into
them in the past couple months. We need to make these folks successful
in a cost effective and low barrier to entry manner.

When I've gotten into the world of closed source PHP (or any other
language for that matter) and work that's not in the popular space
I've seen many things that aren't clean or pretty. But, they work.

That means this SDK needs to be useful in the modern frameworks (which
vary widely on opinions) and in environments we may not like.

The other thing I'd like to talk about is the protected keyword. I use
this a lot. Using protected means an outside caller can't access the
method. Only other methods on the class or classes that extend it.
This is an easy way to have an API and internals.

Private is different. Private means it's part of the class but not
there for extended classes. It's not just about controlling the public
API for callers but not letting classes that extend this one have
access to the functionality.

Given the scope of who our users are...

- Any place we use the `final` scoping we need to explain how to
extend it properly. It's a teaching moment for someone who might not
come to a direction on what to do very quickly. Think about the long
tail of developers and projects, most of which are not open source.

Note, I said I'm not opposed to using final. It's an intentional
decision. For the kinds of things we're doing I can't see all too many
use cases for using final. We need to enable users to be successful
without controlling how they write applications because this is an
add-on to help them not a driver for their architecture.

- For scoping private and public APIs, `protected` is a better keyword
unless we are intending on blocking extension. If we block extension
we should explain how to handled overriding things that are likely to
happen in real world applications that are not ideally written or
architected.

At the end of the day, applications that successfully do what they
need to do while using OpenStack on the backend is what will make
OpenStack more successful. We need to help make it easy for the
developers, no matter how they choose to code, to be successful. I
find it useful to focus on end users and their practical cases over
the theory of how to design something.

Thoughts,
Matt


On Fri, Jun 6, 2014 at 10:01 AM, Jamie Hannaford
 wrote:
> So this is an issue that’s been heavily discussed recently in the PHP
> community.
>
> Based on personal opinion, I heavily favor and use private properties in
> software I write. I haven’t, however, used the “final” keyword that much.
> But the more I read about and see it being used, the more inclined I am to
> use it in projects. Here’s a great overview of why it’s useful for public
> APIs: http://verraes.net/2014/05/final-classes-in-php/
>
> Here’s a tl;dr executive summary:
>
> - Open/Closed principle. It’s important to understand that “Open for
> extension”, does not mean “Open for inheritance”. Composition, strategies,
> callbacks, plugins, event listeners, … are all valid ways to extend without
> inheritance. And usually, they are much preferred to inheritance – hence the
> conventional recommendation in OOP to “favour composition over inheritance”.
> Inheritance creates more coupling, that can be hard to get rid of, and that
> can make understanding the code quite tough.
>
> - Providing an API is a responsibility: by allowing end-users to access
> features of our SDK, we need to give certain guarantees of stability or low
> change frequency. The behavior of classes should be deterministic - i.e. we
> should be able to trust that a class does a certain thing. There’s no trust
> whatsoever if that behavior can be edited and overridden from external code.
>
> - Future-proofing: the fewer behaviours and extension points we expose, the
> more freedom we have to change system internals. This is the idea behind
> encapsulation.
>
> You said that we should only use private and final keywords if there’s an
> overwhelming reason to do so. I completely disagree. I actually want to flip
> the proposition here: I think we should only use public keywords if we’re
> CERTAIN we want to encourage and allow the inheritance of that class. By
> making a class inheritable, you are saying to the outside world: this class
> is meant to be extended. And the majority of times this is not what we want.
> Sure there are times when inheritance may well be the best option - but you
> can support extensio

Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/05/2014 09:54 AM, Day, Phil wrote:

-Original Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: 04 June 2014 19:23 To:
openstack-dev@lists.openstack.org Subject: Re: [openstack-dev]
[nova] Proposal: Move CPU and memory allocation ratio out of
scheduler

On 06/04/2014 11:56 AM, Day, Phil wrote:

Hi Jay,


* Host aggregates may also have a separate allocation ratio
that overrides any configuration setting that a particular host
may have


So with your proposal would the resource tracker be responsible
for picking and using override values defined as part of an
aggregate that includes the host ?


Not quite sure what you're asking, but I *think* you are asking
whether I am proposing that the host aggregate's allocation ratio
that a compute node might be in would override any allocation ratio
that might be set on the compute node? I would say that no, the
idea would be that the compute node's allocation ratio would
override any host aggregate it might belong to.



I'm not sure why you would want it that way round - aggregates let
me set/change the value for a number of hosts, and change the set of
hosts that the values apply to.  That in general seems a much
better model for operators than having to manage things on a per-host
basis.

Why not keep the current model where an aggregate setting overrides
the "default" - which will now come from the host config rather than the
scheduler config?


That's actually exactly what I proposed in the blueprint spec:

https://review.openstack.org/#/c/98664/

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

2014-06-09 Thread Jay Pipes

On 06/06/2014 08:07 AM, Murray, Paul (HP Cloud) wrote:

Forcing an instance to a specific host is very useful for the
operator - it fulfills a valid use case for monitoring and testing
purposes.


Pray tell, what is that valid use case?


I am not defending a particular way of doing this, just
bringing up that it has to be handled. The effect on limits is purely
implementation - no limits get set so it by-passes any resource
constraints, which is deliberate.

-Original Message- From: Jay Pipes
[mailto:jaypi...@gmail.com] Sent: 04 June 2014 19:17 To:
openstack-dev@lists.openstack.org Subject: Re: [openstack-dev] [nova]
Proposal: Move CPU and memory allocation ratio out of scheduler

On 06/04/2014 06:10 AM, Murray, Paul (HP Cloud) wrote:

Hi Jay,

This sounds good to me. You left out the part of limits from the
discussion - these filters set the limits used at the resource
tracker.


Yes, and that is, IMO, bad design. Allocation ratios are the domain
of the compute node and the resource tracker. Not the scheduler. The
allocation ratios simply adjust the amount of resources that the
compute node advertises to others. Allocation ratios are *not*
scheduler policy, and they aren't related to flavours.


You also left out the force-to-host and its effect on limits.


force-to-host is definitively non-cloudy. It was a bad idea that
should never have been added to Nova in the first place.

That said, I don't see how force-to-host has any affect on limits.
Limits should not be output from the scheduler. In fact, they
shouldn't be anything other than an *input* to the scheduler,
provided in each host state struct that gets built from records
updated in the resource tracker and the Nova database.


Yes, I would agree with doing this at the resource tracker too.

And of course the extensible resource tracker is the right way to
do it J


:) Yes, clearly this is something that I ran into while brainstorming
around the extensible resource tracker patches.

Best, -jay


Paul.

*From:*Jay Lau [mailto:jay.lau@gmail.com] *Sent:* 04 June 2014
10:04 *To:* OpenStack Development Mailing List (not for usage
questions) *Subject:* Re: [openstack-dev] [nova] Proposal: Move CPU
and memory allocation ratio out of scheduler

Does there is any blueprint related to this? Thanks.

2014-06-03 21:29 GMT+08:00 Jay Pipes mailto:jaypi...@gmail.com>>:

Hi Stackers,

tl;dr =

Move CPU and RAM allocation ratio definition out of the Nova
scheduler and into the resource tracker. Remove the calculations
for overcommit out of the core_filter and ram_filter scheduler
pieces.

Details ===

Currently, in the Nova code base, the thing that controls whether
or not the scheduler places an instance on a compute host that is
already "full" (in terms of memory or vCPU usage) is a pair of
configuration options* called cpu_allocation_ratio and
ram_allocation_ratio.

These configuration options are defined in, respectively,
nova/scheduler/filters/core_filter.py and
nova/scheduler/filters/ram_filter.py.

Every time an instance is launched, the scheduler loops through a
collection of host state structures that contain resource
consumption figures for each compute node. For each compute host,
the core_filter and ram_filter's host_passes() method is called. In
the host_passes() method, the host's reported total amount of CPU
or RAM is multiplied by this configuration option, and the reported
used amount of CPU or RAM is then subtracted from the product. If
the result is greater than or equal to the number of vCPUs (or MB of RAM)
needed by the instance being launched, True is returned and the host
continues to be considered during scheduling decisions.
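
In code terms, the check amounts to roughly the following (a simplified
sketch rather than the actual core_filter implementation; the attribute
names are illustrative):

    # Simplified sketch of the CPU overcommit check; not the actual
    # core_filter code, and the attribute names are illustrative.
    cpu_allocation_ratio = 16.0

    def host_passes(host_state, requested_vcpus):
        # Capacity the host advertises once the overcommit ratio is applied.
        limit = host_state.total_vcpus * cpu_allocation_ratio
        # Subtract what is already consumed and compare with the request.
        free = limit - host_state.used_vcpus
        return free >= requested_vcpus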

I propose we move the definition of the allocation ratios out of
the scheduler entirely, as well as the calculation of the total
amount of resources each compute node contains. The resource
tracker is the most appropriate place to define these configuration
options, as the resource tracker is what is responsible for keeping
track of total and used resource amounts for all compute nodes.

Benefits:

* Allocation ratios determine the amount of resources that a
compute node advertises. The resource tracker is what determines
the amount of resources that each compute node has, and how much of
a particular type of resource have been used on a compute node. It
therefore makes sense to put calculations and definition of
allocation ratios where they naturally belong. * The scheduler
currently needlessly re-calculates total resource amounts on every
call to the scheduler. This isn't necessary. The total resource
amounts don't change unless either a configuration option is
changed on a compute node (or host aggregate), and this calculation
can be done more efficiently once in the resource tracker. * Move
more logic out of the scheduler * With the move to an extensible
resource tracker, we can more easily evolve to defining all
resource-related options in the same place (instead of in different
filter files in the schedule

[openstack-dev] Constraint validation and property list filtering in Murano

2014-06-09 Thread Alexander Tivelkov
Hi folks,

There is an important topic which I would like to discuss: it seems like
there is a place for improvement in UI validation and filtering in Murano.

The reason for writing this is a change-set [1] (an implementation of
blueprint [2]) which allows package developers to specify the constraints
for Flavor fields in dynamic UI definitions, and a little controversy about
this commit among the core team.
In my opinion, the change itself is great (thanks, Ryan!) and I am going to
put my +2 on it, but I would like to say that there may exist a better and
more complete approach, which we probably should adopt in future.


The main idea is that in Murano we have a concept of Application
Definitions, and these definitions should be complete enough to specify all
the properties, dependencies, constraints and limitations for each
application in the Catalog.
Currently we write these defintions in MuranoPL, and the constraints and
limitations are defined as its Contracts.

For example, imagine we have an application which should be run on a Server
having some specific hardware spec, e.g. having not less then 2 CPU cores
and at least 8 Gb of RAM.
In this case, these limits may be expressed as the Contract on the property
defining the reference to the VM. The contract may look like this:

$.class(Instance).check($.flavor.cpuCores>=2 and $.flavor.ramMb>=8192)

(this will require us to create a data structure for flavors: currently we
use plain string names - but this is quite an easy and straitforward change)

Defining filter constraints on the UI side without having them in MuranoPL
constraints is not enough: even if the UI is used to restrict the values of
some properties, these restrictions may be ignored if the input object model
is composed manually and sent to MuranoAPI without using the UI. This means
that the MuranoPL contract should be the primary source of
constraints/limitations, while the UI-side properties only supplement them.

This creates the need to define constraints in two locations: in MuranoPL
for runtime validation and in UI definitions for "client-side" checks and
filtering. These two have different notations: MuranoPL uses flexible
yaql-based contracts, which allow constructing and enforcing almost any
expression, while DynamicUI has a limited number of available properties
for each type of input field. If some field does not have the ability to
enforce some check, then it has to be added in python code and committed to
Murano's codebase, which contradicts the mission of the Application
Catalog.
This approach is overcomplicated, as it requires the package developer to
learn two different notations. Also it is error-prone, as there is no
automatic way to ensure that the "ui-side" constraint definitions do really
match the MuranoPL contracts.


So, I would prefer to have a single location for constraint definitions -
MuranoPL contracts. These contracts (in their yaql form) should be
processable by the dynamic UI and should be used for both field value
checks and dropdown list filtering.
Also, the UI form for each component of the environment should be displayed
and validated in the context of the contract applied to this component.
In the example given above, the virtual machine contract is defined for the
application class, while the UI form for it is defined for the "Instance"
class. While this form should be the same in all usages of this class, its
context (availability and possible values of different fields) should be
defined by the contracts defined by the class which uses it, i.e. the
Application.



As a bottom line, I would suggest accepting commit [1] for now (we need
flavor filtering anyway), but agreeing that this should be a temporary
workaround. Meanwhile, we need to design and implement a way of passing
contracts from MuranoPL classes to the UI engine and use these contracts for
both API-side validation and list filtering.


[1] https://review.openstack.org/#/c/97904/
[2]
https://blueprints.launchpad.net/murano/+spec/filter-flavor-for-each-service

--
Regards,
Alexander Tivelkov
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Gate still backed up - need assistance with nova-network logging enhancements

2014-06-09 Thread Sean Dague
Based on some back of envelope math the gate is basically processing 2
changes an hour, failing one of them. So if you want to know how long
the gate is, take the length / 2 in hours.

Right now we're doing a lot of revert roulette, trying to revert things
that we think landed about the time things went bad. I call this
roulette because in many cases the actual issue isn't well understood. A
key reason for this is:

*nova network is a blackhole*

There is no work unit logging in nova-network, and no attempted
verification that the commands it ran did a thing. Most of these
failures that we don't have good understanding of are the network not
working under nova-network.

So we could *really* use a volunteer or two to prioritize getting that
into nova-network. Without it we might manage to turn down the failure
rate by reverting things (or we might not) but we won't really know why,
and we'll likely be here again soon.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

2014-06-09 Thread Eugene Nikanorov
Mike,

Thanks a lot for your response!
Some comments:
> There’s some in-Python filtering following it which does not seem
> necessary; the "alloc.vxlan_vni not in vxlan_vnis” phrase
> could just as well be a SQL “NOT IN” expression.
There we have to do a specific set intersection between the configured ranges and
existing allocations. That could be done in sql,
but that would certainly lead to huge sql query text, as the full vxlan range
could consist of 16 million ids.

>  The synchronize_session=“fetch” is certainly a huge part of the time
> spent here
You've actually made a good point about synchronize_session=“fetch” which
was obviously misused by me.
It seems to save up to 40% of plain deleting time.

I've fixed that and got some speedup with deletes for both mysql and
postgres, which reduced the difference between the chunked and non-chunked versions:

50k vnis to add/delete   Pg adding vnis   Pg deleting vnis   Pg Total   Mysql adding vnis   Mysql deleting vnis   Mysql total
non-chunked sql                22               15              37              15                  15               30
chunked in 100                 20               13              33              14                  14               28

Results of the chunked and non-chunked versions look closer, but the gap increases
with the vni range size (based on a few tests of a 150k vni range).

So I'm going to fix the chunked version that is on review now. If you think
that the benefit isn't worth the complexity - please let me know.
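
For reference, the chunked variant boils down to something like this (a
simplified sketch, not the actual patch; the VxlanAllocation model and the
session handling are stand-ins for the real ml2 code):

    # Sketch: delete stale allocations in chunks, without the expensive
    # synchronize_session="fetch" strategy.
    CHUNK = 100

    def remove_stale_allocations(session, stale_vnis):
        stale = list(stale_vnis)
        with session.begin(subtransactions=True):
            for i in range(0, len(stale), CHUNK):
                chunk = stale[i:i + CHUNK]
                (session.query(VxlanAllocation).
                 filter(VxlanAllocation.vxlan_vni.in_(chunk),
                        VxlanAllocation.allocated == False).
                 delete(synchronize_session=False))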

Thanks,
Eugene.

On Mon, Jun 9, 2014 at 1:33 AM, Mike Bayer  wrote:

>
> On Jun 7, 2014, at 4:38 PM, Eugene Nikanorov 
> wrote:
>
> Hi folks,
>
> There was a small discussion about the better way of doing sql operations
> for vni synchronization with the config.
> Initial proposal was to handle those in chunks. Carl also suggested to
> issue a single sql query.
> I've done some testing with mysql and postgres.
> I've tested the following scenario: vxlan range is changed from
> 5:15 to 0:10 and vice versa.
> That involves adding and deleting 5 vni in each test.
>
> Here are the numbers:
> 50k vnis to add/delete   Pg adding vnis   Pg deleting vnis   Pg Total   Mysql adding vnis   Mysql deleting vnis   Mysql total
> non-chunked sql                23               22              45              14                  20               34
> chunked in 100                 20               17              37              14                  17               31
>
> I've done about 5 tries to get each number to minimize random floating
> factor (due to swaps, disc or cpu activity or other factors)
> It might be surprising that issuing multiple sql statements instead of one
> big one is a little bit more efficient, so I would appreciate it if someone
> could reproduce those numbers.
> Also I'd like to note that the part of the code that iterates over vnis
> fetched from the db takes 10 seconds on both mysql and postgres and is part
> of the "deleting vnis" numbers.
> In other words, difference between multiple DELETE sql statements and
> single one is even bigger (in percent) than these numbers show.
>
> The code which I used to test is here:
> http://paste.openstack.org/show/83298/
> Right now the chunked version is commented out, so to switch between
> versions some lines should be commented and some - uncommented.
>
>
> I’ve taken a look at this, though I’m not at the point where I have things
> set up to run things like this within full context, and I don’t know that I
> have any definitive statements to make, but I do have some suggestions:
>
> 1. I do tend to chunk things a lot, selects, deletes, inserts, though the
> chunk size I work with is typically more like 1000, rather than 100.   When
> chunking, we’re looking to select a size that doesn’t tend to overload the
> things that are receiving the data (query buffers, structures internal to
> both SQLAlchemy as well as the DBAPI and the relational database), but at
> the same time doesn’t lead to too much repetition on the Python side (where
> of course there’s a lot of slowness).
>
> 2. Specifically regarding “WHERE x IN (…..)”, I always chunk those.  When
> we use IN with a list of values, we’re building an actual SQL string that
> becomes enormous.  This puts strain on the database’s query engine that is
> not optimized for SQL strings that are hundreds of thousands of characters
> long, and on some backends this size is limited; on Oracle, there’s a limit
> of 1000 items.   So I’d always chunk this kind of thing.
>
> 3. I’m not sure of the broader context of this code, but in fact placing a
> literal list of items in the IN in this case seems unnecessary; the
> “vmis_to_remove” list itself was just SELECTed two lines above.   There’s
> some in-Python filtering following it which does not seem necessary; the "
> alloc.vxlan_vni not in vxlan_vnis” phrase could just as well be a SQL
> “NOT IN” expression.  Not sure if determination of the “.allocated” flag
> can be done in SQL, if that’s a plain column, then certainly.Again not
> sure if this is just an artifact of how the test is done here, but if the
> goal is to optimize this code for speed, doing a DELETE…WHERE .. IN (SELECT
> ..) is probably better.   I see that the SELECT is using a lockmode, but it
> would seem that if just the rows we care to DELETE are inlined within the
> DELETE itself this wouldn’t be needed either.
>
> It’s likely 

Re: [openstack-dev] [nova] AggregateMultiTenancyIsolation scheduler filter - bug, or new feature proposal?

2014-06-09 Thread Belmiro Moreira
Hi Jesse,

I would say that is a documentation bug for the
“AggregateMultiTenancyIsolation” filter.


When this was implemented, the objective was to schedule only instances from
the specified tenants onto those aggregates, but not to make the aggregates exclusive.


That’s why the work on
https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
started but was left on hold because it was believed
https://blueprints.launchpad.net/nova/+spec/whole-host-allocation had some
similarities and eventually could solve the problem in a more generic way.


However, the p-clouds implementation is marked as "slow progress" and I believe
there is no active work on it at the moment.


It is probably a good time to review the "ProjectsToAggregateFilter" filter
again. The implementation and reviews are available at
https://review.openstack.org/#/c/28635/


One of the problems raised was a performance concern about the number
of DB queries required. However, this can be documented for people who intend to
enable the filter.

In the review there was also a discussion about a config option for the
old filter.
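
For illustration, the kind of "exclusive" behaviour being proposed could look
roughly like the sketch below. This is not the actual ProjectsToAggregateFilter
code; the metadata keys and the two lookup helpers are assumptions made for
the example:

    # Sketch only: an "exclusive" variant of tenant isolation in a scheduler
    # filter. Metadata keys and the helper functions are illustrative.
    class ExclusiveTenantIsolationFilter(object):

        def host_passes(self, host_state, filter_properties):
            spec = filter_properties['request_spec']
            tenant_id = spec['instance_properties']['project_id']

            # Hypothetical helper: merged metadata of the host's aggregates.
            metadata = aggregate_metadata_for(host_state)
            allowed = metadata.get('filter_tenant_id', set())
            exclusive = 'true' in metadata.get('filter_tenant_exclusive', set())

            # An exclusive aggregate only accepts the listed tenants.
            if exclusive and tenant_id not in allowed:
                return False

            # Hypothetical helper: is this tenant claimed by some *other*
            # exclusive aggregate? Then it may only be scheduled there.
            if tenant_claimed_elsewhere(tenant_id, host_state):
                return False

            return True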


cheers,

Belmiro


--

Belmiro Moreira

CERN

Email: belmiro.more...@cern.ch

IRC: belmoreira



On Mon, Jun 9, 2014 at 1:12 PM, Jesse Pretorius 
wrote:

> Hi everyone,
>
> We have a need to be able to dedicate a specific host aggregate to a list
> of tenants/projects. If the aggregate is marked as such, the aggregate may
> only be used by that specified list of tenants and those tenants may only
> be scheduled to that aggregate.
>
> The AggregateMultiTenancyIsolation filter almost does what we need - it
> pushes all new instances created by a specified tenant to the designated
> aggregate. However, it also seems to still see that aggregate as available
> for other tenants.
>
> The description in the documentation [1] states: "If a host is in an
> aggregate that has the metadata key filter_tenant_id it only creates
> instances from that tenant (or list of tenants)."
>
> This would seem to us either as a code bug, or a documentation bug?
>
> If the filter is working as intended, then I'd like to propose working on
> a patch to the filter which has an additional metadata field (something
> like 'filter_tenant_exclusive') which - when 'true' - will consider the
> filter_tenant_id list to be the only projects/tenants which may be
> scheduled onto the host aggregate, and the host aggregate to be the only
> one onto which that list of projects/tenants may be scheduled.
>
> Note that there has been some similar work done with [2] and [3]. [2]
> actually works as we expect, but as is noted in the gerrit comments it
> seems rather wasteful to add a new filter when we could use the existing
> filter as a base. [3] is a much larger framework to facilitate end-users
> being able to request a whole host allocation - while this could be a nice
> addition, it's overkill for what we're looking for. We're happy to
> facilitate this with a simple admin-only allocation.
>
> So - should I work on a nova-specs proposal for a change, or should I just
> log a bug against either nova or docs? :) Guidance would be appreciated.
>
> [1]
> http://docs.openstack.org/trunk/config-reference/content/section_compute-scheduler.html
> [2]
> https://blueprints.launchpad.net/nova/+spec/multi-tenancy-isolation-only-aggregates
> [3] https://blueprints.launchpad.net/nova/+spec/whole-host-allocation
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Using saltstack as orchestrator for fuel

2014-06-09 Thread Dmitriy Shulyak
Hi folks,

I know that some time ago saltstack was evaluated for use as the orchestrator
in fuel, so I've prepared an initial specification that addresses the basic
points of integration and the general requirements for an orchestrator.

In my opinion saltstack fits our needs perfectly, and we can benefit from
using a mature orchestrator that has its own community. I still don't have
all the answers, but I would like to ask all of you to start reviewing
the specification:

https://docs.google.com/document/d/1uOHgxM9ZT_2IdcmWvgpEfCMoV8o0Fk7BoAlsGHEoIfs/edit?usp=sharing

I will place it in the fuel-docs repo as soon as the specification is complete
enough to start a POC, or, if you think the spec should be placed there as is,
I can do that now.

Thank you
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Tuskar-UI] Location for common dashboard code?

2014-06-09 Thread Tzu-Mainn Chen
Hiya,

That makes sense.  Just to take a concrete example - in tuskar-ui, our
flavors' table code 
(https://github.com/openstack/tuskar-ui/blob/master/tuskar_ui/infrastructure/flavors/tables.py)
uses the following code from openstack_dashboard.dashboards.admin.flavors.tables
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/flavors/tables.py):

1) extends CreateFlavor LinkAction to modify the linked url
2) extends DeleteFlavor DeleteAction to add an 'allowed' check
3) uses FlavorFilterAction FilterAction
4) uses get_size and get_disk_size methods for formatting a column value

Would it be suggested that all of the above from the admin dashboard go
into openstack_dashboard/common?

I could see arguments for tuskar-ui not to extend 1) and simply create a
new LinkAction since we're using an entirely new url.  4) seems to me
like it might belong in the api code.  So would we just take
2) and 3) and stick them in openstack_dashboard/common/flavors/tables.py
or something. . . ?
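
For concreteness, the shared piece could be as small as something like this; the module path and class names are just an illustrative assumption on my part, nothing that's been agreed:

# Hypothetical openstack_dashboard/common/flavors/tables.py -- only the bits
# both dashboards actually share; each dashboard would keep its own
# CreateFlavor LinkAction with its own URL.  Names are illustrative only.
from horizon import tables


class FlavorFilterAction(tables.FilterAction):
    """Shared client-side name filter (item 3 above)."""

    def filter(self, table, flavors, filter_string):
        query = filter_string.lower()
        return [flavor for flavor in flavors
                if query in flavor.name.lower()]


class DeleteFlavorAllowedMixin(object):
    """Shared 'allowed' check (item 2 above); each dashboard mixes this into
    its own DeleteAction subclass and can restrict it further."""

    def allowed(self, request, flavor=None):
        # Common policy/in-use checks would live here; tuskar-ui could
        # override or extend this as needed.
        return True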

Thanks,
Tzu-Mainn Chen


- Original Message -
> I think this falls in line with other items we are working toward in
> Horizon, namely more pluggable components on panels.
> 
> I think creating a directory in openstack_dashboard for these reusable
> components makes a lot of sense, and usage should eventually be moved
> there.
> I would suggest something as mundane as "openstack_dashboard/common".
> 
> David
> 
> On 5/28/14, 10:36 AM, "Tzu-Mainn Chen"  wrote:
> 
> >Heya,
> >
> >Tuskar-UI is currently extending classes directly from
> >openstack-dashboard.  For example, right now
> >our UI for Flavors extends classes in both
> >openstack_dashboard.dashboards.admin.flavors.tables and
> >openstack_dashboard.dashboards.admin.flavors.workflows.  In the future,
> >this sort of pattern will
> >increase; we anticipate doing similar things with Heat code in
> >openstack-dashboard.
> >
> >However, since tuskar-ui is intended to be a separate dashboard that has
> >the potential to live
> >away from openstack-dashboard, it does feel odd to directly extend
> >openstack-dashboard dashboard
> >components.  Is there a separate place where such code might live?
> >Something similar in concept
> >to
> >https://github.com/openstack/horizon/tree/master/openstack_dashboard/usage
> > ?
> >
> >
> >Thanks,
> >Tzu-Mainn Chen
> >
> >___
> >OpenStack-dev mailing list
> >OpenStack-dev@lists.openstack.org
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Eoghan Glynn


> On 9 June 2014 09:44, Eoghan Glynn  wrote:
> 
> > Since "certification" seems to be quite an overloaded term
> > already, I wonder would a more back-to-basics phrase such as
> > "quality assured" better capture the Cinder project's use of
> > the word?
> >
> > It does exactly what it says on the tin ... i.e. captures the
> > fact that a vendor has run an agreed battery of tests against
> > their driver and the harness has reported green-ness with a
> > meaning that is well understood upstream (as the Tempest test
> > cases are in the public domain).
> 
> 
> I think 'quality-assured' makes a far stronger statement than
> 'certified'.

Hmmm, what kind of statement is made by the title of the program
under which the Tempest harness falls:

  
https://github.com/openstack/governance/blob/master/reference/programs.yaml#L247

The purpose of Quality Assurance is to assure quality, no?

So essentially anything that passes such QA tests has had its
quality assured in a well-understood sense?

> 'Certified' indicates that some configuration has been
> shown to work for some set of features, and some organisation is
> attesting to the fact that it is true. This is /exactly/ what the cinder
> team is attesting to, and this program was brought in
> _because_a_large_number_of_drivers_didn't_work_in_the_slightest_.
> Since it is the cinder team who are going to end up fielding support
> for cinder code, and the cinder team whose reputation is on the line
> over the quality of cinder code, I think we are exactly the people who
> can design a certification program, and that is exactly what we have
> done.

Sure, no issue at all with the Cinder team being best placed to
judge what works and what doesn't in terms of Cinder backends.

Just gently suggesting that due to the terminology-overload, it
might be wise to choose a term with fewer connotations.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration Ideas

2014-06-09 Thread Evgeny Fedoruk
Hi All,

A Spec RST  document was added to Gerrit for review
https://review.openstack.org/#/c/98640

You are welcome to start commenting on it for any open discussions.
I tried to address each aspect being discussed;
please add comments about anything that is missing.

Thanks,
Evgeny


-Original Message-
From: Samuel Bercovici 
Sent: Monday, June 09, 2014 9:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Samuel Bercovici; Evgeny Fedoruk
Subject: RE: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS 
Integration Ideas

Hi,

I think that option 2 should be preferred at this stage.
I also think that certificates should be immutable; if you want a new one,
create a new one and update the listener to use it.
This removes any chance of mistakes, the need for versioning, etc.

-Sam.

-Original Message-
From: Jorge Miramontes [mailto:jorge.miramon...@rackspace.com] 
Sent: Friday, June 06, 2014 10:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] Barbican Neutron LBaaS Integration 
Ideas

Hey everyone,

Per our IRC discussion yesterday I'd like to continue the discussion on how 
Barbican and Neutron LBaaS will interact. There are currently two ideas in play 
and both will work. If you have another idea, please feel free to add it so that we 
may evaluate all the options relative to each other.
Here are the two current ideas:

1. Create an eventing system for Barbican that Neutron LBaaS (and other
services) consumes to identify when to update/delete updated secrets from 
Barbican. For those that aren't up to date with the Neutron LBaaS API Revision, 
the project/tenant/user provides a secret (container?) id when enabling SSL/TLS 
functionality.

* Example: If a user makes a change to a secret/container in Barbican then 
Neutron LBaaS will see an event and take the appropriate action.

PROS:
 - Barbican is going to create an eventing system regardless so it will be 
supported.
 - Decisions are made on behalf of the user, which lessens the number of calls 
the user has to make.

CONS:
 - An eventing framework can become complex especially since we need to ensure 
delivery of an event.
 - Implementing an eventing system will take more time than option #2… I think.

2. Push orchestration decisions to API users. This idea comes with two 
assumptions. The first assumption is that most providers' customers use the 
cloud via a GUI, which in turn can handle any orchestration decisions that need 
to be made. The second assumption is that power API users are savvy and can 
handle their decisions as well. Using this method requires services, such as 
LBaaS, to "register" themselves in the form of metadata on a Barbican container.

* Example: If a user makes a change to a secret the GUI can see which services 
are registered and opt to warn the user of consequences. Power users can look 
at the registered services and make decisions how they see fit.

PROS:
 - Very simple to implement. The only code needed to make this a reality is at 
the control plane (API) level.
 - This option is more loosely coupled than option #1.

CONS:
 - Potential for services to not register/unregister. What happens in this case?
 - Pushes complexity of decision making on to GUI engineers and power API users.
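
To make option #2 a bit more concrete, the "registration" could be as small as POSTing a piece of metadata against the container. The endpoint shape and payload below are purely illustrative assumptions, not an agreed Barbican API:

# Sketch of an LBaaS control plane "registering" itself on a Barbican
# container so that GUIs / power API users can see which services depend
# on a secret before changing it.  The URL path and payload fields are
# assumptions for illustration only.
import json
import requests


def register_consumer(barbican_endpoint, token, container_id,
                      service_name, resource_url):
    payload = {
        "name": service_name,   # e.g. "neutron-lbaas"
        "URL": resource_url,    # e.g. the listener that uses the TLS container
    }
    resp = requests.post(
        "%s/v1/containers/%s/consumers" % (barbican_endpoint, container_id),
        data=json.dumps(payload),
        headers={"X-Auth-Token": token,
                 "Content-Type": "application/json"})
    resp.raise_for_status()
    return resp.json()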


I would like to get a consensus on which option to move forward with ASAP since 
the hackathon is coming up and delivering Barbican to Neutron LBaaS integration 
is essential to exposing SSL/TLS functionality, which almost everyone has 
stated is a #1/#2 priority.

I'll start the decision making process by advocating for option #2. My reason 
for choosing option #2 has to do mostly with the simplicity of implementing 
such a mechanism. Simplicity also means we can implement the necessary code and 
get it approved much faster which seems to be a concern for everyone. What 
option does everyone else want to move forward with?



Cheers,
--Jorge


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] use of the word certified

2014-06-09 Thread Duncan Thomas
On 6 June 2014 18:29, Anita Kuno  wrote:
> So there are certain words that mean certain things, most don't, some do.
>
> If words that mean certain things are used then some folks start using
> the word and have expectations around the word and the OpenStack
> Technical Committee and other OpenStack programs find themselves on the
> hook for behaviours that they didn't agree to.
>
> Currently the word under discussion is "certified" and its derivatives:
> certification, certifying, and others with root word "certificate".
>
> This came to my attention at the summit with a cinder summit session
> with one of the certificate words in the title. I had thought my
> point had been made but it appears that there needs to be more
> discussion on this. So let's discuss.
>
> Let's start with the definition of certify:
> cer·ti·fy
> verb (used with object), cer·ti·fied, cer·ti·fy·ing.
> 1. to attest as certain; give reliable information of; confirm: He
> certified the truth of his claim.

So the cinder team are attesting that a set of tests have been run
against a driver: a certified driver.

> 3. to guarantee; endorse reliably: to certify a document with an
> official seal.

We (the cinder team) are guaranteeing that the driver has been
tested, in at least one configuration, and found to pass all of the
tempest tests. This is a far better state than we were in 6 months
ago, when many drivers didn't even pass a smoke test.

> 5. to award a certificate to (a person) attesting to the completion of a
> course of study or the passing of a qualifying examination.

The cinder cert process is pretty much an exam.


I think the word certification covers exactly what we are doing. Given
cinder-core are the people on the hook for any cinder problems
(including vendor specific ones), and the cinder core are the people
who get bad-mouthed when there are problems (including vendor specific
ones), I think this level of certification gives us value.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] TLS support RST document on Gerrit

2014-06-09 Thread Evgeny Fedoruk
Hi All,



A Spec. RST  document for LBaaS TLS support was added to Gerrit for review

https://review.openstack.org/#/c/98640



You are welcome to start commenting on it for any open discussions.

I tried to address each aspect being discussed; please add comments about
anything that is missing.



Thanks,

Evgeny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-tc] use of the word certified

2014-06-09 Thread Duncan Thomas
On 9 June 2014 09:44, Eoghan Glynn  wrote:

> Since "certification" seems to be quite an overloaded term
> already, I wonder would a more back-to-basics phrase such as
> "quality assured" better capture the Cinder project's use of
> the word?
>
> It does exactly what it says on the tin ... i.e. captures the
> fact that a vendor has run an agreed battery of tests against
> their driver and the harness has reported green-ness with a
> meaning that is well understood upstream (as the Tempest test
> cases are in the public domain).


I think 'quality-assured' makes a far stronger statement than
'certified'. 'Certified' indicates that some configuration has been
shown to work for some set of features, and some organisation is
attesting to the fact that it is true. This is /exactly/ what the cinder
team is attesting to, and this program was brought in
_because_a_large_number_of_drivers_didn't_work_in_the_slightest_.
Since it is the cinder team who are going to end up fielding support
for cinder code, and the cinder team whose reputation is on the line
over the quality of cinder code, I think we are exactly the people who
can design a certification program, and that is exactly what we have
done.


-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Stateful Applications on OpenStack

2014-06-09 Thread hossein zabolzadeh
Hi OpenStack Development Community,
I know that OpenStack's ambition is to become a cloud computing operating
system, and this simple sentence means: "Say goodbye to Stateful
Applications".
But, as you know, we are in the transition phase from stateful apps to
stateless apps (remember the pets and cattle example). Legacy apps are still
in use, so how can OpenStack address the problems of running stateful
applications (e.g. HA, DR, FT, R, ...)?
HA: High Availability
DR: Disaster Recovery
FT: Fault Tolerance
R: Resiliency!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

