Re: [openstack-dev] Gerrit downtime and upgrade on 2014-04-28

2014-04-28 Thread Chmouel Boudjnah
On Wed, Apr 23, 2014 at 12:40 AM, James E. Blair wrote:

> There are a few changes that will impact developers.  We will have more
> detailed documentation about this soon, but here are the main things you
> should know about:
>

What plugins are going to be enabled under gerrit?

I am asking because I know there is a PyCharm/IntelliJ plugin that allows
doing gerrit reviews, and it needs the 'download-commands' gerrit
plugin (http://is.gd/X9vIJ0) to work properly.

Would love to be able to use that and to look into how to make an emacs
mode out of it.

Chmouel
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] should we limit threshold value to be postive

2014-04-28 Thread Julien Danjou
On Mon, Apr 28 2014, ZhiQiang Fan wrote:

> Hi, developers,
>
> When I test the ceilometer threshold alarm, I find that there is no
> limitation for the threshold value, which means we can set it to negative
> value, but I didn't find any volume of meters will be negative, (if I'm
> wrong, please let me know, thanks)

I imagine that everything we measure right now is positive, but gauge
meters can be negative.

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] VIF event callbacks implementation

2014-04-28 Thread Mike Kolesnik
Hi, 

I came across the implementation of 
https://blueprints.launchpad.net/neutron/+spec/nova-event-callback
and have a question about the way it was implemented.

I notice that now Neutron has a dependency on Nova and needs to be configured
to have nova details (API endpoint, user, password, tenant, etc).
Aside from creating a sort of cyclic dependency between the two, it is my
understanding that Neutron is meant to be a "stand alone" service capable of
being consumed by other compute managers (i.e. oVirt).
This breaks that paradigm.

So my question is: Why use API and not RPC?

I saw that there is already a notification system in Neutron that notifies on
each port update (among other things) which are currently consumed by 
Ceilometer.
Why not have Nova use those notifications to decide that a VIF got plugged 
correctly,
floating IPs changed, and so on?
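
A rough sketch of what consuming those notifications could look like, using
oslo.messaging's generic notification listener API (the event name and the
handling below are illustrative assumptions, not an actual Nova implementation):

    from oslo_config import cfg
    import oslo_messaging

    class PortEventEndpoint(object):
        """React to Neutron notifications such as port.update.end."""
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type == 'port.update.end':
                port = payload.get('port', {})
                # Illustrative only: here Nova could decide the VIF is plugged
                print('port %s is now %s' % (port.get('id'), port.get('status')))

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [PortEventEndpoint()])
    listener.start()
    listener.wait()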

I am willing to make the necessary changes to decouple Neutron from Nova, but
want to understand the rationale behind the original decision of using API
and not RPC notifications.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Deepak Shetty
Hi,

H703  Multiple positional placeholders

I got this for one of my patches, and from googling I found that the fix is to
use a dict instead of direct substitutes.. which I did.. but it still gives me
the error :(
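
For reference, a minimal sketch of the kind of change H703 asks for (variable
names and values are made up for illustration):

    _ = lambda s: s  # stand-in for the project's translation marker

    share, mount_path = '192.168.0.5:/vol', '/mnt/gluster'  # made-up values

    # Triggers H703: more than one positional placeholder in a translatable string
    msg = _("Failed to mount %s at %s") % (share, mount_path)

    # The fix: named placeholders substituted from a dict, so translators can
    # reorder them without breaking the formatting
    msg = _("Failed to mount %(share)s at %(path)s") % {'share': share,
                                                        'path': mount_path}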

Also, just running pep8 locally on my glusterfs.py file doesn't show any
issue, but gerrit does.
So how do I run the same pep8 that gerrit does locally on my box, so that I
don't end up resending new patches due to failed gerrit build checks?

thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Sean Dague
On 04/28/2014 06:08 AM, Deepak Shetty wrote:
> Hi,
> 
> H703  Multiple positional placeholders
> 
> I got this for one of my patch and googling i could find that the fix is
> to use
> dict instead of direct substitues.. which i did.. but it still gives me
> the error :(
> 
> Also just running pep8 locally on my glsuterfs.py file doesn't show any
> issue
> but gerrit does.
> So how do i run the same pep8 that gerrit does locally on my box, so
> that I don't end up resending new patches due to failed gerrit build
> checks ?

tox -epep8

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Deepak Shetty
Why is this not part of the cinder or devstack deps, and why isn't it auto
installed?
I searched the HACKING and CONTRIBUTING docs.. none of them explain how to
sanity check your code before posting it to gerrit ... yum search and pip
install didn't help me install tox-epep8...
How do I proceed ?


On Mon, Apr 28, 2014 at 3:39 PM, Sean Dague  wrote:

> On 04/28/2014 06:08 AM, Deepak Shetty wrote:
> > Hi,
> >
> > H703  Multiple positional placeholders
> >
> > I got this for one of my patch and googling i could find that the fix is
> > to use
> > dict instead of direct substitues.. which i did.. but it still gives me
> > the error :(
> >
> > Also just running pep8 locally on my glsuterfs.py file doesn't show any
> > issue
> > but gerrit does.
> > So how do i run the same pep8 that gerrit does locally on my box, so
> > that I don't end up resending new patches due to failed gerrit build
> > checks ?
>
> tox -epep8
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] PyCharm Professional Edition licence

2014-04-28 Thread Ilya Sviridov
Hello MagnetoDB community,

Thanks to JetBrains, we have a PyCharm Professional Edition licence for every
MagnetoDB project contributor.

We have issued an OS license for your project. The license key should arrive in
your email in a separate message shortly. Please feel free to share this
key with other project contributors (via secured channels only: please do
not use a public forum or mailing list to share license keys).



If you are interested, please contact me via e-mail or IRC.

Ilya Sviridov
isviridov @FreeNode
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Deepak Shetty
[stack@devstack-vm cinder]$ sudo pip install tox-epep8
Downloading/unpacking tox-epep8
  Could not find any downloads that satisfy the requirement tox-epep8
Cleaning up...
No distributions at all found for tox-epep8
Storing complete log in /root/.pip/pip.log

[stack@devstack-vm cinder]$ sudo yum search tox-epep8
Warning: No matches found for: tox-epep8
No matches found
[stack@devstack-vm cinder]$



On Mon, Apr 28, 2014 at 3:39 PM, Sean Dague  wrote:

> On 04/28/2014 06:08 AM, Deepak Shetty wrote:
> > Hi,
> >
> > H703  Multiple positional placeholders
> >
> > I got this for one of my patch and googling i could find that the fix is
> > to use
> > dict instead of direct substitues.. which i did.. but it still gives me
> > the error :(
> >
> > Also just running pep8 locally on my glsuterfs.py file doesn't show any
> > issue
> > but gerrit does.
> > So how do i run the same pep8 that gerrit does locally on my box, so
> > that I don't end up resending new patches due to failed gerrit build
> > checks ?
>
> tox -epep8
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Avishay Traeger
Deepak,
Sean meant that 'tox -epep8' is the command that runs the pep8 checks.
You can install tox with 'pip install tox' and pep8 with 'pip install
pep8'.  Once you have those, run 'tox -epep8'
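
For example (run from the top of the cinder source tree, assuming pip is
available):

    $ pip install tox
    $ tox -epep8    # runs the same flake8/hacking checks as gerrit's pep8 job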

Thanks,
Avishay


On Mon, Apr 28, 2014 at 1:15 PM, Deepak Shetty  wrote:

> [stack@devstack-vm cinder]$ sudo pip install tox-epep8
> Downloading/unpacking tox-epep8
>   Could not find any downloads that satisfy the requirement tox-epep8
> Cleaning up...
> No distributions at all found for tox-epep8
> Storing complete log in /root/.pip/pip.log
>
> [stack@devstack-vm cinder]$ sudo yum search tox-epep8
> Warning: No matches found for: tox-epep8
> No matches found
> [stack@devstack-vm cinder]$
>
>
>
> On Mon, Apr 28, 2014 at 3:39 PM, Sean Dague  wrote:
>
>> On 04/28/2014 06:08 AM, Deepak Shetty wrote:
>> > Hi,
>> >
>> > H703  Multiple positional placeholders
>> >
>> > I got this for one of my patch and googling i could find that the fix is
>> > to use
>> > dict instead of direct substitues.. which i did.. but it still gives me
>> > the error :(
>> >
>> > Also just running pep8 locally on my glsuterfs.py file doesn't show any
>> > issue
>> > but gerrit does.
>> > So how do i run the same pep8 that gerrit does locally on my box, so
>> > that I don't end up resending new patches due to failed gerrit build
>> > checks ?
>>
>> tox -epep8
>>
>> -Sean
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Day, Phil
> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: 25 April 2014 23:29
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Proposal: remove the server groups
> feature
> 
> On Fri, 2014-04-25 at 22:00 +, Day, Phil wrote:
> > Hi Jay,
> >
> > I'm going to disagree with you on this one, because:
> 
> No worries, Phil, I expected some dissention and I completely appreciate
> your feedback and perspective :)
>
Always happy to meet your expectations ;-)

Seems that where we mainly disagree is on the usability of the current Server
Group API - and maybe we just need a wider range of views to see where the
majority feeling is on that.

The folks we've got who have been looking for affinity/anti-affinity scheduling
(we've been holding off from the previous filters because they include DB
lookups) don't seem to find the Server Groups semantics confusing - you know
you want to do something with a group of servers so you create a group, define
the properties for that group, and add servers to it as you create them.

I agree there are a number of things needed to round this out (such as 
add/remove server, and some form of quota on the maximum size of a group), but 
I just don't see the  basic approach as broken in the way that you do - and I 
am worried that we end up spinning on much needed functionality if we start to 
rework it now.

The tagging approach (if I understand it correctly) seems like it would start
to introduce system semantics for values that are currently just user-defined
free text - which I think might lead to more confusion / name-space clashes
around which tags are now in effect reserved names and which are still user
defined. I think I prefer the clearer separation.

Phil
 
> > i) This is a feature that was discussed in at least one if not two Design
> Summits and went through a long review period, it wasn't one of those
> changes that merged in 24 hours before people could take a good look at it.
> 
> Completely understood. That still doesn't mean we can't propose to get rid
> of it early instead of letting it sit around when an alternate implementation
> would be better for the user of OpenStack.
> 
> >   Whatever you feel about the implementation,  it is now in the API and we
> should assume that people have started coding against it.
> 
> Sure, maybe. AFAIK, it's only in the v2 API, though, not in the v3 API 
> (sorry, I
> made a mistake about that in my original email). Is there a reason it wasn't
> added to the v3 API?
> 
> >   I don't think it gives any credibility to Openstack as a platform if we 
> > yank
> features back out just after they've landed.
> 
> Perhaps not, though I think we have less credibility if we don't recognize
> when a feature isn't implemented with users in mind and leave it in the code
> base to the detriment and confusion of users. We absolutely must, IMO, as a
> community, be able to say "this isn't right"
> and have a path for changing or removing something.
> 
> If that path is deprecation vs outright removal, so be it, I'd be cool with 
> that.
> I'd just like to nip this anti-feature in the bud early so that it doesn't 
> become
> the next "feature" like file-injection to persist in Nova well after its time 
> has
> come and passed.
> 
> > ii) Sever Group - It's a way of defining a group of servers, and the initial
> thing (only thing right now) you can define for such a group is the affinity 
> or
> anti-affinity for scheduling.
> 
> We already had ways of defining groups of servers. This new "feature"
> doesn't actually define a group of servers. It defines a policy, which is not
> particularly useful, as it's something that is better specified at the time of
> launching.
> 
> >   Maybe in time we'll add other group properties or operations - like
> "delete all the servers in a group" (I know some QA folks that would love to
> have that feature).
> 
> We already have the ability to define a group of servers using key=value tags.
> Deleting all servers in a group is a three-line bash script that loops over 
> the
> results of a nova list command and calls nova delete.
> Trust me, I've done group deletes in this way many times.
> 
> >   I don't see why it shouldn't be possible to have a server group that 
> > doesn't
> have a scheduling policy associated to it.
> 
> I don't think the grouping of servers should have *anything* to do with
> scheduling :) That's the point of my proposal. Servers can and should be
> grouped using simple tags or key=value pair tags.
> 
> The grouping of servers together doesn't have anything of substance to do
> with scheduling policies.
> 
> >I don't see any  Cognitive dissonance here - I think your just assuming 
> > that
> the only reason for being able to group servers is for scheduling.
> 
> Again, I don't think scheduling and grouping of servers has anything to do
> with each other, thus my proposal to remove the relationship between
> groups of se

Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression

2014-04-28 Thread Duncan Thomas
Regex matching in APIs can be a dangerous source of DoS attacks - see
http://en.wikipedia.org/wiki/ReDoS. Unless this is mitigated sensibly,
I will continue to resist any cinder patch that adds them.

Glob matches might be safer?
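
A minimal sketch of the difference (names and patterns are illustrative only):

    import fnmatch
    import re

    names = ['volume-001', 'volume-002', 'backup-01']

    # Glob matching: no backtracking, effectively linear time
    print(fnmatch.filter(names, 'volume-*'))

    # A user-supplied regex like this one can backtrack exponentially on
    # crafted input (the ReDoS risk above); uncomment to watch it hang:
    # re.match(r'^(a+)+$', 'a' * 30 + '!')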

On 26 April 2014 05:02, Zhangleiqiang (Trump)  wrote:
> Hi, all:
>
> I see Nova allows search instances by name, ip and ip6 fields which 
> can be normal string and regular expression:
>
> [stack@leiqzhang-stack cinder]$ nova help list
>
> List active servers.
>
> Optional arguments:
> --ip   Search with regular expression match by 
> IP address
> (Admin only).
> --ip6 Search with regular expression match by 
> IPv6 address
>  (Admin only).
> --name   Search with regular expression match by 
> name
> --instance-name  Search with regular expression 
> match by server name
> (Admin only).
>
> I think it is also needed for Cinder when query the 
> volume/snapshot/backup by name. Any advice?
>
> --
> zhangleiqiang (Trump)
>
> Best Regards
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Heat Windows templates contribution

2014-04-28 Thread Alessandro Pilotti
Hi all,

Following up on various conversations during the Icehouse cycle, we'd like to
contribute the Heat templates work that we did at Cloudbase, partly available
at:
https://github.com/cloudbase/windows-heat-templates

There’s also a BP for that 
https://blueprints.launchpad.net/heat/+spec/windows-instances and a document 
discussing the critical Windows integration areas (linked in the BP): 
http://wiki.cloudbase.it/heat-windows

I'm sending this now so that if anybody is interested in the topic we can start
some discussions before heading to Atlanta's design sessions.

At the current stage we are running templates of any size and type on Havana
and Icehouse without problems with Cloudbase-Init, so there are no particular
blocking issues, but it'd be great to have a community discussion about, for
example, what to do with porting the CFN tools to Windows and how to make the
Heat-produced Nova userdata metadata less Linux-dependent.


Thanks!

Alessandro





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][globalization] Need input on how to proceed .

2014-04-28 Thread Duncan Thomas
Two separate patches, or even two chains of separate patches, will
make reviewing and more importantly (hopefully temporary) backouts
easier. It will also reduce the number of merge conflicts, which are
still likely to be substantial.

There's no benefit at all to all of this being done in one patch, and
substantial costs. Doing the conversion by sections seems like the way
forward.

Doing both around the same time (maybe as dependent patches) seems reasonable

On 27 April 2014 00:20, Jay S. Bryant  wrote:
> All,
>
> I am looking for feedback on how to complete implementation of i18n
> support for Cinder.  I need to open a new BluePrint for Juno as soon as
> the cinder-specs process is available.  In the mean time I would like to
> start working on this and need feedback on the scope I should undertake
> with this.
>
> First, the majority of the code for i18n support went in with Icehouse.
> There is just a small change that is needed to actually enable Lazy
> Translation again.  I want to get this enabled as soon as possible to
> get plenty of runtime on the code for Icehouse.
>
> The second change is to add an explicit export for '_' to all of our
> files to be consistent with other projects. [1]  This is also the safer
> way to implement i18n.  My plan is to integrate the change as part of
> the i18n work.  Unfortunately this will touch many of the files in
> Cinder.
>
> Given that fact, this brings me to the item I need feedback upon.  It
> appears that Nova is moving forward with the plan to remove translation
> of debug messages as there was a recent patch submitted to enable a
> check for translated DEBUG messages.  Given that fact, would it be an
> appropriate time, while adding the explicit import of '_' to also remove
> translation of debug messages.  It is going to make the commit for
> enabling Lazy Translation much bigger, but it would also take out
> several work items that need to be addressed at once.  I am willing to
> undertake the effort if I have support for the changes.
>
> Please let me know your thoughts.
>
> Thanks!
> Jay
> (jungleboyj on freenode)
>
> [1] https://bugs.launchpad.net/cinder/+bug/1306275
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [MagnetoDB] PyCharm Professional Edition licence

2014-04-28 Thread Serge Kovaleff
Oh! Great! I suppose we should do that privately. I would like one :)


On Mon, Apr 28, 2014 at 1:15 PM, Ilya Sviridov wrote:

> Hello MagnetoDB community,
>
> Thanks JetBrains, we have PyCharm Professional Edition licence for every
> MagnetoDB project contributor.
>
> We have issued an OS license for your project. License key should arrive
> to your email in a separate message shortly. Please feel free to share this
> key with other project contributors (via secured channels only: please do
> not use a public forum or mailing list to share license keys).
>
>
>
> If you are interested, please contact me via e-mail or IRC.
>
> Ilya Sviridov
> isviridov @FreeNode
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Approximate alarming

2014-04-28 Thread Eoghan Glynn


- Original Message -
> Hey everyone!
> 
> I’d like to get your gut reaction on an idea for the future of alarming.
> Should I or should I not put it up for debate at the design summit?

Hi Nejc,

Yes this is certainly worthy of discussion at the design summit.

Because the algorithms being discussed are quite technical, it
would be good to prepare an etherpad in advance with a gentle
introduction to the ideas involved, e.g. some worked examples etc.

This would make for a more profitable discussion at summit, where
you don't have to burn too much time explaining the intricacies of
the streaming approaches.

Cheers,
Eoghan


> ---TL;DR
> Online algorithms for computing stream statistics over sliding windows would
> allow us to provide sample statistics within an error bound (e.g. "The
> average cpu utilization in the last hour was 85% +/- 1%”), while
> significantly reducing the load and memory requirements of the computation.
> —
> 
> Alarm evaluation currently recalculates the aggregate values each time the
> alarm is evaluated, which is problematic because of the load it puts on the
> system. There have been multiple ideas on how to solve this problem, from
> precalculating aggregate values
> (https://wiki.openstack.org/wiki/Ceilometer/Alerting#Precalculation_of_aggregate_values)
> to re-architecting the alarms into the sample pipeline
> (https://wiki.openstack.org/wiki/Ceilometer/AlarmImprovements). While
> Sandy's suggestions make sense from the performance viewpoint, the problem
> of scalability remains. Samples in the pipeline need to be kept in-memory
> for the whole evaluation window, which requires O(N) memory for a window of
> size N.
> 
> We could tackle this problem by using cutting edge research in streaming
> algorithms, namely the papers by Datar et al. [1], and Arasu et al. [2].
> They provide algorithms for computing stream statistics over sliding
> windows, such as *count, avg, min, max* and even *percentile*, **online**
> and with polylogarithmic space requirements. The tradeoff is of course
> precision, but the algorithms are bounded on the relative error - which
> could be specified by the user.
> 
> If we can tell the user "The average cpu utilization in the last hour was 85%
> +/- 1%", would that not be enough for most use cases, while severely
> reducing the load on the system? We could still support *error_rate=0*,
> which would simply use O(N) space and provide a precise answer for the cases
> where such an answer is needed.
> 
> These algorithms were developed with telcos and computer network monitoring
> in mind, "in which information about current network performance—latency,
> bandwidth, etc.—is generated online and is used to monitor and adjust
> network performance dynamically"[1]. IIUC the main user of alarms is Heat
> autoscaling, which is exactly the kind of problem suitable to 'soft'
> calculations, with a certain tolerance for error.
> 
> [1] Datar, Mayur, et al. "Maintaining stream statistics over sliding
> windows." *SIAM Journal on Computing* 31.6 (2002): 1794-1813. PDF @
> http://ilpubs.stanford.edu:8090/504/1/2001-34.pdf
> 
> [2] Arasu, Arvind, and Gurmeet Singh Manku. "Approximate counts and quantiles
> over sliding windows." *Proceedings of the twenty-third ACM
> SIGMOD-SIGACT-SIGART symposium on Principles of database systems.* ACM,
> 2004. PDF @ http://ilpubs.stanford.edu:8090/624/1/2003-72.pdf
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
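
For reference, a minimal sketch of the exact O(N) sliding-window computation
that the cited streaming algorithms would approximate with polylogarithmic
space (class and method names are illustrative only):

    import time
    from collections import deque

    class ExactSlidingAverage(object):
        """Keeps every sample in the window, i.e. the O(N) memory cost."""

        def __init__(self, window_seconds):
            self.window = window_seconds
            self.samples = deque()  # (timestamp, value) pairs

        def add(self, value, timestamp=None):
            now = timestamp if timestamp is not None else time.time()
            self.samples.append((now, value))
            self._evict(now)

        def average(self, now=None):
            self._evict(now if now is not None else time.time())
            if not self.samples:
                return None
            return sum(v for _, v in self.samples) / len(self.samples)

        def _evict(self, now):
            while self.samples and self.samples[0][0] < now - self.window:
                self.samples.popleft()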

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] detailed git commit messages

2014-04-28 Thread Lucas Alvares Gomes
> We've all been pretty lax about the amount of detail that we put in commit
> messages some times, and I'd like to change that as we start Juno
> development. Why? Well, just imagine that, six months from now, you're going
> to write a document describing *all* the changes in Juno, just based on the
> commit messages...
>
> The git commit message should be a succinct but complete description of the
> changes in your patch set. If you can't summarize the change in a few
> paragraphs, perhaps that's a sign the patch should be split up! So, I'm
> going to start -1'ing patches if I don't think the commit message has enough
> detail in it. I would like to encourage other cores to do the same.
>
> What's "enough" detail? It's subjective, but there are some lengthy and
> detailed guidelines here that everyone should be familiar with :)
>   https://wiki.openstack.org/wiki/GitCommitMessages

Agreed, I think it's important that we start improving our commit messages.

>
>
> Cheers,
> Devananda
>
>
> (If English isn't your native language, feel free to ask in channel for a
> little help writing the summary.)

I want to make a point here: the project has many non-native English
speakers right now and, of course, you can expect grammar/spelling
errors from us, so I think we should be a bit flexible about -1'ing
things for these reasons; nobody wants to keep bugging other people
every time he/she needs to write a commit message.

Also, if a patch is -1'ed for this reason, please make sure you also
leave a suggestion/correction in the comments as part of the review;
do not -1 it saying it's wrong without providing replacement text.

Thanks,
Lucas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] detailed git commit messages

2014-04-28 Thread Roman Prykhodchenko
That seems to be reasonable to me.

Perhaps we should define a more or less formal format for commit
messages? That might both help newcomers to write them and make it
easier for us to write sensible "What's changed" documents based on
those commit messages.
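
For instance, a minimal sketch of what such a format could look like (the
content below is made up; the structure follows the GitCommitMessages wiki
guidance of a short summary line, a wrapped body, and trailer lines):

    Add retry handling to the IPMI power driver

    Retry the IPMI power-off call up to three times before marking the
    node as failed, and log each retry at DEBUG level so operators can
    see flaky BMCs in the conductor logs.

    Closes-Bug: #1234567
    Implements: blueprint example-blueprint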


- Roman


On Mon, 28 Apr 2014 14:41:58 +0300, Lucas Alvares Gomes wrote:
>> We've all been pretty lax about the amount of detail that we put in commit
>> messages some times, and I'd like to change that as we start Juno
>> development. Why? Well, just imagine that, six months from now, you're going
>> to write a document describing *all* the changes in Juno, just based on the
>> commit messages...
>>
>> The git commit message should be a succinct but complete description of the
>> changes in your patch set. If you can't summarize the change in a few
>> paragraphs, perhaps that's a sign the patch should be split up! So, I'm
>> going to start -1'ing patches if I don't think the commit message has enough
>> detail in it. I would like to encourage other cores to do the same.
>>
>> What's "enough" detail? It's subjective, but there are some lengthy and
>> detailed guidelines here that everyone should be familiar with :)
>>   https://wiki.openstack.org/wiki/GitCommitMessages
>
> Agreed, I think it's important that we start improving our commit messages.
>
>>
>>
>> Cheers,
>> Devananda
>>
>>
>> (If English isn't your native language, feel free to ask in channel for a
>> little help writing the summary.)
>
> I want to make a point here, the project has many non-native english
> speakers right now and, of course, you can expect grammar/spelling
> errors from us, so I think we should be a bit flexible about -1'ing
> things for these reasons, nobody wants to keep bugging other people
> every time he/she needs to write a commit message.
>
> Also, if a patch is -1'ed for this reason please make sure you also
> leave in the comments a suggestion/correction as part of the review,
> do not -1 it saying it's wrong without a replacement text.
>
> Thanks,
> Lucas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] pep8 issues and how to pep8 locally ?

2014-04-28 Thread Kashyap Chamarthy
On Mon, Apr 28, 2014 at 03:47:14PM +0530, Deepak Shetty wrote:
> Why is this not part of cinder or devstack dep and why isn't this auto
> installed ?
> I searched the HACKING and CONTRIBUTING docs.. none of them explain how to
> sanity check your code before posting it to gerrit ... yum search and pip
> install didn't help me install tox-epep8...
>  How do i proceed ?

Seems like you're searching for the wrong packages. Try this:

$ yum install python-tox python-pep8 -y

That should bring you what you need. Then, you can try:

$ tox -epep8

PS: Per Fedora packaging conventions, all PyPI packages are prefixed
with 'python-'

-- 
/kashyap



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] How to implement and configure a new Neutron vpnaas driver from scratch?

2014-04-28 Thread Paul Michali (pcm)

On Apr 26, 2014, at 7:39 AM, Julio Carlos Barrera Juez wrote:

I'm trying to configure any VPNaaS plugin in single-provider mode. I'm not able 
to achieve this goal. I'm using a devstack installation and I'm editing 
/etc/neutron/neutron.conf file, modifying this line:

...
service_provider=VPN:cisco_csr:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
...

and /etc/neutron/vpn_agent.ini, modifying this line:

...
vpn_device_driver=neutron.services.vpn.device_drivers.ipsec.IPsecDriver…

PCM: So what are you modifying these lines to? Are they pointing to valid 
modules?



I'm not sure if this configuration is OK. I have some doubts:

- Is this configuration a valid one, taking into account that the plugins are
available in the Python modules path?

PCM: Sorry, I don’t understand what you’re asking here.


- Where are the log files located to check valid neutron configuration?

PCM: There is a variable in DevStack's localrc to set where the logs are placed. 
For example:

SCREEN_LOGDIR=/opt/stack/screen-logs

I don’t know what the default is (disabled?).


- What services should I restart each time I change this configuration?

PCM: q-svc for the service_driver, and q-vpn for the device_driver (and maybe 
q-aft?).


What I do is modify vpn_agent.ini in /opt/stack/neutron/etc/; then, using a
newer DevStack that has my commit for VPN
(https://review.openstack.org/#/c/86567/), /etc/neutron/vpn_agent.ini will be
set with the desired device driver and that will be loaded at start up.

Also, I apply a patch to DevStack's lib/neutron and
lib/neutron_plugins/services/vpn to set up neutron.conf as well, so that, again,
/etc/neutron/neutron.conf is set up and stack.sh will do the right thing. The
patch is:

patch -p 1 << EOT
diff --git a/lib/neutron b/lib/neutron
index 02dcaf6..452281b 100644
--- a/lib/neutron
+++ b/lib/neutron
@@ -728,6 +728,7 @@ function _configure_neutron_fwaas {
 function _configure_neutron_vpn {
 neutron_vpn_install_agent_packages
 neutron_vpn_configure_common
+neutron_vpnaas_configure_driver
 }

 # _configure_neutron_plugin_agent() - Set config files for neutron plugin agent
diff --git a/lib/neutron_plugins/services/vpn b/lib/neutron_plugins/services/vpn
index d920ba6..a676fdc 100644
--- a/lib/neutron_plugins/services/vpn
+++ b/lib/neutron_plugins/services/vpn
@@ -18,6 +18,10 @@ function neutron_vpn_configure_common {
 _neutron_service_plugin_class_add $VPN_PLUGIN
 }

+function neutron_vpnaas_configure_driver() {
+iniset_multiline $NEUTRON_CONF service_providers service_provider 
"VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default"
+}
+
 function neutron_vpn_stop {
 local ipsec_data_dir=$DATA_DIR/neutron/ipsec
 local pids
EOT


Regards,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



Thank you very much.

Julio C. Barrera Juez
Office phone: +34 93 357 99 27
Distributed Applications and Networks Area (DANA)
i2CAT Foundation, Barcelona, Spain
http://dana.i2cat.net


On 24 April 2014 16:14, Paul Michali (pcm) 
mailto:p...@cisco.com>> wrote:
Not sure I quite understand the question, but configuring VPNaaS in single
provider mode is the same from a user's perspective (see api.openstack.org).

To bring up a cloud that uses a different vendor’s service and device driver, 
you need to modify neutron.conf to select the vendor’s service driver (as the 
default driver), instead of the reference driver, and in vpn_agent.ini you 
select the vendor’s device driver (instead of or in addition to the reference 
implementation, doesn’t matter, as it pairs with the service driver).
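
As a hedged sketch, the resulting config could look roughly like this (the
service driver class path is the one used elsewhere in this thread; the
reference device driver is shown, and the section names should be verified
against your release):

    # /etc/neutron/neutron.conf -- select the vendor service driver as default
    [service_providers]
    service_provider = VPN:cisco:neutron.services.vpn.service_drivers.cisco_ipsec.CiscoCsrIPsecVPNDriver:default

    # /etc/neutron/vpn_agent.ini -- select the matching device driver
    # (reference driver shown; substitute your vendor's device driver class)
    [vpnagent]
    vpn_device_driver = neutron.services.vpn.device_drivers.ipsec.IPsecDriver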

HTHs,


PCM (Paul Michali)

MAIL …..…. p...@cisco.com
IRC ……..… pcm_ (irc.freenode.com)
TW ………... @pmichali
GPG Key … 4525ECC253E31A83
Fingerprint .. 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



On Apr 24, 2014, at 3:13 AM, Julio Carlos Barrera Juez wrote:

OK, thank you guys, I understood that it was not possible to configure and make
any VPNaaS plugin work. I don't mind for now, because it works in
single-provider mode. I knew about the Cisco implementation, but I don't know
how to configure it, because I didn't find enough documentation about that
topic. I need some help on the basics of configuring a VPNaaS plugin in single
provider mode, because I only found information about it in 3rd party blog
posts, etc.

What are the basic steps?

Thank you again.

Julio C. Barrera Juez
Office phone: +34 93 357 99 27
Distributed Applications and Networks Area

Re: [openstack-dev] [Climate] Meeting minutes

2014-04-28 Thread Martinez, Christian
Great then!
So new steps will be:

· Abandon the https://review.openstack.org/#/c/89837/ (I have never
done that, so I'll need some help with it)

· Open a bp for v2 support (I can do that)

· Start working on the v2 client (I can also help with that)

· Anything that you guys consider necessary

Is that OK for you?

Thanks in advance,
Christian

From: Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
Sent: Sunday, April 27, 2014 1:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Climate] Meeting minutes

Agree with Dina, we should support V2 here.
Sorry, I had no time to deliver a new client, but as V1 and V2 are nearly
identical, I can take this blueprint.

-Sylvain

2014-04-27 17:44 GMT+02:00 Dina Belova:
Christian, variant #2 looks good to me)

On Fri, Apr 25, 2014 at 9:59 PM, Martinez, Christian wrote:
Hello,
One comment regarding 
https://blueprints.launchpad.net/climate/+spec/before-end-notification-crud :
One of Dina's comments on the https://review.openstack.org/#/c/89833/ was that
it is her intention not to add this functionality to the v1 API.
If that’s the case, then the changes I proposed for the climateclient at 
https://review.openstack.org/#/c/89837/ won’t make sense since the client only 
works with v1 API.
I see a couple of options here:

• Give support for v1 and change the client accordingly

• Give support only for v2, and open a bp for climateclient v2 support.

Hope I make myself clear.
I’ll be waiting for your feedback ☺

Regards,
Christian
From: Sylvain Bauza 
[mailto:sylvain.ba...@gmail.com]
Sent: Friday, April 25, 2014 1:04 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Climate] Meeting minutes

Hi,

Sorry again about my non-presence for 20 mins, I had an IRC client/connection
issue.
That impacted the discussions quite a bit; feel free to reply to this email with
any concerns you didn't have time to raise in the meeting, so we can continue.

That said, meeting minutes can be found here :

(18:00:32) openstack: Minutes: 
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.html
(18:00:33) openstack: Minutes (text): 
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.txt
(18:00:34) openstack: Log: 
http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.log.html

Thanks,
-Sylvain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Meeting minutes

2014-04-28 Thread Dina Belova
I'm ok with these steps))) Abandoning is very simple - just click the "Abandon"
button there ;)


On Mon, Apr 28, 2014 at 4:21 PM, Martinez, Christian <
christian.marti...@intel.com> wrote:

>  Great then!
>
> So new steps will be:
>
> · Abandon the https://review.openstack.org/#/c/89837/ (I never
> done that, so I’ll need some help on that)
>
> · Open a bp for v2 support (I can do that)
>
> · Start working on the v2 client (I can also help with that)
>
> · Anything that you guys consider necessary
>
>
>
> Is that OK for you?
>
>
>
> Thanks in advance,
>
> Christian
>
>
>
> *From:* Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
> *Sent:* Sunday, April 27, 2014 1:56 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Climate] Meeting minutes
>
>
>
> Agree with Dina, we should support V2 here.
>
> Sorry, I had no time for delivering a new client, but as V1 and V2 are
> quite identical, I can take this blueprint.
>
>
>
> -Sylvain
>
>
>
> 2014-04-27 17:44 GMT+02:00 Dina Belova :
>
>  Christian, variant #2 looks good to me)
>
>
>
> On Fri, Apr 25, 2014 at 9:59 PM, Martinez, Christian <
> christian.marti...@intel.com> wrote:
>
>   Hello,
>
> One comment regarding
> https://blueprints.launchpad.net/climate/+spec/before-end-notification-crud:
>
> One of Dina’s comments on the https://review.openstack.org/#/c/89833/ was
> that it is her intention to not add this functionality into v1 API.
>
> If that’s the case, then the changes I proposed for the climateclient at
> https://review.openstack.org/#/c/89837/ won’t make sense since the client
> only works with v1 API.
>
> I see a couple of options here:
>
> · Give support for v1 and change the client accordantly
>
> · Give support only on v2, and open a bp for climateclient v2
> support.
>
>
>
> Hope I make myself clear.
>
> I’ll be waiting for your feedback J
>
>
>
> Regards,
>
> Christian
>
> *From:* Sylvain Bauza [mailto:sylvain.ba...@gmail.com]
> *Sent:* Friday, April 25, 2014 1:04 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [Climate] Meeting minutes
>
>
>
> Hi,
>
>
>
> Sorry again about my non-presence for 20 mins, I had an IRC
> client/connection issue.
>
> That impacted much the discussions, feel free to reply to this email with
> any concerns you didn't had time to raise on the meeting, so we could
> continue.
>
>
>
> That said, meeting minutes can be found here :
>
>
>
> (18:00:32) openstack: Minutes:
> http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.html
> (18:00:33) openstack: Minutes (text):
> http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.txt
> (18:00:34) openstack: Log:
> http://eavesdrop.openstack.org/meetings/climate/2014/climate.2014-04-25-15.00.log.html
>
>
>
> Thanks,
>
> -Sylvain
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] Configuring consistency draft of concept

2014-04-28 Thread MAKSYM IARMAK (CS)
Hi,

Because we can't use inconsistent writes if we use an indexed table together
with the conditional operations that the indexes are based on (this stuff
requires the state of the data), we have one more issue.

If we want to make a write with consistency level ONE (WEAK) to an indexed
table, we have 2 variants:
1. Carry out the operation successfully and implicitly make the write to the
indexed table with the minimal possible consistency level for it (QUORUM);
2. Raise an exception saying that we cannot perform this operation, and list
all possible CLs for this operation.

I personally prefer the 2nd variant. So, does anybody have any objections or
maybe other ideas?
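
A minimal sketch of what variant 2 could look like (the function name and the
allowed-CL list are illustrative assumptions):

    ALLOWED_WRITE_CLS_FOR_INDEXED_TABLES = ('QUORUM',)

    def validate_write_consistency(table_has_indexes, requested_cl):
        """Variant 2: reject unsupported CLs instead of silently upgrading."""
        if table_has_indexes and requested_cl not in ALLOWED_WRITE_CLS_FOR_INDEXED_TABLES:
            raise ValueError(
                'Consistency level %s is not supported for writes to indexed '
                'tables; supported levels: %s'
                % (requested_cl, ', '.join(ALLOWED_WRITE_CLS_FOR_INDEXED_TABLES)))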


From: MAKSYM IARMAK (CS) [maksym_iar...@symantec.com]
Sent: Friday, April 25, 2014 9:14 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [MagnetoDB] Configuring consistency draft of concept

>So, here is specification draft of concept.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][service group]improve host state detection

2014-04-28 Thread Jiangying (Jenny)
Nova can now detect that a host is unreachable, but it fails to distinguish host
isolation, host death, and the nova-compute service being down. When host
unreachable is reported, users have to find out the exact state by themselves and
then take the appropriate measure to recover. Therefore we'd like to improve the
host detection for nova.

Currently the service group API factors out the host detection and makes it a 
set of abstract internal APIs with a pluggable backend implementation. The 
backend we designed is as follows:

A central detection agent is introduced. When a member joins the service
group, the member host starts to send network heartbeats to the central agent
and writes a timestamp to shared storage periodically. When the central agent
stops receiving network heartbeats from a member, it pings the member and
checks the storage heartbeat before declaring the host to have failed.


network heartbeat | network ping | storage heartbeat | state               | reason
------------------|--------------|-------------------|---------------------|--------------------------------------------
OK                | -            | -                 | Running             | -
Not OK            | Not OK       | Not OK            | Dead                | hardware failure / abnormal host shut down
Not OK            | OK           | Not OK            | Service unreachable | service process crashed
Not OK            | Not OK       | OK                | Isolated            | network unreachable

Based on the state recognition table, nova can discern the exact host state and 
assign the reasons.
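
A minimal sketch of the decision logic the table describes (function and value
names are illustrative only):

    def classify_host(net_heartbeat_ok, net_ping_ok, storage_heartbeat_ok):
        """Map the three liveness signals to a host state, per the table above."""
        if net_heartbeat_ok:
            return 'Running', None
        if net_ping_ok and not storage_heartbeat_ok:
            return 'Service unreachable', 'service process crashed'
        if not net_ping_ok and storage_heartbeat_ok:
            return 'Isolated', 'network unreachable'
        if not net_ping_ok and not storage_heartbeat_ok:
            return 'Dead', 'hardware failure / abnormal host shut down'
        return 'Unknown', None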

Thoughts?

Jenny

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Steve Gordon
- Original Message -
> Hi Stackers,
> 
>8-->8-->8-->8-->8-->8--
> 
> Proposal
> 
> 
> I propose to scrap the server groups API entirely and replace it with a
> simpler way to accomplish the same basic thing.
> 
> Create two new options to nova boot:
> 
>  --near-tag 
> and
>  --not-near-tag 
> 
> The first would tell the scheduler to place the new VM near other VMs
> having a particular "tag". The latter would tell the scheduler to place
> the new VM *not* near other VMs with a particular tag.

Would we continue to grow this set of arguments in response to the addition of
new policies, and how much do we expect this to grow? The two most likely additions
I can think of are "soft"/"best effort" versions of the current two; are there
any other proposals/ideas out there - I know we're a creative bunch ;)?

> What is a "tag"? Well, currently, since the Compute API doesn't have a
> concept of a single string tag, the tag could be a key=value pair that
> would be matched against the server extra properties.
> 
> Once a real user-controlled simple string tags system is added to the
> Compute API, a "tag" would be just that, a simple string that may be
> attached or detached from some object (in this case, a server object).
> 
> How does this solve all the issues highlighted above? In order, it
> solves the issues like so:
> 
> 1. There's no need to have any "server group" object any more. Servers
> have a set of tags (key/value pairs in v2/v3 API) that may be used to
> identify a type of server. The activity of launching an instance would
> now have options for the user to indicate their affinity preference,
> which removes the cognitive dissonance that happens due to the user
> needing to know what a server group is (a policy, not a group).

Would the user's affinity preference stay with the instance for consideration 
in future operations post-boot (either now or in a future extension of this 
functionality)?

> 2. Since there is no more need to maintain a separate server group
> object, if a user launched 3 instances and then wanted to make sure that
> 3 new instances don't end up on the same hosts, all the user needs to do
> is tag the existing instances with a tag, and issue a call to:
> 
>  nova boot --not-near-tag $TAG ...
> 
> and the affinity policy is applied properly.

The fact that membership can't be changed, at least in the initial 
implementation, is explicitly called out in the design wiki for the server 
group api [1]. My understanding is that this was not because implementing an 
add/remove that works the way you suggest would have been particularly 
problematic but because user expectations when the group membership is modified 
are not just that the new instances booted into the group subsequently are 
placed with affinity/anti-affinity but that the existing instances that were 
added to the group are also evaluated and moved as necessary to ensure *all* 
members of the group meet the policy.

So in the example this would mean ensuring that all 6 VMs have anti-affinity,
not just the latest 3 that are being booted (or perhaps I am misreading what
you are proposing?). Similarly there is an expectation that it's possible to
look at a group and easily determine whether it was placed with a policy, and 
if so what that policy was (not saying that could not be implemented on top of 
your proposal, just recording for completeness).

Whether solutions to meet these expectations belong in Nova or somewhere else
is probably another matter, but when dealing with users who use this
functionality, when they talk about modifying group membership *this* is what
they expect.
On the other hand this proposal does seem to offer more flexibility for some 
more complex use cases, for example where users want to place pairs of 
instances in their own instance groups with affinity, but have them placed with 
anti-affinity for other pairs. Perhaps I am missing something in the server 
groups design (or proposed extensions of it) on this point though.

I think everyone agrees that the server groups API as it stands is not the 
final solution here, but I think it's important to refer back to the previous 
design summit discussions [2][3] on this functionality and ensure any 
replacement caters not just to reimplementing the current state of server 
groups but also ensuring that it's easily extended to cover future needs 
(particularly those already discussed/considered in framing the current 
functionality).

Thanks,

Steve

[1] https://wiki.openstack.org/wiki/GroupApiExtension
[2] https://etherpad.openstack.org/p/group-scheduling
[3] https://etherpad.openstack.org/p/NovaIcehouse-Instance-Group-API

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Steve Gordon


- Original Message -
> On Fri, 25 Apr 2014 18:28:38 -0400
> Jay Pipes  wrote:
> > 
> > 
> > Sure, maybe. AFAIK, it's only in the v2 API, though, not in the v3 API
> > (sorry, I made a mistake about that in my original email). Is there a
> > reason it wasn't added to the v3 API?
> > 
> 
> We did have a pretty strong rule for most of the Icehouse
> development cycle to only merge new API features if the change was
> added either first to the V3 API or at the same time as the V2 API.
> However this (almost unintentionally) ended up getting relaxed whilst
> all the V2 vs V3 API discussions were occurring. As a result there are
> some features that were merged into V2 that we definitely need to now
> add to the V3 API in Juno.
> 
> Since the V3 API is still experimental we have some flexibility, but
> transition pain for those moving from V2 to V3 is still going to be a
> factor in terms of what we want to support.
> 
> Chris

Yeah, looks like this fell by the wayside during those discussions; the v3
patchset linked from the blueprint is here:

https://review.openstack.org/#/c/70533/

-Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Summit schedule draft

2014-04-28 Thread Kyle Mestery
On Mon, Apr 28, 2014 at 1:00 AM, Tom Fifield  wrote:
> On 28/04/14 05:02, Michael Still wrote:
>>
>> Hi.
>>
>> I've just pushed a draft summit schedule to sched.org. I'd be
>> interested in people who proposed a session that was accepted checking
>> if their session time clashes with other commitments that they have,
>> as well as people who are passionate about a given proposal ensuring
>> that they're available at the scheduled time.
>>
>> Bear in mind that this is a non-trivial problem though... There's only
>> so much schedule shuffling that can be done.
>>
>> Thanks,
>> Michael
>>
>
> Nova session: Next steps in live upgrade
>
> clashes with
>
> neutron session: Nova-Net to Neutron migration
>
> 2:40pm on Wednesday.
>
> Since these are both specifically about upgrades that involve nova, perhaps
> there's enough of a shared audience that they should go in separate time slots?
>
+1 to that.

Michael, any chance you can slide this Nova session to a different
slot? If it's challenging, I can look at moving the Neutron session.

Thanks,
Kyle

>
> Regards,
>
>
> Tom
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] cancelling team meeting for May 1st

2014-04-28 Thread Eoghan Glynn

Hi Folks,

Since our French, Hungarian, Chinese, Russian and Slovenian contributors 
(geo-diversity WTF!) will all be celebrating International Workers' Day
on May 1st, let's skip the weekly meeting.

If anything pressing comes up, let's just have an emergent discussion to
deal with it on Wednesday.

Cheers,
Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] cancelling team meeting for May 1st

2014-04-28 Thread Eoghan Glynn


> Hi Folks,
> 
> Since our French, Hungarian, Chinese, Russian and Slovenian contributors
> (geo-diversity WTF!) will all be celebrating International Workers' Day

A dyslexic moment: I meant geo-diversity FTW! :)

> on May 1st, let's skip the weekly meeting.
> 
> If anything pressing comes up, let's just have an emergent discussion to
> deal with it on Wednesday.
> 
> Cheers,
> Eoghan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Canceling today's meeting and a few other notes

2014-04-28 Thread Kyle Mestery
Hi folks:

Given that this is the off week in the OpenStack calendar, I'm going
to cancel the Neutron meeting today to give folks a day off. We'll
reconvene next week per usual schedule.

I also wanted to point out a few things for Neutron devs:

1. Our Design Summit schedule [1] is now live on sched.org. If you
have any issues with your slot, please unicast me and I'll see what I
can slide around. No guarantees, but it's possible we could move
things.
2. If you have an approved session, please add an etherpad to the
Summit etherpad location here [2]. Also, you are encouraged to start a
discussion on the ML for your session as well. This helps to level set
people before we all arrive in Atlanta.

Thanks!
Kyle

[1] http://junodesignsummit.sched.org/overview/type/neutron
[2] https://wiki.openstack.org/wiki/Summit/Juno/Etherpads#Neutron

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Programs] Client Tools program discussion

2014-04-28 Thread Dean Troyer
I want to open the discussion of an OpenStack Client Tools program proposal
to a wider audience.  It would initially consist of OpenStackClient and
eventually add the existing SDK projects as they are ready to join. The
initial wiki page is at https://wiki.openstack.org/wiki/ClientTools.  I do
want to have the proposal made before the summit, but not necessarily have it
considered by the TC before then.

There has recently been some discussion (specifically around summit
sessions) regarding the overlap of client code and the user experience
team.  This is one of the things I want to get some feedback on before
making a formal proposal.

The mission statement and description are written with the anticipation of
one or more SDK projects joining the program during the Juno cycle.

dt



Mission Statement

The OpenStack Client Tools mission is to provide clean and consistent
interfaces to OpenStack services via the published REST APIs. The intended
audiences are command-line users (OpenStackClient) and application
developers (SDKs as they join the program).
Description

The OpenStack Client Tools program encompasses a number of related projects
that have both common contributors and common consumer interests regarding
OpenStack services. The existing OpenStackClient project is targeted at
command-line users: end-users as well as cloud operators, devops, system
administrators or anyone needing a shell interface to OpenStack. The SDK
projects (expected to join the program as they mature) implement bindings
to the OpenStack REST APIs in multiple languages.
Deliverables

Releases for the Client Tools projects are on an independent schedule from
the OpenStack integrated release, just as the existing client CLIs/libraries
are today.

   - OpenStackClient delivers an integrated CLI as a PyPI package and the
   usual OpenStack tarballs.



-- 

Dean Troyer
dtro...@gmail.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

2014-04-28 Thread Kyle Mestery
Folks, sorry for the top post here, but I wanted to make sure to
gather people's attention in this thread.

I'm very happy to see all the passion around LBaaS in Neutron for this
cycle. As I've told a few people, seeing all the interest from
operators and providers is fantastic, as it gives us valuable input
from that side of things before we embark on designing and coding.
I've also attended the last few LBaaS IRC meetings, and I've been
catching up on the LBaaS documents and emails. There is a lot of great
work and passion by many people. However, the downside of what I've
seen is that there is a logjam around progress here. Given we're two
weeks out from the Summit, I'm going to start running the LBaaS
meetings with Eugene to try and help provide some focus there.
Hopefully we can use this week and next week's meetings to drive to a
consistent Summit agenda and lay the groundwork for LBaaS in Juno and
beyond.

Also, while our new neutron-specs BP repository has been great so far
for developers, based on feedback from operators, it may not be ideal
for those who are not used to contributing using gerrit. I don't want
to lose the voice of those people, so I'm pondering what to do. This
is really affecting the LBaaS discussion at the moment. I'm thinking
that we should ideally try to use Google Docs for these initial
discussions and then move the result of that into a BP on
neutron-specs. What do people think of that?

If we go down this path, we need to decide on a single Google Doc for
people to collaborate on. I don't want to put Stephen on the spot, but
his document may be a good starting point.

I'd like to hear what others think on this plan as well.

Thanks,
Kyle


On Sun, Apr 27, 2014 at 6:06 PM, Eugene Nikanorov
 wrote:
> Hi,
>
>>
>> You knew from the action items that came out of the IRC meeting of April
>> 17 that my team would be working on an API revision proposal. You also knew
>> that this proposal was to be accompanied by an object model diagram and
>> glossary, in order to clear up confusion. You were in that meeting, you saw
>> the action items being created. Heck, you even added the "to prepare API for
>> SSL and L7" directive for my team yourself!
>>
>> The implied but not stated assumption about this work was that it would be
>> fairly evaluated once done, and that we would be given a short window (ie.
>> about a week) in which to fully prepare and state our proposal.
>>
>> Your actions, though, were apparently to produce your own version of the
>> same in blueprint form without notifying anyone in the group that you were
>> going to be doing this, let alone my team. How could you have given my API
>> proposal a fair shake prior to publishing your blueprint, if both came out
>> on the same day? (In fact, I'm lead to believe that you and other Neutron
>> LBaaS developers hadn't even looked at my proposal before the meeting on
>> 4/24, where y'all started determining product direction, apparently by
>> edict.)
>>
>>
>> Therefore, looking honestly at your actions on this and trying to give you
>> the benefit of the doubt, I still must assume that you never intended to
>> seriously consider our proposal.
>
> That's strange to hear because the spec on review is a part of what is
> proposed in the document made by you and your team.
> Once again I'm not sure what this heated discussion is all about. The doc
> does good job and we will continue discussing it, while a part of it (spec
> about VIPs/Listeners/Pools) is on review where we, as lbaas subteam,
> actually can finalize a part of it.
>
>>
>> Do you now understand why I find this offensive? Can you also understand
>> how others, seeing how this was handled, might now be reluctant to
>> participate?
>
> People may find different things to be offensive. I can also say much on
> this, but would not like to continue the conversation in this direction.
>
>
>> Right, so *if* we decide to go with my proposal, we need to first decide
>> which parts we're actually going to go with--
>>
>>  I don't expect my proposal to be complete or perfect by any means, and we
>> need to have honest discussion of this first. Then, once we've more-or-less
>> come to a consensus on this overall direction,
>
> I'm not sure I understand what you mean by 'overall direction'. Was there
> ever an idea of not supporting HA, or L7, or SSL or to not satisfy other
> requirements?
> The discussion could be on how to do it, then.
>
>> it makes sense to think about how to split up the work into digestible,
>> parallelize-able chunks that can be tackled by the various interested
>> parties working on this project.  (My team actually wanted to propose a road
>> map and attach it to the proposal, but there simply wasn't time if we wanted
>> to get the API out before the next IRC meeting in enough time for people to
>> have had a chance to look at it.)
>>
>> Why embark on this process at all if we don't have any real idea of what
>> the end-goal looks like?
>
> I hope this wil

Re: [openstack-dev] [Ironic] detailed git commit messages

2014-04-28 Thread Julie Pichon
On 28/04/14 12:52, Roman Prykhodchenko wrote:
> That seems to be reasonable to me.
> 
> Perhaps we should define a more or less formal format for commit
> messages? That might both help newcomers to write them and make it easier
> for us to write sensible "What's changed" documents based on those commit
> messages.

I usually point newcomers to [1], though the whole page is excellent.

[1]
https://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_GIT_commit_message_structure


> 
> 
> - Roman
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] cancelling team meeting for May 1st

2014-04-28 Thread Dina Belova
Okay, thanks! Have a nice holiday then)


On Mon, Apr 28, 2014 at 5:06 PM, Eoghan Glynn  wrote:

>
>
> > Hi Folks,
> >
> > Since our French, Hungarian, Chinese, Russian and Slovenian contributors
> > (geo-diversity WTF!) will all be celebrating International Workers' Day
>
> A dyslexic moment: I meant geo-diversity FTW! :)
>
> > on May 1st, let's skip the weekly meeting.
> >
> > If anything pressing comes up, let's just have an emergent discussion to
> > deal with it on Wednesday.
> >
> > Cheers,
> > Eoghan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-28 Thread Thomas Spatzier
Excerpts from Steve Baker's message on 28/04/2014 01:25:29:


> #1 Enable software components for full lifecycle:

> So in a short, stripped-down version, SoftwareConfigs could look like
>
> my_sw_config:
>   type: OS::Heat::SoftwareConfig
>   properties:
> create_config: # the hook for software install
> suspend_config: # hook for suspend action
> resume_config: # hook for resume action
> delete_config: # hook for delete action
>

>
> OS::Heat::SoftwareConfig itself needs to remain ignorant of heat
> lifecycle phases, since it is just a store of config.

Sure, I agree on that. SoftwareConfig is just a store of config that gets
used by another resource which then deals with Heat's lifecycle.
The thing I was proposing is actually not making it lifecycle aware, but it
allows the user to store respective config pieces to later be executed by a
software deployment at respective lifecycle steps.

>
> Currently there are 2 ways to build configs which are lifecycle aware:
> 1. have a config/deployment pair, each with different deployment actions
> 2. have a single config/deployment, and have the config script do
> conditional logic
>on the derived input value deploy_action
>
> Option 2. seem reasonable for most cases, but having an option which
> maps better to TOSCA would be nice.

So option 2 sounds like the right thing to me. The only thing is that I
would not want to put all logic into a large script with conditional
handling, but to allow breaking the script into parts and let the condition
handling be done by the framework. My snippet above would then just allow
for telling the deploy logic which script to call when.
Most of the real work would probably be done in the in-instance tool, so
the Heat resource would really "just" allow for storing data in a
well-defined structure.
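
To make option 2 concrete, such a script could look roughly like this (a
minimal sketch; reading the action from an environment variable named
deploy_action is an assumption about how the in-instance tool exposes the
input, not how Heat actually does it):

#!/usr/bin/env python
# Minimal sketch of option 2: one config script that branches on the derived
# input value "deploy_action". Reading it from an environment variable is an
# assumption for illustration only.
import os
import sys

def handle_create():
    pass  # software install logic

def handle_suspend():
    pass

def handle_resume():
    pass

def handle_delete():
    pass  # cleanup logic

HANDLERS = {
    'CREATE': handle_create,
    'SUSPEND': handle_suspend,
    'RESUME': handle_resume,
    'DELETE': handle_delete,
}

def main():
    action = os.environ.get('deploy_action', 'CREATE')
    handler = HANDLERS.get(action)
    if handler is None:
        # Unknown lifecycle phase: succeed without doing anything.
        return 0
    handler()
    return 0

if __name__ == '__main__':
    sys.exit(main())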

>
> Clint's StructuredConfig example would get us most of the way there,
> but a dedicated config resource might be easier to use.

Right, and that's the core of my proposal: having a dedicated config
resource that is intuitive to use for template authors.

> The deployment resource could remain agnostic to the contents of this
> resource though. The right place to handle this on the deployment
> side would be in the orc script 55-heat-config, which could infer
> whether the config was a lifecycle config, then invoke the required
> config based on the value of deploy_action.

Fully agree on that. This should be the place to handle most of the work.
I think we are saying the same thing on this topic, so I am optimistic to
agree on a solution :-)

>
>
> #2 Enable add-hoc actions on software components:

>
> Let's park this for now. Maybe one day heat templates will be used to
> represent workflow tasks, but this isn't directly related to software
> config.

I think if we get to a good conclusion of #1, maybe this won't be a big
deal after all.
So yeah, maybe park it (but keep in the back of our heads) and look at it
again depending on what the result for #1 looks like.

>

> #3.1 software deployment should run just once:
> A bug has been raised because with today's implementation it can happen
> that SoftwareDeployments get executed multiple times. There has been some
> discussion around this issue but no final conclusion. An average user
> will however assume that his automation gets run only or exactly once. When
> using existing scripts, it would be an additional burden to require
> rewrites to cope with multiple invocations. Therefore, we should have a
> generic solution to the problem so that users do not have to deal with
> this complex problem.

> I'm with Clint on this one. Heat-engine cannot know the true state
> of a server just by monitoring what has been polled and signaled.
> Since it can't know it would be dangerous for it to guess. Instead
> it should just offer all known configuration data to the server and
> allow the server to make the decision whether to execute a config
> again. I still think one more derived input value would be useful to
> help the server to make that decision. This could either be a
> datestamp for when the derived config was created, or a hash of all
> of the derived config data.

So as I said in another note, I agree that this seems best handled in
the in-instance tool and the Heat engine, or the resource should probably
not have any new magic. If there is some additional state property that the
resource maintains, and the in-instance tool handles it, that should be
fine. I think what is important, is that users who want to use existing
automation scripts do not have to implement much logic for interpreting
that additional "flag", but that we handle it in the generic hook
invocation logic.
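
As a rough sketch of that idea (the state file location and the shape of the
derived config are assumptions), the in-instance hook could skip re-execution
whenever the fingerprint of the derived config has not changed:

import hashlib
import json
import os
import subprocess

STATE_FILE = '/var/lib/heat-config/applied.json'  # assumed location

def _fingerprint(derived_config):
    # A stable hash over all derived config data; a creation timestamp passed
    # as an extra derived input would serve the same purpose.
    blob = json.dumps(derived_config, sort_keys=True)
    return hashlib.sha256(blob.encode('utf-8')).hexdigest()

def apply_once(derived_config, script_path):
    applied = {}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            applied = json.load(f)
    fingerprint = _fingerprint(derived_config)
    config_id = derived_config['id']
    if applied.get(config_id) == fingerprint:
        return  # this config was already applied on this server; skip it
    subprocess.check_call([script_path])
    applied[config_id] = fingerprint
    with open(STATE_FILE, 'w') as f:
        json.dump(applied, f)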

Can you elaborate more on what you have in mind with the additional derived
input value?


>
> #3.2 dependency on heat-cfn-api:
> Some parts of current signaling still depend on the heat-cfn-api. While
> work seems underway to completely move to Heat native signaling, some
> cleanup to 

[openstack-dev] [Docs] Summit sessions for Documentation

2014-04-28 Thread Anne Gentle
Hi all,
I've pushed the doc track to:

http://junodesignsummit.sched.org/overview/type/documentation#.U15Y4uZdU0w

The two cross-project tracks related to docs are on Tuesday:
Tues 12:05 Cross-project documentation
Tues Lunch: Let's talk docs
Tues 2:00 Easier documentation for all project developers
Wed 9:00 Install Guide Discussion Session
Thurs 9:50 Patching the documentation process
Fri 9:00 Continuous publishing and automation for Docs
Fri 11:40 Beef Up User and Operations Guides for Integrated

Nick and I emailed about the order of Tuesdays but it's such a tight
schedule that I hate to ask for any more changes. I'd also like to take
advantage of lunch in between those cross-project sessions.

Still, let me know if you see any red flags.

Thanks,
Anne
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MagnetoDB CLI client

2014-04-28 Thread Ilya Sviridov
Hello Andrey,

Great!

Looking closer at the blueprint, I've realized that the parameter naming is
confusing.

I would suggest using the --request-file parameter instead of the
--description-file used now.

Also, I believe that table-list will be the most popular call and it has
only two parameters, so it would be better to avoid JSON for that in the CLI
and pass all info via the command line.

like
magnetodb table-list --exclusive-start-table-name  table_1 --count 10

We should probably also think about the default behavior when no JSON is
passed and the required arguments are given as CLI arguments instead, for
easier usage. Scan looks like a good example.
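
A rough sketch of that fallback behaviour (the argument names follow the
examples above; the request body keys are assumptions):

import argparse
import json

def build_table_list_request(argv=None):
    parser = argparse.ArgumentParser(prog='magnetodb table-list')
    parser.add_argument('--request-file', type=argparse.FileType('r'))
    parser.add_argument('--exclusive-start-table-name')
    parser.add_argument('--count', type=int)
    args = parser.parse_args(argv)

    if args.request_file:
        # Full request description supplied as JSON.
        return json.load(args.request_file)

    # Otherwise build the request body from the plain CLI arguments.
    body = {}
    if args.exclusive_start_table_name:
        body['exclusive_start_table_name'] = args.exclusive_start_table_name
    if args.count is not None:
        body['count'] = args.count
    return body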


BTW: we have a dedicated mail prefix in order not to spam everybody but only
reach the MagnetoDB project audience :) so just add
[openstack-dev][MagnetoDB] at the beginning of the email subject next time.

Thank you
Ilya




On Fri, Apr 25, 2014 at 4:29 PM, ANDREY OSTAPENKO (CS) <
andrey_ostape...@symantec.com> wrote:

> Hello, everyone!
>
> Now I'm starting to implement cli client for KeyValue Storage service
> MagnetoDB.
> I'm going to use heat approach for cli commands, e.g. heat stack-create
> --template-file ,
> because we have too many parameters to pass to the command.
> For example, table creation command:
>
> magnetodb create-table --description-file 
>
> File will contain json data, e.g.:
>
> {
> "table_name": "data",
> "attribute_definitions": [
> {
> "attribute_name": "Attr1",
> "attribute_type": "S"
> },
> {
> "attribute_name": "Attr2",
> "attribute_type": "S"
> },
> {
> "attribute_name": "Attr3",
> "attribute_type": "S"
> }
> ],
> "key_schema": [
> {
> "attribute_name": "Attr1",
> "key_type": "HASH"
> },
> {
> "attribute_name": "Attr2",
> "key_type": "RANGE"
> }
> ],
> "local_secondary_indexes": [
> {
> "index_name": "IndexName",
> "key_schema": [
> {
> "attribute_name": "Attr1",
> "key_type": "HASH"
> },
> {
> "attribute_name": "Attr3",
> "key_type": "RANGE"
> }
> ],
> "projection": {
> "projection_type": "ALL"
> }
> }
> ]
> }
>
> Blueprint:
> https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-cli-client
>
> If you have any comments, please let me know.
>
> Best regards,
> Andrey Ostapenko
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Oleg Bondarev
Yeah, I also saw in docs that *update-device *is supported since 0.8.0
version,
not sure why it didn't work in my setup.
I installed latest libvirt 1.2.3 and now update-device works just fine and
I am able
to move instance tap device from one bridge to another with no downtime and
no reboot!
I'll try to investigate why it didn't work on 0.9.8 and which is the
minimal libvirt version for this.
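
For reference, the same move through the libvirt Python bindings looks
roughly like this (a sketch only; the domain name, MAC address, tap device
and bridge names are placeholders):

import libvirt

NEW_IFACE_XML = """
<interface type='bridge'>
  <mac address='fa:16:3e:00:00:01'/>
  <source bridge='br-int'/>
  <target dev='tap-example0'/>
  <model type='virtio'/>
</interface>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')
# libvirt matches the existing interface by MAC address and re-plugs it into
# the new source bridge on the running domain, without a reboot.
dom.updateDeviceFlags(NEW_IFACE_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)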

Thanks,
Oleg


On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery wrote:

> According to this page [1], "update-device" is supported from libvirt
> 0.8.0 onwards. So in theory, this should be working with your 0.9.8
> version you have. If you continue to hit issues here Oleg, I'd suggest
> sending an email to the libvirt mailing list with the specifics of the
> problem. I've found in the past there are lots of very helpful on that
> mailing list.
>
> Thanks,
> Kyle
>
> [1] http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
>
> On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev 
> wrote:
> > So here is the etherpad for the migration discussion:
> > https://etherpad.openstack.org/p/novanet-neutron-migration
> > I've also filed a design session on this:
> > http://summit.openstack.org/cfp/details/374
> >
> > Currently I'm still struggling with instance vNic update, trying to move
> it
> > from one bridge to another.
> > Tried the following on ubuntu 12.04 with libvirt 0.9.8:
> >
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
> > virsh update-device shows success but nothing actually changes in the
> > instance interface config.
> > Going to try this with later libvirt version.
> >
> > Thanks,
> > Oleg
> >
> >
> >
> > On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido  >
> > wrote:
> >>
> >>
> >> Very interesting topic!
> >> +1 Salvatore
> >>
> >> It would be nice to have an etherpad to share the information and
> organize
> >> a plan. This way it would be easier for interested people  to join.
> >>
> >> Rossella
> >>
> >>
> >> On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
> >>
> >> It's great to see that there is activity on the launchpad blueprint as
> >> well.
> >> From what I heard Oleg should have already translated the various
> >> discussion into a list of functional requirements (or something like
> that).
> >>
> >> If that is correct, it might be a good idea to share them with relevant
> >> stakeholders (operators and developers), define an actionable plan for
> Juno,
> >> and then distribute tasks.
> >> It would be a shame if it turns out several contributors are working on
> >> this topic independently.
> >>
> >> Salvatore
> >>
> >>
> >> On 22 April 2014 16:27, Jesse Pretorius 
> wrote:
> >>>
> >>> On 22 April 2014 14:58, Salvatore Orlando  wrote:
> 
>  From previous requirements discussions,
> >>>
> >>>
> >>> There's a track record of discussions on the whiteboard here:
> >>> https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
> >>>
> >>> ___
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev@lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting reminder - 04/28/2014

2014-04-28 Thread Renat Akhmerov
Hi,

This is a reminder about another community meeting that we’ll be having today 
at 16.00 UTC (#openstack-meeting).

The agenda:
Review action items
Current status (quickly by team members)
POC readiness and steps that left to finalise it
Open discussion

You can also find it at https://wiki.openstack.org/wiki/Meetings/MistralAgenda 
as well as the links the previous meeting minutes and logs.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] Proposed tools and workflows for OpenStack User Experience contributors

2014-04-28 Thread Jaromir Coufal

Thanks all for great feedback, I will try to do a short summary:



Wiki

The wiki page is an obvious and easy consensus for us. It should contain all 
important information about UX, such as "how to contribute", "where to 
go to start", various links, etc.




Mailing list - [UX]
---
Everybody agrees that discussions should happen on the mailing list with 
[UX] tag in the subject. There is no consensus if the discussion should 
be just general or detailed as well. (We should continue this discussion 
in separate thread)




Discussion forum - (terminate)
--
It's agreed that AskBot doesn't work very well - it is a bit chaotic, and 
another big problem is the disconnection from the rest of the OpenStack 
teams. It was suggested to search for another solution (also no consensus 
on which one).




IRC meetings


Very welcome from everybody.



Launchpad (StoryBoard in the future)

Also welcome, until StoryBoard is ready for us, we should keep track of 
BPs and bugs in Launchpad and document how to work with it.




Wishlist (currently Launchpad)
--
Seems like nice idea. We should figure out what would be the best way to 
provide this list, but it might be as simple as registering new bug or 
blueprint in UX's Launchpad (at least from the beginning).




Storage place (GitHub)
--
We should do research on what tool would be useful for us. GitHub (or 
OpenStack's git repository) can work in combination with gerrit. It 
looks like we should at least give it a try.



Templates library
-
Yes, could be stored at the same place as other materials, just marked 
and linked properly.




??? (user community for feedback gathering)
---
Needs more research - still trying to figure out where and how best to 
connect with the user community.



So mostly we agreed on all suggested tools which makes me very happy. I 
will start setting up the obvious ones and for the rest 
(+controversial), I am going to start separate threads, where we can 
discuss in more details and follow up at Summit.


Thank you all for your feedback, you will hear from me soon
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Kyle Mestery
On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev  wrote:
> Yeah, I also saw in docs that update-device is supported since 0.8.0
> version,
> not sure why it didn't work in my setup.
> I installed latest libvirt 1.2.3 and now update-device works just fine and I
> am able
> to move instance tap device from one bridge to another with no downtime and
> no reboot!
> I'll try to investigate why it didn't work on 0.9.8 and which is the minimal
> libvirt version for this.
>
Wow, cool! This is really good news. Thanks for driving this! By
chance did you notice if there was a drop in connectivity at all, or
if the guest detected the move at all?

Kyle

> Thanks,
> Oleg
>
>
> On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery 
> wrote:
>>
>> According to this page [1], "update-device" is supported from libvirt
>> 0.8.0 onwards. So in theory, this should be working with your 0.9.8
>> version you have. If you continue to hit issues here Oleg, I'd suggest
>> sending an email to the libvirt mailing list with the specifics of the
>> problem. I've found in the past there are lots of very helpful on that
>> mailing list.
>>
>> Thanks,
>> Kyle
>>
>> [1] http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
>>
>> On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev 
>> wrote:
>> > So here is the etherpad for the migration discussion:
>> > https://etherpad.openstack.org/p/novanet-neutron-migration
>> > I've also filed a design session on this:
>> > http://summit.openstack.org/cfp/details/374
>> >
>> > Currently I'm still struggling with instance vNic update, trying to move
>> > it
>> > from one bridge to another.
>> > Tried the following on ubuntu 12.04 with libvirt 0.9.8:
>> >
>> > https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
>> > virsh update-device shows success but nothing actually changes in the
>> > instance interface config.
>> > Going to try this with later libvirt version.
>> >
>> > Thanks,
>> > Oleg
>> >
>> >
>> >
>> > On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
>> > 
>> > wrote:
>> >>
>> >>
>> >> Very interesting topic!
>> >> +1 Salvatore
>> >>
>> >> It would be nice to have an etherpad to share the information and
>> >> organize
>> >> a plan. This way it would be easier for interested people  to join.
>> >>
>> >> Rossella
>> >>
>> >>
>> >> On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
>> >>
>> >> It's great to see that there is activity on the launchpad blueprint as
>> >> well.
>> >> From what I heard Oleg should have already translated the various
>> >> discussion into a list of functional requirements (or something like
>> >> that).
>> >>
>> >> If that is correct, it might be a good idea to share them with relevant
>> >> stakeholders (operators and developers), define an actionable plan for
>> >> Juno,
>> >> and then distribute tasks.
>> >> It would be a shame if it turns out several contributors are working on
>> >> this topic independently.
>> >>
>> >> Salvatore
>> >>
>> >>
>> >> On 22 April 2014 16:27, Jesse Pretorius 
>> >> wrote:
>> >>>
>> >>> On 22 April 2014 14:58, Salvatore Orlando  wrote:
>> 
>>  From previous requirements discussions,
>> >>>
>> >>>
>> >>> There's a track record of discussions on the whiteboard here:
>> >>> https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
>> >>>
>> >>> ___
>> >>> OpenStack-dev mailing list
>> >>> OpenStack-dev@lists.openstack.org
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Globalization] REST API sorting by status severity vs. alphabetical status key

2014-04-28 Thread Jay Pipes
On Wed, 2014-04-23 at 22:07 -0500, Steven Kaufer wrote:
> > yeah, we're talking about thousands and thousands of rows that have
> to
> > be updated before the API can be restarted…
> > 
> > > There's also a possibility of adding support for the status codes,
> but
> > > keeping the string columns in the database, and then using the
> nova
> > > object versioning to "migrate" the object schema over time to the
> point
> > > where the migration is a simple DROP COLUMN.
> > 
> > I like that idea better, TBH, but we're probably talking about a
> > long-time deprecation here, like on the order of a couple of
> releases;
> > that would give plenty of time for the majority of the records to be
> > revisited and make the final migration run for a lot shorter time.
> > -- 
> 
> Thanks for the discussion.

No prob, sorry for the delayed response...

> So how would this new flow work?
> In Juno would there be an additional status_int column that would be
> populated and (eventually) replace the existing status (as string)
> column?

That would be the cleanest way, yes.

> How would the object versioning populate the new column for the
> existing records?

Within the nova.objects.instance.Instance object itself, we can put a
small check-and-transform function in the object to do the translation
in-line.
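
For example, something along these lines (purely illustrative, not actual
Nova code; the mapping and field names are made up):

# Sketch of an in-line check-and-transform, assuming a new integer column
# next to the legacy string one.
STATUS_STRING_TO_INT = {
    'active': 10,
    'stopped': 20,
    'error': 50,
}

def _translate_status(db_row):
    # Called while hydrating the object from the DB row: records written
    # before the migration have no status_int yet, so derive it on the fly.
    if db_row.get('status_int') is not None:
        return db_row['status_int']
    return STATUS_STRING_TO_INT.get(db_row['status'])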

> Any examples or details that would help explain how this could work
> would be appreciated.

Probably worth putting a blueprint up about it. I can work with you on
it, if you'd like, though it will likely be after the summit until I
have time to work on it.

> Lastly, is there agreement that this is an issue that needs to be
> addressed? Note that this seems to be a pervasive problem, I've
> investigated the status column in cinder and nova but I suspect that
> the same issue exists in other components.

Yes, the same issue unfortunately exists in lots of the other
components, and they don't have the benefit of the nova objects work in
them, which makes it a lot more of a nuisance to migrate the database
schema. Though, personally I'm not entirely sure going through the
effort of doing a long-time in-object translation is worth it. A change
to the database schema, even on tens of millions of records wouldn't
take more than a couple minutes. But it all depends on the operator's
tolerance for downtime, since the instances table would certainly be
locked for the duration of the migration.

Best,
-jay

> Thanks,
> Steven Kaufer
> 
> 
> > Kevin L. Mitchell 
> > Rackspace
> > 
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Oleg Bondarev
On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery wrote:

> On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev 
> wrote:
> > Yeah, I also saw in docs that update-device is supported since 0.8.0
> > version,
> > not sure why it didn't work in my setup.
> > I installed latest libvirt 1.2.3 and now update-device works just fine
> and I
> > am able
> > to move instance tap device from one bridge to another with no downtime
> and
> > no reboot!
> > I'll try to investigate why it didn't work on 0.9.8 and which is the
> minimal
> > libvirt version for this.
> >
> Wow, cool! This is really good news. Thanks for driving this! By
> chance did you notice if there was a drop in connectivity at all, or
> if the guest detected the move at all?
>

Didn't check it yet. What in your opinion would be the best way of testing
this?

Kyle
>
> > Thanks,
> > Oleg
> >
> >
> > On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery  >
> > wrote:
> >>
> >> According to this page [1], "update-device" is supported from libvirt
> >> 0.8.0 onwards. So in theory, this should be working with your 0.9.8
> >> version you have. If you continue to hit issues here Oleg, I'd suggest
> >> sending an email to the libvirt mailing list with the specifics of the
> >> problem. I've found in the past there are lots of very helpful on that
> >> mailing list.
> >>
> >> Thanks,
> >> Kyle
> >>
> >> [1]
> http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
> >>
> >> On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev 
> >> wrote:
> >> > So here is the etherpad for the migration discussion:
> >> > https://etherpad.openstack.org/p/novanet-neutron-migration
> >> > I've also filed a design session on this:
> >> > http://summit.openstack.org/cfp/details/374
> >> >
> >> > Currently I'm still struggling with instance vNic update, trying to
> move
> >> > it
> >> > from one bridge to another.
> >> > Tried the following on ubuntu 12.04 with libvirt 0.9.8:
> >> >
> >> >
> https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
> >> > virsh update-device shows success but nothing actually changes in the
> >> > instance interface config.
> >> > Going to try this with later libvirt version.
> >> >
> >> > Thanks,
> >> > Oleg
> >> >
> >> >
> >> >
> >> > On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
> >> > 
> >> > wrote:
> >> >>
> >> >>
> >> >> Very interesting topic!
> >> >> +1 Salvatore
> >> >>
> >> >> It would be nice to have an etherpad to share the information and
> >> >> organize
> >> >> a plan. This way it would be easier for interested people  to join.
> >> >>
> >> >> Rossella
> >> >>
> >> >>
> >> >> On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
> >> >>
> >> >> It's great to see that there is activity on the launchpad blueprint
> as
> >> >> well.
> >> >> From what I heard Oleg should have already translated the various
> >> >> discussion into a list of functional requirements (or something like
> >> >> that).
> >> >>
> >> >> If that is correct, it might be a good idea to share them with
> relevant
> >> >> stakeholders (operators and developers), define an actionable plan
> for
> >> >> Juno,
> >> >> and then distribute tasks.
> >> >> It would be a shame if it turns out several contributors are working
> on
> >> >> this topic independently.
> >> >>
> >> >> Salvatore
> >> >>
> >> >>
> >> >> On 22 April 2014 16:27, Jesse Pretorius 
> >> >> wrote:
> >> >>>
> >> >>> On 22 April 2014 14:58, Salvatore Orlando 
> wrote:
> >> 
> >>  From previous requirements discussions,
> >> >>>
> >> >>>
> >> >>> There's a track record of discussions on the whiteboard here:
> >> >>>
> https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
> >> >>>
> >> >>> ___
> >> >>> OpenStack-dev mailing list
> >> >>> OpenStack-dev@lists.openstack.org
> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>>
> >> >>
> >> >>
> >> >>
> >> >> ___
> >> >> OpenStack-dev mailing list
> >> >> OpenStack-dev@lists.openstack.org
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>
> >> >>
> >> >>
> >> >> ___
> >> >> OpenStack-dev mailing list
> >> >> OpenStack-dev@lists.openstack.org
> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >>
> >> >
> >> >
> >> > ___
> >> > OpenStack-dev mailing list
> >> > OpenStack-dev@lists.openstack.org
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > h

[openstack-dev] [TROVE] Resource management in Trove

2014-04-28 Thread Denis Makogon
Good day, Trove community



 I'd like to start a thread related to orchestration-based resource
management. At its current state, Heat support in Trove is nothing more than
experimental. Trove should be able to fully support Heat as a resource
management driver.


 *Why is it so important?*


 Because Trove should not do what it does now (cloud service orchestration
is not part of the OS Database Program). Trove should delegate all
tasks to Cloud Orchestration Service (Heat).

 *How can Heat help Trove?*


 Easily. The Trove API allows performing the following resource operations:

   1. Trove instance provisioning (a combination of a nova compute instance
      and a cinder volume).
   2. Resize instances (compute instance flavor resize).
   3. Volume resize (cinder volume resize).
   4. Security groups management (nova-network, neutron):
      - create rules in a group;
      - create a group;
      - update rule CIDRs.

 Heat allows doing almost all of the given tasks.


  *Resource management interface*

 What is a management interface? An abstract class that describes the required
tasks to accomplish. From the trove-taskmanager perspective, a management
interface is nothing more than an RPC service manager that is used at
service start [1].


 *Why is it needed?*

 The first answer is: to split out two completely different resource
management engines: the Nova/Cinder/Neutron engine, called "*NATIVES*", and
the Heat engine, called "*ORCHESTRATOR*".

As you all know, they cannot work together because they act on resources in
their own manners. But both engines share more than enough common code
inside Trove.


*Is it backward compatible?*

 Here comes the third (mixed) manager, called "*MIGRATION*". It allows working
with previously provisioned instances through the *NATIVES* engine
(resizes, migration, deletion), but new instances will be provisioned
within stacks through the *ORCHESTRATOR*.

 So, there are three valid options:

   - use *NATIVES* if there's no Heat available;
   - use *ORCHESTRATOR* to work with Heat only;
   - use *MIGRATION* to work with the mixed manager.


TODO list:

   - provide an abstract manager interface;
   - extract common code shared between natives/heat/migration;
   - implement native management support;
   - implement orchestrator management support;
   - implement migration management support;
   - implement missing features in Heat;
   - re-visit orchestrator support.


 There are already several blueprints filed which would give Trove the
ability to fully support orchestrator-based provisioning:

[TROVE SPACE]

https://blueprints.launchpad.net/trove/+spec/stack-id

https://blueprints.launchpad.net/trove/+spec/resource-manager-interface

https://blueprints.launchpad.net/trove/+spec/resize-volume

https://blueprints.launchpad.net/trove/+spec/resize-instance


 [HEAT SPACE]

https://blueprints.launchpad.net/heat/+spec/update-cinder-volume

https://blueprints.launchpad.net/heat/+spec/handle-update-for-security-groups
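
For illustration only, the abstract manager interface from the TODO list
could be sketched roughly like this (class and method names are assumptions,
not the actual Trove code):

import abc

import six

@six.add_metaclass(abc.ABCMeta)
class ResourceManager(object):
    """Pluggable backend loaded by trove-taskmanager at service start."""

    @abc.abstractmethod
    def create_instance(self, context, instance_id, flavor, volume_size):
        """Provision compute, volume and security groups for an instance."""

    @abc.abstractmethod
    def resize_flavor(self, context, instance_id, new_flavor):
        """Resize the underlying compute instance."""

    @abc.abstractmethod
    def resize_volume(self, context, instance_id, new_size):
        """Grow the attached volume."""

class NativesManager(ResourceManager):
    """Talks to Nova/Cinder/Neutron directly (current behaviour)."""

class OrchestratorManager(ResourceManager):
    """Delegates all resource operations to Heat stacks."""

class MigrationManager(ResourceManager):
    """Handles old instances natively, provisions new ones through Heat."""
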
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][service group]improve host state detection

2014-04-28 Thread John Garbutt
On 28 April 2014 13:30, Jiangying (Jenny)  wrote:
> Nova can now detect that a host is unreachable. But it fails to distinguish host
> isolation, host death and nova-compute service down. When a host is reported as
> unreachable, users have to find out the exact state by themselves and then take
> the appropriate measure to recover. Therefore we'd like to improve host
> detection for nova.
>
> Currently the service group API factors out the host detection and makes it
> a set of abstract internal APIs with a pluggable backend implementation. The
> backend we designed is as follows:
>
> A detection central agent is introduced. When a member joins into the
> service group, the member host starts to send network heartbeat to the
> central agent and writes timestamp in shared storage periodically. When the
> central agent stops receiving the network heartbeats from a member, it pings
> the member and checks the storage heartbeat before declaring the host to
> have failed.
>
>
> network heartbeat | network ping | storage heartbeat | state               | reason
> ------------------|--------------|-------------------|---------------------|------------------------------------------
> OK                | -            | -                 | Running             | -
> Not OK            | Not OK       | Not OK            | Dead                | hardware failure/abnormal host shut down
> Not OK            | OK           | Not OK            | Service unreachable | service process crashed
> Not OK            | Not OK       | OK                | Isolated            | network unreachable
>
> Based on the state recognition table, nova can discern the exact host state
> and assign the reasons.
>
> Thoughts?
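
(For reference, the state recognition table above boils down to a mapping
like the following; this is only an illustration of the logic, not an actual
servicegroup driver.)

def classify_host_state(net_heartbeat_ok, ping_ok, storage_heartbeat_ok):
    # Mirrors the table above: the network heartbeat is checked first, and
    # the ping / storage heartbeat results disambiguate the failure reason.
    if net_heartbeat_ok:
        return 'Running', None
    if ping_ok and not storage_heartbeat_ok:
        return 'Service unreachable', 'service process crashed'
    if not ping_ok and storage_heartbeat_ok:
        return 'Isolated', 'network unreachable'
    if not ping_ok and not storage_heartbeat_ok:
        return 'Dead', 'hardware failure/abnormal host shut down'
    return 'Unknown', None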

I don't think Nova should try to include functionality that
re-implements other good monitoring tools (Nagios, etc)

Having said that, having a new service group API that uses information
from external tools to decide if a host is dead or not, and describes
why, is maybe worth considering.

John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Nova-network to Neutron migration: issues with libvirt

2014-04-28 Thread Kyle Mestery
On Mon, Apr 28, 2014 at 9:19 AM, Oleg Bondarev  wrote:
> On Mon, Apr 28, 2014 at 6:01 PM, Kyle Mestery 
> wrote:
>>
>> On Mon, Apr 28, 2014 at 8:54 AM, Oleg Bondarev 
>> wrote:
>> > Yeah, I also saw in docs that update-device is supported since 0.8.0
>> > version,
>> > not sure why it didn't work in my setup.
>> > I installed latest libvirt 1.2.3 and now update-device works just fine
>> > and I
>> > am able
>> > to move instance tap device from one bridge to another with no downtime
>> > and
>> > no reboot!
>> > I'll try to investigate why it didn't work on 0.9.8 and which is the
>> > minimal
>> > libvirt version for this.
>> >
>> Wow, cool! This is really good news. Thanks for driving this! By
>> chance did you notice if there was a drop in connectivity at all, or
>> if the guest detected the move at all?
>
>
> Didn't check it yet. What in your opinion would be the best way of testing
> this?
>
The simplest way would be to have a ping running when you run
"update-device" and see if any packets are dropped. We can do more
thorough testing after that, but that would give us a good
approximation of connectivity while swapping the underlying device.

>> Kyle
>>
>> > Thanks,
>> > Oleg
>> >
>> >
>> > On Sat, Apr 26, 2014 at 5:46 AM, Kyle Mestery
>> > 
>> > wrote:
>> >>
>> >> According to this page [1], "update-device" is supported from libvirt
>> >> 0.8.0 onwards. So in theory, this should be working with your 0.9.8
>> >> version you have. If you continue to hit issues here Oleg, I'd suggest
>> >> sending an email to the libvirt mailing list with the specifics of the
>> >> problem. I've found in the past there are lots of very helpful on that
>> >> mailing list.
>> >>
>> >> Thanks,
>> >> Kyle
>> >>
>> >> [1]
>> >> http://libvirt.org/sources/virshcmdref/html-single/#sect-update-device
>> >>
>> >> On Thu, Apr 24, 2014 at 7:42 AM, Oleg Bondarev 
>> >> wrote:
>> >> > So here is the etherpad for the migration discussion:
>> >> > https://etherpad.openstack.org/p/novanet-neutron-migration
>> >> > I've also filed a design session on this:
>> >> > http://summit.openstack.org/cfp/details/374
>> >> >
>> >> > Currently I'm still struggling with instance vNic update, trying to
>> >> > move
>> >> > it
>> >> > from one bridge to another.
>> >> > Tried the following on ubuntu 12.04 with libvirt 0.9.8:
>> >> >
>> >> >
>> >> > https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Administration_Guide/sect-dynamic-vNIC.html
>> >> > virsh update-device shows success but nothing actually changes in the
>> >> > instance interface config.
>> >> > Going to try this with later libvirt version.
>> >> >
>> >> > Thanks,
>> >> > Oleg
>> >> >
>> >> >
>> >> >
>> >> > On Wed, Apr 23, 2014 at 3:24 PM, Rossella Sblendido
>> >> > 
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >> Very interesting topic!
>> >> >> +1 Salvatore
>> >> >>
>> >> >> It would be nice to have an etherpad to share the information and
>> >> >> organize
>> >> >> a plan. This way it would be easier for interested people  to join.
>> >> >>
>> >> >> Rossella
>> >> >>
>> >> >>
>> >> >> On 04/23/2014 12:57 AM, Salvatore Orlando wrote:
>> >> >>
>> >> >> It's great to see that there is activity on the launchpad blueprint
>> >> >> as
>> >> >> well.
>> >> >> From what I heard Oleg should have already translated the various
>> >> >> discussion into a list of functional requirements (or something like
>> >> >> that).
>> >> >>
>> >> >> If that is correct, it might be a good idea to share them with
>> >> >> relevant
>> >> >> stakeholders (operators and developers), define an actionable plan
>> >> >> for
>> >> >> Juno,
>> >> >> and then distribute tasks.
>> >> >> It would be a shame if it turns out several contributors are working
>> >> >> on
>> >> >> this topic independently.
>> >> >>
>> >> >> Salvatore
>> >> >>
>> >> >>
>> >> >> On 22 April 2014 16:27, Jesse Pretorius 
>> >> >> wrote:
>> >> >>>
>> >> >>> On 22 April 2014 14:58, Salvatore Orlando 
>> >> >>> wrote:
>> >> 
>> >>  From previous requirements discussions,
>> >> >>>
>> >> >>>
>> >> >>> There's a track record of discussions on the whiteboard here:
>> >> >>>
>> >> >>> https://blueprints.launchpad.net/neutron/+spec/nova-to-quantum-upgrade
>> >> >>>
>> >> >>> ___
>> >> >>> OpenStack-dev mailing list
>> >> >>> OpenStack-dev@lists.openstack.org
>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >>>
>> >> >>
>> >> >>
>> >> >>
>> >> >> ___
>> >> >> OpenStack-dev mailing list
>> >> >> OpenStack-dev@lists.openstack.org
>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >>
>> >> >>
>> >> >>
>> >> >> ___
>> >> >> OpenStack-dev mailing list
>> >> >> OpenStack-dev@lists.openstack.org
>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> >>
>> >> >
>> >> >
>> >> > 


Re: [openstack-dev] PGP keysigning party for Juno summit in Atlanta?

2014-04-28 Thread Sergey Lukjanov
IIRC there was a key signing party at lunch time in Hong Kong, wasn't there?

On Sun, Apr 27, 2014 at 4:05 AM, Clint Byrum  wrote:
> Just a friendly reminder to add yourself to this list if you are
> interested in participating in the key signing in Atlanta:
>
> https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
>
> Now that we have more visibility about schedules, I think we should try
> to find a time slot. Does anybody have an idea already? If not I think
> we should just pick a break time period and get it done.
>
> Excerpts from Thomas Goirand's message of 2014-03-29 23:32:55 -0700:
>> On 03/30/2014 10:00 AM, Mark Atwood wrote:
>> > Hi!
>> >
>> > Are there plans for a PGP keysigning party at the Juno Summit in
>> > Atlanta, similar to the one at the Icehouse summit in Hong Kong?
>> >
>> > Inspired by the URL at
>> > https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Icehouse_Summit
>> > I looked for
>> > https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
>> > to discover that that wiki page does not yet exist and I do not have
>> > permission to create it.
>> >
>> > ..m
>>
>> If there's none, then we should do one.
>>
>> One thing about last key signing party, is that I didn't really like the
>> photocopy method. IMO, it'd be much much nicer to use a file, posted
>> somewhere, containing all participant fingerprints. To check for that
>> file validity, together, we check for its sha256 sum (someone say it out
>> loud, while everyone is checking for its own copy). And everyone,
>> individually, checks for its own PGP fingerprint inside the file. Then
>> we just need to validate entries in this file (with matching ID documents).
>>
>> Otherwise, there's the question of the trustability of the photocopy
>> machine and such... Not that I don't trust Jimmy (I do...)! :)
>>
>> Plus having a text file with all fingerprints in it is more convenient:
>> you can just cut/past the whole fingerprint and do gpg --recv-keys at
>> once (and not just the key ID, which is unsafe because prone to
>> brute-force). That file can be posted anywhere, provided that we check
>> for its sha256 sum.
>>
>> I would happily organize this, if someone can find a *quiet* room with
>> decent network. Who can take care of the place and time?
>>
>> Of course, we will need the fingerprints of every participant in
>> advance, so the wiki page would be useful as well. I therefore created
>> the wiki page:
>> https://wiki.openstack.org/wiki/OpenPGP_Web_of_Trust/Juno_Summit
>>
>> Please add yourself. We'll see if I can make it to Atlanta, and organize
>> something later on.
>>
>> Cheers,
>>
>> Thomas Goirand (zigo)
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Meeting Monday April 28th at 20:00 UTC

2014-04-28 Thread Douglas Mendizabal
Hi Everyone,

The Barbican team is hosting our weekly meeting today, Monday April 28, at
20:00 UTC  in #openstack-meeting-alt

The meeting agenda is available here:
https://wiki.openstack.org/wiki/Meetings/Barbican and everyone is welcome
to add agenda items.

You can check this link
http://time.is/0800PM_28_Apr_2014_in_UTC/CDT/EDT/PDT?Barbican_Weekly_Meeting
to help figure out what 20:00 UTC means in your local time.

-Douglas Mendizábal






smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][service group]improve host state detection

2014-04-28 Thread Sylvain Bauza
2014-04-28 16:33 GMT+02:00 John Garbutt :

>
> I don't think Nova should try to include functionality that
> re-implements other good monitoring tools (Nagios, etc)
>
> Having said that, having a new service group API that uses information
> from external tools to decide if a host is dead or not, and describes
> why, is maybe worth considering.
>
>

Agree with John, a new backend could potentially help out this use-case.
That said, there is already a ZooKeeper driver [1] for servicegroups that could
help.

My 2 cts,
-Sylvain

[1] :
https://github.com/openstack/nova/blob/master/nova/servicegroup/drivers/zk.py



> John
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] Design Summit Sessions

2014-04-28 Thread Sergey Lukjanov
Matt, I'd like to keep the v2 API discussion at the end of our design
sessions track so we have enough input from the other areas. IMO we should
first discuss what we need to have and then what it'll look like.

On Fri, Apr 25, 2014 at 9:29 PM, Matthew Farrellee  wrote:
> On 04/24/2014 10:51 AM, Sergey Lukjanov wrote:
>>
>> Hey folks,
>>
>> I've pushed the draft schedule for Sahara sessions on ATL design
>> summit. The description isn't fully completed, I'm working on it. I'll
>> do it till the end of week and add an etherpad to each session.
>>
>> Sahara folks, please, take a look on a schedule and share your
>> thoughts / comments.
>>
>> Thanks.
>>
>> http://junodesignsummit.sched.org/overview/type/sahara+%28ex-savanna%29
>
>
> will you swap v2-api and scalable slots? part of it will flow into ux re
> image-registry.
>
> maybe add some "error handling / state machine" to the ux improvements
>
> best,
>
>
> matt
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] looking to add support for server groups to heat...any comments?

2014-04-28 Thread Chris Friesen

On 04/26/2014 09:41 PM, Jay Lau wrote:

Just noticed this email, I have already filed a blueprint related to
this topic
https://blueprints.launchpad.net/heat/+spec/vm-instance-group-support

My idea is: can we add a new field such as "PlacementPolicy" to
AutoScalingGroup? If the value is affinity, then when the heat engine creates
the AutoScalingGroup, it will first create a server group with the affinity
policy; then, when creating VM instances for the AutoScalingGroup, the heat
engine will pass the server group id as a scheduler hint so as to
make sure all the VM instances in the AutoScalingGroup are created
with the affinity policy.

resources:
   WorkloadGroup:
 type: AWS::AutoScaling::AutoScalingGroup
 properties:
   AvailabilityZones: ["nova"]
   LaunchConfigurationName: {Ref: LaunchConfig}
   PlacementPolicy: ["affinity"] 
   MaxSize: 3
   MinSize: 2
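
(For reference, what the heat engine would do under the hood corresponds
roughly to the following calls against the Nova API; a sketch using
python-novaclient with placeholder credentials and IDs.)

from novaclient import client as nova_client

nova = nova_client.Client('2', 'user', 'password', 'project',
                          'http://keystone.example.com:5000/v2.0')

# Create the server group with the requested policy first...
group = nova.server_groups.create(name='workload-group',
                                  policies=['affinity'])

# ...then pass its id as a scheduler hint for every member of the group.
server = nova.servers.create(name='member-1', image='<image-id>',
                             flavor='<flavor-id>',
                             scheduler_hints={'group': group.id})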



While I personally like this sort of idea from the perspective of 
simplifying things for heat users, I see two problems.


First, my impression is that heat tries to provide a direct mapping of 
nova resources to heat resources.  Using a property of a heat resource 
to trigger the creation of a nova resource would not fit that model.


Second, it seems less well positioned for exposing possible server group 
enhancements in nova.  For example, one enhancement that has been 
discussed is to add a server group option to make the group scheduling 
policy a weighting factor if it can't be satisfied as a filter.  With 
the server group as an explicit resource there is a natural way to 
extend it.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][service group]improve host state detection

2014-04-28 Thread Jay Pipes
cc'ing Intel and Ericsson engineers who are interested in a similar
plan...

On Mon, 2014-04-28 at 15:33 +0100, John Garbutt wrote:
> On 28 April 2014 13:30, Jiangying (Jenny)  wrote:
> > Nova can now detect that a host is unreachable. But it fails to distinguish host
> > isolation, host death and nova-compute service down. When a host is reported as
> > unreachable, users have to find out the exact state by themselves and then take
> > the appropriate measure to recover. Therefore we'd like to improve host
> > detection for nova.
> >
> > Currently the service group API factors out the host detection and makes it
> > a set of abstract internal APIs with a pluggable backend implementation. The
> > backend we designed is as follows:
> >
> > A detection central agent is introduced. When a member joins into the
> > service group, the member host starts to send network heartbeat to the
> > central agent and writes timestamp in shared storage periodically. When the
> > central agent stops receiving the network heartbeats from a member, it pings
> > the member and checks the storage heartbeat before declaring the host to
> > have failed.
> >
> > network heartbeat | network ping | storage heartbeat | state               | reason
> > ------------------|--------------|-------------------|---------------------|-------------------------------------------
> > OK                | -            | -                 | Running             | -
> > Not OK            | Not OK       | Not OK            | Dead                | hardware failure / abnormal host shut down
> > Not OK            | OK           | Not OK            | Service unreachable | service process crashed
> > Not OK            | Not OK       | OK                | Isolated            | network unreachable
> >
> > Based on the state recognition table, nova can discern the exact host state
> > and assign the reasons.
> >
> > Thoughts?
> 
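
For what it's worth, the quoted state-recognition table boils down to a small
decision function; a minimal sketch, assuming the three checks come in as
plain booleans:

def classify_host(net_heartbeat, net_ping, storage_heartbeat):
    """Sketch of the quoted state-recognition table (inputs are booleans)."""
    if net_heartbeat:
        return 'Running'
    if not net_ping and not storage_heartbeat:
        return 'Dead'                 # hardware failure / abnormal host shutdown
    if net_ping and not storage_heartbeat:
        return 'Service unreachable'  # service process crashed
    if not net_ping and storage_heartbeat:
        return 'Isolated'             # network unreachable
    return 'Unknown'                  # combination not covered by the table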
> I don't think Nova should try to include functionality that
> re-implements other good monitoring tools (Nagios, etc)

Agreed.

> Having said that, having a new service group API that uses information
> from external tools to decide if a host is dead or not, and describes
> why, is maybe worth considering.

Also agreed.

FYI, related blueprint from Ericsson: 

https://review.openstack.org/#/c/87978/

I am -1 on the above blueprint not because I don't see the value in
having nic state play a part in service group management, but because I
don't see a reason to have the resource tracker (which manages resource
usage, not state) or scheduler implement agent state checks.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Environments Working Group

2014-04-28 Thread Roshan Agrawal
The most popular time slot right now is Wed 4:30 pm Central US time. The issue 
with this time, though, is that it is a bad time for folks in India and Europe 
[Noorul, Rajdeep, Julien].

I have added a few more time slot options [8 am, 9 am central US time].  Please 
retake the poll keeping in view that we have to accommodate folks from Europe 
and India as well. 

http://doodle.com/n4w9gmekwz58ekdz





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] cinder not support query volume/snapshot with regular expression

2014-04-28 Thread Trump.Zhang
Thanks for your reply.

Regex matching can be implemented in the database, and glob matching may not
work well with "paginate_query". However, the ReDoS risk you mentioned is not
avoided simply by using regex matching.
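
One glob-based mitigation, for what it's worth, is to accept a shell-style
pattern and translate it server-side into a bounded regex; a minimal sketch
using only the standard library (illustrative, not a patch):

import fnmatch
import re

def glob_filter(pattern, names):
    """Filter names with a shell-style glob translated to a regex.

    fnmatch.translate() only ever emits '.'/'.*' style constructs, so the
    resulting expression avoids the nested quantifiers that make ReDoS
    possible with arbitrary user-supplied regexes.
    """
    compiled = re.compile(fnmatch.translate(pattern))
    return [name for name in names if compiled.match(name)]

# e.g. glob_filter('vol-??-prod*', all_volume_names)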

I will think of it again.

Thanks.


2014-04-28 19:04 GMT+08:00 Duncan Thomas :

> Regex matching in APIs can be a dangerous source of DoS attacks - see
> http://en.wikipedia.org/wiki/ReDoS. Unless this is mitigated sensibly,
> I will continue to resist any cinder patch that adds them.
>
> Glob matches might be safer?
>
> On 26 April 2014 05:02, Zhangleiqiang (Trump) 
> wrote:
> > Hi, all:
> >
> > I see Nova allows search instances by name, ip and ip6 fields
> which can be normal string and regular expression:
> >
> > [stack@leiqzhang-stack cinder]$ nova help list
> >
> > List active servers.
> >
> > Optional arguments:
> >   --ip <ip-regexp>                Search with regular expression match by
> >                                   IP address (Admin only).
> >   --ip6 <ip6-regexp>              Search with regular expression match by
> >                                   IPv6 address (Admin only).
> >   --name <name-regexp>            Search with regular expression match by
> >                                   name.
> >   --instance-name <name-regexp>   Search with regular expression match by
> >                                   server name (Admin only).
> >
> > I think this is also needed for Cinder when querying volumes, snapshots
> > and backups by name. Any advice?
> >
> > --
> > zhangleiqiang (Trump)
> >
> > Best Regards
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Duncan Thomas
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
---
Best Regards

Trump.Zhang
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] User Experience cross-project sessions at Summit

2014-04-28 Thread Jaromir Coufal

Hey Liz,

thank you very much for taking a time, proposing and covering this 
agenda. It looks very good and I am happy that we got two slots for UX 
discussions.


I agree with Thierry that we should definitely cover as many UX areas as 
possible. Therefore I would like to encourage people from all fields of 
expertise who have an interest in User Experience (no matter if it is user 
research, GUI, CLI, API, ...) to join our session so that we can group 
people together and search for the best way to cooperate and 
contribute to OpenStack.


Few inline comments follow:

On 2014/23/04 20:12, Liz Blanchard wrote:

Hi All,

I’m happy to say that there will be two slots (back to back) on the 
cross-project track for us to have discussions around User Experience during 
Summit \o/. I’d like to propose we talk about the following, but am completely 
open to suggestions from whoever is interested in attending these sessions. Let 
me know if anyone has any thoughts here!

1) Introduction of everyone in the session.
-What role do you have today?
-How does UX affect you?
-How will you (if you plan to) contribute to OpenStack UX?
-How active do you plan to be for the Juno development cycle?


+1. I hope that more people from various areas will join the session and 
we can find groups of people who are working in similar areas and 
connect them together.




2) Discussion of where are are currently in UX.
-What components have we worked on so far?
-What does our current process look like?
-What tools do we use?
-What has worked well?
-What could be improved?


It would be great to figure out what the groups of people are and hear 
from each group what they have done so far and what their goals are.



3) Discussion on where we want to go for Juno.
-How should we improve our process and tooling during the Juno release? 
How do we track this and who will take certain action items?
-What tools should we remove/add? (Jarda sent a nice e-mail proposal 
around yesterday that would be great to discuss further)
-What are our goals for UX during the Juno release? (More 
research/requirements work? More designing? More user testing?...) Which 
components will we focus on? (Horizon? Tuskar? Heat? Ceilometer?….) Which 
features will we focus on?


I think the most important goal for this session would be to connect 
people together and establish a bridge between us, so that we can 
communicate, meet and discuss all together in an easy way. Thread about 
tools and processes helped (thanks all for your feedback), I am going to 
expand on it and I would like to share a summary and follow up on open 
questions at this session.


Regarding goals for Juno cycle - I would envision us to discuss more 
global goals for the whole UX community (which from my perspective means 
mostly gluing people together and finding a way of how to cooperate).


For finer goals (like what to focus on in GUI, in CLI, ...), I think it 
should be discussed in smaller groups (which don't have to overlap, so 
that people can be present in multiple groups). Therefore I think this 
kind of planning might happen:
A) Later on during the Summit (maybe second block?), discussing each 
area separately.

B) After the Summit through channels which we establish.



4) UX as a program.
-What does it mean to be a program?
-Would UX make sense to be a program? If so, how should we work 
together to make this something we could propose as a team?


+1 it would be nice to follow up on this topic. I hope we will build on 
top of previous discussions about UX becoming a program:

* Wiki: https://wiki.openstack.org/wiki/UX/ProgramProposal
* Email thread: 
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019884.html


Thanks, I am looking forward to meeting all of you
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] discussion: json schema to define apis

2014-04-28 Thread Jamie Hannaford
Thanks Matt for bringing up these questions - I think having this kind of 
discussion is essential for such a big idea. It also helps me clarify my own 
thinking towards this issue.

Before I answer, I want to point out that I'm not staunchly for or against any 
particular idea. I do think that schemas offer a lot of advantages over writing 
user-land code, but I'm more than willing to abandon the proposal if we all 
determine there's a stronger and more compelling alternative.

1. Why use schemas instead of userland code?

I've put my answer to this question here: 
https://wiki.openstack.org/wiki/OpenStack-SDK-PHP/JSON-schema

2. How will debugging work?

I'll highlight two conceivable issues which might need debugging. The first 
issue is the API rejecting a request for whatever reason (i.e. a proxy 
modifying headers); the second issue is when a data structure returned from the 
API fails to validate against a particular schema file.

Issue 1: Malformed requests
There are two reasons why a request would fail: if an end-user stocks it with 
bad data, or if something in the middle deforms it. A very easy solution to the 
first problem is using schemas to perform basic parameter checking before a 
request is serialized. If we know, for example, that the API is expecting a 
particular value - or a particular header - the schema is in charge of making 
that happen. Performing basic validation catches most errors - and debugging is 
very easy due to the exception thrown. If you're ever in doubt, you just refer 
to the schema to see what was serialized into a request in the same way you do 
for a concrete class method.

If something in the middle deforms the request, the API will naturally reject 
it. When it comes to debugging this issue, all you need to do is wrap your 
original code in a try/catch block and use Guzzle's BadResponseException to 
return the API's response. You can easily see the type of failure through the 
HTTP status code, and the exact reason why the request failed. So it doesn't 
matter where the failure happens - all that matters is that there's a way to 
catch and spit out the API's response and the originating request.

Issue 2: Incorrect API data
Say we've defined that a Server has two properties: a name (which is a string) 
and metadata (which is an object). If the API returns a name as an array, that 
obviously fails validation. When the schema code goes to validate the API data, 
it will raise a validation error when it comes to validate that "name" property. 
How you consequently use this validation error is completely up to you: 
you could output it to STDOUT, you could save it to a local log on the 
filesystem, you could buffer it temporarily in memory.

Any API data that does not validate successfully against a schema should not be 
presented to the end-user. So if a "created_date" property is returned, that 
isn't defined in our schema, it should not be populated in the resulting model. 
The model returned to the end-user would be a simple object that implements 
\ArrayAccess, meaning that it can be accessed like a simple array.
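
To make Issue 2 concrete, the kind of schema fragment being described looks
roughly like the following (written in Python syntax purely for illustration;
the property set is invented, not the real Nova schema):

# The SDK would validate API output against this and only expose the
# properties the schema actually defines.
server_schema = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'metadata': {'type': 'object'},
    },
    'required': ['name', 'metadata'],
    'additionalProperties': True,   # tolerate, but do not expose, extra keys
}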

3. Where would JSON schemas come from?

It depends on each OpenStack service. Glance and Marconi (soon) offer schemas 
directly through the API - so they are directly responsible for maintaining 
this - we'd just consume it. We could probably cache a local version to 
minimize requests.

For services that do not offer schemas yet, we'd have to use local schema 
files. There's a project called Tempest which does integration tests for 
OpenStack clusters, and it uses schema files. So there might be a possibility 
of using their files in the future. If this is not possible, we'd write them 
ourselves. It took me 1-2 days to write the entire Nova API. Once a schema file 
has been fully tested and signed off as 100% operational, it can be frozen as a 
set version.

4. What would the workflow look like?

I don't really understand what you mean: can you elaborate?

5. How does schema files handle business logic?

That's a really great question. I've written a brief write-up here: 
https://wiki.openstack.org/wiki/OpenStack-SDK-PHP/JSON-schema-business-logic


Jamie

From: Matthew Farina mailto:m...@mattfarina.com>>
Date: Thursday, April 24, 2014 at 5:42 PM
To: Jamie Hannaford 
mailto:jamie.hannaf...@rackspace.com>>, 
"OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "sam.c...@hp.com" 
mailto:sam.c...@hp.com>>
Subject: [openstack-sdk-php] discussion: json schema to define apis

Jamie (and whom ever else wants to jump in),

It's been proposed to use json schema to describe the API calls rather
than code. The operations to perform and what they do would be
described rather than coded and then some code would use the schema to
know how to act.

Others are already doing t

[openstack-dev] [Heat] No meeting this week

2014-04-28 Thread Zane Bitter
Since this is the designated "off" week, I'm going to cancel this week's 
IRC meeting. By happy coincidence, that will give us until after Summit 
to decide on a new alternate meeting time :)


The next meeting will be on the 7th of May, at the regular time (2000 UTC).

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][globalization] Need input on how to proceed .

2014-04-28 Thread Jay S. Bryant
Duncan,

Thanks for the response.  Have some additional thoughts, in-line, below:


On Mon, 2014-04-28 at 12:15 +0100, Duncan Thomas wrote:
> Two separate patches, or even two chains of separate patches, will
> make reviewing and more importantly (hopefully temporary) backouts
> easier. It will also reduce the number of merge conflicts, which are
> still likely to be substantial.

True, I suppose we need to keep in mind that we might want to
make this easy to back out in the future.  Hopefully it isn't an
issue this time around though.

> There's no benefit at all to all of this being done in one patch, and
> substantial costs. Doing the conversion by sections seems like the way
> forward.

So, let me propose a different process here: handle the i18n work and the
removal of debug translations separately.  First, propose one patch that
adds the explicit import of '_' to all files.  A lot of files will be
touched, but each change is a one-liner.  Then make the re-enablement of
lazy translation a second patch that depends on the first.

Then handle the removal of _() from DEBUG logs as a separate issue once the
above has merged, doing it in multiple patches divided by section.  Perhaps
make the sections the top-level directories under cinder/?  Does that sound
like a reasonable plan?
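
To make the two conventions concrete, the per-file change being discussed is
roughly the following (the exact import path is whichever module cinder
exposes '_' from; treat this as a sketch, not the patch):

import logging

from cinder.openstack.common.gettextutils import _   # explicit import of '_'
                                                      # (module path illustrative)

LOG = logging.getLogger(__name__)
volumes = []   # stand-in for real data

# DEBUG messages would no longer be marked for translation ...
LOG.debug("Re-exporting %s volumes", len(volumes))

# ... while user-facing log levels keep _() so lazy translation still applies.
LOG.warning(_("Unable to update stats, driver is uninitialized"))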

> 
> Doing both around the same time (maybe as dependant patches) seems reasonable
> 

As I think about it, I don't know that the debug translation removal
needs to be dependent, but we could work it out that way if you feel
that is important.

Let me know what you think.

Thanks!

> On 27 April 2014 00:20, Jay S. Bryant  wrote:
> > All,
> >
> > I am looking for feedback on how to complete implementation of i18n
> > support for Cinder.  I need to open a new BluePrint for Juno as soon as
> > the cinder-specs process is available.  In the mean time I would like to
> > start working on this and need feedback on the scope I should undertake
> > with this.
> >
> > First, the majority of the code for i18n support went in with Icehouse.
> > There is just a small change that is needed to actually enable Lazy
> > Translation again.  I want to get this enabled as soon as possible to
> > get plenty of runtime on the code for Icehouse.
> >
> > The second change is to add an explicit export for '_' to all of our
> > files to be consistent with other projects. [1]  This is also the safer
> > way to implement i18n.  My plan is to integrate the change as part of
> > the i18n work.  Unfortunately this will touch many of the files in
> > Cinder.
> >
> > Given that fact, this brings me to the item I need feedback upon.  It
> > appears that Nova is moving forward with the plan to remove translation
> > of debug messages as there was a recent patch submitted to enable a
> > check for translated DEBUG messages.  Given that fact, would it be an
> > appropriate time, while adding the explicit import of '_' to also remove
> > translation of debug messages.  It is going to make the commit for
> > enabling Lazy Translation much bigger, but it would also take out
> > several work items that need to be addressed at once.  I am willing to
> > undertake the effort if I have support for the changes.
> >
> > Please let me know your thoughts.
> >
> > Thanks!
> > Jay
> > (jungleboyj on freenode)
> >
> > [1] https://bugs.launchpad.net/cinder/+bug/1306275
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MagnetoDB] RE: MagnetoDB CLI client

2014-04-28 Thread ANDREY OSTAPENKO (CS)
Hi, Ilya!

Thank you for your suggestion! I totally agree with you; I'll make the changes
in the blueprint.

Andrey Ostapenko

From: Ilya Sviridov [isviri...@mirantis.com]
Sent: Monday, April 28, 2014 6:37 AM
To: ANDREY OSTAPENKO (CS)
Cc: openstack-dev@lists.openstack.org
Subject: Re: MagnetoDB CLI client

Hello Andrey,

Great!

Looking closer at the blueprint, I've realized that the parameter naming is
confusing.

I would suggest using a --request-file parameter instead of the
--description-file used now.

Also, I believe table-list will be the most popular call, and it has only two
parameters, so it would be better to avoid JSON for it in the CLI and pass all
the info via command-line arguments.

like
magnetodb table-list --exclusive-start-table-name  table_1 --count 10

We probably also have to think about the default behavior when no JSON is
passed, or when the required arguments are passed directly as CLI arguments,
for easier usage. Scan looks like a good example.


BTW: we have a dedicated mail prefix so that we don't spam everybody, only the 
audience interested in the MagnetoDB project :) so just add 
[openstack-dev][MagnetoDB] at the beginning of the email subject next time.

Thank you
Ilya




On Fri, Apr 25, 2014 at 4:29 PM, ANDREY OSTAPENKO (CS) 
mailto:andrey_ostape...@symantec.com>> wrote:
Hello, everyone!

Now I'm starting to implement the CLI client for the MagnetoDB key-value
storage service.
I'm going to use the heat approach for CLI commands, e.g. heat stack-create
--template-file <file>, because we have too many parameters to pass on the
command line.
For example, the table creation command:

magnetodb create-table --description-file <file>

File will contain json data, e.g.:

{
"table_name": "data",
"attribute_definitions": [
{
"attribute_name": "Attr1",
"attribute_type": "S"
},
{
"attribute_name": "Attr2",
"attribute_type": "S"
},
{
"attribute_name": "Attr3",
"attribute_type": "S"
}
],
"key_schema": [
{
"attribute_name": "Attr1",
"key_type": "HASH"
},
{
"attribute_name": "Attr2",
"key_type": "RANGE"
}
],
"local_secondary_indexes": [
{
"index_name": "IndexName",
"key_schema": [
{
"attribute_name": "Attr1",
"key_type": "HASH"
},
{
"attribute_name": "Attr3",
"key_type": "RANGE"
}
],
"projection": {
"projection_type": "ALL"
}
}
]
}

Blueprint: https://blueprints.launchpad.net/magnetodb/+spec/magnetodb-cli-client

If you have any comments, please let me know.
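
For the sake of discussion, the --description-file / --request-file handling
amounts to something like the sketch below on the client side (the endpoint
flag, the '/tables' path and the option names are invented for illustration;
this is not the actual python-magnetodbclient code):

import argparse
import json

import requests


def main():
    parser = argparse.ArgumentParser(prog='magnetodb table-create')
    parser.add_argument('--request-file', required=True,
                        help='JSON file describing the table to create')
    parser.add_argument('--endpoint', required=True,
                        help='MagnetoDB data API endpoint (assumed flag)')
    args = parser.parse_args()

    # Read the JSON request body from disk and POST it to the data API.
    with open(args.request_file) as f:
        body = json.load(f)

    resp = requests.post('%s/tables' % args.endpoint.rstrip('/'),
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    resp.raise_for_status()
    print(json.dumps(resp.json(), indent=2))


if __name__ == '__main__':
    main()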

Best regards,
Andrey Ostapenko

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Globalization] REST API sorting by status severity vs. alphabetical status key

2014-04-28 Thread Steven Kaufer
Jay,

Thanks again for the reply.  If this migration is implemented using the
object "versioning", then the new "status as int" column cannot be utilized
(i.e., sorted on) until the existing "status as string" column is eventually
dropped.

Is this correct?  If so, then this approach will not actually solve the
globalization sort problem until more release cycles have completed -- this
does not seem like a viable solution.

Until we know that the new "status as int" column is populated then we
cannot use it as a sortable column.

In theory, a deployer could conditionally choose to migrate to the new
column if they needed that function and were willing to take the hit during
the migration.  However, this just complicates the sorting logic since we
would then need to know which column to use during the sort (the new int
column if the migration has completed or the old string column if the
migration has not completed).

Thanks,
Steven Kaufer

Jay Pipes  wrote on 04/28/2014 09:05:51 AM:

> From: Jay Pipes 
> To: openstack-dev@lists.openstack.org,
> Date: 04/28/2014 09:07 AM
> Subject: Re: [openstack-dev] [Globalization] REST API sorting by
> status severity vs. alphabetical status key
>
> On Wed, 2014-04-23 at 22:07 -0500, Steven Kaufer wrote:
> > > yeah, we're talking about thousands and thousands of rows that have
> > to
> > > be updated before the API can be restarted…
> > >
> > > > There's also a possibility of adding support for the status codes,
> > but
> > > > keeping the string columns in the database, and then using the
> > nova
> > > > object versioning to "migrate" the object schema over time to the
> > point
> > > > where the migration is a simple DROP COLUMN.
> > >
> > > I like that idea better, TBH, but we're probably talking about a
> > > long-time deprecation here, like on the order of a couple of
> > releases;
> > > that would give plenty of time for the majority of the records to be
> > > revisited and make the final migration run for a lot shorter time.
> > > --
> >
> > Thanks for the discussion.
>
> No prob, sorry for the delayed response...
>
> > So how would this new flow work?
> > In Juno would there be an additional status_int column that would be
> > populated and (eventually) replace the existing status (as string)
> > column?
>
> That would be the cleanest way, yes.
>
> > How would the object versioning populate the new column for the
> > existing records?
>
> Within the nova.objects.instance.Instance object itself, we can put a
> small check-and-transform function in the object to do the translation
> in-line.
>
> > Any examples or details that would help explain how this could work
> > would be appreciated.
>
> Probably worth putting a blueprint up about it. I can work with you on
> it, if you'd like, though it will likely be after the summit until I
> have time to work on it.
>
> > Lastly, is there agreement that this is an issue that needs to be
> > addressed? Note that this seems to be a pervasive problem, I've
> > investigated the status column in cinder and nova but I suspect that
> > the same issue exists in other components.
>
> Yes, the same issue unfortunately exists in lots of the other
> components, and they don't have the benefit of the nova objects work in
> them, which makes it a lot more of a nuisance to migrate the database
> schema. Though, personally I'm not entirely sure going through the
> effort of doing a long-time in-object translation is worth it. A change
> to the database schema, even on tens of millions of records wouldn't
> take more than a couple minutes. But it all depends on the operator's
> tolerance for downtime, since the instances table would certainly be
> locked for the duration of the migration.
>
> Best,
> -jay
>
> > Thanks,
> > Steven Kaufer
> >
> >
> > > Kevin L. Mitchell 
> > > Rackspace
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Summit sessions

2014-04-28 Thread Sergey Lukjanov
Hey folks,

I've finished descriptions / schedule for our design summit in Atlanta.

You can find sched: http://junodesignsummit.sched.org
General etherpad with all links and assignees:
https://etherpad.openstack.org/p/juno-summit-sahara

Volunteers to help working on etherpads are welcome, please, contact
current assignees to join.

Matt, Chad, Trevor, Andrey, Alex and me, please, start filling the etherpads.

Schedule with assignments:

>> Thu 1:30 PM [Sahara Juno] Releasing and backward compatibility
summit.o.o: http://summit.openstack.org/cfp/details/181
sched.org: 
http://junodesignsummit.sched.org/event/b4f52627efa42f285978d5af3643e189
etherpad.o.o: 
https://etherpad.openstack.org/p/juno-summit-sahara-relmngmt-backward
assignee: Sergey Lukjanov (SergeyLukjanov), Andrew Lazarev (alazarev)

>> Thu 2:20 PM [Sahara Juno] CI/gating and plugin requirements
summit.o.o: http://summit.openstack.org/cfp/details/189
sched.org: 
http://junodesignsummit.sched.org/event/c8774beefd9e9188a3e0729d2bd7131e
etherpad.o.o: 
https://etherpad.openstack.org/p/juno-summit-sahara-testing-plugins
assignee: Sergey Lukjanov (SergeyLukjanov), Matthew Farrellee (mattf)

>> Thu 3:10 PM [Sahara Juno] Scalable Sahara and further OpenStack integration
summit.o.o: http://summit.openstack.org/cfp/details/29
sched.org: 
http://junodesignsummit.sched.org/event/10bc9a23eb43eb9df885586035fb2491
etherpad.o.o: 
https://etherpad.openstack.org/p/juno-summit-sahara-scale-integration
assignee: Sergey Lukjanov (SergeyLukjanov)

>> Thu 4:10 PM [Sahara Juno] UX improvements
summit.o.o: http://summit.openstack.org/cfp/details/143
sched.org: 
http://junodesignsummit.sched.org/event/be842178a085fe95b7665a653f8ab541
etherpad.o.o: https://etherpad.openstack.org/p/juno-summit-sahara-ux
assignee: Chad Roberts (croberts), Trevor McKay (tmckay)

>> Thu 5:00 PM [Sahara Juno] Future of EDP: plugins, SPI, Oozie
summit.o.o: http://summit.openstack.org/cfp/details/345
sched.org: 
http://junodesignsummit.sched.org/event/dfa603324c0bbf29c2f09a77efb82d1d
etherpad.o.o: https://etherpad.openstack.org/p/juno-summit-sahara-edp
assignee: Trevor McKay (tmckay), Alexander Ignatov (aignatov)

>> Fri 9:00 AM [Sahara Juno] Next major REST API - v2
summit.o.o: http://summit.openstack.org/cfp/details/27
sched.org: 
http://junodesignsummit.sched.org/event/a64f771cf28ed3ad637730db828668ff
etherpad.o.o: https://etherpad.openstack.org/p/juno-summit-sahara-v2-api
assignee: Matthew Farrellee (mattf), Sergey Lukjanov (SergeyLukjanov)

>> Fri 9:50 AM [Sahara Juno] Sahara in Icehouse and Juno
summit.o.o: http://summit.openstack.org/cfp/details/360
sched.org: 
http://junodesignsummit.sched.org/event/49089a1d9c8203c6a4c1f0001fa417af
etherpad.o.o: https://etherpad.openstack.org/p/juno-summit-sahara-roadmap-retro
assignee: Sergey Lukjanov (SergeyLukjanov), Alexander Ignatov (aignatov)

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mistral] Community meeting minutes/logs - 04/28/2014

2014-04-28 Thread Renat Akhmerov
Thanks for joining today’s community meeting.

Minutes: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-04-28-16.00.html
Full log: 
http://eavesdrop.openstack.org/meetings/mistral/2014/mistral.2014-04-28-16.00.log.html

The next meeting is scheduled for May 5.

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Query leases by project_id/domain_id

2014-04-28 Thread Fuente, Pablo A
Any comments on this?

On Tue, 2014-04-22 at 18:58 +, Fuente, Pablo A wrote:
> Hi
>   I'm trying to tackle this bug
> (https://bugs.launchpad.net/climate/+bug/1304435). The options that I'm
> considering are:
> 
>   1 - Add the project_id query parameter to the leases API
>   2 - Use the X_PROJECT_ID header
> 
>   I prefer the first option, but I would like to know if there is a
> general OpenStack approach for implementing this. BTW, I'm planning to
> apply the same criteria for making queries for Domains.
> 
> Pablo.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] nominating Pablo Andres Fuente for the Climate core reviewers team

2014-04-28 Thread Fuente, Pablo A
No problem.
Thanks

On Fri, 2014-04-25 at 11:13 +0400, Nikolay Starodubtsev wrote:
> Congrats, Pablo! I was out of office and had no internet, so I couldn't
> give you +1 :(
>
> Nikolay Starodubtsev
> Software Engineer
> Mirantis Inc.
> Skype: dark_harlequine1
>
> 2014-04-24 17:43 GMT+04:00 Fuente, Pablo A:
> > Thanks, it's an honor!
> >
> > On Thu, 2014-04-24 at 13:20 +0000, Sanchez, Cristian A wrote:
> > > Congratulations Pablo!
> > >
> > > From: Sylvain Bauza
> > > Subject: Re: [openstack-dev] [Climate] nominating Pablo Andres Fuente
> > > for the Climate core reviewers team
> > >
> > > Welcome Pablo !
> > >
> > > 2014-04-24 15:06 GMT+02:00 Dina Belova:
> > > > Well, as 3/4 core team members are okay with it, I'll do this)
> > > >
> > > > On Thu, Apr 24, 2014 at 2:14 PM, Sylvain Bauza wrote:
> > > > > http://russellbryant.net/openstack-stats/climate-reviewers-90.txt
> > > > >
> > > > > As per the stats, +1 to this.
> > > > >
> > > > > 2014-04-24 12:10 GMT+02:00 Dina Belova:
> > > > > > I propose to add Pablo Andres Fuente (pafuent on IRC) to the
> > > > > > Climate core team.
> > > > > >
> > > > > > He's a Python contributor from Intel, and he took a great part
> > > > > > in Climate development, including design suggestions and great
> > > > > > ideas. He has been quite active during Icehouse and, given his
> > > > > > skills, interest and background, I suppose it'll be great to
> > > > > > add him to the team.
> > > > > >
> > > > > > Best regards,
> > > > > > Dina Belova
> > > > > > Software Engineer
> > > > > > Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Chris Friesen

On 04/25/2014 03:15 PM, Jay Pipes wrote:


There are myriad problems with the above user experience and
implementation. Let me explain them.

1. The user isn't creating a "server group" when they issue a nova
server-group-create call. They are creating a policy and calling it a
group. Cognitive dissonance results from this mismatch.


I actually don't think this is true.  From my perspective they are 
actually creating a group, and then when booting servers they can be 
added into the group.


The group happens to have a policy, it is not only a policy.


2. There's no way to add an existing server to this "group".


In the original API there was a way to add existing servers to the 
group.  This didn't make it into the code that was submitted.  It is 
however supported by the instance group db API in nova.



3. There's no way to remove members from the group


In the original API there was a way to remove members from the group. 
This didn't make it into the code that was submitted.



4. There's no way to manually add members to the server group


Isn't this the same as item 2?


5. The act of telling the scheduler to place instances near or away from
some other instances has been hidden behind the server group API, which
means that users doing a nova help boot will see a --group option that
doesn't make much sense, as it doesn't describe the scheduling policy
activity.


There are many things hidden away that affect server booting...metadata 
matching between host aggregates and flavor extra specs, for instance.


As I understand it, originally the concept of "server groups" was more 
broad.  They supported multiple policies, arbitrary group metadata, etc. 
 The scheduler policy was only one of the things that could be 
associated with a group.  This is why the underlying database structure 
is more complicated than necessary for the current set of supported 
operations.


What we have currently is sort of a "dumbed-down" version but now that 
we have the basic support we can start adding in additional 
functionality as desired.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DriverLog][nova][neutron][cinder] Call for vendor participation please

2014-04-28 Thread Jay Pipes

Hi Stackers,

Mirantis has been collaborating with a number of OpenStack contributors 
and PTLs for the last couple months on something called DriverLog. It is 
an effort to consolidate and display information about the verification 
of vendor drivers in OpenStack.


Current implementation is here:

http://staging.stackalytics.com/driverlog/

Public wiki here: https://wiki.openstack.org/wiki/DriverLog

Code is here: https://github.com/stackforge/driverlog

There is currently a plan by the foundation to publicly announce this in 
the coming weeks.


At this point Evgeniya Shumakher, in cc, is manually maintaining the 
records, but we aspire for this to become a community driven process 
over time with vendors submitting updates as described in the wiki and 
PTLs and cores of the respective projects participating in update reviews.


A REQUEST: If you are a vendor that has built an OpenStack driver, please 
check that it is listed on the dashboard and update the record 
(following the process in the wiki) to make sure the information is 
accurately reflected. We want to make sure that the data is accurate 
prior to announcing it to general public.


Also, if anybody has a suggestion on what should be improved / changed, 
etc., please don't hesitate to share your ideas!


Thanks!
Jay and Evgeniya

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Chris Friesen

On 04/28/2014 06:58 AM, Steve Gordon wrote:

- Original Message -



Create two new options to nova boot:

--near-tag <tag> and --not-near-tag <tag>

The first would tell the scheduler to place the new VM near other
VMs having a particular "tag". The latter would tell the scheduler
to place the new VM *not* near other VMs with a particular tag.



Would we continue to grow this set of arguments in response to the
addition of new policies, and how much do we expect this to grow? The two
most likely additions I can think of are "soft"/"best effort"
versions of the current two; are there any other proposals/ideas out
there - I know we're a creative bunch ;)?


One logical extension that came up previously is a max group size, maybe 
expressed as a quota or something.




1. There's no need to have any "server group" object any more.
Servers have a set of tags (key/value pairs in v2/v3 API) that may
be used to identify a type of server. The activity of launching an
instance would now have options for the user to indicate their
affinity preference, which removes the cognitive dissonance that
happens due to the user needing to know what a server group is (a
policy, not a group).



Would the user's affinity preference stay with the instance for
consideration in future operations post-boot (either now or in a
future extension of this functionality)?


Whichever way it's implemented, we need to preserve the boot time 
scheduler constraints so that any time we reschedule (migration, 
evacuation, resize, etc.) the constraints will be re-evaluated.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Dan Smith
>> 2. There's no way to add an existing server to this "group".
> 
> In the original API there was a way to add existing servers to the
> group.  This didn't make it into the code that was submitted.  It is
> however supported by the instance group db API in nova.
> 
>> 3. There's no way to remove members from the group
> 
> In the original API there was a way to remove members from the group.
> This didn't make it into the code that was submitted.

Well, it didn't make it in because it was broken. If you add an instance
to a group after it's running, a migration may need to take place in
order to keep the semantics of the group. That means that for a while
the policy will be being violated, and if we can't migrate the instance
somewhere to satisfy the policy then we need to either drop it back out,
or be in violation. Either some additional states (such as being queued
for inclusion in a group, etc) may be required, or some additional
footnotes on what it means to be in a group might have to be made.

It was for the above reasons, IIRC, that we decided to leave that bit
out since the semantics and consequences clearly hadn't been fully
thought-out. Obviously they can be addressed, but I fear the result will
be ... ugly. I think there's a definite possibility that leaving out
those dynamic functions will look more desirable than an actual
implementation.

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [marconi] performance

2014-04-28 Thread Janczuk, Tomasz
Hello,

Have any performance numbers been published for Marconi? I have asked this 
question before 
(http://lists.openstack.org/pipermail/openstack-dev/2014-March/031004.html) but 
there were none at that time.

Thanks,
Tomasz Janczuk

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat] [Keystone] [TripleO] Making use of domains by name - policy and API issues?

2014-04-28 Thread Clint Byrum
So in the process of making Heat deploy itself, I've run into a bit of a
deadlock.

https://bugs.launchpad.net/tripleo/+bug/1287453
https://bugs.launchpad.net/heat/+bug/1313003

Currently, we deploy OpenStack like this:

* First we generate usernames/passwords for all service accounts
* Next we deploy Keystone and Heat (and.. the rest of OpenStack)
  - In this process, we feed in the usernames and passwords we
generated.
* Then when everything is "deployed", we initialize Keystone with the
  generated usernames and passwords via the keystone API.
* Now we test to make sure what we deployed works.

However, in order to create isolated users for narrow access to Heat
from inside instances, Heat needs a domain to put these narrowly scoped
users in. Heat has a handy script for creating this domain and an admin
inside the domain which is needed to create the lesser users. So that
naturally fits into our initialization of keystone.

The problem is that because of bug 1313003, Heat can only use a domain
ID to specify this domain. We haven't created that domain yet at stack
creation time though, so we would have to add another step before
testing/using the cloud:

* Update stack with ID of heat stack user domain.

Steven Hardy has indicated that it was problematic to make use of names
instead of id's for domains, and that to me signals a problem with the
API and/or policy model in Keystone around domains.

Everything else in TripleO makes use of names except this, so I think
we need to solve this. This isn't just a TripleO or Heat problem though,
anybody using domains will run into the same trouble Steven hit, and
that is not something we should ignore.

Can somebody more familiar with domains explain what would be needed for
Heat to be able to look up domains by name and use them like
most other things in OpenStack, where we can use names or IDs
interchangeably?
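
For reference, resolving a domain name to an ID today means an extra
list-and-filter call against the v3 API, roughly like the sketch below (the
keystoneclient call shape and the token/endpoint plumbing are my assumptions;
the point is that it requires a list operation that the default policy tends
to restrict to cloud admins):

from keystoneclient.v3 import client as ks_client

# Assumed admin token and endpoint, purely for illustration.
keystone = ks_client.Client(token='ADMIN_TOKEN',
                            endpoint='http://keystone.example.com:35357/v3')

matches = keystone.domains.list(name='heat')   # name passed as a query filter
if not matches:
    raise RuntimeError('no domain named "heat" found')
heat_stack_user_domain_id = matches[0].id      # the ID heat currently requires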

Thanks!

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Need some help with mock

2014-04-28 Thread Deepak Shetty
 I was writing this in test_glusterfs.py

def test_ensure_shares_unmounted_1share(self):
with contextlib.nested(
mock.patch.object(self._driver, '_load_shares_config'),
mock.patch.object(self._driver, '_ensure_share_unmounted')
) as (self._fake_load_shares_config, mock_ensure_share_unmounted):

#mock_shares = {'127.7.7.7:/gluster1': None}
#mock_load_shares_config.return_value = mock_shares
#self._driver.shares = mock_load_shares_config.return_value

self._driver._ensure_shares_unmounted()

mock_ensure_share_unmounted.assert_called_once()
mock_ensure_share_unmounted.assert_called_once_with(
'127.7.7.7:/gluster1')

for my patch @ https://review.openstack.org/#/c/86888/6

and i get the output as ..

==
FAIL:
cinder.tests.test_glusterfs.GlusterFsDriverTestCase.test_ensure_shares_unmounted_1share
--
...
...

stderr: {{{
cinder/tests/test_glusterfs.py:736: DeprecationWarning: With-statements now
directly support multiple context managers
  mock.patch.object(self._driver, '_ensure_share_unmounted')
}}}

Traceback (most recent call last):
  File "cinder/tests/test_glusterfs.py", line 747, in
test_ensure_shares_unmounted_1share
'127.7.7.7:/gluster1')
  File "/usr/lib/python2.7/site-packages/mock.py", line 845, in
assert_called_once_with
raise AssertionError(msg)
AssertionError: Expected to be called once. Called 0 times.


Can you help with why
'mock_ensure_share_unmounted.assert_called_once()' check passes
but
'mock_ensure_share_unmounted.assert_called_once_with('127.7.7.7:/gluster1')
check fails ?


In glusterfs.py ...

def _ensure_shares_unmounted(self):
self._load_shares_config(self.configuration.glusterfs_shares_config)
for share in self.shares.keys():
try:
self._ensure_share_unmounted(share)
except Exception as exc:
LOG.warning(_('Exception during unmounting %s') % (exc,))


thanx,
deepak
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-04-28 Thread David Kranz

On 04/27/2014 10:02 PM, Matthew Treinish wrote:

On Mon, Apr 28, 2014 at 01:01:00AM +, Kenichi Oomichi wrote:

Hi,

Sorry for my late response, but I'd like to discuss this again.

Now we are working for adding Nova API responses checks to Tempest[1] to
block backward incompatible changes.
With this work, Tempest checks each response(status code, response body)
and raises a test failure exception if detecting something unexpected.
For example if some API parameter, which is defined as 'required' Tempest
side, does not exist in response body, Tempest test fails.

We are defining API parameters as 'required' if they are not API extensions
or they are not depended on Nova configuration. In addition now Tempest
allows additional API parameters, that means Tempest does not fail even if
Nova response includes unexpected API parameters. Because I think the removal
of API parameter causes backward incompatible issue but the addition does not
cause it.

So, AIUI we can only add parameters to an API with a new extension. The API
change guidelines also say that adding new properties must be conditional:

https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
I just wanted to note that the original referenced wiki page, assembled 
by markmc and myself, did not specify the need for an extension in order 
to add a value to a return dict or add a value to a dict argument if the 
existing api would ignore it. This was changed (and honestly I did not 
notice this change at the time) last August to require an extension: 
https://wiki.openstack.org/w/index.php?title=APIChangeGuidelines&diff=prev&oldid=28593. 


Is there any trace left of discussions around that change?

The original wording allowed the api to evolve as long as a "reasonable" 
application would not be broken. Essentially the extra value becomes 
optional and new client or server code can check for it. The new 
definition is quite strict and similar to what leads to many Windows 
APIs having names like CreateWindowEx. Not saying being strict is bad, 
but it will require a lot of changes to the client libraries as well as 
tempest because there are a lot of extensions that are not checked by 
either.



 -David


Adding or removing a parameter to an API is a backwards incompatible change IMO
for the exact reasons you mentioned here. If we have to worry about it in
tempest then end users do as well.

This is also why there are a bunch of nova v2 extensions that just add
properties to an existing API. I think in v3 the proposal was to do this with
microversioning of the plugins. (we don't have a way to configure
microversioned v3 api plugins in tempest yet, but we can cross that bridge when
the time comes) Either way it will allow tempest to have in config which
behavior to expect.


In this situation, there is a problem related to branchless Tempest.
When we define a new API parameter as 'required', Tempest run against an old
release would fail.

So I feel that if we are marking something in the API as required in tempest
makes the test fail in a previous release than that should be considered a bug
in the old release (assuming it was correct to mark it as required), and that
should be backportable fix.


I think we need to define new parameters which do not depend on the
configuration as 'required' in Tempest when we add them during the
development cycle, in order to block backward incompatible changes in
the future. However, these parameters are new and old releases don't contain
them, so the Tempest change causes failures against the old releases' tests.

I agree we should mark the required parameters as those that are used without
any extensions. If we can also conditionally check those not marked as required
based on the enabled extensions or features in the tempest config that would
provide the best coverage.
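
For anyone who hasn't looked at the response checks yet, the 'required' versus
merely-allowed distinction lives in schema fragments roughly like the
following (shape simplified and property names illustrative, not the exact
Tempest schema format):

# 'id' and 'status' must always be present, the extension-provided attribute
# is described but not required, and unexpected extra keys are tolerated.
get_server = {
    'status_code': [200],
    'response_body': {
        'type': 'object',
        'properties': {
            'id': {'type': 'string'},
            'status': {'type': 'string'},
            'OS-EXT-STS:task_state': {'type': ['string', 'null']},
        },
        'required': ['id', 'status'],
        'additionalProperties': True,
    },
}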

So for master branch development on tempest I think we should only concern
ourselves with getting these things to work against Juno and Icehouse. I think
Havana support using master is a lost cause at this point so we'll keep the
stable branch around until it's EOL. So hopefully we can lock down things in
tempest with the new api attribute tests quickly so we can block the Juno
additions that would violate the stability guidelines. It would be a shame if
we managed to allow a breaking API change into an API since the release. (but
hopefully it would be an easy backport or revert)


Case: add new parameter 'A' in Juno cycle

           Icehouse      Juno          K             L
  ------------*-------------*-------------*-------------*------
  Nova:                     new parameter 'A'
  Tempest:                  define 'A' as 'required'
                            block 'A' removal         block ..
  test fails due to non-existent 'A'

So in this example I feel that parameter 'A' can only be added as an extension.
(or some other condition) If it's not then it's a breaking api change which will

[openstack-dev] [barbican] certificate orders discussion

2014-04-28 Thread Pitucha, Stanislaw Izaak
Hi all,
I've seen some blueprints/wikis from people interested in certificate
signing via barbican orders, so hopefully you'll have some feedback.

I submitted a proposal for the certificate/signing order API at
https://review.openstack.org/90613 (based on Arvind's previous work with
keys).

It's not pretty and needs some more details, but it's there :)
There are some things I wasn't really sure how to handle, so here's my
reasoning you may have an opinion on:

1. The keys for generating a new certificate request + signing could be
handled inside of the generate+sign order. This would require fewer requests
than uploading a key as a secret first, but on the other hand, there's
already an api for generating those keys, so it would be nice to just
reference the key potentially already generated by barbican. I went with
posting an id reference to the key.

2. The signing-only request takes the pkcs10-style csr inline in its meta
part. I thought about treating it that same as the key decision above, but
thought this would be wrong. The request isn't really secret - it doesn't
contain any private data, so it would be incorrect to treat it the same as
keys.
On the other hand, having one order to generate a CSR and one to sign it
would be a cleaner design without a mix of attributes that are required or
not depending on the use case.
In this case the first option won - just include it as inline - generation
of the request by barbican is unlikely to be a very common case.

Otherwise, if you're interested in certificate orders, please have a look
and see if the proposed api is missing any parts you would like to see.
Maybe we can figure it out before the coding starts :)

Regards,
Stanisław Pitucha
Cloud Services 
Hewlett Packard



smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Need some help with mock

2014-04-28 Thread Jay Pipes

On 04/28/2014 02:00 PM, Deepak Shetty wrote:

I was writing this in test_glusterfs.py

 def test_ensure_shares_unmounted_1share(self):
 with contextlib.nested(
 mock.patch.object(self._driver, '_load_shares_config'),
 mock.patch.object(self._driver, '_ensure_share_unmounted')
 ) as (self._fake_load_shares_config, mock_ensure_share_unmounted):

 #mock_shares = {'127.7.7.7:/gluster1': None}
 #mock_load_shares_config.return_value = mock_shares
 #self._driver.shares = mock_load_shares_config.return_value

 self._driver._ensure_shares_unmounted()

 mock_ensure_share_unmounted.assert_called_once()
 mock_ensure_share_unmounted.assert_called_once_with(
 '127.7.7.7:/gluster1')

for my patch @ https://review.openstack.org/#/c/86888/6

and i get the output as ..

==
FAIL:
cinder.tests.test_glusterfs.GlusterFsDriverTestCase.test_ensure_shares_unmounted_1share
--
...
...

stderr: {{{
cinder/tests/test_glusterfs.py:736: DeprecationWarning: With-statements
now directly support multiple context managers
   mock.patch.object(self._driver, '_ensure_share_unmounted')
}}}

Traceback (most recent call last):
   File "cinder/tests/test_glusterfs.py", line 747, in
test_ensure_shares_unmounted_1share
 '127.7.7.7:/gluster1')
   File "/usr/lib/python2.7/site-packages/mock.py", line 845, in
assert_called_once_with
 raise AssertionError(msg)
AssertionError: Expected to be called once. Called 0 times.


Can you help with why
'mock_ensure_share_unmounted.assert_called_once()' check passes
but
'mock_ensure_share_unmounted.assert_called_once_with('127.7.7.7:/gluster1')
check fails ?


Sure. This is because assert_called_once() is not a method of 
mock.Mock() and therefore is "magic-mocked" to return a mock.MagicMock() 
itself.


assert_called_once_with(), however, *is* an actual method of the 
mock.Mock() object and therefore is failing because the 
mock_unsure_share_unmounted mock was not called once with the expected 
arguments.


The way to avoid the above problem is to use something called autospec'ing.

You can read more about this intricacy of mock here:

http://www.voidspace.org.uk/python/mock/helpers.html#autospeccing
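
To see the difference in a standalone way, a minimal sketch (plain mock,
nothing cinder-specific):

import mock

def _ensure_share_unmounted(share):
    """Stand-in for the driver method; only the signature matters here."""

fake = mock.create_autospec(_ensure_share_unmounted)
fake('127.7.7.7:/gluster1')

# A real Mock assertion helper: checks both the call count and the arguments.
fake.assert_called_once_with('127.7.7.7:/gluster1')

# assert_called_once() is *not* a Mock method in the mock releases current at
# the time of writing, so on a plain Mock() it just returns another mock and
# the "assertion" silently passes.  On a spec'd/autospec'd mock the attribute
# lookup fails instead, which is how the typo gets caught:
#   fake.assert_called_once()   ->  AttributeError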

Best,
-jay


In glusterfs.py ...

 def _ensure_shares_unmounted(self):
self._load_shares_config(self.configuration.glusterfs_shares_config)
 for share in self.shares.keys():
 try:
 self._ensure_share_unmounted(share)
 except Exception as exc:
 LOG.warning(_('Exception during unmounting %s') % (exc,))


thanx,
deepak



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Chris Friesen

On 04/28/2014 11:22 AM, Dan Smith wrote:

2. There's no way to add an existing server to this "group".


In the original API there was a way to add existing servers to the
group.  This didn't make it into the code that was submitted.  It is
however supported by the instance group db API in nova.


3. There's no way to remove members from the group


In the original API there was a way to remove members from the group.
This didn't make it into the code that was submitted.


Well, it didn't make it in because it was broken. If you add an instance
to a group after it's running, a migration may need to take place in
order to keep the semantics of the group. That means that for a while
the policy will be being violated, and if we can't migrate the instance
somewhere to satisfy the policy then we need to either drop it back out,
or be in violation. Either some additional states (such as being queued
for inclusion in a group, etc) may be required, or some additional
footnotes on what it means to be in a group might have to be made.


I think your comment actually applies to adding existing instances to a 
group.  There's no good reason not to allow removing instances from a group.



As for the case of addition, we could start with something simple...if 
adding an instance to a group would violate the group scheduling policy, 
then raise an exception.



It was for the above reasons, IIRC, that we decided to leave that bit
out since the semantics and consequences clearly hadn't been fully
thought-out. Obviously they can be addressed, but I fear the result will
be ... ugly. I think there's a definite possibility that leaving out
those dynamic functions will look more desirable than an actual
implementation.


Your idea of "pending group membership" doesn't sound too ugly.

That said, I would expect "adding existing instances to a group" to be 
something that would be done under fairly well-controlled circumstances. 
 In that case I think it would be reasonable to push the work of 
managing any migrations onto whoever is trying to create a group from 
existing instances.


Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Design Summit Sessions

2014-04-28 Thread Carl Baldwin
Kyle,

Could you point to any information about the "pod" area?  I would like
to do something with the DNS discussion.  Will this area be
schedulable or first-come-first-served?

Carl

On Fri, Apr 25, 2014 at 7:17 AM, Kyle Mestery  wrote:
> Hi everyone:
>
> I've pushed out the Neutron Design Summit Schedule to sched.org [1].
> Like the other projects, it was tough to fit everything in. If your
> proposal didn't make it, there will still be opportunities to talk
> about it at the Summit in the project "Pod" area. Also, I encourage
> you to still file a BP using the new Neutron BP process [2].
>
> I expect some slight juggling of the schedule may occur as the entire
> Summit schedule is set, but this should be approximately where things
> land.
>
> Thanks!
> Kyle
>
> [1] http://junodesignsummit.sched.org/overview/type/neutron
> [2] https://wiki.openstack.org/wiki/Blueprints#Neutron
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

2014-04-28 Thread Samuel Bercovici
Hi,

I was just working to push the use cases into the new format .rst but I agree 
that using google doc would be more intuitive.
Let me know what you prefer to do with the use cases document:
1. leave it at google docs at - 
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
2. Move it to the new format under - 
http://git.openstack.org/cgit/openstack/neutron-specs, I have already filed a 
blueprint https://blueprints.launchpad.net/neutron/+spec/lbaas-use-cases and 
can complete the .rst process by tomorrow.

Regards,
-Sam.






-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com] 
Sent: Monday, April 28, 2014 4:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

Folks, sorry for the top post here, but I wanted to make sure to gather 
people's attention in this thread.

I'm very happy to see all the passion around LBaaS in Neutron for this cycle. 
As I've told a few people, seeing all the interest from operators and providers 
is fantastic, as it gives us valuable input from that side of things before we 
embark on designing and coding.
I've also attended the last few LBaaS IRC meetings, and I've been catching up 
on the LBaaS documents and emails. There is a lot of great work and passion by 
many people. However, the downside of what I've seen is that there is a logjam 
around progress here. Given we're two weeks out from the Summit, I'm going to 
start running the LBaaS meetings with Eugene to try and help provide some focus 
there.
Hopefully we can use this week and next week's meetings to drive to a 
consistent Summit agenda and lay the groundwork for LBaaS in Juno and beyond.

Also, while our new neutron-specs BP repository has been great so far for 
developers, based on feedback from operators, it may not be ideal for those who 
are not used to contributing using gerrit. I don't want to lose the voice of 
those people, so I'm pondering what to do. This is really affecting the LBaaS 
discussion at the moment. I'm thinking that we should ideally try to use Google 
Docs for these initial discussions and then move the result of that into a BP 
on neutron-specs. What do people think of that?

If we go down this path, we need to decide on a single Google Doc for people to 
collaborate on. I don't want to put Stephen on the spot, but his document may 
be a good starting point.

I'd like to hear what others think on this plan as well.

Thanks,
Kyle


On Sun, Apr 27, 2014 at 6:06 PM, Eugene Nikanorov  
wrote:
> Hi,
>
>>
>> You knew from the action items that came out of the IRC meeting of 
>> April
>> 17 that my team would be working on an API revision proposal. You 
>> also knew that this proposal was to be accompanied by an object model 
>> diagram and glossary, in order to clear up confusion. You were in 
>> that meeting, you saw the action items being created. Heck, you even 
>> added the "to prepare API for SSL and L7" directive for my team yourself!
>>
>> The implied but not stated assumption about this work was that it 
>> would be fairly evaluated once done, and that we would be given a short 
>> window (ie.
>> about a week) in which to fully prepare and state our proposal.
>>
>> Your actions, though, were apparently to produce your own version of 
>> the same in blueprint form without notifying anyone in the group that 
>> you were going to be doing this, let alone my team. How could you 
>> have given my API proposal a fair shake prior to publishing your 
>> blueprint, if both came out on the same day? (In fact, I'm lead to 
>> believe that you and other Neutron LBaaS developers hadn't even 
>> looked at my proposal before the meeting on 4/24, where y'all started 
>> determining product direction, apparently by
>> edict.)
>>
>>
>> Therefore, looking honestly at your actions on this and trying to 
>> give you the benefit of the doubt, I still must assume that you never 
>> intended to seriously consider our proposal.
>
> That's strange to hear because the spec on review is a part of what is 
> proposed in the document made by you and your team.
> Once again I'm not sure what this heated discussion is all about. The 
> doc does a good job and we will continue discussing it, while a part of 
> it (spec about VIPs/Listeners/Pools) is on review where we, as lbaas 
> subteam, actually can finalize a part of it.
>
>>
>> Do you now understand why I find this offensive? Can you also 
>> understand how others, seeing how this was handled, might now be 
>> reluctant to participate?
>
> People may find different things to be offensive. I can also say much 
> on this, but would not like not continue the conversation in this direction.
>
>
>> Right, so *if* we decide to go with my proposal, we need to first 
>> decide which parts we're actually going to go with--
>>
>>  I don't expect my proposal to be complete or perfect by any means, 
>> and

[openstack-dev] [Trove] Pluggable conductor manager

2014-04-28 Thread boden

Guys,
I have a few small features / enhancements I'd like to suggest. I'm 
willing to contribute the code / unit tests myself, but am looking for a 
consensus from the group before I invest the time.


There are a few enhancements on my list -- I will send details each in a 
separate email to keep the communication clearer.




I'd like to propose the ability to support a pluggable trove conductor 
manager. Currently the trove conductor manager is hard-coded [1][2] and 
thus is always 'trove.conductor.manager.Manager'. I'd like to see this 
conductor manager class be pluggable like nova does [3].


I'm thinking the same approach nova took:
(a) Add a conductor section to the trove-conductor.conf with a property 
to specify the conductor manager class... e.g.

[conductor]
manager = package.of.conductor.manager.Manager

(b) Default trove's CONF value for conductor.manager to the existing 
manager trove.conductor.manager.Manager which ensures backwards compat.


(c) In trove.cmd.conductor.py create the RpcService using the conf 
value. e.g.

server = rpc_service.RpcService(manager=CONF.conductor.manager, topic=topic)


The above will allow consumers to extend / plug into the conductor 
manager without upstream changes.
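
If it helps the discussion, here is a rough sketch of what (a)-(c) could look
like. This is illustrative only -- the oslo.config usage mirrors what trove
already does elsewhere, but the importutils helper and exact module paths are
assumptions on my part, not actual trove code:

from oslo.config import cfg

from trove.openstack.common import importutils  # assumed helper location

conductor_opts = [
    cfg.StrOpt('manager',
               default='trove.conductor.manager.Manager',
               help='Fully qualified class name of the conductor '
                    'manager to load.'),
]

CONF = cfg.CONF
CONF.register_opts(conductor_opts, group='conductor')


def load_conductor_manager():
    # Resolve the configured dotted path; the default preserves today's
    # behaviour, so existing deployments are unaffected.
    return importutils.import_class(CONF.conductor.manager)

With that in place, trove/cmd/conductor.py would simply pass
CONF.conductor.manager to rpc_service.RpcService as shown in (c) above.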



Any disagreement / comments on this enhancement? Again -- I can 
contribute the code, unless someone is bored and wants to run with it 
short-term.


Thanks


[1] 
https://github.com/openstack/trove/blob/master/trove/cmd/conductor.py#L40
[2] 
https://github.com/openstack/trove/blob/master/trove/cmd/conductor.py#L42

[3] https://github.com/openstack/nova/blob/master/nova/cmd/conductor.py#L43


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-04-28 Thread Dan Smith
> I'd like to propose the ability to support a pluggable trove conductor
> manager. Currently the trove conductor manager is hard-coded [1][2] and
> thus is always 'trove.conductor.manager.Manager'. I'd like to see this
> conductor manager class be pluggable like nova does [3].

Note that most of us don't like this and we're generally trying to get
rid of these sorts of things. I actually didn't realize that
conductor.manager was exposed in the CONF, and was probably just done to
mirror other similar settings.

Making arbitrary classes pluggable like this without a structured and
stable API is really just asking for trouble when people think it's a
pluggable interface.

So, you might not want to use "because nova does it" as a reason to add
it to trove like this :)

--Dan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-28 Thread Mike Spreitzer
Steve Gordon  wrote on 04/28/2014 08:58:35 AM:

> - Original Message -
> > Hi Stackers,
> > Proposal
> > 
> > Create two new options to nova boot:
> > 
> >  --near-tag 
> > and
> >  --not-near-tag 
> > 
> > The first would tell the scheduler to place the new VM near other VMs
> > having a particular "tag". The latter would tell the scheduler to 
place
> > the new VM *not* near other VMs with a particular tag.
> 
> Would we continue to grow this set of arguments in response to the 
> addition of new policies, how much do we expect this to grow? The 
> two most likely additions I can think of are "soft"/"best effort" 
> versions of the current two, are there any other proposals/ideas out
> there - I know we're a creative bunch ;)?

I brought an extensive list to the last summit; see 
https://wiki.openstack.org/wiki/Heat/PolicyExtension (of the concrete syntax 
proposals I only brought A at the time; I am editing B and C 
into existence now) and do not be distracted by the fact that it is a Heat 
proposal; the policy issues are prior issues for scheduling irrespective 
of heat involvement.  Note that I not only add policy types but also endow 
some of them with parameters.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Configurable db plugins

2014-04-28 Thread boden

Guys,
I have a few small features / enhancements I'd like to suggest. I'm 
willing to contribute the code / unit tests myself, but am looking for a 
consensus from the group before I invest the time.


There are a few enhancements on my list -- I will send details each in a 
separate email to keep the communication clearer.



I'd like to propose the ability to support configurable db plugins for 
trove. Currently the sqlalchemy api's configure_db() function in trove 
supports the ability to pass in 1 or more db plugin (mappers) classes 
[1] which allow consumers to add their own schema (tables) to trove's 
db. However the current trove.cmd.*.py entry points do not support any 
means to pass in such plugins (example [2]).


I was thinking something like the following:
(a) Support a comma list property on CONF.DEFAULT in the trove conf 
files.. e.g.

[DEFAULT]
db_plugins = 
org.foo.bar.sqlalchemy.BarPlugins,org.yadda.sqlalchemy.MyPlugins


(b) In the trove CONF, default the comma list property to empty list 
(e.g. []).


(c) Update each of the respective trove.cmd.*.py entry point (main() 
methods) which call into configure_db() to first load any plugin classes 
and them pass them into the method.. e.g.

get_db_api().configure_db(CONF, import_classes(CONF.db_plugins))


The above should give consumers the ability to plug into the trove 
database api and add schema without upstream changes.
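
To make that concrete, a rough sketch of the import helper (names and module
paths are illustrative placeholders, not actual trove code):

from oslo.config import cfg

from trove.openstack.common import importutils  # assumed helper location

CONF = cfg.CONF
CONF.register_opt(cfg.ListOpt(
    'db_plugins', default=[],
    help='Additional sqlalchemy mapper classes to register with the '
         'trove database.'))


def import_classes(class_paths):
    # Turn each dotted path from the config list into a class object.
    return [importutils.import_class(path) for path in class_paths]

Each trove.cmd.*.py main() would then call, as proposed in (c):
get_db_api().configure_db(CONF, import_classes(CONF.db_plugins))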


Any disagreement / comments on this enhancement? Again -- I can 
contribute the code, unless someone is bored and wants to run with it 
short-term.


Thanks


[1] 
https://github.com/openstack/trove/blob/master/trove/db/sqlalchemy/api.py#L84

[2] https://github.com/openstack/trove/blob/master/trove/cmd/api.py#L41


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-04-28 Thread Denis Makogon
Good day, Boden.

I think you should file the blueprint for it and put it into BP meeting
agenda.

Best regards,
Denis Makogon

On Mon, Apr 28, 2014 at 9:50 PM, boden  wrote:

> Guys,
> I have a few small features / enhancements I'd like to suggest. I'm
> willing to contribute the code / unit tests myself, but am looking for a
> consensus from the group before I invest the time.
>
> There are a few enhancements on my list -- I will send details each in a
> separate email to keep the communication clearer.
>
>
>
> I'd like to propose the ability to support a pluggable trove conductor
> manager. Currently the trove conductor manager is hard-coded [1][2] and
> thus is always 'trove.conductor.manager.Manager'. I'd like to see this
> conductor manager class be pluggable like nova does [3].
>
> I'm thinking the same approach nova took:
> (a) Add a conductor section to the trove-conductor.conf with a property to
> specify the conductor manager class... e.g.
> [conductor]
> manager = package.of.conductor.manager.Manager
>
> (b) Default trove's CONF value for conductor.manager to the existing
> manager trove.conductor.manager.Manager which ensures backwards compat.
>
> (c) In trove.cmd.conductor.py create the RpcService using the conf value.
> e.g.
> server = rpc_service.RpcService(manager=CONF.conductor.manager,
> topic=topic)
>
>
> The above will allow consumers to extend / plug into the conductor manager
> without upstream changes.
>
>
> Any disagreement / comments on this enhancement? Again -- I can contribute
> the code, unless someone is bored and wants to run with it short-term.
>
> Thanks
>
>
> [1] https://github.com/openstack/trove/blob/master/trove/cmd/
> conductor.py#L40
> [2] https://github.com/openstack/trove/blob/master/trove/cmd/
> conductor.py#L42
> [3] https://github.com/openstack/nova/blob/master/nova/cmd/
> conductor.py#L43
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Design Summit Sessions

2014-04-28 Thread Kyle Mestery
On Mon, Apr 28, 2014 at 1:38 PM, Carl Baldwin  wrote:
> Kyle,
>
> Could you point to any information about the "pod" area?  I would like
> to do something with the DNS discussion.  Will this area be
> schedulable or first-come-first-served?
>
The pod area is more free-form, no schedule necessary now. If there is
enough interest in the Pod area, we could certainly look to make it a
resource to schedule time at. I think the idea in general was to make
it more open and possibly to use it for continuing discussions which
overflowed their time. What do others think?

Thanks,
Kyle

> Carl
>
> On Fri, Apr 25, 2014 at 7:17 AM, Kyle Mestery  
> wrote:
>> Hi everyone:
>>
>> I've pushed out the Neutron Design Summit Schedule to sched.org [1].
>> Like the other projects, it was tough to fit everything in. If your
>> proposal didn't make it, there will still be opportunities to talk
>> about it at the Summit in the project "Pod" area. Also, I encourage
>> you to still file a BP using the new Neutron BP process [2].
>>
>> I expect some slight juggling of the schedule may occur as the entire
>> Summit schedule is set, but this should be approximately where things
>> land.
>>
>> Thanks!
>> Kyle
>>
>> [1] http://junodesignsummit.sched.org/overview/type/neutron
>> [2] https://wiki.openstack.org/wiki/Blueprints#Neutron
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Multiple api extension paths

2014-04-28 Thread boden

Guys,
I have a few small features / enhancements I'd like to suggest. I'm 
willing to contribute the code / unit tests myself, but am looking for a 
consensus from the group before I invest the time.


There are a few enhancements on my list -- I will send details each in a 
separate email to keep the communication clearer.


I'd like to propose that trove supports the ability to have multiple API 
extension paths. The current impl of trove supports the ability to 
specify the api extension path [1]. This path is obviously searched at 
start-up and extensions found are loaded / added to trove API. However 
if consumers want to plug-in their own API extensions they have to copy 
them into the trove python path specified by [1] as this is a string 
property only accepting a single path [2].


I was thinking something like the following:
(a) Change the api_extensions_path to a comma list rather than just a 
single string.


(b) Default the value to ['trove/extensions/routes'] for backwards 
compat [2].


(c) Update trove.common.extensions.py to support an array of api 
extension paths [2] such that each path is used in the extension mgmt / 
mapping.


This would allow consumers to plug-in API extensions without having to 
inject their extension classes / code in the same python dist path as 
trove while still maintaining the existing trove 'core' extensions.
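
As a rough sketch of (a)-(c) (the option registration is shown standalone here
purely for illustration; the real option lives in trove/common/cfg.py):

from oslo.config import cfg

CONF = cfg.CONF
CONF.register_opt(cfg.ListOpt(
    'api_extensions_path',
    default=['trove/extensions/routes'],
    help='Comma-separated list of paths to search for API extensions.'))


def iter_extension_paths(conf=CONF):
    # trove.common.extensions would walk every configured path instead
    # of assuming a single hard-coded one.
    for path in conf.api_extensions_path:
        yield path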



Any disagreement / comments on this enhancement? Again -- I can 
contribute the code, unless someone is bored and wants to run with it 
short-term.


Thanks


[1] 
https://github.com/openstack/trove/blob/master/etc/trove/trove.conf.sample#L71

[2] https://github.com/openstack/trove/blob/master/trove/common/cfg.py#L42


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Keystone] [TripleO] Making use of domains by name - policy and API issues?

2014-04-28 Thread Dolph Mathews
On Mon, Apr 28, 2014 at 12:51 PM, Clint Byrum  wrote:

> So in the process of making Heat deploy itself, I've run into a bit of a
> deadlock.
>
> https://bugs.launchpad.net/tripleo/+bug/1287453
> https://bugs.launchpad.net/heat/+bug/1313003
>
> Currently, we deploy OpenStack like this:
>
> * First we generate usernames/passwords for all service accounts
> * Next we deploy Keystone and Heat (and.. the rest of OpenStack)
>   - In this process, we feed in the usernames and passwords we
> generated.
> * Then when everything is "deployed", we initialize Keystone with the
>   generated usernames and passwords via the keystone API.
> * Now we test to make sure what we deployed works.
>
> However, in order to create isolated users for narrow access to Heat
> from inside instances, Heat needs a domain to put these narrowly scoped
> users in. Heat has a handy script for creating this domain and an admin
> inside the domain which is needed to create the lesser users. So that
> naturally fits into our initialization of keystone.
>
> The problem is that because of bug 1313003, Heat can only use a domain
> ID to specify this domain.


I agree with Stephen's assessment in bug 1313003:

  https://bugs.launchpad.net/heat/+bug/1313003/comments/1

It's ultimately a user experience issue (it'd require two config options to
properly express two different concepts). This issue isn't unique to heat,
though.

As Stephen points out, "it's a set-once deployer option (not a user-facing
one)" - the IDs are intended exactly for this purpose. They're immutable,
unambiguous identifiers. They're not particularly user-friendly, but as
Stephen points out, they don't need to be. They just need to be reliable.


> We haven't created that domain yet at stack
> creation time though, so we would have to add another step before
> testing/using the cloud:
>
> * Update stack with ID of heat stack user domain.
>
> Steven Hardy has indicated that it was problematic to make use of names
> instead of id's for domains, and that to me signals a problem with the
> API and/or policy model in Keystone around domains.
>
> Everything else in TripleO makes use of names except this, so I think
> we need to solve this. This isn't just a TripleO or Heat problem though,
> anybody using domains will run into the same trouble Steven hit, and
> that is not something we should ignore.
>
> Can somebody more familiar with domains explain what would be needed to
> be able to have Heat able to lookup domains by name and use them like
> most other things in OpenStack, where we can use names or IDs
> interchangeably?


> Thanks!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Pluggable conductor manager

2014-04-28 Thread boden

On 4/28/2014 3:03 PM, Denis Makogon wrote:

Good day, Boden.

I think you should file the blueprint for it and put it into BP meeting
agenda.

Best regards,
Denis Makogon

On Mon, Apr 28, 2014 at 9:50 PM, boden  wrote:

Guys,
I have a few small features / enhancements I'd like to suggest. I'm
willing to contribute the code / unit tests myself, but am looking
for a consensus from the group before I invest the time.

There are a few enhancements on my list -- I will send details each
in a separate email to keep the communication clearer.



I'd like to propose the ability to support a pluggable trove
conductor manager. Currently the trove conductor manager is
hard-coded [1][2] and thus is always
'trove.conductor.manager.Manager'. I'd like to see this conductor
manager class be pluggable like nova does [3].

I'm thinking the same approach nova took:
(a) Add a conductor section to the trove-conductor.conf with a
property to specify the conductor manager class... e.g.
[conductor]
manager = package.of.conductor.manager.Manager

(b) Default trove's CONF value for conductor.manager to the existing
manager trove.conductor.manager.Manager which ensures backwards
compat.

(c) In trove.cmd.conductor.py create
the RpcService using the conf value. e.g.
server = rpc_service.RpcService(manager=CONF.conductor.manager,
topic=topic)


The above will allow consumers to extend / plug into the conductor
manager without upstream changes.


Any disagreement / comments on this enhancement? Again -- I can
contribute the code, unless someone is bored and wants to run with
it short-term.

Thanks


[1]

https://github.com/openstack/trove/blob/master/trove/cmd/conductor.py#L40

[2]

https://github.com/openstack/trove/blob/master/trove/cmd/conductor.py#L42

[3]
https://github.com/openstack/nova/blob/master/nova/cmd/conductor.py#L43



_
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Denis thanks...

Done, and done --
https://blueprints.launchpad.net/trove/+spec/pluggable-conductor-manager
https://wiki.openstack.org/wiki/Meetings/TroveBPMeeting#Weekly_Trove_Blueprint_Meeting

I missed today's BP meeting, but I will plan to attend the next one.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Design Summit Sessions

2014-04-28 Thread Collins, Sean
From: Kyle Mestery <mest...@noironetworks.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Monday, April 28, 2014 3:08 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] Design Summit Sessions

I think the idea in general was to make
it more open and possibly to use it for continuing discussions which
overflowed their time. What do others think?

Perfect! Love the idea – I know at Hong Kong I had a couple times where we 
moved it into the lounge to continue discussions

--
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra] Gerrit host key changed

2014-04-28 Thread Jeremy Stanley
On 2014-04-25 10:40:02 -0700 (-0700), James E. Blair wrote:
[...]
> Of course, for some of us, that's not a lot. So on Monday, we'll
> send a GPG signed email with the fingerprints as well. And this is
> just another reminder that as a community, we should endeavor to
> build our GPG web of trust. See you at the Summit!

As noted, the Gerrit Git+SSH/API host keys for review.openstack.org
have been regenerated during the course of today's upgrade. The
corresponding new key fingerprints are as follows:

28:c6:42:b7:44:d2:48:64:c1:3f:31:d8:1b:6e:3b:63 (RSA)
6c:95:14:fd:8b:0f:de:d3:e3:10:77:5a:de:22:8a:5f (DSA)

They are also published at the following URL within its WebUI for
convenience:

https://review.openstack.org/#/settings/ssh-keys

When you're presented with an error message about a host key
mismatch, you'll probably need to strip the hashed entry for the old
host key from your known hosts file before being able to accept the
new one. For modern versions of OpenSSH, this can be done as
outlined here:

https://wiki.openstack.org/wiki/GerritUpgrade#DNS_Spoofing_Warning

-- 
Jeremy Stanley


signature.asc
Description: Digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Need some help with mock

2014-04-28 Thread Deepak Shetty
Hi Jay,

(I haven't checked your link yet)
But just to get some more clarification: I haven't understood yet why you
think it's not called with the expected args.
I expect it to get called with the expected args because ...

_load_shares_config is mocked to _fake_load_shares_config
which sets self._driver.shares dict as my 'expected' share key:value pair

Hence I expect my ensure_share_unmounted to get called with the above share
key:value pair that I set using
_fake_load_shares_config.. and that's not happening.

Your response doesn't address this, does it?

thanx,
deepak


On Mon, Apr 28, 2014 at 11:38 PM, Jay Pipes  wrote:

> On 04/28/2014 02:00 PM, Deepak Shetty wrote:
>
>> I was writing this in test_glusterfs.py
>>
>>  def test_ensure_shares_unmounted_1share(self):
>>  with contextlib.nested(
>>  mock.patch.object(self._driver, '_load_shares_config'),
>>  mock.patch.object(self._driver, '_ensure_share_unmounted')
>>  ) as (self._fake_load_shares_config,
>> mock_ensure_share_unmounted):
>>
>>  #mock_shares = {'127.7.7.7:/gluster1': None}
>>  #mock_load_shares_config.return_value = mock_shares
>>  #self._driver.shares = mock_load_shares_config.return_value
>>
>>  self._driver._ensure_shares_unmounted()
>>
>>  mock_ensure_share_unmounted.assert_called_once()
>>  mock_ensure_share_unmounted.assert_called_once_with(
>>  '127.7.7.7:/gluster1')
>>
>> for my patch @ https://review.openstack.org/#/c/86888/6
>>
>> and i get the output as ..
>>
>> ==
>> FAIL:
>> cinder.tests.test_glusterfs.GlusterFsDriverTestCase.test_
>> ensure_shares_unmounted_1share
>> --
>> ...
>> ...
>>
>> stderr: {{{
>> cinder/tests/test_glusterfs.py:736: DeprecationWarning: With-statements
>> now directly support multiple context managers
>>mock.patch.object(self._driver, '_ensure_share_unmounted')
>> }}}
>>
>> Traceback (most recent call last):
>>File "cinder/tests/test_glusterfs.py", line 747, in
>> test_ensure_shares_unmounted_1share
>>  '127.7.7.7:/gluster1')
>>File "/usr/lib/python2.7/site-packages/mock.py", line 845, in
>> assert_called_once_with
>>  raise AssertionError(msg)
>> AssertionError: Expected to be called once. Called 0 times.
>>
>>
>> Can you help with why
>> 'mock_ensure_share_unmounted.assert_called_once()' check passes
>> but
>> 'mock_ensure_share_unmounted.assert_called_once_with('127.
>> 7.7.7:/gluster1')
>> check fails ?
>>
>
> Sure. This is because assert_called_once() is not a method of mock.Mock()
> and therefore is "magic-mocked" to return a mock.MagicMock() itself.
>
> assert_called_once_with(), however, *is* an actual method of the
> mock.Mock() object and therefore is failing because the
> mock_unsure_share_unmounted mock was not called once with the expected
> arguments.
>
> The way to avoid the above problem is to use something called autospec'ing.
>
> You can read more about this intricacy of mock here:
>
> http://www.voidspace.org.uk/python/mock/helpers.html#autospeccing
>
> Best,
> -jay
>
>  In glusterfs.py ...
>>
>>  def _ensure_shares_unmounted(self):
>> self._load_shares_config(self.configuration.glusterfs_shares_config)
>>  for share in self.shares.keys():
>>  try:
>>  self._ensure_share_unmounted(share)
>>  except Exception as exc:
>>  LOG.warning(_('Exception during unmounting %s') % (exc,))
>>
>>
>> thanx,
>> deepak
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [Keystone] [TripleO] Making use of domains by name - policy and API issues?

2014-04-28 Thread Clint Byrum
Excerpts from Dolph Mathews's message of 2014-04-28 12:28:41 -0700:
> On Mon, Apr 28, 2014 at 12:51 PM, Clint Byrum  wrote:
> 
> > So in the process of making Heat deploy itself, I've run into a bit of a
> > deadlock.
> >
> > https://bugs.launchpad.net/tripleo/+bug/1287453
> > https://bugs.launchpad.net/heat/+bug/1313003
> >
> > Currently, we deploy OpenStack like this:
> >
> > * First we generate usernames/passwords for all service accounts
> > * Next we deploy Keystone and Heat (and.. the rest of OpenStack)
> >   - In this process, we feed in the usernames and passwords we
> > generated.
> > * Then when everything is "deployed", we initialize Keystone with the
> >   generated usernames and passwords via the keystone API.
> > * Now we test to make sure what we deployed works.
> >
> > However, in order to create isolated users for narrow access to Heat
> > from inside instances, Heat needs a domain to put these narrowly scoped
> > users in. Heat has a handy script for creating this domain and an admin
> > inside the domain which is needed to create the lesser users. So that
> > naturally fits into our initialization of keystone.
> >
> > The problem is that because of bug 1313003, Heat can only use a domain
> > ID to specify this domain.
> 
> 
> I agree with Stephen's assessment in bug 1313003:
> 
>   https://bugs.launchpad.net/heat/+bug/1313003/comments/1
> 
> It's ultimately a user experience issue (it'd require two config options to
> properly express two different concepts). This issue isn't unique to heat,
> though.
> 
> As Stephen points out, "it's a set-once deployer option (not a user-facing
> one)" - the IDs are intended exactly for this purpose. They're immutable,
> unambiguous identifiers. They're not particularly user-friendly, but as
> Stephen points out, they don't need to be. They just need to be reliable.
> 

In this instance, I, the deployer, am a Heat and Keystone user. So that
is not a valid reason to dismiss this overly complicated user experience.
If it is hard for me in TripleO, and hard for Steven in Heat, then it
will be hard for everybody else who wants to consume keystone domains
via the API.

Could you contrast domains with the way that we can use names for every
other keystone component? Notice:

https://git.openstack.org/cgit/openstack/tripleo-image-elements/tree/elements/heat/os-config-applier/etc/heat/heat.conf#n522

If it has to be a different config option, that is fine, as long as
they're not both required. But this is the only place that we've run
into where we have to put an ID in the configuration for a service,
rather than a name.

Also, names are unambiguous. I requested that name specifically so I
could use it for this purpose. If they're not unique enough to use for
lookups, then I would love to hear an explanation as to why they cannot
be made unique, at least in some context.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] certificate orders discussion

2014-04-28 Thread Adam Young

On 04/28/2014 02:07 PM, Pitucha, Stanislaw Izaak wrote:

Hi all,
I've seen some blueprints/wikis from people interested in certificate
signing via barbican orders, so hopefully you'll have some feedback.

I submitted a proposal for certificate/signing order API at
https://review.openstack.org/90613 (based on previous Arvind's work with
keys)


Good stuff.



It's not pretty and needs some more details, but it's there :)
There are some things I wasn't really sure how to handle, so here's my
reasoning you may have an opinion on:

1. The keys for generating a new certificate request + signing could be
handled inside of the generate+sign order. This would require fewer requests
than uploading a key as a secret first, but on the other hand, there's
already an api for generating those keys, so it would be nice to just
reference the key potentially already generated by barbican. I went with
posting an id reference to the key.
Let's not reinvent a format here.  Certmonger can already do that stuff 
client side, so let's just focus on getting the request to the server, 
signed, and returned.
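
(For reference, this is the sort of thing the client side would produce -- a
quick pyOpenSSL sketch, used here purely as an illustration; certmonger or any
other tool that emits a PKCS#10 CSR would do:)

from OpenSSL import crypto

# Generate a key pair locally; only the CSR (public data) needs to be
# sent to the signing service.
key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)

req = crypto.X509Req()
req.get_subject().CN = 'myservice.example.org'
req.set_pubkey(key)
req.sign(key, 'sha256')

# PEM-encoded PKCS#10 request, suitable for inlining in an order's meta.
csr_pem = crypto.dump_certificate_request(crypto.FILETYPE_PEM, req)
print(csr_pem)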





2. The signing-only request takes the pkcs10-style csr inline in its meta
part. I thought about treating it that same as the key decision above, but
thought this would be wrong. The request isn't really secret - it doesn't
contain any private data, so it would be incorrect to treat it the same as
keys.
Correct.  This should be simpler than a key escrow call: you might 
want to do key escrow for encryption keys that get signed this way, but 
that could use the existing mechanism via a separate call, either prior 
or after.  Prior might be the way to go: "if you request a certificate 
with an encryption attribute, you need to have submitted the key for 
escrow."

On the other hand, having one order to generate a CSR and one to sign it
would be a cleaner design without a mix of attributes that are required or
not depending on the use case.
In this case the first option won - just include it as inline - generation
of the request by barbican is unlikely to be a very common case.

Otherwise, if you're interested in certificate orders, please have a look
and see if the proposed api is missing any parts you would like to see.
Maybe we can figure it out before the coding starts :)
Let's get the basics down: process a CSR.  We still need to handle the 
workflow for approval.





Regards,
Stanisław Pitucha
Cloud Services
Hewlett Packard



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] HAProxy and Keystone setup (in Overcloud)

2014-04-28 Thread Jay Dobies

We may want to consider making use of Heat outputs for this.


This was my first thought as well. stack-show returns a JSON document 
that would be easy enough to parse through instead of having it in two 
places.



Rather than assuming hard coding, create an output on the overcloud
template that is something like 'keystone_endpoint'. It would look
something like this:

Outputs:
  keystone_endpoint:
    Fn::Join:
      - ''
      - - "http://"
        - {Fn::GetAtt: [ haproxy_node, first_ip ]} # fn select and yada
        - ":"
        - {Ref: KeystoneEndpointPort} # thats a parameter
        - "/v2.0"


These are then made available via heatclient as stack.outputs in
'stack-show'.
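
For example, something along these lines could pull the value out with
python-heatclient instead of hard coding it (the endpoint, token and stack
name below are placeholders; in practice they would come from the existing
auth plumbing):

from heatclient.client import Client

heat = Client('1', endpoint='http://heat.example.com:8004/v1/TENANT_ID',
              token='AUTH_TOKEN')

stack = heat.stacks.get('overcloud')
# stack.outputs is a list of {'output_key': ..., 'output_value': ...}
outputs = dict((o['output_key'], o['output_value'])
               for o in stack.outputs or [])
keystone_endpoint = outputs['keystone_endpoint']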

That way as we evolve new stacks that have different ways of controlling
the endpoints (LBaaS anybody?) we won't have to change os-cloud-config
for each one.



2) do Keystone setup from inside Overcloud:
Extend keystone element, steps done in init-keystone script would be
done in keystone's os-refresh-config script. This script would have to
be called only on one of nodes in cluster and only once (though we
already do similar check for other services - mysql/rabbitmq, so I don't
think this is a problem). Then this script can easily get list of
haproxy ports from heat metadata. This looks like a more attractive option
to me - it eliminates an extra post-create config step.


Things that can be done from outside the cloud, should be done from
outside the cloud. This helps encourage the separation of concerns and
also makes it simpler to reason about which code is driving the cloud
versus code that is creating the cloud.



Related to Keystone setup is also the plan around keys/cert setup
described here:
http://lists.openstack.org/pipermail/openstack-dev/2014-March/031045.html
But I think this plan would remain same no matter which of the options
above would be used.


What do you think?

Jan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Design Summit Sessions

2014-04-28 Thread Manish Godara
Sounds like a good idea to me.  Is the pod area for neutron-specific
discussions?

Thanks,
manish

On 4/28/14 12:08 PM, "Kyle Mestery"  wrote:

>On Mon, Apr 28, 2014 at 1:38 PM, Carl Baldwin  wrote:
>> Kyle,
>>
>> Could you point to any information about the "pod" area?  I would like
>> to do something with the DNS discussion.  Will this area be
>> schedulable or first-come-first-served?
>>
>The pod area is more free-form, no schedule necessary now. If there is
>enough interest in the Pod area, we could certainly look to make it a
>resource to schedule time at. I think the idea in general was to make
>it more open and possibly to use it for continuing discussions which
>overflowed their time. What do others think?
>
>Thanks,
>Kyle
>
>> Carl
>>
>> On Fri, Apr 25, 2014 at 7:17 AM, Kyle Mestery
>> wrote:
>>> Hi everyone:
>>>
>>> I've pushed out the Neutron Design Summit Schedule to sched.org [1].
>>> Like the other projects, it was tough to fit everything in. If your
>>> proposal didn't make it, there will still be opportunities to talk
>>> about it at the Summit in the project "Pod" area. Also, I encourage
>>> you to still file a BP using the new Neutron BP process [2].
>>>
>>> I expect some slight juggling of the schedule may occur as the entire
>>> Summit schedule is set, but this should be approximately where things
>>> land.
>>>
>>> Thanks!
>>> Kyle
>>>
>>> [1] http://junodesignsummit.sched.org/overview/type/neutron
>>> [2] https://wiki.openstack.org/wiki/Blueprints#Neutron
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Heat Windows templates contribution

2014-04-28 Thread Steve Baker
On 28/04/14 23:04, Alessandro Pilotti wrote:
> Hi all,
>
> Following up to various conversations during the Icehouse cycle, we'd
> like to contribute the Heat templates work that we did at Cloudbase,
> partly available at:
> https://github.com/cloudbase/windows-heat-templates
>
It looks like progress is being made on porting these templates to HOT:
https://github.com/cloudbase/windows-heat-templates/blob/master/iis/iis_drupal7_webpi.yaml

To further this process I would suggest the following:
* https://github.com/cloudbase/windows-heat-templates/blob/master/iis/iis_drupal7_webpi.yaml#L73
  This powershell should really be put in a separate file and included
  with {get_file: iis_drupal7_webpi.ps1}
* https://github.com/cloudbase/windows-heat-templates/blob/master/iis/iis_drupal7_webpi.yaml#L69
  setting user_data_format: RAW will stop heat from adding unnecessary
  files to the multipart-mime
* Consider adopting this pattern for cloudbase-init configuration
http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/example-cloud-init.yaml
  Which is to specify a OS::Heat::SoftwareConfig resource for each
powershell script (also using get_file), then use a
OS::Heat::MultipartMime to combine multiple powershell scripts into a
single configuration packet (I'm assuming cloudbase-init supports
multipart-mime)
> There's also a BP for
> that https://blueprints.launchpad.net/heat/+spec/windows-instances and
> a document discussing the critical Windows integration areas (linked
> in the BP): http://wiki.cloudbase.it/heat-windows
>
> I'm sending this now so that if anybody is interested on the topic we
> can start some discussions before heading to Atlanta's design sessions.
>
> At the current stage we are running templates of any size and type on
> Havana and Icehouse without problems with Cloudbase-Init, so there are
> no particular blocking issues, but it'd be great to have a community
> discussion about what to do with the CFN tools porting on Windows for
> example and how to make the Heat produced Nova userdata metadata less
> Linux dependent.
>
So I'm all for heat-cfntools being ported to windows, and would happily
review those patches. However a port of os-collect-config,
os-apply-config and os-refresh-config[1] might be the most expedient way
of being able to configure with powershell throughout the lifecycle of a
windows server (not just boot config).

It would be great if it was possible to have the powershell equivalent
of this
https://review.openstack.org/#/c/70297/16/hot/software-config/example-templates/example-script-template.yaml

[1] https://github.com/openstack/os-collect-config
https://github.com/openstack/os-refresh-config
https://github.com/openstack/os-apply-config
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

2014-04-28 Thread Eichberger, German
Sam,

The use cases where pretty complete the last time I checked so let's move them 
to gerrit so we can all vote.

Echoing Kyle I would love to see us focusing on getting things ready for the 
summit. 

German

-Original Message-
From: Samuel Bercovici [mailto:samu...@radware.com] 
Sent: Monday, April 28, 2014 11:44 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

Hi,

I was just working to push the use cases into the new format .rst but I agree 
that using google doc would be more intuitive.
Let me know what you prefer to do with the use cases document:
1. leave it at google docs at - 
https://docs.google.com/document/d/1Ewl95yxAMq2fO0Z6Dz6fL-w2FScERQXQR1-mXuSINis/edit?pli=1
2. Move it to the new format under - 
http://git.openstack.org/cgit/openstack/neutron-specs, I have already filed a 
blueprint https://blueprints.launchpad.net/neutron/+spec/lbaas-use-cases and 
can complete the .rst process by tomorrow.

Regards,
-Sam.






-Original Message-
From: Kyle Mestery [mailto:mest...@noironetworks.com] 
Sent: Monday, April 28, 2014 4:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal

Folks, sorry for the top post here, but I wanted to make sure to gather 
people's attention in this thread.

I'm very happy to see all the passion around LBaaS in Neutron for this cycle. 
As I've told a few people, seeing all the interest from operators and providers 
is fantastic, as it gives us valuable input from that side of things before we 
embark on designing and coding.
I've also attended the last few LBaaS IRC meetings, and I've been catching up 
on the LBaaS documents and emails. There is a lot of great work and passion by 
many people. However, the downside of what I've seen is that there is a logjam 
around progress here. Given we're two weeks out from the Summit, I'm going to 
start running the LBaaS meetings with Eugene to try and help provide some focus 
there.
Hopefully we can use this week and next week's meetings to drive to a 
consistent Summit agenda and lay the groundwork for LBaaS in Juno and beyond.

Also, while our new neutron-specs BP repository has been great so far for 
developers, based on feedback from operators, it may not be ideal for those who 
are not used to contributing using gerrit. I don't want to lose the voice of 
those people, so I'm pondering what to do. This is really affecting the LBaaS 
discussion at the moment. I'm thinking that we should ideally try to use Google 
Docs for these initial discussions and then move the result of that into a BP 
on neutron-specs. What do people think of that?

If we go down this path, we need to decide on a single Google Doc for people to 
collaborate on. I don't want to put Stephen on the spot, but his document may 
be a good starting point.

I'd like to hear what others think on this plan as well.

Thanks,
Kyle


On Sun, Apr 27, 2014 at 6:06 PM, Eugene Nikanorov  
wrote:
> Hi,
>
>>
>> You knew from the action items that came out of the IRC meeting of 
>> April
>> 17 that my team would be working on an API revision proposal. You 
>> also knew that this proposal was to be accompanied by an object model 
>> diagram and glossary, in order to clear up confusion. You were in 
>> that meeting, you saw the action items being created. Heck, you even 
>> added the "to prepare API for SSL and L7" directive for my team yourself!
>>
>> The implied but not stated assumption about this work was that it 
>> would be fairly evaluated once done, and that we would be given a short 
>> window (ie.
>> about a week) in which to fully prepare and state our proposal.
>>
>> Your actions, though, were apparently to produce your own version of 
>> the same in blueprint form without notifying anyone in the group that 
>> you were going to be doing this, let alone my team. How could you 
>> have given my API proposal a fair shake prior to publishing your 
>> blueprint, if both came out on the same day? (In fact, I'm lead to 
>> believe that you and other Neutron LBaaS developers hadn't even 
>> looked at my proposal before the meeting on 4/24, where y'all started 
>> determining product direction, apparently by
>> edict.)
>>
>>
>> Therefore, looking honestly at your actions on this and trying to 
>> give you the benefit of the doubt, I still must assume that you never 
>> intended to seriously consider our proposal.
>
> That's strange to hear because the spec on review is a part of what is 
> proposed in the document made by you and your team.
> Once again I'm not sure what this heated discussion is all about. The 
> doc does good job and we will continue discussing it, while a part of 
> it (spec about VIPs/Listeners/Pools) is on review where we, as lbaas 
> subteam, actually can finalize a part of it.
>
>>
>> Do you now understand why I find this offensive? Can y

Re: [openstack-dev] [all] Branchless Tempest QA Spec - final draft

2014-04-28 Thread Sean Dague
On 04/28/2014 02:06 PM, David Kranz wrote:
> On 04/27/2014 10:02 PM, Matthew Treinish wrote:
>> On Mon, Apr 28, 2014 at 01:01:00AM +, Kenichi Oomichi wrote:
>>> Hi,
>>>
>>> Sorry for my late response, but I'd like to discuss this again.
>>>
>>> Now we are working for adding Nova API responses checks to Tempest[1] to
>>> block backward incompatible changes.
>>> With this work, Tempest checks each response(status code, response body)
>>> and raises a test failure exception if detecting something unexpected.
>>> For example if some API parameter, which is defined as 'required'
>>> Tempest
>>> side, does not exist in response body, Tempest test fails.
>>>
>>> We are defining API parameters as 'required' if they are not API
>>> extensions
>>> or they are not depended on Nova configuration. In addition now Tempest
>>> allows additional API parameters, that means Tempest does not fail
>>> even if
>>> Nova response includes unexpected API parameters. Because I think the
>>> removal
>>> of API parameter causes backward incompatible issue but the addition
>>> does not
>>> cause it.
>> So, AIUI we can only add parameters to an API with a new extension.
>> The API
>> change guidelines also say that adding new properties must be
>> conditional:
>>
>> https://wiki.openstack.org/wiki/APIChangeGuidelines#Generally_Considered_OK
>>
> I just wanted to note that the original referenced wiki page, assembled
> by markmc and myself, did not specify the need for an extension in order
> to add a value to a return dict or add a value to a dict argument if the
> existing api would ignore it. This was changed (and honestly I did not
> notice this change at the time) last August to require an extension:
> https://wiki.openstack.org/w/index.php?title=APIChangeGuidelines&diff=prev&oldid=28593.
> 
> Is there any trace left of discussions around that change?
> 
> The original wording allowed the api to evolve as long as a "reasonable"
> application would not be broken. Essentially the extra value becomes
> optional and new client or server code can check for it. The new
> definition is quite strict and similar to what leads to many Windows
> APIs having names like CreateWindowEx. Not saying being strict is bad,
> but it will require a lot of changes to the client libraries as well as
> tempest because there are a lot of extensions that are not checked by
> either.

If I remember correctly that arose out of Portland summit sessions.
Basically you need to provide some signaling to the client on what they
should expect. And this kind of organically grew into that.

We pretty desperately need micro versioning so we're handling this
directly, and not via inference from extension lists.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Design summit preparation - Next steps for Heat Software Orchestration

2014-04-28 Thread Steve Baker
On 29/04/14 01:41, Thomas Spatzier wrote:
> Excerpts from Steve Baker's message on 28/04/2014 01:25:29:
>
>> I'm with Clint on this one. Heat-engine cannot know the true state
>> of a server just by monitoring what has been polled and signaled.
>> Since it can't know it would be dangerous for it to guess. Instead
>> it should just offer all known configuration data to the server and
>> allow the server to make the decision whether to execute a config
>> again. I still think one more derived input value would be useful to
>> help the server to make that decision. This could either be a
>> datestamp for when the derived config was created, or a hash of all
>> of the derived config data.
> So as I said in another note, I agree that this seems best handled in
> the in-instance tool and the Heat engine, or the resource should probably
> not have any new magic. If there is some additional state property that the
> resource maintains, and the in-instance tool handles it, that should be
> fine. I think what is important, is that users who want to use existing
> automation scripts do not have to implement much logic for interpreting
> that additional "flag", but that we handle it in the generic hook
> invocation logic.
>
> Can you elaborate more on what you have in mind with the additional derived
> input value?
>
Heat needs to give the hook or the config script enough information to
know whether that *particular* combination of config script + input
values has been executed on that server. It could do this by providing
the timestamp or the hash of the derived config, then this piece of
information can be compared with some local state on the server to
decide whether to run the config again. Actually the hash could be
calculated on the server too, so the hash could be calculated in
55-heat-config then consumed by the hook or config script.
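
To make that concrete, here is a hedged sketch of the kind of check the hook
(or 55-heat-config) could do -- the state directory and the config 'id' field
are assumptions for illustration, not the actual os-refresh-config code:

import hashlib
import json
import os

STATE_DIR = '/var/lib/heat-config/applied'  # assumed location


def config_hash(derived_config):
    # Hash the whole derived config (script plus input values) so a
    # change to either produces a new digest.
    blob = json.dumps(derived_config, sort_keys=True)
    return hashlib.sha256(blob.encode('utf-8')).hexdigest()


def already_applied(derived_config):
    marker = os.path.join(STATE_DIR, derived_config['id'])
    if not os.path.exists(marker):
        return False
    with open(marker) as f:
        return f.read().strip() == config_hash(derived_config)


def mark_applied(derived_config):
    if not os.path.isdir(STATE_DIR):
        os.makedirs(STATE_DIR)
    with open(os.path.join(STATE_DIR, derived_config['id']), 'w') as f:
        f.write(config_hash(derived_config))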
>>
>> For this design session I have my own list of items to discuss:
>> #4.1 Maturing the puppet hook so it can invoke more existing puppet
> scripts
>> #4.2 Make progress on the chef hook, and defining the mapping from
>> chef concepts to heat config/inputs/outputs
>> #4.3 Finding volunteers to write hooks for Salt, Ansible
>> #5.1 Now that heatclient can include binary files, discuss enhancing
>> get_file to zip the directory contents if it is pointed at a directory
>> #5.2 Now that heatclient can include binary files, discuss making
>> stack create/update API calls multipart/form-data so that proper
>> mime data can be captured for attached files
>> #6.1 Discuss options for where else metadata could be polled from (ie,
> swift)
>> #6.2 Discuss whether #6.1 can lead to software-config that can work
>> on an OpenStack which doesn't allow admin users or keystone domains
>> (ie, rackspace)
> #4.1 thru #4.3 are important and seem straight forward and more about
> finding people to do it. If there are design issues to be figured out,
> maybe we can do it offline via the ML.
>
> #5.1 and #5.2 are really interesting and map to use cases we have also seen
> internally. Is there a size limit for the binaries? Would this also cover,
> e.g. sending small binaries like a wordpress install tgz instead of doing a
> yum based install? Or would the latter be something to address via #6
> below?
>
> #6 looks very interesting as well. We also thought about using swift not
> only for metadata but also for sharing installables to instances in cases
> where direct download from the internet, for example, is not possible.
We'll just have to try it to find out were the limits are, but in
general I would assume the following:
* user_data limited to about 16k total, so anything bigger than that
needs to go in the deployment input_values
* practically speaking, a binary could go in a deployment input value or
a swift object resource (which doesn't exist yet) to be passed to the
deployment input value by url
* The default heat.conf max_template_size=524288, so to avoid this limit
binaries should be put into swift outside the scope of heat, and passed
into the template as a parameter URL.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic][QA] Ironic functional gate tests stable, thoughts on extending?

2014-04-28 Thread Adam Gandelman
Hi All--

We've finally got the check-tempest-dsvm-virtual-ironic passing
successfully in the Ironic gate.  Among other things, this test runs the
tempest.scenario.test_baremetal_basic_ops which is a functional
provisioning test directly stressing Ironic, Nova and Neutron as well as
devstack and diskimage-builder indirectly.  It is non-voting currently, but
will eventually move to a voting job once it's proven stable.  I've also
proposed adding this to the non-experimental checks of Devstack, Nova,
Tempest, diskimage-builder and devstack-gate as the stability of the job is
dependent on all of them.  If you see this failing, please investigate
and/or ping me (adam_g).  There have been multiple breakages that slipped
into trunk over the last month or so that this test would have caught, and
I'd love for it do its job now that it is stable.

The current scenario test only tests the pxe_ssh driver.  As we begin to
think about things like the IPA and other new features in Juno, we should
also think about how they fit into the current test framework.  I'd love
it if we could capture some thoughts on the list or in the IronicCI etherpad
[1] about what we'd like to test and how those tests would look.  It's
important to note that Tempest is generally a blackbox test suite that only
has access to the various APIs (both admin and non) and guests via SSH, so
any assertions we hope to make must be possible by poking APIs or SSHing to
a provisioned node.
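
As a rough illustration of what that leaves us to work with (using
python-ironicclient and plain SSH directly here rather than tempest
internals; the credentials, node UUID and address are placeholders):

    import paramiko
    from ironicclient import client as ironic_client

    # Assertions on whatever the Ironic API exposes...
    ironic = ironic_client.get_client(
        1, os_username='admin', os_password='secret',
        os_tenant_name='admin', os_auth_url='http://keystone:5000/v2.0')
    node = ironic.node.get('NODE_UUID')
    assert node.provision_state == 'active'
    assert node.power_state == 'power on'

    # ...and anything beyond the API has to be observable from inside
    # the provisioned guest over SSH.
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('NODE_IP', username='cirros', password='PASSWORD')
    _stdin, stdout, _stderr = ssh.exec_command('uname -a')
    print(stdout.read())
    ssh.close()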

Any thoughts we can collect before and during the summit would be great.

Thanks,
Adam

[1] https://etherpad.openstack.org/p/IronicCI
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-sdk-php] Questions about user-facing documentation

2014-04-28 Thread Anne Gentle
On Sun, Apr 27, 2014 at 7:57 PM, Shaunak Kashyap <
shaunak.kash...@rackspace.com> wrote:

> Thanks for your inputs, Matt and Anne. I'm punting on the first question
> (re: publishing) for now. It sounds like this is a larger discussion and we
> can make progress on the PHP SDK user-facing documentation without
> answering it right away. I'll bring it up again if we don't have an answer
> by the time we have something worth publishing.
>
> Based on your inputs I've created this spec:
> https://wiki.openstack.org/wiki/OpenStack-SDK-PHP/UserFacingDocumentation.
> Feel free to comment on it. I intend to start implementing it within a week
> or so.
>
>
Great details, Shaunak! I say go for it.

I'm working with the Foundation and infra team to get
developer.openstack.org registered and then we'll redirect api.openstack.org to
developer.openstack.org. Look for that change this week.

We had a good discussion on IRC about some of the possible difficulties
with this domain: 1) is the developer.example.com standard commercial-based
and not open-source? and 2) will developers who want to contribute to
OpenStack be confused by this additional subdomain? For 1), I don't believe
there's a good open source equivalent we'd want to emulate. For 2), I think
app developers will far outnumber contributor developers, and their need for
a dedicated landing page outweighs the contributor developers' need for one.

I do want to mitigate these concerns by redesigning the docs.openstack.org
landing page to ensure that app devs know there's a site just for them (not
just for docs but for downloading tools like SDKs). I would love to get
input on how to best grow the developer site and engage that community
further.

Thanks,
Anne






> Thank you,
>
> Shaunak
>
> On Apr 21, 2014, at 11:40 AM, Anne Gentle 
> wrote:
>
> > Great questions, Shaunak. Yep I've been thinking about this for a while
> but not sure I have complete conclusions. More below.
> >
> >
> > On Wed, Apr 16, 2014 at 9:06 PM, Shaunak Kashyap <
> shaunak.kash...@rackspace.com> wrote:
> > Hi folks,
> >
> > As part of working on
> https://blueprints.launchpad.net/openstack-sdk-php/+spec/sphinx-docs,
> I've been looking at
> http://git.openstack.org/cgit/stackforge/openstack-sdk-php/tree/doc.
> >
> > Before I start making any changes toward that BP, however, I wanted to
> put forth a couple of overarching questions and proposals to the group:
> >
> > 1. Where and how should the user guide (i.e. Sphinx-generated docs) be
> published?
> >
> > Just to give some context and background, we have a User Guide with a
> Python SDK chapter: http://docs.openstack.org/user-guide/content/
> >
> > A PHP SDK chapter might be a good addition, if you look at what we have
> and a pattern that exists, but I'd REALLY like us to break out of the book
> model and try to create a developer portal with a more page-centered model.
> >
> > What's that? For REST APIs, developers typically expect:
> > developer.example.com - for docs, examples, code, links to download dev
> kits.
> > api.example.com - for the actual api endpoints.
> >
> > What's tough for us is that there are thousands of OpenStack endpoints
> by now. A few years back we created api.openstack.org, but didn't realize
> there's an existing pattern of what devs look for from REST APIs. My bad, I
> hope we can correct that by creating developer.openstack.org.
> >
> >
> >
> > I know there's http://docs.openstack.org/. It seems like the logical
> place for these to be linked off of but where would that link go and what
> is the process of publishing our Sphinx-generated docs to that place?
> >
> >
> > What's tough about correcting our doc domains going forward is that we
> have docs.openstack.org/developer for all the contributor dev docs. I do
> want to continue to separate out the audiences: the contributors are Python
> devs, the app devs are all languages. I'd like developer.openstack.org to
> be that landing point for all language devs looking to consume OpenStack
> cloud resources from any provider.
> >
> > Another difficulty is, how do we govern and review this content? Or do
> we? Does it fall under the Documentation Program mission? Right now our
> mission states we only cover "core" projects (the definition being just a
> handful of projects) and users as top priority. So I see a developer portal
> as a user priority. I'm not trying to do a land-grab as a PTL here, but
> trying to serve users as best we can, and this is definitely an underserved
> audience. The TC has a stance of "code or it doesn't count" so stackforge
> seems like a good starting place. Then core teams can form with
> deliverables, one of which can be docs. I think we're on the right track
> here, just making sure I state the potential decision point on how to
> govern SDKs and their docs and form communities around them.
> >
> >
> > 2. How should the user guide(s) be organized?
> >
> > If I were a developer, I'm probably looking to use a particular
> OpenStack

Re: [openstack-dev] [barbican] certificate orders discussion

2014-04-28 Thread John Wood
Hello folks,

FWIW, we've been trying to refine the ssl cert generation workflows via this 
blueprint: https://blueprints.launchpad.net/barbican/+spec/add-ssl-ca-support

These flows could be kicked off via the orders API.

Thanks,
John




From: Adam Young [ayo...@redhat.com]
Sent: Monday, April 28, 2014 2:52 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [barbican] certificate orders discussion

On 04/28/2014 02:07 PM, Pitucha, Stanislaw Izaak wrote:

Hi all,
I've seen some blueprints/wikis from people interested in certificate
signing via barbican orders, so hopefully you'll have some feedback.

I submitted a proposal for certificate/signing order API at
https://review.openstack.org/90613 (based on Arvind's previous work with
keys)

Good stuff.


It's not pretty and needs some more details, but it's there :)
There are some things I wasn't really sure how to handle, so here's my
reasoning you may have an opinion on:

1. The keys for generating a new certificate request + signing could be
handled inside of the generate+sign order. This would require fewer requests
than uploading a key as a secret first, but on the other hand, there's
already an api for generating those keys, so it would be nice to just
reference the key potentially already generated by barbican. I went with
posting an id reference to the key.

Let's not reinvent a format here.  Certmonger can already do that stuff client
side, so let's just focus on getting the request to the server, signed, and
returned.
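
For reference, a minimal sketch of that client-side step with the Python
cryptography library (certmonger or plain openssl would do equally well; the
subject below is a placeholder), producing the PEM CSR that then gets sent to
the server:

    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # The key pair never leaves the client; only the CSR travels.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    csr = x509.CertificateSigningRequestBuilder().subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u'www.example.com'),
    ])).sign(key, hashes.SHA256(), default_backend())
    csr_pem = csr.public_bytes(serialization.Encoding.PEM)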



2. The signing-only request takes the pkcs10-style csr inline in its meta
part. I thought about treating it the same as the key decision above, but
thought this would be wrong. The request isn't really secret - it doesn't
contain any private data, so it would be incorrect to treat it the same as
keys.

Correct.  This should be simpler than a Keys escrow call:  you might want to do 
key escrow for encryption keys that get signed this way, but that could used 
the existing mechanism via a separate call, either prior or after.  Prior might 
 be the way to go:  "if you request a certificate with an encryption attribute, 
you need to have submitted the key for escrow."

On the other hand, having one order to generate a CSR and one to sign it
would be a cleaner design without a mix of attributes that are required or
not depending on the use case.
In this case the first option won - just include it inline - since generation
of the request by barbican is unlikely to be a very common case.

Otherwise, if you're interested in certificate orders, please have a look
and see if the proposed api is missing any parts you would like to see.
Maybe we can figure it out before the coding starts :)

Let's get the basics down:  process a CSR. We still need to handle the workflow
for approval.
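
To make that concrete, a signing-only order against the proposed API might
look roughly like the sketch below. The endpoint path, the 'certificate'
order type and the meta field names are assumptions drawn from this thread,
not the spec under review:

    import json
    import requests

    BARBICAN = 'http://barbican.example.com:9311/v1'
    HEADERS = {'X-Auth-Token': 'TOKEN', 'Content-Type': 'application/json'}

    # Hypothetical signing-only order: the PKCS#10 CSR rides inline in meta.
    order = {
        'type': 'certificate',            # assumed order type
        'meta': {
            'request_type': 'sign-only',  # assumed field name
            'pkcs10': csr_pem.decode('utf-8'),  # PEM CSR from the client
        },
    }

    resp = requests.post(BARBICAN + '/orders', headers=HEADERS,
                         data=json.dumps(order))
    print(resp.status_code, resp.json().get('order_ref'))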



Regards,
Stanisław Pitucha
Cloud Services
Hewlett Packard





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

