Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Andreas Jaeger
On 02/05/2014 06:38 PM, Jonathan Bryce wrote:
> On Feb 5, 2014, at 10:18 AM, Steve Gordon  wrote:
> 
>> - Original Message -
>>> From: "Andreas Jaeger" 
>>> To: "Mark McLoughlin" , "OpenStack Development Mailing 
>>> List (not for usage questions)"
>>> 
>>> Cc: "Jonathan Bryce" 
>>> Sent: Wednesday, February 5, 2014 9:17:39 AM
>>> Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
>>>
>>> On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
 On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
> Steve Gordon wrote:
>>> From: "Anne Gentle" 
>>> Based on today's Technical Committee meeting and conversations with the
>>> OpenStack board members, I need to change our Conventions for service
>>> names
>>> at
>>> https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
>>> .
>>>
>>> Previously we have indicated that Ceilometer could be named OpenStack
>>> Telemetry and Heat could be named OpenStack Orchestration. That's not
>>> the
>>> case, and we need to change those names.
>>>
>>> To quote the TC meeting, ceilometer and heat are "other modules" (second
>>> sentence from 4.1 in
>>> http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
>>> distributed with the Core OpenStack Project.
>>>
>>> Here's what I intend to change the wiki page to:
>>> Here's the list of project and module names and their official names
>>> and
>>> capitalization:
>>>
>>> Ceilometer module
>>> Cinder: OpenStack Block Storage
>>> Glance: OpenStack Image Service
>>> Heat module
>>> Horizon: OpenStack dashboard
>>> Keystone: OpenStack Identity Service
>>> Neutron: OpenStack Networking
>>> Nova: OpenStack Compute
>>> Swift: OpenStack Object Storage
>
> Small correction. The TC had not indicated that Ceilometer could be
> named "OpenStack Telemetry" and Heat could be named "OpenStack
> Orchestration". We formally asked[1] the board to allow (or disallow)
> that naming (or more precisely, that use of the trademark).
>
> [1]
> https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
>
> We haven't got a formal and clear answer from the board on that request
> yet. I suspect they are waiting for progress on DefCore before deciding.
>
> If you need an answer *now* (and I suspect you do), it might make sense
> to ask foundation staff/lawyers about using those OpenStack names with
> the current state of the bylaws and trademark usage rules, rather than
> the hypothetical future state under discussion.

 Basically, yes - I think having the Foundation confirm that it's
 appropriate to use "OpenStack Telemetry" in the docs is the right thing.

 There's an awful lot of confusion about the subject and, ultimately,
 it's the Foundation staff who are responsible for enforcing (and giving
 advice to people on) the trademark usage rules. I've cc-ed Jonathan so
 he knows about this issue.

 But FWIW, the TC's request is asking for Ceilometer and Heat to be
 allowed to use their "Telemetry" and "Orchestration" names in *all* of the
 circumstances where e.g. Nova is allowed to use its "Compute" name.

 Reading again this clause in the bylaws:

  "The other modules which are part of the OpenStack Project, but
   not the Core OpenStack Project may not be identified using the
   OpenStack trademark except when distributed with the Core OpenStack
   Project."

 it could well be said that this case of naming conventions in the docs
 for the entire OpenStack Project falls under the "distributed with" case
 and it is perfectly fine to refer to "OpenStack Telemetry" in the docs.
 I'd really like to see the Foundation staff give their opinion on this,
 though.
> 
> In this case, we are talking about documentation that is produced and 
> distributed with the integrated release to cover the Core OpenStack Project 
> and the "modules" that are distributed together with the Core OpenStack 
> Project in the integrated release. This is the intended use case for the 
> exception Mark quoted above from the Bylaws, and I think it is perfectly fine 
> to refer to the integrated components in the OpenStack release documentation 
> as OpenStack components.


What about if I talk about OpenStack at a conference (like I'm doing
today)? What should I say: "Orchestration", "Heat module" (or just "Heat")?


What about all the OpenStack distributors and users like SUSE,
Rackspace, HP, Red Hat etc? What should they use in their documentation
and software?

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Jeff Hawn,Jennifer Guild,Felix Imendörffer,HRB16746 (AG Nürnberg)

Re: [openstack-dev] [heat] Sofware Config progress [for appliances]

2014-02-05 Thread Mike Spreitzer
> From: Prasad Vellanki 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> , 
> Date: 01/21/2014 02:16 AM
> Subject: Re: [openstack-dev] [heat] Sofware Config progress
> 
> Steve & Clint
> 
> That should work. We will look at implementing a resource that spins
> up a short-lived VM for bootstrapping a service VM and informing a
> configuration server for further configuration.
> 
> thanks
> prasadv
> 

> On Wed, Jan 15, 2014 at 7:53 PM, Steven Dake  wrote:
> On 01/14/2014 09:27 PM, Clint Byrum wrote:
> Excerpts from Prasad Vellanki's message of 2014-01-14 18:41:46 -0800:
> Steve
> 
> I did not mean to have custom solution at all. In fact that would be
> terrible.  I think Heat model of software config and deployment is really
> good. That allows configurators such as Chef, Puppet, Salt or Ansible to be
> plugged into it and all users need to write are modules for those.
> 
> What I was thinking is if there is a way to use software config/deployment
> to do initial configuration of the appliance by using agentless system
> such as Ansible or Salt, thus requiring no cfminit. I am not sure this
> will work either, since it might require ssh keys to be installed for
> getting ssh to work without password prompting. But I do see that ansible
> and salt support username/password option.
> If this would not work, I agree that the best option is to make them
> support cfminit...
> Ansible is not agent-less. It just makes use of an extremely flexible
> agent: sshd. :) AFAIK, salt does use an agent though maybe they've added
> SSH support.
> 
> Anyway, the point is, Heat's engine should not be reaching into your
> machines. It talks to API's, but that is about it.
> 
> What you really want is just a VM that spins up and does the work for
> you and then goes away once it is done.
> Good thinking.  This model might work well without introducing the 
> "groan another daemon" problems pointed out elsewhere in this thread
> that were snipped.  Then the "modules" could simply be heat 
> templates available to the Heat engine to do the custom config setup.
> 
> The custom config setup might still be a problem with the original 
> constraints (not modifying images to inject SSH keys).
> 
> That model wfm.
> 
> Regards
> -steve
> 

(1) What destroys the short-lived VM if the heat engine crashes between 
creating and destroying that short-lived VM?

(2) What if something goes wrong and the heat engine never gets the signal 
it is waiting for?

(3) This still has the problem that something needs to be configured 
some(client-ish)where to support the client authorization solution 
(usually username/password).

(4) Given that everybody seems sanguine about solving the client 
authorization problem, what is wrong with code in the heat engine opening 
and using a connection to code in an appliance?  Steve, what do you mean 
by "reaching into your machines" that is critically different from calling 
their APIs?

(5) Are we really talking about the same kind of software configuration 
here?  Many appliances do not let you SSH into a bash shell and do 
whatever you want; they provide only their own API or special command 
language over a telnet/ssh sort of connection.  Is hot-software-config 
intended to cover that?  Is this what the OneConvergence guys are 
concerned with?
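
For concreteness, the kind of interaction question (5) describes tends to look
like the hedged sketch below: driving an appliance's own command language over
SSH with a username/password pair rather than a general-purpose shell. The
host, credentials, and the "show version" command are placeholders, not
anything defined by hot-software-config.

    import paramiko

    def run_appliance_command(host, username, password, command):
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=username, password=password)
        try:
            # The appliance interprets this in its own command language;
            # there is no general-purpose shell behind it.
            stdin, stdout, stderr = client.exec_command(command)
            return stdout.read().decode()
        finally:
            client.close()

    # Placeholder usage:
    # print(run_appliance_command("192.0.2.50", "admin", "secret", "show version"))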

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Clint Byrum
Excerpts from Murray, Paul (HP Cloud Services)'s message of 2014-01-27 04:14:44 
-0800:
> Hi Justin,
> 
> My thought process is to go back to basics. To perform discovery there is no 
> getting away from the fact that you have to start with a well-known address 
> that your peers can access on the network. The second part is a 
> service/protocol accessible at that address that can perform the discovery. 
> So the questions are: what well-known addresses can I reach? And is that a 
> suitable place to implement the service/protocol.
> 
> The metadata service is different to the others in that it can be accessed 
> without credentials (correct me if I'm wrong), so it is the only possibility 
> out of the openstack services if you do not want to have credentials on the 
> peer instances. If that is not the case then the other services are options. 
> All services require security groups and/or networks to be configured 
> appropriately to access them.
> 
> (Yes, the question "can all instances access the same metadata service" did 
> really mean are they all local. Sorry for being unclear. But I think your 
> answer is yes, they are, right?)
> 
> Implementing the peer discovery in the instances themselves requires some 
> kind of multicast or knowing a list of addresses to try. In both cases either 
> the actual addresses or some name resolved through a naming service would do. 
> Whatever is starting your instances does have access to at least nova, so it 
> can find out if there are any running instances and what their addresses are. 
> These could be used as the addresses they try first. This is the way that 
> internet p2p services work and they work in the cloud.
> 

That's kind of my point about using Heat. You can use any higher level
tool to achieve this by dropping the existing addresses into userdata
and then using a gossip protocol to "spread the word" to existing nodes
about new ones.
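
As a rough illustration of that pattern, here is a minimal, hedged sketch of
the gossip half: each node keeps a set of peer addresses (seeded from the
user-data written at boot) and periodically merges it with one random peer's
set. The /peers endpoint, port 8500, and JSON layout are invented purely for
illustration.

    import random

    import requests

    def gossip_once(my_address, peers):
        # Exchange peer lists with one random known peer; return the union.
        if not peers:
            return peers
        target = random.choice(sorted(peers))
        resp = requests.post("http://%s:8500/peers" % target,
                             json={"from": my_address, "peers": sorted(peers)},
                             timeout=2)
        resp.raise_for_status()
        return peers | set(resp.json().get("peers", []))

    # The initial set would come from the user-data the launcher wrote at boot:
    # known = gossip_once("10.0.0.7", {"10.0.0.5", "10.0.0.6"})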

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Clint Byrum
Excerpts from Day, Phil's message of 2014-01-27 03:02:17 -0800:
> > -Original Message-
> > From: Clint Byrum [mailto:cl...@fewbar.com]
> > Sent: 24 January 2014 21:09
> > To: openstack-dev
> > Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
> > through metadata service
> > 
> > Excerpts from Justin Santa Barbara's message of 2014-01-24 12:29:49 -0800:
> > > Clint Byrum  wrote:
> > >
> > > >
> > > > Heat has been working hard to be able to do per-instance limited
> > > > access in Keystone for a while. A trust might work just fine for what 
> > > > you
> > want.
> > > >
> > >
> > > I wasn't actually aware of the progress on trusts.  It would be
> > > helpful except (1) it is more work to have to create a separate trust
> > > (it is even more painful to do so with IAM) and (2) it doesn't look
> > > like we can yet lock-down these delegations as much as people would
> > > probably want.  I think IAM is the end-game in terms of the model that
> > > people actually want, and it ends up being incredibly complex.
> > > Delegation is very useful (particularly because clusters could
> > > auto-scale themselves), but I'd love to get an easier solution for the
> > > peer discovery problem than where delegation ends up.
> > >
> > > Are you hesitant to just use Heat? This is exactly what it is supposed
> > > > to do.. make a bunch of API calls and expose the results to
> > > > instances for use in configuration.
> > >
> > > > If you're just hesitant to use a declarative templating language, I
> > > > totally understand. The auto-scaling minded people are also feeling
> > > > this way. You could join them in the quest to create an imperative
> > > > cluster-making API for Heat.
> > > >
> > >
> > > I don't want to _depend_ on Heat.  My hope is that we can just launch
> > > 3 instances with the Cassandra image, and get a Cassandra cluster.  It
> > > might be that we want Heat to auto-scale that cluster, Ceilometer to
> > > figure out when to scale it, Neutron to isolate it, etc but I think we
> > > can solve the basic discovery problem cleanly without tying in all the 
> > > other
> > services.
> > >  Heat's value-add doesn't come from solving this problem!
> > >
> > 
> > I suppose we disagree on this fundamental point then.
> > 
> > Heat's value-add really does come from solving this exact problem. It
> > provides a layer above all of the other services to facilitate expression of
> > higher level concepts. Nova exposes a primitive API, where as Heat is meant
> > to have a more logical expression of the user's intentions. That includes
> > exposure of details of one resource to another (not just compute, swift
> > containers, load balancers, volumes, images, etc).
> > 
> 
> The main problem I see with using heat is that it seems to depend on all 
> instances having network access to the heat server, and I'm not sure how that 
> would work for a Neutron VPN network. This is already solved for the 
> Metadata server because the Neutron proxy already provides secure access.
> 

This is not actually true. For Justin's use case, only access to the
ec2 metadata is needed. When using Heat one can set the Userdata to
whatever one already has discovered before booting the server at boot
time. Heat uses its own Metadata server for ongoing updates, but in
Justin's prescribed scenario, the machines discover each other at boot-up
only anyway. So machine 0 sees no other machines. Machine 1 sees machine
0. Machine 2 sees 0 and 1... etc.
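
A minimal sketch of that launcher-side pattern, with boot_server() standing in
as a hypothetical helper for whatever actually creates the instance (Heat,
novaclient, and so on), assumed to block until the server has an IP:

    import json

    def boot_server(name, userdata):
        # Hypothetical: create the instance with this user-data and return
        # its fixed IP once it is ACTIVE.
        raise NotImplementedError("wire this up to your launcher of choice")

    def boot_cluster(size):
        addresses = []  # machine 0 sees no peers, machine 1 sees machine 0, ...
        for i in range(size):
            userdata = json.dumps({"peers": addresses})
            addresses.append(boot_server("node-%d" % i, userdata))
        return addresses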

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Clint Byrum
Excerpts from Monty Taylor's message of 2014-02-05 14:57:33 -0800:
> On 01/27/2014 11:02 AM, Day, Phil wrote:
> >> -Original Message-
> >> From: Clint Byrum [mailto:cl...@fewbar.com]
> >> Sent: 24 January 2014 21:09
> >> To: openstack-dev
> >> Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer 
> >> instances
> >> through metadata service
> >>
> >> Excerpts from Justin Santa Barbara's message of 2014-01-24 12:29:49 -0800:
> >>> Clint Byrum  wrote:
> >>>
> 
>  Heat has been working hard to be able to do per-instance limited
>  access in Keystone for a while. A trust might work just fine for what you
> >> want.
> 
> >>>
> >>> I wasn't actually aware of the progress on trusts.  It would be
> >>> helpful except (1) it is more work to have to create a separate trust
> >>> (it is even more painful to do so with IAM) and (2) it doesn't look
> >>> like we can yet lock-down these delegations as much as people would
> >>> probably want.  I think IAM is the end-game in terms of the model that
> >>> people actually want, and it ends up being incredibly complex.
> >>> Delegation is very useful (particularly because clusters could
> >>> auto-scale themselves), but I'd love to get an easier solution for the
> >>> peer discovery problem than where delegation ends up.
> >>>
> >>> Are you hesitant to just use Heat? This is exactly what it is supposed
>  to do.. make a bunch of API calls and expose the results to
>  instances for use in configuration.
> >>>
>  If you're just hesitant to use a declarative templating language, I
>  totally understand. The auto-scaling minded people are also feeling
>  this way. You could join them in the quest to create an imperative
>  cluster-making API for Heat.
> 
> >>>
> >>> I don't want to _depend_ on Heat.  My hope is that we can just launch
> >>> 3 instances with the Cassandra image, and get a Cassandra cluster.  It
> >>> might be that we want Heat to auto-scale that cluster, Ceilometer to
> >>> figure out when to scale it, Neutron to isolate it, etc but I think we
> >>> can solve the basic discovery problem cleanly without tying in all the 
> >>> other
> >> services.
> >>>   Heat's value-add doesn't come from solving this problem!
> >>>
> >>
> >> I suppose we disagree on this fundamental point then.
> >>
> >> Heat's value-add really does come from solving this exact problem. It
> >> provides a layer above all of the other services to facilitate expression 
> >> of
> >> higher level concepts. Nova exposes a primitive API, where as Heat is meant
> >> to have a more logical expression of the user's intentions. That includes
> >> exposure of details of one resource to another (not just compute, swift
> >> containers, load balancers, volumes, images, etc).
> >>
> >
> > The main problem I see with using heat is that it seems to depend on all 
> > instances having network access to the heat server, and I'm not sure how 
> > that would work for a Neutron VPN network. This is already solved for the 
> > Metadata server because the Neutron proxy already provides secure access.
> 
> That sounds like an integration issue we should fix. (regardless of 
> whether it makes Justin's life any better) If we can't use heat in some 
> situations because neutron doesn't know how to securely proxy to its 
> metadata service ... that's kinda yuck.
> 

Indeed that is a known problem with Heat and one that has several
solutions. One simple solution is for Heat to simply update the nova
userdata, and for in-instance tools to just query ec2 metadata. The only
obstacle to that is that ec2 metadata is visible to non-privileged users
on the box without extra restrictions being applied.
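
The in-instance side of that approach is just a read of the EC2-compatible
metadata endpoint; a hedged sketch (keeping in mind the caveat above that any
local user can read this unless extra restrictions are applied):

    import requests

    USER_DATA_URL = "http://169.254.169.254/latest/user-data"

    def read_userdata():
        # Returns whatever the launcher (or Heat, if it updates the nova
        # userdata) placed here; interpreting the contents is up to the
        # in-instance tooling.
        resp = requests.get(USER_DATA_URL, timeout=2)
        resp.raise_for_status()
        return resp.text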

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Sofware Config progress

2014-02-05 Thread Mike Spreitzer
> From: Steven Dake 
...
> The crux of the problem is how do you obtain critical mass for 
> custom one-off solutions?  Let's assume two possible solutions to 
> this problem that these vendors could take.  If there are more, 
> please feel free to explain them:
> 
> 1) Implement a ReST server which the vendor's image talks to in order to 
> obtain bootstrapping information
> 2) SSH into the machine from an external configuration server process

It looks to me like the OneConvergence guys had in mind

(3) Client is authorized by presenting a username/password pair, as 
mentioned in the Puppet case (
https://puppetlabs.com/blog/managing-f5-big-ip-network-devices-with-puppet
) cited elsewhere in this thread; server authentication/authorization is 
not a concern; conversation confidentiality either is not a concern or is 
handled by the client/server protocol without additional configuration; 
client is code running in the heat engine
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Dina Belova
>
> Perhaps we should start putting each project on the TC agenda for a
> review of its current standing.  For any gaps, I think we should set a
> specific timeframe for when we expect these gaps to be filled.


Really good idea. New requirements are great, but frankly speaking not all
currently integrated projects meet all of them.
It would be nice to find all the gaps there and fix them ASAP.

Dina


On Thu, Feb 6, 2014 at 2:47 AM, Monty Taylor  wrote:

> On 02/05/2014 10:38 PM, Sean Dague wrote:
>
>> On 02/06/2014 06:31 AM, Russell Bryant wrote:
>>
>>> On 02/05/2014 02:31 PM, Doug Hellmann wrote:
>>>



 On Wed, Feb 5, 2014 at 1:24 PM, Russell Bryant wrote:

  Greetings,

  In the TC we have been going through a process to better define our
  requirements for incubation and graduation to being an integrated
  project.  The current version can be found in the governance repo:

  http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements

  Is it time that we do an analysis of the existing integrated
 projects
  against the requirements we have set?  If not now, when?

  Perhaps we should start putting each project on the TC agenda for a
  review of its current standing.  For any gaps, I think we should
 set a
  specific timeframe for when we expect these gaps to be filled.

  Thoughts?


 I like the idea of starting this soon, so projects can prioritize the
 work during the next cycle and have time to plan to discuss any related
 issues at the summit. Setting a deadline for finishing may depend on the
 nature and size of the gaps, but it seems fair to set a deadline for
 *starting* the work.

>>>
>>> Well, I think in all cases the work should start ASAP.
>>>
>>> We could set the deadline for when we expect it to be finished on a case
>>> by case basis, though.
>>>
>>
>> First, +1 on doing these kinds of reviews. I think as we've been
>> applying the rules to new projects, we need to validate that they are
>> sane by applying them to existing projects.
>>
>> My feeling is that we've been evolving these new requirements during
>> Icehouse, and it's fair to say that all existing integrated projects
>> need to be up to snuff by Juno, otherwise we take a project back to
>> incubating status.
>>
>> I think it will be really good to do some gap analysis here and figure
>> out where we think we have holes in our existing integrated projects.
>> Because realistically I think we're going to find a number of projects
>> that don't meet our current bar, and we'll need to come up with a way to
>> get them in sync.
>>
>>  From a gating perspective, I think a bunch of our issues are based on
>> the fact that as the number of moving parts in OpenStack expands, our
>> tolerance for any particular part not being up to par has to decrease,
>> because the number of ways a badly integrated component can impact the
>> OpenStack whole is really large.
>>
>
> +100
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [group-policy] Canceling tomorrows meeting

2014-02-05 Thread Mohammad Banikazemi
Hi everybody,

Since some of us are away attending the OpenDaylight Summit, we are going
to cancel the Thursday Feb 6 meeting.
We have started coding for our PoC implementation and should have some code
for review by next week. Please use the mailing list for discussing Group
Policy related matters.

Best,

Mohammad




From:   Kyle Mestery 
To: "OpenStack Development Mailing List (not for usage questions)"
,
Date:   01/29/2014 06:35 PM
Subject:[openstack-dev] [neutron] [group-policy] Canceling tomorrows
meeting



Folks:

We’ve decided to cancel tomorrow's IRC meeting. If you have
action items around the POC we’re all working on, please
continue down that path for now. We’ll sync up for a meeting
the following week again.

Thanks!
Kyle
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-05 Thread Joshua Harlow
Any MySQL DB drivers? (I think the majority of OpenStack deployments use
MySQL.)

How about sqlalchemy (what would possibly need to change there for it to
work)? The pain that I see is that to connect all these libraries into
asyncio they have to invert how they work (sqlalchemy would have to become
asyncio compatible (?), which probably means a big rewrite). This is where
it would be great to have an 'eventlet'-like thing built on top of asyncio
(letting existing libraries work without rewrites). Eventually, I guess, in
time (if tulip succeeds) this 'eventlet'-like thing could be removed.

Has there been commitment from library developers to start adjusting their
libraries to follow this new model (OpenStack has over 100 dependencies,
so each one would seem to have to change, especially if it has any sort of
I/O capabilities)?
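
One commonly suggested interim answer (not an eventlet equivalent, just a
workaround) is to leave the blocking library untouched and push its calls onto
a thread pool from asyncio. A minimal, hedged sketch, using an in-memory
SQLite URL as a stand-in for a MySQL one:

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    from sqlalchemy import create_engine, text

    engine = create_engine("sqlite:///:memory:")  # stand-in for a MySQL URL
    executor = ThreadPoolExecutor(max_workers=4)

    def blocking_query():
        # Ordinary synchronous SQLAlchemy call; the library is unchanged.
        with engine.connect() as conn:
            return conn.execute(text("SELECT 1")).scalar()

    async def main():
        loop = asyncio.get_running_loop()
        # The blocking call runs on a worker thread, so the event loop
        # stays free while it waits on the database.
        return await loop.run_in_executor(executor, blocking_query)

    print(asyncio.run(main()))

This sidesteps the rewrite question rather than answering it, which is exactly
the gap an 'eventlet'-like layer on top of asyncio would have to fill.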

-Original Message-
From: victor stinner 
Reply-To: "OpenStack Development Mailing List (not for usage questions)"

Date: Wednesday, February 5, 2014 at 3:00 AM
To: "OpenStack Development Mailing List (not for usage questions)"

Subject: Re: [openstack-dev] Asynchrounous programming:
replace eventlet with asyncio

>Hi,
>
>Chris Behrens wrote:
>> Interesting thread. I have been working on a side project that is a
>> gevent/eventlet replacement [1] that focuses on thread-safety and
>> performance. This came about because of an outstanding bug we have with
>> eventlet not being Thread safe. (We cannot safely enable thread pooling for
>> DB calls so that they will not block.)
>
>There are DB drivers compatible with asyncio: PostgreSQL, MongoDB, Redis
>and memcached.
>
>There is also a driver for ZeroMQ which can be used in Oslo Messaging to
>have a more efficient (asynchronous) driver.
>
>There are also many event loops for: gevent (geventreactor, gevent3),
>greenlet, libuv, GLib and Tornado.
>
>See the full list:
>http://code.google.com/p/tulip/wiki/ThirdParty
>
>Victor
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [State-Management] Agenda for meeting (tommorow) at 2000 UTC

2014-02-05 Thread Joshua Harlow
Hi all,

The [state-management] project team holds a weekly meeting in
#openstack-meeting on Thursdays, 2000 UTC. The next meeting is tomorrow,
2014-02-06!!! 

As usual, everyone is welcome :-)

Link: https://wiki.openstack.org/wiki/Meetings/StateManagement
Taskflow: https://wiki.openstack.org/TaskFlow

## Agenda (30-60 mins):

- Discuss any action items from last meeting.
- Taskflow 0.1.3 release (why, what, when).
- Taskflow 0.2.0 release (why, what, when).
- Documenting and planning and improving cinder integration processes.
- Discuss about any other potential new use-cases for said library.
- Discuss about any other ideas, reviews needing help, questions and
answers (and more!).

Any other topics are welcome :-)

See you all soon!

--

Joshua Harlow

It's openstack, relax... | harlo...@yahoo-inc.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread John Griffith
On Wed, Feb 5, 2014 at 5:54 PM, Rochelle.RochelleGrober
 wrote:
> On Wed, Feb 5, 2014 at 12:05 PM, Russell Bryant  wrote:
>
> On 02/05/2014 11:22 AM, Thierry Carrez wrote:
>> (This email is mostly directed to PTLs for programs that include one
>> integrated project)
>>
>> The DefCore subcommittee from the OpenStack board of directors asked the
>> Technical Committee yesterday about which code sections in each
>> integrated project should be "designated sections" in the sense of [1]
>> (code you actually need to run or include to be allowed to use the
>> trademark). That determines where you can run alternate code (think:
>> substitute your own private hypervisor driver) and still be able to call
>> the result openstack.
>>
>> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
>>
>> PTLs and their teams are obviously the best placed to define this, so it
>> seems like the process should be: PTLs propose designated sections to
>> the TC, which blesses them, combines them and forwards the result to the
>> DefCore committee. We could certainly leverage part of the governance
>> repo to make sure the lists are kept up to date.
>>
>> Comments, thoughts ?
>>
>
> The process you suggest is what I would prefer.  (PTLs writing proposals
> for TC to approve)
>
> Using the governance repo makes sense as a means for the PTLs to post
> their proposals for review and approval of the TC.
>
>
>
> +1
>
>
>
> +1
>
>
>
> Who gets final say if there's strong disagreement between a PTL and the
> TC?  Hopefully this won't matter, but it may be useful to go ahead and
> clear this up front.
>
>
>
> The Board has some say in this, too, right? The proposal [1] is for a set of
> tests to be proposed and for the Board to approve (section 8).
>
>
>
> What is the relationship between that test suite and the designated core
> areas? It seems that anything being tested would need to be designated as
> core. What about the inverse?
>
>
>
> The test suite should validate that the core
> capabilities/behaviors/functionality behave as expected (positive and
> negative testing in an integrated environment).  So, the test suites would
> need to be reviewed for applicability.  Maybe, like Gerrit, there would be
> voting and nonvoting parts of tests based on whether something outside of
> core gets exercised in the process of running some tests.  Whatever the
> case, I doubt that the tests would generate a simple yes/no, but rather a
> score.  A discussion of one of the subsets of capabilities for Nova might
> start with the capabilities highlighted on this page:
>
> https://wiki.openstack.org/wiki/HypervisorSupportMatrix
>
>
>
> The test suite would need to exercise the capabilities in these sorts of
> matrices and might produce the A/B/C grades as the rest of the page
> elucidates.
>

Sorry but I think this misses the point of the PTL request being made
here.  The question being asked is not "is the interface compatible";
it's quite possible for somebody to build a cloud without a single
piece of OpenStack code but still provide an OpenStack compatible
interface and mimic behaviors.  IMO compatibility tests already exist
for the most part via the Tempest test suite that we use to gate on.
If I'm incorrect and that is in fact the goal, that's significantly
easier to solve IMO.

The question here as I understand it (and I may be confused again
based on the thread here) is what parts of the code/modules are
required to be used in order for somebody building a cloud to say
"it's an OpenStack cloud"?  The cheat answer for me would be, you have
to be using cinder-api, cinder-scheduler and cinder-volume services
(regardless of driver).  That raises the next layer of detail though,
do those services have to be un-modified?  How much modification is
acceptable etc. What about deployments that may use their own
scheduler?

I think the direction the thread is taking here is that there really
isn't enough information to make this call, and there certainly isn't
enough understanding of the intent, meaning or ramifications.
>
>
> --Rocky
>
>
>
> Doug
>
>
>
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
>
>
>
>
>
>
> --
> Russell Bryant
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Rochelle.RochelleGrober
On Wed, Feb 5, 2014 at 12:05 PM, Russell Bryant  wrote:
On 02/05/2014 11:22 AM, Thierry Carrez wrote:
> (This email is mostly directed to PTLs for programs that include one
> integrated project)
>
> The DefCore subcommittee from the OpenStack board of directors asked the
> Technical Committee yesterday about which code sections in each
> integrated project should be "designated sections" in the sense of [1]
> (code you actually need to run or include to be allowed to use the
> trademark). That determines where you can run alternate code (think:
> substitute your own private hypervisor driver) and still be able to call
> the result openstack.
>
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
>
> PTLs and their teams are obviously the best placed to define this, so it
> seems like the process should be: PTLs propose designated sections to
> the TC, which blesses them, combines them and forwards the result to the
> DefCore committee. We could certainly leverage part of the governance
> repo to make sure the lists are kept up to date.
>
> Comments, thoughts ?
>
The process you suggest is what I would prefer.  (PTLs writing proposals
for TC to approve)

Using the governance repo makes sense as a means for the PTLs to post
their proposals for review and approval of the TC.

+1

+1

Who gets final say if there's strong disagreement between a PTL and the
TC?  Hopefully this won't matter, but it may be useful to go ahead and
clear this up front.

The Board has some say in this, too, right? The proposal [1] is for a set of 
tests to be proposed and for the Board to approve (section 8).

What is the relationship between that test suite and the designated core areas? 
It seems that anything being tested would need to be designated as core. What 
about the inverse?

The test suite should validate that the core 
capabilities/behaviors/functionality behave as expected (positive and negative 
testing in an integrated environment).  So, the test suites would need to be 
reviewed for applicability.  Maybe, like Gerrit, there would be voting and 
nonvoting parts of tests based on whether something outside of core gets 
exercised in the process of running some tests.  Whatever the case, I doubt 
that the tests would generate a simple yes/no, but rather a score.  A 
discussion of one of the subsets of capabilities for Nova might start with the 
capabilities highlighted on this page:
https://wiki.openstack.org/wiki/HypervisorSupportMatrix

The test suite would need to exercise the capabilities in these sorts of 
matrices and might produce the A/B/C grades as the rest of the page elucidates.
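
Purely as an illustration of the "score, not a simple yes/no" idea, a toy
grader over per-capability results; the thresholds and capability names below
are invented, not taken from the support matrix:

    def grade(results):
        # results maps capability name -> bool (exercised successfully or not).
        if not results:
            return "N/A"
        ratio = sum(1 for passed in results.values() if passed) / len(results)
        if ratio >= 0.9:
            return "A"
        if ratio >= 0.7:
            return "B"
        return "C"

    print(grade({"attach-volume": True, "snapshot": True,
                 "pause": False, "resize": True}))  # 3/4 passed -> "B"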

--Rocky

Doug

[1] https://wiki.openstack.org/wiki/Governance/CoreDefinition



--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] vmware minesweeper

2014-02-05 Thread Ryan Hsu
Did you mean flags as in excluding tests? We have an exclude list but these 
bugs are intermittent problems that can affect tests at random. 

> On Feb 5, 2014, at 4:18 PM, "John Dickinson"  wrote:
> 
> 
>> On Feb 5, 2014, at 4:04 PM, Ryan Hsu  wrote:
>> 
>> Also, I have added a section noting crucial bugs/patches that are blocking 
>> Minesweeper.
> 
> 
> Can we just put flags around them and move on?
> 
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Comments on DSL

2014-02-05 Thread Renat Akhmerov
Dmitri,

Sure, no problem. Good considerations. I’m now in the process of reviewing your 
notes.

Renat Akhmerov
@ Mirantis Inc.


On 04 Feb 2014, at 09:05, Dmitri Zimine  wrote:

> Following up from yesterday's community meeting
> 
> I am still catching up with the project,  still miss a lot of context. Put my 
> questions and comments on DSL definition: 
> https://etherpad.openstack.org/p/mistral-dsl-discussion
> 
> Let's review, and thanks for helping us get up to speed. 
> 
> DZ> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Justin Santa Barbara
Russell Bryant wrote:

> I'm saying use messaging as the means to implement discovery.

OK.  Sorry that I didn't get this before.

>> 1) Marconi isn't widely deployed
>
> Yet.
>
> I think we need to look to the future and decide on the right solution
> to the problem.

Agreed 100%.  I actually believe this _is_ the correct long-term
solution.  The fact that it doesn't depend on long-term roadmaps for
other projects is merely a nice bonus.

>> 2) There is no easy way for a node to discover Marconi, even if it was 
>> deployed.
>
> That's what the Keystone service catalog is for.

Agreed.  But, as far as I know, we have not defined how an instance
reaches the Keystone service catalog.  Probably, we would need to
expose the Keystone endpoint in the metadata.  (And, yes, we should do
that too, but it doesn't really matter until we solve #3...)

>> 3) There is no easy way for a node to authenticate to Marconi, even if
>> we could discover it
>
> huh?
>
> The whole point of Marconi is to allow instances to have a messaging
> service available to them to use.  Of course they can auth to it.

As far as I know, we haven't defined any way for an instance to get
credentials to use.  The only approach that I know of is that the
end-user puts their credentials into the metadata.   But we don't have
particularly fine-grained roles, so I can't see anyone wanting that in
production!

>> I absolutely think we should fix each of those obstacles, and I'm sure
>> we will eventually.  But in the meantime, let's get this into
>> Icehouse!
>
> NACK.

Well there's no need to shout :-)

I understand the idea that everything in OpenStack should work
together:  I am a big proponent of it.  However, this blueprint is a
nice self-contained solution that solves a real problem today.  The
alternative Marconi-based approach is not only years away from
public-cloud deployment, but will be more complicated for the end
user.  Have you ever tried defining IAM roles on EC2? - yuk!

Even once we reach the happy day where we have Marconi everywhere,
pub-sub queues, IAM, Instance Roles, and Keystone auto-discovery; even
then end-users would still prefer the "it just works" result this
blueprint will provide.  As such we're not duplicating functionality,
and we could have discovery in June, not in Juno (or - realistically -
M).

So: Is this a permanent no, or just a not-in-Icehouse no?

Justin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] vmware minesweeper

2014-02-05 Thread John Dickinson

On Feb 5, 2014, at 4:04 PM, Ryan Hsu  wrote:

> Also, I have added a section noting crucial bugs/patches that are blocking 
> Minesweeper.


Can we just put flags around them and move on?





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] vmware minesweeper

2014-02-05 Thread Ryan Hsu
Also, I have added a section noting crucial bugs/patches that are blocking 
Minesweeper.

Ryan

On Feb 5, 2014, at 2:03 PM, Shawn Hartsock  wrote:

> FYI:
> 
> We're keeping usage notes on VMware Minesweeper here:
>
> https://wiki.openstack.org/wiki/NovaVMware/Minesweeper
> 
> Status updates appear on this page:
>
> https://wiki.openstack.org/wiki/NovaVMware/Minesweeper/Status
> 
> -- 
> # Shawn.Hartsock
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [group-policy] Canceling tomorrow's meeting

2014-02-05 Thread siliconloons
Due to the fact many people involved are traveling for the OpenDaylight Summit, 
I am canceling tomorrow's meeting. We will resume next week.

Thanks,
Kyle

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Monty Taylor

On 01/27/2014 11:02 AM, Day, Phil wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 24 January 2014 21:09
To: openstack-dev
Subject: Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances
through metadata service

Excerpts from Justin Santa Barbara's message of 2014-01-24 12:29:49 -0800:

Clint Byrum  wrote:



Heat has been working hard to be able to do per-instance limited
access in Keystone for a while. A trust might work just fine for what you

want.




I wasn't actually aware of the progress on trusts.  It would be
helpful except (1) it is more work to have to create a separate trust
(it is even more painful to do so with IAM) and (2) it doesn't look
like we can yet lock-down these delegations as much as people would
probably want.  I think IAM is the end-game in terms of the model that
people actually want, and it ends up being incredibly complex.
Delegation is very useful (particularly because clusters could
auto-scale themselves), but I'd love to get an easier solution for the
peer discovery problem than where delegation ends up.

Are you hesitant to just use Heat? This is exactly what it is supposed

to do.. make a bunch of API calls and expose the results to
instances for use in configuration.



If you're just hesitant to use a declarative templating language, I
totally understand. The auto-scaling minded people are also feeling
this way. You could join them in the quest to create an imperative
cluster-making API for Heat.



I don't want to _depend_ on Heat.  My hope is that we can just launch
3 instances with the Cassandra image, and get a Cassandra cluster.  It
might be that we want Heat to auto-scale that cluster, Ceilometer to
figure out when to scale it, Neutron to isolate it, etc but I think we
can solve the basic discovery problem cleanly without tying in all the other

services.

  Heat's value-add doesn't come from solving this problem!



I suppose we disagree on this fundamental point then.

Heat's value-add really does come from solving this exact problem. It
provides a layer above all of the other services to facilitate expression of
higher level concepts. Nova exposes a primitive API, where as Heat is meant
to have a more logical expression of the user's intentions. That includes
exposure of details of one resource to another (not just compute, swift
containers, load balancers, volumes, images, etc).



The main problem I see with using heat is that it seems to depend on all instances 
having network access to the heat server, and I'm not sure how that would work 
for a Neutron VPN network. This is already solved for the Metadata server 
because the Neutron proxy already provides secure access.


That sounds like an integration issue we should fix. (regardless of 
whether it makes Justin's life any better) If we can't use heat in some 
situations because neutron doesn't know how to securely proxy to its 
metadata service ... that's kinda yuck.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Russell Bryant
On 02/05/2014 04:45 PM, Justin Santa Barbara wrote:
> Russell Bryant  wrote:
>> So, it seems that at the root of this, you're looking for a
>> cloud-compatible way for instances to message each other.
> 
> No: discovery of peers, not messaging.  After discovery, communication

I'm saying use messaging as the means to implement discovery.

> between nodes will then be done directly e.g. over TCP.  Examples of
> services that work using this model:  Elasticsearch, JBoss Data Grid,
> anything using JGroups, the next version of Zookeeper, etc.  The
> instances just need some way to find each other; a nice way to think
> of this is as a replacement for multicast-discovery on the cloud.
> 
> All these services then switch to direct messaging, because using an
> intermediate service introduces too much latency.
> 
> With this blueprint though, we could build and run a great backend for
> Marconi, using OOO.
> 
>> I really don't see the metadata API as the appropriate place for that.
> 
> But I presume you're OK with it for discovery? :-)
> 
>> How about using Marconi here?  If not, what's missing from Marconi's API
>> to solve your messaging use case to allow instances to discover each other?
> 
> Well, again: discovery, so Marconi isn't the natural fit it may at
> first appear.  Not sure if Marconi supports 'broadcast' queues (that
> would be the missing piece if it doesn't).  But, even if we could
> abuse a Marconi queue for this:
> 
> 1) Marconi isn't widely deployed

Yet.

I think we need to look to the future and decide on the right solution
to the problem.

> 2) There is no easy way for a node to discover Marconi, even if it was 
> deployed.

That's what the Keystone service catalog is for.

> 3) There is no easy way for a node to authenticate to Marconi, even if
> we could discover it

huh?

The whole point of Marconi is to allow instances to have a messaging
service available to them to use.  Of course they can auth to it.
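
For reference, the "service catalog" point works roughly like the hedged
sketch below: authenticate to Keystone, then read the catalog returned with
the token to locate an endpoint. The Keystone URL, credentials, and the
'queuing' service type are illustrative assumptions only.

    import requests

    KEYSTONE = "http://keystone.example.com:5000/v2.0"  # assumed endpoint
    payload = {"auth": {"tenantName": "demo",
                        "passwordCredentials": {"username": "demo",
                                                "password": "secret"}}}

    resp = requests.post(KEYSTONE + "/tokens", json=payload)
    resp.raise_for_status()
    access = resp.json()["access"]
    token = access["token"]["id"]

    # Pick out the public endpoints for the messaging service, if registered.
    endpoints = [ep["publicURL"]
                 for svc in access["serviceCatalog"]
                 if svc["type"] == "queuing"
                 for ep in svc["endpoints"]]
    print(token, endpoints)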

> 
> I absolutely think we should fix each of those obstacles, and I'm sure
> we will eventually.  But in the meantime, let's get this into
> Icehouse!

NACK.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Monty Taylor

On 02/05/2014 10:38 PM, Sean Dague wrote:

On 02/06/2014 06:31 AM, Russell Bryant wrote:

On 02/05/2014 02:31 PM, Doug Hellmann wrote:




On Wed, Feb 5, 2014 at 1:24 PM, Russell Bryant  wrote:

 Greetings,

 In the TC we have been going through a process to better define our
 requirements for incubation and graduation to being an integrated
 project.  The current version can be found in the governance repo:

 
http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements

 Is it time that we do an analysis of the existing integrated projects
 against the requirements we have set?  If not now, when?

 Perhaps we should start putting each project on the TC agenda for a
 review of its current standing.  For any gaps, I think we should set a
 specific timeframe for when we expect these gaps to be filled.

 Thoughts?


I like the idea of starting this soon, so projects can prioritize the
work during the next cycle and have time to plan to discuss any related
issues at the summit. Setting a deadline for finishing may depend on the
nature and size of the gaps, but it seems fair to set a deadline for
*starting* the work.


Well, I think in all cases the work should start ASAP.

We could set the deadline for when we expect it to be finished on a case
by case basis, though.


First, +1 on doing these kinds of reviews. I think as we've been
applying the rules to new projects, we need to validate that they are
sane by applying them to existing projects.

My feeling is that we've been evolving these new requirements during
Icehouse, and it's fair to say that all existing integrated projects
need to be up to snuff by Juno, otherwise we take a project back to
incubating status.

I think it will be really good to do some gap analysis here and figure
out where we think we have holes in our existing integrated projects.
Because realistically I think we're going to find a number of projects
that don't meet our current bar, and we'll need to come up with a way to
get them in sync.

 From a gating perspective, I think a bunch of our issues are based on
the fact that as the number of moving parts in OpenStack expands, our
tolerance for any particular part not being up to par has to decrease,
because the number of ways a badly integrated component can impact the
OpenStack whole is really large.


+100


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread Pat Bredenberg

On 02/ 5/14 03:24 PM, John Griffith wrote:

On Wed, Feb 5, 2014 at 3:09 PM, Jay S Bryant  wrote:

Joe,

Ah!  So, those aren't for Cinder Volume but for nova-volume.  Ok, so there
isn't really a bug then.


Yep, this is left over from when volumes were in nova.
Thank you all for your comments.  After Jay's initial response, I 
did file bug 1276828: 
https://bugs.launchpad.net/python-novaclient/+bug/1276828.  Joe Gordon's 
comment made it sound like a bug is still present.  But if that's no 
longer the case, please close that bug as you see fit.  Minimally, I see 
this as something that could be addressed in documentation, if it hasn't 
already.
I'll step back and let you decide the appropriate action with the 
bug.  Thank you again.


Sincerely,
Pat



Sorry for speaking too quickly.  Thanks for the info!


Jay S. Bryant
IBM Cinder Subject Matter Expert&   Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From: Joe Gordon
To: "OpenStack Development Mailing List (not for usage questions)",
Date: 02/05/2014 04:03 PM
Subject: Re: [openstack-dev] Grizzly volume quotas




On Wed, Feb 5, 2014 at 1:21 PM, Jay S Bryant  wrote:

Pat,

I see the same behavior on an Icehouse level install.  So, I think you may
have found a bug.

So the bug here isn't what you expect.

First a bit of background.

* python-novaclient isn't part of the integrated release and needs to
support most releases (not just the most recent).
* python-novaclient doesn't have any mechanism to detect what commands
a cloud supports and hide the other commands  [This is the bug].

So nova client needs to support nova-volume, which is why we still
have the volume quota options.


I would open the bug to python-novaclient to start with, but it may end up
coming back to Cinder.


Jay S. Bryant
IBM Cinder Subject Matter Expert&   Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From: Pat Bredenberg
To: openstack-dev@lists.openstack.org,
Date: 02/05/2014 03:05 PM
Subject: [openstack-dev] Grizzly volume quotas




Dear all,

 I'm part of the team bringing OpenStack to Solaris and am confused
about how volume quotas appear according to nova(1).  We're using
Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other
configuration information you need.  The raw data itself is available
here: http://paste.openstack.org/show/62667/.
 Is it a bug that "volumes" appears as a configurable quota via
nova(1), according to its help menu?  I'll apologize in advance if this
has already been documented elsewhere and/or addressed in Havana or
Icehouse.  I searched but didn't see it mentioned.  If it's a bug that
has yet to be filed and should be addressed, please let me know and I'll
gladly file the bug.  Otherwise, I'll chalk it up as a learning
experience.  Your guidance is greatly appreciated.

Very respectfully,
Pat Bredenberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][vmware] VMwareAPI sub-team status 2014-02-05

2014-02-05 Thread Shawn Hartsock
sub-team status tracked here:
https://etherpad.openstack.org/p/vmware-subteam-icehouse

Many blueprints were retargeted from icehouse-3 to Juno. Full update
in the etherpad for the retargeted reviews. For icehouse so far we
still have targeted for i-3:

Nova
2. https://blueprints.launchpad.net/nova/+spec/vmware-iso-boot - gary
3. https://blueprints.launchpad.net/nova/+spec/vmware-vsan-support - vui
4. https://blueprints.launchpad.net/nova/+spec/autowsdl-repair - hartsocks
5. https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
- Yaguang Tang
6. https://blueprints.launchpad.net/nova/+spec/vmware-hot-plug - gary

Glance
1. 
https://blueprints.launchpad.net/glance/+spec/vmware-datastore-storage-backend
- arnaud

Cinder
1. https://blueprints.launchpad.net/cinder/+spec/vmdk-storage-policy-volume-type
- subbu

Reviews!
Ordered by fitness for review:

== needs one more +2/approval ==
* https://review.openstack.org/69622
title: 'VMware: prevent race for vmdk deletion'
votes: +2:2, +1:7, -1:0, -2:0. +8 days in progress, revision:
12 is 0 days old
* https://review.openstack.org/67774
title: 'VMware snapshot-force support: consolidate backing'
votes: +2:1, +1:2, -1:0, -2:0. +16 days in progress, revision:
3 is 0 days old
* https://review.openstack.org/61931
title: 'vmware: Storage policy based volume placement.'
votes: +2:1, +1:2, -1:0, -2:0. +54 days in progress, revision:
16 is 5 days old
* https://review.openstack.org/70858
title: 'Rename Neutron core/service plugins for VMware NSX'
votes: +2:1, +1:9, -1:0, -2:0. +2 days in progress, revision:
4 is 0 days old
* https://review.openstack.org/66373
title: 'Add device bus and type to virt attach_volume call'
votes: +2:2, +1:3, -1:0, -2:0. +23 days in progress, revision:
10 is 0 days old

== ready for core ==
* https://review.openstack.org/65306
title: 'VMware: fix race for datastore directory existence'
votes: +2:0, +1:5, -1:0, -2:0. +29 days in progress, revision:
19 is 0 days old

Many more reviews are up (over 20), but these need sub-team consensus
before I can promote them to "ready for core" status. We're organizing a
VMware sub-team "review day" to get everyone reviewing consistently.
Naturally, any core reviewer guidance would be appreciated, since much of
this work involves anticipating what core reviewers will look for in a
patch. A full patch listing is on the etherpad mentioned earlier; the list
is dynamic, so it changes frequently.


== Meeting info: ==

No meeting next week. Next meeting February 19th.

* https://wiki.openstack.org/wiki/Meetings/VMwareAPI
* We hang out in #openstack-vmware if you need to chat & it's not
worth spamming the whole list

Happy stacking!

-- 
# Shawn.Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Sean Dague
On 02/06/2014 06:31 AM, Russell Bryant wrote:
> On 02/05/2014 02:31 PM, Doug Hellmann wrote:
>>
>>
>>
>> On Wed, Feb 5, 2014 at 1:24 PM, Russell Bryant > > wrote:
>>
>> Greetings,
>>
>> In the TC we have been going through a process to better define our
>> requirements for incubation and graduation to being an integrated
>> project.  The current version can be found in the governance repo:
>>
>> 
>> http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements
>>
>> Is it time that we do an analysis of the existing integrated projects
>> against the requirements we have set?  If not now, when?
>>
>> Perhaps we should start putting each project on the TC agenda for a
>> review of its current standing.  For any gaps, I think we should set a
>> specific timeframe for when we expect these gaps to be filled.
>>
>> Thoughts?
>>
>>
>> I like the idea of starting this soon, so projects can prioritize the
>> work during the next cycle and have time to plan to discuss any related
>> issues at the summit. Setting a deadline for finishing may depend on the
>> nature and size of the gaps, but it seems fair to set a deadline for
>> *starting* the work.
> 
> Well, I think in all cases the work should start ASAP.
> 
> We could set the deadline for when we expect it to be finished on a case
> by case basis, though.

First, +1 on doing these kinds of reviews. I think as we've been
applying the rules to new projects, we need to validate that they are
sane by applying them to existing projects.

My feeling is that we've been evolving these new requirements during
Icehouse, and it's fair to say that all existing integrated projects
need to be up to snuff by Juno; otherwise, we take a project back to
incubating status.

I think it will be really good to do some gap analysis here and figure
out where we think we have holes in our existing integrated projects.
Because realistically I think we're going to find a number of projects
that don't meet our current bar, and we'll need to come up with a way to
get them in sync.

From a gating perspective, I think a bunch of our issues are based on
the fact that as the number of moving parts in OpenStack expands, our
tolerance for any particular part not being up to par has to decrease,
because the number of ways a badly integrated component can impact the
OpenStack whole is really large.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Trove] Replication Contract Verbiage

2014-02-05 Thread Daniel Salinas
https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API#REPLICATION

I have updated the wiki page to reflect the current proposal for
replication verbiage with some explanation of the choices.  I would like to
open discussion here regarding that verbiage.  Without completely
duplicating everything I just wrote in the wiki here are the proposed words
that could be used to describe replication between two datastore instances
of the same type.  Please take a moment to consider them and let me know
what you think.  I welcome all feedback.

replicates_from:  This term will be used in an instance that is a slave of
another instance. It is a clear indicator that it is a slave of another
instance.

replicates_to: This term will be used in an instance that has slaves of
itself. It is a clear indicator that it is a master of one or more
instances.

writable: This term will be used in an instance to indicate whether it is
intended to be used for writes. As replication is used commonly to scale
read operations it is very common to have a read-only slave in many
datastore types. It is beneficial to the user to be able to see this
information when viewing the instance details via the api.

The intention here is to:
1.  have a clearly defined replication contract between instances.
2.  allow users to create a topology map simply by querying the api for
details of instances linked in the replication contracts
3.  allow the greatest level of flexibility for users when replicating
their data so that Trove doesn't prescribe how they should make use of
replication.

I also think there is value in documenting common replication topologies
per datastore type with example replication contracts and/or steps to
recreate them in our api documentation.  There are currently no examples of
this yet.

e.g. To create multi-master replication in mysql...
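
Purely as an illustration of how those fields might surface in instance
details, here is a hedged sketch in Python; the field names follow the
proposal above, while the IDs and overall payload shape are invented:

# Illustrative only: assumed IDs and payload shape; field names from the
# proposal above (replicates_from, replicates_to, writable).
master = {
    "id": "inst-master-001",
    "datastore": {"type": "mysql"},
    "writable": True,
    "replicates_to": ["inst-slave-001", "inst-slave-002"],
}

read_replica = {
    "id": "inst-slave-001",
    "datastore": {"type": "mysql"},
    "writable": False,  # common read-only slave used to scale reads
    "replicates_from": "inst-master-001",
}

# A client could walk these links across instances to build a topology map.
print(master["replicates_to"], read_replica["replicates_from"])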

As previously stated I welcome all feedback and would love input.

Regards,

Daniel Salinas
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] team meeting Feb 6 1800 UTC

2014-02-05 Thread Sergey Lukjanov
Hi folks,

We'll be having the Savanna team meeting as usual in #openstack-meeting-alt
channel.

Agenda:
https://wiki.openstack.org/wiki/Meetings/SavannaAgenda#Agenda_for_February.2C_06

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Savanna+Meeting&iso=20140206T18

P.S. The main topic will be Savanna project renaming.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic] January review redux

2014-02-05 Thread Ruby Loo
From: Devananda van der Veen 
mailto:devananda@gmail.com>>

So, I'd like to nominate the following two additions to the ironic-core team:

Max Lobur
https://review.openstack.org/#/q/reviewer:mlobur%2540mirantis.com+project:openstack/ironic,n,z

Roman Prykhodchenko
https://review.openstack.org/#/q/reviewer:rprikhodchenko%2540mirantis.com+project:openstack/ironic,n,z


Max and Roman are great to work with.

+1 to both!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [savanna] async devstack/tempest gating

2014-02-05 Thread Sergey Lukjanov
Hi folks,

I'd like to announce that we now have the following devstack-gate jobs with
Savanna enabled and very basic Tempest tests running for Savanna. Each of
these jobs runs migrations to set up the DB and sanity-checks the REST API.

We have 3 types of jobs running:

* Nova-Network + MySQL;
* Neutron + MySQL;
* Nova-Network + PostgreSQL.

And for which projects we're running them (all 3 jobs):

* voting jobs for Savanna repo;
* non-voting for Tempest;
* non-voting for DevStack;
* non-voting for devstack-gate.

P.S. Thanks to the infra folks for the awesome infra ;)

Thanks.

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread John Griffith
On Wed, Feb 5, 2014 at 3:09 PM, Jay S Bryant  wrote:
> Joe,
>
> Ah!  So, those aren't for Cinder Volume but for nova-volume.  Ok, so there
> isn't really a bug then.
>

Yep, this is left over from when volumes were in nova.

> Sorry for speaking too quickly.  Thanks for the info!
>
>
> Jay S. Bryant
>IBM Cinder Subject Matter Expert  &  Cinder Core Member
> Department 7YLA, Building 015-2, Office E125, Rochester, MN
> Telephone: (507) 253-4270, FAX (507) 253-6410
> TIE Line: 553-4270
> E-Mail:  jsbry...@us.ibm.com
> 
> All the world's a stage and most of us are desperately unrehearsed.
>   -- Sean O'Casey
> 
>
>
>
> From:Joe Gordon 
> To:"OpenStack Development Mailing List (not for usage questions)"
> ,
> Date:02/05/2014 04:03 PM
> Subject:Re: [openstack-dev] Grizzly volume quotas
> 
>
>
>
> On Wed, Feb 5, 2014 at 1:21 PM, Jay S Bryant  wrote:
>> Pat,
>>
>> I see the same behavior on an Icehouse level install.  So, I think you may
>> have found a bug.
>
> So the bug here isn't what you expect.
>
> First a bit of background.
>
> * python-novaclient isn't part of the integrated release and needs to
> support most releases (not just the most recent).
> * python-novaclient doesn't have any mechanism to detect what commands
> a cloud supports and hide the other commands  [This is the bug].
>
> So nova client needs to support nova-volume, which is why we still
> have the volume quota options.
>
>>
>> I would open the bug to python-novaclient to start with, but it may end up
>> coming back to Cinder.
>>
>>
>> Jay S. Bryant
>>IBM Cinder Subject Matter Expert  &  Cinder Core Member
>> Department 7YLA, Building 015-2, Office E125, Rochester, MN
>> Telephone: (507) 253-4270, FAX (507) 253-6410
>> TIE Line: 553-4270
>> E-Mail:  jsbry...@us.ibm.com
>> 
>> All the world's a stage and most of us are desperately unrehearsed.
>>   -- Sean O'Casey
>> 
>>
>>
>>
>> From:Pat Bredenberg 
>> To:openstack-dev@lists.openstack.org,
>> Date:02/05/2014 03:05 PM
>> Subject:[openstack-dev] Grizzly volume quotas
>> 
>>
>>
>>
>> Dear all,
>>
>> I'm part of the team bringing OpenStack to Solaris and am confused
>> about how volume quotas appear according to nova(1).  We're using
>> Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other
>> configuration information you need.  The raw data itself is available
>> here: http://paste.openstack.org/show/62667/.
>> Is it a bug that "volumes" appears as a configurable quota via
>> nova(1), according to its help menu?  I'll apologize in advance if this
>> has already been documented elsewhere and/or addressed in Havana or
>> Icehouse.  I searched but didn't see it mentioned.  If it's a bug that
>> has yet to be filed and should be addressed, please let me know and I'll
>> gladly file the bug.  Otherwise, I'll chalk it up as a learning
>> experience.  Your guidance is greatly appreciated.
>>
>> Very respectfully,
>> Pat Bredenberg
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread Jay S Bryant
Joe,

Ah!  So, those aren't for Cinder Volume but for nova-volume.  Ok, so there 
isn't really a bug then.

Sorry for speaking too quickly.  Thanks for the info!


Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Joe Gordon 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   02/05/2014 04:03 PM
Subject:Re: [openstack-dev] Grizzly volume quotas



On Wed, Feb 5, 2014 at 1:21 PM, Jay S Bryant  wrote:
> Pat,
>
> I see the same behavior on an Icehouse level install.  So, I think you 
may
> have found a bug.

So the bug here isn't what you expect.

First a bit of background.

* python-novaclient isn't part of the integrated release and needs to
support most releases (not just the most recent).
* python-novaclient doesn't have any mechanism to detect what commands
a cloud supports and hide the other commands  [This is the bug].

So nova client needs to support nova-volume, which is why we still
have the volume quota options.

>
> I would open the bug to python-novaclient to start with, but it may end 
up
> coming back to Cinder.
>
>
> Jay S. Bryant
>IBM Cinder Subject Matter Expert  &  Cinder Core Member
> Department 7YLA, Building 015-2, Office E125, Rochester, MN
> Telephone: (507) 253-4270, FAX (507) 253-6410
> TIE Line: 553-4270
> E-Mail:  jsbry...@us.ibm.com
> 
> All the world's a stage and most of us are desperately unrehearsed.
>   -- Sean O'Casey
> 
>
>
>
> From:Pat Bredenberg 
> To:openstack-dev@lists.openstack.org,
> Date:02/05/2014 03:05 PM
> Subject:[openstack-dev] Grizzly volume quotas
> 
>
>
>
> Dear all,
>
> I'm part of the team bringing OpenStack to Solaris and am confused
> about how volume quotas appear according to nova(1).  We're using
> Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other
> configuration information you need.  The raw data itself is available
> here: http://paste.openstack.org/show/62667/.
> Is it a bug that "volumes" appears as a configurable quota via
> nova(1), according to its help menu?  I'll apologize in advance if this
> has already been documented elsewhere and/or addressed in Havana or
> Icehouse.  I searched but didn't see it mentioned.  If it's a bug that
> has yet to be filed and should be addressed, please let me know and I'll
> gladly file the bug.  Otherwise, I'll chalk it up as a learning
> experience.  Your guidance is greatly appreciated.
>
> Very respectfully,
> Pat Bredenberg
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] vmware minesweeper

2014-02-05 Thread Shawn Hartsock
FYI:

We're keeping usage notes on VMware Minesweeper here:
https://wiki.openstack.org/wiki/NovaVMware/Minesweeper

Status updates appear on this page:
https://wiki.openstack.org/wiki/NovaVMware/Minesweeper/Status

-- 
# Shawn.Hartsock

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread Joe Gordon
On Wed, Feb 5, 2014 at 1:21 PM, Jay S Bryant  wrote:
> Pat,
>
> I see the same behavior on an Icehouse level install.  So, I think you may
> have found a bug.

So the bug here isn't what you expect.

First a bit of background.

* python-novaclient isn't part of the integrated release and needs to
support most releases (not just the most recent).
* python-novaclient doesn't have any mechanism to detect what commands
a cloud supports and hide the other commands  [This is the bug].

So nova client needs to support nova-volume, which is why we still
have the volume quota options.
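
To make that concrete, here is a hedged sketch of what such detection could
look like: ask Nova for its extension list and only surface the volume-quota
commands when the legacy os-volumes extension is present. The endpoint, the
token, and the choice of the "os-volumes" alias are assumptions for
illustration; python-novaclient does not do anything like this today, which
is exactly the gap.

import requests

NOVA_ENDPOINT = "http://cloud.example.com:8774/v2/TENANT_ID"  # placeholder
TOKEN = "PLACEHOLDER_TOKEN"                                   # placeholder

# GET /v2/{tenant_id}/extensions lists the API extensions the cloud enables.
resp = requests.get(NOVA_ENDPOINT + "/extensions",
                    headers={"X-Auth-Token": TOKEN})
resp.raise_for_status()
aliases = set(ext["alias"] for ext in resp.json()["extensions"])

if "os-volumes" in aliases:
    print("Cloud still serves nova-volume; keep the volume quota commands")
else:
    print("Hide the volume quota commands; volumes live in Cinder here")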

>
> I would open the bug to python-novaclient to start with, but it may end up
> coming back to Cinder.
>
>
> Jay S. Bryant
>IBM Cinder Subject Matter Expert  &  Cinder Core Member
> Department 7YLA, Building 015-2, Office E125, Rochester, MN
> Telephone: (507) 253-4270, FAX (507) 253-6410
> TIE Line: 553-4270
> E-Mail:  jsbry...@us.ibm.com
> 
> All the world's a stage and most of us are desperately unrehearsed.
>   -- Sean O'Casey
> 
>
>
>
> From:Pat Bredenberg 
> To:openstack-dev@lists.openstack.org,
> Date:02/05/2014 03:05 PM
> Subject:[openstack-dev] Grizzly volume quotas
> 
>
>
>
> Dear all,
>
> I'm part of the team bringing OpenStack to Solaris and am confused
> about how volume quotas appear according to nova(1).  We're using
> Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other
> configuration information you need.  The raw data itself is available
> here: http://paste.openstack.org/show/62667/.
> Is it a bug that "volumes" appears as a configurable quota via
> nova(1), according to its help menu?  I'll apologize in advance if this
> has already been documented elsewhere and/or addressed in Havana or
> Icehouse.  I searched but didn't see it mentioned.  If it's a bug that
> has yet to be filed and should be addressed, please let me know and I'll
> gladly file the bug.  Otherwise, I'll chalk it up as a learning
> experience.  Your guidance is greatly appreciated.
>
> Very respectfully,
> Pat Bredenberg
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Justin Santa Barbara
Russell Bryant  wrote:
> So, it seems that at the root of this, you're looking for a
> cloud-compatible way for instances to message each other.

No: discovery of peers, not messaging.  After discovery, communication
between nodes will then be done directly e.g. over TCP.  Examples of
services that work using this model:  Elasticsearch, JBoss Data Grid,
anything using JGroups, the next version of Zookeeper, etc.  The
instances just need some way to find each other; a nice way to think
of this is as a replacement for multicast-discovery on the cloud.

All these services then switch to direct messaging, because using an
intermediate service introduces too much latency.
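
To make the discovery idea concrete, a speculative guest-side sketch: only
the 169.254.169.254 metadata address is standard today; the peers path and
payload below are assumptions about what the blueprint could expose, not an
existing Nova API.

import requests

# Hypothetical path; the blueprint would have to define the real one.
PEERS_URL = "http://169.254.169.254/openstack/latest/peers.json"

def discover_peers():
    resp = requests.get(PEERS_URL, timeout=2)
    resp.raise_for_status()
    # Assumed payload: a list of {"name": ..., "ip": ...} entries describing
    # the other instances in the same project/group.
    return resp.json()

for peer in discover_peers():
    print("found peer %s at %s" % (peer["name"], peer["ip"]))
    # After discovery the application talks to each peer directly, e.g. TCP.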

With this blueprint though, we could build and run a great backend for
Marconi, using OOO.

> I really don't see the metadata API as the appropriate place for that.

But I presume you're OK with it for discovery? :-)

> How about using Marconi here?  If not, what's missing from Marconi's API
> to solve your messaging use case to allow instances to discover each other?

Well, again: discovery, so Marconi isn't the natural fit it may at
first appear.  Not sure if Marconi supports 'broadcast' queues (that
would be the missing piece if it doesn't).  But, even if we could
abuse a Marconi queue for this:

1) Marconi isn't widely deployed
2) There is no easy way for a node to discover Marconi, even if it was deployed.
3) There is no easy way for a node to authenticate to Marconi, even if
we could discover it

I absolutely think we should fix each of those obstacles, and I'm sure
we will eventually.  But in the meantime, let's get this into
Icehouse!

Justin

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread Pat Bredenberg

Dear Jay,

On 02/ 5/14 02:21 PM, Jay S Bryant wrote:

Pat,

I see the same behavior on an Icehouse level install.  So, I think you 
may have found a bug.


I would open the bug to python-novaclient to start with, but it may 
end up coming back to Cinder.
Great.  Thanks very much for the quick reply and I'll get a bug 
written up ASAP.


Sincerely,
Pat


Jay S. Bryant
IBM Cinder Subject Matter Expert  &  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
  -- Sean O'Casey




From: Pat Bredenberg 
To: openstack-dev@lists.openstack.org,
Date: 02/05/2014 03:05 PM
Subject: [openstack-dev] Grizzly volume quotas




Dear all,

I'm part of the team bringing OpenStack to Solaris and am confused
about how volume quotas appear according to nova(1).  We're using
Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other
configuration information you need.  The raw data itself is available
here: http://paste.openstack.org/show/62667/.
Is it a bug that "volumes" appears as a configurable quota via
nova(1), according to its help menu?  I'll apologize in advance if this
has already been documented elsewhere and/or addressed in Havana or
Icehouse.  I searched but didn't see it mentioned.  If it's a bug that
has yet to be filed and should be addressed, please let me know and I'll
gladly file the bug.  Otherwise, I'll chalk it up as a learning
experience.  Your guidance is greatly appreciated.

Very respectfully,
Pat Bredenberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Russell Bryant
On 02/05/2014 02:31 PM, Doug Hellmann wrote:
> 
> 
> 
> On Wed, Feb 5, 2014 at 1:24 PM, Russell Bryant  > wrote:
> 
> Greetings,
> 
> In the TC we have been going through a process to better define our
> requirements for incubation and graduation to being an integrated
> project.  The current version can be found in the governance repo:
> 
> 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements
> 
> Is it time that we do an analysis of the existing integrated projects
> against the requirements we have set?  If not now, when?
> 
> Perhaps we should start putting each project on the TC agenda for a
> review of its current standing.  For any gaps, I think we should set a
> specific timeframe for when we expect these gaps to be filled.
> 
> Thoughts?
> 
> 
> I like the idea of starting this soon, so projects can prioritize the
> work during the next cycle and have time to plan to discuss any related
> issues at the summit. Setting a deadline for finishing may depend on the
> nature and size of the gaps, but it seems fair to set a deadline for
> *starting* the work.

Well, I think in all cases the work should start ASAP.

We could set the deadline for when we expect it to be finished on a case
by case basis, though.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Grizzly volume quotas

2014-02-05 Thread Jay S Bryant
Pat,

I see the same behavior on an Icehouse level install.  So, I think you may 
have found a bug.

I would open the bug to python-novaclient to start with, but it may end up 
coming back to Cinder.


Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Pat Bredenberg 
To: openstack-dev@lists.openstack.org, 
Date:   02/05/2014 03:05 PM
Subject:[openstack-dev] Grizzly volume quotas



Dear all,

 I'm part of the team bringing OpenStack to Solaris and am confused 
about how volume quotas appear according to nova(1).  We're using 
Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other 
configuration information you need.  The raw data itself is available 
here: http://paste.openstack.org/show/62667/.
 Is it a bug that "volumes" appears as a configurable quota via 
nova(1), according to its help menu?  I'll apologize in advance if this 
has already been documented elsewhere and/or addressed in Havana or 
Icehouse.  I searched but didn't see it mentioned.  If it's a bug that 
has yet to be filed and should be addressed, please let me know and I'll 
gladly file the bug.  Otherwise, I'll chalk it up as a learning 
experience.  Your guidance is greatly appreciated.

Very respectfully,
Pat Bredenberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] backporting database migrations to stable/havana

2014-02-05 Thread Miguel Angel Ajo Pelayo


- Original Message -
> From: "Ralf Haferkamp" 
> To: openstack-dev@lists.openstack.org
> Sent: Wednesday, February 5, 2014 12:47:24 PM
> Subject: Re: [openstack-dev] [Neutron] backporting database migrations
> to  stable/havana
> 
> Hi,
> 
> On Tue, Feb 04, 2014 at 12:36:16PM -0500, Miguel Angel Ajo Pelayo wrote:
> > 
> > 
> > Hi Ralf, I see we're on the same boat for this.
> > 
> >It seems that a database migration introduces complications
> > for future upgrades. It's not an easy path.
> > 
> >My aim when I started this backport was trying to scale out
> > neutron-server, starting several ones together. But I'm afraid
> > we would find more bugs like this requiring db migrations.
> > 
> >Have you actually tested running multiple servers in icehouse?,
> > I just didn't have the time, but it's in my roadmap.
> I actually ran into the bug in a single server setup. But that seems to
> happen
> pretty rarely.

Oops, really? Then it's worse than I thought; thank you for this feedback.

> 
> >If that fixes the problem, may be some heavier approach (like
> > table locking) could be used in the backport, without introducing
> > a new/conflicting migration.
> Hm, there seems to be no clean way to do table locking in sqlalchemy. At
> least I
> didn't find one.

I must admit I haven't looked at this yet; if we don't have table locking,
it's hard to think of a proper solution for similar problems.
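
For the MySQL case specifically, one heavy-handed workaround is an explicit
table lock issued as raw SQL through SQLAlchemy. A minimal sketch, assuming
a MySQL backend and a hypothetical agents table; SQLite has no equivalent,
which is part of the problem:

from sqlalchemy import create_engine, text

engine = create_engine("mysql://user:password@localhost/neutron")  # assumed DSN

conn = engine.connect()
try:
    conn.execute(text("LOCK TABLES agents WRITE"))  # MySQL-specific statement
    # ... perform the read-modify-write that must not race with other servers ...
finally:
    conn.execute(text("UNLOCK TABLES"))
    conn.close()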

>  
> > About the DB migration backport problem, the actual problem is:
> [..]
> > 1st step) fix E in icehouse to skip the real unique constraint insertion if
> > it does already exist:
> > 
> > havana   | icehouse
> >  |
> > A<-B<-C<-|--D<-*E*<-F
> >  
> > 2nd step) insert E2 in the middle of B and C to keep the icehouse first
> > reference happy:
> > 
> > havana  | icehouse
> > |
> > A<-B<-E2<-C<-|--D<-*E*<-F
> > 
> > What do you think?
> I agree, that would likely be the right fix. But as it seems there are some
> (more or less) strict rules about stable backports of migrations (which I
> understand as it can get really tricky). So a solution that doesn't require
> them would probabyl be preferable.


Yes, we must think about something else, but I'm afraid that either we manage
to get table locking, or we will need this DB backport.

The 2-step process seems correct to me, but we will need approval from the
community. I believe that a bug that breaks agent registration or listing is
*bad* enough. But anyway, while the gate is unstable there are more important
things.

I like what the nova guys do; we should definitely do the same at the end of
the Icehouse cycle: add a set of empty DB migrations which could be used for
this purpose.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Asynchrounous programming: replace eventlet with asyncio

2014-02-05 Thread Sean Dague
On 02/05/2014 09:44 PM, victor stinner wrote:
> Hi,
> 
> Thierry Carrez wrote:
>>> The problem is that the asyncio module was written for Python 3.3, whereas
>>> OpenStack is not fully Python 3 compatible (yet). To ease the transition I
>>> have ported asyncio to Python 2; it's the new Trollius project, which
>>> supports Python 2.6-3.4:
>>>https://bitbucket.org/enovance/trollius
>>
>> How much code from asyncio did you reuse ? How deep was the porting
>> effort ? Is the port maintainable as asyncio gets more bugfixes over time ?
> 
> Technically, Trollius is a branch of the Tulip project. I host the repository 
> on Bitbucket, whereas Tulip is hosted at code.google.com. I use "hg merge" to 
> retrieve last changes from Tulip into Trollius.
> 
> Differences between Trollius and Tulip show how much work has been done 
> between Python 2.6 and 3.3 :-) Some examples:
> 
> - classes must inherit from object in Python 2.6 to be "new-style" classes 
> (it's no longer needed in Python 3),
> - "{}".format() must be replaced with "{0}".format(),
> - IOError/OSError exceptions have been reworked and now have specialized 
> subclasses in Python 3.3 (I reimplemented them for Python 2.6),
> - etc.
> 
> But most of the code is still the same between Tulip and Trollius. In my 
> opinion, the major difference is that Tulip uses "yield from" whereas Trollius 
> uses "yield", which implies subtle differences in the module itself. You may 
> not notice them if you use Trollius, but the implementation is a little bit 
> different because of that (differences are limited to the asyncio/tasks.py 
> file).
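
As a concrete illustration of the mechanical Python 2.6 changes listed above
(the names here are invented for the example, not taken from the Trollius
source):

# Python 3 allows implicit new-style classes and auto-numbered format fields:
#     class EventLoopPolicy: ...
#     "listening on {}:{}".format(host, port)
#
# The Python 2.6-compatible spelling a backport has to use instead:
class EventLoopPolicy(object):  # explicit 'object' base for a new-style class
    pass

message = "listening on {0}:{1}".format("127.0.0.1", 8080)  # numbered fields
print(message)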
> 
> I'm working actively on Tulip (asyncio). We are fixing last bugs before the 
> release of Python 3.4, scheduled for March 16, 2014. So I track changes in 
> Tulip and I will "port" them into Trollius.

First, very cool!

This is very promising work. It might be really interesting to figure
out if there was a smaller project inside of OpenStack that could be
test ported over to this (even as a stackforge project), and something
we could run in the gate.

Our experience is the OpenStack CI system catches bugs in libraries and
underlying components that no one else catches, and definitely getting
something running workloads hard on this might be helpful in maturing
Trollius. Basically coevolve it with a piece of OpenStack to know that
it can actually work on OpenStack and be a viable path forward.

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



signature.asc
Description: OpenPGP digital signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Grizzly volume quotas

2014-02-05 Thread Pat Bredenberg

Dear all,

I'm part of the team bringing OpenStack to Solaris and am confused 
about how volume quotas appear according to nova(1).  We're using 
Grizzly 2013.1.4 for both Nova and Cinder; please let me know what other 
configuration information you need.  The raw data itself is available 
here: http://paste.openstack.org/show/62667/.
Is it a bug that "volumes" appears as a configurable quota via 
nova(1), according to its help menu?  I'll apologize in advance if this 
has already been documented elsewhere and/or addressed in Havana or 
Icehouse.  I searched but didn't see it mentioned.  If it's a bug that 
has yet to be filed and should be addressed, please let me know and I'll 
gladly file the bug.  Otherwise, I'll chalk it up as a learning 
experience.  Your guidance is greatly appreciated.


Very respectfully,
Pat Bredenberg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Poll to select dates for next Solum Summit

2014-02-05 Thread Adrian Otto
Hello,

If you would like to attend the next Solum Summit event in March (in person or 
remotely) please indicate your date preferences at the following poll:

http://doodle.com/y7we6cpw9cakfqcv

The event will be 2 days long, and is tentatively planned for a venue in 
Raleigh, North Carolina to be hosted by Red Hat. Please take a moment now to 
vote so we can find the best two adjacent days.

Our first Solum Summit was held at Rackspace in San Francisco in November 2013, 
and was attended by about 35 Stackers, and a number of remote participants as 
well. There were many attendees who provided feedback that they liked the 
format of this event because it allowed a laser focus on one project and 
allowed for flexibility for topics that may need more than one hour of 
interactive discussion.

Thanks,

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 1:25 PM, Ben Nemec  wrote:

>  On 2014-02-05 10:58, Doug Hellmann wrote:
>
>
>
>
> On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec  wrote:
>
>>   On 2014-02-05 09:05, Doug Hellmann wrote:
>>
>>
>> On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec  wrote:
>>
>>>  On 2014-01-08 12:14, Doug Hellmann wrote:
>>>
>>>
>>>
>>> On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec wrote:
>>>
 On 2014-01-08 11:16, Sean Dague wrote:

> On 01/08/2014 12:06 PM, Doug Hellmann wrote:
> 
>
>> Yeah, that's what made me start thinking oslo.sphinx should be called
>> something else.
>>
>> Sean, how strongly do you feel about not installing oslo.sphinx in
>> devstack? I see your point, I'm just looking for alternatives to the
>> hassle of renaming oslo.sphinx.
>
>
> Doing the git thing is definitely not the right thing. But I guess I
> got
> lost somewhere along the way about what the actual problem is. Can
> someone write that up concisely? With all the things that have been
> tried/failed, why certain things fail, etc.

  The problem seems to be when we pip install -e oslo.config on the
 system, then pip install oslo.sphinx in a venv.  oslo.config is unavailable
 in the venv, apparently because the namespace package for o.s causes the
 egg-link for o.c to be ignored.  Pretty much every other combination I've
 tried (regular pip install of both, or pip install -e of both, regardless
 of where they are) works fine, but there seem to be other issues with all
 of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
 used for gating, and we can't pip install -e oslo.sphinx because it's not a
 runtime dep so it doesn't belong in the gate.  Changing the toplevel
 package for oslo.sphinx was also mentioned, but has obvious drawbacks too.

 I think that about covers what I know so far.
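
For readers less familiar with the mechanism, the root of the conflict is the
shared 'oslo' namespace package. Below is a typical setuptools-style
declaration of the kind the oslo libraries used at the time (illustrative;
the exact file contents may have differed):

# Contents of oslo/__init__.py shipped by each distribution that claims
# the 'oslo' namespace (e.g. oslo.config and oslo.sphinx):
__import__('pkg_resources').declare_namespace(__name__)

# Because both packages register the same namespace, pkg_resources merges
# their install locations; when one is an editable (egg-link) install on the
# system and the other is installed in a venv, the egg-link path can end up
# being ignored, which is the failure described above.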
>>>
>>>  Here's a link dstufft provided to the pip bug tracking this problem:
>>> https://github.com/pypa/pip/issues/3
>>> Doug
>>>
>>>   This just bit me again trying to run unit tests against a fresh Nova
>>> tree. I don't think it's just me either - Matt Riedemann said he has
>>> been disabling site-packages in tox.ini for local tox runs.  We really need
>>> to do _something_ about this, even if it's just disabling site-packages by
>>> default in tox.ini for the affected projects.  A different option would be
>>> nice, but based on our previous discussion I'm not sure we're going to find
>>> one.
>>> Thoughts?
>>>
>>  Is the problem isolated to oslo.sphinx? That is, do we end up with any
>> configurations where we have 2 oslo libraries installed in different modes
>> (development and "regular") where one of those 2 libraries is not
>> oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename
>> that to move it out of the namespace package.
>>
>>oslo.sphinx is the only one that has triggered this for me so far.  I
>> think it's less likely to happen with the others because they tend to be
>> runtime dependencies so they get installed in devstack, whereas oslo.sphinx
>> doesn't because it's a build dep (AIUI anyway).
>>
>
>  That's pretty much what I expected.
>
> Can we get a volunteer to work on renaming oslo.sphinx?
>
>
>   I'm winding down on the parallel testing work so I could look at this
> next.  I don't know exactly what is going to be involved in the rename
> though.
>
> We also need to decide what we're going to call it.  I haven't come up
> with any suggestions that I'm particularly in love with so far. :-/
>

Yeah, I haven't come up with anything good, either.

oslosphinx?

openstacksphinx?

We will need to:

- rename the git repository -- we have some other renames planned for this
Friday, so we could possibly take care of that one this week
- make sure the metadata file for packaging the new library is correct in
the new repo
- prepare a release under the new name so it ends up on PyPI
- update the sphinx conf.py in all consuming projects to use the new name
(a sketch of that change follows after this list), and change their
test-requirements.txt to refer to the new name (or finally add a
doc-requirements.txt for doc jobs)
- remove oslo.sphinx from pypi so no one uses it accidentally
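
A hedged sketch of the conf.py step, assuming the current entry is
'oslo.sphinx' and using 'oslosphinx' only as one of the candidate names
floated above, not a decided name:

# doc/source/conf.py in a consuming project (illustrative only):
extensions = [
    'sphinx.ext.autodoc',
    # 'oslo.sphinx',      # old namespaced name, removed
    'oslosphinx',         # candidate new name; not yet decided
]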

Doug



>
> -Ben
>
>
> Doug
>
>>
>>
>>   Doug
>>
>>>   -Ben
>>>
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 12:05 PM, Russell Bryant  wrote:

> On 02/05/2014 11:22 AM, Thierry Carrez wrote:
> > (This email is mostly directed to PTLs for programs that include one
> > integrated project)
> >
> > The DefCore subcommittee from the OpenStack board of directors asked the
> > Technical Committee yesterday about which code sections in each
> > integrated project should be "designated sections" in the sense of [1]
> > (code you're actually needed to run or include to be allowed to use the
> > trademark). That determines where you can run alternate code (think:
> > substitute your own private hypervisor driver) and still be able to call
> > the result openstack.
> >
> > [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
> >
> > PTLs and their teams are obviously the best placed to define this, so it
> > seems like the process should be: PTLs propose designated sections to
> > the TC, which blesses them, combines them and forwards the result to the
> > DefCore committee. We could certainly leverage part of the governance
> > repo to make sure the lists are kept up to date.
> >
> > Comments, thoughts ?
> >
>
> The process you suggest is what I would prefer.  (PTLs writing proposals
> for TC to approve)
>
> Using the governance repo makes sense as a means for the PTLs to post
> their proposals for review and approval of the TC.
>

+1

Who gets final say if there's strong disagreement between a PTL and the
> TC?  Hopefully this won't matter, but it may be useful to go ahead and
> clear this up front.
>

The Board has some say in this, too, right? The proposal [1] is for a set
of tests to be proposed and for the Board to approve (section 8).

What is the relationship between that test suite and the designated core
areas? It seems that anything being tested would need to be designated as
core. What about the inverse?

Doug

[1] https://wiki.openstack.org/wiki/Governance/CoreDefinition



>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Chris Behrens

On Feb 5, 2014, at 3:38 AM, Vishvananda Ishaya  wrote:

> 
> On Feb 5, 2014, at 12:27 AM, Chris Behrens  wrote:
> 
>> 1) domain ‘a’ cannot see instances (or resources in general) in domain ‘b’. 
>> It doesn’t matter if domain ‘a’ and domain ‘b’ share the same tenant ID. If 
>> you act with the API on behalf of domain ‘a’, you cannot see your instances 
>> in domain ‘b’.
>> 2) Flavors per domain. domain ‘a’ can have different flavors than domain ‘b’.
> 
> I hadn’t thought of this one, but we do have per-project flavors so I think 
> this could work in a project hierarchy world. We might have to rethink the 
> idea of global flavors and just stick them in the top-level project. That way 
> the flavors could be removed. The flavor list would have to be composed by 
> matching all parent projects. It might make sense to have an option for 
> flavors to be “hidden" in sub projects somehow as well. In other words if 
> orgb wants to delete a flavor from the global list they could do it by hiding 
> the flavor.
> 
> Definitely some things to be thought about here.

Yeah, it's completely doable in some way. The per-project flavors are a good
start.

> 
>> 3) Images per domain. domain ‘a’ could see different images than domain ‘b’.
> 
> Yes this would require similar hierarchical support in glance.

Yup :)

> 
>> 4) Quotas and quota limits per domain. your instances in domain ‘a’ don’t 
>> count against quotas in domain ‘b’.
> 
> Yes we’ve talked about quotas for sure. This is definitely needed.

Also: not really related to this, but if we're making considerable quota 
changes, I would also like to see the option for separate quotas _per flavor_, 
even. :)

> 
>> 5) Go as far as using different config values depending on what domain 
>> you’re using. This one is fun. :)
> 
> Curious for some examples here.

With the idea that I want to be able to provide multiple virtual clouds within 
1 big cloud, these virtual clouds may desire different config options. I'll 
pick one that could make sense:

# When set, compute API will consider duplicate hostnames
# invalid within the specified scope, regardless of case.
# Should be empty, "project" or "global". (string value)
#osapi_compute_unique_server_name_scope=

This is the first one that popped into my mind for some reason, and it turns 
out that this is actually a more complicated example than I was originally 
intending. I left it here, because there might be a potential issue with this 
config option when using 'org.tenant' as project_id. Ignoring that, let's say 
this config option had a way to say "I don't want duplicate hostnames within my 
organization at all", "I don't want any single tenant in my organization to 
have duplicate hostnames", or "I don't care at all about duplicate hostnames". 
Ideally each organization could have its own config for this.

>> I'd love to be involved with this. I am not sure that I currently have
>> the time to help with 
>> implementation, however.
> 
> Come to the meeting on friday! 1600 UTC

I meant to hit the first one. :-/   I'll try to hit it this week.

- Chris



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 1:24 PM, Russell Bryant  wrote:

> Greetings,
>
> In the TC we have been going through a process to better define our
> requirements for incubation and graduation to being an integrated
> project.  The current version can be found in the governance repo:
>
>
> http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements
>
> Is it time that we do an analysis of the existing integrated projects
> against the requirements we have set?  If not now, when?
>
> Perhaps we should start putting each project on the TC agenda for a
> review of its current standing.  For any gaps, I think we should set a
> specific timeframe for when we expect these gaps to be filled.
>
> Thoughts?
>

I like the idea of starting this soon, so projects can prioritize the work
during the next cycle and have time to plan to discuss any related issues
at the summit. Setting a deadline for finishing may depend on the nature
and size of the gaps, but it seems fair to set a deadline for *starting*
the work.

Doug



>
> --
> Russell Bryant
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Sergey Lukjanov
Trevor, I've created an issue to track it:
https://bugs.launchpad.net/savanna/+bug/1276764


On Wed, Feb 5, 2014 at 8:56 PM, Trevor McKay  wrote:

> Hi Sergey,
>
>   Is there a bug or a blueprint for this?  I did a quick search but
> didn't see one.
>
> Thanks,
>
> Trevor
>
> On Wed, 2014-02-05 at 16:06 +0400, Sergey Kolekonov wrote:
> > I'm currently working on moving on the MySQL for savanna-ci
> >
> >
> > On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov
> >  wrote:
> > Agreed, let's move on to the MySQL for savanna-ci to run
> > integration tests against production-like DB.
> >
> >
> > On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev
> >  wrote:
> > Since sqlite is not in the list of "databases that
> > would be used in production", CI should use other DB
> > for testing.
> >
> >
> > Andrew.
> >
> >
> > On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov
> >  wrote:
> > Indeed. We should create a bug around that and
> > move our savanna-ci to mysql.
> >
> > Regards,
> > Alexander Ignatov
> >
> >
> >
> > On 05 Feb 2014, at 01:01, Trevor McKay
> >  wrote:
> >
> > > This brings up an interesting problem:
> > >
> > > In https://review.openstack.org/#/c/70420/
> > I've added a migration that
> > > uses a drop column for an upgrade.
> > >
> > > But savann-ci is apparently using a sqlite
> > database to run.  So it can't
> > > possibly pass.
> > >
> > > What do we do here?  Shift savanna-ci tests
> > to non sqlite?
> > >
> > > Trevor
> > >
> > > On Sat, 2014-02-01 at 18:17 +0200, Roman
> > Podoliaka wrote:
> > >> Hi all,
> > >>
> > >> My two cents.
> > >>
> > >>> 2) Extend alembic so that op.drop_column()
> > does the right thing
> > >> We could, but should we?
> > >>
> > >> The only reason alembic doesn't support
> > these operations for SQLite
> > >> yet is that SQLite lacks proper support of
> > ALTER statement. For
> > >> sqlalchemy-migrate we've been providing a
> > work-around in the form of
> > >> recreating of a table and copying of all
> > existing rows (which is a
> > >> hack, really).
> > >>
> > >> But to be able to recreate a table, we
> > first must have its definition.
> > >> And we've been relying on SQLAlchemy schema
> > reflection facilities for
> > >> that. Unfortunately, this approach has a
> > few drawbacks:
> > >>
> > >> 1) SQLAlchemy versions prior to 0.8.4 don't
> > support reflection of
> > >> unique constraints, which means the
> > recreated table won't have them;
> > >>
> > >> 2) special care must be taken in 'edge'
> > cases (e.g. when you want to
> > >> drop a BOOLEAN column, you must also drop
> > the corresponding CHECK (col
> > >> in (0, 1)) constraint manually, or SQLite
> > will raise an error when the
> > >> table is recreated without the column being
> > dropped)
> > >>
> > >> 3) special care must be taken for 'custom'
> > type columns (it's got
> > >> better with SQLAlchemy 0.8.x, but e.g. in
> > 0.7.x we had to override
> > >> definitions of reflected BIGINT columns
> > manually for each
> > >> column.drop() call)
> > >>
> > >> 4) schema reflection can't be performed
> > when alembic migrations are
> > >> run in 'offline' mode (without 

Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Chris Behrens

On Feb 5, 2014, at 9:13 AM, "Tiwari, Arvind"  wrote:

> Hi Chris,
> 
> Looking at your requirements, seems my solution (see attached email) is 
> pretty much aligned. What I am trying to propose is
> 
> 1. One root domain as owner of "virtual cloud". Logically linked to "n" leaf 
> domains. 
> 2. All leaf domains falls under admin boundary of "virtual cloud" owner.
> 3. No sharing of resources at project level, that will keep the authorization 
> model simple.
> 4. No sharing of resources at domain level either.
> 5. Hierarchy or admin boundary will be totally governed by roles. 
> 
> This way we can setup a true virtual cloud/Reseller/wholesale model.
> 
> Thoughts?

Yeah, sounds the same, although we should clarify what 'resources' means (I 
used the term without completely clarifying it as well :). For example, a 
physical host is a resource, but I fully intend for it to be shared in that it 
will run VMs for multiple domains. So, by resources, I mean things like 
"instances, images, networks", although I would also want the flexibility to be 
able to share images/networks between domains.

Here's my larger thought process which led me to these features/requirements:

Within a large company, you will find that you need to provide many discrete 
clouds to different organizations within the company. Each organization 
potentially has different requirements when it comes to flavors, images, 
networks, and even config options. The only current option is to set up 'x' 
completely separate openstack installs. This can be completely cost 
ineffective. Instead of doing this, I want to build 1 big cloud. The benefits 
are:

1) You don't have 'x' groups maintaining 'y' platforms. This results in saving 
time and saving money on people.
2) Creating a new cloud for a new organization takes seconds.
3) You can have a huge cost savings on hardware as it is all shared.

and so forth.

And yes, this exact same model is what Service Providers should want if they 
intend to Resell/Co-brand, etc.

- Chris


> 
> Thanks,
> Arvind
> 
> -Original Message-
> From: Chris Behrens [mailto:cbehr...@codestud.com] 
> Sent: Wednesday, February 05, 2014 1:27 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy 
> Discussion
> 
> 
> Hi Vish,
> 
> I'm jumping in slightly late on this, but I also have an interest in this. 
> I'm going to preface this by saying that I have not read this whole thread 
> yet, so I apologize if I repeat things, say anything that is addressed by 
> previous posts, or doesn't jive with what you're looking for. :) But what you 
> describe below sounds like exactly a use case I'd come up with.
> 
> Essentially I want another level above project_id. Depending on the exact use 
> case, you could name it 'wholesale_id' or 'reseller_id'...and yeah, 'org_id' 
> fits in with your example. :) I think that I had decided I'd call it 'domain' 
> to be more generic, especially after seeing keystone had a domain concept.
> 
> Your idea below (prefixing the project_id) is exactly one way I thought of 
> doing this to be least intrusive. I, however, thought that this would not be 
> efficient. So, I was thinking about proposing that we add 'domain' to all of 
> our models. But that limits your hierarchy and I don't necessarily like that. 
> :)  So I think that if the queries are truly indexed as you say below, you 
> have a pretty good approach. The one issue that comes into mind is that if 
> there's any chance of collision. For example, if project ids (or orgs) could 
> contain a '.', then '.' as a delimiter won't work.
> 
> My requirements could be summed up pretty well by thinking of this as 
> 'virtual clouds within a cloud'. Deploy a single cloud infrastructure that 
> could look like many multiple clouds. 'domain' would be the key into each 
> different virtual cloud. Accessing one virtual cloud doesn't reveal any 
> details about another virtual cloud.
> 
> What this means is:
> 
> 1) domain 'a' cannot see instances (or resources in general) in domain 'b'. 
> It doesn't matter if domain 'a' and domain 'b' share the same tenant ID. If 
> you act with the API on behalf of domain 'a', you cannot see your instances 
> in domain 'b'.
> 2) Flavors per domain. domain 'a' can have different flavors than domain 'b'.
> 3) Images per domain. domain 'a' could see different images than domain 'b'.
> 4) Quotas and quota limits per domain. your instances in domain 'a' don't 
> count against quotas in domain 'b'.
> 5) Go as far as using different config values depending on what domain you're 
> using. This one is fun. :)
> 
> etc.
> 
> I'm not sure if you were looking to go that far or not. :) But I think that 
> our ideas are close enough, if not exact, that we can achieve both of our 
> goals with the same implementation.
> 
> I'd love to be involved with this. I am not sure that I currently have the 
> time to help with implementation, however.

Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 & Sass support)

2014-02-05 Thread Gabriel Hurley
I would imagine the downstream distros won't have the same problems with Ruby 
as they did with Node.js from a dependency standpoint, though it still doesn't 
jive with the community's all-Python bias.

My real concern, though, is anyone who may have extended the Horizon 
stylesheets using the capabilities of LESS. There are lots of ways you can 
customize the appearance of Horizon, and some folks may have gone that route.

My recommended course of action would be to think deeply on some recommended 
ways of "upgrading" from LESS to SASS for existing deployments who may have 
written their own stylesheets. Treat this like a feature deprecation (which is 
what it is).

Otherwise, if it makes people's lives better to use SASS instead of LESS, it 
sounds good to me.

- Gabriel

> -Original Message-
> From: Jason Rist [mailto:jr...@redhat.com]
> Sent: Wednesday, February 05, 2014 9:48 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from
> Less to Sass (Bootstrap 3 & Sass support)
> 
> On Wed 05 Feb 2014 09:32:54 AM MST, Jaromir Coufal wrote:
> > Dear Horizoners,
> >
> > in last days there were couple of interesting discussions about
> > updating to Bootstrap 3. In this e-mail, I would love to give a small
> > summary and propose a solution for us.
> >
> > As Bootstrap was heavily dependent on Less, when we got rid of node.js
> > we started to use lesscpy. Unfortunately because of this change we
> > were unable to update to Bootstrap 3. Fixing lesscpy looks problematic
> > - there are issues with supporting all use-cases and even if we fix
> > this in some time, we might challenge these issues again in the future.
> >
> > There is great news for Bootstrap. It started to support Sass [0].
> > (Thanks Toshi and MaxV for highlighting this news!)
> >
> > Thanks to this step forward, we might get out of our lesscpy issues by
> > switching to Sass. I am very happy with this possible change, since
> > Sass is more powerful than Less and we will be able to update our
> > libraries without any constraints.
> >
> > There are few downsides - we will need to change our Horizon Less
> > files to Sass, but it shouldn't be very big deal as far as we
> > discussed it with some Horizon folks. We can actually do it as a part
> > of Bootstrap update [1] (or CSS files restructuring [2]).
> >
> > Other concern will be with compilers. So far I've found 3 ways:
> > * rails dependency (how big problem would it be?)
> > * https://pypi.python.org/pypi/scss/0.7.1
> > * https://pypi.python.org/pypi/SassPython/0.2.1
> > * ... (other suggestions?)
> >
> > Nice benefit of Sass is, that we can use advantage of Compass
> > framework [3], which will save us a lot of energy when writing (not
> > just cross-browser) stylesheets thanks to their mixins.
> >
> > When we discussed on IRC with Horizoners, it looks like this is good
> > way to go in order to move us forward. So I am here, bringing this
> > suggestion up to whole community.
> >
> > My proposal for Horizon is to *switch from Less to Sass*. Then we can
> > unblock our already existing BPs, get Bootstrap updates and include
> > Compass framework. I believe this is all doable in Icehouse timeframe
> > if there are no problems with compilers.
> >
> > Thoughts?
> >
> > -- Jarda
> >
> > [0] http://getbootstrap.com/getting-started/
> > [1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
> > [2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
> > [3] http://compass-style.org/
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> I think this is a fantastic idea. Having no experience with Less, but seeing 
> that
> it is troublesome - if we can use SASS/Compass, I'd be much more
> comfortable with the switch. +1
> 
> --
> Jason E. Rist
> Senior Software Engineer
> OpenStack Management UI
> Red Hat, Inc.
> +1.919.754.4048
> Freenode: jrist
> github/identi.ca: knowncitizen
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-02-05 Thread Vishvananda Ishaya

On Feb 5, 2014, at 6:54 AM, Florent Flament  
wrote:

> Vish:
> 
> I agree that having roles associated with projects may complicate
> policy rules (although we may find ways to simplify the syntax?). It
> may be a sound choice to stick to a single scope for a given token.
> 
> +1 for your quotas tree proposal. Maybe ensuring that the sum of
> subprojects' quotas is lower than (or equal to) the parent quota will be
> enough for most use cases.
> 
> So far, I don't see any issue with your hierarchical projects
> proposal. IMHO, domains would not be of much use anymore.
> 
> 
> Vinod:
> 
> I agree that you raised the same issue that I did. I needed some
> clarification.
> 
> Regarding names (or IDs) that Nova uses, it would have to be "full
> project names" to avoid conflicts.

To be clear, I was not proposing at any point that we actually use
project names internally in the service databases, just that the names
are easier for humans to follow in a discussion. So when we
use:

orga.projecta

the database actually contains something like:

b04f9ea01a9944ac903526885a2666de.c45674c5c2c6463dad3c0cb9d7b8a6d8

That said, the full hierarchy is necessary for quotas to ensure
that we don’t exceed any of the parent quotas for a project.
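
A minimal sketch of that parent-quota walk, assuming dot-delimited hierarchical
ids and an in-memory usage/limit store (illustrative only, not Nova's actual
quota code):

# Illustrative only: hierarchical project ids like "orga.projecta.teamx",
# where each prefix ("orga", "orga.projecta") is an ancestor scope with its
# own limit that must also be respected.

limits = {"orga": 100, "orga.projecta": 40, "orga.projecta.teamx": 10}
usage = {"orga": 55, "orga.projecta": 30, "orga.projecta.teamx": 5}


def ancestors(project_id, delimiter="."):
    """Yield the project itself and every parent scope:
    orga.projecta.teamx -> orga.projecta -> orga."""
    parts = project_id.split(delimiter)
    for i in range(len(parts), 0, -1):
        yield delimiter.join(parts[:i])


def can_allocate(project_id, requested):
    """Allow the request only if no scope in the chain would exceed its limit."""
    for scope in ancestors(project_id):
        if usage.get(scope, 0) + requested > limits.get(scope, float("inf")):
            return False
    return True


print(can_allocate("orga.projecta.teamx", 4))   # True: fits at every level
print(can_allocate("orga.projecta.teamx", 11))  # False: exceeds teamx and projecta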

> 
> 
> Tiago, Vinod, Vish:
> 
> I agree with Tiago that having policy files spread on every node
> doesn't look easy to maintain. I don't think that the service
> centralizing RBAC would have to know about the services "sets of
> operations". It could work by checking some tuple "(action, context)"
> against a set a rules, and answering whether the action is authorized
> or not.
> 
> Moreover, if the same service were to centralize both RBAC and Quotas,
> then both could be checked in a row, for the provided tuple. The thing
> about Quotas is that they require the service to track resource
> usage, which can be done by the service providing RBAC, since each
> action would have to be authorized (and possibly tracked) by the RBAC
> engine.
> 
> This is why I would argue in favor of a unique service providing RBAC
> and Quotas enforcement together.
> 
> I don't know much about Gantt, so I guess that potential candidates for
> such a service would be Keystone, Gantt, Ceilometer (which already
> aggregates information about resource usage), or a new service?
> 
> I have seen that some work had been started to centralize Quotas, but
> abandoned:
> * https://review.openstack.org/#/c/44878/
> * https://review.openstack.org/#/c/40568/
> 
> There's also Identity API V3 providing (centralized?) policies
> management:
> * http://api.openstack.org/api-ref-identity.html#identity-v3
> 
> I think it would be worth trying to clarify/simplify/rationalize the
> way RBAC/Quotas work. Or am I missing something?
> 
> Although, I think this might be out of scope of the initial
> "Hierachical Multitenancy Discussion". Should it be moved to a new
> thread?

I vote for new thread. I’m not sure that policy and quota enforcement
can be done in a separate service for performance reasons. Even if we
can’t move enforcement into a central service like gantt or keystone, there
could still be a central location to store policy rules and quota values.
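
A rough sketch of the combined policy-plus-quota check described above, where a
single call evaluates an (action, context) tuple against a rule set and records
usage. All names here are hypothetical; this is not an existing OpenStack or
oslo.policy API:

# Hypothetical central RBAC + quota service, sketched in-process for clarity.
RULES = {
    "compute:create": lambda ctx: "member" in ctx["roles"],
    "compute:delete": lambda ctx: "admin" in ctx["roles"],
}
QUOTA_LIMITS = {"instances": 10}
QUOTA_USAGE = {"instances": 7}


def check_and_consume(action, context, resource=None, amount=0):
    """Authorize the action, then record quota usage, in one round trip."""
    rule = RULES.get(action)
    if rule is None or not rule(context):
        return False
    if resource is not None:
        if QUOTA_USAGE[resource] + amount > QUOTA_LIMITS[resource]:
            return False
        QUOTA_USAGE[resource] += amount
    return True


ctx = {"user": "joe", "project": "orga.projecta", "roles": ["member"]}
print(check_and_consume("compute:create", ctx, "instances", 1))  # True
print(check_and_consume("compute:delete", ctx))                  # False: needs admin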

Vish

> 
> Florent Flament
> 
> 



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][nova] Issues syncing latest db changes

2014-02-05 Thread Joe Gordon
Hi Boris, Roman, Victor (oslo-incubator db maintainers),

Last night I stumbled across bug https://launchpad.net/bugs/1272500 in
nova, which says the issue has been fixed in the latest oslo-incubator
code. So I ran:

./update.sh --base nova --dest-dir ../nova --modules db.sqlalchemy

https://review.openstack.org/#/c/71191/

And that appeared to fix the specific issues I was seeing from Bug
1272500, but it introduced some new failures.


I would like to get the nova unit tests working with sqlite 3.8.2-1 if
possible. How can this situation be resolved?


best,
Joe

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-05 Thread Julien Danjou
On Wed, Feb 05 2014, Joe Gordon wrote:


[…]

>> Is that right?
>>
>> Personally, I think #1 is far superior to #2.
>
> ++ to #1. I am concerned about the timing of this and don't think we
> can do this by icehouse though.

#1 has been on the radar for – at least – a year. I don't have the
courage to dig into the mailing list archive, but you'll find at least
one or two threads about that subject already.

-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-05 Thread Julien Danjou
On Wed, Feb 05 2014, Joe Gordon wrote:

> Dan explained it well in another email.
>
> Is the plugin run in devstack? I couldn't seem to find the code that
> enables the ceilometer plugin in nova.

It was in devstack but got removed because it was hanging IIRC. There
was a bug report about that. It didn't get any love since then AFAIK.

-- 
Julien Danjou
;; Free Software hacker ; independent consultant
;; http://julien.danjou.info


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-05 Thread Joe Gordon
On Wed, Feb 5, 2014 at 1:35 AM, Julien Danjou  wrote:
> On Tue, Feb 04 2014, Joe Gordon wrote:
>
>> Ceilometer running a plugin in nova is bad (for all the reasons
>> previously discussed),
>
> Well, I partially disagree. Are you saying that nobody is allowed to run
> a plugin in Nova? So what are these plugins in the first place?
> Or if you're saying that Ceilometer cannot have plugins in Nova, I would
> like to know why.


Dan explained it well in another email.

Is the plugin run in devstack? I couldn't seem to find the code that
enables the ceilometer plugin in nova.


>
> What is wrong, I agree, is that we have to use and mock nova internals
> to test our plugins. OTOH anyone writing plugin for Nova will have the
> same issue. To what extent this is a problem with the plugin system,
> I'll let everybody think about it. :)

Yup, the nova plugin system is not a stable API and we don't make any
guarantees about plugins working after an upgrade.  In other words
plugins will inevitably break (which is why I would prefer not to use
them upstream).

>
>> So what can nova do to help this?  It sounds like you have a valid use
>> case that nova should support without requiring a plugin.
>
> We just need the possibility to run some code before an instance is
> deleted, in a synchronous manner - i.e. our code needs to be fully
> executed before Nova actually destroys the VM.
>
> --
> Julien Danjou
> ;; Free Software hacker ; independent consultant
> ;; http://julien.danjou.info

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday February 6th at 22:00UTC

2014-02-05 Thread Matthew Treinish
Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, February 6th at 22:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones, tomorrow's
meeting will be at:

17:00 EST
07:00 JST
08:30 ACDT
23:00 CET
16:00 CST
14:00 PST
 
-Matt Treinish 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ceilometer] ceilometer unit tests broke because of a nova patch

2014-02-05 Thread Joe Gordon
On Wed, Feb 5, 2014 at 6:57 AM, Dan Smith  wrote:
>> We don't have to add a new notification, but we have to add some
>> new datas in the nova notifications. At least for the delete
>> instance notification to remove the ceilometer nova notifier.
>>
>> A while ago, I have registered a blueprint that explains which
>> datas are missing in the current nova notifications:
>>
>> https://blueprints.launchpad.net/nova/+spec/usage-data-in-notification
>>
>>
> https://wiki.openstack.org/wiki/Ceilometer/blueprints/remove-ceilometer-nova-notifier
>
> This seems like a much better way to do this.
>
> I'm not opposed to a nova plugin, but if it's something that lives
> outside the nova tree, I think there's going to be a problem of
> constantly chasing internal API changes. IMHO, a plugin should live
> (and be tested) in the nova tree and provide/consume a stableish API
> to/from Ceilometer.
>
> So, it seems like we've got the following options:
>
> 1. Provide the required additional data in our notifications to avoid
>the need for a plugin to hook into nova internals.
> 2. Continue to use a plugin in nova to scrape the additional data
>needed during certain events, but hopefully in a way that ties the
>plugin to the internal APIs in a maintainable way.
>
> Is that right?
>
> Personally, I think #1 is far superior to #2.

++ to #1. I am concerned about the timing of this and don't think we
can do this by icehouse though.

>
> --Dan
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Scheduler] Will the Scheuler use Nova Objects?

2014-02-05 Thread Chris Behrens

On Jan 30, 2014, at 5:55 AM, Andrew Laski  wrote:

> I'm of the opinion that the scheduler should use objects, for all the reasons 
> that Nova uses objects, but that they should not be Nova objects.  Ultimately 
> what the scheduler needs is a concept of capacity, allocations, and locality 
> of resources.  But the way those are modeled doesn't need to be tied to how 
> Nova does it, and once the scope expands to include Cinder it may quickly 
> turn out to be limiting to hold onto Nova objects.

+2! 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-02-05 Thread Russell Bryant
On 01/23/2014 11:28 AM, Justin Santa Barbara wrote:
> Would appreciate feedback / opinions on this
> blueprint: 
> https://blueprints.launchpad.net/nova/+spec/first-discover-your-peers

The blueprint starts out with:

When running a clustered service on Nova, typically each node needs
to find its peers. In the physical world, this is typically done
using multicast. On the cloud, we either can't or don't want to use
multicast.

So, it seems that at the root of this, you're looking for a
cloud-compatible way for instances to message each other.  I really
don't see the metadata API as the appropriate place for that.

How about using Marconi here?  If not, what's missing from Marconi's API
to solve your messaging use case to allow instances to discover each other?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Ben Nemec
 

On 2014-02-05 10:58, Doug Hellmann wrote: 

> On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec  wrote:
> 
> On 2014-02-05 09:05, Doug Hellmann wrote: 
> 
> On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec  wrote:
> 
> On 2014-01-08 12:14, Doug Hellmann wrote: 
> 
> On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec  wrote:
> 
> On 2014-01-08 11:16, Sean Dague wrote:
> On 01/08/2014 12:06 PM, Doug Hellmann wrote:
> 
> Yeah, that's what made me start thinking oslo.sphinx should be called
> something else.
> 
> Sean, how strongly do you feel about not installing oslo.sphinx in
> devstack? I see your point, I'm just looking for alternatives to the
> hassle of renaming oslo.sphinx. 
> Doing the git thing is definitely not the right thing. But I guess I got
> lost somewhere along the way about what the actual problem is. Can
> someone write that up concisely? With all the things that have been
> tried/failed, why certain things fail, etc.
 The problem seems to be when we pip install -e oslo.config on the
system, then pip install oslo.sphinx in a venv. oslo.config is
unavailable in the venv, apparently because the namespace package for
o.s causes the egg-link for o.c to be ignored. Pretty much every other
combination I've tried (regular pip install of both, or pip install -e
of both, regardless of where they are) works fine, but there seem to be
other issues with all of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
used for gating, and we can't pip install -e oslo.sphinx because it's
not a runtime dep so it doesn't belong in the gate. Changing the
toplevel package for oslo.sphinx was also mentioned, but has obvious
drawbacks too.

 I think that about covers what I know so far. 

Here's a link dstufft provided to the pip bug tracking this problem:
https://github.com/pypa/pip/issues/3 [1] 
Doug 

This just bit me again trying to run unit tests against a fresh Nova
tree. I don't think it's just me either - Matt Riedemann said he has
been disabling site-packages in tox.ini for local tox runs. We really
need to do _something_ about this, even if it's just disabling
site-packages by default in tox.ini for the affected projects. A
different option would be nice, but based on our previous discussion I'm
not sure we're going to find one. 
Thoughts? 

Is the problem isolated to oslo.sphinx? That is, do we end up with any
configurations where we have 2 oslo libraries installed in different
modes (development and "regular") where one of those 2 libraries is not
oslo.sphinx? Because if the issue is really just oslo.sphinx, we can
rename that to move it out of the namespace package. 

oslo.sphinx is the only one that has triggered this for me so far. I
think it's less likely to happen with the others because they tend to be
runtime dependencies so they get installed in devstack, whereas
oslo.sphinx doesn't because it's a build dep (AIUI anyway). 

That's pretty much what I expected. 

Can we get a volunteer to work on renaming oslo.sphinx? 

I'm winding down on the parallel testing work so I could look at this
next. I don't know exactly what is going to be involved in the rename
though. 

We also need to decide what we're going to call it. I haven't come up
with any suggestions that I'm particularly in love with so far. :-/ 

-Ben 

> Doug 
> 
> Doug 
> 
> -Ben

 

Links:
--
[1] https://github.com/pypa/pip/issues/3
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Developer documentation - linking to slideshares?

2014-02-05 Thread Collins, Sean
On Tue, Feb 04, 2014 at 07:52:22AM -0600, Anne Gentle wrote:
> Currently the docs contributor sign the same CLA as code contributors. I'd
> encourage you to use the docs to really explain not just link to slide
> decks. There's a better chance of maintenance over time.

Agreed - I plan on writing up docs, but when I find something really
good on a slide I'd like to be able to have a reference to it in the
footnotes - I suppose a works cited section, so I'm not plagiarizing.

> I had been using a wiki page for a collection of videos at
> https://wiki.openstack.org/wiki/Demo_Videos. But it ages with time.

Awesome - I'll make sure to add that link to some kind of
"further reading" section.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Governance] Integrated projects and new requirements

2014-02-05 Thread Russell Bryant
Greetings,

In the TC we have been going through a process to better define our
requirements for incubation and graduation to being an integrated
project.  The current version can be found in the governance repo:

http://git.openstack.org/cgit/openstack/governance/tree/reference/incubation-integration-requirements

Is it time that we do an analysis of the existing integrated projects
against the requirements we have set?  If not now, when?

Perhaps we should start putting each project on the TC agenda for a
review of its current standing.  For any gaps, I think we should set a
specific timeframe for when we expect these gaps to be filled.

Thoughts?

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Zane Bitter

On 05/02/14 11:39, Clint Byrum wrote:

Excerpts from Zane Bitter's message of 2014-02-04 16:14:09 -0800:

On 03/02/14 17:09, Clint Byrum wrote:

UpdatePolicy in cfn is a single string, and causes very generic rolling


Huh?

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html

Not only is it not just a single string (in fact, it looks a lot like
the properties you have defined), it's even got another layer of
indirection so you can define different types of update policy (rolling
vs. canary, anybody?). It's an extremely flexible syntax.



Oops, I relied a little too much on my memory and not enough on docs for
that one. O-k, I will re-evaluate given actual knowledge of how it
actually works. :-P


cheers :D


BTW, given that we already implemented this in autoscaling, it might be
helpful to talk more specifically about what we need to do in addition
in order to support the use cases you have in mind.



As Robert mentioned in his mail, autoscaling groups won't allow us to
inject individual credentials. With the ResourceGroup, we can make a
nested stack with a random string generator so that is solved. Now the


\o/ for the random string generator solving the problem!

:-( for ResourceGroup being the only way to do it.

This is exactly why I hate ResourceGroup and think it was a mistake. 
Powerful software comes from being able to combine simple concepts in 
complex ways. Right now you have to choose between an autoscaling group, 
which has rolling updates, and a ResourceGroup which allows you to scale 
stacks. That sucks. What you need is to have both at the same time, and 
the way to do that is to allow autoscaling groups to scale stacks, as 
has long been planned.


At this point it would be a mistake to add a _complicated_ feature 
solely for the purpose of working around the fact the we can't yet 
combine two other, existing, features. It would be better to fix 
autoscaling groups to allow you to inject individual credentials and 
then add a simpler feature that does not need to create ad-hoc groups.



other piece we need is to be able to directly choose machines to take
out of commission, which I think we may have a simple solution to but I
don't want to derail on that.

The one used in AutoScalingGroups is also limited to just one group,
thus it can be done all inside the resource.


update behavior. I want this resource to be able to control multiple
groups as if they are one in some cases (Such as a case where a user
has migrated part of an app to a new type of server, but not all.. so
they will want to treat the entire aggregate as one rolling update).

I'm o-k with overloading it to allow resource references, but I'd like
to hear more people take issue with depends_on before I select that
course.


Resource references in general, and depends_on in particular, feel like
very much the wrong abstraction to me. This is a policy, not a resource.


To answer your question, using it with a server instance allows
rolling updates across non-grouped resources. In the example the
rolling_update_dbs does this.


That's not a great example, because one DB server depends on the other,
forcing them into updating serially anyway.



You're right, a better example is a set of (n) resource groups which
serve the same service and thus we want to make sure we maintain the
minimum service levels as a whole.


That's interesting, and I'd like to hear more about that use case and 
why it couldn't be solved using autoscaling groups assuming the obstacle 
to using them at all were eliminated. If there's a real use case here 
beyond "work around lack of stack-scaling functionality" then I'm 
definitely open to being persuaded. I'd just like to make sure that it 
exists and justifies the extra complexity.



If it were an order of magnitude harder to do it this way, I'd say
sure let's just expand on the single-resource rolling update. But
I think it won't be that much harder to achieve this and then the use
case is solved.


I guess what I'm thinking is that your proposal is really two features:

1) Notifications/callbacks on update that allow the user to hook in to 
the workflow.

2) Rolling updates over ad-hoc groups (not autoscaling groups).

I think we all agree that (1) is needed; by my count ~6 really good use 
cases have been mentioned in this thread.


What I'm suggesting is that we probably don't need to do (2) at all if 
we fix autoscaling groups to be something you could use.


Having reviewed the code for rolling updates in scaling groups, I can 
report that it is painfully complicated and that you'd be doing yourself 
a big favour by not attempting to reimplement it with ad-hoc groups ;). 
(To be fair, I don't think this would be quite as bad, though clearly it 
wouldn't be as good as not having to do it at all.) More concerning than 
that, though, is the way this looks set to make the template format even 
more arcane than it already is. We might eventually be able

Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Russell Bryant
On 02/05/2014 12:54 PM, Jonathan Bryce wrote:
> On Feb 5, 2014, at 11:12 AM, Mark McLoughlin  wrote:
> 
>> I don't have a big issue with the way the Foundation currently enforces
>> "you must use the code" - anyone who signs a trademark agreement with
>> the Foundation agrees to "include the entirety of" Nova's code. That's
>> very vague, but I assume the Foundation can terminate the agreement if
>> it thinks the other party is acting in bad faith.
>>
>> Basically, I'm concerned about us swinging from a rather lax "you must
>> include our code" rule to an overly strict "you must make no downstream
>> modifications to our code”.
> 
> I tend to agree with you for the most part. As they exist today, the 
> trademark licenses include a couple of components: legally agreeing to use 
> the code in the projects specified (requires self certification from the 
> licensee) and passing the approved test suite once it exists (which adds a 
> component requiring external validation of behavior). By creating the test 
> suite and selecting required capabilities that can be externally validated 
> through the test suite, we would take a step in tightening up the usage and 
> consistency enforceable by our existing legal framework.
> 
> I think that "designated sections” could provide a useful construct for 
> better general guidance on where the extension points to the codebase are. 
> From a practical standpoint, it would probably be pretty difficult to 
> efficiently audit an overly strict definition of the designated sections and 
> this would still be a self certifying requirement on the licensee.

Another thing to consider is that like many other implementation
details, this stuff is rapidly evolving.  I'm a bit worried about the
nightmare of trying to keep the definitions up to date, much less agreed
upon by all parties involved.

The vague "include the entirety of" statement is in line with what I
feel is appropriate for Nova.  I suspect that I would disagree with some
interpretations of that, though.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Mark Washenberger
On Wed, Feb 5, 2014 at 8:22 AM, Thierry Carrez wrote:

> (This email is mostly directed to PTLs for programs that include one
> integrated project)
>
> The DefCore subcommittee from the OpenStack board of directors asked the
> Technical Committee yesterday about which code sections in each
> integrated project should be "designated sections" in the sense of [1]
> (code you're actually needed to run or include to be allowed to use the
> trademark). That determines where you can run alternate code (think:
> substitute your own private hypervisor driver) and still be able to call
> the result openstack.
>
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
>
> PTLs and their teams are obviously the best placed to define this, so it
> seems like the process should be: PTLs propose designated sections to
> the TC, which blesses them, combines them and forwards the result to the
> DefCore committee. We could certainly leverage part of the governance
> repo to make sure the lists are kept up to date.
>
> Comments, thoughts ?
>

I don't have any issue defining what I think of as typical extension /
variation seams in the Glance code base. However, I'm still struggling to
understand what all this means for our projects and our ecosystem.
Basically, why do I care? What are the implications of a 0% vs 100%
designation? Are we hoping to improve interoperability, or encourage more
upstream collaboration, or what?

How many deployments do we expect to get the trademark after this core
definition process is completed?


>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Jonathan Bryce
On Feb 5, 2014, at 11:12 AM, Mark McLoughlin  wrote:

> I don't have a big issue with the way the Foundation currently enforces
> "you must use the code" - anyone who signs a trademark agreement with
> the Foundation agrees to "include the entirety of" Nova's code. That's
> very vague, but I assume the Foundation can terminate the agreement if
> it thinks the other party is acting in bad faith.
> 
> Basically, I'm concerned about us swinging from a rather lax "you must
> include our code" rule to an overly strict "you must make no downstream
> modifications to our code”.

I tend to agree with you for the most part. As they exist today, the trademark 
licenses include a couple of components: legally agreeing to use the code in 
the projects specified (requires self certification from the licensee) and 
passing the approved test suite once it exists (which adds a component 
requiring external validation of behavior). By creating the test suite and 
selecting required capabilities that can be externally validated through the 
test suite, we would take a step in tightening up the usage and consistency 
enforceable by our existing legal framework.

I think that "designated sections” could provide a useful construct for better 
general guidance on where the extension points to the codebase are. From a 
practical standpoint, it would probably be pretty difficult to efficiently 
audit an overly strict definition of the designated sections and this would 
still be a self certifying requirement on the licensee.

Jonathan


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Thierry Carrez
Russell Bryant wrote:
> Who gets final say if there's strong disagreement between a PTL and the
> TC?  Hopefully this won't matter, but it may be useful to go ahead and
> clear this up front.

I suspect that would be as usual. PTL has final say over his project
matters. The TC can just wield the nuclear weapon of removing a project
from the integrated release... but I seriously doubt we'd engage in such
an extreme solution over that precise discussion.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 & Sass support)

2014-02-05 Thread Jason Rist
On Wed 05 Feb 2014 09:32:54 AM MST, Jaromir Coufal wrote:
> Dear Horizoners,
>
> in the last few days there were a couple of interesting discussions about
> updating to Bootstrap 3. In this e-mail, I would love to give a small
> summary and propose a solution for us.
>
> As Bootstrap was heavily dependent on Less, when we got rid of node.js
> we started to use lesscpy. Unfortunately because of this change we
> were unable to update to Bootstrap 3. Fixing lesscpy looks problematic
> - there are issues with supporting all use cases, and even if we fix
> them at some point, we might face these issues again in the future.
>
> There is great news for Bootstrap. It started to support Sass [0].
> (Thanks Toshi and MaxV for highlighting this news!)
>
> Thanks to this step forward, we might get out of our lesscpy issues by
> switching to Sass. I am very happy with this possible change, since
> Sass is more powerful than Less and we will be able to update our
> libraries without any constraints.
>
> There are a few downsides - we will need to change our Horizon Less
> files to Sass, but it shouldn't be a very big deal as far as we
> discussed it with some Horizon folks. We can actually do it as a part
> of Bootstrap update [1] (or CSS files restructuring [2]).
>
> Another concern is the compilers. So far I've found 3 options:
> * rails dependency (how big a problem would it be?)
> * https://pypi.python.org/pypi/scss/0.7.1
> * https://pypi.python.org/pypi/SassPython/0.2.1
> * ... (other suggestions?)
>
> A nice benefit of Sass is that we can take advantage of the Compass
> framework [3], which will save us a lot of energy when writing (not
> just cross-browser) stylesheets thanks to their mixins.
>
> When we discussed this on IRC with Horizoners, it looked like a good
> way to go in order to move us forward. So I am here, bringing this
> suggestion up to whole community.
>
> My proposal for Horizon is to *switch from Less to Sass*. Then we can
> unblock our already existing BPs, get Bootstrap updates and include
> Compass framework. I believe this is all doable in Icehouse timeframe
> if there are no problems with compilers.
>
> Thoughts?
>
> -- Jarda
>
> [0] http://getbootstrap.com/getting-started/
> [1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
> [2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
> [3] http://compass-style.org/
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

I think this is a fantastic idea. I have no experience with Less, but seeing 
that it is troublesome - if we can use SASS/Compass, I'd be much more 
comfortable with the switch. +1

--
Jason E. Rist
Senior Software Engineer
OpenStack Management UI
Red Hat, Inc.
+1.919.754.4048
Freenode: jrist
github/identi.ca: knowncitizen

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Oslo] [Fuel] [Fuel-dev] Openstack services should support SIGHUP signal

2014-02-05 Thread Ben Nemec

On 2014-02-05 10:58, Bogdan Dobrelya wrote:

Hi, stackers.
I believe Openstack services from all projects should support SIGHUP for
effective log/config files handling w/o unnecessary restarts.
(See https://bugs.launchpad.net/oslo/+bug/1276694)

'Smooth reloads' (kill -HUP) are much better than 'disturbing restarts',
aren't they?
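
A minimal, self-contained sketch of the reload-on-SIGHUP idea (illustrative
only, not the oslo-incubator implementation referenced below; the config file
format and path are made up):

import os
import signal
import tempfile

# Stand-in for /etc/<service>/<service>.conf so the sketch is self-contained.
CONF_FILE = os.path.join(tempfile.gettempdir(), "demo-service.conf")
with open(CONF_FILE, "w") as f:
    f.write("debug=False\n")


def load_config():
    """Re-parse the config file; a real service would rebuild its options
    and reopen its log files here instead of restarting."""
    conf = {}
    with open(CONF_FILE) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            conf[key] = value
    return conf


def handle_sighup(signum, frame):
    # Runs when the operator sends `kill -HUP <pid>`; no restart needed.
    global current_config
    current_config = load_config()


current_config = load_config()
signal.signal(signal.SIGHUP, handle_sighup)
# The service's main loop would keep running here; a HUP refreshes the config.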


I believe Oslo already has support for this: 
https://github.com/openstack/oslo-incubator/commit/825ace5581fbb416944acae62f51c489ed93b9c9


As such, I'm going to mark that bug invalid against Oslo, but please 
feel free to add other projects to it that need to start using the 
functionality (or tell me I'm completely wrong and that doesn't do what 
you want :-).


-Ben

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Jonathan Bryce
On Feb 5, 2014, at 10:18 AM, Steve Gordon  wrote:

> - Original Message -
>> From: "Andreas Jaeger" 
>> To: "Mark McLoughlin" , "OpenStack Development Mailing 
>> List (not for usage questions)"
>> 
>> Cc: "Jonathan Bryce" 
>> Sent: Wednesday, February 5, 2014 9:17:39 AM
>> Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
>> 
>> On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
>>> On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
 Steve Gordon wrote:
>> From: "Anne Gentle" 
>> Based on today's Technical Committee meeting and conversations with the
>> OpenStack board members, I need to change our Conventions for service
>> names
>> at
>> https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
>> .
>> 
>> Previously we have indicated that Ceilometer could be named OpenStack
>> Telemetry and Heat could be named OpenStack Orchestration. That's not
>> the
>> case, and we need to change those names.
>> 
>> To quote the TC meeting, ceilometer and heat are "other modules" (second
>> sentence from 4.1 in
>> http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
>> distributed with the Core OpenStack Project.
>> 
>> Here's what I intend to change the wiki page to:
>> Here's the list of project and module names and their official names
>> and
>> capitalization:
>> 
>> Ceilometer module
>> Cinder: OpenStack Block Storage
>> Glance: OpenStack Image Service
>> Heat module
>> Horizon: OpenStack dashboard
>> Keystone: OpenStack Identity Service
>> Neutron: OpenStack Networking
>> Nova: OpenStack Compute
>> Swift: OpenStack Object Storage
 
 Small correction. The TC had not indicated that Ceilometer could be
 named "OpenStack Telemetry" and Heat could be named "OpenStack
 Orchestration". We formally asked[1] the board to allow (or disallow)
 that naming (or more precisely, that use of the trademark).
 
 [1]
 https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
 
 We haven't got a formal and clear answer from the board on that request
 yet. I suspect they are waiting for progress on DefCore before deciding.
 
 If you need an answer *now* (and I suspect you do), it might make sense
 to ask foundation staff/lawyers about using those OpenStack names with
 the current state of the bylaws and trademark usage rules, rather than
 the hypothetical future state under discussion.
>>> 
>>> Basically, yes - I think having the Foundation confirm that it's
>>> appropriate to use "OpenStack Telemetry" in the docs is the right thing.
>>> 
>>> There's an awful lot of confusion about the subject and, ultimately,
>>> it's the Foundation staff who are responsible for enforcing (and giving
>>> advise to people on) the trademark usage rules. I've cc-ed Jonathan so
>>> he knows about this issue.
>>> 
>>> But FWIW, the TC's request is asking for Ceilometer and Heat to be
>>> allowed use their "Telemetry" and "Orchestration" names in *all* of the
>>> circumstances where e.g. Nova is allowed use its "Compute" name.
>>> 
>>> Reading again this clause in the bylaws:
>>> 
>>>  "The other modules which are part of the OpenStack Project, but
>>>   not the Core OpenStack Project may not be identified using the
>>>   OpenStack trademark except when distributed with the Core OpenStack
>>>   Project."
>>> 
>>> it could well be said that this case of naming conventions in the docs
>>> for the entire OpenStack Project falls under the "distributed with" case
>>> and it is perfectly fine to refer to "OpenStack Telemetry" in the docs.
>>> I'd really like to see the Foundation staff give their opinion on this,
>>> though.

In this case, we are talking about documentation that is produced and 
distributed with the integrated release to cover the Core OpenStack Project and 
the “modules" that are distributed together with the Core OpenStack Project in 
the integrated release. This is the intended use case for the exception Mark 
quoted above from the Bylaws, and I think it is perfectly fine to refer to the 
integrated components in the OpenStack release documentation as OpenStack 
components.


>> What Steve is asking IMO is whether we have to change "OpenStack
>> Telemetry" to "Ceilometer module" or whether we can just say "Telemetry"
>> without the OpenStack in front of it,
>> 
>> Andreas
> 
> Constraining myself to the topic of what we should be using in the 
> documentation, yes this is what I'm asking. This makes more sense to me than 
> switching to calling them the "Heat module" and "Ceilometer module" because:
> 
> 1) It resolves the issue of using the OpenStack mark where it (apparently) 
> shouldn't be used.
> 2) It means we're still using the "formal" name for the program as defined by 
> the TC [1] (it is my understanding this remains t

Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy Discussion

2014-02-05 Thread Tiwari, Arvind
Hi Chris,

Looking at your requirements, seems my solution (see attached email) is pretty 
much aligned. What I am trying to propose is

1. One root domain as owner of "virtual cloud". Logically linked to "n" leaf 
domains. 
2. All leaf domains falls under admin boundary of "virtual cloud" owner.
3. No sharing of resources at project level, that will keep the authorization 
model simple.
4. No sharing of resources at domain level either.
5. Hierarchy or admin boundary will be totally governed by roles. 

This way we can setup a true virtual cloud/Reseller/wholesale model.

Thoughts?

Thanks,
Arvind

-Original Message-
From: Chris Behrens [mailto:cbehr...@codestud.com] 
Sent: Wednesday, February 05, 2014 1:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][nova] Re: Hierarchicical Multitenancy 
Discussion


Hi Vish,

I'm jumping in slightly late on this, but I also have an interest in this. I'm 
going to preface this by saying that I have not read this whole thread yet, so 
I apologize if I repeat things, say anything that is addressed by previous 
posts, or doesn't jive with what you're looking for. :) But what you describe 
below sounds like exactly a use case I'd come up with.

Essentially I want another level above project_id. Depending on the exact use 
case, you could name it 'wholesale_id' or 'reseller_id'...and yeah, 'org_id' 
fits in with your example. :) I think that I had decided I'd call it 'domain' 
to be more generic, especially after seeing keystone had a domain concept.

Your idea below (prefixing the project_id) is exactly one way I thought of 
doing this to be least intrusive. I, however, thought that this would not be 
efficient. So, I was thinking about proposing that we add 'domain' to all of 
our models. But that limits your hierarchy and I don't necessarily like that. 
:)  So I think that if the queries are truly indexed as you say below, you have 
a pretty good approach. The one issue that comes to mind is whether there's 
any chance of collision. For example, if project ids (or orgs) could contain a 
'.', then '.' as a delimiter won't work.
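
A small sketch of the prefix-scoping approach and the delimiter collision
mentioned above (the ids, delimiter, and helper are purely illustrative):

DELIM = "."

instances = {
    "orga.projecta": ["vm-1"],
    "orga.projectb": ["vm-2"],
    "orgb.projecta": ["vm-3"],
}


def list_scope(scope):
    """Return every instance visible from `scope` (itself plus descendants),
    the way an indexed LIKE 'scope.%' query would."""
    result = []
    for project_id, vms in instances.items():
        if project_id == scope or project_id.startswith(scope + DELIM):
            result.extend(vms)
    return sorted(result)


print(list_scope("orga"))           # ['vm-1', 'vm-2']; orgb stays invisible
print(list_scope("orga.projecta"))  # ['vm-1']

# The collision risk: if a raw project or org name may itself contain the
# delimiter (e.g. an org literally named "orga.projecta"), the scoping above
# becomes ambiguous, hence the preference for uuids or an escaped delimiter.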

My requirements could be summed up pretty well by thinking of this as 'virtual 
clouds within a cloud'. Deploy a single cloud infrastructure that could look 
like many multiple clouds. 'domain' would be the key into each different 
virtual cloud. Accessing one virtual cloud doesn't reveal any details about 
another virtual cloud.

What this means is:

1) domain 'a' cannot see instances (or resources in general) in domain 'b'. It 
doesn't matter if domain 'a' and domain 'b' share the same tenant ID. If you 
act with the API on behalf of domain 'a', you cannot see your instances in 
domain 'b'.
2) Flavors per domain. domain 'a' can have different flavors than domain 'b'.
3) Images per domain. domain 'a' could see different images than domain 'b'.
4) Quotas and quota limits per domain. your instances in domain 'a' don't count 
against quotas in domain 'b'.
5) Go as far as using different config values depending on what domain you're 
using. This one is fun. :)

etc.

I'm not sure if you were looking to go that far or not. :) But I think that our 
ideas are close enough, if not exact, that we can achieve both of our goals 
with the same implementation.

I'd love to be involved with this. I am not sure that I currently have the time 
to help with implementation, however.

- Chris



On Feb 3, 2014, at 1:58 PM, Vishvananda Ishaya  wrote:

> Hello Again!
> 
> At the meeting last week we discussed some options around getting true 
> multitenancy in nova. The use case that we are trying to support can be 
> described as follows:
> 
> "Martha, the owner of ProductionIT provides it services to multiple 
> Enterprise clients. She would like to offer cloud services to Joe at 
> WidgetMaster, and Sam at SuperDevShop. Joe is a Development Manager for 
> WidgetMaster and he has multiple QA and Development teams with many users. 
> Joe needs the ability create users, projects, and quotas, as well as the 
> ability to list and delete resources across WidgetMaster. Martha needs to be 
> able to set the quotas for both WidgetMaster and SuperDevShop; manage users, 
> projects, and objects across the entire system; and set quotas for the client 
> companies as a whole. She also needs to ensure that Joe can't see or mess 
> with anything owned by Sam."
> 
> As per the plan I outlined in the meeting I have implemented a 
> Proof-of-Concept that would allow me to see what changes were required in 
> nova to get scoped tenancy working. I used a simple approach of faking out 
> hierarchy by prepending the id of the larger scope to the id of the smaller 
> scope. Keystone uses uuids internally, but for ease of explanation I will 
> pretend like it is using the name. I think we can all agree that 
> 'orga.projecta' is more readable than 
> 'b04f9ea01a9944ac903526885a2666dec45674c5c2c6463dad3c0cb9d7b8a6d8'

Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Daniel P. Berrange
On Wed, Feb 05, 2014 at 11:56:35AM -0500, Doug Hellmann wrote:
> On Wed, Feb 5, 2014 at 11:40 AM, Chmouel Boudjnah wrote:
> 
> >
> > On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann  > > wrote:
> >
> >> Including the config file in either the developer documentation or the
> >> packaging build makes more sense. I'm still worried that adding it to the
> >> sdist generation means you would have to have a lot of tools installed just
> >> to make the sdist. However, we could
> >
> >
> >
> > I think that may slightly complicate devstack, since we rely
> > heavily on config samples to set up the services.
> >
> 
> Good point, we would need to add a step to generate a sample config for
> each app instead of just copying the one in the source repository.

Which is what 'python setup.py build' for an app would take care of.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
> I'm new, so I'm sure there's some history I'm missing, but I find it
> bizarre that we have to put the same license into every single file of
> source code in our projects.  In my past experience, a single LICENSE
> file at the root-level of the project has been sufficient to declare
> the license chosen for a project.  Github even has the capacity to
> choose a license and generate that file for you, it's neat. 

Take a look at this thread on legal-discuss last month:

  http://lists.openstack.org/pipermail/legal-discuss/2014-January/thread.html

But yeah, as others say - per-file license headers help make the license
explicit when it is copied to other projects.

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Zane Bitter

On 04/02/14 20:34, Robert Collins wrote:

On 5 February 2014 13:14, Zane Bitter  wrote:



That's not a great example, because one DB server depends on the other,
forcing them into updating serially anyway.

I have to say that even in general, this whole idea about applying update
policies to non-grouped resources doesn't make a whole lot of sense to me.
For non-grouped resources you control the resource definitions individually
- if you don't want them to update at a particular time, you have the option
of just not updating them.


Well, I don't particularly like the idea of doing thousands of
discrete heat stack-update calls, which would seem to be what you're
proposing.


I'm not proposing you do it by hand if that's any help ;)

Ideally a workflow service would exist that could do the messy parts for 
you, but at the end of the day it's just a for-loop in your code. From 
what you say below, I think you started down the path of managing a lot 
of complexity yourself when you were forced to generate templates for 
server groups rather than use autoscaling. I think it would be better 
for _everyone_ for us to put resources into helping TripleO get off that 
path rather than it would for us to put resources into making it less 
inconvenient to stay on it.



On groups: autoscale groups are a problem for secure minded
deployments because every server has identical resources (today) and
we very much want discrete credentials per server - at least this is
my understanding of the reason we're not using scaling groups in
TripleO.


OK, I wasn't aware that y'all are not using scaling groups. It sounds 
like this is the real problem we should be addressing, because everyone 
wants secure-minded deployments and nobody wants to have to manually 
define the configs for their 1000 all-but-identical servers. If we had a 
mechanism to ensure that every server in a scaling group could obtain 
its own credentials then it seems to me that the issue of whether to 
apply autoscaling-style rolling upgrades to manually-defined groups of 
resources becomes moot.


(Note: if anybody read that paragraph and started thinking "hey, we 
could make Turing-complete programmable template templates using the 
JSON equivalent of XSLT, please just stop right now kthx.)



Where you _do_ need it is for scaling groups where every server is based on
the same launch config, so you need a way to control the members
individually - by batching up operations (done), adding delays (done) or,
even better, notifications and callbacks.

So it seems like doing 'rolling' updates for any random subset of resources
is effectively turning Heat into something of a poor-man's workflow service,
and IMHO that is probably a mistake.


I mean to reply to the other thread, but here is just as good :) -
heat as a way to describe the intended state, and heat takes care of
transitions, is a brilliant model. It absolutely implies a bunch of
workflows - the AWS update policy is probably the key example.


Absolutely. Orchestration works by building a workflow internally, which 
Heat then also executes. No disagreement there.



Being able to gracefully, *automatically* work through a transition
between two defined states, allowing the nodes in question to take
care of their own needs along the way seems like a pretty core
function to fit inside Heat itself. Its not at all the same as 'allow
users to define abitrary workflows'.


That's fair and, I like to think, consistent with what I was suggesting 
below.



What we do need for all resources (not just scaling groups) is a way for the
user to say "for this particular resource, notify me when it has updated
(but, if possible, before we have taken any destructive actions on it), give
me a chance to test it and accept or reject the update". For example, when
you resize a server, give the user a chance to confirm or reject the change
at the VERIFY_RESIZE step (Trove requires this). Or when you replace a
server during an update, give the user a chance to test the new server and
either keep it (continue on and delete the old one) or not (roll back). Or
when you replace a server in a scaling group, notify the load balancer _or
some other thing_ (e.g. OpenShift broker node) that a replacement has been
created and wait for it to switch over to the new one before deleting the
old one. Or, of course, when you update a server to some new config, give
the user a chance to test it out and make sure it works before continuing
with the stack update. All of these use cases can, I think, be solved with a
single feature.

The open questions for me are:
1) How do we notify the user that it's time to check on a resource?
(Marconi?)


This is the graceful update stuff I referred to in my mail to Clint -
the proposal from hallway discussions in HK was to do this by
notifying the server itself (that way we don't create a centralised
point of fail). I can see though that in a general sense not all
resources are servers. But - how about a

Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Mark McLoughlin
On Wed, 2014-02-05 at 17:22 +0100, Thierry Carrez wrote:
> (This email is mostly directed to PTLs for programs that include one
> integrated project)
> 
> The DefCore subcommittee from the OpenStack board of directors asked the
> Technical Committee yesterday about which code sections in each
> integrated project should be "designated sections" in the sense of [1]
> (code you're actually needed to run or include to be allowed to use the
> trademark). That determines where you can run alternate code (think:
> substitute your own private hypervisor driver) and still be able to call
> the result openstack.
> 
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
> 
> PTLs and their teams are obviously the best placed to define this, so it
> seems like the process should be: PTLs propose designated sections to
> the TC, which blesses them, combines them and forwards the result to the
> DefCore committee. We could certainly leverage part of the governance
> repo to make sure the lists are kept up to date.
> 
> Comments, thoughts ?

I think what would be useful to the board is if we could describe at a
high level which parts of each project have a pluggable interface and
whether we encourage out-of-tree implementations of those pluggable
interfaces.

That's actually a pretty tedious thing to document properly - think
about e.g. whether we encourage out-of-tree WSGI middlewares.

There's a flip-side to this "designated sections" thing that bothers me
after talking it through with Michael Still - I think it's perfectly
reasonable for vendors to e.g. backport fixes to their products without
that backport ever seeing the light of day upstream (say it was too
invasive for the stable branch).

This can't be a case of e.g. enforcing the sha1 sums of files. If we
want to go that route, let's just use the AGPL :)

I don't have a big issue with the way the Foundation currently enforces
"you must use the code" - anyone who signs a trademark agreement with
the Foundation agrees to "include the entirety of" Nova's code. That's
very vague, but I assume the Foundation can terminate the agreement if
it thinks the other party is acting in bad faith.

Basically, I'm concerned about us swinging from a rather lax "you must
include our code" rule to an overly strict "you must make no downstream
modifications to our code".

Mark.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Clint Byrum
Excerpts from Steven Dake's message of 2014-02-05 07:35:37 -0800:
> On 02/04/2014 06:34 PM, Robert Collins wrote:
> > On 5 February 2014 13:14, Zane Bitter  wrote:
> >
> >
> >> That's not a great example, because one DB server depends on the other,
> >> forcing them into updating serially anyway.
> >>
> >> I have to say that even in general, this whole idea about applying update
> >> policies to non-grouped resources doesn't make a whole lot of sense to me.
> >> For non-grouped resources you control the resource definitions individually
> >> - if you don't want them to update at a particular time, you have the 
> >> option
> >> of just not updating them.
> > Well, I don't particularly like the idea of doing thousands of
> > discrete heat stack-update calls, which would seem to be what you're
> > proposing.
> >
> > On groups: autoscale groups are a problem for secure minded
> > deployments because every server has identical resources (today) and
> > we very much want discrete credentials per server - at least this is
> > my understanding of the reason we're not using scaling groups in
> > TripleO.
> >
> >> Where you _do_ need it is for scaling groups where every server is based on
> >> the same launch config, so you need a way to control the members
> >> individually - by batching up operations (done), adding delays (done) or,
> >> even better, notifications and callbacks.
> >>
> >> So it seems like doing 'rolling' updates for any random subset of resources
> >> is effectively turning Heat into something of a poor-man's workflow 
> >> service,
> >> and IMHO that is probably a mistake.
> > I mean to reply to the other thread, but here is just as good :) -
> > heat as a way to describe the intended state, and heat takes care of
> > transitions, is a brilliant model. It absolutely implies a bunch of
> > workflows - the AWS update policy is probably the key example.
> >
> > Being able to gracefully, *automatically* work through a transition
> > between two defined states, allowing the nodes in question to take
> > care of their own needs along the way seems like a pretty core
> > function to fit inside Heat itself. Its not at all the same as 'allow
> > users to define abitrary workflows'.
> >
> > -Rob
> Rob,
> 
> I'm not precisely certain what your proposing, but I think we need to 
> take care not to turn the Heat DSL into a full-fledged programming 
> language.  IMO thousands of updates done through heat is a perfect way 
> for a third party service to do such things - eg control workflow.  
> Clearly there is a workflow gap in OpenStack, and possibly that thing 
> doing the thousands of updates should be a workflow service, rather than 
> TripleO, but workflow is out of scope for Heat proper.  Such a workflow 
> service could potentially fit in the Orchestration program alongside 
> Heat and Autoscaling.  It is too bad there isn't a workflow service 
> already because we are getting a lot of pressure to make Heat fill this 
> gap.  I personally believe filling this gap with heat would be a mistake 
> and the correct course of action would be for a workflow service to 
> emerge to fill this need (and depend on Heat for orchestration).
> 

I don't think we want to make it more programmable. I think the opposite:
we want to relieve the template author of workflow by hiding the common-case
workflows behind an update pattern.

To provide some substance to that, if we were to make a workflow service
that does this, it would have to understand templating, and it would
have to understand heat's API. By the time we get done implementing
that, it would look a lot like the resource I've suggested, surrounded
by calls to heatclient and a heat template library.
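
To make that concrete, here is a rough sketch (hypothetical, not an existing
service) of what the external-workflow alternative tends to look like: the
batching and waiting logic lives outside Heat and drives a series of discrete
stack updates through python-heatclient. The function names and the shape of
the batch data are made up for illustration.

    # Hypothetical external "rolling update" driver built around
    # python-heatclient; all ordering/batching policy lives here, not in Heat.
    import time

    from heatclient.client import Client

    def rolling_update(endpoint, token, stack_id, template, batches):
        heat = Client('1', endpoint, token=token)
        for batch_params in batches:   # e.g. which servers to touch next
            heat.stacks.update(stack_id,
                               template=template,
                               parameters=batch_params)
            _wait_for_stack(heat, stack_id)

    def _wait_for_stack(heat, stack_id):
        # Poll until the stack leaves the IN_PROGRESS state.
        while heat.stacks.get(stack_id).stack_status.endswith('IN_PROGRESS'):
            time.sleep(10)

That is essentially the "poor-man's workflow service" wrapped around
heatclient and a template library that was mentioned above.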

> I believe this may be what Zane is reacting to; I believe the Heat 
> community would like to avoid making the DSL more programmable because 
> then it is harder to use and support.  The parameters, resources, and outputs 
> DSL objects are difficult enough for new folks to pick up, and it's only 3 
> things to understand...

I do agree that keeping this simple to understand from a template author
perspective is extremely important.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Russell Bryant
On 02/05/2014 11:53 AM, Daniel P. Berrange wrote:
> On Wed, Feb 05, 2014 at 04:29:20PM +, Greg Hill wrote:
>> I'm new, so I'm sure there's some history I'm missing, but I find it
>> bizarre that we have to put the same license into every single file
>> of source code in our projects.  In my past experience, a single
>> LICENSE file at the root-level of the project has been sufficient
>> to declare the license chosen for a project.  Github even has the
>> capacity to choose a license and generate that file for you, it's
>> neat.
> 
> It is not uncommon for source from one project to be copied into another
> project in either direction. While the licenses of the two projects have
> to be compatible, they don't have to be the same. It is highly desirable
> that each file have its license explicitly declared to remove any level of
> ambiguity as to what license its code falls under. This might not seem
> like a problem now, but code lives for a very long time and what is
> clear today might not be so clear 10, 15, 20 years down the road.
> Distros like Debian and Fedora who audit project license compliance have
> learnt the hard way that you really want these per-file licenses for
> clarity of intent.

Yes, this.  :-)

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Clint Byrum
Excerpts from Greg Hill's message of 2014-02-05 08:29:20 -0800:
> I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
> that we have to put the same license into every single file of source code in 
> our projects.  In my past experience, a single LICENSE file at the root-level 
> of the project has been sufficient to declare the license chosen for a 
> project.  Github even has the capacity to choose a license and generate that 
> file for you, it's neat. 
> 

I am definitely not a lawyer, but this is what my reading has shown.

In legal terms, explicit trumps implicit. So being explicit about
our license in each copyrightable file is a hedge against somebody
forklifting the code into their own code base in a proprietary product
and just removing the license. If the header were not there, they might
have a mitigating argument that they were not aware of the license. But
by removing it, they've actively subverted the license.

In reality, I think it is because Debian Developers like me whine when
our program 'licensecheck' says "UNKNOWN" for any files. ;)
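
(For reference, the per-file header being discussed is the standard Apache 2.0
notice, which in OpenStack's Python files looks like this:)

    #    Licensed under the Apache License, Version 2.0 (the "License"); you
    #    may not use this file except in compliance with the License. You may
    #    obtain a copy of the License at
    #
    #        http://www.apache.org/licenses/LICENSE-2.0
    #
    #    Unless required by applicable law or agreed to in writing, software
    #    distributed under the License is distributed on an "AS IS" BASIS,
    #    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
    #    implied. See the License for the specific language governing
    #    permissions and limitations under the License.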

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Russell Bryant
On 02/05/2014 11:55 AM, Doug Hellmann wrote:
> 
> 
> 
> On Wed, Feb 5, 2014 at 11:22 AM, Thierry Carrez  > wrote:
> 
> (This email is mostly directed to PTLs for programs that include one
> integrated project)
> 
> The DefCore subcommittee from the OpenStack board of directors asked the
> Technical Committee yesterday about which code sections in each
> integrated project should be "designated sections" in the sense of [1]
> (code you actually need to run or include to be allowed to use the
> trademark). That determines where you can run alternate code (think:
> substitute your own private hypervisor driver) and still be able to call
> the result openstack.
> 
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
> 
> PTLs and their teams are obviously the best placed to define this, so it
> seems like the process should be: PTLs propose designated sections to
> the TC, which blesses them, combines them and forwards the result to the
> DefCore committee. We could certainly leverage part of the governance
> repo to make sure the lists are kept up to date.
> 
> Comments, thoughts ?
> 
> 
> How specific do those designations need to be? The question of the
> impact of this designation system on code organization came up, but
> wasn't really answered clearly. Do we have any cases where part of the
> code in one module might be designated core, but another part wouldn't?
> 
> For example, I could envision a module that contains code for managing
> data with CRUD operations where the delete is handled through an
> operational job rather than a public API (keystone tokens come to mind
> as an example of that sort of data, as does the data collected by
> ceilometer). While it's likely that the operational job for pruning the
> database would be used in any real deployment, is that tool part of
> "core"? Does that mean a deployer could not use an alternate mechanism
> to manage the database's growth? If the pruning tool is not core, does that
> mean the delete code is also not? Does it have to then live in a
> different module from the implementations of the other operations that
> are core?
> 
> It seems like the intent is to draw the lines between common project
> code and "drivers" or other sorts of plugins or extensions, without
> actually using those words because all of them are tied to
> implementation details. It seems better technically, and closer to the
> need of someone wanting to customize a deployment, to designate a set of
> "customization points" for each app (be they drivers, plugins,
> extensions, whatever) and say that the rest of the app is core.

Perhaps going through this process for a single project first would be
helpful.  I agree that some clarification is needed on the details of
the expected result.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Joe Gordon
On Wed, Feb 5, 2014 at 8:29 AM, Greg Hill  wrote:
> I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
> that we have to put the same license into every single file of source code in 
> our projects.  In my past experience, a single LICENSE file at the root-level 
> of the project has been sufficient to declare the license chosen for a 
> project.  Github even has the capacity to choose a license and generate that 
> file for you, it's neat.


We do it for the same reason Apache does it:

"Why is a licensing header necessary?

License headers allow someone examining the file to know the terms for
the work, even when it is distributed without the rest of the
distribution. Without a licensing notice, it must be assumed that the
author has reserved all rights, including the right to copy, modify,
and redistribute."

http://www.apache.org/legal/src-headers.html


>
> Greg
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Donald Stufft
Avoiding namespace packages is a good idea in general. At least until Python 
3.whatever is baseline. 
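
For anyone following along, the machinery in question is the shared top-level
package that every oslo.* distribution installs; a minimal sketch (the exact
form varies between the setuptools and pkgutil styles) looks like:

    # oslo/__init__.py -- shared namespace package shipped by each oslo.*
    # distribution (illustrative sketch only)
    import pkg_resources
    pkg_resources.declare_namespace(__name__)

When one member of the namespace is installed with pip install -e (an
egg-link) and another is installed normally into a virtualenv, the namespace
handling can hide the egg-link copy, which is the failure described below.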

> On Feb 5, 2014, at 10:58 AM, Doug Hellmann  
> wrote:
> 
> 
> 
> 
>> On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec  wrote:
>>> On 2014-02-05 09:05, Doug Hellmann wrote:
>>> 
>>> 
 On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec  wrote:
 On 2014-01-08 12:14, Doug Hellmann wrote:
 
 
 
> On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec  wrote:
>> On 2014-01-08 11:16, Sean Dague wrote:
>> On 01/08/2014 12:06 PM, Doug Hellmann wrote:
>> 
>>> Yeah, that's what made me start thinking oslo.sphinx should be called
>>> something else.
>>> 
>>> Sean, how strongly do you feel about not installing oslo.sphinx in
>>> devstack? I see your point, I'm just looking for alternatives to the
>>> hassle of renaming oslo.sphinx.
>> 
>> Doing the git thing is definitely not the right thing. But I guess I got
>> lost somewhere along the way about what the actual problem is. Can
>> someone write that up concisely? With all the things that have been
>> tried/failed, why certain things fail, etc.
> The problem seems to be when we pip install -e oslo.config on the system, 
> then pip install oslo.sphinx in a venv.  oslo.config is unavailable in 
> the venv, apparently because the namespace package for o.s causes the 
> egg-link for o.c to be ignored.  Pretty much every other combination I've 
> tried (regular pip install of both, or pip install -e of both, regardless 
> of where they are) works fine, but there seem to be other issues with all 
> of the other options we've explored so far.
> 
> We can't remove the pip install -e of oslo.config because it has to be 
> used for gating, and we can't pip install -e oslo.sphinx because it's not 
> a runtime dep so it doesn't belong in the gate.  Changing the toplevel 
> package for oslo.sphinx was also mentioned, but has obvious drawbacks too.
> 
> I think that about covers what I know so far.
 Here's a link dstufft provided to the pip bug tracking this problem: 
 https://github.com/pypa/pip/issues/3
 Doug
 This just bit me again trying to run unit tests against a fresh Nova tree. 
I don't think it's just me either - Matt Riedemann said he has been 
 disabling site-packages in tox.ini for local tox runs.  We really need to 
 do _something_ about this, even if it's just disabling site-packages by 
 default in tox.ini for the affected projects.  A different option would be 
 nice, but based on our previous discussion I'm not sure we're going to 
 find one.
 Thoughts?
>>>  
>>> Is the problem isolated to oslo.sphinx? That is, do we end up with any 
>>> configurations where we have 2 oslo libraries installed in different modes 
>>> (development and "regular") where one of those 2 libraries is not 
>>> oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename 
>>> that to move it out of the namespace package.
>> 
>> oslo.sphinx is the only one that has triggered this for me so far.  I think 
>> it's less likely to happen with the others because they tend to be runtime 
>> dependencies so they get installed in devstack, whereas oslo.sphinx doesn't 
>> because it's a build dep (AIUI anyway).
> 
> That's pretty much what I expected.
> 
> Can we get a volunteer to work on renaming oslo.sphinx?
> 
> Doug
>  
>>>  
>>> Doug
 -Ben
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] savann-ci, Re: [savanna] Alembic migrations and absence of DROP column in sqlite

2014-02-05 Thread Trevor McKay
Hi Sergey,

  Is there a bug or a blueprint for this?  I did a quick search but
didn't see one.

Thanks,

Trevor

On Wed, 2014-02-05 at 16:06 +0400, Sergey Kolekonov wrote:
> I'm currently working on moving to MySQL for savanna-ci
> 
> 
> On Wed, Feb 5, 2014 at 3:53 PM, Sergey Lukjanov
>  wrote:
> Agreed, let's move on to the MySQL for savanna-ci to run
> integration tests against production-like DB.
> 
> 
> On Wed, Feb 5, 2014 at 1:54 AM, Andrew Lazarev
>  wrote:
> Since sqlite is not in the list of "databases that
> would be used in production", CI should use other DB
> for testing.
> 
> 
> Andrew.
> 
> 
> On Tue, Feb 4, 2014 at 1:13 PM, Alexander Ignatov
>  wrote:
> Indeed. We should create a bug around that and
> move our savanna-ci to mysql.
> 
> Regards,
> Alexander Ignatov
> 
> 
> 
> On 05 Feb 2014, at 01:01, Trevor McKay
>  wrote:
> 
> > This brings up an interesting problem:
> >
> > In https://review.openstack.org/#/c/70420/
> I've added a migration that
> > uses a drop column for an upgrade.
> >
> > But savann-ci is apparently using a sqlite
> database to run.  So it can't
> > possibly pass.
> >
> > What do we do here?  Shift savanna-ci tests
> to non sqlite?
> >
> > Trevor
> >
> > On Sat, 2014-02-01 at 18:17 +0200, Roman
> Podoliaka wrote:
> >> Hi all,
> >>
> >> My two cents.
> >>
> >>> 2) Extend alembic so that op.drop_column()
> does the right thing
> >> We could, but should we?
> >>
> >> The only reason alembic doesn't support
> these operations for SQLite
> >> yet is that SQLite lacks proper support of
> ALTER statement. For
> >> sqlalchemy-migrate we've been providing a
> work-around in the form of
> >> recreating of a table and copying of all
> existing rows (which is a
> >> hack, really).
> >>
> >> But to be able to recreate a table, we
> first must have its definition.
> >> And we've been relying on SQLAlchemy schema
> reflection facilities for
> >> that. Unfortunately, this approach has a
> few drawbacks:
> >>
> >> 1) SQLAlchemy versions prior to 0.8.4 don't
> support reflection of
> >> unique constraints, which means the
> recreated table won't have them;
> >>
> >> 2) special care must be taken in 'edge'
> cases (e.g. when you want to
> >> drop a BOOLEAN column, you must also drop
> the corresponding CHECK (col
> >> in (0, 1)) constraint manually, or SQLite
> will raise an error when the
> >> table is recreated without the column being
> dropped)
> >>
> >> 3) special care must be taken for 'custom'
> type columns (it's got
> >> better with SQLAlchemy 0.8.x, but e.g. in
> 0.7.x we had to override
> >> definitions of reflected BIGINT columns
> manually for each
> >> column.drop() call)
> >>
> >> 4) schema reflection can't be performed
> when alembic migrations are
> >> run in 'offline' mode (without connecting
> to a DB)
> >> ...
> >> (probably something else I've forgotten

[openstack-dev] [Openstack-dev] [Oslo] [Fuel] [Fuel-dev] Openstack services should support SIGHUP signal

2014-02-05 Thread Bogdan Dobrelya
Hi, stackers.
I believe OpenStack services from all projects should support SIGHUP for
effective log/config file handling without unnecessary restarts.
(See https://bugs.launchpad.net/oslo/+bug/1276694)

'Smooth reloads' (kill -HUP) are much better than 'disturbing restarts',
aren't they?
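
To make the expectation concrete, here is a minimal sketch (purely
illustrative -- the paths, names and reload details are made up, this is not
the oslo implementation) of what SIGHUP handling in a long-running service
amounts to: re-read the config file and reopen log files without restarting
the process.

    import logging
    import signal

    CONF_FILE = "/etc/myservice/myservice.conf"   # hypothetical path
    LOG = logging.getLogger("myservice")

    def _handle_sighup(signum, frame):
        LOG.info("SIGHUP received: reloading %s and reopening logs", CONF_FILE)
        # Re-parse configuration here (stub -- a real service would rebuild
        # its option values, e.g. via oslo.config).
        for handler in list(LOG.handlers):
            if isinstance(handler, logging.FileHandler):
                LOG.removeHandler(handler)
                handler.close()
                # Reopen so a rotated/renamed log file is picked up.
                LOG.addHandler(logging.FileHandler(handler.baseFilename))

    signal.signal(signal.SIGHUP, _handle_sighup)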

-- 
Best regards,
Bogdan Dobrelya,
Skype #bogdando_at_yahoo.com
Irc #bogdando

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 11:44 AM, Ben Nemec  wrote:

>  On 2014-02-05 09:05, Doug Hellmann wrote:
>
>
> On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec  wrote:
>
>>  On 2014-01-08 12:14, Doug Hellmann wrote:
>>
>>
>>
>> On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec wrote:
>>
>>> On 2014-01-08 11:16, Sean Dague wrote:
>>>
 On 01/08/2014 12:06 PM, Doug Hellmann wrote:
 

> Yeah, that's what made me start thinking oslo.sphinx should be called
> something else.
>
> Sean, how strongly do you feel about not installing oslo.sphinx in
> devstack? I see your point, I'm just looking for alternatives to the
> hassle of renaming oslo.sphinx.


 Doing the git thing is definitely not the right thing. But I guess I got
 lost somewhere along the way about what the actual problem is. Can
 someone write that up concisely? With all the things that have been
 tried/failed, why certain things fail, etc.
>>>
>>>  The problem seems to be when we pip install -e oslo.config on the
>>> system, then pip install oslo.sphinx in a venv.  oslo.config is unavailable
>>> in the venv, apparently because the namespace package for o.s causes the
>>> egg-link for o.c to be ignored.  Pretty much every other combination I've
>>> tried (regular pip install of both, or pip install -e of both, regardless
>>> of where they are) works fine, but there seem to be other issues with all
>>> of the other options we've explored so far.
>>>
>>> We can't remove the pip install -e of oslo.config because it has to be
>>> used for gating, and we can't pip install -e oslo.sphinx because it's not a
>>> runtime dep so it doesn't belong in the gate.  Changing the toplevel
>>> package for oslo.sphinx was also mentioned, but has obvious drawbacks too.
>>>
>>> I think that about covers what I know so far.
>>
>>  Here's a link dstufft provided to the pip bug tracking this problem:
>> https://github.com/pypa/pip/issues/3
>> Doug
>>
>>   This just bit me again trying to run unit tests against a fresh Nova
>> tree. I don't think it's just me either - Matt Riedemann said he has
>> been disabling site-packages in tox.ini for local tox runs.  We really need
>> to do _something_ about this, even if it's just disabling site-packages by
>> default in tox.ini for the affected projects.  A different option would be
>> nice, but based on our previous discussion I'm not sure we're going to find
>> one.
>> Thoughts?
>>
>
>  Is the problem isolated to oslo.sphinx? That is, do we end up with any
> configurations where we have 2 oslo libraries installed in different modes
> (development and "regular") where one of those 2 libraries is not
> oslo.sphinx? Because if the issue is really just oslo.sphinx, we can rename
> that to move it out of the namespace package.
>
>
>   oslo.sphinx is the only one that has triggered this for me so far.  I
> think it's less likely to happen with the others because they tend to be
> runtime dependencies so they get installed in devstack, whereas oslo.sphinx
> doesn't because it's a build dep (AIUI anyway).
>

That's pretty much what I expected.

Can we get a volunteer to work on renaming oslo.sphinx?

Doug


>
> Doug
>
>>   -Ben
>>
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Russell Bryant
On 02/05/2014 11:22 AM, Thierry Carrez wrote:
> (This email is mostly directed to PTLs for programs that include one
> integrated project)
> 
> The DefCore subcommittee from the OpenStack board of directors asked the
> Technical Committee yesterday about which code sections in each
> integrated project should be "designated sections" in the sense of [1]
> (code you actually need to run or include to be allowed to use the
> trademark). That determines where you can run alternate code (think:
> substitute your own private hypervisor driver) and still be able to call
> the result openstack.
> 
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
> 
> PTLs and their teams are obviously the best placed to define this, so it
> seems like the process should be: PTLs propose designated sections to
> the TC, which blesses them, combines them and forwards the result to the
> DefCore committee. We could certainly leverage part of the governance
> repo to make sure the lists are kept up to date.
> 
> Comments, thoughts ?
> 

The process you suggest is what I would prefer.  (PTLs writing proposals
for TC to approve)

Using the governance repo makes sense as a means for the PTLs to post
their proposals for review and approval of the TC.

Who gets final say if there's strong disagreement between a PTL and the
TC?  Hopefully this won't matter, but it may be useful to go ahead and
clear this up front.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Daniel P. Berrange
On Wed, Feb 05, 2014 at 05:40:13PM +0100, Chmouel Boudjnah wrote:
> On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann
> wrote:
> 
> > Including the config file in either the developer documentation or the
> > packaging build makes more sense. I'm still worried that adding it to the
> > sdist generation means you would have to have a lot of tools installed just
> > to make the sdist. However, we could
> 
> 
> 
> I think that may slightly complicate devstack more, since we rely
> heavily on config samples to set up the services.

devstack has to check out nova and run its build + install steps, so it
would have a full sample config available to use. So I don't think that
generating config at build time would have any real negative impact on
devstack overall.

Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 11:22 AM, Thierry Carrez wrote:

> (This email is mostly directed to PTLs for programs that include one
> integrated project)
>
> The DefCore subcommittee from the OpenStack board of directors asked the
> Technical Committee yesterday about which code sections in each
> integrated project should be "designated sections" in the sense of [1]
> (code you actually need to run or include to be allowed to use the
> trademark). That determines where you can run alternate code (think:
> substitute your own private hypervisor driver) and still be able to call
> the result openstack.
>
> [1] https://wiki.openstack.org/wiki/Governance/CoreDefinition
>
> PTLs and their teams are obviously the best placed to define this, so it
> seems like the process should be: PTLs propose designated sections to
> the TC, which blesses them, combines them and forwards the result to the
> DefCore committee. We could certainly leverage part of the governance
> repo to make sure the lists are kept up to date.
>
> Comments, thoughts ?
>

How specific do those designations need to be? The question of the impact
of this designation system on code organization came up, but wasn't really
answered clearly. Do we have any cases where part of the code in one module
might be designated core, but another part wouldn't?

For example, I could envision a module that contains code for managing data
with CRUD operations where the delete is handled through an operational job
rather than a public API (keystone tokens come to mind as an example of
that sort of data, as does the data collected by ceilometer). While it's
likely that the operational job for pruning the database would be used in
any real deployment, is that tool part of "core"? Does that mean a deployer
could not use an alternate mechanism to manage the database's growth? If the
pruning tool is not core, does that mean the delete code is also not? Does
it have to then live in a different module from the implementations of the
other operations that are core?

It seems like the intent is to draw the lines between common project code
and "drivers" or other sorts of plugins or extensions, without actually
using those words because all of them are tied to implementation details.
It seems better technically, and closer to the need of someone wanting to
customize a deployment, to designate a set of "customization points" for
each app (be they drivers, plugins, extensions, whatever) and say that the
rest of the app is core.

Doug



>
> --
> Thierry Carrez (ttx)
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] olso.config error on running Devstack

2014-02-05 Thread Ben Nemec
 

On 2014-02-05 09:05, Doug Hellmann wrote: 

> On Tue, Feb 4, 2014 at 5:14 PM, Ben Nemec  wrote:
> 
> On 2014-01-08 12:14, Doug Hellmann wrote: 
> 
> On Wed, Jan 8, 2014 at 12:37 PM, Ben Nemec  wrote:
> 
> On 2014-01-08 11:16, Sean Dague wrote:
> On 01/08/2014 12:06 PM, Doug Hellmann wrote:
> 
> Yeah, that's what made me start thinking oslo.sphinx should be called
> something else.
> 
> Sean, how strongly do you feel about not installing oslo.sphinx in
> devstack? I see your point, I'm just looking for alternatives to the
> hassle of renaming oslo.sphinx. 
> Doing the git thing is definitely not the right thing. But I guess I got
> lost somewhere along the way about what the actual problem is. Can
> someone write that up concisely? With all the things that have been
> tried/failed, why certain things fail, etc.
 The problem seems to be when we pip install -e oslo.config on the
system, then pip install oslo.sphinx in a venv. oslo.config is
unavailable in the venv, apparently because the namespace package for
o.s causes the egg-link for o.c to be ignored. Pretty much every other
combination I've tried (regular pip install of both, or pip install -e
of both, regardless of where they are) works fine, but there seem to be
other issues with all of the other options we've explored so far.

 We can't remove the pip install -e of oslo.config because it has to be
used for gating, and we can't pip install -e oslo.sphinx because it's
not a runtime dep so it doesn't belong in the gate. Changing the
toplevel package for oslo.sphinx was also mentioned, but has obvious
drawbacks too.

 I think that about covers what I know so far. 

Here's a link dstufft provided to the pip bug tracking this problem:
https://github.com/pypa/pip/issues/3 [1] 
Doug 

This just bit me again trying to run unit tests against a fresh Nova
tree. I don't think it's just me either - Matt Riedemann said he has
been disabling site-packages in tox.ini for local tox runs. We really
need to do _something_ about this, even if it's just disabling
site-packages by default in tox.ini for the affected projects. A
different option would be nice, but based on our previous discussion I'm
not sure we're going to find one. 
Thoughts? 

Is the problem isolated to oslo.sphinx? That is, do we end up with any
configurations where we have 2 oslo libraries installed in different
modes (development and "regular") where one of those 2 libraries is not
oslo.sphinx? Because if the issue is really just oslo.sphinx, we can
rename that to move it out of the namespace package. 

oslo.sphinx is the only one that has triggered this for me so far. I
think it's less likely to happen with the others because they tend to be
runtime dependencies so they get installed in devstack, whereas
oslo.sphinx doesn't because it's a build dep (AIUI anyway). 

> Doug 
> 
>> -Ben

 

Links:
--
[1] https://github.com/pypa/pip/issues/3
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Doug Hellmann
On Wed, Feb 5, 2014 at 11:40 AM, Chmouel Boudjnah wrote:

>
> On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann  > wrote:
>
>> Including the config file in either the developer documentation or the
>> packaging build makes more sense. I'm still worried that adding it to the
>> sdist generation means you would have to have a lot of tools installed just
>> to make the sdist. However, we could
>
>
>
> I think that may slightly complicate devstack more, since we rely
> heavily on config samples to set up the services.
>

Good point, we would need to add a step to generate a sample config for
each app instead of just copying the one in the source repository.

Doug



>
>
> Chmouel.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Donald Stufft
It's nice when someone takes a file from the project: they get the license 
information transmitted automatically without needing to do extra work. 

> On Feb 5, 2014, at 10:46 AM, Jay Pipes  wrote:
> 
>> On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
>> I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
>> that we have to put the same license into every single file of source code 
>> in our projects.
> 
> Meh, probably just habit and copy/paste behavior.
> 
>>  In my past experience, a single LICENSE file at the root-level of the 
>> project has been sufficient to declare the license chosen for a project.
> 
> Agreed, and the git history is enough to figure out who worked on a
> particular file. But there have been many discussions about this topic
> over the years, and it's just not been a priority, frankly.
> 
>> Github even has the capacity to choose a license and generate that file for 
>> you, it's neat.
> 
> True, but we don't use GitHub :) We only use it as a mirror for
> Gerrit.
> 
> Best,
> -jay
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Daniel P. Berrange
On Wed, Feb 05, 2014 at 04:29:20PM +, Greg Hill wrote:
> I'm new, so I'm sure there's some history I'm missing, but I find it
> bizarre that we have to put the same license into every single file
> of source code in our projects.  In my past experience, a single
> LICENSE file at the root-level of the project has been sufficient
> to declare the license chosen for a project.  Github even has the
> capacity to choose a license and generate that file for you, it's
> neat.

It is not uncommon for source from one project to be copied into another
project in either direction. While the licenses of the two projects have
to be compatible, they don't have to be the same. It is highly desirable
that each file have its license explicitly declared to remove any level of
ambiguity as to what license its code falls under. This might not seem
like a problem now, but code lives for a very long time and what is
clear today might not be so clear 10, 15, 20 years down the road.
Distros like Debian and Fedora who audit project license compliance have
learnt the hard way that you really want these per-file licenses for
clarity of intent.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] why do we put a license in every file?

2014-02-05 Thread Jay Pipes
On Wed, 2014-02-05 at 16:29 +, Greg Hill wrote:
> I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
> that we have to put the same license into every single file of source code in 
> our projects.

Meh, probably just habit and copy/paste behavior.

>   In my past experience, a single LICENSE file at the root-level of the 
> project has been sufficient to declare the license chosen for a project.

Agreed, and the git history is enough to figure out who worked on a
particular file. But there have been many discussions about this topic
over the years, and it's just not been a priority, frankly.

> Github even has the capacity to choose a license and generate that file for 
> you, it's neat.

True, but we don't use GitHub :) We only use it as a mirror for
Gerrit.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Manila] Modularity of generic driver (network mediated)

2014-02-05 Thread Ramana Raja
Hi,

The first prototype of the multi-tenant capable GlusterFS driver would 
piggyback on the generic driver, which implements the network plumbing model 
[1]. We'd have NFS-Ganesha server running on the service VM. The Ganesha server 
would mediate access to the GlusterFS backend (or any other Ganesha compatible 
clustered file system backends such as CephFS, GPFS, among others), while the 
tenant network isolation would be done by the service VM networking [2][3]. To 
implement this idea, we'd have to reuse much of the generic driver code 
especially that related to the service VM networking.

So we were wondering whether the current generic driver can be made more 
modular. The service VM would then not just be used to expose a formatted 
cinder volume, but would instead be an instrument to convert the existing 
single-tenant drivers (with slight modification) - LVM, GlusterFS - into 
multi-tenant-ready drivers. Do you see any issues with this thought - the 
generic driver as a modular multi-tenant driver that implements the network 
plumbing model? And is this idea feasible?
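
To illustrate what "more modular" could mean in practice, here is a very
rough sketch (class and method names are made up, not actual Manila code):
the service-VM/network plumbing from the generic driver is factored into a
reusable helper that backend-specific drivers compose.

    class ServiceInstanceHelper(object):
        """Creates or reuses the per-tenant service VM used for mediation."""

        def ensure_service_instance(self, context, share_network_id):
            # Set up (or look up) the service VM wired into the tenant
            # network and return connection details for it.
            return {"instance_id": "...", "ip": "..."}

    class GaneshaGlusterFSDriver(object):
        """Multi-tenant GlusterFS driver mediating access via NFS-Ganesha."""

        def __init__(self):
            self.service_helper = ServiceInstanceHelper()

        def create_share(self, context, share):
            server = self.service_helper.ensure_service_instance(
                context, share["share_network_id"])
            # Configure the NFS-Ganesha server running on 'server' to export
            # the GlusterFS (or CephFS/GPFS, etc.) backend for this share.
            return "%s:/%s" % (server["ip"], share["name"])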


[1] https://wiki.openstack.org/wiki/Manila_Networking
[2] 
https://docs.google.com/document/d/1WBjOq0GiejCcM1XKo7EmRBkOdfe4f5IU_Hw1ImPmDRU/edit
[3] 
https://docs.google.com/a/mirantis.com/drawings/d/1Fw9RPUxUCh42VNk0smQiyCW2HGOGwxeWtdVHBB5J1Rw/edit

Thanks,

Ram

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pep8 gating fails due to tools/config/check_uptodate.sh

2014-02-05 Thread Chmouel Boudjnah
On Wed, Feb 5, 2014 at 4:20 PM, Doug Hellmann
wrote:

> Including the config file in either the developer documentation or the
> packaging build makes more sense. I'm still worried that adding it to the
> sdist generation means you would have to have a lot of tools installed just
> to make the sdist. However, we could



I think that may slightly complicate devstack more, since we rely
heavily on config samples to set up the services.

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [cinder][neutron][nova][3rd party testing] Gerrit Jenkins plugin will not fulfill requirements of 3rd party testing

2014-02-05 Thread Jay Pipes
On Wed, 2014-02-05 at 15:50 +0400, Sergey Lukjanov wrote:
> Hi Jay,
> 
> it's really very easy to setup Zuul for it (we're using one for
> Savanna CI).

Yes, I set up Zuul for AT&T's gate system, thx.

> There are some useful links:
> 
> * check pipeline as an example of zuul layout configuration
> - 
> https://github.com/openstack-infra/config/blob/master/modules/openstack_project/files/zuul/layout.yaml#L5
> * zuul docs - http://ci.openstack.org/zuul/
> * zuul config sample
> - https://github.com/openstack-infra/zuul/blob/master/etc/zuul.conf-sample
> 
> So, I think that it could be easy enough to setup Zuul for 3rd party
> testing, but it'll be better to have some doc about it.

Yeah, I proposed in my email that I would do the documentation for using
Zuul as the trigger from Gerrit (see below). However, I didn't think I'd
get the docs done in a timely fashion and proposed relaxing the
requirement for "recheck" triggers until that documentation was complete
(since most of the vendors I have spoken with have used the Jenkins
Gerrit plugin and not Zuul as their triggering agent)

Best,
-jay

> 
> On Wed, Feb 5, 2014 at 3:55 AM, Jay Pipes  wrote:
> Sorry for cross-posting to both mailing lists, but there's
> lots of folks
> working on setting up third-party testing platforms that are
> not members
> of the openstack-infra ML...
> 
> tl;dr
> -
> 
> The third party testing documentation [1] has requirements [2]
> that
> include the ability to trigger a recheck based on a gerrit
> comment.
> 
> Unfortunately, the Gerrit Jenkins Trigger plugin [3] does not
> have the
> ability to trigger job runs based on a regex-filtered comment
> (only on
> the existence of any new comment to the code review).
> 
> Therefore, we either should:
> 
> a) Relax the requirement that the third party system trigger
> test
> re-runs when a comment including the word "recheck" appears in
> the
> Gerrit event stream
> 
> b) Modify the Jenkins Gerrit plugin to support regex filtering
> on the
> comment text (in the same way that it currently supports regex
> filtering
> on the project name)
> 
> or
> 
> c) Add documentation to the third party testing pages that
> explains how
> to use Zuul as a replacement for the Jenkins Gerrit plugin.
> 
> I propose we do a) for the short term, and I'll work on c)
> long term.
> However, I'm throwing this out there just in case there are
> some Java
> and Jenkins whizzes out there that could get b) done in a
> jiffy.
> 
> details
> ---
> 
> OK, so I've been putting together documentation on how to set
> up an
> external Jenkins platform that is "linked" [4] with the
> upstream
> OpenStack CI system.
> 
> Recently, I wrote an article detailing how the upstream CI
> system
> worked, including a lot of the gory details from the
> openstack-infra/config project's files. [5]
> 
> I've been working on a follow-up article that goes through how
> to set up
> a Jenkins system, and in writing that article, I created a
> source
> repository [6] that contains scripts, instructions and Puppet
> modules
> that set up a Jenkins system, the Jenkins Job Builder tool,
> and
> installs/configures the Jenkins Gerrit plugin [7].
> 
> I planned to use the Jenkins Gerrit plugin as the mechanism
> that
> triggers Jenkins jobs on the external system based on gerrit
> events
> published by the OpenStack review.openstack.org Gerrit
> service. In
> addition to being mentioned in the third party documentation,
> Jenkins
> Job Builder has the ability to construct Jenkins jobs that are
> triggered
> by the Jenkins Gerrit plugin [8].
> 
> Unforunately, I've run into a bit of a snag.
> 
> The third party testing documentation has requirements that
> include the
> ability to trigger a recheck based on a gerrit comment:
> 
> 
> Support recheck to request re-running a test.
>  * Support the following syntaxes recheck no bug and recheck
> bug ###.
>  * Recheck means recheck everything. A single recheck comment
> should
> re-trigger all testing systems.
> 
> 
> The documentation has a section on using the Gerrit Jenkins
> Trigger
> plugin [3] to accept notifications from the upstream OpenStack
> Gerrit
> instance.
>
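
(As an aside, the regex filtering described in option (b) amounts to
something like the following check on each new Gerrit comment; the pattern
below illustrates the quoted requirement and is not the official upstream
syntax.)

    import re

    # Matches "recheck no bug" and "recheck bug ###" per the requirement.
    RECHECK_RE = re.compile(r'^\s*recheck (no bug|bug \d+)\s*$', re.IGNORECASE)

    def should_retrigger(gerrit_comment):
        return any(RECHECK_RE.match(line)
                   for line in gerrit_comment.splitlines())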

Re: [openstack-dev] [Heat] [TripleO] Rolling updates spec re-written. RFC

2014-02-05 Thread Clint Byrum
Excerpts from Zane Bitter's message of 2014-02-04 16:14:09 -0800:
> On 03/02/14 17:09, Clint Byrum wrote:
> > Excerpts from Thomas Herve's message of 2014-02-03 12:46:05 -0800:
> >>> So, I wrote the original rolling updates spec about a year ago, and the
> >>> time has come to get serious about implementation. I went through it and
> >>> basically rewrote the entire thing to reflect the knowledge I have
> >>> gained from a year of working with Heat.
> >>>
> >>> Any and all comments are welcome. I intend to start implementation very
> >>> soon, as this is an important component of the HA story for TripleO:
> >>>
> >>> https://wiki.openstack.org/wiki/Heat/Blueprints/RollingUpdates
> >>
> >> Hi Clint, thanks for pushing this.
> >>
> >> First, I don't think RollingUpdatePattern and CanaryUpdatePattern should 
> >> be 2 different entities. The second just looks like a parametrization of 
> >> the first (growth_factor=1?).
> >
> > Perhaps they can just be one. Until I find parameters which would need
> > to mean something different, I'll just use UpdatePattern.
> >
> >>
> >> I then feel that using (abusing?) depends_on for update pattern is a bit 
> >> weird. Maybe I'm influenced by the CFN design, but the separate 
> >> UpdatePolicy attribute feels better (although I would probably use a 
> >> property). I guess my main question is around the meaning of using the 
> >> update pattern on a server instance. I think I see what you want to do for 
> >> the group, where child_updating would return a number, but I have no idea 
> >> what it means for a single resource. Could you detail the operation a bit 
> >> more in the document?
> >>
> >
> > I would be o-k with adding another keyword. The idea in abusing depends_on
> > is that it changes the core language less. Properties is definitely out
> > for the reasons Christopher brought up, properties is really meant to
> > be for the resource's end target only.
> 
> Agree, -1 for properties - those belong to the resource, and this data 
> belongs to Heat.
> 
> > UpdatePolicy in cfn is a single string, and causes very generic rolling
> 
> Huh?
> 
> http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html
> 
> Not only is it not just a single string (in fact, it looks a lot like 
> the properties you have defined), it's even got another layer of 
> indirection so you can define different types of update policy (rolling 
> vs. canary, anybody?). It's an extremely flexible syntax.
> 

Oops, I relied a little too much on my memory and not enough on docs for
that one. O-k, I will re-evaluate given actual knowledge of how it
actually works. :-P

> BTW, given that we already implemented this in autoscaling, it might be 
> helpful to talk more specifically about what we need to do in addition 
> in order to support the use cases you have in mind.
> 

As Robert mentioned in his mail, autoscaling groups won't allow us to
inject individual credentials. With the ResourceGroup, we can make a
nested stack with a random string generator so that is solved. Now the
other piece we need is to be able to directly choose machines to take
out of commission, which I think we may have a simple solution to but I
don't want to derail on that.

The one used in AutoScalingGroups is also limited to just one group,
thus it can be done all inside the resource.

> > update behavior. I want this resource to be able to control multiple
> > groups as if they are one in some cases (Such as a case where a user
> > has migrated part of an app to a new type of server, but not all.. so
> > they will want to treat the entire aggregate as one rolling update).
> >
> > I'm o-k with overloading it to allow resource references, but I'd like
> > to hear more people take issue with depends_on before I select that
> > course.
> 
> Resource references in general, and depends_on in particular, feel like 
> very much the wrong abstraction to me. This is a policy, not a resource.
> 
> > To answer your question, using it with a server instance allows
> > rolling updates across non-grouped resources. In the example the
> > rolling_update_dbs does this.
> 
> That's not a great example, because one DB server depends on the other, 
> forcing them into updating serially anyway.
> 

You're right, a better example is a set of (n) resource groups which
serve the same service and thus we want to make sure we maintain the
minimum service levels as a whole.

If it were an order of magnitude harder to do it this way, I'd say
sure let's just expand on the single-resource rolling update. But
I think it won't be that much harder to achieve this and then the use
case is solved.

> I have to say that even in general, this whole idea about applying 
> update policies to non-grouped resources doesn't make a whole lot of 
> sense to me. For non-grouped resources you control the resource 
> definitions individually - if you don't want them to update at a 
> particular time, you have the option of just not up

[openstack-dev] [Horizon] RFC - Suggestion for switching from Less to Sass (Bootstrap 3 & Sass support)

2014-02-05 Thread Jaromir Coufal

Dear Horizoners,

in the last few days there were a couple of interesting discussions about 
updating to Bootstrap 3. In this e-mail, I would love to give a small summary 
and propose a solution for us.


As Bootstrap was heavily dependent on Less, when we got rid of node.js 
we started to use lesscpy. Unfortunately, because of this change we were 
unable to update to Bootstrap 3. Fixing lesscpy looks problematic - 
there are issues with supporting all use cases, and even if we fix them 
at some point, we might face the same issues again in the future.


There is great news for Bootstrap: it has started to support Sass [0]. 
(Thanks to Toshi and MaxV for highlighting this news!)


Thanks to this step forward, we might get out of our lesscpy issues by 
switching to Sass. I am very happy with this possible change, since Sass 
is more powerful than Less and we will be able to update our libraries 
without any constraints.


There are a few downsides - we will need to change our Horizon Less files 
to Sass, but based on discussions with some Horizon folks it shouldn't be 
a very big deal. We can actually do it as a part of the Bootstrap 
update [1] (or the CSS files restructuring [2]).


Another concern is the compiler. So far I've found these options:
* rails dependency (how big a problem would it be?)
* https://pypi.python.org/pypi/scss/0.7.1
* https://pypi.python.org/pypi/SassPython/0.2.1
* ... (other suggestions?)

A nice benefit of Sass is that we can take advantage of the Compass 
framework [3], which will save us a lot of energy when writing (not just 
cross-browser) stylesheets, thanks to its mixins.


When we discussed this on IRC with Horizoners, it looked like a good way 
to move us forward. So I am here, bringing this 
suggestion up to the whole community.


My proposal for Horizon is to *switch from Less to Sass*. Then we can 
unblock our already existing BPs, get Bootstrap updates and include the 
Compass framework. I believe this is all doable in the Icehouse timeframe 
if there are no problems with the compilers.


Thoughts?

-- Jarda

[0] http://getbootstrap.com/getting-started/
[1] https://blueprints.launchpad.net/horizon/+spec/bootstrap-update
[2] https://blueprints.launchpad.net/horizon/+spec/css-breakdown
[3] http://compass-style.org/

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Icehouse Requirements

2014-02-05 Thread Jaromir Coufal

On 2014/05/02 15:27, Tzu-Mainn Chen wrote:

Hi,

In parallel to Jarda's updated wireframes, and based on various discussions 
over the past
weeks, here are the updated Tuskar requirements for Icehouse:

https://wiki.openstack.org/wiki/TripleO/TuskarIcehouseRequirements

Any feedback is appreciated.  Thanks!

Tzu-Mainn Chen


+1 looks good to me!

-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] why do we put a license in every file?

2014-02-05 Thread Greg Hill
I'm new, so I'm sure there's some history I'm missing, but I find it bizarre 
that we have to put the same license into every single file of source code in 
our projects.  In my past experience, a single LICENSE file at the root-level 
of the project has been sufficient to declare the license chosen for a project. 
 Github even has the capacity to choose a license and generate that file for 
you, it's neat. 

Greg



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-docs] Conventions on naming

2014-02-05 Thread Steve Gordon
- Original Message -
> From: "Andreas Jaeger" 
> To: "Mark McLoughlin" , "OpenStack Development Mailing 
> List (not for usage questions)"
> 
> Cc: "Jonathan Bryce" 
> Sent: Wednesday, February 5, 2014 9:17:39 AM
> Subject: Re: [openstack-dev] [Openstack-docs] Conventions on naming
> 
> On 02/05/2014 01:09 PM, Mark McLoughlin wrote:
> > On Wed, 2014-02-05 at 11:52 +0100, Thierry Carrez wrote:
> >> Steve Gordon wrote:
>  From: "Anne Gentle" 
>  Based on today's Technical Committee meeting and conversations with the
>  OpenStack board members, I need to change our Conventions for service
>  names
>  at
>  https://wiki.openstack.org/wiki/Documentation/Conventions#Service_and_project_names
>  .
> 
>  Previously we have indicated that Ceilometer could be named OpenStack
>  Telemetry and Heat could be named OpenStack Orchestration. That's not
>  the
>  case, and we need to change those names.
> 
>  To quote the TC meeting, ceilometer and heat are "other modules" (second
>  sentence from 4.1 in
>  http://www.openstack.org/legal/bylaws-of-the-openstack-foundation/)
>  distributed with the Core OpenStack Project.
> 
>  Here's what I intend to change the wiki page to:
>   Here's the list of project and module names and their official names
>   and
>  capitalization:
> 
>  Ceilometer module
>  Cinder: OpenStack Block Storage
>  Glance: OpenStack Image Service
>  Heat module
>  Horizon: OpenStack dashboard
>  Keystone: OpenStack Identity Service
>  Neutron: OpenStack Networking
>  Nova: OpenStack Compute
>  Swift: OpenStack Object Storage
> >>
> >> Small correction. The TC had not indicated that Ceilometer could be
> >> named "OpenStack Telemetry" and Heat could be named "OpenStack
> >> Orchestration". We formally asked[1] the board to allow (or disallow)
> >> that naming (or more precisely, that use of the trademark).
> >>
> >> [1]
> >> https://github.com/openstack/governance/blob/master/resolutions/20131106-ceilometer-and-heat-official-names
> >>
> >> We haven't got a formal and clear answer from the board on that request
> >> yet. I suspect they are waiting for progress on DefCore before deciding.
> >>
> >> If you need an answer *now* (and I suspect you do), it might make sense
> >> to ask foundation staff/lawyers about using those OpenStack names with
> >> the current state of the bylaws and trademark usage rules, rather than
> >> the hypothetical future state under discussion.
> > 
> > Basically, yes - I think having the Foundation confirm that it's
> > appropriate to use "OpenStack Telemetry" in the docs is the right thing.
> > 
> > There's an awful lot of confusion about the subject and, ultimately,
> > it's the Foundation staff who are responsible for enforcing (and giving
> > advise to people on) the trademark usage rules. I've cc-ed Jonathan so
> > he knows about this issue.
> > 
> > But FWIW, the TC's request is asking for Ceilometer and Heat to be
> > allowed use their "Telemetry" and "Orchestration" names in *all* of the
> > circumstances where e.g. Nova is allowed use its "Compute" name.
> > 
> > Reading again this clause in the bylaws:
> > 
> >   "The other modules which are part of the OpenStack Project, but
> >not the Core OpenStack Project may not be identified using the
> >OpenStack trademark except when distributed with the Core OpenStack
> >Project."
> > 
> > it could well be said that this case of naming conventions in the docs
> > for the entire OpenStack Project falls under the "distributed with" case
> > and it is perfectly fine to refer to "OpenStack Telemetry" in the docs.
> > I'd really like to see the Foundation staff give their opinion on this,
> > though.
> 
> What Steve is asking IMO is whether we have to change "OpenStack
> Telemetry" to "Ceilometer module" or whether we can just say "Telemetry"
> without the OpenStack in front of it,
> 
> Andreas

Constraining myself to the topic of what we should be using in the 
documentation, yes this is what I'm asking. This makes more sense to me than 
switching to calling them the "Heat module" and "Ceilometer module" because:

1) It resolves the issue of using the OpenStack mark where it (apparently) 
shouldn't be used.
2) It means we're still using the "formal" name for the program as defined by 
the TC [1] (it is my understanding this remains the purview of the TC, it's 
control of the mark that the board are exercising here).
3) It is a more minor change/jump and therefore provides more continuity and 
less confusion to readers (and similarly if one of them ever becomes endorsed 
as core and we need to switch again).

Thanks,

Steve

[1] 
http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTL] Designating "required use" upstream code

2014-02-05 Thread Thierry Carrez
(This email is mostly directed to PTLs for programs that include one
integrated project)

The DefCore subcommittee from the OpenStack board of directors asked the
Technical Committee yesterday about which code sections in each
integrated project should be "designated sections" in the sense of [1]
(code you actually need to run or include to be allowed to use the
trademark). That determines where you can run alternate code (think:
substitute your own private hypervisor driver) and still be able to call
the result openstack.

[1] https://wiki.openstack.org/wiki/Governance/CoreDefinition

PTLs and their teams are obviously the best placed to define this, so it
seems like the process should be: PTLs propose designated sections to
the TC, which blesses them, combines them and forwards the result to the
DefCore committee. We could certainly leverage part of the governance
repo to make sure the lists are kept up to date.

Comments, thoughts ?

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][ml2] Port binding information, transactions, and concurrency

2014-02-05 Thread Robert Kukura
On 02/05/2014 09:10 AM, Henry Gessau wrote:
> Bob, this is fantastic, I really appreciate all the detail. A couple of
> questions ...
> 
> On Wed, Feb 05, at 2:16 am, Robert Kukura  wrote:
> 
>> A couple of interrelated issues with the ML2 plugin's port binding have
>> been discussed over the past several months in the weekly ML2 meetings.
>> These affect drivers being implemented for icehouse, and therefore need
>> to be addressed in icehouse:
>>
>> * MechanismDrivers need detailed information about all binding changes,
>> including unbinding on port deletion
>> (https://bugs.launchpad.net/neutron/+bug/1276395)
>> * MechanismDrivers' bind_port() methods are currently called inside
>> transactions, but in some cases need to make remote calls to controllers
>> or devices (https://bugs.launchpad.net/neutron/+bug/1276391)
>> * Semantics of concurrent port binding need to be defined if binding is
>> moved outside the triggering transaction.
>>
>> I've taken the action of writing up a unified proposal for resolving
>> these issues, which follows...
>>
>> 1) An original_bound_segment property will be added to PortContext. When
>> the MechanismDriver update_port_precommit() and update_port_postcommit()
>> methods are called and a binding previously existed (whether its being
>> torn down or not), this property will provide access to the network
>> segment used by the old binding. In these same cases, the portbinding
>> extension attributes (such as binding:vif_type) for the old binding will
>> be available via the PortContext.original property. It may be helpful to
>> also add bound_driver and original_bound_driver properties to
>> PortContext that behave similarly to bound_segment and
>> original_bound_segment.
>>
>> 2) The MechanismDriver.bind_port() method will no longer be called from
>> within a transaction. This will allow drivers to make remote calls on
>> controllers or devices from within this method without holding a DB
>> transaction open during those calls. Drivers can manage their own
>> transactions within bind_port() if needed, but need to be aware that
>> these are independent from the transaction that triggered binding, and
>> concurrent changes to the port could be occurring.
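>>
>> Continuing the hypothetical ExampleMechanismDriver above, its bind_port()
>> could then make remote calls safely; the _backend_reports_ok() call is
>> made up, and the set_binding() usage only approximates the existing
>> agent-based drivers:
>>
>>     from neutron.extensions import portbindings
>>
>>     # a method of the ExampleMechanismDriver class sketched earlier
>>     def bind_port(self, context):
>>         # No DB transaction is held open here, so a slow round-trip to
>>         # the backend is tolerable; the port may still be changed
>>         # concurrently by other API requests.
>>         for segment in context.network.network_segments:
>>             if self._backend_reports_ok(context.current, segment):
>>                 context.set_binding(segment[api.ID],
>>                                     portbindings.VIF_TYPE_OVS,
>>                                     {portbindings.CAP_PORT_FILTER: True})
>>                 return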
>>
>> 3) Binding will only occur after the transaction that triggers it has
>> been completely processed and committed. That initial transaction will
>> unbind the port if necessary. Four cases for the initial transaction are
>> possible:
>>
>> 3a) In a port create operation, whether the binding:host_id is supplied
>> or not, all drivers' port_create_precommit() methods will be called, the
>> initial transaction will be committed, and all drivers'
>> port_create_postcommit() methods will be called. The drivers will see
>> this as creation of a new unbound port, with PortContext properties as
>> shown. If a value for binding:host_id was supplied, binding will occur
>> afterwards as described in 4 below.
>>
>> PortContext.original: None
>> PortContext.original_bound_segment: None
>> PortContext.original_bound_driver: None
>> PortContext.current['binding:host_id']: supplied value or None
>> PortContext.current['binding:vif_type']: 'unbound'
>> PortContext.bound_segment: None
>> PortContext.bound_driver: None
>>
>> 3b) Similarly, in a port update operation on a previously unbound port,
>> all drivers' port_update_precommit() and port_update_postcommit()
>> methods will be called, with PortContext properties as shown. If a value
>> for binding:host_id was supplied, binding will occur afterwards as
>> described in 4 below.
>>
>> PortContext.original['binding:host_id']: previous value or None
>> PortContext.original['binding:vif_type']: 'unbound' or 'binding_failed'
>> PortContext.original_bound_segment: None
>> PortContext.original_bound_driver: None
>> PortContext.current['binding:host_id']: current value or None
>> PortContext.current['binding:vif_type']: 'unbound'
>> PortContext.bound_segment: None
>> PortContext.bound_driver: None
>>
>> 3c) In a port update operation on a previously bound port that does not
>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>> update_port_postcommit() methods will be called with PortContext
>> properties reflecting unchanged binding states as shown.
>>
>> PortContext.original['binding:host_id']: previous value
>> PortContext.original['binding:vif_type']: previous value
>> PortContext.original_bound_segment: previous value
>> PortContext.original_bound_driver: previous value
>> PortContext.current['binding:host_id']: previous value
>> PortContext.current['binding:vif_type']: previous value
>> PortContext.bound_segment: previous value
>> PortContext.bound_driver: previous value
>>
>> 3d) In a port update operation on a previously bound port that does
>> trigger unbinding or rebinding, all drivers' update_port_precommit() and
>> update_port_postcommit() methods will be called with PortContext
>> properties reflecting the previously bound and currently unbound binding

[openstack-dev] [Trove] Backup/Restore encryption/decryption issue

2014-02-05 Thread Denis Makogon
Good day, OpenStack DBaaS community.


I'd like to start a conversation about a guestagent security issue related
to the backup/restore process. The Trove guestagent service uses AES with a
256-bit key (in CBC mode) [1] to encrypt backups, which are stored in a
predefined Swift container.

As you can see, the password is defined in the config file [2]. And here
lies the problem: this single password is used for all tenants/projects
that use Trove, which is a security issue. I would like to suggest a key
derivation function (KDF) [3] based on static attributes specific to each
tenant/project (the tenant_id). The KDF would be based on a Python
implementation of PBKDF2 [4]; a sample implementation can be seen here [5].

I also plan to give the user the ability to pass a password into the KDF
that would produce the key for backup/restore encryption/decryption; if the
user-supplied password is empty, the guest will fall back to the static
attributes of the tenant (the tenant_id).
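
For illustration only, a minimal sketch of the kind of derivation described
above, using Python's built-in hashlib.pbkdf2_hmac purely for brevity (the
salt and iteration count below are placeholders, not part of the blueprint):

    import hashlib

    def derive_backup_key(tenant_id, user_password=None,
                          salt=b'trove-backup', iterations=10000):
        """Derive a 256-bit AES key with PBKDF2-HMAC-SHA256.

        Falls back to the tenant_id when no user password is supplied.
        """
        secret = user_password or tenant_id
        return hashlib.pbkdf2_hmac('sha256', secret.encode('utf-8'),
                                   salt, iterations, dklen=32)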

To allow backward compatibility, python-troveclient should be able to pass
the old password [1] to the guestagent as one of the parameters on the
restore call.

A blueprint has already been registered in the Trove Launchpad space [6].

I also foresee porting this feature to oslo-crypt as part of the security
framework (oslo.crypto) extensions.

Thoughts?

[1]
https://github.com/openstack/trove/blob/master/trove/guestagent/strategies/backup/base.py#L113-L116

[2]
https://github.com/openstack/trove/blob/master/etc/trove/trove-guestagent.conf.sample#L69

[3] http://en.wikipedia.org/wiki/Key_derivation_function

[4] http://en.wikipedia.org/wiki/PBKDF2

[5] https://gist.github.com/denismakogon/8823279

[6] https://blueprints.launchpad.net/trove/+spec/backup-encryption

Best regards,

Denis Makogon

Mirantis, Inc.

Kharkov, Ukraine

www.mirantis.com

www.mirantis.ru

dmako...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

