Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?

2014-02-20 Thread Miguel Angel Ajo Pelayo

I rebased the https://review.openstack.org/#/c/72576/ no-op change.



- Original Message -
> From: "Alan Pevec" 
> To: "openstack-stable-maint" 
> Cc: "OpenStack Development Mailing List" 
> Sent: Tuesday, February 18, 2014 7:52:23 PM
> Subject: Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?
> 
> 2014-02-11 16:14 GMT+01:00 Anita Kuno:
> > On 02/11/2014 04:57 AM, Alan Pevec wrote:
> >> Hi Mark and Anita,
> >>
> >> could we declare stable/havana neutron gate jobs good enough at this
> >> point?
> >> There are still random failures as this no-op change shows
> >> https://review.openstack.org/72576
> >> but I don't think they're stable/havana specific.
> ...
> 
> > I will reaffirm here what I had stated in IRC.
> >
> > If Mark McClain gives his assent for stable/havana patches to be
> > approved, I will not remove Neutron stable/havana patches from the gate
> > queue before they start running tests. If after they start running
> > tests, they demonstrate that they are failing, I will remove them from
> > the gate as a means to keep the gate flowing. If the stable/havana gate
> > jobs are indeed stable, I will not be removing any patches that should
> > be merged.
> 
> As discussed on #openstack-infra last week, the stable-maint team should
> start looking more closely at the Tempest stable/havana branch, and Matthew
> Treinish from Tempest core has joined the stable-maint team to help us
> there.
> 
> In the meantime, we need to do something more urgent: there are
> remaining failures showing up frequently in stable/havana jobs which
> seem to have been fixed, or at least improved, on master:
> 
> * bug 1254890 - "Timed out waiting for thing ... to become ACTIVE"
> causes tempest-dsvm-* failures
>   resolution unclear?
> 
> * bug 1253896 - "Attempts to verify guests are running via SSH fails.
> SSH connection to guest does not work."
> based on Salvatore's comment 56, I've marked it as Won't Fix in
> neutron/havana and opened tempest/havana to propose which Tempest tests
> or jobs should be skipped for Havana. Please chime in on the bug if you have
> suggestions.
> 
> 
> Cheers,
> Alan
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-20 Thread IWAMOTO Toshihiro
At Wed, 19 Feb 2014 20:23:04 +0400,
Eugene Nikanorov wrote:
> 
> Hi Sam,
> 
> My comments inline:
> 
> 
> On Wed, Feb 19, 2014 at 4:57 PM, Samuel Bercovici wrote:
> 
> >  Hi,
> >
> >
> >
> > I think we mix different aspects of operations, and try to solve a
> > non-"problem".
> >
> Not really. The advanced features we're trying to introduce are incompatible
> with both the object model and the API.

I agree with Samuel here.  I feel the logical model and other issues
(implementation etc.) are mixed in the discussion.

I'm failing to understand why the current model is unfit for L7 rules.

  - pools belonging to a L7 group should be created with the same
provider/flavor by a user
  - pool scheduling can be delayed until it is bound to a vip to make
sure pools belonging to a L7 group are scheduled to one backend

I think the proposed changes introduce "implementation details", which
as a general rule are better hidden from users.

>  From APIs/Operations we are mixing the following models:
> >
> > 1.   Logical model (which as far as I understand is the topic of this
> > discussion) - tenants define what they need logically 
> > vip→default_pool,
> > l7 association, ssl, etc.
> >
> That's correct. A tenant may or may not care about how it is grouped on the
> backend. We need to support both cases.
> 
> >  2.   Physical model - operator / vendor install and specify how
> > backend gets implemented.
> >
> > 3.   Deploying 1 on 2 - this is currently the driver's
> > responsibility. We can consider making it better but this should not impact
> > the logical model.
> >
> I think grouping vips and pools is an important part of the logical model,
> even if some users may not care about it.

One possibility is to provide an optional data structure to describe
grouping of vips and pools, on top of the existing pool-vip model.
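
Purely as an illustration, such an optional grouping object might look
roughly like this minimal Python sketch (the name and fields are made up,
not a proposed API):

    # Illustrative only: an optional grouping resource layered on top of
    # the existing vip/pool model, referencing both without changing them.
    class LoadbalancerGroup(object):
        def __init__(self, group_id, vip_ids=None, pool_ids=None):
            self.id = group_id
            self.vip_ids = vip_ids or []    # existing vip resources
            self.pool_ids = pool_ids or []  # existing pool resources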

> > I think this is not a "problem".
> >
> > In a logical model a pool which is part of L7 policy is a logical object
> > which could be placed at any backend and any existing vip↔pool, and
> > accordingly configure the backend that those vip↔pool are
> > deployed on.
> >
>  That's not how it currently works - that's why we're trying to address it.
> Making the pool shareable between backends at least requires moving the
> 'instance' role from the pool to some other entity, and that also changes a
> number of API aspects.
> 
>  If the same pool that was part of an l7 association will also be connected
> > to a vip as a default pool, then by all means this new vip↔pool
> > pair can
> > be instantiated into some back end.
> >
> > The proposal to not allow this (ex: only allow pools that are connected to
> > the same lb-instance to be used for l7 association), brings the physical
> > model into the logical model.
> >
> So the proposal tries to address 2 issues:
> 1) in many cases it is desirable to know about the grouping of logical objects
> on the backend
> 2) currently a physical model is implied when working with pools, because the
> pool is the root object and corresponds to a backend with a 1:1 mapping
> 
> 
> >
> > I think that the current logical model is fine with the exception that the
> > two-way reference between vip and pool (vip↔pool) should be
> > modified
> > so that only the vip points to a pool (vip→pool), which allows reusing
> > the pool
> > with multiple vips.
> >
> Reusing pools across vips is not as simple as it seems.
> If those vips belong to 1 backend (which by itself requires the tenant to know
> about that) - that's no problem, but if they don't, then:
> 1) what would the 'status' attribute of the pool mean?
> 2) how would health monitors for the pool be deployed? and what would their
> statuses mean?
> 3) what would pool statistics mean?
> 4) If the same pool is used on
> 
> To be able to preserve the existing meaningful healthmonitors, members and
> statistics API we will need to create associations for everything, or just
> change the API in a backward-incompatible way.
> My opinion is that it makes sense to limit such ability (reusing pools by
> vips deployed on different backends) in favor of simpler code; IMO it's
> really a big deal. A pool is lightweight enough not to share it as an
> object.

Yes, there's little benefit in sharing pools at the cost of the
complexity.

--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] when icehouse will be frozen

2014-02-20 Thread Miguel Angel Ajo Pelayo

If I didn't misunderstand, as long as you have an active review for
your change, and some level of interest / approval, then you should
be OK to finish it during the remainder of the Icehouse cycle; but of course,
your code needs to be approved to become part of Icehouse.

Cheers,
Miguel Ángel.

- Original Message -
> From: "马煜" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, February 20, 2014 1:52:04 AM
> Subject: [openstack-dev] [neutron] when icehouse will be frozen
> 
> Who knows when the Icehouse version will be frozen?
> 
> My bp on the ML2 driver has been approved and the code is under review,
> but I am having some trouble deploying the third-party CI on which the Tempest tests run.
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-20 Thread IWAMOTO Toshihiro
At Tue, 18 Feb 2014 18:47:37 -0800,
Stephen Balukoff wrote:
> 
> [1  ]
> [1.1  ]
> Small correction to my option #4 (here as #4.1). Neutron port_id should be
> an attribute of the 'loadbalancer' object, not the 'cluster' object.
> (Though cluster should have a network_id attribute).

Hi Eugene and Stephen,
I'd like to see the wiki updated with plan #4 and the current issues
mentioned in the emails.  It would greatly help me keep up with
the discussion.

Thanks.
--
IWAMOTO Toshihiro

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-stable-maint] Stable gate status?

2014-02-20 Thread Alan Pevec
2014-02-20 8:57 GMT+01:00 Miguel Angel Ajo Pelayo :
> I rebased the https://review.openstack.org/#/c/72576/ no-op change.

And it failed in check-tempest-dsvm-neutron-pg with bug 1254890 -
"Timed out waiting for thing ... to become ACTIVE"
while previous check on Feb 17 failed in
check-tempest-dsvm-neutron-isolated with bug 1253896 - "Attempts to
verify guests are running via SSH fails. SSH connection to guest does
not work."

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 19/02/14 18:29, Dougal Matthews wrote:
> The question for me is: what passwords will we have, and when do we need
> them? Are any of the passwords required long term?

We will need whatever the Heat template needs to generate all the
configuration files. That includes passwords for all services that are
going to be configured, such as, for example, Swift or MySQL.

I'm not sure about the exact mechanisms in Heat, but I would guess that
we will need all the parameters, including passwords, when the templates
are re-generated. We could probably generate new passwords every time,
though.

> If we do need to store passwords it becomes a somewhat thorny issue: how
> does Tuskar know what a password is? If this is flagged up by the
> UI/client then we are relying on the user to tell us, which isn't wise.

All the template parameters that are passwords are marked in the Heat
parameter list that we get from it as "NoEcho": "true", so we do have an
idea about which parts are sensitive.
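
For illustration, a minimal sketch of how those sensitive parameters can
be picked out of the parameter list (the dict shape follows the
CFN-compatible format mentioned above; the parameter names are made up):

    # Sketch: select the password-like parameters by their "NoEcho" flag.
    parameters = {
        "MysqlRootPassword": {"Type": "String", "NoEcho": "true"},
        "ImageId": {"Type": "String"},
    }
    sensitive = [name for name, spec in parameters.items()
                 if spec.get("NoEcho") == "true"]
    print(sensitive)  # ['MysqlRootPassword']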

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift]stable/havana Jenkins failed

2014-02-20 Thread Alan Pevec
> I notice that we have changed "from swiftclient import Connection,
> HTTPException" to "from swiftclient import Connection, RequestException"
> on 2014-02-14; I don't know if it is related.
>
> I have reported a bug for this:
> https://bugs.launchpad.net/swift/+bug/1281886

Bug is a duplicate of https://bugs.launchpad.net/openstack-ci/+bug/1281540
and has been also discussed in the other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027476.html

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-20 Thread Julien Danjou
On Thu, Feb 20 2014, Sergey Lukjanov wrote:

Hi Sergey,


[…]

> * install GNU getopt by using homebrew (brew install gnu-getopt) or
> macports (port install getopts);

I've been doing that since day one to have things work locally.

I'm pretty sure it'd be OK to use getopt in a portable way rather than
specifically the GNU version, but I have no idea if that is acceptable. If
everybody thinks it is, I can give it a try.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Imre Farkas

On 02/19/2014 06:10 PM, Ladislav Smola wrote:

Hello,

I would like to have your opinion about how to deal with passwords in
Tuskar-API

The background is that Tuskar-API is storing heat template parameters in
its database. It's a preparation for more complex workflows, when we will
need to store the data before the actual heat stack-create.

So right now the state is unacceptable: we are storing sensitive
data (all the heat passwords and keys)
in raw form in the Tuskar-API database. That is wrong, right?


Right, that is definitely wrong.


So is anybody aware of reasons why we would need to store the
passwords? Storing them
for a small amount of time (rather, in a session) should be fine, so we
can use them for the later init of the stack.
Do we need to store them for heat stack-update? Because heat throws them away.


You will need those passwords after Heat has finished creating the stack, 
when Tuskar is going to initialize Keystone and register the services. 
Keystone can't be used for that because a) Tuskar will need the 
password, not a token, b) Keystone is not yet initialized.
Since stack creation is an asynchronous operation, the session might 
be long gone by then. So storing it in the session would not work; 
Tuskar has to store it in a more permanent place.


You will also need the password every time a service needs to be re-registered. 
E.g. a user decides to get rid of swift, then changes his mind, but can't 
undo the operation because the password is gone.



If yes, this bug should change to encrypting all the sensitive data,
right? It might be just me,
but dealing with sensitive data like this is the 8th deadly sin.

The second thing is, if users update their passwords, the info in
Tuskar-API will be obsolete and
can't be used anyway.


I don't think that users are going to change the passwords for the 
different services. We should also investigate whether a service will keep 
working after someone changes its password.


I don't see any problem in storing passwords for the overcloud, since 
Tuskar *is* the management interface. But I agree, we should do it more 
securely.


Imre



There is a bug filled for it:
https://bugs.launchpad.net/tuskar/+bug/1282066

Thanks for the feedback, seems like the bug is not as straightforward as
I thought.

Kind Regards,
Ladislav



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Cleaning OpenStack resources

2014-02-20 Thread Florent Flament
Hi David,

I have been working on an OpenStack resources cleanup script that allows you to 
wipe out all resources in a given project; I guess you could use / adapt it for 
your own case. It is available on GitHub: 
https://github.com/cloudwatt/ospurge

You can also install it with pip (pip install ospurge).

As for the floating ips, you should be able to list and remove them by using 
the neutron CLI:
* neutron floatingip-list
* neutron floatingip-delete
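
The same sweep can be scripted with python-neutronclient; a minimal
sketch, with placeholder credentials and tenant id:

    # Sketch: as admin, delete every floating IP belonging to one tenant.
    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin',
                            password='secret',        # placeholder
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    for fip in neutron.list_floatingips(tenant_id='TENANT_ID')['floatingips']:
        neutron.delete_floatingip(fip['id'])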

Florent Flament

- Original Message -
From: "David Kranz" 
To: "OpenStack Development Mailing List" 
Sent: Wednesday, February 19, 2014 10:15:12 PM
Subject: [openstack-dev] [qa] Cleaning OpenStack resources

I was looking at https://review.openstack.org/#/c/73274/1 which makes it 
configurable whether a brute-force cleanup of resources is done after 
success. This got my wondering how this should really be done. As admin, 
there are some resources that can be cleaned and some that I don't  know 
how. For example, as admin you can list all servers and delete them with 
the --all-tenants flag. But for floating ips I don't see a way to list 
all of them even as admin through the apis. Is there a way that an admin 
can, through the api, locate all resources used by a particular tenant?

  -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-20 Thread Chmouel Boudjnah
On Thu, Feb 20, 2014 at 10:22 AM, Julien Danjou  wrote:

> I'm pretty sure it'd be OK to use getopt in a portable way rather than
> specifically the GNU version, but I had no idea if it was acceptable. If
> everybody think it is, I can give it a try.
>

In which sorts of system setups, other than macosx/freebsd, is
generate_sample.sh not working?

Chmouel.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Imre Farkas

On 02/20/2014 10:12 AM, Radomir Dopieralski wrote:

On 19/02/14 18:29, Dougal Matthews wrote:

The question for me is: what passwords will we have, and when do we need
them? Are any of the passwords required long term?


We will need whatever the Heat template needs to generate all the
configuration files. That includes passwords for all services that are
going to be configured, such as, for example, Swift or MySQL.

I'm not sure about the exact mechanisms in Heat, but I would guess that
we will need all the parameters, including passwords, when the templates
are re-generated. We could probably generate new passwords every time,
though.


That is an excellent point. Tuskar will need the passwords every time it 
needs to regenerate the Heat template (basically when running 
stack-update).


I don't think changing the password every time would work. If e.g. the 
MySQL password is changed, then os-refresh-config will fail during the 
db migration scripts because it can no longer access the existing db 
with the new password.


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-20 Thread Julien Danjou
On Thu, Feb 20 2014, Chmouel Boudjnah wrote:

> In which sort of system setup other than macosx/freebsd generate_sample.sh
> is not working?

Likely everywhere GNU tools are not standard. So that's every system
_except_ GNU/Linux ones I'd say. :)

-- 
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info


signature.asc
Description: PGP signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Jay Lau
Hi,

Does Heat support provisioning a Windows cluster?  If so, can I also use
user-data to do some post-install work for a Windows instance? Is there any
example template for this?

Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Jay Lau
Just noticed that there is already a bp tracking this, but no milestone was
set for it and the bp has been there for one year.

https://blueprints.launchpad.net/heat/+spec/windows-instances

Do we have any plan to finish this? Many customers are using windows
clusters; it would be really cool if we could support provisioning windows
clusters with heat.

Thanks,

Jay



2014-02-20 18:02 GMT+08:00 Jay Lau :

>
> Hi,
>
> Does Heat support provisioning a Windows cluster?  If so, can I also use
> user-data to do some post-install work for a Windows instance? Is there any
> example template for this?
>
> Thanks,
>
> Jay
>
>


-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-20 Thread Roman Podoliaka
Hi all,

I'm ready to help with syncing of the DB code. But we'll need reviewers'
attention in both oslo-incubator and nova :)

Thanks,
Roman

On Thu, Feb 20, 2014 at 5:37 AM, Lance D Bragstad  wrote:
> Shed a little bit of light on Matt's comment about Keystone removing
> oslo-incubator code and the issues we hit. Comments below.
>
>
> Best Regards,
>
> Lance Bragstad
> ldbra...@us.ibm.com
>
> Doug Hellmann  wrote on 02/19/2014 09:00:29 PM:
>
>> From: Doug Hellmann 
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> ,
>> Date: 02/19/2014 09:12 PM
>> Subject: Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator
>> sync workflow
>
>
>>
>>
>
>> On Wed, Feb 19, 2014 at 9:52 PM, Joe Gordon  wrote:
>> As a side to this, as an exercise I tried a oslo sync in cinder to see
>> what kind of issues would arise and here are my findings so far:
>> https://review.openstack.org/#/c/74786/
>>
>> On Wed, Feb 19, 2014 at 6:20 PM, Matt Riedemann
>>  wrote:
>> >
>> >
>> > On 2/19/2014 7:13 PM, Joe Gordon wrote:
>> >>
>> >> Hi All,
>> >>
>> >> As many of you know most oslo-incubator code is wildly out of sync.
>> >> Assuming we consider it a good idea to sync up oslo-incubator code
>> >> before cutting Icehouse, then we have a problem.
>> >>
>> >> Today oslo-incubator code is synced in an ad-hoc manner, resulting in
>> >> duplicated efforts and wildly out of date code. Part of the challenges
>> >> today are backwards incompatible changes and new oslo bugs. I expect
>> >> that once we get a single project to have an up to date oslo-incubator
>> >> copy it will make syncing a second project significantly easier. So
>> >> because I (hopefully) have some karma built up in nova, I would like
>> >> to volunteer nova to be the guinea pig.
>> >>
>> >>
>> >> To fix this I would like to propose starting an oslo-incubator/nova
>> >> sync team. They would be responsible for getting nova's oslo code up
>> >> to date.  I expect this work to involve:
>> >> * Reviewing lots of oslo sync patches
>> >> * Tracking the current sync patches
>> >> * Syncing over the low hanging fruit, modules that work without
>> >> changing
>> >> nova.
>> >> * Reporting bugs to oslo team
>> >> * Working with oslo team to figure out how to deal with backwards
>> >> incompatible changes
>> >>* Update nova code or make oslo module backwards compatible
>> >> * Track all this
>> >> * Create a roadmap for other projects to follow (re: documentation)
>> >>
>> >> I am looking for volunteers to help with this effort, any takers?
>> >>
>> >>
>> >> best,
>> >> Joe Gordon
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> > Well I'll get the ball rolling...
>> >
>> > In the past when this has come up there is always a debate over should
>> > be
>> > just sync to sync because we should always be up to date, or is that
>> > dangerous and we should only sync when there is a need (which is what
>> > the
>> > review guidelines say now [1]).  There are pros and cons:
>> >
>> > pros:
>> >
>> > - we get bug fixes that we didn't know existed
>> > - it should be less painful to sync if we do it more often
>> >
>> > cons:
>> >
>> > - it's more review overhead and some crazy guy thinks we need a special
>> > team
>> > dedicated to reviewing those changes :)
>> > - there are some changes in o-i that would break nova; I'm specifically
>> > thinking of the oslo RequestContext which has domain support now (or
>> > some
>> > other keystone thingy) and nova has its own RequestContext - so if we
>> > did
>> > sync that from o-i it would change nova's logging context and break on
>> > us
>> > since we didn't use oslo context.
>> >
>> > For that last con, I'd argue that we should move to the oslo
>> > RequestContext,
>> > I'm not sure why we aren't.  Would that module then not fall under
>> > low-hanging-fruit?
>
>> I am classifying low hanging fruit as anything that doesn't require
>> any nova changes to work.
>>
>> +1
>> > I think the DB API modules have been a concern for auto-syncing before
>> > too
>> > but I can't remember why now...something about possibly changing the
>> > behavior of how the nova migrations would work?  But if they are already
>> > using the common code, I don't see the issue.
>
>> AFAIK there is already a team working on db api syncing, so I was
>> thinking of let them deal with it.
>>
>> +1
>>
>> Doug
>>
>> >
>> > This is kind of an aside, but I'm kind of confused now about how the
>> > syncs
>> > work with things that fall under oslo.rootwrap or oslo.messaging, like
>> > this
>> > patch [2].  It doesn't completely match the o-i patch, i.e. it's not
>> > syncing
>> > over openstack/common/rootwrap/wrapper.py, and I'm assuming because
>> > that's
>> > in oslo.rootwrap now?  But then why does the code still exist in
>> > oslo-incubator?
>> >
>> > I think the keystone guys are runnin

[openstack-dev] [keystone] "SAML consumption" Blueprints

2014-02-20 Thread Marco Fargetta
Dear all,

I am interested in the integration of SAML with keystone and I am analysing
the following blueprint and its implementation:

https://blueprints.launchpad.net/keystone/+spec/saml-id

https://review.openstack.org/#/c/71353/


Looking at the code there is something I cannot understand. In the code it
seems you will use apache httpd with mod_shib (or other alternatives) to
parse the SAML assertion, and the code inside keystone will read only the
values extracted by the front-end server.

If this is the case, it is not clear to me why you need to register the IdPs,
with their certificates,
in keystone using the new federation API. You can filter the IdPs in the server,
so why do you need this extra list?
What is the use of the IdP list and the certificates?

Is this implementation still open to discussion, or is the design frozen for the 
icehouse release?

Thanks in advance,
Marco


smime.p7s
Description: S/MIME cryptographic signature
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Sent the first batch of invitations to Atlanta's Summit

2014-02-20 Thread Thierry Carrez
Dolph Mathews wrote:
> I just noticed the subject of this email referred to the "first batch"
> of invitations -- are there going to be subsequent batches of invites?
> If so, who was not included in the first batch that will be in
> subsequent batches?

Yes, as usual subsequent batches will be sent to catch late Icehouse
contributors. We usually stop looking just after the -3 milestone.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Dougal Matthews

On 20/02/14 09:12, Radomir Dopieralski wrote:

If we do need to store passwords it becomes a somewhat thorny issue: how
does Tuskar know what a password is? If this is flagged up by the
UI/client then we are relying on the user to tell us, which isn't wise.


All the template parameters that are passwords are marked in the Heat
parameter list that we get from it as "NoEcho": "true", so we do have an
idea about which parts are sensitive.


Right, that's good to know. I think Ladislav mentioned this to me but
it didn't click. If we do store passwords however, I wonder if we are
best to encrypt everything to be safe. The overhead shouldn't be that
big and it may be better than special casing the "NoEcho" values.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Alexander Tivelkov
Hi Jay,

Windows support in Heat is being developed, but is not complete yet, afaik.
You may already use Cloudbase Init to do the post-deploy actions on windows
- check [1] for the details.

Meanwhile, running a windows cluster is a much more complicated task than
just deploying a number of windows instances (if I understand you correctly
and you speak about Microsoft Failover Cluster, see [2]): to build it in
the cloud you will have to execute quite a complex workflow after the nodes
are actually deployed, which is not possible with Heat (at least for now).

Murano project ([3]) does this on top of Heat, as it was initially designed
as Windows Data Center as a Service, so I suggest you take a look at
it. You may also check this video ([4]) which demonstrates how Murano is
used to deploy a failover cluster of Windows 2012 with a clustered MS SQL
server on top of it.


[1] http://wiki.cloudbase.it/heat-windows
[2] http://technet.microsoft.com/library/hh831579
[3] https://wiki.openstack.org/Murano
[4] http://www.youtube.com/watch?v=Y_CmrZfKy18

--
Regards,
Alexander Tivelkov


On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau  wrote:

>
> Hi,
>
> Does Heat support provisioning a Windows cluster?  If so, can I also use
> user-data to do some post-install work for a Windows instance? Is there any
> example template for this?
>
> Thanks,
>
> Jay
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] when icehouse will be frozen

2014-02-20 Thread Thierry Carrez
马煜 wrote:
> Who knows when the Icehouse version will be frozen?
> 
> My bp on the ML2 driver has been approved and the code is under review, 
> but I am having some trouble deploying the third-party CI on which the Tempest tests run.

Feature freeze is on March 4th [1], so featureful code shall be proposed
*and* merged by then. I suspect Neutron core won't approve it until the
3rd party CI testing is in order, though, so if you can't get it to work
by then it may have to live out of the tree for the Icehouse release.

Neutron drivers should be able to give you more precise guidance.

[1] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cleaning OpenStack resources

2014-02-20 Thread Koderer, Marc
Hi Florent,

thanks for the link. Our current implementation of the cleanup is a bit 
buggy. I will give your script a try.
Maybe we could use it and replace the existing module.

Regards
Marc 

From: Florent Flament [florent.flament-...@cloudwatt.com]
Sent: Thursday, February 20, 2014 10:30 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [qa] Cleaning OpenStack resources

Hi David,

I have been working on an OpenStack resources cleanup script that allows you to 
wipe out all resources in a given project; I guess you could use / adapt it for 
your own case. It is available on GitHub: 
https://github.com/cloudwatt/ospurge

You can also install it with pip (pip install ospurge).

As for the floating ips, you should be able to list and remove them, by using 
the neutron CLI:
* neutron floatingip-list
* neutron floatingip-delete

Florent Flament

- Original Message -
From: "David Kranz" 
To: "OpenStack Development Mailing List" 
Sent: Wednesday, February 19, 2014 10:15:12 PM
Subject: [openstack-dev] [qa] Cleaning OpenStack resources

I was looking at https://review.openstack.org/#/c/73274/1 which makes it
configurable whether a brute-force cleanup of resources is done after
success. This got me wondering how this should really be done. As admin,
there are some resources that can be cleaned and some that I don't know
how to clean. For example, as admin you can list all servers and delete them with
the --all-tenants flag. But for floating ips I don't see a way to list
all of them even as admin through the apis. Is there a way that an admin
can, through the api, locate all resources used by a particular tenant?

  -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-20 Thread Victor Sergeyev
Hello All

Roman Podoliaka and I are familiar with the changes made to the common db code,
so we are ready to help with syncing it to OpenStack projects.
But we want to ask you for more activity in reviewing these patches.

Thanks, Victor


On Thu, Feb 20, 2014 at 4:27 AM, Doug Hellmann
wrote:

>
>
>
> On Wed, Feb 19, 2014 at 8:13 PM, Joe Gordon  wrote:
>
>> Hi All,
>>
>> As many of you know most oslo-incubator code is wildly out of sync.
>> Assuming we consider it a good idea to sync up oslo-incubator code
>> before cutting Icehouse, then we have a problem.
>>
>> Today oslo-incubator code is synced in an ad-hoc manner, resulting in
>> duplicated efforts and wildly out of date code. Part of the challenges
>> today are backwards incompatible changes and new oslo bugs. I expect
>> that once we get a single project to have an up to date oslo-incubator
>> copy it will make syncing a second project significantly easier. So
>> because I (hopefully) have some karma built up in nova, I would like
>> to volunteer nova to be the guinea pig.
>>
>
> Thank you for volunteering to spear-head this, Joe.
>
>
>> To fix this I would like to propose starting an oslo-incubator/nova
>> sync team. They would be responsible for getting nova's oslo code up
>> to date.  I expect this work to involve:
>> * Reviewing lots of oslo sync patches
>> * Tracking the current sync patches
>> * Syncing over the low hanging fruit, modules that work without changing
>> nova.
>> * Reporting bugs to oslo team
>> * Working with oslo team to figure out how to deal with backwards
>> incompatible changes
>>   * Update nova code or make oslo module backwards compatible
>> * Track all this
>> * Create a roadmap for other projects to follow (re: documentation)
>>
>> I am looking for volunteers to help with this effort, any takers?
>>
>
> I will help, especially with reviews and tracking.
>
> We are going to want someone from the team working on the db modules to
> participate as well, since we know that's one area where the API has
> diverged some (although we did take backwards compatibility into account).
> Victor, can you help find us a volunteer?
>
> Doug
>
>
>
>>
>>
>> best,
>> Joe Gordon
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 11:21, Dougal Matthews wrote:
> If we do store passwords however, I wonder if we are
> best to encrypt everything to be safe. The overhead shouldn't be that
> big and it may be better than special casing the "NoEcho" values.

I think that before we start encrypting everything, we need to ask
ourselves the question about system boundaries and about what we are
protecting from what. Otherwise we will end up with ridiculous things
like encrypting the passwords and storing the decryption key right in
the same place. In other words, this has to be designed.
-- 
Radomir Dopieralski



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][oslo] Changes to oslo-incubator sync workflow

2014-02-20 Thread Thierry Carrez
Matt Riedemann wrote:
> This is kind of an aside, but I'm kind of confused now about how the
> syncs work with things that fall under oslo.rootwrap or oslo.messaging,
> like this patch [2].  It doesn't completely match the o-i patch, i.e.
> it's not syncing over openstack/common/rootwrap/wrapper.py, and I'm
> assuming because that's in oslo.rootwrap now?  But then why does the
> code still exist in oslo-incubator?

FWIW the code was recently removed from the oslo-incubator, once Neutron
(the last of the rootwrap-consuming projects) got migrated to using
oslo.rootwrap.

> [2] https://review.openstack.org/#/c/73340/

This one syncs changes from https://review.openstack.org/#/c/63094

63094 should never have been approved, since rootwrap in oslo-incubator
was frozen ("graduating"). Now the changes are lost, since they were
never proposed to oslo.rootwrap, and the code in the incubator was
cleaned up.

I'll comment on the 73340 review to try to solve this mess.

-- 
Thierry Carrez (ttx)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Dougal Matthews

On 20/02/14 10:36, Radomir Dopieralski wrote:

On 20/02/14 11:21, Dougal Matthews wrote:

If we do store passwords however, I wonder if we are
best to encrypt everything to be safe. The overhead shouldn't be that
big and it may be better than special casing the "NoEcho" values.


I think that before we start encrypting everything, we need to ask
ourselves the question about system boundaries and about what we are
protecting from what. Otherwise we will end up with ridiculous things
like encrypting the passwords and storing the decryption key right in
the same place. In other words, this has to be designed.


Absolutely. I couldn't agree more and hope I didn't suggest otherwise :)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 11:46, Dougal Matthews wrote:
> On 20/02/14 10:36, Radomir Dopieralski wrote:
>> On 20/02/14 11:21, Dougal Matthews wrote:
>>> If we do store passwords however, I wonder if we are
>>> best to encrypt everything to be safe. The overhead shouldn't be that
>>> big and it may be better than special casing the "NoEcho" values.
>>
>> I think that before we start encrypting everything, we need to ask
>> ourselves the question about system boundaries and about what we are
>> protecting from what. Otherwise we will end up with ridiculous things
>> like encrypting the passwords and storing the decryption key right in
>> the same place. In other words, this has to be designed.
> 
> Absolutely. I couldn't agree more and hope I didn't suggest otherwise :)

So what is the smallest subsystem (or subsystems) that needs those
passwords, and what storage it has access to that other subsystems
don't, so we could put the key there?

As I see it, the passwords are needed by:

* Heat, for generating the templates (may be passed from Tuskar-API),
* Tuskar-API, for passing them to Heat and for registering services in
  Keystone,
* Tuskar-UI, if we want to display them to the user on request (do
  we?), may also be passed from Tuskar-API.

What is the storage that Tuskar-API has access to, but other parts
don't? I would guess that's the Tuskar database. But that's where
the passwords are already stored, so storing the key there doesn't
make much sense. Anybody who gets access to Tuskar-API gets the
passwords, whether we encrypt them or not. Anybody who doesn't have
access to Tuskar-API doesn't get the passwords, whether we encrypt
them or not.

-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Nominate Andrew Lazarew for savanna-core

2014-02-20 Thread Alexander Ignatov
+1

Regards,
Alexander Ignatov



On 20 Feb 2014, at 03:02, John Speidel  wrote:

> +1 
> Andrew would be a good addition.
> 
> -John
> 
> 
> 
> 
> On Wed, Feb 19, 2014 at 5:40 PM, Sergey Lukjanov  
> wrote:
> Hey folks,
> 
> I'd like to nominate Andrew Lazarew (alazarev) for savanna-core.
> 
> He is among the top reviewers of Savanna subprojects. Andrew has been working on 
> Savanna full time since September 2013 and is very familiar with the current 
> codebase. His code contributions and reviews have demonstrated a good 
> knowledge of Savanna internals. Andrew has valuable knowledge of both the core 
> and EDP parts, the IDH plugin and Hadoop itself. He's working on both bugs and 
> new feature implementation.
> 
> Some links:
> 
> http://stackalytics.com/report/reviews/savanna-group/30
> http://stackalytics.com/report/reviews/savanna-group/90
> http://stackalytics.com/report/reviews/savanna-group/180
> https://review.openstack.org/#/q/owner:alazarev+savanna+AND+-status:abandoned,n,z
> https://launchpad.net/~alazarev
> 
> Savanna cores, please, reply with +1/0/-1 votes.
> 
> Thanks.
> 
> -- 
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
> 
> 
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity to 
> which it is addressed and may contain information that is confidential, 
> privileged and exempt from disclosure under applicable law. If the reader of 
> this message is not the intended recipient, you are hereby notified that any 
> printing, copying, dissemination, distribution, disclosure or forwarding of 
> this communication is strictly prohibited. If you have received this 
> communication in error, please contact the sender immediately and delete it 
> from your system. Thank You.___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 12:02, Radomir Dopieralski wrote:
> On 20/02/14 11:46, Dougal Matthews wrote:
>> On 20/02/14 10:36, Radomir Dopieralski wrote:
>>> On 20/02/14 11:21, Dougal Matthews wrote:
 If we do store passwords however, I wonder if we are
 best to encrypt everything to be safe. The overhead shouldn't be that
 big and it may be better than special casing the "NoEcho" values.
>>>
>>> I think that before we start encrypting everything, we need to ask
>>> ourselves the question about system boundaries and about what we are
>>> protecting from what. Otherwise we will end up with ridiculous things
>>> like encrypting the passwords and storing the decryption key right in
>>> the same place. In other words, this has to be designed.
>>
>> Absolutely. I couldn't agree more and hope I didn't suggest otherwise :)
> 
> So what is the smallest subsystem (or subsystems) that needs those
> passwords, and what storage it has access to that other subsystems
> don't, so we could put the key there?
> 
> As I see it, the passwords are needed by:
> 
> * Heat, for generating the templates (may be passed from Tuskar-API),
> * Tuskar-API, for passing them to Heat and for registering services in
>   Keystone,
> * Tuskar-UI, if we want to display them to the user on request (do
>   we?), may also be passed from Tuskar-API.
> 
> What is the storage that Tuskar-API has access to, but other parts
> don't? I would guess that's the Tuskar database. But that's where
> the passwords are already stored, so storing the key there doesn't
> make much sense. Anybody who gets access to Tuskar-API gets the
> passwords, whether we encrypt them or not. Anybody who doesn't have
> access to Tuskar-API doesn't get the passwords, whether we encrypt
> them or not.
> 
Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user, either via tuskar-ui or via
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of the deployment (to register
the services at the end), but that's it.
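
To make that concrete, a minimal sketch, assuming a symmetric cipher such
as Fernet from the cryptography library (the cipher choice is only an
assumption for illustration, nothing has been agreed on):

    # Sketch: the key lives in the client's configuration; the Tuskar
    # database only ever holds ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # generated once, kept client-side

    # API side: persist only the encrypted value
    stored = Fernet(key).encrypt(b'service-password')

    # at (re)deploy time the client sends the key with the request; the
    # API decrypts, hands the plaintext to Heat/Keystone, then forgets it
    plaintext = Fernet(key).decrypt(stored)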

-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Object Model discussion

2014-02-20 Thread Eugene Nikanorov
Hi Iwamoto,


> I agree with Samuel here.  I feel the logical model and other issues
> (implementation etc.) are mixed in the discussion.
>

A little bit. While ideally it's better to separate them, in my opinion we
need to have some 'fair bit' of implementation details
in the API in order to reduce code complexity (I'll try to explain it at the
meeting). Currently these 'implementation details' are implied because we
deal with the simplest configurations, which map 1:1 to a backend.



> I'm failing to understand why the current model is unfit for L7 rules.
>
>   - pools belonging to a L7 group should be created with the same
> provider/flavor by a user
>   - pool scheduling can be delayed until it is bound to a vip to make
> sure pools belonging to a L7 group are scheduled to one backend
>
While that could be an option, it's not as easy as it seems.
We've discussed that back at the HK summit but at that point decided that it's
undesirable.


> > I think grouping vips and pools is important part of logical model, even
> if
> > some users may not care about it.
>
> One possibility is to provide an optional data structure to describe
> grouping of vips and pools, on top of the existing pool-vip model.
>
That would be the 'loadbalancer' approach, #2 on the wiki page.
So far we tend to introduce such grouping directly into the vip-pool
relationship.
I plan to explain that in more detail at the meeting.


> Yes, there's little benefit in sharing pools at the cost of the
> complexity.
>
Right, that's the suggestion, but such an ability is also a consequence of a
purely logical config where backend considerations are not taken into account
in the API.

Hope to see you at the meeting!

Thanks,
Eugene.

>
> --
> IWAMOTO Toshihiro
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed) I encountered editing of flavors
and I have some doubts about it.

Editing of nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID (see the sketch after the
list below).
For us it essentially means that all links to the flavor/profile (e.g. from
an overcloud role) will become broken. We had the following proposals:
- Update links automatically after editing by e.g. fetching all
overcloud roles and fixing flavor ID. Poses risk of race conditions with
concurrent editing of either node profiles or overcloud roles.
  Even worse, are we sure that user really wants overcloud roles to be
updated?
- The same as previous but with confirmation from user. Also risk of
race conditions.
- Do not update links. User may be confused: operation called "edit"
should not delete anything, nor is it supposed to invalidate links. One
of the ideas was to show also deleted flavors/profiles in a separate
table.
- Implement clone operation instead of editing. Shows user a creation
form with data prefilled from original profile. Original profile will
stay and should be deleted manually. All links also have to be updated
manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).
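
For reference, a rough python-novaclient sketch of what the current
"edit" amounts to (placeholder credentials; the helper is hypothetical,
not the actual Horizon code):

    # Sketch: flavor "edit" is delete-then-create, and flavorid='auto'
    # gives the replacement a brand-new UUID, so any stored reference to
    # the old id (e.g. in an overcloud role) is left dangling.
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin',
                         'http://keystone:5000/v2.0')  # placeholders

    def edit_flavor(flavor_id, **updates):
        old = nova.flavors.get(flavor_id)
        nova.flavors.delete(old)
        return nova.flavors.create(updates.get('name', old.name),
                                   updates.get('ram', old.ram),
                                   updates.get('vcpus', old.vcpus),
                                   updates.get('disk', old.disk),
                                   flavorid='auto')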

Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift]stable/havana Jenkins failed

2014-02-20 Thread Dong Liu
Thank you Alan and Pete. I will wait for a devstack-gate core to approve 
patch https://review.openstack.org/#/c/74451/


2014-02-20 17:14, Alan Pevec :

I notice that we have changed "from swiftclient import Connection,
HTTPException" to "from swiftclient import Connection, RequestException"
on 2014-02-14; I don't know if it is related.

I have reported a bug for this:
https://bugs.launchpad.net/swift/+bug/1281886


Bug is a duplicate of https://bugs.launchpad.net/openstack-ci/+bug/1281540
and has been also discussed in the other thread
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027476.html

Cheers,
Alan

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron]Do you think tanent_id should be verified

2014-02-20 Thread Dong Liu

Dolph, thanks for the information you provided.

Now I have two questions:
1. Will neutron handle this event notification in the future?
2. I also wish neutron could verify that the tenant_id actually exists.
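
For (2), a minimal sketch of the kind of check I mean, using
python-keystoneclient with admin credentials (token and endpoint are
placeholders):

    # Sketch: ask keystone whether a tenant id actually exists.
    from keystoneclient import exceptions
    from keystoneclient.v2_0 import client

    keystone = client.Client(token='ADMIN_TOKEN',  # placeholder
                             endpoint='http://keystone:35357/v2.0')

    def tenant_exists(tenant_id):
        try:
            keystone.tenants.get(tenant_id)
            return True
        except exceptions.NotFound:
            return False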

thanks

On 2014-02-20 4:33, Dolph Mathews wrote:

There's an open bug [1] against nova & neutron to handle notifications
[2] from keystone about such events. I'd love to see that happen during
Juno!

[1] https://bugs.launchpad.net/nova/+bug/967832
[2] http://docs.openstack.org/developer/keystone/event_notifications.html

On Mon, Feb 17, 2014 at 6:35 AM, Yongsheng Gong wrote:

It is not easy to enhance it. If we check the tenant_id on creation,
should we also do some work when keystone deletes the tenant?


On Mon, Feb 17, 2014 at 6:41 AM, Dolph Mathews wrote:

keystoneclient.middlware.auth_token passes a project ID (and
name, for convenience) to the underlying application through the
WSGI environment, and already ensures that this value can not be
manipulated by the end user.

Project ID's (redundantly) passed through other means, such as
URLs, are up to the service to independently verify against
keystone (or equivalently, against the WSGI environment), but
can be directly manipulated by the end user if no checks are in
place.

Without auth_token in place to manage multitenant authorization,
I'd still expect services to blindly trust the values provided
in the environment (useful for both debugging the service and
alternative deployment architectures).

On Sun, Feb 16, 2014 at 8:52 AM, Dong Liu wrote:

Hi stackers:

I found that when creating networks, subnets and other
resources, the attribute tenant_id
can be set by the admin tenant. But we do not verify whether
the tenant_id is real in keystone.

I know that we could use neutron without keystone, but do
you think tenant_id should
be verified when we use neutron with keystone?

thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][HEAT][Windows] Does HEAT support provisioning windows cluster

2014-02-20 Thread Jay Lau
Thanks Alexander for the detailed explanation, really very helpful!

What I meant by a windows cluster is actually a windows application, such
as a WebSphere cluster or a Hadoop windows cluster.

It seems I can use Cloudbase Init to do the post-deploy actions on windows,
but I cannot scale this cluster up or down, as currently
there are no cfn-tools for windows; is that correct?

Thanks,

Jay



2014-02-20 18:24 GMT+08:00 Alexander Tivelkov :

> Hi Jay,
>
> Windows support in Heat is being developed, but is not complete yet,
> afaik. You may already use Cloudbase Init to do the post-deploy actions on
> windows - check [1] for the details.
>
> Meanwhile, running a windows cluster is a much more complicated task than
> just deploying a number of windows instances (if I understand you correctly
> and you speak about Microsoft Failover Cluster, see [2]): to build it in
> the cloud you will have to execute quite a complex workflow after the nodes
> are actually deployed, which is not possible with Heat (at least for now).
>
> Murano project ([3]) does this on top of Heat, as it was initially
> designed as Windows Data Center as a Service, so I suggest you take a
> look at it. You may also check this video ([4]) which demonstrates how
> Murano is used to deploy a failover cluster of Windows 2012 with a
> clustered MS SQL server on top of it.
>
>
> [1] http://wiki.cloudbase.it/heat-windows
> [2] http://technet.microsoft.com/library/hh831579
> [3] https://wiki.openstack.org/Murano
> [4] http://www.youtube.com/watch?v=Y_CmrZfKy18
>
> --
> Regards,
> Alexander Tivelkov
>
>
> On Thu, Feb 20, 2014 at 2:02 PM, Jay Lau  wrote:
>
>>
>> Hi,
>>
>> Does Heat support provisioning a Windows cluster?  If so, can I also use
>> user-data to do some post-install work for a Windows instance? Is there any
>> example template for this?
>>
>> Thanks,
>>
>> Jay
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Thanks,

Jay
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] async / threading for python 2 and 3

2014-02-20 Thread victor stinner
Hi,

> On 19/02/14 10:09 +0100, Julien Danjou wrote:
> >On Wed, Feb 19 2014, Angus Salkeld wrote:
> >
> >> 2) use tulip and give up python 2
> >
> >+ use trollius to have Python 2 support.
> >
> >  https://pypi.python.org/pypi/trollius
> 
> So I have been giving this a go.

FYI I'm the author of Trollius project.

> We use pecan and wsme (like ceilometer), I wanted to use
> a httpserver library in place of wsgiref.server so had a
> look at a couple and can't use them as they all have "yield from"
> all over the place (i.e. python 3 only). The question I have
> is:
> How useful is trollius if we can't use other thirdparty libraries
> written for asyncio?
> https://github.com/KeepSafe/aiohttp/blob/master/aiohttp/server.py#L171
> 
> Maybe I am missing something?

(Tulip and Trollius unit tests use the wsgiref.simple_server module of the standard 
library. It works, but you said that you don't want to use it.)

Honestly, I have no answer to your question right now ("How useful is trollius 
..."). The asyncio developers are working on fixing the last bugs in asyncio 
(Trollius is a fork; I regularly merge updates from Tulip into Trollius) and 
adding some late features before the Python 3.4 release. This Python release 
will in a sense be the "version 1.0" of asyncio and will freeze the API. Right 
now, I'm working on a proof-of-concept of an eventlet hub using the asyncio 
event loop, so it may be possible to use the eventlet and asyncio APIs at the 
same time. And maybe slowly replace eventlet with asyncio, or at least use 
asyncio in new code.

I asked your question on the Tulip mailing list to see how a single code base 
could support Tulip (yield from) and Trollius (yield), to at least check if it's 
technically possible.
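
As a minimal illustration of the syntax gap (runnable with the trollius
package from PyPI):

    # Sketch: a coroutine in Trollius style. Tulip (Python 3) would write
    # "yield from asyncio.sleep(delay)" and a plain "return 'done'";
    # Trollius keeps Python 2 support via From() and Return().
    import trollius
    from trollius import From, Return

    @trollius.coroutine
    def fetch(delay):
        yield From(trollius.sleep(delay))
        raise Return('done')

    loop = trollius.get_event_loop()
    print(loop.run_until_complete(fetch(0.1)))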

Victor

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Jiří Stránský

On 20.2.2014 12:18, Radomir Dopieralski wrote:

On 20/02/14 12:02, Radomir Dopieralski wrote:

Anybody who gets access to Tuskar-API gets the
passwords, whether we encrypt them or not. Anybody who doesn't have
access to Tuskar-API doesn't get the passwords, whether we encrypt
them or not.


Yeah, i think so too.


Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user either by tuskar-ui, or by
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of deployment (to register
the services at the end), but that's it.


This would be possible, but it would damage the user experience quite a 
bit. Afaik other deployment tools solve password storage the same way we 
do now.


Imho keeping the passwords the way we do now is not among the biggest 
OpenStack security risks. I think we can make the assumption that the 
undercloud will not be publicly accessible, so a potential external 
attacker would have to first gain network access to the undercloud 
machines, and only then could they start trying to exploit Tuskar API to 
hand out the passwords. Overcloud services (which are meant to be 
publicly accessible) have their service passwords accessible in 
plaintext, e.g. in nova.conf you'll find the nova password and the 
neutron password -- I think this is a comparatively greater security risk.


So if we can come up with a solution where the benefits outweigh the 
drawbacks and it makes sense in a broader view of OpenStack security, we 
should go for it, but so far I'm not convinced there is such a solution. 
Just my 2 cents :)


Jirka

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 10:12, Radomir Dopieralski wrote:
> On 19/02/14 18:29, Dougal Matthews wrote:
>> The question for me, is what passwords will we have and when do we need
>> them? Are any of the passwords required long term.
> 
> We will need whatever the Heat template needs to generate all the
> configuration files. That includes passwords for all services that are
> going to be configured, such as, for example, Swift or MySQL.


This is a one-time operation, though, isn't it? You pass those
parameters to Heat when you run stack-create. Heat and os-*-config will
handle the rest.

> 
> I'm not sure about the exact mechanisms in Heat, but I would guess that
> we will need all the parameters, including passwords, when the templates
> are re-generated. We could probably generate new passwords every time,
> though.

What do you mean by regenerating the templates? Do you mean when we want
to update the deployment (e.g. using heat stack-update)?

> 
>> If we do need to store passwords it becomes a somewhat thorny issue, how
>> does Tuskar know what a password is? If this is flagged up by the
>> UI/client then we are relying on the user to tell us which isn't wise.
> 
> All the template parameters that are passwords are marked in the Heat
> parameter list that we get from it as "NoEcho": "true", so we do have an
> idea about which parts are sensitive.
> 

If at all possible, we should not store any passwords or keys whatsoever.

We may have to pass them through from the user to an API (and then
promptly forget them) or possibly hold onto them for a little while (in
RAM), but we should never persist them anywhere.

Let's go through the specific cases where we do handle passwords and
what to do with them.

Looking at devtest, I can see two places where the user deals with
passwords:

http://docs.openstack.org/developer/tripleo-incubator/devtest_overcloud.html

1) In step 10 (Deploy an overcloud) we pass the various overcloud
service passwords and keys to Heat (things like the Keystone admin
token & password, SSL key & cert, nova/heat/cinder/glance service
passwords, etc.; a rough sketch of this call follows below).

I'm assuming this could include any database and AMQP passwords in the
future.

2) steps 17 & 18 (Perform admin setup of your overcloud), where we pass
some of the same passwords to Keystone to set up the overcloud OpenStack
services (compute, metering, orchestration, etc.)

And that's it.
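
To make step 10 concrete, the call amounts to roughly the following (a
sketch only; the parameter names and auth details are invented, using
python-heatclient):

    from heatclient.client import Client

    heat = Client('1', endpoint=HEAT_URL, token=TOKEN)  # assumed auth
    heat.stacks.create(
        stack_name='overcloud',
        template=overcloud_template,
        parameters={
            'AdminPassword': admin_password,  # supplied by the user...
            'NovaPassword': nova_password,    # ...never written to disk
        })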

I'd love it if we could eventually push steps 17 & 18 into our Heat
templates; that's where they belong, I think (please correct me if that's
wrong).

Regardless, all the passwords here are user-specified. When you install
OpenStack, you have to come up with a bunch of passwords up front and
use them to set the various services up.

Now Tuskar serves as an intermediary. It should ask for these passwords,
perform the steps you'd otherwise do manually, and then *forget* the
passwords again.

Since we're using the passwords in 2 steps (10 and 17), we can't just
pass them to Heat and immediately forget them. But we can pass them in
step 10, wait for it to finish, pass them to step 17 and forget them then.

So here's the workflow:

1. The user wants to deploy the overcloud through the UI
2. They're asked to fill in all the necessary information (including the
passwords) -- or we autogenerate it which doesn't change anything
3. Tuskar UI sends a request to Tuskar API including the passwords
3.1. Tuskar UI forgets the passwords (this isn't an explicit action, we
don't store them anywhere)
4. Tuskar API fetches/builds the correct Heat template
5. Tuskar API calls heat stack-create and passes in all the params
(including passwords)
6. Tuskar API waits for heat stack-create to finish
7. Tuskar API issues a bunch of keystone calls to set up the services
(with the specified passwords)
8. Tuskar API forgets the passwords

The asynchronous nature of Heat stack-create may make this a bit more
difficult but the point should still stand -- we should not persist the
passwords. We may have to store them somewhere for a short duration, but
not throughout the entire lifecycle of the overcloud.

I'm not sure if we have to pass the unchanged parameters to Heat again
during stack-update (they may or may not be stored on the metadata
server). If we do, I'd vote we ask the user to re-enter them instead of
storing them somewhere.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Sean Dague
I agree that we shouldn't be rushing something that's not ready, but I
guess it raises kind of a meta issue.

When we started this journey, it was because v2 has a ton of warts and
is completely wonky in its code internals, which leads to plenty of bugs.
v3 was both a surface cleanup and a massive internals cleanup. I think
comparing servers.py:create is a good look at the differences:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L768
- v2

vs.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L415
- v3

v3 was small on user surface changes for a reason, because the idea was
that it would be a quick cut over, the migration pain would be minimal,
and v2 could be dropped relatively quickly (2 cycles).

However, if the new thinking is that v2 is going to be around for a
long time, then I think it raises questions about this whole approach,
because dual maintenance is bad. We see this today where stable/* trees
end up broken in CI for weeks because no one is working on them.

We're also duplicating a lot of test and review energy in having 2 API
stacks. Even before coming out of experimental, v3 has consumed a huge
amount of review resources on both the Nova and Tempest sides to get it
to its current state.

So my feeling is that in order to get more energy and focus on the API,
we need some kind of game plan to get us to a single API version, with a
single data payload in L (or on the outside, M). If the decision is v2
must be in both those releases (and possibly beyond), then it seems like
we need to ask other hard questions:

* Why do a v3 at all? Instead, do we figure out a way to evolve v2 in a
backwards-compatible way?
* If we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
that working around all that code isn't a velocity inhibitor in the
cleanups required in v2? Because some of the crazy hacks that exist to
make XML structures work for the JSON in v2 are kind of special.

This big bang approach to API development may just have run its course,
and no longer be a useful development model. Which is good to find out.
Would have been nice to find out earlier... but not all lessons are easy
or cheap. :)

-Sean

On 02/19/2014 12:36 PM, Russell Bryant wrote:
> Greetings,
> 
> The v3 API effort has been going for a few release cycles now.  As we
> approach the Icehouse release, we are faced with the following question:
> "Is it time to mark v3 stable?"
> 
> My opinion is that I think we need to leave v3 marked as experimental
> for Icehouse.
> 
> There are a number of reasons for this:
> 
> 1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
> last week made me come to the realization that v2 won't be going away
> *any* time soon.  In some cases, users have long term API support
> expectations (perhaps based on experience with EC2).  In the best case,
> we have to get all of the SDKs updated to the new API, and then get to
> the point where everyone is using a new enough version of all of these
> SDKs to use the new API.  I don't think that's going to be quick.
> 
> We really don't want to be in a situation where we're having to force
> any sort of migration to a new API.  The new API should be compelling
> enough that everyone *wants* to migrate to it.  If that's not the case,
> we haven't done our job.
> 
> 2) There's actually quite a bit still left on the existing v3 todo list.
>  We have some notes here:
> 
> https://etherpad.openstack.org/p/NovaV3APIDoneCriteria
> 
> One thing is nova-network support.  Since nova-network is still not
> deprecated, we certainly can't deprecate the v2 API without nova-network
> support in v3.  We removed it from v3 assuming nova-network would be
> deprecated in time.
> 
> Another issue is that we discussed the tasks API as the big new API
> feature we would include in v3.  Unfortunately, it's not going to be
> complete for Icehouse.  It's possible we may have some initial parts
> merged, but it's much smaller scope than what we originally envisioned.
>  Without this, I honestly worry that there's not quite enough compelling
> functionality yet to encourage a lot of people to migrate.
> 
> 3) v3 has taken a lot more time and a lot more effort than anyone
> thought.  This makes it even more important that we're not going to need
> a v4 any time soon.  Due to various things still not quite wrapped up,
> I'm just not confident enough that what we have is something we all feel
> is Nova's API of the future.
> 
> 
> Let's all take some time to reflect on what has happened with v3 so far
> and what it means for how we should move forward.  We can regroup for Juno.
> 
> Finally, I would like to thank everyone who has helped with the effort
> so far.  Many hours have been put in to code and reviews for this.  I
> would like to specifically thank Christopher Yeoh for his work here.
> Chris has done an *enormous* amount of work on this and deserves credit
> for it.  He has taken on a task much bigger than anyone anticipated.
> Thanks, Chris!

Re: [openstack-dev] [Heat] [TripleO] Better handling of lists in Heat - a proposal to add a map function

2014-02-20 Thread Tomas Sedovic
On 19/02/14 08:48, Clint Byrum wrote:
> Since picking up Heat and trying to think about how to express clusters
> of things, I've been troubled by how poorly the CFN language supports
> using lists. There has always been the Fn::Select function for
> dereferencing arrays and maps, and recently we added a nice enhancement
> to HOT to allow referencing these directly in get_attr and get_param.
> 
> However, this does not help us when we want to do something with all of
> the members of a list.
> 
> In many applications I suspect the template authors will want to do what
> we want to do now in TripleO. We have a list of identical servers and
> we'd like to fetch the same attribute from them all, join it with other
> attributes, and return that as a string.
> 
> The specific case is that we need to have all of the hosts in a cluster
> of machines addressable in /etc/hosts (please, Designate, save us,
> eventually. ;). The way to do this if we had just explicit resources
> named NovaCompute0, NovaCompute1, would be:
> 
>   str_join:
> - "\n"
> - - str_join:
> - ' '
> - get_attr:
>   - NovaCompute0
>   - networks.ctlplane.0
> - get_attr:
>   - NovaCompute0
>   - name
>   - str_join:
> - ' '
> - get_attr:
>   - NovaCompute1
>   - networks.ctlplane.0
> - get_attr:
>   - NovaCompute1
>   - name
> 
> Now, what I'd really like to do is this:
> 
> map:
>   - str_join:
> - "\n"
> - - str_join:
>   - ' '
>   - get_attr:
> - "$1"
> - networks.ctlplane.0
>   - get_attr:
> - "$1"
> - name
>   - - NovaCompute0
> - NovaCompute1
> 
> This would be helpful for the instances of resource groups too, as we
> can make sure they return a list. The above then becomes:
> 
> 
> map:
>   - str_join:
> - "\n"
> - - str_join:
>   - ' '
>   - get_attr:
> - "$1"
> - networks.ctlplane.0
>   - get_attr:
> - "$1"
> - name
>   - get_attr:
>   - NovaComputeGroup
>   - member_resources
> 
> Thoughts on this idea? I will throw together an implementation soon but
> wanted to get this idea out there into the hive mind ASAP.

I think it's missing lambdas and recursion ;-).

Joking aside, I like it. As long as we don't actually turn this into
anything remotely resembling Turing-completeness, having useful data
processing primitives is good.

Now onto the bikeshed: could we denote the arguments with something
that more obviously looks like a Heat-specific notation rather than a
user-entered string?

E.g. replace "$1" with {Arg: 1}

It's a bit uglier, but it makes it more obvious what's going on.
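
To make the comparison concrete, here is a rough Python sketch (mine, not
proposed Heat code) of how the intrinsic could substitute the marker,
whichever notation wins:

    def resolve_map(snippet, item):
        # Recursively substitute the argument marker with the current item.
        if snippet == "$1" or snippet == {"Arg": 1}:
            return item
        if isinstance(snippet, dict):
            return dict((k, resolve_map(v, item)) for k, v in snippet.items())
        if isinstance(snippet, list):
            return [resolve_map(v, item) for v in snippet]
        return snippet

    def fn_map(body, items):
        return [resolve_map(body, item) for item in items]

    # fn_map({'get_attr': ['$1', 'name']}, ['NovaCompute0', 'NovaCompute1'])
    # -> [{'get_attr': ['NovaCompute0', 'name']},
    #     {'get_attr': ['NovaCompute1', 'name']}]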

> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchicical Multitenancy Discussion

2014-02-20 Thread John Dennis
On 02/19/2014 08:58 PM, Adam Young wrote:
>> Can you give more detail here? I can see arguments for both ways of
>> doing this but continuing to use ids for ownership is an easier
>> choice. Here is my thinking:
>>
>> 1. all of the projects use ids for ownership currently so it is a
>> smaller change
> That does not change.  It is the hierarchy that is labeled by name.
> 
>> 2. renaming a project in keystone would not invalidate the ownership
>> hierarchy (Note that moving a project around would invalidate the
>> hierarchy in both cases)
>>
> Renaming would not change anything.
> 
> I would say the rule should be this:  Ids are basically uuids, and are
> immutable.  Names are mutable.  Each project has a parent id.  A project
> can either be referenced directly by ID, or hierarchically by name.  In
> addition, you can navigate to a project by traversing the set of ids,
> but you need to know where you are going.  Thus the array
> 
> ['abcd1234','fedd3213','3e3e3e3e'] would be a way to find a project, but
> the project ID for the leaf node would still be just '3e3e3e3e'.

The analogy I see here is the Unix file system, which is organized into a
tree structure by inodes; each inode has a name (technically it can have
more than one name). But the fundamental point is that the structure is
formed by ids (i.e. inodes); the path name of a file is transitory and
depends only on what name is bound to the id at the moment. It's a very
rich and powerful abstraction. The same concept is used in many database
schemas: an object has a numeric primary key and a name, and you can
change the name easily without affecting any references to the id.
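
A tiny sketch (my illustration, not a proposed schema) of the same idea
in code:

    projects = {
        'abcd1234': {'name': 'acme',  'parent_id': None},
        'fedd3213': {'name': 'dev',   'parent_id': 'abcd1234'},
        '3e3e3e3e': {'name': 'gamma', 'parent_id': 'fedd3213'},
    }

    def resolve(path_names):
        # Walk the mutable names down the tree to find the immutable id.
        parent = None
        node_id = None
        for name in path_names:
            node_id = next(pid for pid, p in projects.items()
                           if p['name'] == name and p['parent_id'] == parent)
            parent = node_id
        return node_id

    assert resolve(['acme', 'dev', 'gamma']) == '3e3e3e3e'
    # Renaming 'dev' changes the path, but '3e3e3e3e' stays valid.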



-- 
John

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 14:10, Jiří Stránský wrote:
> On 20.2.2014 12:18, Radomir Dopieralski wrote:
>> On 20/02/14 12:02, Radomir Dopieralski wrote:
>>> Anybody who gets access to Tuskar-API gets the
>>> passwords, whether we encrypt them or not. Anybody who doesn't have
>>> access to Tuskar-API doesn't get the passwords, whether we encrypt
>>> them or not.
> 
> Yeah, i think so too.
> 
>> Thinking about it some more, all the uses of the passwords come as a
>> result of an action initiated by the user either by tuskar-ui, or by
>> the tuskar command-line client. So maybe we could put the key in their
>> configuration and send it with the request to (re)deploy. Tuskar-API
>> would still need to keep it for the duration of deployment (to register
>> the services at the end), but that's it.
> 
> This would be possible, but it would damage the user experience quite a
> bit. Afaik other deployment tools solve password storage the same way we
> do now.
> 
> Imho keeping the passwords the way we do now is not among the biggest
> OpenStack security risks. I think we can make the assumption that
> undercloud will not be publicly accessible, so a potential external
> attacker would have to first gain network access to the undercloud
> machines and only then they can start trying to exploit Tuskar API to
> hand out the passwords. Overcloud services (which are meant to be
> publicly accessible) have their service passwords accessible in
> plaintext, e.g. in nova.conf you'll find nova password and neutron
> password -- i think this is comparatively greater security risk.

This to me reads as: we should fix the OpenStack services not to store
passwords in their service.conf, rather than making the situation worse
by storing them in even more places.

> 
> So if we can come up with a solution where the benefits outweigh the
> drawbacks and it makes sense in broader view at OpenStack security, we
> should go for it, but so far i'm not convinced there is such a solution.
> Just my 2 cents :)
> 
> Jirka
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 14:10, Jiří Stránský wrote:
> On 20.2.2014 12:18, Radomir Dopieralski wrote:

>> Thinking about it some more, all the uses of the passwords come as a
>> result of an action initiated by the user either by tuskar-ui, or by
>> the tuskar command-line client. So maybe we could put the key in their
>> configuration and send it with the request to (re)deploy. Tuskar-API
>> would still need to keep it for the duration of deployment (to register
>> the services at the end), but that's it.
> 
> This would be possible, but it would damage the user experience quite a
> bit. Afaik other deployment tools solve password storage the same way we
> do now.

I don't think it would damage the user experience so much. All you need
is an additional configuration option in Tuskar-UI and Tuskar-client,
the encryption key.

That key would be used to encrypt the passwords when they are first sent
to Tuskar-API, and also added to the (re)deployment calls.

This way, if the database leaks due to a security hole in MySQL or bad
engineering practices administering the database, the passwords are
still inaccessible. To get them, the attacker would need to get *both*
the database and the config files from the host on which Tuskar-UI runs.
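
Roughly the sketch I have in mind (an illustration with PyCrypto, not a
finished design; the key is 16/24/32 random bytes from the config file):

    import os
    from Crypto.Cipher import AES

    def encrypt_password(key, password):
        # The key lives in the Tuskar-UI/client config, never the database.
        iv = os.urandom(16)
        return iv + AES.new(key, AES.MODE_CFB, iv).encrypt(password)

    def decrypt_password(key, blob):
        iv, data = blob[:16], blob[16:]
        return AES.new(key, AES.MODE_CFB, iv).decrypt(data)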

With the tuskar-client it's a little bit more obnoxious, because you
would need to configure it on every host from which you want to use it,
but you already need to do some configuration to point it at the
tuskar-api and authenticate it, so it's not so bad.

I agree that this complicates the whole process a little, and adds
another potential failure point though.
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Jay Dobies



On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:

Hi.

While implementing CRUD operations for node profiles in Tuskar (which
are essentially Nova flavors renamed) I encountered editing of flavors
and I have some doubts about it.

Editing of nova flavors in Horizon is implemented as
deleting-then-creating with a _new_ flavor ID.
For us it essentially means that all links to flavor/profile (e.g. from
overcloud role) will become broken. We had the following proposals:
- Update links automatically after editing by e.g. fetching all
overcloud roles and fixing flavor ID. Poses risk of race conditions with
concurrent editing of either node profiles or overcloud roles.
   Even worse, are we sure that the user really wants the overcloud roles
to be updated?


This is a big question. Editing has always been a complicated concept in 
Tuskar. How soon do you want the effects of the edit to be made live? 
Should it only apply to future creations or should it be applied to 
anything running off the old configuration? What's the policy on how to 
apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
something else)?



- The same as previous but with confirmation from user. Also risk of
race conditions.
- Do not update links. User may be confused: operation called "edit"
should not delete anything, nor is it supposed to invalidate links. One
of the ideas was to show also deleted flavors/profiles in a separate
table.
- Implement clone operation instead of editing. Shows user a creation
form with data prefilled from original profile. Original profile will
stay and should be deleted manually. All links also have to be updated
manually.
- Do not implement editing, only creating and deleting (that's what I
did for now in https://review.openstack.org/#/c/73576/ ).


I'm +1 on not implementing editing. It's why we wanted to standardize on 
a single flavor for Icehouse in the first place; the use cases around 
editing or multiple flavors are very complicated.



Any ideas on what to do?

Thanks in advance,
Dmitry Tantsur


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Alex Xu

On 2014-02-20 10:44, Christopher Yeoh wrote:

On Wed, 19 Feb 2014 12:36:46 -0500
Russell Bryant  wrote:


Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following
question: "Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.


Although I'm very eager to get the V3 API released, I do agree with you.
As you have said we will be living with both the V2 and V3 APIs for a
very long time. And at this point there would be simply too many last
minute changes to the V3 API for us to be confident that we have it
right "enough" to release as a stable API.


+1


We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the
case, we haven't done our job.

+1


Let's all take some time to reflect on what has happened with v3 so
far and what it means for how we should move forward.  We can regroup
for Juno.

Finally, I would like to thank everyone who has helped with the effort
so far.  Many hours have been put in to code and reviews for this.  I
would like to specifically thank Christopher Yeoh for his work here.
Chris has done an *enormous* amount of work on this and deserves
credit for it.  He has taken on a task much bigger than anyone
anticipated. Thanks, Chris!

Thanks Russell, that's much appreciated. I'm also very thankful to
everyone who has worked on the V3 API either through patches and/or
reviews, especially Alex Xu and Ivan Zhu who have done a lot of work on
it in Havana and Icehouse.


Thank you, Chris. I hope we get a great v3 API.



Chris.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Jay Dobies

Just to throw this out there, is this something we need for Icehouse?

Yes, I fully acknowledge that it's an ugly security hole. But what's our 
story for how stable/clean Tuskar will be for Icehouse? I don't believe 
the intention is for people to use this in a production environment yet, 
so it will be people trying things out in a test environment. I don't 
think it's absurd to document that we haven't finished hardening the 
security yet and to advise against using super-sensitive passwords.


If there was a simple answer, I likely wouldn't even suggest this. But 
there's some real design and thought that needs to take place and, 
frankly, we're running out of time. Keeping in mind the intended usage 
of the Icehouse release of Tuskar, it might make sense to shelve this 
for now and file a big fat bug that we address in Juno.


On 02/20/2014 08:47 AM, Radomir Dopieralski wrote:

On 20/02/14 14:10, Jiří Stránský wrote:

On 20.2.2014 12:18, Radomir Dopieralski wrote:



Thinking about it some more, all the uses of the passwords come as a
result of an action initiated by the user either by tuskar-ui, or by
the tuskar command-line client. So maybe we could put the key in their
configuration and send it with the request to (re)deploy. Tuskar-API
would still need to keep it for the duration of deployment (to register
the services at the end), but that's it.


This would be possible, but it would damage the user experience quite a
bit. Afaik other deployment tools solve password storage the same way we
do now.


I don't think it would damage the user experience so much. All you need
is an additional configuration option in Tuskar-UI and Tuskar-client,
the encryption key.

That key would be used to encrypt the passwords when they are first sent
to Tuskar-API, and also added to the (re)deployment calls.

This way, if the database leaks due to a security hole in MySQL or bad
engineering practices administering the database, the passwords are
still inaccessible. To get them, the attacker would need to get
*both* the database and the config files from host on which Tuskar-UI runs.

With the tuskar-client it's a little bit more obnoxious, because you
would need to configure it on every host from which you want to use it,
but you already need to do some configuration to point it at the
tuskar-api and authenticate it, so it's not so bad.

I agree that this complicates the whole process a little, and adds
another potential failure point though.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Dmitry Tantsur
I think we are still going to have multiple flavors for I, e.g.:
https://review.openstack.org/#/c/74762/
On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> 
> On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > Hi.
> >
> > While implementing CRUD operations for node profiles in Tuskar (which
> > are essentially Nova flavors renamed) I encountered editing of flavors
> > and I have some doubts about it.
> >
> > Editing of nova flavors in Horizon is implemented as
> > deleting-then-creating with a _new_ flavor ID.
> > For us it essentially means that all links to flavor/profile (e.g. from
> > overcloud role) will become broken. We had the following proposals:
> > - Update links automatically after editing by e.g. fetching all
> > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > concurrent editing of either node profiles or overcloud roles.
> >Even worse, are we sure that user really wants overcloud roles to be
> > updated?
> 
> This is a big question. Editing has always been a complicated concept in 
> Tuskar. How soon do you want the effects of the edit to be made live? 
> Should it only apply to future creations or should it be applied to 
> anything running off the old configuration? What's the policy on how to 
> apply that (canary v. the-other-one-i-cant-remember-the-name-for v. 
> something else)?
> 
> > - The same as previous but with confirmation from user. Also risk of
> > race conditions.
> > - Do not update links. User may be confused: operation called "edit"
> > should not delete anything, nor is it supposed to invalidate links. One
> > of the ideas was to show also deleted flavors/profiles in a separate
> > table.
> > - Implement clone operation instead of editing. Shows user a creation
> > form with data prefilled from original profile. Original profile will
> > stay and should be deleted manually. All links also have to be updated
> > manually.
> > - Do not implement editing, only creating and deleting (that's what I
> > did for now in https://review.openstack.org/#/c/73576/ ).
> 
> I'm +1 on not implementing editing. It's why we wanted to standardize on 
> a single flavor for Icehouse in the first place, the use cases around 
> editing or multiple flavors are very complicated.
> 
> > Any ideas on what to do?
> >
> > Thanks in advance,
> > Dmitry Tantsur
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Matt Riedemann



On 2/19/2014 12:26 PM, Chris Behrens wrote:

+1. I'd like to leave it experimental as well. I think the task work is 
important to the future of nova-api and I'd like to make sure we're not rushing 
anything. We're going to need to live with old API versions for a long time, so 
it's important that we get it right. I'm also not convinced there's a 
compelling enough reason for one to move to v3 as it is. Extension versioning 
is important, but I'm not sure it can't be backported to v2 in the meantime.


Thinking about what would differentiate V3, tasks is the big one, but the 
common request ID [1] is something that could be a nice carrot for 
getting people to move eventually.


[1] https://blueprints.launchpad.net/nova/+spec/cross-service-request-id



- Chris


On Feb 19, 2014, at 9:36 AM, Russell Bryant  wrote:

Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following question:
"Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.

There are a number of reasons for this:

1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
last week made me come to the realization that v2 won't be going away
*any* time soon.  In some cases, users have long term API support
expectations (perhaps based on experience with EC2).  In the best case,
we have to get all of the SDKs updated to the new API, and then get to
the point where everyone is using a new enough version of all of these
SDKs to use the new API.  I don't think that's going to be quick.

We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the case,
we haven't done our job.

2) There's actually quite a bit still left on the existing v3 todo list.
We have some notes here:

https://etherpad.openstack.org/p/NovaV3APIDoneCriteria

One thing is nova-network support.  Since nova-network is still not
deprecated, we certainly can't deprecate the v2 API without nova-network
support in v3.  We removed it from v3 assuming nova-network would be
deprecated in time.

Another issue is that we discussed the tasks API as the big new API
feature we would include in v3.  Unfortunately, it's not going to be
complete for Icehouse.  It's possible we may have some initial parts
merged, but it's much smaller scope than what we originally envisioned.
Without this, I honestly worry that there's not quite enough compelling
functionality yet to encourage a lot of people to migrate.

3) v3 has taken a lot more time and a lot more effort than anyone
thought.  This makes it even more important that we're not going to need
a v4 any time soon.  Due to various things still not quite wrapped up,
I'm just not confident enough that what we have is something we all feel
is Nova's API of the future.


Let's all take some time to reflect on what has happened with v3 so far
and what it means for how we should move forward.  We can regroup for Juno.

Finally, I would like to thank everyone who has helped with the effort
so far.  Many hours have been put in to code and reviews for this.  I
would like to specifically thank Christopher Yeoh for his work here.
Chris has done an *enormous* amount of work on this and deserves credit
for it.  He has taken on a task much bigger than anyone anticipated.
Thanks, Chris!

--
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano] [horizon] Reverse order of settings modules inclusion

2014-02-20 Thread Timur Sufiev
Hello!

In Murano's dashboard we have around 20 parameters that should be changed
in or added to DJANGO_SETTINGS_MODULE. Currently all these parameters are
embedded into openstack_dashboard.settings during install with sed and
similar tools (and removed during uninstall).

Recently a cleaner and more elegant way was devised [1]: not to insert code
into openstack_dashboard.settings, but to define all Murano-specific settings
in Murano's own config file, which in turn imports the contents of
openstack_dashboard.settings. That also requires changing the
DJANGO_SETTINGS_MODULE environment variable from
'openstack_dashboard.settings' to 'muranodashboard.settings' in the
django.wsgi file which is referenced in the Apache config as the entry point
for the Django-served site (we do not touch the original openstack_dashboard
wsgi file, but edit the Apache config to point to our own
muranodashboard/wsgi/django.wsgi file with the appropriate environment
variable).

While this approach has obvious advantages:
* Murano-specific parameters are clearly separated from common
openstack_dashboard parameters;
* moreover, customizable Murano parameters inside the
/etc/murano/murano-dashboard/settings-{prologue,epilogue}.py files are
separated from the constant muranodashboard settings somewhere in /usr/;
it also reverses, in some sense, the relation between openstack_dashboard
and muranodashboard: the muranodashboard.settings file is now the main
entry point for the whole openstack_dashboard Django application.
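
A minimal sketch of the layout (assumed names, not the actual Murano
code):

    # muranodashboard/settings.py
    from openstack_dashboard.settings import *  # noqa -- pull everything in

    # ...then layer the Murano-specific parameters on top:
    INSTALLED_APPS += ('muranodashboard',)
    MURANO_API_URL = 'http://localhost:8082'  # hypothetical setting name

    # and in muranodashboard/wsgi/django.wsgi:
    #   os.environ['DJANGO_SETTINGS_MODULE'] = 'muranodashboard.settings'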

To me, it doesn't pose a serious drawback, because openstack_dashboard
is still the main Django application which uses Murano as one of its
dashboards; only the order of settings inclusion is reversed. But having
investigated and implemented this scheme, I might be a bit biased :)... So
I'd like to hear your opinion: is this way of augmenting the
openstack_dashboard settings viable or not?

[1] https://review.openstack.org/#/c/68125/

-- 
Timur Sufiev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] when icehouse will be frozen

2014-02-20 Thread Miguel Angel Ajo

OK.

My previous answer was actually about the feature proposal freeze,
which happened two days ago.

Cheers,
Miguel Ángel.

On 02/20/2014 11:27 AM, Thierry Carrez wrote:

马煜 wrote:

Who knows when the Icehouse version will be frozen?

My blueprint for an ML2 driver has been approved and the code is under review,
but I'm having some trouble deploying the third-party CI on which the Tempest
tests run.

Feature freeze is on March 4th [1], so featureful code shall be proposed
*and* merged by then. I suspect Neutron core won't approve it until the
3rd party CI testing is in order, though, so if you can't get it to work
by then it may have to live out of the tree for the Icehouse release.

The Neutron drivers should be able to give you more precise details.

[1] https://wiki.openstack.org/wiki/Icehouse_Release_Schedule




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Hi,

Neutron permits setting a gateway IP outside of the subnet CIDR by default.
And, thanks to garyk's patch [1], it's possible to change this default
behavior with the config flag 'force_gateway_on_subnet'.

This flag was added to keep backward compatibility for people who need
to set the gateway outside of the subnet.

I think this behavior does not reflect the classic usage of subnets, so I
propose updating the default value of the flag 'force_gateway_on_subnet'
to True.
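
For clarity, the proposal is only to flip the shipped default, i.e.
something like this in neutron.conf:

    # neutron.conf (proposed new default; the check is currently disabled)
    force_gateway_on_subnet = True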

Any thought?

[1] https://review.openstack.org/#/c/19048/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [TripleO] [Tuskar] Thoughts on editing node profiles (aka flavors in Tuskar UI)

2014-02-20 Thread Tzu-Mainn Chen
Multiple flavors, but a single flavor per role, correct?

Mainn

- Original Message -
> I think we still are going to multiple flavors for I, e.g.:
> https://review.openstack.org/#/c/74762/
> On Thu, 2014-02-20 at 08:50 -0500, Jay Dobies wrote:
> > 
> > On 02/20/2014 06:40 AM, Dmitry Tantsur wrote:
> > > Hi.
> > >
> > > While implementing CRUD operations for node profiles in Tuskar (which
> > > are essentially Nova flavors renamed) I encountered editing of flavors
> > > and I have some doubts about it.
> > >
> > > Editing of nova flavors in Horizon is implemented as
> > > deleting-then-creating with a _new_ flavor ID.
> > > For us it essentially means that all links to flavor/profile (e.g. from
> > > overcloud role) will become broken. We had the following proposals:
> > > - Update links automatically after editing by e.g. fetching all
> > > overcloud roles and fixing flavor ID. Poses risk of race conditions with
> > > concurrent editing of either node profiles or overcloud roles.
> > >Even worse, are we sure that user really wants overcloud roles to be
> > > updated?
> > 
> > This is a big question. Editing has always been a complicated concept in
> > Tuskar. How soon do you want the effects of the edit to be made live?
> > Should it only apply to future creations or should it be applied to
> > anything running off the old configuration? What's the policy on how to
> > apply that (canary v. the-other-one-i-cant-remember-the-name-for v.
> > something else)?
> > 
> > > - The same as previous but with confirmation from user. Also risk of
> > > race conditions.
> > > - Do not update links. User may be confused: operation called "edit"
> > > should not delete anything, nor is it supposed to invalidate links. One
> > > of the ideas was to show also deleted flavors/profiles in a separate
> > > table.
> > > - Implement clone operation instead of editing. Shows user a creation
> > > form with data prefilled from original profile. Original profile will
> > > stay and should be deleted manually. All links also have to be updated
> > > manually.
> > > - Do not implement editing, only creating and deleting (that's what I
> > > did for now in https://review.openstack.org/#/c/73576/ ).
> > 
> > I'm +1 on not implementing editing. It's why we wanted to standardize on
> > a single flavor for Icehouse in the first place, the use cases around
> > editing or multiple flavors are very complicated.
> > 
> > > Any ideas on what to do?
> > >
> > > Thanks in advance,
> > > Dmitry Tantsur
> > >
> > >
> > > ___
> > > OpenStack-dev mailing list
> > > OpenStack-dev@lists.openstack.org
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Matt Riedemann



On 2/20/2014 7:22 AM, Sean Dague wrote:

I agree that we shouldn't be rushing something that's not ready, but I
guess it raises kind of a meta issue.

When we started this journey this was because v2 has a ton of warts, is
completely wonky on the code internals, which leads to plenty of bugs.
v3 was both a surface clean up, but it was also a massive internals
clean up. I think comparing servers.py:create is a good look at the
differences:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/servers.py#L768
- v2

vs.

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/plugins/v3/servers.py#L415
- v3

v3 was small on user surface changes for a reason, because the idea was
that it would be a quick cut over, the migration pain would be minimal,
and v2 could be dropped relatively quickly (2 cycles).

However if the new thinking is that v2 is going to be around for a
long time then I think it raises questions about this whole approach.
Because dual maintenance is bad. We see this today where stable/* trees
end up broken in CI for weeks because no one is working on it.

We're also duplicating a lot of test and review energy in having 2 API
stacks. Even before v3 has come out of experimental it's consumed a huge
amount of review resource on both the Nova and Tempest sides to get it
to it's current state.

So my feeling is that in order to get more energy and focus on the API,
we need some kind of game plan to get us to a single API version, with a
single data payload in L (or on the outside, M). If the decision is v2
must be in both those releases (and possibly beyond), then it seems like
asking other hard questions.

* why do a v3 at all? instead do we figure out a way to be able to
evolve v2 in a backwards compatible way.
* if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
that working around all that code isn't a velocity inhibitor in the
cleanups required in v2? Because some of the crazy hacks that exist to
make XML structures work for the json in v2 is kind of special.


I also have something on the nova meeting agenda today about how some 
things should be handled in the V2 API now that we know it's going to be 
around for a while and that we're working towards a more transparent 
integration with Neutron, since we have some bugs on that subject with 
differing viewpoints on how to handle them/what's supported:


https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting



This big bang approach to API development may just have run it's course,
and no longer be a useful development model. Which is good to find out.
Would have been nice to find out earlier... but not all lessons are easy
or cheap. :)

-Sean

On 02/19/2014 12:36 PM, Russell Bryant wrote:

Greetings,

The v3 API effort has been going for a few release cycles now.  As we
approach the Icehouse release, we are faced with the following question:
"Is it time to mark v3 stable?"

My opinion is that I think we need to leave v3 marked as experimental
for Icehouse.

There are a number of reasons for this:

1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
last week made me come to the realization that v2 won't be going away
*any* time soon.  In some cases, users have long term API support
expectations (perhaps based on experience with EC2).  In the best case,
we have to get all of the SDKs updated to the new API, and then get to
the point where everyone is using a new enough version of all of these
SDKs to use the new API.  I don't think that's going to be quick.

We really don't want to be in a situation where we're having to force
any sort of migration to a new API.  The new API should be compelling
enough that everyone *wants* to migrate to it.  If that's not the case,
we haven't done our job.

2) There's actually quite a bit still left on the existing v3 todo list.
  We have some notes here:

https://etherpad.openstack.org/p/NovaV3APIDoneCriteria

One thing is nova-network support.  Since nova-network is still not
deprecated, we certainly can't deprecate the v2 API without nova-network
support in v3.  We removed it from v3 assuming nova-network would be
deprecated in time.

Another issue is that we discussed the tasks API as the big new API
feature we would include in v3.  Unfortunately, it's not going to be
complete for Icehouse.  It's possible we may have some initial parts
merged, but it's much smaller scope than what we originally envisioned.
  Without this, I honestly worry that there's not quite enough compelling
functionality yet to encourage a lot of people to migrate.

3) v3 has taken a lot more time and a lot more effort than anyone
thought.  This makes it even more important that we're not going to need
a v4 any time soon.  Due to various things still not quite wrapped up,
I'm just not confident enough that what we have is something we all feel
is Nova's API of the future.


Let's all take some time to reflect on what has happened with v3 so far
and what it means for how we should move forward. We can regroup for Juno.

Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 14:47, Radomir Dopieralski wrote:
> On 20/02/14 14:10, Jiří Stránský wrote:
>> On 20.2.2014 12:18, Radomir Dopieralski wrote:
> 
>>> Thinking about it some more, all the uses of the passwords come as a
>>> result of an action initiated by the user either by tuskar-ui, or by
>>> the tuskar command-line client. So maybe we could put the key in their
>>> configuration and send it with the request to (re)deploy. Tuskar-API
>>> would still need to keep it for the duration of deployment (to register
>>> the services at the end), but that's it.
>>
>> This would be possible, but it would damage the user experience quite a
>> bit. Afaik other deployment tools solve password storage the same way we
>> do now.
> 
> I don't think it would damage the user experience so much. All you need
> is an additional configuration option in Tuskar-UI and Tuskar-client,
> the encryption key.
> 
> That key would be used to encrypt the passwords when they are first sent
> to Tuskar-API, and also added to the (re)deployment calls.


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.

> 
> This way, if the database leaks due to a security hole in MySQL or bad
> engineering practices administering the database, the passwords are
> still inaccessible. To get them, the attacker would need to get
> *both* the database and the config files from host on which Tuskar-UI runs.
> 
> With the tuskar-client it's a little bit more obnoxious, because you
> would need to configure it on every host from which you want to use it,
> but you already need to do some configuration to point it at the
> tuskar-api and authenticate it, so it's not so bad.
> 
> I agree that this complicates the whole process a little, and adds
> another potential failure point though.
> 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Looking back, perhaps we should remove that flag and only authorize the
admin user to set the gateway IP outside of the subnet CIDR (for tricky
networks), just as only the admin user can create provider networks, and
require regular users to set the gateway IP inside the subnet CIDR.

Édouard.


On Thu, Feb 20, 2014 at 3:15 PM, Édouard Thuleau  wrote:

> Hi,
>
> Neutron permits to set a gateway IP outside of the subnet cidr by default.
> And, thanks to the garyk's patch [1], it's possible to change this default
> behavior with config flag 'force_gateway_on_subnet'.
>
> This flag was added to keep the backward compatibility for people who need
> to set the gateway outside of the subnet.
>
> I think this behavior does not reflect the classic usage of subnets. So I
> propose to update the default value of the flag 'force_gateway_on_subnet'
> to True.
>
> Any thought?
>
> [1] https://review.openstack.org/#/c/19048/
>
> Regards,
> Édouard.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-20 Thread Mark McClain
I’d like to welcome Oleg as a member of the core Neutron team, as he has 
received more than enough +1s and no negative votes from the other cores.

mark

On Feb 10, 2014, at 6:28 PM, Mark McClain  wrote:

> All-
> 
> I’d like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg 
> has been a valuable contributor to Neutron by actively reviewing, working on 
> bugs, and contributing code.
> 
> Neutron cores please reply back with +1/0/-1 votes.
> 
> mark
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Veiga, Anthony
This would break IPv6.  The gateway address is the source of Router 
Advertisements, and according to RFC 4861 [1], Section 4.2: "Source Address 
MUST be the link-local address assigned to the interface from which this 
message is sent".  This means that if you configure a subnet with Global 
Unicast Address scope, the gateway by definition cannot be in the configured 
subnet.  Please don't force this option, as it will break work going on in 
the Neutron IPv6 sub-team.
-Anthony

[1] http://tools.ietf.org/html/rfc4861
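
For illustration (my own example, with documentation addresses), this is
exactly the kind of subnet that forcing the option would forbid, even
though it is valid IPv6:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')
    neutron.create_subnet({'subnet': {
        'network_id': net_id,        # an existing network's UUID
        'ip_version': 6,
        'cidr': '2001:db8:1::/64',   # global unicast subnet
        'gateway_ip': 'fe80::1',     # link-local RA source, outside the CIDR
    }})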

Hi,

Neutron permits to set a gateway IP outside of the subnet cidr by default. And, 
thanks to the garyk's patch [1], it's possible to change this default behavior 
with config flag 'force_gateway_on_subnet'.

This flag was added to keep the backward compatibility for people who need to 
set the gateway outside of the subnet.

I think this behavior does not reflect the classic usage of subnets. So I 
propose to update the default value of the flag 'force_gateway_on_subnet' to 
True.

Any thought?

[1] https://review.openstack.org/#/c/19048/

Regards,
Édouard.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 15:00, Tomas Sedovic wrote:

> Are we even sure we need to store the passwords in the first place? All
> this encryption talk seems very premature to me.

How are you going to redeploy without them?
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Ensure that configured gateway is on subnet by default

2014-02-20 Thread Édouard Thuleau
Ah yes, I completely forgot the IPv6 case.
Sorry, please disregard this thread.

Édouard.


On Thu, Feb 20, 2014 at 3:34 PM, Veiga, Anthony <
anthony_ve...@cable.comcast.com> wrote:

>  This would break IPv6.  The gateway address, according to RFC 4861[1]
> Section 4.2 regarding Router Advertisements: "Source Address MUST be the
> link-local address assigned to the interface from which this message is
> sent".  This means that if you configure a subnet with a Globally Unique
> Address scope, the gateway by definition cannot be in the configured
> subnet.  Please don't force this option, as it will break work going on in
> the Neutron IPv6 sub-team.
> -Anthony
>
>  [1] http://tools.ietf.org/html/rfc4861
>
>   Hi,
>
>  Neutron permits to set a gateway IP outside of the subnet cidr by
> default. And, thanks to the garyk's patch [1], it's possible to change this
> default behavior with config flag 'force_gateway_on_subnet'.
>
>  This flag was added to keep the backward compatibility for people who
> need to set the gateway outside of the subnet.
>
>  I think this behavior does not reflect the classic usage of subnets. So
> I propose to update the default value of the flag 'force_gateway_on_subnet'
> to True.
>
>  Any thought?
>
>  [1] https://review.openstack.org/#/c/19048/
>
>  Regards,
> Édouard.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread 郭小熙
We will move to Python 3.3 in the future. More and more OpenStack projects,
including python-novaclient, are Python 3.3 compatible. Do we have a plan to
make Nova Python 3.3 compatible?

As far as I know, oslo.messaging will not support Python 3.3 in Icehouse.
This is just one dependency for Nova, which means we can't finish the work
in Icehouse for Nova. I registered a blueprint [1] to make us move to
Python 3.3 smoothly in the future. Python 3.3 compatibility would be taken
into account while reviewing code.

We have to add py33 check/gate jobs to check Python 3.3 compatibility. This
blueprint can be marked as implemented only once the Nova code passes these
jobs.
[1] https://blueprints.launchpad.net/nova/+spec/nova-py3kcompat
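
For illustration, the usual shape of such a job (an assumption about the
eventual config, not something already merged) is a py33 tox environment
that the check pipeline can invoke:

    # tox.ini (illustrative stanza only)
    [tox]
    envlist = py26,py27,py33,pep8

    [testenv:py33]
    basepython = python3.3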

-- 
ChangBo Guo(gcb)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-20 Thread Oleg Bondarev
Thanks Mark,

thanks everyone for voting! I'm so happy to become a member of this really
great team!

Oleg


On Thu, Feb 20, 2014 at 6:29 PM, Mark McClain wrote:

>  I'd like to welcome Oleg as member of the core Neutron team as he has
> received more than enough +1s and no negative votes from the other cores.
>
> mark
>
> On Feb 10, 2014, at 6:28 PM, Mark McClain  wrote:
>
> > All-
> >
> > I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.
>  Oleg has been valuable contributor to Neutron by actively reviewing,
> working on bugs, and contributing code.
> >
> > Neutron cores please reply back with +1/0/-1 votes.
> >
> > mark
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Christopher Yeoh
On Thu, 20 Feb 2014 08:22:57 -0500
Sean Dague  wrote:
> 
> We're also duplicating a lot of test and review energy in having 2 API
> stacks. Even before v3 has come out of experimental it's consumed a
> huge amount of review resource on both the Nova and Tempest sides to
> get it to its current state.
> 
> So my feeling is that in order to get more energy and focus on the
> API, we need some kind of game plan to get us to a single API
> version, with a single data payload in L (or on the outside, M). If
> the decision is v2 must be in both those releases (and possibly
> beyond), then it seems like we should be asking other hard questions.
> 
> * why do a v3 at all? instead do we figure out a way to be able to
> evolve v2 in a backwards compatible way.

So there are lots of changes (cleanups) made between v2 and v3 which are
really not possible to do in a backwards compatible way. One example
is that we're a lot stricter and more consistent on input validation in v3
than in v2, which is better from both a user and a server point of view.
Another is that the tasks API would be a lot uglier and really look
"bolted on" if we tried to do so. Also, doing so doesn't actually reduce
the test load: if we're still supporting the old 'look' of the API, we
still need to test for it separately from the new 'look' even if we don't
bump the API major version.
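
To make "stricter" concrete, the v3 validation work is (as I understand it)
built on jsonschema; a rough illustration, with a made-up schema rather than
the real Nova one:

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {
            'name': {'type': 'string', 'minLength': 1, 'maxLength': 255},
        },
        'required': ['name'],
        'additionalProperties': False,
    }

    jsonschema.validate({'name': 'vm-1'}, schema)  # passes
    try:
        # v2 would have silently accepted the extra key and wrong type.
        jsonschema.validate({'name': 1, 'foo': 'x'}, schema)
    except jsonschema.ValidationError as exc:
        print('rejected: %s' % exc.message)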

In terms of code sharing (and we've experimented a bit with this for
v2/v3), I think in most cases it actually ends up being easier to have two
completely separate trees, because the code diverges so much
that having if statements everywhere to handle the different
cases is actually a higher maintenance burden (much harder to read)
than just knowing that you might have to make changes in two quite
separate places.

> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
> that working around all that code isn't a velocity inhibitor in the
> cleanups required in v2? Because some of the crazy hacks that exist to
> make XML structures work for the json in v2 is kind of special.

So I don't think we can do that for similar reasons we can't just drop
V2 after a couple of cycles. We should be encouraging people off, not
forcing them off. 

> This big bang approach to API development may just have run its
> course, and no longer be a useful development model. Which is good to
> find out. Would have been nice to find out earlier... but not all
> lessons are easy or cheap. :)

So I think what v3 gives us is a much more consistent and clean
API base to start from. It's a clean break from the past. But we have to
be much more careful about any future API changes/enhancements than we
traditionally have done in the past especially with any changes which
affect the core. I think we've already significantly raised the quality
bar in what we allow for both v2 and v3 in Icehouse compared to previous
releases (those frustrated with trying to get API changes in will
probably agree) but I'd like us to get even stricter about it in the
future because getting it wrong in the API design has a MUCH higher
long term impact than bugs in most other areas. Requiring an API spec
upfront (and reviewing it) with a blueprint for any new API features
should IMO be compulsory before a blueprint is approved. 

Also micro and extension versioning is not the magic bullet which will
get us out of trouble in the future. Especially with the core changes.
Because even though versioning allows us to make changes, for similar
reasons to not being able to just drop V2 after a couple of cycles
we'll still need to keep supporting (and testing) the old behaviour for
a significant period of time (we have often quietly ignored
this issue in the past).

Ultimately the only way to free ourselves from the maintenance of two
API versions (and I'll claim this is rather misleading as it actually
has more dimensions to it than this) is to convince users to move from
the V2 API to the "new one". And it doesn't make much difference
whether we call it V3 or V2.1 we still have very similar maintenance
burdens if we want to make the sorts of API changes that we have done
for V3.

Chris

> 
>   -Sean
> 
> On 02/19/2014 12:36 PM, Russell Bryant wrote:
> > Greetings,
> > 
> > The v3 API effort has been going for a few release cycles now.  As
> > we approach the Icehouse release, we are faced with the following
> > question: "Is it time to mark v3 stable?"
> > 
> > My opinion is that I think we need to leave v3 marked as
> > experimental for Icehouse.
> > 
> > There are a number of reasons for this:
> > 
> > 1) Discussions about the v2 and v3 APIs at the in-person Nova meetup
> > last week made me come to the realization that v2 won't be going
> > away *any* time soon.  In some cases, users have long term API
> > support expectations (perhaps based on experience with EC2).  In
> > the best case, we have to get all of the SDKs updated to the new
> > API, and then get to the point where everyone is using a new enough
> > ver

Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 15:41, Radomir Dopieralski wrote:
> On 20/02/14 15:00, Tomas Sedovic wrote:
> 
>> Are we even sure we need to store the passwords in the first place? All
>> this encryption talk seems very premature to me.
> 
> How are you going to redeploy without them?
> 

What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these has different password workflow requirements.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Radomir Dopieralski
On 20/02/14 15:57, Tomas Sedovic wrote:
> On 20/02/14 15:41, Radomir Dopieralski wrote:
>> On 20/02/14 15:00, Tomas Sedovic wrote:
>>
>>> Are we even sure we need to store the passwords in the first place? All
>>> this encryption talk seems very premature to me.
>>
>> How are you going to redeploy without them?
>>
> 
> What do you mean by redeploy?
> 
> 1. Deploy a brand new overcloud, overwriting the old one
> 2. Updating the services in the existing overcloud (i.e. image updates)
> 3. Adding new machines to the existing overcloud
> 4. Autoscaling
> 5. Something else
> 6. All of the above

I mean clicking "scale" in tuskar-ui.
-- 
Radomir Dopieralski


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder]Do you think volume force delete operation should not apply to the volume being used?

2014-02-20 Thread yunling

From the cinder code, we know that the volume delete operation can be classified
into three categories:
1. General delete: delete volumes that are in the status of available, error,
error_restoring, or error_extending.
2. Force delete: delete volumes that are in the status of extending, attaching,
detaching, await-transfering, backing or restoring.
3. Others: volumes that are attached or in the progress of a migrate operation
can't be force deleted.
 
We know that a volume in attaching/detaching status is also "in-use", not only
a volume in "attached" status or one in the progress of volume migration.
So cinder force delete sometimes can delete "in-use" volumes and sometimes
cannot delete "in-use" volumes.
 
My question is as follows:
1. Do you think the volume force delete operation should not apply to a volume
that is being used?
e.g., should volumes in attaching/detaching/backing status be excluded from
force delete?
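
To make the question concrete, a rough sketch of the kind of guard being
proposed (the status names and helper are illustrative, not actual cinder
code):

    # Hypothetical guard: any status that implies the volume is in use
    # makes it ineligible for force delete.
    IN_USE_STATUSES = frozenset(['in-use', 'attaching', 'detaching',
                                 'backing-up'])

    def can_force_delete(volume_status):
        return volume_status not in IN_USE_STATUSES

    assert can_force_delete('error')
    assert not can_force_delete('attaching')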




From: yunlingz...@hotmail.com
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev][Cinder]Do you think volume force delete operation 
should not apply to the volume being used?
Date: Mon, 17 Feb 2014 13:13:45 +






Hi stackers: 

  

I found that the volume status becomes inconsistent between nova and cinder
(the nova volume status is attaching, versus a cinder volume status of deleted)
when doing a volume force delete operation on an attaching volume.

I think the volume force delete operation should not apply to a volume being
used, which includes the attaching, attached, and detaching statuses.

What do you think?






thanks
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Sean Dague
On 02/20/2014 09:55 AM, Christopher Yeoh wrote:
> On Thu, 20 Feb 2014 08:22:57 -0500
> Sean Dague  wrote:
>>
>> We're also duplicating a lot of test and review energy in having 2 API
>> stacks. Even before v3 has come out of experimental it's consumed a
>> huge amount of review resource on both the Nova and Tempest sides to
>> get it to its current state.
>>
>> So my feeling is that in order to get more energy and focus on the
>> API, we need some kind of game plan to get us to a single API
>> version, with a single data payload in L (or on the outside, M). If
>> the decision is v2 must be in both those releases (and possibly
>> beyond), then it seems like we should be asking other hard questions.
>>
>> * why do a v3 at all? instead do we figure out a way to be able to
>> evolve v2 in a backwards compatible way.
> 
> So there's lots of changes (cleanups) made between v2 and v3 which are
> really not possible to do in a backwards compatible way. One example
> is that we're a lot stricter and consistent on input validation in v3
> than v2 which is better both from a user and server point of view.
> Another is that the tasks API would be a lot uglier and really look
> "bolted on" if we tried to do so. Also doing so doesn't actually reduce
> the test load as if we're still supporting the old 'look' of the api we
> still need to test for it separately to the new 'look' even if we don't
> bump the api major version. 
> 
> In terms of code sharing (and we've experimented a bit with this for
> v2/v3) I think in most cases ends up actually being easier having two
> quite completely separate trees because it ends up diverging so much
> that having if statements around everywhere to handle the different
> cases is actually a higher maintenance burden (much harder to read)
> than just knowing that you might have to make changes in two quite
> separate places. 
> 
>> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
>> that working around all that code isn't a velocity inhibitor in the
>> cleanups required in v2? Because some of the crazy hacks that exist to
>> make XML structures work for the json in v2 is kind of special.
> 
> So I don't think we can do that for similar reasons we can't just drop
> V2 after a couple of cycles. We should be encouraging people off, not
> forcing them off. 
> 
>> This big bang approach to API development may just have run its
>> course, and no longer be a useful development model. Which is good to
>> find out. Would have been nice to find out earlier... but not all
>> lessons are easy or cheap. :)
> 
> So I think what the v3 gives us is much more consistent and clean
> API base to start from. It's a clean break from the past. But we have to
> be much more careful about any future API changes/enhancements than we
> traditionally have done in the past especially with any changes which
> affect the core. I think we've already significantly raised the quality
> bar in what we allow for both v2 and v3 in Icehouse compared to previous
> releases (those frustrated with trying to get API changes in will
> probably agree) but I'd like us to get even stricter about it in the
> future because getting it wrong in the API design has a MUCH higher
> long term impact than bugs in most other areas. Requiring an API spec
> upfront (and reviewing it) with a blueprint for any new API features
> should IMO be compulsory before a blueprint is approved. 
> 
> Also micro and extension versioning is not the magic bullet which will
> get us out of trouble in the future. Especially with the core changes.
> Because even though versioning allows us to make changes, for similar
> reasons to not being able to just drop V2 after a couple of cycles
> we'll still need to keep supporting (and testing) the old behaviour for
> a significant period of time (we have often quietly ignored
> this issue in the past).
> 
> Ultimately the only way to free ourselves from the maintenance of two
> API versions (and I'll claim this is rather misleading as it actually
> has more dimensions to it than this) is to convince users to move from
> the V2 API to the "new one". And it doesn't make much difference
> whether we call it V3 or V2.1 we still have very similar maintenance
> burdens if we want to make the sorts of API changes that we have done
> for V3.

I want to flip this around a little bit. As an API consumer of an
upstream service I actually get excited when they announce a new version
and give me some new knobs to play with. Oftentimes I'll even email
providers asking for certain API interfaces to get exposed.

I do think we need to actually start from the end goal and work
backwards. My assumption is that 1 API with 1 data format in L/M is
our end goal. I think that there are huge technical debt costs with
anything else. Our current course and speed leaves us with 3 APIs/formats
in that time frame.

There is no easy way out of this, but I think that the current course
and speed inhibits us in a lot of ways, not leas

Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Tomas Sedovic
On 20/02/14 16:02, Radomir Dopieralski wrote:
> On 20/02/14 15:57, Tomas Sedovic wrote:
>> On 20/02/14 15:41, Radomir Dopieralski wrote:
>>> On 20/02/14 15:00, Tomas Sedovic wrote:
>>>
 Are we even sure we need to store the passwords in the first place? All
 this encryption talk seems very premature to me.
>>>
>>> How are you going to redeploy without them?
>>>
>>
>> What do you mean by redeploy?
>>
>> 1. Deploy a brand new overcloud, overwriting the old one
>> 2. Updating the services in the existing overcloud (i.e. image updates)
>> 3. Adding new machines to the existing overcloud
>> 4. Autoscaling
>> 5. Something else
>> 6. All of the above
> 
> I mean clicking "scale" in tuskar-ui.
> 

Right. So either Heat's able to handle this on its own or we fix it to
be able to do that or we ask for the necessary parameters again.

I really dislike having to do crypto in tuskar.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Tuskar] Dealing with passwords in Tuskar-API

2014-02-20 Thread Imre Farkas

On 02/20/2014 03:57 PM, Tomas Sedovic wrote:

On 20/02/14 15:41, Radomir Dopieralski wrote:

On 20/02/14 15:00, Tomas Sedovic wrote:


Are we even sure we need to store the passwords in the first place? All
this encryption talk seems very premature to me.


How are you going to redeploy without them?



What do you mean by redeploy?

1. Deploy a brand new overcloud, overwriting the old one
2. Updating the services in the existing overcloud (i.e. image updates)
3. Adding new machines to the existing overcloud
4. Autoscaling
5. Something else
6. All of the above

I'd guess each of these has different password workflow requirements.


I am not sure if all these use cases have different password 
requirements. If you check devtest, no matter whether you are creating or 
just updating your overcloud, all the parameters have to be provided for 
the heat template:

https://github.com/openstack/tripleo-incubator/blob/master/scripts/devtest_overcloud.sh#L125

I would rather not require the user to enter 5/10/15 different passwords 
every time Tuskar updates the stack. I think it's much better to 
autogenerate the passwords the first time, provide an option to 
override them, then save and encrypt them in Tuskar. So +1 for designing 
a proper system for storing the passwords.
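
For the autogeneration half, a minimal stdlib-only sketch (the save-and-encrypt
half is the open design question):

    import string
    from random import SystemRandom

    def generate_password(length=24):
        # SystemRandom draws from os.urandom, so it is suitable for
        # secrets, unlike the default Mersenne Twister PRNG.
        chars = string.ascii_letters + string.digits
        rng = SystemRandom()
        return ''.join(rng.choice(chars) for _ in range(length))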


Imre

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread John Garbutt
On 20 February 2014 14:55, Christopher Yeoh  wrote:
> On Thu, 20 Feb 2014 08:22:57 -0500
> Sean Dague  wrote:
>>
>> We're also duplicating a lot of test and review energy in having 2 API
>> stacks. Even before v3 has come out of experimental it's consumed a
>> huge amount of review resource on both the Nova and Tempest sides to
>> get it to its current state.
>>
>> So my feeling is that in order to get more energy and focus on the
>> API, we need some kind of game plan to get us to a single API
>> version, with a single data payload in L (or on the outside, M). If
>> the decision is v2 must be in both those releases (and possibly
>> beyond), then it seems like we should be asking other hard questions.
>>
>> * why do a v3 at all? instead do we figure out a way to be able to
>> evolve v2 in a backwards compatible way.
>
> So there's lots of changes (cleanups) made between v2 and v3 which are
> really not possible to do in a backwards compatible way. One example
> is that we're a lot stricter and consistent on input validation in v3
> than v2 which is better both from a user and server point of view.
> Another is that the tasks API would be a lot uglier and really look
> "bolted on" if we tried to do so. Also doing so doesn't actually reduce
> the test load as if we're still supporting the old 'look' of the api we
> still need to test for it separately to the new 'look' even if we don't
> bump the api major version.
>
> In terms of code sharing (and we've experimented a bit with this for
> v2/v3) I think in most cases ends up actually being easier having two
> quite completely separate trees because it ends up diverging so much
> that having if statements around everywhere to handle the different
> cases is actually a higher maintenance burden (much harder to read)
> than just knowing that you might have to make changes in two quite
> separate places.

Maybe, but what about a slightly less different v3, that would enable
such an approach?

>> * if we aren't doing a v3, can we deprecate XML in v2 in Icehouse so
>> that working around all that code isn't a velocity inhibitor in the
>> cleanups required in v2? Because some of the crazy hacks that exist to
>> make XML structures work for the json in v2 is kind of special.
>
> So I don't think we can do that for similar reasons we can't just drop
> V2 after a couple of cycles. We should be encouraging people off, not
> forcing them off.

We could look to remove XML support, with the same cycle we considered
dropping v2.

>> This big bang approach to API development may just have run its
>> course, and no longer be a useful development model. Which is good to
>> find out. Would have been nice to find out earlier... but not all
>> lessons are easy or cheap. :)
>
> So I think what the v3 gives us is much more consistent and clean
> API base to start from. It's a clean break from the past. But we have to
> be much more careful about any future API changes/enhancements than we
> traditionally have done in the past especially with any changes which
> affect the core. I think we've already significantly raised the quality
> bar in what we allow for both v2 and v3 in Icehouse compared to previous
> releases (those frustrated with trying to get API changes in will
> probably agree) but I'd like us to get even stricter about it in the
> future because getting it wrong in the API design has a MUCH higher
> long term impact than bugs in most other areas. Requiring an API spec
> upfront (and reviewing it) with a blueprint for any new API features
> should IMO be compulsory before a blueprint is approved.

I think we need to go down this (slightly painful) path.

Particularly because there is so much continuous deployment going on.

> Also micro and extension versioning is not the magic bullet which will
> get us out of trouble in the future. Especially with the core changes.
> Because even though versioning allows us to make changes, for similar
> reasons to not being able to just drop V2 after a couple of cycles
> we'll still need to keep supporting (and testing) the old behaviour for
> a significant period of time (we have often quietly ignored
> this issue in the past).

I thought the versions were all backwards compatible (for some
definition of that).

> Ultimately the only way to free ourselves from the maintenance of two
> API versions (and I'll claim this is rather misleading as it actually
> has more dimensions to it than this) is to convince users to move from
> the V2 API to the "new one". And it doesn't make much difference
> whether we call it V3 or V2.1 we still have very similar maintenance
> burdens if we want to make the sorts of API changes that we have done
> for V3.

I did agree with you before now, but maybe we have "too many" people
using v2 already. I have been wondering about a halfway house...

So, changes in v3 I would love to keep (highest priority first, as I see it):
* versioning extensions
* task API
* internal wiring fix ups (policy, everything is an ext

Re: [openstack-dev] [oslo][all] config sample tools on os x

2014-02-20 Thread Sergey Lukjanov
Yup, current implementation depends on GNU getopt.

Julien, cool, that means I'm not crazy :)

About using the common getopt functionality - at the least, long args will
be removed to support non-GNU getopt. Rewriting it in pure Python would be
more useful IMO.
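
A rough sketch of the pure-Python direction using argparse (the option names
are modeled on the current shell script but are assumptions, not guaranteed
to match it):

    import argparse

    def parse_args(argv=None):
        parser = argparse.ArgumentParser(
            description='Generate a sample config file.')
        parser.add_argument('-b', '--base-dir', help='project base directory')
        parser.add_argument('-p', '--package-name', help='top-level package')
        parser.add_argument('-o', '--output-dir',
                            help='where to write the sample file')
        return parser.parse_args(argv)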


On Thu, Feb 20, 2014 at 1:45 PM, Julien Danjou  wrote:

> On Thu, Feb 20 2014, Chmouel Boudjnah wrote:
>
> > In which sort of system setup other than macosx/freebsd
> generate_sample.sh
> > is not working?
>
> Likely everywhere GNU tools are not standard. So that's every system
> _except_ GNU/Linux ones I'd say. :)
>
> --
> Julien Danjou
> -- Free Software hacker
> -- http://julien.danjou.info
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Heat][Neutron] Refactoring heat LBaaS architecture according Neutron API

2014-02-20 Thread Sergey Kraynev
Hello community.

I'd like to discuss the Neutron LBaaS feature in Heat.
Currently Heat resources are not identical to Neutron's.
There are four resources here:
'OS::Neutron::HealthMonitor'
'OS::Neutron::Pool'
'OS::Neutron::PoolMember'
'OS::Neutron::LoadBalancer'

In this representation the VIP is a part of the LoadBalancer resource,
whereas Neutron has a separate VIP object. I think this should
be changed to conform with Neutron's implementation.
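
For reference, the Neutron API itself already treats the VIP as its own
object; a rough sketch with python-neutronclient (credentials and IDs are
placeholders, field lists abbreviated):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://keystone:5000/v2.0')
    # The pool and the VIP are created as two independent objects.
    pool = neutron.create_pool(
        {'pool': {'name': 'web', 'protocol': 'HTTP',
                  'lb_method': 'ROUND_ROBIN', 'subnet_id': 'SUBNET_ID'}})
    neutron.create_vip(
        {'vip': {'name': 'web-vip', 'protocol': 'HTTP', 'protocol_port': 80,
                 'pool_id': pool['pool']['id'], 'subnet_id': 'SUBNET_ID'}})
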
So the main question is: what is the best way to change it? I see the
following options:

1. Move the VIP into a separate resource in the Icehouse release (without any
additions).
Possibly we should support both the old and new implementations for users too.
IMO, this also carries one big risk: we now have a stable version
and not enough time to validate the new approach.
Also I think it does not make sense now, because the Neutron team is
discussing a new object model (
http://lists.openstack.org/pipermail/openstack-dev/2014-February/027480.html)
and it will be implemented in Juno.

2. The second idea is to wait for all the architecture changes that are
planned for Neutron in Juno (see the link above).
Then we could rework or change the Heat LBaaS architecture accordingly.

Your feedback and other ideas about a better implementation plan are welcome.

Regards,
Sergey.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] git-review patch: Fix parsing of SCP-style URLs

2014-02-20 Thread Alexander Jones
Really don't want to have to resolve conflicts again or battle through making 
the test suite succeed... Please can someone merge this? 

https://review.openstack.org/#/c/72751/ 
https://bugs.launchpad.net/git-review/+bug/1279016 

Thanks! 

Alexander Jones 
Double Negative R&D 
www.dneg.com 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] v3 API in Icehouse

2014-02-20 Thread Christopher Yeoh
On Thu, 20 Feb 2014 15:24:22 +
John Garbutt  wrote:
> 
> > Also micro and extension versioning is not the magic bullet which
> > will get us out of trouble in the future. Especially with the core
> > changes. Because even though versioning allows us to make changes,
> > for similar reasons to not being able to just drop V2 after a
> > couple of cycles we'll still need to keep supporting (and testing)
> > the old behaviour for a significant period of time (we have often
> > quietly ignored this issue in the past).
> 
> I thought the versions were all backwards compatible (for some
> definition of that).

They are, but say you add a backwards compatible change that allows you
to specify a flag that significantly changes the behaviour of the call.
At least from, say, a tempest point of view, you have to test that method
(with all the various existing possibilities) both with that flag
enabled and with it disabled. So we've doubled the test burden for the
call.

> I did agree with you before now, but maybe we have "too many" people
> using v2 already. I have been wondering about a halfway house...
> 
> So, changes in v3 I would love to keep (highest priority first, as I
> see it):
> * versioning extensions
> * task API
> * internal wiring fix ups (policy, everything is an extension, split
> up extensions)
> * return code fix ups
> * better input validation
> * url (consistency) changes
> * document consistency changes
> 

I think by the time we put these in, we're essentially forcing
people off the old V2 API anyway because we will break existing apps.
We're just being stealthy about it and not bumping the api version.

Why not just instead tell people upfront that they have to move to the
V3 API within X cycles because the V2 API is being removed? 

> Assuming we are happy to retro-fix versions, the question is how much
> do we allow to change between those versions.
> 
> I am good with stuff that would not break a "correct" old API client:
> * allow an API to warn users it is "deprecated"
> * extra attributes may be added in the return document
> * return codes for error cases might get updated
> * the x in 2xx might change for success cases
> * stricter validation of inputs might occur
> 
> Ideally, we only do this once, like at the same time we add in the
> versioning of extensions.
> 
> Having two urls for a single extension seems like quite a low cost
> "fix up", that we can keep in tree. Unit tests should be able to cover
> that, or at least only a small amount of functional tests.
> 
> The clean ups to the format of documents returned, this one is harder:
> * let the client (somehow) choose the new version, maybe a new URL, or
> maybe an Accepts header
> * keep the old version via some "converter" that can be easily unit
> tested in isolation
> 
> The general idea, is to get the fix ups we want, but make it easier to
> maintain, and try to reduce the functional test version, by using the
> knowledge there is so much common code, we don't need a full
> duplication of tests.

Hrm I'm not so sure about that. The API layer is pretty thin and so
essentially there is a lot of common code, but from a tempest point of
view we still fully test against both APIs. I'm not sure I'd feel that
comfortable about not doing so. 

> As far as implementation, I think it would be to make the v3 code
> support v2 and v3, in the process change v3 so thats possible, then
> drop the v2 code.

So I guess my question is: is adding a v2 backwards compatibility mode to
the v3 code any less of a burden than simply keeping the v2 code
around? I don't think it is if it complicates the v3 code too much.

Though I think someone did bring up at the meeting (I'm not sure if it
was you) whether it was possible to have a V2<->V3 translation layer. So
perhaps we could have a separate service that just sat in the middle
between a client and nova-api, translating requests and responses
from the V2 API format to V3, and proxying to
neutron/cinder/glance where necessary. That may be one possible solution
to supporting V2 for longer than we'd like whilst at the same time being
able to remove it from Nova. Probably not a trivial thing to implement,
but I think it addresses some of the concerns mentioned about keeping
the v2 API around.
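
Purely to illustrate the shape of such a layer, a toy WSGI middleware sketch
(none of these names come from Nova, and a real shim would also have to
translate payloads):

    class V2CompatMiddleware(object):
        """Hypothetical shim: rewrite v2 requests into v3 before they
        reach the real API, so only the v3 tree needs maintaining."""

        def __init__(self, v3_app):
            self.v3_app = v3_app

        def __call__(self, environ, start_response):
            path = environ.get('PATH_INFO', '')
            if path.startswith('/v2/'):
                environ['PATH_INFO'] = '/v3/' + path[len('/v2/'):]
                # A real shim would also rewrite the request body and
                # translate the response payload back to v2 here.
            return self.v3_app(environ, start_response)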

This sort of technique might be usable to remove the EC2 API
code as well.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] sphinxcontrib-pecanwsme 0.7 released

2014-02-20 Thread Doug Hellmann
sphinxcontrib-pecanwsme is an extension to Sphinx for documenting APIs
built with the Pecan web framework and WSME.

What's New?
===

- Remove the trailing slash from the end of the URLs, as it results in
misleading feature documentation, see Ceilometer bug #1202744.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-20 Thread Nader Lahouti
Hi All,

I have a question regarding creating/deleting a tenant in openstack (using
horizon or the CLI). Is there any notification mechanism in place so that an
application gets informed of such an event?

If not, can it be done using a plugin to send create/delete notifications to
an application?

Appreciate your suggestion and help.

Regards,
Nader.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-20 Thread Dolph Mathews
Yes, see:

  http://docs.openstack.org/developer/keystone/event_notifications.html
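
The events land on the regular notifications topic of the message bus, so a
consumer can be an ordinary notification listener. A rough sketch assuming
oslo.messaging with default transport settings (the endpoint signature varies
slightly across oslo.messaging versions):

    from oslo.config import cfg
    from oslo import messaging

    class Endpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload,
                 metadata=None):
            # Keystone emits e.g. identity.project.created / .deleted
            if event_type.startswith('identity.project.'):
                print('%s %s' % (event_type, payload))

    transport = messaging.get_transport(cfg.CONF)
    targets = [messaging.Target(topic='notifications')]
    listener = messaging.get_notification_listener(transport, targets,
                                                   [Endpoint()])
    listener.start()
    listener.wait()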

On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti wrote:

> Hi All,
>
> I have a question regarding creating/deleting a tenant in openstack (using
> horizon or the CLI). Is there any notification mechanism in place so that an
> application gets informed of such an event?
>
> If not, can it be done using a plugin to send create/delete notifications to
> an application?
>
> Appreciate your suggestion and help.
>
> Regards,
> Nader.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Martinez, Christian
Hello all,
I'm working on the following BP:
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in
which the idea is to have the possibility to create "special" tenants that have
a lease for all of their associated resources.

The BP is in the discussion phase and we have been having conversations on IRC
about which approach we should follow.

First of all, we need to add some "parameters or flags" during the tenant 
creation so we can know that the associated resources need to have a lease. 
Does anyone know if Keystone has functionality similar to Nova's in relation to 
hooks/API extensions (something like the stuff mentioned at 
http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first idea is 
to intercept the tenant creation call (as it's being done with climate-nova) 
and use that information to associate a lease quota to the resources assigned 
to that tenant.

I'm not sure if this is the right approach or if this is even possible, so 
feedback is welcomed.

Regards,
Christian


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Re: [openstack-qa] Not Able to Run Tempest API Tests

2014-02-20 Thread David Kranz

On 02/20/2014 05:58 AM, om prakash pandey wrote:
I am not able to run Tempest API tests. The typical ERROR I am getting
is "Connection Timed Out".


When checking the logs I found that tempest is trying to
access the admin URL, which is a private IP for our deployment. Now,
Tempest is designed to access only the public API endpoints, so is
this something to do with my Tempest configuration or a problem with
the deployment itself?
Please use openstack-dev prefixed with [qa] in the subject. The 
openstack-qa list is not being used anymore.


I think the problem you are having is that, by default, tempest creates
a new tenant and user for each test class. Doing so requires admin
credentials, which are specified in tempest.conf. You can run tempest
without this feature by setting this value in tempest.conf:

allow_tenant_isolation = false

If you do this you will not be able to run tempest in parallel, and a
number of tests that require admin credentials will fail.


Also, if you are using master, the use of nose is not supported any 
more. You will need to use testr.


 -David



ERROR: test suite for <class 'tempest.api.compute.limits.test_absolute_limits.AbsoluteLimitsTestJSON'>

--
Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 208, in run

self.setUp()
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 291, in setUp

self.setupContext(ancestor)
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 314, in setupContext

try_run(context, names)
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/util.py", 
line 469, in try_run

return func()
  File 
"/opt/stack/tempest/tempest/api/compute/limits/test_absolute_limits.py", 
line 25, in setUpClass

super(AbsoluteLimitsTestJSON, cls).setUpClass()
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 183, in 
setUpClass

super(BaseV2ComputeTest, cls).setUpClass()
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 39, in 
setUpClass

os = cls.get_client_manager()
  File "/opt/stack/tempest/tempest/test.py", line 288, in 
get_client_manager

creds = cls.isolated_creds.get_primary_creds()
  File "/opt/stack/tempest/tempest/common/isolated_creds.py", line 
367, in get_primary_creds

user, tenant = self._create_creds()
  File "/opt/stack/tempest/tempest/common/isolated_creds.py", line 
166, in _create_creds

description=tenant_desc)
  File "/opt/stack/tempest/tempest/common/isolated_creds.py", line 81, 
in _create_tenant

name=name, description=description)
  File 
"/opt/stack/tempest/tempest/services/identity/json/identity_client.py", line 
63, in create_tenant

resp, body = self.post('tenants', post_body, self.headers)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 154, 
in post

return self.request('POST', url, headers, body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 276, 
in request

headers=headers, body=body)
  File "/opt/stack/tempest/tempest/common/rest_client.py", line 260, 
in _request

req_url, method, headers=req_headers, body=req_body)
  File "/opt/stack/tempest/tempest/common/http.py", line 25, in request
return super(ClosingHttp, self).request(*args, **new_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 1571, in request
(response, content) = self._request(conn, authority, uri, 
request_uri, method, body, headers, redirections, cachekey)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 1318, in _request
(response, content) = self._conn_request(conn, request_uri, 
method, body, headers)
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 1291, in _conn_request

conn.connect()
  File "/usr/local/lib/python2.7/dist-packages/httplib2/__init__.py", 
line 913, in connect

raise socket.error, msg
error: [Errno 110] Connection timed out
 >> begin captured stdout << -
connect: (10.135.120.120, 35357) 
connect fail: (10.135.120.120, 35357)

- >> end captured stdout << --

==
ERROR: test suite for <class 'tempest.api.compute.limits.test_absolute_limits.AbsoluteLimitsTestXML'>

--
Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 208, in run

self.setUp()
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 291, in setUp

self.setupContext(ancestor)
  File 
"/usr/local/lib/python2.7/dist-packages/nose-1.3.0-py2.7.egg/nose/suite.py", 
line 314, in setupCo

Re: [openstack-dev] [nova][libvirt] Is there anything blocking the libvirt driver from implementing the host_maintenance_mode API?

2014-02-20 Thread Matt Riedemann



On 2/19/2014 4:05 PM, Matt Riedemann wrote:

The os-hosts OS API extension [1] showed up before I was working on the
project and I see that only the VMware and XenAPI drivers implement it,
but was wondering why the libvirt driver doesn't - either no one wants
it, or there is some technical reason behind not implementing it for
that driver?

[1]
http://docs.openstack.org/api/openstack-compute/2/content/PUT_os-hosts-v2_updateHost_v2__tenant_id__os-hosts__host_name__ext-os-hosts.html




By the way, am I missing something when I think that this extension is 
already covered if you're:


1. Looking to get the node out of the scheduling loop, you can just 
disable it with os-services/disable?


2. Looking to evacuate instances off a failed host (or one that's in 
"maintenance mode"), just use the evacuate server action.


--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-openstacksdk] Meeting minutes, and Next Steps/new meeting time.

2014-02-20 Thread Jesse Noller
Hi Everyone;

Our first python-openstacksdk meeting was awesome, and I really want to thank 
everyone who came, and Doug for teaching me the meeting bot :)

Minutes:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.txt
Log:
http://eavesdrop.openstack.org/meetings/python_openstacksdk/2014/python_openstacksdk.2014-02-19-19.01.log.html

Note that coming out of this we will be moving the meetings to Tuesdays, 19:00 
UTC / 1pm CST starting on Tuesday March 4th. Next week there will not be a 
meeting while we discuss and flesh out next steps and requested items (API, 
names, extensions and internal HTTP API).

If you want to participate, please join us on freenode: #openstack-sdks 

https://wiki.openstack.org/wiki/PythonOpenStackSDK

Jesse
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] renaming: initial voting

2014-02-20 Thread Sergey Lukjanov
I've contacted the foundation and they are ready to verify 5 options, so we'll
choose them at today's IRC team meeting (starting right now).


On Wed, Feb 19, 2014 at 12:27 AM, Sergey Lukjanov wrote:

> The voting has ended, you can find the results here -
> http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=10&id=E_5dd4f18fde38ce8e&algorithm=beatpath
>
> So, the new name options for more detailed discussion are:
>
> 1. Gravity  (Condorcet winner: wins contests with all other choices)
> 2. Sahara  loses to Gravity by 10-8
> 3. Quazar  loses to Gravity by 13-3, loses to Sahara by 12-6
> 4. Stellar  loses to Gravity by 13-4, loses to Quazar by 9-7
> 5. Caravan  loses to Gravity by 12-5, loses to Stellar by 9-7
> 6. Tied:
> Fusor  loses to Gravity by 13-2, loses to Caravan by 9-4
> Maestro  loses to Gravity by 15-3, loses to Quazar by 9-5
> Magellanic  loses to Gravity by 15-0, loses to Caravan by 9-5
> 9. Magellan  loses to Gravity by 16-1, loses to Maestro by 7-4
> 10. Stackadoop  loses to Gravity by 14-6, loses to Magellan by 8-6
>
> Thanks for voting.
>
>
> On Tue, Feb 18, 2014 at 10:52 AM, Sergey Lukjanov 
> wrote:
>
>> Currently, we have only 19/47 votes, so, I'm adding one more day.
>>
>>
>> On Fri, Feb 14, 2014 at 3:04 PM, Sergey Lukjanov 
>> wrote:
>>
>>> Hi folks,
>>>
>>> I've created a poll to select 10 candidates for the new Savanna name. It's
>>> the first round of selecting a new name for our lovely project. This poll
>>> will end on Monday, Feb 17.
>>>
>>> You should receive an email from "Sergey Lukjanov (CIVS poll supervisor)
>>> slukja...@mirantis.com"  via cs.cornell.edu with topic "Poll: Savanna
>>> new name candidates".
>>>
>>> Thank you!
>>>
>>> P.S. I've bcced all ATCs, don't panic.
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Savanna Technical Lead
>>> Mirantis Inc.
>>>
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Sylvain Bauza
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian :

>  Hello all,
>
> I'm working in the following BP:
> https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
> in which the idea is to have the possibility to create "special" tenants
> that have a lease for all of its associated resources.
>
>
>
> The BP is in discussing phase and we were having conversations on IRC
> about what approach should we follow.
>
>
>

Before speaking about implementation, I would definitely like to know the use
cases you want to design for.
What kind of resources do you want to provision using Climate? The basic
question is, what is the rationale for hooking tenant creation?
Could you please be more explicit?

At tenant creation, Climate would have no information for
calculating the resources asked for, because the resources wouldn't have been
allocated yet. So, generating a lease on top of this would be like a
non-formal contract between Climate and the user, accounting for nothing.

The main reason behind Climate is to provide SLAs for either user requests
or project requests, meaning it's Climate's duty to guarantee that the
resource associated with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains.
In that case, if Climate were hooking Keystone, that would mean that
Climate ensures the cloud will have enough capacity for creating these
resources in the future.

IMHO, that's not worth implementing.


 First of all, we need to add some "parameters or flags" during the tenant
> creation so we can know that the associated resources need to have a lease.
> Does anyone know if Keystone has similar functionality to Nova in relation
> with Hooks/API extensions (something like the stuff mentioned on
> http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first
> idea is to intercept the tenant creation call (as it's being done with
> climate-nova) and use that information to associate a lease quota to the
> resources assigned to that tenant.
>
>

Keystone has no way to know which resources are associated with a tenant;
see how the middleware authentication is done here [1].
Regarding the BP, the motivation is to possibly 'leasify' all the VMs from
one single tenant. IIRC, it should still be Nova's duty to handle that
workflow and send the requests to Climate.

-Sylvain

[1] :
http://docs.openstack.org/developer/keystone/middlewarearchitecture.html
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] sphinxcontrib-pecanwsme 0.7 released

2014-02-20 Thread Sylvain Bauza
Hi Doug,


2014-02-20 17:37 GMT+01:00 Doug Hellmann :

> sphinxcontrib-pecanwsme is an extension to Sphinx for documenting APIs
> built with the Pecan web framework and WSME.
>
> What's New?
> ===
>
> - Remove the trailing slash from the end of the URLs, as it results in
> misleading feature documentation, see Ceilometer bug #1202744.
>
>
>
Do you have a review in progress for updating global requirements ? At the
moment, it's still pointing to 0.6.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Notification When Creating/Deleting a Tenant in openstack

2014-02-20 Thread Nader Lahouti
Thanks Dolph for the link. The document shows the format of the messages but 
doesn't give any info on how to listen for the notifications. 
Is there any other document showing the details of how to listen for or get 
these notifications?

Regards,
Nader.

> On Feb 20, 2014, at 9:06 AM, Dolph Mathews  wrote:
> 
> Yes, see:
> 
>   http://docs.openstack.org/developer/keystone/event_notifications.html
> 
>> On Thu, Feb 20, 2014 at 10:54 AM, Nader Lahouti  
>> wrote:
>> Hi All,
>> 
>> I have a question regarding creating/deleting a tenant in openstack (using 
>> horizon or the CLI). Is there any notification mechanism in place so that an 
>> application gets informed of such an event?
>> 
>> If not, can it be done using a plugin to send create/delete notifications to an 
>> application?
>> 
>> Appreciate your suggestion and help.
>> 
>> Regards,
>> Nader.
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Joe Gordon
Hi All,

A discussion recently came up inside of nova about what a supported
version for a dependency means. For libvirt we gate on the
minimal version that we support, but for all python dependencies we
gate on the highest version that passes our requirements. While we all
agree that having two different ways of choosing which version to test
(min and max) is bad, there are good arguments for doing both.

Testing the most recent version:
* We want to make sure we support the latest and greatest
* Bug fixes
* Quickly discover backwards incompatible changes so we can deal
with them as they arise instead of in batch

Testing lowest version supported:
* Make sure we don't land any code that breaks compatibility with
the lowest version we say we support
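
As a small illustration of why the lower bound matters: the gate installs the
newest release that satisfies requirements, so the declared minimum is never
actually exercised unless something checks it, e.g. with setuptools'
pkg_resources:

    import pkg_resources

    # Raises VersionConflict if the installed version is below the
    # declared minimum; the gate never hits this today because it
    # installs the highest version that satisfies the requirement.
    pkg_resources.require('six>=1.4.1')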


A few questions and ideas on how to move forward.
    * How do other projects deal with this? This problem isn't unique
to OpenStack.
    * What are the issues with making one gate job use the latest
versions and one use the lowest supported versions?
    * Only test some things on every commit or every day (periodic
jobs)? But no one ever fixes those things when they break, so who wants
to own them? Distros? Deployers?
* Other solutions?
* Does it make sense to gate on the lowest version of libvirt but
the highest version of python libs?
    * Given our finite resources, what gets us the furthest?


best,
Joe Gordon
John Garbutt

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Incubation Request: Murano

2014-02-20 Thread Georgy Okrokvertskhov
All,

Murano is the OpenStack Application Catalog service, which has been
developed on stackforge for almost 11 months. Murano was presented at the HK
summit on the unconference track and now we would like to apply for incubation
during the Juno release.

As the first step we would like to get feedback from the TC on Murano's
readiness from an OpenStack processes standpoint, as well as open up a
conversation around its mission and how it fits the OpenStack ecosystem.

Murano incubation request form is here:
https://wiki.openstack.org/wiki/Murano/Incubation

As a part of the incubation request we are looking for advice from the TC on
the governance model for Murano. Murano may potentially fit the expanding
scope of the Image program, if it is transformed into a Catalog program. It
also potentially fits the Orchestration program, and as a third option there
might be value in creating a new standalone Application Catalog program. We
have a pros and cons analysis in the Murano incubation request form.

The Murano team has been working on Murano as a community project. All our
code and bugs/specs are hosted on OpenStack Gerrit and Launchpad
respectively. Unit tests and all pep8/hacking checks are run on
OpenStack Jenkins and we have integration tests running on our own Jenkins
server for each patch set. Murano also has all the necessary scripts for
devstack integration. We have been holding weekly IRC meetings for the last
7 months, discussing architectural questions there and on the openstack-dev
mailing list as well.

Murano related information is here:

Launchpad: https://launchpad.net/murano

Murano Wiki page: https://wiki.openstack.org/wiki/Murano

Murano Documentation: https://wiki.openstack.org/wiki/Murano/Documentation

Murano IRC channel: #murano

With this we would like to start the process of incubation application
review.

Thanks
Georgy

-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Dina Belova
Sylvain, as I understand the BP description, Christian is not exactly talking
about reserving tenants themselves like we actually do with VMs/hosts - that's
just the naming. I think he means two things:

1) mark some tenants as "needed to be reserved" - speaking about the resources
assigned to them
2) reserve these resources via Climate (VMs as a first approximation)

I suppose Christian is now talking about hooking into the tenant creation
process to mark them as "needed to be reserved" (1st step).

Christian, correct me if I'm wrong, please
Waiting for your comments


On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza wrote:

> Hi Christian,
>
> 2014-02-20 18:10 GMT+01:00 Martinez, Christian <
> christian.marti...@intel.com>:
>
>   Hello all,
>>
>> I'm working in the following BP:
>> https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
>> in which the idea is to have the possibility to create "special" tenants
>> that have a lease for all of its associated resources.
>>
>>
>>
>> The BP is in discussing phase and we were having conversations on IRC
>> about what approach should we follow.
>>
>>
>>
>
> Before speaking about implementation,  I would definitely know the
> usecases you want to design.
> What kind of resources do you want to provision using Climate ? The basic
> thing is, what is the rationale thinking about hooking tenant creation ?
> Could you please be more explicit ?
>
> At the tenant creation, Climate wouldn't have no information in terms of
> calculating the resources asked, because the resources wouldn't have been
> allocated before. So, generating a lease on top of this would be like a
> non-formal contract in between Climate and the user, accounting nothing.
>
> The main reason behind Climate is to provide SLAs for either user requests
> or projects requests, meaning that's duty of Climate to guarantee that the
> desired associated resource with the lease will be created in the future.
> Speaking of Keystone, the Keystone objects are tenants, users or domains.
> In that case, if Climate would be hooking Keystone, that would say that
> Climate ensures that the cloud will have enough capacity for creating these
> resources in the future.
>
> IMHO, that's not worth implementing it.
>
>
>  First of all, we need to add some "parameters or flags" during the
>> tenant creation so we can know that the associated resources need to have a
>> lease. Does anyone know if Keystone has similar functionality to Nova in
>> relation with Hooks/API extensions (something like the stuff mentioned on
>> http://docs.openstack.org/developer/nova/devref/hooks.html ) ? My first
>> idea is to intercept the tenant creation call (as it's being done with
>> climate-nova) and use that information to associate a lease quota to the
>> resources assigned to that tenant.
>>
>>
>
> Keystone has no way to know which resources are associated within a
> tenant, see how the middleware authentication is done here [1]
> Regarding the BP, the motivation is to possibly 'leasify' all the VMs from
> one single tenant. IIRC, that should still be duty of Nova to handle that
> workflow and send the requests to Climate.
>
> -Sylvain
>
> [1] :
> http://docs.openstack.org/developer/keystone/middlewarearchitecture.html
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Martinez, Christian
Dina: Yes, I'm talking about that. Thanks for the clarification.

Sylvain, let me put the use case that we have:
As part of project/tenant creation we would like to mark the tenant in such a 
way that climate will automatically create a lease for the resources. All 
non-production tenants/projects will be granted a default quota and all 
resources should have associated leases. Climate leases will trigger work-flows 
via notifications. The work-flows defined in mistral will provide automation to 
achieve some of our non-production capacity management needs. We expect Mistral 
work-flows to trigger emails, give the customer the ability to extend the lease, 
and finally allow the resource to potentially be backed up and then deleted.
We have also considered implementing a non-climate process to automatically 
create the leases for all non-production tenants.

Regarding the resources to be considered:
for us and our needs, managing just the VM resource is sufficient for the 
foreseeable future.

Also, I think that we should consider casanch1's comments on the BP:
"we must also have a blueprint that allow the user to create "Tenant Types" 
with 'default' lease attributes. Then when creating a tenant, the user can 
specify lease dates and/or tenant type"



From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Thursday, February 20, 2014 3:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design

Sylvain, as I understand in BP description, Christian is about not exactly 
reserving tenants itself like we actually do with VMs/hosts - it's just naming 
for that. I think he is about two moments:

1) mark some tenants as "needed to be reserved" - speaking about resources 
assigned to it
2) reserve these resources via Climate (VMs for first approximation)

I suppose Christian is speaking now about hacking tenants creation process to 
mark them as "needed to be reserved" (1st step).

Christian, correct me if I'm wrong, please
Waiting for your comments

On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza 
mailto:sylvain.ba...@gmail.com>> wrote:
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian 
mailto:christian.marti...@intel.com>>:

Hello all,
I'm working in the following BP: 
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in 
which the idea is to have the possibility to create "special" tenants that have 
a lease for all of its associated resources.

The BP is in discussing phase and we were having conversations on IRC about 
what approach should we follow.


Before speaking about implementation, I would definitely like to know the use
cases you want to address.
What kind of resources do you want to provision using Climate? The basic thing
is: what is the rationale for hooking tenant creation? Could you please be
more explicit?

At tenant creation, Climate wouldn't have any information for calculating the
resources asked for, because the resources wouldn't have been allocated yet.
So, generating a lease on top of this would be like a non-formal contract
between Climate and the user, accounting for nothing.

The main reason behind Climate is to provide SLAs for either user requests or
project requests, meaning it's Climate's duty to guarantee that the resource
associated with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains. In
that case, if Climate were hooking Keystone, that would mean Climate ensures
that the cloud will have enough capacity for creating these resources in the
future.

IMHO, that's not worth implementing.


First of all, we need to add some "parameters or flags" during tenant creation
so we can know that the associated resources need to have a lease. Does anyone
know if Keystone has functionality similar to Nova's hooks/API extensions
(something like the stuff mentioned at
http://docs.openstack.org/developer/nova/devref/hooks.html )? My first idea is
to intercept the tenant creation call (as is done with climate-nova) and use
that information to associate a lease quota with the resources assigned to
that tenant.


Keystone has no way to know which resources are associated with a tenant; see
how the middleware authentication is done here [1].
Regarding the BP, the motivation is to possibly 'leasify' all the VMs from one
single tenant. IIRC, it should still be Nova's duty to handle that workflow
and send the requests to Climate.

-Sylvain

[1] : http://docs.openstack.org/developer/keystone/middlewarearchitecture.html



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [designate] Designate Icehouse-2 Release

2014-02-20 Thread Betsy Luzader
Today we have released Designate Icehouse-2. The high-level launchpad details 
can be found at https://launchpad.net/designate/icehouse/icehouse-2, as well as 
a link to the tar file. This release includes almost a dozen blueprints, 
including one for Domain Import/Export, as well as numerous bug fixes.

If you have any questions, you can reach the team via this OpenStack dev list
with the subject line [designate] or via our IRC channel at #openstack-dns.

Betsy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Sylvain Bauza
2014-02-20 19:32 GMT+01:00 Dina Belova :

> Sylvain, as I understand from the BP description, Christian is not exactly
> about reserving tenants themselves like we actually do with VMs/hosts -
> that's just the naming. I think he means two things:
>
> 1) mark some tenants as "needed to be reserved" - speaking about the
> resources assigned to them
> 2) reserve these resources via Climate (VMs as a first approximation)
>
>
Well, I understood your BP; it's Christian's message that was a bit
confusing.
Marking a tenant as "reserved" would then mean that it has some kind of
priority vs. another tenant. But again, as said, how could you ensure at
marking time (i.e. at lease creation) that Climate can honor contracts for
resources that haven't been explicitly defined?


> I suppose Christian is now speaking about hacking the tenant creation
> process to mark them as "needed to be reserved" (1st step).
>
>
Again, a lease is exclusively linked to explicit resources. If you say
"create a lease" without saying what it is for, I don't see the value in
Climate, unless I missed something obvious.

-Sylvain

> Christian, correct me if I'm wrong, please
> Waiting for your comments
>
>
> On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza 
> wrote:
>
>> Hi Christian,
>>
>> 2014-02-20 18:10 GMT+01:00 Martinez, Christian <
>> christian.marti...@intel.com>:
>>
>>   Hello all,
>>>
>>> I'm working in the following BP:
>>> https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept,
>>> in which the idea is to have the possibility to create "special" tenants
>>> that have a lease for all of its associated resources.
>>>
>>>
>>>
>>> The BP is in discussing phase and we were having conversations on IRC
>>> about what approach should we follow.
>>>
>>>
>>>
>>
>> Before speaking about implementation, I would definitely like to know the
>> use cases you want to address.
>> What kind of resources do you want to provision using Climate? The basic
>> thing is: what is the rationale for hooking tenant creation? Could you
>> please be more explicit?
>>
>> At tenant creation, Climate wouldn't have any information for calculating
>> the resources asked for, because the resources wouldn't have been allocated
>> yet. So, generating a lease on top of this would be like a non-formal
>> contract between Climate and the user, accounting for nothing.
>>
>> The main reason behind Climate is to provide SLAs for either user requests
>> or project requests, meaning it's Climate's duty to guarantee that the
>> resource associated with the lease will be created in the future.
>> Speaking of Keystone, the Keystone objects are tenants, users or domains.
>> In that case, if Climate were hooking Keystone, that would mean Climate
>> ensures that the cloud will have enough capacity for creating these
>> resources in the future.
>>
>> IMHO, that's not worth implementing.
>>
>>
>>> First of all, we need to add some "parameters or flags" during tenant
>>> creation so we can know that the associated resources need to have a
>>> lease. Does anyone know if Keystone has functionality similar to Nova's
>>> hooks/API extensions (something like the stuff mentioned at
>>> http://docs.openstack.org/developer/nova/devref/hooks.html )? My first
>>> idea is to intercept the tenant creation call (as is done with
>>> climate-nova) and use that information to associate a lease quota with
>>> the resources assigned to that tenant.
>>>
>>>
>>
>> Keystone has no way to know which resources are associated with a tenant;
>> see how the middleware authentication is done here [1].
>> Regarding the BP, the motivation is to possibly 'leasify' all the VMs
>> from one single tenant. IIRC, it should still be Nova's duty to handle
>> that workflow and send the requests to Climate.
>>
>> -Sylvain
>>
>> [1] :
>> http://docs.openstack.org/developer/keystone/middlewarearchitecture.html
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

2014-02-20 Thread Edgar Magana
Congratulations Oleg!!!

No need to welcome you to the team, you were already part of it ;-)

Edgar

From:  Oleg Bondarev 
Reply-To:  OpenStack List 
Date:  Thursday, February 20, 2014 6:43 AM
To:  OpenStack List 
Subject:  Re: [openstack-dev] [Neutron] Nominate Oleg Bondarev for Core

Thanks Mark,

thanks everyone for voting! I'm so happy to become a member of this really
great team!

Oleg


On Thu, Feb 20, 2014 at 6:29 PM, Mark McClain 
wrote:
>  I'd like to welcome Oleg as a member of the core Neutron team, as he has
> received more than enough +1s and no negative votes from the other cores.
> 
> mark
> 
> On Feb 10, 2014, at 6:28 PM, Mark McClain  wrote:
> 
>> > All-
>> >
>> > I'd like to nominate Oleg Bondarev to become a Neutron core reviewer.  Oleg
>> has been a valuable contributor to Neutron by actively reviewing, working on
>> bugs, and contributing code.
>> >
>> > Neutron cores please reply back with +1/0/-1 votes.
>> >
>> > mark
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___ OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Climate] Lease by tenants feature design

2014-02-20 Thread Sanchez, Cristian A
I agree with Bauza that the main purpose of Climate is to reserve resources,
and in the case of Keystone that would mean reserving tenants, users, domains,
etc.

So, it could be that Climate is not the module in which the tenant “lease”
information should be saved. As stated in the use case, the only purpose of
this BP is to allow the creation of tenants with start and end dates. Then,
when creating resources in that tenant (like VMs), Climate could take the
“lease” information from the tenant itself and create actual leases for the
VMs.

Any thoughts on this?
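
Something like this on the consume side - a rough sketch only, where the
function and field names are made up and the resource_type value is from
memory, so take it with a grain of salt:

    # Rough sketch: when a VM is created in a tenant that carries lease
    # defaults, derive an actual lease from them. All names illustrative.

    def on_instance_create(instance, tenant, climate):
        lease_end = tenant.get('lease_end_date')
        if not lease_end:
            return  # tenant is not time-boxed; nothing to do
        climate.create_lease(
            name='vm-%s' % instance['id'],
            reservations=[{'resource_type': 'virtual:instance',
                           'resource_id': instance['id']}],
            start_date=tenant.get('lease_start_date', 'now'),
            end_date=lease_end)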

From: Sylvain Bauza <sylvain.ba...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, 20 February 2014 15:57
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Climate] Lease by tenants feature design




2014-02-20 19:32 GMT+01:00 Dina Belova
<dbel...@mirantis.com>:
Sylvain, as I understand from the BP description, Christian is not exactly
about reserving tenants themselves like we actually do with VMs/hosts - that's
just the naming. I think he means two things:

1) mark some tenants as "needed to be reserved" - speaking about the resources
assigned to them
2) reserve these resources via Climate (VMs as a first approximation)


Well, I understood your BP; it's Christian's message that was a bit
confusing.
Marking a tenant as "reserved" would then mean that it has some kind of
priority vs. another tenant. But again, as said, how could you ensure at
marking time (i.e. at lease creation) that Climate can honor contracts for
resources that haven't been explicitly defined?

I suppose Christian is now speaking about hacking the tenant creation process
to mark them as "needed to be reserved" (1st step).


Again, a lease is exclusively linked to explicit resources. If you say
"create a lease" without saying what it is for, I don't see the value in
Climate, unless I missed something obvious.

-Sylvain
Christian, correct me if I'm wrong, please
Waiting for your comments


On Thu, Feb 20, 2014 at 10:06 PM, Sylvain Bauza
<sylvain.ba...@gmail.com> wrote:
Hi Christian,

2014-02-20 18:10 GMT+01:00 Martinez, Christian
<christian.marti...@intel.com>:

Hello all,
I’m working in the following BP: 
https://blueprints.launchpad.net/climate/+spec/tenant-reservation-concept, in 
which the idea is to have the possibility to create “special” tenants that have 
a lease for all of its associated resources.

The BP is in discussing phase and we were having conversations on IRC about 
what approach should we follow.


Before speaking about implementation, I would definitely like to know the use
cases you want to address.
What kind of resources do you want to provision using Climate? The basic thing
is: what is the rationale for hooking tenant creation? Could you please be
more explicit?

At tenant creation, Climate wouldn't have any information for calculating the
resources asked for, because the resources wouldn't have been allocated yet.
So, generating a lease on top of this would be like a non-formal contract
between Climate and the user, accounting for nothing.

The main reason behind Climate is to provide SLAs for either user requests or
project requests, meaning it's Climate's duty to guarantee that the resource
associated with the lease will be created in the future.
Speaking of Keystone, the Keystone objects are tenants, users or domains. In
that case, if Climate were hooking Keystone, that would mean Climate ensures
that the cloud will have enough capacity for creating these resources in the
future.

IMHO, that's not worth implementing.


First of all, we need to add some “parameters or flags” during tenant creation
so we can know that the associated resources need to have a lease. Does anyone
know if Keystone has functionality similar to Nova's hooks/API extensions
(something like the stuff mentioned at
http://docs.openstack.org/developer/nova/devref/hooks.html )? My first idea is
to intercept the tenant creation call (as is done with climate-nova) and use
that information to associate a lease quota with the resources assigned to
that tenant.


Keystone has no way to know which resources are associated with a tenant; see
how the middleware authentication is done here [1].
Regarding the BP, the motivation is to possibly 'leasify' all the VMs from one
single tenant. IIRC, it should still be Nova's duty to handle that workflow
and send the requests to Climate.

-Sylvain

[1] : http://docs.openstack.org/developer/keystone/middlewarearchitecture.html



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [savanna] team meeting minutes Feb 20

2014-02-20 Thread Sergey Lukjanov
Thanks to everyone who joined the Savanna meeting.

Here are the logs from the meeting:

Minutes:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-20-18.03.html
Log:
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-20-18.03.log.html

-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] renaming: initial voting

2014-02-20 Thread Sergey Lukjanov
We've agreed to send the top 5 options to the foundation for review; more
details -
http://eavesdrop.openstack.org/meetings/savanna/2014/savanna.2014-02-20-18.03.html


On Thu, Feb 20, 2014 at 10:02 PM, Sergey Lukjanov wrote:

> I've contacted the foundation and they are ready to verify 5 options, so
> we'll choose them at today's IRC team meeting (starting right now).
>
>
> On Wed, Feb 19, 2014 at 12:27 AM, Sergey Lukjanov 
> wrote:
>
>> The voting has ended; you can find the results here -
>> http://civs.cs.cornell.edu/cgi-bin/results.pl?num_winners=10&id=E_5dd4f18fde38ce8e&algorithm=beatpath
>>
>> So, the new name options for more detailed discussion are:
>>
>> 1. Gravity  (Condorcet winner: wins contests with all other choices)
>> 2. Sahara  loses to Gravity by 10-8
>> 3. Quazar  loses to Gravity by 13-3, loses to Sahara by 12-6
>> 4. Stellar  loses to Gravity by 13-4, loses to Quazar by 9-7
>> 5. Caravan  loses to Gravity by 12-5, loses to Stellar by 9-7
>> 6. Tied:
>> Fusor  loses to Gravity by 13-2, loses to Caravan by 9-4
>> Maestro  loses to Gravity by 15-3, loses to Quazar by 9-5
>> Magellanic  loses to Gravity by 15-0, loses to Caravan by 9-5
>> 9. Magellan  loses to Gravity by 16-1, loses to Maestro by 7-4
>> 10. Stackadoop  loses to Gravity by 14-6, loses to Magellan by 8-6
>>
>> Thanks for voting.
>>
>>
>> On Tue, Feb 18, 2014 at 10:52 AM, Sergey Lukjanov > > wrote:
>>
>>> Currently, we have only 19/47 votes, so, I'm adding one more day.
>>>
>>>
>>> On Fri, Feb 14, 2014 at 3:04 PM, Sergey Lukjanov >> > wrote:
>>>
 Hi folks,

 I've created a poll to select 10 candidates for the new Savanna name. It's
 the first round of selecting a new name for our lovely project. This poll
 will end on Monday, Feb 17.

 You should receive an email from "Sergey Lukjanov (CIVS poll
 supervisor) slukja...@mirantis.com"  via cs.cornell.edu with topic
 "Poll: Savanna new name candidates".

 Thank you!

 P.S. I've bcced all ATCs, don't panic.

 --
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.

>>>
>>>
>>>
>>> --
>>> Sincerely yours,
>>> Sergey Lukjanov
>>> Savanna Technical Lead
>>> Mirantis Inc.
>>>
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] supported dependency versioning and testing

2014-02-20 Thread Sean Dague
On 02/20/2014 01:31 PM, Joe Gordon wrote:
> Hi All,
> 
> A discussion recently came up inside of nova about what a supported
> version for a dependency means.  For libvirt we gate on the minimal
> version that we support, but for all python dependencies we gate on
> the highest version that passes our requirements. While we all agree
> that having two different ways of choosing which version to test
> (min and max) is bad, there are good arguments for doing both.
> 
> testing most recent version:
> * We want to make sure we support the latest and greatest
> * Bug fixes
> * Quickly discover backwards incompatible changes so we can deal
> with them as they arise instead of in batch
> 
> Testing lowest version supported:
> * Make sure we don't land any code that breaks compatibility with
> the lowest version we say we support
> 
> 
> A few questions and ideas on how to move forward.
> * How do other projects deal with this? This problem isn't unique
> to OpenStack.
> * What are the issues with making one gate job use the latest
> versions and one use the lowest supported versions?
> * Only test some things on every commit or every day (periodic
> jobs)? But no one ever fixes those things when they break, so who wants
> to own them? Distros? Deployers?
> * Other solutions?
> * Does it make sense to gate on the lowest version of libvirt but
> the highest version of python libs?
> * Given our finite resources what gets us the furthest?

So I'm one of the first people to utter "if it isn't tested, it's
probably broken"; however, I also think we need to be realistic about the
fact that if you worked out the permutations of dependencies and config
options, we'd have as many test matrix scenarios as grains of sand on
the planet.

I do think in some ways this is unique to OpenStack, in that our
automated testing is head and shoulders above any other Open Source
project out there, and most proprietary software systems I've seen.

So this is about being pragmatic. In our dependency testing we are
actually testing with the most recent versions of everything. So I would
think that even with libvirt, we should err in that direction.

That being said, we also need to be a little bit careful about taking
such a hard line on "supported vs. not" based only on what's in the
gate. Because if we did, the following things would be listed as
unsupported (in increasing order of ridiculousness):

 * Live migration
 * Using qpid or zmq
 * Running on anything other than Ubuntu 12.04
 * Running on multiple nodes

Supported to me means we think it should work, and if it doesn't, it's a
high-priority bug that will get fixed quickly. Testing is our sanity
check. But it can't be expected to catch everything, at least not before
the heat death of the universe.
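
If we really wanted a lowest-supported-versions job, one pragmatic approach
would be to derive an "==" pinned requirements file from the existing ">="
minimums and install that in a second variant of the job. A rough sketch
(file names are illustrative, and real requirements lines can be messier
than this handles):

    import re
    import sys

    def pin_to_minimum(line):
        # "foo>=1.2,<2.0  # comment" -> "foo==1.2"
        stripped = line.split('#')[0].strip()
        match = re.match(r'^([A-Za-z0-9._-]+)\s*>=\s*([^,\s]+)', stripped)
        if match:
            return '%s==%s' % (match.group(1), match.group(2))
        return stripped

    if __name__ == '__main__':
        for raw in open(sys.argv[1]):  # e.g. requirements.txt
            pinned = pin_to_minimum(raw)
            if pinned:
                print(pinned)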

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Fixes for the alembic migration (sqlite + postgress) aren't being reviewed

2014-02-20 Thread Armando M.
Thomas,

I feel your frustration; however, before complaining please do follow
the actual chain of events.

Patch [1]: I asked a question which I never received an answer to.
Patch [2]: I did put a -1, but I have nothing against this patch per
se. It was only recently abandoned, and my -1 was there primarily to
give patch [1] the opportunity to be resumed.

No action on a negative review means automatic expiration; if you lose
interest in something you care about, whose fault is that?

A.

[1] = https://review.openstack.org/#/c/52757
[2] = https://review.openstack.org/#/c/68611

On 19 February 2014 06:28, Thomas Goirand  wrote:
> Hi,
>
> I've seen this one:
> https://review.openstack.org/#/c/68611/
>
> which is supposed to fix something for Postgres. This is funny, because
> I was doing the exact same patch to fix it for SQLite. Though this
> was before the last summit in HK.
>
> Since then, I just gave up on having my Debian-specific patch [1]
> upstreamed. No review, despite my insistence. Mark, at the HK summit,
> told me that it was pending a discussion about what the policy for
> SQLite would be.
>
> Guys, this is disappointing. That's the 2nd time the same patch has been
> blocked, with no explanation.
>
> Could 2 core reviewers have a *serious* look at this patch, and explain
> why it's not ok for it to be approved? If nobody says why, then could
> this be approved, so we can move on?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> [1]
> http://anonscm.debian.org/gitweb/?p=openstack/neutron.git;a=blob;f=debian/patches/fix-alembic-migration-with-sqlite3.patch;h=9108b45aaaf683e49b15338bacd813e50e9f563d;hb=b44e96d9e1d750e35513d63877eb05f167a175d8
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Monitoring IP Availability

2014-02-20 Thread Jay Pipes
On Thu, 2014-02-20 at 00:53 +, Vilobh Meshram wrote:
> Hello OpenStack Dev,
> 
> We wanted to have your input on how different companies/organizations
> using OpenStack are monitoring IP availability, as this can be useful
> to track the used IPs and the total number of IPs.

I presume you are talking about monitoring the number of available
public floating IP addresses? At AT&T, we just had a Nagios check that
queried the Nova or Neutron database to see if the number of available
public IP addresses went below a certain threshold.
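
Roughly along these lines - a minimal sketch from memory, assuming
nova-network's floating_ips table (table and column names may differ by
release, and the thresholds and connection details are made up):

    import sys

    import MySQLdb  # from the MySQL-python package

    WARN_FREE = 50
    CRIT_FREE = 10

    def main():
        db = MySQLdb.connect(host='dbhost', user='nagios',
                             passwd='secret', db='nova')
        cur = db.cursor()
        # unallocated floating IPs have no project assigned
        cur.execute("SELECT COUNT(*) FROM floating_ips "
                    "WHERE deleted = 0 AND project_id IS NULL")
        free = cur.fetchone()[0]
        if free <= CRIT_FREE:
            print('CRITICAL: only %d floating IPs free' % free)
            return 2
        if free <= WARN_FREE:
            print('WARNING: only %d floating IPs free' % free)
            return 1
        print('OK: %d floating IPs free' % free)
        return 0

    if __name__ == '__main__':
        sys.exit(main())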

Best,
-jay



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] blueprint: Nova with py33 compatibility

2014-02-20 Thread Russell Bryant
On 02/20/2014 09:43 AM, 郭小熙 wrote:
> We will move to Python33 in the future. More and more OpenStack projects,
> including python-novaclient, are Python33 compatible. Do we have a plan to
> make Nova python33 compatible?
> 
> As far as I know, oslo.messaging will not support python33 in Icehouse;
> this is just one dependency for Nova, which means we can't finish this in
> Icehouse. I registered a blueprint [1] to make us move to Python33
> smoothly in the future. Python33 compatibility would be taken into
> account while reviewing code.
> 
> We have to add py33 check/gate jobs to check Py33 compatibility. This
> blueprint could be marked as implemented only once Nova code can pass
> these jobs.
> 
> [1] https://blueprints.launchpad.net/nova/+spec/nova-py3kcompat

Python 3 support is certainly a goal that *all* OpenStack projects
should be aiming for.  However, for Nova, I don't think Nova's code is
actually our biggest hurdle.  The hardest parts are dependencies that we
have that don't support Python 3.  A big example is eventlet.  We're so
far off that I don't even think a CI job is useful yet.
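
That said, the code-level part is mostly mechanical. A few illustrative
examples of the patterns reviewers would flag (not Nova code, just the
general shape):

    from __future__ import print_function

    # 1. Exception syntax: "except ValueError, e:" is a syntax error on py3
    try:
        raise ValueError('boom')
    except ValueError as e:
        print(e)

    # 2. dict iteration: iteritems() is gone on py3
    d = {'a': 1}
    for k, v in d.items():
        print(k, v)

    # 3. "/" is true division on py3; be explicit with // for integers
    assert 7 // 2 == 3

    # 4. be explicit about bytes vs. text at the boundaries
    data = u'r\xe9sum\xe9'.encode('utf-8')
    assert data.decode('utf-8') == u'r\xe9sum\xe9'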

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread David Kranz
Running this test in tempest requires an ami image triple to be on the 
disk where tempest is running in order for the test to upload it. It 
would be a lot easier if this test could use a simple image file 
instead. That image file could even be obtained from the cloud being 
tested while configuring tempest. Is there a reason to keep the 
three-part image?


 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread Frittoli, Andrea (HP Cloud)
Thanks David, +++ 

This is a strong dependency on devstack, and it would be nice if we could
lose it.

andrea

-Original Message-
From: David Kranz [mailto:dkr...@redhat.com] 
Sent: 20 February 2014 21:32
To: OpenStack Development Mailing List
Subject: [openstack-dev] [qa] Does scenario.test_minimum_basic need to
upload ami images?

Running this test in tempest requires an ami image triple to be on the disk
where tempest is running in order for the test to upload it. It would be a
lot easier if this test could use a simple image file instead. That image
file could even be obtained from the cloud being tested while configuring
tempest. Is there a reason to keep the three-part image?

  -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Does scenario.test_minimum_basic need to upload ami images?

2014-02-20 Thread Sean Dague
On 02/20/2014 04:31 PM, David Kranz wrote:
> Running this test in tempest requires an ami image triple to be on the
> disk where tempest is running in order for the test to upload it. It
> would be a lot easier if this test could use a simple image file
> instead. That image file could even be obtained from the cloud being
> tested while configuring tempest. Is there a reason to keep the
> three-part image?

I have no issue changing this to a single-part image, as long as we
can find a way to make it work with cirros in the gate
(mostly because it can run with a really low memory footprint).

Is there a single-part cirros image somewhere? Honestly, it would be much
simpler even in the devstack environment.
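
For comparison, a sketch of the two upload paths through the glance v1
Python API (endpoint/token handling omitted and file names assumed; I
believe cirros publishes plain disk images alongside the uec tarballs, but
that's worth double checking):

    import glanceclient

    glance = glanceclient.Client('1', 'http://glance:9292', token='...')

    # Current three-part dance: kernel + ramdisk + machine image.
    aki = glance.images.create(name='cirros-kernel', disk_format='aki',
                               container_format='aki',
                               data=open('cirros-vmlinuz', 'rb'))
    ari = glance.images.create(name='cirros-ramdisk', disk_format='ari',
                               container_format='ari',
                               data=open('cirros-initrd', 'rb'))
    ami = glance.images.create(name='cirros', disk_format='ami',
                               container_format='ami',
                               properties={'kernel_id': aki.id,
                                           'ramdisk_id': ari.id},
                               data=open('cirros-blank.img', 'rb'))

    # Single-part alternative: one file, nothing to cross-reference.
    img = glance.images.create(name='cirros', disk_format='qcow2',
                               container_format='bare',
                               data=open('cirros-disk.img', 'rb'))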

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

