Re: [openstack-dev] [Tacker][Tricircle]Multi-VIM collaboration

2015-12-09 Thread Sridhar Ramaswamy
Sure.

As mentioned in the BP, we stumbled onto the Tricircle project while researching
this feature (and hence it got mentioned in the BP). It sure looks
promising. The immediate asks from our user community are quite modest,
though, so we are trying to keep the scope small. However, the integration
point you mention makes sense, so that Tacker + Tricircle could be one of
the deployment options. Let's continue the discussion in Gerrit as we put
all the other suggestions coming in (like Heat multi-cloud / multi-region) in
perspective. It will be great to have the Tacker multi-site API work with
different multi-site deployment patterns underneath.

- Sridhar

On Tue, Dec 8, 2015 at 10:37 PM, Zhipeng Huang 
wrote:

> Hi Tacker team,
>
> As I commented in the BP [1], our team is interested in collaborating in
> this area. I think one of the collaboration points would be to define a
> mapping between the Tacker multi-VIM API and the Tricircle resource routing
> API table [2].
>
> [1]https://review.openstack.org/#/c/249085/
> [2]
> https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit#heading=h.5t71ara040n5
>
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhip...@huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipe...@uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Li, Xiaoyan
On Dec 10, 2015 06:34, Mike Perez wrote:
> On 09:27 Dec 09, John Griffith wrote:
>> On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan 
> wrote:
> 
> 
> 
>>> As a result, this raises two concerns here:
>>> 1. Such operations should behave the same way in Cinder.
>>> 2. I prefer to let the storage driver decide the dependencies, not
>>> the general core code.
>>> 
>> 
>> I have and always will strongly disagree with this approach and your
>> proposal.  Sadly we've already started to allow more and more vendor
>> drivers to just "do their own thing" and implement their own special API
>> methods.  This is, in my opinion, a horrible path and defeats the entire
>> purpose of having a Cinder abstraction layer.
>> 
>> This will make it impossible to have compatibility between clouds for
>> those that care about it, and it will make it impossible for
>> operators/deployers to understand exactly what they can and should
>> expect in terms of the usage of their cloud.  Finally, it will also
>> mean that OpenStack API functionality is COMPLETELY dependent on the
>> backend device.  I know people are sick of hearing me say this, so I'll
>> keep it short and say it one more time: "Compatibility in the API
>> matters and should always be our priority"
> 
> +1
>

This implies that Cinder needs to take on more and more work, while vendor
storage backends do what Cinder asks of them.
For example, Cinder supports full and incremental snapshots in the core code,
and it keeps track of the dependencies, as it does for backups.

A more general example is that storage vendors support different kinds of
volumes, like normal, thin-provisioned, compressed, etc.
Cinder needs to implement such functions in the core code. Every storage
vendor reports its capabilities to the Cinder scheduler.
When a user creates a volume, the scheduler picks a backend based on those
capabilities.
(Of course, these functions in Cinder should be general and implemented by
most vendor storage backends.)
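
A minimal, illustrative sketch of that idea (not actual Cinder code; the
backend names and capability keys below are made up):

# Each backend reports its capabilities; a scheduler-style filter then
# matches them against what the requested volume type asks for via extra_specs.
backend_capabilities = {
    'lvm-1':    {'thin_provisioning_support': True, 'compression_support': False},
    'vendor-x': {'thin_provisioning_support': True, 'compression_support': True},
}

requested_extra_specs = {'compression_support': True}

def matching_backends(capabilities, extra_specs):
    return [name for name, caps in capabilities.items()
            if all(caps.get(key) == value for key, value in extra_specs.items())]

print(matching_backends(backend_capabilities, requested_extra_specs))
# -> ['vendor-x']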

But currently the Cinder core code does little of this; much of it lives in
extra_specs and configuration files, which are handled in vendor drivers.

This leads to problems such as extending volumes: extending a volume that has
an incremental snapshot fails in the vendor storage, and the Cinder volume
then goes into error_extending status. In my opinion this is not good.

Best wishes
Lisa


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tacker][Tricircle]Multi-VIM collaboration

2015-12-09 Thread Zhipeng Huang
Great :)



On Thu, Dec 10, 2015 at 10:25 AM, Sridhar Ramaswamy 
wrote:

> Sure.
>
> As mentioned in the BP, we stumbled onto the Tricircle project while researching
> this feature (and hence it got mentioned in the BP). It sure looks
> promising. The immediate asks from our user community are quite modest,
> though, so we are trying to keep the scope small. However, the integration
> point you mention makes sense, so that Tacker + Tricircle could be one of
> the deployment options. Let's continue the discussion in Gerrit as we put
> all the other suggestions coming in (like Heat multi-cloud / multi-region) in
> perspective. It will be great to have the Tacker multi-site API work with
> different multi-site deployment patterns underneath.
>
> - Sridhar
>
> On Tue, Dec 8, 2015 at 10:37 PM, Zhipeng Huang 
> wrote:
>
>> Hi Tacker team,
>>
>> As I commented in the BP [1], our team is interested in collaborating in
>> this area. I think one of the collaboration points would be to define a
>> mapping between the Tacker multi-VIM API and the Tricircle resource routing
>> API table [2].
>>
>> [1]https://review.openstack.org/#/c/249085/
>> [2]
>> https://docs.google.com/document/d/18kZZ1snMOCD9IQvUKI5NVDzSASpw-QKj7l2zNqMEd3g/edit#heading=h.5t71ara040n5
>>
>>
>> --
>> Zhipeng (Howard) Huang
>>
>> Standard Engineer
>> IT Standard & Patent/IT Product Line
>> Huawei Technologies Co., Ltd
>> Email: huangzhip...@huawei.com
>> Office: Huawei Industrial Base, Longgang, Shenzhen
>>
>> (Previous)
>> Research Assistant
>> Mobile Ad-Hoc Network Lab, Calit2
>> University of California, Irvine
>> Email: zhipe...@uci.edu
>> Office: Calit2 Building Room 2402
>>
>> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-09 Thread Kekane, Abhishek


-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: 09 December 2015 19:28
To: openstack-dev
Subject: Re: [openstack-dev] [python-glanceclient] Return request-id to caller

Excerpts from Flavio Percoco's message of 2015-12-09 09:09:10 -0430:
> On 09/12/15 11:33 +, Kekane, Abhishek wrote:
> >Hi Devs,
> >
> > 
> >
> >We are adding support for returning ‘x-openstack-request-id’  to the 
> >caller as per the design proposed in cross-project specs:
> >
> >http://specs.openstack.org/openstack/openstack-specs/specs/
> >return-request-id.html
> >
> > 
> >
> >Problem Description:
> >
> >Cannot add a new property of list type to the warlock.model object.
> >
> > 
> >
> >How is a model object created:
> >
> >Let’s take an example of glanceclient.api.v2.images.get() call [1]:
> >
> > 
> >
> >Here, after getting the response, we call the model() method. This model() 
> >does the job of creating a warlock.model object (essentially a dict) 
> >based on the schema given as an argument (the image schema retrieved from 
> >glance in this case). Inside
> >model() the raw() method simply returns the image schema as a JSON 
> >object. The advantage of this warlock.model object over a simple dict 
> >is that it validates any changes to the object based on the rules specified in 
> >the reference schema.
> >The keys of this model object are available as object properties to 
> >the caller.
> >
> > 
> >
> >Underlying reason:
> >
> >The schema for different sub-APIs is returned a bit differently. For the 
> >images and metadef APIs, glance.schema.Schema.raw() is used, which returns 
> >a schema containing “additionalProperties”: {“type”: “string”}, 
> >whereas for the members and tasks APIs glance.schema.Schema.minimal() is 
> >used, which returns a schema object that does not contain “additionalProperties”.
> >
> > 
> >
> >So we can add extra properties of any type to the model object 
> >returned from the members or tasks API, but for the images and metadef APIs we 
> >can only add properties of type string. Also, in the 
> >latter case, we depend on the glance configuration to allow additional 
> >properties.
> >
> > 
> >
> >As per our analysis we have come up with two approaches for resolving 
> >this
> >issue:
> >
> > 
> >
> >Approach #1:  Inject request_ids property in the warlock model object 
> >in glance client
> >
> >Here we do the following:
> >
> >1. Inject the ‘request_ids’ as additional property into the model 
> >object (returned from model())
> >
> >2. Return the model object which now contains request_ids property
> >
> > 
> >
> >Limitations:
> >
> >1. Because the glance schemas for images and metadef only allow 
> >additional properties of type string, even though the natural type of 
> >request_ids should be a list, we have to make it a comma-separated 
> >‘string’ of request ids as a compromise.
> >
> >2. A lot of extra code is needed to wrap objects returned from the 
> >client API so that the caller can get request ids. For example, we 
> >need to write wrapper classes for dict, list, str, tuple, generator.
> >
> >3. Not a good design, as we are adding a property which should 
> >actually be a base property but is added as an additional property as a compromise.
> >
> >4. There is a dependency on glance whether to allow custom/additional 
> >properties or not. [2]
> >
> > 
> >
> >Approach #2:  Add ‘request_ids’ property to all schema definitions in 
> >glance
> >
> > 
> >
> >Here we add  ‘request_ids’ property as follows to the various APIs (schema):
> >
> > 
> >
> >“request_ids”: {
> >  "type": "array",
> >  "items": {
> >    "type": "string"
> >  }
> >}
> >
> > 
> >
> >Doing this will make the changes in glance client very simple compared 
> >to approach #1.
> >
> >This also looks like a better design, as it will be consistent.
> >
> >We simply need to modify the request_ids property in  various API 
> >calls for example glanceclient.v2.images.get().
> >
> 
> Hey Abhishek,
> 
> thanks for working on this.
> 
> To be honest, I'm a bit confused on why the request_id needs to be an 
> attribute of the image. Isn't it passed as a header? Does it have to 
> be an attribute so we can "print" it?

The requirement they're trying to meet is to make the request id available to 
the user of the client library [1]. The user typically doesn't have access to 
the headers, so the request id needs to be part of the payload returned from 
each method. In other clients that work with simple data types, they've 
subclassed dict, list, etc. to add the extra property. This adds the request id 
to the return value without making a breaking change to the API of the client 
library.
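
For instance, a minimal sketch of that wrapper approach (the class name here
is hypothetical, not an existing client class):

class RequestIdDict(dict):
    """A dict that also carries the x-openstack-request-id values of a call."""
    def __init__(self, data, request_ids=None):
        super(RequestIdDict, self).__init__(data)
        self.request_ids = request_ids or []

image = RequestIdDict({'id': 'abc', 'name': 'cirros'}, request_ids=['req-1234'])
print(image['name'], image.request_ids)  # normal dict access plus the new attribute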

Abhishek, would it be possible to add the request id information to the schema 
data in glance client, before giving it to warlock?
I don't know whether warlock asks for the schema or what form that data takes 
(dictionary, JSON blob, etc.). If it's a dictionary visible to the client code 
it would be straightforward to add data to it.
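
If it is, something along these lines might work -- a rough sketch only, which
assumes the schema is a plain dict and that warlock.model_factory() is the
entry point used to build the model class:

import warlock

def model_with_request_ids(schema, raw_data, request_ids):
    # Add a 'request_ids' property to the schema before warlock sees it, so
    # the generated model will accept the extra attribute as a list of strings.
    schema = dict(schema)
    properties = dict(schema.get('properties', {}))
    properties['request_ids'] = {'type': 'array', 'items': {'type': 'string'}}
    schema['properties'] = properties

    model_cls = warlock.model_factory(schema)
    data = dict(raw_data)
    data['request_ids'] = list(request_ids)
    return model_cls(**data)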

Yes, it 

Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread ChangBo Guo
We have more projects, like Trove/Murano/Manila, using the deprecated method;
I just did a simple search:


http://codesearch.openstack.org/?q=timeutils.isotime&i=nope&files=&repos=

2015-12-09 17:54 GMT+08:00 Julien Danjou :

> Hi fellow developers,
>
> Some oslo_utils.timeutils functions have been deprecated for months and
> several major version of oslo.utils. We're going to remove these
> functions as part of:
>
> https://review.openstack.org/#/c/252898/
>
> Some projects, Glance in particular, are still using these functions.
>
> FWIW, I've started to cook a patch for Glance at:
>
> https://review.openstack.org/#/c/253517/
>
> Please, make sure you don't use any of these functions or upgrading to a
> new oslo.utils will very likely break your project.
>
> Happy hacking!
>
> Cheers,
> --
> Julien Danjou
> # Free Software hacker
> # https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic]Can not login machine by using Ironic

2015-12-09 Thread Zhi Chang
hi, all
I have installed Ironic in my devstack. And I create a keypair in nova by 
using command:
nova keypair-add --pub-key ~/.ssh/id_rsa.pub new
Next, I boot a vm in nova by using command:
nova boot --image [image_id] --flavor [baremetal_id] --nic 
net-id=[net_id] --key-name new test3
After a few minutes, I can connect the machine from router namespace like 
this:
[stack@localhost ~]$ sudo ip netns exec qrouter-[router_id] ping 
10.0.0.205
PING 10.0.0.205 (10.0.0.205) 56(84) bytes of data.
64 bytes from 10.0.0.205: icmp_seq=1 ttl=64 time=0.412 ms
64 bytes from 10.0.0.205: icmp_seq=2 ttl=64 time=0.346 ms
So I want to login the machine by using command:
sudo ip netns exec qrouter-[router_id] ssh -i ~/.ssh/id_rsa 
root@10.0.0.205
But I can not login the machine by using the file named ~/.ssh/id_rsa. It 
remind me to input
the password. Why? I used the key to boot this machine!


Could someone give me some advice?




Thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Mitaka Infra Sprint

2015-12-09 Thread Joshua Hesketh
Hi all,
As discussed during the infra meeting on Tuesday [0], the infra team will be
holding a mid-cycle sprint to focus on infra-cloud [1].
The sprint is an opportunity to get in a room and really work through as
much code and as many reviews as we can related to infra-cloud, while having
each other nearby to discuss blockers and technical challenges, and to enjoy
each other's company.
Information + RSVP: https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
Dates: Monday, February 22nd at 9:00am to Thursday, February 25th
Location: HPE Fort Collins, Colorado office
Who: Anybody is welcome. Please put your name on the wiki page if you are
interested in attending.
If you have any questions please don't hesitate to ask.
Cheers,
Josh + the Infra team
[0] http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-08-19.00.html
[1] https://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Davanum Srinivas
So clearly the deprecation process is not working, as no one looks at
the log messages and fixes their own projects. Sigh!

-- Dims

On Thu, Dec 10, 2015 at 7:08 AM, ChangBo Guo  wrote:
>   We have more projects, like Trove/Murano/Manila, using the deprecated method; I
> just did a simple search:
>
>
> http://codesearch.openstack.org/?q=timeutils.isotime&i=nope&files=&repos=
>
> 2015-12-09 17:54 GMT+08:00 Julien Danjou :
>>
>> Hi fellow developers,
>>
>> Some oslo_utils.timeutils functions have been deprecated for months and
>> several major version of oslo.utils. We're going to remove these
>> functions as part of:
>>
>> https://review.openstack.org/#/c/252898/
>>
>> Some projects, Glance in particular, are still using these functions.
>>
>> FWIW, I've started to cook a patch for Glance at:
>>
>> https://review.openstack.org/#/c/253517/
>>
>> Please, make sure you don't use any of these functions or upgrading to a
>> new oslo.utils will very likely break your project.
>>
>> Happy hacking!
>>
>> Cheers,
>> --
>> Julien Danjou
>> # Free Software hacker
>> # https://julien.danjou.info
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Joshua Harlow
Shouldn't be too hard (although it's probably not on each oslo project, 
but on the consuming projects).


The warnings module can turn warnings into raised exceptions with a 
simple command line switch btw...


For example:

$ python -Wonce
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import warnings
>>> warnings.warn("I am not supposed to be used", DeprecationWarning)
__main__:1: DeprecationWarning: I am not supposed to be used

$ python -Werror
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import warnings
>>> warnings.warn("I am not supposed to be used", DeprecationWarning)
Traceback (most recent call last):
  File "", line 1, in 
DeprecationWarning: I am not supposed to be used

https://docs.python.org/2/library/warnings.html#the-warnings-filter

Turn that CLI switch from off to on and I'm pretty sure usage of 
deprecated things will become pretty evident real quick ;)
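
The same thing can also be done programmatically, e.g. in a test suite's setup
code, which is roughly what a gating job could rely on. A minimal sketch (the
function name is illustrative, not an existing fixture):

import warnings

def fail_on_deprecations():
    # Escalate DeprecationWarning to an error so any call into a deprecated
    # oslo.utils function fails the run instead of just printing a warning.
    warnings.simplefilter('error', DeprecationWarning)

fail_on_deprecations()
warnings.warn("timeutils.isotime() is deprecated", DeprecationWarning)  # now raises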


Robert Collins wrote:

On 10 December 2015 at 18:19, Davanum Srinivas  wrote:

So clearly the deprecation process is not working as no one looks at
the log messages and fix their own projects. Sigh!


I think we need some way to ensure that a deprecation has been done:
perhaps a job that we can run on each oslo project which fails if
deprecated things are in use; we can have that on non-voting, and then
use it to detect the release in which we can start the removal clock
for a deprecated thing.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Davanum Srinivas
Robert,

+1 to a non-fire-drill way, as long as it's the individual projects that
see the output and take action on it as appropriate.

-- dims

On Thu, Dec 10, 2015 at 10:00 AM, Robert Collins
 wrote:
> I'm not suggesting we increase Oslo responsibilities, but that we need a non
> fire drill way to signal that the work hasn't been done.
>
> On 10 Dec 2015 6:42 PM, "Davanum Srinivas"  wrote:
>>
>> Rob,
>>
>> This is a shared responsibility. I am stating that projects using oslo
>> are failing to take up their share of things to be done. No, we should
>> not increase more responsibility on Oslo team.
>>
>> -- Dims
>>
>> On Thu, Dec 10, 2015 at 8:25 AM, Robert Collins
>>  wrote:
>> > On 10 December 2015 at 18:19, Davanum Srinivas 
>> > wrote:
>> >> So clearly the deprecation process is not working as no one looks at
>> >> the log messages and fix their own projects. Sigh!
>> >
>> > I think we need some way to ensure that a deprecation has been done:
>> > perhaps a job that we can run on each oslo project which fails if
>> > deprecated things are in use; we can have that on non-voting, and then
>> > use it to detect the release in which we can start the removal clock
>> > for a deprecated thing.
>> >
>> > -Rob
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] summarizing the cross-project summit session on Mitaka themes

2015-12-09 Thread Mike Perez
On 18:04 Nov 06, Doug Hellmann wrote:
> One thing I forgot to mention in my original email was the discussion
> about when we would have this themes conversation for the N cycle.
> I had originally hoped we would discuss the themes online before
> the summit, and that those would inform decisions about summit
> sessions. Several other folks in the room made the point that we
> were unlikely to come up with a theme so surprising that we would
> add or drop a summit session from any existing planning, so having
> the discussion in person at the summit to add background to the
> other sessions for the week was more constructive. I'd like to hear
> from some folks about whether that worked out this time, and then
> we can decide closer to the N summit whether to use an email thread
> or some other venue instead of (or in addition to) a summit session
> in Austin.
> 
> I also plan to start some email threads this cycle after each
> milestone to re-consider the themes and get feedback about how we're
> making progress.  I hope the release liaisons, at least, will
> participate in those discussions, and it would be great to have the
> product working group involved as well.

It might make sense to have the cross-project spec liaison [1] be part of
this discussion?

[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-December/080869.html

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Robert Collins
On 10 December 2015 at 18:19, Davanum Srinivas  wrote:
> So clearly the deprecation process is not working as no one looks at
> the log messages and fix their own projects. Sigh!

I think we need some way to ensure that a deprecation has been done:
perhaps a job that we can run on each oslo project which fails if
deprecated things are in use; we can have that on non-voting, and then
use it to detect the release in which we can start the removal clock
for a deprecated thing.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Armando M.
On 9 December 2015 at 09:31, Doug Wiegley 
wrote:

>
> > On Dec 9, 2015, at 7:25 AM, Doug Hellmann  wrote:
> >
> > Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:
> >> On 3 December 2015 at 02:21, Thierry Carrez 
> wrote:
> >>
> >>> Armando M. wrote:
On 2 December 2015 at 01:16, Thierry Carrez wrote:
> >Armando M. wrote:
> >>> One solution is, like you mentioned, to make some (or all) of
> >>> them
> >>> full-fledged project teams. Be aware that this means the TC
> >>> would judge
> >>> those new project teams individually and might reject them if we
> >>> feel
> >>> the requirements are not met. We might want to clarify what
> >>> happens
> >>> then.
> >>
> >> That's a good point. Do we have existing examples of this or
> >>> would we be
> >> sailing in uncharted waters?
> >
> >It's been pretty common that we rejected/delayed applications for
> >projects where we felt they needed more alignment. In such cases,
> >>> the
> >immediate result for those projects if they are out of the Neutron
> >"stadium" is that they would fall from the list of official
> >>> projects.
> >Again, I'm fine with that outcome, but I want to set expectations
> >clearly :)
> 
>  Understood. It sounds to me that the outcome would be that those
>  projects (that may end up being rejected) would show nowhere on [1],
> but
>  would still be hosted and can rely on the support and services of the
>  OpenStack community, right?
> 
>  [1] http://governance.openstack.org/reference/projects/
> >>>
> >>> Yes they would still be hosted on OpenStack development infrastructure.
> >>> Contributions would no longer count toward ATC status, so people who
> >>> only contribute to those projects would no longer be able to vote in
> the
> >>> Technical Committee election. They would not have "official" design
> >>> summit space either -- they can still camp in the hallway though :)
> >>>
> >>
> >> Hi folks,
> >>
> >> For whom of you is interested in the conversation, the topic was brought
> >> for discussion at the latest TC meeting [1]. Unfortunately I was unable
> to
> >> join, however I would like to try and respond to some of the comments
> made
> >> to clarify my position on the matter:
> >>
> >>> ttx: the neutron PTL say he can't vouch for anything in the neutron
> >> "stadium"
> >>
> >> To be honest that's not entirely my position.
> >>
> >> The problem stems from the fact that, if I am asked what the stadium
> means,
> >> as a PTL I can't give a straight answer; ttx put it relatively well
> (and I
> >> quote him): by adding all those projects under your own project team,
> you
> >> bypass the Technical Committee approval that they behave like OpenStack
> >> projects and are produced by the OpenStack community. The Neutron team
> >> basically vouches for all of them to be on par. As far as the Technical
> >> Committee goes, they are all being produced by the same team we
> originally
> >> blessed (the Neutron project team).
> >>
> >> The reality is: some of these projects are not produced by the same
> team,
> >> they do not behave the same way, and they do not follow the same
> practices
> >> and guidelines. For the stadium to make sense, in my humble opinion, a
> >
> > This is the thing that's key, for me. As Anita points out elsewhere in
> > this thread, we want to structure our project teams so that decision
> > making and responsibility are placed in the same set of hands. It sounds
> > like the Stadium concept has made it easy to let those diverge.
> >
> >> definition of these practices should happen and enforcement should
> follow,
> >> but who's got the time for policing and enforcing eviction, especially
> on a
> >> large scale? So we either reduce the scale (which might not be feasible
> >> because in OpenStack we're all about scaling and adding more and more
> and
> >> more), or we address the problem more radically by evolving the
> >> relationship from tight aggregation to loose association; this way who
> >> needs to vouch for the Neutron relationship is not the Neutron PTL, but
> the
> >> person sponsoring the project that wants to be associated to Neutron. On
> >> the other end, the vouching may still be pursued, but for a much more
> >> focused set of initiatives that are led by the same team.
> >>
> >>> russellb: I attempted to start breaking down the different types of
> repos
> >> that are part of the stadium (consumer, api, implementation of
> technology,
> >> plugins/drivers).
> >>
> >> The distinction between implementation of technology, plugins/drivers
> and
> >> api is not justified IMO because from a neutron standpoint they all look
> >> like the same: they leverage the pluggable extensions to the Neutron
> core
> >> framework. As I 

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Armando M.
On 9 December 2015 at 04:06, Sean Dague  wrote:

> On 12/09/2015 01:46 AM, Armando M. wrote:
> >
> >
> > On 3 December 2015 at 02:21, Thierry Carrez wrote:
> >
> > Armando M. wrote:
> > > On 2 December 2015 at 01:16, Thierry Carrez wrote:
> > >> Armando M. wrote:
> > >> >> One solution is, like you mentioned, to make some (or all)
> of them
> > >> >> full-fledged project teams. Be aware that this means the
> TC would judge
> > >> >> those new project teams individually and might reject them
> if we feel
> > >> >> the requirements are not met. We might want to clarify
> what happens
> > >> >> then.
> > >> >
> > >> > That's a good point. Do we have existing examples of this
> or would we be
> > >> > sailing in uncharted waters?
> > >>
> > >> It's been pretty common that we rejected/delayed applications
> for
> > >> projects where we felt they needed more alignment. In such
> cases, the
> > >> immediate result for those projects if they are out of the
> Neutron
> > >> "stadium" is that they would fall from the list of official
> projects.
> > >> Again, I'm fine with that outcome, but I want to set
> expectations
> > >> clearly :)
> > >
> > > Understood. It sounds to me that the outcome would be that those
> > > projects (that may end up being rejected) would show nowhere on
> [1], but
> > > would still be hosted and can rely on the support and services of
> the
> > > OpenStack community, right?
> > >
> > > [1] http://governance.openstack.org/reference/projects/
> >
> > Yes they would still be hosted on OpenStack development
> infrastructure.
> > Contributions would no longer count toward ATC status, so people who
> > only contribute to those projects would no longer be able to vote in
> the
> > Technical Committee election. They would not have "official" design
> > summit space either -- they can still camp in the hallway though :)
> >
> >
> > Hi folks,
> >
> > For whom of you is interested in the conversation, the topic was brought
> > for discussion at the latest TC meeting [1]. Unfortunately I was unable
> > to join, however I would like to try and respond to some of the comments
> > made to clarify my position on the matter:
> >
> >> ttx: the neutron PTL say he can't vouch for anything in the neutron
> > "stadium"
> >
> > To be honest that's not entirely my position.
> >
> > The problem stems from the fact that, if I am asked what the stadium
> > means, as a PTL I can't give a straight answer; ttx put it relatively
> > well (and I quote him): by adding all those projects under your own
> > project team, you bypass the Technical Committee approval that they
> > behave like OpenStack projects and are produced by the OpenStack
> > community. The Neutron team basically vouches for all of them to be on
> > par. As far as the Technical Committee goes, they are all being produced
> > by the same team we originally blessed (the Neutron project team).
> >
> > The reality is: some of these projects are not produced by the same
> > team, they do not behave the same way, and they do not follow the same
> > practices and guidelines. For the stadium to make sense, in my humble
> > opinion, a definition of these practices should happen and enforcement
> > should follow, but who's got the time for policing and enforcing
> > eviction, especially on a large scale? So we either reduce the scale
> > (which might not be feasible because in OpenStack we're all about
> > scaling and adding more and more and more), or we address the problem
> > more radically by evolving the relationship from tight aggregation to
> > loose association; this way who needs to vouch for the Neutron
> > relationship is not the Neutron PTL, but the person sponsoring the
> > project that wants to be associated to Neutron. On the other end, the
> > vouching may still be pursued, but for a much more focused set of
> > initiatives that are led by the same team.
> >
> >> russellb: I attempted to start breaking down the different types of
> > repos that are part of the stadium (consumer, api, implementation of
> > technology, plugins/drivers).
> >
> > The distinction between implementation of technology, plugins/drivers
> > and api is not justified IMO because from a neutron standpoint they all
> > look like the same: they leverage the pluggable extensions to the
> > Neutron core framework. As I attempted to say: we have existing plugins
> > and drivers that implement APIs, and we have plugins that implement
> > technology, so the extra classification seems overspecification.
> >
> >> flaper87: I agree a driver should not be independent
> >
> > Why, what's your rationale? If we 

Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Armando M.
On 9 December 2015 at 02:02, Thierry Carrez  wrote:

> Armando M. wrote:
> > For whom of you is interested in the conversation, the topic was brought
> > for discussion at the latest TC meeting [1]. Unfortunately I was unable
> > to join, however I would like to try and respond to some of the comments
> > made to clarify my position on the matter:
> >
> >> ttx: the neutron PTL say he can't vouch for anything in the neutron
> > "stadium"
> >
> > To be honest that's not entirely my position.
> > [...]
>
> I think I should have said "for everything" rather than "for anything" :)
>
>
OK, it makes more sense!

>> flaper87: I agree a driver should not be independent
> >
> > Why, what's your rationale? If we dig deeper, some drivers are small
> > code drops with no or untraceable maintainers. Some are actively
> > developed and can be fairly complex. The spectrum is pretty wide. Either
> > way, I think that preventing them from being independent in principle
> > may hurt the ones that can be pretty elaborated, and the ones that are
> > stale may hurt Neutron's reputation because we're the ones who are
> > supposed to look after them (after all didn't we vouch for them??)
> > [...]
>
> Yes, I agree with you that the line in the sand (between what should be
> independent and what should stay in neutron) should not be based on a
> technical classification, but on a community definition. The "big tent"
> is all about project teams - we judge if that team follows the OpenStack
> way, more than we judge what the team technically produces. As far as
> neutron goes, the question is not whether what the team produces is a
> plugin or a driver: the question is whether all the things are actually
> produced by the same team and the same leadership.


> If the teams producing those things overlap so significantly the Neutron
> leadership can vouch for them being done by "the neutron project team",
> they should stay in. If the subteams do not overlap, or follow different
> development practices, or have independent leadership, they are not
> produced by "the neutron project team" and should have their own
> independent project team.
>

I am glad you made this point, because some projects have clearly been a
spin-off sponsored by the same team and leadership, whilst others have not
(call it the new stuff if you will). However, people move on while the
technology tends to be what remains, so I am not sure the judgement call
should be about teams rather than technology, but that's a different
conversation I suppose.

If that's the criterion, managing the growth (or lack thereof) of the
stadium becomes a problem of a different nature. However, before we do that
we'll have to figure out what to do with the growth that has occurred up
until now without taking this criterion into account to the letter!


>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Evolving the stadium concept

2015-12-09 Thread Armando M.
On 9 December 2015 at 06:25, Doug Hellmann  wrote:

> Excerpts from Armando M.'s message of 2015-12-08 22:46:16 -0800:
> > On 3 December 2015 at 02:21, Thierry Carrez 
> wrote:
> >
> > > Armando M. wrote:
> > > > On 2 December 2015 at 01:16, Thierry Carrez wrote:
> > > >> Armando M. wrote:
> > > >> >> One solution is, like you mentioned, to make some (or all) of
> > > them
> > > >> >> full-fledged project teams. Be aware that this means the TC
> > > would judge
> > > >> >> those new project teams individually and might reject them
> if we
> > > feel
> > > >> >> the requirements are not met. We might want to clarify what
> > > happens
> > > >> >> then.
> > > >> >
> > > >> > That's a good point. Do we have existing examples of this or
> > > would we be
> > > >> > sailing in uncharted waters?
> > > >>
> > > >> It's been pretty common that we rejected/delayed applications
> for
> > > >> projects where we felt they needed more alignment. In such
> cases,
> > > the
> > > >> immediate result for those projects if they are out of the
> Neutron
> > > >> "stadium" is that they would fall from the list of official
> > > projects.
> > > >> Again, I'm fine with that outcome, but I want to set
> expectations
> > > >> clearly :)
> > > >
> > > > Understood. It sounds to me that the outcome would be that those
> > > > projects (that may end up being rejected) would show nowhere on [1],
> but
> > > > would still be hosted and can rely on the support and services of the
> > > > OpenStack community, right?
> > > >
> > > > [1] http://governance.openstack.org/reference/projects/
> > >
> > > Yes they would still be hosted on OpenStack development infrastructure.
> > > Contributions would no longer count toward ATC status, so people who
> > > only contribute to those projects would no longer be able to vote in
> the
> > > Technical Committee election. They would not have "official" design
> > > summit space either -- they can still camp in the hallway though :)
> > >
> >
> > Hi folks,
> >
> > For whom of you is interested in the conversation, the topic was brought
> > for discussion at the latest TC meeting [1]. Unfortunately I was unable
> to
> > join, however I would like to try and respond to some of the comments
> made
> > to clarify my position on the matter:
> >
> > > ttx: the neutron PTL say he can't vouch for anything in the neutron
> > "stadium"
> >
> > To be honest that's not entirely my position.
> >
> > The problem stems from the fact that, if I am asked what the stadium
> means,
> > as a PTL I can't give a straight answer; ttx put it relatively well (and
> I
> > quote him): by adding all those projects under your own project team, you
> > bypass the Technical Committee approval that they behave like OpenStack
> > projects and are produced by the OpenStack community. The Neutron team
> > basically vouches for all of them to be on par. As far as the Technical
> > Committee goes, they are all being produced by the same team we
> originally
> > blessed (the Neutron project team).
> >
> > The reality is: some of these projects are not produced by the same team,
> > they do not behave the same way, and they do not follow the same
> practices
> > and guidelines. For the stadium to make sense, in my humble opinion, a
>
> This is the thing that's key, for me. As Anita points out elsewhere in
> this thread, we want to structure our project teams so that decision
> making and responsibility are placed in the same set of hands. It sounds
> like the Stadium concept has made it easy to let those diverge.
>

Yes, only during this conversation (and whilst I have been thinking about
this) has this suddenly become clear.


>
> > definition of these practices should happen and enforcement should
> follow,
> > but who's got the time for policing and enforcing eviction, especially
> on a
> > large scale? So we either reduce the scale (which might not be feasible
> > because in OpenStack we're all about scaling and adding more and more and
> > more), or we address the problem more radically by evolving the
> > relationship from tight aggregation to loose association; this way who
> > needs to vouch for the Neutron relationship is not the Neutron PTL, but
> the
> > person sponsoring the project that wants to be associated to Neutron. On
> > the other end, the vouching may still be pursued, but for a much more
> > focused set of initiatives that are led by the same team.
> >
> > > russellb: I attempted to start breaking down the different types of
> repos
> > that are part of the stadium (consumer, api, implementation of
> technology,
> > plugins/drivers).
> >
> > The distinction between implementation of technology, plugins/drivers and
> > api is not justified IMO because from a neutron standpoint they all look
> > like the same: they leverage the pluggable extensions 

[openstack-dev] Fwd: [QA] Meeting Thursday December 10th at 9:00 UTC

2015-12-09 Thread GHANSHYAM MANN
Hi everyone,

Please be reminded that the weekly OpenStack QA team IRC meeting will be
Thursday, December 10th at 9:00 UTC in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Proposed_Agenda_for_December_10th_2015_.280900_UTC.29

 Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones the
next meeting will be at:

04:00 EST

18:00 JST

18:30 ACST

11:00 CEST

04:00 CDT

02:00 PDT


-- 
Regards
Ghanshyam Mann
+81-8084200646

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Sandro Mathys
On Thu, Dec 10, 2015 at 12:48 AM, Galo Navarro  wrote:
> Hi,
>
>> I think the goal of this split is well explained by Sandro in the first
>> mails of the chain:
>>
>> 1. Downstream packaging
>> 2. Tagging the delivery properly as a library
>> 3. Adding as a project on pypi
>
> Not really, because (1) and (2) are *a consequence* of the repo split. Not a
> cause. Please correct me if I'm reading wrong but he's saying:
>
> - I want tarballs
> - To produce tarballs, I want a separate repo, and separate repos have (1),
> (2) as requirements.

No, they're all goals, not consequences. Sorry, I didn't notice it
could be interpreted differently.

> So this is where I'm going: producing a tarball of pyc does *not* require a
> separate repo. If we don't need a new repo, we don't need to do all the
> things that a separate repo requires.
>
> Now:
>
>> OpenStack provide us a tarballs web page[1] for each branch of each
>> project
>> of the infrastructure.
>> Then, projects like Delorean can allow us to download theses tarball
>> master
>> branches, create the
>> packages and host them in a target repository for each one of the rpm-like
>> distributions[2]. I am pretty sure
>> that there is something similar for Ubuntu.
>
> This looks more accurate: you're actually not asking for a tarball. You're
> asking for being compatible with a system that produces tarballs off a repo.
> This is very different :)
>
> So questions: we have a standalone mirror of the repo, that could be used
> for this purpose. Say we move the mirror to OSt infra, would things work?

Good point. Actually, no. The mirror can't go into OSt infra as they
don't allow direct pushes to repos - they need to go through reviews.
Of course, we could still have a mirror on GitHub in midonet/ but that
might cause us a lot of trouble.

>> Everything is done in a very straightforward and standarized way, because
>> every repo has its own
>> deliverable. You can look how they are packaged and you won't see too many
>> differences between
>> them. Packaging a python-midonetclient it will be trivial if it is
>> separated
>> in a single repo. It will be
>
> But create a lot of other problems in development. With a very important
> difference: the pain created by the mirror solution is solved cheaply with
> software (e.g.: as you know, with a script). OTOH, the pain created by
> splitting the repo is paid in very costly human resources.

Adding the PMC as a submodule should reduce these costs significantly,
no? Of course, when working on the PMC, sometimes (or often, even)
there will be a need for two review requests instead of one, but the
content and discussion of those should be nearly identical, so the
actual overhead is fairly small. I figure I'm missing a few things here
- what other pains would this add?

>> complicated and we'll have to do tricky things if it is a directory inside
>> the midonet repo. And I am not
>> sure if Ubuntu and RDO community will allow us to have weird packaging
>> metadata repos.
>
> I do get this point and it's a major concern, IMO we should split to a
> different conversation as it's not related to where PYC lives, but to a more
> general question: do we really need a repo per package?

No, we don't. Not per package as you outlined them earlier: agent, cluster, etc.

Like Jaume, I know the RPM side much better than the DEB side. So for
RPM, one source package (srpm) can create several binary packages
(rpm). Therefore, one repo/tarball (there's an expected 1:1 relation
between these two) can be used for several packages.

But there are different policies for services and clients, e.g. the
services are only packaged for servers but the clients are packaged for both
servers and workstations. Therefore, they are kept in separate srpms.

Additionally, it's much easier to maintain java and python code in
separate srpms/rpms - mostly due to (build) dependencies.

> Like Guillermo and myself said before, the midonet repo generate 4 packages,
> and this will grow. If having a package per repo is really a strong
> requirement, there is *a lot* of work ahead, so we need to start talking
> about this now. But like I said, it's orthogonal to the PYC points above.

It really shouldn't be necessary to split up agent, cluster, etc.
Unless maybe if they are _very_ loosely coupled and there's a case
where it makes _a lot_ of sense to operate different versions of each
component together over an extended period of time (e.g. not just to
upgrade one at a time), I guess. Added some emphasis to that sentence,
because just the possibility won't justify this - there must be a real
use case.

-- Sandro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic]Can't login machine by using ironic

2015-12-09 Thread Zhi Chang
hi, all
I created a keypair in Nova, so Nova generated a key named "ironic" and I 
stored the private key in a file named "key". Then I booted a VM using the command "nova 
boot --image 9eb9d034-a33b-421d-8c9d-a960597e4101 --flavor 
7095f52d-d438-44c3-bdf7-ca5cca3012bd --nic 
net-id=cf9e60fa-1e62-4af7-af58-b9407c209824 --key-name ironic test3".
I want to log in to the machine once it boots successfully (pinging the machine is 
okay). I use the command "sudo ip netns exec 
qrouter-94b916a9-054a-43b6-91ed-4f86b7eeac64 ssh -i key root@10.0.0.204". Why 
does the console ask me to input a password? The console outputs 
"root@10.0.0.204's password:".
Could someone help me?


Thx
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Takashi Yamamoto
On Thu, Dec 10, 2015 at 12:48 AM, Galo Navarro  wrote:
> Hi,
>
>> I think the goal of this split is well explained by Sandro in the first
>> mails of the chain:
>>
>> 1. Downstream packaging
>> 2. Tagging the delivery properly as a library
>> 3. Adding as a project on pypi
>
> Not really, because (1) and (2) are *a consequence* of the repo split. Not a
> cause. Please correct me if I'm reading wrong but he's saying:
>
> - I want tarballs
> - To produce tarballs, I want a separate repo, and separate repos have (1),
> (2) as requirements.
>
> So this is where I'm going: producing a tarball of pyc does *not* require a
> separate repo. If we don't need a new repo, we don't need to do all the
> things that a separate repo requires.

In OpenStack, client libraries usually have a separate release schedule
from the servers; see "Final release for client libraries" in
http://docs.openstack.org/releases/schedules/mitaka.html .
In case we want to align with it, a single all-in-one repository doesn't sound
viable to me.

>
> Now:
>
>> OpenStack provide us a tarballs web page[1] for each branch of each
>> project
>> of the infrastructure.
>> Then, projects like Delorean can allow us to download theses tarball
>> master
>> branches, create the
>> packages and host them in a target repository for each one of the rpm-like
>> distributions[2]. I am pretty sure
>> that there is something similar for Ubuntu.
>
> This looks more accurate: you're actually not asking for a tarball. You're
> asking for being compatible with a system that produces tarballs off a repo.
> This is very different :)
>
> So questions: we have a standalone mirror of the repo, that could be used
> for this purpose. Say we move the mirror to OSt infra, would things work?
>
>> Everything is done in a very straightforward and standarized way, because
>> every repo has its own
>> deliverable. You can look how they are packaged and you won't see too many
>> differences between
>> them. Packaging a python-midonetclient it will be trivial if it is
>> separated
>> in a single repo. It will be
>
> But create a lot of other problems in development. With a very important
> difference: the pain created by the mirror solution is solved cheaply with
> software (e.g.: as you know, with a script). OTOH, the pain created by
> splitting the repo is paid in very costly human resources.
>
>> complicated and we'll have to do tricky things if it is a directory inside
>> the midonet repo. And I am not
>> sure if Ubuntu and RDO community will allow us to have weird packaging
>> metadata repos.
>
> I do get this point and it's a major concern, IMO we should split to a
> different conversation as it's not related to where PYC lives, but to a more
> general question: do we really need a repo per package?
>
> Like Guillermo and myself said before, the midonet repo generate 4 packages,
> and this will grow. If having a package per repo is really a strong
> requirement, there is *a lot* of work ahead, so we need to start talking
> about this now. But like I said, it's orthogonal to the PYC points above.
>
> g
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Davanum Srinivas
Rob,

This is a shared responsibility. I am stating that projects using oslo
are failing to take up their share of things to be done. No, we should
not increase more responsibility on Oslo team.

-- Dims

On Thu, Dec 10, 2015 at 8:25 AM, Robert Collins
 wrote:
> On 10 December 2015 at 18:19, Davanum Srinivas  wrote:
>> So clearly the deprecation process is not working as no one looks at
>> the log messages and fix their own projects. Sigh!
>
> I think we need some way to ensure that a deprecation has been done:
> perhaps a job that we can run on each oslo project which fails if
> deprecated things are in use; we can have that on non-voting, and then
> use it to detect the release in which we can start the removal clock
> for a deprecated thing.
>
> -Rob
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Takashi Yamamoto
On Tue, Dec 8, 2015 at 9:54 PM, Guillermo Ontañón
 wrote:
> Hi Sandro,
>
> On Tue, Dec 8, 2015 at 7:31 AM, Sandro Mathys  wrote:
>> Hi,
>>
>> As Takashi Yamamoto raised in another thread [0], python-midonetclient
>> should be split out into its own repo
>
>
> I'm strongly against this one. Stuff in the midonet/ repo is
> developed in sync with python-midonetclient (and even more so now
> that it's an internal api), and even depends on it; all of the tests/

do you mean that the rest api used between python-midonetclient and
midonet-cluster is an internal api?

> directory in midonet/ depends on python-midonetclient. A split would
> make a lot of workflows costlier / inconvenient / impossible (for
> instance, it becomes impossible to gate properly when a patch
> introduces changes to the rest api).
>
>>  There's two major reasons for
>> this:
>>
>> 1) (Downstream) packaging: midonet and python-midonetclient are two
>> distinct packages, and therefore should have distinct upstream
>> tarballs - which are compiled on a per repo basis.
>
> It is fairly common for a single source tarball to produce multiple
> binary packages. In fact, midonet produces 4 binary packages, are you
> proposing that we split out midonet-tools and midonet-cluster too?
>
>
>> 2) When adding repositories to OpenStack, they need to be tagged.
>> There's a bunch of tags, and one category is the "type": library [1]
>> or service [2]. Now midonet is a service, but python-midonetclient is
>> a library so they need to be separate repositories.
>
> I find it hard to buy this argument, the midonet repository includes
> several other internal libraries and we are not splitting those out.
>
> Regards,
> G
>
>>
>> Thoughts?
>>
>> Cheers,
>> Sandro
>>
>> [0] 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081216.html
>> [1] http://governance.openstack.org/reference/tags/type_library.html
>> [2] http://governance.openstack.org/reference/tags/type_service.html
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread ChangBo Guo
Maybe, before we remove any deprecated things from oslo, we can file a bug
against each project that still uses the deprecated thing.
We can use code search to find them on the master branches, but there is a
potential problem if a project's stable branch didn't cap the version of
the oslo library.
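
As a rough sketch of that code-search idea (purely illustrative -- the URL and
query parameters assume a Hound-style /api/v1/search endpoint such as the one
behind codesearch.openstack.org, and are not a documented interface), a small
script could list the repositories whose master branches still call a
deprecated helper, so a bug can be filed per project:

import requests

CODESEARCH = "http://codesearch.openstack.org/api/v1/search"  # assumed deployment


def projects_using(symbol):
    resp = requests.get(
        CODESEARCH,
        params={"q": symbol, "repos": "*", "files": r"\.py$", "i": "nope"},
        timeout=30,
    )
    resp.raise_for_status()
    # Hound keys its results by repository name.
    return sorted(resp.json().get("Results", {}))


if __name__ == "__main__":
    for repo in projects_using("timeutils.isotime"):
        print(repo)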

2015-12-10 13:41 GMT+08:00 Davanum Srinivas :

> Rob,
>
> This is a shared responsibility. I am stating that projects using oslo
> are failing to take up their share of things to be done. No, we should
> not increase more responsibility on Oslo team.
>
> -- Dims
>
> On Thu, Dec 10, 2015 at 8:25 AM, Robert Collins
>  wrote:
> > On 10 December 2015 at 18:19, Davanum Srinivas 
> wrote:
> >> So clearly the deprecation process is not working as no one looks at
> >> the log messages and fix their own projects. Sigh!
> >
> > I think we need some way to ensure that a deprecation has been done:
> > perhaps a job that we can run on each oslo project which fails if
> > deprecated things are in use; we can have that on non-voting, and then
> > use it to detect the release in which we can start the removal clock
> > for a deprecated thing.
> >
> > -Rob
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
ChangBo Guo(gcb)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][glance][all] Removing deprecated functions from oslo_utils.timeutils

2015-12-09 Thread Robert Collins
I'm not suggesting we increase Oslo's responsibilities, but that we need a
non-fire-drill way to signal that the work hasn't been done.
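
As a strawman for what such a signal could look like -- a minimal sketch only,
not an existing infra job, and the set of deprecated names below is just an
example -- a small check script could walk a project tree and exit non-zero
whenever a module still calls a deprecated timeutils helper:

#!/usr/bin/env python
# Sketch: fail when a tree still calls timeutils helpers that oslo.utils has
# deprecated. The DEPRECATED set is illustrative, not authoritative.
import ast
import os
import sys

DEPRECATED = {"isotime", "strtime", "iso8601_from_timestamp"}


def deprecated_calls(path):
    try:
        with open(path) as source:
            tree = ast.parse(source.read(), filename=path)
    except SyntaxError:
        return
    for node in ast.walk(tree):
        if (isinstance(node, ast.Attribute)
                and node.attr in DEPRECATED
                and isinstance(node.value, ast.Name)
                and node.value.id == "timeutils"):
            yield node.lineno, node.attr


def main(root):
    failed = False
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            for lineno, attr in deprecated_calls(path):
                print("%s:%d uses deprecated timeutils.%s" % (path, lineno, attr))
                failed = True
    return 1 if failed else 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "."))

Run non-voting at first, such a job would at least make the outstanding work
visible per project before the removal clock starts.
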
On 10 Dec 2015 6:42 PM, "Davanum Srinivas"  wrote:

> Rob,
>
> This is a shared responsibility. I am stating that projects using oslo
> are failing to take up their share of things to be done. No, we should
> not increase more responsibility on Oslo team.
>
> -- Dims
>
> On Thu, Dec 10, 2015 at 8:25 AM, Robert Collins
>  wrote:
> > On 10 December 2015 at 18:19, Davanum Srinivas 
> wrote:
> >> So clearly the deprecation process is not working as no one looks at
> >> the log messages and fix their own projects. Sigh!
> >
> > I think we need some way to ensure that a deprecation has been done:
> > perhaps a job that we can run on each oslo project which fails if
> > deprecated things are in use; we can have that on non-voting, and then
> > use it to detect the release in which we can start the removal clock
> > for a deprecated thing.
> >
> > -Rob
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-glanceclient] Return request-id to caller

2015-12-09 Thread Kekane, Abhishek
t': u'2015-11-18T13:04:18Z', 
u'disk_format': u'aki', u'protected': False, u'schema': u'/v2/schemas/image'}
>>> get.request_ids
['req-68926f34-4434-45dc-822c-c4eb94506c63']
>>>
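
For reference, a minimal sketch (class and attribute names here are
assumptions for illustration, not the actual glanceclient implementation) of
how a client can hand the request id back on the objects it returns: wrap the
resource and attach the ids taken from the x-openstack-request-id response
header.

class RequestIdProxy(object):
    # Carries request_ids alongside the wrapped resource; all other attribute
    # access falls through to the real object.
    def __init__(self, wrapped, request_ids):
        self._wrapped = wrapped
        self.request_ids = list(request_ids)

    def __getattr__(self, name):
        return getattr(self._wrapped, name)


def wrap_response(resource, response_headers):
    req_id = response_headers.get('x-openstack-request-id')
    return RequestIdProxy(resource, [req_id] if req_id else [])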

Please suggest.

Thank You,

Abhishek

>> Cheers,
>> Flavio
>>
>>>
>>>
>>> Please let us know which approach is better or any suggestions for the same.
>>>
>>>
>>>
>>> [1] 
>>> https://github.com/openstack/python-glanceclient/blob/master/glanceclient/
>>> v2/images.py#L179
>>>
>>> [2] https://github.com/openstack/glance/blob/master/glance/api/v2/images.py#
>>> L944
>>>
>>>
>>> __
>>> Disclaimer: This email and any attachments are sent in strictest confidence
>>> for the sole use of the addressee and may contain legally privileged,
>>> confidential, and proprietary data. If you are not the intended recipient,
>>> please advise the sender by replying promptly to this email and then delete
>>> and destroy this email and any attachments without any further use, copying
>>> or forwarding.
>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> --
>
> Message: 15
> Date: Wed, 9 Dec 2015 21:59:50 +0800
> From: "=?utf-8?B?WmhpIENoYW5n?=" <chang...@unitedstack.com>
> To: "=?utf-8?B?b3BlbnN0YWNrLWRldg==?="
>   <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [ironic]Boot physical machine fails, says
>   "PXE-E11 ARP Timeout"
> Message-ID: <tencent_50bbe4336f52f9e54b571...@qq.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
> I am treating a normal physical machine as a bare metal machine. The physical
> machine booted when I ran "nova boot xxx" on the command line, but an
> error happened. I uploaded a video to YouTube, link:
> https://www.youtube.com/watch?v=XZQCNsrkyMI=youtu.be. Could someone
> give me some advice?
>
>
> Thx
> Zhi Chang
>
> --
>
> Message: 16
> Date: Wed, 09 Dec 2015 09:02:38 -0500
> From: Doug Hellmann <d...@doughellmann.com>
> To: openstack-dev <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [stable] Stable team PTL nominations are
>   open
> Message-ID: <1449669713-sup-8899@lrrr.local>
> Content-Type: text/plain; charset=UTF-8
>
> Excerpts from Thierry Carrez's message of 2015-12-09 09:57:24 +0100:
>> Thierry Carrez wrote:
>>> Thierry Carrez wrote:
>>>> The nomination deadline is passed, we have two candidates!
>>>>
>>>> I'll be setting up the election shortly (with Jeremy's help to generate
>>>> election rolls).
>>>
>>> OK, the election just started. Recent contributors to a stable branch
>>> (over the past year) should have received an email with a link to vote.
>>> If you haven't and think you should have, please contact me privately.
>>>
>>> The poll closes on Tuesday, December 8th at 23:59 UTC.
>>> Happy voting!
>>
>> Election is over[1], let me congratulate Matt Riedemann on his election!
>> Thanks to everyone who participated in the vote.
>>
>> Now I'll submit the request for spinning off as a separate project team
>> to the governance ASAP, and we should be up and running very soon.
>>
>> Cheers,
>>
>> [1] http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_2f5fd6c3837eae2a
>>
>
> Congratulations, Matt!
>
> Doug
>
>
>
> --
>
> Message: 17
> Date: Wed, 9 Dec 2015 09:32:53 -0430
> From: Flavio Percoco <fla...@redhat.com>
> To: Jordan Pittier <jordan.pitt...@scality.com>
> Cc: "OpenStack Development Mailing List \(not for usage questions\)"
>   <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [glance][tempest][defcore] Process to
>   improve test coverage in tempest
> Message-ID: <20151209140253.gb10...@redhat.com>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"

Re: [openstack-dev] Mitaka Infra Sprint

2015-12-09 Thread Spencer Krum
Thanks, Josh.

--
  Spencer Krum
  n...@spencerkrum.com
 
 
 
On Wed, Dec 9, 2015, at 09:17 PM, Joshua Hesketh wrote:
> Hi all,
>
> As discussed during the infra-meeting on Tuesday[0], the infra team will be
> holding a mid-cycle sprint to focus on infra-cloud[1].
>
> The sprint is an opportunity to get in a room and really work through as much
> code and reviews as we can related to infra-cloud while having each other
> nearby to discuss blockers, technical challenges and enjoy company.
>
> Information + RSVP: https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
> Dates: Mon. February 22nd at 9:00am to Thursday, February 25th
> Location: HPE Fort Collins Colorado Office
> Who: Anybody is welcome. Please put your name on the wiki page if you are
> interested in attending.
>
> If you have any questions please don't hesitate to ask.
>
> Cheers,
> Josh + Infra team
>
> [0] http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-08-19.00.html
> [1] https://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [midonet] Split up python-midonetclient

2015-12-09 Thread Galo Navarro
On 10 December 2015 at 04:35, Sandro Mathys  wrote:

> On Thu, Dec 10, 2015 at 12:48 AM, Galo Navarro  wrote:
> > Hi,
> >
> >> I think the goal of this split is well explained by Sandro in the first
> >> mails of the chain:
> >>
> >> 1. Downstream packaging
> >> 2. Tagging the delivery properly as a library
> >> 3. Adding as a project on pypi
> >
> > Not really, because (1) and (2) are *a consequence* of the repo split.
> Not a
> > cause. Please correct me if I'm reading wrong but he's saying:
> >
> > - I want tarballs
> > - To produce tarballs, I want a separate repo, and separate repos have
> (1),
> > (2) as requirements.
>
> No, they're all goals, no consequences. Sorry, I didn't notice it
> could be interpreted differently
>

I beg to disagree. The location of code is not a goal in itself. Producing
artifacts such as tarballs is.



> > This looks more accurate: you're actually not asking for a tarball.
> You're
> > asking for being compatible with a system that produces tarballs off a
> repo.
> > This is very different :)
> >
> > So questions: we have a standalone mirror of the repo, that could be used
> > for this purpose. Say we move the mirror to OSt infra, would things work?
>
> Good point. Actually, no. The mirror can't go into OSt infra as they
> don't allow direct pushes to repos - they need to go through reviews.
> Of course, we could still have a mirror on GitHub in midonet/ but that
> might cause us a lot of trouble.
>

I don't follow. Where a repo is hosted is orthogonal to how commits are
added. If commits to the mirror must go via gerrit, this is perfectly
doable.


> > But create a lot of other problems in development. With a very important
> > difference: the pain created by the mirror solution is solved cheaply
> with
> > software (e.g.: as you know, with a script). OTOH, the pain created by
> > splitting the repo is paid in very costly human resources.
>
> Adding the PMC as a submodule should reduce these costs significantly,
> no? Of course, when working on the PMC, sometimes (or often, even)
> there will be the need for two review requests instead of one, but the
> content and discussion of those should be nearly identical, so the
> actual overhead is fairly small. I figure I'm missing a few things here
> - what other pains would this add?
>

No, it doesn't make things easier. We already tried.

Guillermo explained a few reasons already in his email.


> > I do get this point and it's a major concern, IMO we should split to a
> > different conversation as it's not related to where PYC lives, but to a
> more
> > general question: do we really need a repo per package?
>
> No, we don't. Not per package as you outlined them earlier: agent,
> cluster, etc.
>
> Like Jaume, I know the RPM side much better than the DEB side. So for
> RPM, one source package (srpm) can create several binary packages
> (rpm). Therefore, one repo/tarball (there's an expected 1:1 relation
> between these two) can be used for several packages.
>
> But there's different policies for services and clients, e.g. the
> services are only packaged for servers but the clients both for
> servers and workstations. Therefore, they are kept in separate srpms.
>
> Additionally, it's much easier to maintain java and python code in
> separate srpms/rpms - mostly due to (build) dependencies.
>

What's your rationale for saying this? Could you point at specific
maintenance points that are made easier by having different languages in
separate repos?

g
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Chris Friesen

On 12/09/2015 10:27 AM, John Griffith wrote:



On Tue, Dec 8, 2015 at 9:10 PM, Li, Xiaoyan wrote:

Hi all,

Currently, when deleting a volume, Cinder checks whether there are snapshots
created from it; if so, deletion is prohibited. But it allows extending the
volume without checking whether there are snapshots created from it.

Correct.


The two behaviors in Cinder are not consistent from my viewpoint.

Well, your snapshot was taken at a point in time, and if you do a create from
snapshot the whole point is that you want what you HAD when the snapshot command
was issued and NOT what happened afterwards. So in my opinion this is not
inconsistent at all.


If we look at it a different way...suppose that the snapshot is linked in a 
copy-on-write manner with the original volume.  If someone deletes the original 
volume then the snapshot is in trouble.  However, if someone modifies the 
original volume then a new chunk of backing store is allocated for the original 
volume and the snapshot still references the original contents.


If we did allow deletion of the volume we'd have to either keep the volume 
backing store around as long as any snapshots are around, or else flatten any 
snapshots so they're no longer copy-on-write.
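
To make that dependency concrete, here is a toy model of the copy-on-write
relationship described above -- purely illustrative, it does not correspond to
any particular backend:

class Volume(object):
    def __init__(self, blocks):
        self.blocks = list(blocks)   # block index -> data chunk
        self.snapshots = []

    def snapshot(self):
        snap = Snapshot(self)
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # Copy-on-write: before overwriting, give each snapshot its own copy
        # of the original contents of this block.
        for snap in self.snapshots:
            snap.preserve(index, self.blocks[index])
        self.blocks[index] = data


class Snapshot(object):
    def __init__(self, volume):
        self._volume = volume
        self._frozen = {}            # blocks copied out when the volume changes

    def preserve(self, index, data):
        self._frozen.setdefault(index, data)

    def read(self, index):
        # Unmodified blocks are still read from the live volume -- which is
        # exactly why deleting the volume while snapshots exist is a problem.
        if index in self._frozen:
            return self._frozen[index]
        return self._volume.blocks[index]


vol = Volume(["a", "b"])
snap = vol.snapshot()
vol.write(0, "A")
assert snap.read(0) == "a"   # the snapshot keeps the pre-write contents
assert snap.read(1) == "b"   # ...but unmodified blocks still live on the volume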


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Dependencies of snapshots on volumes

2015-12-09 Thread Jordan Pittier
Hi,
FWIW, I completely agree with what John said. All of it.

Please don't do that.

Jordan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


<    1   2