Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Jay Lau
After more thought, I agree with Hongbin that instance_type might lead
customers to confuse it with flavor; what about using server_type?

Nova already has the concept of a server group, and the "servers" in such
a group can be a VM, a PM (physical machine), or a container.
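Whatever the field ends up being called, the selection logic under discussion is small; here is a sketch of a server_type-keyed Heat template lookup. The field name and template paths are illustrative only, not Magnum's actual code.

```python
# Hypothetical sketch of Heat template selection keyed on the proposed
# server_type field. Template paths are invented for illustration.

TEMPLATES = {
    "vm": "templates/kubernetes/kubecluster.yaml",
    "baremetal": "templates/kubernetes/kubecluster-ironic.yaml",
}

def select_template(server_type):
    """Map a baymodel server_type to a Heat template path."""
    try:
        return TEMPLATES[server_type]
    except KeyError:
        raise ValueError("unknown server_type: %r" % server_type)
```

Note that a flavor such as m1.small could not serve as the key here, since several flavors all map to the same 'vm' environment; that is the objection raised in the quoted discussion.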

Thanks!

2015-07-16 11:58 GMT+08:00 Kai Qiang Wu :

> Hi Hong Bin,
>
> Thanks for your reply.
>
>
> I think it is better to settle the 'platform' vs. instance_type vs.
> others question first.
> Attach:  initial patch (about the discussion):
> https://review.openstack.org/#/c/200401/
> 
>
> My other patches all depend on the patch above; if it cannot reach a
> meaningful agreement, they will be blocked.
>
>
>
> Thanks
>
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强  Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
> 100193
>
> 
> Follow your heart. You are miracle!
>
>
> From: Hongbin Lu 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 07/16/2015 11:47 AM
>
> Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
> VS others as a type?
> --
>
>
>
> Kai,
>
> Sorry for the confusion. To clarify, I was thinking how to name the field
> you proposed in baymodel [1]. I prefer to drop it and use the existing
> field ‘flavor’ to map the Heat template.
>
> [1] https://review.openstack.org/#/c/198984/6
> 
>
> *From:* Kai Qiang Wu [mailto:wk...@cn.ibm.com ]
> * Sent:* July-15-15 10:36 PM
> * To:* OpenStack Development Mailing List (not for usage questions)
> * Subject:* Re: [openstack-dev] [magnum] Magnum template manage use
> platform VS others as a type?
>
>
> Hi HongBin,
>
> I think flavor introduces more confusion than nova_instance_type or
> instance_type, as a flavor has no binding to 'vm' or 'baremetal'.
>
> Let me summarize the initial question:
> We now have two kinds of templates for Kubernetes. (Heat templates are
> not as flexible as a programming language, with no if/else etc., so
> separate templates are easier to maintain.) One kind boots VMs, the
> other boots bare metal; 'VM' or 'Baremetal' here is used only for Heat
> template selection.
>
>
> 1> If we used flavor, that is a Nova-specific concept. Take two as an
> example: m1.small and m1.middle both belong to the 'VM' environment
> (m1.small < 'VM', m1.middle < 'VM'), so m1.small cannot identify a
> template on its own. That is why I think flavor is not a good fit.
>
>
> 2> @Adrian, we already have a --flavor-id field in the baymodel; it is
> picked up by the Heat templates, which boot instances with that flavor.
>
>
> 3> Finally, I think instance_type is better: it can serve as the Heat
> template identification parameter.
>
> instance_type = 'vm' means such templates fit a normal 'VM' Heat stack
> deploy; instance_type = 'baremetal' means they fit an Ironic bare-metal
> Heat stack deploy.
>
>
>
>
>
> Thanks!
>
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强  Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: *wk...@cn.ibm.com* 
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
>No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
> 100193
>
> 
> Follow your heart. You are miracle!
>
>
> From: Hongbin Lu <*hongbin...@huawei.com* >
> To: "OpenStack Development Mailing List (not for usage questions)" <
> *openstack-dev@lists.openstack.org* >
> Date: 07/16/2015 04:44 AM
> Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
> VS others as a type?
> --
>
>
>
>
> +1 for the idea of using Nova flavor directly.
>
> Why we introduced the “platform” field to indicate “vm” or “baremetal” is
> that Magnum needs to map a bay t

Re: [openstack-dev] OpenStack Mitaka (三鷹) - Our next release name has been selected

2015-07-15 Thread Michael Micucci

A little off-topic, so please forgive me. :)

If you go to the Ghibli Museum (and yes, it IS a great experience), be 
sure to get tickets in advance.  You have to buy them at a Lawson store 
before the actual ticket date (no same-day sales).  Just a heads up for 
anyone planning to go. ;)


As I said, I lived in that area until recently, so if anyone wants some 
tips on places to go, I might be able to suggest a couple of sightseeing 
spots. ;)


Thanks!

Michael Micucci

On 07/15/2015 08:08 PM, Jaesuk Ahn wrote:

Thanks to everyone in the community for the great collaboration. :)

It seems Mitaka hosts the Ghibli Museum (http://www.ghibli-museum.jp/en/),
home of great animations everyone loves.

We should probably plan an official trip there during Tokyo Summit. :)




On Wed, Jul 15, 2015 at 4:00 AM Ian Cordasco
<ian.corda...@rackspace.com> wrote:


On 7/14/15, 13:47, "Monty Taylor" <mord...@inaugust.com> wrote:

>Hi everybody!
>
>Ok. There is nothing more actually useful I can say that isn't in the
>subject line. As I mentioned previously, the preliminary results from
>our name election are here:
>
>http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4983776e190c8dbc
>
>As you are all probably aware by now, as a follow-on step, the OpenStack
>Foundation staff assessed the names chosen for legal risk in the order
>we ranked them. The first two had significant identified problems so we
>skipped them. The third had no legal problems, but after announcing it
>as the choice, it came to light that there were significant social
>issues surrounding the name.
>
>The fourth on the list, Mitaka (三鷹), is clear.
>
>So please join me in welcoming the latest name to our family ... and if
>you, like me, are not a native Japanese speaker, in learning two (more)
>new characters.
>
>I'd also like to thank everyone in our community for understanding. As
>we try our best to be an inclusive worldwide community, the
>opportunities for unwitting missteps are vast and ultimately probably
>unavoidable. Being able to recognize and deal with them and learn from
>them as they occur makes me proud of who we are and what we've become.

I agree. It's really encouraging to see a community as large as
OpenStack embrace inclusivity and empathy around social issues.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







[openstack-dev] [Neutron]Request for help to review a patch

2015-07-15 Thread Damon Wang
Hi,

I know that requesting reviews on the mailing list is frowned upon, but
the review process for this patch seems frozen despite having gained two
+1s :-)

The review url is: https://review.openstack.org/#/c/172875/

Thanks a lot,
Wei wang


Re: [openstack-dev] [all][ptl][release] New library release request process

2015-07-15 Thread Andreas Jaeger

Doug,

I'm missing openstackdocstheme and openstack-doc-tools in your import. 
How do you want to handle these?


Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [nova] schedule instance based on CPU frequency ?

2015-07-15 Thread Chris Friesen

On 07/15/2015 04:57 PM, Dugger, Donald D wrote:

In re: Static CPU frequency.  For modern Intel CPUs this really isn't true.
Turbo Boost is a feature that allows certain CPUs in certain conditions to
actually run at a higher clock rate than what is advertised at power on (the
havoc this causes code that depends upon timing based upon CPU spin loops is
left as an exercise for the reader :-)


Reasonably recent machines have constant rates for the timestamp counter even in 
the face of CPU frequency variation.  Nobody should be using bare spin loops.



Having said that, I think CPU frequency is a really bad metric to be making
any kind of scheduling decisions on.  A Core I7 running at 2 GHz is going to
potentially run code faster than a Core I3 running at 2.2 GHz (issues of
micro-architecture and cache sizes impact performance much more than minor
variations in clock speed).  If you really want to schedule based upon CPU
capability you need to define an abstract metric, identify how many of these
abstract units apply to the specific compute nodes in your cloud and do
scheduling based upon that.  There is actually work going on to do just
this; check out the BP:

https://blueprints.launchpad.net/nova/+spec/normalized-compute-units


I agree with the general concept, but I'm a bit concerned that the
"normalized" units will only be accurate for the specific workloads used
to derive them.  Other workloads may scale differently, especially if
different CPU features are exposed (potentially allowing much more
efficient low-level instructions).
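A toy illustration of scheduling on an abstract unit rather than raw frequency. The per-host "ncu" numbers are invented benchmark results, and, per the concern above, a different workload could rank the same hosts differently.

```python
# Invented capability data: a slower-clocked i7 outscoring a
# faster-clocked i3 on an abstract "normalized compute unit" benchmark.
hosts = {
    "node1": {"cpu_mhz": 2000, "ncu": 180},  # e.g. Core i7 @ 2.0 GHz
    "node2": {"cpu_mhz": 2200, "ncu": 120},  # e.g. Core i3 @ 2.2 GHz
}

def pick_hosts(required_ncu):
    """Return hosts meeting the abstract-unit requirement, best first.

    Raw clock speed is deliberately ignored: node2 has the higher MHz
    but the lower capability score.
    """
    fit = [name for name, caps in hosts.items()
           if caps["ncu"] >= required_ncu]
    return sorted(fit, key=lambda name: hosts[name]["ncu"], reverse=True)
```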


Chris



Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-15 Thread Kevin Benton
I'm guessing Salvatore might just be suggesting that we restrict users
from populating values that have special meaning (e.g. l3 agent router
interface ports). I don't think at this point we could constrain the
owner field to essentially an enum.
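A sketch of the restriction being floated: reject user-supplied values under reserved prefixes unless the deployment explicitly allows them, so an external manager such as oVirt can still opt in. The prefix list and hook are illustrative, not Neutron's actual validation code.

```python
# Illustrative only: prefixes with special meaning in Neutron/Nova code
# paths. A real implementation would live in the API-layer validators.
RESERVED_PREFIXES = ("network:", "compute:", "neutron:")

def validate_device_owner(value, extra_allowed=()):
    """Reject reserved device_owner values unless explicitly allowed.

    extra_allowed is a hypothetical per-deployment whitelist, letting
    managers other than Nova keep setting their own values.
    """
    reserved = any(value.startswith(p) for p in RESERVED_PREFIXES)
    if reserved and value not in extra_allowed:
        raise ValueError("device_owner %r is reserved" % value)
    return value
```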

On Wed, Jul 15, 2015 at 10:22 PM, Mike Kolesnik  wrote:

>
> --
>
> Yes please.
>
> This would be a good starting point.
> I also think that the ability of editing it, as well as the value it could
> be set to, should be constrained.
>
> FYI the oVirt project uses this field to identify ports it creates and
> manages.
> So if you're going to constrain it to something, it should probably be
> configurable so that managers other than Nova can continue to use Neutron.
>
>
> As you have surely noticed, there are several code paths which rely on
> an appropriate value being set in this attribute. This means a user can
> potentially trigger malfunctions by sending PUT requests that edit it.
>
> Summarizing, I think that documenting its usage is a good starting
> point, but I believe we should also address the way this attribute is
> exposed at the API layer.
>
> Salvatore
>
>
>
> On 13 July 2015 at 11:52, Wang, Yalei  wrote:
>
>> Hi all,
>> The device:owner field of the port is defined as a 255-byte string and
>> is widely used now, indicating the use of the port.
>> It seems we can fill it in freely, and a user can also update/set it
>> from the command line (port-update $PORT_ID --device_owner); I can't
>> find a guideline for its use.
>>
>> What is its function? It indicates the use of the port, and Horizon
>> also seems to use it to show the topology.
>> Since Nova really needs it to stay editable, should we at least
>> document all of the possible values in some guide to make this clear?
>> If yes, I can do it.
>>
>> I gathered these values from the code (maybe not complete, please
>> point out any I missed):
>>
>> From constants.py,
>> DEVICE_OWNER_ROUTER_HA_INTF = "network:router_ha_interface"
>> DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
>> DEVICE_OWNER_ROUTER_GW = "network:router_gateway"
>> DEVICE_OWNER_FLOATINGIP = "network:floatingip"
>> DEVICE_OWNER_DHCP = "network:dhcp"
>> DEVICE_OWNER_DVR_INTERFACE = "network:router_interface_distributed"
>> DEVICE_OWNER_AGENT_GW = "network:floatingip_agent_gateway"
>> DEVICE_OWNER_ROUTER_SNAT = "network:router_centralized_snat"
>> DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
>>
>> And from debug_agent.py
>> DEVICE_OWNER_NETWORK_PROBE = 'network:probe'
>> DEVICE_OWNER_COMPUTE_PROBE = 'compute:probe'
>>
>> And setting from nova/network/neutronv2/api.py,
>> 'compute:%s' % instance.availability_zone
>>
>>
>> Thanks all!
>> /Yalei
>>
>>
>>
>>
>
>
>
>
>
>


-- 
Kevin Benton


Re: [openstack-dev] [neutron][security-group] rules for filter mac-addresses

2015-07-15 Thread Kevin Benton
Thanks, but we don't use the blueprint process for feature requests in
Neutron. Just file a bug in Launchpad and give it an 'rfe' tag, which
stands for "request for enhancement".
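Until such an RFE lands, MAC-level filtering has to happen outside the security-groups API; one common approach is an ebtables whitelist on the compute host. The sketch below only renders the command strings (the chain name and tap device are hypothetical) and neither talks to Neutron nor executes anything.

```python
def mac_whitelist_rules(tap_dev, allowed_macs, chain="vm-mac-filter"):
    """Render ebtables commands that drop frames forwarded toward
    tap_dev unless their source MAC is whitelisted. Illustrative only;
    a real deployment would also need cleanup and idempotency handling.
    """
    rules = [
        "ebtables -N %s" % chain,
        "ebtables -A FORWARD -o %s -j %s" % (tap_dev, chain),
    ]
    rules += ["ebtables -A %s -s %s -j ACCEPT" % (chain, mac)
              for mac in allowed_macs]
    rules.append("ebtables -A %s -j DROP" % chain)  # default deny
    return rules
```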

On Tue, Jul 14, 2015 at 5:29 AM, yan_xing...@163.com 
wrote:

> Thank you, Kevin.
> I searched for a blueprint about this on launchpad.net, found nothing,
> and registered one at:
> https://blueprints.launchpad.net/neutron/+spec/security-group-mac-rule
>
>
> --
> Yan Xing'an
>
>
> *From:* Kevin Benton 
> *Date:* 2015-07-14 18:31
> *To:* OpenStack Development Mailing List (not for usage questions)
> 
> *Subject:* Re: [openstack-dev] [neutron][security-group] rules for filter
> mac-addresses
> Unfortunately the security groups API does not have mac-level rules right
> now.
>
> On Tue, Jul 14, 2015 at 2:17 AM, yan_xing...@163.com 
> wrote:
>
>> Hi, all:
>>
>> Here is a requirement: deny/permit incoming packets on a VM by MAC
>> address.
>> I have tried to find a better method than modifying Neutron code, but
>> failed.
>> Any suggestion is appreciated. Thank you.
>>
>> Yan.
>>
>> --
>> yan_xing...@163.com
>>
>>
>>
>
>
> --
> Kevin Benton
>
>
>
>


-- 
Kevin Benton


Re: [openstack-dev] [neutron] questions on neutron-db-migrations

2015-07-15 Thread Madhusudhan Kandadai
Thanks Ihar. I went through your email and updated HEADS to resolve the
merge conflicts for my patch.



On Wed, Jul 15, 2015 at 1:03 PM, Ihar Hrachyshka 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> On 07/15/2015 07:17 PM, Madhusudhan Kandadai wrote:
> > Hello,
> >
> > I have noticed that neutron project got rid of
> > neutron/db/migration/alembic_migrations/versions/HEAD file and
> > renamed it to
> > neutron/db/migration/alembic_migrations/versions/HEADS
> >
> > May I know the reason why this happened? I may have overlooked
> > some documentation with respect to the change. I have a patch which
> > is in merge conflicts and have a db upgrade with version "XXX" and
> > I use that version in HEAD. When I upgrade them, I use
> > neutron-db-manage --config-file /etc/neutron/neutron.conf
> > --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head.
> >
> > With this recent refactoring of the db, what needs to be done in
> > order to upgrade the neutron db?
> >
>
> Reasoning behind the change and some suggestions on how to proceed can
> be found at:
>
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069582.html
>
> I will also update devref tomorrow as per suggestion from Salvatore
> there, adding some examples on how to proceed.
>
> Ihar
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iQEcBAEBCAAGBQJVpryjAAoJEC5aWaUY1u57wtYIAOUlFNjOWQ29DMGV21URt6rJ
> c+M4zzdoiw5/Vtj4sJl39cGrdJ9HGyUJLLu203j7fQdhe5/snOf6Vw8XeC0S8nk9
> WzVtM0wbgJiKeG1uSNLMZTXWtpUfcX62X7fuUxibX6qDQVvMt5lJ86R4DROui8/v
> v9fgJfP7uvARorad80qY06kYL6zZOtxBGQFAfzhCIex2WI8gla5t6BIq73PKh76T
> pmxCL8fIM81JgCOpt/zKkg9r3A1D5XmVklxuh9etx2REKPtgqHNsdL3hPETLH8Bu
> eM9G1HS7L5qMQAagN0Ge5lYbPXyATmsBu15PbqXhwp6YJeWnriSmCI5ssCG+0VI=
> =Jlr0
> -END PGP SIGNATURE-
>
>


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-15 Thread Joshua Harlow

Thanks, that helps! (and I hope it helps others too),

A few questions inline...

Stefano Maffulli wrote:

On 07/15/2015 11:25 AM, Joshua Harlow wrote:

So I've been following the TC work on tags, and have been slightly
confused by the whole work, so I am wondering if I can get a
'explainlikeimfive' (borrowing from reddit terminology) edition of it.


I'll try :)

You need to think of "tags" as the labels on the boxes containing your
toys: you'll have a box with legos, one box with dolls, one box with
bicycle parts, one with star wars figurines etc. Each box with a label
outside so you can read (at 5 you may be able to read) what is in the
box before you open it.

Does that make sense?

You may think that the tags are to identify the toys you like the most
but that's not the purpose. You may like Skywalker figurine but dislike
Yoda, but they're both going to be in the starwars box. Starwars is an
objective way for dad and your friends to understand which toys go in
which bucket. Since you may like something today and not tomorrow, and
since dad can't read your mind we don't use labels such as "things I
like a lot" or "things I hate" because those are subjective valuations.

Are you still there? :)


(eating paste)




I always thought tags were going to be something like:

http://i.imgur.com/rcAnMkX.png


The graphic you used obviously carries subjective meaning, which tags
are never meant to be and hopefully never will.


Does it? Replace the tags with other, more objective words, and still
let people upvote/downvote them (IMHO the main intention there was to
make the tags vote-able, making the usefulness of a tag a democratic
'entity' that can prove its own usefulness via up/down votes); but maybe
you're looking for 100% objective words as tags (never vote-able, an
honorable goal I suppose).




The 'tags' are defined on the spec:

http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html

That spec spells out the problem the tags are introduced to solve. Tags
represent a precise *taxonomy* for navigating the OpenStack ecosystem,
helping readers understand how a project is managed (does it have a
vulnerability team assigned? does it keep a stable branch? what's the
release model? is it overseen by the TC? etc.). As the spec says:

 the landscape [used to be] very simple: you’re in the integrated
 release, or you’re not. But since there was only one category (or
 badge of honor), it ended up meaning different things to different
 people.


In Thierry's words to "[Openstack-operators] [tags] Ops-Data vs.
Ops-Tags", June 16 2015:

 They come with a definition, a set of requirements that a project
 must fulfill to be granted the label. Ideally the requirements are
 objective, based on available documentation and metrics. But the
 tag definition itself remains subjective.


The tags are meant to describe each project objectively. So, for
example, if you want to know which projects maintain a stable branch,
you see the list at:


OK, so I guess this is more like classifiers in Python (in a way),
where the list is pretty objective and statically defined, as at
https://pypi.python.org/pypi?%3Aaction=list_classifiers (or something
like that).
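The classifier analogy can be pushed one step further: tags form a flat namespace you can query mechanically. A toy example follows; the tag names mirror the governance pages linked in this thread, while the project-to-tag data is invented for illustration.

```python
# Invented project data; tag names follow the governance.openstack.org
# naming style referenced elsewhere in this thread.
PROJECTS = {
    "nova": {"type:service", "release:has-stable-branches",
             "tc-approved-release"},
    "oslo.config": {"type:library", "release:has-stable-branches"},
}

def projects_with(tag):
    """Objective query: which projects carry a given tag?"""
    return sorted(name for name, tags in PROJECTS.items() if tag in tags)
```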




http://governance.openstack.org/reference/tags/release_has-stable-branches.html

You want to see if projects are libraries, middleware or client:

http://governance.openstack.org/reference/tags/type_library.html

Are you curious to see which projects constitute the release approved by
the TC?

http://governance.openstack.org/reference/tags/tc-approved-release.html

Tags can be proposed by anyone, not only by the TC, and they get
discussed and voted on in Gerrit. The proposed tags need to be as
objective as possible. And there is a working group
(https://etherpad.openstack.org/p/ops-tags-June-2015) among operators
trying to define tags that may help operators to judge if a project is
good for them to use or not.


So my only thought is that ^ sounds like a lot of red tape, and I
really wonder if there is any way to make this more 'relaxed' (and also
'fun') and/or less strict while still achieving the same result
("objectiveness"...).




HTH
stef





Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Joshua Harlow

Chris Friesen wrote:

On 07/15/2015 09:31 AM, Joshua Harlow wrote:

I do like experiments!

What about going even farther and trying to integrate somehow into mesos?

https://mesos.apache.org/documentation/latest/mesos-architecture/

Replace the hadoop executor, MPI executor with a 'VM executor' and
perhaps we
could eliminate a large part of the scheduler code (just a thought)...


Is the mesos scheduler sufficiently generic as to encompass all the
filters we currently have in nova?


IMHO some of these should probably never have existed in the first
place, e.g.
https://github.com/openstack/nova/blob/master/nova/scheduler/filters/json_filter.py
since they are nearly impossible to migrate away from once created (a
"JSON-based grammar for selecting hosts", like woah). So if someone is
going to do a comparison/experiment, I'd hope they can overlook some of
the filters that likely never should have been created in the first
place ;)
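For readers who haven't met it, that filter takes a scheduler hint containing a JSON query over host state. The evaluator below is a simplified sketch of the idea only; nova's real JsonFilter supports more operators (including 'not') and resolves '$' variables against full HostState objects rather than plain dicts.

```python
import json
import operator

# Comparison operators understood by this simplified evaluator.
OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
       "<": operator.lt, "=": operator.eq}

def matches(query, host):
    """Recursively evaluate a ["op", ...] query against a host dict."""
    op = query[0]
    if op in ("and", "or"):
        results = [matches(sub, host) for sub in query[1:]]
        return all(results) if op == "and" else any(results)
    lhs, rhs = query[1], query[2]
    if isinstance(lhs, str) and lhs.startswith("$"):
        lhs = host[lhs[1:]]            # "$free_ram_mb" -> host value
    return OPS[op](lhs, rhs)

# The kind of hint a user could pass to such a filter.
query = json.loads('["and", [">=", "$free_ram_mb", 1024],'
                   ' [">=", "$free_disk_mb", 204800]]')
```

This also illustrates why migrating away is hard: the queries live in user-supplied hints, not in operator-controlled configuration.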




Chris






Re: [openstack-dev] [neutron] Should we document the using of "device:owner" of the PORT ?

2015-07-15 Thread Mike Kolesnik
- Original Message -

> Yes please.

> This would be a good starting point.
> I also think that the ability of editing it, as well as the value it could be
> set to, should be constrained.

FYI the oVirt project uses this field to identify ports it creates and manages. 
So if you're going to constrain it to something, it should probably be 
configurable so that managers other than Nova can continue to use Neutron. 

> As you have surely noticed, there are several code paths which rely on
> an appropriate value being set in this attribute.
> This means a user can potentially trigger malfunctions by sending PUT
> requests that edit this attribute.

> Summarizing, I think that documenting its usage is a good starting
> point, but I believe we should also address the way this attribute is
> exposed at the API layer.

> Salvatore

> On 13 July 2015 at 11:52, Wang, Yalei < yalei.w...@intel.com > wrote:

> > Hi all,
> 
> > The device:owner field of the port is defined as a 255-byte string
> > and is widely used now, indicating the use of the port.
> 
> > It seems we can fill it in freely, and a user can also update/set it
> > from the command line (port-update $PORT_ID --device_owner); I can't
> > find a guideline for its use.
> 
> > What is its function? It indicates the use of the port, and Horizon
> > also seems to use it to show the topology.
> 
> > Since Nova really needs it to stay editable, should we at least
> > document all of the possible values in some guide to make this clear?
> > If yes, I can do it.
> 
> > I gathered these values from the code (maybe not complete, please
> > point out any I missed):
> 
> > From constants.py,
> 
> > DEVICE_OWNER_ROUTER_HA_INTF = "network:router_ha_interface"
> 
> > DEVICE_OWNER_ROUTER_INTF = "network:router_interface"
> 
> > DEVICE_OWNER_ROUTER_GW = "network:router_gateway"
> 
> > DEVICE_OWNER_FLOATINGIP = "network:floatingip"
> 
> > DEVICE_OWNER_DHCP = "network:dhcp"
> 
> > DEVICE_OWNER_DVR_INTERFACE = "network:router_interface_distributed"
> 
> > DEVICE_OWNER_AGENT_GW = "network:floatingip_agent_gateway"
> 
> > DEVICE_OWNER_ROUTER_SNAT = "network:router_centralized_snat"
> 
> > DEVICE_OWNER_LOADBALANCER = "neutron:LOADBALANCER"
> 
> > And from debug_agent.py
> 
> > DEVICE_OWNER_NETWORK_PROBE = 'network:probe'
> 
> > DEVICE_OWNER_COMPUTE_PROBE = 'compute:probe'
> 
> > And setting from nova/network/neutronv2/api.py,
> 
> > 'compute:%s' % instance.availability_zone
> 
> > Thanks all!
> 
> > /Yalei
> 

> 



Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

2015-07-15 Thread TIANTIAN
I'd agree with Angus.


On 2015-07-16 12:05:25, "Angus Salkeld"  wrote:

On Tue, Jun 30, 2015 at 6:09 PM, Julien Danjou  wrote:
On Mon, Jun 29 2015, Ildikó Váncsa wrote:

> I think removing options from the API requires a version bump. So if
> we plan to do this, it should be introduced in v3 as opposed to v2,
> which should remain the same and be maintained for two cycles (assuming
> we still have this policy in OpenStack). If this is achievable by
> removing the old code and relying on the new repo, that would be best;
> if not, we need to figure out how to freeze the old code.

This is not an API change as we're not modifying anything in the API.
It's just the endpoint *potentially* split in two. But you can also
merge them as they are 2 separate entities (/v2/alarms and /v2/*).
So there's no need for a v3 here.



Hi Julien,


I just saw this, and I am concerned this is going to kill Heat's gate
(and users' templates).


Will this be hidden within the client so that as long as we have aodh enabled 
in our gate's devstack
this will just work?


-Angus
 

As the consensus goes toward removal, I'll work on a patch for that.

--
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */






Re: [openstack-dev] [ceilometer] Aodh has been imported, next steps

2015-07-15 Thread Angus Salkeld
On Tue, Jun 30, 2015 at 6:09 PM, Julien Danjou  wrote:

> On Mon, Jun 29 2015, Ildikó Váncsa wrote:
>
> > I think removing options from the API requires a version bump. So if
> > we plan to do this, it should be introduced in v3 as opposed to v2,
> > which should remain the same and be maintained for two cycles
> > (assuming we still have this policy in OpenStack). If this is
> > achievable by removing the old code and relying on the new repo, that
> > would be best; if not, we need to figure out how to freeze the old
> > code.
>
> This is not an API change as we're not modifying anything in the API.
> It's just the endpoint *potentially* split in two. But you can also
> merge them as they are 2 separate entities (/v2/alarms and /v2/*).
> So there's no need for a v3 here.
>

Hi Julien,

I just saw this, and I am concerned this is going to kill Heat's gate
(and users' templates).

Will this be hidden within the client so that as long as we have aodh
enabled in our gate's devstack
this will just work?

-Angus
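One way the transparency Angus asks about could work at the client level: prefer a dedicated alarming endpoint when the service catalog exposes one, and fall back to the combined ceilometer endpoint otherwise. This is a sketch of the idea only, not the behaviour of any actual client; the service-type names are assumptions.

```python
def alarm_endpoint(catalog):
    """Pick the endpoint for /v2/alarms calls.

    catalog maps service type -> endpoint URL; "alarming" (aodh) is
    preferred over "metering" (ceilometer).
    """
    for service in ("alarming", "metering"):
        if service in catalog:
            return catalog[service]
    raise LookupError("no alarm-capable endpoint in catalog")
```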


>
> As the consensus goes toward removal, I'll work on a patch for that.
>
> --
> Julien Danjou
> /* Free Software hacker
>http://julien.danjou.info */
>
>
>


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Kai Qiang Wu
Hi Hong Bin,

Thanks for your reply.


I think it is better to settle the 'platform' vs. instance_type vs.
others question first.
Attach:  initial patch (about the discussion):
https://review.openstack.org/#/c/200401/

My other patches all depend on the patch above; if it cannot reach a
meaningful agreement, they will be blocked.



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   07/16/2015 11:47 AM
Subject:Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field
you proposed in baymodel [1]. I prefer to drop it and use the existing
field ‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?



Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or
instance_type.


As flavors have no binding to 'vm' or 'baremetal', let me summarize the
initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat are not as flexible as a programming language, no
if/else etc., and separate templates are easy to maintain)
The two kinds of kubernetes templates: one boots VMs, the other boots
baremetal. 'VM' or 'baremetal' here is just used for heat template selection.


1> If we used flavor, it is a nova-specific concept: take two as examples,
m1.small or m1.middle.
   m1.small -> 'VM', m1.middle -> 'VM'
   Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identification. That's why I
think flavor is not good to use.


2> @Adrian, we have a --flavor-id field for baymodel now; it would be picked
up by the heat-templates to boot instances with that flavor.


3> Finally, I think instance_type is better. instance_type can be used as
the heat template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack
deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal
heat stack deploy.
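To make the selection idea concrete, the instance_type proposal amounts to a
lookup keyed on the three mapping layers. A minimal sketch — the registry
keys and template paths are illustrative, not Magnum's actual code:

```python
# Hypothetical Heat template registry keyed on the three mapping layers
# (instance_type, os, coe). Keys and paths are examples only.
TEMPLATE_REGISTRY = {
    ("vm", "atomic", "kubernetes"): "templates/kubecluster.yaml",
    ("baremetal", "atomic", "kubernetes"): "templates/kubecluster-ironic.yaml",
    ("vm", "coreos", "kubernetes"): "templates/kubecluster-coreos.yaml",
}

def select_template(instance_type, os_name, coe):
    """Pick the Heat template matching the baymodel's three layers."""
    try:
        return TEMPLATE_REGISTRY[(instance_type, os_name, coe)]
    except KeyError:
        raise ValueError(
            "no template for instance_type=%s os=%s coe=%s"
            % (instance_type, os_name, coe))
```

With this shape, adding a new combination is a one-line registry entry
rather than conditional logic inside the templates themselves.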





Thanks!


Best Wishes,


Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193


Follow your heart. You are miracle!


From: Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?




+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or
“baremetal” is that Magnum needs to map a bay to a Heat template (which
will be used to provision the bay). Currently, Magnum has three layers of
mapping:
  ・ platform: vm or baremetal
  ・ os: atomic, coreos, …
  ・ coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate
a list of flavors for VM and another list of flavors for baremetal (we may
need an additional list of flavors for container in the future for the
nested container use case). Then, the new three layers would be:
  ・ flavor: baremetal, m1.small, m1.medium,  …
  ・ os: atomic, coreos, ...
  ・ coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate
what Nova flavor already indicates.
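Hongbin's alternative — deriving the platform from the Nova flavor instead of
storing a new field — could be sketched as a lookup against
operator-maintained flavor lists. This is purely illustrative; the flavor
names and the lookup function are not Magnum code:

```python
# Operator-populated lists mapping Nova flavors to a template class.
# The flavor names are examples; a real deployment would configure these.
BAREMETAL_FLAVORS = {"baremetal"}
VM_FLAVORS = {"m1.small", "m1.medium", "m1.large"}

def platform_from_flavor(flavor):
    """Derive the old 'platform' value from the baymodel's Nova flavor."""
    if flavor in BAREMETAL_FLAVORS:
        return "baremetal"
    if flavor in VM_FLAVORS:
        return "vm"
    raise ValueError("flavor %r is not in any configured list" % flavor)
```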

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage 

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Hongbin Lu
Kai,

Sorry for the confusion. To clarify, I was thinking how to name the field you 
proposed in baymodel [1]. I prefer to drop it and use the existing field 
‘flavor’ to map the Heat template.

[1] https://review.openstack.org/#/c/198984/6

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-15-15 10:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?


Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or
instance_type.


As flavors have no binding to 'vm' or 'baremetal', let me summarize the
initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat are not as flexible as a programming language, no
if/else etc., and separate templates are easy to maintain)
The two kinds of kubernetes templates: one boots VMs, the other boots
baremetal. 'VM' or 'baremetal' here is just used for heat template selection.


1> If we used flavor, it is a nova-specific concept: take two as examples,
m1.small or m1.middle.
   m1.small -> 'VM', m1.middle -> 'VM'
   Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identification. That's why I
think flavor is not good to use.


2> @Adrian, we have a --flavor-id field for baymodel now; it would be picked
up by the heat-templates to boot instances with that flavor.


3> Finally, I think instance_type is better. instance_type can be used as
the heat template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack
deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal
heat stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


From: Hongbin Lu mailto:hongbin...@huawei.com>>
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: 07/16/2015 04:44 AM
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?





+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or “baremetal”
is that Magnum needs to map a bay to a Heat template (which will be used to
provision the bay). Currently, Magnum has three layers of mapping:
* platform: vm or baremetal
* os: atomic, coreos, …
* coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate a
list of flavors for VM and another list of flavors for baremetal (we may need
an additional list of flavors for container in the future for the nested
container use case). Then, the new three layers would be:
* flavor: baremetal, m1.small, m1.medium,  …
* os: atomic, coreos, ...
* coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate what 
Nova flavor already indicates.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors? They 
already abstract away ironic vs kvm vs hyperv/etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto mailto:adrian.o.

Re: [openstack-dev] [magnum] Tom Cammann for core

2015-07-15 Thread Jay Lau
Welcome Tom!

2015-07-15 6:53 GMT+08:00 Tom Cammann :

> Thanks team, happy to be here :)
>
> Tom
> > On 14 Jul 2015, at 23:02, Adrian Otto  wrote:
> >
> > Tom,
> >
> > It is my pleasure to welcome you to the magnum-core group. We are happy
> to have you on the team.
> >
> > Regards,
> >
> > Adrian
> >
> >> On Jul 9, 2015, at 7:20 PM, Adrian Otto 
> wrote:
> >>
> >> Team,
> >>
> >> Tom Cammann (tcammann) has become a valued Magnum contributor, and
> consistent reviewer helping us to shape the direction and quality of our
> new contributions. I nominate Tom to join the magnum-core team as our
> newest core reviewer. Please respond with a +1 vote if you agree.
> Alternatively, vote -1 to disagree, and include your rationale for
> consideration.
> >>
> >> Thanks,
> >>
> >> Adrian
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,

Jay Lau (Guangya Liu)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-15 Thread Stefano Maffulli
On 07/15/2015 11:25 AM, Joshua Harlow wrote:
> So I've been following the TC work on tags, and have been slightly
> confused by the whole work, so I am wondering if I can get an
> 'explainlikeimfive' (borrowing from reddit terminology) edition of it.

I'll try :)

You need to think of "tags" as the labels on the boxes containing your
toys: you'll have a box with legos, one box with dolls, one box with
bicycle parts, one with star wars figurines etc. Each box with a label
outside so you can read (at 5 you may be able to read) what is in the
box before you open it.

Does that make sense?

You may think that the tags are to identify the toys you like the most
but that's not the purpose. You may like Skywalker figurine but dislike
Yoda, but they're both going to be in the starwars box. Starwars is an
objective way for dad and your friends to understand which toys go in
which bucket. Since you may like something today and not tomorrow, and
since dad can't read your mind we don't use labels such as "things I
like a lot" or "things I hate" because those are subjective valuations.

Are you still there? :)

> I always thought tags were going to be something like:
>
> http://i.imgur.com/rcAnMkX.png

The graphic you used obviously carries subjective meaning, which tags
are never meant to be and hopefully never will.

The 'tags' are defined on the spec:

http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html

That spec spells out the problem the tags were introduced to solve.
Tags represent a precise *taxonomy* for navigating the OpenStack
ecosystem, helping readers understand how a
project is managed (does it have a vulnerability team assigned? does it
keep a stable branch? what's the release model? Is it overseen by the
TC? etc). As the spec says:

the landscape [used to be] very simple: you’re in the integrated
release, or you’re not. But since there was only one category (or
badge of honor), it ended up meaning different things to different
people.


In Thierry's words to "[Openstack-operators] [tags] Ops-Data vs.
Ops-Tags", June 16 2015:

They come with a definition, a set of requirements that a project
must fulfill to be granted the label. Ideally the requirements are
objective, based on available documentation and metrics. But the
tag definition itself remains subjective.


Tags are meant to describe each project objectively. So, for
example, if you want to know which projects maintain a stable branch, you
see the list at:

http://governance.openstack.org/reference/tags/release_has-stable-branches.html

You want to see which projects are libraries, middleware or clients:

http://governance.openstack.org/reference/tags/type_library.html

Are you curious to see which projects constitute the release approved by
the TC?

http://governance.openstack.org/reference/tags/tc-approved-release.html

Tags can be proposed by anyone, not only by the TC, and they get
discussed and voted on in Gerrit. The proposed tags need to be as objective
as possible. And there is a working group
(https://etherpad.openstack.org/p/ops-tags-June-2015) among operators
trying to define tags that may help operators judge whether a project is
good for them to use or not.
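Mechanically, the tag pages linked above are just membership queries over
project records. A toy illustration — the data here is made up; the real
records live in the openstack/governance repository:

```python
# Hypothetical slice of the governance project records: project name
# mapped to the set of tags it has been granted. Made-up data.
PROJECTS = {
    "nova": {"tc-approved-release", "release:has-stable-branches"},
    "oslo.config": {"type:library"},
    "heat": {"tc-approved-release", "release:has-stable-branches"},
}

def projects_with_tag(tag):
    """Objective query: which projects carry a given tag?"""
    return sorted(name for name, tags in PROJECTS.items() if tag in tags)
```

Because the tags are objective labels, the same query works for any reader
without interpretation — the point Stefano's toy-box analogy is making.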

HTH
stef

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Kai Qiang Wu
Hi HongBin,

I think flavor introduces more confusion than nova_instance_type or
instance_type.


As flavors have no binding to 'vm' or 'baremetal', let me summarize the
initial question:
  We have two kinds of templates for kubernetes now,
(as templates in heat are not as flexible as a programming language, no
if/else etc., and separate templates are easy to maintain)
The two kinds of kubernetes templates: one boots VMs, the other boots
baremetal. 'VM' or 'baremetal' here is just used for heat template selection.


1> If we used flavor, it is a nova-specific concept: take two as examples,
m1.small or m1.middle.
   m1.small -> 'VM', m1.middle -> 'VM'
   Both m1.small and m1.middle can be used in a 'VM' environment.
So we should not use m1.small as a template identification. That's why I
think flavor is not good to use.


2> @Adrian, we have a --flavor-id field for baymodel now; it would be picked
up by the heat-templates to boot instances with that flavor.


3> Finally, I think instance_type is better. instance_type can be used as
the heat template identification parameter.

instance_type = 'vm' means such templates fit a normal 'VM' heat stack
deploy.

instance_type = 'baremetal' means such templates fit an ironic baremetal
heat stack deploy.





Thanks!


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   Hongbin Lu 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   07/16/2015 04:44 AM
Subject:Re: [openstack-dev] [magnum] Magnum template manage use
platform VS others as a type?



+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or
“baremetal” is that Magnum needs to map a bay to a Heat template (which
will be used to provision the bay). Currently, Magnum has three layers of
mapping:
  ・ platform: vm or baremetal
  ・ os: atomic, coreos, …
  ・ coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate
a list of flavors for VM and another list of flavors for baremetal (we may
need an additional list of flavors for container in the future for the
nested container use case). Then, the new three layers would be:
  ・ flavor: baremetal, m1.small, m1.medium,  …
  ・ os: atomic, coreos, ...
  ・ coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate
what Nova flavor already indicates.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors?
They already abstract away ironic vs kvm vs hyperv/etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What
about using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include:
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver,
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: Tuesday, July 14, 2015 at 7:44 PM
To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform
VS others as a type?

 One drawback to virt_type, if not seen in the context of the acceptable
 values, is that it would be set to values like libvirt, xen, ironic, etc.
 That might actually be good: instead of using the values 'vm' or
 'baremetal', we use the name of the nova virt driver and interpret those
 as vm or baremetal types. So if I set the value to 'xen', I know the
 nova instance type is a vm, and 'ironic' means a baremetal nova instance.
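Adrian's interpretation — store the virt driver name and derive vm/baremetal
from it — could look like the sketch below. The driver names are taken from
the nova configuration docs quoted earlier in the thread; the mapping
function itself is an illustration, not existing code:

```python
# Virt driver names that imply a baremetal instance; every other driver
# in the known set is treated as a VM. Sketch only.
BAREMETAL_DRIVERS = {"ironic", "baremetal"}
KNOWN_DRIVERS = {"libvirt", "xen", "xenapi", "vmwareapi", "hyperv",
                 "fake"} | BAREMETAL_DRIVERS

def instance_kind(virt_type):
    """Interpret a virt_type value such as 'xen' or 'ironic'."""
    if virt_type not in KNOWN_DRIVERS:
        raise ValueError("unknown virt_type %r" % virt_type)
    return "baremetal" if virt_type in BAREMETAL_DRIVERS else "vm"
```

The benefit is that only the small baremetal set ever needs maintaining; any
new hypervisor driver defaults to the vm template family.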

 Adrian


  Original message 
 From: Hongbin Lu 
 Date: 07/14/2015 

[openstack-dev] Please add add 'Fuel' to list topic categories

2015-07-15 Thread Qiming Teng
Hi,

I believe we are all receiving a large number of Fuel related messages
everyday, but not all of us have the abundant bandwidth to read them.
Maybe we can consider adding 'Fuel' to the topic categories we can check
on/off when customising the subscription.

Currently, the option is to filter out "all messages that do not match
any topic filter", which is an obvious overkill.

Thanks for considering this.

Regards,
 Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][infra][third-party] Intel actively seeking solution to CI issue and getting close to a solution

2015-07-15 Thread Anita Kuno
On 07/15/2015 10:19 PM, yongli he wrote:
> Hello OpenStackers!
> 
> The Intel PCI/SRIOV/NGFW/PTAS CI located in China, due to reasons beyond
> our control, lost connectivity to the Jenkins servers.

The great firewall of China is making quite a few folks unhappy.


> Although the CI
> system is working fine we haven’t been able to report results back for
> about a month now.
> 
> We are actively looking for a solution to this problem.
> 
> Currently we are seeking a VM in AWS or similar public cloud to hold our
> CI logs,

Have you taken a look at any of the fine offerings from companies who
operate OpenStack public clouds?
http://www.openstack.org/marketplace/public-clouds/


> which will quickly give us a short term solution.  For a longer
> term solution we are exploring moving to machines located in the US.
> 
> Sorry for the inconvenience, and thanks for your patience.
> 
> Regards
> Yongli

Thanks Yongli,
Anita.

> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] July 15th meeting cancelled

2015-07-15 Thread Steve Gordon
- Original Message -
> From: "Calum Loudon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> Hi Steve
> 
> I missed the linked mail as it was sent to the openstack-operators list, not
> openstack-dev - was that intentional?

Sort of, in that the use case definition work is primarily an operators' 
activity rather than a development one as such. The other issue with 
cross-posting is that we end up with a split thread across the two (or more) 
lists. In this case I did not get any on-list responses, though I did 
get several off-list responses suggesting variations of:

1) Move focus away from the telcowg-usecases repository in favor of the 
productwg user stories repository currently being created.
2) Move focus away from the telcowg-usecases repository in favor of backlog 
spec and/or RFE processes for projects that support them.
3) Move the telcowg-usecases repository into the openstack namespace as 
proposed but do so under the governance of the user committee rather than the 
TC.

I record these here simply for transparency; obviously we need to discuss as 
a team which, if any, of these is appropriate in addition to, or instead of, 
the actions I proposed in the previous email.

> On the substance of the mail, +1 to adding Daniel and Yuriy to the core
> reviewers list.

Thanks for the feedback on this.

-Steve

> cheers
> 
> Calum
> 
> -Original Message-
> From: Steve Gordon [mailto:sgor...@redhat.com]
> Sent: 15 July 2015 12:36
> To: openstack-operators; OpenStack Development Mailing List (not for usage
> questions)
> Subject: [openstack-dev] [NFV][Telco] July 15th meeting cancelled
> 
> Hi all,
> 
> I'm unable to make the meeting today and was unable to get an alternative
> facilitator to run the meeting, as such it is canceled. Please note that I
> am still seeking comment on:
> 
> [Openstack-operators] [nfv][telco] On-going management of
> telcowg-usecases repository
> 
> http://lists.openstack.org/pipermail/openstack-operators/2015-July/007611.html
> 
> As always outstanding reviews are here:
> 
> 
> https://review.openstack.org/#/q/status:open+project:stackforge/telcowg-usecases,n,z
> 
> Thanks,Steve
> 
> Steve
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Steve Gordon, RHCE
Sr. Technical Product Manager,
Red Hat Enterprise Linux OpenStack Platform

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][infra][third-party] Intel actively seeking solution to CI issue and getting close to a solution

2015-07-15 Thread yongli he

Hello OpenStackers!

The Intel PCI/SRIOV/NGFW/PTAS CI located in China, due to reasons beyond 
our control, lost connectivity to the Jenkins servers. Although the CI 
system is working fine we haven’t been able to report results back for 
about a month now.


We are actively looking for a solution to this problem.

Currently we are seeking a VM in AWS or similar public cloud to hold our 
CI logs, which will quickly give us a short term solution.  For a longer 
term solution we are exploring moving to machines located in the US.


Sorry for the inconvenience, and thanks for your patience.

Regards
Yongli

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tricircle]Team Weekly Meeting 2015.07.15 Roundup

2015-07-15 Thread Zhipeng Huang
Hi Team,

Sorry I wasn't able to make it to the IRC meeting due to personal issues;
however, Joe was kind enough to log all the conversations.

So here you go, the automatically generated minutes are here (though not so
much info):
http://eavesdrop.openstack.org/meetings/tricircle/2015/tricircle.2015-07-15-13.11.html


Please also find in the attachment a "noise-cancelled" chatlog for anything
that couldn't be captured by the meetbot. :)

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Tricircle IRC Meeting chatlog 20150715.docx
Description: MS-Word 2007 document
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][bp] Power Magnum to run on metalwith Hyper

2015-07-15 Thread Peng Zhao
-- Original --
From:  “Adrian Otto”;
Date:  Wed, Jul 15, 2015 02:31 AM
To:  “OpenStack Development Mailing List (not for usage 
questions)“; 


Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run onmetal  
withHyper

 

 Peng, 
   On Jul 13, 2015, at 8:37 PM, Peng Zhao  wrote:
 
  Thanks Adrian!
 
 
 Hi, all,
 
 
 Let me recap what Hyper is and the idea of HyperStack. 
 
 
 Hyper is a single-host runtime engine. Technically, 
 Docker = LXC + AUFS
 Hyper = Hypervisor + AUFS
 where AUFS is the Docker image.
 
  
 
 I do not understand the last line above. My understanding is that AUFS == 
UnionFS, which is used to implement a storage driver for Docker. Others exist 
for btrfs, and devicemapper. You select which one you want by setting an option 
like this:
 
 
 DOCKEROPTS=”-s devicemapper”
 
 
 Are you trying to say that with Hyper, AUFS is used to provide layered Docker 
image capabilities that are shared by multiple hypervisor guests?





Peng >>> Yes, AUFS refers to the Docker images here.

 My guess is that you are trying to articulate that a host running Hyper is a 
1:1 substitute for a host running Docker, and will respond using the Docker 
remote API. This would result in containers running on the same host that have 
superior security isolation compared to what they would have if LXC were used 
as the backend to Docker. Is this correct?





Peng>>> Exactly
 
   Due to the shared-kernel nature of LXC, Docker lacks the necessary 
isolation for a multi-tenant CaaS platform, and this is what Hyper/hypervisor 
is good at.
 
 
 And because of this, most CaaS today run on top of IaaS: 
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/388x275/e286dea1266b46c1999d566b0f9e326b/iaas.png
 Hyper enables the native, secure, bare-metal CaaS  
https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/395x244/828ad577dafb3f357e95899e962651b2/caas.png
 
 
 From the tech stack perspective, HyperStack makes Magnum run in parallel 
with Nova, not on top of it.
 
  
 
 For this to work, we’d expect to get a compute host from Heat, so if the bay 
type were set to “hyper”, we’d need to use a template that can produce a 
compute host running Hyper. How would that host be produced, if we do not get 
it from nova? Might it make more sense to make a virt driver for nova that 
could produce a Hyper guest on a host already running the nova-compute agent? 
That way Magnum would not need to re-create any of Nova’s functionality in 
order to produce nova instances of type “hyper”.





Peng >>> We don’t have to get the physical host from nova. Let’s say
   OpenStack = Nova+Cinder+Neutron+Bare-metal+KVM, so “AWS-like IaaS for 
everyone else”

   HyperStack= Magnum+Cinder+Neutron+Bare-metal+Hyper, then “Google-like CaaS 
for everyone else”


Ideally, customers should deploy a single OpenStack cluster, with both nova/kvm 
and magnum/hyper. I’m looking for a solution to make nova/magnum co-exist.

 Is Hyper compatible with libvirt?





Peng>>> We are working on the libvirt integration, expected in v0.5

 
 
 Can Hyper support nested Docker containers within the Hyper guest?





Peng>>> Docker in Docker? In a HyperVM instance, there is no docker daemon, 
cgroup or namespace (except MNT for the pod). The VM serves the purpose of 
isolation. We plan to support cgroups and namespaces, so you can control 
whether multiple containers in a pod share the same namespace or are 
completely isolated. But in either case, no docker daemon is present.

 
 
 Thanks,
 
 
 Adrian Otto
 
 

 
  Best,
 Peng
  

   -- Original --
  From:  “Adrian Otto”;
 Date:  Tue, Jul 14, 2015 07:18 AM
 To:  “OpenStack Development Mailing List (not for usage 
questions)“; 
 

 Subject:  Re: [openstack-dev] [magnum][bp] Power Magnum to run on metal 
withHyper
 
  

 Team, 
 
 I woud like to ask for your input about adding support for Hyper in Magnum:
 
 
 https://blueprints.launchpad.net/magnum/+spec/hyperstack
 
 
 We touched on this in our last team meeting, and it was apparent that we 
should achieve a higher level of understanding of the technology before 
weighing in on the directional approval of this blueprint. Peng Zhao and Xu 
Wang have graciously agreed to respond to this thread to address questions 
about how the technology works, and how it could be integrated with Magnum.
 
 
 Please take a moment to review the blueprint, and ask your questions here on 
this thread.
 
 
 Thanks,
 
 
 Adrian Otto
 
 
   On Jul 2, 2015, at 8:48 PM, Peng Zhao  wrote:
 
  Here is the bp of Magnum+Hyper+Metal integration:  
https://blueprints.launchpad.net/magnum/+spec/hyperstack
 
 
 Wanted to hear more thoughts and kickstart some brainstorming.
 
 
 Thanks,
 Peng
 
 
 -
 Hyper - Make VM run like Container
 
 
  
   __
 OpenStack  Development Mailing List (not for usage 

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Adrian Otto

On Jul 15, 2015, at 1:39 PM, 
hongbin...@huawei.com wrote:

+1 for the idea of using Nova flavor directly.

Our initial resistance to using flavor is that you may want the same bay to 
have a combination of different flavors in it. I suppose this might still be 
possible by changing the flavor value in the bay, and then scaling it.

The reason we introduced the “platform” field to indicate “vm” or “baremetal” 
is that Magnum needs to map a bay to a Heat template (which will be used to 
provision the bay). Currently, Magnum has three layers of mapping:
• platform: vm or baremetal
• os: atomic, coreos, …
• coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate a 
list of flavors for VM and another list of flavors for baremetal (we may need 
an additional list of flavors for container in the future for the nested 
container use case). Then, the new three layers would be:
• flavor: baremetal, m1.small, m1.medium,  …

Well, this would need to be a valid flavor name in nova for it to work. You 
might have several flavors that are all actually “baremetal”, so it would be up 
to the cloud operator to map all the flavors to the correct instance types. 
Perhaps it’s possible to determine which nova virt driver would be used for 
each of the flavors, and look that up automatically in order to eliminate the 
need for maintaining a mapping.

• os: atomic, coreos, ...
• coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate what 
Nova flavor already indicates.
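
The layered mapping could be sketched as follows — a hypothetical illustration 
only, not Magnum code; which flavors count as baremetal would be 
operator-configured:

```python
# Hypothetical sketch: map (flavor, os, coe) to a Heat template path.
# The set of baremetal flavors is an assumption an operator would maintain.
BAREMETAL_FLAVORS = {"baremetal"}

def template_for(flavor, os, coe):
    server_class = "baremetal" if flavor in BAREMETAL_FLAVORS else "vm"
    return "templates/%s/%s/%s.yaml" % (server_class, os, coe)

print(template_for("m1.small", "atomic", "kubernetes"))
# templates/vm/atomic/kubernetes.yaml
print(template_for("baremetal", "coreos", "swarm"))
# templates/baremetal/coreos/swarm.yaml
```

Any flavor not explicitly listed as baremetal falls through to the vm 
templates, so the existing flavor field carries the information the 
"platform" field would have duplicated.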

It would also give us the ability to pass in a flavor id as a parameter to our 
heat templates to allow them to be more easily used with different flavor 
sizes. It could send the value from the bay model initially, or from the bay 
once it exists. It would also be possible to implement a ‘scale bay’ operation 
that would take this as a parameter, so you could accomplish the equivalent 
of “add an 8GB node to this bay”, or "add a giant baremetal node to this bay”.

Adrian


Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors? They 
already abstract away ironic vs kvm vs hyperv/etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 7:44 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

One drawback to virt_type, if not seen in the context of the acceptable values, 
is that it should be set to values like libvirt, xen, ironic, etc. That might 
actually be good. Instead of using the values 'vm' or 'baremetal', we use the 
name of the nova virt driver, and interpret those to be vm or baremetal types. 
So if I set the value to 'xen', I know the nova instance type is a vm, and 
'ironic' means a baremetal nova instance.
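
That interpretation could be sketched as below — illustrative only; the 
driver-name set is an assumption, not nova's actual list of drivers:

```python
# Hypothetical sketch: classify a nova virt driver name as 'vm' or 'baremetal'.
BAREMETAL_DRIVERS = {"ironic", "baremetal"}

def server_type_for(virt_driver):
    return "baremetal" if virt_driver in BAREMETAL_DRIVERS else "vm"

print(server_type_for("xen"))     # vm
print(server_type_for("ironic"))  # baremetal
```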

Adrian


 Original message 
From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
I am going to propose a third option:

3. virt_type

I have concerns about options 1 and 2, because “instance_type” and flavor were 
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or 
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.ope

Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-15 Thread GHANSHYAM MANN
On Thu, Jul 16, 2015 at 3:03 AM, Sean Dague  wrote:
> On 07/15/2015 01:44 PM, Matt Riedemann wrote:
>> The osapi_v3.enabled option is False by default [1] even though it's
>> marked as the CURRENT API and the v2 API is marked as SUPPORTED (and
>> we've frozen it for new feature development).
>>
>> I got looking at this because osapi_v3.enabled is True in nova.conf in
>> both the check-tempest-dsvm-nova-v21-full job and non-v21
>> check-tempest-dsvm-full job, but only in the v21 job is
>> "x-openstack-nova-api-version: '2.1'" used.
>>
>> Shouldn't the v2.1 API be enabled by default now?
>>
>> [1]
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44
>
> Honestly, we should probably deprecate osapi_v3.enabled and rename it to
> osapi_v21 (or osapi_v2_microversions) so as to not confuse people further.
>

Nice catch. We might have simply forgotten to make it default to True.

How about deprecating it now and removing it in N, making v2.1 enabled all
the time (irrespective of osapi_v3.enabled) since it is now the current API.

> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Thanks & Regards
Ghanshyam Mann



Re: [openstack-dev] [Fuel] Abandon changesets which hang for a while without updates

2015-07-15 Thread Mike Scherbakov
Folks,
let's execute on this. The numbers are still large. Did we have a chance to
look over the whole queue?

Can we go ahead and abandon changes that have had a -1 or -2 from reviewers
for over a month or so?
I'm all for just following the standard OpenStack process [1], and then
changing it only if there is a good reason to.

[1] https://wiki.openstack.org/wiki/Puppet#Patch_abandonment_policy
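
The policy being proposed (a negative vote, idle for over a month) amounts to a
filter like this sketch — the field names are invented for illustration; the
real scripts query Gerrit directly:

```python
from datetime import datetime, timedelta

def abandon_candidates(reviews, now, max_age_days=30):
    """Pick open reviews with a negative vote and no updates for max_age_days.

    Each review is a dict with 'last_updated' (datetime) and 'votes'
    (list of ints); these field names are hypothetical.
    """
    cutoff = now - timedelta(days=max_age_days)
    return [r for r in reviews
            if r["last_updated"] < cutoff and min(r["votes"], default=0) < 0]

now = datetime(2015, 7, 15)
reviews = [
    {"id": 1, "last_updated": datetime(2015, 5, 1), "votes": [-1]},   # stale, -1
    {"id": 2, "last_updated": datetime(2015, 7, 10), "votes": [-2]},  # recent
    {"id": 3, "last_updated": datetime(2015, 4, 20), "votes": [2]},   # stale, +2
]
print([r["id"] for r in abandon_candidates(reviews, now)])  # [1]
```

Only review 1 qualifies: review 2 is too recent and review 3 has no negative
vote, which matches the "don't abandon good patches" concern below.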

On Thu, Jul 9, 2015 at 6:27 PM Stanislaw Bogatkin 
wrote:

> 2 weeks seems too small for me. We can easily be in a situation where a fix
> for a medium bug is done, but SCF starts. And the gap between SCF and release
> can easily be more than a month. So, 2 months seems okay to me if we are
> speaking about forcibly applying auto-abandon by majority vote. And I'm
> personally against such an innovation at all.
>
> On Thu, Jul 9, 2015 at 5:37 PM, Davanum Srinivas 
> wrote:
>
>> That's a very good plan ("Initial feedback/triage") Mike.
>>
>> thanks,
>> dims
>>
>> On Thu, Jul 9, 2015 at 3:23 PM, Mike Scherbakov
>>  wrote:
>> > +1 for just reusing the existing script, and adjusting it on the way. No
>> > need to immediately switch from infinite time to a couple of weeks; we
>> > can always adjust it later. But 1-2 months should be a good start already.
>> >
>> > Our current stats [1] look just terrible. Before we enable auto-abandon,
>> > we need to go over every single patch first, and review it / provide
>> > comments to the authors. The idea is not to abandon good patches, and not
>> > to offend contributors...
>> >
>> > Let's think about how we can approach it. Should we have core reviewers
>> > check their corresponding components?
>> >
>> > [1] http://stackalytics.com/report/reviews/fuel-group/open
>> >
>> > On Wed, Jul 8, 2015 at 1:13 PM Sean M. Collins 
>> wrote:
>> >>
>> >> Let's keep it at >4 weeks without comment, and Jenkins failed - similar
>> >> to the script that Kyle Mestery uses for Neutron. In fact, we could
>> >> actually just use his script ;)
>> >>
>> >>
>> >>
>> https://github.com/openstack/neutron/blob/master/tools/abandon_old_reviews.sh
>> >> --
>> >> Sean M. Collins
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> > --
>> > Mike Scherbakov
>> > #mihgen
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Mike Scherbakov
#mihgen


Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-15 Thread Ken'ichi Ohmichi
2015-07-16 3:03 GMT+09:00 Sean Dague :
> On 07/15/2015 01:44 PM, Matt Riedemann wrote:
>> The osapi_v3.enabled option is False by default [1] even though it's
>> marked as the CURRENT API and the v2 API is marked as SUPPORTED (and
>> we've frozen it for new feature development).
>>
>> I got looking at this because osapi_v3.enabled is True in nova.conf in
>> both the check-tempest-dsvm-nova-v21-full job and non-v21
>> check-tempest-dsvm-full job, but only in the v21 job is
>> "x-openstack-nova-api-version: '2.1'" used.
>>
>> Shouldn't the v2.1 API be enabled by default now?
>>
>> [1]
>> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44

Oops, nice catch.
Yeah, we need to make it enabled by default.

> Honestly, we should probably deprecate osapi_v3.enabled and rename it to
> osapi_v21 (or osapi_v2_microversions) so as to not confuse people further.

+1 for renaming it to osapi_v21 (or osapi_v2_microversions).

Thanks
Ken Ohmichi



Re: [openstack-dev] [fuel] Deprecation and backwards compaibility Policy

2015-07-15 Thread Andrew Woodward
Using lazy consensus here, I will update the wiki with this process.

On Mon, Jun 29, 2015 at 4:22 PM Andrew Woodward  wrote:

> Some recent specs have proposed changing some of the API's by either
> removing parts, or changing them in non-backwards way. Additionally there
> are some proposals that are light on details of their impact to already
> supported components.
>
> I propose that deprecation and backwards compatibility should be
> maintained for at least one release before removing support for the
> previous implementation.
>
> This would result in a process such as
>
> A -> A2,B -> B
> R1 -> R2 -> R3
>
> Where
> A is the initial implementation
> A2 is the deprecated A interface, which likely converts A calls to B under
> the hood
> B is the new feature
>
> R[1,2,3] Releases incrementing.
>
> This would require that the interface A is documented in the release notes
> of R2 as being marked for removal. The A interface can then be removed in
> R3.
>
> This will allow for a reasonable time for down-stream users to learn that
> the interface they may be using is going away and they can adapt to the new
> interface before it's the only interface available.
>
> --
>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>
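
The A -> A2,B -> B flow quoted above can be sketched in code — a hedged
illustration with invented names, not an actual Fuel interface:

```python
import warnings

def feature_b(x):
    """B: the new interface, introduced alongside A2 in R2."""
    return x * 2

def feature_a(x):
    """A2: the deprecated A interface, kept through R2.

    It warns and delegates to B; in R3 it is removed entirely.
    """
    warnings.warn("feature_a is deprecated; use feature_b",
                  DeprecationWarning, stacklevel=2)
    return feature_b(x)
```

Downstream callers of feature_a keep working for one release while the
DeprecationWarning (and the R2 release notes) tell them to migrate.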
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Brian Haley

+1

On 07/15/2015 02:47 PM, Carl Baldwin wrote:

As the Neutron L3 Lieutenant along with Kevin Benton for control
plane, and Assaf Muller for testing, I would like to propose Cedric
Brandily as a member of the Neutron core reviewer team under these
areas of focus.

Cedric has been a long time contributor to Neutron showing expertise
particularly in these areas.  His knowledge and involvement will be
very important to the project.  He is a trusted member of our
community.  He has been reviewing consistently [1][2] and community
feedback that I've received indicates that he is a solid reviewer.

Existing Neutron core reviewers from these areas of focus, please vote
+1/-1 for the addition of Cedric to the team.

Thanks!
Carl Baldwin

[1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
[2] http://stackalytics.com/report/contribution/neutron-group/90

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [nova] schedule instance based on CPU frequency ?

2015-07-15 Thread Dugger, Donald D
In re: Static CPU frequency.  For modern Intel CPUs this really isn't true.  
Turbo Boost is a feature that allows certain CPUs in certain conditions to 
actually run at a higher clock rate than what is advertised at power on (the 
havoc this causes code that depends upon timing based upon CPU spin loops is 
left as an exercise for the reader :-)  Likewise, SpeedStep technology allows 
the kernel to slow down the clock by asking for different P-states, trading off 
lower performance for lower power drain by lowering the clock speed.  
Admittedly, SpeedStep is used more on laptops to conserve battery life, not a 
major segment for OpenStack, but it just goes to show that the CPU frequency is 
technically not a constant.

Having said that, I think CPU frequency is a really bad metric to be making any 
kind of scheduling decisions on.  A Core I7 running at 2 GHz is going to 
potentially run code faster than a Core I3 running at 2.2 GHz (issues of 
micro-architecture and cache sizes impact performance much more than minor 
variations in clock speed).  If you really want to schedule based upon CPU 
capability you need to define an abstract metric, identify how many of these 
abstract units apply to the specific compute nodes in your cloud and do 
scheduling based upon that.  There is actually work going on to do just this; 
check out the BP:

https://blueprints.launchpad.net/nova/+spec/normalized-compute-units




--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Friday, July 3, 2015 7:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] schedule instance based on CPU frequency ?

On 07/03/2015 06:32 AM, Sylvain Bauza wrote:
> Le 02/07/2015 21:40, Jay Pipes a écrit :
>> On 07/01/2015 12:23 AM, ChangBo Guo wrote:
>>> thanks Dan and Jay, we don't need to add a new scheduler for that :-),
>>> what about providing the cpu frequency via the /os-hypervisors API? That
>>> means we can report this value automatically, and the value can be used
>>> in high-level management tools.
>>
>> Meh, I'm not too big of a fan of the os-hypervisors extension.
>> Actually, one might say I despise that extension :)
>>
>> That said, I suppose it should be possible to include the output of 
>> the CPU frequency in the cpu_info field there...
>>
>
> Well, IMHO I don't like the Hypervisors API to be a Nagios-like view of
> the hypervisors world, and I don't really see much benefit in pushing the
> metrics up to the API.
>
> On the other hand, those monitor metrics are already sent as 
> notifications on the bus [1] so a 3rd party tool can easily fetch them 
> without necessarily needing to extend the API.

Yeah, the difference here is that CPU frequency really isn't a metric... 
it's a static thing that doesn't change over time. Which is why I think it's OK 
to put it in cpu_info.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Henry Gessau
+1!

On Wed, Jul 15, 2015, Carl Baldwin  wrote:
> As the Neutron L3 Lieutenant along with Kevin Benton for control
> plane, and Assaf Muller for testing, I would like to propose Cedric
> Brandily as a member of the Neutron core reviewer team under these
> areas of focus.
> 
> Cedric has been a long time contributor to Neutron showing expertise
> particularly in these areas.  His knowledge and involvement will be
> very important to the project.  He is a trusted member of our
> community.  He has been reviewing consistently [1][2] and community
> feedback that I've received indicates that he is a solid reviewer.
> 
> Existing Neutron core reviewers from these areas of focus, please vote
> +1/-1 for the addition of Cedric to the team.
> 
> Thanks!
> Carl Baldwin
> 
> [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
> [2] http://stackalytics.com/report/contribution/neutron-group/90





[openstack-dev] [all] requirements and tox.ini

2015-07-15 Thread Robert Collins
One thing I've noticed over the last month or so, looking at many
projects' requirements handling, is that many projects have deps
statements in tox.ini.

There are a couple of nuances here I want to call out - and I want
advice on where to document this so that people can find it.

Firstly, this:

deps = -r{toxinidir}/requirements.txt

Is redundant: pbr reflects the package dependencies from
requirements.txt into the sdist that tox builds. The only reason to
use requirements.txt directly is if there are dependencies that pbr
can't reflect. This includes all URL based dependencies (today). So -
the only projects that need to use this line today are neutron
split-out services, because everyone else should be strictly
describing their dependencies as packages. Once we get constraints up
and running for tox, even this case can be handled more directly - and
we'll get ZUUL_REF support for running dependencies via git checkouts
too.

Then there is this:

deps = -r{toxinidir}/test-requirements.txt

This is ok. We're likely going to transition away from this, but that's
still being formalised in oslo, and we're not ready to roll it out en
masse yet. When we are, it will become something like:
deps = .[test]

Finally, these things are all problematic:
deps = doc8 # or hacking, or flake8, or $ANYTHING
deps = {variable}
deps = {toxinidir}/test-requirements-py3.txt

The -py3 one is problematic because it breaks dependencies in
universal wheels. We're now ready to deprecate this in pbr (but not
remove - backwards compat is for life :)).

The other two are problematic because they are not synchronised with
global-requirements and that leads to things being out of sync across
the project, which leads to non-coinstallability and confusion.
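
Putting that together, a minimal testenv that follows the guidance here would
look something like this (illustrative only):

```ini
[testenv]
# No "-r{toxinidir}/requirements.txt" line: pbr already reflects
# requirements.txt into the sdist that tox builds and installs.
# Test-only dependencies come from the g-r-synchronised file.
deps = -r{toxinidir}/test-requirements.txt
```

Everything else (pinned tools, per-env -py3 files, bare package names) stays
out of deps so global-requirements remains the single source of truth.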

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [fuel] Meeting July 16

2015-07-15 Thread Andrew Woodward
Please note the IRC meeting is scheduled for 16:00 UTC in
#openstack-meeting-alt

Please review meeting agenda and update if there is something you wish to
discuss.

https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community


[openstack-dev] [nova] Status of identity v3 support

2015-07-15 Thread melanie witt
Hi Everyone,

Recently I have started reviewing the patch series about nested quotas in nova 
[1] and I'm having trouble understanding where we currently are with identity 
v3 support in nova. From what I read in a semi recent proposal [2] I think 
things mostly "just work" if you configure to run with v3, but there are some 
gaps.

Nested quotas use the concept of parent/child projects in keystone v3 to allow 
parent projects to delegate quota management to subprojects. This means we'd 
start getting requests with a token scoped to the parent project to modify 
quota of a child project.

With keystone v3 we could get requests with tokens scoped to parent projects 
that act upon child project resources for all APIs in general.

The first patch in the series [3] removes the top-level validation check for 
context.project_id != project_id in the URL, since with v3 it's a supported 
thing for a parent project to act on child project resources. (I don't think 
it's completely correct, in that I think it would allow unrelated projects to 
act on one another.)

Doing this fails the keypairs and security groups tempest tests [4] that verify 
that one project cannot create keypairs or security group rules in a different 
project.

Question: How can we handle a project_id mismatch in a way that supports both 
keystone v2 and v3? Do we augment the check to fall back on checking "is 
parent of" via the keystone API when there's a project_id mismatch?
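
One hedged sketch of such an augmented check — the parent lookup here is a
plain dict for illustration; in practice it would come from the keystone v3
API:

```python
def project_allowed(token_project, target_project, parents):
    """Allow if the token project is the target or one of its ancestors.

    'parents' is a hypothetical child -> parent project mapping.
    """
    project = target_project
    while project is not None:
        if project == token_project:
            return True
        project = parents.get(project)
    return False

parents = {"child": "parent", "parent": None}
print(project_allowed("parent", "child", parents))     # True
print(project_allowed("unrelated", "child", parents))  # False
```

With keystone v2 the parents mapping is empty, so the check degrades to the
existing exact-match behavior, while unrelated projects still get rejected.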

Question: Do we intend to, for example, allow creation of keypairs by a parent 
on behalf of a child, given that the private key is returned to the caller?

Basically, I feel stuck on these reviews because it appears to me that nova 
doesn't fully support identity v3 yet. From what I checked, there aren't yet 
Tempest jobs running against identity v3 either.

Can anyone shed some light on this as I'm trying to see a way forward with the 
nested quotas reviews?

Thanks,
-melanie (irc: melwitt)


[1] 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-quota-driver-api,n,z
[2] https://review.openstack.org/#/c/103617/
[3] https://review.openstack.org/182140/
[4] 
http://logs.openstack.org/40/182140/12/check/check-tempest-dsvm-full/8e51c94/logs/testr_results.html.gz





Re: [openstack-dev] [puppet] puppet-designate POC implementation of virtualenv and docker support.

2015-07-15 Thread Mike Dorman
I have been meaning to ask you about this, so thanks for posting.

I like the approach.  Definitely a lot cleaner than the somewhat hardcoded 
dependencies and subscriptions that are in the modules now.

Do you envision that long term the docker/venv/whatever else implementation 
(like you have in designate_ext) would actually be part of the upstream Puppet 
module?  Or would we provide the hooks that you describe, and leave it up to 
other modules to handle the non-package-based installs?

Mike


From: Clayton O'Neill
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, July 13, 2015 at 8:34 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [puppet] puppet-designate POC implementation of 
virtualenv and docker support.

Last year I put together a virtualenv patch for the Designate puppet module, 
but the patch was too invasive of a change and too opinionated to be practical 
to merge.  I've taken another shot at this with the approach of implementing 
well defined hooks for various phases of the module. This should  allow 
external support for alternative ways of installing and running services (such 
as virtualenv, and docker).  I think this patch is now mostly ready for some 
outside reviews (we'll be running the virtualenv support in production soon).

The puppet-designate change to support this can be found here:  
https://review.openstack.org/#/c/197172/

The supporting puppet-designate_ext module can be found here: 
https://github.com/twc-openstack/puppet-designate_ext

The basic approach is to split the module dependency chain into 3 phases:

 * install begin/end
 * config begin/end
 * service begin/end

Each of these phases have a pair of corresponding anchors that are used 
internally for dependencies and notifications.  This allows external modules to 
hook into this flow without having to change the module.  For example, the 
virtualenv support will build the virtualenv environment between the 
designate::install::begin and designate::install::end anchors.  Additionally, 
the virtualenv support will notify the designate::install::end anchor.  This 
allows other resources to subscribe to this anchor without needing to know if 
the software is being installed as a package, virtualenv, or docker image.
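
The anchor idea is language-agnostic; a small Python sketch (names invented)
of the same begin/end pattern:

```python
class Phase:
    """A module phase with begin/end anchors that external code can hook."""
    def __init__(self, name):
        self.name = name
        self._hooks = []

    def hook(self, fn):
        # External modules register work to run between begin and end.
        self._hooks.append(fn)

    def run(self, log):
        log.append("%s::begin" % self.name)
        for fn in self._hooks:
            fn(log)
        log.append("%s::end" % self.name)

install = Phase("install")
# e.g. a virtualenv-based installer hooks in, without the base module
# knowing whether the install is a package, virtualenv, or docker image:
install.hook(lambda log: log.append("build virtualenv"))

log = []
install.run(log)
print(log)  # ['install::begin', 'build virtualenv', 'install::end']
```

Anything that depends on "installation is finished" subscribes to the end
anchor only, which is exactly the decoupling the patch aims for.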

I think this approach could be applied mostly as is to at least some of the 
existing modules with similar levels of changes.  For example, horizon, 
keystone & heat would probably be fairly straightforward.  I suspect this 
approach would need refinement for more complex services like neutron and nova. 
 We would need to work out how to manage things like external packages that 
would still be needed if running a virtualenv based install, but probably not 
needed if running a docker based install.  We would probably also want to 
consider how to be more granular about service notifications.

I'd love to get some feedback on this approach if people have time to look it 
over.  We're still trying to move away from using packages for service installs 
and I'd like to figure out how to do that without carrying heavyweight and 
fragile patches to the openstack puppet modules.


Re: [openstack-dev] [murano] [congress] murano congress integration test - trusts

2015-07-15 Thread Victor Ryzhenkin
Hi folks!
I’ve tried to debug this problem and found strange auth behavior in [1].
So, at least, if we have the parameter use_trusts=True in murano.conf,
keystone_client.tenant_name returns None [2].
I’ve tried to use v3 auth instead of v2 and, accordingly, replaced tenant_name 
with trust_id for auth, but in that case we get the same error, because 
auth.token and session.get_token(auth) return None too.
The next step was to hard-assign a fresh token to the auth.token field, and in 
this case I got a different error [3].

I’m really confused about it..


[1] 
https://github.com/openstack/murano/blob/master/murano/engine/client_manager.py#L88-L111
[2] 
https://github.com/openstack/murano/blob/master/murano/engine/client_manager.py#L105
[3] http://paste.openstack.org/show/378669/

Best regards!

-- 
Victor Ryzhenkin
Junior QA Engineer
freerunner on #freenode

On July 16, 2015 at 0:49:40, Tim Hinrichs (t...@styra.com) wrote:

Hi Filip,

Did you get this resolved?  If not, could you point me to the gate failure?  
I'm on #congress for higher-bandwidth communication.

Tim



On Wed, Jul 15, 2015 at 6:39 AM Filip Blaha  wrote:
Hi all

our congress integration tests were broken by the change [1] (trusts
enabled by default). However, I suspect the problem could be with the
initialization of the congress client [2] or in python-congressclient. Any
ideas about that? Thanks

[1] https://review.openstack.org/#/c/194615/
[2]
https://github.com/openstack/murano/blob/6ac473fabbc2d2e1f3ed4c3d36be6439c1d6c2cd/murano/engine/client_manager.py#L102

Regards
Filip



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] token revocation woes

2015-07-15 Thread Dolph Mathews
On Wed, Jul 15, 2015 at 4:51 PM, Matt Fischer  wrote:

> I'm having some issues with keystone revocation events. The bottom line is
> that due to the way keystone handles the clean-up of these events[1],
> having more than a few leads to:
>
>  - bad performance, up to 2x slower token validation with about 600 events
> based on my perf measurements.
>  - database deadlocks, which cause API calls to fail, more likely with
> more events it seems
>
> I am seeing this behavior in code from trunk on June 11 using Fernet
> tokens, but the token backend does not seem to make a difference.
>
> Here's what happens to the db in terms of deadlock:
> 2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
> (OperationalError) (1213, 'Deadlock found when trying to get lock; try
> restarting transaction') 'DELETE FROM revocation_event WHERE
> revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55,
> 41, 55186),)
>
> When this starts happening, I just go truncate the table, but this is not
> ideal. If [1] is really true, then the design is not great: it sounds like
> keystone is doing a revocation event clean-up on every token validation
> call. Reading and deleting/locking from my db cluster is not something I
> want to do on every validate call.
>

Unfortunately, that's *exactly* what keystone is doing. Adam and I had a
conversation about this problem in Vancouver which directly resulted in
opening the bug referenced on the operator list:

  https://bugs.launchpad.net/keystone/+bug/1456797

Neither of us remembered the actual implemented behavior, which is what
you've run into and Deepti verified in the bug's comments.


>
> So, can I turn off token revocation for now? I didn't see an obvious no-op
> driver.
>

Not sure how, other than writing your own no-op driver, or perhaps an
extended driver that doesn't try to clean the table on every read?


> And in the long-run can this be fixed? I'd rather do almost anything else,
> including writing a cronjob than what happens now.
>

If anyone has a better solution than the current one that's also better
than requiring a cron job on something like keystone-manage
revocation_flush, I'd love to hear it.
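
One sketch of what such an out-of-band flush could do: prune in small batches
so each transaction stays short, reducing the lock contention behind the
deadlocks above. This is illustrated against sqlite3; the table and column
names come from the traceback earlier in the thread, but the real table lives
in the keystone database:

```python
import sqlite3
from datetime import datetime, timedelta

# In-memory stand-in for keystone's revocation_event table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE revocation_event (id INTEGER PRIMARY KEY, revoked_at TIMESTAMP)")
now = datetime(2015, 7, 15)
db.executemany("INSERT INTO revocation_event VALUES (?, ?)",
               [(i, now - timedelta(hours=i)) for i in range(100)])

cutoff = now - timedelta(hours=48)  # e.g. the maximum token lifetime
while True:
    # Deleting in small batches keeps each transaction short, unlike the
    # single large DELETE that keystone runs on every validation.
    cur = db.execute(
        "DELETE FROM revocation_event WHERE id IN "
        "(SELECT id FROM revocation_event WHERE revoked_at < ? LIMIT 10)",
        (cutoff,))
    db.commit()
    if cur.rowcount == 0:
        break

remaining = db.execute("SELECT COUNT(*) FROM revocation_event").fetchone()[0]
print(remaining)
```

Run from cron, this would keep the table small without putting the delete on
the token-validation path at all.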


> [1] -
> http://lists.openstack.org/pipermail/openstack-operators/2015-June/007210.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Separate repo for Fuel Agent

2015-07-15 Thread Mike Scherbakov
+1, great to see it is being pushed through. This is not always pleasant
work, but it certainly makes our life easier.

> 130 vs 7275 in fuel-web (fuel-agent-7.0.0-1.mos7275.noarch.rpm)
Two questions:
a) how is our patching supposed to work if we have a lower version now?
b) why is there a "mos" code name in there, if it's a pure Fuel package?

Thanks,

On Wed, Jul 15, 2015 at 9:12 AM Oleg Gelbukh  wrote:

> Nice work, Vladimir. Thank you for pushing this, it's a really important
> step in decoupling things from the consolidated repository.
>
> --
> Best regards,
> Oleg Gelbukh
>
> On Wed, Jul 15, 2015 at 6:47 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> I'm glad to announce that everything about this task is done. ISO build
>> job uses this new repository [1]. BVT is green. Fuel Agent rpm spec has
>> been moved to the new repo and perestroika has also been switched to build
>> fuel-agent package from the new repo. The only difference that could
>> potentially affect deployment is that fuel-agent package built from the new
>> repo will have a lower version, because the number of commits in the new repo
>> is around 130 vs 7275 in fuel-web (fuel-agent-7.0.0-1.mos7275.noarch.rpm).
>> But I believe it is going to be fine as long as there is only one fuel-agent
>> package in the rpm repository.
>>
>> Next step is to remove stackforge/fuel-web/fuel_agent directory.
>>
>>
>> [1] https://github.com/stackforge/fuel-agent.git
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Jul 15, 2015 at 2:19 AM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>>
>>> Thanks Vladimir. Let's ensure we get it done sooner rather than later (this
>>> might require testing in a custom ISO..) - we are approaching FF, and I
>>> expect growing queues of patches to land...
>>>
>>> On Tue, Jul 14, 2015 at 6:03 AM Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Dear colleagues,

 New repository [1] has been created. So, please port all your Fuel
 Agent related review requests from stackforge/fuel-web to this new
 repository. Currently, I am testing these two patches
 https://review.openstack.org/#/c/200595
 https://review.openstack.org/#/c/200025. If they work, we need to
 merge them and that is it. Review is welcome.



 [1] https://github.com/stackforge/fuel-agent.git

 Vladimir Kozhukalov

 On Fri, Jul 10, 2015 at 8:14 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Ok, guys.
>
> Looks like there are no objections. At the moment I need to create
> actual version of upstream repository which is going to be sucked in by
> OpenStack Infra. Please, be informed that all patches changing
> fuel-web/fuel_agent that will be merged after this moment will need to be
> ported into the new fuel-agent repository.
>
>
> Vladimir Kozhukalov
>
> On Fri, Jul 10, 2015 at 6:38 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Guys, we are close to moving the fuel_agent directory into a separate
>> repository. The action flow is going to be as follows:
>>
>> 1) Create verify jobs on our CI
>> https://review.fuel-infra.org/#/c/9186 (DONE)
>> 2) Freeze fuel_agent directory in
>> https://github.com/stackforge/fuel-web (will announce in a separate
>> mail thread). That means we stop merging patches into master which change
>> fuel_agent directory. Unfortunately, all review requests need to be
>> re-sent, but it is not going to be very difficult.
>> 3) Create temporary upstream repository with fuel_agent/* as a
>> content. I'm not planning to move 5.x and 6.x branches. Only master. So,
>> all fixes for 5.x and 6.x will be living in fuel-web.
>> 4) This upstream repository is going to be sucked in by
>> openstack-infra. Patch is here
>> https://review.openstack.org/#/c/199178/ (review is welcome) I don't
>> know how long it is going to take. Will try to poke infra people to do 
>> this
>> today.
>> 5) Then we need to accept two patches into new fuel-agent repository:
>>  - rpm spec (extraction from fuel-web/specs/nailgun.spec) (ready, but
>> there is no review request)
>>  - run_tests.sh (to run tests) (ready, but there is no review request)
>>
>> !!! By this moment there won't be any impact on ISO build process !!!
>>
>> 6) Then we need to change two things at the same time (review is
>> welcome)
>>   - fuel-web/specs/nailgun.spec in order to prevent fuel-agent
>> package building  https://review.openstack.org/#/c/200595
>>   - fuel-main so as to introduce new fuel-agent repository into the
>> build process https://review.openstack.org/#/c/200025
>>
>> And good luck to me -)
>>
>>
>> Vladimir Kozhukalov
>>
>> On Wed, Jul 8, 2015 at 12:53 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> There were some questions from Alexandra

[openstack-dev] [keystone] token revocation woes

2015-07-15 Thread Matt Fischer
I'm having some issues with keystone revocation events. The bottom line is
that due to the way keystone handles the clean-up of these events[1],
having more than a few leads to:

 - bad performance, up to 2x slower token validation with about 600 events
based on my perf measurements.
 - database deadlocks, which cause API calls to fail, more likely with more
events it seems

I am seeing this behavior in code from trunk on June 11 using Fernet
tokens, but the token backend does not seem to make a difference.

Here's what happens to the db in terms of deadlock:
2015-07-15 21:25:41.082 31800 TRACE keystone.common.wsgi DBDeadlock:
(OperationalError) (1213, 'Deadlock found when trying to get lock; try
restarting transaction') 'DELETE FROM revocation_event WHERE
revocation_event.revoked_at < %s' (datetime.datetime(2015, 7, 15, 18, 55,
41, 55186),)

When this starts happening, I just go truncate the table, but this is not
ideal. If [1] is really true, then the design is not great: it sounds like
keystone is doing a revocation event clean-up on every token validation
call. Reading and deleting/locking from my db cluster is not something I
want to do on every validate call.

So, can I turn off token revocation for now? I didn't see an obvious no-op
driver. And in the long run, can this be fixed? I'd rather do almost
anything else, including writing a cron job, than live with what happens now.

[1] -
http://lists.openstack.org/pipermail/openstack-operators/2015-June/007210.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] [congress] murano congress integration test - trusts

2015-07-15 Thread Kirill Zaitsev
Just to keep track, Victor started a bug on that: 
https://bugs.launchpad.net/murano/+bug/1474938/ Haven’t looked close into the 
problem, yet, though. Will try to do so some time soon.

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 15 Jul 2015 at 16:44:24, Filip Blaha (filip.bl...@hp.com) wrote:

Hi all  

our congress integration tests were broken by the change [1] (trusts
enabled by default). However, I suspect the problem could be with the
initialization of the congress client [2] or in python-congressclient. Any
ideas about that? Thanks

[1] https://review.openstack.org/#/c/194615/  
[2]  
https://github.com/openstack/murano/blob/6ac473fabbc2d2e1f3ed4c3d36be6439c1d6c2cd/murano/engine/client_manager.py#L102
  

Regards  
Filip  



__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] [congress] murano congress integration test - trusts

2015-07-15 Thread Tim Hinrichs
Hi Filip,

Did you get this resolved?  If not, could you point me to the gate
failure?  I'm on #congress for higher-bandwidth communication.

Tim



On Wed, Jul 15, 2015 at 6:39 AM Filip Blaha  wrote:

> Hi all
>
> our congress integration tests were broken by the change [1] (trusts
> enabled by default). However, I suspect the problem could be with the
> initialization of the congress client [2] or in python-congressclient. Any
> ideas about that? Thanks
>
> [1] https://review.openstack.org/#/c/194615/
> [2]
>
> https://github.com/openstack/murano/blob/6ac473fabbc2d2e1f3ed4c3d36be6439c1d6c2cd/murano/engine/client_manager.py#L102
>
> Regards
> Filip
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] First Sprint proposal

2015-07-15 Thread Jeremy Stanley
On 2015-07-15 14:04:47 -0700 (-0700), Spencer Krum wrote:
> It is also possible to use the openstack-infra asterisk server for voice
> chat. Historically this service has out-performed google hangout and
> bluejeans. It doesn't use video though.

For those who don't have the URL memorized:

https://wiki.openstack.org/wiki/Infrastructure/Conferencing

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] First Sprint proposal

2015-07-15 Thread Spencer Krum
It is also possible to use the openstack-infra asterisk server for voice
chat. Historically this service has out-performed google hangout and
bluejeans. It doesn't use video though.

-- 
  Spencer Krum
  n...@spencerkrum.com

On Tue, Jul 14, 2015, at 09:09 AM, Emilien Macchi wrote:
> We decided during our weekly meeting that the Sprint will happen
> virtually on IRC on Wed 9/2 – Fri 9/4.
> 
> We will use #openstack-sprint (freenode) IRC channel and Google Hangout
> / Bluejeans if needed.
> 
> We made progress on defining an agenda:
> https://etherpad.openstack.org/p/puppet-liberty-mid-cycle
> 
> Please have a look and feel free to add / complete the topics.
> 
> See you there,
> 
> On 07/13/2015 09:03 AM, Emilien Macchi wrote:
> > I just closed the poll after one week.
> > It will happen on Wed 9/2 – Fri 9/4.
> > 
> > We'll work on the agenda during the following weeks.
> > 
> > Best,
> > 
> > On 07/06/2015 10:26 PM, Matt Fischer wrote:
> >> Operators mid-cycle is Aug 17-21 at a TBD location, I voted accordingly.
> >> Thanks.
> >>
> >> On Mon, Jul 6, 2015 at 12:09 PM, Emilien Macchi wrote:
> >>
> >>
> >>
> >> On 07/06/2015 02:05 PM, Matt Fischer wrote:
> >> > I think this is a great idea. I'd like to get a firm date on the
> >> > operators mid-cycle before I vote though.
> >>
> >> If it overlaps, we can add more slots. Feel free to ping me on IRC for
> >> that, I'll update the doodle.
> >>
> >> Thanks,
> >>
> >> >
> >> > On Mon, Jul 6, 2015 at 11:31 AM, Emilien Macchi wrote:
> >> >
> >> > Hi,
> >> >
> >> > I would like to organize our first sprint for contributing to 
> >> our Puppet
> >> > OpenStack modules. It will happen in Red Hat Montreal (Canada) 
> >> office,
> >> > during 3 days.
> >> >
> >> > If you're interested to participate, please find the slots that 
> >> may work
> >> > for you [1]. Any slot suggestion is welcome though.
> >> > Also, please bring on the etherpad any topics we should work on 
> >> together
> >> > [2].
> >> > We will groom topics during a meeting and prepare the agenda 
> >> before the
> >> > sprint.
> >> >
> >> > [1] http://doodle.com/xk2sfgu4xsi4y6n4r46t7u9k
> >> > [2] https://etherpad.openstack.org/p/puppet-liberty-mid-cycle
> >> >
> >> > Regards,
> >> > --
> >> > Emilien Macchi
> >> >
> >> >
> >> > 
> >> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> 
> >> >   
> >>  
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >> >
> >> >
> >> >
> >> >
> >> 
> >> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> 
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> --
> >> Emilien Macchi
> >>
> >>
> >> 
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> 
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> -- 
> Emilien Macchi
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Hongbin Lu
+1 for the idea of using Nova flavor directly.

The reason we introduced the “platform” field to indicate “vm” or “baremetal” is
that Magnum needs to map a bay to a Heat template (which will be used to
provision the bay). Currently, Magnum has three layers of mapping:

- platform: vm or baremetal
- os: atomic, coreos, …
- coe: kubernetes, swarm or mesos

I think we could just replace “platform” with “flavor”, if we can populate a
list of flavors for VM and another list of flavors for baremetal (we may need
an additional list of flavors for container in the future for the nested
container use case). Then, the new three layers would be:

- flavor: baremetal, m1.small, m1.medium, …
- os: atomic, coreos, ...
- coe: kubernetes, swarm or mesos

This approach can avoid introducing a new field in baymodel to indicate what 
Nova flavor already indicates.
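A toy sketch of the lookup this would give us (all names, flavor labels, and template file names below are invented for illustration, not Magnum code):

```python
# Toy illustration of the three-layer mapping proposed above: derive
# vm/baremetal from the flavor instead of a separate baymodel field,
# then resolve (server type, os, coe) to a Heat template. All names
# and file names are invented for the example.
BAREMETAL_FLAVORS = frozenset(['baremetal'])  # flavors backed by Ironic

TEMPLATE_MAP = {
    ('vm', 'atomic', 'kubernetes'): 'kubecluster.yaml',
    ('baremetal', 'atomic', 'kubernetes'): 'kubecluster-ironic.yaml',
    ('vm', 'coreos', 'kubernetes'): 'kubecluster-coreos.yaml',
}

def pick_template(flavor, os_distro, coe):
    # The flavor alone tells us whether the bay lands on vm or baremetal.
    kind = 'baremetal' if flavor in BAREMETAL_FLAVORS else 'vm'
    return TEMPLATE_MAP[(kind, os_distro, coe)]
```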

Best regards,
Hongbin

From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: July-15-15 12:37 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

Maybe somehow I missed the point, but why not just use raw Nova flavors? They
already abstract away ironic vs kvm vs hyperv, etc.

Thanks,
Kevin

From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None


(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver


http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 7:44 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

One drawback to virt_type, if not seen in the context of the acceptable values,
is that it should be set to values like libvirt, xen, ironic, etc. That might 
actually be good. Instead of using the values 'vm' or 'baremetal', we use the 
name of the nova virt driver, and interpret those to be vm or baremetal types. 
So if I set the value to 'xen', I know the nova instance type is a vm, and 
'ironic' means a baremetal nova instance.

Adrian
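Interpreting virt driver names that way could be a simple lookup. This is a sketch only; the driver-name sets below are illustrative, not an exhaustive list of nova virt drivers:

```python
# Sketch of the interpretation suggested above: store the nova virt
# driver name and derive the vm/baremetal server type from it. The
# name sets are illustrative, not an exhaustive list of nova drivers.
BAREMETAL_VIRT_TYPES = frozenset(['ironic'])
VM_VIRT_TYPES = frozenset(['libvirt', 'xen', 'vmware', 'hyperv'])

def server_type(virt_type):
    if virt_type in BAREMETAL_VIRT_TYPES:
        return 'baremetal'
    if virt_type in VM_VIRT_TYPES:
        return 'vm'
    raise ValueError('unknown virt_type: %s' % virt_type)
```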


 Original message 
From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?
I am going to propose a third option:

3. virt_type

I have concerns about option 1 and 2, because “instance_type” and flavor were
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Magnum template manage use platform VS others 
as a type?


Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some old historical reason, we use platform to indicate 'vm' or 'baremetal'.
This no longer seems appropriate. @Adrian proposed nova_instance_type, and
others preferred different names; let me summarize as below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but not proposed any name)


Let's try to reach agreement ASAP. I think counting the votes and taking the
winner as the proper name is the best solution (considering community diversity).


BTW, if you have not proposed any better name and just vote to disagree with
all of them, I think that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei W

Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Joshua Harlow

Chris Friesen wrote:

On 07/15/2015 09:31 AM, Joshua Harlow wrote:

I do like experiments!

What about going even farther and trying to integrate somehow into mesos?

https://mesos.apache.org/documentation/latest/mesos-architecture/

Replace the Hadoop executor, MPI executor with a 'VM executor' and
perhaps we
could eliminate a large part of the scheduler code (just a thought)...


Is the mesos scheduler sufficiently generic as to encompass all the
filters we currently have in nova?


Unsure; if not, it's just another open-source project, right? I'm sure 
they'd love to collaborate, and maybe they will even do most of the 
work? Who knows...




Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Chris Friesen

On 07/15/2015 08:18 AM, Ed Leafe wrote:


What I'd like to investigate is replacing the current design of having
the compute nodes communicating with the scheduler via message queues.
This design is overly complex and has several known scalability
issues. My thought is to replace this with a Cassandra [1] backend.
Compute nodes would update their state to Cassandra whenever they
change, and that data would be read by the scheduler to make its host
selection. When the scheduler chooses a host, it would post the claim
to Cassandra wrapped in a lightweight transaction, which would ensure
that no other scheduler has tried to claim those resources. When the
host has built the requested VM, it will delete the claim and update
Cassandra with its current state.

One main motivation for using Cassandra over the current design is
that it will enable us to run multiple schedulers without increasing
the raciness of the system.


It seems to me that the ability to run multiple schedulers comes from the fact 
that you're talking about claiming resources in the data store, and not from 
anything inherent in Cassandra itself.


Why couldn't we just update the existing nova scheduler to claim resources in 
the existing database in order to get the same reduction of raciness? (Thus 
allowing multiple schedulers running in parallel.)


Chris
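Either way, the claim step being debated is essentially a compare-and-set: `INSERT ... IF NOT EXISTS` (a lightweight transaction) in Cassandra, or a conditional `UPDATE` in SQL. A toy in-memory model of the semantics, illustrative only and not nova or Cassandra code:

```python
# Toy model of the claim-before-build idea from this thread: a claim
# succeeds only if no other scheduler already claimed the resource,
# which is the guarantee Cassandra's INSERT ... IF NOT EXISTS (or a
# conditional SQL UPDATE) provides. Illustrative only.
class ClaimStore(object):
    def __init__(self):
        self._claims = {}

    def claim(self, host, request_id):
        # Compare-and-set: the first scheduler to claim a host wins;
        # concurrent schedulers see False instead of racing.
        if host in self._claims:
            return False
        self._claims[host] = request_id
        return True

    def release(self, host):
        # The compute node deletes the claim once the VM is built.
        self._claims.pop(host, None)
```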

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Chris Friesen

On 07/15/2015 09:31 AM, Joshua Harlow wrote:

I do like experiments!

What about going even farther and trying to integrate somehow into mesos?

https://mesos.apache.org/documentation/latest/mesos-architecture/

Replace the Hadoop executor, MPI executor with a 'VM executor' and perhaps we
could eliminate a large part of the scheduler code (just a thought)...


Is the mesos scheduler sufficiently generic as to encompass all the filters we 
currently have in nova?


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] Implementation of ABC MetaClasses

2015-07-15 Thread John Griffith
On Wed, Jul 8, 2015 at 12:48 AM, Marc Koderer  wrote:

>
> On 08.07.2015 at 01:37, Mike Perez wrote:
>
> > On 18:38 Jun 22, Marc Koderer wrote:
> >>
> >> On 20.06.2015 at 01:59, John Griffith wrote:
> >>> * The BaseVD represents the functionality we require from all drivers.
> >>> ​Yep
> >>> ​
> >>> * The additional ABC classes represent features that are not required
> yet.
> >>>  * These are represented by classes because some features require a
> >>> bundle of methods for it to be fulfilled. Consistency group is an
> >>> example. [2]
> >>>
> >>> ​Sure, I suppose that's fine for things like CG and Replication.
> Although I would think that if somebody implemented something optional like
> CG's that they should be able to figure out they need all of the "cg"
> methods, it's kinda like that warning on ladders to not stand on the very
> top rung.  By the way, maybe we should discuss the state of "optional API
> methods" at the mid-cycle.
> >>>
> >>>  * Any driver that wishes to mark their driver as supporting a
> >>> non-required feature inherits this feature and fulfills the required
> >>> methods.
> >>>
> >>> ​Yeah... ok​, I guess.
> >>>
> >>> * After communication is done on said feature being required, there
> >>> would be time for driver maintainers to begin supporting it.
> >>>  * Eventually that feature would disappear from it's own class and be
> >>> put in the BaseVD. Anybody not supporting it would have a broken
> >>> driver, a broken CI, and eventually removed from the release.
> >>>
> >>> ​Sure, I guess my question is what's the real value in this
> intermediate step.  The bulk of these are things that I'd argue shouldn't
> be optional anyway (snapshots, transfers, manage, extend, retype and even
> migrate).  Snapshots in particular I find surprising to be considered as
> "optional“.
> >>
> >> Reducing the number of classes/optional functions sounds good to me.
> >> I think it’s quite valuable to discuss what are the mandatory functions
> >> of a cinder driver. Before ABC nobody really cared because all drivers
> simply raised a run-time exception :)
> >
> > If Marc is fine with this, I see no harm in us trying out John's
> proposal of
> > using decorators in the volume driver class.
> >
> > --
> > Mike Perez
>
>
> +1 sure, happy to see the code :)
>
> Regards
> Marc
>
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

​Ok, so I spent a little time on this; first gathering some detail around
what's been done as well as proposing a patch to sort of step back a bit
and take another look at this [1].

Here's some more detail on what is bothering me here:
* Inheritance model

One of the things the work has done is move us away from a mostly singular
inheritance OO structure for the Volume Drivers, where each level of
inheritance was specifically for a more general differentiation.  For
example, in driver.py we had:

VolumeDriver(object):
-- ISCSIDriver(VolumeDriver):
-- FakeISCSIDriver(ISCSIDriver):
-- ISERDriver(ISCSIDriver):
-- FakeISERDriver(FakeISCSIDriver):
-- FibreChannelDriver(VolumeDriver):

Arguably the fakes probably should be done differently and ISCSI, ISER and
Fibre should be able to go away if we follow through with the target driver
work we started.

Under the new model we started with ABC, we ended up with 25 base classes
to work with, and the base VolumeDriver itself is now composed of 12 other
independent base classes.

BaseVD(object):
-- LocalVD(object):
-- SnapshotVD(object):
-- ConsistencyGroupVD(object):
-- CloneableVD(object):
-- CloneableImageVD(object):
-- MigrateVD(object):
-- ExtendVD(object):
-- RetypeVD(object):
-- TransferVD(object):
-- ManageableVD(object):
-- ReplicaVD(object):
-- VolumeDriver(ConsistencyGroupVD, TransferVD, ManageableVD, ExtendVD,
-- ProxyVD(object): (* my personal favorite*)
-- ISCSIDriver(VolumeDriver):
-- FakeISCSIDriver(ISCSIDriver):
-- ISERDriver(ISCSIDriver):
-- FakeISERDriver(FakeISCSIDriver):
-- FibreChannelDriver(VolumeDriver):
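For reference, the ABC composition listed above is expressed roughly like this. A simplified sketch, not the actual Cinder class bodies; the real code uses six.add_metaclass(abc.ABCMeta) for py2/py3 compatibility and has far more methods per class:

```python
# Simplified sketch of the ABC-based layering listed above: each
# optional feature is its own abstract class, and a concrete driver
# advertises support by inheriting it. Not the actual Cinder bodies.
import abc

class BaseVD(abc.ABC):
    @abc.abstractmethod
    def create_volume(self, volume):
        """Functionality required from every driver."""

class SnapshotVD(abc.ABC):
    @abc.abstractmethod
    def create_snapshot(self, snapshot):
        """Required only from drivers claiming snapshot support."""

class VolumeDriver(BaseVD, SnapshotVD):
    # The reference driver inherits the feature bundle and must
    # implement every abstract method, or instantiation fails.
    def create_volume(self, volume):
        return dict(volume, status='available')

    def create_snapshot(self, snapshot):
        return dict(snapshot, status='available')
```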

The idea behind this was to break out different functionality into its own
"class" so that we could enforce an entire feature based on whether a
backend implemented it or not - a good idea I think, but hindsight is 20/20
and I have some problems with this.

I'm not a fan of having the base VolumeDriver that ideally could be used as
a template and source of truth be composed of 12 different classes.  I
think this has caused some confusion among a number of contributors.

I think this c
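For comparison, the decorator approach John mentions earlier in the thread could look roughly like this. Purely illustrative; this is not the actual proposed Cinder change, and the decorator and helper names are invented:

```python
# Purely illustrative sketch of the decorator alternative discussed in
# this thread: tag optional methods on one driver class instead of
# composing many ABC mixins. Names here are invented for the example.
def optional_feature(name):
    def decorator(func):
        func._feature = name
        return func
    return decorator

class VolumeDriver(object):
    def create_volume(self, volume):        # mandatory for all drivers
        return volume

    @optional_feature('snapshot')
    def create_snapshot(self, snapshot):    # optional, tagged
        return snapshot

def features(driver_cls):
    # Collect the optional features a driver class implements.
    return set(attr._feature for attr in vars(driver_cls).values()
               if hasattr(attr, '_feature'))
```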

Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Edgar Magana
+1 Excellent addition to the team.

Edgar




On 7/15/15, 11:47 AM, "Carl Baldwin"  wrote:

>As the Neutron L3 Lieutenant along with Kevin Benton for control
>plane, and Assaf Muller for testing, I would like to propose Cedric
>Brandily as a member of the Neutron core reviewer team under these
>areas of focus.
>
>Cedric has been a long time contributor to Neutron showing expertise
>particularly in these areas.  His knowledge and involvement will be
>very important to the project.  He is a trusted member of our
>community.  He has been reviewing consistently [1][2] and community
>feedback that I've received indicates that he is a solid reviewer.
>
>Existing Neutron core reviewers from these areas of focus, please vote
>+1/-1 for the addition of Cedric to the team.
>
>Thanks!
>Carl Baldwin
>
>[1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
>[2] http://stackalytics.com/report/contribution/neutron-group/90
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

While I'm an obsolete™ core reviewer and not an official member of any
of those subteams, I will nevertheless say that Cedric will be a very
valuable addition to the team. His reviews are usually insightful and
touch the root of the problem, not mere cosmetic aspects of a change.

+1.

On 07/15/2015 08:47 PM, Carl Baldwin wrote:
> As the Neutron L3 Lieutenant along with Kevin Benton for control 
> plane, and Assaf Muller for testing, I would like to propose
> Cedric Brandily as a member of the Neutron core reviewer team under
> these areas of focus.
> 
> Cedric has been a long time contributor to Neutron showing
> expertise particularly in these areas.  His knowledge and
> involvement will be very important to the project.  He is a trusted
> member of our community.  He has been reviewing consistently [1][2]
> and community feedback that I've received indicates that he is a
> solid reviewer.
> 
> Existing Neutron core reviewers from these areas of focus, please
> vote +1/-1 for the addition of Cedric to the team.
> 
> Thanks! Carl Baldwin
> 
> [1]
> https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z 
> [2] http://stackalytics.com/report/contribution/neutron-group/90
> 
> __

>
> 
OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVpr3YAAoJEC5aWaUY1u57LJwIANeE5/3KRnRc2bCwCt0mZLN7
7koRMwyuBe8E62b16bBulhaIhXNAhWJ2H5ZsqjIY8yoz2mbqEVk/PRRGLsesMbJC
jQ8e7AGac+y68EkodLZSpNm3Al9JXigUYCX7Ung1YpVcKapDzmMWBRuSNPLtr00k
LtCtS+NWsjsWmKWJr3P5F4p5lIy2Bd6nV1Q1y2qhHcZio3A/Fm8DvAgByO4uBxKm
fBWTObok6rGxWYUTJ83L+Rr4n4RRW8RQ2i44Wq8wVTx4baMcB6u0B8uz8iwzDGAd
Mfaywqa5/2GMSjzRG16YUgM6GCqNjfW3hp0MctOm9sRS4gOA8sAO3ThB7pQg7yE=
=U0J2
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas v2

2015-07-15 Thread Doug Wiegley
From what I’m hearing, Akihiro is recommending that we put it somewhere where 
it’s easy to add horizon cores to the repo, which seems like it’d be very 
useful. I think we can roll a parallel repo under the neutron tent pretty 
easily. For the sake of getting collaboration moving, I’d suggest:

1. Submit to gerrit in neutron-lbaas/horizon
2. We’ll get the new repo creation in progress.
3. We’ll move the patches to that repo when it’s ready, and add neutron-lbaas 
cores and any interesting/willing horizon cores at the same time (and the 
authors of said panels.)

I’ll submit an infra patch for the new repo/package right now.

Thanks,
doug


> On Jul 15, 2015, at 11:35 AM, Balle, Susanne  wrote:
> 
> I agree with German. Let’s keep things together for now. Susanne
>  
> From: Eichberger, German 
> Sent: Wednesday, July 15, 2015 1:31 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Balle, Susanne; Tonse, Milan
> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for 
> neutron-lbaas v2
>  
> Hi,
>  
> Let’s move it into the LBaaS repo that seems like the right place for me —
>  
> Thanks,
> German
>  
> From: "Jain, Vivek" mailto:vivekj...@ebay.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Tuesday, July 14, 2015 at 10:22 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Cc: "Balle Balle, Susanne"  >, "Tonse, Milan"  >
> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for 
> neutron-lbaas v2
>  
> Thanks Akihiro. Currently the lbaas panels are part of the horizon repo. Is 
> there an easy way to de-couple the lbaas dashboard from horizon? I think that 
> would simplify development efforts. What does it take to separate the lbaas 
> dashboard from horizon?
>  
> Thanks,
> Vivek
>  
> From: Akihiro Motoki mailto:amot...@gmail.com>>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: Tuesday, July 14, 2015 at 10:09 AM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> mailto:openstack-dev@lists.openstack.org>>
> Cc: "Balle, Susanne" mailto:susanne.ba...@hp.com>>, 
> "Tonse, Milan" mailto:mto...@ebay.com>>
> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for 
> neutron-lbaas v2
>  
> Another option is to create a project under openstack.
> designate-dashboard project takes this approach,
> and the core team of the project is both horizon-core and designate-core.
> We can do the similar approach. Thought?
>  
> I have one question.
> Do we keep a separate place forever, or do we want to merge into the horizon
> repo once the implementation is available?
> If we have a separate repo for the LBaaS v2 panel, we need to release it 
> separately.
>  
> I am not sure I am available at the LBaaS meeting, but I would like to help
> with this effort as a core from both horizon and neutron.
>  
> Akihiro
>  
>  
> 2015-07-15 1:52 GMT+09:00 Doug Wiegley  >:
> I’d be good submitting it to the neutron-lbaas repo, under a horizon/ 
> directory. We can iterate there, and talk with the Horizon team about how 
> best to integrate. Would that work?
> 
> Thanks,
> doug
> 
> > On Jul 13, 2015, at 3:05 PM, Jain, Vivek  > > wrote:
> >
> > Hi German,
> >
> > We integrated UI with LBaaS v2 GET APIs. We have created all panels for
> > CREATE and UPDATE as well.
> > Plan is to share our code with community on stackforge for more
> > collaboration from the community.
> >
> > So far Ganesh from cisco has shown interest in helping with some work. It
> > will be great if we can get more hands.
> >
> > Q: what is the process for hosting in-progress project on stackforge? Can
> > someone help me here?
> >
> > Thanks,
> > Vivek
> >
> > On 7/10/15, 11:40 AM, "Eichberger, German"  > >
> > wrote:
> >
> >> Hi Vivek,
> >>
> >> Hope things are well. With the Midccyle next week I am wondering if you
> >> made any progress and/or how we can best help with the panels.
> >>
> >> Thanks,
> >> German
> >>
> >> From: "Jain, Vivek"  >>  >> >>
> >> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> >>  >>  >>  
> >> g>>
> >> Date: Wednesday, April 8, 2015 at 3:32 PM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >>  >>  >>  
> >> g>>
> >> Cc: "Balle Balle, Susanne"
> >>  >>  >> 

Re: [openstack-dev] [Fuel] New Criteria for UX bugs

2015-07-15 Thread Andrew Woodward
I've updated the documentation on the wiki with this criteria

https://wiki.openstack.org/wiki/Fuel/How_to_contribute#Confirm_and_triage_bugs

On Tue, Jul 14, 2015 at 4:22 PM Mike Scherbakov 
wrote:

> Sounds good to start, we can always adjust later if needed. I actually
> changed one doc bug priority already using this criteria.
>
> On Fri, Jul 10, 2015 at 5:42 AM Jay Pipes  wrote:
>
>> On 07/09/2015 06:14 PM, Andrew Woodward wrote:
>> > We often have bugs which create really poor User eXperience (UX) but our
>> > current bug priority criteria prevent nearly all of them from being
>> > higher than medium (as they nearly always have workarounds). We need to
>> > identify what should qualify as a critical, or high UX defect so that
>> > they can receive appropriate attention.
>> >
>> > We discussed what this may look like on the IRC meeting, the general
>> > idea here is that the complexity of effort to work around the UX issue
>> > should be related to the priority.
>> >
>> > Critical: requires massive effort to work around, including [un|under]
>> > documented commands and edits to config files
>> >
>> > High: requires modification of config files, interfaces that users
>> > aren't expected to use (ie the API when it's _intended_ to work in the
>> > CLI / UI (exclusive of interfaces that are intended to only be available
>> > via API) or requires custom node yaml (again except when it should
>> > exclusively be available)
>> >
>> > Medium: straightforward commands in the CLI
>>
>> Above criteria look excellent to me, thanks Andrew!
>> -jay
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> --
> Mike Scherbakov
> #mihgen
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Andrew Woodward
Mirantis
Fuel Community Ambassador
Ceph Community


Re: [openstack-dev] [neutron] questions on neutron-db-migrations

2015-07-15 Thread Ihar Hrachyshka

On 07/15/2015 07:17 PM, Madhusudhan Kandadai wrote:
> Hello,
> 
> I have noticed that the neutron project got rid of the
> neutron/db/migration/alembic_migrations/versions/HEAD file and
> renamed it to
> neutron/db/migration/alembic_migrations/versions/HEADS.
> 
> May I know the reason why this happened? I may have overlooked
> some documentation with respect to the change. I have a patch which
> is in merge conflicts and have a db upgrade with version "XXX" and
> I use that version in HEAD. When I upgrade them, I use
> neutron-db-manage --config-file /etc/neutron/neutron.conf
> --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head.
> 
> With this recent refactoring related to the db, what needs to be done
> in order to upgrade the neutron db?
> 

Reasoning behind the change and some suggestions on how to proceed can
be found at:

http://lists.openstack.org/pipermail/openstack-dev/2015-July/069582.html

I will also update devref tomorrow as per suggestion from Salvatore
there, adding some examples on how to proceed.

Ihar
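For anyone else hitting the same merge conflict, the short version from that thread: neutron's migrations were split into two alembic branches (expand and contract), so the single-revision HEAD file became a HEADS file with one head revision per branch, and upgrades now target all heads. A rough sketch of the updated workflow (the revision ids below are placeholders, not real ones):

```
# HEADS now lists one head revision per alembic branch:
cat neutron/db/migration/alembic_migrations/versions/HEADS
#   <expand_branch_head>
#   <contract_branch_head>

# Upgrade both branches in one go -- note "heads", not "head":
neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade heads
```

A patch that added a migration against the old single HEAD typically just needs its down-revision rebased onto the head of the appropriate branch and the HEADS file regenerated.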



Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-15 Thread Doug Wiegley
That begins to look like nova’s metadata tags and scheduler, which is a valid 
use case. The underpinnings of flavors could do this, but it’s not in the 
initial implementation.

doug

> On Jul 15, 2015, at 12:38 PM, Kevin Benton  wrote:
> 
> Wouldn't it be valid to assign flavors to groups of provider networks? e.g. a 
> tenant wants to attach to a network that is wired up to a 40g router so 
> he/she chooses a network of the "fat pipe" flavor.
> 
> On Wed, Jul 15, 2015 at 10:40 AM, Madhusudhan Kandadai 
> mailto:madhusudhan.openst...@gmail.com>> 
> wrote:
> 
> 
> On Wed, Jul 15, 2015 at 9:25 AM, Kyle Mestery  > wrote:
> On Wed, Jul 15, 2015 at 10:54 AM, Neil Jerram  > wrote:
> I've been reading available docs about the forthcoming Neutron flavors 
> framework, and am not yet sure I understand what it means for a network.
> 
> 
> In reality, this is envisioned more for service plugins (e.g. flavors of 
> LBaaS, VPNaaS, and FWaaS) than core neutron resources.
> Yes, rightly put. This is for service plugins, and it is part of extensions
> rather than core network resources.
>  
> Is it a way for an admin to provide a particular kind of network, and then 
> for a tenant to know what they're attaching their VMs to?
> 
> 
> I'll defer to Madhu who is implementing this, but I don't believe that's the 
> intention at all.
> Currently, an admin will be able to assign particular flavors, unfortunately, 
> this is not going to be tenant specific flavors. As you might have seen in 
> the review, we are just using tenant_id to bypass the keystone checks 
> implemented in base.py and it is not stored in the db as well. It is 
> something to do in the future, and this is documented in the blueprint.
>  
> How does it differ from provider:network-type?  (I guess, because the latter 
> is supposed to be for implementation consumption only - but is that correct?)
> 
> 
> Flavors are created and curated by operators, and consumed by API users.
> +1 
>  
> Thanks,
> Neil
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> 
> 
> -- 
> Kevin Benton
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Robert Collins
On 16 July 2015 at 07:27, Ed Leafe  wrote:
> On Jul 15, 2015, at 1:08 PM, Maish Saidel-Keesing  wrote:
>
>>> * Consider the cost of introducing a brand new technology into the
>>> deployer space. If there _is_ a way to get the desired improvement with,
>>> say, just MySQL and some clever sharding, then that might be a smaller
>>> pill to swallow for deployers.
>> +1000 to this part regarding introducing a new technology
>
> Yes, of course it has been considered. If it were trivial, I would just 
> propose a blueprint.
>
> Again, I'd really like to hear ideas on what kind of results would be 
> convincing enough to make it worthwhile to introduce a new technology.

We spent some summit time discussing just this:
https://wiki.openstack.org/wiki/TechnologyChoices

The summary here is IMO:
 - ops will follow where we lead BUT
 - we need to take their needs into account
 - which includes robustness, operability, and so on
 - things where an alternative implementation exists can be
uptake-driven : e.g. we expand the choices, and observe what folk move
onto.

That said, I think the fundamental thing today is that we have bugs
and they're not fixed. LOTS of them. Where fixing them needs better
plumbing, let's be bold - but not hasty.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Doug Wiegley
+1

> On Jul 15, 2015, at 1:07 PM, Kyle Mestery  wrote:
> 
> +1, Cedric has been a great contributor for a while now!
> 
> On Wed, Jul 15, 2015 at 1:47 PM, Carl Baldwin  > wrote:
> As the Neutron L3 Lieutenant along with Kevin Benton for control
> plane, and Assaf Muller for testing, I would like to propose Cedric
> Brandily as a member of the Neutron core reviewer team under these
> areas of focus.
> 
> Cedric has been a long time contributor to Neutron showing expertise
> particularly in these areas.  His knowledge and involvement will be
> very important to the project.  He is a trusted member of our
> community.  He has been reviewing consistently [1][2] and community
> feedback that I've received indicates that he is a solid reviewer.
> 
> Existing Neutron core reviewers from these areas of focus, please vote
> +1/-1 for the addition of Cedric to the team.
> 
> Thanks!
> Carl Baldwin
> 
> [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z 
> 
> [2] http://stackalytics.com/report/contribution/neutron-group/90 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Tacker][NFV] Devstack plugin support is now available

2015-07-15 Thread Sridhar Ramaswamy
Heads up: the Tacker project now supports installation using a devstack plugin.
The latest Tacker installation instructions are available on the wiki [1].

Note, the team continues to meet weekly on IRC [2]. Feel free to join if
you are interested in an ETSI MANO compliant NFV Orchestrator / VNF Manager.

- Sridhar

[1] https://wiki.openstack.org/wiki/Tacker/Installation
[2] https://wiki.openstack.org/wiki/Meetings/Tacker


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-15 Thread John Griffith
On Wed, Jul 15, 2015 at 12:44 PM, Joshua Harlow 
wrote:

> John Griffith wrote:
>
>>
>>
>> On Wed, Jul 15, 2015 at 12:25 PM, Joshua Harlow > > wrote:
>>
>> So I've been following the TC work on tags, and have been slightly
>> confused by the whole work, so I am wondering if I can get a
>> 'explainlikeimfive' (borrowing from reddit terminology) edition of it.
>>
>> I always thought tags were going to be something like:
>>
>> http://i.imgur.com/rcAnMkX.png
>>
>> (I'm not a graphic artist, obviously, haha); but the point there was
>> that it would allow people to create tags, up or down vote them, and
>> maybe even add comments and let the democratic process (the
>> community) decide which tags are useful and which aren't (possibly
>> prune tags that are garbage or are not useful).
>>
>> Or something like:
>>
>> http://i.imgur.com/gy1MGo6.png
>>
>> That could perhaps use https://wordpress.org/plugins/fl3r-feelbox/
>> (or something like it, doesn't matter)...
>>
>> I was thinking the whole tag creation, up-vote, down-vote, smiley,
>> sad-face, was all going to be community driven, instead of TC
>> driven, but maybe someone can explain it to me (like I am five).
>>
>> Thanks!
>>
>> -Josh
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> ​Hi Josh,
>>
>> So regardless of the detail that was agreed upon when we proposed the
>> tag idea, I personally find your interpretation to make a lot of sense.
>> I think it would be a great direction to go with the tags (and
>> personally I prefer the faces/graphic version).
>>
>> I think the trick will still be figuring out what tags are
>> defined/used.  I don't think we should have just "any tag" added/used,
>> but of course anybody from the community should be able to
>> submit/propose a tag.​
>>
>>
> Ok, so some basic moderation around the 'add your own tag' in
> http://i.imgur.com/rcAnMkX.png that can go through some kind of review
> process (or something else)? Maybe the TC gets to have +2 +A powers on
> people submitting new tags, making sure duplicates are collapsed,
> categories are in English, other basic checks (but in general staying out
> of the business of tag defining/reviewing/other, let the community figure
> it out).
>

​Right I probably should've been more detailed in my response :)  But yes,
something like a standard gerrit review process with TC acting as Core for
example.

As far as your request "Tags, explain like I'm 5", I'll give it a shot :)

So the idea was to create Tags to associate with various projects and
libraries within the OpenStack ecosystem.  The idea being to come up with
defined tags that might have some interest to deployers, whether it be to
identify how the project is managed/released.

It's relatively limited right now I think, and part of the reason for that
is we've tried to ensure that the information that we put on tags is
subjective, and at the same time doesn't give any false impression that
something is "good" or "bad".  We just wanted to have tags to easily convey
some general information about a project to help people gain at least a
little insight into a project, how it's managed, what sort of community is
contributing to it etc.

As we move into things like the "compute starter kit" it gets a bit less
subjective, but not really too much.  Maybe the next step is IaaS tag, or
maybe not (that doesn't typically end up being a very popular topic when I
bring it up).  Regardless, that's not really the point here.

I personally have always had the notion in mind that we would have
different types of tags, those that the TC works on, identifies and puts
together, as well as those that would come in from a broader community.  I
think that what you're proposing here would be that "broader" community
category that IMO is probably more valuable than anything else.

I'm admittedly not the strongest tag expert on the TC, so it's quite
possible that I'm going to be corrected in short order here, but from my
perspective and involvement in the activities, that's the simple
explanation that I would provide.

Anyway, I don't know if that helps answer your question at all... I do like
your idea of community proposed and voted tags (A LOT) and hope to see some
more thought/response via this posting.



>  I personally (both as a community member and TC member) would love to
>> see more of this moving out to the community as a whole rather than
>> being something defined and set up within the TC.
>>
>> Note that one thing the TC has been trying to focus on here is
>> subjective measurement of

Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Ed Leafe
On Jul 15, 2015, at 1:08 PM, Maish Saidel-Keesing  wrote:

>> * Consider the cost of introducing a brand new technology into the
>> deployer space. If there _is_ a way to get the desired improvement with,
>> say, just MySQL and some clever sharding, then that might be a smaller
>> pill to swallow for deployers.
> +1000 to this part regarding introducing a new technology

Yes, of course it has been considered. If it were trivial, I would just propose 
a blueprint.

Again, I'd really like to hear ideas on what kind of results would be 
convincing enough to make it worthwhile to introduce a new technology.


-- Ed Leafe









Re: [openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Kyle Mestery
+1, Cedric has been a great contributor for a while now!

On Wed, Jul 15, 2015 at 1:47 PM, Carl Baldwin  wrote:

> As the Neutron L3 Lieutenant along with Kevin Benton for control
> plane, and Assaf Muller for testing, I would like to propose Cedric
> Brandily as a member of the Neutron core reviewer team under these
> areas of focus.
>
> Cedric has been a long time contributor to Neutron showing expertise
> particularly in these areas.  His knowledge and involvement will be
> very important to the project.  He is a trusted member of our
> community.  He has been reviewing consistently [1][2] and community
> feedback that I've received indicates that he is a solid reviewer.
>
> Existing Neutron core reviewers from these areas of focus, please vote
> +1/-1 for the addition of Cedric to the team.
>
> Thanks!
> Carl Baldwin
>
> [1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
> [2] http://stackalytics.com/report/contribution/neutron-group/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [oslo] Need new release of stable oslotest with capped mock

2015-07-15 Thread Doug Hellmann
Excerpts from Yuriy Taraday's message of 2015-07-15 13:14:13 +:
> Hello, oslo team.
> 
> With the recent mock nightmare we should now release a new stable version of
> oslotest so that projects that depend on oslotest but don't directly depend
> on mock will be unblocked in the gate.
> 
> I found out about this from this review: [0]
> It fails because stable oslotest 1.5.1 has an uncapped dependency on mock for
> 2.6. It still remains so because Proposal Bot's review to update
> requirements in oslotest [1] got stuck because of a problem with new(er)
> version of fixtures. It has been fixed in oslotest master 2 weeks ago [2],
> but hasn't been backported to stable/kilo, so I've created a CR [3] (change
> touches only a test for oslotest, so it's double-safe for stable).
> 
> So after CRs [3][1] are merged to oslotest we should release a new stable
> version (1.5.2, I guess) for it and then we can update requirements in
> oslo.concurrency [0].
> 
> All that said it looks like we need to pay more attention to Proposal Bot's
> failures. It should trigger a loud alarm and make Zuul blink all red since
> it most likely means that something got broken in our requirements and
> no one would notice until it breaks something else.
> 
> [0] https://review.openstack.org/201862
> [1] https://review.openstack.org/201196
> [2] https://review.openstack.org/197900
> [3] https://review.openstack.org/202091

This is done as oslotest 1.5.2.

Doug
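
For context, the fix pattern here is simply an upper bound in the stable requirements files; a minimal sketch (the version bounds are illustrative, not copied from the actual stable/kilo files):

```
# stable branch test-requirements.txt (illustrative bounds):
# cap mock below the backwards-incompatible 1.1 series so stable
# gates keep installing a version with the old interfaces
mock>=1.0,<1.1.0
```

Once the cap lands in the library's own requirements, a new stable release of that library (1.5.2 here) carries it, which is why consumers that don't pin mock themselves get unblocked.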




[openstack-dev] [Neutron] Proposing Cedric Brandily to Neutron Core Reviewer Team

2015-07-15 Thread Carl Baldwin
As the Neutron L3 Lieutenant along with Kevin Benton for control
plane, and Assaf Muller for testing, I would like to propose Cedric
Brandily as a member of the Neutron core reviewer team under these
areas of focus.

Cedric has been a long time contributor to Neutron showing expertise
particularly in these areas.  His knowledge and involvement will be
very important to the project.  He is a trusted member of our
community.  He has been reviewing consistently [1][2] and community
feedback that I've received indicates that he is a solid reviewer.

Existing Neutron core reviewers from these areas of focus, please vote
+1/-1 for the addition of Cedric to the team.

Thanks!
Carl Baldwin

[1] https://review.openstack.org/#/q/reviewer:zzelle%2540gmail.com,n,z
[2] http://stackalytics.com/report/contribution/neutron-group/90



Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-15 Thread Joshua Harlow

John Griffith wrote:



On Wed, Jul 15, 2015 at 12:25 PM, Joshua Harlow mailto:harlo...@outlook.com>> wrote:

So I've been following the TC work on tags, and have been slightly
confused by the whole work, so I am wondering if I can get a
'explainlikeimfive' (borrowing from reddit terminology) edition of it.

I always thought tags were going to be something like:

http://i.imgur.com/rcAnMkX.png

(I'm not a graphic artist, obviously, haha); but the point there was
that it would allow people to create tags, up or down vote them, and
maybe even add comments and let the democratic process (the
community) decide which tags are useful and which aren't (possibly
prune tags that are garbage or are not useful).

Or something like:

http://i.imgur.com/gy1MGo6.png

That could perhaps use https://wordpress.org/plugins/fl3r-feelbox/
(or something like it, doesn't matter)...

I was thinking the whole tag creation, up-vote, down-vote, smiley,
sad-face, was all going to be community driven, instead of TC
driven, but maybe someone can explain it to me (like I am five).

Thanks!

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


​Hi Josh,

So regardless of the detail that was agreed upon when we proposed the
tag idea, I personally find your interpretation to make a lot of sense.
I think it would be a great direction to go with the tags (and
personally I prefer the faces/graphic version).

I think the trick will still be figuring out what tags are
defined/used.  I don't think we should have just "any tag" added/used,
but of course anybody from the community should be able to
submit/propose a tag.​



Ok, so some basic moderation around the 'add your own tag' in 
http://i.imgur.com/rcAnMkX.png that can go through some kind of review 
process (or something else)? Maybe the TC gets to have +2 +A powers on 
people submitting new tags, making sure duplicates are collapsed, 
categories are in English, other basic checks (but in general staying 
out of the business of tag defining/reviewing/other, let the community 
figure it out).



I personally (both as a community member and TC member) would love to
see more of this moving out to the community as a whole rather than
being something defined and set up within the TC.

Note that one thing the TC has been trying to focus on here is
subjective measurement of the tags that are created, what you're
suggesting is a bit different from that, but personally I like it and
think that in a number of cases it could be very valuable.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-15 Thread Kevin Benton
Wouldn't it be valid to assign flavors to groups of provider networks? e.g.
a tenant wants to attach to a network that is wired up to a 40g router so
he/she chooses a network of the "fat pipe" flavor.

On Wed, Jul 15, 2015 at 10:40 AM, Madhusudhan Kandadai <
madhusudhan.openst...@gmail.com> wrote:

>
>
> On Wed, Jul 15, 2015 at 9:25 AM, Kyle Mestery  wrote:
>
>> On Wed, Jul 15, 2015 at 10:54 AM, Neil Jerram > > wrote:
>>
>>> I've been reading available docs about the forthcoming Neutron flavors
>>> framework, and am not yet sure I understand what it means for a network.
>>>
>>>
>> In reality, this is envisioned more for service plugins (e.g. flavors of
>> LBaaS, VPNaaS, and FWaaS) than core neutron resources.
>>
> Yes, rightly put. This is for service plugins, and it is part of extensions
> rather than core network resources.
>
>>
>>
>>> Is it a way for an admin to provide a particular kind of network, and
>>> then for a tenant to know what they're attaching their VMs to?
>>>
>>>
>> I'll defer to Madhu who is implementing this, but I don't believe that's
>> the intention at all.
>>
> Currently, an admin will be able to assign particular flavors,
> unfortunately, this is not going to be tenant specific flavors. As you
> might have seen in the review, we are just using tenant_id to bypass the
> keystone checks implemented in base.py and it is not stored in the db as
> well. It is something to do in the future, and this is documented in the
> blueprint.
>
>>
>>
>>> How does it differ from provider:network-type?  (I guess, because the
>>> latter is supposed to be for implementation consumption only - but is that
>>> correct?)
>>>
>>>
>> Flavors are created and curated by operators, and consumed by API users.
>>
> +1
>
>>
>>
>>> Thanks,
>>> Neil
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Kevin Benton


Re: [openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-15 Thread John Griffith
On Wed, Jul 15, 2015 at 12:25 PM, Joshua Harlow 
wrote:

> So I've been following the TC work on tags, and have been slightly
> confused by the whole work, so I am wondering if I can get a
> 'explainlikeimfive' (borrowing from reddit terminology) edition of it.
>
> I always thought tags were going to be something like:
>
> http://i.imgur.com/rcAnMkX.png
>
> (I'm not a graphic artist, obviously, haha); but the point there was that
> it would allow people to create tags, up or down vote them, and maybe even
> add comments and let the democratic process (the community) decide which
> tags are useful and which aren't (possibly prune tags that are garbage or
> are not useful).
>
> Or something like:
>
> http://i.imgur.com/gy1MGo6.png
>
> That could perhaps use https://wordpress.org/plugins/fl3r-feelbox/ (or
> something like it, doesn't matter)...
>
> I was thinking the whole tag creation, up-vote, down-vote, smiley,
> sad-face, was all going to be community driven, instead of TC driven, but
> maybe someone can explain it to me (like I am five).
>
> Thanks!
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi Josh,

So regardless of the detail that was agreed upon when we proposed the tag
idea, I personally find your interpretation to make a lot of sense.  I
think it would be a great direction to go with the tags (and personally I
prefer the faces/graphic version).

I think the trick will still be figuring out what tags are defined/used. I
don't think we should have just "any tag" added/used, but of course anybody
from the community should be able to submit/propose a tag.

I personally (both as a community member and TC member) would love to see
more of this moving out to the community as a whole rather than being
something defined and set up within the TC.

Note that one thing the TC has been trying to focus on here is subjective
measurement of the tags that are created, what you're suggesting is a bit
different from that, but personally I like it and think that in a number of
cases it could be very valuable.

Thanks,
John
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [grenade] future direction on partial upgrade support

2015-07-15 Thread Auld, Will
Sean,

OK, moving thread to openstack-dev. 

We'd like to help with this work if there is more to do. What are the next 
steps and what areas need help?

Thanks,

Will

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: Wednesday, July 15, 2015 3:10 AM
> To: Du, Dolpher; Auld, Will; Troyer, Dean; mtrein...@kortar.org
> Subject: Re: [openstack-dev] [grenade] future direction on partial upgrade
> support
> 
> Yes, they would. However, can we do this conversation on openstack-dev?
> It was started on an open list originally, and that's where it should
> continue to evolve.
> 
> On 07/15/2015 04:43 AM, Du, Dolpher wrote:
> > I've been reviewing Sean's 3 patches. If I understand correctly, they
> > leverage the current multinode setup for devstack, and just provide an
> > entrypoint in grenade.sh to deploy the other nodes with specific services.
> > https://review.openstack.org/#/c/199073/
> > https://review.openstack.org/#/c/199091/
> > https://review.openstack.org/#/c/199103/
> >
> > This should be enough for enabling the partial upgrade test in the current
> > CI system, and it makes it easier to add other partial upgrade test cases
> > (just the same as this one).
> >
> > I'm not sure if Sean has some further plans on this?
> >
> > Regards,
> > Dolpher
> >
> >> -Original Message-
> >> From: Auld, Will
> >> Sent: Wednesday, July 8, 2015 4:53 AM
> >> To: Troyer, Dean; s...@dague.net; Du, Dolpher; mtrein...@kortar.org
> >> Cc: Auld, Will
> >> Subject: RE: [openstack-dev] [grenade] future direction on partial
> >> upgrade support
> >>
> >> Ping.
> >>
> >>
> >>
> >> I haven't seen any response to this yet but would like our team to take
> >> this up.
> >> Will need to understand next steps and any additional items (I'm
> >> thinking of the experimental jobs Dean mentioned).
> >>
> >>
> >>
> >> Thanks,
> >>
> >>
> >>
> >> Will
> >>
> >>
> >>
> >> From: Troyer, Dean
> >> Sent: Thursday, July 02, 2015 3:45 PM
> >> To: Auld, Will; s...@dague.net; Du, Dolpher; mtrein...@kortar.org
> >> Subject: RE: [openstack-dev] [grenade] future direction on partial
> >> upgrade support
> >>
> >>
> >>
> >> [trying again with Matt's correct address...]
> >>
> >>
> >>
> >>
> >>
> >> From: Troyer, Dean
> >> Sent: Thursday, July 2, 2015 5:42 PM
> >> To: Auld, Will; s...@dague.net; Du, Dolpher;
> >> 'mtrein...@kortar.net'
> >> Subject: RE: [openstack-dev] [grenade] future direction on partial
> >> upgrade support
> >>
> >>
> >>
> >> From: Auld, Will
> >>
> >> I'd like to talk about what is needed to move forward with the
> >> multi-node Grenade capability discussed on the openstack-dev list.
> >> What are the next steps that are needed?
> >>
> >>
> >>
> >> [adding Matthew Treinish, QA PTL]
> >>
> >>
> >>
> >> Sean is out this week, the quick answer I can provide is that the
> >> work Joe Gordon was doing needs to be continued.
> >>
> >>
> >>
> >> Matt mentioned in the QA meeting today that there are some jobs
> >> running in either experimental or non-voting mode but I don't know
> >> the details.  With Joe gone, someone needs to pick that up.
> >>
> >>
> >>
> >> Matt, do you have the details on where Joe stopped work?
> >>
> >>
> >>
> >> Dt
> >>
> >>
> >>
> >> --
> >>
> >> Dean Troyer
> >>
> >> dean.tro...@intel.com 
> >>
> >>
> >>
> >>
> >
> 
> 
> --
> Sean Dague
> http://dague.net
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Robert Collins
On 16 July 2015 at 02:18, Ed Leafe  wrote:
...
> What I'd like to investigate is replacing the current design of having
> the compute nodes communicating with the scheduler via message queues.
> This design is overly complex and has several known scalability
> issues. My thought is to replace this with a Cassandra [1] backend.
> Compute nodes would update their state to Cassandra whenever they
> change, and that data would be read by the scheduler to make its host
> selection. When the scheduler chooses a host, it would post the claim
> to Cassandra wrapped in a lightweight transaction, which would ensure
> that no other scheduler has tried to claim those resources. When the
> host has built the requested VM, it will delete the claim and update
> Cassandra with its current state.

+1 on doing an experiment.

Some semi-random thoughts here. Well, not random at all, I've been
mulling on this for a while.

I think Kafka may fit our model for updating state significantly more
closely than Cassandra does. It would be neat if we could do a few
different sketchy implementations and head-to-head test them. I love
Cassandra in a lot of ways, but "lightweight transaction" are two words
that I'd really not expect to see in Cassandra (yes, I know it has them
in the official docs and design :)) - it's a full Paxos interaction to
do SERIAL consistency, which is more work than either QUORUM or
LOCAL_QUORUM. A sharded approach - there is only one compute node in
question for the update needed - can be less work than either and still
race free.
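The claim flow Ed proposes relies on Cassandra's lightweight transactions (`INSERT ... IF NOT EXISTS`), which report success or failure in an `[applied]` column. The sketch below illustrates only those claim semantics with an in-memory stand-in; the CQL, table, and column names are hypothetical, not from any real Nova schema.

```python
# Hypothetical CQL a scheduler might issue; IF NOT EXISTS makes the
# insert a lightweight transaction (a compare-and-set under Paxos).
CLAIM_CQL = """
INSERT INTO claims (host, instance_uuid, vcpus, ram_mb)
VALUES (%s, %s, %s, %s)
IF NOT EXISTS
"""


class FakeClaimTable:
    """In-memory stand-in for the LWT behaviour: the insert applies
    only if no row with the same primary key already exists."""

    def __init__(self):
        self._rows = {}

    def claim(self, host, instance_uuid, vcpus, ram_mb):
        key = (host, instance_uuid)
        if key in self._rows:
            return False  # another scheduler won the race; [applied] = False
        self._rows[key] = {"vcpus": vcpus, "ram_mb": ram_mb}
        return True       # [applied] = True

    def release(self, host, instance_uuid):
        # On successful boot the compute node deletes the claim.
        self._rows.pop((host, instance_uuid), None)


table = FakeClaimTable()
assert table.claim("host1", "uuid-a", 2, 4096) is True
assert table.claim("host1", "uuid-a", 2, 4096) is False  # racing scheduler loses
table.release("host1", "uuid-a")
assert table.claim("host1", "uuid-a", 2, 4096) is True
```

The second `claim()` failing is exactly the "no other scheduler has tried to claim those resources" guarantee, and also the SERIAL-consistency cost Robert flags above: every claim pays for a Paxos round.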

I also very much want to see us move to brokerless RPC, systematically,
for all the reasons :). You might need a little of that mixed in to the
experiments, depending on the scale reached.

In terms of quantification; are you looking to test scalability (e.g.
scheduling some N events per second without races), [there are huge
improvements possible by rewriting the current schedulers innards to
be less wasteful, but that doesn't address active-active setups],
latency (e.g. 99th percentile time-to-schedule) or <...> ?

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc][all] Tags, explain like I am five?

2015-07-15 Thread Joshua Harlow
So I've been following the TC work on tags, and have been slightly 
confused by the whole work, so I am wondering if I can get a 
'explainlikeimfive' (borrowing from reddit terminology) edition of it.


I always thought tags were going to be something like:

http://i.imgur.com/rcAnMkX.png

(I'm not a graphic artist, obviously, haha); but the point there was 
that it would allow people to create tags, up or down vote them, and 
maybe even add comments and let the democratic process (the community) 
decide which tags are useful and which aren't (possibly prune tags that 
are garbage or are not useful).


Or something like:

http://i.imgur.com/gy1MGo6.png

That could perhaps use https://wordpress.org/plugins/fl3r-feelbox/ (or 
something like it, doesn't matter)...


I was thinking the whole tag creation, up-vote, down-vote, smiley, 
sad-face, was all going to be community driven, instead of TC driven, 
but maybe someone can explain it to me (like I am five).


Thanks!

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Swift container sync process is consuming 100% CPU

2015-07-15 Thread Alan Jiang
In our production swift cluster, we have the container sync middleware 
enabled in the pipeline, but swift container sync is not configured.

We noticed 100% CPU utilization and a lot of read/write I/O from the 
swift-container-sync process on our metadata nodes recently, after we saw 
increasing volumes of POST/DELETE requests.

From the code this makes sense, since container_sync() calls 
broker.get_info(), which will merge the pending requests into the container 
db before it gets the metadata from the container_stat table.
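The cost pattern described above can be sketched; this is a hypothetical stand-in to show why get_info() turns POST/DELETE volume into sync-process I/O, not Swift's actual ContainerBroker code:

```python
class SketchBroker:
    """Illustrates the behaviour described in the thread: get_info()
    first merges any pending updates into the container db (the
    expensive part) before reading the container_stat row."""

    def __init__(self):
        self.pending = []   # updates queued by POST/DELETE handling
        self.db_writes = 0  # proxy for the I/O the merge costs

    def get_info(self):
        # Merge pending first -- one write per queued update.
        self.db_writes += len(self.pending)
        self.pending.clear()
        return {"object_count": 0}  # stand-in for container_stat data


broker = SketchBroker()
broker.pending.extend(range(1000))  # burst of POST/DELETE traffic
broker.get_info()                   # the sync pass pays for the merge
assert broker.db_writes == 1000
```

Under this model, each container-sync sweep pays for whatever pending traffic accumulated since the last call, which matches the observed CPU and I/O spike.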

I have two questions:

1. Is it safe just to stop the swift-container-sync process on all of our 
metadata nodes (which have all the account/container daemons running)? Are 
there any side effects if we do so?

2. Since container_updater is doing the same broker.get_info() in the 
container_sweep() code path, and the concurrency is 4, why don't I see 
swift-container-updater sitting on the top CPU consumer list? My assumption 
is that each swift-container-updater child process is forked to handle only 
one partition per device, so it is a short-lived process which won't be 
captured by atop or top. Is that true?

Thanks.
Alan Jiang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Maish Saidel-Keesing

On 07/15/15 20:40, Clint Byrum wrote:

What you describe is a spike. It's a grand plan, and you don't need
anyone's permission, so huzzah for the spike!

As far as what should be improved, I hear a lot that having multiple
schedulers does not scale well, so I'd suggest that as a primary target
(maybe measure the _current_ problem, and then set the target as a 10x
improvement over what we have now).

Things to consider while pushing on that goal:

* Do not backslide the resilience in the system. The code is just now
starting to be fault tolerant when talking to RabbitMQ, so make sure
to also consider how tolerant of failures this will be. Cassandra is
typically chosen for its resilience and performance, but Cassandra does
a neat trick in that clients can switch its CAP theorem profile from
Consistent and Available (but slow) to Available and Performant when
reading things. That might be useful in the context of trying to push
the performance _UP_ for schedulers, while not breaking anything else.

* Consider the cost of introducing a brand new technology into the
deployer space. If there _is_ a way to get the desired improvement with,
say, just MySQL and some clever sharding, then that might be a smaller
pill to swallow for deployers.

+1000 to this part regarding introducing a new technology


Anyway, I wish you well on this endeavor and hope to see your results
soon!

Excerpts from Ed Leafe's message of 2015-07-15 07:18:42 -0700:


Changing the architecture of a complex system such as Nova is never
easy, even when we know that the design isn't working as well as we
need it to. And it's even more frustrating because when the change is
complete, it's hard to know if the improvement, if any, was worth it.

So I had an idea: what if we ran a test of that architecture change
out-of-tree? In other words, create a separate deployment, and rip out
the parts that don't work well, replacing them with an alternative
design. There would be no Gerrit reviews or anything that would slow
down the work or add load to the already overloaded reviewers. Then we
could see if this modified system is a significant-enough improvement
to justify investing the time in implementing it in-tree. And, of
course, if the test doesn't show what was hoped for, it is scrapped
and we start thinking anew.

The important part in this process is defining up front what level of
improvement would be needed to make considering actually making such a
change worthwhile, and what sort of tests would demonstrate whether or
not whether this level was met. I'd like to discuss such an experiment
next week at the Nova mid-cycle.

What I'd like to investigate is replacing the current design of having
the compute nodes communicating with the scheduler via message queues.
This design is overly complex and has several known scalability
issues. My thought is to replace this with a Cassandra [1] backend.
Compute nodes would update their state to Cassandra whenever they
change, and that data would be read by the scheduler to make its host
selection. When the scheduler chooses a host, it would post the claim
to Cassandra wrapped in a lightweight transaction, which would ensure
that no other scheduler has tried to claim those resources. When the
host has built the requested VM, it will delete the claim and update
Cassandra with its current state.

One main motivation for using Cassandra over the current design is
that it will enable us to run multiple schedulers without increasing
the raciness of the system. Another is that it will greatly simplify a
lot of the internal plumbing we've set up to implement in Nova what we
would get out of the box with Cassandra. A third is that if this
proves to be a success, it would also be able to be used further down
the road to simplify inter-cell communication (but this is getting
ahead of ourselves...). I've worked with Cassandra before and it has
been rock-solid to run and simple to set up. I've also had preliminary
technical reviews with the engineers at DataStax [2], the company
behind Cassandra, and they agreed that this was a good fit.

At this point I'm sure that most of you are filled with thoughts on
how this won't work, or how much trouble it will be to switch, or how
much more of a pain it will be, or how you hate non-relational DBs, or
any of a zillion other negative thoughts. FWIW, I have them too. But
instead of ranting, I would ask that we acknowledge for now that:

a) it will be disruptive and painful to switch something like this at
this point in Nova's development
b) it would have to provide *significant* improvement to make such a
change worthwhile

So what I'm asking from all of you is to help define the second part:
what we would want improved, and how to measure those benefits. In
other words, what results would you have to see in order to make you
reconsider your initial "nah, this'll never work" reaction, and start
to think that this will be a worthwhile change to make to Nova.

I'm al

Re: [openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-15 Thread Sean Dague
On 07/15/2015 01:44 PM, Matt Riedemann wrote:
> The osapi_v3.enabled option is False by default [1] even though it's
> marked as the CURRENT API and the v2 API is marked as SUPPORTED (and
> we've frozen it for new feature development).
> 
> I got looking at this because osapi_v3.enabled is True in nova.conf in
> both the check-tempest-dsvm-nova-v21-full job and non-v21
> check-tempest-dsvm-full job, but only in the v21 job is
> "x-openstack-nova-api-version: '2.1'" used.
> 
> Shouldn't the v2.1 API be enabled by default now?
> 
> [1]
> http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44

Honestly, we should probably deprecate osapi_v3.enabled and rename it
osapi_v21 (or osapi_v2_microversions) so as not to confuse people further.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Why is osapi_v3.enabled = False by default?

2015-07-15 Thread Matt Riedemann
The osapi_v3.enabled option is False by default [1] even though it's 
marked as the CURRENT API and the v2 API is marked as SUPPORTED (and 
we've frozen it for new feature development).


I got looking at this because osapi_v3.enabled is True in nova.conf in 
both the check-tempest-dsvm-nova-v21-full job and non-v21 
check-tempest-dsvm-full job, but only in the v21 job is 
"x-openstack-nova-api-version: '2.1'" used.


Shouldn't the v2.1 API be enabled by default now?

[1] 
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/__init__.py#n44


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-15 Thread Andrew Laski

On 07/15/15 at 12:19pm, Matt Riedemann wrote:



On 7/15/2015 11:23 AM, Nikola Đipanov wrote:

I'll keep this email brief since this has been a well known issue for
some time now.

Problem: Libvirt can't honour device names specified at boot for any
volumes requested as part of block_device_mapping. What we currently do
is, in case they do get specified, we persist them as-is so that we can
return them from the API, even though libvirt can't honour them (this
leads to a number of issues when we do rely on the data in the DB; a
very common one comes up when attaching further devices, which follow-up
patches to [1] try to address).

There is a proposed patch [1] that will make libvirt disregard what was
passed and persist the values it defaults and can honour. This seems
contentious because it will change the API behaviour (instance show will
potentially return device names other than the ones requested).

My take on this is that this is broken and we should fix it. All other
ways to fix it, namely:

  * reject the request if libvirt is the driver in the API (we can't
know where the request will end up really and blocking in the API is
bad, plus we would still have to keep backwards compatibility for a long
time which means the bug is not really solved, we just have more code
for bugs to fester)
  * fail the request at the scheduler level (very disruptive, and the
question is how do we tell users that this is a legit change; we can't
really bump the API version for a compute change)

are way more disruptive for little gain.

  * There is one more thing we could do that hasn't been discussed - we
could store requested_device_name, and always return that from the API.
This too adds needless complexity IMO.

I think the patch in [1] is a pragmatic solution to a long standing
issue that only changes the API behaviour for an already broken
interaction. I'd like to avoid needless complexity if it gives us nothing.

It would be awesome to get some discussion around this and hopefully get
some resolution to this long standing issue. Do let me know if more
information/clarification is required.

[1] https://review.openstack.org/#/c/189632/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The other part of the discussion is around the API changes, not just 
for libvirt, but having a microversion that removes the device from 
the request so it's no longer optional and doesn't provide some false 
sense that it works properly all of the time.  We talked about this 
in the nova channel yesterday and I think the thinking was we wanted 
to get agreement on dropping that with a microversion before moving 
forward with the libvirt change you have to ignore the requested 
device name.


From what I recall, this was supposed to really only work reliably 
for xen but now it actually might not, and would need to be tested 
again. Seems we could start by checking the xen CI to see if it is 
running the test_minimum_basic scenario test or anything in 
test_attach_volume.py in Tempest.


This doesn't really work reliably for xen either, depending on what is 
being done.  For the xenapi driver Nova converts the device name 
provided into an integer based on the trailing letter, so 'vde' becomes 
4, and asks xen to mount the device based on that int.  Xen does honor 
that integer request so you'll get an 'e' device, but you could be 
asking for hde and get an xvde or vice versa.
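The trailing-letter conversion Andrew describes can be sketched as follows; this is an illustration of the behaviour stated above, not the actual xenapi driver code, and the helper name is hypothetical:

```python
import string


def device_number_from_name(device_name):
    """Map a requested device name to the integer handed to xen,
    based only on the trailing letter (sketch of the described
    behaviour). 'vde', 'hde' and 'xvde' all collapse to 4, which is
    why the bus prefix the user asked for is not honoured."""
    # Strip a leading /dev/ if present, then index the last letter.
    name = device_name.rsplit('/', 1)[-1]
    return string.ascii_lowercase.index(name[-1])


assert device_number_from_name('vde') == 4
assert device_number_from_name('hde') == 4    # same slot, different bus
assert device_number_from_name('xvde') == 4
assert device_number_from_name('/dev/vda') == 0
```

Because only the integer survives the conversion, xen honours the 'e' slot but the requested device prefix is lost, exactly the hde-vs-xvde mismatch noted above.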




I'm not sure about vmware/hyper-v/ironic drivers in nova and how they 
handle this or if they are just as buggy as the libvirt driver.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Clint Byrum
What you describe is a spike. It's a grand plan, and you don't need
anyone's permission, so huzzah for the spike!

As far as what should be improved, I hear a lot that having multiple
schedulers does not scale well, so I'd suggest that as a primary target
(maybe measure the _current_ problem, and then set the target as a 10x
improvement over what we have now).

Things to consider while pushing on that goal:

* Do not backslide the resilience in the system. The code is just now
starting to be fault tolerant when talking to RabbitMQ, so make sure
to also consider how tolerant of failures this will be. Cassandra is
typically chosen for its resilience and performance, but Cassandra does
a neat trick in that clients can switch its CAP theorem profile from
Consistent and Available (but slow) to Available and Performant when
reading things. That might be useful in the context of trying to push
the performance _UP_ for schedulers, while not breaking anything else.

* Consider the cost of introducing a brand new technology into the
deployer space. If there _is_ a way to get the desired improvement with,
say, just MySQL and some clever sharding, then that might be a smaller
pill to swallow for deployers.

Anyway, I wish you well on this endeavor and hope to see your results
soon!

Excerpts from Ed Leafe's message of 2015-07-15 07:18:42 -0700:
> 
> Changing the architecture of a complex system such as Nova is never
> easy, even when we know that the design isn't working as well as we
> need it to. And it's even more frustrating because when the change is
> complete, it's hard to know if the improvement, if any, was worth it.
> 
> So I had an idea: what if we ran a test of that architecture change
> out-of-tree? In other words, create a separate deployment, and rip out
> the parts that don't work well, replacing them with an alternative
> design. There would be no Gerrit reviews or anything that would slow
> down the work or add load to the already overloaded reviewers. Then we
> could see if this modified system is a significant-enough improvement
> to justify investing the time in implementing it in-tree. And, of
> course, if the test doesn't show what was hoped for, it is scrapped
> and we start thinking anew.
> 
> The important part in this process is defining up front what level of
> improvement would be needed to make considering actually making such a
> change worthwhile, and what sort of tests would demonstrate whether or
> not whether this level was met. I'd like to discuss such an experiment
> next week at the Nova mid-cycle.
> 
> What I'd like to investigate is replacing the current design of having
> the compute nodes communicating with the scheduler via message queues.
> This design is overly complex and has several known scalability
> issues. My thought is to replace this with a Cassandra [1] backend.
> Compute nodes would update their state to Cassandra whenever they
> change, and that data would be read by the scheduler to make its host
> selection. When the scheduler chooses a host, it would post the claim
> to Cassandra wrapped in a lightweight transaction, which would ensure
> that no other scheduler has tried to claim those resources. When the
> host has built the requested VM, it will delete the claim and update
> Cassandra with its current state.
> 
> One main motivation for using Cassandra over the current design is
> that it will enable us to run multiple schedulers without increasing
> the raciness of the system. Another is that it will greatly simplify a
> lot of the internal plumbing we've set up to implement in Nova what we
> would get out of the box with Cassandra. A third is that if this
> proves to be a success, it would also be able to be used further down
> the road to simplify inter-cell communication (but this is getting
> ahead of ourselves...). I've worked with Cassandra before and it has
> been rock-solid to run and simple to set up. I've also had preliminary
> technical reviews with the engineers at DataStax [2], the company
> behind Cassandra, and they agreed that this was a good fit.
> 
> At this point I'm sure that most of you are filled with thoughts on
> how this won't work, or how much trouble it will be to switch, or how
> much more of a pain it will be, or how you hate non-relational DBs, or
> any of a zillion other negative thoughts. FWIW, I have them too. But
> instead of ranting, I would ask that we acknowledge for now that:
> 
> a) it will be disruptive and painful to switch something like this at
> this point in Nova's development
> b) it would have to provide *significant* improvement to make such a
> change worthwhile
> 
> So what I'm asking from all of you is to help define the second part:
> what we would want improved, and how to measure those benefits. In
> other words, what results would you have to see in order to make you
> reconsider your initial "nah, this'll never work" reaction, and start
> to think that this will be a worthwhile change

Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas v2

2015-07-15 Thread Balle, Susanne
I agree with German. Let’s keep things together for now. Susanne

From: Eichberger, German
Sent: Wednesday, July 15, 2015 1:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Balle, Susanne; Tonse, Milan
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Hi,

Let’s move it into the LBaaS repo; that seems like the right place to me —

Thanks,
German

From: "Jain, Vivek" <vivekj...@ebay.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, July 14, 2015 at 10:22 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Cc: "Balle Balle, Susanne" <susanne.ba...@hp.com>, 
"Tonse, Milan" <mto...@ebay.com>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Thanks Akihiro. Currently the lbaas panels are part of the horizon repo. Is 
there an easy way to de-couple the lbaas dashboard from horizon? I think that 
will simplify development efforts. What does it take to separate the lbaas 
dashboard from horizon?

Thanks,
Vivek

From: Akihiro Motoki <amot...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, July 14, 2015 at 10:09 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Cc: "Balle, Susanne" <susanne.ba...@hp.com>, 
"Tonse, Milan" <mto...@ebay.com>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Another option is to create a project under openstack.
The designate-dashboard project takes this approach,
and the core team of the project is both horizon-core and designate-core.
We could take a similar approach. Thoughts?

I have one question.
Do we keep a separate place forever, or do we want to merge into the horizon
repo once the implementation is available?
If we have a separate repo for the LBaaS v2 panel, we need to release it
separately.

I am not sure I am available at the LBaaS meeting, but I would like to help
with this effort as a core from horizon and neutron.

Akihiro


2015-07-15 1:52 GMT+09:00 Doug Wiegley <doug...@parksidesoftware.com>:
I’d be good submitting it to the neutron-lbaas repo, under a horizon/ 
directory. We can iterate there, and talk with the Horizon team about how best 
to integrate. Would that work?

Thanks,
doug

> On Jul 13, 2015, at 3:05 PM, Jain, Vivek <vivekj...@ebay.com> wrote:
>
> Hi German,
>
> We integrated UI with LBaaS v2 GET APIs. We have created all panels for
> CREATE and UPDATE as well.
> Plan is to share our code with community on stackforge for more
> collaboration from the community.
>
> So far Ganesh from cisco has shown interest in helping with some work. It
> will be great if we can get more hands.
>
> Q: what is the process for hosting in-progress project on stackforge? Can
> someone help me here?
>
> Thanks,
> Vivek
>
> On 7/10/15, 11:40 AM, "Eichberger, German" <german.eichber...@hp.com>
> wrote:
>
>> Hi Vivek,
>>
>> Hope things are well. With the Midccyle next week I am wondering if you
>> made any progress and/or how we can best help with the panels.
>>
>> Thanks,
>> German
>>
>> From: "Jain, Vivek" <vivekj...@ebay.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Date: Wednesday, April 8, 2015 at 3:32 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Cc: "Balle Balle, Susanne" <susanne.ba...@hp.com>,
>> "Tonse, Milan" <mto...@ebay.com>
>> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for
>> neutron-lbaas v2
>>
>> Thanks German for the etherpad link. If you have any documentation for
>> flows, please share those too.
>>
>> I will work with my team at ebay to publish wireframes for design we are
>> working on. It will be mostly along the lines I demo’ed in Paris.
>>
>> Thanks,
>> Vivek
>>
>> From: "Eichberger, German" <german.eichber...@hp.com>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>
>> Date: Wednesday, April 8, 2015 at 11:24 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> <openstack-dev@lists.openstack.org>

[openstack-dev] [lbaas] [octavia] No meeting today 7/15

2015-07-15 Thread Eichberger, German
All,

This week is the L4-L7 midcycle, so we will skip today's meeting.

German


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas v2

2015-07-15 Thread Eichberger, German
Hi,

Let’s move it into the LBaaS repo; that seems like the right place to me.

Thanks,
German

From: "Jain, Vivek" mailto:vivekj...@ebay.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 10:22 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Balle Balle, Susanne" mailto:susanne.ba...@hp.com>>, 
"Tonse, Milan" mailto:mto...@ebay.com>>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Thanks Akihiro. Currently the lbaas panels are part of the horizon repo. Is there an 
easy way to de-couple the lbaas dashboard from horizon? I think that would simplify 
development efforts. What does it take to separate the lbaas dashboard from horizon?

Thanks,
Vivek

From: Akihiro Motoki mailto:amot...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 10:09 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Cc: "Balle, Susanne" mailto:susanne.ba...@hp.com>>, 
"Tonse, Milan" mailto:mto...@ebay.com>>
Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for neutron-lbaas 
v2

Another option is to create a project under openstack.
The designate-dashboard project takes this approach,
and the core team of the project is both horizon-core and designate-core.
We can take a similar approach. Thoughts?

I have one question.
Do we keep a separate place forever, or do we want to merge into the horizon repo
once the implementation is available?
If we have a separate repo for LBaaS v2 panel, we need to release it separately.

I am not sure I am available at the LBaaS meeting, but I would like to help
this effort as a core from horizon and neutron.

Akihiro


2015-07-15 1:52 GMT+09:00 Doug Wiegley 
mailto:doug...@parksidesoftware.com>>:
I’d be good submitting it to the neutron-lbaas repo, under a horizon/ 
directory. We can iterate there, and talk with the Horizon team about how best 
to integrate. Would that work?

Thanks,
doug

> On Jul 13, 2015, at 3:05 PM, Jain, Vivek 
> mailto:vivekj...@ebay.com>> wrote:
>
> Hi German,
>
> We integrated UI with LBaaS v2 GET APIs. We have created all panels for
> CREATE and UPDATE as well.
> Plan is to share our code with community on stackforge for more
> collaboration from the community.
>
> So far Ganesh from cisco has shown interest in helping with some work. It
> will be great if we can get more hands.
>
> Q: what is the process for hosting in-progress project on stackforge? Can
> someone help me here?
>
> Thanks,
> Vivek
>
> On 7/10/15, 11:40 AM, "Eichberger, German" 
> mailto:german.eichber...@hp.com>>
> wrote:
>
>> Hi Vivek,
>>
>> Hope things are well. With the Midcycle next week I am wondering if you
>> made any progress and/or how we can best help with the panels.
>>
>> Thanks,
>> German
>>
>> From: "Jain, Vivek" 
>> mailto:vivekj...@ebay.com>>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org>>
>> Date: Wednesday, April 8, 2015 at 3:32 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org>>
>> Cc: "Balle Balle, Susanne"
>> mailto:susanne.ba...@hp.com>>>,
>>  "Tonse, Milan"
>> mailto:mto...@ebay.com>>>
>> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for
>> neutron-lbaas v2
>>
>> Thanks German for the etherpad link. If you have any documentation for
>> flows, please share those too.
>>
>> I will work with my team at ebay to publish wireframes for design we are
>> working on. It will be mostly along the lines I demo’ed in Paris.
>>
>> Thanks,
>> Vivek
>>
>> From: , German
>> mailto:german.eichber...@hp.com>>>
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org>>
>> Date: Wednesday, April 8, 2015 at 11:24 AM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> mailto:openstack-dev@lists.openstack.org>>
>> Cc: "Balle, Susanne" 
>> mailto:susanne.ba...@hp.com>>>
>> Subject: Re: [openstack-dev] [neutron][lbaas] Horizon support for
>> neutron-lbaas v2
>>
>> Hi,
>>
>> We briefly talked about it

Re: [openstack-dev] [Cinder] FFE Request: Re-add Quobyte Cinder Driver

2015-07-15 Thread Silvan Kaiser
Hello Mike!
Thanks for looking into this. Yes, the fails are caused by the two open
bugs i mentioned [1,2].
We will continue to see into those.
Regards
Silvan


[1] https://bugs.launchpad.net/nova/+bug/1465416
[2] https://bugs.launchpad.net/cinder/+bug/1473116


2015-07-15 0:07 GMT+02:00 Mike Perez :

> On 17:44 Jul 14, Silvan Kaiser wrote:
> > Hello Cinder Community!
> > I'd like to request a feature freeze exception for change [1], re-adding
> the Quobyte driver to Cinder.
> >
> > The driver was removed in change [2] due to missing CI activity [3], it
> was originally added in the kilo release [4].
> > We've been able to remove the CI's issues and it has been reporting for
> a week now [5], stably testing and consistently showing two bugs (one in
> Nova and one in our driver/Cinder),
> > referenced on the CIs status page [6]. We're monitoring the CI results
> continuously and the detected bugs are being addressed.
> > The complete logs can be reviewed at [7].
> > CI status changes are published on the Quobyte CI Status page in the
> wiki [6], where there’s also our contact information.
> > Thanks a lot for considering and best regards
> > Silvan Kaiser
> > (kaisers/casusbelli in IRC)
> >
> >
> > [1] https://review.openstack.org/#/c/201507/
> > [2] https://review.openstack.org/#/c/191192/
> > [3]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/068609.html
> > [4] https://review.openstack.org/#/c/94186/
> > [5]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069183.html
> > [6] https://wiki.openstack.org/wiki/ThirdPartySystems/Quobyte_CI
> > [7] http://176.9.127.22:8081/?C=M;O=D
>
> The last 120 jobs have failed. Here's a paste of the 60 of them:
>
> http://paste.openstack.org/show/375484/
>
> --
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Dr. Silvan Kaiser
Quobyte GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

-- 

--
*Quobyte* GmbH
Hardenbergplatz 2 - 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-15 Thread Matt Riedemann



On 7/15/2015 11:23 AM, Nikola Đipanov wrote:

I'll keep this email brief since this has been a well known issue for
some time now.

Problem: Libvirt can't honour device names specified at boot for any
volumes requested as part of block_device_mapping. What we currently do
is in case they do get specified, we persist them as is, so that we can
return them from the API, even though libvirt can't honour them (this
leads to a number of issues when we do rely on the data in the DB, a
very common one comes up when attaching further devices which follow up
patches to [1] try to address).

There is a proposed patch [1] that will make libvirt disregard what was
passed and persist the values it defaults and can honour. This seems
contentious because it will change the API behaviour (instance show will
potentially return device names other than the ones requested).

My take on this is that this is broken and we should fix it. All other
ways to fix it, namely:

   * reject the request if libvirt is the driver in the API (we can't
know where the request will end up really and blocking in the API is
bad, plus we would still have to keep backwards compatibility for a long
time which means the bug is not really solved, we just have more code
for bugs to fester)
   * fail the request at the scheduler level (very disruptive , and the
question is how do we tell users that this is a legit change, we can't
really bump the API version for a compute change)

are way more disruptive for little gain.

   * There is one more thing we could do that hasn't been discussed - we
could store requested_device_name, and always return that from the API.
This too adds needless complexity IMO.

I think the patch in [1] is a pragmatic solution to a long standing
issue that only changes the API behaviour for an already broken
interaction. I'd like to avoid needless complexity if it gives us nothing.

It would be awesome to get some discussion around this and hopefully get
some resolution to this long standing issue. Do let me know if more
information/clarification is required.

[1] https://review.openstack.org/#/c/189632/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



The other part of the discussion is around the API changes, not just for 
libvirt, but having a microversion that removes the device from the 
request so it's no longer optional and doesn't provide some false sense 
that it works properly all of the time.  We talked about this in the 
nova channel yesterday and I think the thinking was we wanted to get 
agreement on dropping that with a microversion before moving forward 
with the libvirt change you have to ignore the requested device name.


From what I recall, this was really only supposed to work reliably for 
xen, but now it actually might not, and would need to be tested again. 
Seems we could start by checking the xen CI to see if it is running the 
test_minimum_basic scenario test or anything in test_attach_volume.py in 
Tempest.


I'm not sure about vmware/hyper-v/ironic drivers in nova and how they 
handle this or if they are just as buggy as the libvirt driver.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] questions on neutron-db-migrations

2015-07-15 Thread Madhusudhan Kandadai
Hello,

I have noticed that neutron project got rid of
neutron/db/migration/alembic_migrations/versions/HEAD file and renamed it
to neutron/db/migration/alembic_migrations/versions/HEADS

May I know the reason why this happened? I may have overlooked some
documentation with respect to the change. I have a patch which is in merge
conflict; it adds a db upgrade with version "XXX", and I use that version
in HEAD. When I upgrade, I run neutron-db-manage --config-file
/etc/neutron/neutron.conf --config-file
/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head.

With this recent db refactoring, what needs to be done in order
to upgrade the neutron db?

Thanks,
Madhu
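
For what it's worth, the HEADS file is just a list of alembic branch head revisions, one per line (one entry per migration branch, e.g. expand and contract). A tiny sketch of that assumption — the parsing helper and sample revision ids below are hypothetical:

```python
def read_heads(heads_text):
    """Parse a neutron HEADS file: one alembic revision id per line,
    one entry per migration branch."""
    return [line.strip() for line in heads_text.splitlines() if line.strip()]

# Hypothetical contents of
# neutron/db/migration/alembic_migrations/versions/HEADS
sample = "abc123branch1\ndef456branch2\n"
print(read_heads(sample))  # ['abc123branch1', 'def456branch2']
```

So a patch that adds a new revision on one branch updates only that branch's line, and `neutron-db-manage ... upgrade heads` walks all branches.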
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Need a new release of os-brick for Python 3

2015-07-15 Thread Mike Perez
On 10:45 Jul 15, Victor Stinner wrote:
> Hi,
> 
> The latest release of os-brick is not compatible with Python 3.
> Different patches were merged to fix Python 3 support. "tox -e py34"
> now executes all tests and all tests pass on Python 3.4.
> 
> I need a release of os-brick to port Cinder to Python 3. A Python 3
> issue in os-brick blocks my 4 pending Cinder patches for Python 3.
> 
> Tell me if I can help to get this release.

You should review the changelog update so we can do this:

https://review.openstack.org/#/c/201736/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Device names supplied to the boot request

2015-07-15 Thread Matt Riedemann



On 7/15/2015 11:23 AM, Nikola Đipanov wrote:

I'll keep this email brief since this has been a well known issue for
some time now.

Problem: Libvirt can't honour device names specified at boot for any
volumes requested as part of block_device_mapping. What we currently do
is in case they do get specified, we persist them as is, so that we can
return them from the API, even though libvirt can't honour them (this
leads to a number of issues when we do rely on the data in the DB, a
very common one comes up when attaching further devices which follow up
patches to [1] try to address).

There is a proposed patch [1] that will make libvirt disregard what was
passed and persist the values it defaults and can honour. This seems
contentious because it will change the API behaviour (instance show will
potentially return device names other than the ones requested).

My take on this is that this is broken and we should fix it. All other
ways to fix it, namely:

   * reject the request if libvirt is the driver in the API (we can't
know where the request will end up really and blocking in the API is
bad, plus we would still have to keep backwards compatibility for a long
time which means the bug is not really solved, we just have more code
for bugs to fester)
   * fail the request at the scheduler level (very disruptive , and the
question is how do we tell users that this is a legit change, we can't
really bump the API version for a compute change)

are way more disruptive for little gain.

   * There is one more thing we could do that hasn't been discussed - we
could store requested_device_name, and always return that from the API.
This too adds needless complexity IMO.

I think the patch in [1] is a pragmatic solution to a long standing
issue that only changes the API behaviour for an already broken
interaction. I'd like to avoid needless complexity if it gives us nothing.

It would be awesome to get some discussion around this and hopefully get
some resolution to this long standing issue. Do let me know if more
information/clarification is required.

[1] https://review.openstack.org/#/c/189632/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Coincidentally mtreinish and I were talking today about some Tempest 
tests that attach/detach a volume and do ssh operations in between.  He 
pointed out that the test_stamp_pattern scenario test has been skipped 
forever because the device_name is not reliable on the BDM.  And that's 
using a hard-coded device name in tempest.conf [1].


So this would actually fix that test if we updated the test to just get 
the BDM device information after the attach and use that rather than the 
hard-coded config option in Tempest that is not likely to work - 
arguably the test should have been written more dynamically to start 
with since you're not required to provide a device name when attaching a 
volume.


[1] 
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/test_stamp_pattern.py#n105
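
A sketch of what that dynamic lookup could look like, assuming the usual shape of the os-volume_attachments response (the helper name is made up):

```python
def attached_device(attachments, volume_id):
    """Pick the device name Nova actually reports for a volume attachment,
    instead of relying on a hard-coded name from tempest.conf."""
    for att in attachments:
        if att['volumeId'] == volume_id:
            return att['device']
    raise LookupError('volume %s is not attached' % volume_id)

# Example shape of GET /servers/{id}/os-volume_attachments entries
sample = [{'id': 'a1', 'serverId': 's1',
           'volumeId': 'vol-1', 'device': '/dev/vdb'}]
print(attached_device(sample, 'vol-1'))  # /dev/vdb
```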


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-15 Thread Doug Wiegley

> On Jul 15, 2015, at 9:54 AM, Neil Jerram  wrote:
> 
> I've been reading available docs about the forthcoming Neutron flavors 
> framework, and am not yet sure I understand what it means for a network.
> 
> Is it a way for an admin to provide a particular kind of network, and then 
> for a tenant to know what they're attaching their VMs to?

Theoretically, anything in neutron can consume the linked flavor info and do 
something special. Since flavors can be an abstraction of vendor/plugin 
specific stuff, that means any plugin is free to add flavor support for 
networks if desired, under the operators control to enable. None is planned for 
that object at the moment, that I am aware of.

Thanks,
doug


> 
> How does it differ from provider:network-type?  (I guess, because the latter 
> is supposed to be for implementation consumption only - but is that correct?)
> 
> Thanks,
>Neil
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-15 Thread Madhusudhan Kandadai
On Wed, Jul 15, 2015 at 9:25 AM, Kyle Mestery  wrote:

> On Wed, Jul 15, 2015 at 10:54 AM, Neil Jerram 
> wrote:
>
>> I've been reading available docs about the forthcoming Neutron flavors
>> framework, and am not yet sure I understand what it means for a network.
>>
>>
> In reality, this is envisioned more for service plugins (e.g. flavors of
> LBaaS, VPNaaS, and FWaaS) than core neutron resources.
>
Yes, rightly put. This is for service plugins, and it is part of extensions rather
than core network resources.

>
>
>> Is it a way for an admin to provide a particular kind of network, and
>> then for a tenant to know what they're attaching their VMs to?
>>
>>
> I'll defer to Madhu who is implementing this, but I don't believe that's
> the intention at all.
>
Currently, an admin will be able to assign particular flavors;
unfortunately, these are not going to be tenant-specific flavors. As you
might have seen in the review, we are just using tenant_id to bypass the
keystone checks implemented in base.py and it is not stored in the db as
well. It is something to do in the future and documented the same in the
blueprint.

>
>
>> How does it differ from provider:network-type?  (I guess, because the
>> latter is supposed to be for implementation consumption only - but is that
>> correct?)
>>
>>
> Flavors are created and curated by operators, and consumed by API users.
>
+1

>
>
>> Thanks,
>> Neil
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Fox, Kevin M
Maybe somehow I missed the point, but why not just use raw Nova flavors? They 
already abstract away ironic vs kvm vs hyperv, etc.

Thanks,
Kevin


From: Daneyon Hansen (danehans) [daneh...@cisco.com]
Sent: Wednesday, July 15, 2015 9:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None

(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver

http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto mailto:adrian.o...@rackspace.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, July 14, 2015 at 7:44 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

One drawback to virt_type if not seen in the context of the acceptable values, 
is that it should be set to values like libvirt, xen, ironic, etc. That might 
actually be good. Instead of using the values 'vm' or 'baremetal', we use the 
name of the nova virt driver, and interpret those to be vm or baremetal types. 
So if I set the value to 'xen', I know the nova instance type is a vm, and 
'ironic' means a baremetal nova instance.

Adrian


 Original message 
From: Hongbin Lu mailto:hongbin...@huawei.com>>
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

I am going to propose a third option:

3. virt_type

I have concerns about options 1 and 2, because “instance_type” and flavor were 
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or 
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Magnum template manage use platform VS others 
as a type?


Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some historical reason, we use platform to indicate 'vm' or 'baremetal'.
This does not seem proper, so @Adrian proposed nova_instance_type, and some
prefer other names; let me summarize as below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but not proposed any name)


Let's try to reach agreement ASAP. I think counting the final vote winner as 
the proper name is the best solution (considering community diversity).


BTW, if you do not propose any better name and just vote to disagree with all, I think 
that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Continuing with heat-coe-templates

2015-07-15 Thread Lars Kellogg-Stedman
On Sun, Jul 05, 2015 at 12:00:55AM +, Steven Dake (stdake) wrote:
> Lars had a repo he maintains.  Magnum had a repo it maintained.  We
> wanted one source of truth.  The deal was we would merge all the
> things into heat-coe-templates, delete larsks/heat-kubernetes and
> delete the magnum templates.  Then there would be one source of
> truth.

I apologize for being out of the loop for a bit; I was stuck out at a
customer site for a while.

I created the heat-coe-templates project at the request of sdake
because it sounded as if (a) magnum wanted to make use of the
templates and have them in a location where there was a better
workflow for submitting and reviewing patches, and (b) magnum wanted
to take the templates in a different direction (with support for other
scheduling engines, etc).

After creating it, there was no activity on it so I stopped paying
attention for a while.  If folks want to use it, we should set up some
additional maintainers and go for it.

I'm going to continue maintaining my own repository as a
strictly-for-kubernetes tool.  I had to make a number of changes to it
recently in order to support a demo at the recent summit, and I am
happy to contribute some of these upstream.

In conclusion: I have very little skin in this game.  I am happy for
folks to make use of the templates if they are useful, and I am
totally happy to let other folks manage the heat-coe-templates
project and take it in a direction completely different from where
things are now.

I leave the decision about where things are going to someone who has a
more vested interest in the resolution.

Cheers,

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/



signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] questions about some skipped tests in vmware/nsx ci

2015-07-15 Thread Matt Riedemann



On 7/15/2015 11:17 AM, Matt Riedemann wrote:

I was looking at NSX CI results on [1] which is related to volumes and
noticed that
tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume

is being skipped, is there a reason why?  Attach and detach of volumes
is a pretty basic operation for a virt driver in nova.

[1] https://review.openstack.org/#/c/197192/



I guess this is just a poorly named test case; it's skipped if you don't 
have ssh validation enabled in the CI run [1], which is False by default.


There is another test right below it, test_list_get_volume_attachments, 
which is run in the NSX CI so nevermind, the sky isn't falling.


[1] 
http://git.openstack.org/cgit/openstack/tempest/tree/tempest/api/compute/volumes/test_attach_volume.py#n87


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] What does flavor mean for a network?

2015-07-15 Thread Kyle Mestery
On Wed, Jul 15, 2015 at 10:54 AM, Neil Jerram 
wrote:

> I've been reading available docs about the forthcoming Neutron flavors
> framework, and am not yet sure I understand what it means for a network.
>
>
In reality, this is envisioned more for service plugins (e.g. flavors of
LBaaS, VPNaaS, and FWaaS) than core neutron resources.


> Is it a way for an admin to provide a particular kind of network, and then
> for a tenant to know what they're attaching their VMs to?
>
>
I'll defer to Madhu who is implementing this, but I don't believe that's
the intention at all.


> How does it differ from provider:network-type?  (I guess, because the
> latter is supposed to be for implementation consumption only - but is that
> correct?)
>
>
Flavors are created and curated by operators, and consumed by API users.


> Thanks,
> Neil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Device names supplied to the boot request

2015-07-15 Thread Nikola Đipanov
I'll keep this email brief since this has been a well known issue for
some time now.

Problem: Libvirt can't honour device names specified at boot for any
volumes requested as part of block_device_mapping. What we currently do
is in case they do get specified, we persist them as is, so that we can
return them from the API, even though libvirt can't honour them (this
leads to a number of issues when we do rely on the data in the DB, a
very common one comes up when attaching further devices which follow up
patches to [1] try to address).

There is a proposed patch [1] that will make libvirt disregard what was
passed and persist the values it defaults and can honour. This seems
contentious because it will change the API behaviour (instance show will
potentially return device names other than the ones requested).

My take on this is that this is broken and we should fix it. All other
ways to fix it, namely:

  * reject the request if libvirt is the driver in the API (we can't
know where the request will end up really and blocking in the API is
bad, plus we would still have to keep backwards compatibility for a long
time which means the bug is not really solved, we just have more code
for bugs to fester)
  * fail the request at the scheduler level (very disruptive , and the
question is how do we tell users that this is a legit change, we can't
really bump the API version for a compute change)

are way more disruptive for little gain.

  * There is one more thing we could do that hasn't been discussed - we
could store requested_device_name, and always return that from the API.
This too adds needless complexity IMO.

I think the patch in [1] is a pragmatic solution to a long standing
issue that only changes the API behaviour for an already broken
interaction. I'd like to avoid needless complexity if it gives us nothing.

It would be awesome to get some discussion around this and hopefully get
some resolution to this long standing issue. Do let me know if more
information/clarification is required.

[1] https://review.openstack.org/#/c/189632/
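
To make the mismatch concrete, here is a minimal sketch of requesting a device name at boot and then trusting only what the API reports afterwards. The endpoint, credentials, and ids are placeholders, and exact novaclient call signatures vary by release, so treat this as illustrative only:

```python
def build_bdm(volume_id, requested_device):
    """A block_device_mapping_v2 entry asking Nova to attach a boot volume
    at a specific device name; with the libvirt driver the name is
    effectively advisory and may not be honoured."""
    return [{
        'source_type': 'volume',
        'destination_type': 'volume',
        'uuid': volume_id,
        'boot_index': 0,
        'device_name': requested_device,  # the driver may pick another name
    }]

# Live usage (requires python-novaclient and a running cloud; all values
# below are placeholders):
#   from novaclient import client
#   nova = client.Client('2', 'user', 'password', 'demo',
#                        'http://keystone.example.com:5000/v2.0')
#   server = nova.servers.create(
#       'bdm-test', image=None, flavor='m1.small',
#       block_device_mapping_v2=build_bdm(VOLUME_ID, '/dev/vdb'))
#   for att in nova.volumes.get_server_volumes(server.id):
#       print(att.device)   # the names the driver actually used
```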

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][vmware] questions about some skipped tests in vmware/nsx ci

2015-07-15 Thread Matt Riedemann
I was looking at NSX CI results on [1] which is related to volumes and 
noticed that 
tempest.api.compute.volumes.test_attach_volume.AttachVolumeTestJSON.test_attach_detach_volume
is being skipped, is there a reason why?  Attach and detach of volumes 
is a pretty basic operation for a virt driver in nova.


[1] https://review.openstack.org/#/c/197192/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Magnum template manage use platform VS others as a type?

2015-07-15 Thread Daneyon Hansen (danehans)
All,

IMO virt_type does not properly describe bare metal deployments.  What about 
using the compute_driver parameter?

compute_driver = None

(StrOpt) Driver to use for controlling virtualization. Options include: 
libvirt.LibvirtDriver, xenapi.XenAPIDriver, fake.FakeDriver, 
baremetal.BareMetalDriver, vmwareapi.VMwareVCDriver, hyperv.HyperVDriver

http://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html
http://docs.openstack.org/developer/ironic/deploy/install-guide.html

From: Adrian Otto <adrian.o...@rackspace.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, July 14, 2015 at 7:44 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

One drawback to virt_type, if not seen in the context of the acceptable values, 
is that it would be set to values like libvirt, xen, ironic, etc. That might 
actually be good. Instead of using the values 'vm' or 'baremetal', we use the 
name of the nova virt driver and interpret those as vm or baremetal types. 
So if I set the value to 'xen', I know the nova instance type is a vm, and 
'ironic' means a baremetal nova instance.
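That interpretation step can be sketched in a few lines. The mapping below is purely illustrative (the driver names and the helper function are assumptions for this sketch, not Magnum or nova code):

```python
# Hypothetical mapping from a nova virt driver name to the server
# type it implies; names below are illustrative, not Magnum code.
VIRT_DRIVER_TO_SERVER_TYPE = {
    'libvirt': 'vm',
    'xen': 'vm',
    'vmware': 'vm',
    'hyperv': 'vm',
    'ironic': 'baremetal',
}

def server_type_for(virt_type):
    """Interpret a virt driver name as 'vm' or 'baremetal'."""
    try:
        return VIRT_DRIVER_TO_SERVER_TYPE[virt_type]
    except KeyError:
        raise ValueError('unknown virt_type: %s' % virt_type)
```

With this shape, validation of the user-supplied value falls out for free: anything not in the table is rejected early.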

Adrian


 Original message 
From: Hongbin Lu <hongbin...@huawei.com>
Date: 07/14/2015 7:20 PM (GMT-08:00)
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [magnum] Magnum template manage use platform VS 
others as a type?

I am going to propose a third option:

3. virt_type

I have concerns about options 1 and 2, because “instance_type” and flavor were 
used interchangeably before [1]. If we use “instance_type” to indicate “vm” or 
“baremetal”, it may cause confusion.

[1] https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup

Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: July-14-15 9:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [magnum] Magnum template manage use platform VS others 
as a type?


Hi Magnum Guys,


I want to raise this question through ML.


In this patch https://review.openstack.org/#/c/200401/


For some historical reason, we use platform to indicate 'vm' or 'baremetal'.
That name does not seem appropriate. @Adrian proposed nova_instance_type, and 
others prefer different names; let me summarize the options below:


1. nova_instance_type  2 votes

2. instance_type 2 votes

3. others (1 vote, but not proposed any name)


Let's try to reach agreement ASAP. I think taking the name with the most votes 
as the proper name is the best solution (considering community diversity).


BTW, if you have not proposed a better name and just vote to disagree with all 
of them, I think that vote is not valid and not helpful for solving the issue.


Please help to vote for that name.


Thanks




Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193

Follow your heart. You are miracle!


Re: [openstack-dev] [Fuel] Separate repo for Fuel Agent

2015-07-15 Thread Oleg Gelbukh
Nice work, Vladimir. Thank you for pushing this; it's a really important step
toward decoupling things from the consolidated repository.

--
Best regards,
Oleg Gelbukh

On Wed, Jul 15, 2015 at 6:47 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> I'm glad to announce that everything about this task is done. ISO build
> job uses this new repository [1]. BVT is green. Fuel Agent rpm spec has
> been moved to the new repo and perestroika has also been switched to build
> fuel-agent package from the new repo. The only difference that could
> potentially affect deployment is that the fuel-agent package built from the new
> repo will have a lower version, because the number of commits in the new repo
> is around 130 vs 7275 in fuel-web (fuel-agent-7.0.0-1.mos7275.noarch.rpm).
> But I believe it is going to be fine as long as there is only one fuel-agent
> package in the rpm repository.
>
> Next step is to remove stackforge/fuel-web/fuel_agent directory.
>
>
> [1] https://github.com/stackforge/fuel-agent.git
>
> Vladimir Kozhukalov
>
> On Wed, Jul 15, 2015 at 2:19 AM, Mike Scherbakov  > wrote:
>
>> Thanks Vladimir. Let's ensure to get it done sooner than later (this
>> might require to be tested in custom ISO..) - we are approaching FF, and I
>> expect growing queues of patches to land...
>>
>> On Tue, Jul 14, 2015 at 6:03 AM Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Dear colleagues,
>>>
>>> New repository [1] has been created. So, please port all your review
>>> requests to stackforge/fuel-web related to Fuel Agent to this new
>>> repository. Currently, I am testing these two patches
>>> https://review.openstack.org/#/c/200595
>>> https://review.openstack.org/#/c/200025. If they work, we need to merge
>>> them and that is it. Review is welcome.
>>>
>>>
>>>
>>> [1] https://github.com/stackforge/fuel-agent.git
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Jul 10, 2015 at 8:14 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Ok, guys.

 Looks like there are no objections. At the moment I need to create
 the actual version of the upstream repository, which is going to be sucked in by
 OpenStack Infra. Please be informed that all patches changing
 fuel-web/fuel_agent that will be merged after this moment will need to be
 ported into the new fuel-agent repository.


 Vladimir Kozhukalov

 On Fri, Jul 10, 2015 at 6:38 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> Guys, we are next to moving fuel_agent directory into a separate
> repository. Action flow is going to be as follows:
>
> 1) Create verify jobs on our CI https://review.fuel-infra.org/#/c/9186
> (DONE)
> 2) Freeze fuel_agent directory in
> https://github.com/stackforge/fuel-web (will announce in a separate
> mail thread). That means we stop merging patches into master which change
> fuel_agent directory. Unfortunately, all review requests need to be
> re-sent, but it is not going to be very difficult.
> 3) Create temporary upstream repository with fuel_agent/* as a
> content. I'm not planning to move 5.x and 6.x branches. Only master. So,
> all fixes for 5.x and 6.x will be living in fuel-web.
> 4) This upstream repository is going to be sucked in by
> openstack-infra. Patch is here
> https://review.openstack.org/#/c/199178/ (review is welcome) I don't
> know how long it is going to take. Will try to poke infra people to do 
> this
> today.
> 5) Then we need to accept two patches into new fuel-agent repository:
>  - rpm spec (extraction from fuel-web/specs/nailgun.spec) (ready, but
> there is no review request)
>  - run_tests.sh (to run tests) (ready, but there is no review request)
>
> !!! By this moment there won't be any impact on ISO build process !!!
>
> 6) Then we need to change two things at the same time (review is
> welcome)
>   - fuel-web/specs/nailgun.spec in order to prevent fuel-agent package
> building  https://review.openstack.org/#/c/200595
>   - fuel-main so as to introduce new fuel-agent repository into the
> build process https://review.openstack.org/#/c/200025
>
> And good luck to me -)
>
>
> Vladimir Kozhukalov
>
> On Wed, Jul 8, 2015 at 12:53 PM, Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> There were some questions from Alexandra Fedorova about independent
>> release cycle.
>>
>> >according to the configuration [1] Infra team won't be able to do
>> >branching or any kind of release management for new repository.
>>
>> >Could you please clarify, do we plan to version new repository the
>> >same way as we do for main fuel repositories or there going to be
>> >separate releases as in python-fuelclient [2]? Who should drive the
>> >release process for this repo and how this change will affect Fuel
>> ISO
>> >release?
>>

[openstack-dev] [all][ptl][release] New library release request process

2015-07-15 Thread Doug Hellmann
PTLs and release liaisons,

We are ready to take the next step in implementing the new library
release process, and start using gerrit for release request reviews.

The new repository openstack/releases is set up with back-history for
all of our current releases. There are full instructions in the README
in the repository [1], but I will explain the basics here.

We're tracking all managed library releases using one YAML file per
release series (kilo, liberty, etc.) and "deliverable" (for non-library
projects a deliverable may include more than one repository). To request
a new release, you edit the existing file to add the version, sha, etc.
The release team will look at the version number and consider the timing
of the release, and provide feedback on the review, just like we've been
doing on IRC or via email for the past couple of weeks as releases were
requested. The full release tagging is not yet automated, so when the
release is approved, we'll run the tool manually for now. You'll find
more details about the automation in the infra spec [2].
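As a rough illustration, the edit for a release request adds an entry shaped something like the following. The field names here are assumptions made for this sketch; the authoritative schema is in the repository README [1]:

```python
# Hypothetical shape of one deliverable file entry, expressed as the
# parsed data; treat field names as illustrative, not the real schema.
release_request = {
    'launchpad': 'example-lib',
    'releases': [
        {'version': '1.2.0',
         'projects': [
             {'repo': 'openstack/example-lib',
              'hash': 'a' * 40},  # full commit sha of the release point
         ]},
    ],
}

def looks_valid(deliverable):
    # A reviewer-style sanity check: every release names a version and
    # points each project at a full 40-character git sha.
    for release in deliverable['releases']:
        if not release.get('version'):
            return False
        for project in release['projects']:
            if len(project['hash']) != 40:
                return False
    return True
```

The review then happens on exactly this kind of diff: the release team eyeballs the version number and the sha before approving.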

We should start using this new process for all release requests, from
this point on. That will let us maintain a complete history, which we
are eventually going to use to build a site showing which releases are
part of each series to help downstream consumers of each project.

As usual, stop by #openstack-relmgr-office or email the list if you have
questions.

Thanks,
Doug

[1] http://git.openstack.org/cgit/openstack/releases/tree/README.rst
[2]
http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html




[openstack-dev] [neutron] What does flavor mean for a network?

2015-07-15 Thread Neil Jerram
I've been reading available docs about the forthcoming Neutron flavors 
framework, and am not yet sure I understand what it means for a network.


Is it a way for an admin to provide a particular kind of network, and 
then for a tenant to know what they're attaching their VMs to?


How does it differ from provider:network-type?  (I guess, because the 
latter is supposed to be for implementation consumption only - but is 
that correct?)


Thanks,
Neil




Re: [openstack-dev] [Fuel] Separate repo for Fuel Agent

2015-07-15 Thread Vladimir Kozhukalov
I'm glad to announce that everything about this task is done. ISO build job
uses this new repository [1]. BVT is green. Fuel Agent rpm spec has been
moved to the new repo and perestroika has also been switched to build
fuel-agent package from the new repo. The only difference that could
potentially affect deployment is that the fuel-agent package built from the new
repo will have a lower version, because the number of commits in the new repo
is around 130 vs 7275 in fuel-web (fuel-agent-7.0.0-1.mos7275.noarch.rpm).
But I believe it is going to be fine as long as there is only one fuel-agent
package in the rpm repository.

Next step is to remove stackforge/fuel-web/fuel_agent directory.


[1] https://github.com/stackforge/fuel-agent.git

Vladimir Kozhukalov

On Wed, Jul 15, 2015 at 2:19 AM, Mike Scherbakov 
wrote:

> Thanks Vladimir. Let's ensure to get it done sooner than later (this might
> require to be tested in custom ISO..) - we are approaching FF, and I expect
> growing queues of patches to land...
>
> On Tue, Jul 14, 2015 at 6:03 AM Vladimir Kozhukalov <
> vkozhuka...@mirantis.com> wrote:
>
>> Dear colleagues,
>>
>> New repository [1] has been created. So, please port all your review
>> requests to stackforge/fuel-web related to Fuel Agent to this new
>> repository. Currently, I am testing these two patches
>> https://review.openstack.org/#/c/200595
>> https://review.openstack.org/#/c/200025. If they work, we need to merge
>> them and that is it. Review is welcome.
>>
>>
>>
>> [1] https://github.com/stackforge/fuel-agent.git
>>
>> Vladimir Kozhukalov
>>
>> On Fri, Jul 10, 2015 at 8:14 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>>> Ok, guys.
>>>
>>> Looks like there are no objections. At the moment I need to create
>>> the actual version of the upstream repository, which is going to be sucked in by
>>> OpenStack Infra. Please be informed that all patches changing
>>> fuel-web/fuel_agent that will be merged after this moment will need to be
>>> ported into the new fuel-agent repository.
>>>
>>>
>>> Vladimir Kozhukalov
>>>
>>> On Fri, Jul 10, 2015 at 6:38 PM, Vladimir Kozhukalov <
>>> vkozhuka...@mirantis.com> wrote:
>>>
 Guys, we are next to moving fuel_agent directory into a separate
 repository. Action flow is going to be as follows:

 1) Create verify jobs on our CI https://review.fuel-infra.org/#/c/9186
 (DONE)
 2) Freeze fuel_agent directory in
 https://github.com/stackforge/fuel-web (will announce in a separate
 mail thread). That means we stop merging patches into master which change
 fuel_agent directory. Unfortunately, all review requests need to be
 re-sent, but it is not going to be very difficult.
 3) Create temporary upstream repository with fuel_agent/* as a content.
 I'm not planning to move 5.x and 6.x branches. Only master. So, all fixes
 for 5.x and 6.x will be living in fuel-web.
 4) This upstream repository is going to be sucked in by
 openstack-infra. Patch is here https://review.openstack.org/#/c/199178/
 (review is welcome) I don't know how long it is going to take. Will try to
 poke infra people to do this today.
 5) Then we need to accept two patches into new fuel-agent repository:
  - rpm spec (extraction from fuel-web/specs/nailgun.spec) (ready, but
 there is no review request)
  - run_tests.sh (to run tests) (ready, but there is no review request)

 !!! By this moment there won't be any impact on ISO build process !!!

 6) Then we need to change two things at the same time (review is
 welcome)
   - fuel-web/specs/nailgun.spec in order to prevent fuel-agent package
 building  https://review.openstack.org/#/c/200595
   - fuel-main so as to introduce new fuel-agent repository into the
 build process https://review.openstack.org/#/c/200025

 And good luck to me -)


 Vladimir Kozhukalov

 On Wed, Jul 8, 2015 at 12:53 PM, Vladimir Kozhukalov <
 vkozhuka...@mirantis.com> wrote:

> There were some questions from Alexandra Fedorova about independent
> release cycle.
>
> >according to the configuration [1] Infra team won't be able to do
> >branching or any kind of release management for new repository.
>
> >Could you please clarify, do we plan to version new repository the
> >same way as we do for main fuel repositories or there going to be
> >separate releases as in python-fuelclient [2]? Who should drive the
> >release process for this repo and how this change will affect Fuel ISO
> >release?
>
> >[1]
> https://review.openstack.org/#/c/199178/1/gerrit/acls/stackforge/fuel-agent.config,cm
> >[2]
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/068837.html
>
> IMO all Fuel components should be as much independent as possible with
> highly defined APIs used for their interaction, with their own teams, with
> their own independent relea

Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Joshua Harlow

I do like experiments!

What about going even farther and trying to integrate somehow into mesos?

https://mesos.apache.org/documentation/latest/mesos-architecture/

Replace the Hadoop executor or the MPI executor with a 'VM executor' and 
perhaps we could eliminate a large part of the scheduler code (just a 
thought)...


I think a bunch of other ideas were also written down at 
https://review.openstack.org/#/c/191914/ ; maybe you can try some of those too :)


Ed Leafe wrote:


Changing the architecture of a complex system such as Nova is never
easy, even when we know that the design isn't working as well as we
need it to. And it's even more frustrating because when the change is
complete, it's hard to know if the improvement, if any, was worth it.

So I had an idea: what if we ran a test of that architecture change
out-of-tree? In other words, create a separate deployment, and rip out
the parts that don't work well, replacing them with an alternative
design. There would be no Gerrit reviews or anything that would slow
down the work or add load to the already overloaded reviewers. Then we
could see if this modified system is a significant-enough improvement
to justify investing the time in implementing it in-tree. And, of
course, if the test doesn't show what was hoped for, it is scrapped
and we start thinking anew.

The important part in this process is defining up front what level of
improvement would be needed to make considering actually making such a
change worthwhile, and what sort of tests would demonstrate whether or
not whether this level was met. I'd like to discuss such an experiment
next week at the Nova mid-cycle.

What I'd like to investigate is replacing the current design of having
the compute nodes communicating with the scheduler via message queues.
This design is overly complex and has several known scalability
issues. My thought is to replace this with a Cassandra [1] backend.
Compute nodes would update their state to Cassandra whenever they
change, and that data would be read by the scheduler to make its host
selection. When the scheduler chooses a host, it would post the claim
to Cassandra wrapped in a lightweight transaction, which would ensure
that no other scheduler has tried to claim those resources. When the
host has built the requested VM, it will delete the claim and update
Cassandra with its current state.
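The claim flow can be sketched with an in-memory stand-in for Cassandra's lightweight-transaction (compare-and-set) semantics. This is purely illustrative of the protocol described above, not the cassandra-driver API:

```python
import threading

class ClaimStore:
    """In-memory stand-in for the compare-and-set claim flow
    described above; illustrative only, not real Cassandra code."""

    def __init__(self):
        self._claims = {}
        self._lock = threading.Lock()

    def try_claim(self, host, request_id):
        # Mirrors INSERT ... IF NOT EXISTS: only one scheduler's
        # claim on a given host/request pair can win the race.
        with self._lock:
            if (host, request_id) in self._claims:
                return False  # another scheduler already claimed it
            self._claims[(host, request_id)] = True
            return True

    def release(self, host, request_id):
        # The compute node deletes the claim once the VM is built.
        with self._lock:
            self._claims.pop((host, request_id), None)
```

The point of the experiment is that with this semantics, losing the race is an explicit, cheap outcome: a second scheduler simply retries with another host instead of building a VM that later fails.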

One main motivation for using Cassandra over the current design is
that it will enable us to run multiple schedulers without increasing
the raciness of the system. Another is that it will greatly simplify a
lot of the internal plumbing we've set up to implement in Nova what we
would get out of the box with Cassandra. A third is that if this
proves to be a success, it would also be able to be used further down
the road to simplify inter-cell communication (but this is getting
ahead of ourselves...). I've worked with Cassandra before and it has
been rock-solid to run and simple to set up. I've also had preliminary
technical reviews with the engineers at DataStax [2], the company
behind Cassandra, and they agreed that this was a good fit.

At this point I'm sure that most of you are filled with thoughts on
how this won't work, or how much trouble it will be to switch, or how
much more of a pain it will be, or how you hate non-relational DBs, or
any of a zillion other negative thoughts. FWIW, I have them too. But
instead of ranting, I would ask that we acknowledge for now that:

a) it will be disruptive and painful to switch something like this at
this point in Nova's development
b) it would have to provide *significant* improvement to make such a
change worthwhile

So what I'm asking from all of you is to help define the second part:
what we would want improved, and how to measure those benefits. In
other words, what results would you have to see in order to make you
reconsider your initial "nah, this'll never work" reaction, and start
to think that this is will be a worthwhile change to make to Nova.

I'm also asking that you refrain from talking about why this can't
work for now. I know it'll be difficult to do that, since nobody likes
ranting about stuff more than I do, but right now it won't be helpful.
There will be plenty of time for that later, assuming that this
experiment yields anything worthwhile. Instead, think of the current
pain points in the scheduler design, and what sort of improvement you
would have to see in order to seriously consider undertaking this
change to Nova.

I've gotten the OK from my management to pursue this, and several
people in the community have expressed support for both the approach
and the experiment, even though most don't have spare cycles to
contribute. I'd love to have anyone who is interested become involved.

I hope that this will be a positive discussion at the Nova mid-cycle
next week. I know it will be a lively one. :)

[1] http://cassandra.apache.org/
[2] http://www.datastax.com/

Re: [openstack-dev] Why do we need python-fasteners and not just oslo.concurrency?

2015-07-15 Thread Joshua Harlow

For the same reason we don't move all of sqlalchemy into oslo.db.

This is further explained @

- https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Choosing_a_Name

- 
http://specs.openstack.org/openstack/oslo-specs/specs/policy/naming-libraries.html#proposed-policy


Taskflow, tooz, and others were designed to be more than just production 
runtime dependencies of OpenStack projects (I know they are used 
elsewhere in non-OpenStack projects); this is why fasteners was split 
off, so it can be used by the wider world (taskflow, tooz, and 
oslo.concurrency now depend on fasteners and use it).
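For flavor, the kind of primitive fasteners packages for reuse can be sketched with the stdlib alone. This is a bare-bones advisory file lock built on `fcntl.flock` (POSIX-only); fasteners' real `InterProcessLock` is more featureful and portable, so treat this only as a sketch of the idea:

```python
import fcntl
import os
import tempfile

class FileLock:
    """Bare-bones advisory interprocess file lock, sketching the kind
    of primitive fasteners generalizes for use outside OpenStack."""

    def __init__(self, path):
        self.path = path
        self.fd = None

    def __enter__(self):
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # blocks until acquired
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)

lock_path = os.path.join(tempfile.gettempdir(), 'fasteners-sketch.lock')
with FileLock(lock_path):
    held = True  # critical section: one process at a time gets here
```

Keeping this in its own small library means any Python project can depend on it without pulling in OpenStack-specific machinery.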


-Josh

Thomas Goirand wrote:

Hi,

I've seen that the latest version of taskflow needs fasteners, which
handles lock stuff. Why can't this go into oslo.concurrency?

Cheers,

Thomas Goirand (zigo)





Re: [openstack-dev] [nova] Proposal for an Experiment

2015-07-15 Thread Ed Leafe

On 07/15/2015 09:49 AM, Matt Riedemann wrote:

> Without reading the whole thread, couldn't you just do a feature
> branch (but that would require reviews in gerrit which we don't
> want), or fork the repo in github and just hack on it there without
> gerrit?
> 
> I'm sure many will say it's not cool to fork the repo, but that's 
> essentially what you'd be doing anyway, so meh.

It will be a temporary fork, not anything designed to live on forever.

> I think you just have to have an understanding that whatever you
> work on in the fork won't necessarily be accepted back in the main
> repo.

Yes, that's sort of the whole point. First prove that it can work;
then and only then do we sit down and discuss the best way to
implement. The odds that any of the changes made would be able to be
pulled directly into master would be slim.

-- 
Ed Leafe



Re: [openstack-dev] [Openstack][nova]

2015-07-15 Thread Matt Riedemann



On 7/15/2015 5:41 AM, John Garbutt wrote:

On 14 July 2015 at 21:43, Cale Rath  wrote:

Hi,

I created a patch to fail on the proxy call to Neutron for used limits,
found here: https://review.openstack.org/#/c/199604/

This patch was done because of this:
http://docs.openstack.org/developer/nova/project_scope.html?highlight=proxy#no-more-api-proxies,
where it’s stated that Nova shouldn’t be proxying API calls.

That said, Matt Riedemann brings up the point that this breaks the case
where Neutron is installed and we want to be more graceful, rather than just
raising an exception.


+1 to matt's point.


Here are some options:

1. fail - (the code in the patch above)
2. proxy to neutron for floating ips and security groups - that's what the
original change was doing back in havana
3. return -1 or something for floatingips/security groups to indicate that
we don't know, you have to get those from neutron

Does anybody have an opinion on which option we should do regarding API
proxies in this case?
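Option 3 from the list above can be sketched like this. The limit-field names and the flag are illustrative, not nova's actual used-limits code:

```python
# Sketch of option 3: report -1 ("unknown, ask Neutron") for network
# resources when Neutron is the backend, instead of proxying the call.
def used_limits(absolute_limits, using_neutron):
    limits = dict(absolute_limits)  # leave the caller's dict untouched
    if using_neutron:
        # Sentinel telling API consumers to query Neutron directly.
        limits['totalFloatingIpsUsed'] = -1
        limits['totalSecurityGroupsUsed'] = -1
    return limits
```

The trade-off is visible in the sketch: no proxy call is made, but every consumer must now special-case the sentinel, which is why option 2 (proxying) keeps the API interoperable.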


We need to have our APIs work the same using either nova-network or
neutron, to keep the API interoperable.

The scope document is really trying to say that adding new APIs that
force us to do more proxying would be bad (e.g. passing in extra
properties for the ports that Nova creates in neutron on behalf of the
user).

In this case, it seems we need to proxy to neutron to ensure the Nova
API keeps working as expected when you use Neutron.

Its possible there is a massive gotcha I am just not seeing right now?

Thanks,
John




I don't think you're missing anything.  It's a pretty clear case.  The 
reason this hasn't been fixed for so long is that originally back in 
Havana with the nova v3 API we expected to drop all proxy code to 
neutron so it wouldn't even be a problem in the new v3 API, at least 
that was the thinking.  Then things changed, and we just never got back 
around to closing this gap.


--

Thanks,

Matt Riedemann




  1   2   >