[openstack-dev] What's Up, Doc? 17 June 2016

2016-06-16 Thread Lana Brindley
Hi everyone,

This week I'm very pleased to announce that we have our first project-specific 
Install Guide published! Petr Kovar got Heat over the line in first place, and 
it's looking great:  
http://docs.openstack.org/project-install-guide/orchestration/draft/index.html 
Well done Petr, and of course all the wonderful docs people who helped us get 
to this point. We're also expecting to see Trove published very soon. I've been 
using the lessons learned from these early projects to flesh out our 
instructions a little more, so it should be even easier for projects to get 
their Install Guides up and running. We still need to create the central index, 
but it's all starting to come together now.

I'm also excited to announce that, for the very first time, we have a specific 
Ops Cross-Project Liaison. Please welcome Robert Starmer to the CPL family :)

== Progress towards Newton ==

110 days to go!

Bugs closed so far: 185

Newton deliverables: 
https://wiki.openstack.org/wiki/Documentation/NewtonDeliverables
Feel free to add more detail and cross things off as they are achieved 
throughout the release.

Also, just a note that the CFP for Barcelona is open now, until 13 July. If you 
want to brainstorm some documentation-related ideas, please get in touch!

== Speciality Team Reports ==

'''HA Guide: Bogdan Dobrelya'''
No report this week.

'''Install Guide: Lana Brindley'''
Orchestration is done! Well done, Petr :) Working on updating instructions in 
the Contributor Guide. Instructions: 
http://docs.openstack.org/contributor-guide/project-install-guide.html Next 
meeting: Tue 21 June 0600 UTC

'''Networking Guide: Edgar Magana'''
No report this week.

'''Security Guide: Nathaniel Dillon'''
No report this week.

'''User Guides: Joseph Robinson'''
Began a consistency and IA plan, and held the US meeting. Emailing one new 
interested contributor.

'''Ops Guide: Darren Chan'''
Some architecture content moved from the Ops Guide to the draft Arch Guide. 
Patches will be submitted to remove old content from the Ops Guide. Team is 
currently reviewing enterprise ops documentation to incorporate into the Ops 
Guide.

'''API Guide: Anne Gentle'''
Check out all the open reviews for api-ref: 
https://review.openstack.org/#/q/status:open+file:api-ref Nice.
Went to weekly team meeting for swift, landed build patch for swift's api-ref, 
responding to reviews.
Noodling with Karen Bradshaw about API navigation, 
https://review.openstack.org/#/c/329508/ (though I can't take any credit for 
the work!)

'''Config/CLI Ref: Tomoyuki Kato'''
Closed many bugs continuously. Started working on the common configurations for 
shared services and libraries.

'''Training labs: Pranav Salunke, Roger Luethi'''
No report this week.

'''Training Guides: Matjaz Pancur'''
No report this week.

'''Hypervisor Tuning Guide: Blair Bethwaite'''
No report this week.

'''UX/UI Guidelines: Michael Tullis, Stephen Ballard'''
No report this week.

== Training Guides/Labs Core Team Changes ==

We've adjusted the training guides and labs core teams so that speciality team 
leads are core in their own repos, and the docs core team is an 'included 
group' in both repos. This is intended to stop the core teams for these groups 
drifting out of date too quickly. Note that it is expected that docs cores will 
not be the primary reviewers/mergers for training repos, but are there as 
backup in case extra eyes are needed. That responsibility will continue to lie 
with the training teams themselves, as they know the codebase the best. 

You can see the updated core team lists here:
Training Guides: 
https://review.openstack.org/#/admin/groups/uuid-3490bf37012cb344104cb315f3dd5c76dabea62f,members
Training Labs: https://review.openstack.org/#/admin/groups/1118,members

== Site Stats ==

During May the top viewed book was the Mitaka Ubuntu Install Guide, followed 
closely by the Mitaka RDO Install Guide. Rolling into third place was the Admin 
Guide. 

Also, apologies for an error I made in this section last week. The correct 
number for the total number of views in May was 2,365,626.

== Doc team meeting ==

The APAC meeting was held this week; you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-06-15

Next meetings:
US: Wednesday 22 June, 19:00 UTC
APAC: Wednesday 29 June, 00:30 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#17_June_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com




Re: [openstack-dev] [daisycloud-core] IRC weekly meeting logistics

2016-06-16 Thread Jaivish Kothari
Hi Daisy Team,

It is my honor to become a part of the team. Unfortunately, I had to leave
town urgently today for a client meeting, so it might not be possible for me
to attend the meeting :( but I will surely follow up through the logs and
the Daisy IRC channel.
I apologize for any inconvenience caused.


Regards
Jaivish Kothari


On Tue, Jun 14, 2016 at 9:07 AM,  wrote:

> Hi Team,
>
> Here is the IRC weekly meeting logistics:
>
>
> Weekly on Friday at 1200 UTC. You can check your local time here:
> http://www.timeanddate.com/worldclock/fixedtime.html?hour=12&min=0&sec=0
> IRC channel: #daisycloud at freenode
>
> So our first meeting will be this Friday (Jun 17). The agenda mainly is:
>
> 1. Rollcall
> 2. Welcome Jaivish, and everyone introduces him/herself for the very first
> time over this IRC channel.
> 3. Daisy status update
> 4. Daisy for NFV update
>
>
>
> Could anyone please help me update the logistics info as well as the
> contributors info at https://wiki.openstack.org/wiki/Daisy ? I cannot
> log into https://wiki.openstack.org any more; I don't know why, but I
> think it is a problem with the wiki.openstack.org website. Every time I
> try to log in it shows me a blank page, and if I refresh, it shows the error:
> "Nonce already used or out of range"
>
>
> Our current active contributor list
>
> -----------------------------------------------------------------
> Name              IRC Nick     Email
> -----------------------------------------------------------------
> Zhijiang Hu       huzhj        hu.zhiji...@zte.com.cn
> Jaivish Kothari   janonymous   janonymous.codevult...@gmail.com
> Wei Kong                       kong.w...@zte.com.cn
> Yao Lu                         lu.yao...@zte.com.cn
> Ya Zhou                        zhou...@zte.com.cn
> Jing Sun                       sun.jin...@zte.com.cn
>


Re: [openstack-dev] [Infra] publish-to-pypi not working in Tricircle stable/mitaka branch

2016-06-16 Thread joehuang
Hello, Tony,

Yes, I just followed [2] to tag the release. Maybe the issue is the version 
naming: using "v2.0.0" is the problem, I guess; only numbers, no characters, 
are allowed.

Thank you, will try again.

Best Regards
Chaoyi Huang ( Joe Huang )


-Original Message-
From: Tony Breeds [mailto:t...@bakeyournoodle.com] 
Sent: Friday, June 17, 2016 10:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Infra] publish-to-pypi not working in Tricircle 
stable/mitaka branch

On Fri, Jun 17, 2016 at 01:15:13AM +, joehuang wrote:
> Hello,
> 
> The publish-to-pypi job is configured for Tricircle[1], and we already 
> gave "openstackci" the "Owner" role[3][2].
> 
> After pushing a new tag v2.0.1 to the stable/mitaka branch of 
> https://github.com/openstack/tricircle, the tag was successfully 
> applied to the repository, but the publish-to-pypi job did not run, 
> and no package was published.

I'm *far* from an expert, but I think that the release pipeline only triggers 
on tags that match this pattern (from [1]): ^refs/tags/[0-9]+(\.[0-9]+)*$

Also, I want to be sure that you tagged the repo in the OpenStack 
infrastructure (as opposed to GitHub) as outlined in [2].

Yours Tony.

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n128
[2] http://docs.openstack.org/infra/manual/drivers.html#tagging-a-release
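
A quick way to sanity-check a tag name against that pipeline pattern before
pushing it is a couple of lines of Python; a minimal sketch, with the pattern
copied from [1] and the tag names just examples:

    import re

    # Release pipeline trigger pattern from zuul/layout.yaml [1].
    release_tag = re.compile(r'^refs/tags/[0-9]+(\.[0-9]+)*$')

    for tag in ('v2.0.1', '2.0.1'):
        ref = 'refs/tags/' + tag
        ok = release_tag.match(ref) is not None
        print('%-18s %s' % (ref, 'matches' if ok else 'does NOT match'))

    # refs/tags/v2.0.1   does NOT match  (the leading "v" breaks it)
    # refs/tags/2.0.1    matches

So a plain numeric tag like 2.0.1 would trigger the release pipeline, while
v2.0.1 would not.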


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Wang, Shane
Anita, sorry for replying to you slowly. We are a committee from a couple of 
companies and need to discuss things, which causes the slowness; I am not the 
only decision maker. Thanks, Anita ;)

Regards.
--
Shane
-Original Message-
From: Anita Kuno [mailto:ante...@anteaya.info] 
Sent: Friday, June 17, 2016 7:18 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

On 06/16/2016 07:03 PM, Matt Riedemann wrote:
> On 6/14/2016 9:03 PM, Anita Kuno wrote:
>>
>> I'll reply in private first because I am a core reviewer on the 
>> project-config repo, which was not mentioned in your list but you 
>> might consider useful to you at the bug smash nonetheless.
>>
>> Let me know if you would like me to attend and I'll reply in public, 
>> if not no worries.
>>
>> Thank you,
>> Anita
>>
>>
> 
> Busted!
> 
Yeah, I know. I was tired and wasn't paying attention to the to: field.
Good thing I pretend like everything I say in private is public anyway.

Shane and folks are still welcome to tell me no, I didn't want them to feel 
obliged and I still don't. Even if I fail at private.

Thanks Matt :)
Anita.



Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-16 Thread John Griffith
On Wed, Jun 15, 2016 at 5:59 PM, Preston L. Bannister 
wrote:

> QEMU has the ability to directly connect to iSCSI volumes. Running the
> iSCSI connections through the nova-compute host *seems* somewhat
> inefficient.
>

I know from tests I've run in the past that virtio actually does a really
good job here.  Granted, it's been a couple of years since I've spent any time
looking at this, so I really can't say definitively without looking again.


>
> There is a spec/blueprint and implementation that landed in Kilo:
>
>
> https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
> https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator
>
> From looking at the OpenStack Nova sources ... I am not entirely clear on
> when this behavior is invoked (just for Ceph?), and how it might change in
> future.
>

I actually hadn't seen that, glad you pointed it out :)  I haven't tried
configuring it but will try to do so and see what sort of differences in
performance there are.  One other thing to keep in mind (I could be
mistaken, but...) the last time I looked at this it wasn't vastly different
from the model we use now.  It's not actually using an iSCSI initiator on
the Instance; it's still using an initiator on the compute node and passing
the device in, I believe.  I'm sure somebody will correct me if I'm wrong
here.

I don't know what your reference to Ceph has to do with this.  This
appears to be a Cinder iSCSI mechanism.  You can see how to configure it in
the commit message (https://review.openstack.org/#/c/135854/19; again, I plan
to try it out).
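
To make the distinction concrete, the two attachment models show up as
different libvirt disk XML. These fragments are hand-written illustrations
(the IQN, IP, and device names are invented), not copied from Nova:

  Host-attached (the current default); the compute node's initiator logs in
  and QEMU just opens a local block device:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2010-10.org.openstack:volume-1234-lun-1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

  QEMU built-in initiator; QEMU itself speaks iSCSI to the target and no
  host block device is created:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='iscsi' name='iqn.2010-10.org.openstack:volume-1234/1'>
        <host name='10.0.0.5' port='3260'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>

Either way the iSCSI endpoint is the QEMU process on the compute node, not
the guest, so the Instance still never touches the iSCSI network directly.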

>
> Looking for a general sense where this is headed. (If anyone knows...)
>

Seems like you should be able to configure it and run it, assuming the
work is actually done and hasn't broken while sitting.


>
> If there is some problem with QEMU and directly attached iSCSI volumes,
> that would explain why this is not the default. Or is this simple inertia?
>

Virtio is actually super flexible and lets us do all sorts of things with
various connector types.  I think you'd have to have some pretty compelling
data to change the default here.  Another thing to keep in mind, even if
we just consider iSCSI and leave out FC and other protocols: one thing we
absolutely wouldn't want is to give Instances direct access to the iSCSI
network.  This raises all sorts of security concerns for folks running
public clouds.  It also means more heavyweight Instances due to additional
networking requirements, the iSCSI stack, etc.  More importantly, the last
time I looked, hot-plugging didn't work with this option, but again I admit
it's been a long time since I've looked at it and my memory isn't always
that great.

>
>
> I have a concrete concern. I work for a company (EMC) that offers backup
> products, and we now have backup for instances in OpenStack. To make this
> efficient, we need to collect changed-block information from instances.
>

Ahh, ok, so you don't really have a "concrete concern" about using the
virtio driver, or the way things work... or even any data that one performs
better/worse than the other.  What you do have, apparently, is a solution
you'd like to integrate and sell with OpenStack.  Fair enough, but we
should probably be clear about the motivation until there's some data
(there very well may be compelling reasons to change this).

>
> 1)  We could put an intercept in the Linux kernel of the nova-compute host
> to track writes at the block layer. This has the merit of working for
> containers, and potentially bare-metal instance deployments. But is not
> guaranteed for instances, if the iSCSI volumes are directly attached to
> QEMU.
>
> 2)  We could use the QEMU support for incremental backup (first bit landed
> in QEMU 2.4). This has the merit of working with any storage, but only for
> virtual machines under QEMU.
>
> As our customers are (so far) only asking about virtual machine backup, I
> long ago settled on (2) as most promising.
>
> What I cannot clearly determine is where (1) will fail. Will all iSCSI
> volumes connected to QEMU instances eventually become directly connected?
>
>
> Xiao's unanswered query (below) presents another question. Is this a
> site-choice? Could I require my customers to configure their OpenStack
> clouds to always route iSCSI connections through the nova-compute host? (I
> am not a fan of this approach, but I have to ask.)
>

Certainly seems like you could.  The question is: would the distro in use
support it?  Also, would it work with multi-backend configs?  Honestly, it
sounds like there's a lot of data collection and analysis that you could do
here and contribute back to the community.  Perhaps you or Xiao should try
it out?

>
> To answer Xiao's question, can a site configure their cloud to *always*
> directly connect iSCSI volumes to QEMU?
>
>
>
> On Tue, Feb 16, 2016 at 4:54 AM, Xiao Ma (xima2)  wrote:
>
>> Hi, All
>>
>> I want to 

Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Steve Gordon
- Original Message -
> From: "Jeremy Stanley" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Sent: Thursday, June 16, 2016 5:04:43 PM
> Subject: Re: [openstack-dev] [all][tc] Require a level playing field  for 
> OpenStack projects
> 
> On 2016-06-16 16:04:28 -0400 (-0400), Steve Gordon wrote:
> [...]
> > This is definitely a point worth clarifying in the general case,
> > but tangentially for the specific case of the RHEL operating
> > system please note that RHEL is available to developers for free:
> > 
> > http://developers.redhat.com/products/rhel/get-started/
> > http://developers.redhat.com/articles/no-cost-rhel-faq/
> > 
> > This is a *relatively* recent advancement so I thought I would
> > mention it as folks may not be aware.
> 
> Just to clarify, this is free-as-in-beer (gratis) and not
> free-as-in-speech (libre)? If so, that's still proprietary so I'm
> curious how that changes the situation. Would OpenStack welcome a
> project built exclusively around a "free for developer use" product
> into the tent?

Well, in the context of evaluating this specific proposed change that really 
depends on the final language used. Under the wording that is currently 
proposed the answer would seem to be "yes" if developers of all organizations 
have access to that same software - whether that's the intent or not is perhaps 
a different question. In reality of course such a hypothetical project would 
likely fall afoul of the earlier criteria around dependencies anyway...

-Steve



Re: [openstack-dev] [Infra] publish-to-pypi not working in Tricircle stable/mitaka branch

2016-06-16 Thread Tony Breeds
On Fri, Jun 17, 2016 at 01:15:13AM +, joehuang wrote:
> Hello,
> 
> The publish-to-pypi job is configured for Tricircle[1], and we already gave
> "openstackci" the "Owner" role[3][2].
> 
> After pushing a new tag v2.0.1 to the stable/mitaka branch of
> https://github.com/openstack/tricircle, the tag was successfully applied
> to the repository, but the publish-to-pypi job did not run, and no package
> was published.

I'm *far* from an expert, but I think that the release pipeline only triggers
on tags that match this pattern (from [1]): ^refs/tags/[0-9]+(\.[0-9]+)*$

Also, I want to be sure that you tagged the repo in the OpenStack
infrastructure (as opposed to GitHub) as outlined in [2].

Yours Tony.

[1] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n128
[2] http://docs.openstack.org/infra/manual/drivers.html#tagging-a-release




[openstack-dev] [Infra] publish-to-pypi not working in Tricircle stable/mitaka branch

2016-06-16 Thread joehuang
Hello,

The publish-to-pypi job is configured for Tricircle[1], and we already gave 
"openstackci" the "Owner" role[3][2].

After pushing a new tag v2.0.1 to the stable/mitaka branch of 
https://github.com/openstack/tricircle, the tag was successfully applied to 
the repository, but the publish-to-pypi job did not run, and no package was 
published.

Is there some configuration still missing for the tagging to trigger 
publish-to-pypi, or does it only work on the master branch, or must the first 
package be uploaded manually?

Thanks in advance.

[1] 
https://github.com/openstack-infra/project-config/commit/291083df12f50de2a8e0634df2ef94dcc608e11f
[2] 
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
[3] https://pypi.python.org/pypi/tricircle

Best Regards
Chaoyi Huang ( Joe Huang )



[openstack-dev] Upcoming changes now that Jenkins has retired

2016-06-16 Thread James E. Blair
Now that we have retired Jenkins, we have some upcoming changes:

* Console logs are now available via TCP

  The status page now has "telnet" protocol links to running jobs.  If
  you connect to the host and port specified in that link, you will be
  sent the console log for that job up to that point in time and it
  will continue to stream over that connection in real time.  If your
  browser doesn't understand "telnet://" URLs, just grab the host and
  port and type "telnet  " or better yet, "nc 
  " into your terminal.  You can also grep through in progress
  console logs with "nc   | grep ".

* Console logs will soon be available over the WWW

  Netcatting to Grep is cool, but sometimes if you're already in a
  browser, it may be easier to click on a link and have that just open
  up in your existing browser.  Monty has been working on a websocket
  interface to the console log stream that we hope to have in place
  soon.

* Zuul will stop using the name "Jenkins"

  There is a new user in Gerrit named "Zuul".  Zuul has been
  masquerading as Jenkins for the past few years, but now that we no
  longer run any software named "Jenkins" it is the right time to
  change the name to Zuul.  If you have any programs, scripts,
  dashboards, etc, that look for either the full name "Jenkins" or
  username "jenkins" from Gerrit, you should immediately update them
  to also use the full name "Zuul" or username "zuul" in order to
  prepare for the change.
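
If you would rather script the console streaming than type telnet or nc by
hand, a minimal Python sketch of the same idea follows; the host and port
are placeholders you would copy from the status page link:

    import socket

    # Placeholders: take the real host/port from the telnet:// link
    # on the status page for the running job.
    host, port = 'zuul-worker.example.org', 19885

    sock = socket.create_connection((host, port))
    buf = b''
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break  # the stream closes when the job finishes
            buf += chunk
            while b'\n' in buf:
                line, buf = buf.split(b'\n', 1)
                print(line.decode('utf-8', 'replace'))
    finally:
        sock.close()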

-Jim



Re: [openstack-dev] [OpenStack-Infra] There is no Jenkins, only Zuul

2016-06-16 Thread Jeremy Stanley
On 2016-06-17 01:48:08 +0200 (+0200), Thomas Goirand wrote:
> What will be the fate of Jenkins Job Builder then? It is already used
> by lots of people, including some Debian folks. Will it be maintained?

Jenkins Job Builder is very actively maintained by contributors and
reviewers from many organizations beyond OpenStack, and will
continue to have a happy home hosted within our infrastructure for
as long as its caretakers would like.

Note that with the current Zuul implementation, even though the
OpenStack upstream CI is not using Jenkins any longer, we're still
using JJB as a library within Zuul because we need to be able to
parse our current massive set of job configurations. Also, Zuul 2.x
continues to support using Jenkins as a worker (it just now also
supports using its own Ansible-based launcher too).
-- 
Jeremy Stanley



Re: [openstack-dev] [OpenStack-Infra] There is no Jenkins, only Zuul

2016-06-16 Thread Jeremy Stanley
On 2016-06-16 16:15:34 -0700 (-0700), Joshua Harlow wrote:
[...]
> Is the goal/idea that there would be something like a
> '.travis.yml' (file or directory) that would contain the job
> configuration (and any special jobs or commands or tasks) in the
> project repository that zuul would then use and do things with
> (thus becoming more distributed, and self-configurable and more
> like travis CI than what previously existed)?
[...]

That's one of the primary design goals for Zuul v3. See the third
paragraph at
http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#tenants
for a bit about it (and please help us implement if you find this
interesting!). Not that it's particularly attempting to replicate
features from other job management systems, but we do want projects
to have the freedom to take more control over some of their job
definitions without needing our already overworked reviewers on
every tweak. Also this makes changes to many of those jobs
self-testing, so less job breakage and fewer iterations on the
configuration.

Note that Zuul v2(.5) is still relying solely on central job
configuration in the openstack-infra/project-config repo, the above
is purely in reference to aspects of the v3 redesign.
-- 
Jeremy Stanley



Re: [openstack-dev] There is no Jenkins, only Zuul

2016-06-16 Thread Thomas Goirand
On 06/17/2016 12:41 AM, James E. Blair wrote:
> Since its inception, the OpenStack project has used Jenkins to perform
> its testing and artifact building.  When OpenStack was two git repos,
> we had one Jenkins master, a few slaves, and we configured all of our
> jobs manually in the web interface.  It was easy for a new project
> like OpenStack to set up and maintain.  Over the years, we have grown
> significantly, with over 1,200 git repos and 8,000 jobs spread across
> 8 Jenkins masters and 800 dynamic slave nodes.  Long before we got to
> this point, we could not manage all of those jobs by hand, so we wrote
> Jenkins Job Builder[1], one of our more widely used projects, so that
> we could automatically generate those 8,000 jobs from templated YAML.
> 
> We also wrote Zuul[2].
> 
> Zuul is a system to drive project automation.  It directs our testing,
> running tens of thousands of jobs each day, responding to events from
> our code review system and stacking potential changes to be tested
> together.
> 
> We are working on a new version of Zuul (version 3) with some major
> changes: we want to make it easier to run jobs in multi-node
> environments, easier to manage large numbers of jobs and job
> variations, support in-tree job configuration, and the ability to define
> jobs using Ansible[3].
> 
> With Zuul in charge of deciding which jobs to run, and when and where
> to run them, we use very few advanced features of Jenkins at this
> point.  While we are still working on Zuul v3, we are at a point where
> we can start to use some of the work we have done already to switch to
> running our jobs entirely with Zuul.
> 
> As of today, we have turned off our last Jenkins master and all of our
> automation is being run by Zuul.  It's been a great ride, and
> OpenStack wouldn't be where it is today without Jenkins.  Now we're
> looking forward to focusing on Zuul v3 and exploring the full
> potential of project automation.
> 
> [1] http://docs.openstack.org/infra/jenkins-job-builder/
> [2] http://docs.openstack.org/infra/zuul/
> [3] https://www.ansible.com/

What will be the fate of Jenkins Job Builder then? It is already used
by lots of people, including some Debian folks. Will it be maintained?

Cheers,

Thomas Goirand (zigo)




Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Anita Kuno
On 06/16/2016 07:03 PM, Matt Riedemann wrote:
> On 6/14/2016 9:03 PM, Anita Kuno wrote:
>>
>> I'll reply in private first because I am a core reviewer on the
>> project-config repo, which was not mentioned in your list but you might
>> consider useful to you at the bug smash nonetheless.
>>
>> Let me know if you would like me to attend and I'll reply in public, if
>> not no worries.
>>
>> Thank you,
>> Anita
>>
>>
> 
> Busted!
> 
Yeah, I know. I was tired and wasn't paying attention to the to: field.
Good thing I pretend like everything I say in private is public anyway.

Shane and folks are still welcome to tell me no, I didn't want them to
feel obliged and I still don't. Even if I fail at private.

Thanks Matt :)
Anita.



Re: [openstack-dev] There is no Jenkins, only Zuul

2016-06-16 Thread Joshua Harlow

James E. Blair wrote:

Since its inception, the OpenStack project has used Jenkins to perform
its testing and artifact building.  When OpenStack was two git repos,
we had one Jenkins master, a few slaves, and we configured all of our
jobs manually in the web interface.  It was easy for a new project
like OpenStack to set up and maintain.  Over the years, we have grown
significantly, with over 1,200 git repos and 8,000 jobs spread across
8 Jenkins masters and 800 dynamic slave nodes.  Long before we got to
this point, we could not manage all of those jobs by hand, so we wrote
Jenkins Job Builder[1], one of our more widely used projects, so that
we could automatically generate those 8,000 jobs from templated YAML.

We also wrote Zuul[2].

Zuul is a system to drive project automation.  It directs our testing,
running tens of thousands of jobs each day, responding to events from
our code review system and stacking potential changes to be tested
together.

We are working on a new version of Zuul (version 3) with some major
changes: we want to make it easier to run jobs in multi-node
environments, easier to manage large numbers of jobs and job
variations, support in-tree job configuration, and the ability to define
jobs using Ansible[3].


Is the goal/idea that there would be something like a '.travis.yml' 
(file or directory) that would contain the job configuration (and any 
special jobs or commands or tasks) in the project repository that zuul 
would then use and do things with (thus becoming more distributed, and 
self-configurable and more like travis CI than what previously existed)?




With Zuul in charge of deciding which jobs to run, and when and where
to run them, we use very few advanced features of Jenkins at this
point.  While we are still working on Zuul v3, we are at a point where
we can start to use some of the work we have done already to switch to
running our jobs entirely with Zuul.

As of today, we have turned off our last Jenkins master and all of our
automation is being run by Zuul.  It's been a great ride, and
OpenStack wouldn't be where it is today without Jenkins.  Now we're
looking forward to focusing on Zuul v3 and exploring the full
potential of project automation.

[1] http://docs.openstack.org/infra/jenkins-job-builder/
[2] http://docs.openstack.org/infra/zuul/
[3] https://www.ansible.com/



Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Matt Riedemann

On 6/14/2016 9:03 PM, Anita Kuno wrote:


I'll reply in private first because I am a core reviewer on the
project-config repo, which was not mentioned in your list but you might
consider useful to you at the bug smash nonetheless.

Let me know if you would like me to attend and I'll reply in public, if
not no worries.

Thank you,
Anita




Busted!

--

Thanks,

Matt Riedemann




[openstack-dev] There is no Jenkins, only Zuul

2016-06-16 Thread James E. Blair
Since its inception, the OpenStack project has used Jenkins to perform
its testing and artifact building.  When OpenStack was two git repos,
we had one Jenkins master, a few slaves, and we configured all of our
jobs manually in the web interface.  It was easy for a new project
like OpenStack to set up and maintain.  Over the years, we have grown
significantly, with over 1,200 git repos and 8,000 jobs spread across
8 Jenkins masters and 800 dynamic slave nodes.  Long before we got to
this point, we could not manage all of those jobs by hand, so we wrote
Jenkins Job Builder[1], one of our more widely used projects, so that
we could automatically generate those 8,000 jobs from templated YAML.
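
As an illustrative aside, the templating looks roughly like this; a minimal
JJB sketch with invented project and job names, not one of the real 8,000
jobs. One job-template plus a project entry expands into a concrete job per
project:

    - job-template:
        name: 'gate-{name}-python27'
        builders:
          - shell: 'tox -e py27'

    - project:
        name: example-project
        jobs:
          - 'gate-{name}-python27'

Adding a templated job to hundreds of repositories is then a one-line change
in each project's entry.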

We also wrote Zuul[2].

Zuul is a system to drive project automation.  It directs our testing,
running tens of thousands of jobs each day, responding to events from
our code review system and stacking potential changes to be tested
together.

We are working on a new version of Zuul (version 3) with some major
changes: we want to make it easier to run jobs in multi-node
environments, easier to manage large numbers of jobs and job
variations, support in-tree job configuration, and the ability to define
jobs using Ansible[3].

With Zuul in charge of deciding which jobs to run, and when and where
to run them, we use very few advanced features of Jenkins at this
point.  While we are still working on Zuul v3, we are at a point where
we can start to use some of the work we have done already to switch to
running our jobs entirely with Zuul.

As of today, we have turned off our last Jenkins master and all of our
automation is being run by Zuul.  It's been a great ride, and
OpenStack wouldn't be where it is today without Jenkins.  Now we're
looking forward to focusing on Zuul v3 and exploring the full
potential of project automation.

[1] http://docs.openstack.org/infra/jenkins-job-builder/
[2] http://docs.openstack.org/infra/zuul/
[3] https://www.ansible.com/



[openstack-dev] [ironic] IPA DIB Ramdisk needs CI, Documentation

2016-06-16 Thread Jay Faulkner

Hey all,

I recently tried to do some testing with the DIB ramdisk in devstack, 
and found there were several bugs (I filed three yesterday) in the build 
process, and additionally, no documentation or guidance on how to build 
or test the images devstack in IPA developer docs (although inspector 
docs have some information). Also, there is no DIB image published by 
IPA (we publish CoreOS and TinyIPA images already today), and no CI for 
DIB images.


This is particularly concerning given some of our third-party CI systems 
are using DIB images that we don't even test for basic functionality 
upstream. This could lead to failures inside their CI jobs that aren't 
related whatsoever to the hardware drivers they are designed to test.


I filed https://bugs.launchpad.net/ironic-python-agent/+bug/1590935 
about getting working CI for DIB, and by extension, official support. 
I'd like to generally request more attention from those who use the DIB 
driver in getting this working reliably in devstack, documented, and 
tested. I'm willing to assist with troubleshooting and will review any 
patches related to this effort if you add me as a reviewer.


Thanks,
Jay Faulkner
OSIC





[openstack-dev] [Neutron][IPAM] Anyone using builtin pluggable IPAM driver?

2016-06-16 Thread Carl Baldwin
Hi,

Cross posting to the operators and devs.

In Liberty, pluggable IPAM was added to Neutron.  With it, a built-in
pluggable driver, equivalent to the old non-pluggable IPAM, was added
as a reference implementation. In a greenfield deployment, you could
choose to use this driver by setting the following:

  ipam_driver = 'internal'

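For concreteness, that is a one-line change in the [DEFAULT] section; a
minimal sketch (note that in the actual config file the value is written
without quotes):

  # /etc/neutron/neutron.conf
  [DEFAULT]
  ipam_driver = internal
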
A brownfield deployment would've required a manual migration of
existing data to new DB tables.  No migration script was officially
provided.

After considerable testing with this driver, we're planning to
obsolete the old built-in IPAM implementation in favor of the
pluggable version.  The strategy for this migration could depend on whether
anyone is using this internal driver.

So, I'd like to know if anyone is using it.  Specifically, are there
any deployments with "ipam_driver = 'internal'" set in the
neutron.conf?  In this context, I'm not interested in anyone using any
externally provided pluggable IPAM driver.  I'm only interested in the
builtin 'internal' driver.

Mostly, I'm asking this because this was documented in the advanced
config section under "IPAM configuration" [1] even though the driver
was supposed to be somewhat experimental.  So, I have to assume people
have read that and could possibly have followed it.

Carl

[1] http://docs.openstack.org/mitaka/networking-guide/adv-config-ipam.html



Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Jeremy Stanley
On 2016-06-16 16:04:28 -0400 (-0400), Steve Gordon wrote:
[...]
> This is definitely a point worth clarifying in the general case,
> but tangentially for the specific case of the RHEL operating
> system please note that RHEL is available to developers for free:
> 
> http://developers.redhat.com/products/rhel/get-started/
> http://developers.redhat.com/articles/no-cost-rhel-faq/
> 
> This is a *relatively* recent advancement so I thought I would
> mention it as folks may not be aware.

Just to clarify, this is free-as-in-beer (gratis) and not
free-as-in-speech (libre)? If so, that's still proprietary so I'm
curious how that changes the situation. Would OpenStack welcome a
project built exclusively around a "free for developer use" product
into the tent?
-- 
Jeremy Stanley



Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Mike Perez
On 09:35 Jun 14, Ed Leafe wrote:
> On Jun 14, 2016, at 8:57 AM, Thierry Carrez  wrote:
> 
> > A few months ago we had the discussion about what "no open core" means in
> > 2016, in the context of the Poppy team candidacy. With our reading at the
> > time we ended up rejecting Poppy partly because it was interfacing with
> > proprietary technologies. However, I think what we originally wanted to
> > ensure with this rule was that no specific organization would use the
> > OpenStack open source code as crippled bait to sell their specific
> > proprietary add-on.
> 
> I saw the problem with Poppy was that since it depended on a proprietary
> product, there was no way to run any meaningful testing with it, since you
> can’t simply download that product into your testing environment. Had there
> been an equivalent free software implementation, I think many would have not
> had as strong an objection in including Poppy.

Yup, I said this loudly and repeated it in the discussion many times. There was no
open source reference implementation to base the API off of, just a proprietary
solution. I feel starting that direction with any new project in some open
source space where we want multiple solutions to plug in is just a disaster
waiting to happen.

-- 
Mike Perez



Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Mark Voelker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512



On Jun 16, 2016, at 2:25 PM, Matthew Treinish  wrote:

On Thu, Jun 16, 2016 at 02:15:47PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-16 13:56:31 -0400:
On Thu, Jun 16, 2016 at 12:59:41PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-15 19:27:13 -0400:
On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
Top posting one note and direct comments inline, I’m proposing
this as a member of the DefCore working group, but this
proposal itself has not been accepted as the forward course of
action by the working group. These are my own views as the
administrator of the program and not that of the working group
itself, which may independently reject the idea outside of the
response from the upstream devs.

I posted a link to this thread to the DefCore mailing list to make
that working group aware of the outstanding issues.

On Jun 14, 2016, at 3:50 PM, Matthew Treinish  wrote:

On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
Last year, in response to Nova micro-versioning and extension updates[1],
the QA team added strict API schema checking to Tempest to ensure that
no additional properties were added to Nova API responses[2][3]. In the
last year, at least three vendors participating in the OpenStack Powered
Trademark program have been impacted by this change, two of which
reported this to the DefCore Working Group mailing list earlier this year[4].

The DefCore Working Group determines guidelines for the OpenStack Powered
program, which includes capabilities with associated functional tests
from Tempest that must be passed, and designated sections with associated
upstream code [5][6]. In determining these guidelines, the working group
attempts to balance the future direction of development with lagging
indicators of deployments and user adoption.

After a tremendous amount of consideration, I believe that the DefCore
Working Group needs to implement a temporary waiver for the strict API
checking requirements that were introduced last year, to give downstream
deployers more time to catch up with the strict micro-versioning
requirements determined by the Nova/Compute team and enforced by the
Tempest/QA team.

I'm very much opposed to this being done. If we're actually concerned with
interoperability and with verifying that things behave in the same manner
between multiple clouds, then doing this would be a big step backwards. The
fundamental disconnect
here is that the vendors who have implemented out of band extensions or were
taking advantage of previously available places to inject extra attributes
believe that doing so means they're interoperable, which is quite far from
reality. **The API is not a place for vendor differentiation.**
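
To make "strict response checking" concrete: the Tempest response schemas
set additionalProperties to False, so any out-of-band attribute fails
validation. A minimal illustrative sketch using the jsonschema library; the
schema fragment and the vendor attribute are invented, not Tempest's actual
server schema:

    import jsonschema

    schema = {
        'type': 'object',
        'properties': {
            'id': {'type': 'string'},
            'status': {'type': 'string'},
        },
        'required': ['id', 'status'],
        'additionalProperties': False,  # this is the "strict" part
    }

    response = {'id': 'abc123', 'status': 'ACTIVE',
                'vendor:turbo': True}  # out-of-band extension attribute

    try:
        jsonschema.validate(response, schema)
    except jsonschema.ValidationError as exc:
        # Additional properties are not allowed
        # ('vendor:turbo' was unexpected)
        print(exc.message)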

This is a temporary measure to address the fact that a large number
of existing tests changed their behavior, rather than having new
tests added to enforce this new requirement. The result is deployments
that previously passed these tests may no longer pass, and in fact
we have several cases where that's true with deployers who are
trying to maintain their own standard of backwards-compatibility
for their end users.

That's not what happened though. The API hasn't changed and the tests haven't
really changed either. We made our enforcement on Nova's APIs a bit stricter to
ensure nothing unexpected appeared. For the most part these tests work on any version
of OpenStack. (we only test it in the gate on supported stable releases, but I
don't expect things to have drastically shifted on older releases) It also
doesn't matter which version of the API you run, v2.0 or v2.1. Literally, the
only case it ever fails is when you run something extra, not from the community,
either as an extension (which themselves are going away [1]) or another service
that wraps nova or imitates nova. I'm personally not comfortable saying those
extras are ever part of the OpenStack APIs.

We have basically three options.

1. Tell deployers who are trying to do the right for their immediate
 users that they can't use the trademark.

2. Flag the related tests or remove them from the DefCore enforcement
 suite entirely.

3. Be flexible about giving consumers of Tempest time to meet the
 new requirement by providing a way to disable the checks.

Option 1 goes against our own backwards compatibility policies.

I don't think backwards compatibility policies really apply to what we define
as the set of tests that as a community we are saying a vendor has to pass to
say they're OpenStack. From my perspective as a 

Re: [openstack-dev] [nova] consistency and exposing quiesce in the Nova API

2016-06-16 Thread Preston L. Bannister
Comments inline.


On Thu, Jun 16, 2016 at 10:13 AM, Matt Riedemann  wrote:

> On 6/16/2016 6:12 AM, Preston L. Bannister wrote:
>
>> I am hoping support for instance quiesce in the Nova API makes it into
>> OpenStack. To my understanding, this is existing function in Nova, just
>> not-yet exposed in the public API. (I believe Cinder uses this via a
>> private Nova API.)
>>
>
> I'm assuming you're thinking of the os-assisted-volume-snapshots admin API
> in Nova that is called from the Cinder RemoteFSSnapDrivers (glusterfs,
> scality, virtuozzo and quobyte). I started a separate thread about that
> yesterday, mainly around the lack of CI testing / status so we even have an
> idea if this is working consistently and we don't regress it.


Yes, I believe we are talking about the same thing. Also, I saw your other
message. :)



Much of the discussion is around disaster recovery (DR) and NFV - which
>> is not wrong, but might be muddling the discussion? Forget DR and NFV,
>> for the moment.
>>
>> My interest is simply in collecting high quality backups of applications
>> (instances) running in OpenStack. (Yes, customers are deploying
>> applications into OpenStack that need backup - and at large scale. They
>> told us, *very* clearly.) Ideally, I would like to give the application
>> a chance to properly quiesce, so the on-disk state is most-consistent,
>> before collecting the backup.
>>
>
> We already attempt to quiesce an active volume-backed instance before
> doing a volume snapshot:
>
>
> https://github.com/openstack/nova/blob/11bd0052bdd660b63ecca53c5b6fe68f81bdf9c3/nova/compute/api.py#L2266
>

The problem is, from my point of view, that if the instance has more than one
volume (and many do), then quiescing the instance more than once is not
very nice.


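For reference, the quiesce that Nova does under the covers is essentially a
guest-agent filesystem freeze around the snapshot. A minimal sketch of that
pattern with the libvirt Python bindings (the domain name is invented, and
this assumes the guest runs the qemu-guest-agent):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # example domain name

    dom.fsFreeze()   # quiesce: flush and freeze guest filesystems
    try:
        pass  # take the snapshot(s) here, ideally once for all volumes
    finally:
        dom.fsThaw() # unquiesce, even if the snapshot fails

which is exactly why freezing per volume hurts: the freeze/thaw pair should
wrap all of the instance's volume snapshots, not each one separately.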


> The existing function in Nova should be at least a good start, it just
>> needs to be exposed in the public Nova API. (At least, this is my
>> understanding.)
>>
>> Of course, good backups (however collected) allow you to build DR
>> solutions. My immediate interest is simply to collect high-quality
>> backups.
>>
>> The part in the blueprint about an atomic operation on a list of
>> instances ... this might be over-doing things. First, if you have a set
>> of related instances, very likely there is a logical order in which they
>> should be quiesced. Some could be quiesced concurrently. Others might
>> need to be sequential.
>>
>> Assuming the quiesce API *starts* the operation, and there is some means
>> to check for completion, then a single-instance quiesce API should be
>> sufficient. An API that is synchronous (waits for completion before
>> returning) would also be usable. (I am not picky - just want to collect
>> better backups for customers.)
>>
>
> As noted above, we already attempt to quiesce when doing a volume-backed
> instance snapshot.
>
> The problem comes in with the chaining and orchestration around a list of
> instances. That requires additional state management and overhead within
> Nova and while we're actively trying to redo parts of the code base to make
> things less terrible, adding more complexity on top at the same time
> doesn't help.
>

I agree with your concern. To be clear, what I am hoping for is the
simplest possible version: an API to quiesce/unquiesce a single instance,
similar to the existing pause/unpause APIs.

Handling of lists of instances (and responses to state changes) I would
expect to be implemented on the caller side. There are application-specific
semantics, so a single-instance API has merit from my perspective.




> I'm also not sure what something like multiattach volumes will throw into
> the mix with this, but that's another DR/HA requirement.
>
> So I get that lots of people want lots of things that aren't in Nova right
> now. We have that coming from several different projects (cinder for
> multiattach volumes, neutron for vlan-aware-vms and routed networks), and
> several different groups (NFV, ops).
>
> We also have a lot of people that just want the basic IaaS layer to work
> for the compute service in an OpenStack cloud, like being able to scale
> that out better and track resource usage for accurate scheduling.
>
> And we have a lot of developers that want to be able to actually
> understand what it is the code is doing, and a much smaller number of core
> maintainers / reviewers that don't want to have to keep piling technical
> debt into the project while we're trying to fix some of what's already
> built up over the years - and actually have this stuff backed with
> integration testing.
>
> So, I get it. We all have requirements and we all have resource
> limitations, which is why we as a team prioritize our work items for the
> release. This one didn't make it for Newton.
>

Ah. I did not quite get that from what I read online. Unfortunate. Also
sounds like the Nova-folk are overloaded, and we need to come up with
resources to contribute to Nova, if we want this to 

[openstack-dev] Recommended ways to find compute node's bandwidth (physical NIC)

2016-06-16 Thread KHAN, RAO ADNAN
I am writing a Nova scheduler filter that will check the compute node's (max, 
avg) physical NIC bandwidth before instantiating an instance. What are some of 
the recommended tools that can provide this info in real time? Does any 
OpenStack component hold this info already?

Thanks,
Adnan
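
To help frame answers, here is a minimal sketch of what such a scheduler
filter could look like. It assumes something external (e.g. a monitoring
agent) publishes a bandwidth figure into the host's stats; the stat key and
threshold are invented, and the host_passes signature is as in recent Nova
releases:

    from nova.scheduler import filters

    MAX_AVG_BW_UTIL = 0.8  # invented threshold; would come from config

    class BandwidthFilter(filters.BaseHostFilter):
        """Reject hosts whose physical NIC bandwidth is too heavily used."""

        def host_passes(self, host_state, spec_obj):
            # Assumption: an external monitor publishes this stat;
            # Nova does not provide it out of the box.
            stats = getattr(host_state, 'stats', None) or {}
            avg_util = stats.get('bandwidth.avg_util')
            if avg_util is None:
                return True  # no data: don't block scheduling
            return float(avg_util) < MAX_AVG_BW_UTIL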



Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Steve Gordon
- Original Message -
> From: "Amrith Kumar" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> Thierry,
> 
> Thanks for writing this up and for the interesting discussion that has come
> up in this ML thread.
> 
> While I think I get the general idea of the motivation, I think the verbiage
> doesn't quite do justice to your intent.
> 
> One area which I would like to highlight is the situation with the underlying
> operating system on which the software is to run. What if that is
> proprietary software? Consider support for (for example) running on Red Hat
> or the Windows operating systems. That would not be something that could be
> easily abstracted into a 'driver'.

This is definitely a point worth clarifying in the general case, but 
tangentially for the specific case of the RHEL operating system please note 
that RHEL is available to developers for free:

http://developers.redhat.com/products/rhel/get-started/
http://developers.redhat.com/articles/no-cost-rhel-faq/

This is a *relatively* recent advancement so I thought I would mention it as 
folks may not be aware.

Thanks,

Steve

> Another is the case of proprietary software; consider support in Trove for
> (for example) using the DB2 Express or the Vertica database. Clearly these
> are things where some have an advantage when compared to others.
> 
> I therefore suggest the following amendment in
> https://review.openstack.org/#/c/329448/.
> 
> * The project provides a level playing field for interested developers to
> collaborate. Where proprietary software, hardware, or other resources
> (including testing) are required, these should be reasonably accessible to
> interested contributors.
> 
> Thanks,
> 
> -amrith
> 
> > -Original Message-
> > From: Thierry Carrez [mailto:thie...@openstack.org]
> > Sent: Tuesday, June 14, 2016 9:57 AM
> > To: OpenStack Development Mailing List 
> > Subject: [openstack-dev] [all][tc] Require a level playing field for
> > OpenStack projects
> > 
> > Hi everyone,
> > 
> > I just proposed a new requirement for OpenStack "official" projects,
> > which I think is worth discussing beyond the governance review:
> > 
> > https://review.openstack.org/#/c/329448/
> > 
> >  From an upstream perspective, I see us as being in the business of
> > providing open collaboration playing fields in order to build projects
> > to reach the OpenStack Mission. We collectively provide resources
> > (infra, horizontal teams, events...) in order to enable that open
> > collaboration.
> > 
> > An important characteristic of these open collaboration grounds is that
> > they need to be a level playing field, where no specific organization is
> > being given an unfair advantage. I expect the teams that we bless as
> > "official" project teams to operate in that fair manner. Otherwise we
> > end up blessing what is essentially a trojan horse for a given
> > organization, open-washing their project in the process. Such a project
> > can totally exist as an unofficial project (and even be developed on
> > OpenStack infrastructure) but I don't think it should be given free
> > space in our Design Summits or benefit from "OpenStack community"
> > branding.
> > 
> > So if, in a given project team, developers from one specific
> > organization benefit from access to specific knowledge or hardware
> > (think 3rd-party testing blackboxes that decide if a patch goes in, or
> > access to proprietary hardware or software that the open source code
> > primarily interfaces with), then this project team should probably be
> > rejected under the "open community" rule. Projects with a lot of drivers
> > (like Cinder) provide an interesting grey area, but as long as all
> > drivers are in and there is a fully functional (and popular) open source
> > implementation, I think no specific organization would be considered as
> > unfairly benefiting compared to others.
> > 
> > A few months ago we had the discussion about what "no open core" means
> > in 2016, in the context of the Poppy team candidacy. With our reading at
> > the time we ended up rejecting Poppy partly because it was interfacing
> > with proprietary technologies. However, I think what we originally
> > wanted to ensure with this rule was that no specific organization would
> > use the OpenStack open source code as crippled bait to sell their
> > specific proprietary add-on.
> > 
> > I think taking the view that OpenStack projects need to be open, level
> > collaboration playing fields encapsulates that nicely. In the Poppy
> > case, nobody in the Poppy team has an unfair advantage over others, so
> > we should not reject them purely on the grounds that this interfaces
> > with non-open-source solutions (leaving only the infrastructure/testing
> > requirement to solve). On the other hand, a Neutron plugin targeting a
> > specific piece of networking hardware 

Re: [openstack-dev] [Openstack-operators] [glance] Proposal for a mid-cycle virtual sync on operator issues

2016-06-16 Thread Brian Rosmaita
Fei Long and I have followed up on our action item from the glance/ops
mid-cycle sync, namely, to start a discussion among the operators and
product working group so that the Glance team can get a better
understanding of what "Better image lifecycle support" means.  Please
leave comments on this user story patch:

https://review.openstack.org/#/c/327980/

cheers,
brian

On 6/8/16, 6:44 PM, "Nikhil Komawar"  wrote:

>
>Please note, due to the last-minute additions to the RSVP list, we have
>changed the tool to be used. Updated info can now be found here:
>https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync .
>
>Please try to join 5-10 minutes before the meeting, as you may have to
>install a plugin, and to give us time to fix any issues you may face.
>
>
>On 6/7/16 11:51 PM, Nikhil Komawar wrote:
>> Hi all,
>>
>>
>> Thanks a ton for the feedback on the time and thanks to Kris for adding
>> items to the agenda [1].
>>
>>
>> Just wanted to announce a few things here:
>>
>>
>> The final decision on the time has been made after a lot of discussions.
>>
>> This event will be on *Thursday June 9th at 1130 UTC*
>>
>> Here's [2] how it looks at/near your timezone.
>>
>>
>> It somewhat manages to accommodate people from different (and extremely
>> diverse) timezones, but if this full *2 hour* sync is too early or too
>> late for you, please add your topics of interest and your name against
>> them so that we can schedule your items either earlier or later during
>> the event. The schedule will remain tentative unless enough information
>> is provided in time to set it in advance.
>>
>>
>> I have kept the agenda open on the developers' side so that we can
>> collaborate better on the operators' pain points. You are very
>> welcome to add items to the etherpad [1].
>>
>>
>> The event has been added to the Virtual Sprints wiki [3] and the
>> details have been added to the etherpad [1] as well. Please feel free to
>> reach out to me with any questions.
>>
>>
>> Thanks for the RSVP and see you soon virtually.
>>
>>
>> [1] https://etherpad.openstack.org/p/newton-glance-and-ops-midcycle-sync
>> [2]
>> 
>>http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016
>>=6=9=11=30=0=881=196=47=22=157=87=2
>>4=78=283=1800
>> [3]
>> 
>>https://wiki.openstack.org/wiki/VirtualSprints#Glance_and_Operators_mid-c
>>ycle_sync_for_Newton
>>
>>
>> Cheers
>>
>>
>> On 5/31/16 5:13 PM, Nikhil Komawar wrote:
>>> Hey,
>>>
>>>
>>> Thanks for your interest.
>>>
>>> Sorry about the confusion. Please consider the same time for Thursday
>>> June 9th.
>>>
>>>
>>> Thur June 9th proposed time:
>>> 
>>>http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016
>>>h=6=9=11=0=0=881=196=47=22=157=87=
>>>24=78=283
>>>
>>>
>>> Alternate time proposal:
>>> 
>>>http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016
>>>h=6=9=23=0=0=881=196=47=22=157=87=
>>>24=78=283
>>>
>>>
>>> Overall time planner:
>>> 
>>>http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160609=8
>>>81=196=47=22=157=87=24=78=283
>>>
>>>
>>>
>>> It will really depend on who is strongly interested in the discussions.
>>> Scheduling with EMEA, Pacific time (US), Australian (esp. Eastern) is
>>> quite difficult. If there's strong interest from San Jose, we may have
>>> to settle for a rather awkward choice below:
>>>
>>>
>>> 
>>>http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016
>>>h=6=9=4=0=0=881=196=47=22=157=87=2
>>>4=78=283
>>>
>>>
>>>
>>> A vote of +1, 0, or -1 on these times would go a long way.
>>>
>>>
>>> On 5/31/16 4:35 PM, Belmiro Moreira wrote:
 Hi Nikhil,
 I'm interested in this discussion.

 Initially you were proposing Thursday June 9th, 2016 at 2000 UTC.
 Are you suggesting changing the date as well? Because the new
 timeanddate suggestions are for June 6/7.

 Belmiro

 On Tue, May 31, 2016 at 6:13 PM, Nikhil Komawar wrote:

 Hey,

 Thanks for the feedback. 0800UTC is 4am EDT for some of the US
 Glancers :-)

 I request this time, which may help the folks in Eastern and Central US
 time.

 http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016
th=6=7=11=0=0=881=196=47=22=157=87
7=24=78

 If it still does not work, I may have to poll the folks in EMEA on how
 strong their intentions are for joining this call, because another time
 slot that works for folks in Australia & US might be too inconvenient
 for those in EMEA:

 http://www.timeanddate.com/worldclock/meetingdetails.html?year=2016
th=6=6=23=0=0=881=196=47=22=157=87
7=24=78

 Here's the map of cities that may be involved:

 

Re: [openstack-dev] [nova] Virtuozzo (Compute) CI is incorrectly patching for resize support

2016-06-16 Thread Jeremy Stanley
On 2016-06-16 11:57:11 +0300 (+0300), Evgeny Antyshev wrote:
> Jeremy, thank you for pointing this out! It saved me from such a headache!
> BTW, is there any plan to work around this in puppet-jenkins?

Probably not in puppet-jenkins since there are alternative
workarounds such as declaring the necessary parameters in your jobs
(and also because turning off a default security measure without the
admin's knowledge is bad form).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest pre-provisioned credentials in the gate

2016-06-16 Thread Ken'ichi Ohmichi
2016-06-14 17:00 GMT-07:00 Andrea Frittoli:
> Dear all,
>
> TL;DR: I'd like to propose to start running some of the existing dsvm
> check/gate jobs using Tempest pre-provisioned credentials.
>
> Full Text:
> Tempest provides tests with two mechanisms to acquire test credentials [0]:
> dynamic credentials and pre-provisioned ones.
>
> The current check and gate jobs only use the dynamic credentials provider.
>
> The pre-provisioned credentials provider was introduced to support
> running tests in parallel without needing access to admin
> credentials in the tempest configuration file - a valid use case
> especially when testing public clouds or, in general, a deployment that is
> not owned by whoever runs the tests.
>
> As a small extra, since pre-provisioned credentials are re-used to run many
> tests during a CI test run, they give an opportunity to discover issues
> related to cleanup of test resources.

This is a significant benefit for Tempest development, so +1 for enabling
these credentials in the gate.
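
To make the reuse point concrete, here is a minimal conceptual sketch of a
pre-provisioned credential pool (illustrative only, not tempest's actual
implementation; the class and method names are hypothetical):

  import threading

  # A fixed pool of pre-created accounts handed out under a lock, so
  # parallel test workers never share one. Handing the same accounts to
  # many tests over a run is what surfaces resource-cleanup bugs.
  class AccountPool(object):
      def __init__(self, accounts):
          self._free = list(accounts)  # e.g. dicts of username/password
          self._lock = threading.Lock()

      def acquire(self):
          with self._lock:
              return self._free.pop()  # IndexError if pool is exhausted

      def release(self, account):
          # a real provider would check for leftover resources here
          with self._lock:
              self._free.append(account)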

> Pre-provisioned credentials are currently used in periodic jobs [1][2] - as
> well as in an experimental job defined for tempest changes. This means that
> even if we are careful, there is a good chance of them being inadvertently
> broken by a change.
>
> Until recently the periodic job suffered a racy failure on object-storage
> tests. A recent refactor [3] of the tool that pre-provisions the
> accounts has fixed the issue: the past 8 runs of the periodic jobs have not
> encountered that race anymore [4][5].
>
> Specifically, I'd like to propose starting with the neutron
> jobs [6].

I am not sure why we don't add another simple (non-neutron) job
which just enables this.
But maybe it is better to discuss on the review which job is best
for enabling pre-provisioned credentials.

Thanks
Ken Ohmichi

---

> [0]
> http://docs.openstack.org/developer/tempest/configuration.html#credential-provider-mechanisms
> [1]
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n220
> [2]
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n253
> [3] https://review.openstack.org/#/c/317105/
> [4]
> http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-full-test-accounts-master
> [5]
> http://status.openstack.org/openstack-health/#/job/periodic-tempest-dsvm-neutron-full-test-accounts-master
> [6] https://review.openstack.org/329723
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Kanagaraj Manickam to Tacker core team

2016-06-16 Thread Karthik Natarajan
+1. Thanks Kanagaraj for making such a great impact during the Newton cycle.

From: Sripriya Seetharam [mailto:ssee...@brocade.com]
Sent: Thursday, June 16, 2016 10:35 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [tacker] Proposing Kanagaraj Manickam to Tacker 
core team

+1


-Sripriya


From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Wednesday, June 15, 2016 6:32 PM
To: OpenStack Development Mailing List (not for usage questions) 
>
Subject: [openstack-dev] [tacker] Proposing Kanagaraj Manickam to Tacker core 
team

Tackers,

It gives me great pleasure to propose Kanagaraj Manickam to join the Tacker
core team. In a short time, Kanagaraj has grown into a key member of the Tacker
team. His enthusiasm and dedication to getting the Tacker code base on par with
other leading OpenStack projects is very much appreciated. He is already
working on two important specs in the Newton cycle and many more fixes and
RFEs [1]. Kanagaraj is also a core member of the Heat project, which serves us
well, as we use Heat heavily for many Tacker features.

Please provide your +1/-1 votes.

- Sridhar

[1] 
http://stackalytics.com/?module=tacker-group_id=kanagaraj-manickam=marks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Infra] Newton Code Sprint

2016-06-16 Thread Ken'ichi Ohmichi
Hi Everyone,

As we've done for the past 3 cycles, we'll be having another QA/Infra code
sprint this cycle.
Previous code sprints were amazing: we could concentrate on development
while working together directly.
This time, we have an opportunity to hold a joint code sprint with the
QA and Infra teams.

Many people said the first code sprint, which was held in Germany, was
great and wanted to do it again.
So the next sprint will take place in Germany on 19th - 21st September.
SAP has offered to sponsor the event, and it will be held at
SAP HQ, Walldorf, Germany.
Thank you so much to SAP for hosting this event, and to Marc Koderer,
who is arranging it.

More details can be found on the following wiki page, and if you're planning
on attending, please sign up there:

https://wiki.openstack.org/wiki/Sprints/QAInfraNewtonSprint

Thanks
Ken Omichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Kanagaraj Manickam to Tacker core team

2016-06-16 Thread Stephen Wong
+1. Great addition to the team indeed!

On Wed, Jun 15, 2016 at 6:31 PM, Sridhar Ramaswamy 
wrote:

> Tackers,
>
> It gives me great pleasure to propose Kanagaraj Manickam to join the
> Tacker core team. In a short time, Kanagaraj has grown into a key member of
> the Tacker team. His enthusiasm and dedication to getting the Tacker code
> base on par with other leading OpenStack projects is very much appreciated.
> He is already working on two important specs in the Newton cycle and many
> more fixes and RFEs [1]. Kanagaraj is also a core member of the Heat
> project, which serves us well, as we use Heat heavily for many Tacker
> features.
>
> Please provide your +1/-1 votes.
>
> - Sridhar
>
> [1]
> http://stackalytics.com/?module=tacker-group_id=kanagaraj-manickam=marks
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Matthew Treinish
On Thu, Jun 16, 2016 at 02:15:47PM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2016-06-16 13:56:31 -0400:
> > On Thu, Jun 16, 2016 at 12:59:41PM -0400, Doug Hellmann wrote:
> > > Excerpts from Matthew Treinish's message of 2016-06-15 19:27:13 -0400:
> > > > On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
> > > > > Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
> > > > > > Top posting one note and direct comments inline, I’m proposing
> > > > > > this as a member of the DefCore working group, but this
> > > > > > proposal itself has not been accepted as the forward course of
> > > > > > action by the working group. These are my own views as the
> > > > > > administrator of the program and not that of the working group
> > > > > > itself, which may independently reject the idea outside of the
> > > > > > response from the upstream devs.
> > > > > > 
> > > > > > I posted a link to this thread to the DefCore mailing list to make
> > > > > > that working group aware of the outstanding issues.
> > > > > > 
> > > > > > > On Jun 14, 2016, at 3:50 PM, Matthew Treinish 
> > > > > > >  wrote:
> > > > > > > 
> > > > > > > On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> > > > > > >> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 
> > > > > > >> -0400:
> > > > > > >>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > > > > >  Excerpts from Matthew Treinish's message of 2016-06-14 
> > > > > >  14:21:27 -0400:
> > > > > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > > > >> Last year, in response to Nova micro-versioning and 
> > > > > > >> extension updates[1],
> > > > > > >> the QA team added strict API schema checking to Tempest to 
> > > > > > >> ensure that
> > > > > > >> no additional properties were added to Nova API 
> > > > > > >> responses[2][3]. In the
> > > > > > >> last year, at least three vendors participating in the 
> > > > > > >> OpenStack Powered
> > > > > > >> Trademark program have been impacted by this change, two of 
> > > > > > >> which
> > > > > > >> reported this to the DefCore Working Group mailing list 
> > > > > > >> earlier this year[4].
> > > > > > >> 
> > > > > > >> The DefCore Working Group determines guidelines for the 
> > > > > > >> OpenStack Powered
> > > > > > >> program, which includes capabilities with associated 
> > > > > > >> functional tests
> > > > > > >> from Tempest that must be passed, and designated sections 
> > > > > > >> with associated
> > > > > > >> upstream code [5][6]. In determining these guidelines, the 
> > > > > > >> working group
> > > > > > >> attempts to balance the future direction of development with 
> > > > > > >> lagging
> > > > > > >> indicators of deployments and user adoption.
> > > > > > >> 
> > > > > > >> After a tremendous amount of consideration, I believe that 
> > > > > > >> the DefCore
> > > > > > >> Working Group needs to implement a temporary waiver for the 
> > > > > > >> strict API
> > > > > > >> checking requirements that were introduced last year, to 
> > > > > > >> give downstream
> > > > > > >> deployers more time to catch up with the strict 
> > > > > > >> micro-versioning
> > > > > > >> requirements determined by the Nova/Compute team and 
> > > > > > >> enforced by the
> > > > > > >> Tempest/QA team.
> > > > > > > 
> > > > > > > I'm very much opposed to this being done. If we're actually 
> > > > > > > concerned with
> > > > > > > interoperability and verify that things behave in the same 
> > > > > > > manner between multiple
> > > > > > > clouds then doing this would be a big step backwards. The 
> > > > > > > fundamental disconnect
> > > > > > > here is that the vendors who have implemented out of band 
> > > > > > > extensions or were
> > > > > > > taking advantage of previously available places to inject 
> > > > > > > extra attributes
> > > > > > > believe that doing so means they're interoperable, which is 
> > > > > > > quite far from
> > > > > > > reality. **The API is not a place for vendor 
> > > > > > > differentiation.**
> > > > > >  
> > > > > >  This is a temporary measure to address the fact that a large 
> > > > > >  number
> > > > > >  of existing tests changed their behavior, rather than having 
> > > > > >  new
> > > > > >  tests added to enforce this new requirement. The result is 
> > > > > >  deployments
> > > > > >  that previously passed these tests may no longer pass, and in 
> > > > > >  fact
> > > > > >  we have several cases where that's true with deployers who are
> > > > > >  trying to maintain their own standard of 
> > > > > >  backwards-compatibility
> > > > > >  for their end 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2016-06-16 13:56:31 -0400:
> On Thu, Jun 16, 2016 at 12:59:41PM -0400, Doug Hellmann wrote:
> > Excerpts from Matthew Treinish's message of 2016-06-15 19:27:13 -0400:
> > > On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
> > > > Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
> > > > > Top posting one note and direct comments inline, I’m proposing
> > > > > this as a member of the DefCore working group, but this
> > > > > proposal itself has not been accepted as the forward course of
> > > > > action by the working group. These are my own views as the
> > > > > administrator of the program and not that of the working group
> > > > > itself, which may independently reject the idea outside of the
> > > > > response from the upstream devs.
> > > > > 
> > > > > I posted a link to this thread to the DefCore mailing list to make
> > > > > that working group aware of the outstanding issues.
> > > > > 
> > > > > > On Jun 14, 2016, at 3:50 PM, Matthew Treinish 
> > > > > >  wrote:
> > > > > > 
> > > > > > On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> > > > > >> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 
> > > > > >> -0400:
> > > > > >>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > > > >  Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 
> > > > >  -0400:
> > > > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > > >> Last year, in response to Nova micro-versioning and extension 
> > > > > >> updates[1],
> > > > > >> the QA team added strict API schema checking to Tempest to 
> > > > > >> ensure that
> > > > > >> no additional properties were added to Nova API 
> > > > > >> responses[2][3]. In the
> > > > > >> last year, at least three vendors participating in the 
> > > > > >> OpenStack Powered
> > > > > >> Trademark program have been impacted by this change, two of 
> > > > > >> which
> > > > > >> reported this to the DefCore Working Group mailing list 
> > > > > >> earlier this year[4].
> > > > > >> 
> > > > > >> The DefCore Working Group determines guidelines for the 
> > > > > >> OpenStack Powered
> > > > > >> program, which includes capabilities with associated 
> > > > > >> functional tests
> > > > > >> from Tempest that must be passed, and designated sections with 
> > > > > >> associated
> > > > > >> upstream code [5][6]. In determining these guidelines, the 
> > > > > >> working group
> > > > > >> attempts to balance the future direction of development with 
> > > > > >> lagging
> > > > > >> indicators of deployments and user adoption.
> > > > > >> 
> > > > > >> After a tremendous amount of consideration, I believe that the 
> > > > > >> DefCore
> > > > > >> Working Group needs to implement a temporary waiver for the 
> > > > > >> strict API
> > > > > >> checking requirements that were introduced last year, to give 
> > > > > >> downstream
> > > > > >> deployers more time to catch up with the strict 
> > > > > >> micro-versioning
> > > > > >> requirements determined by the Nova/Compute team and enforced 
> > > > > >> by the
> > > > > >> Tempest/QA team.
> > > > > > 
> > > > > > I'm very much opposed to this being done. If we're actually 
> > > > > > concerned with
> > > > > > interoperability and verify that things behave in the same 
> > > > > > manner between multiple
> > > > > > clouds then doing this would be a big step backwards. The 
> > > > > > fundamental disconnect
> > > > > > here is that the vendors who have implemented out of band 
> > > > > > extensions or were
> > > > > > taking advantage of previously available places to inject extra 
> > > > > > attributes
> > > > > > believe that doing so means they're interoperable, which is 
> > > > > > quite far from
> > > > > > reality. **The API is not a place for vendor differentiation.**
> > > > >  
> > > > >  This is a temporary measure to address the fact that a large 
> > > > >  number
> > > > >  of existing tests changed their behavior, rather than having new
> > > > >  tests added to enforce this new requirement. The result is 
> > > > >  deployments
> > > > >  that previously passed these tests may no longer pass, and in 
> > > > >  fact
> > > > >  we have several cases where that's true with deployers who are
> > > > >  trying to maintain their own standard of backwards-compatibility
> > > > >  for their end users.
> > > > > >>> 
> > > > > >>> That's not what happened though. The API hasn't changed and the 
> > > > > >>> tests haven't
> > > > > >>> really changed either. We made our enforcement on Nova's APIs a 
> > > > > >>> bit stricter to
> > > > > >>> ensure nothing unexpected appeared. For 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Matthew Treinish
On Thu, Jun 16, 2016 at 12:59:41PM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2016-06-15 19:27:13 -0400:
> > On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
> > > Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
> > > > Top posting one note and direct comments inline, I’m proposing
> > > > this as a member of the DefCore working group, but this
> > > > proposal itself has not been accepted as the forward course of
> > > > action by the working group. These are my own views as the
> > > > administrator of the program and not that of the working group
> > > > itself, which may independently reject the idea outside of the
> > > > response from the upstream devs.
> > > > 
> > > > I posted a link to this thread to the DefCore mailing list to make
> > > > that working group aware of the outstanding issues.
> > > > 
> > > > > On Jun 14, 2016, at 3:50 PM, Matthew Treinish  
> > > > > wrote:
> > > > > 
> > > > > On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> > > > >> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 
> > > > >> -0400:
> > > > >>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > > >  Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 
> > > >  -0400:
> > > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > >> Last year, in response to Nova micro-versioning and extension 
> > > > >> updates[1],
> > > > >> the QA team added strict API schema checking to Tempest to 
> > > > >> ensure that
> > > > >> no additional properties were added to Nova API responses[2][3]. 
> > > > >> In the
> > > > >> last year, at least three vendors participating in the 
> > > > >> OpenStack Powered
> > > > >> Trademark program have been impacted by this change, two of which
> > > > >> reported this to the DefCore Working Group mailing list earlier 
> > > > >> this year[4].
> > > > >> 
> > > > >> The DefCore Working Group determines guidelines for the 
> > > > >> OpenStack Powered
> > > > >> program, which includes capabilities with associated functional 
> > > > >> tests
> > > > >> from Tempest that must be passed, and designated sections with 
> > > > >> associated
> > > > >> upstream code [5][6]. In determining these guidelines, the 
> > > > >> working group
> > > > >> attempts to balance the future direction of development with 
> > > > >> lagging
> > > > >> indicators of deployments and user adoption.
> > > > >> 
> > > > >> After a tremendous amount of consideration, I believe that the 
> > > > >> DefCore
> > > > >> Working Group needs to implement a temporary waiver for the 
> > > > >> strict API
> > > > >> checking requirements that were introduced last year, to give 
> > > > >> downstream
> > > > >> deployers more time to catch up with the strict micro-versioning
> > > > >> requirements determined by the Nova/Compute team and enforced by 
> > > > >> the
> > > > >> Tempest/QA team.
> > > > > 
> > > > > I'm very much opposed to this being done. If we're actually 
> > > > > concerned with
> > > > > interoperability and verify that things behave in the same manner 
> > > > > between multiple
> > > > > clouds then doing this would be a big step backwards. The 
> > > > > fundamental disconnect
> > > > > here is that the vendors who have implemented out of band 
> > > > > extensions or were
> > > > > taking advantage of previously available places to inject extra 
> > > > > attributes
> > > > > believe that doing so means they're interoperable, which is quite 
> > > > > far from
> > > > > reality. **The API is not a place for vendor differentiation.**
> > > >  
> > > >  This is a temporary measure to address the fact that a large number
> > > >  of existing tests changed their behavior, rather than having new
> > > >  tests added to enforce this new requirement. The result is 
> > > >  deployments
> > > >  that previously passed these tests may no longer pass, and in fact
> > > >  we have several cases where that's true with deployers who are
> > > >  trying to maintain their own standard of backwards-compatibility
> > > >  for their end users.
> > > > >>> 
> > > > >>> That's not what happened though. The API hasn't changed and the 
> > > > >>> tests haven't
> > > > >>> really changed either. We made our enforcement on Nova's APIs a bit 
> > > > >>> stricter to
> > > > > >>> ensure nothing unexpected appeared. For the most part, these tests work 
> > > > >>> on any version
> > > > >>> of OpenStack. (we only test it in the gate on supported stable 
> > > > >>> releases, but I
> > > > >>> don't expect things to have drastically shifted on older releases) 
> > > > >>> It also
> > > > >>> doesn't matter which version of the API you 

Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Devananda van der Veen


On 06/16/2016 08:12 AM, Jim Rollenhagen wrote:
> Hi all,
> 
> I'd like to propose Jay Faulkner (JayF) and Sam Betts (sambetts) for the
> ironic-core team.
> 
> Jay has been in the community as long as I have, has been IPA and
> ironic-specs core for quite some time. His background is operations, and
> he's getting good with Python. He's given great reviews for quite a
> while now, and the number is steadily increasing. I think it's a
> no-brainer.
> 
> Sam has been in the ironic community for quite some time as well, with
> close ties to the neutron community as well. His background seems to be
> in networking, he's got great python skills as well. His reviews are
> super useful. He doesn't have quite as many as some people, but they are
> always thoughtful, and he often catches things others don't. I do hope
> to see more of his reviews.
> 
> Both Sam and Jay are to the point where I consider their +1 or -1 as
> highly as any other core, so I think it's past time to allow them to +2
> as well.
> 
> Current cores, please reply with your vote.
> 

+2 to both!

--devananda



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Kanagaraj Manickam to Tacker core team

2016-06-16 Thread Sripriya Seetharam
+1


-Sripriya


From: Sridhar Ramaswamy [mailto:sric...@gmail.com]
Sent: Wednesday, June 15, 2016 6:32 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [tacker] Proposing Kanagaraj Manickam to Tacker core 
team

Tackers,

It gives me great pleasure to propose Kanagaraj Manickam to join the Tacker
core team. In a short time, Kanagaraj has grown into a key member of the Tacker
team. His enthusiasm and dedication to getting the Tacker code base on par with
other leading OpenStack projects is very much appreciated. He is already
working on two important specs in the Newton cycle and many more fixes and
RFEs [1]. Kanagaraj is also a core member of the Heat project, which serves us
well, as we use Heat heavily for many Tacker features.

Please provide your +1/-1 votes.

- Sridhar

[1] 
http://stackalytics.com/?module=tacker-group_id=kanagaraj-manickam=marks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] POST /api-wg/news

2016-06-16 Thread Chris Dent


Greetings OpenStack community,

No new guidelines this week but we did have a very productive time creating 
launchpad bugs for all the TODOs that are in the existing guidelines. The idea 
is that using launchpad will help to keep the TODOs visible and encourage 
action. See them at 
https://bugs.launchpad.net/openstack-api-wg/+bugs?field.tag=todo

And, of course, if you think there are bugs in the guidelines, report them.

We also decided that the great big "THIS A DRAFT" warning at the start of the 
guidelines [1] suggests that the guidelines are not ready. They are ready but they are 
also live documents that evolve. A review is in place to remove the warning [2].

# Recently merged guidelines

Nothing new in the last two weeks.

# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.

None this week

# Guidelines currently under review

These are guidelines that the working group are debating and working on for 
consistency and language. We encourage any interested parties to join in the 
conversation.

* Add the beginning of a set of guidelines for URIs
  https://review.openstack.org/#/c/322194/
* Add description of pagination parameters
  https://review.openstack.org/190743
* Add guideline for Experimental APIs
  https://review.openstack.org/273158
* Add version discovery guideline
  https://review.openstack.org/254895

Note that some of these guidelines were introduced quite a long time ago and 
need to either be refreshed by their original authors, or adopted by new 
interested parties. If you're the author of one of these older reviews, please 
come back to it or we'll have to mark it abandoned.

# API Impact reviews currently open

Reviews marked as APIImpact [3] are meant to help inform the working group 
about changes which would benefit from wider inspection by group members and 
liaisons. While the working group will attempt to address these reviews 
whenever possible, it is highly recommended that interested parties attend the 
API-WG meetings [4] to promote communication surrounding their reviews.

Thanks for reading and see you next week!

[1] http://specs.openstack.org/openstack/api-wg/
[2] https://review.openstack.org/#/c/330687/
[3] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z
[4] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent   tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] No middle-man - when does/will Nova directly connect iSCSI volumes?

2016-06-16 Thread Walter A. Boring IV

One major disadvantage is lack of multipath support.

Multipath is still done outside of qemu and there is no native multipath 
support inside of qemu from what I can tell.  Another
disadvantage is that qemu iSCSI support is all s/w based. There are 
hardware iSCSI initiators that are supported by os-brick today.  I think 
migrating attaches into qemu itself isn't a good idea and will always be 
behind the level of support already provided by the tools that have been 
around forever.  Also, what kind of support does QEMU have for target 
portal discovery?  Can it discover all targets via a single portal, and 
can you pass in multiple portals to do discovery for the same volume?  
This is also related to multipath support. Some storage arrays can't do
discovery on a single portal; they have to do discovery on each interface.
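
For context, a rough sketch of what a host-based attach via os-brick looks
like (illustrative only; the connection properties below are hypothetical
and abridged, and this is not an actual Nova code path):

  from os_brick.initiator import connector

  # Multipath is handled here, on the host side -- this is the layer a
  # qemu-native iSCSI attach would bypass.
  root_helper = 'sudo'  # assumption: local privilege-escalation command
  conn = connector.InitiatorConnector.factory(
      'ISCSI', root_helper, use_multipath=True)

  connection_properties = {
      'target_portal': '192.0.2.10:3260',            # hypothetical portal
      'target_iqn': 'iqn.2016-06.org.example:vol1',  # hypothetical target
      'target_lun': 1,
  }
  device_info = conn.connect_volume(connection_properties)
  print(device_info['path'])  # host device node the hypervisor then uses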


Do you have some actual numbers to prove that host based attaches passed 
into libvirt are slower than QEMU direct attaches?


You can't really compare RBD to iSCSI.  RBD is a completely different
beast.  The kernel rbd driver hasn't been as stable or as fast as the
rbd client that qemu uses.


Walt


On 06/15/2016 04:59 PM, Preston L. Bannister wrote:
QEMU has the ability to directly connect to iSCSI volumes. Running the 
iSCSI connections through the nova-compute host *seems* somewhat 
inefficient.


There is a spec/blueprint and implementation that landed in Kilo:

https://specs.openstack.org/openstack/nova-specs/specs/kilo/implemented/qemu-built-in-iscsi-initiator.html
https://blueprints.launchpad.net/nova/+spec/qemu-built-in-iscsi-initiator

From looking at the OpenStack Nova sources ... I am not entirely clear 
on when this behavior is invoked (just for Ceph?), and how it might 
change in future.


Looking for a general sense where this is headed. (If anyone knows...)

If there is some problem with QEMU and directly attached iSCSI 
volumes, that would explain why this is not the default. Or is this 
simple inertia?



I have a concrete concern. I work for a company (EMC) that offers 
backup products, and we now have backup for instances in OpenStack. To 
make this efficient, we need to collect changed-block information from 
instances.


1)  We could put an intercept in the Linux kernel of the nova-compute 
host to track writes at the block layer. This has the merit of working 
for containers, and potentially bare-metal instance deployments. But it
is not guaranteed for instances if the iSCSI volumes are directly
attached to QEMU.


2)  We could use the QEMU support for incremental backup (first bit
landed in QEMU 2.4). This has the merit of working with any storage,
but only for virtual machines under QEMU.


As our customers are (so far) only asking about virtual machine
backup, I long ago settled on (2) as the most promising.


What I cannot clearly determine is where (1) will fail. Will all iSCSI 
volumes connected to QEMU instances eventually become directly connected?



Xiao's unanswered query (below) presents another question. Is this a 
site-choice? Could I require my customers to configure their OpenStack 
clouds to always route iSCSI connections through the nova-compute 
host? (I am not a fan of this approach, but I have to ask.)


To answer Xiao's question, can a site configure their cloud to 
*always* directly connect iSCSI volumes to QEMU?




On Tue, Feb 16, 2016 at 4:54 AM, Xiao Ma (xima2) wrote:


Hi, All

I want to make qemu communicate with the iSCSI target using
libiscsi directly, so I
followed https://review.openstack.org/#/c/135854/ to add
'volume_drivers =
iscsi=nova.virt.libvirt.volume.LibvirtNetVolumeDriver' in nova.conf
and then restarted the nova and cinder services, but the
volume configuration of the vm is still as below:


  
  
  
[libvirt <disk> element XML stripped in archiving; only the volume
serial survives: 076bb429-67fd-4c0c-9ddf-0dc7621a975a]
  



I use CentOS 7 and the Liberty version of OpenStack.
Could anybody tell me how I can achieve this?


Thanks.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] consistency and exposing quiesce in the Nova API

2016-06-16 Thread Matt Riedemann

On 6/16/2016 6:12 AM, Preston L. Bannister wrote:

I am hoping support for instance quiesce in the Nova API makes it into
OpenStack. To my understanding, this is existing function in Nova, just
not-yet exposed in the public API. (I believe Cinder uses this via a
private Nova API.)


I'm assuming you're thinking of the os-assisted-volume-snapshots admin
API in Nova that is called from the Cinder RemoteFSSnapDrivers
(glusterfs, scality, virtuozzo and quobyte). I started a separate thread
about that yesterday, mainly around the lack of CI testing / status, so
that we even have an idea whether this is working consistently and don't
regress it.




Much of the discussion is around disaster recovery (DR) and NFV - which
is not wrong, but might be muddling the discussion? Forget DR and NFV,
for the moment.

My interest is simply in collecting high quality backups of applications
(instances) running in OpenStack. (Yes, customers are deploying
applications into OpenStack that need backup - and at large scale. They
told us, *very* clearly.) Ideally, I would like to give the application
a chance to properly quiesce, so the on-disk state is most-consistent,
before collecting the backup.


We already attempt to quiesce an active volume-backed instance before 
doing a volume snapshot:


https://github.com/openstack/nova/blob/11bd0052bdd660b63ecca53c5b6fe68f81bdf9c3/nova/compute/api.py#L2266
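
The pattern there is roughly the following (a sketch only; the helper names
are placeholders rather than Nova's actual internals, though
InstanceQuiesceNotSupported matches the exception in nova.exception):

  def snapshot_with_quiesce(driver, context, instance, image_meta,
                            snapshot_volumes, exception):
      """Try to quiesce; fall back to a crash-consistent snapshot."""
      quiesced = False
      try:
          driver.quiesce(context, instance, image_meta)
          quiesced = True
      except exception.InstanceQuiesceNotSupported:
          pass  # no guest agent support; snapshot anyway
      try:
          snapshot_volumes()  # placeholder for the volume snapshot calls
      finally:
          if quiesced:
              driver.unquiesce(context, instance, image_meta)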



The existing function in Nova should be at least a good start; it just
needs to be exposed in the public Nova API. (At least, this is my
understanding.)

Of course, good backups (however collected) allow you to build DR
solutions. My immediate interest is simply to collect high-quality backups.

The part in the blueprint about an atomic operation on a list of
instances ... this might be over-doing things. First, if you have a set
of related instances, very likely there is a logical order in which they
should be quiesced. Some could be quiesced concurrently. Others might
need to be sequential.

Assuming the quiesce API *starts* the operation, and there is some means
to check for completion, then a single-instance quiesce API should be
sufficient. An API that is synchronous (waits for completion before
returning) would also be usable. (I am not picky - just want to collect
better backups for customers.)


As noted above, we already attempt to quiesce when doing a volume-backed 
instance snapshot.


The problem comes in with the chaining and orchestration around a list 
of instances. That requires additional state management and overhead 
within Nova and while we're actively trying to redo parts of the code 
base to make things less terrible, adding more complexity on top at the 
same time doesn't help.


I'm also not sure what something like multiattach volumes will throw 
into the mix with this, but that's another DR/HA requirement.


So I get that lots of people want lots of things that aren't in Nova 
right now. We have that coming from several different projects (cinder 
for multiattach volumes, neutron for vlan-aware-vms and routed 
networks), and several different groups (NFV, ops).


We also have a lot of people that just want the basic IaaS layer to work 
for the compute service in an OpenStack cloud, like being able to scale 
that out better and track resource usage for accurate scheduling.


And we have a lot of developers that want to be able to actually 
understand what it is the code is doing, and a much smaller number of 
core maintainers / reviewers that don't want to have to keep piling 
technical debt into the project while we're trying to fix some of what's 
already built up over the years - and actually have this stuff backed 
with integration testing.


So, I get it. We all have requirements and we all have resource 
limitations, which is why we as a team prioritize our work items for the 
release. This one didn't make it for Newton.








On Sun, May 29, 2016 at 7:24 PM, joehuang wrote:

Hello,

This spec [1] was to expose a quiesce/unquiesce API, which had been
approved in Mitaka, but the code was not merged in time.

The major consideration for this spec is to enable application-level
consistent snapshots, so that the backup of the snapshot at the
remote site can be recovered correctly in case of disaster
recovery. Currently there are only single-VM-level consistent
snapshots (through create image from VM), but that's not enough.

First, disaster recovery is mainly an infrastructure-level action in
case of catastrophic failures (flood, earthquake, propagating
software fault): the cloud service provider recovers the
infrastructure and the applications without help from each
application owner. You cannot just recover OpenStack and then send
a notification to all application owners, asking them to restore
their applications on their own. As the cloud service
provider, they should be 

Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Doug Hellmann
Excerpts from Matthew Treinish's message of 2016-06-15 19:27:13 -0400:
> On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
> > Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
> > > Top posting one note and direct comments inline, I’m proposing
> > > this as a member of the DefCore working group, but this
> > > proposal itself has not been accepted as the forward course of
> > > action by the working group. These are my own views as the
> > > administrator of the program and not that of the working group
> > > itself, which may independently reject the idea outside of the
> > > response from the upstream devs.
> > > 
> > > I posted a link to this thread to the DefCore mailing list to make
> > > that working group aware of the outstanding issues.
> > > 
> > > > On Jun 14, 2016, at 3:50 PM, Matthew Treinish  
> > > > wrote:
> > > > 
> > > > On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> > > >> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> > > >>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > >  Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 
> > >  -0400:
> > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > >> Last year, in response to Nova micro-versioning and extension 
> > > >> updates[1],
> > > >> the QA team added strict API schema checking to Tempest to ensure 
> > > >> that
> > > >> no additional properties were added to Nova API responses[2][3]. 
> > > >> In the
> > > >> last year, at least three vendors participating in the OpenStack 
> > > >> Powered
> > > >> Trademark program have been impacted by this change, two of which
> > > >> reported this to the DefCore Working Group mailing list earlier 
> > > >> this year[4].
> > > >> 
> > > >> The DefCore Working Group determines guidelines for the OpenStack 
> > > >> Powered
> > > >> program, which includes capabilities with associated functional 
> > > >> tests
> > > >> from Tempest that must be passed, and designated sections with 
> > > >> associated
> > > >> upstream code [5][6]. In determining these guidelines, the working 
> > > >> group
> > > >> attempts to balance the future direction of development with 
> > > >> lagging
> > > >> indicators of deployments and user adoption.
> > > >> 
> > > >> After a tremendous amount of consideration, I believe that the 
> > > >> DefCore
> > > >> Working Group needs to implement a temporary waiver for the strict 
> > > >> API
> > > >> checking requirements that were introduced last year, to give 
> > > >> downstream
> > > >> deployers more time to catch up with the strict micro-versioning
> > > >> requirements determined by the Nova/Compute team and enforced by 
> > > >> the
> > > >> Tempest/QA team.
> > > > 
> > > > I'm very much opposed to this being done. If we're actually 
> > > > concerned with
> > > > interoperability and verify that things behave in the same manner 
> > > > between multiple
> > > > clouds then doing this would be a big step backwards. The 
> > > > fundamental disconnect
> > > > here is that the vendors who have implemented out of band 
> > > > extensions or were
> > > > taking advantage of previously available places to inject extra 
> > > > attributes
> > > > believe that doing so means they're interoperable, which is quite 
> > > > far from
> > > > reality. **The API is not a place for vendor differentiation.**
> > >  
> > >  This is a temporary measure to address the fact that a large number
> > >  of existing tests changed their behavior, rather than having new
> > >  tests added to enforce this new requirement. The result is 
> > >  deployments
> > >  that previously passed these tests may no longer pass, and in fact
> > >  we have several cases where that's true with deployers who are
> > >  trying to maintain their own standard of backwards-compatibility
> > >  for their end users.
> > > >>> 
> > > >>> That's not what happened though. The API hasn't changed and the tests 
> > > >>> haven't
> > > >>> really changed either. We made our enforcement on Nova's APIs a bit 
> > > >>> stricter to
> > > >>> ensure nothing unexpected appeared. For the most part, these tests work on 
> > > >>> any version
> > > >>> of OpenStack. (we only test it in the gate on supported stable 
> > > >>> releases, but I
> > > >>> don't expect things to have drastically shifted on older releases) It 
> > > >>> also
> > > >>> doesn't matter which version of the API you run, v2.0 or v2.1. 
> > > >>> Literally, the
> > > >>> only case it ever fails is when you run something extra, not from the 
> > > >>> community,
> > > >>> either as an extension (which themselves are going away [1]) or 
> > > >>> another service
> > > >>> 

Re: [openstack-dev] [requirements][all] VOTE to expand the Requirements Team

2016-06-16 Thread Doug Hellmann
Excerpts from Davanum Srinivas (dims)'s message of 2016-06-16 07:44:50 -0700:
> Folks,
> 
> At Austin the Release Management team reached a consensus to spin off
> with some new volunteers to take care of the requirements process and
> repository [1]. The following folks showed up and worked with me on
> getting familiar with the issues/problems/tasks (see [1] and [2]) and
> helped with the day-to-day work.
> 
> Matthew Thode (prometheanfire)
> Dirk Mueller (dirk)
> Swapnil Kulkarni (coolsvap)
> Tony Breeds (tonyb)
> Thomas Bechtold (tbechtold)
> 
> So, please cast your VOTE to grant them +2/core rights on the
> requirements repository and keep up the good work w.r.t. speeding up
> reviews, making sure new requirements don't break things, etc.
> 
> Also, please note that Thierry has been happy enough with our work to
> step down from core responsibilities :) Many thanks Thierry for
> helping with this effort and guidance. I'll make all the add/remove to
> the requirements-core team when this VOTE passes.
> 
> Thanks,
> Dims
> 
> [1] https://etherpad.openstack.org/p/newton-relmgt-plan
> [2] https://etherpad.openstack.org/p/requirements-tasks
> [3] https://etherpad.openstack.org/p/requirements-cruft
> 

+1 for all 5, gladly!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Dmitry Tantsur

On 06/16/2016 05:12 PM, Jim Rollenhagen wrote:

Hi all,

I'd like to propose Jay Faulkner (JayF) and Sam Betts (sambetts) for the
ironic-core team.

Jay has been in the community as long as I have, has been IPA and
ironic-specs core for quite some time. His background is operations, and
he's getting good with Python. He's given great reviews for quite a
while now, and the number is steadily increasing. I think it's a
no-brainer.


+2



Sam has been in the ironic community for quite some time as well, with
close ties to the neutron community as well. His background seems to be
in networking, he's got great python skills as well. His reviews are
super useful. He doesn't have quite as many as some people, but they are
always thoughtful, and he often catches things others don't. I do hope
to see more of his reviews.


+2



Both Sam and Jay are to the point where I consider their +1 or -1 as
highly as any other core, so I think it's past time to allow them to +2
as well.


+1 :)



Current cores, please reply with your vote.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Potential direction proposal - unified OpenStack containers

2016-06-16 Thread Michał Jastrzębski
Hey, welcome back to Kolla, guys! :) We've missed you!

I'm sure we can figure out together the best way to meet everyone's needs;
I'm happy to help! I also reviewed the spec. Thanks.

On 16 June 2016 at 10:23, Ryan Hallisey  wrote:
> Sergey,
>
> Thanks for reaching out to the community!  I think there is a lot to discuss. 
> I added some comments on
> the spec and I'm sure many kolla folks will follow up.
>
>
> Thanks,
> Ryan
>
> - Original Message -
> From: "Sergey Lukjanov" 
> To: "OpenStack Development Mailing List" 
> Sent: Thursday, June 16, 2016 10:04:06 AM
> Subject: [openstack-dev] [kolla] Potential direction proposal - unified 
> OpenStack containers
>
> Hi folks,
>
> I'd like to share some thoughts about the OpenStack containerization in the 
> form of a specification for Kolla and have some discussion on the proposed 
> items in the review.
>
> In general, it's a meta-spec describing a potential direction for Kolla:
> providing unified, deployment-tool-agnostic containers for anyone to use.
>
> We very much welcome your feedback on the following spec.
>
> Link: https://review.openstack.org/330575
>
> Thanks.
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Principal Software Engineer
> Mirantis Inc.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] It is impossible to queue UpdateDnsmasqTask

2016-06-16 Thread Georgy Kibardin
Hi All,

Currently we can only run one instance of the subject task at a time. An
attempt to run a second one causes an exception. This behaviour may at least
cause a cluster to get stuck forever in the "removing" state (reproduced here:
https://bugs.launchpad.net/fuel/+bug/1544493) or just produce an
incomprehensible "task already running" message. So we need to address the
problem somehow. I see the following ways to fix it:

1. Just put the cluster into the "error" state, which would allow the user to
remove it later.
   pros: simple and fixes the problem at hand (#1544493)
   cons: it would be hard to detect the "come again later" situation; quite a
lame behavior: why don't you "come again later" yourself?

2. Implement generic queueing in nailgun (see the sketch after this list).
   pros: quite simple
   cons: it doesn't look like nailgun's responsibility

3. Implement generic queueing in astute.
   pros: this behaviour makes sense for astute.
   cons: the implementation would be quite complex; we need to synchronize
execution between separate worker processes.

4. Split the task so that each part works with a particular cluster.
   pros: we don't extend our execution model
   cons: nontrivial implementation; no guarantee that we are always able to
split master node tasks on a per-cluster basis.
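
To illustrate option 2 concretely, here is a minimal sketch of generic
in-process queueing (hypothetical code; nailgun's actual task model is more
involved and would need to persist the queue across restarts):

  import collections
  import threading

  class SerialTaskRunner(object):
      """Run at most one task at a time; queue the rest instead of
      raising "task already running"."""

      def __init__(self):
          self._queue = collections.deque()
          self._lock = threading.Lock()
          self._running = False

      def submit(self, task):
          # task is any no-argument callable
          with self._lock:
              if self._running:
                  self._queue.append(task)  # "come again later", automated
                  return
              self._running = True
          self._run(task)

      def _run(self, task):
          try:
              task()
          finally:
              with self._lock:
                  nxt = self._queue.popleft() if self._queue else None
                  self._running = nxt is not None
              if nxt is not None:
                  self._run(nxt)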

Best,
Georgy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Loo, Ruby
Jim,

Thanks for the proposal.

+2 +A. Err, +2 :)

--ruby

On 2016-06-16, 11:12 AM, "Jim Rollenhagen" wrote:


Both Sam and Jay are to the point where I consider their +1 or -1 as
highly as any other core, so I think it's past time to allow them to +2
as well.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Villalovos, John L
> -Original Message-
> From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> Sent: Thursday, June 16, 2016 08:13
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [ironic] Proposing two new cores
> 
> Hi all,
> 
> I'd like to propose Jay Faulkner (JayF) and Sam Betts (sambetts) for the
> ironic-core team.
> 
> Jay has been in the community as long as I have, has been IPA and
> ironic-specs core for quite some time. His background is operations, and
> he's getting good with Python. He's given great reviews for quite a
> while now, and the number is steadily increasing. I think it's a
> no-brainer.
> 
> Sam has been in the ironic community for quite some time as well, with
> close ties to the neutron community as well. His background seems to be
> in networking, he's got great python skills as well. His reviews are
> super useful. He doesn't have quite as many as some people, but they are
> always thoughtful, and he often catches things others don't. I do hope
> to see more of his reviews.
> 
> Both Sam and Jay are to the point where I consider their +1 or -1 as
> highly as any other core, so I think it's past time to allow them to +2
> as well.
> 
> Current cores, please reply with your vote.

+1 to both Jay and Sam. In my opinion, they both have a wealth of knowledge and 
have been great contributors to Ironic :)

John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A primer on data structures used by Nova to represent block devices

2016-06-16 Thread Matthew Booth
On Thu, Jun 16, 2016 at 4:20 PM, Kashyap Chamarthy 
wrote:
[...]

> > BlockDeviceMapping
> > ===
> >
> > The 'top level' data structure is the block device mapping object. It is
> a
> > NovaObject, persisted in the db. Current code creates a BDM object for
> > every disk associated with an instance, whether it is a volume or not. I
> > can't confirm (or deny) that this has always been the case, though, so
> > there may be instances which still exist which have some BDMs missing.
> >
> > The BDM object describes properties of each disk as specified by the
> user.
> > It is initially created by the user and passed to compute api. Compute
> api
> > transforms and consolidates all BDMs to ensure that all disks, explicit
> or
> > implicit, have a BDM, then persists them.
>
> What could be an example of an "implicit disk"?
>

If the flavor defines an ephemeral disk which the user did not specify
explicitly, it will be added. Possibly others, I'm not looking at that code
right now.
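
To make the explicit/implicit distinction concrete, the consolidated list
for that case might look roughly like this (a sketch; the field values are
hypothetical, though the field names are real BDM v2 fields):

  # The first entry came from the user; the second was created implicitly
  # because the flavor defines an ephemeral disk.
  bdms = [
      {'source_type': 'image', 'destination_type': 'local',
       'boot_index': 0, 'uuid': 'IMAGE_UUID',       # hypothetical id
       'delete_on_termination': True},
      {'source_type': 'blank', 'destination_type': 'local',
       'guest_format': 'ext4', 'volume_size': 10},  # implicit ephemeral
  ]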


>
> > Look in nova.objects.block_device
> > for all BDM fields, but in essence they contain information like
> > (source_type='image', destination_type='local', image_id='),
> > or equivalents describing ephemeral disks, swap disks or volumes, and
> some
> > associated data.
> >
> > Reader note: BDM objects are typically stored in variables called 'bdm'
> > with lists in 'bdms', although this is obviously not guaranteed (and
> > unfortunately not always true: bdm in libvirt.block_device is usually a
> > DriverBlockDevice object). This is a useful reading aid (except when it's
> > proactively confounding), as there is also something else typically
> called
> > 'block_device_mapping' which is not a BlockDeviceMapping object.
>
> [...]
>
> > instance_disk_info
> > =
> >
> > The driver api defines a method get_instance_disk_info, which returns a
> > json blob. The compute manager calls this and passes the data over rpc
> > between calls without ever looking at it. This is driver-specific opaque
> > data. It is also only used by the libvirt driver, despite being part of
> the
> > api for all drivers. Other drivers do not return any data. The most
> > interesting aspect of instance_disk_info is that it is generated from the
> > libvirt XML, not from nova's state.
> >
> > Reader beware: instance_disk_info is often named 'disk_info' in code,
> which
> > is unfortunate as this clashes with the normal naming of the next
> > structure. Occasionally the two are used in the same block of code.
> >
> > instance_disk_info is a list of dicts for some of an instance's disks.
>
> The above sentence reads a little awkwardly (maybe it's just me); you might
> want to rephrase it if you're submitting it as a Gerrit change.
>

Yeah. I think that's a case of re-editing followed by inadequate
proofreading.


> While reading this section, among other places, I was looking at:
> _get_instance_disk_info() ("Get the non-volume disk information from the
> domain xml") from nova/virt/libvirt/driver.py.
>

non-volume or Rbd ;) I've become a bit cautious about such docstrings: they
aren't always correct :/


>
> > Reader beware: Rbd disks (including non-volume disks) and cinder volumes
> > are not included in instance_disk_info.
> >
> > The dicts are:
> >
> >   {
> > 'type': libvirt's notion of the disk's type
> > 'path': libvirt's notion of the disk's path
> > 'virt_disk_size': The disk's virtual size in bytes (the size the
> guest
> > OS sees)
> > 'backing_file': libvirt's notion of the backing file path
> > 'disk_size': The file size of path, in bytes.
> > 'over_committed_disk_size': As-yet-unallocated disk size, in bytes.
> >   }
> >
> > disk_info
> > ===
> >
> > Reader beware: as opposed to instance_disk_info, which is frequently
> called
> > disk_info.
> >
> > This data structure is actually described pretty well in the comment
> block
> > at the top of libvirt/blockinfo.py. It is internal to the libvirt driver.
> > It contains:
> >
> >   {
> > 'disk_bus': the default bus used by disks
> > 'cdrom_bus': the default bus used by cdrom drives
> > 'mapping': defined below
> >   }
> >
> > 'mapping' is a dict which maps disk names to a dict describing how that
> > disk should be passed to libvirt. This mapping contains every disk
> > connected to the instance, both local and volumes.
>
> Worth updating the existing definition of 'mapping' in
> nova/virt/libvirt/blockinfo.py with your clearer description above?
>

Indubitably.

Matt
-- 
Matthew Booth
Red Hat Engineering, Virtualisation Team

Phone: +442070094448 (UK)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Vladyslav Drok
+1 from me, both Jay and Sam are doing a very good job :)

On Thu, Jun 16, 2016 at 6:20 PM, Lucas Alvares Gomes 
wrote:

> Hi,
>
> > Both Sam and Jay are to the point where I consider their +1 or -1 as
> > highly as any other core, so I think it's past time to allow them to +2
> > as well.
> >
> > Current cores, please reply with your vote.
> >
>
> Great work Sam and Jay!
>
> +1 for both
>
> Cheers,
> Lucas
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [UX] [TripleO] TripleO UI Initial Wireframes

2016-06-16 Thread Dan Prince
I left some comments on the wireframes themselves. One general concept
I would like to see captured is making sure that the UI and CLI have
parity.

Specifically, things like node registration: on the CLI we use a JSON
file format:

http://tripleo.org/environments/environments.html#instackenv
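
For anyone who hasn't seen it, an instackenv file looks roughly like the
sketch below (all values illustrative; see the link above for the
authoritative format):

    {
      "nodes": [
        {
          "pm_type": "pxe_ipmitool",
          "pm_addr": "192.168.24.11",
          "pm_user": "admin",
          "pm_password": "password",
          "mac": ["52:54:00:aa:bb:cc"],
          "cpu": "4",
          "memory": "8192",
          "disk": "40",
          "arch": "x86_64"
        }
      ]
    }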

Supporting creation of individual nodes is fine as well, since a
command-line user could just run Ironic client commands directly too.

I also left a comment about the screen with multiple plans. I like this
idea and it is something that we can pursue, but due to the fact that we
use a flat physical deployment network there would need to be some extra
care in setting up the network ranges, vlans, etc. across multiple
plans. Again, this is something I would like to see us support and
document with the CLI before we go and expose the capability in the UI.

Dan


On Mon, 2016-06-06 at 15:03 -0400, Liz Blanchard wrote:
> Hi All,
> 
> I wanted to share some brainstorming we've done on the TripleO UI. I
> put together wireframes[1] to reflect some ideas we have on moving
> forward with features in the UI and would love to get any feedback
> you all have. Feel free to comment via this email or comment within
> InVision.
> 
> Best,
> Liz
> 
> [1] https://invis.io/KW7JTXBBR
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-16 Thread Hongbin Lu
Welcome! Please feel free to ping us in IRC (#openstack-zun) or join our weekly 
meeting (https://wiki.openstack.org/wiki/Zun#Meetings). I am happy to discuss 
how to collaborate further.

Best regards,
Hongbin

From: Pengfei Ni [mailto:feisk...@gmail.com]
Sent: June-16-16 6:36 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qi Ming Teng; yanya...@cn.ibm.com; flw...@catalyst.net.nz; 
adit...@nectechnologies.in; sitlani.namr...@yahoo.in; Chandan Kumar; Sheel Rana 
Insaan; Yuanying
Subject: Re: [openstack-dev] [Higgins][Zun] Project roadmap

Hello, everyone,

Hypernetes has done some work similar to this project, that is:

- Leverage Neutron for container networking
- Leverage Cinder for storage
- Leverage Keystone for auth
- Leverage HyperContainer for a hypervisor-based container runtime

We could help to provide hypervisor-based container runtime (HyperContainer) 
integration for Zun.

See https://github.com/hyperhq/hypernetes and 
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html
 for more information about Hypernetes, and see 
https://github.com/hyperhq/hyperd for more information about HyperContainer.


Best regards.


---
Pengfei Ni
Software Engineer @Hyper

2016-06-13 6:10 GMT+08:00 Hongbin Lu:
Hi team,

During the team meetings these past weeks, we collaborated on the initial
project roadmap. I summarized it below. Please review.

* Implement a common container abstraction for different container runtimes. 
The initial implementation will focus on supporting basic container operations 
(i.e. CRUD).
* Focus on non-nested containers use cases (running containers on physical 
hosts), and revisit nested containers use cases (running containers on VMs) 
later.
* Provide two sets of APIs to access containers: the Nova APIs and the 
Zun-native APIs. In particular, the Zun-native APIs will expose full container 
capabilities, and the Nova APIs will expose capabilities that are shared 
between containers and VMs.
* Leverage Neutron (via Kuryr) for container networking.
* Leverage Cinder for container data volume.
* Leverage Glance for storing container images. If necessary, contribute to 
Glance for missing features (e.g. support for layers of container images).
* Support enforcing multi-tenancy by doing the following:
** Add configurable scheduler options to ensure that neighboring containers 
belong to the same tenant.
** Support hypervisor-based container runtimes.

The following topics have been discussed, but the team could not reach 
consensus on including them in the short-term project scope. We skipped them 
for now and might revisit them later.
* Support proxying API calls to COEs.
* Advanced container operations (e.g. keeping containers alive, load balancer 
setup, rolling upgrades).
* Nested containers use cases (e.g. provisioning container hosts).
* Container composition (e.g. support for a docker-compose-like DSL).

NOTE: I might have forgotten or misunderstood something. Please feel free to 
point out anything that is wrong or missing.

Best regards,
Hongbin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Potential direction proposal - unified OpenStack containers

2016-06-16 Thread Ryan Hallisey
Sergey,

Thanks for reaching out to the community!  I think there is a lot to discuss. I 
added some comments on
the spec and I'm sure many kolla folks will follow up.


Thanks,
Ryan

- Original Message -
From: "Sergey Lukjanov" 
To: "OpenStack Development Mailing List" 
Sent: Thursday, June 16, 2016 10:04:06 AM
Subject: [openstack-dev] [kolla] Potential direction proposal - unified 
OpenStack containers

Hi folks, 

I'd like to share some thoughts about the OpenStack containerization in the 
form of a specification for Kolla and have some discussion on the proposed 
items in the review. 

In general it's a meta spec to describe a potential direction for Kolla to 
provide unified, deployment-tool-agnostic containers for anyone to use. 

We very much welcome your feedback on the following spec. 

Link: https://review.openstack.org/330575 

Thanks. 


-- 
Sincerely yours, 
Sergey Lukjanov 
Principal Software Engineer 
Mirantis Inc. 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Lucas Alvares Gomes
Hi,

> Both Sam and Jay are to the point where I consider their +1 or -1 as
> highly as any other core, so I think it's past time to allow them to +2
> as well.
>
> Current cores, please reply with your vote.
>

Great work Sam and Jay!

+1 for both

Cheers,
Lucas

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A primer on data structures used by Nova to represent block devices

2016-06-16 Thread Kashyap Chamarthy
On Thu, Jun 16, 2016 at 12:48:18PM +0100, Matthew Booth wrote:
> The purpose of this mail is to share what I have learned about the various
> data structures used by Nova for representing block devices. I compiled
> this for my own use, but I hope it might be useful for others, and that
> other might point out any errors.

Definitely!  Thanks for taking the time to write this essay.

[Since you made the effort, worth submitting this to
nova/doc/source/nova-block-internals.rst (or some such).]

> As is usual when I'm reading code like this, I've created some cleanup
> patches to address nits or things I found confusing as I went along. I've
> posted review links at the end.
> 
> A note on reading this. I refer to local disks and volumes. A local disk in
> this context is any disk directly managed by nova compute. If nova is
> configured to use Rbd or NFS for instance disks these disks won't actually
> be local, but they are still managed locally and referred to as local disks.
> 
> There are 4 relevant data structures. 2 of these are general, 2 are
> specific to the libvirt driver.
> 
> BlockDeviceMapping
> ===
> 
> The 'top level' data structure is the block device mapping object. It is a
> NovaObject, persisted in the db. Current code creates a BDM object for
> every disk associated with an instance, whether it is a volume or not. I
> can't confirm (or deny) that this has always been the case, though, so
> there may be instances which still exist which have some BDMs missing.
> 
> The BDM object describes properties of each disk as specified by the user.
> It is initially created by the user and passed to compute api. Compute api
> transforms and consolidates all BDMs to ensure that all disks, explicit or
> implicit, have a BDM, then persists them.

What could be an example of an "implicit disk"?

> Look in nova.objects.block_device
> for all BDM fields, but in essence they contain information like
> (source_type='image', destination_type='local', image_id='<image uuid>'),
> or equivalents describing ephemeral disks, swap disks or volumes, and some
> associated data.
> 
> Reader note: BDM objects are typically stored in variables called 'bdm'
> with lists in 'bdms', although this is obviously not guaranteed (and
> unfortunately not always true: bdm in libvirt.block_device is usually a
> DriverBlockDevice object). This is a useful reading aid (except when it's
> proactively confounding), as there is also something else typically called
> 'block_device_mapping' which is not a BlockDeviceMapping object.

[...]
 
> Reader beware: common usage is to pull 'block_device_mapping' out of this
> dict into a variable called 'block_device_mapping'. This is not a
> BlockDeviceMapping object, or list of them.
> 
> Reader beware: if block_device_info was passed to the driver by compute
> manager, it was probably generated by _get_instance_block_device_info(). By
> default, this function filters out all cinder volumes from
> block_device_mapping which don't currently have connection_info. In other
> contexts this filtering will not have happened, and block_device_mapping
> will contain all volumes.
> 
> Reader beware: unlike BDMs, block_device_info does not represent all disks
> that an instance might have. Significantly, it will not contain any
> representation of an image-backed local disk, i.e. the root disk of a
> typical instance which isn't boot-from-volume. Other representations used
> by the libvirt driver explicitly reconstruct this missing disk. I assume
> other drivers must do the same.

[Meta comment: Appreciate these "Reader beware"s -- they're having
the right effect -- causing my brain to 'stand up and read more than
twice' to assimilate.]

 
> instance_disk_info
> =
> 
> The driver api defines a method get_instance_disk_info, which returns a
> json blob. The compute manager calls this and passes the data over rpc
> between calls without ever looking at it. This is driver-specific opaque
> data. It is also only used by the libvirt driver, despite being part of the
> api for all drivers. Other drivers do not return any data. The most
> interesting aspect of instance_disk_info is that it is generated from the
> libvirt XML, not from nova's state.
> 
> Reader beware: instance_disk_info is often named 'disk_info' in code, which
> is unfortunate as this clashes with the normal naming of the next
> structure. Occasionally the two are used in the same block of code.
> 
> instance_disk_info is a list of dicts for some of an instance's disks.

The above sentence reads a little awkwardly (maybe it's just me); you might
want to rephrase it if you're submitting it as a Gerrit change.

While reading this section, among other places, I was looking at:
_get_instance_disk_info() ("Get the non-volume disk information from the
domain xml") from nova/virt/libvirt/driver.py.

> Reader beware: Rbd disks (including non-volume disks) and cinder volumes
> are not included in instance_disk_info.
> 
> The dicts are:
> 
>   {
>   

[openstack-dev] [Monasca] monasca-agent, flake8 version change on Jenkins

2016-06-16 Thread László Hegedüs
Hi,

The flake8 version seems to have changed since yesterday on the Jenkins node.

was: flake8==2.5.5
now: flake8==2.6.0

It (apparently) causes the gate-monasca-agent-pep8 checks to fail, since the 
existing code does not pass the new checks.
Has anyone started addressing this issue?
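
In the meantime, one short-term workaround (illustrative only; where the pin
belongs depends on how the repo pulls in flake8) would be to cap the version
until the new warnings are fixed, e.g. in test-requirements.txt:

    flake8<2.6.0  # temporary cap until the 2.6.x warnings are addressed

The longer-term fix is of course to update the code to pass the new checks.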

BR,
Laszlo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Proposing two new cores

2016-06-16 Thread Jim Rollenhagen
Hi all,

I'd like to propose Jay Faulkner (JayF) and Sam Betts (sambetts) for the
ironic-core team.

Jay has been in the community as long as I have, has been IPA and
ironic-specs core for quite some time. His background is operations, and
he's getting good with Python. He's given great reviews for quite a
while now, and the number is steadily increasing. I think it's a
no-brainer.

Sam has been in the ironic community for quite some time as well, with
close ties to the neutron community as well. His background seems to be
in networking, he's got great python skills as well. His reviews are
super useful. He doesn't have quite as many as some people, but they are
always thoughtful, and he often catches things others don't. I do hope
to see more of his reviews.

Both Sam and Jay are to the point where I consider their +1 or -1 as
highly as any other core, so I think it's past time to allow them to +2
as well.

Current cores, please reply with your vote.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins] Call for contribution for Higgins API design

2016-06-16 Thread Kumari, Madhuri
Hi,

We have created an etherpad page for API design 
https://etherpad.openstack.org/p/zun-containers-service-api
Please have a look and write your suggestions.

Regards,
Madhuri

From: Yuanying OTSUKA [mailto:yuany...@oeilvert.org]
Sent: Tuesday, June 14, 2016 9:39 AM
To: OpenStack Development Mailing List (not for usage questions) 
; Sheel Rana Insaan 
Cc: adit...@nectechnologies.in; yanya...@cn.ibm.com; flw...@catalyst.net.nz; Qi 
Ming Teng ; sitlani.namr...@yahoo.in; Yuanying 
; Chandan Kumar 
Subject: Re: [openstack-dev] [Higgins] Call for contribution for Higgins API 
design

Hi, Hongbin,

Yes, those urls are just information for our work.
We will create a etherpad page to collaborate.




On Sat, Jun 11, 2016 at 7:38, Hongbin Lu wrote:
Yuanying,

The etherpads you pointed to were from a few years ago and the information 
looks a bit outdated. I think we can collaborate on a similar etherpad with 
updated information (i.e. remove container runtimes that we don’t care about, 
add container runtimes that we do care about). The existing etherpad can be 
used as a starting point. 
What do you think?

Best regards,
Hongbin

From: Yuanying OTSUKA 
[mailto:yuany...@oeilvert.org]
Sent: June-01-16 12:43 AM
To: OpenStack Development Mailing List (not for usage questions); Sheel Rana 
Insaan
Cc: adit...@nectechnologies.in; 
yanya...@cn.ibm.com; 
flw...@catalyst.net.nz; Qi Ming Teng; 
sitlani.namr...@yahoo.in; Yuanying; Chandan 
Kumar
Subject: Re: [openstack-dev] [Higgins] Call for contribution for Higgins API 
design

Just F.Y.I.

When Magnum wanted to become “Container as a Service”,
there were some discussions about API design.

* https://etherpad.openstack.org/p/containers-service-api
* https://etherpad.openstack.org/p/openstack-containers-service-api



On Wed, Jun 1, 2016 at 12:09, Hongbin Lu wrote:
Sheel,

Thanks for taking the responsibility. Assigned the BP to you. As discussed, 
please submit a spec for the API design. Feel free to let us know if you need 
any help.

Best regards,
Hongbin

From: Sheel Rana Insaan 
[mailto:ranasheel2...@gmail.com]
Sent: May-31-16 9:23 PM
To: Hongbin Lu
Cc: adit...@nectechnologies.in; 
vivek.jain.openst...@gmail.com; 
flw...@catalyst.net.nz; Shuu Mutou; Davanum 
Srinivas; OpenStack Development Mailing List (not for usage questions); Chandan 
Kumar; hai...@xr.jp.nec.com; Qi Ming Teng; 
sitlani.namr...@yahoo.in; Yuanying; Kumari, 
Madhuri; yanya...@cn.ibm.com
Subject: Re: [Higgins] Call for contribution for Higgins API design


Dear Hongbin,

I am interested in this.
Thanks!!

Best Regards,
Sheel Rana
On Jun 1, 2016 3:53 AM, "Hongbin Lu" 
> wrote:
Hi team,

As discussed in the last team meeting, we agreed to define core use cases for 
the API design. I have created a blueprint for that. We need an owner of the 
blueprint and it requires a spec to clarify the API design. Please let me know 
if you are interested in this work (it might require a significant amount of time to 
work on the spec).

https://blueprints.launchpad.net/python-higgins/+spec/api-design

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovs-dpdk] suggested devStack local.conf settings for OpenStack Mitaka with DPDK

2016-06-16 Thread Montorsi, Francesco
Hi,
I would like to install OpenStack Mitaka with DevStack (on a single node) with 
neutron using OVS+DPDK (so far I have managed to create a Mitaka stack with 
devstack using neutron with "vanilla" OVS).
To be clear, I checked out the stable/mitaka devstack branch:
   git clone https://git.openstack.org/openstack-dev/devstack -b stable/mitaka

I'm wondering 3 things though:

1) Do you recommend any change to the networking-ovs-dpdk local.conf file 
advertised here:
   
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node
?
(it seems it was last updated 5 months ago, while Mitaka was released in 
April, 2 months ago)

2) What's the syntax for the ML2_VLAN_RANGES option? Is
 ML2_VLAN_RANGES=default:1000-1100
ok?

3) What happens to the bridges that get listed in OVS_BRIDGE_MAPPINGS? 
I understand they will be bound to the DPDK driver... but does this mean that 
VM traffic will use such a bridge for exiting the devstack node (e.g., for 
Internet access)?


Thanks a lot for your help,
Francesco Montorsi



PS:
I noticed that the links to the local.conf files in the following webpages are 
out-of-date:
   
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/usage.rst
   https://github.com/openstack/networking-ovs-dpdk 



Twitter: @Empirix
Website: http://www.empirix.com
Blog: http://blog.empirix.com

600 Technology Park Drive, Suite 100
Billerica, MA 01821
United States


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Cannot setup IPSEC transport mode between VMS

2016-06-16 Thread Yitao Jiang
Hi all,

In Liberty, I want to set up IPSEC between VMs using transport mode with
the ESP protocol.


Just as the diagram above described, only 10.0.0.4 accesses 10.0.0.5/10.0.0.6.

If I set up the IPSEC using manually configured key management
(ipsec-tools/setkey under Ubuntu), the VM at 10.0.0.4 cannot reach
10.0.0.5, nor 10.0.0.6. But if 10.0.0.5/10.0.0.6 first sends a request
to 10.0.0.4, such as a ping, then 10.0.0.4 can reach them.

here's the related OpenStack info

OpenStack: Liberty
Neutron: ML2 LinuxBridge with VxLAN encapsulation.

And if I set up the same topology as above under VirtualBox on my laptop
with the same IPSEC configuration, there's no such issue.

-- 

Regards,

Yitao
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][all] VOTE to expand the Requirements Team

2016-06-16 Thread Davanum Srinivas
Folks,

At Austin the Release Management team reached a consensus to spin off
with some new volunteers to take care of the requirements process and
repository [1]. The following folks showed up and worked with me on
getting familiar with the issues/problems/tasks (see [1] and [2]) and
helping with the day-to-day work.

Matthew Thode (prometheanfire)
Dirk Mueller (dirk)
Swapnil Kulkarni (coolsvap)
Tony Breeds (tonyb)
Thomas Bechtold (tbechtold)

So, please cast your VOTE to grant them +2/core rights on the
requirements repository and keep up the good work w.r.t speeding up
reviews, making sure new requirements don't break etc.

Also, please note that Thierry has been happy enough with our work to
step down from core responsibilities :) Many thanks Thierry for
helping with this effort and guidance. I'll make all the adds/removals to
the requirements-core team when this VOTE passes.

Thanks,
Dims

[1] https://etherpad.openstack.org/p/newton-relmgt-plan
[2] https://etherpad.openstack.org/p/requirements-tasks
[3] https://etherpad.openstack.org/p/requirements-cruft

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Proposed TripleO core changes

2016-06-16 Thread Dan Prince
On Thu, 2016-06-09 at 15:03 +0100, Steven Hardy wrote:
> Hi all,
> 
> I've been in discussion with Martin André and Tomas Sedovic, who are
> involved with the creation of the new tripleo-validations repo[1]
> 
> We've agreed that rather than create another gerrit group, they can
> be
> added to tripleo-core and agree to restrict +A to this repo for the
> time
> being (hopefully they'll both continue to review more widely, and
> obviously
> Tomas is a former TripleO core anyway, so welcome back! :)
> 
> If folks feel strongly we should create another group we can, but
> this
> seems like a low-overhead approach, and well aligned with the scope
> of the
> repo, let me know if you disagree.


For more isolated projects that can be used standalone I have a slight
preference for sub-teams. I recently proposed this for os-net-config:

https://review.openstack.org/#/c/307975/

If we think tripleo-validations is more of a "TripleO" thing and won't
be useful outside of TripleO proper then I think adding them to
tripleo-core is probably fine. If our intent is to make this a generic
set of validations then perhaps a subteam makes sense.

For now, I'm totally fine adding Andre and Tomas to TripleO though too.

> 
> Also, while reviewing the core group[2] I noticed the following
> members who
> are no longer active and should probably be removed:
> 
> - Radomir Dopieralski
> - Martyn Taylor
> - Clint Byrum

+1 for these changes.

> 
> I know Clint is still involved with DiB (which has a separate core
> group),
> but he's indicated he's no longer going to be directly involved in
> other
> tripleo development, and AFAIK neither Martyn or Radomir are actively
> involved in TripleO reviews - thanks to them all for their
> contribution,
> we'll gladly add you back in the future should you wish to return :)
> 
> Please let me know if there are any concerns or objections, if there
> are
> none I will make these changes next week.
> 
> Thanks,
> 
> Steve
> 
> [1] https://github.com/openstack/tripleo-validations
> [2] https://review.openstack.org/#/admin/groups/190,members
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] Panko has been imported

2016-06-16 Thread Julien Danjou
Hi team,

Panko has been imported and is now ready to be worked on.

Our next steps should be to set up devstack jobs for testing. The
devstack plugin should be working; it is just missing an integration with
Ceilometer so that the event dispatcher is set to 'panko' when panko is
installed via devstack alongside Ceilometer.
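
For anyone who wants to try it locally, enabling the plugin in a devstack
local.conf should just use the standard plugin mechanism, something like this
(repo URL assumed from the usual naming convention):

    enable_plugin panko git://git.openstack.org/openstack/panko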

Anyone volunteering to help with setting up those jobs?

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] Potential direction proposal - unified OpenStack containers

2016-06-16 Thread Sergey Lukjanov
Hi folks,

I'd like to share some thoughts about the OpenStack containerization in the
form of a specification for Kolla and have some discussion on the proposed
items in the review.

In general it's a meta spec to describe a potential direction for Kolla to
provide unified, deployment-tool-agnostic containers for anyone to use.

We very much welcome your feedback on the following spec.

Link: https://review.openstack.org/330575

Thanks.


-- 
Sincerely yours,
Sergey Lukjanov
Principal Software Engineer
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] What's Happening in OpenStack-Ansible - June 2016

2016-06-16 Thread Major Hayden
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hello there,

One of the feedback items that came out of the OpenStack Summit in Austin was 
around the constant stream of changes throughout OpenStack-Ansible and how to 
best keep up with them. That could be said about OpenStack in general as well, 
but I decided to take some action and make these changes easier to understand 
and digest.

Hugh Blemings started his "Last Week on openstack-dev" last year and I found 
myself reading it each week to keep up with some of the bigger developments. I 
borrowed Hugh's strategy and format to create the "What's Happening in 
OpenStack-Ansible" (also called the "WHOA") report.

I will publish this report monthly (somewhere in the middle of the month) on my 
blog. If you find anything that belongs in the report, feel free to let me know!

Without further ado, the first edition for June 2016 is here:

  https://major.io/2016/06/15/whats-happening-openstack-ansible-whoa-june-2016/

- --
Major Hayden
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJXYqkyAAoJEHNwUeDBAR+xCUsP+wXzKva4jeNCpjQgQhj5m/3L
+vEhsProy9pIlouqJ+ITZ2MBMuy/u8rlvhoH//uQJ3atIY2ca8zV19hV2w80pRRR
wBSB8h7jSc7ubtvlIIFZUK/1nMa06LV4EKihmuFLpamzfJMxE4vNuleZTnmAIe+S
C7HowoBLYZb6sM72Zcl9vtMe+mAH1d8UVv5fDTx+oarz1ynWpePJI3LyW1wvpirA
MJ4r2JYPkeODZqRAOK4wPFf/8WVZ5F2OeIMOAq15PdPMWCnvLRjgO8XiOceUrjx2
p7grqbXFKH8nFGLKQ606wTskmJdABFFOIh7x0jvPdOrreEkDpdnwAswCBIeZVsh3
Y5qxU/eEX4ARiWoY/9WJHuda3IovMpqKGrgR5ioSeoa+Pa4NDV2wavEVFy9pC9T4
3erVo/aqmopsNGQaNupYgUOZns0EL26l85DY+mWdlTERf5WdZFv1CtIDiUWQk568
lHQ+EOhELCCDL8iJS5rNJW17B1udjNHnRIXCgsVTUBdhGvpVuylJxaoXQ0lSZWTi
WMK3C6SIMN4VHRhQmzBK/K2w+3Tm9TIq7hdRgbKXIBvEwNFipXcGM8W4jXqEhCyD
nYpu3TpRIxWbyeAYk9CTuDH5DT9qerIORfRtebscoeLAflOEc1ssxLach71PNYWt
RzTO74nstbR0dp+f6mnS
=U2zE
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [api] running gabbi tests in a concurrent environment

2016-06-16 Thread Chris Dent


Overnight, gate failures for the gabbi tests in the telemetry
projects exposed that I'd done a rather poor job of highlighting
the issues with, and the proper ways to manage, running gabbi tests
under a concurrent runner.

Gabbi has always worked concurrently as long as the test runner was
made aware of how to group the test suites. The 1.22.0 release of
gabbi had some refactoring that changed a module name, which made
things blow up. If you are using gabbi in tests you might want to
look at these fixes:

https://review.openstack.org/#/q/topic:cd/fix-gabbi-1.22

to see if it is relevant[1].
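
For anyone adjusting their own setup: the usual approach is a group_regex in
.testr.conf so that all the tests generated from one YAML file stay in the
same worker process. A sketch (the module name that appears in the test ids
depends on the gabbi version in use, so treat the regex as illustrative):

    [DEFAULT]
    test_command=${PYTHON:-python} -m subunit.run discover -t . ./mytests $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list
    group_regex=gabbi\.(?:driver|suitemaker)\.test_gabbi_([^_]+)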

The 1.23.0 release of gabbi (RSN) will have more visible docs on this
issue and warn when gabbit filenames make grouping more difficult.

[1] I used codesearch.o.o to find the projects that appear to be
using gabbi, but I may have missed some.

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Hayes, Graham
On 16/06/2016 00:30, Matthew Treinish wrote:
> On Wed, Jun 15, 2016 at 09:10:30AM -0400, Doug Hellmann wrote:
>> Excerpts from Chris Hoge's message of 2016-06-14 16:37:06 -0700:
>>> Top posting one note and direct comments inline, I’m proposing
>>> this as a member of the DefCore working group, but this
>>> proposal itself has not been accepted as the forward course of
>>> action by the working group. These are my own views as the
>>> administrator of the program and not that of the working group
>>> itself, which may independently reject the idea outside of the
>>> response from the upstream devs.
>>>
>>> I posted a link to this thread to the DefCore mailing list to make
>>> that working group aware of the outstanding issues.
>>>
 On Jun 14, 2016, at 3:50 PM, Matthew Treinish  wrote:

 On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
>>> Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
 On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:



>>> The current active guidelines cover icehouse through mitaka. The release
>>> of 2016.08 will change that to cover juno through mitaka (with newton
>>> as an add-on to 2016.08 when it’s released). There’s overlap between
>>> the guidelines, so 2016.01 covers juno through mitaka while 2016.08
>>> will cover kilo through newton. Essentially two years of releases.
>>>
> We may also need to consider that test implementation details may
> change, and have a review process within DefCore to help expose
> those changes to make them clearer to deployers.
>
> Fixing the process issue may also mean changing the way we implement
> things in Tempest. In this case, adding a flag helps move ahead
> more smoothly. Perhaps we adopt that as a general policy in the
> future when we make underlying behavioral changes like this to
> existing tests.  Perhaps instead we have a policy that we do not
> change the behavior of existing tests in such significant ways, at
> least if they're tagged as being used by DefCore. I don't know --
> those are things we need to discuss.

 Sure I agree, this thread raises larger issues which need to be figured 
 out.
 But, that is probably an independent discussion.
>>>
>>> I’m beginning to wonder if we need to make DefCore use release
>>> branches then back-port bug-fixes and relevant features additions
>>> as necessary.

What I suggested when the TC decided on keeping the tests in tempest
was to keep them as a tempest plugin.

This would allow the tests to progress over time, and be versioned,
so that def-core 201x.y would be equal to def-core-tempest-plugin
version 201x.y.

This allows the project developers to continue to develop tests that
match their vision, without causing unforeseen breakages in def-core.

It also allows the def-core tests to target a wider range - tests
that are run currently have no guarantee that they will run against
Kilo, but the def-core plugin could be tested against known-good Kilo
clouds in its gate.

Is there any major blocker for moving them?

>> We should definitely have that conversation, to understand what
>> effect it would have both on Tempest and on DefCore.
>>
>
> While from a quick glance this would seem like it would solve some of the
> problems when you start to dig into it you'll see that it actually wouldn't,
> and would just end up causing more issues in the long run. Branchless tempest
> was originally started back at the icehouse release and was implemented to
> actually enforce the API is the same across release boundaries. We had hit 
> many
> issues where incompatibilities inadvertently were introduced into projects' 
> APIs
> which weren't caught because of divergence between tempest master and tempest
> stable/*. If you decide to branch you'll end up having to do the same thing or
> you'll risk the same types of regressions. Testing stable branches against
> master puts you in a weird place where upstream changes could break things for
> your branch and you'd never know until you tried to land something. From a
> tempest dev standpoint branching quite frankly doesn't make any sense.
>
> Also, the other thing to consider is that there wouldn't actually be anything 
> to
> branch on. Tempest always supports whatever releases the community is
> supporting. Every commit is tested against all the branches of OpenStack that
> still exist. When we EOL a stable branch there is no longer anything to run
> tests against. Assuming you're primarily motivated by the fact defcore 
> attempts
> to support branches that no longer have upstream support you wouldn't actually
> be able to do this by branching. When a branch is removed there isn't anything
> for you to test tempest changes against, and merging code without 

Re: [openstack-dev] [kolla] Stability and reliability of gate jobs

2016-06-16 Thread Steven Dake (stdake)
David,

The gates are unreliable for a variety of reasons - some we can fix - some
we can't directly.

RDO rabbitmq introduced IPv6 support to erlang, which caused our gate
reliability to drop dramatically.  Prior to this change, our gate was running
at 95% reliability or better - assuming the code wasn't busted.
The gate gear is different - meaning different setup.  We have been
working on debugging all these various gate provider issues with the infra
team and I think that is mostly concluded.
The gate changed to something called bindep, which has been less reliable
for us.
We do not have mirrors of CentOS repos - although it is in the works.
Mirrors will ensure that images always get built.  At the moment many of
the gate failures are triggered by build failures (the mirrors are too
busy).
We do not have mirrors of the other 5-10 repos and files we use.  This
causes more build failures.

Complicating matters, any of these 5 things above can crater one gate job
(of which we run about 15), which would cause the entire gate to fail (if
they were voting).  I really want a voting gate for kolla's jobs.  I super
want it.  The reason we can't make the gates voting at this time is
because of the sheer unreliability of the gate.

If anyone is up for a thorough analysis of *why* the gates are failing,
that would help us fix them.

Regards
-steve

On 6/15/16, 3:27 AM, "Paul Bourke"  wrote:

>Hi David,
>
>I agree with this completely. Gates continue to be a problem for Kolla,
>reasons why have been discussed in the past but at least for me it's not
>clear what the key issues are.
>
>I've added this item to agenda for todays IRC meeting (16:00 UTC -
>https://wiki.openstack.org/wiki/Meetings/Kolla). It may help if before
>hand we can brainstorm a list of the most common problems here beforehand.
>
>To kick things off, rabbitmq seems to cause a disproportionate amount of
>issues, and the problems are difficult to diagnose, particularly when
>the only way to debug is to submit "DO NOT MERGE" patch sets over and
>over. Here's an example of a failed centos binary gate from a simple
>patch set I was reviewing this morning:
>http://logs.openstack.org/06/329506/1/check/gate-kolla-dsvm-deploy-centos-
>binary/3486d03/console.html#_2016-06-14_15_36_19_425413
>
>Cheers,
>-Paul
>
>On 15/06/16 04:26, David Moreau Simard wrote:
>> Hi Kolla o/
>>
>> I'm writing to you because I'm concerned.
>>
>> In case you didn't already know, the RDO community collaborates with
>> upstream deployment and installation projects to test its packaging.
>>
>> This relationship is beneficial in a lot of ways for both parties, in
>>summary:
>> - RDO has improved test coverage (because it's otherwise hard to test
>> different ways of installing, configuring and deploying OpenStack by
>> ourselves)
>> - The RDO community works with upstream projects (deployment or core
>> projects) to fix issues that we find
>> - In return, the collaborating deployment project can feel more
>> confident that the RDO packages it consumes have already been tested
>> using its platform and should work
>>
>> To make a long story short, we do this with a project called WeIRDO
>> [1] which essentially runs gate jobs outside of the gate.
>>
>> I tried to get Kolla in our testing pipeline during the Mitaka cycle.
>> I really did.
>> I contributed the necessary features I needed in Kolla in order to
>> make this work, like the configurable Yum repositories for example.
>>
>> However, in the end, I had to put off the initiative because the gate
>> jobs were very flappy and unreliable.
>> We cannot afford to have a job that is *expected* to flap in our
>> testing pipeline, it leads to a lot of wasted time, effort and
>> resources.
>>
>> I think there's been a lot of improvements since my last attempt but
>> to get a sample of data, I looked at ~30 recently merged reviews.
>> Of 260 total build/deploy jobs, 55 (or over 20%) failed -- and I
>> didn't account for rechecks, just the last known status of the check
>> jobs.
>> I put up the results of those jobs here [2].
>>
>> In the case that interests me most, CentOS binary jobs, it's 5
>> failures out of 50 jobs, so 10%. Not as bad but still a concern for
>> me.
>>
>> Other deployment projects like Puppet-OpenStack, OpenStack Ansible,
>> Packstack and TripleO have quite a bit of *voting* integration testing
>> jobs.
>> Why are Kolla's jobs non-voting and so unreliable ?
>>
>> Thanks,
>>
>> [1]: https://github.com/rdo-infra/weirdo
>> [2]: 
>>https://docs.google.com/spreadsheets/d/1NYyMIDaUnlOD2wWuioAEOhjeVmZe7Q8_z
>>dFfuLjquG4/edit#gid=0
>>
>> David Moreau Simard
>> Senior Software Engineer | Openstack RDO
>>
>> dmsimard = [irc, github, twitter]
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

Re: [openstack-dev] [murano] Nominating Alexander Tivelkov and Zhu Rong for murano cores

2016-06-16 Thread Victor Ryzhenkin
It’s great to see this happen!
+1 for adding both! Well deserved, folks!

Also agreed to remove Steve from murano-core.

-- 
Victor Ryzhenkin
Quality Assurance Engineer
freerunner on #freenode

On 16 June 2016 at 14:51:23, Tetiana Lashchova (tlashch...@mirantis.com)
wrote:

+1 for both

On Thu, Jun 16, 2016 at 11:26 AM, Nikolay Starodubtsev 
 wrote:
+1
Well deserved!
                                  
Nikolay Starodubtsev
Software Engineer
Mirantis Inc.

Skype: dark_harlequine1

2016-06-15 19:42 GMT+03:00 Serg Melikyan :
+1

Finally!

On Wed, Jun 15, 2016 at 3:33 AM, Ihor Dvoretskyi  
wrote:
+1 for Alexander Tivelkov.

Good effort.

On Wed, Jun 15, 2016 at 1:08 PM, Artem Silenkov  wrote:
Hello! 

+1

Regards, 
Artem Silenkov
---
paas-team

On Wed, Jun 15, 2016 at 12:56 PM, Dmytro Dovbii  wrote:
+1

On 15 June 2016 at 6:47, "Yang, Lin A" wrote:

+1 both for Alexander Tivelkov and Zhu Rong. Well deserved.

Regards,
Lin Yang

On Jun 15, 2016, at 3:17 AM, Kirill Zaitsev  wrote:

Hello team, I want to announce the following changes to murano core team:

1) I’d like to nominate Alexander Tivelkov for murano core. He has been part of 
the project for a very long time and has contributed to almost every part of 
murano. He has been fully committed to murano during mitaka cycle and continues 
doing so during newton [1]. His work on the scalable framework architecture is 
one of the most notable features scheduled for N release.

2) I’d like to nominate Zhu Rong for murano core. Last time he was nominated I 
-1’ed the proposal, because I believed he needed to start making more 
substantial contributions. I’m sure that Zhu Rong showed his commitment [2] to 
the murano project and I’m happy to nominate him myself. His work on 
separating cfapi from the murano api and his contributions aimed at addressing 
murano’s technical debt are much appreciated.

3) Finally I would like to remove Steve McLellan[3] from murano core team. 
Steve has been part of murano from its very early stages. However his focus 
has since shifted and he hasn’t been active in murano during the last couple of 
cycles. I want to thank Steve for his contributions and express the hope to see 
him back in the project in the future.


Murano team, please respond with +1/-1 to the proposed changes.

[1] http://stackalytics.com/?user_id=ativelkov&metric=marks
[2] http://stackalytics.com/?metric=marks&user_id=zhu-rong
[3] http://stackalytics.com/?user_id=sjmc7
-- 
Kirill Zaitsev
Software Engineer
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,

Ihor Dvoretskyi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Serg Melikyan, Development Manager at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  

[openstack-dev] networking-sfc: unable to use SFC (ovs driver) with multiple networks

2016-06-16 Thread Banszel, MartinX
Hello,

I'd need some help with using the SFC implementation in openstack.

I use liberty version of devstack + liberty branch of networking-sfc.

It's not clear to me if the SFC instance and its networks should be separated
from the remaining virtual network topology or if they should be connected to it.

E.g. consider the following topology, where the SFC and its networks net2 and
net3 (one for the ingress port, one for the egress port) are connected to the
tenant's networks. I know that all three instances can share one network, but a
use case I am trying to implement requires that every instance has its own
separate network and that there are different networks for the ingress and
egress ports of the SF.

 +---+ +-+ +---+
 | VMSRC | |  VMSFC  | | VMDST |
 +---+---+ +--+---+--+ +---+---+
 | p1 (1.1.1.1) p2|   |p3  |p4 (4.4.4.4)
 ||   ||
-++--- net1   |   |  --+---+- net4
  |   |   ||
  |  ---+-+---) net2   |
  |  ---)--+--+ net3   |
  | |  |   |
  |  +--+--+--+|
  +--+ ROUTER ++
 ++


All networks are connected to a single router ROUTER. I created a flow
classifier that matches all traffic going from VMSRC to VMDST
(--logical-source-port p1 --source-ip-prefix=1.1.1.1/32
--destination-ip-prefix=4.4.4.4/32), port pair p2,p3, a port pair group
containing this port pair and a port chain containing this port pair group and
flow classifier.

If I try to ping the 4.4.4.4 address from VMSRC, the traffic is correctly
steered through the VMSFC (where just ip_forwarding is set to 1) and forwarded
back through the p3 port to the ROUTER.  The router finds that there are
packets with source address 1.1.1.1 coming from a port where they should not
(the router expects those packets on the net1 interface); they don't pass the
reverse path filter and the router drops them.

It works when I turn rp_filter off via a sysctl command in the router
namespace on the controller. But I don't want to do this -- I expect the SFC
to work without such changes.
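
(For reference, the workaround I mean is along these lines, run on the
controller; the namespace name is illustrative:

    ip netns exec qrouter-<router-uuid> sysctl -w net.ipv4.conf.all.rp_filter=0

Because the effective rp_filter value is the maximum of the 'all' and
per-interface settings, the qr-* interfaces in the namespace may need the
same treatment.)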

Is such topology supported? What should the topology look like?

I have noticed that when I disconnect net2 and net3 from the ROUTER, and add
new routers ROUTER2 and ROUTER3 to the net2 and net3 networks respectively,
without connecting them in any way to the ROUTER or the rest of the topology,
OVS is able to send the traffic to the p2 port on the ingress side. However, on
the egress side the packet is routed to ROUTER3, which drops it as it doesn't
have any route for it.

Thanks for any hints!

Best regards
Martin Banszel
--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263


This e-mail and any attachments may contain confidential material for the sole
use of the intended recipient(s). Any review or distribution by others is
strictly prohibited. If you are not the intended recipient, please contact the
sender and delete all copies.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Nominating Alexander Tivelkov and Zhu Rong for murano cores

2016-06-16 Thread Tetiana Lashchova
+1 for both

On Thu, Jun 16, 2016 at 11:26 AM, Nikolay Starodubtsev <
nstarodubt...@mirantis.com> wrote:

> +1
> Well deserved!
>
>
>
> Nikolay Starodubtsev
>
> Software Engineer
>
> Mirantis Inc.
>
>
> Skype: dark_harlequine1
>
> 2016-06-15 19:42 GMT+03:00 Serg Melikyan :
>
>> +1
>>
>> Finally!
>>
>> On Wed, Jun 15, 2016 at 3:33 AM, Ihor Dvoretskyi <
>> idvorets...@mirantis.com> wrote:
>>
>>> +1 for Alexander Tivelkov.
>>>
>>> Good effort.
>>>
>>> On Wed, Jun 15, 2016 at 1:08 PM, Artem Silenkov 
>>> wrote:
>>>
 Hello!

 +1

 Regards,
 Artem Silenkov
 ---
 paas-team

 On Wed, Jun 15, 2016 at 12:56 PM, Dmytro Dovbii 
 wrote:

> +1
> On 15 June 2016 at 6:47, "Yang, Lin A" wrote:
>
> +1 both for Alexander Tivelkov and Zhu Rong. Well deserved.
>>
>> Regards,
>> Lin Yang
>>
>> On Jun 15, 2016, at 3:17 AM, Kirill Zaitsev 
>> wrote:
>>
>> Hello team, I want to announce the following changes to murano core
>> team:
>>
>> 1) I’d like to nominate Alexander Tivelkov for murano core. He has
>> been part of the project for a very long time and has contributed to 
>> almost
>> every part of murano. He has been fully committed to murano during mitaka
>> cycle and continues doing so during newton [1]. His work on the scalable
>> framework architecture is one of the most notable features scheduled for 
>> N
>> release.
>>
>> 2) I’d like to nominate Zhu Rong for murano core. Last time he was
>> nominated I -1’ed the proposal, because I believed he needed to start
>> making more substantial contributions. I’m sure that Zhu Rong showed his
>> commitment [2] to the murano project and I’m happy to nominate him myself.
>> His work on separating cfapi from the murano api and his contributions
>> aimed at addressing murano’s technical debt are much appreciated.
>> addressing murano’s technical debt are much appreciated.
>>
>> 3) Finally I would like to remove Steve McLellan[3] from murano core
>> team. Steve has been part of murano from very early stages of it. However
>> his focus has since shifted and he hasn’t been active in murano during 
>> last
>> couple of cycles. I want to thank Steve for his contributions and express
>> hope to see him back in the project in future.
>>
>>
>> Murano team, please respond with +1/-1 to the proposed changes.
>>
>> [1] http://stackalytics.com/?user_id=ativelkov&metric=marks
>> [2] http://stackalytics.com/?metric=marks&user_id=zhu-rong
>> [3] http://stackalytics.com/?user_id=sjmc7
>> --
>> Kirill Zaitsev
>> Software Engineer
>> Mirantis, Inc
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Best regards,
>>>
>>> Ihor Dvoretskyi
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Serg Melikyan, Development Manager at Mirantis, Inc.
>> http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> 

[openstack-dev] [nova] A primer on data structures used by Nova to represent block devices

2016-06-16 Thread Matthew Booth
The purpose of this mail is to share what I have learned about the various
data structures used by Nova for representing block devices. I compiled
this for my own use, but I hope it might be useful for others, and that
other might point out any errors.

As is usual when I'm reading code like this, I've created some cleanup
patches to address nits or things I found confusing as I went along. I've
posted review links at the end.

A note on reading this. I refer to local disks and volumes. A local disk in
this context is any disk directly managed by nova compute. If nova is
configured to use Rbd or NFS for instance disks these disks won't actually
be local, but they are still managed locally and referred to as local disks.

There are 4 relevant data structures. 2 of these are general, 2 are
specific to the libvirt driver.

BlockDeviceMapping
===

The 'top level' data structure is the block device mapping object. It is a
NovaObject, persisted in the db. Current code creates a BDM object for
every disk associated with an instance, whether it is a volume or not. I
can't confirm (or deny) that this has always been the case, though, so
there may be instances which still exist which have some BDMs missing.

The BDM object describes properties of each disk as specified by the user.
It is initially created by the user and passed to compute api. Compute api
transforms and consolidates all BDMs to ensure that all disks, explicit or
implicit, have a BDM, then persists them. Look in nova.objects.block_device
for all BDM fields, but in essence they contain information like
(source_type='image', destination_type='local', image_id='...'),
or equivalents describing ephemeral disks, swap disks or volumes, and some
associated data.
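
To make that concrete, here is a rough sketch of the kind of mapping a user
might pass for a boot-from-volume disk (field names follow the
block_device_mapping_v2 API; the values are purely illustrative):

    # Illustrative only -- see nova.objects.block_device for the full
    # field list.
    bdm = {
        'source_type': 'volume',       # where the disk's content comes from
        'destination_type': 'volume',  # where the disk ends up
        'uuid': '<volume-uuid>',       # ID of the source object
        'boot_index': 0,               # 0 marks the boot disk
        'delete_on_termination': False,
    }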

Reader note: BDM objects are typically stored in variables called 'bdm'
with lists in 'bdms', although this is obviously not guaranteed (and
unfortunately not always true: bdm in libvirt.block_device is usually a
DriverBlockDevice object). This is a useful reading aid (except when it's
proactively confounding), as there is also something else typically called
'block_device_mapping' which is not a BlockDeviceMapping object.

block_device_info
=================

Drivers do not directly use BDM objects. Instead, they are transformed into
a different driver-specific representation. This representation is normally
called 'block_device_info', and is generated by
virt.driver.get_block_device_info(). Its output is based on data in BDMs.
block_device_info is a struct containing:

  {
    'root_device_name': hypervisor's notion of the root device's name
    'ephemerals': A list of all ephemeral disks
    'block_device_mapping': A list of all cinder volumes
    'swap': A swap disk, or None if there is no swap disk
  }

The disks are represented in one of two ways, depending on the specific
driver currently in use. There's the 'new' representation, used by the
libvirt and vmwareapi drivers, and the 'legacy' representation used by all
other drivers. The legacy representation is a plain dict. It does not
contain the same information as the new representation. I won't cover it
further here as I haven't looked at it in detail.

The new representation involves subclasses of
nova.virt.block_device.DriverBlockDevice. As well as containing different
fields, the new representation, significantly, also retains a reference to
the underlying BDM object. This means that by manipulating the
DriverBlockDevice object, the driver is able to persist data to the BDM
object in the db.
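
A minimal sketch of that persistence path (treat the field and method names
as illustrative rather than authoritative):

    def refresh_connection_info(driver_bdm, new_connection_info):
        # driver_bdm behaves like a dict of driver-facing fields, but it
        # keeps a reference to the BlockDeviceMapping it was built from.
        driver_bdm['connection_info'] = new_connection_info
        # save() pushes the changed fields through to the underlying BDM
        # object and persists it to the db.
        driver_bdm.save()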

Reader beware: common usage is to pull 'block_device_mapping' out of this
dict into a variable called 'block_device_mapping'. This is not a
BlockDeviceMapping object, or list of them.

Reader beware: if block_device_info was passed to the driver by compute
manager, it was probably generated by _get_instance_block_device_info(). By
default, this function filters out all cinder volumes from
block_device_mapping which don't currently have connection_info. In other
contexts this filtering will not have happened, and block_device_mapping
will contain all volumes.

Reader beware: unlike BDMs, block_device_info does not represent all disks
that an instance might have. Significantly, it will not contain any
representation of an image-backed local disk, i.e. the root disk of a
typical instance which isn't boot-from-volume. Other representations used
by the libvirt driver explicitly reconstruct this missing disk. I assume
other drivers must do the same.

instance_disk_info
==================

The driver api defines a method get_instance_disk_info, which returns a
json blob. The compute manager calls this and passes the data over rpc
between calls without ever looking at it. This is driver-specific opaque
data. It is also only used by the libvirt driver, despite being part of the
api for all drivers. Other drivers do not return any data. The most
interesting aspect of instance_disk_info is that it is generated from the
libvirt XML, not from nova's state.
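
For reference, the blob is a JSON-serialised list with one entry per disk,
roughly of the following shape (key names as I understand the libvirt
driver's output; values made up):

    instance_disk_info = [{
        'type': 'qcow2',
        'path': '/var/lib/nova/instances/<uuid>/disk',
        'virt_disk_size': 21474836480,           # size the guest sees
        'backing_file': '/var/lib/nova/instances/_base/<image>',
        'disk_size': 1053757440,                 # bytes currently on disk
        'over_committed_disk_size': 20420079616,
    }]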

Reader 

Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Amrith Kumar
Thanks Thierry, I did the same (again) :)

-amrith

> -Original Message-
> From: Thierry Carrez [mailto:thie...@openstack.org]
> Sent: Wednesday, June 15, 2016 12:22 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [all][tc] Require a level playing field for
> OpenStack projects
> 
> Amrith Kumar wrote:
> > Thanks for writing this up and for the interesting discussion that has
> come up in this ML thread.
> >
> > While I think I get the general idea of the motivation, I think the
> verbiage doesn't quite do justice to your intent.
> >
> > One area which I would like to highlight is the situation with the
> underlying operating system on which the software is to run. What if that
> is proprietary software? Consider support for (for example) running on Red
> Hat or the Windows operating systems. That would not be something that
> could be easily abstracted into a 'driver'.
> >
> > Another is the case of proprietary software; consider support in Trove
> for (for example) using the DB2 Express or the Vertica database. Clearly
> these are things where some have an advantage when compared to others.
> >
> > I therefore suggest the following amendment in
> https://review.openstack.org/#/c/329448/.
> >
> > * The project provides a level playing field for interested developers
> to collaborate. Where proprietary software, hardware, or other resources
> (including testing) are required, these should be reasonably accessible to
> interested contributors.
> 
> I replied to that on the review :)
> 
> --
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] consistency and exposing quiesce in the Nova API

2016-06-16 Thread Preston L. Bannister
I am hoping support for instance quiesce in the Nova API makes it into
OpenStack. To my understanding, this is existing functionality in Nova, just
not yet exposed in the public API. (I believe Cinder uses this via a
private Nova API.)

Much of the discussion is around disaster recovery (DR) and NFV - which is
not wrong, but might be muddling the discussion? Forget DR and NFV, for the
moment.

My interest is simply in collecting high quality backups of applications
(instances) running in OpenStack. (Yes, customers are deploying
applications into OpenStack that need backup - and at large scale. They
told us, *very* clearly.) Ideally, I would like to give the application a
chance to properly quiesce, so the on-disk state is most-consistent, before
collecting the backup.

The existing functionality in Nova should be at least a good start; it just
needs to be exposed in the public Nova API. (At least, this is my
understanding.)

Of course, good backups (however collected) allow you to build DR
solutions. My immediate interest is simply to collect high-quality backups.

The part in the blueprint about an atomic operation on a list of instances
... this might be overdoing things. First, if you have a set of related
instances, very likely there is a logical order in which they should be
quiesced. Some could be quiesced concurrently. Others might need to be
sequential.

Assuming the quiesce API *starts* the operation, and there is some means to
check for completion, then a single-instance quiesce API should be
sufficient. An API that is synchronous (waits for completion before
returning) would also be usable. (I am not picky - just want to collect
better backups for customers.)
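
To make the shape of that concrete, here is a sketch of what the
asynchronous variant might look like from a backup tool's point of view.
Every API name below is hypothetical (no such public API exists yet, which
is the point of this thread):

    import time

    def quiesced_snapshot(client, server_id, take_snapshot):
        # Hypothetical call: start the quiesce, then poll for completion.
        client.servers.quiesce(server_id)
        while client.servers.get(server_id).quiesce_state != 'quiesced':
            time.sleep(1)
        try:
            # Collect the backup while the on-disk state is consistent.
            take_snapshot(server_id)
        finally:
            # Hypothetical call: resume normal I/O no matter what.
            client.servers.unquiesce(server_id)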





On Sun, May 29, 2016 at 7:24 PM, joehuang  wrote:

> Hello,
>
> This spec[1] was to expose the quiesce/unquiesce API, which had been
> approved in Mitaka, but the code was not merged in time.
>
> The major consideration for this spec is to enable application-level
> consistency snapshots, so that the backup of the snapshot at the remote site
> could be recovered correctly in case of disaster recovery. Currently there
> are only single-VM-level consistency snapshots (through create image from
> VM), but that's not enough.
>
> First, disaster recovery is mainly an infrastructure-level action in case
> of catastrophic failures (flood, earthquake, propagating software fault):
> the cloud service provider recovers the infrastructure and the applications
> without help from each application owner. You cannot just recover OpenStack
> and then send a notification to all application owners asking them to
> restore their applications on their own. As the cloud service provider,
> they should be responsible for the infrastructure and application recovery
> in case of disaster.
>
> Second, this requirement is not about making OpenStack bend over for NFV.
> Although this requirement was first raised by OPNFV, it is a general
> requirement to have application-level consistency snapshots. For example,
> just using OpenStack itself as the application running in the cloud, we can
> deploy a different DB for each service, i.e. Nova has its own mysql server
> nova-db-VM, Neutron has its own mysql server neutron-db-VM. In fact, I have
> seen production deployments that divide the Nova/Cinder/Neutron databases
> onto different DB servers for scalability purposes. We know that there is
> interaction between Nova and Neutron when booting a new VM; during the VM
> booting period, some data will be in the memory cache of the
> nova-db-VM/neutron-db-VM, so if we just create snapshots of the volumes of
> nova-db-VM/neutron-db-VM in Cinder, the data which has not been flushed to
> disk will not be in the snapshot of the volumes. We can't make sure when
> the data in the memory cache will be flushed, so there is a random
> possibility that the data in the snapshot is not consistent with what
> happened in the virtual machines nova-db-VM/neutron-db-VM. In this case,
> Nova/Neutron may boot successfully in the disaster recovery site, but some
> port information may be corrupted because it was not flushed into the
> neutron-db-VM when the snapshot was taken, and in severe situations the VM
> may not even be able to recover successfully enough to run. Although there
> is one project called Dragon[2], Dragon can't guarantee the consistency of
> the application snapshot through the OpenStack API either.
>
> Third, for those applications which can decide which data and checkpoints
> should be replicated to the disaster recovery site, this is the third option
> discussed and described in our analysis:
> https://git.opnfv.org/cgit/multisite/tree/docs/requirements/multisite-vnf-gr-requirement.rst.
> But unfortunately in Cinder, after volume replication V2.1 was developed,
> tenant-granularity volume replication is still being discussed, and is
> still not at the single-volume level. And just as mentioned in the first
> point, both the application level and infrastructure level are 

Re: [openstack-dev] [tempest][SR-IOV] tempest breaks Mellanox CI

2016-06-16 Thread Lenny Verkhovsky
Thanks,
We are investigating the failure.

From: Andrea Frittoli [mailto:andrea.fritt...@gmail.com]
Sent: Thursday, June 16, 2016 2:02 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [tempest][SR-IOV] tempest breaks Mellanox CI


On Thu, Jun 16, 2016 at 11:16 AM Moshe Levi 
> wrote:

Hi all,



A recent change [1] in tempest broke all Mellanox CIs.

This is the second time it happened.

After the first time it happened, we decided that Mellanox CI would comment
on tempest.

This time I saw that Mellanox CI commented on that patch with a failure, but
the patch still got approved - [2]. Enabling Mellanox CI as commenting on
tempest requires physical resources such as servers/NICs because it tests
SR-IOV.

So I am wondering what can be done in the future to prevent this from
happening again.
A message to the DL is a good way to raise awareness on this.
Ensuring job stability is also important; too many failures will decrease
the attention paid to it - that said, I don't have numbers for Mellanox CI
failure rates, so it may be that the pass rate is already good enough.



Anyway we proposed the following fix to tempest [3]
Thanks for fixing this, and sorry about the inconvenience.
Even with the fix in, the Mellanox CI job failed, which is unfortunate.
I issued a re-check.

Andrea Frittoli





[1] - https://review.openstack.org/#/c/320495/

[2] - 
http://13.69.151.247/95/320495/20/check-sriov-tempest/Tempest-Sriov/825304c/testr_results.html.gz

[3] - https://review.openstack.org/#/c/330331/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][SR-IOV] tempest breaks Mellanox CI

2016-06-16 Thread Andrea Frittoli
On Thu, Jun 16, 2016 at 11:16 AM Moshe Levi  wrote:

> Hi all,
>
>
>
> A recent change [1] in tempest broke all Mellanox CIs.
>
> This is the second time it happened.
>
> After the first time it happened, we decided that Mellanox CI would
> comment on tempest.
>
> This time I saw that Mellanox CI commented on that patch with a failure,
> but the patch still got approved - [2]. Enabling Mellanox CI as commenting
> on tempest requires physical resources such as servers/NICs because
> it tests SR-IOV.
>
> So I am wondering what can be done in the future to prevent this from
> happening again.
>
A message to the DL is a good way to raise awareness on this.
Ensuring job stability is also important; too many failures will
decrease the attention paid to it - that said, I don't have numbers for Mellanox
CI failure rates, so it may be that the pass rate is already good enough.

>
>
> Anyway we proposed the following fix to tempest [3]
>
> Thanks for fixing this, and sorry about the inconvenience.
Even with the fix in, the Mellanox CI job failed, which is unfortunate.
I issued a re-check.

Andrea Frittoli

>
>
>
>
> [1] - https://review.openstack.org/#/c/320495/
>
> [2] -
> http://13.69.151.247/95/320495/20/check-sriov-tempest/Tempest-Sriov/825304c/testr_results.html.gz
>
> [3] - https://review.openstack.org/#/c/330331/
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Higgins][Zun] Project roadmap

2016-06-16 Thread Pengfei Ni
Hello, everyone,

Hypernetes has done some work similar to this project, namely:

- Leverage Neutron for container networking
- Leverage Cinder for storage
- Leverage Keystone for auth
- Leverage HyperContainer for a hypervisor-based container runtime

We could help to provide hypervisor-based container runtime
(HyperContainer) integration for Zun.

See https://github.com/hyperhq/hypernetes and
http://blog.kubernetes.io/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes.html
for more information about Hypernetes, and see
https://github.com/hyperhq/hyperd for more information about HyperContainer.


Best regards.


---
Pengfei Ni
Software Engineer @Hyper


2016-06-13 6:10 GMT+08:00 Hongbin Lu :

> Hi team,
>
>
>
> During the team meetings these weeks, we collaborated on the initial project
> roadmap. I have summarized it below. Please review.
>
>
>
> * Implement a common container abstraction for different container
> runtimes. The initial implementation will focus on supporting basic
> container operations (i.e. CRUD).
>
> * Focus on non-nested containers use cases (running containers on physical
> hosts), and revisit nested containers use cases (running containers on VMs)
> later.
>
> * Provide two sets of APIs to access containers: the Nova APIs and the
> Zun-native APIs. In particular, the Zun-native APIs will expose full
> container capabilities, and Nova APIs will expose capabilities that are
> shared between containers and VMs.
>
> * Leverage Neutron (via Kuryr) for container networking.
>
> * Leverage Cinder for container data volumes.
>
> * Leverage Glance for storing container images. If necessary, contribute
> to Glance for missing features (i.e. support for layered container images).
>
> * Support enforcing multi-tenancy by doing the following:
>
> ** Add configurable options for the scheduler to enforce that neighboring
> containers belong to the same tenant.
>
> ** Support hypervisor-based container runtimes.
>
>
>
> The following topics have been discussed, but the team cannot reach
> consensus on including them into the short-term project scope. We skipped
> them for now and might revisit them later.
>
> * Support proxying API calls to COEs.
>
> * Advanced container operations (i.e. keep container alive, load balancer
> setup, rolling upgrade).
>
> * Nested containers use cases (i.e. provision container hosts).
>
> * Container composition (i.e. support docker-compose like DSL).
>
>
>
> NOTE: I might have forgotten or misunderstood something. Please feel free to
> point out if anything is wrong or missing.
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable] Stable check of openstack/horizon failed

2016-06-16 Thread Matthias Runge
On 15/06/16 09:22, Matthias Runge wrote:
> On 15/06/16 09:20, Matthias Runge wrote:
>> On 15/06/16 08:14, A mailing list for the OpenStack Stable Branch test
>> reports. wrote:
>>> Build failed.
>>>
>>> - periodic-horizon-docs-liberty 
>>> http://logs.openstack.org/periodic-stable/periodic-horizon-docs-liberty/b029b21/
>>>  : SUCCESS in 7m 13s
>>> - periodic-horizon-python27-liberty 
>>> http://logs.openstack.org/periodic-stable/periodic-horizon-python27-liberty/45cf2ec/
>>>  : SUCCESS in 6m 55s
>>> - periodic-horizon-docs-mitaka 
>>> http://logs.openstack.org/periodic-stable/periodic-horizon-docs-mitaka/5083844/
>>>  : SUCCESS in 7m 00s
>>> - periodic-horizon-python27-mitaka 
>>> http://logs.openstack.org/periodic-stable/periodic-horizon-python27-mitaka/8dfd68c/
>>>  : FAILURE in 4m 27s
>>>
> This is due to a recent change in oslo_utils.
> 
> https://bugs.launchpad.net/horizon/+bug/1592553
> 
> we have a fix for current master branch
> https://review.openstack.org/329777
> 
> and it should be backported once the patch merges on master.

another backport to mitaka is required to unlock the horizon mitaka gate:

https://review.openstack.org/#/c/329145/



-- 
Matthias Runge 

Red Hat GmbH, http://www.de.redhat.com/, Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Michael Cunningham,
Michael O'Neill, Eric Shander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] what do you work on upstream of OpenStack?

2016-06-16 Thread Alexis Lee
Doug Hellmann said on Mon, Jun 13, 2016 at 03:11:17PM -0400:
> I'm trying to pull together some information about contributions
> that OpenStack community members have made *upstream* of OpenStack,
> via code, docs, bug reports, or anything else to dependencies that
> we have.

https://github.com/eventlet/eventlet/pull/309


Alexis (lxsli)
-- 
Nova developer, Hewlett-Packard Limited.
Registered Office: Cain Road, Bracknell, Berkshire RG12 1HN.
Registered Number: 00690597 England
VAT number: GB 314 1496 79

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] spawn a group of nodes on different availability zones

2016-06-16 Thread Zane Bitter

On 07/06/16 23:53, Hongbin Lu wrote:

Hi Heat team,

A question inline.

Best regards,
Hongbin


-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: March-03-16 3:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][heat] spawn a group of nodes on
different availability zones

On Wed, Mar 02, 2016 at 05:40:20PM -0500, Zane Bitter wrote:

On 02/03/16 05:50, Mathieu Velten wrote:

Hi all,

I am looking at a way to spawn nodes in different specified
availability zones when deploying a cluster with Magnum.

Currently Magnum directly uses predefined Heat templates with Heat
parameters to handle configuration.
I tried to reach my goal by sticking to this model, however I
couldn't find a suitable Heat construct that would allow that.

Here are the details of my investigation :
- OS::Heat::ResourceGroup doesn't allow specifying a list as a
variable that would be iterated over, so we would need one
ResourceGroup per AZ
- OS::Nova::ServerGroup only allows restriction at the hypervisor
level
- OS::Heat::InstanceGroup has an AZs parameter but it is marked
unimplemented, and is CFN-specific.
- OS::Nova::HostAggregate only seems to allow adding some metadata
to a group of hosts in a defined availability zone
- the repeat function only works inside the properties section of a
resource and can't be used at the resource level itself, hence
something like this is not allowed:

resources:
  repeat:
    for_each:
      <%az%>: { get_param: availability_zones }
    template:
      rg-<%az%>:
        type: OS::Heat::ResourceGroup
        properties:
          count: 2
          resource_def:
            type: hot_single_server.yaml
            properties:
              availability_zone: <%az%>


The only possibility that I see is generating a ResourceGroup per AZ,
but it would induce some big changes in Magnum to handle
modification/generation of templates.

Any ideas ?


This is a long-standing missing feature in Heat. There are two
blueprints for this (I'm not sure why):

https://blueprints.launchpad.net/heat/+spec/autoscaling-availabilityzones-impl
https://blueprints.launchpad.net/heat/+spec/implement-autoscalinggroup-availabilityzones

The latter had a spec with quite a lot of discussion:

https://review.openstack.org/#/c/105907

And even an attempted implementation:

https://review.openstack.org/#/c/116139/

which was making some progress but is long out of date and would need
serious work to rebase. The good news is that some of the changes I
made in Liberty like https://review.openstack.org/#/c/213555/ should
hopefully make it simpler.

All of which is to say, if you want to help then I think it would be
totally do-able to land support for this relatively early in Newton :)


Failing that, the only thing I can think to try is something I am
pretty sure won't work: a ResourceGroup with something like:

   availability_zone: {get_param: [AZ_map, "%i"]}

where AZ_map looks something like {"0": "az-1", "1": "az-2", "2":
"az-1", ...} and you're using the member index to pick out the AZ to
use from the parameter. I don't think that works (if "%i" is resolved
after get_param then it won't, and I suspect that's the case) but it's
worth a try if you need a solution in Mitaka.


Yeah, this won't work if you attempt to do the map/index lookup in the
top-level template where the ResourceGroup is defined, but it *does*
work if you pass both the map and the index into the nested stack, e.g.
something like this (untested):

$ cat rg_az_map.yaml
heat_template_version: 2015-04-30

parameters:
  az_map:
    type: json
    default:
      '0': az1
      '1': az2

resources:
  AGroup:
    type: OS::Heat::ResourceGroup
    properties:
      count: 2
      resource_def:
        type: server_mapped_az.yaml
        properties:
          availability_zone_map: {get_param: az_map}
          index: '%index%'

$ cat server_mapped_az.yaml
heat_template_version: 2015-04-30

parameters:
  availability_zone_map:
    type: json
  index:
    type: string

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: the_image
      flavor: m1.foo
      availability_zone: {get_param: [availability_zone_map,
                                      {get_param: index}]}


This is nice. It seems to address our heterogeneity requirement at *deploy* 
time. However, I wonder what the runtime behavior is. For example, I deploy a 
stack by:
$ heat stack-create -f rg_az_map.yaml -P az_map='{"0":"az1","1":"az2"}'

Then, I want to remove a server by:
$ heat stack-update -f rg_az_map.yaml -P az_map='{"0":"az1"}'

Will Heat remove only the resource at index "1" (with the resource at index
"0" untouched)? Also, I wonder whether we can dynamically add resources
(with existing resources untouched).
For example, add a server by:
$ heat stack-update -f rg_az_map.yaml -P 
az_map='{"0":"az1","1":"az2","2":"az3"}'


Removing members from the end of a ResourceGroup works fairly well. It's 

[openstack-dev] [tempest][SR-IOV] tempest breaks Mellanox CI

2016-06-16 Thread Moshe Levi
Hi all,



A recent change [1] in tempest broke all Mellanox CIs.

This is the second time it happened.

After the first time it happened, we decided that Mellanox CI would comment
on tempest.

This time I saw that Mellanox CI commented on that patch with a failure, but
the patch still got approved - [2]. Enabling Mellanox CI as commenting on
tempest requires physical resources such as servers/NICs because it tests
SR-IOV.

So I am wondering what can be done in the future to prevent this from
happening again.



Anyway we proposed the following fix to tempest [3]





[1] - https://review.openstack.org/#/c/320495/

[2] - 
http://13.69.151.247/95/320495/20/check-sriov-tempest/Tempest-Sriov/825304c/testr_results.html.gz

[3] - https://review.openstack.org/#/c/330331/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Morgan Fainberg
On Wed, Jun 15, 2016 at 11:54 PM, Ken'ichi Ohmichi 
wrote:

> This discussion was expected when we implemented the Tempest patch,
> so I sent a mail to the defcore committee[1].
> As per the above ml, "A DefCore Guideline typically covers three OpenStack
> releases".
> That means the latest guideline needs to cover Mitaka, Liberty and Kilo,
> right?
>
> In the Kilo development, we (the nova team) had already concluded that
> additional properties are not good for interoperability.
> And the stable_api.rst of [2] which is contained in Kilo says we need
> to implement new features without extensions.
> However, there are Kilo+ clouds which are extended with vendors' own
> extensions, right?
>
> My concern with allowing additional properties in interoperability tests is
> that
>  - users can move from pure OpenStack clouds to non-pure OpenStack
> clouds which implement vendor-specific properties
>  - but users cannot move from non-pure OpenStack clouds if they
> depend on those properties
> even if these clouds are certified by the same interoperability tests.
>
>
The end goal is 100% to get everyone consistent with no "extra" data being
passed out of the APIs and certified on the same tests.

However, right now we have an issue where vendors/operators are lagging on
getting this cleaned up. Since this is the first round of certifications
(among other things), the proposal is to support/manage this in a way that
gives a bit more of a grace period while the deployers/operators finish
moving away from custom properties (as I understand it, the ones affected
have communicated that they are working on meeting this goal; Chris, please
correct me if I am wrong).

Your concerns are spot on, and at the end of this "greylist" window (at
the "2017.01" defcore guideline), this grace period will expire and
everyone will be expected to be compatible without the "Extra" data. Part
of the process of doing these programs is working to refine the process
(and sometimes make exceptions in the early stages) until the workflow
is established and understood. We do not expect to continue or extend
the period beyond the firm end point Chris highlighted. I would not support
this proposal if it was open ended.

Cheers,
--Morgan


> Thanks
> Ken Ohmichi
>
> ---
> [1]:
> http://lists.openstack.org/pipermail/defcore-committee/2015-June/000849.html
> [2]: https://review.openstack.org/#/c/162912
>
> 2016-06-14 16:37 GMT-07:00 Chris Hoge :
> > Top posting one note and direct comments inline, I’m proposing
> > this as a member of the DefCore working group, but this
> > proposal itself has not been accepted as the forward course of
> > action by the working group. These are my own views as the
> > administrator of the program and not that of the working group
> > itself, which may independently reject the idea outside of the
> > response from the upstream devs.
> >
> > I posted a link to this thread to the DefCore mailing list to make
> > that working group aware of the outstanding issues.
> >
> > On Jun 14, 2016, at 3:50 PM, Matthew Treinish 
> wrote:
> >
> > On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
> >
> > Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> >
> > On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> >
> > Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> >
> > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> >
> > Last year, in response to Nova micro-versioning and extension updates[1],
> > the QA team added strict API schema checking to Tempest to ensure that
> > no additional properties were added to Nova API responses[2][3]. In the
> last year, at least three vendors participating in the OpenStack Powered
> > Trademark program have been impacted by this change, two of which
> > reported this to the DefCore Working Group mailing list earlier this
> > year[4].
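
For context, the strict checking in question amounts to validating each API
response against a JSON schema with additional properties disabled. A
minimal self-contained sketch of the idea using the jsonschema library (the
schema content is illustrative, not Tempest's actual schema):

    import jsonschema

    server_schema = {
        'type': 'object',
        'properties': {
            'id': {'type': 'string'},
            'status': {'type': 'string'},
        },
        'required': ['id', 'status'],
        # This is the strictness in question: any property not listed
        # above makes validation fail.
        'additionalProperties': False,
    }

    response = {'id': 'abc', 'status': 'ACTIVE', 'vendor:extra': 'x'}
    try:
        jsonschema.validate(response, server_schema)
    except jsonschema.ValidationError as exc:
        print('strict check failed: %s' % exc.message)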
> >
> > The DefCore Working Group determines guidelines for the OpenStack Powered
> > program, which includes capabilities with associated functional tests
> > from Tempest that must be passed, and designated sections with associated
> > upstream code [5][6]. In determining these guidelines, the working group
> > attempts to balance the future direction of development with lagging
> > indicators of deployments and user adoption.
> >
> > After a tremendous amount of consideration, I believe that the DefCore
> > Working Group needs to implement a temporary waiver for the strict API
> > checking requirements that were introduced last year, to give downstream
> > deployers more time to catch up with the strict micro-versioning
> > requirements determined by the Nova/Compute team and enforced by the
> > Tempest/QA team.
> >
> >
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and verify that things behave in the same manner between
> multiple clouds then doing 

Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Thierry Carrez

Robert Collins wrote:

[...]
From an upstream perspective, I see us as being in the business of providing
open collaboration playing fields in order to build projects to reach the
OpenStack Mission. We collectively provide resources (infra, horizontal
teams, events...) in order to enable that open collaboration.

An important characteristic of these open collaboration grounds is that they
need to be a level playing field, where no specific organization is being
given an unfair advantage.


Would it change your meaning if I added 'by OpenStack community /
infrastructure' there? If not, then it seems to me that e.g.
Rackspace, Dreamhost, and the other organisations that have deployed
scaled clouds have a pretty big advantage. If it does change your
meaning, then what really do you mean?


Where would you add that? Also, I don't think organizations which have 
deployed scaled clouds have an *unfair* advantage. Nothing in our 
governance structure actively prevents another organization from doing 
the same?



I expect the teams that we bless as "official" project teams to operate in 
that fair manner. Otherwise we end up blessing
what is essentially a trojan horse for a given organization, open-washing
their project in the process. Such a project can totally exist as an
unofficial project (and even be developed on OpenStack infrastructure) but I
don't think it should be given free space in our Design Summits or benefit
from "OpenStack community" branding.


We already have a mechanism - the undiverse tag - for calling out
projects that don't have diversity in their core. That seems to
overlap a lot here?


Yes, it is likely that official project teams that present such an unfair 
playing field would stay "team:single-vendor" forever as a consequence. 
This proposal is about not recognizing such teams as official in the 
first place. The single-vendor tag is, IMHO, meant to encourage project 
teams with a fair playing field to increase their diversity. It is not 
meant to officially support projects that present unfair playing fields.



So if, in a given project team, developers from one specific organization
benefit from access to specific knowledge or hardware (think 3rd-party
testing blackboxes that decide if a patch goes in, or access to proprietary
hardware or software that the open source code primarily interfaces with),
then this project team should probably be rejected under the "open
community" rule. Projects with a lot of drivers (like Cinder) provide an
interesting grey area, but as long as all drivers are in and there is a
fully functional (and popular) open source implementation, I think no
specific organization would be considered as unfairly benefiting compared to
others.


So I read this paragraph as: it's ok if many organisations have unfair
advantages, but it's not ok if there is only one organisation with an
unfair advantage?

Consider a project with one open implementation and one organisation-funded
proprietary driver. This would apparently be a problem, but I don't
understand why it would be.


Project team requirements are just guidelines, which are interpreted by 
humans. In the end, the TC members vote and use human judgment rather 
than blind 'rules'. I just want (1) to state that a level playing field 
is an essential part of what we call "open collaboration", and (2) to 
have TC members *consider* whether the project presents a fair playing 
field or not, as part of how they judge future project teams.


There is a grey area that requires human judgment here. In your example 
above, if the open implementation was unusable open core bait to lure 
people into using the one and only proprietary driver, it would be a 
problem. If the open implementation was fully functional and nothing 
prevented adding additional proprietary drivers, there would be no problem.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Virtuozzo (Compute) CI is incorrectly patching for resize support

2016-06-16 Thread Evgeny Antyshev

Jeremy, thank you for pointing this out! It saved me from such a headache!
BTW, is there any plan to work around this in puppet-jenkins?

On 06/15/2016 08:39 PM, Jeremy Stanley wrote:

On 2016-06-15 13:17:54 +0300 (+0300), Evgeny Antyshev wrote:
[...]

This all started yesterday when I updated Virtuozzo CI using the latest
puppets, Zuul, Jenkins and other stuff.
The update somehow led to Zuul not passing environment variables like
ZUUL_CHANGE and LOG_PATH to the Jenkins job.

[...]

Very recent Jenkins releases began preventing injected parameters
from making it into worker environments.

http://lists.openstack.org/pipermail/openstack-infra/2016-May/004284.html




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Gate broken

2016-06-16 Thread Gary Kotton
Proposal for fix is https://review.openstack.org/330368

From: Gary Kotton 
Reply-To: OpenStack List 
Date: Thursday, June 16, 2016 at 11:21 AM
To: OpenStack List 
Subject: Re: [openstack-dev] [Neutron][LBaaS] Gate broken

Hi,
The reason is https://review.openstack.org/#/c/327413/
Thanks
Gary

From: Gary Kotton 
Reply-To: OpenStack List 
Date: Thursday, June 16, 2016 at 10:47 AM
To: OpenStack List 
Subject: [openstack-dev] [Neutron][LBaaS] Gate broken

Hi,
The unit tests are failing because of:

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"neutron_lbaas/tests/unit/db/loadbalancer/test_db_loadbalancerv2.py", line 
1020, in test_delete_loadbalancer
'tenant_id': acontext.tenant_id}})
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1159, in create_port
db_port = self.create_port_db(context, port)
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1194, in create_port_db
db_port = self._create_db_port_obj(context, port_data)
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1154, in _create_db_port_obj
db_port = models_v2.Port(mac_address=mac_address, **port_data)
TypeError: DeclarativeMeta object got multiple values for keyword argument 
'mac_address'

Was there a change in objects?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Require a level playing field for OpenStack projects

2016-06-16 Thread Thierry Carrez

Matt Riedemann wrote:

[...]
So is the question does Nova provide a level playing field as a project
because it has drivers that can be deployed and used and tested without
special hardware, i.e. libvirt? Then yes. Or is it Nova doesn't provide
a level playing field because zVM and powervm aren't in tree?


Nova provides a level playing field because there is no single company 
unfairly benefiting from Nova being an official project team. No 
specific group of developers in Nova ends up having specific powers that 
others can't have.



If this is really just, random project wants to be considered an
'official' OpenStack project but is totally unusable without a
proprietary stack to deploy and run it - which makes it completely
vendor specific, regardless of whether or not they open sourced the
front-end to talk to their proprietary backend, so only developers from
said vendor can work on the project, then yeah, I agree with the
proposed change in wording.


That's the gist of it, although I would extend that slightly beyond 
"totally unusable". If a project team is mainly formed around a piece of 
code that interacts with a proprietary hardware or software solution, 
then the developers which happen to have access to that solution, and 
can read or modify the code it runs, have an unfair advantage compared 
to other developers. Even if a 3rd-party testing solution is offered, 
that's still a blackbox which says "yes" or "no" for anyone outside the 
special group of people which happen to have access to it.


It is very likely that as a result of this tilted playing field, such a 
project team will stay single-vendor forever. This proposal is just 
saying such a project team should not be made an official OpenStack 
project team -- all those we bless need to be reasonably-level playing 
fields.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-16 Thread Bence Romsics
Can we move the discussion of deprecating veth pairs to here?

https://bugs.launchpad.net/neutron/+bug/1587296
https://review.openstack.org/323310

As you can see in the related bugs and linked patches there are some
complications. Some of the veth config options were already deprecated
and the change had to be reverted recently. Would be good to hear your
opinions on how to solve the remaining problems.

Bence Romsics

On Wed, Jun 15, 2016 at 8:01 PM, Peters, Rawlin  wrote:
> On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub) wrote:
>> >which generates an arbitrary name
>>
>> I'm not a fan of this approach because it requires coordinated assumptions.
>> With the OVS hybrid plug strategy we have to make guesses on the agent side
>> about the presence of bridges with specific names that we never explicitly
>> requested and that we were never explicitly told about. So we end up with 
>> code
>> like [1] that is looking for a particular end of a veth pair it just hopes is
>> there so the rules have an effect.
>
> I don't think this should be viewed as a downside of Strategy 1 because, at
> least when we use patch port pairs, we can easily get the peer name from the
> port on br-int, then use the equivalent of "ovs-vsctl iface-to-br <iface>"
> to get the name of the bridge. If we allow supporting veth pairs to implement
> the subports, then getting the arbitrary trunk bridge/veth names isn't as
> trivial.
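
(The lookup described there is mechanical; a rough sketch, assuming
ovs-vsctl is on PATH and using an invented port name for illustration:)

    import subprocess

    def bridge_for_interface(iface):
        # 'ovs-vsctl iface-to-br IFACE' prints the bridge that owns IFACE.
        return subprocess.check_output(
            ['ovs-vsctl', 'iface-to-br', iface]).decode().strip()

    def peer_of_patch_port(patch_port):
        # The 'options:peer' column of a patch port's Interface row names
        # the other end of the pair.
        return subprocess.check_output(
            ['ovs-vsctl', 'get', 'Interface', patch_port,
             'options:peer']).decode().strip().strip('"')

    # e.g. trunk_bridge = bridge_for_interface(peer_of_patch_port('tpi-1'))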
>
> This also brings up the question: do we even need to support veth pairs over
> patch port pairs anymore? Are there any distros out there that support
> openstack but not OVS patch ports?
>
>>
>> >it seems that the LinuxBridge implementation can simply use an L2 agent
>> >extension for creating the vlan interfaces for the subports
>>
>> LinuxBridge implementation is the same regardless of the strategy for OVS. 
>> The
>> whole reason we have to come up with these alternative approaches for OVS is
>> because we can't use the obvious architecture of letting it plug into the
>> integration bridge due to VLANs already being used for network isolation. I'm
>> not sure pushing complexity out to os-vif to deal with this is a great
>> long-term strategy.
>
> The complexity we'd be pushing out to os-vif is not much worse than the 
> current
> complexity of the hybrid_ovs strategy already in place today.
>
>>
>> >Also, we didn’t make the OVS agent monitor for new linux bridges in the
>> >hybrid_ovs strategy so that Neutron could be responsible for creating the 
>> >veth
>> >pair.
>>
>> Linux Bridges are outside of the domain of OVS and even its agent. The L2 
>> agent
>> doesn't actually do anything with the bridge itself, it just needs a veth
>> device it can put iptables rules on. That's in contrast to these new OVS
>> bridges that we will be managing rules for, creating additional patch ports,
>> etc.
>
> I wouldn't say linux bridges are totally outside of its domain because it 
> relies
> on them for security groups. Rather than relying on an arbitrary naming
> convention between Neutron and Nova, we could've implemented monitoring for 
> new
> linux bridges to create veth pairs and firewall rules on. I'm glad we didn't,
> because that logic is specific to that particular firewall driver, similar to
> how this trunk bridge monitoring would be specific to only vlan-aware-vms. I
> think the logic lives best within an L2 agent extension, outside of the core
> of the OVS agent.
>
>>
>> >Why shouldn't we use the tools that are already available to us?
>>
>> Because we're trying to build a house and all we have are paint brushes. :)
>
> To me it seems like we already have a house that just needs a little paint :)
>
>>
>>
>> 1.
>> https://github.com/openstack/neutron/blob/f78e5b4ec812cfcf5ab8b50fca62d1ae0dd7741d/neutron/agent/linux/iptables_firewall.py#L919-L923
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [murano] Nominating Alexander Tivelkov and Zhu Rong for murano cores

2016-06-16 Thread Nikolay Starodubtsev
+1
Well deserved!



Nikolay Starodubtsev

Software Engineer

Mirantis Inc.


Skype: dark_harlequine1

2016-06-15 19:42 GMT+03:00 Serg Melikyan :

> +1
>
> Finally!
>
> On Wed, Jun 15, 2016 at 3:33 AM, Ihor Dvoretskyi  > wrote:
>
>> +1 for Alexander Tivelkov.
>>
>> Good effort.
>>
>> On Wed, Jun 15, 2016 at 1:08 PM, Artem Silenkov 
>> wrote:
>>
>>> Hello!
>>>
>>> +1
>>>
>>> Regards,
>>> Artem Silenkov
>>> ---
>>> paas-team
>>>
>>> On Wed, Jun 15, 2016 at 12:56 PM, Dmytro Dovbii 
>>> wrote:
>>>
 +1
 On 15 June 2016 at 6:47, "Yang, Lin A" 
 wrote:

 +1 both for Alexander Tivelkov and Zhu Rong. Well deserved.
>
> Regards,
> Lin Yang
>
> On Jun 15, 2016, at 3:17 AM, Kirill Zaitsev 
> wrote:
>
> Hello team, I want to annonce the following changes to murano core
> team:
>
> 1) I’d like to nominate Alexander Tivelkov for murano core. He has
> been part of the project for a very long time and has contributed to 
> almost
> every part of murano. He has been fully committed to murano during mitaka
> cycle and continues doing so during newton [1]. His work on the scalable
> framework architecture is one of the most notable features scheduled for N
> release.
>
> 2) I’d like to nominate Zhu Rong for murano core. Last time he was
> nominated I -1’ed the proposal, because I believed he needed to start
> making more substantial contributions. I’m sure that Zhu Rong showed his
> commitment [2] to the murano project and I’m happy to nominate him myself. His
> work on separating cfapi from the murano api and his contributions aimed at
> addressing murano’s technical debt are much appreciated.
>
> 3) Finally I would like to remove Steve McLellan[3] from murano core
> team. Steve has been part of murano from its very early stages. However,
> his focus has since shifted and he hasn’t been active in murano during the
> last couple of cycles. I want to thank Steve for his contributions and
> express the hope to see him back in the project in the future.
>
>
> Murano team, please respond with +1/-1 to the proposed changes.
>
> [1] http://stackalytics.com/?user_id=ativelkov&metric=marks
> [2] http://stackalytics.com/?metric=marks&user_id=zhu-rong
> [3] http://stackalytics.com/?user_id=sjmc7
> --
> Kirill Zaitsev
> Software Engineer
> Mirantis, Inc
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org
> ?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best regards,
>>
>> Ihor Dvoretskyi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com | +1 (650) 440-8979
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] Gate broken

2016-06-16 Thread Gary Kotton
Hi,
The reason is https://review.openstack.org/#/c/327413/
Thanks
Gary

From: Gary Kotton 
Reply-To: OpenStack List 
Date: Thursday, June 16, 2016 at 10:47 AM
To: OpenStack List 
Subject: [openstack-dev] [Neutron][LBaaS] Gate broken

Hi,
The unit tests are failing because of:

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"neutron_lbaas/tests/unit/db/loadbalancer/test_db_loadbalancerv2.py", line 
1020, in test_delete_loadbalancer
'tenant_id': acontext.tenant_id}})
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1159, in create_port
db_port = self.create_port_db(context, port)
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1194, in create_port_db
db_port = self._create_db_port_obj(context, port_data)
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1154, in _create_db_port_obj
db_port = models_v2.Port(mac_address=mac_address, **port_data)
TypeError: DeclarativeMeta object got multiple values for keyword argument 
'mac_address'

Was there a change in objects?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Gate broken

2016-06-16 Thread Gary Kotton
Hi,
The unit tests are failing because of:

Captured traceback:
~~~
Traceback (most recent call last):
  File 
"neutron_lbaas/tests/unit/db/loadbalancer/test_db_loadbalancerv2.py", line 
1020, in test_delete_loadbalancer
'tenant_id': acontext.tenant_id}})
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1159, in create_port
db_port = self.create_port_db(context, port)
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1194, in create_port_db
db_port = self._create_db_port_obj(context, port_data)
  File 
"/home/gkotton/neutron-lbaas/.tox/py27/src/neutron/neutron/db/db_base_plugin_v2.py",
 line 1154, in _create_db_port_obj
db_port = models_v2.Port(mac_address=mac_address, **port_data)
TypeError: DeclarativeMeta object got multiple values for keyword argument 
'mac_address'
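
For readers hitting the same error: it is the classic duplicate-keyword
pattern, passing a keyword explicitly while the same key is still present
in the dict being expanded. A minimal self-contained sketch of the failure
mode and the usual fix (illustrative class, not the actual Neutron model):

    class Port(object):
        def __init__(self, mac_address=None, **kwargs):
            self.mac_address = mac_address

    port_data = {'mac_address': 'fa:16:3e:00:00:01', 'name': 'lb-port'}

    try:
        Port(mac_address=port_data['mac_address'], **port_data)
    except TypeError as exc:
        # ... got multiple values for keyword argument 'mac_address'
        print(exc)

    # The usual fix: pop the key before expanding the dict.
    mac = port_data.pop('mac_address')
    port = Port(mac_address=mac, **port_data)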

Was there a change in objects?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday June 16th at 9:00 UTC

2016-06-16 Thread GHANSHYAM MANN
Hello everyone,


Just a reminder that the weekly OpenStack QA team IRC meeting will be on
Thursday, June 16th at 9:00 UTC in the #openstack-meeting channel.


The agenda for the meeting can be found here:

https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_June_16th_2016_.280900_UTC.29

Anyone is welcome to add an item to the agenda.

To help people figure out what time 9:00 UTC is in other timezones, the
next meeting will be at:

04:00 EST
18:00 JST
18:30 ACST
11:00 CEST
04:00 CDT
02:00 PDT


Regards
Ghanshyam Mann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-16 Thread Armando M.
On 16 June 2016 at 03:33, Matt Riedemann  wrote:

> On 6/13/2016 3:35 AM, Daniel P. Berrange wrote:
>
>> On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
>>
>>> Hi,
>>>
>>> You may or may not be aware of the vlan-aware-vms effort [1] in
>>> Neutron.  If not, there is a spec and a fair number of patches in
>>> progress for this.  Essentially, the goal is to allow a VM to connect
>>> to multiple Neutron networks by tagging traffic on a single port with
>>> VLAN tags.
>>>
>>> This effort will have some effect on vif plugging because the datapath
>>> will include some changes that will effect how vif plugging is done
>>> today.
>>>
>>> The design proposal for trunk ports with OVS adds a new bridge for
>>> each trunk port.  This bridge will demux the traffic and then connect
>>> to br-int with patch ports for each of the networks.  Rawlin Peters
>>> has some ideas for expanding the vif capability to include this
>>> wiring.
>>>
>>> There is also a proposal for connecting to linux bridges by using
>>> kernel vlan interfaces.
>>>
>>> This effort is pretty important to Neutron in the Newton timeframe.  I
>>> wanted to send this out to start rounding up the reviewers and other
>>> participants we need to see how we can start putting together a plan
>>> for nova integration of this feature (via os-vif?).
>>>
>>
>> I've not taken a look at the proposal, but on the timing side of things
>> it is really way too late to start this email thread asking for design
>> input from os-vif or nova. We're way past the spec proposal deadline
>> for Nova in the Newton cycle, so nothing is going to happen until the
>> Ocata cycle no matter what Neutron wants in Newton. For os-vif our
>> focus right now is exclusively on getting existing functionality ported
>> over, and integrated into Nova in Newton. So again we're not really
>> looking to spend time on further os-vif design work right now.
>>
>> In the Ocata cycle we'll be looking to integrate os-vif into Neutron to
>> let it directly serialize VIF objects and send them over to Nova, instead
>> of using the ad-hoc port-binding dicts.  From the Nova side, we're not
>> likely to want to support any new functionality that affects port-binding
>> data until after Neutron is converted to os-vif. So Ocata at the earliest,
>> but probably more like P, unless the Neutron conversion to os-vif gets
>> completed unexpectedly quickly.
>>
>> Regards,
>> Daniel
>>
>>
> +1. Nova is past non-priority spec approval freeze for Newton. With
> respect to os-vif it's a priority to integrate that into Nova in Newton [1].
>
> We're also working on refactoring how we allocate and bind ports when
> creating a server [2]. This is a dependency for the routed networks work
> and it's also going to bump up against the changes I'm making in nova for
> get-me-a-network in Newton (which is another priority).
>
> So if vlan-aware-vms changes how nova allocates/binds ports, that's going
> to be dependent on this also, and will have to be worked into the Ocata
> release from Nova's POV.
>

If my understanding is correct, everything that was required in Nova was
done in the context of [1], which completed in Mitaka. What's left is the
os-vif part: if os-vif is not tied to the Nova release cycle or the
spec/blueprint approval and freeze process and the change in question is
trivial, then I hope we can make an effort to pull it off.

Now, if the review process unveiled loose ends and changes that are indeed
required to Nova, then I'd agree we should not change priorities.

Thanks,
Armando

[1] https://blueprints.launchpad.net/nova/+spec/neutron-ovs-bridge-name


>
> [1]
> https://specs.openstack.org/openstack/nova-specs/priorities/newton-priorities.html#os-vif-integration
> [2]
> http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/prep-for-network-aware-scheduling.html
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Template Structure Change

2016-06-16 Thread Har-Tal, Liat (Nokia - IL)
Hi All,

Please note that, as of today, the id field in the metadata section has been 
renamed; from now on it is called name.
After updating your code, you should also update the templates which you are 
using in your setups.
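
For anyone with many templates to update, here is a rough migration sketch;
it assumes PyYAML and an illustrative template directory, so adjust the path
to your own setup:

    import glob
    import yaml

    # Rename metadata.id -> metadata.name in every template, in place.
    for path in glob.glob('/etc/vitrage/templates/*.yaml'):
        with open(path) as f:
            template = yaml.safe_load(f)
        metadata = template.get('metadata', {})
        if 'id' in metadata:
            metadata['name'] = metadata.pop('id')
            with open(path, 'w') as f:
                yaml.safe_dump(template, f, default_flow_style=False)
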

Any questions are welcome,
Liat

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Ken'ichi Ohmichi
This discussion was expected when we implemented the Tempest patch, so I
sent a mail to the defcore committee at the time[1].
As that mail says, "A DefCore Guideline typically covers three OpenStack
releases".
That means the latest guideline needs to cover Mitaka, Liberty and Kilo, right?

During Kilo development, we (the nova team) had already concluded that
additional properties are not good for interoperability.
And stable_api.rst in [2], which is part of Kilo, says we need to implement
new features without extensions.
However, there are Kilo+ clouds which are extended with vendors' own
extensions, right?

My concern with allowing additional properties on interoperability tests is
that:
 - users can move from pure OpenStack clouds to non-pure OpenStack
   clouds which implement vendor-specific properties
 - but users cannot move away from non-pure OpenStack clouds if they
   depend on those properties
even if these clouds are certified against the same interoperability tests.
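
To illustrate the mechanics for readers outside QA: the strict checking
boils down to JSON schemas with additionalProperties set to False, so a
vendor-injected field makes an otherwise valid response fail. A stand-alone
sketch (not the actual Tempest schema):

    import jsonschema

    # Minimal stand-in for a Tempest response schema.
    server_schema = {
        'type': 'object',
        'properties': {
            'id': {'type': 'string'},
            'name': {'type': 'string'},
        },
        'required': ['id', 'name'],
        'additionalProperties': False,  # the "strict" part
    }

    # A response carrying a vendor-specific extra property.
    response = {'id': 'abc123', 'name': 'vm1', 'vendor:foo': 'bar'}
    try:
        jsonschema.validate(response, server_schema)
    except jsonschema.exceptions.ValidationError as exc:
        print('strict check failed: %s' % exc.message)
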

Thanks
Ken Ohmichi

---
[1]: 
http://lists.openstack.org/pipermail/defcore-committee/2015-June/000849.html
[2]: https://review.openstack.org/#/c/162912

2016-06-14 16:37 GMT-07:00 Chris Hoge :
> Top posting one note and direct comments inline, I’m proposing
> this as a member of the DefCore working group, but this
> proposal itself has not been accepted as the forward course of
> action by the working group. These are my own views as the
> administrator of the program and not that of the working group
> itself, which may independently reject the idea outside of the
> response from the upstream devs.
>
> I posted a link to this thread to the DefCore mailing list to make
> that working group aware of the outstanding issues.
>
> On Jun 14, 2016, at 3:50 PM, Matthew Treinish  wrote:
>
> On Tue, Jun 14, 2016 at 05:42:16PM -0400, Doug Hellmann wrote:
>
> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
>
> On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
>
> Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
>
> On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
>
> Last year, in response to Nova micro-versioning and extension updates[1],
> the QA team added strict API schema checking to Tempest to ensure that
> no additional properties were added to Nova API responses[2][3]. In the
> last year, at least three vendors participating in the OpenStack Powered
> Trademark program have been impacted by this change, two of which
> reported this to the DefCore Working Group mailing list earlier this
> year[4].
>
> The DefCore Working Group determines guidelines for the OpenStack Powered
> program, which includes capabilities with associated functional tests
> from Tempest that must be passed, and designated sections with associated
> upstream code [5][6]. In determining these guidelines, the working group
> attempts to balance the future direction of development with lagging
> indicators of deployments and user adoption.
>
> After a tremendous amount of consideration, I believe that the DefCore
> Working Group needs to implement a temporary waiver for the strict API
> checking requirements that were introduced last year, to give downstream
> deployers more time to catch up with the strict micro-versioning
> requirements determined by the Nova/Compute team and enforced by the
> Tempest/QA team.
>
>
> I'm very much opposed to this being done. If we're actually concerned with
> interoperability and with verifying that things behave in the same manner
> between multiple clouds, then doing this would be a big step backwards. The
> fundamental disconnect here is that the vendors who have implemented
> out-of-band extensions, or were taking advantage of previously available
> places to inject extra attributes, believe that doing so means they're
> interoperable, which is quite far from reality. **The API is not a place
> for vendor differentiation.**
>
>
> This is a temporary measure to address the fact that a large number
> of existing tests changed their behavior, rather than having new
> tests added to enforce this new requirement. The result is deployments
> that previously passed these tests may no longer pass, and in fact
> we have several cases where that's true with deployers who are
> trying to maintain their own standard of backwards-compatibility
> for their end users.
>
>
> That's not what happened though. The API hasn't changed and the tests
> haven't really changed either. We made our enforcement on Nova's APIs a bit
> stricter to ensure nothing unexpected appeared. For the most part these
> tests work on any version of OpenStack. (We only test it in the gate on
> supported stable releases, but I don't expect things to have drastically
> shifted on older releases.) It also doesn't matter which version of the API
> you run, v2.0 or v2.1. Literally, the only case it ever fails is when you
> run something extra, not from the community, either as an extension (which 

Re: [openstack-dev] [release] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Wang, Shane
That is what I want. It would be better to map the day into the release 
schedule in the future. Can we do that starting with Ocata?
And everyone is encouraged to join.

Regards.
--
Shane
-Original Message-
From: Rochelle Grober [mailto:rochelle.gro...@huawei.com] 
Sent: Wednesday, June 15, 2016 9:53 AM
To: Daniel P. Berrange; OpenStack Development Mailing List (not for usage 
questions)
Cc: Zhuangzhen; Anni Lai; Liang, Maggie
Subject: Re: [openstack-dev] [release] Invitation to join Hangzhou Bug Smash

Perhaps the right way to schedule these bug smashes is to do it at the same 
time as the release schedule is determined.  Decide on a fixed time within 
the release cycle (it's been just after M3/feature freeze a few times) and, 
when the schedule is put together, the bug smash is part of the schedule.

By having the release schedule determine the week of the bug smash, we have a 
long timeline to get the planning done and don't have to worry about 
development schedule conflicts.

--Rocky

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com]
Sent: Monday, June 13, 2016 2:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Zhuangzhen; Anni Lai; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date 
> will be around July 11 (probably July 6 - 8, but to be determined very soon).

The newton-2 milestone release date is July 15th, so you certainly *don't* 
want the event during that week. IOW, the 8th of July is the latest you 
should schedule it - don't let it slip into the next week starting July 
11th, as during the week of the n-2 milestone the focus of the teams will be 
almost exclusively on prep for that release, to the detriment of any bug 
smash event.

Regards,
Daniel
-- 
|: http://berrange.com  -o-   http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Wang, Shane
OK, got you, Sean. I will let you know if there is any change.

Regards.
--
Shane
-Original Message-
From: Sean McGinnis [mailto:sean.mcgin...@gmx.com] 
Sent: Tuesday, June 14, 2016 3:10 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Zhuangzhen; anni@huawei.com; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date will be 
> around July 11 (probably July 6 - 8, but to be determined very soon).
> 
> The China teams will still focus on the Neutron, Nova, Cinder, Heat, Magnum, 
> Rally, Ironic, Dragonflow and Watcher, etc. projects, so we need developers 
> to join and fix as many bugs as possible, and cores to be on site to moderate 
> the code changes and merges. Welcome to the bug smash at Hangzhou - 
> http://www.chinahighlights.com/hangzhou/attraction/.
> 
> The good news is that, for the first two cores from the above projects who 
> respond to this invitation in my email inbox and copy the CC list, the 
> sponsors are pleased to sponsor your international travel, including flight 
> and hotel. Please simply reply to me.
> 
> Best regards,
> --
> China OpenStack Bug Smash Team
> 
> 

Glad to see this continuing!

I would like to participate in this event, but the current timeframe would 
conflict with OpenStack Days India. If that does end up being the final date, I 
will try to be online as much as possible to help with reviews.

If it does end up being moved to another date, I would be interested in 
participating in person to help mentor.

Thanks,
Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Wang, Shane
Yes, sure, we encourage every company to host a bug smash in its own city, 
anywhere in the world, as the company wishes, but it should align with us. 
Actually, last time we followed that model: for example, we had SUSE host in 
Germany, and at the last minute we added Taiwan.
For Hangzhou in the PRC, we are also calling for sponsors to work with us, so 
in the final announcement you will see company names.

Regards.
--
Shane
-Original Message-
From: Tom Fifield [mailto:t...@openstack.org] 
Sent: Monday, June 13, 2016 5:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: anni@huawei.com; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

Hi,

Are there plans to follow the OpenStack events policy this time?

e.g. commercial participants should have equal opportunity to sponsor and support 
the activity. When the number of sponsorships is limited, a best practice is to 
publish a sponsorship prospectus online on a date known in advance with 
sponsorships filled on a "first to sign" basis.


Regards,


Tom

On 13/06/16 16:06, Wang, Shane wrote:
> Hi, OpenStackers,
> As you know, Huawei, Intel and CESI are hosting the 4th China 
> OpenStack Bug Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 
> 3rd was at Chengdu.
> We are constructing the etherpad page for registration, and the date 
> will be around July 11 (probably July 6 - 8, but to be determined very 
> soon).
> The China teams will still focus on the Neutron, Nova, Cinder, Heat, 
> Magnum, Rally, Ironic, Dragonflow and Watcher, etc. projects, so we need 
> developers to join and fix as many bugs as possible, and cores to be 
> on site to moderate the code changes and merges. Welcome to the bug 
> smash at Hangzhou - http://www.chinahighlights.com/hangzhou/attraction/.
> The good news is that, for the first two cores from the above projects 
> who respond to this invitation in my email inbox and copy the CC list, 
> the sponsors are pleased to sponsor your international travel, including 
> flight and hotel. Please simply reply to me.
> Best regards,
> --
> China OpenStack Bug Smash Team
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Wang, Shane
I heard that from Doug; the best timing is before July 11.

Thank you for the reminder.

Regards.
--
Shane
-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Monday, June 13, 2016 5:46 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Zhuangzhen; anni@huawei.com; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

On Mon, Jun 13, 2016 at 08:06:50AM +, Wang, Shane wrote:
> Hi, OpenStackers,
> 
> As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
> Smash at Hangzhou, China.
> The 1st China Bug Smash was at Shanghai, the 2nd was at Xi'an, and the 3rd 
> was at Chengdu.
> 
> We are constructing the etherpad page for registration, and the date 
> will be around July 11 (probably July 6 - 8, but to be determined very soon).

The newton-2 milestone release date is July 15th, so you certainly *don't* 
want the event during that week. IOW, the 8th of July is the latest you 
should schedule it - don't let it slip into the next week starting July 
11th, as during the week of the n-2 milestone the focus of the teams will be 
almost exclusively on prep for that release, to the detriment of any bug 
smash event.

Regards,
Daniel
-- 
|: http://berrange.com  -o-   http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

2016-06-16 Thread Wang, Shane
You are welcome to join us, Duncan.

Regards.
--
Shane
From: Duncan Thomas [mailto:duncan.tho...@gmail.com]
Sent: Monday, June 13, 2016 4:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Zhuangzhen; anni@huawei.com; Liang, Maggie
Subject: Re: [openstack-dev] Invitation to join Hangzhou Bug Smash

Hi

I would, once again, love to attend.
If you find that other cores apply and you'd rather have a new face, I would be 
very understanding of the situation.
Regards

--
Duncan Thomas



On 13 June 2016 at 11:06, Wang, Shane wrote:
Hi, OpenStackers,

As you know, Huawei, Intel and CESI are hosting the 4th China OpenStack Bug 
Smash at Hangzhou, China.
The 1st China Bug Smash was at Shanghai, the 2nd was at Xi’an, and the 3rd was 
at Chengdu.

We are constructing the etherpad page for registration, and the date will be 
around July 11 (probably July 6 – 8, but to be determined very soon).

The China teams will still focus on the Neutron, Nova, Cinder, Heat, Magnum, 
Rally, Ironic, Dragonflow and Watcher, etc. projects, so we need developers to 
join and fix as many bugs as possible, and cores to be on site to moderate the 
code changes and merges. Welcome to the bug smash at Hangzhou - 
http://www.chinahighlights.com/hangzhou/attraction/.

The good news is that, for the first two cores from the above projects who 
respond to this invitation in my email inbox and copy the CC list, the 
sponsors are pleased to sponsor your international travel, including flight 
and hotel. Please simply reply to me.

Best regards,
--
China OpenStack Bug Smash Team



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][security] Service User Permissions

2016-06-16 Thread Jamie Lennox
Thanks everyone for your input.

I generally agree that there is something that doesn't quite feel right
about purely trusting this information to be passed from service to
service; this is why I was keen for outside input, and I have been
rethinking the approach.

To this end I've proposed reservations (a name that doesn't feel right):
https://review.openstack.org/#/c/330329/

At a gut-feeling level I'm much happier with the concept. I think it will
allow us to handle the distinction between user->service and
service->service communication much better, and it has the added bonus of
potentially opening up some policy options in future.

Please let me know of any concerns/thoughts on the new approach.

Once again, I've only written the proposal part of the spec, as there will
be a lot of details to figure out if we go forward. It is also fairly
rough, but it should convey the point.


Thanks

Jamie

On 3 June 2016 at 03:06, Shawn McKinney  wrote:

>
> > On Jun 2, 2016, at 10:58 AM, Adam Young  wrote:
> >
> > Any sensible RBAC setup would support this, but we are not using a
> > sensible one, we are using a hand-rolled one. Replacing everything with
> > Fortress implies a complete rewrite of what we do now.  Nuke it from
> > orbit type stuff.
> >
> > What I would rather focus on is the splitting of the current policy into
> > two parts:
> >
> > 1. Scope check done in code
> > 2. Role check done in middleware
> >
> > Role check should be done based on URL, not on the policy key like
> > identity:create_user
> >
> >
> > Then, yes, a Fortress-style query could be done, or it could be done by
> > asking the service itself.
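
As a purely hypothetical sketch of that split, with the role check keyed on
method plus URL in WSGI middleware (the mapping table and header handling
are invented for illustration):

    # Role check in middleware, keyed on URL; scope checks stay in code.
    URL_ROLES = {
        ('POST', '/v3/users'): {'admin'},
        ('GET', '/v3/users'): {'admin', 'reader'},
    }

    class RoleCheckMiddleware(object):
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            key = (environ['REQUEST_METHOD'], environ['PATH_INFO'])
            required = URL_ROLES.get(key)
            # keystonemiddleware's auth_token passes roles as X-Roles.
            token_roles = set(environ.get('HTTP_X_ROLES', '').split(','))
            if required and not (required & token_roles):
                start_response('403 Forbidden',
                               [('Content-Type', 'text/plain')])
                return [b'role check failed']
            return self.app(environ, start_response)
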
>
> Mostly in agreement.  I prefer to focus on the model (RBAC) rather than a
> specific implementation like Fortress. That is to say, support the model
> and allow the implementation to remain pluggable.  That way you enable
> many vendors to participate in your ecosystem and, more importantly, one
> isn’t tied to a specific backend (ldapv3, sql, …)
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][nova][defcore] Add option to disable some strict response checking for interop testing

2016-06-16 Thread Morgan Fainberg
On Jun 14, 2016 14:42, "Doug Hellmann"  wrote:
>
> Excerpts from Matthew Treinish's message of 2016-06-14 15:12:45 -0400:
> > On Tue, Jun 14, 2016 at 02:41:10PM -0400, Doug Hellmann wrote:
> > > Excerpts from Matthew Treinish's message of 2016-06-14 14:21:27 -0400:
> > > > On Tue, Jun 14, 2016 at 10:57:05AM -0700, Chris Hoge wrote:
> > > > > Last year, in response to Nova micro-versioning and extension
> > > > > updates[1], the QA team added strict API schema checking to Tempest
> > > > > to ensure that no additional properties were added to Nova API
> > > > > responses[2][3]. In the last year, at least three vendors
> > > > > participating in the OpenStack Powered Trademark program have been
> > > > > impacted by this change, two of which reported this to the DefCore
> > > > > Working Group mailing list earlier this year[4].
> > > > >
> > > > > The DefCore Working Group determines guidelines for the OpenStack
> > > > > Powered program, which includes capabilities with associated
> > > > > functional tests from Tempest that must be passed, and designated
> > > > > sections with associated upstream code [5][6]. In determining these
> > > > > guidelines, the working group attempts to balance the future
> > > > > direction of development with lagging indicators of deployments and
> > > > > user adoption.
> > > > >
> > > > > After a tremendous amount of consideration, I believe that the
> > > > > DefCore Working Group needs to implement a temporary waiver for the
> > > > > strict API checking requirements that were introduced last year, to
> > > > > give downstream deployers more time to catch up with the strict
> > > > > micro-versioning requirements determined by the Nova/Compute team
> > > > > and enforced by the Tempest/QA team.
> > > >
> > > > I'm very much opposed to this being done. If we're actually concerned
> > > > with interoperability and with verifying that things behave in the
> > > > same manner between multiple clouds, then doing this would be a big
> > > > step backwards. The fundamental disconnect here is that the vendors
> > > > who have implemented out-of-band extensions, or were taking advantage
> > > > of previously available places to inject extra attributes, believe
> > > > that doing so means they're interoperable, which is quite far from
> > > > reality. **The API is not a place for vendor differentiation.**
> > >
> > > This is a temporary measure to address the fact that a large number
> > > of existing tests changed their behavior, rather than having new
> > > tests added to enforce this new requirement. The result is deployments
> > > that previously passed these tests may no longer pass, and in fact
> > > we have several cases where that's true with deployers who are
> > > trying to maintain their own standard of backwards-compatibility
> > > for their end users.
> >
> > That's not what happened though. The API hasn't changed and the tests
> > haven't really changed either. We made our enforcement on Nova's APIs a
> > bit stricter to ensure nothing unexpected appeared. For the most part
> > these tests work on any version of OpenStack. (We only test it in the
> > gate on supported stable releases, but I don't expect things to have
> > drastically shifted on older releases.) It also doesn't matter which
> > version of the API you run, v2.0 or v2.1. Literally, the only case it
> > ever fails is when you run something extra, not from the community,
> > either as an extension (which themselves are going away [1]) or another
> > service that wraps nova or imitates nova. I'm personally not comfortable
> > saying those extras are ever part of the OpenStack APIs.
> >
> > > We have basically three options.
> > >
> > > 1. Tell deployers who are trying to do the right thing for their
> > >    immediate users that they can't use the trademark.
> > >
> > > 2. Flag the related tests or remove them from the DefCore enforcement
> > >suite entirely.
> > >
> > > 3. Be flexible about giving consumers of Tempest time to meet the
> > >new requirement by providing a way to disable the checks.
> > >
> > > Option 1 goes against our own backwards compatibility policies.
> >
> > I don't think backwards compatibility policies really apply to what we
> > define as the set of tests that, as a community, we are saying a vendor
> > has to pass to say they're OpenStack. From my perspective, as a community
> > we take a hard stance on this and say that to be considered an
> > interoperable cloud (and to get the trademark) you have to actually have
> > an interoperable product. We slowly ratchet up the requirements every 6
> > months; there isn't any implied backwards compatibility in doing that.
> > You passed in the past but not under the newer, stricter guidelines.
> >
> > Also, even if I did think it applied, we're not talking about a change
> > which would fall into breaking that. The change was introduced a year and
> > a half ago during kilo and landed a year ago during liberty:
> >
> > https://review.openstack.org/#/c/156130/
> >
>