Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-11 Thread Andreas Jaeger
On 2016-05-12 02:59, Nikhil Komawar wrote:
> Thanks Josh for your reply. It's helpful.
> 
> The aim of this cross-project work is to come up with a standard way
> of implementing quota logic that can be used by different services.
> Currently, different projects have their individual implementations and
> there are many lessons learned. The library is supposed to be born out
> of that shared wisdom.
> 
> Hence, it needs to be an independent library that can make progress in a
> way that lets it be successfully adopted and vetted by cross-project use
> cases, but not necessarily enforce cross-project standardization on how
> projects adopt it.
> 
> 
> So, not oslo for now at least [1]. Big Tent? -- I do not know the
> consequences of it not being in the Big Tent. We do not need a design
> summit slot dedicated to this project, nor do we need to have elections,
> nor is it a big enough project to be coordinated with a specific release
> milestone (Newton, Ocata, etc.). The team, though, does follow the four
> opens [2]. So, we can in future go for either option as needed. As long
> as it lives under the openstack/delimiter umbrella, runs the standard gate
> tests, and follows the OpenStack release process for libraries (but does
> not necessarily require intervention from the release team), we are happy.

The release team only takes care of Big Tent projects; the documentation and
translation teams do not cover independent projects either - including
publishing on the docs.openstack.org web site. Even in the Big
Tent you are not forced to release at a specific release milestone.

We don't have stackforge anymore by name, just in spirit: you can create
a repo for this as an independent project named openstack/delimiter,

Note that you can also start now under an existing project - like
keystone, glance or oslo - and later spin off as a separate independent team.

Starting off independent is fine in general - it just really strikes me
as odd that a cross-project effort considers itself outside of the Big Tent,

Andreas
> 
> 
> [1] Personally, I do not care of where it lives after it has been
> adopted by a few different projects. But let's keep the future
> discussions in the Pandora's box for now.
> 
> [2] The four opens http://governance.openstack.org/reference/opens.html
> 
> On 5/11/16 7:16 PM, Joshua Harlow wrote:
>> So it was my belief that at its current stage this library
>> would start off on its own, and not initially start (just yet) in
>> oslo (as I think the oslo group does not want to be the
>> blocker/requirement for a library to be successful, plus the cost
>> of it being in oslo may not be warranted yet).
>>
>> If in the future we as a community think it is better under oslo (and
>> said membership into oslo will help); then I'm ok with it being
>> there... I just know that others (in the oslo group) have other
>> thoughts here (and hopefully they can chime in).
>>
>> Part of this is also being refined in
>> https://review.openstack.org/#/c/312233/ and that hopefully can be a
>> guideline for new libraries that come along.
>>
>> -Josh
>>
>> Andreas Jaeger wrote:
>>> Since the review [1] to create the repo is up now, I have one question:
>>> This is a cross-project effort, so what is its governance?
>>>
>>> The review stated it will be an independent project outside of the big
>>> tent - but seeing that this should become a common part for core
>>> projects and specific to OpenStack, I wonder whether that is the right
>>> approach. It fits nicely into Oslo as cross-project library - or it
>>> could be an independent team on its own in the Big Tent.
>>>
>>> But cross-project and outside of Big Tent looks very strange to me,
>>>
>>> Andreas
>>>
>>> [1] https://review.openstack.org/284454
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-11 Thread Andreas Jaeger
Looking at your cross-project spec
https://review.openstack.org/#/c/284454/, I really wonder what the
reason is for developing this outside of the Big Tent,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-11 Thread Tim Bell


On 12/05/16 02:59, "Nikhil Komawar"  wrote:

>Thanks Josh for your reply. It's helpful.
>
>The aim of this cross-project work is to come up with a standard way
>of implementing quota logic that can be used by different services.
>Currently, different projects have their individual implementations and
>there are many lessons learned. The library is supposed to be born out
>of that shared wisdom.
>
>Hence, it needs to be an independent library that can make progress in a
>way that lets it be successfully adopted and vetted by cross-project use
>cases, but not necessarily enforce cross-project standardization on how
>projects adopt it.
>
>
>So, not oslo for now at least [1]. Big Tent? -- I do not know the
>consequences of it not being in the Big Tent. We do not need a design
>summit slot dedicated to this project, nor do we need to have elections,
>nor is it a big enough project to be coordinated with a specific release
>milestone (Newton, Ocata, etc.). The team, though, does follow the four
>opens [2]. So, we can in future go for either option as needed. As long
>as it lives under the openstack/delimiter umbrella, runs the standard gate
>tests, and follows the OpenStack release process for libraries (but does
>not necessarily require intervention from the release team), we are happy.
>

I think it will be really difficult to persuade the mainstream projects to adopt
a library if it is not part of Oslo. Developing a common library for quota
management outside the scope of the common library framework for OpenStack
does not seem likely to encourage the widespread use of delimiter.

What are the issues with being part of oslo?

Is it that oslo may not want the library, or are there constraints that it would
impose on the development?

Tim

>
>[1] Personally, I do not care of where it lives after it has been
>adopted by a few different projects. But let's keep the future
>discussions in the Pandora's box for now.
>
>[2] The four opens http://governance.openstack.org/reference/opens.html
>
>On 5/11/16 7:16 PM, Joshua Harlow wrote:
>> So it was my belief that at its current stage this library
>> would start off on its own, and not initially start (just yet) in
>> oslo (as I think the oslo group does not want to be the
>> blocker/requirement for a library to be successful, plus the cost
>> of it being in oslo may not be warranted yet).
>>
>> If in the future we as a community think it is better under oslo (and
>> said membership into oslo will help); then I'm ok with it being
>> there... I just know that others (in the oslo group) have other
>> thoughts here (and hopefully they can chime in).
>>
>> Part of this is also being refined in
>> https://review.openstack.org/#/c/312233/ and that hopefully can be a
>> guideline for new libraries that come along.
>>
>> -Josh
>>
>> Andreas Jaeger wrote:
>>> Since the review [1] to create the repo is up now, I have one question:
>>> This is a cross-project effort, so what is its governance?
>>>
>>> The review stated it will be an independent project outside of the big
>>> tent - but seeing that this should become a common part for core
>>> projects and specific to OpenStack, I wonder whether that is the right
>>> approach. It fits nicely into Oslo as cross-project library - or it
>>> could be an independent team on its own in the Big Tent.
>>>
>>> But cross-project and outside of Big Tent looks very strange to me,
>>>
>>> Andreas
>>>
>>> [1] https://review.openstack.org/284454
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>-- 
>
>Thanks,
>Nikhil
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Newton priorities, processes, dates, spec owners and reviewers

2016-05-11 Thread Nikhil Komawar
Hello all,

Here are a few important announcements for the members involved in the
Glance community.


Priorities:

===

* The Glance priorities for Newton were discussed at the contributors'
meetup at the summit.

* A few items carried forward from Mitaka are still our priorities, and
there are a couple of items from the summit that we have made a review
priority.

Code review priority:

* Import refactor

* Nova v1, v2 support

* Image sharing changes

* Documentation changes [1], [2]


The required attention from the Glance team on Nova v1, v2 support is
minimal; the people who are actively involved should review the code and
the spec.


Everyone is encouraged to review the Import refactor work; however, if
you do not know where to start, you can join the informal syncs on
#openstack-glance on Thursdays at 1330 UTC. If you do not see people
chatting, you are more than encouraged to ping the following IRC
nicks: rosmaita, nikhil (at the very least).


Everyone is encouraged to review the Image sharing changes that are
currently being discussed. Although the constructs are not going to
hamper the standard image workflows, the experience of sharing may be
different after these changes. There will be subsequent changes to
python-glanceclient to accommodate the server changes.


Documentation changes are something that we must accommodate in this
cycle; thanks to the docs team, a code draft was given to us. The
documentation liaison is working hard to get it into the right shape, and
a couple more reviewers are to be assigned to review this change. We need
volunteers for the review work.


Process to be adopted in Newton:

==


Full specs:

* All newly introduced features, API-impacting changes and changes
that could either impact security or have a larger impact on operators
will need a full spec against the openstack/glance-specs repo.

* For each spec, you need to create a corresponding blueprint in
launchpad [3] and indicate your intention to target that spec in the
newton milestone. You will want to be judicious in selecting the
milestone; if we see too many proposals for a particular milestone, the
glance-core team will have to selectively reject some of them or move
them to a different milestone. Please set the URL of the spec on your blueprint.

* Please use the template for the full spec [4] and try to complete it
as much as possible. A spec that is missing some critical info is likely
to not get feedback.

* Blueprints by themselves will not be reviewed. You need a spec
associated with a blueprint to get the proposal reviewed.

* The reviewers section [5] is very important for us to determine if the
team will have enough time to review your spec and code. This
information plays an important role in planning and prioritizing your spec.
Reach out to the core-reviewer nicks [6] on the #openstack-glance channel
to see who is interested in assigning themselves to your spec.

* Please make sure that each spec has a well-defined problem statement.
The problem statement isn't a one-liner along the lines of -- it would be
nice to have this change, admins should be able to do operations that users
can't, Glance should do so and so, etc. The problem statement should
elaborate your use case and explain what in Glance or OpenStack can be
improved, what exists currently (if anything), why it would be beneficial to
make this change, how the view of the cloud would change afterwards, etc.

* All full specs require +W from PTL/liaison


Lite specs:

* All proposals that are expected to change the behavior of the system
significantly are required to have a lite-spec.

* For a lite-spec you do not need a blueprint filed and you don't need
to target it to particular milestones. Glance would accept most
lite-specs until newton-3 unless a cross-project or another conflicting
change is a blocker.

* Please make sure that each lite-spec has a well-defined problem
statement. The problem statement is NOT a one-liner along the lines of -- it
would be nice to have this change, admins should be able to do operations
that users can't, Glance should do so and so, etc. The problem statement
should elaborate your use case and explain what in Glance or OpenStack can
be improved, what exists currently (if anything), why it would be beneficial
to make this change, how the view of the cloud would change afterwards, etc.

* All lite specs should have at least two +2s (agreement from at least
two core reviewers). There is no need to wait for a +W from the PTL, but it
is highly encouraged to consult a liaison (module expert).

* Lite specs can be merged irrespective of the spec freeze dates.


Important dates to remember:

===

* June 2, R-18: newton-1

* June 17, R-16: Spec soft freeze, Glance mid-cycle (15th-17th)
(depending on attendance). If you've already booked travel, contact me ASAP.

* July 14, R-12: newton-2

* Jul 29, R-10: Spec hard freeze

* Aug 23, R-6: final glance_store release

* Aug 30, R-5: ne

Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-11 Thread Kumari, Madhuri
Hi,

+1 for option #2. It's OK to fetch the help message from the API server, but a
drawback is that help messages are supposed to work generally even if the
actual services are not running.
Thoughts?

Regards,
Madhuri

-Original Message-
From: taget [mailto:qiaoliy...@gmail.com] 
Sent: Thursday, May 12, 2016 7:01 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [magnum] How to document 'labels'

+1 for #2, to keep the help message in the magnum API server, since we have a
validation list in the API server; see
https://review.openstack.org/#/c/312990/5/magnum/api/attr_validator.py@23 .

But if we choose #2, the CLI should add support for retrieving the help message
from the API server.

On 2016年05月12日 09:10, Shuu Mutou wrote:
> Hi all,
>
> +1 for option #2.
>
> Yuanying drafted following blueprint. And I will follow up this.
> https://blueprints.launchpad.net/magnum/+spec/magnum-api-swagger-support
>
> I think this will satisfy Tom's comments.
>
> regards,
>
> Shu Muto
>
>
> __
>  OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best Regards, Eli Qiao (乔立勇)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Easing contributions to central documentation

2016-05-11 Thread Joseph Robinson
Hi All, One reply inline:

On 11/05/2016, 7:33 AM, "Lana Brindley"  wrote:

>On 10/05/16 20:08, Julien Danjou wrote:
>> On Mon, May 09 2016, Matt Kassawara wrote:
>>
>>> So, before developer frustrations drive some or all projects to move
>>> their documentation in-tree, which negatively impacts the goal of
>>> presenting a coherent product, I suggest establishing an agreement
>>> between developers and the documentation team regarding the review
>>> process.
>>
>> My 2c, but it's said all over the place that OpenStack is not a product,
>> but a framework. So perhaps the goal you're pursuing is not working
>> because it's not accessible by design?
>>
>>> 1) The documentation team should review the patch for compliance with
>>> conventions (proper structure, format, grammar, spelling, etc.) and
>>>provide
>>> feedback to the developer who updates the patch.
>>> 2) The documentation team should modify the patch to make it compliant
>>>and
>>> ask the developer for a final review to prior to merging it.
>>> 3) The documentation team should only modify the patch to make it
>>>build (if
>>> necessary) and quickly merge it with a documentation bug to resolve any
>>> compliance problems in a future patch by the documentation team.

I like the idea of options 2 and 3. Specifically though, I think Option 3
- merging content that builds, and checking out a bug to improve the
quality - can work in some cases. With dedicated teams on several
guides, docs contributors would be able to pick up bugs right away -
that's my 2c.

>>>
>>> What do you think?
>>
>> We, Telemetry, are moving our documentation in-tree and are applying a
>> policy of "no doc, no merge" (same policy we had for unit tests).
>
>This is great news! I love hearing stories like this from project teams
>who recognise the importance of documentation. Hopefully the new model for
>Install Guides will help you out here, too.
>
>> So until the doc team starts to help projects with that (proof-reading,
>> pointing out missing doc update in patches, etc) and trying to be part
>> of actual OpenStack projects, I don't think your goal will ever work.
>>
>> For example, we have an up-to-date documentation in Gnocchi since the
>> beginning, that covers the whole project. It's probably not coherent
>> with the rest of OpenStack in wording etc, but we'd be delighted to have
>> some folks of the doc team help us with that.
>
>Let's work together to find out how we can help. I note that Lance
>Bragstad is your CPL, is that still current?
>
>Lana
>
>--
>Lana Brindley
>Technical Writer
>Rackspace Cloud Builders Australia
>http://lanabrindley.com
>




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-11 Thread Bhandaru, Malini K
+1   :-)

-Original Message-
From: Nikhil Komawar [mailto:nik.koma...@gmail.com] 
Sent: Wednesday, May 11, 2016 8:40 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian 
Rosmaita to the glance-coresec team

Hello all,

I would like to propose adding Brian to the team. He has been doing great
work on improving the Glance experience for users and operators and tying the
threads with the security aspects of the service. He also brings in a good
perspective from running a large-scale glance deployment and the issues seen
therein.

Please cast your vote with +1, 0 or -1, or you can reply back to me.

Thank you.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar] Zaqar weekly meeting time changed

2016-05-11 Thread Fei Long Wang
Hi team,

Based on our previous discussion, the weekly meeting time has been
changed. The new meeting time is as below. Thanks.

time:   18:00 UTC
day: Monday
irc:   openstack-meeting-3
frequency:  weekly
start_date: 20160516

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings

2016-05-11 Thread Ildikó Váncsa


> -Original Message-
> From: Mike Perez [mailto:m...@openstack.org]
> Sent: May 11, 2016 23:52
> To: Ildikó Váncsa
> Cc: 'D'Angelo, Scott  (scott.dang...@hpe.com)'; 
> 'Walter A. Boring IV'; 'John Griffith
>  (john.griffi...@gmail.com)'; 'Matt Riedemann'; 
> 'Sean McGinnis'; 'John Garbutt 
> (j...@johngarbutt.com)'; openstack-dev@lists.openstack.org
> Subject: Re: [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings
> 
> On 14:38 May 11, Ildikó Váncsa wrote:
> > Hi All,
> >
> > We will continue the meeting series about the Cinder-Nova interaction 
> > changes mostly from multiattach  perspective. We have a
> new meeting slot, which is __Thursday, 1700UTC__ on the #openstack-meeting-cp 
> channel.
> >
> > Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
> > Summary about ongoing items: 
> > http://lists.openstack.org/pipermail/openstack-dev/2016-May/094089.html
> 
> I don't think this meeting is registered. [1] Take a look at [2].

A quick question before I move forward with registering this temporary series.
As it is a project-to-project interaction as opposed to a cross-project meeting
according to [2], I assume I should pick another IRC channel for this. Is that
correct? If yes, is the temporary nature of the meeting series accepted on
other meeting channels as well?

Thanks,
/Ildikó

> 
> [1] - http://eavesdrop.openstack.org/
> [2] - https://review.openstack.org/#/c/301822/6/doc/source/cross-project.rst
> 
> --
> Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-11 Thread Fei Long Wang
+1 for Brian

On 12/05/16 15:39, Nikhil Komawar wrote:
> Hello all,
>
> I would like to propose adding Brian to the team. He has been doing
> great work on improving the Glance experience for users and operators and
> tying the threads with the security aspects of the service. He also
> brings in a good perspective from running a large-scale glance deployment
> and the issues seen therein.
>
> Please cast your vote with +1, 0 or -1, or you can reply back to me.
>
> Thank you.
>

-- 
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] glance-registry deprecation: Request for feedback

2016-05-11 Thread Flavio Percoco

Greetings,

The Glance team is evaluating the needs and usefulness of the Glance Registry
service and this email is a request for feedback from the overall community
before the team moves forward with anything.

Historically, there have been reasons to create this service. Some deployments
use it to hide database credentials from Glance public endpoints, others use it
for scaling purposes and others because v1 depends on it. This is a good time
for the team to re-evaluate the need for this service, since v2 doesn't depend
on it.

So, here's the big question:

Why do you think this service should be kept around?

Summit etherpad: 
https://etherpad.openstack.org/p/newton-glance-registry-deprecation

Flavio
--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] [VMT] [Security] Proposal to add Brian Rosmaita to the glance-coresec team

2016-05-11 Thread Nikhil Komawar
Hello all,

I would like to propose adding Brian to the team. He has been doing
great work on improving the Glance experience for users and operators and
tying the threads with the security aspects of the service. He also
brings in a good perspective from running a large-scale glance deployment
and the issues seen therein.

Please cast your vote with +1, 0 or -1, or you can reply back to me.

Thank you.

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] [glance] on contributing to projects/programs -- say glance

2016-05-11 Thread Nikhil Komawar
Hi all,

I wanted to share with you that over the last week 5 (iirc) people have
reached out to me asking how they could contribute to Glance and have
expressed interest in specific areas. The list of such individuals is
longer (around 10) if I count those I chatted with at the summit.


It's a great sign that people are enthusiastic about contributing and
trying to make an impact. These are open source, open community projects
where the visibility factor becomes important -- agreed. But the only
way one can truly become relevant and important to a project/program
is by contributing their efforts to the significantly important
discussions the team is having. This is way more important than
contributing code, because it is about shaping it.


There are so many of you who have a spec or piece of code out for review
and ask core reviewers for reviews at the meetings, or during
casual chats. On not getting a review/response back, there are complaints
about the lack of reviews on specific patch sets or proposals. This may
grow into a perception that a project/program is too slow, and it spreads
like wildfire even among the prominent members of the community.
(Negative news is always trendy.) But does that really reflect how the
project/program is being operated? How can we ensure that such
misconceptions do not take hold in the community? This is an email
attempting to address that, and I hope to come up with more as
time permits.


What can be your most significant contribution?

No matter what your status is in the project/program, core or not, you
NEED to provide feedback via reviews; preferably via gerrit reviews, or
you can choose to give input on irc/email. The way people will know
that you are valuable to the team or to their code proposals is
by providing constructive, critical, rational, forward-looking, concise
and precise reviews. They will appreciate that you have taken the time
to provide them feedback on something they consider important. This
will result in a positive feedback loop, and you will soon find
yourself in a back-and-forth review dance with those reviewers (and
possibly other reviewers too). No, they need not be cores, just any
reviewer who is willing to give feedback.


What are good ways of giving reviews?

Please focus on the agenda (what the code intends to implement),
completeness, compliance with standards & processes, testing
thoroughness, thinking of corner cases, sometimes playing devil's
advocate, tying the concepts to other projects/programs (cross-project
thinking), thinking about performance, thinking about security, etc.
Please take a look at reviews from some of the known reviewers in
OpenStack, search the internet for best practices and ask questions (always)!


What if I'm not sure?

Start chatting with the team on irc or ping someone privately to see if
they are willing to answer your questions. You can always provide
feedback on irc and need not worry about providing reviews on gerrit.


How to be effective?

Keep a good review cadence for yourself. And no, I'm not talking about
focusing on the stackalytics numbers; do that for your corporate
organization/management. Try to be involved in the project on a regular
basis. Almost 30-40% of my reviews are either via irc or email or
etherpad. The important thing should be that you are helping things move
forward. In OpenStack (at least in Glance) you will be recognized when
your reviews are found helpful. People will vouch for you and provide
feedback in return.


Over the past few years, I have realized that not many people focus on
all of the above four. So, we (at least Glance) have not been able to
establish a positive feedback loop involving many members. We as an open
community cannot afford to work in silos; community input is
important on the important aspects of projects/programs. Many members
pick up important code and are very enthusiastic about contributing more,
but not many are willing to provide constant, valuable feedback. Slowness,
silos and polarization are only natural. Even more, when feedback isn't
provided, you are likely to miss important aspects of the
project/program when writing your solution -- would you consider that a
valuable contribution to the project/program?


Also, people do keep the wrong notion that being a core is a sign of status,
and I keep writing such emails mentioning that it is a sign of
_responsibility_. While being a core gives you influence, it is to ensure
that you remain capable of taking the project forward with input and
contributions from others as well. So, the way to give back to the community
for giving you this role is through various means -- start reviewing
(more), specs too, help with (various) liaison duties, mentor new
members, help with squashing bugs & questions, provide feedback about
and on the process, increase awareness, etc.


If you are really interested in becoming a core or take the
responsibility of core reviewer-ship seriously

Re: [openstack-dev] [mistral] Promoting Hardik Parekh to core reviewers

2016-05-11 Thread Hardik Parekh

Thank you all !!! I will try my best to justify this responsibility :)

Thanks and Regards,
Hardik Parekh

> On May 10, 2016, at 4:34 PM, Lingxian Kong  wrote:
> 
> +1 to Hardik :-)
> Regards!
> ---
> Lingxian Kong
> 
> 
> On Tue, May 10, 2016 at 8:49 PM, Renat Akhmerov
>  wrote:
>> I’d like to promote Hardik Parekh to Mistral core reviewers.
>> 
>> He was #1 by number of commits and #3 by number of reviews in Mitaka cycle
>> and I think he deserved it. His statistics for Mitaka is at [1].
>> 
>> Please vote.
>> 
>> [1]
>> http://stackalytics.com/?release=mitaka&module=mistral-group&metric=marks&user_id=hardik-parekh047
>> 
>> Renat Akhmerov
>> @Nokia
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Hiatus

2016-05-11 Thread Nikhil Komawar
Hi Stuart,

It is sad news indeed that you need to shift focus. I do hope that
you come back very soon! (soon enough to run for PTL next cycle ;-) )


I do, and am sure the entire Glance team does, appreciate all the effort you
have put into the Import work and the great discussions between you, Brian,
Flavio and the rest of us in Glance over the last few
months. I hope the work turns out the way you would like (as we
will all definitely have our opinions to shape it). :-)


All the best in your current task.


On 5/11/16 5:56 AM, stuart.mcla...@hp.com wrote:
> Hi,
>
> A change in focus means I'm going to be less involved upstream for the
> next while.
>
> I realise the timing isn't great for the DefCore image import stuff.
> I'm hoping someone else may be able to pick up where I've left off [1].
>
> Thanks,
>
> -Stuart
>
> [1] https://review.openstack.org/#/c/312972,
> https://review.openstack.org/#/c/312653,
> https://review.openstack.org/#/c/308386
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Jinja2 for Heat template

2016-05-11 Thread Yuanying OTSUKA
Hi, all.

Now, I’m trying to implement the following bp.
* https://blueprints.launchpad.net/magnum/+spec/bay-with-no-floating-ips

This bp requires disabling/enabling the “floating ip resource” in the heat
template dynamically.
We have 3 options to implement this.

1. Use “conditions function”
*
https://blueprints.launchpad.net/heat/+spec/support-conditions-function
2. Dynamically generate heat template using Jinja2
3. Separate heat template to “kubecluster-with-floatingip.yaml” and
“kubecluster-no-floatingip.yaml”

Option 1 is easy to implement, but unfortunately it isn’t supported by
the Mitaka release.
Option 2 needs Jinja2 template support in Magnum (a rough sketch is below).
Option 3 will bring chaos.
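To make option 2 concrete, here is a minimal sketch (not actual Magnum code) of
rendering a Heat template with Jinja2 so the floating-IP resource is emitted
only when requested; the resource snippet and parameter names are invented for
illustration:

```python
# Minimal sketch of option 2 (not Magnum code): render a Heat template with
# Jinja2 and include the floating-IP resource only when asked for.
from jinja2 import Template

KUBECLUSTER = Template("""
heat_template_version: 2014-10-16
resources:
  kube_master:
    type: OS::Nova::Server
{%- if enable_floating_ip %}
  master_floating_ip:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network: {{ external_network }}
{%- endif %}
""")

# With enable_floating_ip=False the FloatingIP block disappears entirely,
# which is what the bay-with-no-floating-ips blueprint needs.
print(KUBECLUSTER.render(enable_floating_ip=True, external_network='public'))
```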

Please let me know what you think.


Thanks
-OTSUKA, Yuanying
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Pete Zaitcev
On Mon, 9 May 2016 14:17:40 -0500
Edward Leafe  wrote:

> Whenever I hear claims that Python is “too slow for X”, I wonder
> what’s so special about X that makes it so much more demanding than,
> say, serving up YouTube.

In the case of Swift, the biggest issue was the scheduler. As RAX people
found, although we (OpenStack Swift community of implementors) were able
to obtain an acceptable baseline performance, a bad drive dragged down
the whole node. Before Hummingbird, dfg (and redbo) screwed around with
fully separate processes that provided isolation, but that was not
scaling well. So, there was an endless parade of solutions on the base
of threads. Some patches went in, some did not. At some points things
were so bad that dfg posted a patch, which maintained a scoring board
in an SQLite file. They were willing to add a bunch of I/O to every
request just to avoid the worst case that Python forced upon them.
The community (that is basically John, Sam, and I) put brakes on that.
But only at that point redbo created Hummingbird, which solved the
issue for them.

Once Hummingbird went into production, they found that it was easy to
polish and it could be much faster. Some of the benchmarks were
beating Python by 80 times. CPU consumption went way down, too.
But all that was secondary in the adoption of Go. If not a significant
scalability crisis in the field, Swift in Go would not have happened.

Scott Simpson gave a preso at Vancouver Summit that had some details and
benchmarks. Google is no help finding it online, unfortunately. Only
finds the panel discussion. Maybe someone had it saved.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-11 Thread taget
+1 for #2, to keep the help message in the magnum API server, since we have a
validation list in the API server; see
https://review.openstack.org/#/c/312990/5/magnum/api/attr_validator.py@23 .


But if we choose #2, the CLI should add support for retrieving the help message
from the API server.


On 2016年05月12日 09:10, Shuu Mutou wrote:

Hi all,

+1 for option #2.

Yuanying drafted following blueprint. And I will follow up this.
https://blueprints.launchpad.net/magnum/+spec/magnum-api-swagger-support

I think this will satisfy Tom's comments.

regards,

Shu Muto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Best Regards, Eli Qiao (乔立勇)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-11 Thread Shuu Mutou
Hi all, 

+1 for option #2.

Yuanying drafted following blueprint. And I will follow up this.
https://blueprints.launchpad.net/magnum/+spec/magnum-api-swagger-support

I think this will satisfy Tom's comments.

regards, 

Shu Muto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][keystone] Austin summit nova/keystone cross-project session recap

2016-05-11 Thread Matt Riedemann
On Thursday afternoon we had a nova/keystone cross-project design summit 
session. The session etherpad with full details is here [1].


This session was mostly about a nova spec [2] to fix a long-standing bug 
in Nova. We've actually had several duplicate bugs for this issue, which 
is why we're working on the spec again to resolve it.


The issue is that when a nova admin is setting quota or flavor access 
they are passing a project id (not their own) which is not validated to 
actually exist with keystone. So if there is a typo, or a name is passed 
rather than an id, Nova will persist that and say the operation was 
successful, but it's useless.


Some attempts have been made to do at least checking on the id so that 
it's a UUID, but project IDs don't have to be UUIDs, so we've rejected 
those types of fixes for now. And we don't really want python-novaclient 
doing the check with keystone since we really need to handle this in the 
Nova REST API for all clients.


The spec was originally written to add new policy to keystone and add 
keystone admin credentials to nova.conf to perform the lookup, but 
that's since been rejected (it's not a good idea for Nova to have 
keystone admin credentials).


We then talked about whether or not to rely on a nova service user, but 
ultimately decided against that.


So the plan is to re-use the Nova admin context passed in to perform a
check in keystone (HEAD or GET) with the given project_id, which will
either pass or fail, and Nova can handle the result appropriately (a rough
sketch follows the list below):


* If Nova gets a 200, then the ID exists and we continue processing the 
request.


* If Nova gets a 403 then we don't have access to Keystone to perform 
the check (the deployer would have to adjust Keystone's policy.json to 
allow this). In this case I think we continue as we do today, maybe 
logging a warning. Details on that will be sorted out in the spec review.


* If Nova gets a 404 then the ID isn't valid and we return a 400 to the 
user.
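
To make the flow above concrete, here is a rough sketch (not the actual spec
implementation) of the check against the Keystone v3 API; the helper name and
the way the endpoint and token reach it are hypothetical:

```python
# Rough sketch of the project-ID check described above (not the spec code).
# It re-uses the admin request's token against the Keystone v3 API.
import requests


def validate_project_id(identity_endpoint, admin_token, project_id):
    """Return True (exists), False (not found) or None (not allowed to check)."""
    resp = requests.get('%s/v3/projects/%s' % (identity_endpoint, project_id),
                        headers={'X-Auth-Token': admin_token})
    if resp.status_code == 200:
        return True   # ID exists; continue processing the request
    if resp.status_code == 403:
        return None   # Keystone policy forbids the lookup; log a warning
    if resp.status_code == 404:
        return False  # invalid project ID; translate to a 400 for the caller
    resp.raise_for_status()
```

In the Nova REST API layer, the False case would then surface as a 400 to the
caller before any quota or flavor-access record is persisted.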


There were some clarifying questions already in the dev list [3].

We also agreed that the Nova REST API won't be accepting a tenant name 
lookup, that convenience can happen in clients like 
python-openstackclient which already accepts name or id for project inputs.


We had some time left at the end of the session to cover two other items.

oslo.policy changes for policy defaults in code
---

This was just a nod to the Keystone team to review Andrew Laski's specs 
[4], one of which is already approved (thanks Keystoners - is that what 
you call yourselves?).



Service users between projects to avoid token timeouts
--

The idea here is to pass user information (user, role, project IDs) and a
service token, and KSA will check the expiration on the service token.
This is not yet implemented in KSA. The Keystone team is hoping to merge
this code in Newton so Nova could use it in Ocata. The good news is that most
everything Nova is using for clients, e.g. cinderclient, glanceclient,
neutronclient, etc., is already using KSA.


This would also be an additive change to the context so it will work 
with older computes. The only catch with this is you can't set the token 
timeouts low until your nova computes are upgraded and using the updated 
context. A multi-node CI job should be able to tease out issues with 
this though.
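
For illustration, a rough sketch of the existing service-token mechanism this
work builds on, where the calling service sends its own token next to the
user's token; the header names are the ones keystonemiddleware already
understands, while the endpoint and token values are placeholders:

```python
# Illustration only (not the planned KSA change): a service passing the user's
# token and its own service token on an outgoing request, so the receiving
# side can validate both.
import requests

user_token = '<token from the incoming request context>'
service_token = '<token for the nova service user>'

resp = requests.get(
    'http://cinder.example.com:8776/v3/volumes/detail',  # hypothetical endpoint
    headers={'X-Auth-Token': user_token,
             'X-Service-Token': service_token})
```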


[1] https://etherpad.openstack.org/p/newton-nova-keystone
[2] https://review.openstack.org/#/c/294337/
[3] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094096.html
[4] 
https://review.openstack.org/#/q/project:openstack/oslo-specs+branch:master+topic:policy_generation



--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-11 Thread Nikhil Komawar
Thanks Josh for your reply. It's helpful.

The aim of this cross-project work is to come up with a standard way
of implementing quota logic that can be used by different services.
Currently, different projects have their individual implementations and
there are many lessons learned. The library is supposed to be born out
of that shared wisdom.

Hence, it needs to be an independent library that can make progress in a
way that lets it be successfully adopted and vetted by cross-project use
cases, but not necessarily enforce cross-project standardization on how
projects adopt it.


So, not oslo for now at least [1]. Big Tent? -- I do not know the
consequences of it not being in the Big Tent. We do not need a design
summit slot dedicated to this project, nor do we need to have elections,
nor is it a big enough project to be coordinated with a specific release
milestone (Newton, Ocata, etc.). The team, though, does follow the four
opens [2]. So, we can in future go for either option as needed. As long
as it lives under the openstack/delimiter umbrella, runs the standard gate
tests, and follows the OpenStack release process for libraries (but does
not necessarily require intervention from the release team), we are happy.


[1] Personally, I do not care of where it lives after it has been
adopted by a few different projects. But let's keep the future
discussions in the Pandora's box for now.

[2] The four opens http://governance.openstack.org/reference/opens.html

On 5/11/16 7:16 PM, Joshua Harlow wrote:
> So it was under my belief that at its current stage that this library
> would start off on its own, and not initially start of (just yet) in
> oslo (as I think the oslo group wants to not be the
> blocker/requirement for a library being a successful thing + the cost
> of it being in oslo may not be warranted yet).
>
> If in the future we as a community think it is better under oslo (and
> said membership into oslo will help); then I'm ok with it being
> there... I just know that others (in the oslo group) have other
> thoughts here (and hopefully they can chime in).
>
> Part of this is also being refined in
> https://review.openstack.org/#/c/312233/ and that hopefully can be a
> guideline for new libraries that come along.
>
> -Josh
>
> Andreas Jaeger wrote:
>> Since the review [1] to create the repo is up now, I have one question:
>> This is a cross-project effort, so what is its governance?
>>
>> The review stated it will be an independent project outside of the big
>> tent - but seeing that this should become a common part for core
>> projects and specific to OpenStack, I wonder whether that is the right
>> approach. It fits nicely into Oslo as cross-project library - or it
>> could be an independent team on its own in the Big Tent.
>>
>> But cross-project and outside of Big Tent looks very strange to me,
>>
>> Andreas
>>
>> [1] https://review.openstack.org/284454
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][infra] HW upgrade for tripleo CI

2016-05-11 Thread Derek Higgins
On 11 May 2016 at 10:24, Derek Higgins  wrote:
> Hi All,
> I'll be taking down the tripleo cloud today for the hardware
> upgrade, The cloud will be take down at about 1PM UTC and I'll start
> bringing it back up once dcops have finished installing the new HW. We
> should have everything back up and running for tomorrow.

Unfortunately we didn't get this completed tonight and will have to
pick up on it tomorrow. During the upgrade one of the hosts failed to
get past its POST process; the host in question is the bastion
server, which routes all traffic to the cloud and is our jump host for
access to the other servers. We'll be picking up on it tomorrow when
other members of the lab team are available to assist.

Thanks and sorry for the extended disruption,
Derek.

>
> thanks,
> Derek.
>
> On 6 May 2016 at 15:36, Derek Higgins  wrote:
>> Hi All,
>>the long awaited RAM and SSD's have arrived for the tripleo rack,
>> I'd like to schedule a time next week to do the install which will
>> involve and outage window. We could attempt to do it node by node but
>> the controller needs to come down at some stage anyways and doing
>> other nodes groups at a time will take all day as we would have to
>> wait for jobs to finish on each one as we go along.
>>
>>I'm suggesting we do it on one of Monday(maybe a little soon at
>> this stage), Wednesday or Friday (mainly because those best suit me),
>> has anybody any suggestions why one day would be better over the
>> others?
>>
>>The other option is that we do nothing until the Rack is moved
>> later in the summer but the exact timing of this is now up in the air
>> a little so I think its best we just bite the bullet and do this ASAP
>> without waiting.
>>
>> thanks,
>> Derek.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Gregory Haynes
On Wed, May 11, 2016, at 01:11 PM, Robert Collins wrote:
> As a community, we decided long ago to silo our code: Nova and Swift
> could have been in one tree, with multiple different artifacts - think
> of all the cross-code-base-friction we would not have had if we'd done
> that! The cultural consequence though is that bringing up new things
> is much less disruptive: add a new tree, get it configured in CI, move
> on. We're a team-centric structure for governance, and the costs of
> doing cross-team code-level initiatives are already prohibitive: we
> have already delegated all those efforts to the teams that own the
> repositories: requirements syncing, translations syncing, lint fixups,
> security audits... Having folk routinely move cross project is
> something we seem to be trying to optimise for, but we're not actually
> achieving.
> 

I think this greatly understates how much cross project work goes on. I
agree that cross-team feature development is prohibitively difficult
most of the time, but I do see a lot of cross-project reading/debugging
happening on a day to day basis. I worry that this type of work is
extremely valuable but not very visible and therefore easy to overlook.
I know I do this a lot as both a deployer and a developer, and I have to
imagine others do as well.

> So, given that that is the model - why is language part of it? Yes,
> there are minimum overheads to having a given language in CI - we need
> to be able to do robust reliable builds [or accept periodic downtime
> when the internet is not cooperating], and that sets a lower fixed
> cost, varying per language. Right now we support Python, Java,
> Javascript, Ruby in CI (as I understand it - infra focused folk please
> jump in here :)).
> 

The upfront costs are not a huge issue IMO, for reasons you hit on -
folks wanting the support for a new lanaguage are silo'd off enough that
they can pay down upfront costs without affecting the rest of the
community much. What I worry about are the maintenance costs and the
cross-team costs which are hard to quantify in a list of requirements
for a new language:

It's reasonable to assume that new languages will have similar amounts of
upstream breakage to what we get from python tooling (such as a new pip
release), and so we would just be increasing the load here by a factor
of the number of languages we gate on. This is especially concerning
given that a lot of the firefighting to solve these types of issues
seems to center around one team doing cross-project work (infra).

The ability for folks to easily debug problems across projects is a huge
issue. This isn't mostly a language issue even, it's a tooling issue: we
are going to have to use and/or develop a lot of tools to replace the
python counterparts we rely on, and debugging issues (especially in the
gate) is going to require knowledge of each one.

-Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-11 Thread Fox, Kevin M
Heat added a panel in horizon to describe all the resources they have. Not sure 
you would want to do it the same way, but there is at least one other project 
that ran into this issue and took a stab at solving it. Might be worth a look.

Thanks,
Kevin

From: Hongbin Lu [hongbin...@huawei.com]
Sent: Wednesday, May 11, 2016 12:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For a recap, ‘labels’
is a property in baymodel and is used by users to input additional key-value
pairs to configure the bay. In the last team meeting, we discussed what the
best way to document ‘labels’ is. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in the Magnum server and expose it via the REST
API. Then, have the CLI load the help text of individual properties from the
Magnum server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/…), and add the doc link to the CLI help text.
For option #1, I think an advantage is that it is close to end-users, thus
providing a better user experience. In contrast, Tom Cammann pointed out a
disadvantage that the CLI help text could more easily become out of date. For
option #2, it should work but incurs a lot of extra work (a rough sketch
appears after the links below). For option #3, the disadvantage is the user
experience (since users need to click the link to see the documents), but it
is easier for us to maintain. I am wondering if it is possible to have a
combination of #1 and #3. Thoughts?

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/
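
To make option #2 a bit more concrete, here is a hypothetical sketch (not
existing Magnum code) of a CLI loading label help text from the API server,
with a fallback for the drawback raised elsewhere in this thread; the endpoint
and response shape are invented for illustration:

```python
# Hypothetical sketch of option #2: the CLI asks the API server for per-label
# help text and folds it into the --labels argument help.
import argparse
import requests


def build_parser(magnum_endpoint):
    parser = argparse.ArgumentParser(prog='magnum baymodel-create')
    try:
        # hypothetical endpoint returning {"labels": {"<key>": "<help text>"}}
        docs = requests.get(magnum_endpoint + '/v1/baymodels/labels-help',
                            timeout=2).json().get('labels', {})
        label_help = '; '.join('%s: %s' % item for item in sorted(docs.items()))
    except (requests.RequestException, ValueError):
        # drawback: help output must still work when the API is unreachable
        label_help = 'arbitrary key=value pairs (API unreachable for details)'
    parser.add_argument('--labels', metavar='<KEY1=VALUE1,KEY2=VALUE2...>',
                        help=label_help)
    return parser
```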

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Clark Boylan
On Wed, May 11, 2016, at 01:11 PM, Robert Collins wrote:
> So, given that that is the model - why is language part of it? Yes,
> there are minimum overheads to having a given language in CI - we need
> to be able to do robust reliable builds [or accept periodic downtime
> when the internet is not cooperating], and that sets a lower fixed
> cost, varying per language. Right now we support Python, Java,
> Javascript, Ruby in CI (as I understand it - infra focused folk please
> jump in here :)).

The actual list is more accurately Python and Bash and to a degree
Javascript. We can run Java and Ruby and there have even been people
running Go; this is a result of our test infrastructure giving you root
on single-use test instances. But we haven't really bent over
backwards to properly support these languages with package mirrors,
debugging help, test framework help, etc.; this has largely only been done
for Python and Bash (but often Javascript too due to the realities
of the modern browser world).

One thing to keep in mind is that people working cross project tasks and
particularly infra and gate triage end up doing a significant amount of
debug work keeping the projects running. Whether it is an upstream lib
breaking, packaging tools exploding, packages disappearing, or
misdiagnosed infrastructure issues these individuals do spend a ton of
time knee deep in language specific details. Yes, there is a minimum
overhead for supporting a given language, and that overhead is much
higher than many seem to realize. This is why language is part of it.

If we had a stronger culture of cross project cooperation and OpenStack
issue triage I definitely wouldn't worry about this as much, but the
reality of the current world is that a very small subsection of our
community is involved in this work and they already have a high rate of
burn out.

I am glad that general consensus seems to be we should have this
discussion and determine these costs before we make any decisions.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [app-catalog] App Catalog IRC meeting Thursday May 12th

2016-05-11 Thread Christopher Aedo
Join us Thursday for our weekly meeting, scheduled for May 12th at
17:00UTC in #openstack-meeting-3

The agenda can be found here, and please add to it if you want to discuss
something with the Community App Catalog team:
https://wiki.openstack.org/wiki/Meetings/app-catalog

It might be a short meeting as I (and I suspect others) have not fully
caught back up from all the fun during the summit :)  Just the same,
looking forward to seeing you there tomorrow!

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [senlin] [keystone] [ceilometer] [telemetry] Questions about api-ref launchpad bugs

2016-05-11 Thread Atsushi SAKAI
Anne
Augustina

  Note: sorry for the duplicate posting to openstack-docs; my first
attempt to post to openstack-dev failed.

  Thank you for the description and help.

  I will clean up the launchpad api-site bugs within this week first,
  since there are only around 150 issues and many of them are already
  tagged. I can manually move most of the issues.
  Is there any problem with this approach?

  Then I will do the same for the remaining api-ref migration issues.
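
  For the bulk filter/export Anne asked about, a minimal launchpadlib
  sketch along these lines might be enough (the project name, tag and
  statuses below are only illustrative; read-only access suffices):

    from launchpadlib.launchpad import Launchpad

    # Anonymous login is enough for listing bugs.
    lp = Launchpad.login_anonymously('api-ref-triage', 'production')

    # Illustrative project/tag: filter api-site bugs by service tag.
    project = lp.projects['openstack-api-site']
    for task in project.searchTasks(tags=['nova'],
                                    status=['New', 'Confirmed']):
        print(task.bug.id, task.bug.title)

  The actual move to each project's Launchpad would still be a separate
  step, but this at least gives the per-service lists to work from.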

Thanks
Atsushi SAKAI

  



On Tue, 10 May 2016 07:53:19 -0500
Anne Gentle  wrote:

> Great questions, so I'm copying the -docs and -dev lists to make sure
> people know the answers.
> 
> On Tue, May 10, 2016 at 5:14 AM, Atsushi SAKAI 
> wrote:
> 
> > Hello Anne
> >
> >   I have several question when I am reading through etherpad's (in
> > progress).
> >   It would be appreciated to answer these questions.
> >
> > 1)Should api-ref launchpad **bugs** be moved to each modules
> >   (like keystone, nova etc)?
> >   Also, this should be applied to moved one's only or all components?
> >(compute, baremetal Ref.2)
> >
> >   Ref.
> > https://etherpad.openstack.org/p/austin-docs-newtonplan
> > API site bug list cleanup: move specific service API ref bugs to
> > project's Launchpad
> >
> >   Ref.2
> > http://developer.openstack.org/api-ref/compute/
> > http://developer.openstack.org/api-ref/baremetal/
> 
> 
> Yes! I definitely got agreement from nova team that they want them. Does
> anyone have a Launchpad script that could help with the bulk filter/export?
> Also, are any teams concerned about taking on their API reference bugs?
> Let's chat.
> 
> 
> >
> >
> > 2)Status of API-Ref
> >   a)Why keystone and senlin are no person at this moment?
> >
> >
> >
> Keystone -- after the Summit, keystone had someone sign up [1], but sounds
> like we need someone else. Brant, can you help us find someone?
> 
> Senlin -- Qiming Teng had asked a lot of questions earlier in the process
> and tested the instructions. Qiming had good concerns about personal
> bandwidth limits following along with all the changes. Now that it's
> settled, I'll follow up (and hoping the senlin team is reading the list).
> 
> 
> >   b)What is your plan for sahara and ceilometer?
> >  (It seems already exist the document.)
> >
> 
> Yes, these are two I had seen already have RST, but they do not use the
> helpful Sphinx extensions.
> 
> Sahara -- Mike McCune, we should chat about the plans. Are you okay with
> moving towards the common framework and editing the current RST files to
> use the rest_method and rest_parameters Sphinx directives?
> 
> Ceilometer -- sorry, Julien, I hadn't reached out individually to you.
> Could you let me know your plans for the RST API reference docs?
> 
> 
> >   c)When is the table's status changed to "Done"?
> >  nova (compute) and ironic (baremetal) seems first patch merged
> >  and see the document already.
> >
> 
> I'll change those two to Done.
> 
> Thanks for asking -
> Anne
> 
> 
> >
> >
> >   Ref.
> > [1]
> > https://wiki.openstack.org/wiki/Documentation/Migrate#API_Reference_Plan
> >
> >
> > Thanks
> >
> > Atsushi SAKAI
> >
> 
> 
> 
> -- 
> Anne Gentle
> www.justwriteclick.com


-- 
Atsushi SAKAI 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Fox, Kevin M
/me puts on his Operator hat.

Operators do care about being able to debug issues with the stuff they have to 
deploy/manage.

One of our complaints about Java (and probably Erlang; not sure about Go): in 
addition to the standard C-like issues you hit, file descriptor limits, etc., 
the language carries in its own issues: Java stack sizes, max memory limits 
that default too low, unusual command line switches, etc.

So, yes, operators don't tend to care too much about language if they can help 
it, but often the language makes them care.

Also, as someone who has had to write at least one patch (or backport one 
myself) against something on pretty much every OpenStack upgrade I've done, 
familiarity with the language has greatly helped. Too many languages make that 
harder. Sure, in an ideal world, OpenStack shouldn't require that of an 
Operator and so language wouldn't count as much. But we're not there yet.

I'm not saying don't do it. If it's the right tool for the job, it's the right 
tool for the job. Just be very cognizant of what you are asking Operators to 
deal with. The language list is quite long already.

Thanks,
Kevin

From: Robert Collins [robe...@robertcollins.net]
Sent: Wednesday, May 11, 2016 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] supporting Go

I'm going to try arguing the pro case.

The big tent has us bringing any *team* that want to work the way we
do: in the open, collaboratively, in concert with other teams, into
OpenStacks community.

Why are we using implementation language as a gate here?

I assert that most deployers don't fundamentally care about language
of the thing they are deploying; they care about cost of operations,
which is influenced by a number of facets around the language
ecosystem: packaging, building, maturity, VM
behaviour-and-tuning-needs, as well as service specific issues such as
sharding, upgrades, support (from vendors/community), training (their
staff - free reference material, as well as courses) etc. Every
discussion we had with operators about adding new dependencies was
primarily focused on the operational aspects, not 'is it written in
C/$LANGUAGE'. Right now our stock dependency stack has C (Linux, libc,
libssl, Python itself, etc.), Erlang (rabbitmq), and Java (zookeeper).

End users emphatically do not care about the language API servers were
written in. They want stability, performance, features, not 'Written
in $LANGUAGE'.

As a community, we decided long ago to silo our code: Nova and Swift
could have been in one tree, with multiple different artifacts - think
of all the cross-code-base-friction we would not have had if we'd done
that! The cultural consequence though is that bringing up new things
is much less disruptive: add a new tree, get it configured in CI, move
on. We're a team-centric structure for governance, and the costs of
doing cross-team code-level initiatives are already prohibitive: we
have already delegated all those efforts to the teams that own the
repositories: requirements syncing, translations syncing, lint fixups,
security audits... Having folk routinely move cross project is
something we seem to be trying to optimise for, but we're not actually
achieving.

So, given that that is the model - why is language part of it? Yes,
there are minimum overheads to having a given language in CI - we need
to be able to do robust reliable builds [or accept periodic downtime
when the internet is not cooperating], and that sets a lower fixed
cost, varying per language. Right now we support Python, Java,
Javascript, Ruby in CI (as I understand it - infra focused folk please
jump in here :)).

A Big Tent approach would be this:
 - Document the required support for a new language - not just CI
 - Any team that wants to use $LANGUAGE just needs to ensure that that
support is present.
 - Make sure that any cross-service interactions are well defined in a
language neutral fashion. This is just good engineering basically:
define a contract, use it.

Here is a straw man list of requirements:
 - Reliable builds: the ability to build and test without talking to
the internet at all.
 - Packagable: the ability to take a source tree for a project, do
some arbitrary transform and end up with a folder structure that can
be placed on another machine, with any needed dependencies, and work.
[Note, this isn't the same as 'packagable in a way that makes Red Hat
and Canonical and Suse *happy*', but that's something we can be sure
that those orgs are working on with language providers already.]
 - FL/OSS
 - Compatible with ASL v2 source code. [e.g. any compiler doesn't
taint its output]
 - Can talk oslo.messaging's message format

That list is actually short, and those needs are quite tameable. So
lets do it - lets open up the tent still further, stop picking winners
and instead let the market sort it out.

-Rob

_

Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-11 Thread Joshua Harlow
So it was under my belief that at its current stage that this library 
would start off on its own, and not initially start of (just yet) in 
oslo (as I think the oslo group wants to not be the blocker/requirement 
for a library being a successful thing + the cost of it being in oslo 
may not be warranted yet).


If in the future we as a community think it is better under oslo (and 
said membership into oslo will help); then I'm ok with it being there... 
I just know that others (in the oslo group) have other thoughts here 
(and hopefully they can chime in).


Part of this is also being refined in 
https://review.openstack.org/#/c/312233/ and that hopefully can be a 
guideline for new libraries that come along.


-Josh

Andreas Jaeger wrote:

Since the review [1] to create the repo is up now, I have one question:
This is a cross-project effort, so what is its governance?

The review stated it will be an independent project outside of the big
tent - but seeing that this should become a common part for core
projects and specific to OpenStack, I wonder whether that is the right
approach. It fits nicely into Oslo as cross-project library - or it
could be an independent team on its own in the Big Tent.

But cross-project and outside of Big Tent looks very strange to me,

Andreas

[1] https://review.openstack.org/284454


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-11 Thread Andreas Jaeger
Since the review [1] to create the repo is up now, I have one question:
This is a cross-project effort, so what is its governance?

The review stated it will be an independent project outside of the big
tent - but seeing that this should become a common part for core
projects and specific to OpenStack, I wonder whether that is the right
approach. It fits nicely into Oslo as cross-project library - or it
could be an independent team on its own in the Big Tent.

But cross-project and outside of Big Tent looks very strange to me,

Andreas

[1] https://review.openstack.org/284454
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-sfc] meeting topics for 5/12/2016 networking-sfc project IRC meeting

2016-05-11 Thread Cathy Zhang
Hi everyone,
Here are some topics I have for this week's project meeting discussion (the 
meeting time is unchanged, still UTC 1700 Thursday). Feel free to add more.
You can also find the meeting topics on the project wiki page:
https://wiki.openstack.org/wiki/Meetings/ServiceFunctionChainingMeeting
Meeting Info: Every Thursday 1700 UTC on #openstack-meeting-4
* Status update:
   *Add support for Querying Driver capability for SFC functionality support
   *Networking-sfc SFC driver for OVN
   *Networking-sfc SFC driver for ODL
   *Networking-sfc integration with ONOS completion status update
   *Tacker Driver for networking-sfc
* Use case documentation
* new field "priority" in flow-classifier
* Move the generation of the data path chain path ID from the Driver 
component to the networking-sfc plugin component
* Add VNF type (l2 or l3) param in service-function-param
* Add "NSH" encap param in service-chain-param
* Functional Test script
* Tempest Test suite
* Dynamic service chain update without service interruption


Thanks,
Cathy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-11 Thread Matt Kassawara
Can you move this documentation to the networking guide where
operators/users can locate it more easily?

On Wed, May 11, 2016 at 3:26 PM, Sukhdev Kapur 
wrote:

> Hi Matt,
>
> If you look through the documentation, there are steps described to
> install L2GW outside of devstack.
>
> -Sukhdev
>
>
> On Wed, May 11, 2016 at 1:54 PM, Matt Kassawara 
> wrote:
>
>> Do you have any examples of how to implement L2GW outside of DevStack?
>> Operators do not deploy on DevStack.
>>
>> On Wed, May 11, 2016 at 2:47 PM, Sukhdev Kapur 
>> wrote:
>>
>>> Hi Matt,
>>>
>>>
>>> Here is the wiki - https://wiki.openstack.org/wiki/Neutron/L2-GW
>>>
>>> This should provide you all the information that you need.
>>>
>>> -Sukhdev
>>>
>>>
>>>
>>> On Wed, May 11, 2016 at 1:14 PM, Matt Kassawara 
>>> wrote:
>>>
 Can you point me to the documentation?

 On Wed, May 11, 2016 at 2:05 PM, Sukhdev Kapur 
 wrote:

>
> Folks,
>
> I am happy to announce that Mitaka release for L2 Gateway is released
> and now available at https://pypi.python.org/pypi/networking-l2gw.
>
> You can install it by using "pip install networking-l2gw"
>
> This release has several enhancements and fixes for issues discovered
> in liberty release.
>
> Thanks
> Sukhdev Kapur
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][nova][neutron][qa] Injecting code into grenade resource create phase

2016-05-11 Thread Jim Rollenhagen
I've run into a bit of a wedge working on ironic grenade tests.

In a normal dsvm run, the ironic setup taps the control plane into the
tenant network (as that's how it's currently intended to be deployed).
That code is here[0].

However, in a grenade run, during the resource create phase, a new
network is created. This happens in the neutron bits[1], and is used to
boot a server in the nova bits[2].

Since the control plane can't communicate with the machine on that
network, our ramdisk doesn't reach back to ironic after booting up, and
provisioning fails[3][4].

Curious if any grenade experts have thoughts on how we might be able to
set up that tap in between the neutron and nova resource creation.

One alternative I've considered is a method to have nova resource
creation not boot an instance, and replicate that functionality in the
ironic plugin, after we tap into that network.

I'm sure there's other alternatives here that I haven't thought of;
suggestions welcome. Thanks in advance. :)

// jim

[0] 
https://github.com/openstack/ironic/blob/95ff5badbdea0898d7877e651893916008561760/devstack/lib/ironic#L653
[1] 
https://github.com/openstack-dev/grenade/blob/fce63f40d21abea926d343e9cddd620e3f03684a/projects/50_neutron/resources.sh#L34
[2] 
https://github.com/openstack-dev/grenade/blob/fce63f40d21abea926d343e9cddd620e3f03684a/projects/60_nova/resources.sh#L79
[3] 
http://logs.openstack.org/65/311865/3/experimental/gate-grenade-dsvm-ironic/e635dec/logs/grenade.sh.txt.gz#_2016-05-10_18_10_03_648
[4] 
http://logs.openstack.org/65/311865/3/experimental/gate-grenade-dsvm-ironic/e635dec/logs/old/screen-ir-cond.txt.gz?level=WARNING

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] Multi-attach/Cinder-Nova weekly IRC meetings

2016-05-11 Thread Mike Perez
On 14:38 May 11, Ildikó Váncsa wrote:
> Hi All,
> 
> We will continue the meeting series about the Cinder-Nova interaction changes 
> mostly from a multiattach perspective. We have a new meeting slot, which is 
> __Thursday, 1700UTC__ on the #openstack-meeting-cp channel.
> 
> Related etherpad: https://etherpad.openstack.org/p/cinder-nova-api-changes
> Summary about ongoing items: 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/094089.html 

I don't think this meeting is registered. [1] Take a look at [2].

[1] - http://eavesdrop.openstack.org/
[2] - https://review.openstack.org/#/c/301822/6/doc/source/cross-project.rst

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Anita Kuno
On 05/11/2016 02:06 PM, Paul Belanger wrote:
> On Wed, May 11, 2016 at 05:51:41PM +, Jeremy Stanley wrote:
>> On 2016-05-11 10:04:26 -0400 (-0400), Sean Dague wrote:
>> [...]
>>> Before deciding that it's unsurmountable, maybe giving delete
>>> permissions to a larger set of contributors might be a good idea.
>> [...]
>>
>> I have no objection to granting delete (on non-locked articles) to
>> all users immediately if someone works out the necessary
>> configuration change to put that into effect. I expect it was
>> restricted by default to encourage people to instead annotate
>> articles as deprecated or use the "move" feature to redirect them to
>> more relevant ones. That said, I think we're at the point where we
>> should start considering such important content as candidates to
>> move elsewhere outside the wiki (especially any currently remaining
>> locked articles).
>>
>>> I do realize running a public wiki is a PITA, especially at the
>>> target level of openstack.org.
>> [...]
>>
>> The thing which would most significantly reduce its attractiveness
>> to spammers would be to stop having major search engines index the
>> content on it. Somewhere they can publish content and have it
>> immediately show up in search engine results is really _all_ they
>> care about. Maybe a compromise is to say that the wiki is an okay
>> place to publish things you don't need people to find through
>> external search engines? Then we could quite easily open account
>> creation back up with minimal risk and much lower moderator demand.
> 
> I think giving out delete permissions is a good first step, however if we are
> going down this path of keeping the wiki alive, I believe we have to look at
> re-evaluating mediawiki.

Our current plan includes keeping the wiki maintained for a least a year
in any case, for some definition of maintained. On the etherpad for the
session, Craige had made notes of wikis other than mediawiki that he
liked. The group that was in the session decided not to pursue an
investigation of these suggestions but should any party feel interested
in investigating and creating a comparison, the suggestions still exist
in the session etherpad.

Thanks,
Anita.

>   Our current setup with puppet is not ideal and I feel
> mediawiki is maybe too much software for what we are doing.
> 
> I'd love to use something easier to manage (ideally not PHP based).
> 
>> -- 
>> Jeremy Stanley
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Are you getting value from the 8:00 utc Tuesday meeting?

2016-05-11 Thread Anita Kuno
On 05/11/2016 02:22 PM, Mike Perez wrote:
> On 14:30 May 10, Anita Kuno wrote:
>> I've been chairing this meeting for about 3 releases now and in this
>> last release it has mostly been myself and lennyb, who also attends the
>> Monday 15:00 utc third-party meeting that I chair.
>>
>> Are you getting value from the Tuesday 8:00 utc third-party meeting? If
>> yes, please make yourself known. If no, the meeting in this slot will be
>> removed leaving the other third-party meetings to continue along their
>> regular schedule.
> 
> I just want to thank you Anita for providing this to the community and helping
> the third-party CI movement with a variety of projects. I know this was not
> a convenient time for you, but you made sure everyone could get help.
> 

Thanks Mike, I appreciate the acknowledgement. We both know this is a
tough space and I am grateful to all the folks who do their best to make
it possible for folks to accomplish their goals here.

Thank you to you and to all those who help third party ci operators get
oriented and find what they need to run their systems.

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [designate] multi-tenancy in Neutron's DNS integration

2016-05-11 Thread Kevin Benton
Yes, dnsmasq is there as a simple solution for people that didn't want a
separate DNS server specified in their subnet.

Can you just enable DHCP on the subnet? Even though your containers don't
use it for addressing, it will still work for resolving DNS. (it doesn't
answer anonymous DHCP queries if you're worried about that)
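
If it helps, here is a rough python-neutronclient sketch of both steps
(enable DHCP on the subnet, then find the DHCP ports whose fixed IPs the
containers should use as resolvers). The credentials and UUIDs are
placeholders, and this is untested:

    from neutronclient.v2_0 import client

    # Placeholder credentials and UUIDs, for illustration only.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')
    subnet_id = 'SUBNET_UUID'
    network_id = 'NETWORK_UUID'

    # Enable DHCP so dnsmasq gets spawned and serves DNS for the subnet.
    neutron.update_subnet(subnet_id, {'subnet': {'enable_dhcp': True}})

    # The dnsmasq instances answer DNS on the DHCP ports' fixed IPs.
    ports = neutron.list_ports(network_id=network_id,
                               device_owner='network:dhcp')['ports']
    dns_servers = [ip['ip_address']
                   for p in ports for ip in p['fixed_ips']]
    print(dns_servers)

The containers would then just need those addresses in resolv.conf.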

On Wed, May 11, 2016 at 10:40 AM, Mike Spreitzer 
wrote:

> > On Tue, May 10, 2016 at 11:28 AM, Kevin Benton  wrote:
> > neutron subnet-show with the UUID of the subnet they have a port on
> > will tell you.
> >
> > On Tue, May 10, 2016 at 6:40 AM, Mike Spreitzer 
> wrote:
> > "Hayes, Graham"  wrote on 05/10/2016 09:30:26 AM:
> >
> > > ...
> > > > Ah, that may be what I want.  BTW, I am not planning to use Nova.  I
> am
> > > > planning to use Swarm and Kubernetes to create containers attached to
> > > > Neutron private tenant networks.  What DNS server would I configure
> > > > those containers to use?
> > >
> > > ...
> > >
> > > The DNSMasq instance running on the neutron network would have these
> > > records - they should be sent as part of the DHCP lease, so leaving the
> > > DNS set to automatic should pick them up.
> >
> > IIRC, our Docker containers do not use DHCP.  Is there any other way
> > to find out the correct DNS server(s) for the containers to use?
>
>
> > From: Kevin Benton 
> > ...
> >
> > Whoops. What I just said was wrong if it hadn't been explicitly
> overwritten.
> >
> > I think you will end up having to do a port-list looking for the DHCP
> port(s).
> > http://paste.openstack.org/show/496604/
>
>
>
> So the DNS server and the DHCP server are bundled together?
>
> Like I said, our Docker containers do not use DHCP.  Our subnets do not
> enable DHCP.  We have no DHCP ports.  Does that mean we do not get internal
> DNS?
>
> Thanks,
> Mike
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-11 Thread Sukhdev Kapur
Hi Matt,

If you look through the documentation, there are steps described to install
L2GW outside of devstack.

-Sukhdev


On Wed, May 11, 2016 at 1:54 PM, Matt Kassawara 
wrote:

> Do you have any examples of how to implement L2GW outside of DevStack?
> Operators do not deploy on DevStack.
>
> On Wed, May 11, 2016 at 2:47 PM, Sukhdev Kapur 
> wrote:
>
>> Hi Matt,
>>
>>
>> Here is the wiki - https://wiki.openstack.org/wiki/Neutron/L2-GW
>>
>> This should provide you all the information that you need.
>>
>> -Sukhdev
>>
>>
>>
>> On Wed, May 11, 2016 at 1:14 PM, Matt Kassawara 
>> wrote:
>>
>>> Can you point me to the documentation?
>>>
>>> On Wed, May 11, 2016 at 2:05 PM, Sukhdev Kapur 
>>> wrote:
>>>

 Folks,

 I am happy to announce that Mitaka release for L2 Gateway is released
 and now available at https://pypi.python.org/pypi/networking-l2gw.

 You can install it by using "pip install networking-l2gw"

 This release has several enhancements and fixes for issues discovered
 in liberty release.

 Thanks
 Sukhdev Kapur



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] Bootstrapping new team for Requirements.

2016-05-11 Thread Dirk Müller
> That works for me.  I'd prefer that meeting earlier in the week so perhaps
> a UTC 11:00 Tuesday would be a good time to use until DST changes and then we
> can re check based on who is active.

Works for me. thanks for the writeup in the etherpad, that was very
helpful. Lets meet on #openstack-requirements then?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Seeing db lockout issues in neutron add_router_interface

2016-05-11 Thread Divya
Thanks Kevin. Will take a look at the example.

Thanks,
Divya

On Tue, May 10, 2016 at 11:41 PM, Kevin Benton  wrote:

> Unfortunately we didn't switch to the new sql driver until liberty so that
> probably wouldn't be a safe switch in Kilo.
>
> Adding a retry will help, but unfortunately that will still block your
> call for 60 seconds with that driver until the timeout exception is
> triggered.
> We worked around this in ML2 by identifying the calls that could yield
> while holding a DB lock and then acquiring a semaphore before doing each
> one.
> You can see an example here:
> https://github.com/openstack/neutron/blob/363eeb06104662ee38aeed04af043899379f6ab8/neutron/plugins/ml2/plugin.py#L1074
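
For my own notes, a minimal sketch of that pattern with oslo_concurrency
(the lock name and the REST helper below are made up for illustration,
not the actual ML2 code):

    from oslo_concurrency import lockutils

    def _do_rest_call(payload):
        # Hypothetical stand-in for the call out to the Nuage REST server.
        pass

    # Take a semaphore around any call that can yield while a DB lock is
    # held, similar in spirit to the ML2 workaround linked above.
    @lockutils.synchronized('nuage-backend-call')
    def _call_nuage_backend(payload):
        return _do_rest_call(payload)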
>
> On Tue, May 10, 2016 at 11:27 PM, Divya  wrote:
>
>> Thanks Mike for the response. I am part of the Nuage openstack team. We are
>> looking into the issue.
>> An extra delete_port call in NuagePlugin's add_router_interface triggers a
>> db lockout on the insert into routerport (this is in core neutron).
>> Are you suggesting that NuagePlugin should retry in this case, or that core
>> neutron's add_router_interface should retry?
>> Will give it a try.
>>
>>
>> On Tue, May 10, 2016 at 4:54 PM, Mike Bayer  wrote:
>>
>>>
>>>
>>> On 05/10/2016 04:57 PM, Divya wrote:
>>>
 Hi,
 I am trying to run this rally test on stable/kilo

 https://github.com/openstack/rally/blob/master/samples/tasks/scenarios/neutron/create_and_delete_routers.json

 with concurrency 50 and iterations 2000.

 This test basically cretaes routers and subnets
 and then calls
 router-interface-add
 router-interface-delete


 And i am running this against 3rd party Nuage plugin.

 In the NuagePlugin:

 add_router_interface is something like this:
 
 super().add_router_interface
 try:
some calls to external rest server
super().delete_port
 except:

 remove_router_interface:
 ---
 super().remove_router_interface
 some calls to external rest server
 super().create_port()
 some calls to external rest server


 If I comment out delete_port in add_router_interface, I do not hit
 the db lockout issue.
 delete_port or any other operations are not within any transaction.
 So not sure, why this is leading to db lock timeouts in insert to
 routerport

 error trace
 http://paste.openstack.org/show/496626/



 Really appreciate any help on this.

>>>
>>>
>>> I'm not on the Neutron team, but in general, Openstack applications
>>> should be employing retry logic internally which anticipates database
>>> deadlocks like these and retries the operation.  I'd report this stack
>>> trace (especially if it is reproducible) as a bug to this plugin's
>>> launchpad project.
>>>
>>>
>>>
>>>
 Thanks,
 Divya















 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] api-ref sprint Wed status

2016-05-11 Thread Sean Dague
On 05/11/2016 09:06 AM, Sean Dague wrote:
> On 05/09/2016 08:23 AM, Sean Dague wrote:
>> There is a lot of work to be done to get the api-ref into a final state.
>>
>> Review / fix existing patches -
>> https://review.openstack.org/#/q/project:openstack/nova+file:api-ref+status:open
>> shows patches not yet merged. Please review them, and if there are
>> issues feel free to fix them.
>>
>> Help create new API ref changes verifying some of the details -
>> https://wiki.openstack.org/wiki/NovaAPIRef

Thus far, we've landed about 35 tag removals, and there are 27 reviews
outstanding that impact this (each may remove one or more tags). While it
feels slow, you'll see in the burndown (http://burndown.dague.org/)
that things are accelerating as people get better at both creating and
reviewing the changes.

While the sprint is officially over today, I'm going to keep drumming up
interest through the rest of the week and see how far we can get. Thanks
again to everyone that's been helping out so far:

Has proposed changes: 16
 - Alex Xu
 - Anusha Unnam
 - Augustina Ragwitz
 - Karen Bradshaw
 - Matt Riedemann
 - Nikita Konovalov
 - Pushkar Umaranikar
 - Ronald Bradford
 - Sarafraj Singh
 - Sean Dague
 - Sharat Sharma
 - Sivasathurappan Radhakrishnan
 - Sujitha
 - Timofey Durakov
 - Zhenyu Zheng
 - jichenjc

Has had changes merged: 9
 - Alex Xu
 - Karen Bradshaw
 - Pushkar Umaranikar
 - Ronald Bradford
 - Sean Dague
 - Sharat Sharma
 - Sivasathurappan Radhakrishnan
 - Sujitha
 - jichenjc

Has reviewed changes: 18
 - Alex Xu
 - Andrew Laski
 - Atsushi SAKAI
 - Gábor Antal
 - Jay Pipes
 - John Garbutt
 - Ken'ichi Ohmichi
 - Matt Riedemann
 - Ronald Bradford
 - Sarafraj Singh
 - Sean Dague
 - Sivasathurappan Radhakrishnan
 - Sylvain Bauza
 - Takashi NATSUME
 - jichenjc
 - melissaml
 - venkatamahesh
 - yejiawei

Open reviews changing files: 27
 - https://review.openstack.org/310096 -
api-ref/source/os-hypervisors.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/310420 - api-ref/source/parameters.yaml,
api-ref/source/servers.inc
 - https://review.openstack.org/311071 - api-ref/source/os-floating-ips.inc
 - https://review.openstack.org/311727 - api-ref/source/parameters.yaml,
api-ref/source/servers-admin-action.inc
 - https://review.openstack.org/314101 - api-ref/source/extensions.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314133 - api-ref/source/flavors.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314198 - api-ref/source/os-networks.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314268 - api-ref/source/images.inc
 - https://review.openstack.org/314320 - api-ref/source/ips.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314325 - api-ref/source/os-volumes.inc
 - https://review.openstack.org/314502 - api-ref/source/os-keypairs.inc,
api-ref/source/parameters.yaml
 - https://review.openstack.org/314566 -
api-ref/source/_static/api-site.css, api-ref/source/parameters.yaml,
api-ref/source/servers-action-crash-dump.inc
 - https://review.openstack.org/314776 - api-ref/source/parameters.yaml,
api-ref/source/servers-action-evacuate.inc
 - https://review.openstack.org/314796 - api-ref/source/os-certificates.inc
 - https://review.openstack.org/314833 - api-ref/source/os-quota-sets.inc
 - https://review.openstack.org/314932 -
api-ref/source/servers-admin-action.inc,
doc/api_samples/versions/v21-version-get-resp.json,
doc/api_samples/versions/versions-get-resp.json,
nova/api/openstack/api_version_request.py,
nova/api/openstack/compute/migrate_server.py,
nova/api/openstack/rest_api_version_history.rst, nova/conductor/manager.py
 - https://review.openstack.org/315100 -
api-ref/source/servers-action-remote-consoles.inc
 - https://review.openstack.org/315103 - api-ref/ext/rest_parameters.py
 - https://review.openstack.org/315126 -
api-ref/source/os-security-group-rules.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315127 -
api-ref/source/os-baremetal-nodes.inc
 - https://review.openstack.org/315145 -
api-ref/source/servers-action-deferred-delete.inc
 - https://review.openstack.org/315170 -
api-ref/source/os-instance-usage-audit-log.inc
 - https://review.openstack.org/315182 -
api-ref/source/os-server-external-events.inc
 - https://review.openstack.org/315199 -
api-ref/source/os-floating-ips-bulk.inc
 - https://review.openstack.org/315212 - api-ref/source/index.rst,
api-ref/source/os-server-external-events.inc,
api-ref/source/os-services.inc, api-ref/source/parameters.yaml
 - https://review.openstack.org/315216 - api-ref/source/limits.inc
 - https://review.openstack.org/315220 - api-ref/source/servers-actions.inc


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-11 Thread Matt Kassawara
Do you have any examples of how to implement L2GW outside of DevStack?
Operators do not deploy on DevStack.

On Wed, May 11, 2016 at 2:47 PM, Sukhdev Kapur 
wrote:

> Hi Matt,
>
>
> Here is the wiki - https://wiki.openstack.org/wiki/Neutron/L2-GW
>
> This should provide you all the information that you need.
>
> -Sukhdev
>
>
>
> On Wed, May 11, 2016 at 1:14 PM, Matt Kassawara 
> wrote:
>
>> Can you point me to the documentation?
>>
>> On Wed, May 11, 2016 at 2:05 PM, Sukhdev Kapur 
>> wrote:
>>
>>>
>>> Folks,
>>>
>>> I am happy to announce that Mitaka release for L2 Gateway is released
>>> and now available at https://pypi.python.org/pypi/networking-l2gw.
>>>
>>> You can install it by using "pip install networking-l2gw"
>>>
>>> This release has several enhancements and fixes for issues discovered in
>>> liberty release.
>>>
>>> Thanks
>>> Sukhdev Kapur
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-11 Thread Sukhdev Kapur
Hi Matt,


Here is the wiki - https://wiki.openstack.org/wiki/Neutron/L2-GW

This should provide you all the information that you need.

-Sukhdev



On Wed, May 11, 2016 at 1:14 PM, Matt Kassawara 
wrote:

> Can you point me to the documentation?
>
> On Wed, May 11, 2016 at 2:05 PM, Sukhdev Kapur 
> wrote:
>
>>
>> Folks,
>>
>> I am happy to announce that Mitaka release for L2 Gateway is released and
>> now available at https://pypi.python.org/pypi/networking-l2gw.
>>
>> You can install it by using "pip install networking-l2gw"
>>
>> This release has several enhancements and fixes for issues discovered in
>> liberty release.
>>
>> Thanks
>> Sukhdev Kapur
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread Ryan Moats
John McDowall  wrote on 05/11/2016 12:37:40
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" 
> Date: 05/11/2016 12:37 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> I have done networking-sfc the files that I see as changed/added are:
>
> devstack/settings   Modified Runtime setting to pick up OVN Driver
> networking_sfc/db/migration/alembic_migrations/versions/mitaka/
> expand/5a475fc853e6_ovs_data_model.py Hack to work around
> flow_classifier issue – need to resolve with SFC team.
> networking_sfc/services/sfc/drivers/ovn/__init__.py   Added for OVN
Driver
> networking_sfc/services/sfc/drivers/ovn/driver.py Added ovn driver
file
> setup.cfg Inserted OVN driver entry
>
> I am currently working to clean up ovs/ovn.
>
> Regards
>
> John

I can confirm that the networking-sfc rebase goes in clean against
master for me :) - Looking forward to ovs ...

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread John Dickinson


On 11 May 2016, at 13:11, Robert Collins wrote:
>
> So, given that that is the model - why is language part of it? Yes,
> there are minimum overheads to having a given language in CI - we need
> to be able to do robust reliable builds [or accept periodic downtime
> when the internet is not cooperating], and that sets a lower fixed
> cost, varying per language. Right now we support Python, Java,
> Javascript, Ruby in CI (as I understand it - infra focused folk please
> jump in here :)).

+1000

this is what the whole thread should be about

> Here is a straw man list of requirements:
>  - Reliable builds: the ability to build and test without talking to
> the internet at all.
>  - Packagable: the ability to take a source tree for a project, do
> some arbitrary transform and end up with a folder structure that can
> be placed on another machine, with any needed dependencies, and work.
> [Note, this isn't the same as 'packagable in a way that makes Red Hat
> and Canonical and Suse **happy**, but thats something we can be sure
> that those orgs are working on with language providers already ]
>  - FL/OSS
>  - Compatible with ASL v2 source code. [e.g. any compiler doesn't
> taint its output]
>  - Can talk oslo.messaging's message format


The great news is that we don't have to have the straw man--we actually are 
building the real list (and you've hit several of them).

At the infra team meeting this week we talked through a few of these (mostly 
focused on the dependency management issues first), and we've started 
collecting notes on 
https://etherpad.openstack.org/p/golang-infra-issues-to-solve about the basic 
infra things that need to be figured out.

--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Jeremy Stanley
On 2016-05-12 08:11:46 +1200 (+1200), Robert Collins wrote:
[...]
> Right now we support Python, Java, Javascript, Ruby in CI (as I
> understand it - infra focused folk please jump in here :)).
[...]

We also support boatloads of shell script! ;)
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [infra] Jobs failing : "No matching distribution found for "

2016-05-11 Thread Jeremy Stanley
On 2016-05-11 21:55:28 +1000 (+1000), Joshua Hesketh wrote:
[...]
> Once fixed properly we should undo the workaround so that our
> mirror matches pypi.

Donald's bandersnatch fix has merged and is now available in an
updated release, but as James Blair pointed out in IRC we're
currently running a lightweight fork of bandersnatch with his hash
indexing patch applied so need to make sure that still applies
cleanly and include it in whatever updated version we deploy on the
mirror-update server.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] State of the refactor

2016-05-11 Thread Sean M. Collins
More updates:

https://review.openstack.org/#/q/status:open+project:openstack-dev/devstack+branch:master+topic:neutron-refactor

Has the fixes for things I screwed up. Basically it looks like just
_configure_neutron_l3_agent got clobbered, which could have broken people
using neutron-legacy. I've been watching Zuul all afternoon, but oddly
it didn't trigger any breakage in the gate so far.

So, hopefully we can clean up my boo-boos quickly and pretend this
never happened.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-11 Thread Matt Kassawara
Can you point me to the documentation?

On Wed, May 11, 2016 at 2:05 PM, Sukhdev Kapur 
wrote:

>
> Folks,
>
> I am happy to announce that Mitaka release for L2 Gateway is released and
> now available at https://pypi.python.org/pypi/networking-l2gw.
>
> You can install it by using "pip install networking-l2gw"
>
> This release has several enhancements and fixes for issues discovered in
> liberty release.
>
> Thanks
> Sukhdev Kapur
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Robert Collins
I'm going to try arguing the pro case.

The big tent has us bringing any *team* that want to work the way we
do: in the open, collaboratively, in concert with other teams, into
OpenStacks community.

Why are we using implementation language as a gate here?

I assert that most deployers don't fundamentally care about language
of the thing they are deploying; they care about cost of operations,
which is influenced by a number of facets around the language
ecosystem: packaging, building, maturity, VM
behaviour-and-tuning-needs, as well as service specific issues such as
sharding, upgrades, support (from vendors/community), training (their
staff - free reference material, as well as courses) etc. Every
discussion we had with operators about adding new dependencies was
primarily focused on the operational aspects, not 'is it written in
C/$LANGUAGE'. Right now our stock dependency stack has C (Linux, libc,
libssl, Python itself, etc.), Erlang (rabbitmq), and Java (zookeeper).

End users emphatically do not care about the language API servers were
written in. They want stability, performance, features, not 'Written
in $LANGUAGE'.

As a community, we decided long ago to silo our code: Nova and Swift
could have been in one tree, with multiple different artifacts - think
of all the cross-code-base-friction we would not have had if we'd done
that! The cultural consequence though is that bringing up new things
is much less disruptive: add a new tree, get it configured in CI, move
on. We're a team-centric structure for governance, and the costs of
doing cross-team code-level initiatives are already prohibitive: we
have already delegated all those efforts to the teams that own the
repositories: requirements syncing, translations syncing, lint fixups,
security audits... Having folk routinely move cross project is
something we seem to be trying to optimise for, but we're not actually
achieving.

So, given that that is the model - why is language part of it? Yes,
there are minimum overheads to having a given language in CI - we need
to be able to do robust reliable builds [or accept periodic downtime
when the internet is not cooperating], and that sets a lower fixed
cost, varying per language. Right now we support Python, Java,
Javascript, Ruby in CI (as I understand it - infra focused folk please
jump in here :)).

A Big Tent approach would be this:
 - Document the required support for a new language - not just CI
 - Any team that wants to use $LANGUAGE just needs to ensure that that
support is present.
 - Make sure that any cross-service interactions are well defined in a
language neutral fashion. This is just good engineering basically:
define a contract, use it.

Here is a straw man list of requirements:
 - Reliable builds: the ability to build and test without talking to
the internet at all.
 - Packagable: the ability to take a source tree for a project, do
some arbitrary transform and end up with a folder structure that can
be placed on another machine, with any needed dependencies, and work.
[Note, this isn't the same as 'packagable in a way that makes Red Hat
and Canonical and Suse *happy*', but that's something we can be sure
that those orgs are working on with language providers already.]
 - FL/OSS
 - Compatible with ASL v2 source code. [e.g. any compiler doesn't
taint its output]
 - Can talk oslo.messaging's message format
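
To make that last item concrete: whatever the implementation language, a
service would need to be able to do the equivalent of this minimal Python
sketch (topic and method names purely illustrative):

    from oslo_config import cfg
    import oslo_messaging

    # Transport config (rabbit URL etc.) comes from cfg.CONF as usual.
    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo-topic', version='1.0')
    client = oslo_messaging.RPCClient(transport, target)

    # Illustrative call; the wire format this produces is the contract.
    result = client.call({}, 'ping', payload='hello')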

That list is actually short, and those needs are quite tameable. So
lets do it - lets open up the tent still further, stop picking winners
and instead let the market sort it out.

-Rob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Newton Summit Infra Sessions Recap

2016-05-11 Thread Jeremy Stanley
On 2016-05-10 19:00:35 + (+), Jeremy Stanley wrote:
[...]
> Wiki Upgrades
> -
[...]
> I'll be starting a thread on the openst...@lists.openstack.org
> mailing list in the next few days to cover the situation in greater
> detail and get the community-facing discussion going for this.
[...]

Thierry beat me to it and initiated separate threads not only there
but also on the -dev and -operators MLs, so hopefully we can get
some good feedback on current wiki use-cases and figure out whether
any are better served with different tools. Thanks!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L2GW] Mitaka release of L2 Gateway now available

2016-05-11 Thread Sukhdev Kapur
Folks,

I am happy to announce that Mitaka release for L2 Gateway is released and
now available at https://pypi.python.org/pypi/networking-l2gw.

You can install it by using "pip install networking-l2gw"

This release has several enhancements and fixes for issues discovered in
liberty release.

Thanks
Sukhdev Kapur
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Infra] Newton Summit Infra Sessions Recap

2016-05-11 Thread Jeremy Stanley
On 2016-05-11 12:27:13 -0700 (-0700), Elizabeth K. Joseph wrote:
> Thanks for taking time to write these up for the list. As per my own
> tradition, I've also written a blog post giving an overview of
> Infrastructure sessions, according to my own recollection:
> http://princessleia.com/journal/2016/05/openstack-summit-days-3-5/

That's an awesome redux, and does a great job of highlighting some
items I failed to mention. Definitely worth the read!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Robert Collins
On 12 May 2016 at 02:35, Brant Knudson  wrote:
>
> I'd be worried about bringing in a language that doesn't integrate well with
> Python, since I'd expect the normal route would be to take advantage of as
> much of the existing code as we have and only replace those parts that need
> replacing. From these web pages it looks like Go integrates with Python:
> https://blog.filippo.io/building-python-modules-with-go-1-5/ and
> https://github.com/go-python/gopy (I haven't tried these myself).

It looks like that works by forking off a Go runtime and chatting
via locks. Using an RPC model (similar to privsep) would probably be
more flexible and easier, with at most a small speed cost. Worth
trying a few things to get experience?
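
For a first experiment, the shim could be as dumb as a Python caller
talking JSON over HTTP to a Go helper process; everything below (endpoint,
port, method names) is invented purely for illustration:

    import json
    import requests

    GO_HELPER = 'http://127.0.0.1:8700/rpc'   # invented endpoint

    def call_go_helper(method, **kwargs):
        """Send a JSON 'RPC' request to a hypothetical Go sidecar."""
        resp = requests.post(GO_HELPER,
                             data=json.dumps({'method': method,
                                              'args': kwargs}),
                             headers={'Content-Type': 'application/json'})
        resp.raise_for_status()
        return resp.json()['result']

    # e.g. call_go_helper('hash_object', name='obj1')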

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Eric Larson


Jim Rollenhagen writes:


On Wed, May 11, 2016 at 03:36:09PM +0200, Thomas Goirand wrote:

On 05/11/2016 02:41 PM, Jim Rollenhagen wrote:
>> Installing from $language manager instead of distro 
>> packages, be it in containers or not, will almost always 
>> make you download random blobs from the Internet, which are 
>> of course changing over time without any notice, losing the 
>> above 3 important features.

>
> Unless you pin the versions of your dependencies.

Pinning versions doesn't change the fact that you'll have to 
trust a large amount of providers, with some of the files 
stored in a single location on the Internet. Yes, you can add a 
cache, etc. but these are band-aids...


Well, if we're talking about python, it all comes from PyPI. For 
Go, the recommendation is for everything to come from Github, 
but you can choose other sources if you desire.




To clarify, Go best practice is to check dependencies out into a 
vendor directory that must be updated explicitly. While not 
everyone commits the vendored deps, I'd argue it is a reasonable 
practice, which means that at build time within a CI system, there 
should be *NO* dependencies resolved. Tools such as glide 
(https://github.com/Masterminds/glide) also create a `glide.lock` 
file that records the dependencies from the latest build and can be 
checked into source control.


--

Eric Larson | eric.lar...@rackspace.com Software Developer 
| Cloud DNS | OpenStack Designate Rackspace Hosting   | Austin, 
Texas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] How to document 'labels'

2016-05-11 Thread Hongbin Lu
Hi all,

This is a continued discussion from the last team meeting. To recap, 'labels' 
is a property of a baymodel that lets users input additional key-value 
pairs to configure the bay. In the last team meeting, we discussed what is the 
best way to document 'labels'. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in Magnum server and expose them via the REST 
API. Then, have the CLI to load help text of individual properties from Magnum 
server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/...), and add the doc link to the CLI help text.
For option #1, I think an advantage is that it is close to end-users, thus 
providing a better user experience. In contrast, Tom Cammann pointed out a 
disadvantage that the CLI help text might more easily become out of date. 
Option #2 should work but incurs a lot of extra work. For option #3, the 
disadvantage is the user experience (since users need to click the link to see 
the documentation) but it is easier for us to maintain. I am wondering whether 
it is possible to have a combination of #1 and #3. Thoughts?
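
As a rough illustration of that combination (purely hypothetical code, not
the actual python-magnumclient implementation; the URL is a placeholder),
the CLI could keep a one-line description and append a link to the full
reference:

  # Hypothetical sketch of combining options #1 and #3.
  LABELS_DOC_URL = "http://developer.openstack.org/..."  # placeholder link

  def add_labels_arg(parser):
      parser.add_argument(
          "--labels",
          metavar="<KEY1=VALUE1,KEY2=VALUE2...>",
          help="Arbitrary key/value pairs used to configure the bay. "
               "See %s for the list of supported keys." % LABELS_DOC_URL,
      )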

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] State of the refactor

2016-05-11 Thread Sean M. Collins
I'm seeing some mistakes I made when moving the l3-related contents of
neutron-legacy into the new file. It didn't trigger
any failures in the gate, but it might impact some developer machines or
third party systems. I'm putting together patches, but also considering
reverting https://review.openstack.org/168438.

Ugh.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Eric Larson


Flavio Percoco writes:


On 11/05/16 09:47 -0500, Dean Troyer wrote:
On Tue, May 10, 2016 at 5:54 PM, Flavio Percoco 
 wrote:


[language mixing bits were here]


   The above is my main concern with this proposal. I've 
   mentioned this in the upstream review and I'm glad to have 
   found it here as well. The community impact of this change 
   is perhaps not being discussed enough and I believe, in the 
   long run, it'll bite us.



Agreed, but to do nothing instead is so not what we are 
about. The change from integrated/incubated to Big Tent was done 
to address some issues knowing we did not have all of the 
answers up front and would learn some things along the way. We 
did learn some things, both good and bad.


I do believe that we can withstand the impact of a new language, 
particularly when we do it intentionally and knowing where some 
of the pitfalls are. Also, the specific request is coming from 
the oldest of all OpenStack projects, and one that has a history 
of not making big changes without _really_ good reasons. Yes it 
opens a door, but it will be opened with what I believe to be a 
really solid model to build upon in other parts of the OpenStack 
community.  I would MUCH rather do it this way than with a new 
Go-only project that is joining OpenStack from scratch in more 
than just the implementation language.



So, one thing that was mentioned during the last TC meeting is 
to decide this on a per-project basis. Don't open the door entirely 
but let projects sign up for this.  This will give us a more 
contained growth as far as projects with go-code go but it does 
mean we'll have to do a technical analysis on every project 
willing to sign up and it kinda goes against the principles of 
the big tent.




   The feedback from the Horizon community has been that it's 
   been impossible to avoid a community split and that's what 
   I'd like to avoid.



I do think part of this is also due to the differences in the 
problem domain of client/browser-side and server-side. I believe 
there is a similar issue with  devs writing SQL; 
the overlap in expertise between the two is way smaller than we 
all wish it was.


Exactly! This separation of domains is the reason why opening 
the door for JS code was easier. The request was for browser 
apps that can't be written in Python.


And for the specific Python-Golang overlap, it feels to me like 
more Python devs have (at least talked about) working in Go than 
in other newish languages. There are worse choices to test the 
waters with.


Just to stress this a bit more, I don't think the problem is the 
language per se. There are certainly technical issues related to 
it (packaging, CI, etc) but the main discussion is currently 
revolving around the impact this change will have on the community 
and other areas. I'm sure we can figure the technical issues 
out.




One thing to consider regarding the community's ability to task 
switch is that Go is much easier to pick up than other languages and 
techniques. For example, one common tactic people suggest when 
Python becomes too slow is to rewrite the slow parts in C. In 
Designate's case, rewriting the DNS wire protocol aspects in C 
could be beneficial, but it would also be very difficult. We 
would need to write an implementation that can safely parse 
DNS wire format in a reasonably thread-safe fashion, that also 
works well when those threads have been patched by eventlet, all 
while writing C code that is compatible with Python internals.


To contrast that, the Go PoC was able to use a well-tested Go DNS 
library and implement the same documented interface that was then 
testable via the same functional tests. It also allowed an 
extremely simple deployment and had a minimal impact on our CI 
systems. Finally, as other Go code has been written on our small 
team, getting Python developers up to speed has been 
trivial. Memory management, built-in concurrency primitives, and 
similar language constructs have made using Go feel natural.


This experience is different from JavaScript because there are 
very specific silos between the UI and the backend. I'd expect 
that, even though JavaScript is an accepted language in OpenStack, 
writing a node.js service would present a whole host of new 
complexity that the project would similarly debate. Fortunately, on a 
technical level, I believe we can try Go without its requirements 
putting a large burden on the CI team resources.


Eric


Flavio


dt

--

Dean Troyer dtro...@gmail.com



--

Eric Larson | eric.lar...@rackspace.com Software Developer 
| Cloud DNS | OpenStack Designate Rackspace Hosting   | Austin, 
Texas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] The Future of Meetings

2016-05-11 Thread Mike Perez
On 16:11 May 10, Matt Riedemann wrote:
> On 5/9/2016 11:30 PM, Mike Perez wrote:
> >Hey all,
> >
> >When we first discussed the future of cross-project meetings at the Tokyo
> >summit, we walked out with the idea of beginning to have ad-hoc meetings
> >instead of one big meeting with all big tent projects.
> >
> >Now that we have some process in place [1] (unfortunately still under 
> >review),
> >we can begin that idea.
> >
> >I will no longer be announcing the weekly cross-project meeting being 
> >skipped.
> >Instead people who are interested in fixing some cross-project issue or 
> >feature
> >may do so by introducing a meeting for that initiative [1] to take place in 
> >the
> >#openstack-meeting-cp channel. Together, this group of projects will create
> >a spec or guideline [2].
> >
> >For some initiatives you may know which OpenStack projects are involved with
> >it. You can find people who are interested in helping with cross-project
> >initiatives for their respective project by viewing the CPL page [3]. As noted
> >in their duties, they will either be the person to attend the meeting and 
> >give
> >feedback in the spec/guideline on behalf of their project, or find someone
> >knowledgeable in the initiative and the project [4].
> >
> >If warranted, we can have a cross-project wide meeting, but I see these
> >becoming unnecessary.
> >
> >Some examples of cross-project initiatives happening are:
> >
> >* The service catalog TNG [5]
> >* Quotas - Delimiter [6]
> >
> >I think this will allow cross-project initiatives to continue to flow in a 
> >more
> >natural way.
> >
> >Please reach out to me if you have any questions. Thanks!
> >
> >[1] - https://review.openstack.org/#/c/301822/4/doc/source/cross-project.rst
> >[2] - https://review.openstack.org/#/c/295940/5/doc/source/cross-project.rst
> >[3] - 
> >https://wiki.openstack.org/wiki/CrossProjectLiaisons#Cross-Project_Spec_Liaisons
> >[4] - 
> >http://docs.openstack.org/project-team-guide/cross-project.html#cross-project-specification-liaisons
> >[5] - https://wiki.openstack.org/wiki/ServiceCatalogTNG
> >[6] - https://launchpad.net/delimiter
> >
> 
> Are we going to drop the calendar entry [1] or at least update the meeting
> agenda [2] to point out there is no regular meeting anymore?
> 
> [1] http://eavesdrop.openstack.org/#OpenStack_Cross-Project_Meeting
> [2] https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

Thanks for reminding me. When the project team guide information on setting up
meetings merges [1], I will have the wiki reference that.

I will be leaving the main cross-project meeting in
openstack-infra/irc-meetings since, as it's defined, it's on-demand.

[1] - https://review.openstack.org/#/c/301822/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Robert Collins
On 12 May 2016 at 07:36, Ben Meyer  wrote:
> On 05/11/2016 03:23 PM, Samuel Merritt wrote:

> Have you tried putting the file into non-blocking mode?
> From the above description it doesn't sound like you have.

The limitation here is a kernel limitation. Squid has workarounds for
exactly the same thing - and it has drivers for AIO too, and *they*
still need to deal with blocking file descriptors for actual physical
IO.

Sadly, 'everything is a file' breaks down when they don't all behave
consistently.

The pattern of having a worker thread for physical IO is a good
pattern (on Linux); at least as of the last I heard, the underlying
problems have not been solved.

-Rob



-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Gregory Haynes
On Wed, May 11, 2016, at 03:24 AM, Thierry Carrez wrote:
> 
> That said I know that the Swift team spent a lot of time in the past 6 
> years optimizing their Python code, so I'm not sure we can generalize 
> this "everything to do with the algorithms" analysis to them ?
> 

I agree. The Swift case is clearly not an easy engineering problem, and
the awesome write-up we got on it points out the issues they are
running into that aren't easily solved. At that point the onus is on us
to either be certain there is a much easier way for them to solve the
issues they are running into without the cost to the community, or to
accept that another tool might be good for this job. Personally, I can
completely empathize with running into a problem that requires a lot of
direct OS interaction and just wanting a tool that gets out of my way
for it. I might have picked a different tool, but I don't think that is
the point of the conversation here :). I know we have a lot of Python
gurus here though, so maybe they feel differently.

Even if we're OK with part of Swift being in Go, I do still think there
is a lot of value in us remaining a Python project rather than a
Python+Go project. I really think Swift might be *the* exception to the
rule here and we don't need to completely open ourselves up to anyone
reimplementing in Go as a result. Yes, some of the cost of Go will be
lowered when we support a single project having Go in tree, but there
are plenty of additional costs - especially for us all understanding a
common language/toolset - which aren't affected much by having one
component of one project be in a different language.

Cheers,
Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Ben Meyer
On 05/11/2016 03:23 PM, Samuel Merritt wrote:
> On 5/11/16 7:09 AM, Thomas Goirand wrote:
>> On 05/10/2016 09:56 PM, Samuel Merritt wrote:
>>> On 5/9/16 5:21 PM, Robert Collins wrote:
 On 10 May 2016 at 10:54, John Dickinson  wrote:
> On 9 May 2016, at 13:16, Gregory Haynes wrote:
>>
>> This is a bit of an aside but I am sure others are wondering the
>> same
>> thing - Is there some info (specs/etherpad/ML thread/etc) that
>> has more
>> details on the bottleneck you're running in to? Given that the only
>> clients of your service are the public facing DNS servers I am
>> now even
>> more surprised that you're hitting a python-inherent bottleneck.
>
> In Swift's case, the summary is that it's hard[0] to write a network
> service in Python that shuffles data between the network and a block
> device (hard drive) and effectively utilizes all of the hardware
> available. So far, we've done very well by fork()'ing child
> processes,
 ...
> Initial results from a golang reimplementation of the object
> server in
> Python are very positive[1]. We're not proposing to rewrite Swift
> entirely in Golang. Specifically, we're looking at improving object
> replication time in Swift. This service must discover what data is on
> a drive, talk to other servers in the cluster about what they have,
> and coordinate any data sync process that's needed.
>
> [0] Hard, not impossible. Of course, given enough time, we can do
>  anything in a Turing-complete language, right? But we're not talking
>  about possible, we're talking about efficient tools for the job at
>  hand.
 ...

 I'm glad you're finding you can get good results in (presumably)
 clean, understandable code.

 Given go's historically poor performance with multiple cores
 (https://golang.org/doc/faq#Why_GOMAXPROCS) I'm going to presume the
 major advantage is in the CSP programming model - something that
 Twisted does very well: and frustratingly we've had numerous
 discussions from folk in the Twisted world who see the pain we have
 and want to help, but as a community we've consistently stayed with
 eventlet, which has a threaded programming model - and threaded models
 are poorly suited for the case here.
>>>
>>> At its core, the problem is that filesystem IO can take a surprisingly
>>> long time, during which the calling thread/process is blocked, and
>>> there's no good asynchronous alternative.
>>>
>>> Some background:
>>>
>>> With Eventlet, when your greenthread tries to read from a socket and
>>> the
>>> socket is not readable, then recvfrom() returns -1/EWOULDBLOCK; then,
>>> the Eventlet hub steps in, unschedules your greenthread, finds an
>>> unblocked one, and lets it proceed. It's pretty good at servicing a
>>> bunch of concurrent connections and keeping the CPU busy.
>>>
>>> On the other hand, when the socket is readable, then recvfrom() returns
>>> quickly (a few microseconds). The calling process was technically
>>> blocked, but the syscall is so fast that it hardly matters.
>>>
>>> Now, when your greenthread tries to read from a file, that read() call
>>> doesn't return until the data is in your process's memory. This can
>>> take
>>> a surprisingly long time. If the data isn't in buffer cache and the
>>> kernel has to go fetch it from a spinning disk, then you're looking
>>> at a
>>> seek time of ~7 ms, and that's assuming there are no other pending
>>> requests for the disk.
>>>
>>> There's no EWOULDBLOCK when reading from a plain file, either. If the
>>> file pointer isn't at EOF, then the calling process blocks until the
>>> kernel fetches data for it.
>>>
>>> Back to Swift:
>>>
>>> The Swift object server basically does two things: it either reads from
>>> a disk and writes to a socket or vice versa. There's a little HTTP
>>> parsing in there, but the vast majority of the work is shuffling bytes
>>> between network and disk. One Swift object server can service many
>>> clients simultaneously.
>>>
>>> The problem is those pauses due to read(). If your process is servicing
>>> hundreds of clients reading from and writing to dozens of disks (in,
>>> say, a 48-disk 4U server), then all those little 7 ms waits are pretty
>>> bad for throughput. Now, a lot of the time, the kernel does some
>>> readahead so your read() calls can quickly return data from buffer
>>> cache, but there are still lots of little hitches.
>>>
>>> But wait: it gets worse. Sometimes a disk gets slow. Maybe it's got a
>>> lot of pending IO requests, maybe its filesystem is getting close to
>>> full, or maybe the disk hardware is just starting to get flaky. For
>>> whatever reason, IO to this disk starts taking a lot longer than 7
>>> ms on
>>> average; think dozens or hundreds of milliseconds. Now, every time your
>>> process tries to read from this disk, all other work stops for quite a
>>> long time. The net

Re: [openstack-dev] [Infra] Newton Summit Infra Sessions Recap

2016-05-11 Thread Elizabeth K. Joseph
On Tue, May 10, 2016 at 12:00 PM, Jeremy Stanley  wrote:
> That concludes my recollection of these sessions over the course of
> the week--thanks for reading this far--feel free to follow up (on
> the openstack-dev ML please) with any corrections/additions!

Thanks for taking time to write these up for the list. As per my own
tradition, I've also written a blog post giving an overview of
Infrastructure sessions, according to my own recollection:
http://princessleia.com/journal/2016/05/openstack-summit-days-3-5/

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Samuel Merritt

On 5/11/16 7:09 AM, Thomas Goirand wrote:

On 05/10/2016 09:56 PM, Samuel Merritt wrote:

On 5/9/16 5:21 PM, Robert Collins wrote:

On 10 May 2016 at 10:54, John Dickinson  wrote:

On 9 May 2016, at 13:16, Gregory Haynes wrote:


This is a bit of an aside but I am sure others are wondering the same
thing - Is there some info (specs/etherpad/ML thread/etc) that has more
details on the bottleneck you're running in to? Given that the only
clients of your service are the public facing DNS servers I am now even
more surprised that you're hitting a python-inherent bottleneck.


In Swift's case, the summary is that it's hard[0] to write a network
service in Python that shuffles data between the network and a block
device (hard drive) and effectively utilizes all of the hardware
available. So far, we've done very well by fork()'ing child processes,

...

Initial results from a golang reimplementation of the object server in
Python are very positive[1]. We're not proposing to rewrite Swift
entirely in Golang. Specifically, we're looking at improving object
replication time in Swift. This service must discover what data is on
a drive, talk to other servers in the cluster about what they have,
and coordinate any data sync process that's needed.

[0] Hard, not impossible. Of course, given enough time, we can do
 anything in a Turing-complete language, right? But we're not talking
 about possible, we're talking about efficient tools for the job at
 hand.

...

I'm glad you're finding you can get good results in (presumably)
clean, understandable code.

Given go's historically poor performance with multiple cores
(https://golang.org/doc/faq#Why_GOMAXPROCS) I'm going to presume the
major advantage is in the CSP programming model - something that
Twisted does very well: and frustratingly we've had numerous
discussions from folk in the Twisted world who see the pain we have
and want to help, but as a community we've consistently stayed with
eventlet, which has a threaded programming model - and threaded models
are poorly suited for the case here.


At its core, the problem is that filesystem IO can take a surprisingly
long time, during which the calling thread/process is blocked, and
there's no good asynchronous alternative.

Some background:

With Eventlet, when your greenthread tries to read from a socket and the
socket is not readable, then recvfrom() returns -1/EWOULDBLOCK; then,
the Eventlet hub steps in, unschedules your greenthread, finds an
unblocked one, and lets it proceed. It's pretty good at servicing a
bunch of concurrent connections and keeping the CPU busy.

On the other hand, when the socket is readable, then recvfrom() returns
quickly (a few microseconds). The calling process was technically
blocked, but the syscall is so fast that it hardly matters.

Now, when your greenthread tries to read from a file, that read() call
doesn't return until the data is in your process's memory. This can take
a surprisingly long time. If the data isn't in buffer cache and the
kernel has to go fetch it from a spinning disk, then you're looking at a
seek time of ~7 ms, and that's assuming there are no other pending
requests for the disk.

There's no EWOULDBLOCK when reading from a plain file, either. If the
file pointer isn't at EOF, then the calling process blocks until the
kernel fetches data for it.

Back to Swift:

The Swift object server basically does two things: it either reads from
a disk and writes to a socket or vice versa. There's a little HTTP
parsing in there, but the vast majority of the work is shuffling bytes
between network and disk. One Swift object server can service many
clients simultaneously.

The problem is those pauses due to read(). If your process is servicing
hundreds of clients reading from and writing to dozens of disks (in,
say, a 48-disk 4U server), then all those little 7 ms waits are pretty
bad for throughput. Now, a lot of the time, the kernel does some
readahead so your read() calls can quickly return data from buffer
cache, but there are still lots of little hitches.

But wait: it gets worse. Sometimes a disk gets slow. Maybe it's got a
lot of pending IO requests, maybe its filesystem is getting close to
full, or maybe the disk hardware is just starting to get flaky. For
whatever reason, IO to this disk starts taking a lot longer than 7 ms on
average; think dozens or hundreds of milliseconds. Now, every time your
process tries to read from this disk, all other work stops for quite a
long time. The net effect is that the object server's throughput
plummets while it spends most of its time blocked on IO from that one
slow disk.

Now, of course there are things we can do. The obvious one is to use a
couple of IO threads per disk and push the blocking syscalls out
there... and, in fact, Swift did that. In commit b491549, the object
server gained a small threadpool for each disk[1] and started doing its
IO there.
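
As a rough illustration of that pattern (not the actual Swift code from
that commit), eventlet's tpool module can hand the blocking read() to a
native worker thread so the hub can keep scheduling other greenthreads:

  # Illustrative sketch of pushing blocking file IO to a native thread;
  # the real object server keeps a small pool per disk.
  from eventlet import tpool

  def read_chunk(path, offset, length):
      def _blocking_read():
          # open()/seek()/read() block the calling OS thread, so run them
          # in tpool; the calling greenthread yields until they finish.
          with open(path, "rb") as f:
              f.seek(offset)
              return f.read(length)
      return tpool.execute(_blocking_read)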

This worked pretty well for avoiding the slow-disk problem. Requests
that touched 

[openstack-dev] [javascript] [infra] NPM Mirrors (IMPORTANT)

2016-05-11 Thread Michael Krotscheck
Hello everyone!

We've recently added NPM mirrors to our infrastructure, and are about to
turn them on. Before that happens, however, we'd like to get a sanity check
from impacted projects to make sure that we don't wedge your gate.

If you are in charge of a project that invokes `npm install` during any of
its gate jobs, then please invoke the following commands at your project
root.

echo "registry=http://mirror.dfw.rax.openstack.org/npm/"; >> .npmrc
rm -rf ./node_modules/
rm -rf ~/.npm/
npm install

If you encounter an error, put it in paste.openstack.org and reply to this
thread. If not, great! Delete the .npmrc file and go on your merry way.

Have a great day!

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Trove 2016-05-11 weekly meeting notes

2016-05-11 Thread Amrith Kumar
Here are the notes from today's Trove weekly meeting:

http://eavesdrop.openstack.org/meetings/trove/2016/trove.2016-05-11-18.01.html

Action Items:

[amrith] update infra specs to point to new BP and WF +0: Done
[tellesnobrega] abandon spec on CEPH 
(https://review.openstack.org/#/c/256057/): Done

Thanks,

-amrith

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] State of the refactor

2016-05-11 Thread Sean Dague
On 05/11/2016 02:48 PM, Sean M. Collins wrote:
> A quick update, https://review.openstack.org/168438 has landed.
> 
> There is a lot of work that needs to be done, Armando and I spoke
> yesterday about some of the hooks that are present in neutron-legacy
> that both in-tree and out-of-tree consumers rely on being called.
> 
> Doing a quick git grep - we have the following:


I'd like to rethink these a bit if we can.

> * neutron_plugin_install_agent_packages

Is this not just the pre-install or install phase of a plugin? i.e. is
there any reason that it can't happen at the normal point?


> * neutron_plugin_configure_dhcp_agent neutron_plugin_configure_l3_agent
> * neutron_plugin_configure_plugin_agent neutron_plugin_configure_service
> * neutron_plugin_setup_interface_driver

How are these different from 'post-config' (other than that in the neutron
case they got micro-split)?

> * neutron_plugin_create_initial_networks

I do see a very specific place for 'create_initial_networks' (and it
getting called directly in stack.sh is indicative of that fact).

So before we go and double the size of the contract, can you map out
where, relative to the neutron pre-install, install, post-config, and
extra phases, each of these calls currently happens? And why couldn't
they just be done by a plugin in said phase?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stale documentation on docs.o.o

2016-05-11 Thread Carl Baldwin
Thank you.

Carl

On Wed, May 11, 2016 at 11:44 AM, Andreas Jaeger  wrote:
> On 05/11/2016 07:15 PM, Carl Baldwin wrote:
>> Greetings all,
>>
>> This document [1] was superseded by this one [2] a while ago.  It
>> looks like the source for [1] no longer exists but the web server
>> still has a copy that it continues to serve up.  At least, that is the
>> situation as far as I can tell.
>>
>> Can anyone help here?  Do we need to add something to the docs
>> generation to clean up stale pages?
>>
>> Carl Baldwin
>>
>> [1] 
>> http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
>> [2] 
>> http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html
>
>
> I've just removed that page and a few others in the devref directory
> that were stale - for next time, the recommendation by Matt works best...
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] State of the refactor

2016-05-11 Thread Sean M. Collins
A quick update, https://review.openstack.org/168438 has landed.

There is a lot of work that needs to be done, Armando and I spoke
yesterday about some of the hooks that are present in neutron-legacy
that both in-tree and out-of-tree consumers rely on being called.

Doing a quick git grep - we have the following:

* neutron_plugin_install_agent_packages
* neutron_plugin_configure_dhcp_agent neutron_plugin_configure_l3_agent
* neutron_plugin_configure_plugin_agent neutron_plugin_configure_service
* neutron_plugin_setup_interface_driver
* neutron_plugin_create_initial_networks

I think that many of these entrypoints most likely need to be formalized
in our plugin.sh[1] contract. Perhaps not exactly as they are today, but
we do need a phase in the plugin contract where these kinds of things
should exist.

[1]: 
http://docs.openstack.org/developer/devstack/plugins.html#plugin-sh-contract

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] magnum module - fixes/improvements for Mitaka release

2016-05-11 Thread Hongbin Lu


> -Original Message-
> From: Emilien Macchi [mailto:emil...@redhat.com]
> Sent: May-11-16 9:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [puppet] magnum module -
> fixes/improvements for Mitaka release
> 
> On Wed, May 11, 2016 at 9:22 AM, Michal Adamczyk 
> wrote:
> > Hi,
> >
> > I am working on some fixes/improvements to puppet magnum module:
> > https://review.openstack.org/#/c/313293/
> 
> reviewed & commented. Almost good, excellent work here!
> 
> > I have found some issues while creating a bays with magnum
> > (https://bugs.launchpad.net/magnum/+bug/1575524) and I still need to
> > address few things:
> >
> >  - getting ID of the domain and user used to create trust (see my
> > patch set
> > 10 commit info). I have added domain class like in heat
> >
> > https://review.openstack.org/#/c/313293/10/manifests/keystone/domain.pp
> > but the requirement is to get domain and user id, not a name.
> >With names, the module fails to create trust...
> >
> >   Do you know if there is a simple way to get user/domain ID via
> > existing keystone module?
> 
> We already had this issue in the past, with neutron.conf that needed
> the tenant id from the nova service, to manage notifications.
> It was a bug and it was fixed very early.
> Using IDs in production deployments is:
> * hard to deploy: you need some magic that deploys the resource and gets
> the id to write it somewhere
> * not flexible: every time the resource changes, the ID changes and you
> need to update the conf
> 
> So please fix Magnum to allow using names (or maybe it's in Keystone,
> I haven't looked).
> Otherwise, you'll need to write a provider that will get the ID for you,
> look this example:
> https://github.com/openstack/puppet-tempest/blob/master/lib/puppet/provider/tempest_glance_id_setter/openstack.rb

No problem from me. Please file a bug for that.

> 
> >  - as magnum requires lbaas and in Mitaka v2 is supported according
> to
> > the docs
> > http://docs.openstack.org/mitaka/networking-guide/adv-config-lbaas.html
> > we should add to neutron module or integration class directly the
> > following
> > changes:
> >
> >  class { '::neutron::agents::lbaas':
> > interface_driver => $driver,
> > debug=> true,
> > enable_v1=> false,
> > enable_v2=> true,
> >   }
> >
> >   neutron_config { 'service_providers/service_provider':
> > value =>
> > 'LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default';
> >   }
> >
> 
> Good to know, we recently did some work to stabilize puppet-neutron so
> we can deploy LBaaS v2, mjblack worked on it, maybe we can have a
> status about it here.

FYI, Magnum is using LBaaS v1. There is a blueprint [1] to migrate to v2, but 
the blueprint is not finished yet.

[1] https://blueprints.launchpad.net/magnum/+spec/magnum-lbaasv2-support

> 
> > - add a parameter to api.pp or create a new class with this parameter
> > to manage the certificate manager entry inside the [certificates] section
> > of magnum.conf. Any preferences here?
> 
> Is it some entries for enabling SSL? Or related to Barbican?
> If related to barbican, I suggest we take the puppet-oslo approach, and
> create a Define resource with the common parameters that we'll have in
> our puppet modules for the barbican section.
> Consuming this class/define or writing this code in api.pp makes sense
> to me, for now.

It is related to Barbican.

> 
> >
> > - should we extend the glance_image resource type to contain the --os-distro
> > property so we can add the fedora-atomic or core-os image via the
> > glance_image resource type?
> 
> Definitely yes. Once we have all of this, we might want to create a 4th
> scenario in our integration CI with magnum + containers + barbican
> + neutron lbaas v2, so we test the full stack.
> 
> Question: how do you test Magnum, do you have Tempest tests?
> I see
> https://github.com/openstack/magnum/tree/master/magnum/tests/functional
> /tempest_tests
> kind of empty.
> Our CI is running Tempest, it would be very useful for us to have
> Magnum tests.
> 
> Thanks for your work here,
> --
> Emilien Macchi
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Jeremy Stanley
On 2016-05-11 14:01:37 -0400 (-0400), Sean Dague wrote:
> Well, part of the reason for the wiki over etherpads is that it is
> findable in google.
[...]

Yep, also if we somehow found a way to get them to index our
etherpads, those would be overrun with spam in short order. This was
the case with the paste server too, and as soon as we added a
robots.txt to get search engines to cease indexing them the abuse
situation we had there trailed off to nothing in a matter of weeks.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third-party] Are you getting value from the 8:00 utc Tuesday meeting?

2016-05-11 Thread Mike Perez
On 14:30 May 10, Anita Kuno wrote:
> I've been chairing this meeting for about 3 releases now and in this
> last release it has mostly been myself and lennyb, who also attends the
> Monday 15:00 utc third-party meeting that I chair.
> 
> Are you getting value from the Tuesday 8:00 utc third-party meeting? If
> yes, please make yourself known. If no, the meeting in this slot will be
> removed leaving the other third-party meetings to continue along their
> regular schedule.

I just want to thank you Anita for providing this to the community and helping
the third-party CI movement with a variety of projects. I know this was not
a convenient time for you, but you made sure everyone could get help.

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pbr] New release?

2016-05-11 Thread Doug Hellmann
Excerpts from Julien Danjou's message of 2016-05-11 17:03:03 +0200:
> Hey there,
> 
> Would it be possible to push a new release out?
> 
> I'd like to enjoy the fix for the --port option from Thomas for WSGI
> scripts.
> 
> Cheers,

pbr is an Oslo library, so you should bring it up with the Oslo PTL or
release liaison.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Paul Belanger
On Wed, May 11, 2016 at 05:51:41PM +, Jeremy Stanley wrote:
> On 2016-05-11 10:04:26 -0400 (-0400), Sean Dague wrote:
> [...]
> > Before deciding that it's unsurmountable, maybe giving delete
> > permissions to a larger set of contributors might be a good idea.
> [...]
> 
> I have no objection to granting delete (on non-locked articles) to
> all users immediately if someone works out the necessary
> configuration change to put that into effect. I expect it was
> restricted by default to encourage people to instead annotate
> articles as deprecated or use the "move" feature to redirect them to
> more relevant ones. That said, I think we're at the point where we
> should start considering such important content as candidates to
> move elsewhere outside the wiki (especially any currently remaining
> locked articles).
> 
> > I do realize running a public wiki is a PITA, especially at the
> > target level of openstack.org.
> [...]
> 
> The thing which would most significantly reduce its attractiveness
> to spammers would be to stop having major search engines index the
> content on it. Somewhere they can publish content and have it
> immediately show up in search engine results is really _all_ they
> care about. Maybe a compromise is to say that the wiki is an okay
> place to publish things you don't need people to find through
> external search engines? Then we could quite easily open account
> creation back up with minimal risk and much lower moderator demand.

I think giving out delete permissions is a good first step; however, if we are
going down this path of keeping the wiki alive, I believe we have to look at
re-evaluating mediawiki. Our current setup with puppet is not ideal and I feel
mediawiki is maybe too much software for what we are doing.

I'd love to use something easier to manage (ideally not PHP based).

> -- 
> Jeremy Stanley
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Sean Dague
On 05/11/2016 01:51 PM, Jeremy Stanley wrote:
> On 2016-05-11 10:04:26 -0400 (-0400), Sean Dague wrote:
> [...]
>> Before deciding that it's unsurmountable, maybe giving delete
>> permissions to a larger set of contributors might be a good idea.
> [...]
> 
> I have no objection to granting delete (on non-locked articles) to
> all users immediately if someone works out the necessary
> configuration change to put that into effect. I expect it was
> restricted by default to encourage people to instead annotate
> articles as deprecated or use the "move" feature to redirect them to
> more relevant ones. That said, I think we're at the point where we
> should start considering such important content as candidates to
> move elsewhere outside the wiki (especially any currently remaining
> locked articles).
> 
>> I do realize running a public wiki is a PITA, especially at the
>> target level of openstack.org.
> [...]
> 
> The thing which would most significantly reduce its attractiveness
> to spammers would be to stop having major search engines index the
> content on it. Somewhere they can publish content and have it
> immediately show up in search engine results is really _all_ they
> care about. Maybe a compromise is to say that the wiki is an okay
> place to publish things you don't need people to find through
> external search engines? Then we could quite easily open account
> creation back up with minimal risk and much lower moderator demand.

Well, part of the reason for the wiki over etherpads is that it is
findable in google. This has been especially important when the search
engine in mediawiki stops updating itself (which has happened often
enough that it's a non-theoretical issue).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [designate] multi-tenancy in Neutron's DNS integration

2016-05-11 Thread Hayes, Graham
On 11/05/2016 18:45, Mike Spreitzer wrote:
>> On Tue, May 10, 2016 at 11:28 AM, Kevin Benton  wrote:
>> neutron subnet-show with the UUID of the subnet they have a port on
>> will tell you.
>>
>> On Tue, May 10, 2016 at 6:40 AM, Mike Spreitzer 
> wrote:
>> "Hayes, Graham"  wrote on 05/10/2016 09:30:26 AM:
>>
>> > ...
>> > > Ah, that may be what I want.  BTW, I am not planning to use Nova.
> I am
>> > > planning to use Swarm and Kubernetes to create containers attached to
>> > > Neutron private tenant networks.  What DNS server would I configure
>> > > those containers to use?
>> >
>> > ...
>> >
>> > The DNSMasq instance running on the neutron network would have these
>> > records - they should be sent as part of the DHCP lease, so leaving the
>> > DNS set to automatic should pick them up.
>>
>> IIRC, our Docker containers do not use DHCP.  Is there any other way
>> to find out the correct DNS server(s) for the containers to use?
>
>
>> From: Kevin Benton 
>> ...
>>
>> Whoops. What I just said was wrong if it hadn't been explicitly
> overwritten.
>>
>> I think you will end up having to do a port-list looking for the DHCP
> port(s).
>> http://paste.openstack.org/show/496604/
>
>
>
> So the DNS server and the DHCP server are bundled together?
>
> Like I said, our Docker containers do not use DHCP.  Our subnets do not
> enable DHCP.  We have no DHCP ports.  Does that mean we do not get
> internal DNS?
>
> Thanks,
> Mike
>

Unfortunately, I do not think so.

-- Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] RPC messaging issue and convergence

2016-05-11 Thread Anant Patil
Hi,

I have confirmed that the issue related to local queueing of resource
requests, which I highlighted at the design summit, exists currently. I
have also confirmed that the issue is solved in oslo.messaging version
5.0.0.

The issue is with the oslo.messaging library below version 5.0.0. The
messages carrying RPC call/cast requests are drained from the
messaging server (RabbitMQ) and submitted to the thread pool executor
(GreenThreadPoolExecutor from the futurist library). Before submitting the
message to the executor, the message is acknowledged, which means the
message is deleted from the messaging server. The thread pool executor
queues the messages locally when there are no eventlets available to
process the message. This is bad because the messages are queued up
locally, and if the process goes down, these messages are lost and are
very difficult to recover, as they are no longer available in the
messaging server. The mail thread
http://lists.openstack.org/pipermail/openstack-dev/2015-July/068742.html
gives more context and I cried and wept when I read it.

In convergence, the heat engine casts the requests to process the
resources and we don't want the heat engine failures to result in loss
of those resource requests, as there is no easier way to recover them.

The issue is fixed by https://review.openstack.org/#/c/297988 . I
installed and tested with version 5.0.0, which is the latest version of
oslo.messaging and has the fix.  In the new version, the messages are
acknowledged only after the message gets an eventlet. It is not ideal in
the sense that it doesn't give the service/client the freedom to
acknowledge when it wants to, but better than the older versions. So, if
the engine process cannot get an eventlet/thread to process the message,
it is not acknowledged and it remains in the messaging server.

I tested with two engine processes with executor thread pool size set to
2. This means at most 4 resources should be processed at a time and the
remaining should stay available in the messaging server. I created a stack
of 8 test resources, each with 20 secs of waiting time, and saw that 4
messages were available in the messaging server while the other 4 were being
processed. I restarted the engine processes and the remaining messages
were again taken up for processing.
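
For reference, a minimal sketch of that kind of test setup (not Heat's
actual wiring; the config path and endpoint are assumptions): an
oslo.messaging RPC server using the eventlet executor, with the pool size
coming from the executor_thread_pool_size option in the service's config
file:

  # Minimal sketch only. With executor_thread_pool_size = 2 in the config
  # file and oslo.messaging >= 5.0.0, at most two messages per process are
  # handed to greenthreads; the rest stay unacknowledged in RabbitMQ.
  from oslo_config import cfg
  import oslo_messaging

  class EngineEndpoint(object):
      def check_resource(self, ctxt, resource_id):
          pass  # placeholder for the real resource processing

  conf = cfg.CONF
  conf(["--config-file", "/etc/heat/heat.conf"])  # assumed config location

  transport = oslo_messaging.get_transport(conf)
  target = oslo_messaging.Target(topic="engine", server="engine-1")
  server = oslo_messaging.get_rpc_server(
      transport, target, [EngineEndpoint()], executor="eventlet")
  server.start()
  server.wait()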

I am glad that the issue is fixed in the new version and we should move
to it before enabling convergence by default.

-- Anant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread Ryan Moats
John McDowall  wrote on 05/11/2016 12:37:40
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" 
> Date: 05/11/2016 12:37 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> I have done networking-sfc; the files that I see as changed/added are:
>
> devstack/settings   Modified Runtime setting to pick up OVN Driver
> networking_sfc/db/migration/alembic_migrations/versions/mitaka/
> expand/5a475fc853e6_ovs_data_model.py Hack to work around
> flow_classifier issue – need to resolve with SFC team.
> networking_sfc/services/sfc/drivers/ovn/__init__.py   Added for OVN
Driver
> networking_sfc/services/sfc/drivers/ovn/driver.py Added ovn driver
file
> setup.cfg Inserted OVN driver entry
>

Ack... I'll run through those changes either later today or tomorrow
morning...

Thanks,
Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stale documentation on docs.o.o

2016-05-11 Thread Andreas Jaeger
On 05/11/2016 07:15 PM, Carl Baldwin wrote:
> Greetings all,
> 
> This document [1] was superseded by this one [2] a while ago.  It
> looks like the source for [1] no longer exists but the web server
> still has a copy that it continues to serve up.  At least, that is the
> situation as far as I can tell.
> 
> Can anyone help here?  Do we need to add something to the docs
> generation to clean up stale pages?
> 
> Carl Baldwin
> 
> [1] 
> http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
> [2] 
> http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html


I've just removed that page and a few others in the devref directory
that were stale - for next time, the recommendation by Matt works best...

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Jeremy Stanley
On 2016-05-11 10:04:26 -0400 (-0400), Sean Dague wrote:
[...]
> Before deciding that it's unsurmountable, maybe giving delete
> permissions to a larger set of contributors might be a good idea.
[...]

I have no objection to granting delete (on non-locked articles) to
all users immediately if someone works out the necessary
configuration change to put that into effect. I expect it was
restricted by default to encourage people to instead annotate
articles as deprecated or use the "move" feature to redirect them to
more relevant ones. That said, I think we're at the point where we
should start considering such important content as candidates to
move elsewhere outside the wiki (especially any currently remaining
locked articles).

> I do realize running a public wiki is a PITA, especially at the
> target level of openstack.org.
[...]

The thing which would most significantly reduce its attractiveness
to spammers would be to stop having major search engines index the
content on it. Somewhere they can publish content and have it
immediately show up in search engine results is really _all_ they
care about. Maybe a compromise is to say that the wiki is an okay
place to publish things you don't need people to find through
external search engines? Then we could quite easily open account
creation back up with minimal risk and much lower moderator demand.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [designate] multi-tenancy in Neutron's DNS integration

2016-05-11 Thread Mike Spreitzer
> On Tue, May 10, 2016 at 11:28 AM, Kevin Benton  wrote:
> neutron subnet-show with the UUID of the subnet they have a port on 
> will tell you.
> 
> On Tue, May 10, 2016 at 6:40 AM, Mike Spreitzer  
wrote:
> "Hayes, Graham"  wrote on 05/10/2016 09:30:26 AM:
> 
> > ...
> > > Ah, that may be what I want.  BTW, I am not planning to use Nova.  I 
am
> > > planning to use Swarm and Kubernetes to create containers attached 
to
> > > Neutron private tenant networks.  What DNS server would I configure
> > > those containers to use?
> > 
> > ...
> > 
> > The DNSMasq instance running on the neutron network would have these 
> > records - they should be sent as part of the DHCP lease, so leaving 
the
> > DNS set to automatic should pick them up.
> 
> IIRC, our Docker containers do not use DHCP.  Is there any other way
> to find out the correct DNS server(s) for the containers to use?


> From: Kevin Benton 
> ...
> 
> Whoops. What I just said was wrong if it hadn't been explicitly 
overwritten.
> 
> I think you will end up having to do a port-list looking for the DHCP 
port(s).
> http://paste.openstack.org/show/496604/



So the DNS server and the DHCP server are bundled together?

Like I said, our Docker containers do not use DHCP.  Our subnets do not 
enable DHCP.  We have no DHCP ports.  Does that mean we do not get 
internal DNS?

Thanks,
Mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread John McDowall
Ryan,

I have done networking-sfc; the files that I see as changed/added are:

devstack/settings   Modified Runtime setting to pick up OVN Driver
networking_sfc/db/migration/alembic_migrations/versions/mitaka/expand/5a475fc853e6_ovs_data_model.py
 Hack to work around flow_classifier issue - need to resolve with SFC team.
networking_sfc/services/sfc/drivers/ovn/__init__.py   Added for OVN Driver
networking_sfc/services/sfc/drivers/ovn/driver.py Added ovn driver file
setup.cfg Inserted OVN driver entry

I am currently working to clean up ovs/ovn.

Regards

John

From: Ryan Moats <rmo...@us.ibm.com>
Date: Wednesday, May 11, 2016 at 10:22 AM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall <jmcdow...@paloaltonetworks.com> wrote on 05/11/2016 12:06:55 PM:

> From: John McDowall <jmcdow...@paloaltonetworks.com>
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, "OpenStack
> Development Mailing List" <openstack-dev@lists.openstack.org>
> Date: 05/11/2016 12:07 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Looks good apart from:
>
> networking_ovn/common/extensions.py
>
> There should be no changes to that file, I removed them as they are
> from an older prototype.
>
> Regards
>
> John

Ah, ok, I must have missed that when rolling through the patch series -
I've updated my copy.

Are you tackling networking-sfc or ovs next?

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Stale documentation on docs.o.o

2016-05-11 Thread Matt Kassawara
All of the documentation on d.o.o suffers from the inability to easily
delete files that search engines continue to reference. To delete specific
files, I recommend opening a bug in openstack-manuals and someone on the
documentation team (or infra?) can handle it.

On Wed, May 11, 2016 at 11:15 AM, Carl Baldwin  wrote:

> Greetings all,
>
> This document [1] was superseded by this one [2] a while ago.  It
> looks like the source for [1] no longer exists but the web server
> still has a copy that it continues to serve up.  At least, that is the
> situation as far as I can tell.
>
> Can anyone help here?  Do we need to add something to the docs
> generation to clean up stale pages?
>
> Carl Baldwin
>
> [1]
> http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
> [2]
> http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Gregory Haynes
On Wed, May 11, 2016, at 05:09 AM, Hayes, Graham wrote:
> On 10/05/2016 23:28, Gregory Haynes wrote:
> >
> > OK, I'll bite.
> >
> > I had a look at the code and there's a *ton* of low hanging fruit. I
> > decided to hack in some fixes or emulation of fixes to see whether I
> > could get any major improvements. Each test I ran 4 workers using
> > SO_REUSEPORT and timed doing 1k axfr's with 4 in parallel at a time and
> > recorded 5 timings. I also added these changes on top of one another in
> > the order they follow.
> 
> Thanks for the analysis - any suggestions about how we can improve the
> current design are more than welcome .
> 
> For this test, was it a single static zone? What size was it?
> 

This was a small single static zone - so the most time possible was
spent in python, as opposed to blocking on the network.

> >
> > Base timings: [9.223, 9.030, 8.942, 8.657, 9.190]
> >
> > Stop spawning a thread per request - there are a lot of ways to do this
> > better, but let's not even mess with that and just literally move the
> > thread spawning that happens per request because it's a silly idea here:
> > [8.579, 8.732, 8.217, 8.522, 8.214] (almost 10% increase).
> >
> > Stop instantiating the oslo config object per request - this should be a
> > no-brainer, we don't need to parse config inside of a request handler:
> > [8.544, 8.191, 8.318, 8.086] (a few more percent).
> >
> > Now, the slightly less low hanging fruit - there are 3 round trips to
> > the database *every request*. This is where the vast majority of request
> > time is spent (not in python). I didn't actually implement a full on
> > cache (I just hacked around the db queries), but this should be trivial
> > to do since designate does know when to invalidate the cache data. Some
> > numbers on how much a warm cache will help:
> >
> > Caching zone: [5.968, 5.942, 5.936, 5.797, 5.911]
> >
> > Caching records: [3.450, 3.357, 3.364, 3.459, 3.352].
> >
> > I would also expect real-world usage to be similar in that you should
> > only get 1 cache miss per worker per notify, and then all the other
> > public DNS servers would be getting cache hits. You could also remove
> > the cost of that 1 cache miss by pre-loading data in to the cache.
> 
> I actually would expect the real world use of this to have most of the
> servers have a cache miss.
> 
> We shuffle the order of the miniDNS servers sent out to the user facing
> DNS servers, so I would expect them to hit different minidns servers
> at nearly same time, and each of them try to generate the cache entry.
> 
> For pre-loading - this could work, but I *really* don't like relying on
> a cache for one of the critical path components.
> 

I am not sure what the issue with caching in general is, but it's not
far-fetched to pre-load an axfr into a cache before you send out any
notifies (since you know exactly when that will happen). For the herding
issue - that's just a matter of how you design your cache coherence
system. Usually you want to design that around your threading/worker
model and since we actually get a speed increase by turning the current
threading off it might be worth fixing that first...

That being said - this doesn't need to be designed amazingly to reap the
benefits being argued for. I haven't heard any goals of 'make every
single request as low latency as possible' (which is when you would
worry about dealing with cold cache costs), but instead that there's a
need to scale up to a potentially large number of clients all requesting
the same axfr at once. In that scenario even the most simple caching
setup would make a huge difference.
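
To make "even the most simple caching setup" concrete, here is a minimal
sketch of a per-worker warm cache (purely illustrative, not Designate's
actual cache; the storage interface and names are assumptions):

# Sketch only: keep zone/record data in memory per worker, pre-load it right
# before NOTIFYs go out, and invalidate it whenever the zone changes.
import threading

class ZoneCache(object):
    def __init__(self, storage):
        # 'storage' is assumed to expose get_zone() and get_records()
        self._storage = storage
        self._lock = threading.Lock()
        self._zones = {}  # zone name -> (zone, records)

    def preload(self, zone_name):
        """Warm the cache just before sending NOTIFY for this zone."""
        zone = self._storage.get_zone(zone_name)
        records = self._storage.get_records(zone_name)
        with self._lock:
            self._zones[zone_name] = (zone, records)

    def invalidate(self, zone_name):
        """Call whenever the zone is modified."""
        with self._lock:
            self._zones.pop(zone_name, None)

    def get(self, zone_name):
        """Used by the AXFR handler; falls back to storage on a miss."""
        with self._lock:
            hit = self._zones.get(zone_name)
        if hit is None:
            self.preload(zone_name)
            with self._lock:
                hit = self._zones[zone_name]
        return hit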

> >
> > All said and done, I think that's almost a 3x speed increase with
> > minimal effort. So, can we stop saying that this has anything to do with
> > Python as a language and has everything to do with the algorithms being
> > used?
> 
> As I have said before - for us, the time spent : performance
> improvement ratio is just much higher (for our dev team at least) with
> Go.
> 
> We saw a 50x improvement for small SOA queries, and ~ 10x improvement
> for 2000 record AXFR (without caching). The majority of your
> improvement came from caching, so I would imagine that would speed up
> the Go implementation as well.
> 

There has to be something very different between your python testing set
up and mine. In my testing there simply wasn't enough time spent in
Python to get even a 2x speed increase by removing all execution time. I
wonder if this is because the code originally spawns a thread per
request and therefore if you run it with a large number of parallel
requests you'll effectively thread bomb all the workers?
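
For illustration, the difference I mean is roughly the following (a sketch,
not the actual mdns code; the handler and pool size are made up):

# Sketch only: hand requests to a pool sized once at startup instead of
# spawning a thread per request, so a burst of parallel AXFRs queues up
# rather than thread-bombing the worker.
from concurrent.futures import ThreadPoolExecutor  # 'futures' backport on py2

pool = ThreadPoolExecutor(max_workers=8)

def handle_connection(conn, addr):
    # parse the DNS query from conn and answer it
    pass

# before: threading.Thread(target=handle_connection, args=(conn, addr)).start()
# after:
def on_new_connection(conn, addr):
    pool.submit(handle_connection, conn, addr)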

The point I am trying to make here is that throwing out "we rewrote our
software in another language, now its X times faster" does not mean that
the language is the issue. If there is a potential language issue it is
extremely useful for us to drill into and determine the root cause
since that affects all of us. Since there is also obv

Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread Ryan Moats
John McDowall  wrote on 05/11/2016 12:06:55
PM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" 
> Date: 05/11/2016 12:07 PM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Looks good apart from:
>
> networking_ovn/common/extensions.py
>
> There should be no changes to that file, I removed them as they are
> from an older prototype.
>
> Regards
>
> John

Ah, ok, I must have missed that when rolling through the patch series -
I've updated my copy.

Are you tackling networking-sfc or ovs next?

Ryan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Stale documentation on docs.o.o

2016-05-11 Thread Carl Baldwin
Greetings all,

This document [1] was superseded by this one [2] a while ago.  It
looks like the source for [1] no longer exists but the web server
still has a copy that it continues to serve up.  At least, that is the
situation as far as I can tell.

Can anyone help here?  Do we need to add something to the docs
generation to clean up stale pages?

Carl Baldwin

[1] 
http://docs.openstack.org/developer/neutron/devref/sub_project_guidelines.html
[2] 
http://docs.openstack.org/developer/neutron/stadium/sub_project_guidelines.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need a volunteer for documentation liaisons

2016-05-11 Thread Anthony Chow
Cool. Just let me know if there is anything I can do.

Have a nice day. :)

Anthony.

On Wed, May 11, 2016 at 9:05 AM, Hongbin Lu  wrote:

> Spyros,
>
>
>
> Thanks for taking this role. I have added your name to the Cross Project
> Liaison wiki [1]. For Anthony and other candidates, this role requires a
> basic understanding of Magnum and OpenStack documentation. I assigned this
> role to Spyros since his past work already showed his qualification.
> However, I encourage you to talk to Spyros if you want to share parts of
> the responsibility.
>
>
>
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Spyros Trigazis [mailto:strig...@gmail.com]
> *Sent:* May-11-16 11:13 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum] Need a volunteer for
> documentation liaisons
>
>
>
> Hi,
>
>
>
> since I work on the install docs (btw, I'll push in our repo in the next
> hour), I could do it.
>
>
>
> Spyros
>
>
>
> On 11 May 2016 at 00:35, Anthony Chow  wrote:
>
> HongBin,
>
> What is the skill requirement or credential for this documentation liaison
> role?  I am interested in doing this
>
> Anthony.
>
>
>
> On Tue, May 10, 2016 at 3:24 PM, Hongbin Lu  wrote:
>
> Hi team,
>
> We need a volunteer as liaison for the documentation team. Just let me know if
> you are interested in this role.
>
> Best regards,
> Hongbin
>
> > -Original Message-
> > From: Lana Brindley [mailto:openst...@lanabrindley.com]
> > Sent: May-10-16 5:47 PM
> > To: OpenStack Development Mailing List; enstack.org
> > Subject: [openstack-dev] [PTL][docs]Update your cross-project liaison!
> >
> > Hi everyone,
> >
> > OpenStack uses cross-project liaisons to ensure that projects are
> > talking to each other effectively, and the docs CPLs are especially important
> > to the documentation team to ensure we have accurate docs. Can all PTLs
> > please take a moment to check (and update if necessary) their CPL
> > listed here:
> > https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
> >
> > Thanks a bunch!
> >
> > Lana
> >
> > --
> > Lana Brindley
> > Technical Writer
> > Rackspace Cloud Builders Australia
> > http://lanabrindley.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread John McDowall
Ryan,

Looks good apart from:

networking_ovn/common/extensions.py

There should be no changes to that file, I removed them as they are from an 
older prototype.

Regards

John

From: Ryan Moats mailto:rmo...@us.ibm.com>>
Date: Wednesday, May 11, 2016 at 9:59 AM
To: John McDowall 
mailto:jmcdow...@paloaltonetworks.com>>
Cc: "disc...@openvswitch.org" 
mailto:disc...@openvswitch.org>>, OpenStack 
Development Mailing List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall 
mailto:jmcdow...@paloaltonetworks.com>> wrote 
on 05/11/2016 11:30:07 AM:

> From: John McDowall 
> mailto:jmcdow...@paloaltonetworks.com>>
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" 
> mailto:disc...@openvswitch.org>>, "OpenStack
> Development Mailing List" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: 05/11/2016 11:30 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Apologies for missing the __init__.py files – removed them and
> remerged. When I do a compare from my repo to main I see three files
> changed (which I think is correct):
>
> networking_ovn/ovsdb/commands.py
> networking_ovn/ovsdb/impl_idl_ovn.py
> networking_ovn/ovsdb/ovn_api.py
>
> I could be doing something wrong, as I am not an expert at merging repos.
> If I am doing something wrong let me know and I will fix it.
>
> Regards
>
> John

So the change I made to common/extensions.py was to avoid a merge
conflict, and I double checked and yes, the changes I had to
plugin.py are spurious, so here is an updated/corrected patch for
you to check against:

From eb93dc3984145f1b82be15d204c2f0790c1429bd Mon Sep 17 00:00:00 2001
From: RYAN D. MOATS mailto:rmo...@us.ibm.com>>
Date: Wed, 11 May 2016 09:10:18 -0500
Subject: [PATCH] test

Signed-off-by: RYAN D. MOATS mailto:rmo...@us.ibm.com>>
---
 networking_ovn/common/extensions.py  |    1 +
 networking_ovn/ovsdb/commands.py     |   78 ++
 networking_ovn/ovsdb/impl_idl_ovn.py |   18 
 networking_ovn/ovsdb/ovn_api.py      |   49 +
 4 files changed, 146 insertions(+), 0 deletions(-)

diff --git a/networking_ovn/common/extensions.py 
b/networking_ovn/common/extensions.py
index c171e11..55fc147 100644
--- a/networking_ovn/common/extensions.py
+++ b/networking_ovn/common/extensions.py
@@ -37,4 +37,5 @@ SUPPORTED_API_EXTENSIONS = [
     'subnet_allocation',
     'port-security',
     'allowed-address-pairs',
+    'sfi',
 ]
diff --git a/networking_ovn/ovsdb/commands.py b/networking_ovn/ovsdb/commands.py
index 7ea7a6f..68a747f 100644
--- a/networking_ovn/ovsdb/commands.py
+++ b/networking_ovn/ovsdb/commands.py
@@ -164,6 +164,84 @@ class DelLogicalPortCommand(BaseCommand):
         setattr(lswitch, 'ports', ports)
         self.api._tables['Logical_Port'].rows[lport.uuid].delete()

+class AddLogicalServiceCommand(BaseCommand):
+    def __init__(self, api, lservice, lswitch, may_exist, **columns):
+        super(AddLogicalServiceCommand, self).__init__(api)
+        self.lservice = lservice
+        self.lswitch = lswitch
+        self.may_exist = may_exist
+        self.columns = columns
+
+    def run_idl(self, txn):
+        try:
+            lswitch = idlutils.row_by_value(self.api.idl, 'Logical_Switch',
+                                            'name', self.lswitch)
+            services = getattr(lswitch, 'services', [])
+        except idlutils.RowNotFound:
+            msg = _("Logical Switch %s does not exist") % self.lswitch
+            raise RuntimeError(msg)
+        if self.may_exist:
+            service = idlutils.row_by_value(self.api.idl,
+                                            'Logical_Service', 'name',
+                                            self.lservice, None)
+            if service:
+                return
+
+        lswitch.verify('services')
+
+        service = txn.insert(self.api._tables['Logical_Service'])
+        service.name = self.lservice
+        for col, val in self.columns.items():
+            setattr(service, col, val)
+        # add the newly created service to existing lswitch
+        services.append(service.uuid)
+        setattr(lswitch, 'services', services)
+
+class SetLogicalServiceCommand(BaseCommand):
+    def __init__(self, api, lservice, if_exists, **columns):
+        super(SetLogicalServiceCommand, self).__init__(api)
+        self.lservice = lservice
+        self.columns = columns
+        self.if_exists = if_exists
+
+    def run_idl(self, txn):
+        try:
+            service = idlutils.row_by_value(self.api.idl, 'Logical_Service',
+                                            'name', self.lservice)
+        except idlutils.RowNotFound:
+            if self.if_exists:
+                return
+            msg = _("Logical Service %s does not exist") % self.lservice
+            raise RuntimeError(msg)
+
+        for col, val in self.columns.i

Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread Ryan Moats
John McDowall  wrote on 05/11/2016 11:30:07
AM:

> From: John McDowall 
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" , "OpenStack
> Development Mailing List" 
> Date: 05/11/2016 11:30 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Apologies for missing the __init__.py files – removed them and
> remerged. When I do a compare from my repo to main I see three files
> changed (which I think is correct):
>
> networking_ovn/ovsdb/commands.py
> networking_ovn/ovsdb/impl_idl_ovn.py
> networking_ovn/ovsdb/ovn_api.py
>
> I could be doing something wrong, as I am not an expert at merging repos.
> If I am doing something wrong let me know and I will fix it.
>
> Regards
>
> John

So the change I made to common/extensions.py was to avoid a merge
conflict, and I double checked and yes, the changes I had to
plugin.py are spurious, so here is an updated/corrected patch for
you to check against:

From eb93dc3984145f1b82be15d204c2f0790c1429bd Mon Sep 17 00:00:00 2001
From: RYAN D. MOATS 
Date: Wed, 11 May 2016 09:10:18 -0500
Subject: [PATCH] test

Signed-off-by: RYAN D. MOATS 
---
 networking_ovn/common/extensions.py  |    1 +
 networking_ovn/ovsdb/commands.py     |   78 ++
 networking_ovn/ovsdb/impl_idl_ovn.py |   18 
 networking_ovn/ovsdb/ovn_api.py      |   49 +
 4 files changed, 146 insertions(+), 0 deletions(-)

diff --git a/networking_ovn/common/extensions.py 
b/networking_ovn/common/extensions.py
index c171e11..55fc147 100644
--- a/networking_ovn/common/extensions.py
+++ b/networking_ovn/common/extensions.py
@@ -37,4 +37,5 @@ SUPPORTED_API_EXTENSIONS = [
     'subnet_allocation',
     'port-security',
     'allowed-address-pairs',
+    'sfi',
 ]
diff --git a/networking_ovn/ovsdb/commands.py b/networking_ovn/ovsdb/commands.py
index 7ea7a6f..68a747f 100644
--- a/networking_ovn/ovsdb/commands.py
+++ b/networking_ovn/ovsdb/commands.py
@@ -164,6 +164,84 @@ class DelLogicalPortCommand(BaseCommand):
         setattr(lswitch, 'ports', ports)
         self.api._tables['Logical_Port'].rows[lport.uuid].delete()

+class AddLogicalServiceCommand(BaseCommand):
+    def __init__(self, api, lservice, lswitch, may_exist, **columns):
+        super(AddLogicalServiceCommand, self).__init__(api)
+        self.lservice = lservice
+        self.lswitch = lswitch
+        self.may_exist = may_exist
+        self.columns = columns
+
+    def run_idl(self, txn):
+        try:
+            lswitch = idlutils.row_by_value(self.api.idl, 'Logical_Switch',
+                                            'name', self.lswitch)
+            services = getattr(lswitch, 'services', [])
+        except idlutils.RowNotFound:
+            msg = _("Logical Switch %s does not exist") % self.lswitch
+            raise RuntimeError(msg)
+        if self.may_exist:
+            service = idlutils.row_by_value(self.api.idl,
+                                            'Logical_Service', 'name',
+                                            self.lservice, None)
+            if service:
+                return
+
+        lswitch.verify('services')
+
+        service = txn.insert(self.api._tables['Logical_Service'])
+        service.name = self.lservice
+        for col, val in self.columns.items():
+            setattr(service, col, val)
+        # add the newly created service to existing lswitch
+        services.append(service.uuid)
+        setattr(lswitch, 'services', services)
+
+class SetLogicalServiceCommand(BaseCommand):
+    def __init__(self, api, lservice, if_exists, **columns):
+        super(SetLogicalServiceCommand, self).__init__(api)
+        self.lservice = lservice
+        self.columns = columns
+        self.if_exists = if_exists
+
+    def run_idl(self, txn):
+        try:
+            service = idlutils.row_by_value(self.api.idl, 'Logical_Service',
+                                            'name', self.lservice)
+        except idlutils.RowNotFound:
+            if self.if_exists:
+                return
+            msg = _("Logical Service %s does not exist") % self.lservice
+            raise RuntimeError(msg)
+
+        for col, val in self.columns.items():
+            setattr(service, col, val)
+
+class DelLogicalServiceCommand(BaseCommand):
+    def __init__(self, api, lservice, lswitch, if_exists):
+        super(DelLogicalServiceCommand, self).__init__(api)
+        self.lservice = lservice
+        self.lswitch = lswitch
+        self.if_exists = if_exists
+
+    def run_idl(self, txn):
+        try:
+            lservice = idlutils.row_by_value(self.api.idl, 'Logical_Service',
+                                             'name', self.lservice)
+            lswitch = idlutils.row_by_value(self.api.idl, 'Logical_Switch',
+                                            'name', self.lswitch)
+            services = getattr(lswitch, 'services', [])
+        except idlutils.RowNotFound:
+            if self.if_exists:
+                ret

Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Flavio Percoco

On 11/05/16 09:47 -0500, Dean Troyer wrote:

On Tue, May 10, 2016 at 5:54 PM, Flavio Percoco  wrote:

[language mixing bits were here]


   The above is my main concern with this proposal. I've mentioned this in the
   upstream review and I'm glad to have found it here as well. The community
   impact
   of this change is perhaps not being discussed enough and I believe, in the
   long
   run, it'll bite us.


Agreed, but to do nothing instead is so not what we are about.  The change from
integrated/incubated to Big Tent was done to address some issues knowing we did
not have all of the answers up front and would learn some things along the
way.  We did learn some things, both good and bad.

I do believe that we can withstand the impact of a new language, particularly
when we do it intentionally and knowing where some of the pitfalls are.  Also,
the specific request is coming from the oldest of all OpenStack projects, and
one that has a history of not making big changes without _really_ good
reasons.  Yes it opens a door, but it will be opened with what I believe to be
a really solid model to build upon in other parts of the OpenStack community. 
I would MUCH rather do it this way than with a new Go-only project that is
joining OpenStack from scratch in more than just the implementation language.



So, one thing that was mentioned during the last TC meeting is to decide this on
a per-project basis. Don't open the door entirely but let projects sign up for this.
This would give us more contained growth as far as projects with Go code go, but
it does mean we'll have to do a technical analysis of every project willing to
sign up, and it kinda goes against the principles of the big tent.



   The feedback from the Horizon community has been that it's been impossible
   to
   avoid a community split and that's what I'd like to avoid.


I do think part of this is also due to the differences in the problem domain of
client/browser-side and server-side.  I believe there is a similar issue with
 devs writing SQL, the overlap in expertise between the two is
way smaller than we all wish it was.


Exactly! This separation of domains is the reason why opening the door for JS
code was easier. The request was for browser apps that can't be written in
Python.


And for the specific Python-Golang overlap, it feels to me like more Python
devs have worked (or at least talked about working) in Go than in other newish
languages.  There are worse choices to test the waters with.


Just to stress this a bit more, I don't think the problem is the language per
se. There are certainly technical issues related to it (packaging, CI, etc) but
the main discussion is currently going around the impact this change will have
in the community and other areas. I'm sure we can figure the technical issues
out.

Flavio


dt

--

Dean Troyer
dtro...@gmail.com


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][infra][puppet] HW upgrade for tripleo CI

2016-05-11 Thread Jeremy Stanley
On 2016-05-11 08:49:39 -0400 (-0400), Emilien Macchi wrote:
[...]
> TripleO Jobs are going to be red from Today 1 pm UTC until
> tomorrow hopefully.
[...]

Note that the _probable_ outcome is that jobs will simply queue in
Zuul, and then the TripleO test cloud will begin to churn through
the backlog once it comes back online. However depending on how
viable it is when it returns to service, it's also possible a lot of
those jobs might just fail while unanticipated complications get
worked out. Also if we end up needing to restart Zuul for some
reason, we likely won't preserve the check-tripleo and
experimental-tripleo backlog.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Flavio Percoco

On 11/05/16 12:09 +0000, Hayes, Graham wrote:

On 10/05/2016 23:28, Gregory Haynes wrote:

On Tue, May 10, 2016, at 11:10 AM, Hayes, Graham wrote:

On 10/05/2016 01:01, Gregory Haynes wrote:


On Mon, May 9, 2016, at 03:54 PM, John Dickinson wrote:

On 9 May 2016, at 13:16, Gregory Haynes wrote:


This is a bit of an aside but I am sure others are wondering the same
thing - Is there some info (specs/etherpad/ML thread/etc) that has more
details on the bottleneck you're running in to? Given that the only
clients of your service are the public facing DNS servers I am now even
more surprised that you're hitting a python-inherent bottleneck.


In Swift's case, the summary is that it's hard[0] to write a network
service in Python that shuffles data between the network and a block
device (hard drive) and effectively utilizes all of the hardware
available. So far, we've done very well by fork()'ing child processes,
using cooperative concurrency via eventlet, and basic "write more
efficient code" optimizations. However, when it comes down to it,
managing all of the async operations across many cores and many drives
is really hard, and there just isn't a good, efficient interface for
that in Python.


This is a pretty big difference from hitting an unsolvable performance
issue in the language and instead is a case of language preference -
which is fine. I don't really want to fall in to the language-comparison
trap, but I think more detailed reasoning for why it is preferable over
python in specific use cases we have hit is good info to include /
discuss in the document you're drafting :). Essentially it's a matter of
weighing the costs (which lots of people have hit on so I won't) with
the potential benefits and so unless the benefits are made very clear
(especially if those benefits are technical) its pretty hard to evaluate
IMO.

There seemed to be an assumption in some of the designate rewrite posts
that there is some language-inherent performance issue causing a
bottleneck. If this does actually exist then that is a good reason for
rewriting in another language and is something that would be very useful
to clearly document as a case where we support this type of thing. I am
highly suspicious that this is the case though, but I am trying hard to
keep an open mind...


The way this component works makes it quite difficult to make any major
improvement.


OK, I'll bite.

I had a look at the code and there's a *ton* of low hanging fruit. I
decided to hack in some fixes or emulation of fixes to see whether I
could get any major improvements. Each test I ran 4 workers using
SO_REUSEPORT and timed doing 1k axfr's with 4 in parallel at a time and
recorded 5 timings. I also added these changes on top of one another in
the order they follow.


Thanks for the analysis - any suggestions about how we can improve the
current design are more than welcome.

For this test, was it a single static zone? What size was it?



Base timings: [9.223, 9.030, 8.942, 8.657, 9.190]

Stop spawning a thread per request - there are a lot of ways to do this
better, but lets not even mess with that and just literally move the
thread spawning that happens per request because its a silly idea here:
[8.579, 8.732, 8.217, 8.522, 8.214] (almost 10% increase).

Stop instantiating oslo config object per request - this should be a no
brainer, we dont need to parse config inside of a request handler:
[8.544, 8.191, 8.318, 8.086] (a few more percent).

Now, the slightly less low hanging fruit - there are 3 round trips to
the database *every request*. This is where the vast majority of request
time is spent (not in python). I didn't actually implement a full on
cache (I just hacked around the db queries), but this should be trivial
to do since designate does know when to invalidate the cache data. Some
numbers on how much a warm cache will help:

Caching zone: [5.968, 5.942, 5.936, 5.797, 5.911]

Caching records: [3.450, 3.357, 3.364, 3.459, 3.352].

I would also expect real-world usage to be similar in that you should
only get 1 cache miss per worker per notify, and then all the other
public DNS servers would be getting cache hits. You could also remove
the cost of that 1 cache miss by pre-loading data in to the cache.


I actually would expect the real world use of this to have most of the
servers have a cache miss.

We shuffle the order of the miniDNS servers sent out to the user facing
DNS servers, so I would expect them to hit different minidns servers
at nearly same time, and each of them try to generate the cache entry.

For pre-loading - this could work, but I *really* don't like relying on
a cache for one of the critical path components.



All said and done, I think that's almost a 3x speed increase with
minimal effort. So, can we stop saying that this has anything to do with
Python as a language and has everything to do with the algorithms being
used?


As I have said before - for us, the time spent : performance
improvement ratio is just mu

Re: [openstack-dev] [requirements] Bootstrapping new team for Requirements.

2016-05-11 Thread Davanum Srinivas
Thanks Doug,

Folks,
Let's use #openstack-requirements channel

-- Dims

On Tue, May 10, 2016 at 11:46 AM, Doug Hellmann  wrote:
> Excerpts from Davanum Srinivas (dims)'s message of 2016-05-07 11:02:23 -0400:
>> Dirk, Haïkel, Igor, Alan, Tony, Ghe,
>>
>> Please see brain dump here - 
>> https://etherpad.openstack.org/p/requirements-tasks
>>
>> Looking at time overlap, it seems that most of you are in one time
>> range and Tony and I are outliers
>> http://www.timeanddate.com/worldclock/meetingtime.html?iso=20160506&p1=43&p2=240&p3=195&p4=166&p5=83&p6=281&p7=141)
>>
>> So one choice for time is 7:00 AM or 8:00 AM my time which will be
>> 9:00/10:00 PM for Tony. Are there other options that anyone sees?
>> Please let me know which days work as well.
>>
>> dhellmann, sdague, markmcclain, ttx, lifeless,
>> Since you are on the current requirements-core gerrit group, Can you
>> please review the etherpad and add your thoughts/ideas/pointers to
>> transfer knowledge to the new folks?
>
> I've added a bunch of the todo items that came up in the release team
> meetup Friday at the summit.
>
> Doug
>
>>
>> To be clear, we are not yet adding new folks to the gerrit group. At
>> the moment, I am just getting everyone familiar and productive with
>> what we do now and see who is still around doing stuff in a couple of
>> months :)
>>
>> Anyone else want to help, Jump in please!
>>
>> Thanks,
>> Dims
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-05-11 Thread John McDowall
Ryan,

Apologies for missing the __init__.py files – removed them and remerged. When I
do a compare from my repo to main I see three files changed (which I think is 
correct):

networking_ovn/ovsdb/commands.py
networking_ovn/ovsdb/impl_idl_ovn.py
networking_ovn/ovsdb/ovn_api.py

I could be doing something wrong, as I am not an expert at merging repos. If I am
doing something wrong let me know and I will fix it.

Regards

John

From: Ryan Moats mailto:rmo...@us.ibm.com>>
Date: Wednesday, May 11, 2016 at 7:12 AM
To: John McDowall 
mailto:jmcdow...@paloaltonetworks.com>>
Cc: "disc...@openvswitch.org" 
mailto:disc...@openvswitch.org>>, OpenStack 
Development Mailing List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN


John McDowall 
mailto:jmcdow...@paloaltonetworks.com>> wrote 
on 05/10/2016 11:11:57 AM:

> From: John McDowall 
> mailto:jmcdow...@paloaltonetworks.com>>
> To: Ryan Moats/Omaha/IBM@IBMUS
> Cc: "disc...@openvswitch.org" 
> mailto:disc...@openvswitch.org>>, "OpenStack
> Development Mailing List" 
> mailto:openstack-dev@lists.openstack.org>>
> Date: 05/10/2016 11:12 AM
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> Ryan,
>
> Let me do that – I assume adding them to plugin.py is the right approach.
>
> I have cleaned up 
> https://github.com/doonhammer/networking-ovn
>  and
> did a merge so it should be a lot easier to see the changes. I am
> going to cleanup ovs/ovn next. Once I have everything cleaned up and
> make sure it is still working I will move the code over to the port-
> pair/port-chain model.
>
> Let me know if that works for you.
>
> Regards
>
> John
>
> From: Ryan Moats mailto:rmo...@us.ibm.com>>
> Date: Tuesday, May 10, 2016 at 7:38 AM
> To: John McDowall 
> mailto:jmcdow...@paloaltonetworks.com>>
> Cc: "disc...@openvswitch.org" 
> mailto:disc...@openvswitch.org>>, OpenStack
> Development Mailing List 
> mailto:openstack-dev@lists.openstack.org>>
> Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
>
> John McDowall 
> mailto:jmcdow...@paloaltonetworks.com>> wrote 
> on 05/09/2016
> 10:46:41 AM:
>
> > From: John McDowall 
> > mailto:jmcdow...@paloaltonetworks.com>>
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: "disc...@openvswitch.org" 
> > mailto:disc...@openvswitch.org>>, "OpenStack
> > Development Mailing List" 
> > mailto:openstack-dev@lists.openstack.org>>
> > Date: 05/09/2016 10:46 AM
> > Subject: Re: [OVN] [networking-ovn] [networking-sfc] SFC and OVN
> >
> > Ryan,
> >
> > Thanks – let me try and get the code cleaned up and rebased. One
> > area that I could use your insight on is the interface to
> > networking-ovn and how it should look.
> >
> > Regards
> >
> > John
>
> Looking at this, the initial code that I think should move over are
> _create_ovn_vnf and _delete_ovn_vnf and maybe rename them to
> create_vnf and delete_vnf.
>
> What I haven't figured out at this point is:
> 1) Is the above enough?
> 2) Do we need to refactor some of OVNPlugin's calls to provide hooks
> for the SFC
>driver to use for when the OVNPlugin inheritance goes away.
>
> Ryan

I like the idea, but unfortunately, I'm still not able to apply the series
cleanly to openstack:master:
- the list of supported extensions has moved from plugin.py to
  common/extensions.py
- the patches that create extensions/__init__.py and
  networking_ovn/db/migration/__init__.py complain of garbage
- the patch that adds Logging on fixed ips doesn't align with the new
  code structure for handling same
- the patch that removes experimental code also has a whole lot of
  changes that were part of the master devstack/plugin.sh and so
  confuses patch.  Worse, almost 90% of the changes to plugin.py fail
  because of the same commingling.

After walking through all the patches, I'm still seeing what looks like
some cruft: I'm not sure what networking_ovn/db/migration
and networking_ovn/extensions are doing anymore, as they only have
__init__.py files in them.

Even with that, I *think* the following patch below captures all the changes against
tip of the tree master...

Ryan

---
 networking_ovn/common/extensions.py  |    1 +
 networking_ovn/ovsdb/commands.py     |   78 +++
 networking_ovn/ovsdb/impl_idl_ovn.py |   18 +++
 networking_ovn/ovsdb/ovn_api.py      |   49 +++
 networking_ovn/plugin.py             |   71 
 5 files changed, 217 insertions(+), 0 deletions(-)
 create mode 100644 networking_ovn/db/migration/__init__.py
 create mode 100644 netwo

[openstack-dev] [TripleO] Undercloud Configuration Wizard

2016-05-11 Thread Ben Nemec
Hi all,

Just wanted to let everyone know that I've ported the undercloud
configuration wizard to be a web app so it can be used by people without
PyQt on their desktop.  I've written a blog post about it here:
http://blog.nemebean.com/content/undercloud-configuration-wizard and the
tool itself is here: http://ucw-bnemec.rhcloud.com/

It might be good to link it from tripleo.org too, or maybe even move it
to be hosted there entirely.  The latter would require some work as it's
not really designed to play nicely with an existing web server (hey, I'm
a webapp noob, cut me some slack :-), but it could be done.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Need a volunteer for documentation liaisons

2016-05-11 Thread Hongbin Lu
Spyros,

Thanks for taking this role. I have added your name to the Cross Project 
Liaison wiki [1]. For Anthony and other candidates, this role requires a basic 
understanding of Magnum and OpenStack documentation. I assigned this role to 
Spyros since his past work already showed his qualification. However, I 
encourage you to talk to Spyros if you want to share parts of the 
responsibility.

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation

Best regards,
Hongbin

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-11-16 11:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] Need a volunteer for documentation 
liaisons

Hi,

since I work on the install docs (btw, I'll push in our repo in the next hour), 
I could do it.

Spyros

On 11 May 2016 at 00:35, Anthony Chow 
mailto:vcloudernb...@gmail.com>> wrote:
HongBin,

What is the skill requirement or credential for this documentation liaison 
role?  I am interested in doing this.
Anthony.

On Tue, May 10, 2016 at 3:24 PM, Hongbin Lu 
mailto:hongbin...@huawei.com>> wrote:
Hi team,

We need a volunteer as liaison for the documentation team. Just let me know if you
are interested in this role.

Best regards,
Hongbin

> -Original Message-
> From: Lana Brindley 
> [mailto:openst...@lanabrindley.com]
> Sent: May-10-16 5:47 PM
> To: OpenStack Development Mailing List; enstack.org
> Subject: [openstack-dev] [PTL][docs]Update your cross-project liaison!
>
> Hi everyone,
>
> OpenStack uses cross-project liaisons to ensure that projects are
> talking to each other effectively, and the docs CPLs are especially important
> to the documentation team to ensure we have accurate docs. Can all PTLs
> please take a moment to check (and update if necessary) their CPL
> listed here:
> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
>
> Thanks a bunch!
>
> Lana
>
> --
> Lana Brindley
> Technical Writer
> Rackspace Cloud Builders Australia
> http://lanabrindley.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][newton] Austin summit nova/newton cross-project session recap

2016-05-11 Thread Carl Baldwin
On Sun, May 1, 2016 at 7:01 PM, Matt Riedemann
 wrote:
> On Wednesday morning the Nova and Neutron teams got together for a design
> summit session. The full etherpad is here [1].
>
> We talked through three major items.
>
> 1. Neutron routed networks.
>
> Carl Baldwin gave a quick recap that we're on track with the Nova spec [2]
> and had pushed a new revision which addressed Dan Smith's latest comments.
> The spec is highly dependent on Jay Pipes' generic-resource-pools spec which
> needs to be rebased, and then hopefully we can approve that this week and
> the routed networks one shortly thereafter.
>
> We spent some time with Dan Smith sketching out his idea for moving the
> neutron network allocation code from the nova compute node to conductor.
> This would help with a few things:
>
> a) Doing the allocation earlier in the process so it's less expensive if we
> fail on the compute and get into a retry loop.
>
> b) It should clean up a bunch of the allocation code that's in the network
> API today, so we can separate the allocation logic from the check/update
> logic. This would mean that by the time we get to the compute the ports are
> already allocated and we just have to check back with Neutron that they are
> still correct and update their details. And that would also mean by the time
> we get to the compute it looks the same whether the user provided the port
> at boot time or Nova allocated it.
>
> c) Nova can update its allocation tables before scheduling to make a more
> informed decision about where to place the instance based on what Neutron
> has already told us is available.
>
> John Garbutt is planning on working on doing this cleanup/refactor to move
> parts of the network allocation code from the compute to conductor. We'll
> most likely need a spec for this work.

Matt,

This is a good summary from my point of view.  I just want to say it
was a pleasure to be there.  I thought it was very productive for both
teams.

Carl

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Jeremy Stanley
On 2016-05-11 10:09:31 -0400 (-0400), Jim Rollenhagen wrote:
[...]
> Well, if we're talking about python, it all comes from PyPI.
[...]

That's not entirely true. Some projects listed on PyPI are simply
index links to packages hosted elsewhere on the Web and that used to
be a _lot_ more common than it is today. In the past year or two,
pip started warning and then by default refusing to retrieve
packages not hosted directly on PyPI, which has driven a lot of the
remaining stragglers to start uploading their packages directly to
it. Basically after many years, the Python community recognized that
having dependencies scattered hither and yon was a terrible idea
both from a security perspective and from a stability/robustness
perspective. In time I expect other packaging ecosystems still
suffering from that paradigm will come to similar conclusions as
their communities mature and their deployed base broadens further.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [OpenstackClient] deprecations

2016-05-11 Thread Jim Rollenhagen
On Wed, May 11, 2016 at 02:35:12PM +0000, Loo, Ruby wrote:
> Hi ironic’ers,
> 
> I thought we had decided that we would follow the standard deprecation
> process [1], but I see that ironic isn’t tagged with that [2].
> We do have documented guidelines wrt deprecations [3], but I am
> not sure we’ve been good about sending out email about deprecations.
> Does anyone know/remember what we decided?

So, we do follow the process (fairly well, IMO). However, the piece
we're waiting on to assert this tag is this:

In addition, projects assert that:

It uses an automated test to verify that configuration files are
forward-compatible from release to release and that this policy is
not accidentally broken (for example, a gating grenade test).

Given we don't have gating upgrade tests yet, we cannot yet assert this
tag.
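
For reference, the config-file side of that requirement mostly falls out of
standard oslo.config usage; a hedged sketch with made-up option names (not
actual ironic options):

# Illustrative only: oslo.config keeps an old option name working after a
# rename, which is what lets config files stay forward-compatible for a
# cycle while deployers migrate.
from oslo_config import cfg

opts = [
    cfg.StrOpt('deploy_kernel',
               deprecated_name='pxe_deploy_kernel',  # old name still honored
               help='Kernel image used during deployment.'),
]

CONF = cfg.CONF
CONF.register_opts(opts, group='pxe')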

> And the whole reason I was looking into this was because we have some
> openstackclient commands that we want to deprecate [4], and I wanted
> to know what the process was for that. How long should/must we keep
> those deprecated commands. Is this considered part of the ironic
> client, or part of openstackclient which might have its own
> deprecation policy.  (So maybe this part should be in a different
> email thread but anyway.)

That's a great question, and I'm not sure. Maybe OSC folks can chime in
with their thoughts. I've added their tag in the subject.

In general, I think we should just follow the standard policy for this.
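
For what it's worth, whichever policy ends up governing the timeline, the
mechanics usually look something like this hypothetical cliff-based plugin
command (not the actual ironicclient code):

# Hypothetical sketch: keep the old command class around for the deprecation
# period, emit a warning, and point users at (or delegate to) the replacement.
import logging

from cliff import command

LOG = logging.getLogger(__name__)

class OldBaremetalFoo(command.Command):
    """DEPRECATED: use the replacement 'openstack baremetal ...' command."""

    def take_action(self, parsed_args):
        LOG.warning('This command is deprecated and will be removed; '
                    'use the replacement command instead.')
        # ... delegate to, or duplicate, the new command's behaviour here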

// jim

> 
> —ruby
> 
> [1] 
> https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html
> [2] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml#n1913
> [3] https://wiki.openstack.org/wiki/Ironic/Developer_guidelines#Deprecations
> [4] https://review.openstack.org/#/c/284160
> 
> 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Wiki

2016-05-11 Thread Matt Kassawara
From the perspective of contributing documentation and providing support
(mostly in #openstack) to a variety of consumers, the wiki tends to provide
yet another location for varying levels of content without a specific
audience and of questionable relevance due to lack of maintenance. I find a
surprisingly large number of people referencing old content (OpenStack
sites and external sites such as personal blogs) because they don't
understand our release names or cycle length and the content doesn't always
make it clear. For documentation on OpenStack sites in particular, Google
doesn't help by ranking old content fairly high. I suspect that confusion
around preferred location of content and chronic frustration around
contributing to the central documentation (let's fix it! [1]) caused
sporadic growth of content on the wiki. Replacing the wiki with a
combination of more formal documentation and blog posts makes sense
providing the blog posts contain obvious publication dates (or relevant
release cycle) and, if necessary, some sort of deprecation notice to guide
consumers toward newer, more relevant content. Also, if content in a blog
post becomes more formal documentation, reference it.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094390.html

On Wed, May 11, 2016 at 8:44 AM, Thierry Carrez 
wrote:

> Thierry Carrez wrote:
>
>> [...]
>> I'll soon start a thread on that. Since that goes a lot beyond the dev
>> community, I'll post it to the openstack general list and post a pointer
>> to it here.
>>
>
> See
> http://lists.openstack.org/pipermail/openstack/2016-May/016154.html
>
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >