Re: [openstack-dev] [horizon][i18n] Any Horizon plugins ready for translation in Mitaka?

2016-02-22 Thread Akihiro Motoki
Hi Daisy,

AFAIK the following horizon plugins are ready for translation.
I tested and confirmed translations of these two work well with Japanese.
Minor improvements to devstack and other stuff are in progress, but they
do not affect translation.

* trove-dashboard
* sahara-dashboard

The following horizon plugins SEEM to support translations.
I have never tried them.

* designate-dashboard
* magnum-ui
* monasca-ui
* murano-dashboard
* senlin-dashboard

Thanks,
Akihiro

2016-02-23 15:52 GMT+09:00 Ying Chun Guo :
> Hi,
>
> Mitaka translation will start on March 4 and end in the week of
> March 28.
> I'd like to know which Horizon plugins[1] are ready for translation in
> the Mitaka release.
> If there are any, I'm happy to include them in the Mitaka translation plan.
>
> Thank you.
>
> Best regards
> Ying Chun Guo (Daisy)
>
> [1] http://docs.openstack.org/developer/horizon/plugin_registry.html



Re: [openstack-dev] [horizon][i18n] Any Horizon plugins ready for translation in Mitaka?

2016-02-22 Thread Andreas Jaeger
On 2016-02-23 07:52, Ying Chun Guo wrote:
> Hi,
> 
> Mitaka translation will start on March 4 and end in the week of
> March 28.
> I'd like to know which Horizon plugins[1] are ready for translation in
> the Mitaka release.
> If there are any, I'm happy to include them in the Mitaka translation plan.
> 
> Thank you.
> 
> Best regards
> Ying Chun Guo (Daisy)
> 
> [1] http://docs.openstack.org/developer/horizon/plugin_registry.html

Keep in mind that only some have translations set up - I think you can
take those that are set up in Zanata.

The exception is zaqar-ui, which is not set up correctly for translations
and, from the latest conversations I've had, is not ready,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126




[openstack-dev] [tricircle]weekly meeting of Feb.24

2016-02-22 Thread joehuang
IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting every
Wednesday starting at 13:00 UTC.

We did not talk a lot about our agenda last meeting; let's continue the discussion.

Agenda:
# Progress of To-do list review: https://etherpad.openstack.org/p/TricircleToDo
# SEG
# Quota management
# exception logging, flavor mapping
# Pod scheduling
# L2 networking across pods

Best Regards
Chaoyi Huang ( Joe Huang )

From: joehuang
Sent: Tuesday, February 16, 2016 9:32 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: [openstack-dev][tricircle] weekly meeting of Feb.17th

Hi,

After the Chinese New Year festival, let's resume the weekly meeting; the
agenda is as follows.

Agenda:
# Progress of To-do list review: https://etherpad.openstack.org/p/TricircleToDo
# SEG
# Quota management
# exception logging, flavor mapping
# Pod scheduling
# L2 networking across pods

Best Regards
Chaoyi Huang ( Joe Huang )


Re: [openstack-dev] [openstack-infra] Getting Started Guide

2016-02-22 Thread Andreas Jaeger
On 2016-02-23 04:45, Kenny Johnston wrote:
>   * The Product Work Group (PWG) uses the openstack-user-stories
> repository and gerrit to review and produce .rst formatted user stories
>   * The PWG is comprised (mostly) of non-developers
>   * We've found the Getting Started guide a bit inadequate for pointing
> new PWG contributors to in order to get them up and running with our
> process, and investigated creating a separate guide of our own to
> cover getting set up on Windows machines and common issues with
> corporate firewalls
>   * I heard at the Ops Summit that the getting started guide should be
> THE place we point new contributors to learn how to get set up for
> contributing
> 
> Would it be palatable to submit patches updating the current guide to
> cover non-developer getting started instructions?

Yes, please!

Let's try to get this into the Infra Manual, I prefer to have a single
place for this.

As usual: It's best to discuss a concrete patch - submit one and then
let's iterate on that one,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126




Re: [openstack-dev] [neutron] Backup port info to restore the flow rules

2016-02-22 Thread Jian Wen
On Mon, Feb 22, 2016 at 7:03 PM, Ihar Hrachyshka 
wrote:

> Agent could probably try to restore the state from its internal state. If
> that’s the missing bit you want to have, I think that could stand for a
> proper RFE.
>
Good point. Thanks.

-- 
Best,

Jian


[openstack-dev] [nova] Nova API sub-team meeting

2016-02-22 Thread Alex Xu
We have the weekly Nova API meeting tomorrow. The meeting is held Tuesday
at 1200 UTC.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks


[openstack-dev] [horizon][i18n] Any Horizon plugins ready for translation in Mitaka?

2016-02-22 Thread Ying Chun Guo

Hi,

Mitaka translation will start on March 4 and end in the week of
March 28.
I'd like to know which Horizon plugins[1] are ready for translation in
the Mitaka release.
If there are any, I'm happy to include them in the Mitaka translation plan.

Thank you.

Best regards
Ying Chun Guo (Daisy)

[1] http://docs.openstack.org/developer/horizon/plugin_registry.html


[openstack-dev] [vitrage] Vitrage meeting tomorrow

2016-02-22 Thread Afek, Ifat (Nokia - IL)
Hi,

We will have Vitrage weekly meeting tomorrow, Wednesday at 9:00 UTC, on 
#openstack-meeting-3 channel.

Agenda:

* Current status and progress
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Thanks, 
Ifat.




[openstack-dev] [rally]how can I give admin role to rally

2016-02-22 Thread Wu, Liming
Hi,

When I run a scenario about "nova evacuate **", the error message was
shown as follows. How can I give the admin role to the rally user?

2016-02-23 09:18:25.631 6212 INFO rally.task.runner [-] Task 
e2ad6390-8cde-4ed7-a595-f5c36d5e2a08 | ITER: 0 END: Error Forbidden: User does 
not have admin privileges (HTTP 403) (Request-ID: 
req-45312185-56e5-46c4-a39a-68f5e346715e)
2016-02-23 09:18:25.636 5995 INFO 
rally.plugins.openstack.context.cleanup.context [-] Task 
e2ad6390-8cde-4ed7-a595-f5c36d5e2a08 | Starting:  user resources cleanup

Best regards  
wuliming
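For reference (not an answer from the original thread), Rally reads admin
credentials from its deployment configuration rather than from the task file;
a minimal sketch of an "ExistingCloud" deployment entry, with all values as
placeholders:

    {
        "type": "ExistingCloud",
        "auth_url": "http://192.168.1.1:5000/v2.0/",
        "region_name": "RegionOne",
        "admin": {
            "username": "admin",
            "password": "secret",
            "tenant_name": "admin"
        }
    }

Registering it with "rally deployment create --file existing.json --name
admin-cloud" should give admin-only scenarios, such as the evacuate one above,
the privileges they need.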





Re: [openstack-dev] [openstack-infra] Getting Started Guide

2016-02-22 Thread Anne Gentle
On Mon, Feb 22, 2016 at 9:45 PM, Kenny Johnston 
wrote:

>
>- The Product Work Group (PWG) uses the openstack-user-stories
>repository and gerrit to review and produce .rst formatted user stories
>- The PWG is comprised (mostly) of non-developers
>- We've found the Getting Started guide a bit inadequate for pointing
>new PWG contributors to in order to get them up and running with our
>process, and investigated creating a separate guide of our own to cover
>getting set up on Windows machines and common issues with corporate
>firewalls
>- I heard at the Ops Summit that the getting started guide should be
>THE place we point new contributors to learn how to get set up for
>contributing
>
> Would it be palatable to submit patches updating the current guide to
> cover non-developer getting started instructions?
>

Hi Kenny -

In docs we started our own contributor guide and then point to the infra
guide as needed.

http://docs.openstack.org/contributor-guide/index.html

What do you think about something like that? Lets you have your own review
team and lets the infra guide remain the infra guide.

Anne


>
> Thanks!
>
> --
> Kenny Johnston
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com


Re: [openstack-dev] [OpenStack-Infra] [cinder] How to configure the third party CI to be triggered only when jenkins +1

2016-02-22 Thread Sean McGinnis
On Tue, Feb 23, 2016 at 02:07:54AM +, liuxinguo wrote:
> Thanks for your input, John Griffith; it looks like the code you modified
> is not in layout.yaml.
> Could you tell me the exact filename where you made the change?
> 
> Thanks very much!
> Wilson Liu

That is with sos-ci, basically a custom scripted CI system. I'm not sure
what the equivalent change would be in the "official" CI, but I do know
there are at least a couple of folks who have configured their CI to do
the same.
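For reference, a sketch of what this is believed to look like in a Zuul 2.x
layout.yaml, using the require-approval trigger filter (from memory; the exact
syntax should be verified against the infra third-party CI documentation):

    pipelines:
      - name: check
        manager: IndependentPipelineManager
        trigger:
          gerrit:
            - event: comment-added
              require-approval:
                - username: jenkins
                  verified: [1, 2]

This makes the pipeline fire only on comment events for changes that already
carry a Verified +1/+2 from the "jenkins" account.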

> 
> From: John Griffith [mailto:john.griffi...@gmail.com]
> Sent: February 23, 2016 9:40
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: OpenStack Infra; Luozhen
> Subject: Re: [OpenStack-Infra] [openstack-dev] [cinder] How to configure the third
> party CI to be triggered only when jenkins +1
> 
> 
> 
> On Mon, Feb 22, 2016 at 6:32 PM, liuxinguo 
> > wrote:
> Hi,
> 
> There is no need to trigger third party CI if a patch does not pass Jenkins 
> Verify.
> I think there is a way to achieve this, but I’m not sure how.
> 
> So is there any reference or suggestion to configure the third party CI to be 
> triggered only when jenkins +1?
> 
> Thanks for any input!
> 
> Regards,
> Wilson Liu
> 
> In my case I inspect the comments and only trigger a run on either "run
> solidfire" or on a Jenkins +1.  The trick is to parse out the comments and
> look for the conditions that you are interested in.  The code looks something
> like this:
> 
> if (event.get('type', 'nill') == 'comment-added' and
>         'Verified+1' in event['comment'] and
>         cfg['AccountInfo']['project_name'] == event['change']['project'] and
>         event['author']['username'] == 'jenkins' and
>         event['change']['branch'] == 'master'):
>     # all conditions met -- kick off the third-party CI run
>     ...





[openstack-dev] [openstack-infra] Getting Started Guide

2016-02-22 Thread Kenny Johnston
   - The Product Work Group (PWG) uses the openstack-user-stories
   repository and gerrit to review and produce .rst formatted user stories
   - The PWG is comprised (mostly) of non-developers
   - We've found the Getting Started guide a bit inadequate for pointing
   new PWG contributors to in order to get them up and running with our
   process, and investigated creating a separate guide of our own to cover
   getting set up on Windows machines and common issues with corporate
   firewalls
   - I heard at the Ops Summit that the getting started guide should be THE
   place we point new contributors to learn how to get set up for contributing

Would it be palatable to submit patches updating the current guide to cover
non-developer getting started instructions?

Thanks!

-- 
Kenny Johnston


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread michael mccune

On 02/22/2016 11:33 AM, Jay Pipes wrote:

OpenStack:How <-- The developer planning event.

:)


very nice ;)




Re: [openstack-dev] [ironic] [stable] iPXE / UEFI support for stable liberty

2016-02-22 Thread Jim Rollenhagen

> On Feb 22, 2016, at 15:15, Chris K  wrote:
> 
> Hi Ironicers,
> 
> I wanted to draw attention to iPXE / UEFI support in our stable liberty 
> branch. 

Which doesn't exist, right? Or does it work depending on some other factors?

> There are environments that require support for UEFI; while ironic does have
> this support in master, it is not capable of this in many configurations when
> using the stable liberty release, and the docs around this feature were
> unclear.

What's unclear about the docs? Can you point at a specific thing, or is it just 
the lack of a thing that specifically says UEFI+iPXE is not supported?

> Because support for this feature was unclear when the liberty branch was cut
> it has caused some confusion to users wishing or needing to consume the
> stable branch. I have proposed patches
> https://review.openstack.org/#/c/281564 and 
> https://review.openstack.org/#/c/281536 with the goal of correcting this, 
> given that master may not be acceptable for some businesses to consume. I 
> welcome feedback on this.

I believe the first patch adds the feature, and the second patch fixes a bug 
with the feature. Correct?

As you know, stable policy is to not backport features. I don't see any reason 
this case should bypass this policy (which is why I asked so many questions 
above, it's odd to me that this is an open question at all).

It seems like a better path would be to fix the docs to avoid the confusion in
the first place, right? I'm not sure what the "backport" would look like, given
that a docs patch wouldn't make sense on master, but surely some more experienced
stable maintainers could guide us. :)

// jim 



Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread michael mccune

On 02/22/2016 11:06 AM, Dmitry Tantsur wrote:

+1 here. I got an impression that midcycles now usually happen in the
US. Indeed, it's probably much cheaper for the majority of contributors,
but would make things worse for non-US folks.


cost of travel has been a big reason we have never managed to have a 
sahara mid-cycle, as the team is evenly split across the world.


mike




Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-22 Thread Wuhongning
Hi all,



There is also a control plane performance issue when we try to catch up with
the typical AWS spec (200 subnets per router). When a router with 200 subnets
is scheduled to a new host, a 30s delay is observed before all data plane
setup is finished.






From: Vasudevan, Swaminathan (PNB Roseville) [swaminathan.vasude...@hpe.com]
Sent: Saturday, February 20, 2016 1:49 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Yuli Stremovsky; Shlomo Narkolayev; Eran Gampel
Subject: Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results 
and scenarios

Hi Gal Sagie,
Let me try to pull in the data and will provide you the information.
Thanks
Swami

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Thursday, February 18, 2016 9:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Yuli Stremovsky; Shlomo Narkolayev; Eran Gampel
Subject: Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results 
and scenarios

Hi Swami,

Thanks for the reply. Are there any detailed links that describe this that we
can look at?

(Of course, having results without the full setup details (hardware/NIC, CPU
and threads for OVS, and so on) and without the full scenario details is a bit
hard to interpret; regardless, I hope it will give us at least an estimation
of where we are at.)

Thanks
Gal.

On Thu, Feb 18, 2016 at 9:34 PM, Vasudevan, Swaminathan (PNB Roseville) 
> wrote:
Hi Gal Sagie,
Yes, there were some performance results on DVR that we shared with the
community during the Liberty summit in Vancouver.

Also, I think there was a performance analysis done by Oleg Bondarev on
DVR during the Paris summit.

We have made a lot more changes to the control plane to improve the scale and
performance of DVR during the Mitaka cycle, and will be sharing some
performance results at the upcoming summit.

Definitely we can align on our approach and have all those results captured
upstream for reference.

Please let me know if you need any other information.

Thanks
Swami

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Thursday, February 18, 2016 6:06 AM
To: OpenStack Development Mailing List (not for usage questions); Eran Gampel; 
Shlomo Narkolayev; Yuli Stremovsky
Subject: [openstack-dev] [Neutron] - DVR L3 data plane performance results and 
scenarios

Hello All,

We have started to test Dragonflow [1] data plane L3 performance, and I was
wondering if there are any results and scenarios published for the current
Neutron DVR that we can compare against and learn the scenarios to test.

We mostly want to validate and understand if our results are accurate, and also
join the community in defining base standards and scenarios to test any
solution out there.

For that we also plan to join and contribute to the openstack-performance [2]
efforts, which to me are really important.

Would love any results/information you can share; we are also interested in
control plane testing and API stress tests (either using Rally or not).

Thanks
Gal.

[1] http://docs.openstack.org/developer/dragonflow/distributed_dragonflow.html
[2] https://github.com/openstack/performance-docs




--
Best Regards ,

The G.


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-22 Thread Sean Dague
On 02/22/2016 09:03 PM, Davanum Srinivas wrote:
> Sean,
> 
> yes, please let us know which tests fail and we'll try to fix it.

There are no currently failing tests; this is attempting to use this for
new tests.

Follow on question. I think I hacked together something which gets me
far enough to expose the regression in bug #1538011 -
https://review.openstack.org/#/c/283364/

However, this only works because I only need the main db. As nova now
has a few different required dbs (we're up to 3 once we get the cell0
work integrated), it feels like we need some kind of fixture that can
provide N allocated dbs. It's also a bit odd to have to do this with a
mixin (which includes the assumption that you'll only ever need 1 db). Is
there a particular reason for that instead of being able to just use a
fixture? A fixture model would also help in being able to allocate as
many DBs as you need.
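For context, a minimal sketch of the mixin pattern being discussed (assuming
the oslo.db test_base module of that era; the test body is illustrative only):

    from oslo_db.sqlalchemy import test_base


    class TestMySQLOnly(test_base.MySQLOpportunisticTestCase):
        # The mixin provisions a throwaway database and exposes it as
        # self.engine; the test is skipped when the backend (or the
        # account named in OS_TEST_DBAPI_ADMIN_CONNECTION) is unreachable.
        def test_select_one(self):
            self.assertEqual(1, self.engine.execute("SELECT 1").scalar())

A fixture-based variant would presumably let a test compose N such databases
instead of inheriting exactly one, which is the gap described above.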

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread michael mccune

On 02/22/2016 10:14 AM, Thierry Carrez wrote:

Hi everyone,

TL;DR: Let's split the events, starting after Barcelona.


as a developer, i'm +1 for this. thanks for all the hard work putting 
this together Thierry.


mike




[openstack-dev] [all][nova][neutron][horizon][keystone][docs] 2016 MAN Ops Meetup - Top 10 Bugs/Features Feedback

2016-02-22 Thread Kenny Johnston
I had the honor of moderating a session at the MAN Ops Meetup where we
asked operators to identify the top bugs and feature requests. Whether
it was highlighting command line inconsistencies, improvements needed with
Nova quotas or the need for tenant scrubber tooling (it exists![1]), the
discussion is captured in the session's etherpad[2].

Please review for additional valuable feedback directly from operators.

Thanks!

-- 
Kenny Johnston
[1]https://github.com/openstack/osops-coda
[2]https://etherpad.openstack.org/p/MAN-ops-Top-10


Re: [openstack-dev] Do we need lock fencing?

2016-02-22 Thread Joshua Harlow

Agreed, seems pretty useful.

Unsure exactly how it would be performed in some of the tooz backends, 
but that's just a technical detail ;)


On 02/22/2016 03:41 PM, Chris Friesen wrote:

It may also be beneficial to take a page from POSIX "robust mutexes" and
introduce a way for the new lock holder to be notified that the previous
lock holder's lease expired rather than the lock being unlocked cleanly.

This alerts the new lock holder that the data protected by the lock may
be self-inconsistent since the old lock holder may have died while in
the middle of modifying things.

Chris




Re: [openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

2016-02-22 Thread Yipei Niu
Hi Joe,

I have checked. The Neutron API has not started, and no process is
listening on 9696.

Best regards,
Yipei


Re: [openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

2016-02-22 Thread joehuang
Hi, Yipei,

You can use “ps aux |grep python” to see the processes started for OpenStack, and
make sure the services started successfully. And use “netstat -tulpn | grep :9696”
to see which process is listening on this port. Maybe your Neutron API has not
started successfully.

Best Regards
Chaoyi Huang ( Joe Huang )

From: Yipei Niu [mailto:newy...@gmail.com]
Sent: Tuesday, February 23, 2016 10:04 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: joehuang; Vega Cai
Subject: Re: [tricircle] playing tricircle with devstack under two-region 
configuration

Hi Joe and Zhiyuan,

When creating a network with AZ scheduler hints specified, I execute "curl -X 
POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type: application/json" -H 
"X-Auth-Token: $token" -d '{"network": {"name": "net1", "admin_state_up": true, 
"availability_zone_hints": ["az1"]}}'". The results also indicate that

Failed to connect to 127.0.0.1 port 9696: Connection refused.

Unlike the error mentioned in the last mail, I still cannot find any process
listening on port 9696 after running "rejoin-stack.sh".

Best regards,
Yipei

On Mon, Feb 22, 2016 at 11:45 AM, Vega Cai 
> wrote:
Hi Yipei,

One reason for that error is that the API service is down. You can run 
"rejoin-stack.sh" under your DevStack folder to enter the "screen" console of 
DevStack, to check if services are running well. If you are not familiar with 
"screen", which is a window manager for Linux, you can do a brief search.

One more thing you can try, change the IP address to 127.0.0.1 and issue the 
request in the machine hosting the services to see if there is still 
"Connection refused" error.

BR
Zhiyuan

On 20 February 2016 at 20:49, Yipei Niu 
> wrote:
Hi Joe and Zhiyuan,

I encounter an error when executing the following command:

stack@nyp-VirtualBox:~/devstack$ curl -X POST 
http://192.168.56.101:1/v1.0/pods -H "Content-Type: application/json" -H 
"X-Auth-Token: 0ead350329ef4b07ab3b823a9d37b724" -d '{"pod": {"pod_name":  
"RegionOne"}}'
curl: (7) Failed to connect to 192.168.56.101 port 1: Connection refused

Before executing the command, I source the file "userrc_early", whose content 
is as follows:
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_URL=http://192.168.56.101:35357
export OS_USERNAME=admin
export OS_USER_DOMAIN_ID=default
export OS_PASSWORD=nypnyp0316
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_ID=default
export OS_REGION_NAME=RegionOne

Furthermore, the results of "openstack endpoint list" are as follows:
stack@nyp-VirtualBox:~/devstack$ openstack endpoint list
| ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                           |
| 0702ff208f914910bf5c0e1b69ee73cc | RegionOne | nova_legacy  | compute_legacy  | True    | internal  | http://192.168.56.101:8774/v2/$(tenant_id)s   |
| 07fe31211a234566a257e3388bba0393 | RegionOne | nova_legacy  | compute_legacy  | True    | admin     | http://192.168.56.101:8774/v2/$(tenant_id)s   |
| 11cea2de9407459480a30b190e005a5c | Pod1      | neutron      | network         | True    | internal  | http://192.168.56.101:20001/                  |
| 16c0d9f251d84af897dfdd8df60f76dd | Pod2      | nova_legacy  | compute_legacy  | True    | admin     | http://192.168.56.102:8774/v2/$(tenant_id)s   |
| 184870e1e5df48629e8e1c7a13c050f8 | RegionOne | cinderv2     | volumev2        | True    | public    | http://192.168.56.101:19997/v2/$(tenant_id)s  |
| 1a068f85aa12413582c4f4d256d276af | Pod2      | nova         | compute         | True    | admin     | http://192.168.56.102:8774/v2.1/$(tenant_id)s |
| 1b3799428309490bbce57043e87ac815 | RegionOne | cinder       | volume          | True    | internal  | http://192.168.56.101:8776/v1/$(tenant_id)s   |
| 221d74877fdd4c03b9b9b7d752e30473 | Pod2      | neutron      | network         | True    | internal  | http://192.168.56.102:9696/                   |
| 413de19152f04fc6b2b1f3a1e43fd8eb | Pod2      | cinderv2     | volumev2        | True    | public    | http://192.168.56.102:8776/v2/$(tenant_id)s   |
| 42e1260ab0854f3f807dcd67b19cf671 | RegionOne | keystone     | identity        | True    | admin     | http://192.168.56.101:35357/v2.0              |
| 45e4ccd5e16a423e8cb9f59742acee27 | Pod1      | neutron      | network         | True    | public    | http://192.168.56.101:20001/                  |
| 464dd469545b4eb49e53aa8dafc114bc | RegionOne | cinder       | volume          | True    | admin     | http://192.168.56.101:8776/v1/$(tenant_id)s   |
| 

Re: [openstack-dev] [OpenStack-Infra] [cinder] How to configure the third party CI to be triggered only when jenkins +1

2016-02-22 Thread liuxinguo
Thanks for your input, John Griffith; it looks like the code you modified
is not in layout.yaml.
Could you tell me the exact filename where you made the change?

Thanks very much!
Wilson Liu

From: John Griffith [mailto:john.griffi...@gmail.com]
Sent: February 23, 2016 9:40
To: OpenStack Development Mailing List (not for usage questions)
Cc: OpenStack Infra; Luozhen
Subject: Re: [OpenStack-Infra] [openstack-dev] [cinder] How to configure the third
party CI to be triggered only when jenkins +1



On Mon, Feb 22, 2016 at 6:32 PM, liuxinguo 
> wrote:
Hi,

There is no need to trigger third party CI if a patch does not pass Jenkins 
Verify.
I think there is a way to achieve this, but I’m not sure how.

So is there any reference or suggestion to configure the third party CI to be 
triggered only when jenkins +1?

Thanks for any input!

Regards,
Wilson Liu



In my case I inspect the comments and only trigger a run on either "run
solidfire" or on a Jenkins +1.  The trick is to parse out the comments and look
for the conditions that you are interested in.  The code looks something like
this:

if (event.get('type', 'nill') == 'comment-added' and
        'Verified+1' in event['comment'] and
        cfg['AccountInfo']['project_name'] == event['change']['project'] and
        event['author']['username'] == 'jenkins' and
        event['change']['branch'] == 'master'):
    # all conditions met -- kick off the third-party CI run
    ...



Re: [openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

2016-02-22 Thread Yipei Niu
Hi Joe and Zhiyuan,

When creating a network with AZ scheduler hints specified, I execute "curl -X
POST http://127.0.0.1:9696/v2.0/networks -H "Content-Type:
application/json" -H "X-Auth-Token: $token" -d '{"network": {"name":
"net1", "admin_state_up": true, "availability_zone_hints": ["az1"]}}'". The
results also indicate that

Failed to connect to 127.0.0.1 port 9696: Connection refused.

Unlike the error mentioned in the last mail, I still cannot find any process
listening on port 9696 after running "rejoin-stack.sh".

Best regards,
Yipei

On Mon, Feb 22, 2016 at 11:45 AM, Vega Cai  wrote:

> Hi Yipei,
>
> One reason for that error is that the API service is down. You can run
> "rejoin-stack.sh" under your DevStack folder to enter the "screen" console
> of DevStack, to check if services are running well. If you are not familiar
> with "screen", which is a window manager for Linux, you can do a brief
> search.
>
> One more thing you can try, change the IP address to 127.0.0.1 and issue
> the request in the machine hosting the services to see if there is still
> "Connection refused" error.
>
> BR
> Zhiyuan
>
> On 20 February 2016 at 20:49, Yipei Niu  wrote:
>
>> Hi Joe and Zhiyuan,
>>
>> I encounter an error when executing the following command:
>>
>> stack@nyp-VirtualBox:~/devstack$ curl -X POST
>> http://192.168.56.101:1/v1.0/pods -H "Content-Type:
>> application/json" -H "X-Auth-Token: 0ead350329ef4b07ab3b823a9d37b724" -d
>> '{"pod": {"pod_name":  "RegionOne"}}'
>> curl: (7) Failed to connect to 192.168.56.101 port 1: Connection
>> refused
>>
>> Before executing the command, I source the file "userrc_early", whose
>> content is as follows:
>> export OS_IDENTITY_API_VERSION=3
>> export OS_AUTH_URL=http://192.168.56.101:35357
>> export OS_USERNAME=admin
>> export OS_USER_DOMAIN_ID=default
>> export OS_PASSWORD=nypnyp0316
>> export OS_PROJECT_NAME=admin
>> export OS_PROJECT_DOMAIN_ID=default
>> export OS_REGION_NAME=RegionOne
>>
>> Furthermore, the results of "openstack endpoint list" are as follows:
>> stack@nyp-VirtualBox:~/devstack$ openstack endpoint list
>> | ID                               | Region    | Service Name | Service Type    | Enabled | Interface | URL                                           |
>> | 0702ff208f914910bf5c0e1b69ee73cc | RegionOne | nova_legacy  | compute_legacy  | True    | internal  | http://192.168.56.101:8774/v2/$(tenant_id)s   |
>> | 07fe31211a234566a257e3388bba0393 | RegionOne | nova_legacy  | compute_legacy  | True    | admin     | http://192.168.56.101:8774/v2/$(tenant_id)s   |
>> | 11cea2de9407459480a30b190e005a5c | Pod1      | neutron      | network         | True    | internal  | http://192.168.56.101:20001/                  |
>> | 16c0d9f251d84af897dfdd8df60f76dd | Pod2      | nova_legacy  | compute_legacy  | True    | admin     | http://192.168.56.102:8774/v2/$(tenant_id)s   |
>> | 184870e1e5df48629e8e1c7a13c050f8 | RegionOne | cinderv2     | volumev2        | True    | public    | http://192.168.56.101:19997/v2/$(tenant_id)s  |
>> | 1a068f85aa12413582c4f4d256d276af | Pod2      | nova         | compute         | True    | admin     | http://192.168.56.102:8774/v2.1/$(tenant_id)s |
>> | 1b3799428309490bbce57043e87ac815 | RegionOne | cinder       | volume          | True    | internal  | http://192.168.56.101:8776/v1/$(tenant_id)s   |
>> | 221d74877fdd4c03b9b9b7d752e30473 | Pod2      | neutron      | network         | True    | internal  | http://192.168.56.102:9696/                   |
>> | 413de19152f04fc6b2b1f3a1e43fd8eb | Pod2      | cinderv2     | volumev2        | True    | public    | http://192.168.56.102:8776/v2/$(tenant_id)s   |
>> | 42e1260ab0854f3f807dcd67b19cf671 | RegionOne | keystone     | identity        | True    | admin     | http://192.168.56.101:35357/v2.0              |
>> | 45e4ccd5e16a423e8cb9f59742acee27 | Pod1      | neutron      | network         | True    | public    | http://192.168.56.101:20001/                  |
>> | 464dd469545b4eb49e53aa8dafc114bc | RegionOne | cinder       | volume          | True    | admin     | http://192.168.56.101:8776/v1/$(tenant_id)s   |
>> | 47351cda93a54a2a9379b83c0eb445ca | Pod2      | neutron      | network         | True    | admin     | http://192.168.56.102:9696/                   |
>> | 56d6f7641ee84ee58611621c4657e45d | Pod2      | nova_legacy  | compute_legacy  | True    | internal  | http://192.168.56.102:8774/v2/$(tenant_id)s   |
>> | 57887a9d15164d6cb5b58d9342316cf7 | RegionOne | glance       | image           | True    | internal  | http://192.168.56.101:9292                    |
>> | 5f2a4f69682941edbe54a85c45a5fe1b | Pod1      | cinderv2     | volumev2        | True    | public

Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-22 Thread Davanum Srinivas
Sean,

yes, please let us know which tests fail and we'll try to fix it.

Thanks,
Dims

On Mon, Feb 22, 2016 at 8:18 PM, Sean Dague  wrote:
> On 02/22/2016 08:08 PM, Davanum Srinivas wrote:
>> Sean,
>>
>> You need to set the env variable like so. See testenv:mysql-python for 
>> example
>> OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost
>>
>> Thanks,
>> Dims
>>
>> [1] 
>> http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION=nope==
>
> If I am reading this correctly, this needs full access to the whole
> mysql administratively?
>
> Is that something that could be addressed? In many of my environments
> the mysql db does other things as well, so giving full admin to
> arbitrary test code is a bit concerning. Tempest ran into a similar
> issue and addressed this by allowing for preallocation of accounts. That
> kind of approach seems like it would work here given that you could do
> grants on well known names.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [OpenStack-Infra][cinder] How to configure the third party CI to be triggered only when jenkins +1

2016-02-22 Thread John Griffith
On Mon, Feb 22, 2016 at 6:32 PM, liuxinguo  wrote:

> Hi,
>
>
>
> There is no need to trigger third party CI if a patch does not pass
> Jenkins Verify.
>
> I think there is a way to achieve this, but I’m not sure how.
>
>
>
> So is there any reference or suggestion to configure the third party CI to
> be triggered only when jenkins +1?
>
>
>
> Thanks for any input!
>
>
>
> Regards,
>
> Wilson Liu
>
In my case I inspect the comments and only trigger a run on either "run
solidfire" or on a Jenkins +1.  The trick is to parse out the comments and
look for the conditions that you are interested in.  The code looks
something like this:

if (event.get('type', 'nill') == 'comment-added' and
        'Verified+1' in event['comment'] and
        cfg['AccountInfo']['project_name'] == event['change']['project'] and
        event['author']['username'] == 'jenkins' and
        event['change']['branch'] == 'master'):
    # all conditions met -- kick off the third-party CI run
    ...


[openstack-dev] [OpenStack-Infra][cinder] How to configure the third party CI to be triggered only when jenkins +1

2016-02-22 Thread liuxinguo
Hi,

There is no need to trigger third party CI if a patch does not pass Jenkins 
Verify.
I think there is a way to achieve this, but I'm not sure how.

So is there any reference or suggestion to configure the third party CI to be 
triggered only when jenkins +1?

Thanks for any input!

Regards,
Wilson Liu




Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-22 Thread joehuang
Hi, Ian, Jay and Brian,

Glad to know that the use case could be defined as "Image cloning", a feature
discussed before. As Ian said, "Image cloning" seems to be required not only by
OPNFV, but also by other cloud operators.

So would it be possible to discuss "Image cloning" at the Austin design summit,
and start it in parallel with "Image import" in Newton, or at least resume the
BP/spec/code framework discussion?

Best Regards
Chaoyi Huang ( Joe Huang )

-Original Message-
From: Ian Cordasco [mailto:sigmaviru...@gmail.com] 
Sent: Tuesday, February 23, 2016 3:55 AM
To: Jay Pipes; Brian Rosmaita; OpenStack Development Mailing List (not for 
usage questions)
Subject: Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

-Original Message-
From: Brian Rosmaita 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 22, 2016 at 08:14:38
To: OpenStack Development Mailing List (not for usage questions) 
, Jay Pipes 
Subject:  Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

> Hello everyone,
>  
> Joe, I think you are proposing a perfectly legitimate use case, but 
> it's not what the Glance community is calling "image import", and 
> that's leading to some confusion.
>  
> The Glance community has defined "image import" as: "A cloud end-user 
> has a bunch of bits that they want to give to Glance in the 
> expectation that (in the absence of error conditions) Glance will 
> produce an Image (record,
> file) tuple that can subsequently be used by other OpenStack services 
> that consume Images." [0]
>  
> The server-side image import workflow allows operators to validate the 
> bits an end-user has uploaded, with the extent of the validation 
> performed determined by the operator. For example, a public cloud may 
> wish to make sure the bits are in the correct format for that cloud so that 
> "bad"
> images can be caught at import time, rather than at boot time, to 
> ensure a better user experience.

Correct. Nothing in what we're talking about right now will be of much use to 
Joe.

> The use case you're talking about takes images that are already "in" a 
> cloud, for example, a snapshot of a server that's been configured 
> exactly the way you want it, and moving them to a different cloud. In 
> the past, the Glance community has referred to this use case as "image 
> cloning" (or region-to-region image transfer). There are some old 
> design docs up on the wiki discussing this (I think [1] gives a good 
> outline and it's got links to some other docs). Those docs are from 
> 2013, though, so they can't be resurrected as-is since Glance has 
> changed a bit in the meantime, but you can look them over and at least 
> see if I'm correct that image cloning captures what you want.
>  
> As I said, the idea has been floated several times, but never got 
> enough traction to be implemented. Maybe its time has come!

Right, we've floated the idea several times about image cloning, and there is a 
need for it (according to operators I've spoken to) but we've had higher 
priorities in past cycles that have prevented us from getting around to working 
on image cloning. I suspect Newton will be much the same as we continue to work 
on image import (which I expect will be our main focus for Newton).

Maybe after we have image import nailed down and implemented and Nova using v2, 
we can then focus on image cloning and Glare better.

--
Ian Cordasco




Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-22 Thread Sean Dague
On 02/22/2016 08:08 PM, Davanum Srinivas wrote:
> Sean,
> 
> You need to set the env variable like so. See testenv:mysql-python for example
> OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost
> 
> Thanks,
> Dims
> 
> [1] 
> http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION=nope==

If I am reading this correctly, this needs full access to the whole
mysql administratively?

Is that something that could be addressed? In many of my environments
the mysql db does other things as well, so giving full admin to
arbitrary test code is a bit concerning. Tempest ran into a similar
issue and addressed this by allowing for preallocation of accounts. That
kind of approach seems like it would work here given that you could do
grants on well known names.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-22 Thread Davanum Srinivas
Sean,

You need to set the env variable like so. See testenv:mysql-python for example
OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost

Thanks,
Dims

[1] 
http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION=nope==


On Mon, Feb 22, 2016 at 8:02 PM, Sean Dague  wrote:
> Before migrating into oslo.db the opportunistic testing for database
> backends was pretty simple. Create an openstack_citest@openstack_citest
> pw:openstack_citest and you could get tests running on mysql. This no
> longer seems to be the case.
>
> I went digging through the source code a bit and it's not entirely
> evident what the new required setup is. Can someone point me to the docs
> to use this? Or explain what the setup for local testing is now? We've
> got some bugs which expose on mysql and not sqlite in nova that we'd
> like to get some test cases written for.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net



-- 
Davanum Srinivas :: https://twitter.com/dims



[openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-22 Thread Sean Dague
Before migrating into oslo.db the opportunistic testing for database
backends was pretty simple. Create an openstack_citest@openstack_citest
pw:openstack_citest and you could get tests running on mysql. This no
longer seems to be the case.

I went digging through the source code a bit and it's not entirely
evident what the new required setup is. Can someone point me to the docs
to use this? Or explain what the setup for local testing is now? We've
got some bugs which expose on mysql and not sqlite in nova that we'd
like to get some test cases written for.

-Sean

-- 
Sean Dague
http://dague.net





Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Matt Fischer
On Mon, Feb 22, 2016 at 11:51 AM, Tim Bell  wrote:

>
>
>
>
>
> On 22/02/16 17:27, "John Garbutt"  wrote:
>
> >On 22 February 2016 at 15:31, Monty Taylor  wrote:
> >> On 02/22/2016 07:24 AM, Russell Bryant wrote:
> >>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez <
> thie...@openstack.org
>  > wrote:
>  Hi everyone,
>  TL;DR: Let's split the events, starting after Barcelona.
> >>> This proposal sounds fantastic.  Thank you very much to those that help
> >>> put it together.
> >> Totally agree. I think it's an excellent way to address the concerns and
> >> balance all of the diverse needs we have.
> >
> >tl;dr
> >+1
> >Awesome work ttx.
> >Thank you!
> >
> >Cheaper cities & venues should make it easier for more contributors to
> >attend. That's a big deal. This also feels like enough notice to plan
> >for that.
> >
> >I think this means summit talk proposal deadline is both after the
> >previous release, and after the contributor event for the next
> >release? That should help keep proposals concrete (less guess work
> >when submitting). Nice.
> >
> >Dev wise, it seems equally good timing. Initially I was worried about
> >the event distracting from RC bugs, but actually I can see this
> >helping.
> >
> >I am sure there are more questions that will pop up. Like I assume
> >this means there is no ATC free pass to the summit? And I guess a
> >small nominal fee for the contributor meetup (like the recent ops
> >meetup, to help predict numbers accurately)? I guess that helps
> >level the playing field for contributors who don't put git commits in
> >the repo (I am thinking vocal operators that don't contribute code).
> >But I probably shouldn't go into all that just yet.
>
> I would like to find a way to allow contributors cheaper access to the
> summits. Many of the devOPS contributors are patching test cases,
> configuration management recipes and documentation which should be rewarded
> in some form.
>
> Assuming that many of the ATCs are not so motivated to attend the summit,
> the cost in offering access to the event would not be significant.
>
> Charging for the Ops meetups was, to my understanding, more to confirm
> commitment to attend given limited space.
>
> Thus, I would be in favour of a preferential rate for contributors
> (whether ATC is the right criteria is a different question) for summits.
>
>
> Tim


I believe this is already the case. Unless I'm mistaken, contributing to a
big tent config management project like the openstack puppet modules or
chef counts for ATC. I'm not sure if osad is big tent, but if so it would
also count. Test cases and docs also already count.


Re: [openstack-dev] Do we need lock fencing?

2016-02-22 Thread Chris Friesen

On 02/19/2016 06:05 PM, Joshua Harlow wrote:

Hi all,

After reading over the following interesting article about redis and redlock
(IMHO it's good overview of distributed locking in general):

http://martin.kleppmann.com/2016/02/08/how-to-do-distributed-locking.html#protecting-a-resource-with-a-lock
(I personally recommend people read the whole article as well, as it's rather
interesting, as well as the response from the redis author at
http://antirez.com/news/101).

It got me wondering if with all the locking and such that is getting used in
openstack (distributed or not) that as we move to more distributed locking
mechanisms (for scale reasons, HA, active-active...) that we might need to have
a way to fence modifications of a storage entry (say belonging to a resource, ie
a volume, a network...) with a token (or sequence-id) so that the problems
mentioned in that blog do not affect openstack (apparently issues like it have
affected hbase) and the more we think about it now (vs. later) the better we
will be.

Anyone have any thoughts on this?

Perhaps tooz can, along with its lock API, also provide a token for each lock
that can be used to interact with a storage layer (and that token can be
checked by the storage layer to avoid storage layer corruption).


It may also be beneficial to take a page from POSIX "robust mutexes" and 
introduce a way for the new lock holder to be notified that the previous lock 
holder's lease expired rather than the lock being unlocked cleanly.


This alerts the new lock holder that the data protected by the lock may be 
self-inconsistent since the old lock holder may have died while in the middle of 
modifying things.
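To make the fencing-token idea from the article concrete, a toy sketch
(illustrative only; FencedStore and StaleTokenError are made-up names, not a
tooz or redis API):

    class StaleTokenError(Exception):
        """Raised when a writer presents an out-of-date fencing token."""


    class FencedStore(object):
        """Toy storage layer enforcing monotonically increasing fencing
        tokens handed out by the lock service with each lock grant."""

        def __init__(self):
            self._max_token = 0
            self._data = {}

        def write(self, key, value, token):
            # A lock holder that stalled (GC pause, network partition) and
            # lost its lease will present a token older than one already
            # seen; rejecting it keeps the stored data consistent.
            if token < self._max_token:
                raise StaleTokenError("token %d < %d" % (token, self._max_token))
            self._max_token = token
            self._data[key] = value

A robust-mutex style notification, as suggested above, would additionally tell
the next lock holder that the previous holder's lease expired uncleanly.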


Chris



Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Eoghan Glynn


> Hi everyone,
> 
> TL;DR: Let's split the events, starting after Barcelona.
> 
> Long long version:
> 
> In a global and virtual community, high-bandwidth face-to-face time is
> essential. This is why we made the OpenStack Design Summits an integral
> part of our processes from day 0. Those were set at the beginning of
> each of our development cycles to help set goals and organize the work
> for the upcoming 6 months. At the same time and in the same location, a
> more traditional conference was happening, ensuring a lot of interaction
> between the upstream (producers) and downstream (consumers) parts of our
> community.
> 
> This setup, however, has a number of issues. For developers first: the
> "conference" part of the common event got bigger and bigger and it is
> difficult to focus on upstream work (and socially bond with your
> teammates) with so much other commitments and distractions. The result
> is that our design summits are a lot less productive than they used to
> be, and we organize other events ("midcycles") to fill our focus and
> small-group socialization needs. The timing of the event (a couple of
> weeks after the previous cycle release) is also suboptimal: it is way
> too late to gather any sort of requirements and priorities for the
> already-started new cycle, and also too late to do any sort of work
> planning (the cycle work started almost 2 months ago).
> 
> But it's not just suboptimal for developers. For contributing companies,
> flying all their developers to expensive cities and conference hotels so
> that they can attend the Design Summit is pretty costly, and the goals
> of the summit location (reaching out to users everywhere) do not
> necessarily align with the goals of the Design Summit location (minimize
> and balance travel costs for existing contributors). For the companies
> that build products and distributions on top of the recent release, the
> timing of the common event is not so great either: it is difficult to
> show off products based on the recent release only two weeks after it's
> out. The summit date is also too early to leverage all the users
> attending the summit to gather feedback on the recent release -- not a
> lot of people would have tried upgrades by summit time. Finally a common
> event is also suboptimal for the events organization : finding venues
> that can accommodate both events is becoming increasingly complicated.
> 
> Time is ripe for a change. After Tokyo, we at the Foundation have been
> considering options on how to evolve our events to solve those issues.
> This proposal is the result of this work. There is no perfect solution
> here (and this is still work in progress), but we are confident that
> this strawman solution solves a lot more problems than it creates, and
> balances the needs of the various constituents of our community.
> 
> The idea would be to split the events. The first event would be for
> upstream technical contributors to OpenStack. It would be held in a
> simpler, scaled-back setting that would let all OpenStack project teams
> meet in separate rooms, but in a co-located event that would make it
> easy to have ad-hoc cross-project discussions. It would happen closer to
> the centers of mass of contributors, in less-expensive locations.
> 
> More importantly, it would be set to happen a couple of weeks /before/
> the previous cycle release. There is a lot of overlap between cycles.
> Work on a cycle starts at the previous cycle feature freeze, while there
> is still 5 weeks to go. Most people switch full-time to the next cycle
> by RC1. Organizing the event just after that time lets us organize the
> work and kickstart the new cycle at the best moment. It also allows us
> to use our time together to quickly address last-minute release-critical
> issues if such issues arise.
> 
> The second event would be the main downstream business conference, with
> high-end keynotes, marketplace and breakout sessions. It would be
> organized two or three months /after/ the release, to give time for all
> downstream users to deploy and build products on top of the release. It
> would be the best time to gather feedback on the recent release, and
> also the best time to have strategic discussions: start gathering
> requirements for the next cycle, leveraging the very large cross-section
> of all our community that attends the event.
> 
> To that effect, we'd still hold a number of strategic planning sessions
> at the main event to gather feedback, determine requirements and define
> overall cross-project themes, but the session format would not require
> all project contributors to attend. A subset of contributors who would
> like to participate in these sessions can collect and relay feedback to
> other team members for implementation (similar to the Ops midcycle).
> Other contributors will also want to get more involved in the
> conference, whether that's giving presentations or hearing user stories.
> 
> The split should 

Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-22 Thread Matt Riedemann



On 2/22/2016 2:27 PM, Sean Dague wrote:

On 02/22/2016 02:50 PM, Andrew Laski wrote:



On Mon, Feb 22, 2016, at 02:42 PM, Matt Riedemann wrote:



On 2/22/2016 5:56 AM, Sean Dague wrote:

On 02/19/2016 12:49 PM, John Garbutt wrote:



Consider a user that uses these four clouds:
* nova-network flat DHCP
* nova-network VLAN manager
* neutron with a single provider network setup
* neutron where user needs to create their own network

For the first three, the user specifies no network, and they just get
a single NIC with some semi-sensible IP address, likely with a gateway
to the internet.

For the last one, the user ends up with a network with zero NICs. If
they then go and configure a network in neutron (and they can now use
the new easy one shot give-me-a-network CLI), they start to get VMs
just like they would have with nova-network VLAN manager.

We all agree the status quo is broken. For me, this is a bug in the
API where we need to fix the consistency. Because it's a change in the
behaviour, it needs to be gated by a microversion.

Now, if we stepped back and created this again, I would agree that
--nic=auto is a good idea, so it's explicit. However, all our users are
used to automatic being the default, albeit a very patchy default.
So I think the best evolution here is to fix the inconsistency by
making a VM with no network the explicit option (--no-nic or
something?), and failing the build if we are unable to get a nic using
an "automatic guess" route. So now the default is more consistent, and
those who want a VM with no NIC have a way to get their special case
sorted.

I think this means I like "option 2" in the summary mail on the ops list.


Thinking through this over the weekend.

From the API I think I agree with Laski now. An API doesn't
typically need default behavior; it's OK to make folks be explicit. So
making nic a required parameter is fine.

"nic": "auto"
"nic": "none"
"nic": "$name"

nic is now jsonschema enforced, 400 if not provided.

that being said... I think the behavior of CLI tools should default to
nic auto being implied. The user experience there is different. You use
cli tools for one off boots of things, so should be as easy as possible.

I think this is one of the places where the UX needs of the API and the
CLI are definitely different.

-Sean
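
For illustration, a minimal sketch of what that enforcement could look
like with the jsonschema library (hypothetical names; this is not Nova's
actual schema):

import jsonschema

# Hypothetical sketch: "nic" accepts "auto", "none", or a specific
# network identifier, and the key itself is required; a missing key
# fails validation, which the API would map to a 400.
NIC_SCHEMA = {
    'type': 'object',
    'properties': {
        'nic': {
            'anyOf': [
                {'type': 'string', 'enum': ['auto', 'none']},
                {'type': 'string', 'minLength': 1},  # a specific network
            ],
        },
    },
    'required': ['nic'],
}

jsonschema.validate({'nic': 'auto'}, NIC_SCHEMA)  # passes
jsonschema.validate({}, NIC_SCHEMA)  # raises ValidationError (-> 400)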



Is nic only required when using neutron? Or as part of the microversion
are we also going to enforce this for nova-network, because if so, that
seems like a step backward. But if we don't enforce that check for both
neutron and nova-network, then we have differences in the API again.


I think it makes sense to require it in both cases and keep users
blissfully unaware of which networking service is in use.


+1

This should make the experience between both far more consistent. It
means making n-net API applications do a bit more work than now, but
it's explicit.

It also means the CLI experience should continue to be the same, because
--nic=auto is implied.

-Sean



OK, here is the spec so we can move discussion there now:

https://review.openstack.org/#/c/283206/

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][magnum] Magnum gate issue

2016-02-22 Thread Hongbin Lu
Hi Heat team,

It looks like the Magnum gate broke after this patch landed:
https://review.openstack.org/#/c/273631/ . I would appreciate it if anyone can
help troubleshoot the issue. If the issue is confirmed, I would prefer
a quick fix or a revert, since we want to unlock the gate ASAP. Thanks.

Best regards,
Hongbin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Rico Lin
+1
I believe that a separate design summit gives operators and users the
chance to focus on design sessions and provide great feedback during the
design summit. We just have to think about how we attract them there.


2016-02-23 6:07 GMT+08:00 Flavio Percoco :

>
>
> On Mon, Feb 22, 2016 at 11:31 AM, Monty Taylor 
> wrote:
>
>> On 02/22/2016 07:24 AM, Russell Bryant wrote:
>>
>>>
>>>
>>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez wrote:
>>>
>>> Hi everyone,
>>>
>>> TL;DR: Let's split the events, starting after Barcelona.
>>>
>>>
>>> This proposal sounds fantastic.  Thank you very much to those that help
>>> put it together.
>>>
>>
>> Totally agree. I think it's an excellent way to address the concerns and
>> balance all of the diverse needs we have.
>>
>> Thank you very much!
>>
>
> +1
>
> I don't have much to say as my questions have been asked and answered
> already (co-loation, regional events, etc etc). I believe the proposal is
> great and I'd love to see it happening.
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,

*Rico Lin*

迎棧科技股份有限公司
│ 886-963-612-021
│ ric...@inwinstack.com
│ 886-2-7738-6804 #7754
│ 新北市220板橋區遠東路3號5樓C室
Rm.C, 5F., No.3, Yuandong Rd.,
Banqiao Dist., New Taipei City 220, Taiwan (R.O.C)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-02-22 Thread Walter A. Boring IV

On 02/22/2016 11:24 AM, John Garbutt wrote:

Hi,

Just came up on IRC, when nova-compute gets killed half way through a
volume attach (i.e. no graceful shutdown), things get stuck in a bad
state, like volumes stuck in the attaching state.

This looks like a new addition to this conversation:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
And brings us back to this discussion:
https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova

What if we move our attention towards automatically recovering from
the above issue? I am wondering if we can look at making our usual
recovery code deal with the above situation:
https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934

Did we get the Cinder APIs in place that enable the force-detach? I
think we did and it was this one?
https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api

I think diablo_rojo might be able to help dig for any bugs we have
related to this. I just wanted to get this idea out there before I
head out.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
.


The problem is a little more complicated.

In order for Cinder backends to be able to do a force detach correctly,
the Cinder driver needs to have the correct 'connector' dictionary
passed in to terminate_connection. That connector dictionary is the
collection of initiator-side information which is gleaned here:

https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connector.py#L99-L144
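
For reference, the connector dictionary gathered there looks roughly like
the following (illustrative values; the exact keys vary by host, transport,
and driver):

# Illustrative sketch of an initiator-side connector dictionary; not an
# exhaustive or authoritative list of keys.
connector = {
    'ip': '192.0.2.10',             # initiator IP address
    'host': 'compute-01',           # hostname of the attaching node
    'initiator': 'iqn.1993-08.org.debian:01:abcdef',  # iSCSI IQN
    'wwpns': ['5001a4ace1f0e0ff'],  # FC port WWNs, when FC HBAs are present
    'multipath': False,
    'os_type': 'linux2',
    'platform': 'x86_64',
}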

The plan was to save that connector information in the Cinder 
volume_attachment table.  When a force detach is called, Cinder has the 
existing connector saved if Nova doesn't have it.  The problem was live 
migration.  When you migrate to the destination n-cpu host, the 
connector that Cinder had is now out of date.  There is no API in Cinder 
today to allow updating an existing attachment.


So, the plan at the Mitaka summit was to add this new API, but it 
required microversions to land, which we still don't have in Cinder's 
API today.



Walt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Flavio Percoco
On Mon, Feb 22, 2016 at 11:31 AM, Monty Taylor  wrote:

> On 02/22/2016 07:24 AM, Russell Bryant wrote:
>
>>
>>
>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez wrote:
>>
>> Hi everyone,
>>
>> TL;DR: Let's split the events, starting after Barcelona.
>>
>>
>> This proposal sounds fantastic.  Thank you very much to those that help
>> put it together.
>>
>
> Totally agree. I think it's an excellent way to address the concerns and
> balance all of the diverse needs we have.
>
> Thank you very much!
>

+1

I don't have much to say as my questions have been asked and answered
already (co-loation, regional events, etc etc). I believe the proposal is
great and I'd love to see it happening.

Flavio

--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Flavio Percoco
On Mon, Feb 22, 2016 at 1:52 PM, Jay Pipes  wrote:

> On 02/22/2016 12:45 PM, Thierry Carrez wrote:
>
>> I don't think the proposal removes that opportunity. Contributors /can/
>> still go to OpenStack Summits. They just don't /have to/. I just don't
>> think every contributor needs to be present at every OpenStack Summit,
>> while I'd like to see most of them present at every separated
>> contributors-oriented event[tm].
>>
>
> Yes. This. A thousand this.


Fully agreed here!

--
@flaper87
Flavio Percoco
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Move virtualbox scripts to a separate directory

2016-02-22 Thread Vladimir Kozhukalov
A new git repository, fuel-virtualbox, has been created
(https://github.com/openstack/fuel-virtualbox.git), and from now on all
review requests related to virtualbox scripts for releases 9.0 and later
should be sent to the new git repository.

Checklist status:

   - Launchpad bug: https://bugs.launchpad.net/fuel/+bug/1544271
   - project-config patch https://review.openstack.org/#/c/279074 (MERGED)
   - governance patch https://review.openstack.org/#/c/281653/ (ON REVIEW)
   - prepare upstream https://github.com/kozhukalov/fuel-virtualbox (DONE)
   - .gitreview file https://review.openstack.org/#/c/283265 (ON REVIEW)
   - .gitignore file https://review.openstack.org/#/c/283265 (ON REVIEW)
   - MAINTAINERS file https://review.openstack.org/#/c/283265 (ON REVIEW)
   - remove old files from fuel-main https://review.openstack.org/#/c/283272
   (ON REVIEW)



Vladimir Kozhukalov

On Wed, Feb 17, 2016 at 9:11 PM, Maksim Malchuk 
wrote:

> Hi Fabrizio,
>
> The project-config patch is on the review now, waiting for a
> core-reviewers to merge the changes.
>
>
> On Wed, Feb 17, 2016 at 5:47 PM, Fabrizio Soppelsa wrote:
>
>> Vladimir,
>> a dedicated repo - good to hear.
>> Do you have a rough estimate for how long this directory will be in
>> freeze state?
>>
>> Thanks,
>> Fabrizio
>>
>>
>> On Feb 15, 2016, at 5:16 PM, Vladimir Kozhukalov <
>> vkozhuka...@mirantis.com> wrote:
>>
>> Dear colleagues,
>>
>> I'd like to announce that we are about to move the fuel-main/virtualbox
>> directory to a separate git repository. This directory contains a set of
>> bash scripts that can be used to easily deploy a Fuel environment and try
>> to deploy an OpenStack cluster using Fuel. Virtualbox is used as the
>> virtualization layer.
>>
>> Checklist for this change is as follows:
>>
>>1. Launchpad bug: https://bugs.launchpad.net/fuel/+bug/1544271
>>2. project-config patch https://review.openstack.org/#/c/279074/2 (ON
>>REVIEW)
>>3. prepare upstream (DONE)
>>https://github.com/kozhukalov/fuel-virtualbox
>>4. .gitreview file (TODO)
>>5. .gitignore file (TODO)
>>6. MAINTAINERS file (TODO)
>>7. remove old files from fuel-main (TODO)
>>
>> The virtualbox directory is not actively changed, so freezing this directory
>> for a while is not going to affect the development process significantly.
>> From this moment the virtualbox directory is declared frozen and all changes
>> in this directory that are currently in progress should later be backported to
>> the new git repository (fuel-virtualbox).
>>
>> Vladimir Kozhukalov
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Maksim Malchuk,
> Senior DevOps Engineer,
> MOS: Product Engineering,
> Mirantis, Inc
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Devananda van der Veen
On Mon, Feb 22, 2016 at 11:51 AM, Walter A. Boring IV  wrote:

> On 02/22/2016 09:45 AM, Thierry Carrez wrote:
>
>> Amrith Kumar wrote:
>>
>>> [...]
>>> As a result of this proposal, there will still be four events each year,
>>> two "OpenStack Summit" events and two "MidCycle" events.
>>>
>>
>> Actually, the OpenStack summit becomes the midcycle event. The new
>> separated contributors-oriented event[tm] happens at the beginning of the
>> new cycle.
>>
>> [...]
>>> Given the number of projects, and leaving aside high bandwidth internet
>>> and remote participation, providing dedicated meeting room for the duration
>>> of the MidCycle event for each project is a considerable undertaking. I
>>> believe therefore that the consequence is that the MidCycle event will end
>>> up being of comparable scale to the current Design Summit or larger, and
>>> will likely need a similar venue.
>>>
>>
>> It still is an order of magnitude smaller than the "OpenStack Summit".
>> Think 600 people instead of 6000. The idea behind co-hosting is to
>> facilitate cross-project interactions. You know where to find people, and
>> you can easily arrange a meeting between two teams for an hour.
>>
>> [...]
>>> At the current OpenStack Summit, there is an opportunity for
>>> contributors, customers and operators to interact, not just in technical
>>> meetings, but also in a social setting. I think this is valuable, even
>>> though there seems to be a number of people who believe that this is not
>>> necessarily the case.
>>>
>>
>> I don't think the proposal removes that opportunity. Contributors /can/
>> still go to OpenStack Summits. They just don't /have to/. I just don't
>> think every contributor needs to be present at every OpenStack Summit,
>> while I'd like to see most of them present at every separated
>> contributors-oriented event[tm].
>>
>
> Yes they can, but if contributors go to the design summit, then they also
> have to get travel budget to go to the new Summit. So: design summits,
> midcycle meetups, and now the split-off marketing summit. This is making
> it overall more expensive for contributors who meet with customers.
>
>
I do not believe this proposal will increase the travel requirements on
contributors. AIUI, there would still be two Conferences per year (but
without the attached design summit) and still be two design/planning events
(with all the projects together, instead of individual midcycles) per year.
That's it.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Devananda van der Veen
On Mon, Feb 22, 2016 at 7:14 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> TL;DR: Let's split the events, starting after Barcelona.
>

Thank you for the excellent write-up, Thierry (and everyone else behind
it)! This sounds great to me.


> Long long version:
>
> In a global and virtual community, high-bandwidth face-to-face time is
> essential. This is why we made the OpenStack Design Summits an integral
> part of our processes from day 0. Those were set at the beginning of each
> of our development cycles to help set goals and organize the work for the
> upcoming 6 months. At the same time and in the same location, a more
> traditional conference was happening, ensuring a lot of interaction between
> the upstream (producers) and downstream (consumers) parts of our community.
>
> This setup, however, has a number of issues. For developers first: the
> "conference" part of the common event got bigger and bigger and it is
> difficult to focus on upstream work (and socially bond with your teammates)
> with so much other commitments and distractions. The result is that our
> design summits are a lot less productive than they used to be, and we
> organize other events ("midcycles") to fill our focus and small-group
> socialization needs. The timing of the event (a couple of weeks after the
> previous cycle release) is also suboptimal: it is way too late to gather
> any sort of requirements and priorities for the already-started new cycle,
> and also too late to do any sort of work planning (the cycle work started
> almost 2 months ago).
>
> But it's not just suboptimal for developers. For contributing companies,
> flying all their developers to expensive cities and conference hotels so
> that they can attend the Design Summit is pretty costly, and the goals of
> the summit location (reaching out to users everywhere) do not necessarily
> align with the goals of the Design Summit location (minimize and balance
> travel costs for existing contributors). For the companies that build
> products and distributions on top of the recent release, the timing of the
> common event is not so great either: it is difficult to show off products
> based on the recent release only two weeks after it's out. The summit date
> is also too early to leverage all the users attending the summit to gather
> feedback on the recent release -- not a lot of people would have tried
> upgrades by summit time. Finally a common event is also suboptimal for the
> events organization: finding venues that can accommodate both events is
> becoming increasingly complicated.
>
> Time is ripe for a change. After Tokyo, we at the Foundation have been
> considering options on how to evolve our events to solve those issues. This
> proposal is the result of this work. There is no perfect solution here (and
> this is still work in progress), but we are confident that this strawman
> solution solves a lot more problems than it creates, and balances the needs
> of the various constituents of our community.
>
> The idea would be to split the events. The first event would be for
> upstream technical contributors to OpenStack. It would be held in a
> simpler, scaled-back setting that would let all OpenStack project teams
> meet in separate rooms, but in a co-located event that would make it easy
> to have ad-hoc cross-project discussions. It would happen closer to the
> centers of mass of contributors, in less-expensive locations.
>
> More importantly, it would be set to happen a couple of weeks /before/ the
> previous cycle release. There is a lot of overlap between cycles. Work on a
> cycle starts at the previous cycle feature freeze, while there is still 5
> weeks to go. Most people switch full-time to the next cycle by RC1.
> Organizing the event just after that time lets us organize the work and
> kickstart the new cycle at the best moment. It also allows us to use our
> time together to quickly address last-minute release-critical issues if
> such issues arise.
>
> The second event would be the main downstream business conference, with
> high-end keynotes, marketplace and breakout sessions. It would be organized
> two or three months /after/ the release, to give time for all downstream
> users to deploy and build products on top of the release. It would be the
> best time to gather feedback on the recent release, and also the best time
> to have strategic discussions: start gathering requirements for the next
> cycle, leveraging the very large cross-section of all our community that
> attends the event.


> To that effect, we'd still hold a number of strategic planning sessions at
> the main event to gather feedback, determine requirements and define
> overall cross-project themes, but the session format would not require all
> project contributors to attend. A subset of contributors who would like to
> participate in this sessions can collect and relay feedback to other team
> members for implementation (similar to the Ops 

Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-22 Thread Jay Pipes

On 02/22/2016 09:12 AM, Brian Rosmaita wrote:

Hello everyone,

Joe, I think you are proposing a perfectly legitimate use case, but it's
not what the Glance community is calling "image import", and that's
leading to some confusion.

The Glance community has defined "image import" as: "A cloud end-user has
a bunch of bits that they want to give to Glance in the expectation that
(in the absence of error conditions) Glance will produce an Image (record,
file) tuple that can subsequently be used by other OpenStack services that
consume Images." [0]


And that is exactly the same thing as uploading an image to Glance, as 
I've said all along. There is nothing substantively different about the 
existing API to upload an image file compared to "importing" that same 
image file.
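
For context, the existing upload path being compared here is the two-step
create-then-upload flow, e.g. via glanceclient v2. A sketch, assuming an
authenticated keystoneauth1 session and a placeholder file name:

# Sketch of the existing v2 flow: create the image record, then stream
# the bits. 'session' is assumed to be an authenticated keystoneauth1
# session; the image name and file are placeholders.
from glanceclient import Client

glance = Client('2', session=session)
image = glance.images.create(name='my-image', disk_format='qcow2',
                             container_format='bare')
with open('my-image.qcow2', 'rb') as f:
    glance.images.upload(image.id, f)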



The server-side image import workflow allows operators to validate the
bits an end-user has uploaded, with the extent of the validation performed
determined by the operator.


Again, there's absolutely nothing about the existing upload API that 
prevents the above from occurring.


> For example, a public cloud may wish to make
> sure the bits are in the correct format for that cloud so that "bad"
> images can be caught at import time, rather than at boot time, to ensure a
> better user experience.


Again, nothing preventing the existing upload API from sending some 
images to a quarantined state where such actions can be taken against 
the uploaded bits.


Sorry to keep beating this horse. It's long since passed on to the 
afterlife.


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2016-02-22 Thread Ruby Loo
Hi,

We are glad to present this week's subteam report for Ironic. As usual,
this is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 08.02.2016):
- Ironic: 156 bugs (-1) + 175 wishlist items (+5). 14 new, 118 in progress
(+2), 0 critical, 19 high and 11 incomplete (+2)
- Inspector: 10 bugs (-2) + 15 wishlist items (-1). 0 new, 7 in progress
(-1), 0 critical, 3 high (-1) and 0 incomplete
- Nova bugs with Ironic tag: 16 (-7). 0 new (-1), 0 critical, 0 high

Network isolation (Neutron/Ironic work) (jroll)
===
- made good progress during midcycle
- first API patch is very close
- second patch needs a major refactoring
- network provider looks like a driver interface, let's make it the
first composable driver
- however, that spec doesn't define the API yet :(

Manual cleaning (rloo)
==
- most has landed! \o/
- docs still need review
- get clean steps from API patch needs a rebase

Live upgrades (lucasagomes, lintan)
===
- agreement during midcycle: halt work on the "no-db API service". We do
not believe this is necessary for live upgrades.
- need to get grenade tests working to validate

Parallel tasks with futurist (dtantsur)
===
- Ready for review: https://review.openstack.org/264720
- There are 2 Futurist changes that fix behaviour with extremely low thread
number (e.g. 3)
- merged, will hopefully be released this week

Node filter API and claims endpoint (jroll, devananda, lucasagomes)
===
- no update; deprioritized in favor of neutron work, manual cleaning
- talked about this at midcycle
- spec to be split into two distinct specs, one for filters and one for
claims
- this part isn't actually the contentious part, rather the nova side
(pushing scheduling decisions down to ironic)  is what people are concerned
with

Nova Liaisons (jlvillal & mrda)
===
- Performed a complete bug scrub during the mid-cycle of all open bugs.
Closed 7 of 23 bugs.

Testing/Quality (jlvillal/lekha/krtaylor)
=
- Grenade: jlvillal will resurrect his patches to make the tempest tests
run the baremetal tests for grenade, like we run for our normal gate jobs
- Put work on trying to make the tempest smoke tests pass for Ironic on the
back burner for now.
- Resurrected patches:
- https://review.openstack.org/#/c/241018/
- https://review.openstack.org/#/c/241044/
- Already -1ed - what are these "tempest flags" and how do we use them?

Inspector (dtansur)
===
- Released ironic-inspector 2.2.4 for liberty with 2 bug fixes
- HA discussion is ongoing: https://review.openstack.org/253675
- API for aborting introspection landed in both the inspector and client

Bifrost (TheJulia)
==
- Gate is presently broken for bifrost -
https://review.openstack.org/#/c/283108/ will correct this issue.

webclient (krotscheck / betherly)
=
- api work fixed - working to split large patch before merging
- tests underway

Drivers:

CIMC (sambetts)
---
- Hardware acquired and accounts created; need to install the test
environment now


Until next week,
--ruby
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] python-novaclient region setting

2016-02-22 Thread Xav Paice
That's just what I was after - thanks for that!

On 23 February 2016 at 03:02, Monty Taylor  wrote:

> On 02/21/2016 11:40 PM, Andrey Kurilin wrote:
>
>> Hi!
>> `novaclient.client.Client` entry-point supports almost the same
>> arguments as `novaclient.v2.client.Client`. The difference is only in
>> api_version, so you can set up region via `novaclient.client.Client` in
>> the same way as `novaclient.v2.client.Client`.
>>
>
> The easiest way to get a properly constructed nova Client is with
> os-client-config:
>
> import os_client_config
>
> OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
> OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
> OS_PASSWORD="REDACTED"
> OS_AUTH_URL="http://auth.vexxhost.net"
> OS_REGION_NAME="ca-ymq-1"
>
> client = os_client_config.make_client(
>     'compute',
>     auth_url=OS_AUTH_URL, username=OS_USERNAME,
>     password=OS_PASSWORD, project_name=OS_PROJECT_NAME,
>     region_name=OS_REGION_NAME)
>
> The upside is that the constructor interface is the same for all of the
> rest of the client libs too (just change the first argument) - and it will
> also read in OS_ env vars or named clouds from clouds.yaml if you have them
> set.
>
> (The 'simplest' way is to put your auth and region information into a
> clouds.yaml file like this:
>
>
> http://docs.openstack.org/developer/os-client-config/#site-specific-file-locations
>
> Such as:
>
> # ~/.config/openstack/clouds.yaml
> clouds:
>   vexxhost:
>     profile: vexxhost
>     auth:
>       project_name: d8af8a8f-a573-48e6-898a-af333b970a2d
>       username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1
>       password: REDACTED
>     region_name: ca-ymq-1
>
>
> And do:
>
> client = os_client_config.make_client('compute', cloud='vexxhost')
>
>
> If you don't want to do that for some reason but you'd like to construct a
> novaclient Client object by hand:
>
>
> from keystoneauth1 import loading
> from keystoneauth1 import session as ksa_session
> from novaclient import client as nova_client
>
> OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
> OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
> OS_PASSWORD="REDACTED"
> OS_AUTH_URL="http://auth.vexxhost.net"
> OS_REGION_NAME="ca-ymq-1"
>
> # Get the auth loader for the password auth plugin
> loader = loading.get_plugin_loader('password')
> # Construct the auth plugin
> auth_plugin = loader.load_from_options(
>     auth_url=OS_AUTH_URL, username=OS_USERNAME, password=OS_PASSWORD,
>     project_name=OS_PROJECT_NAME)
>
> # Construct a keystone session
> # Other arguments that are potentially useful here are:
> #  verify - bool, whether or not to verify SSL connection validity
> #  cert - SSL cert information
> #  timeout - time in seconds to use for connection level TCP timeouts
> session = ksa_session.Session(auth_plugin)
>
> # Now make the client
> # Other arguments you may be interested in:
> #  service_name - if you need to specify a service name for finding the
> # right service in the catalog
> #  service_type - if the cloud in question has given a different
> # service type (should be 'compute' for nova - but
> # novaclient sets it, so it's safe to omit in most cases)
> #  endpoint_override - if you want to tell it to use a different URL
> #  than what the keystone catalog returns
> #  endpoint_type - if you need to specify admin or internal
> #  endpoints rather than the default 'public'
> #  Note that in glance and barbican, this key is called
> #  'interface'
> client = nova_client.Client(
>     version='2.0',  # or set the specific microversion you want
>     session=session, region_name=OS_REGION_NAME)
>
> It might be clear why I prefer the os_client_config factory function
> instead - but what I prefer and what you prefer might not be the same
> thing. :)
>
> On Mon, Feb 22, 2016 at 6:11 AM, Xav Paice wrote:
>>
>> Hi,
>>
>> In http://docs.openstack.org/developer/python-novaclient/api.html
>> it's got some pretty clear instructions not to
>> use novaclient.v2.client.Client but I can't see another way to
>> specify the region - there's more than one in my installation, and
>> no param for region in novaclient.client.Client
>>
>> Shall I hunt down/write a blueprint for that?
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>>
>> __
>> OpenStack Development Mailing List (not for 

[openstack-dev] [ironic] [stable] iPXE / UEFI support for stable liberty

2016-02-22 Thread Chris K
Hi Ironicers,

I wanted to draw attention to iPXE / UEFI support in our stable liberty
branch. There are environments that require support for UEFI; while ironic
does have this support in master, it is not capable of it in many
configurations when using the stable liberty release, and the docs around
this feature were unclear. Because support for this feature was unclear
when the liberty branch was cut, it has caused some confusion for users
wishing or needing to consume the stable branch. I have proposed patches
https://review.openstack.org/#/c/281564 and
https://review.openstack.org/#/c/281536 with the goal of correcting this,
given that master may not be acceptable for some businesses to consume. I
welcome feedback on this.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Walter A. Boring IV

On 02/22/2016 09:45 AM, Thierry Carrez wrote:

Amrith Kumar wrote:

[...]
As a result of this proposal, there will still be four events each 
year, two "OpenStack Summit" events and two "MidCycle" events.


Actually, the OpenStack summit becomes the midcycle event. The new 
separated contributors-oriented event[tm] happens at the beginning of 
the new cycle.



[...]
Given the number of projects, and leaving aside high bandwidth 
internet and remote participation, providing dedicated meeting room 
for the duration of the MidCycle event for each project is a 
considerable undertaking. I believe therefore that the consequence is 
that the MidCycle event will end up being of comparable scale to the 
current Design Summit or larger, and will likely need a similar venue.


It still is an order of magnitude smaller than the "OpenStack Summit". 
Think 600 people instead of 6000. The idea behind co-hosting is to 
facilitate cross-project interactions. You know where to find people, 
and you can easily arrange a meeting between two teams for an hour.



[...]
At the current OpenStack Summit, there is an opportunity for 
contributors, customers and operators to interact, not just in 
technical meetings, but also in a social setting. I think this is 
valuable, even though there seems to be a number of people who 
believe that this is not necessarily the case.


I don't think the proposal removes that opportunity. Contributors 
/can/ still go to OpenStack Summits. They just don't /have to/. I just 
don't think every contributor needs to be present at every OpenStack 
Summit, while I'd like to see most of them present at every separated 
contributors-oriented event[tm].


Yes they can, but if contributors go to the design summit, then they
also have to get travel budget to go to the new Summit. So: design
summits, midcycle meetups, and now the split-off marketing summit.
This is making it overall more expensive for contributors who meet with
customers.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance]one more use case for Image Import Refactor from OPNFV

2016-02-22 Thread Ian Cordasco
-Original Message-
From: Brian Rosmaita 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 22, 2016 at 08:14:38
To: OpenStack Development Mailing List (not for usage questions) 
, Jay Pipes 
Subject:  Re: [openstack-dev] [glance]one more use case for Image Import 
Refactor from OPNFV

> Hello everyone,
>  
> Joe, I think you are proposing a perfectly legitimate use case, but it's
> not what the Glance community is calling "image import", and that's
> leading to some confusion.
>  
> The Glance community has defined "image import" as: "A cloud end-user has
> a bunch of bits that they want to give to Glance in the expectation that
> (in the absence of error conditions) Glance will produce an Image (record,
> file) tuple that can subsequently be used by other OpenStack services that
> consume Images." [0]
>  
> The server-side image import workflow allows operators to validate the
> bits an end-user has uploaded, with the extent of the validation performed
> determined by the operator. For example, a public cloud may wish to make
> sure the bits are in the correct format for that cloud so that "bad"
> images can be caught at import time, rather than at boot time, to ensure a
> better user experience.

Correct. Nothing in what we're talking about right now will be of much use to 
Joe.

> The use case you're talking about takes images that are already "in" a
> cloud, for example, a snapshot of a server that's been configured exactly
> the way you want it, and moving them to a different cloud. In the past,
> the Glance community has referred to this use case as "image cloning" (or
> region-to-region image transfer). There are some old design docs up on
> the wiki discussing this (I think [1] gives a good outline and it's got
> links to some other docs). Those docs are from 2013, though, so they
> can't be resurrected as-is since Glance has changed a bit in the meantime,
> but you can look them over and at least see if I'm correct that image
> cloning captures what you want.
>  
> As I said, the idea has been floated several times, but never got enough
> traction to be implemented. Maybe its time has come!

Right, we've floated the idea several times about image cloning, and there is a 
need for it (according to operators I've spoken to) but we've had higher 
priorities in past cycles that have prevented us from getting around to working 
on image cloning. I suspect Newton will be much the same as we continue to work 
on image import (which I expect will be our main focus for Newton).

Maybe after we have image import nailed down and implemented and Nova using v2,
we can then focus better on image cloning and Glare.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-22 Thread Andrew Laski


On Mon, Feb 22, 2016, at 02:42 PM, Matt Riedemann wrote:
> 
> 
> On 2/22/2016 5:56 AM, Sean Dague wrote:
> > On 02/19/2016 12:49 PM, John Garbutt wrote:
> > 
> >>
> >> Consider a user that uses these four clouds:
> >> * nova-network flat DHCP
> >> * nova-network VLAN manager
> >> * neutron with a single provider network setup
> >> * neutron where user needs to create their own network
> >>
> >> For the first three, the user specifies no network, and they just get
> >> a single NIC with some semi-sensible IP address, likely with a gateway
> >> to the internet.
> >>
> >> For the last one, the user ends up with a network with zero NICs. If
> >> they then go and configure a network in neutron (and they can now use
> >> the new easy one shot give-me-a-network CLI), they start to get VMs
> >> just like they would have with nova-network VLAN manager.
> >>
> >> We all agree the status quo is broken. For me, this is a bug in the
> >> API where we need to fix the consistency. Because it's a change in the
> >> behaviour, it needs to be gated by a microversion.
> >>
> >> Now, if we stepped back and created this again, I would agree that
> >> --nic=auto is a good idea, so it's explicit. However, all our users are
> >> used to automatic being the default, albeit a very patchy default.
> >> So I think the best evolution here is to fix the inconsistency by
> >> making a VM with no network the explicit option (--no-nic or
> >> something?), and failing the build if we are unable to get a nic using
> >> an "automatic guess" route. So now the default is more consistent, and
> >> those who want a VM with no NIC have a way to get their special case
> >> sorted.
> >>
> >> I think this means I like "option 2" in the summary mail on the ops list.
> >
> > Thinking through this over the weekend.
> >
> > From the API I think I agree with Laski now. An API doesn't
> > typically need default behavior; it's OK to make folks be explicit. So
> > making nic a required parameter is fine.
> >
> > "nic": "auto"
> > "nic": "none"
> > "nic": "$name"
> >
> > nic is now jsonschema enforced, 400 if not provided.
> >
> > that being said... I think the behavior of CLI tools should default to
> > nic auto being implied. The user experience there is different. You use
> > cli tools for one off boots of things, so should be as easy as possible.
> >
> > I think this is one of the places where the UX needs of the API and the
> > CLI are definitely different.
> >
> > -Sean
> >
> 
> Is nic only required when using neutron? Or as part of the microversion 
> are we also going to enforce this for nova-network, because if so, that 
> seems like a step backward. But if we don't enforce that check for both 
> neutron and nova-network, then we have differences in the API again.

I think it makes sense to require it in both cases and keep users
blissfully unaware of which networking service is in use.

> 
> -- 
> 
> Thanks,
> 
> Matt Riedemann
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Walter A. Boring IV

On 02/22/2016 07:14 AM, Thierry Carrez wrote:

Hi everyone,

TL;DR: Let's split the events, starting after Barcelona.


Time is ripe for a change. After Tokyo, we at the Foundation have been 
considering options on how to evolve our events to solve those issues. 
This proposal is the result of this work. There is no perfect solution 
here (and this is still work in progress), but we are confident that 
this strawman solution solves a lot more problems than it creates, and 
balances the needs of the various constituents of our community.


The idea would be to split the events. The first event would be for 
upstream technical contributors to OpenStack. It would be held in a 
simpler, scaled-back setting that would let all OpenStack project 
teams meet in separate rooms, but in a co-located event that would 
make it easy to have ad-hoc cross-project discussions. It would happen 
closer to the centers of mass of contributors, in less-expensive 
locations.
I'm trying to follow this here. If we want all of the projects in the
same location to hold a design summit, then all of the contributors are
still going to have to do international travel, which is the primary
cost for attendees. I'm not sure how this saves the attendees much at
all, unless they just stop attending.

Part of the justification for myself for the summits is the ability to
meet up with customers, as well as do presentations on the work that my
team has done over the last release cycle, plus the contributor meetups
and cross-project networking. If we break the summits up, then I may
lose the ability to justify my travel if I don't get to meet with
customers and do presentations to the wider audience.



What kind of locations are we talking about here? Are we looking to stay 
with one continent as it's deemed 'less-expensive'?  Will we still 
alternate between Americas, Europe, Asia?  I'm not sure there is a way 
to make it less expensive for all the projects as there are people from 
around the globe working on each project.



Walt





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Sean McGinnis
On Mon, Feb 22, 2016 at 05:20:21PM +, Amrith Kumar wrote:
> Thierry and all of those who contributed to putting together this write-up, 
> thank you very much.
> 
> TL;DR: +0
> 
> Longer version:
> 
> While I definitely believe that the new proposed timing for "OpenStack 
> Summit" which is some months after the release, is a huge improvement, I am 
> not completely enamored of this proposal. Here is why.
> 
> As a result of this proposal, there will still be four events each year, two 
> "OpenStack Summit" events and two "MidCycle" events. The material change is 
> that the "MidCycle" event that is currently project specific will become a 
> single event inclusive of all projects, not unlike our current "Design 
> Summit".
> 
> I contrast this proposal with a mid-cycle two weeks ago for the Trove 
> project. Thanks to the folks at Red Hat who hosted us in Raleigh, we had a 
> dedicated room, with high bandwidth internet and the ability to have people 
> join us remotely via audio and video (which we used mostly for screen 
> sharing). The previous mid-cycle similarly had excellent facilities provided 
> us by HP (in California), Rackspace (in Austin) and at MIT in Cambridge when 
> we (Tesora) hosted the event.
> 
> At these "simpler, scaled-back settings", would we be able to provide the 
> same kind of infrastructure for each project?
> 
> Given the number of projects, and leaving aside high bandwidth internet and 
> remote participation, providing dedicated meeting room for the duration of 
> the MidCycle event for each project is a considerable undertaking. I believe 
> therefore that the consequence is that the MidCycle event will end up being 
> of comparable scale to the current Design Summit or larger, and will likely 
> need a similar venue.
> 
> I also believe that it is important that OpenStack continue to grow not only 
> a global customer base but also a global contributor base. As others have 
> already commented, this proposal risks the "design summit" becoming US-based, 
> maybe Europe once in a long while. But I find it much harder to believe that 
> these design summits would be truly global. And this I think would be an 
> unwelcome consequence.
> 
> At the current OpenStack Summit, there is an opportunity for contributors, 
> customers and operators to interact, not just in technical meetings, but also 
> in a social setting. I think this is valuable, even though there seems to be 
> a number of people who believe that this is not necessarily the case.
> 
> Those are the three concerns I have with the proposal. 
> 
> Thanks again to Thierry and all who contributed to putting this proposal 
> together.
> 
> -amrith

I agree with a lot of the concerns raised here. I wonder if we're not
just shifting some of the problems and causing others.

While the timing of things isn't ideal right now, I'm also afraid the
timing of these changes would interrupt our development flow and
cause distractions when we need folks focused on getting things done.

I'm also very concerned about losing our midcycles. At least for Cinder,
the midcycle events have been hugely successful and well worth the time
and travel expense, IMO. To me, the design summit event is good for
cross-project communication and getting more operator input. But the
midcycles have been where we've really been able to focus and figure out
issues.

Even if we still have a colocated "midcycle" now, I would be afraid that
there would be too many distractions from everything else going on for
us to be able to really tackle some of the things we've been able to in
our past midcycles.

There are definitely details we would need to work out with this
proposal, and I'm not saying I'm absolutely against it for now. I'm
trying to keep an open mind and see how this will improve things
overall. I would just ask that up front we plan on having a date set,
maybe after a year, where we plan to take a good look back on the
changes and decide whether they really have improved things or not.

Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-22 Thread Matt Riedemann



On 2/22/2016 5:56 AM, Sean Dague wrote:

On 02/19/2016 12:49 PM, John Garbutt wrote:



Consider a user that uses these four clouds:
* nova-network flat DHCP
* nova-network VLAN manager
* neutron with a single provider network setup
* neutron where user needs to create their own network

For the first three, the user specifies no network, and they just get
a single NIC with some semi-sensible IP address, likely with a gateway
to the internet.

For the last one, the user ends up with a network with zero NICs. If
they then go and configure a network in neutron (and they can now use
the new easy one shot give-me-a-network CLI), they start to get VMs
just like they would have with nova-network VLAN manager.

We all agree the status quo is broken. For me, this is a bug in the
API where we need to fix the consistency. Because it's a change in the
behaviour, it needs to be gated by a microversion.

Now, if we stepped back and created this again, I would agree that
--nic=auto is a good idea, so it's explicit. However, all our users are
used to automatic being the default, albeit a very patchy default.
So I think the best evolution here is to fix the inconsistency by
making a VM with no network the explicit option (--no-nic or
something?), and failing the build if we are unable to get a nic using
an "automatic guess" route. So now the default is more consistent, and
those who want a VM with no NIC have a way to get their special case
sorted.

I think this means I like "option 2" in the summary mail on the ops list.


Thinking through this over the weekend.

From the API I think I agree with Laski now. An API doesn't
typically need default behavior; it's OK to make folks be explicit. So
making nic a required parameter is fine.

"nic": "auto"
"nic": "none"
"nic": "$name"

nic is now jsonschema enforced, 400 if not provided.

that being said... I think the behavior of CLI tools should default to
nic auto being implied. The user experience there is different. You use
cli tools for one off boots of things, so should be as easy as possible.

I think this is one of the places where the UX needs of the API and the
CLI are definitely different.

-Sean



Is nic only required when using neutron? Or as part of the microversion 
are we also going to enforce this for nova-network, because if so, that 
seems like a step backward. But if we don't enforce that check for both 
neutron and nova-network, then we have differences in the API again.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Shamail


> On Feb 22, 2016, at 11:19 AM, Tim Bell  wrote:
> 
> 
>> On 22/02/16 18:35, "Thierry Carrez"  wrote:
>> 
>> Clayton O'Neill wrote:
>>> Is the expectation that the ops mid-cycle would continue separately,
>>> or be held with the meeting formerly known as the Design Summit?
>>> 
>>> Personally I’d prefer they be held together, but scheduled with the
>>> thought that operators aren’t likely to be interested in work
>>> sessions, but that a good number of us would be interested in
>>> cross-project and some project specific planning sessions.  This would
>>> also open up the possibility of having some sessions specific intended
>>> for operator/developer feedback sessions.
>> 
>> I'll let Tom comment on that, but the general idea in the strawman 
>> proposal was that the Ops "midcycle" event would be preserved as a 
>> separate event, but likely turn more and more regional to maximize local 
>> attendance. The rationale was that it's hard for ops to justify 
>> traveling to a contributors-branded event, while they can more easily 
>> justify going to the main OpenStack Summit user conference event, and to 
>> regional Ops gatherings.
>> 
>> But things are still pretty open on that front, so let's see what the 
>> feedback is.
> 
> Once we get the ideas reviewed, it would be good to validate it with the 
> -operators list too. There are some impacts on the value of the 
> summits/contributor sessions which would be worth checking to make sure the 
> feedback loops can be kept/enhanced.
> 
> Many will follow both lists but the volume of messages can lead to people 
> focussing on one or the other.
+1, a separate discussion focused on Ops meetups on the operators list
would be a great next step.
> 
> Tim
> 
> 
>> 
>> -- 
>> Thierry Carrez (ttx)
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][all] api variation release by release

2016-02-22 Thread Ian Cordasco
-Original Message-
From: John Garbutt 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 22, 2016 at 12:53:49
To: OpenStack Development Mailing List (not for usage questions) 

Subject:  Re: [openstack-dev] [api][all] api variation release by release

> On 13 January 2016 at 14:28, Matt Riedemann wrote:
> > On 1/13/2016 12:11 AM, joehuang wrote:
> >>
> >> Thanks for the information, it's good to know about the documentation. The
> >> further question is whether any XML-format-like document will be
> >> published for each release and all core projects, so that other cloud
> >> management software can read the changes and deal with the field
> >> variations.
> >>
> >> For example, each project will maintain one XML file in its repository to
> >> record all API update in each release.
> >>
> >> Best Regards
> >> Chaoyi Huang ( Joe Huang )
> >>
> >>
> >> -Original Message-
> >> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
> >> Sent: Wednesday, January 13, 2016 10:56 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [api][all] api variation release by release
> >>
> >>
> >>
> >> On 1/12/2016 7:27 PM, joehuang wrote:
> >>>
> >>> Hello,
> >>>
> >>> As more and more OpenStack release are deployed in the production
> >>> cloud, multiple releases of OpenStack co-located in a cloud is a very
> >>> common situation. For example, "Juno" and "Liberty" releases co-exist
> >>> in the same cloud.
> >>>
> >>> Then the cloud management software has to be aware of the API
> >>> variation of different releases, and deal with the different fields of
> >>> objects in the request / response. For example, in "Juno", there is no
> >>> "multiattach" field in the "volume" object, but the field is present in
> >>> "Liberty".
> >>>
> >>> Each release will bring some API changes, so it will be very useful if
> >>> the API variation is also published after each release is
> >>> delivered, so that the cloud management software can read the changes
> >>> and react accordingly.
> >>>
> >>> Best Regards
> >>>
> >>> Chaoyi Huang ( Joe Huang )
> >>>
> >>>
> >>>
> >>> __  
> >>>  OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >> Have you heard of this effort going on in multiple projects called
> >> microversions? For example, in Nova:
> >>
> >> http://docs.openstack.org/developer/nova/api_microversion_history.html  
> >>
> >> Nova and Ironic already support microversioned APIs. Cinder and Neutron
> >> are working on it I think, and there could be others.
> >>
> >
> > No, there is nothing like that, at least that I've heard of. I don't know
> > how you'd model what's changing in the microversions in a language like XML.
>  
> You will probably find this summary really interesting:
> https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/
>  
> The idea is people can deploy any commit of Nova in production.
> So to work out what is supported you just look at this API:
> http://developer.openstack.org/api-ref-compute-v2.1.html#listVersionsv2.1  
>  
> You can consult the docs to see what that specific version gives you.
> That's still a work in progress; for now we have this:
> * http://docs.openstack.org/developer/nova/api_microversion_history.html  
> * http://developer.openstack.org/api-guide/compute/
>  
> There is talk of using JSON home to make some details machine readable.
> But it's hard to express the semantic changes in a machine-readable form.

Further, JSON home is an abandoned spec that hasn't been completed or really 
tested by very many (if any) production APIs. It is an option, but if we want 
to use it, I'd suggest we all become more involved in the IETF working groups 
to make it an actual standard.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-02-22 Thread John Garbutt
Hi,

Just came up on IRC, when nova-compute gets killed half way through a
volume attach (i.e. no graceful shutdown), things get stuck in a bad
state, like volumes stuck in the attaching state.

This looks like a new addition to this conversation:
http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
And brings us back to this discussion:
https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova

What if we move our attention towards automatically recovering from
the above issue? I am wondering if we can look at making our usually
recovery code deal with the above situation:
https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934

Did we get the Cinder APIs in place that enable the force-detach? I
think we did and it was this one?
https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api
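
If it is, a rough sketch of the kind of cleanup it would enable looks
something like this (assuming python-cinderclient's force_detach call from
the blueprint above; the exact signature varies between client versions, and
the BDM list here is hypothetical, not the actual nova-compute code):

    from cinderclient.v2 import client as cinder_client

    def cleanup_half_attached_volumes(cinder, host_bdms):
        for bdm in host_bdms:
            volume = cinder.volumes.get(bdm['volume_id'])
            if volume.status in ('attaching', 'detaching'):
                # nova-compute died mid-operation; roll the volume back
                # to a usable state instead of leaving it stuck.
                cinder.volumes.force_detach(volume.id)

    # e.g. called from init_host(), with hypothetical credentials:
    # cinder = cinder_client.Client('admin', 'PASSWORD', 'admin',
    #                               'http://controller:5000/v2.0')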

I think diablo_rojo might be able to help dig for any bugs we have
related to this. I just wanted to get this idea out there before I
head out.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-22 Thread Fox, Kevin M
Yeah, I think the really gray issue here is the dependency thing...

I think we would mostly agree that depending on a non-free component for the
actual implementation is bad. So, the dependency through Cassandra on the
Oracle JVM is a problem.

But if you allow the argument that the plugins being semi-closed is ok because
there just isn't a fully open implementation yet for the plugins, you could
make the same argument for the database. Why not allow Cassandra now, since
there isn't a fully free implementation of Cassandra (one running on a free
JVM) yet?

It is a sticky problem. You could start putting in very carefully worded
exceptions to allow the one case but not the other, but that would be ripe for
abuse.

The other way to wrangle it would be to continue the discussion on what it
would take to make an acceptable enough open solution. Swift is close, I
think, but like you said, it doesn't have geoip, which may be needed to
consider it a CDN. What features must a CDN have before it's considered a
viable CDN?

Designate+Swift might be enough? What other gaps are there?

Thanks,
Kevin



From: Thierry Carrez [thie...@openstack.org]
Sent: Monday, February 22, 2016 9:08 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

Back to the original thread: what does "no open core" mean in OpenStack
2016 ? I think working on that could help sway the Poppy decision one
way or another: my original clarification proposal ("It should have a
fully-functional, production-grade open source implementation") would
mean we would have to exclude Poppy, or make an exception that we can
back up.

Poppy really touches a grey area. Their intent is not malicious and they
mostly behave like an OpenStack project. There are a number of potential
issues like the Cassandra dependency (which depends on Oracle JDK), or
the lack of integration with Designate, but those could be fixed before
the final acceptance.

The central question is therefore, should Poppy not be included in the
"official OpenStack projects" list because it is only functional when
coupled with external, non-OpenStack proprietary services. I hear the
arguments of both sides and they are all valid. Yet we have to make a
decision.

Kevin suggested Poppy could support Swift as its open source backend. It
would just put things in a Swift container. That would make a poor CDN,
since AIUI Swift would only spread the data across globally distributed
clusters, not serve it from the closest location. That means we would have
to drop the "fully-functional, production-grade" part of the "no open
core" clarification.

The "no open core" 2016 interpretation could also be moved to "It should
support a fully-functional, production-grade open source implementation
if one is available".

In both cases, the new wording would certainly open the door for real
"open core" services in OpenStack: things that *only* live in OpenStack
as an entry point for proprietary software or hardware. So I'm not sure
we want either of them.

Any other suggestion ?

Or maybe we should not try to clarify what "no open core" means in 2016,
and rely on TC members common sense to judge that ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Tim Bell

On 22/02/16 18:35, "Thierry Carrez"  wrote:

>Clayton O'Neill wrote:
>> Is the expectation that the ops mid-cycle would continue separately,
>> or be held with the meeting formerly known as the Design Summit?
>>
>> Personally I’d prefer they be held together, but scheduled with the
>> thought that operators aren’t likely to be interested in work
>> sessions, but that a good number of us would be interested in
>> cross-project and some project specific planning sessions.  This would
>> also open up the possibility of having some sessions specifically intended
>> for operator/developer feedback sessions.
>
>I'll let Tom comment on that, but the general idea in the strawman 
>proposal was that the Ops "midcycle" event would be preserved as a 
>separate event, but likely turn more and more regional to maximize local 
>attendance. The rationale was that it's hard for ops to justify 
>traveling to a contributors-branded event, while they can more easily 
>justify going to the main OpenStack Summit user conference event, and to 
>regional Ops gatherings.
>
>But things are still pretty open on that front, so let's see what the 
>feedback is.

Once we get the ideas reviewed, it would be good to validate them with the 
-operators list too. There are some impacts on the value of the 
summits/contributor sessions which would be worth checking, to make sure the 
feedback loops can be kept/enhanced.

Many will follow both lists but the volume of messages can lead to people 
focussing on one or the other.

Tim


>
>-- 
>Thierry Carrez (ttx)
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Chris Friesen

On 02/22/2016 11:20 AM, Daniel P. Berrange wrote:

On Mon, Feb 22, 2016 at 12:07:37PM -0500, Sean Dague wrote:

On 02/22/2016 10:43 AM, Chris Friesen wrote:



But the fact remains that nova-compute is doing disk I/O from the main
thread, and if the guests push that disk hard enough then nova-compute
is going to suffer.

Given the above...would it make sense to use eventlet.tpool or similar
to perform all disk access in a separate OS thread?  There'd likely be a
bit of a performance hit, but at least it would isolate the main thread
from IO blocking.
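
For reference, a minimal sketch of the tpool approach suggested above (stock
eventlet API; the file read is purely illustrative):

    import eventlet
    eventlet.monkey_patch()
    from eventlet import tpool

    def read_instance_xml(path):
        # Plain blocking I/O. tpool.execute() runs it in a native OS
        # thread, so a slow disk stalls only this green thread rather
        # than the whole nova-compute process.
        with open(path) as f:
            return f.read()

    xml = tpool.execute(read_instance_xml, '/etc/hostname')  # any readable file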


Making nova-compute more robust is fine, though the reality is once you
IO starve a system, a lot of stuff is going to fall over weird.

So there has to be a tradeoff of the complexity of any new code vs. what
it gains. I think individual patches should be evaluated as such, or a
spec if this is going to get really invasive.


There are OS level mechanisms (eg cgroups blkio controller) for doing
I/O priorization that you could use to give Nova higher priority over
the VMs, to reduce (if not eliminate) the possibility that a busy VM
can inflict a denial of service on the mgmt layer.  Of course figuring
out how to use that mechanism correctly is not entirely trivial.


The 50+ second delays were with CFQ as the disk scheduler.  (No cgroups though, 
just CFQ with equal priorities on nova-compute and the guests.)  This was with a 
3.10 kernel though, so maybe CFQ behaves better on newer kernels.


If you put nova-compute at high priority then glance image downloads, qemu-img 
format conversions, and volume clearing will also run at the higher priority, 
potentially impacting running VMs.


In an ideal world we'd have per-VM cgroups and all activity on behalf of a 
particular VM would be done in the context of that VM's cgroup.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Henry Nash

> On 22 Feb 2016, at 17:45, Thierry Carrez  wrote:
> 
> Amrith Kumar wrote:
>> [...]
>> As a result of this proposal, there will still be four events each year, two 
>> "OpenStack Summit" events and two "MidCycle" events.
> 
> Actually, the OpenStack summit becomes the midcycle event. The new separated 
> contributors-oriented event[tm] happens at the beginning of the new cycle.

So, in general, a well-thought-out proposal - and it certainly helps address some 
of the early concerns over a “simplistic” split. I was also worried, however, 
about the reduction in developer face time - it wasn’t immediately clear that the 
main summit could be treated as a developer midcycle. Is the idea that we just 
let this be informally organized by the projects, or that there would at least 
be a room set aside for each project (but without all the formal cross-project 
structure/agenda that there is at a main developer summit)?

>> [...]
>> Given the number of projects, and leaving aside high bandwidth internet and 
>> remote participation, providing dedicated meeting room for the duration of 
>> the MidCycle event for each project is a considerable undertaking. I believe 
>> therefore that the consequence is that the MidCycle event will end up being 
>> of comparable scale to the current Design Summit or larger, and will likely 
>> need a similar venue.
> 
> It still is an order of magnitude smaller than the "OpenStack Summit". Think 
> 600 people instead of 6000. The idea behind co-hosting is to facilitate 
> cross-project interactions. You know where to find people, and you can easily 
> arrange a meeting between two teams for an hour.
> 
>> [...]
>> At the current OpenStack Summit, there is an opportunity for contributors, 
>> customers and operators to interact, not just in technical meetings, but 
>> also in a social setting. I think this is valuable, even though there seems 
>> to be a number of people who believe that this is not necessarily the case.
> 
> I don't think the proposal removes that opportunity. Contributors /can/ still 
> go to OpenStack Summits. They just don't /have to/. I just don't think every 
> contributor needs to be present at every OpenStack Summit, while I'd like to 
> see most of them present at every separated contributors-oriented event[tm].
> 
> -- 
> Thierry Carrez (ttx)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Magnum][Kuryr] Kuryr-Magnum integration (nested containers)

2016-02-22 Thread Fawad Khaliq
Hi folks,

The spec [1] for Magnum-Kuryr integration is out and has already gone
through good discussion and updates. Please take some time to review;
we would like to converge on the design soon.

[1] https://review.openstack.org/#/c/269039/5

Thanks,
Fawad Khaliq
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Tim Bell





On 22/02/16 17:27, "John Garbutt"  wrote:

>On 22 February 2016 at 15:31, Monty Taylor  wrote:
>> On 02/22/2016 07:24 AM, Russell Bryant wrote:
>>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez > wrote:
 Hi everyone,
 TL;DR: Let's split the events, starting after Barcelona.
>>> This proposal sounds fantastic.  Thank you very much to those that help
>>> put it together.
>> Totally agree. I think it's an excellent way to address the concerns and
>> balance all of the diverse needs we have.
>
>tl;dr
>+1
>Awesome work ttx.
>Thank you!
>
>Cheaper cities & venues should make it easier for more contributors to
>attend. That's a big deal. This also feels like enough notice to plan
>for that.
>
>I think this means summit talk proposal deadline is both after the
>previous release, and after the contributor event for the next
>release? That should help keep proposals concrete (less guess work
>when submitting). Nice.
>
>Dev wise, it seems equally good timing. Initially I was worried about
>the event distracting from RC bugs, but actually I can see this
>helping.
>
>I am sure there are more questions that will pop up. Like I assume
>this means there is no ATC free pass to the summit? And I guess a
>small nominal fee for the contributor meetup (like the recent ops
>meetup, to help predict numbers more accurately)? I guess that helps
>level the playing field for contributors who don't put git commits in
>the repo (I am thinking vocal operators that don't contribute code).
>But I probably shouldn't go into all that just yet.

I would like to find a way to allow contributors cheaper access to the summits. 
Many of the DevOps contributors are patching test cases, configuration 
management recipes and documentation, which should be rewarded in some form.

Assuming that many of the ATCs are not so motivated to attend the summit, the 
cost in offering access to the event would not be significant.

Charging for the Ops meetups was, to my understanding, more to confirm 
commitment to attend given limited space.

Thus, I would be in favour of a preferential rate for contributors (whether ATC 
is the right criterion is a different question) for summits.


Tim

>
>Thanks,
>johnthetubaguy
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api][all] api variation release by release

2016-02-22 Thread John Garbutt
On 13 January 2016 at 14:28, Matt Riedemann  wrote:
> On 1/13/2016 12:11 AM, joehuang wrote:
>>
>> Thanks for the information, it's good to know the documentation. The
>> further question is whether there is any XML format like document will be
>> published for each release and all core projects, so that other cloud
>> management software can read the changes, and deal with the fields
>> variation.
>>
>> For example, each project will maintain one XML file in its repository to
>> record all API update in each release.
>>
>> Best Regards
>> Chaoyi Huang ( Joe Huang )
>>
>>
>> -Original Message-
>> From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
>> Sent: Wednesday, January 13, 2016 10:56 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [api][all] api variation release by release
>>
>>
>>
>> On 1/12/2016 7:27 PM, joehuang wrote:
>>>
>>> Hello,
>>>
>>> As more and more OpenStack release are deployed in the production
>>> cloud, multiple releases of OpenStack co-located in a cloud is a very
>>> common situation. For example, "Juno" and "Liberty" releases co-exist
>>> in the same cloud.
>>>
>>> Then the cloud management software has to be aware of the API
>>> variation of different releases, and deal with the different field of
>>> object in the request / response. For example, in "Juno", no
>>> "multiattach" field in the "volume" object, but the field presents in
>>> "Liberty".
>>>
>>> Each releases will bring some API changes, it will be very useful that
>>> the API variation will also be publish after each release is
>>> delivered, so that the cloud management software can read and changes
>>> and react accordingly.
>>>
>>> Best Regards
>>>
>>> Chaoyi Huang ( Joe Huang )
>>>
>>>
>>>
>>> __
>>>  OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> Have you heard of this effort going on in multiple projects called
>> microversions? For example, in Nova:
>>
>> http://docs.openstack.org/developer/nova/api_microversion_history.html
>>
>> Nova and Ironic already support microversioned APIs. Cinder and Neutron
>> are working on it I think, and there could be others.
>>
>
> No, there is nothing like that, at least that I've heard of. I don't know
> how you'd model what's changing in the microversions in a language like XML.

You will probably find this summary really interesting:
https://dague.net/2015/06/05/the-nova-api-in-kilo-and-beyond-2/

The idea is people can deploy any commit of Nova in production.
So to work out what is supported you just look at this API:
http://developer.openstack.org/api-ref-compute-v2.1.html#listVersionsv2.1

You can consult the docs to see what that specific version gives you.
That's still a work in progress; for now we have this:
* http://docs.openstack.org/developer/nova/api_microversion_history.html
* http://developer.openstack.org/api-guide/compute/

There is talk of using JSON home to make some details machine readable.
But it's hard to express the semantic changes in a machine-readable form.
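
To make that concrete, here is a small sketch of the discovery flow (the
endpoint and token are made up; the header shown is Nova's microversion
header):

    import requests

    NOVA = 'http://controller:8774/v2.1'        # hypothetical endpoint
    headers = {'X-Auth-Token': 'ADMIN_TOKEN'}   # hypothetical token

    # 1. Discover the microversion range this deployment supports.
    version = requests.get(NOVA, headers=headers).json()['version']
    print(version['min_version'], version['version'])  # e.g. 2.1 and 2.25

    # 2. Pin a request to a microversion the client knows how to parse.
    headers['X-OpenStack-Nova-API-Version'] = '2.10'
    servers = requests.get(NOVA + '/servers', headers=headers).json()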

Does that help?

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Tim Bell




On 22/02/16 19:07, "John Garbutt"  wrote:

>On 22 February 2016 at 17:38, Sean Dague  wrote:
>> On 02/22/2016 12:20 PM, Daniel P. Berrange wrote:
>>> On Mon, Feb 22, 2016 at 12:07:37PM -0500, Sean Dague wrote:
 On 02/22/2016 10:43 AM, Chris Friesen wrote:
> Hi all,
>
> We've recently run into some interesting behaviour that I thought I
> should bring up to see if we want to do anything about it.
>
> Basically the problem seems to be that nova-compute is doing disk I/O
> from the main thread, and if it blocks then it can block all of
> nova-compute (since all eventlets will be blocked).  Examples that we've
> found include glance image download, file renaming, instance directory
> creation, opening the instance xml file, etc.  We've seen nova-compute
> block for upwards of 50 seconds.
>
> Now the specific case where we hit this is not a production
> environment.  It's only got one spinning disk shared by all the guests,
> the guests were hammering on the disk pretty hard, the IO scheduler for
> the instance disk was CFQ which seems to be buggy in our kernel.
>
> But the fact remains that nova-compute is doing disk I/O from the main
> thread, and if the guests push that disk hard enough then nova-compute
> is going to suffer.
>
> Given the above...would it make sense to use eventlet.tpool or similar
> to perform all disk access in a separate OS thread?  There'd likely be a
> bit of a performance hit, but at least it would isolate the main thread
> from IO blocking.

 Making nova-compute more robust is fine, though the reality is once you
 IO starve a system, a lot of stuff is going to fall over weird.

 So there has to be a tradeoff of the complexity of any new code vs. what
 it gains. I think individual patches should be evaluated as such, or a
 spec if this is going to get really invasive.
>>>
>>> There are OS level mechanisms (eg cgroups blkio controller) for doing
>>> I/O priorization that you could use to give Nova higher priority over
>>> the VMs, to reduce (if not eliminate) the possibility that a busy VM
>>> can inflict a denial of service on the mgmt layer.  Of course figuring
>>> out how to use that mechanism correctly is not entirely trivial.
>>>
>>> I think it is probably worth focusing effort in that area, before jumping
>>> into making all the I/O related code in Nova more complicated. eg have
>>> someone investigate & write up recommendation in Nova docs for how to
>>> configure the host OS & Nova such that VMs cannot inflict an I/O denial
>>> of service attack on the mgmt service.
>>
>> +1 that would be much nicer.
>>
>> We've got some set of bugs in the tracker right now which are basically
>> "after the compute node being at loadavg of 11 for an hour, nova-compute
>> starts failing". Having some basic methodology to use Linux
>> prioritization on the worker process would mitigate those quite a bit,
>> and could be used by all users immediately, vs. complex nova-compute
>> changes which would only apply to new / upgraded deploys.
>>
>
>+1
>
>Does that turn into improved deployment docs that cover how you do
>that on various platforms?
>
>Maybe some tools to help with that also go in here?
>http://git.openstack.org/cgit/openstack/osops-tools-generic/

And some easy configuration in the puppet/ansible/chef standard recipes would 
also help.

>
>Thanks,
>John
>
>PS
>FWIW, how xenapi runs nova-compute in VM has a similar outcome, albeit
>in a more heavy handed way.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when to call os-brick's connector.disconnect_volume

2016-02-22 Thread John Garbutt
So just attempting to read through this thread, I think I hear:

Problems:

1. multi-attach breaks the assumption that made detach work
2. live-migrate already breaks with some drivers, due to not fully
understanding the side effects of all API calls
3. evacuate and shelve have related issues


Solution ideas:

1. New export/target for every volume connection
* pro: simple
* con: that doesn't work for all drivers (?)

2. Nova works out when to disconnect volume on host
* pro: no cinder API changes (i.e. no upgrade issue)
* con: adds complexity in Nova
* con: requires all nodes to run fixed code before multi-attach is safe
* con: doesn't help with the live-migrate and evacuate issues anyways?

3. Give Cinder all the info, so it knows what has to happen
* pro: seems to give cinder the info to stop API users doing bad things
* pro: more robust API particularly useful with multiple nova, and
with baremetal, etc
* con: Need cinder micro-versions to do this API change and work across upgrade


So from where I am sat:
1: doesn't work for everyone
2: doesn't fix all the problems we need to fix
3: will take a long time

If so, it feels like we need solution 3 regardless, to solve the wider issues.
We only need solution 2 if solution 3 would block multi-attach for too long.

Am I missing something in that summary?
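
For illustration, here is a rough sketch of what option 2's host-side check
might look like, based on Walt's description in the quoted thread below (the
iSCSI-style connection_info shape is an assumption, not actual Nova code):

    def safe_to_disconnect(this_conn, other_conns_on_host):
        """True if no other attachment on this host shares our target.

        Both arguments are connection_info dicts as returned by Cinder's
        initialize_connection; an iSCSI-style 'data' sub-dict carrying
        'target_iqn'/'target_lun' is assumed here for illustration.
        """
        target = (this_conn['data'].get('target_iqn'),
                  this_conn['data'].get('target_lun'))
        others = [c for c in other_conns_on_host
                  if (c['data'].get('target_iqn'),
                      c['data'].get('target_lun')) == target]
        # Only call os-brick's disconnect_volume when nothing else on
        # the host is still using the same target.
        return not others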

Thanks,
johnthetubaguy

On 12 February 2016 at 20:26, Ildikó Váncsa  wrote:
> Hi Walt,
>
> Thanks for describing the bigger picture.
>
> In my opinion, once we have microversion support available in Cinder, that
> will give us a bit of freedom and also the possibility to handle these
> difficulties.
>
> Regarding terminate_connection, we will have issues with live_migration as it
> is today. We need to figure out what information would be best to feed back
> to Cinder from Nova, so we should figure out what API we would need once we
> are able to introduce it in a safe way. I still see benefit in storing the
> connection_info for the attachments.
>
> Also, I think multiattach support should be disabled for problematic drivers
> like lvm until we have a solution for proper detach on the whole call chain.
>
> Best Regards,
> Ildikó
>
>> -Original Message-
>> From: Walter A. Boring IV [mailto:walter.bor...@hpe.com]
>> Sent: February 11, 2016 18:31
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [Nova][Cinder] Multi-attach, determining when 
>> to call os-brick's connector.disconnect_volume
>>
>> There seem to be a few discussions going on here wrt detaches. One
>> is what to do on the Nova side with calling os-brick's disconnect_volume,
>> and another is when to (or not to) call Cinder's
>> terminate_connection and detach.
>>
>> My original post was simply to discuss a mechanism to try and figure out the 
>> first problem.  When should nova call brick to remove the
>> local volume, prior to calling Cinder to do something.
>>
>> Nova needs to know if it's safe to call disconnect_volume or not. Cinder
>> already tracks each attachment, and it can return the connection_info
>> for each attachment with a call to initialize_connection.  If 2 of
>> those connection_info dicts are the same, it's a shared volume/target.
>> Don't call disconnect_volume if there are any more of those left.
>>
>> On the Cinder side of things, if terminate_connection, detach is called, the 
>> volume manager can find the list of attachments for a
>> volume, and compare that to the attachments on a host.  The problem is, 
>> Cinder doesn't track the host along with the instance_uuid in
>> the attachments table.  I plan on allowing that as an API change after 
>> microversions lands, so we know how many times a volume is
>> attached/used on a particular host.  The driver can decide what to do with 
>> it at
>> terminate_connection, detach time. This helps account for
>> the differences in each of the Cinder backends, which we will never get all 
>> aligned to the same model.  Each array/backend handles
>> attachments different and only the driver knows if it's safe to remove the 
>> target or not, depending on how many attachments/usages
>> it has
>> on the host itself.   This is the same thing as a reference counter,
>> which we don't need, because we have the count in the attachments table, 
>> once we allow setting the host and the instance_uuid at
>> the same time.
>>
>> Walt
>> > On Tue, Feb 09, 2016 at 11:49:33AM -0800, Walter A. Boring IV wrote:
>> >> Hey folks,
>> >> One of the challenges we have faced with the ability to attach a
>> >> single volume to multiple instances, is how to correctly detach that
>> >> volume.  The issue is a bit complex, but I'll try and explain the
>> >> problem, and then describe one approach to solving one part of the detach 
>> >> puzzle.
>> >>
>> >> Problem:
>> >>When a volume is attached to multiple instances on the same host.
>> >> There are 2 scenarios here.
>> >>
>> >>1) Some Cinder drivers 

Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-22 Thread Dean Troyer
On Mon, Feb 22, 2016 at 11:08 AM, Thierry Carrez 
wrote:

> Back to the original thread: what does "no open core" mean in OpenStack
> 2016 ? I think working on that could help sway the Poppy decision one way
> or another: my original clarification proposal ("It should have a
> fully-functional, production-grade open source implementation") would mean
> we would have to exclude Poppy, or make an exception that we can back up.
>

I think "open core" still means basically what it did 5 years ago, in that
it is generally a single entity that is 'holding back' part of a
project/product from the "community release" for product purposes.

OpenStack's "no open core" rule was set with that in mind IIRC.  While
Poppy doesn't meet the above definition, and therefore I wouldn't call it
open core, it does share certain characteristics with open core projects in
that it is not production-ready without a commercial/proprietary service.

We as a community often take a strong stand that the tools we use to
produce OpenStack are Open Source whenever possible.  Sometimes, when that
is not possible, we simply do without rather than compromise that
philosophy.  See our lack of a video-conference capability (aside from the
arguable usefulness of it).  There simply is not a usable/scalable option
that fits our values.  Would we suddenly find one acceptable if we had an
open source API abstraction layer to multiple non-free services?

I think with that stance on the tools we use it seems a bit disingenuous to
declare that one of our deliverables is a project to enable just that sort
of service.  This does not strike me as consistent with the OpenStack
mission.

dt

-- 

Dean Troyer
dtro...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-22 Thread Ian Cordasco
-Original Message-
From: Mike Perez 
Reply: Mike Perez 
Date: February 22, 2016 at 11:51:39
To: Ian Cordasco , OpenStack Development Mailing List 
(not for usage questions) 
Subject:  Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

> On 02/22/2016 07:19 AM, Ian Cordasco wrote:
> > -Original Message-
> > From: Mike Perez  
> > Reply: OpenStack Development Mailing List (not for usage questions)  
> > Date: February 19, 2016 at 19:21:13
> > To: openstack-dev@lists.openstack.org  
> > Subject: Re: [openstack-dev] [all] [tc] "No Open Core" in 2016
> >
> >> On 02/18/2016 09:05 PM, Cody A.W. Somerville wrote:
> >>> There is no implicit (or explicit) requirement for the tests to be a
> >>> full integration/end-to-end test. Mocks and/or unit tests would be
> >>> sufficient to satisfy "test-driven gate".
> >>
> >> While I do agree there is no requirement, I would not be satisfied with
> >> us giving up on having functional or integration tests from a project
> >> because of the available implementations. It's reasons like this that
> >> highlight Poppy being different from the rest of OpenStack.
> >
> > Would third-party integration CI not be satisfactory?
>  
> That would be fine, but are these commercial CDN solutions going to be
> interested in hosting them?

I don't know that for certain, and I don't know if the Poppy team has gotten as 
far as asking them. I'd also be unsurprised if this resulted in a catch-22 of 
sorts, where the CDNs will only work on those if Poppy is an official OpenStack 
project, and we'd only be happy accepting Poppy if it had those third-party CI 
services.

--  
Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Tim Bell

On 22/02/16 19:07, "John Garbutt"  wrote:

>On 22 February 2016 at 17:38, Sean Dague  wrote:
>> On 02/22/2016 12:20 PM, Daniel P. Berrange wrote:
>>> On Mon, Feb 22, 2016 at 12:07:37PM -0500, Sean Dague wrote:
 On 02/22/2016 10:43 AM, Chris Friesen wrote:
> Hi all,
>
> We've recently run into some interesting behaviour that I thought I
> should bring up to see if we want to do anything about it.
>
> Basically the problem seems to be that nova-compute is doing disk I/O
> from the main thread, and if it blocks then it can block all of
> nova-compute (since all eventlets will be blocked).  Examples that we've
> found include glance image download, file renaming, instance directory
> creation, opening the instance xml file, etc.  We've seen nova-compute
> block for upwards of 50 seconds.
>
> Now the specific case where we hit this is not a production
> environment.  It's only got one spinning disk shared by all the guests,
> the guests were hammering on the disk pretty hard, the IO scheduler for
> the instance disk was CFQ which seems to be buggy in our kernel.
>
> But the fact remains that nova-compute is doing disk I/O from the main
> thread, and if the guests push that disk hard enough then nova-compute
> is going to suffer.
>
> Given the above...would it make sense to use eventlet.tpool or similar
> to perform all disk access in a separate OS thread?  There'd likely be a
> bit of a performance hit, but at least it would isolate the main thread
> from IO blocking.

 Making nova-compute more robust is fine, though the reality is once you
 IO starve a system, a lot of stuff is going to fall over weird.

 So there has to be a tradeoff of the complexity of any new code vs. what
 it gains. I think individual patches should be evaluated as such, or a
 spec if this is going to get really invasive.
>>>
>>> There are OS level mechanisms (eg cgroups blkio controller) for doing
>>> I/O priorization that you could use to give Nova higher priority over
>>> the VMs, to reduce (if not eliminate) the possibility that a busy VM
>>> can inflict a denial of service on the mgmt layer.  Of course figuring
>>> out how to use that mechanism correctly is not entirely trivial.
>>>
>>> I think it is probably worth focusing effort in that area, before jumping
>>> into making all the I/O related code in Nova more complicated. eg have
>>> someone investigate & write up recommendation in Nova docs for how to
>>> configure the host OS & Nova such that VMs cannot inflict an I/O denial
>>> of service attack on the mgmt service.
>>
>> +1 that would be much nicer.
>>
>> We've got some set of bugs in the tracker right now which are basically
>> "after the compute node being at loadavg of 11 for an hour, nova-compute
>> starts failing". Having some basic methodology to use Linux
>> prioritization on the worker process would mitigate those quite a bit,
>> and could be used by all users immediately, vs. complex nova-compute
>> changes which would only apply to new / upgraded deploys.
>>
>
>+1
>
>Does that turn into improved deployment docs that cover how you do
>that on various platforms?
>
>Maybe some tools to help with that also go in here?
>http://git.openstack.org/cgit/openstack/osops-tools-generic/

I think we could also include something in the puppet/chef/ansible/… 
configurations to apply the appropriate settings.

>
>Thanks,
>John
>
>PS
>FWIW, how xenapi runs nova-compute in VM has a similar outcome, albeit
>in a more heavy handed way.
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

smime.p7s
Description: S/MIME cryptographic signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] (no subject)

2016-02-22 Thread ghe . rivero
From: Ghe Rivero 
Subject: Re: [openstack-dev] [all] A proposal to separate the design summit

Quoting Clayton O'Neill (2016-02-22 10:27:04)
> Is the expectation that the ops mid-cycle would continue separately,
> or be held with the meeting formerly known as the Design Summit?
> 
> Personally I’d prefer they be held together, but scheduled with the
> thought that operators aren’t likely to be interested in work
> sessions, but that a good number of us would be interested in
> cross-project and some project specific planning sessions.  This would
> also open up the possibility of having some sessions specifically intended
> for operator/developer feedback sessions.

+1

Ghe Rivero

> On Mon, Feb 22, 2016 at 12:15 PM, Lauren Sell  wrote:
> >
> >> On Feb 22, 2016, at 8:52 AM, Clayton O'Neill  wrote:
> >>
> >> I think this is a great proposal, but like Matt I’m curious how it
> >> might impact the operator sessions that have been part of the Design
> >> Summit and the Operators Mid-Cycle.
> >>
> >> As an operator I got a lot out of the cross-project designs sessions
> >> in Tokyo, but they were scheduled at the same time as the Operator
> >> sessions.  On the other hand, the work sessions clearly aren’t as
> >> useful to me.  It would be nice would be worked out so that the new
> >> design summit replacement was in the same location, and scheduled so
> >> that the operator specific parts were overlapping the work sessions
> >> instead of the more big picture sessions.
> >
> > Great question. The current plan is to maintain the ops summit and 
> > mid-cycle activities.
> >
> > The new format would allow us to reduce overlap between ops summit and 
> > cross project sessions at the main event, both for the operators and 
> > developers who want to be involved in either activity.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread John Dickinson
Amrith raises an interesting point. This proposal moves from effectively 4 dev 
events a year to 2 dev events a year, thus *reducing* the amount of 
face-to-face time we have.

While my first reaction to the proposed changes is positive, de facto reduction 
of time spent together as devs seems counter-productive.

My thinking goes like this: we have mid-cycles currently. Regardless of whether 
they are "required" or official, they are effectively productive weeks that the 
most active contributors try to attend, and they are highly beneficial.

The current summits have time and space set aside for contributor 
communication. Over the last couple of years, these summit sessions have gotten 
better, not worse. While the current summit/conference design does indeed put a 
burden on some contributors (myself included--it's a really busy week), the 
majority of devs in the room during the current design sessions do *not* also 
have customer meetings, booth duty, two conference talks, and various company 
parties to attend.

I'm worried that we're losing valuable dev face-to-face time from the 
"long-tail" of contributors for the benefit of the minority of devs who are 
most active. And for those who are most active, we're still doing four events a 
year all over the world.


--John




On 22 Feb 2016, at 9:45, Thierry Carrez wrote:

> Amrith Kumar wrote:
>> [...]
>> As a result of this proposal, there will still be four events each year, two 
>> "OpenStack Summit" events and two "MidCycle" events.
>
> Actually, the OpenStack summit becomes the midcycle event. The new separated 
> contributors-oriented event[tm] happens at the beginning of the new cycle.
>
>> [...]
>> Given the number of projects, and leaving aside high bandwidth internet and 
>> remote participation, providing dedicated meeting room for the duration of 
>> the MidCycle event for each project is a considerable undertaking. I believe 
>> therefore that the consequence is that the MidCycle event will end up being 
>> of comparable scale to the current Design Summit or larger, and will likely 
>> need a similar venue.
>
> It still is an order of magnitude smaller than the "OpenStack Summit". Think 
> 600 people instead of 6000. The idea behind co-hosting is to facilitate 
> cross-project interactions. You know where to find people, and you can easily 
> arrange a meeting between two teams for an hour.
>
>> [...]
>> At the current OpenStack Summit, there is an opportunity for contributors, 
>> customers and operators to interact, not just in technical meetings, but 
>> also in a social setting. I think this is valuable, even though there seems 
>> to be a number of people who believe that this is not necessarily the case.
>
> I don't think the proposal removes that opportunity. Contributors /can/ still 
> go to OpenStack Summits. They just don't /have to/. I just don't think every 
> contributor needs to be present at every OpenStack Summit, while I'd like to 
> see most of them present at every separated contributors-oriented event[tm].
>
> -- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Michał Dulko
On 02/22/2016 04:49 PM, Daniel P. Berrange wrote:
> On Mon, Feb 22, 2016 at 04:14:06PM +0100, Thierry Carrez wrote:
>> The idea would be to split the events. The first event would be for upstream
>> technical contributors to OpenStack. It would be held in a simpler,
>> scaled-back setting that would let all OpenStack project teams meet in
>> separate rooms, but in a co-located event that would make it easy to have
>> ad-hoc cross-project discussions. It would happen closer to the centers of
>> mass of contributors, in less-expensive locations.
> The idea that we can choose less expensive locations is great, but I'm a
> little wary of focusing too much on "centers of mass of contributors", as
> it can easily become an excuse to have it in roughly the same places each
> time. As a non-USA based contributor, I really value the fact the the
> summits rotate around different regions instead of spending all the time
> in the USA as was the case earlier in OpenStack days. Minimizing travel
> costs is no doubt a welcome aim for companies' budgets, but it should not
> be allowed to dominate to such a large extent that we miss representation
> of different regions. ie if we never went back to Asia because the it is
> cheaper for the /current/ majority of contributors to go to the US, we'll
> make it harder to attract new contributors from those regions we avoid on
> cost grounds. The "center of mass of contributors" could become a self-
> fulfilling prophecy.
>
> IOW, I'm onboard with choosing less expensive locations, but would like
> to see us still make the effort to reach out across different regions
> for the events, and not become too US focused once again.

As an EU-based contributor I have similar concerns. The first OpenStack
Summit I was able to attend was in Paris, and the fact that it was close
let us send almost our entire team of contributors. That helped us in
later funding negotiations, and we were able to keep sending a constant
number of contributors even to Summits far more expensive for us. I don't
believe that would ever have been possible if all the conferences were
organized in the US.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread John Garbutt
On 22 February 2016 at 17:38, Sean Dague  wrote:
> On 02/22/2016 12:20 PM, Daniel P. Berrange wrote:
>> On Mon, Feb 22, 2016 at 12:07:37PM -0500, Sean Dague wrote:
>>> On 02/22/2016 10:43 AM, Chris Friesen wrote:
 Hi all,

 We've recently run into some interesting behaviour that I thought I
 should bring up to see if we want to do anything about it.

 Basically the problem seems to be that nova-compute is doing disk I/O
 from the main thread, and if it blocks then it can block all of
 nova-compute (since all eventlets will be blocked).  Examples that we've
 found include glance image download, file renaming, instance directory
 creation, opening the instance xml file, etc.  We've seen nova-compute
 block for upwards of 50 seconds.

 Now the specific case where we hit this is not a production
 environment.  It's only got one spinning disk shared by all the guests,
 the guests were hammering on the disk pretty hard, the IO scheduler for
 the instance disk was CFQ which seems to be buggy in our kernel.

 But the fact remains that nova-compute is doing disk I/O from the main
 thread, and if the guests push that disk hard enough then nova-compute
 is going to suffer.

 Given the above...would it make sense to use eventlet.tpool or similar
 to perform all disk access in a separate OS thread?  There'd likely be a
 bit of a performance hit, but at least it would isolate the main thread
 from IO blocking.
>>>
>>> Making nova-compute more robust is fine, though the reality is once you
>>> IO starve a system, a lot of stuff is going to fall over weird.
>>>
>>> So there has to be a tradeoff of the complexity of any new code vs. what
>>> it gains. I think individual patches should be evaluated as such, or a
>>> spec if this is going to get really invasive.
>>
>> There are OS level mechanisms (eg cgroups blkio controller) for doing
>> I/O priorization that you could use to give Nova higher priority over
>> the VMs, to reduce (if not eliminate) the possibility that a busy VM
>> can inflict a denial of service on the mgmt layer.  Of course figuring
>> out how to use that mechanism correctly is not entirely trivial.
>>
>> I think it is probably worth focusing effort in that area, before jumping
>> into making all the I/O related code in Nova more complicated. eg have
>> someone investigate & write up recommendation in Nova docs for how to
>> configure the host OS & Nova such that VMs cannot inflict an I/O denial
>> of service attack on the mgmt service.
>
> +1 that would be much nicer.
>
> We've got some set of bugs in the tracker right now which are basically
> "after the compute node being at loadavg of 11 for an hour, nova-compute
> starts failing". Having some basic methodology to use Linux
> prioritization on the worker process would mitigate those quite a bit,
> and could be used by all users immediately, vs. complex nova-compute
> changes which would only apply to new / upgraded deploys.
>

+1

Does that turn into improved deployment docs that cover how you do
that on various platforms?

Maybe some tools to help with that also go in here?
http://git.openstack.org/cgit/openstack/osops-tools-generic/

Thanks,
John

PS
FWIW, how xenapi runs nova-compute in VM has a similar outcome, albeit
in a more heavy handed way.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all] A proposal to separate the design summit

2016-02-22 Thread James Penick
On Mon, Feb 22, 2016 at 8:32 AM, Matt Fischer  wrote:

> Cross-post to openstack-operators...
>
> As an operator, there's value in me attending some of the design summit
> sessions to provide feedback and guidance. But I don't really need to be in
> the room for a week discussing minutiae of implementations. So I probably
> can't justify 2 extra trips just to give a few hours of
> feedback/discussion. If this is indeed the case for some other folks we'll
> need to do a good job of collecting operator feedback at the operator
> sessions (perhaps hopefully with reps from each major project?). We don't
> want projects operating in a vacuum when it comes to major decisions.
>

If there's one thing I've learned from design summits, it's that there
should be operators in nearly every session. In my experience the core
developers for each project have been overwhelmingly encouraging of Ops
feedback. I'm hoping that, if anything, this split would encourage operators
and deployers to participate more in the design sessions.



>
>
Also where do the current operators design sessions and operators midcycle
> fit in here?
>
> (apologies for not replying directly to the first message, gmail seems to
> have lost it).
>
>
>
> On Mon, Feb 22, 2016 at 8:24 AM, Russell Bryant 
> wrote:
>
>>
>>
>> On Mon, Feb 22, 2016 at 10:14 AM, Thierry Carrez 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> TL;DR: Let's split the events, starting after Barcelona.
>>>
>>
>> This proposal sounds fantastic.  Thank you very much to those that help
>> put it together.
>>
>> --
>> Russell Bryant
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Jay Pipes

On 02/22/2016 12:45 PM, Thierry Carrez wrote:

I don't think the proposal removes that opportunity. Contributors /can/
still go to OpenStack Summits. They just don't /have to/. I just don't
think every contributor needs to be present at every OpenStack Summit,
while I'd like to see most of them present at every separated
contributors-oriented event[tm].


Yes. This. A thousand this.

-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-22 Thread Mike Perez

On 02/22/2016 07:19 AM, Ian Cordasco wrote:

-Original Message-
From: Mike Perez 
Reply: OpenStack Development Mailing List (not for usage questions) 

Date: February 19, 2016 at 19:21:13
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [all] [tc] "No Open Core" in 2016


On 02/18/2016 09:05 PM, Cody A.W. Somerville wrote:

There is no implicit (or explicit) requirement for the tests to be a
full integration/end-to-end test. Mocks and/or unit tests would be
sufficient to satisfy "test-driven gate".


While I do agree there is no requirement, I would not be satisfied with
us giving up on having functional or integration tests from a project
because of the available implementations. It's reasons like this that
highlight Poppy being different from the rest of OpenStack.


Would third-party integration CI not be satisfactory?


That would be fine, but are these commercial CDN solutions going to be 
interested in hosting them?


--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Thierry Carrez

Amrith Kumar wrote:

[...]
As a result of this proposal, there will still be four events each year, two "OpenStack 
Summit" events and two "MidCycle" events.


Actually, the OpenStack summit becomes the midcycle event. The new 
separated contributors-oriented event[tm] happens at the beginning of 
the new cycle.



[...]
Given the number of projects, and leaving aside high bandwidth internet and 
remote participation, providing dedicated meeting room for the duration of the 
MidCycle event for each project is a considerable undertaking. I believe 
therefore that the consequence is that the MidCycle event will end up being of 
comparable scale to the current Design Summit or larger, and will likely need a 
similar venue.


It still is an order of magnitude smaller than the "OpenStack Summit". 
Think 600 people instead of 6000. The idea behind co-hosting is to 
facilitate cross-project interactions. You know where to find people, 
and you can easily arrange a meeting between two teams for an hour.



[...]
At the current OpenStack Summit, there is an opportunity for contributors, 
customers and operators to interact, not just in technical meetings, but also 
in a social setting. I think this is valuable, even though there seems to be a 
number of people who believe that this is not necessarily the case.


I don't think the proposal removes that opportunity. Contributors /can/ 
still go to OpenStack Summits. They just don't /have to/. I just don't 
think every contributor needs to be present at every OpenStack Summit, 
while I'd like to see most of them present at every separated 
contributors-oriented event[tm].


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #71

2016-02-22 Thread Emilien Macchi
Hi,

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting4.

https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack

As usual, feel free to bring topics in this etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160223

We'll also have open discussion for bugs & reviews, so anyone is welcome
to join.

See you there,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Sean Dague
On 02/22/2016 12:20 PM, Daniel P. Berrange wrote:
> On Mon, Feb 22, 2016 at 12:07:37PM -0500, Sean Dague wrote:
>> On 02/22/2016 10:43 AM, Chris Friesen wrote:
>>> Hi all,
>>>
>>> We've recently run into some interesting behaviour that I thought I
>>> should bring up to see if we want to do anything about it.
>>>
>>> Basically the problem seems to be that nova-compute is doing disk I/O
>>> from the main thread, and if it blocks then it can block all of
>>> nova-compute (since all eventlets will be blocked).  Examples that we've
>>> found include glance image download, file renaming, instance directory
>>> creation, opening the instance xml file, etc.  We've seen nova-compute
>>> block for upwards of 50 seconds.
>>>
>>> Now the specific case where we hit this is not a production
>>> environment.  It's only got one spinning disk shared by all the guests,
>>> the guests were hammering on the disk pretty hard, the IO scheduler for
>>> the instance disk was CFQ which seems to be buggy in our kernel.
>>>
>>> But the fact remains that nova-compute is doing disk I/O from the main
>>> thread, and if the guests push that disk hard enough then nova-compute
>>> is going to suffer.
>>>
>>> Given the above...would it make sense to use eventlet.tpool or similar
>>> to perform all disk access in a separate OS thread?  There'd likely be a
>>> bit of a performance hit, but at least it would isolate the main thread
>>> from IO blocking.
>>
>> Making nova-compute more robust is fine, though the reality is once you
>> IO starve a system, a lot of stuff is going to fall over weird.
>>
>> So there has to be a tradeoff of the complexity of any new code vs. what
>> it gains. I think individual patches should be evaluated as such, or a
>> spec if this is going to get really invasive.
> 
> There are OS level mechanisms (eg cgroups blkio controller) for doing
> I/O priorization that you could use to give Nova higher priority over
> the VMs, to reduce (if not eliminate) the possibility that a busy VM
> can inflict a denial of service on the mgmt layer.  Of course figuring
> out how to use that mechanism correctly is not entirely trivial.
> 
> I think it is probably worth focusing effort in that area, before jumping
> into making all the I/O related code in Nova more complicated. eg have
> someone investigate & write up recommendation in Nova docs for how to
> configure the host OS & Nova such that VMs cannot inflict an I/O denial
> of service attack on the mgmt service.

+1 that would be much nicer.

We've got a set of bugs in the tracker right now which are basically
"after the compute node has been at a loadavg of 11 for an hour,
nova-compute starts failing". Having some basic methodology to use Linux
prioritization on the worker process would mitigate those quite a bit,
and could be used by all users immediately, vs. complex nova-compute
changes which would only apply to new / upgraded deploys.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Thierry Carrez

Clayton O'Neill wrote:

Is the expectation that the ops mid-cycle would continue separately,
or be held with the meeting formerly known as the Design Summit?

Personally I’d prefer they be held together, but scheduled with the
thought that operators aren’t likely to be interested in work
sessions, but that a good number of us would be interested in
cross-project and some project-specific planning sessions.  This would
also open up the possibility of having some sessions specifically intended
for operator/developer feedback.


I'll let Tom comment on that, but the general idea in the strawman 
proposal was that the Ops "midcycle" event would be preserved as a 
separate event, but likely turn more and more regional to maximize local 
attendance. The rationale was that it's hard for ops to justify 
traveling to a contributors-branded event, while they can more easily 
justify going to the main OpenStack Summit user conference event, and to 
regional Ops gatherings.


But things are still pretty open on that front, so let's see what the 
feedback is.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Ricardo Carrillo Cruz
+1

2016-02-22 18:27 GMT+01:00 Clayton O'Neill :

> Is the expectation that the ops mid-cycle would continue separately,
> or be held with the meeting formerly known as the Design Summit?
>
> Personally I’d prefer they be held together, but scheduled with the
> thought that operators aren’t likely to be interested in work
> sessions, but that a good number of us would be interested in
> cross-project and some project-specific planning sessions.  This would
> also open up the possibility of having some sessions specifically intended
> for operator/developer feedback.
>
> On Mon, Feb 22, 2016 at 12:15 PM, Lauren Sell 
> wrote:
> >
> >> On Feb 22, 2016, at 8:52 AM, Clayton O'Neill 
> wrote:
> >>
> >> I think this is a great proposal, but like Matt I’m curious how it
> >> might impact the operator sessions that have been part of the Design
> >> Summit and the Operators Mid-Cycle.
> >>
> >> As an operator I got a lot out of the cross-project design sessions
> >> in Tokyo, but they were scheduled at the same time as the Operator
> >> sessions.  On the other hand, the work sessions clearly aren’t as
> >> useful to me.  It would be nice if this were worked out so that the new
> >> design summit replacement was in the same location, and scheduled so
> >> that the operator-specific parts were overlapping the work sessions
> >> instead of the more big picture sessions.
> >
> > Great question. The current plan is to maintain the ops summit and
> mid-cycle activities.
> >
> > The new format would allow us to reduce overlap between ops summit and
> cross project sessions at the main event, both for the operators and
> developers who want to be involved in either activity.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Andrew Laski


On Mon, Feb 22, 2016, at 12:15 PM, Mike Bayer wrote:
> 
> 
> On 02/22/2016 11:30 AM, Chris Friesen wrote:
> > On 02/22/2016 11:17 AM, Jay Pipes wrote:
> >> On 02/22/2016 10:43 AM, Chris Friesen wrote:
> >>> Hi all,
> >>>
> >>> We've recently run into some interesting behaviour that I thought I
> >>> should bring up to see if we want to do anything about it.
> >>>
> >>> Basically the problem seems to be that nova-compute is doing disk I/O
> >>> from the main thread, and if it blocks then it can block all of
> >>> nova-compute (since all eventlets will be blocked).  Examples that we've
> >>> found include glance image download, file renaming, instance directory
> >>> creation, opening the instance xml file, etc.  We've seen nova-compute
> >>> block for upwards of 50 seconds.
> >>>
> >>> Now the specific case where we hit this is not a production
> >>> environment.  It's only got one spinning disk shared by all the guests,
> >>> the guests were hammering on the disk pretty hard, the IO scheduler for
> >>> the instance disk was CFQ which seems to be buggy in our kernel.
> >>>
> >>> But the fact remains that nova-compute is doing disk I/O from the main
> >>> thread, and if the guests push that disk hard enough then nova-compute
> >>> is going to suffer.
> >>>
> >>> Given the above...would it make sense to use eventlet.tpool or similar
> >>> to perform all disk access in a separate OS thread?  There'd likely be a
> >>> bit of a performance hit, but at least it would isolate the main thread
> >>> from IO blocking.
> >>
> >> This is probably a good idea, but will require quite a bit of code
> >> change. I
> >> think in the past we've taken the expedient route of just exec'ing
> >> problematic
> >> code in a greenthread using utils.spawn().
> >
> > I'm not an expert on eventlet, but from what I've seen this isn't
> > sufficient to deal with disk access in a robust way.
> >
> > It's my understanding that utils.spawn() will result in the code running
> > in the same OS thread, but in a separate eventlet greenthread.  If that
> > code tries to access the disk via a potentially-blocking call the
> > eventlet subsystem will not jump to another greenthread.  Because of
> > this it can potentially block the whole OS thread (and thus all other
> > greenthreads running in that OS thread).
> 
> not sure what utils.spawn() does but if it is in fact an "exec" (or if 
> Jay is suggesting that an exec() be used within) then the code would be 
> in a different process entirely, and communicating with it becomes an 
> issue of pipe IO over unix sockets, which IIRC can be done non-blocking.

utils.spawn() is just a wrapper around eventlet.spawn(), mostly there to
be stubbed out in testing.


> 
> 
> >
> > I think we need to eventlet.tpool for disk IO (or else fork a whole
> > separate process).  Basically we need to ensure that the main OS thread
> > never issues a potentially-blocking syscall.
> 
> tpool would probably be easier (and more performant because no socket 
> needed).
> 
> 
> >
> > Chris
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
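
The spawn-vs-tpool distinction discussed above can be made concrete with a
small sketch. This is not Nova code, only an illustration of the eventlet
primitives in question, with an arbitrary file path: eventlet.spawn()
schedules a greenthread on the same OS thread, so a blocking disk read
stalls the whole hub, while eventlet.tpool.execute() hands the call to a
native worker thread and only suspends the calling greenthread.

    import eventlet
    eventlet.monkey_patch()

    from eventlet import tpool


    def slow_disk_read(path):
        # Regular file I/O is not "green": if the disk stalls here, the
        # OS thread stalls, and with it every greenthread on that thread.
        with open(path, 'rb') as f:
            return f.read()

    # Still blocks the hub: the greenthread runs on the main OS thread.
    data = eventlet.spawn(slow_disk_read, '/tmp/example-file').wait()

    # Does not block the hub: tpool.execute() runs the call in one of
    # eventlet's native worker threads and yields until it completes.
    data = tpool.execute(slow_disk_read, '/tmp/example-file')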


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Clayton O'Neill
Is the expectation that the ops mid-cycle would continue separately,
or be held with the meeting formerly known as the Design Summit?

Personally I’d prefer they be held together, but scheduled with the
thought that operators aren’t likely to be interested in work
sessions, but that a good number of us would be interested in
cross-project and some project-specific planning sessions.  This would
also open up the possibility of having some sessions specifically intended
for operator/developer feedback.

On Mon, Feb 22, 2016 at 12:15 PM, Lauren Sell  wrote:
>
>> On Feb 22, 2016, at 8:52 AM, Clayton O'Neill  wrote:
>>
>> I think this is a great proposal, but like Matt I’m curious how it
>> might impact the operator sessions that have been part of the Design
>> Summit and the Operators Mid-Cycle.
>>
>> As an operator I got a lot out of the cross-project design sessions
>> in Tokyo, but they were scheduled at the same time as the Operator
>> sessions.  On the other hand, the work sessions clearly aren’t as
>> useful to me.  It would be nice if this were worked out so that the new
>> design summit replacement was in the same location, and scheduled so
>> that the operator-specific parts were overlapping the work sessions
>> instead of the more big picture sessions.
>
> Great question. The current plan is to maintain the ops summit and mid-cycle 
> activities.
>
> The new format would allow us to reduce overlap between ops summit and cross 
> project sessions at the main event, both for the operators and developers who 
> want to be involved in either activity.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-22 Thread Steven Dake (stdake)


On 2/22/16, 10:13 AM, "Jeff Peeler"  wrote:

>On Mon, Feb 22, 2016 at 9:07 AM, Steven Dake (stdake) 
>wrote:
>> The issue isn't about reviewing patches in my opinion.  Obviously people
>> shouldn't jam patches through the review queue that they know will be
>> counter-productive to the majority view of the core reviewers.  If they
>>do,
>> they can easily be reverted by 3 different core reviewers.  Our core
>> reviewers are adults and don't behave in this way.  I have seen a couple
>> patches "jammed through" and not reverted, from multiple companies
>>rather
>> then just one company and it just made everyone angry.  I think folks
>>have
>> learned from that and tend not to "jan through contentious changes"
>>unless
>> it is time critical (as in breaking gate, or busted master, or milestone
>> deadline).
>>
>> The issue is around policy setting.  The PTL should NOT be a dictator
>>and
>> set policy on their own whim.  The way we set policy in Kolla (and I
>>believe
>> in other OpenStack projects) is by formal majority vote.  For example,
>>one
>> policy we have set is that we permit third party proprietary distros and
>> plugins to interact and even be a part of our Dockerfile.j2 if someone
>>steps
>> up to maintain them.  NB our specs directory is actually policy
>>"direction"
>> rather than hard policies.  That is why specs require a majority vote
>>to
>> approve.
>>
>> Folks that have responded on this thread thus far seem to have missed
>>this
>> policy point and focused on the reviewing of patches.
>
>I think the reason people are so focused on reviewing patches is
>because that is the "core" job of a core reviewer. I feel like the
>Kolla project votes a lot more on policy than other projects (I'm
>including IRC and during formal gatherings), so that may be why policy
>is not at the forefront of the discussion.
>
>> All that said, I hear what you're saying regarding motivation.  The
>>original
>> discussion was about protecting the project from a lack of diversity in
>>the
>> core reviewer team which could potentially lead to majority-rules by one
>> corporate affiliation on policy matters.  What would be an ideal
>>outcome in
>> my opinion is to keep motivation intact but meet the diversity
>>requirements
>> set forth in the governance repository to avoid a majority-rules by one
>> corporate affiliation situation.  There are two ways to do this that I
>>can
>> think of:
>>
>> Add core reviewers that aren't quite there yet, but close to meet the
>> diversity requirements
>
>(Label: solution #1)
>Perhaps instead of this, a specific group such as the drivers team (or
>bugs, whatever) can be allowed to vote on policy. This role change
>would widen the pool of available candidates while not adding people
>prematurely to core status.
>
>> Or
>> Limit core reviewers
>
>As stated in another thread, this policy wouldn't be acceptable for
>some smaller projects with limited diversity.
>
>> Or
>> Another simple solution is to permit a veto vote from any core
>>reviewer
>> within the 1 week voting window if a majority (or some other value,
>>such as
>> 35%) from one corporate affiliation votes yes on a policy decision.
>>This
>> could be gamed, but at least does not permit policy changes by one
>>corporate
>> affiliation.  With our current core review team, that means 3 people
>>could
>> vote from RHT (out of the 4 core reviewers) before triggering the veto
>>rule.
>
>(Label: solution #2)
>This sounds to me like prevention of "jamming through", which I'm not
>sure is necessary, but I do like this option best of those presented
>by Steve. Some policies simply aren't that significant, but others
>are. I think this is why it's important to bring major policy
>decisions to the mailing list. It gives people time to really think
>about their opinions/facts and broadens the scope of the discussion.
>
>> Or
>> Permit a veto vote on policy changes (I really don't like this option,
>>as it
>> gives too much "power" to one individual over the project policy)
>
>Agreed.
>
>> I'd like to hear what other core reviewers as well as Kolla developers
>>have
>> to say about the matter.
>>
>> As a final note, I am very very (!) anti-process.  A project should only
>> have as much process as it needs to succeed.  Many/most projects (not
>> OpenStack, but other projects) go overboard on process.  Process just
>> creates needless complication, so I am also not hot on setting a bunch
>>of
>> policies (which require process to execute).  The main problem with
>>process
>> is it creates too many rules for people to make sure they are compliant
>> with.  This slows the system down, and the system should be fast,
>>nimble,
>> and agile.
>>
>> When I open discussion on a policy change, it's not like I do it for my
>> health - it's because I see a serious issue coming down the road.  In
>>this
>> case I don't know precisely how to correct this particular problem,
>>which is
>> why we are having this 

Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Daniel P. Berrange
On Mon, Feb 22, 2016 at 12:07:37PM -0500, Sean Dague wrote:
> On 02/22/2016 10:43 AM, Chris Friesen wrote:
> > Hi all,
> > 
> > We've recently run into some interesting behaviour that I thought I
> > should bring up to see if we want to do anything about it.
> > 
> > Basically the problem seems to be that nova-compute is doing disk I/O
> > from the main thread, and if it blocks then it can block all of
> > nova-compute (since all eventlets will be blocked).  Examples that we've
> > found include glance image download, file renaming, instance directory
> > creation, opening the instance xml file, etc.  We've seen nova-compute
> > block for upwards of 50 seconds.
> > 
> > Now the specific case where we hit this is not a production
> > environment.  It's only got one spinning disk shared by all the guests,
> > the guests were hammering on the disk pretty hard, the IO scheduler for
> > the instance disk was CFQ which seems to be buggy in our kernel.
> > 
> > But the fact remains that nova-compute is doing disk I/O from the main
> > thread, and if the guests push that disk hard enough then nova-compute
> > is going to suffer.
> > 
> > Given the above...would it make sense to use eventlet.tpool or similar
> > to perform all disk access in a separate OS thread?  There'd likely be a
> > bit of a performance hit, but at least it would isolate the main thread
> > from IO blocking.
> 
> Making nova-compute more robust is fine, though the reality is once you
> IO starve a system, a lot of stuff is going to fall over weird.
> 
> So there has to be a tradeoff of the complexity of any new code vs. what
> it gains. I think individual patches should be evaluated as such, or a
> spec if this is going to get really invasive.

There are OS level mechanisms (eg cgroups blkio controller) for doing
I/O priorization that you could use to give Nova higher priority over
the VMs, to reduce (if not eliminate) the possibility that a busy VM
can inflict a denial of service on the mgmt layer.  Of course figuring
out how to use that mechanism correctly is not entirely trivial.

I think it is probably worth focusing effort in that area, before jumping
into making all the I/O related code in Nova more complicated. eg have
someone investigate & write up recommendation in Nova docs for how to
configure the host OS & Nova such that VMs cannot inflict an I/O denial
of service attack on the mgmt service.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
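
As a rough illustration of the cgroups approach Daniel describes, the
cgroup-v1 blkio controller exposes per-group proportional weights
(effective under the CFQ scheduler). The sketch below is Python for
consistency; the group names, weights, and PIDs are invented, and a real
deployment would normally delegate this to systemd or libvirt rather than
writing sysfs files directly.

    import os

    BLKIO_ROOT = '/sys/fs/cgroup/blkio'


    def make_blkio_group(name, weight, pids):
        # Create a blkio cgroup, set its proportional weight (10-1000
        # under CFQ; higher wins), and move the given PIDs into it.
        path = os.path.join(BLKIO_ROOT, name)
        if not os.path.isdir(path):
            os.mkdir(path)
        with open(os.path.join(path, 'blkio.weight'), 'w') as f:
            f.write(str(weight))
        for pid in pids:
            with open(os.path.join(path, 'tasks'), 'w') as f:
                f.write(str(pid))

    # e.g. favour the management plane over guest I/O:
    # make_blkio_group('nova', 800, [pid_of_nova_compute])
    # make_blkio_group('guests', 100, pids_of_qemu_processes)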


Re: [openstack-dev] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-22 Thread David Moreau Simard
As a reminder, we will proceed with a formal vote on $subject at the
next RDO meeting
on Wednesday, 24th Feb, 2016 1500 UTC [1]. Feel free to join us on
#rdo on freenode.

[1]: https://etherpad.openstack.org/p/RDO-Meeting

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Wed, Feb 17, 2016 at 11:27 AM, David Moreau Simard  wrote:
> Greetings,
>
> (Note: cross-posted between rdo-list and openstack-dev to reach a
> larger audience)
>
> Today, because of the branding and the name "RDO Manager", you might
> think that it's something other than TripleO - either something
> entirely different or perhaps with downstream patches baked in.
> You would not be the only one because the community, the users and the
> developers alike have shared their confusion on that topic.
>
> The truth is, as it stands right now, "RDO Manager" really is "TripleO".
> There are no code or documentation differences.
>
> I feel the only thing that is different is the strategy around how we
> test TripleO to ensure the stability of RDO packages but it's already
> in the process of being sent upstream [1] because we're convinced it's
> the best way forward.
>
> Historically, RDO Manager and TripleO were different things.
> Today this is no longer the case and we plan on keeping it that way.
>
> With this in mind, we would like to drop the RDO manager branding and
> use TripleO instead.
> Not only would we clear the confusion on the topic of what RDO Manager
> really is but it would also strengthen the TripleO name.
>
> We would love the RDO community to chime in on this and give their
> feedback as to whether or not this is a good initiative.
> We will proceed to a formal vote on $subject at the next RDO meeting
> on Wednesday, 24th Feb, 2016 1500 UTC [2]. Feel free to join us on
> #rdo on freenode.
>
> Thanks,
>
> [1]: https://review.openstack.org/#/c/276810/
> [2]: https://etherpad.openstack.org/p/RDO-Meeting
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-22 Thread Lauren Sell

> On Feb 22, 2016, at 8:52 AM, Clayton O'Neill  wrote:
> 
> I think this is a great proposal, but like Matt I’m curious how it
> might impact the operator sessions that have been part of the Design
> Summit and the Operators Mid-Cycle.
> 
> As an operator I got a lot out of the cross-project design sessions
> in Tokyo, but they were scheduled at the same time as the Operator
> sessions.  On the other hand, the work sessions clearly aren’t as
> useful to me.  It would be nice if this were worked out so that the new
> design summit replacement was in the same location, and scheduled so
> that the operator-specific parts were overlapping the work sessions
> instead of the more big picture sessions.

Great question. The current plan is to maintain the ops summit and mid-cycle 
activities. 

The new format would allow us to reduce overlap between ops summit and cross 
project sessions at the main event, both for the operators and developers who 
want to be involved in either activity.

> 
> On Mon, Feb 22, 2016 at 11:32 AM, Matt Fischer  wrote:
>> Cross-post to openstack-operators...
>> 
>> As an operator, there's value in me attending some of the design summit
>> sessions to provide feedback and guidance. But I don't really need to be in
>> the room for a week discussing minutiae of implementations. So I probably
>> can't justify 2 extra trips just to give a few hours of feedback/discussion.
>> If this is indeed the case for some other folks we'll need to do a good job
>> of collecting operator feedback at the operator sessions (perhaps hopefully
>> with reps from each major project?). We don't want projects operating in a
>> vacuum when it comes to major decisions.
>> 
>> Also where do the current operators design sessions and operators midcycle
>> fit in here?
>> 
>> (apologies for not replying directly to the first message, gmail seems to
>> have lost it).
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Mike Bayer



On 02/22/2016 11:30 AM, Chris Friesen wrote:

On 02/22/2016 11:17 AM, Jay Pipes wrote:

On 02/22/2016 10:43 AM, Chris Friesen wrote:

Hi all,

We've recently run into some interesting behaviour that I thought I
should bring up to see if we want to do anything about it.

Basically the problem seems to be that nova-compute is doing disk I/O
from the main thread, and if it blocks then it can block all of
nova-compute (since all eventlets will be blocked).  Examples that we've
found include glance image download, file renaming, instance directory
creation, opening the instance xml file, etc.  We've seen nova-compute
block for upwards of 50 seconds.

Now the specific case where we hit this is not a production
environment.  It's only got one spinning disk shared by all the guests,
the guests were hammering on the disk pretty hard, the IO scheduler for
the instance disk was CFQ which seems to be buggy in our kernel.

But the fact remains that nova-compute is doing disk I/O from the main
thread, and if the guests push that disk hard enough then nova-compute
is going to suffer.

Given the above...would it make sense to use eventlet.tpool or similar
to perform all disk access in a separate OS thread?  There'd likely be a
bit of a performance hit, but at least it would isolate the main thread
from IO blocking.


This is probably a good idea, but will require quite a bit of code
change. I
think in the past we've taken the expedient route of just exec'ing
problematic
code in a greenthread using utils.spawn().


I'm not an expert on eventlet, but from what I've seen this isn't
sufficient to deal with disk access in a robust way.

It's my understanding that utils.spawn() will result in the code running
in the same OS thread, but in a separate eventlet greenthread.  If that
code tries to access the disk via a potentially-blocking call the
eventlet subsystem will not jump to another greenthread.  Because of
this it can potentially block the whole OS thread (and thus all other
greenthreads running in that OS thread).


not sure what utils.spawn() does but if it is in fact an "exec" (or if 
Jay is suggesting that an exec() be used within) then the code would be 
in a different process entirely, and communicating with it becomes an 
issue of pipe IO over unix sockets, which IIRC can be done non-blocking.





I think we need to eventlet.tpool for disk IO (or else fork a whole
separate process).  Basically we need to ensure that the main OS thread
never issues a potentially-blocking syscall.


tpool would probably be easier (and more performant because no socket 
needed).





Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Rdo-list] [TripleO] Should we rename "RDO Manager" to "TripleO" ?

2016-02-22 Thread David Moreau Simard
On Mon, Feb 22, 2016 at 9:46 AM, Dan Prince  wrote:
> Does this mean RDO Manager will be adopting use of the nicely rounded
> owl mascot as well?
>
> http://tripleo.org/
>
> As far as upstream branding goes this would really make it crystal
> clear to me... :)

"RDO Manager" won't exist anymore. People that have been using "RDO
Manager" have been using TripleO.
So, yes, owls will be had. :)

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.messaging 4.4.0 release (mitaka)

2016-02-22 Thread no-reply
We are content to announce the release of:

oslo.messaging 4.4.0: Oslo Messaging API

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

Changes in oslo.messaging 4.3.0..4.4.0
--

6bc1f01 Updated from global requirements
1bca96f fix override_pool_size
22fea72 Remove executor callback
c10ee15 Log format change in simulator.py
9cfaf50 Fix kombu accept different TTL since version 3.0.25
e384dca .testr.conf: revert workaround of testtools bug
d260744 Remove aioeventlet executor
c325faf Remove bandit.yaml in favor of defaults

Diffstat (except docs and test files)
-

.testr.conf  |   2 +-
bandit.yaml  | 362 ---
oslo_messaging/_drivers/impl_rabbit.py   |  16 +-
oslo_messaging/_executors/base.py|   5 +-
oslo_messaging/_executors/impl_aioeventlet.py|  74 -
oslo_messaging/_executors/impl_pooledexecutor.py |   2 +-
oslo_messaging/dispatcher.py |   9 +-
oslo_messaging/notify/dispatcher.py  |  23 +-
oslo_messaging/rpc/dispatcher.py |  22 +-
oslo_messaging/server.py |   4 +-
requirements.txt |   6 +-
setup.cfg|   1 -
test-requirements.txt|   2 +-
tools/simulator.py   |   4 +-
tox.ini  |   2 +-
18 files changed, 74 insertions(+), 565 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c09392b..86e177f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -26 +26 @@ cachetools>=1.0.0 # MIT License
-eventlet>=0.18.2 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT
@@ -47,4 +46,0 @@ oslo.middleware>=3.0.0 # Apache-2.0
-
-# needed by the aioeventlet executor
-aioeventlet>=0.4 # Apache-2.0
-trollius>=1.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index e3fc95d..2d4dd83 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -26 +26 @@ pyzmq>=14.3.1 # LGPL+BSD
-kafka-python>=0.9.2 # Apache-2.0
+kafka-python<1.0.0,>=0.9.5 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
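
With the aioeventlet executor removed in this release, code that asked for
it must request one of the remaining executors when building a server. A
minimal sketch (topic, server, and endpoint names invented for
illustration):

    import oslo_messaging
    from oslo_config import cfg


    class PingEndpoint(object):
        def ping(self, ctxt, arg):
            return arg

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo-topic', server='demo-server')
    # The executor must now be e.g. 'blocking' or 'eventlet'; 'aioeventlet'
    # is gone as of 4.4.0.
    server = oslo_messaging.get_rpc_server(
        transport, target, [PingEndpoint()], executor='eventlet')
    server.start()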


[openstack-dev] [release][oslo] tooz 1.32.0 release (mitaka)

2016-02-22 Thread no-reply
We are happy to announce the release of:

tooz 1.32.0: Coordination library for distributed systems.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.31.0..1.32.0
--

ee79cb7 Raises proper error when unwatching a group

Diffstat (except docs and test files)
-

tooz/coordination.py| 33 ++---
2 files changed, 48 insertions(+), 3 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] taskflow 1.29.0 release (mitaka)

2016-02-22 Thread no-reply
We are amped to announce the release of:

taskflow 1.29.0: Taskflow structured state management library.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 1.28.0..1.29.0
--

d0f1f6b Updated from global requirements
a17c4d7 Add missing direct dependency for sqlalchemy-utils

Diffstat (except docs and test files)
-

requirements.txt  | 3 +++
test-requirements.txt | 2 +-
2 files changed, 4 insertions(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 2cad471..91d55b4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -52,0 +53,3 @@ debtcollector>=1.2.0 # Apache-2.0
+
+# For sqlalchemy persistence backend
+sqlalchemy-utils # BSD License
diff --git a/test-requirements.txt b/test-requirements.txt
index f724ea3..10eeba7 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -32 +32 @@ PyMySQL>=0.6.2 # MIT License
-eventlet>=0.18.2 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
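
The sqlalchemy-utils requirement above backs the SQLAlchemy persistence
backend. A minimal sketch of obtaining and initializing that backend,
following the usual taskflow pattern (the in-memory sqlite URL is only for
illustration):

    import contextlib

    from taskflow.persistence import backends

    backend = backends.fetch({'connection': 'sqlite://'})
    with contextlib.closing(backend.get_connection()) as conn:
        conn.upgrade()  # create the taskflow tables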


Re: [openstack-dev] [kolla] discussion about core reviewer limitations by company

2016-02-22 Thread Jeff Peeler
On Mon, Feb 22, 2016 at 9:07 AM, Steven Dake (stdake)  wrote:
> The issue isn't about reviewing patches in my opinion.  Obviously people
> shouldn't jam patches through the review queue that they know will be
> counter-productive to the majority view of the core reviewers.  If they do,
> they can easily be reverted by 3 different core reviewers.  Our core
> reviewers are adults and don't behave in this way.  I have seen a couple
> patches "jammed through" and not reverted, from multiple companies rather
> than just one company and it just made everyone angry.  I think folks have
> learned from that and tend not to "jam through contentious changes" unless
> it is time critical (as in breaking gate, or busted master, or milestone
> deadline).
>
> The issue is around policy setting.  The PTL should NOT be a dictator and
> set policy on their own whim.  The way we set policy in Kolla (and I believe
> in other OpenStack projects) is by formal majority vote.  For example, one
> policy we have set is that we permit third party proprietary distros and
> plugins to interact and even be a part of our Dockerfile.j2 if someone steps
> up to maintain them.  NB our specs directory is actually policy "direction"
> rather than hard policies.  That is why specs require a majority vote to
> approve.
>
> Folks that have responded on this thread thus far seem to have missed this
> policy point and focused on the reviewing of patches.

I think the reason people are so focused on reviewing patches is
because that is the "core" job of a core reviewer. I feel like the
Kolla project votes a lot more on policy than other projects (I'm
including IRC and during formal gatherings), so that may be why policy
is not at the forefront of the discussion.

> All that said, I hear what you're saying regarding motivation.  The original
> discussion was about protecting the project from a lack of diversity in the
> core reviewer team which could potentially lead to majority-rules by one
> corporate affiliation on policy matters.  What would be an ideal outcome in
> my opinion is to keep motivation intact but meet the diversity requirements
> set forth in the governance repository to avoid a majority-rules by one
> corporate affiliation situation.  There are two ways to do this that I can
> think of:
>
> Add core reviewers that aren't quite there yet, but close to meet the
> diversity requirements

(Label: solution #1)
Perhaps instead of this, a specific group such as the drivers team (or
bugs, whatever) can be allowed to vote on policy. This role change
would widen the pool of available candidates while not adding people
prematurely to core status.

> Or
> Limit core reviewers

As stated in another thread, this policy wouldn't be acceptable for
some smaller projects with limited diversity.

> Or
> Another simple solution is to permit a veto vote from any core reviewer
> within the 1 week voting window if a majority (or some other value, such as
> 35%) from one corporate affiliation votes yes on a policy decision.  This
> could be gamed, but at least does not permit policy changes by one corporate
> affiliation.  With our current core review team, that means 3 people could
> vote from RHT (out of the 4 core reviewers) before triggering the veto rule.

(Label: solution #2)
This sounds to me like prevention of "jamming through", which I'm not
sure is necessary, but I do like this option best of those presented
by Steve. Some policies simply aren't that significant, but others
are. I think this is why it's important to bring major policy
decisions to the mailing list. It gives people time to really think
about their opinions/facts and broadens the scope of the discussion.

> Or
> Permit a veto vote on policy changes (I really don't like this option, as it
> gives too much "power" to one individual over the project policy)

Agreed.

> I'd like to hear what other core reviewers as well as Kolla developers have
> to say about the matter.
>
> As a final note, I am very very (!) anti-process.  A project should only
> have as much process as it needs to succeed.  Many/most projects (not
> OpenStack, but other projects) go overboard on process.  Process just
> creates needless complication, so I am also not hot on setting a bunch of
> policies (which require process to execute).  The main problem with process
> is it creates too many rules for people to make sure they are compliant
> with.  This slows the system down, and the system should be fast, nimble,
> and agile.
>
> When I open discussion on a policy change, it's not like I do it for my
> health - it's because I see a serious issue coming down the road.  In this
> case I don't know precisely how to correct this particular problem, which is
> why we are having this discussion.  I'd prefer if folks focus on what we can
> do to fix it, rather than saying "no, let's not do anything".

Although you mentioned you didn't want the PTL to serve as a dictator,
I think that having the PTL 

[openstack-dev] [release][oslo] oslo.service 1.6.0 release (mitaka)

2016-02-22 Thread no-reply
We are tickled pink to announce the release of:

oslo.service 1.6.0: oslo.service library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 1.5.0..1.6.0


0f9f072 Updated from global requirements
db1fc24 Allow the backdoor to serve from a local unix domain socket
de959dc Updated from global requirements

Diffstat (except docs and test files)
-

oslo_service/_options.py |  9 +++-
oslo_service/eventlet_backdoor.py| 61 
requirements.txt |  2 +-
4 files changed, 110 insertions(+), 20 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 4311fb5..26fce4d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ WebOb>=1.2.3 # MIT
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslotest 2.2.0 release (mitaka)

2016-02-22 Thread no-reply
We are gleeful to announce the release of:

oslotest 2.2.0: Oslo test framework

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslotest

With package available at:

https://pypi.python.org/pypi/oslotest

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

For more details, please see below.

Changes in oslotest 2.1.0..2.2.0


936a99d move unit tests into the oslotest package
130d2a0 Updated from global requirements
7f098af Hack to get back stopall cleanup behavior feature

Diffstat (except docs and test files)
-

oslotest/base.py   |  27 +++--
test-requirements.txt  |   2 +-
20 files changed, 580 insertions(+), 565 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 842813d..e06a946 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
-oslo.config>=3.2.0 # Apache-2.0
+oslo.config>=3.4.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
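
The "stopall cleanup behavior" change means BaseTestCase once again runs
mock.patch.stopall() during test cleanup, so a patch started without a
matching addCleanup no longer leaks into later tests. A small sketch (the
patch target here is arbitrary):

    import os

    import mock
    from oslotest import base


    class StopallExampleTest(base.BaseTestCase):

        def test_unstopped_patch_is_cleaned_up(self):
            mock.patch('os.getpid', return_value=4242).start()
            self.assertEqual(4242, os.getpid())
            # No addCleanup() needed: BaseTestCase calls
            # mock.patch.stopall() when the test finishes.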


[openstack-dev] [release][oslo] oslo.privsep 1.2.0 release (mitaka)

2016-02-22 Thread no-reply
We are tickled pink to announce the release of:

oslo.privsep 1.2.0: OpenStack library for privilege separation

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.1.0..1.2.0


6932130 Updated from global requirements
4fba8f5 Ensure fdopen uses greenio object under eventlet

Diffstat (except docs and test files)
-

oslo_privsep/daemon.py | 18 --
requirements.txt   |  2 ++
2 files changed, 18 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 5863e2f..d49312a 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11,0 +12,2 @@ cffi # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT
+greenlet>=0.3.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.config 3.8.0 release (mitaka)

2016-02-22 Thread no-reply
We are pumped to announce the release of:

oslo.config 3.8.0: Oslo Configuration API

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

Changes in oslo.config 3.7.0..3.8.0
---

7132173 Hooks around mutate_config_files
f6c668b Add hostname config type
89b9547 Add config_dirs property with a list of directories
3dfd4a4 Fix wrong check with non-None value when format group

Diffstat (except docs and test files)
-

oslo_config/cfg.py  | 45 ++--
oslo_config/generator.py|  3 +-
oslo_config/sphinxext.py|  4 +--
oslo_config/types.py| 49 ++
9 files changed, 269 insertions(+), 7 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
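
The new hostname type validates values at parse time. A minimal sketch,
assuming the type is exposed as types.Hostname (the option name here is
invented):

    from oslo_config import cfg
    from oslo_config import types

    opts = [
        cfg.Opt('api_host',
                type=types.Hostname(),
                default='localhost',
                help='Hostname to bind the API to; values that are not '
                     'valid hostnames are rejected when parsed.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts)
    print(CONF.api_host)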


[openstack-dev] [release][oslo] oslo.concurrency 3.5.0 release (mitaka)

2016-02-22 Thread no-reply
We are chuffed to announce the release of:

oslo.concurrency 3.5.0: Oslo Concurrency library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.concurrency

With package available at:

https://pypi.python.org/pypi/oslo.concurrency

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.concurrency

For more details, please see below.

Changes in oslo.concurrency 3.4.0..3.5.0


6c0f7ff Updated from global requirements
9d28946 Make ProcessExecutionError picklable
fa04ee7 Updated from global requirements

Diffstat (except docs and test files)
-

oslo_concurrency/processutils.py | 15 +++
test-requirements.txt|  2 +-
3 files changed, 28 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index b1a22fa..f9925e1 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15 +15 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
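
"Make ProcessExecutionError picklable" means the exception can now
round-trip through pickle, which matters when it crosses process
boundaries (e.g. multiprocessing workers). A short sketch with invented
command output:

    import pickle

    from oslo_concurrency import processutils

    exc = processutils.ProcessExecutionError(
        cmd='ls /nonexistent', exit_code=2,
        stdout='', stderr='No such file or directory')

    copy = pickle.loads(pickle.dumps(exc))
    assert copy.exit_code == 2
    assert copy.cmd == 'ls /nonexistent'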


[openstack-dev] [release][oslo] oslo.log 3.1.0 release (mitaka)

2016-02-22 Thread no-reply
We are pleased to announce the release of:

oslo.log 3.1.0: oslo.log library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

3.1.0
^


Upgrade Notes
*

* The deprecated log_format configuration option has been removed.


Other Notes
***

* Switch to reno for managing release notes.

Changes in oslo.log 3.0.0..3.1.0


49df1e2 Add release note for removed log_format option
d7d60c1 Updated from global requirements
41670c1 add page for release notes for unreleased versions
c4b3de9 add a release note about using reno
ad9f8fc Add reno for release notes management

Diffstat (except docs and test files)
-

.gitignore |   3 +
oslo_log/version.py|  18 ++
releasenotes/notes/.placeholder|   0
releasenotes/notes/add-reno-e4fedb11ece56f1e.yaml  |   3 +
.../notes/remove-log-format-b4b949701cee3315.yaml  |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 274 +
releasenotes/source/index.rst  |   9 +
releasenotes/source/liberty.rst|   6 +
releasenotes/source/unreleased.rst |   5 +
test-requirements.txt  |   1 +
tox.ini|   4 +
13 files changed, 326 insertions(+)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 8674891..92cba49 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -22,0 +23 @@ oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0
+reno>=0.1.1 # Apache2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.context 2.1.0 release (mitaka)

2016-02-22 Thread no-reply
We are tickled pink to announce the release of:

oslo.context 2.1.0: Oslo Context library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.context

With package available at:

https://pypi.python.org/pypi/oslo.context

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.context

For more details, please see below.

Changes in oslo.context 2.0.0..2.1.0


0327388 Agnostic approach to construct context from_dict
01aaeae Add common oslo.log format parameters

Diffstat (except docs and test files)
-

oslo_context/context.py| 30 --
2 files changed, 54 insertions(+), 14 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
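
The "agnostic" from_dict change concerns the dict round-trip on the base
RequestContext, sketched below with invented values (the exact keyword
arguments accepted may vary between releases):

    from oslo_context import context

    ctx = context.RequestContext(user='demo-user',
                                 tenant='demo-project',
                                 request_id='req-1234')
    payload = ctx.to_dict()

    restored = context.RequestContext.from_dict(payload)
    assert restored.request_id == 'req-1234'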


Re: [openstack-dev] [nova] nova-compute blocking main thread under heavy disk IO

2016-02-22 Thread Sean Dague
On 02/22/2016 10:43 AM, Chris Friesen wrote:
> Hi all,
> 
> We've recently run into some interesting behaviour that I thought I
> should bring up to see if we want to do anything about it.
> 
> Basically the problem seems to be that nova-compute is doing disk I/O
> from the main thread, and if it blocks then it can block all of
> nova-compute (since all eventlets will be blocked).  Examples that we've
> found include glance image download, file renaming, instance directory
> creation, opening the instance xml file, etc.  We've seen nova-compute
> block for upwards of 50 seconds.
> 
> Now the specific case where we hit this is not a production
> environment.  It's only got one spinning disk shared by all the guests,
> the guests were hammering on the disk pretty hard, the IO scheduler for
> the instance disk was CFQ which seems to be buggy in our kernel.
> 
> But the fact remains that nova-compute is doing disk I/O from the main
> thread, and if the guests push that disk hard enough then nova-compute
> is going to suffer.
> 
> Given the above...would it make sense to use eventlet.tpool or similar
> to perform all disk access in a separate OS thread?  There'd likely be a
> bit of a performance hit, but at least it would isolate the main thread
> from IO blocking.

Making nova-compute more robust is fine, though the reality is once you
IO starve a system, a lot of stuff is going to fall over weird.

So there has to be a tradeoff of the complexity of any new code vs. what
it gains. I think individual patches should be evaluated as such, or a
spec if this is going to get really invasive.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

