Re: [openstack-dev] [heat] convergence cancel messages

2016-02-23 Thread Clint Byrum
Excerpts from Anant Patil's message of 2016-02-23 23:08:31 -0800:
> Hi,
> 
> I would like to discuss various approaches towards fixing bug
> https://launchpad.net/bugs/1533176
> 
> When convergence is on, and if the stack is stuck, there is no way to
> cancel the existing request. This feature was not implemented in
> convergence, as the user can simply issue another update on an
> in-progress stack. But if a resource worker is stuck, the new update
> will wait forever on it and the update will not be effective.
> 
> The solution is to implement a cancel request. Since the work for a
> stack is distributed among heat engines, the cancel request will not
> work the way it does in the legacy engine. Many or all of the heat
> engines might be running worker threads to provision a stack.
> 
> I can think of two options which I would like to discuss:
> 
> (a) When a user-triggered cancel request is received, set the stack's
> current traversal to None or something else other than the current
> traversal. With this, new check-resources/workers will never be
> triggered. This is okay as long as no worker is stuck. The existing
> workers will finish running, no new check-resource (workers) will be
> triggered, and it will be a graceful cancel. But the workers that are
> stuck will remain stuck until the stack times out. To take care of
> such cases, we will have to implement logic to "poll" the DB at
> regular intervals (maybe at each step() of the scheduler task) and
> bail out if the current traversal is updated. Basically, each worker
> will "poll" the DB to see if the current traversal is still valid and,
> if not, stop itself. The drawback of this approach is that all the
> workers will be hitting the DB and incurring significant overhead.
> Besides, all the stack workers, irrespective of whether they will be
> cancelled or not, will keep hitting the DB. The advantage is that it
> is probably easier to implement. Also, if the worker is stuck in a
> particular "step", this approach will not work.
> 

I think this is the simplest option. And if the polling gets to be too
much, you can implement an observer pattern where one worker is just
assigned to poll the traversal and if it changes, RPC to the known
active workers that they should cancel any jobs using a now-cancelled
stack version.
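
Something along these lines (a purely illustrative sketch, not existing
Heat code -- all names are made up, and it assumes eventlet and a
POLL_INTERVAL constant):

import eventlet

def observe_traversal(context, stack_id, traversal_id):
    # One observer green thread per in-progress stack, instead of every
    # worker polling the DB itself.
    while True:
        stack = stack_object.Stack.get_by_id(context, stack_id)
        if stack.current_traversal != traversal_id:
            # Traversal was cancelled/replaced: tell only the engines
            # known to be running workers for this stack.
            for engine_id in engines_working_on(stack_id):
                worker_rpc.cancel_workers(context, engine_id, stack_id)
            return
        eventlet.sleep(POLL_INTERVAL)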



Re: [openstack-dev] [Watcher] Nominating Vincent Francoise to Watcher Core

2016-02-23 Thread Taylor D Peoples
+1 - sorry for the delay.

Taylor Peoples

David TARDIVEL  wrote on 02/17/2016 08:05:24 AM:

> From: David TARDIVEL 
> To: "openstack-dev@lists.openstack.org"

> Date: 02/17/2016 08:05 AM
> Subject: [openstack-dev] [Watcher] Nominating Vincent Francoise to
> Watcher Core
>
> Team,
>
> I’d like to promote Vincent Francoise to the core team. Vincent has
> done great work on code reviews and has proposed a lot of patchsets.
> He is currently the most active non-core reviewer on the Watcher
> project and, today, he has a very good vision of Watcher.
> I think he would make an excellent addition to the team.
>
> Please vote
>
> David TARDIVEL
> b<>COM
>


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-23 Thread Clint Byrum
Excerpts from Jay Pipes's message of 2016-02-23 16:10:46 -0800:
> On 02/22/2016 04:23 AM, Sylvain Bauza wrote:
> > I won't argue against performance here. You made a very nice PoC for
> > testing scaling DB writes within a single python process and I trust
> > your findings. While I would be naturally preferring some shared-nothing
> > approach that can horizontally scale, one could mention that we can do
> > the same with Galera clusters.
> 
> a) My benchmarks aren't single process comparisons. They are 
> multi-process benchmarks.
> 
> b) The approach I've taken is indeed shared-nothing. The scheduler 
> processes do not share any data whatsoever.
> 

I think this is a matter of perspective. What I read from Sylvain's
message was that the approach you've taken shares state in a database,
and shares access to all compute nodes.

I also read into Sylvain's comments that what he was referring to was
a system where the compute nodes divide up the resources and never share
anything at all.

> c) Galera isn't horizontally scalable. Never was, never will be. That 
> isn't its strong-suit. Galera is best for having a 
> synchronously-replicated database cluster that is incredibly easy to 
> manage and administer, but it isn't a performance panacea. Its focus is 
> on availability, not performance :)
> 

I also think this is a matter of perspective. Galera is actually
fantastically horizontally scalable in any situation where you have a
very high ratio of reads to writes with a need for consistent reads.

However, for OpenStack's needs, we are typically pretty low on that ratio.

> > That said, most of the operators run a controller/compute situation
> > where all the services but the compute node are hosted on 1:N hosts.
> > Implementing the resource-providers-scheduler BP (and only that one)
> > will dramatically increase the number of writes we do on the scheduler
> > process (ie. on the "controller" - quoting because there is no notion of
> > a "controller" in Nova, it's just a deployment choice).
> 
> Yup, no doubt about it. It won't increase the *total* number of writes 
> the system makes, just the concentration of those writes into the 
> scheduler processes. You are trading increased writes in the scheduler 
> for the challenges inherent in keeping a large distributed cache system 
> valid and fresh (which itself introduces a different kind of writes).
> 

Funny enough, I think of Galera as a large distributed cache that is
always kept valid and fresh. The challenges of doing this for a _busy_
cache are not unique to Galera.

> > That's a big game changer for operators who are currently capping their
> > capacity by adding more conductors. It would require them to do some DB
> > modifications to be able to scale their capacity. I'm not against that,
> > I just say it's a big thing that we need to consider and properly
> > communicate if agreed.
> 
> Agreed completely. I will say, however, that on a 1600 compute node 
> simulation (~60K variably-sized instances), an untuned stock MySQL 5.6 
> database with 128MB InnoDB buffer pool size barely breaks a sweat on my 
> local machine.
> 

That agrees with what I've seen as well. We're talking about tables of
integers for the most part, so your least expensive SSDs can keep up
with this load for many, many thousands of computes.

I'd actually also be interested in whether this has the potential to reduce
the demand on the message bus. I've been investigating this for a while, and
I found that RabbitMQ will happily consume 5 high-end CPU cores on a single
box just to serve the needs of 1000 idle compute nodes. I am sorry that I
haven't read enough of the details in your proposal, but doesn't this
mean there'd be quite a bit less load on the MQ if the only time
messages are sent is for direct RPC dispatches and error reporting?



Re: [openstack-dev] [heat] Questions on template-validate

2016-02-23 Thread Anant Patil
On 23-Feb-16 20:34, Jay Dobies wrote:
> I am going to bring this up in the team meeting tomorrow, but I figured 
> I'd send it out here as well. Rather than retype the issue, please look at:
> 
> https://bugs.launchpad.net/heat/+bug/1548856
> 
> My question is what the desired behavior of template-validate should be, 
> at least from a historical standpoint of what we've honored in the past. 
> Before I propose/implement a fix, I want to make sure I'm not violating 
> any unwritten expectations on how it should work.
> 
> On a related note -- and this is going to sound really stupid that I 
> don't know this answer -- but are there any docs on actually using Heat? 
> I was looking for docs that may explain what the expectation of 
> template-validate was but I couldn't really find any.
> 
> The wiki links to a number of developer-centric docs (HOT guide, 
> developer process, etc.). I found the (what I believe to be current) 
> REST API docs [1] but the only real description is "Validates a template."
> 
> Thanks  :D
> 
> 
> [1] http://developer.openstack.org/api-ref-orchestration-v1.html
> 

Some time back, I too went through this, and got adjusted to the thought
that template validation is really for validating the syntax and
structure of a template. Whether the values provided are valid or not
will be decided when the stack is validated. The values that depend on
resource plugins to fetch data from other services are not validated,
and to me that makes sense. It helps users quickly test and develop
templates that are syntactically and structurally valid, without having
to depend on resource plugins and service availability. IMO, it would be
better to document the way template-validate works than to make it a
heavyweight request that depends on plugins and services.
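
As a concrete illustration (the values and the client call here are only
an example, not a recommendation -- e.g. python-heatclient's
stacks.validate), a template like the following passes template-validate
even though the image and flavor don't exist, because checking those
values would require the resource plugins to call out to glance and nova:

template = {
    'heat_template_version': '2015-04-30',
    'resources': {
        'server': {
            'type': 'OS::Nova::Server',
            'properties': {
                'image': 'no-such-image',    # not checked against glance here
                'flavor': 'no-such-flavor',  # not checked against nova here
            },
        },
    },
}

# e.g. heat.stacks.validate(template=template) -> succeeds: the syntax and
# structure are valid; the bogus values are only caught later.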



[openstack-dev] [heat] convergence cancel messages

2016-02-23 Thread Anant Patil
Hi,

I would like to discuss various approaches towards fixing bug
https://launchpad.net/bugs/1533176

When convergence is on, and if the stack is stuck, there is no way to
cancel the existing request. This feature was not implemented in
convergence, as the user can simply issue another update on an
in-progress stack. But if a resource worker is stuck, the new update
will wait forever on it and the update will not be effective.

The solution is to implement a cancel request. Since the work for a
stack is distributed among heat engines, the cancel request will not
work the way it does in the legacy engine. Many or all of the heat
engines might be running worker threads to provision a stack.

I can think of two options which I would like to discuss:

(a) When a user-triggered cancel request is received, set the stack's
current traversal to None or something else other than the current
traversal. With this, new check-resources/workers will never be
triggered. This is okay as long as no worker is stuck. The existing
workers will finish running, no new check-resource (workers) will be
triggered, and it will be a graceful cancel. But the workers that are
stuck will remain stuck until the stack times out. To take care of such
cases, we will have to implement logic to "poll" the DB at regular
intervals (maybe at each step() of the scheduler task) and bail out if
the current traversal is updated. Basically, each worker will "poll"
the DB to see if the current traversal is still valid and, if not, stop
itself. The drawback of this approach is that all the workers will be
hitting the DB and incurring significant overhead. Besides, all the
stack workers, irrespective of whether they will be cancelled or not,
will keep hitting the DB. The advantage is that it is probably easier
to implement. Also, if the worker is stuck in a particular "step", this
approach will not work.
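
To make option (a) concrete, a rough sketch of the per-step check (the
object and exception names are purely illustrative, not actual Heat code):

def traversal_still_valid(context, stack_id, my_traversal):
    # One cheap, indexed read per step; False means the stack's current
    # traversal was reset (e.g. set to None on cancel).
    stack = stack_object.Stack.get_by_id(context, stack_id)
    return stack.current_traversal == my_traversal

def step(self):
    if not traversal_still_valid(self.context, self.stack_id,
                                 self.traversal_id):
        raise CancelOperation()  # bail out of this worker gracefully
    # ... otherwise carry on with the real work for this step ...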

(b) Another approach is to send a cancel message to all the heat engines
when one of them receives a stack-cancel request. The idea is to use the
thread group manager in each engine to keep track of the threads running
for a stack, and stop the thread group when a cancel message is received.
The advantage is that the messages to cancel stack workers are sent only
when required, and there is no other overhead. The drawback is that the
cancel message is 'broadcast' to all heat engines, even if they are not
running any workers for the given stack; though in such cases it will
just be a no-op for the heat-engine (the message will be gracefully
discarded).
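
A rough sketch of the idea behind option (b) -- purely illustrative names;
the actual implementation is in the review linked below:

def cancel_stack_update(self, context, stack_id):
    # Broadcast to every heat-engine listening on the engine topic.
    self._worker_rpc.cast_to_all_engines(context, 'stop_stack_workers',
                                         stack_id=stack_id)

def stop_stack_workers(self, context, stack_id):
    # No-op on engines that have no workers for this stack.
    if stack_id in self.thread_group_mgr.groups:
        self.thread_group_mgr.stop(stack_id, graceful=True)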

An implementation of option (b) is up for review:
https://review.openstack.org/#/c/279406/

I am seeking your input on these approaches. Please share any other
ideas you may have.

-- Anant



Re: [openstack-dev] [nova] python-novaclient region setting

2016-02-23 Thread Xav Paice
fwiw, the second part of Monty's message is in the docs, sans region - it
would be a fairly swift change to add that and I'll probably submit a
gerrit for it soon.

Regarding os_client_config,
http://docs.openstack.org/developer/os-client-config/ is great.
Unfortunately it didn't work for me in this particular case because of a
bug in novaclient, but that's beside the point - os_client_config is doing
exactly the right thing and I'm really happy with that approach in most
cases (and am changing some of our tooling to reflect that).

On 24 February 2016 at 05:05, Matt Riedemann 
wrote:

>
>
> On 2/22/2016 8:02 AM, Monty Taylor wrote:
>
>> On 02/21/2016 11:40 PM, Andrey Kurilin wrote:
>>
>>> Hi!
>>> `novaclient.client.Client` entry-point supports almost the same
>>> arguments as `novaclient.v2.client.Client`. The difference is only in
>>> api_version, so you can set up region via `novaclient.client.Client` in
>>> the same way as `novaclient.v2.client.Client`.
>>>
>>
>> The easiest way to get a properly constructed nova Client is with
>> os-client-config:
>>
>> import os_client_config
>>
>> OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
>> OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
>> OS_PASSWORD="REDACTED"
>> OS_AUTH_URL="http://auth.vexxhost.net"
>> OS_REGION_NAME="ca-ymq-1"
>>
>> client = os_client_config.make_client(
>>  'compute',
>>  auth_url=OS_AUTH_URL, username=OS_USERNAME,
>>  password=OS_PASSWORD, project_name=OS_PROJECT_NAME,
>>  region_name=OS_REGION_NAME)
>>
>> The upside is that the constructor interface is the same for all of the
>> rest of the client libs too (just change the first argument) - and it
>> will also read in OS_ env vars or named clouds from clouds.yaml if you
>> have them set.
>>
>> (The 'simplest' way is to put your auth and region information into a
>> clouds.yaml file like this:
>>
>>
>> http://docs.openstack.org/developer/os-client-config/#site-specific-file-locations
>>
>>
>> Such as:
>>
>> # ~/.config/openstack/clouds.yaml
>> clouds:
>>vexxhost:
>>   profile: vexxhost
>>   auth:
>> project_name: d8af8a8f-a573-48e6-898a-af333b970a2d
>> username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1
>> password: REDACTED
>>   region_name: ca-ymq-1
>>
>>
>> And do:
>>
>> client = os_client_config.make_client('compute', cloud='vexxhost')
>>
>>
>> If you don't want to do that for some reason but you'd like to construct
>> a novaclient Client object by hand:
>>
>>
>> from keystoneauth1 import loading
>> from keystoneauth1 import session as ksa_session
>> from novaclient import client as nova_client
>>
>> OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
>> OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
>> OS_PASSWORD="REDACTED"
>> OS_AUTH_URL="http://auth.vexxhost.net"
>> OS_REGION_NAME="ca-ymq-1"
>>
>> # Get the auth loader for the password auth plugin
>> loader = loading.get_plugin_loader('password')
>> # Construct the auth plugin
>> auth_plugin = loader.load_from_options(
>>  auth_url=OS_AUTH_URL, username=OS_USERNAME, password=OS_PASSWORD,
>>  project_name=OS_PROJECT_NAME)
>>
>> # Construct a keystone session
>> # Other arguments that are potentially useful here are:
>> #  verify - bool, whether or not to verify SSL connection validity
>> #  cert - SSL cert information
>> #  timeout - time in seconds to use for connection-level TCP timeouts
>> session = ksa_session.Session(auth_plugin)
>>
>> # Now make the client
>> # Other arguments you may be interested in:
>> #  service_name - if you need to specify a service name for finding the
>> # right service in the catalog
>> #  service_type - if the cloud in question has given a different
>> # service type (should be 'compute' for nova - but
>> # novaclient sets it, so it's safe to omit in most cases
>> #  endpoint_override - if you want to tell it to use a different URL
>> #  than what the keystone catalog returns
>> #  endpoint_type - if you need to specify admin or internal
>> #  endpoints rather than the default 'public'
>> #  Note that in glance and barbican, this key is called
>> #  'interface'
>> client = nova_client.Client(
>>  version='2.0', # or set the specific microversion you want
>>  session=session, region_name=OS_REGION_NAME)
>>
>> It might be clear why I prefer the os_client_config factory function
>> instead - but what I prefer and what you prefer might not be the same
>> thing. :)
>>
>> On Mon, Feb 22, 2016 at 6:11 AM, Xav Paice >> > wrote:
>>>
>>> Hi,
>>>
>>> In http://docs.openstack.org/developer/python-novaclient/api.html
>>> it's got some pretty clear instructions not to
>>> use novaclient.v2.client.Client but I can't see another way to
>>> specify the region - there's more than one in my installation, and
>>> no param for region in novaclient.client.Client
>>>
>>>  

[openstack-dev] [Neutron] Drivers meeting cancelled Feb 25

2016-02-23 Thread Armando M.
Folks,

Just a reminder that due to the ongoing Neutron mid-cycle, the drivers
meeting is cancelled for this week.

We'll be sending out a report at the end of the week/early next week to
keep you abreast of the progress made.

Cheers,
Armando


[openstack-dev] [release][oslo] oslo.privsep 1.3.0 release (mitaka)

2016-02-23 Thread no-reply
We are tickled pink to announce the release of:

oslo.privsep 1.3.0: OpenStack library for privilege separation

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.2.0..1.3.0


b6f64b1 Updated from global requirements
b9a9d41 fdopen: Use better "is using eventlet" test

Diffstat (except docs and test files)
-

oslo_privsep/daemon.py | 2 +-
requirements.txt   | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index d49312a..3e6d186 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ oslo.config>=3.4.0 # Apache-2.0
-oslo.utils>=3.4.0 # Apache-2.0
+oslo.utils>=3.5.0 # Apache-2.0





[openstack-dev] [os-brick][nova][cinder] os-brick/privsep change is done and awaiting your review

2016-02-23 Thread Angus Lees
Re: https://review.openstack.org/#/c/277224

Most of the various required changes have flushed out by now, and this
change now passes the dsvm-full integration tests(*).

(*) well, the experimental job anyway.  It still relies on a
merged-but-not-yet-released change in oslo.privsep so gate + 3rd party
won't pass until that happens.

What?
This change replaces os-brick's use of rootwrap with a quick+dirty
privsep-based drop-in replacement.  Privsep doesn't actually provide much
security isolation when used in this way, but it *does* now run commands
with CAP_SYS_ADMIN (still uid=0/gid=0) rather than full root superpowers.
The big win from a practical point of view is that it also means os-brick's
rootwrap filters file is essentially deleted and no longer has to be
manually merged with downstream projects.
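
For anyone not familiar with privsep, the pattern looks roughly like this
(an illustrative sketch only -- see the review for os-brick's real context
and entrypoints; the cfg_section here just mirrors the option group
mentioned below):

from oslo_concurrency import processutils
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# Functions decorated with this context's entrypoint run in a separate
# privsep helper process that holds only CAP_SYS_ADMIN (uid=0/gid=0).
default = priv_context.PrivContext(
    'os_brick',
    cfg_section='privsep_rootwrap',
    pypath=__name__ + '.default',
    capabilities=[caps.CAP_SYS_ADMIN],
)

@default.entrypoint
def execute_as_admin(*cmd):
    # Runs inside the privileged helper rather than via sudo/rootwrap in
    # the calling process.
    return processutils.execute(*cmd)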

Code changes required in nova/cinder:
There is one change each to nova+cinder to add the relevant privsep-helper
command to rootwrap filters, and a devstack change to add a
nova.conf/cinder.conf setting.  That's it - this is otherwise a
backwards/forwards compatible change for nova+cinder.

Deployment changes required in nova/cinder:
A new "privsep_rootwrap.helper_command" needs to be defined in
nova/cinder.conf (default is something sensible using sudo), and rootwrap
filters or sudoers updated depending on the exact command chosen.  Be aware
that any commands will now be run with CAP_SYS_ADMIN (only), and if that's
insufficient for your hardware/drivers it can be tweaked with other
oslo_config options.

Risks:
The end-result is still just running the same commands as before, via a
different path - so there's not a lot of adventurousness here.  The big
behavioural change is CAP_SYS_ADMIN, and (as highlighted above) it's
conceivable that the driver for some exotic os-brick/cinder hardware out
there wants something more than that.

Work remaining:
- global-requirements change needed (for os-brick) once the latest
oslo.privsep release is made
- cinder/nova/devstack changes need to be merged
- after the above, the os-brick gate integration jobs will be able to pass,
and it can be merged
- If we want to *force* the new version of os-brick, we then need an
appropriate global-requirements os-brick bump
- Documentation, release notes, etc

I'll continue chewing through those remaining work items, but essentially
this is now in your combined hands to prioritise for mitaka as you deem
appropriate.

 - Gus


Re: [openstack-dev] [keystone] [horizon] [qa] keystone versionless endpoints and v3

2016-02-23 Thread Jamie Lennox
On 18 February 2016 at 10:50, Matt Fischer  wrote:

> I've been having some issues with keystone v3 and versionless endpoints
> and I'd like to know what's expected to work exactly in Liberty and beyond.
> I thought with v3 we used versionless endpoints but it seems to cause some
> breakages and some disagreement as to what should work.
>

Excellent! I'm really glad someone is looking into this beyond the simple
cases.


> Here's what I've found:
>
> Using versionless endpoints:
>  - horizon project selector doesn't work (v3 api configured in horizon
> local_settings) [1]
>  - keystone client doesn't work (expected v3 I think)
>  - nova/neutron etc seem ok with a few exceptions [2]
>
> Adding /v3 to my endpoints:
>  - openstackclient seems to double up the /v3 reference which fails [3],
> this breaks puppet-openstack, in addition to general CLI usage.
>
> Adding /v2.0 to my endpoints:
>  - things seem to work the best this way
>  - this matches the install docs too
>  - its not very "v3-onic"
>
>
> My goal is to be as v3 as possible, but everything needs to work 100%.
> Given that...
>
> What's the correct and supported way to setup endpoints such that Keystone
> v3 works?
>

So the problem with switching to v3 is that a lot of services and clients
were designed to assume you would have a /v2.0 on your URL. To work with v3
they therefore inspect the url and essentially s/v2.0/v3 before making
calls. Any of the services using the keystoneclient/keystoneauth session
stuff correctly shouldn't have this problem - but that is certainly not
everyone.

It does however explain why you see problems with /v3 where /v2.0 seems to
work even for the v3 API.


> Are services expected to handle versionless keystone endpoints properly?
>

Services should never need to manipulate the catalog. This is what's
causing the problem. If they leave it up to the client to do this then it
will handle the unversioned endpoint.
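
(To make "leave it up to the client" concrete, a minimal sketch using
keystoneauth with an unversioned auth_url -- all the values here are
placeholders:)

from keystoneauth1 import loading
from keystoneauth1 import session as ksa_session
from keystoneclient.v3 import client as ks_client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://keystone.example.com:5000',  # no /v2.0 or /v3 suffix
    username='demo', password='secret', project_name='demo',
    user_domain_id='default', project_domain_id='default')
sess = ksa_session.Session(auth=auth)

# The client discovers the available identity API versions from the
# unversioned endpoint instead of string-mangling the URL.
keystone = ks_client.Client(session=sess)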


>
>
> Can I ignore that keystoneclient doesn't work with versionless? Does this
> imply that services that use the python library (like Horizon) will also be
> broken?
>

This I'm surprised by. Do you mean the keystone CLI utility that ships with
keystoneclient? If so, the decision was made that it should never support v3
and that openstackclient should be used instead. I haven't actually looked at
this in a long time, but we should probably fix it even though it's been
deprecated for a long time now.


> Do I need/Should I have both v2.0 and v3 endpoints in my catalog?
>
No. And particularly with the new catalog formats that went through the
cross-project working group recently, we made the decision that these
endpoints should not contain a version number at all. This is not ready yet,
but we are working towards that goal.


> [1] its making curl calls without a version on the endpoint, causing it to
> fail. I will file a bug pending the outcome of this discussion.
>
> [2] specifically neutron_admin_auth_url in nova.conf doesn't seem to work
> without a Keystone API version on it. For cinder keymgr_encryption_auth_url
> also seems to need it. I assume I'll eventually also hit some of these:
> https://etherpad.openstack.org/p/v3-only-devstack
>

Can you file bugs for both of these? I've worked on both these sections
before so should be able to have a look into it.

I was going to finish by saying that we have unversioned endpoints in
devstack - but looking again now, we don't :( There have been various
reverted patches in the v3 transition and this must have been one of them.

For now I would suggest keeping the endpoints with the /v2.0 prefix, as even
things using the v3 API know how to work around this. The goal is to go
versionless everywhere (including other services - a long-term goal, but the
others will be easier than keystone), and anything you find that isn't
working isn't using the clients correctly, so file a bug and add me to it.


Jamie



> [3] "Making authentication request to
> http://127.0.0.1:5000/v3/v3/auth/tokens"
>


Re: [openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Adrian Otto
Ricardo,

Yes, that approach would work. I don’t see any harm in automatically adding 
tags to the docker daemon on the bay nodes as part of the swarm heat template. 
That would allow the filter selection you described.

Adrian

> On Feb 23, 2016, at 4:11 PM, Ricardo Rocha  wrote:
> 
> Hi.
> 
> Has anyone looked into having magnum bay nodes deployed in different
> availability zones? The goal would be to have multiple instances of a
> container running on nodes across multiple AZs.
> 
> Looking at docker swarm this could be achieved using (for example)
> affinity filters based on labels. Something like:
> 
> docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a nginx
> https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter
> 
> We can do this if we change the templates/config scripts to add to the
> docker daemon params some labels exposing availability zone or other
> metadata (taken from the nova metadata).
> https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels
> 
> It's a bit less clear how we would get heat to launch nodes across
> availability zones using ResourceGroup(s), but there are other heat
> resources that support it (I'm sure this can be done).
> 
> Does this make sense? Any thoughts or alternatives?
> 
> If it makes sense, I'm happy to submit a blueprint.
> 
> Cheers,
>  Ricardo
> 


Re: [openstack-dev] [Congress] Issue with glancev2 datasource driver

2016-02-23 Thread Masahito MUROI

Hi Bryan,

Could you show me the debug log of the following command? When I run the 
command in my stable/liberty environment, it works well.


$ openstack congress datasource row list glancev2 images --debug

And if the setting for the driver is valid, the log should show the glance 
driver pulling the image list from glance itself.


On 2016/02/24 6:01, Bryan Sullivan wrote:

I’m running into an issue with the rows provided by the glancev2 driver
for congress. There are images defined in glance (see below) but no rows
are being returned by congress. Any idea why this might be happening?

Below I query Openstack directly, then try to get the same data thru
congress. I am using the stable/liberty branch, installed yesterday.
This is on the OPNFV Brahmaputra release (not devstack), and most other
congress datasources and functions are working as expected.

Thanks for your help!

Bryan Sullivan | AT&T

opnfv@jumphost:~/git/python-congressclient$ openstack image list

+--------------------------------------+-------------------------+
| ID                                   | Name                    |
+--------------------------------------+-------------------------+
| 98705491-edda-4645-a413-129502190d56 | cirros-0.3.3-x86_64-dmz |
| 59cf60e8-e0ce-48b1-b081-6325c1e1c52b | cirros-0.3.3-x86_64     |
+--------------------------------------+-------------------------+

opnfv@jumphost:~/git/python-congressclient$ openstack congress
datasource row list glancev2 images

opnfv@jumphost:~/git/python-congressclient$







--
室井 雅仁(Masahito MUROI)
Software Innovation Center, NTT
Tel: +81-422-59-4539





Re: [openstack-dev] [tricircle] playing tricircle with devstack under two-region configuration

2016-02-23 Thread Yipei Niu
Hi Joe and Zhiyuan,

My VM has recovered. When I re-install devstack in node1, I encounter the
following errors.

The info in stack.sh.log is as follows:

2016-02-23 11:18:27.238 | Error: Service n-sch is not running
2016-02-23 11:18:27.238 | +
/home/stack/devstack/functions-common:service_check:L1625:   '[' -n
/opt/stack/status/stack/n-sch.failure ']'
2016-02-23 11:18:27.238 | +
/home/stack/devstack/functions-common:service_check:L1626:   die 1626 'More
details about the above errors can be found with screen, with
./rejoin-stack.sh'
2016-02-23 11:18:27.238 | + /home/stack/devstack/functions-common:die:L186:
  local exitcode=0
2016-02-23 11:18:27.239 | [Call Trace]
2016-02-23 11:18:27.239 | ./stack.sh:1354:service_check
2016-02-23 11:18:27.239 | /home/stack/devstack/functions-common:1626:die
2016-02-23 11:18:27.261 | [ERROR]
/home/stack/devstack/functions-common:1626 More details about the above
errors can be found with screen, with ./rejoin-stack.sh
2016-02-23 11:18:28.271 | Error on exit
2016-02-23 11:18:28.953 | df: '/run/user/112/gvfs': Permission denied

The info in n-sch.log is as follows:

stack@nyp-VirtualBox:~/devstack$ /usr/local/bin/nova-scheduler --config-file
/etc/nova/nova.conf & echo $! >/opt/stack/status/stack/n-sch.pid; fg || echo
"n-sch failed to start" | tee "/opt/stack/status/stack/n-sch.failure"
[1] 29467
/usr/local/bin/nova-scheduler --config-file /etc/nova/nova.conf
2016-02-23 19:13:00.050 DEBUG oslo_db.api [-] Loading backend 'sqlalchemy'
from 'nova.db.sqlalchemy.api' from (pid=29467) _load_backend
/usr/local/lib/python2.7/dist-packages/oslo_db/api.py:238
2016-02-23 19:13:00.300 WARNING oslo_reports.guru_meditation_report [-] Guru
mediation now registers SIGUSR1 and SIGUSR2 by default for backward
compatibility. SIGUSR1 will no longer be registered in a future release, so
please use SIGUSR2 to generate reports.
2016-02-23 19:13:00.304 CRITICAL nova [-] ValueError: Empty module name
2016-02-23 19:13:00.304 TRACE nova Traceback (most recent call last):
2016-02-23 19:13:00.304 TRACE nova   File "/usr/local/bin/nova-scheduler", line 10, in <module>
2016-02-23 19:13:00.304 TRACE nova     sys.exit(main())
2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/cmd/scheduler.py", line 43, in main
2016-02-23 19:13:00.304 TRACE nova     topic=CONF.scheduler_topic)
2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/service.py", line 281, in create
2016-02-23 19:13:00.304 TRACE nova     db_allowed=db_allowed)
2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/service.py", line 167, in __init__
2016-02-23 19:13:00.304 TRACE nova     self.manager = manager_class(host=self.host, *args, **kwargs)
2016-02-23 19:13:00.304 TRACE nova   File "/opt/stack/nova/nova/scheduler/manager.py", line 49, in __init__
2016-02-23 19:13:00.304 TRACE nova     self.driver = importutils.import_object(scheduler_driver)
2016-02-23 19:13:00.304 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 44, in import_object
2016-02-23 19:13:00.304 TRACE nova     return import_class(import_str)(*args, **kwargs)
2016-02-23 19:13:00.304 TRACE nova   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py", line 30, in import_class
2016-02-23 19:13:00.304 TRACE nova     __import__(mod_str)
2016-02-23 19:13:00.304 TRACE nova ValueError: Empty module name
2016-02-23 19:13:00.304 TRACE nova
n-sch failed to start


Best regards,
Yipei

On Tue, Feb 23, 2016 at 10:23 AM, Yipei Niu  wrote:

> Hi Jeo,
>
> I have checked. The Neutron API has not started, but no process is
> listening 9696.
>
> Best regards,
> Yipei
>


Re: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible Ceilometer Configuration

2016-02-23 Thread Alex Cantu
Nate,


That's right. Initially there wasn't any work done to the Ansible playbooks to 
turn off Aodh alarming when deploying Ceilometer.

Ideally the playbooks would check to see if any alarm hosts are defined. If so, 
then turn on the Aodh configurations within Ceilometer. If not, then leave 
those configurations out.


It's worth noting that Ceilometer Alarms are deprecated in Liberty in favor of 
Aodh. If you turn off Aodh, then that feature will not be available to you.

Feel free to file a bug explaining the situation, and if you are feeling up for 
it -- add the logic in to check for Aodh hosts :).


-Alex


From: Potter, Nathaniel 
Sent: Tuesday, February 23, 2016 5:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible 
Ceilometer Configuration

Hi Alex,

So it's looking to me like my problem was being caused by openstack-ansible 
trying to set up aodh although I didn't configure it and didn't want to use it. 
In ceilometer.conf I found that in the [database] section the metering and 
event connections were correctly looking for mongodb at the IP I set as my bind 
IP, but it was also adding an alarm connection looking for an aodh user in the 
database at localhost. This was causing the ceilometer API to time out 
repeatedly looking for the connection that didn't exist. I don't have any aodh 
configuration set up in /etc/openstack_deploy, so should that line not have 
been put into my ceilometer.conf?

Thanks,
Nate
From: Alex Cantu [mailto:miguel.ca...@rackspace.com]
Sent: Wednesday, February 17, 2016 4:48 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible 
Ceilometer Configuration


Nate,



The mongodb host can be anywhere, so long as it can be reached by the ceilometer 
containers (on the same network).

What branch are you working from? Master and Liberty should have no problems as 
far as I'm aware. There is a bug open in regards to authentication with swift, 
but everything else should work fine.



Feel free to send over your ceilometer-api, ceilometer-notification-agent, and 
ceilometer-pollster logs on a pastebin that way I can take a look.



-Alex


From: Potter, Nathaniel 
mailto:nathaniel.pot...@intel.com>>
Sent: Wednesday, February 17, 2016 4:17 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible 
Ceilometer Configuration

Hi everyone,

I've been working on setting up a 10 node OpenStack installation with 
ceilometer using openstack-ansible, but the way I've configured it isn't 
working for me. I've tried following these instructions 
http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-ceilometer.html,
 doing these steps -


- I set up MongoDB on the metering-infra_host, making the bind_ip the 
br-mgmt IP of that host and creating the ceilometer user.

- In /etc/openstack_deploy/conf.d/ceilometer.yml I have a compute host 
under metering-compute_hosts and the infra host that I configured MongoDB on in 
my metering-infra_hosts.

- I also set the ceilometer_db_ip in user_variables to be equal to the 
bind_ip set on the infra host.

Running the ceilometer installation playbook is successful, but when I log into 
the utility container and try to run ceilometer meter-list it times out and 
says 'Service Unavailable (HTTP 503)'.

Does anyone see anywhere that I went wrong in these steps, should bind-ip be 
set to something else, or should I be configuring this database on the compute 
host rather than the infra? The documentation wasn't entirely clear on that 
point.

Thanks,
Nate



Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-23 Thread Jay Pipes

On 02/22/2016 04:23 AM, Sylvain Bauza wrote:

I won't argue against performance here. You made a very nice PoC for
testing scaling DB writes within a single python process and I trust
your findings. While I would be naturally preferring some shared-nothing
approach that can horizontally scale, one could mention that we can do
the same with Galera clusters.


a) My benchmarks aren't single process comparisons. They are 
multi-process benchmarks.


b) The approach I've taken is indeed shared-nothing. The scheduler 
processes do not share any data whatsoever.


c) Galera isn't horizontally scalable. Never was, never will be. That 
isn't its strong-suit. Galera is best for having a 
synchronously-replicated database cluster that is incredibly easy to 
manage and administer, but it isn't a performance panacea. Its focus is 
on availability, not performance :)



That said, most of the operators run a controller/compute situation
where all the services but the compute node are hosted on 1:N hosts.
Implementing the resource-providers-scheduler BP (and only that one)
will dramatically increase the number of writes we do on the scheduler
process (ie. on the "controller" - quoting because there is no notion of
a "controller" in Nova, it's just a deployment choice).


Yup, no doubt about it. It won't increase the *total* number of writes 
the system makes, just the concentration of those writes into the 
scheduler processes. You are trading increased writes in the scheduler 
for the challenges inherent in keeping a large distributed cache system 
valid and fresh (which itself introduces a different kind of writes).



That's a big game changer for operators who are currently capping their
capacity by adding more conductors. It would require them to do some DB
modifications to be able to scale their capacity. I'm not against that,
I just say it's a big thing that we need to consider and properly
communicate if agreed.


Agreed completely. I will say, however, that on a 1600 compute node 
simulation (~60K variably-sized instances), an untuned stock MySQL 5.6 
database with 128MB InnoDB buffer pool size barely breaks a sweat on my 
local machine.



> It can be alleviated by changing to a stand-alone high-performance
> database.


It doesn't need to be high-performance at all. In my benchmarks, a
small-sized stock MySQL database server is able to fulfill thousands
of placement queries and claim transactions per minute using
completely isolated non-shared, non-caching scheduler processes.

> And the cache refreshing is designed to be replaced by direct SQL
> queries according to the resource-provider scheduler spec [2].


Yes, this is correct.

> The performance bottleneck of the shared-state scheduler may come from
> the overwhelming update messages; it can also be alleviated by changing
> to a stand-alone distributed message queue and by using the
> “MessagePipe” to merge messages.


In terms of the number of messages used in each design, I see the
following relationship:

resource-providers < legacy < shared-state-scheduler

would you agree with that?


True. But that's manageable by adding more conductors, right? IMHO,
Nova performance is bound by the number of conductors you run, and I like
that - because it's easy to increase the capacity.
Also, the payload could be far smaller than the existing one: instead of
sending a full update for a single compute_node entry, it would only
send the diff (+ some full syncs periodically). We would then mitigate
the message increase by making sure we're sending less per message.


No message sent is better than sending any message, regardless of 
whether that message contains an incremental update or a full object.



The resource-providers proposal actually uses no update messages at
all (except in the abnormal case of a compute node failing to start
the resources that had previously been claimed by the scheduler). All
updates are done in a single database transaction when the claim is made.


See, I don't think that a compute node unable to start a request is an
'abnormal case'. There are many reasons why a request can't be honored
by the compute node :
  - for example, the scheduler doesn't own all the compute resources and
thus can miss some information : for example, say that you want to pin a
specific pCPU and this pCPU is already assigned. The scheduler doesn't
know *which* pCPUs are free, it only knows *how much* are free
That atomic transaction (pick a free pCPU and assign it to the instance)
is made on the compute manager not at the exact same time we're
decreasing resource usage for pCPUs (because it would be done in the
scheduler process).


See my response to Chris Friesen about the above.


  - some "reserved" RAM or disk could be underestimated and
consequently, spawning a VM could either take far more time than
planned (which would mean that it would be a suboptimal placement) or it
would fail which would issue a reschedule.


Again, the above is an abnormal case.




I

Re: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible Ceilometer Configuration

2016-02-23 Thread Potter, Nathaniel
Hi Alex,

So it's looking to me like my problem was being caused by openstack-ansible 
trying to set up aodh although I didn't configure it and didn't want to use it. 
In ceilometer.conf I found that in the [database] section the metering and 
event connections were correctly looking for mongodb at the IP I set as my bind 
IP, but it was also adding an alarm connection looking for an aodh user in the 
database at localhost. This was causing the ceilometer API to time out 
repeatedly looking for the connection that didn't exist. I don't have any aodh 
configuration set up in /etc/openstack_deploy, so should that line not have 
been put into my ceilometer.conf?

Thanks,
Nate
From: Alex Cantu [mailto:miguel.ca...@rackspace.com]
Sent: Wednesday, February 17, 2016 4:48 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible 
Ceilometer Configuration


Nate,



The mongodb host can be anywhere, so long as it can be reached by the ceilometer 
containers (on the same network).

What branch are you working from? Master and Liberty should have no problems as 
far as I'm aware. There is a bug open in regards to authentication with swift, 
but everything else should work fine.



Feel free to send over your ceilometer-api, ceilometer-notification-agent, and 
ceilometer-pollster logs on a pastebin that way I can take a look.



-Alex


From: Potter, Nathaniel 
mailto:nathaniel.pot...@intel.com>>
Sent: Wednesday, February 17, 2016 4:17 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [ceilometer] [openstack-ansible] OpenStack-Ansible 
Ceilometer Configuration

Hi everyone,

I've been working on setting up a 10 node OpenStack installation with 
ceilometer using openstack-ansible, but the way I've configured it isn't 
working for me. I've tried following these instructions 
http://docs.openstack.org/developer/openstack-ansible/install-guide/configure-ceilometer.html,
 doing these steps -


- I set up MongoDB on the metering-infra_host, making the bind_ip the 
br-mgmt IP of that host and creating the ceilometer user.

- In /etc/openstack_deploy/conf.d/ceilometer.yml I have a compute host 
under metering-compute_hosts and the infra host that I configured MongoDB on in 
my metering-infra_hosts.

- I also set the ceilometer_db_ip in user_variables to be equal to the 
bind_ip set on the infra host.

Running the ceilometer installation playbook is successful, but when I log into 
the utility container and try to run ceilometer meter-list it times out and 
says 'Service Unavailable (HTTP 503)'.

Does anyone see anywhere that I went wrong in these steps, should bind-ip be 
set to something else, or should I be configuring this database on the compute 
host rather than the infra? The documentation wasn't entirely clear on that 
point.

Thanks,
Nate



Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-23 Thread Jay Pipes

On 02/23/2016 06:03 PM, Chris Friesen wrote:

On 02/21/2016 01:56 PM, Jay Pipes wrote:

Yingxin, sorry for the delay in responding to this thread. My comments
inline.

On 02/17/2016 12:45 AM, Cheng, Yingxin wrote:

To better illustrate the differences between shared-state,
resource-provider and legacy scheduler, I’ve drew 3 simplified pictures
[1] in emphasizing the location of resource view, the location of claim
and resource consumption, and the resource update/refresh pattern in
three kinds of schedulers. Hoping I’m correct in the “resource-provider
scheduler” part.



2) Claims of resource amounts are done in a database transaction
atomically
within each scheduler process. Therefore there are no "cache updates"
arrows
going back from compute nodes to the resource-provider DB. The only
time a
compute node would communicate with the resource-provider DB (and thus
the
scheduler at all) would be in the case of a *failed* attempt to
initialize
already-claimed resources.


Can you point me to the BP/spec that talks about this?  Where in the
code would we update the DB to reflect newly-freed resources?


I should have been more clear, sorry. I am referring only to the process 
of claiming resources and the messages involved in cache updates for 
those claims. I'm not referring to freeing resources (i.e. an instance 
termination). In those cases, there would still need to be a message 
sent to inform the scheduler that the resources had been freed. Nothing 
would change in that regard.


For information, the blueprint where we are discussing moving claims to 
the scheduler (and away from the compute nodes) is here:


https://review.openstack.org/#/c/271823/

I'm in the process of splitting the above blueprint into two. One will 
be for the proposed moving of the filters from the scheduler Python 
process to instead be filters on the database query for compute nodes. 
Another blueprint will be for the "move the claims to the scheduler" stuff.
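
To make that concrete for anyone following along, a purely illustrative
sketch (invented table and column names, nothing from the blueprint) of a
claim done as a single atomic statement in the scheduler:

import sqlalchemy as sa

def claim(conn, provider_id, vcpus, ram_mb):
    # Guarded UPDATE: succeeds only if enough capacity remains, so two
    # scheduler processes racing for the same host cannot both win.
    stmt = sa.text(
        "UPDATE resource_usage"
        "   SET vcpus_used = vcpus_used + :vcpus,"
        "       ram_used = ram_used + :ram"
        " WHERE provider_id = :rp"
        "   AND vcpus_used + :vcpus <= vcpus_total"
        "   AND ram_used + :ram <= ram_total")
    result = conn.execute(stmt, vcpus=vcpus, ram=ram_mb, rp=provider_id)
    # rowcount == 0 means another scheduler claimed the capacity first;
    # the caller picks the next candidate host and retries.
    return result.rowcount == 1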


Best,
-jay



Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-23 Thread Jay Pipes

On 02/23/2016 06:12 PM, Chris Friesen wrote:

On 02/22/2016 03:23 AM, Sylvain Bauza wrote:

See, I don't think that a compute node unable to start a request is an
'abnormal
case'. There are many reasons why a request can't be honored by the
compute node :
  - for example, the scheduler doesn't own all the compute resources
and thus
can miss some information : for example, say that you want to pin a
specific
pCPU and this pCPU is already assigned. The scheduler doesn't know
*which* pCPUs
are free, it only knows *how much* are free
That atomic transaction (pick a free pCPU and assign it to the
instance) is made
on the compute manager not at the exact same time we're decreasing
resource
usage for pCPUs (because it would be done in the scheduler process).



I'm pretty sure that the existing NUMATopologyFilter is aware of which
pCPUs/Hugepages are free on each host NUMA node.


It's aware of which pCPUs and hugepages are free on each host NUMA node 
at the time of scheduling, but it doesn't actually *claim* those 
resources in the scheduler. This means that by the time the launch 
request gets to the host, another request for the same NUMA topology may 
have consumed the NUMA cell topology.


I think that's what Sylvain is referring to above.

I'd like to point out, though, that the placement of a requested NUMA 
cell topology onto an available host NUMA cell or cells *is the claim* 
of those NUMA resources. And it is the claim -- i.e. the placement of 
the requested instance NUMA topology onto the host topology -- that I 
wish to make in the scheduler.


So, Sylvain, I am indeed talking about only the 'abnormal' cases.

Best,
-jay



Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-23 Thread Jim Rollenhagen
On Tue, Feb 23, 2016 at 05:32:44PM -0500, James Slagle wrote:
> On Tue, Feb 23, 2016 at 5:18 PM, Devananda van der Veen
>  wrote:
> > Responding to your points out of order, since that makes more sense to me
> > right now ...
> >
> >> Since currently DIB claims to be backwards compatible, we just need to
> >> leave master backwards compatible with Kilo and Liberty Ironic, which
> >> means not deleting the bash ramdisk element. If Ironic wants to remove
> >> the bash ramdisk support from master, then it ought to be able to do
> >> so.
> >
> >
> > Yes, we'd like to remove support (read: code) from Ironic for the bash
> > ramdisk. It was deprecated in Liberty, and I'd like to remove it soon (no
> > later than once Newton opens).
> >
> >
> >>
> >> What if you removed the code from Ironic, but left the element in DIB,
> >> with a note that it only works with stable/liberty and earlier
> >> versions of Ironic?
> >
> >
> > Sure, except ...
> >
> >>
> >>
> >> Could we then:
> >>
> >> gate master DIB changes on an Ironic stable/liberty job that uses the
> >> bash ramdisk - this would catch any regressions in DIB that break the
> >> bash ramdisk
> >
> >
> > Yup. We could do this.
> >
> >>
> >> gate master DIB changes on an Ironic master job - this is what
> >> gate-tempest-dsvm-ironic-pxe_ssh-dib is already doing (I think).
> >
> >
> > This, we could not do.
> >
> > Once we remove the support for the bash ramdisk from ironic/master, we will
> > not be able to test the "deploy-baremetal" element in dib/master against
> > ironic/master. We will only be able to test DIB with the "ironic-agent"
> > element against ironic/master. However, since some users of dib still rely
> > on the bash ramdisk (eg, because they're using older versions of Ironic) we
> > understand the need to keep that element supported within dib.
> >
> >>
> >>
> >> Is that a valid option, and would it remove the desire for a stable
> >> branch of DIB?
> >>
> >>
> >> We currently say that DIB is backwards compatible and doesn't use
> >> stable branches. If there's a desire to change that, I think that's
> >> certainly open for discussion. But I don't think we're in a situtation
> >> where it's preventing us from moving forward with removing the bash
> >> ramdisk code from Ironic aiui, but I might be misunderstanding. I also
> >> think that having a stable branch sends the message that master isn't
> >> backwards compatible. If that's not the message, why do we need the
> >> stable branch?
> >>
> >
> > We believe we need the stable branch because we believe we should test
> > master-master for "ironic-agent" and stable-stable for "deploy-baremetal".
> >
> > On the other hand, we could test master-stable (dib-ironic) for the
> > "deploy-baremetal" element. If we did that, then we don't need a stable
> > branch of dib.
> 
> Yes, this ^^ is what I'm proposing, or was trying to anyway. The
> master-master (dib-ironic) gate uses ipa, the master-stable gate would
> use the bash ramdisk.

Right, so that would work and is probably ideal; the main wedge being
how we merge requirements for liberty devstack and master dib. That's
possibly solvable, though, and there are always virtual environments if we
need them. lifeless or mriedem may also have other ideas that I'm not
considering.

// jim

> 
> 
> -- 
> -- James Slagle
> --
> 


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-23 Thread Chris Friesen

On 02/22/2016 03:23 AM, Sylvain Bauza wrote:


See, I don't think that a compute node unable to start a request is an 'abnormal
case'. There are many reasons why a request can't be honored by the compute 
node :
  - for example, the scheduler doesn't own all the compute resources and thus
can miss some information : for example, say that you want to pin a specific
pCPU and this pCPU is already assigned. The scheduler doesn't know *which* pCPUs
are free, it only knows *how much* are free
That atomic transaction (pick a free pCPU and assign it to the instance) is made
on the compute manager not at the exact same time we're decreasing resource
usage for pCPUs (because it would be done in the scheduler process).



I'm pretty sure that the existing NUMATopologyFilter is aware of which 
pCPUs/Hugepages are free on each host NUMA node.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-23 Thread Chris Friesen

On 02/21/2016 01:56 PM, Jay Pipes wrote:

Yingxin, sorry for the delay in responding to this thread. My comments inline.

On 02/17/2016 12:45 AM, Cheng, Yingxin wrote:

To better illustrate the differences between the shared-state,
resource-provider and legacy schedulers, I’ve drawn 3 simplified pictures
[1] emphasizing the location of the resource view, the location of claim
and resource consumption, and the resource update/refresh pattern in
the three kinds of schedulers. Hoping I’m correct in the “resource-provider
scheduler” part.



2) Claims of resource amounts are done in a database transaction atomically
within each scheduler process. Therefore there are no "cache updates" arrows
going back from compute nodes to the resource-provider DB. The only time a
compute node would communicate with the resource-provider DB (and thus the
scheduler at all) would be in the case of a *failed* attempt to initialize
already-claimed resources.
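
(As an aside, my reading of the claim model above is a compare-and-swap
UPDATE inside a single transaction; a rough sketch, with purely illustrative
table and column names rather than Nova's actual schema:

    from sqlalchemy import text

    def claim(engine, provider_id, amount):
        # The UPDATE only matches the row if enough capacity is still
        # available, so the claim succeeds or fails atomically in the DB.
        with engine.begin() as conn:
            result = conn.execute(
                text("UPDATE inventories "
                     "SET used = used + :amt "
                     "WHERE provider_id = :pid "
                     "AND used + :amt <= total"),
                amt=amount, pid=provider_id)
            return result.rowcount == 1

If that reading is right, a failed initialization on the compute node would
then need a compensating UPDATE to give the resources back.)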


Can you point me to the BP/spec that talks about this?  Where in the code would 
we update the DB to reflect newly-freed resources?



Thanks,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Hongbin Lu
Hi Ricardo,

+1 from me. I like this feature.

Best regards,
Hongbin

-Original Message-
From: Ricardo Rocha [mailto:rocha.po...@gmail.com] 
Sent: February-23-16 5:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] containers across availability zones

Hi.

Has anyone looked into having magnum bay nodes deployed in different 
availability zones? The goal would be to have multiple instances of a container 
running on nodes across multiple AZs.

Looking at docker swarm this could be achieved using (for example) affinity 
filters based on labels. Something like:

docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a nginx 
https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter

We can do this if we change the templates/config scripts to add to the docker 
daemon params some labels exposing availability zone or other metadata (taken 
from the nova metadata).
https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels

It's a bit less clear how we would get heat to launch nodes across availability 
zones using ResourceGroup(s), but there are other heat resources that support 
it (I'm sure this can be done).

Does this make sense? Any thoughts or alternatives?

If it makes sense I'm happy to submit a blueprint.

Cheers,
  Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-23 Thread James Slagle
On Tue, Feb 23, 2016 at 5:18 PM, Devananda van der Veen
 wrote:
> Responding to your points out of order, since that makes more sense to me
> right now ...
>
>> Since currently DIB claims to be backwards compatible, we just need to
>> leave master backwards compatible with Kilo and Liberty Ironic, which
>> means not deleting the bash ramdisk element. If Ironic wants to remove
>> the bash ramdisk support from master, then it ought to be able to do
>> so.
>
>
> Yes, we'd like to remove support (read: code) from Ironic for the bash
> ramdisk. It was deprecated in Liberty, and I'd like to remove it soon (no
> later than once Newton opens).
>
>
>>
>> What if you removed the code from Ironic, but left the element in DIB,
>> with a note that it only works with stable/liberty and earlier
>> versions of Ironic?
>
>
> Sure, except ...
>
>>
>>
>> Could we then:
>>
>> gate master DIB changes on an Ironic stable/liberty job that uses the
>> bash ramdisk - this would catch any regressions in DIB that break the
>> bash ramdisk
>
>
> Yup. We could do this.
>
>>
>> gate master DIB changes on an Ironic master job - this is what
>> gate-tempest-dsvm-ironic-pxe_ssh-dib is already doing (I think).
>
>
> This, we could not do.
>
> Once we remove the support for the bash ramdisk from ironic/master, we will
> not be able to test the "deploy-baremetal" element in dib/master against
> ironic/master. We will only be able to test DIB with the "ironic-agent"
> element against ironic/master. However, since some users of dib still rely
> on the bash ramdisk (eg, because they're using older versions of Ironic) we
> understand the need to keep that element supported within dib.
>
>>
>>
>> Is that a valid option, and would it remove the desire for a stable
>> branch of DIB?
>>
>>
>> We currently say that DIB is backwards compatible and doesn't use
>> stable branches. If there's a desire to change that, I think that's
>> certainly open for discussion. But I don't think we're in a situation
>> where it's preventing us from moving forward with removing the bash
>> ramdisk code from Ironic aiui, but I might be misunderstanding. I also
>> think that having a stable branch sends the message that master isn't
>> backwards compatible. If that's not the message, why do we need the
>> stable branch?
>>
>
> We believe we need the stable branch because we believe we should test
> master-master for "ironic-agent" and stable-stable for "deploy-baremetal".
>
> On the other hand, we could test master-stable (dib-ironic) for the
> "deploy-baremetal" element. If we did that, then we don't need a stable
> branch of dib.

Yes, this ^^ is what I'm proposing, or was trying to anyway. The
master-master (dib-ironic) gate uses ipa, the master-stable gate would
use the bash ramdisk.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-23 Thread Devananda van der Veen
Responding to your points out of order, since that makes more sense to me
right now ...

Since currently DIB claims to be backwards compatible, we just need to
> leave master backwards compatible with Kilo and Liberty Ironic, which
> means not deleting the bash ramdisk element. If Ironic wants to remove
> the bash ramdisk support from master, then it ought to be able to do
> so.


Yes, we'd like to remove support (read: code) from Ironic for the bash
ramdisk. It was deprecated in Liberty, and I'd like to remove it soon (no
later than once Newton opens).



> What if you removed the code from Ironic, but left the element in DIB,
> with a note that it only works with stable/liberty and earlier
> versions of Ironic?
>

Sure, except ...


>
> Could we then:
>
> gate master DIB changes on an Ironic stable/liberty job that uses the
> bash ramdisk - this would catch any regressions in DIB that break the
> bash ramdisk
>

Yup. We could do this.


> gate master DIB changes on an Ironic master job - this is what
> gate-tempest-dsvm-ironic-pxe_ssh-dib is already doing (I think).
>

This, we could not do.

Once we remove the support for the bash ramdisk from ironic/master, we will
not be able to test the "deploy-baremetal" element in dib/master against
ironic/master. We will only be able to test DIB with the "ironic-agent"
element against ironic/master. However, since some users of dib still rely
on the bash ramdisk (eg, because they're using older versions of Ironic) we
understand the need to keep that element supported within dib.


>
> Is that a valid option, and would it remove the desire for a stable
> branch of DIB?


> We currently say that DIB is backwards compatible and doesn't use
> stable branches. If there's a desire to change that, I think that's
> certainly open for discussion. But I don't think we're in a situation
> where it's preventing us from moving forward with removing the bash
> ramdisk code from Ironic aiui, but I might be misunderstanding. I also
> think that having a stable branch sends the message that master isn't
> backwards compatible. If that's not the message, why do we need the
> stable branch?
>
>
We believe we need the stable branch because we believe we should test
master-master for "ironic-agent" and stable-stable for "deploy-baremetal".

On the other hand, we could test master-stable (dib-ironic) for the
"deploy-baremetal" element. If we did that, then we don't need a stable
branch of dib.

Thoughts?
--devananda
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] containers across availability zones

2016-02-23 Thread Ricardo Rocha
Hi.

Has anyone looked into having magnum bay nodes deployed in different
availability zones? The goal would be to have multiple instances of a
container running on nodes across multiple AZs.

Looking at docker swarm this could be achieved using (for example)
affinity filters based on labels. Something like:

docker run -it -d -p 80:80 --label nova.availability-zone=my-zone-a nginx
https://docs.docker.com/swarm/scheduler/filter/#use-an-affinity-filter

We can do this if we change the templates/config scripts to add to the
docker daemon params some labels exposing availability zone or other
metadata (taken from the nova metadata).
https://docs.docker.com/engine/userguide/labels-custom-metadata/#daemon-labels
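
For example, a config script on the node could derive such a label from the
Nova metadata service; a rough sketch (assuming the standard metadata
endpoint and the requests library are available on the node):

    import requests

    META_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

    def docker_az_label_args():
        # Read the availability zone Nova published for this instance and
        # turn it into docker daemon label arguments.
        meta = requests.get(META_URL, timeout=5).json()
        az = meta.get("availability_zone", "unknown")
        return ["--label", "nova.availability-zone=%s" % az]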

It's a bit less clear how we would get heat to launch nodes across
availability zones using ResourceGroup(s), but there are other heat
resources that support it (I'm sure this can be done).

Does this make sense? Any thoughts or alternatives?

If it makes sense I'm happy to submit a blueprint.

Cheers,
  Ricardo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Ronald Bradford
>
>
>
> 1) Being able to do a grant with a prefix like
>
> GRANT all on 'openstack_ci%'.* to openstack_citest
>
> Then using that prefix in the random db generation. That would at least
> limit scope. That seems the easiest to do with the existing infrastructure.
>

To use this syntax correctly in MySQL, note that they have to be backquotes (`),
and you're missing the @host scope.


 GRANT ALL ON `openstack_ci%`.* TO openstack_citest@localhost
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [tripleo] [stable] Phasing out old Ironic ramdisk and its gate jobs

2016-02-23 Thread James Slagle
On Wed, Feb 17, 2016 at 6:27 AM, Dmitry Tantsur  wrote:
> Hi everyone!
>
> Yesterday on the Ironic midcycle we agreed that we would like to remove
> support for the old bash ramdisk from our code and gate. This, however, poses
> a problem, since we still support Kilo and Liberty. Meaning:

Hi, I just wanted to follow up on this issue after the TripleO meeting today.

By removing support from the code do you mean Ironic and/or DIB?

What if you removed the code from Ironic, but left the element in DIB,
with a note that it only works with stable/liberty and earlier
versions of Ironic?

Could we then:

gate master DIB changes on an Ironic stable/liberty job that uses the
bash ramdisk - this would catch any regressions in DIB that break the
bash ramdisk

gate master DIB changes on an Ironic master job - this is what
gate-tempest-dsvm-ironic-pxe_ssh-dib is already doing (I think).

Is that a valid option, and would it remove the desire for a stable
branch of DIB?

We currently say that DIB is backwards compatible and doesn't use
stable branches. If there's a desire to change that, I think that's
certainly open for discussion. But I don't think we're in a situation
where it's preventing us from moving forward with removing the bash
ramdisk code from Ironic aiui, but I might be misunderstanding. I also
think that having a stable branch sends the message that master isn't
backwards compatible. If that's not the message, why do we need the
stable branch?

Since currently DIB claims to be backwards compatible, we just need to
leave master backwards compatible with Kilo and Liberty Ironic, which
means not deleting the bash ramdisk element. If Ironic wants to remove
the bash ramdisk support from master, then it ought to be able to do
so.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Issue with glancev2 datasource driver

2016-02-23 Thread Bryan Sullivan


I'm running into an issue with the rows provided by the glancev2 driver for
congress. There are images defined in glance (see below) but no rows are being
returned by congress. Any idea why this might be happening?

Below I query OpenStack directly, then try to get the same data thru congress.
I am using the stable/liberty branch, installed yesterday. This is on the OPNFV
Brahmaputra release (not devstack), and most other congress datasources and
functions are working as expected.

Thanks for your help!

Bryan Sullivan | AT&T

opnfv@jumphost:~/git/python-congressclient$ openstack image list
+--------------------------------------+-------------------------+
| ID                                   | Name                    |
+--------------------------------------+-------------------------+
| 98705491-edda-4645-a413-129502190d56 | cirros-0.3.3-x86_64-dmz |
| 59cf60e8-e0ce-48b1-b081-6325c1e1c52b | cirros-0.3.3-x86_64     |
+--------------------------------------+-------------------------+

opnfv@jumphost:~/git/python-congressclient$ openstack congress datasource row list glancev2 images

opnfv@jumphost:~/git/python-congressclient$

  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Ed Leafe

On 02/23/2016 09:50 AM, Michael Krotscheck wrote:

> Also, it doesn't seem like alternative cost-savings solutions have
> been considered. For example, how about we host a summit in a
> Not-Top-Tier city for a change? Examples that come to mind are
> Columbus, Pittsburgh, and Indianapolis, which each have convention
> centers larger than Austin.

One of the reasons we started having Summits in big cities was the
cheaper airfares and simpler flight arrangements for people coming
from overseas. The second summit in San Antonio was great since most
people in OpenStack at the time worked for Rackspace, and that's where
the mother ship is located. But after hearing the problems of flying
to SA from, say, Japan, it was decided to keep the summits at coastal
cities in the US, alternating east/west.

But now that the summits have grown to be so huge, the flight savings
have been overwhelmed by the hotel costs in most cases. Making the
events smaller, like the midcycles, can allow us to hold them in much
smaller venues with much less expensive hotels.

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] tooz 1.33.0 release (mitaka)

2016-02-23 Thread no-reply
We are stoked to announce the release of:

tooz 1.33.0: Coordination library for distributed systems.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.32.0..1.33.0
--

0ea96fe Updated from global requirements
ba286fc Fix calling acquire(blocking=False) twice leads to a deadlock
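
As a reminder of the pattern the deadlock fix is about, non-blocking lock
acquisition in tooz looks roughly like this (using the in-memory zake driver
purely for illustration; any real driver URL works the same way):

    from tooz import coordination

    coord = coordination.get_coordinator('zake://', b'member-1')
    coord.start()

    lock = coord.get_lock(b'my-lock')
    # Non-blocking acquire: returns immediately with True/False instead
    # of waiting for the lock to become free.
    if lock.acquire(blocking=False):
        try:
            pass  # critical section
        finally:
            lock.release()
    coord.stop()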

Diffstat (except docs and test files)
-

requirements.txt|  2 +-
test-requirements.txt   |  2 +-
tooz/drivers/ipc.py | 22 --
tooz/drivers/memcached.py   |  8 +---
tooz/drivers/redis.py   | 12 +++-
tooz/drivers/zookeeper.py   | 11 ++-
7 files changed, 39 insertions(+), 25 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index bb29af2..135781e 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17 +17 @@ futurist>=0.11.0 # Apache-2.0
-oslo.utils>=3.4.0 # Apache-2.0
+oslo.utils>=3.5.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 633cb0c..e82c1bb 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -29 +29 @@ redis>=2.10.0 # MIT
-eventlet>=0.18.2 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Ed Leafe

On 02/23/2016 11:52 AM, Clint Byrum wrote:

> I did not attend the first few summits, my first one being the
> Boston event, but I did attend quite a few Ubuntu Developer
> Summits, which were much more about development discussions, and
> almost completely devoid of conference semantics. It always felt
> like a series of productive meetings, and not like a series of
> rushed, agitated, nervous brain dumps, which frankly is what a lot
> of Tokyo felt like.

While the first was more of a "what exactly are we getting ourselves
into?" event, the next two were much more focused on the developer
design discussions. Yes, there were some business sessions, but there
were no technical talks or splashy booths. The dev sessions were
completely separate, and I remember moving bean bag chairs around to
get a few people together to discuss an issue, and then moving them
again to form a different group. There also weren't many separate
teams then, so it was pretty much everyone in a few rooms.

-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] [QoS] Roadmap and prioritisation of features.

2016-02-23 Thread Miguel Angel Ajo Pelayo
Regarding this conversation about QoS [1], as Nate said, we
have every feature x4 (API, OVS, LB, SR-IOV), and I would add: we
should avoid writing RFEs for any missing piece in the reference
implementations; if any of those is missing, that's just a bug.

I guess I haven’t been communicating the status and plan lately,
nor reviewing new RFEs, due to our focus on the current ones;
sorry about that.

I believe the framework we have is solid (what else could I say!), but
we’re sticking to the features that are easier to develop on the reference
implementation and still beneficial to the broadest audience
(like bandwidth policing and L3 marking - DSCP), and then
we will be able to jump into more complicated QoS rules.

Some of the things are simply technically complicated at the low level,
while being very easy to model with the current framework.

And some of the things need integration with the nova scheduler (like
minimum bandwidth guarantees, requested by NFV/operators).

After the QoS meeting I will work on a tiny report so we can raise visibility
of the features and the plans.

Best regards,
Miguel Ángel.


[1] 
http://eavesdrop.openstack.org/meetings/neutron_drivers/2016/neutron_drivers.2016-02-18-22.01.log.html#l-52
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Matt Fischer
>
> >  * would it better to keep the ocata cycle at a more normal length, and
> >then run the "contributor events" in Mar/Sept, as opposed to Feb/Aug?
> >(again to avoid the August black hole)
> >
>
> Late March is treacherous in the US, as spring break is generally around
> the last week of March. So I think it just has to stay mid-March or
> earlier.
>
>
Spring break here and in many other places is the 2nd week of March, but it
varies by school district and state. I think any week in March
is bad in general if you're worried about this, but it will be impossible to
avoid all of them.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Clint Byrum
Excerpts from Eoghan Glynn's message of 2016-02-22 15:06:01 -0800:
> 
> > Hi everyone,
> > 
> > TL;DR: Let's split the events, starting after Barcelona.
> > 
> > Long long version:
> > 
> > In a global and virtual community, high-bandwidth face-to-face time is
> > essential. This is why we made the OpenStack Design Summits an integral
> > part of our processes from day 0. Those were set at the beginning of
> > each of our development cycles to help set goals and organize the work
> > for the upcoming 6 months. At the same time and in the same location, a
> > more traditional conference was happening, ensuring a lot of interaction
> > between the upstream (producers) and downstream (consumers) parts of our
> > community.
> > 
> > This setup, however, has a number of issues. For developers first: the
> > "conference" part of the common event got bigger and bigger and it is
> > difficult to focus on upstream work (and socially bond with your
> > teammates) with so much other commitments and distractions. The result
> > is that our design summits are a lot less productive than they used to
> > be, and we organize other events ("midcycles") to fill our focus and
> > small-group socialization needs. The timing of the event (a couple of
> > weeks after the previous cycle release) is also suboptimal: it is way
> > too late to gather any sort of requirements and priorities for the
> > already-started new cycle, and also too late to do any sort of work
> > planning (the cycle work started almost 2 months ago).
> > 
> > But it's not just suboptimal for developers. For contributing companies,
> > flying all their developers to expensive cities and conference hotels so
> > that they can attend the Design Summit is pretty costly, and the goals
> > of the summit location (reaching out to users everywhere) do not
> > necessarily align with the goals of the Design Summit location (minimize
> > and balance travel costs for existing contributors). For the companies
> > that build products and distributions on top of the recent release, the
> > timing of the common event is not so great either: it is difficult to
> > show off products based on the recent release only two weeks after it's
> > out. The summit date is also too early to leverage all the users
> > attending the summit to gather feedback on the recent release -- not a
> > lot of people would have tried upgrades by summit time. Finally a common
> > event is also suboptimal for the events organization : finding venues
> > that can accommodate both events is becoming increasingly complicated.
> > 
> > Time is ripe for a change. After Tokyo, we at the Foundation have been
> > considering options on how to evolve our events to solve those issues.
> > This proposal is the result of this work. There is no perfect solution
> > here (and this is still work in progress), but we are confident that
> > this strawman solution solves a lot more problems than it creates, and
> > balances the needs of the various constituents of our community.
> > 
> > The idea would be to split the events. The first event would be for
> > upstream technical contributors to OpenStack. It would be held in a
> > simpler, scaled-back setting that would let all OpenStack project teams
> > meet in separate rooms, but in a co-located event that would make it
> > easy to have ad-hoc cross-project discussions. It would happen closer to
> > the centers of mass of contributors, in less-expensive locations.
> > 
> > More importantly, it would be set to happen a couple of weeks /before/
> > the previous cycle release. There is a lot of overlap between cycles.
> > Work on a cycle starts at the previous cycle feature freeze, while there
> > is still 5 weeks to go. Most people switch full-time to the next cycle
> > by RC1. Organizing the event just after that time lets us organize the
> > work and kickstart the new cycle at the best moment. It also allows us
> > to use our time together to quickly address last-minute release-critical
> > issues if such issues arise.
> > 
> > The second event would be the main downstream business conference, with
> > high-end keynotes, marketplace and breakout sessions. It would be
> > organized two or three months /after/ the release, to give time for all
> > downstream users to deploy and build products on top of the release. It
> > would be the best time to gather feedback on the recent release, and
> > also the best time to have strategic discussions: start gathering
> > requirements for the next cycle, leveraging the very large cross-section
> > of all our community that attends the event.
> > 
> > To that effect, we'd still hold a number of strategic planning sessions
> > at the main event to gather feedback, determine requirements and define
> > overall cross-project themes, but the session format would not require
> > all project contributors to attend. A subset of contributors who would
> > like to participate in this sessions can collect and relay feedback to
> > other 

Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-02-23 Thread Roman Podoliaka
That's what I tried first :)

For some reason load distribution was still uneven. I'll check this
again, maybe I missed something.
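
For reference, the accept()-loop change being discussed is roughly this
(a simplified sketch, not the actual oslo.service/eventlet wsgi code):

    import eventlet
    from eventlet import greenthread

    def serve(sock, handle, pool_size=1000):
        pool = eventlet.GreenPool(pool_size)
        while True:
            client, addr = sock.accept()
            pool.spawn_n(handle, client, addr)
            # Explicit context switch: let already-spawned greenlets (and
            # the hub) run before this worker calls accept() again.
            greenthread.sleep(0)

With sleep(0) the worker yields without imposing the 1 / 0.05 = 20
connections-per-second cap mentioned below; whether that is enough to even
out the load is exactly what needs re-testing.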

On Tue, Feb 23, 2016 at 5:37 PM, Chris Friesen
 wrote:
> On 02/23/2016 05:25 AM, Roman Podoliaka wrote:
>
>> So it looks like there are two related problems here:
>>
>> 1) the distribution of load between workers is uneven. One way to fix
>> this is to decrease the default number of greenlets in pool [2], which
>> will effectively cause a particular worker to give up new connections
>> to other forks, as soon as there are no more greenlets available in
>> the pool to process incoming requests. But this alone will *only* be
>> effective when the concurrency level is greater than the number of
>> greenlets in pool. Another way would be to add a context switch to
>> eventlet accept() loop [8] right after spawn_n() - this is what I've
>> got with greenthread.sleep(0.05) [9][10] (the trade-off is that we can
>> now only accept() 1 / 0.05 = 20 new connections per second per worker -
>> I'll try to experiment with numbers here).
>
>
> Would greenthread.sleep(0) be enough to trigger a context switch?
>
> Chris
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Roman Podoliaka
On Tue, Feb 23, 2016 at 7:23 PM, Mike Bayer  wrote:
> Also I'm not
> sure how the enginefacade integration with nova didn't already cover this, I
> guess it doesn't yet impact all of those existing MySQLOpportunisticTest
> classes it has.

Yeah, I guess it's the first test case that actually tries to access the
DB via functions in nova/db/sqlalchemy/api.py; the other test cases were
using the self.engine/self.sessionmaker attributes provided by
MySQLOpportunisticTestCase directly, so we missed this when the
integration with enginefacade was done.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Roman Podoliaka
Ok, so I uploaded https://review.openstack.org/#/c/283728/ on top
of Sean's patches.

We'll take a closer look tomorrow to see if we can just put something like
this into oslo.db/sqlalchemy/test_base as a public test fixture.
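
Roughly what I have in mind for such a fixture (a sketch only, built on the
_TestTransactionFactory usage Mike showed; the class name is a placeholder):

    import fixtures
    from oslo_db.sqlalchemy import enginefacade

    class GlobalEngineFixture(fixtures.Fixture):
        """Point the global enginefacade at a test engine/sessionmaker
        for the duration of a test, then restore it."""

        def __init__(self, engine, sessionmaker):
            super(GlobalEngineFixture, self).__init__()
            self.engine = engine
            self.sessionmaker = sessionmaker

        def _setUp(self):
            factory = enginefacade._TestTransactionFactory(
                self.engine, self.sessionmaker,
                apply_global=True, synchronous_reader=True)
            self.addCleanup(factory.dispose_global)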

On Tue, Feb 23, 2016 at 7:23 PM, Mike Bayer  wrote:
>
>
> On 02/23/2016 12:06 PM, Roman Podoliaka wrote:
>>
>> Mike,
>>
>> I think that won't work as Nova creates its own instance of
>> _TransactionContextManager:
>>
>>
>> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/db/sqlalchemy/api.py#L134-L135
>>
>> Maybe we could change _TestTransactionFactory a bit, so that it takes
>> a context manager instance as an argument?
>
>
> If they aren't using the enginefacade global context, then that's even
> easier.  They should be able to drop in _TestTransactionFactory or any other
> TransactionFactory into the _TransactionContextManager they have and then
> swap it back.   If there aren't API methods for this already, because
> everything in enginefacade is underscored, feel free to add. Also I'm not
> sure how the enginefacade integration with nova didn't already cover this, I
> guess it doesn't yet impact all of those existing MySQLOpportunisticTest
> classes it has.
>
>
>
>
>
>>
>> On Tue, Feb 23, 2016 at 6:09 PM, Mike Bayer  wrote:
>>>
>>>
>>>
>>> On 02/23/2016 09:22 AM, Sean Dague wrote:


With enginefacade work coming into projects, there seem to be some
new bits around oslo.db global sessions.

 The effect of this on tests is a little problematic. Because it builds
 global state which couples between tests. I've got a review to use mysql
 connection explicitly for some Nova functional tests which correctly
 fails and exposes a bug when run individually. However, when run in a
 full test run, the global session means that it's not run against mysql,
 it's run against sqlite, and passes.

 https://review.openstack.org/#/c/283364/

 We need something that's the inverse of session.configure() -


 https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
 to reset the global session.

 Pointers would be welcomed.
>>>
>>>
>>>
>>> from the oslo.db side, we have frameworks for testing that handle all of
>>> these details (e.g. oslo_db.sqlalchemy.test_base.DbTestCase and
>>> DbFixture).
>>> I don't believe Nova uses these frameworks (I think it should long term),
>>> but for now the techniques used by oslo.db's framework should likely be
>>> used:
>>>
>>> self.test.enginefacade = enginefacade._TestTransactionFactory(
>>>  self.test.engine, self.test.sessionmaker, apply_global=True,
>>>  synchronous_reader=True)
>>>
>>> self.addCleanup(self.test.enginefacade.dispose_global)
>>>
>>>
>>> The above apply_global flag indicates that the global enginefacade should
>>> use this TestTransactionFactory until disposed.
>>>
>>>
>>>
>>>
>>>

  -Sean

>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] volumes stuck detaching attaching and force detach

2016-02-23 Thread John Garbutt
On 22 February 2016 at 22:08, Walter A. Boring IV  wrote:
> On 02/22/2016 11:24 AM, John Garbutt wrote:
>>
>> Hi,
>>
>> Just came up on IRC, when nova-compute gets killed half way through a
>> volume attach (i.e. no graceful shutdown), things get stuck in a bad
>> state, like volumes stuck in the attaching state.
>>
>> This looks like a new addition to this conversation:
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2015-December/082683.html
>> And brings us back to this discussion:
>> https://blueprints.launchpad.net/nova/+spec/add-force-detach-to-nova
>>
>> What if we move our attention towards automatically recovering from
>> the above issue? I am wondering if we can look at making our usually
>> recovery code deal with the above situation:
>>
>> https://github.com/openstack/nova/blob/834b5a9e3a4f8c6ee2e3387845fc24c79f4bf615/nova/compute/manager.py#L934
>>
>> Did we get the Cinder APIs in place that enable the force-detach? I
>> think we did and it was this one?
>>
>> https://blueprints.launchpad.net/python-cinderclient/+spec/nova-force-detach-needs-cinderclient-api
>>
>> I think diablo_rojo might be able to help dig for any bugs we have
>> related to this. I just wanted to get this idea out there before I
>> head out.
>>
>> Thanks,
>> John
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> .
>>
> The problem is a little more complicated.
>
> In order for cinder backends to be able to do a force detach correctly, the
> Cinder driver needs to have the correct 'connector' dictionary passed in to
> terminate_connection.  That connector dictionary is the collection of
> initiator side information which is gleaned here:
> https://github.com/openstack/os-brick/blob/master/os_brick/initiator/connector.py#L99-L144
>
> The plan was to save that connector information in the Cinder
> volume_attachment table.  When a force detach is called, Cinder has the
> existing connector saved if Nova doesn't have it.  The problem was live
> migration.  When you migrate to the destination n-cpu host, the connector
> that Cinder had is now out of date.  There is no API in Cinder today to
> allow updating an existing attachment.
>
> So, the plan at the Mitaka summit was to add this new API, but it required
> microversions to land, which we still don't have in Cinder's API today.

Ah, OK.

We do keep looping back to that core issue.

Thanks,
johnthetubaguy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Sean Dague
On 02/23/2016 12:26 PM, Mike Bayer wrote:

>> 2 thoughts on that:
>>
>> 1) Being able to do a grant with a prefix like
>>
>> GRANT all on 'openstack_ci%'.* to openstack_citest
>>
>> Then using that prefix in the random db generation. That would at least
>> limit scope. That seems the easiest to do with the existing
>> infrastructure.
> 
> a prefix would be very easy, and I almost wonder if we should just have
> an identifiable prefix on the username in all cases anyway.   However,
> the wildcard scheme here is only useful on MySQL.  Other backends don't
> support such a liberal setting.

I think that's ok. We don't need the lowest common denominator here. Mysql
(and mysql variants) are the most used db in openstack by far [1]. Making it
easy to build tests for specific bugs / behavior there is a near term
goal. We've got 2 open API bugs in Nova that can't be reproduced on
SQLite in tests, but can with mysql.

And prefixing in all cases would at least make it more clear in case
something failed to cleanup.

[1] -
https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf -
pg 28

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Clint Byrum
Excerpts from Sean McGinnis's message of 2016-02-22 11:48:50 -0800:
> On Mon, Feb 22, 2016 at 05:20:21PM +, Amrith Kumar wrote:
> > Thierry and all of those who contributed to putting together this write-up, 
> > thank you very much.
> > 
> > TL;DR: +0
> > 
> > Longer version:
> > 
> > While I definitely believe that the new proposed timing for "OpenStack 
> > Summit" which is some months after the release, is a huge improvement, I am 
> > not completely enamored of this proposal. Here is why.
> > 
> > As a result of this proposal, there will still be four events each year, 
> > two "OpenStack Summit" events and two "MidCycle" events. The material 
> > change is that the "MidCycle" event that is currently project specific will 
> > become a single event inclusive of all projects, not unlike our current 
> > "Design Summit".
> > 
> > I contrast this proposal with a mid-cycle two weeks ago for the Trove 
> > project. Thanks to the folks at Red Hat who hosted us in Raleigh, we had a 
> > dedicated room, with high bandwidth internet and the ability to have people 
> > join us remotely via audio and video (which we used mostly for screen 
> > sharing). The previous mid-cycle similarly had excellent facilities 
> > provided us by HP (in California), Rackspace (in Austin) and at MIT in 
> > Cambridge when we (Tesora) hosted the event.
> > 
> > At these "simpler, scaled-back settings", would we be able to provide the 
> > same kind of infrastructure for each project?
> > 
> > Given the number of projects, and leaving aside high bandwidth internet and 
> > remote participation, providing dedicated meeting room for the duration of 
> > the MidCycle event for each project is a considerable undertaking. I 
> > believe therefore that the consequence is that the MidCycle event will end 
> > up being of comparable scale to the current Design Summit or larger, and 
> > will likely need a similar venue.
> > 
> > I also believe that it is important that OpenStack continue to grow not 
> > only a global customer base but also a global contributor base. As others 
> > have already commented, this proposal risks the "design summit" become US 
> > based, maybe Europe once in a long while. But I find it much harder to 
> > believe that these design summits would be truly global. And this I think 
> > would be an unwelcome consequence.
> > 
> > At the current OpenStack Summit, there is an opportunity for contributors, 
> > customers and operators to interact, not just in technical meetings, but 
> > also in a social setting. I think this is valuable, even though there seems 
> > to be a number of people who believe that this is not necessarily the case.
> > 
> > Those are the three concerns I have with the proposal. 
> > 
> > Thanks again to Thierry and all who contributed to putting this proposal 
> > together.
> > 
> > -amrith
> 
> I agree with a lot of the concerns raised here. I wonder if we're not
> just shifting some of the problems and causing others.
> 
> While the timing of things isn't ideal right now, I'm also afraid the
> timing of these changes would also interupt our development flow and
> cause distractions when we need folks focused on getting things done.
> 
> I'm also very concerned about losing our midcycles. At least for Cinder,
> the midcycle events have been hugely successful and well worth the time
> and travel expense, IMO. To me, the design summit event is good for
> cross-project communication and getting more operator input. But the
> midcycles have been where we've really been able to focus and figure out
> issues.
> 

I do understand this concern, but the difference is in the way a
development-summit-only event is attended versus a conference+summit.
When you don't have keynotes every morning expending people's time, and
you don't have people running out of discussions to give their talks,
this immediately adds a calm focus to the discussions that feels a
lot more like a mid-cycle. When there's no booth for your company to
ask you to come by and man for a while to meet customers and partners,
suddenly every developer can spend the whole of the event talking to
other developers and operators who have come to participate directly.

I did not attend the first few summits, my first one being the Boston
event, but I did attend quite a few Ubuntu Developer Summits, which were
much more about development discussions, and almost completely devoid of
conference semantics. It always felt like a series of productive meetings,
and not like a series of rushed, agitated, nervous brain dumps, which
frankly is what a lot of Tokyo felt like.

> Even if we still have a colocated "midcycle" now, I would be afraid that
> there would be too many distractions from everything else going on for
> us to be able to really tackle some of the things we've been able to in
> our past midcycles.
> 

I _DO_ share your concern here. The mid-cycles are productive because
they're focused. Putting one at the conference will just mak

[openstack-dev] [cross-project] Meeting SKIPPED, Tue February 23rd, 21:00 UTC

2016-02-23 Thread Mike Perez
Hi all!

We will be skipping the cross-project meeting since there are no agenda items
to discuss, but someone can add one [1] to call a meeting next time.

Cross-project spec liaisons, please be ready to discuss these specs for next
week's meeting:

* Support for 4-byte unicode characters in mysql [2]
* Event message format [3]
* Instance auto evacuation [4]

[1] - 
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting#Proposed_agenda
[2] - https://review.openstack.org/#/c/280371/
[3] - https://review.openstack.org/#/c/231382/
[4] - https://review.openstack.org/#/c/257809/

-- 
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Mike Bayer



On 02/23/2016 12:20 PM, Sean Dague wrote:

On 02/23/2016 11:29 AM, Mike Bayer wrote:



On 02/22/2016 08:18 PM, Sean Dague wrote:

On 02/22/2016 08:08 PM, Davanum Srinivas wrote:

Sean,

You need to set the env variable like so. See testenv:mysql-python
for example
OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost


Thanks,
Dims

[1]
http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION&i=nope&files=&repos=



If I am reading this correctly, this needs full access to the whole
mysql administratively?


the openstack_citest user needs permission to create and use new
databases when the multiprocessing feature of testr is used.   This is
not a new requirement and the provisioning refactor in oslo.db did not
invent this.


Ok, well it was invented somewhere after it was extracted from Nova. :)


Is that something that could be addressed? In many of my environments
the mysql db does other things as well, so giving full admin to
arbitrary test code is a bit concerning.


I'd suggest that running any test suite against a database that is used
for other things is not an optimal practice; test suites by definition
can break things.   Even if the test suite user has limited permissions,
there's still many ways a bad test can break your database even though
it's less likely.   Running an additional mysql server against an
alternate data directory with a different port is one option here.


 Tempest ran into a similar
issue and addressed this by allowing for preallocation of accounts. That
kind of approach seems like it would work here given that you could do
grants on well known names.


This is a feature that could be supported by oslo.db provisioning. Right
now the multi-process provisioning is hardcoded to use random names, but
certainly options or environment variables could be established for it
to work from.  But you'd have to ensure that multiple test suites
aren't using the same set of names at the same time.

Feel free to suggest the preferred system of establishing these
pre-defined database names and I or someone else (since im on PTO all
next week) can work something up.


2 thoughts on that:

1) Being able to do a grant with a prefix like

GRANT all on 'openstack_ci%'.* to openstack_citest

Then using that prefix in the random db generation. That would at least
limit scope. That seems the easiest to do with the existing infrastructure.


a prefix would be very easy, and I almost wonder if we should just have 
an identifiable prefix on the username in all cases anyway.   However, 
the wildcard scheme here is only useful on MySQL.  Other backends don't 
support such a liberal setting.





2) Have a set of stack dbs with openstack_citest## where # is a number,
and the testr worker id is used to set the number.

That would be more like the static accounts model used in Tempest.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Mike Bayer



On 02/23/2016 12:06 PM, Roman Podoliaka wrote:

Mike,

I think that won't work as Nova creates its own instance of
_TransactionContextManager:

https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/db/sqlalchemy/api.py#L134-L135

Maybe we could change _TestTransactionFactory a bit, so that it takes
a context manager instance as an argument?


If they aren't using the enginefacade global context, then that's even 
easier.  They should be able to drop in _TestTransactionFactory or any 
other TransactionFactory into the _TransactionContextManager they have 
and then swap it back.   If there aren't API methods for this already, 
because everything in enginefacade is underscored, feel free to add. 
Also I'm not sure how the enginefacade integration with nova didn't 
already cover this, I guess it doesn't yet impact all of those existing 
MySQLOpportunisticTest classes it has.







On Tue, Feb 23, 2016 at 6:09 PM, Mike Bayer  wrote:



On 02/23/2016 09:22 AM, Sean Dague wrote:


With enginefacade work coming into projects, there seem to be some
new bits around oslo.db global sessions.

The effect of this on tests is a little problematic. Because it builds
global state which couples between tests. I've got a review to use mysql
connection explicitly for some Nova functional tests which correctly
fails and exposes a bug when run individually. However, when run in a
full test run, the global session means that it's not run against mysql,
it's run against sqlite, and passes.

https://review.openstack.org/#/c/283364/

We need something that's the inverse of session.configure() -

https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
to reset the global session.

Pointers would be welcomed.



from the oslo.db side, we have frameworks for testing that handle all of
these details (e.g. oslo_db.sqlalchemy.test_base.DbTestCase and DbFixture).
I don't believe Nova uses these frameworks (I think it should long term),
but for now the techniques used by oslo.db's framework should likely be
used:

self.test.enginefacade = enginefacade._TestTransactionFactory(
 self.test.engine, self.test.sessionmaker, apply_global=True,
 synchronous_reader=True)

self.addCleanup(self.test.enginefacade.dispose_global)


The above apply_global flag indicates that the global enginefacade should
use this TestTransactionFactory until disposed.







 -Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Sean Dague
On 02/23/2016 11:29 AM, Mike Bayer wrote:
> 
> 
> On 02/22/2016 08:18 PM, Sean Dague wrote:
>> On 02/22/2016 08:08 PM, Davanum Srinivas wrote:
>>> Sean,
>>>
>>> You need to set the env variable like so. See testenv:mysql-python
>>> for example
>>> OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost
>>>
>>>
>>> Thanks,
>>> Dims
>>>
>>> [1]
>>> http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION&i=nope&files=&repos=
>>>
>>
>> If I am reading this correctly, this needs full access to the whole
>> mysql administratively?
> 
> the openstack_citest user needs permission to create and use new
> databases when the multiprocessing feature of testr is used.   This is
> not a new requirement and the provisioning refactor in oslo.db did not
> invent this.

Ok, well it was invented somewhere after it was extracted from Nova. :)

>> Is that something that could be addressed? In many of my environments
>> the mysql db does other things as well, so giving full admin to
>> arbitrary test code is a bit concerning.
> 
> I'd suggest that running any test suite against a database that is used
> for other things is not an optimal practice; test suites by definition
> can break things.   Even if the test suite user has limited permissions,
> there's still many ways a bad test can break your database even though
> it's less likely.   Running an additional mysql server against an
> alternate data directory with a different port is one option here.
> 
> 
>  Tempest ran into a similar
>> issue and addressed this by allowing for preallocation of accounts. That
>> kind of approach seems like it would work here given that you could do
>> grants on well known names.
> 
> This is a feature that could be supported by oslo.db provisioning. Right
> now the multi-process provisioning is hardcoded to use random names, but
> certainly options or environment variables could be established for it
> to work from.  But you'd have to ensure that multiple test suites
> aren't using the same set of names at the same time.
> 
> Feel free to suggest the preferred system of establishing these
> pre-defined database names and I or someone else (since im on PTO all
> next week) can work something up.

2 thoughts on that:

1) Being able to do a grant with a prefix like

GRANT all on 'openstack_ci%'.* to openstack_citest

Then using that prefix in the random db generation. That would at least
limit scope. That seems the easiest to do with the existing infrastructure.

2) Have a set of stack dbs with openstack_citest## where # is a number,
and the testr worker id is used to set the number.

That would be more like the static accounts model used in Tempest.
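
For option 1, the random name generation side would only need something
like this (illustrative only, not the current oslo.db provisioning code):

    import uuid

    DB_PREFIX = "openstack_ci"

    def random_test_db_name():
        # Random, but always under the prefix covered by the GRANT above,
        # so the test user never needs server-wide privileges.
        return "%s_%s" % (DB_PREFIX, uuid.uuid4().hex[:12])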

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] code changes for Mitaka

2016-02-23 Thread Davanum Srinivas
Folks,

Let's wrap up any code changes needed for Mitaka as quickly as
possible. We need to cut final releases by tomorrow.
"Final release for non-client libraries: Feb 24" from Doug's email [1]
We may need some final tweaking later just to catch up to g-r, but
any code changes need to be wrapped up now.

Cores,

Please go through what's left in the review queue:
bit.ly/oslo-reviews

Thanks,
Dims

[1] http://markmail.org/message/5mzuzyol5pnuxs5p

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Roman Podoliaka
Mike,

I think that won't work as Nova creates its own instance of
_TransactionContextManager:

https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/db/sqlalchemy/api.py#L134-L135

Maybe we could change _TestTransactionFactory a bit, so that it takes
a context manager instance as an argument?

On Tue, Feb 23, 2016 at 6:09 PM, Mike Bayer  wrote:
>
>
> On 02/23/2016 09:22 AM, Sean Dague wrote:
>>
>> With enginefacade work coming into projects, there seem to be some
>> new bits around oslo.db global sessions.
>>
>> The effect of this on tests is a little problematic. Because it builds
>> global state which couples between tests. I've got a review to use mysql
>> connection explicitly for some Nova functional tests which correctly
>> fails and exposes a bug when run individually. However, when run in a
>> full test run, the global session means that it's not run against mysql,
>> it's run against sqlite, and passes.
>>
>> https://review.openstack.org/#/c/283364/
>>
>> We need something that's the inverse of session.configure() -
>>
>> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
>> to reset the global session.
>>
>> Pointers would be welcomed.
>
>
> from the oslo.db side, we have frameworks for testing that handle all of
> these details (e.g. oslo_db.sqlalchemy.test_base.DbTestCase and DbFixture).
> I don't believe Nova uses these frameworks (I think it should long term),
> but for now the techniques used by oslo.db's framework should likely be
> used:
>
> self.test.enginefacade = enginefacade._TestTransactionFactory(
> self.test.engine, self.test.sessionmaker, apply_global=True,
> synchronous_reader=True)
>
> self.addCleanup(self.test.enginefacade.dispose_global)
>
>
> The above apply_global flag indicates that the global enginefacade should
> use this TestTransactionFactory until disposed.
>
>
>
>
>
>>
>> -Sean
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] Getting Started Guide

2016-02-23 Thread Kenny Johnston
On Tue, Feb 23, 2016 at 1:49 AM, Andreas Jaeger  wrote:

> On 2016-02-23 04:45, Kenny Johnston wrote:
> >   * The Product Work Group (PWG) uses the openstack-user-stories
> > repository and gerrit to review and produce .rst formatted user
> stories
> >   * The PWG is comprised (mostly) of non-developers
> >   * We've found the Getting Started guide a bit inadequate for pointing
> > new PWG contributors to in order to get them up and running with our
> > process, and investigated creating a separate guide of our own to
> > cover getting setup with Windows machines and common issues with
> > corporate firewalls
> >   * I heard at the Ops Summit that the getting started guide should be
> > THE place we point new contributors to learn how to get setup for
> > contributing
> >
> > Would it be palatable to submit patches updating the current guide to
> > cover non-developer getting started instructions?
>
> Yes, please!
>
> Let's try to get this into the Infra Manual, I prefer to have a single
> place for this.
>

OK, great! I'll get our team to put together a patch and submit it.


>
> As usual: It's best to discuss a concrete patch - submit one and then
> let's iterate on that one,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kenny Johnston
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] Publishing kolla images to docker-registry.openstack.org

2016-02-23 Thread Steven Dake (stdake)
Ricardo,

Apologies for the lag; I remember reading your email but not answering
personally.  I thought Michal's response was appropriate but there is a
bit more to it.

We definitely want to have images for the following events:
* Any tag such as a milestone, or release candidate, or release
* Nightly builds for folks that want to run bleeding edge without the pain
of building images

The nightly builds can easily be culled after 30 days.

We have implementations for centos, ubuntu, and oracle linux.  We have
source and binary versions of all three of these distros, so essentially we
have 6 sets of registry images.

There are 115 containers we would build, push, and tag.  I think Michal's
estimate is for a limited subset of the containers.  Ideally we would want
to build and push all the containers in our repository, which I believe is
10-15GB per distro+build type combination.

At the high end, we are looking at:
* (for permanently tagged releases) 15 GB * 6 sets of images * (3 milestone
tags + 1 release tag + 1-3 RCs per cycle = ~7 tags) * 2 release cycles per
year = roughly 630 GB of images per cycle, i.e. growth of approximately
1.3 terabytes per year
* (for the nightly builds) 15 GB * 6 sets of images * 30 days = 2.7
terabytes in continuous use.

So that is roughly a 2.7 TB baseline + ~1.3 TB per year of growth.  As we
add containers over time, the yearly growth may increase, but I doubt it
would ever be more than 2 terabytes in the next 4 release cycles of
OpenStack.

There are several registry storage backends available including swift, so
if infra has swift available, that would be a viable option.

I'll get an exact number for our containers as they build today and
respond to this thread since it affects the estimate.

It is not critical that the storage be SSD, since tag and nightly build
operations could, I believe, run as long-running post jobs without harming
infra resources (although without SSD they may go past the 90 minute
limit infra likes to stick to).  I don't have any non-SSD hardware to
test the build with, so I have no idea what performance impact the SSDs
have.  I know when I went from a regular SSD to a PCIe NVMe SSD (Intel
750) my build times dropped from 50 minutes to about 15 minutes.

Note we don't have gate jobs at present for oracle linux nor Ubuntu
binary.  Only CentOS binary, CentOS source, and Ubuntu source are in our
gates at present.  For the present term, storage would be 630GB per year.

I'll get back to you with exact numbers on storage used by the registry in
a full build scenario today.

Regards
-steve


On 2/21/16, 9:26 AM, "Michał Jastrzębski"  wrote:

>I'd say 5 GB should be enough for all the images per distro (maybe
>less if we have to squeeze), so with 2 strongly supported distros
>that's 10 GB. If we would like to add all distros we support, that's
>20-25 (I think). That also depends on how many older versions we want
>to keep (current+stable would be the absolute minimum; we might
>increase it to milestones). We have lots of options to tweak so no one
>will get hurt, and if we have a dedicated machine for us (which we
>should, because apart from disk space the registry can actually eat up
>lots of IOPS; it can be a VM though, with a disk that can handle that),
>I think any dedicated, industry standard disk should be enough (but SSD
>would be great).
>
>Cheers,
>Michal
>
>On 20 February 2016 at 16:14, Ricardo Carrillo Cruz
> wrote:
>> Hi Steve
>>
>> When you say the registry would require a machine with plenty of disk
>>space,
>> do you have an estimate of storage needed?
>>
>> Regards
>>
>> 2016-02-20 14:21 GMT+01:00 Steven Dake (stdake) :
>>>
>>> Infra folks,
>>>
>>> I'd like to see a full CI/CD pipeline of Kolla to an OpenStack
>>> infrastructure hosted registry.
>>>
>>> With docker registry 2.2 and earlier a Docker push of Kolla containers
>>> took 5-10 hours.  This is because of design problems in Docker which
>>>made a
>>> push each layer of each Docker image repeatedly.  This has been
>>>rectified in
>>> docker-regitery 2.3 (the latest hub tagged docker registry).  The 5-10
>>>hour
>>> upload times are now down to about 15 minutes.  Now it takes
>>>approximately
>>> 15 minutes to push all 115 kolla containers on a gigabit network.
>>>
>>> Kolla in general wants to publish to a docker registry at least per
>>>tag,
>>> and possibly per commit (or alternatively daily).  We already build
>>>Kolla
>>> images in the gate, and although sometimes our jobs time out on CentOS
>>>the
>>> build on Ubuntu is about 12 minutes.  The reason our jobs time out on
>>>CentOS
>>> is because we lack local to the infrastructure mirrors as is available
>>>on
>>> Ubuntu from a recent patch I believe that Monty offered.
>>>
>>> We have one of two options going forward
>>>
>>> We could publish to the docker hub registry
>>> We could publish to docker-registry.openstack.org
>>>
>>> Having a docker-registry.openstack.org would be my preference, but
>>> requires a machine with plenty of disk space and a copy of docker
>>>1.10.1 or
>>> later running on it.  The do

Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Mike Bayer



On 02/22/2016 08:18 PM, Sean Dague wrote:

On 02/22/2016 08:08 PM, Davanum Srinivas wrote:

Sean,

You need to set the env variable like so. See testenv:mysql-python for example
OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost

Thanks,
Dims

[1] 
http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION&i=nope&files=&repos=


If I am reading this correctly, this needs full access to the whole
mysql administratively?


the openstack_citest user needs permission to create and use new 
databases when the multiprocessing feature of testr is used.   This is 
not a new requirement and the provisioning refactor in oslo.db did not 
invent this.






Is that something that could be addressed? In many of my environments
the mysql db does other things as well, so giving full admin to
arbitrary test code is a bit concerning.


I'd suggest that running any test suite against a database that is used 
for other things is not an optimal practice; test suites by definition 
can break things.   Even if the test suite user has limited permissions, 
there are still many ways a bad test can break your database even though 
it's less likely.   Running an additional mysql server against an 
alternate data directory with a different port is one option here.



 Tempest ran into a similar

issue and addressed this by allowing for preallocation of accounts. That
kind of approach seems like it would work here given that you could do
grants on well known names.


This is a feature that could be supported by oslo.db provisioning. 
Right now the multi-process provisioning is hardcoded to use random 
names, but options or environment variables could certainly be added so 
that it works from a pre-defined set of names.  But you'd have to ensure 
that multiple test suites aren't using the same set of names at the same time.


Feel free to suggest the preferred system of establishing these 
pre-defined database names and I or someone else (since I'm on PTO all 
next week) can work something up.







-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Mike Bayer



On 02/22/2016 08:08 PM, Davanum Srinivas wrote:

Sean,

You need to set the env variable like so. See testenv:mysql-python for example
OS_TEST_DBAPI_ADMIN_CONNECTION=mysql://openstack_citest:openstack_citest@localhost


you should not need to set this if you're using the default URL.  The 
default is right here:


https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/provision.py#L457

if that default is not working when OS_TEST_DBAPI_ADMIN_CONNECTION is 
not set, then that's a bug in oslo.db that should be reported.


It is using pymysql now though, so if you are trying to run against 
python-mysql then you'd need to set this.
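
For illustration only, one way to point the provisioning code at the
MySQL-Python driver instead, reusing the conventional openstack_citest
account (normally you'd export this from the environment or a tox
setenv rather than in Python):

    import os

    # "mysql://" selects the MySQL-Python (mysqldb) dialect in SQLAlchemy,
    # whereas the oslo.db default uses the pymysql driver.
    os.environ.setdefault(
        "OS_TEST_DBAPI_ADMIN_CONNECTION",
        "mysql://openstack_citest:openstack_citest@localhost")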






Thanks,
Dims

[1] 
http://codesearch.openstack.org/?q=OS_TEST_DBAPI_ADMIN_CONNECTION&i=nope&files=&repos=


On Mon, Feb 22, 2016 at 8:02 PM, Sean Dague  wrote:

Before migrating into oslo.db the opportunistic testing for database
backends was pretty simple. Create an openstack_citest@openstack_citest
pw:openstack_citest and you could get tests running on mysql. This no
longer seems to be the case.

I went digging through the source code a bit and it's not entirely
evident what the new required setup is. Can someone point me to the docs
to use this? Or explain what the setup for local testing is now? We've
got some bugs which expose on mysql and not sqlite in nova that we'd
like to get some test cases written for.

 -Sean

--
Sean Dague
http://dague.net


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] documentation on using the oslo.db opportunistic test feature

2016-02-23 Thread Mike Bayer



On 02/22/2016 08:02 PM, Sean Dague wrote:

Before migrating into oslo.db the opportunistic testing for database
backends was pretty simple. Create an openstack_citest@openstack_citest
pw:openstack_citest and you could get tests running on mysql. This no
longer seems to be the case.


this is still the case.   The provisioning system hardcodes this URL as 
the default and no changes were needed to any classes using the existing 
MySQLOpportunisticTestCase base.


Nova has plenty of test cases that use this and I run these tests 
against MySQL on my own CI daily:


grep -hC3  "MySQLOpportunistic" `find nova/tests -name "*.py"`


class TestMySQLSqlalchemyTypesRepr(TestSqlalchemyTypesRepr,
                                   test_base.MySQLOpportunisticTestCase):
    pass


--


class TestMigrationUtilsMySQL(TestMigrationUtilsSQLite,
                              test_base.MySQLOpportunisticTestCase):
    pass
--


class TestNovaMigrationsMySQL(NovaMigrationsCheckers,
                              test_base.MySQLOpportunisticTestCase,
                              test.NoDBTestCase):
    def test_innodb_tables(self):
        with mock.patch.object(sa_migration, 'get_engine',
--


class TestNovaAPIMigrationsMySQL(NovaAPIModelsSync,
                                 test_base.MySQLOpportunisticTestCase,
                                 test.NoDBTestCase):
    pass

--


class TestNovaAPIMigrationsWalkMySQL(NovaAPIMigrationsWalk,
                                     test_base.MySQLOpportunisticTestCase,
                                     test.NoDBTestCase):
    pass
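
As a minimal sketch of what such a subclass looks like in practice (the
class and test names here are made up): when the conventional
openstack_citest MySQL account isn't reachable, the opportunistic base
class skips the test instead of failing it.

    from oslo_db.sqlalchemy import test_base


    class TestSomethingMySQL(test_base.MySQLOpportunisticTestCase):

        def test_select_one(self):
            # self.engine / self.sessionmaker point at a database
            # provisioned for this test by oslo.db when the backend is
            # available.
            with self.engine.connect() as conn:
                self.assertEqual(1, conn.execute("SELECT 1").scalar())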





I went digging through the source code a bit and it's not entirely
evident what the new required setup is. Can someone point me to the docs
to use this? Or explain what the setup for local testing is now? We've
got some bugs which expose on mysql and not sqlite in nova that we'd
like to get some test cases written for.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Mike Bayer



On 02/23/2016 09:22 AM, Sean Dague wrote:

With the enginefacade work coming into projects, there seem to be some
new bits around oslo.db global sessions.

The effect of this on tests is a little problematic. Because it builds
global state which couples between tests. I've got a review to use mysql
connection explicitly for some Nova functional tests which correctly
fails and exposes a bug when run individually. However, when run in a
full test run, the global session means that it's not run against mysql,
it's run against sqlite, and passes.

https://review.openstack.org/#/c/283364/

We need something that's the inverse of session.configure() -
https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
to reset the global session.

Pointers would be welcomed.


from the oslo.db side, we have frameworks for testing that handle all of 
these details (e.g. oslo_db.sqlalchemy.test_base.DbTestCase and 
DbFixture).   I don't believe Nova uses these frameworks (I think it 
should long term), but for now the techniques used by oslo.db's 
framework should likely be used:


self.test.enginefacade = enginefacade._TestTransactionFactory(
self.test.engine, self.test.sessionmaker, apply_global=True,
synchronous_reader=True)

self.addCleanup(self.test.enginefacade.dispose_global)


The above apply_global flag indicates that the global enginefacade 
should use this TestTransactionFactory until disposed.
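
If Nova wanted to wrap that into one of its own fixtures, a rough sketch
might look like the following -- the fixture class itself is hypothetical,
only the _TestTransactionFactory usage comes from above:

    import fixtures

    from oslo_db.sqlalchemy import enginefacade


    class GlobalEngineFacadeFixture(fixtures.Fixture):
        """Hypothetical helper: patch the global enginefacade for one test."""

        def __init__(self, engine, sessionmaker):
            super(GlobalEngineFacadeFixture, self).__init__()
            self.engine = engine
            self.sessionmaker = sessionmaker

        def _setUp(self):
            factory = enginefacade._TestTransactionFactory(
                self.engine, self.sessionmaker, apply_global=True,
                synchronous_reader=True)
            # Undo the global patch when the test completes instead of
            # leaking it into the rest of the run (the problem described
            # above).
            self.addCleanup(factory.dispose_global)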








-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Jonathan Proulx
On Tue, Feb 23, 2016 at 10:14:11PM +0800, Qiming Teng wrote:

:My take of this is that we are saving the cost by isolating developers
:(contributors) from users/customers.

I'm a little concerned about this as well.  Though presumably at least
the PTLs would still attend the User/Ops conference even if their
project didn't co-schedule a midcycle, and while there they could focus
more on that user feedback rather than splitting their attention
between implementation details and other design summit type issues.

I'm not entirely settled in my opinion yet, but right now the proposed
changes seem like a good direction to me.

Moving the design summit seems popular with the dev community here.

Moving the User/Ops session further after release also seems like a
good plan as there will be some people there with real production
experience with the new release.  In Tokyo we had an Operators session
on upgrade issues with Liberty that was very well attended but exactly
zero attendees had actually run the upgrade in production.

So later in the cycle is definitely better for getting feedback on
the last release, but is there a good plan for how that feedback will
feed into the next release (or maybe at that point it will be next+1)?

-Jon 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] python-novaclient region setting

2016-02-23 Thread Matt Riedemann



On 2/22/2016 8:02 AM, Monty Taylor wrote:

On 02/21/2016 11:40 PM, Andrey Kurilin wrote:

Hi!
`novaclient.client.Client` entry-point supports almost the same
arguments as `novaclient.v2.client.Client`. The difference is only in
api_version, so you can set up region via `novaclient.client.Client` in
the same way as `novaclient.v2.client.Client`.


The easiest way to get a properly constructed nova Client is with
os-client-config:

import os_client_config

OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
OS_PASSWORD="REDACTED"
OS_AUTH_URL="http://auth.vexxhost.net";
OS_REGION_NAME="ca-ymq-1"

client = os_client_config.make_client(
    'compute',
    auth_url=OS_AUTH_URL, username=OS_USERNAME,
    password=OS_PASSWORD, project_name=OS_PROJECT_NAME,
    region_name=OS_REGION_NAME)

The upside is that the constructor interface is the same for all of the
rest of the client libs too (just change the first argument) - and it
will also read in OS_ env vars or named clouds from clouds.yaml if you
have them set.

(The 'simplest' way is to put your auth and region information into a
clouds.yaml file like this:

http://docs.openstack.org/developer/os-client-config/#site-specific-file-locations


Such as:

# ~/.config/openstack/clouds.yaml
clouds:
  vexxhost:
    profile: vexxhost
    auth:
      project_name: d8af8a8f-a573-48e6-898a-af333b970a2d
      username: 0b8c435b-cc4d-4e05-8a47-a2ada0539af1
      password: REDACTED
    region_name: ca-ymq-1


And do:

client = os_client_config.make_client('compute', cloud='vexxhost')


If you don't want to do that for some reason but you'd like to construct
a novaclient Client object by hand:


from keystoneauth1 import loading
from keystoneauth1 import session as ksa_session
from novaclient import client as nova_client

OS_PROJECT_NAME="d8af8a8f-a573-48e6-898a-af333b970a2d"
OS_USERNAME="0b8c435b-cc4d-4e05-8a47-a2ada0539af1"
OS_PASSWORD="REDACTED"
OS_AUTH_URL="http://auth.vexxhost.net";
OS_REGION_NAME="ca-ymq-1"

# Get the auth loader for the password auth plugin
loader = loading.get_plugin_loader('password')
# Construct the auth plugin
auth_plugin = loader.load_from_options(
    auth_url=OS_AUTH_URL, username=OS_USERNAME, password=OS_PASSWORD,
    project_name=OS_PROJECT_NAME)

# Construct a keystone session
# Other arguments that are potentially useful here are:
#  verify - bool, whether or not to verify SSL connection validity
#  cert - SSL cert information
#  timeout - time in seconds to use for connection level TCP timeouts
session = ksa_session.Session(auth_plugin)

# Now make the client
# Other arguments you may be interested in:
#  service_name - if you need to specify a service name for finding the
#                 right service in the catalog
#  service_type - if the cloud in question has given a different
#                 service type (should be 'compute' for nova - but
#                 novaclient sets it, so it's safe to omit in most cases)
#  endpoint_override - if you want to tell it to use a different URL
#                      than what the keystone catalog returns
#  endpoint_type - if you need to specify admin or internal
#                  endpoints rather than the default 'public'
#                  Note that in glance and barbican, this key is called
#                  'interface'
client = nova_client.Client(
    version='2.0',  # or set the specific microversion you want
    session=session, region_name=OS_REGION_NAME)

It might be clear why I prefer the os_client_config factory function
instead - but what I prefer and what you prefer might not be the same
thing. :)


On Mon, Feb 22, 2016 at 6:11 AM, Xav Paice mailto:xavpa...@gmail.com>> wrote:

Hi,

In http://docs.openstack.org/developer/python-novaclient/api.html
it's got some pretty clear instructions not to
use novaclient.v2.client.Client but I can't see another way to
specify the region - there's more than one in my installation, and
no param for region in novaclient.client.Client

Shall I hunt down/write a blueprint for that?


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe


http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Andrey Kurilin.


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subje

Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-23 Thread Alex Schultz
On Tue, Feb 23, 2016 at 1:48 AM, Ptacek, MichalX 
wrote:

> Hello again,
>
>
>
> In the last few days I realized that the rpm/deb packages from the supported
> platforms are too old (OSC, python-PROJECTclient, …)
>
> so I suppose that I should install newer versions not via deb/rpm packages
> but as pip packages.
>
> This kind of dependency on system packages when trying to install v7
> openstack puppet modules is probably natural for more experienced puppet
> guys,
>
> but I think it should be covered somewhere in the docs.
>
>
>

So for our testing we're using the RDO or UCA package sets for the
releases.  Unfortunately you need to have a matching set of packages and
puppet modules for everything to work. What you're running into is trying
to use distro provide packages (probably for kilo or older) with manifests
that were written for something much newer like Liberty or Mitaka.  We do
have a module[0] that can help pull in these newer repos when you're
setting up your system.  You shouldn't pip install anything but rather
leverage the matching package set for the version of OpenStack you are
trying to deploy.




> I suppose I should install openstack clients as pip packages instead …
>
> Like. pip install python-openstackclient==2.0.0, pip install
> python-keystoneclient, …
>
>
>
> by installing them in this way, manifest deployment finished smoothly, but
> I realized that “missing rpm/deb packages” are also installed (even when
> pip version is present),
>
> which might lead to some inconsistency …
>
>
>
> like currently I am fighting with some issue on glance:
>
> ERROR glance.common.config [-] Unable to load glance-api-keystone from
> configuration  file /etc/glance/glance-api-paste.ini.
>
> Got: ImportError(‘No module named middleware.auth_token’),
>
> (I think it’s asking for this file
>
> /usr/lib/python2.7/dist-packages/keystoneclient/middleware
>
> Which is present on the system)
>
>
>
> so my small and general question would be …
>
> What is the procedure if one would like to work with liberty openstack on
> old/supported platform  ?
>
> (currently I am using Ubuntu 14.04 LTS)
>
>
>

For this configuration you'd want to use the Liberty UCA package set [1]
with 14.04 and it should work.

Thanks,
-Alex

[0] http://git.openstack.org/cgit/openstack/puppet-openstack_extras
[1] https://wiki.ubuntu.com/ServerTeam/CloudArchive



Thanks,
>
> Michal
>
>
>
>
>
> *From:* Ptacek, MichalX [mailto:michalx.pta...@intel.com]
> *Sent:* Monday, February 22, 2016 9:50 AM
>
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [puppet] is puppet-keystone using v3
> credentials correctly ?
>
>
>
> Hi Matt,
>
>
>
> thanks for the good hint!
>
> Issue disappeared with newer python-openstackclient-1.0.3-3.fc23.noarch
>
> python-openstackclient-1.0.1-1.fc22.noarch is too old,
>
>
>
> it’s interesting, as the supported platforms for puppet-openstack are
> Fedora 21/22, and I only got it running with fc23 :)
>
>
>
> best regards,
>
> Michal
>
>
>
> *From:* Matt Fischer [mailto:m...@mattfischer.com ]
> *Sent:* Friday, February 19, 2016 4:27 PM
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [puppet] is puppet-keystone using v3
> credentials correctly ?
>
>
>
> You shouldn't have to do any of that, it should just work. I have OSC
> 2.0.0 in my environment though (Ubuntu). I'm just guessing but perhaps that
> client is too old? Maybe a Fedora user could recommend a version.
>
>
>
> On Fri, Feb 19, 2016 at 7:38 AM, Matthew Mosesohn 
> wrote:
>
> Hi Michal,
>
> Just add --os-identity-api-version=3 to your command it will work. The
> provider uses v3 openstackclient via env var
> OS_IDENTITY_API_VERSION=3. The default is still 2.
>
> Best Regards,
> Matthew Mosesohn
>
>
> On Fri, Feb 19, 2016 at 5:25 PM, Matt Fischer 
> wrote:
> > What version of openstack client do you have? What version of the module
> are
> > you using?
> >
> > On Feb 19, 2016 7:20 AM, "Ptacek, MichalX" 
> wrote:
> >>
> >> Hi all,
> >>
> >>
> >>
> >> I was playing some time with puppet-keystone deployments,
> >>
> >> and also reported one issue related to this:
> >>
> >> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
> >>
> >> but in general my observations are that keystone_service is using v3
> >> credentials with openstack cli commands that are not compatible
> >>
> >>
> >>
> >> e.g.
> >>
> >> Error: Failed to apply catalog: Execution of '/bin/openstack service
> list
> >> --quiet --format csv --long' returned 2: usage: openstack service list
> [-h]
> >> [-f {csv,table}] [-c COLUMN]
> >>   [--max-width ]
> >>   [--quote {all,minimal,none,nonnumeric}]
> >> openstack service list: error: unrecognized arguments: --long
> >>
> >>
> >>
> >>
> >>
> >> It can’t be bug, because whole module will not work due to this J
> >>
> >> I think I miss something 

Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Michael Krotscheck
On Tue, Feb 23, 2016 at 3:40 AM Chris Dent  wrote:

>
> However, it makes me sad to see the continued trend of limiting
> in-person gatherings. They are useful as a way of keeping people
> aligned with similar goals and approaches to reaching those goals.
> Yes, it is expensive, but it would be nice if the patrons (our
> employers) would recognize that getting us all working well together
> is a cost of doing this business.
>

To echo (and add an angle) what Chris is saying: follow the money. Sales
and marketing has traditionally gotten more dollars than dev, and I feel
that splitting the summit into two is the start of the long slow
budget-cutting death of the design summit. "Sorry, we just can't afford to
send you this year" is going to erode attendance.

Also, it doesn't seem like alternative cost-savings solutions have been
considered. For example, how about we host a summit in a Not-Top-Tier city
for a change? Examples that come to mind are Columbus, Pittsburgh, and
Indianapolis, which each have convention centers larger than Austin.

On the upside, if we're going down this path, can I recommend that the Ops
summit be combined with the marketing summit? It seems like a natural fit
to put People Who Deploy OpenStack together with People Who Sell Things To
People Who Deploy OpenStack.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] iPXE / UEFI support for stable liberty

2016-02-23 Thread Chris K
Thank you for the replies,
I have abandoned the patches. Upon re-review and testing of the case I
thought was working, I agree that these patches are beyond the scope of what
a backport should be.

Chris

On Tue, Feb 23, 2016 at 6:22 AM, Heck, Joseph  wrote:

> Morning,
>
> Just a quick note, there is UEFI booting support within iPXE.  You have to
> invoke a specific build of the binary to get the output, but it's there:
>  make bin-x86_64-efi/snponly.efi
>
> Not entirely relevant to the core of the thread, but wanted to share that
> detail if it's been otherwise missed.
>
> - joe
> _
> From: Jim Rollenhagen 
> Sent: Monday, February 22, 2016 7:37 PM
> Subject: Re: [openstack-dev] [ironic] [stable] iPXE / UEFI support for
> stable liberty
> To: OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
>
>
>
>
> On Feb 22, 2016, at 15:15, Chris K < nobody...@gmail.com> wrote:
>
> Hi Ironicers,
>
> I wanted to draw attention to iPXE / UEFI support in our stable liberty
> branch.
>
>
> Which doesn't exist, right? Or does it work depending on some other
> factors?
>
> There are environments that require support for UEFI; while ironic does
> have this support in master, it is not capable of this in many
> configurations when using the stable liberty release, and the docs around
> this feature were unclear.
>
>
> What's unclear about the docs? Can you point at a specific thing, or is it
> just the lack of a thing that specifically says UEFI+iPXE is not supported?
>
> Because support for this feature was unclear when the liberty branch was
> cut, it has caused some confusion for users wishing or needing to consume
> the stable branch. I have proposed patches
> https://review.openstack.org/#/c/281564 and
> https://review.openstack.org/#/c/281536 with the goal of correcting this,
> given that master may not be acceptable for some businesses to consume. I
> welcome feedback on this.
>
>
> I believe the first patch adds the feature, and the second patch fixes a
> bug with the feature. Correct?
>
> As you know, stable policy is to not backport features. I don't see any
> reason this case should bypass this policy (which is why I asked so many
> questions above, it's odd to me that this is an open question at all).
>
> It seems like a better path would be to fix the docs to avoid the
> confusion in the first place, right? I'm not sure what the "backport" would
> look like, given that docs patch wouldn't make sense on master, but surely
> some more experienced stable maintainers could guide us. :)
>
> // jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-02-23 Thread Chris Friesen

On 02/23/2016 05:25 AM, Roman Podoliaka wrote:


So looks like it's two related problems here:

1) the distribution of load between workers is uneven. One way to fix
this is to decrease the default number of greenlets in pool [2], which
will effectively cause a particular worker to give up new connections
to other forks, as soon as there are no more greenlets available in
the pool to process incoming requests. But this alone will *only* be
effective when the concurrency level is greater than the number of
greenlets in pool. Another way would be to add a context switch to
eventlet accept() loop [8] right after spawn_n() - this is what I've
got with greenthread.sleep(0.05) [9][10] (the trade off is that we now
only can accept() 1/ 0.05 = 20 new connections per second per worker -
I'll try to experiment with numbers here).


Would greenthread.sleep(0) be enough to trigger a context switch?
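
(FWIW, eventlet's sleep(0) does yield to the hub even with a zero delay,
so it should be enough. A standalone sketch of the accept() loop in
question, with purely illustrative names and sizes, not the actual
nova/oslo.service code:)

    import eventlet
    from eventlet import greenthread

    pool = eventlet.GreenPool(size=100)


    def handle(sock):
        sock.recv(1024)   # placeholder request handling
        sock.close()


    def serve():
        listener = eventlet.listen(('127.0.0.1', 0))
        while True:
            client, addr = listener.accept()
            pool.spawn_n(handle, client)
            # greenthread.sleep(0) switches to the hub immediately, giving
            # other greenthreads a chance to run before the next accept(),
            # without the 20-accepts-per-second ceiling that sleep(0.05)
            # would impose.
            greenthread.sleep(0)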

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [ansible] [docs] Packaging changes about to land for Ubuntu + Neutron/Mitaka

2016-02-23 Thread James Page
Hi Folks

We're about to push the next set of updates into the mitaka-proposed area
of the Ubuntu Cloud Archive; these include changes to three neutron agent
packages which will have impact on end-users and documentation as well as
puppet modules, ansible playbooks, chef cookbooks etc...

1) Package renames

neutron-plugin-openvswitch-agent -> neutron-openvswitch-agent
neutron-plugin-linuxbridge-agent -> neutron-linuxbridge-agent
neutron-plugin-sriov-agent -> neutron-sriov-agent

Transitional packages are included for upgrades (so the old package names
will still work), but the name of each service no longer includes '-plugin'.

https://bugs.launchpad.net/bugs/1548244
https://bugs.launchpad.net/bugs/1321257

2) Dropping of ml2_conf.ini on config-file path for agents

Last cycle we included both the ml2_conf.ini and associate agent ini file
for each daemon as startup arguments to help upgraders deal with the
transition to agent ini files; ml2_conf.ini has now been dropped.

https://bugs.launchpad.net/bugs/1527005

Regards

James
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] weekly meeting #71

2016-02-23 Thread Emilien Macchi


On 02/22/2016 12:41 PM, Emilien Macchi wrote:
> Hi,
> 
> We'll have our weekly meeting tomorrow at 3pm UTC on
> #openstack-meeting4.
> 
> https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack
> 
> As usual, free free to bring topics in this etherpad:
> https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160223
> 
> We'll also have open discussion for bugs & reviews, so anyone is welcome
> to join.
> 
> See you there,
> 

Thanks for this quick & effective meeting.
You can read the notes:
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-02-23-15.00.html
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Questions on template-validate

2016-02-23 Thread Jay Dobies
I am going to bring this up in the team meeting tomorrow, but I figured 
I'd send it out here as well. Rather than retype the issue, please look at:


https://bugs.launchpad.net/heat/+bug/1548856

My question is what the desired behavior of template-validate should be, 
at least from a historical standpoint of what we've honored in the past. 
Before I propose/implement a fix, I want to make sure I'm not violating 
any unwritten expectations on how it should work.


On a related note -- and this is going to sound really stupid that I 
don't know this answer -- but are there any docs on actually using Heat? 
I was looking for docs that may explain what the expectation of 
template-validate was but I couldn't really find any.


The wiki links to a number of developer-centric docs (HOT guide, 
developer process, etc.). I found the (what I believe to be current) 
REST API docs [1] but the only real description is "Validates a template."


Thanks  :D


[1] http://developer.openstack.org/api-ref-orchestration-v1.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][openstack] os-client-config 1.16.0 release (mitaka)

2016-02-23 Thread no-reply
We are jubilant to announce the release of:

os-client-config 1.16.0: OpenStack Client Configuation Library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-client-config

With package available at:

https://pypi.python.org/pypi/os-client-config

Please report issues through launchpad:

http://bugs.launchpad.net/os-client-config

For more details, please see below.

1.16.0
^^


New Features


* Added kwargs and argparse processing for session_client.


Deprecation Notes
*

* Renamed simple_client to session_client. simple_client will remain
  as an alias for backwards compat.

Changes in os-client-config 1.15.0..1.16.0
--

03d5659 Update the README a bit
7a4993d Allow session_client to take the same args as make_client

Diffstat (except docs and test files)
-

README.rst | 64 ++
os_client_config/__init__.py   | 28 ++
.../notes/session-client-b581a6e5d18c8f04.yaml |  6 ++
3 files changed, 65 insertions(+), 33 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] - DVR L3 data plane performance results and scenarios

2016-02-23 Thread Jay Pipes

On 02/22/2016 10:25 PM, Wuhongning wrote:

Hi all,

There is also a control plane performance issue when we try to match
the typical AWS limit (200 subnets per router). When a router
with 200 subnets is scheduled on a new host, a 30s delay is observed before
all data plane setup is finished.


How quickly does AWS do the same setup?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] futurist 0.13.0 release (mitaka)

2016-02-23 Thread no-reply
We are amped to announce the release of:

futurist 0.13.0: Useful additions to futures, from the future.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/futurist

With package available at:

https://pypi.python.org/pypi/futurist

Please report issues through launchpad:

http://bugs.launchpad.net/futurist

For more details, please see below.

Changes in futurist 0.12.0..0.13.0
--

bd9f7ca Single quote the callables name (when submission errors)
d750d05 Updated from global requirements
f9f685e Reschedule failed periodic tasks after a short delay
6a2f5a0 Fix wrong comparison in reject_when_reached

Diffstat (except docs and test files)
-

futurist/periodics.py| 18 +-
futurist/rejection.py|  2 +-
test-requirements.txt|  2 +-
5 files changed, 46 insertions(+), 7 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index f17e854..6e6d300 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ hacking<0.11,>=0.10.0
-eventlet>=0.18.2 # MIT
+eventlet!=0.18.3,>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][magnum] Magnum gate issue

2016-02-23 Thread gordon chung


On 23/02/2016 8:09 AM, Oleksii Chuprykov wrote:
> Hi.
> I am the author of that change. We decided to revert it
> https://review.openstack.org/#/c/283297/ , but unfortunately our gates
> are also broken at the moment (by ceilometer).
> Sorry about that :(
>

just an fyi, there was a change in devstack [1] to some variables, so 
basically everyone using devstack plugins is broken. we have a fix in the 
merge queue [2].

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2016-February/087321.html
[2] https://review.openstack.org/#/c/283382/

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][ironic] ironic-lib 1.0.0 release (mitaka)

2016-02-23 Thread doug
We are thrilled to announce the release of:

ironic-lib 1.0.0: Ironic common library

This release is part of the mitaka release series.

With package available at:

https://pypi.python.org/pypi/ironic-lib

For more details, please see below.

Changes in ironic-lib 0.5.0..1.0.0
--

6893662 Updated from global requirements
1d6b628 Updated from global requirements
ceabbdf Updated from global requirements
5c7a847 Remove unused packages from requirements
987aed5 Updated from global requirements
340d323 Updated from global requirements
d6b46e9 Sync test_utils from ironic
8b6866f Add tests for qemu_img_info() & convert_image()
5e642ee Use imageutils from oslo.utils
46a5963 Updated from global requirements
048129a Updated from global requirements
b576a64 Updated from global requirements
912c481 Updated from global requirements

Diffstat (except docs and test files)
-

ironic_lib/disk_utils.py  |   3 +-
ironic_lib/openstack/__init__.py  |   0
ironic_lib/openstack/common/__init__.py   |   0
ironic_lib/openstack/common/_i18n.py  |  45 -
ironic_lib/openstack/common/imageutils.py | 152 --
openstack-common.conf |   7 --
requirements.txt  |  27 ++
test-requirements.txt |  18 ++--
10 files changed, 54 insertions(+), 238 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 2572b64..d8cd8b1 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5,5 +5,3 @@
-pbr>=1.6
-argparse
-eventlet>=0.17.4
-greenlet>=0.3.2
-Jinja2>=2.8 # BSD License (3 clause)
+pbr>=1.6 # Apache-2.0
+eventlet>=0.18.2 # MIT
+greenlet>=0.3.2 # MIT
@@ -11,12 +9,7 @@ oslo.concurrency>=2.3.0 # Apache-2.0
-oslo.config>=2.7.0 # Apache-2.0
-oslo.i18n>=1.5.0 # Apache-2.0
-oslo.middleware>=2.9.0 # Apache-2.0
-oslo.serialization>=1.10.0 # Apache-2.0
-oslo.service>=0.12.0 # Apache-2.0
-oslo.utils>=2.8.0 # Apache-2.0
-PrettyTable<0.8,>=0.7
-psutil<2.0.0,>=1.1.1
-pycrypto>=2.6
-requests!=2.8.0,>=2.5.2
-six>=1.9.0
-oslo.log>=1.12.0 # Apache-2.0
+oslo.config>=3.4.0 # Apache-2.0
+oslo.i18n>=2.1.0 # Apache-2.0
+oslo.service>=1.0.0 # Apache-2.0
+oslo.utils>=3.4.0 # Apache-2.0
+requests!=2.9.0,>=2.8.1 # Apache-2.0
+six>=1.9.0 # MIT
+oslo.log>=1.14.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 7ee1245..09e18c2 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -5,2 +5,2 @@
-coverage>=3.6
-discover
+coverage>=3.6 # Apache-2.0
+discover # BSD
@@ -10,7 +10,7 @@ oslotest>=1.10.0 # Apache-2.0
-pylint==1.4.4 # GNU GPL v2
-simplejson>=2.2.0
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-testscenarios>=0.4
-testtools>=1.4.0
-mox3>=0.7.0
-os-testr>=0.4.1
+pylint==1.4.5 # GNU GPL v2
+simplejson>=2.2.0 # MIT
+sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2 # BSD
+testscenarios>=0.4 # Apache-2.0/BSD
+testtools>=1.4.0 # MIT
+mox3>=0.7.0 # Apache-2.0
+os-testr>=0.4.1 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] oslo.db reset session?

2016-02-23 Thread Sean Dague
With the enginefacade work coming into projects, there seem to be some
new bits around oslo.db global sessions.

The effect of this on tests is a little problematic. Because it builds
global state which couples between tests. I've got a review to use mysql
connection explicitly for some Nova functional tests which correctly
fails and exposes a bug when run individually. However, when run in a
full test run, the global session means that it's not run against mysql,
it's run against sqlite, and passes.

https://review.openstack.org/#/c/283364/

We need something that's the inverse of session.configure() -
https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/tests/fixtures.py#L205
to reset the global session.

Pointers would be welcomed.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] [stable] iPXE / UEFI support for stable liberty

2016-02-23 Thread Heck, Joseph
Morning,

Just a quick note, there is UEFI booting support within iPXE.  You have to 
invoke a specific build of the binary to get the output, but it's there:
 make bin-x86_64-efi/snponly.efi

Not entirely relevant to the core of the thread, but wanted to share that 
detail if it's been otherwise missed.

- joe
_
From: Jim Rollenhagen mailto:j...@jimrollenhagen.com>>
Sent: Monday, February 22, 2016 7:37 PM
Subject: Re: [openstack-dev] [ironic] [stable] iPXE / UEFI support for stable 
liberty
To: OpenStack Development Mailing List (not for usage questions) 
mailto:openstack-dev@lists.openstack.org>>



On Feb 22, 2016, at 15:15, Chris K < 
nobody...@gmail.com> wrote:

Hi Ironicers,

I wanted to draw attention to iPXE / UEFI support in our stable liberty branch.

Which doesn't exist, right? Or does it work depending on some other factors?

There are environments that require support for UEFI; while ironic does have 
this support in master, it is not capable of this in many configurations when 
using the stable liberty release, and the docs around this feature were unclear.

What's unclear about the docs? Can you point at a specific thing, or is it just 
the lack of a thing that specifically says UEFI+iPXE is not supported?

Because support for this feature was unclear when the liberty branch was cut, it 
has caused some confusion for users wishing or needing to consume the stable 
branch. I have proposed patches https://review.openstack.org/#/c/281564 and 
https://review.openstack.org/#/c/281536 with the goal of correcting this, given 
that master may not be acceptable for some businesses to consume. I welcome 
feedback on this.

I believe the first patch adds the feature, and the second patch fixes a bug 
with the feature. Correct?

As you know, stable policy is to not backport features. I don't see any reason 
this case should bypass this policy (which is why I asked so many questions 
above, it's odd to me that this is an open question at all).

It seems like a better path would be to fix the docs to avoid the confusion in 
the first place, right? I'm not sure what the "backport" would look like, given 
that docs patch wouldn't make sense on master, but surely some more experienced 
stable maintainers could guide us. :)

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Qiming Teng

> >I don't think the proposal removes that opportunity. Contributors
> >/can/ still go to OpenStack Summits. They just don't /have to/. I
> >just don't think every contributor needs to be present at every
> >OpenStack Summit, while I'd like to see most of them present at
> >every separated contributors-oriented event[tm].
> 
> Yes they can, but if contributors go to the design summit, then they
> also have to get travel budget to go to the new Summit.   So, design
> summits,  midcycle meetups, and now the split off marketing summit.
> This is making it overall more expensive for contributors that meet
> with customers.
> 
My take of this is that we are saving the cost by isolating developers
(contributors) from users/customers.

- Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Qiming Teng
On Mon, Feb 22, 2016 at 10:30:56PM -0500, michael mccune wrote:
> On 02/22/2016 11:06 AM, Dmitry Tantsur wrote:
> >+1 here. I got an impression that midcycles now usually happen in the
> >US. Indeed, it's probably much cheaper for the majority of contributors,
> >but would make things worse for non-US folks.
> 
> cost of travel has been a big reason we have never managed to have a
> sahara mid-cycle, as the team is evenly split across the world.
> 
> mike
> 
Cool. Then this proposal is about saving your mid-cycle costs for ever.

- Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-23 Thread gordon chung


On 12/02/2016 7:01 AM, Sean Dague wrote:
> Ok... this is going to be one of those threads, but I wanted to try to
> get resolution here.
>
> OpenStack is wildly inconsistent in its use of tenant vs. project. As
> someone that wasn't here at the beginning, I'm not even sure which one
> we are supposed to be transitioning from -> to.
>
> At a minimum I'd like to make all of devstack use 1 term, which is the
> term we're trying to get to. That will help move the needle.
>
> However, again, I'm not sure which one that is supposed to be (comments
> in various places show movement in both directions). So people with
> deeper knowledge here, can you speak up as to which is the deprecated
> term and which is the term moving forward.
>
>   -Sean
>

not sure this was published anywhere, but for those using devstack 
plugins, a patch merged recently [1] to take action on this. you'll need 
to switch SERVICE_TENANT_NAME to SERVICE_PROJECT_NAME. a backward compat 
patch [2] is available, but we should all change.

glad to see we're moving forward on this.


[1] https://review.openstack.org/#/c/281779/
[2] https://review.openstack.org/#/c/283531/

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Versions api always returns the listen address

2016-02-23 Thread Gyorgy Szombathelyi
> > keystone session, and the versions API must be lightweight?
> >
> > If you put a proxy in front of things you need to also set
> > osapi_compute_link_prefix -
> >
> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910
> > eb29bbe6dac6/nova/api/openstack/common.py#L45-L47
> >
> > This Tempest test was specifically added about six months ago when we
> > realized that people didn't realize that, and were returning invalid
> > links in their environment. It's meant to be a sanity check for
> > people's clouds as much as an interop test.
> Hi Sean!
> 
> Thanks for the pointer to this setting, now only glance, neutron, cinder, heat
> and murano need a similar setting. I'll look at them, maybe they're already
> implemented that.
> 

Seems the solutions in the various components are very colorful:
- nova: osapi_compute_link_prefix
- glance: public_endpoint
- cinder: osapi_volume_base_URL and public_endpoint
- neutron, heat, murano: didn't find anything to set the public endpoint
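
FWIW, a quick client-side sanity check in the spirit of the Tempest test
mentioned above might look like this for nova (the endpoint URL is a
placeholder for whatever the proxy exposes):

    import requests

    # Placeholder public endpoint (what the proxy exposes for nova).
    PUBLIC = "https://cloud.example.com:8774/"

    doc = requests.get(PUBLIC).json()
    for version in doc["versions"]:
        for link in version["links"]:
            # Every href should point at the proxy URL, not the backend
            # listen address.
            assert link["href"].startswith(PUBLIC), link["href"]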

> 
> >
> > -Sean
> //György
> 
> >
> > --
> > Sean Dague
> > http://dague.net
> >
> >
> __
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> > requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial upgrade - Nova metadata failure

2016-02-23 Thread Korzeniewski, Artur
Hi,
I have re-spun the grenade multinode DVR config [1].

In my understanding, this job would install a multinode environment, with the
L3 agent and metadata agent running on the subnode.
Also, to take advantage of this setup:
a) the Grenade tests should create a DVR router as a resource,
b) the Tempest smoke tests should interact with the DVR feature.

If a) and b) do not use the DVR feature, we would have a DVR-aware setup
configured, but no real interaction with DVR.
By default, Grenade jobs launch only the Tempest smoke tests, not the full
Tempest suite.

To avoid having 2 jobs running the Grenade multinode setup, we can enable only
the DVR one in the check queue.
To have proper interaction with the DVR feature, we can adjust the Grenade
tests and the Tempest smoke suite.

Regards,
Artur Korzeniewski
IRC: korzen

[1] https://review.openstack.org/#/c/250215/4

From: Armando M. [mailto:arma...@gmail.com]
Sent: Monday, February 22, 2016 6:01 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [nova][neutron][upgrade] Grenade multinode partial 
upgrade - Nova metadata failure



On 22 February 2016 at 08:52, Ihar Hrachyshka 
mailto:ihrac...@redhat.com>> wrote:
Armando M. mailto:arma...@gmail.com>> wrote:


On 22 February 2016 at 04:56, Ihar Hrachyshka 
mailto:ihrac...@redhat.com>> wrote:
Sean M. Collins mailto:s...@coreitpro.com>> wrote:

Armando M. wrote:
Now that the blocking issue has been identified, I filed project-config
change [1] to enable us to test the Neutron Grenade multinode more
thoroughly.

[1] https://review.openstack.org/#/c/282428/


Indeed - I want to profusely thank everyone that I reached out to during
these past months when I got stuck on this. Ihar, Matt K, Kevin B,
Armando - this is a huge win.

--
Sean M. Collins

Thanks everyone for making that latest push. We are almost there!

I guess the next steps are:
- monitoring the job for a week, making sure it’s stable enough (comparing 
failure rate to non-partial grenade job?);

Btw, the job trend is here:

http://grafana.openstack.org/dashboard/db/neutron-failure-rate?panelId=6&fullscreen

I'd prefer to wait a little longer. Depending on how things go we may want to 
make it not until N opens up.

Agreed.

- if everything goes fine, propose project-config change to make it voting;
- propose governance patch to enable rolling-upgrade tag for neutron repo (I 
believe not for *aas repos though?).

I guess with that we would be able to claim victory for the basic 'server vs. 
agent’ part of rolling scenario. Right?

Follow up steps would probably be:
- look at enabling partial job for DVR flavour;

That should only be instrumental in seeing how sane DVR is during upgrades, and 
in proceeding to tweak the existing grenade-multi job in the check queue to be 
dvr-aware. In other words: I personally wouldn't want to see two grenade jobs 
in the gate.

Ack, that would be the end goal. There still may be some short time when both 
are in gate.

- proceed on objectification of neutron db layer to open doors for later mixed 
server versions in the same cluster.

Anything I missed?

Also, what do we do with non-partial flavour of the job? Is it staying?

What job are you talking about exactly?

gate-grenade-dsvm-neutron

It’s not ‘partial’ in that we don’t run mixed versions of components during 
tempest run. It only covers that new code can run using old configuration 
files, and that alembic migrations apply correctly for some limited number of 
so called ‘long standing’ resources like instances created on the ‘old’ side of 
grenade.

Yes, that is staying. Especially considering that's part of the integrated gate 
on a bunch of other projects. We'll reconsider what to do, once we strengthen 
our rolling upgrade story.



Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Foundation Sponsorship for tox/pytest Sprint

2016-02-23 Thread Ryan Brown
Since every OpenStack project uses tox, would it be possible to have the 
foundation donate a little bit to the tox/pytest team to enable a sprint 
on both projects?


There's an IndieGoGo (which seems to be yet another crowdfunding site) 
https://www.indiegogo.com/projects/python-testing-sprint-mid-2016#/


While it's not directly an OpenStack project, I think it'd be worth 
supporting since we depend on them so heavily.


Individuals can also donate, and I encourage that too. I donated 100 USD 
because tox saves me loads of time when working on OpenStack, and I use 
py.test for projects at work and at play. If OpenStack pays your salary, 
consider giving the tox/pytest team a slice.


Cheers,
Ryan

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][magnum] Magnum gate issue

2016-02-23 Thread Oleksii Chuprykov
Hi.
I am the author of that change. We decided to revert it:
https://review.openstack.org/#/c/283297/ , but unfortunately our gates are
also broken at the moment (by ceilometer).
Sorry about that :(

On Tue, Feb 23, 2016 at 12:40 AM, Hongbin Lu  wrote:

> Hi Heat team,
>
>
>
> It looks like the Magnum gate broke after this patch landed:
> https://review.openstack.org/#/c/273631/ . I would appreciate it if anyone
> can help with troubleshooting the issue. If the issue is confirmed, I
> would prefer a quick fix or a revert, since we want to unlock the gate
> ASAP. Thanks.
>
>
>
> Best regards,
>
> Hongbin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra][cinder] How to configure the third party CI to be triggered only when jenkins +1

2016-02-23 Thread Mikhail Medvedev
On Mon, Feb 22, 2016 at 7:32 PM, liuxinguo  wrote:
> Hi,
>
>
>
> There is no need to trigger third party CI if a patch does not pass Jenkins
> Verify.
>
> I think there is a way to reach this but I’m not sure how.
>
>
>
> So is there any reference or suggestion to configure the third party CI to
> be triggered only when jenkins +1?
>
>

If you are using Zuul, then you should look into the 'approval' setting in
the layout. E.g. check the current layout that Infra uses for the gate
pipeline (a rough sketch of the idea follows the link): 
https://github.com/openstack-infra/project-config/blob/6b71e8cac676e04141839eeecce3462df3a04848/zuul/layout.yaml#L41-L46.
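
For illustration, a rough sketch of what this could look like in a third-party
CI's Zuul (v2) layout.yaml - the exact key names vary between Zuul versions, so
treat this as a starting point and compare it with the Infra layout linked
above:

pipelines:
  - name: check
    manager: IndependentPipelineManager
    require:
      open: True
      current-patchset: True
      # only enqueue changes that already carry a Verified +1/+2 from Jenkins
      approval:
        - username: jenkins
          verified: [1, 2]
    trigger:
      gerrit:
        # comment-added fires on any new comment/vote; the 'require' block
        # above filters out changes without the Jenkins Verified vote
        - event: comment-added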

>
> Thanks for any input!
>
>
>
> Regards,
>
> Wilson Liu
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Versions api always returns the listen address

2016-02-23 Thread Gyorgy Szombathelyi

> On 02/23/2016 06:49 AM, Gyorgy Szombathelyi wrote:
> > Hi!
> >
> > Just noticed by a failing
> tempest.api.compute.test_versions.TestVersions.test_get_version_details
> test:
> > The versions answer of the components always return the listen address of
> the corresponding daemon.
> > Is this the intended behavior? I think it should tell the public
> > endpoint, the listening address in a HA cluster cannot nor should be
> reached from the outside.
> >
> > E.g. we have a  setup, where every service have an apache proxy in front of
> it, so getting the versions returns:
> >
> > # curl http://192.168.168.100:8774
> >
> > {"versions": [{"status": "SUPPORTED", "updated":
> > "2011-01-21T11:33:21Z", "links": [{"href":
> > "http://127.0.0.1:8774/v2/";, "rel": "self"}], "min_version": "",
> > "version": "", "id": "v2.0"}, {"status": "CURRENT", "updated":
> > "2013-07-23T11:33:21Z", "links": [{"href":
> > "http://127.0.0.1:8774/v2.1/";, "rel": "self"}], "min_version": "2.1",
> > "version": "2.12", "id": "v2.1"}]}
> >
> > Notice the href: "http://127.0.0.1:8774/xxx"; answer.
> >
> > Or the reason is to not return the public endpoint that it would require a
> keystone session, and the versions API must be lightweight?
> 
> If you put a proxy in front of things you need to also set
> osapi_compute_link_prefix -
> https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910
> eb29bbe6dac6/nova/api/openstack/common.py#L45-L47
> 
> This Tempest test was specifically added about six months ago when we
> realized that people didn't realize that, and were returning invalid links in
> their environment. It's meant to be a sanity check for people's clouds as
> much as an interop test.
Hi Sean!

Thanks for the pointer to this setting; now only glance, neutron, cinder, heat 
and murano need a similar setting. I'll look at them; maybe they already 
implement it.


> 
>   -Sean
//György

> 
> --
> Sean Dague
> http://dague.net
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Versions api always returns the listen address

2016-02-23 Thread Sean Dague
On 02/23/2016 06:49 AM, Gyorgy Szombathelyi wrote:
> Hi!
> 
> Just noticed by a failing 
> tempest.api.compute.test_versions.TestVersions.test_get_version_details test:
> The versions answer of the components always return the listen address of the 
> corresponding daemon. 
> Is this the intended behavior? I think it should tell the public endpoint, 
> the listening address in a HA cluster
> cannot nor should be reached from the outside.
> 
> E.g. we have a  setup, where every service have an apache proxy in front of 
> it, so getting the versions returns:
> 
> # curl http://192.168.168.100:8774 
> 
> {"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", 
> "links": [{"href": "http://127.0.0.1:8774/v2/";, "rel": "self"}], 
> "min_version": "", "version": "", "id": "v2.0"}, {"status": "CURRENT", 
> "updated": "2013-07-23T11:33:21Z", "links": [{"href": 
> "http://127.0.0.1:8774/v2.1/";, "rel": "self"}], "min_version": "2.1", 
> "version": "2.12", "id": "v2.1"}]}
> 
> Notice the href: "http://127.0.0.1:8774/xxx"; answer.
> 
> Or the reason is to not return the public endpoint that it would require a 
> keystone session, and the versions API must be lightweight?

If you put a proxy in front of things you need to also set
osapi_compute_link_prefix -
https://github.com/openstack/nova/blob/d8ddecf6e3ed1e8193e5f6dba910eb29bbe6dac6/nova/api/openstack/common.py#L45-L47

This Tempest test was specifically added about six months ago when we
realized that people didn't realize that, and were returning invalid
links in their environment. It's meant to be a sanity check for people's
clouds as much as an interop test.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Versions api always returns the listen address

2016-02-23 Thread Morgan Fainberg
On Tue, Feb 23, 2016 at 3:49 AM, Gyorgy Szombathelyi <
gyorgy.szombathe...@doclerholding.com> wrote:

> Hi!
>
> Just noticed by a failing
> tempest.api.compute.test_versions.TestVersions.test_get_version_details
> test:
> The versions answer of the components always return the listen address of
> the corresponding daemon.
> Is this the intended behavior? I think it should tell the public endpoint,
> the listening address in a HA cluster
> cannot nor should be reached from the outside.
>
> E.g. we have a  setup, where every service have an apache proxy in front
> of it, so getting the versions returns:
>
> # curl http://192.168.168.100:8774
>
> {"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z",
> "links": [{"href": "http://127.0.0.1:8774/v2/";, "rel": "self"}],
> "min_version": "", "version": "", "id": "v2.0"}, {"status": "CURRENT",
> "updated": "2013-07-23T11:33:21Z", "links": [{"href": "
> http://127.0.0.1:8774/v2.1/";, "rel": "self"}], "min_version": "2.1",
> "version": "2.12", "id": "v2.1"}]}
>
> Notice the href: "http://127.0.0.1:8774/xxx"; answer.
>
> Or the reason is to not return the public endpoint that it would require a
> keystone session, and the versions API must be lightweight?
>
> Br,
> György
>
>
>
The endpoints are also generally not aware of their URL, and therefore try
to determine it via introspection. It would be a good change to make the
endpoints aware of their URLs and/or to handle the appropriate HTTP header
(not sure which one, or whether a specific one needs to be defined) to
indicate the Apache/stunnel/nginx/HAProxy/etc. frontend.
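
For what it's worth, a minimal sketch of that idea (purely illustrative, not
something that exists in any tree today) would be a tiny WSGI middleware that
trusts the de-facto X-Forwarded-* headers most proxies can be configured to
send:

class ProxyHeadersMiddleware(object):
    """Rebuild the request scheme/host from proxy headers so the service
    generates links with its public address (illustrative sketch only)."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        proto = environ.get('HTTP_X_FORWARDED_PROTO')
        host = environ.get('HTTP_X_FORWARDED_HOST')
        if proto:
            environ['wsgi.url_scheme'] = proto
        if host:
            environ['HTTP_HOST'] = host
        return self.app(environ, start_response)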

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Networking-vSphere] - changes in nova compute driver

2016-02-23 Thread Monotosh Das
Hi,

In Networking-vSphere (ovsvapp), 'vsphere' is used as the nova compute driver. I 
want to know if there is any modification in the default vsphere driver that is 
specific to ovsvapp. If yes, can you give an idea about the changes?

If not, then how does the vsphere driver get to know that port group creation is 
complete, as mentioned in the wiki 
(https://wiki.openstack.org/wiki/Neutron/Networking-vSphere)? In the code 
(nova/virt/vmwareapi/vmops.py : spawn() ), it appears that the VM is created first, 
then Neutron is updated about the NIC, and then the VM is powered on. It doesn't 
wait for any event before powering on the VM.

Some clarification about this would be very helpful.


Thanks,
Monotosh



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Versions api always returns the listen address

2016-02-23 Thread Gyorgy Szombathelyi
Hi!

I just noticed this via a failing 
tempest.api.compute.test_versions.TestVersions.test_get_version_details test:
the versions answer of the components always returns the listen address of the 
corresponding daemon. 
Is this the intended behavior? I think it should return the public endpoint; the 
listening address in an HA cluster
cannot, and should not, be reached from the outside.

E.g. we have a setup where every service has an Apache proxy in front of it, 
so getting the versions returns:

# curl http://192.168.168.100:8774 

{"versions": [{"status": "SUPPORTED", "updated": "2011-01-21T11:33:21Z", 
"links": [{"href": "http://127.0.0.1:8774/v2/";, "rel": "self"}], "min_version": 
"", "version": "", "id": "v2.0"}, {"status": "CURRENT", "updated": 
"2013-07-23T11:33:21Z", "links": [{"href": "http://127.0.0.1:8774/v2.1/";, 
"rel": "self"}], "min_version": "2.1", "version": "2.12", "id": "v2.1"}]}

Notice the href: "http://127.0.0.1:8774/xxx"; answer.

Or is the reason for not returning the public endpoint that it would require a 
Keystone session, and the versions API must stay lightweight?

Br,
György

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Thomas Goirand
Thierry,

Thanks for writing this up.

On 02/22/2016 11:14 PM, Thierry Carrez wrote:
> More importantly, it would be set to happen a couple of weeks /before/
> the previous cycle release. There is a lot of overlap between cycles.
> Work on a cycle starts at the previous cycle feature freeze, while there
> is still 5 weeks to go. Most people switch full-time to the next cycle
> by RC1. Organizing the event just after that time lets us organize the
> work and kickstart the new cycle at the best moment. It also allows us
> to use our time together to quickly address last-minute release-critical
> issues if such issues arise.

Just a quick comment on the timing... :P

While it makes little sense to have the design summit scheduled a long
time after the final release, I'm a little bit scared that having the
design summit meetings between RC1 and the final release carries the
risk of losing focus on RC bug fixing.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Chris Dent

On Tue, 23 Feb 2016, Thierry Carrez wrote:

We can't really prevent people from organizing those anyway :) I just hope 
social in-person team gatherings will not be needed as much with this split. 
What may still be needed are mid-cycle "sprints" to achieve specific 
objectives: those could happen in hackathon space at the main summit, in 
donated office space, or online.


Overall I'm in favor of the proposal because it achieves the one thing
that seems most important to have: less conflict of attention between
the project/design summit stuff and the presentations/marketing/
selling/showing off stuff. It's hard to be in both frames of mind at
the same time.

However, it makes me sad to see the continued trend of limiting
in-person gatherings. They are useful as a way of keeping people
aligned with similar goals and approaches to reaching those goals.
Yes, it is expensive, but it would be nice if the patrons (our
employers) would recognize that getting us all working well together
is a cost of doing this business.

Virtualized gatherings are useful too, but they don't accomplish the
same thing.

--
Chris Dent   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-23 Thread Thomas Goirand
On 02/20/2016 12:38 AM, Morgan Fainberg wrote:
> AS a point we are also trying to drop "versioned endpoints" as a thing
> from the catalog going forward. Please do not add a "cinderv3" or
> "volumev3" entry to the catalog. This is something that enourages adding
> for every version a new endpoint. If every service had an entry for each
> endpoint version in the catalog it rapidly balloons the size (think of,
> the ~14? services we have now, each with now three entries per "actual
> api endpoint").

I'm actually counting 20 server packages that are setting-up endpoints
and that I have packaged for Debian.

In Tokyo, we discussed moving to a *single* endpoint, instead of 3, for
each service, since only keystone itself really uses the admin endpoint,
and that there's not much point in the internal endpoint, as deployments
could magically do the right thing in routing depending on the address
of the requester (like: avoiding public IPs and count traffic when a
glance image is uploaded from within the cloud).

Has anyone started implementing anything after this discussion?

Cheers,

Thomas Goirand (zigo)

P.S.: +1 for *not* adding more than a single "triplet endpoint" for a
given service.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] Excessively high greenlet default + excessively low connection pool defaults leads to connection pool latency, timeout errors, idle database connections / workers

2016-02-23 Thread Roman Podoliaka
Hi all,

I've taken another look at this in order to propose patches to
oslo.service/oslo.db, so that we have better defaults for WSGI
greenlets number / max DB connections overflow [1] [2], which would be
more suitable for DB oriented services like our APIs are.

I used the Mike's snippet [3] for testing, 10 workers (i.e. forks)
served the WSGI app, ab concurrency level was set to 100, 3000
requests were sent.

With our default settings (1000 greenlets per worker, 5 connections in
the DB pool, 10 connections max overflow, 30 seconds timeout waiting
for a connection to become available), ~10-15 requests out of 3000
will fail with 500 due to pool timeout issue on every run [4].

As it was expected, load is distributed unevenly between workers: htop
shows that one worker is busy, while others are not [5]. Tracing
accept() calls with perf-events (sudo perf trace -e accept --pid=$PIDS
-S) allows to see the exact number of requests served by each worker
[6] - we can see that the "busy" worker served almost twice as many
WSGI requests as any other worker did. perf output [7] shows an
interesting pattern: each eventlet WSGI worker sleeps in accept()
waiting for new connections to become available in the queue handled
by the kernel; when there is a new connection available, a random
worker wakes up and tries to accept() as many connections as possible.

Reading the source code of eventlet WSGI server [8] suggests that it
will accept() new connections as long as they are available (and as
long as there are more available greenthreads in the pool) before
starting to process already accept()'ed ones (spawn_n() only creates a
new greenthread and schedules it to be executed "later"). Given the fact that
we have 1000 greenlets in the pool, there is a high probability we'll
end up with an overloaded worker. If handling of these requests
involves doing DB queries, we have only 5 (pool) + 10 (max overflow)
DB connections available, others will have to wait (and may eventually
time out after 30 seconds).

So looks like it's two related problems here:

1) the distribution of load between workers is uneven. One way to fix
this is to decrease the default number of greenlets in pool [2], which
will effectively cause a particular worker to give up new connections
to other forks, as soon as there are no more greenlets available in
the pool to process incoming requests. But this alone will *only* be
effective when the concurrency level is greater than the number of
greenlets in the pool. Another way would be to add a context switch to the
eventlet accept() loop [8] right after spawn_n() - this is what I've
got with greenthread.sleep(0.05) [9][10] (the trade-off is that we can now
only accept() 1 / 0.05 = 20 new connections per second per worker -
I'll try to experiment with the numbers here).
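
To make (1) a bit more concrete, here is a simplified, illustrative accept
loop (not eventlet's actual code) showing where the extra yield would go:

import eventlet

pool = eventlet.GreenPool(100)

def serve(sock, handle_request):
    while True:
        client, addr = sock.accept()          # green socket: blocks in the hub when idle
        pool.spawn_n(handle_request, client)  # handling is only scheduled here, not run
        eventlet.sleep(0.05)                  # yield, giving sibling forks a chance to accept()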

2) even if the distribution of load is even, we still have to be able
to process requests according to the max level of concurrency,
effectively set by the number of greenlets in pool. For DB oriented
services that means we need to have DB connections available. [1]
increases the
default max_overflow value to allow SQLAlchemy to open additional
connections to a DB and handle spikes of concurrent requests.
Increasing max_overflow value further will probably lead to max number
of connection errors in RDBMs servers.

As it was already mentioned in this thread, the rule of thumb is that
for DB oriented WSGI services the max_overflow value should be at
least close to the number of greenlets. Running tests on my machine
shows that having 100 greenlets in pool / 5 DB connections in pool /
50 max_overflow / 30 seconds pool timeout allows to handle up to 500
concurrent requests without seeing pool timeout errors.
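
For reference, a minimal sketch of a single worker wired up with numbers in
that ballpark (the DB URL and figures are illustrative, not the oslo defaults):

import eventlet
eventlet.monkey_patch()

import sqlalchemy
from eventlet import wsgi

# ~100 greenlets per worker, with max_overflow roughly matching that number
engine = sqlalchemy.create_engine(
    'mysql+pymysql://user:secret@127.0.0.1/test',   # placeholder URL
    pool_size=5, max_overflow=50, pool_timeout=30)

def app(environ, start_response):
    # every request grabs a DB connection; with max_overflow close to the
    # number of greenlets, bursts of concurrent requests stay under the
    # 30 second pool timeout
    with engine.connect() as conn:
        conn.execute(sqlalchemy.text('SELECT 1'))
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

pool = eventlet.GreenPool(100)
wsgi.server(eventlet.listen(('0.0.0.0', 8080)), app, custom_pool=pool)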

Thanks,
Roman

[1] https://review.openstack.org/#/c/269186/
[2] https://review.openstack.org/#/c/269188/
[3] https://gist.github.com/zzzeek/c69138fd0d0b3e553a1f
[4] http://paste.openstack.org/show/487867/
[5] http://imgur.com/vEWJmrd
[6] http://imgur.com/FOZ2htf
[7] http://paste.openstack.org/show/487871/
[8] https://github.com/eventlet/eventlet/blob/master/eventlet/wsgi.py#L862-L869
[9] http://paste.openstack.org/show/487874/
[10] http://imgur.com/IuukDiD

On Mon, Jan 11, 2016 at 4:05 PM, Mike Bayer  wrote:
>
>
> On 01/11/2016 05:39 AM, Radomir Dopieralski wrote:
>> On 01/08/2016 09:51 PM, Mike Bayer wrote:
>>>
>>>
>>> On 01/08/2016 04:44 AM, Radomir Dopieralski wrote:
 On 01/07/2016 05:55 PM, Mike Bayer wrote:

> but also even if you're under something like
> mod_wsgi, you can spawn a child process or worker thread regardless.
> You always have a Python interpreter running and all the things it can
> do.

 Actually you can't, reliably. Or, more precisely, you really shouldn't.
 Most web servers out there expect to do their own process/thread
 management and get really embarrassed if you do something like this,
 resulting in weird stuff happening.
>>>
>>> I have to disagree with this as an across-the-board rule, par

Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Tom Fifield

Hi ops (cc: devs),

I'm writing to you to let you know why I think Thierry's proposal is a 
good one that probably works better for us than the current situation.


The design summit for us at the moment isn't as good as it could be. You 
turn up, ready to contribute and help out developers with feedback and 
ideas, and get told either "that release is too old, we fixed that 
already" or "oh, we're already well into feature design, try next 
cycle?". It's frustrating for all involved.



A key aspect of this change is the shifting of the the release cycle 
(see the diagram from Thierry). The summit becomes situated a few months 
after the previous release, and right at the start of the planning cycle 
for the next next release.


As a result, the kind of sessions we expect developers to continue to 
host at the summit are exactly the kind we can make most difference in: 
gathering feedback from the previous release, discussing requirements 
for the next next release and cross-project planning and strategy.


In that other, "new" separate developer-oriented event, the plan is that 
the discussions are about the code, not the concepts. The "How" to do 
things that were already discussed at the summit. Unless you're a 
hardcore python folk, or have specific interest in the deep details of 
how something works, in theory there'd be nothing there of interest.


So, by re-tasking the summit time, I think we actually end 
up with much more relevance at the summit for ops. The details are to be 
worked out in coming months, please participate on openstack-dev to 
ensure that we continue to achieve the "Open Design" goals of this project.



Finally, to answer one specific question:

Also where do the current operators design sessions and operators 
midcycle fit in here?


The changes in the proposal don't touch anything about the ops sessions 
at the design summit, or the ops events that happen during the cycle, 
unless you think it's a good thing to do :) I have some ideas saved over 
from our last thread talking about those events, but will propose we 
move to a separate thread for this specifically to avoid drowning -dev ;)




Regards,


Tom

On 22/02/16 23:14, Thierry Carrez wrote:

Hi everyone,

TL;DR: Let's split the events, starting after Barcelona.

Long long version:

In a global and virtual community, high-bandwidth face-to-face time is
essential. This is why we made the OpenStack Design Summits an integral
part of our processes from day 0. Those were set at the beginning of
each of our development cycles to help set goals and organize the work
for the upcoming 6 months. At the same time and in the same location, a
more traditional conference was happening, ensuring a lot of interaction
between the upstream (producers) and downstream (consumers) parts of our
community.

This setup, however, has a number of issues. For developers first: the
"conference" part of the common event got bigger and bigger and it is
difficult to focus on upstream work (and socially bond with your
teammates) with so much other commitments and distractions. The result
is that our design summits are a lot less productive than they used to
be, and we organize other events ("midcycles") to fill our focus and
small-group socialization needs. The timing of the event (a couple of
weeks after the previous cycle release) is also suboptimal: it is way
too late to gather any sort of requirements and priorities for the
already-started new cycle, and also too late to do any sort of work
planning (the cycle work started almost 2 months ago).

But it's not just suboptimal for developers. For contributing companies,
flying all their developers to expensive cities and conference hotels so
that they can attend the Design Summit is pretty costly, and the goals
of the summit location (reaching out to users everywhere) do not
necessarily align with the goals of the Design Summit location (minimize
and balance travel costs for existing contributors). For the companies
that build products and distributions on top of the recent release, the
timing of the common event is not so great either: it is difficult to
show off products based on the recent release only two weeks after it's
out. The summit date is also too early to leverage all the users
attending the summit to gather feedback on the recent release -- not a
lot of people would have tried upgrades by summit time. Finally a common
event is also suboptimal for the events organization : finding venues
that can accommodate both events is becoming increasingly complicated.

Time is ripe for a change. After Tokyo, we at the Foundation have been
considering options on how to evolve our events to solve those issues.
This proposal is the result of this work. There is no perfect solution
here (and this is still work in progress), but we are confident that
this strawman solution solves a lot more problems than it creates, and
balances the needs of the various constituents of our community.

T

Re: [openstack-dev] [cinder] adding a new /v3 endpoint for api-microversions

2016-02-23 Thread Thomas Goirand
On 02/22/2016 10:22 PM, Sean McGinnis wrote:
> On Mon, Feb 22, 2016 at 12:40:48PM +0800, Thomas Goirand wrote:
>>
>> I'd vote for the extra round trip and implementation of caching whenever
>> possible. Using another endpoint is really annoying, I already have
>> specific stuff for cinder to setup both v1 and v2 endpoint, as v2
>> doesn't fully implements what's in v1. BTW, where are we with this? Can
>> I fully get rid of the v1 endpoint, or will I still experience some
>> Tempest failures?
>>
>> Thomas Goirand (zigo)
> 
> This would really surprise me as /v2 was mostly a full copy of /v1, to
> some degree. If you see anything missing please file a bug. I am not
> aware of anything myself.
> 
> /v1 would have been gone by know according to our original deprecation
> plan. We've since realized we can't ever fully get rid of it, but there
> should be no reason to still need to set it up if you have v2.

Thanks for your reply.

All I'm reporting is issues that I experienced with Liberty. Puppet guys
also told me to setup both v1 and v2 endpoints in Keystone. It's not
really clear to me why (yet), but it solved some tempest test failures
for sure.

I'll probably have another try without setting-up the v1, and see what
happens in Tempest. If I find a bug, I'll report it in Launchpad. But
don't expect this to happen late after b3 is released, as I wont have
the time for such investigation. I may though, before the final Mitaka
release.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Thierry Carrez

Eoghan Glynn wrote:

Thanks for the proposal, just a few questions:

  * how would we achieve a "scaled-down design summit in Barcelona"? i.e.
what would be the forcing function to ensure fewer contributors attend,
given that some people will already be making plans?


Exactly how much the Barcelona Design Summit will be scaled down is 
still an open question. If Ocata ends up being a shorter cycle, there 
should be less design discussions needed (especially if some projects 
opt to turn Ocata into a "stabilization cycle"). As a result there 
should be less space requests and we should be able to "scale down" to 
using slightly less rooms.



  * would free passes continue to be issued to all ATCs, for *both* the
conference and the contributor event? (absent cross-subsidization at
the latter event from non-ATC attendees paying full whack)


The contributor event would likely be free for existing contributors to 
attend. Then the idea would be to offer a discount to attend the main 
summit to any person that was physically present at the contributors 
event. That should help control the costs for those who want to attend 
them all.



  * if reducing travel costs is part of the aim here, would it be wise not
to hold the second contributor event per-year in mid-August, when in
Europe at least the cost of flights and hotels spike upwards and the
availability of individual contributors tends to plummet due to PTO.

  * would it better to keep the ocata cycle at a more normal length, and
then run the "contributor events" in Mar/Sept, as opposed to Feb/Aug?
(again to avoid the August black hole)


This is a good point. The reason February / August were picked is that 
the dates for the main summits in 2017 are already ~known (see picture) 
and placing (for example) the P contributors event in early September 
means the summit would happen before the middle of the dev cycle, when 
it's too early to start discussing requirements for the next cycle.


It's still an option though... And over the long run nothing prevents us 
from moving the summit more toward the end of November / start of 
December and end of May / start of June, to allow for a start of March / 
start of September contributors event.


I expect we'll have a session in Austin to discuss all this.


  * instead of collocating any surviving mid-cycles with the more glitzy
conference-style event (which seems to run counter to the midcycle
ethos AIUI), why not allow these to continue running per-project in
unofficial mode in donated office space? (if projects consider them
still needed)


We can't really prevent people from organizing those anyway :) I just 
hope social in-person team gatherings will not be needed as much with 
this split. What may still be needed are mid-cycle "sprints" to achieve 
specific objectives: those could happen in hackathon space at the main 
summit, in donated office space, or online.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Thierry Carrez

Henry Nash wrote:

On 22 Feb 2016, at 17:45, Thierry Carrez  wrote:

Amrith Kumar wrote:

[...]
As a result of this proposal, there will still be four events each year, two "OpenStack 
Summit" events and two "MidCycle" events.


Actually, the OpenStack summit becomes the midcycle event. The new separated 
contributors-oriented event[tm] happens at the beginning of the new cycle.


So in general a well thought out proposal - and it certainly helps address some 
of the early concerns over a “simplistic” split. I was also worrying, however, 
about the reduction in developer face time - it wasn’t immediately clear that the 
main summit could be treated as a developer midcycle. Is the idea that we just 
let this be informally organized by the projects, or that there would at least 
be room set aside for each project (but without all the formal cross-project 
structure/agenda that there is in a main developer summit)?


How exactly the upstream sessions would be organized at the main 
"summit" event is still an open discussion.


I hear you and John worrying about reducing developer "face time" from 4 
to 2 events per year. On one hand I really think that with this proposed 
split, most teams will get enough face-to-face time with one focused 
event, or will be fine with holding online midcycle sprints. It should 
go a long way to reduce travel costs for most contributors to 2 events 
per year. Worries about exploding contributors travel costs are one of 
the issues that this proposal aims to address.


On the other hand, *some* teams may still need another face-time event 
-- initial team building, solve a very specific issue, whatever. They 
have the opportunity to leverage the main summit as a midcycle venue 
space, or they could even organize their own separate thing. This 
proposal doesn't reduce any dev face time. It just makes it less likely 
that midcycle events would be needed, and provides a default venue to 
hold them in case you end up still needing them.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally]how can I give admin role to rally

2016-02-23 Thread Andrey Kurilin
Hi!
Since I didn't find such a scenario in our upstream repo, I assume that this
is your custom plugin, and I can propose several solutions:

1) `nova evacuate` is allowed only for an admin user, so you can use the admin
client in your scenario without changing roles for users. Also, the
`required_openstack` validator will check for you that an admin client is
specified for your deployment. An example:

from rally.plugins.openstack import scenario
from rally.task import validation  # assumed import path for the validator


class MyPlugin(scenario.OpenStackScenario):
    """My awesome plugin."""

    @validation.required_openstack(admin=True, users=True)
    @scenario.configure()
    def some_scenario(self):
        # do something with nova via a regular user
        user_novaclient = self.clients("nova")
        server = user_novaclient.servers.boot(...)

        # do something with nova via the admin user
        admin_novaclient = self.admin_clients("nova")
        admin_novaclient.servers.evacuate(...)


2) Rally supports the "roles" context, which can assign roles to users. If you
specify your task as below, self.clients("nova") will return a novaclient
initialized by a user with the "admin" and "another_name_of_role_if_needed" roles:

---
  MyPlugin.some_scenario:
    -
      runner:
        type: "constant"
        times: 10
        concurrency: 2
      context:
        users:
          tenants: 2
          users_per_tenant: 3
        roles:
          - "admin"
          - "another_name_of_role_if_needed"




On Tue, Feb 23, 2016 at 8:20 AM, Wu, Liming 
wrote:

> Hi
>
>   When I run a scenario about "nova evacuate **",  error message was
>   Show as follows.  How can I give the admin role to rally user.
>
> 2016-02-23 09:18:25.631 6212 INFO rally.task.runner [-] Task
> e2ad6390-8cde-4ed7-a595-f5c36d5e2a08 | ITER: 0 END: Error Forbidden: User
> does not have admin privileges (HTTP 403) (Request-ID:
> req-45312185-56e5-46c4-a39a-68f5e346715e)
> 2016-02-23 09:18:25.636 5995 INFO
> rally.plugins.openstack.context.cleanup.context [-] Task
> e2ad6390-8cde-4ed7-a595-f5c36d5e2a08 | Starting:  user resources cleanup
>
> Best regards
> wuliming
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Thierry Carrez

Tim Bell wrote:

On 22/02/16 17:27, "John Garbutt"  wrote:

[...]
I am sure there are more questions that will pop up. Like I assume
this means there is no ATC free pass to the summit? And I guess a
small nominal fee for the contributor meetup (like the recent ops
meetup, to help predict numbers of accurately)? I guess that helps
level the playing field for contributors who don't put git commits in
the repo (I am thinking vocal operators that don't contribute code).
But I probably shouldn't go into all that just yet.


I would like to find a way to allow contributors cheaper access to the summits. 
Many of the devOPS contributors are patching test cases, configuration 
management recipes and documentation which should be rewarded in some form.

Assuming that many of the ATCs are not so motivated to attend the summit, the 
cost in offering access to the event would not be significant.

Charging for the Ops meetups was, to my understanding, more to confirm 
commitment to attend given limited space.

Thus, I would be in favour of a preferential rate for contributors (whether ATC 
is the right criteria is a different question) for summits.


Current thinking would be to give preferential rates for access to the main 
summit to people who were present at other events (like this new 
separated contributors-oriented event, or Ops midcycle(s)). That would 
allow for a wider definition of "active community member" and reduce gaming.


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials correctly ?

2016-02-23 Thread Ptacek, MichalX
Hello again,

In the last few days I realized that the rpm/deb packages from the supported
platforms are too old (OSC, python-PROJECTclient, …),
so I suppose that I should install newer versions as pip packages rather than
via deb/rpm packages.
This kind of dependency on system packages when trying to install the v7
OpenStack puppet modules is probably natural for more experienced puppet guys,
but I think it should be covered somewhere in the docs.

I suppose I should install the OpenStack clients as pip packages instead …
Like: pip install python-openstackclient==2.0.0, pip install 
python-keystoneclient, …

By installing them in this way, the manifest deployment finished smoothly, but I 
realized that the “missing rpm/deb packages” are also installed (even when the pip 
version is present),
which might lead to some inconsistency …

For example, currently I am fighting with an issue in glance:
ERROR glance.common.config [-] Unable to load glance-api-keystone from 
configuration  file /etc/glance/glance-api-paste.ini.
Got: ImportError(‘No module named middleware.auth_token’),
(I think it’s asking for this file:
/usr/lib/python2.7/dist-packages/keystoneclient/middleware
which is present on the system)

so my small and general question would be …
What is the procedure if one would like to work with Liberty OpenStack on an 
old/supported platform?
(currently I am using Ubuntu 14.04 LTS)

Thanks,
Michal


From: Ptacek, MichalX [mailto:michalx.pta...@intel.com]
Sent: Monday, February 22, 2016 9:50 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials 
correctly ?

Hi Matt,

thanks for good hint !
The issue disappeared with the newer python-openstackclient-1.0.3-3.fc23.noarch;
python-openstackclient-1.0.1-1.fc22.noarch is too old.

It’s interesting, as the supported platforms for puppet-openstack are Fedora 21 
and 22, and I got it running only with fc23 ☺

best regards,
Michal

From: Matt Fischer [mailto:m...@mattfischer.com]
Sent: Friday, February 19, 2016 4:27 PM
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [puppet] is puppet-keystone using v3 credentials 
correctly ?

You shouldn't have to do any of that, it should just work. I have OSC 2.0.0 in 
my environment though (Ubuntu). I'm just guessing but perhaps that client is 
too old? Maybe a Fedora user could recommend a version.

On Fri, Feb 19, 2016 at 7:38 AM, Matthew Mosesohn <mmoses...@mirantis.com> wrote:
Hi Michal,

Just add --os-identity-api-version=3 to your command and it will work. The
provider uses v3 openstackclient via env var
OS_IDENTITY_API_VERSION=3. The default is still 2.

Best Regards,
Matthew Mosesohn

On Fri, Feb 19, 2016 at 5:25 PM, Matt Fischer <m...@mattfischer.com> wrote:
> What version of openstack client do you have? What version of the module are
> you using?
>
> On Feb 19, 2016 7:20 AM, "Ptacek, MichalX" <michalx.pta...@intel.com> wrote:
>>
>> Hi all,
>>
>>
>>
>> I was playing some time with puppet-keystone deployments,
>>
>> and also reported one issue related to this:
>>
>> https://bugs.launchpad.net/puppet-keystone/+bug/1547394
>>
>> but in general my observations are that keystone_service is using v3
>> credentials with openstack cli commands that are not compatible
>>
>>
>>
>> e.g.
>>
>> Error: Failed to apply catalog: Execution of '/bin/openstack service list
>> --quiet --format csv --long' returned 2: usage: openstack service list [-h]
>> [-f {csv,table}] [-c COLUMN]
>>   [--max-width ]
>>   [--quote {all,minimal,none,nonnumeric}]
>> openstack service list: error: unrecognized arguments: --long
>>
>>
>>
>>
>>
>> It can’t be bug, because whole module will not work due to this J
>>
>> I think I miss something important somewhere …
>>
>>
>>
>> My latest manifest file is :
>>
>>
>>
>> Exec { logoutput => 'on_failure' }
>>
>> package { 'curl': ensure => present }
>>
>>
>>
>> node keystone {
>>
>>
>>
>>   class { '::mysql::server': }
>>
>>   class { '::keystone::db::mysql':
>>
>> password => 'keystone',
>>
>>   }
>>
>>
>>
>>   class { '::keystone':
>>
>> verbose => true,
>>
>> debug   => true,
>>
>> database_connection => 
>> 'mysql://keystone:keystone@127.0.0.1/keystone',
>>
>> catalog_type=> 'sql',
>>
>> admin_token => 'admin_token',
>>
>>   }
>>
>>
>>
>>   class { '::keystone::roles::admin':
>>
>> email=> 'exam...@abc.com',
>>
>> password => 'ChangeMe',
>>
>>   }
>>
>>
>>
>>   class { '::keystone::endpoint':
>>
>> public_url => 
>> "http://${::fqdn}:5000/v2.0",
>>
>> admin_url  => 
>> "http://${::fqdn}:35357/v2.0",
>>
>>   }
>>
>> }
>>
>>
>>
>> Env variables looks as follows(before service list is called with --long)
>>
>>