Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Thomas Bechtold
Great idea! +1!

Tom
On Wed, Nov 02, 2016 at 08:09:27AM -0400, Tom Barron wrote:
> I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
> manila core team.  This is a clear case where he's already been doing
> the review work, excelling both qualitatively and quantitatively, as
> well as being a valuable committer to the project.  Goutham deserves to
> be core and we need the additional bandwidth for the project.  He's
> treated as a de facto core by the community already.  Let's make it
> official!
> 
> -- Tom Barron
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Splitting notifications from rpc (and questions + work around this)

2016-11-02 Thread Joshua Harlow

Davanum Srinivas wrote:

Josh,

Kirill Bespalov put together this doc of which components will work
with separate rpc and notification configurations:
https://docs.google.com/document/d/1CU0KjL9iV8vut76hg9cFuWQGSJawuNq_cK7vRF_KyAA/edit?usp=sharing

From my team, Oleksii Zamiatin is trying to scale ZMQ for RPC beyond 200
nodes.

Ilya Tyaptin's review is stuck because Monasca folks have trouble
using the newer python-kafka version:
https://review.openstack.org/#/c/332105/
https://review.openstack.org/#/c/316259/

As you can tell, we are trying to offer RabbitMQ or ZMQ for RPC and
RabbitMQ or Kafka for Notifications.

Hope this helps.

Thanks,
Dims



Thanks much, good things to know (and share) :)

-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] possible backports for stable/newton

2016-11-02 Thread Brent Eagles
Hi all,

After forgetting to backport something to stable/newton (thanks Emilien
and Alex!), I felt it worthwhile to check for patches that may have been
missed. Since our cherry picks don't seem to be considered equivalents by
git (probably because of modified commit messages), I resorted to
cross-checking git logs; that is to say, I may have missed some. While all
of these refer to bugs, not all of them are marked as backport potential.
That said, a few look like backporting might have been intended or
appropriate. If they look familiar, please revisit.
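For anyone repeating the check, the cross-check amounts to roughly the
following (a sketch, assuming a clone with both branches fetched; it
compares commit subjects, so reworded cherry-picks still need eyeballing):

    import subprocess

    def subjects(ref):
        # one commit subject per line for the given ref
        out = subprocess.check_output(
            ['git', 'log', '--no-merges', '--format=%s', ref])
        return set(out.decode('utf-8').splitlines())

    master = subjects('origin/master')
    stable = subjects('origin/stable/newton')

    # subjects that landed on master but never (under the same subject)
    # landed on stable/newton -- candidates for a missed backport
    for subject in sorted(master - stable):
        print(subject)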
puppet-tripleo

https://review.openstack.org/#/c/389583/ Set redis file descriptor limit
when run via pacemaker
https://review.openstack.org/#/c/380414/ Only run ceilometer::db::sync on
bootstrap node
https://review.openstack.org/#/c/386042/ pacemaker/mysql: wait step 2 to
remove default accounts

tripleo-heat-templates

https://review.openstack.org/#/c/372635/ Use correct password for keystone
bootstrap
https://review.openstack.org/#/c/380979/ Change rabbitmq queues HA mode
from ha-all to ha-exactly
(this one was abandoned, possibly to deal with a depends-on problem with
patch ordering - afaict the dependency has merged, so this could be
'unabandoned')
https://review.openstack.org/#/c/381869/ Include redis/mongo hiera when
using pacemaker
https://review.openstack.org/#/c/387266/ Enable proxy headers parsing for
Neutron (not sure about this one... there are similar patches that landed
to stable/newton so maybe)
https://review.openstack.org/#/c/385058/ Remove duplicate metadata keys
from nova-api.yaml (probably not critical - the big related bug was the
worker count = 0 thing for nova)


Cheers,

Brent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr][magnum] Notes from Summit fishbowl session

2016-11-02 Thread Vikas Choudhary
On Thu, Nov 3, 2016 at 12:33 AM, Antoni Segura Puimedon wrote:

> Hi magna and kuryrs!
>
> Thank you all for joining last week meetings. I am now writing a few
> emails to have persistent notes of what was talked about and discussed
> in the Kuryr work sessions. In the Magnum joint session the points
> were:
>
> Kuryr - Magnum joint work session
> =================================
>
> Authentication
> ==============
>
> * Consensus on using Keystone trust tokens.
> - We should closely follow the Keystone effort to scope the allowed
>   actions per token, limiting those to the minimal required set of verbs
>   that the COE and Kuryr need.
>
> * It was deemed unnecessary to pursue a proxying approach to access
>   Neutron. This means VM applications should be able to reach Neutron and
>   Keystone, but the only source of credentials they should have is
>   Keystone tokens.
>
>
> Tenancy and network topology
> ============================
>
> Two approaches should be made available to users:
>
> Full Neutron networking
> ~~~~~~~~~~~~~~~~~~~~~~~
>
> Under this configuration, containers running inside the nova instances
> would get networking via the Neutron vlan-aware-VMs feature. This means the
> COE driver (either kuryr-libnetwork or kuryr-kubernetes) would request a
> Neutron subport for the container. In this way, there can be multiple
> isolated networks running on worker nodes.
>
> The concerns about this solution are the performance when starting
> large numbers of containers, and the latency introduced when starting them
> due to going all the way to Neutron to request the subport.
>
> Minimal Neutron networking
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>
>
Is this the ipvlan/macvlan approach?


> In order to address the concerns with the 'Full Neutron networking'
> approach, and as a trade-off between features and minimalism, under this
> way of networking the containers would all be in the same Neutron network
> as the ports of their VMs.
>
> The problem with this solution is that allowing multiple isolated
> networks, as CNM and Kubernetes with network policy have, is quite
> complicated.
>
>
> Regards,
>
> Toni
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][routed-network] Host doesn't connected any segments when creating port

2016-11-02 Thread zhi
Hi, Miguel.

Thanks for your reply.

This is my thought about routed networks. Please review it and give me
some comments, thanks.

With a general L2 provider network, there is one subnet per network. In a
real deployment, we would have to create many provider networks because of
this one-network-to-one-subnet mapping. But a routed network can contain
more than one subnet, so in a real deployment we can create only one
network, and that network can contain many subnets. Is my understanding
right?
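For concreteness, a minimal python-neutronclient sketch of one network
carrying several subnets (the client credentials are placeholders):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='...', password='...',
                            tenant_name='...', auth_url='...')

    net = neutron.create_network({'network': {'name': 'multinet'}})
    net_id = net['network']['id']

    # one network, several subnets; with a routed network each subnet
    # would additionally carry a segment association
    for cidr in ('10.1.0.0/24', '10.1.1.0/24'):
        neutron.create_subnet({'subnet': {'network_id': net_id,
                                          'ip_version': 4,
                                          'cidr': cidr}})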

According to your reply, I think that if we create a routed network with
two subnets, one being 10.1.0.0/24 and the other 10.1.1.0/24, then besides
your solution we would also need to create two gateways (10.1.0.1 and
10.1.1.1) in the physical network in a real deployment, wouldn't we?

Hope for your reply. :)


Thanks
Zhi Chang

2016-11-02 22:49 GMT+08:00 Miguel Lavalle :

> Hi Zhi,
>
> In routed networks, the routing among the segments has to be provided by a
> router external to Neutron. It has to be provided by the deployment's
> networking infrastructure. In the summit presentation you watched, I used
> this Vagrant environment for the demo portion:
> https://github.com/miguellavalle/routednetworksvagrant. Specifically, look
> here:
> https://github.com/miguellavalle/routednetworksvagrant/blob/master/Vagrantfile#L188.
> As you can see, I create a VM, "iprouter", to act as the router between the
> two segments I use in the demo: one segment on vlan tag 2016 in physnet1
> and another segment on vlan tag 2016 in physnet2. Please also look here at
> how I enable the routing in the "iprouter" Linux:
> https://github.com/miguellavalle/routednetworksvagrant/blob/master/provisioning/setup-iprouter.sh.
>
> Of course, in a real deployment you would use a hardware router connected
> to all the network's segments.
>
> Hope this helps
>
> Miguel
>
> On Tue, Nov 1, 2016 at 4:42 AM, zhi  wrote:
>
>> Hi, shihanzhang and Neil, Thanks for your comments.
>>
>> From your comments, I think that a Neutron router or the physical network
>> should provide routing between these two subnets, shouldn't it? Is my
>> understanding right?
>>
>> I tried to connect these two subnets with a Neutron router but I met a
>> strange problem. I did some operations like this:
>>
>> stack@devstack:~$ neutron net-list
>> +--------------------------------------+----------+----------------------------------------------------+
>> | id                                   | name     | subnets                                            |
>> +--------------------------------------+----------+----------------------------------------------------+
>> | 6596da30-d7c6-4c39-b87c-295daad44123 | multinet | a998ac2b-2f50-44f1-9c1a-f4f3684ef63c 10.1.1.0/24   |
>> |                                      |          | 26bcdfd3-6393-425e-963e-1ace6ef74e0c 10.1.0.0/24   |
>> | 662de35c-f7a7-47cd-ba18-e5a2470935f0 | net      | 9754dfe9-be48-4a38-b690-5c48cf371ba3 10.10.10.0/24 |
>> +--------------------------------------+----------+----------------------------------------------------+
>> stack@devstack:~$ neutron router-port-list c488238d-06d7-4b85-9fa1-e0913e5bcf13
>>
>> stack@devstack:~$ neutron router-interface-add c488238d-06d7-4b85-9fa1-e0913e5bcf13 a998ac2b-2f50-44f1-9c1a-f4f3684ef63c
>> Added interface 680eb2b6-b445-4790-9610-80154dd6d909 to router c488238d-06d7-4b85-9fa1-e0913e5bcf13.
>> stack@devstack:~$ neutron router-port-list c488238d-06d7-4b85-9fa1-e0913e5bcf13
>> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
>> | id                                   | name | mac_address       | fixed_ips                                                                        |
>> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
>> | 680eb2b6-b445-4790-9610-80154dd6d909 |      | fa:16:3e:47:2e:8f | {"subnet_id": "26bcdfd3-6393-425e-963e-1ace6ef74e0c", "ip_address": "10.1.0.10"} |
>> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
>>
>>
>> After adding an interface for subnet 10.1.1.0/24 to the router, why is
>> the port's IP address 10.1.0.10? Shouldn't it be 10.1.1.x?
>>
>>
>>
>> Thanks
>> Zhi Chang
>>
>> 2016-11-01 17:19 GMT+08:00 shihanzhang :
>>
>>> agree with Neil.
>>>
>>> thanks
>>> shihanzhang
>>>
>>>
>>>
On 2016-11-01

[openstack-dev] [Neutron][neutron-lbaas][octavia] Not be able to ping loadbalancer ip

2016-11-02 Thread Wanjing Xu (waxu)
So I brought up octavia using devstack (stable/mitaka).  I created a
load balancer and a listener (no member created yet) and started to look at
how things are connected to each other.  I can ssh to the amphora VM and I
do see that haproxy is up with a frontend pointing to my listener.  I tried
to ping the load balancer IP (from the dhcp namespace), and the ping could
not go through.  I am wondering how packets are supposed to reach this
amphora VM.  I can see that the VM is launched on both networks (the
lb-mgmt network and my vipnet), but I don't see any nic associated with my
vipnet:

ubuntu@amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699:~$ ifconfig -a
eth0  Link encap:Ethernet  HWaddr fa:16:3e:b4:b2:45
  inet addr:192.168.0.4  Bcast:192.168.0.255  Mask:255.255.255.0
  inet6 addr: fe80::f816:3eff:feb4:b245/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:2496 errors:0 dropped:0 overruns:0 frame:0
  TX packets:2626 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:307518 (307.5 KB)  TX bytes:304447 (304.4 KB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:65536  Metric:1
  RX packets:212 errors:0 dropped:0 overruns:0 frame:0
  TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:18248 (18.2 KB)  TX bytes:18248 (18.2 KB)

localadmin@dmz-eth2-ucs1:~/devstack$ nova list
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name                                         | Status | Task State | Power State | Networks                                      |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+
| 557a3de3-a32e-419d-bdf5-41d92dd2333b | amphora-dad2f14e-76b4-4bd8-9051-b7a5627c6699 | ACTIVE | -          | Running     | lb-mgmt-net=192.168.0.4; vipnet=100.100.100.4 |
+--------------------------------------+----------------------------------------------+--------+------------+-------------+-----------------------------------------------+

And it seems that the amphora created a port on the vipnet for its vrrp_ip,
but I am not sure how it is used and how it is supposed to help packets
reach the load balancer IP.
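In case it helps others dig into the same thing, here is a rough
python-neutronclient sketch for inspecting the vipnet ports (credentials
and the network UUID are placeholders); the amphora's vrrp port should list
the VIP address in its allowed_address_pairs:

    from neutronclient.v2_0 import client

    neutron = client.Client(username='...', password='...',
                            tenant_name='...', auth_url='...')

    vipnet_id = 'VIPNET-UUID'  # placeholder

    for port in neutron.list_ports(network_id=vipnet_id)['ports']:
        print(port['id'], port['fixed_ips'],
              port.get('allowed_address_pairs'))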

It would be great if somebody could help with this, especially on the network side.

Thanks
Wanjing

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Summary of design summit for Ocata

2016-11-02 Thread Ken'ichi Ohmichi
Hi QA-team,

Thanks for joining QA sessions on OpenStack Summit Barcelona.
They were interesting and helped set the direction for this development
cycle.
This is a summary of those sessions and the next steps; I hope it helps
our work.

* (Tempest) Add an option to stop cleanup when a test failure happens
  Main assignee / organizer: dpaterson, mkopec
  Milestone: O-3
  Description:
    As a design principle, Tempest should clean up all created resources
when finishing.
    However, the cleanup sometimes makes problems difficult to debug
because the failure state is deleted.
    It would be nice to add an option to tempest.conf or the 'tempest run'
command to disable the cleanup.

* (Tempest) Add an option for the number of target VMs to test live-migration
  Main assignee / organizer: gmann
  Milestone: O-2
  Description:
    On production clouds, it is common to migrate multiple virtual
machines to another host.
    Currently Tempest migrates a single virtual machine in each test.
    It would be nice to add an option to control the number of target
machines.

* (Tempest) Add a decorator to bug-reproducing tests so the actual bug
number is known from a test failure
  Main assignee / organizer: oomichi, dmellado
  Milestone: O-2
  Description:
    When fixing a bug on each project, it is nice to propose a Tempest
test that reproduces the bug on the gate.
    Such Tempest tests can help detect latent bugs on production
clouds which are deployed with older OpenStack versions.
    By knowing the LP bug number from the test, testers can know which
patch needs to be applied to their own clouds according to the
LP report.
    Today they can find it in the Tempest git history (Related-Bug tag),
but that is a little hard.
    A new test decorator will make it easy to find.
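A strawman for what such a decorator could look like (the name and the
failure-reporting details here are made up, not a settled Tempest API):

    import functools

    def related_bug(bug_id):
        """Tag a test with the Launchpad bug it reproduces, so a failure
        points straight at the bug report."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    print('Failed test reproduces '
                          'https://launchpad.net/bugs/%s' % bug_id)
                    raise
            return wrapper
        return decorator

    # usage on a bug-reproducing test:
    #
    # @related_bug('1234567')
    # def test_something(self):
    #     ...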

* (Tempest) Reduce deep test class inheritance for easy debugging
  Main assignee / organizer: ekhugen, dmellado, jhakimra, andreaf
  Milestone: O-3
  Description:
    We still see deep backtraces when some Tempest tests fail.
    That makes problems hard to debug, because testers need to read
many test modules.
    First, we need to know how deep the current test class inheritance
goes and define a target depth to reduce it to.
    So, as a first step, a tool is needed to measure the current test
inheritance.
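A first cut of such a tool could simply walk the MRO of every loaded test
class (a sketch; how the test suite gets imported is left out):

    import inspect
    import unittest

    def inheritance_depth(cls):
        # number of ancestors between the class and `object`
        return len(inspect.getmro(cls)) - 1

    def report(module):
        for name, obj in sorted(vars(module).items()):
            if inspect.isclass(obj) and issubclass(obj, unittest.TestCase):
                print('%s: depth %d' % (name, inheritance_depth(obj)))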

* (Tempest) Bug Triage
  Main assignee / organizer: masayukig, gmann, jhakimra, luzC,
ababich, dmellado
  Milestone: End of Ocata
  Description:
    The number of Tempest bug reports keeps increasing and we need
bug triage.
    In this Ocata cycle, many people raised their hands for this bug
triage. Thanks so much.
We will do that in weekly rotation and report the progress in
weekly meetings.
https://etherpad.openstack.org/p/ocata-qa-bug-triage is for
managing assignees.

* OpenStack Health
  Main assignee / organizer: masayukig
  Milestone: O-2
  Description:
Submit ideas to launchpad from the session feedback and prioritize them.
The feedback was
- Unit test coverage of each project (Nova, Cinder, etc)
- Test failure ratio ranking by test

* Destructive testing
  Main assignee / organizer: Timur Nurlygayanov
  Milestone: O-2 (qa-spec at least)
  Description:
To clarify the scope, user story and test scenario, qa-spec is
necessary to be proposed.
    On the implementation side, it is better to avoid separate repos
for os-faults and stepler, for ease of maintenance.

* Policy testing
  Main assignee / organizer:
  Milestone: O-2 (qa-spec at least)
  Description:
    This testing will be implemented as a tempest plugin in a separate
repo, distinct from Tempest.
    The qa-spec is already proposed as https://review.openstack.org/#/c/382672/

If you have questions, please send mail to me or the "Main assignee / organizer".
Thanks for your help.

Reference:
* Ocata Priorities: https://etherpad.openstack.org/p/ocata-qa-priorities
* Etherpads of QA:
https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#QA_.28Quality_Assurance.29

Thanks
Ken Omichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Function as a service in OpenStack

2016-11-02 Thread Lingxian Kong
On Thu, Nov 3, 2016 at 10:44 AM, Zane Bitter  wrote:

> This is a really interesting space. There seems to be two main use cases
> for Lambda that are probably worth talking about separately:
>
> The first is for just Lambda alone. You can use it to provide some glue
> logic between the other AWS services, so you can trigger off various events
> (e.g. S3 notifications) and write a little bit of conditioning logic that
> transforms the data and dispatches it to other services (e.g. DynamoDB).
> This one is particularly interesting to me, and in fact we can support
> parts of this in OpenStack already[1] because Mistral's functionality is
> equivalent to something like SWS + parts of Lambda. (Specifically, Mistral
> can do the data dispatch easily enough, but any data transformation has to
> be done in YAQL, which is a pretty high bar compared to just writing some
> code in a language of your choosing.)
>

There is still one thing missing in Mistral (though maybe it shouldn't be
in Mistral's scope). After receiving events from Aodh or Zaqar, what if the
user just wants to trigger some scripts under his/her own management,
rather than just invoking OpenStack service APIs? Although actions are
pluggable in Mistral, in this case it's definitely not as easy as just
writing a customized action, unless Mistral included such a capability in
its scope, which I think is too heavy for Mistral to manage by itself. So
FaaS would be the right answer in this case, and it would also be an add-on
that empowers Mistral to do more things.
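For reference, a pluggable Mistral action has roughly the following shape
(a sketch against the mistral.actions.base interface; every new script
means another class like this plus registration, which is the burden
described above):

    import subprocess

    from mistral.actions import base

    class RunScriptAction(base.Action):
        """Run an operator-managed script and return its output."""

        def __init__(self, script_path):
            self.script_path = script_path

        def run(self):
            return subprocess.check_output([self.script_path])

        def test(self):
            # dry-run mode
            return 'would run %s' % self.script_path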


>
> The second one is Lambda + the API Gateway, which allows you to have web
> requests act as triggers, so that you can effectively treat it as a PaaS
> and build an entire web app by stringing together Lambda functions and the
> various other services (S3, DynamoDB, ...). On the face of it this sounds
> to me like a gimmicky way of deploying an unmaintainable mess. Naturally
> this is the one receiving all of the attention, which shows how much I know
> :D


I also don't find this one attractive; Lambda is especially
powerful when it's used together with other AWS services (S3,
DynamoDB, Kinesis Streams, etc.).

>
> I definitely don't think we should try to reimplement this from scratch in
> OpenStack. IMHO if we're going to add FaaS capabilities we should re-use
> some existing project (like OpenWhisk), even if we have to write our own
> native API over the top of it.
>
> The things we'd really want it to do would be:
>
> * Authenticate against Keystone,
> * Provide Keystone credentials for the user-supplied functions it runs to
> access (probably using Keystone trusts), and
> * Connect to existing OpenStack sources of events, which hopefully means
> Zaqar queues
>
> Which sounds challenging to integrate with an existing standalone project,
> though still not as bad as writing an equivalent from scratch.
>
> TBH I think the appeal, at least for the FaaS-as-a-PaaS (aka #serverless)
> crowd, is going to be pretty limited until such time as we have an
> equivalent of DynamoDB in OpenStack. (i.e. no time soon, since the
> MagnetoDB project is goneburger.) The idea of FaaS is to make the unit of
> compute power that you're paying for (a) as fine-grained as possible, and
> (b) scalable to infinity. Swift provides the same thing for storage
> (Nova:FaaS::Cinder:Swift). What we don't have is the equivalent for a
> database, there's only Trove where you're paying for a VM-sized chunk at a
> minimum and scaling up in units of VM-sized chunks until you reach the
> limit of how many VMs can communicate with each other and still get any
> work done. Not many web apps can get by without a database, so that largely
> negates the purpose to my mind, since the database will likely both
> dominate costs at the low end and put the upper limit on scale at the high
> end.
>

Yeah, I agree with you that more things are needed so that FaaS-like
functionality can be used appropriately and ideally. We can't get
everything ready on day 1; that's how we do things, from simple to complex,
isn't it?



>
> cheers,
> Zane.
>
> [1] https://www.openstack.org/videos/video/building-self-healing-applications-with-aodh-zaqar-and-mistral
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Function as a service in OpenStack

2016-11-02 Thread Lingxian Kong
On Thu, Nov 3, 2016 at 5:08 AM, Clint Byrum  wrote:

> I don't have answers to these questions, but I'd ask:
>
> * Does OpenWhisk have a significant user base?
>
> * Do the goals of OpenWhisk run parallel to the goals of OpenStack?
>
> * Can any OpenStack operator deploy OpenWhisk and immediately begin
>   providing serverless to their users?
>

Yeah, all good questions. I'm afraid only the OpenWhisk folks could answer
them, and I also really hope OpenWhisk could become part of OpenStack and
provide more documentation, so people won't reinvent the wheel any more.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Splitting notifications from rpc (and questions + work around this)

2016-11-02 Thread Davanum Srinivas
Josh,

Kirill Bespalov put together this doc of which components will work
with separate rpc and notification configurations:
https://docs.google.com/document/d/1CU0KjL9iV8vut76hg9cFuWQGSJawuNq_cK7vRF_KyAA/edit?usp=sharing

From my team, Oleksii Zamiatin is trying to scale ZMQ for RPC beyond 200
nodes.

Ilya Tyaptin's review is stuck because Monasca folks have trouble
using the newer python-kafka version:
https://review.openstack.org/#/c/332105/
https://review.openstack.org/#/c/316259/

As you can tell, we are trying to offer RabbitMQ or ZMQ for RPC and
RabbitMQ or Kafka for Notifications.
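As a concrete illustration of that split, a minimal oslo.messaging sketch
(the broker hostnames here are placeholders):

    from oslo_config import cfg
    import oslo_messaging as messaging

    conf = cfg.CONF

    # RPC traffic over RabbitMQ
    rpc_transport = messaging.get_transport(
        conf, url='rabbit://guest:guest@rabbit-host:5672/')

    # notifications over Kafka, via the separate notification transport
    notify_transport = messaging.get_notification_transport(
        conf, url='kafka://kafka-host:9092/')

    notifier = messaging.Notifier(
        notify_transport, publisher_id='demo', driver='messaging')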

Hope this helps.

Thanks,
Dims

On Wed, Nov 2, 2016 at 8:11 PM, Joshua Harlow  wrote:
> Hi folks,
>
> There was a bunch of chatter at the summit about how there are really two
> different types of (oslo) messaging usage that exist in openstack and how
> they need not be backed by the same solution type (rabbitmq, qpid,
> kafka...).
>
> For those that were not at the oslo sessions:
>
> https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Oslo
>
> The general gist, though, was that we need to make sure people really do
> know that there are two very different types of messaging usage in openstack,
> and to ensure that operators (and developers) are picking the right backing
> technology for each type.
>
> So some questions naturally arise out of this.
>
> * Where are the best practices with regard to selection of the best backend
> type for rpc (and one for notifications); is this something oslo.messaging
> should work through (or can the docs team and operator group also help in
> making this)?
>
> * What are the tradeoffs in using the same (or different) technology for rpc
> and notifications?
>
> * Is it even possible for all oslo.messaging consuming projects to be able
> to choose 2 different backends, are consuming projects consuming the library
> correctly so that they can use 2 different backends?
>
> * Is devstack able to run with say kafka for notifications and rabbitmq for
> rpc (if not, is there any work item the oslo group can help with to make
> this possible) so that we can ensure and test that all projects can work
> correctly with appropriate (and possibly different) backends?
>
> * Any other messaging, arch-wg work that we (oslo or others) can help out
> with to make sure that projects (and operators) are using the right
> technology for the right use (and not just defaulting to RPC over rabbitmq
> because it exists, when in reality something else might be a better choice)?
>
> * More(?)
>
> Just wanted to get this conversation started, because afaik it's one that
> has not been widely circulated (and operators have been setting up rabbitmq
> in various HA and clustered and ... modes, when in reality thinking through
> what and how it is used may be more appropriate); this also applies to
> developers since some technical solutions in openstack seem to be created
> due to (in-part) rabbitmq shortcomings (cells v1 afaik was *in* part created
> due to scaling issues).
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Splitting notifications from rpc (and questions + work around this)

2016-11-02 Thread Joshua Harlow

Hi folks,

There was a bunch of chatter at the summit about how there are really 
two different types of (oslo) messaging usage that exist in openstack 
and how they need not be backed by the same solution type (rabbitmq, 
qpid, kafka...).


For those that were not at the oslo sessions:

https://wiki.openstack.org/wiki/Design_Summit/Ocata/Etherpads#Oslo

The general gist, though, was that we need to make sure people really do
know that there are two very different types of messaging usage in
openstack, and to ensure that operators (and developers) are picking the
right backing technology for each type.


So some questions naturally arise out of this.

* Where are the best practices with regard to selection of the best 
backend type for rpc (and one for notifications); is this something 
oslo.messaging should work through (or can the docs team and operator 
group also help in making this)?


* What are the tradeoffs in using the same (or different) technology for 
rpc and notifications?


* Is it even possible for all oslo.messaging consuming projects to be 
able to choose 2 different backends, are consuming projects consuming 
the library correctly so that they can use 2 different backends?


* Is devstack able to run with say kafka for notifications and rabbitmq 
for rpc (if not, is there any work item the oslo group can help with to 
make this possible) so that we can ensure and test that all projects can 
work correctly with appropriate (and possibly different) backends?


* Any other messaging, arch-wg work that we (oslo or others) can help 
out with to make sure that projects (and operators) are using the right 
technology for the right use (and not just defaulting to RPC over 
rabbitmq because it exists, when in reality something else might be a 
better choice)?


* More(?)

Just wanted to get this conversation started, because afaik it's one 
that has not been widely circulated (and operators have been setting up 
rabbitmq in various HA and clustered and ... modes, when in reality 
thinking through what and how it is used may be more appropriate); this 
also applies to developers since some technical solutions in openstack 
seem to be created due to (in-part) rabbitmq shortcomings (cells v1 
afaik was *in* part created due to scaling issues).


-Josh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]: Instance creation and deletion metrics in ceilometer !

2016-11-02 Thread Adrian Turjak

On 03/11/16 03:01, gordon chung wrote:
> gnocchi captures the state of a resource and its history. this is
> accessible by looking at resource history. i'm not entirely sure if that
> handles your case; maybe you could provide the queries you use and we
> could figure out equivalent gnocchi queries. i built a ceilometer vs
> gnocchi usage deck[1] that may help but it's more focused on metrics
> rather than resource history.
>
> [1] http://www.slideshare.net/GordonChung/ceilometer-to-gnocchi
>
> cheers,

I'd need to double check exactly what query it is, but it effectively
amounts to:
"List all instance metric samples where project_id is <project_id> and
timestamp is in time range <start>-<end>"

The time range is an hour plus a lead-in from the last hour, to catch the
last sample from the previous window.

We then group by resource id, and for each instance check the metadata.
If a sample exists, then the instance exists, and depending on what
states the metadata shows it was in, we know how much of that hour we
will be billing for. Basically, we don't care about the actual volume/data
of the metric, just the sample metadata from the resource at that point
in time.

The above is what our billing aggregation service does every hour against
ceilometer. So we're not using ceilometer directly for billing; it is just
the source for the data we wish to aggregate and transform into something
we can bill.
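Concretely, the hourly pass looks roughly like this with
python-ceilometerclient (the auth kwargs and window bounds are stand-ins):

    from ceilometerclient import client

    cclient = client.get_client(
        2, os_username='...', os_password='...',
        os_tenant_name='...', os_auth_url='...')

    project_id = 'PROJECT'                  # stand-ins
    window_start = '2016-11-02T10:00:00'
    window_end = '2016-11-02T11:00:00'

    query = [
        dict(field='project_id', op='eq', value=project_id),
        dict(field='timestamp', op='ge', value=window_start),
        dict(field='timestamp', op='lt', value=window_end),
    ]
    samples = cclient.samples.list(meter_name='instance', q=query)

    # group by instance; billing only needs the metadata on each sample
    by_instance = {}
    for s in samples:
        by_instance.setdefault(s.resource_id, []).append(s)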

It looks like we can still achieve the same thing in gnocchi with any
instance metric that has resource metadata (e.g. cpu_util), since gnocchi
stores the changes in the metadata over time. Can we, though, bypass the
metric and look at changes in resource metadata directly?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][networking-sfc] No networking-sfc meeting for 11/3/2016. We will resume our project meeting on 11/10/2016

2016-11-02 Thread Cathy Zhang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Chris Friesen

On 11/02/2016 02:52 PM, Jay Pipes wrote:

On 11/01/2016 10:14 AM, Alex Xu wrote:

Currently we only update the resource usage with the Placement API in the
instance claim and the available resource update periodic task. But
there is no claim for migration with the placement API yet. This work is
tracked by https://bugs.launchpad.net/nova/+bug/1621709. In newton, we
only fixed one bit, which makes the resource update periodic task work
correctly, so that it will auto-heal everything. The migration claim
part wasn't a goal for the newton release.

So the first question is: do we want to fix it in this release? If the
answer is yes, there is a concern we need to discuss.


Yes, I believe we should fix the underlying problem in Ocata. The underlying
problem is what Sylvain brought up: live migrations do not currently use any
sort of claim operation. The periodic resource audit is relied upon to
essentially clean up the state of claimed resources over time, and as Chris
points out in review comments on https://review.openstack.org/#/c/244489/, this
leads to the scheduler operating on stale data and can lead to an increase in
retry operations.


It's worse than that.  For pinned instances it can result in vCPUs from multiple 
instances running on the same host pCPUs (which defeats the whole point of 
pinning), and can result in outright live migration failures if the destination 
has fewer pCPUs or NUMA nodes than the source.



I see no reason why we can't change the behaviour of the `PUT
/allocations/{consumer_uuid}` call to allow changing either the amounts of the
allocated resources (a resize operation) or the set of resource provider UUIDs
referenced in the allocations list (a move operation).


Agreed, your example looks reasonable at first glance.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron][midonet] midonet liberty gate failure

2016-11-02 Thread Takashi Yamamoto
hi,

On Thu, Nov 3, 2016 at 2:18 AM, Ihar Hrachyshka  wrote:
> Hi YAMAMOTO (and other midokura folks),
>
> I spotted unit tests in the branch are failing due to upper constraints not
> applied. So I backported the fix as:
> https://review.openstack.org/#/c/392698/ Sadly, it does not pass because
> tempest for midonet v2 fails:

thank you.

>
> http://logs.openstack.org/98/392698/1/check/gate-tempest-dsvm-networking-midonet-v2/9494c9c/logs/devstacklog.txt.gz#_2016-11-02_15_10_13_949
>
> It looks like midonet SDN controller misbehaving.
>
> Would you mind taking it from there and propose the needed patches to pass
> the gate for the patch?

sure, it looks like a backend issue. i'll take a look.

>
> Thanks,
> Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovn] restart failure to bring up ovn services

2016-11-02 Thread Murali R
Following the docs online (Newton), the installation was successful.
However, when the VM that hosts the controller (and ovn-nb) was restarted,
it failed to bring up ovs & ovn. This is an ubuntu deployment using
python-networking-ovn and a locally built ovn. Is this a deployment problem?
Is it possible to recover from here without losing the neutron DB sync? I
have not configured any networks that I need to save.

NOTE: I did a reboot once before and the services came back fine at that
time. Not sure if there is a sequence to be followed while shutting down;
if so, can I know what it would be?

Nov  2 15:19:42 controller neutron-server[2715]: 2016-11-02 15:19:42.003
3052 ERROR networking_ovn.ovsdb.impl_idl_ovn [-] OVS database connection to
OVN_Northbound failed with error: 'Could not retrieve schema from tcp:
192.168.56.102:6641: Connection refused'. Verify that the OVS and OVN
services are available and that the 'ovn_nb_connection' and
'ovn_sb_connection' configuration options are correct.
Nov  2 15:19:42 controller neutron-server[2715]: 2016-11-02 15:19:42.003
3052 ERROR networking_ovn.ovsdb.impl_idl_ovn Traceback (most recent call
last):
Nov  2 15:19:42 controller neutron-server[2715]: 2016-11-02 15:19:42.003
3052 ERROR networking_ovn.ovsdb.impl_idl_ovn   File
"/usr/lib/python2.7/dist-packages/networking_ovn/ovsdb/impl_idl_ovn.py",
line 84, in __init__


Nov  2 15:19:42 controller neutron-server[2715]: 2016-11-02 15:19:42.003
3052 ERROR networking_ovn.ovsdb.impl_idl_ovn 'err': os.strerror(err)})
Nov  2 15:19:42 controller neutron-server[2715]: 2016-11-02 15:19:42.003
3052 ERROR networking_ovn.ovsdb.impl_idl_ovn Exception: Could not retrieve
schema from tcp:192.168.56.102:6641: Connection refused
Nov  2 15:19:42 controller neutron-server[2715]: 2016-11-02 15:19:42.003
3052 ERROR networking_ovn.ovsdb.impl_idl_ovn
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] PTL Leave of Absence

2016-11-02 Thread Emilien Macchi
I'll be on vacation from Friday night until November 22nd, with
very limited access to IRC / email.
In case some decision needs to be made with PTL approval, Steven
Hardy will act as proxy and point of contact for the TripleO project.

I know this message is not required, but I believe it helps with good
communication across distributed teams.
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] Release announcements

2016-11-02 Thread gordon chung


On 02/11/16 03:39 PM, Emilien Macchi wrote:
> On Wed, Nov 2, 2016 at 12:33 PM, Thierry Carrez  wrote:
>> Hi everyone,
>>
>> In Barcelona the release team has been discussing how to improve release
>> announcements. Posting them on openstack-dev (for libs) and
>> openstack-announce (for main services) has proven to be pretty noisy,
>> especially for projects which publish lots of components, like OpenStack
>> Puppet or OpenStack Ansible. This actively discouraged people from
>> following openstack-announce, which was really not the goal.
>
> sorry for that ;-)

lol, ttx named names. i also blame Emilien.

>
>> At the same time, we can't just stop making announcements. Some people
>> (especially on the downstream side) still want to receive release
>> announces. And we still want to archive a trace of the release and
>> provide a starting point for discussing immediate issues on a given
>> release, especially for libraries.
>>
>> The proposed solution is to create a specific mailing-list for OpenStack
>> release announcements (proposed name is "release-announces") where we'd
>> post the automated release announcements. Only the release bot and
>> release managers would be able to post to it. The "reply-to" field would
>> be set to openstack-dev, in case someone wanted to start a thread about
>> a given release. By default, it would be set to send in daily digest
>> mode, to reduce noise and encourage people to subscribe to it.
>>
>> The -announce list would get back to low-noise, and be limited to
>> highly-important announcements (one email for the final release, emails
>> about events, elections...).
>>
>> Please let us know if you have comments or questions. We'll start
>> implementing this plan next week if no objection is raised.
>
> Excellent idea!

i'm ok with release-announce tag as others suggested. i like the 
reply-to field idea as well.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Global changes to CI jobs using neutron

2016-11-02 Thread Hayes, Graham
On 02/11/16 19:53, Matt Riedemann wrote:
> nova-network was deprecated in newton. Nova is working on moving the CI
> jobs that run against it to use Neutron only in Ocata. I'm tracking that
> work here:
>
> https://etherpad.openstack.org/p/removing-nova-network
>
> In an effort to make this more of a global switch in Ocata, clarkb has
> proposed a change to devstack-gate to change the default value of
> DEVSTACK_GATE_NEUTRON from 0 (nova-net) to 1 (neutron):
>
> https://review.openstack.org/#/c/392934/
>
> There are conditions on that which are:
>
> 1. If a job definition in project-confg sets DEVSTACK_GATE_NEUTRON
> explicitly, that's honored.
>
> 2. If not explicitly defined and the job is running against a stable
> branch, then nova-network is still the default.
>
> There are a few jobs which are definitely nova-network specific, like
> some of the nova-net specific grenade jobs and the cells v1 job. Those
> are being explicitly handled in a series here:
>
> https://review.openstack.org/#/c/392942/
>
> So why should you care, you ask? First, thanks for asking. Second, as
> noted in the commit message to ^ there are several grenade jobs which
> don't explicitly define DEVSTACK_GATE_NEUTRON, like for designate and
> trove. With that change those grenade jobs will now start using neutron
> when upgrading from newton to ocata. This might work, it might not. If
> it does not work, and you know it won't work, please speak up now.
> Otherwise if things break we'll have to either (a) explicitly set
> DEVSTACK_GATE_NEUTRON=0 in those jobs or (b) cap them at stable/newton -
> either way those affected projects would have to sort out a path forward
> for continued upgrade testing in Ocata.
>

I don't know of any reason that this should not work for the Designate
jobs, as we do not make use of any of the other resources in the cloud.

I will try a manual test in the next couple of days to confirm though.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]: Instance creation and deletion metrics in ceilometer !

2016-11-02 Thread Emilien Macchi
On Wed, Nov 2, 2016 at 2:33 PM, Maxime Belanger  wrote:
> Hi Raghunath,
>
>
> We are using Almanach: https://github.com/openstack/almanach. I think your
> use case pretty much fits what this project does.
> We implemented this as a replacement of Ceilometer to gather usage on
> instances and volumes. We query it to do our billing calculation.

I am failing to understand why we create projects to replace official
projects that have the same mission statement or common technical goals.
We saw it with Monasca and AFAIK it didn't work very well.
Have you engaged with the Telemetry team to work together as a community
and propose your use-case for the roadmap if some feature was missing?

I'm very interested to hear from the "Almanach" developers why we're here
now.  I personally like to think OpenStack contributors work as a
community and meet common goals by communicating on the appropriate
channels.
Please correct me if I'm wrong and this project is doing something else
entirely; my knowledge of Telemetry is limited to my user perspective.

> It is of course a less complete solution than CloudKitty + Gnocchi but
> suits our needs for now.
>
> Maxime
>
> 
> From: Raghunath D 
> Sent: October 25, 2016 5:32:13 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Ceilometer]: Instance creation and deletion
> metrics in ceilometer !
>
> Hi ,
>
> Can someone please suggest how to get instance notifications in ceilometer.
>
> With Best Regards
> Raghunath Dudyala
> Tata Consultancy Services Limited
> Mailto: raghunat...@tcs.com
> Website: http://www.tcs.com
> 
> Experience certainty. IT Services
> Business Solutions
> Consulting
> 
>
>
> -Raghunath D/HYD/TCS wrote: -
> To: openstack-dev@lists.openstack.org
> From: Raghunath D/HYD/TCS
> Date: 10/18/2016 08:01PM
> Subject: [openstack-dev] [Ceilometer]: Instance creation and deletion
> metrics in ceilometer !
>
> Hi ,
>
> How can instance creation and deletion information/samples be retrieved
> from ceilometer?
> What entries should be in pipeline.yaml to get instance deletion
> information?
>
> I tried to have meters - "instance" in pipeline.yaml but it always gives
> active instance details,
> and no details of deleted instances.
>
> With Best Regards
> Raghunath Dudyala
> Tata Consultancy Services Limited
> Mailto: raghunat...@tcs.com
> Website: http://www.tcs.com
> 
> Experience certainty. IT Services
> Business Solutions
> Consulting
> 
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Function as a service in OpenStack

2016-11-02 Thread Zane Bitter

On 01/11/16 22:20, Lingxian Kong wrote:

Hi, all,

Recently when I was talking with some customers of our OpenStack based
public cloud, some of them are expecting to see a service similar to AWS
Lambda in OpenStack ecosystem (so such service could be invoked by Heat,
Mistral, Swift, etc.).


This is a really interesting space. There seems to be two main use cases 
for Lambda that are probably worth talking about separately:


The first is for just Lambda alone. You can use it to provide some glue 
logic between the other AWS services, so you can trigger off various 
events (e.g. S3 notifications) and write a little bit of conditioning 
logic that transforms the data and dispatches it to other services (e.g. 
DynamoDB). This one is particularly interesting to me, and in fact we 
can support parts of this in OpenStack already[1] because Mistral's 
functionality is equivalent to something like SWS + parts of Lambda. 
(Specifically, Mistral can do the data dispatch easily enough, but any 
data transformation has to be done in YAQL, which is a pretty high bar 
compared to just writing some code in a language of your choosing.)


The second one is Lambda + the API Gateway, which allows you to have web 
requests act as triggers, so that you can effectively treat it as a PaaS 
and build an entire web app by stringing together Lambda functions and 
the various other services (S3, DynamoDB, ...). On the face of it this 
sounds to me like a gimmicky way of deploying an unmaintainable mess. 
Naturally this is the one receiving all of the attention, which shows 
how much I know :D



Coincidently, I happened to see an introduction of OpenWhisk project by
IBM guys in Barcelona Summit. The demo was great and I was much more
exsited to know it was opensourced, but after checking, I feels a little
bit frustrated, most of the core part of the code was written in Scala
so it sets a high bar for me (yeah, I'm using Python) to learn and
understand.

So I came here to ask if there are people who are interested in
serverless area as me or have the same requirements as our customers?
Does it deserve a new project complies with OpenStack rules and
conventions? Is there any chance that people could join together for the
implementation?


I definitely don't think we should try to reimplement this from scratch 
in OpenStack. IMHO if we're going to add FaaS capabilities we should 
re-use some existing project (like OpenWhisk), even if we have to write 
our own native API over the top of it.


The things we'd really want it to do would be:

* Authenticate against Keystone,
* Provide Keystone credentials for the user-supplied functions it runs 
to access (probably using Keystone trusts), and
* Connect to existing OpenStack sources of events, which hopefully means 
Zaqar queues


Which sounds challenging to integrate with an existing standalone 
project, though still not as bad as writing an equivalent from scratch.
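On the second bullet, the credential hand-off via trusts would look roughly
like this with python-keystoneclient (the IDs and credentials are
stand-ins):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    ks = client.Client(session=session.Session(auth=auth))

    # stand-ins; these would be looked up elsewhere
    user_id = 'USER-UUID'
    faas_service_user_id = 'SERVICE-USER-UUID'
    project_id = 'PROJECT-UUID'

    # delegate the caller's role to the FaaS service user for later use
    trust = ks.trusts.create(trustor_user=user_id,
                             trustee_user=faas_service_user_id,
                             project=project_id,
                             role_names=['Member'],
                             impersonation=True)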


TBH I think the appeal, at least for the FaaS-as-a-PaaS (aka 
#serverless) crowd, is going to be pretty limited until such time as we 
have an equivalent of DynamoDB in OpenStack. (i.e. no time soon, since 
the MagnetoDB project is goneburger.) The idea of FaaS is to make the 
unit of compute power that you're paying for (a) as fine-grained as 
possible, and (b) scalable to infinity. Swift provides the same thing 
for storage (Nova:FaaS::Cinder:Swift). What we don't have is the 
equivalent for a database, there's only Trove where you're paying for a 
VM-sized chunk at a minimum and scaling up in units of VM-sized chunks 
until you reach the limit of how many VMs can communicate with each 
other and still get any work done. Not many web apps can get by without 
a database, so that largely negates the purpose to my mind, since the 
database will likely both dominate costs at the low end and put the 
upper limit on scale at the high end.


cheers,
Zane.

[1] 
https://www.openstack.org/videos/video/building-self-healing-applications-with-aodh-zaqar-and-mistral



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][i18n] how to indicate non-translatable identifiers in translatable strings?

2016-11-02 Thread Brant Knudson
On Wed, Nov 2, 2016 at 11:34 AM, Brian Rosmaita <brian.rosma...@rackspace.com> wrote:

> This issue came up during a code review; I've asked around a bit but
> haven't been able to find an answer.
>
> Some of the help output for utility scripts associated with Glance isn't
> being translated, so Li Wei put up a patch to fix this [0], but there are
> at least two problematic cases.
>
> Case 1:
> parser.add_option('-S', '--os_auth_strategy', dest="os_auth_strategy",
>                   metavar="STRATEGY",
>                   help=_("Authentication strategy (keystone or noauth)."))
>
> For that one, 'keystone' and 'noauth' are identifiers, so we don't want
> them translated.  Would putting single quotes around them like this be
> sufficient to indicate they shouldn't be translated?  For example,
>
> help=_("Authentication strategy ('keystone' or 'noauth').")
>
>
one option is, don't put the non-translated words in the _(""); for example:

 help=_("Authentication strategy (%r or %r).") % ('keystone', 'noauth')


>
> Andreas Jaeger mentioned that maybe we could use a "translation comment"
> [1].  I guess we'd do something like:
>
> # NOTE: do not translate the stuff in single quotes
> help=_("Authentication strategy ('keystone' or 'noauth').")
>
>
> What are other people doing for this?
>
> Case 2:
> We've got a big block of usage text, roughly
>
> usage = _("""
> %prog  [options] [args]
>
> Commands:
> keyword1what it does
> keyword2what it does
> ...
> keyword8what it does
> """)
>
> We don't want the keywords to be translated, but I'm not sure how to
> convey this to the translators.
>
> Thanks in advance for your help,
> brian
>
>
> [0] https://review.openstack.org/#/c/367795/8
> [1] http://babel.pocoo.org/en/latest/messages.html#translator-comments
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Jay Pipes

On 11/01/2016 10:14 AM, Alex Xu wrote:

Currently we only update the resource usage with the Placement API in the
instance claim and the available resource update periodic task. But
there is no claim for migration with the placement API yet. This work is
tracked by https://bugs.launchpad.net/nova/+bug/1621709. In newton, we
only fixed one bit, which makes the resource update periodic task work
correctly, so that it will auto-heal everything. The migration claim
part wasn't a goal for the newton release.

So the first question is: do we want to fix it in this release? If the
answer is yes, there is a concern we need to discuss.


Yes, I believe we should fix the underlying problem in Ocata. The 
underlying problem is what Sylvain brought up: live migrations do not 
currently use any sort of claim operation. The periodic resource audit 
is relied upon to essentially clean up the state of claimed resources 
over time, and as Chris points out in review comments on 
https://review.openstack.org/#/c/244489/, this leads to the scheduler 
operating on stale data and can lead to an increase in retry operations.


This needs to be fixed before even attempting to address the issue you 
bring up with the placement API calls from the resource tracker.



In order to implement the drop of the migration claim, the RT needs to
remove allocation records on the specific RP (on the source/destination
compute node). But there isn't any API that can do that. The API for
removing allocation records is 'DELETE /allocations/{consumer_uuid}', but it
will delete all the allocation records for the consumer. So the initial
fix (https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
/resource_providers/{rp_uuid}/allocations/{consumer_id}'. But Chris Dent
pointed out this is against the original design: all the allocations for
a specific consumer can only be dropped together.


Yes, and this is by design. Consumption of resources -- or the freeing 
thereof -- must be an atomic, transactional operation.



There is also a suggestion from Andrew that we can update all the
allocation records for the consumer each time. That means the RT would
build the original allocation records and the new allocation records for
the claim together, and put them into one API call. That API would be 'PUT
/allocations/{consumer_uuid}'. Unfortunately that API doesn't replace
all the allocation records for the consumer; it always amends the new
allocation records for the consumer.


I see no reason why we can't change the behaviour of the `PUT 
/allocations/{consumer_uuid}` call to allow changing either the amounts 
of the allocated resources (a resize operation) or the set of resource 
provider UUIDs referenced in the allocations list (a move operation).


For instance, let's say we have an allocation for an instance "i1" that 
is consuming 2 VCPU and 2048 MEMORY_MB on compute node "rpA", and 50 
DISK_GB on a shared storage pool "rpC".


The allocations table would have the following records in it:

resource_provider  resource_class  consumer  used
-----------------  --------------  --------  ----
rpA                VCPU            i1           2
rpA                MEMORY_MB       i1        2048
rpC                DISK_GB         i1          50

Now, we need to migrate instance "i1" to compute node "rpB". The 
instance disk uses shared storage so the only allocation records we 
actually need to modify are the VCPU and MEMORY_MB records.


We would create the following REST API call from the resource tracker on 
the destination node:


PUT /allocations/i1
{
  "allocations": [
  {
"resource_provider": {
  "uuid": "rpB",
},
"resources": {
  "VCPU": 2,
  "MEMORY_MB": 2048
}
  },
  {
"resource_provider": {
  "uuid": "rpC",
},
"resources": {
  "DISK_GB": 50
}
  }
  ]
}

The placement service would receive that request payload and immediately 
grab any existing allocation records referencing consumer_uuid of "i1". 
It would notice that records referencing "rpA" (the source compute node) 
are no longer needed. It would notice that the DISK_GB allocation hasn't 
changed. And finally it would notice that there are new VCPU and 
MEMORY_MB records referring to a new resource provider "rpB" (the 
destination compute node).


A single SQL transaction would be built that executes the following:

BEGIN;

  # Grab the source and destination compute node provider generations
  # to protect against concurrent writes...
  $RPA_GEN := SELECT generation FROM resource_providers
  WHERE uuid = 'rpA';
  $RPB_GEN := SELECT generation FROM resource_providers
  WHERE uuid = 'rpB';

  # Delete the allocation records referring to the source for the VCPU
  # and MEMORY_MB resources
  DELETE FROM allocations
  WHERE consumer = 'i1'
  AND resource_provider = 'rpA'
  AND resource_class IN ('VCPU', 'MEMORY_MB');

  # Add allocation records referring to the destination for VCPU and
  # MEMORY_MB
  INSERT INTO allocations
  (resource_provider, resource_class, 

[openstack-dev] [release][ptl][all] proposed change to Ocata final release date

2016-11-02 Thread Doug Hellmann
One piece of feedback the release team received during the summit
was that downstream packagers would benefit from having more time
during the final release week to prepare the packages after we tag
the final versions of projects using milestone-based releases. To
help them out, we would like to move the final release date from
Thursday 23 Feb to Wednesday 22 Feb.

I don't expect a lot of impact on project teams, since the release
team discourages release candidates for a day or two before the
final release anyway and we create the final tag from an existing
release candidate.  However, if you feel strongly that this would
have a negative effect on your project, please comment on the review [1].

Thanks,
Doug

[1] https://review.openstack.org/392948

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Global changes to CI jobs using neutron

2016-11-02 Thread Matt Riedemann
nova-network was deprecated in newton. Nova is working on moving the CI 
jobs that run against it to use Neutron only in Ocata. I'm tracking that 
work here:


https://etherpad.openstack.org/p/removing-nova-network

In an effort to make this more of a global switch in Ocata, clarkb has 
proposed a change to devstack-gate to change the default value of 
DEVSTACK_GATE_NEUTRON from 0 (nova-net) to 1 (neutron):


https://review.openstack.org/#/c/392934/

There are conditions on that which are:

1. If a job definition in project-config sets DEVSTACK_GATE_NEUTRON 
explicitly, that's honored.


2. If not explicitly defined and the job is running against a stable 
branch, then nova-network is still the default.


There are a few jobs which are definitely nova-network specific, like 
some of the nova-net specific grenade jobs and the cells v1 job. Those 
are being explicitly handled in a series here:


https://review.openstack.org/#/c/392942/

So why should you care, you ask? First, thanks for asking. Second, as 
noted in the commit message to ^ there are several grenade jobs which 
don't explicitly define DEVSTACK_GATE_NEUTRON, like for designate and 
trove. With that change those grenade jobs will now start using neutron 
when upgrading from newton to ocata. This might work, it might not. If 
it does not work, and you know it won't work, please speak up now. 
Otherwise if things break we'll have to either (a) explicitly set 
DEVSTACK_GATE_NEUTRON=0 in those jobs or (b) cap them at stable/newton - 
either way those affected projects would have to sort out a path forward 
for continued upgrade testing in Ocata.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] Release announcements

2016-11-02 Thread Emilien Macchi
On Wed, Nov 2, 2016 at 12:33 PM, Thierry Carrez  wrote:
> Hi everyone,
>
> In Barcelona the release team has been discussing how to improve release
> announcements. Posting them on openstack-dev (for libs) and
> openstack-announce (for main services) has proven to be pretty noisy,
> especially for projects which publish lots of components, like OpenStack
> Puppet or OpenStack Ansible. This actively discouraged people from following
> openstack-announce, which was really not the goal.

sorry for that ;-)

> At the same time, we can't just stop making announcements. Some people
> (especially on the downstream side) still want to receive release
> announcements. And we still want to archive a trace of the release and
> provide a starting point for discussing immediate issues on a given
> release, especially for libraries.
>
> The proposed solution is to create a specific mailing-list for OpenStack
> release announcements (proposed name is "release-announces") where we'd
> post the automated release announcements. Only the release bot and
> release managers would be able to post to it. The "reply-to" field would
> be set to openstack-dev, in case someone wanted to start a thread about
> a given release. By default, it would be set to send in daily digest
> mode, to reduce noise and encourage people to subscribe to it.
>
> The -announce list would get back to low-noise, and be limited to
> highly-important announcements (one email for the final release, emails
> about events, elections...).
>
> Please let us know if you have comments or questions. We'll start
> implementing this plan next week if no objection is raised.

Excellent idea!

> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] release team plans for Ocata

2016-11-02 Thread Doug Hellmann
Last week at the summit the release team reviewed our notes from
Newton and worked on plans for changes to be implemented during
Ocata.

* Branch automation

  Now that tag management is handled through reviews on openstack/releases,
  we want to add the other major release-related task: branch
  management.  We will start with automating branch creation, and
  then if time permits tackle the workflow for closing a branch
  when it reaches its end-of-life deadline. For more details, see
  the spec: https://review.openstack.org/369643

* Release announcements

  Thierry has already described the plan for making release
  announcements less noisy in another thread to this list in
  http://lists.openstack.org/pipermail/openstack-dev/2016-November/106579.html

  We are also going to work on automating announcements for release
  candidates for projects using milestones.

* Updating upper-constraints.txt for a release

  When a library is released, the job that adds the tag also submits
  the patch to update upper-constraints.txt to allow the new version
  to be used. This almost always results in the jobs for that u-c
  patch failing, because the package does not actually exist yet.
  We will be moving the constraints update to its own job, which
  will run after the new package is uploaded to PyPI.

* Python 3

  In anticipation of having Python 3 support be a goal for Pike,
  we will start porting our automation scripts to run under Python
  3. This should also improve the reliability of some of the tools
  that work with names and release notes, since those may include
  unicode text.

* Recruiting more reviewers

  With the tagging process fully automated, it is easier for us to
  recruit more reviewers for the release team. We will be working
  with the stable-maint team initially, and then looking for other
  folks interested in being involved in release management.

* New project checklist

  We had a few technical issues with brand new big tent projects
  last cycle because the repositories had not been reconfigured
  after the team was accepted into the big tent. We will be starting
  a checklist of steps project teams need to go through to complete
  their transition into the big tent.

* Improving communication about the schedule

  Given several major holiday periods in the Ocata cycle, it will
  be more important than usual to communicate clearly about release
  freeze periods. We will be adding known freeze periods to the
  schedule page soon.

  We are also working on being able to publish the schedule as an
  ICS file that can be imported into your calendar application of
  choice.

* Decoupling releases from governance tags

  There are a set of tags defined in the governance repository that
  control the behavior for release tools. During Newton we had more
  projects changing their tags than we had previously expected, and
  that resulted in some delays and eventually removing some validation
  logic from the releases repository. During Ocata we will be moving
  the release and type tags out of the governance repository to the
  releases repository. This work won't start right away, because
  we need to assess the impact on other projects like the Foundation's
  project navigator web site.

That covers the major initiatives we have. There are several other
clean-up tasks that may be less visible outside of the team. The full
notes from those sessions are available in the etherpad:
https://etherpad.openstack.org/p/ocata-relmgt-plan

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr][magnum] Notes from Summit fishbowl session

2016-11-02 Thread Antoni Segura Puimedon
Hi magna and kuryrs!

Thank you all for joining last week's meetings. I am now writing a few
emails to keep persistent notes of what was talked about and discussed
in the Kuryr work sessions. In the Magnum joint session the points
were:

Kuryr - Magnum joint work session
=

Authentication
==

* Consensus on using Keystone trust tokens.
- We should closely follow the Keystone effort to scope the allowed
  actions per token to limit those to the minimal required set of verbs
  that the COE and Kuryr need.

* It was deemed unnecessary to pursue a proxying approach to access
  Neutron. This means VM applications should be able to reach Neutron and
  Keystone but the only source of credentials they should have is the
  Keystone tokens.


Tenancy and network topology


Two approaches should be made available to users:

Full Neutron networking
~~~

Under this configuration, containers running inside the nova instances
would get networking via the Neutron vlan-aware-VMs feature. This means the COE
driver (either kuryr-libnetwork or kuryr-kubernetes) would request a
Neutron subport for the container. In this way, there can be multiple
isolated networks running on worker nodes.

The concerns about this solution are the performance when starting
large numbers of containers and the latency introduced at container startup
due to going all the way to Neutron to request the subport.

Minimal Neutron networking
~~

In order to address the concerns with the 'Full Neutron networking'
approach, and as a trade-off between features and minimalism, under this
configuration the containers would all be in the same Neutron network as the
ports of their VMs.

The problem with this solution is that supporting multiple isolated networks,
as CNM and Kubernetes with network policy have, is quite complicated.


Regards,

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]: Instance creation and deletion metrics in ceilometer !

2016-11-02 Thread Maxime Belanger
Hi Raghunath,


We are using Almanach: https://github.com/openstack/almanach. I think your use 
case pretty much fits what this project does.
We implemented this as a replacement for Ceilometer to gather usage data on 
instances and volumes. We query it to do our billing calculations.
It is of course a less complete solution than CloudKitty + Gnocchi but suits 
our needs for now.

Maxime


From: Raghunath D 
Sent: October 25, 2016 5:32:13 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Ceilometer]: Instance creation and deletion 
metrics in ceilometer !

Hi ,

Can someone please suggest how to get instance notifications in ceilometer.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting



-Raghunath D/HYD/TCS wrote: -
To: openstack-dev@lists.openstack.org
From: Raghunath D/HYD/TCS
Date: 10/18/2016 08:01PM
Subject: [openstack-dev] [Ceilometer]: Instance creation and deletion metrics 
in ceilometer !

Hi ,

How can instance created and deleted information/samples be retrieved from 
ceilometer?
What entries should be there in pipeline.yaml to get instance deleted 
information?

I tried to have meters - "instance" in pipeline.yaml but it always gives active 
instance details,
and no details of deleted instances.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com

Experience certainty. IT Services
Business Solutions
Consulting


=-=-=
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement / resource providers ocata summit session recaps

2016-11-02 Thread Matt Riedemann
We had three design summit sessions related to the new placement service 
and resource providers work. Since they are all more or less related, 
I'm going to recap them in a single email.




The first session was a retrospective on the placement service work that 
happened in the Newton release. The full etherpad is here:


https://etherpad.openstack.org/p/ocata-nova-summit-placement-retrospective

We first talked about what went well, of which there were many things:

- There is a better shared understanding of the design and goals among 
more people in Nova.

- Computes in Newton are reporting their RAM/DISK/CPU inventory and usage.
- We have CI jobs.
- Jay did a nice job of using consistent terminology when discussing 
resource providers and the end goal for Newton so we could stay focused.
- Hangouts helped the team get unstuck at times when we were grinding 
toward feature freeze.
- The placement API has a clean WSGI design and REST interface that 
others are able to build onto easily.


We then talked about what didn't go so well, which included:

- Confusion around division of labor and when different chunks can be 
worked in parallel, and by whom.
- There was too much time spent on making the specs perfect and we 
needed to just start writing and reviewing code. This was especially 
evident when the client side (resource tracker) pieces started getting 
written that used the placement REST API and required changes to the API.
- At times there were key discussions/decisions that were not properly 
documented/communicated back to the wider team.
- There was a breakdown in communication at or after the midcycle about 
the separate placement DB which led to a revert late in the cycle.

- General burnout and frustration.
- Traps of working on long patch series with little review feedback 
early in the series or low-latency on reviews leading to wasted time.


From those discussions, we listed what we should keep doing or do 
differently:


- Write specs with less low-level detail, but if there is that level 
of detail, make sure to amend the spec later if there are changes once 
implemented.

- Use Hangouts when we get stuck.
- Document/communicate decisions/agreements/changes in direction in the 
mailing list.

- Encourage people to pair up for redundancy.
- Encourage early PoCs before building a long and potentially off the 
mark patch series.


There was also some general discussion about not moving specs to 
'implemented' until the spec is updated after the code is all approved. 
I was personally not sold on what was proposed for this, since I 
consider amending specs to be like writing documentation and CI tests - if 
you don't -2 the last change in the series to complete the blueprint, 
people have little incentive to actually do it and once their code is 
merged it's very hard to get them to do the ancillary tasks. I'm open to 
further discussing this idea though in case I missed the point.




The next session was about the quantitative side of resource providers. 
The full etherpad is here:


https://etherpad.openstack.org/p/ocata-nova-summit-resource-providers-quantitative

There were quite a few things in the etherpad and we didn't get to all 
of them, so this is a recap of what we did talk about.


- Custom resource classes

The code for this is moving along and being reviewed. There will be 
namespaces on the standard resource classes that nova provides. The 
resource tracker will create inventory/allocation records for the Ironic 
nodes. The Ironic inventory records will use the node.resource_class 
value as the custom resource class.


We still need to figure out what to do about mapping a single flavor to 
multiple node classes, but it might just be done with extra_specs. There 
will be upgrade impacts for this, however, if not properly mapped and 
the scheduler starts using the placement service.
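
For illustration only (the exact key format was still undecided at this
point, so treat these names as placeholders), such a mapping might look like
a flavor extra_specs entry requesting one unit of an Ironic node's custom
resource class:

    # hypothetical sketch; 'CUSTOM_BAREMETAL_GOLD' is a made-up value of
    # node.resource_class, and the 'resources:' key prefix is an assumption
    flavor_extra_specs = {
        'resources:CUSTOM_BAREMETAL_GOLD': '1',
    }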


- Microversions

Chris Dent has a patch up to add microversion support to the placement 
API and it's being reviewed.


- Nested resource providers

Jay has been working on code for this and has a design in mind. Jay and 
Ed did some whiteboarding in the hall and sorted out their differences 
on the design and have agreement on the way forward (which is Jay's 
nesting/tree model).


- Documenting the placement REST API

We didn't get into this at the summit, but in side discussions it's a 
TODO and right now we'll most likely handle this like we do for the 
compute api-ref.


- Top priorities for Ocata

1. The scheduler calling the placement API to get a list of resource 
providers. There are some specs and WIP code up that Sylvain is working 
on. Note that this is not going to involve the caching scheduler for 
now, we'll worry about that later.


2. Start handling shared storage. We need the resource tracker and/or an 
external script to create the resource provider / aggregate mapping and 
inventory/allocation records against shared DISK_GB inventories. The 
aggregates mapping 

Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Bob Ball
> > Oslo.privsep seems to launch a daemon process and set caps for this
> > daemon; but for XenAPI, there is no need to spawn the daemon.
> 
> I guess I'm lacking some context... If you don't need special rights, why use
> a rootwrap-like thing at all? Why go through a separate process to call into
> XenAPI? Why not call in directly from Neutron code?

It does not need to go through a separate process at all, or need special 
rights - see the prototype code at https://review.openstack.org/#/c/390931/ 
which started this thread, which is directly calling from Neutron code.

I guess the argument is that we are trying to run a "configure something" 
operation which in some cases is privileged, on the same host as is running 
the Neutron code itself; hence the easiest way to do that is to use a 
rootwrap.  To me, the very 
use of a "rootwrap" or "privsep" implies that we're running the commands in the 
same host.

Arguably we should have a "per logical component" wrapper - in this case the 
network / OVS instance that's being managed - as each component could be in a 
different location.
Mounting a loopback device (which Nova has needed to do in the past) clearly 
needs a rootwrap that runs in the same host as Nova, but when managing the OVS 
in XenServer's dom0 it needs a similar mechanism to what we are proposing for 
Neutron.

For reference, Nova has a XenAPI session similar to the above and will invoke 
plugins that exist in Dom0 directly, to make the required modifications.  This 
is similar in approach to the prototype code above.

The current XenAPI rootwrap for Neutron[1] is stupidly inefficient as it was 
based on the original rootwrap concept (which neutron replaced with the daemon 
mode to improve performance).  As this is a separate executable, called once 
for each command, it will create a new session with each call.  There are (as 
always) multiple ways to fix this:

1) Get Neutron to call XenAPI directly rather than trying to use a daemon - the 
session management would move from neutron-rootwrap-xen-dom0 into 
xen_rootwrap_client.py (perhaps this could be better named) 
2) Get Neutron to call a local rootwrap daemon (as per the current 
implementation) which maintains a pool of connections and can efficiently call 
through to XenAPI
3) Extend oslo.rootwrap (and I presume also privsep) to know that some commands 
can run in different places, and put the logic for connecting to those 
different places in there.

We did have a prototype implementation of #2 but it was messy, and #1 seemed 
architecturally cleaner.

Bob 

[1] 
http://git.openstack.org/cgit/openstack/neutron/tree/bin/neutron-rootwrap-xen-dom0

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Chris Friesen

On 11/02/2016 05:26 AM, Alex Xu wrote:



2016-11-02 16:26 GMT+08:00 Sylvain Bauza >:



#2 all those claim operations don't trigger an allocation request to the
placement API, while the regular boot operation does (hence your bug 
report).


Yes, except for booting a new instance, other claims won't trigger an
allocation request to the placement API.


We should normally go through the scheduler for 
resize/migration/live-migration/evacuate, so wouldn't it make sense to do some 
sort of allocation request?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] Release announcements

2016-11-02 Thread Davanum Srinivas
On Wed, Nov 2, 2016 at 1:21 PM, Doug Hellmann  wrote:
> Excerpts from Thierry Carrez's message of 2016-11-02 17:33:51 +0100:
>> Hi everyone,
>>
>> In Barcelona the release team has been discussing how to improve release
>> announcements. Posting them on openstack-dev (for libs) and
>> openstack-announce (for main services) has proven to be pretty noisy,
>> especially for projects which publish lots of components, like OpenStack
>> Puppet or OpenStack Ansible. This actively discouraged people from following
>> openstack-announce, which was really not the goal.
>>
>> At the same time, we can't just stop making announcements. Some people
>> (especially on the downstream side) still want to receive release
>> announcements. And we still want to archive a trace of the release and
>> provide a starting point for discussing immediate issues on a given
>> release, especially for libraries.
>>
>> The proposed solution is to create a specific mailing-list for OpenStack
>> release announcements (proposed name is "release-announces") where we'd
>
> How about either "release-announce" or "release-announcements"?

slightly prefer "release-announce" over "release-announcements" to be
in line with openstack-announce

Thanks,
Dims

>
>> post the automated release announcements. Only the release bot and
>> release managers would be able to post to it. The "reply-to" field would
>> be set to openstack-dev, in case someone wanted to start a thread about
>> a given release. By default, it would be set to send in daily digest
>> mode, to reduce noise and encourage people to subscribe to it.
>>
>> The -announce list would get back to low-noise, and be limited to
>> highly-important announcements (one email for the final release, emails
>> about events, elections...).
>>
>> Please let us know if you have comments or questions. We'll start
>> implementing this plan next week if no objection is raised.
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Doug Hellmann
Excerpts from Jianghua Wang's message of 2016-11-02 15:52:22 +:
> Thanks Doug. Please see my responses inline.
> 
> Jianghua
> 
> -Original Message-
> From: Doug Hellmann [mailto:d...@doughellmann.com] 
> Sent: Wednesday, November 2, 2016 9:31 PM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [neutron][oslo] proposal to resolve a rootwrap 
> problem for XenServer
> 
> Excerpts from Jianghua Wang's message of 2016-11-02 04:14:48 +:
> > Ihar and Tony,
> >  Thanks for the input.
> >  In order to run command in dom0, it uses XenAPI to create a session which 
> > can be used to remotely call a plugin - netwrap which is located in dom0. 
> > The netwrap plugin is executed as root. It will validate the command basing 
> > on the allowed command list and execute it.
> > The source code for netwrap is in neutron project: 
> > https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/d
> > rivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/netwrap
> > 
> > So at least we can see there are two dependencies: 
> > 1. it depends on XenAPI which is XenServer specific.
> > 2. it depends on Neutron's plugin netwrap.
> > Is it acceptable to add such dependencies in this common library of 
> > oslo.rootwrap? 
> 
> Why would they need to be dependencies of oslo.rootwrap? They are 
> dependencies of the driver, not the base library, right?
> 
>  With a second thought, I think we can pass the plugin name 
> netwrap as a parameter to the rootwrap; so maybe not a dependency. But if we 
> host the XenAPI session creation in oslo.rootwrap, I think we should import 
> XenAPI in oslo.rootwrap. Then it is a dependency in the base library, isn't 
> it?

I don't think we want to build Xen-specific features or dependencies
into any of the Oslo libraries unless we absolutely can't avoid it.

> 
> > And most of the code in oslo.rootwrap is to:
> > 1. spawn a daemon process and maintain the connection between the 
> > client and daemon; 2. filter commands in the daemon process.
> > But both can't be re-used for this XenAPI/XenServer case as the daemon 
> > process is already running in dom0; the command filtering is done in dom0's 
> > netwrap plugin. In order to hold this in oslo.rootwrap, it requires some 
> refactoring work to make it look reasonable. Is it worth doing that? 
> Especially considering it has been determined to replace oslo.rootwrap with 
> oslo.privsep?
> > 
> Maybe it's a good option to cover this dom0 case in oslo.privsep at the 
> beginning. But it becomes more complicated. Maybe we can run a daemon 
> process in dom0 with the privileges set properly and listening on a 
> dedicated tcp port. But that's much different from the initial privsep 
> design [1]. And also it makes the mechanism very different from the current 
> XenServer OpenStack which is using the XenAPI plugin. Anyway, I think we 
> have a long way to go to a good solution to cover it in privsep.
> 
> What sort of refactoring do you have in mind for privsep? I could see 
> something analogous to the preexec_fn argument to subprocess.Popen() to let 
> the XenServer driver ensure that its privileged process is running in dom0.
> 
> Sorry, I don't have a clear idea on refactoring in privsep now. 
> It seems privsep attempts to create a daemon process and set caps for this 
> daemon. But for XenServer/XenAPI, the daemon and its caps seem 
> useless, as it sends all commands to a common XAPI daemon running in 
> dom0. No additional daemon is needed. TBH I don't know how to apply caps 
> here. But to resolve the current issue, what I'm thinking is to 
> create a XenAPI session and keep it in the rootwrap's client; then each 
> command will be passed to dom0 via this same session.

OK. I think Thierry's question in the other thread (about why the
XenAPI calls have to be made from a privileged process at all) is
useful for thinking about any API changes. Let's keep the discussion
over there to avoid drift or confusion.

Doug

> 
> Doug
> 
> > 
> > But this issue is urgent for XenAPI/XenServer OpenStack. Please see the 
> > details described in the bug [2]. So I still think the PoC is a better 
> > option, unless both oslo and Neutron guys agree it's acceptable to refactor 
> > oslo.rootwrap and allow the above dependencies to be introduced to this 
> > library.
> > 
> > [1]https://specs.openstack.org/openstack/oslo-specs/specs/liberty/priv
> > sep.html [2] https://bugs.launchpad.net/neutron/+bug/1585510
> > 
> > Regards,
> > Jianghua
> > 
> > On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
> > 
> > > I suggested in the bug and the PoC review that neutron is not the 
> > > right project to solve the issue. Seems like oslo.rootwrap is a 
> > > better place to maintain privilege management code for OpenStack. 
> > > Ideally, a solution would be found in scope of the library that 
> > > would not require any changes per-project.
> > 
> > With the change of 

Re: [openstack-dev] [all][i18n] how to indicate non-translatable identifiers in translatable strings?

2016-11-02 Thread Doug Hellmann
Excerpts from Brian Rosmaita's message of 2016-11-02 16:34:45 +:
> This issue came up during a code review; I've asked around a bit but
> haven't been able to find an answer.
> 
> Some of the help output for utility scripts associated with Glance aren't
> being translated, so Li Wei put up a patch to fix this [0], but there are
> at least two problematic cases.
> 
> Case 1:
> parser.add_option('-S', '--os_auth_strategy', dest="os_auth_strategy",
> metavar="STRATEGY",
> help=_("Authentication strategy (keystone or noauth)."))
> 
> For that one, 'keystone' and 'noauth' are identifiers, so we don't want
> them translated.  Would putting single quotes around them like this be
> sufficient to indicate they shouldn't be translated?  For example,
> 
> help=_("Authentication strategy ('keystone' or 'noauth').")
> 
> 
> Andreas Jaeger mentioned that maybe we could use a "translation comment"
> [1].  I guess we'd do something like:
> 
> # NOTE: do not translate the stuff in single quotes
> help=_("Authentication strategy ('keystone' or 'noauth').")

The ability to pass comments to the translators like that seems
really useful, if it would work with our tools.

It seems like things we put in quotes should not be translated, by
convention.

> What are other people doing for this?
> 
> Case 2:
> We've got a big block of usage text, roughly
> 
> usage = _("""
> %prog  [options] [args]
> 
> Commands:
> keyword1what it does
> keyword2what it does
> ...
> keyword8what it does
> """)
> 
> We don't want the keywords to be translated, but I'm not sure how to
> convey this to the translators.

This is a case where using quotes won't work. Using a different
tool to build the help text (argparse instead of optparse), or even
just building the text from parts inline (put the parts you do or
do not want translated into separate variables and then combine the
results like a template) might work. Both add a bit of complexity.
The second option doesn't require completely rewriting the CLI
processing logic.
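
For instance, a minimal sketch of that second option (the keyword names and
the gettext alias here are placeholders, not the real Glance commands):

    # assuming the usual alias: from gettext import gettext as _
    commands = [
        ('keyword1', _('what it does')),
        ('keyword2', _('what it does')),
    ]
    usage = _('%prog <command> [options] [args]') + '\n\nCommands:\n'
    usage += '\n'.join('    %-12s%s' % (kw, desc) for kw, desc in commands)

The keywords never enter the translation catalogs, while each description
remains an ordinary translatable string.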

> 
> Thanks in advance for your help,
> brian
> 
> 
> [0] https://review.openstack.org/#/c/367795/8
> [1] http://babel.pocoo.org/en/latest/messages.html#translator-comments
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][neutron][midonet] midonet liberty gate failure

2016-11-02 Thread Ihar Hrachyshka

Hi YAMAMOTO (and other midokura folks),

I spotted that unit tests in the branch are failing due to upper constraints
not being applied. So I backported the fix as:
https://review.openstack.org/#/c/392698/ Sadly, it does not pass because
tempest for midonet v2 fails:


http://logs.openstack.org/98/392698/1/check/gate-tempest-dsvm-networking-midonet-v2/9494c9c/logs/devstacklog.txt.gz#_2016-11-02_15_10_13_949

It looks like the midonet SDN controller is misbehaving.

Would you mind taking it from there and proposing the needed patches to pass
the gate for the patch?


Thanks,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Thierry Carrez
Jianghua Wang wrote:
> Is Neutron ready to switch from oslo.rootwrap to oslo.privsep?

You'll have to ask neutron-core for an updated status... I think it's
ready, but as I mentioned in my other email the current review
introducing it is currently stalled.

> Oslo.privsep seems to launch a daemon process and set caps for this 
> daemon; but for XenAPI, there is no need to spawn the daemon. All of the 
> commands to be executed are sent to the common dom0 XAPI daemon (which will 
> invoke a dedicated plugin to execute the commands). So I'm confused about 
> how to apply the privileged.entrypoint function. Could you help to share more 
> details? Thanks very much.

I guess I'm lacking some context... If you don't need special rights,
why use a rootwrap-like thing at all? Why go through a separate process
to call into XenAPI? Why not call in directly from Neutron code?

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][release] Release announcements

2016-11-02 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2016-11-02 17:33:51 +0100:
> Hi everyone,
> 
> In Barcelona the release team has been discussing how to improve release
> announcements. Posting them on openstack-dev (for libs) and
> openstack-announce (for main services) has proven to be pretty noisy,
> especially for projects which publish lots of components, like OpenStack
> Puppet or OpenStack Ansible. This actively discouraged people from following
> openstack-announce, which was really not the goal.
> 
> At the same time, we can't just stop making announcements. Some people
> (especially on the downstream side) still want to receive release
> announcements. And we still want to archive a trace of the release and
> provide a starting point for discussing immediate issues on a given
> release, especially for libraries.
> 
> The proposed solution is to create a specific mailing-list for OpenStack
> release announcements (proposed name is "release-announces") where we'd

How about either "release-announce" or "release-announcements"?

> post the automated release announcements. Only the release bot and
> release managers would be able to post to it. The "reply-to" field would
> be set to openstack-dev, in case someone wanted to start a thread about
> a given release. By default, it would be set to send in daily digest
> mode, to reduce noise and encourage people to subscribe to it.
> 
> The -announce list would get back to low-noise, and be limited to
> highly-important announcements (one email for the final release, emails
> about events, elections...).
> 
> Please let us know if you have comments or questions. We'll start
> implementing this plan next week if no objection is raised.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] OpenStack-dev Digest, Vol 55, Issue 6

2016-11-02 Thread Farhad Sunavala
If packets are making it to the SF but not making it out, it means the SFC has
done its part. Things to check:

1. Check that the SF VM has routing enabled:
   root@fs-10-145-106-2:~# sysctl net.ipv4.ip_forward
   net.ipv4.ip_forward = 1
2. Check the security group settings for the SF VM.
3. Is port security enabled? If so, you probably need it disabled for the SF
   VM.

Farhad.

Message: 5
Date: Wed, 2 Nov 2016 13:40:21 +0100
From: Alioune 
To: "OpenStack Development Mailing List (not for usage questions)"
    
Subject: Re: [openstack-dev] [networking-sfc][devstack][mitaka] Chain
    doesn't work
Message-ID:
    
Content-Type: text/plain; charset="utf-8"

Any suggestions?

On Monday, 24 October 2016, Alioune  wrote:

> Hi all,
>
> I'm trying to implement a service chain in OpenStack using networking-sfc
> (stable/mitaka) and OVS 2.5.90
>
>
> The following is the architecture I used:
>
> SRC                         DST
>  |                           |
>  =========== br-int ===========
>               |
>              SF1
> SF1: 55.55.55.3
> SRC: 55.55.55.4
> DST: 55.55.55.5
>
> I can create port-pairs, port-pair-group, classifier and chain with these
> commands:
>
> neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix
> 55.55.55.4/32  --logical-source-port 0009034f-4c39-4cbf-be7d-fcf82dad024c
> --protocol icmp  FC1
> neutron port-pair-create --ingress=p1 --egress=p1 PP1
> neutron port-pair-group-create --port-pair PP1 PG1
> neutron port-chain-create --port-pair-group PG1 --flow-classifier FC1 PC1
>
> I could ping from SRC to DST before setting the chain, but after creating
> the chain ping doesn't work.
>
> ICMP echo request packets arrive at the SF1 port but it doesn't send the
> packets back to allow them to reach their destination DST (see output
> below).
>
> The Opendaylight/SFC project uses NSH-aware service functions (SFs) that
> send packets back to the chain after analyzing them. I would like to know:
>
> - How does networking-sfc configure the SF to send packets back to the
> chain, as seen in some of your presentations?
> - What's wrong in my configuration (see commands and ovs-ofctl output
> below)? I've followed the main steps described in your wiki page.
>
> Best Regards,
>
>
> vagrant@vagrant-ubuntu-trusty-64:~$ neutron port-list
> +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
> | id                                   | name | mac_address       | fixed_ips                                                                            |
> +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
> | 0009034f-4c39-4cbf-be7d-fcf82dad024c |      | fa:16:3e:dd:16:f7 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.4"}    |
> | 082e896d-5982-458c-96e7-0dd372d3d7d9 | p1   | fa:16:3e:90:b4:67 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.3"}    |
> | 2ad109e4-42a8-4554-b884-a32344e91036 |      | fa:16:3e:74:9a:fa | {"subnet_id": "3cf6eb27-7258-4252-8f3d-b6f9d27c948b", "ip_address": "192.168.105.2"} |
> | 51f055c0-ff4d-47f4-9328-9a0d7ca204f3 |      | fa:16:3e:da:f9:93 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.1"}    |
> | 656ad901-2bc0-407a-a581-da955ecf3b59 |      | fa:16:3e:7f:44:01 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.2"}    |
> | b1d14a4f-cde6-4c44-b42e-0f0466dba32a |      | fa:16:3e:a6:c6:35 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.5"}    |
> +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
>
> vagrant@vagrant-ubuntu-trusty-64:~$ ifconfig |grep 082e896d
> qbr082e896d-59 Link encap:Ethernet  HWaddr b6:96:27:fa:ab:af
> qvb082e896d-59 Link encap:Ethernet  HWaddr b6:96:27:fa:ab:af
> qvo082e896d-59 Link encap:Ethernet  HWaddr 7e:1a:7b:7d:09:df
> tap082e896d-59 Link encap:Ethernet  HWaddr fe:16:3e:90:b4:67
>
> vagrant@vagrant-ubuntu-trusty-64:~$ sudo tcpdump -i tap082e896d-59 icmp
> tcpdump: WARNING: tap082e896d-59: no IPv4 address assigned
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on tap082e896d-59, link-type EN10MB (Ethernet), capture size
> 65535 bytes
> 10:51:10.229674 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 61, length 64
> 10:51:11.230318 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 62, length 64
> 10:51:12.233451 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 63, length 64
> 10:51:13.234496 IP 55.55.55.4 > 55.55.55.5: ICMP echo 

[openstack-dev] [all][i18n] how to indicate non-translatable identifiers in translatable strings?

2016-11-02 Thread Brian Rosmaita
This issue came up during a code review; I've asked around a bit but
haven't been able to find an answer.

Some of the help output for utility scripts associated with Glance isn't
being translated, so Li Wei put up a patch to fix this [0], but there are
at least two problematic cases.

Case 1:
parser.add_option('-S', '--os_auth_strategy', dest="os_auth_strategy",
metavar="STRATEGY",
help=_("Authentication strategy (keystone or noauth)."))

For that one, 'keystone' and 'noauth' are identifiers, so we don't want
them translated.  Would putting single quotes around them like this be
sufficient to indicate they shouldn't be translated?  For example,

help=_("Authentication strategy ('keystone' or 'noauth').")


Andreas Jaeger mentioned that maybe we could use a "translation comment"
[1].  I guess we'd do something like:

# NOTE: do not translate the stuff in single quotes
help=_("Authentication strategy ('keystone' or 'noauth').")


What are other people doing for this?

Case 2:
We've got a big block of usage text, roughly

usage = _("""
%prog  [options] [args]

Commands:
keyword1what it does
keyword2what it does
...
keyword8what it does
""")

We don't want the keywords to be translated, but I'm not sure how to
convey this to the translators.

Thanks in advance for your help,
brian


[0] https://review.openstack.org/#/c/367795/8
[1] http://babel.pocoo.org/en/latest/messages.html#translator-comments



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][release] Release announcements

2016-11-02 Thread Thierry Carrez
Hi everyone,

In Barcelona the release team has been discussing how to improve release
announcements. Posting them on openstack-dev (for libs) and
openstack-announce (for main services) has proven to be pretty noisy,
especially for projects which publish lots of components, like OpenStack
Puppet or OpenStack Ansible. This actively discouraged people from following
openstack-announce, which was really not the goal.

At the same time, we can't just stop making announcements. Some people
(especially on the downstream side) still want to receive release
announcements. And we still want to archive a trace of the release and
provide a starting point for discussing immediate issues on a given
release, especially for libraries.

The proposed solution is to create a specific mailing-list for OpenStack
release announcements (proposed name is "release-announces") where we'd
post the automated release announcements. Only the release bot and
release managers would be able to post to it. The "reply-to" field would
be set to openstack-dev, in case someone wanted to start a thread about
a given release. By default, it would be set to send in daily digest
mode, to reduce noise and encourage people to subscribe to it.

The -announce list would get back to low-noise, and be limited to
highly-important announcements (one email for the final release, emails
about events, elections...).

Please let us know if you have comments or questions. We'll start
implementing this plan next week if no objection is raised.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] No cellsv2 meeting today

2016-11-02 Thread Dan Smith
Hi all,

A bunch of the usual participants cannot attend the CellsV2 meeting
today, and the ones that can just discussed it last week face-to-face in
Barcelona. So, I'm going to declare it canceled for today for lack of
critical mass.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight] No IRC meeting Nov 3rd

2016-11-02 Thread McLellan, Steven
Hi,

Since some people are still traveling back from the summit and I'm out of the 
office I'm canceling this week's IRC meeting. We'll resume normal service next 
week. Apologies for the late notice.

Steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Function as a service in OpenStack

2016-11-02 Thread Clint Byrum
Excerpts from Lingxian Kong's message of 2016-11-02 15:20:45 +1300:
> Hi, all,
> 
> Recently when I was talking with some customers of our OpenStack based
> public cloud, some of them were expecting to see a service similar to AWS
> Lambda in the OpenStack ecosystem (so such a service could be invoked by
> Heat, Mistral, Swift, etc.).
> 
> Coincidentally, I happened to see an introduction of the OpenWhisk project
> by IBM guys at the Barcelona Summit. The demo was great and I was even more
> excited to know it was open sourced, but after checking, I felt a little bit
> frustrated: most of the core part of the code was written in Scala, so it
> sets a high bar for me (yeah, I'm using Python) to learn and understand.
> 
> So I came here to ask if there are people who are interested in the
> serverless area as I am, or have the same requirements as our customers?
> Does it deserve a new project that complies with OpenStack rules and
> conventions? Is there any chance that people could join together for the
> implementation?
> 

I don't have answers to these questions, but I'd ask:

* Does OpenWhisk have a significant user base?

* Do the goals of OpenWhisk run parallel to the goals of OpenStack?

* Can any OpenStack operator deploy OpenWhisk and immediately begin
  providing serverless to their users?

The more "yes" answers, the more reason there is to simply promote
OpenWhisk as a great choice for our users.

However, if they're all "no", then it would be good to start a new serverless
project. You can probably do it under the OpenStack umbrella, though
IMO, this is one of those things that can just be standalone + keystone
auth.. there's no need for it to be "inside" the cloud.
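
For what it's worth, a minimal sketch of that "standalone + keystone auth"
shape, using keystonemiddleware in front of a plain WSGI app; all the config
values here are placeholders:

    from keystonemiddleware import auth_token

    def api(environ, start_response):
        # auth_token has already validated the token and injected headers
        user = environ.get('HTTP_X_USER_NAME', 'unknown')
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [('hello %s' % user).encode('utf-8')]

    conf = {'auth_uri': 'http://keystone:5000',   # placeholder endpoint
            'auth_url': 'http://keystone:5000'}
    app = auth_token.AuthProtocol(api, conf)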

Personally, I hope all three answers are "yes", and you can find it in
your heart to forgive the use of Scala, for the users' sake.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][oslo] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Jianghua Wang
Thanks Doug. Please see my responses inline.

Jianghua

-Original Message-
From: Doug Hellmann [mailto:d...@doughellmann.com] 
Sent: Wednesday, November 2, 2016 9:31 PM
To: openstack-dev 
Subject: Re: [openstack-dev] [neutron][oslo] proposal to resolve a rootwrap 
problem for XenServer

Excerpts from Jianghua Wang's message of 2016-11-02 04:14:48 +:
> Ihar and Tony,
>  Thanks for the input.
>  In order to run command in dom0, it uses XenAPI to create a session which 
> can be used to remotely call a plugin - netwrap which is located in dom0. The 
> netwrap plugin is executed as root. It will validate the command basing on 
> the allowed command list and execute it.
> The source code for netwrap is in neutron project: 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/d
> rivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/netwrap
> 
> So at least we can see there are two dependencies: 
> 1. it depends on XenAPI which is XenServer specific.
> 2. it depends on Neutron's plugin netwrap.
> Is it acceptable to add such dependencies in this common library of 
> oslo.rootwrap? 

Why would they need to be dependencies of oslo.rootwrap? They are dependencies 
of the driver, not the base library, right?

 With a second thought, I think we can pass the plugin name netwrap 
as a parameter to the rootwrap; so maybe not a dependency. But if we host the 
XenAPI session creation in oslo.rootwrap, I think we should import XenAPI in 
oslo.rootwrap. Then it is a dependency in the base library, isn't it?

> And most of the code in oslo.rootwrap is to:
> 1. spawn a daemon process and maintain the connection between the 
> client and daemon; 2. filter commands in the daemon process.
> But both can't be re-used for this XenAPI/XenServer case as the daemon 
> process is already running in dom0; the command filtering is done in dom0's 
> netwrap plugin. In order to hold this in oslo.rootwrap, it requires some 
> refactoring work to make it look reasonable. Is it worth doing that? 
> Especially considering it has been determined to replace oslo.rootwrap with 
> oslo.privsep?
> 
> Maybe it's a good option to cover this dom0 case in oslo.privsep at the 
> beginning. But it becomes more complicated. Maybe we can run a daemon process 
> in dom0 with the privileges set properly and listening on a dedicated tcp 
> port. But that's much different from the initial privsep design [1]. And 
> also it makes the mechanism very different from the current XenServer 
> OpenStack which is using the XenAPI plugin. Anyway, I think we have a long 
> way to go to a good solution to cover it in privsep.

What sort of refactoring do you have in mind for privsep? I could see something 
analogous to the preexec_fn argument to subprocess.Popen() to let the XenServer 
driver ensure that its privileged process is running in dom0.

Sorry, I don't have a clear idea on refactoring in privsep now. It 
seems privsep attempts to create a daemon process and set caps for this daemon. 
But for XenServer/XenAPI, the daemon and its caps seem useless, as it 
sends all commands to a common XAPI daemon running in dom0. No additional 
daemon is needed. TBH I don't know how to apply caps here. But to 
resolve the current issue, what I'm thinking is to create a XenAPI session and 
keep it in the rootwrap's client; then each command will be passed to dom0 via 
this same session.

Doug

> 
> But this issue is urgent for XenAPI/XenServer OpenStack. Please see the 
> details described in the bug [2]. So I still think the PoC is a better 
> option, unless both oslo and Neutron guys agree it's acceptable to refactor 
> oslo.rootwrap and allow the above dependencies to be introduced to this 
> library.
> 
> [1]https://specs.openstack.org/openstack/oslo-specs/specs/liberty/priv
> sep.html [2] https://bugs.launchpad.net/neutron/+bug/1585510
> 
> Regards,
> Jianghua
> 
> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
> 
> > I suggested in the bug and the PoC review that neutron is not the 
> > right project to solve the issue. Seems like oslo.rootwrap is a 
> > better place to maintain privilege management code for OpenStack. 
> > Ideally, a solution would be found in scope of the library that 
> > would not require any changes per-project.
> 
> With the change of direction from oslo.rootwrap to oslo.privsep I doubt that 
> there is scope to land this in oslo.rootwrap.
> 
> Yours Tony.
> 
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Tuesday, November 01, 2016 7:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] proposal to resolve a rootwrap 
> problem for XenServer
> 
> Jianghua Wang  wrote:
> 
> > Hi Neutron guys,
> >
> > I’m trying to explain a problem with the XenServer rootwrap and give 
> > a proposal to resolve it. I need some input on how to proceed 

[openstack-dev] [new][oslo] oslo.config 3.19.0 release (ocata)

2016-11-02 Thread no-reply
We are thrilled to announce the release of:

oslo.config 3.19.0: Oslo Configuration API

This release is part of the ocata release series.

The source is available from:

http://git.openstack.org/cgit/openstack/oslo.config

Download the package from:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

Changes in oslo.config 3.18.0..3.19.0
-

0be235e Fixup list types handling tuples
2096e72 Updated from global requirements
33018ed [TrivialFix] Replace 'assertTrue(a in b)' with 'assertIn(a, b)'


Diffstat (except docs and test files)
-

oslo_config/types.py| 17 -
test-requirements.txt   |  2 +-
4 files changed, 53 insertions(+), 48 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 4795629..b3b9149 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -17 +17 @@ oslotest>=1.10.0 # Apache-2.0
-coverage>=3.6 # Apache-2.0
+coverage>=4.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Jianghua Wang
Thanks Thierry.
Is Neutron ready to switch from oslo.rootwrap to oslo.privsep?
Oslo.privsep seems to launch a daemon process and set caps for this daemon; 
but for XenAPI, there is no need to spawn the daemon. All of the commands to be 
executed are sent to the common dom0 XAPI daemon (which will invoke a dedicated 
plugin to execute the commands). So I'm confused about how to apply the 
privileged.entrypoint function. Could you help to share more details? Thanks 
very much.

Jianghua

-Original Message-
From: Thierry Carrez [mailto:thie...@openstack.org] 
Sent: Wednesday, November 2, 2016 10:06 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem 
for XenServer

Ihar Hrachyshka wrote:
> Tony Breeds  wrote:
> 
>> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
>>
>>> I suggested in the bug and the PoC review that neutron is not the 
>>> right project to solve the issue. Seems like oslo.rootwrap is a 
>>> better place to maintain privilege management code for OpenStack. 
>>> Ideally, a solution would be found in scope of the library that 
>>> would not require any changes per-project.
>>
>> With the change of direction from oslo.rootwrap to oslo.privsep I 
>> doubt that there is scope to land this in oslo.rootwrap.
> 
> It may take a while for projects to switch to caps for privilege 
> separation.

oslo.privsep doesn't require projects to switch to caps (just that you rewrite 
the commands you call in Python) and can be done incrementally (while keeping 
rootwrap around for not-yet-migrated stuff)...

> It may be easier to unblock xen folks with a small enhancement in 
> oslo.rootwrap scope and handle transition to oslo.privsep on a 
> separate schedule. I would like to hear from oslo folks on where 
> alternative hypervisors fit in their rootwrap/privsep plans.

Like Tony said at this point new features are added to oslo.privsep rather than 
oslo.rootwrap. In this specific case the most forward-looking solution (and 
also best performance and security) would be to write a Neutron 
@privileged.entrypoint function to call into XenAPI and cache the connection.

https://review.openstack.org/#/c/155631 failed to land in Newton, would be 
great if someone could pick it up (maybe a smaller version to introduce privsep 
first, then migrate commands one by one).

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck

On 02 Nov 2016, at 15:15, Ben Swartzlander 
> wrote:

On 11/02/2016 06:23 AM, Arne Wiebalck wrote:
Hi Valeriy,

I wasn’t aware, thanks!

So, if each driver exposes the storage_protocols it supports, would it be 
sensible to have
manila-ui check the extra_specs for this key and limit the protocol choice for 
a given
share type to the supported protocols (in order to avoid that the user tries to 
create
incompatible type/protocol combinations)?

This is not possible today, as any extra_specs related to protocols are hidden 
from normal API users. It's possible to make sure the share type called 
"nfs_shares" always goes to a backend that supports NFS, but it's not possible 
to programmatically know that in a client, and therefore it's not possible to 
build the smarts into the UI. We intend to fix this though, as there is no good 
reason to keep that information hidden.

I see, thanks.

Concerning the workaround for bug/1622732: Would you agree that configuring 
protocol/type
tuples (rather than only protocols) would be a better solution?

Cheers,
 Arne


Thanks again!
 Arne


On 02 Nov 2016, at 10:00, Valeriy Ponomaryov 
> wrote:

Hello, Arne

Each share driver has a capability called "storage_protocol". So, for the case you 
describe, you should just define such an extra spec in your share type that 
matches the value reported by the desired backend[s].

That is the purpose of extra specs in share types: you (as cloud admin) define 
the connection yourself, whether it is strong or not.
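
For example, with the manila CLI it could look like this (the type name and
spec value are just examples, and exact flags can differ between releases):

    manila type-create nfs_shares false
    manila type-key nfs_shares set storage_protocol=NFS

The scheduler will then only place shares of that type on backends whose
drivers report a matching storage_protocol capability.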

Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck 
> wrote:
Hi,

We’re preparing the use of Manila in production and noticed that there seems to 
be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols. If that’s true, 
wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per type, for instance 
as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice to supported 
protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Dustin Schoenbrun
+1

I considered Goutham a core reviewer long before he was actually in
contention to be one. This is a fantastic addition to the community and
the project. It's very well deserved.

Dustin Schoenbrun
OpenStack Quality Engineer
Red Hat, Inc.
dscho...@redhat.com

On Wed, Nov 2, 2016 at 9:54 AM, Ben Swartzlander 
wrote:

> +1
>
> -Ben
>
>
>
> On 11/02/2016 08:09 AM, Tom Barron wrote:
>
>> I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
>> manila core team.  This is a clear case where he's already been doing
>> the review work, excelling both qualitatively and quantitatively, as
>> well as being a valuable committer to the project.  Goutham deserves to
>> be core and we need the additional bandwidth for the project.  He's
>> treated as a de facto core by the community already.  Let's make it
>> official!
>>
>> -- Tom Barron
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][routed-network] Host doesn't connected any segments when creating port

2016-11-02 Thread Miguel Lavalle
Hi Zhi,

In routed networks, the routing among the segments has to be provided by a
router external to Neutron. It has to be provided by the deployment's
networking infrastructure. In the summit presentation you watched, I used
this Vagrant environment for the demo portion:
https://github.com/miguellavalle/routednetworksvagrant. Specifically, look
here:
https://github.com/miguellavalle/routednetworksvagrant/blob/master/Vagrantfile#L188.
As you can see, I create a VM, "iprouter", to act as the router between the
two segments I use in the demo: one segment on vlan tag 2016 in physnet1
and another segment on vlan tag 2016 in physnet2. Please also look here how
I enable the routing in the "iprouter" Linux:
https://github.com/miguellavalle/routednetworksvagrant/blob/master/provisioning/setup-iprouter.sh
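
For reference, the heart of that setup is just turning the VM into an IPv4
router; a minimal equivalent of the relevant bits (interface names and
gateway addresses here are illustrative, not copied from the script) is:

    # enable IPv4 forwarding so the VM routes between the segments
    sudo sysctl -w net.ipv4.ip_forward=1
    # give the router one leg on each segment's subnet
    sudo ip addr add 10.1.0.1/24 dev eth1
    sudo ip addr add 10.1.1.1/24 dev eth2

Instances on each segment then use the corresponding address as their
default gateway.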

Of course, in a real deployment you would use a hardware router connected
to all the network's segments

Hope this helps

Miguel

On Tue, Nov 1, 2016 at 4:42 AM, zhi  wrote:

> Hi, shihanzhang and Neil, Thanks for your comments.
>
> From your comments, I think a Neutron router or the physical network should
> provide routing between these two subnets, right? Is my understanding correct?
> I tried to connect these two subnets with a Neutron router but I met a
> strange problem. I did some operations like this:
>
> stack@devstack:~$ neutron net-list
> +--------------------------------------+----------+-----------------------------------------------------+
> | id                                   | name     | subnets                                             |
> +--------------------------------------+----------+-----------------------------------------------------+
> | 6596da30-d7c6-4c39-b87c-295daad44123 | multinet | a998ac2b-2f50-44f1-9c1a-f4f3684ef63c 10.1.1.0/24    |
> |                                      |          | 26bcdfd3-6393-425e-963e-1ace6ef74e0c 10.1.0.0/24    |
> | 662de35c-f7a7-47cd-ba18-e5a2470935f0 | net      | 9754dfe9-be48-4a38-b690-5c48cf371ba3 10.10.10.0/24  |
> +--------------------------------------+----------+-----------------------------------------------------+
> stack@devstack:~$ neutron router-port-list c488238d-06d7-4b85-9fa1-e0913e5bcf13
>
> stack@devstack:~$ neutron router-interface-add c488238d-06d7-4b85-9fa1-e0913e5bcf13 a998ac2b-2f50-44f1-9c1a-f4f3684ef63c
> Added interface 680eb2b6-b445-4790-9610-80154dd6d909 to router c488238d-06d7-4b85-9fa1-e0913e5bcf13.
> stack@devstack:~$ neutron router-port-list c488238d-06d7-4b85-9fa1-e0913e5bcf13
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
> | id                                   | name | mac_address       | fixed_ips                                                                        |
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
> | 680eb2b6-b445-4790-9610-80154dd6d909 |      | fa:16:3e:47:2e:8f | {"subnet_id": "26bcdfd3-6393-425e-963e-1ace6ef74e0c", "ip_address": "10.1.0.10"} |
> +--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
>
>
> After adding a port interface ( subnet 10.1.1.0/24  ) to the router, Why
> does the port's IP address was 10.1.0.10 ? Why not it should be 10.1.1.x/24
> ?
>
>
>
> Thanks
> Zhi Chang
>
> 2016-11-01 17:19 GMT+08:00 shihanzhang :
>
>> agree with Neil.
>>
>> thanks
>> shihanzhang
>>
>>
>>
>> On 2016-11-01 17:13:54, "Neil Jerram"  wrote:
>>
>> Hi Zhi Chang,
>>
>> I believe the answer is that the physical network (aka fabric) should
>> provide routing between those two subnets. This routing between segments is
>> implicit in the idea of a multi-segment network, and is entirely
>> independent of routing between virtual _networks_ (which is done by a
>> Neutron router object connecting those networks).
>>
>> Hope that helps!
>> Neil
>>
>>
>> *From: *zhi
>> *Sent: *Tuesday, 1 November 2016 07:50
>> *To: *OpenStack Development Mailing List (not for usage questions)
>> *Reply To: *OpenStack Development Mailing List (not for usage questions)
>> *Subject: *Re: [openstack-dev] [neutron][routed-network] Host doesn't
>> connected any segments when creating port
>>
>> Hi, shihanzhang.
>>
>> I still have a question about routed network. I have two subnets. One is
>> 10.1.0.0/24 and the other is 10.1.1.0/24. I create two instances in each
>> host.
>> Such as 10.1.0.10 and 10.1.1.10.

Re: [openstack-dev] [ironic] When should a project be under Ironic's governance?

2016-11-02 Thread Jim Rollenhagen
On Mon, Oct 17, 2016 at 4:27 PM, Michael Turek
 wrote:
> Hello ironic!
>
> At today's IRC meeting, the questions "what should and should not be a
> project be under Ironic's governance" and "what does it mean to be under
> Ironic's governance" were raised. Log here:
>
> http://eavesdrop.openstack.org/meetings/ironic/2016/ironic.2016-10-17-17.00.log.html#l-176
>
> See http://governance.openstack.org/reference/projects/ironic.html for a
> list of projects currently under Ironic's governance.
>
> Is it as simple as "any project that aides in openstack baremetal deployment
> should be under Ironic's governance"? This is probably too general (nova
> arguably fits here) but it might be a good starting point.
>
> Another angle to look at might be that a project belongs under the Ironic
> governance when both Ironic (the main services) and the candidate subproject
> would benefit from being under the same governance. A hypothetical example
> of this is when Ironic and the candidate project need to release together.
>
> Just some initial thoughts to get the ball rolling. What does everyone else
> think?

We discussed this during our contributor's meetup at the summit, and came to
consensus in the room, that in order for a repository to be under
ironic's governance:

* it must roughly fall within the TC's rules for a new project:
  http://governance.openstack.org/reference/new-projects-requirements.html
* it must not be intended for use with only a single vendor's hardware
(e.g. a library
  to handle iLO is not okay, a library to handle IPMI is okay).
* it must align with ironic's mission statement: "To produce an
OpenStack service
  and associated libraries capable of managing and provisioning
physical machines,
  and to do this in a security-aware and fault-tolerant manner."
* lack of contributor diversity is a chicken-egg problem, and as such
a repository
  where only a single company is contributing is okay.

I've proposed this as a docs patch: https://review.openstack.org/392685

We decided we should get consensus from all cores on that patch - meaning 80%
or more agree, and any that disagree will still agree to live by the
decision. So, cores,
please chime in on gerrit. :)

Once that patch lands, I'll submit a patch to openstack/governance to
shuffle projects
around where they do or don't fit.

// jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] matrix of deploy combinations tested on upstream gates

2016-11-02 Thread Pavlo Shchelokovskyy
Hi Ironicers,

to have better visibility of what is being tested on our gates, I've
started an etherpad that aims to describe what combination of settings /
deploy options is being tested by each currently running (voting and not)
check / gate job

https://etherpad.openstack.org/p/ironic-gate-jobs-described

The work is in progress, but please chime in and correct any mistake I'm
making :)

In the future this would be better published as a wiki page or as part of the dev 
docs (etherpad formatting capabilities are not that great...).

Cheers,
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Ben Swartzlander

On 11/02/2016 06:23 AM, Arne Wiebalck wrote:

Hi Valeriy,

I wasn’t aware, thanks!

So, if each driver exposes the storage_protocols it supports, would it 
be sensible to have
manila-ui check the extra_specs for this key and limit the protocol 
choice for a given
share type to the supported protocols (in order to avoid that the user 
tries to create incompatible type/protocol combinations)?


This is not possible today, as any extra_specs related to protocols are 
hidden from normal API users. It's possible to make sure the share type 
called "nfs_shares" always goes to a backend that supports NFS, but it's 
not possible to programmatically know that in a client, and therefore 
it's not possible to build the smarts into the UI. We intend to fix this 
though, as there is no good reason to keep that information hidden.


-Ben



Thanks again!
 Arne


On 02 Nov 2016, at 10:00, Valeriy Ponomaryov 
> wrote:


Hello, Arne

Each share driver has capability called "storage_protocol". So, for 
case you describe, you should just define such extra spec in your 
share type that will match value reported by desired backend[s].


It is the purpose of extra specs in share types, you (as cloud admin) 
define its connection yourself, either it is strong or not.


Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck > wrote:


Hi,

We’re preparing the use of Manila in production and noticed that
there seems to be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols.
If that’s true, wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per
type, for instance as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice
to supported protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com 
vponomar...@mirantis.com 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org 
?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Arne Wiebalck
CERN IT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Thierry Carrez
Ihar Hrachyshka wrote:
> Tony Breeds  wrote:
> 
>> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
>>
>>> I suggested in the bug and the PoC review that neutron is not the right
>>> project to solve the issue. Seems like oslo.rootwrap is a better
>>> place to
>>> maintain privilege management code for OpenStack. Ideally, a solution
>>> would
>>> be found in scope of the library that would not require any changes
>>> per-project.
>>
>> With the change of direction from oslo.rootwrap to oslo.privsep I doubt
>> that
>> there is scope to land this in oslo.rootwrap.
> 
> It may take a while for projects to switch to caps for privilege
> separation.

oslo.privsep doesn't require projects to switch to caps (just that you
rewrite the commands you call in Python) and can be done incrementally
(while keeping rootwrap around for not-yet-migrated stuff)...

> It may be easier to unblock xen folks with a small
> enhancement in oslo.rootwrap scope and handle transition to oslo.privsep
> on a separate schedule. I would like to hear from oslo folks on where
> alternative hypervisors fit in their rootwrap/privsep plans.

Like Tony said at this point new features are added to oslo.privsep
rather than oslo.rootwrap. In this specific case the most
forward-looking solution (and also best performance and security) would
be to write a Neutron @privileged.entrypoint function to call into
XenAPI and cache the connection.
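
To make that concrete, here is a rough sketch only (module layout, config
section and plugin arguments are all illustrative, not a reviewed design):

    # hypothetical neutron/privileged/dom0.py
    import XenAPI
    from oslo_privsep import priv_context

    # No extra Linux capabilities requested: the real privilege boundary
    # is the XAPI daemon and its netwrap plugin in dom0.
    dom0_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_dom0',
        pypath=__name__ + '.dom0_cmd',
        capabilities=[],
    )

    _SESSION = None  # XenAPI session, created once and cached in the daemon

    def _get_session(url, user, password):
        global _SESSION
        if _SESSION is None:
            _SESSION = XenAPI.Session(url)
            _SESSION.login_with_password(user, password)
        return _SESSION

    @dom0_cmd.entrypoint
    def run_in_dom0(url, user, password, cmd):
        # Forward the command to the dom0 'netwrap' plugin for validation
        # and execution, reusing the cached session.
        session = _get_session(url, user, password)
        host = session.xenapi.session.get_this_host(session.handle)
        return session.xenapi.host.call_plugin(
            host, 'netwrap', 'run_command', {'cmd': cmd})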

https://review.openstack.org/#/c/155631 failed to land in Newton, would
be great if someone could pick it up (maybe a smaller version to
introduce privsep first, then migrate commands one by one).

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Debugging slow Xenial gate

2016-11-02 Thread Jesse Pretorius
> On 11/2/16, 1:51 PM, "Major Hayden"  wrote:
>I tossed up a horribly written hack[0] to change some CPU scheduler 
> settings back to the Trusty settings.  My initial tests were great!  Also, 
> the first test in OpenStack CI was really good --  62 minutes for trusty and 
> 65 minutes for xenial.  However, that seems to be a fluke since the second 
> test had a 30 minute gap between the test durations. :(

I think that difference was due to the hardware/contention profiles of the 
different nodepool providers. You'll have to do tests somewhere where you can 
execute on a consistent hardware profile, ideally with no other contention on 
the host, in order to get reliable comparisons.

I think Logan may be able to help with that. Alternatively perhaps you can get 
access to an OSIC host or instance for testing?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]: Instance creation and deletion metrics in ceilometer !

2016-11-02 Thread gordon chung


On 02/11/16 01:39 AM, Adrian Turjak wrote:
> Been vaguely following this thread and I have a question.
>
> Just to confirm, as I haven't touched ceilometer code in ages, the
> instance metric still exists? Or at least something like it?

it sort of exists currently. we don't build it from notifications but 
the pollster still generates it. (but it will be dropped unless people 
tell us otherwise.)

>
> We're currently using ceilometer as the data collection for our billing,
> and the instance metric is our primary way of billing for compute mainly
> because it tells us which instances exist at a given point in time. We
> further then use the metadata of that metric to get instance state
> (building, active, shutdown, terminating, etc) to determine if we should
> bill it.

i'm curious, what is the query you are using to get this information? 
the state information is still available via events, which in the case of 
gnocchi are used to store resource state alongside metrics.

>
> With the changes to ceilometer and the move to gnocchi I know we will
> need to rebuild how we handle billing data as we upgrade, but what I'm
> worried about is if gnocchi+ceilometer have some equivalent to the
> instance metric that will supply us with the same data, or do we now
> need to do our own notification monitoring...

i believe all the data is still available in some form. regarding 
gnocchi specifically, it takes sample data to capture measurements for a 
metric and takes event data to capture state (and possibly other metadata)

>
> Basically what I need is time series data of, this instance was in this
> state from this period to that period, and if I query for a range I get
> the ranges or changes in that time series. Is something like that
> present, or if not would I be able to make something like that in
> gnocchi? That way I can then query for a time range and know what state
> changes occurred for a given instance.

gnocchi captures the state of a resource and its history. this is 
accessible by looking at resource history. i'm not entirely sure if that 
handles your case; maybe you could provide the queries you use and we 
could figure out equivalent gnocchi queries. i built a ceilometer vs 
gnocchi usage deck [1] that may help but it's more focused on metrics 
rather than resource history.

[1] http://www.slideshare.net/GordonChung/ceilometer-to-gnocchi
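
for example, something like this with the gnocchi client should list the
revisions of an instance resource, including state changes (syntax from
memory, so double-check against the client docs):

    gnocchi resource history --type instance <instance_uuid>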

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Debugging slow Xenial gate

2016-11-02 Thread Major Hayden
On 10/28/2016 04:02 AM, Major Hayden wrote:
> On the topic of threads, the sysbench output from both Trusty and Xenial are 
> nearly identical with the exception of threads.  Trusty is usually about 
> 15-20% faster on that benchmark than Xenial.

I spoke with a few other people and it seems like the culprit could be a CPU 
scheduler difference and/or a glibc change.  After messing around with perf for 
a long time, I found that context switches and CPU migrations were slightly 
higher on Xenial than Trusty, but by a negligible amount (< 10% at worst).

I tossed up a horribly written hack[0] to change some CPU scheduler settings 
back to the Trusty settings.  My initial tests were great!  Also, the first 
test in OpenStack CI was really good --  62 minutes for trusty and 65 minutes 
for xenial.  However, that seems to be a fluke since the second test had a 30 
minute gap between the test durations. :(

Those scheduler changes for busy_factor, min_interval, and max_interval appear 
to have been made in the upstream Linux kernel, and they're present on various 
distributions like Ubuntu, CentOS, and Fedora.
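
For anyone who wants to poke at the same knobs, this is roughly what the
hack touches (the paths are only present when the kernel exposes the
sched_domain tunables, and the value below is purely illustrative):

    # inspect the current setting for one scheduling domain
    cat /proc/sys/kernel/sched_domain/cpu0/domain0/busy_factor
    # try an older value
    echo 64 | sudo tee /proc/sys/kernel/sched_domain/cpu0/domain0/busy_factor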

At this point, I'm still trying to test some additional theories. Does anyone 
have any other ideas?

[0] https://review.openstack.org/392316

--
Major Hayden



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] open work items

2016-11-02 Thread gordon chung
hi,

for those interested in contributing i updated the roadmap[1] page in 
our wiki with items we discussed over last few summits. if you want more 
details on work item and sizing, feel free to ask. we don't use specs or 
blueprints in Telemetry except for rare patch which requires a lot of 
debate so feel free to start working on anything.

i imagine i missed stuff, so please add additional items.

[1] https://wiki.openstack.org/wiki/Telemetry/RoadMap

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Ben Swartzlander

+1

-Ben


On 11/02/2016 08:09 AM, Tom Barron wrote:

I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
manila core team.  This is a clear case where he's already been doing
the review work, excelling both qualitatively and quantitatively, as
well as being a valuable committer to the project.  Goutham deserves to
be core and we need the additional bandwidth for the project.  He's
treated as a de facto core by the community already.  Let's make it
official!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] new keystone core (breton)

2016-11-02 Thread Rodrigo Duarte
Congrats Boris! Very well deserved!

On Tue, Nov 1, 2016 at 9:17 PM, Jamie Lennox  wrote:

> Congrats Boris, Great to have new people on board. Well earned.
>
> On 1 November 2016 at 15:53, Brad Topol  wrote:
>
>> Congratulations Boris!!! Very well deserved!!!
>>
>> --Brad
>>
>>
>> Brad Topol, Ph.D.
>> IBM Distinguished Engineer
>> OpenStack
>> (919) 543-0646
>> Internet: bto...@us.ibm.com
>> Assistant: Kendra Witherspoon (919) 254-0680
>>
>> Steve Martinelli wrote:
>>
>> From: Steve Martinelli 
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Date: 10/31/2016 03:51 PM
>> Subject: [openstack-dev] [keystone] new keystone core (breton)
>> --
>>
>>
>>
>> I want to welcome Boris Bobrov (breton) to the keystone core team. Boris
>> has been a contributor for some time and is well respected by the keystone
>> team for bringing real-world operator experience and feedback. He has
>> recently become more active in terms of code contributions and bug
>> triaging. Upon interacting with Boris, you quickly realize he has a high
>> standard for quality and keeps us honest.
>>
>> Thanks for all your hard work Boris, I'm happy to have you on the team.
>>
>> Steve Martinelli
>> stevemar
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Knight, Clinton
+2  Well earned.

From: Rodrigo Barbieri 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, November 2, 2016 at 8:41 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [manila] propose adding gouthamr to manila core

+2!

Goutham contributes a lot to manila in all areas and is a very active and 
skilled member of the community!



On Wed, Nov 2, 2016 at 9:36 AM, Silvan Kaiser 
> wrote:
+1!

2016-11-02 13:31 GMT+01:00 Alex Meade 
>:
+1000

On Wed, Nov 2, 2016 at 1:09 PM, Tom Barron 
> wrote:
I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
manila core team.  This is a clear case where he's already been doing
the review work, excelling both qualitatively and quantitatively, as
well as being a valuable committer to the project.  Goutham deserves to
be core and we need the additional bandwidth for the project.  He's
treated as a de facto core by the community already.  Let's make it
official!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - 
www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Rodrigo Barbieri
MSc Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][octavia] Spec: Deploy Octavia with OpenStack-Ansible

2016-11-02 Thread Major Hayden
Hey folks,

I drafted a spec yesterday for deploying Octavia with OpenStack-Ansible.  The 
spec review[0] is pending and you can go straight to the rendered version[1] if 
you want to take a look.

We proposed this before in the Liberty release, but we ended up implementing 
only LBaaSv2 with the agent-based load balancers.  Octavia has come a long way 
and is definitely ready for use in Newton/Ocata.

Most of the spec is fairly straightforward, but there are still two open 
questions that may need to be answered in the implementation steps:

1) Do we generate the amphora (LB) image on the fly
   with DIB with each deployment? Or, do we pre-build
   it and download it during the deployment?

It might be easier to use DIB in the development stages and then figure out a 
cached image solution as the role becomes a little more mature.
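
For the DIB path, this is roughly what the role would run (script name and
flags are based on what the Octavia tree ships today, so worth
double-checking before relying on them):

    git clone https://git.openstack.org/openstack/octavia
    cd octavia/diskimage-create
    ./diskimage-create.sh -i ubuntu -s 2   # base OS and image size in GB
    openstack image create --file amphora-x64-haproxy.qcow2 \
        --disk-format qcow2 --container-format bare \
        --tag amphora amphora-x64-haproxy

Octavia can then locate the image via its configured amphora image tag.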

2) Do we want to implement SSL offloading (Barbican
   is required) now or tackle that later?

I'd lean towards deploying Octavia without SSL offloading first, and then add 
in the Barbican support afterwards.  My gut says it's better to get the basic 
functionality working well first before we begin adding extras.

Your feedback is definitely welcomed! :)

[0] https://review.openstack.org/392205
[1] 
http://docs-draft.openstack.org/05/392205/2/check/gate-openstack-ansible-specs-docs-ubuntu-xenial/8f1eec1//doc/build/html/specs/ocata/octavia.html

--
Major Hayden



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Centralizing some config options will break many stadium projects

2016-11-02 Thread Doug Hellmann
Excerpts from Brandon Logan's message of 2016-11-01 22:56:45 +:
> On Tue, 2016-11-01 at 13:13 +0100, Ihar Hrachyshka wrote:
> > Brandon Logan  wrote:
> > 
> > > Hello Neutrinos,
> > > I've come across an issue that I'd like to get input/opinions
> > > on.  I've
> > > been reviewing some of the centralize config options reviews and
> > > have
> > > come across a few that would cause issues with other projects that
> > > are
> > > importing these options, especially stadium projects.  High level
> > > view
> > > of the issue:
> > > 
> > > [1] would cause at least 22 projects to need to be fixed based on
> > > [2]
> > > [3] would cause at least 12 projects to need to be fixed based on
> > > [4]
> > > 
> > > [5] looks to affect many other projects as well (I'm being lazy and
> > > not  counting them right now)
> > > 
> > > Initially, the thinking was that moving the config options around
> > > would
> > > cause some breakage with projects outside of neutron, but that
> > > would be
> > > fine because projects shouldn't really be using neutron as a
> > > library
> > > and using it to register config options.  However, with these 3
> > > patches, I definitely don't feel comfortable breaking the amount of
> > > projects these would break.  It also makes me think that maybe
> > > these
> > > options should be in neutron-lib since they're consumed so widely.
> > 
> > Definitely not neutron-lib material (unless carefully hidden behind
> > clearly  
> > public API).
> > 
> > There is a reason why oslo folks explicitly deny any support for  
> > configuration option names and locations their libraries expose
> > [1].  
> > Options are for operators to change in configuration files, but not
> > to  
> > access them or set programmatically. If there are options that
> > subprojects  
> > need access to, we should expose them via explicitly public API, like
> > we  
> > did with global_physnet_mtu [2].
> > 
> > [1] http://docs.openstack.org/developer/oslo.config/faq.html#why-are-configuration-options-not-part-of-a-library-s-api
> > [2] https://github.com/openstack/neutron/blob/181bdb374fc0c944b1168f27ac7b5cbb0ff0f3c3/neutron/plugins/common/utils.py#L43
> 
> Yeah allowing the options to be imported directly from code outside the
> repo doesn't make sense.  When you talk about a public API in neutron-
> lib for these options, are only talking about READ access as the
> example you gave? OR are you also talking about being able to register

We have seen patterns including an API for reading values and an API for
setting the defaults for values (useful for an application to override
the defaults set in a library).

> these options as well via functions that require no access to the
> options?  If that is the case, then these centralize config option

Options should be registered at runtime by the code that uses them. We
still have lots of code registering options when modules are imported,
but that's not the preferred way to do it because there are potentially
execution order issues when services start up. It's safe to register the
same option more than once, so doing it in some code that will use the
option is fine. For example, when constructing an instance of a class
that uses configuration options, the constructor can register the
options and then the other methods of the class know the options are
present and can be read.
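
A minimal sketch of that pattern (the option and group names here are made
up for illustration):

    from oslo_config import cfg

    _AGENT_OPTS = [
        cfg.IntOpt('report_interval', default=30,
                   help='Seconds between state reports.'),
    ]

    class AgentManager(object):
        def __init__(self, conf=cfg.CONF):
            # Register in the constructor rather than at import time to
            # avoid import-order problems; re-registering the identical
            # option later is a safe no-op.
            conf.register_opts(_AGENT_OPTS, group='agent')
            self.conf = conf

        def interval(self):
            return self.conf.agent.report_interval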

> patches are basically doing that, except not in neutron-lib.  Do you
> think we should move these to neutron-lib instead?  This would mean the
> config options themselves would then probably end up living in neutron-
> lib (though I guess they wouldn't have to).  We'll still have to figure
> out what to do with the subprojects though, but having them in neutron-
> lib and neutron at the same time during a transition period might make
> this easier.

It could also lead to options being registered multiple times with
slightly different settings, which will throw errors. That just means
you need to make sure you don't change the help text or any other
attributes of the options while there are 2 copies.

> 
> > 
> > > Anyway, I've come up with some possible options to deal with this,
> > > but
> > > would like to hear others' opinions on this:
> > > 
> > > 1) Let the patches merge and break those projects as a signal that
> > > importing these shouldn't be done.  The affected projects can
> > > choose to
> > > push fixes that continue importing the neutron config options or
> > > defining their own config options.
> > > 2) Deprecate the old locations for some timeframe, and then remove
> > > later.
> > > 3) Texas Three-Step: change the neutron patches to keep pointers in
> > > the
> > > old locations to the new, and then push patches to the affected
> > > repos
> > > with Depends-On directives.  Once all patches merge, push up one
> > > more
> > > patch to neutron to remove the old location.
> > > 4) Abandon these reviews and do nothing.
> > > 5) 

Re: [openstack-dev] [neutron][oslo] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Doug Hellmann
Excerpts from Jianghua Wang's message of 2016-11-02 04:14:48 +:
> Ihar and Tony,
>  Thanks for the input.
>  In order to run commands in dom0, it uses XenAPI to create a session which 
> can be used to remotely call a plugin - netwrap - which is located in dom0. The 
> netwrap plugin is executed as root. It will validate the command based on 
> the allowed command list and execute it.
> The source code for netwrap is in neutron project: 
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/netwrap
> 
> So at least we can see there are two dependencies: 
> 1. it depends on XenAPI, which is XenServer specific.
> 2. it depends on Neutron's netwrap plugin.
> Is it acceptable to add such dependencies to this common library of 
> oslo.rootwrap? 

Why would they need to be dependencies of oslo.rootwrap? They are
dependencies of the driver, not the base library, right?

> And most of the code in oslo.rootwrap is to:
> 1. spawn a daemon process and maintain the connection between the client and 
> daemon; 
> 2. filter commands in the daemon process.
> But neither can be re-used for this XenAPI/XenServer case, as the daemon 
> process is already running in dom0 and the command filtering is done in dom0's 
> netwrap plugin. Holding this in oslo.rootwrap would require some 
> refactoring work to make it look reasonable. Is it worth doing that, 
> especially considering it has been decided to replace oslo.rootwrap with 
> oslo.privsep?
> 
> Maybe it's a good option to cover this dom0 case in oslo.privsep from the 
> beginning, but it becomes more complicated. Maybe we can run a daemon process 
> in dom0 with the privileges set properly, listening on a dedicated TCP 
> port, but that's much different from the initial privsep design [1]. It 
> also makes the mechanism very different from the current XenServer 
> OpenStack, which uses a XenAPI plugin. Anyway, I think we have a long way 
> to go before a good solution covers it in privsep.

What sort of refactoring do you have in mind for privsep? I could see
something analogous to the preexec_fn argument to subprocess.Popen() to
let the XenServer driver ensure that its privileged process is running
in dom0.
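
For anyone unfamiliar with that argument, a tiny illustration of the
subprocess behavior the analogy refers to:

    import subprocess

    def _setup():
        # Runs in the child after fork() and before exec(); an analogous
        # privsep hook could let a driver re-target where (or how) its
        # privileged helper runs, e.g. into dom0, before it starts serving.
        pass

    subprocess.Popen(['true'], preexec_fn=_setup).wait()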

Doug

> 
> But this issue is urgent for XenAPI/XenServer OpenStack; please see the details 
> described in the bug [2]. So I still think the PoC is a better option, unless 
> both oslo and Neutron folks agree it's acceptable to refactor oslo.rootwrap 
> and allow the above dependencies to be introduced into this library.
> 
> [1]https://specs.openstack.org/openstack/oslo-specs/specs/liberty/privsep.html
> [2] https://bugs.launchpad.net/neutron/+bug/1585510
> 
> Regards,
> Jianghua
> 
> On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:
> 
> > I suggested in the bug and the PoC review that neutron is not the 
> > right project to solve the issue. Seems like oslo.rootwrap is a better 
> > place to maintain privilege management code for OpenStack. Ideally, a 
> > solution would be found in scope of the library that would not require 
> > any changes per-project.
> 
> With the change of direction from oslo.rootwrap to oslo.privsep I doubt that 
> there is scope to land this in oslo.rootwrap.
> 
> Yours Tony.
> 
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com] 
> Sent: Tuesday, November 01, 2016 7:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem 
> for XenServer
> 
> Jianghua Wang  wrote:
> 
> > Hi Neutron guys,
> >
> > I’m trying to explain a problem with the XenServer rootwrap and give a 
> > proposal to resolve it. I need some input on how to proceed with this
> > proposal: e.g. does it require a spec? Are there any concerns that need further 
> > discussion or clarification?
> >
> > Problem description:
> > As we’ve known, some neutron services need run commands with root 
> > privileges and it’s achieved by running commands via the rootwrap. And 
> > in order to resolve performance issue, it has been improved to support 
> > daemon mode for the rootwrap [1]. Either way has the commands running 
> > on the same node/VM which has relative neutron services running on.
> >
> > But as a type-1 hypervisor, XenServer OpenStack has different behavior.  
> > Neutron’s compute agent neutron-openvswitch-agent need run commands in 
> > dom0, as the tenants’ interfaces are plugged in an integration OVS 
> > which locates in Dom0. Currently the script of 
> > https://github.com/openstack/neutron/blob/master/bin/neutron-rootwrap-
> > xen-dom0is used as XenServer OpenStack’s rootwrap. This script will 
> > create a XenAPI session with dom0 and passes the commands to dom0 for 
> > the real execution.
> > Each command execution will run this script once, so it has a similar 
> > performance issue to the non-daemon mode of rootwrap on other
> > hypervisors: for each 

Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck

> On 02 Nov 2016, at 11:52, Tom Barron  wrote:
> 
> 
> 
> On 11/02/2016 06:23 AM, Arne Wiebalck wrote:
>> Hi Valeriy,
>> 
>> I wasn’t aware, thanks! 
>> 
>> So, if each driver exposes the storage_protocols it supports, would it
>> be sensible to have
>> manila-ui check the extra_specs for this key and limit the protocol
>> choice for a given
>> share type to the supported protocols (in order to avoid that the user
>> tries to create
>> incompatible type/protocol combinations)?
> 
> Not necessarily tied to share types, but we have this bug open w.r.t.
> showing only protocols that are available given available backends in
> the actual deployment:
> 
> https://bugs.launchpad.net/manila-ui/+bug/1622732

Thanks for the link, Tom.

As mentioned, I think linking protocols and types would be helpful to guide users
during share creation. So, as an intermediate step, how about extending this patch
by having protocol/type(s) tuples (rather than only protocols) in the UI config file for
Manila and filling the menus in the UI accordingly?

And for a more complete solution, I was wondering if it wouldn't be possible to go
over the available share types, extract the supported storage_protocols, and use
these for the protocol pull-down menu (and limit the type selection to the ones
supporting the protocol selected by the user). This would avoid operators having
to keep the UI config and the Manila config in sync.
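
As a rough sketch of that second idea (hypothetical code using
python-manilaclient, and it assumes the storage_protocol extra spec is
visible to the caller, which per Ben's reply currently requires admin
context):

    # 'manila' is assumed to be an authenticated manilaclient v2 client.
    # Gather the protocols advertised by the visible share types.
    protocols = set()
    for stype in manila.share_types.list():
        value = stype.get_keys().get('storage_protocol')
        if value:
            protocols.update(p.strip() for p in value.split(','))
    # 'protocols' would drive the protocol pull-down menu; selecting one
    # then filters the type choices down to the types that advertised it.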

Cheers,
 Arne


> 
>> 
>> Thanks again!
>> Arne
>> 
>> 
>>> On 02 Nov 2016, at 10:00, Valeriy Ponomaryov >> > wrote:
>>> 
>>> Hello, Arne
>>> 
>>> Each share driver has capability called "storage_protocol". So, for
>>> case you describe, you should just define such extra spec in your
>>> share type that will match value reported by desired backend[s].
>>> 
>>> It is the purpose of extra specs in share types, you (as cloud admin)
>>> define its connection yourself, either it is strong or not.
>>> 
>>> Valeriy
>>> 
>>> On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck >> > wrote:
>>> 
>>>Hi,
>>> 
>>>We’re preparing the use of Manila in production and noticed that
>>>there seems to be no strong connection
>>>between share types and share protocols.
>>> 
>>>I would think that not all backends will support all protocols. If
>>>that’s true, wouldn’t it be sensible to establish
>>>a stronger relation and have supported protocols defined per type,
>>>for instance as extra_specs (which, as one
>>>example, could then be used by the Manila UI to limit the choice
>>>to supported protocols for a given share
>>>type, rather than maintaining two independent and hard-coded tuples)?
>>> 
>>>Thanks!
>>> Arne
>>> 
>>>--
>>>Arne Wiebalck
>>>CERN IT
>>> 
>>>
>>> __
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe:
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Kind Regards
>>> Valeriy Ponomaryov
>>> www.mirantis.com 
>>> vponomar...@mirantis.com 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>>> ?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> --
>> Arne Wiebalck
>> CERN IT
>> 
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Rodrigo Barbieri
+2!

Goutham contributes a lot to manila in all areas and is a very active and
skilled member of the community!



On Wed, Nov 2, 2016 at 9:36 AM, Silvan Kaiser  wrote:

> +1!
>
> 2016-11-02 13:31 GMT+01:00 Alex Meade :
>
>> +1000
>>
>> On Wed, Nov 2, 2016 at 1:09 PM, Tom Barron  wrote:
>>
>>> I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
>>> manila core team.  This is a clear case where he's already been doing
>>> the review work, excelling both qualitatively and quantitatively, as
>>> well as being a valuable committer to the project.  Goutham deserves to
>>> be core and we need the additional bandwidth for the project.  He's
>>> treated as a de facto core by the community already.  Let's make it
>>> official!
>>>
>>> -- Tom Barron
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Dr. Silvan Kaiser
> Quobyte GmbH
> Hardenbergplatz 2, 10623 Berlin - Germany
> +49-30-814 591 800 - www.quobyte.com
> Amtsgericht Berlin-Charlottenburg, HRB 149012B
> Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Barbieri
MSc Computer Scientist
OpenStack Manila Core Contributor
Federal University of São Carlos
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc][devstack][mitaka] Chain doesn't work

2016-11-02 Thread Alioune
Any suggestions?

On Monday, 24 October 2016, Alioune  wrote:

> Hi all,
>
> I'm trying to implement service chain in OpenStack using networking-sfc
> (stable/mitaka) and OVS 2.5.90
>
>
> The following is the architecture I used :
>
> SRC          DST
>  |            |
>  ===== br-int =====
>        |
>       SF1
> SF1: 55.55.55.3
> SRC: 55.55.55.4
> DST: 55.55.55.5
>
> I can create port-pairs, port-pair-group, classifier and chain with these
> commands:
>
> neutron flow-classifier-create  --ethertype IPv4  --source-ip-prefix
> 55.55.55.4/32  --logical-source-port 0009034f-4c39-4cbf-be7d-fcf82dad024c
> --protocol icmp  FC1
> neutron port-pair-create --ingress=p1 --egress=p1 PP1
> neutron port-pair-group-create --port-pair PP1 PG1
> neutron port-chain-create --port-pair-group PG1 --flow-classifier FC1 PC1
>
> I could ping from SRC to DST before setting up the chain, but after creating
> the chain ping doesn't work.
>
> ICMP echo request packets arrive at the SF1 port, but SF1 doesn't send the
> packets back so that they can reach their destination DST (see output
> below).
>
> The OpenDaylight/SFC project uses NSH-aware service functions (SFs) that
> send packets back to the chain after analyzing them. I would like to know:
>
> - How does networking-sfc configure the SF to send packets back to the chain, as
> seen in some of your presentations?
> - What's wrong in my configuration (see commands and ovs-ofctl output
> below)? I've followed the main steps described in your wiki page.
>
> Best Regards,
>
>
> vagrant@vagrant-ubuntu-trusty-64:~$ neutron port-list
> +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
> | id                                   | name | mac_address       | fixed_ips                                                                             |
> +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
> | 0009034f-4c39-4cbf-be7d-fcf82dad024c |      | fa:16:3e:dd:16:f7 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.4"}     |
> | 082e896d-5982-458c-96e7-0dd372d3d7d9 | p1   | fa:16:3e:90:b4:67 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.3"}     |
> | 2ad109e4-42a8-4554-b884-a32344e91036 |      | fa:16:3e:74:9a:fa | {"subnet_id": "3cf6eb27-7258-4252-8f3d-b6f9d27c948b", "ip_address": "192.168.105.2"}  |
> | 51f055c0-ff4d-47f4-9328-9a0d7ca204f3 |      | fa:16:3e:da:f9:93 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.1"}     |
> | 656ad901-2bc0-407a-a581-da955ecf3b59 |      | fa:16:3e:7f:44:01 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.2"}     |
> | b1d14a4f-cde6-4c44-b42e-0f0466dba32a |      | fa:16:3e:a6:c6:35 | {"subnet_id": "8bf8a2e1-ecad-4b4b-beb1-d760a16667bc", "ip_address": "55.55.55.5"}     |
> +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
>
> vagrant@vagrant-ubuntu-trusty-64:~$ ifconfig |grep 082e896d
> qbr082e896d-59 Link encap:Ethernet  HWaddr b6:96:27:fa:ab:af
> qvb082e896d-59 Link encap:Ethernet  HWaddr b6:96:27:fa:ab:af
> qvo082e896d-59 Link encap:Ethernet  HWaddr 7e:1a:7b:7d:09:df
> tap082e896d-59 Link encap:Ethernet  HWaddr fe:16:3e:90:b4:67
>
> vagrant@vagrant-ubuntu-trusty-64:~$ sudo tcpdump -i tap082e896d-59 icmp
> tcpdump: WARNING: tap082e896d-59: no IPv4 address assigned
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on tap082e896d-59, link-type EN10MB (Ethernet), capture size
> 65535 bytes
> 10:51:10.229674 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 61, length 64
> 10:51:11.230318 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 62, length 64
> 10:51:12.233451 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 63, length 64
> 10:51:13.234496 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 64, length 64
> 10:51:14.235583 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 65, length 64
> 10:51:15.236585 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 66, length 64
> 10:51:16.237568 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 67, length 64
> 10:51:17.238974 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 68, length 64
> 10:51:18.244244 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 69, length 64
> 10:51:19.245758 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 70, length 64
> 10:51:20.246521 IP 55.55.55.4 > 55.55.55.5: ICMP echo request, id 15617,
> seq 71, length 64
>
>
>
> vagrant@vagrant-ubuntu-trusty-64:~/openstack_networking/simple-sf$ 

Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Silvan Kaiser
+1!

2016-11-02 13:31 GMT+01:00 Alex Meade :

> +1000
>
> On Wed, Nov 2, 2016 at 1:09 PM, Tom Barron  wrote:
>
>> I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
>> manila core team.  This is a clear case where he's already been doing
>> the review work, excelling both qualitatively and quantitatively, as
>> well as being a valuable committer to the project.  Goutham deserves to
>> be core and we need the additional bandwidth for the project.  He's
>> treated as a de facto core by the community already.  Let's make it
>> official!
>>
>> -- Tom Barron
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Dr. Silvan Kaiser
Quobyte GmbH
Hardenbergplatz 2, 10623 Berlin - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
Management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Alex Meade
+1000

On Wed, Nov 2, 2016 at 1:09 PM, Tom Barron  wrote:

> I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
> manila core team.  This is a clear case where he's already been doing
> the review work, excelling both qualitatively and quantitatively, as
> well as being a valuable committer to the project.  Goutham deserves to
> be core and we need the additional bandwidth for the project.  He's
> treated as a de facto core by the community already.  Let's make it
> official!
>
> -- Tom Barron
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] propose adding gouthamr to manila core

2016-11-02 Thread Tom Barron
I hereby propose that we add Goutham Pacha Ravi (gouthamr on IRC) to the
manila core team.  This is a clear case where he's already been doing
the review work, excelling both qualitatively and quantitatively, as
well as being a valuable committer to the project.  Goutham deserves to
be core and we need the additional bandwidth for the project.  He's
treated as a de facto core by the community already.  Let's make it
official!

-- Tom Barron

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Sylvain Bauza



On 02/11/2016 12:26, Alex Xu wrote:



2016-11-02 16:26 GMT+08:00 Sylvain Bauza >:




On 01/11/2016 15:14, Alex Xu wrote:

Currently we only update the resource usage with the Placement API in
the instance claim and the available-resource-update periodic
task, but there is no claim for migration with the placement API yet.
This work is tracked by
https://bugs.launchpad.net/nova/+bug/1621709
. In Newton, we
only fixed one bit, which makes the resource update periodic task
work correctly; then it will auto-heal everything. The
migration claim part wasn't a goal for the Newton release.


To be clear, there are two distinct points:
#1 there are MoveClaim objects that are synchronously made on
resize (and cold-migrate) and rebuild (and evacuate), but there is
no claim done by the live-migration path.
There is a long-standing bugfix https://review.openstack.org/#/c/244489/
that's been tracked by https://bugs.launchpad.net/nova/+bug/1289064



Yeah, thanks for the info. When I say `migration claim` it's more about the 
move claim. Maybe I should say the move claim.





Np, just a clarification for all of us, not you in particular :-)



#2 all those claim operations don't trigger an allocation request
to the placement API, while the regular boot operation does (hence
your bug report).


Yes, except for booting a new instance, other claims won't trigger an 
allocation request to the placement API.


Oops, I badly wrote my prose in English; I meant your point, i.e. that we 
only write allocation requests for boot operations, and not for move 
operations.







So the first question is: do we want to fix it in this release? If
the answer is yes, there is a concern we need to discuss.



I'd appreciate it if we could merge #1 before #2, because the
placement API decisions could be wrong if we decide to only
allocate for certain move operations.


Sorry, I didn't get you; what does 'the placement API decisions' refer to?


I personally think that, rather than writing allocation records for all move 
operations except the live-migration case, we should first make the move 
operations consistent by doing claim operations, and only once that is 
done, consider writing those allocation records to the placement API.


-Sylvain





In order to implement the drop of the migration claim, the RT needs
to remove allocation records on a specific RP (on the
source/destination compute node). But there isn't any API that can do
that. The API for removing allocation records is 'DELETE
/allocations/{consumer_uuid}', but it will delete all the
allocation records for the consumer. So the initial
fix (https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
/resource_providers/{rp_uuid}/allocations/{consumer_id}'. But
Chris Dent pointed out this is against the original design. All the
allocations for a specific consumer can only be dropped together.

There is also a suggestion from Andrew: we can update all the
allocation records for the consumer each time. That means the RT
will build the original allocation records and the new allocation
records for the claim together, and put them into one API call. That API
should be 'PUT /allocations/{consumer_uuid}'. Unfortunately that
API doesn't replace all the allocation records for the consumer;
it always amends the new allocation records for the consumer.
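
For illustration, a minimal sketch of what such a full-replacement PUT
could look like from the RT side, assuming the API were changed to
replace rather than amend. The payload shape, names and amounts here
are hypothetical placeholders, not the actual RT code:

    import requests

    # hypothetical inputs
    placement = 'http://placement.example.com'
    token = 'ADMIN-TOKEN'
    consumer_uuid = 'INSTANCE-UUID'    # the instance being moved
    source_rp = 'SOURCE-NODE-RP-UUID'  # source compute node provider
    dest_rp = 'DEST-NODE-RP-UUID'      # destination compute node provider

    resources = {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 20}

    # both the original (source) and the new (destination) records go in
    # one request, so the consumer's allocations get replaced atomically
    payload = {'allocations': [
        {'resource_provider': {'uuid': source_rp}, 'resources': resources},
        {'resource_provider': {'uuid': dest_rp}, 'resources': resources},
    ]}

    requests.put('%s/allocations/%s' % (placement, consumer_uuid),
                 json=payload, headers={'X-Auth-Token': token})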

So which direction should we go here?

Thanks
Alex




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Alex Xu
2016-11-02 16:26 GMT+08:00 Sylvain Bauza :

>
>
> On 01/11/2016 15:14, Alex Xu wrote:
>
> Currently we only update the resource usage with Placement API in the
> instance claim and the available resource update periodic task. But there
> is no claim for migration with the placement API yet. This work is tracked by
> https://bugs.launchpad.net/nova/+bug/1621709. In newton, we only fixed one
> bit which makes the resource update periodic task work correctly; then it
> will auto-heal everything. For the migration claim part, that isn't the
> goal for newton release.
>
>
> To be clear, there are two distinct points :
> #1 there are MoveClaim objects that are synchronously made on resize (and
> cold-migrate) and rebuild (and evacuate), but there is no claim done by the
> live-migration path.
> There is a long-standing bugfix https://review.openstack.org/#/c/244489/
> that's been tracked by https://bugs.launchpad.net/nova/+bug/1289064
>

Yeah, thanks for the info. When I say `migration claim` it's more about the move
claim. Maybe I should say the move claim.

>
>
> #2 all those claim operations don't trigger an allocation request to the
> placement API, while the regular boot operation does (hence your bug
> report).
>

Yes, except for booting a new instance, other claims won't trigger an allocation
request to the placement API.


>
>
>
>
> So the first question is: do we want to fix it in this release? If the
> answer is yes, there is a concern we need to discuss.
>
>
> I'd appreciate it if we could merge #1 before #2, because the placement
> API decisions could be wrong if we decide to only allocate for certain move
> operations.
>

Sorry, I didn't get you; what does 'the placement API decisions' refer to?


>
>
> In order to implement the drop of the migration claim, the RT needs to remove
> allocation records on a specific RP (on the source/destination compute
> node). But there isn't any API that can do that. The API for removing allocation
> records is 'DELETE /allocations/{consumer_uuid}', but it will delete all
> the allocation records for the consumer. So the initial fix (
> https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
> /resource_providers/{rp_uuid}/allocations/{consumer_id}'. But Chris Dent
> pointed out this is against the original design. All the allocations for a
> specific consumer can only be dropped together.
>
> There is also a suggestion from Andrew: we can update all the allocation
> records for the consumer each time. That means the RT will build the
> original allocation records and the new allocation records for the claim
> together, and put them into one API call. That API should be 'PUT
> /allocations/{consumer_uuid}'. Unfortunately that API doesn't replace all
> the allocation records for the consumer; it always amends the new
> allocation records for the consumer.
>
> So which direction should we go here?
>
> Thanks
> Alex
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Keystone multinode grenade job

2016-11-02 Thread Steve Martinelli
Testing upgrades without downtime is definitely something we need to
improve. At the summit Dolph Mathews (dolphm on irc) was also looking at
testing out the new upgrade flow. I'm not sure what his plans are, but
getting in contact with him would be a good first step.

On Wed, Nov 2, 2016 at 3:44 AM, Julia Odruzova 
wrote:

> Hi Keystone team!
>
>
> I'm currently investigating OpenStack components upgradability. I saw that
> a few months ago there was a mail thread
>
> about Grenade multinode testing job for Keystone [1]. As far as I
> understand it was decided to test how stable Keystone works
>
> with master DB and to test how different Keystone versions work together
> in a multi-node installation. A lot of work was done
>
> to allow upgrades without downtime for Keystone since that time, so now it
> seems that Keystone is ready for testing the discussed cases.
>
>
> I was wondering if anybody is working on it already? Such tests would be very
> useful for keeping Keystone upgradable, so if nobody
>
> is working on it, I would like to tackle this task. Would it be OK?
>
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-
> February/085781.html
>
> –
>
> Thanks,
>
> Julia Odruzova,
>
> Mirantis, Inc.
>
> irc: jvarlamova
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

2016-11-02 Thread Daniel P. Berrange
On Wed, Nov 02, 2016 at 10:43:44AM +, Lee Yarwood wrote:
> On 02-11-16 08:55:08, Carlton, Paul (Cloud Services) wrote:
> > Lee
> > 
> > I see this in a multiple node devstack without shared storage, although 
> > that shouldn't be relevant
> > 
> > I do a live migration of an instance
> > 
> > I then hard reboot it
> > 
> > If you are not seeing the same outcome I'll look at this again
> 
> Apologies if I'm not being clear here Paul but I'm asking if we can't
> fix the hard reboot issue directly instead of reverting the serial
> console fix. Given that you actually need the serial console fix to
> avoid calling connect_volume multiple times on the destination host.

Agreed, we should diagnose the hard reboot issue rather than just
blindly revert. Based on the bug info - which points to a failure
in neutron port binding - I'm not even convinced that the serial
console fix is the ultimate cause - it may just have exposed a
different latent bug.


Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Tom Barron


On 11/02/2016 06:23 AM, Arne Wiebalck wrote:
> Hi Valeriy,
> 
> I wasn’t aware, thanks! 
> 
> So, if each driver exposes the storage_protocols it supports, would it
> be sensible to have manila-ui check the extra_specs for this key and
> limit the protocol choice for a given share type to the supported
> protocols (in order to avoid the user trying to create incompatible
> type/protocol combinations)?

Not necessarily tied to share types, but we have this bug open w.r.t.
showing only protocols that are available given available backends in
the actual deployment:

https://bugs.launchpad.net/manila-ui/+bug/1622732

-- Tom

> 
> Thanks again!
>  Arne
> 
> 
>> On 02 Nov 2016, at 10:00, Valeriy Ponomaryov wrote:
>>
>> Hello, Arne
>>
>> Each share driver has a capability called "storage_protocol". So, for
>> the case you describe, you should just define such an extra spec in your
>> share type that will match the value reported by the desired backend[s].
>>
>> That is the purpose of extra specs in share types: you (as cloud admin)
>> define the connection yourself, whether it is strong or not.
>>
>> Valeriy
>>
>> On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck wrote:
>>
>> Hi,
>>
>> We’re preparing the use of Manila in production and noticed that
>> there seems to be no strong connection
>> between share types and share protocols.
>>
>> I would think that not all backends will support all protocols. If
>> that’s true, wouldn’t it be sensible to establish
>> a stronger relation and have supported protocols defined per type,
>> for instance as extra_specs (which, as one
>> example, could then be used by the Manila UI to limit the choice
>> to supported protocols for a given share
>> type, rather than maintaining two independent and hard-coded tuples)?
>>
>> Thanks!
>>  Arne
>>
>> --
>> Arne Wiebalck
>> CERN IT
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> -- 
>> Kind Regards
>> Valeriy Ponomaryov
>> www.mirantis.com 
>> vponomar...@mirantis.com 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> --
> Arne Wiebalck
> CERN IT
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

2016-11-02 Thread Lee Yarwood
On 02-11-16 08:55:08, Carlton, Paul (Cloud Services) wrote:
> Lee
> 
> I see this in a multiple node devstack without shared storage, although that 
> shouldn't be relevant
> 
> I do a live migration of an instance
> 
> I then hard reboot it
> 
> If you are not seeing the same outcome I'll look at this again

Apologies if I'm not being clear here Paul but I'm asking if we can't
fix the hard reboot issue directly instead of reverting the serial
console fix. Given that you actually need the serial console fix to
avoid calling connect_volume multiple times on the destination host. 

Lee

> From: Lee Yarwood 
> Sent: 02 November 2016 08:17:35
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> instances with encrypted volumes
> 
> On 01-11-16 15:22:57, Carlton, Paul (Cloud Services) wrote:
> > Lee
> >
> > That change is in my test version or was till I reverted it with 
> > https://review.openstack.org/#/c/391418,
> >
> > If you live migrate with the change you mentioned the instance goes to 
> > error when you try to hard reboot
> 
> Hey Paul,
> 
> I can't see a bug referenced by the revert above, have you looked into
> why this is happening and if a full revert is really required? It might
> be easier to fix this corner case, leaving the new method of fetching
> the domain XML in post_live_migration_at_destination and thus working
> around your issue.
> 
> Lee
> 
> > From: Lee Yarwood 
> > Sent: 01 November 2016 14:58:58
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> > instances with encrypted volumes
> >
> > On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote:
> > > Daniel
> > >
> > > Yes, thanks, but the thing is this does not occur with regular volumes!
> > > The process seems to be you need to connect the volume then the encryptor.
> > > In pre migration at the destination I connect the volume and then setup 
> > > the encryptor and that works fine, but in post migration
> > > at destination it rebuilds the instance xml and defines the vm which 
> > > calls _get_guest_storage_config which does another call to
> > > connect_volume.  This seems redundant to me, because it is already 
> > > connected,
> > > but it works for normal volumes and if I bypass it for encrypted volumes
> > > it just fails with the same error when the same function is
> > > called as part of a subsequent hard reboot.
> >
> > Try rebasing on the following change that reworked
> > post_live_migration_at_destination to fetch the domain XML from libvirt
> > instead of asking Nova to rebuild it :
> >
> > libvirt: fix serial console not correctly defined after live-migration
> > https://review.openstack.org/#/c/356335/
> >
> > I think you've highlighted that this caused issues with hard rebooting
> > elsewhere right?
> >
> > Lee
> >
> > > From: Daniel P. Berrange 
> > > Sent: 01 November 2016 11:29:51
> > > To: OpenStack Development Mailing List (not for usage questions)
> > > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> > > instances with encrypted volumes
> > >
> > > On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) 
> > > wrote:
> > > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with 
> > > > the live migration of
> > > >
> > > > instances with encrypted volumes. I've submitted a work in progress 
> > > > version of a patch
> > > >
> > > > https://review.openstack.org/#/c/389608 but I can't overcome an issue 
> > > > with an iscsi command
> > > >
> > > > failure that only occurs for encrypted volumes during the post 
> > > > migration processing, see
> > > >
> > > > http://paste.openstack.org/show/587535/
> > > >
> > > >
> > > > Does anyone have any thoughts on how to proceed with this issue?
> > >
> > > No particular ideas, but I wanted to point out that the scsi_id command
> > > shown in that stack trace has a device path that points to the raw
> > > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting
> > > a failure before you get the encryption part, so encryption might be
> > > unrelated.
> 
> --
> Lee Yarwood
> Senior Software Engineer
> Red Hat
> 
> PGP : A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck
Hi Valeriy,

I wasn’t aware, thanks!

So, if each driver exposes the storage_protocols it supports, would it be
sensible to have manila-ui check the extra_specs for this key and limit the
protocol choice for a given share type to the supported protocols (in order
to avoid the user trying to create incompatible type/protocol combinations)?

Thanks again!
 Arne


On 02 Nov 2016, at 10:00, Valeriy Ponomaryov wrote:

Hello, Arne

Each share driver has a capability called "storage_protocol". So, for the case
you describe, you should just define such an extra spec in your share type that
will match the value reported by the desired backend[s].

That is the purpose of extra specs in share types: you (as cloud admin) define
the connection yourself, whether it is strong or not.

Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck wrote:
Hi,

We’re preparing the use of Manila in production and noticed that there seems to 
be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols. If that’s true, 
wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per type, for instance 
as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice to supported 
protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Ocata specs

2016-11-02 Thread Steven Hardy
On Tue, Nov 01, 2016 at 05:46:48PM -0400, Zane Bitter wrote:
> On 01/11/16 15:13, James Slagle wrote:
> > On Tue, Nov 1, 2016 at 7:21 PM, Emilien Macchi  wrote:
> > > Hi,
> > > 
> > > TripleO (like some other projects in OpenStack) has not always done
> > > a good job of merging specs on time during a cycle.
> > > I would like to make progress on this topic and for that, I propose we
> > > set a deadline to get a spec approved for Ocata release.
> > > This deadline would be Ocata-1 which is week of November 14th.
> > > 
> > > So if you have a specs under review, please make sure it's well
> > > communicated to our team (IRC, mailing-list, etc); comments are
> > > addressed.
> > > 
> > > Also, I would ask our team to spend some time to review them when they
> > > have time. Here is the link:
> > > https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
> > 
> > Given that we don't always require specs, should we make the same
> > deadline for blueprints to get approved for Ocata as well?
> > 
> > In fact, we haven't even always required blueprints for all features.
> > In order to avoid any surprise FFE's towards the end of the cycle, I
> > think it might be wise to start doing so. The overhead of creating a
> > blueprint is very small, and it actually works to the implementer's
> > advantage as it helps to focus review attention at the various
> > milestones.
> > 
> > So, we could say:
> > - All features require a blueprint
> > - They may require a spec if we need to reach consensus about the feature 
> > first
> > - All Blueprints and Specs for Ocata not approved by November 14th
> > will be deferred to Pike.
> > 
> > Given we reviewed all the blueprints at the summit, and discussed all
> > the features we plan to implement for Ocata, I think it would be
> > reasonable to go with the above. However, I'm interested in any
> > feedback or if anyone feels that requiring a blueprint for features is
> > undesirable.
> 
> The blueprint interface in Launchpad is kind of horrible for our purposes
> (too many irrelevant fields to fill out). For features that aren't
> big/controversial enough to require a spec, some projects have adopted a
> 'spec-lite' process. Basically you raise a *bug* in Launchpad, give it
> 'Wishlist' priority and tag it with 'spec-lite'.

I think either approach is fine and IIRC we did previously discuss the
spec-lite process and agree it was acceptable for tracking smaller
features for TripleO.

The point is we absolutely need some way to track stuff that isn't yet
landed - and I think folks probably don't care much re (Bug|Blueprint)
provided it's correctly targeted.

We had a very rough time at the end of Newton because $many folks showed up
late with features we didn't know about and/or weren't correctly tracked,
so I think a feature proposal freeze is reasonable.  Given the number of
BPs targeted at Ocata is already pretty high I think Nov 14th is probably
justifiable, but it is on the more conservative side relative to other
projects[2].

Regarding the specs process - tbh I feel like that hasn't been working well
for a while (for all the same reasons John referenced in [1]).

So I've been leaning towards not requiring (or writing) specs in the
majority of cases, instead often we've just linked an etherpad with notes
or had a ML discussion to gain consensus on direction. (This seems pretty
similar to the wiki based approach adopted by the swift team).

Thanks,

Steve

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094026.html
[2] https://releases.openstack.org/ocata/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] Next PTL/TC elections timeframes

2016-11-02 Thread Thierry Carrez
Thierry Carrez wrote:
>> See: https://review.openstack.org/#/c/385951/
> [...]
> So we'd like to get extra time for PTLs to chime in on the change and
> post their +1 if they are fine with it. We'll wait until the TC meeting
> on November 8th to finally approve this.

Quick reminder for PTLs to chime in on the review before next week's TC
meeting!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Feature to enhance floating IP router lookup to consider extra routes

2016-11-02 Thread Ihar Hrachyshka

Vega Cai  wrote:


Hi folks,

During the development of Tricircle, we found that one feature whose spec
has already been accepted would be very useful: enhancing floating IP
router lookup to consider extra routes[1]. With this feature, a floating IP
is allowed to be associated with internal addresses that are reachable by
routes over intermediate networks. However, the implementation patch has
been abandoned since the owner didn't update the patch for a long time[2].


We would like to continue to work on it. Shall we just restore the patch
or is a new RFE patch needed?


[1]  
https://specs.openstack.org/openstack/neutron-specs/specs/juno/floating-ip-extra-route.html

[2] https://review.openstack.org/#/c/55987


Quoting Armando from the blueprint dashboard,

"Nov-13-2015(armax): If someone is interested in pursuing it, this must be  
re-submitted according to guidelines defined in [1].


[1] http://docs.openstack.org/developer/neutron/policies/blueprints.html”

Juno is a long time ago, so we should restart the process from the very  
beginning.


Ihar


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Cloud Provider Security Groups

2016-11-02 Thread Roey Chen
Hello David,

The following RFE’s were created in order to address the same use case:

https://bugs.launchpad.net/neutron/+bug/1592000
https://bugs.launchpad.net/neutron/+bug/1592028 (blueprint 
https://review.openstack.org/#/c/391654)

IMO, these usability issues in the security-group API should be addressed 
regardless of what FWaaS v2.0 will be capable of.

Thanks,
Roey

On Oct 31, 2016, at 11:28 PM, David G. Bingham wrote:

Yo Neutron devs :-)

I was wondering if something like the following subject has come up:
"Cloud-provider Security Groups".

*Goal of this email*: Gauge the community’s need and see if this has come up in 
the past.
*Requirement*: Apply a provider-managed global set of network flows to all 
instances.
*Use Case*: For our private cloud, we have a need to dynamically allow network 
traffic flows from other internal network sources across all instances.
*Basic Idea*: Provide an *admin-only* accessible security group ruleset that 
would persist and apply these "cloud-provider" security group rules to all 
instances of a cloud. This *may* be in the form of a new "provider" API or an 
extension to the existing API, only accessible via "admin". When instances are 
created, this global SG ruleset would be applied to each VM/ironic instance. 
This feature likely should be capable of being enabled/disabled depending on 
the provider's need.

*Real example*: Company security team wants to audit consistent security 
software installations (i.e. HIPS) across our entire fleet of servers for 
compliance reporting. Each vm/ironic instance is required to have this software 
installed and up to date. Security audit team actually audits each and every 
server to ensure it is running, patched and up to date. These auditing tools 
source from a narrow set of internal IPs/ports and each instance must allow 
access to these auditing tools.

--- What we do today to hack a "cloud-provider" flow in a private cloud ---
1) We've locked down the default rules (only admins can modify them, which 
effectively kills a lot of canned neutron tools).
2) We've written an external script that iterates over all projects in our 
private cloud (~10k projects)
3) For each project:
3a) Fetch default SGs for that project and do a comparison of its default rules 
against *target* default rules
3b) Create any new missing rules, delete any removed rules
3c) Next project
This process is cumbersome, takes 20+ hours to run (over ~10k projects) and has 
to be throttled such that it doesn't over-hammer neutron while it's still 
serving production traffic.
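
For illustration, a rough sketch of that sync loop (not our actual script;
admin credentials, the rules and all names here are made up, and the delete
side of step 3b is omitted):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client
    from neutronclient.v2_0 import client as n_client

    auth = v3.Password(auth_url='http://keystone.example.com:5000/v3',
                       username='admin', password='secret',
                       project_name='admin', user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)
    keystone = ks_client.Client(session=sess)
    neutron = n_client.Client(session=sess)

    # the desired "cloud-provider" rules, e.g. the audit tool's source range
    TARGET_RULES = [{'direction': 'ingress', 'protocol': 'tcp',
                     'port_range_min': 22, 'port_range_max': 22,
                     'remote_ip_prefix': '10.1.2.0/24'}]

    for project in keystone.projects.list():
        result = neutron.list_security_groups(name='default',
                                              tenant_id=project.id)
        for sg in result['security_groups']:
            existing = sg['security_group_rules']
            for rule in TARGET_RULES:
                # create the rule only if no existing rule already matches it
                if not any(all(r.get(k) == v for k, v in rule.items())
                           for r in existing):
                    body = dict(rule, security_group_id=sg['id'])
                    neutron.create_security_group_rule(
                        {'security_group_rule': body})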

--- What we'd like to do in future ---
1) Call Security Group API that would modify a "cloud-provider" ruleset.
2) When updated, agents on HVs detect the "cloud-provider" change and then 
apply the rules across all instances.
Naturally there are going to be a lot of technical hurdles to make this happen 
while a cloud is currently in-flight.

Other uses for “Provider SGs":
* We want to enable new shared features (i.e. monitoring aaS) that all our 
internal projects can take advantage of. When the monitoring team adds/modifies 
IPs to scale, we'd apply these cloud-provider rules on behalf of all projects 
in the private cloud without users having to concern themselves about the 
monitoring team's changes.
* We want to allow access to some internal sites to our VPN users on specific 
ports. These VPN ranges are dynamically changed by the VPN team. Our teams 
should not need to go into individual projects to add a new rule when VPN team 
changes ranges.
* This list could go on and on and naturally makes much more sense in a 
*private cloud* scenario, but there may be cases for public providers.

I’d be happy to create a spec.

Seen this before? Thoughts? Good, Bad or Ugly? :-)

Thanks,
David Bingham (wwriverrat on irc)
GoDaddy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Feature to enhance floating IP router lookup to consider extra routes

2016-11-02 Thread Vega Cai
Hi folks,

During the development of Tricircle, we found that one feature whose spec
has already been accepted would be very useful: enhancing floating IP
router lookup to consider extra routes[1]. With this feature, a floating IP
is allowed to be associated with internal addresses that are reachable by
routes over intermediate networks. However, the implementation patch has
been abandoned since the owner didn't update the patch for a long time[2].
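
For illustration, the kind of extra route involved can already be set with a
neutronclient call like the one below (values are illustrative; 'sess' is
assumed to be an authenticated keystoneauth1 session and the router UUID a
placeholder). The proposed lookup would then let a floating IP be associated
with a fixed IP reachable behind such a route:

    from neutronclient.v2_0 import client

    neutron = client.Client(session=sess)
    neutron.update_router('ROUTER-UUID', {'router': {'routes': [
        {'destination': '192.168.2.0/24',  # subnet behind the intermediate net
         'nexthop': '10.0.0.10'}           # port address on an attached net
    ]}})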

We would like to continue to work on it. Shall we just restore the patch or
is a new RFE patch needed?

[1]
https://specs.openstack.org/openstack/neutron-specs/specs/juno/floating-ip-extra-route.html
[2] https://review.openstack.org/#/c/55987
-- 
BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Valeriy Ponomaryov
Hello, Arne

Each share driver has a capability called "storage_protocol". So, for the case
you describe, you should just define such an extra spec in your share type
that will match the value reported by the desired backend[s].

That is the purpose of extra specs in share types: you (as cloud admin)
define the connection yourself, whether it is strong or not.
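
For illustration, a greatly simplified sketch of the matching idea (the real
scheduler applies its capabilities filter, with richer operators, to what
each backend reports; the names below are made up):

    # toy example: only backends whose reported capabilities match the
    # share type's extra specs remain eligible for scheduling
    share_type = {'extra_specs': {'storage_protocol': 'NFS'}}

    backends = [
        {'name': 'backend-a', 'storage_protocol': 'NFS'},
        {'name': 'backend-b', 'storage_protocol': 'CEPHFS'},
    ]

    def matches(backend, extra_specs):
        return all(backend.get(key) == value
                   for key, value in extra_specs.items())

    eligible = [b for b in backends
                if matches(b, share_type['extra_specs'])]
    print(eligible)  # -> only 'backend-a' remains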

Valeriy

On Wed, Nov 2, 2016 at 9:51 AM, Arne Wiebalck  wrote:

> Hi,
>
> We’re preparing the use of Manila in production and noticed that there
> seems to be no strong connection
> between share types and share protocols.
>
> I would think that not all backends will support all protocols. If that’s
> true, wouldn’t it be sensible to establish
> a stronger relation and have supported protocols defined per type, for
> instance as extra_specs (which, as one
> example, could then be used by the Manila UI to limit the choice to
> supported protocols for a given share
> type, rather than maintaining two independent and hard-coded tuples)?
>
> Thanks!
>  Arne
>
> --
> Arne Wiebalck
> CERN IT
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

2016-11-02 Thread Carlton, Paul (Cloud Services)
Lee


I see this in a multiple node devstack without shared storage, although that 
shouldn't be relevant

I do a live migration of an instance

I then hard reboot it


If you are not seeing the same outcome I'll look at this again


Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Office: +44 (0) 1173 162189
Mobile:+44 (0)7768 994283
Email:paul.carl...@hpe.com
Hewlett-Packard Enterprise Limited registered Office: Cain Road, Bracknell, 
Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error, you should 
delete it from your system immediately and advise the sender. To any recipient 
of this message within HP, unless otherwise stated you should consider this 
message and attachments as "HP CONFIDENTIAL".


From: Lee Yarwood 
Sent: 02 November 2016 08:17:35
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
instances with encrypted volumes

On 01-11-16 15:22:57, Carlton, Paul (Cloud Services) wrote:
> Lee
>
> That change is in my test version or was till I reverted it with 
> https://review.openstack.org/#/c/391418,
>
> If you live migrate with the change you mentioned the instance goes to error 
> when you try to hard reboot

Hey Paul,

I can't see a bug referenced by the revert above, have you looked into
why this is happening and if a full revert is really required? It might
be easier to fix this corner case, leaving the new method of fetching
the domain XML in post_live_migration_at_destination and thus working
around your issue.

Lee

> From: Lee Yarwood 
> Sent: 01 November 2016 14:58:58
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> instances with encrypted volumes
>
> On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote:
> > Daniel
> >
> > Yes, thanks, but the thing is this does not occur with regular volumes!
> > The process seems to be you need to connect the volume then the encryptor.
> > In pre migration at the destination I connect the volume and then setup the 
> > encryptor and that works fine, but in post migration
> > at destination it rebuilds the instance xml and defines the vm which calls 
> > _get_guest_storage_config which does another call to
> > connect_volume.  This seems redundant to me, because it is already 
> > connected,
> > but it works for normal volumes and if I bypass it for encrypted volumes
> > it just fails with the same error when the same function is
> > called as part of a subsequent hard reboot.
>
> Try rebasing on the following change that reworked
> post_live_migration_at_destination to fetch the domain XML from libvirt
> instead of asking Nova to rebuild it :
>
> libvirt: fix serial console not correctly defined after live-migration
> https://review.openstack.org/#/c/356335/
>
> I think you've highlighted that this caused issues with hard rebooting
> elsewhere right?
>
> Lee
>
> > From: Daniel P. Berrange 
> > Sent: 01 November 2016 11:29:51
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> > instances with encrypted volumes
> >
> > On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) 
> > wrote:
> > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with 
> > > the live migration of
> > >
> > > instances with encrypted volumes. I've submitted a work in progress 
> > > version of a patch
> > >
> > > https://review.openstack.org/#/c/389608 but I can't overcome an issue 
> > > with an iscsi command
> > >
> > > failure that only occurs for encrypted volumes during the post migration 
> > > processing, see
> > >
> > > http://paste.openstack.org/show/587535/
> > >
> > >
> > > Does anyone have any thoughts on how to proceed with this issue?
> >
> > No particular ideas, but I wanted to point out that the scsi_id command
> > shown in that stack trace has a device path that points to the raw
> > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting
> > a failure before you get the encryption part, so encryption might be
> > unrelated.

--
Lee Yarwood
Senior Software Engineer
Red Hat

PGP : A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [magnum]Is internet-access necessary for Magnum + CoreOS?

2016-11-02 Thread Rikimaru Honjo

Hi Yuanying,

Thank you for explaining.
I'll consider changing my environment or OS.

Regards,

On 2016/11/01 19:13, Yuanying OTSUKA wrote:

Hi, Rikimaru.

Currently, the k8s-CoreOS driver doesn't have a way to disable internet access.
But the k8s-fedora driver does.

See the blueprint below:
* https://blueprints.launchpad.net/magnum/+spec/support-insecure-registry

Maybe you can bring this feature to the k8s-coreos driver.


Thanks
-yuanying


On Tue, Nov 1, 2016 at 15:05, Rikimaru Honjo wrote:


Hi all,

Can I use magnum + CoreOS in an environment which is not able to access
the internet?
I'm trying it, but CoreOS often accesses "quay.io".
Please share your knowledge if you know about this.

I'm using CoreOS, kubernetes, Magnum 2.0.1.

Regards,
--
Rikimaru Honjo
honjo.rikim...@po.ntts.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Cancel this week QA meeting

2016-11-02 Thread Masayuki Igawa
Hi!

We are cancelling this week's QA meeting because of too few attendees this time.
See you next time :)

Best Regards,
-- Masayuki Igawa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Relation of share types and share protocols

2016-11-02 Thread Arne Wiebalck
Hi,

We’re preparing the use of Manila in production and noticed that there seems to 
be no strong connection
between share types and share protocols.

I would think that not all backends will support all protocols. If that’s true, 
wouldn’t it be sensible to establish
a stronger relation and have supported protocols defined per type, for instance 
as extra_specs (which, as one
example, could then be used by the Manila UI to limit the choice to supported 
protocols for a given share
type, rather than maintaining two independent and hard-coded tuples)?

Thanks!
 Arne 

--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-02 Thread Sylvain Bauza



On 01/11/2016 15:14, Alex Xu wrote:
Currently we only update the resource usage with the Placement API in the 
instance claim and the available resource update periodic task. But 
there is no claim for migration with the placement API yet. This work is 
tracked by https://bugs.launchpad.net/nova/+bug/1621709. In newton, we 
only fixed one bit, which makes the resource update periodic task work 
correctly; then it will auto-heal everything. For the migration claim 
part, that isn't the goal for the newton release.


To be clear, there are two distinct points:
#1 there are MoveClaim objects that are synchronously made on resize 
(and cold-migrate) and rebuild (and evacuate), but there is no claim 
done by the live-migration path.
There is a long-standing bugfix https://review.openstack.org/#/c/244489/ 
that's been tracked by https://bugs.launchpad.net/nova/+bug/1289064


#2 all those claim operations don't trigger an allocation request to the 
placement API, while the regular boot operation does (hence your bug 
report).





So the first question is: do we want to fix it in this release? If the 
answer is yes, there is a concern we need to discuss.




I'd appreciate it if we could merge #1 before #2, because the 
placement API decisions could be wrong if we decide to only allocate for 
certain move operations.


In order to implement the drop of the migration claim, the RT needs to 
remove allocation records on a specific RP (on the source/destination 
compute node). But there isn't any API that can do that. The API for 
removing allocation records is 'DELETE /allocations/{consumer_uuid}', 
but it will delete all the allocation records for the consumer. So the 
initial fix (https://review.openstack.org/#/c/369172/) adds a new API 
'DELETE /resource_providers/{rp_uuid}/allocations/{consumer_id}'. But 
Chris Dent pointed out this is against the original design. All the 
allocations for a specific consumer can only be dropped together.


There is also a suggestion from Andrew: we can update all the 
allocation records for the consumer each time. That means the RT will 
build the original allocation records and the new allocation records for 
the claim together, and put them into one API call. That API should be 'PUT 
/allocations/{consumer_uuid}'. Unfortunately that API doesn't replace 
all the allocation records for the consumer; it always amends the new 
allocation records for the consumer.


So which direction should we go here?

Thanks
Alex




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] Issue with live migration of instances with encrypted volumes

2016-11-02 Thread Lee Yarwood
On 01-11-16 15:22:57, Carlton, Paul (Cloud Services) wrote:
> Lee
> 
> That change is in my test version or was till I reverted it with 
> https://review.openstack.org/#/c/391418,
> 
> If you live migrate with the change you mentioned the instance goes to error 
> when you try to hard reboot

Hey Paul,

I can't see a bug referenced by the revert above, have you looked into
why this is happening and if a full revert is really required? It might
be easier to fix this corner case, leaving the new method of fetching
the domain XML in post_live_migration_at_destination and thus working
around your issue.

Lee

> From: Lee Yarwood 
> Sent: 01 November 2016 14:58:58
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> instances with encrypted volumes
> 
> On 01-11-16 12:02:55, Carlton, Paul (Cloud Services) wrote:
> > Daniel
> >
> > Yes, thanks, but the thing is this does not occur with regular volumes!
> > The process seems to be you need to connect the volume then the encryptor.
> > In pre migration at the destination I connect the volume and then setup the 
> > encryptor and that works fine, but in post migration
> > at destination it rebuilds the instance xml and defines the vm which calls 
> > _get_guest_storage_config which does another call to
> > connect_volume.  This seems redundant to me, because it is already 
> > connected,
> > but it works for normal volumes and if I bypass it for encrypted volumes
> > it just fails with the same error when the same function is
> > called as part of a subsequent hard reboot.
> 
> Try rebasing on the following change that reworked
> post_live_migration_at_destination to fetch the domain XML from libvirt
> instead of asking Nova to rebuild it :
> 
> libvirt: fix serial console not correctly defined after live-migration
> https://review.openstack.org/#/c/356335/
> 
> I think you've highlighted that this caused issues with hard rebooting
> elsewhere right?
> 
> Lee
> 
> > From: Daniel P. Berrange 
> > Sent: 01 November 2016 11:29:51
> > To: OpenStack Development Mailing List (not for usage questions)
> > Subject: Re: [openstack-dev] [nova] [cinder] Issue with live migration of 
> > instances with encrypted volumes
> >
> > On Tue, Nov 01, 2016 at 11:22:25AM +, Carlton, Paul (Cloud Services) 
> > wrote:
> > > I'm working on a bug https://bugs.launchpad.net/nova/+bug/1633033 with 
> > > the live migration of
> > >
> > > instances with encrypted volumes. I've submitted a work in progress 
> > > version of a patch
> > >
> > > https://review.openstack.org/#/c/389608 but I can't overcome an issue 
> > > with an iscsi command
> > >
> > > failure that only occurs for encrypted volumes during the post migration 
> > > processing, see
> > >
> > > http://paste.openstack.org/show/587535/
> > >
> > >
> > > Does anyone have any thoughts on how to proceed with this issue?
> >
> > No particular ideas, but I wanted to point out that the scsi_id command
> > shown in that stack trace has a device path that points to the raw
> > iSCSI LUN, not to the dm-crypt overlay. So it looks like you're hitting
> > a failure before you get the encryption part, so encryption might be
> > unrelated.

-- 
Lee Yarwood
Senior Software Engineer
Red Hat

PGP : A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer

2016-11-02 Thread Ihar Hrachyshka

Tony Breeds  wrote:


On Tue, Nov 01, 2016 at 12:45:43PM +0100, Ihar Hrachyshka wrote:


I suggested in the bug and the PoC review that neutron is not the right
project to solve the issue. Seems like oslo.rootwrap is a better place to
maintain privilege management code for OpenStack. Ideally, a solution
would be found in scope of the library that would not require any changes
per-project.


With the change of direction from oslo.rootwrap to oslo.privsep I doubt that
there is scope to land this in oslo.rootwrap.


It may take a while for projects to switch to caps for privilege  
separation. It may be easier to unblock xen folks with a small enhancement  
in oslo.rootwrap scope and handle transition to oslo.privsep on a separate  
schedule. I would like to hear from oslo folks on where alternative  
hypervisors fit in their rootwrap/privsep plans.
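
For reference, a minimal sketch of what a capability-scoped privsep  
entrypoint looks like (illustrative names only, loosely following the  
oslo.privsep usage pattern), as opposed to rootwrap's "run this whole  
command as root":

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # a context whose privileged daemon keeps only CAP_NET_ADMIN
    default = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def set_link_up(device_name):
        # body runs in the privileged daemon with just that capability
        ...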


Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]agenda of weekly meeting Nov.2

2016-11-02 Thread joehuang
Hello, team,

Let's resume the weekly meeting after design summit.

Agenda of Nov.2 weekly meeting:

  1.  Tricircle design summit recap and Ocata planning 
https://etherpad.openstack.org/p/ocata-tricircle-work-session

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 13:00.


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.

Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone multinode grenade job

2016-11-02 Thread Julia Odruzova
Hi Keystone team!


I'm currently investigating OpenStack components upgradability. I saw that
a few months ago there was a mail thread

about Grenade multinode testing job for Keystone [1]. As far as I
understand it was decided to test how stable Keystone works

with master DB and to test how different Keystone versions work together in
a multi-node installation. A lot of work was done

to allow upgrades without downtime for Keystone since that time, so now it
seems that Keystone is ready for testing the discussed cases.


I was wondering if anybody is working on it already? Such tests would be very
useful for keeping Keystone upgradable, so if nobody

is working on it, I would like to tackle this task. Would it be OK?


[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-February/085781.html

–

Thanks,

Julia Odruzova,

Mirantis, Inc.

irc: jvarlamova
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [FaaS] Function as a service in OpenStack

2016-11-02 Thread Zhipeng Huang
Could writing a Scala OpenStack SDK for OpenWhisk do the trick?

On Wed, Nov 2, 2016 at 10:20 AM, Lingxian Kong  wrote:

> Hi, all,
>
> Recently when I was talking with some customers of our OpenStack based
> public cloud, some of them are expecting to see a service similar to AWS
> Lambda in OpenStack ecosystem (so such service could be invoked by Heat,
> Mistral, Swift, etc.).
>
> Coincidentally, I happened to see an introduction of the OpenWhisk project by
> IBM folks at the Barcelona Summit. The demo was great and I was much more
> excited to learn it was open-sourced, but after checking, I felt a little
> bit frustrated: most of the core part of the code was written in Scala, so
> it sets a high bar for me (yeah, I'm using Python) to learn and understand.
>
> So I came here to ask if there are people who are interested in the serverless
> area as I am, or have the same requirements as our customers? Does it deserve
> a new project that complies with OpenStack rules and conventions? Is there any
> chance that people could join together for the implementation?
>
> Cheers,
> Lingxian Kong (Larry)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon] Meeting at 20:00 UTC this Wednesday, 2nd November

2016-11-02 Thread Richard Jones
Hi folks,

The Horizon team will be having our next meeting at 20:00 UTC this
Wednesday, 2nd November in #openstack-meeting-3

Meeting agenda is here: https://wiki.openstack.org/wiki/Meetings/Horizon

Anyone is welcome to add agenda items and everyone interested in
Horizon is encouraged to attend.


Cheers,

Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev