Re: [openstack-dev] [oslo][mistral] Saga of process then ack and where can we go from here...

2016-06-06 Thread Renat Akhmerov

> On 04 Jun 2016, at 04:16, Doug Hellmann  wrote:
> 
> Excerpts from Joshua Harlow's message of 2016-06-03 09:14:05 -0700:
>> Deja, Dawid wrote:
>>> On Thu, 2016-05-05 at 11:08 +0700, Renat Akhmerov wrote:
 
> On 05 May 2016, at 01:49, Mehdi Abaakouk wrote:
> 
> 
> Le 2016-05-04 10:04, Renat Akhmerov a écrit :
>> No problem. Let’s not call it RPC (btw, I completely agree with that).
>> But it’s one of the messaging patterns and hence should be under
>> oslo.messaging I guess, no?
> 
> Yes and no. We currently have two APIs (rpc and notification), and
> personally I regret having the notification part in oslo.messaging.
> 
> RPC and Notification are different beasts, and both are today limited
> in terms of features because they share the same driver implementation.
> 
> Our RPC error handling is really poor; for example, Nova just puts the
> instance in ERROR when something bad occurs in the oslo.messaging layer.
> This forces the deployer/user to fix the issue manually.
> 
> Our Notification system doesn't allow fine-grained routing of messages;
> everything goes into one configured topic/queue.
> 
> And now we want to add a new one... I'm not against this idea,
> but I'm not a huge fan.
> 
> Thoughts from folks (mistral and oslo)?
>>> Also, I was not at the Summit; should I conclude that the Tooz+taskflow
>>> approach (which ensures the idempotency of the application within the
>>> library API) has not been accepted by the Mistral folks?
>> Speaking about idempotency, IMO it’s not a central question that we
>> should be discussing here. Mistral users should have a choice: if they
>> manage to make their actions idempotent, that's excellent; in many cases
>> idempotency is certainly possible, btw. If not, then they know about the
>> potential consequences.
> 
> You shouldn't mix the idempotency of the user task with the idempotency
> of a Mistral action (which will, in the end, run the user task).
> You can make your Mistral task runner implementation idempotent and make
> the workflow's behavior configurable for when the user task is interrupted
> or finishes badly, whether or not the user task itself is idempotent.
> This makes things very predictable, as the sketch after this list
> illustrates. You will know, for example:
> * whether the user task has started or not,
> * whether the error is due to a node power cut while the user task runs,
> * whether you can safely retry a non-idempotent user task on another node,
> * that you will not be impacted by rabbitmq restarts or TCP connection issues,
> * ...
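>
> A minimal sketch of that idea ('store' and 'run_user_task' are
> hypothetical stand-ins, not real Mistral APIs): persist a state
> transition before and after running the user task, so that after a
> crash you can tell whether the task ever started:
>
>     def run_task(store, task_id, run_user_task):
>         state = store.get(task_id)      # None, 'STARTED', or 'DONE'
>         if state == 'DONE':
>             return                      # already ran to completion
>         if state == 'STARTED':
>             # The runner died mid-task: only retry automatically if
>             # the user task is known to be idempotent.
>             raise RuntimeError('task %s was interrupted' % task_id)
>         store.set(task_id, 'STARTED')   # durable write *before* running
>         run_user_task()
>         store.set(task_id, 'DONE')      # durable write *after* running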
> 
> With the oslo.messaging approach, everything will just end up in a
> generic MessagingTimeout error.
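>
> For illustration, a minimal sketch of that failure mode (the topic,
> method and argument names are made up; only the oslo.messaging calls
> themselves are real):
>
>     import oslo_messaging
>     from oslo_config import cfg
>
>     transport = oslo_messaging.get_transport(cfg.CONF)
>     target = oslo_messaging.Target(topic='mistral_executor')
>     client = oslo_messaging.RPCClient(transport, target, timeout=30)
>
>     try:
>         client.call({}, 'run_action', action_id='abc123')
>     except oslo_messaging.MessagingTimeout:
>         # Did the executor never get the message, crash mid-action,
>         # or finish but fail to send the reply? The caller can't tell.
>         pass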
> 
> The RPC API already has this kind of issue. Applications have
> unfortunately dealt with that (and I think they want something better now).
> I'm just not convinced we should add a new "working queue" API in
> oslo.messaging for task scheduling that has the same issue we already
> have with RPC.
> 
> Anyway, that's your choice; if you want to rely on this poor structure,
> I will not be against it. I'm not involved in Mistral; I just want
> everybody to be aware of this.
> 
>> And even in this case there’s usually a number
>> of measures that can be taken to mitigate those consequences (rerunning
>> workflows from certain points after manually fixing problems, rollback
>> scenarios etc.).
> 
> taskflow allows you to describe and automate this kind of workflow really
> easily.
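>
> For example, a minimal taskflow sketch (the task bodies are placeholder
> prints): each task declares how to undo itself, and the engine runs the
> reverts automatically when a later task fails:
>
>     import taskflow.engines
>     from taskflow import task
>     from taskflow.patterns import linear_flow
>
>     class CreateResource(task.Task):
>         def execute(self):
>             print('creating resource')
>
>         def revert(self, **kwargs):
>             print('rolling back resource creation')
>
>     class ConfigureResource(task.Task):
>         def execute(self):
>             raise RuntimeError('boom')  # triggers revert of earlier tasks
>
>     flow = linear_flow.Flow('provision').add(CreateResource(),
>                                              ConfigureResource())
>     taskflow.engines.run(flow)  # reverts CreateResource, then re-raises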
> 
>> What I’m saying is: let’s not make that crucial decision now about
>> what a messaging framework should support or not; let’s make it more
>> flexible to account for a variety of different usage scenarios.
> 
> I think the confusion is in the "messaging" keyword. Currently,
> oslo.messaging is an "RPC" framework and a "Notification" framework built
> on top of 'messaging' frameworks.
> 
> The messaging frameworks we use are 'kombu', 'pika', 'zmq' and 'pyngus'.
> 
>> It’s normal for frameworks to give more rather than less.
> 
> I disagree. Here we mix different concepts into one library, and every
> concept has to be implemented by each 'messaging framework'.
> So we deliberately give less, to make things work the same way across
> all drivers for all APIs.
> 
>> One more thing: at the summit we were discussing the possibility to
>> define at-most-once/at-least-once individually for Mistral tasks. This
>> is in demand because there are cases where we need it; advanced users
>> may choose one or the other depending on a task/action's semantics.
>> However, it won’t be possible to implement w/o changes in the
>> underlying messaging framework.
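>>
>> The per-task choice boils down to when the message is acknowledged; a
>> minimal sketch of the two modes using kombu directly (the queue name
>> and payload handling are made up for illustration):
>>
>>     from kombu import Connection
>>
>>     def consume_one(at_most_once, run_action):
>>         with Connection('amqp://guest:guest@localhost//') as conn:
>>             queue = conn.SimpleQueue('mistral_tasks')
>>             msg = queue.get(block=True, timeout=10)
>>             if at_most_once:
>>                 msg.ack()              # a crash below loses the task
>>                 run_action(msg.payload)
>>             else:
>>                 run_action(msg.payload)
>>                 msg.ack()              # a crash above redelivers it
>>             queue.close()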
> 
> If we go that way, oslo.messaging users and Mistral users have to
>

Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-06-06 Thread Igor Zinovik
  Hello,

Aleksandr, one simple question: do I, as a plugin developer for the
upcoming Fuel 9.0, have to worry about these network-related changes or
not? HCF is approaching, but the patch that you mentioned (324307) is
still not merged. Do I need to spend time on understanding it and
changing my plugin's deployment tasks according to the netconfig.pp
refactoring?

On 6 June 2016 at 11:12, Aleksandr Didenko  wrote:

> Hi,
>
> a slightly different patch is on review now [0]. Instead of silently
> replacing the default gateway on the fly in the netconfig.pp task, it puts
> the new default gateway into Hiera. Thus we'll have idempotency for
> subsequent netconfig.pp runs even on Mongo roles. Also we'll have
> consistent network configuration data in Hiera which any plugin can rely on.
>
> I've built a custom ISO with this patch and ran a set of custom tests on
> it to cover multi-role and multi-rack cases [1] plus BVT - everything
> worked fine.
>
> Please feel free to review and comment on the patch [0].
>
> Regards,
> Alex
>
> [0] https://review.openstack.org/324307
> [1] http://paste.openstack.org/show/508319/
>
> On Wed, Jun 1, 2016 at 4:47 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> YAQL expression support for task dependencies has been added to Nailgun
>> [0]. So now it's possible to fix the network configuration idempotency
>> issue without introducing a new 'netconfig' task [1]. There will be no
>> problems with loops in the task graph in this case (tested on multi-role
>> nodes; worked fine). When we deprecate role-based deployment (even
>> emulated), we'll be able to remove all those additional conditions from
>> the manifests and remove the 'configure_default_route' task completely.
>> Please feel free to review and comment on the patch [1].
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/#/c/320861/
>> [1] https://review.openstack.org/#/c/322872/
>>
>> On Wed, May 25, 2016 at 10:39 AM, Simon Pasquier 
>> wrote:
>>
>>> Hi Adam,
>>> Maybe you want to look into network templates [1]? Although the
>>> documentation is a bit sparse, it allows you to define flexible network
>>> mappings.
>>> BR,
>>> Simon
>>> [1]
>>> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>>>
>>> On Wed, May 25, 2016 at 10:26 AM, Adam Heczko 
>>> wrote:
>>>
 Thanks Alex, will experiment with it once again although AFAIR it
 doesn't solve the thing I'd like to do.
 I'll come back to you in case of any questions.


 On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko <
 adide...@mirantis.com> wrote:

> Hey Adam,
>
> in Fuel we have the following option (checkbox) on the Network Settings tab:
>
> Assign public network to all nodes
> When disabled, public network will be assigned to controllers only
>
> So if you uncheck it (by default it's unchecked) then the public network
> and 'br-ex' will exist on controllers only. Other nodes won't even have
> the "Public" network in the node interface configuration UI.
>
> Regards,
> Alex
>
> On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
> wrote:
>
>> Hello Alex,
>> I have a question about the proposed changes.
>> Is it possible to introduce a new VLAN and associated bridge only for
>> controllers?
>> I'm thinking about the DMZ use case and the possibility to expose public
>> IPs/VIPs and API endpoints on controllers on a completely separate L2
>> network (segment VLAN/bridge) not present on any nodes other than
>> controllers.
>> Thanks.
>>
>> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
>> adide...@mirantis.com> wrote:
>>
>>> Hi folks,
>>>
>>> we had to revert those changes [0] since it's impossible to properly
>>> handle two different netconfig tasks for multi-role nodes. So everything
>>> stays as it was before - we have a single task 'netconfig' to configure
>>> the network for all roles and you don't need to change anything in your
>>> plugins. Sorry for the inconvenience.
>>>
>>> Our current plan for fixing network idempotency is to keep one task but
>>> change the 'cross-depends' parameter to a yaql_exp. This will allow us to
>>> use a single 'netconfig' task for all roles, but at the same time we'll
>>> be able to properly order it: netconfig on non-controllers will be
>>> executed only after the 'virtual_ips' task.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://review.openstack.org/#/c/320530/
>>>
>>>
>>> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hi all,

 please be aware that now we have two netconfig tasks (in Fuel 9.0+):

 - netconfig-controller - executed on controllers only
 - netconfig - executed on all other roles

Re: [openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-06 Thread Andrea Frittoli
Great job, thanks!

On Tue, 7 Jun 2016, 3:36 a.m. Masayuki Igawa, 
wrote:

> Congrats! I'm looking forward to seeing an integration/collaboration
> with openstack-health :)
>
> On Tue, Jun 7, 2016 at 8:32 AM, Buckley, Tim Jason
>  wrote:
> > Hello all,
> >
> > I'd like to announce that StackViz will now be running at the end of all
> > tempest-dsvm jobs and saving visualization output to the log server.
> >
> > StackViz is a visualization utility for generating interactive
> visualizations of
> > jobs in the OpenStack QA pipeline and aims to ease debugging and
> performance
> > analysis tasks. Currently it renders an interactive timeline for subunit
> > results and dstat data, but we are actively working to visualize more
> log types
> > in the future.
> >
> > StackViz instances are saved as a 'stackviz' directory under 'logs' for
> each job
> > run on http://logs.openstack.org/. For an example, see:
> >
> http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/
> >
> > For more information about StackViz, see the project page at:
> > https://github.com/openstack/stackviz
> >
> > Bugs can also be reported at:
> > https://bugs.launchpad.net/stackviz
> >
> > Feedback is greatly appreciated!
> >
> > Thanks,
> > Tim Buckley
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-06 Thread Na Zhu
Hi John,

I do not know a better approach. I think it is good to write all the 
parameters in the creation of a port chain; this avoids saving a lot of 
unused data in the northbound DB. We can do it that way for now, and if 
the community has opposing ideas, we can change it. What do you think?

Hi Ryan,

Do you agree with that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" , Ryan Moats 
, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" 

Date:   2016/06/06 23:36
Subject:        Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



Juno,

Let me check – my intention was that the networking-sfc OVNB driver would 
configure all aspects of the port-chain and add the parameters to the 
networking-sfc db. Once all the parameters were in place, the creation of a 
port-chain would call networking-ovn (passing a deep copy of the 
port-chain dict). Here I see networking-ovn acting only as a bridge into 
ovs/ovn (I did not add anything in the ovn plugin – not sure if that is 
the right approach). Networking-ovn calls into ovs/ovn and inserts the 
entire port-chain.

Thoughts?

j

From: Na Zhu 
Date: Monday, June 6, 2016 at 5:49 AM
To: John McDowall 
Cc: "disc...@openvswitch.org" , Ryan Moats <
rmo...@us.ibm.com>, Srilatha Tangirala , "OpenStack 
Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN

Hi John,

One question I need to confirm with you: I think the OVN flow classifier 
driver and the OVN port chain driver should call the APIs which you added 
to networking-ovn to configure the northbound db sfc tables, right? 
Looking at your networking-sfc OVN drivers, they do not call the APIs you 
added to networking-ovn - did you miss that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Na Zhu/China/IBM@IBMCN
To:     John McDowall 
Cc:     Srilatha Tangirala , OpenStack Development 
Mailing List , Ryan Moats <
rmo...@us.ibm.com>, "disc...@openvswitch.org" 
Date:   2016/06/06 14:28
Subject:        Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC and OVN



John,

Thanks for working overtime last weekend. Now we have the following work 
to do:
1. submit the design spec to networking-sfc
2. submit the RFC to the ovs community
3. debug your code changes end-to-end
4. submit the initial patch to networking-sfc
5. submit the initial patch to the ovs community
6. submit the initial patch to networking-ovn 

Do you have a plan to start #1 and #2 now? I think they can be done in 
parallel with the other tasks.
Srilatha and I can start #4 and #6; we need to look at your code changes, 
write the unit test scripts for them, and then submit to the community. 
What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To:     Na Zhu/China/IBM@IBMCN
Cc:     "disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:   2016/06/06 11:35
Subject:        Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC and OVN



Juno and team,

I have written and compiled (but not tested) the ovs/ovn interface to 
networking-ovn, and similarly I have written but not tested the IDL 
interfaces on the networking-ovn side. I will put it all together tomorrow 
and start debugging end to end. I know I am going to find a lot of issues 
as it is a major rewrite from my original interface to networking-sfc – it 
is the right path (IMHO), just a little more work than I expected.

I have merged my repos with the upstream masters and I will keep them 
synced, so if you want to take a look and start thinking about where you 
can help, it would be really appreciated.

Regards

John

From: Na Zhu 
Date: Saturday, June 4, 2016 at 6:30 AM
To: John McDowall 
Cc: "disc...@openvswitch.org" , OpenStack 
Development Mailing List , Ryan Moats <
rmo...@us.ibm.com>, Srilatha Tangirala 
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN

Hi John,

OK, please keep me posted once you are done. Thanks very much.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   John McDowall 
To:     Na Zhu/China/IB

Re: [openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-06 Thread Masayuki Igawa
Congrats! I'm looking forward to seeing an integration/collaboration
with openstack-health :)

On Tue, Jun 7, 2016 at 8:32 AM, Buckley, Tim Jason
 wrote:
> Hello all,
>
> I'd like to announce that StackViz will now be running at the end of all
> tempest-dsvm jobs and saving visualization output to the log server.
>
> StackViz is a visualization utility for generating interactive visualizations 
> of
> jobs in the OpenStack QA pipeline and aims to ease debugging and performance
> analysis tasks. Currently it renders an interactive timeline for subunit
> results and dstat data, but we are actively working to visualize more log 
> types
> in the future.
>
> StackViz instances are saved as a 'stackviz' directory under 'logs' for each 
> job
> run on http://logs.openstack.org/. For an example, see:
> 
> http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/
>
> For more information about StackViz, see the project page at:
> https://github.com/openstack/stackviz
>
> Bugs can also be reported at:
> https://bugs.launchpad.net/stackviz
>
> Feedback is greatly appreciated!
>
> Thanks,
> Tim Buckley
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-06 Thread Cody A.W. Somerville
Congratulations! Happy to see this important milestone. Awesome job!
On 6 Jun 2016 18:32, "Buckley, Tim Jason" 
wrote:

> Hello all,
>
> I'd like to announce that StackViz will now be running at the end of all
> tempest-dsvm jobs and saving visualization output to the log server.
>
> StackViz is a visualization utility for generating interactive
> visualizations of
> jobs in the OpenStack QA pipeline and aims to ease debugging and
> performance
> analysis tasks. Currently it renders an interactive timeline for subunit
> results and dstat data, but we are actively working to visualize more log
> types
> in the future.
>
> StackViz instances are saved as a 'stackviz' directory under 'logs' for
> each job
> run on http://logs.openstack.org/. For an example, see:
>
> http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/
>
> For more information about StackViz, see the project page at:
> https://github.com/openstack/stackviz
>
> Bugs can also be reported at:
> https://bugs.launchpad.net/stackviz
>
> Feedback is greatly appreciated!
>
> Thanks,
> Tim Buckley
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [probably forged email] Re: [Neutron][Kolla][ovs-discuss] error when starting neutron-openvswitch-agent service

2016-06-06 Thread hu . zhijiang
Hi Liyong,

I think that may be a consequence. At least it is not the reason for the 
following error:


2016-06-06 09:19:45.236 1 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-ex']. Exception: Exit code: 
1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: no row "int-br-ex" in table 
Interface 

because with those interfaces down, I can still execute 

ovs-vsctl --timeout=10 --oneline --format=json -- --columns=type 
list Interface int-br-ex

 successfully by hand.


Thank you,
Zhijiang



From:   "Qiao, Liyong" 
To:     "OpenStack Development Mailing List (not for usage 
questions)" , 
Date:   2016-06-06 23:01
Subject:        [probably forged email] Re: [openstack-dev] 
[Neutron][Kolla][ovs-discuss] error when starting neutron-openvswitch-agent 
service



6: ovs-system:  mtu 1500 qdisc noop state DOWN 
link/ether 3e:c8:1d:8e:b5:5b brd ff:ff:ff:ff:ff:ff 
7: br-ex:  mtu 1500 qdisc noop state DOWN 
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff 
8: br-int:  mtu 1500 qdisc noop state DOWN 
link/ether 32:0d:72:8d:d6:42 brd ff:ff:ff:ff:ff:ff 
9: br-tun:  mtu 1500 qdisc noop state DOWN 
link/ether ea:d4:72:22:e2:4f brd ff:ff:ff:ff:ff:ff
 
 
I noticed that these devices are not in the UP state; you’d better check 
them first.
 
Best Regards,
Qiao, Liyong (Eli) OTC SSG Intel

Sincerely yours,
Liyong Qiao, Open Source Technology Center, Software and Services Group, 
Intel (China) Co., Ltd.
 
 
 
From: hu.zhiji...@zte.com.cn [mailto:hu.zhiji...@zte.com.cn] 
Sent: Monday, June 06, 2016 6:54 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][Kolla][ovs-discuss] error when starting 
neutron-openvswitch-agent service
 
Hi Guys, 

I am new to Neutron, Kolla and OVS. I was trying to deploy Mitaka on 
CentOS in an all-in-one environment using Kolla. After a successful 
deployment I realized that I should disable the NetworkManager service, 
roughly following: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/3/html/Installation_and_Configuration_Guide/Disabling_Network_Manager.html


But when I disabled NetworkManager and restarted the network service (the 
host machine probably also restarted), I cannot ping from my gateway 
through the external interface. 

Here is the relevant OVS log: 

2016-06-06 09:19:37.278 1 INFO neutron.common.config [-] Logging enabled! 
2016-06-06 09:19:37.283 1 INFO neutron.common.config [-] 
/usr/bin/neutron-openvswitch-agent version 8.0.0 
2016-06-06 09:19:43.035 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Mapping physical 
network physnet1 to bridge br-ex 
2016-06-06 09:19:45.236 1 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-ex']. Exception: Exit code: 
1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: no row "int-br-ex" in table 
Interface 

2016-06-06 09:19:49.979 1 INFO neutron.agent.l2.extensions.manager 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Loaded agent 
extensions: [] 
2016-06-06 09:19:52.185 1 WARNING neutron.agent.securitygroups_rpc 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Firewall driver 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver 
doesn't accept integration_bridge parameter in __init__(): __init__() got 
an unexpected keyword argument 'integration_bridge' 
2016-06-06 09:19:53.204 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Agent initialized 
successfully, now running... 
2016-06-06 09:19:53.733 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Configuring tunnel 
endpoints to other OVS agents



I use enp0s25 as both the VIP interface and the external interface because 
the host only has one interface... 



Here is the ip addr result for enp0s25 before the deployment: 

2: enp0s25:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000 
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff 
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25 
   valid_lft 10429sec preferred_lft 10429sec 
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link 
   valid_lft forever preferred_lft forever 


Here is the ip addr result after the deployment 

2: enp0s25:  mtu 1500 qdisc pfifo_fast 
master ovs-system state UP qlen 1000 
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff 
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25 
   valid_lft 7846sec preferred_lft 7846sec 
inet 10.43.114.149/32 scope global enp0s25 
   valid_lft forever preferred_lft f

Re: [openstack-dev] New core reviewers nomination for TOSCA-Parser and or Heat-Translator project [tosca-parser][heat-translator][heat]

2016-06-06 Thread Sahdev P Zala
Thanks, core team, for your +1 votes.

Welcome to the new cores - Bob, Miguel, Bharath and Mathieu. Thanks again 
for your great contributions!!

Regards, 
Sahdev Zala




From:   Sahdev P Zala/Durham/IBM@IBMUS
To: "OpenStack Development Mailing List \(not for usage questions\)" 

Date:   05/31/2016 09:30 AM
Subject:[openstack-dev] New core reviewers nomination for 
TOSCA-Parser and or Heat-Translator project 
[tosca-parser][heat-translator][heat]



Hello TOSCA-Parser and Heat-Translator core team,

I would like to nominate the following active contributors to the 
tosca-parser and/or heat-translator projects as core reviewers to speed up 
development. They have been contributing for more than six months and have 
remained among the top five contributors for the mentioned project(s).

Please reply to this thread or email me with your vote (+1 or -1) by EOD 
June 4th. 

[1] Bob Haddleton: Bob is a lead developer for the TOSCA NFV-specific 
parsing and translation in the tosca-parser and heat-translator projects 
respectively. Bob actively participates in IRC meetings and other 
discussions via email or IRC. He is also a core reviewer in the OpenStack 
Tacker project. I would like to nominate him for a core reviewer position 
in both tosca-parser and heat-translator. 

[2] Miguel Caballer: Miguel has been familiar with TOSCA for a long time. 
He is an asset to the tosca-parser project and has been bringing a lot of 
new use cases to it. He is currently the second lead developer overall for 
the project. I would like to nominate him for a core reviewer position in 
tosca-parser.

[3] Bharath Thiruveedula: Bharath is actively contributing to the 
heat-translator project. He knows the project well and has implemented 
important blueprints during the Mitaka cycle, including enhancements to the 
OSC plugin, automatic deployment of translated templates, and dynamic 
querying of flavors and images. Bharath actively participates in IRC 
meetings and other discussions via email or IRC. I would like to nominate 
him for a core reviewer position in heat-translator. 

[4] Mathieu Velten: Mathieu has been familiar with TOSCA for a long time as 
well. He brings new use cases regularly and actively works on enhancing 
the heat-translator project with the needed implementation. He also uses 
the translated templates in real deployments with Heat for his work on the 
Indigo DataCloud project [5]. He knows the project well and was the second 
lead developer for the project during the Mitaka cycle. I would like to 
nominate him for the core reviewer position in heat-translator. 

[1] 
http://stackalytics.com/?release=all&module=tosca-parser&metric=commits&user_id=bob-haddleton
and 
http://stackalytics.com/?release=all&module=heat-translator&metric=commits&user_id=bob-haddleton

[2] 
http://stackalytics.com/?release=all&module=tosca-parser&metric=commits&user_id=micafer1

[3] 
http://stackalytics.com/?release=all&module=heat-translator&metric=commits&user_id=bharath-ves

[4] 
http://stackalytics.com/?release=all&metric=commits&module=heat-translator&user_id=matmaul

[5] https://www.indigo-datacloud.eu/

Thanks! 

Regards, 
Sahdev Zala
RTP, NC
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar][zaqar-ui][i18n] Translation enabled

2016-06-06 Thread Shuu Mutou
Hi everyone, 

Now we can translate Zaqar-UI on Zanata - please translate Zaqar-UI into your 
native language!!
Translation is another good way to review Zaqar-UI, so I hope for your help.

See also https://wiki.openstack.org/wiki/I18nTeam


Best regards, 

Shu Muto


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-06 Thread Joshua Hesketh
Awesome work to all involved. This is really neat! :-)

On Tue, Jun 7, 2016 at 9:32 AM, Buckley, Tim Jason <
timothy.jas.buck...@hpe.com> wrote:

> Hello all,
>
> I'd like to announce that StackViz will now be running at the end of all
> tempest-dsvm jobs and saving visualization output to the log server.
>
> StackViz is a visualization utility for generating interactive
> visualizations of
> jobs in the OpenStack QA pipeline and aims to ease debugging and
> performance
> analysis tasks. Currently it renders an interactive timeline for subunit
> results and dstat data, but we are actively working to visualize more log
> types
> in the future.
>
> StackViz instances are saved as a 'stackviz' directory under 'logs' for
> each job
> run on http://logs.openstack.org/. For an example, see:
>
> http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/
>
> For more information about StackViz, see the project page at:
> https://github.com/openstack/stackviz
>
> Bugs can also be reported at:
> https://bugs.launchpad.net/stackviz
>
> Feedback is greatly appreciated!
>
> Thanks,
> Tim Buckley
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Revert "Migrate tripleo to centos-7"

2016-06-06 Thread Dan Prince
Sending it again to [TripleO].

On Mon, 2016-06-06 at 20:06 -0400, Dan Prince wrote:
> Hi all,
> 
> Having a bit of a CI outage today due to (I think) the switch to
> CentOS
> Jenkins slaves. I'd like to suggest that we revert that quickly to
> keep
> things moving in TripleO:
> 
> https://review.openstack.org/326182 Revert "Migrate tripleo to
> centos-
> 7"
> 
> And then perhaps we can follow up with a bit more CentOS 7 testing
> before we switch over completely.
> 
> Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Revert "Migrate tripleo to centos-7"

2016-06-06 Thread Dan Prince
Hi all,

Having a bit of a CI outage today due to (I think) the switch to CentOS
Jenkins slaves. I'd like to suggest that we revert that quickly to keep
things moving in TripleO:

https://review.openstack.org/326182 Revert "Migrate tripleo to centos-
7"

And then perhaps we can follow up with a bit more CentOS 7 testing
before we switch over completely.

Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-06 Thread Devananda van der Veen

On 06/06/2016 01:44 PM, Kris G. Lindgren wrote:
> Hi ironic folks,
> As I'm trying to explore how GoDaddy can use ironic I've created the following
> in an attempt to document some of my concerns, and I'm wondering if you folks
> could help me identify ongoing work to solve these (or alternatives?)
> List of concerns with ironic:

Hi Kris,

There is a lot of ongoing work in and around the Ironic project. Thanks for
diving in and for sharing your concerns; you're not alone.

I'll respond to each group of concerns, as some of these appear quite similar to
each other and align with stuff we're already doing. Hopefully I can provide
some helpful background to where the project is at today.

> 
> 1.) Nova <-> ironic interactions generally seem terrible?

These two projects are coming at the task of managing "compute" with
significantly different situations and we've been working, for the last ~2
years, to build a framework that can provide both virtual and physical resources
through one API. It's not a simple task, and we have a lot more to do.


>   - How to accept raid config and partitioning(?) from end users? There
> seems to be no agreed-upon method between nova/ironic yet.

Nova expresses partitioning in a very limited way on the flavor. You get root,
swap, and ephemeral partitions -- and that's it. Ironic honors those today, but
they're pinned on the flavor definition, e.g. by the cloud admin (or whoever
can define the flavor).

If your users need more complex partitioning, they could create additional
partitions after the instance is created. This limitation within Ironic exists,
in part, because the project's goal is to provide hardware through the OpenStack
Compute API -- which doesn't express arbitrary partitionability. (If you're
interested, there is a lengthier and more political discussion about whether the
cloud should support "pets" and whether arbitrary partitioning is needed for
"cattle".)


RAID configuration isn't something that Nova allows its users to choose today
- it doesn't fit in the Nova model of "compute", and there is, to my knowledge,
nothing in the Nova API to allow its input. We've discussed this a little bit,
but so far settled on leaving it up to the cloud admin to set this in Ironic.
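
For reference, a minimal sketch of what that admin-side setting looks like
(the sizes and RAID levels are example values following Ironic's documented
target_raid_config structure):

    # Set per node by the cloud admin ahead of time, outside of Nova.
    target_raid_config = {
        "logical_disks": [
            {"size_gb": 100, "raid_level": "1", "is_root_volume": True},
            {"size_gb": "MAX", "raid_level": "5"},
        ]
    }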

There has been discussion with the Cinder community over ways to express volume
spanning and mirroring, applied to a machine's local disks, but these
discussions didn't result in any traction.

There's also been discussion of ways we could do ad-hoc changes in RAID level,
based on flavor metadata, during the provisioning process (rather than ahead of
time) but no code has been done for this yet, AFAIK.

So, where does that leave us? With the "explosion of flavors" that you
described. It may not be ideal, but that is the common ground we've reached.

>   - How to run multiple conductors/nova-computes? Right now as far as I can
> tell all of ironic is fronted by a single nova-compute, which I will have to
> manage via a cluster technology between two or more nodes. Because of this and
> the way host-aggregates work I am unable to expose fault domains for ironic
> instances (all of ironic can only be under a single AZ (the az that is
> assigned to the nova-compute node)). Unless I create multiple nova-compute
> servers and manage multiple independent ironic setups. This makes
> on-boarding/querying of hardware capacity painful.

Yep. It's not ideal, and the community is very well aware of, and actively
working on, this limitation. It also may not be as bad as you may think. The
nova-compute process doesn't do very much, and tests show it handling some
thousands of ironic nodes fairly well in parallel. Standard active-passive
management of that process should suffice.

A lot of design work has been done to come up with a joint solution by folks on
both the Ironic and Nova teams.
http://specs.openstack.org/openstack/nova-specs/specs/newton/approved/ironic-multiple-compute-hosts.html

As a side note, it's possible (though not tested, recommended, or well
documented) to run more than one nova-compute. See
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py

>   - Nova appears to be forcing a "we are compute, as long as compute means
> VMs" view, which means that we will have a baremetal flavor explosion (ie the
> mismatch between baremetal and VMs).
>   - This is a feeling I got from the ironic-nova cross project meeting in
> Austin.  A general example goes back to the raid config above. I can configure
> a single piece of hardware many different ways, but to fit into nova's world
> view I need to have many different flavors exposed to the end-user. In this
> way many flavors can map back to a single piece of hardware with just a
> slightly different configuration applied. So how am I supposed to do a single
> server with 6 drives as either: Raid 1 + Raid 5, Raid 5, Raid 10, Raid 6, or
> JBOD.  Seems like I would need to pre-mark out servers that were going to be a specific 
>

Re: [openstack-dev] [neutron][SFC]

2016-06-06 Thread Cathy Zhang
Hi Alioune,

Which OVS version are you using?
Try openvswitch version 2.4.0 and restart the openvswitch server before 
installing devstack.

Cathy

From: Alioune [mailto:baliou...@gmail.com]
Sent: Friday, June 03, 2016 9:07 AM
To: openstack-dev@lists.openstack.org
Cc: Cathy Zhang
Subject: [openstack-dev][neutron][SFC]

Problem with OpenStack SFC
Hi all,
I've installed OpenStack SFC with devstack and all modules are correctly 
running except the neutron L2 agent.

After a "screen -rd", it seems that there is a conflict between the L2 agent 
and SFC (see the trace below).
I solved the issue with "sudo ovs-vsctl set bridge br 
protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13" on every Open vSwitch 
bridge (br-int, br-ex, br-tun and br-mgmt0).
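
A minimal sketch of that workaround applied to every affected bridge
(assuming ovs-vsctl is on PATH and the script runs with root privileges):

    import subprocess

    # Enable OpenFlow13 on each bridge so 'ovs-ofctl -O openflow13' works.
    for bridge in ('br-int', 'br-ex', 'br-tun', 'br-mgmt0'):
        subprocess.check_call([
            'ovs-vsctl', 'set', 'bridge', bridge,
            'protocols=OpenFlow10,OpenFlow11,OpenFlow12,OpenFlow13',
        ])
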
I would like to know:
  - if someone knows why this error arises?
  - is there another way to solve it?

Regards,

2016-06-03 12:51:56.323 WARNING 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] OVS is dead. 
OVSNeutronAgent will keep running and checking OVS status periodically.
2016-06-03 12:51:56.330 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop - 
iteration:4722 completed. Processed ports statistics: {'regular': {'updated': 
0, 'added': 0, 'removed': 0}}. Elapsed:0.086 from (pid=12775) 
loop_count_and_wait 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1680
2016-06-03 12:51:58.256 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Agent rpc_loop - 
iteration:4723 started from (pid=12775) rpc_loop 
/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1732
2016-06-03 12:51:58.258 DEBUG neutron.agent.linux.utils 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Running command (rootwrap 
daemon): ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23'] 
from (pid=12775) execute_rootwrap_daemon 
/opt/stack/neutron/neutron/agent/linux/utils.py:101
2016-06-03 12:51:58.311 ERROR neutron.agent.linux.utils 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
Exit code: 1
Stdin:
Stdout:
Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

2016-06-03 12:51:58.323 ERROR networking_sfc.services.sfc.common.ovs_ext_lib 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None]
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
Exit code: 1
Stdin:
Stdout:
Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)

2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Traceback (most recent call last):
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib   
File 
"/opt/stack/networking-sfc/networking_sfc/services/sfc/common/ovs_ext_lib.py", 
line 125, in run_ofctl
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
 process_input=process_input)
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib   
File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 159, in execute
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
 raise RuntimeError(m)
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
RuntimeError:
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Command: ['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23']
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Exit code: 1
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stdin:
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stdout:
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
Stderr: 
2016-06-03T12:51:58Z|1|vconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: 
version negotiation failed (we support version 0x04, peer supports version 0x01)
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib 
ovs-ofctl: br-int: failed to connect to socket (Broken pipe)
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
2016-06-03 12:51:58.323 TRACE networking_sfc.services.sfc.common.ovs_ext_lib
2016-06-03 12:51:58.335 ERROR networking_sfc.services.sfc.common.ovs_ext_lib 
[req-1258bbbc-7211-4cfd-ab7c-8b856604f768 None None] Unable to execute 
['ovs-ofctl', '-O openflow13', 'dump-flows', 'br-int', 'table=23'].
2016-

[openstack-dev] [Infra] Meeting Tuesday June 7th at 19:00 UTC

2016-06-06 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday June 7th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-31-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-31-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-05-31-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] StackViz is now enabled for all devstack-gate jobs

2016-06-06 Thread Buckley, Tim Jason
Hello all,

I'd like to announce that StackViz will now be running at the end of all
tempest-dsvm jobs and saving visualization output to the log server.

StackViz is a visualization utility for generating interactive visualizations of
jobs in the OpenStack QA pipeline and aims to ease debugging and performance
analysis tasks. Currently it renders an interactive timeline for subunit
results and dstat data, but we are actively working to visualize more log types
in the future.

StackViz instances are saved as a 'stackviz' directory under 'logs' for each job
run on http://logs.openstack.org/. For an example, see:

http://logs.openstack.org/07/212207/8/check/gate-tempest-dsvm-full/2d30217/logs/stackviz/

For more information about StackViz, see the project page at:
https://github.com/openstack/stackviz

Bugs can also be reported at:
https://bugs.launchpad.net/stackviz

Feedback is greatly appreciated!

Thanks,
Tim Buckley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] stepping down from core

2016-06-06 Thread Michał Jastrzębski
Damn, bad news:( All the best Jeff!

On 6 June 2016 at 17:57, Vikram Hosakote (vhosakot)  wrote:
> Thanks for all the contributions to kolla and good luck Jeff!
>
> Regards,
> Vikram Hosakote
> IRC: vhosakot
>
> From: "Steven Dake (stdake)" 
> Reply-To: OpenStack Development Mailing List
> 
> Date: Monday, June 6, 2016 at 6:14 PM
> To: OpenStack Development Mailing List 
> Subject: Re: [openstack-dev] [kolla] stepping down from core
>
> Jeff,
>
> Thanks for the notification.  Likewise it has been a pleasure working with
> you over the last 3 years on Kolla.  I've removed you from gerrit.
>
> You have made a big impact on Kolla.  For folks that don't know, at one
> point Kolla was nearly dead, and Jeff was one of our team of 3 that stuck
> to it.  Without Jeff to carry the work forward, OpenStack deployment in
> containers would have been set back years.
>
> Best wishes on what you work on next.
>
> Regards
> -steve
>
> On 6/6/16, 12:36 PM, "Jeff Peeler"  wrote:
>
> Hi all,
>
> This is my official announcement to leave core on Kolla /
> Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
> we'll cross paths again!
>
> Jeff
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] stepping down from core

2016-06-06 Thread Vikram Hosakote (vhosakot)
Thanks for all the contributions to kolla and good luck Jeff!

Regards,
Vikram Hosakote
IRC: vhosakot

From: "Steven Dake (stdake)" mailto:std...@cisco.com>>
Reply-To: OpenStack Development Mailing List 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, June 6, 2016 at 6:14 PM
To: OpenStack Development Mailing List 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [kolla] stepping down from core

Jeff,

Thanks for the notification.  Likewise it has been a pleasure working with
you over the last 3 years on Kolla.  I've removed you from gerrit.

You have made a big impact on Kolla.  For folks that don't know, at one
point Kolla was nearly dead, and Jeff was one of our team of 3 that stuck
to it.  Without Jeff to carry the work forward, OpenStack deployment in
containers would have been set back years.

Best wishes on what you work on next.

Regards
-steve

On 6/6/16, 12:36 PM, "Jeff Peeler"  wrote:

Hi all,

This is my official announcement to leave core on Kolla /
Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
we'll cross paths again!

Jeff

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-06 Thread Gregory Haynes
On Mon, Jun 6, 2016, at 05:44 PM, Gregory Haynes wrote:
>
> On Mon, Jun 6, 2016, at 05:31 PM, Michael Still wrote:
>> On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:
>>> Hello all,
>>>
>>> At Rackspace we're running into an interesting problem: Consider
>>> a user
>>> who boots an instance in Nova with an image which only supports SSH
>>> public-key authentication, but the user doesn't provide a public
>>> key in
>>> the boot request. As far as I understand it, today Nova will happily
>>> boot that image and it may take the user some time to realize their
>>> mistake when they can't login to the instance.
>>
>> What about images where the authentication information is inside the
>> image? For example, there's just a standard account baked in that
>> everyone knows about? In that case Nova doesn't need to inject
>> anything into the instance, and therefore the metadata doesn't need
>> to supply anything.
>
> We have an element in diskimage-builder[1] which allows a user to pass
> a kernel boot param to inject an ssh key if needed for a reason
> like this. Obviously, this wouldn't 'just work' in any normal cloud
> deploy since the kernel boot params are baked into the image itself
> (this is currently useful to ironic users who boot ramdisks) but maybe
> the pattern is helpful: check something once at boot time via an init
> script and that's it. The downside being that a user has to reboot the
> image to inject the key, but IMO it's a huge decrease in complexity
> (over something like file injection) for something a user who just
> booted a new image should be OK with.
>
> Cheers,
> Greg
 
Looks like I left out the actual useful info:
 
[1]:http://docs.openstack.org/developer/diskimage-builder/elements/dynamic-login/README.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-06 Thread Gregory Haynes
 
On Mon, Jun 6, 2016, at 05:31 PM, Michael Still wrote:
> On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:
>> Hello all,
>>
>>  At Rackspace we're running into an interesting problem: Consider
>>  a user
>>  who boots an instance in Nova with an image which only supports SSH
>>  public-key authentication, but the user doesn't provide a public
>>  key in
>>  the boot request. As far as I understand it, today Nova will happily
>>  boot that image and it may take the user some time to realize their
>>  mistake when they can't login to the instance.
>
> What about images where the authentication information is inside the
> image? For example, there's just a standard account baked in that
> everyone knows about? In that case Nova doesn't need to inject
> anything into the instance, and therefore the metadata doesn't need to
> supply anything.
 
We have an element in diskimage-builder[1] which allows a user to pass a
kernel boot param to inject an ssh key if needed for a reason like
this. Obviously, this wouldn't 'just work' in any normal cloud deploy
since the kernel boot params are baked into the image itself (this is
currently useful to ironic users who boot ramdisks) but maybe the
pattern is helpful: check something once at boot time via an init script
and that's it. The downside being that a user has to reboot the image to
inject the key, but IMO it's a huge decrease in complexity (over
something like file injection) for something a user who just booted a
new image should be OK with.
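
A rough sketch of that boot-time pattern (the 'sshkey=' parameter name,
target user and paths are assumptions for illustration, not necessarily
what the diskimage-builder element actually does):

    import os
    import re

    # Run once from an init script: read an SSH public key from the
    # kernel command line and append it to root's authorized_keys.
    def install_key_from_cmdline(home='/root'):
        with open('/proc/cmdline') as f:
            cmdline = f.read()
        match = re.search(r'sshkey="([^"]+)"', cmdline)
        if not match:
            return  # no key supplied; nothing to do
        ssh_dir = os.path.join(home, '.ssh')
        if not os.path.isdir(ssh_dir):
            os.makedirs(ssh_dir, 0o700)
        with open(os.path.join(ssh_dir, 'authorized_keys'), 'a') as f:
            f.write(match.group(1) + '\n')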
 
Cheers,
Greg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-06 Thread Michael Still
On Tue, Jun 7, 2016 at 7:41 AM, Clif Houck  wrote:

> Hello all,
>
> At Rackspace we're running into an interesting problem: Consider a user
> who boots an instance in Nova with an image which only supports SSH
> public-key authentication, but the user doesn't provide a public key in
> the boot request. As far as I understand it, today Nova will happily
> boot that image and it may take the user some time to realize their
> mistake when they can't login to the instance.
>

What about images where the authentication information is inside the image?
For example, there's just a standard account baked in that everyone knows
about? In that case Nova doesn't need to inject anything into the instance,
and therefore the metadata doesn't need to supply anything.

Cheers,
Michael

-- 
Rackspace Australia
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] stepping down from core

2016-06-06 Thread Steven Dake (stdake)
Jeff,

Thanks for the notification.  Likewise it has been a pleasure working with
you over the last 3 years on Kolla.  I've removed you from gerrit.

You have made a big impact on Kolla.  For folks that don't know, at one
point Kolla was nearly dead, and Jeff was one of our team of 3 that stuck
to it.  Without Jeff to carry the work forward, OpenStack deployment in
containers would have been set back years.

Best wishes on what you work on next.

Regards
-steve

On 6/6/16, 12:36 PM, "Jeff Peeler"  wrote:

>Hi all,
>
>This is my official announcement to leave core on Kolla /
>Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
>we'll cross paths again!
>
>Jeff
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-06 Thread Clint Byrum
Excerpts from Kris G. Lindgren's message of 2016-06-06 20:44:26 +:
> Hi ironic folks,
> As I'm trying to explore how GoDaddy can use ironic I've created the 
> following in an attempt to document some of my concerns, and I'm wondering if 
> you folks could help myself identity ongoing work to solve these (or 
> alternatives?)

Hi Kris. I've been using Ironic in various forms for a while, and I can
answer a few of these things.

> List of concerns with ironic:
> 
> 1.) Nova <-> ironic interactions generally seem terrible?

I don't know if I'd call it terrible, but there's friction. Things that
are unchangeable on hardware are just software configs in VMs (like mac
addresses, overlays, etc), and things that make no sense in VMs are
pretty standard on servers (trunked vlans, bonding, etc).

One way we've gotten around it is by using Ironic standalone via
Bifrost[1]. This deploys Ironic in wide open auth mode on 127.0.0.1,
and includes playbooks to build config drives and deploy images in a
fairly rudimentary way without Nova.

I call this the "better than Cobbler" way of getting a toe into the
Ironic waters.

[1] https://github.com/openstack/bifrost

>   - How to accept raid config and partitioning(?) from end users? There 
> seems to be no agreed-upon method between nova/ironic yet.

AFAIK accepting it from the users just isn't solved. Administrators
do have custom ramdisks that they boot to pre-configure RAID during
enrollment.

>   - How to run multiple conductors/nova-computes? Right now as far as I can 
> tell all of ironic is fronted by a single nova-compute, which I will have to 
> manage via a cluster technology between two or more nodes. Because of this 
> and the way host-aggregates work I am unable to expose fault domains for 
> ironic instances (all of ironic can only be under a single AZ (the az that is 
> assigned to the nova-compute node)). Unless I create multiple nova-compute 
> servers and manage multiple independent ironic setups. This makes 
> on-boarding/querying of hardware capacity painful.

The nova-compute does almost nothing. It really just talks to the
scheduler to tell it what's going on in Ironic. If it dies, deploys
won't stop. You can run many many conductors and spread load and fault
tolerance among them easily. I think for multiple AZs though, you're
right, there's no way to expose that. Perhaps it can be done with cells,
which I think Rackspace's OnMetal uses (but I'll let them refute or
confirm that).

Seems like the virt driver could be taught to be AZ-aware and some
metadata in the server record could allow AZs to go through to Ironic.

>   - Nova appears to be forcing a "we are 'compute' as long as 'compute' means
> VMs" view, which means that we will have a baremetal flavor explosion (ie the
> mismatch between baremetal and VM).
>   - This is a feeling I got from the ironic-nova cross project meeting in
> Austin.  A general example goes back to the raid config above. I can configure
> a single piece of hardware many different ways, but to fit into nova's world
> view I need to have many different flavors exposed to the end-user.  In this
> way many flavors can map back to a single piece of hardware with just a
> slightly different configuration applied. So how am I supposed to do a single
> server with 6 drives as either: Raid 1 + Raid 5, Raid 5, Raid 10, Raid 6, or
> JBOD?  Seems like I would need to pre-mark out servers that were going to be a
> specific raid level.  Which means that I need to start managing additional
> sub-pools of hardware just to deal with how the end user wants the raid
> configured; this is pretty much a non-starter for us.  I have not really heard
> of what's being done on this specific front.

You got that right. Perhaps people are comfortable with this limitation.
It is at least simple.

> 
> 2.) Inspector:
>   - IPA service doesn't gather port/switching information
>   - Inspection service doesn't process port/switching information, which
> means that it won't add it to ironic.  Which makes managing network swinging
> of the host a non-starter.  As I would inspect the host – then modify the
> ironic record to add the details about what port/switch the server is
> connected to from a different source.  At that point why wouldn't I just
> onboard everything through the API?
>   - Doesn't grab hardware disk configurations. If the server has multiple
> raids (r1 + r5), it only reports the boot raid disk capacity.
>   - Inspection is geared towards using a different network and dnsmasq
> infrastructure than what is in use for ironic/neutron.  Which also means that
> in order to not conflict with dhcp requests for servers in ironic I need to
> use different networks.  Which also means I now need to handle swinging
> server ports between different networks.
> 
> 3.) IPA image:
>   - Default build stuff is pinned to extremely old versions due to gate
> failure issues. So I cannot work without a fork for onboarding of servers due
> to the fact that IPMI modules a

[openstack-dev] [nova] Using image metadata to sanity check supplied authentication data at nova 'create' or 'recreate' time?

2016-06-06 Thread Clif Houck
Hello all,

At Rackspace we're running into an interesting problem: Consider a user
who boots an instance in Nova with an image which only supports SSH
public-key authentication, but the user doesn't provide a public key in
the boot request. As far as I understand it, today Nova will happily
boot that image and it may take the user some time to realize their
mistake when they can't login to the instance.

I've been thinking about a solution to this problem. Ideally, the Nova
API would quickly return an HTTP code indicating a problem with the
request and reject the `create` or `recreate` request if the proper
credentials were not included as part of the request.

So in the example above, instead of Nova accepting the create request,
Nova would check the requested image's meta-data and ensure at least
one form of authentication is supported AND has credentials available
to place on the image during provisioning. Basically, ensure the
requester has a way to login remotely.
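
To make the idea concrete, here is a minimal sketch of such a check. The
metadata key and helper names are hypothetical; nothing like this exists in
Nova today, it only illustrates the proposed create-time validation:

    # Hypothetical image property (e.g. "ssh-pubkey,password"); not a real
    # Glance/Nova key, purely illustrative of the proposal above.
    AUTH_METADATA_KEY = 'supported_auth_methods'

    def validate_auth_credentials(image_meta, key_name, admin_password):
        """Raise if the image declares auth methods but the create/recreate
        request carries no credential matching any of them."""
        methods = image_meta.get('properties', {}).get(AUTH_METADATA_KEY)
        if not methods:
            return  # image declares nothing; keep today's behavior
        for method in methods.split(','):
            if method == 'ssh-pubkey' and key_name:
                return
            if method == 'password' and admin_password:
                return
        raise ValueError('No credentials supplied for any authentication '
                         'method supported by this image: %s' % methods)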

I've put up a short specification on this proposed addition here:
https://review.openstack.org/#/c/326073/
and the blueprint is here:
https://blueprints.launchpad.net/nova/+spec/auth-based-on-image-metadata

I think one of the glaring weaknesses of this proposal is it would
require a call to the image API to get image meta-data during `create`
or `recreate`. This could be alleviated by caching image meta-data in
Nova, since I wouldn't expect image meta-data to change often.

There's also the question of the image meta-data itself. I don't think
there is any existing standard to describe, using metadata, what remote
login/authentication methods a particular image supports. One way could
be to provide a set of configuration options for the operator to
define. That way, each operator could define their own metadata
describing each authentication method supported by their images.

Hoping this will elicit some opinions on how to go about solving this,
or if I've missed something that already exists that solves this
problem.

Any thoughts welcome.

Thanks,
Clif Houck

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Openstack support for bump-in-the-wire functions

2016-06-06 Thread Farhad Sunavala
Hi Sean,
networking-sfc does not support bump-in-the-wire functions currently, as I
mentioned. The service functions are essentially L3 (ie. the MAC destination
address is changed to the SF and the service function then sources the packet
with its MAC address).
What I am looking for is bump-in-the-wire service functions that pass the
original packet untouched.

This is what networking-sfc does currently, and it does it really well:

    A       SF       B
    |      |  |      |
    1      2  3      4
    ------------------
           OVS

A sends a packet to B with dst MAC = B and src MAC = A.
The flow-classifier matches the packet and realizes it needs to be sent to SF
(service function).
networking-sfc changes the MAC DA to SF and sends the packet to OF port 2.
SF does its work on the packet and sends it out to port 3 with src MAC = SF.
This is perfectly fine and normal operation, and networking-sfc does it great.

Now, imagine if "SF" were a bump-in-the-wire function (ie. it receives a
packet, does its work on the packet and then sends the packet unmodified to
B):

    A       SF       B
    |      |  |      |
    1      2  3      4
    ------------------
           OVS

So, with bump-in-the-wire, the following happens:
A sends a packet to B with dst MAC = B and src MAC = A.
The flow-classifier matches the packet and realizes it needs to be sent to SF
(service function).
The packet is sent unmodified to SF (dst MAC = B and src MAC = A) on OF
port 2.
SF does its work on the packet and sends it to OF port 3 unmodified
(dst MAC = B and src MAC = A).

Now, the following issues come into play:
1. Bridge learning gone bad. On br-int (OVS), when the packet hits any flow
with the NORMAL action, it will learn the src MAC address of the packet and
the port it arrived on.  First it learnt that src MAC A is on port 1; when SF
sends the packet back on port 3, br-int will now think that src MAC A is on
port 3.
2. Infinite loop issues. BUM (broadcast/unknown-unicast/multicast) packets
flooded to SF will essentially go in an infinite loop unless proper flows are
inserted to avoid them being sent to SF.

This is not really a service chain issue but a basic issue of how to support
bump-in-the-wire functions with OpenStack using OVS as the ML2 plugin.
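
For illustration, the kind of explicit OVS flows needed to make this work
might look like the following (a rough sketch only: the bridge name, port
numbers and MAC addresses are made up, and a complete solution would also
have to handle ARP, DHCP and other BUM traffic):

    # A on port 1 (MAC aa:aa:aa:aa:aa:0a), B on port 4 (MAC bb:bb:bb:bb:bb:0b),
    # SF attached on ports 2 and 3. Steer chained traffic explicitly so it
    # never hits the NORMAL action and br-int never mis-learns A's MAC:
    ovs-ofctl add-flow br-int \
      "priority=100,in_port=1,dl_src=aa:aa:aa:aa:aa:0a,dl_dst=bb:bb:bb:bb:bb:0b,actions=output:2"
    ovs-ofctl add-flow br-int \
      "priority=100,in_port=3,dl_src=aa:aa:aa:aa:aa:0a,dl_dst=bb:bb:bb:bb:bb:0b,actions=output:4"
    # Keep flooded (BUM) traffic away from the SF ports to break the loop:
    ovs-ofctl mod-port br-int 2 no-flood
    ovs-ofctl mod-port br-int 3 no-flood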

thanks,
Farhad.

On Monday, June 6, 2016 1:43 PM, Sean M. Collins  wrote:
 

 Take a look at the networking-sfc project.
-- 
Sean M. Collins


  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC and OVN

2016-06-06 Thread Srilatha Jandhyala
Hi John,

To get started with adding test scripts, I was trying to work out one
end-to-end flow with the latest code from your private repos and found the
following.

create_port_chain is calling _create_ovn_vnf, which in turn calls:

    self._ovn.create_lservice(
        lservice_name='sfi-%s' % ovn_info['id'],
        lswitch_name=lswitch_name,
        name=ovn_info['name'],
        app_port=ovn_info['app_port_id'],
        in_port=ovn_info['in_port_id'],
        out_port=ovn_info['out_port_id'])

I could not find create_lservice() in networking-sfc or networking-ovn
repos. Are you planning to move OVN related apis(ex:_create_ovn_vnf) from
SFC driver to networking-ovn?

If you think we should write unit test scripts to test the apis first,
please let us know which apis we should consider first.

Please let us know the best way to proceed with writing the test scripts.

Thanks,

Srilatha.

On Mon, Jun 6, 2016 at 8:36 AM, John McDowall <
jmcdow...@paloaltonetworks.com> wrote:

> Juno,
>
> Let me check – my intention was that the networking-sfc OVNB driver would
> configure all aspects of the port-chain and add the parameters to the
> networking-sfc db. Once all the parameters were in, the creation of a
> port-chain would call networking-ovn (passing a deep copy of the port-chain
> dict). Here I see networking-ovn acting only as a bridge into ovs/ovn (I
> did not add anything in the ovn plugin – not sure if that is the right
> approach). Networking-ovn calls into ovs/ovn and inserts the entire
> port-chain.
>
> Thoughts?
>
> j
>
> From: Na Zhu 
> Date: Monday, June 6, 2016 at 5:49 AM
> To: John McDowall 
> Cc: "disc...@openvswitch.org" , Ryan Moats <
> rmo...@us.ibm.com>, Srilatha Tangirala , "OpenStack
> Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
>
> Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn]
> [networking-sfc] SFC and OVN
>
> Hi John,
>
> One question I need to confirm with you: I think the ovn flow classifier
> driver and ovn port chain driver should call the APIs which you added to
> networking-ovn to configure the northbound db sfc tables, right? I see that
> your networking-sfc ovn drivers do not call the APIs you added to
> networking-ovn; did you miss that?
>
>
>
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New
> District, Shanghai, China (201203)
>
>
>
> From: Na Zhu/China/IBM@IBMCN
> To: John McDowall 
> Cc: Srilatha Tangirala , OpenStack
> Development Mailing List , Ryan Moats <
> rmo...@us.ibm.com>, "disc...@openvswitch.org" 
> Date: 2016/06/06 14:28
> Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn]
> [networking-sfc] SFC and OVN
> --
>
>
>
> John,
>
> Thanks for working overtime last weekend; now we have the following
> work items to do:
> 1, submit design spec to networking-sfc
> 2, submit the RFC to ovs community
> 3, debug end-to-end about your code changes.
> 4, submit the initial patch to networking-sfc
> 5, submit the initial patch to ovs community
> 6, submit the initial patch to networking-ovn
>
> Do you have a plan to start #1 and #2 now? I think they can be done in
> parallel with the other tasks.
> Srilatha and I can start #4 and #6; we need to look at your code changes,
> write the unit test scripts for them, and then submit them to the
> community. What do you think?
>
>
>
>
> Regards,
> Juno Zhu
> IBM China Development Labs (CDL) Cloud IaaS Lab
> Email: na...@cn.ibm.com
> 5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New
> District, Shanghai, China (201203)
>
>
>
> From: John McDowall 
> To: Na Zhu/China/IBM@IBMCN
> Cc: "disc...@openvswitch.org" ,
> "OpenStack Development Mailing List" ,
> Ryan Moats , Srilatha Tangirala 
> Date: 2016/06/06 11:35
> Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc]
> SFC and OVN
> --
>
>
>
> Juno and team,
>
> I have written and compiled (but not tested) the ovs/ovn interface to
> networking-ovn and similarly I have written but not tested the IDL
> interfaces on the networking-ovn side. I will put it all together tomorrow
> and start debugging end to end. I know I am going to find a lot of issues
> as it is a major rewrite from my original interface to networking-sfc – it
> is the right path (IMHO) just a little more work than I expected.
>
> I have merged my repos with the upstream masters and I will keep them
> sync’ed so if you want to take a look and start thinking where you can help
> it would be really appreciated.
>
> Regards
>
> John
>
> From: Na Zhu 
> Date: Saturday, June 4, 2016 at 6:30 AM
> To: John McDowall 
> Cc: "disc...@openvswitch.org" , OpenStack
> Development Mailing List <openstack-dev@lists.opensta

Re: [openstack-dev] [TripleO] Undercloud Configuration Wizard

2016-06-06 Thread Ben Nemec
On 05/12/2016 03:36 AM, Dmitry Tantsur wrote:
> On 05/11/2016 06:19 PM, Ben Nemec wrote:
>> Hi all,
>>
>> Just wanted to let everyone know that I've ported the undercloud
>> configuration wizard to be a web app so it can be used by people without
>> PyQt on their desktop.  I've written a blog post about it here:
>> http://blog.nemebean.com/content/undercloud-configuration-wizard and the
>> tool itself is here: http://ucw-bnemec.rhcloud.com/
> 
> Nice! I remember people complaining about our use of 192.0.2.0 network 
> by default, maybe you could change it?

Yeah, this work was actually part of my plan for deprecating that
default.  I've pushed https://review.openstack.org/#/c/320072/ for
instack-undercloud (which I need to revisit) and also changed the
default in the wizard to 192.168.0.0/24.  I chose the new default based
on the fact that we already use the 10. and 172. private ranges in our
sample network-environment files, so to avoid overlap this seemed like
the best choice.

I've also made a few other changes - the input areas now have tooltips
generated from the opt descriptions, opts added recently are now marked
with the version in which they were added, and a couple of other common
options are now available in the wizard.

> 
>>
>> It might be good to link it from tripleo.org too, or maybe even move it
>> to be hosted there entirely.  The latter would require some work as it's
>> not really designed to play nicely with an existing web server (hey, I'm
>> a webapp noob, cut me some slack :-), but it could be done.
>>
>> -Ben
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Deprecating the live_migration_flag and block_migration_flag config options

2016-06-06 Thread Timofei Durakov
On Mon, Jun 6, 2016 at 11:26 PM, Matt Riedemann 
wrote:

> On 6/6/2016 12:15 PM, Matt Riedemann wrote:
>
>> On 1/8/2016 12:28 PM, Mark McLoughlin wrote:
>>
>>> On Fri, 2016-01-08 at 14:11 +, Daniel P. Berrange wrote:
>>>
 On Thu, Jan 07, 2016 at 09:07:00PM +, Mark McLoughlin wrote:

> On Thu, 2016-01-07 at 12:23 +0100, Sahid Orentino Ferdjaoui
> wrote:
>
>> On Mon, Jan 04, 2016 at 09:12:06PM +, Mark McLoughlin
>> wrote:
>>
>>> Hi
>>>
>>> commit 8ecf93e[1] got me thinking - the live_migration_flag
>>> config option unnecessarily allows operators choose arbitrary
>>> behavior of the migrateToURI() libvirt call, to the extent
>>> that we allow the operator to configure a behavior that can
>>> result in data loss[1].
>>>
>>> I see that danpb recently said something similar:
>>>
>>> https://review.openstack.org/171098
>>>
>>> "Honestly, I wish we'd just kill off  'live_migration_flag'
>>> and 'block_migration_flag' as config options. We really
>>> should not be exposing low level libvirt API flags as admin
>>> tunable settings.
>>>
>>> Nova should really be in charge of picking the correct set of
>>> flags for the current libvirt version, and the operation it
>>> needs to perform. We might need to add other more sensible
>>> config options in their place [..]"
>>>
>>
>> Nova should really handle internal flags and this series is
>> going in the right way.
>>
>> ...
>>>
>>
>> 4) Add a new config option for tunneled versus native:
>>>
>>> [libvirt] live_migration_tunneled = true
>>>
>>> This enables the use of the VIR_MIGRATE_TUNNELLED flag. We
>>> have historically defaulted to tunneled mode because it
>>> requires the least configuration and is currently the only
>>> way to have a secure migration channel.
>>>
>>> danpb's quote above continues with:
>>>
>>> "perhaps a "live_migration_secure_channel" to indicate that
>>> migration must use encryption, which would imply use of
>>> TUNNELLED flag"
>>>
>>> So we need to discuss whether the config option should
>>> express the choice of tunneled vs native, or whether it
>>> should express another choice which implies tunneled vs
>>> native.
>>>
>>> https://review.openstack.org/263434
>>>
>>
>> We probably have to consider that the operator does not know much
>> about internal libvirt flags, so the options we are exposing for
>> him should reflect the benefit of using them. I commented on your
>> review that we should at least explain the benefit of using this
>> option, whatever the name is.
>>
>
> As predicted, plenty of discussion on this point in the review
> :)
>
> You're right that we don't give the operator any guidance in the
> help message about how to choose true or false for this:
>
> Whether to use tunneled migration, where migration data is
> transported over the libvirtd connection. If True, we use the
> VIR_MIGRATE_TUNNELLED migration flag
>
> libvirt's own docs on this are here:
>
> https://libvirt.org/migration.html#transport
>
> which emphasizes:
>
> - the data copies involved in tunneling
> - the extra configuration steps required for native
> - the encryption support you get when tunneling
>
> The discussions I've seen on this topic wrt Nova have revolved
> around:
>
> - that tunneling allows for an encrypted transport[1]
> - that qemu's NBD based drive-mirror block migration isn't supported
>   using tunneled mode, and that danpb is working on fixing this
>   limitation in libvirt
> - "selective" block migration[2] won't work with the fallback qemu
>   block migration support, and so won't currently work in tunneled mode
>

 I'm not working on fixing it, but IIRC some other dev had proposed
 patches.


> So, the advice to operators would be:
>
> - You may want to choose tunneled=False for improved block
>   migration capabilities, but this limitation will go away in
>   future.
> - You may want to choose tunneled=False if you wish to trade an
>   encrypted transport for a (potentially negligible) performance
>   improvement.
>
> Does that make sense?
>
> As for how to name the option, and as I said in the review, I
> think it makes sense to be straightforward here and make it
> clearly about choosing to disable libvirt's tunneled transport.
>
> If we name it any other way, I think our explanation for
> operators will immediately jump to explaining (a) that it
> influences the TUNNELLED flag, and (b) the differences between
> the tunneled and native transports. So, if we're going to have to
> talk about tunneled versus native, why obscure that detail?
>

 Ultimately we need to recog

Re: [openstack-dev] Openstack support for bump-in-the-wire functions

2016-06-06 Thread Sean M. Collins
Take a look at the networking-sfc project.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] using ironic as a replacement for existing datacenter baremetal provisioning

2016-06-06 Thread Kris G. Lindgren
Hi ironic folks,
As I'm trying to explore how GoDaddy can use ironic I've created the following 
in an attempt to document some of my concerns, and I'm wondering if you folks 
could help me identify ongoing work to solve these (or alternatives?)
List of concerns with ironic:

1.) Nova <-> ironic interactions generally seem terrible?
  - How to accept raid config and partitioning(?) from end users? There seems
to be no agreed-upon method between nova/ironic yet.
   - How to run multiple conductors/nova-computes?  Right now as far as I can
tell, all of ironic is fronted by a single nova-compute, which I will have to
manage via a cluster technology between two or more nodes.  Because of this and
the way host-aggregates work I am unable to expose fault domains for ironic
instances (all of ironic can only be under a single AZ (the az that is assigned
to the nova-compute node)), unless I create multiple nova-compute servers and
manage multiple independent ironic setups.  This makes on-boarding/querying of
hardware capacity painful.
  - Nova appears to be forcing a "we are 'compute' as long as 'compute' means
VMs" view, which means that we will have a baremetal flavor explosion (ie the
mismatch between baremetal and VM).
  - This is a feeling I got from the ironic-nova cross project meeting in
Austin.  A general example goes back to the raid config above. I can configure a
single piece of hardware many different ways, but to fit into nova's world view
I need to have many different flavors exposed to the end-user.  In this way many
flavors can map back to a single piece of hardware with just a slightly
different configuration applied. So how am I supposed to do a single server with
6 drives as either: Raid 1 + Raid 5, Raid 5, Raid 10, Raid 6, or JBOD?  Seems
like I would need to pre-mark out servers that were going to be a specific raid
level.  Which means that I need to start managing additional sub-pools of
hardware just to deal with how the end user wants the raid configured; this is
pretty much a non-starter for us.  I have not really heard of what's being done
on this specific front.

2.) Inspector:
  - IPA service doesn't gather port/switching information
  - Inspection service doesn't process port/switching information, which means
that it won't add it to ironic.  Which makes managing network swinging of the
host a non-starter.  As I would inspect the host – then modify the ironic
record to add the details about what port/switch the server is connected to
from a different source.  At that point why wouldn't I just onboard everything
through the API?
  - Doesn't grab hardware disk configurations. If the server has multiple raids
(r1 + r5), it only reports the boot raid disk capacity.
  - Inspection is geared towards using a different network and dnsmasq
infrastructure than what is in use for ironic/neutron.  Which also means that
in order to not conflict with dhcp requests for servers in ironic I need to use
different networks.  Which also means I now need to handle swinging server
ports between different networks.

3.) IPA image:
  - Default build stuff is pinned to extremely old versions due to gate failure
issues, so I cannot work without a fork for onboarding of servers, due to the
fact that IPMI modules aren't built for the kernel, so inspection can never
match the node against ironic.  Seems like current functionality here is MVP
for the gate to work and to deploy images.  But if you need to do firmware,
bios-config, or any other hardware-specific features you are pretty much going
to need to roll your own IPA image and IPA modules to do standard provisioning
tasks.

4.) Conductor:
  - Serial-over-lan consoles require a unique port on the conductor server (I
have seen proposals to try and fix this?); this is painful to manage with large
numbers of servers.
  - SOL consoles aren't restarted when the conductor is restarted (I think this
might be fixed in newer versions of ironic?); again, if end users aren't
supposed to consume ironic api's directly - this is painful to handle.
  - It's very easy to get a node to fall off the state machine rails (reboot a
server while an image is being deployed to it); the only way I have seen to be
able to fix this is to update the DB directly.
  - As far as I can tell shell-in-a-box and SOL consoles aren't supported via
nova – so how are end users supposed to consume the shell-in-a-box console?
  - I have BMCs that need specific configuration (some require SOL on com2,
others on com1); this makes it pretty much impossible without per-box overrides
against the conductor's hardcoded templates.
  - Additionally it would be nice to default to having a provisioning
kernel/image that was set as a single config option with per-server overrides –
rather than on each server.  If we ever change the IPA image – that means at
scale we would need to update thousands of ironic nodes.

What is ironic doing to monitor the hardware for failures?  I assume the answer 
here is nothing and that we will nee

Re: [openstack-dev] [nova][libvirt] Deprecating the live_migration_flag and block_migration_flag config options

2016-06-06 Thread Matt Riedemann

On 6/6/2016 12:15 PM, Matt Riedemann wrote:

On 1/8/2016 12:28 PM, Mark McLoughlin wrote:

On Fri, 2016-01-08 at 14:11 +, Daniel P. Berrange wrote:

On Thu, Jan 07, 2016 at 09:07:00PM +, Mark McLoughlin wrote:

On Thu, 2016-01-07 at 12:23 +0100, Sahid Orentino Ferdjaoui
wrote:

On Mon, Jan 04, 2016 at 09:12:06PM +, Mark McLoughlin
wrote:

Hi

commit 8ecf93e[1] got me thinking - the live_migration_flag
config option unnecessarily allows operators choose arbitrary
behavior of the migrateToURI() libvirt call, to the extent
that we allow the operator to configure a behavior that can
result in data loss[1].

I see that danpb recently said something similar:

https://review.openstack.org/171098

"Honestly, I wish we'd just kill off  'live_migration_flag'
and 'block_migration_flag' as config options. We really
should not be exposing low level libvirt API flags as admin
tunable settings.

Nova should really be in charge of picking the correct set of
flags for the current libvirt version, and the operation it
needs to perform. We might need to add other more sensible
config options in their place [..]"


Nova should really handle internal flags and this series is
going in the right way.


...



4) Add a new config option for tunneled versus native:

[libvirt] live_migration_tunneled = true

This enables the use of the VIR_MIGRATE_TUNNELLED flag. We
have historically defaulted to tunneled mode because it
requires the least configuration and is currently the only
way to have a secure migration channel.

danpb's quote above continues with:

"perhaps a "live_migration_secure_channel" to indicate that
migration must use encryption, which would imply use of
TUNNELLED flag"

So we need to discuss whether the config option should
express the choice of tunneled vs native, or whether it
should express another choice which implies tunneled vs
native.

https://review.openstack.org/263434


We probably have to consider that the operator does not know much
about internal libvirt flags, so the options we are exposing for
him should reflect the benefit of using them. I commented on your
review that we should at least explain the benefit of using this
option, whatever the name is.


As predicted, plenty of discussion on this point in the review
:)

You're right that we don't give the operator any guidance in the
help message about how to choose true or false for this:

Whether to use tunneled migration, where migration data is
transported over the libvirtd connection. If True, we use the
VIR_MIGRATE_TUNNELLED migration flag

libvirt's own docs on this are here:

https://libvirt.org/migration.html#transport

which emphasizes:

- the data copies involved in tunneling
- the extra configuration steps required for native
- the encryption support you get when tunneling

The discussions I've seen on this topic wrt Nova have revolved
around:

- that tunneling allows for an encrypted transport[1]
- that qemu's NBD based drive-mirror block migration isn't supported
  using tunneled mode, and that danpb is working on fixing this
  limitation in libvirt
- "selective" block migration[2] won't work with the fallback qemu
  block migration support, and so won't currently work in tunneled mode


I'm not working on fixing it, but IIRC some other dev had proposed
patches.



So, the advice to operators would be:

- You may want to choose tunneled=False for improved block
  migration capabilities, but this limitation will go away in
  future.
- You may want to choose tunneled=False if you wish to trade an
  encrypted transport for a (potentially negligible) performance
  improvement.

Does that make sense?

As for how to name the option, and as I said in the review, I
think it makes sense to be straightforward here and make it
clearly about choosing to disable libvirt's tunneled transport.

If we name it any other way, I think our explanation for
operators will immediately jump to explaining (a) that it
influences the TUNNELLED flag, and (b) the differences between
the tunneled and native transports. So, if we're going to have to
talk about tunneled versus native, why obscure that detail?


Ultimately we need to recognise that libvirt's tunnelled mode was
added as a hack, to work around the fact that QEMU lacked any kind of
native security capabilities & didn't appear likely to ever get
them at that time.  As well as not working with modern NBD based
block device encryption, it really sucks for performance because it
introduces many extra data copies. So it is going to be quite poor
for large VMs with a heavy rate of data dirtying.

The only long term relative "benefit" of tunnelled mode is that it
avoids the need to open extra firewall ports.

IMHO, the long term future is to *never* use tunnelled mode for
QEMU. This will be viable when my native TLS support in the QEMU
migration + NBD protocols is merged. I'm hopeful this will be
in QEMU 2.6.
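
For reference, the knob under discussion is a single boolean in nova.conf.
A sketch of the trade-off (the comments summarize this thread, they are not
official documentation):

    [libvirt]
    # True (tunneled): migration data rides over the libvirtd connection,
    # giving an encrypted channel at the cost of extra data copies and no
    # NBD-based drive-mirror block migration.
    # False (native): faster block migration, but the operator must open
    # the native migration ports and, until QEMU-native TLS lands, gives
    # up encryption.
    live_migration_tunneled = true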


But, Pawel strongly disagrees.

One last point I'd make is this isn't about adding a *new*
configuration capability for op

[openstack-dev] [swift][keystone] Using JSON as future ACL format

2016-06-06 Thread Thai Q Tran


Hello all,

Hope everyone had a good weekend, and hope this email does not ruin your
next.
We had a small internal discussion at IBM and here are some of the findings
that I will present to the wider community.

1. The ":" separator that swift currently uses is not entirely safe since
LDAP can be configured to allow special characters in user IDs. It
essentially means no special characters are safe to use as separators. I am
not sure how practical this is, but it's something to consider.

2. Since names are not guaranteed to be immutable, we should store
everything via IDs. Currently, for backward compatibility reasons, Swift
continues to support names for V2. Keep in mind that V2 does not
guarantee that names are immutable either. Given this fact and what we know
from #1, we can say that names are mutable for both V2 and V3, and that any
separators we use are fallible. In other words, using a separator for names
or ids will not work 100% of the time.
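
To illustrate the ambiguity (all names made up): if names may themselves
contain the separator, two different ACL entries can serialize to the same
string:

    "default:demo:a:lice"
    # could mean domain="default", project="demo",   name="a:lice"
    # or         domain="default", project="demo:a", name="lice"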

3. Keystone recently enabled URL safe naming of project and domains for
their hierarchal work. As a by product of that, if the option is enabled,
Swift can essentially use the reserved characters as separators. The list
of reserved characters is listed below. The only remaining question is: how
does Keystone inform Swift that this option is enabled? Alternatively, Swift
can add a separator option that is a subset of the characters below and leave
it to the deployer to configure.

";" | "/" | "?" | ":" | "@" | "&" | "=" | "+" |"$" | ","

https://github.com/openstack/keystone/commit/60b52c1248ddd5e682838d9e8ba853789940c284
http://www.ietf.org/rfc/rfc2396.txt

4. As mentioned in the KeystoneAuthACL write-up in Atlanta, the JSON format is
one of the options going forward. The article goes on to mention that we
should store only user IDs (avoiding the mutable names issue). It outlined
a process and reverse-process that would allow names to be used but
mentioned an overhead cost to Keystone. I personally think this is the right
approach going forward since it negates the use of a separator altogether.

Whether we choose to store the user IDs or names as metadata is another
issue. But on a side note, I have tested changing names in V2 and
it has the exact same problem as V3. If we are allowing V2 to store names
[{ project, name }], I do not see why we should not allow the same for V3
[{ domain, project, name }].  This would remove the overhead cost to
Keystone. And of course, you still have the option to store things as IDs
[{ domain, project, id }].

https://wiki.openstack.org/wiki/Swift/ContainerACLWithKeystoneV3
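
To make the discussion concrete, an ACL in a JSON format might look something
like this (purely illustrative; the exact keys and structure would be settled
by the spec):

    {
      "read-only": [
        {"user_id": "6b7c806b9d924d48b68f53d13a9e2169"},
        {"domain": "default", "project": "demo", "name": "alice"}
      ],
      "read-write": [
        {"project_id": "b5273a46ca2f4a71b3b1c3f6e82e6bc1"}
      ]
    }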

My intention is to spark discussion around this topic with the goal of
moving the Swift community toward accepting the JSON format. Whether we
store it as names or ids can be a discussion for another time. If you made
it this far, thanks for reading! Your thoughts will be much appreciated.

Thanks,
Thai (tqtran)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] Does the openvswitch-agent need to be run along side the neutron-l3-agent?

2016-06-06 Thread Kevin Benton
The L3 agent will plug ports into things, but it doesn't know anything
about wiring them up for the appropriate VXLAN/VLAN/whatever. That's all
very l2-specific logic and dependent on whether you're using linuxbridge/ovs/etc.

Think of the l3 agent just like it is Nova wiring up VMs. It plugs them in
and then the l2 agent does the appropriate l2 stuff for each port.
On Jun 6, 2016 11:46 AM, "Sean M. Collins"  wrote:

> Armando M. wrote:
> > The short answer to the question in your subject is yes. For OVS,
> > wherever you run network services (l3 or dhcp), you need an l2 agent
> > that is in charge of port wiring.
>
> OK - I'm going senile then. For some reason I thought the L3 agent
> called the same code paths for doing wiring of router ports and didn't
> need the L2 agent running.
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] stepping down from core

2016-06-06 Thread Jeff Peeler
Hi all,

This is my official announcement to leave core on Kolla /
Kolla-Kubernetes. I've enjoyed working with all of you and hopefully
we'll cross paths again!

Jeff

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Openstack support for bump-in-the-wire functions

2016-06-06 Thread Farhad Sunavala

Hi,
I am working with a vendor who has virtualized his appliance. The vendor's
appliance is essentially a bump-in-the-wire appliance.
Consider the diagram below. The vendor's appliance is B, connected to ports 2
and 3. Whatever comes into B from port 2 is acted upon and then the same
packet is sent towards port 3.  Similarly, anything that comes into B from
port 3 is acted upon and then the same packet is sent towards port 2.

    A        B        C
    |      |   |      |
    1      2   3      4
    -------------------
       Open vSwitch

Obviously, there are issues with implementing such a function with OpenStack
due to flooding and unknown unicast MAC issues. Has anyone successfully
implemented a bump-in-the-wire function such as B with OpenStack?

thanks,
Farhad.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [UX] [TripleO] TripleO UI Initial Wireframes

2016-06-06 Thread Liz Blanchard
Hi All,

I wanted to share some brainstorming we've done on the TripleO UI. I put
together wireframes[1] to reflect some ideas we have on moving forward with
features in the UI and would love to get any feedback you all have. Feel
free to comment via this email or comment within InVision.

Best,
Liz

[1] https://invis.io/KW7JTXBBR
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][neutron] Fwd: [Openstack-stable-maint] Stable check of openstack/octavia failed

2016-06-06 Thread Michael Johnson
Hi Matt,

We are aware of the issue and have cherry picked patches pending
review by the neutron stable team:
https://review.openstack.org/#/q/openstack/octavia+status:open+branch:stable/mitaka
https://review.openstack.org/#/q/openstack/octavia+status:open+branch:stable/liberty

Michael

On Mon, Jun 6, 2016 at 11:27 AM, Matt Riedemann
 wrote:
> Can someone from the Octavia team check on the stable/liberty failures for
> the unit test runs?  Those have been failing for several weeks, if not
> months, now, which makes having a job run Octavia unit tests on the
> periodic-stable queue pointless since they never pass.
>
> Keep in mind the octavia repo has the stable:follows-policy tag in the
> governance repo [1] and part of that tag being applied to the project is
> actually maintaining the stable branches, which includes keeping the CI jobs
> running.
>
> [1]
> https://governance.openstack.org/reference/projects/neutron.html#project-neutron
>
>
>  Forwarded Message 
> Subject: [Openstack-stable-maint] Stable check of openstack/octavia failed
> Date: Mon, 06 Jun 2016 06:23:15 +
> From: A mailing list for the OpenStack Stable Branch test reports.
> 
> Reply-To: openstack-dev@lists.openstack.org
> To: openstack-stable-ma...@lists.openstack.org
>
> Build failed.
>
> - periodic-octavia-docs-liberty
> http://logs.openstack.org/periodic-stable/periodic-octavia-docs-liberty/9796536/
> : SUCCESS in 3m 01s
> - periodic-octavia-python27-liberty
> http://logs.openstack.org/periodic-stable/periodic-octavia-python27-liberty/6d96415/
> : FAILURE in 4m 36s
> - periodic-octavia-docs-mitaka
> http://logs.openstack.org/periodic-stable/periodic-octavia-docs-mitaka/b2074b4/
> : SUCCESS in 3m 36s
> - periodic-octavia-python27-mitaka
> http://logs.openstack.org/periodic-stable/periodic-octavia-python27-mitaka/f220954/
> : SUCCESS in 3m 59s
>
> ___
> Openstack-stable-maint mailing list
> openstack-stable-ma...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia][neutron] Fwd: [Openstack-stable-maint] Stable check of openstack/octavia failed

2016-06-06 Thread Doug Wiegley
Hi Matt,

Thanks for the heads up, we are looking into it. And adding some sort of 
monitor.

doug

> On Jun 6, 2016, at 11:27 AM, Matt Riedemann  
> wrote:
> 
> Can someone from the Octavia team check on the stable/liberty failures for 
> the unit test runs?  Those have been failing for several weeks, if not 
> months, now, which makes having a job run Octavia unit tests on the 
> periodic-stable queue pointless since they never pass.
> 
> Keep in mind the octavia repo has the stable:follows-policy tag in the 
> governance repo [1] and part of that tag being applied to the project is 
> actually maintaining the stable branches, which includes keeping the CI jobs 
> running.
> 
> [1] 
> https://governance.openstack.org/reference/projects/neutron.html#project-neutron
> 
> 
>  Forwarded Message 
> Subject: [Openstack-stable-maint] Stable check of openstack/octavia failed
> Date: Mon, 06 Jun 2016 06:23:15 +
> From: A mailing list for the OpenStack Stable Branch test reports. 
> 
> Reply-To: openstack-dev@lists.openstack.org
> To: openstack-stable-ma...@lists.openstack.org
> 
> Build failed.
> 
> - periodic-octavia-docs-liberty 
> http://logs.openstack.org/periodic-stable/periodic-octavia-docs-liberty/9796536/
>  : SUCCESS in 3m 01s
> - periodic-octavia-python27-liberty 
> http://logs.openstack.org/periodic-stable/periodic-octavia-python27-liberty/6d96415/
>  : FAILURE in 4m 36s
> - periodic-octavia-docs-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-octavia-docs-mitaka/b2074b4/
>  : SUCCESS in 3m 36s
> - periodic-octavia-python27-mitaka 
> http://logs.openstack.org/periodic-stable/periodic-octavia-python27-mitaka/f220954/
>  : SUCCESS in 3m 59s
> 
> ___
> Openstack-stable-maint mailing list
> openstack-stable-ma...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Smaug] IRC Meeting tomorrow (07/06) - 1500 UTC

2016-06-06 Thread Saggi Mizrahi
Hi All,

We will hold our weekly IRC meeting tomorrow (Tuesday, 07/06) at 1500
UTC in #openstack-meeting.

Please review the proposed meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/smaug

Please feel free to add to the agenda any subject you would like to
discuss.

This is the first time we are doing it in an even week, for people on the
western side of Greenwich. Hope to see some new faces.

Thanks,
Saggi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Sean M. Collins
Ihar Hrachyshka wrote:
> 
> > On 06 Jun 2016, at 16:44, Sean M. Collins  wrote:
> > 
> > I agree, it would be convenient to have something similar to what Nova
> > has:
> > 
> > https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/versions.py#L59-L60
> > 
> > We should put some resources behind implementing micro versioning and we
> > could end up with something similar.
> > 
> > It would also be nice to have the agents report their version, so it
> > bubbles up into the agent-list REST API calls.
> 
> Agents already report a list of object versions known to them:
> 
> https://github.com/openstack/neutron/blob/master/neutron/db/agents_db.py#L258
> 
> In theory, we can deduce the version from there. The versions are reported 
> through state reports. Not sure if it’s exposed in API.
> 

Right - I was looking at your commit that implemented that - and perhaps
using that information to return something similar to the nova
min_version and max_versions - in the agent-list output.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] Does the openvswitch-agent need to be run along side the neutron-l3-agent?

2016-06-06 Thread Sean M. Collins
Armando M. wrote:
> The short answer to the question in your subject is yes. For OVS, wherever
> you run network services (l3 or dhcp), you need an l2 agent that is in charge
> of port wiring.

OK - I'm going senile then. For some reason I thought the L3 agent
called the same code paths for doing wiring of router ports and didn't
need the L2 agent running.
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tacker] Proposing Bharath Thiruveedula to Tacker core team

2016-06-06 Thread Stephen Wong
+1

On Fri, Jun 3, 2016 at 9:23 PM, Haddleton, Bob (Nokia - US) <
bob.haddle...@nokia.com> wrote:

> +1
>
> Bob
>
> On Jun 3, 2016, at 8:24 PM, Sridhar Ramaswamy  wrote:
>
> Tackers,
>
> I'm happy to propose Bharath Thiruveedula (IRC: tbh) to join the tacker
> core team. Bharath has been contributing to Tacker from the Liberty cycle,
> and he has grown into a key member of this project. His contribution has
> steadily increased as he picked up bigger pieces to deliver [1].
> Specifically, he contributed the automatic resource creation blueprint [2]
> in the Mitaka release. Plus tons of other RFEs and bug fixes [3]. Bharath
> is also a key contributor in tosca-parser and heat-translator projects
> which is an added plus.
>
> Please provide your +1/-1 votes.
>
> Thanks Bharath for your contributions so far and much more to come !!
>
> [1]
> http://stackalytics.com/?project_type=openstack&release=all&metric=commits&user_id=bharath-ves&module=tacker-group
> [2]
> https://blueprints.launchpad.net/tacker/+spec/automatic-resource-creation
> [3] https://bugs.launchpad.net/bugs/+bugs?field.assignee=bharath-ves
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [javascript] Meeting time doodle

2016-06-06 Thread Michael Krotscheck
Between fuel, ironic, horizon, storyboard, the app ecosystem group, the
partridges, the pear trees, and the kitchen sinks, there's an awful lot of
JavaScript work happening in OpenStack. Enough so that it's a good idea to
actually start having regular meetings about it.

I've tried to identify dates/times in the week when a meeting channel might
be open. If you work on, consume, and/or want to contribute to JavaScript in
OpenStack, please fill out this doodle and let me know when you can attend!

http://doodle.com/poll/3hxubef6va5wzpkc

Michael Krotscheck
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Networking-SFC] Stable/mitaka version

2016-06-06 Thread Henry Fourie
Gary,
Yes, it will be.

-Louis

From: Gary Kotton [mailto:gkot...@vmware.com]
Sent: Monday, June 06, 2016 2:39 AM
To: OpenStack List
Subject: [openstack-dev] [Neutron][Networking-SFC] Stable/mitaka version

Hi,
In git the project has a stable/liberty and trunk version. Will this be 
supported in stable/mitaka?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] Does the openvswitch-agent need to be run along side the neutron-l3-agent?

2016-06-06 Thread Armando M.
On 6 June 2016 at 19:59, Sean M. Collins  wrote:

> While reviewing https://review.openstack.org/#/c/292778/5 I think I
> might have found a bit of coupling between the neutron l2 agent and the
> l3 agent when it comes to DevStack.
>
> In the DevStack neutron guide - the "control node" currently
> does double duty as both an API server and also as a compute host.
>
>
> https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-configuration
>
> Extra compute nodes have a pretty short configuration
>
>
> https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-compute-configuration
>
> So, recently I poked at having a pure control node on the "devstack-1"
> host, by removing the q-agt and n-cpu entries from ENABLED_SERVICES,
> while leaving q-l3.
>
> It appears that the code in DevStack, relies on the presence of q-agt in
> order to create the integration bridge (br-int), so when the L3 agent
> comes up it complains because br-int hasn't been created.
>
>
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ovs_base#L20
>
> Anyway, here's the fix.
>
> https://review.openstack.org/#/c/326063/


The short answer to the question in your subject is yes. For OVS, wherever
you run network services (l3 or dhcp), you need an l2 agent that is in charge
of port wiring.


>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia][neutron] Fwd: [Openstack-stable-maint] Stable check of openstack/octavia failed

2016-06-06 Thread Matt Riedemann
Can someone from the Octavia team check on the stable/liberty failures 
for the unit test runs?  Those have been failing for several weeks, if 
not months, now, which makes having a job run Octavia unit tests on the 
periodic-stable queue pointless since they never pass.


Keep in mind the octavia repo has the stable:follows-policy tag in the 
governance repo [1] and part of that tag being applied to the project is 
actually maintaining the stable branches, which includes keeping the CI 
jobs running.


[1] 
https://governance.openstack.org/reference/projects/neutron.html#project-neutron



 Forwarded Message 
Subject: [Openstack-stable-maint] Stable check of openstack/octavia failed
Date: Mon, 06 Jun 2016 06:23:15 +
From: A mailing list for the OpenStack Stable Branch test reports. 


Reply-To: openstack-dev@lists.openstack.org
To: openstack-stable-ma...@lists.openstack.org

Build failed.

- periodic-octavia-docs-liberty 
http://logs.openstack.org/periodic-stable/periodic-octavia-docs-liberty/9796536/ 
: SUCCESS in 3m 01s
- periodic-octavia-python27-liberty 
http://logs.openstack.org/periodic-stable/periodic-octavia-python27-liberty/6d96415/ 
: FAILURE in 4m 36s
- periodic-octavia-docs-mitaka 
http://logs.openstack.org/periodic-stable/periodic-octavia-docs-mitaka/b2074b4/ 
: SUCCESS in 3m 36s
- periodic-octavia-python27-mitaka 
http://logs.openstack.org/periodic-stable/periodic-octavia-python27-mitaka/f220954/ 
: SUCCESS in 3m 59s


___
Openstack-stable-maint mailing list
openstack-stable-ma...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-stable-maint


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Live migration meeting tomorrow

2016-06-06 Thread Murray, Paul (HP Cloud)
The agenda is here: https://wiki.openstack.org/wiki/Meetings/NovaLiveMigration

With the non-priority spec freeze having passed there is not much on the agenda 
at the moment. If you have anything to add please feel free.

Regards,
Paul
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Armando M.
On 6 June 2016 at 17:05, Andreas Scheuring 
wrote:

> Is there a chance to get rid of this vif-plugged event at all? E.g. by
> transitioning it to a REST API interface? As far as I know this is the
> only RPC interface between neutron and nova.
>

This handshake between Neutron and Nova does not happen over RPC


>
>
> --
> -
> Andreas
> IRC: andreas_s (formerly scheuran)
>
>
>
> On Mo, 2016-06-06 at 20:25 +0900, Akihiro Motoki wrote:
> > Hi,
> >
> > If I understand correctly, what you need is to expose the neutron
> > behavior through API or something. In this particular case, neutron
> > need to send a vif-plugged event when neutron detects some event in
> > the data plane (VIF plugging in OVS or some virtual switch). Thus I
> > think the question can be generalized to whether we expose a
> > capability (such that neutron server behaves in XXX way) through API
> > (API version? extension?). For example, do we have an extension to
> > expose that neutron supports the event callback mechanism?
> >
> > I also think the important point is that it is a topic of
> > deployment. Operators are responsible for deploying the correct
> > combination of nova and neutron.
> >
> > Honestly I am not sure we need to expose this kind of things through
> > API. Regarding the current event callback mechanism, we assume that
> > operators deploy the expected combination of releases of nova and
> > neutron. Can't we assume that operators deploy Newton nova and neutron
> > when they want to use live-migration vif-plugging support?
> >
> > Akihiro
> >
> > 2016-06-06 17:06 GMT+09:00 Oleg Bondarev :
> > > Hi,
> > >
> > > There are cases where it would be useful to know the version of
> Neutron (or
> > > any other project) from API, like during upgrades or in cross-project
> > > communication cases.
> > > For example in https://review.openstack.org/#/c/246910/ Nova needs to
> know
> > > if Neutron sends vif-plugged event during live migration. To ensure
> this it
> > > should be enough to know Neutron is "Newton" or higher.
> > >
> > > Not sure why it wasn't done before (or was it and I'm just blind?) so
> the
> > > question to the community is what are possible issues/downsides of
> exposing
> > > code version through the API?
> > >
> > > Thanks,
> > > Oleg
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][devstack] Does the openvswitch-agent need to be run along side the neutron-l3-agent?

2016-06-06 Thread Assaf Muller
On Mon, Jun 6, 2016 at 1:59 PM, Sean M. Collins  wrote:
> While reviewing https://review.openstack.org/#/c/292778/5 I think I
> might have found a bit of coupling between the neutron l2 agent and the
> l3 agent when it comes to DevStack.
>
> In the DevStack neutron guide - the "control node" currently
> does double duty as both an API server and also as a compute host.
>
> https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-configuration
>
> Extra compute nodes have a pretty short configuration
>
> https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-compute-configuration
>
> So, recently I poked at having a pure control node on the "devstack-1"
> host, by removing the q-agt and n-cpu entries from ENABLED_SERVICES,
> while leaving q-l3.
>
> It appears that the code in DevStack relies on the presence of q-agt in
> order to create the integration bridge (br-int), so when the L3 agent
> comes up it complains because br-int hasn't been created.
>
> https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ovs_base#L20
>
> Anyway, here's the fix.
>
> https://review.openstack.org/#/c/326063/

The L3 agent requires an L2 agent on the same host. It's not just
about creating the bridge, it's also about plugging the router/dhcp
ports correctly.

>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Pinning upstream puppet modules

2016-06-06 Thread Vladimir Kozhukalov
Dear colleagues,

We are approaching the 9.0.1 release, and for a higher level of stability we
are going to temporarily pin [1] the upstream puppet modules. Once the 9.0.1
tag is created we will unpin them so that upstream fixes can be picked up
again.

[1] https://review.openstack.org/#/c/325807/


Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][devstack] Does the openvswitch-agent need to be run along side the neutron-l3-agent?

2016-06-06 Thread Sean M. Collins
While reviewing https://review.openstack.org/#/c/292778/5 I think I
might have found a bit of coupling between the neutron l2 agent and the
l3 agent when it comes to DevStack.

In the DevStack neutron guide - the "control node" currently 
does double duty as both an API server and also as a compute host.

https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-configuration

Extra compute nodes have a pretty short configuration

https://github.com/openstack-dev/devstack/blob/master/doc/source/guides/neutron.rst#devstack-compute-configuration

So, recently I poked at having a pure control node on the "devstack-1"
host, by removing the q-agt and n-cpu entries from ENABLED_SERVICES,
while leaving q-l3.

It appears that the code in DevStack relies on the presence of q-agt in
order to create the integration bridge (br-int), so when the L3 agent
comes up it complains because br-int hasn't been created.

https://github.com/openstack-dev/devstack/blob/master/lib/neutron_plugins/ovs_base#L20

Anyway, here's the fix.

https://review.openstack.org/#/c/326063/

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] weekly subteam status report

2016-06-06 Thread Loo, Ruby
Hi,

We are stoked to present this week's subteam report for Ironic. As usual, this 
is pulled directly from the Ironic whiteboard[0] and formatted.

Bugs (dtantsur)
===
- Stats (diff with 23 May 2016)
- Ironic: 204 bugs (+12) + 178 wishlist items. 8 new (+8), 138 in progress, 0 
critical, 35 high (+3) and 22 incomplete (-2)
- Inspector: 8 bugs + 20 wishlist items. 0 new, 7 in progress, 0 critical, 1 
high and 0 incomplete
- Nova bugs with Ironic tag: 16 (-1). 0 new, 0 critical, 1 high

Upgrade (aka Grenade) testing (jlvillal/mgould):

- trello: https://trello.com/c/y127DhpD/3-ci-grenade-testing
- So close :)
- Six patches remaining to be merged to get Grenade working, two of which are
Ironic patches
- Details at: https://etherpad.openstack.org/p/ironic-newton-grenade-whiteboard

Network isolation (Neutron/Ironic work) (jroll, TheJulia, devananda)

- trello: 
https://trello.com/c/HWVHhxOj/1-multi-tenant-networking-network-isolation
- still blocking on grenade work, patches are split up

Gate improvements (jlvillal, lucasagomes, dtantsur)
===
- ironic-inspector has lost support for the old ramdisk; ironic is next :)

Node search API (jroll, lintan, rloo)
=
- trello: https://trello.com/c/j35vJrSz/24-node-search-api
- none - may not be a new priority with new multi-compute spec

Node claims API (jroll, lintan)
===
- trello: https://trello.com/c/3ai8OQcA/25-node-claims-api
- none - may not be a new priority with new multi-compute spec

Multiple compute hosts (jroll, devananda)
=
- trello: https://trello.com/c/OXYBHStp/7-multiple-compute-hosts
- discussed with some folks over hangouts last week
- found some problems and the corresponding solutions :D
- jaypipes to update g-r-c spec, jroll to update ironic-multiple-compute-hosts 
spec

Generic boot-from-volume (TheJulia, dtantsur, lucasagomes)
==
- trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- Pending specification review; TheJulia intends to revise the driver
specification this week.

Driver composition (dtantsur)
=
- trello: https://trello.com/c/fTya14y6/14-driver-composition
- Still collecting reviews on the spec

Inspector (dtantsur)
===
- Good progress with moving our dsvm job to tempest (instead of bash)
- investigating grenade for inspector (milan)

Bifrost (TheJulia)
==
- Currently working through pending revisions for changes that are being 
requested.

.

Until next week,
--ruby

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-06 Thread Clint Byrum
Excerpts from Brant Knudson's message of 2016-06-03 15:16:20 -0500:
> On Fri, Jun 3, 2016 at 2:35 PM, Lance Bragstad  wrote:
> 
> > Hey all,
> >
> > I have been curious about the impact of providing performance feedback as part
> > of the review process. From what I understand, keystone used to have a
> > performance job that would run against proposed patches (I've only heard
> > about it so someone else will have to keep me honest about its timeframe),
> > but it sounds like it wasn't valued.
> >
> >
> We had a job running rally for a year (I think) that nobody ever looked at
> so we decided it was a waste and stopped running it.
> 
> > I think revisiting this topic is valuable, but it raises a series of
> > questions.
> >
> > Initially it probably only makes sense to test a reasonable set of
> > defaults. What do we want these defaults to be? Should they be determined
> > by DevStack, openstack-ansible, or something else?
> >
> >
> A performance test is going to depend on the environment (the machines,
> disks, network, etc), the existing data (tokens, revocations, users, etc.),
> and the config (fernet, uuid, caching, etc.). If these aren't consistent
> between runs then the results are not going to be usable. (This is the
> problem with running rally on infra hardware.) If the data isn't realistic
> (1000s of tokens, etc.) then the results are going to be at best not useful
> or at worst misleading.
> 

That's why I started the counter-inspection spec:

http://specs.openstack.org/openstack/qa-specs/specs/devstack/counter-inspection.html

It just tries to count operations, and graph those. I've, unfortunately,
been pulled off to other things of late, but I do intend to loop back
and hit this hard over the next few months to try and get those graphs.

What we'd get initially is just graphs of how many messages we push
through RabbitMQ, and how many rows/queries/transactions we push through
mysql. We may also want to add counters like how many API requests
happened, and how many retries happen inside the code itself.

There's a _TON_ we can do now to ensure that we know what the trends are
when something gets "slow", so we can look for a gradual "death by 1000
papercuts" trend or a hockey stick that can be tied to a particular
commit.
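
As a concrete illustration, here is a minimal sketch of the kind of counting
this involves, assuming a statsd/graphite setup like infra's; the metric
names and hook points below are invented for the example, not taken from the
spec:

import statsd

# hedged sketch: count operations and ship them to statsd so trends
# can be graphed over time; all names here are illustrative
stats = statsd.StatsClient('graphite.example.org', 8125,
                           prefix='counter_inspection')

def on_rabbit_publish(topic):
    # one increment per message pushed through RabbitMQ
    stats.incr('rabbit.publish.%s' % topic.replace('.', '_'))

def on_db_statement():
    # one increment per SQL statement issued to mysql
    stats.incr('db.statement')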

> What does the performance test criteria look like and where does it live?
> > Does it just consist of running tempest?
> >
> >
> I don't think tempest is going to give us numbers that we're looking for
> for performance. I've seen a few scripts and have my own for testing
> performance of token validation, token creation, user creation, etc. which
> I think will do the exact tests we want and we can get the results
> formatted however we like.
> 

Agreed that tempest will only give a limited view. Ideally one would
also test things like "after we've booted 1000 vms, do we end up reading
1000 more rows, or 1000 * 1000 more rows?"

> From a contributor and reviewer perspective, it would be nice to have the
> > ability to compare performance results across patch sets. I understand that
> > keeping all performance results for every patch for an extended period of
> > time is unrealistic. Maybe we take a daily performance snapshot against
> > master and use that to map performance patterns over time?
> >
> >
> Where are you planning to store the results?
> 

Infra has a graphite/statsd cluster which is made for collecting metrics
on tests. It might need to be expanded a bit, but it should be
relatively cheap to do so given the benefit of having some of these
numbers.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][libvirt] Deprecating the live_migration_flag and block_migration_flag config options

2016-06-06 Thread Matt Riedemann

On 1/8/2016 12:28 PM, Mark McLoughlin wrote:

On Fri, 2016-01-08 at 14:11 +, Daniel P. Berrange wrote:

On Thu, Jan 07, 2016 at 09:07:00PM +, Mark McLoughlin wrote:

On Thu, 2016-01-07 at 12:23 +0100, Sahid Orentino Ferdjaoui
wrote:

On Mon, Jan 04, 2016 at 09:12:06PM +, Mark McLoughlin
wrote:

Hi

commit 8ecf93e[1] got me thinking - the live_migration_flag
config option unnecessarily allows operators choose arbitrary
behavior of the migrateToURI() libvirt call, to the extent
that we allow the operator to configure a behavior that can
result in data loss[1].

I see that danpb recently said something similar:

https://review.openstack.org/171098

"Honestly, I wish we'd just kill off  'live_migration_flag'
and 'block_migration_flag' as config options. We really
should not be exposing low level libvirt API flags as admin
tunable settings.

Nova should really be in charge of picking the correct set of
flags for the current libvirt version, and the operation it
needs to perform. We might need to add other more sensible
config options in their place [..]"


Nova should really handle internal flags and this series is
going in the right direction.


...



4) Add a new config option for tunneled versus native:

[libvirt] live_migration_tunneled = true

This enables the use of the VIR_MIGRATE_TUNNELLED flag. We
have historically defaulted to tunneled mode because it
requires the least configuration and is currently the only
way to have a secure migration channel.

danpb's quote above continues with:

"perhaps a "live_migration_secure_channel" to indicate that
migration must use encryption, which would imply use of
TUNNELLED flag"

So we need to discuss whether the config option should
express the choice of tunneled vs native, or whether it
should express another choice which implies tunneled vs
native.

https://review.openstack.org/263434
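
For illustration, the "Nova picks the correct set of flags" idea could look
something like the following sketch (not Nova's actual code; the baseline
flag set is an assumption for the example):

import libvirt

def migration_flags(tunneled, block_migration):
    # derive the migrateToURI() flags from high-level options instead
    # of letting operators pass raw libvirt flag strings
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
    if tunneled:
        flags |= libvirt.VIR_MIGRATE_TUNNELLED
    if block_migration:
        flags |= libvirt.VIR_MIGRATE_NON_SHARED_INC
    return flags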


We probably have to consider that the operator does not know much
about internal libvirt flags, so the options we are exposing
should reflect the benefit of using them. I commented on your
review that we should at least explain the benefit of using this
option, whatever the name is.


As predicted, plenty of discussion on this point in the review
:)

You're right that we don't give the operator any guidance in the
help message about how to choose true or false for this:

Whether to use tunneled migration, where migration data is
transported over the libvirtd connection. If True, we use the
VIR_MIGRATE_TUNNELLED migration flag

libvirt's own docs on this are here:

https://libvirt.org/migration.html#transport

which emphasizes:

- the data copies involved in tunneling
- the extra configuration steps required for native
- the encryption support you get when tunneling

The discussions I've seen on this topic wrt Nova have revolved
around:

- that tunneling allows for an encrypted transport[1]
- that qemu's NBD based drive-mirror block migration isn't supported
  using tunneled mode, and that danpb is working on fixing this
  limitation in libvirt
- "selective" block migration[2] won't work with the fallback qemu
  block migration support, and so won't currently work in tunneled mode


I'm not working on fixing it, but IIRC some other dev had proposed
patches.



So, the advice to operators would be:

- You may want to choose tunneled=False for improved block
  migration capabilities, but this limitation will go away in
  future.
- You may want to choose tunneled=False if you wish to trade an
  encrypted transport for a (potentially negligible) performance
  improvement.

Does that make sense?

As for how to name the option, and as I said in the review, I
think it makes sense to be straightforward here and make it
clearly about choosing to disable libvirt's tunneled transport.

If we name it any other way, I think our explanation for
operators will immediately jump to explaining (a) that it
influences the TUNNELLED flag, and (b) the differences between
the tunneled and native transports. So, if we're going to have to
talk about tunneled versus native, why obscure that detail?


Ultimately we need to recognise that libvirt's tunnelled mode was
added as a hack, to work around the fact that QEMU lacked any kind of
native security capabilities & didn't appear likely to ever get
them at that time.  As well as not working with modern NBD based
block device encryption, it really sucks for performance because it
introduces many extra data copies. So it is going to be quite poor
for large VMs with a heavy rate of data dirtying.

The only long term relative "benefit" of tunnelled mode is that it
avoids the need to open extra firewall ports.

IMHO, the long term future is to *never* use tunnelled mode for
QEMU. This will be viable when my support for native TLS support in
QEMU migration + NBD protocols is merged. I'm hopeful this will be
for QEMU 2.6.


But, Pawel strongly disagrees.

One last point I'd make is this isn't about adding a *new*
configuration capability for operators. As we deprecate and
remove these config options

Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-06 Thread Serg Melikyan
Regarding openstack/murano-apps: in this repo we use the stable/kilo
branch not as a version of the apps, but rather as a compatibility
marker for the apps. Applications published in this branch are
compatible with Murano from stable/kilo. Given that there is a lag
between an upstream release and the release our users actually run (and
develop applications against), I would consider leaving the stable/kilo
branch in openstack/murano-apps.

On Mon, Jun 6, 2016 at 8:30 AM, Kirill Zaitsev  wrote:
> I’ve submitted a request to release all the unreleased code we still have
> for murano repositories https://review.openstack.org/#/c/325359/ ; It would
> be really great if we could get one final release before EOL’ing kilo in
> murano, murano-dashboard, murano-agent and python-muranoclient, if that is
> possible. After that I believe kilo branches in those repos are ready to be
> EOL’ed and deleted.
>
> --
> Kirill Zaitsev
> Software Engineer
> Mirantis, Inc
>
> On 3 June 2016 at 09:26:58, Tony Breeds (t...@bakeyournoodle.com) wrote:
>
> On Thu, Jun 02, 2016 at 08:31:43PM +1000, Tony Breeds wrote:
>> Hi all,
>> In early May we tagged/EOL'd several (13) projects. We'd like to do a
>> final round for a more complete set. We looked for projects that meet one or
>> more
>> of the following criteria:
>> - The project is openstack-dev/devstack, openstack-dev/grenade or
>> openstack/requirements
>> - The project has the 'check-requirements' job listed as a template in
>> project-config:zuul/layout.yaml
>> - The project is listed in governance:reference/projects.yaml and is
>> tagged
>> with 'release:managed' or 'stable:follows-policy' (or both).
>
> So we've had a few people opt into EOL'ing which is great.
>
> I've moved the lists from paste.o.o to a gist. The reason for that is I can
> update them, the URL doesn't change and there is a revision history (of
> sorts).
>
> The 2 lists are now at:
> https://gist.github.com/tbreeds/7de812a5d363fab4bd425beae5084c87
>
> Given that there are now only 39 repos that are not (yet) EOL'ing I'm
> inclined
> to default to EOL'ing everything that isn't a deployment project.
>
> That is to say I'm suggesting that:
> openstack/cloudkitty cloudkitty 1
> openstack/cloudkitty-dashboard cloudkitty 1
> openstack/cloudpulse BigTent
> openstack/compute-hyperv BigTent
> openstack/fuel-plugin-purestorage-cinder BigTent
> openstack/group-based-policy BigTent 4
> openstack/group-based-policy-automation BigTent
> openstack/group-based-policy-ui BigTent
> openstack/murano-apps murano 3
> openstack/nova-solver-scheduler BigTent
> openstack/openstack-resource-agents BigTent
> openstack/oslo-incubator oslo
> openstack/powervc-driver BigTent 1
> openstack/python-cloudkittyclient cloudkitty 1
> openstack/python-cloudpulseclient BigTent
> openstack/python-group-based-policy-client BigTent
> openstack/swiftonfile BigTent
> openstack/training-labs Documentation
> openstack/yaql BigTent 2
>
> Get added to the EOL list.
>
> With the following hanging back for a while as they might need small tweaks
> based on the kilo-eol tag.
>
> openstack/cookbook-openstack-bare-metal Chef OpenStack
> openstack/cookbook-openstack-block-storage Chef OpenStack
> openstack/cookbook-openstack-client Chef OpenStack
> openstack/cookbook-openstack-common Chef OpenStack
> openstack/cookbook-openstack-compute Chef OpenStack
> openstack/cookbook-openstack-dashboard Chef OpenStack
> openstack/cookbook-openstack-data-processing Chef OpenStack
> openstack/cookbook-openstack-database Chef OpenStack
> openstack/cookbook-openstack-identity Chef OpenStack
> openstack/cookbook-openstack-image Chef OpenStack
> openstack/cookbook-openstack-integration-test Chef OpenStack
> openstack/cookbook-openstack-network Chef OpenStack
> openstack/cookbook-openstack-object-storage Chef OpenStack
> openstack/cookbook-openstack-ops-database Chef OpenStack
> openstack/cookbook-openstack-ops-messaging Chef OpenStack
> openstack/cookbook-openstack-orchestration Chef OpenStack
> openstack/cookbook-openstack-telemetry Chef OpenStack
> openstack/openstack-ansible OpenStackAnsible
> openstack/openstack-chef-repo Chef OpenStack
> openstack/packstack BigTent
>
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Serg Melikyan, Development Manager at Mirantis, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [keystone][all] Incorporating performance feedback into the review process

2016-06-06 Thread Perry, Sean


From: Lance Bragstad [lbrags...@gmail.com]
Sent: Friday, June 03, 2016 1:57 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [keystone][all] Incorporating performance feedback 
into the review process

Here is a list of focus points as I see them so far:


  *   Dedicated hardware is a requirement in order to achieve somewhat 
consistent results
  *   Tight loop micro benchmarks
  *   Tests highlighting the performance cases we care about
  *   The ability to determine a sane control
  *   The ability to tests proposed patches, compare them to the control, and 
leave comments on reviews
  *   Reproducible setup and test runner so that others can run these against a 
dedicated performance environment
  *   Daily snapshots of performance published publicly (nice to have)

Time series data (think RRD and family) is very small and cheap. The graphs are
pretty trivial to generate from the data. The hard part, as Morgan pointed out,
is finding a place to run it.
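
To illustrate how trivial the graphing side can be, a sketch assuming samples
are stored as (unix timestamp, seconds) rows in a CSV file; the file name and
layout are invented for the example:

import csv
from datetime import datetime

import matplotlib
matplotlib.use('Agg')  # render without a display
import matplotlib.pyplot as plt

times, values = [], []
with open('token_create_timings.csv') as f:
    for ts, secs in csv.reader(f):
        times.append(datetime.fromtimestamp(float(ts)))
        values.append(float(secs))

plt.plot(times, values)
plt.ylabel('token create latency (s)')
plt.savefig('token_create_trend.png')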

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting minutes - 06/06/2016

2016-06-06 Thread Renat Akhmerov
Thanks all for joining today’s meeting and having a productive discussion! 

Meeting minutes and log:
http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-06-06-16.00.html
 

http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-06-06-16.00.log.html
 


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] [keystone] dogpile.cache 0.6.0 released

2016-06-06 Thread Mike Bayer


Hey all -

I've released dogpile.cache 0.6.0.  As discussed earlier in this thread, 
the big change in this is that we've retired the dogpile.core package; 
while that package will stay out on pypi as it is, the actual 
implementation has been rolled into dogpile.cache itself and the 
namespace packaging logic is removed.


In order to prevent any namespace-packaging debacles, the "dogpile.core" 
path itself is no longer used internally by dogpile.cache; however the 
package itself will still provide a dogpile.core import point for 
applications which may have been using dogpile.core directly (this 
should be very rare).
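
As a quick smoke test of the compatibility claim (a sketch, assuming
dogpile.cache 0.6.0 is installed; not taken from the release notes):

import dogpile.core          # legacy import point, kept for compatibility
from dogpile.cache import make_region

region = make_region().configure('dogpile.cache.memory')
region.set('k', 'v')
assert region.get('k') == 'v'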


Changelog for 0.6.0 is at:

http://dogpilecache.readthedocs.io/en/latest/changelog.html#change-0.6.0






On 06/01/2016 04:54 PM, Mike Bayer wrote:

Just a reminder, dogpile.cache is doing away with namespace packaging in
version 0.6.0, due for the end of this week or sometime next week.
dogpile.core is being retired and left as-is.   No changes should be
needed by anyone using only dogpile.cache.



On 05/30/2016 06:17 PM, Mike Bayer wrote:

Hi all -

Just a heads up what's happening for dogpile.cache, in version 0.6.0 we
are rolling the functionality of the dogpile.core package into
dogpile.cache itself, and retiring the use of namespace package naming
for dogpile.cache.

Towards retiring the use of namespace packaging, the magic
"declare_namespace() / extend_path()" logic is being removed from the
file dogpile/__init__.py from dogpile.cache, and the "namespace_package"
directive being removed from setup.py.

However, currently, the plan is to leave alone entirely the
"dogpile.core" package as is, and to no longer use the name
"dogpile.core" within dogpile.cache at all; the constructs that it
previously imported from "dogpile.core" it now just imports from
"dogpile" and "dogpile.util" from within the dogpile.cache package.

The caveat here is that Python environments that have dogpile.cache
0.5.7 or earlier installed will also have dogpile.core 0.4.1 installed
as well, and dogpile.core *does* still contain the namespace package
verbiage as before.   From our testing, we don't see there being any
problem with this, however, I know there are people on this list who are
vastly more familiar than I am with namespace packaging and I would
invite them to comment on this as well as on the gerrit review [1] (the
gerrit invites anyone with a Github account to register and comment).

Note that outside of the Openstack world, there are a very small number
of applications that make use of dopgile.core directly.  From our
grepping we can find no mentions of "dogpile.core" in any Openstack
requirements files.For these applications, if a Python environment
already has dogpile.core installed, this would continue to be used;
however dogpile.cache also includes a file dogpile/core.py which sets up
a compatible namespace, so that applications which list only
dogpile.cache in their requirements but make use of "dogpile.core"
constructs will continue to work as before.

I would ask that anyone reading this to please alert me to anyone, any
project, or any announcement medium which may be necessary in order to
ensure that anyone who needs to be made aware of these changes are aware
of them and have vetted them ahead of time.   I would like to release
dogpile.cache 0.6.0 by the end of the week if possible.  I will send
this email a few more times to the list to make sure that it is seen.


[1] https://gerrit.sqlalchemy.org/#/c/89/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN

2016-06-06 Thread Anita Kuno
On 05/26/2016 02:50 AM, zhangyali (D) wrote:
> Hi all,
> 
> I am interested in the VPNaaS project in Neutron. Now I notice that only 
> IPsec tunnel has completed, but other types of VPN, such as, MPLS/BGP, have 
> not completed. I'd like to know how's going about MPLS/BGP vpn? What's the 
> mechanism or extra work need to be done? 
> 
> Thanks.
> 
> Best,
> Yali
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

Please start your own thread for a new topic. Replying to post from an
existing thread just confuses the existing thread, as this post for a
new topic is nested within a different thread.
http://lists.openstack.org/pipermail/openstack-dev/2016-May/thread.html#94076

Also you might find suggestions for how to form good email subjects
helpful: http://www.catb.org/esr/faqs/smart-questions.html#bespecific

Thank you,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Virtual GPU provisioning

2016-06-06 Thread Bob Ball
How should we expose Virtual GPUs to Nova?

Various discussions have happened on the original spec submission for Mitaka[1] 
and the recent submission for Newton[2], however there are a few questions 
which need further discussion.  But before those question (at the end), some 
thinking behind the current plans.


* What sort of model should be use for Virtual GPU provisioning?
Virtual GPUs could be considered to be devices like PCI devices or they could 
be considered to be a collection of resources.  Some hypervisor implementations 
(e.g. XenServer) present pre-defined virtual GPU model types to administrators, 
but Hyper-V's remote FX spec[3] is using a resource-based approach for graphics 
provisioning.
There is also a lack of consistency between these two approaches in hardware: 
Some (Intel's GVT-g) could theoretically support per-GPU configuration, but 
this is not supported in other cases (e.g. NVIDIA's vGPU) and not possible in 
the case of AMD's MxGPU (which is essentially SR-IOV).

As we want to handle all vendors in the same way, the suggestion is that the 
hypervisor/driver should expect to expose discrete pre-defined virtual GPU 
model types, rather than using a collection of resources.


* Exposing GPU model types to Nova
Assuming that we have pre-defined model types, how do these get exposed to 
Nova?  The spec proposed for Mitaka suggested a new VGPU resource type; 
however, qemu's expected implementation of virtualised GPUs ('mediated devices') 
is to use VFIO to present PCI-like devices to qemu[4].  In addition, a GPU 
device could conceptually be passed through to a single VM or split up and 
passed through to multiple VMs, which is very similar to SR-IOV.  As such, the 
spec proposed for Newton suggested re-using the PCI passthrough code.

The issue comes because these devices are not strictly SR-IOV devices, and may 
not exist as PCI devices.  Therefore exposing these using the PCI address 
format introduces complications which either have to be worked around with some 
'creative' use of the PCI address format to create the fake PCI devices, or 
with refactoring the PCI passthrough code to support multiple addressing types, 
effectively making the PCI passthrough code work with any device available on 
the host.

We could then have different 'address_type's to represent different types of 
resources that were to be made available to VMs.
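
To make that concrete, here is a purely hypothetical sketch of such a
structure; every field name below is invented for illustration and none of
it is an agreed Nova design:

# hypothetical host devices keyed by an address_type discriminator
pci_vf = {
    'address_type': 'pci',
    'address': {'domain': '0000', 'bus': '81',
                'slot': '00', 'function': '1'},
    'dev_type': 'type-VF',
}

vgpu = {
    'address_type': 'vgpu',
    'address': {'gpu_group': 'grp-0', 'vgpu_type': 'GRID K120Q'},
    'dev_type': 'type-VGPU',
}

def devices_of_type(devices, wanted):
    # a scheduler would match on address_type before interpreting the
    # type-specific address payload
    return [d for d in devices if d['address_type'] == wanted]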

So, some questions:

How much value is there in genericising the PCI passthrough code to support 
multiple address types?  What other address types might we want to use - e.g. 
USB, SCSI, IDE?

How would we represent an address for virtual GPUs?  Would we define a format 
to be used by each driver, or allow each driver to specify its own address 
format?  See also virt-device-role-tagging [5] which uses various addresses for 
different busses.

The addition of an address_type would likely need a refactor of PCIDevice to be 
a HostDevice to minimize confusion. How far should that refactor go?  (e.g. 
should it include renaming the pci_stats column to host_device_stats?)

I'm sure there are other questions... but I can't think of them now! :)

Bob

[1] https://review.openstack.org/#/c/229351/
[2] https://review.openstack.org/#/c/280099/
[3] https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx
[4] https://lists.gnu.org/archive/html/qemu-devel/2016-05/msg04201.html
[5] 
https://git.openstack.org/cgit/openstack/nova-specs/tree/specs/newton/approved/virt-device-role-tagging.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Ihar Hrachyshka

> On 06 Jun 2016, at 16:44, Sean M. Collins  wrote:
> 
> I agree, it would be convenient to have something similar to what Nova
> has:
> 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/versions.py#L59-L60
> 
> We should put some resources behind implementing micro versioning and we
> could end up with something similar.
> 
> It would also be nice to have the agents report their version, so it
> bubbles up into the agent-list REST API calls.

Agents already report a list of object versions known to them:

https://github.com/openstack/neutron/blob/master/neutron/db/agents_db.py#L258

In theory, we can deduce the version from there. The versions are reported 
through state reports. Not sure if it's exposed in the API.
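
For anyone who wants to poke at this, a sketch of inspecting what agents
report, assuming the object versions surface in the agent's 'configurations'
field via the agent-list API (which, as noted above, is unconfirmed):

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

for agent in neutron.list_agents()['agents']:
    # 'resource_versions' appearing here is an assumption
    versions = agent.get('configurations', {}).get('resource_versions')
    print(agent['agent_type'], agent['host'], versions or 'not reported')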

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][vpnaas]Question about MPLS VPN

2016-06-06 Thread Mathieu Rohon
Hi,

sorry for the late reply, but if you want to attach a neutron network or a
neutron router to an existing MPLS based BGP L3 VPN, you can use the BGPVPN
project [1], with its API and one of its backends, bagpipe [2], being its
open source and reference implementation.

Those projects have dedicated devstack plugins, so it's quite easy to
experiment.

[1]http://git.openstack.org/cgit/openstack/networking-bgpvpn
[2]http://git.openstack.org/cgit/openstack/networking-bagpipe

Mathieu

On Thu, May 26, 2016 at 5:29 PM, Kosnik, Lubosz 
wrote:

> I had a discussion with a few operators, and after what I heard about VPNaaS
> I can tell that we are not supposed to help with that implementation.
> Maybe we should work on service VMs and prepare an implementation of VPNaaS
> using them, using some prebuilt images like VyOS or others.
>
> Lubosz Kosnik
> Cloud Software Engineer OSIC
> lubosz.kos...@intel.com
>
> > On May 26, 2016, at 9:39 AM, Ihar Hrachyshka 
> wrote:
> >
> >
> >> On 26 May 2016, at 16:23, Kosnik, Lubosz 
> wrote:
> >>
> >> You should read e-mails on the ML. VPNaaS will be removed in the next 6 months
> from the repo. You need to look into something else like starting a VyOS image,
> pfSense or other.
> >
> > Strictly speaking, vpnaas is on probation right now, and if interested
> parties actually revive the project, it may stay past those 6 months. That
> said, I haven’t heard about anyone stepping in since the summit.
> >
> > Ihar
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][senlin] python-senlinclient 0.5.0 release (newton)

2016-06-06 Thread no-reply
We are tickled pink to announce the release of:

python-senlinclient 0.5.0: OpenStack Clustering API Client Library

This release is part of the newton release series.

For more details, please see below.

0.5.0
^^^^^


New Features
************

* Added command for node-check and node-recover.


Upgrade Notes
*************

* OSC commands for cluster scaling are changed from 'cluster scale
  in' and 'cluster scale out' to 'cluster shrink' and 'cluster expand'
  respectively.

Changes in python-senlinclient 0.4.1..0.5.0
---

685ab4d Added release notes for a new release
f8fb806 Updated from global requirements
6114da4 Updated from global requirements
761c5ed Updated from global requirements
c225f4c Updated from global requirements
a7a12f4 Updated from global requirements
10cac8c Updated from global requirements
d82d836 Updated from global requirements
3121465 Add reno for release notes management
4b7c6eb Trival fix: Update README
c5d8f36 Updated from global requirements
a336d4e Updated from global requirements
c67fe18 Correct some typos
18b5494 Updated from global requirements
7bf8bb3 Pramater doesn't align to comments
d1de6ec Removed the invalid link for Module Index
e42edf4 Add link to API doc in client module
ece2139 Revert "Remove senlin CLI commands"
02f5a76 Support more parameters for senlinclient creation
9c261fd Rename cluster scaling command
3bda3b9 Remove senlin CLI commands
735b018 Refactor osc support
040dcc8 Add deprecation warnings for senlin commands
decbc7c Add OSC command for senlin node-check/recover
ed8c132 Spelling mistakes on 'Clustering service command-line client' page

Diffstat (except docs and test files)
-

.gitignore |   6 +
README.md  |   4 -
README.rst |   2 +-
releasenotes/notes/.placeholder|   0
.../cluster-scaling-command-e0d96f2cd0c7ca5f.yaml  |   5 +
.../notes/node-check-recover-469bf81db9f9f1ec.yaml |   3 +
releasenotes/source/_static/.placeholder   |   0
releasenotes/source/_templates/.placeholder|   0
releasenotes/source/conf.py| 277 +++
releasenotes/source/index.rst  |   8 +
releasenotes/source/unreleased.rst |   5 +
requirements.txt   |  10 +-
senlinclient/common/exc.py |   2 +-
senlinclient/common/sdk.py |  13 +-
senlinclient/osc/__init__.py   |   0
senlinclient/osc/plugin.py |  46 --
senlinclient/osc/v1/__init__.py|   0
senlinclient/osc/v1/action.py  | 146 
senlinclient/osc/v1/build_info.py  |  45 --
senlinclient/osc/v1/cluster.py | 774 
senlinclient/osc/v1/cluster_policy.py  | 156 
senlinclient/osc/v1/event.py   | 133 
senlinclient/osc/v1/node.py| 334 -
senlinclient/osc/v1/policy.py  | 273 ---
senlinclient/osc/v1/policy_type.py |  68 --
senlinclient/osc/v1/profile.py | 311 
senlinclient/osc/v1/profile_type.py|  68 --
senlinclient/osc/v1/receiver.py| 257 ---
senlinclient/plugin.py |  46 ++
senlinclient/v1/action.py  | 146 
senlinclient/v1/build_info.py  |  45 ++
senlinclient/v1/client.py  | 237 ++
senlinclient/v1/cluster.py | 774 
senlinclient/v1/cluster_policy.py  | 156 
senlinclient/v1/event.py   | 133 
senlinclient/v1/node.py| 388 ++
senlinclient/v1/policy.py  | 273 +++
senlinclient/v1/policy_type.py |  68 ++
senlinclient/v1/profile.py | 311 
senlinclient/v1/profile_type.py|  68 ++
senlinclient/v1/receiver.py| 257 +++
senlinclient/v1/shell.py   | 112 ++-
setup.cfg  |  96 +--
test-requirements.txt  |   5 +-
tools/senlinrc |   2 +-
tox.ini|   5 +-
76 files changed, 6673 insertions(+), 5905 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1b408d1..e0318fe 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-Babel>=1.3 # BSD
+Babel>=2.3.4 # BSD
@@ -9 +9 @@ PrettyTable<0.8,>=0.7 # BSD
-openstacksdk>=0.8.1 # Apache-2.0
+openstacksdk>=0.8.6 # Apache-2.0
@@ -12,2 +12,2 @@ oslo.serializat

Re: [openstack-dev] [nova] [placement] conducting, ovo and the placement api

2016-06-06 Thread Jay Pipes

On 06/04/2016 10:37 AM, Dan Smith wrote:

There was a conversation earlier this week in which, if I understood
things correctly, one of the possible outcomes was that it might make
sense for the new placement service (which will perform the function
currently provided by the scheduler in Nova) to only get used over its
REST API, as this will ease its extraction to a standalone service.


FWIW, I think that has been the long term expectation for a while.
Eventually that service is separate, which means no RPC to/from Nova
itself. The thing that came up last week was more a timing issue of
whether we go straight to that in newton or stop at an intermediate
stage where we're still using RPC. Because of the cells thing, I was
hoping to be able to go straight to HTTP, which would be slightly nicer
than what is actually an upcall from the cell to the API (even though
we're still pretty flat).


Agreed, and the above matches my thinking from the status update email I 
sent earlier today to the openstack-dev@ ML. I believe the only 
difference between your thoughts on this and my own are the 
implementation details of how those placement HTTP API calls would be 
made. I believe you want to see those calls done in the 
nova.objects.Inventory[List] object whereas I was hoping to have the 
resource tracker instead call a placement_client.update_inventory() call 
which would be responsible for talking to the placement REST API 
endpoint and the placement REST API endpoint would save inventory state 
to the Nova API database via calls to a 
nova.objects.ResourceProvider.update_inventory() method.
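
Roughly, a sketch of the thin client I have in mind (the endpoint path and
payload shape are assumptions for illustration, since the placement API is
still being defined):

import requests

class PlacementClient(object):
    def __init__(self, endpoint):
        self.endpoint = endpoint.rstrip('/')
        self.session = requests.Session()

    def update_inventory(self, rp_uuid, inventories):
        # the resource tracker hands inventory here; this client owns
        # the HTTP conversation with the placement service
        url = '%s/resource_providers/%s/inventories' % (self.endpoint,
                                                        rp_uuid)
        resp = self.session.put(url, json={'inventories': inventories})
        resp.raise_for_status()
        return resp.json()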



* The way to scale is to add more placement API servers and more
   nodes in the galera (or whatever) cluster. The API servers just
   talk to the persistence layer themselves. There are no tasks to
   "conduct".


I'm not sure that we'll want to consider this purely a data service. The
new resource filter approach is mostly a data operation, but it's not
complete -- it doesn't actually select a resource. For that we still
need to run some of the more complicated scheduler filters. I'm not sure
that completely dispensing with the queue (or queuing of some sort) and
doing all the decisions in the API worker while we hold the HTTP client
waiting is the right approach. I'd have to think about it more.


Yeah, we can talk about this more in the future but I don't believe we 
need to complicate the current proposal any further than it already is.



* If API needs to be versioned then it can version the external
   representations that it uses.


It certainly needs to be versioned, and certainly that versioned
external representation should be decoupled from the actual persistence.


Agreed.


* Nova-side versioned objects that are making http calls in
   themselves. For example, an Inventory object that knows how to
   save itself to the placement API over HTTP. Argh. No. Magical
   self-persisting objects are already messy enough. Adding a second
   medium over which persistence can happen is dire. Let's do
   something else please.


Um, why? What's the difference between a remotable object that uses
rabbit and RPC to ask a remote service to persist a thing, versus a
remotable object that uses HTTP to do it? It seems perfectly fine to me
to have the object be our internal interface, and the implementation
behind it does whatever we want it to do at a given point in time (i.e.
RPC to the scheduler or HTTP to the placement service). The
indirection_api in ovo is pluggable for a reason...


* A possible outcome here is that we're going to have objects in Nova
   and objects in Placement that people will expect to co-evolve.


I don't think so. I think the objects in Nova will mirror the external
representation of the placement API, much like the nova client tracks
evolution in nova's external API.


++

> As placement tries to expand its scope

it is likely to need to evolve its internal data structures like
anything else.


Agreed.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-06 Thread John McDowall
Juno,

I was going to take a pass at 3 today to see if I can get a working use case 
before I submit the patches.
On 1 I will reach out to the networking-sfc team to get the design spec started.
I will start looking at how to get the patches submitted too.

So with your and Srilatha’s help I think we can get it done in the next few 
days.

j

From: Na Zhu <na...@cn.ibm.com>
Date: Sunday, June 5, 2016 at 11:22 PM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, OpenStack
Development Mailing List <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

John,

Thanks for working overtime last weekend. Now we have the following work 
items to do:
1, submit design spec to networking-sfc
2, submit the RFC to ovs community
3, debug end-to-end about your code changes.
4, submit the initial patch to networking-sfc
5, submit the initial patch to ovs community
6, submit the initial patch to networking-ovn

Do you have a plan to start #1 and #2 now? I think they can be done in parallel 
with the other tasks.
Srilatha and I can start #4 and #6; we need to look at your code changes, 
write the unit test scripts for them, and then submit them to the community. 
What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall <jmcdow...@paloaltonetworks.com>
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, "OpenStack
Development Mailing List" <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>
Date:2016/06/06 11:35
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN




Juno and team,

I have written and compiled (but not tested) the ovs/ovn interface to 
networking-ovn and similarly I have written but not tested the IDL interfaces 
on the networking-ovn side. I will put it all together tomorrow and start 
debugging end to end. I know I am going to find a lot of issues as it is a 
major rewrite from my original interface to networking-sfc – it is the right 
path (IMHO) just a little more work than I expected.

I have merged my repos with the upstream masters and I will keep them sync’ed 
so if you want to take a look and start thinking where you can help it would be 
really appreciated.

Regards

John

From: Na Zhu <na...@cn.ibm.com>
Date: Saturday, June 4, 2016 at 6:30 AM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, OpenStack
Development Mailing List <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

Hi John,

OK, please keep me posted once you done, thanks very much.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall <jmcdow...@paloaltonetworks.com>
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, "OpenStack
Development Mailing List" <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>
Date:2016/06/03 13:15
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN




Juno

Whatever gets it done faster- let me get the three repos aligned. I need to get 
the ovs/ovn work done so networking-ovn can call it, and the networking-sfc can 
call networking-ovn.

Hopefully I will have it done tomorrow or over the weekend - let's touch base 
Monday or Sunday night.

Regards

John

Sent from my iPhone

On Jun 2, 2016, at 6:30 PM, Na Zhu <na...@cn.ibm.com> wrote:

Hi John,

I agree with submitting WIP patches to the community. Since you already did a 
lot of work on networking-sfc and networking-ovn, it is better that you submit 
the initial patches for networking-sfc and networking-ovn, and then Srilatha 
and I take over the patches. Do you have time to do it? If not, Srilatha and I 
can help to do it, and you will always be the co-author.




Regards,
Juno Zhu
IBM China Development Labs (CDL)

Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-06 Thread John McDowall
Juno,

Let me check – my intention was that the networking-sfc OVNB driver would 
configure all aspects of the port-chain and add the parameters to the 
networking-sfc db. Once all the parameters were in, the creation of a port-chain 
would call networking-ovn (passing a deep copy of the port-chain dict). Here I 
see networking-ovn acting only as a bridge into ovs/ovn (I did not add anything 
in the ovn plugin – not sure if that is the right approach). Networking-ovn 
calls into ovs/ovn and inserts the entire port-chain.

Thoughts?

j

From: Na Zhu <na...@cn.ibm.com>
Date: Monday, June 6, 2016 at 5:49 AM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, Ryan Moats
<rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>, "OpenStack
Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN

Hi John,

One question to confirm with you: I think the ovn flow classifier driver and 
ovn port chain driver should call the APIs which you added to networking-ovn to 
configure the northbound db sfc tables, right? I see that your networking-sfc 
ovn drivers do not call the APIs you added to networking-ovn; did you miss 
that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: Na Zhu/China/IBM@IBMCN
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: Srilatha Tangirala <srila...@us.ibm.com>, OpenStack Development
Mailing List <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, "disc...@openvswitch.org"
<disc...@openvswitch.org>
Date:2016/06/06 14:28
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn]
[networking-sfc] SFC andOVN




John,

Thanks for working overtime last weekend. Now we have the following work 
items to do:
1, submit design spec to networking-sfc
2, submit the RFC to ovs community
3, debug end-to-end about your code changes.
4, submit the initial patch to networking-sfc
5, submit the initial patch to ovs community
6, submit the initial patch to networking-ovn

Do you have a plan to start #1 and #2 now? I think they can be done in parallel 
with the other tasks.
Srilatha and I can start #4 and #6; we need to look at your code changes, 
write the unit test scripts for them, and then submit them to the community. 
What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From: John McDowall <jmcdow...@paloaltonetworks.com>
To: Na Zhu/China/IBM@IBMCN
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, "OpenStack
Development Mailing List" <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>
Date:2016/06/06 11:35
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN




Juno and team,

I have written and compiled (but not tested) the ovs/ovn interface to 
networking-ovn and similarly I have written but not tested the IDL interfaces 
on the networking-ovn side. I will put it all together tomorrow and start 
debugging end to end. I know I am going to find a lot of issues as it is a 
major rewrite from my original interface to networking-sfc – it is the right 
path (IMHO) just a little more work than I expected.

I have merged my repos with the upstream masters and I will keep them sync’ed 
so if you want to take a look and start thinking where you can help it would be 
really appreciated.

Regards

John

From: Na Zhu <na...@cn.ibm.com>
Date: Saturday, June 4, 2016 at 6:30 AM
To: John McDowall <jmcdow...@paloaltonetworks.com>
Cc: "disc...@openvswitch.org" <disc...@openvswitch.org>, OpenStack
Development Mailing List <openstack-dev@lists.openstack.org>,
Ryan Moats <rmo...@us.ibm.com>, Srilatha Tangirala <srila...@us.ibm.com>
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

Hi John,

OK, please keep me posted once you done, thanks very much.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New District, 
Shanghai, China (201203)



From:

Re: [openstack-dev] [nova] [placement] conducting, ovo and the placement api

2016-06-06 Thread Dan Smith
> I believe the only difference between your thoughts on this and my 
> own are the implementation details of how those placement HTTP API 
> calls would be made. I believe you want to see those calls done in 
> the nova.objects.Inventory[List] object whereas I was hoping to have 
> the resource tracker instead call a 
> placement_client.update_inventory() call which would be responsible 
> for talking to the placement REST API endpoint and the placement
> REST API endpoint would save inventory state to the Nova API database
> via calls to a nova.objects.ResourceProvider.update_inventory()
> method.

No, I'm not adamant about where they go. I suggested we put them in the
Inventory object purely to hide the where-does-it-go-when-I-save details
from the upper layers in compute. If you want compute to use different
models internally that it passes to the placement client, or have it
never really store them internally and just make calls into the
placement client when it has something to say, then that's fine with me.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-06 Thread Kirill Zaitsev
I’ve submitted a request to release all the unreleased code we still have
for murano repositories https://review.openstack.org/#/c/325359/ ; It would
be really great if we could get one final release before EOL’ing kilo in
murano, murano-dashboard, murano-agent and python-muranoclient, if that is
possible. After that I believe kilo branches in those repos are ready to be
EOL’ed and deleted.

-- 
Kirill Zaitsev
Software Engineer
Mirantis, Inc

On 3 June 2016 at 09:26:58, Tony Breeds (t...@bakeyournoodle.com) wrote:

On Thu, Jun 02, 2016 at 08:31:43PM +1000, Tony Breeds wrote:
> Hi all,
> In early May we tagged/EOL'd several (13) projects. We'd like to do a
> final round for a more complete set. We looked for projects that meet one or
more
> of the following criteria:
> - The project is openstack-dev/devstack, openstack-dev/grenade or
> openstack/requirements
> - The project has the 'check-requirements' job listed as a template in
> project-config:zuul/layout.yaml
> - The project is listed in governance:reference/projects.yaml and is
tagged
> with 'release:managed' or 'stable:follows-policy' (or both).

So we've had a few people opt into EOL'ing which is great.

I've moved the lists from paste.o.o to a gist. The reason for that is I can
update them, the URL doesn't change and there is a revision history (of
sorts).

The 2 lists are now at:
https://gist.github.com/tbreeds/7de812a5d363fab4bd425beae5084c87

Given that there are now only 39 repos that are not (yet) EOL'ing I'm
inclined
to default to EOL'ing everything that isn't a deployment project.

That is to say I'm suggesting that:
openstack/cloudkitty cloudkitty 1
openstack/cloudkitty-dashboard cloudkitty 1
openstack/cloudpulse BigTent
openstack/compute-hyperv BigTent
openstack/fuel-plugin-purestorage-cinder BigTent
openstack/group-based-policy BigTent 4
openstack/group-based-policy-automation BigTent
openstack/group-based-policy-ui BigTent
openstack/murano-apps murano 3
openstack/nova-solver-scheduler BigTent
openstack/openstack-resource-agents BigTent
openstack/oslo-incubator oslo
openstack/powervc-driver BigTent 1
openstack/python-cloudkittyclient cloudkitty 1
openstack/python-cloudpulseclient BigTent
openstack/python-group-based-policy-client BigTent
openstack/swiftonfile BigTent
openstack/training-labs Documentation
openstack/yaql BigTent 2

Get added to the EOL list.

With the following hanging back for a while as they might need small tweaks
based on the kilo-eol tag.

openstack/cookbook-openstack-bare-metal Chef OpenStack
openstack/cookbook-openstack-block-storage Chef OpenStack
openstack/cookbook-openstack-client Chef OpenStack
openstack/cookbook-openstack-common Chef OpenStack
openstack/cookbook-openstack-compute Chef OpenStack
openstack/cookbook-openstack-dashboard Chef OpenStack
openstack/cookbook-openstack-data-processing Chef OpenStack
openstack/cookbook-openstack-database Chef OpenStack
openstack/cookbook-openstack-identity Chef OpenStack
openstack/cookbook-openstack-image Chef OpenStack
openstack/cookbook-openstack-integration-test Chef OpenStack
openstack/cookbook-openstack-network Chef OpenStack
openstack/cookbook-openstack-object-storage Chef OpenStack
openstack/cookbook-openstack-ops-database Chef OpenStack
openstack/cookbook-openstack-ops-messaging Chef OpenStack
openstack/cookbook-openstack-orchestration Chef OpenStack
openstack/cookbook-openstack-telemetry Chef OpenStack
openstack/openstack-ansible OpenStackAnsible
openstack/openstack-chef-repo Chef OpenStack
openstack/packstack BigTent

Yours Tony.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] moving ansible-deploy driver patches to ironic-staging-drivers

2016-06-06 Thread Pavlo Shchelokovskyy
Hi all,

As you might have noticed, lately we (mostly myself and Yuri) have been working
on an experimental deployment driver that uses Ansible to provision the
node (spec proposed at [0]). We started the work in the main Ironic Gerrit
project and lately realized that those patches put a lot of unneeded strain
on the OpenStack CI. There are already about 10 patches refining the
implementation [1]. Each new patch/change-set/rebase triggers a lot of CI
jobs (most of them quite heavy) that do not test our implementation at all.

We decided to move the development to ironic-staging-drivers, where the current
PoC implementation belongs better. The ironic-staging-drivers project has far
fewer default CI jobs, and we hope we’ll have a better chance there to
iterate faster without stomping on Ironic’s CI. The ironic-staging-drivers repo
also looks like a good place to put the corresponding bootstrap-image
building code we developed along with the driver implementation.

I have proposed a single commit capturing the current state of our
implementation to ironic-staging-drivers at [2]. We kindly ask the
ironic-staging-drivers core team to bear with us as we move
forward, and hope for their support. When the driver’s value is proven it
should be re-proposed to the main tree.

For those who expressed interest, the code is and will remain easily
accessible in its new location.

[0] https://review.openstack.org/#/c/241946
[1] https://review.openstack.org/#/q/topic:bug/1526308
[2] https://review.openstack.org/325974

Best regards,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Andreas Scheuring
Since [1], the OVS agent reports whether it uses hybrid plug. At least
the hybrid plug flag is part of the agent state; I haven't tested whether
it's visible via the API. Might that be helpful?

[1]https://review.openstack.org/#/c/311814/
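
If that flag is exposed, a minimal sketch of how one might read it from the
agent-list API (assuming a python-neutronclient Client as `neutron`; the
'ovs_hybrid_plug' key is an assumption based on [1]):

    # Hypothetical check of the OVS agents' reported configuration.
    agents = neutron.list_agents(agent_type='Open vSwitch agent')['agents']
    for agent in agents:
        # 'ovs_hybrid_plug' inside the 'configurations' blob is assumed
        # from the patch in [1]; the exact key may differ.
        print(agent['host'], agent['configurations'].get('ovs_hybrid_plug'))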

-- 
-
Andreas 
IRC: andreas_s (formerly scheuran)



On Mo, 2016-06-06 at 14:44 +, Sean M. Collins wrote:
> I agree, it would be convenient to have something similar to what Nova
> has:
> 
> https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/versions.py#L59-L60
> 
> We should put some resources behind implementing micro versioning and we
> could end up with something similar.
> 
> It would also be nice to have the agents report their version, so it
> bubbles up into the agent-list REST API calls.
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Virtual midcycle date poll

2016-06-06 Thread Jim Rollenhagen
By the way, I created an etherpad for the midcycle to start bringing in
ideas. You know what to do. :)

https://etherpad.openstack.org/p/ironic-newton-midcycle

// jim

On Wed, Jun 01, 2016 at 02:45:33PM -0400, Jim Rollenhagen wrote:
> On Thu, May 19, 2016 at 09:25:18AM -0400, Jim Rollenhagen wrote:
> > Hi Ironickers,
> > 
> > We decided in our last meeting that the midcycle for Newton will again
> > be virtual. Now, we need to choose a date. Please indicate which options
> > work for you (more than one may be selected):
> > 
> > http://doodle.com/poll/gpug7ynd9fn4rdfe
> > 
> > I'll close this poll two Mondays from now, May 30.
> > 
> > Note that this will be similar to the last midcycle; likely split up
> > into two sessions. Last time was 1500-2000 UTC and -0400 UTC. If
> > that worked for folks, we'll do the same times again.
> 
> June 20-22 won, with the votes being 18 to 14.
> 
> The actual dates UTC will be something like:
> 
> June 20 1500-2000
> June 21 -0400
> June 21 1500-2000
> June 22 -0400
> June 22 1500-2000
> June 23 -0400
> 
> I'll send out communication channels and such before the end of next
> week.
> 
> See you all there!
> 
> // jim
> 
> > 
> > Thanks!
> > 
> > // jim
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Andreas Scheuring
Is there a chance to get rid of this vif-plugged event at all? E.g. by
transitioning it to a REST API interface? As far as I know this is the
only RPC interface between neutron and nova.


-- 
-
Andreas 
IRC: andreas_s (formerly scheuran)



On Mo, 2016-06-06 at 20:25 +0900, Akihiro Motoki wrote:
> Hi,
> 
> If I understand correctly, what you need is to expose the neutron
> behavior through the API or something. In this particular case, neutron
> needs to send a vif-plugged event when it detects some event in
> the data plane (VIF plugging in OVS or some virtual switch). Thus I
> think the question can be generalized to whether we expose a
> capability (such that neutron server behaves in XXX way) through API
> (API version? extension?). For example, do we have an extension to
> expose that neutron supports the event callback mechanism?
> 
> I also think the important point is that this is a deployment topic.
> Operators are responsible for deploying the correct combination
> of nova and neutron.
> 
> Honestly I am not sure we need to expose this kind of things through
> API. Regarding the current event callback mechanism, we assume that
> operators deploy the expected combination of releases of nova and
> neutron. Can't we assume that operators deploy Newton nova and neutron
> when they want to use live-migration vif-plugging support?
> 
> Akihiro
> 
> 2016-06-06 17:06 GMT+09:00 Oleg Bondarev :
> > Hi,
> >
> > There are cases where it would be useful to know the version of Neutron (or
> > any other project) from API, like during upgrades or in cross-project
> > communication cases.
> > For example in https://review.openstack.org/#/c/246910/ Nova needs to know
> > if Neutron sends vif-plugged event during live migration. To ensure this it
> > should be enough to know Neutron is "Newton" or higher.
> >
> > Not sure why it wasn't done before (or was it and I'm just blind?) so the
> > question to the community is what are possible issues/downsides of exposing
> > code version through the API?
> >
> > Thanks,
> > Oleg
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Kolla][ovs-discuss] error when starting neutron-openvswitch-agent service

2016-06-06 Thread Qiao, Liyong
6: ovs-system:  mtu 1500 qdisc noop state DOWN
link/ether 3e:c8:1d:8e:b5:5b brd ff:ff:ff:ff:ff:ff
7: br-ex:  mtu 1500 qdisc noop state DOWN
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
8: br-int:  mtu 1500 qdisc noop state DOWN
link/ether 32:0d:72:8d:d6:42 brd ff:ff:ff:ff:ff:ff
9: br-tun:  mtu 1500 qdisc noop state DOWN
link/ether ea:d4:72:22:e2:4f brd ff:ff:ff:ff:ff:ff


I noticed that these devices are not in the UP state; you'd better check them
first.

Best Regards,
Qiao, Liyong (Eli) OTC SSG Intel

Best regards,
Qiao Liyong, Open Source Technology Center, Software and Services Group,
Intel (China) Co., Ltd.



From: hu.zhiji...@zte.com.cn [mailto:hu.zhiji...@zte.com.cn]
Sent: Monday, June 06, 2016 6:54 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Neutron][Kolla][ovs-discuss] error when starting 
neutron-openvswitch-agent service

Hi Guys,

I am new to Neutron, Kolla, and OVS. I was trying to deploy Mitaka on CentOS in 
an all-in-one environment using Kolla. After a successful deployment I realized 
that I should disable the NetworkManager service, roughly according to: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/3/html/Installation_and_Configuration_Guide/Disabling_Network_Manager.html

But after I disabled NetworkManager and restarted the network service (the host 
machine probably also restarted), I could not ping my gateway through the 
external interface.

Here is the relevant log of ovs:

2016-06-06 09:19:37.278 1 INFO neutron.common.config [-] Logging enabled!
2016-06-06 09:19:37.283 1 INFO neutron.common.config [-] 
/usr/bin/neutron-openvswitch-agent version 8.0.0
2016-06-06 09:19:43.035 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Mapping physical network 
physnet1 to bridge br-ex
2016-06-06 09:19:45.236 1 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-ex']. Exception: Exit code: 1; 
Stdin: ; Stdout: ; Stderr: ovs-vsctl: no row "int-br-ex" in table Interface

2016-06-06 09:19:49.979 1 INFO neutron.agent.l2.extensions.manager 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Loaded agent extensions: []
2016-06-06 09:19:52.185 1 WARNING neutron.agent.securitygroups_rpc 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Firewall driver 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver doesn't 
accept integration_bridge parameter in __init__(): __init__() got an unexpected 
keyword argument 'integration_bridge'
2016-06-06 09:19:53.204 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Agent initialized 
successfully, now running...
2016-06-06 09:19:53.733 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Configuring tunnel 
endpoints to other OVS agents



I use enp0s35 as both the VIP interface and the external interface because the 
host only has one interface...



Here is the ip addr  result before the deployment of enp0s35:

2: enp0s25:  mtu 1500 qdisc pfifo_fast state 
UP qlen 1000
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25
   valid_lft 10429sec preferred_lft 10429sec
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link
   valid_lft forever preferred_lft forever


Here is the ip addr result after the deployment

2: enp0s25:  mtu 1500 qdisc pfifo_fast master 
ovs-system state UP qlen 1000
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25
   valid_lft 7846sec preferred_lft 7846sec
inet 10.43.114.149/32 scope global enp0s25
   valid_lft forever preferred_lft forever
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link
   valid_lft forever preferred_lft forever
6: ovs-system:  mtu 1500 qdisc noop state DOWN
link/ether 3e:c8:1d:8e:b5:5b brd ff:ff:ff:ff:ff:ff
7: br-ex:  mtu 1500 qdisc noop state DOWN
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
8: br-int:  mtu 1500 qdisc noop state DOWN
link/ether 32:0d:72:8d:d6:42 brd ff:ff:ff:ff:ff:ff
9: br-tun:  mtu 1500 qdisc noop state DOWN
link/ether ea:d4:72:22:e2:4f brd ff:ff:ff:ff:ff:ff


Please help me figure out how to locate and solve this kind of problem, many thanks!


Zhijiang







ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.

Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Sean M. Collins
I agree, it would be convenient to have something similar to what Nova
has:

https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/versions.py#L59-L60

We should put some resources behind implementing micro versioning and we
could end up with something similar.

It would also be nice to have the agents report their version, so it
bubbles up into the agent-list REST API calls.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][congress] python-congressclient 1.3.0 release (newton)

2016-06-06 Thread no-reply
We are thrilled to announce the release of:

python-congressclient 1.3.0: Client for Congress

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/python-congressclient

Please report issues through launchpad:

http://bugs.launchpad.net/python-congressclient

For more details, please see below.

Changes in python-congressclient 1.2.3..1.3.0
---------------------------------------------

b6c19af Updated from global requirements
48893c7 Updated from global requirements
7494292 Updated from global requirements
da4bd55 Updated from global requirements
26d39ef Allows DataSource's config field to have not dict type obj
548d74f Updated from global requirements
78d6a63 Display driver field while listing datasources

Diffstat (except docs and test files)
-------------------------------------

congressclient/common/utils.py |  2 ++
congressclient/osc/v1/datasource.py|  3 ++-
requirements.txt   |  8 
test-requirements.txt  |  4 ++--
5 files changed, 21 insertions(+), 12 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 6fce840..88459db 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@ pbr>=1.6 # Apache-2.0
-Babel>=1.3 # BSD
+Babel>=2.3.4 # BSD
@@ -10,3 +10,3 @@ oslo.serialization>=1.10.0 # Apache-2.0
-oslo.utils>=3.5.0 # Apache-2.0
-python-keystoneclient!=1.8.0,!=2.1.0,>=1.6.0 # Apache-2.0
-requests!=2.9.0,>=2.8.1 # Apache-2.0
+oslo.utils>=3.9.0 # Apache-2.0
+python-keystoneclient!=1.8.0,!=2.1.0,>=1.7.0 # Apache-2.0
+requests>=2.10.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index fcc01bf..74436a3 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ discover # BSD
-fixtures>=1.3.1 # Apache-2.0/BSD
+fixtures<2.0,>=1.3.1 # Apache-2.0/BSD
@@ -15 +15 @@ testtools>=1.4.0 # MIT
-mock>=1.2 # BSD
+mock>=2.0 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] Kuryr did not detect neutron tag plugin in devstack

2016-06-06 Thread Antoni Segura Puimedon
On Sat, Jun 4, 2016 at 5:17 AM, Liping Mao (limao)  wrote:

> Hi Kuryr team,
>
> I noticed kuryr did not detect the neutron tag plugin in devstack [1].
> This is because when the kuryr process starts up in devstack,
> neutron-server has not yet finished loading the tag plugin.
> Kuryr uses an API call to detect neutron tags, so kuryr will not detect it.
> After I manually restarted the kuryr process, everything worked well.
>
> I'm not familiar with devstack, so I'm not sure if there is any way to
> make sure neutron-server has finished starting before kuryr starts up.
> I submitted a patch [2] that just restarts kuryr in the extra stage; at
> that stage, neutron-server has already finished starting.
> Any comments or good ideas on how to solve this?
>

I proposed in the weekly meeting that the Neutron capability detection
be postponed until the first action that needs it. That way the check is
driven by the user and happens at a time when Neutron will surely be up and
running.
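
To make the lazy-detection idea concrete, a minimal sketch (the class is
hypothetical, not the actual kuryr code; 'tag' is the extension alias):

    # Defer the capability check until the first operation that needs it,
    # so neutron-server has had time to load its service plugins.
    class LazyTagSupport(object):
        def __init__(self, neutron_client):
            self._neutron = neutron_client
            self._supported = None  # unknown until first use

        def __bool__(self):
            if self._supported is None:
                exts = self._neutron.list_extensions()['extensions']
                self._supported = any(e['alias'] == 'tag' for e in exts)
            return self._supported
        __nonzero__ = __bool__  # Python 2

The first "if tag_support:" check then performs the API call, driven by an
actual user request rather than by process startup order.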

> Please just help to add your comments in patch or here. Thanks.
>
> [1] https://bugs.launchpad.net/kuryr/+bug/1587522
> [2] https://review.openstack.org/#/c/323453/
>
> Regards,
> Liping Mao
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [linuxbridge] Multiple VXLAN multicast groups

2016-06-06 Thread Sean M. Collins
Kevin Benton wrote:
> Just to be clear, it's not random. It follows a masking pattern so it is
> possible to know which address a given VNI will use. And if you use a /8
> prefix the VNIs will have a straightforward 1:1 mapping to multicast
> addresses.

So, it sounds like we need better documentation/help text string to help
clear this up.

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] next notification subteam meeting

2016-06-06 Thread Balázs Gibizer
Hi, 

The next notification subteam meeting will be held on 2016.06.07 17:00 UTC [1] 
on #openstack-meeting-4.

Cheers,
Gibi

[1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20160607T17

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] looking for documentation liaison

2016-06-06 Thread Loo, Ruby
Hi,

Thank you Vlad and Jay for volunteering! Neither of you loves documentation, but 
Jay is “very willing”, so Jay wins :D

--ruby

On 2016-05-31, 1:23 PM, "Loo, Ruby"  wrote:

>Hi,
>
>We're looking for a documentation liaison [1]. If you love ('like' is also 
>acceptable) documentation, care that ironic has great documentation, and would 
>love to volunteer, please let us know.
>
>The position would require you to:
>
>- attend the weekly doc team meetings [2] (or biweekly, depending on which 
>times work for you), and represent ironic
>- attend the weekly ironic meetings[3] and report (via the subteam reports) on 
>anything that may impact ironic
>- open bugs/whatever to track getting any documentation-related work done. You 
>aren't expected to do the work yourself although please do if you'd like!
>- know the general status of ironic documentation
>- see the expectations mentioned at [1]
>
>Please let me know if you have any questions. Thanks and may the best 
>candidate win!
>
>--ruby
>
>[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Documentation
>[2] https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting
>[3] https://wiki.openstack.org/wiki/Meetings/Ironic

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] conducting, ovo and the placement api

2016-06-06 Thread Sylvain Bauza



Le 04/06/2016 16:37, Dan Smith a écrit :

There was a conversation earlier this week in which, if I understood
things correctly, one of the possible outcomes was that it might make
sense for the new placement service (which will perform the function
currently provided by the scheduler in Nova) to only get used over its
REST API, as this will ease its extraction to a standalone service.

FWIW, I think that has been the long term expectation for a while.
Eventually that service is separate, which means no RPC to/from Nova
itself. The thing that came up last week was more a timing issue of
whether we go straight to that in newton or stop at an intermediate
stage where we're still using RPC. Because of the cells thing, I was
hoping to be able to go straight to HTTP, which would be slightly nicer
than what is actually an upcall from the cell to the API (even though
we're still pretty flat).



Right, I take this upcall as the main problem we could have in a cells 
v2 world.
Using a REST API call for providing the resource inventories sounds good 
to me.




* The way to scale is to add more placement API servers and more
   nodes in the galera (or whatever) cluster. The API servers just
   talk to the persistence layer themselves. There are no tasks to
   "conduct".

I'm not sure that we'll want to consider this purely a data service. The
new resource filter approach is mostly a data operation, but it's not
complete -- it doesn't actually select a resource. For that we still
need to run some of the more complicated scheduler filters. I'm not sure
that completely dispensing with the queue (or queuing of some sort) and
doing all the decisions in the API worker while we hold the HTTP client
waiting is the right approach. I'd have to think about it more.


I think there are two very different points :

#1 Nova and other projects like Neutron or Cinder can provide their own 
inventories, so we need some kind of REST API for that. Good to me, +2.


#2 For the moment, only Nova wants to ask the scheduler to give it a 
destination, not Cinder or Neutron. Sure, we want to ask the scheduler 
to give us a destination possibly having a cross-project affinity, but 
that's still a compute node that the scheduler gives back to the nova 
conductor. What I'm really trying to say is that providing a REST 
API for selecting a destination needs more discussion than e-mail, and we 
also need to see how the current request and the returned destination (a 
tuple today) can be interfaced for more than just nova. I don't think it's 
what we want to have *now*.




* If API needs to be versioned then it can version the external
   representations that it uses.

It certainly needs to be versioned, and certainly that versioned
external representation should be decoupled from the actual persistence.


++


* Nova-side versioned objects that are making http calls in
   themselves. For example, an Inventory object that knows how to
   save itself to the placement API over HTTP. Argh. No. Magical
   self-persisting objects are already messy enough. Adding a second
   medium over which persistence can happen is dire. Let's do
   something else please.

Um, why? What's the difference between a remotable object that uses
rabbit and RPC to ask a remote service to persist a thing, versus a
remotable object that uses HTTP to do it? It seems perfectly fine to me
to have the object be our internal interface, and the implementation
behind it does whatever we want it to do at a given point in time (i.e.
RPC to the scheduler or HTTP to the placement service). The
indirection_api in ovo is pluggable for a reason...




Agreed: just because we don't have a REST API indirection in Nova yet 
doesn't mean we can't have one next.
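
As an illustration only, here is a rough sketch of what an HTTP-backed
indirection could look like. It is duck-typed rather than subclassing ovo's
VersionedObjectIndirectionAPI, and the endpoint, URL scheme and payload
format are all assumptions, not the real placement API:

    import requests
    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields

    class HTTPIndirectionAPI(object):
        # Forward remotable calls to a REST endpoint instead of over RPC.
        def __init__(self, endpoint):
            self.endpoint = endpoint

        def object_action(self, context, objinst, objmethod, args, kwargs):
            resp = requests.post(
                '%s/%s/%s' % (self.endpoint, objinst.obj_name(), objmethod),
                json=objinst.obj_to_primitive())
            resp.raise_for_status()
            body = resp.json()
            # remotable() expects (field updates to apply, method result).
            return body.get('updates', {}), body.get('result')

    @ovo_base.VersionedObjectRegistry.register
    class Inventory(ovo_base.VersionedObject):
        fields = {'resource_class': fields.StringField(),
                  'total': fields.IntegerField()}

        @ovo_base.remotable
        def save(self):
            pass  # never runs locally once indirection_api is set

    # Flipping one class attribute switches the persistence medium:
    # Inventory.indirection_api = HTTPIndirectionAPI('http://placement')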



* A possible outcome here is that we're going to have objects in Nova
   and objects in Placement that people will expect to co-evolve.

I don't think so. I think the objects in Nova will mirror the external
representation of the placement API, much like the nova client tracks
evolution in nova's external API. As placement tries to expand its scope
it is likely to need to evolve its internal data structures like
anything else.


Yes, +1000 to what you say. Nova is *at the moment* the only consumer of 
that placement API; we don't want to resurrect a project that was 
defunct for good reasons by working on that epic story.


-Sylvain


--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Trove Newton spec proposal deadline

2016-06-06 Thread Amrith Kumar
This week, June 6-10, is R-17 for Newton and marks the deadline for the proposal 
of any specs for features to be considered for the Newton cycle.

Please get all specs into the trove-specs repository as soon as possible.

The following specs [1] are currently proposed and in need of review.

315079 RPC API Versioning
302416 Instance Upgrade
323989 Add support for dsv-volume-type mappings
313780 Persist last error message and display on 'show'
315619 Superconductor Spec
306620 Using Cinder snapshot as Trove backups
295274 Separate trove image build project based on libguestfs tools
307883 Extend Trove to allow for other compute backends
302952 extending trove to better utilize storage capabilities
294213 Multi-Region Support
256079 Add support for hbase in Trove

If you are not actively pursuing a particular spec for consideration in Newton, 
please mark it as such so we can focus review efforts on the specs that are 
immediately required for this release.

Thanks,

-amrith


[1] https://review.openstack.org/#/q/project:openstack/trove-specs+status:open



 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #84

2016-06-06 Thread Emilien Macchi
Hi Puppeteers!

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting-4.

Here's a first agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160607

Feel free to add more topics, and any outstanding bug and patch.

See you tomorrow!
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC andOVN

2016-06-06 Thread Na Zhu
Hi John,

One question to confirm with you: I think the OVN flow classifier driver 
and OVN port chain driver should call the APIs you added to 
networking-ovn to configure the northbound DB SFC tables, right? Looking at 
your networking-sfc OVN drivers, they do not call the APIs you added to 
networking-ovn; did you miss that?



Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:   Na Zhu/China/IBM@IBMCN
To: John McDowall 
Cc: Srilatha Tangirala , OpenStack Development 
Mailing List , Ryan Moats 
, "disc...@openvswitch.org" 
Date:   2016/06/06 14:28
Subject:Re: [openstack-dev] [ovs-discuss] [OVN] [networking-ovn] 
[networking-sfc] SFC andOVN



John,

Thanks for working overtime last weekend. Now we have the following 
work to do:
1, submit design spec to networking-sfc
2, submit the RFC to ovs community
3, debug end-to-end about your code changes.
4, submit the initial patch to networking-sfc
5, submit the initial patch to ovs community
6, submit the initial patch to networking-ovn 

Do you have a plan to start #1 and #2 now? I think they can be done in 
parallel with the other tasks.
Srilatha and I can start #4 and #6; we need to look at your code changes, 
write the unit test scripts for them, and then submit them to the 
community. What do you think?




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:2016/06/06 11:35
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Juno and team,

I have written and compiled (but not tested) the ovs/ovn interface to 
networking-ovn, and similarly I have written but not tested the IDL 
interfaces on the networking-ovn side. I will put it all together tomorrow 
and start debugging end to end. I know I am going to find a lot of issues 
as it is a major rewrite from my original interface to networking-sfc - it 
is the right path (IMHO), just a little more work than I expected.

I have merged my repos with the upstream masters and I will keep them 
synced, so if you want to take a look and start thinking about where you 
can help, it would be really appreciated.

Regards

John

From: Na Zhu 
Date: Saturday, June 4, 2016 at 6:30 AM
To: John McDowall 
Cc: "disc...@openvswitch.org" , OpenStack 
Development Mailing List , Ryan Moats <
rmo...@us.ibm.com>, Srilatha Tangirala 
Subject: Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] SFC 
andOVN

Hi John,

OK, please keep me posted once you done, thanks very much.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:2016/06/03 13:15
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Juno 

Whatever gets it done faster - let me get the three repos aligned. I need 
to get the ovs/ovn work done so networking-ovn can call it, and so 
networking-sfc can call networking-ovn.

Hopefully I will have it done tomorrow or over the weekend - let's touch 
base Monday or Sunday night.

Regards 

John

Sent from my iPhone

On Jun 2, 2016, at 6:30 PM, Na Zhu  wrote:

Hi John,

I agree with submitting WIP patches to the community. Since you already did 
a lot of work on networking-sfc and networking-ovn, it would be better for 
you to submit the initial patches for networking-sfc and networking-ovn, and 
then Srilatha and I can take over them. Do you have time to do that? If not, 
Srilatha and I can help do it, with you always listed as the co-author.




Regards,
Juno Zhu
IBM China Development Labs (CDL) Cloud IaaS Lab
Email: na...@cn.ibm.com
5F, Building 10, 399 Keyuan Road, Zhangjiang Hi-Tech Park, Pudong New 
District, Shanghai, China (201203)



From:John McDowall 
To:Na Zhu/China/IBM@IBMCN
Cc:"disc...@openvswitch.org" , "OpenStack 
Development Mailing List" , Ryan Moats 
, Srilatha Tangirala 
Date:2016/06/03 00:08
Subject:Re: [ovs-discuss] [OVN] [networking-ovn] [networking-sfc] 
SFC andOVN



Juno,

Sure, makes sense. I will have ovs/ovn in rough shape by the end of the week 
(hopefully); that will allow you to call the interfaces from 
networking-ovn. Ryan has asked that we submit WIP patches etc. so hopefully 
that will kickstart the review process.
Also, hopefully some of the networking-sfc team will also be 

[openstack-dev] [nova] Update on resource providers work

2016-06-06 Thread Jay Pipes

Hi Stackers,

tl;dr
-

0) We moved the resource-providers database tables to the API DB in Nova 
instead of the child cell DB


1) We are reverting the patches to Nova that queried the old and new 
inventory/allocation fields and attempted online data migrations for 
inventory information. [1]


2) New strategy is to complete and review the placement REST API and 
have the resource tracker in Nova directly call the placement API 
instead of attempting to have the ComputeNode object try to determine 
whether a compute node's inventory has been migrated by a complicated 
join expression. [5]


details
---

After merging a number of patches to Nova that migrated inventory 
information out of the compute_nodes table into the new inventories 
table, we came to the conclusion that the inventories table needed to be 
in the API database instead of the child cell DB. Our original plan was 
to have the ComputeNode Nova object handle data migrations; however, 
after some bugs popped up and extensive back and forth on the 
generic-resource-pools spec, we decided to change direction.


For a transition period, the resource tracker will continue setting the 
legacy inventory fields (e.g. memory_mb) on its stored ComputeNode 
object and calling ComputeNode.save() which will continue to store that 
inventory information in the child cell's DB. In addition to setting 
those legacy inventory fields via the ComputeNode Nova object, the 
resource tracker will call the new placement REST API to store inventory 
and allocation information. This will populate the inventories database 
tables (now residing in the API database). We will do the same for the 
allocation information (e.g. how much memory an instance used on the 
host): continue to have the resource-tracker store allocation 
information in the legacy locations (e.g. 
instance_extra.flavor.memory_mb) and additionally call a placement REST 
API method to update the allocations database table in the API database.
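
To sketch the transition-period dual write (the placement client named here 
is hypothetical; no such client exists yet):

    # Transition period: write inventory twice, to the legacy cell DB via
    # the ComputeNode object and to the placement API over REST.
    def report_inventory(compute_node, placement_client, memory_mb):
        # 1) Legacy path: the child cell DB, via the ovo object.
        compute_node.memory_mb = memory_mb
        compute_node.save()
        # 2) New path: mirror the same inventory to the placement REST API.
        placement_client.update_inventory(
            compute_node.uuid,
            {'MEMORY_MB': {'total': memory_mb}})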


After discussions, we determined it would be easier to make progress on 
the resource-providers work if we reverted [1] the patches that migrated 
inventory information online in the ComputeNode object. Part of the code 
joined the old compute_nodes table with the new inventories and 
allocations tables (that are now moved to the API database). Clearly, 
joining across two database instances wasn't going to work, so we needed 
to revert the code that changed the nova.db.api.compute_node_get() calls 
to join to both the inventories and allocations tables.


The plan for this and next week is to focus on getting the InventoryList 
and AllocationList object definition patches [2] merged. The 
InventoryList and AllocationList objects will be used by the placement 
API service to retrieve inventory and usage records from the API 
database and update those records in an atomic (to the resource 
provider) fashion using a compare-and-update strategy.
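
The compare-and-update strategy is essentially optimistic concurrency on a 
per-provider generation counter; a minimal sketch (table and column names 
are assumptions, not the final schema):

    def bump_generation(conn, rp_id, expected_generation):
        # The UPDATE only matches if the generation we read earlier is
        # still current; a concurrent writer makes rowcount come back 0,
        # in which case the caller re-reads and retries.
        cur = conn.execute(
            "UPDATE resource_providers"
            " SET generation = generation + 1"
            " WHERE id = ? AND generation = ?",
            (rp_id, expected_generation))
        if cur.rowcount != 1:
            raise RuntimeError('concurrent update; re-read and retry')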


Once the InventoryList and AllocationList objects are merged, then we 
will focus on reviews of the placement REST API patches [3]. Again, we 
are planning on having the nova-compute resource tracker call these REST 
API calls directly (while continuing to use the Nova ComputeNode object 
for saving legacy inventory information). Clearly, before the resource 
tracker can call this placement REST API, we need the placement REST API 
service to be created and a client for it added to OSC. Once this client 
exists, we can add code to the resource tracker which uses it.


And all of the above needs to occur before we even start discussing 
dynamic resource classes [4] or further complications to the resource 
tracking in Nova.


[1] https://review.openstack.org/#/c/325436/
[2] Inventory: https://review.openstack.org/315288
Allocation: https://review.openstack.org/282442
[3] placement REST API: https://review.openstack.org/#/c/293104/
[4] https://review.openstack.org/#/c/312696/
[5] This is what we came up with originally to return information to the 
ComputeNode object that it could use in determining if the inventory for 
a compute node had been migrated to the new resource-providers 
inventories table: 
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/api.py#L590-L726


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Team meeting reminder - 06/06/2016

2016-06-06 Thread Renat Akhmerov
Hi,

We’re having a team meeting today, as usual, in #openstack-meeting at 16.00 UTC.

Agenda:
- Review action items
- Current status (progress, issues, roadblocks, further plans)
- Custom Actions API spec
- Open discussion

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [linuxbridge] Multiple VXLAN multicast groups

2016-06-06 Thread Jiří Kotlín
Hi,

unfortunately the straightforward mapping is not usable for us; we need our
own distribution of addresses according to VNI.

For example - 2 data centers connected with VLANs, both with Cisco ASR-9K
on L3.

The routers have their own VNI-to-multicast-group mappings:

Router(config-if)# member vni 6010-6030 multicast-group 225.1.1.1

So we can create a network with a VNI, e.g. 6011, and it will get the right
multicast address.

We also need to create networks on demand without reconfiguring the routers
at each change.

Jiří Kotlín
Developer

Ultimum Technologies s.r.o.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 602 288 358
jiri.kot...@ultimum.io
https://ultimum.io 

linkedin  | twitter
 | facebook
 | google+


2016-06-06 11:21 GMT+02:00 Kevin Benton :

> Just to be clear, it's not random. It follows a masking pattern so it is
> possible to know which address a given VNI will use. And if you use a /8
> prefix the VNIs will have a straightforward 1:1 mapping to multicast
> addresses.
> On Jun 6, 2016 01:35, "Jiří Kotlín"  wrote:
>
>> Hi,
>>
>> yes sorry - I was not concrete enough and the RFE should really be
>> reworded.
>>
>> Our goal is to have the  ability to control vni-multicast address
>> distribution somehow, not randomly.
>>
>> Considering multiple addresses support is already implemented in linux
>> bridge agent, I suppose implementing this feature should not cause any
>> problems.
>>
>> Thanks a lot for this hint and reply, I have tested the CIDR feature, but
>> forgot to mention this in RFE.
>>
>> Jiří Kotlín
>> Developer
>>
>> Ultimum Technologies s.r.o.
>> Na Poříčí 1047/26, 11000 Praha 1
>> Czech Republic
>>
>> +420 602 288 358
>> jiri.kot...@ultimum.io
>> https://ultimum.io 
>>
>> linkedin  |
>> twitter  | facebook
>>  | google+
>> 
>>
>> 2016-06-06 9:36 GMT+02:00 Kevin Benton :
>>
>>> The linux bridge agent does support using multiple VXLAN groups. You can
>>> specify a prefix for 'vxlan_group' and the VNIs will be spread across the
>>> multicast addresses in that prefix.[1]
>>>
>>> The only difference between that and what your RFE proposes is specific
>>> control over which multicast address is associated with each VNI. If that
>>> is a specific requirement, then the RFE needs to be reworded because that
>>> is the only difference between your proposal and what we have now for Linux
>>> Bridge.
>>>
>>>
>>> 1.
>>> https://github.com/openstack/neutron/blob/d8ae9cf4755416ca65108112a60e8b2e67607daf/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py#L34-L42
>>>
>>> On Mon, Jun 6, 2016 at 12:06 AM, Jiří Kotlín 
>>> wrote:
>>>
 Hi linuxbridge experts,

 the ability to define multiple VXLAN groups can be very useful in
 practice. Is there any design rationale why the vxlan_group was considered
 a single attribute?

 More info is in this RFE:
 https://bugs.launchpad.net/bugs/1579068

 Thank you in advance for any help you can provide.


 Jiří Kotlín
 Developer

 Ultimum Technologies s.r.o.
 Na Poříčí 1047/26, 11000 Praha 1
 Czech Republic

 +420 602 288 358
 jiri.kot...@ultimum.io
 https://ultimum.io 

 linkedin  |
 twitter  | facebook
  | google+
 


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?s

Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Akihiro Motoki
Hi,

If I understand correctly, what you need is to expose the neutron
behavior through the API or something. In this particular case, neutron
needs to send a vif-plugged event when it detects some event in
the data plane (VIF plugging in OVS or some virtual switch). Thus I
think the question can be generalized to whether we expose a
capability (such that neutron server behaves in XXX way) through API
(API version? extension?). For example, do we have an extension to
expose that neutron supports the event callback mechanism?

I also think the important point is that this is a deployment topic.
Operators are responsible for deploying the correct combination
of nova and neutron.

Honestly I am not sure we need to expose this kind of things through
API. Regarding the current event callback mechanism, we assume that
operators deploy the expected combination of releases of nova and
neutron. Can't we assume that operators deploy Newton nova and neutron
when they want to use live-migration vif-plugging support?

Akihiro

2016-06-06 17:06 GMT+09:00 Oleg Bondarev :
> Hi,
>
> There are cases where it would be useful to know the version of Neutron (or
> any other project) from API, like during upgrades or in cross-project
> communication cases.
> For example in https://review.openstack.org/#/c/246910/ Nova needs to know
> if Neutron sends vif-plugged event during live migration. To ensure this it
> should be enough to know Neutron is "Newton" or higher.
>
> Not sure why it wasn't done before (or was it and I'm just blind?) so the
> question to the community is what are possible issues/downsides of exposing
> code version through the API?
>
> Thanks,
> Oleg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Armando M.
On 6 June 2016 at 10:06, Oleg Bondarev  wrote:

> Hi,
>
> There are cases where it would be useful to know the version of Neutron
> (or any other project) from API, like during upgrades or in cross-project
> communication cases.
> For example in https://review.openstack.org/#/c/246910/ Nova needs to
> know if Neutron sends vif-plugged event during live migration. To ensure
> this it should be enough to know Neutron is "Newton" or higher.
>
> Not sure why it wasn't done before (or was it and I'm just blind?) so the
> question to the community is what are possible issues/downsides of exposing
> code version through the API?
>

If you are not talking about features exposed through the API (for which
they'd have a new extension being advertised), knowing that you're running
a specific version of the code might not guarantee that a particular
feature is available, especially in the case where the capability is an
implementation detail that is config tunable (evil, evil). This may also
lead to needless coupling between the two projects, as you'd still want to
code defensively and assume the specific behavior may or may not be there.

I suspect that your case is slightly different in that the lack of a
received event may be due to an error rather than a missing capability, and
you would not be able to tell the difference unless you optimistically
assume a lack of capability. Then you need to make a "mental" note and come
back to the code to assume a failure two cycles down the road from when
your code merges. Definitely not a pretty workflow without advertising the
new feature explicitly via the API.
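
For contrast, feature detection against an advertised extension stays 
decoupled from code versions; a minimal sketch (the 'live-migration-events' 
alias is purely hypothetical):

    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client as neutron_client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone:5000/v3', username='nova',
        password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    # The discoverable contract is the extension list, not a code version.
    aliases = {ext['alias']
               for ext in neutron.list_extensions()['extensions']}
    if 'live-migration-events' in aliases:  # hypothetical alias
        pass  # rely on the vif-plugged event during live migration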


>
> Thanks,
> Oleg
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Kolla][ovs-discuss] error when starting neutron-openvswitch-agent service

2016-06-06 Thread hu . zhijiang
Hi Guys,

I am new to Neutron, Kolla, and OVS. I was trying to deploy Mitaka on 
CentOS in an all-in-one environment using Kolla. After a successful 
deployment I realized that I should disable the NetworkManager service, roughly 
according to: 
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/3/html/Installation_and_Configuration_Guide/Disabling_Network_Manager.html

But after I disabled NetworkManager and restarted the network service (the 
host machine probably also restarted), I could not ping my gateway through 
the external interface.

Here is the relevant log of ovs: 

2016-06-06 09:19:37.278 1 INFO neutron.common.config [-] Logging enabled!
2016-06-06 09:19:37.283 1 INFO neutron.common.config [-] 
/usr/bin/neutron-openvswitch-agent version 8.0.0
2016-06-06 09:19:43.035 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Mapping physical 
network physnet1 to bridge br-ex
2016-06-06 09:19:45.236 1 ERROR neutron.agent.ovsdb.impl_vsctl 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Unable to execute 
['ovs-vsctl', '--timeout=10', '--oneline', '--format=json', '--', 
'--columns=type', 'list', 'Interface', 'int-br-ex']. Exception: Exit code: 
1; Stdin: ; Stdout: ; Stderr: ovs-vsctl: no row "int-br-ex" in table 
Interface

2016-06-06 09:19:49.979 1 INFO neutron.agent.l2.extensions.manager 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Loaded agent 
extensions: []
2016-06-06 09:19:52.185 1 WARNING neutron.agent.securitygroups_rpc 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Firewall driver 
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver 
doesn't accept integration_bridge parameter in __init__(): __init__() got 
an unexpected keyword argument 'integration_bridge'
2016-06-06 09:19:53.204 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Agent initialized 
successfully, now running...
2016-06-06 09:19:53.733 1 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-5b36-4eae-45d0-8240-ed9e76b04a73 - - - - -] Configuring tunnel 
endpoints to other OVS agents



I use enp0s35 as both the VIP interface and the external interface because 
the host only has one interface...



Here is the ip addr  result before the deployment of enp0s35:

2: enp0s25:  mtu 1500 qdisc pfifo_fast 
state UP qlen 1000
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25
   valid_lft 10429sec preferred_lft 10429sec
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link
   valid_lft forever preferred_lft forever


Here is the ip addr result after the deployment

2: enp0s25:  mtu 1500 qdisc pfifo_fast 
master ovs-system state UP qlen 1000
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
inet 10.43.114.40/24 brd 10.43.114.255 scope global dynamic enp0s25
   valid_lft 7846sec preferred_lft 7846sec
inet 10.43.114.149/32 scope global enp0s25
   valid_lft forever preferred_lft forever
inet6 fe80::1e6f:65ff:fe05:3711/64 scope link
   valid_lft forever preferred_lft forever
6: ovs-system:  mtu 1500 qdisc noop state DOWN
link/ether 3e:c8:1d:8e:b5:5b brd ff:ff:ff:ff:ff:ff
7: br-ex:  mtu 1500 qdisc noop state DOWN
link/ether 1c:6f:65:05:37:11 brd ff:ff:ff:ff:ff:ff
8: br-int:  mtu 1500 qdisc noop state DOWN
link/ether 32:0d:72:8d:d6:42 brd ff:ff:ff:ff:ff:ff
9: br-tun:  mtu 1500 qdisc noop state DOWN
link/ether ea:d4:72:22:e2:4f brd ff:ff:ff:ff:ff:ff


Please help me figure out how to locate and solve this kind of problem, many thanks!


Zhijiang



ZTE Information Security Notice: The information contained in this mail (and 
any attachment transmitted herewith) is privileged and confidential and is 
intended for the exclusive use of the addressee(s).  If you are not an intended 
recipient, any disclosure, reproduction, distribution or other dissemination or 
use of the information contained is strictly prohibited.  If you have received 
this mail in error, please delete it and notify us immediately.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Networking-SFC] Stable/mitaka version

2016-06-06 Thread Ihar Hrachyshka

> On 06 Jun 2016, at 11:39, Gary Kotton  wrote:
> 
> Hi,
> In git the project has a stable/liberty and trunk version. Will this be 
> supported in stable/mitaka?
> Thanks
> Gary

For this to happen, the team should propose a release-subproject tagged bug in 
LP. I don’t see any here:

https://bugs.launchpad.net/neutron/+bugs?field.tag=release-subproject

Ihar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Kuryr] roles of kuryr server in server/agent mode

2016-06-06 Thread Fawad Khaliq
Hi Vikas,

This is something we discussed at the summit. The kuryr instance (referred to
as the server, more like a master instance) in fact runs on the master node
(Swarm, k8s, Mesos, etc.) in the VM. I will push a patch to clarify this in
the spec.

And I agree that the instance running inside the VMs will be capable of
doing the operations you mentioned above.

Hope that clarifies it.

Thanks,
Fawad Khaliq


On Fri, Jun 3, 2016 at 5:26 PM, Vikas Choudhary 
wrote:

> Hi Fawad,
>
> While I was going through the nested-containers-spec
> 
>  ,
> I found it difficult to understand the roles and responsibilities of
> kuryr-server, which is supposed to run on controller nodes.
>
> To me it seems like all queries, such as VLAN-ID allocation, subport
> creation, IPs, etc., kuryr (running inside the VM) should be able to make
> to neutron.
>
> Will appreciate few inputs from your side.
>
>
>
> Thanks & Regards
> Vikas Choudhary
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Networking-SFC] Stable/mitaka version

2016-06-06 Thread Gary Kotton
Hi,
In git the project has a stable/liberty and trunk version. Will this be 
supported in stable/mitaka?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] conducting, ovo and the placement api

2016-06-06 Thread Chris Dent


Thanks for the response; it's very useful (to me at least) to
hash this stuff out in writing. More hashing and crazy talk below.

On Sat, 4 Jun 2016, Dan Smith wrote:


There was a conversation earlier this week in which, if I understood
things correctly, one of the possible outcomes was that it might make
sense for the new placement service (which will perform the function
currently provided by the scheduler in Nova) to only get used over its
REST API, as this will ease its extraction to a standalone service.


FWIW, I think that has been the long term expectation for a while.


Sure, but the change has been in the ordering and that has impact on
the several pieces involved.


* The way to scale is to add more placement API servers and more
  nodes in the galera (or whatever) cluster. The API servers just
  talk to the persistence layer themselves. There are no tasks to
  "conduct".


I'm not sure that we'll want to consider this purely a data service. The
new resource filter approach is mostly a data operation, but it's not
complete -- it doesn't actually select a resource. For that we still
need to run some of the more complicated scheduler filters. I'm not sure
that completely dispensing with the queue (or queuing of some sort) and
doing all the decisions in the API worker while we hold the HTTP client
waiting is the right approach. I'd have to think about it more.


Yeah, I think this gets to the center of some of the questions and
leads to some more: if the long-term plan is inventory management
over HTTP and some major portion of filtering is done in SQL, what happens
with those filters that are more dynamic (are metrics a good example
of such a thing?)?

The boundaries between stuff start getting weird.

If we can come up with a plan whereby we don't need the queue then a
lot of the things we _could_ do turn into premature optimizations.
But of course if we must have the queue, then that's not the case.


* If API needs to be versioned then it can version the external
  representations that it uses.


It certainly needs to be versioned, and certainly that versioned
external representation should be decoupled from the actual persistence.


For clarity of understanding and completeness of the picture, can you explain
why this is so, and also why you are so sure that it is so [1]?


* Nova-side versioned objects that are making http calls in
  themselves. For example, an Inventory object that knows how to
  save itself to the placement API over HTTP. Argh. No. Magical
  self-persisting objects are already messy enough. Adding a second
  medium over which persistence can happen is dire. Let's do
  something else please.


Um, why? What's the difference between a remotable object that uses
rabbit and RPC to ask a remote service to persist a thing, versus a
remotable object that uses HTTP to do it? It seems perfectly fine to me
to have the object be our internal interface, and the implementation
behind it does whatever we want it to do at a given point in time (i.e.
RPC to the scheduler or HTTP to the placement service). The
indirection_api in ovo is pluggable for a reason...


I come from the school of thought that holds that self-persisting
objects are backwards. There should be objects, which can be
persisted by other things, and the interfaces should be kept
separate. In large part that's just a matter of taste (because in
the end the same stuff has to happen) or preference, so probably not
really worth talking about, except in the context of one of several
options for reducing complexity.
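
To make the shape we're debating concrete, here's a minimal, schematic
sketch -- emphatically not the real oslo.versionedobjects interface, all
names invented -- of an object whose persistence transport is pluggable,
whether that transport is RPC or HTTP:

    import abc

    class IndirectionAPI(abc.ABC):
        """Where persistence actually happens (schematic)."""

        @abc.abstractmethod
        def object_action(self, obj, action):
            """Carry out 'action' on 'obj' over some remote transport."""

    class RPCIndirection(IndirectionAPI):
        def object_action(self, obj, action):
            # stand-in for handing the object to a conductor over the MQ
            print("rpc call: %s.%s()" % (type(obj).__name__, action))

    class HTTPIndirection(IndirectionAPI):
        def __init__(self, endpoint):
            self.endpoint = endpoint

        def object_action(self, obj, action):
            # stand-in for PUTting the serialized object to a REST API
            print("PUT %s/%s" % (self.endpoint, type(obj).__name__.lower()))

    class Inventory(object):
        # swap the transport here; callers of save() never notice
        indirection_api = RPCIndirection()

        def save(self):
            self.indirection_api.object_action(self, "save")

    Inventory().save()    # -> rpc call: Inventory.save()
    Inventory.indirection_api = HTTPIndirection("http://placement")
    Inventory().save()    # -> PUT http://placement/inventory

Whether save() lives on the object or on a separate persister is the
taste question above; the transport swap works the same either way.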

Strategies for that reduction in complexity are what I'm hoping to
extract from this thread. We have a lot of patterns in Nova that are
default solutions and tend to be repeated elsewhere because Nova's
had success with them. What's often forgotten is that some of those
solutions are fixes for problems that won't exist in new tools because
of different initial approaches.

So, for example, if there is a chance we can figure out up front how
to make the placement service be "data only" (in the sense that you
describe above), it becomes a lot easier to upgrade when there are
new releases.

Or, if we realize that it must have access to more dynamic information,
we could consider solutions other than persistent state via call-based
RPC over messaging over OVO. For example, just picking out of the sky
here: pub/sub events with tiny datasets.
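
(Purely illustrative -- a toy in-process version of that idea, with made-up
topic and payload names, just to show how tiny the events could be; a real
bus would be something like zmq or kafka:)

    from collections import defaultdict

    class Bus(object):
        """A toy pub/sub bus, standing in for a real transport."""

        def __init__(self):
            self.subscribers = defaultdict(list)

        def subscribe(self, topic, callback):
            self.subscribers[topic].append(callback)

        def publish(self, topic, event):
            for callback in self.subscribers[topic]:
                callback(event)

    bus = Bus()
    metrics = {}

    # the placement side keeps a small, current view of dynamic data...
    bus.subscribe("metrics", lambda e: metrics.update({e["node"]: e}))

    # ...fed by tiny events from compute nodes; no OVO, no RPC round trip
    bus.publish("metrics", {"node": "compute1", "cpu.util": 0.87})
    print(metrics["compute1"]["cpu.util"])   # 0.87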

Or if that is impossible, at least we know we thought about it.

Thanks again, this is all about putting some real context on the
bones we toss around.

[1] There's a school of API design that says versioning APIs is an
anti-pattern. You shouldn't version, but if you must, you should
version representations, not the API wholesale. (As in: this is
version 2.1 of a "server" resource, in JSON.)
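
For illustration, a hypothetical request in that style (the media type and
its version parameter are invented here, not anything OpenStack ships):

    import requests

    # the URL stays unversioned; the client negotiates the representation
    resp = requests.get(
        "http://compute.example.com/servers/1234",
        headers={"Accept": "application/vnd.openstack.server+json; version=2.1"},
    )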

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent

Re: [openstack-dev] [neutron] [linuxbridge] Multiple VXLAN multicast groups

2016-06-06 Thread Kevin Benton
Just to be clear, it's not random. It follows a masking pattern so it is
possible to know which address a given VNI will use. And if you use a /8
prefix, the VNIs will have a straightforward 1:1 mapping to multicast
addresses.
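For illustration, roughly how that masking works (a sketch, not the agent's
exact code; vni_to_group is a made-up helper name):

    import ipaddress

    def vni_to_group(vxlan_group_cidr, vni):
        # keep only as many low-order VNI bits as the prefix has host
        # bits, then offset the network address by that value
        net = ipaddress.ip_network(vxlan_group_cidr)
        host_bits = net.max_prefixlen - net.prefixlen
        return str(net.network_address + (vni & ((1 << host_bits) - 1)))

    # a /8 leaves 24 host bits, so every 24-bit VNI gets its own address
    print(vni_to_group("239.0.0.0/8", 0x0A0B0C))   # 239.10.11.12
    # a smaller range means VNIs share addresses via the mask
    print(vni_to_group("239.1.2.0/24", 1000))      # 239.1.2.232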
On Jun 6, 2016 01:35, "Jiří Kotlín"  wrote:

> Hi,
>
> Yes, sorry - I was not specific enough, and the RFE should really be
> reworded.
>
> Our goal is to have the ability to control the VNI-to-multicast-address
> mapping deterministically, not randomly.
>
> Considering that support for multiple addresses is already implemented in
> the Linux bridge agent, I suppose implementing this feature should not
> cause any problems.
>
> Thanks a lot for the hint and the reply. I have tested the CIDR feature but
> forgot to mention this in the RFE.
>
> Jiří Kotlín
> Developer
>
> Ultimum Technologies s.r.o.
> Na Poříčí 1047/26, 11000 Praha 1
> Czech Republic
>
> +420 602 288 358
> jiri.kot...@ultimum.io
> https://ultimum.io 
>
> linkedin  | twitter
>  | facebook
>  | google+
> 
>
> 2016-06-06 9:36 GMT+02:00 Kevin Benton :
>
>> The linux bridge agent does support using multiple VXLAN groups. You can
>> specify a prefix for 'vxlan_group' and the VNIs will be spread across the
>> multicast addresses in that prefix.[1]
>>
>> The only difference between that and what your RFE proposes is specific
>> control over which multicast address is associated with each VNI. If that
>> is a specific requirement, then the RFE needs to be reworded because that
>> is the only difference between your proposal and what we have now for Linux
>> Bridge.
>>
>>
>> 1.
>> https://github.com/openstack/neutron/blob/d8ae9cf4755416ca65108112a60e8b2e67607daf/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py#L34-L42
>>
>> On Mon, Jun 6, 2016 at 12:06 AM, Jiří Kotlín 
>> wrote:
>>
>>> Hi linuxbridge experts,
>>>
>>> the ability to define multiple VXLAN groups can be very useful in
>>> practice. Is there a design rationale for why vxlan_group was made a
>>> single attribute?
>>>
>>> More info is in this RFE:
>>> https://bugs.launchpad.net/bugs/1579068
>>>
>>> Thank you in advance for any help you can provide.
>>>
>>>
>>> Jiří Kotlín
>>> Developer
>>>
>>> Ultimum Technologies s.r.o.
>>> Na Poříčí 1047/26, 11000 Praha 1
>>> Czech Republic
>>>
>>> +420 602 288 358
>>> jiri.kot...@ultimum.io
>>> https://ultimum.io 
>>>
>>> linkedin  |
>>> twitter  | facebook
>>>  | google+
>>> 
>>>
>>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][kuryr] Nested containers networking

2016-06-06 Thread Fawad Khaliq
On Tue, May 24, 2016 at 1:23 PM, Gal Sagie  wrote:

> Hi Hongbin,
>
> Thank you for starting this thread.
> The person that is going to work on this integration is Fawad (CC'ed) and
> hopefully others will help
> him (We have another person from Huawei that showed intrest in working on
> this).
>
+1

>
> I think Fawad, given that he is the primary person working on this, should
> be the Kuryr liaison for this integration, and it would help a lot if he
> has a contact in Magnum who can work with him closely on that.
> I can also serve as the coordinator between these efforts if Fawad
> is too busy.
>
+1, I will serve as the liaison.


>
> The first task, in my view, is to describe the action items for this
> integration, split the work, and address any unknown issues during the
> process.
>
+1, I will share a plan sometime this week on the items.


>
> I think that design-wise things are pretty close (Fawad, please correct me
> if there are any open issues) and we are just waiting to start the work
> (and solve any issues as they come).
>
That's pretty much true. The open items are around the deployment of
Kuryr, but I believe we can solve those.


> Thanks
> Gal.
>
>
> On Mon, May 23, 2016 at 6:35 PM, Hongbin Lu  wrote:
>
>> Hi Kuryr team,
>>
>>
>>
>> I want to start this ML thread to sync up on the latest status of the
>> nested container networking implementation. Could I know who is
>> implementing this feature on the Kuryr side and how the Magnum team could
>> help in this effort? In addition, I wonder if it makes sense to establish
>> cross-project liaisons between Kuryr and Magnum. Magnum relies on Kuryr to
>> implement several important features, so I think it is helpful to set up a
>> communication channel between both teams. Thoughts?
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>
>
> --
> Best Regards ,
>
> The G.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [linuxbridge] Multiple VXLAN multicast groups

2016-06-06 Thread Jiří Kotlín
Hi,

Yes, sorry - I was not specific enough, and the RFE should really be
reworded.

Our goal is to have the ability to control the VNI-to-multicast-address
mapping deterministically, not randomly.

Considering that support for multiple addresses is already implemented in
the Linux bridge agent, I suppose implementing this feature should not
cause any problems.

Thanks a lot for the hint and the reply. I have tested the CIDR feature but
forgot to mention this in the RFE.

Jiří Kotlín
Developer

Ultimum Technologies s.r.o.
Na Poříčí 1047/26, 11000 Praha 1
Czech Republic

+420 602 288 358
jiri.kot...@ultimum.io
https://ultimum.io 

linkedin  | twitter
 | facebook
 | google+


2016-06-06 9:36 GMT+02:00 Kevin Benton :

> The linux bridge agent does support using multiple VXLAN groups. You can
> specify a prefix for 'vxlan_group' and the VNIs will be spread across the
> multicast addresses in that prefix.[1]
>
> The only difference between that and what your RFE proposes is specific
> control over which multicast address is associated with each VNI. If that
> is a specific requirement, then the RFE needs to be reworded because that
> is the only difference between your proposal and what we have now for Linux
> Bridge.
>
>
> 1.
> https://github.com/openstack/neutron/blob/d8ae9cf4755416ca65108112a60e8b2e67607daf/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py#L34-L42
>
> On Mon, Jun 6, 2016 at 12:06 AM, Jiří Kotlín 
> wrote:
>
>> Hi linuxbridge experts,
>>
>> the ability to define multiple VXLAN groups can be very useful in
>> practice. Is there a design rationale for why vxlan_group was made a
>> single attribute?
>>
>> More info is in this RFE:
>> https://bugs.launchpad.net/bugs/1579068
>>
>> Thank you in advance for any help you can provide.
>>
>>
>> Jiří Kotlín
>> Developer
>>
>> Ultimum Technologies s.r.o.
>> Na Poříčí 1047/26, 11000 Praha 1
>> Czech Republic
>>
>> +420 602 288 358
>> jiri.kot...@ultimum.io
>> https://ultimum.io 
>>
>> linkedin  |
>> twitter  | facebook
>>  | google+
>> 
>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [Plugins] Netconfig tasks changes

2016-06-06 Thread Aleksandr Didenko
Hi,

a slightly different patch is on review now [0]. Instead of silently
replacing the default gateway on the fly in the netconfig.pp task, it puts
the new default gateway into Hiera. Thus we'll have idempotency for
subsequent netconfig.pp runs even on Mongo roles. We'll also have consistent
network configuration data in Hiera, which any plugin can rely on.

I've built a custom ISO with this patch and ran a set of custom tests on it
to cover the multi-role and multi-rack cases [1], plus BVT - everything
worked fine.

Please feel free to review and comment on the patch [0].

Regards,
Alex

[0] https://review.openstack.org/324307
[1] http://paste.openstack.org/show/508319/

On Wed, Jun 1, 2016 at 4:47 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> Support for YAQL expressions in task dependencies has been added to Nailgun
> [0]. So now it's possible to fix the network configuration idempotency
> issue without introducing a new 'netconfig' task [1]. There will be no
> problems with loops in the task graph in this case (tested on multi-role
> nodes; worked fine). When we deprecate role-based deployment (even the
> emulated kind), we'll be able to remove all those additional conditions
> from the manifests and remove the 'configure_default_route' task
> completely. Please feel free to review and comment on the patch [1].
>
> Regards,
> Alex
>
> [0] https://review.openstack.org/#/c/320861/
> [1] https://review.openstack.org/#/c/322872/
>
> On Wed, May 25, 2016 at 10:39 AM, Simon Pasquier 
> wrote:
>
>> Hi Adam,
>> Maybe you want to look into network templates [1]? Although the
>> documentation is a bit sparse, it allows you to define flexible network
>> mappings.
>> BR,
>> Simon
>> [1]
>> https://docs.mirantis.com/openstack/fuel/fuel-8.0/operations.html#using-networking-templates
>>
>> On Wed, May 25, 2016 at 10:26 AM, Adam Heczko 
>> wrote:
>>
>>> Thanks Alex, I will experiment with it once again, although AFAIR it
>>> doesn't solve the thing I'd like to do.
>>> I'll come back to you in case of any questions.
>>>
>>>
>>> On Wed, May 25, 2016 at 10:00 AM, Aleksandr Didenko <
>>> adide...@mirantis.com> wrote:
>>>
 Hey Adam,

 in Fuel we have the following option (checkbox) on the Network Settings tab:

 Assign public network to all nodes
 When disabled, public network will be assigned to controllers only

 So if you uncheck it (by default it's unchecked), then the public network
 and 'br-ex' will exist on controllers only. Other nodes won't even have the
 "Public" network in the node interface configuration UI.

 Regards,
 Alex

 On Wed, May 25, 2016 at 9:43 AM, Adam Heczko 
 wrote:

> Hello Alex,
> I have a question about the proposed changes.
> Is it possible to introduce a new VLAN and an associated bridge only for
> controllers?
> I'm thinking about the DMZ use case and the possibility of exposing public
> IPs/VIPs and API endpoints on controllers on a completely separate L2
> network (segment/VLAN/bridge) not present on any nodes other than
> controllers.
> Thanks.
>
> On Wed, May 25, 2016 at 9:28 AM, Aleksandr Didenko <
> adide...@mirantis.com> wrote:
>
>> Hi folks,
>>
>> we had to revert those changes [0] since it's impossible to properly
>> handle two different netconfig tasks for multi-role nodes. So everything
>> stays as it was before - we have a single 'netconfig' task to configure
>> the network for all roles and you don't need to change anything in your
>> plugins. Sorry for the inconvenience.
>>
>> Our current plan for fixing network idempotency is to keep one task
>> but change the 'cross-depends' parameter to a yaql_exp. This will allow us
>> to use a single 'netconfig' task for all roles, but at the same time we'll
>> be able to properly order it: netconfig on non-controllers will be
>> executed only after the 'virtual_ips' task.
>>
>> Regards,
>> Alex
>>
>> [0] https://review.openstack.org/#/c/320530/
>>
>>
>> On Thu, May 19, 2016 at 2:36 PM, Aleksandr Didenko <
>> adide...@mirantis.com> wrote:
>>
>>> Hi all,
>>>
>>> please be aware that now we have two netconfig tasks (in Fuel 9.0+):
>>>
>>> - netconfig-controller - executed on controllers only
>>> - netconfig - executed on all other nodes
>>>
>>> The puppet manifest is the same, but the tasks are different. We had to
>>> do this [0] in order to fix network idempotency issues [1].
>>>
>>> So if you have 'netconfig' requirements in your plugin's tasks, please
>>> make sure to add 'netconfig-controller' as well, so they work properly
>>> on controllers.
>>>
>>> Regards,
>>> Alex
>>>
>>> [0] https://bugs.launchpad.net/fuel/+bug/1541309
>>> [1]
>>> https://review.openstack.org/#/q/I229957b60c85ed94c2d0ba829642dd6e465e9eca,n,z
>>>

[openstack-dev] [Neutron] Getting project version from API

2016-06-06 Thread Oleg Bondarev
Hi,

There are cases where it would be useful to know the version of Neutron (or
any other project) from the API, such as during upgrades or in cross-project
communication.
For example, in https://review.openstack.org/#/c/246910/ Nova needs to know
whether Neutron sends the vif-plugged event during live migration. To ensure
this, it should be enough to know that Neutron is "Newton" or higher.

Not sure why this wasn't done before (or was it and I'm just blind?), so the
question to the community is: what are the possible issues/downsides of
exposing the code version through the API?
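
For illustration only, roughly what a client-side check could look like if
we did expose it (the "project_version" field is invented here, just to show
the idea):

    import requests

    # hypothetical: the API root document grows a field with the code version
    info = requests.get("http://neutron.example.com:9696/").json()
    version = info.get("project_version", "0.0.0")  # e.g. "9.0.0" for Newton

    # compare numerically, not as strings, so "10.0" sorts after "9.0"
    if tuple(int(x) for x in version.split(".")[:2]) >= (9, 0):
        expect_vif_plugged_event = True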

Thanks,
Oleg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [linuxbridge] Multiple VXLAN multicast groups

2016-06-06 Thread Kevin Benton
The linux bridge agent does support using multiple VXLAN groups. You can
specify a prefix for 'vxlan_group' and the VNIs will be spread across the
multicast addresses in that prefix.[1]

The only difference between that and what your RFE proposes is specific
control over which multicast address is associated with each VNI. If that
is a specific requirement, then the RFE needs to be reworded because that
is the only difference between your proposal and what we have now for Linux
Bridge.


1.
https://github.com/openstack/neutron/blob/d8ae9cf4755416ca65108112a60e8b2e67607daf/neutron/plugins/ml2/drivers/linuxbridge/agent/common/config.py#L34-L42

On Mon, Jun 6, 2016 at 12:06 AM, Jiří Kotlín  wrote:

> Hi linuxbridge experts,
>
> the ability to define multiple VXLAN groups can be very useful in
> practice. Is there a design rationale for why vxlan_group was made a
> single attribute?
>
> More info is in this RFE:
> https://bugs.launchpad.net/bugs/1579068
>
> Thank you in advance for any help you can provide.
>
>
> Jiří Kotlín
> Developer
>
> Ultimum Technologies s.r.o.
> Na Poříčí 1047/26, 11000 Praha 1
> Czech Republic
>
> +420 602 288 358
> jiri.kot...@ultimum.io
> https://ultimum.io 
>
> linkedin  | twitter
>  | facebook
>  | google+
> 
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][notification] transformation TODO list

2016-06-06 Thread Balázs Gibizer
Hi, 

If you want to help nova transform its notification interface
into an API, you can find your new TODO list on the wiki [1].
You can find the basic how-to information and the list of work
items there. If you have any questions, just ping me (gibi) on
IRC or join the weekly notification subteam meeting [2].

gibi

[1] https://wiki.openstack.org/wiki/Nova/VersionedNotificationTransformation 
[2] https://wiki.openstack.org/wiki/Meetings/NovaNotification 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

