Re: [openstack-dev] [Fuel] Core rights in Fuel repositories

2015-01-23 Thread Roman Prykhodchenko
Aleksandra,

A general practice is to have program-core and program-milestone groups. That 
approach fits fuel-* as well, because there is only one separate team that 
does releases for all the projects.
What do other folks think about that?

- romcheg

> On 23 Jan 2015, at 18:36, Aleksandra Fedorova wrote:
> 
> How should we deal with release management?
> 
> Currently I don't do any merges for stackforge/fuel-* projects but I
> need access to all of them to create branches at Hard Code Freeze.
> 
> Should we create separate fuel-release group for that? Should it be
> unified group for all repositories or every repository needs its own?
> 
> 
> 
> On Fri, Jan 23, 2015 at 7:04 PM, Roman Prykhodchenko  wrote:
>> Hi folks!
>> 
>> After moving python-fuelclient to its own repo, some of you started asking a 
>> good question: how do we manage core rights in the different Fuel 
>> repositories? The problem is that there is a single fuel-core group which is 
>> used for all fuel-* repos, except for python-fuelclient.
>> 
>> The approach mentioned above does not work very well at the moment, so 
>> I’d like to propose a different one:
>> 
>> - Every new or separated project should introduce its own -core group.
>> - That group will only contain active core reviewers for that project alone.
>> - Removing or adding people will be done according to approved OpenStack 
>> rules.
>> - The fuel-core group will be reduced to the smallest possible number of 
>> people and will only include those who must have decision-making powers 
>> according to the Fuel project’s rules.
>> - fuel-core will be included in every other fuel-*-core group.
>> - Elections to the fuel-core group will take place according to Fuel’s 
>> policies.
>> - fuel-core group members are needed for supervision and for taking 
>> action in emergencies.
>> 
>> 
>> - romcheg
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> -- 
> Aleksandra Fedorova
> Fuel Devops Engineer
> bookwar
> 


Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-23 Thread Doug Hellmann


On Fri, Jan 23, 2015, at 07:48 PM, Thomas Goirand wrote:
> Hi,
> 
> I've just noticed that oslo.log made it to global-requirements.txt 9
> days ago. Why are we still adding namespaced oslo libs?
> Wasn't the outcome of the discussion in Paris that we shouldn't do that
> anymore, and that we should be using oslo-log instead of oslo.log?
> 
> Is there something that I am missing here?
> 
> Cheers,
> 
> Thomas Goirand (zigo)

The naming is described in the spec:
http://specs.openstack.org/openstack/oslo-specs/specs/kilo/drop-namespace-packages.html

tl;dr - We did it this way to make life easier for the packagers.

Doug
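For anyone unfamiliar with what "namespaced" means here, the following stdlib-only sketch simulates the old pkgutil-style `oslo.*` namespace packages (dist and module names are made up for illustration). Each distribution ships its own copy of `oslo/__init__.py`, and on real systems those colliding copies in site-packages are exactly what made life hard for the packagers:

```python
import os
import sys
import tempfile

# Build two fake distributions that both install into the "oslo" namespace,
# the way the old oslo.log / oslo.config packages did.
base = tempfile.mkdtemp()
for dist, mod in [("dist_a", "log"), ("dist_b", "config")]:
    pkg = os.path.join(base, dist, "oslo")
    os.makedirs(pkg)
    # Each dist ships its own copy of oslo/__init__.py declaring the
    # namespace; colliding copies of this file are the packaging headache
    # the Kilo spec removes by switching to plain oslo_* top-level packages.
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("from pkgutil import extend_path\n"
                "__path__ = extend_path(__path__, __name__)\n")
    with open(os.path.join(pkg, mod + ".py"), "w") as f:
        f.write("NAME = %r\n" % mod)

sys.path[:0] = [os.path.join(base, "dist_a"), os.path.join(base, "dist_b")]

# extend_path stitches both directories into one logical "oslo" package.
from oslo import config, log

print(log.NAME, config.NAME)  # -> log config
```

Under the new convention the dist keeps its dotted name (oslo.log) but the importable package becomes a plain top-level one (oslo_log), so no shared `__init__.py` is needed.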



Re: [openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-23 Thread Louis Taylor
On Sat, Jan 24, 2015 at 01:48:32AM +0100, Thomas Goirand wrote:
> I've just noticed that oslo.log made it to global-requirements.txt 9
> days ago. Why are we still adding namespaced oslo libs?
> Wasn't the outcome of the discussion in Paris that we shouldn't do that
> anymore, and that we should be using oslo-log instead of oslo.log?
> 
> Is there something that I am missing here?

The decision in Paris was to keep the same naming convention so that the oslo
packages stay consistent, since dhellmann didn't want to rename a lot of packages:

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-November/050313.html




[openstack-dev] [oslo] Why are we continuing to add new namespaced oslo libs?

2015-01-23 Thread Thomas Goirand
Hi,

I've just noticed that oslo.log made it to global-requirements.txt 9
days ago. Why are we still adding namespaced oslo libs?
Wasn't the outcome of the discussion in Paris that we shouldn't do that
anymore, and that we should be using oslo-log instead of oslo.log?

Is there something that I am missing here?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [Nova][Neutron] Thoughts on the nova<->neutron interface

2015-01-23 Thread Kevin Benton
It seems like switching to internal RPC interfaces would be pretty
unstable at this point.

Can we start by identifying the shortcomings of the HTTP interface and see
if we can address them before making the jump to using an interface which
has been internal to Neutron so far?

I scanned through the etherpad and I really like Salvatore's idea of adding
a service plugin to Neutron that is designed specifically for interacting
with Nova. All of the Nova notification interactions can be handled there
and we can add new API components designed for Nova's use (e.g. syncing
data, etc). Does anyone have any objections to that approach?

On Fri, Jan 23, 2015 at 7:04 AM, Gary Kotton  wrote:

> Hi,
> As Salvatore mentioned this was one of the things that we discussed at the
> San Diego summit many years ago. I like the idea of using an RPC interface
> to speak with Neutron (we could do a similar thing with Cinder, glance
> etc). This would certainly address a number of issues with the interfaces
> that we use at the moment. It is certainly something worthwhile discussing
> next week.
> We would need to understand how to define versioned APIs and how to deal
> with extensions, etc.
> Thanks
> Gary
>
> On 1/23/15, 2:59 PM, "Russell Bryant"  wrote:
>
> >On 01/22/2015 06:40 PM, Salvatore Orlando wrote:
> >> I also like the idea of considering the RPC interface. What kind of
> >> stability / versioning exists on the Neutron RPC interface?
> >>
> >>
> >> Even if Neutron does not have fancy things such as objects with
> >> remotable method, I think its RPC interfaces are versioned exactly in
> >> the same way as Nova. On REST vs AMQP I do not have a strong opinion.
> >> This topic comes up quite often; on the one hand REST provides a cleaner
> >> separation of concerns between the two projects; on the other hand RPC
> >> will enable us to design an optimised interface specific to Nova. While
> >> REST over HTTP is not as bandwidth-efficient as RPC over AMQP it however
> >> allow deployers to use off-the-shelf tools for HTTP optimisation, such
> >> as load balancing, or caching.
> >
> >Neutron uses rpc versioning, but there are some problems with it (that I
> >have been working to clean up).  The first one is that the interfaces
> >are quite tangled together.  There are interfaces that appear separate,
> >but are used with a bunch of mixin classes and actually presented as a
> >single API over rpc.  That means they have to be versioned together,
> >which is not really happening consistently in practice.  I'm aiming to
> >have all of this cleared up by the end of Kilo, though.
> >
> >The other issue is related to the "fancy things such as objects with
> >remotable methods".  :-)  The key with this is versioning the data sent
> >over these interfaces.  Even with rpc interface versioning clear and
> >consistently used, I still wouldn't consider these as stable interfaces
> >until the data is versioned, as well.
> >
> >--
> >Russell Bryant
> >



-- 
Kevin Benton
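Russell's two points above — versioning the rpc interface itself, and versioning the data it carries — can be illustrated with a small self-contained sketch. This is illustrative Python, not actual oslo.messaging or Neutron code:

```python
# Illustrative sketch (not real oslo.messaging code): a version-capped RPC
# API lets an older client keep talking to a newer server, and the server
# must not leak fields the caller's version doesn't know about.

class PortRPCServer(object):
    """Server side: advertises the highest interface version it speaks."""
    target_version = (1, 2)

    def get_port(self, port_id, version):
        port = {"id": port_id, "status": "ACTIVE"}
        if version >= (1, 2):
            # Field added in 1.2; must not be sent to 1.1 callers, or
            # unversioned data leaks across the interface.
            port["dns_name"] = "vm-%s" % port_id
        return port


class PortRPCClient(object):
    """Client side: pins (caps) the version it was written against."""

    def __init__(self, server, version_cap=(1, 1)):
        if version_cap > server.target_version:
            raise RuntimeError("server too old for this client")
        self.server = server
        self.version_cap = version_cap

    def get_port(self, port_id):
        return self.server.get_port(port_id, version=self.version_cap)


server = PortRPCServer()
old_client = PortRPCClient(server, version_cap=(1, 1))
new_client = PortRPCClient(server, version_cap=(1, 2))

assert "dns_name" not in old_client.get_port("a1")
assert "dns_name" in new_client.get_port("a1")
```

The tangled-mixin problem Russell describes is what happens when several such interfaces share one version number: a change to any of them forces a bump for all.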


Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-23 Thread Brad Topol
Keystone cores and ATC's,

Thank you very much for the nomination and your positive feedback.  I 
feel very honored to have received this nomination.  I have thoroughly 
enjoyed collaborating with all of you over these past several years.  I 
look forward to our continued collaboration and am confident that our 
talented group of folks will continue to drive outstanding innovations in 
the Keystone project.

Regards,

Brad 

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Kendra Witherspoon (919) 254-0680



From:   Morgan Fainberg 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date:   01/23/2015 12:53 PM
Subject:Re: [openstack-dev] [Keystone] Nominating Brad Topol for 
Keystone-Spec core



Based upon the feedback to this thread, I want to congratulate Brad Topol 
as the newest member of the Keystone-Specs-Core team!
—Morgan

On Jan 18, 2015, at 11:11 AM, Morgan Fainberg  
wrote:

Hello all,

I would like to nominate Brad Topol for Keystone Spec core (core reviewer 
for Keystone specifications and API-Specification only: 
https://git.openstack.org/cgit/openstack/keystone-specs ). Brad has been a 
consistent voice advocating for well defined specifications, use of 
existing standards/technology, and ensuring the UX of all projects under 
the Keystone umbrella continue to improve. Brad brings to the table a 
significant amount of insight to the needs of the many types and sizes of 
OpenStack deployments, especially what real-world customers are demanding 
when integrating with the services. Brad is a core contributor on pycadf 
(also under the Keystone umbrella) and has consistently contributed code 
and reviews to the Keystone projects since the Grizzly release.

Please vote with +1/-1 on adding Brad to as core to the Keystone Spec 
repo. Voting will remain open until Friday Jan 23.

Cheers,
Morgan Fainberg



Re: [openstack-dev] [neutron][lbaas] Trying to set up LBaaS V2 on Juno with DVR

2015-01-23 Thread Kevin Benton
What happens if you deploy multiple Neutron servers?

On Fri, Jan 23, 2015 at 10:56 AM, Doug Wiegley 
wrote:

> Get ready to vomit.
>
> The lbaasv2 code you’re pulling is a non-agent driver.  Meaning, it runs
> haproxy on the *neutron controller* node, and only the controller node.
> It’s meant to be a POC for single node systems, not something you can
> deploy.
>
> In the upcoming mid-cycle, the driver will be agent-ified, like v1. I
> believe Phil from Rax will be leading that effort.
>
> Thanks,
> Doug
>
>
>
> On 1/23/15, 11:01 AM, "Al Miller"  wrote:
>
> >I have been trying to set up LBaaS v2 in a Juno-based environment.
> >
> >I have successfully done this in devstack by setting it up based on
> >stable/juno, then grabbing https://review.openstack.org/#/c/123491/ and
> >the client from https://review.openstack.org/#/c/111475/, and then
> >editing neutron.conf to include the
> >neutron.services.loadbalancer.plugin.LoadBalancerPluginv2 service_plugin
> >and
> >service_provider=LOADBALANCERV2:Haproxy:neutron.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver.HaproxyNSDriver:default.  I have
> >also enabled DVR.
> >
> >With this setup in devstack, I can use the LBaaS V2 CLI commands to set
> >up a  working V2 loadbalancer.
> >
> >The problem comes in when I try to do this in an openstack installation.
> >I have set up a three node installation based on Ubuntu 14.04 following
> >the procedure in
> >
> >http://docs.openstack.org/juno/install-guide/install/apt/openstack-install-guide-apt-juno.pdf.  I have a controller node for the API services, a
> >network node, and a compute node.   I can boot instances and create V1
> >loadbalancers.
> >
> >When I bring in the LBaaS V2 code into this environment, it is more
> >complex.  I need to add it to the neutron API server on the controller,
> >but also the compute node (the goal here is to test it with DVR).   So on
> >the compute node I install the neutron-lbaas-agent package, bring in the
> >123491 patch, and make the neutron.conf edits.  In this configuration,
> >the lbaas agent fails with an RPC timeout:
> >
> >2015-01-22 16:10:52.712 14795 ERROR
> >neutron.services.loadbalancer.agent.agent_manager [-] Unable to retrieve
> >ready devices
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager Traceback (most recent
> >call last):
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agen
> >t_manager.py", line 148, in sync_state
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager ready_instances =
> >set(self.plugin_rpc.get_ready_devices())
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agen
> >t_api.py", line 38, in get_ready_devices
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager
> >self.make_msg('get_ready_devices', host=self.host)
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 36, in
> >wrapper
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager return
> >method(*args, **kwargs)
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 175, in
> >call
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager context, msg,
> >rpc_method='call', **kwargs)
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 201, in
> >__call_rpc_method
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager return
> >func(context, msg['method'], **msg['args'])
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line
> >389, in call
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager return
> >self.prepare().call(ctxt, method, **kwargs)
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line
> >152, in call
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager retry=self.retry)
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.agent.agent_manager   File
> >"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90,
> >in _send
> >2015-01-22 16:10:52.712 14795 TRACE
> >neutron.services.loadbalancer.a
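The neutron.conf edits described in Al's message might look roughly like this; this is a sketch for the unmerged Juno-era patches referenced above, not a supported configuration:

```ini
# Sketch of the neutron.conf edits described above (Juno + review 123491);
# not a supported configuration.
[DEFAULT]
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPluginv2

[service_providers]
service_provider = LOADBALANCERV2:Haproxy:neutron.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver.HaproxyNSDriver:default
```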

Re: [openstack-dev] [neutron] iptables routes are not being injected to router namespace

2015-01-23 Thread Carl Baldwin
Nice work, Brian!

On Thu, Jan 22, 2015 at 2:57 PM, Brian Haley  wrote:
> On 01/22/2015 02:35 PM, Kevin Benton wrote:
>> Right, there are two bugs here. One is in whatever went wrong with 
>> defer_apply
>> and one is with this exception handling code. I would allow the fix to go in 
>> for
>> the exception handling and then file another bug for the actual underlying
>> defer_apply bug.
>
> What went wrong with defer_apply() was caused by oslo.concurrency - version
> 1.4.1 seems to fix the problem, see https://review.openstack.org/#/c/149400/
> (thanks Ihar!)
>
> Xavier - can you update your oslo.concurrency to that version and verify it
> helps?  It seems to work in my config.
>
> Then the change in the other patchset could be applied, along with a test that
> triggers exceptions so this gets caught.
>
> Thanks,
>
> -Brian
>
>> On Thu, Jan 22, 2015 at 10:32 AM, Brian Haley wrote:
>>
>> On 01/22/2015 01:06 PM, Kevin Benton wrote:
>> > There was a bug for this already.
>> > https://bugs.launchpad.net/bugs/1413111
>>
>> Thanks Kevin.  I added more info to it, but don't think the patch 
>> proposed there
>> is correct.  Something in the iptables manager defer_apply() code isn't
>> quite right.
>>
>> -Brian
>>
>>
>> > On Thu, Jan 22, 2015 at 9:07 AM, Brian Haley wrote:
>> >
>> > On 01/22/2015 10:17 AM, Carl Baldwin wrote:
>> > > I think this warrants a bug report.  Could you file one with 
>> what you
>> > > know so far?
>> >
>> > Carl,
>> >
>> > Seems as though a recent change introduced a bug.  This is on a 
>> devstack
>> > I just created today, at l3/vpn-agent startup:
>> >
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent 
>> Traceback (most
>> > recent call last):
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File
>> > "/opt/stack/neutron/neutron/common/utils.py", line 342, in call
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent 
>> return
>> > func(*args, **kwargs)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File
>> > "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in
>> process_router
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> >  self._process_external(ri)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File
>> > "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in
>> _process_external
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> >  self._update_fip_statuses(ri, existing_floating_ips, fip_statuses)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> UnboundLocalError:
>> > local variable 'existing_floating_ips' referenced before assignment
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> > Traceback (most recent call last):
>> >   File 
>> "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py",
>> line
>> > 82, in _spawn_n_impl
>> > func(*args, **kwargs)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1093, 
>> in
>> > _process_router_update
>> > self._process_router_if_compatible(router)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1047, 
>> in
>> > _process_router_if_compatible
>> > self._process_added_router(router)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1056, 
>> in
>> > _process_added_router
>> > self.process_router(ri)
>> >   File "/opt/stack/neutron/neutron/common/utils.py", line 345, in 
>> call
>> > self.logger(e)
>> >   File
>> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line
>> > 82, in __exit__
>> > six.reraise(self.type_, self.value, self.tb)
>> >   File "/opt/stack/neutron/neutron/common/utils.py", line 342, in 
>> call
>> > return func(*args, **kwargs)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in
>> > process_router
>> > self._process_external(ri)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in
>> > _process_external
>> > self._update_fip_statuses(ri, existing_floating_ips, 
>> fip_statuses)
>> > UnboundLocalError: local variable 'existing_floating_ips' 
>> referenced
>> before
>> > assignment
>> >
>> > Since that's happening while we're holding the iptables lock I'm 
>> assuming
>> > no rules are being applied.
>> >
>> > I'm looking into it now, will file a bug if there isn't already 
>> one.
>> >
>>
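The UnboundLocalError in the traceback above is the classic pattern of a variable assigned inside a try block but referenced on a path where the assignment never ran. A minimal reproduction with illustrative names (not the actual l3-agent code):

```python
# Minimal reproduction of the bug pattern in the traceback above:
# a variable assigned mid-try is referenced on a path that also runs
# when the try block fails before the assignment. Names are
# illustrative, not the actual l3-agent code.

def process_external(fail_early):
    try:
        if fail_early:
            raise RuntimeError("plugin call failed")
        existing_floating_ips = {"10.0.0.5"}
        fip_statuses = {"10.0.0.5": "ACTIVE"}
    except RuntimeError:
        # Bug: falls through to the update below without the variable bound.
        fip_statuses = {}
    # Raises UnboundLocalError when fail_early is True:
    return update_fip_statuses(existing_floating_ips, fip_statuses)

def update_fip_statuses(existing, statuses):
    return {ip: statuses.get(ip, "DOWN") for ip in existing}

process_external(False)   # fine
try:
    process_external(True)
except UnboundLocalError as e:
    print("reproduced:", e)

# Fix: bind existing_floating_ips (e.g. to an empty set) before the
# try block, so every code path sees it.
```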

Re: [openstack-dev] [magnum] Schedule for rest of Kilo

2015-01-23 Thread Steven Dake

On 01/23/2015 02:46 PM, Adrian Otto wrote:

Steven,

Thanks for this email to capture and relay this decision from our IRC team 
meeting this week! I thought it might be helpful if I made a team calendar with 
an iCal feed that our team can subscribe to that would include our team meeting 
schedule, and our milestones to help make our plans more visible to the team. I 
can set this up if there is interest. Thoughts on this?


I probably wouldn't use an iCal feed myself, but publishing the schedule on the 
wiki might be helpful.  Others may want an iCal feed, though.


Regards,
-steve


Cheers,

Adrian

On Jan 21, 2015, at 8:42 AM, Steven Dake  wrote:


TL;DR: moving Magnum to match upstream release scheduling, intersecting at K3.

As discussed in our last IRC meeting, we want to merge our Magnum milestone 
schedules with the upstream OpenStack schedules as soon as feasible.  We think 
it is feasible to merge release schedules at K3.  As such, we have picked the 
following dates for our schedules:

Milestone #2: February 16th
Milestone #3 (merging with all OpenStack K3): March 19th
Magnum RCs: April 9th-23rd
magnum-2015.1 release: April 30th

From K3 forward, we will follow the Kilo release schedule here:

https://wiki.openstack.org/wiki/Kilo_Release_Schedule



Re: [openstack-dev] Announcing Magnum's First Release

2015-01-23 Thread Adrian Otto
Eric,

Thanks for your participation, and your help to form the vision for how to get 
the containers world and OpenStack to fit together better. We really appreciate 
it! I echo your sentiment of appreciation for the whole team as well, several 
of which who have stepped up to contribute in a big way. I’m very happy with 
the momentum and diversity of this team.

Cheers,

Adrian

On Jan 20, 2015, at 11:05 PM, Eric Windisch wrote:



On Tue, Jan 20, 2015 at 11:48 PM, Adrian Otto wrote:
Hello,

The Magnum community is pleased to announce the first release of Magnum 
available now for download from:
https://github.com/stackforge/magnum/releases/tag/m1

Congratulations to you and everyone else that made this possible!

Regards,
Eric Windisch


Re: [openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-23 Thread Kevin Benton
It's worth noting that all Neutron ML2 drivers are required to move to
their own repos starting in Kilo, so installing an extra Python package to
use a driver will become part of the standard Neutron installation
workflow. I would therefore suggest creating a stackforge project for the vDS
driver and packaging it up.
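A decomposed driver like that typically only needs a setuptools entry point in the `neutron.ml2.mechanism_drivers` namespace to be loadable by ML2. A sketch, with hypothetical package and class names:

```ini
# setup.cfg for a hypothetical out-of-tree vDS mechanism driver package;
# the project/module/class names below are made up for illustration.
[metadata]
name = networking-vsphere-dvs

[entry_points]
neutron.ml2.mechanism_drivers =
    dvs = networking_vsphere_dvs.ml2.mech_dvs:VMwareDVSMechanismDriver
```

With that installed, operators would enable it the usual way via `mechanism_drivers = ...,dvs` in the ML2 config.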

On Fri, Jan 23, 2015 at 11:39 AM, Andrey Danin  wrote:

> Hi, all,
>
> As you may know, Fuel 6.0 can deploy either a KVM-oriented
> environment or a VMware vCenter-oriented environment. We want to go
> further and mix them together: a user should be able to run both
> hypervisors in one OpenStack environment. We want to get this into Fuel 6.1.
> Here is how we are going to do it.
>
> * When vCenter is used as a hypervisor, the only way to use volumes with it
> is the Cinder VMDK backend. And vice versa, KVM cannot operate with
> volumes provided by the Cinder VMDK backend. All that means we should have
> two separate infrastructures (a hypervisor + a volume service) for each HV
> present in the environment. To do that, we decided to place the corresponding
> nova-compute and cinder-volume instances into different Availability Zones.
> We also want to disable the 'cross_az_attach' option in nova.conf to prevent a
> user from attaching a volume to an instance that doesn't support that volume
> type.
>
> * When used with VMDK, a cinder-volume service is just a proxy between the
> vCenter Datastore and Glance. That means the service itself doesn't need a
> local hard drive but can sometimes consume significant network bandwidth. That's
> why it's not a good idea to always put it on a Controller node. So, we want
> to add a new role called 'cinder-vmdk'. A user will be able to put this
> role on whatever node he wants: a separate node, or combined with other
> roles. HA will be achieved by placing the role on two or more nodes.
> The cinder-volume services on each node will be configured identically,
> including the 'host' stanza. We have the same approach now for Cinder+Ceph.
>
> * Nova-compute services for vCenter are kept running on Controller nodes.
> They are managed by Corosync.
>
> * There are two options for the network backend: the good old Nova-network,
> and a modern Neutron with the ML2 DVS driver enabled. The problem with
> Nova-network is that we have to run it in 'singlehost' mode, which means that
> only one nova-network service will be running for the whole environment. That
> makes the service a single point of failure, prevents a user from using
> Security Groups, and increases network consumption on the node where the
> service is running. The problem with Neutron is that there is no ML2 DVS
> driver in upstream Neutron for Juno, or even Kilo. There is an unmerged
> patch [1] with almost no chance of getting into Kilo. The good news is that we
> managed to run a PoC lab with this driver and both HVs enabled, so we can
> build the driver as a package, but it'll be a little ugly. That's why we
> picked the Nova-network approach as the basis. The cluster creation wizard
> will offer a choice of whether you want to use vCenter in a cluster or not.
> Depending on that, the nova-network service will run in 'singlehost' or
> 'multihost' mode. Maybe, if we have enough resources, we'll also implement
> Neutron + vDS support.
>
> * We are going to move all VMware-specific settings to a separate UI tab.
> On the Settings tab we will keep a Glance backend switch (Swift, Ceph,
> VMware) and a libvirt_type switch (KVM, qemu). In the cluster creation
> wizard there will be a checkbox called 'add VMware vCenter support to
> your cloud'. When it's enabled, a user can choose nova-network only.
>
> * The OSTF test suite will be extended to support separate sets of tests for
> each HV.
>
> [1] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/
>
> Links to blueprints:
> https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
> https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role
> https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor
>
>
> I would appreciate your thoughts on all of this.
>
>
>
> --
> Andrey Danin
> ada...@mirantis.com
> skype: gcon.monolake
>
>
>


-- 
Kevin Benton
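The volume/AZ fencing Andrey describes comes down to a single nova.conf flag. A sketch (the option's name and section have varied across releases, so check your release; Juno-era Nova registered it as `cinder_cross_az_attach` in `[DEFAULT]`, while newer releases use `[cinder]/cross_az_attach`):

```ini
# nova.conf sketch for the dual-hypervisor AZ split described above.
# Option placement varies by release; Juno-era Nova used
# cinder_cross_az_attach in [DEFAULT].
[DEFAULT]
# Refuse volume attachments across availability zones, so KVM instances
# cannot attach VMDK-backed volumes and vice versa.
cinder_cross_az_attach = False
```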


Re: [openstack-dev] [magnum] Schedule for rest of Kilo

2015-01-23 Thread Adrian Otto
Steven,

Thanks for this email to capture and relay this decision from our IRC team 
meeting this week! I thought it might be helpful if I made a team calendar with 
an iCal feed that our team can subscribe to that would include our team meeting 
schedule, and our milestones to help make our plans more visible to the team. I 
can set this up if there is interest. Thoughts on this?

Cheers,

Adrian

On Jan 21, 2015, at 8:42 AM, Steven Dake  wrote:

> TLDR; moving Magnum to match upstream release scheduling intersecting k3
> 
> As discussed in our last IRC meeting, we want to merge our Magnum milestone 
> schedules with the upstream OpenStack schedules as soon as feasible.  We 
> think it is feasible to merge release schedules at K3.  As such, we have 
> picked the following dates for our schedules:
> 
> Milestone #2: February 16th
> Milestone #3 (merging with all OpenStack K3): March 19th
> Magnum RCs: April 9th-23rd
> magnum-2015.1 release: April 30th
> 
> From k3 forward, we will follow the Kilo release schedule here:
> 
> https://wiki.openstack.org/wiki/Kilo_Release_Schedule
> 


Re: [openstack-dev] [nova] novaclient support for V2.1 micro versions

2015-01-23 Thread Chen CH Ji
No, AFAICT it's not supported yet: the v2.1 microversion work and the related
blueprints are still under implementation, and there has been no corresponding
change in novaclient so far.
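Until novaclient grows support, the microversion is just a request header on the v2.1 API, so it can be exercised with any HTTP client. A sketch that only builds the headers (the token value is a placeholder, and the header name assumes the v2.1 microversion spec as it stands):

```python
# Sketch: requesting a specific v2.1 microversion by hand, since
# novaclient has no support for it yet. The token is a placeholder.

def nova_headers(token, microversion=None):
    """Build request headers for a Nova v2.1 API call."""
    headers = {
        "X-Auth-Token": token,
        "Content-Type": "application/json",
    }
    if microversion:
        # The v2.1 API selects behaviour via this header; without it the
        # server falls back to its minimum supported microversion.
        headers["X-OpenStack-Nova-API-Version"] = microversion
    return headers

h = nova_headers("TOKEN", microversion="2.3")
print(h["X-OpenStack-Nova-API-Version"])  # -> 2.3
```

Pair this with curl or python-requests against the devstack endpoint to test the clean-shutdown change end to end.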

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   "Day, Phil" 
To: "OpenStack Development Mailing List
(openstack-dev@lists.openstack.org)"

Date:   01/23/2015 05:56 PM
Subject:[openstack-dev] [nova] novaclient support for V2.1 micro
versions



Hi Folks,

Is there any support yet in novaclient for requesting a specific
microversion?  (I'm looking at the final leg of extending clean-shutdown to
the API, and wondering how to test this in devstack via novaclient.)

Phil



[openstack-dev] [Heat] Convergence Phase 1 implementation plan

2015-01-23 Thread Zane Bitter
I've mentioned this in passing a few times, but I want to lay it out 
here in a bit more detail for comment. Basically we're implementing 
convergence at a time when we still have a lot of 'unit' tests that are 
really integration tests, and we don't want to have to rewrite them to 
anticipate this new architecture, nor wait until they have all been 
converted into functional tests. And of course it goes without saying 
that we have to land all of these changes without breaking anything for 
users.


To those ends, my proposal is that we (temporarily) support two code 
paths: the existing, legacy in-memory path and the new, distributed 
convergence path. Stacks will contain a field indicating which code path 
they were created with, and each stack will be operated on only by that 
same code path throughout its lifecycle (i.e. a stack created in legacy 
mode will always use the legacy code). We'll add a config option, off by 
default, to enable the new code path. That way users can switch over at 
a time of their choosing. When we're satisfied that it's stable enough 
we can flip the default (note: IMHO this would have to happen before 
kilo-3 in order to make it for the Kilo release).
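A hedged sketch of the per-stack dispatch described above (names and config handling are illustrative, not Heat's actual code):

```python
# Hedged sketch of the proposed dual code path: each stack records the
# path it was created with, and a config option (off by default) picks
# the path for newly created stacks. Names are illustrative only.
CONF_CONVERGENCE_ENGINE = False  # stand-in for the proposed config option

class Stack:
    def __init__(self, name, convergence=None):
        self.name = name
        # New stacks take the configured default; existing stacks keep
        # whatever flag they were created with, for their whole lifecycle.
        self.convergence = (CONF_CONVERGENCE_ENGINE
                            if convergence is None else convergence)

def update_stack(stack):
    """Dispatch an operation to the code path the stack was born with."""
    return "convergence-update" if stack.convergence else "legacy-update"

legacy = Stack("pre-existing", convergence=False)
fresh = Stack("fresh")              # follows the config default (legacy)
conv = Stack("opted-in", convergence=True)
```

The point of the flag living on the stack rather than in config alone is that flipping the config default later only affects newly created stacks.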


Based on this plan, I had a go at breaking the work down into discrete 
tasks, and because it turned out to be really long I put it in an 
etherpad rather than include it here:


https://etherpad.openstack.org/p/heat-convergence-tasks

If anyone has additions/changes then I suggest adding them to that 
etherpad and replying to this message to flag your changes.


To be clear, it's unlikely I will have time to do the implementation 
work on any of these tasks myself (although I will be trying to review 
as many of them as possible). So the goal here is to get as many 
contributors involved in doing stuff in parallel as we can.


There are obviously dependencies between many of these tasks, so my plan 
is to raise each one as a blueprint so we can see the magic picture that 
Launchpad shows. I want to get feedback first though, because there are 
18 of them so far, and rejigging everything in response to feedback 
would be a bit of a pain.


I'm also prepared to propose specs for all of these _if_ people think 
that would be helpful. I see three options here:

 - Propose 18 fairly minimal specs (maybe in a single review?)
 - Propose 1 large spec (essentially the contents of that etherpad)
 - Just discuss in the etherpad rather than Gerrit

Obviously that's in decreasing order of the amount of work required, but 
I'll do whatever folks think best for discussion.


cheers,
Zane.



Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-01-23 Thread Ruslan Kamaldinov
On Fri, Jan 23, 2015 at 9:04 PM, Andrew Pashkin  wrote:
> Hello!
>
> Current situation with SQLite support:
> - Migration tests do not run on SQLite.
> - At the same time, the migrations themselves support SQLite (with bugs).
>
> Today I came across this bug:
> Error during execution of database downgrade
>
> We can resolve this bug by hardening SQLite support, in that case:
> - We need to fix migrations and make them support SQLite without bugs, and
> then continuously make some effort to maintain this support (manually
> writing migrations and test cases for them).
> - Also need to introduce migration tests run on SQLite.
>
> We can also drop SQLite support, and in this case:
> - We just factor out everything related to SQLite from the migrations once and
> set this bug as "Won't fix".
>
> Let's discuss that.

I agree that we don't need to support SQLite in migrations. As it was
already said in [1], there is no point in running DB migrations
against SQLite.

Here is what I suggest we do:
1. Use ModelsMigrationsSync from [2] in tests to make sure that
SQLAlchemy models are in sync with migrations. A usage example can be
found at [3]
2. Populate the DB schema from SQLAlchemy models in unit tests which
require access to the DB
3. Wipe out everything related to SQLite from the DB migrations code
4. Recommend that all developers use MySQL when they run Murano locally
5. For those who still insist on SQLite, we can provide a command-line
option which would generate the database schema from SQLAlchemy metadata.
This should be declared a development-only feature, not supported for
any kind of production deployment
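As a rough sketch of item 5 above — generating a schema straight from model metadata for development-only use, instead of running migrations — the idea might look like this (the table definitions are illustrative stand-ins, not Murano's real models):

```python
import sqlite3

# Hypothetical sketch: emit a development-only schema directly from
# model metadata rather than replaying Alembic migrations on SQLite.
# MODELS is an illustrative stand-in for SQLAlchemy model metadata.
MODELS = {
    "environment": {"id": "TEXT PRIMARY KEY", "name": "TEXT NOT NULL"},
    "session": {"id": "TEXT PRIMARY KEY", "environment_id": "TEXT"},
}

def create_schema(conn, models):
    """Generate CREATE TABLE statements from the metadata and run them."""
    for table, cols in models.items():
        cols_sql = ", ".join(f"{col} {ctype}" for col, ctype in cols.items())
        conn.execute(f"CREATE TABLE {table} ({cols_sql})")

conn = sqlite3.connect(":memory:")
create_schema(conn, MODELS)
tables = {row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")}
```

In the real implementation this role would be played by `metadata.create_all()` on the SQLAlchemy models, exposed behind the proposed command-line option.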

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-January/055058.html
[2] 
http://git.openstack.org/cgit/openstack/oslo.db/tree/oslo_db/sqlalchemy/test_migrations.py
[3] 
http://git.openstack.org/cgit/openstack/sahara/tree/sahara/tests/unit/db/migration/test_migrations_base.py#n198

Thanks,
Ruslan



Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-01-23 Thread Georgy Okrokvertskhov
Hi Andrew,

I understand the difficulties with SQLite support, but having SQLite instead
of any other DB is very useful for development. I think nobody uses SQLite in
production, so we can probably just add a release note that there is a known
limitation with SQLite support.

Thanks
Gosha

On Fri, Jan 23, 2015 at 10:04 AM, Andrew Pashkin 
wrote:

>  Hello!
>
> Current situation with SQLite support:
> - Migration tests do not run on SQLite.
> - At the same time, the migrations themselves support SQLite (with bugs).
>
> Today I came across this bug:
> Error during execution of database downgrade
> 
>
> We can resolve this bug by hardening SQLite support, in that case:
> - We need to fix migrations and make them support SQLite without bugs, and
> then continuously make some effort to maintain this support (manually
> writing migrations and test cases for them).
> - Also need to introduce migration tests run on SQLite.
>
> We can also drop SQLite support, and in this case:
> - We just factor out everything related to SQLite from the migrations once
> and set this bug as "Won't fix".
>
> Let's discuss that.
>
> --
> With kind regards, Andrew Pashkin.
> cell phone - +7 (985) 898 57 59
> Skype - waves_in_fluids
> e-mail - apash...@mirantis.com
>
>
>
>


-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284


Re: [openstack-dev] [cinder] Change reset-state to involve the driver

2015-01-23 Thread Mike Perez
On 19:44 Thu 22 Jan , D'Angelo, Scott wrote:
> Thanks to everyone who commented on the spec to change reset-state to involve
> the driver: https://review.openstack.org/#/c/134366/
> 
> I've put some comments in reply, and I'm going to attempt to capture the
> various ideas here. I hope we can discuss this at the Mid-Cycle in Austin.
> 1) The existing reset-state python-cinderclient command should not change in
>unexpected ways and shouldn't have any new parameters (general consensus
>here). It should not fail if the driver does not implement my proposed
>changes (my opinion).
> 2) The existing reset-state is broken for some use cases (my UseCase2, for
>    example, when stuck in 'attaching' but the volume is still attached to an
>    instance). Existing reset-state will work for other situations (my
>    UseCase1, when stuck in 'attaching' but not really attached).
> 3) MikeP pointed out that moving _reset_status() would break clients. I could
>   use help with understanding some of the API code here.
> 4) Xing had noted that this doesn't fix Nova. I hope we can do that
>separately, since this is proving contentious enough. Some cases such as
>a timeout during initialize_connection() could be fixed in Nova with a bug
>once this change is in. Other Nova changes might require a new Nova API to
>call for cleanup during reset-state, and that sounds much more difficult
>to get through the Nova change process.
> 5) Walt suggested a new driver method reset_state(). This seems fine,
>although I had hoped terminate_connection() and detach_volume() would
>cover all possible cleanup in the driver.
> 6) MikeP pointed out that difficulty of getting 30+ drivers to implement
>a change. I hope that this can be done in such a way that the reset-state
>commands works exactly as it does today if this is not implemented in the
>driver. Putting code in the driver to improve what exists today would be
>strictly optional.
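The optional-hook idea in points (5) and (6) — call a driver cleanup method if the driver provides one, otherwise behave exactly as today — could be sketched roughly like this (names are illustrative, not the actual Cinder driver API):

```python
# Illustrative sketch only: an optional per-driver reset_state() hook,
# with a fallback to today's behaviour when the driver lacks it.
class BaseDriver:
    """Driver without the optional cleanup hook (today's situation)."""

class FancyDriver(BaseDriver):
    def reset_state(self, volume):
        """Driver-specific cleanup, e.g. tearing down a stale attachment."""
        return f"cleaned {volume}"

def reset_status(driver, volume, new_status):
    # Call the optional hook if implemented; otherwise skip cleanup
    # entirely so reset-state keeps working exactly as it does today.
    cleanup = getattr(driver, "reset_state", None)
    result = cleanup(volume) if callable(cleanup) else None
    return {"volume": volume, "status": new_status, "cleanup": result}
```

With this shape, the 30+ existing drivers need no changes at all, and implementing the hook remains strictly optional.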

Scott, thanks for your work on this! I think your last comments have clarified
things for me and I really like the direction this is going. I have replied to
the review with some additional comments to add your ideas, as I would like to
keep the discussion in the review. Thanks!

-- 
Mike Perez



Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-23 Thread Jeremy Stanley
On 2015-01-23 12:02:19 +0100 (+0100), Matthias Runge wrote:
[...]
> I think providing/updating distro packages is quite comparable to
> updating pypi packages.
[...]

Within an order of magnitude anyway. The difference is that most
Python module upstream authors do their own packaging for PyPI (for
better or worse), but few are also package maintainers for Debian
and Red Hat so distro packages have a tendency to lag PyPI even if
you ignore distro release cycle induced delays.
-- 
Jeremy Stanley



Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Mike Bayer


Doug Hellmann  wrote:

> 
> 
> On Fri, Jan 23, 2015, at 12:49 PM, Mike Bayer wrote:
>> Mike Bayer  wrote:
>> 
>>> Ihar Hrachyshka  wrote:
>>> 
 On 01/23/2015 05:38 PM, Mike Bayer wrote:
> Doug Hellmann  wrote:
> 
>> We put the new base class for RequestContext in its own library because
>> both the logging and messaging code wanted to influence its API. Would
>> it make sense to do this database setup there, too?
> whoa, where’s that? is this an oslo-wide RequestContext class ? that would
> solve everything b.c. right now every project seems to implement
> RequestContext themselves.
>> 
>> 
>> so Doug -
>> 
>> How does this “influence of API” occur, would oslo.db import
>> oslo_context.context and patch onto RequestContext at that point? Or the
>> other way around? Or… ?
> 
> No, it's a social thing. I didn't want dependencies between
> oslo.messaging and oslo.log, but the API of the context needs to support
> use cases in both places.
> 
> Your case might be different, in that we might need to actually have
> oslo.context depend on oslo.db in order to call some setup code. We'll
> have to think about whether that makes sense and what other dependencies
> it might introduce between the existing users of oslo.context.

hey Doug -

for the moment, I have oslo_db.sqlalchemy.enginefacade applying its descriptors 
at import time onto oslo_context:

https://review.openstack.org/#/c/138215/30/oslo_db/sqlalchemy/enginefacade.py

https://review.openstack.org/gitweb?p=openstack/oslo.db.git;a=blob;f=oslo_db/sqlalchemy/enginefacade.py;h=3f76678a6c9788f62288c8fa5ef520db8dff2c0a;hb=bc33d20dc6db2f8e5f8cb02b4eb5f97d24dafb7a#l692

https://review.openstack.org/gitweb?p=openstack/oslo.db.git;a=blob;f=oslo_db/sqlalchemy/enginefacade.py;h=3f76678a6c9788f62288c8fa5ef520db8dff2c0a;hb=bc33d20dc6db2f8e5f8cb02b4eb5f97d24dafb7a#l498




> 
> Doug
> 
>> I’m almost joyful that this is here.   Assuming we can get everyone to
>> use it, should be straightforward for that right?
>> 
>> 
>> 


Re: [openstack-dev] [neutron] Changes to the core team

2015-01-23 Thread Mark McClain

> On Jan 22, 2015, at 12:21 PM, Kyle Mestery  wrote:
> 
> On Thu, Jan 15, 2015 at 4:31 PM, Kyle Mestery  > wrote:
> The last time we looked at core reviewer stats was in December [1]. In 
> looking at the current stats, I'm going to propose some changes to the core 
> team. Reviews are the most important part of being a core reviewer, so we 
> need to ensure cores are doing reviews. The stats for the 90 day period [2] 
> indicate some changes are needed for core reviewers who are no longer 
> reviewing on pace with the other core reviewers.
> 
> First of all, I'm removing Sumit Naiksatam from neutron-core. Sumit has been 
> a core reviewer for a long time, and his past contributions are very much 
> thanked by the entire OpenStack Neutron team. If Sumit jumps back in with 
> thoughtful reviews in the future, we can look at getting him back as a 
> Neutron core reviewer. But for now, his stats indicate he's not reviewing at 
> a level consistent with the rest of the Neutron core reviewers.
> 
> As part of the change, I'd like to propose Doug Wiegley as a new Neutron core 
> reviewer. Doug has been actively reviewing code across not only all the 
> Neutron projects, but also other projects such as infra. His help and work in 
> the services split in December were the reason we were so successful in 
> making that happen. Doug has also been instrumental in the Neutron LBaaS V2 
> rollout, as well as helping to merge code in the other neutron service 
> repositories.
> 
> I'd also like to take this time to remind everyone that reviewing code is a 
> responsibility, in Neutron the same as other projects. And core reviewers are 
> especially beholden to this responsibility. I'd also like to point out that 
> +1/-1 reviews are very useful, and I encourage everyone to continue reviewing 
> code even if you are not a core reviewer.
> 
> Existing neutron cores, please vote +1/-1 for the addition of Doug to the 
> core team.
> 
> It's been a week, and Doug has received plenty of support. Welcome to the 
> Neutron Core Review team Doug!
> 

Welcome Doug!

mark




[openstack-dev] [fuel] [vmware] Two hypervisors in one cloud

2015-01-23 Thread Andrey Danin
Hi, all,

As you may know, Fuel 6.0 can deploy either a KVM-oriented environment or a
VMware vCenter-oriented environment. We want to go further and mix them
together: a user should be able to run both hypervisors in one OpenStack
environment. We want to get this into Fuel 6.1. Here is how we plan to do it.

* When vCenter is used as a hypervisor, the only way to use volumes with it
is the Cinder VMDK backend. And vice versa: KVM cannot operate with volumes
provided by the Cinder VMDK backend. This means we need two separate
infrastructures (a hypervisor + a volume service) for each hypervisor present
in the environment. To achieve that, we decided to place the corresponding
nova-compute and cinder-volume instances into different Availability Zones.
We also want to disable the 'cross_az_attach' option in nova.conf so that a
user cannot attach a volume to an instance whose hypervisor does not support
that volume type.
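Illustrative config fragments for the AZ split described above (option names and sections vary between OpenStack releases; treat these as a sketch, not generated Fuel output):

```ini
# nova.conf -- forbid attaching a volume from a different AZ.
# In some releases this option is [DEFAULT]/cinder_cross_az_attach instead.
[cinder]
cross_az_attach = False

# cinder.conf on each cinder-vmdk node -- configured identically on every
# node, including the shared 'host' stanza, as with Cinder+Ceph today.
[DEFAULT]
storage_availability_zone = vcenter
host = cinder-vmdk
```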

* When used with VMDK, a cinder-volume service is just a proxy between the
vCenter Datastore and Glance. This means the service itself doesn't need a
local hard drive but can sometimes consume significant network bandwidth.
That's why it's not a good idea to always put it on a Controller node.
Instead, we want to add a new role called 'cinder-vmdk'. A user will be able
to assign this role to whatever node he wants: a separate node, or combined
with other roles. HA will be achieved by placing the role on two or more
nodes. The cinder-volume services on each node will be configured
identically, including the 'host' stanza. We use the same approach today for
Cinder+Ceph.

* Nova-compute services for vCenter are kept running on Controller nodes.
They are managed by Corosync.

* Two options exist for the network backend: the good old Nova-network, and
a modern Neutron with the ML2 DVS driver enabled. The problem with
Nova-network is that we have to run it in 'singlehost' mode, meaning that
only one nova-network service runs for the whole environment. That makes the
service a single point of failure, prevents users from using Security
Groups, and increases network consumption on the node where the service
runs. The problem with Neutron is that there is no ML2 DVS driver in
upstream Neutron for Juno, or even Kilo. There is an unmerged patch [1] with
almost no chance of getting into Kilo. The good news is that we managed to
run a PoC lab with this driver and both hypervisors enabled, so we can build
the driver as a package, but it'll be a little ugly. That's why we picked
the Nova-network approach as a basis. The cluster creation wizard will offer
a choice of whether or not to use vCenter in a cluster. Depending on that,
the nova-network service will run in 'singlehost' or 'multihost' mode.
Maybe, if we have enough resources, we'll also implement Neutron + vDS
support.

* We are going to move all VMware-specific settings to a separate UI tab. On
the Settings tab we will keep a Glance backend switch (Swift, Ceph, VMware)
and a libvirt_type switch (KVM, QEMU). In the cluster creation wizard there
will be a checkbox called 'add VMware vCenter support to your cloud'. When
it is enabled, the user can choose Nova-network only.

* The OSTF test suite will be extended to support separate sets of tests for
each hypervisor.

[1] Neutron ML2 vDS driver https://review.openstack.org/#/c/111227/

Links to blueprints:
https://blueprints.launchpad.net/fuel/+spec/vmware-ui-settings
https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role
https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor


I would appreciate your thoughts on all of this.



-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake


Re: [openstack-dev] [Cinder][3rd CI] Confused about “*real* storage backend”requirement for 3rd CI.

2015-01-23 Thread Mike Perez
On 11:30 Fri 23 Jan , Bharat Kumar wrote:
> Liu,
> 
> Yes, by default DevStack configures Cinder with LVM. But we can
> customize DevStack to configure Cinder with our own backend (a "real
> storage backend").
> 
> Below is the link to the patch that enables automatic configuration of
> GlusterFS for Cinder using devstack:
> https://review.openstack.org/#/c/133102/
> 
> And below is the link to configure Ceph with Cinder using devstack:
> https://review.openstack.org/#/c/65113/
> 
> The above two are the old way of implementing a "real storage" plugin. Sean
> Dague proposed a new way of implementing devstack plugins. Have a
> look at these two links:
> https://review.openstack.org/#/c/142805/
> https://review.openstack.org/#/c/142805/7/doc/source/plugins.rst

Just want to clarify that you don't have to make any changes to upstream
devstack to configure it to use your volume driver. Information on
configuring devstack to use your driver, as mentioned earlier, can be found here:

https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#How_do_I_configure_DevStack_so_my_Driver_Passes_Tempest.3F
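As an illustration of that configuration path, a local.conf fragment along these lines points devstack at a third-party backend (the driver path and backend name below are placeholders; exact variables depend on your devstack release and driver docs):

```ini
[[post-config|$CINDER_CONF]]
[DEFAULT]
enabled_backends = mybackend

[mybackend]
volume_driver = cinder.volume.drivers.mydriver.MyDriver
volume_backend_name = mybackend
```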

-- 
Mike Perez



Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

2015-01-23 Thread Sumit Naiksatam
Hi Sachi,

This refers to a Neutron Network UUID. This option is available with
the Neutron Resource Mapping Driver. I do not believe it is currently
functional with the ODL driver.

Thanks,
~Sumit.

On Fri, Jan 23, 2015 at 1:04 AM, Sachi Gupta  wrote:
> Hi Yapeng, Sumit,
>
> In the OpenStack GBP command line, the l2policy help shows an argument
> --network that can be passed. Can you please elaborate on which network we
> need to pass here, and what it is used for?
>
> gbp l2policy-create --help
> usage: gbp l2policy-create [-h] [-f {html,json,shell,table,value,yaml}]
>[-c COLUMN] [--max-width ]
>[--prefix PREFIX] [--request-format {json,xml}]
>[--tenant-id TENANT_ID] [--description
> DESCRIPTION]
>[--network NETWORK] [--l3-policy L3_POLICY]
>NAME
>
> Create a L2 Policy for a given tenant.
>
> positional arguments:
>   NAME  Name of L2 Policy to create
>
> optional arguments:
>   -h, --helpshow this help message and exit
>   --request-format {json,xml}
> The XML or JSON request format.
>   --tenant-id TENANT_ID
> The owner tenant ID.
>   --description DESCRIPTION
> Description of the L2 Policy
>   --network NETWORK Network to map the L2 Policy
>   --l3-policy L3_POLICY
> L3 Policy uuid
>
>
> Also, the PTG help includes an additional subnet parameter. Please also
> provide inputs on it.
>
> stack@tcs-ThinkCentre-M58p:/home/tcs/JUNIPER/gbp_openstack_odl/devstack$ gbp
> policy-target-group-create --help
> usage: gbp policy-target-group-create [-h]
>   [-f
> {html,json,shell,table,value,yaml}]
>   [-c COLUMN] [--max-width ]
>   [--prefix PREFIX]
>   [--request-format {json,xml}]
>   [--tenant-id TENANT_ID]
>   [--description DESCRIPTION]
>   [--l2-policy L2_POLICY]
>   [--provided-policy-rule-sets
> PROVIDED_POLICY_RULE_SETS]
>   [--consumed-policy-rule-sets
> CONSUMED_POLICY_RULE_SETS]
>   [--network-service-policy
> NETWORK_SERVICE_POLICY]
>   [--subnets SUBNETS]
>   NAME
> Create a Policy Target Group for a given tenant.
> positional arguments:
>   NAME  Name of Policy Target Group to create
>
> optional arguments:
>   -h, --helpshow this help message and exit
>   --request-format {json,xml}
> The XML or JSON request format.
>   --tenant-id TENANT_ID
> The owner tenant ID.
>   --description DESCRIPTION
> Description of the Policy Target Group
>   --l2-policy L2_POLICY
> L2 policy uuid
>
>   --provided-policy-rule-sets PROVIDED_POLICY_RULE_SETS
> Dictionary of provided policy rule set uuids
>   --consumed-policy-rule-sets CONSUMED_POLICY_RULE_SETS
> Dictionary of consumed policy rule set uuids
>   --network-service-policy NETWORK_SERVICE_POLICY
> Network service policy uuid
>   --subnets SUBNETS List of neutron subnet uuids
>
> output formatters:
>   output formatter options
>
>   -f {html,json,shell,table,value,yaml}, --format
> {html,json,shell,table,value,yaml}
> the output format, defaults to table
>   -c COLUMN, --column COLUMN
> specify the column(s) to include, can be repeated
>
> table formatter:
>   --max-width 
> Maximum display width, 0 to disable
>
> shell formatter:
>   a format a UNIX shell can parse (variable="value")
>
>   --prefix PREFIX   add a prefix to all variable names
>
>
>
>
> Thanks & Regards
> Sachi Gupta
>
>
>
> From:Yapeng Wu 
> To:Sachi Gupta , "OpenStack Development Mailing
> List (not for usage questions)" ,
> "groupbasedpolicy-...@lists.opendaylight.org"
> 
> Cc:"bu...@noironetworks.com" 
> Date:01/13/2015 11:48 PM
> Subject:RE: [openstack-dev] [Policy][Group-based-policy] ODL
> PolicyDriverSpecs
> 
>
>
>
> Hi, Sachi,
>
> Please see my inlined replies.
>
> Also, please refer to this link when you try to integrate OpenStack GBP and
> ODL GBP:
> https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallODLIntegrationDevstack
>
>
> Yapeng
>
> From: Sachi Gupta [mailto:sachi.gu...@tcs.com]
> Sent: Tuesday, January 13, 2015 4:02 AM
> To: OpenStack Development Mailing List (not for usage questions);
> groupbasedpolicy-...@lists.op

Re: [openstack-dev] [neutron][lbaas] Trying to set up LBaaS V2 on Juno with DVR

2015-01-23 Thread Doug Wiegley
Get ready to vomit.

The lbaasv2 code you’re pulling is a non-agent driver.  Meaning, it runs
haproxy on the *neutron controller* node, and only the controller node.
It’s meant to be a POC for single node systems, not something you can
deploy.

In the upcoming mid-cycle, the driver will be agent-ified, like v1. I
believe Phil from Rax will be leading that effort.

Thanks,
Doug



On 1/23/15, 11:01 AM, "Al Miller"  wrote:

>I have been trying to set up LBaaS v2 in a juno-based environment.
>
>I have successfully done this in devstack by setting it up based on
>stable/juno, then grabbing https://review.openstack.org/#/c/123491/ and
>the client from https://review.openstack.org/#/c/111475/, and then
>editing neutron.conf to include the
>neutron.services.loadbalancer.plugin.LoadBalancerPluginv2 service_plugin
>and 
>service_provider=LOADBALANCERV2:Haproxy:neutron.services.loadbalancer.driv
>ers.haproxy.synchronous_namespace_driver.HaproxyNSDriver:default.  I have
>also enabled DVR.
>
>With this setup in devstack, I can use the LBaaS V2 CLI commands to set
>up a  working V2 loadbalancer.
>
>The problem comes in when I try to do this in an openstack installation.
>I have set up a three node installation based on Ubuntu 14.04 following
>the procedure in 
>http://docs.openstack.org/juno/install-guide/install/apt/openstack-install
>-guide-apt-juno.pdf.  I have a controller node for the API services, a
>network node, and a compute node.   I can boot instances and create V1
>loadbalancers.
>
>When I bring in the LBaaS V2 code into this environment, it is more
>complex.  I need to add it to the neutron API server on the controller,
>but also the compute node (the goal here is to test it with DVR).   So on
>the compute node I install the neutron-lbaas-agent package, bring in the
>123491 patch, and make the neutron.conf edits.  In this configuration,
>the lbaas agent fails with an RPC timeout:
>
>2015-01-22 16:10:52.712 14795 ERROR
>neutron.services.loadbalancer.agent.agent_manager [-] Unable to retrieve
>ready devices
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager Traceback (most recent
>call last):
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agen
>t_manager.py", line 148, in sync_state
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager ready_instances =
>set(self.plugin_rpc.get_ready_devices())
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agen
>t_api.py", line 38, in get_ready_devices
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager
>self.make_msg('get_ready_devices', host=self.host)
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 36, in
>wrapper
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager return
>method(*args, **kwargs)
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 175, in
>call
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager context, msg,
>rpc_method='call', **kwargs)
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 201, in
>__call_rpc_method
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager return
>func(context, msg['method'], **msg['args'])
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line
>389, in call
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager return
>self.prepare().call(ctxt, method, **kwargs)
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line
>152, in call
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager retry=self.retry)
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90,
>in _send
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager timeout=timeout,
>retry=retry)
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.agent_manager   File
>"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py",
>line 408, in send
>2015-01-22 16:10:52.712 14795 TRACE
>neutron.services.loadbalancer.agent.

Re: [openstack-dev] multi-queue virtio-net interface

2015-01-23 Thread Vladik Romanovsky
Unfortunately, I didn't get a feature freeze exception for this blueprint.
I will resubmit the spec in the next cycle.

I think the best way for you to contribute is to review the spec,
when it's re-posted and +1 it, if you agree with the design.

Thanks,
Vladik 

- Original Message -
> From: "Steve Gordon" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Cc: mfiuc...@akamai.com
> Sent: Wednesday, 21 January, 2015 4:43:37 PM
> Subject: Re: [openstack-dev] multi-queue virtio-net interface
> 
> - Original Message -
> > From: "Rajagopalan Sivaramakrishnan" 
> > To: openstack-dev@lists.openstack.org
> > 
> > Hello,
> > We are hitting a performance bottleneck in the Contrail network
> > virtualization solution due to the virtio interface having a single
> > queue in VMs spawned using Openstack. There seems to be a blueprint to
> > address this by enabling multi-queue virtio-net at
> > 
> > https://blueprints.launchpad.net/nova/+spec/libvirt-virtio-net-multiqueue
> > 
> > It is not clear what the current status of this project is. We would be
> > happy
> > to contribute towards this effort if required. Could somebody please let us
> > know what the next steps should be to get this into an upcoming release?
> > 
> > Thanks,
> > 
> > Raja
> 
> The specification is up for review here:
> 
> https://review.openstack.org/#/c/128825/
> 
> There is an associated Feature Freeze Exception (FFE) email for this proposal
> here which would need to be approved for this to be included in Kilo:
> 
> 
> http://lists.openstack.org/pipermail/openstack-dev/2015-January/054263.html
> 
> Thanks,
> 
> Steve
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
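For background on what the blueprint would eventually have to emit: multi-queue virtio-net is expressed in the libvirt domain XML as a queues attribute on the interface's driver element. A sketch (illustrative only, not Nova's actual code; element names follow the libvirt domain XML format) of generating such an element:

```python
import xml.etree.ElementTree as ET

def interface_xml(queues):
    # Build the <interface> element a multi-queue virtio-net guest would
    # carry in its libvirt domain XML; the queues attribute on <driver>
    # is the knob the Nova blueprint would ultimately control.
    iface = ET.Element("interface", type="bridge")
    ET.SubElement(iface, "model", type="virtio")
    ET.SubElement(iface, "driver", name="vhost", queues=str(queues))
    return ET.tostring(iface).decode()

xml = interface_xml(4)
print(xml)
```

In practice libvirt caps the usable queue count at the guest's vCPU count, so a user-visible knob such as a flavor extra spec or image property would be the natural way to request it.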


Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Doug Hellmann


On Fri, Jan 23, 2015, at 12:49 PM, Mike Bayer wrote:
> 
> 
> Mike Bayer  wrote:
> 
> > 
> > 
> > Ihar Hrachyshka  wrote:
> > 
> >> On 01/23/2015 05:38 PM, Mike Bayer wrote:
> >>> Doug Hellmann  wrote:
> >>> 
>  We put the new base class for RequestContext in its own library because
>  both the logging and messaging code wanted to influence its API. Would
>  it make sense to do this database setup there, too?
> >>> whoa, where’s that? is this an oslo-wide RequestContext class ? that would
> >>> solve everything b.c. right now every project seems to implement
> >>> RequestContext themselves.
> 
> 
> so Doug -
> 
> How does this “influence of API” occur, would oslo.db import
> oslo_context.context and patch onto RequestContext at that point? Or the
> other way around? Or… ?

No, it's a social thing. I didn't want dependencies between
oslo.messaging and oslo.log, but the API of the context needs to support
use cases in both places.

Your case might be different, in that we might need to actually have
oslo.context depend on oslo.db in order to call some setup code. We'll
have to think about whether that makes sense and what other dependencies
it might introduce between the existing users of oslo.context.

Doug

> 
> 
> I’m almost joyful that this is here.   Assuming we can get everyone to
> use it, should be straightforward for that right?
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] SQLite support - drop or not?

2015-01-23 Thread Andrew Pashkin

Hello!

The current situation with SQLite support:
- Migration tests do not run on SQLite.
- At the same time, the migrations themselves support SQLite (with bugs).

Today I came across this bug:
Error during execution of database downgrade 


We can resolve this bug by hardening SQLite support, in which case:
- We need to fix the migrations and make them support SQLite without bugs, 
and then continuously make an effort to maintain this support 
(manually writing migrations and test cases for them).

- We also need to introduce migration test runs on SQLite.

Alternatively, we can drop SQLite support, in which case:
- We factor all SQLite-related code out of the migrations once 
and set this bug as "Won't fix".


Let's discuss that.
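For context on why maintaining SQLite migrations costs extra effort: SQLite's ALTER TABLE cannot drop or alter columns, so such a migration has to recreate the table and copy the data across. A minimal stdlib sketch of that workaround (table and column names invented for illustration):

```python
import sqlite3

def drop_column(conn, table, column):
    # SQLite's ALTER TABLE cannot drop a column, so a migration must
    # recreate the table: copy the surviving columns into a new table,
    # drop the old one, rename the copy back. Real migration tools
    # automate a more careful variant of this dance.
    cols = [row[1] for row in conn.execute("PRAGMA table_info(%s)" % table)]
    keep = ", ".join(c for c in cols if c != column)
    conn.execute("CREATE TABLE %s_new AS SELECT %s FROM %s" % (table, keep, table))
    conn.execute("DROP TABLE %s" % table)
    conn.execute("ALTER TABLE %s_new RENAME TO %s" % (table, table))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE envs (id INTEGER, name TEXT, legacy_flag INTEGER)")
conn.execute("INSERT INTO envs VALUES (1, 'demo', 0)")
drop_column(conn, "envs", "legacy_flag")
remaining = [row[1] for row in conn.execute("PRAGMA table_info(envs)")]
print(remaining)  # ['id', 'name']
```

Every downgrade or column change on SQLite needs hand-written logic of this kind, which is the maintenance cost being weighed above.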

--
With kind regards, Andrew Pashkin.
cell phone - +7 (985) 898 57 59
Skype - waves_in_fluids
e-mail - apash...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] Trying to set up LBaaS V2 on Juno with DVR

2015-01-23 Thread Al Miller
I have been trying to set up LBaaS v2 in a juno-based environment.

I have successfully done this in devstack by setting it up based on 
stable/juno, then grabbing https://review.openstack.org/#/c/123491/ and the 
client from https://review.openstack.org/#/c/111475/, and then editing 
neutron.conf to include the 
neutron.services.loadbalancer.plugin.LoadBalancerPluginv2 service_plugin and 
service_provider=LOADBALANCERV2:Haproxy:neutron.services.loadbalancer.drivers.haproxy.synchronous_namespace_driver.HaproxyNSDriver:default.
  I have also enabled DVR.

With this setup in devstack, I can use the LBaaS V2 CLI commands to set up a  
working V2 loadbalancer.

The problem comes in when I try to do this in an OpenStack installation.  I 
have set up a three node installation based on Ubuntu 14.04 following the 
procedure in 
http://docs.openstack.org/juno/install-guide/install/apt/openstack-install-guide-apt-juno.pdf.
  I have a controller node for the API services, a network node, and a compute 
node.   I can boot instances and create V1 loadbalancers.

Bringing the LBaaS V2 code into this environment is more complex.  I 
need to add it to the neutron API server on the controller, but also to the 
compute node (the goal here is to test it with DVR).   So on the compute node I 
install the neutron-lbaas-agent package, bring in the 123491 patch, and make 
the neutron.conf edits.  In this configuration, the lbaas agent fails with an 
RPC timeout:

2015-01-22 16:10:52.712 14795 ERROR 
neutron.services.loadbalancer.agent.agent_manager [-] Unable to retrieve ready 
devices
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager Traceback (most recent call 
last):
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agent_manager.py",
 line 148, in sync_state
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager ready_instances = 
set(self.plugin_rpc.get_ready_devices())
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agent_api.py",
 line 38, in get_ready_devices
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager 
self.make_msg('get_ready_devices', host=self.host)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/common/log.py", line 36, in wrapper
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager return method(*args, 
**kwargs)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 175, in call
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager context, msg, 
rpc_method='call', **kwargs)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/neutron/common/rpc.py", line 201, in 
__call_rpc_method
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager return func(context, 
msg['method'], **msg['args'])
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 389, in 
call
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager return 
self.prepare().call(ctxt, method, **kwargs)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 152, in 
call
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager retry=self.retry)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/transport.py", line 90, in 
_send
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager timeout=timeout, 
retry=retry)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
408, in send
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager retry=retry)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 
397, in _send
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager result = 
self._waiter.wait(msg_id, timeout)
2015-01-22 16:10:52.712 14795 TRACE 
neutron.services.loadbalancer.agent.agent_manager   File 
"/usr/lib/p

Re: [openstack-dev] [Keystone] Nominating Brad Topol for Keystone-Spec core

2015-01-23 Thread Morgan Fainberg
Based upon the feedback to this thread, I want to congratulate Brad Topol as 
the newest member of the Keystone-Specs-Core team!
—Morgan

> On Jan 18, 2015, at 11:11 AM, Morgan Fainberg  
> wrote:
> 
> Hello all,
> 
> I would like to nominate Brad Topol for Keystone Spec core (core reviewer for 
> Keystone specifications and API-Specification only: 
> https://git.openstack.org/cgit/openstack/keystone-specs). Brad has been a 
> consistent voice advocating for well defined specifications, use of existing 
> standards/technology, and ensuring the UX of all projects under the Keystone 
> umbrella continue to improve. Brad brings to the table a significant amount 
> of insight to the needs of the many types and sizes of OpenStack deployments, 
> especially what real-world customers are demanding when integrating with the 
> services. Brad is a core contributor on pycadf (also under the Keystone 
> umbrella) and has consistently contributed code and reviews to the Keystone 
> projects since the Grizzly release.
> 
> Please vote with +1/-1 on adding Brad as core to the Keystone Spec repo. 
> Voting will remain open until Friday Jan 23.
> 
> Cheers,
> Morgan Fainberg
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Mike Bayer


Mike Bayer  wrote:

> 
> 
> Ihar Hrachyshka  wrote:
> 
>> On 01/23/2015 05:38 PM, Mike Bayer wrote:
>>> Doug Hellmann  wrote:
>>> 
 We put the new base class for RequestContext in its own library because
 both the logging and messaging code wanted to influence its API. Would
 it make sense to do this database setup there, too?
>>> whoa, where’s that? is this an oslo-wide RequestContext class ? that would
>>> solve everything b.c. right now every project seems to implement
>>> RequestContext themselves.


so Doug -

How does this “influence of API” occur, would oslo.db import
oslo_context.context and patch onto RequestContext at that point? Or the
other way around? Or… ?


I’m almost joyful that this is here.   Assuming we can get everyone to use it, 
this should be straightforward, right?



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Core rights in Fuel repositories

2015-01-23 Thread Aleksandra Fedorova
How should we deal with release management?

Currently I don't do any merges for stackforge/fuel-* projects but I
need access to all of them to create branches at Hard Code Freeze.

Should we create a separate fuel-release group for that? Should it be a
unified group for all repositories, or does every repository need its own?



On Fri, Jan 23, 2015 at 7:04 PM, Roman Prykhodchenko  wrote:
> Hi folks!
>
> After moving python-fuelclient to its own repo some of you started asking a 
> good question which is How do we manage core rights in different Fuel 
> repositories. The problem is that there is a single fuel-core group which is 
> used for all fuel-* repos, except for python-fuelclient.
>
> The approach mentioned above does not work very well at the moment and so I’d 
> like to propose a different one:
>
> - Every new or separated project should introduce its own -core group.
>  -  That group will only contain active core reviewers for only that project.
>  -  Removing or adding people will be done according to Approved OpenStack 
> rules.
> - fuel-core group will be reduced to the smallest possible number of people 
> and only include the guys
>  who must have decision making powers according to Fuel project’s rules.
>  - fuel-core will be included to any other fuel-*core group
>  - elections to the fuel-core group will take place according to Fuel’s 
> policies
>  - fuel-core group members are required for supervising reasons and taking an 
> action in emergency cases
>
>
> - romcheg
>
>



-- 
Aleksandra Fedorova
Fuel Devops Engineer
bookwar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] novaclient support for V2.1 micro versions

2015-01-23 Thread Andrey Kurilin
Hi!
As far as I know, there is no support for microversions in novaclient. Only
os-compute-api-version is supported, but it does nothing (I started a thread
to clarify this question:
http://lists.openstack.org/pipermail/openstack-dev/2015-January/055027.html)
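Until the client grows native support, the v2.1 API can still be exercised with raw HTTP calls: the microversion is negotiated per request via the X-OpenStack-Nova-API-Version header. A sketch of building such a request's headers (token value hypothetical):

```python
def compute_headers(token, microversion=None):
    # Headers for a raw call against the Nova v2.1 endpoint; the
    # microversion header is only sent when one is requested, otherwise
    # the server falls back to its minimum version.
    headers = {
        "X-Auth-Token": token,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    if microversion:
        headers["X-OpenStack-Nova-API-Version"] = microversion
    return headers

hdrs = compute_headers("hypothetical-token", microversion="2.3")
```

Feeding these headers to curl or python-requests against the devstack endpoint is a workable stopgap for testing a new microversioned API change.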

On Fri, Jan 23, 2015 at 6:53 PM, Day, Phil  wrote:

>  Hi Folks,
>
>
>
> Is there any support yet in novaclient for requesting a specific
> microversion ?   (looking at the final leg of extending clean-shutdown to
> the API, and wondering how to test this in devstack via the novaclient)
>
>
>
> Phil
>
>
>
>
>
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Mike Bayer


Ihar Hrachyshka  wrote:

> On 01/23/2015 05:38 PM, Mike Bayer wrote:
>> Doug Hellmann  wrote:
>> 
>>> We put the new base class for RequestContext in its own library because
>>> both the logging and messaging code wanted to influence its API. Would
>>> it make sense to do this database setup there, too?
>> whoa, where’s that? is this an oslo-wide RequestContext class ? that would
>> solve everything b.c. right now every project seems to implement
>> RequestContext themselves.
> 
> https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py#L35
> 
> Though not every project migrated to it yet.

WOW !!

OK!

Dear Openstack:

Can you all start using oslo_context/context.py for your RequestContext
base, as a condition of migrating off of legacy EngineFacade?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Core rights in Fuel repositories

2015-01-23 Thread Evgeniy L
+1

On Fri, Jan 23, 2015 at 7:45 PM, Igor Kalnitsky 
wrote:

> +1, no objections from my side.
>
> On Fri, Jan 23, 2015 at 6:04 PM, Roman Prykhodchenko 
> wrote:
> > Hi folks!
> >
> > After moving python-fuelclient to its own repo some of you started
> asking a good question which is How do we manage core rights in different
> Fuel repositories. The problem is that there is a single fuel-core group
> which is used for all fuel-* repos, except for python-fuelclient.
> >
> > The approach mentioned above does not work very well at the moment and
> so I’d like to propose a different one:
> >
> > - Every new or separated project should introduce its own -core group.
> >  -  That group will only contain active core reviewers for only that
> project.
> >  -  Removing or adding people will be done according to Approved
> OpenStack rules.
> > - fuel-core group will be reduced to the smallest possible number of
> people and only include the guys
> >  who must have decision making powers according to Fuel project’s rules.
> >  - fuel-core will be included to any other fuel-*core group
> >  - elections to the fuel-core group will take place according to Fuel’s
> policies
> >  - fuel-core group members are required for supervising reasons and
> taking an action in emergency cases
> >
> >
> > - romcheg
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][vpnaas] Can we require kernel 3.8+ for use with StrongSwan IPSec VPN for Kilo?

2015-01-23 Thread Paul Michali
Maybe I'm misunderstanding the issue?

I thought the reason there is no version check currently is that a check
is being made to see whether the process is in the same net namespace as
root (as a proxy for checking that the mount namespace is being used).

The comment indicates that one could check the mount namespace, but that
would require a kernel 3.8+.

My question was whether we should check the mount namespace and therefore
require a kernel of 3.8+?

Maybe it is just easier to stick with (C).

Thoughts? Should this review move forward as-is?

Regards,

PCM


PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali
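If option (B) were chosen, the sanity check would reduce to a kernel version comparison plus a /proc probe. A sketch of the version-compare half (illustrative only, not the actual neutron sanity-check code):

```python
import re

def kernel_at_least(release, minimum=(3, 8)):
    # Parse a `uname -r` style string such as "3.13.0-44-generic" and
    # compare (major, minor) against the minimum; option (B) would need
    # a check like this before relying on /proc/<pid>/ns/mnt.
    m = re.match(r"(\d+)\.(\d+)", release)
    if not m:
        return False
    return (int(m.group(1)), int(m.group(2))) >= minimum

print(kernel_at_least("3.13.0-44-generic"))  # True
print(kernel_at_least("3.2.0-4-amd64"))      # False
```

On a live system one would pass in platform.release() and, only if the kernel is new enough, rely on /proc/1/ns/mnt existing; otherwise fall back to the net-namespace proxy check.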


On Fri, Jan 23, 2015 at 10:37 AM, Kyle Mestery  wrote:

> According to the patch author, the check isn't necessary at all.
>
> On Fri, Jan 23, 2015 at 7:12 AM, Paul Michali  wrote:
>
>> To summarize, should we...
>>
>> A) Assume all kernels will be 3.8+ and use mount namespace (risky?)
>> B) Do a check to ensure kernel is 3.8+ and fall back to net namespace and
>> mount --bind if not (more work).
>> C) Just use net namespace as indication that namespace with mount --bind
>> done (simple)
>>
>> Maybe it is best to just do the simple thing for now. I wanted to double
>> check though, to see if the alternatives could/should be considered.
>>
>> Regards,
>>
>> PCM
>>
>>
>> PCM (Paul Michali)
>>
>> IRC pc_m (irc.freenode.com)
>> Twitter... @pmichali
>>
>>
>> On Fri, Jan 23, 2015 at 1:35 AM, Joshua Zhang > > wrote:
>>
>>> pls note that actually this patch doesn't have a minimum kernel
>>> requirement because it only uses 'mount --bind' and 'net namespace', not
>>> 'mount namespace'. ('mount --bind' is since Linux 2.4, 'net namespace'
>>> is since Linux 3.0, 'mount namespace' is since Linux 3.8).
>>>
>>> so I think a sanity check for 3.8 is not needed, any thoughts?
>>>
>>> thanks.
>>>
>>>
>>> On Fri, Jan 23, 2015 at 2:12 PM, Kevin Benton  wrote:
>>>
 >If we can consolidate that and use a single tool from the master
 neutron repository, that would be my vote.

 +1 with a hook mechanism so the sanity checks stay in the *aas repos
 and they are only run if installed.

>
 On Thu, Jan 22, 2015 at 7:30 AM, Kyle Mestery 
 wrote:

> On Wed, Jan 21, 2015 at 10:27 AM, Ihar Hrachyshka  > wrote:
>
>>  On 01/20/2015 05:40 PM, Paul Michali wrote:
>>
>> Review https://review.openstack.org/#/c/146508/ is adding support
>> for StrongSwan VPN, which needs mount bind to be able to specify 
>> different
>> paths for config files.
>>
>>  The code, which used some older patch, does a test for
>> /proc/1/ns/net, instead of /proc/1/ns/mnt, because it stated that the
>> latter is only supported in kernel 3.8+. That was a while ago, and I'm
>> wondering if the condition is still true.  If we know that for Kilo and 
>> on,
>> we'll be dealing with 3.8+ kernels, we could use the more accurate test.
>>
>>  Can we require 3.8+ kernel for this?
>>
>>
>> I think we can but it's better to check with distributions. Red Hat
>> wise, we ship a kernel that is newer than 3.8.
>>
>>  If so, how and where do we ensure that is true?
>>
>>
>> Ideally, you would implement a sanity check for the feature you need
>> from the kernel. Though it opens a question of whether we want to ship
>> multiple sanity check tools for each of repos (neutron + 3 *aas repos).
>>
>> If we can consolidate that and use a single tool from the master
> neutron repository, that would be my vote.
>
>>
>>  Also, if you can kindly review the code here:
>> https://review.openstack.org/#/c/146508/5/neutron_vpnaas/services/vpn/common/netns_wrapper.py,
>> I'd really appreciate it, as I'm not versed in the Linux proc files at 
>> all.
>>
>>  Thanks!
>>
>>
>>   PCM (Paul Michali)
>>
>>  IRC pc_m (irc.freenode.com)
>> Twitter... @pmichali
>>
>>
>>
>>
>
>

Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Ihar Hrachyshka

On 01/23/2015 05:38 PM, Mike Bayer wrote:
> Doug Hellmann  wrote:
> 
>> We put the new base class for RequestContext in its own library because
>> both the logging and messaging code wanted to influence its API. Would
>> it make sense to do this database setup there, too?
> 
> whoa, where’s that? is this an oslo-wide RequestContext class ? that would
> solve everything b.c. right now every project seems to implement
> RequestContext themselves.


https://github.com/openstack/oslo.context/blob/master/oslo_context/context.py#L35

Though not every project migrated to it yet.
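For readers who haven't looked at the linked module, a greatly simplified sketch of the pattern it provides: a shared base class with the common request-scoped fields, which each project then subclasses (field names abbreviated and invented in part; not the real oslo.context code):

```python
class RequestContext(object):
    # Simplified stand-in for the shared base: common request-scoped
    # fields plus a to_dict() that logging and messaging can rely on.
    def __init__(self, user=None, tenant=None, is_admin=False,
                 request_id=None):
        self.user = user
        self.tenant = tenant
        self.is_admin = is_admin
        self.request_id = request_id or "req-generated"

    def to_dict(self):
        return {"user": self.user, "tenant": self.tenant,
                "is_admin": self.is_admin, "request_id": self.request_id}

class NovaContext(RequestContext):
    # A project subclass layers its own fields on top of the shared base,
    # which is what lets oslo libraries treat all contexts uniformly.
    def __init__(self, read_deleted="no", **kwargs):
        super(NovaContext, self).__init__(**kwargs)
        self.read_deleted = read_deleted

ctx = NovaContext(user="alice", tenant="demo")
```

The uniform base is what would let oslo.db hang its engine/session setup off the context, per the proposal upthread.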

/Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] novaclient support for V2.1 micro versions

2015-01-23 Thread Day, Phil
Hi Folks,

Is there any support yet in novaclient for requesting a specific microversion ? 
  (looking at the final leg of extending clean-shutdown to the API, and 
wondering how to test this in devstack via the novaclient)

Phil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Core rights in Fuel repositories

2015-01-23 Thread Igor Kalnitsky
+1, no objections from my side.

On Fri, Jan 23, 2015 at 6:04 PM, Roman Prykhodchenko  wrote:
> Hi folks!
>
> After moving python-fuelclient to its own repo some of you started asking a 
> good question which is How do we manage core rights in different Fuel 
> repositories. The problem is that there is a single fuel-core group which is 
> used for all fuel-* repos, except for python-fuelclient.
>
> The approach mentioned above does not work very well at the moment and so I’d 
> like to propose a different one:
>
> - Every new or separated project should introduce its own -core group.
>  -  That group will only contain active core reviewers for only that project.
>  -  Removing or adding people will be done according to Approved OpenStack 
> rules.
> - fuel-core group will be reduced to the smallest possible number of people 
> and only include the guys
>  who must have decision making powers according to Fuel project’s rules.
>  - fuel-core will be included to any other fuel-*core group
>  - elections to the fuel-core group will take place according to Fuel’s 
> policies
>  - fuel-core group members are required for supervising reasons and taking an 
> action in emergency cases
>
>
> - romcheg
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ec2-api] EC2 API standalone service

2015-01-23 Thread Alexandre Levine
This thread is created in regard to the newly introduced EC2 API standalone 
service effort, which is covered by the following:


Blueprint:
https://blueprints.launchpad.net/nova/+spec/ec2-api

Nova spec:
https://review.openstack.org/#/c/147882

Project and code:
https://github.com/stackforge/ec2-api

Kilo talks:
https://etherpad.openstack.org/p/kilo-nova-summit-unconference

Joe Gordon suggested to create this thread and explain the background 
for this effort, current state, and what's still needed, and to have a 
thread for more questions and ideas.


The idea arose when we needed to improve nova's EC2 API compatibility 
by fixing some bugs, removing some limitations and 
incompatibilities, and especially adding the whole VPC API part, which 
was completely absent. We needed all of that in some proprietary 
clouds of ours. This was done after our initial attempt with the similar 
GCE API functionality, which showed us that it's much easier to 
position such functionality as a separate service and separate project 
rather than trying to squeeze it into nova beside the existing EC2 API. 
When we started, we already had the finished GCE API functionality on 
stackforge, so we created a similar project for the EC2 API.
Initially we created only the missing VPC API part; the rest of the EC2 
functionality requests were simply proxied to nova's existing EC2 code.
After the Kilo design summit it was decided that we would extract the rest 
and have a fully functional standalone EC2 API project which can 
eventually replace nova's.


The current state is a fully functional service (at least no worse than 
the existing one) covering the EC2 API almost completely, except for some 
limitations listed in the README and ones not found yet (we're 
discovering more gaps as we add more tests). Still, except 
for some nova EC2 extensions which are not supported by the AWS CLI, all 
legitimate functionality currently present in nova is implemented in this 
service too.
All of the existing nova EC2 unit tests were ported or reflected in 
other tests.

Tempest API and scenario tests exist, but it's unclear where to put them yet.

I can't say what else is needed because I don't see any particular 
showstoppers at the moment.
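One concrete compatibility obligation such a service takes on is verifying AWS-style request signatures. A sketch of EC2 Query API signature version 2 using only the stdlib (host and secret key are hypothetical; real clients such as boto compute this for you):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_v2(secret_key, host, path, params):
    # Canonicalize: sort parameters, percent-encode with the AWS-safe
    # character set, then HMAC-SHA256 the GET/host/path/query
    # string-to-sign and base64-encode the digest.
    canonical = "&".join(
        "%s=%s" % (quote(k, safe="-_.~"), quote(v, safe="-_.~"))
        for k, v in sorted(params.items()))
    to_sign = "\n".join(["GET", host, path, canonical])
    digest = hmac.new(secret_key.encode(), to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

sig = sign_v2("hypothetical-secret", "ec2.example.com", "/",
              {"Action": "DescribeInstances", "Version": "2014-10-01"})
```

An EC2-compatible endpoint must recompute exactly this value from the incoming request and compare it to the client's Signature parameter, which is why AWS CLI interoperability is a meaningful test of the service.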


The sister GCE-API service is also proposed as a nova-spec and blueprint 
for addition to OpenStack.


Best regards,
  Alex Levine

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Mike Bayer


Doug Hellmann  wrote:

> We put the new base class for RequestContext in its own library because
> both the logging and messaging code wanted to influence it's API. Would
> it make sense to do this database setup there, too?

whoa, where’s that? is this an oslo-wide RequestContext class ? that would
solve everything b.c. right now every project seems to implement
RequestContext themselves.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaas] Object statuses

2015-01-23 Thread Jorge Miramontes
The example you gave implicitly assigns the status tree to a load balancer.
Is sharing only allowed for sub-resources, or can sub-resources be shared
across multiple load balancers? If the latter, then I suspect that
statuses may be exposed in many different places, correct?

Cheers,
--Jorge
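For concreteness, a sketch of the kind of nested body option 2-B (quoted below) might return from GET /lbaas/loadbalancers/{uuid}/statuses. Each node carries both statuses, so a shared sub-resource can report per-load-balancer state (illustrative shape and field defaults, not a settled API):

```python
def status_tree(lb):
    # Recursively build {id, provisioning_status, operating_status,
    # children} nodes; the defaults stand in for values a real driver
    # would look up per load balancer.
    def node(obj):
        return {
            "id": obj["id"],
            "provisioning_status": obj.get("provisioning_status", "ACTIVE"),
            "operating_status": obj.get("operating_status", "ONLINE"),
            "children": [node(c) for c in obj.get("children", [])],
        }
    return {"statuses": {"loadbalancer": node(lb)}}

tree = status_tree({"id": "lb-1", "children": [
    {"id": "listener-1", "provisioning_status": "PENDING_UPDATE",
     "children": [{"id": "pool-1"}]}]})
```

Because the tree is rooted at one load balancer, a pool shared by two load balancers could legitimately show different statuses in each tree, which is the crux of the sharing question.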




On 1/23/15, 1:21 AM, "Brandon Logan"  wrote:

>
>So I am resurrecting this topic now because we put this discussion on a
>brief hold,
>but are now discussing it again and need to decide asap. We've all agreed
>we need a
>provisioning_status and operating_status fields. We now need to decide
>where to show
>these statuses to the user.
>
>Option 1:
>Show the statuses directly on the entity.
>
>Option 2-A:
>Show a status tree only on the load balancer object, but not on any
>entities.
>
>Option 2-B:
>Expose a resource for a GET request that will return that status tree.
>
>Example:
>GET /lbaas/loadbalancers/{LB_UUID}/statuses
>
>
>Option 1 is probably what most people are used to but it doesn't allow
>for sharing of
>objects, and when/if sharing is enabled, it will cause a break in
>contract and a new
>version of the API.  So it would essentially disallow object sharing.
>This requires
>a lot less work to implement.
>
>Option 2-* can be done with or without sharing, and when/if object
>sharing is enabled
>it won't break contract.  This will require more work to implement.
>
>My personal opinion is in favor of Option 2-B, but wouldn't argue with
>2-A either.
>
>Thanks,
>Brandon
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Core rights in Fuel repositories

2015-01-23 Thread Roman Prykhodchenko
Hi folks!

After moving python-fuelclient to its own repo, some of you started asking a 
good question: how do we manage core rights in different Fuel 
repositories? The problem is that there is a single fuel-core group which is 
used for all fuel-* repos, except for python-fuelclient.

The approach mentioned above does not work very well at the moment and so I’d 
like to propose a different one:

- Every new or separated project should introduce its own -core group.
 -  That group will only contain active core reviewers for only that project.
 -  Removing or adding people will be done according to Approved OpenStack 
rules.
- fuel-core group will be reduced to the smallest possible number of people and 
only include the guys
 who must have decision making powers according to Fuel project’s rules.
 - fuel-core will be included to any other fuel-*core group
 - elections to the fuel-core group will take place according to Fuel’s policies
 - fuel-core group members are required for supervising reasons and taking an 
action in emergency cases


- romcheg
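The "fuel-core will be included to any other fuel-*core group" point maps onto Gerrit's nested group membership. A toy model of the resulting effective membership (group and member names invented for illustration):

```python
def effective_cores(groups, name):
    # Gerrit-style nested groups: a group's effective members are its own
    # members plus, recursively, those of every included group. Including
    # fuel-core in each per-repo group gives its members core rights
    # everywhere, while per-repo reviewers stay scoped to one repo.
    members = set(groups[name]["members"])
    for inc in groups[name].get("includes", []):
        members |= effective_cores(groups, inc)
    return members

groups = {
    "fuel-core": {"members": {"pm-1", "pm-2"}},
    "fuel-web-core": {"members": {"alice"}, "includes": ["fuel-core"]},
    "python-fuelclient-core": {"members": {"bob"}, "includes": ["fuel-core"]},
}
print(sorted(effective_cores(groups, "fuel-web-core")))  # ['alice', 'pm-1', 'pm-2']
```

Under this scheme, removing someone from fuel-core automatically revokes their blanket rights without touching any per-repo group.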




Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-23 Thread Evgeniya Shumakher
Folks -

I support the idea of keeping a plugin's code and other artifacts (e.g. design
specs, installation and user guides, test scripts, test plans, test reports)
in one repo, with dedicated folders for each.
My argument here is pretty simple: I consider a Fuel plugin a separate
and independent project, which should be stored in a dedicated repo and
maintained by the plugin development team.

I don't see why we couldn't use Fuel Launchpad [1] to create blueprints if
we think it's necessary, but a BP itself shouldn't be a 'must do' for those
who are working on Fuel plugins.

And a couple more comments:

   1. Have a separate stackforge repo per Fuel plugin in format
   "fuel-plugin-", with separate core-reviewers group which should have
   plugin contributor initially

On stackforge.
Right now there are four Fuel plugins in development (GlusterFS, NetApp, LBaaS,
VPNaaS) and four more coming (NFS, FWaaS, Contrail, EMC VNX). Keeping in
mind that the number of Fuel plugins will grow, does it make sense to keep
them in stackforge?
Mike, Alexander, we discussed an option to keep everything in fuel-infra
[3].
I would like to hear what other folks think about that.

On the repo name.
I would suggest also including the name of the OpenStack component the plugin
works with, "fuel-plugin--", e.g. fuel-plugin-cinder-emc-vnx.

   2. Have docs folder in the plugin, and ability to build docs out of it
  - do we want Sphinx or simple Github docs format is Ok? So people can
  just go to github/stackforge to see docs

I agree with Evgeniy. We are talking about best practices of Fuel plugin
development. I would prefer to keep them as simple and as easy as possible.

   3. Have specification in the plugin repo
  - also, do we need Sphinx here?


   4. Have plugins tests in the repo

So, here is how the plugin repo structure could look:

   - fuel-plugin--
      - specs
      - plugin
      - tests
      - docs
      - utils

Alexander -

I don't think that putting these specs [4, 5] into fuel-specs [6] is a good
idea.
Let's come to an agreement, so plugin developers will know where they
should commit code, specs and other docs.

Looking forward to your comments.
Thanks.


[1] https://launchpad.net/fuel
[2] https://github.com/stackforge
[3] https://review.fuel-infra.org/
[4] https://review.openstack.org/#/c/129586/
[5] https://review.openstack.org/#/c/148475/4
[6] https://github.com/stackforge/fuel-specs

On Fri, Jan 23, 2015 at 4:14 PM, Alexander Ignatov 
wrote:

> Mike,
>
> I also wanted to add that there is a PR already on adding plugins
> repos to stackforge: https://review.openstack.org/#/c/147169/
>
> All this looks good, but it’s not clear when this patch will be merged and
> the repos created.
> So the question is: what should we do with the current specs made in
> fuel-specs [1,2] which are targeted for plugins?
> And how will the development process look for plugins added to the 6.1
> roadmap? Especially for plugins that come not from external vendors and
> partners. Will we create separate projects on the Launchpad and duplicate our
> For now I’m not sure if we need to wait for new infrastructure to be created
> in stackforge/launchpad for each plugin, or follow the common procedure and
> land current plugins in existing repos during the 6.1 milestone.
>
> [1] https://review.openstack.org/#/c/129586/
> [2] https://review.openstack.org/#/c/148475/4
>
> Regards,
> Alexander Ignatov
>
>
>
> On 23 Jan 2015, at 12:43, Nikolay Markov  wrote:
>
> I also wanted to add that there is a PR already on adding plugins
> repos to stackforge: https://review.openstack.org/#/c/147169/
>
> There is a battle in the comments right now, because some people do not
> agree that so many repos are needed.
>
> On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov
>  wrote:
>
> Hi Fuelers,
> we've implemented pluggable architecture piece in 6.0, and got a number of
> plugins already. Overall development process for plugins is still not fully
> defined.
> We initially thought that having all the plugins in one repo on stackforge
> is Ok, we also put some docs into existing fuel-docs repo, and specs to
> fuel-specs.
>
> We might need a change here. Plugins are not tied to any particular release
> date, and they can also be separated from each other in terms of committers
> and core reviewers. Also, it seems pretty natural to keep all docs and
> design specs associated with a particular plugin.
>
> With all said, following best dev practices, it is suggested to:
>
> Have a separate stackforge repo per Fuel plugin in format
> "fuel-plugin-", with separate core-reviewers group which should have
> plugin contributor initially
> Have docs folder in the plugin, and ability to build docs out of it
>
> do we want Sphinx or simple Github docs format is Ok? So people can just go
> to github/stackforge to see docs
>
> Have specification in the plugin repo
>
> also, do we need Sphinx here?
>
> Have plugins tests in the repo
>
> Ideas / suggestions / comments?
> Thanks,
> --

Re: [openstack-dev] [Cinder][3rd CI] Confused about “*real* storage backend” requirement for 3rd CI.

2015-01-23 Thread Asselin, Ramy
Another ‘sample’ you can use is here: 
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample#L2

Ramy

From: Bharat Kumar [mailto:bharat.kobag...@redhat.com]
Sent: Thursday, January 22, 2015 10:01 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Cinder][3rd CI] Confused about “*real* storage 
backend” requirement for 3rd CI.


On 01/22/2015 05:39 PM, Duncan Thomas wrote:
Please take a look at 
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers to learn how to 
configure devstack to use your driver rather than LVM.

On 22 January 2015 at 13:28, liuxinguo 
mailto:liuxin...@huawei.com>> wrote:

Hi Mike,



I received an email titled “All Cinder Drivers Must Have a Third Party CI By 
March 19th 2015” and I am confused about the “*real* storage backend” 
requirement.



One of the requirements is: Run Tempest [5][6] volume tests against the 
devstack environment that's hooked up to your *real* storage backend.


• My confusion is this: every time the CI is triggered by a new patch, the 
3rd-party CI will build a new devstack environment and create a default 
cinder.conf file, which sets the backend to “lvmdriver-1” automatically, so 
Tempest will run against “lvmdriver-1”. What is the point of a *real* storage 
backend if cinder.conf is set to use “lvmdriver-1” automatically for every 
new patch? And how should I configure cinder.conf to run Tempest against new 
driver patches from different vendors, since different vendors need different 
cinder.conf settings and different storage backends? I mean, should our CI 
run Tempest against our *real* storage backend for every new driver patch in 
Cinder?

Liu,

Yes, by default DevStack configures cinder with "LVM". But we can customize 
DevStack to configure cinder with our own backend ("real storage backend").

Below is the link to the patch that enables automatic configuration of 
GlusterFS for Cinder using devstack:
https://review.openstack.org/#/c/133102/

And below is the link to configure Ceph with Cinder using devstack:
https://review.openstack.org/#/c/65113/

The two above are the old way of implementing "real storage" support. Sean 
Dague has proposed a new way of implementing devstack plugins. Have a look at 
the two links below:
https://review.openstack.org/#/c/142805/
https://review.openstack.org/#/c/142805/7/doc/source/plugins.rst
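As a rough sketch of the kind of customization involved, a devstack local.conf can pin Cinder to a non-default backend roughly like this. The backend name, driver class, and address below are hypothetical placeholders, not a real driver; substitute your vendor's values:

```ini
[[local|localrc]]
# Hypothetical backend; devstack's default would be lvm:lvmdriver-1.
CINDER_ENABLED_BACKENDS=vendorfs:vendorfs-1

[[post-config|$CINDER_CONF]]
# Extra options merged into cinder.conf after devstack generates it.
[vendorfs-1]
volume_driver = cinder.volume.drivers.vendorfs.VendorFSDriver
volume_backend_name = vendorfs-1
san_ip = 192.0.2.10
```

With something along these lines in place, the Tempest volume tests exercise the vendor backend instead of the default LVM one.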

Thanks and regards,
Liu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Duncan Thomas




__

OpenStack Development Mailing List (not for usage questions)

Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--

Warm Regards,

Bharat Kumar Kobagana

Software Engineer

OpenStack Storage – RedHat India

Mobile - +91 9949278005


Re: [openstack-dev] [neutron][vpnaas] Can we require kernel 3.8+ for use with StrongSwan IPSec VPN for Kilo?

2015-01-23 Thread Kyle Mestery
According to the patch author, the check isn't necessary at all.

On Fri, Jan 23, 2015 at 7:12 AM, Paul Michali  wrote:

> To summarize, should we...
>
> A) Assume all kernels will be 3.8+ and use mount namespace (risky?)
> B) Do a check to ensure kernel is 3.8+ and fall back to net namespace and
> mount --bind if not (more work).
> C) Just use net namespace as indication that namespace with mount --bind
> done (simple)
>
> Maybe it is best to just do the simple thing for now. I wanted to double
> check though, to see if the alternatives could/should be considered.
>
> Regards,
>
> PCM
>
>
> PCM (Paul Michali)
>
> IRC pc_m (irc.freenode.com)
> Twitter... @pmichali
>
>
> On Fri, Jan 23, 2015 at 1:35 AM, Joshua Zhang 
> wrote:
>
>> Please note that this patch actually doesn't have a minimum kernel
>> requirement, because it only uses 'mount --bind' and 'net namespace', not
>> 'mount namespace'. ('mount --bind' has existed since Linux 2.4, 'net
>> namespace' since Linux 3.0, and 'mount namespace' since Linux 3.8.)
>>
>> So I think a sanity check for 3.8 is not needed. Any thoughts?
>>
>> thanks.
>>
>>
>> On Fri, Jan 23, 2015 at 2:12 PM, Kevin Benton  wrote:
>>
>>> >If we can consolidate that and use a single tool from the master
>>> neutron repository, that would be my vote.
>>>
>>> +1 with a hook mechanism so the sanity checks stay in the *aas repos and
>>> they are only run if installed.
>>>

>>> On Thu, Jan 22, 2015 at 7:30 AM, Kyle Mestery 
>>> wrote:
>>>
 On Wed, Jan 21, 2015 at 10:27 AM, Ihar Hrachyshka 
 wrote:

>  On 01/20/2015 05:40 PM, Paul Michali wrote:
>
> Review https://review.openstack.org/#/c/146508/ is adding support for
> StrongSwan VPN, which needs mount bind to be able to specify different
> paths for config files.
>
>  The code, which used some older patch, does a test for
> /proc/1/ns/net, instead of /proc/1/ns/mnt, because it stated that the
> latter is only supported in kernel 3.8+. That was a while ago, and I'm
> wondering if the condition is still true.  If we know that for Kilo and 
> on,
> we'll be dealing with 3.8+ kernels, we could use the more accurate test.
>
>  Can we require 3.8+ kernel for this?
>
>
> I think we can but it's better to check with distributions. Red Hat
> wise, we ship a kernel that is newer than 3.8.
>
>  If so, how and where do we ensure that is true?
>
>
> Ideally, you would implement a sanity check for the feature you need
> from the kernel. Though it opens a question of whether we want to ship
> multiple sanity check tools for each of repos (neutron + 3 *aas repos).
>
> If we can consolidate that and use a single tool from the master
 neutron repository, that would be my vote.

>
>  Also, if you can kindly review the code here:
> https://review.openstack.org/#/c/146508/5/neutron_vpnaas/services/vpn/common/netns_wrapper.py,
> I'd really appreciate it, as I'm not versed in the Linux proc files at 
> all.
>
>  Thanks!
>
>
>   PCM (Paul Michali)
>
>  IRC pc_m (irc.freenode.com)
> Twitter... @pmichali
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Kevin Benton
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Best Regards
>> Zhang Hua(张华)
>> Software Engineer | Canonical
>> IRC:  zhhuabj
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> ___

Re: [openstack-dev] Openstack Heat - OS::Heat::MultipartMime cannot be used as user_data for OS::Nova::Server

2015-01-23 Thread Lars Kellogg-Stedman
On Thu, Jan 22, 2015 at 04:09:09PM -0700, Vignesh Kumar wrote:
> I am new to heat orchestration and am trying to create a CoreOS cluster
> with it. I have an OS::Heat::SoftwareConfig resource and an
> OS::Heat::CloudConfig resource, and I have joined them both in an
> OS::Heat::MultipartMime resource which is then used as the user_data for an
> OS::Nova::Server resource. Unfortunately I am not able to see the
> configurations happening in my server resource using cloud-init...

If I take your template and use it to boot a Fedora system instead of
a CoreOS system, it works as intended.  Note that CoreOS does *not*
use the same cloud-init that everyone else uses, and it is entirely
possible that the CoreOS cloud-init does not support multipart MIME
user-data.
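For reference, a minimal sketch of the wiring described above, in HOT syntax. Resource names and the image/flavor values are illustrative; on images whose cloud-init honors multipart MIME (e.g. Fedora), both parts are applied at boot:

```yaml
resources:
  boot_cloud_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /tmp/hello.txt
            content: "hello from cloud-config\n"

  boot_script:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/sh
        echo "hello from user-data script" > /tmp/hello-script.txt

  boot_init:
    type: OS::Heat::MultipartMime
    properties:
      parts:
        - config: {get_resource: boot_cloud_config}
        - config: {get_resource: boot_script}

  server:
    type: OS::Nova::Server
    properties:
      image: Fedora-21        # illustrative
      flavor: m1.small        # illustrative
      user_data_format: RAW
      user_data: {get_resource: boot_init}
```

Note user_data_format: RAW, so the assembled multipart body is passed to the instance unmodified.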

-- 
Lars Kellogg-Stedman  | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack  | http://blog.oddbit.com/





Re: [openstack-dev] [oslo] [nova] [all] potential enginefacade adjustment - can everyone live with this?

2015-01-23 Thread Doug Hellmann


On Thu, Jan 22, 2015, at 10:36 AM, Mike Bayer wrote:
> Hey all -
> 
> Concerning the enginefacade system, approved blueprint:
> 
> https://review.openstack.org/#/c/125181/
> 
> which will replace the use of oslo_db.sqlalchemy.EngineFacade ultimately
> across all projects that use it (which is, all of them that use a
> database).
> 
> We are struggling to find a solution for the issue of application-defined
> contexts that might do things that the facade needs to know about, namely
> 1. that the object might be copied using deepcopy() or 2. that the object
> might be sent into a new set of worker threads, where its attributes are
> accessed concurrently.
> 
> While the above blueprint and the implementations so far have assumed
> that we need to receive this context object and use simple assignment,
> e.g. “context.session = the_session” in order to provide its attributes,
> in order to accommodate 1. and 2. I’ve had to add a significant amount of
> complexity in order to accommodate those needs (see patch set 28 at
> https://review.openstack.org/#/c/138215/).   It all works fine, but
> predictably, people are not comfortable with the extra few yards into the
> weeds it has to go to make all that happen.  In particular, in order to
> accommodate a RequestContext that is running in a different thread, it
> has to be copied first, because we have no ability to make the “.session”
> or “.connection” attributes dynamic without access to the RequestContext
> class up front.
> 
> So, what’s the alternative.   It’s that enginefacade is given just a tiny
> bit of visibility into the *class* used to create your context, such as
> in Nova, the nova.context.RequestContext class, so that we can place
> dynamic descriptors on it before instantiation (or I suppose we could
> monkeypatch the class on the first RequestContext object we see, but that
> seems even less desirable).   The blueprint went out of its way to avoid
> this.   But with contexts being copied and thrown into threads, I didn’t
> consider these use cases and I’d have probably needed to do the BP
> differently.
> 
> So what does the change look like?If you’re not Nova, imagine you’re
> cinder.context.RequestContext, heat.common.context.RequestContext,
> glance.context.RequestContext, etc.We throw a class decorator onto
> the class so that enginefacade can add some descriptors:
> 
> diff --git a/nova/context.py b/nova/context.py
> index e78636c..205f926 100644
> --- a/nova/context.py
> +++ b/nova/context.py
> @@ -22,6 +22,7 @@ import copy
> from keystoneclient import auth
> from keystoneclient import service_catalog
> from oslo.utils import timeutils
> +from oslo_db.sqlalchemy import enginefacade
> import six
> 
> from nova import exception
> @@ -61,6 +62,7 @@ class _ContextAuthPlugin(auth.BaseAuthPlugin):
> region_name=region_name)
> 
> 
> +@enginefacade.transaction_context_provider
> class RequestContext(object):
> """Security context and request information.
> 
> 
> the implementation of this one can be seen here:
> https://review.openstack.org/#/c/149289/.   In particular we can see all
> the lines of code removed from oslo’s approach, and in fact there’s a lot
> more nasties I can take out once I get to work on that some more.
> 
> so what’s controversial about this?   It’s that there’s an
> “oslo_db.sqlalchemy” import up front in the XYZ/context.py module of
> every participating project, outside of where anything else “sqlalchemy”
> happens.  
> 
> There’s potentially other ways to do this - subclasses of RequestContext
> that are generated by abstract factories, for one.   As I left my Java
> gigs years ago I’m hesitant to go there either :).   Perhaps projects can
> opt to run their RequestContext class through this decorator
> conditionally, wherever it is that it gets decided they are about to use
> their db/sqlalchemy/api.py module.
> 
> So can I please get +1 / -1 from the list on, “oslo_db.sqlalchemy wants
> an up-front patch on everyone’s RequestContext class”  ?  thanks!

We put the new base class for RequestContext in its own library because
both the logging and messaging code wanted to influence its API. Would
it make sense to do this database setup there, too?

Doug

> 
> - mike
> 
> 
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [QA] Prototype of the script for Tempest auto-configuration

2015-01-23 Thread Yaroslav Lobankov
Hello everyone,

I would like to discuss the following patch [1] for Tempest. I think that
such a feature as auto-configuration of Tempest would be very useful for
many engineers and users.
I recently tried to use the script from [1]. I rebased the patch on master
and ran the script.
The script finished without any errors and tempest.conf was generated! Of
course, this patch needs a lot of work, but the idea looks very cool!

Also I would like to thank David Kranz for his work on the initial version
of the script.

Any thoughts?

[1] https://review.openstack.org/#/c/133245

Regards,
Yaroslav Lobankov.


Re: [openstack-dev] [Nova][Neutron] Thoughts on the nova<->neutron interface

2015-01-23 Thread Gary Kotton
Hi,
As Salvatore mentioned this was one of the things that we discussed at the
San Diego summit many years ago. I like the idea of using an RPC interface
to speak with Neutron (we could do a similar thing with Cinder, glance
etc). This would certainly address a number of issues with the interfaces
that we use at the moment. It is certainly something worthwhile discussing
next week.
We would need to understand how to define versioned APIs, how to deal
with extensions, etc.
Thanks
Gary

On 1/23/15, 2:59 PM, "Russell Bryant"  wrote:

>On 01/22/2015 06:40 PM, Salvatore Orlando wrote:
>> I also like the idea of considering the RPC interface. What kind of
>> stability / versioning exists on the Neutron RPC interface?
>> 
>> 
>> Even if Neutron does not have fancy things such as objects with
>> remotable method, I think its RPC interfaces are versioned exactly in
>> the same way as Nova. On REST vs AMQP I do not have a strong opinion.
>> This topic comes up quite often; on the one hand REST provides a cleaner
>> separation of concerns between the two projects; on the other hand RPC
>> will enable us to design an optimised interface specific to Nova. While
>> REST over HTTP is not as bandwidth-efficient as RPC over AMQP it however
>> allow deployers to use off-the-shelf tools for HTTP optimisation, such
>> as load balancing, or caching.
>
>Neutron uses rpc versioning, but there are some problems with it (that I
>have been working to clean up).  The first one is that the interfaces
>are quite tangled together.  There are interfaces that appear separate,
>but are used with a bunch of mixin classes and actually presented as a
>single API over rpc.  That means they have to be versioned together,
>which is not really happening consistently in practice.  I'm aiming to
>have all of this cleared up by the end of Kilo, though.
>
>The other issue is related to the "fancy things such as objects with
>remotable methods".  :-)  The key with this is versioning the data sent
>over these interfaces.  Even with rpc interface versioning clear and
>consistently used, I still wouldn't consider these as stable interfaces
>until the data is versioned, as well.
>
>-- 
>Russell Bryant
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] What's Up Doc? Jan 23 2015 [heat] [cinder]

2015-01-23 Thread Anne Gentle
__In review and merged this past week__
About 60 doc patches merged in the last week, including NUMA Topology
Filter docs, firewall introduction in the new Networking Guide, updates for
the CLI Ref for Ironic 0.3.3 client, and more updates to the Object Storage
portion of the End User Guide. Thank you Erik Wilson for continuing to
bring in Object Storage user how-tos.

We closed 18 doc bugs in the last week. Nice.

I'm extremely pleased with the uptick in contributions both to doc bugs and
patches for the API reference. There are some teams who need to get their
REST API docs updated:
Orchestration/heat: At least 15 doc bugs came in this week for your REST
API docs. Please take a look at [1].
Block Storage/cinder: Thanks for logging bug 1411724, which identified that
these extensions need documentation:
 - consistency groups
 - cgsnapshots
 - volume replication
 - volume transfer
 - volume unmanage

Also, the "playground" RST version of the end user guide merged this past
week, which is another step towards the new docs site redesign
implementation.

Also many thanks to our doc reviewers, this team does a great job both in
efficiency and coverage.


__High priority doc work__
The web site design is my highest priority right now, and the networking
guide progress is a high priority as well.


__Ongoing doc work__
Alexander Adamov from Mirantis has agreed to set up a Debian test lab for
testing the Debian install guide and address any doc bugs he finds. Thank
you Alexander!

__New incoming doc requests__

Anita Kuno seeks a person to document the nova-network to neutron migration
path. Please let us know if you're interested in this specific
documentation and we'll connect you.

__Doc tools updates__

As it turns out, we won't have a "clean out" patch proposed to the
clouddocs-maven-plugin repository, and that's fine. Rackspace isn't using
the stackforge-stored one any more, so if anyone wants to do that
clean-out, please let me know. The idea is that we'll move to Sphinx/RST
for all documentation, rely on clouddocs-maven-plugin only for building
the API reference, and then move towards Swagger for building the API
reference; these are steps to be implemented over years' time to ensure
we keep meeting requirements for our doc tools, such as translation.

__Other doc news__

I'm meeting with Doug Hellmann next week to try to get our Sphinx theme done
in a way that's consumable by many teams. Stay tuned.

1.
https://bugs.launchpad.net/openstack-api-site/+bugs?field.searchtext=&orderby=-importance&field.status%3Alist=NEW&field.status%3Alist=CONFIRMED&field.status%3Alist=TRIAGED&field.status%3Alist=INPROGRESS&field.status%3Alist=FIXCOMMITTED&field.status%3Alist=INCOMPLETE_WITH_RESPONSE&field.status%3Alist=INCOMPLETE_WITHOUT_RESPONSE&assignee_option=any&field.assignee=&field.bug_reporter=tengqim&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on&search=Search

-- 
Anne Gentle
annegen...@justwriteclick.com


Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-23 Thread Alexander Ignatov
Mike,

> I also wanted to add that there is a PR already on adding plugins
> repos to stackforge: https://review.openstack.org/#/c/147169/

All this looks good, but it’s not clear when this patch will be merged and 
the repos created.
So the question is: what should we do with the current specs made in 
fuel-specs [1,2] which are targeted for plugins?
And how will the development process look for plugins added to the 6.1 
roadmap? Especially for plugins that come not from external vendors and 
partners. Will we create separate projects on the Launchpad and duplicate our 
For now I’m not sure if we need to wait for new infrastructure to be created 
in stackforge/launchpad for each plugin, or follow the common procedure and 
land current plugins in existing repos during the 6.1 milestone.

[1] https://review.openstack.org/#/c/129586/ 

[2] https://review.openstack.org/#/c/148475/4 

Regards,
Alexander Ignatov



> On 23 Jan 2015, at 12:43, Nikolay Markov  wrote:
> 
> I also wanted to add that there is a PR already on adding plugins
> repos to stackforge: https://review.openstack.org/#/c/147169/
> 
> There is a battle in the comments right now, because some people do not
> agree that so many repos are needed.
> 
> On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov
>  wrote:
>> Hi Fuelers,
>> we've implemented pluggable architecture piece in 6.0, and got a number of
>> plugins already. Overall development process for plugins is still not fully
>> defined.
>> We initially thought that having all the plugins in one repo on stackforge
>> is Ok, we also put some docs into existing fuel-docs repo, and specs to
>> fuel-specs.
>> 
>> We might need a change here. Plugins are not tied to any particular release
>> date, and they can also be separated from each other in terms of committers
>> and core reviewers. Also, it seems pretty natural to keep all docs and
>> design specs associated with a particular plugin.
>> 
>> With all said, following best dev practices, it is suggested to:
>> 
>> Have a separate stackforge repo per Fuel plugin in format
>> "fuel-plugin-", with separate core-reviewers group which should have
>> plugin contributor initially
>> Have docs folder in the plugin, and ability to build docs out of it
>> 
>> do we want Sphinx or simple Github docs format is Ok? So people can just go
>> to github/stackforge to see docs
>> 
>> Have specification in the plugin repo
>> 
>> also, do we need Sphinx here?
>> 
>> Have plugins tests in the repo
>> 
>> Ideas / suggestions / comments?
>> Thanks,
>> --
>> Mike Scherbakov
>> #mihgen
>> 
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> 
> 
> -- 
> Best regards,
> Nick Markov
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [python-novaclient][nova] future of --os-compute-api-version option and whole api versioning

2015-01-23 Thread Andrey Kurilin
Hi everyone!
After removing the Nova v3 API from novaclient [1], the v1.1 client
implementation is used for v1.1, v2 and v3 [2].
Since we are moving to microversions, I wonder: do we still need such a
mechanism for choosing the API version (os-compute-api-version), or can we
simply remove it, as in the proposed change [3]?
If we remove it, how should the microversion be selected?


[1] - https://review.openstack.org/#/c/138694
[2] -
https://github.com/openstack/python-novaclient/blob/master/novaclient/client.py#L763-L769
[3] - https://review.openstack.org/#/c/149006

-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [neutron][vpnaas] Can we require kernel 3.8+ for use with StrongSwan IPSec VPN for Kilo?

2015-01-23 Thread Paul Michali
To summarize, should we...

A) Assume all kernels will be 3.8+ and use mount namespace (risky?)
B) Do a check to ensure kernel is 3.8+ and fall back to net namespace and
mount --bind if not (more work).
C) Just use net namespace as indication that namespace with mount --bind
done (simple)

Maybe it is best to just do the simple thing for now. I wanted to double
check though, to see if the alternatives could/should be considered.
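A rough sketch of what option B's check could look like, illustrative only -- this is not the actual patch, and the helper names are made up:

```python
import os

# Mount namespaces (/proc/<pid>/ns/mnt) appeared in Linux 3.8;
# net namespaces (/proc/<pid>/ns/net) in Linux 3.0.
MIN_KERNEL = (3, 8)


def parse_kernel_version(release):
    """Turn a kernel release string like '3.13.0-24-generic' into (3, 13)."""
    major, minor = release.split('.')[:2]
    return int(major), int(minor.split('-')[0])


def ns_probe_path():
    """Pick which /proc/1/ns entry to test for namespace support.

    On 3.8+ kernels check the mount-namespace entry directly; on older
    kernels fall back to the net-namespace entry (option B's fallback).
    """
    if parse_kernel_version(os.uname()[2]) >= MIN_KERNEL:
        return '/proc/1/ns/mnt'
    return '/proc/1/ns/net'
```

The tuple comparison makes the cut-off explicit, and the fallback keeps pre-3.8 kernels working with the net-namespace plus mount --bind approach.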

Regards,

PCM


PCM (Paul Michali)

IRC pc_m (irc.freenode.com)
Twitter... @pmichali


On Fri, Jan 23, 2015 at 1:35 AM, Joshua Zhang 
wrote:

> Please note that this patch actually doesn't have a minimum kernel
> requirement, because it only uses 'mount --bind' and 'net namespace', not
> 'mount namespace'. ('mount --bind' has existed since Linux 2.4, 'net
> namespace' since Linux 3.0, and 'mount namespace' since Linux 3.8.)
>
> So I think a sanity check for 3.8 is not needed. Any thoughts?
>
> thanks.
>
>
> On Fri, Jan 23, 2015 at 2:12 PM, Kevin Benton  wrote:
>
>> >If we can consolidate that and use a single tool from the master neutron
>> repository, that would be my vote.
>>
>> +1 with a hook mechanism so the sanity checks stay in the *aas repos and
>> they are only run if installed.
>>
>>>
>> On Thu, Jan 22, 2015 at 7:30 AM, Kyle Mestery 
>> wrote:
>>
>>> On Wed, Jan 21, 2015 at 10:27 AM, Ihar Hrachyshka 
>>> wrote:
>>>
  On 01/20/2015 05:40 PM, Paul Michali wrote:

 Review https://review.openstack.org/#/c/146508/ is adding support for
 StrongSwan VPN, which needs mount bind to be able to specify different
 paths for config files.

  The code, which used some older patch, does a test for
 /proc/1/ns/net, instead of /proc/1/ns/mnt, because it stated that the
 latter is only supported in kernel 3.8+. That was a while ago, and I'm
 wondering if the condition is still true.  If we know that for Kilo and on,
 we'll be dealing with 3.8+ kernels, we could use the more accurate test.

  Can we require 3.8+ kernel for this?


 I think we can but it's better to check with distributions. Red Hat
 wise, we ship a kernel that is newer than 3.8.

  If so, how and where do we ensure that is true?


 Ideally, you would implement a sanity check for the feature you need
 from the kernel. Though it opens the question of whether we want to ship
 multiple sanity-check tools, one for each repo (neutron + 3 *aas repos).

 If we can consolidate that and use a single tool from the master
>>> neutron repository, that would be my vote.
>>>

  Also, if you can kindly review the code here:
 https://review.openstack.org/#/c/146508/5/neutron_vpnaas/services/vpn/common/netns_wrapper.py,
 I'd really appreciate it, as I'm not versed in the Linux proc files at all.

  Thanks!


   PCM (Paul Michali)

  IRC pc_m (irc.freenode.com)
 Twitter... @pmichali



 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: 
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Kevin Benton
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards
> Zhang Hua(张华)
> Software Engineer | Canonical
> IRC:  zhhuabj
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][Neutron] Thoughts on the nova<->neutron interface

2015-01-23 Thread Russell Bryant
On 01/22/2015 06:40 PM, Salvatore Orlando wrote:
> I also like the idea of considering the RPC interface. What kind of
> stability / versioning exists on the Neutron RPC interface?
> 
> 
> Even if Neutron does not have fancy things such as objects with
> remotable method, I think its RPC interfaces are versioned exactly in
> the same way as Nova. On REST vs AMQP I do not have a strong opinion.
> This topic comes up quite often; on the one hand REST provides a cleaner
> separation of concerns between the two projects; on the other hand RPC
> will enable us to design an optimised interface specific to Nova. While
> REST over HTTP is not as bandwidth-efficient as RPC over AMQP, it however
> allows deployers to use off-the-shelf tools for HTTP optimisation, such
> as load balancing or caching.

Neutron uses rpc versioning, but there are some problems with it (that I
have been working to clean up).  The first one is that the interfaces
are quite tangled together.  There are interfaces that appear separate,
but are used with a bunch of mixin classes and actually presented as a
single API over rpc.  That means they have to be versioned together,
which is not really happening consistently in practice.  I'm aiming to
have all of this cleared up by the end of Kilo, though.

The other issue is related to the "fancy things such as objects with
remotable methods".  :-)  The key with this is versioning the data sent
over these interfaces.  Even with rpc interface versioning clear and
consistently used, I still wouldn't consider these as stable interfaces
until the data is versioned, as well.
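For context, RPC interface versioning of this kind usually boils down to a compatibility rule: same major version, and the requested minor version no greater than the cap. A simplified sketch in the spirit of oslo.messaging's version-compatibility helper (not Neutron's actual code):

```python
def version_is_compatible(version_cap, requested):
    """RPC-style compatibility check.

    "1.3" is compatible with a "1.5" cap (older minor within the same
    major), but "2.0" is not compatible with any "1.x" cap.
    """
    cap_major, cap_minor = (int(x) for x in version_cap.split("."))
    req_major, req_minor = (int(x) for x in requested.split("."))
    return req_major == cap_major and req_minor <= cap_minor
```

Russell's point is that even with such interface versioning used consistently, the *payloads* sent over the interface also need versioning before the interface can be considered stable.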

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-01-23 Thread Oleksii Zamiatin

On 23.01.15 13:22, Elena Ezhova wrote:



On Fri, Jan 23, 2015 at 1:55 PM, Ilya Pekelny wrote:




On Fri, Jan 23, 2015 at 12:46 PM, ozamiatin
<ozamia...@mirantis.com> wrote:

IMHO It should be created once per Reactor/Client or even per
driver instance.


Per driver, sounds good.

Wouldn't this create a regression for Neutron? The original change was
supposed to fix the bug [1], where each api-worker process got the
same copy of the Context due to its singleton-like nature.
It wouldn't be a singleton now, because each process would have its
own driver instance. We will of course check this case. Each api-worker
should get its own context. The goal now is to have no more than
one context per worker.




By the way (I didn't check it yet with current implementation
of the driver) such approach should break the IPC, because
such kind of sockets should be produced from the same context.

Please, check it. Looks like a potential bug.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[1] https://bugs.launchpad.net/neutron/+bug/1364814


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-01-23 Thread Elena Ezhova
On Fri, Jan 23, 2015 at 1:55 PM, Ilya Pekelny  wrote:

>
>
> On Fri, Jan 23, 2015 at 12:46 PM, ozamiatin 
> wrote:
>
>> IMHO It should be created once per Reactor/Client or even per driver
>> instance.
>>
>
> Per driver, sounds good.
>
Wouldn't this create a regression for Neutron? The original change was
supposed to fix the bug [1], where each api-worker process got the same
copy of the Context due to its singleton-like nature.

>
>
>>
>> By the way (I didn't check it yet with current implementation of the
>> driver) such approach should break the IPC, because such kind of sockets
>> should be produced from the same context.
>>
>
> Please, check it. Looks like a potential bug.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
[1] https://bugs.launchpad.net/neutron/+bug/1364814
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-23 Thread Matthias Runge
On 23/01/15 10:31, Jeremy Stanley wrote:
> On 2015-01-23 10:11:46 +0100 (+0100), Matthias Runge wrote:
> [...]
>> It would be totally awesome to switch from pip install to using
>> distribution packages for testing purposes. At least for
>> dependencies.
> [...]
> 
> While it seems nice on the surface, the unfortunate truth is that
> neither the infra team nor the various package maintainers has the
> excess of manpower needed to be able to near-instantly package new
> requirements and new versions of requirements for the platforms on
> which we run our CI jobs. I fear that the turn-around on getting new
> requirements into projects would go from being on the order of hours
> or days to weeks or even months.
> 
> We could work around that by generating our own system packages for
> multiple platforms rather than relying on actual distro packages,

Oh, I would still try to get that enabled from a distro perspective. But
that's something a distro CI could provide feedback on here.

I think providing/updating distro packages is quite comparable to
updating pypi packages. Maintaining an additional set of packages is
just wrong IMO. Nevertheless, that might be required for a transitional
phase.

Matthias



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-01-23 Thread Ilya Pekelny
On Fri, Jan 23, 2015 at 12:46 PM, ozamiatin  wrote:

> IMHO It should be created once per Reactor/Client or even per driver
> instance.
>

Per driver, sounds good.


>
> By the way (I didn't check it yet with current implementation of the
> driver) such approach should break the IPC, because such kind of sockets
> should be produced from the same context.
>

Please, check it. Looks like a potential bug.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.messaging][zmq] Redundant zmq.Context creation

2015-01-23 Thread ozamiatin

Hi,

While working on the zmq driver I've noticed that a zmq.Context is currently
created per socket, which is definitely redundant. That was introduced by the change


https://review.openstack.org/#/c/126914/5/oslo/messaging/_drivers/impl_zmq.py

It does the right thing by removing the global context variable, but I
think the context should still have a broader scope than per socket.


IMHO it should be created once per Reactor/Client, or even per driver
instance.


By the way (I haven't checked it yet with the current implementation of the
driver), such an approach could break IPC, because those kinds of sockets
must be produced from the same context.
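The "one context per driver/process" idea can be sketched as a lazily created shared instance, similar in spirit to pyzmq's zmq.Context.instance(). The sketch below is illustrative only — it uses a placeholder Context class rather than the real pyzmq one, and is not the driver's actual code:

```python
import threading


class Context(object):
    """Placeholder standing in for zmq.Context in this sketch."""


class Driver(object):
    """Create the messaging context once per driver instance and share it
    across all sockets, instead of creating one context per socket."""

    def __init__(self):
        self._context = None
        self._lock = threading.Lock()

    @property
    def context(self):
        # Lazily create the context; double-check under the lock so
        # concurrent green threads / threads don't each create one.
        if self._context is None:
            with self._lock:
                if self._context is None:
                    self._context = Context()
        return self._context
```

Sharing one context per process also relates to Oleksii's concern above: certain socket types (notably inproc endpoints in zmq) can only communicate when the sockets are created from the same context.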


Regards,
Oleksii Zamiatin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-23 Thread Evgeniy L
Hi Mike,

All of the items look good. I have a small comment regarding the docs.
I don't think we should force plugin developers to write docs in a
Sphinx-compatible format; I vote for a GitHub-compatible docs format,
and if we want to show this information somewhere else,
we can use GitHub's markup engine [1] to convert the docs into HTML pages.

Thanks,

[1] https://github.com/github/markup/tree/master
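As an illustration, a per-plugin repository along the lines proposed in this thread might be laid out like this (a hypothetical layout; all names are invented for the example):

```text
fuel-plugin-example/
    README.md       # GitHub-rendered overview: install/configure instructions
    docs/           # longer-form documentation (GitHub markup or Sphinx)
    specs/          # design specification for the plugin
    tests/          # plugin test suite
```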

On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov 
wrote:

> Hi Fuelers,
> we've implemented pluggable architecture piece in 6.0, and got a number of
> plugins already. Overall development process for plugins is still not fully
> defined.
> We initially thought that having all the plugins in one repo on stackforge
> is Ok, we also put some docs into existing fuel-docs repo, and specs to
> fuel-specs.
>
> We might need a change here. Plugins are not tied to any particular
> release date, and they can also be separated from each other in terms of
> committers and core reviewers. Also, it seems pretty natural to keep
> all docs and design specs associated with a particular plugin.
>
> With all said, following best dev practices, it is suggested to:
>
>1. Have a separate stackforge repo per Fuel plugin in format
>"fuel-plugin-", with separate core-reviewers group which should have
>plugin contributor initially
>2. Have docs folder in the plugin, and ability to build docs out of it
>   - do we want Sphinx or simple Github docs format is Ok? So people
>   can just go to github/stackforge to see docs
>3. Have specification in the plugin repo
>   - also, do we need Sphinx here?
>4. Have plugins tests in the repo
>
> Ideas / suggestions / comments?
> Thanks,
> --
> Mike Scherbakov
> #mihgen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-23 Thread Irina Povolotskaya
Hi Fuelers and Mike,

I'd like to provide some ideas/comments for the issues Mike has put into
discussion.

Yesterday we had a nice discussion for plugins SDK.

According to this discussion, we should create an internal document for
plugins certification ASAP (I mean, steps to perform on the developers'
side and assignees for the particular tasks).

So, we could also describe there (just what you've mentioned):

   - repo issue: we should definitely mention that
   stackforge/fuel-plugin- is strongly recommended for usage (we have
   some common information in Fuel Plug-in Guide, but an internal document
   should focus on that)
   - docs for plugins: I'm already working on .pdf templates for both
   Plugin Guide (how to install/configure a plugin) and Test Plan/Report
   (we'll just move to the simplest format ever, but note that this is mostly
   related to certification workflow). Nevertheless, some repos already
   contain README.md files with a brief description,
   installation/configuration instructions. For example, see Borgan D plugin's
   [1]. There is no spec, but the concept seems clear on the whole.
   - As to the specification: developers should provide it in the Test Plan,
   so it would be okay if they followed one of these two ways: 1) take the
   fuel-specs template as the basis and simply link from their Test
   Plan/Report to the spec, or 2) post the spec right in the Test Plan/Report.
   - test in the repo: sure, this should be covered.

I believe, I'll mostly be working on this internal document, so feel free
to comment/correct me, if something seems wrong here.

[1] https://github.com/stackforge/fuel-plugins/tree/master/ha_fencing
-- 
--
Best regards,

Irina
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-23 Thread Meg McRoberts
What sort of specification are you talking about here -- specs for
individual plugins
or a spec for how to implement a plugin?  If the latter, what is the
relationship of that
to the official documentation about how to create a plugin (to be added to
the
Developer Guide)?

meg

On Fri, Jan 23, 2015 at 1:43 AM, Nikolay Markov 
wrote:

> I also wanted to add that there is a PR already on adding plugins
> repos to stackforge: https://review.openstack.org/#/c/147169/
>
> There is a battle in the comments right now, because some people do not
> agree that so many repos are needed.
>
> On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov
>  wrote:
> > Hi Fuelers,
> > we've implemented pluggable architecture piece in 6.0, and got a number
> of
> > plugins already. Overall development process for plugins is still not
> fully
> > defined.
> > We initially thought that having all the plugins in one repo on
> stackforge
> > is Ok, we also put some docs into existing fuel-docs repo, and specs to
> > fuel-specs.
> >
> > We might need a change here. Plugins are not tied to any particular
> > release date, and they can also be separated from each other in terms of
> > committers and core reviewers. Also, it seems pretty natural to keep all
> > docs and design specs associated with a particular plugin.
> >
> > With all said, following best dev practices, it is suggested to:
> >
> > Have a separate stackforge repo per Fuel plugin in format
> > "fuel-plugin-", with separate core-reviewers group which should
> have
> > plugin contributor initially
> > Have docs folder in the plugin, and ability to build docs out of it
> >
> > do we want Sphinx or simple Github docs format is Ok? So people can just
> go
> > to github/stackforge to see docs
> >
> > Have specification in the plugin repo
> >
> > also, do we need Sphinx here?
> >
> > Have plugins tests in the repo
> >
> > Ideas / suggestions / comments?
> > Thanks,
> > --
> > Mike Scherbakov
> > #mihgen
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Best regards,
> Nick Markov
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Plugins for Fuel: repo, doc, spec - where?

2015-01-23 Thread Nikolay Markov
I also wanted to add that there is a PR already on adding plugins
repos to stackforge: https://review.openstack.org/#/c/147169/

There is a battle in the comments right now, because some people do not
agree that so many repos are needed.

On Fri, Jan 23, 2015 at 1:25 AM, Mike Scherbakov
 wrote:
> Hi Fuelers,
> we've implemented pluggable architecture piece in 6.0, and got a number of
> plugins already. Overall development process for plugins is still not fully
> defined.
> We initially thought that having all the plugins in one repo on stackforge
> is Ok, we also put some docs into existing fuel-docs repo, and specs to
> fuel-specs.
>
> We might need a change here. Plugins are not tied to any particular release
> date, and they can also be separated from each other in terms of committers
> and core reviewers. Also, it seems pretty natural to keep all docs and
> design specs associated with a particular plugin.
>
> With all said, following best dev practices, it is suggested to:
>
> Have a separate stackforge repo per Fuel plugin in format
> "fuel-plugin-", with separate core-reviewers group which should have
> plugin contributor initially
> Have docs folder in the plugin, and ability to build docs out of it
>
> do we want Sphinx or simple Github docs format is Ok? So people can just go
> to github/stackforge to see docs
>
> Have specification in the plugin repo
>
> also, do we need Sphinx here?
>
> Have plugins tests in the repo
>
> Ideas / suggestions / comments?
> Thanks,
> --
> Mike Scherbakov
> #mihgen
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best regards,
Nick Markov

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] iptables routes are not being injected to router namespace

2015-01-23 Thread Xavier León
On Thu, Jan 22, 2015 at 10:57 PM, Brian Haley  wrote:
> On 01/22/2015 02:35 PM, Kevin Benton wrote:
>> Right, there are two bugs here. One is in whatever went wrong with 
>> defer_apply
>> and one is with this exception handling code. I would allow the fix to go in 
>> for
>> the exception handling and then file another bug for the actual underlying
>> defer_apply bug.
>
> What went wrong with defer_apply() was caused by oslo.concurrency - version
> 1.4.1 seems to fix the problem, see https://review.openstack.org/#/c/149400/
> (thanks Ihar!)
>
> Xavier - can you update your oslo.concurrency to that version and verify it
> helps?  It seems to work in my config.

Yes. Updating to oslo.concurrency 1.4.1 fixed this problem.

Thanks,
Xavi
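The exception-handling half of this bug — `existing_floating_ips` referenced before assignment — is the classic pattern where a variable is bound only inside a try block that can fail before the binding happens, so the cleanup path then raises UnboundLocalError and masks the original failure. A minimal illustration (not the actual l3-agent code):

```python
def process_buggy(get_ips):
    """Demonstrates the failure mode: if get_ips() raises, the variable
    is never bound, and the later reference raises UnboundLocalError."""
    try:
        existing_floating_ips = get_ips()  # may raise before binding
    except Exception:
        pass  # error swallowed here, purely for illustration
    return existing_floating_ips


def process_fixed(get_ips):
    """Bind a safe default before the try block, so the variable always
    exists on the cleanup path."""
    existing_floating_ips = []
    try:
        existing_floating_ips = get_ips()
    except Exception:
        pass
    return existing_floating_ips
```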

>
> Then the change in the other patchset could be applied, along with a test that
> triggers exceptions so this gets caught.
>
> Thanks,
>
> -Brian
>
>> On Thu, Jan 22, 2015 at 10:32 AM, Brian Haley > > wrote:
>>
>> On 01/22/2015 01:06 PM, Kevin Benton wrote:
>> > There was a bug for this already.
>> > https://bugs.launchpad.net/bugs/1413111
>>
>> Thanks Kevin.  I added more info to it, but don't think the patch 
>> proposed there
>> is correct.  Something in the iptables manager defer_apply() code isn't
>> quite right.
>>
>> -Brian
>>
>>
>> > On Thu, Jan 22, 2015 at 9:07 AM, Brian Haley > 
>> > >> wrote:
>> >
>> > On 01/22/2015 10:17 AM, Carl Baldwin wrote:
>> > > I think this warrants a bug report.  Could you file one with 
>> what you
>> > > know so far?
>> >
>> > Carl,
>> >
>> > Seems as though a recent change introduced a bug.  This is on a 
>> devstack
>> > I just created today, at l3/vpn-agent startup:
>> >
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent 
>> Traceback (most
>> > recent call last):
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File
>> > "/opt/stack/neutron/neutron/common/utils.py", line 342, in call
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent 
>> return
>> > func(*args, **kwargs)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File
>> > "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in
>> process_router
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> >  self._process_external(ri)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent   File
>> > "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in
>> _process_external
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> >  self._update_fip_statuses(ri, existing_floating_ips, fip_statuses)
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> UnboundLocalError:
>> > local variable 'existing_floating_ips' referenced before assignment
>> > 2015-01-22 11:55:07.961 4203 TRACE neutron.agent.l3.agent
>> > Traceback (most recent call last):
>> >   File 
>> "/usr/local/lib/python2.7/dist-packages/eventlet/greenpool.py",
>> line
>> > 82, in _spawn_n_impl
>> > func(*args, **kwargs)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1093, 
>> in
>> > _process_router_update
>> > self._process_router_if_compatible(router)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1047, 
>> in
>> > _process_router_if_compatible
>> > self._process_added_router(router)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 1056, 
>> in
>> > _process_added_router
>> > self.process_router(ri)
>> >   File "/opt/stack/neutron/neutron/common/utils.py", line 345, in 
>> call
>> > self.logger(e)
>> >   File
>> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line
>> > 82, in __exit__
>> > six.reraise(self.type_, self.value, self.tb)
>> >   File "/opt/stack/neutron/neutron/common/utils.py", line 342, in 
>> call
>> > return func(*args, **kwargs)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 584, in
>> > process_router
>> > self._process_external(ri)
>> >   File "/opt/stack/neutron/neutron/agent/l3/agent.py", line 576, in
>> > _process_external
>> > self._update_fip_statuses(ri, existing_floating_ips, 
>> fip_statuses)
>> > UnboundLocalError: local variable 'existing_floating_ips' 
>> referenced
>> before
>> > assignment
>> >
>> > Since that's happening while we're holding the iptables lock I'm 
>> assuming
>> > no rules are being applied.
>> >
>> > I'm looking into it now, w

Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-23 Thread Jeremy Stanley
On 2015-01-23 10:11:46 +0100 (+0100), Matthias Runge wrote:
[...]
> It would be totally awesome to switch from pip install to using
> distribution packages for testing purposes. At least for
> dependencies.
[...]

While it seems nice on the surface, the unfortunate truth is that
neither the infra team nor the various package maintainers has the
excess of manpower needed to be able to near-instantly package new
requirements and new versions of requirements for the platforms on
which we run our CI jobs. I fear that the turn-around on getting new
requirements into projects would go from being on the order of hours
or days to weeks or even months.

We could work around that by generating our own system packages for
multiple platforms rather than relying on actual distro packages,
but doing that from an automated process isn't _really_ testing
distro packages at that point so wouldn't necessarily be any better
in the end than the situation we're in now.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] static files handling, bower/

2015-01-23 Thread Matthias Runge
On Thu, Jan 22, 2015 at 09:18:46PM +, Jeremy Stanley wrote:
> On 2015-01-22 16:06:55 -0500 (-0500), Matthew Farina wrote:
> [...]
> > When there is an update to our requirements, such as the recent
> > version increment in the version of angular used, a new package
> > version doesn't automatically show up as evident from that list.
> > How would that process be kicked off so we don't end up with a
> > missing dependency? Does that have to wait for a release cycle?
> 
> I don't want to speak for the distro package maintainers, but from
> what I've seen they generally wait until close enough to an
> integrated release that they can be pretty sure the requirements are
> close to frozen, so as not to waste effort packaging things which
> will end up not being used.
> 

Yes, exactly.

I myself am using horizon from a github checkout, providing all
dependencies as RPM packages. That being said, I'm updating/packaging
requirements whenever they show up for me.
There is no automatic process.
> I assume (perhaps incorrectly?) that we do use those in CI jobs, so
> that we can download the things a given project needs in an
> automated fashion--for us handling pip requirements lists is a
> solved problem (well, for some definitions of solved at least).

It would be totally awesome to switch from pip install to using
distribution packages for testing purposes. At least for dependencies.
> 
> > This appears to affect the testing setup as well. When we start to
> > use a new version of a JavaScript dependency no package will exist
> > for it. I believe this would mean the testing environment needs to
> > use the development setup, in the proposal, of bower. IIRC, bower
> > goes out to the Internet and there isn't a mirror of packages
> > (just a mirror of the registry). That means we'll loose the
> > ability to run testing installs from local mirrors for these
> > dependencies. I imagine a solution has been thought of for this.
> > Can you share any details?
Uh, we have seen so many timeouts and failing tests because some
mirror was not answering fast enough, etc. I don't think adding another
external service will improve the situation here.

-- 
Matthias Runge 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy Driver Specs

2015-01-23 Thread Sachi Gupta
Hi Yapeng, Sumit,

In the OpenStack GBP command line, the l2policy help shows an argument
--network that can be passed. Can you please elaborate on which network we
need to pass here and what it is used for?

gbp l2policy-create --help 
usage: gbp l2policy-create [-h] [-f {html,json,shell,table,value,yaml}] 
   [-c COLUMN] [--max-width ] 
   [--prefix PREFIX] [--request-format {json,xml}] 

   [--tenant-id TENANT_ID] [--description 
DESCRIPTION] 
   [--network NETWORK] [--l3-policy L3_POLICY] 
   NAME 

Create a L2 Policy for a given tenant. 

positional arguments: 
  NAME  Name of L2 Policy to create 

optional arguments: 
  -h, --helpshow this help message and exit 
  --request-format {json,xml} 
The XML or JSON request format. 
  --tenant-id TENANT_ID 
The owner tenant ID. 
  --description DESCRIPTION 
Description of the L2 Policy 
  --network NETWORK Network to map the L2 Policy 
  --l3-policy L3_POLICY 
L3 Policy uuid 


Also, the PTG help includes an additional --subnets parameter. Please also
provide input on it.

stack@tcs-ThinkCentre-M58p:/home/tcs/JUNIPER/gbp_openstack_odl/devstack$ 
gbp policy-target-group-create --help 
usage: gbp policy-target-group-create [-h] 
  [-f 
{html,json,shell,table,value,yaml}] 
  [-c COLUMN] [--max-width ] 
  [--prefix PREFIX] 
  [--request-format {json,xml}] 
  [--tenant-id TENANT_ID] 
  [--description DESCRIPTION] 
  [--l2-policy L2_POLICY] 
  [--provided-policy-rule-sets 
PROVIDED_POLICY_RULE_SETS] 
  [--consumed-policy-rule-sets 
CONSUMED_POLICY_RULE_SETS] 
  [--network-service-policy 
NETWORK_SERVICE_POLICY] 
  [--subnets SUBNETS] 
  NAME 
Create a Policy Target Group for a given tenant. 
positional arguments: 
  NAME  Name of Policy Target Group to create 

optional arguments: 
  -h, --helpshow this help message and exit 
  --request-format {json,xml} 
The XML or JSON request format. 
  --tenant-id TENANT_ID 
The owner tenant ID. 
  --description DESCRIPTION 
Description of the Policy Target Group 
  --l2-policy L2_POLICY 
L2 policy uuid 

  --provided-policy-rule-sets PROVIDED_POLICY_RULE_SETS 
Dictionary of provided policy rule set uuids 
  --consumed-policy-rule-sets CONSUMED_POLICY_RULE_SETS 
Dictionary of consumed policy rule set uuids 
  --network-service-policy NETWORK_SERVICE_POLICY 
Network service policy uuid 
  --subnets SUBNETS List of neutron subnet uuids 

output formatters: 
  output formatter options 

  -f {html,json,shell,table,value,yaml}, --format 
{html,json,shell,table,value,yaml} 
the output format, defaults to table 
  -c COLUMN, --column COLUMN 
specify the column(s) to include, can be repeated 

table formatter: 
  --max-width  
Maximum display width, 0 to disable 

shell formatter: 
  a format a UNIX shell can parse (variable="value") 

  --prefix PREFIX   add a prefix to all variable names 




Thanks & Regards
Sachi Gupta



From:   Yapeng Wu 
To: Sachi Gupta , "OpenStack Development Mailing 
List (not for usage questions)" , 
"groupbasedpolicy-...@lists.opendaylight.org" 

Cc: "bu...@noironetworks.com" 
Date:   01/13/2015 11:48 PM
Subject:RE: [openstack-dev] [Policy][Group-based-policy] ODL 
Policy  Driver  Specs



Hi, Sachi,
 
Please see my inlined replies.
 
Also, please refer to this link when you try to integrate OpenStack GBP 
and ODL GBP:
https://wiki.openstack.org/wiki/GroupBasedPolicy/InstallODLIntegrationDevstack
 
 
Yapeng
 
From: Sachi Gupta [mailto:sachi.gu...@tcs.com] 
Sent: Tuesday, January 13, 2015 4:02 AM
To: OpenStack Development Mailing List (not for usage questions); 
groupbasedpolicy-...@lists.opendaylight.org; Yapeng Wu
Cc: bu...@noironetworks.com
Subject: Re: [openstack-dev] [Policy][Group-based-policy] ODL Policy 
Driver Specs
 
Hi, 

While working on the integration of Openstack With ODL GBP, I have the 
below queries: 
1.  Endpoint-group Create: When I create a new policy-target-group 
from Openstack say "gbp target-policy-group-create group1", it internally 
creates a l2.policy which includes the creation of the network and subnet. 
Me