Re: [openstack-dev] [Neutron] Release of a neutron sub-project

2015-09-30 Thread Kyle Mestery
On Tue, Sep 29, 2015 at 8:04 PM, Kyle Mestery  wrote:

> On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
> vadivel.openst...@gmail.com> wrote:
>
>> Hi,
>>
>> As per the Sub-Project Release process, I would like to tag and release
>> the following sub-project as part of the upcoming Liberty release.
>> The process says to talk to one of the members of the 'neutron-release' group. I
>> couldn’t find a group mail-id for this group, hence I am sending this email
>> to the dev list.
>>
>> I have just removed the version from setup.cfg and got the patch merged,
>> as specified in the release process. Can someone from the neutron-release
>> group make this sub-project release?
>>
>>
>
> Vad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there so
> I can get your IRC nick in case I have questions.
>
>
It turns out that the networking-ale-omniswitch pypi setup isn't correct,
see [1] for more info and how to correct it. This turned out to be ok, because
it's forced me to re-examine the other networking sub-projects and their
pypi setup to ensure consistency, which the thread found here [1] will
resolve.

Once you resolve this ping me on IRC and I'll release this for you.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075880.html


> Thanks!
> Kyle
>
>
>>
>> ALE Omniswitch
>> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
>> Launchpad: https://launchpad.net/networking-ale-omniswitch
>> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>>
>> Thanks,
>> Vad
>> --
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Infra needs Gerrit developers

2015-09-30 Thread Wayne Warren
I am definitely interested in helping out with this as I feel the pain
of Gerrit, particularly around text entry...

Not a huge fan of Java but might be able to take on some low-hanging
fruit once I've had a chance to tackle the JJB 2.0 API.

Maybe this is the wrong place to discuss, but is there any chance the
Gerrit project might consider a move toward Clojure as its primary
language? I suspect this could be done in a way that slowly deprecates
the use of Java over time but would need to spend time investigating
the current Gerrit architecture before making any strong claims about
this.

On Tue, Sep 29, 2015 at 3:30 PM, Zaro  wrote:
> Hello All,
>
> I believe you are all familiar with Gerrit.  Our community relies on it
> quite heavily and it is one of the most important applications in our CI
> infrastructure. I work on the OpenStack-infra team and I've been hacking on
> Gerrit for a while. I'm the infra team's sole Gerrit developer. I also test
> all our Gerrit upgrades prior to infra upgrading Gerrit.  There are many
> Gerrit feature and bug fix requests coming from the OpenStack community;
> however, due to limited resources it has been a challenge to meet those
> requests.
>
> I've been fielding some of those requests and trying to make Gerrit better
> for OpenStack.  I was wondering whether there are any other folks in our
> community who might also like to hack on a large-scale Java application
> that's being used by many corporations and open source projects in the
> world.  If so, this is an opportunity for you to contribute.  I'm hoping to
> get more OpenStackers involved with the Gerrit community so we can
> collectively make OpenStack better.  If you would like to get involved, let
> the openstack-infra folks know [1] and we will try to help get you going.
>
> For instance, our last attempt to upgrade Gerrit failed due to a bug [2]
> that makes repos unusable on a diff timeout.   This bug is still not fixed,
> so a nice way to contribute is to help us fix things like this so we can
> continue to use newer versions of Gerrit.
>
> [1] in #openstack-infra or on openstack-in...@lists.openstack.org
> [2] https://code.google.com/p/gerrit/issues/detail?id=3424
>
>
> Thank You.
> - Khai (AKA zaro)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-09-30 Thread Mikhail Fedosin
Thank you for your confidence in me, folks! I'll be happy to maintain the
stability of our project and continue working on its improvements.

Best regards,
Mike

On Wed, Sep 30, 2015 at 4:28 PM, Nikhil Komawar 
wrote:

>
>
> On 9/30/15 8:46 AM, Kuvaja, Erno wrote:
>
> Hi all,
>
>
>
> I’d like to propose the following changes to the glance-stable-maint team:
>
> 1)  Removing Zhi Yan Liu from the group; unfortunately he has moved
> on to other ventures and is not actively participating in our operations
> anymore.
>
> +1 (always welcome back)
>
> 2)  Adding Mike Fedosin to the group; Mike has been reviewing and
> backporting patches to glance stable branches and is working with the right
> mindset. I think he would be a great addition to share the workload around.
>
> +1 (definitely)
>
>
>
> Best,
>
> Erno (jokke_) Kuvaja
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
>
> Thanks,
> Nikhil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-30 Thread Sofer Athlan-Guyot
Gilles Dubreuil  writes:

> On 30/09/15 03:43, Rich Megginson wrote:
>> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>>
>>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
 Gilles Dubreuil  writes:

> On 15/09/15 06:53, Rich Megginson wrote:
>> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>>> Hi,
>>>
>>> Gilles Dubreuil  writes:
>>>
 A. The 'composite namevar' approach:

  keystone_tenant {'projectX::domainY': ... }
B. The 'meaningless name' approach:

 keystone_tenant {'myproject': name='projectX',
 domain=>'domainY',
 ...}

 Notes:
- Actually using both combined should work too with the domain
 supposedly overriding the name part of the domain.
- Please look at [1] this for some background between the two
 approaches:

 The question
 -
 Decide between the two approaches, the one we would like to
 retain for
 puppet-keystone.

 Why it matters?
 ---
 1. Domain names are mandatory in every user, group or project.
 Besides
 the backward compatibility period mentioned earlier, where no domain
 means using the default one.
 2. Long term impact
 3. Both approaches are not completely equivalent, with different
 consequences on the future usage.
>>> I can't see why they couldn't be equivalent, but I may be missing
>>> something here.
>> I think we could support both.  I don't see it as an either/or
>> situation.
>>
 4. Being consistent
 5. Therefore the community to decide

 Pros/Cons
 --
 A.
>>> I think it's the B: meaningless approach here.
>>>
 Pros
   - Easier names
>>> That's subjective; creating unique and meaningful names doesn't look
>>> easy
>>> to me.
>> The point is that this allows choice - maybe the user already has some
>> naming scheme, or wants to use a more "natural" meaningful name -
>> rather
>> than being forced into a possibly "awkward" naming scheme with "::"
>>
>>keystone_user { 'heat domain admin user':
>>  name => 'admin',
>>  domain => 'HeatDomain',
>>  ...
>>}
>>
>>keystone_user_role {'heat domain admin user@::HeatDomain':
>>  roles => ['admin']
>>  ...
>>}
>>
 Cons
   - Titles have no meaning!
>> They have meaning to the user, not necessarily to Puppet.
>>
   - Cases where 2 or more resources could exists
>> This seems to be the hardest part - I still cannot figure out how
>> to use
>> "compound" names with Puppet.
>>
   - More difficult to debug
>> More difficult than it is already? :P
>>
   - Titles mismatch when listing the resources (self.instances)

 B.
 Pros
   - Unique titles guaranteed
   - No ambiguity between resource found and their title
 Cons
   - More complicated titles
 My vote
 
 I would love to have the approach A for easier name.
 But I've seen the challenge of maintaining the providers behind the
 curtains and the confusion it creates with name/titles and when
 not sure
 about the domain we're dealing with.
 Also I believe that supporting self.instances consistently with
 meaningful name is saner.
 Therefore I vote B
>>> +1 for B.
>>>
>>> My view is that this should be the advertised way, but the other
>>> method
>>> (meaningless) should be there if the user needs it.
>>>
>>> So as far as I'm concerned the two idioms should co-exist.  This
>>> would
>>> mimic what is possible with all puppet resources.  For instance
>>> you can:
>>>
>>> file { '/tmp/foo.bar': ensure => present }
>>>
>>> and you can
>>>
>>> file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>>> present }
>>>
>>> The two refer to the same resource.
>> Right.
>>
> I disagree, using the name for the title is not creating a composite
> name. The latter requires adding at least another parameter to be part
> of the title.
>
> Also in the case of the file resource, a path/filename is a unique
> name,
> which is not the case of an Openstack user which might exist in several
> domains.
>
> I actually added the meaningful name case in:
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html
>
>
> But that doesn't work very well because without adding the domain to
> the
> name, the following 

[openstack-dev] [cinder] [Sahara] Block Device Driver updates

2015-09-30 Thread Ivan Kolodyazhny
Hi team,

I know that the Block Device Driver (BDD) is not popular in the Cinder community.
The main issues were:

* the driver is not well maintained
* it doesn't meet the minimum features set
* there is no CI for it
* it's not the Cinder way/it works only when instance and volume are created
on the same host
* etc

AFAIK, it's widely used in the Sahara & Hadoop communities because it works
fast. I won't discuss the driver's performance in this thread. I'll share my
performance test results once I finish them.

I'm going to share driver updates with you about the issues above.

1) driver is not well maintained - we are working on it right now and will
fix any issues we find. We've got a devstack plugin [1] for this driver.

2) it doesn't meet the minimum features set - I've filed a blueprint [2] for
it. There are patches that implement the needed features in gerrit [3].

3) there is no CI for it - in the Cinder community, we've got a strong
requirement that each driver must have CI. I absolutely agree with that.
That's why a new infra job is proposed [4].

4) it works only when instance and volume are created on the same host -
I've filed a blueprint [5] but after testing I've found that it's already
implemented by [6].


I hope I've answered all the questions that were asked in IRC and in comments
on [6]. I will do my best to support this driver, and will propose a patch to
delete it if the community decides to remove it from the Cinder tree.


[1] https://github.com/openstack/devstack-plugin-bdd
[2]
https://blueprints.launchpad.net/cinder/+spec/block-device-driver-minimum-features-set
[3]
https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/block-device-driver-minimum-features-set,n,z
[4] https://review.openstack.org/228857
[5]
https://blueprints.launchpad.net/cinder/+spec/block-device-driver-via-iscsi
[6] https://review.openstack.org/#/c/200039/


Regards,
Ivan Kolodyazhny
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Sam Yaple
Also in favor if it lands before Liberty. But I don't want to see a format
change straight into Mitaka.

Sam Yaple

On Wed, Sep 30, 2015 at 1:03 PM, Steven Dake (stdake) 
wrote:

> I am in favor of this work if it lands before Liberty.
>
> Regards
> -steve
>
>
> On 9/30/15, 10:54 AM, "Jeff Peeler"  wrote:
>
> >The patch I just submitted[1] modifies the syntax of all.yml to use
> >dictionaries, which changes how variables are referenced. The key
> >point being in globals.yml, the overriding of a variable will change
> >from simply specifying the variable to using the dictionary value:
> >
> >old:
> >api_interface: 'eth0'
> >
> >new:
> >network:
> >api_interface: 'eth0'
> >
> >Preliminary feedback on IRC sounded positive, so I'll go ahead and
> >work on finishing the review immediately assuming that we'll go
> >forward. Please ping me if you hate this change so that I can stop the
> >work.
> >
> >[1] https://review.openstack.org/#/c/229535/
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
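A minimal Python sketch of what the proposed all.yml change means for anyone
overriding values in globals.yml. The flat and nested keys come from Jeff's
example above; everything else (the use of PyYAML, the variable names) is
illustrative only and is not Kolla code.

import yaml

# Old format: a flat key that is overridden directly.
old_globals = yaml.safe_load("api_interface: 'eth0'")

# New format: the same setting lives under the 'network' dictionary.
new_globals = yaml.safe_load("network:\n  api_interface: 'eth0'\n")

print(old_globals["api_interface"])             # eth0
print(new_globals["network"]["api_interface"])  # eth0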


Re: [openstack-dev] [Congress] Congress Usecases VM

2015-09-30 Thread Shiv Haris
Hi David,

Exactly what Tim mentioned in his email – there are 2 VMs.

The VM that I published has a README file in the home directory when you login 
with the credentials vagrant/vagrant.

Looking forward to your feedback.

-Shiv



From: Tim Hinrichs [mailto:t...@styra.com]
Sent: Wednesday, September 30, 2015 10:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi David,

There are 2 VM images for Congress that we're working on simultaneously: Shiv's 
and Alex's.

1. Shiv's image is to help new people understand some of the use cases Congress 
was designed for.  The goal is to include a bunch of use cases that we have 
working.

2. Alex's image is the one we'll be using for the hands-on-lab in Tokyo.  This 
one accompanies the Google doc instructions for the Hands On Lab: 
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub.

It sounds like you might be using Shiv's image with Alex's hands-on-lab 
instructions, so the instructions won't necessarily line up with the image.

Tim



On Wed, Sep 30, 2015 at 9:45 AM KARR, DAVID 
> wrote:
I think I’m seeing similar errors, but I’m not certain.  With the OVA I 
downloaded last night, when I run “./rejoin-stack.sh”, I get “Couldn’t find 
./stack-screenrc file; have you run stack.sh yet?”

Concerning the original page with setup instructions, at 
https://docs.google.com/document/d/1ispwf56bX8sy9T0KZyosdHrSR9WHEVA1oGEIYA22Orw/pub
 , I note that the login user and password are different (probably obvious), 
as is the required path to “cd” to.

Also, after starting the VM, the instructions say to run “ifconfig” to get the 
IP address of the VM, and then to ssh to the VM.  This seems odd.  If I’ve 
already done “interact with the console”, then I’m already logged into the 
console.  The instructions also describe how to get to the Horizon client from 
your browser.  I’m not sure what this should say now.

From: Shiv Haris [mailto:sha...@brocade.com]
Sent: Friday, September 25, 2015 3:35 PM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Thanks Alex, Zhou,

I get errors from Congress when I do a re-join. These errors seem to be due to the 
order in which the services are coming up. Hence I still depend on running 
stack.sh after the VM is up and running. Please try out the new VM – also 
advise if you need to add any of your use cases. Also, re-join starts “screen” – 
do we expect the end user to know how to use “screen”?

I do understand that running “stack.sh” takes time to run – but it does not do 
things that appear to be any kind of magic which we want to avoid in order to 
get the user excited.

I have uploaded a new version of the VM please experiment with this and let me 
know:

http://paloaltan.net/Congress/Congress_Usecases_SEPT_25_2015.ova

(root: vagrant password: vagrant)

-Shiv



From: Alex Yip [mailto:a...@vmware.com]
Sent: Thursday, September 24, 2015 5:09 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I was able to make devstack run without a network connection by disabling 
tempest.  So, I think it uses the loopback IP address, and that does not 
change, so rejoin-stack.sh works without a network at all.



- Alex






From: Zhou, Zhenzan >
Sent: Thursday, September 24, 2015 4:56 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Rejoin-stack.sh works only if its IP was not changed. So using a NAT network and 
a fixed IP inside the VM can help.

BR
Zhou Zhenzan

From: Alex Yip [mailto:a...@vmware.com]
Sent: Friday, September 25, 2015 01:37
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM


I have been using images, rather than snapshots.



It doesn't take that long to start up.  First, I boot the VM which takes a 
minute or so.  Then I run rejoin-stack.sh which takes just another minute or 
so.  It's really not that bad, and rejoin-stack.sh restores vms and openstack 
state that was running before.



- Alex






From: Shiv Haris >
Sent: Thursday, September 24, 2015 10:29 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Congress] Congress Usecases VM

Hi Congress folks,

I am looking for ideas. We want OpenStack to be running when the user 
instantiates the Usecase-VM. However, creating an OVA file is possible only when 
the VM is halted, which means OpenStack is not running and the user will have to 
run devstack again (which is time 

Re: [openstack-dev] [kolla] new yaml format for all.yml, need feedback

2015-09-30 Thread Steven Dake (stdake)
I am in favor of this work if it lands before Liberty.

Regards
-steve


On 9/30/15, 10:54 AM, "Jeff Peeler"  wrote:

>The patch I just submitted[1] modifies the syntax of all.yml to use
>dictionaries, which changes how variables are referenced. The key
>point being in globals.yml, the overriding of a variable will change
>from simply specifying the variable to using the dictionary value:
>
>old:
>api_interface: 'eth0'
>
>new:
>network:
>api_interface: 'eth0'
>
>Preliminary feedback on IRC sounded positive, so I'll go ahead and
>work on finishing the review immediately assuming that we'll go
>forward. Please ping me if you hate this change so that I can stop the
>work.
>
>[1] https://review.openstack.org/#/c/229535/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Kyle Mestery
Folks:

In trying to release some networking sub-projects recently, I ran into an
issue [1] where I couldn't release some projects due to them not being
registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
but before that can merge, we need to make sure all projects have pypi
registrations in place. The following networking sub-projects do NOT have
pypi registrations in place and need them created following the guidelines
here [3]:

networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow the directions to ensure
openstackci has "Owner" permissions, which allow for the publishing of
packages to pypi:

networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2], which will then give the
neutron-release team the ability to release pypi packages for those
projects.

Thanks!
Kyle

[1]
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3]
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Jordan Pittier
On Wed, Sep 30, 2015 at 1:58 PM, Thomas Goirand  wrote:

> Hi everyone!
>
> 1/ Announcement
> ===
>
> I'm pleased to announce, in advance of the final Liberty release, that
> Liberty RC1 not only has been fully uploaded to Debian Experimental, but
> also that the Tempest CI (which I maintain and is a package only CI, no
> deployment tooling involved), shows that it's also fully installable and
> working. There are still some failures, but these are, I am guessing, not
> due to problems in the packaging, but rather some Tempest setup problems
> which I intend to address.
>
> If you want to try out Liberty RC1 in Debian, you can either try it
> using Debian Sid + Experimental (recommended), or use the Jessie
> backport repository built out of Mirantis Jenkins server. Repositories
> are listed at this address:
>
> http://liberty-jessie.pkgs.mirantis.com/
>
> 2/ Quick note about Liberty Debian repositories
> ===
>
> During Debconf 15, someone reported that the fact the Jessie backports
> are on a Mirantis address is disturbing.
>
> Note that, while the above really is a non-Debian (ie: non official
> private) repository, it only contains unmodified source packages, only
> just rebuilt for Debian Stable. Please don't be afraid by the tainted
> "mirantis.com" domain name, I could have as well set a debian.net
> address (which has been on my todo list for a long time). But it is
> still Debian-only packages. Everything there is straight out of Debian
> repositories, nothing added, modified or removed.
>
> I believe that Liberty release in Sid, is currently working very well,
> but I haven't tested it as much as the Jessie backport.
>
> Started with the Kilo release, I have been uploading packages to the
> official Debian backports repositories. I will do so as well for the
> Liberty release, after the final release is out, and after Liberty is
> fully migrated to Debian Testing (the rule for stable-backports is that
> packages *must* be available in Testing *first*, in order to provide an
> upgrade path). So I do expect Liberty to be available from
> jessie-backports maybe a few weeks *after* the final Liberty release.
> Before that, use the unofficial Debian repositories.
>
> 3/ Horizon dependencies still in NEW queue
> ==
>
> It is also worth noting that Horizon hasn't been fully FTP master
> approved, and that some packages are still remaining in the NEW queue.
> This isn't the first release with such an issue with Horizon. I hope
> that 1/ FTP masters will approve the remaining packages soon 2/ for
> Mitaka, the Horizon team will care about freezing external dependencies
> (ie: new Javascript objects) earlier in the development cycle. I am
> hereby proposing that the Horizon 3rd party dependency freeze happens
> not later than Mitaka b2, so that we don't experience it again for the
> next release. Note that this problem affects both Debian and Ubuntu, as
> Ubuntu syncs dependencies from Debian.
>
> 5/ New packages in this release
> ===
>
> You may have noticed that the below packages are now part of Debian:
> - Manila
> - Aodh
> - ironic-inspector
> - Zaqar (this one is still in the FTP masters NEW queue...)
>
> I have also packaged a few more, but there are still blockers:
> - Congress (antlr version is too low in Debian)
> - Mistral
>
> 6/ Roadmap for Liberty final release
> 
>
> Next on my roadmap for the final release of Liberty is finishing the
> upgrade of the remaining components to the latest version tested in the
> gate. It has been done for most OpenStack deliverables, but about a
> dozen are still in the lowest version supported by our global-requirements.
>
> There's also some remaining work:
> - more Neutron drivers
> - Gnocchi
> - Address the remaining Tempest failures, and widen the scope of tests
> (add Sahara, Heat, Swift and others to the tested projects using the
> Debian package CI)
>
> I of course welcome everyone to test Liberty RC1 before the final
> release, and report bugs on the Debian bug tracker if needed.
>
> Also note that the Debian packaging CI is fully free software, and part
> of Debian as well (you can look into the openstack-meta-packages package
> in git.debian.org, and in openstack-pkg-tools). Contributions in this
> field are also welcome.
>
> 7/ Thanks to Canonical & every OpenStack upstream projects
> ==
>
> I'd like to point out that, even though I did the majority of the work
> myself, for this release there was way more collaboration with
> Canonical on the dependency chain. Indeed, for this Liberty release,
> Canonical decided to upload every dependency to Debian first, and then
> only sync from it. So a big thanks to the Canonical server team for
> doing community work with me together. I just hope we could push this
> even further, 

Re: [openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Kairat Kushaev
Agreed. That's why I am asking about the reasoning. Perhaps we need to
figure out how to get rid of this in glanceclient.

Best regards,
Kairat Kushaev

On Wed, Sep 30, 2015 at 7:04 PM, Jay Pipes  wrote:

> On 09/30/2015 09:31 AM, Kairat Kushaev wrote:
>
>> Hi All,
>> In short terms, I am wondering why we are validating responses from
>> server when we are doing
>> image-show, image-list, member-list, metadef-namespace-show and other
>> read-only requests.
>>
>> AFAIK, we are building warlock models when receiving responses from
>> server (see [0]). Each model requires schema to be fetched from glance
>> server. It means that each time we are doing image-show, image-list,
>> image-create, member-list and others we are requesting schema from the
>> server. AFAIU, we are using models to dynamically validate that object
>> is in accordance with schema but is it the case when glance receives
>> responses from the server?
>>
>> Could somebody please explain me the reasoning of this implementation?
>> Am I missed some usage cases when validation is required for server
>> responses?
>>
>> I also noticed that we already faced some issues with such
>> implementation that leads to "mocking" validation([1][2]).
>>
>
> The validation should not be done for responses, only ever for requests (and
> it's unclear that there is value in doing this on the client side at all,
> IMHO).
>
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
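For context, a hedged sketch of the mechanism being discussed: glanceclient
builds jsonschema-backed models with warlock, so constructing a model from a
server response re-validates that response. The schema below is a toy
stand-in, not the real image schema, and the failure mode is only illustrative.

import warlock

image_schema = {
    "name": "image",
    "properties": {
        "id": {"type": "string"},
        "status": {"type": "string"},
    },
    "additionalProperties": False,
}

Image = warlock.model_factory(image_schema)

# A well-formed response builds a model without complaint...
img = Image(id="1234", status="active")

# ...but an unexpected field in a server response raises ValueError, which
# is the sort of failure the cited bugs work around by mocking out validation.
try:
    Image(id="1234", status="active", extra_field=True)
except ValueError as exc:
    print("response rejected by schema:", exc)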


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Murali R
Russell,

Are any additional option fields used in Geneve between hypervisors at
this time? If so, how do they translate to VxLAN when traffic hits the gateway? For
instance, I am interested to see if we can translate custom header info
in VxLAN to Geneve headers and vice-versa, and whether there are flow commands
available to add conditional flows at this time, or whether it is possible to
extend them if need be.

Thanks
Murali

On Sun, Sep 27, 2015 at 1:14 PM, Russell Bryant  wrote:

> On 09/27/2015 02:26 AM, WANG, Ming Hao (Tony T) wrote:
> > Russell,
> >
> > Thanks for your valuable information.
> > I understood Geneve is some kind of tunnel format for network
> virtualization encapsulation, just like VxLAN.
> > But I'm still confused by the connection between Geneve and VTEP.
> > I suppose VTEP stands for "VxLAN Tunnel Endpoint", which
> should be used for VxLAN only.
> >
> > Does it become some "common tunnel endpoint" in OVN, and can be also
> used as a tunnel endpoint for Geneve?
>
> When using VTEP gateways, both the Geneve and VxLAN protocols are being
> used.  Packets between hypervisors are sent using Geneve.  Packets
> between a hypervisor and the gateway are sent using VxLAN.
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] The Absurdity of the Milestone-1 Deadline for Drivers

2015-09-30 Thread Ben Swartzlander

On 09/30/2015 12:11 PM, Mike Perez wrote:

On 13:29 Sep 28, Ben Swartzlander wrote:

I've always thought it was a bit strange to require new drivers to
merge by milestone 1. I think I understand the motivations of the
policy. The main motivation was to free up reviewers to review "other
things" and this policy guarantees that for 75% of the release
reviewers don't have to review new drivers. The other motivation was
to prevent vendors from turning up at the last minute with crappy
drivers that needed a ton of work, by encouraging them to get started
earlier, or forcing them to wait until the next cycle.

I believe that the deadline actually does more harm than good.

First of all, to those that don't want to spend time on driver
reviews, there are other solutions to that problem. Some people do
want to review the drivers, and those who don't can simply ignore
them and spend time on what they care about. I've heard people who
spend time on driver reviews say that the milestone-1 deadline
doesn't mean they spend less time reviewing drivers overall, it just
all gets crammed into the beginning of each release. It should be
obvious that setting a deadline doesn't actually affect the amount of
reviewer effort, it just concentrates that effort.

Some bad assumptions here:

* Nobody said they didn't want to review drivers.

* "Crammed" is completely an incorrect word here. An example with last release,
   we only had 3/17 drivers trying to get in during the last week of the
   milestone [1]. I don't think you're very active in Cinder to really judge how
   well the team has worked together to get these drivers in a timely way with
   vendors.


Those are fair points. No argument. I think I managed to obscure my main 
point with too many assumptions and rhetoric though.


Let me restate my argument as simply as possible.

Drivers are relatively low risk to the project. They're a lot of work to 
review due to the size, but the risk of missing bugs is small because 
those bugs will affect only the users who choose to deploy the given 
driver. Also drivers are well understood, so the process of reviewing 
them is straightforward.


New features are high risk. Even a small change to the manager or API 
code can have dramatic impact on all users of Cinder. Larger changes 
that touch multiple modules in different areas must be reviewed by 
people who understand all of Cinder just to get basic assurance that 
they do what they say. Finding bugs in these kinds of changes is tricky. 
Reading the code only gets you so far, and automated testing only 
scratches the surface. You have to run the code and try it out. These 
things take time and core team time is a limited and precious resource.


Now, if you have some high risk changes and some low risk changes, which 
do you think it makes sense to work on early in the release, and which 
do you think is safe to merge at the last minute? I asked myself that 
question and decided that I'd rather do high risk stuff early and low 
risk stuff later. Based on that belief, I'm making a suggestion to move 
the deadlines around.




The argument about crappy code is also a lot weaker now that there
are CI requirements which force vendors to spend much more time up
front and clear a much higher quality bar before the driver is even
considered for merging. Drivers that aren't ready for merge can
always be deferred to a later release, but it seems weird to defer
drivers that are high quality just because they're submitted during
milestones 2 or 3.

"Crappy code" ... I don't know where that's coming from. If anything, CI has
helped get the drivers in faster to get rid of what you call "cramming".



That's good. If that's true, then I would think it supports an argument 
that the deadlines are unnecessary because the underlying problem 
(limited reviewer time) has been solved.




All the the above is just my opinion though, and you shouldn't care
about my opinions, as I don't do much coding and reviewing in Cinder.
There is a real reason I'm writing this email...

In Manila we added some major new features during Liberty. All of the
new features merged in the last week of L-3. It was a nightmare of
merge conflicts and angry core reviewers, and many contributors
worked through a holiday weekend to bring the release together. While
asking myself how we can avoid such a situation in the future, it
became clear to me that bigger features need to merge earlier -- the
earlier the better.

When I look at the release timeline, and ask myself when is the best
time to merge new major features, and when is the best time to merge
new drivers, it seems obvious that *features* need to happen early
and drivers should come *later*. New major features require FAR more
review time than new drivers, and they require testing, and even
after they merge they cause merge conflicts that everyone else has to
deal with. Better that that work happens in milestones 1 and 2 than 
right before feature freeze. New 

Re: [openstack-dev] [election][TC] TC Candidacy

2015-09-30 Thread Barrett, Carol L
Mike - Congrats on your new position! Looking forward to working with you.
Carol

-Original Message-
From: Mike Perez [mailto:thin...@gmail.com] 
Sent: Wednesday, September 30, 2015 1:55 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [election][TC] TC Candidacy

Hi all!

I'm announcing my candidacy for a position on the OpenStack Technical Committee.

On October 1st I will be employed by the OpenStack Foundation as a 
Cross-Project Developer Coordinator to help bring focus and support to 
cross-project initiatives within the cross-project specs, Def Core, The Product 
Working group, etc.

I feel the items below have enabled others across this project to strive for 
quality. If you would all have me as a member of the Technical Committee, you 
can help me to enable more quality work in OpenStack.

* I have been working in OpenStack since 2010. I spent a good amount of my time
  working on OpenStack in my free time before being paid full time to work on
  it. It has been an important part of my life, and rewarding to see what we
  have all achieved together.

* I was PTL for the Cinder project in the Kilo and Liberty releases for two
  cross-project reasons:
  * Third party continuous integration (CI).
  * Stop talking about rolling upgrades, and actually make it happen for
operators.

* I led the effort in bringing third party continuous integration to the
  Cinder project for more than 60 different drivers. [1]
  * I removed 25 different storage drivers from Cinder to bring quality to the
project to ensure what was in the Kilo release would work for operators.
I did what I believed was right, regardless of whether it would cost me
re-election for PTL [2].
  * In my conversations with other projects, this has enabled others to
follow the same effort. Continuing this trend of quality cross-project will
be my next focus.

* During my first term as PTL for Cinder, the team (with much respect to Thang
  Pham) worked on an effort to end the rolling upgrade problem, not just for
  Cinder, but for *all* projects.
  * First step was making databases independent from services via Oslo
versioned objects.
  * In Liberty we have a solution coming that helps with RPC versioned messages
to allow upgrading services independently.

* I have attempted to help with diversity in our community.
  * Helped lead our community to raise $17,403 for the Ada Initiative [3],
which was helping address gender-diversity with a focus in open source.
  * For the Vancouver summit, I helped bring in the ally skills workshops from
the Ada Initiative, so that our community can continue to be a welcoming
environment [4].

* Within the Cinder team, I have enabled all to provide good documentation for
  important items in our release notes in Kilo [5] and Liberty [6].
  * Other projects have reached out to me after Kilo feeling motivated for this
same effort. I've explained in the August 2015 Operators midcycle sprint
that I will make this a cross-project effort in order to provide better
communication to our operators and users.

* I started an OpenStack Dev List summary in the OpenStack Weekly Newsletter
  (What you need to know from the developer's list), in order to enable others
  to keep up with the dev list on important cross-project information. [7][8]

* I created the Cinder v2 API which has brought consistency in
  request/responses with other OpenStack projects.
  * I documented Cinder v1 and Cinder v2 API's. Later on I created the Cinder
API reference documentation content. The attempt here was to enable others
to have somewhere to start, to continue quality documentation with
continued developments.

Please help me to do more positive work in this project. It would be an honor 
to be member of your technical committee.


Thank you,
Mike Perez

Official Candidacy: https://review.openstack.org/#/c/229298/2
Review History: https://review.openstack.org/#/q/reviewer:170,n,z
Commit History: https://review.openstack.org/#/q/owner:170,n,z
Stackalytics: http://stackalytics.com/?user_id=thingee
Foundation: https://www.openstack.org/community/members/profile/4840
IRC Freenode: thingee
Website: http://thing.ee


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - 
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:cinder-driver-removals,n,z
[3] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-May/064156.html
[5] - 
https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Block_Storage_.28Cinder.29
[6] - 
https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Block_Storage_.28Cinder.29
[7] - 
http://www.openstack.org/blog/2015/09/openstack-community-weekly-newsletter-sept-12-18/
[8] - 
http://www.openstack.org/blog/2015/09/openstack-weekly-community-newsletter-sept-19-25/


Re: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Dave Wilde
+1 from me as well

--
Dave Wilde
Sent with Airmail


On September 30, 2015 at 03:51:48, Jesse Pretorius 
(jesse.pretor...@gmail.com) wrote:

Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the last 
cycle and always makes an effort to ensure that his responses are made after 
thorough testing where possible. I have found his input to be valuable.

--
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Steven Dake (stdake)
Michal,

The vote was unanimous.  Welcome to the Kolla Core Reviewer team.  I have added 
you to the appropriate gerrit group.

Regards
-steve


From: Steven Dake >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, September 29, 2015 at 3:20 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core 
reviewer

Hi folks,

I am proposing Michal for core reviewer.  Consider my proposal as a +1 vote.  
Michal has done a fantastic job with rsyslog, has done a nice job overall 
contributing to the project for the last cycle, and has really improved his 
review quality and participation over the last several months.

Our process requires 3 +1 votes, with no veto (-1) votes.  If you're uncertain, 
it is best to abstain :)  I will leave the voting open for 1 week until Tuesday 
October 6th or until there is a unanimous decision or a veto.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Jastrzebski, Michal
Thanks everyone!

I really appreciate this and I hope to help make kolla an even better project 
than it is right now (and right now it's pretty cool ;)). We have a great 
community, very diverse and very dedicated. It's a pleasure to work with all of 
you, and let's keep up the great work in the following releases :)

Thank you again,
Michał

> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: Wednesday, September 30, 2015 8:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for 
> core
> reviewer
> 
> Michal,
> 
> The vote was unanimous.  Welcome to the Kolla Core Reviewer team.  I have
> added you to the appropriate gerrit group.
> 
> Regards
> -steve
> 
> 
> From: Steven Dake  >
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>  d...@lists.openstack.org> >
> Date: Tuesday, September 29, 2015 at 3:20 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
>  d...@lists.openstack.org> >
> Subject: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core
> reviewer
> 
> 
> 
>   Hi folks,
> 
>   I am proposing Michal for core reviewer.  Consider my proposal as a
> +1 vote.  Michal has done a fantastic job with rsyslog, has done a nice job
> overall contributing to the project for the last cycle, and has really 
> improved his
> review quality and participation over the last several months.
> 
>   Our process requires 3 +1 votes, with no veto (-1) votes.  If your
> uncertain, it is best to abstain :)  I will leave the voting open for 1 week 
> until
> Tuesday October 6th or until there is a unanimous decision or a  veto.
> 
>   Regards
>   -steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Sukhdev Kapur
Hey Kyle,

I am a bit confused by this. I just checked networking-arista and see that
the co-owner of the project is openstackci.
I also checked [1] and [2], and the settings for networking-arista are
correct as well.

What else is missing that makes you put networking-arista in the second
category?
Please advise.

Thanks
-Sukhdev


[1] - jenkins/jobs/projects.yaml

[2] - zuul/layout.yaml


On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery  wrote:

> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
> networking-infoblox
> networking-powervm
>
> The following pypi registrations did not follow directions to enable
> openstackci has "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
> networking-arista
> networking-l2gw
> networking-vsphere
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] KILO: neutron port-update --allowed-address-pairs action=clear throws an exception

2015-09-30 Thread masoom alam
This patch: https://review.openstack.org/#/c/218551/
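
For reference, the equivalent request can also be sent straight through
python-neutronclient by passing an empty list in the body, which is what
action=clear is ultimately meant to produce. This is an untested sketch;
the credentials, auth URL and port UUID below are placeholders:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

port_id = 'e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64'

# The REST API accepts an empty allowed_address_pairs list even though
# the CLI currently has no way to express it.
neutron.update_port(port_id, {'port': {'allowed_address_pairs': []}})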

On Tue, Sep 29, 2015 at 11:55 PM, masoom alam 
wrote:

> After I applied the patch set 4 manually, I am still getting the following
> exception:
>
> DEBUG: urllib3.util.retry Converted retries value: 0 -> Retry(total=0,
> connect=None, read=None, redirect=0)
> DEBUG: keystoneclient.session RESP:
> DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
> {"message": "Request Failed: internal server error while processing your
> request.", "type": "HTTPInternalServerError", "detail": ""}}
> ERROR: neutronclient.shell Request Failed: internal server error while
> processing your request.
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
> line 766, in run_subcommand
> return run_command(cmd, cmd_parser, sub_argv)
>   File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
> line 101, in run_command
> return cmd.run(known_args)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
> line 535, in run
> obj_updater(_id, body)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 102, in with_params
> ret = self.function(instance, *args, **kwargs)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 549, in update_port
> return self.put(self.port_path % (port), body=body)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 302, in put
> headers=headers, params=params)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 270, in retry_request
> headers=headers, params=params)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 211, in do_request
> self._handle_fault_response(status_code, replybody)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 185, in _handle_fault_response
> exception_handler_v20(status_code, des_error_body)
>   File
> "/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
> 70, in exception_handler_v20
> status_code=status_code)
> InternalServerError: Request Failed: internal server error while
> processing your request.
>
>
> On Mon, Sep 28, 2015 at 9:09 AM, Akihiro Motoki  wrote:
>
>> Are you reading our reply comments?
>> At the moment, there is no way to set allowed-address-pairs to an empty
>> list by using neutron CLI.
>> When action=clear is passed, type=xxx, list=true and specified values are
>> ignored and None is sent to the server.
>> Thus you cannot set allowed-address-pairs to [] with neutron port-update
>> CLI command.
>>
>>
>> 2015-09-28 22:54 GMT+09:00 masoom alam :
>>
>>> This is even not working:
>>>
>>> root@openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>>  --allowed-address-pairs type=list [] action=clear
>>> AllowedAddressPair must contain ip_address
>>>
>>>
>>> root@openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>>  --allowed-address-pairs type=list {} action=clear
>>> AllowedAddressPair must contain ip_address
>>>
>>>
>>>
>>>
>>> On Mon, Sep 28, 2015 at 4:31 AM, masoom alam 
>>> wrote:
>>>
 Please help, it's not working:

 root@openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
 neutron port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9

 +---+-+
 | Field | Value
   |

 +---+-+
 | admin_state_up| True
|
 | allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address":
 "fa:16:3e:69:e9:ef"}|
 | binding:host_id   | openstack-latest-kilo-28-09-2015-masoom
   |
 | binding:profile   | {}
|
 | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug":
 true}  |
 | binding:vif_type  | ovs
   |
 | binding:vnic_type | normal
|
 | device_id | d44b9025-f12b-4f85-8b7b-57cc1138acdd
|
 | device_owner  | compute:nova
|
 | extra_dhcp_opts   |
   |
 | fixed_ips | {"subnet_id":
 

Re: [openstack-dev] KILO: neutron port-update --allowed-address-pairs action=clear throws an exception

2015-09-30 Thread masoom alam
After I applied the patch set 4 manually, I am still getting the following
exception:

DEBUG: urllib3.util.retry Converted retries value: 0 -> Retry(total=0,
connect=None, read=None, redirect=0)
DEBUG: keystoneclient.session RESP:
DEBUG: neutronclient.v2_0.client Error message: {"NeutronError":
{"message": "Request Failed: internal server error while processing your
request.", "type": "HTTPInternalServerError", "detail": ""}}
ERROR: neutronclient.shell Request Failed: internal server error while
processing your request.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
line 766, in run_subcommand
return run_command(cmd, cmd_parser, sub_argv)
  File "/usr/local/lib/python2.7/dist-packages/neutronclient/shell.py",
line 101, in run_command
return cmd.run(known_args)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/neutron/v2_0/__init__.py",
line 535, in run
obj_updater(_id, body)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
102, in with_params
ret = self.function(instance, *args, **kwargs)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
549, in update_port
return self.put(self.port_path % (port), body=body)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
302, in put
headers=headers, params=params)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
270, in retry_request
headers=headers, params=params)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
211, in do_request
self._handle_fault_response(status_code, replybody)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
185, in _handle_fault_response
exception_handler_v20(status_code, des_error_body)
  File
"/usr/local/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line
70, in exception_handler_v20
status_code=status_code)
InternalServerError: Request Failed: internal server error while processing
your request.


On Mon, Sep 28, 2015 at 9:09 AM, Akihiro Motoki  wrote:

> Are you reading our reply comments?
> At the moment, there is no way to set allowed-address-pairs to an empty
> list by using neutron CLI.
> When action=clear is passed, type=xxx, list=true and specified values are
> ignored and None is sent to the server.
> Thus you cannot set allowed-address-pairs to [] with neutron port-update
> CLI command.
>
>
> 2015-09-28 22:54 GMT+09:00 masoom alam :
>
>> This is even not working:
>>
>> root@openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>  --allowed-address-pairs type=list [] action=clear
>> AllowedAddressPair must contain ip_address
>>
>>
>> root@openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack/accrc/admin#
>> neutron port-update e5b05961-e5d0-481b-bbd0-2ce4bbd9ea64
>>  --allowed-address-pairs type=list {} action=clear
>> AllowedAddressPair must contain ip_address
>>
>>
>>
>>
>> On Mon, Sep 28, 2015 at 4:31 AM, masoom alam 
>> wrote:
>>
>>> Please help, it's not working:
>>>
>>> root@openstack-latest-kilo-28-09-2015-masoom:/opt/stack/devstack#
>>> neutron port-show 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>
>>> +---+-+
>>> | Field | Value
>>>   |
>>>
>>> +---+-+
>>> | admin_state_up| True
>>>  |
>>> | allowed_address_pairs | {"ip_address": "10.0.0.201", "mac_address":
>>> "fa:16:3e:69:e9:ef"}|
>>> | binding:host_id   | openstack-latest-kilo-28-09-2015-masoom
>>>   |
>>> | binding:profile   | {}
>>>  |
>>> | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}
>>>  |
>>> | binding:vif_type  | ovs
>>>   |
>>> | binding:vnic_type | normal
>>>  |
>>> | device_id | d44b9025-f12b-4f85-8b7b-57cc1138acdd
>>>  |
>>> | device_owner  | compute:nova
>>>  |
>>> | extra_dhcp_opts   |
>>>   |
>>> | fixed_ips | {"subnet_id":
>>> "bbb6726a-937f-4e0d-8ac2-f82f84272b1f", "ip_address": "10.0.0.3"} |
>>> | id| 2d1bfe12-7db6-4665-9c98-6b9b8a043af9
>>>  |
>>> | mac_address   | fa:16:3e:69:e9:ef
>>>   |
>>> | name  |
>>>   |
>>> | 

Re: [openstack-dev] [Large Deployments Team][Performance Team] New informal working group suggestion

2015-09-30 Thread Dina Belova
Sandeep,

sorry for the late response :) I'm hoping to define 'spheres of interest'
and the most painful moments using people's experience at the Tokyo summit, and
we'll find out what needs to be tested most and what can actually be done. You
can share your ideas of what needs to be tested and focused on in the
https://etherpad.openstack.org/p/openstack-performance-issues etherpad;
this will be a pool of ideas I'm going to use in Tokyo.

I can either create an IRC channel for the discussions or we can use the
#openstack-operators channel, as LDT is using it for communication.
After the Tokyo summit I'm planning to set up a Doodle vote for a time people
will be comfortable with for periodic meetings :)

Cheers,
Dina

On Fri, Sep 25, 2015 at 1:52 PM, Sandeep Raman 
wrote:

> On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova  wrote:
>
>> Hey, OpenStackers!
>>
>> I'm writing to propose to organise a new informal team to work specifically
>> on OpenStack performance issues. This will be a sub team of the already
>> existing Large Deployments Team, and I suppose it will be a good idea to
>> gather people interested in OpenStack performance in one room and identify
>> what issues are worrying contributors, what can be done, and share results
>> of performance research :)
>>
>
> Dina, I'm focused on performance and scale testing [no coding
> background]. How can I contribute and what is the expectation from this
> informal team?
>
>>
>> So please volunteer to take part in this initiative. I hope many people
>> will be interested and we'll be able to use a cross-project session
>> slot to meet in Tokyo and hold a kick-off meeting.
>>
>
> I'm not coming to Tokyo. How could I still be part of the discussions, if any?
> I also feel it would be good to have an IRC channel for perf-scale discussion. Let
> me know your thoughts.
>
>
>> I would like to apologise I'm writing to two mailing lists at the same
>> time, but I want to make sure that all possibly interested people will
>> notice the email.
>>
>> Thanks and see you in Tokyo :)
>>
>> Cheers,
>> Dina
>>
>> --
>>
>> Best regards,
>>
>> Dina Belova
>>
>> Senior Software Engineer
>>
>> Mirantis Inc.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Shared storage space count for Nova

2015-09-30 Thread Kekane, Abhishek
Hi Devs,

Nova shared storage has an issue [1] with counting free space, total space and 
disk_available_least, which affects the hypervisor stats and the scheduler.
I have created an etherpad [2] which contains a detailed problem description and 
a possible solution, with possible challenges for this design.

Later I came to know there is an ML thread [3] initiated by Jay Pipes which proposes 
creating resource pools for disk, CPU, memory, NUMA nodes etc.

IMO this is a good approach and good to address in the Mitaka release. I am eager 
to work on this and will provide any kind of help in implementation, review etc.

Please give us your opinion about the same.

[1] https://bugs.launchpad.net/nova/+bug/1252321
[2] https://etherpad.openstack.org/p/shared-storage-space-count
[3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070564.html


Thank you,

Abhishek Kekane

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Dougal Matthews
Hi,

What is the standard practice for defining public APIs for OpenStack
libraries? As I am working on refactoring and updating tripleo-common, I have
to grep through the projects I know use it to make sure I don't break
anything.

Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.
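To illustrate the __all__ option, a minimal sketch (module and function names
here are made up for the example, not actual tripleo-common code):

    # tripleo_common/plans.py  -- hypothetical module
    __all__ = ['create_plan']

    def create_plan(name):
        """Public: other projects may import and call this."""
        return _validate_name(name)

    def _validate_name(name):
        """Private helper; not part of the public API."""
        return name.strip()

Consumers doing "from tripleo_common.plans import *" would then only pick up
create_plan, and docs/linters can key off the same list.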

Cheers,
Dougal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver support in Devstack

2015-09-30 Thread Jordan Pittier
Hi Sean,
Because the recommended way is now to write devstack plugins.

Jordan

On Wed, Sep 30, 2015 at 3:29 AM, Sean Collins  wrote:

> This review was recently abandoned. Can you provide insight as to why?
>
> On September 17, 2015, at 2:30 PM, "Sean M. Collins" 
> wrote:
>
> You need to remove your Workflow-1.
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
+1

Pretty please don't make it a deployment project; because really some
other project that just specializes in deployment (ansible, chef,
puppet...) can do that better. I do get how public clouds can find a
deployment project useful (it allows customers to try out these new
~fancy~ COE things), but I also tend to think it's short-term thinking
to believe that such a project will last.

Now an integrated COE <-> openstack (keystone, cinder, neutron...)
project I think really does provide value and has some really neat
possiblities to provide a unique value add to openstack; a project that
can deploy some other software, meh, not so much IMHO. Of course an
integrated COE <-> openstack project will of course be much harder,
especially as the COE projects are not openstack 'native' but nothing
worth doing is easy. I hope that it was known that COE projects are a
new (and rapidly shifting) landscape and the going wasn't going to be
easy when magnum was created; don't lose hope! (I'm cheering for you
guys/gals).

My 2 cents,

Josh

On Wed, 30 Sep 2015 00:00:17 -0400
Monty Taylor  wrote:

> *waving hands wildly at details* ...
> 
> I believe that the real win is if Magnum's control plane can integrate 
> the network and storage fabrics that exist in an OpenStack with 
> kube/mesos/swarm. Just deploying is VERY meh. I do not care - it's
> not interesting ... an ansible playbook can do that in 5 minutes.
> OTOH - deploying some kube into a cloud in such a way that it shares
> a tenant network with some VMs that are there - that's good stuff and
> I think actually provides significant value.
> 
> On 09/29/2015 10:57 PM, Jay Lau wrote:
> > +1 to Egor, I think that the final goal of Magnum is container as a
> > service but not coe deployment as a service. ;-)
> >
> > Especially we are also working on Magnum UI, the Magnum UI should
> > export some interfaces to enable end user can create container
> > applications but not only coe deployment.
> >
> > I hope that the Magnum can be treated as another "Nova" which is
> > focusing on container service. I know it is difficult to unify all
> > of the concepts in different coe (k8s has pod, service, rc, swarm
> > only has container, nova only has VM, PM with different
> > hypervisors), but this deserve some deep dive and thinking to see
> > how can move forward.
> >
> > On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz  > > wrote:
> >
> > definitely ;), but the are some thoughts to Tom’s email.
> >
> > I agree that we shouldn't reinvent apis, but I don’t think
> > Magnum should only focus at deployment (I feel we will become
> > another Puppet/Chef/Ansible module if we do it ):)
> > I belive our goal should be seamlessly integrate
> > Kub/Mesos/Swarm to OpenStack ecosystem
> > (Neutron/Cinder/Barbican/etc) even if we need to step in to
> > Kub/Mesos/Swarm communities for that.
> >
> > —
> > Egor
> >
> > From: Adrian Otto  >  > >>
> > Reply-To: "OpenStack Development Mailing List (not for usage
> > questions)"  > 
> >  > >>
> > Date: Tuesday, September 29, 2015 at 08:44
> > To: "OpenStack Development Mailing List (not for usage
> > questions)"  > 
> >  > >>
> > Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
> >
> > This is definitely a topic we should cover in Tokyo.
> >
> > On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
> >  >  > >> wrote:
> >
> >
> > +1
> >
> > From: Tom Cammann  >  > >>
> > Reply-To: "openstack-dev@lists.openstack.org
> > 
> >  > >"
> >  > 
> >  > >>
> > Date: Tuesday, September 29, 2015 at 2:22 AM
> > To: "openstack-dev@lists.openstack.org
> > 
> >  > >"
> >  > 
> > 

[openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Jesse Pretorius
Hi everyone,

I'd like to propose that Steve Lewis (stevelle) be added as a core reviewer.

He has made an effort to consistently keep up with doing reviews in the
last cycle and always makes an effort to ensure that his responses are made
after thorough testing where possible. I have found his input to be
valuable.

-- 
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-30 Thread Gilles Dubreuil


On 30/09/15 03:43, Rich Megginson wrote:
> On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:
>>
>> On 15/09/15 19:55, Sofer Athlan-Guyot wrote:
>>> Gilles Dubreuil  writes:
>>>
 On 15/09/15 06:53, Rich Megginson wrote:
> On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:
>> Hi,
>>
>> Gilles Dubreuil  writes:
>>
>>> A. The 'composite namevar' approach:
>>>
>>>  keystone_tenant {'projectX::domainY': ... }
>>>B. The 'meaningless name' approach:
>>>
>>> keystone_tenant {'myproject': name='projectX',
>>> domain=>'domainY',
>>> ...}
>>>
>>> Notes:
>>>- Actually using both combined should work too with the domain
>>> supposedly overriding the name part of the domain.
>>>- Please look at [1] this for some background between the two
>>> approaches:
>>>
>>> The question
>>> -
>>> Decide between the two approaches, the one we would like to
>>> retain for
>>> puppet-keystone.
>>>
>>> Why it matters?
>>> ---
>>> 1. Domain names are mandatory in every user, group or project.
>>> Besides
>>> the backward compatibility period mentioned earlier, where no domain
>>> means using the default one.
>>> 2. Long term impact
>>> 3. Both approaches are not completely equivalent, which has different
>>> consequences on future usage.
>> I can't see why they couldn't be equivalent, but I may be missing
>> something here.
> I think we could support both.  I don't see it as an either/or
> situation.
>
>>> 4. Being consistent
>>> 5. Therefore the community to decide
>>>
>>> Pros/Cons
>>> --
>>> A.
>> I think it's the B: meaningless approach here.
>>
>>> Pros
>>>   - Easier names
>> That's subjective, creating unique and meaningful names doesn't look easy
>> to me.
> The point is that this allows choice - maybe the user already has some
> naming scheme, or wants to use a more "natural" meaningful name -
> rather
> than being forced into a possibly "awkward" naming scheme with "::"
>
>keystone_user { 'heat domain admin user':
>  name => 'admin',
>  domain => 'HeatDomain',
>  ...
>}
>
>keystone_user_role {'heat domain admin user@::HeatDomain':
>  roles => ['admin']
>  ...
>}
>
>>> Cons
>>>   - Titles have no meaning!
> They have meaning to the user, not necessarily to Puppet.
>
>>>   - Cases where 2 or more resources could exists
> This seems to be the hardest part - I still cannot figure out how
> to use
> "compound" names with Puppet.
>
>>>   - More difficult to debug
> More difficult than it is already? :P
>
>>>   - Titles mismatch when listing the resources (self.instances)
>>>
>>> B.
>>> Pros
>>>   - Unique titles guaranteed
>>>   - No ambiguity between resource found and their title
>>> Cons
>>>   - More complicated titles
>>> My vote
>>> 
>>> I would love to have approach A for easier names.
>>> But I've seen the challenge of maintaining the providers behind the
>>> curtains and the confusion it creates with name/titles and when
>>> not sure
>>> about the domain we're dealing with.
>>> Also I believe that supporting self.instances consistently with
>>> meaningful name is saner.
>>> Therefore I vote B
>> +1 for B.
>>
>> My view is that this should be the advertised way, but the other
>> method
>> (meaningless) should be there if the user need it.
>>
>> So as far as I'm concerned the two idioms should co-exist.  This
>> would
>> mimic what is possible with all puppet resources.  For instance
>> you can:
>>
>> file { '/tmp/foo.bar': ensure => present }
>>
>> and you can
>>
>> file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
>> present }
>>
>> The two refer to the same resource.
> Right.
>
 I disagree, using the name for the title is not creating a composite
 name. The latter requires adding at least another parameter to be part
 of the title.

 Also in the case of the file resource, a path/filename is a unique
 name,
 which is not the case of an Openstack user which might exist in several
 domains.

 I actually added the meaningful name case in:
 http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html


 But that doesn't work very well because without adding the domain to
 the
 name, the following fails:

 keystone_tenant {'project_1': domain => 'domain_A', ...}
 keystone_tenant {'project_1': domain => 'domain_B', ...}

 And adding the domain makes it a 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Peng Zhao
Echoing Monty:

> I believe that the real win is if Magnum's control plane can integrate the
> network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.

We are working on the Cinder (Ceph), Neutron and Keystone integration in
HyperStack [1] and would love to contribute. Another TODO is the multi-tenancy
support in k8s/swarm/mesos. A global scheduler/orchestrator for all tenants
yields a higher utilization rate than separate schedulers for each.

[1] https://launchpad.net/hyperstack - Hyper - Make VM run like Container


On Wed, Sep 30, 2015 at 12:00 PM, Monty Taylor < mord...@inaugust.com > wrote:
*waving hands wildly at details* ...

I believe that the real win is if Magnum's control plane can integrate the
network and storage fabrics that exist in an OpenStack with kube/mesos/swarm.
Just deploying is VERY meh. I do not care - it's not interesting ... an ansible
playbook can do that in 5 minutes. OTOH - deploying some kube into a cloud in
such a way that it shares a tenant network with some VMs that are there - that's
good stuff and I think actually provides significant value.

On 09/29/2015 10:57 PM, Jay Lau wrote:
+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should export
some interfaces to enable end user can create container applications but
not only coe deployment.

I hope that the Magnum can be treated as another “Nova” which is
focusing on container service. I know it is difficult to unify all of
the concepts in different coe (k8s has pod, service, rc, swarm only has
container, nova only has VM, PM with different hypervisors), but this
deserve some deep dive and thinking to see how can move forward.

On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz < e...@walmartlabs.com
> wrote:

definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum
should only focus at deployment (I feel we will become another
Puppet/Chef/Ansible module if we do it ):)
I belive our goal should be seamlessly integrate Kub/Mesos/Swarm to
OpenStack ecosystem (Neutron/Cinder/Barbican/etc) even if we need to
step in to Kub/Mesos/Swarm communities for that.

—
Egor

From: Adrian Otto < adrian.o...@rackspace.com
>>
Reply-To: “OpenStack Development Mailing List (not for usage
questions)“ < openstack-dev@lists.openstack .org
>>
Date: Tuesday, September 29, 2015 at 08:44
To: “OpenStack Development Mailing List (not for usage questions)“
< openstack-dev@lists.openstack .org
>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This is definitely a topic we should cover in Tokyo.

On Sep 29, 2015, at 8:28 AM, Daneyon Hansen (danehans)
< daneh...@cisco.com
>> wrote:


+1

From: Tom Cammann < tom.camm...@hpe.com
>>
Reply-To: “ openstack-dev@lists.openstack .org
>”
< openstack-dev@lists.openstack .org
>>
Date: Tuesday, September 29, 2015 at 2:22 AM
To: “ openstack-dev@lists.openstack .org
>”
< openstack-dev@lists.openstack .org
>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

This has been my thinking in the last couple of months to completely
deprecate the COE specific APIs such as pod/service/rc and container.

As we now support Mesos, Kubernetes and Docker Swarm its going to be
very difficult and probably a wasted effort trying to consolidate
their separate APIs under a single Magnum API.

I'm starting to see Magnum as COEDaaS - Container Orchestration
Engine Deployment as a Service.

On 29/09/15 06:30, Ton Ngo wrote:
Would it make sense to ask the opposite of Wanghua's question:
should pod/service/rc be deprecated if the user can easily get to
the k8s api?
Even if we want to orchestrate these in a Heat template, the
corresponding heat resources can just interface with k8s instead of
Magnum.
Ton Ngo,

Egor Guz ---09/28/2015 10:20:02 PM---Also I belive
docker compose is just command line tool which doesn’t have any api
or scheduling feat

From: Egor Guz < e...@walmartlabs.com
> >
To: “ openstack-dev@lists.openstack .org
“>
< openstack-dev@lists.openstack .org
>>
Date: 09/28/2015 10:20 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
__ __



Also I believe docker compose is just a command line tool which doesn’t
have any api or scheduling features.
But during last Docker Conf hackathon PayPal folks implemented
docker compose executor for Mesos
( https://github.com/mohitsoni/ compose-executor )
which can give you pod like experience.

—
Egor

From: Adrian Otto < adrian.o...@rackspace.com
>>>
Reply-To: “OpenStack Development Mailing List (not for usage
questions)“ < openstack-dev@lists.openstack .org
>>>
Date: Monday, September 28, 2015 at 22:03
To: “OpenStack Development Mailing List (not for usage questions)“
< openstack-dev@lists.openstack .org
>>>
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Wanghua,

I do 

Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Paul Bourke

+1 I actually thought he was core already :)

On 30/09/15 01:24, Sam Yaple wrote:

+1 Michal will be a great addition to the Core team.

On Sep 29, 2015 6:48 PM, "Martin André" > wrote:



On Wed, Sep 30, 2015 at 7:20 AM, Steven Dake (stdake)
> wrote:

Hi folks,

I am proposing Michal for core reviewer.  Consider my proposal
as a +1 vote.  Michal has done a fantastic job with rsyslog, has
done a nice job overall contributing to the project for the last
cycle, and has really improved his review quality and
participation over the last several months.

Our process requires 3 +1 votes, with no veto (-1) votes.  If
you're uncertain, it is best to abstain :)  I will leave the
voting open for 1 week until Tuesday October 6th or until there
is a unanimous decision or a veto.


+1, without hesitation.

Martin

Regards
-steve


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Anant Patil
Hi,

One of the remaining items in convergence is detecting and handling engine
(the engine worker) failures, and here are my thoughts.

Background: Since the work is distributed among heat engines, by some
means heat needs to detect the failure, pick up the tasks from the failed
engine, and re-distribute or run the tasks again.

One simple way is to poll the DB to detect liveness by
checking the table populated by heat-manage. Each engine records its
presence periodically by updating the current timestamp. All the engines
will have a periodic task for checking the DB for liveness of the other
engines. Each engine will check the timestamps updated by the other engines
and, if it finds one which is older than the period of timestamp
updates, it detects a failure. When this happens, the remaining
engines, as and when they detect the failure, will try to acquire the
locks for in-progress resources that were handled by the engine which
died. They will then run the tasks to completion.
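
A rough sketch of that staleness check (the period and the in-memory layout
are illustrative only, not the actual heat-manage schema):

    import datetime

    HEARTBEAT_PERIOD = 60  # seconds between timestamp updates by each engine

    def find_dead_engines(heartbeats, now=None):
        """heartbeats: dict of engine_id -> last updated_at datetime,
        as read from the table populated by heat-manage."""
        now = now or datetime.datetime.utcnow()
        grace = datetime.timedelta(seconds=2 * HEARTBEAT_PERIOD)
        return [eid for eid, ts in heartbeats.items() if now - ts > grace]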

Another option is to use a coordination library like the community-owned
tooz (http://docs.openstack.org/developer/tooz/) which supports
distributed locking and leader election. We use it to elect a leader
among heat engines, and that leader will be responsible for running periodic
tasks that check the state of each engine and distribute the tasks to
other engines when one fails. The advantage, IMHO, will be simplified
heat code. Also, we can move the timeout task to the leader, which will
run timeouts for all the stacks and send a signal for aborting an operation
when a timeout happens. The downside: an external resource like
Zookeeper/memcached etc. is needed for leader election.

In the long run, IMO, using a library like tooz will be useful for heat.
A lot of the boilerplate needed for locking and running centralized tasks
(such as timeouts) will not be needed in heat. Given that we are moving
towards distribution of tasks and horizontal scaling is preferred, it
will be advantageous to use it.
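
For the second option, a minimal sketch of what each engine could do with
tooz (the ZooKeeper URL and the group/member ids are just placeholders):

    import time

    from tooz import coordination

    coord = coordination.get_coordinator('zookeeper://127.0.0.1:2181',
                                         b'engine-1')
    coord.start()

    group = b'heat-engines'
    try:
        coord.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coord.join_group(group).get()

    def on_elected(event):
        # Only the elected leader would run the periodic liveness checks,
        # stack timeouts, and redistribution of work from dead engines.
        pass

    coord.watch_elected_as_leader(group, on_elected)

    while True:
        coord.heartbeat()      # keep this member's presence alive
        coord.run_watchers()   # invokes on_elected if we win the election
        time.sleep(1)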

Please share your thoughts.

- Anant



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread John Belamaric
Kyle,

I have taken care of this for networking-infoblox. Please let me know if 
anything else is necessary.

Thanks,
John

On Sep 30, 2015, at 2:55 PM, Kyle Mestery 
> wrote:

Folks:

In trying to release some networking sub-projects recently, I ran into an issue 
[1] where I couldn't release some projects due to them not being registered on 
pypi. I have a patch out [2] which adds pypi publishing jobs, but before that 
can merge, we need to make sure all projects have pypi registrations in place. 
The following networking sub-projects do NOT have pypi registrations in place 
and need them created following the guidelines here [3]:

networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow the directions to give 
openstackci "Owner" permissions, which allow for the publishing of packages 
to pypi:

networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2] which will then allow the 
neutron-release team the ability to release pypi packages for those packages.

Thanks!
Kyle

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] 
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Kyle Mestery
Sukhdev, you're right, for some reason that one didn't show up in a pypi
search on pypi itself, but does in google. And it is correctly owned [1].

[1] https://pypi.python.org/pypi/networking_arista

On Wed, Sep 30, 2015 at 2:21 PM, Sukhdev Kapur 
wrote:

> Hey Kyle,
>
> I am a bit confused by this. I just checked networking-arista and see that
> the co-owner of the project is openstackci
> I also checked the [1] and [2] and the settings for networking-arista are
> correct as well.
>
> What else is missing which make you put networking-arista in the second
> category?
> Please advise.
>
> Thanks
> -Sukhdev
>
>
> [1] - jenkins/jobs/projects.yaml
> 
> [2] - zuul/layout.yaml
> 
>
> On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery 
> wrote:
>
>> Folks:
>>
>> In trying to release some networking sub-projects recently, I ran into an
>> issue [1] where I couldn't release some projects due to them not being
>> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
>> but before that can merge, we need to make sure all projects have pypi
>> registrations in place. The following networking sub-projects do NOT have
>> pypi registrations in place and need them created following the guidelines
>> here [3]:
>>
>> networking-calico
>> networking-infoblox
>> networking-powervm
>>
>> The following pypi registrations did not follow the directions to give
>> openstackci "Owner" permissions, which allow for the publishing of
>> packages to pypi:
>>
>> networking-ale-omniswitch
>> networking-arista
>> networking-l2gw
>> networking-vsphere
>>
>> Once these are corrected, we can merge [2] which will then allow the
>> neutron-release team the ability to release pypi packages for those
>> packages.
>>
>> Thanks!
>> Kyle
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
>> [2] https://review.openstack.org/#/c/229564/1
>> [3]
>> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] [infra] split integration jobs

2015-09-30 Thread Emilien Macchi
Hello,

Today our Puppet OpenStack Integration jobs are deploying:
- mysql / rabbitmq
- keystone in wsgi with apache
- nova
- glance
- neutron with openvswitch
- cinder
- swift
- sahara
- heat
- ceilometer in wsgi with apache

Currently WIP:
- Horizon
- Trove

The status of the jobs is that some tempest tests (related to compute)
are failing randomly. Most of the failures are because of timeouts:

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/neutron/server.txt.gz#_2015-09-30_18_38_32_425

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/nova/nova-compute.txt.gz#_2015-09-30_18_38_34_799

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/e374fd1/logs/nova/nova-compute.txt.gz#_2015-09-30_18_38_12_636

http://logs.openstack.org/70/229470/1/check/gate-puppet-openstack-integration-dsvm-centos7/1d88f34/logs/nova/nova-compute.txt.gz#_2015-09-30_20_26_34_730

The timeouts happen because Nova needs more than 300s (the default) to spawn
a VM. Neutron is barely able to keep up with the Nova requests.

It's obvious we have reached the Jenkins slaves' resource limits.


We have 3 options:

#1 increase timeouts and try to give more time to services to accomplish
what they need to do.

#2 drop some services from our testing scenario.

#3 split our scenario to have scenario001 and scenario002.

I feel like #1 is not really a scalable idea, since we are going to test
more and more services.

I don't like #2 because we want to test all our modules, not just a
subset of them.

I like #3 but we are going to consume more CI resources (that's why I
put [infra] tag).


Side note: we have some non-voting upgrade jobs that we don't really pay
attention to now, because of lack of time to work on them. They consume 2
slaves. If resources are a problem, we can drop them and replace them with
the 2 new integration jobs.

So I propose option #3 and
* drop the upgrade jobs if infra says we're using too many resources with 2
more jobs
* replace them with the 2 new integration jobs
or option #3 by adding 2 more jobs with a new scenario, where services
would be split.

Any feedback from Infra / Puppet teams is welcome,
Thanks,
-- 
Emilien Macchi



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
Wouldn't that limit the ability to share/optimize resources then and 
increase the number of operators needed (since each COE/bay would need 
its own set of operators managing it)?


If all tenants are in a single openstack cloud, and under say a single 
company, then there isn't much need for management isolation (in fact I 
think said feature is actually an anti-feature in a case like this). 
Especially since that management is already handled by keystone and the 
project/tenant & user associations and such there.


Security isolation I get, but if the COE is already multi-tenant aware 
and that multi-tenancy is connected into the openstack tenancy model, 
then it seems like that point is nil?


I get that the current tenancy boundary is the bay (aka the COE right?) 
but is that changeable? Is that ok with everyone? It seems oddly matched 
to say a company like yahoo, or another private cloud, where one COE would 
I think be preferred and tenancy should go inside of that; vs an 
eggshell-like solution that seems like it would create more management and 
operability pain (now each yahoo internal group that creates a bay/coe 
needs to figure out how to operate it? and resources can't be shared 
and/or orchestrated across bays; hmm, seems like not fully using a COE 
for what it can do?)


Just my random thoughts, not sure how much is fixed in stone.

-Josh

Adrian Otto wrote:

Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever
single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
Swarm). This allows you to use native tools to interact with the COE in
that bay, rather than using an OpenStack specific client. If you want to
use the OpenStack client to create both bays, pods, and containers, you
can do that today. You also have the choice, for example, to run kubectl
against your Kubernetes bay, if you so desire.

Bays offer both a management and security isolation between multiple
tenants. There is no intent to share a single bay between multiple
tenants. In your use case, you would simply create two bays, one for
each of the yahoo-mail.XX tenants. I am not convinced that having an
uber-tenant makes sense.

Adrian


On Sep 30, 2015, at 1:13 PM, Joshua Harlow > wrote:

Adrian Otto wrote:

Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.



So an interesting question, but how is tenancy going to work, will
there be a keystone tenancy <-> COE tenancy adapter? From my
understanding a whole bay (COE?) is owned by a tenant, which is great
for tenants that want to ~experiment~ with a COE but seems disjoint
from the end goal of an integrated COE where the tenancy model of both
keystone and the COE is either the same or is adapted via some adapter
layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us
'
1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
...

All those tenancy information is in keystone, not replicated/synced
into the COE (or in some other COE specific disjoint 

[openstack-dev] [election][TC] TC Candidacy

2015-09-30 Thread Joshua Harlow

Hi folks,

I'd like to propose my candidacy for the technical committee
elections.

I've been involved in OpenStack for around ~four~ years now, working
to help integrate it into various Yahoo! systems and infrastructure.
I've been involved with integration and creation (and maturation) of
many projects (and libraries); for example rpm and venv packaging (via
anvil), cloud-init (a related tool), doc8 (a doc checking tool),
taskflow (an oslo library), tooz (an oslo library), automaton (an oslo
library), kazoo (a dependent library) and more.

As mentioned above, my contributions to OpenStack have been at the
project and library level. My experience in oslo (a group of
folks that specialize in cross-project libraries and reduction of
duplication across projects) has helped me grow and gain knowledge
about how to work across various projects. Now I would like to help
OpenStack projects become ~more~ excellent technically. I'd like to
be able  to leverage (and share) the experience I have gained at
Yahoo! to help make OpenStack that much better (we have tens of
thousands of VMs and thousands of hypervisors, tens of
thousands of baremetal instances split across many clusters with
varying network topology and layout).

I'd like to join the TC to aid some of the on-going work that helps
overhaul pieces of OpenStack to make them more scalable, more fault
tolerant, and in all honesty more ~modern~. I believe we (as a TC)
need to perform ~more~ outreach to projects and provide more advice
and guidance with respect to which technologies will help them scale
in the long term (for example instead of reinventing service discovery
solutions and/or distributed locking, use other open source solutions
that provide it already in a battle-hardened manner) proactively
instead of reactively.

I believe some of this can be solved by trying to make sure the TC is
on-top of: https://review.openstack.org/#/q/status:open+project:openstack
/openstack-specs,n,z and ensuring proposed/accepted cross-project
initiatives do not linger. (I'd personally rather have a cross-project
spec be reviewed and marked as not applicable vs. having a spec
linger.)

In summary, I would like to focus on helping this outreach and
involvement become better (and yes some of that outreach goes beyond
the OpenStack community), helping get OpenStack projects onto scalable
solutions (where applicable) and help make OpenStack become a cloud
solution that can work well for all (instead of work well for small
clouds and not work so well for large ones). Of course on-going
efforts need to conclude (tags for example) first but I hope that as a
TC member I can help promote work on OpenStack that helps the long
term technical sustainability (at small and megascale) of OpenStack
become better.

TLDR; work on getting TC to get more involved with the technical
outreach of OpenStack; reduce focus on approving projects and tags
and hopefully work to help the focus become on the long term technical
sustainability of OpenStack (at small and megascale); using my own
experiences to help in this process //

Thanks for considering me,

Joshua Harlow

--

Yahoo!

http://stackalytics.com/report/users/harlowja

Official submission @ https://review.openstack.org/229591

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Russell Bryant
On 09/30/2015 03:29 PM, Murali R wrote:
> Russell,
> 
> Are any additional option fields used in geneve between hypervisors at
> this time? If so, how do they translate to vxlan when it hits the gw? For
> instance, I am interested to see if we can translate custom header
> info in vxlan to geneve headers and vice-versa.

Yes, geneve options are used. Specifically, there are three pieces of
metadata sent: a logical datapath ID (the logical switch, or network),
the source logical port, and the destination logical port.

Geneve is only used between hypervisors. VxLAN is only used between
hypervisors and a VTEP gateway. In that case, the additional metadata is
not included. There's just a tunnel ID in that case, used to identify
the source/destination logical switch on the VTEP gateway.

> And if there are flow
> commands available to add conditional flows at this time or if it is
> possible to extend if need be.

I'm not quite sure I understand this part.  Could you expand on what you
have in mind?

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow

Adrian Otto wrote:

Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.



So an interesting question, but how is tenancy going to work, will there 
be a keystone tenancy <-> COE tenancy adapter? From my understanding a 
whole bay (COE?) is owned by a tenant, which is great for tenants that 
want to ~experiment~ with a COE but seems disjoint from the end goal of 
an integrated COE where the tenancy model of both keystone and the COE 
is either the same or is adapted via some adapter layer.


For example:

1) Bay that is connected to uber-tenant 'yahoo'

   1.1) Pod inside bay that is connected to tenant 'yahoo-mail.us'
   1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
   ...

All those tenancy information is in keystone, not replicated/synced into 
the COE (or in some other COE specific disjoint system).


Thoughts?

This one becomes especially hard if said COE(s) don't even have a 
tenancy model in the first place :-/



Thanks,

Adrian


On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni  wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu Sent: Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: September-29-15
10:57 PM To: OpenStack Development Mailing List (not for usage
questions) Subject: Re: [openstack-dev] [magnum]swarm + compose =
k8s?



+1 to Egor, I think that the final goal of Magnum is container as a
service but not coe deployment as a service. ;-)

Especially we are also working on Magnum UI, the Magnum UI should
export some interfaces to enable end user can create container
applications but not only coe deployment.

I hope that the Magnum can be treated as another "Nova" which is
focusing on container service. I know it is difficult to unify all
of the concepts in different coe (k8s has pod, service, rc, swarm
only has container, nova only has VM,  PM with different
hypervisors), but this deserve some deep dive and thinking to see
how can move forward.





On Wed, Sep 30, 2015 at 1:11 AM, Egor Guz
wrote: definitely ;), but the are some thoughts to Tom’s email.

I agree that we shouldn't reinvent apis, but I don’t think Magnum

[openstack-dev] [nova] how to address boot from volume failures

2015-09-30 Thread Sean Dague
Today we attempted to branch devstack and grenade for liberty, and are
currently blocked because in liberty with openstack client and
novaclient, it's not possible to boot a server from volume using just
the volume id.

That's because of this change in novaclient -
https://review.openstack.org/#/c/221525/

That was done to resolve the issue that strong schema validation in Nova
started rejecting the kinds of calls that novaclient was making for boot
from volume, because the bdm v1 and v2 code was sharing common code and
got a bit tangled up. So 3 bdm v2 params were being sent on every request.

However, https://review.openstack.org/#/c/221525/ removed the ==1 code
path. If you pass in just {"vda": "$volume_id"} the code falls through,
the volume id is lost, and nothing is booted. This is how the devstack
exercises and osc recommend booting from volume. I expect other people
might be doing that as well.
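
For context, the broken case is the python novaclient call that passes only a
volume id in the bdm dict, e.g. (auth values, flavor id and volume uuid are
placeholders):

    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://127.0.0.1:5000/v2.0')

    # Boot from volume using just the volume id for vda -- the ==1 code
    # path discussed above; with the 221525 change this silently drops
    # the volume id and nothing is booted.
    nova.servers.create(name='bfv-test',
                        image=None,
                        flavor='42',          # flavor id, placeholder
                        block_device_mapping={'vda': 'VOLUME_UUID'})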

There seem to be a few options going forward:

1) fix the client without a revert

This would bring back an ==1 code path, which is basically just setting
volume_id, and moving on. This means that until people upgrade their
client they lose access to this function on the server.

2) revert the client and loosen up schema validation

If we revert the client to the old code, we also need to accept the fact
that novaclient has been sending 3 extra parameters to this API call
since as long as people can remember. We'd need a nova schema relax to
let those in and just accept that people are going to pass those.

3) fix osc and the novaclient cli to not use this code path. This will also
require that everyone upgrades both of those to not explode in the common
case of specifying boot from volume on the command line.

I slightly lean towards #2 on a compatibility front, but it's a chunk of
change at this point in the cycle, so I don't think there is a clear win
path. It would be good to collect opinions here. The bug tracking this
is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Sukhdev Kapur
Hey Kyle,

I have updated the ownership of networking-l2gw. I have +1'd your patch. As
soon as it merges the ACLs for the L2GW project will be fine as well.

Thanks for confirming about the networking-arista.

With this both of these packages should be good to go.

Thanks
-Sukhdev


On Wed, Sep 30, 2015 at 11:55 AM, Kyle Mestery  wrote:

> Folks:
>
> In trying to release some networking sub-projects recently, I ran into an
> issue [1] where I couldn't release some projects due to them not being
> registered on pypi. I have a patch out [2] which adds pypi publishing jobs,
> but before that can merge, we need to make sure all projects have pypi
> registrations in place. The following networking sub-projects do NOT have
> pypi registrations in place and need them created following the guidelines
> here [3]:
>
> networking-calico
> networking-infoblox
> networking-powervm
>
> The following pypi registrations did not follow the directions to give
> openstackci "Owner" permissions, which allow for the publishing of
> packages to pypi:
>
> networking-ale-omniswitch
> networking-arista
> networking-l2gw
> networking-vsphere
>
> Once these are corrected, we can merge [2] which will then allow the
> neutron-release team the ability to release pypi packages for those
> packages.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
> [2] https://review.openstack.org/#/c/229564/1
> [3]
> http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Adrian Otto
Joshua,

The tenancy boundary in Magnum is the bay. You can place whatever single-tenant 
COE you want into the bay (Kubernetes, Mesos, Docker Swarm). This allows you to 
use native tools to interact with the COE in that bay, rather than using an 
OpenStack specific client. If you want to use the OpenStack client to create 
both bays, pods, and containers, you can do that today. You also have the 
choice, for example, to run kubectl against your Kubernetes bay, if you so 
desire.

Bays offer both a management and security isolation between multiple tenants. 
There is no intent to share a single bay between multiple tenants. In your use 
case, you would simply create two bays, one for each of the yahoo-mail.XX 
tenants. I am not convinced that having an uber-tenant makes sense.

Adrian

On Sep 30, 2015, at 1:13 PM, Joshua Harlow 
> wrote:

Adrian Otto wrote:
Thanks everyone who has provided feedback on this thread. The good
news is that most of what has been asked for from Magnum is actually
in scope already, and some of it has already been implemented. We
never aimed to be a COE deployment service. That happens to be a
necessity to achieve our more ambitious goal: We want to provide a
compelling Containers-as-a-Service solution for OpenStack clouds in a
way that offers maximum leverage of what’s already in OpenStack,
while giving end users the ability to use their favorite tools to
interact with their COE of choice, with the multi-tenancy capability
we expect from all OpenStack services, and simplified integration
with a wealth of existing OpenStack services (Identity,
Orchestration, Images, Networks, Storage, etc.).

The areas we have disagreement are whether the features offered for
the k8s COE should be mirrored in other COE’s. We have not attempted
to do that yet, and my suggestion is to continue resisting that
temptation because it is not aligned with our vision. We are not here
to re-invent container management as a hosted service. Instead, we
aim to integrate prevailing technology, and make it work great with
OpenStack. For example, adding docker-compose capability to Magnum is
currently out-of-scope, and I think it should stay that way. With
that said, I’m willing to have a discussion about this with the
community at our upcoming Summit.

An argument could be made for feature consistency among various COE
options (Bay Types). I see this as a relatively low value pursuit.
Basic features like integration with OpenStack Networking and
OpenStack Storage services should be universal. Whether you can
present a YAML file for a bay to perform internal orchestration is
not important in my view, as long as there is a prevailing way of
addressing that need. In the case of Docker Bays, you can simply
point a docker-compose client at it, and that will work fine.


So an interesting question, but how is tenancy going to work, will there be a 
keystone tenancy <-> COE tenancy adapter? From my understanding a whole bay 
(COE?) is owned by a tenant, which is great for tenants that want to 
~experiment~ with a COE but seems disjoint from the end goal of an integrated 
COE where the tenancy model of both keystone and the COE is either the same or 
is adapted via some adapter layer.

For example:

1) Bay that is connected to uber-tenant 'yahoo'

  1.1) Pod inside bay that is connected to tenant 
'yahoo-mail.us'
  1.2) Pod inside bay that is connected to tenant 'yahoo-mail.in'
  ...

All those tenancy information is in keystone, not replicated/synced into the 
COE (or in some other COE specific disjoint system).

Thoughts?

This one becomes especially hard if said COE(s) don't even have a tenancy model 
in the first place :-/

Thanks,

Adrian

On Sep 30, 2015, at 8:58 AM, Devdatta
Kulkarni>
  wrote:

+1 Hongbin.

From perspective of Solum, which hopes to use Magnum for its
application container scheduling requirements, deep integration of
COEs with OpenStack services like Keystone will be useful.
Specifically, I am thinking that it will be good if Solum can
depend on Keystone tokens to deploy and schedule containers on the
Bay nodes instead of having to use COE specific credentials. That
way, container resources will become first class components that
can be monitored using Ceilometer, access controlled using
Keystone, and managed from within Horizon.

Regards, Devdatta


From: Hongbin Lu> Sent: 
Wednesday, September
30, 2015 9:44 AM To: OpenStack Development Mailing List (not for
usage questions) Subject: Re: [openstack-dev] [magnum]swarm +
compose = k8s?


+1 from me as well.

I think what makes Magnum appealing is the promise to provide
container-as-a-service. I see coe deployment as a helper to achieve
the promise, instead of  the main goal.

Best regards, Hongbin


From: Jay Lau [mailto:jay.lau@gmail.com] Sent: 

[openstack-dev] [tripleo] How to selectively enable new services?

2015-09-30 Thread Steven Hardy
Hi all,

So I wanted to start some discussion on $subject, because atm we have a
couple of patches adding support for new services (which is great!):

Manila: https://review.openstack.org/#/c/188137/
Sahara: https://review.openstack.org/#/c/220863/

So, firstly I am *not* aiming to be any impediment to those landing, and I
know they have been in-progress for some time.  These look pretty close to
being ready to land and overall I think new service integration is a very
good thing for TripleO.

However, given the recent evolution towards the "big tent" of OpenStack, I
wanted to get some ideas on what an effective way to selectively enable
services would look like, as I can imagine not all users of TripleO want to
deploy all-the-services all of the time.

I was initially thinking we simply have e.g "EnableSahara" as a boolean in
overcloud-without-mergepy, and wire that in to the puppet manifests, such
that the services are not configured/started.  However comments in the
Sahara patch indicate it may be more complex than that, in particular
requiring changes to the loadbalancer puppet code and os-cloud-config.

This is all part of the more general "composable roles" problem, but is
there an initial step we can take, which will make it easy to simply
disable services (and ideally not pay the cost of configuring them at all)
on deployment?

Interested in people's thoughts on this - has anyone already looked into it,
or is there any existing pattern we can reuse?

As mentioned above, not aiming to block anything on this, I guess we can
figure it out and retro-fit it to whatever services folks want to
selectively disable later if needed.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-30 Thread Thomas Goirand
On 09/25/2015 05:00 PM, Ryan Brown wrote:
> I believe the 72 limit is derived from 80-8 (terminal width - tab width)

If I'm not mistaken, 72 is because of the email format limitation.

Thomas


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Russell Bryant
On 09/30/2015 04:09 PM, Murali R wrote:
> Russel,
> 
> For instance, if I have an NSH header embedded in VXLAN in the incoming
> packet, I was wondering if I can transfer that to Geneve options
> somehow. This is just an example. I may have other header info, either
> in VXLAN or IP, that needs to enter the OVN network, and if we have
> generic OVS commands to handle that, it will be useful. If such commands
> don't exist but it's extensible, then I can do that as well.

Well, OVS itself doesn't support NSH yet.  There are patches on the OVS
dev mailing list for it, though.

http://openvswitch.org/pipermail/dev/2015-September/060678.html

Are you interested in SFC?  I have been thinking about that and don't
think it will be too hard to add support for it in OVN.  I'm not sure
when I'll work on it, but it's high on my personal todo list.  If you
want to do it with NSH, that will require OVS support first, of course.

If you're interested in more generic extensibility of OVN, there's at
least going to be one talk about that at the OVS conference in November.
 If you aren't there, it will be on video.  I'm not sure what ideas they
will be proposing.

Since we're on the OpenStack list, I assume we're talking in the
OpenStack context.  For any feature we're talking about, we also have to
talk about how that is exposed through the Neutron API.  So, "generic
extensibility" doesn't immediately make sense for the Neutron case.

SFC certainly makes sense.  There's a Neutron project for adding an SFC
API and from what I've seen so far, I think we'll be able to extend OVN
such that it can back that API.

-- 
Russell Bryant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Murali R
Russel,

For instance, if I have an NSH header embedded in VXLAN in the incoming
packet, I was wondering if I can transfer that to Geneve options somehow.
This is just an example. I may have other header info, either in VXLAN or
IP, that needs to enter the OVN network, and if we have generic OVS commands
to handle that, it will be useful. If such commands don't exist but it's
extensible, then I can do that as well.





On Wed, Sep 30, 2015 at 12:49 PM, Russell Bryant  wrote:

> On 09/30/2015 03:29 PM, Murali R wrote:
> > Russell,
> >
> > Are any additional options fields used in geneve between hypervisors at
> > this time? If so, how do they translate to vxlan when it hits gw? For
> > instance, I am interested to see if we can translate a custom header
> > info in vxlan to geneve headers and vice-versa.
>
> Yes, geneve options are used. Specifically, there are three pieces of
> metadata sent: a logical datapath ID (the logical switch, or network),
> the source logical port, and the destination logical port.
>
> Geneve is only used between hypervisors. VxLAN is only used between
> hypervisors and a VTEP gateway. In that case, the additional metadata is
> not included. There's just a tunnel ID in that case, used to identify
> the source/destination logical switch on the VTEP gateway.
>
> > And if there are flow
> > commands available to add conditional flows at this time or if it is
> > possible to extend if need be.
>
> I'm not quite sure I understand this part.  Could you expand on what you
> have in mind?
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet][keystone] Choose domain names with 'composite namevar' or 'meaningless name'?

2015-09-30 Thread Rich Megginson

On 09/30/2015 11:43 AM, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 30/09/15 03:43, Rich Megginson wrote:

On 09/28/2015 10:18 PM, Gilles Dubreuil wrote:

On 15/09/15 19:55, Sofer Athlan-Guyot wrote:

Gilles Dubreuil  writes:


On 15/09/15 06:53, Rich Megginson wrote:

On 09/14/2015 02:30 PM, Sofer Athlan-Guyot wrote:

Hi,

Gilles Dubreuil  writes:


A. The 'composite namevar' approach:

   keystone_tenant {'projectX::domainY': ... }

B. The 'meaningless name' approach:

   keystone_tenant {'myproject': name => 'projectX', domain => 'domainY', ... }

Notes:
- Actually using both combined should work too, with the domain parameter
supposedly overriding the domain given in the name.
- Please look at [1] for some background on the two approaches:

The question
-
Decide between the two approaches, the one we would like to
retain for
puppet-keystone.

Why it matters?
---
1. Domain names are mandatory in every user, group or project.
Besides
the backward compatibility period mentioned earlier, where no domain
means using the default one.
2. Long term impact
3. Both approaches are not completely equivalent, which has different
consequences on future usage.

I can't see why they couldn't be equivalent, but I may be missing
something here.

I think we could support both.  I don't see it as an either/or
situation.


4. Being consistent
5. Therefore the community to decide

Pros/Cons
--
A.

I think it's the B: meaningless approach here.


 Pros
   - Easier names

That's subjective; creating unique and meaningful names doesn't look easy
to me.

The point is that this allows choice - maybe the user already has some
naming scheme, or wants to use a more "natural" meaningful name -
rather
than being forced into a possibly "awkward" naming scheme with "::"

keystone_user { 'heat domain admin user':
  name => 'admin',
  domain => 'HeatDomain',
  ...
}

keystone_user_role {'heat domain admin user@::HeatDomain':
  roles => ['admin']
  ...
}


 Cons
   - Titles have no meaning!

They have meaning to the user, not necessarily to Puppet.


   - Cases where 2 or more resources could exists

This seems to be the hardest part - I still cannot figure out how
to use
"compound" names with Puppet.


   - More difficult to debug

More difficult than it is already? :P


   - Titles mismatch when listing the resources (self.instances)

B.
 Pros
   - Unique titles guaranteed
   - No ambiguity between resource found and their title
 Cons
   - More complicated titles
My vote

I would love to have the approach A for easier name.
But I've seen the challenge of maintaining the providers behind the
curtains and the confusion it creates with name/titles and when
not sure
about the domain we're dealing with.
Also I believe that supporting self.instances consistently with
meaningful name is saner.
Therefore I vote B

+1 for B.

My view is that this should be the advertised way, but the other method
(meaningless) should be there if the user needs it.

So as far as I'm concerned the two idioms should co-exist.  This
would
mimic what is possible with all puppet resources.  For instance
you can:

 file { '/tmp/foo.bar': ensure => present }

and you can

 file { 'meaningless_id': name => '/tmp/foo.bar', ensure =>
present }

The two refer to the same resource.

Right.


I disagree, using the name for the title is not creating a composite
name. The latter requires adding at least another parameter to be part
of the title.

Also in the case of the file resource, a path/filename is a unique
name,
which is not the case of an Openstack user which might exist in several
domains.

I actually added the meaningful name case in:
http://lists.openstack.org/pipermail/openstack-dev/2015-September/074325.html


But that doesn't work very well because without adding the domain to
the
name, the following fails:

keystone_tenant {'project_1': domain => 'domain_A', ...}
keystone_tenant {'project_1': domain => 'domain_B', ...}

And adding the domain makes it a de-facto 'composite name'.

I agree that my example is not similar to what the keystone provider has
to do.  What I wanted to point out is that users in puppet should be used
to having this kind of *interface*, one where you put something
meaningful in the title and one where you put something meaningless.
The fact that the meaningful one is a compound one shouldn't matter to
the user.


There is a big blocker to making use of the domain name as a parameter.
The issue is the limitation of autorequire.

Because autorequire doesn't support any parameter other than the
resource type, and expects the resource title (or a list of them) [1].

So for instance, if keystone_user requires the tenant project1 from
domain1, then the resource name must be 'project1::domain1', because
otherwise there is no way to specify 'domain1':


Yeah, I 

Re: [openstack-dev] [stackalytics] Broken stats after project rename

2015-09-30 Thread Ilya Shakhat
Hi Jesse,

Thanks for letting us know. The Stackalytics team will fix the issue during
the day.

--Ilya

2015-09-30 12:19 GMT+03:00 Jesse Pretorius :

> Hi everyone,
>
> After the rename of os-ansible-deployment to openstack-ansible it appears
> that all git-related stats (eg: commits) prior to the rename have been lost.
>
> http://stackalytics.com/?metric=commits=openstack-ansible
>
> Can anyone assist with rectifying this?
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-09-30 Thread Vladimir Kuklin
+1 to Igor. Do we have the voting system set up?

On Wed, Sep 30, 2015 at 4:35 AM, Igor Kalnitsky 
wrote:

> > * September 29 - October 8: PTL elections
>
> So, it's in progress. Where can I vote? I didn't receive any emails.
>
> On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
>  wrote:
> >> On 18 Sep 2015, at 04:39, Sergey Lukjanov 
> wrote:
> >>
> >>
> >> Time line:
> >>
> >> PTL elections
> >> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL
> position
> >> * September 29 - October 8: PTL elections
> >
> > Just a reminder that we have a deadline for candidates today.
> >
> > Regards,
> > --
> > Tomasz 'Zen' Napierala
> > Product Engineering - Poland
> >
> >
> >
> >
> >
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
35bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Devstack][Sahara][Cinder] BlockDeviceDriver support in Devstack

2015-09-30 Thread Ivan Kolodyazhny
Sean,

It was already implemented as a devstack plugin.

Regards,
Ivan Kolodyazhny

On Wed, Sep 30, 2015 at 11:47 AM, Jordan Pittier  wrote:

> Hi Sean,
> Because the recommended way is now to write devstack plugins.
>
> Jordan
>
> On Wed, Sep 30, 2015 at 3:29 AM, Sean Collins  wrote:
>
>> This review was recently abandoned. Can you provide insight as to why?
>>
>> On September 17, 2015, at 2:30 PM, "Sean M. Collins" 
>> wrote:
>>
>> You need to remove your Workflow-1.
>>
>> --
>> Sean M. Collins
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] PTL & Component Leads elections

2015-09-30 Thread Igor Kalnitsky
> * September 29 - October 8: PTL elections

So, it's in progress. Where can I vote? I didn't receive any emails.

On Mon, Sep 28, 2015 at 7:31 PM, Tomasz Napierala
 wrote:
>> On 18 Sep 2015, at 04:39, Sergey Lukjanov  wrote:
>>
>>
>> Time line:
>>
>> PTL elections
>> * September 18 - September 28, 21:59 UTC: Open candidacy for PTL position
>> * September 29 - October 8: PTL elections
>
> Just a reminder that we have a deadline for candidates today.
>
> Regards,
> --
> Tomasz 'Zen' Napierala
> Product Engineering - Poland
>
>
>
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-30 Thread Ihar Hrachyshka

> On 30 Sep 2015, at 12:53, Miguel Angel Ajo  wrote:
> 
> 
> 
> Ihar Hrachyshka wrote:
>>> On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:
>>> 
>>> Hi Ihar,
>>> 
>>> Ihar Hrachyshka :
> Miguel Angel Ajo :
>> Do you have a rough idea of what operations you may need to do?
> Right now, what bagpipe driver for networking-bgpvpn needs to interact 
> with is:
> - int_br OVSBridge (read-only)
> - tun_br OVSBridge (add patch port, add flows)
> - patch_int_ofport port number (read-only)
> - local_vlan_map dict (read-only)
> - setup_entry_for_arp_reply method (called to add static ARP entries)
> 
 Sounds very tightly coupled to OVS agent.
>> Please bear in mind, the extension interface will be available from 
>> different agent types
>> (OVS, SR-IOV, [eventually LB]), so this interface you're talking about 
>> could also serve as
>> a translation driver for the agents (where the translation is possible), 
>> I totally understand
>> that most extensions are specific agent bound, and we must be able to 
>> identify
>> the agent we're serving back exactly.
> Yes, I do have this in mind, but what we've identified for now seems to 
> be OVS specific.
 Indeed it does. Maybe you can try to define the needed pieces in high 
 level actions, not internal objects you need to access to. Like ‘- connect 
 endpoint X to Y’, ‘determine segmentation id for a network’ etc.
>>> I've been thinking about this, but would tend to reach the conclusion that 
>>> the things we need to interact with are pretty hard to abstract out into 
>>> something that would be generic across different agents.  Everything we 
>>> need to do in our case relates to how the agents use bridges and represent 
>>> networks internally: linuxbridge has one bridge per Network, while OVS has 
>>> a limited number of bridges playing different roles for all networks with 
>>> internal segmentation.
>>> 
>>> To look at the two things you  mention:
>>> - "connect endpoint X to Y" : what we need to do is redirect the traffic 
>>> destined to the gateway of a Neutron network, to the thing that will do 
>>> the MPLS forwarding for the right BGP VPN context (called VRF), in our case 
>>> br-mpls (that could be done with an OVS table too) ; that action might be 
>>> abstracted out to hide the details specific to OVS, but I'm not sure on how 
>>> to  name the destination in a way that would be agnostic to these details, 
>>> and this is not really relevant to do until we have a relevant context in 
>>> which the linuxbridge would pass packets to something doing MPLS forwarding 
>>> (OVS is currently the only option we support for MPLS forwarding, and it 
>>> does not really make sense to mix linuxbridge for Neutron L2/L3 and OVS for 
>>> MPLS)
>>> - "determine segmentation id for a network": this is something really 
>>> OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, and 
>>> does not rely on internal segmentation
>>> 
>>> Completely abstracting out packet forwarding pipelines in OVS and 
>>> linuxbridge agents would possibly allow defining an interface that agent 
>>> extensions could use without knowing anything specific to OVS or the 
>>> linuxbridge, but I believe this is a very significant task to tackle.
>> 
>> If you look for a clean way to integrate with reference agents, then it’s 
>> something that we should try to achieve. I agree it’s not an easy thing.
>> 
>> Just an idea: can we have a resource for traffic forwarding, similar to 
>> security groups? I know folks are not ok with extending security groups API 
>> due to compatibility reasons, so maybe fwaas is the place to experiment with 
>> it.
>> 
>>> Hopefully it will be acceptable to create an interface, even if it exposes a 
>>> set of methods specific to the linuxbridge agent and a set of methods 
>>> specific to the OVS agent.  That would mean that the agent extension that 
>>> can work in both contexts (not our case yet) would check the agent type 
>>> before using the first set or the second set.
>> 
>> The assumption of the whole idea of l2 agent extensions is that they are 
>> agent agnostic. In case of QoS, we implemented a common QoS extension that 
>> can be plugged in any agent [1], and a set of backend drivers (atm it’s just 
>> sr-iov [2] and ovs [3]) that are selected based on the driver type argument 
>> passed into the extension manager [4][5]. Your extension could use similar 
>> approach to select the backend.
>> 
>> [1]: 
>> https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions/qos.py#n169
>> [2]: 
>> https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py
>> [3]: 
>> https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py
>> [4]: 
>> 

Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-30 Thread Ivan Kolodyazhny
Sean,

The openstack client has supported Cinder API v2 since Liberty. What is the
right way to fix grenade?

Regards,
Ivan Kolodyazhny,
Web Developer

On Wed, Sep 30, 2015 at 1:32 PM, Sean Dague  wrote:

> On 09/29/2015 01:32 PM, Mark Voelker wrote:
> >
> > Mark T. Voelker
> >
> >
> >
> >> On Sep 29, 2015, at 12:36 PM, Matt Fischer 
> wrote:
> >>
> >>
> >>
> >> I agree with John Griffith. I don't have any empirical evidence to back
> >> my "feelings" on that one but it's true that we weren't able to enable
> >> Cinder v2 until now.
> >>
> >> Which makes me wonder: When can we actually deprecate an API version? I
> >> *feel* we are fast to jump on the deprecation when the replacement isn't
> >> 100% ready yet for several versions.
> >>
> >> --
> >> Mathieu
> >>
> >>
> >> I don't think it's too much to ask that versions can't be deprecated
> until the new version is 100% working, passing all tests, and the clients
> (at least python-xxxclients) can handle it without issues. Ideally I'd like
> to also throw in the criteria that devstack, rally, tempest, and other
> services are all using and exercising the new API.
> >>
> >> I agree that things feel rushed.
> >
> >
> > FWIW, the TC recently created an assert:follows-standard-deprecation
> tag.  Ivan linked to a thread in which Thierry asked for input on it, but
> FYI the final language as it was approved last week [1] is a bit different
> than originally proposed.  It now requires one release plus 3 linear months
> of deprecated-but-still-present-in-the-tree as a minimum, and recommends at
> least two full stable releases for significant features (an entire API
> version would undoubtedly fall into that bucket).  It also requires that a
> migration path will be documented.  However to Matt’s point, it doesn’t
> contain any language that says specific things like:
> >
> > In the case of major API version deprecation:
> > * $oldversion and $newversion must both work with
> [cinder|nova|whatever]client and openstackclient during the deprecation
> period.
> > * It must be possible to run $oldversion and $newversion concurrently on
> the servers to ensure end users don’t have to switch overnight.
> > * Devstack uses $newversion by default.
> > * $newversion works in Tempest/Rally/whatever else.
> >
> > What it *does* do is require that a thread be started here on
> openstack-operators [2] so that operators can provide feedback.  I would
> hope that feedback like “I can’t get clients to use it so please don’t
> remove it yet” would be taken into account by projects, which seems to be
> exactly what’s happening in this case with Cinder v1.  =)
> >
> > I’d hazard a guess that the TC would be interested in hearing about
> whether you think that plan is a reasonable one (and given that TC election
> season is upon us, candidates for the TC probably would too).
> >
> > [1] https://review.openstack.org/#/c/207467/
> > [2]
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
> >
> > At Your Service,
> >
> > Mark T. Voelker
>
> I would agree that the amount of breaks even in our own system has been
> substantial here, and I'm personally feeling we should probably revert
> the devstack change that turns off v1. It looks like it wasn't just one
> client that got caught in this, but most of them.
>
> This feels like this transition has been too much stick, and not enough
> carrot. IIRC openstack client wouldn't work with cinder v2 until a
> couple of months ago, as that made me do some weird things in grenade in
> building volumes. [1]
>
> -Sean
>
> 1.
>
> https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L40-L41
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-30 Thread thomas.morin

Hi Irena,

Irena Berezovsky :
> I would like to second  Kevin. This can be done in a similar way as 
ML2 Plugin passed plugin_context
> to ML2 Extension Drivers: 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/driver_api.py#L910.


Yes, this would be similar and could indeed be named agent_context.

However, contrary to the ML2 plugin, which provides a context when calling 
most driver methods, I don't think we would need a context to be passed at 
each call of an AgentCoreResourceExtension here; providing an interface to 
hook into the agent at initialize seems enough to me.
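
For concreteness, a rough sketch of what such an interface handed to
initialize() could look like (all names below are made up for illustration
and are not an existing Neutron API):

class OVSAgentExtensionAPI(object):
    """Hypothetical facade built by the OVS agent and passed to extensions.

    It exposes only the pieces extensions are allowed to touch, so the
    agent internals can be refactored without breaking extensions.
    """

    def __init__(self, int_br, tun_br, local_vlan_map, arp_entry_cb):
        self._int_br = int_br                # integration bridge (read-only use)
        self._tun_br = tun_br                # tunnel bridge (patch ports, flows)
        self._local_vlan_map = local_vlan_map
        self._arp_entry_cb = arp_entry_cb    # e.g. setup_entry_for_arp_reply

    def request_int_br(self):
        return self._int_br

    def request_tun_br(self):
        return self._tun_br

    def get_local_vlan(self, network_id):
        return self._local_vlan_map.get(network_id)

    def add_arp_entry(self, *args, **kwargs):
        return self._arp_entry_cb(*args, **kwargs)


class BagpipeAgentExtension(object):
    """Sketch of an extension consuming only the facade, never the agent."""

    def initialize(self, connection, driver_type, agent_api=None):
        # agent_api is the extra hook discussed in this thread
        self.agent_api = agent_api
        tun_br = agent_api.request_tun_br()
        # ... add MPLS patch ports and flows on tun_br here ...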


Thanks,

-Thomas



On Fri, Sep 25, 2015 at 11:57 AM, Kevin Benton > wrote:


    I think the 4th of the options you proposed would be the best. We
    don't want to give extensions direct access to the agent object or else
    we will run the risk of breaking extensions all of the time during
    any kind of reorganization or refactoring. Having a well-defined API
    in between will give us flexibility to move things around.

   On Fri, Sep 25, 2015 at 1:32 AM, > wrote:

   Hi everyone,

   (TL;DR: we would like an L2 agent extension to be able to call
   methods on the agent class, e.g. OVSAgent)

   In the networking-bgpvpn project, we need the reference driver
   to interact with the ML2 openvswitch agent with new RPCs to
   allow exchanging information with the BGP VPN implementation
   running on the compute nodes. We also need the OVS agent to
   setup specific things on the OVS bridges for MPLS traffic.

    To extend the agent behavior, we currently create a new agent by
    mimicking the main() in ovs_neutron_agent.py, but instead of
    instantiating OVSAgent, we instantiate a class that overloads the
    OVSAgent class with the additional behavior we need [1].

   This is really not the ideal way of extending the agent, and we
   would prefer using the L2 agent extension framework [2].

    Using the L2 agent extension framework would work, but only
    partially: it would easily allow us to register our RPC
    consumers, but it would not let us access some
    datastructures/methods of the agent that we need to use:
    setup_entry_for_arp_reply and local_vlan_map, and access to the
    OVSBridge objects to manipulate OVS ports.

    I've filed an RFE bug to track this issue [5].

   We would like something like one of the following:
   1) augment the L2 agent extension interface
   (AgentCoreResourceExtension) to give access to the agent object
   (and thus let the extension call methods of the agent) by giving
   the agent as a parameter of the initialize method [4]
   2) augment the L2 agent extension interface
   (AgentCoreResourceExtension) to give access to the agent object
   (and thus let the extension call methods of the agent) by giving
   the agent as a parameter of a new setAgent method
   3) augment the L2 agent extension interface
   (AgentCoreResourceExtension) to give access only to
   specific/chosen methods on the agent object, for instance by
   giving a dict as a parameter of the initialize method [4], whose
   keys would be method names, and values would be pointer to these
   methods on the agent object
   4) define a new interface with methods to access things inside
   the agent, this interface would be implemented by an object
   instantiated by the agent, and that the agent would pass to the
   extension manager, thus allowing the extension manager to passe
   the object to an extension through the initialize method of
   AgentCoreResourceExtension [4]

   Any feedback on these ideas...?
   Of course any other idea is welcome...

   For the sake of triggering reaction, the question could be
   rephrased as: if we submit a change doing (1) above, would it
   have a reasonable chance of merging ?

   -Thomas

   [1]
   
https://github.com/openstack/networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/bagpipe/ovs_agent/ovs_bagpipe_neutron_agent.py
   [2] https://review.openstack.org/#/c/195439/
   [3]
   
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L30
   [4]
   
https://github.com/openstack/neutron/blob/master/neutron/agent/l2/agent_extension.py#L28
   [5] https://bugs.launchpad.net/neutron/+bug/1499637

   

Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-30 Thread Sean Dague
On 09/29/2015 01:32 PM, Mark Voelker wrote:
> 
> Mark T. Voelker
> 
> 
> 
>> On Sep 29, 2015, at 12:36 PM, Matt Fischer  wrote:
>>
>>
>>
>> I agree with John Griffith. I don't have any empirical evidence to back
>> my "feelings" on that one but it's true that we weren't able to enable
>> Cinder v2 until now.
>>
>> Which makes me wonder: When can we actually deprecate an API version? I
>> *feel* we are fast to jump on the deprecation when the replacement isn't
>> 100% ready yet for several versions.
>>
>> --
>> Mathieu
>>
>>
>> I don't think it's too much to ask that versions can't be deprecated until 
>> the new version is 100% working, passing all tests, and the clients (at 
>> least python-xxxclients) can handle it without issues. Ideally I'd like to 
>> also throw in the criteria that devstack, rally, tempest, and other services 
>> are all using and exercising the new API.
>>
>> I agree that things feel rushed.
> 
> 
> FWIW, the TC recently created an assert:follows-standard-deprecation tag.  
> Ivan linked to a thread in which Thierry asked for input on it, but FYI the 
> final language as it was approved last week [1] is a bit different than 
> originally proposed.  It now requires one release plus 3 linear months of 
> deprecated-but-still-present-in-the-tree as a minimum, and recommends at 
> least two full stable releases for significant features (an entire API 
> version would undoubtedly fall into that bucket).  It also requires that a 
> migration path will be documented.  However to Matt’s point, it doesn’t 
> contain any language that says specific things like:
> 
> In the case of major API version deprecation:
> * $oldversion and $newversion must both work with 
> [cinder|nova|whatever]client and openstackclient during the deprecation 
> period.
> * It must be possible to run $oldversion and $newversion concurrently on the 
> servers to ensure end users don’t have to switch overnight. 
> * Devstack uses $newversion by default.
> * $newversion works in Tempest/Rally/whatever else.
> 
> What it *does* do is require that a thread be started here on 
> openstack-operators [2] so that operators can provide feedback.  I would hope 
> that feedback like “I can’t get clients to use it so please don’t remove it 
> yet” would be taken into account by projects, which seems to be exactly 
> what’s happening in this case with Cinder v1.  =)
> 
> I’d hazard a guess that the TC would be interested in hearing about whether 
> you think that plan is a reasonable one (and given that TC election season is 
> upon us, candidates for the TC probably would too).
> 
> [1] https://review.openstack.org/#/c/207467/
> [2] 
> http://git.openstack.org/cgit/openstack/governance/tree/reference/tags/assert_follows-standard-deprecation.rst#n59
> 
> At Your Service,
> 
> Mark T. Voelker

I would agree that the amount of breaks even in our own system has been
substantial here, and I'm personally feeling we should probably revert
the devstack change that turns off v1. It looks like it wasn't just one
client that got caught in this, but most of them.

This feels like this transition has been too much stick, and not enough
carrot. IIRC openstack client wouldn't work with cinder v2 until a
couple of months ago, as that made me do some weird things in grenade in
building volumes. [1]

-Sean

1.
https://github.com/openstack-dev/grenade/blob/master/projects/70_cinder/resources.sh#L40-L41

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-30 Thread Miguel Angel Ajo



Ihar Hrachyshka wrote:

On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:

Hi Ihar,

Ihar Hrachyshka :

Miguel Angel Ajo :

Do you have a rough idea of what operations you may need to do?

Right now, what bagpipe driver for networking-bgpvpn needs to interact with is:
- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP entries)


Sounds very tightly coupled to OVS agent.

Please bear in mind, the extension interface will be available from different 
agent types
(OVS, SR-IOV, [eventually LB]), so this interface you're talking about could 
also serve as
a translation driver for the agents (where the translation is possible), I 
totally understand
that most extensions are specific agent bound, and we must be able to identify
the agent we're serving back exactly.

Yes, I do have this in mind, but what we've identified for now seems to be OVS 
specific.

Indeed it does. Maybe you can try to define the needed pieces in high level 
actions, not internal objects you need to access to. Like ‘- connect endpoint X 
to Y’, ‘determine segmentation id for a network’ etc.

I've been thinking about this, but would tend to reach the conclusion that the 
things we need to interact with are pretty hard to abstract out into something 
that would be generic across different agents.  Everything we need to do in our 
case relates to how the agents use bridges and represent networks internally: 
linuxbridge has one bridge per Network, while OVS has a limited number of 
bridges playing different roles for all networks with internal segmentation.

To look at the two things you  mention:
- "connect endpoint X to Y" : what we need to do is redirect the traffic 
destined to the gateway of a Neutron network, to the thing that will do the MPLS 
forwarding for the right BGP VPN context (called VRF), in our case br-mpls (that could be 
done with an OVS table too) ; that action might be abstracted out to hide the details 
specific to OVS, but I'm not sure on how to  name the destination in a way that would be 
agnostic to these details, and this is not really relevant to do until we have a relevant 
context in which the linuxbridge would pass packets to something doing MPLS forwarding 
(OVS is currently the only option we support for MPLS forwarding, and it does not really 
make sense to mix linuxbridge for Neutron L2/L3 and OVS for MPLS)
- "determine segmentation id for a network": this is something really 
OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, and does not rely 
on internal segmentation

Completely abstracting out packet forwarding pipelines in OVS and linuxbridge 
agents would possibly allow defining an interface that agent extensions could 
use without knowing anything specific to OVS or the linuxbridge, but I 
believe this is a very significant task to tackle.


If you look for a clean way to integrate with reference agents, then it’s 
something that we should try to achieve. I agree it’s not an easy thing.

Just an idea: can we have a resource for traffic forwarding, similar to 
security groups? I know folks are not ok with extending security groups API due 
to compatibility reasons, so maybe fwaas is the place to experiment with it.


Hopefully it will be acceptable to create an interface, even if it exposes a set 
of methods specific to the linuxbridge agent and a set of methods specific to 
the OVS agent.  That would mean that the agent extension that can work in both 
contexts (not our case yet) would check the agent type before using the first 
set or the second set.


The assumption of the whole idea of l2 agent extensions is that they are agent 
agnostic. In case of QoS, we implemented a common QoS extension that can be 
plugged in any agent [1], and a set of backend drivers (atm it’s just sr-iov 
[2] and ovs [3]) that are selected based on the driver type argument passed 
into the extension manager [4][5]. Your extension could use similar approach to 
select the backend.

[1]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions/qos.py#n169
[2]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py
[3]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py
[4]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#n395
[5]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py#n155


I disagree on the agent-agnostic thing. QoS extension for SR-IOV is 
totally not agnostic for OVS or LB, in the QoS case, it's just
accidental that OVS & LB share common bridges now due to the OVS Hybrid 
implementation that 

Re: [openstack-dev] [nova][python-novaclient] Functional test fail due to publicURL endpoint for volume service not found

2015-09-30 Thread Andrey Kurilin
Hi!
It looks like the cause of the issue is disabling Cinder API v1 in gates by
default (
http://lists.openstack.org/pipermail/openstack-dev/2015-September/075689.html),
which was merged yesterday (https://review.openstack.org/#/c/194726/17).

Since Cinder V1 is disabled, python-novaclient has several issues:
 - "nova volume-* does not work when using cinder v2 API" -
https://bugs.launchpad.net/python-novaclient/+bug/1392846
 - "nova volume-* managers override service_type to 'volume', which is
missed in gates" - https://bugs.launchpad.net/python-novaclient/+bug/1501258
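
For what it's worth, a tiny illustration of the kind of fallback a client
could do when the legacy 'volume' service type disappears from the catalog
(hypothetical helper and catalog layout, not novaclient code):

def find_volume_endpoint(catalog):
    """Prefer the v2 'volumev2' entry, fall back to the legacy 'volume' one."""
    by_type = {entry['type']: entry['publicURL'] for entry in catalog}
    for service_type in ('volumev2', 'volume'):
        if service_type in by_type:
            return by_type[service_type]
    raise LookupError('publicURL endpoint for volume service not found')


catalog = [
    {'type': 'volumev2', 'publicURL': 'http://cloud:8776/v2/tenant-id'},
    # no 'volume' (v1) entry once Cinder v1 is disabled in the gate
]
print(find_volume_endpoint(catalog))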


On Wed, Sep 30, 2015 at 5:56 AM, Zhenyu Zheng 
wrote:

> Hi, all
>
> I submitted a patch for novaclient last night:
> https://review.openstack.org/#/c/228769/ , and it turns out the
> functional test has failed due to:  publicURL endpoint for volume service
> not found. I also found out that another novaclient patch:
> https://review.openstack.org/#/c/217131/ also fails due to this error, so
> this must be a bug. Any idea on how to fix this?
>
> Thanks,
>
> BR,
>
> Zheng
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best regards,
Andrey Kurilin.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-09-30 Thread Murray, Paul (HP Cloud)

> Please respond to this post if you have an interest in this and what you 
> would like to see done. 
> Include anything you are already getting on with so we get a clear picture. 

Thank you to those who replied to this thread. I have used the contents to 
start an etherpad page here:

https://etherpad.openstack.org/p/mitaka-live-migration 

I have taken the liberty of listing those that responded to the thread and the 
authors of mentioned patches as interested people.

From the responses and looking at the specs up for review it looks like there
are about five areas that could be addressed in Mitaka and several others that
could come later. The first five are:

- migrating instances with a mix of local disks and cinder volumes
- pause instance during migration
- cancel migration
- migrate suspended instances
- improve CI coverage

Not all of these are covered by specs yet and all the existing specs need 
reviews. Please look at the etherpad and see if there is anything you think is 
missing.

Paul



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] proposing Michal Jastrzebski (inc0) for core reviewer

2015-09-30 Thread Swapnil Kulkarni
On Wed, Sep 30, 2015 at 3:50 AM, Steven Dake (stdake) 
wrote:

> Hi folks,
>
> I am proposing Michal for core reviewer.  Consider my proposal as a +1
> vote.  Michal has done a fantastic job with rsyslog, has done a nice job
> overall contributing to the project for the last cycle, and has really
> improved his review quality and participation over the last several months.
>
> Our process requires 3 +1 votes, with no veto (-1) votes.  If you're
> uncertain, it is best to abstain :)  I will leave the voting open for 1
> week until Tuesday October 6th or until there is a unanimous decision or a
>  veto.
>

+1 :)

>
> Regards
> -steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-30 Thread thomas.morin

Hi Ihar,

Ihar Hrachyshka :

Miguel Angel Ajo :

Do you have a rough idea of what operations you may need to do?

Right now, what bagpipe driver for networking-bgpvpn needs to interact with is:
- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP entries)


Sounds very tightly coupled to OVS agent.





Please bear in mind, the extension interface will be available from different 
agent types
(OVS, SR-IOV, [eventually LB]), so this interface you're talking about could 
also serve as
a translation driver for the agents (where the translation is possible), I 
totally understand
that most extensions are specific agent bound, and we must be able to identify
the agent we're serving back exactly.

Yes, I do have this in mind, but what we've identified for now seems to be OVS 
specific.

Indeed it does. Maybe you can try to define the needed pieces in high level 
actions, not internal objects you need to access to. Like ‘- connect endpoint X 
to Y’, ‘determine segmentation id for a network’ etc.


I've been thinking about this, but would tend to reach the conclusion 
that the things we need to interact with are pretty hard to abstract out 
into something that would be generic across different agents.  
Everything we need to do in our case relates to how the agents use 
bridges and represent networks internally: linuxbridge has one bridge 
per Network, while OVS has a limited number of bridges playing different 
roles for all networks with internal segmentation.


To look at the two things you  mention:
- "connect endpoint X to Y" : what we need to do is redirect the traffic 
destined to the gateway of a Neutron network, to the thing that will 
do the MPLS forwarding for the right BGP VPN context (called VRF), in 
our case br-mpls (that could be done with an OVS table too) ; that 
action might be abstracted out to hide the details specific to OVS, but 
I'm not sure on how to  name the destination in a way that would be 
agnostic to these details, and this is not really relevant to do until 
we have a relevant context in which the linuxbridge would pass packets 
to something doing MPLS forwarding (OVS is currently the only option we 
support for MPLS forwarding, and it does not really make sense to mix 
linuxbridge for Neutron L2/L3 and OVS for MPLS)
- "determine segmentation id for a network": this is something really 
OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, 
and does not rely on internal segmentation


Completely abstracting out packet forwarding pipelines in OVS and 
linuxbridge agents would possibly allow defining an interface that agent 
extensions could use without knowing anything specific to OVS or 
the linuxbridge, but I believe this is a very significant task to tackle.


Hopefully it will be acceptable to create an interface, even if it exposes 
a set of methods specific to the linuxbridge agent and a set of methods 
specific to the OVS agent.  That would mean that the agent extension 
that can work in both contexts (not our case yet) would check the agent 
type before using the first set or the second set.


Does this approach make sense ?

-Thomas



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election] [tc] Candidacy for Mitaka

2015-09-30 Thread Julien Danjou
Hi fellow developers,

I hereby announce my candidacy for the OpenStack Technical Committee election.

I am currently employed by Red Hat and spend all my time working on upstream
OpenStack development. Something I've been doing since 2011. Those last years,
I ran the Ceilometer project as a PTL and already served the TC a few cycles
ago. I have made many contributions to OpenStack as a whole, and I'm one of the top
contributors to the project [1] – hey, I contributed to 72 OpenStack projects!

My plan here is to bring some of my views of the OpenStack world to the
technical committee, which actually does not seem to do much technical stuff
nowadays – much more bureaucracy. Maybe we should rename it?

I'm glad we now have a "big tent" approach in our community. I was one of the
first and only at the TC to say we should not push back projects for bad
reasons, and now we are accepting 10× more. The tag system we imagined and are
now using is nice, but as a new user of the tags, I find them annoying and not
always completely thought through. I'm in favor of more agile and more
user-oriented development, and I'd love to bring more of that.

I would also like to bring some of my hindsight about testing, usability and
documentation to the table. With part of the Telemetry team, I've been able to
build a project that has a good and sane community, works by default, has a
well-designed REST API and great up-to-date documentation, and is simple to
deploy and use. I wish the rest of OpenStack was a bit more like that.

[1] http://stackalytics.com/?metric=commits_id=jdanjou=all

Happy hacking!

-- 
Julien Danjou
/* Free Software hacker
   http://julien.danjou.info */


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [election][TC] TC Candidacy

2015-09-30 Thread Mike Perez
Hi all!

I'm announcing my candidacy for a position on the OpenStack Technical
Committee.

On October 1st I will be employed by the OpenStack Foundation as
a Cross-Project Developer Coordinator to help bring focus and support to
cross-project initiatives within the cross-project specs, Def Core, The Product
Working group, etc.

I feel the items below have enabled others across this project to strive for
quality. If you would all have me as a member of the Technical Committee, you
can help me to enable more quality work in OpenStack.

* I have been working in OpenStack since 2010. I spent a good amount of my time
  working on OpenStack in my free time before being paid full time to work on
  it. It has been an important part of my life, and rewarding to see what we
  have all achieved together.

* I was PTL for the Cinder project in the Kilo and Liberty releases for two
  cross-project reasons:
  * Third party continuous integration (CI).
  * Stop talking about rolling upgrades, and actually make it happen for
operators.

* I led the effort in bringing third party continuous integration to the
  Cinder project for more than 60 different drivers. [1]
  * I removed 25 different storage drivers from Cinder to bring quality to the
project to ensure what was in the Kilo release would work for operators.
I did what I believed was right, regardless of whether it would cost me
re-election for PTL [2].
  * In my conversations with other projects, this has enabled others to
follow the same effort. Continuing this trend of quality cross-project will
be my next focus.

* During my first term as PTL for Cinder, the team, with much respect to Thang
  Pham, worked on an effort to end the rolling upgrade problem, not just for
  Cinder, but for *all* projects.
  * First step was making databases independent from services via Oslo
versioned objects.
  * In Liberty we have a solution coming that helps with RPC versioned messages
to allow upgrading services independently.

* I have attempted to help with diversity in our community.
  * Helped lead our community to raise $17,403 for the Ada Initiative [3],
    which was helping address gender diversity with a focus on open source.
  * For the Vancouver summit, I helped bring in the ally skills workshops from
the Ada Initiative, so that our community can continue to be a welcoming
environment [4].

* Within the Cinder team, I have enabled all to provide good documentation for
  important items in our release notes in Kilo [5] and Liberty [6].
  * Other projects have reached out to me after Kilo feeling motivated for this
same effort. I've explained in the August 2015 Operators midcycle sprint
that I will make this a cross-project effort in order to provide better
communication to our operators and users.

* I started an OpenStack Dev List summary in the OpenStack Weekly Newsletter
  (What you need to know from the developer's list), in order to enable others
  to keep up with the dev list on important cross-project information. [7][8]

* I created the Cinder v2 API which has brought consistency in
  request/responses with other OpenStack projects.
  * I documented Cinder v1 and Cinder v2 API's. Later on I created the Cinder
API reference documentation content. The attempt here was to enable others
to have somewhere to start, to continue quality documentation with
continued developments.

Please help me to do more positive work in this project. It would be an
honor to be member of your technical committee.


Thank you,
Mike Perez

Official Candidacy: https://review.openstack.org/#/c/229298/2
Review History: https://review.openstack.org/#/q/reviewer:170,n,z
Commit History: https://review.openstack.org/#/q/owner:170,n,z
Stackalytics: http://stackalytics.com/?user_id=thingee
Foundation: https://www.openstack.org/community/members/profile/4840
IRC Freenode: thingee
Website: http://thing.ee


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054614.html
[2] - 
https://review.openstack.org/#/q/status:merged+project:openstack/cinder+branch:master+topic:cinder-driver-removals,n,z
[3] - 
http://lists.openstack.org/pipermail/openstack-dev/2014-October/047892.html
[4] - http://lists.openstack.org/pipermail/openstack-dev/2015-May/064156.html
[5] - 
https://wiki.openstack.org/wiki/ReleaseNotes/Kilo#OpenStack_Block_Storage_.28Cinder.29
[6] - 
https://wiki.openstack.org/wiki/ReleaseNotes/Liberty#OpenStack_Block_Storage_.28Cinder.29
[7] - 
http://www.openstack.org/blog/2015/09/openstack-community-weekly-newsletter-sept-12-18/
[8] - 
http://www.openstack.org/blog/2015/09/openstack-weekly-community-newsletter-sept-19-25/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Matt Thompson
Fully agree here -- +1 from me.

--Matt (mattt)

On Wed, Sep 30, 2015 at 9:51 AM, Jesse Pretorius 
wrote:

> Hi everyone,
>
> I'd like to propose that Steve Lewis (stevelle) be added as a core
> reviewer.
>
> He has made an effort to consistently keep up with doing reviews in the
> last cycle and always makes an effort to ensure that his responses are made
> after thorough testing where possible. I have found his input to be
> valuable.
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stackalytics] Broken stats after project rename

2015-09-30 Thread Jesse Pretorius
Hi everyone,

After the rename of os-ansible-deployment to openstack-ansible it appears
that all git-related stats (eg: commits) prior to the rename have been lost.

http://stackalytics.com/?metric=commits=openstack-ansible

Can anyone assist with rectifying this?

-- 
Jesse Pretorius
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Clint Byrum
Excerpts from Anant Patil's message of 2015-09-30 00:10:52 -0700:
> Hi,
> 
> One of remaining items in convergence is detecting and handling engine
> (the engine worker) failures, and here are my thoughts.
> 
> Background: Since the work is distributed among heat engines, by some
> means heat needs to detect the failure and pick up the tasks from failed
> engine and re-distribute or run the task again.
> 
> One of the simple ways is to poll the DB to detect liveness by
> checking the table populated by heat-manage. Each engine records its
> presence periodically by updating current timestamp. All the engines
> will have a periodic task for checking the DB for liveliness of other
> engines. Each engine will check for timestamp updated by other engines
> and if it finds one which is older than the periodicity of timestamp
> updates, then it detects a failure. When this happens, the remaining
> engines, as and when they detect the failures, will try to acquire the
> lock for in-progress resources that were handled by the engine which
> died. They will then run the tasks to completion.
> 
> Another option is to use a coordination library like the community owned
> tooz (http://docs.openstack.org/developer/tooz/) which supports
> distributed locking and leader election. We use it to elect a leader
> among heat engines and that will be responsible for running periodic
> tasks for checking state of each engine and distributing the tasks to
> other engines when one fails. The advantage, IMHO, will be simplified
> heat code. Also, we can move the timeout task to the leader which will
> run time out for all the stacks and sends signal for aborting operation
> when timeout happens. The downside: an external resource like
> ZooKeeper/memcached etc. is needed for leader election.
> 

It's becoming increasingly clear that OpenStack services in general need
to look at distributed locking primitives. There's a whole spec for that
right now:

https://review.openstack.org/#/c/209661/

I suggest joining that conversation, and embracing a DLM as the way to
do this.

Also, the leader election should be per-stack, and the leader selection
should be heavily weighted based on a consistent hash algorithm so that
you get even distribution of stacks to workers. You can look at how
Ironic breaks up all of the nodes that way. They're using a similar lock
to the one Heat uses now, so the two projects can collaborate nicely on
a real solution.
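
For illustration, a minimal sketch of the kind of hash-ring distribution described above (names are made up; this is not Heat's or Ironic's actual implementation):

import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

class HashRing(object):
    """Map stack IDs onto live engines so each engine owns a roughly even
    share, and only a small fraction of stacks move when an engine joins
    or dies."""

    def __init__(self, engines, replicas=100):
        # Place each engine at several points on the ring to smooth out
        # the distribution.
        self._ring = sorted(
            (_hash('%s-%d' % (engine, i)), engine)
            for engine in engines
            for i in range(replicas))
        self._keys = [h for h, _ in self._ring]

    def get_engine(self, stack_id):
        # Walk clockwise from the stack's hash to the next engine point.
        idx = bisect.bisect(self._keys, _hash(stack_id)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(['engine-1', 'engine-2', 'engine-3'])
print(ring.get_engine('stack-uuid-1234'))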

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread ZengYingzhe
Hi Anant,
For the second option, if the leader engine fails, how is a new leader
election triggered?
Best Regards,
Yingzhe Zeng

> To: openstack-dev@lists.openstack.org
> From: anant.pa...@hpe.com
> Date: Wed, 30 Sep 2015 12:40:52 +0530
> Subject: [openstack-dev] [heat] Convergence: Detecting and handling worker 
> failures
> 
> Hi,
> 
> One of remaining items in convergence is detecting and handling engine
> (the engine worker) failures, and here are my thoughts.
> 
> Background: Since the work is distributed among heat engines, by some
> means heat needs to detect the failure and pick up the tasks from failed
> engine and re-distribute or run the task again.
> 
> One of the simple ways is to poll the DB to detect liveness by
> checking the table populated by heat-manage. Each engine records its
> presence periodically by updating the current timestamp. All the engines
> will have a periodic task for checking the DB for liveness of other
> engines. Each engine will check for timestamp updated by other engines
> and if it finds one which is older than the periodicity of timestamp
> updates, then it detects a failure. When this happens, the remaining
> engines, as and when they detect the failures, will try to acquire the
> lock for in-progress resources that were handled by the engine which
> died. They will then run the tasks to completion.
> 
> Another option is to use a coordination library like the community owned
> tooz (http://docs.openstack.org/developer/tooz/) which supports
> distributed locking and leader election. We use it to elect a leader
> among heat engines and that will be responsible for running periodic
> tasks for checking state of each engine and distributing the tasks to
> other engines when one fails. The advantage, IMHO, will be simplified
> heat code. Also, we can move the timeout task to the leader which will
> run time out for all the stacks and sends signal for aborting operation
> when timeout happens. The downside: an external resource like
> ZooKeeper/memcached etc. is needed for leader election.
> 
> In the long run, IMO, using a library like tooz will be useful for heat.
> A lot of boilerplate needed for locking and running centralized tasks
> (such as timeout) will not be needed in heat. Given that we are moving
> towards distribution of tasks and horizontal scaling is preferred, it
> will be advantageous to use them.
> 
> Please share your thoughts.
> 
> - Anant
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Proposing Steve Lewis (stevelle) for core reviewer

2015-09-30 Thread Andy McCrae
+1 from me.

On 30 September 2015 at 09:51, Jesse Pretorius 
wrote:

> Hi everyone,
>
> I'd like to propose that Steve Lewis (stevelle) be added as a core
> reviewer.
>
> He has made an effort to consistently keep up with doing reviews in the
> last cycle and always makes an effort to ensure that his responses are made
> after thorough testing where possible. I have found his input to be
> valuable.
>
> --
> Jesse Pretorius
> IRC: odyssey4me
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Shared storage space count for Nova

2015-09-30 Thread Jay Pipes

On 09/30/2015 03:04 AM, Kekane, Abhishek wrote:

Hi Devs,

Nova shared storage has an issue [1] with counting free space, total space
and disk_available_least, which affects hypervisor stats and the scheduler.

I have created an etherpad [2] which contains a detailed problem description
and a possible solution, with possible challenges, for this design.

Later I came to know there is a ML thread [3] initiated by Jay Pipes which has a
solution of creating resource pools for disk, CPU, memory, NUMA nodes etc.

IMO this is a good approach and good to address in the Mitaka release. I am
eager to work on this and will provide any kind of help in
implementation, review etc.

Please give us your opinion about the same.


Hi! I actually have created a work in progress blueprint for the above 
proposed solution here:


https://review.openstack.org/#/c/225546/

I will have it completed by end of week.

Best,
-jay


[1] https://bugs.launchpad.net/nova/+bug/1252321

[2] https://etherpad.openstack.org/p/shared-storage-space-count

[3] http://lists.openstack.org/pipermail/openstack-dev/2015-July/070564.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-09-30 Thread Ihar Hrachyshka

> On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:
> 
> Hi Ihar,
> 
> Ihar Hrachyshka :
>>> Miguel Angel Ajo :
 Do you have a rough idea of what operations you may need to do?
>>> Right now, what bagpipe driver for networking-bgpvpn needs to interact with 
>>> is:
>>> - int_br OVSBridge (read-only)
>>> - tun_br OVSBridge (add patch port, add flows)
>>> - patch_int_ofport port number (read-only)
>>> - local_vlan_map dict (read-only)
>>> - setup_entry_for_arp_reply method (called to add static ARP entries)
>>> 
>> Sounds very tightly coupled to OVS agent.
> 
>> 
 Please bear in mind, the extension interface will be available from 
 different agent types
 (OVS, SR-IOV, [eventually LB]), so this interface you're talking about 
 could also serve as
 a translation driver for the agents (where the translation is possible), I 
 totally understand
 that most extensions are specific agent bound, and we must be able to 
 identify
 the agent we're serving back exactly.
>>> Yes, I do have this in mind, but what we've identified for now seems to be 
>>> OVS specific.
>> Indeed it does. Maybe you can try to define the needed pieces in high level 
>> actions, not internal objects you need to access to. Like ‘- connect 
>> endpoint X to Y’, ‘determine segmentation id for a network’ etc.
> 
> I've been thinking about this, but would tend to reach the conclusion that 
> the things we need to interact with are pretty hard to abstract out into 
> something that would be generic across different agents.  Everything we need 
> to do in our case relates to how the agents use bridges and represent 
> networks internally: linuxbridge has one bridge per Network, while OVS has a 
> limited number of bridges playing different roles for all networks with 
> internal segmentation.
> 
> To look at the two things you  mention:
> - "connect endpoint X to Y" : what we need to do is redirect the traffic 
> destined to the gateway of a Neutron network, to the thing that will do the 
> MPLS forwarding for the right BGP VPN context (called VRF), in our case 
> br-mpls (that could be done with an OVS table too) ; that action might be 
> abstracted out to hide the details specific to OVS, but I'm not sure how 
> to name the destination in a way that would be agnostic to these details, 
> and this is not really relevant to do until we have a relevant context in 
> which the linuxbridge would pass packets to something doing MPLS forwarding 
> (OVS is currently the only option we support for MPLS forwarding, and it does 
> not really make sense to mix linuxbridge for Neutron L2/L3 and OVS for MPLS)
> - "determine segmentation id for a network": this is something really 
> OVS-agent-specific, the linuxbridge agent uses multiple linux bridges, and 
> does not rely on internal segmentation
> 
> Completely abstracting out packet forwarding pipelines in OVS and linuxbridge 
> agents would possibly allow defining an interface that agent extension could 
> use without to know about anything specific to OVS or the linuxbridge, but I 
> believe this is a very significant task to tackle.

If you look for a clean way to integrate with reference agents, then it’s 
something that we should try to achieve. I agree it’s not an easy thing.

Just an idea: can we have a resource for traffic forwarding, similar to 
security groups? I know folks are not ok with extending security groups API due 
to compatibility reasons, so maybe fwaas is the place to experiment with it.

> 
> Hopefully it will be acceptable to create an interface, even if it exposes a set 
> of methods specific to the linuxbridge agent and a set of methods specific to 
> the OVS agent.  That would mean that the agent extension that can work in 
> both contexts (not our case yet) would check the agent type before using the 
> first set or the second set.

The assumption of the whole idea of l2 agent extensions is that they are agent 
agnostic. In case of QoS, we implemented a common QoS extension that can be 
plugged in any agent [1], and a set of backend drivers (atm it’s just sr-iov 
[2] and ovs [3]) that are selected based on the driver type argument passed 
into the extension manager [4][5]. Your extension could use a similar approach to 
select the backend.

[1]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/agent/l2/extensions/qos.py#n169
[2]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/extension_drivers/qos_driver.py
[3]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py
[4]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#n395
[5]: 
https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/mech_sriov/agent/sriov_nic_agent.py#n155
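
To make the backend-selection idea concrete, here is a rough sketch of an extension that picks a per-agent backend from the driver type passed in by the extension manager (class and method names below are invented, not the actual QoS or bagpipe code):

class OVSBackend(object):
    def connect_network_to_vrf(self, network_id, vrf):
        # would add patch ports / flows on tun_br here
        pass


class LinuxBridgeBackend(object):
    def connect_network_to_vrf(self, network_id, vrf):
        # would wire up the per-network bridge here
        pass


class BgpvpnAgentExtension(object):
    _backends = {
        'ovs': OVSBackend,
        'linuxbridge': LinuxBridgeBackend,
    }

    def initialize(self, connection, driver_type):
        # driver_type comes from the L2 agent's extension manager,
        # e.g. 'ovs' for the OVS agent.
        self.backend = self._backends[driver_type]()

    def handle_port(self, context, port):
        self.backend.connect_network_to_vrf(port['network_id'],
                                            port.get('bgpvpn'))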

> 
> Does this approach make sense ?
> 
> -Thomas
> 
> 

Re: [openstack-dev] [nova] Shared storage space count for Nova

2015-09-30 Thread Kekane, Abhishek
Hi Jay,

Thank you for the update.

Abhishek Kekane

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: 30 September 2015 15:48
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [nova] Shared storage space count for Nova

On 09/30/2015 03:04 AM, Kekane, Abhishek wrote:
> Hi Devs,
>
> Nova shared storage has an issue [1] with counting free space, total space 
> and disk_available_least, which affects hypervisor stats and the scheduler.
>
> I have created a etherpad [2] which contains detail problem 
> description and possible solution with possible challenges for this design.
>
> Later I came to know there is a ML thread [3] initiated by Jay Pipes which has 
> a solution of creating resource pools for disk, CPU, memory, NUMA nodes etc.
>
> IMO this is a good way and good to be addressed in Mitaka release. I 
> am eager to work on this and will provide any kind of help in 
> implementation, review etc.
>
> Please give us your opinion about the same.

Hi! I actually have created a work in progress blueprint for the above proposed 
solution here:

https://review.openstack.org/#/c/225546/

I will have it completed by end of week.

Best,
-jay

> [1] https://bugs.launchpad.net/nova/+bug/1252321
>
> [2] https://etherpad.openstack.org/p/shared-storage-space-count
>
> [3] 
> http://lists.openstack.org/pipermail/openstack-dev/2015-July/070564.ht
> ml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
Disclaimer: This email and any attachments are sent in strictest confidence
for the sole use of the addressee and may contain legally privileged,
confidential, and proprietary data. If you are not the intended recipient,
please advise the sender by replying promptly to this email and then delete
and destroy this email and any attachments without any further use, copying
or forwarding.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-09-30 Thread Nikhil Komawar


On 9/30/15 8:46 AM, Kuvaja, Erno wrote:
>
> Hi all,
>
>  
>
> I’d like to propose following changes to glance-stable-maint team:
>
> 1)  Removing Zhi Yan Liu from the group; unfortunately he has
> moved on to other ventures and is not actively participating our
> operations anymore.
>
+1 (always welcome back)
>
> 2)  Adding Mike Fedosin to the group; Mike has been reviewing and
> backporting patches to glance stable branches and is working with the
> right mindset. I think he would be great addition to share the
> workload around.
>
+1 (definitely)
>
>  
>
> Best,
>
> Erno (jokke_) Kuvaja
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 

Thanks,
Nikhil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] Models and validation for v2

2015-09-30 Thread Kairat Kushaev
Hi All,
In short, I am wondering why we are validating responses from the server
when we are doing
image-show, image-list, member-list, metadef-namespace-show and other
read-only requests.

AFAIK, we are building warlock models when receiving responses from the server
(see [0]). Each model requires the schema to be fetched from the glance server. It
means that each time we do image-show, image-list, image-create,
member-list and others we are requesting the schema from the server. AFAIU, we
are using models to dynamically validate that an object is in accordance with the
schema, but is that the case when glanceclient receives responses from the server?
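
For illustration, a toy example of what the warlock machinery does (the schema below is a stand-in, not the real image schema):

import warlock

image_schema = {
    'name': 'image',
    'properties': {
        'id': {'type': 'string'},
        'status': {'type': 'string', 'enum': ['queued', 'active']},
    },
}

Image = warlock.model_factory(image_schema)

# Validates on construction -- this, plus the extra GET /v2/schemas/image
# round trip to fetch the schema, is the per-call cost in question.
img = Image(id='abc123', status='active')

# A response that does not match the schema raises, even for a read-only
# call such as image-show:
try:
    Image(id='abc123', status='deleted')
except ValueError as exc:
    print('rejected: %s' % exc)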

Could somebody please explain the reasoning behind this implementation? Have I
missed some use cases where validation is required for server responses?

I also noticed that we have already faced some issues with this implementation
that lead to "mocking" the validation ([1][2]).


[0]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L185
[1]:
https://github.com/openstack/python-glanceclient/blob/master/glanceclient/v2/images.py#L47
[2]: https://bugs.launchpad.net/python-glanceclient/+bug/1501046

Best regards,
Kairat Kushaev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Thomas Goirand
Hi everyone!

1/ Announcement
===

I'm pleased to announce, in advance of the final Liberty release, that
Liberty RC1 not only has been fully uploaded to Debian Experimental, but
also that the Tempest CI (which I maintain and is a package only CI, no
deployment tooling involved), shows that it's also fully installable and
working. There are still some failures, but these are, I am guessing, not
due to problems in the packaging, but rather some Tempest setup problems
which I intend to address.

If you want to try out Liberty RC1 in Debian, you can either try it
using Debian Sid + Experimental (recommended), or use the Jessie
backport repository built out of Mirantis Jenkins server. Repositories
are listed at this address:

http://liberty-jessie.pkgs.mirantis.com/

2/ Quick note about Liberty Debian repositories
===

During Debconf 15, someone reported that the fact the Jessie backports
are on a Mirantis address is disturbing.

Note that, while the above really is a non-Debian (ie: non official
private) repository, it only contains unmodified source packages, only
just rebuilt for Debian Stable. Please don't be afraid by the tainted
"mirantis.com" domain name, I could have as well set a debian.net
address (which has been on my todo list for a long time). But it is
still Debian-only packages. Everything there is straight out of Debian
repositories, nothing added, modified or removed.

I believe that Liberty release in Sid, is currently working very well,
but I haven't tested it as much as the Jessie backport.

Started with the Kilo release, I have been uploading packages to the
official Debian backports repositories. I will do so as well for the
Liberty release, after the final release is out, and after Liberty is
fully migrated to Debian Testing (the rule for stable-backports is that
packages *must* be available in Testing *first*, in order to provide an
upgrade path). So I do expect Liberty to be available from
jessie-backports maybe a few weeks *after* the final Liberty release.
Before that, use the unofficial Debian repositories.

3/ Horizon dependencies still in NEW queue
==

It is also worth noting that Horizon hasn't been fully FTP master
approved, and that some packages are still remaining in the NEW queue.
This isn't the first release with such an issue with Horizon. I hope
that 1/ FTP masters will approve the remaining packages soon 2/ for
Mitaka, the Horizon team will care about freezing external dependencies
(ie: new Javascript objects) earlier in the development cycle. I am
hereby proposing that the Horizon 3rd party dependency freeze happens
not later than Mitaka b2, so that we don't experience it again for the
next release. Note that this problem affects both Debian and Ubuntu, as
Ubuntu syncs dependencies from Debian.

5/ New packages in this release
===

You may have noticed that the below packages are now part of Debian:
- Manila
- Aodh
- ironic-inspector
- Zaqar (this one is still in the FTP masters NEW queue...)

I have also packaged a few more, but there are still blockers:
- Congress (antlr version is too low in Debian)
- Mistral

6/ Roadmap for Liberty final release


Next on my roadmap for the final release of Liberty is to finish upgrading
the remaining components to the latest version tested in the
gate. It has been done for most OpenStack deliverables, but about a
dozen are still in the lowest version supported by our global-requirements.

There's also some remaining work:
- more Neutron drivers
- Gnocchi
- Address the remaining Tempest failures, and widen the scope of tests
(add Sahara, Heat, Swift and others to the tested projects using the
Debian package CI)

I of course welcome everyone to test Liberty RC1 before the final
release, and report bugs on the Debian bug tracker if needed.

Also note that the Debian packaging CI is fully free software, and part
of Debian as well (you can look into the openstack-meta-packages package
in git.debian.org, and in openstack-pkg-tools). Contributions in this
field are also welcome.

7/ Thanks to Canonical & every OpenStack upstream projects
==

I'd like to point out that, even though I did the majority of the work
myself, for this release, there was a way more collaboration with
Canonical on the dependency chain. Indeed, for this Liberty release,
Canonical decided to upload every dependency to Debian first, and then
only sync from it. So a big thanks to the Canonical server team for
doing community work with me together. I just hope we could push this
even further, especially trying to have consistency for Nova and Neutron
binary package names, as it is an issue for Puppet guys.

Last, I would like to hereby thank everyone who helped me fix issues
in these packages. Thank you if you've been patient enough to explain,
and for 

Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-30 Thread Daniel P. Berrange
On Wed, Sep 30, 2015 at 08:10:43AM -0400, Sean Dague wrote:
> On 09/30/2015 07:29 AM, Ivan Kolodyazhny wrote:
> > Sean,
> > 
> > openstack client supports Cinder API v2 since Liberty. What is the right
> > way to fix grenade?
> 
> Here's the thing.
> 
> With this change: Rally doesn't work, novaclient doesn't work, grenade
> doesn't work. Apparently nearly all the libraries in the real world
> don't work.
> 
> I feel like that list of incompatibilities should have been collected
> before this change. Managing a major API transition is a big deal, and
> having a pretty good idea who you are going to break before you do it is
> important. Just putting it out there and watching fallout isn't the
> right approach.

I have to agree, breaking APIs is a very big deal for consumers of
those APIs. When you break API you are trading off less work for
maintainers, vs extra pain for users. IMHO intentionally creating
pain for users is something that should be avoided unless there is
no practical alternative. I'd go as far as to say we should never
break API at all, which would mean keeping v1 around forever,
albeit recommending people use v2. If we really do want to kill
v1 and inflict pain on consumers, then we need to ensure that pain
is as close to zero as possible. This means we should not kill v1
until we've verified that all known current clients impl of v1 have
a v2 implementation available.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] tox -egenconfig not working

2015-09-30 Thread Vikas Choudhary
Hi,

I tried to generate a sample kuryr config file using "tox -e genconfig", but it is
failing:

genconfig create: /home/vikas/kuryr/.tox/genconfig
genconfig installdeps: -r/home/vikas/kuryr/requirements.txt,
-r/home/vikas/kuryr/test-requirements.txt
ERROR: could not install deps [-r/home/vikas/kuryr/requirements.txt,
-r/home/vikas/kuryr/test-requirements.txt]
___ summary
___
ERROR:   genconfig: could not install deps
[-r/home/vikas/kuryr/requirements.txt,
-r/home/vikas/kuryr/test-requirements.txt]



But if i run "pip install -r requirements.txt", its giving no error.

How to generalr sample config file? Please suggest.


-Vikas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Convergence: Detecting and handling worker failures

2015-09-30 Thread Ryan Brown

On 09/30/2015 03:10 AM, Anant Patil wrote:

Hi,

One of remaining items in convergence is detecting and handling engine
(the engine worker) failures, and here are my thoughts.

Background: Since the work is distributed among heat engines, by some
means heat needs to detect the failure and pick up the tasks from failed
engine and re-distribute or run the task again.

One of the simple ways is to poll the DB to detect liveness by
checking the table populated by heat-manage. Each engine records its
presence periodically by updating the current timestamp. All the engines
will have a periodic task for checking the DB for liveness of other
engines. Each engine will check for timestamp updated by other engines
and if it finds one which is older than the periodicity of timestamp
updates, then it detects a failure. When this happens, the remaining
engines, as and when they detect the failures, will try to acquire the
lock for in-progress resources that were handled by the engine which
died. They will then run the tasks to completion.


Implementing our own locking system, even a "simple" one, sounds like a 
recipe for major bugs to me. I agree with your assessment that tooz is a 
better long-run decision.



Another option is to use a coordination library like the community owned
tooz (http://docs.openstack.org/developer/tooz/) which supports
distributed locking and leader election. We use it to elect a leader
among heat engines and that will be responsible for running periodic
tasks for checking state of each engine and distributing the tasks to
other engines when one fails. The advantage, IMHO, will be simplified
heat code. Also, we can move the timeout task to the leader which will
run time out for all the stacks and sends signal for aborting operation
when timeout happens. The downside: an external resource like
ZooKeeper/memcached etc. is needed for leader election.


That's not necessarily true. For single-node installations (devstack, 
TripleO underclouds, etc) tooz offers file and IPC backends that don't 
need an extra service. Tooz's MySQL/PostgreSQL backends only provide 
distributed locking functionality, so we may need to depend on the 
memcached/redis/zookeeper backends for multi-node installs.


Even if tooz doesn't provide everything we need, I'm sure patches would 
be welcome.
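
For illustration, a rough sketch of what the tooz-based approach could look like (backend URL, group name and member id are made up, and error handling is mostly elided):

import time

from tooz import coordination

coord = coordination.get_coordinator('memcached://127.0.0.1:11211',
                                     b'heat-engine-1')
coord.start()

# Distributed lock on an in-progress resource, instead of a hand-rolled
# DB lock:
with coord.get_lock(b'resource-uuid-1234'):
    pass  # converge the resource here

# Leader election among engines; the leader would own the periodic
# liveness and stack-timeout checks discussed above.
group = b'heat-engines'
try:
    coord.create_group(group).get()
except coordination.GroupAlreadyExist:
    pass
coord.join_group(group).get()


def on_elected(event):
    print('%s is now the leader' % event.member_id)

coord.watch_elected_as_leader(group, on_elected)

while True:
    coord.heartbeat()      # keep our membership alive
    coord.run_watchers()   # fires on_elected if we win the election
    time.sleep(1)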



In the long run, IMO, using a library like tooz will be useful for heat.
A lot of boilerplate needed for locking and running centralized tasks
(such as timeout) will not be needed in heat. Given that we are moving
towards distribution of tasks and horizontal scaling is preferred, it
will be advantageous to use them.

Please share your thoughts.

- Anant

--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Dmitry Tantsur

On 09/30/2015 03:15 PM, Ryan Brown wrote:

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public API's for OpenStack
libraries? As I am working on refactoring and updating tripleo-common
I have
to grep through the projects I know that use it to make sure I don't
break
anything.


The API working group exists, but they focus on REST APIs so they don't
have any guidelines on library APIs.


Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public
might be too broad. For example, that would make all of libutils
ostensibly a "stable" interface. I don't think that's what we want,
especially this early in the lifecycle.

In heatclient, we present "heatclient.client" and "heatclient.exc"
modules as the main public API, and put versioned implementations in
modules.


I'd recommend avoiding things like 'heatclient.client', as in a big 
application it would lead to imports like


 from heatclient import client as heatclient

:)

What I did for ironic-inspector-client was to make a couple of the most 
important things available directly on the ironic_inspector_client top-level 
module, with everything else under ironic_inspector_client.v1 (modulo some 
legacy).
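
A hypothetical tripleo-common layout following that pattern could look like the __init__.py below (module and class names are invented purely for illustration):

# tripleo_common/__init__.py
# Only the names re-exported here are the documented, stable surface;
# everything else lives under a versioned module and remains fair game
# for internal refactoring.
from tripleo_common.v1.plans import PlanManager
from tripleo_common.v1.stacks import StackUpdateManager

__all__ = ['PlanManager', 'StackUpdateManager']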




heatclient
|- client
|- exc
\- v1
   |- client
   |- resources
   |- events
   |- services

I think versioning the public API is the way to go, since it will make
it easier to maintain backwards compatibility while new needs/uses evolve.


++






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] live migration in Mitaka

2015-09-30 Thread Koniszewski, Pawel
> -Original Message-
> From: Murray, Paul (HP Cloud) [mailto:pmur...@hpe.com]
> Sent: Wednesday, September 30, 2015 1:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [nova] live migration in Mitaka
>
>
> > Please respond to this post if you have an interest in this and what you
> would like to see done.
> > Include anything you are already getting on with so we get a clear 
> > picture.
>
> Thank you to those who replied to this thread. I have used the contents to
> start an etherpad page here:
>
> https://etherpad.openstack.org/p/mitaka-live-migration
>
> I have taken the liberty of listing those that responded to the thread and

> the
> authors of mentioned patches as interested people.
>
> From the responses and looking at the specs up for review it looks like 
> there
> are about five areas that could be addressed in Mitaka and several others
> that could come later. The first five are:
>
> - migrating instances with a mix of local disks and cinder volumes

Preliminary patch is up for review [1], we need to switch it to libvirt's v3

migrate API.

> - pause instance during migration
> - cancel migration
> - migrate suspended instances

I'm not sure I understand this correctly. When a user calls 'nova suspend' I 
thought that it actually "hibernates" the VM and saves its memory state to disk 
[2][3]. In that case there is nothing to "live" migrate - shouldn't 
cold migration/resize solve this problem?

> - improve CI coverage
>
> Not all of these are covered by specs yet and all the existing specs need
> reviews. Please look at the etherpad and see if there is anything you
think 
> is
> missing.

Paul, thanks for taking care of this. I've added missing spec to force live 
migration to finish [4].

Hope we manage to discuss all these items in Tokyo.

[1] https://review.openstack.org/#/c/227278/
[2] 
https://github.com/openstack/nova/blob/e31d1e11bd42bcfbd7b2c3d732d184a367b75
d6f/nova/virt/libvirt/driver.py#L2311
[3] 
https://github.com/openstack/nova/blob/e31d1e11bd42bcfbd7b2c3d732d184a367b75
d6f/nova/virt/libvirt/guest.py#L308
[4] https://review.openstack.org/#/c/229040/

Kind Regards,
Pawel Koniszewski


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] -1 due to line length violation in commit messages

2015-09-30 Thread Mike Spreitzer
> From: Gorka Eguileor 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 09/29/2015 07:34 AM
> Subject: Re: [openstack-dev] [all] -1 due to line length violation 
> in commit messages
...
> Since we are not all native speakers expecting everyone to realize that
> difference - which is completely right - may be a little optimistic,
> moreover considering that parts of those guidelines may even be written
> by non natives.
> 
> Let's say I interpret all "should" instances in that guideline as rules
> that don't need to be strictly enforced, I see that the Change-Id
> "should not be changed when rebasing" - this one would certainly be fun
> to watch if we didn't follow it - the blueprint "should give the name of
> a Launchpad blueprint" - I don't know any core that would not -1 a patch
> if he notices the BP reference missing - and machine targeted metadata
> "should all be grouped together at the end of the commit message" - this
> one everyone follows instinctively, so no problem.
> 
> And if we look at the i18n guidelines, almost everything is using
> should, but on reviews these are treated as strict *must* because of the
> implications.
> 
> Anyway, it's a matter of opinion and afaik in Cinder we don't even have
> a real problem with downvoting for the commit message length, I don't
> see more than 1 every couple of months or so.

Other communities have solved this by explicit reference to a standard 
defining terms like "must" and "should".

Regards,
Mike


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel][shotgun] do we still use subs?

2015-09-30 Thread Alexander Gordeev
Hello fuelers,

My question is related to shotgun tool[1] which will be invoked in
order to generate the diagnostic snapshot.

It can substitute particular sensitive data such as
credentials/hostnames/IPs/etc. with meaningless values. This is done by
the Subs [2] object driver.
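
For illustration, roughly what that substitution amounts to (shown in Python rather than sed, with made-up patterns rather than the actual Subs configuration):

import re

SUBS = [
    (re.compile(r'(password\s*[:=]\s*)\S+', re.IGNORECASE), r'\1******'),
    (re.compile(r'\b\d{1,3}(\.\d{1,3}){3}\b'), '10.0.0.1'),          # IPs
    (re.compile(r'\bnode-\d+\.example\.com\b'), 'host.domain.tld'),  # hostnames
]

def sanitize(line):
    for pattern, replacement in SUBS:
        line = pattern.sub(replacement, line)
    return line

print(sanitize('password = s3cret on 192.168.0.5'))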

However, it seems that subs is not used anymore. Well, at least it was
turned off by default for fuel 5.1 [3] and newer. I wasn't able to find
any traces of its usage in the code in the fuel-web repo.

It seems that this piece of code for subs could be ditched. Even more, it
should be ditched, as it looks like a fifth wheel from the project
architecture point of view: shotgun is about getting the
actual logs, not about corrupting them unpredictably with sed
scripts.

Proper log sanitization is another story entirely. I doubt it
could be fitted into shotgun while being effective and/or well designed
at the same time.

Perhaps I missed something and subs is still being used actively.
So folks, don't hesitate to respond if you know something which helps
to shed light on subs.

Let's discuss anything related to subs or even vote on its removal.
Maybe we need to wait for another 2 years to pass until we could
finally get rid of it.

Let me know your thoughts.

Thanks!


[1] https://github.com/stackforge/fuel-web/tree/master/shotgun
[2] 
https://github.com/stackforge/fuel-web/blob/master/shotgun/shotgun/driver.py#L165-L233
[3] 
https://github.com/stackforge/fuel-web/blob/stable/5.1/nailgun/nailgun/settings.yaml

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [cinder] [all] The future of Cinder API v1

2015-09-30 Thread Sean Dague
On 09/30/2015 07:29 AM, Ivan Kolodyazhny wrote:
> Sean,
> 
> openstack client supports Cinder API v2 since Liberty. What is the right
> way to fix grenade?

Here's the thing.

With this change: Rally doesn't work, novaclient doesn't work, grenade
doesn't work. Apparently nearly all the libraries in the real world
don't work.

I feel like that list of incompatibilities should have been collected
before this change. Managing a major API transition is a big deal, and
having a pretty good idea who you are going to break before you do it is
important. Just putting it out there and watching fallout isn't the
right approach.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][glance] glance-stable-maint group refresher

2015-09-30 Thread Kuvaja, Erno
Hi all,

I'd like to propose following changes to glance-stable-maint team:

1)  Removing Zhi Yan Liu from the group; unfortunately he has moved on to 
other ventures and is not actively participating in our operations anymore.

2)  Adding Mike Fedosin to the group; Mike has been reviewing and 
backporting patches to glance stable branches and is working with the right 
mindset. I think he would be a great addition to share the workload around.

Best,
Erno (jokke_) Kuvaja
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Defining a public API for tripleo-common

2015-09-30 Thread Ryan Brown

On 09/30/2015 04:08 AM, Dougal Matthews wrote:

Hi,

What is the standard practice for defining public API's for OpenStack
libraries? As I am working on refactoring and updating tripleo-common I have
to grep through the projects I know that use it to make sure I don't break
anything.


The API working group exists, but they focus on REST APIs so they don't 
have any guidelines on library APIs.



Personally I would choose to have a policy of "If it is documented, it is
public" because that is very clear and it still allows us to do internal
refactoring.

Otherwise we could use __all__ to define what is public in each file, or
assume everything that doesn't start with an underscore is public.


I think assuming that anything without a leading underscore is public 
might be too broad. For example, that would make all of libutils 
ostensibly a "stable" interface. I don't think that's what we want, 
especially this early in the lifecycle.


In heatclient, we present "heatclient.client" and "heatclient.exc" 
modules as the main public API, and put versioned implementations in 
modules.


heatclient
|- client
|- exc
\- v1
  |- client
  |- resources
  |- events
  |- services

I think versioning the public API is the way to go, since it will make 
it easier to maintain backwards compatibility while new needs/uses evolve.


--
Ryan Brown / Senior Software Engineer, Openstack / Red Hat, Inc.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nfv][telcowg] Telco Working Group meeting schedule

2015-09-30 Thread Steve Gordon
- Original Message -
> From: "Steve Gordon" 
> To: openstack-operat...@lists.openstack.org, "OpenStack Development Mailing 
> List (not for usage questions)"
> 
> Hi all,
> 
> As discussed in last week's meeting [1] we have been seeing increasingly
> limited engagement in the 1900 UTC meeting slot. For this reason starting
> from next week's meeting (October 6th) it is proposed that we consolidate on
> the 1400 UTC slot which is generally better attended and stop alternating
> the time each week.
> 
> Unrelated to the above, I am traveling this Wednesday and will not be able to
> facilitate the meeting on September 30th @ 1900 UTC. Is anyone else able to
> help out by facilitating the meeting at this time? I can help out with
> agenda etc.
> 
> Thanks in advance,
> 
> Steve
> 
> 
> [1]
> http://eavesdrop.openstack.org/meetings/telcowg/2015/telcowg.2015-09-23-14.00.html

Hi all,

I was not able to find a backup facilitator so today's meeting is cancelled, I 
have updated the schedule on the wiki.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] how to address boot from volume failures

2015-09-30 Thread Andrew Laski

On 09/30/15 at 05:03pm, Sean Dague wrote:

Today we attempted to branch devstack and grenade for liberty, and are
currently blocked because in liberty with openstack client and
novaclient, it's not possible to boot a server from volume using just
the volume id.

That's because of this change in novaclient -
https://review.openstack.org/#/c/221525/

That was done to resolve the issue that strong schema validation in Nova
started rejecting the kinds of calls that novaclient was making for boot
from volume, because the bdm 1 and 2 code was sharing common code and
got a bit tangled up. So 3 bdm 2 params were being sent on every request.

However, https://review.openstack.org/#/c/221525/ removed the ==1 code
path. If you pass in just {"vda": "$volume_id"} the code falls through,
volume id is lost, and nothing is booted. This is how the devstack
exercises and osc recommends booting from volume. I expect other people
might be doing that as well.

There seem to be a few options going forward:

1) fix the client without a revert

This would bring back a ==1 code path, which is basically just setting
volume_id, and move on. This means that until people upgrade their
client they lose access to this function on the server.

2) revert the client and loosen up schema validation

If we revert the client to the old code, we also need to accept the fact
that novaclient has been sending 3 extra parameters to this API call
since as long as people can remember. We'd need a nova schema relax to
let those in and just accept that people are going to pass those.

3) fix osc and novaclient cli to not use this code path. This will also
require everyone upgrades both of those to not explode in the common
case of specifying boot from volume on the command line.

I slightly lean towards #2 on a compatibility front, but it's a chunk of
change at this point in the cycle, so I don't think there is a clear win
path. It would be good to collect opinions here. The bug tracking this
is - https://bugs.launchpad.net/python-openstackclient/+bug/1501435


I have a slight preference for #1.  Nova is not buggy here, novaclient 
is, so I think we should contain the fix there.


Is using the v2 API an option?  That should also allow the 3 extra 
parameters mentioned in #2.
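
For reference, the two request shapes under discussion look roughly like this (values are placeholders; this is an illustration of my reading of the API, not the exact payloads the clients send):

# What callers pass to novaclient today for "boot from volume with just
# the volume id" -- the ==1 path that was dropped:
bdm_v1 = {'vda': 'VOLUME_ID'}

# Roughly the richer block_device_mapping_v2 entry that the server-side
# schema validation expects instead:
bdm_v2 = [{
    'boot_index': 0,
    'uuid': 'VOLUME_ID',
    'source_type': 'volume',
    'destination_type': 'volume',
    'delete_on_termination': False,
}]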




-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ops] Operator Local Patches

2015-09-30 Thread Matt Riedemann



On 9/29/2015 6:33 PM, Kris G. Lindgren wrote:

Hello All,

We have some pretty good contributions of local patches on the etherpad.
  We are going through right now and trying to group patches that
multiple people are carrying and patches that people may not be carrying
but solves a problem that they are running into.  If you can take some
time and either add your own local patches that you have to the ether
pad or add +1's next to the patches that are laid out, it would help us
immensely.

The etherpad can be found at:
https://etherpad.openstack.org/p/operator-local-patches

Thanks for your help!

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Tuesday, September 22, 2015 at 4:21 PM
To: openstack-operators
Subject: Re: Operator Local Patches

Hello all,

Friendly reminder: If you have local patches and haven't yet done so,
please contribute to the etherpad at:
https://etherpad.openstack.org/p/operator-local-patches

___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy

From: "Kris G. Lindgren"
Date: Friday, September 18, 2015 at 4:35 PM
To: openstack-operators
Cc: Tom Fifield
Subject: Operator Local Patches

Hello Operators!

During the ops meetup in Palo Alto we were talking about sessions for
Tokyo. A session that I proposed, which got a bunch of +1's, was about
local patches that operators were carrying.  From my experience this is
done to either implement business logic,  fix assumptions in projects
that do not apply to your implementation, implement business
requirements that are not yet implemented in openstack, or fix scale
related bugs.  What I would like to do is get a working group together
to do the following:

1.) Document local patches that operators have (even those that are in
gerrit right now waiting to be committed upstream)
2.) Figure out commonality in those patches
3.) Either upstream the common fixes to the appropriate projects or
figure out if a hook can be added to allow people to run their code at
that specific point
4.) ???
5.) Profit

To start this off, I have documented every patch, along with a
description of what it does and why we did it (where needed), that
GoDaddy is running [1].  What I am asking is that the operator community
please update the etherpad with the patches that you are running, so
that we have a good starting point for discussions in Tokyo and beyond.

[1] - https://etherpad.openstack.org/p/operator-local-patches
___
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I saw this originally on the ops list and it's a great idea - cat 
herding the bazillion ops patches and seeing what common things rise to 
the top would be helpful.  Hopefully some of that can then be pushed 
into the projects.


There are a couple of things I could note that are specifically operator 
driven which could use eyes again.


1. purge deleted instances from nova database:

http://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/purge-deleted-instances-cmd.html

The spec is approved for mitaka, the code is out for review.  If people 
could test the change out it'd be helpful to vet its usefulness.


2. I'm trying to revive a spec that was approved in liberty but the code 
never landed:


https://review.openstack.org/#/c/226925/

That's for force resetting quotas for a project/user so that on the next 
pass it gets recalculated. A question came up about making the user 
optional in that command so it's going to require a bit more review 
before we re-approve for mitaka since the design changes slightly.


3. mgagne was good enough to propose a patch upstream to neutron for a 
script he had out of tree:


https://review.openstack.org/#/c/221508/

That's a tool to delete empty linux bridges.  The neutron linuxbridge 
agent used to remove those automatically but it caused race problems 
with nova so that was removed, but it'd still be good to have a tool to 
remove them as needed.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Hongbin Lu
+1 for both. Welcome!

From: Davanum Srinivas [mailto:dava...@gmail.com]
Sent: September-30-15 7:00 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] New Core Reviewers

+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto wrote:
Core Reviewers,

I propose the following additions to magnum-core:

+Vilobh Meshram (vilobhmm)
+Hua Wang (humble00)

Please respond with +1 to agree or -1 to veto. This will be decided by either a 
simple majority of existing core reviewers, or by lazy consensus concluding on 
2015-10-06 at 00:00 UTC, in time for our next team meeting.

Thanks,

Adrian Otto
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-30 Thread Steven Hillman (sthillma)
Makes sense to me.

Opened a bug to track the migration of agents/n1kv_vem.pp out of
puppet-neutron during the M-cycle:
https://bugs.launchpad.net/puppet-neutron/+bug/1501535

Thanks.
Steven Hillman

On 9/29/15, 9:23 AM, "Emilien Macchi"  wrote:

>My suggestion:
>
>* patch master to send deprecation warning if third party repositories
>are managed in our current puppet-neutron module.
>* do not manage third party repositories from now and do not accept any
>patch containing this kind of code.
>* in the next cycle, we will consider deleting legacy code that used to
>manage third party software repos.
>
>Thoughts?
>
>On 09/25/2015 12:32 PM, Anita Kuno wrote:
>> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>>> Hi There,
>>>
>>> I just added my comment on the review. I do agree with Emilien. There
>>>should be specific repos for plugins and drivers.
>>>
>>> BTW. I love the sdnmagic name  ;-)
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 9/25/15, 9:02 AM, "Emilien Macchi"  wrote:
>>>
 In our last meeting [1], we were discussing about whether managing or
 not external packaging repositories for Neutron plugin dependencies.

 Current situation:
 puppet-neutron is installing (packages like neutron-plugin-*) &
 configure Neutron plugins (configuration files like
 /etc/neutron/plugins/*.ini
 Some plugins (Cisco) are doing more: they install third party packages
 (not part of OpenStack), from external repos.

 The question is: should we continue that way and accept that kind of
 patch [2]?

 I vote for no: managing external packages & external repositories should
 be up to an external module.
 Example: my SDN tool is called "sdnmagic":
 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
 configure the .ini file(s) to make it work in Neutron
 2/ create puppet-sdnmagic that will take care of everything else:
 install sdnmagic, manage packaging (and specific dependencies),
 repositories, etc.
 I -1 puppet-neutron should handle it. We are not managing SDN
solutions:
 we are enabling puppet-neutron to work with them.

 I would like to find a consensus here, that will be consistent across
 *all plugins* without exception.


 Thanks for your feedback,

 [1] http://goo.gl/zehmN2
 [2] https://review.openstack.org/#/c/209997/
 -- 
 Emilien Macchi

>>> 
>>>
>>>__
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> 
>> I think the data point provided by the Cinder situation needs to be
>> considered in this decision:
>>https://bugs.launchpad.net/manila/+bug/1499334
>> 
>> The bug report outlines the issue, but the tl;dr is that one Cinder
>> driver changed their licensing on a library required to run in tree
>>code.
>> 
>> Thanks,
>> Anita.
>> 
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>
>-- 
>Emilien Macchi
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] split integration jobs

2015-09-30 Thread Jeremy Stanley
On 2015-09-30 17:14:27 -0400 (-0400), Emilien Macchi wrote:
[...]
> I like #3 but we are going to consume more CI resources (that's why I
> put [infra] tag).
[...]

I don't think adding one more job is going to put a strain on our
available resources. In fact it consumes just about as much to run a
single job twice as long since we're constrained on the number of
running instances in our providers (ignoring for a moment the
spin-up/tear-down overhead incurred per job which, if you're
talking about long-running jobs anyway, is less wasteful than it is
for lots of very quick jobs). The number of puppet changes and
number of jobs currently run on each is considerably lower than a
lot of our other teams as well.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Release of a neutron sub-project

2015-09-30 Thread Vadivel Poonathan
Kyle,

We referenced Arista's setup/config files when we set up the pypi entry for
our plugin. So if it is ok for Arista, then it should be ok for
ale-omniswitch too, I believe. You said Arista was ok when you searched on
Google instead of pypi, in another email. So can you please
check ale-omniswitch again as well and confirm?

If it still has an issue, can you please give me some pointers on where
to enable the openstackci owner permission?

Thanks,
Vad
--

The following pypi registrations did not follow directions to enable
openstackci has "Owner" permissions, which allow for the publishing of

packages to pypi:

networking-ale-omniswitch
networking-arista


On Wed, Sep 30, 2015 at 11:56 AM, Kyle Mestery  wrote:

> On Tue, Sep 29, 2015 at 8:04 PM, Kyle Mestery  wrote:
>
>> On Tue, Sep 29, 2015 at 2:36 PM, Vadivel Poonathan <
>> vadivel.openst...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> As per the Sub-Project Release process - i would like to tag and release
>>> the following sub-project as part of upcoming Liberty release.
>>> The process says talk to one of the member of 'neutron-release' group. I
>>> couldn’t find a group mail-id for this group. Hence I am sending this email
>>> to the dev list.
>>>
>>> I just have removed the version from setup.cfg and got the patch merged,
>>> as specified in the release process. Can someone from the neutron-release
>>> group makes this sub-project release.
>>>
>>>
>>
>> Vlad, I'll do this tomorrow. Find me on IRC (mestery) and ping me there
>> so I can get your IRC NIC in case I have questions.
>>
>>
> It turns out that the networking-ale-omniswitch pypi setup isn't correct,
> see [1] for more info and how to correct. This turned out to be ok, because
> it's forced me to re-examine the other networking sub-projects and their
> pypi setup to ensure consistency, which the thread found here [1] will
> resolve.
>
> Once you resolve this ping me on IRC and I'll release this for you.
>
> Thanks!
> Kyle
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2015-September/075880.html
>
>
>> Thanks!
>> Kyle
>>
>>
>>>
>>> ALE Omniswitch
>>> Git: https://git.openstack.org/cgit/openstack/networking-ale-omniswitch
>>> Launchpad: https://launchpad.net/networking-ale-omniswitch
>>> Pypi: https://pypi.python.org/pypi/networking-ale-omniswitch
>>>
>>> Thanks,
>>> Vad
>>> --
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] New driver submission deadline

2015-09-30 Thread Sean McGinnis
This message is for all new vendors looking to add a new Cinder driver
in the Mitaka release as well as any existing vendors that need to add a
new protocol/driver to what is already in tree.

There has been some discussion on the mailing list and in the IRC
channel about changes to our policy around submitting new drivers. While
this may lead to some changes after further discussion, I just want to
make it very clear that as of right now, there is no change to new
driver submission.

For the Mitaka release, according to our existing policy, the deadline
will be the M-1 milestone between December 1-3 [1].

Please read and understand all details for new driver submission
available on the Cinder wiki [2].

Requirements for a volume driver to be merged:
* The blueprint for your volume driver is submitted and approved.
* Your volume driver code is posted to gerrit and passing gate tests.
* Your volume driver's gerrit review has results posted from your CI [3],
and the CI is passing. Keep in mind that your CI must continue running in
order for your driver to stay in the release. This also applies to future
releases.
* Your volume driver fulfills minimum features. [4]
* You meet all of the above no later than December 1st. Patches can take
quite some time to make it through the gate leading up to a milestone. Do
not wait until the morning of the 1st to submit your driver!
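
For illustration only, below is a heavily trimmed sketch of what a driver
covering a few of those minimum features can look like. The base class and
method names follow the usual Cinder volume driver conventions, but the
vendor name and the method bodies are hypothetical placeholders, not a
template you can submit as-is:

from cinder.volume import driver


class AcmeISCSIDriver(driver.ISCSIDriver):
    """Hypothetical 'Acme' backend; sketches a few required features."""

    VERSION = '1.0.0'

    def create_volume(self, volume):
        # Ask the backend to allocate volume['size'] GB of storage.
        pass

    def delete_volume(self, volume):
        # Remove the backing storage for the volume.
        pass

    def create_snapshot(self, snapshot):
        pass

    def delete_snapshot(self, snapshot):
        pass

    def initialize_connection(self, volume, connector):
        # Return connection info (target portal, IQN, LUN, ...) for attach.
        return {'driver_volume_type': 'iscsi', 'data': {}}

    def terminate_connection(self, volume, connector, **kwargs):
        pass

    def get_volume_stats(self, refresh=False):
        # Report capacity/capabilities so the scheduler can place volumes.
        return {'volume_backend_name': 'acme',
                'vendor_name': 'Acme',
                'driver_version': self.VERSION,
                'storage_protocol': 'iSCSI',
                'total_capacity_gb': 100,
                'free_capacity_gb': 100}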

To be clear:
* Your volume driver submission must meet *all* the items before we
review your code.
* If your volume driver is submitted after Mitaka-1, expect me to
reference this email and we'll request the volume driver to be
submitted in the N release.
* Even if you meet all of the above requirements by December 1st, it is
not guaranteed that your volume driver will be merged. You still need
to address all review comments in a timely manner and allow time for
gate testing to finish.

The initial merge is not a finish line where you are done. If third party CI
stops reporting, is unstable, or the core team has any reason to
question the quality of your driver, it may be removed at any time if
there is no cooperation to resolve any issues or concerns.


[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
[2] https://wiki.openstack.org/wiki/Cinder/how-to-contribute-a-driver
[3] https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers
[4] 
http://docs.openstack.org/developer/cinder/devref/drivers.html#minimum-features

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Large Deployments Team][Performance Team] New informal working group suggestion

2015-09-30 Thread Rogon, Kamil
Hello,

Thanks Dina for bringing up this great idea.



My team at Intel works on performance testing, so we will likely be part of
that project.

The performance aspect at large scale is an obstacle for enterprise
deployments. For that reason the Win The Enterprise group may also be
interested in this topic.



Regards,

Kamil Rogon



Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173
80-298 Gdansk



From: Dina Belova [mailto:dbel...@mirantis.com]
Sent: Wednesday, September 30, 2015 10:27 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Large Deployments Team][Performance Team] New 
informal working group suggestion



Sandeep,



sorry for the late response :) I'm hoping to define 'spheres of interest' and
the most painful moments using people's experience at the Tokyo summit, and
we'll find out what needs to be tested most and what can actually be done.
You can share your ideas of what needs to be tested and focused on in the
https://etherpad.openstack.org/p/openstack-performance-issues etherpad; this
will be a pool of ideas I'm going to use in Tokyo.



I can either create an IRC channel for the discussions or we can use the
#openstack-operators channel, as LDT is already using it for communication.
After the Tokyo summit I'm planning to set up a Doodle vote to find a time
people are comfortable with for periodic meetings :)



Cheers,

Dina



On Fri, Sep 25, 2015 at 1:52 PM, Sandeep Raman  > wrote:

On Tue, Sep 22, 2015 at 6:27 PM, Dina Belova  > wrote:

Hey, OpenStackers!



I'm writing to propose to organise new informal team to work specifically on 
the OpenStack performance issues. This will be a sub team in already existing 
Large Deployments Team, and I suppose it will be a good idea to gather people 
interested in OpenStack performance in one room and identify what issues are 
worrying contributors, what can be done and share results of performance 
researches :)



Dina, I'm focused on performance and scale testing [no coding background].
How can I contribute, and what is the expectation from this informal team?



So please volunteer to take part in this initiative. I hope many people will
be interested and we'll be able to use a cross-project session slot to meet
in Tokyo and hold a kick-off meeting.



I'm not coming to Tokyo. How could I still be part of the discussions, if
any? I also feel it would be good to have an IRC channel for perf-scale
discussion. Let me know your thoughts.



I would like to apologise for writing to two mailing lists at the same time,
but I want to make sure that all possibly interested people will notice this
email.



Thanks and see you in Tokyo :)



Cheers,

Dina



-- 

Best regards,

Dina Belova

Senior Software Engineer

Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] prepare 5.2.0 and 6.1.0 releases

2015-09-30 Thread Emilien Macchi
Hi,

I would like to organize a "release day" sometime soon, to release
5.2.0 (Juno) [1] and 6.1.0 (Kilo) [2].

Also, we will take the opportunity of that day to consolidate our
process and bring more documentation [3].

If you have backport needs, please make sure they are all sent to
Gerrit, so our core team can review them.
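
For reference, proposing a backport is roughly the usual cherry-pick workflow
with git-review (branch name shown for the Juno series; adjust for Kilo):

  git checkout -b my-backport origin/stable/juno
  git cherry-pick -x <commit-sha-from-master>
  git review stable/juno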

If there is any volunteer to help in that process (documentation,
launchpad, release notes, reviewing backports), please raise your hand
on IRC.

Once we release 5.2.0 and 6.1.0, we will schedule the 7.0.0 (Liberty)
release (probably end of October / early November), but for now we're still
waiting for UCA & RDO Liberty stable packaging.

Thanks!

[1] https://goo.gl/U767kI
[2] https://goo.gl/HPuVfA
[3] https://wiki.openstack.org/wiki/Puppet/releases
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Remove nova-network as a deployment option in Fuel?

2015-09-30 Thread Mike Scherbakov
Hi team,
where do we stand with it now? I remember there was a plan to remove
nova-network support in 7.0, but we delayed it due to vcenter/dvr or
something that was not ready for it.

Can we delete it now? The earlier in the cycle we do it, the easier it will
be.

Thanks!
-- 
Mike Scherbakov
#mihgen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Steven Dake (stdake)
Joshua,

If you share resources, you give up multi-tenancy.  No COE system has the
concept of multi-tenancy (kubernetes has some basic implementation but it
is totally insecure).  Not only does multi-tenancy have to “look like” it
offers multiple tenants isolation, but it actually has to deliver the
goods.

I understand that at first glance a company like Yahoo may not want
separate bays for their various applications because of the perceived
administrative overhead.  I would then challenge Yahoo to go deploy a COE
like kubernetes (which has no multi-tenancy or a very basic implementation
of such) and get it to work with hundreds of different competing
applications.  I would speculate the administrative overhead of getting
all that to work would be greater than the administrative overhead of
simply doing a bay create for the various tenants.
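
To make that concrete: with the current magnum CLI, "a bay create for the
various tenants" is roughly the following, run once per tenant under that
tenant's credentials (flag values here are illustrative, not a recommended
configuration):

  magnum baymodel-create --name k8s-model --image-id fedora-21-atomic-5 \
      --keypair-id mykey --external-network-id public --coe kubernetes
  magnum bay-create --name tenant-a-bay --baymodel k8s-model --node-count 3

Each tenant then gets its own isolated COE endpoint to point kubectl or other
native tooling at.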

Placing tenancy inside a COE seems interesting, but no COE does that
today.  Maybe in the future they will.  Magnum was designed to present an
integration point between COEs and OpenStack today, not five years down
the road. It's not as if we took shortcuts to get to where we are.

I will grant you that density is lower with the current design of Magnum
vs a full on integration with OpenStack within the COE itself.  However,
that model which is what I believe you proposed is a huge design change to
each COE which would overly complicate the COE at the gain of increased
density.  I personally don’t feel that pain is worth the gain.

Regards,
-steve


On 9/30/15, 2:18 PM, "Joshua Harlow"  wrote:

>Wouldn't that limit the ability to share/optimize resources then and
>increase the number of operators needed (since each COE/bay would need
>its own set of operators managing it)?
>
>If all tenants are in a single openstack cloud, and under say a single
>company then there isn't much need for management isolation (in fact I
>think said feature is actually an anti-feature in a case like this).
>Especially since that management is already by keystone and the
>project/tenant & user associations and such there.
>
>Security isolation I get, but if the COE is already multi-tenant aware
>and that multi-tenancy is connected into the openstack tenancy model,
>then it seems like that point is nil?
>
>I get that the current tenancy boundary is the bay (aka the COE right?)
>but is that changeable? Is that ok with everyone, it seems oddly matched
>to say a company like yahoo, or other private cloud, where one COE would
>I think be preferred and tenancy should go inside of that; vs a eggshell
>like solution that seems like it would create more management and
>operability pain (now each yahoo internal group that creates a bay/coe
>needs to figure out how to operate it? and resources can't be shared
>and/or orchestrated across bays; hmm, seems like not fully using a COE
>for what it can do?)
>
>Just my random thoughts, not sure how much is fixed in stone.
>
>-Josh
>
>Adrian Otto wrote:
>> Joshua,
>>
>> The tenancy boundary in Magnum is the bay. You can place whatever
>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
>> Swarm). This allows you to use native tools to interact with the COE in
>> that bay, rather than using an OpenStack specific client. If you want to
>> use the OpenStack client to create both bays, pods, and containers, you
>> can do that today. You also have the choice, for example, to run kubctl
>> against your Kubernetes bay, if you so desire.
>>
>> Bays offer both a management and security isolation between multiple
>> tenants. There is no intent to share a single bay between multiple
>> tenants. In your use case, you would simply create two bays, one for
>> each of the yahoo-mail.XX tenants. I am not convinced that having an
>> uber-tenant makes sense.
>>
>> Adrian
>>
>>> On Sep 30, 2015, at 1:13 PM, Joshua Harlow >> > wrote:
>>>
>>> Adrian Otto wrote:
 Thanks everyone who has provided feedback on this thread. The good
 news is that most of what has been asked for from Magnum is actually
 in scope already, and some of it has already been implemented. We
 never aimed to be a COE deployment service. That happens to be a
 necessity to achieve our more ambitious goal: We want to provide a
 compelling Containers-as-a-Service solution for OpenStack clouds in a
 way that offers maximum leverage of what’s already in OpenStack,
 while giving end users the ability to use their favorite tools to
 interact with their COE of choice, with the multi-tenancy capability
 we expect from all OpenStack services, and simplified integration
 with a wealth of existing OpenStack services (Identity,
 Orchestration, Images, Networks, Storage, etc.).

 The areas we have disagreement are whether the features offered for
 the k8s COE should be mirrored in other COE’s. We have not attempted
 to do that yet, and my suggestion is to continue 

Re: [openstack-dev] [magnum] New Core Reviewers

2015-09-30 Thread Davanum Srinivas
+1 from me for both Vilobh and Hua.

Thanks,
Dims

On Wed, Sep 30, 2015 at 6:47 PM, Adrian Otto 
wrote:

> Core Reviewers,
>
> I propose the following additions to magnum-core:
>
> +Vilobh Meshram (vilobhmm)
> +Hua Wang (humble00)
>
> Please respond with +1 to agree or -1 to veto. This will be decided by
> either a simple majority of existing core reviewers, or by lazy consensus
> concluding on 2015-10-06 at 00:00 UTC, in time for our next team meeting.
>
> Thanks,
>
> Adrian Otto
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pypi packages for networking sub-projects

2015-09-30 Thread Cathy Zhang
Hi Kyle,

Is this only about the sub-projects that are ready for release? I do not see
the networking-sfc sub-project in the list. Does this mean we have done the
pypi registration for the networking-sfc project correctly, or is it not
checked because it is not ready for release yet?

Thanks,
Cathy

From: Kyle Mestery [mailto:mest...@mestery.com]
Sent: Wednesday, September 30, 2015 11:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [neutron] pypi packages for networking sub-projects

Folks:
In trying to release some networking sub-projects recently, I ran into an issue 
[1] where I couldn't release some projects due to them not being registered on 
pypi. I have a patch out [2] which adds pypi publishing jobs, but before that 
can merge, we need to make sure all projects have pypi registrations in place. 
The following networking sub-projects do NOT have pypi registrations in place 
and need them created following the guidelines here [3]:
networking-calico
networking-infoblox
networking-powervm

The following pypi registrations did not follow directions to enable
openstackci as "Owner", which allows for the publishing of packages
to pypi:
networking-ale-omniswitch
networking-arista
networking-l2gw
networking-vsphere

Once these are corrected, we can merge [2], which will then give the
neutron-release team the ability to release pypi packages for those projects.
Thanks!
Kyle

[1] 
http://lists.openstack.org/pipermail/openstack-infra/2015-September/003244.html
[2] https://review.openstack.org/#/c/229564/1
[3] 
http://docs.openstack.org/infra/manual/creators.html#give-openstack-permission-to-publish-releases
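
For anyone doing this for the first time, the steps in [3] amount to roughly
the following for a pbr/setuptools based project (the "Owner" role itself is
granted through the PyPI web interface, not the CLI, so treat this as a rough
sketch only):

  # create the project entry on pypi
  python setup.py register -r pypi
  # then on https://pypi.python.org/pypi/<project>, under "roles", add the
  # "openstackci" user with the "Owner" role so infra can publish releases

Once that is in place, the publishing jobs added in [2] can upload releases
when a tag is pushed.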
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron + ovn] Does neutron ovn plugin support to setup multiple neutron networks for one container?

2015-09-30 Thread Murali R
Yes, SFC without NSH is what I am looking into, and I am thinking OVN can
offer a better approach.

I did an implementation of SFC around NSH that used OVS and flows from a
custom ovs-agent back in March-May. I added fields in the OVS agent to send
additional info for actions as well. The Neutron side was quite trivial, but
the solution required OVS to listen on a different port to handle the NSH
header, which doubled the number of tunnels. The OVS code we used/modified
was either from the link you sent or some other similar implementation from
Cisco folks (I don't recall) that had actions and conditional commands for
the field. My thought was to have generic OVS code that can compare or set
actions on any configured address field, but I haven't thought through how
to do that. In any case, with OVN we cannot define custom flows directly on
OVS, so that approach is dated now. I am hoping a similar feature can be
added to OVN that can transpose some header field into Geneve options.
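
To make the idea concrete, here is a very rough sketch of the kind of thing I
mean, assuming the configurable Geneve TLV option / tun_metadata support that
was still being worked on in OVS at the time; the option class/type and the
flow below are purely illustrative and not something OVN exposes today:

  # map a Geneve TLV option to a tunnel metadata field
  ovs-ofctl add-tlv-map br-int "{class=0xffff,type=0x1,len=4}->tun_metadata0"
  # copy a header value (staged in a register here) into that option on egress
  ovs-ofctl add-flow br-int \
      "table=0,in_port=1,actions=move:NXM_NX_REG0[0..31]->NXM_NX_TUN_METADATA0[0..31],output:2"

The interesting part for OVN would be expressing that mapping in the logical
flow pipeline rather than in hand-written flows.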

I am trying something right now with OVN and will be attending the OVS
conference in November. I am skipping the OpenStack summit to attend
something else in the Far East during that time. But let's keep the
discussion going and collaborate if you work on SFC.

On Wed, Sep 30, 2015 at 2:11 PM, Russell Bryant  wrote:

> On 09/30/2015 04:09 PM, Murali R wrote:
> > Russel,
> >
> > For instance if I have a nsh header embedded in vxlan in the incoming
> > packet, I was wondering if I can transfer that to geneve options
> > somehow. This is just as an example. I may have header other info either
> > in vxlan or ip that needs to enter the ovn network and if we have
> > generic ovs commands to handle that, it will be useful. If commands
> > don't exist but extensible then I can do that as well.
>
> Well, OVS itself doesn't support NSH yet.  There are patches on the OVS
> dev mailing list for it, though.
>
> http://openvswitch.org/pipermail/dev/2015-September/060678.html
>
> Are you interested in SFC?  I have been thinking about that and don't
> think it will be too hard to add support for it in OVN.  I'm not sure
> when I'll work on it, but it's high on my personal todo list.  If you
> want to do it with NSH, that will require OVS support first, of course.
>
> If you're interested in more generic extensibility of OVN, there's at
> least going to be one talk about that at the OVS conference in November.
>  If you aren't there, it will be on video.  I'm not sure what ideas they
> will be proposing.
>
> Since we're on the OpenStack list, I assume we're talking in the
> OpenStack context.  For any feature we're talking about, we also have to
> talk about how that is exposed through the Neutron API.  So, "generic
> extensibility" doesn't immediately make sense for the Neutron case.
>
> SFC certainly makes sense.  There's a Neutron project for adding an SFC
> API and from what I've seen so far, I think we'll be able to extend OVN
> such that it can back that API.
>
> --
> Russell Bryant
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Liberty RC1 availability in Debian

2015-09-30 Thread Thomas Goirand
On 09/30/2015 07:25 PM, Jordan Pittier wrote:
> We are not used to reading "thanks" messages from you :) So I enjoy this
> email even more !

I am well aware that I have the reputation within the community of
complaining too much. I'm the world champion of starting monster troll
threads by mistake. :)

Though mostly, I do like everyone I've approached so far (except maybe 2
persons out of a few hundred, which is unavoidable), and feel like we
have an awesome, very helpful and friendly community.

It is my hope that everyone understands the number of "WTF" situations I
face every day due to what I do, and that I'm close to burning out
at the end of each release. Liberty isn't an exception. Seeing that
Tempest finally ran yesterday evening filled me with joy. These last
remaining 15 days before the final release will be painful, even though
I'm nearly done for this cycle: I do need holidays...

So let me do it once more: thanks everyone! :)

Looking forward to meeting so many friends in Tokyo,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2015-09-30 Thread Joshua Harlow
Totally get it,

And it's interesting the boundaries that are being pushed,

Also interesting to know the state of the world, the state of magnum,
and the state of COE systems. I'm somewhat surprised that they lack
multi-tenancy in any kind of manner (but I guess I'm not too surprised;
it's a feature that many don't add on until later, for better or
worse...), especially kubernetes (coming from google), but not entirely
shocked by it ;-)

Insightful stuff, thanks :)

Steven Dake (stdake) wrote:
> Joshua,
> 
> If you share resources, you give up multi-tenancy.  No COE system has the
> concept of multi-tenancy (kubernetes has some basic implementation but it
> is totally insecure).  Not only does multi-tenancy have to “look like” it
> offers multiple tenants isolation, but it actually has to deliver the
> goods.
> 
> I understand that at first glance a company like Yahoo may not want
> separate bays for their various applications because of the perceived
> administrative overhead.  I would then challenge Yahoo to go deploy a COE
> like kubernetes (which has no multi-tenancy or a very basic implementation
> of such) and get it to work with hundreds of different competing
> applications.  I would speculate the administrative overhead of getting
> all that to work would be greater than the administrative overhead of
> simply doing a bay create for the various tenants.
> 
> Placing tenancy inside a COE seems interesting, but no COE does that
> today.  Maybe in the future they will.  Magnum was designed to present an
> integration point between COEs and OpenStack today, not five years down
> the road. It's not as if we took shortcuts to get to where we are.
> 
> I will grant you that density is lower with the current design of Magnum
> vs a full on integration with OpenStack within the COE itself.  However,
> that model which is what I believe you proposed is a huge design change to
> each COE which would overly complicate the COE at the gain of increased
> density.  I personally don’t feel that pain is worth the gain.
> 
> Regards,
> -steve
> 
> 
> On 9/30/15, 2:18 PM, "Joshua Harlow"  wrote:
> 
>> Wouldn't that limit the ability to share/optimize resources then and
>> increase the number of operators needed (since each COE/bay would need
>> its own set of operators managing it)?
>>
>> If all tenants are in a single openstack cloud, and under say a single
>> company then there isn't much need for management isolation (in fact I
>> think said feature is actually an anti-feature in a case like this).
>> Especially since that management is already by keystone and the
>> project/tenant&  user associations and such there.
>>
>> Security isolation I get, but if the COE is already multi-tenant aware
>> and that multi-tenancy is connected into the openstack tenancy model,
>> then it seems like that point is nil?
>>
>> I get that the current tenancy boundary is the bay (aka the COE right?)
>> but is that changeable? Is that ok with everyone, it seems oddly matched
>> to say a company like yahoo, or other private cloud, where one COE would
>> I think be preferred and tenancy should go inside of that; vs a eggshell
>> like solution that seems like it would create more management and
>> operability pain (now each yahoo internal group that creates a bay/coe
>> needs to figure out how to operate it? and resources can't be shared
>> and/or orchestrated across bays; hmm, seems like not fully using a COE
>> for what it can do?)
>>
>> Just my random thoughts, not sure how much is fixed in stone.
>>
>> -Josh
>>
>> Adrian Otto wrote:
>>> Joshua,
>>>
>>> The tenancy boundary in Magnum is the bay. You can place whatever
>>> single-tenant COE you want into the bay (Kubernetes, Mesos, Docker
>>> Swarm). This allows you to use native tools to interact with the COE in
>>> that bay, rather than using an OpenStack specific client. If you want to
>>> use the OpenStack client to create both bays, pods, and containers, you
>>> can do that today. You also have the choice, for example, to run kubctl
>>> against your Kubernetes bay, if you so desire.
>>>
>>> Bays offer both a management and security isolation between multiple
>>> tenants. There is no intent to share a single bay between multiple
>>> tenants. In your use case, you would simply create two bays, one for
>>> each of the yahoo-mail.XX tenants. I am not convinced that having an
>>> uber-tenant makes sense.
>>>
>>> Adrian
>>>
 On Sep 30, 2015, at 1:13 PM, Joshua Harlow>  wrote:

 Adrian Otto wrote:
> Thanks everyone who has provided feedback on this thread. The good
> news is that most of what has been asked for from Magnum is actually
> in scope already, and some of it has already been implemented. We
> never aimed to be a COE deployment service. That happens to be a
> necessity to achieve our more ambitious goal: We want to provide a
> compelling Containers-as-a-Service 

Re: [openstack-dev] [puppet] should puppet-neutron manage third party software?

2015-09-30 Thread Emilien Macchi


On 09/29/2015 12:23 PM, Emilien Macchi wrote:
> My suggestion:
> 
> * patch master to send deprecation warning if third party repositories
> are managed in our current puppet-neutron module.
> * do not manage third party repositories from now and do not accept any
> patch containing this kind of code.
> * in the next cycle, we will consider deleting legacy code that used to
> manage third party software repos.
> 
> Thoughts?

Silence probably means lazy consensus.
I submitted a patch: https://review.openstack.org/#/c/229675/ - please
review.

I also contacted Cisco and they acknowledged it, and will work on
puppet-n1kv to externalize third party software.
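
To make the proposed split concrete, here is a rough Puppet sketch using the
hypothetical "sdnmagic" plugin from the original thread; the resource and
class names are illustrative only:

  # in puppet-neutron: only the Neutron-facing bits (plugin package + config)
  class neutron::plugins::sdnmagic {
    package { 'neutron-plugin-sdnmagic':
      ensure => present,
    }
    # hypothetical provider for /etc/neutron/plugins/sdnmagic/sdnmagic.ini
    neutron_plugin_sdnmagic { 'DEFAULT/controller_ip':
      value => '192.0.2.10',
    }
  }

  # in a separate puppet-sdnmagic module: vendor repo and third party software
  class sdnmagic::repo {
    yumrepo { 'sdnmagic':
      baseurl  => 'http://repo.example.com/sdnmagic/el7/',
      enabled  => 1,
      gpgcheck => 0,
    }
  }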


> On 09/25/2015 12:32 PM, Anita Kuno wrote:
>> On 09/25/2015 12:14 PM, Edgar Magana wrote:
>>> Hi There,
>>>
>>> I just added my comment on the review. I do agree with Emilien. There 
>>> should be specific repos for plugins and drivers.
>>>
>>> BTW. I love the sdnmagic name  ;-)
>>>
>>> Edgar
>>>
>>>
>>>
>>>
>>> On 9/25/15, 9:02 AM, "Emilien Macchi"  wrote:
>>>
 In our last meeting [1], we were discussing whether or not to manage
 external packaging repositories for Neutron plugin dependencies.

 Current situation:
 puppet-neutron installs packages (like neutron-plugin-*) and configures
 Neutron plugins (configuration files like /etc/neutron/plugins/*.ini).
 Some plugins (Cisco) do more: they install third party packages
 (not part of OpenStack) from external repos.

 The question is: should we continue that way and accept that kind of
 patch [2]?

 I vote for no: managing external packages & external repositories should
 be up to an external module.
 Example: my SDN tool is called "sdnmagic":
 1/ patch puppet-neutron to manage neutron-plugin-sdnmagic package and
 configure the .ini file(s) to make it work in Neutron
 2/ create puppet-sdnmagic that will take care of everything else:
 install sdnmagic, manage packaging (and specific dependencies),
 repositories, etc.
 I'm -1 on puppet-neutron handling it. We are not managing SDN solutions:
 we are enabling puppet-neutron to work with them.

 I would like to find a consensus here, that will be consistent across
 *all plugins* without exception.


 Thanks for your feedback,

 [1] http://goo.gl/zehmN2
 [2] https://review.openstack.org/#/c/209997/
 -- 
 Emilien Macchi

>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> I think the data point provided by the Cinder situation needs to be
>> considered in this decision: https://bugs.launchpad.net/manila/+bug/1499334
>>
>> The bug report outlines the issue, but the tl;dr is that one Cinder
>> driver changed their licensing on a library required to run in tree code.
>>
>> Thanks,
>> Anita.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [infra] split integration jobs

2015-09-30 Thread Andrew Woodward
Emilien,

What image is being used to spawn the instance? We see 300 sec as a good
timeout in Fuel with a CirrOS image. The time can usually be substantially
cut, even for images of any real size, by using Ceph for ephemeral storage...
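
If the timeout in question is Tempest's server build timeout, the relevant
knobs are roughly these tempest.conf settings (values here are only
illustrative):

  [compute]
  image_ref = <cirros image uuid>
  build_timeout = 300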

On Wed, Sep 30, 2015 at 4:37 PM Jeremy Stanley  wrote:

> On 2015-09-30 17:14:27 -0400 (-0400), Emilien Macchi wrote:
> [...]
> > I like #3 but we are going to consume more CI resources (that's why I
> > put [infra] tag).
> [...]
>
> I don't think adding one more job is going to put a strain on our
> available resources. In fact it consumes just about as much to run a
> single job twice as long since we're constrained on the number of
> running instances in our providers (ignoring for a moment the
> spin-up/tear-down overhead incurred per job which, if you're
> talking about long-running jobs anyway, is less wasteful than it is
> for lots of very quick jobs). The number of puppet changes and
> number of jobs currently run on each is considerably lower than a
> lot of our other teams as well.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

--

Andrew Woodward

Mirantis

Fuel Community Ambassador

Ceph Community
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

