Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Chris Dent

On Thu, 5 Nov 2015, Robert Collins wrote:


In the session we were told that zookeeper is already used in CI jobs
for ceilometer (was this wrong?) and that's why we figured it made a
sane default for devstack.


For clarity: What ceilometer (actually gnocchi) is doing is using tooz
in CI (gate-ceilometer-dsvm-integration). And for now it is using
redis as that was "simple".

Outside of CI it is possible to deploy ceilometer, aodh and gnocchi to use
tooz for coordinating group partitioning in active-active HA setups
and shared locks. Again, the standard deploy for that has been to use
redis because of availability. It's fairly well understood that zookeeper
would be more correct, but there are packaging concerns.
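
For illustration, a minimal tooz sketch of the shared-lock usage described
above; the backend URL, host/port and member id are assumptions, and
swapping redis for zookeeper is just a change of URL:

  from tooz import coordination

  coordinator = coordination.get_coordinator(
      'redis://localhost:6379',   # or 'zookeeper://localhost:2181'
      b'worker-1')
  coordinator.start()

  lock = coordinator.get_lock(b'shared-lock')
  with lock:
      pass  # critical section, coordinated across service workers

  coordinator.stop()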

--
Chris Dent   (╯°□°)╯︵┻━┻   http://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Tang Chen


On 11/05/2015 02:36 AM, Jonathan D. Proulx wrote:

On Wed, Nov 04, 2015 at 06:17:17PM +, Murray, Paul (HP Cloud) wrote:
:> From: Jay Pipes [mailto:jaypi...@gmail.com]
:> A fair point. However, I think that a generic update VM API, which would
:> allow changes to the resources consumed by the VM along with capabilities
:> like CPU model or local disk performance (SSD), is a better way to handle this
:> than a resize-specific API.
:
:
:Sorry I am so late to this - but this stuck out for me.
:
:Resize is an operation that a cloud user would do to his VM. Usually the
:cloud user does not know what host the VM is running on, so a resize does
:not appear to be a move at all.
:
:Migrate is an operation that a cloud operator does to a VM that is not normally
:available to a cloud user. A cloud operator does not change the VM because
:the operator just provides what the user asked for. He only chooses where he is
:going to put it.
:
:It seems clear to me that resize and migrate are very definitely different things,
:even if they are implemented using the same code path internally for convenience.
:At the very least I believe they need to be kept separate at the API so we can apply
:different policy to control access to them.

As an operator I'm with Paul on this.

By all means use the same code path, because behind the scenes it *is*
the same thing.

BUT, at the API level we do need the distinction, particularly for access
control policy. The UX 'findability' is important too, but if that
were the only issue a bit of syntactic sugar in the UI could take care
of it.

-Jon


Hi Jonathan,

I'm sorry, but I cannot understand why resize and migrate are the same
thing behind the scenes.

I have some understanding of my own here; please help to review:
https://blueprints.launchpad.net/nova/+spec/migration-type-refactor

I'm not sure whether my understanding is right or wrong. In my
understanding, resizing a VM doesn't need to migrate it.

Thanks.
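
For context, the two actions can be gated by separate policy rules, roughly
like this (rule names are indicative of Nova's policy.json of that era, not
quoted from it):

  policy_rules = {
      # resize: a user-facing action on one's own VM
      "compute:resize": "rule:admin_or_owner",
      # migrate: an operator-facing action that only picks a new host
      "compute_extension:admin_actions:migrate": "rule:admin_api",
  }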





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [kuryr] external-network-connectivity

2015-11-05 Thread Vikas Choudhary
++[Neutron] tag

On Thu, Nov 5, 2015 at 2:43 PM, Fawad Khaliq  wrote:

>
> On Thu, Nov 5, 2015 at 10:07 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> @Fawad,
>>
>> I could not follow you completely. Are you suggesting writing a spec for my
>> already drafted bp [1]? If yes, thanks for the suggestion; will do once the
>> design discussion gets finalized.
>>
> That's right. Sounds good. Thanks!
>
>
>>
>>
>> [1]
>> https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity
>>
>>
>> Thanks
>> Vikas
>>
>> On Thu, Nov 5, 2015 at 2:26 PM, Fawad Khaliq  wrote:
>>
>>> Hi Vikas,
>>>
>>> I suggest you take a stab at creating a blueprint for this, with details,
>>> through Gerrit on top of this [1].
>>>
>>> @Toni/Gal, it would be great to have specs as part of Kuryr. I have added
>>> the directory with a template here [1].
>>>
>>> [1] https://review.openstack.org/#/c/241935/
>>>
>>> Thanks,
>>> Fawad Khaliq
>>>
>>>
>>> On Thu, Nov 5, 2015 at 7:08 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi All,

 Would appreciate your views on
 https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity
 .



 -Vikas Choudhary


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] mutihost networking with nova vm as docker host

2015-11-05 Thread Vikas Choudhary
++[Neutron] tag

On Thu, Nov 5, 2015 at 11:33 AM, Vikas Choudhary  wrote:

> Hi All,
>
> I would appreciate inputs on the following queries:
> 1. Are we assuming nova bare-metal nodes to be the docker hosts for now?
>
> If not:
>  - Assuming a nova VM as docker host and ovs as networking plugin:
> This line is from the etherpad [1]: "Each driver would have an
> executable that receives the name of the veth pair that has to be bound to
> the overlay".
> Query 1: As per the current ovs binding proposals by Feisky [2]
> and Diga [3], the vif seems to be bound to br-int on the VM. I am unable to
> understand how the overlay will work. AFAICT, neutron will configure only
> the br-tun of the compute machine's ovs. How will the overlay (br-tun)
> configuration happen inside the VM?
>
>  Query 2: Will we have double encapsulation (both at the VM and the
> compute host)? Isn't it possible to bind the vif to the compute host's
> br-int?
>
>  Query 3: I did not see subnet tags for the network plugin being
> passed in any of the binding patches [2][3][4]. Don't we need those?
>
>
> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
> [2]  https://review.openstack.org/#/c/241558/
> [3]  https://review.openstack.org/#/c/232948/1
> [4]  https://review.openstack.org/#/c/227972/
>
>
> -Vikas Choudhary
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-05 Thread Richard Jones
As a consumer of such APIs on the Horizon side, I'm all for consistency in
pagination, and more of it, so yes please!
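
For reference, the marker/limit pattern the thread is asking to standardize
looks roughly like this; the endpoint, token handling and field names below
are illustrative assumptions, not the actual Nova API:

  import requests

  def list_all(base_url, token, limit=100):
      items, marker = [], None
      while True:
          params = {'limit': limit}
          if marker:
              params['marker'] = marker
          resp = requests.get(base_url + '/os-hypervisors', params=params,
                              headers={'X-Auth-Token': token})
          page = resp.json().get('hypervisors', [])
          items.extend(page)
          if len(page) < limit:
              return items          # short page means no more results
          marker = page[-1]['id']   # last id seeds the next request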

On 5 November 2015 at 13:24, Tony Breeds  wrote:

> On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
> > Hi All,
> > Around the middle of October a spec [1] was uploaded to add pagination
> > support to the os-hypervisors API. While I recognize the use case, it seemed
> > like adding another pagination implementation wasn't an awesome idea.
> >
> > Today I see 3 more requests to add pagination to APIs [2]
> >
> > Perhaps I'm overthinking it, but should we do something more strategic
> > rather than scattering "add pagination here"?
> >
> > It looks to me like we have at least 3 parties interested in this.
> >
> > Yours Tony.
> >
> > [1] https://review.openstack.org/#/c/234038
> > [2]
> https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z
>
> Sorry about the send without complete subject.
>
> Yours Tony.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Vikas Choudhary
++ [Neutron] tag

On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary  wrote:

> Hi all,
>
> By network control plane I specifically mean here sharing network state
> across docker daemons sitting on different hosts/nova VMs in multi-host
> networking.
>
> libnetwork provides flexibility where vendors have a choice between the
> network control plane being handled by libnetwork (libkv) or by the remote
> driver itself, out of band. A vendor can choose to "mute" libnetwork/libkv
> by advertising the remote driver capability as "local".
>
> "local" is our current default "capability" configuration in kuryr.
>
> I have the following queries:
> 1. Does it mean Kuryr is taking responsibility for sharing network state
> across docker daemons? If yes, a network created on one docker host should
> be visible in "docker network ls" on other hosts. To achieve this, I guess
> the kuryr driver will need the help of some distributed data-store like
> consul etc., so that the kuryr driver could create the network in docker on
> the other hosts. Is this correct?
>
> 2. Why can we not set the default scope to "global" and let libkv do the
> network state sync work?
>
> Thoughts?
>
> Regards
> -Vikas Choudhary
>
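
In kuryr terms, that capability advertisement is a one-line answer in the
libnetwork remote driver API. A minimal sketch (kuryr itself is a Flask
app; the endpoint name is libnetwork's, the rest is illustrative):

  from flask import Flask, jsonify

  app = Flask(__name__)

  @app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
  def get_capabilities():
      # 'global' lets libkv share network state across hosts;
      # 'local' mutes libkv and leaves the syncing to the driver.
      return jsonify({'Scope': 'global'})

  if __name__ == '__main__':
      app.run()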
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Default PostgreSQL server encoding is 'ascii'

2015-11-05 Thread Evgeniy L
Hi,

I believe we don't have any VirtualBox-specific hacks, especially in terms
of database configuration. By "development env" Vitaly meant the fake UI,
where the developer installs and configures the database himself, without
any iso images, so his db is probably configured correctly with utf-8.

Also, we should make sure that upgrade works correctly after the problem
is fixed.

Thanks,

On Wed, Nov 4, 2015 at 2:30 PM, Artem Roma  wrote:

> Hi, folks!
>
> Recently I've been working on this bug [1] and have found that the default
> encoding of the database server used by the Fuel infrastructure components
> (Nailgun, OSTF, etc.) is ascii. At least this is true for environments set
> up via the VirtualBox scripts. This situation may (and, per the bug,
> already does) cause hard-to-diagnose problems when dealing with non-ascii
> string data supplied by users, such as names for nodes, clusters, etc.
> Nailgun encodes such data in UTF-8 before sending it to the database, so
> misinterpretation by the latter while saving it is a sure thing.
>
> I wonder if we have this situation in all Fuel environments or only those
> set up by the VirtualBox scripts, because to me it seems like a pretty
> serious flaw in our infrastructure. It would be great to have some comments
> from people more competent in the relevant areas.
>
> [1] https://bugs.launchpad.net/fuel/+bug/1472275
>
> --
> Regards!)
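
A quick way to verify the server encoding Artem describes, as a sketch
(connection parameters here are placeholders, not Fuel's actual settings):

  import psycopg2

  conn = psycopg2.connect(host='localhost', dbname='nailgun',
                          user='nailgun', password='secret')
  cur = conn.cursor()
  cur.execute("SHOW server_encoding")
  print(cur.fetchone())  # ('SQL_ASCII',) indicates the problematic default
  conn.close()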
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] external-network-connectivity

2015-11-05 Thread Vikas Choudhary
@Fawad,

I could not follow you completely. Are you suggesting writing a spec for my
already drafted bp [1]? If yes, thanks for the suggestion; will do once the
design discussion gets finalized.


[1]
https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity

Thanks
Vikas

On Thu, Nov 5, 2015 at 2:26 PM, Fawad Khaliq  wrote:

> Hi Vikas,
>
> I suggest you take a stab at creating a blueprint for this, with details,
> through Gerrit on top of this [1].
>
> @Toni/Gal, it would be great to have specs as part of Kuryr. I have added
> the directory with a template here [1].
>
> [1] https://review.openstack.org/#/c/241935/
>
> Thanks,
> Fawad Khaliq
>
>
> On Thu, Nov 5, 2015 at 7:08 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi All,
>>
>> Would appreciate your views on
>> https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity
>> .
>>
>>
>>
>> -Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] external-network-connectivity

2015-11-05 Thread Fawad Khaliq
On Thu, Nov 5, 2015 at 10:07 AM, Vikas Choudhary  wrote:

> @Fawad,
>
> I could not follow you completely. Are you suggesting writing a spec for my
> already drafted bp [1]? If yes, thanks for the suggestion; will do once the
> design discussion gets finalized.
>
That's right. Sounds good. Thanks!


>
>
> [1]
> https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity
>
>
> Thanks
> Vikas
>
> On Thu, Nov 5, 2015 at 2:26 PM, Fawad Khaliq  wrote:
>
>> Hi Vikas,
>>
>> I suggest you take a stab at creating a blueprint for this, with details,
>> through Gerrit on top of this [1].
>>
>> @Toni/Gal, it would be great to have specs as part of Kuryr. I have added
>> the directory with a template here [1].
>>
>> [1] https://review.openstack.org/#/c/241935/
>>
>> Thanks,
>> Fawad Khaliq
>>
>>
>> On Thu, Nov 5, 2015 at 7:08 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> Would appreciate your views on
>>> https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity
>>> .
>>>
>>>
>>>
>>> -Vikas Choudhary
>>>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-05 Thread Fawad Khaliq
On Tue, Nov 3, 2015 at 5:49 PM, Ihar Hrachyshka  wrote:

>
> Hi all,
>
> currently we have a single neutron-wide stable-maint gerrit group that
> maintains all stable branches for all stadium subprojects. I believe
> that in lots of cases it would be better to have subproject members to
> run their own stable maintenance programs, leaving
> neutron-stable-maint folks to help them in non-obvious cases, and to
> periodically validate that project-wide stable policies are still honored.
>
> I suggest we open gate to creating subproject stable-maint teams where
> current neutron-stable-maint members feel those subprojects are ready
> for that and can be trusted to apply stable branch policies in
> consistent way.
>
> Note that I don't suggest we grant those new permissions completely
> automatically. If neutron-stable-maint team does not feel safe to give
> out those permissions to some stable branches, their feeling should be
> respected.
>
> I believe it will be beneficial both for subprojects that would be
> able to iterate on backports in more efficient way; as well as for
> neutron-stable-maint members who are often busy with other stuff, and
> often times are not the best candidates to validate technical validity
> of backports in random stadium projects anyway. It would also be in
> line with general 'open by default' attitude we seem to embrace in
> Neutron.
>
> If we decide it's the way to go, there are alternatives on how we
> implement it. For example, we can grant those subproject teams all
> permissions to merge patches; or we can leave +W votes to
> neutron-stable-maint group.
>
> I vote for opening the gates, *and* for granting +W votes where
> projects showed reasonable quality of proposed backports before; and
> leaving +W to neutron-stable-maint in those rare cases where history
> showed backports could get more attention and safety considerations
> [with expectation that those subprojects will eventually own +W votes
> as well, once quality concerns are cleared].
>
> If we indeed decide to bootstrap subproject stable-maint teams, I
> volunteer to reach the candidate teams for them to decide on initial
> lists of stable-maint members, and walk them thru stable policies.
>
> Comments?
>
+1


>
> Ihar
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] external-network-connectivity

2015-11-05 Thread Fawad Khaliq
Hi Vikas,

I suggest you take a stab at creating a blueprint for this, with details,
through Gerrit on top of this [1].

@Toni/Gal, it would be great to have specs as part of Kuryr. I have added
the directory with a template here [1].

[1] https://review.openstack.org/#/c/241935/

Thanks,
Fawad Khaliq


On Thu, Nov 5, 2015 at 7:08 AM, Vikas Choudhary 
wrote:

> Hi All,
>
> Would appreciate your views on
> https://blueprints.launchpad.net/kuryr/+spec/external-network-connectivity
> .
>
>
>
> -Vikas Choudhary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-05 Thread Evgeniy L
Hi Javeria,

As far as I know there is no way to run a task on the Fuel master host
itself, since MCollective is installed in a container and tasks get executed
using MCollective. As a workaround you may try to SSH from the container to
the host.

Also I have several additional questions:
1. what version of Fuel do you use?
2. could you please clarify what did you mean by "moving to
deployment_tasks.yaml"?
3. could you please describe your use-case a bit more? Why do you want to
run
tasks on the host itself?

Thanks,

On Wed, Nov 4, 2015 at 11:25 PM, Javeria Khan  wrote:

> Thanks Igor, Alex. Guess there isn't any support for running tasks
> directly on the Fuel Master node for now.
>
> I did try moving to deployment_tasks.yaml, however it leads to other
> issues such as "/etc/fuel/plugins// does not exist" failing on
> deployments.
>
> I'm trying to move back to using the former tasks.yaml, but the
> fuel-plugin-builder keeps looking for deployment_tasks.yaml now. Is there
> some build source list I can remove?
>
>
> --
> Javeria
>
> On Wed, Nov 4, 2015 at 12:44 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> please note that such tasks are executed inside 'mcollective' docker
>> container, not on the Fuel master host system.
>>
>> Regards,
>> Alex
>>
>> On Tue, Nov 3, 2015 at 10:41 PM, Igor Kalnitsky 
>> wrote:
>>
>>> Hi Javeria,
>>>
>>> Try to use 'master' in 'role' field. Example:
>>>
>>> - role: 'master'
>>>   stage: pre_deployment
>>>   type: shell
>>>   parameters:
>>>     cmd: echo all > /tmp/plugin.all
>>>     timeout: 42
>>>
>>> Let me know if you need additional help.
>>>
>>> Thanks,
>>> Igor
>>>
>>> P.S: Since Fuel 7.0 it's recommended to use deployment_tasks.yaml
>>> instead of tasks.yaml. Please see Fuel Plugins wiki page for details.
>>>
>>> On Tue, Nov 3, 2015 at 10:26 PM, Javeria Khan 
>>> wrote:
>>> > Hey everyone,
>>> >
>>> > I've been working on a fuel plugin and for some reason just cant
>>> figure out
>>> > how to run a task on the fuel master node through the tasks.yaml. Is
>>> there
>>> > even a role for it?
>>> >
>>> > Something similar to what ansible does with localhost would work.
>>> >
>>> > Thanks,
>>> > Javeria
>>> >
>>> >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-21, Nov 9-13

2015-11-05 Thread Doug Hellmann
This is the first in a series of email reminders about important dates
on the schedule as we work towards the Mitaka release. We will be
counting down from R-23, the Mitaka summit, to the release in R-0 the
week of April 4-8. If all goes as planned, these emails will be sent
just before the week mentioned in the subject (on my Thursday, but some
of you live in the future). I don't plan to send email every week, and I
will try to keep each one short.

For release liaisons and PTLs, these emails take the place of the 1-on-1
synchronization meetings we held during the Liberty cycle, and will
frequently contain reminders for actions that need to be taken to move
the release forward.

Focus
-

We are currently working towards the Mitaka 1 milestone. Teams should be
focusing on wrapping up incomplete work left over from the end of the
Liberty cycle, finalizing and announcing plans from the summit, and
completing specs and blueprints.

Release Actions
---

All deliverables should have reno configured before Mitaka 1. See
http://lists.openstack.org/pipermail/openstack-dev/2015-November/078301.html
for details, and follow up on that thread with questions.

Review stable/liberty branches for patches that have landed since the last
release and determine if your deliverables need new tags.

Important Dates
---

Mitaka 1 - Dec 1-3 (3 weeks away)

Mitaka release schedule: https://wiki.openstack.org/wiki/Mitaka_Release_Schedule

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Prathyusha Guduri
Thanks Mooney,

will correct my localrc and run again

On Thu, Nov 5, 2015 at 6:23 PM, Mooney, Sean K  wrote:
> Hello,
> When you set OVS_DPDK_MODE=controller_ovs
>
> you are disabling the install of ovs-dpdk on the controller node and only
> installing the mechanism driver.
>
> If you want to install ovs-dpdk on the controller node you should set this 
> value as follows
>
> OVS_DPDK_MODE=controller_ovs_dpdk
>
> See
> https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node
>
> ovs with dpdk will be installed in /usr/bin, not /usr/local/bin, as it does
> a system-wide install, not a local install.
>
> Installation documentation can be found here
> https://github.com/openstack/networking-ovs-dpdk/tree/master/doc/source
>
> the networking-ovs-dpdk repo has recently been moved from stackforge to the
> openstack namespace following the retirement of stackforge.
>
> Some links in the git repo still need to be updated to reflect this change.
>
> Regards
> sean
> -----Original Message-----
> From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
> Sent: Thursday, November 5, 2015 11:02 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [networking-ovs-dpdk]
>
> Hello all,
>
> Trying to install openstack with ovs-dpdk driver from devstack.
>
> Following is my localrc file
>
> HOST_IP_IFACE=eth0
> HOST_IP=10.0.2.15
> HOST_NAME=$(hostname)
>
> DATABASE_PASSWORD=open
> RABBIT_PASSWORD=open
> SERVICE_TOKEN=open
> SERVICE_PASSWORD=open
> ADMIN_PASSWORD=open
> MYSQL_PASSWORD=open
> HORIZON_PASSWORD=open
>
>
> enable_plugin networking-ovs-dpdk https://github.com/stackforge/networking-ovs-dpdk master
> OVS_DPDK_MODE=controller_ovs
>
> disable_service n-net
> disable_service n-cpu
> enable_service neutron
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
> enable_service n-novnc
>
> DEST=/opt/stack
> SCREEN_LOGDIR=$DEST/logs/screen
> LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
> LOGDAYS=1
>
> Q_ML2_TENANT_NETWORK_TYPE=vlan
> ENABLE_TENANT_VLANS=True
> ENABLE_TENANT_TUNNELS=False
>
> #Dual socket platform with 16GB RAM,3072*2048kB hugepages leaves ~4G for the 
> system.
> OVS_NUM_HUGEPAGES=2048
> #Dual socket platform with 64GB RAM,14336*2048kB hugepages leaves ~6G for the 
> system.
> #OVS_NUM_HUGEPAGES=14336
>
> OVS_DATAPATH_TYPE=netdev
> OVS_LOG_DIR=/opt/stack/logs
> OVS_BRIDGE_MAPPINGS=public:br-ex
>
> ML2_VLAN_RANGES=public:100:200
> MULTI_HOST=1
>
> #[[post-config|$NOVA_CONF]]
> #[DEFAULT]
> firewall_driver=nova.virt.firewall.NoopFirewallDriver
> novncproxy_host=0.0.0.0
> novncproxy_port=6080
> scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter
>
>
> After running ./stack.sh, which was successful, I could see that in the
> ml2.conf.ini file ovsdpdk was added as the mechanism driver. But the agent
> running was still openvswitch. I tried running ovsdpdk in the q-agt screen,
> but it failed because ovsdpdk was not installed in /usr/local/bin, which I
> thought devstack is supposed to do. I tried running setup.py in the
> networking-ovs-dpdk folder, but that also did not install ovs-dpdk in
> /usr/local/bin.
>
> I am stuck here. Please guide me on how to proceed further. Also, the
> Readme in the networking-ovs-dpdk folder says the instructions regarding
> installation are available at the link below -
> http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/installation.rst
>
> But no repos are found there. Kindly point me to a doc or something on how
> to build ovs-dpdk from devstack.
>
> Thank you,
> Prathyusha
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-onos]: Proposing new cores for networking-onos

2015-11-05 Thread Kyle Mestery
+1 for both.

On Thu, Nov 5, 2015 at 7:00 AM, Gal Sagie  wrote:

> +1 for both from me
>
> On Thu, Nov 5, 2015 at 2:53 PM, Vikram Choudhary 
> wrote:
>
>> Hi All,
>>
>> I would like to propose Mr. Ramanjaneya Reddy Palleti and Mr. Dongfeng as
>> new cores for the networking-onos project. Their contributions to this
>> project were significant in the last Liberty cycle.
>>
>> *Facts:*
>> http://stackalytics.com/?metric=loc&module=networking-onos&release=all
>>
>> Request existing cores to vote for this proposal.
>>
>> Thanks
>> Vikram
>>
>>
>
>
> --
> Best Regards ,
>
> The G.
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Andrew Laski

On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:




From: Ed Leafe [mailto:e...@leafe.com]
On Nov 5, 2015, at 2:43 AM, Tang Chen  wrote:

> I'm sorry that I cannot understand why resize and migrate are the same
> thing behind the scenes.

Resize is essentially a migration to the same host, rather than a different
host. The process is still taking an existing VM and using it to create another
VM that appears to the user as the same (ID, networking, attached volumes,
metadata, etc.)





Or more specifically, the migrate and resize API actions both call the resize
function in the compute api. As Ed said, they are basically the same behind
the scenes. (But the API difference is important.)


Can you be a little more specific about which API difference is important to
you? There are currently two differences between migrate and resize in
the API:


1. There is a different policy check, but this only really protects the 
next bit.


2. Resize passes in a new flavor and migration does not.

Both actions result in an instance being scheduled to a new host. If
they were consolidated into a single action, with a policy check to
enforce that users specify a new flavor while admins may leave it
off, would that be problematic for you?
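
As a thought experiment, a sketch of that consolidated action; every name
here is made up for illustration, this is not Nova code:

  def check_policy(context, rule):
      # Stand-in for a real policy engine: only admins pass 'admin_api'.
      if rule == 'admin_api' and not context.get('is_admin'):
          raise PermissionError('admin required to migrate without a flavor')

  def migrate_or_resize(context, instance, new_flavor=None):
      if new_flavor is None:
          # Omitting the flavor means a pure migration -- admin only.
          check_policy(context, 'admin_api')
          new_flavor = instance['flavor']
      # Both paths reschedule the instance to a (possibly new) host.
      print('scheduling %s with flavor %s' % (instance['id'], new_flavor))

  migrate_or_resize({'is_admin': True}, {'id': 'vm-1', 'flavor': 'm1.small'})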







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][stable] tools for keeping up with stable/liberty releases

2015-11-05 Thread Doug Hellmann
Release liaisons,

As described in [1], we are changing our stable release policy for
Liberty to encourage projects to tag new releases when they have
patches ready to be released. There is a script in the
openstack-infra/release-tools repository to make it easier to keep
track of what has not yet been released.

The list_unreleased_changes.sh script takes 2 arguments, the branch name
and the repository name(s). It clones a temporary copy of the
repositories and looks for changes since the last tag on the given
branch.

For example:

  $ ./list_unreleased_changes.sh stable/liberty openstack/glance
  
  [ Cloning openstack/glance ]
  INFO:zuul.CloneMapper:Workspace path set to: 
/mnt/projects/release-tools/release-tools/list-unreleased-kFv
  INFO:zuul.CloneMapper:Mapping projects to workspace...
  INFO:zuul.CloneMapper:  openstack/glance -> 
/mnt/projects/release-tools/release-tools/list-unreleased-kFv/openstack/glance
  INFO:zuul.CloneMapper:Expansion completed.
  INFO:zuul.Cloner:Preparing 1 repositories
  INFO:zuul.Cloner:Creating repo openstack/glance from upstream 
git://git.openstack.org/openstack/glance
  INFO:zuul.Cloner:upstream repo has branch stable/liberty
  INFO:zuul.Cloner:Falling back to branch stable/liberty
  INFO:zuul.Cloner:Prepared openstack/glance repo with branch stable/liberty
  INFO:zuul.Cloner:Prepared all repositories
  Creating a git remote called "gerrit" that maps to:
ssh://doug-hellm...@review.openstack.org:29418/openstack/glance.git
  
  [ Unreleased changes in openstack/glance ]
  
  Changes in glance 11.0.0..a50026b
  -
  53d48d8 2015-11-03 18:00:26 + add first reno-based release note
  4a31949 2015-11-03 18:00:26 + set default branch for git review
  aae81e2 2015-10-23 15:52:53 + Updated from global requirements
  b977544 2015-10-22 07:03:47 + Pass CONF to logging setup
  25ead6a 2015-10-18 10:12:26 + Fixed registry invalid token exception 
handling
  5434297 2015-10-17 10:40:42 + Updated from global requirements
  8902d12 2015-10-16 10:34:46 + Decrease test failure if second changes 
during run
  7158d78 2015-10-15 15:43:15 +0200 Switch to post-versioning
  
  [ Cleaning up ]

When you decide that it is time to prepare a new release, submit a patch
to the openstack/releases repository with the SHA, version, and other
info. See the README there for more details.

Doug

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-November/078281.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Plugins] Role for Fuel Master Node

2015-11-05 Thread Javeria Khan
Hi Evgeniy,

>
> 1. what version of Fuel do you use?
>
Using 7.0


> 2. could you please clarify what did you mean by "moving to
> deployment_tasks.yaml"?
>
I tried changing my tasks.yaml to a deployment_tasks.yaml as the wiki
suggests for 7.0. However I kept hitting issues.


> 3. could you please describe your use-case a bit more? Why do you want to
> run
> tasks on the host itself?
>

I have a monitoring tool that accompanies my new plugin, which basically
uses a config file that contains details about the cluster (IPs, VIPs,
networks, etc.). This config file is typically created on the installer
node during the deployment, the Fuel Master in this case.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-05 Thread Gal Sagie
The current OVS binding proposals are not for nested containers.
I am not sure if you are asking about that case or about the case of
nested containers inside a VM.

For the nested containers, we will use Neutron solutions that support this
kind of configuration. For example, in OVN you can define "parent" and
"sub" ports, so OVN knows to perform the logical pipeline in the compute
host and only perform VLAN tagging inside the VM (as Toni mentioned).

If you need more clarification you can catch me on IRC as well and we can
talk.

On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary 
wrote:

> Hi All,
>
> I would appreciate inputs on the following queries:
> 1. Are we assuming nova bare-metal nodes to be the docker hosts for now?
>
> If not:
>  - Assuming a nova VM as docker host and ovs as networking plugin:
> This line is from the etherpad [1]: "Each driver would have an
> executable that receives the name of the veth pair that has to be bound to
> the overlay".
> Query 1: As per the current ovs binding proposals by Feisky [2]
> and Diga [3], the vif seems to be bound to br-int on the VM. I am unable to
> understand how the overlay will work. AFAICT, neutron will configure only
> the br-tun of the compute machine's ovs. How will the overlay (br-tun)
> configuration happen inside the VM?
>
>  Query 2: Will we have double encapsulation (both at the VM and the
> compute host)? Isn't it possible to bind the vif to the compute host's
> br-int?
>
>  Query 3: I did not see subnet tags for the network plugin being
> passed in any of the binding patches [2][3][4]. Don't we need those?
>
>
> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
> [2]  https://review.openstack.org/#/c/241558/
> [3]  https://review.openstack.org/#/c/232948/1
> [4]  https://review.openstack.org/#/c/227972/
>
>
> -Vikas Choudhary
>
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Sean Dague
On 11/05/2015 06:00 AM, Thierry Carrez wrote:
> Hayes, Graham wrote:
>> On 04/11/15 20:04, Ed Leafe wrote:
>>> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:

 Here's a Devstack review for zookeeper in support of this initiative:

 https://review.openstack.org/241040

 Thanks,
 Dims
>>>
>>> I thought that the operators at that session made it very clear that they 
>>> would *not* run any Java applications, and that if OpenStack required a 
>>> Java app to run, they would no longer use it.
>>>
>>> I like the idea of using Zookeeper as the DLM, but I don't think it should 
>>> be set up as a default, even for devstack, given the vehement opposition 
>>> expressed.
>>>
>>>
>>> -- Ed Leafe
>>>
>>
>> I got the impression that there were *some* operators that wouldn't run
>> java.

I feel like I'd like to see that with data. Because every Ops session
I've been in around logging and debugging has had nearly everyone raise
their hand that they are running the ELK stack for log analysis. So they
are all running Java already.

I would absolutely hate to have some design point get made based on
rumors from ops and "java is icky" sentiment from the dev space.

Defaults matter, because it means you get a critical mass of operators
running similar configs, and they can build and share knowledge. For all
of the issues with Rabbit, it has demonstrably been good to have
collaboration in the field between operators that have shared patterns
and fed back the issues. So we should really say Zookeeper is the
default choice, even if there are others people could choose that have
extra mustachy / monocle goodness.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-onos]: Proposing new cores for networking-onos

2015-11-05 Thread Vikram Choudhary
Hi All,

I would like to propose Mr. Ramanjaneya Reddy Palleti and Mr. Dongfeng as
new cores for the networking-onos project. Their contributions to this
project were significant in the last Liberty cycle.

*Facts:*
http://stackalytics.com/?metric=loc&module=networking-onos&release=all

Request existing cores to vote for this proposal.

Thanks
Vikram
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-onos]: Proposing new cores for networking-onos

2015-11-05 Thread Gal Sagie
+1 for both from me

On Thu, Nov 5, 2015 at 2:53 PM, Vikram Choudhary  wrote:

> Hi All,
>
> I would like to propose Mr. Ramanjaneya Reddy Palleti and Mr. Dongfeng as
> new cores for the networking-onos project. Their contributions to this
> project were significant in the last Liberty cycle.
>
> *Facts:*
> http://stackalytics.com/?metric=loc&module=networking-onos&release=all
>
> Request existing cores to vote for this proposal.
>
> Thanks
> Vikram
>
>
>


-- 
Best Regards ,

The G.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Germy Lure
I don't know if this would make more sense. Let's assume that
we add arbitrary blobs (ABs) to IPAM, or even to every neutron object. What
would happen? People could do anything via those APIs. Any new
attribute, even the whole model, could be passed through those
so-called ABs. Setting aside the architecture issues, I think people like
Shraddha would never report any case to the community. People wouldn't
even need the community, because they could define an object that
contains only an id and an AB, e.g. a Port like this:

{
    "id": "",      # uuid format
    "params": {}   # a json dictionary
}

Everything can be stuffed into this *Big Box*. Is that an API?

But on the other hand, if we don't have such a blob, people must
extend the API and extra tables themselves, or push the community to
approve and merge the feature, which is a long cycle. In the end, people
would think that Neutron is too limited to use and updates at a
snail's pace.

It's difficult, but it's time to make a decision. OK, I prefer adding it.

Thanks.
Germy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][stable] should we open gate for per sub-project stable-maint teams?

2015-11-05 Thread Ihar Hrachyshka

Armando M.  wrote:


On 3 November 2015 at 08:49, Ihar Hrachyshka  wrote:



Hi all,

currently we have a single neutron-wide stable-maint gerrit group that
maintains all stable branches for all stadium subprojects. I believe
that in lots of cases it would be better to have subproject members to
run their own stable maintenance programs, leaving
neutron-stable-maint folks to help them in non-obvious cases, and to
periodically validate that project-wide stable policies are still honored.

I suggest we open gate to creating subproject stable-maint teams where
current neutron-stable-maint members feel those subprojects are ready
for that and can be trusted to apply stable branch policies in
consistent way.

Note that I don't suggest we grant those new permissions completely
automatically. If neutron-stable-maint team does not feel safe to give
out those permissions to some stable branches, their feeling should be
respected.

I believe it will be beneficial both for subprojects that would be
able to iterate on backports in more efficient way; as well as for
neutron-stable-maint members who are often busy with other stuff, and
often times are not the best candidates to validate technical validity
of backports in random stadium projects anyway. It would also be in
line with general 'open by default' attitude we seem to embrace in
Neutron.

If we decide it's the way to go, there are alternatives on how we
implement it. For example, we can grant those subproject teams all
permissions to merge patches; or we can leave +W votes to
neutron-stable-maint group.

I vote for opening the gates, *and* for granting +W votes where
projects showed reasonable quality of proposed backports before; and
leaving +W to neutron-stable-maint in those rare cases where history
showed backports could get more attention and safety considerations
[with expectation that those subprojects will eventually own +W votes
as well, once quality concerns are cleared].

If we indeed decide to bootstrap subproject stable-maint teams, I
volunteer to reach the candidate teams for them to decide on initial
lists of stable-maint members, and walk them thru stable policies.

Comments?


It was like this in the past, then it got changed, and now we're proposing
to change it back? Will we change it back again in 6 months' time? Just
wondering :)


Neutron: it’s all about change!

Jokes aside, I don't believe we were in this situation before. I think once
we started to spin off subprojects, it was always the case that only
neutron-stable-maint members were allowed to +2 or +A for all stable
branches of all subprojects, both 'core' and 'stadium'.




I suppose this has to do with the larger question of what belonging to the
stadium really means. I guess this is a concept that is still shaping up,
but if the concept is here to stay, I personally believe that being part of
the stadium means adhering to a common set of practices and principles
(like those largely implemented in OpenStack) where all projects feel and
behave equally. We have evidence where a few feel that 'stable' is not a
concept worth honoring, and for that reason I am wary to relax this.

Indeed, if any change occurs, it should not relax expectations. That's why
I would like us to be picky about which teams get their own stable
groups, and which of them have not yet proved their commitment to the
project-wide stable criteria.


I agree that the stable initiative should be discussed in the context of
larger stadium requirements.


For example, we have subprojects that do not have decent test coverage, yet
nevertheless continue to band-aid bugs in their code with more fixes that
do not include tests. Those bug fixes are sometimes proposed as backports.
I believe decent test coverage should be a requirement for any stadium
project, something that could result in being dropped from the stadium if
not achieved in a reasonable time.




I suppose it could be fine to have a probation period only to grant full
rights later on, but who is going to police that? That's a job in itself.
Once the permission is granted are we ever really gonna revoke it? And what
does this mean once the damage is done?



I presume it does not differ from the current trust model used in
neutron-stable-maint: folks get their votes and are generally not
explicitly supervised. If issues arise, yes, we would need to revoke
voting rights and clean up the mess. And yes, for vendor repositories there
is a slight difference, since there is no real external visibility, as we
have in other vendor-agnostic teams.



Perhaps an alternative could be to add a selected member of each subproject
to neutron-stable-maint, with the proviso that they are only supposed
to +2 their backports (the same way a Lieutenant is supposed to +2 their
area, and *only their area* of expertise), leaving the +2/+A to more
seasoned folks who have been doing

Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Vikas Choudhary
Thanks Toni.
On 5 Nov 2015 16:02, "Antoni Segura Puimedon" wrote:

>
>
> On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> ++ [Neutron] tag
>>
>>
>> On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> By network control plane I specifically mean here sharing network state
>>> across docker daemons sitting on different hosts/nova VMs in multi-host
>>> networking.
>>>
>>> libnetwork provides flexibility where vendors have a choice between the
>>> network control plane being handled by libnetwork (libkv) or by the
>>> remote driver itself, out of band. A vendor can choose to "mute"
>>> libnetwork/libkv by advertising the remote driver capability as "local".
>>>
>>> "local" is our current default "capability" configuration in kuryr.
>>>
>>> I have the following queries:
>>> 1. Does it mean Kuryr is taking responsibility for sharing network state
>>> across docker daemons? If yes, a network created on one docker host
>>> should be visible in "docker network ls" on other hosts. To achieve this,
>>> I guess the kuryr driver will need the help of some distributed
>>> data-store like consul etc., so that the kuryr driver could create the
>>> network in docker on the other hosts. Is this correct?
>>>
>>> 2. Why can we not set the default scope to "global" and let libkv do the
>>> network state sync work?
>>>
>>> Thoughts?
>>>
>>
> Hi Vikas,
>
> Thanks for raising this. As part of the current work on enabling
> multi-node we should be moving the default to 'global'.
>
>
>>
>>> Regards
>>> -Vikas Choudhary
>>>
>>
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

2015-11-05 Thread John Davidge (jodavidg)
++

Sounds very sensible to me!

John

From: "Armando M." >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, 4 November 2015 21:23
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [Neutron] Mid-cycle meetup for Mitaka

Hi folks,

After some consideration, I am proposing a change for the Mitaka release cycle 
in relation to the mid-cycle meetup event.

My proposal is to defer the gathering to later in the release cycle [1], and 
assess whether we have it or not based on the course of events in the cycle. If 
we feel that a last push closer to the end will help us hit some critical 
targets, then I am all in for arranging it.

Based on our latest experiences, I have not seen a strong correlation between
progress made during the cycle and progress made during the meetup, so we might
as well save ourselves the trouble of travelling close to Christmas.

I'd like to thank Kyle, Miguel Lavalle and Doug for looking into the logistics. 
We may still need their services later in the new year, but as of now all I can 
say is:

Happy (distributed) hacking!

Cheers,
Armando

[1] https://wiki.openstack.org/wiki/Mitaka_Release_Schedule
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Sean Dague
On 11/05/2015 03:08 AM, Chris Dent wrote:
> On Thu, 5 Nov 2015, Robert Collins wrote:
> 
>> In the session we were told that zookeeper is already used in CI jobs
>> for ceilometer (was this wrong?) and that's why we figured it made a
>> sane default for devstack.
> 
> For clarity: What ceilometer (actually gnocchi) is doing is using tooz
> in CI (gate-ceilometer-dsvm-integration). And for now it is using
> redis as that was "simple".
> 
> Outside of CI it is possible to deploy ceilometer, aodh and gnocchi to use
> tooz for coordinating group partitioning in active-active HA setups
> and shared locks. Again, the standard deploy for that has been to use
> redis because of availability. It's fairly well understood that zookeeper
> would be more correct, but there are packaging concerns.

What are the packaging concerns for zookeeper?

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] mutihost networking with nova vm as docker host

2015-11-05 Thread Akihiro Motoki
2015-11-05 21:30 GMT+09:00 Gal Sagie :
> The current OVS binding proposals are not for nested containers.
> I am not sure if you are asking about that case or about the nested
> containers inside a VM case.
>
> For the nested containers, we will use Neutron solutions that support this
> kind of configuration, for example
> if you look at OVN you can define "parent" and "sub" ports, so OVN knows to
> perform the logical pipeline in the compute host
> and only perform VLAN tagging inside the VM (as Toni mentioned)

Through the summit discussion, I felt that the VLAN-aware VM effort affects
many ongoing efforts in the Neutron stadium, including Kuryr.
Please keep your eyes on the VLAN-aware VM effort; your feedback would
be appreciated.
The initial effort is found in
https://review.openstack.org/#/c/210309/ (Trunk port: API extension).

Akihiro


>
> If you need more clarification you can catch me on IRC as well and we can
> talk.
>
> On Thu, Nov 5, 2015 at 8:03 AM, Vikas Choudhary 
> wrote:
>>
>> Hi All,
>>
>> I would appreciate inputs on the following queries:
>> 1. Are we assuming nova bare-metal nodes to be the docker hosts for now?
>>
>> If not:
>>  - Assuming a nova VM as docker host and ovs as networking plugin:
>> This line is from the etherpad [1]: "Each driver would have an
>> executable that receives the name of the veth pair that has to be bound to
>> the overlay".
>> Query 1: As per the current ovs binding proposals by Feisky [2]
>> and Diga [3], the vif seems to be bound to br-int on the VM. I am unable to
>> understand how the overlay will work. AFAICT, neutron will configure only
>> the br-tun of the compute machine's ovs. How will the overlay (br-tun)
>> configuration happen inside the VM?
>>
>>  Query 2: Will we have double encapsulation (both at the VM and the
>> compute host)? Isn't it possible to bind the vif to the compute host's
>> br-int?
>>
>>  Query 3: I did not see subnet tags for the network plugin being
>> passed in any of the binding patches [2][3][4]. Don't we need those?
>>
>>
>> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
>> [2]  https://review.openstack.org/#/c/241558/
>> [3]  https://review.openstack.org/#/c/232948/1
>> [4]  https://review.openstack.org/#/c/227972/
>>
>>
>> -Vikas Choudhary
>>
>>
>
>
>
> --
> Best Regards ,
>
> The G.
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Mooney, Sean K
Hello
When you set OVS_DPDK_MODE=controller_ovs, you are disabling the install
of ovs-dpdk on the controller node and only installing the mechanism
driver.

If you want to install ovs-dpdk on the controller node, you should set this
value as follows:

OVS_DPDK_MODE=controller_ovs_dpdk

See 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node

OVS with DPDK will be installed in /usr/bin, not /usr/local/bin, as it does
a system-wide install, not a local install.

Installation documentation can be found here
https://github.com/openstack/networking-ovs-dpdk/tree/master/doc/source

The networking-ovs-dpdk repo has recently been moved from stackforge to
the openstack namespace following the retirement of stackforge.

Some links in the git repo still need to be updated to reflect this change.

Regards
sean
-Original Message-
From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com] 
Sent: Thursday, November 5, 2015 11:02 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk]

Hello all,

Trying to install openstack with ovs-dpdk driver from devstack.

Following is my localrc file

HOST_IP_IFACE=eth0
HOST_IP=10.0.2.15
HOST_NAME=$(hostname)

DATABASE_PASSWORD=open
RABBIT_PASSWORD=open
SERVICE_TOKEN=open
SERVICE_PASSWORD=open
ADMIN_PASSWORD=open
MYSQL_PASSWORD=open
HORIZON_PASSWORD=open


enable_plugin networking-ovs-dpdk
https://github.com/stackforge/networking-ovs-dpdk master 
OVS_DPDK_MODE=controller_ovs

disable_service n-net
disable_service n-cpu
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service n-novnc

DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
LOGDAYS=1

Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
ENABLE_TENANT_TUNNELS=False

#Dual socket platform with 16GB RAM,3072*2048kB hugepages leaves ~4G for the 
system.
OVS_NUM_HUGEPAGES=2048
#Dual socket platform with 64GB RAM,14336*2048kB hugepages leaves ~6G for the 
system.
#OVS_NUM_HUGEPAGES=14336

OVS_DATAPATH_TYPE=netdev
OVS_LOG_DIR=/opt/stack/logs
OVS_BRIDGE_MAPPINGS=public:br-ex

ML2_VLAN_RANGES=public:100:200
MULTI_HOST=1

#[[post-config|$NOVA_CONF]]
#[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter


After running ./stack.sh, which was successful, I could see that in the
ml2.conf.ini file ovsdpdk was added as the mechanism driver. But the agent
running was still openvswitch. I tried running ovsdpdk on the q-agt screen,
but it failed because ovsdpdk was not installed in /usr/local/bin, which I
thought devstack was supposed to do. I also tried running setup.py in the
networking-ovs-dpdk folder, but that did not install ovs-dpdk in
/usr/local/bin either.

I am stuck here. Please guide me on how to proceed further. Also, the README
in the networking-ovs-dpdk folder says the instructions regarding
installation are available at the link below:
http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/installation.rst

But no repos are found there. Kindly guide me to a doc or something on how
to build ovs-dpdk from devstack.

Thank you,
Prathyusha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] How could an L2 agent extension access agent methods ?

2015-11-05 Thread Thomas Morin

Hi Ihar,

Ihar Hrachyshka :

Reviving the thread.
[...] (I appreciate if someone checks me on the following though):


This is an excellent recap.



 I set up a new etherpad to collect feedback from subprojects [2].


I've filled in details for networking-bgpvpn.
Please tell me if you need more information.



Once we collect use cases there and agree on agent API for extensions 
(even if per agent type), we will implement it and define as stable 
API, then pass objects that implement the API into extensions thru 
extension manager. If extensions support multiple agent types, they 
can still distinguish between which API to use based on agent type 
string passed into extension manager.


I really hope we start to collect use cases early so that we have time 
to polish agent API and make it part of l2 extensions earlier in 
Mitaka cycle.


We'll be happy to validate the applicability of this approach as soon as 
something is ready.


Thanks for taking up this work!

-Thomas




Ihar Hrachyshka  wrote:

On 30 Sep 2015, at 12:53, Miguel Angel Ajo  
wrote:




Ihar Hrachyshka wrote:

On 30 Sep 2015, at 12:08, thomas.mo...@orange.com wrote:

Hi Ihar,

Ihar Hrachyshka :

Miguel Angel Ajo :

Do you have a rough idea of what operations you may need to do?
Right now, what the bagpipe driver for networking-bgpvpn needs to
interact with is:

- int_br OVSBridge (read-only)
- tun_br OVSBridge (add patch port, add flows)
- patch_int_ofport port number (read-only)
- local_vlan_map dict (read-only)
- setup_entry_for_arp_reply method (called to add static ARP 
entries)

Sounds very tightly coupled to OVS agent.
Please bear in mind, the extension interface will be available from
different agent types (OVS, SR-IOV, [eventually LB]), so this interface
you're talking about could also serve as a translation driver for the
agents (where the translation is possible). I totally understand that
most extensions are bound to a specific agent, and we must be able to
identify exactly which agent we're serving.
Yes, I do have this in mind, but what we've identified for now 
seems to be OVS specific.
Indeed it does. Maybe you can try to define the needed pieces as
high-level actions, not the internal objects you need to access:
'connect endpoint X to Y', 'determine the segmentation id for a
network', etc.
I've been thinking about this, but would tend to reach the 
conclusion that the things we need to interact with are pretty 
hard to abstract out into something that would be generic across 
different agents.  Everything we need to do in our case relates to 
how the agents use bridges and represent networks internally: 
linuxbridge has one bridge per Network, while OVS has a limited 
number of bridges playing different roles for all networks with 
internal segmentation.


To look at the two things you mention:
- "connect endpoint X to Y": what we need to do is redirect the
traffic destined for the gateway of a Neutron network to the
thing that will do the MPLS forwarding for the right BGP VPN
context (called VRF), in our case br-mpls (that could be done with
an OVS table too); that action might be abstracted out to hide
the details specific to OVS, but I'm not sure how to name the
destination in a way that would be agnostic to these details, and
this is not really relevant to do until we have a relevant context
in which the linuxbridge would pass packets to something doing
MPLS forwarding (OVS is currently the only option we support for
MPLS forwarding, and it does not really make sense to mix
linuxbridge for Neutron L2/L3 and OVS for MPLS)
- "determine segmentation id for a network": this is something
really OVS-agent-specific; the linuxbridge agent uses multiple
linux bridges and does not rely on internal segmentation


Completely abstracting out the packet forwarding pipelines in the OVS
and linuxbridge agents would possibly allow defining an interface that
an agent extension could use without knowing anything specific to OVS
or the linuxbridge, but I believe this is a very significant task to
tackle.


If you look for a clean way to integrate with reference agents, 
then it’s something that we should try to achieve. I agree it’s not 
an easy thing.


Just an idea: can we have a resource for traffic forwarding, 
similar to security groups? I know folks are not ok with extending 
security groups API due to compatibility reasons, so maybe fwaas is 
the place to experiment with it.


Hopefully it will be acceptable to create an interface, even if it
exposes a set of methods specific to the linuxbridge agent and a
set of methods specific to the OVS agent.  That would mean that
an agent extension that can work in both contexts (not our case
yet) would check the agent type before using the first set or the
second set.
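
A minimal sketch of that shape in Python (all class, method, and
constant names below are illustrative, not an actual Neutron
interface):

OVS_AGENT = 'Open vSwitch agent'
LINUXBRIDGE_AGENT = 'Linux bridge agent'


class ExampleAgentExtension(object):
    """An L2 agent extension usable under either reference agent."""

    def initialize(self, agent_api, agent_type):
        # agent_api is whatever object the extension manager passes in;
        # its concrete method set differs per agent type.
        self.agent_api = agent_api
        self.agent_type = agent_type

    def handle_port(self, context, port):
        if self.agent_type == OVS_AGENT:
            # OVS-specific set: shared bridges, internal VLAN segmentation
            self.agent_api.add_tunnel_flow(port)
        elif self.agent_type == LINUXBRIDGE_AGENT:
            # linuxbridge-specific set: one bridge per network
            self.agent_api.plug_network_bridge(port)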


The assumption of the whole idea of l2 agent extensions is that 
they are agent agnostic. In case of QoS, we implemented a common 
QoS extension that 

Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Ed Leafe
On Nov 5, 2015, at 2:43 AM, Tang Chen  wrote:

> I'm sorry that I cannot understand why resize and migrate are the same thing 
> behind.

Resize is essentially a migration to the same host, rather than to a
different host. The process still takes an existing VM and uses it to
create another VM that appears to the user as the same one (ID, networking,
attached volumes, metadata, etc.).

-- Ed Leafe







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Antoni Segura Puimedon
On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary  wrote:

> ++ [Neutron] tag
>
>
> On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi all,
>>
>> By network control plane, I specifically mean sharing network state
>> across docker daemons sitting on different hosts/nova VMs in multi-host
>> networking.
>>
>> libnetwork provides flexibility: vendors have a choice between the
>> network control plane being handled by libnetwork (libkv) or by the
>> remote driver itself OOB. A vendor can choose to "mute"
>> libnetwork/libkv by advertising the remote driver capability as
>> "local".
>>
>> "local" is our current default "capability" configuration in kuryr.
>>
>> I have following queries:
>> 1. Does it mean Kuryr is taking responsibility for sharing network state
>> across docker daemons? If yes, a network created on one docker host should
>> be visible in "docker network ls" on other hosts. To achieve this, I guess
>> the kuryr driver will need the help of some distributed data store like
>> consul, etc., so that the kuryr driver on other hosts can create the
>> network in docker there. Is this correct?
>>
>> 2. Why can't we set the default scope to "Global" and let libkv do the
>> network state sync work?
>>
>> Thoughts?
>>
>
Hi Vikas,

Thanks for raising this. As part of the current work on enabling multi-node
we should be moving the default to 'global'.
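
For reference, a libnetwork remote driver advertises the scope in its
GetCapabilities answer. A minimal sketch (the endpoint name comes from
the Docker remote-driver spec; the Flask wiring is illustrative, not
Kuryr's actual code):

from flask import Flask, jsonify

app = Flask(__name__)

SCOPE = 'global'  # 'local' mutes libkv; 'global' lets libkv sync state


@app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
def get_capabilities():
    # Docker calls this once per daemon; the returned scope decides
    # whether network state is propagated across hosts via libkv.
    return jsonify({'Scope': SCOPE})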


>
>> Regards
>> -Vikas Choudhary
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Thierry Carrez
Hayes, Graham wrote:
> On 04/11/15 20:04, Ed Leafe wrote:
>> On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:
>>>
>>> Here's a Devstack review for zookeeper in support of this initiative:
>>>
>>> https://review.openstack.org/241040
>>>
>>> Thanks,
>>> Dims
>>
>> I thought that the operators at that session made it very clear that they 
>> would *not* run any Java applications, and that if OpenStack required a Java 
>> app to run, they would no longer use it.
>>
>> I like the idea of using Zookeeper as the DLM, but I don't think it should 
>> be set up as a default, even for devstack, given the vehement opposition 
>> expressed.
>>
>>
>> -- Ed Leafe
>>
> 
> I got the impression that there were *some* operators who wouldn't run
> java.
> 
> I do not see an issue with having ZooKeeper as the default, as long as
> there is an alternate solution that also works for the operators that do
> not want to use it.

Yes, that is my recollection. We can't make Java mandatory, so we need
to have the *option* to not run any Java (for those people who don't
want to start touching it, for various reasons).

IMHO that doesn't mean ZK cannot be the early default in devstack, or
that we should hold all DLM work until a Consul/etcd driver is
production-ready. It just means we need to have people signed up to
build and maintain a Consul and/or etcd driver :)

NB: I wouldn't mind helping on an etcd driver; that sounds like a fun
side project. I'm just totally unsure I'll have time to do it.
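
For what it's worth, the swap is cheap on the consumer side because tooz
hides the backend behind a connection string. A minimal sketch, with
illustrative URLs and member id (the etcd URL assumes a driver that does
not exist yet):

from tooz import coordination

coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'member-1')
# hypothetically, once an etcd driver exists, only the URL changes:
# coordinator = coordination.get_coordinator(
#     'etcd://127.0.0.1:2379', b'member-1')
coordinator.start()

lock = coordinator.get_lock(b'my-resource')
if lock.acquire(blocking=True):
    try:
        pass  # critical section guarded by the DLM
    finally:
        lock.release()
coordinator.stop()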

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Prathyusha Guduri
Hello all,

Trying to install openstack with ovs-dpdk driver from devstack.

Following is my localrc file

HOST_IP_IFACE=eth0
HOST_IP=10.0.2.15
HOST_NAME=$(hostname)

DATABASE_PASSWORD=open
RABBIT_PASSWORD=open
SERVICE_TOKEN=open
SERVICE_PASSWORD=open
ADMIN_PASSWORD=open
MYSQL_PASSWORD=open
HORIZON_PASSWORD=open


enable_plugin networking-ovs-dpdk
https://github.com/stackforge/networking-ovs-dpdk master
OVS_DPDK_MODE=controller_ovs

disable_service n-net
disable_service n-cpu
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service n-novnc

DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
LOGDAYS=1

Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
ENABLE_TENANT_TUNNELS=False

#Dual socket platform with 16GB RAM,3072*2048kB hugepages leaves ~4G
for the system.
OVS_NUM_HUGEPAGES=2048
#Dual socket platform with 64GB RAM,14336*2048kB hugepages leaves ~6G
for the system.
#OVS_NUM_HUGEPAGES=14336

OVS_DATAPATH_TYPE=netdev
OVS_LOG_DIR=/opt/stack/logs
OVS_BRIDGE_MAPPINGS=public:br-ex

ML2_VLAN_RANGES=public:100:200
MULTI_HOST=1

#[[post-config|$NOVA_CONF]]
#[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter


After running ./stack.sh, which was successful, I could see that in the
ml2.conf.ini file ovsdpdk was added as the mechanism driver. But the
agent running was still openvswitch. I tried running ovsdpdk on the q-agt
screen, but it failed because ovsdpdk was not installed in
/usr/local/bin, which I thought devstack was supposed to do.
I also tried running setup.py in the networking-ovs-dpdk folder, but that
did not install ovs-dpdk in /usr/local/bin either.

I am stuck here. Please guide me on how to proceed further. Also, the
README in the networking-ovs-dpdk folder says the instructions regarding
installation are available at the link below:
http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/installation.rst

But no repos are found there. Kindly guide me to a doc or something on how
to build ovs-dpdk from devstack.

Thank you,
Prathyusha

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] live migration sub-team meeting

2015-11-05 Thread Murray, Paul (HP Cloud)
> > Most team members expressed they would like a regular IRC meeting for
> > tracking work and raising blocking issues. Looking at the contributors
> > here [2], most of the participants seem to be in the European
> > continent (in time zones ranging from UTC to UTC+3) with a few in the
> > US (please correct me if I am wrong). That suggests that a time around
> > 1500 UTC makes sense.
> >
> > I would like to invite suggestions for a day and time for a weekly
> > meeting -
> 
> Maybe you could create a quick Doodle poll to reach a rough consensus on
> day/time:
> 
> http://doodle.com/

Yes, of course, here's the poll: 

http://doodle.com/poll/rbta6n3qsrzcqfbn 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Planning and prioritizing session for Mitaka

2015-11-05 Thread ELISHA, Moshe (Moshe)
Great. That works for ALU folks.

-Original Message-
From: Renat Akhmerov [mailto:rakhme...@mirantis.com] 
Sent: Thursday, November 05, 2015 9:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [mistral] Planning and prioritizing session for Mitaka

Team,

We’ve done a great job at the summit discussing our hottest topics within
the project, and a lot of important decisions were made. I would like,
though, to have one more session in IRC to wrap this up by going over all
the BPs/bugs we created in order to scope and prioritize them.

I’m proposing next Monday 9 Nov at 7.00 UTC. If you have other time options 
let’s communicate.

Thanks

Renat Akhmerov
@ Mirantis Inc.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Rapelly, Varun
Hi All,

Can we use https://github.com/openstack/networking-ovs-dpdk with packstack?

I'm trying to configure devstack with ovs-dpdk on Ubuntu, but so far no
success.

Could anybody tell me whether it is supported on Ubuntu, or is it only
tested on Fedora?


Regards,
Varun

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] multihost networking with nova vm as docker host

2015-11-05 Thread Antoni Segura Puimedon
On Thu, Nov 5, 2015 at 10:38 AM, Vikas Choudhary  wrote:

> ++[Neutron] tag
>
>
> On Thu, Nov 5, 2015 at 11:33 AM, Vikas Choudhary <
> choudharyvika...@gmail.com> wrote:
>
>> Hi All,
>>
>> I would appreciate inputs on following queries:
>> 1. Are we assuming nova bm nodes to be docker host for now?
>>
>
Yes. That's the assumption for deployments as of now before we tackle
containers running on
more complicated deployment topologies (like containers running inside
tenant VMs).


>
>> If Not:
>>
>
When we go for other kinds of deployments,


>  - Assuming nova vm as docker host and ovs as networking plugin:
>> This line is from the etherpad[1], "Eachdriver would have an
>> executable that receives the name of the veth pair that has to be bound to
>> the overlay" .
>>
>
The binding will obviously have to change for such deployments


> Query 1:  As per current ovs binding proposals by Feisky[2]
>> and Diga[3], vif seems to be binding with br-int on vm. I am unable to
>> understand how overlay will work. AFAICT , neutron will configure br-tun of
>> compute machines ovs only. How overlay(br-tun) configuration will happen
>> inside vm ?
>>
>>  Query 2: Are we having double encapsulation(both at vm and
>> compute)? Is not it possible to bind vif into compute host br-int?
>>
>>  Query3: I did not see subnet tags for network plugin being
>> passed in any of the binding patches[2][3][4]. Dont we need that?
>>
>
The spec for containers on VMs has not yet been drafted and we are open to
proposals and discussion. I would like to have more than one spec proposal
for it, and to try to achieve community consensus before the new year on
the best way to go.

Currently it seems that the approaches that will be proposed are:
- ovn-like solution with vlan tag per port [5]
- routed solution with port per VM as explained by Brenden Blanco [6]

I'm hoping that we will arrive at something in between, or perhaps more
complete than either of those options.

[5] http://docs.openstack.org/developer/networking-ovn/containers.html
[6] https://gist.github.com/drzaeus77/89aa3db154c688a15ee6

Regards,

Toni

>
>>
>> [1]  https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
>> [2]  https://review.openstack.org/#/c/241558/
>> [3]  https://review.openstack.org/#/c/232948/1
>> [4]  https://review.openstack.org/#/c/227972/
>>
>>
>> -Vikas Choudhary
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer]: Subscribe and Publish Notification framework in Ceilometer!

2015-11-05 Thread Raghunath D
Hi Pradeep,

Presently we are looking for a monitoring service. Using the monitoring
service, users/applications will subscribe to a few notifications/events
from the OpenStack infrastructure, and the monitoring service will publish
these notifications to the users/applications.

We are exploring Ceilometer for this purpose. We came across the blueprint
below, which is similar to our requirement.

 https://blueprints.launchpad.net/ceilometer/+spec/declarative-notifications.

We have a few queries on the declarative-notifications framework; could
you please help us address them:

1. We are looking for an API for subscribing to and publishing
   notifications. Does this framework expose any such API? If yes, could
   you please provide us the API doc or spec on how to use it.
2. If the framework doesn't have such an API, is any development group
   working in this area?
3. Please suggest the best place in the Ceilometer notification framework
   (publisher/dispatcher/...) to implement the subscribe and publish API.

With Best Regards
Raghunath Dudyala
Tata Consultancy Services Limited
Mailto: raghunat...@tcs.com
Website: http://www.tcs.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in thre API

2015-11-05 Thread John Garbutt
On 5 November 2015 at 09:46, Richard Jones  wrote:
> As a consumer of such APIs on the Horizon side, I'm all for consistency in
> pagination, and more of it, so yes please!
>
> On 5 November 2015 at 13:24, Tony Breeds  wrote:
>>
>> On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
>> > Hi All,
>> > Around the middle of October a spec [1] was uploaded to add
>> > pagination
>> > support to the os-hypervisors API.  While I recognize the use case it
>> > seemed
>> > like adding another pagination implementation wasn't an awesome idea.
>> >
>> > Today I see 3 more requests to add pagination to APIs [2]
>> >
>> > Perhaps I'm over thinking it but should we do something more strategic
>> > rather
>> > than scattering "add pagination here".

+1

The plan, as I understand it, is to first finish off this API WG guideline:
http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html

Once we have agreement, we can try to support that in a new micro version.

Important as this is, for Mitaka we agreed to focus on documentation,
not on resolving these API inconsistencies/wrinkles:
http://specs.openstack.org/openstack/nova-specs/priorities/mitaka-priorities.html#v2-1-api

I would love to see us work with the API WG and get that guideline
completed by the end of Mitaka, so we can implement something next
cycle. If we get to implementing sooner, then awesome, but that's
assuming we have the API documentation work complete first.

Thanks,
John

>> >
>> > [1] https://review.openstack.org/#/c/234038
>> > [2]
>> > https://review.openstack.org/#/q/message:pagination+project:openstack/nova-specs+status:open,n,z
>>
>> Sorry about the send without complete subject.
>>
>> Yours Tony.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-05 Thread Miguel Lavalle
Hey Paisano,

Thanks for your great contributions.

Un abrazo

On Wed, Nov 4, 2015 at 6:28 PM, Edgar Magana 
wrote:

> Dear Colleagues,
>
> I have been part of this community from the very beginning, when in Santa
> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
> networking project.
> Neutron has become a very unique piece of code, and it requires an
> approval team that will always be on top of everything. This is why I
> would like to communicate to you that I have decided to step down as
> Neutron Core.
>
> This is not breaking news for many of you, because I shared this thought
> during the summit in Tokyo, and now it is a commitment. I want to let you
> know that I learnt a lot from you, and I hope my comments and reviews never
> offended you.
>
> I will be around of course. I will continue my work on code reviews and
> coordination on the Networking Guide.
>
> Thank you all for your support and good feedback,
>
> Edgar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Jay Pipes

On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:

Hi Salvatore,

Thanks for the feedback. I agree with you that arbitrary JSON blobs will
make IPAM much more powerful. Some other projects already do things like
this.


:( Actually, though "powerful" it also leads to implementation details 
leaking directly out of the public REST API. I'm very negative on this 
and would prefer an actual codified REST API that can be relied on 
regardless of backend driver or implementation.



e.g. In Ironic, node has driver_info, which is JSON. it also has an
'extras' arbitrary JSON field. This allows us to put any information in
there that we think is important for us.


Yeah, and this is a bad thing, IMHO. Public REST APIs should be 
structured, not a Wild West free-for-all. The biggest problem with using 
free-form JSON blobs in RESTful APIs like this is that you throw away 
the ability to evolve the API in a structured, versioned way. Instead of 
evolving the API using microversions, instead every vendor just jams 
whatever they feel like into the JSON blob over time. There's no way for 
clients to know what the server will return at any given time.


Achieving consensus on a REST API that meets the needs of a variety of 
backend implementations is *hard work*, yes, but it's what we need to do 
if we are to have APIs that are viewed in the industry as stable, 
discoverable, and reliably useful.
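
To illustrate the contrast with invented payloads (neither dict below is
any project's actual schema):

# Blob style: each backend jams vendor keys into one opaque field, so
# clients cannot know what will come back and nothing is versioned.
blob_style = {
    'subnet_id': 'abc123',
    'ipam_blob': {'vendor_x_rack': 'r42', 'anything': 'goes'},
}

# Codified style: the attribute is named and typed, so it can be
# introduced and evolved through a versioned API change.
codified_style = {
    'subnet_id': 'abc123',
    'allocation_pool_tags': ['infrastructure', 'tenant'],
}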


Best,
-jay



Hoping to get some positive feedback from API and DB lieutenants too.


On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
> wrote:

Arbitrary blobs are a powerful tool to circumvent limitations of an
API, as well as other constraints which might be imposed for
versioning or portability purposes.
The parameters that should end up in such blob are typically
specific for the target IPAM driver (to an extent they might even
identify a specific driver to use), and therefore an API consumer
who knows what backend is performing IPAM can surely leverage it.

Therefore this would make a lot of sense, assuming API portability
and not leaking backend details are not a concern.
The Neutron team API & DB lieutenants will be able to provide more
input on this regard.

In this case other approaches such as a vendor specific extension
are not a solution - assuming your granularity level is the
allocation pool; indeed allocation pools are not first-class neutron
resources, and it is not therefore possible to have APIs which
associate vendor specific properties to allocation pools.

Salvatore

On 4 November 2015 at 21:46, Shraddha Pandhe
>
wrote:

Hi folks,

I have a small question/suggestion about IPAM.

With IPAM, we are allowing users to have their own IPAM drivers
so that they can manage IP allocation. The problem is, the new
ipam tables in the database have the same columns as the old
tables. So, as a user, if I want to have my own logic for ip
allocation, I can't actually get any help from the database.
Whereas, if we had an arbitrary json blob in the ipam tables, I
could put any useful information/tags there, that can help me
for allocation.

Does this make sense?

e.g. If I want to create multiple allocation pools in a subnet
and use them for different purposes, I would need some sort of
tag for each allocation pool for identification. Right now,
there is no scope for doing something like that.

Any thoughts? If there is any other way to solve the problem,
please let me know.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [searchlight] Today's IRC meeting

2015-11-05 Thread Tripp, Travis S
Hello all,

The US time change, while many of us were still getting home from Japan,
threw me and several others off with today’s meeting time. Sorry about
that! We’ll pick back up next week. Next week’s agenda can be found at the
link below. Please feel free to add to it / modify it, and let’s talk in
the IRC room more prior to it. Primarily, we need to continue reviewing and
prioritizing Mitaka work.

https://etherpad.openstack.org/p/search-team-meeting-agenda

Thanks,
Travis

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Adam Young
Can people help me work through the right set of tools for this use case 
(it has come up from several operators) and map out a plan to implement it:


A large cloud with many users coming from multiple Federation sources has 
a policy of providing a minimal setup for each user upon first visit to 
the cloud: create a project for the user with a minimal quota, and 
provide them a role assignment.


Here are the gaps, as I see it:

1.  Keystone provides a notification that a user has logged in, but 
there is nothing capable of executing on this notification at the 
moment.  Only Ceilometer listens to Keystone notifications.  (A minimal 
listener sketch follows this list.)


2.  Keystone does not have a workflow engine, and should not be 
auto-creating projects.  This is something that should be performed via 
a Heat template, and Keystone does not know about Heat, nor should it.


3.  The Mapping code is pretty static; it assumes a user entry or a 
group entry in identity when creating a role assignment, and neither 
will exist.
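
A minimal sketch of the listener mentioned in gap 1, using
oslo.messaging. The event type is keystone's CADF 'identity.authenticate'
notification; the topic name and the provisioning hook are assumptions:

import oslo_messaging
from oslo_config import cfg


class LoginEndpoint(object):
    # only react to keystone's login notifications
    filter_rule = oslo_messaging.NotificationFilter(
        event_type='identity.authenticate')

    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        # kick off the provisioning workflow (e.g. a Heat stack) here
        print('user logged in: %s' % payload)


transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [LoginEndpoint()])
listener.start()
listener.wait()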


We can assume a special domain for Federated users to have per-user 
projects.


So; lets assume a Heat Template that does the following:

1. Creates a user in the per-user-projects domain
2. Assigns a role to the Federated user in that project
3. Sets the minimal quota for the user
4. Somehow notifies the user that the project has been set up.

This last step probably assumes an email address from the Federated 
assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
for any projects" error, and is stumped.


How is quota assignment done in the other projects now?  What happens 
when a project is created in Keystone?  Does that information get 
transferred to the other services, and, if so, how?  Do most people use 
a custom provisioning tool for this workflow?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] can we deprecate the xvp console?

2015-11-05 Thread Bob Ball
> I noticed today that nova.console.xvp hits the database directly for
> console pools. We should convert this to objects so that the console
> service does not have direct access to the database (this is the only
> console I see that hits the database directly). However, rather than go
> through the work of creating an object for ConsolePools, if no one is
> using xvp consoles in nova then we could deprecate it.

I believe that deprecating the XVP consoles would be the better move; XVP has 
not been maintained for 2 years (https://github.com/xvpsource/xvp/) and 
standard XenServer OpenStack installations do not use the XVP console.

Bob

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Jim Rollenhagen
On Thu, Nov 05, 2015 at 11:55:50AM -0500, Jay Pipes wrote:
> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
> >Hi Salvatore,
> >
> >Thanks for the feedback. I agree with you that arbitrary JSON blobs will
> >make IPAM much more powerful. Some other projects already do things like
> >this.
> 
> :( Actually, though "powerful" it also leads to implementation details
> leaking directly out of the public REST API. I'm very negative on this and
> would prefer an actual codified REST API that can be relied on regardless of
> backend driver or implementation.
> 
> >e.g. In Ironic, node has driver_info, which is JSON. it also has an
> >'extras' arbitrary JSON field. This allows us to put any information in
> >there that we think is important for us.
> 
> Yeah, and this is a bad thing, IMHO. Public REST APIs should be structured,
> not a Wild West free-for-all. The biggest problem with using free-form JSON
> blobs in RESTful APIs like this is that you throw away the ability to evolve
> the API in a structured, versioned way. Instead of evolving the API using
> microversions, instead every vendor just jams whatever they feel like into
> the JSON blob over time. There's no way for clients to know what the server
> will return at any given time.

Right, this has caused Ironic some pain in the past (though it does make
it easier for drivers to add some random info they need). I'd like to
try to move away from this sometime soon(tm).

// jim

> 
> Achieving consensus on a REST API that meets the needs of a variety of
> backend implementations is *hard work*, yes, but it's what we need to do if
> we are to have APIs that are viewed in the industry as stable, discoverable,
> and reliably useful.
> 
> Best,
> -jay
> 
> Best,
> -jay
> 
> >Hoping to get some positive feedback from API and DB lieutenants too.
> >
> >
> >On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
> >> wrote:
> >
> >Arbitrary blobs are a powerful tool to circumvent limitations of an
> >API, as well as other constraints which might be imposed for
> >versioning or portability purposes.
> >The parameters that should end up in such blob are typically
> >specific for the target IPAM driver (to an extent they might even
> >identify a specific driver to use), and therefore an API consumer
> >who knows what backend is performing IPAM can surely leverage it.
> >
> >Therefore this would make a lot of sense, assuming API portability
> >and not leaking backend details are not a concern.
> >The Neutron team API & DB lieutenants will be able to provide more
> >input on this regard.
> >
> >In this case other approaches such as a vendor specific extension
> >are not a solution - assuming your granularity level is the
> >allocation pool; indeed allocation pools are not first-class neutron
> >resources, and it is not therefore possible to have APIs which
> >associate vendor specific properties to allocation pools.
> >
> >Salvatore
> >
> >On 4 November 2015 at 21:46, Shraddha Pandhe
> >>
> >wrote:
> >
> >Hi folks,
> >
> >I have a small question/suggestion about IPAM.
> >
> >With IPAM, we are allowing users to have their own IPAM drivers
> >so that they can manage IP allocation. The problem is, the new
> >ipam tables in the database have the same columns as the old
> >tables. So, as a user, if I want to have my own logic for ip
> >allocation, I can't actually get any help from the database.
> >Whereas, if we had an arbitrary json blob in the ipam tables, I
> >could put any useful information/tags there, that can help me
> >for allocation.
> >
> >Does this make sense?
> >
> >e.g. If I want to create multiple allocation pools in a subnet
> >and use them for different purposes, I would need some sort of
> >tag for each allocation pool for identification. Right now,
> >there is no scope for doing something like that.
> >
> >Any thoughts? If there is any other way to solve the problem,
> >please let me know.
> >
> >
> >
> >
> >
> > __
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >
> > 
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> > __
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe:
> >openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >

Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Kyle Mestery
On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:

> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>
>> Hi Salvatore,
>>
>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>> make IPAM much more powerful. Some other projects already do things like
>> this.
>>
>
> :( Actually, though "powerful" it also leads to implementation details
> leaking directly out of the public REST API. I'm very negative on this and
> would prefer an actual codified REST API that can be relied on regardless
> of backend driver or implementation.
>

I agree with Jay here. We've had people propose similar things in Neutron
before, and I've been against them. The entire point of the Neutron REST
API is to not leak these details out. It dampens the strength of the
logical model, and it tends to have users become reliant on backend
implementations.


>
> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>> 'extras' arbitrary JSON field. This allows us to put any information in
>> there that we think is important for us.
>>
>
> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
> structured, not a Wild West free-for-all. The biggest problem with using
> free-form JSON blobs in RESTful APIs like this is that you throw away the
> ability to evolve the API in a structured, versioned way. Instead of
> evolving the API using microversions, instead every vendor just jams
> whatever they feel like into the JSON blob over time. There's no way for
> clients to know what the server will return at any given time.
>
> Achieving consensus on a REST API that meets the needs of a variety of
> backend implementations is *hard work*, yes, but it's what we need to do if
> we are to have APIs that are viewed in the industry as stable,
> discoverable, and reliably useful.
>

++, this is the correct way forward.

Thanks,
Kyle


>
> Best,
> -jay
>
> Best,
> -jay
>
> Hoping to get some positive feedback from API and DB lieutenants too.
>>
>>
>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>> > wrote:
>>
>> Arbitrary blobs are a powerful tool to circumvent limitations of an
>> API, as well as other constraints which might be imposed for
>> versioning or portability purposes.
>> The parameters that should end up in such blob are typically
>> specific for the target IPAM driver (to an extent they might even
>> identify a specific driver to use), and therefore an API consumer
>> who knows what backend is performing IPAM can surely leverage it.
>>
>> Therefore this would make a lot of sense, assuming API portability
>> and not leaking backend details are not a concern.
>> The Neutron team API & DB lieutenants will be able to provide more
>> input on this regard.
>>
>> In this case other approaches such as a vendor specific extension
>> are not a solution - assuming your granularity level is the
>> allocation pool; indeed allocation pools are not first-class neutron
>> resources, and it is not therefore possible to have APIs which
>> associate vendor specific properties to allocation pools.
>>
>> Salvatore
>>
>> On 4 November 2015 at 21:46, Shraddha Pandhe
>> >
>> wrote:
>>
>> Hi folks,
>>
>> I have a small question/suggestion about IPAM.
>>
>> With IPAM, we are allowing users to have their own IPAM drivers
>> so that they can manage IP allocation. The problem is, the new
>> ipam tables in the database have the same columns as the old
>> tables. So, as a user, if I want to have my own logic for ip
>> allocation, I can't actually get any help from the database.
>> Whereas, if we had an arbitrary json blob in the ipam tables, I
>> could put any useful information/tags there, that can help me
>> for allocation.
>>
>> Does this make sense?
>>
>> e.g. If I want to create multiple allocation pools in a subnet
>> and use them for different purposes, I would need some sort of
>> tag for each allocation pool for identification. Right now,
>> there is no scope for doing something like that.
>>
>> Any thoughts? If there is any other way to solve the problem,
>> please let me know.
>>
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> <
>> http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> 

Re: [openstack-dev] [keystone] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Doug Hellmann
Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> Can people help me work through the right set of tools for this use case 
> (has come up from several Operators) and map out a plan to implement it:
> 
> Large cloud with many users coming from multiple Federation sources has 
> a policy of providing a minimal setup for each user upon first visit to 
> the cloud:  Create a project for the user with a minimal quota, and 
> provide them a role assignment.
> 
> Here are the gaps, as I see it:
> 
> 1.  Keystone provides a notification that a user has logged in, but 
> there is nothing capable of executing on this notification at the 
> moment.  Only Ceilometer listens to Keystone notifications.
> 
> 2.  Keystone does not have a workflow engine, and should not be 
> auto-creating projects.  This is something that should be performed via 
> a Heat template, and Keystone does not know about Heat, nor should it.
> 
> 3.  The Mapping code is pretty static; it assumes a user entry or a 
> group entry in identity when creating a role assignment, and neither 
> will exist.
> 
> We can assume a special domain for Federated users to have per-user 
> projects.
> 
> So; lets assume a Heat Template that does the following:
> 
> 1. Creates a user in the per-user-projects domain
> 2. Assigns a role to the Federated user in that project
> 3. Sets the minimal quota for the user
> 4. Somehow notifies the user that the project has been set up.
> 
> This last probably assumes an email address from the Federated 
> assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
> for any projects" error, and is stumped.
> 
> How is quota assignment done in the other projects now?  What happens 
> when a project is created in Keystone?  Does that information gets 
> transferred to the other services, and, if so, how?  Do most people use 
> a custom provisioning tool for this workflow?
> 

I know at Dreamhost we built some custom integration that was triggered
when someone turned on the Dreamcompute service in their account in our
existing user management system. That integration created the account in
keystone, set up a default network in neutron, etc. I've long thought we
needed a "new tenant creation" service of some sort, that sits outside
of our existing services and pokes them to do something when a new
tenant is established. Using heat as the implementation makes sense, for
things that heat can control, but we don't want keystone to depend on
heat and we don't want to bake such a specialized feature into heat
itself.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Stepping Down from Neutron Core Responsibilities

2015-11-05 Thread Paul Michali
Appreciate all the work Edgar!

Regards,

PCM

On Thu, Nov 5, 2015 at 11:15 AM Miguel Lavalle  wrote:

> Hey Paisano,
>
> Thanks for your great contributions.
>
> Un abrazo
>
> On Wed, Nov 4, 2015 at 6:28 PM, Edgar Magana 
> wrote:
>
>> Dear Colleagues,
>>
>> I have been part of this community from the very beginning, when in Santa
>> Clara, CA back in 2011 a bunch of us crazy people decided to work on this
>> networking project.
>> Neutron has become a very unique piece of code, and it requires an
>> approval team that will always be on top of everything. This is why I
>> would like to communicate to you that I have decided to step down as
>> Neutron Core.
>>
>> This is not breaking news for many of you, because I shared this thought
>> during the summit in Tokyo, and now it is a commitment. I want to let you
>> know that I learnt a lot from you, and I hope my comments and reviews never
>> offended you.
>>
>> I will be around of course. I will continue my work on code reviews and
>> coordination on the Networking Guide.
>>
>> Thank you all for your support and good feedback,
>>
>> Edgar
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] can we deprecate the xvp console?

2015-11-05 Thread Matt Riedemann
I noticed today that nova.console.xvp hits the database directly for 
console pools. We should convert this to objects so that the console 
service does not have direct access to the database (this is the only 
console I see that hits the database directly). However, rather than go 
through the work of creating an object for ConsolePools, if no one is 
using xvp consoles in nova then we could deprecate it.


It looks like it was added back in diablo [1] (at least).

Someone from Rackspace in IRC said that they weren't using it, so given 
it's for xenserver I assume that means probably no one is using it, but 
we need to ask first.


Please respond else I'll probably move forward with deprecation at some 
point in mitaka-1.


[1] 
https://github.com/openstack/nova/commit/b437a98738c7a564205d1b27e36b844cd54445d1


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][bugs] Developers Guide: Who's merging that?

2015-11-05 Thread Jeremy Stanley
On 2015-11-05 16:23:56 +0100 (+0100), Markus Zoeller wrote:
> some months ago I wrote down all the things a developer should know
> about the bug handling process in general [1]. It is written as a
> project agnostic thing and got some +1s but it isn't merged yet.
> It would be helpful if I could give this as a pointer
> to new contributors as I'm under the impression that the mental image
> differs a lot among the contributors. So, my questions are:
> 
> 1) Who's in charge of merging such non-project-specific things?
[...]

This is a big part of the problem your addition is facing, in my
opinion. The OpenStack Infrastructure Manual is an attempt at a
technical manual for interfacing with the systems written and
maintained by the OpenStack Project Infrastructure team. It has,
unfortunately, also grown some sections which contain cultural
background and related recommendations because until recently there
was no better venue for those topics, but we're going to be ripping
those out and proposing them to documents maintained by more
appropriate teams at the earliest opportunity.

Bug management falls into a grey area currently, where a lot of the
information contributors need is cultural background mixed with
workflow information on using Launchpad (which is not really managed
by the Infra team). Some of the material there is still a fit for
the Infra Manual insofar as we do intend to start maintaining a
defect and task tracker for the OpenStack community in the near
future, so information on how to use Launchpad is probably an
acceptable placeholder until that's ready (however much of it should
likely just link to Launchpad's own documentation for now).

Cultural content about the lifecycle of bugs, standard practices for
triage, et cetera are likely better suited to the newly created
Project Team Guide; and then there's another class of content in
your proposed addition, content which is primarily of interest to
people reporting bugs for the first time. The Developer Guide
audience doesn't, I think, have a lot of overlap with
users/deployers who need guidance on what sort of information to put
in a bug report. Unfortunately, I don't have any great suggestions
for another community-maintained document which aligns well with
that target audience either.

So anyway, to my main point, topics in collaboratively-maintained
documentation are going to end up being closely tied to the
expertise of the review team for the document being targeted. In the
case of the Infra Manual that's the systems administrators who
configure and maintain our community infrastructure. I won't speak
for others on the team, but I don't personally feel comfortable
deciding what details a user should include in a bug report for
python-novaclient, or how the Cinder team should triage their bug
reports.

I expect that the lack of core reviews are due to:

1. Few of the core reviewers feel they can accurately judge much of
the content you've proposed in that change.

2. Nobody feels empowered to tell you that this large and
well-written piece of documentation you've spent a lot of time
putting together is a poor fit and should be split up and much of it
put somewhere else more suitable (especially without a suggestion as
to where that might be).

3. The core review team for this is the core review team for all our
infrastructure systems, and we're all unfortunately very behind in
handling the current review volume.

-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] attaching and detaching volumes in the API

2015-11-05 Thread Murray, Paul (HP Cloud)
Normally operations on instances are synchronized at the compute node. In some 
cases it is necessary to synchronize somehow at the API. I have one of those 
cases and wondered what is a good way to go about it.

As part of this spec: https://review.openstack.org/#/c/221732/

I want to attach/detach volumes (and so manipulate block device mappings)
when an instance is not on any compute node (actually, when it is shelved).
Normally this happens in a function on the compute manager, synchronized on
the instance uuid. When an instance is in the shelved_offloaded state it is
not on a compute host, so the operations have to be done at the API (an
existing example is when the instance is deleted in this state: the cleanup
is done in the API but is not synchronized in this case).

One option I can see is using task states, with the expected_task_state
parameter in instance.save() to control state transitions. In the API this
makes sense, as the calls will be synchronous, so if an operation cannot be
done it can be reported back to the user in an error return. I'm sure there
must be some other options.
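
For illustration, a minimal sketch of the task-state option. The task
state name and the function are invented; only instance.save()'s
expected_task_state behaviour is the real mechanism:

def attach_volume_offloaded(context, instance, bdm):
    instance.task_state = 'attaching_volume'  # hypothetical task state
    # save() raises exception.UnexpectedTaskStateError if a concurrent
    # request changed task_state after this one read the instance; the
    # API can turn that into an error return for the user.
    instance.save(expected_task_state=[None])
    try:
        bdm.create()  # manipulate the block device mappings at the API
    finally:
        instance.task_state = None
        instance.save()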

Any suggestions would be welcome.

Paul

Paul Murray
Nova Technical Lead, HP Cloud
Hewlett Packard Enterprise
+44 117 316 2527



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread gord chung



On 05/11/2015 1:06 PM, Nader Lahouti wrote:

Hi Doug,

I have an app that listens to notifications and used the info provided in
http://docs.openstack.org/developer/oslo.messaging/notification_listener.html


Basically I create
1. NotificationEndpoints(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
2. NotifcationListener(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
3. and call start() and  then wait()


The correct usage is to call stop() before wait()[1]. For reference on 
how to use listeners, you can see Ceilometer[2]


[1]http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250
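
A minimal sketch of that lifecycle (assuming an eventlet executor and default 
transport config; the point is only the stop()-before-wait() ordering):

    import eventlet
    eventlet.monkey_patch()

    import oslo_messaging
    from oslo_config import cfg

    class NotificationEndpoints(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            pass  # handle the notification

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [NotificationEndpoints()], executor='eventlet')
    listener.start()
    try:
        while True:
            eventlet.sleep(1)  # keep consuming; no stop()/wait() needed here
    except KeyboardInterrupt:
        pass
    finally:
        listener.stop()  # first stop taking new messages...
        listener.wait()  # ...then wait for in-flight ones to finish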


--
gord


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Joshua Harlow

Sean Dague wrote:

On 11/05/2015 06:00 AM, Thierry Carrez wrote:

Hayes, Graham wrote:

On 04/11/15 20:04, Ed Leafe wrote:

On Nov 3, 2015, at 6:45 AM, Davanum Srinivas  wrote:

Here's a Devstack review for zookeeper in support of this initiative:

https://review.openstack.org/241040

Thanks,
Dims

I thought that the operators at that session made it very clear that they would 
*not* run any Java applications, and that if OpenStack required a Java app to 
run, they would no longer use it.

I like the idea of using Zookeeper as the DLM, but I don't think it should be 
set up as a default, even for devstack, given the vehement opposition expressed.


-- Ed Leafe


I got the impression that there were *some* operators that wouldn't run
java.


I feel like I'd like to see that with data. Because every Ops session
I've been in around logging and debugging has had nearly everyone raise
their hand that they are running the ELK stack for log analysis. So they
are all running Java already.

I would absolutely hate to have some design point get made based on
rumors from ops and "java is icky" sentiment from the dev space.

Defaults matter, because it means you get a critical mass of operators
running similar configs, and they can build and share knowledge. For all
of the issues with Rabbit, it has demonstrably been good to have
collaboration in the field between operators that have shared patterns
and fed back the issues. So we should really say Zookeeper is the
default choice, even if there are others people could choose that have
extra mustachy / monocle goodness.



+1 from me

I mean, I get that there will be some person out there who will say 'no, 
icky, that's java', but that type of person will *always* exist, no matter 
what the situation, and if we are basing sound technical decisions on 
that one person (and/or small set of people) it makes me wonder what the 
heck we are doing...


Because that's totally crazy (IMHO). After a while we need to listen to 
the 99% and make a solution targeted at them, and accept that we will 
not make 100% of people happy all the time. This is why I personally 
like being opinionated, and I think/thought that OpenStack as a group had 
matured enough to do this (but I see that it still isn't ready to).


My 2 cents,

-Josh


-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Nader Lahouti
Thanks for the pointer, I'll look into it. But one question: by calling
stop() and then wait(), does that mean the application has to call start()
again after wait() in order to process more messages?

I am also using
http://docs.openstack.org/developer/oslo.messaging/server.html for the RPC
server
Does it mean there has to be stop() and then wait() there as well?


Thanks,
Nader.



On Thu, Nov 5, 2015 at 10:19 AM, gord chung  wrote:

>
>
> On 05/11/2015 1:06 PM, Nader Lahouti wrote:
>
>> Hi Doug,
>>
>> I have an app that listens to notifications and used the info provided in
>>
>> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
>>
>>
>> Basically I create
>> 1. NotificationEndpoints(object):
>>
>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
>> 2. NotifcationListener(object):
>>
>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
>> 3. and call start() and  then wait()
>>
>
> The correct usage is to call stop() before wait()[1]. For reference on how
> to use listeners, you can see Ceilometer[2]
>
> [1]
> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
> [2]
> https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250
>
> --
> gord
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Daneyon Hansen (danehans)
All,

I apologize for issues with today's meeting. My calendar was updated to reflect 
daylight saving time and displayed an incorrect meeting start time. This issue is 
now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has been pushed 
back 30 minutes from our usual start time. This is because Docker is hosting a 
Meetup [1] to discuss the new 1.9 networking features. I encourage everyone to 
attend the Meetup.

[1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/
[2] 
https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting

Regards,
Daneyon Hansen
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][Nova] Tempest Hypervisor Feature Tagging

2015-11-05 Thread Rafael Folco
Is there any way to know what hypervisor features[1] were tested in a Tempest 
run? 
From what I’ve seen, currently there is no way to tell what tests cover what 
features.
It looks like Tempest has UUID and service tagging, but no reference to the 
hypervisor features.

It would be good to track/map covered features and generate a report for CI.
If there is any interest in that, I'd like to validate whether metadata tagging 
(similar to the UUID tags) is a reasonable approach; a sketch of the idea is below.

[1] http://docs.openstack.org/developer/nova/support-matrix.html 
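
Purely as a hypothetical sketch (not existing Tempest code), a decorator in 
the style of the UUID tagging could attach feature metadata for a report tool 
to collect:

    # Hypothetical decorator, modeled on Tempest's idempotent_id tagging.
    def hypervisor_features(*features):
        """Tag a test with the support-matrix features it exercises."""
        def decorator(f):
            f.hypervisor_features = list(features)  # read by a report tool
            return f
        return decorator

    class ServerActionsTest(object):  # stands in for a Tempest test class
        @hypervisor_features('attach-volume', 'detach-volume')
        def test_attach_detach_volume(self):
            pass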


Thanks.

-rfolco

Rafael Folco
OpenStack Continuous Integration
IBM Linux Technology Center



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Bugs status

2015-11-05 Thread Dmitry Pyzhov
Hi guys,

The new report is based on 'area' tags. I'm sorry for the hardly readable heap of
numbers. Here are the current number of open bugs, the number of bugs
opened since last Thursday, and the number of bugs closed in the same period.

Bugs in python, library and UI areas. Format: Total open(UI open/Python
open/Library open) +Total income (UI/Python/Library) -Total
outcome(UI/Python/Library)

Defects:
- Critical/high: 58(5/33/20) +23(1/8/14) -27(2/6/19)
- Medium/low: 199(44/103/52) +14(1/4/9) -21(0/13/8)
Features tracked as bug reports:
- Critical/high: 38(1/31/6) +1(0/1/0) -3(1/1/1)
- Medium/low: 79(3/61/15) +2(0/1/1) -3(0/1/2)
Technical debt bugs:
- Critical/high: 14(0/9/5) +2(0/2/0) -3(0/2/1)
- Medium/low: 91(1/68/22) +4(0/2/2) -6(0/4/2)

Let me decode the first row, as it is the important one. We have 58 high and 
critical priority open bugs: 5 of them are in UI, 33 in python and 20 in 
library. In the last 7 days we've got 23 new bugs and closed 27.

A little bit more about high and critical priority bugs. In library we fixed 
as many bugs as we have open in total. That means this number no longer depends 
on our fixing speed; the only way it can be reduced is by reducing the 
inflow of new bugs.

In python we have 33 high/critical bugs, and 15 of them are related to
features being developed. We have several really tricky bugs but we are
close to the end of the queue. It doesn't look like we can do anything to
significantly reduce the number of bugs here: we get new bugs and we fix
them.

I hope that we'll be able to focus on the 14 high priority tech-debt bugs and 155
medium priority bugs soon. That highly depends on new findings.

Bugs in other teams. Format: open total(open high) +income total(income
high) -outcome total(outcome high).
- QA: 71(21) +24(13) -21(13)
- Docs: 156(35) +6(2) -2(0)
- Devops: 62(24) +10(5) -10(8)
- Build: 43(12) +11(9) -20(13)
- CI: 63(31) +10(7) -11(10)
- MOS: 45(15) +7(4) -3(1)
- Partners: 12(5) +0(0) -0(0)
- MOS Linux: 15(5) +0(0) -1(0)
- Plugins: 3(1) +1(1) -3(2)

Let me explain the first row as an example. We have 71 bugs in QA, 21 of which
have high or critical priority. 24 new bugs were created in the last 7 days, 13
of them high/critical. 21 bugs were closed during the same period, 13 of them
with high or critical priority.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Networking Subteam Meetings

2015-11-05 Thread Lee Calcote
It seems integration of the SocketPlane acquisition has come to fruition in 1.9…

Lee

> On Nov 5, 2015, at 1:18 PM, Daneyon Hansen (danehans) wrote:
> 
> All,
> 
> I apologize for issues with today's meeting. My calendar was updated to 
> reflect daylight saving time and displayed an incorrect meeting start time. This 
> issue is now resolved. We will meet on 11/12 at 18:30 UTC. The meeting has 
> been pushed back 30 minutes from our usual start time. This is because Docker 
> is hosting a Meetup [1] to discuss the new 1.9 networking features. I 
> encourage everyone to attend the Meetup.
> 
> [1] http://www.meetup.com/Docker-Online-Meetup/events/226522306/ 
> 
> [2] 
> https://wiki.openstack.org/wiki/Meetings/Containers#Container_Networking_Subteam_Meeting
>  
> 
> 
> Regards,
> Daneyon Hansen
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Spam] Re: [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Joshua Harlow

Clint Byrum wrote:

Excerpts from Chris Dent's message of 2015-11-05 00:08:16 -0800:

On Thu, 5 Nov 2015, Robert Collins wrote:


In the session we were told that zookeeper is already used in CI jobs
for ceilometer (was this wrong?) and thats why we figured it made a
sane default for devstack.

For clarity: What ceilometer (actually gnocchi) is doing is using tooz
in CI (gate-ceilometer-dsvm-integration). And for now it is using
redis as that was "simple".

Outside of CI it is possible to deploy ceilo, aodh and gnocchi to use
tooz for coordinating group partitioning in active-active HA setups
and shared locks. Again the standard deploy for that has been to use
redis because of availability. It's fairly understood that zookeeper
would be more correct but there are packaging concerns.



Redis jettisons all consistency on partitions... It's really ugly:

https://aphyr.com/posts/307-call-me-maybe-redis-redux

 These results are catastrophic. In a partition which lasted for
 roughly 45% of the test, 45% of acknowledged writes were thrown
 away. To add insult to injury, Redis preserved all the failed writes
 in place of the successful ones.

So... yeah. I actually think it is dangerous to have Redis in tooz at
all. One partition and you have split brains, locks granted to multiple
places, and basically the pure chaos that you were trying to prevent by
using a lock in the first place. If you're using redis, the only sane
thing to do is to shut everything down when there's a partition (which
is not easy to detect!).


This is where it gets weird: redis, imho, is a lot like openstack, with a lot 
of ways to tweak it, a lot of operational modes and a few 
clustering/failover modes.


The one that I think the above mentions is sentinel:

http://redis.io/topics/sentinel

But from my understanding the following is being created/evolving to 
make this better (to some degree):


http://redis.io/topics/cluster-tutorial

http://redis.io/topics/cluster-spec

Overall, maybe we should deprecate the redis driver and come back to it 
when clustering has been proven out more (afaik redis clustering is 
fairly new); that might be acceptable, imho, if we as a community are 
willing to do this.
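
For what it's worth, nothing in the tooz API would change for users either 
way; the backend is selected by URL, so deprecating the redis driver would 
mostly be a packaging and documentation exercise. A minimal sketch, assuming 
a reachable coordination service:

    from tooz import coordination

    # 'redis://127.0.0.1:6379' would select the redis driver instead;
    # the calling code stays identical.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'my-member-id')
    coordinator.start()

    lock = coordinator.get_lock(b'my-resource')
    with lock:  # only one member of the deployment holds this at a time
        pass    # ... do the protected work ...

    coordinator.stop()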




To contrast this with Zookeeper and Consul:

https://aphyr.com/posts/291-call-me-maybe-zookeeper
https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

Even though etcd and consul ended up suffering from stale reads, they
added pieces to their API that allow fully consistent reads (presumably
suffering a performance penalty when doing so).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. What does it imply?

2015-11-05 Thread Vasudevan, Swaminathan (PNB Roseville)
+1

-Original Message-
From: Carl Baldwin [mailto:c...@ecbaldwin.net] 
Sent: Thursday, November 05, 2015 10:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. 
What does it imply?

On Thu, Nov 5, 2015 at 8:17 AM, Ihar Hrachyshka  wrote:
> - Releases page on wiki [2] calls the branch ‘Security-supported’ (and 
> it’s not clear what it implies)

I saw this same thing yesterday when it was pointed out in the DVR IRC meeting 
[1].  I have a hard time believing that we want to abandon bug fix support for 
Kilo especially given recent attempts to be more proactive about it [2] (which 
I applaud).  I suspect that there has simply been a mis-communication and we 
need to get the story straight in the wiki pages which Ihar pointed out.

> - StableBranch page though requires that we don’t merge non-critical 
> bug fixes there: "Only critical bugfixes and security patches are acceptable”

Seems a little premature for Kilo.  It is little more than 6 months old.

> Some projects may want to continue backporting reasonable (even though
> non-critical) fixes to older stable branches. F.e. in neutron, I think 
> there is will to continue providing backports for the branch.

+1  I'd like to reiterate my support for backporting appropriate and
sensible bug fixes to Kilo.

> I wonder though whether we would not break some global openstack rules 
> by continuing with those backports. Are projects actually limited 
> about what types of bug fixes are supposed to go in stable branches, 
> or we embrace different models of stable maintenance and allow for 
> some freedom per project?

Carl

[1] 
http://eavesdrop.openstack.org/meetings/neutron_dvr/2015/neutron_dvr.2015-11-04-15.00.log.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077236.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Spam] Re: [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Clint Byrum
Excerpts from Chris Dent's message of 2015-11-05 00:08:16 -0800:
> On Thu, 5 Nov 2015, Robert Collins wrote:
> 
> > In the session we were told that zookeeper is already used in CI jobs
> > for ceilometer (was this wrong?) and thats why we figured it made a
> > sane default for devstack.
> 
> For clarity: What ceilometer (actually gnocchi) is doing is using tooz
> in CI (gate-ceilometer-dsvm-integration). And for now it is using
> redis as that was "simple".
> 
> Outside of CI it is possible to deploy ceilo, aodh and gnocchi to use
> tooz for coordinating group partitioning in active-active HA setups
> and shared locks. Again the standard deploy for that has been to use
> redis because of availability. It's fairly understood that zookeeper
> would be more correct but there are packaging concerns.
> 

Redis jettisons all consistency on partitions... It's really ugly:

https://aphyr.com/posts/307-call-me-maybe-redis-redux

These results are catastrophic. In a partition which lasted for
roughly 45% of the test, 45% of acknowledged writes were thrown
away. To add insult to injury, Redis preserved all the failed writes
in place of the successful ones.

So... yeah. I actually think it is dangerous to have Redis in tooz at
all. One partition and you have split brains, locks granted to multiple
places, and basically the pure chaos that you were trying to prevent by
using a lock in the first place. If you're using redis, the only sane
thing to do is to shut everything down when there's a partition (which
is not easy to detect!).

To contrast this with Zookeeper and Consul:

https://aphyr.com/posts/291-call-me-maybe-zookeeper
https://aphyr.com/posts/316-call-me-maybe-etcd-and-consul

Even though etcd and consul ended up suffering from stale reads, they
added pieces to their API that allow fully consistent reads (presumably
suffering a performance penalty when doing so).

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > Can people help me work through the right set of tools for this use case 
> > (has come up from several Operators) and map out a plan to implement it:
> > 
> > Large cloud with many users coming from multiple Federation sources has 
> > a policy of providing a minimal setup for each user upon first visit to 
> > the cloud:  Create a project for the user with a minimal quota, and 
> > provide them a role assignment.
> > 
> > Here are the gaps, as I see it:
> > 
> > 1.  Keystone provides a notification that a user has logged in, but 
> > there is nothing capable of executing on this notification at the 
> > moment.  Only Ceilometer listens to Keystone notifications.
> > 
> > 2.  Keystone does not have a workflow engine, and should not be 
> > auto-creating projects.  This is something that should be performed via 
> > a Heat template, and Keystone does not know about Heat, nor should it.
> > 
> > 3.  The Mapping code is pretty static; it assumes a user entry or a 
> > group entry in identity when creating a role assignment, and neither 
> > will exist.
> > 
> > We can assume a special domain for Federated users to have per-user 
> > projects.
> > 
> > So; lets assume a Heat Template that does the following:
> > 
> > 1. Creates a user in the per-user-projects domain
> > 2. Assigns a role to the Federated user in that project
> > 3. Sets the minimal quota for the user
> > 4. Somehow notifies the user that the project has been set up.
> > 
> > This last probably assumes an email address from the Federated 
> > assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
> > for any projects" error, and is stumped.
> > 
> > How is quota assignment done in the other projects now?  What happens 
> > when a project is created in Keystone?  Does that information gets 
> > transferred to the other services, and, if so, how?  Do most people use 
> > a custom provisioning tool for this workflow?
> > 
> 
> I know at Dreamhost we built some custom integration that was triggered
> when someone turned on the Dreamcompute service in their account in our
> existing user management system. That integration created the account in
> keystone, set up a default network in neutron, etc. I've long thought we
> needed a "new tenant creation" service of some sort, that sits outside
> of our existing services and pokes them to do something when a new
> tenant is established. Using heat as the implementation makes sense, for
> things that heat can control, but we don't want keystone to depend on
> heat and we don't want to bake such a specialized feature into heat
> itself.
> 

I agree, an automation piece that is built-in and easy to add to
OpenStack would be great.

I do not agree that it should be Heat. Heat is for managing stacks that
live on and change over time and thus need the complexity of the graph
model Heat presents.

I'd actually say that Mistral or Ansible are better choices for this. A
service which listens to the notification bus and triggers a workflow
defined somewhere in either Ansible playbooks or Mistral's workflow
language would simply run through the "skel" workflow for each user.

The actual workflow would probably almost always be somewhat site
specific, but it would make sense for Keystone to include a few basic ones
as "contrib" elements. For instance, the "notify the user" piece would
likely be simplest if you just let the workflow tool send an email. But
if your cloud has Zaqar, you may want to use that as well or instead.

Adding Mistral here to see if they have some thoughts on how this
might work.
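
As a sketch of the glue piece (assuming keystone's CADF audit notifications, 
where the payload carries an initiator id; the workflow hand-off is left 
abstract):

    import oslo_messaging
    from oslo_config import cfg

    class AutoProvisionEndpoint(object):
        def info(self, ctxt, publisher_id, event_type, payload, metadata):
            if event_type != 'identity.authenticate':
                return
            user_id = payload.get('initiator', {}).get('id')
            # Hand off user_id to the site-specific "skel" workflow here,
            # e.g. start a Mistral execution or run an Ansible playbook.

    transport = oslo_messaging.get_transport(cfg.CONF)
    targets = [oslo_messaging.Target(topic='notifications')]
    listener = oslo_messaging.get_notification_listener(
        transport, targets, [AutoProvisionEndpoint()])
    listener.start()
    # ... run until shutdown is requested, then listener.stop(); listener.wait()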

BTW, if this does form into a new project, I suggest naming it
Skeleton[1]

[1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] can we deprecate the xvp console?

2015-11-05 Thread Andrew Laski

On 11/05/15 at 10:39am, Matt Riedemann wrote:
I noticed today that nova.console.xvp hits the database directly for 
console pools. We should convert this to objects so that the console 
service does not have direct access to the database (this is the only 
console I see that hits the database directly). However, rather than 
go through the work of creating an object for ConsolePools, if no one 
is using xvp consoles in nova then we could deprecate it.


It looks like it was added back in diablo [1] (at least).

Someone from Rackspace in IRC said that they weren't using it, so 
given it's for xenserver I assume that means probably no one is using 
it, but we need to ask first.


So apparently I was wrong.  We are using both novnc and xvpvncproxy in 
an attempt to eventually get off of xvpvncproxy.  It's possible that 
nobody else is using it, but Rackspace at least is for now.




Please respond else I'll probably move forward with deprecation at 
some point in mitaka-1.


[1] 
https://github.com/openstack/nova/commit/b437a98738c7a564205d1b27e36b844cd54445d1

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread gord chung
My understanding is that if you are calling stop()/wait(), your intention 
is to shut down the listener. If you intend to keep an active 
consumer on the queue, you shouldn't be calling either stop() or wait(), 
just start().
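
The same rule applies to the RPC server: stop() ends consumption, wait() joins 
in-flight requests, and neither is needed while you are serving. A minimal 
sketch (same eventlet/transport assumptions as for a listener):

    import oslo_messaging
    from oslo_config import cfg

    class TestEndpoint(object):
        def echo(self, ctxt, arg):
            return arg

    transport = oslo_messaging.get_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='test', server='server1')
    server = oslo_messaging.get_rpc_server(
        transport, target, [TestEndpoint()], executor='eventlet')
    server.start()
    # ... serve for the life of the process; only when shutting down:
    server.stop()   # stop accepting new requests
    server.wait()   # let in-flight requests complete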


On 05/11/2015 2:07 PM, Nader Lahouti wrote:


Thanks for the pointer, I'll look into it. But one question, by 
calling stop() and then wait(), does it mean the application has to 
call start() again after the wait()? to process more messages?


I am also using 
http://docs.openstack.org/developer/oslo.messaging/server.html for the 
RPC server

Does it mean there has to be stop() and then wait() there as well?


Thanks,
Nader.



On Thu, Nov 5, 2015 at 10:19 AM, gord chung wrote:




On 05/11/2015 1:06 PM, Nader Lahouti wrote:

Hi Doug,

I have an app that listens to notifications and used the info
provided in

http://docs.openstack.org/developer/oslo.messaging/notification_listener.html


Basically I create
1. NotificationEndpoints(object):

https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
2. NotifcationListener(object):

https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
3. and call start() and  then wait()


The correct usage is to call stop() before wait()[1]. For
reference on how to use listeners, you can see Ceilometer[2]


[1]http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
[2]
https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250

-- 
gord




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Nader Lahouti
Hi Doug,

I have an app that listens to notifications and used the info provided in
http://docs.openstack.org/developer/oslo.messaging/notification_listener.html


Basically I create
1. NotificationEndpoints(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
2. NotifcationListener(object):
https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
3. and call start() and  then wait()


Thanks,
Nader.



On Thu, Nov 5, 2015 at 5:27 AM, Doug Hellmann  wrote:

> Excerpts from Nader Lahouti's message of 2015-11-04 21:25:15 -0800:
> > Hi,
> >
> > I'm seeing the below warning message continuously:
> >
> > 2015-11-04 21:09:38  WARNING [oslo_messaging.server] wait() should have
> > been called after stop() as wait() waits for existing messages to finish
> > processing, it has been 692.98 seconds and stop() still has not been
> called
> >
> > How to avoid this waring message? Anything needs to be changed when using
> > the notification API with the latest oslo_messaging?
> >
> > Thanks,
> > Nader.
>
> What version of what application is producing the message?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][neutron] Kilo is 'security-supported'. What does it imply?

2015-11-05 Thread Carl Baldwin
On Thu, Nov 5, 2015 at 8:17 AM, Ihar Hrachyshka  wrote:
> - Releases page on wiki [2] calls the branch ‘Security-supported’ (and it’s
> not clear what it implies)

I saw this same thing yesterday when it was pointed out in the DVR IRC
meeting [1].  I have a hard time believing that we want to abandon bug
fix support for Kilo especially given recent attempts to be more
proactive about it [2] (which I applaud).  I suspect that there has
simply been a mis-communication and we need to get the story straight
in the wiki pages which Ihar pointed out.

> - StableBranch page though requires that we don’t merge non-critical bug
> fixes there: "Only critical bugfixes and security patches are acceptable”

Seems a little premature for Kilo.  It is little more than 6 months old.

> Some projects may want to continue backporting reasonable (even though
> non-critical) fixes to older stable branches. F.e. in neutron, I think there
> is will to continue providing backports for the branch.

+1  I'd like to reiterate my support for backporting appropriate and
sensible bug fixes to Kilo.

> I wonder though whether we would not break some global openstack rules by
> continuing with those backports. Are projects actually limited about what
> types of bug fixes are supposed to go in stable branches, or we embrace
> different models of stable maintenance and allow for some freedom per
> project?

Carl

[1] 
http://eavesdrop.openstack.org/meetings/neutron_dvr/2015/neutron_dvr.2015-11-04-15.00.log.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-October/077236.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Robert Collins
On 5 November 2015 at 11:32, Fox, Kevin M  wrote:
> To clarify that statement a little more,
>
> Speaking only for myself as an op, I don't want to support yet one more 
> snowflake in a sea of snowflakes, that works differently than all the rest, 
> without a very good reason.
>
> Java has its own set of issues associated with the JVM. Care-and-feeding 
> sorts of things. If we are to invest time/money/people in learning how to 
> properly maintain it, it's easier to justify if it's not just a one-off for 
> just DLM,
>
> So I wouldn't go so far as to say we're vehemently opposed to java, just that 
> DLM on its own is probably not a strong enough feature all on its own to 
> justify requiring pulling in java. It's been only a very recent thing that you 
> could convince folks that DLM was needed at all. So either make java 
> optional, or find some other use cases that needs java badly enough that you 
> can make java a required component. I suspect some day searchlight might be 
> compelling enough for that, but not today.
>
> As for the default, the default should be a good reference. If most sites would 
> run with etcd or something else since java isn't needed, then don't default 
> zookeeper on.

So lets be clear about the discussion at the summit.

There were three, non-conflicting and distinct concerns raised about Java.

One is the 'its a new platform for us operators to understand
operations around' - which is fair, and indeed, Java has different
(not better, different) behaviours to the CPython VM.

Secondly, 'us operators do not want to be a special snowflake, we
*want* to run the majority configuration' - which makes sense, and is
one reason to aim for a convergent stack where possible.

Thirdly, 'many of our customers *will not* run Oracle's JVM and the
stability and performance of Zookeeper on openjdk is an unknown'. The
argument was that we can't pick zk because the herd run it on Oracle's
JVM not openjdk - now there are some unquantified bits here, but it is
known that openjdk has had sufficient differences to Oracle JVM to
cause subtle bugs, so if most large zk shops are running Oracle JVM
then indeed this becomes a special-snowflake risk.

I don't recall *anyone* saying they thought zk was bad, or that they
would refuse to run it if we had chosen zk rather than tooz. We got
stuck on that third issue - there was no way to answer it in the
session, and its obviously a terrifying risk to take.

And because for every option some operators were going to be unhappy,
we fell back to the choice of not making a choice.

There are a bunch of parameters around DLM usage that we haven't
quantified yet - we can talk capabilities sensibly, but we don't yet
know how much load we will put on the DLM, nor how it will scale
relative to cloud size. My naive expectation is that we'll need a
-very- large cloud to stress the cluster size of any decent DLM, but
that request rate / latency could be a potential issue as clouds scale
(e.g. need care and feeding).

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kuryr] competing implementations

2015-11-05 Thread Adrian Otto
Sometimes producing alternate implementations can be more effective than 
abstract discussions because they are more concrete. If an implementation can 
be produced (possibly multiple different implementations by different 
contributors) in a short period of time without significant effort, that’s 
usually better than a lengthy discussion. Keep in mind that even a WIP review 
can be helpful for facilitating this sort of a discussion. Having a talk about 
a specific review is usually much more effective than when the discussion is 
happening completely in abstract terms.

Keep in mind that many OpenStack contributors speak English as a second 
language. They may actually be much more effective in expressing their ideas in 
code rather than in the form of a debate. Using alternate implementations for 
something is one way to let these contributors shine with a novel idea, even if 
they struggle to articulate themselves or feel uncomfortable in a verbal debate.

If you are about to go implement something that takes a significant effort, 
then it would be annoying to have an alternate implementation show up and you'll 
feel like your work goes to waste. The way to prevent this is to encourage all 
active contributors to share ideas in the project IRC channel, show up 
regularly to the team meetings, and convey your intent to the technical lead. If 
you are surprised by alternate implementations for your work, that’s a symptom 
that one or more of you are not well coordinated. If we solve that, everyone 
can potentially move more quickly. Anyone struggling with this problem might 
consider the guidance I offered in Vancouver [1].

Adrian

[1] 
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/7-habits-of-highly-effective-contributors

On Nov 4, 2015, at 7:04 PM, Vikas Choudhary wrote:


Seen from the angle of the contributor whose approach turns out not to be better 
than the competing one, it is far easier to accept the logic at the 
discussion stage than after weeks of tracking a review request and addressing 
review comments.

On 5 Nov 2015 08:24, "Vikas Choudhary" wrote:

@Toni,

In scenarios where two developers with different implementation approaches 
are not able to reach any consensus over gerrit or the ML, IMO the other core 
members can hold a vote or discussion and then the PTL should take a call on 
which one to accept and allow for implementation. The community has to make a 
call even after the implementations exist anyway, so why waste effort on 
implementation unnecessarily? WDYT?

On 4 Nov 2015 19:35, "Baohua Yang" wrote:
Sure, thanks!
And I suggest adding the time and channel information to the kuryr wiki page.


On Wed, Nov 4, 2015 at 9:45 PM, Antoni Segura Puimedon wrote:


On Wed, Nov 4, 2015 at 2:38 PM, Baohua Yang wrote:
+1, Antoni!
btw, is our weekly meeting still on the meeting-4 channel?
I didn't find it there yesterday.

Yes, it is still on openstack-meeting-4, but this week we skipped it, since 
some of us were
traveling and we already held the meeting on Friday. Next Monday it will be 
held as usual
and the following week we start alternating (we have yet to get a room for that 
one).

On Wed, Nov 4, 2015 at 9:27 PM, Antoni Segura Puimedon wrote:
Hi Kuryrs,

Last Friday, as part of the contributors meetup, we also discussed code 
contribution etiquette. Like other OpenStack projects (Magnum comes to mind), 
the etiquette for what to do when there is disagreement in the way to code a 
blueprint or fix a bug is as follows:

1.- Try to reach out so that the original implementation gets closer to a 
compromise by having the discussion in gerrit (and the mailing list if it requires 
a wider range of arguments).
2.- If a compromise can't be reached, feel free to make a separate 
implementation, arguing well its differences, virtues and comparative 
disadvantages. We trust the whole community of reviewers to be able to judge 
which is the best implementation and I expect that often the reviewers will 
steer both submissions closer than they originally were.
3.- If both competing implementations get the necessary support, the core 
reviewers will take a specific decision on which to take based on technical 
merit. Important factors are:
* conciseness,
* simplicity,
* loose coupling,
* logging and error reporting,
* test coverage,
* extensibility (when an immediate pending and blueprinted feature can 
better be built on top of it).
* documentation,
* performance.

It is important to remember that technical disagreement is a healthy thing and 
should be tackled with civility. If we follow the rules 

Re: [openstack-dev] DevStack errors...

2015-11-05 Thread Thales
Neil Jerram wrote: "When you say 'on Ubuntu 14.04', are we talking a completely 
fresh install with nothing else on it?  That's the most reliable way to run 
DevStack - people normally create a fresh disposable VM for this kind of work."

-- I finally got it running!  I did what you said, and created a VM.  I 
basically followed this guy's video tutorial.  The only difference is I used 
stable/liberty instead of stable/icehouse (which I guess no longer exists). 
It is, however, *very* slow on my machine, with 4 gigabytes of RAM and a 30 GB HDD. 
I did have some problems getting VirtualBox working (I know others are using 
VMware) with their "guest additions", because none of the standard instructions 
worked.  Some user on askubuntu.com had the answer; this gave me the bigger 
screen: 
http://askubuntu.com/questions/451805/screen-resolution-problem-with-ubuntu-14-04-and-virtualbox

The answer given by the guy named "Chip" and then the reply to him by "Snark" 
did the trick.
The tutorial I used: https://www.youtube.com/watch?v=zoi8WpGwrXM


I supplied details here in case anyone else has the same difficulties.
Thanks for the help!
Regards, John

On Tuesday, November 3, 2015 3:35 AM, Neil Jerram wrote:

  On 02/11/15 23:56, Thales wrote:

I'm trying to get DevStack to work, but am getting errors.  Is this a good list 
to ask questions for this?  I can't seem to get answers anywhere I look.   I 
tried the openstack list, but it kind of moves slow.
Thanks for any help.
Regards, John

In case it helps, I had no problem using DevStack's stable/liberty branch 
yesterday.  If you don't specifically need master, you might try that too:

  # Clone the DevStack repository.
  git clone https://git.openstack.org/openstack-dev/devstack

  # Use the stable/liberty branch.
  cd devstack
  git checkout stable/liberty

  ...

I also just looked again at your report on openstack@.  Were you using Python 
2.7?

I expect you'll have seen discussions like 
http://stackoverflow.com/questions/23176697/importerror-no-module-named-io-in-ubuntu-14-04.
  It's not obvious to me how those can be relevant, though, as they seem to 
involve corruption of an existing virtualenv, whereas DevStack I believe 
creates a virtualenv from scratch.

When you say 'on Ubuntu 14.04', are we talking a completely fresh install with 
nothing else on it?  That's the most reliable way to run DevStack - people 
normally create a fresh disposable VM for this kind of work.

Regards,
    Neil



  __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Congress] Summit recap

2015-11-05 Thread Tim Hinrichs
Hi all,

It was great seeing so many Congress people in Tokyo last week!  Hopefully
you've all had a chance to recover by now.  Here's an overview of what
happened.  I was planning to go over this at this week's IRC meeting, but
forgot about the U.S. time change and missed the meeting--sorry about that.

1. Hands On Lab.   There were 40-50 people who attended, and all but 3-4 of
them got the VM we provided installed and worked through the lab.  1 of the
failures didn't have enough memory; 1 was something to do with VDX (?
Eric--is that right?); 1 was a version of Linux for which there wasn't a
VirtualBox installer.  The only weird problem was a glitch with the Horizon
interface that wouldn't show a table that we could show on the command
line.  Overall, people seemed to like Congress and what it had to offer.

2. Working session: distributed architecture
Base class is working with oslo-messaging, but unit tests are not working.
Peter is planning to debug and push to review in the next few weeks.

One thing we discussed was that the distributed architecture is only a
building block for an HA design.  But it does not deliver HA.  In
particular, for HA we will want to have multiple copies of the policy
engine, and these copies should be hidden from the user; the system should
take care of mapping an API call intended for the policy engine to one of
the copies.  The distributed architecture does not hide the existence of
multiple policy engines; rather, the user is responsible for spinning up
multiple policy engines, giving them different names, and directing API
requests to whichever one of the policy engines she wants to interact with.

3. Working session: infrastructure/testing
- We agreed to add Murano tests to our gate (as non-voting) to ensure that
we know when we add something to Congress that breaks Murano.  Should be
sufficient to simply copy their jenkins job into the Congress job-list and
make that job non-voting.

- We discussed the problem of datasource drivers, where to store them, and
how to test them.  Neutron has a similar issue with vendor-specific
plugins.  We thought it would be nice to have a separate requirements.txt
file for each driver; but then it is unclear how to test datasource drivers
in the gate because setup.py only installs the 1 requirements.txt in the
root directory.  So in the end, we decided the right thing was to have 1
requirements.txt file that includes all the dependencies for the OpenStack
drivers so that we can test those in the gate, and to have a separate
requirements.txt for each of the non-OpenStack drivers, since we can't test
those in the gate anyway.

4. Working session: Monasca and NFV.
- Fabio introduced us to Monasca, which is a monitoring project about to be
accepted into the BigTent.  It is an alternative to Ceilometer and focused
on high-performance.  They have alarms that can be set to inform the caller
any time a certain kind of event occurs.  Monasca is supposed to get a
superset of the data that Congress currently has drivers for.  They
suggested that Congress could automatically generate alarms based on the
data required by policy.  As a first step, we decided to write a simple
datasource driver to integrate with Monasca, as an easy way for the
Congress team to get familiar with Monasca.

- OPNFV Doctor project.  The Doctor project aims to detect and manage
faults in OPNFV platforms.  They hoped to use Congress to help identify
faults.  They wanted to connect Zabbix to Congress, which creates events
and have Congress push out config changes.  Concretely they asked for a
push-style datasource driver so that Zabbix could push data to Congress
through the API.  The blueprint for that work is here:
https://blueprints.launchpad.net/congress/+spec/push-type-datasource-driver

5. Discussion about Application-level Intent.

Outside the working sessions we talked with Ken Owens and his team about
application-level intent.  They are planning on building an
application-specific policy engine within the Congress framework.  For each
VM in an application, the user can rank the sensitivity of that VM as
low/medium/high for a handful of properties, e.g. latency, throughput.  The
provisioning system (which is external to Congress) then provisions the app
according to that policy, and the policy engine within Congress continually
monitors those properties and corrects violations.  The plan is to start
this as a completely standalone policy engine running a Congress node but
build it with an eye toward eventually delegating from the agnostic policy
engine to the application-intent engine.

6. Senlin project.  I heard about this project for the first time at the
summit.  It's policy-based cluster management.  Here's an email with more
details.

http://lists.openstack.org/pipermail/openstack-dev/2015-November/078498.html

It'd be great if those attended could respond with clarifications,
comments, and things I missed.

Let me know if anyone has questions/comments.
Tim

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-11-04 14:32:42 -0800:
> To clarify that statement a little more,
> 
> Speaking only for myself as an op, I don't want to support yet one more 
> snowflake in a sea of snowflakes, that works differently than all the rest, 
> without a very good reason.
> 
> Java has its own set of issues associated with the JVM. Care-and-feeding 
> sorts of things. If we are to invest time/money/people in learning how to 
> properly maintain it, it's easier to justify if it's not just a one-off for 
> just DLM,
> 
> So I wouldn't go so far as to say we're vehemently opposed to java, just that 
> DLM on its own is probably not a strong enough feature all on its own to 
> justify requiring pulling in java. It's been only a very recent thing that you 
> could convince folks that DLM was needed at all. So either make java 
> optional, or find some other use cases that needs java badly enough that you 
> can make java a required component. I suspect some day searchlight might be 
> compelling enough for that, but not today.
> 
> As for the default, the default should be a good reference. If most sites would 
> run with etcd or something else since java isn't needed, then don't default 
> zookeeper on.
> 

There are a number of reasons, but the most important are:

* Resilience in the face of failures - The current database+MQ based
  solutions are all custom made and have unknown characteristics when
  there are network partitions and node failures.
* Scalability - The current database+MQ solutions rely on polling the
  database and/or sending lots of heartbeat messages or even using the
  database to store heartbeat transactions. This scales fine for tiny
  clusters, but when every new node adds more churn to the MQ and
  database, this will (and has been observed to) be intractable.
* Tech debt - OpenStack is inventing lock solutions and then maintaining
  them. And service discovery solutions, and then maintaining them.
  Wouldn't you rather have better upgrade stories, more stability, more
  scale, and more features?

If those aren't compelling enough reasons to deploy a mature java service
like Zookeeper, I don't know what would be. But I do think using the
abstraction layer of tooz will at least allow us to move forward without
having to convince everybody everywhere that this is actually just the
path of least resistance.
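
For example, the service-discovery piece is already exposed by tooz group 
membership; a minimal sketch, assuming a zookeeper backend, where liveness 
comes from the coordination service rather than heartbeat rows polled out of 
the database:

    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'compute-host-42')
    coordinator.start()

    group = b'nova-compute'
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass
    coordinator.join_group(group).get()

    # Every live member of the group is visible to every other member.
    print(coordinator.get_members(group).get())

    coordinator.leave_group(group).get()
    coordinator.stop()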

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Adam Young

On 11/05/2015 01:09 PM, Clint Byrum wrote:

Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:

Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:

Can people help me work through the right set of tools for this use case
(has come up from several Operators) and map out a plan to implement it:

Large cloud with many users coming from multiple Federation sources has
a policy of providing a minimal setup for each user upon first visit to
the cloud:  Create a project for the user with a minimal quota, and
provide them a role assignment.

Here are the gaps, as I see it:

1.  Keystone provides a notification that a user has logged in, but
there is nothing capable of executing on this notification at the
moment.  Only Ceilometer listens to Keystone notifications.

2.  Keystone does not have a workflow engine, and should not be
auto-creating projects.  This is something that should be performed via
a Heat template, and Keystone does not know about Heat, nor should it.

3.  The Mapping code is pretty static; it assumes a user entry or a
group entry in identity when creating a role assignment, and neither
will exist.

We can assume a special domain for Federated users to have per-user
projects.

So; lets assume a Heat Template that does the following:

1. Creates a user in the per-user-projects domain
2. Assigns a role to the Federated user in that project
3. Sets the minimal quota for the user
4. Somehow notifies the user that the project has been set up.

This last probably assumes an email address from the Federated
assertion.  Otherwise, the user hits Horizon, gets a "not authenticated
for any projects" error, and is stumped.

How is quota assignment done in the other projects now?  What happens
when a project is created in Keystone?  Does that information gets
transferred to the other services, and, if so, how?  Do most people use
a custom provisioning tool for this workflow?


I know at Dreamhost we built some custom integration that was triggered
when someone turned on the Dreamcompute service in their account in our
existing user management system. That integration created the account in
keystone, set up a default network in neutron, etc. I've long thought we
needed a "new tenant creation" service of some sort, that sits outside
of our existing services and pokes them to do something when a new
tenant is established. Using heat as the implementation makes sense, for
things that heat can control, but we don't want keystone to depend on
heat and we don't want to bake such a specialized feature into heat
itself.


I agree, an automation piece that is built-in and easy to add to
OpenStack would be great.

I do not agree that it should be Heat. Heat is for managing stacks that
live on and change over time and thus need the complexity of the graph
model Heat presents.
It would be a simpler template than most, but I'm trying to avoid adding 
additional complexity here.





I'd actually say that Mistral or Ansible are better choices for this. A
service which listens to the notification bus and triggers a workflow
defined somewhere in either Ansible playbooks or Mistral's workflow
language would simply run through the "skel" workflow for each user.

The actual workflow would probably almost always be somewhat site
specific, but it would make sense for Keystone to include a few basic ones
as "contrib" elements. For instance, the "notify the user" piece would
likely be simplest if you just let the workflow tool send an email. But
if your cloud has Zaqar, you may want to use that as well or instead.

Adding Mistral here to see if they have some thoughts on how this
might work.

BTW, if this does form into a new project, I suggest naming it
Skeleton[1]


I really do not want it to be a new project, but rather I think it 
should be a mapping of the capabilities of the existing projects.



We had discussed Mistral in Vancouver as the listener.  Would it make 
sense to have Keystone notify Mistral, and then Mistral kick off the 
workflow?


The one issue I waffle on is whether Keystone itself should be 
responsible for the Keystone-specific stuff, as part of the initial log 
in, and thus give an immediate response to the user upon first 
authentication.



Alternatively, we could provide feedback in Horizon etc. letting the 
user know that the process is underway, and even let them add an 
email address for the callback if one cannot be deduced from the WebUI.



Would it make more sense to have this be a Horizon-driven workflow, using 
an unscoped Federation token?




[1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Dolph Mathews
On Thu, Nov 5, 2015 at 3:43 PM, Doug Hellmann  wrote:

> Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > > Can people help me work through the right set of tools for this use
> case
> > > > (has come up from several Operators) and map out a plan to implement
> it:
> > > >
> > > > Large cloud with many users coming from multiple Federation sources
> has
> > > > a policy of providing a minimal setup for each user upon first visit
> to
> > > > the cloud:  Create a project for the user with a minimal quota, and
> > > > provide them a role assignment.
> > > >
> > > > Here are the gaps, as I see it:
> > > >
> > > > 1.  Keystone provides a notification that a user has logged in, but
> > > > there is nothing capable of executing on this notification at the
> > > > moment.  Only Ceilometer listens to Keystone notifications.
> > > >
> > > > 2.  Keystone does not have a workflow engine, and should not be
> > > > auto-creating projects.  This is something that should be performed
> via
> > > > a Heat template, and Keystone does not know about Heat, nor should
> it.
> > > >
> > > > 3.  The Mapping code is pretty static; it assumes a user entry or a
> > > > group entry in identity when creating a role assignment, and neither
> > > > will exist.
> > > >
> > > > We can assume a special domain for Federated users to have per-user
> > > > projects.
> > > >
> > > > So; lets assume a Heat Template that does the following:
> > > >
> > > > 1. Creates a user in the per-user-projects domain
> > > > 2. Assigns a role to the Federated user in that project
> > > > 3. Sets the minimal quota for the user
> > > > 4. Somehow notifies the user that the project has been set up.
> > > >
> > > > This last probably assumes an email address from the Federated
> > > > assertion.  Otherwise, the user hits Horizon, gets a "not
> authenticated
> > > > for any projects" error, and is stumped.
> > > >
> > > > How is quota assignment done in the other projects now?  What happens
> > > > when a project is created in Keystone?  Does that information gets
> > > > transferred to the other services, and, if so, how?  Do most people
> use
> > > > a custom provisioning tool for this workflow?
> > > >
> > >
> > > I know at Dreamhost we built some custom integration that was triggered
> > > when someone turned on the Dreamcompute service in their account in our
> > > existing user management system. That integration created the account
> in
> > > keystone, set up a default network in neutron, etc. I've long thought
> we
> > > needed a "new tenant creation" service of some sort, that sits outside
> > > of our existing services and pokes them to do something when a new
> > > tenant is established. Using heat as the implementation makes sense,
> for
> > > things that heat can control, but we don't want keystone to depend on
> > > heat and we don't want to bake such a specialized feature into heat
> > > itself.
> > >
> >
> > I agree, an automation piece that is built-in and easy to add to
> > OpenStack would be great.
> >
> > I do not agree that it should be Heat. Heat is for managing stacks that
> > live on and change over time and thus need the complexity of the graph
> > model Heat presents.
> >
> > I'd actually say that Mistral or Ansible are better choices for this. A
> > service which listens to the notification bus and triggered a workflow
> > defined somewhere in either Ansible playbooks or Mistral's workflow
> > language would simply run through the "skel" workflow for each user.
> >
> > The actual workflow would probably almost always be somewhat site
> > specific, but it would make sense for Keystone to include a few basic
> ones
> > as "contrib" elements. For instance, the "notify the user" piece would
> > likely be simplest if you just let the workflow tool send an email. But
> > if your cloud has Zaqar, you may want to use that as well or instead.
> >
> > Adding Mistral here to see if they have some thoughts on how this
> > might work.
> >
> > BTW, if this does form into a new project, I suggest naming it
> > Skeleton[1]
>
> Following the pattern of Kite's naming, I think a Dirigible is a
> better way to get users into the cloud. :-)
>

lol +1

Is this use case specifically for keystone-to-keystone, or for federation
in general?

As an outcome of the Vancouver summit, we had a use case for mirroring a
federated user's project ID from the identity provider cloud to the service
provider cloud. The goal would be that a user can burst into a second cloud
and immediately receive a token scoped to the same project ID that they're
already familiar with (which implies a role assignment of some sort; for
example, member). That would have to be done in real time though, not by a
secondary service.

And with shadow users, we're looking at creating an identity (basically,

Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Clint Byrum
Excerpts from Fox, Kevin M's message of 2015-11-05 13:18:13 -0800:
> You're assuming there are only 2 choices,
>  zk or db+rabbit. I'm claiming both are suboptimal at present. A 3rd might 
> be needed. Though even with its flaws, the db+rabbit choice has a few 
> benefits too.
> 

Well, I'm assuming it is zk/etcd/consul, because while the java argument
is rather religious, the reality is all three are significantly different
from databases and message queues and thus will be "snowflakes". But yes,
I _am_ assuming that Zookeeper is a natural, logical, simple choice,
and that fact that it runs in a jvm is a poor reason to avoid it.

> You also seem to assert that to support large clouds, the default must be 
> something that can scale that large. While that would be nice, I don't think 
> its a requirement if its overly burdensome on deployers of non huge clouds.
> 

I think the current solution even scales poorly for medium sized
clouds. Only the tiniest of clouds with the fewest nodes can really
sustain all of that polling without incurring overhead that would be
better spent on servicing users.

> I don't have metrics, but I would be surprised if most deployments today 
> (production + other) used 3 controllers with a full ha setup. I would guess 
> that the majority are single controller setups. With those, the overhead of 
> maintaining a whole dlm like zk seems like overkill. If db+rabbit would work 
> for that one case, that would be one less thing to have to setup for an op. 
> They already have to set up db+rabbit. Or even a dlm plugin of some sort, that 
> won't scale, but would be very easy to deploy, and change out later when 
> needed would be very useful.
> 

We do have metrics:

http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf

Page 35, "How many physical compute nodes do OpenStack clouds have?"


10-99:   42%
1-9:     36%
100-999: 15%
1000+:    7%

So for respondents to that survey, yes, "most" are running less than 100
nodes. However, by compute node count, if we extrapolate a bit:

There were 154 respondents so:

10-99   * 42% =   640 -   6403 nodes
1-9     * 36% =    55 -    498 nodes
100-999 * 15% =  2300 -  23076 nodes
1000+   *  7% = 10780 - 107789 nodes
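
As a back-of-the-envelope check, a short Python sketch reproducing the
arithmetic above; the 154-respondent figure is from the survey, the 9999
upper bound for the largest bucket is an assumption, and the rounding
differs slightly from the figures quoted:

respondents = 154

# (bucket label, share of respondents, low node count, high node count)
buckets = [('1-9',     0.36,    1,    9),
           ('10-99',   0.42,   10,   99),
           ('100-999', 0.15,  100,  999),
           ('1000+',   0.07, 1000, 9999)]

for label, share, low, high in buckets:
    clouds = respondents * share  # estimated number of clouds in the bucket
    print('%-8s %6d - %6d nodes' % (label + ':', clouds * low, clouds * high))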

So in terms of the number of actual computers running OpenStack compute,
as an example, from the survey respondents, there are more computes
running in *one* of the clouds with more than 1000 nodes than there are
in *all* of the clouds with less than 10 nodes, and certainly more in
all of the clouds over 1000 nodes, than in all of the clouds with less
than 100 nodes.

What this means, to me, is that the investment in OpenStack should focus
on those with > 1000, since those orgs are definitely investing a lot
more today. We shouldn't make it _hard_ to do a tiny cloud, but I think
it's ok to make the tiny cloud less efficient if it means we can grow
it into a monster cloud at any point and we continue to garner support
from orgs who need to build large scale clouds.

(I realize I'm biased because I want to build a cloud with more than
1000 nodes ;)

> etcd is starting to show up in a lot of other projects, and so it may be at 
> sites already. Being able to support it may be less of a burden to operators 
> than zk in some cases.
> 

Sure, just like some shops already have postgres and in theory you can
still run OpenStack on postgres. But the testing level for postgres
support is so abysmal that I'd be surprised if anybody was actually
_choosing_ to do this. I can see this going the same way, where we give
everyone a choice, but then end up with almost nobody using any
alternative choices because the community has only rallied around the
one dominant choice.

> If your cloud grows to the point where the dlm choice really matters for 
> scalability/correctness, then you probably have enough staff members to deal 
> with adding in zk, and that's probably the right choice.
> 

If your cloud is 40 compute nodes, and three nines (which, let's face
it, that's the availability profile of a cloud with one controller), we
can just throw Zookeeper up untuned and satisfy the needs. Why would we
want to put up a custom homegrown db+mq solution and then force a change
later on if the cloud grows? A single code path seems a lot better than
multiple code paths, some of which are not really well tested.

> You can have multiple suggested things in addition to one default. Default to 
> the thing that makes the most sense in the common most deployments, and make 
> specific recommendations for certain scenarios. Like, "if greater than 100 
> nodes, we strongly recommend using zk" or something to that effect.
> 

Choices are not free either. Just edit that statement there: "We
strongly recommend using zk." Nothing about ZK, etcd, or consul,
invalidates running on a small cloud. In many ways it makes things
simpler, since the user doesn't have to decide on a DLM, but instead
just installs the thing we recommend.


Re: [openstack-dev] [puppet] Match type checking from oslo.config.

2015-11-05 Thread Sofer Athlan-Guyot
Hunter Haugen  writes:

>> Ouha!  I didn't know that property could have a parent class defined.
>> This is nice.  Does it also work for parameters?
>
> I haven't tried, but property is just a subclass of parameter so
> truthy could probably be made a parameter then become a parent of
> either a property or a parameter.

I will make a test tomorrow and report back how it goes, but you're
right, it should be ok.

>
>>
>> The NetScalerTruthy is more or less what would be needed for truthy stuff.
>>
>> On my side I came up with this solution (for different stuff, but the
>> same principle could be used here as well):
>>
>> https://review.openstack.org/#/c/238954/10/lib/puppet_x/keystone/type/read_only.rb
>>
>> And I call it like that:
>>
>>   newproperty(:id) do
>> include PuppetX::Keystone::Type::ReadOnly
>>   end
>>
>> I was thinking of extending this scheme to have needed types (Boolean,
>> ...):
>>
>>   newproperty(:truth) do
>> include PuppetX::Openstack::Type::Boolean
>>   end
>>
>> Your solution in NetScalerTruthy is nice, integrated with puppet, but
>> require a function call.
>
> The function call is to a) pass documentation inline (since I assume
> every attribute has different documentation so didn't want to hardcode
> it in the truthy class), and b) pass the default truthy/falsy values
> that should be exposed to the provider (ie, allow you to cast all
> truthy values to `"enable"` and `"disable"` instead of only supporting
> `true` and `false`.
>
> The truthy class could obviously be implemented such that if no block
> is passed to the attribute then the method is automatically called
> with default values, then you wouldn't even need the `include` mixin.

That looks like a perfect interface.  I'm going to try this on some
code.  I will report here tomorrow, hopefully in a small review :)

Thanks again for those great insights.

>>
>> My "solution" require no function call unless you have to pass
>> parameters. If you have to pass parameter, the interface I used is a
>> preset function.  Here is an example:
>>
>> https://review.openstack.org/#/c/239434/8/lib/puppet_x/keystone/type/required.rb
>>
>> and you use it like this:
>>
>>   newparam(:type) do
>> isnamevar
>> def required_custom_message
>>   'Not specifying type parameter in Keystone_endpoint is a bug. ' \
>> 'See bug https://bugs.launchpad.net/puppet-keystone/+bug/1506996 '
>> \
>> "and https://review.openstack.org/#/c/238954/ for more
>> information.\n"
>> end
>> include PuppetX::Keystone::Type::Required
>>   end
>>
>> So, assuming a parameter can have a parent, both solutions could be
>> used.  Which one will it be:
>>  - one solution (NetScalerTruthy) is based on inheritance, mine on
>> composition;
>>  - you have a function call to make with NetScalerTruthy no matter what;
>>  - you have to define a function to pass parameters with my solution (but
>>    that shouldn't be required very often).
>>
>> I tend to prefer my resulting syntax, but that's really me ... I may be
>> biased.
>>
>> What do you think ?
>>
>>>
>>> On Mon, Nov 2, 2015 at 12:06 PM Cody Herriges 
>>> wrote:
>>>
>>> Sofer Athlan-Guyot wrote:
>>> > Hi,
>>> >
>>> > The idea would be to have some of the types defined in oslo.config
>>> >
>>>
>>> http://git.openstack.org/cgit/openstack/oslo.config/tree/oslo_config/types.
>>> py
>>> > ported to puppet type. Those that looks like good candidates
>>> are:
>>> > - Boolean;
>>> > - IPAddress;
>>> > and in a lesser extend:
>>> > - Integer;
>>> > - Float;
>>> >
>>> > For instance, in a puppet type requiring a Boolean, we may test
>>> > "/[tT]rue|[fF]alse/", but the real thing is:
>>> >
>>> > TRUE_VALUES = ['true', '1', 'on', 'yes']
>>> > FALSE_VALUES = ['false', '0', 'off', 'no']
>>> >
>>>
>>> Good idea. I'd only add that we should convert 'true' and 'false'
>>> to
>>> real booleans for Puppet's purposes since the Puppet language is
>>> now typed.
>>>
>>> --
>>> Cody
>>>
>>> ___
>>> ___
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Sofer Athlan-Guyot
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
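
For reference on the oslo.config side of what is being mirrored in this
thread, a minimal sketch using oslo.config's typed options; the option
names here are invented for illustration:

from oslo_config import cfg, types

# a Boolean-typed option accepts the truthy/falsy strings quoted above
# ('true', '1', 'on', 'yes' / 'false', '0', 'off', 'no')
opts = [
    cfg.Opt('enabled', type=types.Boolean(), default=False),
    cfg.Opt('bind_ip', type=types.IPAddress()),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(opts)

These are the semantics the proposed puppet types would need to match.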

[openstack-dev] [neutron] Ether pad on O(n)/Linear Execution Time/Hyper-Scale

2015-11-05 Thread Ryan Moats


I promised during the DVR IRC meeting yesterday to re-run the L3 agent
experiments that I've been doing that have led to performance based patches
over the last two months and to provide an etherpad with both the results
and the methodology.

The etherpad is up for folks to review at [1].  While writing this, I
decided to no longer call this work "O(n)" or "Linear Execution Time" but
rather "Hyper-Scale" (because that sounds so much more cool (smile)).  Most
of what is there is methodology - I've got some results from
yesterday, but I need to dig down some more, so I'll be updating that part
either tomorrow or early next week.

One thought that Kyle and I were discussing was should the "how" part go
into a devref, so that we aren't dependent on an etherpad.  I'm thinking
it's not a bad idea, but I'm wondering if it should only be in neutron or
if it should be elsewhere (like user docs that go along with code that
would implement [2] in oslo...)

Thoughts and comments are welcome,
Ryan Moats (regXboi)

[1] https://etherpad.openstack.org/p/hyper-scale
[2] https://bugs.launchpad.net/neutron/+bug/1512864
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]:Subscribe and Publish Notification frame work in Ceilometer !

2015-11-05 Thread gord chung



On 05/11/2015 5:11 AM, Raghunath D wrote:

Hi Pradeep,

Presently we are looking for a monitoring service. Using the monitoring 
service, users/applications
will subscribe to a few notifications/events from the OpenStack 
infrastructure, and the monitoring service

will publish these notifications to users/applications.

We are exploring Ceilometer for this purpose. We came across the blueprint 
below, which is similar to our requirement.


 https://blueprints.launchpad.net/ceilometer/+spec/declarative-notifications.


i'm not exactly clear on what you are trying to achieve. that said, the 
basic premise of the above blueprint is that if serviceX (nova, neutron, 
etc...) starts publishing a new notification with a metric of interest, 
Ceilometer can be easily configured to capture said metric by adding a 
metric definition to a definition file[1] or a custom definition 
file[2]. the same can be done for events[3].
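
for illustration, a meter definition in that file looks roughly like the
following (the values here are an invented example, not a real meter):

metric:
  - name: 'myservice.request.size'
    event_type: 'myservice.request.complete'
    type: 'gauge'
    unit: 'B'
    volume: $.payload.size
    resource_id: $.payload.resource_id
    project_id: $.payload.project_id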




We have few queries on declarative-notifications frame work,could you 
please help us in addressing them:


1. We are looking for an API for subscribing to/publishing notifications. Does 
this framework expose any such API? If yes, could you
please provide us the API doc or a spec on how to use it.
2. If the framework doesn't have such an API, is any development 
group working in this area?
3. Please suggest what would be the best place in the Ceilometer 
notification framework (publisher/dispatcher/...) 
to implement the subscribe and publish API.


from what is described, it seems like you'd like Ceilometer to capture a 
notification and republish it rather than store it in a Ceilometer 
supported storage driver (ie Gnocchi, ElasticSearch, SQL, etc...). 
currently, the only way to do this is to not enable a collector service. 
if you do so, the Event/Sample will be published to a message queue 
(default) which you can configure your service to pull from. currently, 
i don't believe oslo.messaging supports a pub/sub workflow. 
alternatively, you can use one of the other publishers[4]. the kafka 
publisher should allow you to do a pub/sub type workflow. i know RAX has 
atom hopper[5] which uses atom feeds to support pub/sub functionality. 
there were discussions on adding support for this but no work has been 
done on it. feel free to propose it if you feel it's worthwhile.


[1] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/data/meters.yaml
[2] 
https://github.com/openstack/ceilometer/blob/master/ceilometer/meter/notifications.py#L31
[3] 
https://github.com/openstack/ceilometer/blob/master/etc/ceilometer/event_definitions.yaml
[4] 
http://docs.openstack.org/admin-guide-cloud/telemetry-data-retrieval.html#publishers

[5] http://atomhopper.org/

cheers,

--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] manila-api failure in liberty

2015-11-05 Thread Igor Feoktistov
Hi Valeriy,

Thank you. The updated api-paste.ini resolved the manila-api failure.

Thanks,
Igor.

> Hello Igor,
> 
> The mentioned error indicates that the file "etc/manila/api-paste.ini" was not
> updated with the one from the new version of Manila. This file depends on the
> version of the project and can differ from release to release. So, just copy
> the Liberty version of this file to "/etc/manila/api-paste.ini" and then run
> the Liberty Manila API service.

> -- 
> Kind Regards
> Valeriy Ponomaryov
> www.mirantis.com
> vponomaryov at mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] [Bug#1497073]The return sample body of sample-list is different when use -m and not

2015-11-05 Thread gord chung
i'm sort of torn on this item. there's a general feeling that regarding 
api, nothing should be dropped so i'm hesitant to actually deprecate it. 
i think changing the data also is very dangerous when it comes to 
compatibility (even though keeping it increases inconsistency).


maybe the better solution is to document that these are different APIs 
and will return different results.


On 05/11/2015 2:30 AM, Lin Juan IX Xia wrote:

Hi,

Here is an open bug: https://bugs.launchpad.net/ceilometer/+bug/1497073

Is it a bug or not?

The command "ceilometer sample-list --meter cpu" calls the 
"/v2/meter" API and returns OldSample objects,
whose response body is different from that of "ceilometer sample-list --query 
'meter=cpu'".
To fix this inconsistency, we can deprecate the -m form of the command or fix 
it to return the same body as the query form of sample-list.

Best Regards,
Xia Linjuan




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [DefCore] Request for reviews and comments for 2016.01 DefCore (interop) guideline

2015-11-05 Thread Egle Sigler
Hello OpenStack Community,


The DefCore guideline for 2016.01 is now up for review, and we need your 
feedback. Please review and comment: https://review.openstack.org/#/c/239830/


At this time, we need feedback for capabilities that will become advisory in 
2016.01 and required in 2016.07:


"advisory": [
   "networks-l3-router",
   "networks-l2-CRUD",
   "networks-l3-CRUD",
   "networks-security-groups-CRUD",
   "compute-list-api-versions",
   "images-v2-remove",
   "images-v2-update",
   "images-v2-share",
   "images-v2-import",
   "images-v2-list",
   "images-v2-delete",
   "images-v2-get",
   "volumes-v2-create-delete",
   "volumes-v2-attach-detach",
   "volumes-v2-snapshot-create-delete",
   "volumes-v2-get",
   "volumes-v2-list",
   "volumes-v2-update",
   "volumes-v2-copy-image-to-volume",
   "volumes-v2-copy-volume-to-image",
   "volumes-v2-clone",
   "volumes-v2-qos",
   "volumes-v2-availability-zones",
   "volumes-v2-extensions",
   "volumes-v2-metadata",
   "volumes-v2-transfer",
   "volumes-v2-reserve",
   "volumes-v2-readonly",
   "identity-v3-api-discovery"
 ],


Each of these capabilities has Tempest tests associated with them. Please 
review and provide feedback. At this point, we can only remove advisory 
capabilities, and not others.


How to get involved in DefCore:

Join the mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/defcore-committee

Find us on IRC (chat.freenode.net): #openstack-defcore

Submit, review, comment: 
https://review.openstack.org/#/q/status:open+project:openstack/defcore,n,z

Join our weekly meetings on IRC: 
https://wiki.openstack.org/wiki/Governance/DefCoreCommittee#Meetings


New to DefCore?  Some pointers:

Intro to DefCore with heavy references to Dr. Who 
http://www.slideshare.net/markvoelker/defcore-the-interoperability-standard-for-openstack-53040869

DefCore 101 Tokyo presentation and slides: 
https://www.youtube.com/watch?v=MfUAuObSkK8  
http://www.slideshare.net/rhirschfeld/tokyo-defcore-presentation

Wiki: https://wiki.openstack.org/wiki/DefCore

Hacking file: https://github.com/openstack/defcore/blob/master/HACKING.rst


Please let me know if you have any questions!

Thank you,

Egle Sigler

DefCore Committee Co-Chair


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Logging - filling up my tiny SSDs

2015-11-05 Thread Sean M. Collins
On Wed, Nov 04, 2015 at 07:25:24AM EST, Sean Dague wrote:
> On 11/02/2015 10:36 AM, Sean M. Collins wrote:
> > On Sun, Nov 01, 2015 at 10:12:10PM EST, Davanum Srinivas wrote:
> >> Sean,
> >>
> >> I typically switch off screen and am able to redirect logs to a specified
> >> directory. Does this help?
> >>
> >> USE_SCREEN=False
> >> LOGDIR=/opt/stack/logs/
> > 
> > It's not that I want to disable screen. I want screen to run, and not
> > log the output to files, since I have a tiny 16GB ssd card on these NUCs
> > and it fills it up if I leave it running for a week or so. 
> 
> If you write a patch, I think it's fine to include, however it's a
> pretty edge case. Super small disks (I didn't even realize they made SSD
> that small, I thought 120 was about the floor), and running devstack for
> long times without rebuild.

I'll make sure to name the variable appropriately. Some ideas:

SEAN_COLLINS_CREEPY_BASEMENT_DEVSTACK_LAB
SEANS_DISCOUNT_DEVSTACK_EMPORIUM
ANT_SIZED_SSD

;)



-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Arbitrary JSON blobs in ipam db tables

2015-11-05 Thread Shraddha Pandhe
Hi,

I agree with all of you about the REST Apis.

As I said before, I had to bring up the idea of a JSON blob because, based on
previous discussions, it looked like the neutron community was not willing to
enhance the schemas for different ipam dbs. The entire rationale behind
pluggable IPAM is to provide flexibility. So, the community should be open to
ideas for enhancing the schema to incorporate more information in the db
tables. I would be extremely happy if use cases from different companies were
considered and the schema enhanced to include specific columns in the db
schemas instead of a column with a random JSON blob.

Let's take the subnets db table as an example. We have some use cases where it
would be great if the following information were associated with the subnet db
table (a hypothetical sketch follows the list):

1. Rack switch info
2. Backplane info
3. DHCP ip helpers
4. Option to tag allocation pools inside subnets
5. Multiple gateway addresses
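
A hypothetical sketch of what a subnet with such first-class fields could
look like (all field names beyond cidr and allocation_pools are invented
here purely for illustration):

{
    "subnet": {
        "cidr": "10.20.30.0/24",
        "rack_switch": "tor-12.dc1.example.com",
        "backplane": "bp-3",
        "dhcp_ip_helpers": ["10.20.30.2", "10.20.30.3"],
        "gateways": ["10.20.30.1", "10.20.30.254"],
        "allocation_pools": [
            {"start": "10.20.30.10", "end": "10.20.30.200", "tag": "prod"}
        ]
    }
}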

We also want to store some information about the backplanes locally, so a
different table might be useful.

In a way, this information is not specific to our company. It's generic
information which ought to go with the subnets. Different companies can use
this information differently in their IPAM drivers. But, the information
needs to be made available to justify the flexibility of ipam.

At Yahoo!, OpenStack is still not the source of truth for this kind of
information, and the database limitation is one of the reasons. I would prefer
to avoid having our own database to make sure that our use-cases are always
shared with the community.








On Thu, Nov 5, 2015 at 9:37 AM, Kyle Mestery  wrote:

> On Thu, Nov 5, 2015 at 10:55 AM, Jay Pipes  wrote:
>
>> On 11/04/2015 04:21 PM, Shraddha Pandhe wrote:
>>
>>> Hi Salvatore,
>>>
>>> Thanks for the feedback. I agree with you that arbitrary JSON blobs will
>>> make IPAM much more powerful. Some other projects already do things like
>>> this.
>>>
>>
>> :( Actually, though "powerful" it also leads to implementation details
>> leaking directly out of the public REST API. I'm very negative on this and
>> would prefer an actual codified REST API that can be relied on regardless
>> of backend driver or implementation.
>>
>
> I agree with Jay here. We've had people propose similar things in Neutron
> before, and I've been against them. The entire point of the Neutron REST
> API is to not leak these details out. It dampens the strength of the
> logical model, and it tends to have users become reliant on backend
> implementations.
>
>
>>
>> e.g. In Ironic, node has driver_info, which is JSON. it also has an
>>> 'extras' arbitrary JSON field. This allows us to put any information in
>>> there that we think is important for us.
>>>
>>
>> Yeah, and this is a bad thing, IMHO. Public REST APIs should be
>> structured, not a Wild West free-for-all. The biggest problem with using
>> free-form JSON blobs in RESTful APIs like this is that you throw away the
>> ability to evolve the API in a structured, versioned way. Instead of
>> evolving the API using microversions, instead every vendor just jams
>> whatever they feel like into the JSON blob over time. There's no way for
>> clients to know what the server will return at any given time.
>>
>> Achieving consensus on a REST API that meets the needs of a variety of
>> backend implementations is *hard work*, yes, but it's what we need to do if
>> we are to have APIs that are viewed in the industry as stable,
>> discoverable, and reliably useful.
>>
>
> ++, this is the correct way forward.
>
> Thanks,
> Kyle
>
>
>>
>> Best,
>> -jay
>>
>> Best,
>> -jay
>>
>> Hoping to get some positive feedback from API and DB lieutenants too.
>>>
>>>
>>> On Wed, Nov 4, 2015 at 1:06 PM, Salvatore Orlando
>>> > wrote:
>>>
>>> Arbitrary blobs are a powerful tool to circumvent limitations of an
>>> API, as well as other constraints which might be imposed for
>>> versioning or portability purposes.
>>> The parameters that should end up in such blob are typically
>>> specific for the target IPAM driver (to an extent they might even
>>> identify a specific driver to use), and therefore an API consumer
>>> who knows what backend is performing IPAM can surely leverage it.
>>>
>>> Therefore this would make a lot of sense, assuming API portability
>>> and not leaking backend details are not a concern.
>>> The Neutron team API & DB lieutenants will be able to provide more
>>> input on this regard.
>>>
>>> In this case other approaches such as a vendor specific extension
>>> are not a solution - assuming your granularity level is the
>>> allocation pool; indeed allocation pools are not first-class neutron
>>> resources, and it is not therefore possible to have APIs which
>>> associate vendor specific properties to allocation pools.
>>>
>>> Salvatore
>>>
>>> On 4 November 2015 at 21:46, Shraddha Pandhe
>>> 

[openstack-dev] [infra] Gerrit maintenance for project renames 2015-11-06 (tomorrow) at 20:00 UTC

2015-11-05 Thread Jeremy Stanley
Sorry for the short notice, but the Infra team will be taking Gerrit
offline briefly from 20:00 to 20:15 tomorrow/Friday, November 6 to
rename the following projects:

openstack-infra/puppet-openstack-health ->
openstack-infra/puppet-openstack_health
openstack/akanda-rug -> openstack/astara
openstack/akanda-appliance -> openstack/astara-appliance
openstack/akanda-horizon -> openstack/astara-horizon
openstack/akanda-neutron -> openstack/astara-neutron
openstack/akanda -> openstack-attic/akanda
openstack/akanda-appliance-builder ->
openstack-attic/akanda-appliance-builder

And we'll move these lingering projects which missed the first boat
from StackForgeville to OpenStack City:

stackforge/networking-bigswitch ->
openstack/networking-bigswitch
stackforge/compass-install -> openstack/compass-install

Also if change 237936 gets corrected in the next 18 hours or so, we
may rename:

openstack/networking-bagpipe-l2 -> openstack/networking-bagpipe

As always, feel free to follow up to this message or pop into
#openstack-infra on Freenode if you have any questions/concerns.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2015-11-05 10:09:49 -0800:
> Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> > Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> > > Can people help me work through the right set of tools for this use case 
> > > (has come up from several Operators) and map out a plan to implement it:
> > > 
> > > Large cloud with many users coming from multiple Federation sources has 
> > > a policy of providing a minimal setup for each user upon first visit to 
> > > the cloud:  Create a project for the user with a minimal quota, and 
> > > provide them a role assignment.
> > > 
> > > Here are the gaps, as I see it:
> > > 
> > > 1.  Keystone provides a notification that a user has logged in, but 
> > > there is nothing capable of executing on this notification at the 
> > > moment.  Only Ceilometer listens to Keystone notifications.
> > > 
> > > 2.  Keystone does not have a workflow engine, and should not be 
> > > auto-creating projects.  This is something that should be performed via 
> > > a Heat template, and Keystone does not know about Heat, nor should it.
> > > 
> > > 3.  The Mapping code is pretty static; it assumes a user entry or a 
> > > group entry in identity when creating a role assignment, and neither 
> > > will exist.
> > > 
> > > We can assume a special domain for Federated users to have per-user 
> > > projects.
> > > 
> > > So, let's assume a Heat Template that does the following:
> > > 
> > > 1. Creates a user in the per-user-projects domain
> > > 2. Assigns a role to the Federated user in that project
> > > 3. Sets the minimal quota for the user
> > > 4. Somehow notifies the user that the project has been set up.
> > > 
> > > This last probably assumes an email address from the Federated 
> > > assertion.  Otherwise, the user hits Horizon, gets a "not authenticated 
> > > for any projects" error, and is stumped.
> > > 
> > > How is quota assignment done in the other projects now?  What happens 
> > > when a project is created in Keystone?  Does that information get 
> > > transferred to the other services, and, if so, how?  Do most people use 
> > > a custom provisioning tool for this workflow?
> > > 
> > 
> > I know at Dreamhost we built some custom integration that was triggered
> > when someone turned on the Dreamcompute service in their account in our
> > existing user management system. That integration created the account in
> > keystone, set up a default network in neutron, etc. I've long thought we
> > needed a "new tenant creation" service of some sort, that sits outside
> > of our existing services and pokes them to do something when a new
> > tenant is established. Using heat as the implementation makes sense, for
> > things that heat can control, but we don't want keystone to depend on
> > heat and we don't want to bake such a specialized feature into heat
> > itself.
> > 
> 
> I agree, an automation piece that is built-in and easy to add to
> OpenStack would be great.
> 
> I do not agree that it should be Heat. Heat is for managing stacks that
> live on and change over time and thus need the complexity of the graph
> model Heat presents.
> 
> I'd actually say that Mistral or Ansible are better choices for this. A
> service which listens to the notification bus and triggered a workflow
> defined somewhere in either Ansible playbooks or Mistral's workflow
> language would simply run through the "skel" workflow for each user.
> 
> The actual workflow would probably almost always be somewhat site
> specific, but it would make sense for Keystone to include a few basic ones
> as "contrib" elements. For instance, the "notify the user" piece would
> likely be simplest if you just let the workflow tool send an email. But
> if your cloud has Zaqar, you may want to use that as well or instead.
> 
> Adding Mistral here to see if they have some thoughts on how this
> might work.
> 
> BTW, if this does form into a new project, I suggest naming it
> Skeleton[1]

Following the pattern of Kite's naming, I think a Dirigible is a
better way to get users into the cloud. :-)

Doug

> 
> [1] https://goo.gl/photos/EML6EPKeqRXioWfd8 (that was my front yard..)
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Outcome of distributed lock manager discussion @ the summit

2015-11-05 Thread Fox, Kevin M
You're assuming there are only 2 choices,
 zk or db+rabbit. I'm claiming both are suboptimal at present. A 3rd might be 
needed. Though even with its flaws, the db+rabbit choice has a few benefits too.

You also seem to assert that to support large clouds, the default must be 
something that can scale that large. While that would be nice, I don't think 
its a requirement if its overly burdensome on deployers of non huge clouds.

I don't have metrics, but I would be surprised if most deployments today 
(production + other) used 3 controllers with a full ha setup. I would guess 
that the majority are single controller setups. With those, the overhead of 
maintaining a whole dlm like zk seems like overkill. If db+rabbit would work 
for that one case, that would be one less thing to have to setup for an op. 
They already have to set up db+rabbit. Or even a dlm plugin of some sort, that 
won't scale, but would be very easy to deploy, and change out later when needed 
would be very useful.

etcd is starting to show up in a lot of other projects, and so it may be at 
sites already. Being able to support it may be less of a burden to operators 
than zk in some cases.

If your cloud grows to the point where the dlm choice really matters for 
scalability/correctness, then you probably have enough staff members to deal 
with adding in zk, and that's probably the right choice.

You can have multiple suggested things in addition to one default. Default to 
the thing that makes the most sense in the common most deployments, and make 
specific recommendations for certain scenarios. Like, "if greater than 100 
nodes, we strongly recommend using zk" or something to that effect.

Thanks,
Kevin



From: Clint Byrum [cl...@fewbar.com]
Sent: Thursday, November 05, 2015 11:44 AM
To: openstack-dev
Subject: Re: [openstack-dev] [all] Outcome of distributed lock manager  
discussion @ the summit

Excerpts from Fox, Kevin M's message of 2015-11-04 14:32:42 -0800:
> To clarify that statement a little more,
>
> Speaking only for myself as an op, I don't want to support yet one more 
> snowflake in a sea of snowflakes, that works differently than all the rest, 
> without a very good reason.
>
> Java has its own set of issues associated with the JVM. Care, and feeding 
> sorts of things. If we are to invest time/money/people in learning how to 
> properly maintain it, its easier to justify if its not just a one off for 
> just DLM,
>
> So I wouldn't go so far as to say we're vehemently opposed to java, just that 
> DLM on its own is probably not a strong enough feature all on its own to 
> justify requiring pulling in java. Its been only a very recent thing that you 
> could convince folks that DLM was needed at all. So either make java 
> optional, or find some other use cases that needs java badly enough that you 
> can make java a required component. I suspect some day searchlight might be 
> compelling enough for that, but not today.
>
> As for the default, the default should be a good reference. If most sites would 
> run with etcd or something else since Java isn't needed, then don't default 
> zookeeper on.
>

There are a number of reasons, but the most important are:

* Resilience in the face of failures - The current database+MQ based
  solutions are all custom made and have unknown characteristics when
  there are network partitions and node failures.
* Scalability - The current database+MQ solutions rely on polling the
  database and/or sending lots of heartbeat messages or even using the
  database to store heartbeat transactions. This scales fine for tiny
  clusters, but when every new node adds more churn to the MQ and
  database, this will (and has been observed to) be intractable.
* Tech debt - OpenStack is inventing lock solutions and then maintaining
  them. And service discovery solutions, and then maintaining them.
  Wouldn't you rather have better upgrade stories, more stability, more
  scale, and more features?

If those aren't compelling enough reasons to deploy a mature java service
like Zookeeper, I don't know what would be. But I do think using the
abstraction layer of tooz will at least allow us to move forward without
having to convince everybody everywhere that this is actually just the
path of least resistance.
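
To make the tooz point concrete, a minimal sketch of taking a distributed
lock through the abstraction layer; the backend URL and lock name are
illustrative, and swapping zookeeper:// for etcd:// or redis:// is exactly
the choice being debated:

from tooz import coordination

coordinator = coordination.get_coordinator(
    'zookeeper://127.0.0.1:2181', b'compute-node-1')
coordinator.start()

# only one member across the deployment holds this lock at a time
lock = coordinator.get_lock(b'resize-instance-abc123')
with lock:
    pass  # critical section

coordinator.stop()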

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [Mistral] Autoprovisioning, per-user projects, and Federation

2015-11-05 Thread Doug Hellmann
Excerpts from Adam Young's message of 2015-11-05 15:14:03 -0500:
> On 11/05/2015 01:09 PM, Clint Byrum wrote:
> > Excerpts from Doug Hellmann's message of 2015-11-05 09:51:41 -0800:
> >> Excerpts from Adam Young's message of 2015-11-05 12:34:12 -0500:
> >>> Can people help me work through the right set of tools for this use case
> >>> (has come up from several Operators) and map out a plan to implement it:
> >>>
> >>> Large cloud with many users coming from multiple Federation sources has
> >>> a policy of providing a minimal setup for each user upon first visit to
> >>> the cloud:  Create a project for the user with a minimal quota, and
> >>> provide them a role assignment.
> >>>
> >>> Here are the gaps, as I see it:
> >>>
> >>> 1.  Keystone provides a notification that a user has logged in, but
> >>> there is nothing capable of executing on this notification at the
> >>> moment.  Only Ceilometer listens to Keystone notifications.
> >>>
> >>> 2.  Keystone does not have a workflow engine, and should not be
> >>> auto-creating projects.  This is something that should be performed via
> >>> a Heat template, and Keystone does not know about Heat, nor should it.
> >>>
> >>> 3.  The Mapping code is pretty static; it assumes a user entry or a
> >>> group entry in identity when creating a role assignment, and neither
> >>> will exist.
> >>>
> >>> We can assume a special domain for Federated users to have per-user
> >>> projects.
> >>>
> >>> So, let's assume a Heat Template that does the following:
> >>>
> >>> 1. Creates a user in the per-user-projects domain
> >>> 2. Assigns a role to the Federated user in that project
> >>> 3. Sets the minimal quota for the user
> >>> 4. Somehow notifies the user that the project has been set up.
> >>>
> >>> This last probably assumes an email address from the Federated
> >>> assertion.  Otherwise, the user hits Horizon, gets a "not authenticated
> >>> for any projects" error, and is stumped.
> >>>
> >>> How is quota assignment done in the other projects now?  What happens
> >>> when a project is created in Keystone?  Does that information get
> >>> transferred to the other services, and, if so, how?  Do most people use
> >>> a custom provisioning tool for this workflow?
> >>>
> >> I know at Dreamhost we built some custom integration that was triggered
> >> when someone turned on the Dreamcompute service in their account in our
> >> existing user management system. That integration created the account in
> >> keystone, set up a default network in neutron, etc. I've long thought we
> >> needed a "new tenant creation" service of some sort, that sits outside
> >> of our existing services and pokes them to do something when a new
> >> tenant is established. Using heat as the implementation makes sense, for
> >> things that heat can control, but we don't want keystone to depend on
> >> heat and we don't want to bake such a specialized feature into heat
> >> itself.
> >>
> > I agree, an automation piece that is built-in and easy to add to
> > OpenStack would be great.
> >
> > I do not agree that it should be Heat. Heat is for managing stacks that
> > live on and change over time and thus need the complexity of the graph
> > model Heat presents.
> It would be a simpler template than most, but I'm trying to avoid adding 
> additional complexity here.
> 
> >
> > I'd actually say that Mistral or Ansible are better choices for this. A
> > service which listens to the notification bus and triggered a workflow
> > defined somewhere in either Ansible playbooks or Mistral's workflow
> > language would simply run through the "skel" workflow for each user.
> >
> > The actual workflow would probably almost always be somewhat site
> > specific, but it would make sense for Keystone to include a few basic ones
> > as "contrib" elements. For instance, the "notify the user" piece would
> > likely be simplest if you just let the workflow tool send an email. But
> > if your cloud has Zaqar, you may want to use that as well or instead.
> >
> > Adding Mistral here to see if they have some thoughts on how this
> > might work.
> >
> > BTW, if this does form into a new project, I suggest naming it
> > Skeleton[1]
> 
> I really do not want it to be a new project, but rather I think it 
> should be a mapping of the capabilities of the existing projects.
> 
> 
> We had discussed Mistral in Vancouver as the listener.  Would it make 
> sense to have Keystone notify Mistral, and then Mistral kick off the 
> workflow?

Mistral would need to catch the event and take action on behalf of the
new tenant with some sort of admin rights. Is that possible now?
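
For the sake of discussion, such a workflow might look roughly like this in
Mistral's v2 DSL; the keystone action name is an assumption based on
Mistral's auto-generated OpenStack actions, not verified:

version: '2.0'

provision_new_user:
  description: skeleton triggered by a keystone login notification
  input:
    - user_name
  tasks:
    create_project:
      # assumed auto-generated action wrapping keystoneclient
      action: keystone.projects_create name=<% $.user_name %>
      on-success:
        - notify_user
    notify_user:
      # placeholder for an email/Zaqar notification step
      action: std.echo output="project ready for <% $.user_name %>"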

> 
> The one issue I waffle on is whether Keystone itself should be 
> responsible for the Keystone-specific stuff, as part of the initial log 
> in, and thus give an immediate response to the user upon first 
> authentication.

For the federation case that may make sense. For setting up a new
tenant or user, it may not.

> 
> 
> Alternatively, we could provide a 

Re: [openstack-dev] [manila] manila-api failure in liberty

2015-11-05 Thread Valeriy Ponomaryov
Hello Igor,

The mentioned error indicates that the file "etc/manila/api-paste.ini" was not
updated with the one from the new version of Manila. This file depends on the
version of the project and can differ from release to release. So, just copy
the Liberty version of this file to "/etc/manila/api-paste.ini" and then run
the Liberty Manila API service.
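
In other words, something like this (paths assume a checked-out Liberty
manila tree; adjust the paths and the restart command to your deployment):

# copy the Liberty version of the paste config into place
cp manila/etc/manila/api-paste.ini /etc/manila/api-paste.ini
# then restart the API service, however it is managed on your system
service openstack-manila-api restart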

-- 
Kind Regards
Valeriy Ponomaryov
www.mirantis.com
vponomar...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] What's Up, Doc? 6 November 2015

2015-11-05 Thread Lana Brindley
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi everyone,

Wow! What a great Summit! And isn't Tokyo a truly beautiful and amazing city? 
Thank you so much to the Japanese Stackers who hosted us, and to everyone who 
came along to the docs sessions and helped us hammer out a great plan for 
Mitaka. I'm very excited about this release!

This week, I've been catching up with everything that happened during Summit, 
and also working on outreach tasks. Today, I recorded an interview with 
Foundation about the project, and I've also written a blog post on the same 
topic, which will be published soon. You might also like to check out the 
Superuser interview Anne and I did about the docs while we were in Tokyo: 
http://superuser.openstack.org/articles/openstack-documentation-why-it-s-important-and-how-you-can-contribute

== Progress towards Mitaka ==

152 days to go!

77 bugs closed so far for this release.

API Docs
* The API docs will be switched to Swagger: 
http://specs.openstack.org/openstack/docs-specs/specs/liberty/api-site.html

DocImpact
* I've removed the WIP from this blueprint, and will be working on this from 
next week: 
https://blueprints.launchpad.net/openstack-manuals/+spec/review-docimpact

RST Conversions
* Arch Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-rst
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* Ops Guide
** https://blueprints.launchpad.net/openstack-manuals/+spec/ops-guide-rst
** Lana will reach out to O'Reilly to discuss the printed book before this work 
begins
* Config Ref
** Thanks for all the offers of help on this one! Please contact the Config Ref 
Speciality team: https://wiki.openstack.org/wiki/Documentation/ConfigRef

Reorganisations
* Arch Guide
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/archguide-mitaka-reorg
** Contact the Ops Guide Speciality team: 
https://wiki.openstack.org/wiki/Documentation/OpsGuide
* User Guides
** 
https://blueprints.launchpad.net/openstack-manuals/+spec/user-guides-reorganised
** Contact the User Guide Speciality team: 
https://wiki.openstack.org/wiki/User_Guides

Training
* Labs
** https://blueprints.launchpad.net/openstack-manuals/+spec/training-labs
* Guides
** Upstream University & 'core' component updates, EOL Upstream Wiki page.

Document openstack-doc-tools
* Need volunteers for this!

Reorganise index page
* The API docs have already moved off the front page
* We need volunteers to look at the organisation of this page and to write 
one-sentence summaries of each book

== Doc team meeting ==

Meetings will kick off again next week with the APAC meeting:

APAC: Wednesday 11 November, 00:30:00 UTC
US: Wednesday 18 November, 14:00 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

- --

Keep on doc'ing!
Lana


- -- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com
-BEGIN PGP SIGNATURE-
Version: GnuPG v2
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCAAGBQJWPC4TAAoJELppzVb4+KUynpUH/30Y6pv7Zrse+YM1ki2pqLqi
dp0f9RysJQkvXOA7OWy48kWLWXgMF0/hq1DIhrZ9AlsUCGOGC04/YVGNyaCAxMkx
TTpyi6gJWl9Fiwbrc6k63MPx7OMFDcGu8KQow7tCBewH0jYngiJeP/mxIP6AnhBy
SHVsZZ4OG99w/xZyUe8rVGkpXLFUfow8u0r4hCLlFGSUxLD3jz8ABp2HX7mf3ICi
0u1rgxD08lWSHPHRmhzUZ+kx7uW1ZY0UWiX1rsyTU690dsYbYeJSUcY+Saf2md52
eIGk7yevZaYczXvn9vo/rfwZCc4G5jyRFp55yR/BfD/2NVsjmt2vvguWTnDywEE=
=Y0ik
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Migration state machine proposal.

2015-11-05 Thread Chris Friesen

On 11/05/2015 08:33 AM, Andrew Laski wrote:

On 11/05/15 at 01:28pm, Murray, Paul (HP Cloud) wrote:



Or more specifically, the migrate and resize API actions both call the resize
function in the compute api. As Ed said, they are basically the same behind
the scenes. (But the API difference is important.)


Can you be a little more specific on what API difference is important to you?
There are two differences currently between migrate and resize in the API:

1. There is a different policy check, but this only really protects the next 
bit.

2. Resize passes in a new flavor and migration does not.

Both actions result in an instance being scheduled to a new host.  If they were
consolidated into a single action with a policy check to enforce that users
specified a new flavor and admins could leave that off, would that be problematic
for you?



To me, the fact that resize and cold migration share the same implementation is 
just that, an implementation detail.


From the outside they are different things...one is "take this instance and 
move it somewhere else", and the other "take this instance and change its 
resource profile".


To me, the external API would make more sense as:

1) resize

2) migrate (with option of cold or live, and with option to specify a 
destination, and with option to override the scheduler if the specified 
destination doesn't pass filters)



And while we're talking, I don't understand why "allow_resize_to_same_host" 
defaults to False.  The comments in https://bugs.launchpad.net/nova/+bug/1251266 
say that it's not intended to be used in production, but doesn't give a 
rationale for that statement.  If you're using local storage and you just want 
to add some more CPUs/RAM to the instance, wouldn't it be beneficial to avoid 
the need to copy the rootfs?


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] attaching and detaching volumes in the API

2015-11-05 Thread Chris Friesen

On 11/05/2015 12:13 PM, Murray, Paul (HP Cloud) wrote:


As part of this spec: https://review.openstack.org/#/c/221732/

I want to attach/detach volumes (and so manipulate block device mappings) when
an instance is not on any compute node (actually when shelved). Normally this
happens in a function on the compute manager synchronized on the instance uuid.
When an instance is in the shelved_offloaded state it is not on a compute host,
so the operations have to be done at the API (an existing example is when the
instance deleted in this state – the cleanup is done in the API but is not
synchronized in this case).

One option I can see is using task states, using the expected_task_state parameter
in instance.save() to control state transitions. In the API this makes sense, as
the calls will be synchronous, so if an operation cannot be done it can be
reported back to the user in an error return. I'm sure there must be some other
options.


Whatever you do requires a single synchronization point.  If we can't use 
nova-compute, the only other option is the database.   (Since we don't yet have 
a DLM.)
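
For reference, a rough sketch of the compare-and-swap style update being
discussed, as it might appear inside the API method; the task state name
is illustrative:

# claim the instance: save() raises UnexpectedTaskStateError if another
# request changed task_state since we read it, so only one caller wins
instance.task_state = 'attaching_volume'  # illustrative state name
instance.save(expected_task_state=[None])
try:
    pass  # manipulate the block device mappings here
finally:
    instance.task_state = None
    instance.save()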


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][policy] Exposing hypervisor details to users

2015-11-05 Thread Tony Breeds
Hello all,
I came across [1], which is notionally an ironic bug in that horizon presents
VM operations (like suspend) to users.  Clearly these options don't make sense
for ironic, which can be confusing.

There is a horizon fix that just disables migrate/suspend and other functions
if the operator sets a flag saying ironic is present.  Clearly this is suboptimal
for a mixed hypervisor environment.

The data needed (hypervisor type) is currently available only to admins; a quick
hack to remove this policy restriction is functional.

There are a few ways to solve this.

 1. Change the default from "rule:admin_api" to "" (for 
os_compute_api:os-extended-server-attributes and
os_compute_api:os-hypervisors), and set a list of values we're
comfortable exposing to the user (hypervisor_type and
hypervisor_hostname).  So a user can get the hypervisor_name as part of
the instance details and get the hypervisor_type from the
os-hypervisors.  This would work for horizon but increases the API load
on nova and kinda implies that horizon would have to cache the data and
open-code assumptions that hypervisor_type can/can't do action $x

 2. Include the hypervisor_type with the instance data.  This would place the 
burden on nova.  It makes looking up instance details slightly more
complex but doesn't result in additional API queries, nor caching
overhead in horizon.  This has the same opencoding issues as Option 1.

 3. Define a service user and have horizon look up the hypervisor details via 
that role.  This has all the drawbacks of option 1 and I'm struggling to
think of many benefits.

 4. Create a capabilities API of some description, that can be queried so that
consumers (horizon) can know

 5. Some other way for users to know what kind of hypervisor they're on. Perhaps
there is an established image property that would work here?

If we're okay with exposing the hypervisor_type to users, then #2 is pretty
quick and easy, and could be done in Mitaka.  Option 4 is probably the best
long term solution but I think is best done in 'N' as it needs lots of
discussion.
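
For option 2, the server detail response would simply grow one attribute,
along these lines (the attribute name is invented for illustration, though
the OS-EXT-SRV-ATTR namespace is where extended server attributes already
live):

{
    "server": {
        "name": "example-server",
        "status": "ACTIVE",
        "OS-EXT-SRV-ATTR:hypervisor_type": "ironic"
    }
}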

Yours Tony.

[1] https://bugs.launchpad.net/nova/+bug/1483639


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Senllin][Magnum]Add container type profile to Senlin

2015-11-05 Thread Haiwei Xu
Hi all,

As we know, Senlin currently supports two kinds of profiles: Nova instance and
Heat stack; of course, we want to support containers too. After coming back from
the summit, I discussed this with a Magnum core, yuanying, and we reached an
agreement on adding container type profile support to Senlin. Maybe you guys
have already thought about this idea.
Our general idea is that Senlin makes a request to the Docker API to add/remove
a container to/from a Magnum bay. The container will be shown in the Senlin
node-list like Nova instances and Heat stacks, and can also be added to a
cluster or used for auto-scaling.
Here is an example profile file:

type: os.magnum.swarm.container
version: 1.0
properties:
  bay_id: swarm_bay
  compose_file: docker-compose.yaml

or:

type: os.magnum.kubernetes.container
version: 1.0
properties:
  bay_id: kubernetes_bay
  manifest: replication_controller.yaml

We will support two kinds of container creation.
What are your thoughts about this? Any comments are welcome.

Regards,
Xuhaiwei


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo_messaging] Regarding " WARNING [oslo_messaging.server] wait() should have been called after stop() as wait() ...

2015-11-05 Thread Nader Lahouti
Thanks Gord for the explanation.

Nader.

On Thu, Nov 5, 2015 at 11:49 AM, gord chung  wrote:

> my understanding is that if you are calling stop()/wait() your intention
> is to shut down the listener. if you intend on keeping an active consumer
> on the queue, you shouldn't be calling either stop() or wait(), just start.
>
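
A minimal sketch of that lifecycle with oslo.messaging; the endpoint class
and topic are placeholders:

import oslo_messaging
from oslo_config import cfg

class NotificationEndpoint(object):
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print(event_type, payload)  # handle the notification

transport = oslo_messaging.get_notification_transport(cfg.CONF)
targets = [oslo_messaging.Target(topic='notifications')]
listener = oslo_messaging.get_notification_listener(
    transport, targets, [NotificationEndpoint()])

listener.start()
# ... consume until shutdown is requested ...
listener.stop()  # stop consuming
listener.wait()  # then wait for in-flight handlers to finish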
>
> On 05/11/2015 2:07 PM, Nader Lahouti wrote:
>
>
> Thanks for the pointer, I'll look into it. But one question, by calling
> stop() and then wait(), does it mean the application has to call start()
> again after the wait()? to process more messages?
>
> I am also using
> http://docs.openstack.org/developer/oslo.messaging/server.html for the
> RPC server
> Does it mean there has to be stop() and then wait() there as well?
>
>
> Thanks,
> Nader.
>
>
>
> On Thu, Nov 5, 2015 at 10:19 AM, gord chung  wrote:
>
>>
>>
>> On 05/11/2015 1:06 PM, Nader Lahouti wrote:
>>
>>> Hi Doug,
>>>
>>> I have an app that listens to notifications and used the info provided in
>>>
>>> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
>>>
>>>
>>> Basically I create
>>> 1. NotificationEndpoints(object):
>>>
>>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L89
>>> 2. NotifcationListener(object):
>>>
>>> https://github.com/openstack/networking-cisco/blob/master/networking_cisco/apps/saf/common/rpc.py#L100
>>> 3. and call start() and  then wait()
>>>
>>
>> the correct usage is to call stop() before wait()[1]. for reference on
>> how to use listeners, you can see Ceilometer[2]
>>
>> [1]
>> http://docs.openstack.org/developer/oslo.messaging/notification_listener.html
>> [2]
>> https://github.com/openstack/ceilometer/blob/master/ceilometer/utils.py#L250
>>
>> --
>> gord
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> --
> gord
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][api] Pagination in the API

2015-11-05 Thread Zhenyu Zheng
So let's work on the API WG guideline first; looking forward to getting it done
soon, as pagination is actually very useful in production deployments.
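
For reference, the marker/limit style that several established OpenStack
APIs already use looks roughly like this (IDs made up):

    GET /v2.1/servers?limit=100
        -> first 100 servers, plus a "servers_links" entry with rel="next"
    GET /v2.1/servers?limit=100&marker=<id-of-last-server>
        -> the next page, starting after that server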

On Thu, Nov 5, 2015 at 11:16 PM, Everett Toews 
wrote:

> On Nov 5, 2015, at 5:44 AM, John Garbutt  wrote:
>
>
> On 5 November 2015 at 09:46, Richard Jones  wrote:
>
> As a consumer of such APIs on the Horizon side, I'm all for consistency in
> pagination, and more of it, so yes please!
>
> On 5 November 2015 at 13:24, Tony Breeds  wrote:
>
>
> On Thu, Nov 05, 2015 at 01:09:36PM +1100, Tony Breeds wrote:
>
> Hi All,
>Around the middle of October a spec [1] was uploaded to add
> pagination
> support to the os-hypervisors API.  While I recognize the use case it
> seemed
> like adding another pagination implementation wasn't an awesome idea.
>
> Today I see 3 more requests to add pagination to APIs [2]
>
> Perhaps I'm over thinking it but should we do something more strategic
> rather
> than scattering "add pagination here".
>
>
> +1
>
> The plan, as I understand it, is to first finish off this API WG guideline:
>
> http://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
>
>
>
> An attempt at an API guideline for pagination is here [1] but hasn't
> received any updates in over a month, which can be understandable as
> sometimes other work takes precedence.
>
> Perhaps we can get that guideline moving again?
>
> If it's becoming difficult to reach agreement on that approach in the
> guideline, it could be worthwhile to take a step back and do some analysis
> on the way pagination is done in the more established APIs. I've found that
> such analysis can be very helpful as you're moving forward from a known
> state.
>
> The place for that analysis is in Current Design [2] by filling in the
> Pagination page. You can find many examples of such analysis from the
> Current Design like Sorting [3].
>
> Cheers,
> Everett
>
>
> [1] https://review.openstack.org/#/c/190743/
> [2] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design
> [3]
> https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Sorting
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][kuryr] network control plane (libkv role)

2015-11-05 Thread Taku Fukushima
Hi Vikas,

I thought the "capability" affected the propagation of the network state
across nodes as well. However, in my environment, where I tried Consul and
ZooKeeper, I observed that a new network created on one host is displayed on
another host when I run "sudo docker network ls", even if I set the
capability to "local", which is the current default. So I'm just wondering
what this capability means. The spec doesn't say much about it.

https://github.com/docker/libnetwork/blob/8d03e80f21c2f21a792efbd49509f487da0d89cc/docs/remote.md#set-capability

I saw your bug report describing that the network state propagation didn't
happen appropriately. I also experienced the issue, and I'd say it is a
configuration issue. Please try with the following options. I'm putting
them in /etc/default/docker and managing the docker daemon through the
"service" command.

DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H :2376 --cluster-store=consul://192.168.11.14:8500 --cluster-advertise=192.168.11.18:2376"

The network is the only user-facing entity in libnetwork for now, since the
concept of the "service" was abandoned in the stable Docker 1.9.0 release,
and it is shared by libnetwork through libkv across multiple hosts. Endpoint
information is stored as a part of the network information, as you
documented in the devref, and the network is all we need so far.

https://github.com/openstack/kuryr/blob/d1f4272d6b6339686a7e002f8af93320f5430e43/doc/source/devref/libnetwork_remote_driver_design.rst#libnetwork-user-workflow-with-kuryr-as-remote-network-driver---host-networking

Regarding changing the capability to "global", it totally makes sense, and
we should change it, even though the networks would be shared among multiple
hosts anyway.
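
In kuryr terms this should be a one-line change in the response to
libnetwork's capability call. A minimal sketch, assuming a Flask-style
handler like the one kuryr uses (the route name comes from the libnetwork
remote driver spec; everything else is illustrative):

    import flask

    app = flask.Flask(__name__)

    @app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
    def get_capabilities():
        # Advertise global scope so libnetwork treats networks created
        # through this driver as cluster-wide rather than host-local.
        return flask.jsonify({'Scope': 'global'})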

Best regards,
Taku Fukushima


On Thu, Nov 5, 2015 at 8:39 PM, Vikas Choudhary 
wrote:

> Thanks Toni.
> On 5 Nov 2015 16:02, "Antoni Segura Puimedon" <
> toni+openstac...@midokura.com> wrote:
>
>>
>>
>> On Thu, Nov 5, 2015 at 10:47 AM, Vikas Choudhary <
>> choudharyvika...@gmail.com> wrote:
>>
>>> ++ [Neutron] tag
>>>
>>>
>>> On Thu, Nov 5, 2015 at 10:40 AM, Vikas Choudhary <
>>> choudharyvika...@gmail.com> wrote:
>>>
 Hi all,

 By network control plane I specifically mean sharing network state
 across docker daemons sitting on different hosts/nova_vms in multi-host
 networking.

 libnetwork provides flexibility where vendors have a choice between
 network control plane to be handled by libnetwork(libkv) or remote driver
 itself OOB. Vendor can choose to "mute" libnetwork/libkv by advertising
 remote driver capability as "local".

 "local" is our current default "capability" configuration in kuryr.

 I have following queries:
 1. Does it mean Kuryr is taking responsibility for sharing network state
 across docker daemons? If yes, a network created on one docker host should be
 visible in "docker network ls" on other hosts. To achieve this, I guess the
 kuryr driver will need the help of some distributed data-store like consul,
 so that the kuryr driver on other hosts could create the network in docker
 there. Is this correct?

 2. Why can we not set the default scope to "global" and let libkv do the
 network state sync work?

 Thoughts?

>>>
>> Hi Vikas,
>>
>> Thanks for raising this. As part of the current work on enabling
>> multi-node we should be moving the default to 'global'.
>>
>>
>>>
 Regards
 -Vikas Choudhary

>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Deprecation of OFAgent in Mitaka

2015-11-05 Thread fumihiko kakuma
Hi,

The Ryu team added OFAgent as an ML2 mechanism driver that implements
Python-native OpenFlow handling using the Ryu library.

In Liberty, the OVS ML2 driver gained the "native" of_interface
driver, which uses the Ryu library to communicate with OVS switches.
The Ryu team believes this is a better solution than the OFAgent driver.

We therefore plan to deprecate OFAgent in Mitaka and remove
it in the N cycle.
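
For those who want to try the replacement: the native implementation is
selected via the OVS agent's of_interface option. A minimal sketch of the
relevant agent configuration excerpt, assuming the usual [ovs] section name:

    [ovs]
    of_interface = native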


Thanks,
fumihiko kakuma

-- 
fumihiko kakuma 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-05 Thread Tony Breeds
Hello all,

I'll start by acknowledging that this is a big and complex issue and I
do not claim to be across all the view points, nor do I claim to be
particularly persuasive ;P

Having stated that, I'd like to seek constructive feedback on the idea of
keeping Juno around for a little longer.  During the summit I spoke to a
number of operators, vendors and developers on this topic.  There was some
support and some "That's crazy pants!" responses.  I clearly didn't make it
around to everyone, hence this email.

Acknowledging my affiliation/bias:  I work for Rackspace in the private
cloud team.  We support a number of customers currently running Juno that are,
for a variety of reasons, challenged by the Kilo upgrade.

Here is a summary of the main points that have come up in my conversations,
both for and against.

Keep Juno:
 * According to the current user survey[1] Icehouse still has the
   biggest install base in production clouds.  Juno is second, which makes
   sense. If we EOL Juno this month that means ~75% of production clouds
   will be running an EOL'd release.  Clearly many of these operators have
   support contracts from their vendor, so those operators won't be left 
   completely adrift, but I believe it's the vendors that benefit from keeping
   Juno around. By working together *in the community* we'll see the best
   results.

 * We only recently EOL'd Icehouse[2].  Sure it was well communicated, but we
   still have a huge Icehouse/Juno install base.

For me this is pretty compelling, but for balance ...

Keep the current plan and EOL Juno Real Soon Now:
 * There is also no ignoring the elephant in the room that with HP stepping
   back from public cloud there are questions about our CI capacity, and
   keeping Juno will have an impact on that critical resource.

 * Juno (and other stable/*) resources have a non-zero impact on *every*
   project, esp. @infra and release management.  We need to ensure this
   isn't too much of a burden.  This mostly means we need enough trustworthy
   volunteers.

 * Juno is also tied up with Python 2.6 support. When
   Juno goes, so will Python 2.6 which is a happy feeling for a number of
   people, and more importantly reduces complexity in our project
   infrastructure.

 * Even if we keep Juno for 6 months or 1 year, that doesn't help vendors
   that are "on the hook" for multiple years of support, so for that case
   we're really only delaying the inevitable.

 * Some number of the production clouds may never migrate from $version, in
   which case longer support for Juno isn't going to help them.


I'm sure these questions were well discussed at the YVR summit where we set
the EOL date for Juno, but I was new then :) What I'm asking is:

1) Is it even possible to keep Juno alive (is the impact on the project as
   a whole acceptable)?

Assuming a positive answer:

2) Who's going to do the work?
- Me, who else?
3) What do we do if people don't actually do the work but we as a community
   have made a commitment?
4) If we keep Juno alive for $some_time, does that imply we also bump the
   life cycle on Kilo, Liberty, Mitaka, etc.?

Yours Tony.

[1] http://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf
(page 20)
[2] http://git.openstack.org/cgit/openstack/nova/tag/?h=icehouse-eol



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Learning to Debug the Gate

2015-11-05 Thread Anita Kuno
On 11/03/2015 05:30 PM, Anita Kuno wrote:
> On 11/02/2015 12:39 PM, Anita Kuno wrote:
>> On 10/29/2015 10:42 PM, Anita Kuno wrote:
>>> On 10/29/2015 08:27 AM, Anita Kuno wrote:
 On 10/28/2015 12:14 AM, Matt Riedemann wrote:
>
>
> On 10/27/2015 4:08 AM, Anita Kuno wrote:
>> Learning how to debug the gate was identified as a theme at the
>> "Establish Key Themes for the Mitaka Cycle" cross-project session:
>> https://etherpad.openstack.org/p/mitaka-crossproject-themes
>>
>> I agreed to take on this item and facilitate the process.
>>
>> Part one of the conversation includes referencing this video created by
>> Sean Dague and Dan Smith:
>> https://www.youtube.com/watch?v=fowBDdLGBlU
>>
>> Please consume this as you are able.
>>
>> Other suggestions for how to build on this resource were mentioned and
>> will be coming in the future but this was an easy, actionable first step.
>>
>> Thank you,
>> Anita.
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/tales-from-the-gate-how-debugging-the-gate-helps-your-enterprise
>
>

 The source for the definition of "the gate":
 http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul/layout.yaml#n34

 Thanks for following along,
 Anita.

>>>
>>> This is the status page showing the status of our running jobs,
>>> including patches in the gate pipeline: http://status.openstack.org/zuul/
>>>
>>> Thank you,
>>> Anita.
>>>
>>
>> This is a simulation of how the gate tests patches:
>> http://docs.openstack.org/infra/publications/zuul/#%2818%29
>>
>> Click in the browser window to advance the simulation.
>>
>> Thank you,
>> Anita.
>>
> 
> Here is a presentation that uses the slide deck linked above, I
> recommend watching: https://www.youtube.com/watch?v=WDoSCGPiFDQ
> 
> Thank you,
> Anita.
> 

Three links in this edition of Learning to Debug the Gate:

The view that tracks our top bugs:
http://status.openstack.org/elastic-recheck/

The logstash queries that create the above view:
http://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/queries

Logstash itself, where you too can practice creating queries:
http://logstash.openstack.org

Note: in logstash the query is the transferable piece of information.
Filters can help you create a query, but they do not populate one. The
information in the query bar is what is important here.
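
For example, a query for a (made-up) failure signature might look like:

    message:"Timed out waiting for a reply to message ID" AND tags:"screen-n-api.txt" AND build_status:"FAILURE"

The message text, filename tag, and build status above are all illustrative;
the real, curated queries live in the elastic-recheck repository linked above.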

Practice making some queries of your own.

Thanks for reading,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] distributing work using work items - call for participation in distributed blueprint development

2015-11-05 Thread Steven Dake (stdake)
HI folks,

Sam Yaple had suggested we try using Work Items to track our work rather than
Etherpad for complex distributed tasks.  I've picked a pretty easy blueprint
which should be mostly one-line patches where everyone can chip in.  The work
should be easy even for new contributors to the project, so please feel free
to sign up.  If you're unable to set your name in the work items field, ping
sdake on irc to be added to the kolla-drivers group.

The blueprint is:
https://blueprints.launchpad.net/kolla/+spec/drop-root

The goal of the blueprint is to run the processes in each container as the
correct UID instead of root (except where the container requires root to do
its job).  These are easy to pick out in the ansible files by the
privileged: true flag.  The real goal of this blueprint is to test whether this
new work-items workflow is faster and more effective than etherpad (while also
delivering this essential security work for mitaka-1, deadline December 4th).
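
To give a feel for the patch size, a typical change should be a one-line
Dockerfile addition along these lines (the path and user name here are
hypothetical):

    # docker/glance/glance-api/Dockerfile: run the service as its own user
    USER glance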

Please take a moment to sign up for 1-4 container sets.  To do that, click the
yellow checkbox in the work items field in launchpad, and then replace the
"unassigned" entry next to the work item with your irc nickname.  I'd like this
work to finish as rapidly as possible, so if you assign yourself to a container
set, please try to knock out the work by next Friday (November 13th).

Regards,
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >