Re: [openstack-dev] [nova][neutron] New BP for live migration with direct pci passthru

2016-02-15 Thread Xie, Xianshan
Hi, Fawad,


> Can you please share the link?
https://blueprints.launchpad.net/nova/+spec/direct-pci-passthrough-live-migration

Thanks in advance.


Best regards,
xiexs

From: Fawad Khaliq [mailto:fa...@plumgrid.com]
Sent: Tuesday, February 16, 2016 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] New BP for live migration with 
direct pci passthru

On Mon, Feb 1, 2016 at 3:25 PM, Xie, Xianshan wrote:
Hi, all,
  I have registered a new BP about the live migration with a direct pci 
passthru device.
  Could you please help me to review it? Thanks in advance.

Can you please share the link?


The details are as follows:
--
SR-IOV has been supported for a long while. In the community's view, pci
passthru with macvtap can probably be live migrated, but direct pci passthru
seems hard to migrate, as the passthru VF is totally controlled by
the VM, so some of its internal state may be unknown to the hypervisor.

But we think the direct pci passthru model can also be live migrated, using the
following combination of operations based on the enhanced
QEMU Guest Agent (QGA), which is already supported by nova
(a rough sketch of the guest-side steps follows the reference below):
   1) Bond the direct pci passthru NIC with a virtual NIC.
      This keeps network connectivity during the live migration.
   2) Release the direct pci passthru NIC from the bond.
   3) Hot-unplug the direct pci passthru NIC.
   4) Live-migrate the guest with the virtual NIC.
   5) Hot-plug the direct pci passthru NIC on the target host.
   6) Re-enslave the direct pci passthru NIC into the bond.

More information about this concept can be found in [1].
[1]https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
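
For illustration only, here is a minimal sketch of how the guest-side steps
1), 2) and 6) might be driven from the host through QGA's generic guest-exec
command via libvirt-python. The domain name, interface names and bond device
are made-up assumptions, and the enhanced QGA in the BP may expose richer
commands than this:

    import json

    import libvirt
    import libvirt_qemu


    def guest_exec(dom, path, args):
        # Run a command inside the guest through the QGA guest-exec command.
        cmd = json.dumps({
            "execute": "guest-exec",
            "arguments": {"path": path, "arg": args, "capture-output": True},
        })
        return libvirt_qemu.qemuAgentCommand(dom, cmd, 10, 0)


    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("guest-under-migration")  # assumed domain name

    # Step 1: enslave the passthru VF (eth0) and the virtual NIC (eth1)
    # into a bond so connectivity survives the migration.
    guest_exec(dom, "/sbin/ip", ["link", "set", "eth0", "master", "bond0"])
    guest_exec(dom, "/sbin/ip", ["link", "set", "eth1", "master", "bond0"])

    # Step 2: release the passthru VF from the bond before hot-unplug.
    guest_exec(dom, "/sbin/ip", ["link", "set", "eth0", "nomaster"])

    # ... steps 3-5: hot-unplug, live-migrate, hot-plug (via nova/libvirt) ...

    # Step 6: re-enslave the passthru VF on the target host.
    guest_exec(dom, "/sbin/ip", ["link", "set", "eth0", "master", "bond0"])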
--

Best regards,
Xiexs



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] "No Open Core" in 2016

2016-02-15 Thread Sean M. Collins
Thomas Goirand wrote:
> Oh, that, and ... not using CassandraDB. And yes, this thread is a good
> place to have this topic. I'm not sure who replied to me that this thread
> wasn't the place to discuss it: I respectfully disagree, since it's
> another major blocker, IMO at least as important as using a free
> software CDN solution.

Let's handle the policy implications discussed in this thread before we
dive into the "don't use this component that I dislike" bikeshed.
Reading the thread, it appears that we've made good progress on building
consensus towards having Poppy consider an open source CDN as the
"reference implementation" (to use some Neutron parlance).

Then we can bikeshed about how good/bad the components used in the
reference implementation are. Later. The point being, there is an open
source solution that will be used to flesh out a true vendor-neutral API
(as I understand Mike Perez's position, and agree with!).

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage meeting tomorrow

2016-02-15 Thread Afek, Ifat (Nokia - IL)
Hi,

We will have Vitrage weekly meeting tomorrow, Wednesday at 9:00 UTC, on 
#openstack-meeting-3 channel.

Agenda:

* Current status and progress
* Review action items
* Next steps 
* Open Discussion

You are welcome to join.

Thanks, 
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Integrating physical appliance into virtual infrastructure

2016-02-15 Thread Fawad Khaliq
On Mon, Feb 1, 2016 at 10:00 PM, Vijay Venkatachalam <
vijay.venkatacha...@citrix.com> wrote:

>
>
> L2GW seems like a good option for bridging/linking/integrating physical
> appliances which do not support overlay technology (say VXLAN) natively.
>
>
>
> In my case the physical appliance supports VXLAN natively, meaning it can
> act as a VTEP. The appliance is capable of decapsulating packets that are
> received and encapsulating packets that are sent (looking at the forwarding
> table).
>
>
>
> Now we want to add the capability in the middleware/controller so that
> forwarding tables in the appliance can be populated, and also let the rest
> of the infrastructure know about the physical appliance (VTEP) and its L2 info.
>
>
>
> Is it possible to achieve this?
>

You could have your own southbound implementation using l2gw [1].

[1] https://review.openstack.org/#/c/206638/
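
For flavour, a rough sketch of the northbound side of that workflow against
the l2gw REST extension. The endpoint URLs, payload fields, credentials and
device names below are best-effort assumptions (check the networking-l2gw
docs for your release); the southbound piece that actually programs the VTEP
over OVSDB is what [1] would let you replace:

    from keystoneauth1 import identity, session

    auth = identity.Password(auth_url="http://controller:5000/v3",
                             username="admin", password="secret",
                             project_name="admin",
                             user_domain_id="default",
                             project_domain_id="default")
    sess = session.Session(auth=auth)
    neutron = "http://controller:9696/v2.0"  # assumed Neutron endpoint

    # 1) Register the physical appliance (the VTEP) and its ports.
    gw = sess.post(neutron + "/l2-gateways",
                   json={"l2_gateway": {
                       "name": "appliance-gw",
                       "devices": [{"device_name": "vtep-switch-1",
                                    "interfaces": [{"name": "port1"}]}]}}).json()

    # 2) Bind the gateway to a tenant network; the l2gw plugin/agent then
    #    populates the VTEP's forwarding tables over OVSDB.
    sess.post(neutron + "/l2-gateway-connections",
              json={"l2_gateway_connection": {
                  "l2_gateway_id": gw["l2_gateway"]["id"],
                  "network_id": "NETWORK-UUID",
                  "segmentation_id": "100"}})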

>
>
> Thanks,
>
> Vijay V.
>
>
>
>
>
>
>
> *From:* Gal Sagie [mailto:gal.sa...@gmail.com]
> *Sent:* 01 February 2016 19:38
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [Neutron] Integrating physical appliance
> into virtual infrastructure
>
>
>
> There is a project that aims at solving your use case (at least from a
> general view).
>
> It's called L2GW and it uses the OVSDB hardware VTEP schema (which is
> supported by many physical switching appliances).
>
>
>
> Some information: https://wiki.openstack.org/wiki/Neutron/L2-GW
>
>
>
> There are also other possible solutions, depending on what you are trying to
> do and what the physical appliance's job is.
>
>
>
>
>
>
>
> On Mon, Feb 1, 2016 at 3:44 PM, Vijay Venkatachalam <
> vijay.venkatacha...@citrix.com> wrote:
>
> Hi ,
>
>
>
> How to integrate a physical appliance into the virtual OpenStack
> infrastructure (with L2 population)? Can you please point me to any
> relevant material.
>
>
>
> We want to add the capability to “properly” schedule the port on the
> physical appliance, so that the rest of the virtual infrastructure knows
> that a new port is scheduled in the physical appliance.  How to do this?
>
>
>
> We manage the appliance through a middleware. Today, when it creates a
> neutron port that is to be hosted on the physical appliance, the port is
> dangling.  Meaning, the virtual infrastructure does not know where this
> port is hosted/implemented. How to fix this?
>
>
>
> Also, we want the physical appliance plugged into the L2 population mechanism.
> It looks like the L2 population driver distributes L2 info to all virtual
> infrastructure nodes where a neutron agent is running. Can we leverage this
> framework? We don’t want to run the neutron agent on the physical
> appliance; can it run in the middleware?
>
>
>
> Thanks,
>
> Vijay V.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
>
> --
>
> Best Regards ,
>
> The G.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][taas] l2 gateway in OpenStack

2016-02-15 Thread Fawad Khaliq
Hi TaaS folks,

As discussed in the last IRC meeting, here is some information on
l2-gateway that might be useful.

The project resides here [1]. There was a Neutron lightning talk at Paris
summit [2] and another detailed session at Vancouver [3]. I hope this will
help understand how l2 gateway works.

[1] https://github.com/openstack/networking-l2gw
[2] https://www.youtube.com/watch?v=NHQRAKvOr-U
[3] https://www.youtube.com/watch?v=74Wfr4myf5k

Thanks,
Fawad Khaliq
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-02-15 Thread John Griffith
On Mon, Feb 15, 2016 at 1:02 PM, Clark Boylan  wrote:

> On Mon, Feb 15, 2016, at 11:48 AM, Ivan Kolodyazhny wrote:
> > Hi all,
> >
> > I'll talk mostly about python-cinderclient, but the same question could
> > apply to other clients as well.
> >
> > Now, for python-cinderclient we've got two kinds of functional/integration
> > jobs:
> >
> > 1) gate-cinderclient-dsvm-functional - a very limited (for now) set of
> > functional tests, most of which were part of the tempest CLI tests in the
> > past.
> >
> > 2) gate-tempest-dsvm-neutron-src-python-cinderclient - if I understand
> > right, the idea of this job was to have integrated tests that test
> > cinderclient with other projects, to verify that a new patch to
> > python-cinderclient won't break any other project.
> > But it does *not* test cinderclient at all, except a few attach-related
> > tests,
> > because Tempest doesn't use python-*client.
>
> Tempest doesn't use python-*client to talk to the APIs but the various
> OpenStack services do use python-*client to talk to the other services.
> Using cinderclient as an example, nova consumes cinderclient to perform
> volume operations in nova/volume/cinder.py. There is value in this
> existing test if those code paths are exercised. Basically ensuring the
> next release of cinderclient does not break nova. It may be the case
> that cinderclient is a bad example because tempest doesn't do volume
> operations through nova, but I am sure for many of the other clients
> these tests do provide value.
>
> >
> > The same job was added for python-heatclient but was removed because
> > devstack didn't install Heat for that job [1].
> >
> > We agreed [2] to remove this job from cinderclient gates too, once
> > functional or integration tests are implemented.
>
> Just make sure that you don't lose exercising of the above code paths
> when this transition happens. If we don't currently test that code it
> would be a good goal for any new integration testing to do so.
>
> >
> >
> > There is a proposal for python-cinderclient tests to implement some
> > cross-project testing to make sure that a new python-cinderclient won't
> > break any existing project that uses it.
> >
> > After discussing in IRC with John Griffith (jgriffith), I realized that
> > this
> > could be a cross-project initiative for such kinds of integration tests.
> > The OpenStack Client (OSC) could cover some part of such tests, but does it
> > mean that we'll run OSC tests on every patch to python-*client? We could
> > run
> > only the cinder-related OSC tests on our gates to verify that a change
> > doesn't break OSC and, maybe, other projects.
> >
> > The other option is to implement tests like [3] on a per-project basis and
> > call
> > them "integration".  Such tests could cover more cases than the OSC functional
> > tests and have more project-related test cases, e.g. testing some
> > python-cinderclient-specific corner cases which are not related to OSC.
> >
> > IMO, it would be good to have a cross-project decision on how to
> > implement clients' integration tests per project.
> >
> >
> > [1] https://review.openstack.org/#/c/272411/
> > [2]
> >
> http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-12-16-16.00.log.html
> > [3] https://review.openstack.org/#/c/279432/8
> >
> > Regards,
> > Ivan Kolodyazhny,
> > http://blog.e0ne.info/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hey Everyone,

So this started after I made some comments on a patch in CinderClient
that added attach/detach integration tests to cinder's local test repo.  My
first thought was that we should focus on just Cinder functional tests
first, and that maybe the integration tests (including with the clients)
should be centralized, or have a more standardized approach to them.

What I was getting at is that while the Tempest tests don't use the clients
directly, there are a number of places where tempest does end up calling
them indirectly. Volume attach in Nova is a good example of this: while we
don't call NovaClient to do this, the Nova API drills down into
volume/cinder.py, which just loads and calls CinderClient in order to issue
the volume-related calls that it does.  My thought was that maybe it would
be useful to have a more cross-project effort for things like this. There
are other places we do this in a few projects, with Glance, Keystone and
Swift as I recall.
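
To make the shape of that indirect path concrete, here is a simplified
sketch, loosely modelled on nova/volume/cinder.py (the class and plumbing
here are made up, not nova's actual code): a service-side facade over
python-cinderclient, so an incompatible client release breaks the service
even though Tempest never imports the client itself.

    from cinderclient.v2 import client as cinder_client


    class VolumeAPI(object):
        # Thin service-side facade over python-cinderclient; the kind of
        # code path a -src-python-cinderclient job exercises indirectly.

        def __init__(self, username, api_key, project_id, auth_url):
            self._cinder = cinder_client.Client(username, api_key,
                                                project_id, auth_url)

        def attach(self, volume_id, instance_uuid, mountpoint):
            # Nova's volume-attach path bottoms out in calls like this;
            # a signature change in a new cinderclient surfaces here first.
            self._cinder.volumes.attach(volume_id, instance_uuid, mountpoint)

        def detach(self, volume_id):
            self._cinder.volumes.detach(volume_id)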

I was actually thinking of a common test-run (similar to dsvm-full) but
something that focuses exclusively on the cross-project calls that are made
via clients.  The idea being that any xxx-client change would have to load
this scenario up with current master and run successfully.  Maybe this is
overkill?  Maybe we don't do the "import xxxClient" in as many 

Re: [openstack-dev] [Swift] Erasure coding and geo replication

2016-02-15 Thread Mark Kirkwood

On 16/02/16 17:10, Mark Kirkwood wrote:

On 15/02/16 23:29, Kota TSUYUZAKI wrote:

Hello Mark,

AFAIK, there are a few reasons why erasure code + geo replication is
still a work in progress.


and expect to survive a region outage...

With that in mind I did some experiments (Liberty swift) and it looks
to me like if you have:

- num_data_frags < num_nodes in (smallest) region

and:

- num_parity_frags = num_data_frags


then having a region fail does not result in service outage.


Good point, but note that PyECLib v1.0.7 (pinned to Kilo/Liberty
stable) still has a problem where it cannot decode the original data
when all of the fed fragments are parity frags[1]. (i.e. if you set
num_parity_frags = num_data_frags and only parity frags come
into the proxy for a GET request, it will fail at decoding.) The problem
was already resolved in PyECLib/liberasurecode on the master
branch, and current swift master has the PyECLib>=1.0.7 dependency, so
if you intend to use the newest Swift, it might not
be a matter.



Ah right, in my testing I always took down my "1st" region...which will
have had data fragments therein. For interest I'll try to provoke a
situation where I have all parity ones to assemble (and see what happens).




So I tried this out - it still works fine. Checking the version of pyeclib, I
see Ubuntu 15.10 is giving me:


- Swift 2.5.0
- pyeclib 1.0.8

Hmmm - Canonical deliberately upping the version of pyeclib (shock)?
Interesting... anyway, that explains why I cannot get it to fail. However, all
your other points are noted, and again thanks!


Regards

Mark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

2016-02-15 Thread 少合冯
Filed a bug:
https://bugs.launchpad.net/openstack-api-site/+bug/1545922


Anne,  Alex, and Ghanshyam Mann,
Can this be raised in the API meeting for discussion?

I care very much about my patch:
https://review.openstack.org/#/c/258771/12/nova/api/openstack/compute/server_migrations.py

Should I allow it as a DB index?

BR
Shaohe Feng.


2016-02-16 10:46 GMT+08:00 GHANSHYAM MANN :

> Yes, currently Nova supports that for the show/update/delete server APIs etc.
> (both v2 and v2.1), and python-novaclient does too. But I think that was old
> behaviour, mainly for the ec2 API?
>
> I searched the ec2 repo [1] and they get the instance from nova using the UUID;
> I did not find any place where they fetch using the id. But I am not sure if
> any external interface directly fetches that from nova by 'id'.
>
> But apart from that, maybe some users are using 'id' instead of 'uuid', but
> that was not recommended or documented anywhere. So in that case, can we
> remove this old behaviour without a version bump?
>
>
> [1].. https://github.com/openstack/ec2-api
>
> Regards
> Ghanshyam Mann
>
> On Tue, Feb 16, 2016 at 11:24 AM, Anne Gentle <
> annegen...@justwriteclick.com> wrote:
>
>>
>>
>> On Mon, Feb 15, 2016 at 6:03 PM, 少合冯  wrote:
>>
>>> I guess others may ask the same questions.
>>>
>>> I read the nova API doc:
>>> such as this API:
>>> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
>>>
>>> GET /v2.1/{tenant_id}/servers/{server_id}
>>> *Show server details*
>>>
>>>
>>> *Request parameters*
>>> Parameter   Style   Type         Description
>>> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
>>> server_id   URI     csapi:UUID   The UUID of the server.
>>>
>>> But I can get the server by DB index:
>>>
>>> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
>>> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
>>> {
>>> "server": {
>>> "OS-DCF:diskConfig": "MANUAL",
>>> "OS-EXT-AZ:availability_zone": "nova",
>>> "OS-EXT-SRV-ATTR:host": "shaohe1",
>>> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
>>> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
>>> "OS-EXT-STS:power_state": 1,
>>> "OS-EXT-STS:task_state": "migrating",
>>> "OS-EXT-STS:vm_state": "error",
>>> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
>>> "OS-SRV-USG:terminated_at": null,
>>> ..
>>> }
>>> }
>>>
>>> and the code really allows the use of a DB index:
>>> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
>>>
>>>
>> Nice find. Can you log this as an API bug and we'll triage it -- can even
>> help you fix it on the site if you like.
>>
>> https://bugs.launchpad.net/openstack-api-site/+filebug
>>
>> Basically, click that link, write a short summary, then copy and paste in
>> this email's contents, it has lots of good info.
>>
>> Let me know if you'd also like to fix the bug on the site.
>>
>> And hey nova team, if you think it's actually an API bug, we'll move it
>> over to you.
>>
>> Thanks for reporting it!
>> Anne
>>
>>
>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Anne Gentle
>> Rackspace
>> Principal Engineer
>> www.justwriteclick.com
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Erasure coding and geo replication

2016-02-15 Thread Mark Kirkwood

On 15/02/16 23:29, Kota TSUYUZAKI wrote:

Hello Mark,

AFAIK, there are a few reasons why erasure code + geo replication is still a
work in progress.


and expect to survive a region outage...

With that in mind I did some experiments (Liberty swift) and it looks to me like
if you have:

- num_data_frags < num_nodes in (smallest) region

and:

- num_parity_frags = num_data_frags


then having a region fail does not result in service outage.


Good point, but note that PyECLib v1.0.7 (pinned to Kilo/Liberty stable)
still has a problem where it cannot decode the original data when all of the
fed fragments are parity frags[1]. (i.e. if you set
num_parity_frags = num_data_frags and only parity frags come into the proxy
for a GET request, it will fail at decoding.) The problem was already resolved
in PyECLib/liberasurecode on the master
branch, and current swift master has the PyECLib>=1.0.7 dependency, so if you
intend to use the newest Swift, it might not
be a matter.
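
That failure mode can be reproduced directly against PyECLib; a minimal
sketch, with parameters chosen to mirror the num_parity_frags =
num_data_frags setup above:

    from pyeclib.ec_iface import ECDriver

    K, M = 4, 4  # num_data_frags = num_parity_frags
    ec = ECDriver(k=K, m=M, ec_type='liberasurecode_rs_vand')

    payload = b'object payload ' * 1024
    frags = ec.encode(payload)   # K data fragments followed by M parity
    parity_only = frags[K:]      # all a proxy may have after a region
                                 # failure: parity fragments only

    # On PyECLib 1.0.7 (the Kilo/Liberty pin) this decode fails; with the
    # fix in liberasurecode/PyECLib master it reconstructs the payload.
    assert ec.decode(parity_only) == payload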



Ah right, in my testing I always took down my "1st" region...which will 
have had data fragments therein. For interest I'll try to provoke a 
situation where I have all parity ones to assemble (and see what happens).




From the Swift perspective, I think we need more tests/discussion for geo
replication around write/read affinity[2], which is the geo replication
machinery in Swift itself, and around performance.

For the write/read affinity, we actually didn't consider affinity control, to
simplify the implementation, until EC landed in Swift master[3], so I think
it's time to make sure how we can use
affinity control with EC, but it's not done yet.

From the performance perspective, in my experiments more parities cause quite
a performance degradation[4]. To prevent the degradation, I am working on a
spec which makes duplicated copies of the
data/parity fragments and spreads them out into geo regions.

To summarize, we've not done the work yet, but discussion of and contributions
to EC + geo replication are welcome anytime, IMO.

Thanks,
Kota

1: https://bitbucket.org/tsg-/liberasurecode/commits/a01b1818c874a65d1d1fb8f11ea441e9d3e18771
2: http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters
3: http://docs.openstack.org/developer/swift/overview_erasure_code.html#region-support
4: https://specs.openstack.org/openstack/swift-specs/specs/in_progress/global_ec_cluster.html





Excellent - thank you for a very comprehensive answer.

Regards

Mark



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday February 16th at 19:00 UTC

2016-02-15 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday February 16th, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items, and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-02-09-19.02.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-02-09-19.02.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-02-09-19.02.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-15 Thread Alex Xu
2016-02-16 9:47 GMT+08:00 GHANSHYAM MANN :

> Regards
> Ghanshyam Mann
>
>
> On Mon, Feb 15, 2016 at 12:07 PM, Alex Xu  wrote:
> > If we support 2.x.y, when to bump 'x' is a problem. We don't order the API
> > changes for now; the version of an API change is just based on the order in
> > which patches merge. To support 2.x.y, we would need to bump 'y' first for
> > back-compatible changes, I guess.
> >
> > As I remember, we said before that a new feature is the motivation for
> > users to upgrade their client to support a new API version, whether the
> > new version is backward compatible or incompatible. So I guess the initial
> > thinking was that we hope users always upgrade their code rather than
> > staying forever at an old version? If we bump 'x' after a lot of 'y's,
> > will that lead to users always stopping at an 'x' version? And the
> > evolution of the API will slow down.
> >
> > Or we limit it to each release cycle. In each release, we bump 'y' first,
> > and then bump 'x'. Even if there isn't any back-incompatible change in the
> > release, we still bump 'x' when it is released. Then we can encourage users
> > to upgrade their code. But I still think back-incompatible API changes will
> > be slowed down in development, as they would always need to merge after the
> > back-compatible API change patches.
>
> Yeah, that's true, and it will be more complicated from the development
> perspective, which slows down the evolution of API changes.
> But if we support x.y, can't we still change x at any time a backward
> incompatible change happens (I mean before y also)? Or I may not be getting
> the issue you mentioned about always bumping y before x.
>

If the back-incompatible changes merge before the back-compatible changes,
then 'y' becomes useless. For example, say the initial version is 2.1.0 and we
have 3 back-compatible and 3 incompatible changes. If we are unlucky and the
incompatible changes merge first, we get version 2.4.3, and a user who wants
those back-compatible changes still has to take on those 3 incompatible
changes.
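
A toy enumeration of that scenario (purely illustrative, nothing like this
exists in nova):

    def enumerate_versions(changes, start=(2, 1, 0)):
        # 'x' bumps on every backward-incompatible change; 'y' bumps on
        # every change, compatible or not.
        major, x, y = start
        for kind in changes:
            y += 1
            if kind == 'incompat':
                x += 1
            print('%d.%d.%d  (%s)' % (major, x, y, kind))

    # Unlucky ordering: the three incompatible changes merge first.
    enumerate_versions(['incompat'] * 3 + ['compat'] * 3)
    # 2.2.1, 2.3.2, 2.4.3 (incompat), then 2.4.4, 2.4.5, 2.4.6 (compat):
    # every version carrying the compatible changes sits behind all three
    # 'x' bumps, so 'y' buys the user nothing.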


>
> I like the idea of distinguishing backward compatible and incompatible
> changes with x and y, which always gives a clear perspective on the changes.
> But it should not lead users to ignore y. I mean, some backward compatible
> changes which are really good may get ignored by users as they start to look
> at x only.
> For example, "adding an attribute to a resource representation" is a
> backward compatible change (if so), and if that is added as y then it might
> get ignored by users.
>
> Another way to clearly distinguish backward compatible and incompatible
> changes is through documentation, which was initially discussed during the
> microversion specs. Currently the doc has a good description of each
> change, but no clear indication of whether it is backward compatible or not.
> We can do that by adding a clear flag [Backward Compatible/
> Incompatible] for each version in the doc [1]-
>
>
+1 for documenting whether a change is backward compatible or not.


> >
> >
> >
> > 2016-02-13 4:55 GMT+08:00 Andrew Laski :
> >>
> >> Starting a new thread to continue a thought that came up in
> >>
> >>
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html
> .
> >> The Nova API microversion framework allows for backwards compatible and
> >> backwards incompatible changes but there is no way to programmatically
> >> distinguish the two. This means that as a user of the API I need to
> >> understand every change between the version I'm using now and a new
> >> version I would like to move to in case an intermediate version changes
> >> default behaviors or removes something I'm currently using.
> >>
> >> I would suggest that a more user friendly approach would be to
> >> distinguish the two types of changes. Perhaps something like 2.x.y where
> >> x is bumped for a backwards incompatible change and y is still
> >> monotonically increasing regardless of bumps to x. So if the current
> >> version is 2.2.7 a new backwards compatible change would bump to 2.2.8
> >> or a new backwards incompatible change would bump to 2.3.8. As a user
> >> this would allow me to fairly freely bump the version I'm consuming
> >> until x changes at which point I need to take more care in moving to a
> >> new version.
> >>
> >> Just wanted to throw the idea out to get some feedback. Or perhaps this
> >> was already discussed and dismissed when microversions were added and I
> >> just missed it.
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> [1]
> 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread 王华
I think master nodes should be controlled by Magnum, so that we can do the
operational work for users. AWS and GCE use this model. And master nodes are
resource-consuming. If master nodes are not controlled by users, we can do
some optimization, invisible to users, to reduce the cost. For
example, we can combine several master nodes into one with correct isolation.

Regards,
Wanghua

On Tue, Feb 16, 2016 at 1:52 AM, Hongbin Lu  wrote:

> Regarding the COE mode, it seems there are three options:
>
> 1.   Place both master nodes and worker nodes to user’s tenant
> (current implementation).
>
> 2.   Place only worker nodes to user’s tenant.
>
> 3.   Hide both master nodes and worker nodes from user’s tenant.
>
>
>
> Frankly, I don’t know which one will succeed/fail in the future. Each mode
> seems to have use cases. Maybe magnum could support multiple modes?
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Corey O'Brien [mailto:coreypobr...@gmail.com]
> *Sent:* February-15-16 8:43 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>
>
> Hi all,
>
>
>
> A few thoughts to add:
>
>
>
> I like the idea of isolating the masters so that they are not
> tenant-controllable, but I don't think the Magnum control plane is the
> right place for them. They still need to be running on tenant-owned
> resources so that they have access to things like isolated tenant networks
> or that any bandwidth they consume can still be attributed and billed to
> tenants.
>
>
>
> I think we should extend that concept a little to include worker nodes as
> well. While they should live in the tenant like the masters, they shouldn't
> be controllable by the tenant through anything other than the COE API. The
> main use case that Magnum should be addressing is providing a managed COE
> environment. Like Hongbin mentioned, Magnum users won't have the domain
> knowledge to properly maintain the swarm/k8s/mesos infrastructure the same
> way that Nova users aren't expected to know how to manage a hypervisor.
>
>
>
> I agree with Egor that trying to have Magnum schedule containers is going
> to be a losing battle. Swarm/K8s/Mesos are always going to have better
> scheduling for their containers. We don't have the resources to try to be
> yet another container orchestration engine. Besides that, as a developer, I
> don't want to learn another set of orchestration semantics when I already
> know swarm or k8s or mesos.
>
>
>
> @Kris, I appreciate the real use case you outlined. In your idea of having
> multiple projects use the same masters, how would you intend to isolate
> them? As far as I can tell none of the COEs would have any way to isolate
> those teams from each other if they share a master. I think this is a big
> problem with the idea of sharing masters even within a single tenant. As an
> operator, I definitely want to know that users can isolate their resources
> from other users and tenants can isolate their resources from other tenants.
>
>
>
> Corey
>
>
>
> On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao  wrote:
>
> Hi,
>
>
>
> I wanted to give some thoughts to the thread.
>
>
>
> There are various perspectives around “Hosted vs Self-managed COE”, but if
> you stand in the developer's position, it basically comes down to “Ops vs
> Flexibility”.
>
>
>
> For those who want more control of the stack, so as to customize in anyway
> they see fit, self-managed is a more appealing option. However, one may
> argue that the same job can be done with a heat template+some patchwork of
> cinder/neutron. And the heat template is more customizable than magnum,
> which probably introduces some requirements on the COE configuration.
>
>
>
> For people who don't want to manage the COE, hosted is a no-brainer. The
> question here is which one is the core compute engine in the stack,
> nova or the COE? Unless you are running a public, multi-tenant OpenStack
> deployment, it is highly likely that you are sticking with only one COE.
> Supposing k8s is what your team is dealing with every day, then why do you
> need nova sitting under k8s, whose job is just launching some VMs? After all,
> it is the COE that orchestrates cinder/neutron.
>
>
>
> One idea is to put the COE at the same layer as nova. Instead of
> running atop nova, the two run side by side. So you get two compute
> engines: nova for IaaS workloads, k8s for CaaS workloads. If you go this way,
> hypernetes is probably what you are looking for.
>
>
>
> Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with a
> Docker registry, and use nova to launch Docker images. But this is not done
> by nova-docker, simply because it is hard to integrate things like
> cinder/neutron with lxc. The idea is a nova-hyper driver. Since

Re: [openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

2016-02-15 Thread GHANSHYAM MANN
Yes, currently Nova supports that for the show/update/delete server APIs etc.
(both v2 and v2.1), and python-novaclient does too. But I think that was old
behaviour, mainly for the ec2 API?

I searched the ec2 repo [1] and they get the instance from nova using the UUID;
I did not find any place where they fetch using the id. But I am not sure if
any external interface directly fetches that from nova by 'id'.

But apart from that, maybe some users are using 'id' instead of 'uuid', but
that was not recommended or documented anywhere. So in that case, can we
remove this old behaviour without a version bump?


[1].. https://github.com/openstack/ec2-api

Regards
Ghanshyam Mann

On Tue, Feb 16, 2016 at 11:24 AM, Anne Gentle  wrote:

>
>
> On Mon, Feb 15, 2016 at 6:03 PM, 少合冯  wrote:
>
>> I guess others may ask the same questions.
>>
>> I read the nova API doc:
>> such as this API:
>> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
>>
>> GET /v2.1/{tenant_id}/servers/{server_id}
>> *Show server details*
>>
>>
>> *Request parameters*
>> Parameter   Style   Type         Description
>> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
>> server_id   URI     csapi:UUID   The UUID of the server.
>>
>> But I can get the server by DB index:
>>
>> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
>> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
>> {
>> "server": {
>> "OS-DCF:diskConfig": "MANUAL",
>> "OS-EXT-AZ:availability_zone": "nova",
>> "OS-EXT-SRV-ATTR:host": "shaohe1",
>> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
>> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
>> "OS-EXT-STS:power_state": 1,
>> "OS-EXT-STS:task_state": "migrating",
>> "OS-EXT-STS:vm_state": "error",
>> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
>> "OS-SRV-USG:terminated_at": null,
>> ..
>> }
>> }
>>
>> and the code really allows the use of a DB index:
>> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
>>
>>
> Nice find. Can you log this as an API bug and we'll triage it -- can even
> help you fix it on the site if you like.
>
> https://bugs.launchpad.net/openstack-api-site/+filebug
>
> Basically, click that link, write a short summary, then copy and paste in
> this email's contents, it has lots of good info.
>
> Let me know if you'd also like to fix the bug on the site.
>
> And hey nova team, if you think it's actually an API bug, we'll move it
> over to you.
>
> Thanks for reporting it!
> Anne
>
>
>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Anne Gentle
> Rackspace
> Principal Engineer
> www.justwriteclick.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

2016-02-15 Thread Alex Xu
I don't think our API supporting getting servers by a DB index is a good idea.
So I would prefer that we remove it in the future with a microversion. But for
now, yes, it is here.
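
For reference, the lookup under discussion is roughly the following (a
simplified sketch of the compat path in nova/compute/api.py, not the verbatim
code):

    from oslo_utils import strutils, uuidutils

    from nova import exception, objects


    def fetch_instance(context, instance_id):
        if uuidutils.is_uuid_like(instance_id):
            return objects.Instance.get_by_uuid(context, instance_id)
        if strutils.is_int_like(instance_id):
            # The behaviour in question: a bare integer DB index works too.
            return objects.Instance.get_by_id(context, instance_id)
        raise exception.InstanceNotFound(instance_id=instance_id)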

2016-02-16 8:03 GMT+08:00 少合冯 :

> I guess others may ask the same questions.
>
> I read the nova API doc:
> such as this API:
> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
>
> GET /v2.1/{tenant_id}/servers/{server_id}
> *Show server details*
>
>
> *Request parameters*
> Parameter   Style   Type         Description
> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
> server_id   URI     csapi:UUID   The UUID of the server.
>
> But I can get the server by DB index:
>
> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
> {
> "server": {
> "OS-DCF:diskConfig": "MANUAL",
> "OS-EXT-AZ:availability_zone": "nova",
> "OS-EXT-SRV-ATTR:host": "shaohe1",
> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
> "OS-EXT-STS:power_state": 1,
> "OS-EXT-STS:task_state": "migrating",
> "OS-EXT-STS:vm_state": "error",
> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
> "OS-SRV-USG:terminated_at": null,
> ..
> }
> }
>
> and the code really allows the use of a DB index:
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][all] Deprecation policy between projects

2016-02-15 Thread gordon chung


On 14/02/2016 8:32 AM, Ken'ichi Ohmichi wrote:
> Hi,
>
> Do we have any deprecation policies between projects?
> When we can remove old drivers of the other projects after they were
> marked as deprecated?
> In nova, there are many drivers for the other projects and there are
> patches which remove this kind of code. (e.g:
> https://review.openstack.org/#/c/274696/)
>
> This seems a common question and I maybe missed previous discussion.
>
> Thanks
> Ken Ohmichi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

the 'official' deprecation policy with regards to tags is this: 
https://governance.openstack.org/reference/tags/assert_follows-standard-deprecation.html

i'd imagine it holds up whether talking about internal features or in 
your case features across projects.

cheers,
-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

2016-02-15 Thread Anne Gentle
On Mon, Feb 15, 2016 at 6:03 PM, 少合冯  wrote:

> I guess others may ask the same questions.
>
> I read the nova API doc:
> such as this API:
> http://developer.openstack.org/api-ref-compute-v2.1.html#showServer
>
> GET /v2.1/{tenant_id}/servers/{server_id}
> *Show server details*
>
>
> *Request parameters*
> Parameter   Style   Type         Description
> tenant_id   URI     csapi:UUID   The UUID of the tenant in a multi-tenancy cloud.
> server_id   URI     csapi:UUID   The UUID of the server.
>
> But I can get the server by DB index:
>
> curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
> http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
> {
> "server": {
> "OS-DCF:diskConfig": "MANUAL",
> "OS-EXT-AZ:availability_zone": "nova",
> "OS-EXT-SRV-ATTR:host": "shaohe1",
> "OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
> "OS-EXT-SRV-ATTR:instance_name": "instance-0002",
> "OS-EXT-STS:power_state": 1,
> "OS-EXT-STS:task_state": "migrating",
> "OS-EXT-STS:vm_state": "error",
> "OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
> "OS-SRV-USG:terminated_at": null,
> ..
> }
> }
>
> and the code really allows the use of a DB index:
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
>
>
Nice find. Can you log this as an API bug and we'll triage it -- can even
help you fix it on the site if you like.

https://bugs.launchpad.net/openstack-api-site/+filebug

Basically, click that link, write a short summary, then copy and paste in
this email's contents, it has lots of good info.

Let me know if you'd also like to fix the bug on the site.

And hey nova team, if you think it's actually an API bug, we'll move it
over to you.

Thanks for reporting it!
Anne



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Anne Gentle
Rackspace
Principal Engineer
www.justwriteclick.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] tooz 1.31.0 release (mitaka)

2016-02-15 Thread no-reply
We are stoked to announce the release of:

tooz 1.31.0: Coordination library for distributed systems.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/tooz

With package available at:

https://pypi.python.org/pypi/tooz

Please report issues through launchpad:

http://bugs.launchpad.net/python-tooz/

For more details, please see below.

Changes in tooz 1.30.0..1.31.0
--

3687a58 Updated from global requirements
13100bc Updated from global requirements
d3b82b7 Add .tox, *.pyo and *.egg to .gitignore
c02b573 Enable OS_LOG_CAPTURE so that logs can be seen (on error)
61c4224 Add lock breaking

Diffstat (except docs and test files)
-

.gitignore  |  4 +++-
.testr.conf |  7 ++-
requirements.txt|  2 +-
test-requirements.txt   |  2 +-
tooz/drivers/etcd.py| 23 +--
tooz/drivers/ipc.py | 17 +++--
tooz/drivers/memcached.py   | 22 +-
tooz/drivers/redis.py   |  4 
tooz/locking.py | 17 -
10 files changed, 105 insertions(+), 14 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 0122587..bb29af2 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -16 +16 @@ futures>=3.0;python_version=='2.7' or python_version=='2.6' # BSD
-futurist>=0.6.0 # Apache-2.0
+futurist>=0.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 1026447..633cb0c 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -29 +29 @@ redis>=2.10.0 # MIT
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.messaging 4.3.0 release (mitaka)

2016-02-15 Thread no-reply
We are pumped to announce the release of:

oslo.messaging 4.3.0: Oslo Messaging API

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

Changes in oslo.messaging 4.2.0..4.3.0
--

3c0a48a simulator.py improvements
5954d2a rabbit: improvements to QoS
6d654ec Updated from global requirements
36c3965 Remove server queue creating if target's server is empty
e3fb672 Updated from global requirements
0fb20d8 Correctly set socket timeout for publishing
3adb5ab Updated from global requirements
668062e Use more secure yaml.safe_load() instead of yaml.load()
be89def [kombu] Implement experimental message compression
fbe10f0 [zmq] Multithreading access to zmq sockets
87ff93e [zmq] ZMQ_LINGER default value
9c182ee Remove matchmaker_redis configs from [DEFAULT]
3cb8934 Refactors base classes
2ae4f8f [zmq] Use PUSH/PULL for direct CAST
bac3969 support ability to set thread pool size per listener
2b77d50 Fix misspellings

Diffstat (except docs and test files)
-

oslo_messaging/_drivers/amqp.py|   4 +-
oslo_messaging/_drivers/amqpdriver.py  |  17 ++-
oslo_messaging/_drivers/base.py|  24 ++---
oslo_messaging/_drivers/impl_fake.py   |  18 ++--
oslo_messaging/_drivers/impl_kafka.py  |  15 ++-
oslo_messaging/_drivers/impl_pika.py   |   4 +-
oslo_messaging/_drivers/impl_rabbit.py |  38 +--
oslo_messaging/_drivers/impl_zmq.py|   7 +-
.../_drivers/pika_driver/pika_message.py   |  11 +-
oslo_messaging/_drivers/pika_driver/pika_poller.py |  47 
oslo_messaging/_drivers/protocols/amqp/driver.py   |   8 +-
.../publishers/dealer/zmq_dealer_call_publisher.py |  51 -
.../publishers/dealer/zmq_dealer_publisher.py  | 119 -
.../client/publishers/zmq_publisher_base.py|  32 +-
.../client/publishers/zmq_push_publisher.py|  39 +++
.../_drivers/zmq_driver/client/zmq_client.py   |   9 +-
.../server/consumers/zmq_consumer_base.py  |  42 
.../server/consumers/zmq_pull_consumer.py  |  35 --
.../server/consumers/zmq_router_consumer.py|  53 ++---
.../server/consumers/zmq_sub_consumer.py   |  11 +-
.../zmq_driver/server/zmq_incoming_message.py  |   6 +-
.../_drivers/zmq_driver/server/zmq_server.py   |  15 ++-
oslo_messaging/_drivers/zmq_driver/zmq_socket.py   |   7 +-
oslo_messaging/_executors/impl_pooledexecutor.py   |   6 +-
oslo_messaging/dispatcher.py   |   2 +-
oslo_messaging/opts.py |   1 -
requirements.txt   |   6 +-
tools/simulator.py |  15 +--
31 files changed, 395 insertions(+), 310 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 71be8bc..c09392b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ pbr>=1.6 # Apache-2.0
-futurist>=0.6.0 # Apache-2.0
+futurist>=0.11.0 # Apache-2.0
@@ -26 +26 @@ cachetools>=1.0.0 # MIT License
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT
@@ -37 +37 @@ amqp>=1.4.0 # LGPL
-kombu>=3.0.7 # BSD
+kombu>=3.0.25 # BSD



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] taskflow 1.28.0 release (mitaka)

2016-02-15 Thread no-reply
We are pleased to announce the release of:

taskflow 1.28.0: Taskflow structured state management library.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/taskflow

With package available at:

https://pypi.python.org/pypi/taskflow

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

For more details, please see below.

Changes in taskflow 1.27.0..1.28.0
--

63b380f Add WBE worker expiry
1ab60b7 Some WBE protocol/executor cleanups
a70bd8a Remove need for separate notify thread
7ea2bfc Updated from global requirements
8c1172b Don't bother scanning for workers if no new messages arrived
2515d3c Updated from global requirements
00d3a25 Updated from global requirements
77e89ea Updated from global requirements
cea71f2 Fix for WBE sporadic timeout of tasks

Diffstat (except docs and test files)
-

requirements.txt  |   2 +-
taskflow/engines/worker_based/engine.py   |  11 +-
taskflow/engines/worker_based/executor.py | 211 ++
taskflow/engines/worker_based/protocol.py |  94 --
taskflow/engines/worker_based/types.py| 156 
taskflow/types/cache.py   |  85 -
test-requirements.txt |   4 +-
14 files changed, 351 insertions(+), 431 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 5e73cb1..2cad471 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -17 +17 @@ enum34;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' #
-futurist>=0.6.0 # Apache-2.0
+futurist>=0.11.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 7d2eaaf..f724ea3 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12 +12 @@ testscenarios>=0.4 # Apache-2.0/BSD
-kombu>=3.0.7 # BSD
+kombu>=3.0.25 # BSD
@@ -32 +32 @@ PyMySQL>=0.6.2 # MIT License
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.versionedobjects 1.6.0 release (mitaka)

2016-02-15 Thread no-reply
We are pleased to announce the release of:

oslo.versionedobjects 1.6.0: Oslo Versioned Objects library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.5.0..1.6.0
-

7be5865 Updated from global requirements
f773c29 Updated from global requirements

Diffstat (except docs and test files)
-




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] osprofiler 1.1.0 release (mitaka)

2016-02-15 Thread no-reply
We are happy to announce the release of:

osprofiler 1.1.0: OpenStack Profiler Library

This release is part of the mitaka release series.

With package available at:

https://pypi.python.org/pypi/osprofiler

For more details, please see below.

Changes in osprofiler 1.0.1..1.1.0
--

2ab53df run py34 tests before py27 to work around testr bug
1720b73 stop making a copy of options discovered by config generator
ac45be4 Make class detection more accurate
be25e99 Expose X-Trace-* constants
3ebc438 remove python 2.6 trove classifier

Diffstat (except docs and test files)
-

osprofiler/opts.py |  5 +
osprofiler/profiler.py |  2 +-
osprofiler/web.py  | 14 ++
setup.cfg  |  1 -
tox.ini|  2 +-
5 files changed, 13 insertions(+), 11 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.vmware 2.4.0 release (mitaka)

2016-02-15 Thread no-reply
We are stoked to announce the release of:

oslo.vmware 2.4.0: Oslo VMware library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.vmware

With package available at:

https://pypi.python.org/pypi/oslo.vmware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

For more details, please see below.

Changes in oslo.vmware 2.3.0..2.4.0
---

aef1f4a Updated from global requirements

Diffstat (except docs and test files)
-

requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c6ad26e..c3ed946 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -18 +18 @@ suds-jurko>=0.6 # LGPL
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.utils 3.6.0 release (mitaka)

2016-02-15 Thread no-reply
We are satisfied to announce the release of:

oslo.utils 3.6.0: Oslo Utility library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.utils

With package available at:

https://pypi.python.org/pypi/oslo.utils

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.utils

For more details, please see below.

Changes in oslo.utils 3.5.0..3.6.0
--

cf3de7d Remove bandit.yaml in favor of defaults

Diffstat (except docs and test files)
-

bandit.yaml | 359 
tox.ini |   2 +-
2 files changed, 1 insertion(+), 360 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.rootwrap 4.0.0 release (mitaka)

2016-02-15 Thread no-reply
We are amped to announce the release of:

oslo.rootwrap 4.0.0: Oslo Rootwrap

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.rootwrap

With package available at:

https://pypi.python.org/pypi/oslo.rootwrap

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.rootwrap

For more details, please see below.

Changes in oslo.rootwrap 3.2.0..4.0.0
-

5b0824e Updated from global requirements
6d104a3 Remove unused use-syslog-rfc-format option

Diffstat (except docs and test files)
-

.gitignore   | 1 +
etc/rootwrap.conf.sample | 5 -
oslo_rootwrap/wrapper.py | 7 ---
test-requirements.txt| 2 +-
6 files changed, 2 insertions(+), 25 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 2cd8f75..9f2a3f7 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -24 +24 @@ mock>=1.2 # BSD
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.service 1.5.0 release (mitaka)

2016-02-15 Thread no-reply
We are pleased to announce the release of:

oslo.service 1.5.0: oslo.service library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 1.4.0..1.5.0


d94735b Use requests in TestWSGIServerWithSSL instead of raw socket client
fdb4861 Fix misspelling and rewrite sentence
81cc23f Add a more useful/detailed frame dumping function

Diffstat (except docs and test files)
-

oslo_service/eventlet_backdoor.py | 44 +--
oslo_service/service.py   |  2 +-
3 files changed, 57 insertions(+), 25 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.config 3.7.0 release (mitaka)

2016-02-15 Thread no-reply
We are jazzed to announce the release of:

oslo.config 3.7.0: Oslo Configuration API

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

Changes in oslo.config 3.6.0..3.7.0
---

b09ba4a add generator hook for apps to update option defaults
0831729 Updated from global requirements
1a64804 refactor generator._list_opts for further enhancement
afdbfa6 always show coverage output from tests
f1ae1e3 handle group objects in sphinxext
424c772 refactor sphinxext and add unit tests
ad440d7 have show-options load the generator config file
f59bd45 refactor generator closures to private methods
1e5956a remove specially attribute handling from _Namespace
5efb1a2 Log mutated options at INFO

Diffstat (except docs and test files)
-------------------------------------

oslo_config/cfg.py  |  36 ++--
oslo_config/generator.py| 102 
oslo_config/sphinxext.py| 284 +++-
requirements.txt|   2 +-
tox.ini |   5 +-
10 files changed, 772 insertions(+), 221 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 2d84059..029335d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -5 +5 @@
-debtcollector>=1.2.0  # Apache-2.0
+debtcollector>=1.2.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.reports 1.5.0 release (mitaka)

2016-02-15 Thread no-reply
We are jubilant to announce the release of:

oslo.reports 1.5.0: oslo.reports library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.reports

With package available at:

https://pypi.python.org/pypi/oslo.reports

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.reports

For more details, please see below.

Changes in oslo.reports 1.4.0..1.5.0


8770d7c Updated from global requirements

Diffstat (except docs and test files)
-------------------------------------

test-requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 907fc73..06da880 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -14 +14 @@ oslo.config>=3.4.0 # Apache-2.0
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.log 3.0.0 release (mitaka)

2016-02-15 Thread no-reply
We are content to announce the release of:

oslo.log 3.0.0: oslo.log library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 2.4.0..3.0.0


17fcaea remove pypy from default tox environment list
5648447 stop making a copy of options discovered by config generator
c0afcf5 always run coverage report
292952d Remove bandit.yaml in favor of defaults
2057faf Fix spell typos
b0aff1a Remove deprecated log-format option

Diffstat (except docs and test files)
-------------------------------------

.gitignore  |   2 +-
bandit.yaml | 359 
oslo_log/_options.py|  25 +--
oslo_log/log.py |  16 +-
tox.ini |  12 +-
10 files changed, 32 insertions(+), 416 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.privsep 1.1.0 release (mitaka)

2016-02-15 Thread no-reply
We are thrilled to announce the release of:

oslo.privsep 1.1.0: OpenStack library for privilege separation

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

Changes in oslo.privsep 1.0.0..1.1.0


539ff4e UnprivilegedPrivsepFixture: Clear capabilities config
ce5b7c7 Change name of privsep_helper to match code

Diffstat (except docs and test files)
-------------------------------------

setup.cfg | 2 +-
2 files changed, 3 insertions(+), 1 deletion(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] oslo.db 4.5.0 release (mitaka)

2016-02-15 Thread no-reply
We are glad to announce the release of:

oslo.db 4.5.0: Oslo Database library

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

Changes in oslo.db 4.4.0..4.5.0
-------------------------------------

0fab50b Updated from global requirements
85a6799 stop making a copy of options discovered by config generator

Diffstat (except docs and test files)
-------------------------------------

oslo_db/options.py | 4 +---
setup.cfg  | 2 +-
2 files changed, 2 insertions(+), 4 deletions(-)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][oslo] futurist 0.12.0 release (mitaka)

2016-02-15 Thread no-reply
We are delighted to announce the release of:

futurist 0.12.0: Useful additions to futures, from the future.

This release is part of the mitaka release series.

With source available at:

http://git.openstack.org/cgit/openstack/futurist

With package available at:

https://pypi.python.org/pypi/futurist

Please report issues through launchpad:

http://bugs.launchpad.net/futurist

For more details, please see below.

Changes in futurist 0.11.0..0.12.0
-------------------------------------

cc7bc70 Updated from global requirements
da698e8 Ensure all futures have completed before run returns

Diffstat (except docs and test files)
-------------------------------------

futurist/_utils.py| 24 
futurist/periodics.py | 15 ---
test-requirements.txt |  2 +-
3 files changed, 37 insertions(+), 4 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index 73e9108..f17e854 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -8 +8 @@ hacking<0.11,>=0.10.0
-eventlet!=0.18.0,>=0.17.4 # MIT
+eventlet>=0.18.2 # MIT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Should we signal backwards incompatible changes in microversions?

2016-02-15 Thread GHANSHYAM MANN
Regards
Ghanshyam Mann


On Mon, Feb 15, 2016 at 12:07 PM, Alex Xu  wrote:
> If we support 2.x.y, when we bump 'x' is a problem. We didn't order the API
> changes for now, the version of API change is just based on the order of
> patch merge. For support 2.x.y, we need bump 'y' first for back-compatible
> changes I guess.
>
> As I remember, we said before, the new feature is the motivation of user
> upgrade their client to support new version API, whatever the new version is
> backward compatible or incompatible. So I guess the initial thinking we hope
> user always upgrade their code than always stop at old version? If we bump
> 'x' after a lot of 'y', will that lead to user always stop at 'x' version?
> And the evolution of api will slow down.
>
> Or we limit to each release cycle. In each release, we bump 'y' first, and
> then bump 'x'. Even there isn't any back-incompatible change in the release.
> We still bump 'x' when released. Then we can encourage user upgrade their
> code. But I still think the back-incompatible API change will be slow down
> in development, as it need always merged after back-compatible API change
> patches.

Yes, that's true, and it would be more complicated from a development
perspective, which would slow down the evolution of the API. But if we
support x.y, can we still bump x at any time a backward-incompatible
change happens (I mean before y as well)? Or maybe I am not getting the
issue you mentioned about always bumping y before x.

I like the idea of distinguishing backward-compatible and incompatible
changes with x and y, which always gives a clear picture of the changes.
But it should not lead users to ignore y. I mean, some backward-compatible
changes which are really useful could get ignored by users if they start
looking at x only. For example, "adding an attribute to a resource
representation" is a backward-compatible change (if so), and if it is
added as a y bump it might get ignored by users.

Another way to clearly distinguish backward-compatible and incompatible
changes is through documentation, which was discussed initially during
the microversion specs. Currently the doc has a good description of each
change, but no clear indication of whether it is backward compatible.
We could fix that by adding an explicit [Backward Compatible/Incompatible]
flag for each version in the doc [1].
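
To make the x.y idea concrete, here is a minimal client-side sketch
(purely illustrative; the helper names are invented and not part of any
actual Nova API):

    # Hypothetical "2.x.y" scheme: an x bump signals a backward-incompatible
    # change, while y increases monotonically across all changes.
    def parse_version(version):
        major, x, y = (int(part) for part in version.split("."))
        return major, x, y

    def upgrade_needs_review(current, target):
        """True when moving from current to target may break callers."""
        # If major and x are unchanged, only backward-compatible changes
        # sit between the two versions, so the bump is safe to take freely.
        return parse_version(current)[:2] != parse_version(target)[:2]

    assert not upgrade_needs_review("2.2.7", "2.2.8")  # y-only bump: safe
    assert upgrade_needs_review("2.2.7", "2.3.8")      # x bump: review needed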

>
>
>
> 2016-02-13 4:55 GMT+08:00 Andrew Laski :
>>
>> Starting a new thread to continue a thought that came up in
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-February/086457.html.
>> The Nova API microversion framework allows for backwards compatible and
>> backwards incompatible changes but there is no way to programmatically
>> distinguish the two. This means that as a user of the API I need to
>> understand every change between the version I'm using now and a new
>> version I would like to move to in case an intermediate version changes
>> default behaviors or removes something I'm currently using.
>>
>> I would suggest that a more user friendly approach would be to
>> distinguish the two types of changes. Perhaps something like 2.x.y where
>> x is bumped for a backwards incompatible change and y is still
>> monotonically increasing regardless of bumps to x. So if the current
>> version is 2.2.7 a new backwards compatible change would bump to 2.2.8
>> or a new backwards incompatible change would bump to 2.3.8. As a user
>> this would allow me to fairly freely bump the version I'm consuming
>> until x changes at which point I need to take more care in moving to a
>> new version.
>>
>> Just wanted to throw the idea out to get some feedback. Or perhaps this
>> was already discussed and dismissed when microversions were added and I
>> just missed it.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

[1] 
https://github.com/openstack/nova/blob/master/nova/api/openstack/rest_api_version_history.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] weekly meeting of Feb.17th

2016-02-15 Thread joehuang
Hi,

After the Chinese New Year festival, let's resume the weekly meeting; the
agenda is as follows.

Agenda:
# Progress of To-do list review: https://etherpad.openstack.org/p/TricircleToDo
# SEG
# Quota management
# exception logging, flavor mapping
# Pod scheduling
# L2 networking across pods

Best Regards
Chaoyi Huang ( Joe Huang )
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Ansible] Meetings this week

2016-02-15 Thread Jesse Pretorius
Hi everyone,

Due to being active at the Ops Mid Cycle, the OpenStack-Ansible Mid Cycle,
and AnsibleFest this week, the bug triage and community meetings will not
be taking place.

We'll resume the normal scheduled meetings next week.

Thanks,

Jesse
IRC: odyssey4me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][API] Does nova API allow the server_id param as DB index?

2016-02-15 Thread 少合冯
I guess others may ask the same question.

I read the nova API doc, e.g. this API:
http://developer.openstack.org/api-ref-compute-v2.1.html#showServer

GET /v2.1/{tenant_id}/servers/{server_id}
*Show server details*


*Request parameters*

Parameter  Style  Type        Description
tenant_id  URI    csapi:UUID  The UUID of the tenant in a multi-tenancy cloud.
server_id  URI    csapi:UUID  The UUID of the server.

But I can get the server by DB index:

curl -s -H X-Auth-Token:6b8968eb38df47c6a09ac9aee81ea0c6
http://192.168.2.103:8774/v2.1/f5a8829cc14c4825a2728b273aa91aa1/servers/2
{
"server": {
"OS-DCF:diskConfig": "MANUAL",
"OS-EXT-AZ:availability_zone": "nova",
"OS-EXT-SRV-ATTR:host": "shaohe1",
"OS-EXT-SRV-ATTR:hypervisor_hostname": "shaohe1",
"OS-EXT-SRV-ATTR:instance_name": "instance-0002",
"OS-EXT-STS:power_state": 1,
"OS-EXT-STS:task_state": "migrating",
"OS-EXT-STS:vm_state": "error",
"OS-SRV-USG:launched_at": "2015-12-18T07:41:00.00",
"OS-SRV-USG:terminated_at": null,
..
}
}

and the code really allows looking it up by DB index:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L1939
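
The linked code amounts to a dispatch of roughly the following shape (a
simplified sketch for illustration only, not nova's actual implementation;
the db_* helpers are hypothetical):

    import uuid

    def _is_uuid_like(value):
        try:
            uuid.UUID(str(value))
            return True
        except (TypeError, ValueError):
            return False

    def get_instance(context, server_id):
        # Look up by UUID when the identifier looks like one; otherwise
        # fall back to the integer database index, which is why the "2"
        # in the URL above resolves to an instance.
        if _is_uuid_like(server_id):
            return db_get_by_uuid(context, server_id)   # hypothetical helper
        return db_get_by_id(context, int(server_id))    # hypothetical helper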
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

2016-02-15 Thread Arkady_Kanevsky
I like any formal, documented process.
Having only bug fixes in stable releases is a good and consistent stance.

But we are bumping into a fundamental issue of the integrated release.
By the time a release comes out, the new functionality of nova, neutron, or most
other components does not have TripleO heat templates that deploy and configure
that functionality for the overcloud. So as soon as somebody tries to use TripleO
(or RDO Manager, OSP director, or any other derivative) they try to remedy this
shortcoming. Being good OpenStack citizens, these folks submit heat templates
upstream. We can state that we will not take them into a stable release, but that
will push that community away from upstream, and their deployments will diverge
from it. Or they will pound on their OpenStack provider (if it is not upstream)
for a place where the templates are maintained for them. Either way it is ugly,
and it will make upgrades and updates a nightmare for these customers.

TripleO is not unique in this; Horizon, and others, have it in spades.

I do not know if we will be able to fix this in OpenStack itself or whether we
rely on distros to handle it. But we will need some place for users to provide
their fixes and extensions of TripleO's current capabilities, to support existing
configurations and the new functionality of the released core OpenStack.

Thanks,
Arkady

-Original Message-
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: Monday, February 15, 2016 3:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

On Wed, Feb 10, 2016 at 07:05:41PM +0100, James Slagle wrote:
> On Wed, Feb 10, 2016 at 4:57 PM, Steven Hardy wrote:
>
> Hi all,
>
> We discussed this in our meeting[1] this week, and agreed a ML
> discussion
> to gain consensus and give folks visibility of the outcome would be a
> good
> idea.
>
> In summary, we adopted a more permissive "release branch" policy[2] for
> our
> stable/liberty branches, where feature backports would be allowed,
> provided
> they worked with liberty and didn't break backwards compatibility.
>
> The original idea was really to provide a mechanism to "catch up" where
> features are added e.g to liberty OpenStack components late in the cycle
> and TripleO requires changes to integrate with them.
>
> However, the reality has been that the permissive backport policy has
> been
> somewhat abused (IMHO) with a large number of major features being
> proposed
> for backport, and in a few cases this has broken downstream (RDO)
> consumers
> of TripleO.
>
> Thus, I would propose that from Mitaka, we revise our backport policy to
> simply align with the standard stable branch model observed by all
> projects[3].
>
> Hopefully this will allow us to retain the benefits of the stable branch
> process, but provide better stability for downstream consumers of these
> branches, and minimise confusion regarding what is a permissable
> backport.
>
> If we do this, only backports that can reasonably be considered
> "Appropriate fixes"[4] will be valid backports - in the majority of
> cases
> this will mean bugfixes only, and large features where the risk of
> regression is significant will not be allowed.
>
> What are peoples thoughts on this?
>
> I'm in agreement. I think this change is needed and will help set
> better expectations around what will be included in which release.
>
> If we adopt this as the new policy, then the immediate followup is to set
> and communicate when we'll be cutting the stable branches, so that it's
> understood when the features have to be done/committed. I'd suggest that
> we more or less completely adopt the integrated release schedule[1]. Which
> I believe means the week of RC1 for cutting the stable/mitaka branches,
> which is March 14th-18th.
>
> It seems to follow logically then that we'd then want to also be more
> aggresively aligned with other integrated release events such as the
> feature freeze date, Feb 29th - March 4th.

Yes, agreeing a backport policy is the first step, and aligning all our release 
policies with the rest of OpenStack is the logical next step.

> An alternative to strictly following the schedule, would be to say that
> TripleO lags the integrated release dates by some number of weeks (1 or 2
> I'd think), to allow for some "catchup" time since TripleO is often
> consuming features from projects part of the integrated release.

The risk with this approach is that there remains some confusion about our
deadlines, and there is an increased risk that our 1-2 week window slips and we
end up with a similar problem to the one we have now.

I'd propose we align with whatever schedule the puppet community observes,
given that (with our current implementation at least) it's unlikely we can
land any features related to new-feature-in-$service type patches
without that feature already having support in the puppet modules.

Perhaps we can seek out some guidance from Emilien, as 

Re: [openstack-dev] [Nova][Glance]Glance v2 api support in Nova

2016-02-15 Thread Flavio Percoco

On 12/02/16 18:24 +0300, Mikhail Fedosin wrote:

Hello!

In late December I wrote several messages about glance v2 support in Nova and
Nova's xen plugin. Many things have been done since then, and now I'm happy to
announce that we have a set of commits that makes Nova fully v2 compatible
(the xen plugin works too)!

Here's the link to the top commit https://review.openstack.org/#/c/259097/
Here's the link to the approved spec for Mitaka:
https://github.com/openstack/nova-specs/blob/master/specs/mitaka/approved/use-glance-v2-api.rst

I think it'll be a big step for OpenStack, because API v2 is much more stable
and RESTful than v1. We would very much like to deprecate v1 at some point. v2
has been 'Current' since Juno, and since then there have been a lot of attempts
to adopt it in Nova, every time postponed to the next release cycle.

Unfortunately, it may not happen this time - this work was marked as
'non-priority' when the related patches had been done. I think it's a big
omission, because this work is essential for all of OpenStack, and it will
be a shame if we aren't able to land it in Mitaka.
As far as I know, Feature Freeze will be announced on March 3rd, and we
still have enough time and people to test it before then. All patches are
split into small commits (100 LOC max), so they should be relatively easy
to review.

I wonder if the Nova community members may change their decision and unblock
these patches? Thanks in advance!


A couple of weeks ago, I had a chat with Sean Dague and John Garbutt and we
agreed that it was probably better to wait until Newton. After that chat, we
held a Glance virtual mid-cycle where Mikhail mentioned that he would rather
sprint on getting Nova on v2 than wait for Newton. The terms and code Mikhail
worked on align with what has been discussed throughout the cycle in numerous
chats, patch sets, etc.

After all the effort that has been put on this (including getting a py24
environment ready to test the xenplugin) it'd be a real shame to have this work
pushed to Newton. The Glance team *needs* to be able to deprecate v1 and the
team has been working on this ever since Kilo, when this effort of moving Nova
to v2 started.

I believe it has to be an OpenStack priority to make this happen or, at the very
least, a cross-project effort that involves all services relying on Glance. Nova
is the last service on the list, AFAICT, and the Glance team has been very
active on this front. This is not to imply the Nova team hasn't helped; in fact,
there's been lots of support/feedback from the nova team during Mitaka. It is
because of that that I believe we should grant these patches an exception and
let them in.

Part of the feedback the Nova team has provided is that some of the proposed
code should live in glanceclient. The Glance team is ready to react and merge
that code, release glanceclient, and get Nova on v2.

Either way, I believe the Glance team should move forward on deprecating v1 in
the Newton time frame. That said, this is not for me to decide right now, and
perhaps this thread is not the right place for that discussion.

Is this something the Nova team would like to consider?

All that said, I'd like to thank Mikhail for all the effort he put into this and
the sleepless nights he spent making the xenplugin work with v2. This was not an
easy task to take on, and Mikhail succeeded in reaching the goal.

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread Arkady_Kanevsky
I think this will be very detrimental to the development community.
The best feedback we get is from the user/customer community, who are at the
summit but most likely will not attend a separate design summit.

I will ignore the financial implications of two separate summits.

-Original Message-
From: Eoghan Glynn [mailto:egl...@redhat.com]
Sent: Monday, February 15, 2016 3:36 AM
To: James Bottomley
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] Proposal: Separate design summits from 
OpenStack conferences



> > Honestly I don't know of any communication between two cores at a +2
> > party that couldn't have just as easily happened surrounded by other
> > contributors. Nor, I hope, does anyone put in the substantial
> > reviewing effort required to become a core in order to score a few
> > free beers and see some local entertainment. Similarly for the TC,
> > one would hope that dinner doesn't figure in the system incentives
> > that drives folks to throw their hat into the ring.
>
> Heh, you'd be surprised.
>
> I don't object to the proposal, just the implication that there's
> something wrong with parties for specific groups: we did abandon the
> speaker party at Plumbers because the separation didn't seem to be
> useful and concentrated instead on doing a great party for everyone.
>
> > In any case, I've derailed the main thrust of the discussion here,
> > which I believe could be summed up by:
> >
> > "let's dial down the glitz a notch, and get back to basics"
> >
> > That sentiment I support in general, but I'd just be more selective
> > as to which social events should be first in line to be culled in
> > order to create a better atmosphere at summit.
> >
> > And I'd be far more concerned about getting the choice of location,
> > cadence, attendees, and format right, than in questions of who
> > drinks with whom.
>
> OK, so here's a proposal, why not reinvent the Cores party as a Meet
> the Cores Party instead (open to all design summit attendees)? Just
> make sure it's advertised in a way that could only possibly appeal to
> design summit attendees (so the suits don't want to go), use the same
> buget (which will necessitate a dial down) and it becomes an inclusive
> event that serves a useful purpose.

Sure, I'd be totally agnostic on the branding as long as the widest audience is 
invited ... e.g. all ATCs, or even all summit attendees.

Actually that distinction between ATCs and other attendees just sparked another 
thought ...

Traditionally all ATCs earn a free pass for summit, whereas the other attendees 
pay $600 or more for entry. I'm wondering if (a) there's some 
cross-subsidization going on here and (b) if the design summit was cleaved off, 
would the loss of the income from the non-ATCs sound the death-knell for the 
traditional ATC free pass?

From my PoV, that would not be an excellent outcome. Someone with better
visibility on the financial structure of summit funding might be able to
clarify that.

Cheers,
Eoghan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Why do we use pip install -U as our install_command

2016-02-15 Thread Robert Collins
On 11 February 2016 at 16:09, Clark Boylan  wrote:
> The reason that I remember off the top of my head is because we spent
> far too much time telling people to run `tox -r` when their code failed
> during Jenkins testing but ran just fine locally. It removes a
> significant amount of debugging overhead to have everyone using a
> relatively consistent set of packages whenever they rerun tests.

So for liberty and up, we have constraints which avoid this: every
run will be precisely specified once a project has adopted the tox
rules. For kilo, if the only reason for -U was to deal with bad lower
requirements / missing exclusions for bad releases (which is what I
infer from what you recall), then I think we're better off without -U
in kilo. Because so much of our kilo requirements are uncapped, we're
going to face growing breakage as external projects change things.
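
For reference, the constraints mechanism is wired into tox roughly as
follows (a sketch of the conventional pattern; the exact URL and
environment variable vary per project):

    [testenv]
    # Pin every transitive dependency to the tested set, making reruns
    # reproducible without needing `pip install -U`.
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}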

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reminder/request for cross-project stable CPLs to be in #openstack-stable IRC

2016-02-15 Thread Matt Riedemann
This came up in today's stable team meeting, but cross-project liaisons 
for stable branches [1] should be available in the #openstack-stable IRC 
channel. By default if a project doesn't have a stable CPL, then the PTL 
is the rep.


This is more or less just to improve communication between the stable 
team and the projects, for getting attention on issues.


[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Thoughts about the relationship between RDO and TripleO

2016-02-15 Thread John Trowbridge


On 02/15/2016 02:50 PM, James Slagle wrote:
> On Mon, Feb 15, 2016 at 2:05 PM, John Trowbridge  wrote:
>> Howdy,
>>
>> The spec to replace instack-virt-setup[1] got me thinking about the
>> relationship between RDO and TripleO. Specifically, when thinking about
>> where to store/create an undercloud.qcow2 image, and if this effort is
>> worth duplicating.
>>
>> Originally, I agreed with the comments on the spec wrt the fact that we
>> do not want to rely on RDO artifacts for TripleO CI. However, we do
>> exactly that already. Delorean packages are 100% a RDO artifact. So it
>> seems a bit odd to say we do not want to rely on an image that is really
>> just a bunch of those other artifacts, that we already rely on, rolled
>> up into a qcow.
> 
> That's fair I suppose. It just felt a bit like adding dependencies in
> the wrong direction, and I would prefer to see the undercloud image
> generated directly via tripleo-ci.
> 
> Isn't the plan to push more of the Delorean package git repo sources
> under the OpenStack namespace in the future? If so, and things are
> moving in that direction then I figured it would be better to have
> less dependencies on the RDO side, and use as much of the existing
> tripleo-ci as possible.
>

There has been some talk of doing this, but I do not think anyone
currently working on RDO has time to be PTL of an openstack project. I'm
also not sure this would just be delorean packaging branches. My
understanding was that we would move all the branches upstream. In any
case, there is no movement on this right now.

> I was advocating for using tripleo-ci to build the undercloud image,
> because we know it can and already does (we just don't save it), so
> we should default to using tripleo-ci instead of RDO CI. tripleo-ci
> doesn't build a whole delorean snapshot of master today (we only build
> a small subset of the packages we're testing for that CI run), so in
> that case, we use the RDO hosted one. I'm not saying tripleo-ci should
> build a whole delorean repo by any means. Just that in the specific
> case of the undercloud image, tripleo-ci is more or less already
> building that, so we should save and reuse it.
>

Right I agree that it would be great if tripleo-ci was producing this
image. Then RDO could just be a consumer of it. Having solved this
problem for RDO though, I can say there are some pieces of it that are
non-trivial. Where to store the images is a big one. Does TripleO have
somewhere we can store images that would be used by both TripleO and
downstream consumers?

Efficiently creating the image is also not as simple as just taking what
we currently have in tripleo-ci and saving it off before `openstack
undercloud install` is run. We build the overcloud images and install
all of the packages after that in tripleo-ci currently. Those two steps
are the majority of the time, so saving an image before then does not
buy much.

It might be worth splitting out a second spec solely dedicated to how we
can efficiently build an undercloud.qcow2. This could be more process
oriented, i.e. what an efficient list of steps is, rather than a specific
tool to implement that process.
>>
>> On the other hand, it seems a bit odd that we rely on delorean packages
>> at all. This creates a bit of a sticky situation for RDO. Take the case
>> where RDO has identified all issues that need to be fixed to work with
>> HEAD of master, but some patches have not merged yet. It should be ok
>> for RDO to put a couple .patch files in the packaging, and be on our
>> merry way until those are merged upstream and can be removed.
> 
> I actually think that the trunk packaging shouldn't ever do that. It
> should be 100% straight from trunk with no patches.
> 
I agree in principle, but it seems weird that it is not even really the
RDO community's decision to make. This example was really just meant to
show that, for better or worse, the two projects are co-dependent in both
directions. Which then made me wonder: where is the line?

It seems like if the undercloud.qcow2 is simply a collection of packages
pre-installed in an image that allows deploying TripleO, it is not all
that different from an RPM repository. Note, there are some things in the
current image build that fall outside of that, but I think it would be a
better effort to fix those things in RDO than to reinvent what is already
in RDO in TripleO.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-15 Thread Emilien Macchi


On 02/15/2016 01:16 PM, David Moreau Simard wrote:
> So is it implied that a version "8.0.0b1" of a puppet module works
> with the "8.0.0b1" of its parent project?

No, releases are not synced with OpenStack projects.
The only thing we currently guarantee is that 8.0.0b1 will work with
what our CI is testing, which is something close to trunk at this time.

Once we release stable branches, the repos are configured to go on
stable repos.

> This has some implications, it means there are expectations that
> puppet-openstack is able to keep up with upstream changes throughout
> the cycle.
> We've gotten pretty good at following trunk with the myriad of CI but
> some changes are more challenging than others.

We made awesome progress thanks to the RDO folks, who keep up with upstream.
Though we can't say the same about Ubuntu, who do not release packages as
often, compared to you.
That's a blocker if we want to follow OpenStack projects' tags more closely.

Hopefully one day OpenStack will provide official Debian/Ubuntu
packaging, close to trunk, so we might investigate using it instead of
UCA. But that's another topic, which will require discussion and
agreement in our group.

> Would there be times where the puppet module's tagged release would
> lag behind upstream (say, a couple of days because we need to merge new
> things) and if so, does tagging lose some of its value?
> The liberty puppet modules were released quite a while after liberty
> went stable.
> 
> 
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
> 
> dmsimard = [irc, github, twitter]
> 
> 
> On Mon, Feb 15, 2016 at 11:13 AM, Emilien Macchi  wrote:
>> Hi,
>>
>> While Puppet modules releases are independently managed, we have some
>> requests from both RDO & Debian folks to push a first tag in our Puppet
>> modules, for Mitaka release, so they can start provide Mitaka packaging
>> based on tag, and not on commits.
>>
>> This is something we never did before, usually we wait until the end of
>> the cycle and try to push a tag soon after the official release.
>> But we want to experiment beta tags and see if it helps.
>>
>> The Mitaka tag would be 8.0.0b1 and pushed by the end of February (I'll
>> work on it).
>> Though stable/mitaka branch won't be created until official Mitaka
>> release. Same thing for release notes, that will be provided at the end
>> of the cycle as usual.
>>
>> Any thoughts are welcome,
>> --
>> Emilien Macchi
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Baremetal Deploy Ramdisk functional testing

2016-02-15 Thread Jim Rollenhagen
On Mon, Feb 15, 2016 at 01:48:11PM +0200, Maksym Lobur wrote:
> Re-sending with Ironic stamp… Folks, please see below: 

Hi, I meant to reply to this earlier, but didn't get to it, sorry for
that.

> 
> > 
> > Hi All,
> > 
> > In bareon [1] we have a test framework to test a deploy ramdisk with bareon
> > inside (baremetal deployments). This is functional testing: we do full
> > partitioning/image deployment in a VM, then reboot to see if the tenant
> > image deployed properly. The blueprint is at [2], a test example is at [3].
> > We were going to put the framework in a separate repo, while keeping the
> > functional tests in the bareon tree.
> > 
> > Does someone else need to test some kind of deployment ramdisk? Do you
> > already have existing tools for this, or would you be interested in reusing
> > our code? The current pull request is to create a bareon-func-test repo [4].
> > But if it makes sense, we could do something like ramdisk-func-test, i.e.
> > try to generalize the framework to test other ramdisks/agents.

So, as I understand this, the goal is to test the bareon ramdisk without
bringing up the rest of the services that talk to it?

Currently in ironic-land, the only runtime testing of our ramdisk(s) is
the integration tests we do with the rest of openstack (the dsvm tests).
We've considered functional tests of the ironic-python-agent API, but
haven't much considered writing scripts to boot it on a VM and do
things.

Given the ways IPA interacts with Ironic, we'd basically just end up
re-writing a bunch of Ironic code if we wanted to do this; however it
might not be too much code, so maybe it's worth it? I'm not sure; I
guess it isn't a major priority for us right now.

All this is to say, I guess I'd have to look at the functional test
framework you're building. I'm not opposed to making it more general,
and as changing repo names is expensive (requires gerrit downtime), it
might be worth naming it ramdisk-func-test or similar now just in case. :)
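
For the record, the harness being discussed reduces to booting the
ramdisk under a local hypervisor and asserting on the outcome. A purely
hypothetical sketch (every name and flag here is illustrative, not the
bareon framework's actual API):

    import subprocess

    def boot_deploy_ramdisk(kernel, initrd, disk_image, cmdline):
        """Boot a deploy ramdisk against a disk image under QEMU."""
        return subprocess.Popen([
            "qemu-system-x86_64",
            "-m", "2048",
            "-kernel", kernel,
            "-initrd", initrd,
            "-append", cmdline,
            "-drive", "file=%s,format=qcow2" % disk_image,
            "-nographic",
        ])

    # After the deployment inside the ramdisk finishes, the test would
    # reboot the VM from the disk alone and probe (e.g. over SSH) that
    # the tenant image actually came up.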

// jim

> > 
> > [1] https://wiki.openstack.org/wiki/Bareon 
> > 
> > [2] https://blueprints.launchpad.net/bareon/+spec/bareon-functional-testing 
> > 
> > [3] http://pastebin.com/mL39QJS6 
> > [4] https://review.openstack.org/#/c/279120/ 
> > 
> > 
> > 
> > Regards,
> > Max Lobur,
> > OpenStack Developer, Mirantis, Inc.
> > 
> 

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [barbican] Nominating Fernando Diaz for Barbican Core

2016-02-15 Thread Nathan Reller
+1

He is a great addition to the Barbican community.

-Nate

On Mon, Feb 15, 2016 at 1:34 PM, Dave McCowan (dmccowan)
 wrote:
> +1
>
> On 2/15/16, 12:45 PM, "Douglas Mendizábal"
>  wrote:
>
>>Hi All,
>>
>>I would like to nominate Fernando Diaz for the Barbican Core team.
>>Fernando has been an enthusiastic contributor since joining the
>>Barbican team.  He is currently the most active non-core reviewer on
>>Barbican projects for the last 90 days. [1]  He¹s got an excellent eye
>>for review and I think he would make an excellent addition to the team.
>>
>>As a reminder to our current core reviewers, our Core Team policy is
>>documented in the wiki. [2]  So please reply to this thread with your
>>votes.
>>
>>Thanks,
>>- - Douglas Mendizábal
>>
>>[1] http://stackalytics.com/report/contribution/barbican-group/90
>>[2] https://wiki.openstack.org/wiki/Barbican/CoreTeam
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-15 Thread Davanum Srinivas
Douglas,

Which means we should make sure pysaml2 and paramiko do switch to
"pyca/cryptography" ASAP. However, what do we do in the meantime?

-- Dims
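
P.S. One stopgap for the meantime is to feature-detect which library is
installed and tolerate both. A rough sketch (the version_info check is an
assumption worth verifying against both packages):

    import Crypto
    from Crypto.PublicKey import RSA

    # pycrypto ships as 2.x while pycryptodome ships as 3.x.
    IS_PYCRYPTODOME = Crypto.version_info[0] >= 3

    def import_rsa_key(material):
        # Some pycryptodome APIs are strict about bytes, so coerce
        # defensively before importing key material.
        if isinstance(material, str):
            material = material.encode("utf-8")
        return RSA.importKey(material)

    def generate_rsa(bits=2048):
        if IS_PYCRYPTODOME:
            return RSA.generate(bits)                  # progress_func is gone
        return RSA.generate(bits, progress_func=None)  # pycrypto still has it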

On Mon, Feb 15, 2016 at 1:30 PM, Douglas Mendizábal
 wrote:
>
> One more thing: I forgot to point out that pyca/cryptography is
> already part of global-requirements. [1]
>
> - - Douglas Mendizábal
>
> [1]
> http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n25
>
> On 2/15/16 12:24 PM, Douglas Mendizábal wrote:
>> I had not previously heard of pycryptodome. Is this supposed to be
>> a drop-in replacement for pycrypto?  If so then it sounds like
>> they're doing a terrible job of it.
>>
>> The plan for Barbican has been to wait for pyca/cryptography [1] to
>> add support for the apis we needed to be able to drop our pycrypto
>> dependency.  I'll have to double check the latest pyca/cryptography
>> notes, but I do believe it's at a point now where it can be used in
>> Barbican to replace pycrypto. This would be the preferred fix for
>> us.
>>
>> AFAIK the paramiko folks were going to adopt pyca/cryptography as
>> well, so it appears that pycryptodome support will not be merged
>> there either. [2]
>>
>> Additionaly, bespoke pure-python cryptography gives me the heebie
>> jeebies, so I would strongly recommend to move all cryptographic
>> work to use pyca/cryptography instead of pycryptodome.
>>
>> - Douglas Mendizábal
>>
>> [1] https://cryptography.io/en/latest/ [2]
>> https://github.com/paramiko/paramiko/pull/646
>>
>> On 2/15/16 6:44 AM, Haïkel wrote:
>>> 2016-02-14 23:16 GMT+01:00 Davanum Srinivas :
 Hi,

 Short Story: pycryptodome if installed inadvertently will
 break several projects: Example :
 https://review.openstack.org/#/c/279926/

 Long Story: There's a new kid in town pycryptodome:
 https://github.com/Legrandin/pycryptodome

 Because pycrypto itself has not been maintained for a while:
 https://github.com/dlitz/pycrypto

 So folks like pysaml2 and paramiko are trying to switch over:
 https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
 https://github.com/paramiko/paramiko/issues/637

 In fact pysaml2===4.0.3 has already switched over. So the
 requirements bot/script has been trying to alert us to this
 new dependency, you can see Nova fail.
 https://review.openstack.org/#/c/279926/

 Why does it fail? For example, the new library is strict about
 getting bytes for keys and has dropped some parameters in
 methods. for example:
 https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
 https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499

 Another problem: if pycrypto gets installed last then things
 will work; if pycryptodome gets installed last, things will
 fail. So we definitely cannot allow both in our
 global-requirements and upper-constraints. We can always try to
 pin stuff, but things will fail as there are a lot of jobs that
 do not honor upper-constraints. And things will fail in the
 field for Mitaka.

 Action: So what can we do? One possibility is to pin
 requirements and hope for the best. Another is to tolerate the
 install of either pycrypto or pycryptodome and test both
 combinations so we don't have to fight this battle.

 Example for Nova : https://review.openstack.org/#/c/279909/
 Example for Glance : https://review.openstack.org/#/c/280008/
 Example for Barbican :
 https://review.openstack.org/#/c/280014/

 What do you think?

 Thanks, Dims

>>
>>> This is annoying from a packaging PoV.
>>
>>> We have dependencies relying on pycrypto (e.g. oauthlib used by
>>> keystone, paramiko by even more projects), and we can't control
>>> the order of installation. My 2 cents would be to favor the latter
>>> solution and test both combinations until the N or O release (and
>>> then get rid of pycrypto definitively), so we can handle this
>>> gracefully.
>>
>>
>>> Regards, H.
>>

Re: [openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-02-15 Thread Clark Boylan
On Mon, Feb 15, 2016, at 11:48 AM, Ivan Kolodyazhny wrote:
> Hi all,
> 
> I'll talk mostly about python-cinderclient but the same question could be
> related for other clients.
> 
> Now, for python-cinderclient we've got two kinds of functional/integration
> jobs:
> 
> 1) gate-cinderclient-dsvm-functional - a very limited (for now) set of
> functional tests, most of them were part of tempest CLI tests in the
> past.
> 
> 2) gate-tempest-dsvm-neutron-src-python-cinderclient - if I understand
> right, the idea of this job was to have integrated tests that test
> cinderclient with other projects, to verify that a new patch to
> python-cinderclient won't break any other project.
> But it does *not* test cinderclient at all, except a few attach-related
> tests, because Tempest doesn't use python-*client.

Tempest doesn't use python-*client to talk to the APIs but the various
OpenStack services do use python-*client to talk to the other services.
Using cinderclient as an example, nova consumes cinderclient to perform
volume operations in nova/volume/cinder.py. There is value in this
existing test if those code paths are exercised. Basically ensuring the
next release of cinderclient does not break nova. It may be the case
that cinderclient is a bad example because tempest doesn't do volume
operations through nova, but I am sure for many of the other clients
these tests do provide value.

> 
> The same job was added for python-heatclient but was removed because
> devstack didn't install Heat for that job [1].
> 
> We agreed [2] to remove this job from cinderclient gates too, once
> functional or integration tests are implemented.

Just make sure that you don't lose exercising of the above code paths
when this transition happens. If we don't currently test that code it
would be a good goal for any new integration testing to do so.

> 
> 
> There is a proposal for python-cinderclient tests to implement some
> cross-project testing to make sure that a new python-cinderclient won't
> break any existing project that uses it.
> 
> After discussing in IRC with John Griffith (jgriffith) I realized that
> this kind of integration testing could be a cross-project initiative.
> OpenStack Client (OSC) could cover some part of such tests, but does it
> mean that we'll run OSC tests on every patch to python-*client? We could
> run only the cinder-related OSC tests on our gates to verify that a patch
> doesn't break OSC and, maybe, other projects.
> 
> The other option is to implement tests like [3] on a per-project basis and
> call them "integration".  Such tests could cover more cases than the OSC
> functional tests and have more project-related test cases, e.g. testing
> python-cinderclient-specific corner cases that are not related to OSC.
> 
> IMO, it would be good to have a cross-project decision on how to
> implement clients' integration tests per project.
> 
> 
> [1] https://review.openstack.org/#/c/272411/
> [2]
> http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-12-16-16.00.log.html
> [3] https://review.openstack.org/#/c/279432/8
> 
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Google Summer of Code 2016 - Call for ideas and mentors (deadline 19/02/2016)

2016-02-15 Thread Victoria Martínez de la Cruz
Friendly reminder, we are still looking for mentors and internship ideas.
Join us [0] and submit your internship project ideas in [1].

The deadline for our application as a mentoring organization is 19/02/2016.

[0] https://wiki.openstack.org/wiki/GSoC2016
[1] https://wiki.openstack.org/wiki/Internship_ideas

2016-02-01 16:12 GMT-03:00 Victoria Martínez de la Cruz <
victo...@vmartinezdelacruz.com>:

> Hi all,
>
> Google Summer of Code (GSoC) is a program that matches mentoring
> organizations with college and university student developers who are paid
> to write open source code. It has been around since 2005, and we have been
> accepted as a mentoring organization on only one occasion (2014), with a
> great outcome for both the interns and our community. We expect to be able
> to join this year again, but for that, we will need your help.
>
> Mentors
>
> We need to submit our application as a mentoring organization, but for
> that, we need to have a clear outline of what different projects we have
> for interns to work on.
>
> *** The deadline for mentoring organizations applications is 19/02/2016.
> ***
>
> If you are interested in mentoring but you have doubts about it, please
> feel free to reach us here or on #openstack-gsoc. We will be happy to reply
> any doubt you may have about mentoring for this internship. Also, you can
> check out this guide [0].
>
> If you are already convinced that you want to join us as a mentor for this
> round, add your name in the OpenStack Google Summer of Code 2016 wiki page
> [1] and add your project ideas in [2]. Make sure you leave your contact
> information in the OpenStack GSoC 2016 wiki and that you add all the
> important details about the project idea. Also reach us if there is
> something you are not certain about.
>
> Interns
>
> While we don't know yet if we are going to make it as a mentoring
> organization for this round, if you want to join us as an intern and you
> want to help OpenStack get selected as a mentoring organization, you can
> help us by proposing different tasks for the various projects we have in
> our ecosystem.
>
> For your inspiration, you can check out past projects in [3] and [4].
>
> Looking forward to see GSoC happening again in our community!
>
> Thanks,
>
> Victoria
>
> [0] http://en.flossmanuals.net/gsocmentoring/
> [1] https://wiki.openstack.org/wiki/GSoC2016
> [2] https://wiki.openstack.org/wiki/Internship_ideas
> [3] https://wiki.openstack.org/wiki/GSoC2014
> [4] https://wiki.openstack.org/wiki/GSoC2015
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Thoughts about the relationship between RDO and TripleO

2016-02-15 Thread James Slagle
On Mon, Feb 15, 2016 at 2:05 PM, John Trowbridge  wrote:
> Howdy,
>
> The spec to replace instack-virt-setup[1] got me thinking about the
> relationship between RDO and TripleO. Specifically, when thinking about
> where to store/create an undercloud.qcow2 image, and if this effort is
> worth duplicating.
>
> Originally, I agreed with the comments on the spec wrt the fact that we
> do not want to rely on RDO artifacts for TripleO CI. However, we do
> exactly that already. Delorean packages are 100% a RDO artifact. So it
> seems a bit odd to say we do not want to rely on an image that is really
> just a bunch of those other artifacts, that we already rely on, rolled
> up into a qcow.

That's fair I suppose. It just felt a bit like adding dependencies in
the wrong direction, and I would prefer to see the undercloud image
generated directly via tripleo-ci.

Isn't the plan to push more of the Delorean package git repo sources
under the OpenStack namespace in the future? If so, and things are
moving in that direction then I figured it would be better to have
less dependencies on the RDO side, and use as much of the existing
tripleo-ci as possible.

I was advocating for using tripleo-ci to build the undercloud image,
because we know it can and already does (we just don't save it), so
we should default to using tripleo-ci instead of RDO CI. tripleo-ci
doesn't build a whole delorean snapshot of master today (we only build
a small subset of the packages we're testing for that CI run), so in
that case, we use the RDO hosted one. I'm not saying tripleo-ci should
build a whole delorean repo by any means. Just that in the specific
case of the undercloud image, tripleo-ci is more or less already
building that, so we should save and reuse it.

>
> On the other hand, it seems a bit odd that we rely on delorean packages
> at all. This creates a bit of a sticky situation for RDO. Take the case
> where RDO has identified all issues that need to be fixed to work with
> HEAD of master, but some patches have not merged yet. It should be ok
> for RDO to put a couple .patch files in the packaging, and be on our
> merry way until those are merged upstream and can be removed.

I actually think that the trunk packaging shouldn't ever do that. It
should be 100% straight from trunk with no patches.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][all] Integration python-*client tests on gates

2016-02-15 Thread Ivan Kolodyazhny
Hi all,

I'll talk mostly about python-cinderclient but the same question could be
related for other clients.

Now, for python-cinderclient we've got two kinds of functional/integration
jobs:

1) gate-cinderclient-dsvm-functional - a very limited (for now) set of
functional tests, most of them were part of tempest CLI tests in the past.

2) gate-tempest-dsvm-neutron-src-python-cinderclient - if I understand
right, the idea of this job was to have integrated tests that test
cinderclient with other projects, to verify that a new patch to
python-cinderclient won't break any other project.
But it does *not* test cinderclient at all, except a few attach-related
tests, because Tempest doesn't use python-*client.

The same job was added for python-heatclient but was removed because
devstack didn't install Heat for that job [1].

We agreed [2] to remove this job from cinderclient gates too, once
functional or integration tests are implemented.


There is a proposal for python-cinderclient tests to implement some
cross-project testing to make sure that a new python-cinderclient won't
break any existing project that uses it.

After discussing in IRC with John Griffith (jgriffith) I'm realized that it
could be an cross-project initiative in such kind of integration tests.
OpenStack Client (OSC) could cover some part of such tests, but does it
mean that we'll run OSC tests on every patch to python-*client? We can run
only cinder-realated OSC tests on our gates to verify that it doesn't
breack OSC and, may be other project.

The other option is to implement tests like [3] on a per-project basis and
call them "integration".  Such tests could cover more cases than OSC
functional tests and include more project-specific test cases, e.g. some
python-cinderclient corner cases that are not related to OSC.
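For illustration only, such a per-project "integration" test could drive the
client library directly against a running cloud, along these lines (the
credential plumbing here is an assumption, not the actual proposed job):

    # Hedged sketch of a python-cinderclient integration test; env-based
    # auth and the test layout are assumptions for illustration.
    import os
    from cinderclient import client

    def test_volume_lifecycle():
        cinder = client.Client('2',
                               os.environ['OS_USERNAME'],
                               os.environ['OS_PASSWORD'],
                               os.environ['OS_PROJECT_NAME'],
                               os.environ['OS_AUTH_URL'])
        volume = cinder.volumes.create(size=1, name='integration-smoke')
        try:
            assert cinder.volumes.get(volume.id).size == 1
        finally:
            cinder.volumes.delete(volume)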

IMO, it would be good to have a cross-project decision on how to implement
clients' integration tests per project.


[1] https://review.openstack.org/#/c/272411/
[2]
http://eavesdrop.openstack.org/meetings/cinder/2015/cinder.2015-12-16-16.00.log.html
[3] https://review.openstack.org/#/c/279432/8

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] Thoughts about the relationship between RDO and TripleO

2016-02-15 Thread John Trowbridge
Howdy,

The spec to replace instack-virt-setup[1] got me thinking about the
relationship between RDO and TripleO. Specifically, when thinking about
where to store/create an undercloud.qcow2 image, and if this effort is
worth duplicating.

Originally, I agreed with the comments on the spec wrt the fact that we
do not want to rely on RDO artifacts for TripleO CI. However, we do
exactly that already. Delorean packages are 100% a RDO artifact. So it
seems a bit odd to say we do not want to rely on an image that is really
just a bunch of those other artifacts, that we already rely on, rolled
up into a qcow.

On the other hand, it seems a bit odd that we rely on delorean packages
at all. This creates a bit of a sticky situation for RDO. Take the case
where RDO has identified all issues that need to be fixed to work with
HEAD of master, but some patches have not merged yet. It should be ok
for RDO to put a couple .patch files in the packaging, and be on our
merry way until those are merged upstream and can be removed. However,
if we did this today, it would break TripleO CI since TripleO CI would
then pick up these patched RPMs from delorean.

I am not sure what the best path to resolve this is. Ideally, the above
need for .patch files is not there, but that is another topic.

-trown


[1]
https://review.openstack.org/#/c/276810/2/specs/mitaka/tripleo-quickstart.rst

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-15 Thread Anastasia Urlapova
Aleksey, great news!

On Mon, Feb 15, 2016 at 7:36 PM, Alexey Shtokolov 
wrote:

> Fuelers,
>
> Task based deployment engine has been enabled in master (Fuel 9.0) by
> default [0]
>
> [0] - https://review.openstack.org/#/c/273693/
>
> WBR, Alexey Shtokolov
>
> 2016-02-09 21:57 GMT+03:00 Vladimir Kuklin :
>
>> Folks
>>
>> It seems that docker removal spoilt our celebration a bit. Here is a bug
>> link https://bugs.launchpad.net/fuel/+bug/1543720 . Fix is trivial, but
>> will postpone swarm run for another day. Nevertheless, it seems to be the
>> only issue affecting our ability to use TBD.
>>
>> Stay tuned!
>>
>> On Tue, Feb 9, 2016 at 2:26 PM, Igor Kalnitsky 
>> wrote:
>>
>>> > I've run BVT more than 100 times, it works,
>>>
>>> You run it some time ago. There were a lot of opportunities to
>>> introduce regression in both Nailgun and tasks of Fuel Library. ;)
>>>
>>> > We are going to run a swarm test today against the ISO with enabled
>>> task-based deployment
>>>
>>> So there will be a custom ISO, correct? If so, it works for me and
>>> I'll wait for its result.
>>>
>>> On Tue, Feb 9, 2016 at 1:17 PM, Alexey Shtokolov
>>>  wrote:
>>> > Igor,
>>> >
>>> > We are going to run a swarm test today against the ISO with enabled
>>> > task-based deployment, then check results and merge changes tomorrow.
>>> > I've run BVT more than 100 times, it works, but I would like to check
>>> more
>>> > deployment cases.
>>> > And I guess it should be easy to troubleshoot if docker-related and
>>> > task-based related changes will be separated by a few days.
>>> >
>>> > 2016-02-09 13:39 GMT+03:00 Igor Kalnitsky :
>>> >>
>>> >> Well, I'm going to build a new ISO and run BVT. As soon as they are
>>> >> green, I'm going to approve the change.
>>> >>
>>> >> On Tue, Feb 9, 2016 at 12:32 PM, Bogdan Dobrelya <
>>> bdobre...@mirantis.com>
>>> >> wrote:
>>> >> > On 08.02.2016 17:05, Igor Kalnitsky wrote:
>>> >> >> Hey Fuelers,
>>> >> >>
>>> >> >> When we are going to enable it? I think since HCF is passed for
>>> >> >> stable/8.0, it's time to enable task-based deployment for master
>>> >> >> branch.
>>> >> >>
>>> >> >> Opinion?
>>> >> >
>>> >> > This must be done for the 9.0, IMHO.
>>> >> >
>>> >> >>
>>> >> >> - Igor
>>> >> >>
>>> >> >> On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya
>>> >> >>  wrote:
>>> >> >>> On 02.02.2016 17:35, Alexey Shtokolov wrote:
>>> >>  Hi Fuelers!
>>> >> 
>>> >>  As you may be aware, since [0] Fuel has implemented a new
>>> >>  orchestration
>>> >>  engine [1]
>>> >>  We switched the deployment paradigm from role-based (aka
>>> granular) to
>>> >>  task-based and now Fuel can deploy all nodes simultaneously using
>>> >>  cross-node dependencies between deployment tasks.
>>> >> >>>
>>> >> >>> That is great news! Please do not forget about docs updates as
>>> well.
>>> >> >>> Those docs are always forgotten like poor orphans... I submitted a
>>> >> >>> patch
>>> >> >>> [0] to MOS docs, please review and add more details, if possible,
>>> for
>>> >> >>> plugins impact as well.
>>> >> >>>
>>> >> >>> [0] https://review.fuel-infra.org/#/c/16509/
>>> >> >>>
>>> >> 
>>> >>  This feature is experimental in Fuel 8.0 and will be enabled by
>>> >>  default
>>> >>  for Fuel 9.0
>>> >> 
>>> >>  Allow me to show you the results. We made some benchmarks on our
>>> bare
>>> >>  metal lab [2]
>>> >> 
>>> >>  Case #1. 3 controllers + 7 computes w/ ceph.
>>> >>  Task-based deployment takes *~38* minutes vs *~1h15m* for
>>> granular
>>> >>  (*~2*
>>> >>  times faster)
>>> >>  Here and below the deployment time is average time for 10 runs
>>> >> 
>>> >>  Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
>>> >>  Task-based deployment takes *~41* minutes vs *~1h32m* for
>>> granular
>>> >>  (*~2.24* times faster)
>>> >> 
>>> >> 
>>> >> 
>>> >>  Also we took measurements for Fuel CI test cases. Standard BVT
>>> >>  (Master
>>> >>  node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs
>>> on one
>>> >>  host)
>>> >> 
>>> >>  Fuel CI slaves with *4 *cores *~1.1* times faster
>>> >>  In case of 4 cores for 7 VMs they are fighting for CPU resources
>>> and
>>> >>  it
>>> >>  marginalizes the gain of task-based deployment
>>> >> 
>>> >>  Fuel CI slaves with *6* cores *~1.6* times faster
>>> >> 
>>> >>  Fuel CI slaves with *12* cores *~1.7* times faster
>>> >> >>>
>>> >> >>> These are really outstanding results!
>>> >> >>> (tl;dr)
>>> >> >>> I believe the next step may be to leverage the "external install
>>> & svc
>>> >> >>> management" feature (example [1]) of the Liberty release (7.0.0)
>>> of
>>> >> >>> Puppet-Openstack (PO) modules. So we could use separate concurrent
>>> >> >>> cross-depends based tasks 

Re: [openstack-dev] [barbican] Nominating Fernando Diaz for Barbican Core

2016-02-15 Thread Dave McCowan (dmccowan)
+1

On 2/15/16, 12:45 PM, "Douglas Mendizábal"
 wrote:

>-BEGIN PGP SIGNED MESSAGE-
>Hash: SHA512
>
>Hi All,
>
>I would like to nominate Fernando Diaz for the Barbican Core team.
>Fernando has been an enthusiastic contributor since joining the
>Barbican team.  He is currently the most active non-core reviewer on
>Barbican projects for the last 90 days. [1]  He’s got an excellent eye
>for review and I think he would make an excellent addition to the team.
>
>As a reminder to our current core reviewers, our Core Team policy is
>documented in the wiki. [2]  So please reply to this thread with your
>votes.
>
>Thanks,
>- - Douglas Mendizábal
>
>[1] http://stackalytics.com/report/contribution/barbican-group/90
>[2] https://wiki.openstack.org/wiki/Barbican/CoreTeam
>-BEGIN PGP SIGNATURE-
>
>iQIcBAEBCgAGBQJWwg7GAAoJEB7Z2EQgmLX7mEUP/RFZCAZ9ZbDAB48lgek1dAPo
>0KoK8IjkQhwKE7VYYhHwoZIlIdG0Eb9t8Y8ia9+k3SDzo0Q3upXckqEZbiZUZWQT
>gv0BwpMkfUJ8zQeEQouomN8p/ZeRayZuAi3rz5rfte/JDr6IA1eFmPFRAQG+UO1H
>hN6Al7nGm1Ixu/t969fqzslI6FwUu6CHGMDAcoXxL1mlCptC82mRzZZ26m0ObPFQ
>GcnVka7CPseMqHCM6ls9I8AIubAWRehth9oLp7MP6D1vmJagNkwsWnHw5iB1rfBu
>5TU9litnvei89kZ0uoIiOVruo9zh8R/hXn7CT3s3+9sQdwURrmoTHpEWlPlFQ8c7
>ZRqRB7xt4L6jBt3lhoxCDftbmLIDMjBDPsfZjQKG44c+Tr0XuvaWTBVu/giJhvYW
>RnKbQNhB/cByVooVc9itRC8QxZfI+3Ee1X+YEJrALDxjinAl7cHbi1T3DhmFyhRn
>RbnF/9OVqy4+EqLsZFqEWdVZtyn29NWBOmiTyYtufufI41Yu+L6KU/hKj/a5HcSh
>P63Bwp2cpyu+bJsCvMJWfXwW2odUte9RBKsmozFz2Q1ge2FMsr1nR7U0/+DVLoYI
>GLHB3uvnKi2PgWAdbJCS21J6bGlhVX/MoIP/4gWmVuG5LflAfbyg3n5STgnRr7vQ
>iBe0oW8NcTsETUVCYkwA
>=09Wx
>-END PGP SIGNATURE-
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-15 Thread Douglas Mendizábal
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

One more thing: I forgot to point out that pyca/cryptography is
already part of global-requirements. [1]

- - Douglas Mendizábal

[1]
http://git.openstack.org/cgit/openstack/requirements/tree/global-requirements.txt#n25

On 2/15/16 12:24 PM, Douglas Mendizábal wrote:
> I had not previously heard of pycryptodome. Is this supposed to be 
> a drop-in replacement for pycrypto?  If so then it sounds like 
> they're doing a terrible job of it.
> 
> The plan for Barbican has been to wait for pyca/cryptography [1] to
> add support for the apis we needed to be able to drop our pycrypto
> dependency.  I'll have to double check the latest pyca/cryptography
> notes, but I do believe it's at a point now where it can be used in
> Barbican to replace pycrypto. This would be the preferred fix for
> us.
> 
> AFAIK the paramiko folks were going to adopt pyca/cryptography as 
> well, so it appears that pycryptodome support will not be merged 
> there either. [2]
> 
> Additionally, bespoke pure-python cryptography gives me the heebie 
> jeebies, so I would strongly recommend to move all cryptographic 
> work to use pyca/cryptography instead of pycryptodome.
> 
> - Douglas Mendizábal
> 
> [1] https://cryptography.io/en/latest/ [2] 
> https://github.com/paramiko/paramiko/pull/646
> 
> On 2/15/16 6:44 AM, Haïkel wrote:
>> 2016-02-14 23:16 GMT+01:00 Davanum Srinivas :
>>> Hi,
>>> 
>>> Short Story: pycryptodome if installed inadvertently will
>>> break several projects: Example : 
>>> https://review.openstack.org/#/c/279926/
>>> 
>>> Long Story: There's a new kid in town pycryptodome: 
>>> https://github.com/Legrandin/pycryptodome
>>> 
>>> Because pycrypto itself has not been maintained for a while: 
>>> https://github.com/dlitz/pycrypto
>>> 
>>> So folks like pysaml2 and paramiko are trying to switch over: 
>>> https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
>>> 
>>> https://github.com/paramiko/paramiko/issues/637
>>> 
>>> In fact pysaml2===4.0.3 has already switched over. So the 
>>> requirements bot/script has been trying to alert us to this
>>> new dependency, you can see Nova fail. 
>>> https://review.openstack.org/#/c/279926/
>>> 
>>> Why does it fail? For example, the new library is strict about 
>>> getting bytes for keys and has dropped some parameters in 
>>> methods. for example: 
>>> https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
>>> 
>>> https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499
>>> 
>>> Another problem: if pycrypto gets installed last then things 
>>> will work; if pycryptodome gets installed last, things will 
>>> fail. So we definitely cannot allow both in our 
>>> global-requirements and upper-constraints. We can always try to
>>> pin stuff, but things will fail as there are a lot of jobs that
>>> do not honor upper-constraints. And things will fail in the
>>> field for Mitaka.
>>> 
>>> Action: So what can we do? One possibility is to pin 
>>> requirements and hope for the best. Another is to tolerate the 
>>> install of either pycrypto or pycryptodome and test both 
>>> combinations so we don't have to fight this battle.
>>> 
>>> Example for Nova : https://review.openstack.org/#/c/279909/ 
>>> Example for Glance : https://review.openstack.org/#/c/280008/ 
>>> Example for Barbican : 
>>> https://review.openstack.org/#/c/280014/
>>> 
>>> What do you think?
>>> 
>>> Thanks, Dims
>>> 
> 
>> This is annoying from a packaging PoV.
> 
>> We have dependencies relying on pycrypto (e.g oauthlib used by 
>> keystone, paramiko by even more projects), and we can't control 
>> the order of installation. My 2 cts will be to favor the latter 
>> solution and test both combinations until N or O releases (and 
>> then get rid of pycrypto definitively), so we can handle this 
>> gracefully.
> 
> 
>> Regards, H.
> 
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-

iQIcBAEBCgAGBQJWwhkqAAoJEB7Z2EQgmLX7FaoP/17xVW32kefsDZi48xDvKzul
4IXIOcImSPNQX5FEDi6A9Q/TrTQPmyZWLYA1Ua+3dKG7fZoQ5r/cFgLCtX/gw3q/
E2rtCJxdaYfg6AYUCeLJrtYifio7eyVGp8PBTNSF8x4vTQu3qfKYYEvuTlevOrAr
RTXgjfDq8Dpw1U1yIgmu+civN22TJIB0+drM2RDHvndDNsHzLRdFv0OIaCzDEyP0
Ao5m5cjr5DAOGUwA5r2r+LAMFDe0malvC0jAAmR0u+GC2W2ys18IVvvvIXSMY8u2

Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-15 Thread Douglas Mendizábal
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

I had not previously heard of pycryptodome. Is this supposed to be a
drop-in replacement for pycrypto?  If so then it sounds like they're
doing a terrible job of it.

The plan for Barbican has been to wait for pyca/cryptography [1] to
add support for the apis we needed to be able to drop our pycrypto
dependency.  I'll have to double check the latest pyca/cryptography
notes, but I do believe it's at a point now where it can be used in
Barbican to replace pycrypto. This would be the preferred fix for us.
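Just to make the direction concrete, the swap we have in mind looks roughly
like the following sketch (illustrative only, not Barbican's actual code):

    # Hedged example: RSA key generation/serialization with
    # pyca/cryptography instead of pycrypto's Crypto.PublicKey.RSA.
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    private_key = rsa.generate_private_key(
        public_exponent=65537, key_size=2048, backend=default_backend())
    pem = private_key.private_bytes(
        encoding=serialization.Encoding.PEM,
        format=serialization.PrivateFormat.TraditionalOpenSSL,
        encryption_algorithm=serialization.NoEncryption())
    # pem is bytes, which sidesteps the str/bytes strictness issue that
    # bites when swapping pycrypto for pycryptodome.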

AFAIK the paramiko folks were going to adopt pyca/cryptography as
well, so it appears that pycryptodome support will not be merged
there either. [2]

Additionally, bespoke pure-python cryptography gives me the heebie
jeebies, so I would strongly recommend to move all cryptographic work
to use pyca/cryptography instead of pycryptodome.

- - Douglas Mendizábal

[1] https://cryptography.io/en/latest/
[2] https://github.com/paramiko/paramiko/pull/646

On 2/15/16 6:44 AM, Haïkel wrote:
> 2016-02-14 23:16 GMT+01:00 Davanum Srinivas :
>> Hi,
>> 
>> Short Story: pycryptodome if installed inadvertently will break 
>> several projects: Example : 
>> https://review.openstack.org/#/c/279926/
>> 
>> Long Story: There's a new kid in town pycryptodome: 
>> https://github.com/Legrandin/pycryptodome
>> 
>> Because pycrypto itself has not been maintained for a while: 
>> https://github.com/dlitz/pycrypto
>> 
>> So folks like pysaml2 and paramiko are trying to switch over: 
>> https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
>> 
>> https://github.com/paramiko/paramiko/issues/637
>> 
>> In fact pysaml2===4.0.3 has already switched over. So the 
>> requirements bot/script has been trying to alert us to this new 
>> dependency, you can see Nova fail. 
>> https://review.openstack.org/#/c/279926/
>> 
>> Why does it fail? For example, the new library is strict about 
>> getting bytes for keys and has dropped some parameters in 
>> methods. for example: 
>> https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
>> 
>> https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499
>> 
>> Another problem: if pycrypto gets installed last then things
>> will work; if pycryptodome gets installed last, things will
>> fail. So we definitely cannot allow both in our
>> global-requirements and upper-constraints. We can always try to
>> pin stuff, but things will fail as there are a lot of jobs that
>> do not honor upper-constraints. And things will fail in the field
>> for Mitaka.
>> 
>> Action: So what can we do? One possibility is to pin requirements
>> and hope for the best. Another is to tolerate the install of
>> either pycrypto or pycryptodome and test both combinations so we
>> don't have to fight this battle.
>> 
>> Example for Nova : https://review.openstack.org/#/c/279909/ 
>> Example for Glance : https://review.openstack.org/#/c/280008/ 
>> Example for Barbican : https://review.openstack.org/#/c/280014/
>> 
>> What do you think?
>> 
>> Thanks, Dims
>> 
> 
> This is annoying from a packaging PoV.
> 
> We have dependencies relying on pycrypto (e.g oauthlib used by 
> keystone, paramiko by even more projects), and we can't control
> the order of installation. My 2 cts will be to favor the latter 
> solution and test both combinations until N or O releases (and then
> get rid of pycrypto definitively), so we can handle this 
> gracefully.
> 
> 
> Regards, H.
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
-BEGIN PGP SIGNATURE-

iQIcBAEBCgAGBQJWwhehAAoJEB7Z2EQgmLX7ZvwP/1a4vWgVeryvJGXNP/O5aoml
hvlvJCcJrW0vyRfycQBN4nNSVZLrhjxy+5XIW86/OjfDVnSci2hdI+zKwoGpSrDM
NEi80j6ll31QLBQMDVlNvwv/5DGukJ1fjN35IhHMwWYCBBOU7VGFUuBhdwi47vW4
qHI99Rkf1P6wpVygPTRMye0Z9T249XiYtDckverqEGT7jsYu0SBbK3ti/zbcSmXw
upSAQRYa9GIklVe3GMd0CiD933YsxpCOqGtuhtwslPlbCh0Pd23FbRLFf+Sufojl
9hky7dbl/gKFjf2tHaenYdFun+mlP7bKpYzJ+Hghszw3BACpXeK+U+dcdg9wJTgy
POejML3Kuo5jYnCmWahWuNCuSHepace2E36nm0hsAcC5ntePrKHI31fo9nmiyz/4
1XmUQ96HEl2CUVWFpcYbencf+412o3RGpETita26gUOK+iiBemEA4WWmfAI+9uo0
v3b014Jpyth25CV6uB4vSotbk5p191EBPaUVR7kMhMfx2YJZFWMXD+Hifi72vWjs
oSpoojTiDCj6ctEocTGGnnqMSaO8bNjLOk5fvO0IyLcEjkLrMZEeXS8UsCsyMuQ5
XNncop2G6ABWbrrkpwkAJMoOoHqjQ48DDlPd4qHAJueYh6ENJr/WOVftG7htESo/
BTUtLmCOHdtR05xVf3Hn
=t6oe
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-15 Thread David Moreau Simard
So is it implied that a version "8.0.0b1" of a puppet module works
with the "8.0.0b1" of its parent project?

This has some implications: it means there are expectations that
puppet-openstack is able to keep up with upstream changes throughout
the cycle.
We've gotten pretty good at following trunk with the myriad of CI but
some changes are more challenging than others.

Would there be times when the puppet module's tagged release would
lag behind upstream (say, a couple of days because we need to merge new
things), and if so, does tagging lose some of its value?
The liberty puppet modules were released quite a while after liberty
went stable.


David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Mon, Feb 15, 2016 at 11:13 AM, Emilien Macchi  wrote:
> Hi,
>
> While Puppet module releases are independently managed, we have some
> requests from both RDO & Debian folks to push a first tag in our Puppet
> modules, for the Mitaka release, so they can start providing Mitaka packaging
> based on a tag, and not on commits.
>
> This is something we never did before, usually we wait until the end of
> the cycle and try to push a tag soon after the official release.
> But we want to experiment beta tags and see if it helps.
>
> The Mitaka tag would be 8.0.0b1 and pushed by the end of February (I'll
> work on it).
> Though stable/mitaka branch won't be created until official Mitaka
> release. Same thing for release notes, that will be provided at the end
> of the cycle as usual.
>
> Any thoughts are welcome,
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Push Mitaka beta tag

2016-02-15 Thread Matt Fischer
Emilien,

More tags like this cannot hurt; they make it easier to follow things.
Thanks for doing this.

On Mon, Feb 15, 2016 at 9:13 AM, Emilien Macchi  wrote:

> Hi,
>
> While Puppet module releases are independently managed, we have some
> requests from both RDO & Debian folks to push a first tag in our Puppet
> modules, for the Mitaka release, so they can start providing Mitaka packaging
> based on a tag, and not on commits.
>
> This is something we never did before, usually we wait until the end of
> the cycle and try to push a tag soon after the official release.
> But we want to experiment beta tags and see if it helps.
>
> The Mitaka tag would be 8.0.0b1 and pushed by the end of February (I'll
> work on it).
> Though stable/mitaka branch won't be created until official Mitaka
> release. Same thing for release notes, that will be provided at the end
> of the cycle as usual.
>
> Any thoughts are welcome,
> --
> Emilien Macchi
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread Hongbin Lu
Regarding the COE mode, there seem to be three options:

1.   Place both master nodes and worker nodes in the user’s tenant (current 
implementation).

2.   Place only worker nodes in the user’s tenant.

3.   Hide both master nodes and worker nodes from the user’s tenant.

Frankly, I don’t know which one will succeed/fail in the future. Each mode 
seems to have use cases. Maybe magnum could support multiple modes?

Best regards,
Hongbin

From: Corey O'Brien [mailto:coreypobr...@gmail.com]
Sent: February-15-16 8:43 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi all,

A few thoughts to add:

I like the idea of isolating the masters so that they are not 
tenant-controllable, but I don't think the Magnum control plane is the right 
place for them. They still need to be running on tenant-owned resources so that 
they have access to things like isolated tenant networks or that any bandwidth 
they consume can still be attributed and billed to tenants.

I think we should extend that concept a little to include worker nodes as well. 
While they should live in the tenant like the masters, they shouldn't be 
controllable by the tenant through anything other than the COE API. The main 
use case that Magnum should be addressing is providing a managed COE 
environment. Like Hongbin mentioned, Magnum users won't have the domain 
knowledge to properly maintain the swarm/k8s/mesos infrastructure the same way 
that Nova users aren't expected to know how to manage a hypervisor.

I agree with Egor that trying to have Magnum schedule containers is going to be 
a losing battle. Swarm/K8s/Mesos are always going to have better scheduling for 
their containers. We don't have the resources to try to be yet another 
container orchestration engine. Besides that, as a developer, I don't want to 
learn another set of orchestration semantics when I already know swarm or k8s 
or mesos.

@Kris, I appreciate the real use case you outlined. In your idea of having 
multiple projects use the same masters, how would you intend to isolate them? 
As far as I can tell none of the COEs would have any way to isolate those teams 
from each other if they share a master. I think this is a big problem with the 
idea of sharing masters even within a single tenant. As an operator, I 
definitely want to know that users can isolate their resources from other users 
and tenants can isolate their resources from other tenants.

Corey

On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao > 
wrote:
Hi,

I wanted to give some thoughts to the thread.

There are various perspectives around “Hosted vs Self-managed COE”, but if you 
stand in the developer's position, it basically comes down to “Ops vs 
Flexibility”.

For those who want more control of the stack, so as to customize in anyway they 
see fit, self-managed is a more appealing option. However, one may argue that 
the same job can be done with a heat template+some patchwork of cinder/neutron. 
And the heat template is more customizable than magnum, which probably 
introduces some requirements on the COE configuration.

For people who don't want to manage the COE, hosted is a no-brainer. The 
question here is which one is the core compute engine in the stack: nova or 
the COE? Unless you are running a public, multi-tenant OpenStack deployment, it 
is highly likely that you are sticking with only one COE. Supposing k8s is what 
your team deals with every day, why do you need nova sitting under k8s, when 
its job is just launching some VMs? After all, it is the COE that 
orchestrates cinder/neutron.

One idea is to put the COE at the same layer as nova. Instead of running 
atop nova, the two run side by side. So you get two compute engines: nova for 
IaaS workloads, k8s for CaaS workloads. If you go this way, hypernetes 
is probably what you are looking for.

Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with a Docker 
registry, and use nova to launch Docker images. But this is not done by 
nova-docker, simply because it is hard to integrate things like cinder/neutron 
with lxc. The idea is a nova-hyper driver. Since Hyper is hypervisor-based, it 
is much easier to make it work with others. SHAMELESS PROMOTION: if you are 
interested in this idea, we've submitted a proposal at the Austin summit: 
https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211.

Peng

Disclaimer: I maintain Hyper.

-
Hyper - Make VM run like Container



On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu 
> wrote:
My replies are inline.

From: Kai Qiang Wu [mailto:wk...@cn.ibm.com]
Sent: February-14-16 7:17 PM
To: OpenStack 

Re: [openstack-dev] [barbican] Nominating Fernando Diaz for Barbican Core

2016-02-15 Thread Juan Antonio Osorio
+1 !

On Mon, Feb 15, 2016 at 7:45 PM, Douglas Mendizábal <
douglas.mendiza...@rackspace.com> wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> Hi All,
>
> I would like to nominate Fernando Diaz for the Barbican Core team.
> Fernando has been an enthusiastic contributor since joining the
> Barbican team.  He is currently the most active non-core reviewer on
> Barbican projects for the last 90 days. [1]  He’s got an excellent eye
> for review and I think he would make an excellent addition to the team.
>
> As a reminder to our current core reviewers, our Core Team policy is
> documented in the wiki. [2]  So please reply to this thread with your
> votes.
>
> Thanks,
> - - Douglas Mendizábal
>
> [1] http://stackalytics.com/report/contribution/barbican-group/90
> [2] https://wiki.openstack.org/wiki/Barbican/CoreTeam
> -BEGIN PGP SIGNATURE-
>
> iQIcBAEBCgAGBQJWwg7GAAoJEB7Z2EQgmLX7mEUP/RFZCAZ9ZbDAB48lgek1dAPo
> 0KoK8IjkQhwKE7VYYhHwoZIlIdG0Eb9t8Y8ia9+k3SDzo0Q3upXckqEZbiZUZWQT
> gv0BwpMkfUJ8zQeEQouomN8p/ZeRayZuAi3rz5rfte/JDr6IA1eFmPFRAQG+UO1H
> hN6Al7nGm1Ixu/t969fqzslI6FwUu6CHGMDAcoXxL1mlCptC82mRzZZ26m0ObPFQ
> GcnVka7CPseMqHCM6ls9I8AIubAWRehth9oLp7MP6D1vmJagNkwsWnHw5iB1rfBu
> 5TU9litnvei89kZ0uoIiOVruo9zh8R/hXn7CT3s3+9sQdwURrmoTHpEWlPlFQ8c7
> ZRqRB7xt4L6jBt3lhoxCDftbmLIDMjBDPsfZjQKG44c+Tr0XuvaWTBVu/giJhvYW
> RnKbQNhB/cByVooVc9itRC8QxZfI+3Ee1X+YEJrALDxjinAl7cHbi1T3DhmFyhRn
> RbnF/9OVqy4+EqLsZFqEWdVZtyn29NWBOmiTyYtufufI41Yu+L6KU/hKj/a5HcSh
> P63Bwp2cpyu+bJsCvMJWfXwW2odUte9RBKsmozFz2Q1ge2FMsr1nR7U0/+DVLoYI
> GLHB3uvnKi2PgWAdbJCS21J6bGlhVX/MoIP/4gWmVuG5LflAfbyg3n5STgnRr7vQ
> iBe0oW8NcTsETUVCYkwA
> =09Wx
> -END PGP SIGNATURE-
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] Nominating Fernando Diaz for Barbican Core

2016-02-15 Thread Douglas Mendizábal
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi All,

I would like to nominate Fernando Diaz for the Barbican Core team.
Fernando has been an enthusiastic contributor since joining the
Barbican team.  He is currently the most active non-core reviewer on
Barbican projects for the last 90 days. [1]  He’s got an excellent eye
for review and I think he would make an excellent addition to the team.

As a reminder to our current core reviewers, our Core Team policy is
documented in the wiki. [2]  So please reply to this thread with your
votes.

Thanks,
- - Douglas Mendizábal

[1] http://stackalytics.com/report/contribution/barbican-group/90
[2] https://wiki.openstack.org/wiki/Barbican/CoreTeam
-BEGIN PGP SIGNATURE-

iQIcBAEBCgAGBQJWwg7GAAoJEB7Z2EQgmLX7mEUP/RFZCAZ9ZbDAB48lgek1dAPo
0KoK8IjkQhwKE7VYYhHwoZIlIdG0Eb9t8Y8ia9+k3SDzo0Q3upXckqEZbiZUZWQT
gv0BwpMkfUJ8zQeEQouomN8p/ZeRayZuAi3rz5rfte/JDr6IA1eFmPFRAQG+UO1H
hN6Al7nGm1Ixu/t969fqzslI6FwUu6CHGMDAcoXxL1mlCptC82mRzZZ26m0ObPFQ
GcnVka7CPseMqHCM6ls9I8AIubAWRehth9oLp7MP6D1vmJagNkwsWnHw5iB1rfBu
5TU9litnvei89kZ0uoIiOVruo9zh8R/hXn7CT3s3+9sQdwURrmoTHpEWlPlFQ8c7
ZRqRB7xt4L6jBt3lhoxCDftbmLIDMjBDPsfZjQKG44c+Tr0XuvaWTBVu/giJhvYW
RnKbQNhB/cByVooVc9itRC8QxZfI+3Ee1X+YEJrALDxjinAl7cHbi1T3DhmFyhRn
RbnF/9OVqy4+EqLsZFqEWdVZtyn29NWBOmiTyYtufufI41Yu+L6KU/hKj/a5HcSh
P63Bwp2cpyu+bJsCvMJWfXwW2odUte9RBKsmozFz2Q1ge2FMsr1nR7U0/+DVLoYI
GLHB3uvnKi2PgWAdbJCS21J6bGlhVX/MoIP/4gWmVuG5LflAfbyg3n5STgnRr7vQ
iBe0oW8NcTsETUVCYkwA
=09Wx
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Trove] Horizon-Trove External Repository

2016-02-15 Thread Thai Q Tran
Great article Andreas!
 
For angular translation to work, you'll need to reference https://github.com/openstack/horizon/blob/master/horizon/utils/babel_extract_angular.py, which the guide already covers.
 
To use it, see https://angular-gettext.rocketeer.be/dev-guide/annotate/ for examples.
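If a concrete example helps, this is roughly how that extractor can be
exercised directly via babel's generic extract API; the template path below
is a made-up placeholder:

    # Illustrative sketch only: run horizon's AngularJS extractor over
    # one template file (path and plugin layout are assumptions).
    from babel.messages.extract import extract
    from horizon.utils.babel_extract_angular import extract_angular

    with open('mydashboard/static/app/panel.html', 'rb') as f:
        for lineno, message, comments, context in extract(extract_angular, f):
            print(lineno, message)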
 
 
----- Original message -----
From: Andreas Jaeger
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Horizon][Trove] Horizon-Trove External Repository
Date: Mon, Feb 15, 2016 2:44 AM

On 2016-02-15 11:32, Omer (Nokia - IL) Etrog wrote:
> Hi,
> Is there proper documentation how to add localization for external
> plugins that use AngularJS?

http://docs.openstack.org/infra/manual/creators.html#enabling-translation-infrastructure

Note that the translation team only translates official projects - and
prioritizes there on what they do,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
       HRB 21284 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project] Meeting, Tue February 16th, 21:00 UTC

2016-02-15 Thread Mike Perez

Hi all,

We will be having a meeting tomorrow February 16th at 21:00 UTC in the
#openstack-meeting-cp channel.

* Team announcements (horizontal, vertical, diagonal)
* A Common Policy Scenario Across All Projects [1]
* Support for 4-byte unicode for naming volume, snapshot, instance
  etc. (sheel, jgregor)
  * MySQL's utf8 character set does not support 4-byte
characters (utf8mb4 does).
  * In cinder, 4-byte unicode support is required for entity names like
volume, snapshot, etc.
  * Same would be the case with nova for instance name and other
entities.
  * Discussion required for conversion of utf8 to utf8mb4 to support
4-byte unicode in different openstack components (see the sketch after
the references below).
* Open discussion

[1] - https://review.openstack.org/#/c/245629/
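As a concrete illustration of the conversion under discussion (table and
database names here are placeholders, not an agreed migration):

    # Hypothetical alembic step converting one table to utf8mb4.
    from alembic import op

    def upgrade():
        op.execute('ALTER TABLE volumes '
                   'CONVERT TO CHARACTER SET utf8mb4 '
                   'COLLATE utf8mb4_unicode_ci')

One reason this needs discussion: with utf8mb4, an indexed VARCHAR(255)
column needs 1020 bytes of index key, which exceeds InnoDB's default
767-byte prefix limit on older MySQL configurations.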

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Mitaka Mid-Cycle Coding Sprint Registration

2016-02-15 Thread Kyle Mestery
On Mon, Feb 15, 2016 at 10:33 AM, Anita Kuno  wrote:
> On 02/15/2016 04:06 PM, Kyle Mestery wrote:
>> Hi folks!
>>
>> The mid-cycle is almost upon us. IBM, as the sponsor company, is
>> requesting some information from everyone who is registered (name, email,
>> company, US citizen or not), so please make sure to register on the
>> Eventbrite site I've created here [1]. If everyone who's attending
>> could please do that by tomorrow that would help our sponsors a lot!
>>
>> Thank you!
>> Kyle
>>
>> [1] 
>> https://www.eventbrite.com/e/neutron-mitaka-mid-cycle-coding-sprint-tickets-21634783219
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> I just registered using this link and was asked for my email and first
> and last name.
>
> I wasn't asked for my country of citizenship nor was I asked for the
> name of the folks who send me paycheques.
>
I've updated the eventbrite to acquire both of those bits of information.

> Thanks,
> Anita.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Mistral team meeting minutes

2016-02-15 Thread Nikolay Makhotkin
Thank you for attending the meeting today!

The next meeting is scheduled for 22 Feb. It is a non-working day in Russia, so
Renat, Anastasia and I very likely won't come to the meeting.

Minutes:
http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-02-15-16.00.html
 Log:
http://eavesdrop.openstack.org/meetings/mistral/2016/mistral.2016-02-15-16.00.log.html


-- 
Best Regards,
Nikolay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Task Based Deployment Is at Least Twice Faster

2016-02-15 Thread Alexey Shtokolov
Fuelers,

Task based deployment engine has been enabled in master (Fuel 9.0) by
default [0]

[0] - https://review.openstack.org/#/c/273693/

WBR, Alexey Shtokolov

2016-02-09 21:57 GMT+03:00 Vladimir Kuklin :

> Folks
>
> It seems that docker removal spoilt our celebration a bit. Here is a bug
> link https://bugs.launchpad.net/fuel/+bug/1543720 . Fix is trivial, but
> will postpone swarm run for another day. Nevertheless, it seems to be the
> only issue affecting our ability to use TBD.
>
> Stay tuned!
>
> On Tue, Feb 9, 2016 at 2:26 PM, Igor Kalnitsky 
> wrote:
>
>> > I've run BVT more than 100 times, it works,
>>
>> You run it some time ago. There were a lot of opportunities to
>> introduce regression in both Nailgun and tasks of Fuel Library. ;)
>>
>> > We are going to run a swarm test today against the ISO with enabled
>> task-based deployment
>>
>> So there will be a custom ISO, correct? If so, it works for me and
>> I'll wait for its result.
>>
>> On Tue, Feb 9, 2016 at 1:17 PM, Alexey Shtokolov
>>  wrote:
>> > Igor,
>> >
>> > We are going to run a swarm test today against the ISO with enabled
>> > task-based deployment, then check results and merge changes tomorrow.
>> > I've run BVT more than 100 times, it works, but I would like to check
>> more
>> > deployment cases.
>> > And I guess it should be easy to troubleshoot if docker-related and
>> > task-based related changes will be separated by a few days.
>> >
>> > 2016-02-09 13:39 GMT+03:00 Igor Kalnitsky :
>> >>
>> >> Well, I'm going to build a new ISO and run BVT. As soon as they are
>> >> green, I'm going to approve the change.
>> >>
>> >> On Tue, Feb 9, 2016 at 12:32 PM, Bogdan Dobrelya <
>> bdobre...@mirantis.com>
>> >> wrote:
>> >> > On 08.02.2016 17:05, Igor Kalnitsky wrote:
>> >> >> Hey Fuelers,
>> >> >>
>> >> >> When we are going to enable it? I think since HCF is passed for
>> >> >> stable/8.0, it's time to enable task-based deployment for master
>> >> >> branch.
>> >> >>
>> >> >> Opinion?
>> >> >
>> >> > This must be done for the 9.0, IMHO.
>> >> >
>> >> >>
>> >> >> - Igor
>> >> >>
>> >> >> On Wed, Feb 3, 2016 at 12:31 PM, Bogdan Dobrelya
>> >> >>  wrote:
>> >> >>> On 02.02.2016 17:35, Alexey Shtokolov wrote:
>> >>  Hi Fuelers!
>> >> 
>> >>  As you may be aware, since [0] Fuel has implemented a new
>> >>  orchestration
>> >>  engine [1]
>> >>  We switched the deployment paradigm from role-based (aka
>> granular) to
>> >>  task-based and now Fuel can deploy all nodes simultaneously using
>> >>  cross-node dependencies between deployment tasks.
>> >> >>>
>> >> >>> That is great news! Please do not forget about docs updates as
>> well.
>> >> >>> Those docs are always forgotten like poor orphans... I submitted a
>> >> >>> patch
>> >> >>> [0] to MOS docs, please review and add more details, if possible,
>> for
>> >> >>> plugins impact as well.
>> >> >>>
>> >> >>> [0] https://review.fuel-infra.org/#/c/16509/
>> >> >>>
>> >> 
>> >>  This feature is experimental in Fuel 8.0 and will be enabled by
>> >>  default
>> >>  for Fuel 9.0
>> >> 
>> >>  Allow me to show you the results. We made some benchmarks on our
>> bare
>> >>  metal lab [2]
>> >> 
>> >>  Case #1. 3 controllers + 7 computes w/ ceph.
>> >>  Task-based deployment takes *~38* minutes vs *~1h15m* for granular
>> >>  (*~2*
>> >>  times faster)
>> >>  Here and below the deployment time is average time for 10 runs
>> >> 
>> >>  Case #2. 3 controllers + 3 mongodb + 4 computes w/ ceph.
>> >>  Task-based deployment takes *~41* minutes vs *~1h32m* for granular
>> >>  (*~2.24* times faster)
>> >> 
>> >> 
>> >> 
>> >>  Also we took measurements for Fuel CI test cases. Standard BVT
>> >>  (Master
>> >>  node + 3 controllers + 3 computes w/ ceph. All are in qemu VMs on
>> one
>> >>  host)
>> >> 
>> >>  Fuel CI slaves with *4 *cores *~1.1* times faster
>> >>  In case of 4 cores for 7 VMs they are fighting for CPU resources
>> and
>> >>  it
>> >>  marginalizes the gain of task-based deployment
>> >> 
>> >>  Fuel CI slaves with *6* cores *~1.6* times faster
>> >> 
>> >>  Fuel CI slaves with *12* cores *~1.7* times faster
>> >> >>>
>> >> >>> These are really outstanding results!
>> >> >>> (tl;dr)
>> >> >>> I believe the next step may be to leverage the "external install &
>> svc
>> >> >>> management" feature (example [1]) of the Liberty release (7.0.0) of
>> >> >>> Puppet-Openstack (PO) modules. So we could use separate concurrent
>> >> >>> cross-depends based tasks *within a single node* as well, like:
>> >> >>> - task: install_all_packages - a singleton task for a node,
>> >> >>> - task: [configure_x, for each x] - concurrent for a node,
>> >> >>> - task: [manage_service_x, for each x] - some may be concurrent
>> for a

Re: [openstack-dev] [neutron] Mitaka Mid-Cycle Coding Sprint Registration

2016-02-15 Thread Anita Kuno
On 02/15/2016 04:06 PM, Kyle Mestery wrote:
> Hi folks!
> 
> The mid-cycle is almost upon us. IBM, as the sponsor company, is
> requesting some information from everyone who is registered (name, email,
> company, US citizen or not), so please make sure to register on the
> Eventbrite site I've created here [1]. If everyone who's attending
> could please do that by tomorrow that would help our sponsors a lot!
> 
> Thank you!
> Kyle
> 
> [1] 
> https://www.eventbrite.com/e/neutron-mitaka-mid-cycle-coding-sprint-tickets-21634783219
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

I just registered using this link and was asked for my email and first
and last name.

I wasn't asked for my country of citizenship nor was I asked for the
name of the folks who send me paycheques.

Thanks,
Anita.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

2016-02-15 Thread Pavel Bondar
On 13.02.2016 02:42, Carl Baldwin wrote:
> On Fri, Feb 12, 2016 at 5:01 AM, Ihar Hrachyshka  wrote:
 It is only internal implementation changes.

 That's not entirely true, is it? There are config variables to change and
 it opens up the possibility of a scenario that the operator may not care
 about.

>>> If we were to remove the non-pluggable version altogether, then the
>>> default for ipam_driver would switch from None to internal. Therefore, there
>>> would be no config file changes needed.
>>>
>>> I think this is correct.
>>> Assuming the migration path to Neutron will include the data
>>> transformation from built-in to pluggable IPAM, do we just remove the old
>>> code and models?
>>> On the other hand do you think it might make sense to give operators a
>>> chance to rollback - perhaps just in case some nasty bug pops up?
>>
>> They can always revert to a previous release. And if we enable the new
>> implementation start of Newton, we’ll have enough time to fix bugs that will
>> pop up in gate.
> So, to do this, we have to consider two classes of current users.
> Since the pluggable implementation has been available, I think that we
> have to assume someone might be using it.  Someone could easily have
> turned it on in a green-field deployment.  If we push the offline
> migration in to Mitaka as per my last email then we'll likely get a
> few more of these but it doesn't really matter, the point is that I
> think we need to assume that they exist.
>
> 1) Users of the old baked-in implementation
>   - Their current data is stored in the old tables.
>
> 2) User of the new pluggable implementation
>  - Their current data is stored in the new tables.
>
> So, how does an unconditional migration work?  We can't just copy the
> old tables to the new tables because we might clobber data for the
> users in #2.  I've already heard that conditional migrations are a
> pain and shouldn't be considered.  This seems like a problem.
>
> I had an idea that I wanted to share but I'm warning you, it sounds a
> little crazy even to me.  But, maybe it could work.  Read through it
> for entertainment purposes if nothing else.
>
> Instead of migrating data from the old tables to the new.  What if we
> migrated the old tables in place in a patch set that removed all of
> the old code?  The table structure is nearly identical, right?  The
> differences, I imagine, could be easily handled by an alembic
> migration.  Correct me if I'm wrong.
>
> Now, we still have a difference between users in groups #1 and #2
> above.  To keep them separate, we would call the new built-in
> pluggable driver "built-in", "neutron", or whatever.  The name isn't
> important except that it can't be "internal".
>
> 1) Users who were migrated to the new baked-in implementation.
>   - Their current data is still in the old tables but they have been
> migrated to look just like the new tables.
>   - They have still not set "ipam_driver" in their config so they get
> the new default of "built-in".
>
> 2) Early adopters of built-in pluggable ipam
>   - Their current data is still in the new tables
>   - They have their config set to "internal" already
>
> So, now we have to deal with two identical pluggable implementations:
> one called "built-in" and the other called "internal" but otherwise
> they're identical in every way.  So, to handle this, could we
> parameterize the plugin so that they share exactly the same code while
> "internal" is deprecated?  Just the table names would be
> parameterized.
>
> We have to eventually reconcile users in group #1 with #2.  But, now
> that the tables are identical we could provide an offline migration
> that might be as simple as deleting the "old" tables and renaming the
> "new" tables.  Now, only users in group #2 are required to perform an
> offline migration.
>
> Carl
Hi Carl,

Your idea sounds workable to me. However, I think a simpler way exists.

A significant part of the 'built-in' ipam tables continue to be updated even
if the reference ipam driver is used (or any other driver).
That happens because these tables are API-exposed, so they still have to
receive updates.

To be specific, the following models from 'built-in' are related to ipam in some way:
- Subnet;
- IPAllocationPool;
- IPAllocation;
- IPAvailabilityRange;
Only IPAvailabilityRange stops receiving updates when the switch to the
pluggable ipam backend occurs.  And IPAvailabilityRange can be rebuilt based
on information from the IPAllocationPool and IPAllocation models [1].
This gives us the ability to rebuild all the needed ipam information in the
'built-in' tables even if an ipam driver is used.

I am trying to implement this approach in the current version of the
migration to pluggable ipam [2], to allow a safe switch from pluggable back
to the 'built-in' implementation.
The '--rebuild' flag forces recalculation of IP availability ranges, relying
only on the tables that are kept up to date regardless of the backend
currently in use.
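Conceptually, the recalculation is simple set arithmetic per allocation
pool; a rough sketch (the real code works against the neutron models, the
helper below is made up for illustration):

    # Hedged sketch: rebuild free ranges for one allocation pool from
    # the allocations that are still tracked in the API-exposed tables.
    import netaddr

    def rebuild_ranges(pool_first, pool_last, allocated_ips):
        pool = netaddr.IPSet(netaddr.IPRange(pool_first, pool_last).cidrs())
        free = pool - netaddr.IPSet(allocated_ips)
        return [(str(r[0]), str(r[-1])) for r in free.iter_ipranges()]

    # rebuild_ranges('10.0.0.2', '10.0.0.254', ['10.0.0.5'])
    # -> [('10.0.0.2', '10.0.0.4'), ('10.0.0.6', '10.0.0.254')]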

So to remove built-in ipam 

Re: [openstack-dev] [oslo][all] Announcing our new Oslo Project

2016-02-15 Thread Ronald Bradford
For the #OpenStack Mitaka M3 freeze and final release we have made it even
easier to play and win Oslo Bingo.
Simply pick the next release word. Free entry at
http://j.mp/Oslo-bingo-Mitaka

Follow the results on Twitter - https://twitter.com/OsloBingo

Ronald Bradford

Web Site: http://ronaldbradford.com
LinkedIn:  http://www.linkedin.com/in/ronaldbradford
Twitter:@RonaldBradford 
Skype: RonaldBradford
GTalk: Ronald.Bradford
IRC: rbradfor


On Mon, Feb 1, 2016 at 12:50 PM, Ronald Bradford 
wrote:

> The Oslo team is proud to announce the release of Oslo Bingo.  In Oslo
> we like to spice up our release notes using meaningful random
> adjectives [1].
>
> Each month the Oslo team will select an adjective to be the Oslo Bingo
> word of the month.
>
> For February 2016 we have selected "jazzed" (from rlrossit).
>
> To play, simply pick the first Oslo project that will have release
> notes using our Bingo word of the month (i.e. jazzed). Check out
> recent release notes that selected "overjoyed" [2] and "jubilant" [3]
> to see what we mean.
>
> Entry is free for all at http://j.mp/Oslo-bingo [4]
>
> The winner each month will get a limited edition Oslo t-shirt,
> sponsored by HPE (quantity and sizes limited):
> http://j.mp/Oslo-bingo-prize
>
> More details at [5]
>
>
> [1]
> http://git.openstack.org/cgit/openstack-infra/release-tools/tree/releasetools/release_notes.py#n33
> [2]
> http://lists.openstack.org/pipermail/openstack-dev/2016-January/085000.html
> [3]
> http://lists.openstack.org/pipermail/openstack-dev/2016-January/083797.html
> [4] http://j.mp/Oslo-bingo
> [5] https://etherpad.openstack.org/p/Oslo_Bingo
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Push Mitaka beta tag

2016-02-15 Thread Emilien Macchi
Hi,

While Puppet module releases are independently managed, we have some
requests from both RDO & Debian folks to push a first tag in our Puppet
modules, for the Mitaka release, so they can start providing Mitaka packaging
based on a tag, and not on commits.

This is something we never did before, usually we wait until the end of
the cycle and try to push a tag soon after the official release.
But we want to experiment beta tags and see if it helps.

The Mitaka tag would be 8.0.0b1 and pushed by the end of February (I'll
work on it).
Though stable/mitaka branch won't be created until official Mitaka
release. Same thing for release notes, that will be provided at the end
of the cycle as usual.

Any thoughts are welcome,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] Mitaka Mid-Cycle Coding Sprint Registration

2016-02-15 Thread Kyle Mestery
Hi folks!

The mid-cycle is almost upon us. IBM, as the sponsor company, is
requesting some information from everyone who is registered (name, email,
company, US citizen or not), so please make sure to register on the
Eventbrite site I've created here [1]. If everyone who's attending
could please do that by tomorrow that would help our sponsors a lot!

Thank you!
Kyle

[1] 
https://www.eventbrite.com/e/neutron-mitaka-mid-cycle-coding-sprint-tickets-21634783219

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread Jim Meyer
On Feb 15, 2016, at 7:59 AM, Jeremy Stanley  wrote:
> 
>> On 2016-02-15 04:36:25 -0500 (-0500), Eoghan Glynn wrote:
>> [...]
>> Traditionally all ATCs earn a free pass for summit, whereas the
>> other attendees pay $600 or more for entry. I'm wondering if (a)
>> there's some cross-subsidization going on here and (b) if the
>> design summit was cleaved off, would the loss of the income from
>> the non-ATCs sound the death-knell for the traditional ATC free
>> pass?
> [...]
> 
> It was my understanding that we provided 100% discounted admission
> to the conference so that contributors were not charged for design
> summit access, since they happened in the same venue at the same
> time. However, pricing and discount terms are at the discretion of
> the conference organizers so I don't really have more detail.
> Hopefully once Thierry publishes his proposal we'll have a better
> example on which to draw conclusions about related any prices and
> funding challenges.

Fair, and I'll observe as someone who sends literally (small) hundreds of 
people to the summit that conference fees have never been the significant 
expense. 

It's airfare and expensive venue hotels every time.

--j
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuel] Fuel plugins: lets have some rules

2016-02-15 Thread Mateusz Matuszkowiak
Dmitry,

So this changes the workflow for the devops engineers who create Fuel plugin 
repos under the OpenStack namespace.

As I understand it, development of every new Fuel plugin must now be started 
in a private GitHub repo first, 
and when the developer(s) decide they want to go a level higher, they request 
validation. 
Only after successful validation should they request a repository creation 
(via an LP bug) for their plugin under the OpenStack namespace, right? 
I’m asking since it’s important for us to properly handle these kinds of bugs [0].

Regards,

[0] https://bugs.launchpad.net/fuel/+bug/1544536 


--
Fuel DevOps
Mateusz Matuszkowiak


> On Feb 3, 2016, at 3:27 AM, Dmitry Borodaenko  
> wrote:
> 
> It has been over a year since pluggable architecture was introduced in
> Fuel 6.0, and I think it's safe to declare it an unmitigated success. A
> search for "fuel-plugin" on GitHub brings up 167 repositories [0],
> there's 63 Fuel plugin repositories on review.openstack.org [1], 25 Fuel
> plugins are listed in the DriverLog [2].
> 
> [0] https://github.com/search?q=fuel-plugin-
> [1] 
> https://review.openstack.org/#/admin/projects/?filter=openstack%252Ffuel-plugin-
> [2] http://stackalytics.com/report/driverlog?project_id=openstack%2Ffuel
> 
> Even though the plugin engine is not yet complete (there still are
> things you can do in Fuel core that you cannot do in a plugin), dozens
> of deployers and developers [3] used it to expand Fuel capabilities
> beyond the limitations of our default reference architecture.
> 
> [3] http://stackalytics.com/report/contribution/fuel-plugins-group/360
> 
> There's a noticeable bump in contributions around October 2015 after
> Fuel 7.0 was released, most likely inspired by the plugin engine
> improvements introduced in that version [4]. As we continue to expand
> plugins capabilities, I expect more and more plugins to appear.
> 
> [4] 
> https://git.openstack.org/cgit/openstack/fuel-docs/tree/pages/release-notes/v7-0/new_features/plugins.rst?h=stable/7.0
> 
> The question of how useful exactly all those plugins are is a bit harder
> to answer. DriverLog isn't much help: less than half of Fuel plugins
> hosted on OpenStack infrastructure are even registered there, and of
> those that are, only 6 have CI jobs with recent successful runs. Does
> this mean that 90% of Fuel plugins are broken and unmaintained? Not
> necessarily, but it does mean that we have no way to tell.
> 
> An even harder question is: once we determine that some plugins are more
> equal than others, what should we do about the less useful and the less
> actively maintained?
> 
> To objectively answer both questions, we need to define support levels
> for Fuel plugins and set some reasonable expectations about how plugins
> can qualify for each level.
> 
> Level 3. Plugin is not actively supported
> 
> I believe that having hundreds of Fuel plugins out there on GitHub and
> elsewhere is great, and we should encourage people to create more of
> those and do whatever they like with them. Even a single-commit "deploy
> and forget" plugin is useful as an idea, a source of inspiration, and a
> starting point for other people who might want to take it further.
> 
> At this level, there should be zero expectations and zero obligations
> between Fuel plugin writers and OpenStack community. At the moment, Fuel
> plugins developers guide recommends [5] to request a Gerrit repo in the
> openstack/ namespace and set up branches, tags, CI, and a code review
> process around it, aligned with OpenStack development process. Which is
> generally a good idea, except for all the cases where it's too much
> overhead and ends up not being followed closely enough to be useful.
> 
> [5] https://wiki.openstack.org/wiki/Fuel/Plugins#Repo
> 
> Instead of vague blanket recommendations, we should explicitly state that
> it's fine to do none of that and just stay on GitHub, and that if you
> intend to move to the next level and actively maintain your plugin, and
> expect support with that from Fuel developers and other OpenStack
> projects, these recommendations are not optional and must be fulfilled.
> 
> Level 2. Plugin is actively supported by its registered maintainers 
> 
> To support a Fuel plugin, we need to answer two fundamental questions:
> Can we? Should we?
> 
> I think the minimum requirements to say "yes" to both are:
> 
> a) All of the plugin's source code is explicitly licensed under an
>   OSI-approved license;
> 
> b) The plugin source code repository does not contain binary artefacts
>   such as RPM packages or ISO images (*);
> 
> c) The plugin is registered in DriverLog;
> 
> d) Plugin maintainers listed in DriverLog have confirmed the intent to
>   support the plugin;
> 
> e) Plugin repository on review.openstack.org has a voting CI job that is
>   passing with the latest or, at least, previous major release of Fuel.
> 
> f) All deviations from the 

Re: [openstack-dev] [HA][RabbitMQ][messaging][Pacemaker][operators] Improved OCF resource agent for dynamic active-active mirrored clustering

2016-02-15 Thread Bogdan Dobrelya
Hello!
A quick status update inline:

On 23.10.2015 10:01, Bogdan Dobrelya wrote:
> Hello.
> I'm glad to announce that the pacemaker OCF resource agent for the
> rabbitmq clustering, which was born in the Fuel project initially, now
> available and maintained upstream! It will be shipped with the
> rabbitmq-server 3.5.7 package (released by November 2015)
> 
> You can read about this OCF agent in the official guide [0] (flow charts
> for promote/demote/start/stop actions in progress).
> 
> And you can try it as a tiny cluster example with a Vagrant box for
> Atlas [1]. Note, this only installs an Ubuntu box with Corosync/Pacemaker
> and RabbitMQ clusters running; no Fuel or OpenStack required :-)

- Extracted Vagrantfile and cluster provision scripts to a separate repo
[1]. The packer example repo [2] now manages only atlas and docker
(new!) builds.

- Added docker images for Ubuntu Trusty [3] and Wily [4]. Only the
latter works stably, though. For Ubuntu Trusty, there are vagrant boxes
working without issues, so perhaps that is a Docker-only issue.
Perhaps I can build new ones for Xenial or other distros as well.

- The Vagrantfile can now also deploy with the docker provider, although
there are a few hacks to work around things Vagrant's Docker provider
does not implement yet...

So, what's next?

- I'm open to merging both [5] and [6] of the existing OCF RA solutions,
as proposed by Andrew Beekhof. Let's make it happen.

- It would be nice to add a Travis-CI-based gate for the upstream
rabbitmq-server's HA OCF RA. For now, it relies on Fuel CI gates and
manual testing with atlas boxes.

- Please also consider Travis or a similar CI for the resource-agents'
rabbit-cluster OCF RA as well.

[1] https://github.com/bogdando/rabbitmq-cluster-ocf-vagrant
[2] https://github.com/bogdando/packer-atlas-example
[3] https://hub.docker.com/r/bogdando/rabbitmq-cluster-ocf/
[4] https://hub.docker.com/r/bogdando/rabbitmq-cluster-ocf-wily/
[5]
https://github.com/rabbitmq/rabbitmq-server/blob/master/scripts/rabbitmq-server-ha.ocf
[6]
https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/rabbitmq-cluster

> 
> I'm also planning to refer this official RabbitMQ cluster setup guide in
> the OpenStack HA guide as well [2].

Done, see [7]

[7] http://docs.openstack.org/ha-guide/controller-ha-rabbitmq.html

> 
> PS. Original rabbitmq-users mail thread is here [3].
> [openstack-operators] cross posted as well.
> 
> [0] http://www.rabbitmq.com/pacemaker.html
> [1] https://atlas.hashicorp.com/bogdando/boxes/rabbitmq-cluster-ocf
> [2] https://bugs.launchpad.net/openstack-manuals/+bug/1497528
> [3] https://groups.google.com/forum/#!topic/rabbitmq-users/BnoIQJb34Ao
> 


-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread Jeremy Stanley
On 2016-02-15 04:36:25 -0500 (-0500), Eoghan Glynn wrote:
[...]
> Traditionally all ATCs earn a free pass for summit, whereas the
> other attendees pay $600 or more for entry. I'm wondering if (a)
> there's some cross-subsidization going on here and (b) if the
> design summit was cleaved off, would the loss of the income from
> the non-ATCs sound the death-knell for the traditional ATC free
> pass?
[...]

It was my understanding that we provided 100% discounted admission
to the conference so that contributors were not charged for design
summit access, since they happened in the same venue at the same
time. However, pricing and discount terms are at the discretion of
the conference organizers so I don't really have more detail.
Hopefully once Thierry publishes his proposal we'll have a better
example on which to draw conclusions about any related prices and
funding challenges.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #70

2016-02-15 Thread Emilien Macchi
Hello

We'll have our weekly meeting tomorrow at 3pm UTC on
#openstack-meeting-4.

https://wiki.openstack.org/wiki/Meetings/PuppetOpenStack

As usual, feel free to bring topics in this etherpad:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160216

We'll also have open discussion for bugs & reviews, so anyone is welcome
to join.

See you there,
-- 
Emilien Macchi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Proposal: Separate design summits from OpenStack conferences

2016-02-15 Thread James Bottomley
On Mon, 2016-02-15 at 04:36 -0500, Eoghan Glynn wrote:
> 
> > > Honestly I don't know of any communication between two cores at a 
> > > +2 party that couldn't have just as easily happened surrounded by
> > > other contributors. Nor, I hope, does anyone put in the 
> > > substantial reviewing effort required to become a core in order 
> > > to score a few free beers and see some local entertainment. 
> > > Similarly for the TC, one would hope that dinner doesn't figure 
> > > in the system incentives that drives folks to throw their hat
> > > into the ring.
> > 
> > Heh, you'd be surprised.
> > 
> > I don't object to the proposal, just the implication that there's
> > something wrong with parties for specific groups: we did abandon 
> > the speaker party at Plumbers because the separation didn't seem to 
> > be useful and concentrated instead on doing a great party for
> > everyone.
> > 
> > > In any case, I've derailed the main thrust of the discussion 
> > > here, which I believe could be summed up by:
> > > 
> > >   "let's dial down the glitz a notch, and get back to basics"
> > > 
> > > That sentiment I support in general, but I'd just be more 
> > > selective as to which social events should be first in line to be 
> > > culled in order to create a better atmosphere at summit.
> > > 
> > > And I'd be far more concerned about getting the choice of 
> > > location, cadence, attendees, and format right, than in questions 
> > > of who drinks with whom.
> > 
> > OK, so here's a proposal, why not reinvent the Cores party as a 
> > Meet the Cores Party instead (open to all design summit attendees)?
> >  Just make sure it's advertised in a way that could only possibly 
> > appeal to design summit attendees (so the suits don't want to go), 
> > use the same budget (which will necessitate a dial down) and it 
> > becomes an inclusive event that serves a useful purpose.
> 
> Sure, I'd be totally agnostic on the branding as long as the widest
> audience is invited ... e.g. all ATCs, or even all summit attendees.

If you make it ATC only, you've just restricted it to no newcomers,
which is exclusive again.  I think it wants to be open to all design
summit attendees regardless of ATC status.  Since there's no separate
badge, I think you keep the OpenStack summit attendees out by judicious
advertising.  Perhaps, say, only advertise on openstack-dev@ and
posters at the actual design summit?

James


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] eslint without color?

2016-02-15 Thread Jeremy Stanley
On 2016-02-15 16:45:09 +1100 (+1100), Richard Jones wrote:
> I'm just curious why our eslint configuration (in packages.json)
> specifies --no-color. It's much harder to spot the errors without
> color, and I always end up running it manually to get the color.
> Also, karma output has color, so why one and not the other?
> 
> In short, would anyone object to turning color on for eslint?

If it's being run in gate jobs, we need some mechanism to disable
ANSI color there since the embedded escapes tend to make logged
output from it nearly unreadable. If it acts like most _sane_
command-line tools however, it should disable all terminal escapes
if it detects that there's no controlling terminal (for example, if
stdin is null). No idea whether command-line sanity is common to
Javascript programmers, but perhaps someone here is willing to make
it a reality if not.
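
For illustration, the common convention looks something like this in
Python (a sketch of the heuristic, not eslint's actual logic):

    import os
    import sys

    def want_color(force=None):
        # 'force' mirrors an explicit --color/--no-color flag; None = auto.
        if force is not None:
            return force
        # Auto mode: colorize only when stdout is a real terminal and the
        # terminal type is not "dumb"; piped gate-job output gets no escapes.
        return sys.stdout.isatty() and os.environ.get('TERM') != 'dumb'

    print(want_color())  # False when output is redirected into a CI log file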
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] tenant vs. project

2016-02-15 Thread Ed Leafe

On 02/12/2016 07:40 PM, Adam Young wrote:

> Tenant never quite made sense to me. A tenant is the person that 
> occupies an apartment or building, but not the building itself.

For public clouds, where you have customers sharing resources on the
same hardware, it makes sense. But yeah, for Keystone (and most other
services in OpenStack), project is clearer, IMO.

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][QA] New runner for fuel-qa system tests

2016-02-15 Thread Dennis Dmitriev
Hi all!

Please be informed that we have merged a new runner for fuel-qa system tests
[1]: run_system_test.py

Features of the new runner:

- auto-discovers all tests in both test suites ([2] and [3])
- shows the groups from the test suites
- explains the content of groups
- runs several groups at the same time
- combines a configuration with the test groups from the new suite
- runs old groups (from [2])
- is used as the runner in utils/jenkins/system_tests.sh


IMPORTANT:

For old-style system tests from [2] there is no impact.

But there was a significant change for template-based tests [3] in the
test case declaration:
- the @testcase decorator is now used instead of @factory

Here is an example of how a test case was changed: [4]
Please make the same changes if fuel-qa is used as a module for running
your external test cases. A small sketch of the pattern follows the links
below.


[1] https://review.openstack.org/#/c/240016/
[2] https://github.com/openstack/fuel-qa/tree/master/fuelweb_test/tests
[3] https://github.com/openstack/fuel-qa/tree/master/system_test/tests
[4]
https://review.openstack.org/#/c/240016/27/system_test/tests/test_redeploy_after_stop.py
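
To illustrate the pattern (a self-contained sketch, not actual fuel-qa
code -- see [4] for the real declaration and import paths):

    # A @testcase-style decorator attaches group metadata to the class
    # at import time so the runner can discover, list, and run it.
    REGISTRY = {}

    def testcase(groups):
        def decorator(cls):
            cls._groups = groups
            for group in groups:
                REGISTRY.setdefault(group, []).append(cls)
            return cls
        return decorator

    @testcase(groups=['system_test.redeploy_after_stop'])
    class RedeployAfterStop(object):
        """Deploy a cluster, stop the deployment, then redeploy."""

    print(REGISTRY)  # the runner reads this mapping to resolve group names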

-- 
Regards,
Dennis Dmitriev
QA Engineer,
Mirantis Inc. http://www.mirantis.com
e-mail/jabber: dis.x...@gmail.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Move virtualbox scripts to a separate directory

2016-02-15 Thread Vladimir Kozhukalov
Dear colleagues,

I'd like to announce that we are about to move the fuel-main/virtualbox
directory to a separate git repository. This directory contains a set of
bash scripts that could be used to easily deploy Fuel environment and try
to deploy OpenStack cluster using Fuel. Virtualbox is used as a
virtualization layer.

Checklist for this change is as follows:

   1. Launchpad bug: https://bugs.launchpad.net/fuel/+bug/1544271
   2. project-config patch https://review.openstack.org/#/c/279074/2 (ON
   REVIEW)
   3. prepare upstream (DONE) https://github.com/kozhukalov/fuel-virtualbox
   4. .gitreview file (TODO)
   5. .gitignore file (TODO)
   6. MAINTAINERS file (TODO)
   7. remove old files from fuel-main (TODO)

The virtualbox directory is not actively changed, so freezing this directory
for a while is not going to affect the development process significantly.
From this moment the virtualbox directory is declared frozen, and all changes
in this directory that are currently in progress should later be backported to
the new git repository (fuel-virtualbox).

Vladimir Kozhukalov
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-15 Thread Ed Leafe

On 02/15/2016 03:27 AM, Sylvain Bauza wrote:

> - can we have the feature optional for operators

One thing that concerns me is the lesson learned from simply having a
compute node's instance information sent and persisted in memory. That
was resisted by several large operators due to overhead, and this
proposal will have to store that and more in memory.

-- 
Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Testing schema migrations was RE: [grenade][keystone] Keystone multinode grenade

2016-02-15 Thread Grasza, Grzegorz

> From: Morgan Fainberg [mailto:morgan.fainb...@gmail.com] 
>>
>> Keystone stable working with master db seems like an interesting bit, are
>> there already tests for that?
>
>Not yet. Right now there is only a unit test, checking obvious 
>incompatibilities.
>
> As an FYI, this test was reverted as we spent significant time covering
> it at the midcycle (it was going to require us to significantly rework
> in-flight code, and was developed / agreed upon before the new db
> restrictions landed).
> We will be revisiting this with the now better understanding of the scope and
> how to handle the "limited" downtime upgrade for first thing in Newton.

In the commit description you mentioned that "the base feature of what this test
encompasses will instead be migrated over to a full separate gate/check job that
will be able to handle the more complex tasks of ensuring schema upgrades make
sense."

As I understand it, a gate test which upgrades the DB to the latest version
and then runs tempest on the old release would cover the cases which the
unit test covered. Is this what you had in mind?

Do you think I can start working on it, or should we synchronize on what the
final approach should be beforehand?

Can you elaborate on what the ideas for testing schema changes were
at the midcycle?

What especially interests me is whether you discussed any ideas which might be
better than just running tempest on keystone in HA.

I'm sorry I couldn't take part in these discussions.

/ Greg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Re: Assistance with Magnum Setup

2016-02-15 Thread Shiva Ramdeen
Hi Hongbin,


There is an online dev-quickstart guide that I have been using to install
Magnum. This guide is intended to install Magnum with DevStack; however, the
guide seems not to include all the information needed to connect Magnum,
Barbican and Keystone, which is where my problems are showing up. I have
created a document of my own using the information online (you can see it
attached).


I have already joined the #openstack-containers IRC channel and posted my
issues there.


Thank you for your assistance.

Best wishes,
Shiva


From: Hongbin Lu 
Sent: Sunday, February 14, 2016 4:32 PM
To: OpenStack Development Mailing List (not for usage questions); Shiva Ramdeen
Subject: RE: [magnum] Re: Assistance with Magnum Setup


Steve,



Thanks for directing Shiva here. BTW, most of your code on objects and db
is still here :).



Shiva,



Please do join the #openstack-containers channel (it is hard to do
troubleshooting on the ML). I believe contributors in the channel are happy to
help you. For the Magnum team, it looks like we should have an installation
guide. Do we have a BP for that? If not, I think we should create one and give
it a high priority.



Best regards,

Hongbin



From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: February-14-16 10:54 AM
To: Shiva Ramdeen
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum] Re: Assistance with Magnum Setup



Shiva,



First off, welcome to OpenStack :)  Feel free to call me Steve.



Ccing openstack-dev which is typically about development questions not usage 
questions, but you might have found some kind of bug.



I am not sure what the state of Magnum and Keystone is with OpenStack.  I 
recall at our Liberty midcycle we were planning to implement trusts.  Perhaps 
some of that work broke?



I would highly recommend obtaining yourself an IRC client, joining a freenode 
server, and joining the #openstack-containers channel.  Here you can meet with 
the core reviewers and many users who may have seen your problem in the past 
and have pointers for resolution.



Another option is to search the IRC archives for the channel here:

http://eavesdrop.openstack.org/irclogs/%23openstack-containers/






Finally, my detailed knowledge of Magnum is a bit dated, not having written any 
code for Magnum for over 6 months.  Although I wrote a lot of the initial code, 
most of it has been replaced ;) by the rockin Magnum core review team.  They 
can definitely get you going - just find them on irc.



Regards

-steve



From: Shiva Ramdeen
Date: Sunday, February 14, 2016 at 6:33 AM
To: Steven Dake
Subject: Assistance with Magnum Setup



Hello Mr. Dake,



Firstly, let me introduce myself. My name is Shiva Ramdeen; I am a final year
student at the University of the West Indies studying for my degree in
Electrical and Computer Engineering. I am currently working on my final year
project, which deals with the performance of Magnum and Nova-Docker. I have
been attempting to install Magnum on a Liberty install of OpenStack. However,
I have been unable to get Magnum to authenticate with Keystone and thus cannot
create swarm bays. I fear that I have depleted all of the online resources
that explain the setup of Magnum, and as a last resort I am seeking any
assistance you may be able to provide to help me resolve this issue. I would
be available to provide any further details at your convenience. Thank you
in advance.



Kindest Regards,

Shiva Ramdeen


barbicaninstallationpattern.docx
Description: barbicaninstallationpattern.docx


magnuminstallationpattern.docx
Description: magnuminstallationpattern.docx
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A prototype implementation towards the "shared state scheduler"

2016-02-15 Thread Sylvain Bauza



On 15/02/2016 10:48, Cheng, Yingxin wrote:


Thanks Sylvain,

1. The below ideas will be extended to a spec ASAP.



Nice, looking forward to it then :-)


2. Thanks for providing concerns I’ve not thought it yet, they will be 
in the spec soon.


3. Let me copy my thoughts from another thread about the integration 
with resource-provider:


The idea is that “only the compute node knows its own final compute-node 
resource view” or “the accurate resource view only exists at the place 
where it is actually consumed.” I.e., incremental updates can only 
come from the actual “consumption” action, no matter where it happens 
(e.g. compute node, storage service, network service, etc.). Borrowing 
the terms from resource-provider: compute nodes can maintain an accurate 
version of the “compute-node-inventory” cache and can send incremental 
updates because they actually consume compute resources; furthermore, a 
storage service can also maintain an accurate version of a 
“storage-inventory” cache and send incremental updates if it actually 
consumes storage resources. If there are central services in charge of 
consuming all the resources, the accurate cache and updates must come 
from them.
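
A minimal sketch of that update model (illustrative only -- the names and
fields are not the prototype's actual API):

    class ComputeNodeInventory(object):
        """Authoritative, node-local resource view."""

        def __init__(self, total_ram_mb):
            self.free_ram_mb = total_ram_mb
            self.version = 0

        def consume(self, ram_mb):
            # Called at the place where the resource is actually consumed;
            # returns an incremental update for interested schedulers.
            self.free_ram_mb -= ram_mb
            self.version += 1
            return {'version': self.version, 'ram_mb_delta': -ram_mb}

    node = ComputeNodeInventory(total_ram_mb=32768)
    update = node.consume(2048)
    # A scheduler applies such deltas in version order to its cached view
    # instead of re-reading every compute_nodes row from the database.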




That is one of the things I'd like to see in your spec, and how you 
could interact with the new model.

Thanks,
-Sylvain



Regards,

-Yingxin

*From:*Sylvain Bauza [mailto:sba...@redhat.com]
*Sent:* Monday, February 15, 2016 5:28 PM
*To:* OpenStack Development Mailing List (not for usage questions) 

*Subject:* Re: [openstack-dev] [nova] A prototype implementation 
towards the "shared state scheduler"


On 15/02/2016 06:21, Cheng, Yingxin wrote:

Hi,

I’ve uploaded a prototype https://review.openstack.org/#/c/280047/
 to demonstrate its design
goals in accuracy, performance, reliability and compatibility
improvements. It will also be an Austin Summit Session if elected:

https://www.openstack.org/summit/austin-2016/vote-for-speakers/Presentation/7316


I want to gather opinions about this idea:

1. Is this feature possible to be accepted in the Newton release?


Such a feature requires a spec file to be written:
http://docs.openstack.org/developer/nova/process.html#how-do-i-get-my-code-merged


Ideally, I'd like to see your below ideas written in that spec file so 
it would be the best way to discuss on the design.




2. Suggestions to improve its design and compatibility.


I don't want to go into details here (that's rather the goal of the 
spec for that), but my biggest concerns would be when reviewing the spec :
 - how this can meet the OpenStack mission statement (ie. ubiquitous 
solution that would be easy to install and massively scalable)
 - how this can be integrated with the existing (filters, weighers) to 
provide a clean and simple path for operators to upgrade
 - how this can be supporting rolling upgrades (old computes sending 
updates to new scheduler)

 - how can we test it
 - can we have the feature optional for operators



3. Possibilities to integrate with resource-provider bp series: I
know resource-provider is the major direction of Nova scheduler,
and there will be fundamental changes in the future, especially
according to the bp

https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst.
However, this prototype proposes a much faster and compatible way
to make schedule decisions based on scheduler caches. The
in-memory decisions are made at the same speed with the caching
scheduler, but the caches are kept consistent with compute nodes
as quickly as possible without db refreshing.


That's the key point, thanks for noticing our priorities. So, you know 
that our resource modeling is drastically subject to change in Mitaka 
and Newton. That is the new game, so I'd love to see how you plan to 
interact with that.
Ideally, I'd appreciate if Jay Pipes, Chris Dent and you could share 
your ideas because all of you are having great ideas to improve a 
current frustrating solution.


-Sylvain



Here is the detailed design of the mentioned prototype:

>>

Background:

The host state cache maintained by host manager is the scheduler
resource view during schedule decision making. It is updated
whenever a request is received[1], and all the compute node
records are retrieved from db every time. There are several
problems in this update model, proven in experiments[3]:

1. Performance: Scheduler performance is largely affected by
db access in retrieving compute node records. The db block time of
a single request is 355ms on average in a deployment of 3
compute nodes, compared with only 3ms for in-memory
decision-making. Imagine there could be 1k nodes, even 10k
nodes, in the future.

2. Race conditions: This is not only a 

Re: [openstack-dev] [magnum]swarm + compose = k8s?

2016-02-15 Thread Corey O'Brien
Hi all,

A few thoughts to add:

I like the idea of isolating the masters so that they are not
tenant-controllable, but I don't think the Magnum control plane is the
right place for them. They still need to be running on tenant-owned
resources so that they have access to things like isolated tenant networks
or that any bandwidth they consume can still be attributed and billed to
tenants.

I think we should extend that concept a little to include worker nodes as
well. While they should live in the tenant like the masters, they shouldn't
be controllable by the tenant through anything other than the COE API. The
main use case that Magnum should be addressing is providing a managed COE
environment. Like Hongbin mentioned, Magnum users won't have the domain
knowledge to properly maintain the swarm/k8s/mesos infrastructure the same
way that Nova users aren't expected to know how to manage a hypervisor.

I agree with Egor that trying to have Magnum schedule containers is going
to be a losing battle. Swarm/K8s/Mesos are always going to have better
scheduling for their containers. We don't have the resources to try to be
yet another container orchestration engine. Besides that, as a developer, I
don't want to learn another set of orchestration semantics when I already
know swarm or k8s or mesos.

@Kris, I appreciate the real use case you outlined. In your idea of having
multiple projects use the same masters, how would you intend to isolate
them? As far as I can tell none of the COEs would have any way to isolate
those teams from each other if they share a master. I think this is a big
problem with the idea of sharing masters even within a single tenant. As an
operator, I definitely want to know that users can isolate their resources
from other users and tenants can isolate their resources from other tenants.

Corey

On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao  wrote:

> Hi,
>
> I wanted to give some thoughts to the thread.
>
> There are various perspectives around “Hosted vs Self-managed COE”, but if
> you stand in the developer's position, it basically comes down to “Ops vs
> Flexibility”.
>
> For those who want more control of the stack, so as to customize in any way
> they see fit, self-managed is a more appealing option. However, one may
> argue that the same job can be done with a heat template+some patchwork of
> cinder/neutron. And the heat template is more customizable than magnum,
> which probably introduces some requirements on the COE configuration.
>
> For people who don't want to manage the COE, hosted is a no-brainer. The
> question here is which one is the core compute engine in the stack,
> nova or COE? Unless you are running a public, multi-tenant OpenStack
> deployment, it is highly likely that you are sticking with only one COE.
> Supposing k8s is what your team deals with every day, then why do you need
> nova sitting under k8s, whose job is just launching some VMs. After all, it
> is the COE that orchestrates cinder/neutron.
>
> One idea of this is to put COE at the same layer of nova. Instead of
> running atop nova, these two run side by side. So you got two compute
> engines: nova for IaaS workload, k8s for CaaS workload. If you go this way,
> hypernetes is probably what you are looking for.
>
> Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with
> Docker registry, and use nova to launch Docker images. But this is not done
> by nova-docker, simply because it is hard to integrate things like
> cinder/neutron with lxc. The idea is a nova-hyper driver.
> Since Hyper is hypervisor-based, it is much easier to make it work with
> others. SHAMELESS PROMOTION: if you are interested in this idea, we've
> submitted a proposal at the Austin summit:
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211
> .
>
> Peng
>
> Disclaimer: I maintain Hyper.
>
> -
> Hyper - Make VM run like Container
>
>
>
> On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu  wrote:
>
>> My replies are inline.
>>
>>
>>
>> *From:* Kai Qiang Wu [mailto:wk...@cn.ibm.com]
>> *Sent:* February-14-16 7:17 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>>
>>
>>
>> HongBin,
>>
>> See my replies and questions in line. >>
>>
>>
>> Thanks
>>
>> Best Wishes,
>>
>> 
>> Kai Qiang Wu (吴开强 Kennan)
>> IBM China System and Technology Lab, Beijing
>>
>> E-mail: wk...@cn.ibm.com
>> Tel: 86-10-82451647
>> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
>> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>>
>> 

Re: [openstack-dev] [all] Any projects using sqlalchemy-utils?

2016-02-15 Thread Corey Bryant
On Mon, Feb 15, 2016 at 7:57 AM, Julien Danjou  wrote:

> On Fri, Feb 12 2016, Corey Bryant wrote:
>
> > taskflow started using it recently, however it's only needed for a single
> > type in taskflow (JSONType).  I'm wondering if it's worth the effort of
> > maintaining it and it's dependencies in Ubuntu main or if perhaps we can
> > just revert this bit to define the JSONType internally.
>
> As Haïkel said, we've used it for a while in Gnocchi, and have no intention
> of copy/pasting its code. It's even likely I'll spread its use in other
> telemetry projects.


> If an effort should be made, it's probably to merge from
> sqlalchemy-utils to sqlalchemy. :)
>
> If there's any reason it's a burden to maintain this Python package
> compared to others in Ubuntu, let us know, maybe we can fix/enhance
> something upstream?
>
> --
> Julien Danjou
> /* Free Software hacker
>https://julien.danjou.info */
>


It's not a problem at all.  I just wanted to make sure there were more
projects using sqlalchemy-utils than the single type used in taskflow.  And
there are!  So that answers my question.  Thanks.
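
For reference, the single type in question is used roughly like this (a
minimal sketch; the model is illustrative, not taskflow's actual schema):

    from sqlalchemy import Column, Integer, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy_utils import JSONType

    Base = declarative_base()

    class FlowDetail(Base):
        __tablename__ = 'flowdetails'
        id = Column(Integer, primary_key=True)
        # Serialized as JSON text on backends without a native JSON type.
        meta = Column(JSONType)

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)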

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] How would nova microversion get-me-a-network in the API?

2016-02-15 Thread Chris Dent

On Fri, 12 Feb 2016, Doug Wiegley wrote:


It hurts discoverability, and “expectedness”. If I’m new to
openstack, having it default boot unusable just means the first time I
use ’nova boot’, I’ll end up with a useless VM. People don’t
read docs first, it should “just work” as far as that’s sane.
And OpenStack has a LOT of these little annoyances for the sake of
strict correctness while optimizing for an unusual or rare case.


Sorry for being a bit late to the game and jumping into the thread
in the middle, but: I wanted to highlight the above paragraph. Yes,
there are many of these little annoyances in OpenStack where the
principle of least surprise is completely violated.

In the present day, with the many structures we have in place to manage
versions, backwards compatibility, rolling upgrades, etc it is
increasingly difficult to fix these problems. That rather sucks. If
something is wrong and bad for users _now_ (especially the ones that
don't exist yet) and fixing it costs users only a bit of effort, we may
as well do it.

Microversions, as an example, are designed to make backwards
incompatible changes possible: unless a client requests 'latest' it
is fixed in time. The client users and authors have to make an
explicit choice to move forward. Let's use them and quit
constructing obstacles in the way of making things better.
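
For instance, pinning looks like this from a client (a hedged sketch; the
endpoint and token are placeholders, and the version value is just an
example):

    import requests

    resp = requests.get(
        'http://controller:8774/v2.1/servers',
        headers={
            'X-Auth-Token': 'TOKEN',
            # Pinned: behaviour stays fixed even as the server gains newer
            # microversions; only an explicit bump (or 'latest') moves this
            # client forward.
            'X-OpenStack-Nova-API-Version': '2.12',
        },
    )
    print(resp.status_code)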

We can argue that some deployer might move the microversion bar without
notifying users and gosh, we better protect against that. No, we
shouldn't. That's an issue between the deployers and the users.
OpenStack has built in the protections; if people choose to use them
poorly, that's their problem.


The original stated goal of this simpler neutron api was to get back
to the simpler nova boot. I’d like to see that happen.


+1

--
Chris Dent   (╯°□°)╯︵┻━┻http://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-15 Thread Flavio Percoco

On 14/02/16 17:16 -0500, Davanum Srinivas wrote:

Hi,

Short Story:
pycryptodome, if installed inadvertently, will break several projects:
Example : https://review.openstack.org/#/c/279926/

Long Story:
There's a new kid in town pycryptodome:
https://github.com/Legrandin/pycryptodome

Because pycrypto itself has not been maintained for a while:
https://github.com/dlitz/pycrypto

So folks like pysaml2 and paramiko are trying to switch over:
https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
https://github.com/paramiko/paramiko/issues/637

In fact pysaml2===4.0.3 has already switched over. So the requirements
bot/script has been trying to alert us to this new dependency, you can
see Nova fail.
https://review.openstack.org/#/c/279926/

Why does it fail? For example, the new library is strict about getting
bytes for keys and has dropped some parameters in methods, for
example:
https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499

Another problem: if pycrypto gets installed last, things will
work; if pycryptodome gets installed last, things will fail. So we
definitely cannot allow both in our global-requirements and
upper-constraints. We can always try to pin stuff, but things will
fail as there are a lot of jobs that do not honor upper-constraints.
And things will fail in the field for Mitaka.

Action:
So what can we do? One possibility is to pin requirements and hope for
the best. Another is to tolerate the install of either pycrypto or
pycryptodome and test both combinations so we don't have to fight this
battle.

Example for Nova : https://review.openstack.org/#/c/279909/
Example for Glance : https://review.openstack.org/#/c/280008/


I'm not opposed to this as a short term solution.

Flavio


Example for Barbican : https://review.openstack.org/#/c/280014/

What do you think?

Thanks,
Dims


--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API sub-team meeting

2016-02-15 Thread Alex Xu
We have our weekly Nova API meeting tomorrow. The meeting is held Tuesdays
at 1200 UTC.

The proposed agenda and meeting details are here:

https://wiki.openstack.org/wiki/Meetings/NovaAPI

Please feel free to add items to the agenda.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Any projects using sqlalchemy-utils?

2016-02-15 Thread Julien Danjou
On Fri, Feb 12 2016, Corey Bryant wrote:

> taskflow started using it recently, however it's only needed for a single
> type in taskflow (JSONType).  I'm wondering if it's worth the effort of
> maintaining it and it's dependencies in Ubuntu main or if perhaps we can
> just revert this bit to define the JSONType internally.

As Haïkel said, we've used it for a while in Gnocchi, and have no intention
of copy/pasting its code. It's even likely I'll spread its use in other
telemetry projects.

If an effort should be made, it's probably to merge from
sqlalchemy-utils to sqlalchemy. :)

If there's any reason it's a burden to maintain this Python package
compared to others in Ubuntu, let us know, maybe we can fix/enhance
something upstream?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][glance][barbican][kite][requirements] pycrypto vs pycryptodome

2016-02-15 Thread Haïkel
2016-02-14 23:16 GMT+01:00 Davanum Srinivas :
> Hi,
>
> Short Story:
> pycryptodome, if installed inadvertently, will break several projects:
> Example : https://review.openstack.org/#/c/279926/
>
> Long Story:
> There's a new kid in town pycryptodome:
> https://github.com/Legrandin/pycryptodome
>
> Because pycrypto itself has not been maintained for a while:
> https://github.com/dlitz/pycrypto
>
> So folks like pysaml2 and paramiko are trying to switch over:
> https://github.com/rohe/pysaml2/commit/0e4f5fa48b1965b269f69bd383bbfbde6b41ac63
> https://github.com/paramiko/paramiko/issues/637
>
> In fact pysaml2===4.0.3 has already switched over. So the requirements
> bot/script has been trying to alert us to this new dependency, you can
> see Nova fail.
> https://review.openstack.org/#/c/279926/
>
> Why does it fail? For example, the new library is strict about getting
> bytes for keys and has dropped some parameters in methods, for
> example:
> https://github.com/Legrandin/pycryptodome/blob/master/lib/Crypto/PublicKey/RSA.py#L405
> https://github.com/dlitz/pycrypto/blob/master/lib/Crypto/PublicKey/RSA.py#L499
>
> Another problem: if pycrypto gets installed last, things will
> work; if pycryptodome gets installed last, things will fail. So we
> definitely cannot allow both in our global-requirements and
> upper-constraints. We can always try to pin stuff, but things will
> fail as there are a lot of jobs that do not honor upper-constraints.
> And things will fail in the field for Mitaka.
>
> Action:
> So what can we do? One possibility is to pin requirements and hope for
> the best. Another is to tolerate the install of either pycrypto or
> pycryptodome and test both combinations so we don't have to fight this
> battle.
>
> Example for Nova : https://review.openstack.org/#/c/279909/
> Example for Glance : https://review.openstack.org/#/c/280008/
> Example for Barbican : https://review.openstack.org/#/c/280014/
>
> What do you think?
>
> Thanks,
> Dims
>

This is annoying from a packaging PoV.

We have dependencies relying on pycrypto (e.g. oauthlib used by
keystone, paramiko by even more projects), and we can't control the
order of installation.
My 2 cents would be to favor the latter solution and test both
combinations until the N or O release (and then get rid of pycrypto
definitively), so we can handle this gracefully.
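
To make the strictness difference concrete (a hedged probe based on the
RSA.py links above, not code from the proposed reviews):

    from Crypto.PublicKey import RSA

    # Works under both libraries:
    key = RSA.generate(2048)

    # Works under pycrypto, but raises TypeError under pycryptodome,
    # which dropped the progress_func parameter:
    key = RSA.generate(2048, progress_func=lambda *args: None)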


Regards,
H.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Kuryr] IRC Meeting today (2/15) - 1500 UTC

2016-02-15 Thread Gal Sagie
Hello All

We will have an IRC meeting today (Monday, 2/15) at 1500 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Kuryr

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/kuryr/2016/kuryr.2016-02-09-03.00.html

Please update the agenda if you have any subject you would like to discuss.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Stable branch policy for Mitaka

2016-02-15 Thread John Trowbridge


On 02/15/2016 03:59 AM, Steven Hardy wrote:
> On Wed, Feb 10, 2016 at 07:05:41PM +0100, James Slagle wrote:
>>On Wed, Feb 10, 2016 at 4:57 PM, Steven Hardy  wrote:
>>
>>  Hi all,
>>
>>  We discussed this in our meeting[1] this week, and agreed a ML
>>  discussion
>>  to gain consensus and give folks visibility of the outcome would be a
>>  good
>>  idea.
>>
>>  In summary, we adopted a more permissive "release branch" policy[2] for
>>  our
>>  stable/liberty branches, where feature backports would be allowed,
>>  provided
>>  they worked with liberty and didn't break backwards compatibility.
>>
>>  The original idea was really to provide a mechanism to "catch up" where
>>  features are added e.g to liberty OpenStack components late in the cycle
>>  and TripleO requires changes to integrate with them.
>>
>>  However, the reality has been that the permissive backport policy has
>>  been
>>  somewhat abused (IMHO) with a large number of major features being
>>  proposed
>>  for backport, and in a few cases this has broken downstream (RDO)
>>  consumers
>>  of TripleO.
>>
>>  Thus, I would propose that from Mitaka, we revise our backport policy to
>>  simply align with the standard stable branch model observed by all
>>  projects[3].
>>
>>  Hopefully this will allow us to retain the benefits of the stable branch
>>  process, but provide better stability for downstream consumers of these
>>  branches, and minimise confusion regarding what is a permissable
>>  backport.
>>
>>  If we do this, only backports that can reasonably be considered
>>  "Appropriate fixes"[4] will be valid backports - in the majority of
>>  cases
>>  this will mean bugfixes only, and large features where the risk of
>>  regression is significant will not be allowed.
>>
>>  What are peoples thoughts on this?
>>
>>I'm in agreement. I think this change is needed and will help set
>>better expectations around what will be included in which release.
>>
>>If we adopt this as the new policy, then the immediate followup is to set
>>and communicate when we'll be cutting the stable branches, so that it's
>>understood when the features have to be done/committed. I'd suggest that
>>we more or less completely adopt the integrated release schedule[1]. Which
>>I believe means the week of RC1 for cutting the stable/mitaka branches,
>>which is March 14th-18th.
>>
>>It seems to follow logically then that we'd then want to also be more
>>aggresively aligned with other integrated release events such as the
>>feature freeze date, Feb 29th - March 4th.
> 
> Yes, agreeing a backport policy is the first step, and aligning all our
> release policies with the rest of OpenStack is the logical next step.
> 
>>An alternative to strictly following the schedule, would be to say that
>>TripleO lags the integrated release dates by some number of weeks (1 or 2
>>I'd think), to allow for some "catchup" time since TripleO is often
>>consuming features from projects part of the integrated release.
> 
> The risk with this approach is there remains some confusion about our
> deadlines, and there is an increased risk that our 1-2 weeks window slips
> and we end up with a similar problem to that which we have now.
> 
From a packaging POV, I am also -1 on lagging the integrated release.
This creates a situation where TripleO cannot be used as the method to
test the integrated release packaging. This means relying on other
installers (Packstack), which means less use of TripleO in the RDO
community.

Any big feature that needs support in TripleO, that is in the integrated
release, would have a spec landed in advance. So, I do not think it is
all that burdensome to land TripleO support for the features on the same
schedule.

> I'd propose we align with whatever schedule the puppet community observes,
> given that (with our current implementation at least), it's unlikely we can
> land any features actually related to new-feature-in-$service type patches
> without that feature already having support in the puppet modules?
> 

+1 to following the puppet modules' lead. It seems like any new feature-type
patch we want to support in TripleO should be implemented very close
to the patch in the puppet module which enables it. Ideally, we could
land TripleO support at the same time as the feature is enabled in
puppet, using Depends-On.

> Perhaps we can seek out some guidance from Emilien, as I'm not 100% sure of
> the release model observed for the puppet modules?
> 
> If you look at the features we're backporting, most of them aren't related
> to features requiring "catchup", e.g IPv6, SSL, Upgrades - these are all
> cross-project TripleO features and there are very few (if any?) "catchup"
> type requirements AFAICT.
> 
> Also, if you look at other projects, such as Heat 

[openstack-dev] [ironic] Baremetal Deploy Ramdisk functional testing

2016-02-15 Thread Maksym Lobur
Re-sending with Ironic stamp… Folks, please see below: 

> 
> Hi All,
> 
> In bareon [1] we have a test framework to test a deploy ramdisk with bareon
> inside (baremetal deployments). This is functional testing: we do a full
> partitioning/image_deployment in a VM, then reboot to see if the tenant image
> deployed properly. The blueprint is at [2], a test example is at [3]. We were
> going to put the framework into a separate repo, while keeping the functional
> tests in the bareon tree.
> 
> Does someone else need to test some kind of deployment ramdisk? Maybe you
> already have existing tools for this? Or would you be interested in reusing
> our code? The current pull request is to create a bareon-func-test repo [4].
> But if that makes sense, we could do something like ramdisk-func-test, i.e.
> try to generalize the framework to test other ramdisks/agents.
> 
> [1] https://wiki.openstack.org/wiki/Bareon 
> 
> [2] https://blueprints.launchpad.net/bareon/+spec/bareon-functional-testing 
> 
> [3] http://pastebin.com/mL39QJS6 
> [4] https://review.openstack.org/#/c/279120/ 
> 
> 
> 
> Regards,
> Max Lobur,
> OpenStack Developer, Mirantis, Inc.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Octavia (LBaaS) license question

2016-02-15 Thread Samuel Bercovici
OpenStack uses KVM and Linux as reference implementations; both are GPL.

From: Gal Sagie [mailto:gal.sa...@gmail.com]
Sent: Sunday, February 14, 2016 8:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Octavia (LBaaS) license question

Hello All,

I recently started looking at Octavia and I noticed it uses HAProxy as its
reference implementation.
I also noticed that HAProxy is GPL-licensed, so I am wondering how this
works?

Just interested to know if there is a special exemption ?

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Nominate Fedor Zhadaev for the fuel-menu-core team

2016-02-15 Thread Dmitry Klenov
Well done, Fedor! Congrats!

-Dmitry.

On Mon, Feb 15, 2016 at 1:12 PM, Maksim Malchuk 
wrote:

> Congrats!
>
>
> On Mon, Feb 15, 2016 at 1:08 PM, Fedor Zhadaev 
> wrote:
>
>> Thank you!
>> --
>> Kind Regards,
>> Fedor Zhadaev
>>
>> skype: zhadaevfm
>> IRC: fzhadaev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Best Regards,
> Maksim Malchuk,
> Senior DevOps Engineer,
> MOS: Product Engineering,
> Mirantis, Inc
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Trove] Horizon-Trove External Repository

2016-02-15 Thread Andreas Jaeger
On 2016-02-15 11:32, Omer (Nokia - IL) Etrog wrote:
> Hi,
> Is there proper documentation on how to add localization for external
> plugins that use AngularJS?


http://docs.openstack.org/infra/manual/creators.html#enabling-translation-infrastructure

Note that the translation team only translates official projects - and
prioritizes what they work on.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon][Trove] Horizon-Trove External Repository

2016-02-15 Thread Etrog, Omer (Nokia - IL)
Hi,
Is there proper documentation on how to add localization for external plugins
that use AngularJS?
Thanks,
Omer Etrog
Vitrage Team    

From: Thai Q Tran [mailto:tqt...@us.ibm.com] 
Sent: Thursday, December 03, 2015 11:44 PM
To: openstack-dev@lists.openstack.org
Cc: Vince Brunssen
Subject: [openstack-dev] [Horizon][Trove] Horizon-Trove External Repository

Hello Trovers and Horizoneers,
 
The intention of this email is to get everyone on the same page so we are all
aware of what is going on. As many of you are probably already aware, Horizon
is moving toward the plugin model for all of its dashboards (including existing
dashboards). This release cycle, we are aiming to move Sahara and Trove into
their own repositories with joint ownership of the respective projects. I have
spoken to the interested parties, Craig, and David about it and we are all in
agreement. Ideally, this should help speed up the review process for Trove, as
you will now own part of the code.
 
Horizon still has some things we need to tidy up on our end to make sure we
have full support for testing and localization for external plugins. We expect
this to be resolved within the next few weeks. Work on excising the Trove code
will begin this week, so expect a patch for that soon! It would be ideal if we
can merge existing Trove code before the excision happens. David has agreed to
let these patches merge with one core vote if we have enough Trovers
reviewing/reverifying them. So please help us help you.
 
David and Craig, if I left anything else out, feel free to add to this.
Otherwise, have a good xmas everyone. Looking forward to working with you all
in the coming weeks.
 
Regards,
Thai (tqtran)
 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Erasure coding and geo replication

2016-02-15 Thread Kota TSUYUZAKI
Hello Mark,

AFAIK, there are a few reasons why erasure code + geo replication is still a
work in progress.

>> and expect to survive a region outage...
>>
>> With that I mind I did some experiments (Liberty swift) and it looks to me 
>> like if you have:
>>
>> - num_data_frags < num_nodes in (smallest) region
>>
>> and:
>>
>> - num_parity_frags = num_data_frags
>>
>>
>> then having a region fail does not result in service outage.

Good point, but note that PyECLib v1.0.7 (pinned to Kilo/Liberty stable)
still has a problem where it cannot decode the original data when all the
fragments fed to the decoder are parity frags [1]. (I.e., if you set
num_parity_frags = num_data_frags and only parity frags come into the proxy
for a GET request, it will fail at decoding.) The problem was already resolved
in PyECLib/liberasurecode on the master branch, and current Swift master has
the PyECLib>=1.0.7 dependency, so if you plan to use the newest Swift it
might not be a problem.

From the Swift perspective, I think we need more tests/discussion around
write/read affinity [2], which is the geo replication mechanism in Swift
itself, and around performance.

For write/read affinity: we deliberately left affinity control aside to
simplify the implementation until EC landed in Swift master [3], so I think
it's now time to work out how we can use affinity control with EC, but that's
not done yet.

From the performance perspective, in my experiments more parity fragments
cause significant performance degradation [4]. To prevent the degradation,
I am working on the spec which makes duplicated copies of data/parity
fragments and spreads them out across geo regions.

To summarize, the work is not done yet, but discussion and contributions on
EC + geo replication are welcome anytime, IMO. A small sketch of the
survivability arithmetic follows the references below.

Thanks,
Kota

1: 
https://bitbucket.org/tsg-/liberasurecode/commits/a01b1818c874a65d1d1fb8f11ea441e9d3e18771
2: 
http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters
3: 
http://docs.openstack.org/developer/swift/overview_erasure_code.html#region-support
4: 
https://specs.openstack.org/openstack/swift-specs/specs/in_progress/global_ec_cluster.html
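
To make the survivability condition concrete (a sketch assuming fragments
are spread evenly across regions, which depends on the ring configuration):

    def survives_any_region_loss(num_data, num_parity, num_regions):
        frags = num_data + num_parity
        # Worst case: lose the region holding the most fragments.
        worst_loss = -(-frags // num_regions)  # ceiling division
        # GETs still succeed if at least num_data fragments survive.
        return frags - worst_loss >= num_data

    print(survives_any_region_loss(num_data=4, num_parity=4, num_regions=2))   # True
    print(survives_any_region_loss(num_data=10, num_parity=4, num_regions=2))  # False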



(2016/02/15 18:00), Mark Kirkwood wrote:
> After looking at:
> 
> https://www.youtube.com/watch?v=9YHvYkcse-k
> 
> I have a question (that follows on from Bruno's) about using erasure coding 
> with geo replication.
> 
> Now the example given to show why you could/should not use erasure coding 
> with geo replication is somewhat flawed as it is immediately clear that you 
> cannot set:
> 
> - num_data_frags > num_devices (or nodes) in a region
> 
> and expect to survive a region outage...
> 
> With that in mind I did some experiments (Liberty Swift) and it looks to me 
> like if you have:
> 
> - num_data_frags < num_nodes in (smallest) region
> 
> and:
> 
> - num_parity_frags = num_data_frags
> 
> 
> then having a region fail does not result in service outage.
> 
> So my real question is - it looks like it *is* possible to use erasure coding 
> in geo replicated situations - however I may well be missing something 
> significant, so I'd love some clarification here [1]!
> 
> Cheers
> 
> Mark
> 
> [1] Reduction in disk usage and net traffic looks attractive
> 
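The arithmetic behind that observation is easy to sanity-check; the sketch 
below assumes fragments end up spread evenly across two regions, which is an 
assumption on my part (actual placement depends on the ring):

    k, m = 4, 4                       # num_data_frags, num_parity_frags
    regions = 2
    frags_lost = (k + m) // regions   # one whole region goes away
    surviving = (k + m) - frags_lost
    # An object stays readable as long as at least k fragments survive.
    print(surviving >= k)             # True for this layout
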
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


-- 
--
Kota Tsuyuzaki(露﨑 浩太)  
NTT Software Innovation Center
Cloud Solution Project
Phone  0422-59-2837
Fax    0422-59-2965
---



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Update on live migration priority

2016-02-15 Thread Kashyap Chamarthy
On Fri, Feb 12, 2016 at 04:21:27PM +, Murray, Paul (HP Cloud) wrote:
> This time with a tag in case anyone is filtering...

Yep, I was filtering, and would've missed it without your tag. :-)

> From: Murray, Paul (HP Cloud)
> Sent: 12 February 2016 16:16
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] Update on live migration priority
> 
> The objective for the live migration priority is to improve the
> stability of migrations based on operator experience. The high level
> approach is to do the following:
> 
> 1.   Improve CI
> 
> 2.   Improve documentation
>
> 
> 3.   Improve manageability of migrations
> 
> 4.   Fix bugs
> 
> In this cycle we targeted a few immediately implementable features
> that would help, specifically giving operators commands to allow them
> to manage migrations (inspect progress, force completion, and cancel)
> and improve security (split-networks and remove ssh-based
> resize/migration; aka storage pools).
> 
> Most of these are on track to be completed in this cycle with the
> exception of storage pools work which is being deferred. Further
> details follow.
> 
> Expand CI coverage - in progress
> 
> There is a job in the experimental queue called:
> gate-tempest-dsvm-multinode-live-migration. This will become the
> job that performs live migration tests; any live migration tests in
> other jobs will be removed. At present the job has been configured to
> cover different storage configurations including cinder, NFS, ceph.
> Tests are now being added to the job. Patches are currently up for
> live migration of instances with swap and instances with ephemeral
> disks.
> 
> Please trigger the experimental queue if your patches touch migrations
> in some way so we can check the stability of the jobs. Once stable and
> with sufficient tests we will promote the job from the experimental
> queue so that it always runs.
> 
> See: https://review.openstack.org/#/q/topic:lm_test
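
For anyone unfamiliar with the mechanics: as far as I know, the experimental 
queue is triggered by leaving a review comment on the Gerrit change whose 
body is exactly:

    check experimental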
> 
> Improve API docs - done
> 
> Some changes were made to the API guide for moving servers, including
> better descriptions for the server actions migrate, live migrate,
> shelve, resize and evacuate (
> http://developer.openstack.org/api-guide/compute/server_concepts.html#server-actions
> ) and a section that describes reasons for moving VMs with common use
> cases outlined (
> http://developer.openstack.org/api-guide/compute/server_concepts.html#moving-servers
> )
> 
> Block live migration with attached volumes - done
> 
> The selective block device migration API in libvirt 1.2.17 is used to
> allow block migration when volumes are attached. A follow on patch to
> allow readonly drives to be copied in block migration has not been
> completed. This patch is required to allow iso9600 format config
> drives to be migrated. Without it only vfat config drives can be
> migrated. There is still some thought going into that - see:
> https://review.openstack.org/#/c/234659
> 
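For context, a minimal sketch of that selective block device migration API 
via the libvirt-python bindings; the domain name, destination URI, and disk 
list are placeholder assumptions:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')   # placeholder domain name

    params = {
        # Copy only the named local disks; attached volumes are skipped.
        libvirt.VIR_MIGRATE_PARAM_MIGRATE_DISKS: ['vda'],
    }
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_NON_SHARED_INC)
    dom.migrateToURI3('qemu+tcp://dest-host/system', params, flags)
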
> Force complete - requires python-novaclient change
> 
> Force-complete forces a live migration to complete by pausing the VM
> and resuming it once the migration has finished. This is intended as
> a brute force way to make a VM complete its migration when it is
> taking too long. In the future auto-converge and post-copy will be
> looked at. These became available in qemu 2.5.
> 
> Force complete is done in nova but still requires a change to
> python-novaclient to implement the CLI.
> 
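For reference, a hedged sketch of driving force-complete straight over the 
REST API while the novaclient change is pending; the endpoint, token, and IDs 
are placeholders, and 2.22 is the microversion that introduced the action:

    import requests

    NOVA = 'http://controller:8774/v2.1'   # placeholder endpoint
    HEADERS = {
        'X-Auth-Token': '<token>',         # placeholder token
        'X-OpenStack-Nova-API-Version': '2.22',
        'Content-Type': 'application/json',
    }

    def force_complete(server_id, migration_id):
        """Pause the guest so the in-flight live migration can finish."""
        url = '%s/servers/%s/migrations/%s/action' % (NOVA, server_id,
                                                      migration_id)
        requests.post(url, json={'force_complete': None},
                      headers=HEADERS).raise_for_status()
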
> Cancel - in progress
> 
> Cancel stops a live migration, leaving it on the source host with the
> migration status left as "cancelled". This is in progress and follows
> the pattern of force-complete. Unfortunately this needs to be bundled
> up into one patch to avoid multiple API bumps.
> 
> Patches for review:
> https://review.openstack.org/#/q/status:open+topic:bp/abort-live-migration
> 
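Assuming the patches merge with the shape currently under review (the exact 
endpoint is an assumption here), cancel would be a DELETE on the same 
migrations sub-resource, reusing the NOVA/HEADERS placeholders from the 
previous sketch:

    def cancel(server_id, migration_id):
        """Abort the in-progress live migration; the VM stays on the source."""
        url = '%s/servers/%s/migrations/%s' % (NOVA, server_id, migration_id)
        requests.delete(url, headers=HEADERS).raise_for_status()
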
> Progress reporting - in progress (no pun intended)
> 
> Progress reporting introduces migrations as a sub-resource of servers
> and adds progress data to the migration record. There was some debate
> at the mid cycle and on the mailing list about how to record this
> transient data. It is a waste to keep writing it to the database, but
> since it is generated at the compute manager yet examined at the API, it
> was felt that writing it to the database is necessary to fit the
> existing architecture. The conclusion was that writing to the
> database every 5 seconds would not cause significant overhead.
> Alternatives could be pursued later if necessary. For discussion see
> this ML thread:
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/085662.html
> and the IRC meeting transcript here:
> http://eavesdrop.openstack.org/meetings/nova_live_migration/2016/nova_live_migration.2016-02-09-14.01.log.html
> 
> Patches for review:
> https://review.openstack.org/#/q/status:open+topic:bp/live-migration-progress-report
> 
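A sketch of what polling that sub-resource for progress could look like, 
again reusing the placeholders above; the field and status names follow the 
patches under review and may well change before merge:

    import time

    def watch_progress(server_id, migration_id, interval=5):
        # Note: the migrations sub-resource needs a newer microversion
        # than the 2.22 set in HEADERS above.
        url = '%s/servers/%s/migrations/%s' % (NOVA, server_id, migration_id)
        while True:
            migration = requests.get(url, headers=HEADERS).json()['migration']
            print(migration.get('status'),
                  migration.get('memory_processed_bytes'))
            if migration.get('status') != 'running':
                break
            time.sleep(interval)
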
> Split networking - done
> 
> Split networking adds a configuration parameter to specify
> 

[openstack-dev] [Bareon] Weekly update

2016-02-15 Thread Evgeniy L
Hi,

After a discussion with some folks, we agreed that it might be useful for 
the community if we start sending weekly updates on what the Bareon team is 
working on and how we are progressing.

So here is the first weekly update from the Bareon team.

1. Data pipelines for Nailgun integration (changing provisioning/deployment
data with extensions): last week we had an active discussion and mostly
agreed on the implementation; both the spec [0] and the code [1] are up for
review.
2. Bug debugging/reviewing/fixing [2]-[8]
3. Due to those bugs, not much progress on pluggable do-actions this week [9]
4. Dynamic allocation: no progress [10]
5. Engineers from the Cray team have 6 features [11] for upstream Bareon; we
are currently trying to figure out the best way to land these changes.

Thanks,

[0] https://review.openstack.org/#/c/274653/
[1] https://review.openstack.org/#/c/272977/

[2] https://bugs.launchpad.net/fuel/+bug/1538645
[3] https://bugs.launchpad.net/fuel/+bug/1543063
[4] https://bugs.launchpad.net/fuel/+bug/1544818
[5] https://bugs.launchpad.net/fuel/+bug/1544816
[6] https://bugs.launchpad.net/fuel/+bug/1543240
[7] https://bugs.launchpad.net/fuel/+bug/1543233
[8] https://bugs.launchpad.net/fuel/+bug/1543221

[9] https://blueprints.launchpad.net/bareon/+spec/pluggable-do-actions
[10] https://blueprints.launchpad.net/bareon/+spec/dynamic-allocation
[11]
https://review.openstack.org/#/q/project:openstack/bareon-specs+owner:%22Max+Lobur+%253Cmax_lobur%2540outlook.com%253E%22
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

