[openstack-dev] [tripleo] Deprecated Parameters Warning

2017-06-05 Thread Saravanan KR
Hello,

I am working on a patch [1] to list the deprecated parameters of the
current plan. It depends on a heat patch [2] which provides
parameter_group support for nested stacks. The change adds a new
workflow that analyzes the plan templates and determines the list of
deprecated parameters, identified by parameter_groups with the label
"deprecated".

This workflow can be used by the CLI and UI to warn the user about
deprecated parameters. The patch only provides the listing; changes are
still required in tripleoclient to invoke the workflow and display the
warning. I am sending this mail to update the group and to bring
awareness on the parameter deprecation.
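
To make the convention concrete, here is a rough sketch (illustrative only,
not the actual workflow under review in [1]; the function name is made up)
of what the detection amounts to: scan each plan template for
parameter_groups entries labelled "deprecated" and collect their parameters.

    # Illustrative sketch only; not the code in [1].
    import yaml

    def deprecated_parameters(template_paths):
        # Collect parameters listed in parameter_groups labelled 'deprecated'.
        deprecated = set()
        for path in template_paths:
            with open(path) as f:
                template = yaml.safe_load(f) or {}
            for group in template.get('parameter_groups', []):
                if group.get('label') == 'deprecated':
                    deprecated.update(group.get('parameters', []))
        return sorted(deprecated)

    # A template would mark its parameters roughly like this:
    #   parameter_groups:
    #     - label: deprecated
    #       description: Parameters that should no longer be used
    #       parameters:
    #         - OldParameterName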

Regards,
Saravanan KR

[1] https://review.openstack.org/#/c/463949/
[2] https://review.openstack.org/#/c/463941/



Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Dan Prince
On Mon, 2017-06-05 at 16:11 +0200, Jiří Stránský wrote:
> On 5.6.2017 08:59, Sagi Shnaidman wrote:
> > Hi
> > I think a "deep dive" about containers in TripleO and some helpful
> > documentation would help a lot for valuable reviews of these
> > container
> > patches. The knowledge gap that's accumulated here is pretty big.
> 
> As per last week's discussion [1], I hope this is something I could do.
> I'm drafting a preliminary agenda in this etherpad, feel free to add
> more suggestions if I missed something:
> 
> https://etherpad.openstack.org/p/tripleo-deep-dive-containers
> 
> My current intention is to give a fairly high level view of the TripleO
> container land: from deployment, upgrades, debugging failed CI jobs, to
> how CI itself was done.
> 
> I'm hoping we could make it this Thursday still. If that's too short of
> a notice for several folks or if I hit some trouble with preparation, we
> might move it to the 15th. Any feedback is welcome of course.

Nice Jirka. Thanks for organizing this!

Dan

> 
> Have a good day,
> 
> Jirka
> 
> > 
> > Thanks
> > 
> > On Jun 5, 2017 03:39, "Dan Prince"  wrote:
> > 
> > > Hi,
> > > 
> > > Any help reviewing the following patches for the overcloud
> > > containerization effort in TripleO would be appreciated:
> > > 
> > > https://etherpad.openstack.org/p/tripleo-containers-todo
> > > 
> > > If you've got new services related to the containerization
> > > efforts feel
> > > free to add them here too.
> > > 
> > > Thanks,
> > > 
> > > Dan
> > > 


Re: [openstack-dev] [stable][kolla][release] Policy regarding backports of gate code

2017-06-05 Thread Ihar Hrachyshka

On 06/05/2017 09:42 AM, Michał Jastrzębski wrote:

My question is, is it ok to backport gate logic to stable branch?
Regular code doesn't change so it might not be considered a feature
backport (users won't see a thing).


Yes, that's allowed. Stable maintainers are concerned about 
destabilizing production code. Touching gate infrastructure, adding 
tests, or updating documentation is almost always OK (assuming your 
changes don't, e.g., reduce test coverage).


Ihar



[openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-05 Thread Ed Leafe
We had a very lively discussion this morning during the Scheduler subteam 
meeting, which was continued in a Google hangout. The subject was how to handle 
claiming resources when the Resource Provider is not "simple". By "simple", I 
mean a compute node that provides all of the resources itself, as contrasted 
with a compute node that uses shared storage for disk space, or which has 
complex nested relationships with things such as PCI devices or NUMA nodes. The 
current situation is as follows:

a) scheduler gets a request with certain resource requirements (RAM, disk, CPU, 
etc.)
b) scheduler passes these resource requirements to placement, which returns a 
list of hosts (compute nodes) that can satisfy the request.
c) scheduler runs these through some filters and weighers to get a list ordered 
by best "fit"
d) it then tries to claim the resources by posting allocations for these 
resources against the selected host to placement
e) once the allocation succeeds, scheduler returns that host to conductor to 
then have the VM built

(some details for edge cases left out for clarity of the overall process)

The problem we discussed comes into play when the compute node isn't the actual 
provider of the resources. The easiest example to consider is when the computes 
are associated with a shared storage provider. The placement query is smart 
enough to know that even if the compute node doesn't have enough local disk, it 
will get it from the shared storage, so it will return that host in step b) 
above. If the scheduler then chooses that host, when it tries to claim it, it 
will pass the resources and the compute node UUID back to placement to make the 
allocations. This is the point where the current code would fall short: 
somehow, placement needs to know to allocate the disk requested against the 
shared storage provider, and not the compute node.

One proposal is to essentially use the same logic in placement that was used to 
include that host in those matching the requirements. In other words, when it 
tries to allocate the amount of disk, it would determine that that host is in a 
shared storage aggregate, and be smart enough to allocate against that 
provider. This was referred to in our discussion as "Plan A".

Another proposal involved a change to how placement responds to the scheduler. 
Instead of just returning the UUIDs of the compute nodes that satisfy the 
required resources, it would include a whole bunch of additional information in 
a structured response. A straw man example of such a response is here: 
https://etherpad.openstack.org/p/placement-allocations-straw-man. This was 
referred to as "Plan B". The main feature of this approach is that part of that 
response would be the JSON dict for the allocation call, containing the 
specific resource provider UUID for each resource. This way, when the scheduler 
selects a host, it would simply pass that dict back to the /allocations call, 
and placement would be able to do the allocations directly against that 
information.
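
To make that concrete, the per-host allocation data could look roughly like
the following (purely illustrative; the straw man etherpad above describes
the actual proposed schema, and the UUIDs here are placeholders):

    # Purely illustrative shape; not the straw man's exact schema.
    # The scheduler would post this back to the /allocations call unmodified
    # once it selects the host.
    allocation_request = {
        "allocations": [
            {   # CPU and RAM are provided by the compute node itself
                "resource_provider": {"uuid": "<compute-node-rp-uuid>"},
                "resources": {"VCPU": 2, "MEMORY_MB": 4096},
            },
            {   # the disk comes from the shared storage provider
                "resource_provider": {"uuid": "<shared-storage-rp-uuid>"},
                "resources": {"DISK_GB": 40},
            },
        ],
    }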

There was another issue raised: simply providing the host UUIDs didn't give the 
scheduler enough information to run its filters and weighers, since the 
scheduler uses those UUIDs to construct HostState objects. The specific missing 
information was never completely clarified, so I'm just including this aspect 
of the conversation for completeness. It is orthogonal to the question of how 
to allocate when the resource provider is not "simple".

My current feeling is that we got ourselves into our existing mess of ugly, 
convoluted code when we tried to add these complex relationships into the 
resource tracker and the scheduler. We set out to create the placement engine 
to bring some sanity back to how we think about things we need to virtualize. I 
would really hate to see us make the same mistake again, by adding a good deal 
of complexity to handle a few non-simple cases. What I would like to avoid, no 
matter what the eventual solution chosen, is representing this complexity in 
multiple places. Currently the only two candidates for this logic are the 
placement engine, which knows about these relationships already, or the compute 
service itself, which has to handle the management of these complex virtualized 
resources.

I don't know the answer. I'm hoping that we can have a discussion that might 
uncover a clear approach, or, at the very least, one that is less murky than 
the others.


-- Ed Leafe









Re: [openstack-dev] [murano] Meeting time

2017-06-05 Thread MONTEIRO, FELIPE C
Hi all,

According to the patch currently up at [0], the meeting time has been changed 
to 13:00 UTC (still on Tuesday), which works for most of those who currently 
attend. If this meeting time is objectionable to anyone, feel free to review 
[0].

Because [0] has yet to be merged and become official, tomorrow's meeting will 
still be held at the usual time (17:00 UTC). Beginning next week, the new 
meeting time should be set in stone.

[0] https://review.openstack.org/#/c/468182/

Felipe


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-06-05 Thread Yeleswarapu, Ramamani
Hi,

We are glad to present this week's priorities and subteam report for Ironic. As 
usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. booting from volume:
1.1. the next patch: https://review.openstack.org/#/c/406290
2. Rolling upgrades:
2.1. spec update: https://review.openstack.org/#/c/469940/
2.2. the next patch (add version column): 
https://review.openstack.org/#/c/412397/
3. OSC commands for ironic driver-related commands
3.1. review the spec: https://review.openstack.org/#/c/439907/
4. Physical network topology awareness:
4.1. next patch on review: https://review.openstack.org/#/c/461301/


Bugs (dtantsur, vdrok, TheJulia)

- Stats (diff between 22 May 2017 and 5 Jun 2017)
- Ironic: 249 bugs (+6) + 252 wishlist items (+1). 24 new, 196 in progress 
(+5), 1 critical (+1), 30 high (+4) and 32 incomplete
- Inspector: 14 bugs (+2) + 30 wishlist items (+2). 1 new (-2), 12 in progress, 
0 critical, 2 high (+1) and 3 incomplete
- Nova bugs with Ironic tag: 12. 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- next patch to be reviewed: https://review.openstack.org/#/c/429770/
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476
- local boot with partition images: TODO 
https://bugs.launchpad.net/ironic/+bug/1531149
- adoption: https://review.openstack.org/#/c/344975/
- should probably be changed to use standalone tests

Generic boot-from-volume (TheJulia, dtantsur)
---------------------------------------------
- specs and blueprints:
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/volume-connection-information.html
- code: https://review.openstack.org/#/q/topic:bug/1526231
- 
http://specs.openstack.org/openstack/ironic-specs/specs/approved/boot-from-volume-reference-drivers.html
- code: https://review.openstack.org/#/q/topic:bug/1559691
- https://blueprints.launchpad.net/nova/+spec/ironic-boot-from-volume
- code: 
https://review.openstack.org/#/q/topic:bp/ironic-boot-from-volume
- status as of most recent weekly meeting:
- mjturek is working on getting together devstack config updates/script 
changes in order to support this configuration. No updates.
- hshiina uploaded some devstack patches [see etherpad]
- hshiina is looking into Nova-side changes and is attempting to obtain 
clarity on some of the issues that tenant network separation introduced into 
the deployment workflow.
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/406290 Wiring in attach/detach 
operations
TheJulia will sync up with the other contributors to this effort 
and see about getting the feedback addressed in the patchset.
https://review.openstack.org/#/c/413324 iPXE template
https://review.openstack.org/#/c/454243/ - WIP logic changes for 
deployment process.  Tenant network separation introduced some additional 
complexity, quick conceptual feedback requested.
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change
Additional patches exist for python-ironicclient and one for nova. 
Links are in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-----------------------------------------------------
- spec approved; code patches: 
https://review.openstack.org/#/q/topic:bug/1526283
- status as of most recent weekly meeting:
- Based on feedback from vdrok and jlvillal, rloo redesigned (in a conceptually 
simpler way) how IronicObject versioning is handled by services running 
different releases:
- update to the spec, ready for reviews: 'Rolling upgrades: different 
object versions' https://review.openstack.org/#/c/469940/
- next patch ready for reviews, if the update is approved: 'Add version 
column' https://review.openstack.org/#/c/412397/
- Testing work: done as per the spec, but rloo wants to ask vasyl whether we 
can improve it. The grenade test will do an upgrade, so we have the old API 
sending requests to the old and/or new conductor, but rloo doesn't think there 
is anything to control -which- conductor handles the request, so what if the 
old conductor handles all the requests?

Reference architecture guide (jroll, dtantsur)
----------------------------------------------
- no updates, dtantsur plans to start working on some text for the 
install-guide soon(ish)

Python 3.5 compatibility (Nisha, Ankit)
---------------------------------------
- Topic: 

[openstack-dev] [nova] Providing interface-scoped nameservers in network_data.json

2017-06-05 Thread Lars Kellogg-Stedman
While investigating a bug report against cloud-init ("why don't you put
nameservers in interface configuration files?"), I discovered that Nova
munges the information received from Neutron, taking the network-scoped
nameserver entries and moving them all into a global "services" section.

It turns out that people may actually want to preserve the information
about which interface is associated with a particular nameserver so that
the system can be configured to manage the resolver configuration as
interfaces are brought up/down.

I've proposed https://review.openstack.org/#/c/467699/ to resolve this
issue, which adds nameserver information to the "network" section.  This
*does not* remove the global "services" key, so existing code that expects
to find nameservers there will continue to operate as it does now.  This
simply exposes the information in an additional location where there is
more context available.
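
As an illustration of the shape (a sketch only, not the exact schema proposed
in the review; the field values are made up), a per-network entry would carry
its own resolver information while the existing global section is left alone:

    # Sketch of the idea only; field names and values are illustrative.
    network_data = {
        "networks": [
            {
                "id": "network0",
                "type": "ipv4",
                "ip_address": "10.0.0.5",
                # new: nameserver info scoped to this network
                "services": [{"type": "dns", "address": "10.0.0.2"}],
            },
        ],
        # existing global section, unchanged for backwards compatibility
        "services": [{"type": "dns", "address": "10.0.0.2"}],
    }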

Thanks for looking,

-- Lars


[openstack-dev] [stable][kolla][release] Policy regarding backports of gate code

2017-06-05 Thread Michał Jastrzębski
Hello,

We are working hard on providing a pipeline for docker publishing,
which will require heavy gating of the container images to be published. We
would also like to publish stable/ocata images to enable release
upgrade gates from O to P.

My question is, is it ok to backport gate logic to stable branch?
Regular code doesn't change so it might not be considered a feature
backport (users won't see a thing).
Since zuul runs all the jobs regardless of branch, unless we backport
this code, our multinode ocata jobs will be just a huge waste of
resources.

The first of the reviews in question: https://review.openstack.org/#/c/466007/
As you can see it's quite an extensive overhaul of gating, so it's much
more than a bug fix. How should we proceed?

Regards,
Michal



Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-05 Thread Zane Bitter

On 05/06/17 09:43, Sean Dague wrote:

On 06/01/2017 06:09 AM, Chris Dent wrote:


It's clear from this thread and other conversations that the
management of tempest plugins is creating a multiplicity of issues
and confusions:

* Some projects are required to use plugins and some are not. This
   creates classes of projects.


While this is true, there are also reasons for that. We decided to break
up the compute service into distinct parts years ago to help let each
part grow dedicated expertise (images, networking, block storage).
However, there is a ton of coupling here even though these are broken up.

My continued resistance to decomposing the QA side of those projects is
that getting that integration testing right, and debugging it, is hard,
because there are so many interactions required to have a working server
started. And Nova, Neutron, and Cinder are the top three most active
projects in OpenStack, so the rate of change of each is quite high.
Forcing those services out into plugins because of the feeling that


Presumably that could be addressed by splitting the 
Nova/Neutron/Cinder/Glance tests not used by the OpenStack Powered 
trademark program (aka DefCore) into a combined base-compute Tempest 
plugin though?


Or are you saying there's coupling between the Defcore and non-Defcore 
tests that isn't mediated by tempest-lib? If that's the case then I'd be 
concerned that we might run into this problem again in the future.



something doesn't look fair on paper is just generating more work to
create spherical elephants, instead of acknowledging the amount of work
the QA team has on its shoulders, and letting it optimize for a better
experience for OpenStack users, especially given limited resources.

-Sean






Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Jiří Stránský

On 5.6.2017 08:59, Sagi Shnaidman wrote:

Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these container
patches. The knowledge gap that's accumulated here is pretty big.


As per last week's discussion [1], I hope this is something I could do. 
I'm drafting a preliminary agenda in this etherpad, feel free to add 
more suggestions if I missed something:

https://etherpad.openstack.org/p/tripleo-deep-dive-containers

My current intention is to give a fairly high level view of the TripleO 
container land: from deployment, upgrades, debugging failed CI jobs, to 
how CI itself was done.

I'm hoping we could make it this Thursday still. If that's too short of 
a notice for several folks or if I hit some trouble with preparation, we 
might move it to the 15th. Any feedback is welcome of course.


Have a good day,

Jirka



Thanks

On Jun 5, 2017 03:39, "Dan Prince"  wrote:


Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization efforts feel
free to add them here too.

Thanks,

Dan



[openstack-dev] [glance] request_id middleware in glance

2017-06-05 Thread Sean Dague
Until doing the research at the Summit about request_id middleware in
the base IaaS services, I had not realized that glance does something
quite different from the others (it has always allowed the request_id
to be set).

I'd actually like to modify that behavior to set the new
global_request_id variable in oslo.context instead -
https://review.openstack.org/#/c/468443/ - which gives you 2 ids that can
be tracked, one per inbound request, and one that might be set globally.
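
As a rough sketch of the intent (assuming oslo.context grows a
global_request_id attribute as proposed; this is not glance's actual code,
and the header name here is only illustrative):

    # Rough sketch only; not glance's middleware code.
    from oslo_context import context

    def build_context(headers):
        # The service still generates its own id for each inbound request...
        ctx = context.RequestContext()
        # ...while an id supplied by the caller is tracked separately as the
        # global request id, instead of overwriting the local request_id.
        inbound = headers.get('X-OpenStack-Request-ID')
        if inbound:
            ctx.global_request_id = inbound
        return ctx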

It's a small change (a larger change to use the oslo.middleware would be
a bit more complicated, and while good long term, is beyond scope right
now), but I wanted to get an idea how this sits with the glance team.

Thanks in advance,

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-05 Thread Sean Dague
On 06/01/2017 06:09 AM, Chris Dent wrote:

> It's clear from this thread and other conversations that the
> management of tempest plugins is creating a multiplicity of issues
> and confusions:
> 
> * Some projects are required to use plugins and some are not. This
>   creates classes of projects.

While this is true, there are also reasons for that. We decided to break
up the compute service into distinct parts years ago to help let each
part grow dedicated expertise (images, networking, block storage).
However, there is a ton of coupling here even though these are broken up.

My continued resistance to decomposing the QA side of those projects is
that getting that integration testing right, and debugging it, is hard,
because there are so many interactions required to have a working server
started. And Nova, Neutron, and Cinder are the top three most active
projects in OpenStack, so the rate of change of each is quite high.
Forcing those services out into plugins because of the feeling that
something doesn't look fair on paper is just generating more work to
create spherical elephants, instead of acknowledging the amount of work
the QA team has on its shoulders, and letting it optimize for a better
experience for OpenStack users, especially given limited resources.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-06-05 Thread Anil Venkata
Thanks Kevin. I added it to get_router_ids [1], which is called when the
full_sync flag is set (i.e. when the agent is AGENT_REVIVED, updated or
started), and not in get_routers/sync_routers.

[1] https://review.openstack.org/#/c/470905/
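
For readers following the thread, the sequencing being discussed is roughly
the following (a purely illustrative sketch; none of these names exist in
neutron, see [1] and the quoted thread below for the real details):

    # Purely illustrative; invented names, not neutron code.
    class FakePluginRpc(object):
        # Stand-in for the l3 agent's RPC proxy to the neutron server.
        def set_ha_port_status_down(self, host):
            print("server: HA ports on %s marked DOWN" % host)

        def get_router_ids(self, host):
            return ["router-1", "router-2"]

    def full_sync(plugin_rpc, host):
        # 1. Before syncing, ask the server to reset this node's HA network
        #    ports to DOWN, so stale ACTIVE status from before a reboot is
        #    not trusted.
        plugin_rpc.set_ha_port_status_down(host)
        # 2. Do the normal full sync; keepalived would only be spawned once
        #    the l2 agent re-wires a port and it goes back to ACTIVE.
        for router_id in plugin_rpc.get_router_ids(host):
            print("processing %s" % router_id)

    full_sync(FakePluginRpc(), "node-1")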

On Sat, May 27, 2017 at 2:54 AM, Kevin Benton  wrote:

> I recommend a completely new RPC endpoint to trigger this behavior that
> the L3 agent calls before sync routers. Don't try to add it to sync routers
> which is already quite complex. :)
>
> On Fri, May 26, 2017 at 7:53 AM, Anil Venkata 
> wrote:
>
>> Thanks Kevin, Agree with you. I will try to implement this suggestion.
>>
>> On Fri, May 26, 2017 at 7:01 PM, Kevin Benton  wrote:
>>
>>> Just triggering a status change should be handled as a port update on
>>> the agent side, which shouldn't interrupt any existing flows. So an l3
>>> agent reboot should be safe in this case.
>>>
>>> On May 26, 2017 6:06 AM, "Anil Venkata"  wrote:
>>>
 On Fri, May 26, 2017 at 6:14 PM, Kevin Benton  wrote:

> Perhaps when the L3 agent starts up we can have it explicitly set the
> port status to DOWN for all of the HA ports on that node. Then we are
> guaranteed that when they go to ACTIVE it will be because the L2 agent has
> wired the ports.
>
>
 Thanks Kevin. Will it create a dependency of the data plane on the control
 plane? For example, if the node is properly configured (l2 agent wired up,
 keepalived configured, VRRP exchange happening) but the user restarted only
 the l3 agent, then with this suggestion won't we break l2 connectivity
 (leading to multiple HA masters) by reconfiguring again?

 Or is there a way the server can detect that the node (not only the agent)
 is down and set the port status?


> On Fri, May 26, 2017 at 5:27 AM, Anil Venkata 
> wrote:
>
>> This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
>> To fix this earlier, we added code [1] to spawn keepalived only when
>> the HA network port status is active.
>>
>> But on reboot, the node will get the HA network port's status as ACTIVE
>> from the server (please see comment [2]), even though the l2 agent might
>> not have wired [3] the port, resulting in keepalived being spawned. Any
>> suggestions on how the l3 agent can detect that the l2 agent has not
>> wired the port and then avoid spawning keepalived?
>>
>> [1] https://review.openstack.org/#/c/357458/
>> [2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
>> [3] l2 agent wiring means setting up ovs flows on br-tun to make the
>> port usable
>>
>> Thanks
>> Anilvenkata
>>
>> 


Re: [openstack-dev] [nova] Fix an issue with resolving citations in nova-specs

2017-06-05 Thread Sean Dague
On 06/05/2017 01:06 AM, Takashi Natsume wrote:
> Hi, Nova developers.
> 
> The version of sphinx has been capped (*1)
> in order to fix an issue (*2, *3) with resolving citations in nova-specs.
> 
> But IMO, it is better to fix citations in specs (*4)
> rather than capping the sphinx version.

Thank you for the patch, I just merged *4.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [infra] Update for global requirements

2017-06-05 Thread Gary Kotton
Thanks!
It seems like https://review.openstack.org/440882 is the problem. Now that we 
have resolved the issues, we should add the project back.
I have proposed patches to do so.
Thanks
Gary

On 6/4/17, 4:15 PM, "Jeremy Stanley"  wrote:

On 2017-06-04 11:35:16 + (+), Gary Kotton wrote:
> The bot that runs for the requirements update sometimes does not
> update the vmware-nsx project. I am not sure if this happens with
> other projects. Does anyone know how we can trouble shoot this?

I assume you mean the openstack/vmware-nsxlib repository? I see a
proposed update to it as of 04:40 UTC today:

https://review.openstack.org/470566

If you're talking about the actual openstack/vmware-nsx (not ...lib)
repository, it's not listed in the projects.txt file in
openstack/requirements:

http://git.openstack.org/cgit/openstack/requirements/tree/projects.txt

Looks like it was removed a few months ago:

https://review.openstack.org/440882

-- 
Jeremy Stanley




Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-06-05 Thread zengchen
Hi all:
The Fuxi-golang repository has been set up! We can now submit bugs and 
blueprints [1] through Launchpad.
To Antoni Segura: this patch [2] will need your +1 to be merged, so could 
you please take a look at it? Thanks very much!


[1] https://launchpad.net/fuxi-golang
[2] https://review.openstack.org/#/c/470111/


Best Wishes
zengchen





At 2017-05-31 23:55:15, "Hongbin Lu"  wrote:


Please find my replies inline.

 

Best regards,

Hongbin

 

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 9:56 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

 

 

 

On 30 May 2017 at 15:26, Hongbin Lu  wrote:

Please consider leveraging Fuxi instead.

 

Is there a missing functionality from rexray?

 

[Hongbin Lu] From my understanding, Rexray targets the overcloud use cases and 
assumes that containers are running on top of nova instances. You mentioned 
Magnum is leveraging Rexray for Cinder integration. Actually, I am the core 
reviewer who reviewed and approved those Rexray patches. From what I observed, 
the functionality provided by Rexray is minimal. What it does is simply call 
the Cinder API to search for an existing volume, attach the volume to the Nova 
instance, and let docker bind-mount the volume into the container. At the time 
I was testing it, it seemed to have some mystery bugs that prevented me from 
getting the cluster to work. It was packaged as a large container image, which 
might take more than 5 minutes to pull down. With that said, Rexray might be a 
choice for someone who is looking for a cross-cloud-provider solution. Fuxi 
will focus on OpenStack and targets both overcloud and undercloud use cases. 
That means Fuxi can work with Nova+Cinder or with a standalone Cinder. As John 
pointed out in another reply, another benefit of Fuxi is that it resolves the 
fragmentation problem of existing solutions. Those are the differentiators of 
Fuxi.

 

The Kuryr/Fuxi team is working very hard to deliver the docker network/storage 
plugins. I hope you will work with us to get them integrated with 
Magnum-provisioned clusters.

 

Patches are welcome to support fuxi as an *option* instead of rexray, so users 
can choose.

 

Currently, COE clusters provisioned by Magnum are far from 
enterprise-ready. I think the Magnum project will be better off if it can adopt 
Kuryr/Fuxi, which will give you better OpenStack integration.

 

Best regards,

Hongbin

 

fuxi feature request: Add authentication using a trustee and a trustID.

 

[Hongbin Lu] I believe this is already supported.

 

Cheers,
Spyros

 

 

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: May-30-17 7:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

 

FYI, there is already a cinder volume driver for docker available, written
in golang, from rexray [1].

Our team recently contributed to libstorage [2]; it could support manila too.
Rexray also supports the popular cloud providers.

Magnum's docker swarm cluster driver already leverages rexray for cinder
integration [3].

Cheers,
Spyros 

 

[1] https://github.com/codedellemc/rexray/releases/tag/v0.9.0 

[2] https://github.com/codedellemc/libstorage/releases/tag/v0.6.0

[3] 
http://git.openstack.org/cgit/openstack/magnum/tree/magnum/drivers/common/templates/swarm/fragments/volume-service.sh?h=stable/ocata

 

On 27 May 2017 at 12:15, zengchen  wrote:

Hi John & Ben:

 I have committed a patch [1] to add a new repository to OpenStack. Please 
take a look at it. Thanks very much!

 

 [1]: https://review.openstack.org/#/c/468635

 

Best Wishes!

zengchen






在 2017-05-26 21:30:48,"John Griffith"  写道:

 

 

On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:

 

Hi john:

I have seen your updates on the bp. I agree with your plan on how to 
develop the code.

However, there is one issue I have to remind you of: at present, Fuxi can 
provide not only Cinder volumes but also Manila files to Docker. So, do you 
plan to include the Manila part of the code in the new Fuxi-golang?

Agreed, that's a really good and important point. Yes, I believe Ben 
Swartzlander is interested; we can check with him to make sure, but I 
certainly hope that Manila would be interested.

Besides, IMO, it is better to create a repository for Fuxi-golang, because
Fuxi is an OpenStack project.

Yeah, that seems fine; I just didn't know if there needed to be any more 
conversation with other folks on any of this before charging ahead on new 
repos etc. Doesn't matter much to me though.

 

 

   Thanks very much!

 

Best Wishes!

zengchen

 


At 2017-05-25 22:47:29, "John Griffith"  wrote:

 

 

On Thu, May 25, 2017 

Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Dmitry Tantsur

On 06/05/2017 10:55 AM, Flavio Percoco wrote:

On 05/06/17 10:29 +0200, Emilien Macchi wrote:

On Mon, Jun 5, 2017 at 8:59 AM, Sagi Shnaidman  wrote:

Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these container
patches. The knowledge gap that's accumulated here is pretty big.


This is not the first time I'm hearing this, indeed it would be super useful.


Agreed that some deep dive should be organized. There's some documentation
already, though. It may need to be updated in some areas and I'm sure it won't
be enough.

Flavio

https://docs.openstack.org/developer/tripleo-docs/containers_deployment/


Note that 
https://docs.openstack.org/developer/tripleo-docs/containers_deployment/architecture.html#docker-specific-settings 
is apparently already outdated, at least around the "kolla_config" part 
(that's what I was told in review; don't shoot the messenger, please).





Thanks

On Jun 5, 2017 03:39, "Dan Prince"  wrote:


Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization efforts feel
free to add them here too.

Thanks,

Dan






--
Emilien Macchi



Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Emilien Macchi
On Mon, Jun 5, 2017 at 2:34 AM, Dan Prince  wrote:
> Hi,
>
> Any help reviewing the following patches for the overcloud
> containerization effort in TripleO would be appreciated:
>
> https://etherpad.openstack.org/p/tripleo-containers-todo

Nice summary, it really helps. Thanks.
Maybe we could prioritize the items that need to be reviewed ASAP, in case 
that is required for you to make faster progress.

> If you've got new services related to the containerization efforts feel
> free to add them here too.
>
> Thanks,
>
> Dan
>



-- 
Emilien Macchi



Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Flavio Percoco

On 05/06/17 10:29 +0200, Emilien Macchi wrote:

On Mon, Jun 5, 2017 at 8:59 AM, Sagi Shnaidman  wrote:

Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these container
patches. The knowledge gap that's accumulated here is pretty big.


This is not the first time I'm hearing this, indeed it would be super useful.


Agreed that some deep dive should be organized. There's some documentation
already, though. It may need to be updated in some areas and I'm sure it won't
be enough.

Flavio

https://docs.openstack.org/developer/tripleo-docs/containers_deployment/


Thanks

On Jun 5, 2017 03:39, "Dan Prince"  wrote:


Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization efforts feel
free to add them here too.

Thanks,

Dan






--
Emilien Macchi



--
@flaper87
Flavio Percoco




Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Emilien Macchi
On Mon, Jun 5, 2017 at 8:59 AM, Sagi Shnaidman  wrote:
> Hi
> I think a "deep dive" about containers in TripleO and some helpful
> documentation would help a lot for valuable reviews of these container
> patches. The knowledge gap that's accumulated here is pretty big.

This is not the first time I'm hearing this, indeed it would be super useful.

> Thanks
>
> On Jun 5, 2017 03:39, "Dan Prince"  wrote:
>>
>> Hi,
>>
>> Any help reviewing the following patches for the overcloud
>> containerization effort in TripleO would be appreciated:
>>
>> https://etherpad.openstack.org/p/tripleo-containers-todo
>>
>> If you've got new services related to the containerization efforts feel
>> free to add them here too.
>>
>> Thanks,
>>
>> Dan
>>



-- 
Emilien Macchi



[openstack-dev] [kuryr] weekly IRC meeting cancelled today

2017-06-05 Thread Antoni Segura Puimedon
Hi Kuryrs,

Today Irena and I are attending the OpenStack Israel day and won't be
able to chair the meeting. We can catch up tomorrow on IRC during the
day.

Toni



Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-05 Thread Sagi Shnaidman
Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these container
patches. The knowledge gap that's accumulated here is pretty big.

Thanks

On Jun 5, 2017 03:39, "Dan Prince"  wrote:

> Hi,
>
> Any help reviewing the following patches for the overcloud
> containerization effort in TripleO would be appreciated:
>
> https://etherpad.openstack.org/p/tripleo-containers-todo
>
> If you've got new services related to the containerization efforts feel
> free to add them here too.
>
> Thanks,
>
> Dan
>