Re: [openstack-dev] [vitrage] entity graph layout

2016-08-06 Thread Afek, Ifat (Nokia - IL)
Hi,

It is possible to adjust the layout of the graph. You can double-click on a 
vertex and it will remain pinned to its place. You can then move the pinned 
vertices around to adjust the graph layout.

Hope this helps, and let us know if you need additional help with your demo.

Best Regards,
Ifat.


From: Yujun Zhang
Date: Friday, 5 August 2016 at 09:32
Hi, all,

I'm building a demo of vitrage. The dynamic entity graph looks interesting.

But when more entities are added, things become crowded and the links cross 
over each other. Dragging the items does not help much.

Is it possible to adjust the layout so I can get a more regular/stable tree 
view of the entities?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] HA with only one node.

2016-08-06 Thread Adam Young

On 08/06/2016 03:20 PM, Dan Prince wrote:

On Sat, 2016-08-06 at 13:21 -0400, Adam Young wrote:

As I try to debug Federation problems, I am often finding I have to
check three nodes to see where the actual request was processed. However,
if I close down two of the controller nodes in Nova, the whole thing just
fails.


So, while that in itself is a problem, what I would like to be able
to do in development is have HA running, but with only a single
controller node answering requests.  How do I do that?

I have a $HOME/custom.yaml environment file which contains this:

parameters:
   ControllerCount: 1

If you do something similar and then include that environment in your
--environments list you should end up with just a single controller.

Do this in addition to using environments/puppet-pacemaker.yaml and you
should have "single node HA" (aka pacemaker on a single controller).

Cool, will try it.

I am still kind of doing trial and error on a cluster where I've made 
changes on the nodes.   Not ready to tear them down.  But the fact that 
killing a node means that web calls fail means that HAProxy is not 
sufficient to give us HA.  Is there something I can do with the load 
balancer, or something if I shut down two of the nodes, to keep things 
running?





Dan










[openstack-dev] [infra][tripleo] status of tripleo-test-cloud-rh1

2016-08-06 Thread Paul Belanger
Greetings,

5 months ago fungi posted:

  [tripleo] becoming third party CI (was: enabling third party CI)[1]

about having the discussion of whether the existing TripleO CI should follow
our third-party integration model instead of the current implementation, which
relies on our main community Zuul/Nodepool/Jenkins servers.

The thread surfaced some pros and cons, which I encourage people to
re-read.

At the Austin summit we continued the topic of moving tripleo-ci into 3rd party
CI. Again, consensus could not be reached; however, we made some progress.  I
took on the responsibility of helping bring tripleo-test-cloud-rh1 more
in line with openstack-infra tooling.

That includes, but is not limited to:

  - Initial support for centos-7 jenkins slave (tripleo-ci)
https://review.openstack.org/#/c/312725/
  - Add centos-7 to tripleo cloud (project-config)
https://review.openstack.org/#/c/311721/
  - Revert "Revert "Migrate tripleo to centos-7"" (project-config)
https://review.openstack.org/#/c/327425/
  - Revert "Disable tripleo-test-cloud-rh1 until we have AFS mirrors" 
(project-config)
https://review.openstack.org/#/c/349659/
  - Add tripleo-test-cloud grafana dashboard
https://review.openstack.org/#/c/351251/

There were also various other reviews adding AFS mirrors for centos / epel,
updates to tripleo-ci to use our openstack-infra AFS mirrors, and general
support work for both tripleo-test-cloud-rh1 and tripleo-test-cloud-rh2.

In a short amount of time, we've made great progress with
tripleo-test-cloud-rh1, helping bring it more in line with openstack-infra
tooling.  We are not finished; there is still some private infrastructure
that tripleo-ci depends on. I am confident that in the next 3 months we should
have that all replaced with openstack community infrastructure.

However, on Friday[2] we started talking about tripleo-test-cloud-rh1 again in
#openstack-infra and found ourselves revisiting the original email. It is all
driven by the current effort from tripleo to start using more community clouds
for running tripleo-ci jobs.  Today, 3 different types of tripleo-ci jobs are
run across all our clouds; for example, there is a centos-7-2-node job. However,
tripleo-test-cloud-rh1 is today set up to accept only tripleo-ci jobs, so that
job does not run on tripleo-test-cloud-rh1.

jeblair posted the following statement:

  It feels like the tripleo cloud has been grandfathered in its current state
  for a while.  I'd just like to make sure we're being fair to everyone.  So if
  tripleo wants to run tripleo jobs, then i think we should move it to 3rd party
  ci.  I think that's a fine choice and we can continue to work together
  (please!) but with better division of responsibilities.  Or, if we want to
  revise the idea of a multi-provider hardware platform that's available for all
  openstack projects, i'm game for that.  It would be great, but more work.

Should we continue the push to move tripleo-test-cloud-rh1 to 3rd party CI
(removing from nodepool.o.o) or do we start enabling more jobs on
tripleo-test-cloud-rh1 bringing the cloud even more into openstack-infra?

My personal thoughts, as somebody who's been working on it for the last 4
months: I still feel tripleo-test-cloud-rh1 should move to 3rd party CI.
However, with the work done in the last 4 months, I believe
tripleo-test-cloud-rh1 _could_ start running additional jobs based on the work
above.

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-March/088988.html
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-08-05.log.html#t2016-08-05T23:07:35



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Ben Swartzlander

On 08/06/2016 06:11 PM, Jeremy Stanley wrote:

On 2016-08-06 17:51:02 -0400 (-0400), Ben Swartzlander wrote:
[...]

when it's no longer possible to run dsvm jobs on them (because those jobs
WILL eventually break as infra stops maintaining support for very
old releases) then we simply remove those jobs and rely on vendor
CI + minimal upstream tests (pep8, unit tests).


This suggestion has been resisted in the past as it's not up to our
community's QA standards, and implying there is "support" when we
can no longer test that changes don't cause breakage is effectively
dishonest. In the past we've held that if a branch is no longer
testable, then there's not much reason to collaborate on code
reviewing proposed backports in the first place. If we're reducing
these branches to merely a holding place for "fixes" that "might
work" it doesn't sound particularly beneficial.


Well, this was the whole point, and the reason I suggested using a 
branch other than stable/release. Keeping the branches open 
for driver bugfix backports is only valuable if we can go 5 releases back.


I agree the level of QA we can do decreases as releases get older, and 
nobody expects the Infra team to keep devstack-gate running on such old 
releases. However, vendors and distros DO support such old releases, and 
the proposal to create these branches is largely to simplify the 
distribution of bugfixes from vendors to customers and distros.


Compare this proposal to the status quo, which is that several vendors 
effectively maintain forks of Cinder on github or other public repos 
just to have a place to distribute bugfixes on old releases. Distros 
either need to know about these repos or do the backports from master 
themselves when taking bugfixes into old releases.


-Ben




Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Jeremy Stanley
On 2016-08-06 17:51:02 -0400 (-0400), Ben Swartzlander wrote:
[...]
> when it's no longer possible to run dsvm jobs on them (because those jobs
> WILL eventually break as infra stops maintaining support for very
> old releases) then we simply remove those jobs and rely on vendor
> CI + minimal upstream tests (pep8, unit tests).

This suggestion has been resisted in the past as it's not up to our
community's QA standards, and implying there is "support" when we
can no longer test that changes don't cause breakage is effectively
dishonest. In the past we've held that if a branch is no longer
testable, then there's not much reason to collaborate on code
reviewing proposed backports in the first place. If we're reducing
these branches to merely a holding place for "fixes" that "might
work" it doesn't sound particularly beneficial.
-- 
Jeremy Stanley



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Ben Swartzlander

On 08/06/2016 11:31 AM, Sean McGinnis wrote:

This may mostly be a Cinder concern, but putting it out there to get
wider input.

For some time now there has been some debate about moving third party
drivers in Cinder to be out of tree. I won't go into that too much,
other than to point out one of the major drivers for this desire that
was brought up at our recent Cinder midcycle.

It turned out at least part of the desire to move drivers out of tree
came down to the difficulty in getting bug fixes out to end users that
were on older stable versions, whether because that's what their distro
was still using, or because of some other internal constraint that
prevented them from upgrading.

A lot of times what several vendors ended up doing is forking Cinder to
their own github repo and keeping that in sync with backports, plus
including driver fixes they needed to get out to their end users. This
has a few drawbacks:

1- this is more work for the vendor to keep this fork up to date
2- end users don't necessarily know where to go to find these without
   calling in to a support desk (that then troubleshoots a known issue
   and hopefully eventually ends up contacting the folks internally that
   actually work on Cinder that know it's been fixed and where to get
   the updates). Generally a bad taste for someone using Cinder and
   OpenStack.
3- Distros that package stable branches aren't able to pick up these
   changes, even if they are picking up stable branch updates for
   security fixes
4- We end up with a lot of patches proposed against security only stable
   branches that we need to either leave or abandon, just so a vendor
   can point end users to the patch to be able to grab the code changes

Proposed Solution
-----------------

So part of our discussion at the midcycle was a desire to open up stable
restrictions for getting these driver bugfixes backported. At the time,
we had discussed having new branches created off of the stable branches
specifically for driver bugfixes. Something like:

stable/mitaka -> stable/mitaka-drivers

After talking to the infra team, this really did sound like overkill.
The suggestion was to just change our stable policy in regards to driver
bugfix backports. No need to create and maintain more branches. No need
to set up gate jobs and things like that.

So this is a divergence from our official policy. I want to propose that
we officially make a change to our stable policy to call out that
driver bugfixes (NOT new driver features) be allowed at any time.

If that's not OK with other project teams that support any kind of third
party drivers, I will just implement this policy specific to Cinder,
unless there is a very strong objection, with good logic behind it, as to
why this should not be allowed.

This would address a lot of the concerns at least within Cinder and
allow us to better support users stuck on older releases.

I'm open to and welcome any feedback on this. Unless there are any major
concerns raised, I will at least instruct the Cinder stable cores to
start allowing these bugfix patches through past the security-only
phase.


The only issue I see with this modified proposal is that it doesn't 
address the lifetime of the stable branches. If the plan is to use the 
normal stable branch instead of making a special branch, then we also 
need to find a way to keep stable branches around for practically 
forever (way longer than the typical 12 months).


Those of us dealing with bugfix backports for customers inevitably are 
looking at going 3, 4, or 5 releases back with the backports. Therefore 
I'd suggest modifying the policy to keep the stable branches around more 
or less forever, and when it's no longer possible to run dsvm jobs on them 
(because those jobs WILL eventually break as infra stops maintaining 
support for very old releases) then we simply remove those jobs and rely 
on vendor CI + minimal upstream tests (pep8, unit tests).


-Ben


Thanks!

Sean McGinnis (smcginnis)








Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-06 Thread Jeremy Stanley
On 2016-08-06 14:44:27 -0600 (-0600), Doug Wiegley wrote:
> I would be tempted to make a custom image, and ask to put it on
> our mirrors, or have nodepool manage the image building and
> storing.

Some projects (I think at least Ironic and Trove) have CI jobs to
build custom virtual machine images that they then boot under nova in
DevStack jobs. At the moment the image build jobs are
uploading to tarballs.openstack.org and the test jobs are consuming
them from there.

> You can also likely just have the module on the local mirrors,
> which would alleviate the random internet issue.
[...]

We've discussed this, and I think it makes sense. If we move our
tarballs site into AFS, then we could serve its contents from our
local AFS cache mirrors in each provider for improved performance.
This may not work well for exceptionally large images due to the
time it takes to pull them into the AFS cache over the Internet, but
some experimentation with small and infrequently-updated custom disk
images seems like it could prove worthwhile.
-- 
Jeremy Stanley



Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-06 Thread Doug Wiegley
I would be tempted to make a custom image and ask to put it on our mirrors, or 
have nodepool manage the image building and storing.

You can also likely just have the module on the local mirrors, which would 
alleviate the random-internet issue.

Bigger OSes with nested virt are kind of a pain.

Doug


> On Aug 5, 2016, at 3:37 PM, Kevin Benton  wrote:
> 
> Hi,
> 
> In neutron there is a new feature under active development to allow a VM to 
> attach to many networks via its single interface using VLAN tags.
> 
> We would like this to be tested in a scenario test in the gate, but in order 
> to do that the guest instance must have support for VLAN tags (the 8021q 
> kernel module for Linux VMs). Cirros does not ship with this module so I have 
> a few questions.
> 
> Do any other projects need to load a kernel module for a specific test? If 
> not, where would the best place be to store the module so we can load it for 
> that test; or, should we download it directly from the Internet (worried 
> about the stability of this)?
> 
> Thanks, 
> Kevin Benton
> 


Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-06 Thread Mooney, Sean K

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, August 5, 2016 10:37 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [infra][neutron] - best way to load 8021q kernel 
module into cirros


Hi,

In neutron there is a new feature under active development to allow a VM to 
attach to many networks via its single interface using VLAN tags.
 [Mooney, Sean K] In this case I take it that you want to create a scenario 
test that will cover the VLAN-aware VMs work, is that correct?

We would like this to be tested in a scenario test in the gate, but in order to 
do that the guest instance must have support for VLAN tags (the 8021q kernel 
module for Linux VMs). Cirros does not ship with this module so I have a few 
questions.
[Mooney, Sean K] Is there a reason you cannot use an Ubuntu or CentOS cloud 
image for the guest for this test?
Both would require the VM flavor to have at least 256 MB of RAM, but I think 
that should be fine.

Do any other projects need to load a kernel module for a specific test? If not, 
where would the best place be to store the module so we can load it for that 
test; or, should we download it directly from the Internet (worried about the 
stability of this)?
[Mooney, Sean K] How big is it? Would it fit on a config drive, or could you 
retrieve it via the metadata service?
Looking at https://bugs.launchpad.net/cirros/+bug/1605832, they are suggesting 
adding a get-kernel-module command, but if the module is small
you could just store it in the metadata service/config drive, or even Swift, 
and curl it locally and run insmod to insert it.





Thanks,
Kevin Benton


Re: [openstack-dev] [tripleo] HA with only one node.

2016-08-06 Thread Dan Prince
On Sat, 2016-08-06 at 13:21 -0400, Adam Young wrote:
> As I try to debug Federation problems, I am often finding I have to
> check three nodes to see where the actual request was processed. However,
> if I close down two of the controller nodes in Nova, the whole thing just
> fails.
> 
> 
> So, while that in itself is a problem, what I would like to be able
> to do in development is have HA running, but with only a single
> controller node answering requests.  How do I do that?

I have a $HOME/custom.yaml environment file which contains this:

parameters:
  ControllerCount: 1

If you do something similar and then include that environment in your
--environments list you should end up with just a single controller.

Do this in addition to using environments/puppet-pacemaker.yaml and you
should have "single node HA" (aka pacemaker on a single controller).
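Putting the recipe above together end to end might look like the following sketch. Note the thread only mentions an "--environments list"; the `openstack overcloud deploy --templates -e` invocation shown here is the usual TripleO CLI form, and the exact flags depend on your release:

```shell
# Write the custom environment file that pins the overcloud to one controller,
# matching the snippet quoted in the thread.
cat > "$HOME/custom.yaml" <<'EOF'
parameters:
  ControllerCount: 1
EOF

# Then include it, together with the pacemaker environment, at deploy time
# (commented out here; run against a real undercloud):
# openstack overcloud deploy --templates \
#   -e environments/puppet-pacemaker.yaml \
#   -e "$HOME/custom.yaml"

# Show what we wrote.
grep 'ControllerCount' "$HOME/custom.yaml"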

Dan




[openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-06 Thread Mooney, Sean K
Hi, just a quick FYI.

About 2 weeks ago I did some light testing with the conntrack security group 
driver and the newly
merged userspace conntrack support in OVS.

I can confirm that, at least from my initial smoke tests, where I
used netcat, ping and ssh to try to establish connections between two VMs, the
conntrack security group driver appears to function correctly with the 
userspace connection tracker.

We have not looked at the performance yet, but assuming it is at an 
acceptable level I am planning to
deprecate the learn-action-based driver in networking-ovs-dpdk and remove it 
once we have cut the stable/newton
branch.

We hope to do some RFC 2544 throughput testing to evaluate the performance 
sometime mid-September.
Assuming all goes well, I plan on enabling the conntrack-based security group 
driver by default when the
networking-ovs-dpdk devstack plugin is loaded. We will also evaluate enabling 
the security group tests
in our third-party CI to ensure it continues to function correctly with 
ovs-dpdk.

Regards
Seán



Re: [openstack-dev] [Nova] Some thoughts on API microversions

2016-08-06 Thread Matt Riedemann

On 8/3/2016 7:54 PM, Andrew Laski wrote:

I've brought some of these thoughts up a few times in conversations
where the Nova team is trying to decide if a particular change
warrants a microversion. I'm sure I've annoyed some people by this
point because it wasn't germane to those discussions. So I'll lay
this out in its own thread.

I am a fan of microversions. I think they work wonderfully to
express when a resource representation changes, or when different
data is required in a request. This allows clients to make the same
request across multiple clouds and expect the exact same response
format, assuming those clouds support that particular microversion. I
also think they work well to express that a new resource is
available. However, I do think they have some shortcomings in
expressing that a resource has been removed. But in short I think
microversions work great for expressing that there have been changes
to the structure and format of the API.
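The interoperability guarantee described above boils down to a version-range check. The following toy sketch (an illustration, not Nova's actual negotiation code) shows the idea: a client pins one microversion and only expects identical behavior from clouds whose advertised [min, max] range covers it:

```python
# Toy sketch of client-side microversion checking; function names are
# made up for illustration, not taken from Nova.

def parse(mv):
    """Turn a microversion string like '2.32' into a comparable tuple."""
    major, minor = mv.split(".")
    return (int(major), int(minor))

def cloud_supports(requested, server_min, server_max):
    """True if the cloud's advertised range covers the pinned microversion."""
    return parse(server_min) <= parse(requested) <= parse(server_max)

# A client pinned to 2.32 gets the same request/response format from any
# cloud that passes this check; that is the interoperability guarantee.
print(cloud_supports("2.32", "2.1", "2.38"))  # True: range covers 2.32
print(cloud_supports("2.32", "2.1", "2.25"))  # False: cloud too old
```

Note the tuple comparison: naive string comparison would wrongly order "2.9" after "2.10", which is why versions are parsed numerically.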

I think microversions are being overused as a signal for other types
of changes in the API because they are the only tool we have
available. The most recent example is a proposal to allow the
revert_resize API call to work when a resizing instance ends up in an
error state. I consider microversions to be problematic for changes
like that because we end up in one of two situations:

1. The microversion is a signal that the API now supports this
action, but users can perform the action at any microversion. What
this really indicates is that the deployment being queried has
upgraded to a certain point and has a new capability. The structure
and format of the API have not changed so an API microversion is the
wrong tool here. And the expected use of a microversion, in my
opinion, is to demarcate that the API is now different at this
particular point.

2. The microversion is a signal that the API now supports this
action, and users are restricted to using it only on or after that
microversion. In many cases this is an artificial constraint placed
just to satisfy the expectation that the API does not change before
the microversion. But the reality is that if the API change was
exposed to every microversion it does not affect the ability I lauded
above of a client being able to send the same request and receive the
same response from disparate clouds. In other words exposing the new
action for all microversions does not affect the interoperability
story of Nova which is the real use case for microversions. I do
recognize that the situation may be more nuanced and constraining the
action to specific microversions may be necessary, but that's not
always true.

In case 1 above I think we could find a better way to do this. And I
don't think we should do case 2, though there may be special cases
that warrant it.

As possible alternate signalling methods I would like to propose the
following for consideration:

Exposing capabilities that a user is allowed to use. This has been
discussed before and there is general agreement that this is
something we would like in Nova. Capabilities will programmatically
inform users that a new action has been added or an existing action
can be performed in more cases, like revert_resize. With that in
place we can avoid the ambiguous use of microversions to do that. In
the meantime I would like the team to consider not using
microversions for this case. We have enough of them being added that
I think for now we could just wait for the next microversion after a
capability is added and document the new capability there.

Secondly we could consider some indicator that exposes how new the
code in a deployment is. Rather than using microversions as a proxy
to indicate that a deployment has hit a certain point perhaps there
could be a header that indicates the date of the last commit in that
code. That's not an ideal way to implement it but hopefully it makes
it clear what I'm suggesting. Some marker that a user can use to
determine that a new behavior is to be expected, but not one that's
more intended to signal structural API changes.

Thoughts?

-Andrew




I probably can't add much to what's already been said elsewhere in this
thread. I agree microversions aren't a great fit for all things, and
they are especially tricky around behavior changes, like allowing rescue
of a volume-backed instance, or revert_resize of an instance in error state.

We have done a couple in Newton (2.32, 2.37) that are based on (1) 
request schema changes and (2) minimum nova-compute service version in 
the deployment. The (2) part of that is really fudging capabilities. For 
example, 2.32 will work if the minimum nova-compute is new enough AND 
the compute you hit is libvirt or 

[openstack-dev] [tripleo] HA with only one node.

2016-08-06 Thread Adam Young
As I try to debug Federation problems, I am often finding I have to check 
three nodes to see where the actual request was processed. However, if 
I close down two of the controller nodes in Nova, the whole thing just fails.



So, while that in itself is a problem, what I would like to be able to 
do in development is have HA running, but with only a single controller 
node answering requests.  How do I do that?





Re: [openstack-dev] Why do we need a Requirements Team and PTL

2016-08-06 Thread Jordan Pittier
On Sat, Aug 6, 2016 at 6:16 PM, Anita Kuno  wrote:

> On 16-08-06 10:34 AM, Davanum Srinivas wrote:
>
>> Folks,
>>
>> Question asked by Julien here:
>> https://twitter.com/juldanjou/status/761897228596350976
>>
>> Answer:
>> There's a boat load of work that goes on in global requirements
>> process. Here's the list of things that we dropped on the new team
>> being formed:
>> https://etherpad.openstack.org/p/requirements-tasks
>>
>> Please feel free to look at the requirements repo, weekly chats etc to
>> get an idea.
>>
>> Also, if you disagree, please bring it up in a community forum so
>> you get better answers for your concerns.
>>
>> Thanks,
>> Dims
>>
> I have to say that I am disappointed that, when a community member felt like
> questioning a community action, that question did not occur in a community
> channel of communication prior to action being taken.
>
> The election workflow was posted to the mailing list a week prior to the
> election commencing. Questioning the purpose of an election while an
> election is taking place fosters distrust in the election process, which I
> feel is unfair to all those participating in the process with good intent.
>
> If you have questions about the purpose of an election please voice them
> at the appropriate time and venue so your concerns can be addressed.
>
> Thank you,
> Anita.


We should all relax. This is Twitter; who doesn't like a couple of
"retweets" and "likes"? I am pretty sure most of us know the role of the
Requirements project.

-- 
 


Re: [openstack-dev] Why do we need a Requirements Team and PTL

2016-08-06 Thread Anita Kuno

On 16-08-06 10:34 AM, Davanum Srinivas wrote:

Folks,

Question asked by Julien here:
https://twitter.com/juldanjou/status/761897228596350976

Answer:
There's a boat load of work that goes on in global requirements
process. Here's the list of things that we dropped on the new team
being formed:
https://etherpad.openstack.org/p/requirements-tasks

Please feel free to look at the requirements repo, weekly chats etc to
get an idea.

Also, if you disagree, please bring it up in a community forum so
you get better answers for your concerns.

Thanks,
Dims

I have to say that I am disappointed that, when a community member felt 
like questioning a community action, that question did not occur in a 
community channel of communication prior to action being taken.


The election workflow was posted to the mailing list a week prior to the 
election commencing. Questioning the purpose of an election while an 
election is taking place fosters distrust in the election process, which 
I feel is unfair to all those participating in the process with good 
intent.


If you have questions about the purpose of an election please voice them 
at the appropriate time and venue so your concerns can be addressed.


Thank you,
Anita.



Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Duncan Thomas
+1 from me.

Sounds like the best solution to at least part of the problem that was
causing people to want to pull the drivers out of tree.

On 6 Aug 2016 18:49, "Philipp Marek"  wrote:

> > I want to propose
> > we officially make a change to our stable policy to call out that
> > driver bugfixes (NOT new driver features) be allowed at any time.
> Emphatically +1 from me.
>
>
> With the small addendum that "bugfixes" should include compatibility
> changes for libraries used.
>
>
> Thanks for bringing that up!
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Philipp Marek
> I want to propose
> we officially make a change to our stable policy to call out that
> drivers bugfixes (NOT new driver features) be allowed at any time.
Emphatically +1 from me.


With the small addendum that "bugfixes" should include compatibility
changes for libraries used.


Thanks for bringing that up!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-06 Thread Sean McGinnis
This may mostly be a Cinder concern, but putting it out there to get
wider input.

For some time now there has been some debate about moving third party
drivers in Cinder to be out of tree. I won't go into that too much,
other than to point out one of the major drivers for this desire that
was brought up at our recent Cinder midcycle.

It turned out at least part of the desire to move drivers out of tree
came down to the difficulty in getting bug fixes out to end users that
were on older stable versions, whether because that's what their distro
was still using, or because of some other internal constraint that
prevented them from upgrading.

A lot of times what several vendors ended up doing is forking Cinder to
their own github repo and keeping that in sync with backports, plus
including driver fixes they needed to get out to their end users. This
has a few drawbacks:

1- this is more work for the vendor to keep this fork up to date
2- end users don't necessarily know where to go to find these without
   calling in to a support desk (that then troubleshoots a known issue
   and hopefully eventually ends up contacting the folks internally that
   actually work on Cinder, who know it's been fixed and where to get
   the updates). Generally this leaves a bad taste for someone using
   Cinder and OpenStack.
3- Distros that package stable branches aren't able to pick up these
   changes, even if they are picking up stable branch updates for
   security fixes
4- We end up with a lot of patches proposed against security only stable
   branches that we need to either leave or abandon, just so a vendor
   can point end users to the patch to be able to grab the code changes

Proposed Solution
-----------------

So part of our discussion at the midcycle was a desire to open up stable
restrictions for getting these driver bugfixes backported. At the time,
we had discussed having new branches created off of the stable branches
specifically for driver bugfixes. Something like:

stable/mitaka -> stable/mitaka-drivers

After talking to the infra team, this really did sound like overkill.
The suggestion was to just change our stable policy in regards to driver
bugfix backports. No need to create and maintain more branches. No need
to set up gate jobs and things like that.

So this is a divergence from our official policy. I want to propose
we officially make a change to our stable policy to call out that
driver bugfixes (NOT new driver features) be allowed at any time.

If that's not OK with other project teams that support any kind of third
party drivers, I will just implement this policy specific to Cinder,
unless there is a very strong objection, with good logic behind it, as to
why this should not be allowed.

This would address a lot of the concerns at least within Cinder and
allow us to better support users stuck on older releases.
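The backport flow this policy would permit is the ordinary stable cherry-pick. A minimal sketch, using a throwaway repository standing in for openstack/cinder (repository, file, and branch names are illustrative; the final `git review` step against Gerrit is noted in a comment rather than run):

```shell
#!/bin/sh
# Toy repository standing in for openstack/cinder.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q .
git config user.email dev@example.com && git config user.name Dev

echo "v1" > driver.py
git add driver.py && git commit -qm "Initial driver"
git branch stable/mitaka                   # stable branch at the old state

echo "v2" > driver.py
git commit -qam "Fix driver data-loss bug" # bugfix lands on master first
fix=$(git rev-parse HEAD)

# Backport: cherry-pick onto the stable branch; -x records the master SHA
# in the commit message so distros can trace the fix.
git checkout -q stable/mitaka
git cherry-pick -x "$fix" >/dev/null
git show --stat --oneline HEAD             # then: git review stable/mitaka
```

Nothing here changes mechanically; the policy change is only about whether such a review is accepted on an older stable branch.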

I'm open and welcome to any feedback on this. Unless there are any major
concerns raised, I will at least instruct the Cinder stable cores to
start allowing these bugfix patches through past the security-only
phase.

Thanks!

Sean McGinnis (smcginnis)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why do we need a Requirements Team and PTL

2016-08-06 Thread Matthew Thode
On 08/06/2016 09:34 AM, Davanum Srinivas wrote:
> Folks,
> 
> Question asked by Julien here:
> https://twitter.com/juldanjou/status/761897228596350976
> 
> Answer:
> There's a boat load of work that goes on in global requirements
> process. Here's the list of things that we dropped on the new team
> being formed:
> https://etherpad.openstack.org/p/requirements-tasks
> 
> Please feel free to look at the requirements repo, weekly chats etc to
> get an idea.
> 
> Also, if you disagree, please bring it up in a community forum so
> you get better answers to your concerns.
> 
> Thanks,
> Dims
> 

To add, that list has been cleaned up since we first started, so there
was more on it than just that.

-- 
-- Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Why do we need a Requirements Team and PTL

2016-08-06 Thread Davanum Srinivas
Folks,

Question asked by Julien here:
https://twitter.com/juldanjou/status/761897228596350976

Answer:
There's a boat load of work that goes on in global requirements
process. Here's the list of things that we dropped on the new team
being formed:
https://etherpad.openstack.org/p/requirements-tasks

Please feel free to look at the requirements repo, weekly chats etc to
get an idea.

Also, if you disagree, please bring it up in a community forum so
you get better answers to your concerns.
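To make the "boat load of work" concrete: one recurring job on that task list is keeping each project's requirements consistent with the global-requirements list. A toy illustration of that kind of consistency check follows — this is a simplified sketch, not the actual openstack/requirements tooling, and the package names and pins are made up for the example:

```python
# Toy sketch of a global-requirements consistency check.
# Not the real openstack/requirements tooling.

def parse(lines):
    """Map package name -> comment-stripped specifier line."""
    reqs = {}
    for line in lines:
        line = line.split("#")[0].strip()
        if not line:
            continue
        # crude name extraction: cut at the first version operator
        name = line.split(">=")[0].split("==")[0].split("<")[0].strip()
        reqs[name.lower()] = line
    return reqs

# Stand-in for the global-requirements.txt maintained by the team
GLOBAL = parse([
    "oslo.config>=3.14.0  # Apache-2.0",
    "pbr>=1.6  # Apache-2.0",
])

def check(project_lines):
    """Return messages for project pins that diverge from global-requirements."""
    bad = []
    for name, spec in parse(project_lines).items():
        if name not in GLOBAL:
            bad.append("%s: not in global-requirements" % name)
        elif spec != GLOBAL[name]:
            bad.append("%s: %r diverges from %r" % (name, spec, GLOBAL[name]))
    return bad

print(check(["pbr>=1.6", "oslo.config>=3.0.0"]))
```

The real process also handles syncing pins back into projects, co-installability testing, and release coordination, which is why it needs a dedicated team.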

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-06 Thread Steven Dake (stdake)
Ian,

I value your input, but my concern still stands.  Amazon's compute API moves 
slowly in comparison to Docker registry's API.  Making a parity implementation 
of the Docker v2 registry API is a complex and difficult challenge.  It is much 
more significant than simply making an API.  An implementation needs to stand 
behind that API.

Regards
-steve

From: Ian Cordasco
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday, August 6, 2016 at 4:52 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project


However, interested parties could start a project like the ec2 project that is 
independently released and provides that compatibility using glare

On Aug 6, 2016 5:18 AM, "Steven Dake (stdake)" wrote:
Kevin,

Agree it would be a very useful feature; however, easier said than done.  Part 
of Docker's approach is to "move fast"; they schedule releases every 2 months.  
I'm sure the glare team is quite competent, however, keeping API parity with 
such a fast-moving project as the docker registry API is a big objective not to 
be undertaken lightly.  If there isn't complete API parity with the docker 
registry v2 API, the work wouldn't be particularly useful to many in the 
container communities inside OpenStack as Hongbin pointed out.

Regards
-steve

From: "Fox, Kevin M"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, August 5, 2016 at 2:29 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

If glare was docker repo api compatible though, I think it would be quite 
useful. then each tenant doesn't have to set one up themselves.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Friday, August 05, 2016 1:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Replied inline.

From: Mikhail Fedosin [mailto:mfedo...@mirantis.com]
Sent: August-05-16 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Thank you all for your responses!

From my side I can add that our separation is a deliberate step. We 
pre-weighed all pros and cons and our final decision was that moving forward 
as a new project is the lesser of two evils. Undoubtedly, in the short term it 
will be painful, but I believe that in the long run Glare will win.

Also, I want to say that Glare was designed as an open project and we want to 
build a good community with members from different companies. Glare is supposed 
to be a backend for Heat (and therefore TripleO), App-Catalog, Tacker and 
definitely Nova. In addition we are considering the possibility of storing 
Docker containers, which may be useful for Magnum.

[Hongbin Lu] Magnum doesn’t have any plan to store docker images at Glare, 
because COE (i.e. Kubernetes) is simply incompatible with any API other than 
docker registry. Zun might have use cases to store docker images at Glare if 
Glare is part of Glance, but I am reluctant to set a dependency on Glare if 
Glare is a totally brand-new service.

Then, I think that comparison between Image API and Artifact API is not 
correct. Moreover, in my opinion Image API imposes artificial constraints. Just 
imagine that your file system can only store images in JPG format (more 
precisely, it could store any data, but it is imperative that all files must 
have the extension ".jpg"). Likewise Glance - I can put there any data; it can 
be packages and templates, as well as video from my holiday. And this 
interface, though not ideal, may not work for all services. The artificial 
limitations that have been created make Glance uncomfortable even for 
storing images.

On the other hand Glare provides a unified interface for all possible binary 
data types. If we take the example with filesystem, in Glare's case it supports 
all file extensions, folders, history of file changes on your disk, data 
validation and conversion, import/export of files from different computers and 
so on. These features are not present in Glance and I think they never will be, 
because of deficiencies in the architecture.

Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-06 Thread John Dennis

On 08/05/2016 06:06 PM, Adam Young wrote:

Ah...just noticed the redirect is to :5000, not port :13000 which is
the HA Proxy port.


OK, this is due to the SAML request:

<samlp:AuthnRequest
    Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
    Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
    ForceAuthn="false"
    IsPassive="false"
    AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse">
  <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>
</samlp:AuthnRequest>




My guess is HA Proxy is not passing on the proper headers, and
mod_auth_mellon does not know to rewrite the URL from 5000 to 13000.


You can't change the contents of a SAML AuthnRequest, often they are 
signed. Also, the AssertionConsumerServiceURL's and other URL's in SAML 
messages are validated to assure they match the metadata associated with 
EntityID (issuer). The addresses used inbound and outbound have to be 
correctly handled by the proxy configuration without modifying the 
content of the message being passed on the transport.
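In other words, the fix belongs in the proxy and backend configuration, not in rewriting messages. A hedged sketch of one common arrangement — the addresses, cert path, and hostnames are taken from this thread and may not match your deployment; verify before applying:

```
# HAProxy: terminate TLS on the public VIP and tell the backend which
# scheme/port the client used, rather than rewriting Location headers
# or SAML payloads after the fact.
listen keystone_public
  bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
  mode http
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Port 13000
  server overcloud-controller-0 172.16.2.8:5000 check

# Apache/mod_auth_mellon (backend): generate URLs that already match the
# SP metadata, so nothing on the wire needs to be modified, e.g.:
#   ServerName https://openstack.ayoung-dell-t1700.test:13000
#   MellonEndpointPath /v3/mellon
```

The key point is that the SP metadata, the backend's self-advertised URLs, and the proxy's public endpoint must all agree on scheme, host, and port.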



--
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-06 Thread Ian Cordasco
However, interested parties could start a project like the ec2 project that
is independently released and provides that compatibility using glare.

On Aug 6, 2016 5:18 AM, "Steven Dake (stdake)"  wrote:

> Kevin,
>
> Agree it would be a very useful feature; however, easier said than done.
> Part of Docker's approach is to "move fast"; they schedule releases every 2
> months.  I'm sure the glare team is quite competent, however, keeping API
> parity with such a fast-moving project as the docker registry API is a
> big objective not to be undertaken lightly.  If there isn't complete API
> parity with the docker registry v2 API, the work wouldn't be particularly
> useful to many in the container communities inside OpenStack as Hongbin
> pointed out.
>
> Regards
> -steve
>
> From: "Fox, Kevin M" 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Friday, August 5, 2016 at 2:29 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker]
> Glare as a new Project
>
> If glare was docker repo api compatible though, I think it would be quite
> useful. then each tenant doesn't have to set one up themselves.
>
> Thanks,
> Kevin
>
> --
> *From:* Hongbin Lu [hongbin...@huawei.com]
> *Sent:* Friday, August 05, 2016 1:29 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker]
> Glare as a new Project
>
> Replied inline.
>
>
>
> *From:* Mikhail Fedosin [mailto:mfedo...@mirantis.com
> ]
> *Sent:* August-05-16 2:10 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker]
> Glare as a new Project
>
>
>
> Thank you all for your responses!
>
>
>
> From my side I can add that our separation is a deliberate step. We
> pre-weighed all pros and cons and our final decision was that moving
> forward as a new project is the lesser of two evils. Undoubtedly, in the
> short term it will be painful, but I believe that in the long run Glare
> will win.
>
>
>
> Also, I want to say, that Glare was designed as an open project and we
> want to build a good community with members from different companies. Glare
> suppose to be a backend for Heat (and therefore TripleO), App-Catalog,
> Tacker and definitely Nova. In addition we are considering the possibility
> of storage Docker containers, which may be useful for Magnum.
>
>
>
> *[Hongbin Lu] Magnum doesn’t have any plan to store docker images at
> Glare, because COE (i.e. Kubernetes) is simply incompatible with any API
> other than docker registry. Zun might have use cases to store docker images
> at Glare if Glare is part of Glance, but I am reluctant to set a dependency
> on Glare if Glare is a totally brand-new service.*
>
>
>
> Then, I think that comparison between Image API and Artifact API is not
> correct. Moreover, in my opinion Image API imposes artificial constraints.
> Just imagine that your file system can only store images in JPG format
> (more precisely, it could store any data, but it is imperative that all
> files must have the extension ".jpg"). Likewise Glance - I can put there
> any data, it can be both packages and templates, as well as video from my
> holiday. And this interface, though not ideal, may not work for all
> services. But those artificial limitations that have been created, do
> Glance uncomfortable even for storing images.
>
>
>
> On the other hand Glare provides unified interface for all possible binary
> data types. If we take the example with filesystem, in Glare's case it
> supports all file extensions, folders, history of file changes on your
> disk, data validation and conversion, import/export files from different
> computers and so on. These features are not presented in Glance and I think
> they never will, because of deficiencies in the architecture.
>
>
>
> For this reason I think Glare's adoption is important and it will be a
> huge step forward for OpenStack and the whole community.
>
>
>
> Thanks again! If you want to support us, please vote for our talk on
> Barcelona summit - https://www.openstack.org/summit/barcelona-2016/vote-
> for-speakers/ Search "Glare" and there will be our presentation.
>
>
>
> Best,
>
> Mike
>
>
>
> On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx 
> wrote:
>
>
> I don't have a strong opinion on the split vs stay discussion. It
> does seem there's been sustained if ineffective attempts to keep this
> together so I lean toward supporting the divorce.
>
> But let's not pretend there are no costs for this.
>
> On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
> :On 08/04/2016 06:40 PM, Clint Byrum wrote:
>
> 

Re: [openstack-dev] [kolla] I need help on multisite deploy

2016-08-06 Thread Steven Dake (stdake)
Lu Yao,

Please drop by #openstack-kolla and our community can help you get rolling.  
Although now isn't a particularly good time (4:30 AM PST) - I'm awake and can 
get you up and running.  My nick is sdake.

Regards,
-steve

From: "lu.yao...@zte.com.cn"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Saturday, August 6, 2016 at 12:52 AM
To: "openstack-dev@lists.openstack.org"
Subject: [openstack-dev] [kolla] I need help on multisite deploy

When I deploy multinode with Kolla, one of the hosts raises an error as 
follows, and I need help solving the problem. Can you help me? Thanks!


TASK: [neutron | Starting openvswitch-vswitchd container] *
<10.43.114.20> ESTABLISH CONNECTION FOR USER: root
<10.43.114.48> ESTABLISH CONNECTION FOR USER: root
<10.43.114.20> REMOTE_MODULE kolla_docker 
image=10.43.177.190:4000/kolla/centos-binary-openvswitch-vswitchd:2.0.3 
action=start_container name=openvswitch_vswitchd
<10.43.114.48> REMOTE_MODULE kolla_docker 
image=10.43.177.190:4000/kolla/centos-binary-openvswitch-vswitchd:2.0.3 
action=start_container name=openvswitch_vswitchd
<10.43.114.20> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s 
-o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.20 /bin/sh -c 'mkdir 
-p $HOME/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193 && echo 
$HOME/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193'
<10.43.114.48> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s 
-o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.48 /bin/sh -c 'mkdir 
-p $HOME/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895 && echo 
$HOME/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895'
<10.43.114.20> PUT /tmp/tmppMwwNZ TO 
/root/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193/kolla_docker
<10.43.114.48> PUT /tmp/tmp9l7GF6 TO 
/root/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895/kolla_docker
<10.43.114.20> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s 
-o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.20 /bin/sh -c 'LANG=C 
LC_CTYPE=C /usr/bin/python 
/root/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193/kolla_docker; rm 
-rf /root/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193/ >/dev/null 
2>&1'
<10.43.114.48> EXEC ssh -C -tt -v -o ControlMaster=auto -o ControlPersist=60s 
-o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" -o 
KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o 
PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.48 /bin/sh -c 'LANG=C 
LC_CTYPE=C /usr/bin/python 
/root/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895/kolla_docker; rm 
-rf /root/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895/ >/dev/null 
2>&1'
failed: [10.43.114.20] => {"changed": true, "failed": true}
msg: APIError(HTTPError(u'409 Client Error: Conflict for url: 
http+docker://localunixsocket/v1.23/containers/create?name=openvswitch_vswitchd',),)
changed: [10.43.114.48] => {"changed": true, "result": false}
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glare][kolla] timing of release of glare and request for technical interview on IRC wherever the glare team wants to have it

2016-08-06 Thread Steven Dake (stdake)
Hey folks,

I guess the split of glare and glance has been in the works for a while.  It is 
challenging for operational deployment managers (ODMs) such as Kolla to keep up 
with the internal goings-on of every big tent project (or projects that shave 
off from big-tent projects).  Just to be clear up front, the Kolla community 
doesn't care that glare split the work out.  The Kolla development team adapts 
very quickly to upstream changes.  What we do care about is that we present an 
accurate deployment for Newton that represents the best that OpenStack has to 
offer, and offer a seamless operational experience - within Kolla's capacity 
constraints.

I need information on when the code base will be ready to consume (from a 
tarball on tarballs.oo).  Is this planned for milestone 3 - or post Newton?  
I'd recommend post-Newton for the split to be consumable - it would ease the 
difficulty of adoption if the various ODMs (and Operators) had more than 3 
weeks to work with on what is clearly a project required by every deployment, 
based upon the threads I read.

I have some other technical questions related to how the glance registry 
disappears (I believe this point was mentioned in another thread by Jay) as 
well as the upgrade mechanism (if any is needed) that would best be served by a 
high bandwidth conversation on IRC (and those conversations are recorded on 
eavesdrop for others to benefit).

Would one of the technical cats from the glare team drop by #openstack-kolla so 
we can get a quick (30-minute) technical interview on the work, to understand 
how this change impacts OpenStack in the short term (prior to Newton) and the 
long-term layout of the two projects, so we can make a determination as to how 
to proceed technically?  I don't necessarily need to be there - any of our core 
reviewer team can handle the Q - but would like to be there if possible.

If that doesn't work for the glare team, could we get 30 minutes of agenda time 
in what I'm sure is a busy glare team meeting to have the same technical 
discussion?

If that doesn't work for the glare team, we can host the topic in the kolla 
team meeting (UTC1600 on Wednesdays) if a glare core reviewer or the glare PTL 
can stop by.

Please let me know how you wish to proceed.

TIA
-steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-06 Thread Steven Dake (stdake)
Kevin,

Agree it would be a very useful feature; however, easier said than done.  Part 
of Docker's approach is to "move fast"; they schedule releases every 2 months.  
I'm sure the glare team is quite competent, however, keeping API parity with 
such a fast-moving project as the docker registry API is a big objective not to 
be undertaken lightly.  If there isn't complete API parity with the docker 
registry v2 API, the work wouldn't be particularly useful to many in the 
container communities inside OpenStack as Hongbin pointed out.

Regards
-steve

From: "Fox, Kevin M"
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, August 5, 2016 at 2:29 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

If glare was docker repo api compatible though, I think it would be quite 
useful. then each tenant doesn't have to set one up themselves.

Thanks,
Kevin


From: Hongbin Lu [hongbin...@huawei.com]
Sent: Friday, August 05, 2016 1:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Replied inline.

From: Mikhail Fedosin [mailto:mfedo...@mirantis.com]
Sent: August-05-16 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] 
Glare as a new Project

Thank you all for your responses!

From my side I can add that our separation is a deliberate step. We 
pre-weighed all pros and cons and our final decision was that moving forward 
as a new project is the lesser of two evils. Undoubtedly, in the short term it 
will be painful, but I believe that in the long run Glare will win.

Also, I want to say that Glare was designed as an open project and we want to 
build a good community with members from different companies. Glare is supposed 
to be a backend for Heat (and therefore TripleO), App-Catalog, Tacker and 
definitely Nova. In addition we are considering the possibility of storing 
Docker containers, which may be useful for Magnum.

[Hongbin Lu] Magnum doesn’t have any plan to store docker images at Glare, 
because COE (i.e. Kubernetes) is simply incompatible with any API other than 
docker registry. Zun might have use cases to store docker images at Glare if 
Glare is part of Glance, but I am reluctant to set a dependency on Glare if 
Glare is a totally brand-new service.

Then, I think that comparison between Image API and Artifact API is not 
correct. Moreover, in my opinion Image API imposes artificial constraints. Just 
imagine that your file system can only store images in JPG format (more 
precisely, it could store any data, but it is imperative that all files must 
have the extension ".jpg"). Likewise Glance - I can put there any data; it can 
be packages and templates, as well as video from my holiday. And this 
interface, though not ideal, may not work for all services. The artificial 
limitations that have been created make Glance uncomfortable even for 
storing images.

On the other hand Glare provides a unified interface for all possible binary 
data types. If we take the example with filesystem, in Glare's case it supports 
all file extensions, folders, history of file changes on your disk, data 
validation and conversion, import/export of files from different computers and 
so on. These features are not present in Glance and I think they never will be, 
because of deficiencies in the architecture.

For this reason I think Glare's adoption is important and it will be a huge 
step forward for OpenStack and the whole community.

Thanks again! If you want to support us, please vote for our talk on Barcelona 
summit - https://www.openstack.org/summit/barcelona-2016/vote-for-speakers/ 
Search "Glare" and there will be our presentation.

Best,
Mike

On Fri, Aug 5, 2016 at 5:22 PM, Jonathan D. Proulx wrote:

I don't have a strong opinion on the split vs stay discussion. It
does seem there have been sustained if ineffective attempts to keep this
together, so I lean toward supporting the divorce.

But let's not pretend there are no costs for this.

On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
:On 08/04/2016 06:40 PM, Clint Byrum wrote:

:>But, if I look at this from a user perspective, if I do want to use
:>anything other than images as cloud artifacts, the story is pretty
:>confusing.
:
:Actually, I beg to differ. A unified OpenStack Artifacts API,
:long-term, will be more user-friendly and less confusing since a

[openstack-dev] [kolla] I need help on multisite deploy

2016-08-06 Thread lu . yao135
When I deploy multinode with Kolla, one of the hosts raises an error as 
follows, and I need help solving the problem. Can you help me? Thanks! 


TASK: [neutron | Starting openvswitch-vswitchd container] 
* 
<10.43.114.20> ESTABLISH CONNECTION FOR USER: root 
<10.43.114.48> ESTABLISH CONNECTION FOR USER: root 
<10.43.114.20> REMOTE_MODULE kolla_docker 
image=10.43.177.190:4000/kolla/centos-binary-openvswitch-vswitchd:2.0.3 
action=start_container name=openvswitch_vswitchd 
<10.43.114.48> REMOTE_MODULE kolla_docker 
image=10.43.177.190:4000/kolla/centos-binary-openvswitch-vswitchd:2.0.3 
action=start_container name=openvswitch_vswitchd 
<10.43.114.20> EXEC ssh -C -tt -v -o ControlMaster=auto -o 
ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" 
-o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey 
-o PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.20 /bin/sh -c 
'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193 && 
echo $HOME/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193' 
<10.43.114.48> EXEC ssh -C -tt -v -o ControlMaster=auto -o 
ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" 
-o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey 
-o PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.48 /bin/sh -c 
'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895 && 
echo $HOME/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895' 
<10.43.114.20> PUT /tmp/tmppMwwNZ TO 
/root/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193/kolla_docker 
<10.43.114.48> PUT /tmp/tmp9l7GF6 TO 
/root/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895/kolla_docker 
<10.43.114.20> EXEC ssh -C -tt -v -o ControlMaster=auto -o 
ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" 
-o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey 
-o PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.20 /bin/sh -c 
'LANG=C LC_CTYPE=C /usr/bin/python 
/root/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193/kolla_docker; 
rm -rf /root/.ansible/tmp/ansible-tmp-1470384034.9-178486079034193/ 
>/dev/null 2>&1' 
<10.43.114.48> EXEC ssh -C -tt -v -o ControlMaster=auto -o 
ControlPersist=60s -o ControlPath="/root/.ansible/cp/ansible-ssh-%h-%p-%r" 
-o KbdInteractiveAuthentication=no -o 
PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey 
-o PasswordAuthentication=no -o ConnectTimeout=10 10.43.114.48 /bin/sh -c 
'LANG=C LC_CTYPE=C /usr/bin/python 
/root/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895/kolla_docker; 
rm -rf /root/.ansible/tmp/ansible-tmp-1470384034.9-111241252335895/ 
>/dev/null 2>&1' 
failed: [10.43.114.20] => {"changed": true, "failed": true} 
msg: APIError(HTTPError(u'409 Client Error: Conflict for url: 
http+docker://localunixsocket/v1.23/containers/create?name=openvswitch_vswitchd',),)
 

changed: [10.43.114.48] => {"changed": true, "result": false} 
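[A hedged aside on the failure above: a 409 Conflict from the Docker API on `/containers/create` usually means a container with that name already exists on the host. Assuming it is safe to recreate the container on 10.43.114.20, one workaround sketch (not a confirmed fix for this deploy) is to remove the stale container and re-run the deploy:]

```shell
# Inspect the conflicting container on the failing host (10.43.114.20),
# then force-remove it so the deploy can recreate it.
# Hypothetical workaround sketch -- verify it is safe in your deployment first.
docker ps -a --filter name=openvswitch_vswitchd
docker rm -f openvswitch_vswitchd
```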

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo] Federation, mod_mellon, and HA Proxy

2016-08-06 Thread Juan Antonio Osorio
Adam, that should be fixed by https://review.openstack.org/#/c/341354/
which merged not too many days ago. Before that commit we had another
configuration which was already deprecated in keystone upstream.

On 6 Aug 2016 05:04, "Adam Young"  wrote:

> On 08/05/2016 06:40 PM, Fox, Kevin M wrote:
>
> --
> *From:* Adam Young [ayo...@redhat.com]
> *Sent:* Friday, August 05, 2016 3:06 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [keystone][tripleo] Federation,
> mod_mellon, and HA Proxy
>
> On 08/05/2016 04:54 PM, Adam Young wrote:
>
> On 08/05/2016 04:52 PM, Adam Young wrote:
>
> Today I discovered that we need to modify the HA proxy config to tell it
> to rewrite redirects.  Otherwise, I get a link to
>
> http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse
>
>
> Which should be https, not http.
>
>
> I mimicked the lines in the horizon config so that the keystone section
> looks like this:
>
>
> listen keystone_public
>   bind 10.0.0.4:13000 transparent ssl crt /etc/pki/tls/private/overcloud_endpoint.pem
>   bind 172.16.2.5:5000 transparent
>   mode http
>   redirect scheme https code 301 if { hdr(host) -i 10.0.0.4 } !{ ssl_fc }
>   rsprep ^Location:\ http://(.*)  Location:\ https://\1
>   http-request set-header X-Forwarded-Proto https if { ssl_fc }
>   http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
>   server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
>   server overcloud-controller-1 172.16.2.6:5000 check fall 5 inter 2000 rise 2
>   server overcloud-controller-2 172.16.2.9:5000 check fall 5 inter 2000 rise 2
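[To illustrate the gap being discussed: the `rsprep` rule above rewrites only the scheme of the Location header, so a backend redirect to port 5000 still leaks the backend port. A minimal sketch in Python, using hypothetical helper names, shows the quoted rule's behavior next to a variant that also maps the backend port to the public HA Proxy port -- illustration only, not the actual fix that merged:]

```python
import re

# Mimics the rsprep rule quoted in the thread, which rewrites only the scheme:
#   rsprep ^Location:\ http://(.*)  Location:\ https://\1
def rewrite_scheme_only(header):
    return re.sub(r'^Location: http://(.*)$', r'Location: https://\1', header)

# Hypothetical variant that also maps the backend port (5000) to the
# public HA Proxy port (13000).
def rewrite_scheme_and_port(header):
    return re.sub(r'^Location: http://([^/:]+)(:5000)?(/.*)?$',
                  r'Location: https://\1:13000\3', header)

loc = 'Location: http://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse'
print(rewrite_scheme_only(loc))      # scheme fixed, but port 5000 leaks through
print(rewrite_scheme_and_port(loc))  # scheme and port both rewritten
```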
>
> And.. it seemed to work the first time, but not the second.  Now I get
>
> "Secure Connection Failed
>
> The connection to openstack.ayoung-dell-t1700.test:5000 was interrupted
> while the page was loading."
>
> Guessing the first success was actually a transient error.
>
> So it looks like my change was necessary but not sufficient.
>
> This is needed to make mod_auth_mellon work when loaded into Apache, and
> Apache is running behind  HA proxy (Tripleo setup).
>
>
> There is no SSL setup inside the Keystone server, it is just doing
> straight HTTP.  While I'd like to change this long term, I'd like to get
> things working this way first, but am willing to make whatever changes are
> needed to get SAML and Federation working soonest.
>
>
>
>
> Ah...just noticed the redirect is to :5000, not port :13000 which is the
> HA Proxy port.
>
>
> OK, this is due to the SAML request:
>
>
>  <samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
>     xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
>     ID="_5089011BEBD0F6B82074F67E904F598D"
>     Version="2.0"
>     IssueInstant="2016-08-05T21:55:18Z"
>     Destination="https://identity.ayoung-dell-t1700.test/auth/realms/openstack/protocol/saml"
>     Consent="urn:oasis:names:tc:SAML:2.0:consent:current-implicit"
>     ForceAuthn="false"
>     IsPassive="false"
>     AssertionConsumerServiceURL="https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/postResponse"
>     >
>   <saml:Issuer>https://openstack.ayoung-dell-t1700.test:5000/v3/mellon/metadata</saml:Issuer>
>   <samlp:NameIDPolicy Format="urn:oasis:names:tc:SAML:2.0:nameid-format:transient"
>       AllowCreate="true" />
>  </samlp:AuthnRequest>
>
>
> My guess is HA proxy is not passing on the proper port, and mod_auth_mellon
> does not know to rewrite it from 5000 to 13000.
>
>
> "rewriting is more expensive than getting the web server to return the
> right prefix. Is that an option? Usually it's just a bug that needs a minor
> patch to fix.
>
> Thanks,
> Kevin"
>
>
> Well, I think in this case, the expense is not something to worry about:
> SAML is way more chatty than normal traffic, and the rewrite will be a
> drop in the bucket.
>
> I think the right thing to do is to get HA proxy to pass on the correct
> URL, including the port, to the backend, but I don't think it is done in
> the rsprep directive.  As John Dennis pointed out to me, the
> mod_auth_mellon code uses the apache ap_construct_url(r->pool,
> cfg->endpoint_path, r) where r is the current request record.  And that has
> to be passed from HA proxy to Apache.
>
> HA proxy is terminating SSL, and then calling Apache via
>
>
>   server overcloud-controller-0 172.16.2.8:5000 check fall 5 inter 2000 rise 2
> and two others.  Everything appears to be properly translated except the
> port.
>
>
>
>
>
>
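[A hedged sketch of Kevin's suggestion -- having the web server itself return the right prefix rather than rewriting in HA Proxy. Since `ap_construct_url()` builds self-referential URLs from the server's canonical name, forcing Apache's canonical name to the public HA Proxy endpoint should, in principle, make mellon emit the right AssertionConsumerServiceURL. The hostname and port below come from this thread; treat the snippet as an illustration under those assumptions, not as the fix that actually merged (see the review Juan mentions):]

```apache
# Force ap_construct_url() (and therefore mod_auth_mellon) to build
# self-referential URLs against the public HA Proxy endpoint rather
# than the backend's own host:port. Hypothetical sketch.
ServerName https://openstack.ayoung-dell-t1700.test:13000
UseCanonicalName On
```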