[openstack-dev] [puppet]puppet-mistral

2015-08-12 Thread BORTMAN, Limor (Limor)
Hi all,
I pushed some changes at the end of last week and so far I have received no comments.
Could anyone please take a look?
https://review.openstack.org/#/c/208457/10

Thanks 
Limor Stotland
ALCATEL-LUCENT
SENIOR SOFTWARE ENGINEER
CLOUDBAND BUSINESS UNIT
16 Atir Yeda St. Kfar-Saba 44643, ISRAEL
T:  +972 (0) 9 793 3166
M: +972 (0) 54 585 3736
limor.bort...@alcatel-lucent.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Magnum] Obtain the objects from the bay endpoint

2015-08-12 Thread Akash Gangil
Hi,

I have a few questions, inline.



> Problem :-
>
> Currently objects (pod/rc/service) are read from the database. In order
> for native clients to work, they must be read from the REST bay endpoint.
> To support native clients, we must have a single source of truth for the
> state of the system, not two as there are today.
>
>
What is meant by the "native" clients here? Can you give an example?


> A] The READ path needs to be changed:
>
> 1. For python clients :-
>
> python-magnum client->rest api->conductor->rest-endpoint-k8s-api handler
>
> At present this is python-magnum client->rest api->db
>
> 2. For native clients :-
>
> native client->rest-endpoint-k8s-api
>
>
If the native client can get all the info through the rest-endpoint-k8s
handler, why does the magnum client need to go through rest-api->
conductor? Do we parse or modify the k8s-api data before responding to the
python-magnum client?
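As a rough sketch of the two read paths under discussion (all names, URLs, and helpers below are hypothetical illustrations of the flow described in the proposal, not actual Magnum code):

```python
# Hypothetical sketch of the READ paths described above. The proposal is
# to stop reading pod/rc/service objects from Magnum's database and read
# them from the bay's k8s API endpoint instead, so native clients and
# python-magnum clients see the same single source of truth.

def read_pods_from_db(db, bay_id):
    # Current path: python-magnum client -> rest api -> db.
    # This copy can drift if a native client talked to k8s directly.
    return db.get(bay_id, {}).get("pods", [])

def read_pods_from_bay(k8s_endpoint, http_get):
    # Proposed path: rest api -> conductor -> rest-endpoint-k8s-api
    # handler. 'http_get' stands in for an HTTP GET against the bay's
    # k8s endpoint, which native clients also hit directly.
    return http_get(k8s_endpoint + "/api/v1/pods")
```

The point of condition C] is that both kinds of client end up on the second path, so the answers can never diverge.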



> B] WRITE operations need to happen via the rest endpoint instead of the
> conductor.
>

If we completely bypass the conductor, is there any way to keep track of
how a resource was modified? I presume magnum would no longer have that
info, since we would talk to k8s-api directly. Or is this irrelevant?

> C] Another requirement that needs to be satisfied is that data returned by
> magnum should be the same whether its created by native client or
> python-magnum client.
>

I don't understand why the information is duplicated in the magnum db and
the k8s data source in the first place. From what I understand, magnum has
its own database which is populated with k8s-api responses?

> The fix will make sure all of the above conditions are met.
>
> Need your input on the proposed approach.
>
> -Vilobh
>
> [1] *https://blueprints.launchpad.net/magnum/+spec/objects-from-bay*
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Akash


Re: [openstack-dev] [Fuel] Aligning LP groups with real teams

2015-08-12 Thread Aleksandr Didenko
Hi,

just wanted to let you know that fuel-astute and fuel-provisioning groups
have been removed from LP.

> BTW, any chance we can somehow reduce the spam emails when a bug is
assigned to another team?

Igor, I'd recommend setting up email filters and labels. The notifications
contain lines like "You received this bug notification because you are a
member of Fuel for Openstack, which is a bug assignee" or "You received
this bug notification because you are a member of Fuel Library Team, which
is a bug assignee", so you can direct such emails into different
folders/labels.
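As an illustration, the sorting Alex describes could be scripted like this (the folder names, and the idea of scripting it rather than using your mail client's filter UI or Sieve, are my own; the quoted notification lines are the Launchpad ones mentioned above):

```python
import re

# Map the team named in the LP notification footer to a mail folder.
# The folder names here are hypothetical examples.
TEAM_FOLDERS = {
    "Fuel Library Team": "fuel-library-bugs",
    "Fuel for Openstack": "fuel-bugs",
}

NOTICE = re.compile(
    r"You received this bug notification because you are a member of "
    r"(?P<team>.+?), which is a bug assignee"
)

def folder_for(message_body):
    """Return the folder for an LP bug notification, or None to leave it."""
    match = NOTICE.search(message_body)
    if match:
        return TEAM_FOLDERS.get(match.group("team"))
    return None
```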

Regards,
Alex

On Fri, Jul 17, 2015 at 4:26 PM, Igor Kalnitsky 
wrote:

> Hello,
>
> Here's my +2 on this. :)
>
> BTW, any chance we can somehow reduce the spam emails when a bug is
> assigned to another team? For instance, I see email notifications when
> a bug is assigned to fuel-library.
>
> Thanks,
> Igor
>
> On Fri, Jul 17, 2015 at 4:16 PM, Tatyana Leontovich
>  wrote:
> > Hi,
> >
> > Alex, +1 to using the 'astute' tag as well, to make it easy to identify
> > issues related to astute.
> >
> > Regards,
> > Tanya
> >
> > On Fri, Jul 17, 2015 at 3:59 PM, Aleksandr Didenko <
> adide...@mirantis.com>
> > wrote:
> >>
> >> Hi,
> >>
> >> as we decided on the recent Fuel weekly IRC meeting, we need to align LP
> >> fuel-* groups with our teams and bug confirmation queues/duties. We
> decided
> >> to start with fuel-astute [0] and fuel-provisioning [1] LP groups that
> have 2
> >> members each. So from now on please assign bugs about provisioning and
> >> astute to fuel-python [2] LP group and add the 'ibp' tag for bugs about
> >> provisioning.
> >> Guys from fuel-python, please pay attention to bug tags. If you're the
> SME
> >> for 'ibp', please take a look at 'ibp' bugs first.
> >>
> >> Btw should we also use 'astute' tag for the same purpose?
> >>
> >> Also we need someone to delete fuel-astute and fuel-provisioning groups
> >> from LP, if there are no objections.
> >>
> >> Regards,
> >> Alex
> >>
> >> [0] https://launchpad.net/~fuel-astute/+members#active
> >> [1] https://launchpad.net/~fuel-provisioning/+members#active
> >> [2] https://launchpad.net/~fuel-python/+members#active


[openstack-dev] [nova] Mitaka nova-specs is open?

2015-08-12 Thread Kekane, Abhishek
Hi Nova Devs,

I submitted a nova-spec for Liberty, but as it was not approved I moved it
under liberty/backlog.

Now the mitaka specs directory has been added to nova-specs.
Is it OK to move the spec from specs/liberty/backlog/approved to the
specs/mitaka/approved directory?

Please let me know your opinion.


Thanks & Best Regards,

Abhishek Kekane



Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-12 Thread Kuvaja, Erno
> -Original Message-
> From: Mike Perez [mailto:thin...@gmail.com]
> Sent: 11 August 2015 19:04
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
> glance
> 
> On 15:06 Aug 11, Kuvaja, Erno wrote:
> > > -Original Message-
> > > From: Jay Pipes [mailto:jaypi...@gmail.com]
> 
> 
> 
> > > Having the image cache local to the compute nodes themselves gives
> > > the best performance overall, and with glance_store, means that
> > > glance-api isn't needed at all, and Glance can become just a
> > > metadata repository, which would be awesome, IMHO.
> >
> > Do you have any figures to back this up at scale? We've heard similar
> > claims for quite a while, and as soon as people actually look into how
> > the environments behave, they quite quickly turn back. As you're not
> > the first one, I'd like to make the same request as to everyone before:
> > show your data to back this claim up! Until then it is, just as you
> > say, opinion. ;)
> 
> The claims I make are about Cinder doing caching on its own versus just
> using Glance, measured with Rally and an 8G image:
> 
> Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
> Creating/delete 50 volumes w/o Cinder image cache: 3952 seconds
> 
> http://thing.ee/x/cache_results/
> 
> Thanks to Patrick East for pulling these results together.
> 
> Keep in mind, this is using a block storage backend that is completely
> separate from the OpenStack nodes. It's *not* using a local LVM all-in-one
> OpenStack contraption. This is important because even if you have Glance
> caching enabled, and there was no cache miss, you still have to dd the bits to
> the block device, which is still going over the network. Unless Glance is 
> going
> to cache on the storage array itself, forget about it.
> 
> Glance should be focusing on other issues, rather than trying to make
> copying image bits over the network and dd'ing to a block device faster.
> 
> --
> Mike Perez
> 
Thanks Mike,

So without the Cinder cache your times averaged around the 150+ second
mark, while the first couple of volumes with the cache took roughly 170+
seconds. What the data does not tell us is whether Cinder was pulling the
images directly from the Glance backend rather than through Glance in
either of these cases.

Somehow you need to seed those caches, and that seeding time/mechanism is
where the debate seems to be. Can you afford to keep every image in cache
so that they are all local? And if you need to pull an image to seed your
cache, how much do you benefit from your 100 Cinder nodes pulling it
directly from backend X versus Glance caching/sitting in between? How does
the block storage backend handle 100 concurrent reads by different clients
when you are seeding it between different arrays? Scale starts to matter
here, because it makes a lot of difference to the backend whether a couple
of Cinder or Nova nodes are requesting the image or 100s of them. Lots of
backends tend not to like such loads, or we outperform them by not having
to fight for bandwidth with the backend's other consumers.
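As a back-of-the-envelope sketch of why concurrency matters when seeding (the bandwidth model and all numbers are made up for illustration; they are not derived from the rally results above):

```python
def seed_seconds(nodes, image_gb, backend_gbps):
    # Naive model: N nodes concurrently pull the same image to seed
    # their caches, sharing the backend's bandwidth equally. Ignores
    # protocol overhead, array internals, and other consumers.
    per_node_gbps = backend_gbps / nodes
    return (image_gb * 8) / per_node_gbps

# A couple of nodes barely notice; 100 nodes saturate the backend:
# seed_seconds(2, 8, 10)   -> 12.8 seconds per node
# seed_seconds(100, 8, 10) -> 640.0 seconds per node
```

Even this crude model shows the per-node seeding time growing linearly with the number of concurrent pullers, which is the "couple of nodes vs. 100s of them" point above.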

That dd part we gladly leave to you; the network takes what it takes to
transfer, and we will happily be handing the bits over at the other end so
you have something to dd. That is our business and we do it pretty well.

- Erno


Re: [openstack-dev] [nova] Bug *Review* Day - Liberty

2015-08-12 Thread Markus Zoeller
Today is bug review day, woohoo! It would be great if we shift our review
focus today to patch sets for high-priority bugs.

If you have questions, contact me (markus_z) or another member of the
nova bug team [1] on IRC #openstack-nova.

Regards,
Markus Zoeller (markus_z)

[1] https://launchpad.net/~nova-bugs

Markus Zoeller/Germany/IBM wrote on 08/07/2015 05:21:02 PM:

> From: Markus Zoeller/Germany/IBM
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: 08/07/2015 05:21 PM
> Subject: [nova] Bug *Review* Day - Liberty
> 
> As freshly crowned "bug czar" I'd like to advertise the "bug review day"
> which takes place next Wednesday, August the 12th [1].
> The "bug triage day" last week did a good job of setting the priorities
> of "undecided" bugs [2].
> We can use [3] to get an overview of the current reviews for bugs. When
> [4] is merged, the list will be sortable by type, so that we can focus
> on the bug reviews with a high priority first.
> 
> If you have questions, contact me (markus_z) or another member of the
> nova bug team on IRC #openstack-nova.
> 
> Regards,
> Markus Zoeller (markus_z)
> 
> [1] https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Special_review_days
> [2] http://lists.openstack.org/pipermail/openstack-dev/2015-August/071552.html
> [3] http://status.openstack.org/reviews/#nova
> [4] https://review.openstack.org/#/c/210481/




Re: [openstack-dev] stable is hosed

2015-08-12 Thread Thierry Carrez
Matt Riedemann wrote:
> Just an update:
> 
> Kilo: I think we are OK here now, at least for some projects like nova -
> raising the minimum required neutronclient to >=2.4.0 seems to have
> fixed things.
> 
> Juno: We're still blocked on the large ops job:
> 
> https://bugs.launchpad.net/openstack-gate/+bug/1482350
> 
> I'll probably take a deeper look at options there tomorrow.  lifeless
> left a suggestion in the bug report.

Thanks for working on this! I'm back from vacation now, still catching
up. Don't hesitate to pull me into discussions of options, or ping me if
you need the occasional review help.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [neutron] [infra] Race conditions in fwaas that impact the gate

2015-08-12 Thread Sean M. Collins
[reformatted and infra tag added]


On Tue, Aug 11, 2015 at 07:32:34PM EDT, Salvatore Orlando wrote:
> On 12 August 2015 at 00:21, Sean M. Collins  wrote:
> 
> > Hello,
> >
> > Today has been an exciting day, to say the least. Earlier today I was
> > pinged on IRC about some firewall as a service unit test failures that
> > were blocking patches from being merged, such as
> > https://review.openstack.org/#/c/211537/.
> >
> > Neutron devs started poking around a bit and discussing on the IRC channel.
> >
> >
> > http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-08-11.log.html#t2015-08-11T16:59:13
> >
> > I've started to dig a little bit and document what I've found on this
> > bug.
> >
> > https://bugs.launchpad.net/neutron/+bug/1483875
> >
> > There was a change recently merged in devstack-gate which changes the
> > MySQL database driver and the number of workers -
> > https://review.openstack.org/#/c/210649/
> > which might be what is triggering the race condition - but I'm honestly
> > not sure.
> >
> > I proposed a revert to a section of the FwaaS code, but frankly I'm not
> > sure if this will fix the problem - https://review.openstack.org/211677
> > - so I bumped it out of the merge queue when my anxiety reached maximum.
> > I'm just not confident enough about my knowledge of the FwaaS codebase
> > to really be making these kinds of changes.
> >
> > Is there anyone that has any insights?
> >
> >
> > --
> > Sean M. Collins
> >
> >
>
> I have been hit by these failures as well.
> I think you did well by bumping that revert out of the queue; it simply
> cures the symptom, possibly affecting correct operation of the firewall
> service.
> If we are looking at removing the symptom on the API job, then I'd skip
> the failing tests while somebody figures out what's going on (unless the
> team decides that it is better to revert multiple workers again).
> 
> However, I think the issue might not be limited to firewall. I've seen a
> worrying spike in rally failures [1]. Since it's non-voting, developers
> probably do not pay much attention to it, but it provides very useful
> insights. I am looking at rally logs now - at the moment I do not yet
> have a clear idea of the root cause of such failures.


Ihar pushed a revert of the DevStack gate job[1]; maybe infra can weigh in
on that - otherwise, if it makes everyone happier, I can just set the test
to skip for the time being to unblock everyone. I'll then do the research
I've been meaning to do into xfail[2] so we can continue running tests and
capturing data, but not make a job fail because of a test or race
condition we're aware of.

[1]: https://review.openstack.org/#/c/211853/

[2]: http://pytest.org/latest/skipping.html

-- 
Sean M. Collins



Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-12 Thread Thierry Carrez
Gary Kotton wrote:
> 
> On 8/12/15, 12:12 AM, "Mike Perez"  wrote:
>> On 15:39 Aug 11, Gary Kotton wrote:
>>> On 8/11/15, 6:09 PM, "Jay Pipes"  wrote:
>>>
 Are you saying that *new functionality* was added to the stable/kilo
 branch of *Neutron*, and because new functionality was added to
 stable/kilo's Neutron, that stable/kilo *Nova* will no longer work?
>>>
>>> Yes. That is exactly what I am saying. The issue is as follows. The
>>> NSXv manager requires the virtual machine's VNIC index to enable the
>>> security groups to work. Without that a VM will not be able to send
>>> and receive traffic. In addition to this, the NSXv plugin does not have
>>> any agents, so we need the metadata plugin changes to ensure metadata
>>> support. So effectively with the patches
>>> https://review.openstack.org/209372 and
>>> https://review.openstack.org/209374, the stable/kilo nova code will not
>>> work with the stable/kilo neutron NSXv plugin.
>> 
>>
>>> So what do you suggest?
>>
>> This was added in Neutron during Kilo [1].
>>
>> It's the responsibility of the patch owner to revert things if something
>> doesn't land in a dependency patch of some other project.
>>
>> I'm not familiar with the patch, but you can see if Neutron folks will
>> accept
>> a revert in stable/kilo. There's no reason to get other projects involved
>> because this wasn't handled properly.
>>
>> [1] - https://review.openstack.org/#/c/144278/
> 
> So you are suggesting that we revert the neutron plugin? I do not think
> that a revert is relevant here.

Yeah, I'm not sure reverting the Neutron patch would be more acceptable.
That one landed in Neutron kilo in time.

The issue here is that due to Nova's review velocity during the kilo
cycle (and arguably the failure to raise this as a cross-project issue
affecting the release), the VMware NSXv support was shipped as broken in
Kilo, and requires non-trivial changes to get fixed.

We have two options: bending the stable rules to allow the fix to be
backported, or document it as broken in Kilo with the invasive patches
being made available for people and distributions who still want to
apply it.

Given that we are 4 months into Kilo, I'd say stable/kilo users are used
to this being broken at this point, so my vote would go for the second
option.

That said, we should definitely raise [1] as a cross-project issue and
see how we could work it into Liberty, so that we don't end up in the
same dark corner in 4 months. I just don't want to break the stable
rules (and the user confidence we've built around us applying them) to
retroactively pay back review velocity / trust issues within Nova.

[1] https://review.openstack.org/#/c/165750/

-- 
Thierry Carrez (ttx)



[openstack-dev] Does murano dynamic-ui have plan to support "edit" function?

2015-08-12 Thread WANG, Ming Hao (Tony T)
Dear OpenStack developers,

Currently, murano dynamic-ui is a "one-time" GUI, and I can't edit data
that has been submitted.
Does murano dynamic-ui have a plan to support an "edit" function in the
future?

For example, a developer builds some wizard GUI to do some configuration,
and the user wants to change that configuration after the deployment.

Thanks,
Tony



Re: [openstack-dev] ][third-party-ci]Running custom code before tests

2015-08-12 Thread Eduard Matei
Hi,

Found some more info (finally): I added a function in the script part of
the Jenkins job plus an export -f, and it seems it's being called, so now
my backend is installed and configured.

I'm now trying to configure Cinder to use my driver when running the
tests, but I couldn't find a way to configure it.
https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#How_do_I_configure_DevStack_so_my_Driver_Passes_Tempest.3F
mentions "Sample local.conf". How do I edit that file?

I tried exporting TEMPEST_VOLUME_DRIVER... but still the tests seem to use
the default driver.

Thanks,

-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com


Re: [openstack-dev] Does murano dynamic-ui have plan to support "edit" function?

2015-08-12 Thread Kirill Zaitsev
Hi, sure, there are such plans! This has long been referred to as
per-component-UI. I'm really hoping there will be some traction on it
during the Mitaka cycle. Not in Liberty though; feature freeze is less
than a month away.

btw, if you’re interested in custom tweaking and fine-tuning of murano 
object-model you can take a look at these CLI tools 
https://review.openstack.org/#/q/project:openstack/python-muranoclient+branch:master+topic:bp/env-configuration-from-cli,n,z

and this https://review.openstack.org/#/c/208659/ commit in particular. 
Although using those would require you to have some knowledge about how murano 
handles things internally.


-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 12 Aug 2015 at 13:23:47, WANG, Ming Hao (Tony T) 
(tony.a.w...@alcatel-lucent.com) wrote:

Dear OpenStack developers,

 

Currently, murano dynamic-ui is “one-time” GUI, and I can’t edit data what has 
been submitted.

Does murano dynamic-ui have plan to support "edit" function in the future? 

 

For example, developer develops some Wizard GUI to do some configuration, and 
user wants to change some configuration after the deployment.

 

Thanks,

Tony

 



Re: [openstack-dev] Does murano dynamic-ui have plan to support "edit" function?

2015-08-12 Thread WANG, Ming Hao (Tony T)
Kirill,

Thank you very much for the info!
We will study it first.

Thanks,
Tony

From: Kirill Zaitsev [mailto:kzait...@mirantis.com]
Sent: Wednesday, August 12, 2015 7:12 PM
To: WANG, Ming Hao (Tony T); OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] Does murano dynamic-ui have plan to support
"edit" function?

Hi, sure, there are such plans! This has long been referred to as
per-component-UI. I'm really hoping there will be some traction on it
during the Mitaka cycle. Not in Liberty though; feature freeze is less
than a month away.

btw, if you’re interested in custom tweaking and fine-tuning of murano 
object-model you can take a look at these CLI tools 
https://review.openstack.org/#/q/project:openstack/python-muranoclient+branch:master+topic:bp/env-configuration-from-cli,n,z

and this https://review.openstack.org/#/c/208659/ commit in particular. 
Although using those would require you to have some knowledge about how murano 
handles things internally.


--
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc


On 12 Aug 2015 at 13:23:47, WANG, Ming Hao (Tony T) 
(tony.a.w...@alcatel-lucent.com) wrote:
Dear OpenStack developers,

Currently, murano dynamic-ui is “one-time” GUI, and I can’t edit data what has 
been submitted.
Does murano dynamic-ui have plan to support "edit" function in the future?

For example, developer develops some Wizard GUI to do some configuration, and 
user wants to change some configuration after the deployment.

Thanks,
Tony



Re: [openstack-dev] [nova] Mitaka nova-specs is open?

2015-08-12 Thread Alexis Lee
Kekane, Abhishek said on Wed, Aug 12, 2015 at 08:59:02AM +:
> I submitted a nova-spec for Liberty, but as it was not approved I moved
> it under liberty/backlog.
> 
> Now the mitaka specs directory has been added to nova-specs.
> Is it OK to move the spec from specs/liberty/backlog/approved to the
> specs/mitaka/approved directory?

As I understand it, if you're planning to implement it, apply to a
release. Otherwise, the backlog.

This:
https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Spec_and_Blueprint_Approval_Freeze

says that Mitaka will be open from Liberty-2, which we've passed.


Alexis
-- 
Nova Engineer, HP Cloud.  AKA lealexis, lxsli.



Re: [openstack-dev] [neutron] [infra] Race conditions in fwaas that impact the gate

2015-08-12 Thread Robert Collins
On 12 August 2015 at 21:57, Sean M. Collins  wrote:
> [reformatted and infra tag added]
>
>
...
> [2]: http://pytest.org/latest/skipping.html

https://docs.python.org/2/library/unittest.html#skipping-tests-and-expected-failures

will be more helpful, as we're using testtools (which is a layer on unittest).

One note, don't import 'unittest' - always 'unittest2'.
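For illustration, here is what skip and expected-failure markers look like (this sketch uses stdlib `unittest` only so it is self-contained; per the note above, OpenStack code would import `unittest2`, which exposes the same decorators):

```python
import unittest

# Hypothetical test class: a known-racy test is skipped with the bug
# reference, and a known-broken one is marked as an expected failure,
# so the job keeps running without going red on issues already tracked.
class FirewallRaceTests(unittest.TestCase):
    @unittest.skip("Skipped pending bug 1483875 (fwaas race condition)")
    def test_known_race(self):
        self.fail("would be flaky under multiple API workers")

    @unittest.expectedFailure
    def test_known_failure(self):
        self.assertEqual("rule applied", "rule missing")
```

Skipping keeps the job green while the bug is investigated, and an expected failure is reported as an unexpected success if the test starts passing, so the marker can't silently go stale.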

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] ][third-party-ci]Running custom code before tests

2015-08-12 Thread Andrey Pavlov
Hi,

You can do something like this:
https://github.com/openstack-infra/project-config/blob/master/jenkins/jobs/devstack-gate.yaml#L2201
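For reference, the kind of `local.conf` content such a job variable typically feeds into DevStack could look like this (the backend name and driver import path are hypothetical placeholders for your own driver; `CINDER_ENABLED_BACKENDS` and the `[[post-config|$CINDER_CONF]]` section are standard DevStack local.conf mechanisms):

```ini
[[local|localrc]]
# Hypothetical example: enable a third-party Cinder backend in DevStack.
CINDER_ENABLED_BACKENDS=myvendor:myvendor-1

[[post-config|$CINDER_CONF]]
[myvendor-1]
# Placeholder import path - substitute your driver's actual module/class.
volume_driver = cinder.volume.drivers.myvendor.MyVendorDriver
volume_backend_name = myvendor-1
```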

On Wed, Aug 12, 2015 at 2:11 PM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:

> Hi,
>
> Found some more info (finally): I added a function in the script part of
> the Jenkins job plus an export -f, and it seems it's being called, so now
> my backend is installed and configured.
>
> I'm now trying to configure Cinder to use my driver when running the
> tests, but I couldn't find a way to configure it.
>
> https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#How_do_I_configure_DevStack_so_my_Driver_Passes_Tempest.3F
> mentions "Sample local.conf". How do I edit that file?
>
> I tried exporting TEMPEST_VOLUME_DRIVER... but still the tests seem to
> use the default driver.
>
> Thanks,
>
> --
>
> *Eduard Biceri Matei, Senior Software Developer*
> www.cloudfounders.com
>  | eduard.ma...@cloudfounders.com
>
>


-- 
Kind regards,
Andrey Pavlov.


[openstack-dev] [tricircle]Weekly Team Meeting 2015.08.12 Agenda

2015-08-12 Thread Zhipeng Huang
Hi Team,

We will continue to have our regular meeting today. The agenda for today
would be:

   1. local/bottom cascade service design
   2. gampel to have a list of resources that need to be retrieved so we
   can check if it can be done by tenant context in the design work on
   resource mapping
   3. RPC call from Neutron API to Cascade Service


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


Re: [openstack-dev] [neutron] Liberty-3 BPs and gerrit topics

2015-08-12 Thread Daniel Comnea
Kyle,

Is the dashboard available to a limited set of users? Curious by nature, I
tried to access it and got "Session expired, please login again". However,
I thought I didn't have to log in to Gerrit - am I missing something?


Cheers,
Dani

On Tue, Aug 11, 2015 at 2:44 PM, Kyle Mestery  wrote:

> Folks:
>
> To make reviewing all approved work for Liberty-3 in Neutron easier, I've
> created a handy dandy gerrit dashboard [1]. What will make this even more
> useful is if everyone makes sure to set their topics to something uniform
> from their approved LP BP found here [2]. The gerrit dashboard includes all
> Essential, High, and Medium priority BPs from that link. If everyone who
> has patches could make sure their gerrit topics for the patches are synced
> to what is in the LP BP, that will help as people use the dashboard to
> review in the final weeks before FF.
>
> Thanks!
> Kyle
>
> [1] https://goo.gl/x9bO7i
> [2] https://launchpad.net/neutron/+milestone/liberty-3
>


Re: [openstack-dev] [neutron] Liberty-3 BPs and gerrit topics

2015-08-12 Thread Kyle Mestery
On Wed, Aug 12, 2015 at 7:05 AM, Daniel Comnea 
wrote:

> Kyle,
>
> Is the dashboard available to a limited set of users? Curious by nature,
> I tried to access it and got "Session expired, please login again".
> However, I thought I didn't have to log in to Gerrit - am I missing
> something?
>
>
You will be required to log in to Gerrit to view it; the dashboard makes
use of your login ID, for example to not show you patches which you have
proposed. :)


> Cheers,
> Dani
>
> On Tue, Aug 11, 2015 at 2:44 PM, Kyle Mestery  wrote:
>
>> Folks:
>>
>> To make reviewing all approved work for Liberty-3 in Neutron easier, I've
>> created a handy dandy gerrit dashboard [1]. What will make this even more
>> useful is if everyone makes sure to set their topics to something uniform
>> from their approved LP BP found here [2]. The gerrit dashboard includes all
>> Essential, High, and Medium priority BPs from that link. If everyone who
>> has patches could make sure their gerrit topics for the patches are synced
>> to what is in the LP BP, that will help as people use the dashboard to
>> review in the final weeks before FF.
>>
>> Thanks!
>> Kyle
>>
>> [1] https://goo.gl/x9bO7i
>> [2] https://launchpad.net/neutron/+milestone/liberty-3
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [neutron] Liberty-3 BPs and gerrit topics

2015-08-12 Thread Kyle Mestery
On Tue, Aug 11, 2015 at 8:44 AM, Kyle Mestery  wrote:

> Folks:
>
> To make reviewing all approved work for Liberty-3 in Neutron easier, I've
> created a handy dandy gerrit dashboard [1]. What will make this even more
> useful is if everyone makes sure to set their topics to something uniform
> from their approved LP BP found here [2]. The gerrit dashboard includes all
> Essential, High, and Medium priority BPs from that link. If everyone who
> has patches could make sure their gerrit topics for the patches are synced
> to what is in the LP BP, that will help as people use the dashboard to
> review in the final weeks before FF.
>
>
I should note that I've posted a review for how this dashboard was
generated here [1]. I've marked it WIP. I've done this in case people want
to see what topics I used to generate the dashboard to align their patches.

Thanks!
Kyle

[1] https://review.openstack.org/#/c/211666/


> Thanks!
> Kyle
>
> [1] https://goo.gl/x9bO7i
> [2] https://launchpad.net/neutron/+milestone/liberty-3
>


Re: [openstack-dev] [Monasca] Minutes for Monasca mid-cycle meetup

2015-08-12 Thread Chris Dent

On Tue, 11 Aug 2015, Hochmuth, Roland M wrote:

> It sounds like we should connect up soon. I could attend a Ceilometer
> meeting, or you could attend the Monasca meeting which is held Tuesday
> mornings at 9:00 MST.


I'm away this coming Tuesday, but perhaps some of the other Ceilo
people could show up? I've got it on my schedule to come the week
after.

I suspect there's a lot we can do over the long run to avoid
duplicating code and effort, but that there will be some humps to
ride over to get different pieces (and people!) talking to one another.
Should be fun. Looking forward to it.

--
Chris Dent tw:@anticdent freenode:cdent
https://tank.peermore.com/tanks/cdent



[openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Kyle Mestery
It gives me great pleasure to propose Russell Bryant and Brandon Logan as
core reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have
both been incredible contributors to Neutron for a while now. Their
expertise has been particularly helpful in the area they are being proposed
in. Their review stats [1] place them both comfortably in the range of
existing Neutron core reviewers. I expect them to continue working with all
community members to drive Neutron forward for the rest of Liberty and into
Mitaka.

Existing DB/API/RPC core reviewers (and other Neutron core reviewers),
please vote +1/-1 for the addition of Russell and Brandon.

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/90


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Henry Gessau
+1 to both!

On Wed, Aug 12, 2015, Kyle Mestery  wrote:
> It gives me great pleasure to propose Russell Bryant and Brandon Logan as core
> reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have both 
> been
> incredible contributors to Neutron for a while now. Their expertise has been
> particularly helpful in the area they are being proposed in. Their review 
> stats
> [1] place them both comfortably in the range of existing Neutron core 
> reviewers.
> I expect them to continue working with all community members to drive Neutron
> forward for the rest of Liberty and into Mitaka.
> 
> Existing DB/API/RPC core reviewers (and other Neutron core reviewers), please
> vote +1/-1 for the addition of Russell and Brandon.
> 
> Thanks!
> Kyle
> 
> [1] http://stackalytics.com/report/contribution/neutron-group/90





[openstack-dev] [cinder]Cinder creates encrypted volume from image

2015-08-12 Thread Li, Xiaoyan
Hi all,

Currently when cinder creates a volume with an encrypted volume type from an 
image (which is unencrypted), it just reads data from the image and writes it
into the volume.
As a result the encrypted volume contains unencrypted data, and Nova fails to 
boot from the volume.
https://bugs.launchpad.net/nova/+bug/1465656

I would like to implement the function that when creating an encrypted volume 
from an image, cinder reads data, encrypts and writes to the volume. 
So that the encrypted volume contains encrypted data as it should be.
https://blueprints.launchpad.net/cinder/+spec/encrypt-volume-with-image
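Purely to illustrate the shape of such a read-encrypt-write path (this is not
the Cinder implementation — a real fix would use Cinder's volume encryptor and
key-manager machinery; the chunked copy helper and the toy SHA-256 counter-mode
keystream below are invented for this sketch):

```python
import hashlib
import io


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- illustrative only, NOT for production."""
    blocks = []
    counter = 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(
            key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]


def copy_encrypted(src, dst, key: bytes, nonce: bytes, chunk_size=64 * 1024):
    """Read unencrypted image data in chunks, encrypt each chunk, write to the volume."""
    offset = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        # Bind the keystream to the byte offset so chunks stay independent.
        ks = keystream(key, nonce + offset.to_bytes(8, "big"), len(chunk))
        dst.write(bytes(a ^ b for a, b in zip(chunk, ks)))
        offset += len(chunk)


# Round-trip demo: XOR-encrypting twice with the same keystream restores the data.
plain = b"image data " * 4
enc = io.BytesIO()
copy_encrypted(io.BytesIO(plain), enc, b"k" * 16, b"n" * 8)
dec = io.BytesIO()
copy_encrypted(io.BytesIO(enc.getvalue()), dec, b"k" * 16, b"n" * 8)
```

With a stream cipher the copy is its own inverse, which makes a quick
round-trip check easy; the real implementation would of course pull the key
from the key manager rather than hard-code it.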

Anyone else is working on it? 
Any suggestions?

Best wishes
Lisa




Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Oleg Bondarev
+1

On Wed, Aug 12, 2015 at 4:55 PM, Henry Gessau  wrote:

> +1 to both!
>
> On Wed, Aug 12, 2015, Kyle Mestery  wrote:
> > It gives me great pleasure to propose Russell Bryant and Brandon Logan
> as core
> > reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have
> both been
> > incredible contributors to Neutron for a while now. Their expertise has
> been
> > particularly helpful in the area they are being proposed in. Their
> review stats
> > [1] place them both comfortably in the range of existing Neutron core
> reviewers.
> > I expect them to continue working with all community members to drive
> Neutron
> > forward for the rest of Liberty and into Mitaka.
> >
> > Existing DB/API/RPC core reviewers (and other Neutron core reviewers),
> please
> > vote +1/-1 for the addition of Russell and Brandon.
> >
> > Thanks!
> > Kyle
> >
> > [1] http://stackalytics.com/report/contribution/neutron-group/90
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Edgar Magana
+1 and +1

Great addition to the team!

Edgar

From: Kyle Mestery
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, August 12, 2015 at 6:45 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [neutron] I am pleased to propose two new Neutron 
API/DB/RPC core reviewers!

It gives me great pleasure to propose Russell Bryant and Brandon Logan as core 
reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have both been 
incredible contributors to Neutron for a while now. Their expertise has been 
particularly helpful in the area they are being proposed in. Their review stats 
[1] place them both comfortably in the range of existing Neutron core 
reviewers. I expect them to continue working with all community members to 
drive Neutron forward for the rest of Liberty and into Mitaka.

Existing DB/API/RPC core reviewers (and other Neutron core reviewers), please 
vote +1/-1 for the addition of Russell and Brandon.

Thanks!
Kyle

[1] http://stackalytics.com/report/contribution/neutron-group/90


Re: [openstack-dev] [neutron][dvr][ha] DVR-HA router is not working.

2015-08-12 Thread Assaf Muller
Adding the author of the patches. This reinforces the need to hold off on
merging these patches until they have an in-tree integration test.

On Tue, Aug 11, 2015 at 4:13 PM, Korzeniewski, Artur <
artur.korzeniew...@intel.com> wrote:

> Hi,
>
> I’ve been playing around with DVR-HA patches [1][2], have them applied on
> Mondays master branch.
>
> The problem is that the dvr-ha router is not working with SNAT and
> floating ips.
>
>
>
> My setup:
>
> Devstack-34 – all in one (controller, compute, DVR agent, DHCP node)
>
> Devstack-35 – compute node and DVR agent
>
> Devstack-36 – network node (SNAT)
>
> Devstack-37 – network node 2 (SNAT)
>
>
>
> External interface (router_gateway) is down for created dvr-ha router. The
> snat port (router_centralized_snat) is also down after connecting the
> tenant network.
>
> I’m not sure where is the problem, can someone look at the logs and point
> me the place where to look for the answer why the ports are not reported as
> UP?
>
> Add default gateway for DVR-HA router, log from active network node:
> http://pastebin.com/S7rYpDns
>
> Add default gateway for DVR-HA router, log from neutron server node:
> http://pastebin.com/WpcV1g09
>
>
>
> The external gateway IP is not reachable from external network, and VMs
> are not able to ping default gateway  (10.2.2.1)…
>
> I have to add, that on the same setup the usual DVR router is working fine
> (hosted on the same network node)
>
>
>
> [1] https://review.openstack.org/#/c/196893
>
> [2] https://review.openstack.org/#/c/143169
>
>
>
> Regards,
>
> Artur Korzeniewski
>
> 
>
> Intel Technology Poland sp. z o.o.
>
> KRS 101882
>
> ul. Slowackiego 173, 80-298 Gdansk
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Anna Kamyshnikova
+1 for both

On Wed, Aug 12, 2015 at 5:04 PM, Edgar Magana 
wrote:

> +1 and +1
>
> Great addition to the team!
>
> Edgar
>
> From: Kyle Mestery
> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
> Date: Wednesday, August 12, 2015 at 6:45 AM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Subject: [openstack-dev] [neutron] I am pleased to propose two new
> Neutron API/DB/RPC core reviewers!
>
> It gives me great pleasure to propose Russell Bryant and Brandon Logan as
> core reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have
> both been incredible contributors to Neutron for a while now. Their
> expertise has been particularly helpful in the area they are being proposed
> in. Their review stats [1] place them both comfortably in the range of
> existing Neutron core reviewers. I expect them to continue working with all
> community members to drive Neutron forward for the rest of Liberty and into
> Mitaka.
>
> Existing DB/API/RPC core reviewers (and other Neutron core reviewers),
> please vote +1/-1 for the addition of Russell and Brandon.
>
> Thanks!
> Kyle
>
> [1] http://stackalytics.com/report/contribution/neutron-group/90
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [trove]Implement the API to create master instance and slave instances with one request

2015-08-12 Thread Doug Shelley
As of Kilo, you can add a --replica-count parameter to "trove create --replica-of"
to have it spin up multiple mysql slaves simultaneously. This same construct is 
in the python/REST API as well. I realize that you still need to create a 
master first, but thought I would point this out as it might be helpful to you.

Regards,
Doug


From: 陈迪豪 <chendi...@unitedstack.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Tuesday, August 11, 2015 at 11:45 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [trove]Implement the API to create master instance and 
slave instances with one request

Now we can create mysql master instance and slave instance one by one.

It would be much better to allow user to create one master instance and 
multiple slave instances with one request.

Any suggestion about this, the API design or the implementation?


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Gary Kotton
+1

From: Anna Kamyshnikova <akamyshnik...@mirantis.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, August 12, 2015 at 5:14 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron 
API/DB/RPC core reviewers!

+1 for both

On Wed, Aug 12, 2015 at 5:04 PM, Edgar Magana <edgar.mag...@workday.com> wrote:
+1 and +1

Great addition to the team!

Edgar

From: Kyle Mestery
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, August 12, 2015 at 6:45 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [neutron] I am pleased to propose two new Neutron 
API/DB/RPC core reviewers!

It gives me great pleasure to propose Russell Bryant and Brandon Logan as core 
reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have both been 
incredible contributors to Neutron for a while now. Their expertise has been 
particularly helpful in the area they are being proposed in. Their review stats 
[1] place them both comfortably in the range of existing Neutron core 
reviewers. I expect them to continue working with all community members to 
drive Neutron forward for the rest of Liberty and into Mitaka.

Existing DB/API/RPC core reviewers (and other Neutron core reviewers), please 
vote +1/-1 for the addition of Russell and Brandon.

Thanks!
Kyle

[1] 
http://stackalytics.com/report/contribution/neutron-group/90





--
Regards,
Ann Kamyshnikova
Mirantis, Inc


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Gary Kotton
Well, actually +2 as there are 2 nominations. :)

From: Gary Kotton <gkot...@vmware.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, August 12, 2015 at 5:23 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron 
API/DB/RPC core reviewers!

+1

From: Anna Kamyshnikova <akamyshnik...@mirantis.com>
Reply-To: OpenStack List <openstack-dev@lists.openstack.org>
Date: Wednesday, August 12, 2015 at 5:14 PM
To: OpenStack List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron 
API/DB/RPC core reviewers!

+1 for both

On Wed, Aug 12, 2015 at 5:04 PM, Edgar Magana <edgar.mag...@workday.com> wrote:
+1 and +1

Great addition to the team!

Edgar

From: Kyle Mestery
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Wednesday, August 12, 2015 at 6:45 AM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: [openstack-dev] [neutron] I am pleased to propose two new Neutron 
API/DB/RPC core reviewers!

It gives me great pleasure to propose Russell Bryant and Brandon Logan as core 
reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have both been 
incredible contributors to Neutron for a while now. Their expertise has been 
particularly helpful in the area they are being proposed in. Their review stats 
[1] place them both comfortably in the range of existing Neutron core 
reviewers. I expect them to continue working with all community members to 
drive Neutron forward for the rest of Liberty and into Mitaka.

Existing DB/API/RPC core reviewers (and other Neutron core reviewers), please 
vote +1/-1 for the addition of Russell and Brandon.

Thanks!
Kyle

[1] 
http://stackalytics.com/report/contribution/neutron-group/90





--
Regards,
Ann Kamyshnikova
Mirantis, Inc


[openstack-dev] [mistral] Liberty-2 dev milestone has been released

2015-08-12 Thread Renat Akhmerov
Hi,

Mistral Liberty-2 development milestone has been released.

Look at release page [1] to find detailed info about implemented blueprints and 
fixed bugs.

[1] https://launchpad.net/mistral/liberty/liberty-2 


Thanks!

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [cinder]Cinder creates encrypted volume from image

2015-08-12 Thread Duncan Thomas
That's a bug! I'm not aware of anybody working on it, and I don't see any
old bugs open for it. A suitable fix would be very welcome.
On 12 Aug 2015 14:54, "Li, Xiaoyan"  wrote:

> Hi all,
>
> Currently when cinder creates a volume with an encrypted volume type from
> an image(which is unencrypted), it just reads data from image, and writes
> them
> Into the volume.
> As a result the encrypted volume contains unencrypted data, and Nova fails
> to boot from the volume.
> https://bugs.launchpad.net/nova/+bug/1465656
>
> I would like to implement the function that when creating an encrypted
> volume from an image, cinder reads data, encrypts and writes to the volume.
> So that the encrypted volume contains encrypted data as it should be.
> https://blueprints.launchpad.net/cinder/+spec/encrypt-volume-with-image
>
> Anyone else is working on it?
> Any suggestions?
>
> Best wishes
> Lisa
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [mistral] Mistral Client 1.0.1 has been released

2015-08-12 Thread Renat Akhmerov
Hi,

Mistral Client (Python API and CLI) version 1.0.1 has been released. It 
contains changes needed for Python 3.4 compatibility and a number of bug fixes.

[1] https://launchpad.net/python-mistralclient/liberty/1.0.1 


Thanks

Renat Akhmerov
@ Mirantis Inc.





Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Doug Wiegley
A big +1 to both!!

Doug



> On Aug 12, 2015, at 6:45 AM, Kyle Mestery  wrote:
> 
> It gives me great pleasure to propose Russell Bryant and Brandon Logan as 
> core reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have 
> both been incredible contributors to Neutron for a while now. Their expertise 
> has been particularly helpful in the area they are being proposed in. Their 
> review stats [1] place them both comfortably in the range of existing Neutron 
> core reviewers. I expect them to continue working with all community members 
> to drive Neutron forward for the rest of Liberty and into Mitaka.
> 
> Existing DB/API/RPC core reviewers (and other Neutron core reviewers), please 
> vote +1/-1 for the addition of Russell and Brandon.
> 
> Thanks!
> Kyle
> 
> [1] http://stackalytics.com/report/contribution/neutron-group/90
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Mike Bayer



On 8/11/15 7:14 PM, Sachin Manpathak wrote:

> I am struggling with python code profiling in general. It has its own
> caveats like 100% plus overhead.
> However, on a host with only nova services (DB on a different host), I
> see cpu utilization spike up quickly with scale. The DB server is
> relatively calm and never goes over 20%. On a system which relies on
> DB to fetch all the data, this should not happen.

The DB's resources are intended to scale up in response to a wide degree
of concurrency, that is, lots and lots of API services all hitting it
from many concurrent API calls. "With scale" here is a slippery term.
What kind of concurrency are you testing with? How many CPUs serving API
calls are utilized simultaneously? To saturate the database you need
many dozens, and even then you don't want your database CPU going very
high. 20% does not seem that low to me, actually. I disagree with the
notion that high database CPU indicates a performant application, or
that DB saturation is a requirement in order for a database-delivered
application to be performant; I think the opposite is true. In web
application development, when I worked with production sites at high
volume, the goal was to use enough caching so that major site pages
being viewed constantly could be delivered with *no* database access
whatsoever. We wanted to see the majority of the site being sent to
customers with the database at essentially zero; this is how you get
page response times down from 200-300 ms to 20 or 30. If you want to
measure performance, looking at API response time is probably better
than looking at CPU utilization first.


That said, Python is a very CPU intensive language, because it is an 
interpreted scripting language.   Operations that in a language like 
compiled C would be hardly a whisper of CPU end up being major 
operations in Python. Openstack suffers from a large amount of 
function call overhead even for simple API operations, as it is an 
extremely layered system with very little use of caching.   Until it 
moves to a JIT-based interpreter like Pypy that can flatten out 
call-chains, the amount of overhead just for an API call to come in and 
go back out with a response will remain significant.   As for caching, 
making use of a technique such as memcached caching of data structures 
can also greatly improve performance because we can cache pre-assembled 
data, removing the need to repeatedly extract it from multiple tables to 
be pieced together in Python, which is also a very CPU intensive 
activity.   This is something that will be happening more in the future, 
but as it improves the performance of Openstack, it will be removing 
even more load from the database. Again, I'd look at API response times 
as the first thing to measure.


That said, certainly the joining of data in Python may be unnecessary,
and I'm not sure we can't revisit the history Dan refers to when he
says there were "very large result sets". If we are referring to the
number of rows, joining in SQL or in Python will still involve the same
number of "rows", and SQLAlchemy also offers many techniques for
optimizing the overhead of fetching lots of rows which Nova currently
doesn't make use of (see
https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Eager_load_and_Column_load_tuning
for a primer on this).


If OTOH we are referring to the width of the columns, and the join is
such that you're going to get the same A identity over and over again,
then if you join A and B you get a "wide" row with all of A and B with a
very large amount of redundant data sent over the wire again and again
(note that the database drivers available to us in Python always send
all rows and columns over the wire unconditionally, whether or not we
fetch them in application code). In this case you *do* want to do the
join in Python to some extent, though you use the database to deliver
the simplest information possible to work with first; you get the full
row for all of the A entries, then a second query for all of B plus A's
primary key that can be quickly matched to that of A. SQLAlchemy offers
this as "subquery eager loading", and it is definitely much more
performant than a single full join when you have wide rows for
individual entities. The database is doing the join to the extent that
it can deliver the primary key information for A and B, which can be
operated upon very quickly in memory, as we already have all the A
identities in a hash lookup in any case.
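The two-query pattern can be sketched with the stdlib sqlite3 module (a
stand-in for SQLAlchemy's subquery eager loading; the tiny instances/metadata
schema is invented for illustration):

```python
import sqlite3

# Toy schema standing in for "A" (instances) and "B" (metadata rows).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE metadata (instance_id INTEGER, key TEXT, value TEXT);
""")
conn.execute("INSERT INTO instances VALUES (1, 'vm-1'), (2, 'vm-2')")
conn.executemany("INSERT INTO metadata VALUES (?, ?, ?)",
                 [(1, 'role', 'web'), (1, 'az', 'nova'), (2, 'role', 'db')])

# Query 1: fetch the wide "A" rows once, with no redundancy.
instances = {row[0]: {"name": row[1], "meta": {}}
             for row in conn.execute("SELECT id, name FROM instances")}

# Query 2: fetch only the narrow "B" rows plus A's primary key,
# then match them up in Python via a hash lookup.
for iid, key, value in conn.execute(
        "SELECT instance_id, key, value FROM metadata"):
    instances[iid]["meta"][key] = value
```

Each metadata row carries only the instance's primary key rather than the
full instance row, which is exactly what avoids the "wide row" redundancy of
a single SQL join.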


Overall if you're looking to make Openstack faster, where you want to be 
is 1. what is the response time of an API call and 2. what do the Python 
profiles look like for those API calls?  For a primer on Python 
profiling see for example my own FAQ entry here: 
http://docs.sqlalchemy.org/en/rel_1_0/faq/performance.html#code-profiling. 
This kind of profiling is a lot of work and is very tedious, compared to 
just running a big r
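As a concrete starting point, the kind of per-call profiling described above
can be done with the stdlib cProfile/pstats modules; `fake_api_call` below is
an invented stand-in for a real API handler:

```python
import cProfile
import io
import pstats


def fake_api_call():
    """Invented stand-in for an API handler being profiled."""
    return sum(i * i for i in range(10000))


profiler = cProfile.Profile()
profiler.enable()
fake_api_call()
profiler.disable()

# Sort by cumulative time to see which call chains dominate the request.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

Wrapping a single API request this way, rather than profiling a whole run,
keeps the (substantial) profiler overhead from drowning out the call chains
you actually care about.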

Re: [openstack-dev] [security] [docs] Security Guide Freeze and RST migration - Complete

2015-08-12 Thread Dillon, Nathaniel
All,

The RST migration has completed, and the freeze is lifted, all incoming patches 
will need to be in RST format.

Thanks to the Docs team - especially Andreas - for the awesome support!

Thanks again,

Nathaniel

> On Jul 21, 2015, at 7:46 AM, Dillon, Nathaniel  
> wrote:
> 
> All,
> 
> The OpenStack Security Guide is migrating to RST format [1] and with help 
> from the docs team we hope to have this completed shortly. We will therefore 
> be entering a freeze on all changes coming into the Security Guide until the 
> migration is complete, and all future changes will be in the much easier RST 
> format.
> 
> Progress can be tracked on the etherpad [2] or specific issues can be asked 
> in reply to this message or during the Security Guide weekly meeting [3], and 
> an announcement will be made when the migration is complete.
> 
> Thanks,
> 
> Nathaniel
> 
> [1] https://bugs.launchpad.net/openstack-manuals/+bug/1463111
> [2] https://etherpad.openstack.org/p/sec-guide-rst
> [3] https://wiki.openstack.org/wiki/Documentation/SecurityGuide
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Dan Smith
> If OTOH we are referring to the width of the columns and the join is
> such that you're going to get the same A identity over and over again, 
> if you join A and B you get a "wide" row with all of A and B with a very
> large amount of redundant data sent over the wire again and again (note
> that the database drivers available to us in Python always send all rows
> and columns over the wire unconditionally, whether or not we fetch them
> in application code).

Yep, it was this. N instances times M rows of metadata each. If you pull
100 instances and they each have 30 rows of system metadata, that's a
lot of data, and most of it is the instance being repeated 30 times for
each metadata row. When we first released code doing this, a prominent
host immediately raised the red flag because their DB traffic shot
through the roof.

> In this case you *do* want to do the join in
> Python to some extent, though you use the database to deliver the
> simplest information possible to work with first; you get the full row
> for all of the A entries, then a second query for all of B plus A's
> primary key that can be quickly matched to that of A.

This is what we're doing. Fetch the list of instances that match the
filters, then for the ones that were returned, get their metadata.

--Dan



Re: [openstack-dev] [nova] Mitaka nova-specs is open?

2015-08-12 Thread John Garbutt
On 12 August 2015 at 12:20, Alexis Lee  wrote:
> Kekane, Abhishek said on Wed, Aug 12, 2015 at 08:59:02AM +:
>> I have submitted a nova-specs for liberty but as it is not approved I have 
>> moved it under liberty/backlog.
>>
>> Now mitaka specs directory is added in nova-specs.
>> Should it be ok to move nova-specs from specs/liberty/backlog/approved to 
>> specs/mitaka/approved directory?
>
> As I understand it, if you're planning to implement it, apply to a
> release. Otherwise, the backlog.
>
> This:
> https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Spec_and_Blueprint_Approval_Freeze
>
> says that Mitaka will be open from Liberty-2, which we've passed.

+1

We are open for Mitaka specs now.

Sorry if that was not clear from the previous status updates.

The downside is that there are few folks free to review those right now.
I expect more reviews once liberty-3 is tagged, and the most once
master is open for Mitaka.

I hope that helps,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and glance

2015-08-12 Thread Mike Perez
On Wed, Aug 12, 2015 at 2:23 AM, Kuvaja, Erno  wrote:
>> -Original Message-
>> From: Mike Perez [mailto:thin...@gmail.com]
>> Sent: 11 August 2015 19:04
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [Nova] [Cinder] [Glance] glance_store and
>> glance
>>
>> On 15:06 Aug 11, Kuvaja, Erno wrote:
>> > > -Original Message-
>> > > From: Jay Pipes [mailto:jaypi...@gmail.com]
>>
>> 
>>
>> > > Having the image cache local to the compute nodes themselves gives
>> > > the best performance overall, and with glance_store, means that
>> > > glance-api isn't needed at all, and Glance can become just a
>> > > metadata repository, which would be awesome, IMHO.
>> >
>> > You have any figures to back this up in scale? We've heard similar
>> > claims for quite a while and as soon as people starts to actually look
>> > into how the environments behaves, they quite quickly turn back. As
>> > you're not the first one, I'd like to make the same request as to
>> > everyone before, show your data to back this claim up! Until that it
>> > is just like you say it is, opinion.;)
>>
>> The claims I make with Cinder doing caching on its own versus just using
>> Glance with rally with an 8G image:
>>
>> Creating/deleting 50 volumes w/ Cinder image cache: 324 seconds
>> Creating/delete 50 volumes w/o Cinder image cache: 3952 seconds
>>
>> http://thing.ee/x/cache_results/
>>
>> Thanks to Patrick East for pulling these results together.
>>
>> Keep in mind, this is using a block storage backend that is completely
>> separate from the OpenStack nodes. It's *not* using a local LVM all in one
>> OpenStack contraption. This is important because even if you have Glance
>> caching enabled, and there was no cache miss, you still have to dd the bits 
>> to
>> the block device, which is still going over the network. Unless Glance is 
>> going
>> to cache on the storage array itself, forget about it.
>>
>> Glance should be focusing on other issues, rather than trying to make
>> copying image bits over the network and dd'ing to a block device faster.
>>
>> --
>> Mike Perez
>>
> Thanks Mike,
>
> So without the cinder cache your times averaged in the roughly 150+ second
> range. The first couple of volumes with the cache took roughly 170+ seconds.
> What the data does not tell is whether cinder was pulling the images directly
> from the glance backend rather than through glance in either of these cases?

Oh but I did, and that's the beauty of this: the runs marked
cinder-cache-x.html avoid Glance as soon as they can, using the Cinder
generic image cache solution [1]. Please reread my point about Glance being
unable to do caching on a storage array, which is why we don't rely on Glance.
It's too slow otherwise.

Take this example with 50 volumes created from image with Cinder's image cache
[2]:

* Is using Glance cache (oh no cache miss)
* Downloads the image from whatever glance store
* dd's the bits to the exported block device.
* the bits travel to the storage array that the block device was exported from.
* [2nd-50th] request of that same image comes, Cinder instead just references
  a cinder:// endpoint which has the storage array do a copy on write. ZERO
  COPYING since we can clone the image. Just a reference pointer and done, move
  on.
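The flow above can be sketched in a few lines (a rough illustration with hypothetical cache/backend/glance interfaces; this is not Cinder's actual implementation):

```python
def create_volume_from_image(cache, backend, glance, image_id):
    """Hypothetical sketch of the generic image-cache flow described
    above: only the first request pays the download + dd cost; every
    later request is a copy-on-write clone done by the array."""
    base = cache.get(image_id)
    if base is None:
        # Cache miss: download from the Glance store and write the
        # bits to a newly exported block device (the expensive path).
        bits = glance.download(image_id)
        base = backend.write_new_volume(bits)
        cache[image_id] = base
    # Cache hit (or freshly seeded entry): zero copying -- the array
    # just creates a copy-on-write clone of the cached base volume.
    return backend.clone(base)
```

With 50 requests for the same image, only the first touches Glance; the other 49 are clones on the array.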

> Somehow you need to seed those caches, and that seeding time/mechanism is
> where the debate seems to be. Can you afford keeping every image in cache so
> that they are all local? And if you need to pull the image to seed your cache,
> how much do you benefit from your 100 cinder nodes pulling it directly from
> backend X versus glance caching/sitting in between? How does the block storage
> backend handle 100 concurrent reads by different clients when you are seeding
> it between different arrays? The scale starts to matter here because it makes
> a lot of difference on the backend if it's a couple of cinder or nova nodes
> requesting the image vs. 100s of them. Lots of backends tend not to like such
> loads, or we outperform them due to not having to fight for the bandwidth
> with other consumers of that backend.

Are you seriously asking if a backend is going to withstand concurrent
reads compared to Glance cache?

All storage backends do is I/O, unlike Glance which is trying to do a million
things and just pissing off the community.

They do it pretty darn well and are a lot more sophisticated than Glance cache.
I'd pick Ceph w/ Cinder generic image cache doing copy on writes over Glance
cache any day.

As it stands Cinder will be recommending in documentation for users to use the
generic image cache solution over Glance Cache.


[1] - https://review.openstack.org/#/c/195795/
[2] - 
http://thing.ee/x/cache_results/cinder-cache-50.html#/CinderVolumes.create_and_delete_volume

--
Mike Perez

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subjec

Re: [openstack-dev] [Magnum] Obtain the objects from the bay endpoint

2015-08-12 Thread Steven Dake (stdake)


From: Akash Gangil <akashg1...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, August 12, 2015 at 1:37 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] Obtain the objects from the bay endpoint

Hi,

I have a few questions. inline.


Problem :-

Currently objects (pod/rc/service) are read from the database. In order for 
native clients to work, they must be read from the REST bay endpoint. To 
execute native clients, we must have one truth of the state of the system, not 
two as in the current implementation.


What is meant by the "native" clients here? Can you give an example?

A native client is the docker binary or kubectl from those respective projects.
We also need to support python-magnumclient operations to support further Heat
integration, which allows Magnum to be used well with proprietary software
implementations that may be doing orchestration via Heat.



A]  READ path needs to be changed :

1. For python clients :-

python-magnum client->rest api->conductor->rest-endpoint-k8s-api handler

At present this is python-magnum client->rest api->db

2. For native clients :-

native client->rest-endpoint-k8s-api

If the native client can get all the info through the rest-endpoint-k8s
handler, why, in the case of the magnum client, do we need to go through
rest-api->conductor? Do we parse or modify the k8s-api data before responding
to the python-magnum client?



Kubernetes has a rest API endpoint running in the bay.  This is different from 
the Magnum rest API.  This is what is referred to above.
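A rough sketch of what the proposed READ path could look like (illustrative only; the helper names and the translation to Magnum-style records are assumptions, though /api/v1/namespaces/{ns}/pods is the standard Kubernetes list endpoint):

```python
import json
import urllib.request

def parse_pod_list(body):
    """Translate a Kubernetes pod-list response into the flat records
    a Magnum API response might carry (hypothetical shape)."""
    return [{"name": item["metadata"]["name"],
             "status": item.get("status", {}).get("phase")}
            for item in body.get("items", [])]

def get_pods(bay_api_address, namespace="default"):
    # Proposed READ path: ask the bay's k8s REST endpoint directly
    # instead of SELECTing a second copy from Magnum's database, so
    # native clients and python-magnumclient see one truth.
    url = "%s/api/v1/namespaces/%s/pods" % (bay_api_address, namespace)
    with urllib.request.urlopen(url) as resp:
        return parse_pod_list(json.load(resp))
```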

B] WRITE operations need to happen via the rest endpoint instead of the 
conductor.

If we completely bypass the conductor, is there any way to keep a trace of how
a resource was modified? Since I presume magnum no longer has that info if we
talk to the k8s-api directly? Or is this irrelevant?

C] Another requirement that needs to be satisfied is that data returned by 
magnum should be the same whether its created by native client or python-magnum 
client.

I don't understand why the information is duplicated in the magnum db and the
k8s data source in the first place? From what I understand magnum has its own
database which is populated with k8s-api responses?

The reason it is duplicated is because when I wrote the original code, I didn’t 
foresee this objective.  Essentially I’m not perfect ;)


The fix will make sure all of the above conditions are met.

Need your input on the proposed approach.

ACK, accurate to my understanding of the proposed approach :)

-Vilobh

[1] 
https://blueprints.launchpad.net/magnum/+spec/objects-from-bay




--
Akash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Ihar Hrachyshka
On 08/12/2015 03:45 PM, Kyle Mestery wrote:
> It gives me great pleasure to propose Russell Bryant and Brandon
> Logan as core reviewers in the API/DB/RPC area of Neutron. Russell
> and Brandon have both been incredible contributors to Neutron for a
> while now. Their expertise has been particularly helpful in the
> area they are being proposed in. Their review stats [1] place them
> both comfortably in the range of existing Neutron core reviewers. I
> expect them to continue working with all community members to drive
> Neutron forward for the rest of Liberty and into Mitaka.
> 
> Existing DB/API/RPC core reviewers (and other Neutron core
> reviewers), please vote +1/-1 for the addition of Russell and
> Brandon.
> 
> Thanks! Kyle
> 
> [1] http://stackalytics.com/report/contribution/neutron-group/90
> 

Shouldn't we use the link that shows neutron core repo contributions
only? I think this is the right one:

http://stackalytics.com/report/contribution/neutron/90

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Armando M.
Is this an example of +1+1=3?

On 12 August 2015 at 07:51, Doug Wiegley 
wrote:

> A big +1 to both!!
>
> Doug
>
>
>
> On Aug 12, 2015, at 6:45 AM, Kyle Mestery  wrote:
>
> It gives me great pleasure to propose Russell Bryant and Brandon Logan as
> core reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have
> both been incredible contributors to Neutron for a while now. Their
> expertise has been particularly helpful in the area they are being proposed
> in. Their review stats [1] place them both comfortably in the range of
> existing Neutron core reviewers. I expect them to continue working with all
> community members to drive Neutron forward for the rest of Liberty and into
> Mitaka.
>
> Existing DB/API/RPC core reviewers (and other Neutron core reviewers),
> please vote +1/-1 for the addition of Russell and Brandon.
>
> Thanks!
> Kyle
>
> [1] http://stackalytics.com/report/contribution/neutron-group/90
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][ha] DVR-HA router is not working.

2015-08-12 Thread Korzeniewski, Artur
I have checked that:
1) the router_gateway is configured properly on the network node (in the
namespace and in OVS), but the port is not reported as UP to the controller
node. Also, ARP is not performed, so no one on the external network is aware of
the gateway address. When I ping from the GW to an external host, the host
learns where the GW IP is and can ping back.

2) The distributed snat port is created in the namespace and the IP is
configured, but the port is not reported as UP to the controller, and there is
no connectivity with other VMs inside the tenant network; maybe the port is not
configured properly in OVS…

Regards,
Artur

From: Assaf Muller [mailto:amul...@redhat.com]
Sent: Wednesday, August 12, 2015 4:09 PM
To: OpenStack Development Mailing List (not for usage questions); 
adolfo.dua...@hp.com
Subject: Re: [openstack-dev] [neutron][dvr][ha] DVR-HA router is not working.

Adding the author of the patches. This reinforces the need to hold on merging 
these patches until they have an in-tree integration test.

On Tue, Aug 11, 2015 at 4:13 PM, Korzeniewski, Artur 
<artur.korzeniew...@intel.com> wrote:
Hi,
I’ve been playing around with DVR-HA patches [1][2], have them applied on 
Mondays master branch.
The problem is that the dvr-ha router is not working with SNAT and floating ips.

My setup:
Devstack-34 – all in one (controller, compute, DVR agent, DHCP node)
Devstack-35 – compute node and DVR agent
Devstack-36 – network node (SNAT)
Devstack-37 – network node 2 (SNAT)

External interface (router_gateway) is down for created dvr-ha router. The snat 
port (router_centralized_snat) is also down after connecting the tenant network.
I’m not sure where is the problem, can someone look at the logs and point me 
the place where to look for the answer why the ports are not reported as UP?
Add default gateway for DVR-HA router, log from active network node: 
http://pastebin.com/S7rYpDns
Add default gateway for DVR-HA router, log from neutron server node: 
http://pastebin.com/WpcV1g09

The external gateway IP is not reachable from the external network, and VMs are
not able to ping the default gateway (10.2.2.1)…
I have to add that on the same setup the usual DVR router is working fine
(hosted on the same network node)

[1] https://review.openstack.org/#/c/196893
[2] https://review.openstack.org/#/c/143169

Regards,
Artur Korzeniewski

Intel Technology Poland sp. z o.o.
KRS 101882
ul. Slowackiego 173, 80-298 Gdansk


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Compass] Call for contributors

2015-08-12 Thread Weidong Shao
Hi OpenStackers,

Compass is not new to OpenStack community. We started it as an OpenStack
deployment tool at the Hong Kong summit. We then showcased it at the Paris
summit.

However, the project has gone through some changes recently. We'd like to
re-introduce Compass and welcome new developers to expand our efforts,
share in its design, and advance its usefulness to the OpenStack community.

We intend to follow the 4 openness guidelines and enter the "Big Tent". We
have had some feedback from TC reviewers and others and realize we have
some work to do to get there. More developers interested in working on the
project will get us there more easily.

Besides the openness guidelines, there is critical developer work we need to do
to align with the rest of OpenStack. For example, we have forked Chef cookbooks, and
Ansible written from scratch for OpenStack deployment. We need to merge the
Compass Ansible playbooks back into the upstream openstack repo
(os-ansible-deployment).

We also need to reach out to other related projects, such as Ironic, to
make sure that where our efforts overlap, we provide added value, not
different ways of doing the same thing.

It is a lot of work, but we think it will add to the OpenStack community.


   - The project wiki page is at https://wiki.openstack.org/wiki/Compass
   - The launchpad is: https://launchpad.net/compass
   - The weekly IRC meeting is on openstack-meeting4 0100 Thursdays UTC (or
   Wed 6pm PDT)
   - Code repo is under stackforge
   https://github.com/stackforge/compass-core
   https://github.com/stackforge/compass-web
   https://github.com/stackforge/compass-adapters


Thanks,
Weidong
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Compass] Regarding Ansible Playbook vs upstream repo

2015-08-12 Thread Weidong Shao
[+ openstack-dev as suggested]

Steven,

Thank you for the candid comments and suggestions! We are revising our
mission to be openstack-centric as we plan our next phase. We will follow
the suggested steps by you and other TC reviewers that make sense for our
project.

On the specific question on Ansible playbook, it is good that we understand
why Kolla's Ansible is different. In Compass, we are trying to deprecate
ours and adopt os-ansible-deployment instead.

more in line:

On Mon, Aug 10, 2015 at 5:47 PM Steven Dake (stdake) 
wrote:

> Xicheng.  Comments inline.  This discussion should probably be on the
> openstack-dev mailing list, but I understand if you don’t feel comfortable
> asking such questions there.  IMO this is part  of the problem with Compass
> in big tent…
>
> I have copied Sam Yaple, one of our core reviewers because I think this is
> relevant to him.
>
> From: Xicheng Chang 
> Date: Monday, August 10, 2015 at 2:46 PM
> To: Steven Dake 
> Cc: Weidong Shao , Xicheng Chang <
> xicheng.ch...@huawei.com>
> Subject: Ansible playbook for openstack
>
> Hi Steven,
>
> This is Xicheng from Compass dev-team. I heard from Weidong that Kolla has
> its own Ansible playbooks for deployment and I would like to learn more
> about the following aspects:
>
> - What is the github url of this ansible project?
>
>
> http://github.com/stackforge/kolla
>
> We are not just an ansible project.  The goal of our project is deployment
> using thin container technology and providing a reference implementation of
> deployment tooling.  We fully expect TripleO will provide an integration
> with Kolla using puppet to our thin container technology.
>
>
> - Does it have anything to do with the upstream
> stackforge/os-ansible-deployment?
>
>
> No we are a completely separate project.  We can’t use OSADs ansible
> scripts because our Ansible implementation is tightly integrated with our
> container technology.  Maybe some day we can merge but getting the two
> independently formed communities on board with such a merger would be
> difficult.
>
>
>
> - What effort did your team make on ansible playbooks in order to get
> inducted to the openstack big-tent? I think it is required to use upstream
> deployment cookbooks/manifests/playbooks.(we currently have our own
> deployment repo at
>
> github.com/stackforge/compass-adapters)
>
>
> My take on the TC is they are willing to accept many different deployment
> tools into the Big Tent assuming they offer something unique and are a
> legitimate OpenStack project, following OpenStack processes, with a
> properly diversified community.
>
> Kolla’s big tent application for reference:
> https://review.openstack.org/206789
>
> I hope you don’t mistake my directness at answering the question I think
> you really want answered for rudeness, but as a casual observer of the
> Compass big tent application I noticed the following problems:
>
>
>1. You have your own domain name vs using OpenStack infrastructure for
>user interaction and marketing.  Openstack has all our infrastructure to
>engage the community in one place not spread them all out all over.
>
> Good suggestion. We will move all related content to our wiki page on
OpenStack. This also helps us as it is easier to keep a single location up
to date and consistent than two.

>
>    2. According to Jay, compass installs all different kinds of
>infrastructure.  This is a non-starter.  Generic tools have always been
>rejected for openstack namespace.  It shows a lack of commitment to
>OpenStack’s success and a desire to “hedge your bets”.
>
The project is focused on OpenStack, but deployment spans many things not
strictly OpenStack.  Just as Ironic installs baremetal systems, ours deploys
them.  Ironic is expected to be key in Compass going forward.  If a customer
wants to deploy a baremetal instance with a container with a legacy app, we
don't expect Compass to disallow it, but we won't be providing special tooling
to specifically enable it, either.

>
>    3. This email is an example of not participating in the open.  You
>would get the same exact response from me on openstack-dev.  It would be
>    better if the entire community could learn from my opinions rather than a
>    couple of people.
>
> correcting it right now.

>
>    4. I get that other systems like open daylight, NFV, and all the other
>new networking systems are becoming popular.  It is appealing to want to
>compete with these in the same deployment tool that deploys OpenStack.
>That is a nonstarter.
>
As I mentioned above, the scope of Compass, as a (hoping-to-be) OpenStack
project, is being adjusted to focus on OpenStack deployment.

>
>    5. I think including CEPH is fine.  Everyone is in love with ceph.  We
>are going to use it in Kolla for our persistent storage.  No openstack
>project solves this problem in a suitable way.
>
> A course of action to correct the deficiencies pointed out by the TC:
>
>
>1. Co

Re: [openstack-dev] [security] [docs] Security Guide Freeze and RST migration - Complete

2015-08-12 Thread Anne Gentle
Hi and congrats on the conversion! 

At your next meeting can you discuss the Lulu PDF for sale? I don't have 
bandwidth to test PDF output for compatibility with Lulu, but you can talk about 
whether it's worth the effort to test.

Thanks,
Anne


> On Aug 12, 2015, at 10:12 AM, Dillon, Nathaniel  
> wrote:
> 
> All,
> 
> The RST migration has completed, and the freeze is lifted, all incoming 
> patches will need to be in RST format.
> 
> Thanks to the Docs team - especially Andreas - for the awesome support!
> 
> Thanks again,
> 
> Nathaniel
> 
>> On Jul 21, 2015, at 7:46 AM, Dillon, Nathaniel  
>> wrote:
>> 
>> All,
>> 
>> The OpenStack Security Guide is migrating to RST format [1] and with help 
>> from the docs team we hope to have this completed shortly. We will therefore 
>> be entering a freeze on all changes coming into the Security Guide until the 
>> migration is complete, and all future changes will be in the much easier RST 
>> format.
>> 
>> Progress can be tracked on the etherpad [2] or specific issues can be asked 
>> in reply to this message or during the Security Guide weekly meeting [3], 
>> and an announcement will be made when the migration is complete.
>> 
>> Thanks,
>> 
>> Nathaniel
>> 
>> [1] https://bugs.launchpad.net/openstack-manuals/+bug/1463111
>> [2] https://etherpad.openstack.org/p/sec-guide-rst
>> [3] https://wiki.openstack.org/wiki/Documentation/SecurityGuide
>> 


Re: [openstack-dev] [Compass] Call for contributors

2015-08-12 Thread Jay Pipes
A big +1 from me, Weidong, for reaching out to the OpenStack community 
and embracing it.


I look forward to seeing the Compass developer community *collaborating* 
with OSAD, the OpenStack Chef community, and other projects, like Ironic.


Best,
-jay

On 08/12/2015 12:23 PM, Weidong Shao wrote:

Hi OpenStackers,

Compass is not new to OpenStack community. We started it as an OpenStack
deployment tool at the HongKong summit. We then showcased it at the
Paris summit.

However, the project has gone through some changes recently. We'd like
to re-introduce Compass and welcome new developers to expand our
efforts, share in its design, and advance its usefulness to the
OpenStack community.

We intend to follow the 4 openness guidelines and enter the "Big Tent".
We have had some feedback from TC reviewers and others and realize we
have some work to do to get there. More developers interested in working
on the project will get us there easier.

Besides the openness Os, there is critical developer work we need to get
to one of the OpenStack Os.  For example, we have forked Chef cookbooks,
and Ansible written from scratch for OpenStack deployment. We need to
merge the Compass Ansible playbooks back to openstack upstream repo
(os-ansible-deployment).

We also need to reach out to other related projects, such as Ironic, to
make sure that where our efforts overlap, we provided added value, not
different ways of doing the same thing.

Lot of work we think will add to the OpenStack community.

  * The project wiki page is at https://wiki.openstack.org/wiki/Compass
  * The launchpad is: https://launchpad.net/compass
  * The weekly IRC meeting is on openstack-meeting4 0100 Thursdays UTC
(or Wed 6pm PDT)
  * Code repo is under stackforge
https://github.com/stackforge/compass-core
https://github.com/stackforge/compass-web
https://github.com/stackforge/compass-adapters


Thanks,
Weidong




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread David Chadwick


On 11/08/2015 01:46, Jamie Lennox wrote:
> 
> 
> - Original Message -
>> From: "Jamie Lennox"  To: "OpenStack
>> Development Mailing List (not for usage questions)"
>>  Sent: Tuesday, 11 August, 2015
>> 10:09:33 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
>> Federated Login
>> 
>> 
>> 
>> - Original Message -
>>> From: "David Chadwick"  To:
>>> openstack-dev@lists.openstack.org Sent: Tuesday, 11 August, 2015
>>> 12:50:21 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
>>> Federated Login
>>> 
>>> 
>>> 
>>> On 10/08/2015 01:53, Jamie Lennox wrote:
 
 
 - Original Message -
> From: "David Chadwick"  To: 
> openstack-dev@lists.openstack.org Sent: Sunday, August 9,
> 2015 12:29:49 AM Subject: Re: [openstack-dev] [Keystone]
> [Horizon] Federated Login
> 
> Hi Jamie
> 
> nice presentation, thanks for sharing it. I have forwarded it
> to my students working on federation aspects of Horizon.
> 
> About public federated cloud access, the way you envisage it,
> i.e. that every user will have his own tailored (subdomain)
> URL to the SP is not how it works in the real world today.
> SPs typically provide one URL, which everyone from every IdP
> uses, so that no matter which browser you are using, from
> wherever you are in the world, you can access the SP (via
> your IdP). The only thing the user needs to know, is the name
> of his IdP, in order to correctly choose it.
> 
> So discovery of all available IdPs is needed. You are correct
> in saying that Shib supports a separate discovery service
> (WAYF), but Horizon can also play this role, by listing the
> IdPs for the user. This is the mod that my student is making
> to Horizon, by adding type ahead searching.
 
 So my point at the moment is that unless there's something i'm
 missing in the way shib/mellon discovery works, horizon can't.
 Because we forward to a common websso entry point, there is no way
 (that i know of) for the user's selection in horizon to be forwarded
 to keystone. You would still need a custom "select your idp"
 discovery page in front of keystone. I'm not sure if this addition
 is part of your student's work, it just hasn't been mentioned yet.
 
> About your proposed discovery mod, surely this seems to be
> going in the wrong direction. A common entry point to
> Keystone for all IdPs, as we have now with WebSSO, seems to
> be preferable to separate entry points per IdP. Which high
> street shop has separate doors for each user? Or have I
> misunderstood the purpose of your mod?
 
 The purpose of the mod is purely to bypass the need to have a 
 shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2.
 This page is currently required to allow a user to select their
 idp (presumably from the ones supported by keystone) and
 redirect to that IDPs specific login page.
>>> 
>>> There are two functionalities that are required: a) Horizon
>>> finding the redirection login URL of the IdP chosen by the user 
>>> b) Keystone finding which IdP was used for login.
>>> 
>>> The second is already done by Apache telling Keystone in the
>>> header field.
>>> 
>>> The first is part of the metadata of the IdP, and Keystone should
>>> make this available to Horizon via an API call. Ideally when
>>> Horizon calls Keystone for the list of trusted IdPs, then the
>>> user friendly name of the IdP (to be displayed to the user) and
>>> the login page URL should be returned. Then Horizon can present
>>> the user friendly list to the user, get the login URL that
>>> matches this, then redirect the user to the IdP telling the IdP
>>> the common callback URL of Keystone.
>> 
>> So my understanding was that this wasn't possible. Because we want
>> to have keystone be the registered service provider and receive the
>> returned SAML assertions the login redirect must be issued from
>> keystone and not horizon. Is it possible to issue a login request
>> from horizon that returns the response to keystone? This seems
>> dodgy to me but may be possible if all the trust relationships are
>> set up.
> 
> Note also that currently this metadata including the login URL is not
> known by keystone. It's controlled by apache in the metadata xml
> files so we would have to add this information to keystone. Obviously
> this is doable just extra config setup that would require double
> handling of this URL.

My idea is to use Horizon as the WAYF/Discovery service, approximately
as follows

1. The user goes to Horizon to login locally or to discover which
federated IdP to use
2. Horizon dynamically populates the list of IDPs by querying Keystone
3. The user chooses the IdP and Horizon redirects the user to
Apache/Keystone, telling it the IdP to use
4. Apache creates the SAML authentication request and sends it to the IdP.
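Steps 2-3 above could be sketched roughly as follows; the shape of the IdP list and the per-IdP websso URL layout are assumptions for illustration, not Keystone's actual API:

```python
from urllib.parse import quote

# Imagined response from Keystone's list-of-trusted-IdPs call (step 2);
# the field names here are assumptions for illustration.
idps = [
    {'id': 'uni-kent', 'display_name': 'University of Kent'},
    {'id': 'testshib', 'display_name': 'TestShib'},
]

def websso_redirect(keystone_base, idp_id, protocol='saml2'):
    # Step 3: redirect the user to a per-IdP websso entry point so the
    # chosen IdP survives the hop from Horizon to Apache/Keystone.
    return ('%s/v3/auth/OS-FEDERATION/identity_providers/%s'
            '/protocols/%s/websso' % (keystone_base, quote(idp_id), protocol))

# Horizon would render the display_name values and redirect on selection:
print(websso_redirect('https://keystone:5000', idps[0]['id']))
```

Horizon only needs the id and a user-friendly name per IdP; everything protocol-specific stays behind the Apache/Keystone entry point.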

In order to use the standard SAML Discovery Pr

Re: [openstack-dev] Does murano dynamic-ui have plan to support "edit" function?

2015-08-12 Thread Alexander Tivelkov
Hi Tony,

Thanks for your interest!

This is a complicated topic. Being able to edit the object model (with
DynamicUI, the CLI tools mentioned by Kirill or manually with murano's
API) is just the tip of the iceberg: if you have already deployed your
application, modifying some of its input properties will not be
enough: the application developer has to supply the logic which will
handle the changes and will execute all the needed actions to
reconfigure the app.

Right now the right way to do so is to create an "action" method (see
[1] for details), which may be called for already deployed apps. In
this action you may change the properties of the object and do
whatever custom handling you may need to reconfigure your app. For
example, if you want to change the password of the database admin user
of the mysql app, you may add an action method "changeAdminPassword"
which will not only set '$.password' property to a new value, but will
also execute an appropriate password-changing script on the VM running
the database instance.

Right now the actions are partially supported on the UI level (in the
dashboard you may call any action of any deployed application),
however currently you cannot pass any parameters to these actions if
called from the UI, and being able to pass them is indeed required for
your scenario (in the aforementioned example with DB password change,
such an action should have at least one parameter - the new password
value). This will probably be addressed during the M cycle as part of
the "per-component UI" initiative mentioned by Kirill: we will provide
a way to render dynamic UI dialogs not only for the new applications
being added but also for the actions of the already deployed apps.

Hope this helps.
Please let me know if you have any questions on Actions and any other
related topics

[1] 
http://murano.readthedocs.org/en/latest/draft/appdev-guide/murano_pl.html#murano-actions

--
Regards,
Alexander Tivelkov


On Wed, Aug 12, 2015 at 2:16 PM, WANG, Ming Hao (Tony T)
 wrote:
> Kirill,
>
>
>
> Thanks for your info very much!
>
> We will study it first.
>
>
>
> Thanks,
>
> Tony
>
>
>
> From: Kirill Zaitsev [mailto:kzait...@mirantis.com]
> Sent: Wednesday, August 12, 2015 7:12 PM
> To: WANG, Ming Hao (Tony T); OpenStack Development Mailing List (not for
> usage questions)
> Subject: Re: [openstack-dev] Does murano dynamic-ui have plan to support
> "edit" function?
>
>
>
> Hi, sure there are such plans! This has long been referred to as
> per-component-UI. I’m really hoping there will be some traction on it
> during the mitaka cycle. Not in liberty though; feature freeze is less than a
> month away.
>
>
>
> btw, if you’re interested in custom tweaking and fine-tuning of murano
> object-model you can take a look at these CLI tools
> https://review.openstack.org/#/q/project:openstack/python-muranoclient+branch:master+topic:bp/env-configuration-from-cli,n,z
>
>
>
> and this https://review.openstack.org/#/c/208659/ commit in particular.
> Although using those would require you to have some knowledge about how
> murano handles things internally.
>
>
>
>
>
> --
> Kirill Zaitsev
> Murano team
>
> Software Engineer
>
> Mirantis, Inc
>
>
>
> On 12 Aug 2015 at 13:23:47, WANG, Ming Hao (Tony T)
> (tony.a.w...@alcatel-lucent.com) wrote:
>
> Dear OpenStack developers,
>
>
>
> Currently, murano dynamic-ui is a “one-time” GUI, and I can’t edit data that
> has been submitted.
>
> Does murano dynamic-ui have plans to support an "edit" function in the future?
>
>
>
> For example, a developer develops some wizard GUI to do some configuration,
> and a user wants to change some configuration after the deployment.
>
>
>
> Thanks,
>
> Tony
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Kyle Mestery
On Wed, Aug 12, 2015 at 10:54 AM, Ihar Hrachyshka 
wrote:

> On 08/12/2015 03:45 PM, Kyle Mestery wrote:
> > It gives me great pleasure to propose Russell Bryant and Brandon
> > Logan as core reviewers in the API/DB/RPC area of Neutron. Russell
> > and Brandon have both been incredible contributors to Neutron for a
> > while now. Their expertise has been particularly helpful in the
> > area they are being proposed in. Their review stats [1] place them
> > both comfortably in the range of existing Neutron core reviewers. I
> > expect them to continue working with all community members to drive
> > Neutron forward for the rest of Liberty and into Mitaka.
> >
> > Existing DB/API/RPC core reviewers (and other Neutron core
> > reviewers), please vote +1/-1 for the addition of Russell and
> > Brandon.
> >
> > Thanks! Kyle
> >
> > [1] http://stackalytics.com/report/contribution/neutron-group/90
> >
>
> Shouldn't we use the link that shows neutron core repo contributions
> only? I think this is the right one:
>
> http://stackalytics.com/report/contribution/neutron/90
>
>
Sure, if you want to look at only the neutron repo. I tend to look at
people across all of our repos, which you may or may not agree with. I also
think that it's worth looking at the statement of what core reviewers do
found here [1]. Particularly what common ideals all core reviewers across
Neutron share. I'll copy them here:

1. They share responsibility in the project’s success.
2. They have made a long-term, recurring time investment to improve the
project.
3. They spend their time doing what needs to be done to ensure the project’s
success, not necessarily what is the most interesting or fun.

Also, keep in mind how we nominate core reviewers now that we have a
Lieutenant system [2].

Finally, it's worth all core reviewers having a look at what's expected of
core reviewers here. [3] I should point out that the team is severely
lacking in weekly meeting attendance at this point, but this isn't the
right thread to address that. Instead, I'll just point out what we as a
team have codified as expectations for core reviewers.

Thanks!
Kyle

[1]
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#neutron-core-reviewers
[2]
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#adding-or-removing-core-reviewers
[3]
http://docs.openstack.org/developer/neutron/policies/core-reviewers.html#neutron-core-reviewer-membership-expectations

Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-12 Thread Markus Zoeller
Another thing which makes it hard to understand the impact of the config
options is that their interdependencies with other config options are not
clear. As an example, "serial_console.base_url" has a
dependency on "DEFAULT.cert" and "DEFAULT.key" if you want to use
secured websockets ("base_url=wss://..."). Another one is the option
"serial_console.serialproxy_port": this port number must be the same
as the one in "serial_console.base_url". I couldn't find an explanation
of this anywhere.

The three questions I have with every config option:
1) which service(s) access this option?
2) what does it do? / what's the impact? 
3) which other options do I need to tweak to get the described impact?
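The hidden coupling described above could be made explicit with a small validation helper. This is a purely hypothetical sketch, not nova code; the conf layout is an assumption:

```python
from urllib.parse import urlparse

def check_serial_console(conf):
    # Illustrates the two couplings described above: the port embedded in
    # serial_console.base_url must match serial_console.serialproxy_port,
    # and wss:// requires DEFAULT.cert and DEFAULT.key to be set.
    url = urlparse(conf['serial_console']['base_url'])
    problems = []
    if url.port != conf['serial_console']['serialproxy_port']:
        problems.append('base_url port != serialproxy_port')
    if url.scheme == 'wss' and not (conf['DEFAULT'].get('cert')
                                    and conf['DEFAULT'].get('key')):
        problems.append('wss:// needs DEFAULT.cert and DEFAULT.key')
    return problems
```

Something in this spirit, run at service startup, would turn silent misconfiguration into an explicit error message.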

Would it make sense to stage the changes?
M cycle: move the config options out of the modules to another place
 (like the approach Sean proposed) and annotate them with
 the services which use them
N cycle: inject the options into the drivers and eliminate the global
 variables this way (like Daniel et al. proposed)

Especially for new contributors like me, who didn't start in any of the
early releases and didn't have the chance to grow with Nova and its
complexity, it would really help me a lot and enable me to contribute
in a better way.

As a side note:
The "nova.flagmappings" file, which gets generated when you want to 
build the configuration reference manual, contains 804 config options
for Nova. Quite a lot I think :)
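Daniel's suggestion further down the quoted thread — have only the constructor read CONF and pass plain values around — could look roughly like this. The names are hypothetical (FakeConf stands in for oslo.config's global), not actual nova code:

```python
class FakeConf:
    """Stand-in for oslo.config's global CONF object (illustration only)."""
    libvirt_migration_uri = 'qemu+tcp://%s/system'

CONF = FakeConf()

class LibvirtDriverToday:
    # current style: CONF is consulted wherever a value is needed,
    # so behaviour silently depends on a global
    def migration_uri(self, host):
        return CONF.libvirt_migration_uri % host

class LibvirtDriverInjected:
    # proposed style: only the constructor reads CONF; callers and tests
    # can pass a plain value instead of patching a global
    def __init__(self, migration_uri_template=None):
        self.mig_uri = migration_uri_template or CONF.libvirt_migration_uri

    def migration_uri(self, host):
        return self.mig_uri % host

print(LibvirtDriverInjected('qemu+ssh://%s/system').migration_uri('node1'))
# qemu+ssh://node1/system
```

The injected variant also answers question 1) mechanically: whichever service constructs the driver is the one that touches the option.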

Sean Dague  wrote on 07/27/2015 04:35:56 PM:

> From: Sean Dague 
> To: openstack-dev@lists.openstack.org
> Date: 07/27/2015 04:36 PM
> Subject: Re: [openstack-dev] [openstack][nova] Streamlining of config 
> options in nova
> 
> On 07/27/2015 10:05 AM, Daniel P. Berrange wrote:
> > On Fri, Jul 24, 2015 at 09:48:15AM +0100, Daniel P. Berrange wrote:
> >> On Thu, Jul 23, 2015 at 05:55:36PM +0300, mhorban wrote:
> >>> Hi all,
> >>>
> >>> During the development process in nova I faced an issue related with config
> >>> options. Now we have lists of config options and option registration mixed
> >>> with source code in regular files.
> >>> From one side it can be convenient: to have module-encapsulated 
config
> >>> options. But problems appear when we need to use some config option 
in
> >>> different modules/packages.
> >>>
> >>> If some option is registered in module X and module X imports module Y for
> >>> some reason, and one day we need to import this option in module Y, we
> >>> will get a NoSuchOptError exception on import_opt in module Y, because of
> >>> the circular dependency. To resolve it we can move the registration of
> >>> this option into module Y (an inappropriate place) or use other tricks.
> >>>
> >>> I propose to create a file options.py in each package and move all the
> >>> package's config options and registration code there. Such an approach
> >>> allows us to import any option in any place of nova without problems.
> >>>
> >>> This refactoring can be done piece by piece, where a piece is one package.
> >>>
> >>> What is your opinion about this idea?
> >>
> >> I tend to think that focusing on problems with dependency ordering when
> >> modules import each other's config options is merely attacking a symptom
> >> of the real root cause problem.
> >>
> >> The way we use config options is really entirely wrong. We have gone
> >> to the trouble of creating (or trying to create) structured code with
> >> isolated functional areas, files and object classes, and then we throw
> >> in these config options which are essentially global variables that are
> >> allowed to be accessed by any code anywhere. This destroys the isolation
> >> of the various classes we've created, and means their behaviour is often
> >> based on side effects of config options from unrelated pieces of code.
> >> It is total madness in terms of good design practices to have such use
> >> of global variables.
> >>
> >> So IMHO, if we want to fix the real big problem with config options, we
> >> need to be looking at a solution where we stop using config options as
> >> global variables. We should change our various classes so that the
> >> necessary configurable options are passed into object constructors
> >> and/or methods as parameters.
> >>
> >> As an example in the libvirt driver.
> >>
> >> I would set it up so that /only/ the LibvirtDriver class in driver.py
> >> was allowed to access the CONF config options. In its constructor it
> >> would load all the various config options it needs, and either set
> >> class attributes for them, or pass them into other methods it calls.
> >> So in driver.py, instead of calling CONF.libvirt.libvirt_migration_uri
> >> everywhere in the code, in the constructor we'd save that config param
> >> value to an attribute 'self.mig_uri = CONF.libvirt.libvirt_migration_uri'
> >> and then where needed, we'd just use "self.mig_uri".
> >>
> >> Now in the various other libvirt files, imagebackend.py, vol

Re: [openstack-dev] [TaskFlow] Cross-run persistence

2015-08-12 Thread Demian Brecht

> On Aug 10, 2015, at 1:18 PM, Joshua Harlow  wrote:
> 
> Is that what you are looking for? (or possibly something else?),

Hi Josh,

Thanks for the reply. I haven’t had time to dig into it much further, but I’m
not sure that’s what I’m looking for (unless I’m missing something in initial
configuration). What I have tried so far is something to the effect of:

store = {'foo': 'bar'}

class MyAuthTask(task.Task):
    default_provides = 'auth_token'

    def execute(self):
        return 'mytoken'

engine = engines.load(flow, …, store=store)
engine.run()
print(store)

What I’m trying to get at is to selectively persist the token. In this case,
after engine execution, values injected into the store are no longer there. If I
understand how the engine works (which I may not, I’ve only been looking into
this /very/ lightly over the last few days), as tasks are executed, what they
provide is injected into the store. Unless you specifically tell the system not
to purge /anything/ from the store, all will be lost once the engine is
complete. I also don’t want /all/ data to persist, but only select items
(i.e. tokens).

Am I misunderstanding something here?
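For what it's worth, the selective persistence described above could be modelled outside taskflow roughly like this — a plain-Python toy, not taskflow's API:

```python
def run_flow(tasks, store, persist_keys=('auth_token',)):
    """Toy model of an engine run that keeps only whitelisted results.

    Each task is a (name, provides, fn) tuple; everything the tasks
    provide is visible during the run, but after completion only the
    keys in persist_keys are merged back into the caller's store.
    """
    scratch = dict(store)              # transient working storage
    for name, provides, fn in tasks:
        scratch[provides] = fn(scratch)
    for key in persist_keys:           # selective persistence
        if key in scratch:
            store[key] = scratch[key]
    return store

store = {'foo': 'bar'}
tasks = [('auth', 'auth_token', lambda s: 'mytoken'),
         ('tmp', 'scratch_value', lambda s: 42)]
print(run_flow(tasks, store))
# {'foo': 'bar', 'auth_token': 'mytoken'}
```

Here 'scratch_value' is produced during the run but deliberately dropped afterwards, which is the token-only behaviour the question is after.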

Thanks again,
Demian

---
GPG Fingerprint: 9530 B4AF 551B F3CD A45C  476C D4E5 662D DB97 69E3



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Does murano dynamic-ui have plan to support "edit" function?

2015-08-12 Thread Kirill Zaitsev
Hi Tony, 

After re-reading your question I tend to agree with Alex. Actions seem to
be exactly what you’re asking for.

-- 
Kirill Zaitsev
Murano team
Software Engineer
Mirantis, Inc

On 12 Aug 2015 at 20:16:08, Alexander Tivelkov (ativel...@mirantis.com) wrote:

Hi Tony,  

Thanks for your interest!  

This is a complicated topic. Being able to edit the object model (with  
DynamicUI, the CLI tools mentioned by Kirill or manually with murano's  
API) is just the tip of the iceberg: if you have already deployed your  
application, modifying some of its input properties will not be
enough: the application developer has to supply the logic which will  
handle the changes and will execute all the needed actions to  
reconfigure the app.  

Right now the right way to do so is to create an "action" method (see  
[1] for details), which may be called for already deployed apps. In  
this action you may change the properties of the object and do  
whatever custom handling you may need to reconfigure your app. For  
example, if you want to change the password of the database admin user  
of the mysql app, you may add an action method "changeAdminPassword"  
which will not only set '$.password' property to a new value, but will  
also execute an appropriate password-changing script on the VM running  
the database instance.  

Right now the actions are partially supported on the UI level (in the  
dashboard you may call any action of any deployed application),  
however currently you cannot pass any parameters to these actions if  
called from the UI, and being able to pass them is indeed required for  
your scenario (in the aforementioned example with DB password change,  
such an action should have at least one parameter - the new password  
value). This will probably be addressed during the M cycle as part of
the "per-component UI" initiative mentioned by Kirill: we will provide  
a way to render dynamic UI dialogs not only for the new applications  
being added but also for the actions of the already deployed apps.  

Hope this helps.  
Please let me know if you have any questions on Actions and any other  
related topics  

[1] 
http://murano.readthedocs.org/en/latest/draft/appdev-guide/murano_pl.html#murano-actions
  

--  
Regards,  
Alexander Tivelkov  


On Wed, Aug 12, 2015 at 2:16 PM, WANG, Ming Hao (Tony T)  
 wrote:  
> Kirill,
>  
>  
>  
> Thanks for your info very much!  
>  
> We will study it first.  
>  
>  
>  
> Thanks,  
>  
> Tony  
>  
>  
>  
> From: Kirill Zaitsev [mailto:kzait...@mirantis.com]  
> Sent: Wednesday, August 12, 2015 7:12 PM  
> To: WANG, Ming Hao (Tony T); OpenStack Development Mailing List (not for  
> usage questions)  
> Subject: Re: [openstack-dev] Does murano dynamic-ui have plan to support
> "edit" function?  
>  
>  
>  
> Hi, sure there are such plans! This has long been referred to as
> per-component-UI. I’m really hoping there will be some traction on it
> during the mitaka cycle. Not in liberty though; feature freeze is less than a
> month away.
>  
>  
>  
> btw, if you’re interested in custom tweaking and fine-tuning of murano  
> object-model you can take a look at these CLI tools  
> https://review.openstack.org/#/q/project:openstack/python-muranoclient+branch:master+topic:bp/env-configuration-from-cli,n,z
>   
>  
>  
>  
> and this https://review.openstack.org/#/c/208659/ commit in particular.  
> Although using those would require you to have some knowledge about how  
> murano handles things internally.  
>  
>  
>  
>  
>  
> --  
> Kirill Zaitsev  
> Murano team  
>  
> Software Engineer  
>  
> Mirantis, Inc  
>  
>  
>  
> On 12 Aug 2015 at 13:23:47, WANG, Ming Hao (Tony T)  
> (tony.a.w...@alcatel-lucent.com) wrote:  
>  
> Dear OpenStack developers,  
>  
>  
>  
> Currently, murano dynamic-ui is a “one-time” GUI, and I can’t edit data that
> has been submitted.
>  
> Does murano dynamic-ui have plans to support an "edit" function in the future?
>  
>  
>  
> For example, a developer develops some wizard GUI to do some configuration,
> and a user wants to change some configuration after the deployment.
>  
>  
>  
> Thanks,  
>  
> Tony  
>  
>  
>  
> __  
> OpenStack Development Mailing List (not for usage questions)  
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>  
>  
> __  
> OpenStack Development Mailing List (not for usage questions)  
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
>  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subj

Re: [openstack-dev] [Stable][Nova] VMware NSXv Support

2015-08-12 Thread John Garbutt
Apologies to go back in time in the tread, but I wanted to share some
extra context...

On 10 August 2015 at 16:17, Gary Kotton  wrote:
> I agree that the sub team(s) need to review more.
>
> The question is how do the team member feel like they are making progress?
> That is, do they see patches
> land? Do they receive positive feedback from cores that things are good,
> bad, or ugly?

I have tried to write up why I think everyone should do more reviews:
https://wiki.openstack.org/wiki/Nova/Mentoring#Why_do_code_reviews_if_I_am_not_in_nova-core.3F

> I think that the PTL should assign at least 2 cores to each sub team. Let
> the team have accountability. Without that there is no way of getting
> anything done and we are back in the same spot.
>
> Without that we are just doing more of the same.

The current plan (started at the beginning of liberty) is to get subteams
to help focus the core review effort by telling us what patches they have
reviewed already and think are the most important:
https://etherpad.openstack.org/p/liberty-nova-priorities-tracking

This worked well in kilo for the priorities and trivial patches. We
are trying to extend it. I am regularly asking all cores to prefer
reviewing patches listed in the etherpad. It appears this is now starting
to happen, slowly.

I hope that recommendation becomes trusted enough to mean more than
just "please review me":
https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Subteam_recommendation_as_a_.2B2

Thanks,
John

PS
A poor summary of some of the related discussions in the past, can be
found here:
https://wiki.openstack.org/wiki/Nova/Liberty_Release_Schedule#Splitting_out_the_virt_drivers_.28or_other_bits_of_code.29

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-12 Thread Christopher Aedo
On Sun, Aug 9, 2015 at 3:32 PM, Steve Baker  wrote:
> On 07/08/15 06:56, Fox, Kevin M wrote:
>>
>> Heat templates so far seems to be a place to dump examples for showing off
>> how to use specific heat resources/features.
>>
>> Are there any intentions to maintain production ready heat templates in
>> it? Last I asked the answer seemed to be no.
>>
>> If I misunderstood, heat-templates would be a logical place to put them
>> then.
>>
> Historically heat-templates has avoided hosting production-ready templates,
> but this has purely been due to not having the resources available to
> maintain them.
>
> If a community emerged who were motivated to author, maintain and support
> the infrastructure which tests these templates then I think they would
> benefit from being hosted in the heat-templates repository. It sounds like
> such a community is coalescing around the app-catalog project.
>
> Production-ready templates could end up somewhere like
> heat-templates/hot/app-catalog. If this takes off then heat-templates can be
> assigned its own core team so that more than just heat-core could approve
> these templates.

Steve and Kevin have both touched on what I was hoping for when I sent
the initial note.  That is to try to make a place for developing heat
templates, and a path to get there for those who might contribute.

Ryan asked a key question that was definitely not clear from what I was asking:
On Thu, Aug 6, 2015 at 11:55 AM, Ryan Brown  wrote:
> What do you imagine these templates being for? Are people creating
> little reusable snippets/nested stacks that can be incorporated into
> someone else's infrastructure? Or standalone templates for stuff like
> "here, instant mongodb cluster"?

I was thinking of this from the perspective of all the people who have
access to OpenStack clouds now (whether private self-hosted clouds or
the dozens of public OpenStack clouds people can use today).  The App
Catalog is meant to be a showcase for things that you can do with that
cloud; packing it full of useful Heat templates would be excellent.  I
am thinking standalone templates, basically like the great set of
templates available on Rackspace cloud [1] but ones tailored for ALL
OpenStack clouds.

I do not believe just creating a repo will magically result in people
adding templates there (with expert guidance from unspecified core
reviewers).  But it feels to me like we are missing a place and
community for sharing the templates that are being developed (or the
ideas that could be turned into templates).  Like Kevin pointed out
for the work J^2 did for the Chef templates, there's no obvious place
to go check first to see if someone else has created and shared a
template before you start working on one.  Ideally the app-catalog
becomes that place, but I'm trying to figure out how to engage the
Heat community in making that a reality.  If making a new repo is not
the answer (and I agree with most of the points in this thread -
that's not the way forward), let's see what else we can do.

Can we agree the world of people using OpenStack would benefit from
having easy access to Heat templates that stand up applications for
users?  Given that, what would be the best way to start collecting
what already exists, and start encouraging newcomers to contribute
there?

[1]: https://github.com/rackspace-orchestration-templates

-Christopher

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][keystone] Federation beyond Shibboleth

2015-08-12 Thread Adam Young

On 08/11/2015 06:21 AM, Jesse Pretorius wrote:

Hi everyone,

Yesterday we released implementing Keystone as a Federated Service 
Provider as part of the openstack-ansible deployment tooling [1].


This is a starting implementation which was purposefully scoped to 
only use Shibboleth and only support SAML2. The scope was limited due 
to the complexity of getting it working in the first place, but also 
as this was seen to be the use-case which would give the most value.


The implementation, however, was done in a manner which we believe is 
reasonably extendable to accommodate other protocols including OpenID, 
Kerberos, etc. It should also be reasonably easy to develop the Mellon 
SAML implementation instead of the Shibboleth module, although that
would probably be slightly more complex. Our spec [2] has already
covered these extensions, so all we'd need to do is define blueprints 
to cover them and target them at specific milestones.


We'd like to ask whether others would be interested in diving in to 
implement the additional protocols, to implement the alternative 
mod_auth_mellon and also to apply other improvements as we roll on 
towards the target of releasing liberty.

The simplest one is Kerberos + SSSD;

Kerberos provides Authentication.
mod_lookup_identity uses SSSD to get Groups.  It turns LDAP into 
another  Federated identity, much simpler than the LDAP code in Keystone 
(I am responsible for that mess).


We are working on automating this via Ansible on top of a RHEL/Centos 7 
install to demo in Tokyo.


I am not certain if all the pieces are in place yet for Debian based 
install.  Specifically, it needs an updated sssd-dbus package.


We also have mod_mellon and Ipsilon working, as Jamie demo'ed at Pycon AU.


We're happy to work along side anyone who's not familiar with 
openstack-ansible, or even ansible, to setup a test environment (this 
can be done in about an hour) and to prepare a patch for review.


If you have any questions or comments, please feel free to contact me 
via email or on IRC.


Best regards,

Jesse
IRC: odyssey4me

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071748.html
[2] 
https://github.com/stackforge/os-ansible-deployment-specs/blob/master/specs/kilo/keystone-federation.rst





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Sachin Manpathak
Thanks, this feedback was helpful.
Perhaps my paraphrasing was misleading. I am not running openstack at scale
in order to see how much the DB can sustain. My observation was that the
host running nova services saturates on CPU much earlier than the DB does.
Joins could be one of the reasons. I also observed that background tasks
like instance creation and resource/stats updates contend with get queries. In
addition to caching optimizations, prioritizing tasks in nova could help.

Is there a nova API to fetch a list of instances without metadata? Until I
find a good way to profile openstack code, changing the queries can be a
good experiment IMO.
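As a back-of-the-envelope sketch of the redundancy Dan describes below — every metadata row in the joined result set carries a full copy of its instance row — with all widths being assumed numbers, purely for illustration:

```python
# Rough illustration (not nova code) of join vs. two-query data volume.
instance_width = 500   # bytes per instance row (assumed)
metadata_width = 60    # bytes per metadata row (assumed)
pk_width = 8           # bytes for the repeated instance PK in the 2nd query
n_instances = 100
m_metadata = 30        # system_metadata rows per instance

# joined: one wide row per (instance, metadata) pair
joined_bytes = n_instances * m_metadata * (instance_width + metadata_width)
# split: instances once, then metadata rows keyed by instance PK
split_bytes = (n_instances * instance_width
               + n_instances * m_metadata * (metadata_width + pk_width))

print(joined_bytes, split_bytes)             # 1680000 254000
print(round(joined_bytes / split_bytes, 1))  # 6.6
```

Even with these modest assumed widths the joined result set is several times larger on the wire, which matches the "DB traffic shot through the roof" observation in the quoted reply.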



On Wed, Aug 12, 2015 at 8:12 AM, Dan Smith  wrote:

> > If OTOH we are referring to the width of the columns and the join is
> > such that you're going to get the same A identity over and over again,
> > if you join A and B you get a "wide" row with all of A and B with a very
> > large amount of redundant data sent over the wire again and again (note
> > that the database drivers available to us in Python always send all rows
> > and columns over the wire unconditionally, whether or not we fetch them
> > in application code).
>
> Yep, it was this. N instances times M rows of metadata each. If you pull
> 100 instances and they each have 30 rows of system metadata, that's a
> lot of data, and most of it is the instance being repeated 30 times for
> each metadata row. When we first released code doing this, a prominent
> host immediately raised the red flag because their DB traffic shot
> through the roof.
>
> > In this case you *do* want to do the join in
> > Python to some extent, though you use the database to deliver the
> > simplest information possible to work with first; you get the full row
> > for all of the A entries, then a second query for all of B plus A's
> > primary key that can be quickly matched to that of A.
>
> This is what we're doing. Fetch the list of instances that match the
> filters, then for the ones that were returned, get their metadata.
>
> --Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread Lance Bragstad
On Wed, Aug 12, 2015 at 12:06 PM, David Chadwick 
wrote:

>
>
> On 11/08/2015 01:46, Jamie Lennox wrote:
> >
> >
> > - Original Message -
> >> From: "Jamie Lennox"  To: "OpenStack
> >> Development Mailing List (not for usage questions)"
> >>  Sent: Tuesday, 11 August, 2015
> >> 10:09:33 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >> Federated Login
> >>
> >>
> >>
> >> - Original Message -
> >>> From: "David Chadwick"  To:
> >>> openstack-dev@lists.openstack.org Sent: Tuesday, 11 August, 2015
> >>> 12:50:21 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >>> Federated Login
> >>>
> >>>
> >>>
> >>> On 10/08/2015 01:53, Jamie Lennox wrote:
> 
> 
>  - Original Message -
> > From: "David Chadwick"  To:
> > openstack-dev@lists.openstack.org Sent: Sunday, August 9,
> > 2015 12:29:49 AM Subject: Re: [openstack-dev] [Keystone]
> > [Horizon] Federated Login
> >
> > Hi Jamie
> >
> > nice presentation, thanks for sharing it. I have forwarded it
> > to my students working on federation aspects of Horizon.
> >
> > About public federated cloud access, the way you envisage it,
> > i.e. that every user will have his own tailored (subdomain)
> > URL to the SP is not how it works in the real world today.
> > SPs typically provide one URL, which everyone from every IdP
> > uses, so that no matter which browser you are using, from
> > wherever you are in the world, you can access the SP (via
> > your IdP). The only thing the user needs to know, is the name
> > of his IdP, in order to correctly choose it.
> >
> > So discovery of all available IdPs is needed. You are correct
> > in saying that Shib supports a separate discovery service
> > (WAYF), but Horizon can also play this role, by listing the
> > IdPs for the user. This is the mod that my student is making
> > to Horizon, by adding type ahead searching.
> 
>  So my point at the moment is that, unless there's something I'm
>  missing in the way shib/mellon discovery works, horizon can't.
>  Because we forward to a common websso entry point there is no way
>  (that I know of) for the user's selection in horizon to be
>  forwarded to keystone. You would still need a custom "select
>  your idp" discovery page in front of keystone. I'm not sure if
>  this addition is part of your student's work, it just hasn't
>  been mentioned yet.
> 
> > About your proposed discovery mod, surely this seems to be
> > going in the wrong direction. A common entry point to
> > Keystone for all IdPs, as we have now with WebSSO, seems to
> > be preferable to separate entry points per IdP. Which high
> > street shop has separate doors for each user? Or have I
> > misunderstood the purpose of your mod?
> 
>  The purpose of the mod is purely to bypass the need to have a
>  shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2.
>  This page is currently required to allow a user to select their
>  idp (presumably from the ones supported by keystone) and
>  redirect to that IDPs specific login page.
> >>>
> >>> There are two functionalities that are required: a) Horizon
> >>> finding the redirection login URL of the IdP chosen by the user
> >>> b) Keystone finding which IdP was used for login.
> >>>
> >>> The second is already done by Apache telling Keystone in the
> >>> header field.
> >>>
> >>> The first is part of the metadata of the IdP, and Keystone should
> >>> make this available to Horizon via an API call. Ideally when
> >>> Horizon calls Keystone for the list of trusted IdPs, then the
> >>> user friendly name of the IdP (to be displayed to the user) and
> >>> the login page URL should be returned. Then Horizon can present
> >>> the user friendly list to the user, get the login URL that
> >>> matches this, then redirect the user to the IdP telling the IdP
> >>> the common callback URL of Keystone.
> >>
> >> So my understanding was that this wasn't possible. Because we want
> >> to have keystone be the registered service provider and receive the
> >> returned SAML assertions the login redirect must be issued from
> >> keystone and not horizon. Is it possible to issue a login request
> >> from horizon that returns the response to keystone? This seems
> >> dodgy to me but may be possible if all the trust relationships are
> >> set up.
> >
> > Note also that currently this metadata including the login URL is not
> > known by keystone. It's controlled by apache in the metadata xml
> > files so we would have to add this information to keystone. Obviously
> > this is doable just extra config setup that would require double
> > handling of this URL.
>
> My idea is to use Horizon as the WAYF/Discovery service, approximately
> as follows
>
> 1. The user goes to Horizon to login locally or to discover which
> federated IdP to use
> 2. Horizon dynamically populates the list of trusted IdPs

Re: [openstack-dev] stable is hosed

2015-08-12 Thread Matt Riedemann



On 8/12/2015 4:42 AM, Thierry Carrez wrote:

Matt Riedemann wrote:

Just an update:

Kilo: I think we are OK here now, at least for some projects like nova -
raising the minimum required neutronclient to >=2.4.0 seems to have
fixed things.

Juno: We're still blocked on the large ops job:

https://bugs.launchpad.net/openstack-gate/+bug/1482350

I'll probably take a deeper look at options there tomorrow.  lifeless
left a suggestion in the bug report.


Thanks for working on this! I'm back from vacation now, still catching
up. Don't hesitate to pull me into discussions of options or ping me if you
need the occasional review help.



Here is what I think is the workaround for the large-ops blocker in 
stable/juno:


https://review.openstack.org/#/c/212135/

It's not pretty but we don't have many alternatives.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [rally] [Ceilometer] profiler sample resource id

2015-08-12 Thread Pradeep Kilambi
We're in the process of converting existing meters to use a more
declarative approach where we add the meter definition as part of a
YAML file. As part of this transition there are a few notification
handlers where the id is not consistent. For example, in the profiler
notification handler the resource_id is set to
"profiler-%s" % message["payload"]["base_id"]. Is there a reason we
have the prefix? Can we ignore this and directly set it to
message["payload"]["base_id"]? Seems like there is no real need for the
prefix here unless I'm missing something. Can we go ahead and drop it?
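For illustration, the change under discussion amounts to the following (a simplified sketch, not the actual Ceilometer handler code; the payload shape is taken from the example above):

```python
# Simplified sketch of the notification-handler change being proposed;
# the real handler lives in Ceilometer, this only shows the id mapping.
def profiler_resource_id(message, keep_prefix=False):
    base_id = message["payload"]["base_id"]
    # Current behaviour: prefix the trace id with "profiler-".
    # Proposed behaviour: use the trace id as-is.
    return ("profiler-%s" % base_id) if keep_prefix else base_id

msg = {"payload": {"base_id": "9b6b6fa4"}}
print(profiler_resource_id(msg, keep_prefix=True))  # profiler-9b6b6fa4
print(profiler_resource_id(msg))                    # 9b6b6fa4
```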

If we don't hear anything, I'll assume there is no objection to dropping
this prefix.


Thanks,

-- 
--
Pradeep Kilambi; irc: prad
OpenStack Engineering
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-12 Thread Jay Pipes

On 08/12/2015 01:20 PM, Markus Zoeller wrote:


The three questions I have with every config option:
1) which service(s) access this option?
2) what does it do? / what's the impact?
3) which other options do I need to tweak to get the described impact?


All excellent questions that really should be answered in both the help 
string for the option as well as the documentation here:


http://docs.openstack.org/havana/config-reference/content/list-of-compute-config-options.html

Note that the above link is generated, IIRC, from the code, so 
increasing the details in option descriptions (help strings) should show 
up there. Anne, is that assumption correct?



Would it make sense to stage the changes?
M cycle: move the config options out of the modules to another place
  (like the approach Sean proposed) and annotate them with
  the services which use them
N cycle: inject the options into the drivers and eliminate the global
  variables this way (like Daniel et al. proposed)


+1. I think the above is an excellent plan. You have my support.

Best,
-jay


Especially for new contributors like me who didn't start in any of the
early releases and didn't have the chance to grow with Nova and its
complexity, it would really help me a lot and enable me to contribute
in a better way.

As a side note:
The "nova.flagmappings" file, which gets generated when you want to
build the configuration reference manual, contains 804 config options
for Nova. Quite a lot I think :)

Sean Dague  wrote on 07/27/2015 04:35:56 PM:


From: Sean Dague 
To: openstack-dev@lists.openstack.org
Date: 07/27/2015 04:36 PM
Subject: Re: [openstack-dev] [openstack][nova] Streamlining of config
options in nova

On 07/27/2015 10:05 AM, Daniel P. Berrange wrote:

On Fri, Jul 24, 2015 at 09:48:15AM +0100, Daniel P. Berrange wrote:

On Thu, Jul 23, 2015 at 05:55:36PM +0300, mhorban wrote:

Hi all,

During development process in nova I faced an issue related to config
options. Now we have lists of config options and registering of options
mixed with source code in regular files. From one side it can be
convenient: to have module-encapsulated config options. But problems
appear when we need to use some config option in different
modules/packages.

If some option is registered in module X and module X imports module Y
for some reasons... and one day we need to import this option in module
Y, we will get a NoSuchOptError exception on import_opt in module Y,
because of the circular dependency. To resolve it we can move
registration of this option into module Y (an inappropriate place) or
use other tricks.

I offer to create a file options.py in each package and move all of the
package's config options and registration code there. Such an approach
allows us to import any option in any place of nova without problems.

Implementation of this refactoring can be done piece by piece, where a
piece is one package.

What is your opinion about this idea?


I tend to think that focusing on problems with dependency ordering when
modules import each other's config options is merely attacking a
symptom of the real root-cause problem.

The way we use config options is really entirely wrong. We have gone
to the trouble of creating (or trying to create) structured code with
isolated functional areas, files and object classes, and then we throw
in these config options which are essentially global variables which are
allowed to be accessed by any code anywhere. This destroys the isolation
of the various classes we've created, and means their behaviour is often
based on side effects of config options from unrelated pieces of code.
It is total madness in terms of good design practices to have such use
of global variables.

So IMHO, if we want to fix the real big problem with config options, we
need to be looking at a solution where we stop using config options as
global variables. We should change our various classes so that the
necessary configuration options are passed into object constructors
and/or methods as parameters.

As an example in the libvirt driver:

I would set it up so that /only/ the LibvirtDriver class in driver.py
was allowed to access the CONF config options. In its constructor it
would load all the various config options it needs, and either set
class attributes for them, or pass them into other methods it calls.
So in driver.py, instead of calling

CONF.libvirt.libvirt_migration_uri

everywhere in the code, in the constructor we'd save that config param
value to an attribute 'self.mig_uri = CONF.libvirt.libvirt_migration_uri'
and then where needed, we'd just use 'self.mig_uri'.

Now in the various other libvirt files, imagebackend.py, volume.py,
vif.py, etc., none of those files would /ever/ access CONF.*. Any time
they needed a config parameter, it would be passed into their
constructor or method, by the LibvirtDriver or whatever invoked them.

Getting rid of the global CONF object usage in all these files

Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-12 Thread Sean Dague
On 08/12/2015 02:23 PM, Jay Pipes wrote:
> On 08/12/2015 01:20 PM, Markus Zoeller wrote:
> 
>> The three questions I have with every config option:
>> 1) which service(s) access this option?
>> 2) what does it do? / what's the impact?
> >> 3) which other options do I need to tweak to get the described impact?
> 
> All excellent questions that really should be answered in both the help
> string for the option as well as the documentation here:
> 
> http://docs.openstack.org/havana/config-reference/content/list-of-compute-config-options.html
> 
> 
> Note that the above link is generated, IIRC, from the code, so
> increasing the details in option descriptions (help string) should show
> up there. Anne is that assumption correct?
> 
>> Would it make sense to stage the changes?
>> M cycle: move the config options out of the modules to another place
>>   (like the approach Sean proposed) and annotate them with
> >>   the services which use them
>> N cycle: inject the options into the drivers and eliminate the global
>>   variables this way (like Daniel et al. proposed)
> 
> +1. I think the above is an excellent plan. You have my support.

I think this is a great plan. I agree with both steps, and the order in
tackling them. Thanks for taking this on.
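As a toy sketch of the N-cycle injection step Daniel describes (class and option names here are invented for illustration; the real change would touch nova's libvirt driver and oslo.config):

```python
# Toy sketch of constructor injection for config values: only the
# top-level driver reads the (here fake) global CONF; helper classes
# receive plain parameters. All names are invented for illustration.
class FakeConf:
    libvirt_migration_uri = "qemu+tcp://%s/system"

CONF = FakeConf()  # stand-in for oslo.config's global CONF object

class VolumeDriver:
    def __init__(self, mig_uri):
        # Never touches the global CONF; fully isolated and testable.
        self.mig_uri = mig_uri

class LibvirtDriver:
    def __init__(self, conf):
        # The single place where config is read.
        self.mig_uri = conf.libvirt_migration_uri
        self.volume_driver = VolumeDriver(mig_uri=self.mig_uri)

driver = LibvirtDriver(CONF)
print(driver.volume_driver.mig_uri)  # qemu+tcp://%s/system
```

The helper classes can then be unit-tested by passing plain values, with no global config fixture needed.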

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Mike Bayer



On 8/12/15 1:49 PM, Sachin Manpathak wrote:

Thanks, this feedback was helpful.
Perhaps my paraphrasing was misleading. I am not running openstack at 
scale in order to see how much the DB can sustain. My observation was 
that the host running nova services saturates on CPU much earlier than 
the DB does.
You absolutely *want* a single host to be saturated *way* before the 
database is; the database here is a single vertical service intended to 
serve hundreds or thousands of horizontally scaled clients 
simultaneously. A single request at a time should not even be a blip 
in the database's view of things.




Joins could be one of the reasons. I also observed that background 
tasks like instance creation and resource/stats updates contend with get 
queries. In addition to caching optimizations, prioritizing tasks in 
nova could help.


Is there a nova API to fetch the list of instances without metadata? Until 
I find a good way to profile openstack code, changing the queries can 
be a good experiment IMO.



On Wed, Aug 12, 2015 at 8:12 AM, Dan Smith wrote:

> If OTOH we are referring to the width of the columns and the join is
> such that you're going to get the same A identity over and over again,
> if you join A and B you get a "wide" row with all of A and B with a very
> large amount of redundant data sent over the wire again and again (note
> that the database drivers available to us in Python always send all rows
> and columns over the wire unconditionally, whether or not we fetch them
> in application code).

Yep, it was this. N instances times M rows of metadata each. If you pull
100 instances and they each have 30 rows of system metadata, that's a
lot of data, and most of it is the instance being repeated 30 times for
each metadata row. When we first released code doing this, a prominent
host immediately raised the red flag because their DB traffic shot
through the roof.

> In this case you *do* want to do the join in
> Python to some extent, though you use the database to deliver the
> simplest information possible to work with first; you get the full row
> for all of the A entries, then a second query for all of B plus A's
> primary key that can be quickly matched to that of A.

This is what we're doing. Fetch the list of instances that match the
filters, then for the ones that were returned, get their metadata.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [app-catalog][heat] Heat template contributors repo

2015-08-12 Thread Fox, Kevin M
While being able to write kind-of-standalone templates for apps is the goal, 
any sufficiently complicated cloud app template starts to benefit from shared 
components.

If you look at a bunch of the templates that are in:
https://github.com/EMSL-MSC/heat-templates/tree/master/cfn
they are more and more sharing components. So I've built up a shared library 
there:
https://github.com/EMSL-MSC/heat-templates/tree/master/cfn/lib

By having one place to put open-sourced, production-ready templates, these 
common features can start to be abstracted out and made more robust and 
featureful in standard libraries. This in turn makes it easier to write 
production-grade, generic templates to put in the catalog.

It's the back and forth between the production templates and the shared 
libraries that I think is a very important piece of the issue.

It's also a place developers can congregate around and recommend ways for the 
templates to be made better: "Hey, rather than hard-coding that manually, why 
don't you use X shared library feature?". This also is an important piece.

Lastly, it's a place where the heat engine developers can go actually look at 
and quickly see what things app developers are having to do in order to make 
production-grade templates work. There are some unfortunate things that have to 
be done today, and it becomes easier to convince developers new features are 
needed when they can see the reasons themselves.

Eventually the shared libraries and individual app templates will mature enough 
that having just the shared libraries in the common repo, and having the 
templates split out on their own, will probably make sense. I think that's 
several years out at the very least. Probably longer.
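As a hypothetical illustration of that sharing, Heat provider resources let an app template pull a library template in by URL, so the common piece lives once under lib/ (the URL and property names below are invented, not from the EMSL-MSC repo):

```yaml
# Hypothetical app template reusing a shared library template as a
# provider resource; the library URL and properties are invented.
heat_template_version: 2013-05-23

resources:
  database:
    # The resource type is just a path/URL to another template, so
    # common pieces can live once in a shared lib/ directory.
    type: https://example.com/heat-templates/lib/mongodb.yaml
    properties:
      flavor: m1.small
      volume_size: 10
```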

Thanks,
Kevin



From: Christopher Aedo [d...@aedo.net]
Sent: Wednesday, August 12, 2015 10:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [app-catalog][heat] Heat template contributors 
repo

On Sun, Aug 9, 2015 at 3:32 PM, Steve Baker  wrote:
> On 07/08/15 06:56, Fox, Kevin M wrote:
>>
>> Heat templates so far seems to be a place to dump examples for showing off
>> how to use specific heat resources/features.
>>
>> Are there any intentions to maintain production ready heat templates in
>> it? Last I asked the answer seemed to be no.
>>
>> If I misunderstood, heat-templates would be a logical place to put them
>> then.
>>
> Historically heat-templates has avoided hosting production-ready templates,
> but this has purely been due to not having the resources available to
> maintain them.
>
> If a community emerged who were motivated to author, maintain and support
> the infrastructure which tests these templates then I think they would
> benefit from being hosted in the heat-templates repository. It sounds like
> such a community is coalescing around the app-catalog project.
>
> Production-ready templates could end up somewhere like
> heat-templates/hot/app-catalog. If this takes off then heat-templates can be
> assigned its own core team so that more than just heat-core could approve
> these templates.

Steve and Kevin have both touched on what I was hoping for when I sent
the initial note.  That is to try to make a place for developing heat
templates, and a path to get there for those who might contribute.

Ryan asked a key question that was definitely not clear from what I was asking:
On Thu, Aug 6, 2015 at 11:55 AM, Ryan Brown  wrote:
> What do you imagine these templates being for? Are people creating
> little reusable snippets/nested stacks that can be incorporated into
> someone else's infrastructure? Or standalone templates for stuff like
> "here, instant mongodb cluster"?

I was thinking of this from the perspective of all the people who have
access to OpenStack clouds now (whether private self-hosted clouds or
the dozens of public OpenStack clouds people can use today).  The App
Catalog is meant to be a showcase for things that you can do with that
cloud; packing it full of useful Heat templates would be excellent.  I
am thinking standalone templates, basically like the great set of
templates available on Rackspace cloud [1] but ones tailored for ALL
OpenStack clouds.

I do not believe just creating a repo will magically result in people
adding templates there (with expert guidance from unspecified core
reviewers).  But it feels to me like we are missing a place and
community for sharing the templates that are being developed (or the
ideas that could be turned into templates).  Like Kevin pointed out
for the work J^2 did for the Chef templates, there's no obvious place
to go check first to see if someone else has created and shared a
template before you start working on one.  Ideally the app-catalog
becomes that place, but I'm trying to figure out how to engage the
Heat community in making that a reality.  If making a new repo is not
the answer (and I agree with most of the points in this thread -
that's not

Re: [openstack-dev] [TaskFlow] Cross-run persistence

2015-08-12 Thread Joshua Harlow
On Wed, 12 Aug 2015 10:31:03 -0700
Demian Brecht  wrote:

> 
> > On Aug 10, 2015, at 1:18 PM, Joshua Harlow 
> > wrote:
> > 
> > Is that what u are looking for? (or possibly something else?),
> 
> Hi Josh,
> 
> Thanks for the reply. I haven’t had time to dig into it much further,
> but I’m not sure that’s what I’m looking for (unless I’m missing
> something in the initial configuration). What I have tried so far is
> something to the effect of:
> 
> store = {'foo': 'bar'}
> class MyAuthTask(task.Task):
>     default_provides = 'auth_token'
> 
>     def execute(self):
>         return 'mytoken'
> 
> engine = engines.load(flow, …, store=store)
> engine.run()
> print(store)
> 
> What I’m trying to get at is to selectively persist the token. In
> this case, after engine execution, values injected into the store are
> no longer there. If I understand how the engine works (which I may
> not, I’ve only been looking into this /very/ lightly over the last
> few days), as tasks are executed, what they provide is injected into
> the store. Unless you specifically tell the system not to
> purge /anything/ from the store, all will be lost once the engine is
> complete. I also don’t want /all/ data to persist, but only select
> items (i.e. tokens).

Hmmm, I'll be out on vacation for a couple weeks, so feel free to jump
on IRC and ask there, but the summary you have is mostly correct. What
you have to do, which I don't see in the above, is specify which
persistence backend the engine should be using; if none is selected
then all data saved goes into memory (and therefore is lost when the
program is done).

Once you get that working things will be persisted, although selective
persistence isn't currently possible; it could be done in a somewhat
easy manner (as long as we are clear on what it means and/or implies).

Once you get the persistence working,
http://docs.openstack.org/developer/taskflow/persistence.html should
help here; then feel free to submit some kind of oslo-spec or
blueprint for 'selective persistence' (or some other better-named
thing); if you want to work on said code, that's even better :)

Spec template @
https://github.com/openstack/oslo-specs/tree/master/specs

IRC folks are usually @ #openstack-state-management and/or #openstack-oslo
(there should be folks there that can help while I'm out...)

-Josh

> 
> Am I misunderstanding something here?
> 
> Thanks again,
> Demian
> 
> ---
> GPG Fingerprint: 9530 B4AF 551B F3CD A45C  476C D4E5 662D DB97 69E3
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][qos] request to merge feature/qos back into master

2015-08-12 Thread Ihar Hrachyshka
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hi all,

with great pleasure, I want to request a coordinated review for
merging feature/qos branch back to master:

https://review.openstack.org/#/c/212170/

Since it's a merge patch, gerrit fails to show the whole diff that it
introduces into master. To get over it, fetch the patch:

$ git review -d 212170

and then check the difference:

$ git fetch origin && git diff origin/master...

I think we should stick to review process originally suggested at [1].
Specifically, since it's not reasonable to expect the whole feature
branch to be reviewed by a single person, I hope multiple people will
assign themselves to the job and split the pieces to review based on
devref document that describes the feature [2] (Note that a new RPC
push/pull mechanism is described in a separate devref section [3]).

Note that we don't expect to tackle all review comments, however tiny,
in feature/qos. We are happy to handle major flaws there, but for
minor stuff, it's good to proceed in master. Nevertheless we are happy
to get minors too and collect them for post-merge.

Things we have in the tree:

- - server: QoS API extension; QoS core resource extension; QoS ML2
extension driver; QoS versioned objects + base for new objects; QoS
supported rule types mechanism for ML2; QoS notification drivers
mechanism to update SDN controllers;

- - RPC: new push/pull mechanisms for versioned objects to propagate QoS
objects into the agents;

- - agent side: new L2 agent extensions mechanism, integrated into OVS
and SR-IOV agents; QoS l2 agent extension; OVS and SR-IOV QoS drivers;
ovs_lib and pci_lib changes.

I suggest to split review into following logical pieces:

- - API controller + service plugin + API tests;
- - Versioned objects: neutron.objects.*
- - ML2: supported_qos_rule_types mechanism, extension driver, update
for get_device_details payload;
- - RPC mechanism (push/pull), resource manager, registries +
notification drivers integration;
- - l2 extensions (manager, base interface) + qos extension;
- - OVS integration with extension manager + OVS QoS driver + ovs_lib
changes;
- - SR-IOV agent integration with extension manager + SR-IOV QoS driver
+ pci_lib changes;
- - functional tests.

We will also need to update the spec:
https://review.openstack.org/#/c/199112/

Included test coverage:

- - unit tests;
- - API tests;
- - functional tests (more scenarios to come in master);
- - fullstack tests [4] (not in the tree since we need to merge client
and base fullstack patches first).

We have client patches up for review [5][6] and expect them to go in
after merge of server component.

We hope that we'll get fullstack in before closing the blueprint
this cycle.

[1]:
http://lists.openstack.org/pipermail/openstack-dev/2015-July/069188.html
[2]:
http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/devref/q
uality_of_service.rst?h=feature/qos
[3]:
http://git.openstack.org/cgit/openstack/neutron/tree/doc/source/devref/r
pc_callbacks.rst?h=feature/qos
[4]: https://review.openstack.org/202492
[5]: https://review.openstack.org/189655
[6]: https://review.openstack.org/198277
[7]: https://review.openstack.org/202061
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQEcBAEBCAAGBQJVy6TPAAoJEC5aWaUY1u574v0IAOFOH09+cwhv8eEORyHF8kaK
RTYGFefnjCD2BdXJ1bXBhyPMm9CoFbNpAW+zG9l9SaQ7aGvd3yE3bgqlp75qMK8Q
8dW7HuC/pM/VTlrFg1dqZFwHiNYnqxTdoXgrviI8YWXFpfHUDvPIlVkfFRwurX6J
YjHlJEh0VLSI4ungkTNg7Hljwlx4pDMzIB8dVrhGRTRcop4QMpqW+XG6DQVCiW/l
XeUNkAE57H9phkyFQKJFzazYCN2HyOpADZqCrw7vQsUWbFR0LSwbbWy3bkYN9V0D
CV4JTypmHsE+uMV1OaQ+PqPu0NhJw+S7B75QeouVJjltz4VdCWlV8qxSPiFMH4s=
=kfhT
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][qos] request to merge feature/qos back into master

2015-08-12 Thread Kevin Benton
If you want a quick visual diff of this, you can click on "Files changed"
here: https://github.com/openstack/neutron/compare/feature/qos

On Wed, Aug 12, 2015 at 12:55 PM, Ihar Hrachyshka 
wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA256
>
> Hi all,
>
> with great pleasure, I want to request a coordinated review for
> merging feature/qos branch back to master:
>
> https://review.openstack.org/#/c/212170/
>
> Since it's a merge patch, gerrit fails to show the whole diff that it
> introduces into master. To get over it, fetch the patch:
>
> $ git review -d 212170
>
> and then check the difference:
>
> $ git fetch origin && git diff origin/master...
>
>
> [...]
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Kevin Benton


Re: [openstack-dev] [openstack][nova] Streamlining of config options in nova

2015-08-12 Thread Michael Still
On Thu, Aug 13, 2015 at 4:29 AM, Sean Dague  wrote:

> On 08/12/2015 02:23 PM, Jay Pipes wrote:
> > On 08/12/2015 01:20 PM, Markus Zoeller wrote:
> > 
> >> The three questions I have with every config option:
> >> 1) which service(s) access this option?
> >> 2) what does it do? / what's the impact?
> >> 3) which other options do I need to tweak to get the described impact?
> >
> > All excellent questions that really should be answered in both the help
> > string for the option as well as the documentation here:
> >
> >
> http://docs.openstack.org/havana/config-reference/content/list-of-compute-config-options.html
> >
> >
> > Note that the above link is generated, IIRC, from the code, so
> > increasing the details in option descriptions (help string) should show
> > up there. Anne is that assumption correct?
> >
> >> Would it make sense to stage the changes?
> >> M cycle: move the config options out of the modules to another place
> >>   (like the approach Sean proposed) and annotate them with
> >>   the services which use them
>

Do we see https://review.openstack.org/#/c/205154/ as a reasonable example
of such centralization? If not, what needs to change there to make it an
example of that centralization? I see value in having a worked example
people can follow before we attempt a large number of these moves.


> >> N cycle: inject the options into the drivers and eliminate the global
> >>   variables this way (like Daniel et al. proposed)
> >
> > +1. I think the above is an excellent plan. You have my support.
>
> I think this is a great plan. I agree with both steps, and the order in
> tackling them. Thanks for taking this on.
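A rough sketch of what the first (M cycle) step could look like: centralized options annotated with the services that consume them, so a generated config reference can answer Markus's three questions. This is illustrative pure-stdlib code, not nova's actual option registry; the record shape, helper names, and help text are invented.

```python
from dataclasses import dataclass, field

# An annotated option record: each option carries the answers to the
# three questions alongside its definition.
@dataclass
class AnnotatedOpt:
    name: str
    default: object
    help: str                                       # 2) what does it do / impact?
    services: list = field(default_factory=list)    # 1) which services read it?
    related: list = field(default_factory=list)     # 3) which options interact?

OPTS = [
    AnnotatedOpt(
        name="compute_driver",
        default="libvirt.LibvirtDriver",
        help="Driver used to control the underlying hypervisor.",
        services=["nova-compute"],
        related=["virt_type"],
    ),
]

def describe(opt: AnnotatedOpt) -> str:
    """Render one option the way a generated config reference could."""
    return (f"{opt.name} (default: {opt.default})\n"
            f"  used by: {', '.join(opt.services)}\n"
            f"  help: {opt.help}\n"
            f"  related: {', '.join(opt.related) or 'none'}")

print(describe(OPTS[0]))
```

The point of the sketch is only that the service annotation lives next to the option definition, so the doc generator and the N-cycle injection work can both read it from one place.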


Michael

-- 
Rackspace Australia


[openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-12 Thread Matt Riedemann

Bug reported here:

https://bugs.launchpad.net/taskflow/+bug/1484267

We need a 0.6.2 release of taskflow from stable/juno with the g-r caps 
(for networkx specifically) to unblock the cinder py26 job in stable/juno.


--

Thanks,

Matt Riedemann




Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread David Chadwick
Hi Jamie

I have been thinking some more about your Coke and Pepsi use case
example, and I think it is somewhat spurious, for the following reasons:

1. If Coke and Pepsi are members of the SAME federation, then they trust
each other (by definition). Therefore they would not and could not
object to being listed as alternative IdPs in this federation.

2. If Coke and Pepsi are in different federations because they don't
trust each other, but they have the same service provider, then their
service provider would be a member of both federations. In this case,
the SP would provide different access points to the different
federations, and neither Coke nor Pepsi would be aware of each other.

regards

David

On 06/08/2015 00:54, Jamie Lennox wrote:
> 
> 
> - Original Message -
>> From: "David Lyle" 
>> To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Sent: Thursday, August 6, 2015 5:52:40 AM
>> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
>>
>> Forcing Horizon to duplicate Keystone settings just makes everything much
>> harder to configure and much more fragile. Exposing whitelisted, or all,
>> IdPs makes much more sense.
>>
>> On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews < dolph.math...@gmail.com >
>> wrote:
>>
>>
>>
>> On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli < steve...@ca.ibm.com >
>> wrote:
>>
>>
>>
>>
>>
>> Some folks said that they'd prefer not to list all associated idps, which i
>> can understand.
>> Why?
> 
> So the case i heard and i think is fairly reasonable is providing corporate 
> logins to a public cloud. Taking the canonical coke/pepsi example if i'm 
> coke, I get asked to log in to this public cloud, I then have to scroll through 
> all the providers to find the COKE.COM domain and i can see for example that 
> PEPSI.COM is also providing logins to this cloud. Ignoring the corporate 
> privacy implications this list has the potential to get long. Think about for 
> example how you can do a corporate login to gmail, you certainly don't pick 
> from a list of auth providers for gmail - there would be thousands. 
> 
> My understanding of the usage then would be that coke would have been 
> provided a (possibly branded) dedicated horizon that backed onto a public 
> cloud and that i could then from horizon say that it's only allowed access to 
> the COKE.COM domain (because the UX for inputting a domain at login is not 
> great so per customer dashboards i think make sense) and that for this 
> instance of horizon i want to show the 3 or 4 login providers that COKE.COM 
> is going to allow. 
> 
> Anyway you want to list or whitelist that in keystone is going to involve 
> some form of IdP tagging system where we have to say which set of idps we 
> want in this case and i don't think we should.
> 
> @David - when you add a new IdP to the university network are you having to 
> provide a new mapping each time? I know the CERN answer to this with websso 
> was to essentially group many IdPs behind the same keystone idp because they 
> will all produce the same assertion values and consume the same mapping. 
> 
> Maybe the answer here is to provide the option in django_openstack_auth, a 
> plugin (again) of fetch from keystone, fixed list in settings or let it point 
> at a custom text file/url that is maintained by the deployer. Honestly if 
> you're adding and removing idps this frequently i don't mind making the 
> deployer maintain some of this information out of scope of keystone.
> 
> 
> Jamie
> 
>>
>>
>>
>>
>>
>> Actually, I like jamie's suggestion of just making horizon a bit smarter, and
>> expecting the values in the horizon settings (idp+protocol)
>> But, it's already in keystone.
>>
>>
>>
>>
>>
>>
>>
>> Thanks,
>>
>> Steve Martinelli
>> OpenStack Keystone Core
>>
>> Dolph Mathews ---2015/08/05 01:38:09 PM---On Wed, Aug 5, 2015 at 5:39 AM,
>> David Chadwick < d.w.chadw...@kent.ac.uk > wrote:
>>
>> From: Dolph Mathews < dolph.math...@gmail.com >
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org >
>> Date: 2015/08/05 01:38 PM
>> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
>>
>>
>>
>>
>>
>> On Wed, Aug 5, 2015 at 5:39 AM, David Chadwick < d.w.chadw...@kent.ac.uk >
>> wrote:
>>
>> On 04/08/2015 18:59, Steve Martinelli wrote:
>> > Right, but that API is/should be protected. If we want to list IdPs
>> > *before* authenticating a user, we either need: 1) a new API for
>> > listing public IdPs or 2) a new policy that doesn't protect that API.
>>
>> Hi Steve
>>
>> yes this was my understanding of the discussion that took place many
>> months ago. I had assumed (wrongly) that something had been done about
>> it, but I guess from your message that we are no further forward on
>> this. Actually 2) above might be better reworded as: a new
>> policy/engine that allows public access to be a bona fide policy rule.
>> The existing policy simply seems wrong. Why protect th

Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread David Chadwick
Hi Lance

On 12/08/2015 18:55, Lance Bragstad wrote:
> 
> 
> On Wed, Aug 12, 2015 at 12:06 PM, David Chadwick
> mailto:d.w.chadw...@kent.ac.uk>> wrote:
> 
> 
> 
> On 11/08/2015 01:46, Jamie Lennox wrote:
> >
> >
> > - Original Message -
> >> From: "Jamie Lennox"  > To: "OpenStack
> >> Development Mailing List (not for usage questions)"
> >>  > Sent: Tuesday, 11
> August, 2015
> >> 10:09:33 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >> Federated Login
> >>
> >>
> >>
> >> - Original Message -
> >>> From: "David Chadwick"  > To:
> >>> openstack-dev@lists.openstack.org
>  Sent: Tuesday, 11 August,
> 2015
> >>> 12:50:21 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >>> Federated Login
> >>>
> >>>
> >>>
> >>> On 10/08/2015 01:53, Jamie Lennox wrote:
> 
> 
>  - Original Message -
> > From: "David Chadwick"  > To:
> > openstack-dev@lists.openstack.org
>  Sent: Sunday, August 9,
> > 2015 12:29:49 AM Subject: Re: [openstack-dev] [Keystone]
> > [Horizon] Federated Login
> >
> > Hi Jamie
> >
> > nice presentation, thanks for sharing it. I have forwarded it
> > to my students working on federation aspects of Horizon.
> >
> > About public federated cloud access, the way you envisage it,
> > i.e. that every user will have his own tailored (subdomain)
> > URL to the SP is not how it works in the real world today.
> > SPs typically provide one URL, which everyone from every IdP
> > uses, so that no matter which browser you are using, from
> > wherever you are in the world, you can access the SP (via
> > your IdP). The only thing the user needs to know, is the name
> > of his IdP, in order to correctly choose it.
> >
> > So discovery of all available IdPs is needed. You are correct
> > in saying that Shib supports a separate discovery service
> > (WAYF), but Horizon can also play this role, by listing the
> > IdPs for the user. This is the mod that my student is making
> > to Horizon, by adding type ahead searching.
> 
>  So my point at the moment is that unless there's something i'm
>  missing in the way shib/mellon discovery works is that horizon
>  can't. Because we forward to a common websso entry point there
>  is no way (i know) for the users selection in horizon to be
>  forwarded to keystone. You would still need a custom "select
>  your idp" discovery page in front of keystone. I'm not sure if
>  this addition is part of your students work, it just hasn't
>  been mentioned yet.
> 
> > About your proposed discovery mod, surely this seems to be
> > going in the wrong direction. A common entry point to
> > Keystone for all IdPs, as we have now with WebSSO, seems to
> > be preferable to separate entry points per IdP. Which high
> > street shop has separate doors for each user? Or have I
> > misunderstood the purpose of your mod?
> 
>  The purpose of the mod is purely to bypass the need to have a
>  shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2.
>  This page is currently required to allow a user to select their
>  idp (presumably from the ones supported by keystone) and
>  redirect to that IDPs specific login page.
> >>>
> >>> There are two functionalities that are required: a) Horizon
> >>> finding the redirection login URL of the IdP chosen by the user
> >>> b) Keystone finding which IdP was used for login.
> >>>
> >>> The second is already done by Apache telling Keystone in the
> >>> header field.
> >>>
> >>> The first is part of the metadata of the IdP, and Keystone should
> >>> make this available to Horizon via an API call. Ideally when
> >>> Horizon calls Keystone for the list of trusted IdPs, then the
> >>> user friendly name of the IdP (to be displayed to the user) and
> >>> the login page URL should be returned. Then Horizon can present
> >>> the user friendly list to the user, get the login URL that
> >>> matches this, then redirect the user to the IdP telling the IdP
> >>> the common callback URL of Keystone.
> >>
> >> So my understanding was that this wasn't possible. Because we want
> >> to have keystone be the registered service provider and receive the
> >> returned SAML assertions the login redirect must be issued from
> >> keystone and not horizo

Re: [openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-12 Thread Mike Perez
On Wed, Aug 12, 2015 at 1:13 PM, Matt Riedemann
 wrote:
> Bug reported here:
>
> https://bugs.launchpad.net/taskflow/+bug/1484267
>
> We need a 0.6.2 release of taskflow from stable/juno with the g-r caps (for
> networkx specifically) to unblock the cinder py26 job in stable/juno.

Josh Harlow is on vacation.

I asked in #openstack-state-management channel who else can do a
release, but haven't heard back from anyone yet.

--
Mike Perez



[openstack-dev] [app-catalog] No IRC meeting this week

2015-08-12 Thread Christopher Aedo
We are planning to push the meeting until next week.  If anyone has
any specific topic they would like to discuss though, please respond
here and we can hold the meeting as normally planned.  Otherwise, join
us on IRC (#openstack-app-catalog) any time!

-Christopher



[openstack-dev] [keystone] Liberty SPFE Request - IDP Specific WebSSO

2015-08-12 Thread Lance Bragstad
Hey all,


I'd like to propose a spec proposal freeze exception for IDP Specific
WebSSO [0].

This topic has been discussed at length on the mailing list [1], where
this spec has been referenced as a possible solution [2]. This would allow
for multiple Identity Providers to use the same protocol. As described on
the mailing list, this proposal would help with the public cloud cases for
federated authentication workflows, where Identity Providers can't be
directly exposed to users.

The flow would look similar to what we already do for federated
authentication [3], but it includes adding a call in step 3. Most of the
code for step 3 already exists in Keystone; it would more or less be a
matter of adding it to the path.


Thanks!


[0] https://review.openstack.org/#/c/199339/2
[1]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071131.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-August/071571.html
[3] http://goo.gl/lLbvE1


Re: [openstack-dev] [cinder] Brocade CI

2015-08-12 Thread Nagendra Jaladanki
Mike,

Thanks for your feedback and suggestions. I sent my response yesterday,
but it looks like it didn't get posted on lists.openstack.org, so I am
posting it here again.

We reviewed your comments and identified the following issues; some are
fixed and fix plans are in progress for the rest:

1) Not posting success or failure
 The Brocade CI is a non-voting CI. It posts comments for build success
or failure, but the report tool is not seeing them. We are working on
correcting this.
2) Not posting a result link to view logs.
   We could not find any cases where the CI failed to post a link to the
logs in the generated report. If you have any specific cases where it
failed to post a logs link, please share them with us. We did, however,
see that the CI posted no comment at all for some review patch sets; we
are root-causing that issue.
3) Not consistently doing runs.
   There were planned downtimes during which the CI did not post. We also
observed that the CI was not posting failures in some cases where it
failed due to non-OpenStack issues. We have corrected this; the CI
should now post results, either success or failure, for all patch sets.

We are also doing the following:
- Enhance the message format to be in line with other CIs.
- Closely monitor incoming Jenkins requests versus outgoing builds and
correct any discrepancies.

Once again, thanks for your feedback and suggestions. We will continue
to post updates to this list.

Thanks & Regards,

Nagendra Rao Jaladanki

Manager, Software Engineering Manageability Brocade

130 Holger Way, San Jose, CA 95134

On Sun, Aug 9, 2015 at 5:34 PM, Mike Perez  wrote:

> People have asked me at the Cinder midcycle sprint to look at the Brocade
> CI
> to:
>
> 1) Keep the zone manager driver in Liberty.
> 2) Consider approving additional specs that were submitted before the
>    deadline.
>
> Here are the current problems with the last 100 runs [1]:
>
> 1) Not posting success or failure.
> 2) Not posting a result link to view logs.
> 3) Not consistently doing runs. If you compare with other CIs, there
>    are plenty missing in a day.
>
> This CI does not follow the guidelines [2]. Please get help [3].
>
> [1] - http://paste.openstack.org/show/412316/
> [2] -
> http://docs.openstack.org/infra/system-config/third_party.html#requirements
> [3] -
> https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Questions
>
> --
> Mike Perez
>
>


Re: [openstack-dev] [stable][cinder] taskflow 0.6.1 breaking cinder py26 in stable/juno

2015-08-12 Thread Robert Collins
On 13 August 2015 at 10:31, Mike Perez  wrote:
> On Wed, Aug 12, 2015 at 1:13 PM, Matt Riedemann
>  wrote:
>> Bug reported here:
>>
>> https://bugs.launchpad.net/taskflow/+bug/1484267
>>
>> We need a 0.6.2 release of taskflow from stable/juno with the g-r caps (for
>> networkx specifically) to unblock the cinder py26 job in stable/juno.
>
> Josh Harlow is on vacation.
>
> I asked in #openstack-state-management channel who else can do a
> release, but haven't heard back from anyone yet.

The library releases team manages all oslo releases; submit a proposed
release to openstack/releases. I need to pop out shortly but will check
in during my evening to see about getting the release tagged. If Dims or
Doug are around now they can do it too, obviously :)

-Rob


-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



[openstack-dev] [Compass] [Meeting] Weekly Meeting Agenda

2015-08-12 Thread Weidong Shao
On openstack-meeting-4, starting soon (6 PDT today)

Today's tentative agenda:

1) Ansible playbook and upstream
2) Blueprint review

Weidong


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread Jamie Lennox


- Original Message -
> From: "David Chadwick" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, 13 August, 2015 3:06:46 AM
> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> 
> 
> 
> On 11/08/2015 01:46, Jamie Lennox wrote:
> > 
> > 
> > - Original Message -
> >> From: "Jamie Lennox"  To: "OpenStack
> >> Development Mailing List (not for usage questions)"
> >>  Sent: Tuesday, 11 August, 2015
> >> 10:09:33 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >> Federated Login
> >> 
> >> 
> >> 
> >> - Original Message -
> >>> From: "David Chadwick"  To:
> >>> openstack-dev@lists.openstack.org Sent: Tuesday, 11 August, 2015
> >>> 12:50:21 AM Subject: Re: [openstack-dev] [Keystone] [Horizon]
> >>> Federated Login
> >>> 
> >>> 
> >>> 
> >>> On 10/08/2015 01:53, Jamie Lennox wrote:
>  
>  
>  - Original Message -
> > From: "David Chadwick"  To:
> > openstack-dev@lists.openstack.org Sent: Sunday, August 9,
> > 2015 12:29:49 AM Subject: Re: [openstack-dev] [Keystone]
> > [Horizon] Federated Login
> > 
> > Hi Jamie
> > 
> > nice presentation, thanks for sharing it. I have forwarded it
> > to my students working on federation aspects of Horizon.
> > 
> > About public federated cloud access, the way you envisage it,
> > i.e. that every user will have his own tailored (subdomain)
> > URL to the SP is not how it works in the real world today.
> > SPs typically provide one URL, which everyone from every IdP
> > uses, so that no matter which browser you are using, from
> > wherever you are in the world, you can access the SP (via
> > your IdP). The only thing the user needs to know, is the name
> > of his IdP, in order to correctly choose it.
> > 
> > So discovery of all available IdPs is needed. You are correct
> > in saying that Shib supports a separate discovery service
> > (WAYF), but Horizon can also play this role, by listing the
> > IdPs for the user. This is the mod that my student is making
> > to Horizon, by adding type ahead searching.
>  
>  So my point at the moment is that unless there's something i'm
>  missing in the way shib/mellon discovery works is that horizon
>  can't. Because we forward to a common websso entry point there
>  is no way (i know) for the users selection in horizon to be
>  forwarded to keystone. You would still need a custom "select
>  your idp" discovery page in front of keystone. I'm not sure if
>  this addition is part of your students work, it just hasn't
>  been mentioned yet.
>  
> > About your proposed discovery mod, surely this seems to be
> > going in the wrong direction. A common entry point to
> > Keystone for all IdPs, as we have now with WebSSO, seems to
> > be preferable to separate entry points per IdP. Which high
> > street shop has separate doors for each user? Or have I
> > misunderstood the purpose of your mod?
>  
>  The purpose of the mod is purely to bypass the need to have a
>  shib/mellon discovery page on /v3/OS-FEDERATION/websso/saml2.
>  This page is currently required to allow a user to select their
>  idp (presumably from the ones supported by keystone) and
>  redirect to that IDPs specific login page.
> >>> 
> >>> There are two functionalities that are required: a) Horizon
> >>> finding the redirection login URL of the IdP chosen by the user
> >>> b) Keystone finding which IdP was used for login.
> >>> 
> >>> The second is already done by Apache telling Keystone in the
> >>> header field.
> >>> 
> >>> The first is part of the metadata of the IdP, and Keystone should
> >>> make this available to Horizon via an API call. Ideally when
> >>> Horizon calls Keystone for the list of trusted IdPs, then the
> >>> user friendly name of the IdP (to be displayed to the user) and
> >>> the login page URL should be returned. Then Horizon can present
> >>> the user friendly list to the user, get the login URL that
> >>> matches this, then redirect the user to the IdP telling the IdP
> >>> the common callback URL of Keystone.
> >> 
> >> So my understanding was that this wasn't possible. Because we want
> >> to have keystone be the registered service provider and receive the
> >> returned SAML assertions the login redirect must be issued from
> >> keystone and not horizon. Is it possible to issue a login request
> >> from horizon that returns the response to keystone? This seems
> >> dodgy to me but may be possible if all the trust relationships are
> >> set up.
> > 
> > Note also that currently this metadata including the login URL is not
> > known by keystone. It's controlled by apache in the metadata xml
> > files so we would have to add this information to keystone. Obviously
> > this is doable just extra config setup that would require double
> > handling of this URL.
> 
> My idea is to use Horizon as the WAYF/

[openstack-dev] [fuel] Gerrit dashboard update: ready for core reviewers, disagreements

2015-08-12 Thread Dmitry Borodaenko
Fuelers,

I've proposed an update for the Fuel gerrit dashboard:
https://review.openstack.org/212231

New "Ready for Core Reviewers" section encourages peer review by non-cores
and allows cores to focus on reviews that already have +1 from other
reviews and from CI.

New "Disagreements" section highlights reviews that have both positive and
negative code review votes. This worked out pretty well for Puppet
OpenStack, lets try to use it in Fuel, too.

The remaining sections are rearranged to exclude commits that match the two
new sections.

-- 
Dmitry Borodaenko


Re: [openstack-dev] [Keystone] [Horizon] Federated Login

2015-08-12 Thread Jamie Lennox


- Original Message -
> From: "David Chadwick" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, 13 August, 2015 7:46:54 AM
> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> 
> Hi Jamie
> 
> I have been thinking some more about your Coke and Pepsi use case
> example, and I think it is a somewhat spurious example, for the
> following reasons:
> 
> 1. If Coke and Pepsi are members of the SAME federation, then they trust
> each other (by definition). Therefore they would not and could not
> object to being listed as alternative IdPs in this federation.
> 
> 2. If Coke and Pepsi are in different federations because they don't
> trust each other, but they have the same service provider, then their
> service provider would be a member of both federations. In this case,
> the SP would provide different access points to the different
> federations, and neither Coke nor Pepsi would be aware of each other.
> 
> regards
> 
> David

So yes, my point here relates to number 2: providing multitenancy in a way
that you can't see who else is available. In talking with some of the
keystone people, this is essentially what we've come to (as I think I
mentioned earlier): you would need to provide a different access point to
different companies to keep this information private. It has the side
advantage for the public cloud folks of providing whitelabelling for
horizon.

The question then, once you have multiple access points per customer (not
user), is how to list IdPs that are associated with that customer. The
example I had earlier was tagging: you could tag a horizon instance
(probably doesn't need to be a whole instance, just a login page) with a
value like COKE, and when you list IdPs from keystone you list with
tag=COKE to find out what should show in horizon. This would allow common
IdPs like google to be reused.

This is why I was saying that public/private may not be fine-grained
enough. It may also not be a realistic concern: if we are talking about a
portal per customer, does the cost of rebooting horizon to statically add
a new IdP to the local_config matter? This is presumably a rare
operation.

I think the answer has been for a while that idp listing is going to need to be 
configurable from horizon because we already have a case for list nothing, list 
everything, and use this static list, so if in future we find we need to add 
something more complex like tagging it's another option we can consider then.
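A minimal sketch of the tagging idea above, assuming a hypothetical `tags` attribute on each IdP record (keystone has no such attribute today; the IdP ids and tags here are invented):

```python
# Hypothetical data: what a keystone IdP listing might contain if each
# IdP carried deployer-assigned tags.
IDPS = [
    {"id": "coke-corp-saml", "enabled": True, "tags": {"COKE"}},
    {"id": "pepsi-corp-saml", "enabled": True, "tags": {"PEPSI"}},
    {"id": "google-oidc", "enabled": True, "tags": {"COKE", "PEPSI"}},
]

def list_idps(tag=None):
    """Return IdP ids visible to a dashboard branded with `tag`.

    With no tag, everything enabled is listed (today's behaviour); with a
    tag, only IdPs the deployer associated with that customer show up, so
    COKE's portal never reveals PEPSI's dedicated IdP, while a shared IdP
    like google can carry both tags and be reused.
    """
    return sorted(i["id"] for i in IDPS
                  if i["enabled"] and (tag is None or tag in i["tags"]))

print(list_idps("COKE"))   # ['coke-corp-saml', 'google-oidc']
```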

> 
> On 06/08/2015 00:54, Jamie Lennox wrote:
> > 
> > 
> > - Original Message -
> >> From: "David Lyle" 
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Sent: Thursday, August 6, 2015 5:52:40 AM
> >> Subject: Re: [openstack-dev] [Keystone] [Horizon] Federated Login
> >>
> >> Forcing Horizon to duplicate Keystone settings just makes everything much
> >> harder to configure and much more fragile. Exposing whitelisted, or all,
> >> IdPs makes much more sense.
> >>
> >> On Wed, Aug 5, 2015 at 1:33 PM, Dolph Mathews < dolph.math...@gmail.com >
> >> wrote:
> >>
> >>
> >>
> >> On Wed, Aug 5, 2015 at 1:02 PM, Steve Martinelli < steve...@ca.ibm.com >
> >> wrote:
> >>
> >>
> >>
> >>
> >>
> >> Some folks said that they'd prefer not to list all associated idps, which
> >> i
> >> can understand.
> >> Why?
> > 
> > So the case i heard and i think is fairly reasonable is providing corporate
> > logins to a public cloud. Taking the canonical coke/pepsi example if i'm
> > coke, i get asked to login to this public cloud i then have to scroll
> > through all the providers to find the COKE.COM domain and i can see for
> > example that PEPSI.COM is also providing logins to this cloud. Ignoring
> > the corporate privacy implications this list has the potential to get
> > long. Think about for example how you can do a corporate login to gmail,
> > you certainly don't pick from a list of auth providers for gmail - there
> > would be thousands.
> > 
> > My understanding of the usage then would be that coke would have been
> > provided a (possibly branded) dedicated horizon that backed onto a public
> > cloud and that i could then from horizon say that it's only allowed access
> > to the COKE.COM domain (because the UX for inputting a domain at login is
> > not great so per customer dashboards i think make sense) and that for this
> > instance of horizon i want to show the 3 or 4 login providers that
> > COKE.COM is going to allow.
> > 
> > Anyway you want to list or whitelist that in keystone is going to involve
> > some form of IdP tagging system where we have to say which set of idps we
> > want in this case and i don't think we should.
> > 
> > @David - when you add a new IdP to the university network are you having to
> > provide a new mapping each time? I know the CERN answer to this with
> > websso was to essentially group many IdPs behind the same keystone idp
> > because they will all produce the same assertion values and c

Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Clint Byrum
Excerpts from Dan Smith's message of 2015-08-12 23:12:23 +0800:
> > If OTOH we are referring to the width of the columns and the join is
> > such that you're going to get the same A identity over and over again, 
> > if you join A and B you get a "wide" row with all of A and B with a very
> > large amount of redundant data sent over the wire again and again (note
> > that the database drivers available to us in Python always send all rows
> > and columns over the wire unconditionally, whether or not we fetch them
> > in application code).
> 
> Yep, it was this. N instances times M rows of metadata each. If you pull
> 100 instances and they each have 30 rows of system metadata, that's a
> lot of data, and most of it is the instance being repeated 30 times for
> each metadata row. When we first released code doing this, a prominent
> host immediately raised the red flag because their DB traffic shot
> through the roof.
> 

In the past I've taken a different approach to problematic one-to-many
relationships and have made the metadata a binary JSON blob.
Is there some reason that won't work? Of course, this type of thing
can run into concurrency issues on update, but these can be handled by
SELECT..FOR UPDATE + intelligent retry on deadlock. Since the metadata
is nearly always queried as a whole, this seems like a valid approach
that would keep DB traffic low but also ease the burden of reassembling
the collection in nova-api.
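A back-of-envelope sketch of the redundancy Dan describes; the row widths are assumed values for illustration, not measured nova numbers:

```python
# Cost of a joined load vs. a split (two-query) load for Dan's example.
instances = 100
meta_rows = 30            # system_metadata rows per instance
instance_width = 2000     # bytes per instance row (assumed)
meta_width = 100          # bytes per metadata row (assumed)

# JOIN: every metadata row drags a full copy of its instance row along,
# since the driver sends every column of every row over the wire.
joined = instances * meta_rows * (instance_width + meta_width)

# Split: fetch instances once, then fetch metadata keyed by instance_uuid.
split = instances * instance_width + instances * meta_rows * meta_width

print(joined, split, joined / split)   # 6300000 500000 12.6
```

Under these assumptions the joined load moves over 12x the bytes, which is consistent with the DB-traffic spike described above.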



Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Mike Bayer



On 8/12/15 10:29 PM, Clint Byrum wrote:
> Excerpts from Dan Smith's message of 2015-08-12 23:12:23 +0800:
>>> If OTOH we are referring to the width of the columns and the join is
>>> such that you're going to get the same A identity over and over again,
>>> if you join A and B you get a "wide" row with all of A and B with a very
>>> large amount of redundant data sent over the wire again and again (note
>>> that the database drivers available to us in Python always send all rows
>>> and columns over the wire unconditionally, whether or not we fetch them
>>> in application code).
>> Yep, it was this. N instances times M rows of metadata each. If you pull
>> 100 instances and they each have 30 rows of system metadata, that's a
>> lot of data, and most of it is the instance being repeated 30 times for
>> each metadata row. When we first released code doing this, a prominent
>> host immediately raised the red flag because their DB traffic shot
>> through the roof.
> In the past I've taken a different approach to problematic one to
> many relationships and have made the metadata a binary JSON blob.
> Is there some reason that won't work? Of course, this type of thing
> can run into concurrency issues on update, but these can be handled by
> SELECT..FOR UPDATE + intelligent retry on deadlock. Since the metadata
> is nearly always queried as a whole, this seems like a valid approach
> that would keep DB traffic low but also ease the burden of reassembling
> the collection in nova-api.


JSON blobs have the disadvantages that you are piggybacking an entirely 
different storage model on top of the relational one, losing all the 
features you might like about the relational model like rich datatypes 
(I understand our JSON decoders trip up on plain datetimes?), insert 
defaults, nullability constraints, a fixed, predefined schema that can 
be altered in a controlled, all-or-nothing way, efficient storage 
characteristics, and of course reasonable querying capabilities.   They 
are useful IMO only for small sections of data that are amenable to 
ad-hoc changes in schema like simple bags of key-value pairs containing 
miscellaneous features.







Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Dan Smith
> In the past I've taken a different approach to problematic one to 
> many relationships and have made the metadata a binary JSON blob. Is
> there some reason that won't work?

We have done that for various pieces of data that were previously in
system_metadata. Where this breaks down is if you need to be able to
select instances based on keys in the metadata blob, which we do in
various scheduling operations (certainly for aggregate metadata, at
least). I *believe* we have to leave metadata as row-based for that
reason (although honestly I don't remember the details), and probably
system_metadata as well, but I'd have to survey what is left in there.
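[To make the point above concrete, here is a toy editor's example — sqlite with an invented schema, not Nova's actual tables — of the kind of scheduler-side filter that only works while metadata stays row-based; the MySQL deployed at the time had no JSON operators, so a blob would force this filtering into Python:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE instances (uuid TEXT PRIMARY KEY);
    CREATE TABLE instance_metadata (
        instance_uuid TEXT, key TEXT, value TEXT);
    INSERT INTO instances VALUES ('i-1'), ('i-2');
    INSERT INTO instance_metadata VALUES
        ('i-1', 'group', 'web'), ('i-2', 'group', 'db');
""")

# With row-based metadata the filter stays in SQL; with a JSON blob every
# instance row would have to be fetched and filtered in application code.
rows = conn.execute("""
    SELECT i.uuid FROM instances i
    JOIN instance_metadata m ON m.instance_uuid = i.uuid
    WHERE m.key = 'group' AND m.value = 'web'
""").fetchall()
print(rows)  # → [('i-1',)]
```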

> Since the metadata is nearly always queried as a whole, this seems
> like a valid approach that would keep DB traffic low but also ease
> the burden of reassembling the collection in nova-api.

'Nearly' being the key word there. We just got done moving all of the
flavor information we used to stash in system_metadata to a JSON blob in
the database. That cuts 10-30 rows of system_metadata for each instance,
depending on the state, and gives us a thing we can selectively join
with instance for a single load with little overhead. We might be able
to get away with going back to fully joining system_metadata given the
reduction in size, but we honestly don't even need to query it as often
after the flavor-ectomy, so I'm not sure it's worth it. Further, after
the explosion of system_metadata which caused us to stop joining it in
the first place, it was realized that a user could generate a lot of
traffic by exhausting their quota of metadata items (which they
control), so we probably want to join user metadata in python anyway for
that reason.

So I guess the summary is: I think with flavor data out of the path, the
major offender is gone, such that this becomes extremely low on the
priority list.

--Dan





Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-12 Thread 王华
any comments on this?

On Wed, Aug 12, 2015 at 2:50 PM, 王华  wrote:

> Hi All,
>
> In order to prevent race conditions due to multiple conductors, my
> solution is as below:
> 1. Remove the db operation in bay_update to prevent race conditions. The
> stack operation is atomic and the db operation is atomic, but the two
> operations together are not, so the data in the db may be wrong.
> 2. Sync up stack status and stack parameters (currently only node_count)
> from heat by periodic tasks. bay_update can change stack parameters, so we
> need to sync them up.
> 3. Remove the heat poller, because we have periodic tasks.
>
> To sync up stack parameters from heat, we need to show stacks using the
> admin_context, but heat doesn't allow showing stacks in another tenant. If
> we want to show stacks in another tenant, we need to store the auth context
> for every bay. That is a problem. Even if we store the auth context, there
> is a timeout for the token. The best way, I think, is to let heat allow the
> admin user to show stacks in another tenant.
>
> Do you have a better solution or any improvement for my solution?
>
> Regards,
> Wanghua
>
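[Editor's sketch of the proposed periodic sync — purely illustrative; FakeHeatClient, sync_bays and the bay dicts are invented stand-ins, not Magnum's real conductor code:]

```python
# Illustrative only: FakeHeatClient and the bay dicts are invented stand-ins
# for Magnum's conductor objects, not real Magnum code.
class FakeHeatClient:
    def __init__(self, stacks):
        self._stacks = stacks

    def get(self, stack_id):
        return self._stacks[stack_id]

def sync_bays(bays, heat):
    """Periodic task: treat Heat as the single source of truth for bay state."""
    for bay in bays:
        stack = heat.get(bay["stack_id"])
        bay["status"] = stack["status"]                        # sync status
        bay["node_count"] = stack["parameters"]["node_count"]  # sync params
    return bays

bays = [{"stack_id": "s1", "status": "CREATE_IN_PROGRESS", "node_count": 1}]
heat = FakeHeatClient({"s1": {"status": "CREATE_COMPLETE",
                              "parameters": {"node_count": 3}}})
print(sync_bays(bays, heat))
```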


Re: [openstack-dev] In memory joins in Nova

2015-08-12 Thread Clint Byrum
Excerpts from Mike Bayer's message of 2015-08-13 11:03:32 +0800:
> 
> On 8/12/15 10:29 PM, Clint Byrum wrote:
> > Excerpts from Dan Smith's message of 2015-08-12 23:12:23 +0800:
> >>> If OTOH we are referring to the width of the columns and the join is
> >>> such that you're going to get the same A identity over and over again,
> >>> if you join A and B you get a "wide" row with all of A and B with a very
> >>> large amount of redundant data sent over the wire again and again (note
> >>> that the database drivers available to us in Python always send all rows
> >>> and columns over the wire unconditionally, whether or not we fetch them
> >>> in application code).
> >> Yep, it was this. N instances times M rows of metadata each. If you pull
> >> 100 instances and they each have 30 rows of system metadata, that's a
> >> lot of data, and most of it is the instance being repeated 30 times for
> >> each metadata row. When we first released code doing this, a prominent
> >> host immediately raised the red flag because their DB traffic shot
> >> through the roof.
> >>
> > In the past I've taken a different approach to problematic one to
> > many relationships and have made the metadata a binary JSON blob.
> > Is there some reason that won't work? Of course, this type of thing
> > can run into concurrency issues on update, but these can be handled by
> > SELECT..FOR UPDATE + intelligent retry on deadlock. Since the metadata
> > is nearly always queried as a whole, this seems like a valid approach
> > that would keep DB traffic low but also ease the burden of reassembling
> > the collection in nova-api.
> 
> JSON blobs have the disadvantages that you are piggybacking an entirely 
> different storage model on top of the relational one, losing all the 
> features you might like about the relational model like rich datatypes 
> (I understand our JSON decoders trip up on plain datetimes?), insert 
> defaults, nullability constraints, a fixed, predefined schema that can 
> be altered in a controlled, all-or-nothing way, efficient storage 
> characteristics, and of course reasonable querying capabilities.   They 
> are useful IMO only for small sections of data that are amenable to 
> ad-hoc changes in schema like simple bags of key-value pairs containing 
> miscellaneous features.
> 

Agreed on all points! And metadata for instances is exactly that:
a simple bag of key/value strings that is almost always queried and
delivered as a whole.



Re: [openstack-dev] [fuel] Gerrit dashboard update: ready for core reviewers, disagreements

2015-08-12 Thread Cameron Seader
test

On 08/12/2015 07:16 PM, Dmitry Borodaenko wrote:
> Fuelers,
>
> I've proposed an update for the Fuel gerrit dashboard:
> https://review.openstack.org/212231
>
> New "Ready for Core Reviewers" section encourages peer review by
> non-cores and allows cores to focus on reviews that already have +1
> from other reviews and from CI.
>
> New "Disagreements" section highlights reviews that have both positive
> and negative code review votes. This worked out pretty well for Puppet
> OpenStack, lets try to use it in Fuel, too.
>
> The remaining sections are rearranged to exclude commits that match
> the two new sections.
>
> -- 
> Dmitry Borodaenko
>
>

-- 
Cameron Seader
Systems Engineer
SUSE
c...@suse.com
(w)208-572-0095
(M)208-420-2167

Register for SUSECon 2015
www.susecon.com




Re: [openstack-dev] [trove]Implement the API to create masterinstance and slave instances with one request

2015-08-12 Thread 陈迪豪
Thanks Doug.
 
It's really helpful and we need this feature as well. Can you point me to the
blueprint or patch for this?


I think we will add a "--replica-count" parameter to the trove create request,
so trove-api will create the trove instance (asynchronously creating the nova
instance) and then create some replica trove instances (asynchronously
creating nova instances). This is really useful for web front-end developers,
who want to create the master and replica instances at the same time (they
don't want to send multiple requests themselves).


Regards,
tobe from UnitedStack 


-- Original --
From:  "Doug Shelley";
Date:  Wed, Aug 12, 2015 10:21 PM
To:  "openstack-dev@lists."; 

Subject:  Re: [openstack-dev] [trove]Implement the API to create masterinstance 
and slave instances with one request

 
   As of Kilo, you can add a --replica-count parameter to trove create
--replica-of to have it spin up multiple mysql slaves simultaneously. This same
construct is in the python/REST API as well. I realize that you still need to
create a master first, but thought I would point this out as it might be
helpful to you.
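[For reference, the Kilo-era CLI flow Doug describes looks roughly like this. The instance names, flavor/size values and the `trove` stub are made up — the stub only echoes the command so the sketch runs anywhere; drop it to use the real python-troveclient:]

```shell
# Stub so this sketch runs without a Trove deployment; remove it to use
# the real client.
trove() { echo "trove $*"; }

# 1. Create the master first (still required, as noted above).
trove create mysql-master 3 --size 5

# 2. Spin up three replicas of it with a single request.
trove create mysql-slave 3 --size 5 \
      --replica-of mysql-master --replica-count 3
```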
 
 
 
 
 Regards,
 Doug
 
 
 
 
   From: 陈迪豪 
 Reply-To: OpenStack List 
 Date: Tuesday, August 11, 2015 at 11:45 PM
 To: OpenStack List 
 Subject: [openstack-dev] [trove]Implement the API to create master instance 
and slave instances with one request
 
 
 
   Now we can create mysql master instance and slave instance one by one.
 
 
 It would be much better to allow users to create one master instance and
multiple slave instances with one request.
 
 
 Any suggestions about this, the API design or the implementation?


Re: [openstack-dev] [neutron][qos] request to merge feature/qos back into master

2015-08-12 Thread Andreas Jaeger

On 08/12/2015 09:55 PM, Ihar Hrachyshka wrote:

Hi all,

with great pleasure, I want to request a coordinated review for
merging feature/qos branch back to master:

https://review.openstack.org/#/c/212170/


Great!

Please also send a patch for project-config to remove the special 
handling of that branch...


thanks,
Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Dilip Upmanyu, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: [openstack-dev] [magnum]problems for horizontal scale

2015-08-12 Thread Kai Qiang Wu
Hi Hua,

I have some comments about this:

A>
Removing the heat poller can be a way, but its logic needs to be preserved
without becoming a performance burden:
1) The old heat poller is a quick loop with a fixed interval, so that stack
status updates are reflected in the bay status quickly.
2) The periodic task is a dynamic loop with a long period; it was added to
handle stack creation timeouts after loop 1) exits, and this loop 2) also
helps update the stack and covers the conductor crash issue.

It would be ideal to loop over the stacks in one place, but the periodic
task needs to consider whether it really only needs to loop over stacks in
IN_PROGRESS status, and what the loop interval should be (60s or shorter,
for loop performance).

Does heat have other status transition paths, like delete_failed -->
(status reset) --> becomes OK, etc.?



B> I did not understand your suggestion about removing the db operation in
the bay_update case. bay_update includes update_stack and poll_and_check
(which is in the heat poller); if you move the heat poller to a periodic task
(as you said in your point 3), it still needs db operations.



C> Allowing the admin user to show stacks in another tenant seems OK. Have
other projects tried this before? Is it a reasonable case for customers?



Thanks


Best Wishes,

Kai Qiang Wu (吴开强  Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wk...@cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
 No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193

Follow your heart. You are miracle!



From:   王华 
To: openstack-dev@lists.openstack.org
Date:   08/13/2015 11:31 AM
Subject:Re: [openstack-dev] [magnum]problems for horizontal scale



any comments on this?

On Wed, Aug 12, 2015 at 2:50 PM, 王华  wrote:
  Hi All,

  In order to prevent race conditions due to multiple conductors, my
  solution is as below:
  1. Remove the db operation in bay_update to prevent race conditions. The
  stack operation is atomic and the db operation is atomic, but the two
  operations together are not, so the data in the db may be wrong.
  2. Sync up stack status and stack parameters (currently only node_count)
  from heat by periodic tasks. bay_update can change stack parameters, so
  we need to sync them up.
  3. Remove the heat poller, because we have periodic tasks.

  To sync up stack parameters from heat, we need to show stacks using the
  admin_context, but heat doesn't allow showing stacks in another tenant.
  If we want to show stacks in another tenant, we need to store the auth
  context for every bay. That is a problem. Even if we store the auth
  context, there is a timeout for the token. The best way, I think, is to
  let heat allow the admin user to show stacks in another tenant.

  Do you have a better solution or any improvement for my solution?

  Regards,
  Wanghua


Re: [openstack-dev] [neutron] I am pleased to propose two new Neutron API/DB/RPC core reviewers!

2015-08-12 Thread Akihiro Motoki
+1 for both.

2015-08-12 22:45 GMT+09:00 Kyle Mestery :

> It gives me great pleasure to propose Russell Bryant and Brandon Logan as
> core reviewers in the API/DB/RPC area of Neutron. Russell and Brandon have
> both been incredible contributors to Neutron for a while now. Their
> expertise has been particularly helpful in the area they are being proposed
> in. Their review stats [1] place them both comfortably in the range of
> existing Neutron core reviewers. I expect them to continue working with all
> community members to drive Neutron forward for the rest of Liberty and into
> Mitaka.
>
> Existing DB/API/RPC core reviewers (and other Neutron core reviewers),
> please vote +1/-1 for the addition of Russell and Brandon.
>
> Thanks!
> Kyle
>
> [1] http://stackalytics.com/report/contribution/neutron-group/90
>
>
>


Re: [openstack-dev] [Compass] Call for contributors

2015-08-12 Thread Jesse Pretorius
On 12 August 2015 at 17:23, Weidong Shao  wrote:

>
> Compass is not new to OpenStack community. We started it as an OpenStack
> deployment tool at the HongKong summit. We then showcased it at the Paris
> summit.
>
> However, the project has gone through some changes recently. We'd like to
> re-introduce Compass and welcome new developers to expand our efforts,
> share in its design, and advance its usefulness to the OpenStack community.
>
> We intend to follow the 4 openness guidelines and enter the "Big Tent". We
> have had some feedback from TC reviewers and others and realize we have
> some work to do to get there. More developers interested in working on the
> project will get us there easier.
>
> Besides the openness goals, there is critical developer work we need to do
> to meet the OpenStack ones. For example, we have forked Chef cookbooks, and
> Ansible written from scratch for OpenStack deployment. We need to merge the
> Compass Ansible playbooks back into the openstack upstream repo
> (os-ansible-deployment).
>
> We also need to reach out to other related projects, such as Ironic, to
> make sure that where our efforts overlap, we provide added value, not
> different ways of doing the same thing.
>
> This is a lot of work that we think will add to the OpenStack community.
>
>
>- The project wiki page is at https://wiki.openstack.org/wiki/Compass
>- The launchpad is: https://launchpad.net/compass
>- The weekly IRC meeting is on openstack-meeting4 0100 Thursdays UTC
>(or Wed 6pm PDT)
>- Code repo is under stackforge
>https://github.com/stackforge/compass-core
>https://github.com/stackforge/compass-web
>https://github.com/stackforge/compass-adapters
>
> Hi Weidong,

This looks like an excellent project and we (the openstack-ansible project)
would love to assist you with the integration of Compass with
openstack-ansible (aka os-ansible-deployment).

I'd like to discuss with your team how we can work together to facilitate
Compass' consumption of the playbooks/roles we produce in a suitable way,
and will try to attend the next meeting (as I seem to have missed this
week's meeting). We'd like to understand the project's needs so that we can
work towards defined goals to accommodate them, while also maintaining our
stability for other downstream consumers.

We also invite you to attend our next meeting on Thu 16:00 UTC in
#openstack-meeting-4 - details are here for reference:
https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Community_Meeting

Looking forward to working with you!

Best regards,

Jesse


[openstack-dev] [Neutron][Kuryr] - Update Status

2015-08-12 Thread Gal Sagie
Hello everyone,

I would like to give a short status update on Kuryr [1].

The project is starting to formalize. We have already conducted two IRC
meetings [2] to define the project's first goals and roadmap; check the
meeting logs and the agenda here [3].

I think we are seeing a good amount of interest in the project from the
community, and an understanding of the importance of its goals.
The project repository already contains the proxy implementation of the
libnetwork remote driver API, which is mapped to Neutron's APIs; you can
check and review the code [4].

The current topics we are discussing are: (please view the etherpads for
more information)

1) Kuryr Configuration - both the Neutron side and Docker side [5]

2) Generic VIF-Binding solution that can be used by all Neutron plugins [6]

We are trying to cooperate with and leverage the tremendous work already
done in the Magnum and Kolla projects, and to see where Kuryr and Neutron
fit together with these projects.
We have Daneyon Hansen joining our meetings, and we hope to keep up the
cooperation and introduce a solution which leverages the experience and
work done in Neutron and its implementations.

I want to welcome anyone interested in this topic to come to the meetings,
raise ideas/comments in the etherpads, and review and contribute code; we
welcome any contribution.

I would like to thank Antoni Segura Puimedo (apuimedo) for leading this
effort, and everyone who is contributing to the project.

[1] http://eavesdrop.openstack.org/#Kuryr_Project_Meeting
[2] https://launchpad.net/kuryr
[3] https://wiki.openstack.org/wiki/Meetings/Kuryr
[4] https://review.openstack.org/#/q/project:openstack/kuryr,n,z
[5] https://etherpad.openstack.org/p/kuryr-configuration
[6] https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding


[openstack-dev] [Tempest] Can we add testcase for download image v2?

2015-08-12 Thread Deore, Pranali11
Hi,

While going through the tempest code, I found that an image-download v2 API
test is missing in tempest.
Can I add an API test for it? Please suggest.

Also, glance task import API related test cases are not in tempest either.
Is it OK if I add tests for those?


Thanks



Re: [openstack-dev] [Fuel][Puppet] Keystone V2/V3 service endpoints

2015-08-12 Thread Gilles Dubreuil
Hi Matthew,

On 11/08/15 01:14, Rich Megginson wrote:
> On 08/10/2015 07:46 AM, Matthew Mosesohn wrote:
>> Sorry to everyone for bringing up this old thread, but it seems we may
>> need more openstackclient/keystone experts to settle this.
>>
>> I'm referring to the comments in https://review.openstack.org/#/c/207873/
>> Specifically comments from Richard Megginson and Gilles Dubreuil
>> indicating openstackclient behavior for v3 keystone API.
>>
>>
>> A few items seem to be under dispute:
>> 1 - Keystone should be able to accept v3 requests at
>> http://keystone-server:5000/
> 
> I don't think so.  Keystone requires the version suffix "/v2.0" or "/v3".
> 

Yes, if the public endpoint is set without a version then the service
can deal with either version.

http://paste.openstack.org/raw/412819/

That is not true for the admin endpoint (authentication is already done;
the admin service deals only with tokens), at least for now. Keystone
devs are working on it.

>> 2 - openstackclient should be able to interpret v3 requests and append
>> "v3/" to OS_AUTH_URL=http://keystone-server.5000/ or rewrite the URL
>> if it is set as
>> OS_AUTH_URL=http://keystone-server.5000/
> 
> It does, if it can determine from the given authentication arguments if
> it can do v3 or v2.0.
> 

It effectively does, again, assuming the path doesn't contain a version
number (x.x.x.x:5000)

>> 3 - All deployments require /etc/keystone/keystone.conf with a token
>> (and not simply use openrc for creating additional endpoints, users,
>> etc beyond keystone itself and an admin user)
> 
> No.  What I said about this issue was "Most people using
> puppet-keystone, and realizing Keystone resources on nodes that are not
> the Keystone node, put a /etc/keystone/keystone.conf on that node with
> the admin_token in it."
> 
> That doesn't mean the deployment requires /etc/keystone/keystone.conf. 
> It should be possible to realize Keystone resources on non-Keystone
> nodes by using ENV or openrc or other means.
> 

Agreed. Also keystone.conf is used only to bootstrap keystone
installation and create admin users, etc.


>>
>> I believe it should be possible to set v2.0 keystone OS_AUTH_URL in
>> openrc and puppet-keystone + puppet-openstacklib should be able to
>> make v3 requests sensibly by manipulating the URL.
> 
> Yes.  Because for the puppet-keystone resource providers, they are coded
> to a specific version of the api, and therefore need to be able to
> set/override the OS_IDENTITY_API_VERSION and the version suffix in the URL.
> 

No. To support both V2 and V3, the OS_AUTH_URL must not contain any version.

The less we deal with the version number the better!
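[Editor's sketch of that rule as a tiny helper — illustrative only; openstackclient's real version discovery lives in its auth plumbing and is considerably richer:]

```python
def versioned_auth_url(auth_url, api_version):
    """Append a version suffix only when OS_AUTH_URL is unversioned.

    Illustrative helper, not openstackclient's actual implementation.
    """
    base = auth_url.rstrip("/")
    if base.endswith("/v2.0") or base.endswith("/v3"):
        return base  # caller pinned a version: leave it alone
    return "%s/%s" % (base, "v3" if api_version == "3" else "v2.0")

print(versioned_auth_url("http://keystone-server:5000/", "3"))
# → http://keystone-server:5000/v3
```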

>> Additionally, creating endpoints/users/roles shouldbe possible via
>> openrc alone.
> 
> Yes.
> 

Yes, the openrc variables are used; if they are not available, then the
service token from keystone.conf is used.

>> It's not possible to write single node tests that can demonstrate this
>> functionality, which is why it probably went undetected for so long.
> 
> And since this is supported, we need tests for this.

I'm not sure what the issue is, besides the fact that keystone_puppet
doesn't generate an RC file once the admin user has been created. That is
left to be done by the composition layer, although we might want to
integrate that.

If that issue persists, then assuming the AUTH_URL is free of a version
number and an openrc is in place, we're going to need a bug number to track
the investigation.

>>
>> If anyone can speak up on these items, it could help influence the
>> outcome of this patch.
>>
>> Thank you for your time.
>>
>> Best Regards,
>> Matthew Mosesohn


Thanks,
Gilles

>>
>> On Fri, Jul 31, 2015 at 6:32 PM, Rich Megginson > > wrote:
>>
>> On 07/31/2015 07:18 AM, Matthew Mosesohn wrote:
>>
>> Jesse, thanks for raising this. Like you, I should just track
>> upstream
>> and wait for full V3 support.
>>
>> I've taken the quickest approach and written fixes to
>> puppet-openstacklib and puppet-keystone:
>> https://review.openstack.org/#/c/207873/
>> https://review.openstack.org/#/c/207890/
>>
>> and again to Fuel-Library:
>> https://review.openstack.org/#/c/207548/1
>>
>> I greatly appreciate the quick support from the community to
>> find an
>> appropriate solution. Looks like I'm just using a weird edge case
>> where we're creating users on a separate node from where
>> keystone is
>> installed and it never got thoroughly tested, but I'm happy to fix
>> bugs where I can.
>>
>>
>> Most puppet deployments either realize all keystone resources on
>> the keystone node, or drop an /etc/keystone/keystone.conf with
>> admin token onto non-keystone nodes where additional keystone
>> resources need to be realized.
>>
>>
>>
>> -Matthew
>>
>>   

Re: [openstack-dev] ][third-party-ci]Running custom code before tests

2015-08-12 Thread Eduard Matei
Hi,

I think you pointed me to the wrong file, the devstack-gate yaml (line 2201
contains "timestamps").
I need an example of how to configure tempest to use my driver.

I tried exporting the variables in the Jenkins job (before executing the
dsvm shell script), but looking at tempest.txt (the log), it shows that it
still uses the defaults. How do I override those defaults?
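[Editor's note, hedged — option names should be checked against your tempest version: devstack-gate regenerates tempest.conf on every run, so plain exports in the Jenkins job never reach it. The usual route is to have devstack write the values, or to edit tempest.conf after devstack completes, along these lines:]

```ini
# Hypothetical values; the [volume] vendor_name/storage_protocol options
# existed in tempest around 2015, but verify against tempest.conf.sample.
[volume]
vendor_name = MyVendor
storage_protocol = myproto
```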

Thanks,

Eduard