[openstack-dev] [nova][api] does validation bug-fix require microversion bump?

2015-12-20 Thread Ken'ichi Ohmichi
Hi nova-api team,

I'd like to get feedback about the way we bump microversions.

Short version:
  We found a validation bug in the Nova v2.1 API.
  To fix the bug, do we need to bump a new microversion?

Long version:
As described in the LP bug report [1], the nova v2.0 API allows a list of
server IDs in the scheduler hint "different_host" of the "create a server"
API, like:

"os:scheduler_hints": {
"different_host": [
"099b8bee-9264-48fe-a745-45b22f7ff79f",
"99644acc-8893-4656-9481-0114efdbc9b6"
]
}

on "create a server" API.
However, nova v2.1 API is handling this request as invalid because the
validation implementation way is wrong now.
Nova v2.1 API should allow the list of server-IDs for backwards compatibility.
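
For illustration, the kind of relaxed validation rule we need is roughly the
following "anyOf" schema (a minimal jsonschema sketch, not the actual code
under review):

# A minimal, illustrative sketch (not the patch in 259247) of an "anyOf"
# rule that accepts either a single server ID or a list of them.
import jsonschema

different_host = {
    "anyOf": [
        {"type": "string", "format": "uuid"},
        {"type": "array", "items": {"type": "string", "format": "uuid"}},
    ]
}

# Both the v2.0-style list and a single ID pass validation:
jsonschema.validate(["099b8bee-9264-48fe-a745-45b22f7ff79f",
                     "99644acc-8893-4656-9481-0114efdbc9b6"], different_host)
jsonschema.validate("099b8bee-9264-48fe-a745-45b22f7ff79f", different_host)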

We are trying to fix this bug in
https://review.openstack.org/#/c/259247/ , and we have a question about
how to fix it.
Even though this is a bug fix, it is also an API change, so do we need to
bump a microversion?

One purpose of microversions is to signal API changes.
If we bump one, nova can advertise the fix through that microversion.

This fix should also be applied to the stable branches to help existing
users. But if we bump a microversion on the stable branches as well, the
meaning of that microversion number would differ between clouds deployed
from different nova releases.
So we (John, Alex, and I) think we should not bump a microversion on
stable branches; but if we do that, nova cannot advertise the fix on
stable branches.

Right now my feeling is that this fix will be applied without a
microversion bump, because it is nice to avoid microversions meaning
different things on master and stable branches.
Is that fine for everyone?

Thanks
Ken Ohmichi

---
[1]: https://launchpad.net/bugs/1521928



[openstack-dev] [nova-scheduler] Scheduler sub-group meeting - Cancel 12/21

2015-12-20 Thread Dugger, Donald D
As discussed last week we will cancel this and next week's meeting.

Happy Holidays everyone.

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786



Re: [openstack-dev] [magnum]storage for docker-bootstrap

2015-12-20 Thread 王华
Adrian,

flannel_network_cidr and flannel_network_subnetlen are two parameters
needed by flannel. flannel_network_cidr is the network range for the flannel
overlay network, and flannel_network_subnetlen is the size of the subnet
assigned to each minion. When flannel starts, it needs these two parameters.
Flannel allocates a subnet from flannel_network_cidr for each minion, and the
subnets on different minions are different. Flannel's data is stored in etcd.
The BIP is the subnet created by flannel for that minion, and the MTU depends
on whether we use vxlan in flannel.

If we use a single docker daemon, we need to start the docker daemon without
a BIP first, then run flannel and etcd to generate the BIP. After that, we
need to kill the previous docker daemon and start a new docker daemon with
the BIP, and then run etcd and flannel on it again.
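
For reference, once flannel is running, the values can be read from the
subnet file it writes; a rough sketch (the file path and variable names are
assumptions about a typical flannel setup, not Magnum code):

# Derive the docker daemon's --bip/--mtu from flannel's output, assuming
# flanneld has written its usual subnet file.
def read_flannel_env(path="/run/flannel/subnet.env"):
    env = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key:
                env[key] = value
    return env

flannel = read_flannel_env()
# e.g. "--bip=10.100.5.1/24 --mtu=1450" for the second docker daemon
print("--bip=%s --mtu=%s" % (flannel["FLANNEL_SUBNET"], flannel["FLANNEL_MTU"]))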

Regards,
Wanghua

On Sat, Dec 19, 2015 at 2:19 AM, Adrian Otto 
wrote:

> Wanghua,
>
> I see. The circular dependency you described does sound like a formidable
> challenge. Having multiple docker daemons violates the principle of least
> surprise. I worry that when it comes time to perform troubleshooting, an
> engineer would be surprised to find multiple dockers running at the same
> time within the same compute instance.
>
> Perhaps there is a way to generate the BIP and MTU before the docker
> daemon is started, then use those while starting docker, and start both
> flannel and etcd containers so all containers on the instance can share a
> single docker daemon? Would that work at all? I guess I’d need a better
> understanding of exactly how the BIP and MTU are generated before judging
> if this is a good idea.
>
> Adrian
>
> On Dec 16, 2015, at 11:40 PM, 王华  wrote:
>
> Adrian,
>
> When the docker daemon starts, it needs to know the bip and mtu which are
> generated by flannel. So flannel and etcd should start before docker
> daemon, but if flannel and etcd run in the same daemon, it introduces a
> circle. We need another docker daemon which is dedicated to flannel and
> etcd.
>
> Regards
> wanghua
>
> On Mon, Dec 14, 2015 at 11:45 AM, Steven Dake (stdake) 
> wrote:
>
>> Adrian,
>>
>> It's a real shame Atomic can't execute its mission - to serve as a container
>> operating system.  If you need some guidance on image building, find
>> experienced developers on #kolla – we have extensive experience in
>> producing containers for various runtime environments focused around
>> OpenStack.
>>
>> Regards
>> -steve
>>
>>
>> From: Adrian Otto 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Monday, December 7, 2015 at 1:16 PM
>> To: "OpenStack Development Mailing List (not for usage questions)" <
>> openstack-dev@lists.openstack.org>
>> Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap
>>
>> Until I see evidence to the contrary, I think adding some bootstrap
>> complexity to simplify the process of bay node image management and
>> customization is worth it. Think about where most users will focus
>> customization efforts. My guess is that it will be within these docker
>> images. We should ask our team to keep things as simple as possible while
>> working to containerize components where that makes sense. That may take
>> some creativity and a few iterations to achieve.
>>
>> We can pivot on this later if we try it and hate it.
>>
>> Thanks,
>>
>> Adrian
>>
>> On Dec 7, 2015, at 1:57 AM, Kai Qiang Wu  wrote:
>>
>> HI Hua,
>>
>> From my point of view, not everything needs to be put in a container.
>> Let's make the initial version simple and working, and then discuss other
>> options if needed in IRC or the weekly meeting.
>>
>>
>> Thanks
>>
>> Best Wishes,
>>
>> 
>> Kai Qiang Wu (吴开强 Kennan)
>> IBM China System and Technology Lab, Beijing
>>
>> E-mail: wk...@cn.ibm.com
>> Tel: 86-10-82451647
>> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
>> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>>
>> 
>> Follow your heart. You are miracle!
>>
>> 王华 ---07/12/2015 10:10:38 am---Hi all, If we want to run
>> etcd and flannel in container, we will introduce
>>
>> From: 王华 
>> To: Egor Guz 
>> Cc: "openstack-dev@lists.openstack.org" <
>> openstack-dev@lists.openstack.org>
>> Date: 07/12/2015 10:10 am
>> Subject: Re: [openstack-dev] [magnum]storage for docker-bootstrap
>> --
>>
>>
>>
>> Hi all,
>>
>> If we want to run etcd and flannel in container, we will
>> introduce docker-bootstrap which makes setup become more complex as Egor
>> pointed out. Should we pay for the price?
>>
>> On Sat, Nov 28, 2015 at 8:45 AM, Egor Guz <*e...@walmartlabs.com*
>> > wrote:
>>
>>Wanghua,
>>
>> I don't think moving flannel to a container is a good idea. This setup is
>> great for a dev environment, but it becomes too complex from an operator's
>> point of view (you add extra Docker 

[openstack-dev] [nova] No Nova API sub-team meeting in next weeks

2015-12-20 Thread Alex Xu
Hi,

Most of the API sub-team members won't be around, so we will cancel the nova
API meetings for the next two weeks:
https://wiki.openstack.org/wiki/Meetings/NovaAPI

Hope everyone is happy and enjoys the holidays!

Thanks
Alex


Re: [openstack-dev] [doc] DocImpact vs. reno

2015-12-20 Thread Joshua Hesketh
Hey all,

So I just caught up on this thread and the corresponding scrollback in IRC.

First of all, sorry if this came as a surprise to anybody. As Andreas
pointed out, this was highlighted in a number of docs emails to this list,
but I understand why they might have been overlooked.

The resource usage was indeed a concern I had in mind in implementing the
DocImpact check. That is why I worked on further improvements to zuul to
only need to run the test on jobs that actually use the DocImpact flag[0].
The job does, however, run in under 2min. So the total burden of boot time
+ 2min isn't overly high. I do completely agree with all the concerns and
that it's not ideal though.

More than happy to have the job reverted or turned off while we discuss
this further. From the discussion in IRC it sounds like there'll be a
little bit of holding off until the new year (when people return from
holidays), but overall there seems to be a desire to use reno to replace
parts of this, perhaps making the new job redundant.

Cheers,
Josh

[0]
https://review.openstack.org/#/q/status:open+project:openstack-infra/zuul+branch:master+topic:skip-commit,n,z

On Sat, Dec 19, 2015 at 8:52 AM, Sean Dague  wrote:

> On 12/18/2015 02:31 PM, Andreas Jaeger wrote:
> > On 12/18/2015 07:45 PM, Sean Dague wrote:
> >> On 12/18/2015 01:34 PM, Andreas Jaeger wrote:
> >>> On 12/18/2015 07:03 PM, Sean Dague wrote:
>  Recently noticed that a new job ended up on all nova changes that was
>  theoretically processing commit messages for DocImpact. It appears
>  to be
>  part of this spec -
> 
> http://specs.openstack.org/openstack/docs-specs/specs/mitaka/review-docimpact.html
> 
> 
> >>>
> >>> Lana talked with John Garbutt about this and announced this also in
> >>> several 'What's up' newsletters like
> >>>
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081522.html
> >>>
> >>>
> >>>
>  First, a heads up would be good. Nova burns a lot of nodes (i.e. has a
>  lot of patch volume), so this just decreased everyone's CI capacity
>  noticeably.
> >>>
> >>> I understand this reasoning and Joshua worked on a superior solution,
> >>> see
> >>>
> https://review.openstack.org/#/q/status:open+project:openstack-infra/zuul+branch:master+topic:skip-commit,n,z
> >>>
> >>>
> >>>
> 
>  Secondly, this all seems like the wrong direction. We've got reno now,
>  which is extremely useful for documenting significant changes in the
>  code base that need to be reflected up. We've dropped UpgradeImpact
> for
>  an upgrade comment in reno, which is *so* much better.
> 
>  It seems like using reno instead of commit message tags would be much
>  better for everyone here.
> >>>
> >>> The goal of DocImpact is to notify the Documentation team about changes
> >>> - currently done via bugs in launchpad so that manuals can be easily
> >>> updated. How would this tracking work with docimpact?
> >>
> >> Because the current concern seems to be that a naked DocImpact tag leaves
> >> people guessing what is important. And I understand that. There is a
> >> whole job now just to check that DocImpact contains a reason after it.
> >>
> >> We now have a very detailed system in reno to describe changes that will
> >> impact people using the code. It lets you do that with the commit and
> >> provide an arbitrarily large amount of content in it describing what and
> >> why you think that's important to reflect up.
> >>
> >> I think it effectively deprecates all *Impact flags. Now we have a place
> >> for that payload.
> >
> >
> > We - Sean, Anne Gentle, and Jeremy Stanley - just discussed this on
> > #openstack-infra, let me summarize my understanding:
> >
> > Some flags are used for checking the changes before a merge, especially
> > SecurityImpact and APIImpact. These are used for reviewing the changes.
> > This would be nice for DocImpact as well. SecurityImpact creates emails
> > for merged changes, DocImpact creates bugs for merged changes.
> >
> > When the docimpact spec was written, reno was not in use - and later
> > nobody brought it up as alternative idea.
> >
> > The idea going forward is, instead of checking the commit message, to
> > add a special section using reno that explains the changes that are
> > needed. A post-job would run and create bugs or send out emails like
> > today whenever a new entry gets added. But this would be triggered by
> > special sections in the release-notes and not in the commit message. We
> > also expect/hope that release notes get a good review and thus the
> > quality of these notifications would be improved.
> >
> > Let's look on how exactly we can do this next year,
>
> Definitely.
>
> One other thing. Not running tests on commit messages has been the norm
> for a while. We used to have commit message checks in hacking, but they
> are things that you can't run locally (easily). So people push a
> critical fix, run it locally, and everything passes. They push to

[openstack-dev] [Glance] [Artifacts] [app-catalog] Proposed pre-holiday Artifacts virtual meetup.

2015-12-20 Thread Nikhil Komawar
Hi all,

Sorry to send this at the last minute, but as informally decided, and given
that we have some momentum on Artifacts with a few decisions to be made, it
would be nice to have a virtual sync before the holiday season begins.

I have created a poll for this. Please vote on the doodle as soon as
possible ( http://doodle.com/poll/adq4y5ppiy4hqcww ). Attached is the
hangout link for participation; however, we may use other video
conferencing media if a large number of participants show up. The meeting
would be 60-90 minutes long and is likely to happen either this Tuesday
or Wednesday (as shown on the doodle) if the poll is successful.

Please let me know if anyone has concerns.

-- 

Thanks,
Nikhil




Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs

2015-12-20 Thread Jay Lau
We also need to give the Magnum UI a lot of consideration, as the UI part
highly depends on those APIs. Thanks.

On Thu, Dec 17, 2015 at 9:21 AM, Adrian Otto 
wrote:

> Yes, this topic is a good one for a spec. What I am planning to do here is
> copy the content from the BP to an etherpad in spec format, and iterating
> on that in a fluid way to begin with. I will clear the BP whiteboard, and
> simplify the description to cover the intent and principles of the change.
> Once that gels a little we can contribute that for review as a spec and
> have more structured debate.
>
> When we finish, we will have a concise blueprint, history of our debate in
> Gerrit, a merged spec, and then we can code it. The timing of this is
> unfortunate because several key stakeholders may be away for holidays over
> the next couple of weeks. We should proceed with caution.
>
> Adrian
>
> On Dec 16, 2015, at 5:11 PM, Kai Qiang Wu  wrote:
>
> Hi Adrian,
>
> Right now, I think:
>
> For the unify-COE-container-actions BP, it needs more discussion and a good
> design to make it happen (I think a spec is needed for this).
> And for the deprecation of the k8s-related objects, we need a backup plan
> instead of directly dropping them, especially when we do not have any spec
> or design for the unify-COE-container BP.
>
>
> Right now the work mostly happens on the UI part. For the UI, we can
> discuss whether those views need to be implemented or not (instead of
> directly dropping the API part while we have not come up with a consistent
> design for the unify-COE-container-actions BP).
>
>
> Thanks
>
> Best Wishes,
>
> 
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wk...@cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> 
> Follow your heart. You are miracle!
>
> Adrian Otto ---17/12/2015 07:00:37 am---Tom, > On Dec 16,
> 2015, at 9:31 AM, Cammann, Tom  wrote:
>
> From: Adrian Otto 
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: 17/12/2015 07:00 am
> Subject: Re: [openstack-dev] [magnum] Removing pod, rcs and service APIs
>
> --
>
>
>
> Tom,
>
> > On Dec 16, 2015, at 9:31 AM, Cammann, Tom  wrote:
> >
> > I don’t see a benefit from supporting the old API through a microversion
> > when the same functionality will be available through the native API.
>
> +1
>
> [snip]
>
> > Have we had any discussion on adding a v2 API and what changes (beyond
> > removing pod, rc, service) we would include in that change. What sort of
> > timeframe would we expect to remove the v1 API? I would like to move to a
> > v2 in this cycle, then we can think about removing v1 in N.
>
> Yes, when we drop functionality from the API that’s a contract breaking
> change, and requires a new API major version. We can drop the v1 API in N
> if we set expectations in advance. I’d want that plan to be supported with
> some evidence that maintaining the v1 API was burdensome in some way.
> Because adoption is limited, deprecation of v1 is not likely to be a
> contentious issue.
>
> Adrian
>
> >
> > Tom
> >
> >
> >
> > On 16/12/2015, 15:57, "Hongbin Lu"  wrote:
> >
> >> Hi Tom,
> >>
> >> If I remember correctly, the decision is to drop the COE-specific APIs
> >> (Pod, Service, Replication Controller) in the next API version. I think a
> >> good way to do that is to put a deprecation warning in the current API
> >> version (v1) for the removed resources, and remove them in the next API
> >> version (v2).
> >>
> >> An alternative is to drop them in the current API version. If we decide
> >> to do that, we need to bump the microversion [1], and ask users to
> >> specify the microversion as part of the requests when they want the
> >> removed APIs.
> >>
> >> [1]
> >> http://docs.openstack.org/developer/nova/api_microversions.html#removing-an-api-method
> >>
> >> Best regards,
> >> Hongbin
> >>
> >> -Original Message-
> >> From: Cammann, Tom [mailto:tom.camm...@hpe.com ]
> >> Sent: December-16-15 8:21 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [openstack-dev] [magnum] Removing pod, rcs and service APIs
> >>
> >> I have been noticing a fair amount of redundant work going on in
> magnum,
> >> python-magnumclient and magnum-ui with regards to APIs we have been
> >> intending to drop support for. During the Tokyo summit it was decided
> >> that we should support for only COE APIs that all COEs can support
> which
> >> means dropping support for Kubernetes specific APIs for Pod, Service
> and
> >> Replication Controller.
> >>
> >> Egor has submitted a blueprint[1] “Unify container actions between all
> >> COEs” which has been approved to cover this work an

Re: [openstack-dev] [glance][artifacts][app-catalog] Proposal to move artifacts meeting time

2015-12-20 Thread Nikhil Komawar
Thanks Alex, this is a good idea. Please propose a review for the schedule
change so that we can be sure the tests pass and the decision can
be accepted.

On 12/18/15 9:20 AM, Alexander Tivelkov wrote:
> Hi folks,
>
> The current timeslot of our weekly IRC meeting for artifact subteam
> (14:00 UTC Mondays) seems a bit inconvenient: it's a bit early for
> people in the Pacific timezone. Since we want to maximise the presence
> of all the interested parties at our sync-ups, I propose moving our
> meeting to a later timeslot. I'd prefer it to remain in
> #openstack-meeting-4 (since all the other Glance meetings are there)
> and be several days ahead of the main Glance meeting (which is on
> Thursdays).
>
> I've checked the current openstack meetings schedule and found some
> slots which may be more convenient than the current one. I've put them
> in doodle at http://doodle.com/poll/7krdfp96kttnvmg7 - please vote
> there for the slots which are ok for you. Then I'll make a patch to
> irc-meetings infra repo. 
>
> Thanks!
> -- 
> Regards,
> Alexander Tivelkov
>
>

-- 

Thanks,
Nikhil



[openstack-dev] [QA][Tempest] Asking for reviews from Tempest cores.

2015-12-20 Thread Sheng Bo Hou
Hi Tempest folks,

Thank you for your reviews on this patch: 
https://review.openstack.org/#/c/195443/.
However, the current devstack configuration in the gate jobs only supports
one cinder back-end, while this test is supposed to exercise volume retype
and migration between two different cinder back-ends. That is why this test
has not been covered in the gate or the experimental queue so far.
I will take a look at how to turn on multiple cinder back-end support in
devstack for the gate; in the meantime, if you find the code in this patch
OK, could you approve it? This test is very valuable for cinder admins to
check that volume retype and migration work correctly. Thanks.
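
For context, the operation the test exercises is roughly the following (a
sketch with made-up volume type names and credentials, not the Tempest code
itself; it assumes two cinder back-ends mapped to the two volume types):

from cinderclient import client as cinder_client
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="admin", password="secret", project_name="admin",
                   user_domain_id="default", project_domain_id="default")
cinder = cinder_client.Client("2", session=session.Session(auth=auth))

vol = cinder.volumes.create(size=1, volume_type="backend1-type")
# Retype to a type served by the other back-end, forcing a migration.
cinder.volumes.retype(vol, "backend2-type", "on-demand")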

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193
- Forwarded by Sheng Bo Hou/China/IBM on 12/21/2015 10:44 AM -

From:   Sheng Bo Hou/China/IBM
To: 
Date:   12/15/2015 02:48 PM
Subject:[QA][Tempest] Asking for reviews from Tempest cores.


Hi Tempest folks,

https://review.openstack.org/#/c/195443/

I am asking you to review this patch, which is the integration test for
volume retype with migration in Cinder. It has taken quite a while and
quite a few cycles to mature. It is a very important test for the volume
migration feature in Cinder.

Thank you for your attention.

Best wishes,
Vincent Hou (侯胜博)

Staff Software Engineer, Open Standards and Open Source Team, Emerging 
Technology Institute, IBM China Software Development Lab

Tel: 86-10-82450778 Fax: 86-10-82453660
Notes ID: Sheng Bo Hou/China/IBM@IBMCNE-mail: sb...@cn.ibm.com 
Address:3F Ring, Building 28 Zhongguancun Software Park, 8 Dongbeiwang 
West Road, Haidian District, Beijing, P.R.C.100193
地址:北京市海淀区东北旺西路8号中关村软件园28号楼环宇大厦3层 邮编:100193



[openstack-dev] Re: [Heat] Status of the Support Conditionals in Heat templates

2015-12-20 Thread Huangtianhua
https://review.openstack.org/#/c/245042
First patch https://review.openstack.org/#/c/221648

I proposed this spec because the function is really needed. Many customers of
our company complained that they have to write and manage many similar
templates to meet their business needs (can the templates be re-used?), and
the magnum folks have asked me for this function too. I know there are several
previous discussions such as https://review.openstack.org/#/c/84468/ and
https://review.openstack.org/#/c/153771/ , but considering user habits,
compatibility with CFN templates, and the fact that this simple approach is
easy to implement on top of our architecture, I proposed the same style as CFN.

If you agree, I will be happy to continue this work, thanks :)

-----Original Message-----
From: Steven Hardy [mailto:sha...@redhat.com]
Sent: 18 December 2015 19:08
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Heat] Status of the Support Conditionals in Heat
templates

On Wed, Dec 09, 2015 at 01:42:13PM +0300, Sergey Kraynev wrote:
>    Hi Heaters,
>    At the last IRC meeting we had a question about the Support Conditionals spec
>    [1].
>    A previous attempt at this is here [2].
>    The example of the first POC in Heat can be reviewed here [3].
>    As I understand it, we have not reached any final decision about this work.
>    So I'd like to clarify the community's feelings about it. This clarification
>    may be done as answers to two simple questions:
>     - Why do we want to implement it?
>     - Why do we NOT want to implement it?
>    My personal feeling is:
>    - Why do we want to implement it?
>        * A lot of users want to have this kind of functionality.
>        * It's already present in AWS, so it would be good to have this
>    feature in Heat too.
>     - Why do we NOT want to implement it?
>        * It can be solved with Jinja [4]. However, I don't think that is
>    a really important reason for blocking this work.
>    Please share your ideas about the two questions above.
>    That should allow us to eventually decide whether we implement it or not.

This has been requested for a long time, and there have been several previous 
discussions, which all ended up in debating the implementation, rather than 
focussing on the simplest possible way to meet the user requirement.

I think this latest attempt provides a simple way to meet the requirement, 
improves our CFN compatibility, and is inspired by an interface which has been 
proven to work.

So I'm +1 on going ahead with this - the implementation looks pretty simple :)

We've debated Jinja and other solutions before and dismissed them as either 
unsafe to run inside the heat service, or potentially too complex - this 
proposed solution appears to resolve both those concerns.

Steve



Re: [openstack-dev] [oslo][nova][all] timeutils deprecation removals will break Nova

2015-12-20 Thread Dolph Mathews
On Sunday, December 20, 2015, Davanum Srinivas  wrote:

> Nova folks,
>
> We have this review in oslo.utils:
> https://review.openstack.org/#/c/252898/
>
> There were failed efforts in the past to clean this up in Nova:
> https://review.openstack.org/#/c/164753/
> https://review.openstack.org/#/c/197601/
>
> What do we do? Suggestions please.


Propose an effort to cleanup nova without inexplicably abandoning it
shortly thereafter?


>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>


[openstack-dev] [stable][novaclient] Kilo reviews

2015-12-20 Thread Tony Breeds
Hi all,
The gate-novaclient-dsvm-functional tests are failing with:

--
.tox/functional/bin/python: can't open file 
'/usr/local/jenkins/slave_scripts/subunit2html.py': [Errno 2] No such file or 
directory
-- [1]

This can be fixed with [2]. Y'all can consider it my Christmas present if we
can review these [3].

Yours Tony.

[1] 
http://logs.openstack.org/38/224538/1/check/gate-novaclient-dsvm-functional/0e7c962/console.html.gz#_2015-11-05_14_57_11_323
[2] https://review.openstack.org/#/c/245621/
[3] 
https://review.openstack.org/#/q/starredby:tonyb+is:open+branch:stable/kilo+project:openstack/python-novaclient




[openstack-dev] [nova][stable] Circular dependency to resolve

2015-12-20 Thread Tony Breeds
Hi all (actually I'm really looking at Dan, Sean and Matt)

We have 2 changes in stable/liberty:

https://review.openstack.org/#/c/248505 Add -constraints sections for CI jobs ; 
and
https://review.openstack.org/#/c/248877 Remove the TestRemoteObject class

If you grab 248505 and look at the git DAG you get:
$ git log --oneline --decorate  -3
83ca84a (HEAD -> review/sachi_king/bp/Requirements-Management) Add -constraints 
sections for CI jobs
6e2da82 Remove the TestRemoteObject class
94d6b69 (origin/stable/liberty, gerrit/stable/liberty) Omnibus stable fix for 
upstream requirements breaks

so 248877 is based on stable/liberty and 248505 is based on 248877

The problem is that 248877 Depends-On 248505[1]

I think the correct solution is to remove the Depends-On directive from 248877.

I didn't do that because:
1. I don't understand why that's there and I could be missing something.
2. Doing so would lose the two +2's and the +W anyway, so it won't really help.


Yours Tony.
[1] Well it Depends on Icbbb78cfcd074b0050e60c54557637af723f9b92 which maps to
the same change in master and stable/liberty 




Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Jay Lau
Thanks Adrian and Tim. I saw that @Vilobh already uploaded a patch at
https://review.openstack.org/#/c/259201/ ; perhaps we can first have a
spec and discuss it there. ;-)

On Mon, Dec 21, 2015 at 2:44 AM, Tim Bell  wrote:

> Given the lower level quotas in Heat, Neutron, Nova etc., the error
> feedback is very important. A Magnum “cannot create” message requires a lot
> of debugging whereas a “Floating IP quota exceeded” gives a clear root
> cause.
>
>
>
> Whether we quota Magnum resources or not, some error scenarios and
> appropriate testing+documentation would be a great help for operators.
>
>
>
> Tim
>
>
>
> *From:* Adrian Otto [mailto:adrian.o...@rackspace.com]
> *Sent:* 20 December 2015 18:50
> *To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev@lists.openstack.org>
>
> *Subject:* Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
>
>
> This sounds like a source-of-truth concern. From my perspective the
> solution is not to create redundant quotas. Simply quota the Magnum
> resources. Lower level limits *could* be queried by magnum prior to acting
> to CRUD the lower level resources. In the case we could check the maximum
> allowed number of (or access rate of) whatever lower level resource before
> requesting it, and raising an understandable error. I see that as an
> enhancement rather than a must-have. In all honesty that feature is
> probably more complicated than it's worth in terms of value.
>
> --
>
> Adrian
>
>
> On Dec 20, 2015, at 6:36 AM, Jay Lau  wrote:
>
> I share Lee's concern: Magnum depends on Heat, and Heat needs to call
> nova, cinder and neutron to create the Bay resources. But Nova and Cinder
> each have their own quota policy, so if we define quotas again in Magnum,
> how do we handle the conflict? Another point is that limiting Bays by
> quota seems a bit coarse-grained, as different bays may have different
> configurations and resource requests. Comments? Thanks.
>
>
>
> On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote  wrote:
>
> Food for thought - there is a cost to FIPs (in the case of public IP
> addresses), security groups (to a lesser extent, but in terms of the
> computation of many hundreds of them), etc. Administrators may wish to
> enforce quotas on a variety of resources that are direct costs or indirect
> costs (e.g. # of bays, where a bay consists of a number of multi-VM /
> multi-host pods and services, which consume CPU, mem, etc.).
>
>
>
> If Magnum quotas are brought forward, they should govern (enforce quota)
> on Magnum-specific constructs only, correct? Resources created by Magnum
> COEs should be governed by existing quota policies governing said resources
> (e.g. Nova and vCPUs).
>
>
>
> Lee
>
>
>
> On Dec 16, 2015, at 1:56 PM, Tim Bell  wrote:
>
>
>
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com ]
> Sent: 15 December 2015 22:40
> To: openstack-dev 
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
> Hi! Can I offer a counter point?
>
> Quotas are for _real_ resources.
>
>
> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
>
>
> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost real money and cannot be conjured from thin air. As such, the user
> being able to allocate 1 billion or 2 containers is not limited by Magnum,
> but by real things that they must pay for. If they have enough Nova quota
> to allocate 1 billion tiny pods, why would Magnum stop them? Who actually
> benefits from
> that limitation?
>
> So I suggest that you not add any detailed, complicated quota system to
> Magnum. If there are real limitations to the implementation that Magnum
> has chosen, such as we had in Heat (the entire stack must fit in memory),
> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
> memory quotas be the limit, and enjoy the profit margins that having an
> unbound force multiplier like Magnum in your cloud gives you and your
> users!
>
> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
>
> Hi All,
>
> > Currently, it is possible to create an unlimited number of resources like
> > bays/pods/services. In Magnum, there should be a limit on how many Magnum
> > resources a user or project can create, and that limit should be
> > configurable [1].
>
> > I propose the following design:
>
> > 1. Introduce a new table magnum.quotas:
> >
> >    +------------+----------+------+-----+---------+----------------+
> >    | Field      | Type     | Null | Key | Default | Extra          |
> >    +------------+----------+------+-----+---------+----------------+
> >    | id         | int(11)  | NO   | PRI | NULL    | auto_increment |
> >    | created_at | datetime | YES  |     | NULL    |                |
> >

Re: [openstack-dev] [keystone]How do we avoid token expired?

2015-12-20 Thread Duncan Thomas
I believe the code that needs fixing is in cinder backup itself, rather
than (or as well as) the client; since the client only initiates the
operation, it will not be around later when the token expires.

Cinder backup is also potentially a place where keystone trusts can be
fruitfully employed.
On 20 Dec 2015 11:22, "Clark Boylan"  wrote:

> On Sun, Dec 20, 2015, at 07:43 AM, zhu4236926 wrote:
> > Hi guys,
> > I'm using cinder-client to back up a volume and I query the status of
> > the backup volume in a loop every 3 seconds. I got the token from
> > keystone when I began the backup. Assuming that the token
> > expires in 2 minutes but the backup needs 5 minutes to finish, then
> > after 3 minutes the token is expired and authentication fails. How
> > should I solve this problem using cinder-client or
> > keystone-client, or could you provide another solution to avoid this
> > problem?
> I believe that if you use a keystoneauth session [0] that it will renew
> tokens that are near to expiring or have already expired. Support is per
> client so you will need to check if cinderclient supports this and if
> not probably add support first.
>
> [0] http://docs.openstack.org/developer/keystoneauth/using-sessions.html
>
> Hope this helps,
> Clark
>


Re: [openstack-dev] [keystone] Addressing issue of keystone token expiry during long running operations

2015-12-20 Thread Jamie Lennox
Hey Paul,

At the Tokyo summit we discussed a general way to make it so that user
tokens were only expiration tested once. When the token hits nova for
example we can say it was validated, then when nova talks to glance it
sends both the user token (or enough data to represent the user token) and
an X-Service-Token which is the token nova validated with and we say the
presence of the X-Service-Token means we should trust that the previous
service already did enough validation to just trust it.
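
Illustratively, such a service-to-service request would carry something like
the headers below (hand-rolled here only to show the idea; the real mechanism
will live in the auth plugins and middleware, not in per-service code):

import requests

user_token = "gAAAA...user"          # the token the user sent to nova
nova_service_token = "gAAAA...nova"  # the token nova authenticated with

headers = {
    "X-Auth-Token": user_token,
    "X-Service-Token": nova_service_token,
}
# glance can then treat the user token as already validated because a
# trusted service vouches for it.
resp = requests.get("http://glance:9292/v2/images", headers=headers)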

This is a big effort because it's going to require changing how service to
service communication works at all places.

At the moment I don't have a blueprint for this. The biggest change is
going to be making service->service communication rely on keystoneauth auth
plugins so that we can have the auth plugin control what data is
communicated, rather than hacking this into every location. So far this has
required updates to middleware, and further updates to oslo.context and
others, to make this easy for services to consume. This work has been ongoing by
myself, mordred and morgan (if you see reviews to switch your service to
keystoneauth plugins please review as it will make the rest of this work
easier in future).

I certainly don't expect to see this pulled off in Mitaka time frame.

For the mean time more and more services are relying on trusts, which is an
unfortunate but workable solution.

Jamie

On 18 December 2015 at 22:13, Paul Carlton  wrote:

> Jamie
>
> John Garbutt suggested I follow up this issue with you.  I understand you
> may be leading the effort to address the issue of token expiry during a
> long running operation.  Nova encounters this scenario during image
> snapshots and live migrations.
>
> Is there a keystone blueprint for this issue?
>
> Thanks
>
> --
> Paul Carlton
> Software Engineer
> Cloud Services
> Hewlett Packard
> BUK03:T242
> Longdown Avenue
> Stoke Gifford
> Bristol BS34 8QZ
>
> Mobile:+44 (0)7768 994283
> Email:mailto:paul.carlt...@hpe.com
> Hewlett-Packard Limited registered Office: Cain Road, Bracknell, Berks
> RG12 1HN Registered No: 690597 England.
> The contents of this message and any attachments to it are confidential
> and may be legally privileged. If you have received this message in error,
> you should delete it from your system immediately and advise the sender. To
> any recipient of this message within HP, unless otherwise stated you should
> consider this message and attachments as "HP CONFIDENTIAL".
>
>
>


Re: [openstack-dev] [ironic][neutron][keystone] how to reauth the token

2015-12-20 Thread Jamie Lennox
On 17 December 2015 at 02:59, Pavlo Shchelokovskyy <
pshchelokovs...@mirantis.com> wrote:

> Hi all,
>
> I'd like to start discussion on how Ironic is using Neutron when Keystone
> is involved.
>
> Recently the patch [0] was merged in Ironic to fix a bug when the token
> with which to create the neutronclient is expired. For that Ironic now
> passes both username/password of its own service user and the token from
> the request to the client. But that IMO is a wrong thing to do.
>
> When the token is given but happens to be expired, neutronclient will
> re-authenticate [1] using the provided credentials for the service tenant and user
> - but in fact the original token might have come from completely different
> tenant. Thus the action neutron is performing might look for / change
> resources in the service tenant instead of the tenant for which the
> original token was issued.
>
> Ironic by default is admin-only service, so the token that is accepted is
> admin-scoped, but still it might be coming from different tenants (e.g.
> service tenant or actual admin tenant, or some other tenant that admin is
> logged into). And even in the case of admin-scoped token I'm not sure how
> this will work for domain-separated tenants in Keystone v3. Does
> admin-scoped neutronclient show all ports including those created by
> tenants in domains other than the domain of admin tenant?
>
> If I understand it right, the best we could do is use keystoneauth *token
> auth plugins that can reauth when the token is about to expire (but of
> course not when it is already expired).
>
>
I'm not familiar with ironic as to what token is being passed around there.

If it's the user's token there's really nothing we can do. You can't
refresh a token a user gave you (big security issue) and using
authentication plugins there really isn't going to help. In this case it's
weird to pass both the token and the user/pass because assuming
neutronclient allows that at all you're not going to know if you performed
an operation as the user or the service.

If it's the token of the ironic service user (which seems possible because
in that patch you've removed the else statement to always use the ironic
service user), then yes, if you were to use authentication plugins the token
would be refreshed for you automatically because we have the username and
password available to get a new token.

The only real option at the moment for extending the life of the user token
is to establish a trust with keystone immediately on receiving the user
token that delegates permission from the user to the service. You then use
the service token (refreshable) to perform operations before returning to
the user. This is what heat and recently glance (and others) have done to
get around this problem.
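
Roughly, the trust pattern looks like this (illustrative names and endpoints
only, not actual heat/ironic code):

from keystoneauth1 import session
from keystoneauth1.identity import v3
from keystoneclient.v3 import client as ks_client

user_token = "gAAAA...incoming"   # placeholders for this sketch
user_id = "USER_ID"
project_id = "PROJECT_ID"
service_user_id = "SERVICE_USER_ID"

# 1. While the user token is still valid, delegate to the service user.
user_auth = v3.Token(auth_url="http://keystone:5000/v3",
                     token=user_token, project_id=project_id)
keystone = ks_client.Client(session=session.Session(auth=user_auth))
trust = keystone.trusts.create(trustor_user=user_id,
                               trustee_user=service_user_id,
                               project=project_id,
                               role_names=["Member"],
                               impersonation=True)

# 2. Later (even after the user token has expired) the service gets its own
#    refreshable token scoped to the trust.
svc_auth = v3.Password(auth_url="http://keystone:5000/v3",
                       username="service-user", password="secret",
                       user_domain_id="default",
                       trust_id=trust.id)
svc_session = session.Session(auth=svc_auth)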

There is ongoing work to solve this in a better way for all services, but
there is a lot to be done (changing service->service communication
everywhere) before this is available, so if you are experiencing problems I
wouldn't wait for it.

As a last aside, please create another section for the service user. You
can use the same credentials but consider the keystone_authtoken section
off limits. The options you are reading from there are old, not used in
recent configurations (including devstack) and are going to mean that
auth_token middleware in ironic can't be configured with v3, let alone cert
based auth or any of the new things we are introducing there.



[0] https://review.openstack.org/#/c/255885
> [1]
> https://github.com/openstack/python-neutronclient/blob/master/neutronclient/client.py#L173
>
> Best regards,
> --
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
>


Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-20 Thread Clark Boylan
Looking at the dstat logs for a recent fail [0], it did help in that
more memory is available. You now have over 1GB available but still less
than 2GB. I would try using less memory. Can you use a 1GB flavor
instead of a 2GB flavor?

[0]
http://logs.openstack.org/58/251158/4/check/gate-functional-dsvm-magnum-swarm/6b022cc/logs/screen-dstat.txt.gz

On Sun, Dec 20, 2015, at 12:08 PM, Hongbin Lu wrote:
> Hi Clark,
> 
> Thanks for the fix. Unfortunately, it doesn't seem to help. The error
> still occurred [1] after you increased the memory restriction, and as
> before, most of them occurred in OVH host. Any further suggestion?
> 
> [1] http://status.openstack.org/elastic-recheck/#1521237
> 
> Best regards,
> Hongbin
> 
> -Original Message-
> From: Clark Boylan [mailto:cboy...@sapwetik.org] 
> Sent: December-15-15 5:41 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite
> often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"
> 
> On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> > On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > > Hi,
> > > 
> > > As Kai Qiang mentioned, magnum gate recently had a bunch of random 
> > > failures, which occurred on creating a nova instance with 2G of RAM.
> > > According to the error message, it seems that the hypervisor tried 
> > > to allocate memory to the nova instance but couldn’t find enough 
> > > free memory in the host. However, by adding a few “nova 
> > > hypervisor-show XX” before, during, and right after the test, it 
> > > showed that the host has 6G of free RAM, which is far more than 2G. 
> > > Here is a snapshot of the output [1]. You can find the full log here [2].
> > If you look at the dstat log
> > http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnu
> > m-k8s/5305d7a/logs/screen-dstat.txt.gz
> > the host has nowhere near 6GB free memory and less than 2GB. I think 
> > you actually are just running out of memory.
> > > 
> > > Another observation is that most of the failure happened on a node 
> > > with name “devstack-trusty-ovh-*” (You can verify it by entering a 
> > > query [3] at http://logstash.openstack.org/ ). It seems that the 
> > > jobs will be fine if they are allocated to a node other than “ovh”.
> > I have just done a quick spot check of the total memory on 
> > devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free 
> > -m` and the results are 7480, 7732, and 6976 megabytes respectively. 
> > Despite using 8GB flavors in each case there is variation and OVH 
> > comes in on the low end for some reason. I am guessing that you fail 
> > here more often because the other hosts give you just enough extra 
> > memory to boot these VMs.
> To follow up on this we seem to have tracked this down to how the linux
> kernel restricts memory at boot when you don't have a contiguous chunk of
> system memory. We have worked around this by increasing the memory
> restriction to 9023M at boot which gets OVH inline with Rackspace and
> slightly increases available memory on HPCloud (because it actually has
> more of it).
> 
> You should see this fix in action after image builds complete tomorrow
> (they start at 1400UTC ish).
> > 
> > We will have to look into why OVH has less memory despite using 
> > flavors that should be roughly equivalent.
> > > 
> > > Any hints to debug this issue further? Suggestions are greatly 
> > > appreciated.
> > > 
> > > [1] http://paste.openstack.org/show/481746/
> > > [2]
> > > http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-mag
> > > num-swarm/56d79c3/console.html [3] 
> > > https://review.openstack.org/#/c/254370/2/queries/1521237.yaml
> 
> 



Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

2015-12-20 Thread Hongbin Lu
Hi Clark,

Thanks for the fix. Unfortunately, it doesn't seem to help. The error still 
occurred [1] after you increased the memory restriction, and as before, most of 
them occurred in OVH host. Any further suggestion?

[1] http://status.openstack.org/elastic-recheck/#1521237

Best regards,
Hongbin

-Original Message-
From: Clark Boylan [mailto:cboy...@sapwetik.org] 
Sent: December-15-15 5:41 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Infra][nova][magnum] Jenkins failed quite often 
for "Cannot set up guest memory 'pc.ram': Cannot allocate memory"

On Sun, Dec 13, 2015, at 10:51 AM, Clark Boylan wrote:
> On Sat, Dec 12, 2015, at 02:16 PM, Hongbin Lu wrote:
> > Hi,
> > 
> > As Kai Qiang mentioned, magnum gate recently had a bunch of random 
> > failures, which occurred on creating a nova instance with 2G of RAM.
> > According to the error message, it seems that the hypervisor tried 
> > to allocate memory to the nova instance but couldn’t find enough 
> > free memory in the host. However, by adding a few “nova 
> > hypervisor-show XX” before, during, and right after the test, it 
> > showed that the host has 6G of free RAM, which is far more than 2G. 
> > Here is a snapshot of the output [1]. You can find the full log here [2].
> If you look at the dstat log
> http://logs.openstack.org/07/244907/5/check/gate-functional-dsvm-magnu
> m-k8s/5305d7a/logs/screen-dstat.txt.gz
> the host has nowhere near 6GB free memory and less than 2GB. I think 
> you actually are just running out of memory.
> > 
> > Another observation is that most of the failure happened on a node 
> > with name “devstack-trusty-ovh-*” (You can verify it by entering a 
> > query [3] at http://logstash.openstack.org/ ). It seems that the 
> > jobs will be fine if they are allocated to a node other than “ovh”.
> I have just done a quick spot check of the total memory on 
> devstack-trusty hosts across HPCloud, Rackspace, and OVH using `free 
> -m` and the results are 7480, 7732, and 6976 megabytes respectively. 
> Despite using 8GB flavors in each case there is variation and OVH 
> comes in on the low end for some reason. I am guessing that you fail 
> here more often because the other hosts give you just enough extra 
> memory to boot these VMs.
To follow up on this we seem to have tracked this down to how the linux kernel 
restricts memory at boot when you don't have a contiguous chunk of system 
memory. We have worked around this by increasing the memory restriction to 
9023M at boot which gets OVH inline with Rackspace and slightly increases 
available memory on HPCloud (because it actually has more of it).

You should see this fix in action after image builds complete tomorrow (they 
start at 1400UTC ish).
> 
> We will have to look into why OVH has less memory despite using 
> flavors that should be roughly equivalent.
> > 
> > Any hints to debug this issue further? Suggestions are greatly 
> > appreciated.
> > 
> > [1] http://paste.openstack.org/show/481746/
> > [2]
> > http://logs.openstack.org/48/256748/1/check/gate-functional-dsvm-mag
> > num-swarm/56d79c3/console.html [3] 
> > https://review.openstack.org/#/c/254370/2/queries/1521237.yaml




[openstack-dev] [ansible] tox functional testing rename

2015-12-20 Thread Paul Belanger
Greetings,

I've proposed a series of patches[1] to rename tox -eansible-functional to
tox -efunctional. While the change is trivial, it is meant to work around an
interpreter issue with tox[2]. Additionally, the change brings our tox.ini
in line with other OpenStack projects launching functional tests from tox.

The current patches will fail to pass the gate until the Depends-On patch is
merged.

So, I'm here asking for feedback to help get ansible-role-jenkins-job-builder
passing the gate again.

[1] https://review.openstack.org/#/q/topic:temp/functional+status:open
[2] https://review.openstack.org/#/c/259594/



Re: [openstack-dev] [keystone]How do we avoid token expired?

2015-12-20 Thread Clark Boylan
On Sun, Dec 20, 2015, at 07:43 AM, zhu4236926 wrote:
> Hi guys,
> I'm using cinder-client to back up a volume and I query the status of
> the backup volume in a loop every 3 seconds. I got the token from
> keystone when I began the backup. Assuming that the token
> expires in 2 minutes but the backup needs 5 minutes to finish, then
> after 3 minutes the token is expired and authentication fails. How
> should I solve this problem using cinder-client or
> keystone-client, or could you provide another solution to avoid this
> problem?
I believe that if you use a keystoneauth session [0] that it will renew
tokens that are near to expiring or have already expired. Support is per
client so you will need to check if cinderclient supports this and if
not probably add support first.

[0] http://docs.openstack.org/developer/keystoneauth/using-sessions.html
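
A minimal sketch along those lines, assuming a cinderclient new enough to
accept a session (auth details here are placeholders):

import time

from cinderclient import client as cinder_client
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="demo", password="secret", project_name="demo",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

# The session re-authenticates as needed, so a long-running poll does not
# fail just because the first token expired after 2 minutes.
cinder = cinder_client.Client("2", session=sess)
backup = cinder.backups.create("VOLUME_UUID", name="my-backup")
while cinder.backups.get(backup.id).status not in ("available", "error"):
    time.sleep(3)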

Hope this helps,
Clark



Re: [openstack-dev] [ceilometer][aodh][vitrage] Raising custom alarms in AODH

2015-12-20 Thread AFEK, Ifat (Ifat)
> -Original Message-
> From: Ryota Mibu [mailto:r-m...@cq.jp.nec.com]
> Sent: Tuesday, December 08, 2015 11:17 AM
>
> Hi Ifat,
> 
> In short, 'event' is generated in OpenStack, 'alarm' is defined by a
> user. 'event' is a container of data passed from other OpenStack
> services through OpenStack notification bus. 'event' and contained data
> will be stored in ceilometer DB and exposed via event api [1]. 'alarm'
> is pre-configured alerting rule defined by a user via alarm API [2].
> 'Alarm' also has state like 'ok' and 'alarm', and history as well.
> 
> [1]
> http://docs.openstack.org/developer/ceilometer/webapi/v2.html#events-
> and-traits
> [2] http://docs.openstack.org/developer/aodh/webapi/v2.html#alarms
> 
> 
> The point is whether we should use 'event' or 'alarm' for all failure
> representation. Maybe we can use 'event' for all raw error/fault
> notification, and use 'alarm' for exposing deduced/wrapped failure.
> This is my view, so might be wrong.
> 

Hi,

Let me summarize the issue. 

What we need in Vitrage is:

- custom alarms, where we can set metadata like: {"resource_type":"switch", 
"resource_name":"switch-2"} or {"resource_type":"nova.instance", 
"resource_id":} or {"nagios_test_name":"check_ovs_vswitchd", 
"nagios_test_status":"warning"}

- the ability to define an alarm once, and instantiate it multiple times for 
every instance

- the ability to define an alarm on-the-fly (since we can't predict all alarm 
types)

- an option to trigger the alarm from vitrage


The optimal solution for us would be to have alarm templates and alarm 
metadata. Or, we can have a workaround... The current workarounds that I see 
are:

1. Create an event-alarm on the fly for every alarm on every instance and set 
its state immediately using Aodh API. The alarm will be stored in the database, 
but this will not trigger a notification or a call to alarm-actions. The alarm 
name will have to include the resource name/id, like "Instance  is at 
risk due to public switch problem" to make it unique. This might work for 
Vitrage horizon use cases in Mitaka, but not for future use cases that will 
require alarm-actions.

2. Send notifications in order to trigger event alarms "by the book". Vitrage 
notification "Alarm: Instance is at risk due to public switch problem" with 
metadata {"switch_name":"switch-2", "instance_id":} will be converted to 
a corresponding event, then to an alarm. We will still need to create a 
different alarm for every instance. And we will have to wait until the cache is 
refreshed. 
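
For workaround 1, the Aodh calls would look roughly like the sketch below (the
endpoint, event type and trait names are my assumptions, not an agreed
interface):

from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url="http://keystone:5000/v3",
                   username="vitrage", password="secret",
                   project_name="services",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)

instance_id = "INSTANCE_UUID"
alarm_body = {
    "name": "Instance %s is at risk due to public switch problem" % instance_id,
    "type": "event",
    "event_rule": {
        "event_type": "vitrage.deduced_alarm",
        "query": [{"field": "traits.instance_id", "op": "eq",
                   "value": instance_id}],
    },
}
resp = sess.post("http://aodh:8042/v2/alarms", json=alarm_body)
alarm_id = resp.json()["alarm_id"]

# ...and immediately force its state, so the alarm is stored without waiting
# for a matching event (no notification or alarm-action is triggered).
sess.put("http://aodh:8042/v2/alarms/%s/state" % alarm_id, json="alarm")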


I will be happy to hear your thoughts about it.

Thanks,
Ifat.

















Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Tim Bell
Given the lower level quotas in Heat, Neutron, Nova etc., the error feedback
is very important. A Magnum "cannot create" message requires a lot of
debugging whereas a "Floating IP quota exceeded" gives a clear root cause.

 

Whether we quota Magnum resources or not, some error scenarios and
appropriate testing+documentation would be a great help for operators.

 

Tim

 

From: Adrian Otto [mailto:adrian.o...@rackspace.com] 
Sent: 20 December 2015 18:50
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

 

This sounds like a source-of-truth concern. From my perspective the solution
is not to create redundant quotas. Simply quota the Magnum resources.
Lower-level limits *could* be queried by Magnum prior to acting to CRUD the
lower-level resources. In that case we could check the maximum allowed number
of (or access rate of) whatever lower-level resource before requesting it,
and raise an understandable error. I see that as an enhancement rather than a
must-have. In all honesty that feature is probably more complicated than it's
worth in terms of value.

-- 

Adrian


On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau@gmail.com> wrote:

I also share Lee's concern: Magnum depends on Heat, and Heat needs to call
Nova, Cinder and Neutron to create the Bay resources. But Nova and Cinder
each have their own quota policies, so if we define quotas again in Magnum,
how do we handle the conflict? Another point is that limiting Bays by quota
seems a bit coarse-grained, as different bays may have different
configurations and resource requests. Comments? Thanks.

 

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalc...@gmail.com> wrote:

Food for thought - there is a cost to FIPs (in the case of public IP
addresses), security groups (to a lesser extent, but in terms of the
computation of many hundreds of them), etc. Administrators may wish to
enforce quotas on a variety of resources that are direct costs or indirect
costs (e.g. # of bays, where a bay consists of a number of multi-VM /
multi-host pods and services, which consume CPU, mem, etc.). 

 

If Magnum quotas are brought forward, they should govern (enforce quota) on
Magnum-specific constructs only, correct? Resources created by Magnum COEs
should be governed by existing quota policies governing said resources (e.g.
Nova and vCPUs).

 

Lee

 

On Dec 16, 2015, at 1:56 PM, Tim Bell <tim.b...@cern.ch> wrote:

 

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.


The CERN container specialist agrees with you ... it would be good to
reflect on the needs given that ironic, neutron and nova are policing the
resource usage. Quotas in the past have been used for things like key pairs
which are not really real.




Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
cost real money and cannot be conjured from thin air. As such, the user
being able to allocate 1 billion or 2 containers is not limited by Magnum,
but by real things that they must pay for. If they have enough Nova quota to
allocate 1 billion tiny pods, why would Magnum stop them? Who actually
benefits from that limitation?

So I suggest that you not add any detailed, complicated quota system to
Magnum. If there are real limitations to the implementation that Magnum
has chosen, such as we had in Heat (the entire stack must fit in memory),
then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
memory quotas be the limit, and enjoy the profit margins that having an
unbound force multiplier like Magnum in your cloud gives you and your
users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:



Hi All,

Currently, it is possible to create an unlimited number of resources like
bay/pod/service. In Magnum, there should be a limit on how many Magnum
resources a user or project can create, and the limit should be
configurable [1].

I propose the following design:

1. Introduce new table magnum.quotas
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |

Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Adrian Otto
This sounds like a source-of-truth concern. From my perspective the solution is 
not to create redundant quotas. Simply quota the Magnum resources. Lower-level 
limits *could* be queried by Magnum prior to acting to CRUD the lower-level 
resources. In that case we could check the maximum allowed number of (or access 
rate of) whatever lower-level resource before requesting it, and raise an 
understandable error. I see that as an enhancement rather than a must-have. In 
all honesty that feature is probably more complicated than it's worth in terms 
of value.

--
Adrian

On Dec 20, 2015, at 6:36 AM, Jay Lau <jay.lau@gmail.com> wrote:

I also share Lee's concern: Magnum depends on Heat, and Heat needs to call Nova, 
Cinder and Neutron to create the Bay resources. But Nova and Cinder each have 
their own quota policies, so if we define quotas again in Magnum, how do we 
handle the conflict? Another point is that limiting Bays by quota seems a bit 
coarse-grained, as different bays may have different configurations and 
resource requests. Comments? Thanks.

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote <leecalc...@gmail.com> wrote:
Food for thought - there is a cost to FIPs (in the case of public IP 
addresses), security groups (to a lesser extent, but in terms of the 
computation of many hundreds of them), etc. Administrators may wish to enforce 
quotas on a variety of resources that are direct costs or indirect costs (e.g. 
# of bays, where a bay consists of a number of multi-VM / multi-host pods and 
services, which consume CPU, mem, etc.).

If Magnum quotas are brought forward, they should govern (enforce quota) on 
Magnum-specific constructs only, correct? Resources created by Magnum COEs 
should be governed by existing quota policies governing said resources (e.g. 
Nova and vCPUs).

Lee

On Dec 16, 2015, at 1:56 PM, Tim Bell <tim.b...@cern.ch> wrote:

-Original Message-
From: Clint Byrum [mailto:cl...@fewbar.com]
Sent: 15 December 2015 22:40
To: openstack-dev <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
Resources

Hi! Can I offer a counter point?

Quotas are for _real_ resources.


The CERN container specialist agrees with you ... it would be good to
reflect on the needs given that ironic, neutron and nova are policing the
resource usage. Quotas in the past have been used for things like key pairs
which are not really real.

Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that cost
real money and cannot be conjured from thin air. As such, the user being able
to allocate 1 billion or 2 containers is not limited by Magnum, but by real
things that they must pay for. If they have enough Nova quota to allocate 1
billion tiny pods, why would Magnum stop them? Who actually benefits from
that limitation?

So I suggest that you not add any detailed, complicated quota system to
Magnum. If there are real limitations to the implementation that Magnum
has chosen, such as we had in Heat (the entire stack must fit in memory),
then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
memory quotas be the limit, and enjoy the profit margins that having an
unbound force multiplier like Magnum in your cloud gives you and your
users!

Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
Hi All,

Currently, it is possible to create an unlimited number of resources like 
bay/pod/service. In Magnum, there should be a limit on how many Magnum 
resources a user or project can create, and the limit should be 
configurable [1].

I propose the following design:

1. Introduce new table magnum.quotas
+------------+--------------+------+-----+---------+----------------+
| Field      | Type         | Null | Key | Default | Extra          |
+------------+--------------+------+-----+---------+----------------+
| id         | int(11)      | NO   | PRI | NULL    | auto_increment |
| created_at | datetime     | YES  |     | NULL    |                |
| updated_at | datetime     | YES  |     | NULL    |                |
| deleted_at | datetime     | YES  |     | NULL    |                |
| project_id | varchar(255) | YES  | MUL | NULL    |                |
| resource   | varchar(255) | NO   |     | NULL    |                |
| hard_limit | int(11)      | YES  |     | NULL    |                |
| deleted    | int(11)      | YES  |     | NULL    |                |
+------------+--------------+------+-----+---------+----------------+

resource can be Bay, Pod, Containers, etc.


2. API controller for quota will be created to make sure basic CLI
commands work.

quota-show, quota-delete, quota-create, quota-update

3. When the admin specifies a quota of X resources to be created, the code 
should abide by that. For example, if the hard limit for Bays is 5 (i.e. a 
project can have a maximum of 5 Bays) and a user in that project tries to 
exceed that hard limit, it won't be allowed. The same goes for other resources.

[openstack-dev] [oslo][nova][all] timeutils deprecation removals will break Nova

2015-12-20 Thread Davanum Srinivas
Nova folks,

We have this review in oslo.utils:
https://review.openstack.org/#/c/252898/

There were failed effort in the past to cleanup in Nova:
https://review.openstack.org/#/c/164753/
https://review.openstack.org/#/c/197601/

What do we do? Suggestions please.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer][Gnocchi] inconsistent instance attributes cause infinite update

2015-12-20 Thread Luo Gangyi
Hi devs,
  
 I found a problem which may cause infinite updates of an instance's attributes 
in Gnocchi.
  
 Let's see the resource definition of instance.
  
   - resource_type: instance
     metrics:
       - 'instance'
       - 'memory'
       - 'memory.usage'
       - 'memory.resident'
       - 'vcpus'
       - 'cpu'
       - 'cpu_util'
       - 'disk.root.size'
       ...
     attributes:
       host: resource_metadata.host
       image_ref: resource_metadata.image_ref_url
       ...

 Here is the problem: although they have the same attributes, they are *not* the 
same.
  
 Some of them come from nova's notifications, and the others come from the 
ceilometer-compute-agent.
  
 1) For those that come from notifications, the attributes look like
  
 image_ref :http://10.133.12.125:9292/images/  
 host: compute.lgy-openstack-kilo.novalocal 
  
 2) For those that come from the ceilometer-compute-agent,
 image_ref : 
http://10.133.12.125:8774/4994e42421a04beda56fff7d817e810e/images/8d6a9cd9-48ae-4a41-bd13-262a46c93d72
 
 host:ea8f8e465d9caff06e80a0fda6f30d02725e0b55dc0fd940954cb55c
  
 Such differences will cause alternating and endless updates of an instance's 
attributes if we enable nova audit.
  
 So I suggest we separate the meters which come from notifications into another 
resource type like "instance_from_notification".
  
 Any other idea?
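
 For illustration, a sketch of such a split in the same resource definition
 format as above; the new resource_type name and the exact meter split are
 assumptions:

   # Sketch only: meters polled by the compute agent stay on 'instance',
   # while meters that arrive via notifications get their own resource type
   # so the two attribute sources no longer overwrite each other.
   - resource_type: instance
     metrics:
       - 'cpu'
       - 'cpu_util'
       - 'memory.usage'
       - 'memory.resident'
     attributes:
       host: resource_metadata.host
       image_ref: resource_metadata.image_ref_url

   - resource_type: instance_from_notification
     metrics:
       - 'instance'
       - 'memory'
       - 'vcpus'
       - 'disk.root.size'
     attributes:
       host: resource_metadata.host
       image_ref: resource_metadata.image_ref_url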
 
 
  --
 Luo Gangyi   luogan...@chinamobile.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone]How do we avoid token expired?

2015-12-20 Thread zhu4236926
Hi guys,
I'm using cinder-client to back up a volume and query the status of the 
backup volume in a loop every 3 seconds. I got the token from keystone when I 
began to back up the volume. Assuming that the token expires in 2 minutes but 
the backup needs 5 minutes to finish, after 3 minutes the token is expired and 
authentication fails. How should I solve this problem using cinder-client or 
keystone-client, or could you suggest another solution to avoid it?
Thank you!!!
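
One common way around this (see also the session-based reply earlier in this
digest) is to hold a keystoneauth session instead of a raw token, so a new
token is fetched automatically when the old one expires. A rough sketch of the
polling loop, assuming cinderclient accepts a session and using placeholder
credentials and volume id:

    import time

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from cinderclient import client as cinder_client

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='demo', password='secret', project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    cinder = cinder_client.Client('2', session=session.Session(auth=auth))

    # Poll the backup status every 3 seconds; the session re-authenticates
    # behind the scenes, so the loop keeps working past token expiry.
    backup = cinder.backups.create('VOLUME_UUID')
    while True:
        backup = cinder.backups.get(backup.id)
        if backup.status in ('available', 'error'):
            break
        time.sleep(3)
    print('backup finished with status: %s' % backup.status)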


By 
Sylvernass




 





 __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Dragonflow] - IRC Meeting tomorrow (12/21) - 0900 UTC

2015-12-20 Thread Gal Sagie
Hello All,

We will have an IRC meeting tomorrow (Monday, 12/21) at 0900 UTC
in #openstack-meeting-4

Please review the expected meeting agenda here:
https://wiki.openstack.org/wiki/Meetings/Dragonflow

You can view last meeting action items and logs here:
http://eavesdrop.openstack.org/meetings/dragonflow/2015/dragonflow.2015-12-14-09.00.html


Please update the agenda if you have any subject you would like to discuss
about.

Thanks
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack][magnum] Quota for Magnum Resources

2015-12-20 Thread Jay Lau
I also share Lee's concern: Magnum depends on Heat, and Heat needs to call
Nova, Cinder and Neutron to create the Bay resources. But Nova and Cinder
each have their own quota policies, so if we define quotas again in Magnum,
how do we handle the conflict? Another point is that limiting Bays by quota
seems a bit coarse-grained, as different bays may have different
configurations and resource requests. Comments? Thanks.

On Thu, Dec 17, 2015 at 4:10 AM, Lee Calcote  wrote:

> Food for thought - there is a cost to FIPs (in the case of public IP
> addresses), security groups (to a lesser extent, but in terms of the
> computation of many hundreds of them), etc. Administrators may wish to
> enforce quotas on a variety of resources that are direct costs or indirect
> costs (e.g. # of bays, where a bay consists of a number of multi-VM /
> multi-host pods and services, which consume CPU, mem, etc.).
>
> If Magnum quotas are brought forward, they should govern (enforce quota)
> on Magnum-specific constructs only, correct? Resources created by Magnum
> COEs should be governed by existing quota policies governing said resources
> (e.g. Nova and vCPUs).
>
> Lee
>
> On Dec 16, 2015, at 1:56 PM, Tim Bell  wrote:
>
> -Original Message-
> From: Clint Byrum [mailto:cl...@fewbar.com ]
> Sent: 15 December 2015 22:40
> To: openstack-dev 
> Subject: Re: [openstack-dev] [openstack][magnum] Quota for Magnum
> Resources
>
> Hi! Can I offer a counter point?
>
> Quotas are for _real_ resources.
>
>
> The CERN container specialist agrees with you ... it would be good to
> reflect on the needs given that ironic, neutron and nova are policing the
> resource usage. Quotas in the past have been used for things like key pairs
> which are not really real.
>
> Memory, CPU, disk, bandwidth. These are all _closely_ tied to things that
> cost real money and cannot be conjured from thin air. As such, the user
> being able to allocate 1 billion or 2 containers is not limited by Magnum,
> but by real things that they must pay for. If they have enough Nova quota
> to allocate 1 billion tiny pods, why would Magnum stop them? Who actually
> benefits from that limitation?
>
> So I suggest that you not add any detailed, complicated quota system to
> Magnum. If there are real limitations to the implementation that Magnum
> has chosen, such as we had in Heat (the entire stack must fit in memory),
> then make that the limit. Otherwise, let their vcpu, disk, bandwidth, and
> memory quotas be the limit, and enjoy the profit margins that having an
> unbound force multiplier like Magnum in your cloud gives you and your
> users!
>
> Excerpts from Vilobh Meshram's message of 2015-12-14 16:58:54 -0800:
>
> Hi All,
>
> Currently, it is possible to create an unlimited number of resources like
> bay/pod/service. In Magnum, there should be a limit on how many Magnum
> resources a user or project can create, and the limit should be
> configurable [1].
>
> I propose the following design:
>
> 1. Introduce new table magnum.quotas
> +------------+--------------+------+-----+---------+----------------+
> | Field      | Type         | Null | Key | Default | Extra          |
> +------------+--------------+------+-----+---------+----------------+
> | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
> | created_at | datetime     | YES  |     | NULL    |                |
> | updated_at | datetime     | YES  |     | NULL    |                |
> | deleted_at | datetime     | YES  |     | NULL    |                |
> | project_id | varchar(255) | YES  | MUL | NULL    |                |
> | resource   | varchar(255) | NO   |     | NULL    |                |
> | hard_limit | int(11)      | YES  |     | NULL    |                |
> | deleted    | int(11)      | YES  |     | NULL    |                |
> +------------+--------------+------+-----+---------+----------------+
>
> resource can be Bay, Pod, Containers, etc.
>
>
> 2. API controller for quota will be created to make sure basic CLI
> commands work.
>
> quota-show, quota-delete, quota-create, quota-update
>
> 3. When the admin specifies a quota of X resources to be created, the code
> should abide by that. For example, if the hard limit for Bays is 5 (i.e. a
> project can have a maximum of 5 Bays) and a user in that project tries to
> exceed that hard limit, it won't be allowed. The same goes for other
> resources (a rough sketch of this check follows at the end of this message).
>
>
> 4. Please note the quota validation only works for resources created
> via Magnum. Could not think of a way that Magnum to know if a COE
> specific utilities created a resource in background. One way could be
> to see the difference between whats stored in magnum.quotas and the
> information of the actual resources created for a particular bay in
>
> k8s/COE.
>
>
> 5. Introduce a config variable to set quotas values.
>
> If everyone agrees will start the changes by introducing quota
> restrictions on Bay creation.
>
> Thoughts ??
>
>
> -Vilobh
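
For what it's worth, points 1 and 3 of the proposal quoted above could look
roughly like the following. This is a sketch only: Magnum's real model base
classes, session handling and how in-use resources are counted are assumptions
here, and it simply mirrors the column listing and the hard-limit check
described in point 3.

    import datetime

    from sqlalchemy import Column, DateTime, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()


    class Quota(Base):
        """Per-project hard limit for one Magnum resource type (e.g. 'bay')."""
        __tablename__ = 'quotas'

        id = Column(Integer, primary_key=True, autoincrement=True)
        created_at = Column(DateTime, default=datetime.datetime.utcnow)
        updated_at = Column(DateTime, onupdate=datetime.datetime.utcnow)
        deleted_at = Column(DateTime, nullable=True)
        project_id = Column(String(255), index=True)
        resource = Column(String(255), nullable=False)
        hard_limit = Column(Integer, nullable=True)
        deleted = Column(Integer, default=0)


    class QuotaExceeded(Exception):
        pass


    def check_quota(session, project_id, resource, in_use):
        """Raise if creating one more `resource` would exceed the hard limit."""
        quota = (session.query(Quota)
                 .filter_by(project_id=project_id, resource=resource, deleted=0)
                 .first())
        if quota is not None and quota.hard_limit is not None:
            if in_use + 1 > quota.hard_limit:
                raise QuotaExceeded('%s quota exceeded for project %s (limit %d)'
                                    % (resource, project_id, quota.hard_limit))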

Re: [openstack-dev] [Kuryr] IRC Meeting - Tuesday 0300 UTC (12/15)

2015-12-20 Thread Vikas Choudhary
Hi All,

I request you all to please go through the IPAM changes and the related
changes in the network driver. It's been a very long time since these
changes have been in review.

https://review.openstack.org/#/q/owner:+vikaschoudhary16+status:open


Thanks & Regards
Vikas Choudhary


On Sun, Dec 13, 2015 at 5:33 PM, Gal Sagie  wrote:

> Hello All,
>
> I have updated the agenda for the upcoming Kuryr IRC meeting [1]
> Please review and add any additional topics you might want to cover.
>
> Also please go over last meeting action items [2] , there are still
> patches (IPAM)
> that are looking for review love :)
>
> Since this is the week we do the meeting in an alternating time, i won't
> be able to attend
> (and i believe toni won't be able to as well)
> Taku/banix please run the meeting.
>
> banix, would love if you can update regarding the team/peoples going to
> work
> on testing/CI for Kuryr, i think this is a top priority for us at this
> cycle.
>
> [1] https://wiki.openstack.org/wiki/Meetings/Kuryr
> [2]
> http://eavesdrop.openstack.org/meetings/kuryr/2015/kuryr.2015-12-07-15.00.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][designate] Regarding Designate install through Openstack-Ansible

2015-12-20 Thread Sharma Swati6
 Hi All,

Thanks a lot for your valuable feedback, Jesse.

Point 1 :

I have made the appropriate Designate entry in the file 
/playbooks/defaults/repo_packages/openstack_services.yml and uploaded it for 
review here: 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/defaults/repo_packages/openstack_services.yml
Here, I have taken 'designate_git_install_branch:' as the most recent SHA as 
of 17.12.2015.
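
For reference, the entry would look something like the snippet below. The
exact variable names and comment style should be checked against the existing
service entries in that file, and the SHA here is only a placeholder:

    ## Designate service source (sketch; placed alphabetically after Cinder)
    designate_git_repo: https://git.openstack.org/openstack/designate
    designate_git_install_branch: 0000000000000000000000000000000000000000 # placeholder SHA as of 17.12.2015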

Point 2 :

The execution of tasks and handlers is very well explained in your answer. 
Thanks for that :)

Point 3 :

With regards to creating a DB user & DB, I have modeled the file from glance 
and placed it here: 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/designate_db_setup.yml
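
A trimmed-down sketch of what that file typically needs to do, modeled on the
glance example Jesse linked; the variable names used here are assumptions and
should match the rest of the role's defaults:

    # Sketch: create the designate database and DB user on the Galera host.
    - name: Create DB for service
      mysql_db:
        login_user: root
        login_password: "{{ galera_root_password }}"
        login_host: "{{ galera_address }}"
        name: "{{ designate_galera_database }}"
        state: present

    - name: Grant access to the DB for the service
      mysql_user:
        login_user: root
        login_password: "{{ galera_root_password }}"
        login_host: "{{ galera_address }}"
        name: "{{ designate_galera_user }}"
        password: "{{ designate_container_mysql_password }}"
        host: "%"
        priv: "{{ designate_galera_database }}.*:ALL"
        state: present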

Point 4 :

I also raised that I am facing an error while running the playbook; I have 
pasted the results at http://paste.openstack.org/show/482171/ . On IRC, 
Jesse recommended attaching to the designate container first and checking the 
internet connection. 
I did attach to the new designate_container and pinged an address as a 
connectivity check. This works fine, but I still get the same error while 
running the playbook. 
Any other probable cause? 

Once this is done, I will check out the next steps suggested by Jesse, i.e. 
verify that the designate service appears in the Keystone service catalog and 
interact with it via the CLI.

Please share your suggestions.

Thanks & Regards
 Swati Sharma
 System Engineer
 Tata Consultancy Services
 Mailto: sharma.swa...@tcs.com
 Website: http://www.tcs.com
 
 Experience certainty.  IT Services
Business Solutions
Consulting
 
 

-Jesse Pretorius  wrote: -
To: "OpenStack Development Mailing List (not for usage questions)" 

From: Jesse Pretorius 
Date: 12/17/2015 04:35PM
Cc: pandey.pree...@tcs.com, Partha Datta 
Subject: Re: [openstack-dev] Regarding Designate install through
Openstack-Ansible

Hi Swati,

It looks like you're doing well so far! In addition to my review feedback via 
IRC, let me try to answer your questions.

The directory containing the files which hold the SHA's is here:
https://github.com/openstack/openstack-ansible/tree/master/playbooks/defaults/repo_packages

Considering that Designate is an OpenStack Service, the appropriate entries 
should be added into this file:
https://github.com/openstack/openstack-ansible/blob/master/playbooks/defaults/repo_packages/openstack_services.yml

The order of the services is generally alphabetic, so Designate should be added 
after Cinder and before Glance.

I'm not sure I understand your second question, but let me try and respond with 
what I think you're asking. Assuming a running system with all the other 
components, and an available container for Designate, the workflow will be:

1 - you execute the os-designate-install.yml playbook.
2 - Ansible executes the pre-tasks, then the role at 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/os-designate-install.yml#L64
3 - Ansible then executes 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/main.yml
4 - Handlers are triggered when you notify them, for example: 
https://github.com/sharmaswati6/designate_files/blob/master/playbooks/roles/os_designate/tasks/designate_post_install.yml#L54

Does that help you understand how the tasks and handlers are included for 
execution? Does that answer your question?
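
To illustrate step 4 above with a generic example (the names here are
illustrative, not the actual role's tasks or handlers):

    # tasks/designate_post_install.yml (illustrative)
    - name: Drop designate config
      template:
        src: designate.conf.j2
        dest: /etc/designate/designate.conf
      notify:
        - Restart designate services

    # handlers/main.yml (illustrative)
    - name: Restart designate services
      service:
        name: "{{ item }}"
        state: restarted
      with_items:
        - designate-api
        - designate-central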

With regards to creating a DB user & DB - as you've modeled the role on Aodh, 
which doesn't use Galera, you're missing that part. An example you can model 
from is here: 
https://github.com/openstack/openstack-ansible/blob/master/playbooks/roles/os_glance/tasks/glance_db_setup.yml

Question 4 is a complex one, and I don't know enough about Designate to answer 
properly. From what I can see you're already doing the following in the role:

1 - preparing the host/container, RabbitMQ (and soon will be doing the DB) for 
a Designate deployment
2 - installing the apt and python packages required for Designate to be able to 
run
3 - placing down the config files and upstart scripts for Designate to run
4 - registering the Designate service endpoint

Once that's done, I'm not entirely sure what else needs to be done to make 
Designate do what it needs to do. At that point, are you able to see the 
service in the Keystone service catalog? Can you interact with it via the CLI?

A few housekeeping items relating to the use of email and the mailing list:

If you wish to gain the attention of particular communities on the 
openstack-dev mailing list, the best is to tag the subject line. In this 
particular case as you're targeting the OpenStack-Ansible community with 
questions you should add '[openstack-ansible]' as a tag in your subject line. 
If you were also targeting questions regarding Designate, or wi