Re: [openstack-dev] [Neutron]

2017-11-22 Thread Andreas Jaeger
On 2017-11-23 08:36,  Frank Wang  wrote:
> Hello, openstackers,
> 
> I'd like to know if neutron-lib supports Localization &
> Internationalization. I found neutron.po for neutron, but no *.po for
> neutron-lib. Does anyone have any idea how to support localization for
> neutron-lib? I appreciate it. Please let me know if I miss something.

It is set up for translation but looking at the translation server, no
real work has been done and thus nothing was imported:

https://translate.openstack.org/project/view/neutron-lib?dswid=4366

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] "Show all" buttons broken for api-guide

2017-11-22 Thread Andreas Jaeger
On 2017-11-23 08:06, Gilles Dubreuil wrote:
> 
> 
> On 23/11/17 18:03, Gilles Dubreuil wrote:
>> Hi,
>>
>> Is that just me?
>>
>> The "Show all" button for any of the
>> "https://developer.openstack.org/api-guide/quick-start/*" pages is 
> 
> *not*
> 
>> working.
>> It normally expands (and collapses with the "Hide all" button) all the
>> resources for the specific guide.

please file a bug report against os-api-ref, for details see
https://docs.openstack.org/os-api-ref/latest/contributing.html

andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron]

2017-11-22 Thread Frank Wang
Hello, openstackers,


I'd like to know if neutron-lib supports Localization & Internationalization. I
found neutron.po for neutron, but no *.po for neutron-lib. Does anyone have
any idea how to support localization for neutron-lib? I appreciate it. Please
let me know if I miss something.
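
(For reference, a minimal sketch of the oslo.i18n wiring an OpenStack library
typically uses to mark strings for translation — the module layout and domain
name here are illustrative assumptions, not a claim about neutron-lib's actual
code:)

    # Minimal sketch of the usual oslo.i18n "_i18n.py" pattern for a library.
    # Domain name and layout are illustrative.
    import oslo_i18n

    DOMAIN = 'neutron_lib'

    _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

    # The primary translation function, conventionally imported as "_" and
    # used to wrap user-facing strings, e.g. _('Invalid port %s') % port_id.
    _ = _translators.primary


    def get_available_languages():
        # Languages for which message catalogs (*.po/*.mo) are installed.
        return oslo_i18n.get_available_languages(DOMAIN)

Strings wrapped with _() end up in the generated .pot catalog, which is what
the translation tooling imports into translate.openstack.org.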


Thanks.
Frank


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] [api] Use internal URL for placement-api

2017-11-22 Thread mihaela.balas
Hello,

Is there any setting we can provide to nova-compute in nova.conf (the
[placement] section) so that it will use the internal URL for the placement
API? By default (in Newton), I see that it uses the public URL, and our compute
nodes do not have access to the public IP address.
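
(For reference, a sketch of the kind of nova.conf snippet in question — purely
illustrative: the option names and their availability vary by release, and
selecting the endpoint interface may not be supported in Newton itself, so
check the config reference for your version:)

    # Illustrative sketch of nova.conf on a compute node; values are placeholders.
    [placement]
    auth_type = password
    auth_url = http://keystone.internal.example:5000/v3
    username = placement
    password = <placement-service-password>
    project_name = service
    user_domain_name = Default
    project_domain_name = Default
    os_region_name = RegionOne
    # Ask keystoneauth for the internal endpoint of the placement service:
    os_interface = internal
    # valid_interfaces = internal   (newer releases use this option instead)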

Thank you,
Mihaela Balas

_


This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift][Keystone] Swift vs Keystone permission models

2017-11-22 Thread John Dickinson


On 22 Nov 2017, at 22:08, Adrian Turjak wrote:

> Hello fellow openstackers,
>
> I'm trying to figure something out that confuses me around how Swift
> handles roles from Keystone, and what the ACLs allow.
>
> In the Swift config you can specify roles which can manage the
> containers for their project ('operator_roles'), and anyone with such a
> role appears to bypass ACLs on their containers.
>
> But beyond that Swift doesn't seem to handle roles in the same way other
> projects do. It has no policy.json file, so you can't limit access to
> the Swift API to specific roles beyond 'operator_roles'. To do any real
> limiting in Swift you have to use ACLs. Sure you can limit specific
> containers via ACLs to a user on a project, and even with a given role,
> but ACLs are user defined, and it feels odd that they would bypass scope
> and roles.
>
> If you assign an ACL to a container for a given user but don't specify a
> project, Swift only cares that the user is authenticated (it does at least
> need to be a scoped token, right?) and that the ACL is valid, but does
> not really respect role/token scope.
>
> That means that even if you wanted to do a read_only role for everything
> (nova, cinder, etc), you could always bypass that with ACLs in Swift.
> This means Swift's authorisation model can entirely bypass the Keystone
> one in the context of Swift. This seems kind of broken. I can understand
> some cases where that would be useful, but it seems to go against the
> rest of the authorisation model in OpenStack, where roles define
> explicitly where and what you have access to (or at least are meant to).
>
> Am I understanding this wrong? Or missing something obvious? Or is this
> just how it is and it won't change? Because it feels wrong, and I'm not
> sure if that's just me not understanding it, me being paranoid in ways I
> shouldn't, or this really isn't right. I don't like the idea that we
> have two authorisation mechanisms (the core one being Keystone), one of
> which can be bypassed by Swift ACLs within Swift itself. That in effect
> gives Swift higher precedence than Keystone when it comes to scope over
> its own resources. It means there are multiple sources of truth: one
> which is the authority for all other services, and another that is the
> authority for itself. That might make for all kinds of mistakes, as
> people will assume that Keystone scope is honored everywhere, since
> mostly that is the case.
>
> I'm asking because I'd like to set up fine-grained roles for each
> service, and when I make a role that can only talk to Nova, I don't
> really like the idea of an ACL being able to bypass that. Not to mention
> there really isn't anything role based I can do via roles/Keystone for
> Swift that can't be bypassed in some way by ACLs, nor can I make a role
> that is read_only for Swift for that given project. I can't have
> swift_readonly, swift_write, swift_manage (manage being able to do
> ACLs). Even with account level ACLs (which don't yet work with Keystone
> anyway), they wouldn't be implied by roles and would have to be set
> manually on project creation, so... it doesn't really work either.
>
> Part of me would at least feel far more comfortable if there was a
> setting in Swift that enforced roles and scope so that you could only
> ever talk to Swift in your project regardless of ACLs, but that feels
> like only one of many things that would need to happen. My ideal
> scenario for Swift in OpenStack is that Swift always respects roles and
> scope, with ACLs then acting as a way of granting fine-grained
> per-user/group permissions to containers/objects within that scope.
> Sharing between projects may be useful, but rescoping to the other
> project isn't too hard if everything is mostly role based, and sharing
> to a project/user that cannot outright accept that sharing permission is
> innately scary (which is why Glance's cross-project sharing model works
> well). Even more so if the user can't audit their permissions (can a
> user see what ACLs apply to them?).
>
> I'm hoping saner minds can help me either understand or figure out if
> I'm being silly, or if the permission model between Swift and Keystone
> really is weird/broken.
>
> This is also coming from a public cloud perspective rather than a
> private one, so who knows if what I'm trying to solve fits with what
> others may be thinking. I'm also curious how other clouds look at this,
> and what their views are around permissions management between keystone
> and swift.
>
> Cheers,
> Adrian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The short answer (and you should probably get a longer one at some point) is 
that you're right. I wouldn't say it's 

Re: [openstack-dev] [docs] "Show all" buttons broken for api-guide

2017-11-22 Thread Gilles Dubreuil



On 23/11/17 18:03, Gilles Dubreuil wrote:

Hi,

Is that just me?

The "Show all" button for any of the
"https://developer.openstack.org/api-guide/quick-start/*" pages is

*not*

working.
It normally expands (and collapses with the "Hide all" button) all the
resources for the specific guide.


--
Gil



--
Gilles Dubreuil
Senior Software Engineer, Openstack DFG Integration
Mobile: +61 400 894 219
Email: gil...@redhat.com
GitHub/IRC: gildub



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] "Show all" buttons broken for api-guide

2017-11-22 Thread Gilles Dubreuil

Hi,

Is that just me?

The "Show all" button for any of the 
"https://developer.openstack.org/api-guide/quick-start/*" pages is working.
It normally expands (and collapses with the "Hide all" button) all the 
resources for the specific guide.


--
Gil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Zun] Change in Zun core team

2017-11-22 Thread Kevin Zhao
+1 for both

On 22 November 2017 at 09:08, Shuu Mutou  wrote:

> +1 for them, includes the new voting schema.
>
> Best regards,
> Shu
>
>
> > -Original Message-
> > From: Hongbin Lu [mailto:hongbin...@gmail.com]
> > Sent: Wednesday, November 22, 2017 8:16 AM
> > To: OpenStack Development Mailing List (not for usage questions)
> > 
> > Cc: miao.hong...@zte.com.cn
> > Subject: [openstack-dev] [Zun] Change in Zun core team
> >
> > Hi all,
> >
> > I would like to announce the following change to the Zun core reviewers
> > team.
> >
> > + miaohb (miao-hongbao)
> > - Sheel Rana (ranasheel2000)
> >
> > Miaohb has consistently contributed to Zun for a few months. So far,
> > he has 60 commits in Zun, which ranks in the top 3 in the commit metric. I
> > think his hard work justifies his qualification as a core reviewer in
> > Zun.
> >
> > This change was approved unanimously by the existing core team. Below are
> > the core team members who supported this change:
> >
> > Hongbin Lu
> > Shunli Zhou
> > Kien Nguyen
> > Kevin Zhao
> > Madhuri Kumari
> > Namrata Sitlani
> > Shubham Sharma
> >
> > Best regards,
> > Hongbin
> >
> > [1] http://stackalytics.com/?metric=commits&module=zun-group
> > 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift][Keystone] Swift vs Keystone permission models

2017-11-22 Thread Adrian Turjak
Hello fellow openstackers,

I'm trying to figure something out that confuses me around how Swift
handles roles from Keystone, and what the ACLs allow.

In the Swift config you can specify roles which can manage the
containers for their project ('operator_roles'), and anyone with such a
role appears to bypass ACLs on their containers.

But beyond that Swift doesn't seem to handle roles in the same way other
projects do. It has no policy.json file, so you can't limit access to
the Swift API to specific roles beyond 'operator_roles'. To do any real
limiting in Swift you have to use ACLs. Sure you can limit specific
containers via ACLs to a user on a project, and even with a given role,
but ACLs are user defined, and it feels odd that they would bypass scope
and roles.

If you assign an ACL to a container for a given user but don't specify a
project, Swift only cares that the user is authenticated (it does at least
need to be a scoped token, right?) and that the ACL is valid, but does
not really respect role/token scope.
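
(For reference, a sketch of the two mechanisms being contrasted here —
assumptions: typical keystoneauth middleware settings and python-swiftclient
ACL syntax; values are illustrative:)

    # proxy-server.conf -- Keystone roles that get full control of containers
    # in their own project, bypassing container ACLs (illustrative values).
    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    operator_roles = admin, swiftoperator
    # Optional: a role with cluster-wide access across all accounts.
    # reseller_admin_role = ResellerAdmin

A container ACL, by contrast, is set by the user, e.g.
"swift post mycontainer --read-acl 'otherproject:otheruser'" (assumed
python-swiftclient syntax for a <project>:<user> ACL element), and is what
permits the cross-project access described here.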

That means that even if you wanted to do a read_only role for everything
(nova, cinder, etc), you could always bypass that with ACLs in Swift.
This means Swift's authorisation model can entirely bypass the Keystone
one in the context of Swift. This seems kind of broken. I can understand
some cases where that would be useful, but it seems to go against the
rest of the authorisation model in OpenStack, where roles define
explicitly where and what you have access to (or at least are meant to).

Am I understanding this wrong? Or missing something obvious? Or is this
just how it is and it won't change? Because it feels wrong, and I'm not
sure if that's just me not understanding it, me being paranoid in ways I
shouldn't, or this really isn't right. I don't like the idea that we
have two authorisation mechanisms (the core one being Keystone), one of
which can be bypassed by Swift ACLs within Swift itself. That in effect
gives Swift higher precedence than Keystone when it comes to scope over
its own resources. It means there are multiple sources of truth: one
which is the authority for all other services, and another that is the
authority for itself. That might make for all kinds of mistakes, as
people will assume that Keystone scope is honored everywhere, since
mostly that is the case.

I'm asking because I'd like to set up fine-grained roles for each
service, and when I make a role that can only talk to Nova, I don't
really like the idea of an ACL being able to bypass that. Not to mention
there really isn't anything role based I can do via roles/Keystone for
Swift that can't be bypassed in some way by ACLs, nor can I make a role
that is read_only for Swift for that given project. I can't have
swift_readonly, swift_write, swift_manage (manage being able to do
ACLs). Even with account level ACLs (which don't yet work with Keystone
anyway), they wouldn't be implied by roles and would have to be set
manually on project creation, so... it doesn't really work either.

Part of me would at least feel far more comfortable if there was a
setting in Swift that enforced roles and scope so that you could only
ever talk to Swift in your project regardless of ACLs, but that feels
like only one of many things that would need to happen. My ideal
scenario for Swift in OpenStack is that Swift always respects roles and
scope, with ACLs then acting as a way of granting fine-grained
per-user/group permissions to containers/objects within that scope.
Sharing between projects may be useful, but rescoping to the other
project isn't too hard if everything is mostly role based, and sharing
to a project/user that cannot outright accept that sharing permission is
innately scary (which is why Glance's cross-project sharing model works
well). Even more so if the user can't audit their permissions (can a
user see what ACLs apply to them?).

I'm hoping saner minds can help me either understand or figure out if
I'm being silly, or if the permission model between Swift and Keystone
really is weird/broken.

This is also coming from a public cloud perspective rather than a
private one, so who knows if what I'm trying to solve fits with what
others may be thinking. I'm also curious how other clouds look at this,
and what their views are around permissions management between keystone
and swift.

Cheers,
Adrian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIs schema consumption discussion

2017-11-22 Thread Gilles Dubreuil



On 23/11/17 07:04, Graham Hayes wrote:


On 16/11/17 01:56, Gilles Dubreuil wrote:

On 15/11/17 03:07, Doug Hellmann wrote:

Excerpts from Gilles Dubreuil's message of 2017-11-14 10:15:02 +1100:

Hi,

Follow-up conversation from our last "API SIG feedback and discussion
session" at Sydney Summit [1], about APIs schema consumption.

Let's summarize the current situation.

Each OpenStack project has an "API-source" folder containing RST files
describing its API structure ([2] for example). Those files are in turn
consumed by the Sphinx library to generate each project's API reference
manual which are then available in the API guide documentation [3]. Such
effort has made the APIs harmoniously consistent across all OpenStack
projects and has also created a "de-facto" API schema.

While the RST files are used by the documentation, they are not readily
consumable by SDKs. Of course, the APIs schema can be extracted by web
crawling the Reference guides, which in turn can be used by any
language. Such an approach works [4] and helped the Misty project [5] (a Ruby
SDK) get started, with more languages able to exploit the same approach.

Therefore to allow better automation, the next step would be to have the
APIs schema stored directly into each project's repo so the SDKs could
consume them straight from the source. This is why we've started
discussing how to have the schema either extracted from the RST files or,
alternatively, have the API described directly in its own file. The
latter would provide a different work flow: "Yaml -> RST -> Reference
doco" instead of "RST -> Reference doco -> Yaml".

So the question at this stage is: "Which of the work flow to choose
from?".

To clarify the needs, it's important to note that we found out that none
of the SDK projects, besides OSC (OpenStack Client, thanks to Dean),
have full-time teams to maintain each SDK, which, besides the
natural structural complexity inherent to the cloud context, makes the
task of keeping an SDK up to date very difficult, especially as projects
move forward. Automatically managing OpenStack APIs is inevitable for
consumers. Another example/feedback was provided by the presenters of
the "AT&T's Strategy for Implementing a Next Generation OpenStack Cloud"
session during the Sydney keynotes, as they don't handle the OpenStack API
manually!

APIs consumers and providers, any thoughts?

[1]
https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20442/api-sig-feedback-and-discussion-session

[2] https://github.com/openstack/nova/tree/master/api-guide/source
[3] https://developer.openstack.org/api-guide/quick-start/index.html
[4] https://github.com/flystack/openstack-APIs
[5] https://github.com/flystack/misty

Regards,
Gilles

Please do not build something that looks like SOAP based on parsing RST
files. Surely we can at least work directly from JSONSchema inputs?

I'm glad you said that :).
Working directly from YAML or JSON files (format to be discussed) to
maintain the schema seems (to me too) the natural approach.

That means every project would change its current practice: maintain the
schema files instead of maintaining RST files.
I suppose the suggestion to do it the other way around (parse the
RST files) was made because of the impact on current practice, but it
shouldn't be a blocker.

Gil


When I was talking to Gil about it, I suggested writing a new sphinx /
docutils formatter. I am not sure how feasible it would be, but it could
be possible (as sphinx has the whole page tree in memory when writing it
out, we may be able to output it in some sort of structured format).


That makes sense if the tree is already loaded; could you please provide
a pointer?
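
(For what it's worth, a minimal sketch of what such a builder could look like,
assuming Sphinx's standard Builder API — the class and output format are
illustrative, not an existing OpenStack tool:)

    # Hypothetical sketch: a Sphinx builder that dumps each parsed page's
    # titles and literal blocks to JSON instead of rendering HTML.
    import json
    import os

    from docutils import nodes
    from sphinx.builders import Builder


    class APIJSONBuilder(Builder):
        name = 'apijson'
        format = 'json'

        def get_outdated_docs(self):
            # Always rebuild; good enough for a small api-ref tree.
            return self.env.found_docs

        def get_target_uri(self, docname, typ=None):
            return docname + '.json'

        def prepare_writing(self, docnames):
            pass

        def write_doc(self, docname, doctree):
            # The doctree is the in-memory page tree mentioned above; walk it
            # and keep only the pieces an SDK generator might care about.
            data = {
                'doc': docname,
                'titles': [n.astext() for n in doctree.traverse(nodes.title)],
                'literal_blocks': [n.astext()
                                   for n in doctree.traverse(nodes.literal_block)],
            }
            outfile = os.path.join(self.outdir,
                                   docname.replace('/', '_') + '.json')
            with open(outfile, 'w') as f:
                json.dump(data, f, indent=2)


    def setup(app):
        app.add_builder(APIJSONBuilder)
        return {'version': '0.1'}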




I would be hesitant to change how we write docs - this change took long
enough to get in place, and the ability to add / remove bits to suit
different projects is a good thing. Pages like [1] would be hard to do
in a standard machine readable format, and I think they definitely make
the docs better.


First off, let me insist: "The reference guides are absolutely great". I
guess that's the price of success! :)
So, from outside the doc generation process (viewed as a black box), it made
sense to have a workflow going from a structured tree to the docs;
meanwhile, if the same information can be obtained from the existing files,
that sounds good.




- Graham

1 - https://developer.openstack.org/api-ref/compute/#servers-servers



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Gil

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-22 Thread jun zhong
Thank you Ben and the Manila team, and also thanks to my mentor Goutham, who
always encourages me and has helped me a lot. It's my honor to contribute to
Manila and to be a member of the manila core reviewer team. Looking forward
to working with all of you!

2017-11-23 5:43 GMT+08:00 Ben Swartzlander :

> On 11/19/2017 06:29 PM, Ravi, Goutham wrote:
>
>> Hello Manila developers,
>>
>> I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on gerrit)
>> to be part of the Manila core team. Zhongjun has been an important member
>> of our community since the Kilo release, and has, in the past few releases
>> made significant contributions to the constellation of projects related to
>> openstack/manila [1]. She is also our ambassador in the APAC
>> region/timezones. Her opinion is valued amongst the core team and I think,
>> as a core reviewer and maintainer, she would continue to help grow and
>> maintain our project.
>>
>> Please respond with a +1/-1.
>>
>> We will not be having an IRC meeting this Thursday (23^rd November 2017),
>> so if we have sufficient quorum, PTL extraordinaire, Ben Swartzlander will
>> confirm her nomination here.
>>
>
> Welcome Jun to the manila core reviewer team! Your hard work and
> dedication to the Manila project is very appreciated! Normally we do these
> announcements during the weekly meetings, but since tomorrow's meeting is
> canceled, I'm adding you early. If you have any questions about
> responsibilities as a core reviewer, please ask me on IRC.
>
> -Ben Swartzlander
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Summary of ironic sessions from Sydney

2017-11-22 Thread Michael Still
Thanks for this summary. I'd say the cinder-booted IPA is definitely of
interest to the operators I've met. Building new IPAs, especially when
trying to iterate on what drivers are needed, is a pain, so being able to
iterate faster would be very useful. That said, I guess this implies
booting more than one machine off a volume at once?

Michael

On Wed, Nov 15, 2017 at 3:18 AM, Julia Kreger 
wrote:

> Greetings ironic folk!
>
> Like many other teams, we had very few ironic contributors make it to
> Sydney. As such, I wanted to go ahead and write up a summary that
> covers takeaways, questions, and obvious action items for the
> community that were raised by operators and users present during the
> sessions, so that we can use this as feedback to help guide our next
> steps and feature planning.
>
> Much of this is from my memory combined with notes on the various
> etherpads. I would like to explicitly thank NobodyCam for reading
> through this in advance to see if I was missing anything at a high
> level since he was present in the vast majority of these sessions, and
> dtantsur for sanity checking the content and asking for some
> elaboration in some cases.
>
> -Julia
>
>
>
> Ironic Project Update
> =
>
> Questions largely arose around use of boot from volume, including some
> scenarios we anticipated that would arise, as well as new scenarios
> that we had not considered.
>
> Boot nodes booting from the same volume
> ---
>
> From a technical standpoint, when BFV is used with iPXE chain loading,
> the chain loader reads the boot loader and related data from the
> cinder volume (or, realistically, any iSCSI volume). This means that a
> skilled operator is able to craft a specific volume that may just turn
> around and unpack a ramdisk and operate the machine solely from RAM,
> or that utilize an NFS root.
>
> This sort of technical configuration would not be something an average
> user would make use of, but there are actual use cases that some large
> scale deployment operators would make use of and that would provide
> them value.
>
> Additionally, this topic and the desire for this capability also come
> up during the “Building a bare metal cloud is hard” talk Q&A.
>
> Action Item: Check the data model to see if we prohibit, and consider
> removing the prohibition against using the same volume across nodes,
> if any.
>
> Cinder-less BFV support
> ---
>
> Some operators are curious about booting Ironic managed nodes without
> cinder in a BFV context. This is something we anticipated and built
> the API and CLI interfaces to support this. Realistically, we just
> need to offer the ability for the data to be read and utilized.
>
> Action Item: Review code and ensure that we have some sort of no-op
> driver or method that allows cinder-less node booting. For existing
> drivers, it would be the shipment of the information to the BMC or the
> write-out of iPXE templates as necessary.
>
> Boot IPA from a cinder volume
> -
>
> With larger IPA images, specifically in cases where the image contains
> a substantial amount of utilities or tooling to perform cleaning,
> providing a mechanism to point the deployment Ramdisk to a cinder
> volume would allow more efficient IO access.
>
> Action Item: Discuss further - Specifically how we could support as we
> would need to better understand how some of the operators might use
> such functionality.
>
> Dedicated Storage Fabric support
> 
>
> A question of dedicated storage fabric/networking support arose. For
> users of FibreChannel, they generally have a dedicated storage fabric
> by the very nature of separate infrastructure. However, with ethernet
> networking where iSCSI software initiators are used, or even possibly
> converged network adapters, things get a little more complex.
>
> Presently, with the iPXE boot from volume support, we boot using the
> same interface details for the neutron VIF that the node is attached
> with.
>
> Moving forward, with BFV, the concept was to support the use of
> explicitly defined interfaces as storage interfaces, which could be
> denoted as "volume connectors" in ironic by type defined as "mac". In
> theory, we begin to get functionality along these lines once
> https://review.openstack.org/#/c/468353/ lands, as the user could
> define two networks, and the storage network should then fall to the
> explicit volume connector interface(s). The operator would just need
> to ensure that the settings being used on that storage network are
> such that the node can boot and reach the iSCSI endpoint, and that a
> default route is not provided.
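
(For illustration, a sketch of how such a "mac"-typed volume connector and an
iSCSI target are registered — syntax assumed from the ironic OSC plugin that
shipped with the boot-from-volume work; values are placeholders:)

    # Illustrative only: register a dedicated storage NIC (by MAC) and an
    # iSCSI boot target for a node.
    openstack baremetal volume connector create \
      --node <node-uuid> --type mac --connector-id 52:54:00:ab:cd:ef

    openstack baremetal volume target create \
      --node <node-uuid> --type iscsi --boot-index 0 \
      --volume-id <cinder-volume-uuid>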
>
> The question then may be, does Ironic do this quietly for the user
> requesting the VM or not, and how do we document the use such that
> operators can conceptualize it. How do we make this work at a larger
> scale? How could this fit or not 

Re: [openstack-dev] [designate] Core Reviewers

2017-11-22 Thread da...@vn.fujitsu.com
Nice, welcome Jens! Hopefully this change will make Designate great again :)

> -Original Message-
> From: Graham Hayes [mailto:g...@ham.ie]
> Sent: Thursday, November 23, 2017 12:22 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: [openstack-dev] [designate] Core Reviewers
> 
> I have decided to start cycling out old core reviewers who are not as
> active, replacing them with new, more active reviewers.
> 
> The first change is
> 
> - Eric Larson (elarson)
> + Jens Harbott (frickler)
> 
> Unfortunately elarson has moved companies, and frickler has been
> consistently providing good, regular reviews.
> 
> Please welcome Jens to the team, and if Eric can rejoin us in the future, we
> can fast track him back to core.
> 
> Thanks,
> 
> Graham

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-11-22 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> "gong_ys2004"  writes:
>
>> Hi, everyone
>> I am trying to migrate tacker's functional CI job into new zuul v3
>> framework, but it seems:
>> 1. the devstack plugin order is not the one I specified in the .zuul.yaml:
>> https://review.openstack.org/#/c/516004/4/.zuul.yaml
>> I have:  devstack_plugins:
>> heat: https://git.openstack.org/openstack/heat
>> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
>> aodh: https://git.openstack.org/openstack/aodh
>> ceilometer: https://git.openstack.org/openstack/ceilometer
>> barbican: https://git.openstack.org/openstack/barbican
>> mistral: https://git.openstack.org/openstack/mistral
>> tacker: https://git.openstack.org/openstack/tacker
>> but the running order
>> seems: http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz:
>> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
>> I need barbican to start before tacker.
>
> [I changed the subject to replace the 'openstack' tag with 'devstack',
> which is what I assume was intended.]
>
>
> As Yatin Karel later notes, this is handled as a regular python
> dictionary which means we process the keys in an indeterminate order.
>
> I can think of a few ways we can address this:
>
...
> 3) Add dependency information to devstack plugins, but rather than
> having devstack resolve it, have the Ansible role which writes out the
> local.conf read that information and resolve the order.  This lets us
> keep the actual information in plugins so we don't have to continually
> update the role, but it lets us perform the processing in the role
> (which is in Python) when writing the config file.
...
> After considering all of those, I think I favor option 3, because we
> should be able to implement it without too much difficulty, it will
> improve things by providing a known and documented location for plugins
> to specify dependencies, and once it is in place, we can still implement
> option 1 later if we want, using the same declaration.

I discussed this with Dean and we agreed on something close to this
option, except that we would do it in such a way that devstack could
potentially make use of this in the future.  For starters, it will be
easy for devstack to error if someone adds plugins in the wrong order.
If someone feels like having a lot of fun, they could actually implement
a dependency resolver in devstack.
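
(For illustration, a minimal sketch of the kind of ordering resolution the
local.conf-writing role could do once each plugin declares what it requires;
the input mapping and names are purely illustrative — the real mechanism is
whatever the patches below define:)

    # Hypothetical sketch: order devstack plugins so dependencies come first.
    # The mapping (plugin -> set of plugins it requires) is assumed to be read
    # from each plugin's settings file by the role.

    def order_plugins(requires):
        """Return plugin names topologically sorted by their dependencies."""
        ordered = []
        visiting = set()

        def visit(name):
            if name in ordered:
                return
            if name in visiting:
                raise ValueError('dependency cycle involving %s' % name)
            visiting.add(name)
            for dep in requires.get(name, ()):
                visit(dep)
            visiting.discard(name)
            ordered.append(name)

        for plugin in requires:
            visit(plugin)
        return ordered


    # Example: tacker requires barbican, so barbican is emitted first.
    print(order_plugins({
        'barbican': set(),
        'heat': set(),
        'tacker': {'barbican', 'heat'},
    }))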

I have two patches which implement this idea:

https://review.openstack.org/521965
https://review.openstack.org/522054

Once those land, we'll need to add the appropriate lines to barbican and
tacker's devstack plugin settings files, then the job you're creating
should start those plugins in the right order automatically.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-22 Thread Ben Swartzlander

On 11/19/2017 06:29 PM, Ravi, Goutham wrote:

Hello Manila developers,

I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on 
gerrit) to be part of the Manila core team. Zhongjun has been an 
important member of our community since the Kilo release, and has, in 
the past few releases made significant contributions to the 
constellation of projects related to openstack/manila [1]. She is also 
our ambassador in the APAC region/timezones. Her opinion is valued 
amongst the core team and I think, as a core reviewer and maintainer, 
she would continue to help grow and maintain our project.


Please respond with a +1/-1.

We will not be having an IRC meeting this Thursday (23^rd November 
2017), so if we have sufficient quorum, PTL extraordinaire, Ben 
Swartzlander will confirm her nomination here.


Welcome Jun to the manila core reviewer team! Your hard work and 
dedication to the Manila project is very appreciated! Normally we do 
these announcements during the weekly meetings, but since tomorrow's 
meeting is canceled, I'm adding you early. If you have any questions 
about responsibilities as a core reviewer, please ask me on IRC.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread Doug Hellmann

> On Nov 22, 2017, at 11:22 AM, Ian Y. Choi  wrote:
> 
> Hello,
> 
> Maybe there would be some chance to be also considered with PDF builds?
> 
> I created a WIP patch on the openstack/horizon repository [1] to highlight
> which changes are needed for PDF build support on docs and releasenotes.
> 
> Although there is currently one warning when using "python setup.py
> build_sphinx" [2], I think the warning is quite fine for now, since using the
> sphinx-build command with the "-b latex" option works well, and such command
> execution is where the PTI is going, from my understanding [3].

It does make sense to build PDFs. Do you think we want to always build them?

> 
> (I am also copying this to openstack-docs mailing list for [4].)

We really really need to stop using that separate mailing list. Now that the 
project teams are managing their own documentation, we should just have the 
docs discussions on the -dev list so everyone can participate.

> 
> 
> With many thanks,
> 
> /Ian
> 
> [1] https://review.openstack.org/#/c/520620/
> [2] https://github.com/sphinx-doc/sphinx/issues/4259
> [3] 
> http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/tasks/main.yaml#n15
> [4] https://review.openstack.org/#/c/509297/
> 
> Doug Hellmann wrote on 11/23/2017 12:05 AM:
>> Excerpts from Monty Taylor's message of 2017-11-22 07:39:45 -0600:
>> 
>>> * We use -W for all releasenotes builds - this means warnings are always
>>> errors for releasenotes. That shouldn't bother anyone, as most of the
>>> releasenotes content is generated by reno anyway.
>> For projects that never had -W set, there may be invalid RST in old
>> branches. We hit that in ceilometer for mitaka and newton, and since
>> those branches were closed already we used "reno report" to generate
>> static RST pages to replace the reno directives. See
>> https://review.openstack.org/#/c/521548/ for an example of doing this if
>> your project has a similar issue.
>> 
>> Doug
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-docs mailing list
> openstack-d...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] APIs schema consumption discussion

2017-11-22 Thread Graham Hayes


On 16/11/17 01:56, Gilles Dubreuil wrote:
> 
> On 15/11/17 03:07, Doug Hellmann wrote:
>> Excerpts from Gilles Dubreuil's message of 2017-11-14 10:15:02 +1100:
>>> Hi,
>>>
>>> Follow-up conversation from our last "API SIG feedback and discussion
>>> session" at Sydney Summit [1], about APIs schema consumption.
>>>
>>> Let's summarize the current situation.
>>>
>>> Each OpenStack project has an "API-source" folder containing RST files
>>> describing its API structure ([2] for example). Those files are in turn
>>> consumed by the Sphinx library to generate each project's API reference
>>> manual which are then available in the API guide documentation [3]. Such
>>> effort has made the APIs harmoniously consistent across all OpenStack
>>> projects and has also created a "de-facto" API schema.
>>>
>>> While the RST files are used by the documentation, they are not readily
>>> consumable by SDKs. Of course, the APIs schema can be extracted by web
>>> crawling the Reference guides, which in turn can be used by any
>>> language. Such an approach works [4] and helped the Misty project [5] (a
>>> Ruby SDK) get started, with more languages able to exploit the same approach.
>>>
>>> Therefore to allow better automation, the next step would be to have the
>>> APIs schema stored directly into each project's repo so the SDKs could
>>> consume them straight from the source. This is why we've started
>>> discussing how to have the schema either extracted from the RST files or,
>>> alternatively, have the API described directly in its own file. The
>>> latter would provide a different work flow: "Yaml -> RST -> Reference
>>> doco" instead of "RST -> Reference doco -> Yaml".
>>>
>>> So the question at this stage is: "Which of the work flow to choose
>>> from?".
>>>
>>> To clarify the needs, it's important to note that we found out that none
>>> of the SDK projects, besides OSC (OpenStack Client, thanks to Dean),
>>> have full-time teams to maintain each SDK, which, besides the
>>> natural structural complexity inherent to the cloud context, makes the
>>> task of keeping an SDK up to date very difficult, especially as projects
>>> move forward. Automatically managing OpenStack APIs is inevitable for
>>> consumers. Another example/feedback was provided by the presenters of
>>> the "AT&T's Strategy for Implementing a Next Generation OpenStack Cloud"
>>> session during the Sydney keynotes, as they don't handle the OpenStack API
>>> manually!
>>>
>>> APIs consumers and providers, any thoughts?
>>>
>>> [1]
>>> https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20442/api-sig-feedback-and-discussion-session
>>>
>>> [2] https://github.com/openstack/nova/tree/master/api-guide/source
>>> [3] https://developer.openstack.org/api-guide/quick-start/index.html
>>> [4] https://github.com/flystack/openstack-APIs
>>> [5] https://github.com/flystack/misty
>>>
>>> Regards,
>>> Gilles
>> Please do not build something that looks like SOAP based on parsing RST
>> files. Surely we can at least work directly from JSONSchema inputs?
> 
> I'm glad you said that :).
> Working directly from YAML or JSON files (format to be discussed) to
> maintain the schema seems (to me too) the natural approach.
> 
> That means every project would change its current practice: maintain the
> schema files instead of maintaining RST files.
> I suppose the suggestion to do it the other way around (parse the
> RST files) was made because of the impact on current practice, but it
> shouldn't be a blocker.
> 
> Gil
> 

When I was talking to Gil about it, I suggested writing a new sphinx /
docutils formatter. I am not sure how feasible it would be, but it could
be possible (as sphinx has the whole page tree in memory when writing it
out, we may be able to output it in some sort of structured format).

I would be hesitant to change how we write docs - this change took long
enough to get in place, and the ability to add / remove bits to suit
different projects is a good thing. Pages like [1] would be hard to do
in a standard machine readable format, and I think they definitely make
the docs better.

- Graham

1 - https://developer.openstack.org/api-ref/compute/#servers-servers



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread Andreas Jaeger
On 2017-11-22 18:26, James E. Blair wrote:
> Monty Taylor  writes:
> 
>> * We use -W only if setup.cfg sets it
>>
>> * Installs dependencies via bindep for doc environment. Binary deps,
>> such as graphviz, should be listed in bindep and marked with a 'doc'
>> tag.
>>
>> * doc/requirements.txt is used for installation of python dependencies.
>> Things like whereto or openstackdocstheme should go there.
> 
> Should we add this info to the infra manual?
> 
> Similar to this?
> 
>   https://docs.openstack.org/infra/manual/drivers.html#package-requirements

Yes, I suggest updating it. The PTI documents doc/requirements.txt, but
the page linked above is the right place and should mention it as well.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Sergio Morales Acuña
Dear Spyros:

Thanks for your answer. I'm moving my cloud to Pike!

The problems I encountered were with the TCP listeners for the etcd
LoadBalancer and the "curl -sf" from the nodes to the etcd LB (I had to
add a -k).

I'm using Kolla Binary with CentOS 7, so I also had problems with the
kubernetes python libraries (they needed updates to be able to handle
IP addresses on certificates).

Cheers and thanks again.

El mié., 22 nov. 2017 a las 5:30, Spyros Trigazis ()
escribió:

> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña 
> wrote:
> > I'm using Openstack Ocata and trying Magnum.
> >
> > I encountered a lot of problems but I've been able to solve many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve
> them
> for everyone else?
>
> >
> > Now I'm curious about some aspects of Magnum:
> >
> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> > create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27, kubernetes, etcd and flannel are removed from the
> base image, so running them in containers is the only way.
>
> For RBAC, you need 1.8, and with Pike you can get it just by changing
> one parameter.
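
(For illustration, a sketch of how a kubernetes version/feature set is
typically selected in magnum via a cluster template label — the kube_tag label
and the values shown are assumptions, and whether that is the exact parameter
meant above is not confirmed here; check the magnum docs for your release:)

    # Illustrative only: pick a kubernetes tag >= 1.8 via a template label.
    openstack coe cluster template create k8s-atomic \
      --coe kubernetes \
      --image fedora-atomic-latest \
      --external-network public \
      --master-flavor m1.small --flavor m1.small \
      --labels kube_tag=v1.9.3

    openstack coe cluster create my-cluster \
      --cluster-template k8s-atomic --node-count 2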
>
> >
> > ¿Any one here using Magnum on daily basis? If yes, What version are you
> > using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
> running
> Pike and we use only the fedora atomic drivers.
>
> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
> >
> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> > upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
> >
> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option
> and can
> keep up with kubernetes easily.
>
> >
> > ¿Where I can found updated articles about the state of Magnum and it's
> > future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Cheers,
> Spyros
>
> >
> > Cheers
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Summary of ironic sessions from Sydney

2017-11-22 Thread Ruby Loo
Thank you Julia, for sacrificing yourself and going to Australia; I'm glad
the koalas didn't get you :)

This summary is GREAT! I'm trying to figure out how we take all these asks
into consideration with all the existing asks and TODOs that are on our
plate. I guess the best plan of action (and a bit more procrastination) is
to discuss this at our virtual mid-cycle meetup next week [1].

--ruby

[1]
http://lists.openstack.org/pipermail/openstack-dev/2017-November/124725.html


On Tue, Nov 14, 2017 at 11:18 AM, Julia Kreger 
wrote:

> Greetings ironic folk!
>
> Like many other teams, we had very few ironic contributors make it to
> Sydney. As such, I wanted to go ahead and write up a summary that
> covers takeaways, questions, and obvious action items for the
> community that were raised by operators and users present during the
> sessions, so that we can use this as feedback to help guide our next
> steps and feature planning.
>
> Much of this is from my memory combined with notes on the various
> etherpads. I would like to explicitly thank NobodyCam for reading
> through this in advance to see if I was missing anything at a high
> level since he was present in the vast majority of these sessions, and
> dtantsur for sanity checking the content and asking for some
> elaboration in some cases.
>
> -Julia
>
>
>
> Ironic Project Update
> =
>
> Questions largely arose around use of boot from volume, including some
> scenarios we anticipated that would arise, as well as new scenarios
> that we had not considered.
>
> Boot nodes booting from the same volume
> ---
>
> From a technical standpoint, when BFV is used with iPXE chain loading,
> the chain loader reads the boot loader and related data from the
> cinder volume (or, realistically, any iSCSI volume). This means that a
> skilled operator is able to craft a specific volume that may just turn
> around and unpack a ramdisk and operate the machine solely from RAM,
> or that utilize an NFS root.
>
> This sort of technical configuration would not be something an average
> user would make use of, but there are actual use cases that some large
> scale deployment operators would make use of and that would provide
> them value.
>
> Additionally, this topic and the desire for this capability also come
> up during the “Building a bare metal cloud is hard” talk Q&A.
>
> Action Item: Check the data model to see if we prohibit, and consider
> removing the prohibition against using the same volume across nodes,
> if any.
>
> Cinder-less BFV support
> ---
>
> Some operators are curious about booting Ironic managed nodes without
> cinder in a BFV context. This is something we anticipated and built
> the API and CLI interfaces to support this. Realistically, we just
> need to offer the ability for the data to be read and utilized.
>
> Action Item: Review code and ensure that we have some sort of no-op
> driver or method that allows cinder-less node booting. For existing
> drivers, it would be the shipment of the information to the BMC or the
> write-out of iPXE templates as necessary.
>
> Boot IPA from a cinder volume
> -
>
> With larger IPA images, specifically in cases where the image contains
> a substantial amount of utilities or tooling to perform cleaning,
> providing a mechanism to point the deployment Ramdisk to a cinder
> volume would allow more efficient IO access.
>
> Action Item: Discuss further - Specifically how we could support as we
> would need to better understand how some of the operators might use
> such functionality.
>
> Dedicated Storage Fabric support
> 
>
> A question of dedicated storage fabric/networking support arose. For
> users of FibreChannel, they generally have a dedicated storage fabric
> by the very nature of separate infrastructure. However, with ethernet
> networking where iSCSI software initiators are used, or even possibly
> converged network adapters, things get a little more complex.
>
> Presently, with the iPXE boot from volume support, we boot using the
> same interface details for the neutron VIF that the node is attached
> with.
>
> Moving forward, with BFV, the concept was to support the use of
> explicitly defined interfaces as storage interfaces, which could be
> denoted as "volume connectors" in ironic by type defined as "mac". In
> theory, we begin to get functionality along these lines once
> https://review.openstack.org/#/c/468353/ lands, as the user could
> define two networks, and the storage network should then fall to the
> explicit volume connector interface(s). The operator would just need
> to ensure that the settings being used on that storage network are
> such that the node can boot and reach the iSCSI endpoint, and that a
> default route is not provided.
>
> The question then may be, does Ironic do this quietly for the user
> requesting the VM or not, and how 

Re: [openstack-dev] [keystone][zuul] A Sad Farewell

2017-11-22 Thread Lance Bragstad
I've gone ahead and removed Jamie from the core reviewer group and the
keystone-drivers group in Launchpad.

Jamie, thanks again for all your hard work and remember that you're more
than welcome to come back anytime.


On 10/03/2017 05:54 PM, Monty Taylor wrote:
> On 10/03/2017 11:17 AM, Dean Troyer wrote:
>> On Mon, Oct 2, 2017 at 9:13 PM, Jamie Lennox 
>> wrote:
>>> I'm really sad to announce that I'll be leaving the OpenStack
>>> community (at
>>> least for a while), I've accepted a new position unrelated to OpenStack
>>> that'll begin in a few weeks, and am going to be mostly on holiday
>>> until
>>> then.
>>
>> No, this just will not do. -2
>
> I concur. Will a second -2 help?
>
>> Seriously, it has been a great pleasure to 'try to take over the
>> world' with you, at least that is what I recall as the goal we set in
>> Hong Kong.  The entire interaction of Python-based clients with
>> OpenStack has been made so much better with your contributions and
>> OpenStackClient would not have gotten as far as it has without them.
>
> Your contributions and impact around these parts cannot be overstated.
> I have enjoyed our time working together and hold your work and
> contributions in extremely high regard.
>
> Best of luck in your next endeavor - they are lucky to have you!
>
> Monty
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Add scenario tests based on multiple cells environment

2017-11-22 Thread Matt Riedemann

On 11/21/2017 2:37 AM, koshiya maho wrote:

Hi, all

Multiple cells (Nova-Cells-v2) is supported in Pike release.
It is necessary to confirm that existing APIs work appropriately in the 
multiple cells environment.
We will post multiple patches, so I created a blueprint [1] to make it easier to 
keep track of those patches.
Please check the contents and approve it.

[1] 
https://blueprints.launchpad.net/nova/+spec/add-multiple-cells-scenario-tests

Best regards,
--
Maho Koshiya
E-Mail : koshiya.m...@po.ntt-tx.co.jp



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



We don't really need a blueprint for this work. It would be good to know 
what gaps in existing testing you think exist. And where do you plan on 
implementing these tests? In the nova/tests/functional tree or somewhere 
else? We already have a lot of tests which are using a CellDatabase 
fixture which allow us to create multiple cell mappings for tests in the 
API.


If you're considering Tempest, tests there wouldn't really be 
appropriate because to the end user of the API, they should have no idea 
if they are talking to a cloud with multiple cells or not, since it's 
really a deployment issue.


What we don't have today in our CI testing, and that we need someone to 
work on, is running a devstack multi-node setup with at least two cells. 
This likely requires some work in the devstack-gate repo to configure 
devstack per node to tell it which cell it is.


I encourage you to bring this up in a weekly cells v2 meeting for 
further discussion:


http://eavesdrop.openstack.org/#Nova_Cellsv2_Meeting

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] No office hours and meeting tomorrow 23/11

2017-11-22 Thread Andrea Frittoli
Dear all,

sorry about the short notice on this - tomorrow I will be travelling so I
won't be able to host the office hours in the morning and the meeting in
the evening.

Kind regards

Andrea Frittoli (andreaf)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread James E. Blair
Monty Taylor  writes:

> * We use -W only if setup.cfg sets it
>
> * Installs dependencies via bindep for doc environment. Binary deps,
> such as graphviz, should be listed in bindep and marked with a 'doc'
> tag.
>
> * doc/requirements.txt is used for installation of python dependencies.
> Things like whereto or openstackdocstheme should go there.
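
For concreteness, a sketch of what those pieces typically look like in a
project tree (package names and version pins are illustrative):

    # bindep.txt -- binary dependencies, tagged with profiles
    graphviz [doc]

    # doc/requirements.txt -- python dependencies for the docs/releasenotes builds
    sphinx>=1.6.2
    openstackdocstheme>=1.17.0
    reno>=2.5.0

    # setup.cfg -- opt in to warnings-as-errors for the docs build
    [build_sphinx]
    warning-is-error = 1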

Should we add this info to the infra manual?

Similar to this?

  https://docs.openstack.org/infra/manual/drivers.html#package-requirements

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Core Reviewers

2017-11-22 Thread Graham Hayes
I have decided to start cycling out old core reviewers who are not as
active, replacing them with new, more active reviewers.

The first change is

- Eric Larson (elarson)
+ Jens Harbott (frickler)

Unfortunately elarson has moved companies, and frickler has been
consistently providing good, regular reviews.

Please welcome Jens to the team, and if Eric can rejoin us in the
future, we can fast track him back to core.

Thanks,

Graham



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Meeting Time

2017-11-22 Thread Graham Hayes
So, after looking at the responses, we have a winning
meeting time.

14:00 UTC (I propose staying on Wednesday) [1].

If there are no objections by the end of the week, I will
update the meeting time on eavesdrop.openstack.org

Thanks,

Graham

1 -
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20171129T14=1440



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread Ian Y. Choi

Hello,

Maybe PDF builds could also be considered as part of these changes?

I created a WIP patch on the openstack/horizon repository [1] to highlight
which changes would be needed to support PDF builds for docs and releasenotes.

Although there is currently one warning when using "python setup.py 
build_sphinx" [2], I think that warning is acceptable for now, since running 
the sphinx-build command with the "-b latex" option works well, and invoking 
sphinx-build directly is the direction the PTI is heading, as far as I 
understand [3].
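
For reference, a minimal sketch of the kind of invocation involved (the
doc/source layout is the usual one, and the final make step assumes a LaTeX
toolchain such as latexmk is installed - it is not part of the patch above):

  # build the LaTeX sources with the project's Sphinx configuration
  sphinx-build -b latex doc/source doc/build/latex
  # turn the generated .tex into a PDF using the Makefile Sphinx creates
  make -C doc/build/latex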


(I am also copying this to openstack-docs mailing list for [4].)


With many thanks,

/Ian

[1] https://review.openstack.org/#/c/520620/
[2] https://github.com/sphinx-doc/sphinx/issues/4259
[3] 
http://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/tasks/main.yaml#n15

[4] https://review.openstack.org/#/c/509297/

Doug Hellmann wrote on 11/23/2017 12:05 AM:

Excerpts from Monty Taylor's message of 2017-11-22 07:39:45 -0600:


* We use -W for all releasenotes builds - this means warnings are always
errors for releasenotes. That shouldn't bother anyone, as most of the
releasenotes content is generated by reno anyway.

For projects that never had -W set, there may be invalid RST in old
branches. We hit that in ceilometer for mitaka and newton, and since
those branches were closed already we used "reno report" to generate
static RST pages to replace the reno directives. See
https://review.openstack.org/#/c/521548/ for an example of doing this if
your project has a similar issue.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Hongbin Lu
For the record, if the magnum team isn't interested in maintaining the CoreOS
driver, that is an indication that this driver should be split out and
maintained by another team. CoreOS is one of the prevailing container OSes. I
believe there will be a lot of interest after the split.

Disclaimer: I am an author of the CoreOS driver

Best regards,
Hongbin

On Wed, Nov 22, 2017 at 3:29 AM, Spyros Trigazis  wrote:

> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña 
> wrote:
> > I'm using Openstack Ocata and trying Magnum.
> >
> > I encountered a lot of problems but I have been able to solve many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve
> them
> for everyone else?
>
> >
> > Now I'm curious about some aspects of Magnum:
> >
> > ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> > create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it just by changing
> one parameter.
>
> >
> > ¿Any one here using Magnum on daily basis? If yes, What version are you
> > using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are
> running
> Pike and we use only the fedora atomic drivers.
> http://openstack-in-production.blogspot.ch/2017/
> 01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
> >
> > ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> > upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
> >
> > ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option
> and can
> keep up with kubernetes easily.
>
> >
> > ¿Where I can found updated articles about the state of Magnum and it's
> > future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Cheers,
> Spyros
>
> >
> > Cheers
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/tripleo-ui failed

2017-11-22 Thread Emilien Macchi
On Wed, Nov 22, 2017 at 7:50 AM, Sean McGinnis  wrote:
> I just put up a patch to fix the failing job:
>
> https://review.openstack.org/#/c/522284/
>
> Are you saying it was redundant though?

I was just wondering if we actually need this post-release job.
But I'll let the tripleo-ui maintainers answer that question.

>
>
> On Wed, Nov 22, 2017 at 9:43 AM, Emilien Macchi  wrote:
>>
>> Yeah, I confirm we were able to build an RPM and release a new version
>> of TripleO UI, which was our goal. I guess we can remove the
>> flaky/failing job.
>>
>> On Wed, Nov 22, 2017 at 3:54 AM, Honza Pokorny  wrote:
>> > I'm investigating on behalf of the tripleo-ui team.  It looks like the
>> > definition for the jobs changed under us, and we'll need to update our
>> > configuration.  I'll track it down.
>> >
>> > For the time being, though, we can safely ignore this issue.  As
>> > indicated, the publish-openstack-javascript-tarball job succeeded.  I
>> > checked the tarballs listing, and the file is in fact present.  Our
>> > project doesn't produce any other release artifacts upstream.
>> >
>> > Please direct any future failures to me (in addition to the mailing list
>> > of course)
>> >
>> > Thanks
>> >
>> > Honza Pokorny
>> >
>> > On 2017-11-21 21:34, Emilien Macchi wrote:
>> >> Indeed, adding Jason in copy.
>> >>
>> >> Do we actually need release-openstack-javascript job?
>> >>
>> >> On Tue, Nov 21, 2017 at 8:27 PM, Tony Breeds 
>> >> wrote:
>> >> > On Wed, Nov 22, 2017 at 03:07:33AM +, z...@openstack.org wrote:
>> >> >> Build failed.
>> >> >>
>> >> >> - publish-openstack-javascript-tarball
>> >> >> http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/publish-openstack-javascript-tarball/9908482/
>> >> >> : SUCCESS in 4m 58s
>> >> >> - release-openstack-javascript
>> >> >> http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/release-openstack-javascript/95af4ef/
>> >> >> : POST_FAILURE in 6m 37s
>> >> >> - announce-release announce-release : SKIPPED
>> >> >
>> >> > I'm not certain what went wrong here but [1] looks problematic
>> >> >
>> >> > Yours Tony.
>> >> >
>> >> > [1]
>> >> > http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/release-openstack-javascript/95af4ef/job-output.txt.gz#_2017-11-22_03_01_48_750499
>> >> >
>> >> > ___
>> >> > Release-job-failures mailing list
>> >> > release-job-failu...@lists.openstack.org
>> >> >
>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>> >>
>> >>
>> >>
>> >> --
>> >> Emilien Macchi
>> >>
>> >>
>> >> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> ___
>> Release-job-failures mailing list
>> release-job-failu...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>
>
>
> ___
> Release-job-failures mailing list
> release-job-failu...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Queens bug smash etherpad

2017-11-22 Thread Matt Riedemann
There is a bug smash event happening in China this week. Bugs and their 
associated fixes are being tracked in this etherpad:


https://etherpad.openstack.org/p/OpenStack-Bug-Smash-Queens-Wuhan-Bugs-List

The timing isn't ideal for anyone in the US given the Thanksgiving 
holiday, but for people that are around this week, please help review 
these patches.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/tripleo-ui failed

2017-11-22 Thread Emilien Macchi
Yeah, I confirm we were able to build an RPM and release a new version
of TripleO UI, which was our goal. I guess we can remove the
flaky/failing job.

On Wed, Nov 22, 2017 at 3:54 AM, Honza Pokorny  wrote:
> I'm investigating on behalf of the tripleo-ui team.  It looks like the
> definition for the jobs changed under us, and we'll need to update our
> configuration.  I'll track it down.
>
> For the time being, though, we can safely ignore this issue.  As
> indicated, the publish-openstack-javascript-tarball job succeeded.  I
> checked the tarballs listing, and the file is in fact present.  Our
> project doesn't produce any other release artifacts upstream.
>
> Please direct any future failures to me (in addition to the mailing list
> of course)
>
> Thanks
>
> Honza Pokorny
>
> On 2017-11-21 21:34, Emilien Macchi wrote:
>> Indeed, adding Jason in copy.
>>
>> Do we actually need release-openstack-javascript job?
>>
>> On Tue, Nov 21, 2017 at 8:27 PM, Tony Breeds  wrote:
>> > On Wed, Nov 22, 2017 at 03:07:33AM +, z...@openstack.org wrote:
>> >> Build failed.
>> >>
>> >> - publish-openstack-javascript-tarball 
>> >> http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/publish-openstack-javascript-tarball/9908482/
>> >>  : SUCCESS in 4m 58s
>> >> - release-openstack-javascript 
>> >> http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/release-openstack-javascript/95af4ef/
>> >>  : POST_FAILURE in 6m 37s
>> >> - announce-release announce-release : SKIPPED
>> >
>> > I'm not certain what went wrong here but [1] looks problematic
>> >
>> > Yours Tony.
>> >
>> > [1] 
>> > http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/release-openstack-javascript/95af4ef/job-output.txt.gz#_2017-11-22_03_01_48_750499
>> >
>> > ___
>> > Release-job-failures mailing list
>> > release-job-failu...@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Queens sprint Nov 22

2017-11-22 Thread Zhipeng Huang
Hi Team,

As discussed at the team meeting, we will have another video conf for our
Queens sprint on 9:00pm ET Nov 22 which would be 10:00am China Time Nov 23.
Meeting details are as follows:


Topic: Cyborg Queens Sprint Nov 22
Time: Nov 23, 2017 10:00 AM Hong Kong

Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/652239229

Or iPhone one-tap :
US: +16699006833,,652239229#  or +16468769923,,652239229#
Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 669 900 6833  or +1 646 876 9923
Meeting ID: 652 239 229
International numbers available:
https://zoom.us/zoomconference?m=xGvv4a5rnQ_5CErEwpP9hJ-BZB7wiBc_


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ec2api] legacy-functional-neutron-dsvm-ec2api on stable/pike looks broken

2017-11-22 Thread Emilien Macchi
I'm trying to backport zuulv3 layout in ec2api but it seems like the
gate is broken:
https://review.openstack.org/#/c/521592/

Bringing it here for visibility, hopefully someone from the project
can have a look.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2017-11-22 07:39:45 -0600:

> * We use -W for all releasenotes builds - this means warnings are always 
> errors for releasenotes. That shouldn't bother anyone, as most of the 
> releasenotes content is generated by reno anyway.

For projects that never had -W set, there may be invalid RST in old
branches. We hit that in ceilometer for mitaka and newton, and since
those branches were closed already we used "reno report" to generate
static RST pages to replace the reno directives. See
https://review.openstack.org/#/c/521548/ for an example of doing this if
your project has a similar issue.
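
For reference, the command is roughly of this shape (the branch name and
output path here are only an illustration, not the exact invocation from the
review above):

  reno report . --branch origin/stable/mitaka > releasenotes/source/mitaka.rst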

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] office hours report 2017-11-21

2017-11-22 Thread Lance Bragstad
Hey all,

Yesterday in office hours we recapped the main topic from the keystone
user and operator feedback session at the forum, which is possibly
allowing the project API to accept user-specified project IDs when
creating projects. Full context from that conversation can be found in
the logs [0]. The TL;DR of it is that we're going to re-propose the
specification with some additional requirements that were brought up
during the forum. This should allow us to continue discussing
requirements we weren't really aware of the first time around and
document the outcomes.

Otherwise, the following bugs were either triaged or updated during
office hours. I also added the office-hours tag to a bunch of bugs [1]
in case anyone is looking for something between now and next week.

Bug #1291157 in OpenStack Identity (keystone): "idp deletion should
trigger token revocation"
https://bugs.launchpad.net/keystone/+bug/1291157

Bug #1727726 in OpenStack Identity (keystone): "Keystone to ignore ldap
users/groups with blank spaces in their name"
https://bugs.launchpad.net/keystone/+bug/1727726

Bug #1728690 in OpenStack Identity (keystone): "member_role_id/name conf
options reference v2"
https://bugs.launchpad.net/keystone/+bug/1728690

Bug #1730270 in OpenStack Identity (keystone): "keystone raise 500 error
when authenticate with "mapped""
https://bugs.launchpad.net/keystone/+bug/1730270

Bug #1724645 in OpenStack Identity (keystone): "remote_id_attribute
config options prevents multiple protocol variations for Federation"
https://bugs.launchpad.net/keystone/+bug/1724645

Bug #1727099 in OpenStack Identity (keystone): "Change password error
history message count is wrong"
https://bugs.launchpad.net/keystone/+bug/1727099

Bug #1728907 in OpenStack Identity (keystone): "Empty Fernet Key Files
causing problems with token issue"
https://bugs.launchpad.net/keystone/+bug/1728907

Bug #1642988 in OpenStack Identity (keystone): "Optionally encode
project IDs in fernet tokens"
https://bugs.launchpad.net/keystone/+bug/1642988

Bug #1603579 in OpenStack Identity (keystone): "WebSSO user can't see
the "Federation" management panel in Horizon"
https://bugs.launchpad.net/keystone/+bug/1603579

Bug #1615076 in OpenStack Identity (keystone): "Keystone server does 
not define "enabled" attribute for Region but mentions in v3 regions.py"
https://bugs.launchpad.net/keystone/+bug/1615076

Bug #1724686 in OpenStack Identity (keystone): "authentication code
hangs when there are three or more admin keystone endpoints"
https://bugs.launchpad.net/keystone/+bug/1724686

Bug #1732298 in OpenStack Identity (keystone): "Migration unit tests
incorrectly failing for 'alter' in contract phase"
https://bugs.launchpad.net/keystone/+bug/1732298

[0]
http://eavesdrop.openstack.org/meetings/keystone_office_hours/2017/keystone_office_hours.2017-11-21-19.07.log.html
[1] https://goo.gl/tRbEsD



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] [designate] Domain and server creation during deployment

2017-11-22 Thread Liam Young
The Designate sink service relies on sink file(s) that contain the domain
id(s) of the domains that automatically generated records should be added
to.
At the moment the designate charm creates a server and domains during charm
installation if the neutron-domain and/or nova-domain config options have
been
set. It then renders the sink files accordingly. This is done via the
designate cli and obviously relies on the keystone API and designate API
services being up and available. This is unsurprisingly proving to be
unreliable and racy, particularly during HA deployments.

The heat charm has a similar issue in that it relies on a domain to have
been
created before heat can be used. But rather than try and create the domain
during charm installation the heat charm exposes an action which should be
run post-installation to create the domain.

I think that the designate charm should be updated to work in a similar way
to
the heat charm and that the server and domain creation and sink file
rendering
should be done via a post-deployment action rather than during deployment
time. There is a complication to this approach. All the designate API units
will need to render sink configurations containing the domain id(s) once the
creation action has run. I can think of two similar ways to achieve this:

1) Expose a server and domain creation action that must be run on the
leader.
   During the action the leader then sets the domain ids via the leader db.
   The non-leaders can then respond to leader-settings-changed and render
   their sink file(s). Storing the sink config in the leader-db also has the
   advantage that if the designate service is scaled out at a later date
then
   the new unit will still have access to the sink configuration and can
   render the sink files.

2) A very similar approach would be to push the creation of servers and
   domains back to the administrator to perform and expose a generic action
   for creating sink files which accepts the domain id as one of its
   arguments. Again this would need to be run on the leader and propagated
   via leader-settings.

I'm inclined to go with option 2. Does anyone have any objections, or
suggestions for an alternative approach?
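
To make option 2 a little more concrete, here is a rough sketch of the shape
it could take (all of the names below are illustrative, and it assumes the
reactive framework plus charmhelpers' leadership helpers and the leadership
layer):

    # action code, run on the leader unit, e.g.:
    #   juju run-action designate/0 create-sink-config domain-id=<uuid>
    from charmhelpers.core.hookenv import (
        action_fail, action_get, is_leader, leader_set)

    def create_sink_config():
        if not is_leader():
            action_fail('this action must be run on the leader unit')
            return
        # record the domain id so every unit can see it
        leader_set({'nova-domain-id': action_get('domain-id')})

    # reactive handler (e.g. in reactive/designate.py), run on every unit
    from charms.reactive import when
    from charmhelpers.core.hookenv import leader_get

    @when('leadership.changed.nova-domain-id')
    def render_sink_files():
        # write_sink_file is a hypothetical helper that renders the sink config
        write_sink_file('nova_fixed', leader_get('nova-domain-id'))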

Thanks
Liam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FOSDEM 2018: Call For Proposals: Virtualization & IaaS DevRoom

2017-11-22 Thread Kashyap Chamarthy
On Mon, Nov 06, 2017 at 01:37:10PM +0100, Kashyap Chamarthy wrote:
> I'm delighted to announce that the call for proposals is now open for
> the Virtualization & IaaS devroom at the upcoming FOSDEM 2018, to be
> hosted on February 3 and 4, 2018.
> 
> This year will mark FOSDEM’s 18th anniversary as one of the longest-running
> free and open source software developer events, attracting thousands of
> developers and users from all over the world. FOSDEM will be held once
> again in Brussels, Belgium, on February 3 & 4, 2018.
> 
> This devroom is a collaborative effort, and is organized by dedicated
> folks from projects such as OpenStack, Xen Project, oVirt, QEMU, KVM,
> libvirt, and Foreman. We would like to invite all those who are involved
> in these fields to submit your proposals by December 1st, 2017.
> 
> ---
> Important Dates
> ---
> Submission deadline: 01 December 2017

A gentle reminder -- submission deadline is just about a week's time
from now.

> Acceptance notifications: 14 December 2017
> Final schedule announcement: 21 December 2017
> Devroom: 03 and 04 February 2018 (two days- different rooms)

[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-22 Thread Victoria Martínez de la Cruz
Yes! Huge +1!

Congrats Jun :)

2017-11-22 7:45 GMT-03:00 Thomas Bechtold :

> +1
>
> On 20.11.2017 00:29,  Ravi, Goutham  wrote:
>
>> Hello Manila developers,
>>
>> I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on gerrit)
>> to be part of the Manila core team. Zhongjun has been an important member
>> of our community since the Kilo release, and has, in the past few releases
>> made significant contributions to the constellation of projects related to
>> openstack/manila [1]. She is also our ambassador in the APAC
>> region/timezones. Her opinion is valued amongst the core team and I think,
>> as a core reviewer and maintainer, she would continue to help grow and
>> maintain our project.
>>
>> Please respond with a +1/-1.
>>
>> We will not be having an IRC meeting this Thursday (23^rd November 2017),
>> so if we have sufficient quorum, PTL extraordinaire, Ben Swartzlander will
>> confirm her nomination here.
>>
>> [1] http://stackalytics.com/?user_id=jun-zhongjun=all
>> tric=person-day
>>
>> Thanks,
>>
>> Goutham
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] pluggable drivers for oslo.config spec ready for review

2017-11-22 Thread Raildo Mascena de Sousa Filho
Hello folks,

Since this topic has been discussed for a while, I'll give some updates on
our current progress and on what the next steps are.

Yesterday the spec for oslo.config drivers was approved [1] and we have
started the implementation [2] of that spec. After that, we should be able to
implement a Castellan driver for oslo.config, which will allow configuration
options to hold Castellan references to secrets while the secrets themselves
are stored in a proper key store backend.
Besides that, we are implementing Custodia support as a key manager backend
so Castellan can store/fetch secrets through Custodia [3].

Finally, as next steps for the Rocky release, we should discuss (maybe at the
next PTG) points such as using a deployment tool like Ansible or Puppet,
through the TripleO service, to create those secrets and store them properly
in Custodia, following the Castellan driver for oslo.config. Later, we will
then be able to retrieve them properly in the configuration files.

[1] https://review.openstack.org/#/c/454897/7
[2] https://review.openstack.org/#/c/513844/
[3] https://review.openstack.org/#/c/515190/

Regards,

On Mon, Nov 20, 2017 at 1:42 PM Doug Hellmann  wrote:

> Excerpts from Jay Pipes's message of 2017-11-20 11:02:33 -0500:
> > On 11/20/2017 10:19 AM, Doug Hellmann wrote:
> > > The spec for adding pluggable drivers to oslo.config is ready for a
> > > final queens review [1]. The latest draft should be simpler to
> implement
> > > (important given where we are in the schedule) at the expense of always
> > > requiring at least one configuration file to specify the location of
> > > other configuration sources. We can improve on that design in the
> future
> > > when we have the drivers working.
> >
> > Hi Doug. Is this spec crucial for various PCI/security-minded folks to
> > review due to how plaintext configuration options are currently handled
> > for sensitive things like password and user/project IDs?
> >
> > Best,
> > -jay
> >
>
> The spec is meant to enable securely storing secrets, but it's
> foundation work before the secret store driver can actually be
> implemented so it doesn't go into a lot of detail about the castellan
> driver. Still, I would appreciate if the folks interested in that
> feature look at it.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 

Raildo mascena

Software Engineer, Identity Managment

Red Hat



TRIED. TESTED. TRUSTED. 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Changes to releasenotes and docs build jobs

2017-11-22 Thread Monty Taylor

Hey everybody!

Following recent changes [0] in the PTI [1][2] we're rolling out some 
changes to the build-openstack-releasenotes and 
build-openstack-sphinx-docs jobs. If you see patches with the topic 
'updated-pti' - they are in service of this.


The most important thing to be aware of is that we'll no longer be using 
tox for either of them. There are a few reasons for that - the one 
that's most important to me is that it allows us to use the exact same 
docs and releasenotes build jobs for python, go, javascript or any other 
language without needing to add python build tooling to non-python 
repos. We'll also align more with how readthedocs does things in the 
broader python ecosystem.


It's also worth reminding people that we've NEVER used 'tox -edocs' in 
the gate for docs jobs - so anyone who has additional things in their 
docs environment has not been having those things run. For folks running 
doc8, we recommend adding those checks to your pep8 environment instead.


It's also worth noting that we're adding support for a 
doc/requirements.txt file (location chosen for alignment with readthedocs) to 
express requirements needed for all docs (for both releasenotes and 
docs). We'll start off falling back to test-requirements.txt ... but we 
recommend splitting doc-related requirements into doc/requirements.txt - 
since that will mean not needing to install Sphinx when doing tox unittests.


Specific info
=

Releasenotes


The patches for releasenotes have been approved and merged.

* We use -W for all releasenotes builds - this means warnings are always 
errors for releasenotes. That shouldn't bother anyone, as most of the 
releasenotes content is generated by reno anyway.


* We're temporarily installing the project to get the version number. This 
will be removed as soon as the changes in topic:releasenotes-version land. 
Note this only changes the version number on the front page, not what is 
shown.


* Installs dependencies via bindep for doc environment.

* doc/requirements.txt is used for installation of python dependencies. 
Things like whereto or openstackdocstheme should go there.


Documentation builds


* We use -W only if setup.cfg sets it

* Installs dependencies via bindep for doc environment. Binary deps, 
such as graphviz, should be listed in bindep and marked with a 'doc' tag.


* doc/requirements.txt is used for installation of python dependencies.
Things like whereto or openstackdocstheme should go there (see the example 
below).

* tox_install.sh used to install project if it exists. Because of the 
current situation with neutron and horizon plugins it's necessary to run 
tox_install.sh if it exists as part of setup. We eventually want to make 
that go away, but that's a different effort. There are seven repos with 
a problematic tox_install.sh - patches will be arriving to fix them, and 
we won't land the build-openstack-sphinx-docs changes until they have 
all landed.
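
To make the bindep and doc/requirements.txt points a bit more concrete, here 
is a purely illustrative example (the package names and the graphviz 
dependency are just examples - use whatever your project actually needs):

  # doc/requirements.txt - python dependencies for docs and releasenotes
  sphinx
  openstackdocstheme
  reno

  # bindep.txt - binary dependencies; the 'doc' profile scopes them to doc builds
  graphviz [doc]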



We've prepared these with a bunch of depends-on patches across a 
collection of projects, so we don't anticipate much in the way of pain 
... but life happens, so if you notice anything go south with 
releasenotes or sphinx jobs, please let us know and we can help solve 
any issues.


Thanks!
Monty

[0] https://review.openstack.org/#/c/509868
[1] https://review.openstack.org/#/c/508694
[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [acceleration]Cyborg Team Weekly Meeting 2017.11.22

2017-11-22 Thread Zhipeng Huang
Hi Team,

As usual, team meeting will happen on #openstack-cyborg starting from
UTC1500

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack/tripleo-ui failed

2017-11-22 Thread Honza Pokorny
I'm investigating on behalf of the tripleo-ui team.  It looks like the
definition for the jobs changed under us, and we'll need to update our
configuration.  I'll track it down.

For the time being, though, we can safely ignore this issue.  As
indicated, the publish-openstack-javascript-tarball job succeeded.  I
checked the tarballs listing, and the file is in fact present.  Our
project doesn't produce any other release artifacts upstream.

Please direct any future failures to me (in addition to the mailing list
of course)

Thanks

Honza Pokorny

On 2017-11-21 21:34, Emilien Macchi wrote:
> Indeed, adding Jason in copy.
> 
> Do we actually need release-openstack-javascript job?
> 
> On Tue, Nov 21, 2017 at 8:27 PM, Tony Breeds  wrote:
> > On Wed, Nov 22, 2017 at 03:07:33AM +, z...@openstack.org wrote:
> >> Build failed.
> >>
> >> - publish-openstack-javascript-tarball 
> >> http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/publish-openstack-javascript-tarball/9908482/
> >>  : SUCCESS in 4m 58s
> >> - release-openstack-javascript 
> >> http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/release-openstack-javascript/95af4ef/
> >>  : POST_FAILURE in 6m 37s
> >> - announce-release announce-release : SKIPPED
> >
> > I'm not certain what went wrong here but [1] looks problematic
> >
> > Yours Tony.
> >
> > [1] 
> > http://logs.openstack.org/e5/e5831f230bd29516dc202eb406270604f27e27f9/release/release-openstack-javascript/95af4ef/job-output.txt.gz#_2017-11-22_03_01_48_750499
> >
> > ___
> > Release-job-failures mailing list
> > release-job-failu...@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
> 
> 
> 
> -- 
> Emilien Macchi
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [manila] Nominating Zhong Jun (zhongjun) for Manila core

2017-11-22 Thread Thomas Bechtold

+1

On 20.11.2017 00:29,  Ravi, Goutham  wrote:

Hello Manila developers,

I would like to nominate Zhong Jun (zhongjun on irc, zhongjun2 on 
gerrit) to be part of the Manila core team. Zhongjun has been an 
important member of our community since the Kilo release, and has, in 
the past few releases made significant contributions to the 
constellation of projects related to openstack/manila [1]. She is also 
our ambassador in the APAC region/timezones. Her opinion is valued 
amongst the core team and I think, as a core reviewer and maintainer, 
she would continue to help grow and maintain our project.


Please respond with a +1/-1.

We will not be having an IRC meeting this Thursday (23^rd November 
2017), so if we have sufficient quorum, PTL extraordinaire, Ben 
Swartzlander will confirm her nomination here.


[1] 
http://stackalytics.com/?user_id=jun-zhongjun=all=person-day


Thanks,

Goutham



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Longer development cycles discussion on the -sigs list

2017-11-22 Thread Thierry Carrez
Quick pointer:

I started a discussion around longer development cycles on the -sigs list:

http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000149.html
http://lists.openstack.org/pipermail/openstack-sigs/2017-November/000161.html

Please join the thread there if you have a strong opinion either way :)

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Compute fails to startup

2017-11-22 Thread Goutham Pratapa
Hi Eduardo,

Following is the nova-compute log


https://hastebin.com/asikijekiq.sql -- compute log (I didn't find any error)
and the libvirt log is empty

What kolla release is deployed?

>> $ pip freeze | grep kolla


kolla==5.0.1.dev26
kolla-ansible==5.0.1.dev37

Kolla 5.0.1 is the version we have used..

If using nested virtualization, is it correctly configured to use qemu instead
of kvm?

>> Yes, because 2 weeks back on the same setup we could deploy kolla

Thanks in advance
Goutham



On Tue, Nov 21, 2017 at 9:28 PM, Eduardo Gonzalez 
wrote:

> Hi Goutham,
>
> Please share your nova-compute and libvirt logs from
> /var/lib/docker/volumes/kolla_logs/_data/nova.
>
> What kolla release is deployed?
> If using nested virtualization, is it correctly configured to use qemu
> instead of kvm?
>
> Regards
>
> 2017-11-21 15:37 GMT+01:00 Goutham Pratapa :
>
>> Hi all,
>>
>> I have been trying to deploy Kolla on a virtualized environment with
>> Centos Docker images using the
>>
>> stable/pike branch
>>
>> Deployment fails with -- https://hastebin.com/gubilijecu.vbs
>> Inventory fail -- https://hastebin.com/etosipegez.pl
>> extra log -- https://hastebin.com/yudafudegu.go
>> Docker logs https://hastebin.com/imotenanob.cs
>> Globals.yml https://hastebin.com/ihamepanim.coffeescript
>>
>> Any help would be of great use ???
>>
>> Thanks in advance...
>>
>> Thank you !!!
>> Goutham Pratapa
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Cheers !!!
Goutham Pratapa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] Retiring ceilometerclient

2017-11-22 Thread Julien Danjou
Hi,

Now that the Ceilometer API is gone, we really don't need
ceilometerclient anymore. I've proposed a set of patches to retire it:

  https://review.openstack.org/#/c/522183/

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Ops Meetups team minutes + main topics

2017-11-22 Thread Thierry Carrez
David Medberry wrote:
> I think the Foundation staff were very very wary of extending the PTG or
> doing dual sites simultaneously due to not saving a thing logistically.
> Yes, it would conceivably save travel for folks that need to go to two
> separate events (as would the other colo options on the table) but not
> saving a thing logistically over two separate events as we have now. A
> six or seven day sprint/thing/ptg would also mean encroaching on one or
> both weekends (above and beyond travel dates) and that may really limit
> venue choices as private parties (weddings, etc) tend to book those
> locales on weekends.

We also need to be careful with not restoring issues we had with the
old-style Design Summit. We want to avoid creating conflicts that would
reduce the productivity of the PTG (so running in parallel would be
dangerous). We also want to make sure the PTG remains a work event
rather than a feedback gathering event, as the start of the cycle is not
the best moment to introduce new priorities. That timing resulted in a
lot of frustration in the past.

Running the Ops meetup on the last days of the week before is one
option. That would let organizations save a bit on travel for people
that want to attend both (although hotel costs would increase with the
stay-over-weekend). My personal objection to that is that my brain
usually shuts down after 5 days of intense work, so I'm not looking
forward to that long week (or I would skip the Ops meetup to focus on
the PTG).

More generally I think we need to have that discussion in the broader
context of our event portfolio. What is the best way to have Ops meetups
in 2018, with increased participation from ops in Forums at summits and
OpenStack Days ? I feel like smaller, local events like OpenStack Days
were quite successful in reaching out to the silent majority of our
users that would not travel to a twice-a-year Ops Meetup. Should we
encourage more of that ? The Public Cloud WG/SIG managed to hold
discussions at various OpenStack Days as well... So we could encourage
having ops-centric discussions around local OpenStack Days, and then use
Forums at Summits as the funnel to close the feedback loop in those
discussions. That would reduce the need for a "big" twice-a-year Ops
Meetup and let us piggyback on already organized events...

Just thinking out loud...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Reminder for todays meeting

2017-11-22 Thread Tobias Rydberg

Hi all,

Time again for a meeting for the Public Cloud WG - today at 1400 UTC in 
#openstack-meeting-3


Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg

See you later!

Tobias Rydberg




smime.p7s
Description: S/MIME Cryptographic Signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable/newton branch for Neutron and Glance

2017-11-22 Thread Sławomir Kapłoński
Hi,

For Neutron there is a "newton-eol" tag in the repository: 
https://github.com/openstack/neutron/tree/newton-eol
You can check out the latest Newton code from this tag.
For Glance it's probably the same.
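
For example, to get the Newton code locally (plain git, nothing
devstack-specific):

  git clone https://github.com/openstack/neutron.git
  cd neutron
  git checkout newton-eol

The same approach works for openstack/glance.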

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl



> Wiadomość napisana przez Anda Nicolae  w dniu 
> 22.11.2017, o godz. 09:36:
> 
> Hi all,
>  
> I intend to install OpenStack Newton using stable/newton branch from devstack 
> (https://github.com/openstack-dev/devstack.git)
>  
> Unfortunately, I’ve noticed that some OpenStack repos such as Neutron 
> (https://github.com/openstack/neutron) or Glance 
> (https://github.com/openstack/glance) do not have stable/newton branch. 
> Because of this, stack.sh script from devstack fails when trying to clone 
> Neutron or Glance repos using stable/newton branch, because this branch does 
> not exist in GitHub for the respective projects.
>  
> Can you please let me know from where can I clone stable/newton branch for 
> Neutron and Glance projects? Or are there other repos for Neutron and Glance 
> which contain stable/newton branch?
>  
> Thanks,
> Anda
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] stable/newton branch for Neutron and Glance

2017-11-22 Thread Andreas Jaeger
On 2017-11-22 09:36, Anda Nicolae wrote:
> Hi all,
> 
>  
> 
> I intend to install OpenStack Newton using stable/newton branch from
> devstack (https://github.com/openstack-dev/devstack.git)
> 
>  
> 
> Unfortunately, I’ve noticed that some OpenStack repos such as Neutron
> (https://github.com/openstack/neutron) or Glance
> (https://github.com/openstack/glance) do not have stable/newton branch.
> Because of this, stack.sh script from devstack fails when trying to
> clone Neutron or Glance repos using stable/newton branch, because this
> branch does not exist in GitHub for the respective projects.
> 
>  
> 
> Can you please let me know from where can I clone stable/newton branch
> for Neutron and Glance projects? Or are there other repos for Neutron
> and Glance which contain stable/newton branch?

Check out the newton-eol tag - that tag was created from the branch when
we closed the branch,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
I forgot to include the Pike release notes
https://docs.openstack.org/releasenotes/magnum/pike.html

Spyros

On 22 November 2017 at 09:29, Spyros Trigazis  wrote:
> Hi Sergio,
>
> On 22 November 2017 at 03:31, Sergio Morales Acuña  wrote:
>> I'm using Openstack Ocata and trying Magnum.
>>
>> I encountered a lot of problems but I have been able to solve many of them.
>
> Which problems did you encounter? Can you be more specific? Can we solve them
> for everyone else?
>
>>
>> Now I'm curious about some aspects of Magnum:
>>
>> ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
>> create a custom fedora-atomic-27? What about RBAC?
>
> Since Pike, magnum is running kubernetes in containers on fedora 26.
> In fedora atomic 27 kubernetes etcd and flannel are removed from the
> base image so running them in containers is the only way.
>
> For RBAC, you need 1.8 and with Pike you can get it just by changing
> one parameter.
>
>>
>> ¿Any one here using Magnum on daily basis? If yes, What version are you
>> using?
>
> In our private cloud at CERN we have ~120 clusters with ~450 vms, we are 
> running
> Pike and we use only the fedora atomic drivers.
> http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
> Vexxhost is running magnum:
> https://vexxhost.com/public-cloud/container-services/kubernetes/
> Stackhpc:
> https://www.stackhpc.com/baremetal-cloud-capacity.html
>
>>
>> ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
>> upgrade Magnum to follow K8S's crazy changes?
>
> Atomic is maintained and supported much more than CoreOS in magnum.
> There wasn't much interest from developers for CoreOS.
>
>>
>> ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?
>
> Magnum Ocata is not too old but it will eventually be since it misses the
> capability of running kubernetes on containers. Pike allows this option and 
> can
> keep up with kubernetes easily.
>
>>
>> ¿Where I can found updated articles about the state of Magnum and it's
>> future?
>
> I did the project update presentation for magnum at the Sydney summit.
> https://www.openstack.org/videos/sydney-2017/magnum-project-update
>
> Cheers,
> Spyros
>
>>
>> Cheers
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stable/newton branch for Neutron and Glance

2017-11-22 Thread Anda Nicolae
Hi all,

I intend to install OpenStack Newton using stable/newton branch from devstack 
(https://github.com/openstack-dev/devstack.git)

Unfortunately, I've noticed that some OpenStack repos such as Neutron 
(https://github.com/openstack/neutron) or Glance 
(https://github.com/openstack/glance) do not have stable/newton branch. Because 
of this, stack.sh script from devstack fails when trying to clone Neutron or 
Glance repos using stable/newton branch, because this branch does not exist in 
GitHub for the respective projects.

Can you please let me know from where can I clone stable/newton branch for 
Neutron and Glance projects? Or are there other repos for Neutron and Glance 
which contain stable/newton branch?

Thanks,
Anda

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] Questions about Caas with Magnum

2017-11-22 Thread Spyros Trigazis
Hi Sergio,

On 22 November 2017 at 03:31, Sergio Morales Acuña  wrote:
> I'm using Openstack Ocata and trying Magnum.
>
> I encountered a lot of problems but I have been able to solve many of them.

Which problems did you encounter? Can you be more specific? Can we solve them
for everyone else?

>
> Now I'm curious about some aspects of Magnum:
>
> ¿Do I need a newer version of Magnum to run K8S 1.7? ¿Or I just need to
> create a custom fedora-atomic-27? What about RBAC?

Since Pike, magnum is running kubernetes in containers on fedora 26.
In fedora atomic 27 kubernetes etcd and flannel are removed from the
base image so running them in containers is the only way.

For RBAC, you need 1.8 and with Pike you can get it just by changing
one parameter.

>
> ¿Any one here using Magnum on daily basis? If yes, What version are you
> using?

In our private cloud at CERN we have ~120 clusters with ~450 vms, we are running
Pike and we use only the fedora atomic drivers.
http://openstack-in-production.blogspot.ch/2017/01/containers-on-cern-cloud.html
Vexxhost is running magnum:
https://vexxhost.com/public-cloud/container-services/kubernetes/
Stackhpc:
https://www.stackhpc.com/baremetal-cloud-capacity.html

>
> ¿What driver is, in your opinion, better: Atomic or CoreOS? ¿Do I need to
> upgrade Magnum to follow K8S's crazy changes?

Atomic is maintained and supported much more than CoreOS in magnum.
There wasn't much interest from developers for CoreOS.

>
> ¿Any tips on the CaaS problem?¿It's Magnum Ocata too old for this world?

Magnum Ocata is not too old but it will eventually be since it misses the
capability of running kubernetes on containers. Pike allows this option and can
keep up with kubernetes easily.

>
> ¿Where I can found updated articles about the state of Magnum and it's
> future?

I did the project update presentation for magnum at the Sydney summit.
https://www.openstack.org/videos/sydney-2017/magnum-project-update

Cheers,
Spyros

>
> Cheers
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev