Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-27 Thread Andreas Jaeger
For those migrating from oslosphinx to openstackdocstheme, I strongly
advise following the docs for openstackdocstheme 1.11 on how to set it up:

https://docs.openstack.org/openstackdocstheme/latest/
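
Roughly, the setup described there boils down to a few lines in
doc/source/conf.py; a minimal sketch only (the values below are
placeholders, check the theme docs for your own repository):

    # conf.py: minimal openstackdocstheme >= 1.11 setup (sketch)
    extensions = ['openstackdocstheme']
    html_theme = 'openstackdocs'

    # Variables the theme uses, e.g. for the "Report a bug" link.
    repository_name = 'openstack/example-project'   # assumed repo name
    bug_project = 'example-project'                 # assumed Launchpad project
    bug_tag = 'documentation'                       # optional bug tag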

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] OpenStack manuals project migration - Progress for TripleO

2017-06-27 Thread Andreas Jaeger
On 2017-06-27 22:54, Emilien Macchi wrote:
> Full background, context and details can be read here:
> http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html
> 
> TL;DR: there is a massive cross-project effort which aims to migrate
> documentation out of a central repository and into project trees,
> managed by project teams instead of the OpenStack Manuals team (which is
> running with a low number of contributors at this time). (Alex, feel free
> to correct me if I said something wrong.)
> 
> There is a list of things we need to do to achieve this goal, if
> possible by the end of Pike:
> https://etherpad.openstack.org/p/doc-migration-tracking
> 
> Here's a first (basic) iteration:
> Switch release notes to use openstackdocstheme:
> https://review.openstack.org/#/q/topic:doc_migration+owner:%22Emilien+Macchi+%253Cemilien%2540redhat.com%253E%22
> (for all TripleO projects).

I commented on your first review with a -1 that probably applies to all
of them.

Please see https://docs.openstack.org/openstackdocstheme/latest/ for how to
set up openstackdocstheme 1.11 properly.

In particular, if you want to use the "Report a bug" feature, you need to
add more variables.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [storyboard][infra][docs] Report a bug and storyboard

2017-06-27 Thread Andreas Jaeger
Hi Storyboard team,

the openstackdocstheme has a "Report a bug" feature, where you can click
on the Bug icon and get a link to the project's bug area in Launchpad,
together with information about the documentation (bug tag, git URL of the
build, date, SHA, extra text).

How can this be done with storyboard?

Example: See the Bug icon on https://docs.openstack.org/admin-guide/ .
It gives the following URL at the time of writing:

https://bugs.launchpad.net/openstack-manuals/+filebug?field.title=OpenStack%20Administrator%20Guide%20in%20Administrator%20Guide&field.comment=%0A%0A%0AThis%20bug%20tracker%20is%20for%20errors%20with%20the%20documentation,%20use%20the%20following%20as%20a%20template%20and%20remove%20or%20add%20fields%20as%20you%20see%20fit.%20Convert%20[%20]%20into%20[x]%20to%20check%20boxes:%0A%0A-%20[%20]%20This%20doc%20is%20inaccurate%20in%20this%20way:%20__%0A-%20[%20]%20This%20is%20a%20doc%20addition%20request.%0A-%20[%20]%20I%20have%20a%20fix%20to%20the%20document%20that%20I%20can%20paste%20below%20including%20example:%20input%20and%20output.%20%0A%0AIf%20you%20have%20a%20troubleshooting%20or%20support%20issue,%20use%20the%20following%20%20resources:%0A%0A%20-%20Ask%20OpenStack:%20http://ask.openstack.org%0A%20-%20The%20mailing%20list:%20http://lists.openstack.org%0A%20-%20IRC:%20%27openstack%27%20channel%20on%20Freenode%0A%0A---%0ARelease:%2015.0.0%20on%202017-06-27%2009:57%0ASHA:%20cc281cac69483977f3eed49f03f9bfac850cd7f0%0ASource:%20https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/source/index.rst%0AURL:%20https://docs.openstack.org/admin-guide/&field.tags=admin-guide
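
In other words, the theme just assembles a prefilled bug URL from a few
pieces of data it knows at build time. A rough Python sketch of that
assembly (the values below are illustrative only):

    # Sketch: how a prefilled Launchpad bug URL like the one above is built.
    from urllib.parse import urlencode  # Python 3

    base = 'https://bugs.launchpad.net/openstack-manuals/+filebug'
    params = {
        'field.title': 'OpenStack Administrator Guide in Administrator Guide',
        'field.comment': ('Release: 15.0.0\n'
                          'SHA: cc281cac69483977f3eed49f03f9bfac850cd7f0\n'
                          'Source: doc/source/index.rst\n'
                          'URL: https://docs.openstack.org/admin-guide/'),
        'field.tags': 'admin-guide',
    }
    print(base + '?' + urlencode(params))

A StoryBoard equivalent would need a similar prefilled "create story" URL
(or an API call) that can carry the same title, description and tags.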

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Last release date vs End of Life date

2017-06-27 Thread Sean McGinnis
On Tue, Jun 27, 2017 at 02:47:30PM -0400, Doug Hellmann wrote:
> Excerpts from Tony Breeds's message of 2017-06-27 16:51:37 +1000:
> > Hi all,
> > Up 'til now we haven't set a last release date for a stable branch
> > approaching end of life.  It seems like formalizing that would be a good
> > thing.
> > 
> > This comes up because we need time to verify that said release integrates
> > well with (or at least doesn't break) said branch.  So we should define a
> > date for the last release of *libraries*; services are less critical as
> > we're always testing the HEAD of that branch.
> > 
> > I'd suggest it be 2 weeks before EOL date.  Thoughts?
> > 
> > Yours Tony.
> 
> That makes sense.
> 
> Doug


Agree, that makes sense to me too. Unless something has gone horribly wrong,
two weeks should be plenty to make sure things are settled before things get
wrapped up.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral][deployment] how to add deployment roles

2017-06-27 Thread Renat Akhmerov
Hi,

Just a small addition: “mistral action-update” applies only to so-called ad-hoc
actions, which are essentially wrappers written in YAML. You can’t use this
command to update a regular action written in Python and plugged into Mistral
with stevedore.
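
For illustration, a regular action of that kind is just a Python class
registered through a stevedore entry point; a rough sketch (assuming the
mistral_lib base class, with made-up names):

    # my_package/actions.py (made-up module name)
    from mistral_lib import actions

    class EchoAction(actions.Action):
        """Toy custom action; changed by shipping new code, not via the CLI."""

        def __init__(self, output):
            self.output = output

        def run(self, context):
            return self.output

    # ...registered through a stevedore entry point in the plugin package's
    # setup.cfg, e.g.:
    # [entry_points]
    # mistral.actions =
    #     example.echo = my_package.actions:EchoAction

Updating such an action means releasing new Python code and re-registering
it (e.g. with something like "mistral-db-manage populate"), rather than
running "mistral action-update".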

Renat Akhmerov
@Nokia

On 28 Jun 2017, 08:03 +0700, wrote:
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-06-27 Thread Joshua Hesketh
On Wed, Jun 28, 2017 at 6:59 AM, Jeremy Stanley  wrote:

> On 2017-06-17 01:20:28 +1000 (+1000), Joshua Hesketh wrote:
> [...]
> > I'm happy to help do this if you'd like. Otherwise the script I've
> > used for the last few retirements is here:
> > http://git.openstack.org/cgit/openstack-infra/release-tools/tree/eol_branch.sh
>
> That would be really appreciated if you have the available
> bandwidth. If not, I'm going to try to find an opportunity to
> babysit it sometime later this week.
>


Yep, more than happy to. I'll try and get to it this week but if not, next
week at the latest.

Cheers,
Josh



>
> > I believe the intention was to add some hardening around that
> > script and automate it. However I think it was put on hold
> > awaiting a new gerrit.. either that or nobody took it up.
>
> Fingers crossed that we'll be able to switch to Gerrit 2.13 soon and
> resume that much needed development effort.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-06-27 Thread Tony Breeds
On Tue, Jun 27, 2017 at 08:59:27PM +, Jeremy Stanley wrote:

> Fingers crossed that we'll be able to switch to Gerrit 2.13 soon and
> resume that much needed development effort.

This looks to have landed in 2.14[1] :(   I don't know how hard it'd be
to backport to 2.13 or even if that's a thing we'd do.

It certainly doesn't cleanly cherry-pick :(

Yours Tony.

[1] https://gerrit-review.googlesource.com/c/85512


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-27 Thread zhubingbing



+1



>> -Original Message-
>> From: Michał Jastrzębski [mailto:inc...@gmail.com]
>> Sent: Wednesday, June 14, 2017 10:46 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for
>> core
>> 
>> Hello,
>> 
>> With great pleasure I'm kicking off another core voting to kolla-ansible and
>> kolla teams:) this one is about spsurya. Voting will be open for 2 weeks 
>> (till
>> 28th Jun).
>> 
>> Consider this mail my +1 vote, you know the drill:)
>> 
>> Regards,
>> Michal
>> 
>> 
>
>-
>duonghq
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Tricircle vs Trio2o

2017-06-27 Thread joehuang
Hello, Silvia,

Thank you for your interest in Tricircle and Trio2o.

See inline comments.

Best Regards
Chaoyi Huang (joehuang)

From: Silvia Fichera [fichera@gmail.com]
Sent: 28 June 2017 5:16
To: OpenStack Development Mailing List (not for usage questions); 
openst...@lists.openstack.org
Subject: [openstack-dev] [tricircle] Tricircle vs Trio2o


Hi all,
I would like to build up a multi-region openstack and I read that both 
tricircle and trio2o are the most suitable solutions.
I have a few questions:

- from the wikis I couldn't deeply understand the differences between the two:
as far as I understood, tricircle deploys a shared neutron module, while trio2o
provides a gateway in the case of a single nova/glance module. Is that right?

[joehuang] Yes. Tricircle provides networking capability across multiple
Neutron servers in an OpenStack multi-region deployment or a Nova cells v2
multi-cell deployment: one shared Neutron module provides the cross-Neutron
functionality and serves as the API entry point. Trio2o's major purpose is to
act as an API gateway, and it has not been active since it was moved out of
Tricircle. During the Tricircle big-tent application, some TC members were
worried about API consistency if such a gateway layer exists, so it was not
accepted into OpenStack, though I know many cloud operators have expressed
the need for a single API entry point for multiple OpenStack instances. [/joehuang]

- Could you better explain the architecture? For instance: where is the
Controller? Are the compute nodes the same as in a single-site implementation?

[joehuang] For the architecture and workflow, you can refer to the slides and
video we presented at the OPNFV Beijing summit: video
https://www.youtube.com/watch?v=tbcc7-eZnkY , slides
https://docs.google.com/presentation/d/1WBdra-ZaiB-K8_m3Pv76o_jhylEqJXTTxzEZ-cu8u2A/
and
https://www.slideshare.net/JoeHuang7/shared-networks-to-support-vnf-high-availability-across-openstack-multiregion-deployment-77283728
I am not sure what you mean by Controller; Tricircle only provides plugins to
Neutron. A Neutron server plays a different role depending on its plugin: the
central Neutron, with the Tricircle central plugin, acts as the networking
coordinator for the multiple local Neutrons, which are installed with the
Tricircle local plugin. Each local Neutron works as usual with an ML2
plugin/OVS backend or an SDN controller backend; you just install one slim
hook layer, the Tricircle local Neutron plugin. You can try it through the
guide:
https://docs.openstack.org/developer/tricircle/installation-guide.html#multi-pod-installation-with-devstack
[/joehuang]

- Is it possible to connect distributed compute nodes with an SDN network, in 
order to use it as data plane?
[joehuang] Yes, it's possible with an SDN network. Currently we use the
in-tree OVS implementation to connect distributed compute nodes; Tricircle is
able to carry shadow port/shadow agent information to the other Neutron
servers, so an SDN controller can use this information to build the
cross-Neutron network. If the SDN controllers can communicate with each other
by themselves and establish the cross-Neutron network, that's also fine;
Tricircle supports this mode. During the OPNFV summit, Vikram and I had an
idea to see whether OVN can work together with Tricircle. Could you describe
your idea for the SDN network in more detail? Then I can check whether my
understanding is correct and provide comments. [/joehuang]

- If not, what kind of network is in the middle? I suppose that a network is
necessary to connect the different pods.

[joehuang] Please refer to the previous answer. [/joehuang]

Could you please clarify these points?
Thanks a lot

--
Silvia Fichera
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev]   [watcher] Nominate Yumeng Bao to the core    team

2017-06-27 Thread li.canwei2
+1

李灿伟 licanwei
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for core

2017-06-27 Thread 朱冰兵





+1
>> -Original Message-
>> From: Michał Jastrzębski [mailto:inc...@gmail.com]
>> Sent: Wednesday, June 14, 2017 10:46 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: [openstack-dev] [kolla][kolla-ansible] Proposing Surya (spsurya) for
>> core
>> 
>> Hello,
>> 
>> With great pleasure I'm kicking off another core voting to kolla-ansible and
>> kolla teams:) this one is about spsurya. Voting will be open for 2 weeks 
>> (till
>> 28th Jun).
>> 
>> Consider this mail my +1 vote, you know the drill:)
>> 
>> Regards,
>> Michal
>> 
>> 
>
>-
>duonghq
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral][deployment] how to add deployment roles

2017-06-27 Thread Dan Trainor
Hi, Steve -


> I think perhaps the confusion is because this was implemented in
> tripleoclient, and porting it to tripleo-common is not yet completed?
> (Alex can confirm the status of this but it was planned I think).
>
> Related ML discussion which includes links to the patches:
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-June/118157.html
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-June/118213.html
>
>
Thanks for digging this up.  I do vaguely remember seeing this.  This
points me more in the direction I'm looking for.

Thanks!
-dant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] removing domain configuration upload via keystone-manage

2017-06-27 Thread Lance Bragstad
Hi all,

Keystone has deprecated the domain configuration upload capability
provided through `keystone-manage`. We discussed its removal in today's
meeting [0] and wanted to send a quick note to the operator list. The
ability to upload a domain config into keystone was added as a stop-gap
until the API was marked as stable [1]. It seems as though file-based
domain configuration was only a band-aid until full API support was done.

Of the operators using the domain config API in keystone, how many are
backing their configurations with actual configuration files versus the API?


[0]
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-06-27-18.00.log.html#l-167
[1]
https://github.com/openstack/keystone/commit/a5c5f5bce812fad3c6c88a23203bd6c00451e7b3
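
For anyone unfamiliar with the API alternative: instead of running
`keystone-manage domain_config_upload`, the same per-domain options can be
set over the Identity v3 API. A rough sketch (the endpoint, token and domain
ID below are placeholders, not real values):

    import json
    import requests

    KEYSTONE = 'http://controller:5000/v3'   # assumed endpoint
    TOKEN = '<admin-token>'                  # placeholder
    DOMAIN_ID = '<domain-id>'                # placeholder

    body = {'config': {
        'identity': {'driver': 'ldap'},
        'ldap': {'url': 'ldap://ldap.example.org',
                 'user_tree_dn': 'ou=Users,dc=example,dc=org'},
    }}

    resp = requests.put(
        '%s/domains/%s/config' % (KEYSTONE, DOMAIN_ID),
        headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
        data=json.dumps(body))
    print(resp.status_code)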



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure trying to set dns_domain from command line

2017-06-27 Thread Lawrence J. Albinson
Hi Graham,

As ever, many thanks for clarifying that. And I shall put in a feature
request as you suggest.

In the meantime, presumably the missing pieces that I need are those
described in:

https://docs.openstack.org/ocata/networking-guide/config-dns-int.html

under

'Configuring OpenStack Networking for integration with an external
DNS service'.

and covering:

[default]
external_dns_driver = designate

[designate]
url =  etc 

Kind regards, Lawrence

On 27/06/17 16:57, Graham Hayes wrote:
> On 27/06/17 16:36, Lawrence J. Albinson wrote:
>> Hi Graham,
>>
>> Many thanks for the pointer. I hadn't added dns to the plugin list.
>>
>> I did, however, set the following:
>>
>> neutron_designate_enabled:  True
> Ah - unfortunately there are two ways of integrating Neutron + Designate
>
> The external DNS plugin, which allows you to use "--dns-domain" on a
> per-network basis, and "designate-sink".
>
> designate-sink was written before Designate was an official project, and
> uses notifications. It is very powerful, but requires deployers to
> write custom plugins for it to work well.
>
> What OSA is missing is "neutron external DNS integration" - which is the
> code that we use for "--dns-domain"
>
> If you file a request here it will go on to the to do list:
>
> https://blueprints.launchpad.net/openstack-ansible
>
> - Graham
>
>> I'm wondering if the two together will fix things. I shall know by the 
>> morning.
>>
>> Again, many thanks.
>>
>> Kind regards, Lawrence
>> 
>> From: Graham Hayes
>> Sent: 27 June 2017 16:14
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure 
>> trying to set dns_domain from command line
>>
>> On 27/06/17 15:01, Lawrence J. Albinson wrote:
>>> Hello Colleagues,
>>>
>>> I am trying to enable dynamic updating of DNSaaS when a port or VM is
>>> created/deleted.
>>>
>>> I have DNSaaS working with Bind9 as the back-end and I am able to
>>> manually create/update/delete entries with the openstack client and/or
>>> the designate client and see Bind9 reflect those changes.
>>>
>>> However, I am unable to set a dns_domain name for a network from the
>>> openstack CLI and/or the neutron CLI.
>>>
>>> I have tried the following:
>>>
>>> neutron net-update --dns-domain example.com
>>> 64b50baa-acd8-4269-8a3a-767b70c7d18d
>>> neutron net-update --dns-domain example.com public
>>> neutron net-update --dns-domain example.com.
>>> 64b50baa-acd8-4269-8a3a-767b70c7d18d
>>> neutron net-update --dns-domain example.com. public
>>>
>>> The response is always the same, namely:
>>>
>>> Unrecognized attribute(s) 'dns_domain'
>>> Neutron server returns request_ids:
>>> ['req-be15e08a-b3b0-458c-a045-ffac7ce3ebbd']
>>>
>>> Before I go searching through the Neutron source, does anyone know if
>>> this is a 'hole' in the Neutron API and, if so, has it been fixed after
>>> the commit point being used by openstack-ansible tag 15.1.3.
>>>
>>> Kind regards, Lawrence
>> Hi Lawrence,
>>
>> Did you enable the DNS plugin in neutron?
>>
>> Adding "dns" to the list here [0] should enable the --dns-domain
>> attribute.
>>
>> 0 -
>> https://github.com/openstack/openstack-ansible-os_neutron/blob/15.1.3/defaults/main.yml#L133
>>
>> However, it does not look like OpenStack Ansible code supports Neutron
>> calling Designate to update DNS Recordsets yet.
>>
>> It looks like a missing feature - I am not sure how OSA deals with
>> feature requests, but if you are on IRC they use #openstack-ansible
>>
>> Thanks
>>
>> - Graham
>>
>>
>>> Lawrence J Albinson
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Mike Perez
On 22:18 Jun 27, Jeremy Stanley wrote:
> On 2017-06-27 14:34:46 -0700 (-0700), Mike Perez wrote:
> [...]
> > ## PTG
> > 
> > New contributors should be participating in the sessions for a
> > project and get to know who are the people leading those efforts.
> > People leading efforts want help. Whether it be documentation for
> > the thing, implementation, testing, etc. Working with the people
> > involved is a good way to get to know that feature or change. The
> > people leading the effort are now invested in YOU succeeding
> > because if you don't succeed, they don't either. Once you succeed
> > in the feature or change with someone, you have recognition in
> > people knowing you are responsible for it in some way. This is an
> > awesome feeling and will lead you to either improving it more or
> > going onto other things. While you're only understanding of a
> > project is that thing, you may get curious and move onto other
> > parts of the code. This leads to someone in the future leading
> > efforts for new contributors!
> [...]
> 
> If you mean "junior" contributors who have maybe gotten a small
> change merged or fixed a minor bug (but have at least figured out
> what team they probably want to spend a lot of their time helping
> on) then I agree. To me "new" contributors are the ones who still
> need the basics of how to submit a patch, where to find bug reports,
> or whatever and those are being catered to at the Forum (via
> OpenStack 101, Upstream Institute, project onboarding), not the PTG.

Yes, I'm talking about both junior and new contributors in the way you're
defining them. If they're new, they can still participate in discussions and
meet the people leading an effort. Processes around Launchpad/Gerrit etc. are
all just tools that can be learned later, by looking at the new contributor
portal for the project they selected to work with, following its instructions,
and asking for help on the project's IRC channel when needed.

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Mike Perez
On 00:07 Jun 28, Ildiko Vancsa wrote:


 
> > On 2017. Jun 27., at 23:34, Mike Perez  wrote:
> > 
> > On 13:52 Jun 23, Michał Jastrzębski wrote:
> >> Great idea!
> >> 
> >> I would also throw in another issue new people often have (I had it too),
> >> namely what to contribute. Lots of people want to do something but
> >> don't quite know where to start.
> >> So a few ideas for a start:
> >> * List of triaged bugs
> >> * List of work items of large blueprints
> > 
> > IMO the triaged bugs/low hanging fruit thing seems to be still daunting for 
> > new
> > contributors. There's also not really much gratification or recognition for
> > what you did by the wider community sometimes. This is something I feel that
> > helps in having people come back to contribute.
> 
> From a maintenance perspective I would ensure newcomers are aware of how to
> find those, but not give them an exact list.
> 
> By having the Project On-boarding sessions, etc. and a bit more focus on
> coaching and mentoring we might be able to get some attention from projects
> regarding maintaining the list of low hanging fruit bugs. Sometimes that tag
> is not really verified and there are also cases when it does not get marked
> as fixed or obsolete. From earlier experience they are not always
> encouraging...
> 
> > This is going on a tangent of something else I have coming in the future but
> > I think there are a few ways a new contributor would come in:
> > 
> > ## PTG
> > 
> > New contributors should be participating in the sessions for a project and
> > get to know who are the people leading those efforts. People leading efforts
> > want help. Whether it be documentation for the thing, implementation, 
> > testing,
> > etc. Working with the people involved is a good way to get to know that 
> > feature
> > or change. The people leading the effort are now invested in YOU succeeding
> > because if you don't succeed, they don't either. Once you succeed in the
> > feature or change with someone, you have recognition in people knowing you 
> > are
> > responsible for it in some way. This is an awesome feeling and will lead 
> > you to
> > either improving it more or going onto other things. While your only
> > understanding of a project is that thing, you may get curious and move onto
> > other parts of the code. This leads to someone in the future leading efforts
> > for new contributors!
> 
> I think for this we need to encourage people to attend the Summit first and
> come to an On-boarding session if their target project has one. From my
> experience when I attended my first Design Summit I had no clue what’s going
> on and that can be very discouraging and I saw people for whom it was. And in
> my opinion we also cannot expect the project teams to baby sit new people on
> the PTG.
> 
> With that said, I agree we should have new people on this event, but I think
> we need to be more careful with describing and clarifying prerequisites and
> expectations, like basic knowledge of the area and experience or just to do
> their research before they come and they should know this event is not
> focusing on them.
> 
> The portal should be a great place to describe all this and give a list of
> best practices!

I agree. I just know people are going to come no matter how many warning
signs you throw in front of them. A quick intro to each session, saying that
it is going to move fast, has already been discussed, and is not exactly new
contributor friendly, but that if you are interested in the effort being
discussed, find . This is just an answer for those people that miss all
those warnings.

> > ## Forum
> > 
> > I would like to see our on-boarding rooms having time to introduce
> > current/future efforts happening in the project. Introduce the people behind
> > those efforts. Give a little time to break out into meet and greet to 
> > remember
> > friendly faces and do as mentioned above.
> 
> +1
> 
> > 
> > ## Internet
> > 
> > People may not be able to attend our events, but want to participate. Using
> > your idea of listing work items of large blueprints is an excellent! It 
> > would
> > be good if we could list those cleanly and who is leading it. Maybe 
> > Storyboard
> > will be able to help with this in the future Kendall?
> 
> Do we/can we have a tag for large blueprints? So we could teach people how to
> find this and give them a search link, etc.?

Either storyboard allows us to filter on the stuff the project teams have set
for a release, or we just use its API and build our own clean listing.
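
As a rough sketch of the second option, such a listing could be generated
with a small script against the StoryBoard REST API (the filter parameters
below are assumptions to illustrate the idea, so check the current API
reference before relying on them):

    import requests

    API = 'https://storyboard.openstack.org/api/v1'

    # Assumed filter parameters; adjust to whatever the API actually supports.
    resp = requests.get('%s/stories' % API,
                        params={'tags': 'low-hanging-fruit',
                                'status': 'active'})
    for story in resp.json():
        print(story['id'], story['title'])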


-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] OpenStack manuals project migration - Progress for TripleO

2017-06-27 Thread Dong Ma
I would like to help with some part.

Dong

2017-06-28 4:54 GMT+08:00 Emilien Macchi :

> Full background, context and details can be read here:
> http://specs.openstack.org/openstack/docs-specs/specs/
> pike/os-manuals-migration.html
>
> TL;DR: there is a massive cross-project effort which aims to migrate
> documentation out of a central repository and into project trees,
> managed by project teams instead of the OpenStack Manuals team (which is
> running with a low number of contributors at this time). (Alex, feel free
> to correct me if I said something wrong.)
>
> There is a list of things we need to do to achieve this goal, if
> possible by the end of Pike:
> https://etherpad.openstack.org/p/doc-migration-tracking
>
> Here's a first (basic) iteration:
> Switch release notes to use openstackdocstheme:
> https://review.openstack.org/#/q/topic:doc_migration+owner:%
> 22Emilien+Macchi+%253Cemilien%2540redhat.com%253E%22
> (for all TripleO projects).
> Note that TripleO doc already switched:
> https://docs.openstack.org/developer/tripleo-docs/
>
> We need to evaluate what other work needs to be done, I'll probably
> keep working on it during the following weeks but any help would be
> welcome.
> I'll do my best to keep you posted on this thread, weekly, so we can
> get feedback from docs experts, to make sure we're doing the right
> things.
> If you have time to help, please ping me directly.
>
> Thanks,
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Jeremy Stanley
On 2017-06-27 14:34:46 -0700 (-0700), Mike Perez wrote:
[...]
> ## PTG
> 
> New contributors should be participating in the sessions for a
> project and get to know who are the people leading those efforts.
> People leading efforts want help. Whether it be documentation for
> the thing, implementation, testing, etc. Working with the people
> involved is a good way to get to know that feature or change. The
> people leading the effort are now invested in YOU succeeding
> because if you don't succeed, they don't either. Once you succeed
> in the feature or change with someone, you have recognition in
> people knowing you are responsible for it in some way. This is an
> awesome feeling and will lead you to either improving it more or
> going onto other things. While your only understanding of a
> project is that thing, you may get curious and move onto other
> parts of the code. This leads to someone in the future leading
> efforts for new contributors!
[...]

If you mean "junior" contributors who have maybe gotten a small
change merged or fixed a minor bug (but have at least figured out
what team they probably want to spend a lot of their time helping
on) then I agree. To me "new" contributors are the ones who still
need the basics of how to submit a patch, where to find bug reports,
or whatever and those are being catered to at the Forum (via
OpenStack 101, Upstream Institute, project onboarding), not the PTG.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Kendall Nelson
Storyboard can definitely help with this! Each task in a story has an owner
and a project while the larger story's description could list who is in
charge of the larger implementation overall.

On Tue, Jun 27, 2017 at 4:34 PM Mike Perez  wrote:

> On 13:52 Jun 23, Michał Jastrzębski wrote:
> > Great idea!
> >
> > I would also throw in another issue new people often have (I had it too),
> > namely what to contribute. Lots of people want to do something but
> > don't quite know where to start.
> > So a few ideas for a start:
> > * List of triaged bugs
> > * List of work items of large blueprints
>
> IMO the triaged bugs/low hanging fruit thing seems to be still daunting
> for new
> contributors. There's also not really much gratification or recognition for
> what you did by the wider community sometimes. This is something I feel
> that
> helps in having people come back to contribute.
>
> This is going on a tangent of something else I have coming in the future
> but
> I think there are a few ways a new contributor would come in:
>
> ## PTG
>
> New contributors should be participating in the sessions for a project and
> get to know who are the people leading those efforts. People leading
> efforts
> want help. Whether it be documentation for the thing, implementation,
> testing,
> etc. Working with the people involved is a good way to get to know that
> feature
> or change. The people leading the effort are now invested in YOU succeeding
> because if you don't succeed, they don't either. Once you succeed in the
> feature or change with someone, you have recognition in people knowing you
> are
> responsible for it in some way. This is an awesome feeling and will lead
> you to
> either improving it more or going onto other things. While your only
> understanding of a project is that thing, you may get curious and move onto
> other parts of the code. This leads to someone in the future leading
> efforts
> for new contributors!
>
> ## Forum
>
> I would like to see our on-boarding rooms having time to introduce
> current/future efforts happening in the project. Introduce the people
> behind
> those efforts. Give a little time to break out into meet and greet to
> remember
> friendly faces and do as mentioned above.
>
> ## Internet
>
> People may not be able to attend our events, but want to participate. Using
> your idea of listing work items of large blueprints is an excellent! It
> would
> be good if we could list those cleanly and who is leading it. Maybe
> Storyboard
> will be able to help with this in the future Kendall?
>
> --
> Mike Perez
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Mike Perez
On 13:52 Jun 23, Michał Jastrzębski wrote:
> Great idea!
> 
> I would also throw in another issue new people often have (I had it too),
> namely what to contribute. Lots of people want to do something but
> don't quite know where to start.
> So a few ideas for a start:
> * List of triaged bugs
> * List of work items of large blueprints

IMO the triaged bugs/low hanging fruit thing seems to be still daunting for new
contributors. There's also not really much gratification or recognition for
what you did by the wider community sometimes. This is something I feel that
helps in having people come back to contribute.

This is going on a tangent of something else I have coming in the future but
I think there are a few ways a new contributor would come in:

## PTG

New contributors should be participating in the sessions for a project and
get to know who are the people leading those efforts. People leading efforts
want help. Whether it be documentation for the thing, implementation, testing,
etc. Working with the people involved is a good way to get to know that feature
or change. The people leading the effort are now invested in YOU succeeding
because if you don't succeed, they don't either. Once you succeed in the
feature or change with someone, you have recognition in people knowing you are
responsible for it in some way. This is an awesome feeling and will lead you to
either improving it more or going onto other things. While your only
understanding of a project is that thing, you may get curious and move onto
other parts of the code. This leads to someone in the future leading efforts
for new contributors!

## Forum

I would like to see our on-boarding rooms having time to introduce
current/future efforts happening in the project. Introduce the people behind
those efforts. Give a little time to break out into meet and greet to remember
friendly faces and do as mentioned above.

## Internet

People may not be able to attend our events, but want to participate. Using
your idea of listing work items of large blueprints is an excellent! It would
be good if we could list those cleanly and who is leading it. Maybe Storyboard
will be able to help with this in the future Kendall?

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Mike Perez
On 09:45 Jun 26, Alexandra Settle wrote:
> I think this is a good idea :) thanks Mike. We get a lot of people coming to
> the docs chan or ML asking for help/where to start and sometimes it’s
> difficult to point them in the right direction.
> 
> Just from experience working with contributor documentation, I’d avoid all
> screen shots if you can – updating them whenever the process changes
> (surprisingly often) is a lot of unnecessary technical debt.

I understand and agree. This was a big selling point to contributors I've
spoken to about this, though: avoiding walls of text makes it actually seem
doable. Perhaps having a small number of steps per page can still give the
reader a feeling they can finish it in five minutes or less?

> The docs team put a significant amount of effort in a few releases back
> writing a pretty comprehensive Contributor Guide. For the purposes you
> describe below, I imagine a lot of the content here could be adapted. The
> process of setting up for code and docs is exactly the same:
> http://docs.openstack.org/contributor-guide/index.html

Yes I've seen this content and do plan to adapt stuff over!

> 
> I also wonder if we could include a ‘what is openstack’ 101 for new
> contributors. I find that there is a *lot* of material out there, but it is
> often very hard to explain to people what each project does, how they all
> interact, why we install from different sources, why do we have official and
> unofficial projects etc. It doesn’t have to be seriously in-depth, but an
> overview that points people who are interested in the right directions. Often
> this will help people decide on what project they’d like to undertake.

Wonderful idea. I cc'd Anne from the OpenStack Foundation, who is helping with
this effort. We will be discussing soon how to incorporate 101 content.

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle] Tricircle vs Trio2o

2017-06-27 Thread Silvia Fichera
Hi all,
I would like to build up a multi-region openstack and I read that both
tricircle and trio2o are the most suitable solutions.
I have a few questions:
- from the wikis I couldn't deeply understand the differences between the
two: as far as I understood, tricircle deploys a shared neutron module,
while trio2o provides a gateway in the case of a single nova/glance module.
Is that right?
- Could you better explain the architecture? For instance: where is the
Controller? Are the compute nodes the same as in a single-site implementation?
- Is it possible to connect distributed compute nodes with an SDN network,
in order to use it as data plane?
- If not, what kind of network is in the middle? I suppose that a network
is necessary to connect the different pods.

Could you please clarify these points?
Thanks a lot



-- 
Silvia Fichera
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread John Griffith
On Wed, Jun 21, 2017 at 8:59 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> One of the areas identified as a priority by the Board + TC + UC
> workshop in March was the need to better close the feedback loop and
> make unanswered requirements emerge. Part of the solution is to ensure
> that groups that look at specific use cases, or specific problem spaces
> within OpenStack get participation from a wide spectrum of roles, from
> pure operators of OpenStack clouds, to upstream developers, product
> managers, researchers, and every combination thereof. In the past year
> we reorganized the Design Summit event, so that the design / planning /
> feedback gathering part of it would be less dev- or ops-branded, to
> encourage participation of everyone in a neutral ground, based on the
> topic being discussed. That was just a first step.
>
> In OpenStack we have a number of "working groups", groups of people
> interested in discussing a given use case, or addressing a given problem
> space across all of OpenStack. Examples include the API working group,
> the Deployment working group, the Public clouds working group, the
> Telco/NFV working group, or the Scientific working group. However, for
> governance reasons, those are currently set up either as a User
> Committee working group[1], or a working group depending on the
> Technical Committee[2]. This branding of working groups artificially
> discourages participation from one side to the others group, for no
> specific reason. This needs to be fixed.
>
> We propose to take a page out of Kubernetes playbook and set up "SIGs"
> (special interest groups), that would be primarily defined by their
> mission (i.e. the use case / problem space the group wants to
> collectively address). Those SIGs would not be Ops SIGs or Dev SIGs,
> they would just be OpenStack SIGs. While possible some groups will lean
> more towards an operator or dev focus (based on their mission), it is
> important to encourage everyone to join in early and often. SIGs could
> be very easily set up, just by adding your group to a wiki page,
> defining the mission of the group, a contact point and details on
> meetings (if the group has any). No need for prior vetting by any
> governance body. The TC and UC would likely still clean up dead SIGs
> from the list, to keep it relevant and tidy. Since they are neither dev
> or ops, SIGs would not use the -dev or the -operators lists: they would
> use a specific ML (openstack-sigs ?) to hold their discussions without
> cross-posting, with appropriate subject tagging.
>
> Not everything would become a SIG. Upstream project teams would remain
> the same (although some of them, like Security, might turn into a SIG).
> Teams under the UC that are purely operator-facing (like the Ops Tags
> Team or the AUC recognition team) would likewise stay as UC subteams.
>
> Comments, thoughts ?
>
> [1]
> https://wiki.openstack.org/wiki/Governance/Foundation/
> UserCommittee#Working_Groups_and_Teams
> [2] https://wiki.openstack.org/wiki/Upstream_Working_Groups
>
> --
> Melvin Hillsman & Thierry Carrez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

​I don't think this is necessarily where you're heading on this, but one
thing that's been kinda nice IMO when working in some other upstream communities
(like K8s) that use the SIG model.  There's one channel for something like
storage, and to your point it's for everybody and anybody interested in
that topic.  Whether they're a developer, deployer or end-user.  I actually
think this works really well because it ensures that the same people that
are developing code are also directly exposed to and interacting with
various consumers of their code.

It also means people that will be consuming the code also may actually get
to contribute directly to the development process.  This would be a huge
win in my opinion.  The example is that rather than having a cinder channel
just for dev related conversations and a general openstack channel for
support and questions, throw all Cinder related things into a single
channel.  This means devs are actually in touch with ops and users which is
something that I think would be extremely beneficial.

​
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][stable][ptls] Tagging mitaka as EOL

2017-06-27 Thread Jeremy Stanley
On 2017-06-17 01:20:28 +1000 (+1000), Joshua Hesketh wrote:
[...]
> I'm happy to help do this if you'd like. Otherwise the script I've
> used for the last few retirements is here:
> http://git.openstack.org/cgit/openstack-infra/release-tools/tree/eol_branch.sh

That would be really appreciated if you have the available
bandwidth. If not, I'm going to try to find an opportunity to
babysit it sometime later this week.
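
For anyone curious, the retirement itself boils down, per repository, to
roughly the following (a simplified sketch of the idea, not the actual
eol_branch.sh contents; deleting the branch also needs Gerrit permissions
that only admins have):

    # Sketch: tag the tip of the stable branch as <series>-eol, then drop
    # the branch. Repo name, remote and tag message are placeholders.
    import subprocess

    def eol_repo(repo, branch='stable/mitaka', tag='mitaka-eol'):
        name = repo.split('/')[-1]
        subprocess.check_call(['git', 'clone',
                               'https://git.openstack.org/%s' % repo])
        subprocess.check_call(['git', 'tag', '-s', '-m', 'EOL', tag,
                               'origin/%s' % branch], cwd=name)
        subprocess.check_call(['git', 'push', 'origin', tag], cwd=name)
        # Deleting the remote branch needs delete rights in the Gerrit ACL.
        subprocess.check_call(['git', 'push', 'origin', '--delete', branch],
                              cwd=name)

    eol_repo('openstack/example-project')   # repo name is a placeholder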

> I believe the intention was to add some hardening around that
> script and automate it. However I think it was put on hold
> awaiting a new gerrit.. either that or nobody took it up.

Fingers crossed that we'll be able to switch to Gerrit 2.13 soon and
resume that much needed development effort.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread Jeremy Stanley
On 2017-06-27 15:42:05 +0200 (+0200), Thierry Carrez wrote:
> Sean Dague wrote:
[...]
> > I wonder if we're going down this path, if some kind of tooling like
> > standard tags for issues/patches should be added to the mix to help gain
> > the effectiveness that the k8s team seems to have here.
> 
> For Launchpad/Storyboard we could totally reuse tags. Absence of tags in
> gerrit bites us here (and in other places too). I know that was a
> planned feature, does anyone have updated status on it ?
[...]

Gerrit's "hashtag" feature relies on their new NoteDb backend and
new PolyGerrit frontend. It looks like we'll need to be running 2.14
at a minimum to have the working WebUI so that people can
conveniently set hashtags on changes. Due to a number of factors,
we're not planning to upgrade to 2.14 just yet (it was only released
two months ago, requires newer Java which in turn requires a newer
operating system) and are instead underway with testing the
stable-2.13 branch tip for our upcoming upgrade.

This aside, you could probably get somewhere with a combination of
commit message footers and topics, or some reverse-mapping from
tagged bugs/stories using, e.g., reviewday.
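
As an example of the topic-based approach, open changes sharing a topic can
already be listed with a small script against the Gerrit REST API (a rough
sketch; the topic is just an example):

    import json
    import requests

    GERRIT = 'https://review.openstack.org'

    resp = requests.get('%s/changes/' % GERRIT,
                        params={'q': 'topic:doc_migration status:open'})
    # Gerrit prefixes JSON responses with ")]}'" to prevent XSSI; strip the
    # first line before parsing.
    changes = json.loads(resp.text.split('\n', 1)[1])
    for change in changes:
        print(change['_number'], change['project'], change['subject'])
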
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] OpenStack manuals project migration - Progress for TripleO

2017-06-27 Thread Emilien Macchi
Full background, context and details can be read here:
http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html

TL;DR: there is a massive cross-project effort which aims to migrate
documentation out of a central repository and into project trees,
managed by project teams instead of the OpenStack Manuals team (which is
running with a low number of contributors at this time). (Alex, feel free
to correct me if I said something wrong.)

There is a list of things we need to do to achieve this goal, if
possible by the end of Pike:
https://etherpad.openstack.org/p/doc-migration-tracking

Here's a first (basic) iteration:
Switch release notes to use openstackdocstheme:
https://review.openstack.org/#/q/topic:doc_migration+owner:%22Emilien+Macchi+%253Cemilien%2540redhat.com%253E%22
(for all TripleO projects).
Note that TripleO doc already switched:
https://docs.openstack.org/developer/tripleo-docs/

We need to evaluate what other work needs to be done, I'll probably
keep working on it during the following weeks but any help would be
welcome.
I'll do my best to keep you posted on this thread, weekly, so we can
get feedback from docs experts, to make sure we're doing the right
things.
If you have time to help, please ping me directly.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] EXT: [octavia] scheduling webex to discuss flavor spec (https://review.openstack.org/#/c/392485/)

2017-06-27 Thread Carlos Puga
BEGIN:VCALENDAR
METHOD:REQUEST
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Central Standard Time
BEGIN:STANDARD
DTSTART:16010101T02
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=1SU;BYMONTH=11
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T02
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=2SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN=Carlos Puga:MAILTO:carlos.p...@walmart.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN=OpenStack 
 Development Mailing List (not for usage questions):MAILTO:openstack-dev@li
 sts.openstack.org
DESCRIPTION;LANGUAGE=en-US:Octavia Team\,\n\n\nThis Webex meet up is to dis
 cuss the flavor spec (https://review.openstack.org/#/c/392485/).  Please r
 eview the spec prior to this meeting so that we can make the most of the t
 ime.\n\n\nThank you\,\nCarlos Puga\n-- Do not delete or change any of the 
 following text. --\n\n\n*   Meeting Number: 740 014 060\n*   Meeting Passw
 ord: 1234\n\nClick to Join Meeting\n\n\n  1.  Click above to join th
 e meeting.\n  2.  You can also join by accessing http://walmart.webex.com 
 or using the Webex mobile app in the Google Play or Apple Store\n*
Meeting Number: 740 014 060\n*   Meeting Password: 1234\nJoin f
 rom a video conferencing system or application\nDial 740014...@walmart.web
 ex.com\nInternal Walmart video conference
  rooms: Simply dial the 9-digit meeting number\n\n\nJoin by phone\n+1-855-
 797-9485 US Toll free\n+1-415-655-0002 US Toll\nAccess code: 740 014 060\n
 Global call-in numbers  |  Toll-free calling restrictions
 \n\n\nTo Mute/Unmute 
 press *6\n\nAdditional Global Call-in Numbers for Walmart Associates\n\nWalmart Asso
 ciates: Visit the help site for training videos\, FAQs\, and support discu
 ssions\n-
 --\nIf you need assistance during the meeting\
 , please call:\nUnited States: 3-8866 (1-479-273-8866)\nGlobal Tech Suppor
 t Numbers \n-
 --\n\nIMPORTANT NOTICE: Please not
 e that this WebEx service allows audio and other information sent during t
 he session to be recorded\, which may be discoverable in a legal matter. B
 y joining this session\, you automatically consent to such recordings. If 
 you do not consent to being recorded\, discuss your concerns with the host
  or do not join the session..\n
UID:73A2A4BD-108C-4223-A2DB-60F59549944D
SUMMARY;LANGUAGE=en-US:EXT: [openstack-dev] [octavia] scheduling webex to d
 iscuss flavor spec (https://review.openstack.org/#/c/392485/)
DTSTART;TZID=Central Standard Time:20170629T10
DTEND;TZID=Central Standard Time:20170629T11
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170627T204756Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:0
LOCATION;LANGUAGE=en-US:WebEx 
X-MICROSOFT-CDO-APPT-SEQUENCE:0
X-MICROSOFT-CDO-OWNERAPPTID:2115427869
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:0
X-MICROSOFT-DISALLOW-COUNTER:FALSE
BEGIN:VALARM
DESCRIPTION:REMINDER
TRIGGER;RELATED=START:-PT15M
ACTION:DISPLAY
END:VALARM
END:VEVENT
END:VCALENDAR
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][mistral][deployment] how to add deployment roles

2017-06-27 Thread Steven Hardy
Hi Dan,

On Tue, Jun 27, 2017 at 9:19 PM, Dan Trainor  wrote:
> Hi -
>
> I'm looking for the glue that populates the overcloud role list.
>
> I can get a list of roles via 'openstack overcloud role list', however I'm
> looking to create new roles to incorporate in to this list.
>
> I got as far as using 'mistral action-update' against what I believe to be
> the proper action (tripleo.role.list) but am not sure what to use as the
> source of what I would be updating, nor am I finding any information about
> how that runs and where it gets its data from.  I also had a nice exercise
> pruning the output of 'mistral action-*' commands, which was pretty
> insightful and helped me home in on what I was looking for, but I am
> still somewhat uncertain.

I think perhaps the confusion is because this was implemented in
tripleoclient, and porting it to tripleo-common is not yet completed?
(Alex can confirm the status of this but it was planned I think).

Related ML discussion which includes links to the patches:

http://lists.openstack.org/pipermail/openstack-dev/2017-June/118157.html

http://lists.openstack.org/pipermail/openstack-dev/2017-June/118213.html

HTH,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][mistral][deployment] how to add deployment roles

2017-06-27 Thread Dan Trainor
Hi -

I'm looking for the glue that populates the overcloud role list.

I can get a list of roles via 'openstack overcloud role list', however I'm
looking to create new roles to incorporate in to this list.

I got as far as using 'mistral action-update' against what I believe to be
the proper action (tripleo.role.list) but am not sure what to use as the
source of what I would be updating, nor am I finding any information about
how that runs and where it gets its data from.  I also had a nice exercise
pruning the output of 'mistral action-*' commands, which was pretty
insightful and helped me home in on what I was looking for, but I am
still somewhat uncertain.

Pretty sure I'm missing a few details along the way here, too.

Can someone please shed some light on this so I can have a better
understanding of the process?

Thanks!
-dant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] New upgrade test tool proposal.

2017-06-27 Thread Sean Dague
On 06/27/2017 03:19 PM, Dean Troyer wrote:
> On Tue, Jun 27, 2017 at 1:37 PM, Jay Pipes  wrote:
>> Hi Castulo, sorry for the delayed response on this. Has your team moved
>> forward on any of this?
> 
> IIRC this work was impacted by the OSIC shutdown, I believe it is not
> currently on anyone's radar.
> 
>> What about the Grenade testing framework that uses devstack as its
>> deployment system was not useful or usable for you?
> 
> I can take at least part of the blame in encouraging them to not
> attempt to leverage Grenade directly.  Grenade needs to be replaced as
> it has far out-lived its expected life[0].
> 
> Grenade was built to do static in-place upgrades and the fact that it
> has been pushed as far as it has is a happy surprise to me.  However,
> it is fundamentally limited in its abilities as a test orchestrator,
> implementing robust multi-node capabilities and the granularity that
> is required to properly do upgrade testing really needs a reboot.  In
> a well-funded world that would include replacing DevStack too, which
> while nice is not strictly necessary to achieve the testing goals they
> had.
> 
> The thing that Grenade and DevStack have going for them besides
> inertia is that they are not otherwise tied to any deployment
> strategy.  Starting over from scratch really is not an option at this
> point, something existing really does need to be leveraged even though
> it may hurt some feelings along the way for the project(s) not chosen.
> 
> dt
> 
> 
> [0] Seriously, I never expected Grenade (or DevStack for that matter)
> to have survived this long, but they have mostly because they were/are
> just barely good enough that nobody wants to fund replacing them.

Well, we also lengthened their life and usefulness by adding an external
plugin interface that lets people leverage what we had without having to
wait on review queues.
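
For anyone unfamiliar with that interface, an in-tree grenade plugin is
roughly a devstack/upgrade/ directory in the project repo (layout from
memory, so check the grenade docs before relying on it):

  yourproject/
    devstack/
      upgrade/
        settings       # register the plugin, declare base/target services
        shutdown.sh    # stop the old services before the upgrade
        upgrade.sh     # install, configure and start the new version
        resources.sh   # optional: create/verify/destroy resources that must survive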

But, I also agree that grenade is largely pushed to its limits. That
also includes the fact that I'm more or less the only "active" reviewer
at this point. The only way this is even vaguely tenable is by the
complexity being capped so it's straightforward enough to think through
implications of patches.

The complicated upgrade orchestration including multiple things
executing at the same time, breaks that. It's cool if someone wants to
do it, however it definitely needs more dedicated folks on the review
side to be successful.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] New upgrade test tool proposal.

2017-06-27 Thread Dean Troyer
On Tue, Jun 27, 2017 at 1:37 PM, Jay Pipes  wrote:
> Hi Castulo, sorry for the delayed response on this. Has your team moved
> forward on any of this?

IIRC this work was impacted by the OSIC shutdown, I believe it is not
currently on anyone's radar.

> What about the Grenade testing framework that uses devstack as its
> deployment system was not useful or usable for you?

I can take at least part of the blame in encouraging them to not
attempt to leverage Grenade directly.  Grenade needs to be replaced as
it has far out-lived its expected life[0].

Grenade was built to do static in-place upgrades and the fact that it
has been pushed as far as it has is a happy surprise to me.  However,
it is fundamentally limited in its abilities as a test orchestrator;
implementing robust multi-node capabilities and the granularity that
is required to properly do upgrade testing really needs a reboot.  In
a well-funded world that would include replacing DevStack too, which
while nice is not strictly necessary to achieve the testing goals they
had.

The thing that Grenade and DevStack have going for them besides
inertia is that they are not otherwise tied to any deployment
strategy.  Starting over from scratch really is not an option at this
point, something existing really does need to be leveraged even though
it may hurt some feelings along the way for the project(s) not chosen.

dt


[0] Seriously, I never expected Grenade (or DevStack for that matter)
to have survived this long, but they have mostly because they were/are
just barely good enough that nobody wants to fund replacing them.

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stable][release] Last release date vs End of Life date

2017-06-27 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2017-06-27 16:51:37 +1000:
> Hi all,
> Up 'til now we haven't set a last release date for a stable branch
> approaching end of life.  It seems like formalizing that would be a good
> thing.
> 
> This comes up as we need time to verify that said release integrates
> well (at least doesn't break) said branch.  So should we define a date
> for the last release for *libraries*? Services are less critical as we're
> always testing the HEAD of that branch.
> 
> I'd suggest it be 2 weeks before EOL date.  Thoughts?
> 
> Yours Tony.

That makes sense.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] New upgrade test tool proposal.

2017-06-27 Thread Jay Pipes

On 04/05/2017 04:22 PM, Martinez, Castulo wrote:

Hi,

As you might know, the TC introduced new tags [1] for upgrade processes
quite some time ago, which reflect the level of maturity of an upgrade.
Therefore there is a growing need for test tools that help to exercise
and validate the upgrade approaches defined by the community.

During the PTG at Atlanta there were discussions around options for
covering this gap in testing tools, one proposal was to add this
functionality to Grenade, but this idea was dismissed since it was
determined that Grenade was originally designed for a different purpose
and is already pushing its limits, so the consensus was towards
creating a new tool to fill this gap.

We are proposing a new toolset for testing an OpenStack environment
before, during, and after an upgrade process. The toolset answers the
question "how does OpenStack behave across upgrades from one release N
to a release N+1, or from the latest official release to master?". It
also provides information that can be used to assess if an upgrade
complies with the requirements for being recognized as a rolling
upgrade, a zero downtime upgrade or a zero impact upgrade.

Before starting this effort, we would like to hear feedback from
everybody, are there any concerns with this approach? Are we missing
something?

You can find details of this proposal here:
https://review.openstack.org/#/c/449295/3


Hi Castulo, sorry for the delayed response on this. Has your team moved 
forward on any of this?


From first glance at the spec, it does seem that you are biting off a 
lot -- deploying, healthchecking, upgrading, monitoring, data/report 
storage, and configuration management are all parts of this tool.


What about the Grenade testing framework that uses devstack as its 
deployment system was not useful or usable for you?


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Joshua Harlow

Boris Pavlovic wrote:


Overall it would take 1-2 days for people not familiar with OpenStack.

What about making a "Sign-Up" page:

1) A few steps: provide username, contact info, agreement, SSH key (and it
will do all the work for you: set up Gerrit, OpenStack, ...)
2) After finishing the form, one gets instructions for their OS on how to
set up and run git review properly
3) Maybe a few tutorials (how to find a bug, how to test it, and where the
docs, devstack, ... are)


Sounds nice.

I wouldn't mind this as I also saw how painful it was (with the same 
intern).




That would simplify the onboarding process...

Best regards,
Boris Pavlovic

On Mon, Jun 26, 2017 at 2:45 AM, Alexandra Settle <a.set...@outlook.com> wrote:

I think this is a good idea :) thanks Mike. We get a lot of people
coming to the docs chan or ML asking for help/where to start and
sometimes it’s difficult to point them in the right direction.

Just from experience working with contributor documentation, I’d
avoid all screen shots if you can – updating them whenever the
process changes (surprisingly often) is a lot of unnecessary
technical debt.

The docs team put a significant amount of effort in a few releases
back writing a pretty comprehensive Contributor Guide. For the
purposes you describe below, I imagine a lot of the content here
could be adapted. The process of setting up for code and docs is
exactly the same:
http://docs.openstack.org/contributor-guide/index.html
 

I also wonder if we could include a ‘what is openstack’ 101 for new
contributors. I find that there is a **lot** of material out there,
but it is often very hard to explain to people what each project
does, how they all interact, why we install from different sources,
why do we have official and unofficial projects etc. It doesn’t have
to be seriously in-depth, but an overview that points people who are
interested in the right directions. Often this will help people
decide on what project they’d like to undertake.

Cheers,

Alex

*From: *Mike Perez <thin...@gmail.com>
*Reply-To: *"OpenStack Development Mailing List (not for usage
questions)" <openstack-dev@lists.openstack.org>
*Date: *Friday, June 23, 2017 at 9:17 PM
*To: *OpenStack Development Mailing List
<openstack-dev@lists.openstack.org>
*Cc: *Wes Wilson <w...@openstack.org>, "ild...@openstack.org"
<ild...@openstack.org>, "knel...@openstack.org" <knel...@openstack.org>
*Subject: *[openstack-dev] [docs][all][ptl] Contributor Portal and
Better New Contributor On-boarding

Hello all,

Every month we have people asking on IRC or the dev mailing list
having interest in working on OpenStack, and sometimes they're given
different answers from people, or worse, no answer at all. 

Suggestion: lets work our efforts together to create some common
documentation so that all teams in OpenStack can benefit.

First it’s important to note that we’re not just talking about code
projects here. OpenStack contributions come in many forms such as
running meet ups, identifying use cases (product working group),
documentation, testing, etc. We want to make sure those potential
contributors feel welcomed too!

What is common documentation? Things like setting up Git, the many
accounts you need to setup to contribute (gerrit, launchpad,
OpenStack foundation account). Not all teams will use some common
documentation, but the point is one or more projects will use them.
Having the common documentation worked on by various projects will
better help prevent duplicated efforts, inconsistent documentation,
and hopefully just more accurate information.

A team might use special tools to do their work. These can also be
integrated in this idea as well.

Once we have common documentation we can have something like:

 1. Choose your own adventure: I want to contribute by code

 2. What service type are you interested in? (Database, Block
storage, compute)

 3. Here’s step-by-step common documentation to setting up Git,
IRC, Mailing Lists, Accounts, etc.

 4. A service type project might choose to also include
additional documentation in that flow for special tools, etc.



Important things to note in this flow:

 * How do you want to contribute?

 * Here are **clear** names that identify the team. Not code
names like Cloud Kitty, Cinder, etc.

 * The documentation sh

Re: [openstack-dev] [watcher] Nominate Yumeng Bao to the core team

2017-06-27 Thread Shedimbi, Prudhvi Rao
+1. Keep it up Yumeng!!

- Pru

From: "Чадин Александр (Alexander Chadin)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, June 27, 2017 at 6:43 AM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [watcher] Nominate Yumeng Bao to the core team

Hi watcher folks,

I’d like to nominate Yumeng Bao to the core team. She has made a lot of 
contributions including specifications,
features and bug fixes. Yumeng has attended PTG and Summit with her 
presentation related to the Watcher.
Yumeng is active on IRC channels and takes part in weekly meetings as well.

Please, vote with +1/-1.

Best Regards,
_
Alexander Chadin
OpenStack Developer
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-27 Thread Jeremy Stanley
On 2017-06-26 15:27:21 +0200 (+0200), Thierry Carrez wrote:
> Flavio Percoco wrote:
> > [...]
> > Not being able to easily ping someone during a meeting is kind
> > of a bummer but I'd argue that assuming someone is in the
> > meeting channel and available at all times is a mistake to begin
> > with.
> 
> I think people can be pinged by PM or on #openstack-dev, it's just a
> habit to take. It's just that there are cases where people passively
> mention you, without going up to a formal ping -- I usually go back
> later to that person to answer the issue they informally raised. We'll
> lose that, but it's minor enough.
[...]

By lurking in official meeting channels I'm often able to jump
straight into a discussion when someone asks me a question in a
meeting I wouldn't normally attend but am around during. I can see
the discussion instantly up to that point as opposed to
inconveniencing the attendees by asking to have everything repeated
for me after /join'ing. The channel logs on eavesdrop.o.o aren't
really a substitute there because the batched flushing in the bot
delays the Web-based logs by some number of minutes.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-27 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2017-06-27 17:55:57 +:
> On 2017-06-22 16:27:34 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > The new job is configured to update the docs for all repos every
> > time a patch is merged, not just when we tag releases. The server
> > projects have been working that way, but this is different for some
> > of the libraries, especially the clients.
> [...]
> 
> I think the past concern had been that since the default document
> presented was the latest one built from the master branch tip rather
> than a redirect to the documentation for the latest release, readers
> might get confused when seeing options or behaviors documented which
> didn't match the software they had downloaded.

That makes sense. The openstackdocstheme makes it easy to link to
specific versions of documentation, so we should be able to address this
concern that way. We will also have series-specific landing pages
linking directly to the appropriate guides.

Doug
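
For projects doing that switch, a minimal conf.py for the 1.11-era theme
looks roughly like the sketch below; the repository and bug project names
are placeholders, and the exact option set should be checked against the
theme's own documentation:

  # doc/source/conf.py (sketch)
  extensions = ['openstackdocstheme']
  html_theme = 'openstackdocs'

  # consumed by the theme for the "Report a bug" link and version metadata
  repository_name = 'openstack/example-project'   # placeholder
  bug_project = 'example-project'                 # placeholder Launchpad project
  bug_tag = 'docs'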

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][all][ptl] Contributor Portal and Better New Contributor On-boarding

2017-06-27 Thread Jeremy Stanley
On 2017-06-26 10:51:08 -0700 (-0700), Clark Boylan wrote:
> On Mon, Jun 26, 2017, at 10:31 AM, Boris Pavlovic wrote:
[...]
> > - When you try to contribute your first commit (if you already
> > created it, you won't be able until you do git commit --amend, so
> > git review will add change-id)
> 
> Git review should automatically do this last step for you if a change id
> is missing.

Except if you're trying to push multiple commits you made before
installing the hook (then you need to go back and --amend them one
by one so Gerrit will accept them). Note this may be similarly
confusing when we eventually switch from the ICLA to the DCO, where
Gerrit will start rejecting your commits if you weren't already in
the habit of putting signed-off-by footers in all your commit
messages. Actually more confusing because we can't reasonably
automate that step away for anyone (since it loses its legal intent
if we did).
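
For the onboarding docs being discussed, the first-commit flow that people
keep tripping over is roughly the following (a sketch; the project name is
a placeholder):

  pip install git-review
  git clone https://git.openstack.org/openstack/example-project
  cd example-project
  git review -s        # adds the gerrit remote and installs the Change-Id hook
  git checkout -b my-first-fix
  # ... edit, test ...
  git commit -s        # -s adds Signed-off-by, relevant if the DCO replaces the ICLA
  git review           # pushes the change to Gerrit

  # commits created before the hook was installed need a Change-Id added:
  git commit --amend --no-edit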

[...]
> I think that Jeremy (fungi) has work in progress to tie electoral rolls
> to foundation membership via an external lookup api that was recently
> added to the foundation membership site. This means that we shouldn't
> need to check that gerrit account info matches foundation account info
> at CLA signing tiem anymore (at least this is my understanding, Jeremy
> can correct me if I am wrong).
> 
> If this is the case it should make account setup much much simpler. You
> just add an ssh key and sign the cla without worrying about account
> details lining up.

Yes, but it _is_ going to significantly increase the risk of
contributors not qualifying to vote in technical elections or
receive event registration discounts if they choose (intentionally
or accidentally) not to join the foundation or list different E-mail
addresses there vs. in Gerrit. We ought to preserve some means of
making sure they're still aware during onboarding that they might
want to take those extra steps.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-27 Thread Jeremy Stanley
On 2017-06-22 16:27:34 -0400 (-0400), Doug Hellmann wrote:
[...]
> The new job is configured to update the docs for all repos every
> time a patch is merged, not just when we tag releases. The server
> projects have been working that way, but this is different for some
> of the libraries, especially the clients.
[...]

I think the past concern had been that since the default document
presented was the latest one built from the master branch tip rather
than a redirect to the documentation for the latest release, readers
might get confused when seeing options or behaviors documented which
didn't match the software they had downloaded.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Chris Friesen

On 06/27/2017 09:36 AM, Henning Schild wrote:

Am Tue, 27 Jun 2017 09:28:34 -0600
schrieb Chris Friesen :



Once you use "isolcpus" on the host, the host scheduler won't "float"
threads between the CPUs based on load.  To get the float behaviour
you'd have to not isolate the pCPUs that will be used for emulator
threads, but then you run the risk of the host running other work on
those pCPUs (unless you use cpusets or something to isolate the host
work to a subset of non-isolcpus pCPUs).


With openstack you use libvirt and libvirt uses cgroups/cpusets to get
those threads onto these cores.


Right.  I misremembered.  We are currently using "isolcpus" on the compute node 
to isolate the pCPUs used for packet processing, but the pCPUs used for guests 
are not isolated.


Chris
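
For context, the realtime knobs this thread keeps referring to look roughly
like the following on a Pike-era setup; all of the values are illustrative:

  # host side: keep some pCPUs away from the host scheduler and tell nova
  # which pCPUs it may hand out to guests
  #   kernel cmdline:      isolcpus=2-7
  #   /etc/nova/nova.conf: [DEFAULT] vcpu_pin_set = 2-7

  # guest side: dedicated pCPUs, every vCPU realtime except vCPU0, which is
  # left for housekeeping and I/O as described above
  openstack flavor create rt.small --vcpus 4 --ram 4096 --disk 20
  openstack flavor set rt.small \
    --property hw:cpu_policy=dedicated \
    --property hw:cpu_realtime=yes \
    --property hw:cpu_realtime_mask=^0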

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure trying to set dns_domain from command line

2017-06-27 Thread Graham Hayes
On 27/06/17 16:36, Lawrence J. Albinson wrote:
> Hi Graham,
> 
> Many thanks for the pointer. I hadn't added dns to the plugin list.
> 
> I did, however, set the following:
> 
> neutron_designate_enabled:  True

Ah - unfortunately there are two ways of integrating Neutron + Designate:

the external DNS plugin, which allows you to use "--dns-domain" on a
per-network basis, and "designate-sink".

designate-sink was written before Designate was an official project, and
uses notifications. It is very powerful, but requires deployers to
write custom plugins for it to work well.

What OSA is missing is "neutron external DNS integration" - which is the
code that we use for "--dns-domain"

If you file a request here it will go on to the to do list:

https://blueprints.launchpad.net/openstack-ansible

- Graham
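
For anyone hitting the same "Unrecognized attribute" error, the pieces
involved look roughly like this; the option names are from the Pike-era
networking guide and the values are illustrative, so double-check them
against your release (in OSA, the plugin list Graham links above is the one
to extend with "dns"):

  # ml2_conf.ini -- load the dns extension driver so dns_domain is accepted
  [ml2]
  extension_drivers = port_security,dns

  # neutron.conf -- point the external DNS driver at Designate
  [DEFAULT]
  external_dns_driver = designate

  [designate]
  url = http://192.0.2.10:9001/v2   # illustrative endpoint
  allow_reverse_dns_lookup = True
  # plus service credentials for Designate, omitted here

  # after restarting neutron-server, the failing call should be accepted:
  openstack network set --dns-domain example.com. public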

> I'm wondering if the two together will fix things. I shall know by the 
> morning.
> 
> Again, many thanks.
> 
> Kind regards, Lawrence
> 
> From: Graham Hayes
> Sent: 27 June 2017 16:14
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure 
> trying to set dns_domain from command line
> 
> On 27/06/17 15:01, Lawrence J. Albinson wrote:
>> Hello Colleagues,
>>
>> I am trying to enable dynamic updating of DNSaaS when a port or VM is
>> created/deleted.
>>
>> I have DNSaaS working with Bind9 as the back-end and I am able to
>> manually create/update/delete entries with the openstack client and/or
>> the designate client and see Bind9 reflect those changes.
>>
>> However, I am unable to set a dns_domain name for a network from the
>> openstack CLI and/or the neutron CLI.
>>
>> I have tried the following:
>>
>> neutron net-update --dns-domain example.com
>> 64b50baa-acd8-4269-8a3a-767b70c7d18d
>> neutron net-update --dns-domain example.com public
>> neutron net-update --dns-domain example.com.
>> 64b50baa-acd8-4269-8a3a-767b70c7d18d
>> neutron net-update --dns-domain example.com. public
>>
>> The response is always the same, namely:
>>
>> Unrecognized attribute(s) 'dns_domain'
>> Neutron server returns request_ids:
>> ['req-be15e08a-b3b0-458c-a045-ffac7ce3ebbd']
>>
>> Before I go searching through the Neutron source, does anyone know if
>> this is a 'hole' in the Neutron API and, if so, has it been fixed after
>> the commit point being used by openstack-ansible tag 15.1.3.
>>
>> Kind regards, Lawrence
> 
> Hi Lawrence,
> 
> Did you enable the DNS plugin in neutron?
> 
> Adding "dns" to the list here [0] should enable the --dns-domain
> attribute.
> 
> 0 -
> https://github.com/openstack/openstack-ansible-os_neutron/blob/15.1.3/defaults/main.yml#L133
> 
> However, it does not look like OpenStack Ansible code supports Neutron
> calling Designate to update DNS Recordsets yet.
> 
> It looks like a missing feature - I am not sure how OSA deals with
> feature requests, but if you are on IRC they use #openstack-ansible
> 
> Thanks
> 
> - Graham
> 
> 
>>
>> Lawrence J Albinson
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-27 Thread Emilien Macchi
On Mon, Jun 26, 2017 at 9:58 AM, Chris Dent  wrote:
> On Mon, 26 Jun 2017, Flavio Percoco wrote:
>
>> So, should we let teams to host IRC meetings in their own channels?
>
>
> Yes.

++

>> Thoughts?
>
>
> I think the silo-ing concern is, at least recently, not relevant on
> two fronts: IRC was never a good fix for that and silos gonna be
> silos.
>
> There are so many meetings and so many projects there already are
> silos and by encouraging people to use the mailing lists more we are
> more effectively enabling diverse access than IRC ever could,
> especially if the IRC-based solution is the impossible "always be on
> IRC, always use a bouncer, always read all the backlogs, always read
> all the meeting logs".
>
> The effective way for a team not to be a silo is for it to be
> better about publishing accessible summaries of itself (as in: make
> more email) and participating in cross project related reviews. If
> it doesn't do that, that's the team's loss.
>
> Synchronous communication is fine for small groups of speakers but
> that's pretty much where it ends.

+1000 with what cdent said.

> --
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][dev][doc] Operations Guide future

2017-06-27 Thread Alexandra Settle
Thanks everyone for your feedback regarding the proposal below.

Going forwards we are going to implement Option 3.

If anyone is able to help out with this migration, please let me know :)

Looking forward to getting started!

From: Alexandra Settle 
Date: Thursday, June 1, 2017 at 4:06 PM
To: OpenStack Operators , 
"'openstack-d...@lists.openstack.org'" 
Cc: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [Openstack-operators] [dev] [doc] Operations Guide future

Hi everyone,

I haven’t had any feedback regarding moving the Operations Guide to the 
OpenStack wiki. I’m not taking silence as compliance. I would really like to 
hear people’s opinions on this matter.

To recap:

1.  Option one: Kill the Operations Guide completely and move the 
Administration Guide to project repos.
2.  Option two: Combine the Operations and Administration Guides (and then 
this will be moved into the project-specific repos)
3.  Option three: Move Operations Guide to OpenStack wiki (for ease of 
operator-specific maintainability) and move the Administration Guide to project 
repos.

Personally, I think that option 3 is more realistic. The idea for the last 
option is that operators are maintaining operator-specific documentation and 
updating it as they go along and we’re not losing anything by combining or 
deleting. I don’t want to lose what we have by going with option 1, and I think 
option 2 is just a workaround without fixing the problem – we are not getting 
contributions to the project.

Thoughts?

Alex

From: Alexandra Settle 
Date: Friday, May 19, 2017 at 1:38 PM
To: Melvin Hillsman , OpenStack Operators 

Subject: Re: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
What's up doc? Summit recap edition

Hi everyone,

Adding to this, I would like to draw your attention to the last dot point of my 
email:

“One of the key takeaways from the summit was the session that I joint 
moderated with Melvin Hillsman regarding the Operations and Administration 
Guides. You can find the etherpad with notes here: 
https://etherpad.openstack.org/p/admin-ops-guides  The session was really 
helpful – we were able to discuss with the operators present the current 
situation of the documentation team, and how they could help us maintain the 
two guides, aimed at the same audience. The operators present at the session
agreed that the Administration Guide was important, and could be maintained 
upstream. However, they voted and agreed that the best course of action for the 
Operations Guide was for it to be pulled down and put into a wiki that the 
operators could manage themselves. We will be looking at actioning this item as 
soon as possible.”

I would like to go ahead with this, but I would appreciate feedback from 
operators who were not able to attend the summit. In the etherpad you will see 
the three options that the operators in the room recommended as being viable, 
and the voted option being moving the Operations Guide out of 
docs.openstack.org into a wiki. The aim of this was to empower the operations 
community to take more control of the updates in an environment they are more 
familiar with (and available to others).

What does everyone think of the proposed options? Questions? Other thoughts?

Alex

From: Melvin Hillsman 
Date: Friday, May 19, 2017 at 1:30 PM
To: OpenStack Operators 
Subject: [Openstack-operators] Fwd: [openstack-dev] [openstack-doc] [dev] 
What's up doc? Summit recap edition


-- Forwarded message --
From: Alexandra Settle <a.set...@outlook.com>
Date: Fri, May 19, 2017 at 6:12 AM
Subject: [openstack-dev] [openstack-doc] [dev] What's up doc? Summit recap 
edition
To: "openstack-d...@lists.openstack.org"
<openstack-d...@lists.openstack.org>
Cc: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>



Hi everyone,

The OpenStack manuals project had a really productive week at the OpenStack 
summit in Boston. You can find a list of all the etherpads and attendees here: 
https://etherpad.openstack.org/p/docs-summit

As we all know, we are rapidly losing key contributors and core reviewers. We 
are not alone, this is happening across the board. It is making things harder, 
but not impossible. Since our inception in 2010, we’ve been climbing higher and 
higher trying to achieve the best documentation we could, and uphold our high 
standards. This is something to be incredibly proud of. However, we now need to 
take a step back and realise that the amount of work we are attempting to 
maintain is now out of reach for the team size that we have. At the moment we 
have 13 cores, of which none are full time contributors or reviewers. This 
includes myself.

That being said! I have spent the last week at the summit talking to some of 
our leaders, including Doug Hellmann (cc’d), Jonathan Bryce and Mike Perez 
regarding the future o

Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-27 Thread Mikhail Fedosin
On Tue, Jun 27, 2017 at 10:19 AM, Flavio Percoco  wrote:

> On 26/06/17 17:35 +0300, Mikhail Fedosin wrote:
>
>> 2. We would like to become an official OpenStack project, and in general
>> we
>> follow all the necessary rules and recommendations, starting from weekly
>> IRC meetings and our own channel, to Apache license and Keystone support.
>> For this reason, I want to file an application and hear objections and
>> recommendations on this matter.
>>
>
> Note that IRC meetings are not a requirement anymore:
> https://review.openstack.org/#/c/462077/
>
> As far as the rest of the process goes, it looks like you are all good to
> go.
> I'd recommend you submit the request to the governance repo and let the
> discussion begin:
> https://governance.openstack.org/tc/reference/new-projects-requirements.html
>
> Flavio
>

Thank you Flavio - it's exactly what I was planning to do!

>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-27 Thread Mikhail Fedosin
On Tue, Jun 27, 2017 at 3:33 PM, Jay Pipes  wrote:

> From what I can tell, Keycloak is an Identity provider, not a secret store?
>
Yes! I should explain in more detail.

CloudBand is a big enterprise system for SDN and OpenStack is a part of it.
The default Identity provider of the system is Keycloak.
Currently Glare is used there not as a part of OpenStack deployment, but as
a standalone service outside of OpenStack.
For this reason earlier this year we implemented Keycloak auth middleware
for the server and authentication mechanism in the client,
i.e. we can use Keycloak instead of Keystone.

The decision regarding secrets was taken on the grounds that Barbican
does not have such a capability and is tightly tied to Keystone.
Moreover, it was not difficult to implement the plugin for Glare.
As I said - originally this is a private plugin, which we decided to
open source for the OpenStack community. If it is not wanted, then
we can always drop it. I don't see any problems with this.


> -jay
>
> On 06/27/2017 05:35 AM, Adam Heczko wrote:
>
>> Barbican already supports multiple secret storage backends [1] and most
>> likely adding Keycloak's one [2] should be possible.
>>
>> [1] https://docs.openstack.org/project-install-guide/key-manager
>> /draft/barbican-backend.html
>> [2] https://github.com/jpkrohling/secret-store
>>
>> On Tue, Jun 27, 2017 at 10:42 AM, Thierry Carrez wrote:
>>
>> Mikhail Fedosin wrote:
>> > Does the above mean you are implementing a shared secret storage
>> > solution or that you are going to use an existing solution like
>> > Barbican that does that?
>> >
>> > Secrets is a plugin for Glare we developed for the Nokia CloudBand
>> > platform, and they just decided to opensource it. It doesn't
>> > use Barbican; technically it is an oslo.versionedobjects class.
>> >
>> > Sorry to hear that you opted not to use Barbican.
>> >
>> > I think it's only because Keycloak integration is required by Nokia's
>> > system and Barbican doesn't support it.
>>
>> Any technical reason why it couldn't be added to Barbican? Any chance
>> Keycloak integration could be added as a Castellan backend? Secrets
>> management is really one of those things that should *not* be reinvented
>> in every project. It is easier to get wrong than people think, and you
>> end up having to do security audits on 10 repositories instead of one.
>>
>> --
>> Thierry Carrez (ttx)
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > >
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>>
>>
>>
>>
>> --
>> Adam Heczko
>> Security Engineer @ Mirantis Inc.
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Henning Schild
Am Tue, 27 Jun 2017 09:25:14 -0600
schrieb Chris Friesen :

> On 06/27/2017 01:44 AM, Sahid Orentino Ferdjaoui wrote:
> > On Mon, Jun 26, 2017 at 10:19:12AM +0200, Henning Schild wrote:  
> >> Am Sun, 25 Jun 2017 10:09:10 +0200
> >> schrieb Sahid Orentino Ferdjaoui :
> >>  
> >>> On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:  
>  On 06/23/2017 09:35 AM, Henning Schild wrote:  
> > Am Fri, 23 Jun 2017 11:11:10 +0200
> > schrieb Sahid Orentino Ferdjaoui :  
>   
> >> In Linux RT context, and as you mentioned, the non-RT vCPU can
> >> acquire some guest kernel lock, then be pre-empted by emulator
> >> thread while holding this lock. This situation blocks RT vCPUs
> >> from doing its work. So that is why we have implemented [2].
> >> For DPDK I don't think we have such problems because it's
> >> running in userland.
> >>
> >> So for DPDK context I think we could have a mask like we have
> >> for RT and basically considering vCPU0 to handle best effort
> >> works (emulator threads, SSH...). I think it's the current
> >> pattern used by DPDK users.  
> >
> > DPDK is just a library and one can imagine an application that
> > has cross-core communication/synchronisation needs where the
> > emulator slowing down vpu0 will also slow down vcpu1. You DPDK
> > application would have to know which of its cores did not get a
> > full pcpu.
> >
> > I am not sure what the DPDK-example is doing in this discussion,
> > would that not just be cpu_policy=dedicated? I guess normal
> > behaviour of dedicated is that emulators and io happily share
> > pCPUs with vCPUs and you are looking for a way to restrict
> > emulators/io to a subset of pCPUs because you can live with some
> > of them beeing not 100%.  
> 
>  Yes.  A typical DPDK-using VM might look something like this:
> 
>  vCPU0: non-realtime, housekeeping and I/O, handles all virtual
>  interrupts and "normal" linux stuff, emulator runs on same pCPU
>  vCPU1: realtime, runs in tight loop in userspace processing
>  packets vCPU2: realtime, runs in tight loop in userspace
>  processing packets vCPU3: realtime, runs in tight loop in
>  userspace processing packets
> 
>  In this context, vCPUs 1-3 don't really ever enter the kernel,
>  and we've offloaded as much kernel work as possible from them
>  onto vCPU0.  This works pretty well with the current system.
>   
> >> For RT we have to isolate the emulator threads to an additional
> >> pCPU per guests or as your are suggesting to a set of pCPUs for
> >> all the guests running.
> >>
> >> I think we should introduce a new option:
> >>
> >> - hw:cpu_emulator_threads_mask=^1
> >>
> >> If on 'nova.conf' - that mask will be applied to the set of all
> >> host CPUs (vcpu_pin_set) to basically pack the emulator threads
> >> of all VMs running here (useful for RT context).  
> >
> > That would allow modelling exactly what we need.
> > In nova.conf we are talking absolute known values, no need for a
> > mask and a set is much easier to read. Also using the same name
> > does not sound like a good idea.
> > And the name vcpu_pin_set clearly suggest what kind of load runs
> > here, if using a mask it should be called pin_set.  
> 
>  I agree with Henning.
> 
>  In nova.conf we should just use a set, something like
>  "rt_emulator_vcpu_pin_set" which would be used for running the
>  emulator/io threads of *only* realtime instances.  
> >>>
> >>> I'm not agree with you, we have a set of pCPUs and we want to
> >>> substract some of them for the emulator threads. We need a mask.
> >>> The only set we need is to selection which pCPUs Nova can use
> >>> (vcpus_pin_set).  
> >>
> >> At that point it does not really matter whether it is a set or a
> >> mask. They can both express the same where a set is easier to
> >> read/configure. With the same argument you could say that
> >> vcpu_pin_set should be a mask over the hosts pcpus.
> >>
> >> As i said before: vcpu_pin_set should be renamed because all sorts
> >> of threads are put here (pcpu_pin_set?). But that would be a
> >> bigger change and should be discussed as a seperate issue.
> >>
> >> So far we talked about a compute-node for realtime only doing
> >> realtime. In that case vcpu_pin_set + emulator_io_mask would work.
> >> If you want to run regular VMs on the same host, you can run a
> >> second nova, like we do.
> >>
> >> We could also use vcpu_pin_set + rt_vcpu_pin_set(/mask). I think
> >> that would allow modelling all cases in just one nova. Having all
> >> in one nova, you could potentially repurpose rt cpus to
> >> best-effort and back. Some day in the future ...  
> >
> > That is not something we should allow or at least
> > advertise. compute-node can't run both RT and non-RT guests and that
> > bec

Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure trying to set dns_domain from command line

2017-06-27 Thread Lawrence J. Albinson
Hi Graham,

Many thanks for the pointer. I hadn't added dns to the plugin list.

I did, however, set the following:

neutron_designate_enabled:  True

I'm wondering if the two together will fix things. I shall know by the morning.

Again, many thanks.

Kind regards, Lawrence

From: Graham Hayes
Sent: 27 June 2017 16:14
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure 
trying to set dns_domain from command line

On 27/06/17 15:01, Lawrence J. Albinson wrote:
> Hello Colleagues,
>
> I am trying to enable dynamic updating of DNSaaS when a port or VM is
> created/deleted.
>
> I have DNSaaS working with Bind9 as the back-end and I am able to
> manually create/update/delete entries with the openstack client and/or
> the designate client and see Bind9 reflect those changes.
>
> However, I am unable to set a dns_domain name for a network from the
> openstack CLI and/or the neutron CLI.
>
> I have tried the following:
>
> neutron net-update --dns-domain example.com
> 64b50baa-acd8-4269-8a3a-767b70c7d18d
> neutron net-update --dns-domain example.com public
> neutron net-update --dns-domain example.com.
> 64b50baa-acd8-4269-8a3a-767b70c7d18d
> neutron net-update --dns-domain example.com. public
>
> The response is always the same, namely:
>
> Unrecognized attribute(s) 'dns_domain'
> Neutron server returns request_ids:
> ['req-be15e08a-b3b0-458c-a045-ffac7ce3ebbd']
>
> Before I go searching through the Neutron source, does anyone know if
> this is a 'hole' in the Neutron API and, if so, has it been fixed after
> the commit point being used by openstack-ansible tag 15.1.3.
>
> Kind regards, Lawrence

Hi Lawrence,

Did you enable the DNS plugin in neutron?

Adding "dns" to the list here [0] should enable the --dns-domain
attribute.

0 -
https://github.com/openstack/openstack-ansible-os_neutron/blob/15.1.3/defaults/main.yml#L133

However, it does not look like OpenStack Ansible code supports Neutron
calling Designate to update DNS Recordsets yet.

It looks like a missing feature - I am not sure how OSA deals with
feature requests, but if you are on IRC they use #openstack-ansible

Thanks

- Graham


>
> Lawrence J Albinson
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Henning Schild
Am Tue, 27 Jun 2017 09:28:34 -0600
schrieb Chris Friesen :

> On 06/27/2017 01:45 AM, Sahid Orentino Ferdjaoui wrote:
> > On Mon, Jun 26, 2017 at 12:12:49PM -0600, Chris Friesen wrote:  
> >> On 06/25/2017 02:09 AM, Sahid Orentino Ferdjaoui wrote:  
> >>> On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:  
>  On 06/23/2017 09:35 AM, Henning Schild wrote:  
> > Am Fri, 23 Jun 2017 11:11:10 +0200
> > schrieb Sahid Orentino Ferdjaoui :  
>   
> >> In Linux RT context, and as you mentioned, the non-RT vCPU can
> >> acquire some guest kernel lock, then be pre-empted by emulator
> >> thread while holding this lock. This situation blocks RT vCPUs
> >> from doing its work. So that is why we have implemented [2].
> >> For DPDK I don't think we have such problems because it's
> >> running in userland.
> >>
> >> So for DPDK context I think we could have a mask like we have
> >> for RT and basically considering vCPU0 to handle best effort
> >> works (emulator threads, SSH...). I think it's the current
> >> pattern used by DPDK users.  
> >
> > DPDK is just a library and one can imagine an application that
> > has cross-core communication/synchronisation needs where the
> > emulator slowing down vpu0 will also slow down vcpu1. You DPDK
> > application would have to know which of its cores did not get a
> > full pcpu.
> >
> > I am not sure what the DPDK-example is doing in this
> > discussion, would that not just be cpu_policy=dedicated? I
> > guess normal behaviour of dedicated is that emulators and io
> > happily share pCPUs with vCPUs and you are looking for a way to
> > restrict emulators/io to a subset of pCPUs because you can live
> > with some of them beeing not 100%.  
> 
>  Yes.  A typical DPDK-using VM might look something like this:
> 
>  vCPU0: non-realtime, housekeeping and I/O, handles all virtual
>  interrupts and "normal" linux stuff, emulator runs on same pCPU
>  vCPU1: realtime, runs in tight loop in userspace processing
>  packets vCPU2: realtime, runs in tight loop in userspace
>  processing packets vCPU3: realtime, runs in tight loop in
>  userspace processing packets
> 
>  In this context, vCPUs 1-3 don't really ever enter the kernel,
>  and we've offloaded as much kernel work as possible from them
>  onto vCPU0.  This works pretty well with the current system.
>   
> >> For RT we have to isolate the emulator threads to an
> >> additional pCPU per guests or as your are suggesting to a set
> >> of pCPUs for all the guests running.
> >>
> >> I think we should introduce a new option:
> >>
> >>  - hw:cpu_emulator_threads_mask=^1
> >>
> >> If on 'nova.conf' - that mask will be applied to the set of
> >> all host CPUs (vcpu_pin_set) to basically pack the emulator
> >> threads of all VMs running here (useful for RT context).  
> >
> > That would allow modelling exactly what we need.
> > In nova.conf we are talking absolute known values, no need for
> > a mask and a set is much easier to read. Also using the same
> > name does not sound like a good idea.
> > And the name vcpu_pin_set clearly suggest what kind of load
> > runs here, if using a mask it should be called pin_set.  
> 
>  I agree with Henning.
> 
>  In nova.conf we should just use a set, something like
>  "rt_emulator_vcpu_pin_set" which would be used for running the
>  emulator/io threads of *only* realtime instances.  
> >>>
> >>> I'm not agree with you, we have a set of pCPUs and we want to
> >>> substract some of them for the emulator threads. We need a mask.
> >>> The only set we need is to selection which pCPUs Nova can use
> >>> (vcpus_pin_set).
> >>>  
>  We may also want to have "rt_emulator_overcommit_ratio" to
>  control how many threads/instances we allow per pCPU.  
> >>>
> >>> Not really sure to have understand this point? If it is to
> >>> indicate that for a pCPU isolated we want X guest emulator
> >>> threads, the same behavior is achieved by the mask. A host for
> >>> realtime is dedicated for realtime, no overcommitment and the
> >>> operators know the number of host CPUs, they can easily deduct a
> >>> ratio and so the corresponding mask.  
> >>
> >> Suppose I have a host with 64 CPUs.  I reserve three for host
> >> overhead and networking, leaving 61 for instances.  If I have
> >> instances with one non-RT vCPU and one RT vCPU then I can run 30
> >> instances.  If instead my instances have one non-RT and 5 RT vCPUs
> >> then I can run 12 instances.  If I put all of my emulator threads
> >> on the same pCPU, it might make a difference whether I put 30 sets
> >> of emulator threads or 12 sets.  
> >
> > Oh I understand your point now, but not sure that is going to make
> > any difference. I would say the load in the isolated cores is
> > prob

Re: [openstack-dev] [tc][fuel] Making Fuel a hosted project

2017-06-27 Thread Emilien Macchi
On Wed, Jun 21, 2017 at 4:34 PM, Vladimir Kuklin  wrote:
> Folks, I sent a reply a couple of days ago, but somehow it got lost. The
> original message goes below
>
> Folks
>
> It is essentially true that Fuel is no longer being developed as almost 99%
> of people have left the project and are working on something else. Maybe,
> in the future, when the dust settles, we can resume working on it, but the
> probability is not so high as of now.
>
> I would like to thank everyone who worked on the project - contributors,
> reviewers, core-reviewers, ex-PTLs Alex Shtokolov, Vladimir Kozhukalov and
> Dmitry Borodaenko - it was a pleasure to work with you guys.
>
> Also, I would like to thank puppet-openstack project team as we worked
> together on many things really effectively and wish them good luck as well.

Thank YOU for your collaboration, I remember the amount of patches you
sent when we started to really work together - we had a hard time
catching up but were so happy to have you aboard.
Anyway, it was a great time and thanks again for the hard work.
I hope you'll have fun in your next things :-)

> Special Kudos to Jay and Dims as they helped as a lot on governance and
> community side.
>
> I hope, we will work some day together again.
>
> At the same time, I would like to mention that Fuel is still being actively
> used and some bugs are still being fixed, so I would suggest, if that is
> possible, that we keep the github repository available for a while, so that
> those guys can still access the repositories.
>
> Having that said, I do not have any other objections on making Fuel Hosted
> project.
>
>
> Yours Faithfully
>
> Vladimir Kuklin
>
> email: ag...@aglar.ru
> email(alt.): aglaren...@gmail.com
> mob.: +79267023968
> mob.: (when in EU) +393497028541
> mob.: (when in US) +19293122331
> skype: kuklinvv
> telegram
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Chris Friesen

On 06/27/2017 01:45 AM, Sahid Orentino Ferdjaoui wrote:

On Mon, Jun 26, 2017 at 12:12:49PM -0600, Chris Friesen wrote:

On 06/25/2017 02:09 AM, Sahid Orentino Ferdjaoui wrote:

On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:

On 06/23/2017 09:35 AM, Henning Schild wrote:

Am Fri, 23 Jun 2017 11:11:10 +0200
schrieb Sahid Orentino Ferdjaoui :



In Linux RT context, and as you mentioned, the non-RT vCPU can acquire
some guest kernel lock, then be pre-empted by emulator thread while
holding this lock. This situation blocks RT vCPUs from doing its
work. So that is why we have implemented [2]. For DPDK I don't think
we have such problems because it's running in userland.

So for DPDK context I think we could have a mask like we have for RT
and basically considering vCPU0 to handle best effort works (emulator
threads, SSH...). I think it's the current pattern used by DPDK users.


DPDK is just a library and one can imagine an application that has
cross-core communication/synchronisation needs where the emulator
slowing down vcpu0 will also slow down vcpu1. Your DPDK application would
have to know which of its cores did not get a full pcpu.

I am not sure what the DPDK-example is doing in this discussion, would
that not just be cpu_policy=dedicated? I guess normal behaviour of
dedicated is that emulators and io happily share pCPUs with vCPUs and
you are looking for a way to restrict emulators/io to a subset of pCPUs
because you can live with some of them being not 100%.


Yes.  A typical DPDK-using VM might look something like this:

vCPU0: non-realtime, housekeeping and I/O, handles all virtual interrupts
and "normal" linux stuff, emulator runs on same pCPU
vCPU1: realtime, runs in tight loop in userspace processing packets
vCPU2: realtime, runs in tight loop in userspace processing packets
vCPU3: realtime, runs in tight loop in userspace processing packets

In this context, vCPUs 1-3 don't really ever enter the kernel, and we've
offloaded as much kernel work as possible from them onto vCPU0.  This works
pretty well with the current system.


For RT we have to isolate the emulator threads to an additional pCPU
per guests or as your are suggesting to a set of pCPUs for all the
guests running.

I think we should introduce a new option:

 - hw:cpu_emulator_threads_mask=^1

If on 'nova.conf' - that mask will be applied to the set of all host
CPUs (vcpu_pin_set) to basically pack the emulator threads of all VMs
running here (useful for RT context).


That would allow modelling exactly what we need.
In nova.conf we are talking absolute known values, no need for a mask
and a set is much easier to read. Also using the same name does not
sound like a good idea.
And the name vcpu_pin_set clearly suggest what kind of load runs here,
if using a mask it should be called pin_set.


I agree with Henning.

In nova.conf we should just use a set, something like
"rt_emulator_vcpu_pin_set" which would be used for running the emulator/io
threads of *only* realtime instances.


I don't agree with you: we have a set of pCPUs and we want to
subtract some of them for the emulator threads. We need a mask. The
only set we need is the one selecting which pCPUs Nova can use
(vcpu_pin_set).


We may also want to have "rt_emulator_overcommit_ratio" to control how many
threads/instances we allow per pCPU.


Not really sure I understand this point? If it is to indicate
that for an isolated pCPU we want X guest emulator threads, the same
behavior is achieved by the mask. A host for realtime is dedicated to
realtime, no overcommitment, and the operators know the number of host
CPUs, so they can easily deduce a ratio and the corresponding mask.


Suppose I have a host with 64 CPUs.  I reserve three for host overhead and
networking, leaving 61 for instances.  If I have instances with one non-RT
vCPU and one RT vCPU then I can run 30 instances.  If instead my instances
have one non-RT and 5 RT vCPUs then I can run 12 instances.  If I put all of
my emulator threads on the same pCPU, it might make a difference whether I
put 30 sets of emulator threads or 12 sets.


Oh I understand your point now, but not sure that is going to make any
difference. I would say the load in the isolated cores is probably
going to be the same. Even then, the overhead will be the number of
threads handled, which will be slightly higher in your first scenario.


The proposed "rt_emulator_overcommit_ratio" would simply say "nova is
allowed to run X instances worth of emulator threads on each pCPU in
"rt_emulator_vcpu_pin_set".  If we've hit that threshold, then no more RT
instances are allowed to schedule on this compute node (but non-RT instances
would still be allowed).
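
Concretely, the shape being debated here would be something like the
following in nova.conf; none of these "rt_emulator_*" options exist today,
this is only the proposal restated with illustrative values:

  [DEFAULT]
  vcpu_pin_set = 4-63               # pCPUs nova may use at all
  rt_emulator_vcpu_pin_set = 4-5    # proposed: where RT guests' emulator/IO threads run
  rt_emulator_overcommit_ratio = 8  # proposed: max instances' emulator threads per pCPU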


Also I don't think we want to schedule where the emulator threads of
the guests should be pinned on the isolated cores. We will let them
float on the set of isolated cores. If there is a requirement to have
them pinned then probably the current implementation will be enough.


Once you us

Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Chris Friesen

On 06/27/2017 01:44 AM, Sahid Orentino Ferdjaoui wrote:

On Mon, Jun 26, 2017 at 10:19:12AM +0200, Henning Schild wrote:

Am Sun, 25 Jun 2017 10:09:10 +0200
schrieb Sahid Orentino Ferdjaoui :


On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:

On 06/23/2017 09:35 AM, Henning Schild wrote:

Am Fri, 23 Jun 2017 11:11:10 +0200
schrieb Sahid Orentino Ferdjaoui :



In Linux RT context, and as you mentioned, the non-RT vCPU can
acquire some guest kernel lock, then be pre-empted by emulator
thread while holding this lock. This situation blocks RT vCPUs
from doing its work. So that is why we have implemented [2].
For DPDK I don't think we have such problems because it's
running in userland.

So for DPDK context I think we could have a mask like we have
for RT and basically considering vCPU0 to handle best effort
works (emulator threads, SSH...). I think it's the current
pattern used by DPDK users.


DPDK is just a library and one can imagine an application that has
cross-core communication/synchronisation needs where the emulator
slowing down vcpu0 will also slow down vcpu1. Your DPDK application
would have to know which of its cores did not get a full pCPU.

I am not sure what the DPDK example is doing in this discussion;
would that not just be cpu_policy=dedicated? I guess the normal
behaviour of dedicated is that emulators and io happily share
pCPUs with vCPUs, and you are looking for a way to restrict
emulators/io to a subset of pCPUs because you can live with some
of them being not 100%.


Yes.  A typical DPDK-using VM might look something like this:

vCPU0: non-realtime, housekeeping and I/O, handles all virtual
interrupts and "normal" linux stuff, emulator runs on same pCPU
vCPU1: realtime, runs in tight loop in userspace processing packets
vCPU2: realtime, runs in tight loop in userspace processing packets
vCPU3: realtime, runs in tight loop in userspace processing packets

In this context, vCPUs 1-3 don't really ever enter the kernel, and
we've offloaded as much kernel work as possible from them onto
vCPU0.  This works pretty well with the current system.


For RT we have to isolate the emulator threads to an additional
pCPU per guests or as your are suggesting to a set of pCPUs for
all the guests running.

I think we should introduce a new option:

- hw:cpu_emulator_threads_mask=^1

If on 'nova.conf' - that mask will be applied to the set of all
host CPUs (vcpu_pin_set) to basically pack the emulator threads
of all VMs running here (useful for RT context).


That would allow modelling exactly what we need.
In nova.conf we are talking about absolute known values; there is no
need for a mask, and a set is much easier to read. Also, using the same
name does not sound like a good idea.
And the name vcpu_pin_set clearly suggests what kind of load runs
here; if using a mask it should be called pin_set.


I agree with Henning.

In nova.conf we should just use a set, something like
"rt_emulator_vcpu_pin_set" which would be used for running the
emulator/io threads of *only* realtime instances.


I don't agree with you: we have a set of pCPUs and we want to
subtract some of them for the emulator threads. We need a mask. The
only set we need is the one that selects which pCPUs Nova can use
(vcpu_pin_set).


At that point it does not really matter whether it is a set or a mask.
They can both express the same thing, but a set is easier to read/configure.
With the same argument you could say that vcpu_pin_set should be a mask
over the host's pCPUs.

As I said before: vcpu_pin_set should be renamed because all sorts of
threads are put here (pcpu_pin_set?). But that would be a bigger change
and should be discussed as a separate issue.

So far we talked about a compute-node for realtime only doing realtime.
In that case vcpu_pin_set + emulator_io_mask would work. If you want to
run regular VMs on the same host, you can run a second nova, like we do.

We could also use vcpu_pin_set + rt_vcpu_pin_set(/mask). I think that
would allow modelling all cases in just one nova. Having all in one
nova, you could potentially repurpose rt cpus to best-effort and back.
Some day in the future ...


That is not something we should allow, or at least
advertise. A compute node can't run both RT and non-RT guests, because
such nodes should have an RT kernel. We can't guarantee RT if
both are on the same node.


A compute node with an RT OS could run RT and non-RT guests at the same time 
just fine.  In a small cloud (think hyperconverged with maybe two nodes total) 
it's not viable to dedicate an entire node to just RT loads.


I'd personally rather see nova able to handle a mix of RT and non-RT than need 
to run multiple nova instances on the same node and figure out an up-front split 
of resources between RT nova and non-RT nova.  Better to allow nova to 
dynamically allocate resources as needed.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: o

Re: [openstack-dev] [openstack-ansible][neutron][designate] Failure trying to set dns_domain from command line

2017-06-27 Thread Graham Hayes
On 27/06/17 15:01, Lawrence J. Albinson wrote:
> Hello Colleagues,
> 
> I am trying to enable dynamic updating of DNSaaS when a port or VM is
> created/deleted.
> 
> I have DNSaaS working with Bind9 as the back-end and I am able to
> manually create/update/delete entries with the openstack client and/or
> the designate client and see Bind9 reflect those changes.
> 
> However, I am unable to set a dns_domain name for a network from the
> openstack CLI and/or the neutron CLI.
> 
> I have tried the following:
> 
> neutron net-update --dns-domain example.com
> 64b50baa-acd8-4269-8a3a-767b70c7d18d
> neutron net-update --dns-domain example.com public
> neutron net-update --dns-domain example.com.
> 64b50baa-acd8-4269-8a3a-767b70c7d18d
> neutron net-update --dns-domain example.com. public
> 
> The response is always the same, namely:
> 
> Unrecognized attribute(s) 'dns_domain'
> Neutron server returns request_ids:
> ['req-be15e08a-b3b0-458c-a045-ffac7ce3ebbd']
> 
> Before I go searching through the Neutron source, does anyone know if
> this is a 'hole' in the Neutron API and, if so, has it been fixed after
> the commit point being used by openstack-ansible tag 15.1.3.
> 
> Kind regards, Lawrence

Hi Lawrence,

Did you enable the DNS plugin in neutron?

Adding "dns" to the list here [0] should enable the --dns-domain
attribute.

0 -
https://github.com/openstack/openstack-ansible-os_neutron/blob/15.1.3/defaults/main.yml#L133
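
For reference, what that ultimately enables is neutron's ML2 "dns"
extension driver. A minimal sketch of the resulting neutron config
(illustrative only; the exact OSA variable and rendered file may differ):

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  extension_drivers = port_security,dns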

However, it does not look like OpenStack Ansible code supports Neutron
calling Designate to update DNS Recordsets yet.

It looks like a missing feature - I am not sure how OSA deals with
feature requests, but if you are on IRC they use #openstack-ansible

Thanks

- Graham


> 
> Lawrence J Albinson
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



0x23BA8E2E.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible][neutron][designate] Failure trying to set dns_domain from command line

2017-06-27 Thread Lawrence J. Albinson
Hello Colleagues,

I am trying to enable dynamic updating of DNSaaS when a port or VM is 
created/deleted.

I have DNSaaS working with Bind9 as the back-end and I am able to manually 
create/update/delete entries with the openstack client and/or the designate 
client and see Bind9 reflect those changes.

However, I am unable to set a dns_domain name for a network from the openstack 
CLI and/or the neutron CLI.

I have tried the following:

neutron net-update --dns-domain example.com 
64b50baa-acd8-4269-8a3a-767b70c7d18d
neutron net-update --dns-domain example.com public
neutron net-update --dns-domain example.com. 
64b50baa-acd8-4269-8a3a-767b70c7d18d
neutron net-update --dns-domain example.com. public

The response is always the same, namely:

Unrecognized attribute(s) 'dns_domain'
Neutron server returns request_ids: 
['req-be15e08a-b3b0-458c-a045-ffac7ce3ebbd']

Before I go searching through the Neutron source, does anyone know if this is a 
'hole' in the Neutron API and, if so, has it been fixed after the commit point 
being used by openstack-ansible tag 15.1.3?

Kind regards, Lawrence

Lawrence J Albinson

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Henning Schild
Am Tue, 27 Jun 2017 09:44:22 +0200
schrieb Sahid Orentino Ferdjaoui :

> On Mon, Jun 26, 2017 at 10:19:12AM +0200, Henning Schild wrote:
> > Am Sun, 25 Jun 2017 10:09:10 +0200
> > schrieb Sahid Orentino Ferdjaoui :
> >   
> > > On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:  
> > > > On 06/23/2017 09:35 AM, Henning Schild wrote:
> > > > > Am Fri, 23 Jun 2017 11:11:10 +0200
> > > > > schrieb Sahid Orentino Ferdjaoui :
> > > > 
> > > > > > In Linux RT context, and as you mentioned, the non-RT vCPU
> > > > > > can acquire some guest kernel lock, then be pre-empted by
> > > > > > emulator thread while holding this lock. This situation
> > > > > > blocks RT vCPUs from doing its work. So that is why we have
> > > > > > implemented [2]. For DPDK I don't think we have such
> > > > > > problems because it's running in userland.
> > > > > > 
> > > > > > So for DPDK context I think we could have a mask like we
> > > > > > have for RT and basically considering vCPU0 to handle best
> > > > > > effort works (emulator threads, SSH...). I think it's the
> > > > > > current pattern used by DPDK users.
> > > > > 
> > > > > DPDK is just a library and one can imagine an application
> > > > > that has cross-core communication/synchronisation needs where
> > > > > the emulator slowing down vpu0 will also slow down vcpu1. You
> > > > > DPDK application would have to know which of its cores did
> > > > > not get a full pcpu.
> > > > > 
> > > > > I am not sure what the DPDK-example is doing in this
> > > > > discussion, would that not just be cpu_policy=dedicated? I
> > > > > guess normal behaviour of dedicated is that emulators and io
> > > > > happily share pCPUs with vCPUs and you are looking for a way
> > > > > to restrict emulators/io to a subset of pCPUs because you can
> > > > > live with some of them beeing not 100%.
> > > > 
> > > > Yes.  A typical DPDK-using VM might look something like this:
> > > > 
> > > > vCPU0: non-realtime, housekeeping and I/O, handles all virtual
> > > > interrupts and "normal" linux stuff, emulator runs on same pCPU
> > > > vCPU1: realtime, runs in tight loop in userspace processing
> > > > packets vCPU2: realtime, runs in tight loop in userspace
> > > > processing packets vCPU3: realtime, runs in tight loop in
> > > > userspace processing packets
> > > > 
> > > > In this context, vCPUs 1-3 don't really ever enter the kernel,
> > > > and we've offloaded as much kernel work as possible from them
> > > > onto vCPU0.  This works pretty well with the current system.
> > > > 
> > > > > > For RT we have to isolate the emulator threads to an
> > > > > > additional pCPU per guests or as your are suggesting to a
> > > > > > set of pCPUs for all the guests running.
> > > > > > 
> > > > > > I think we should introduce a new option:
> > > > > > 
> > > > > >- hw:cpu_emulator_threads_mask=^1
> > > > > > 
> > > > > > If on 'nova.conf' - that mask will be applied to the set of
> > > > > > all host CPUs (vcpu_pin_set) to basically pack the emulator
> > > > > > threads of all VMs running here (useful for RT context).
> > > > > 
> > > > > That would allow modelling exactly what we need.
> > > > > In nova.conf we are talking absolute known values, no need
> > > > > for a mask and a set is much easier to read. Also using the
> > > > > same name does not sound like a good idea.
> > > > > And the name vcpu_pin_set clearly suggest what kind of load
> > > > > runs here, if using a mask it should be called pin_set.
> > > > 
> > > > I agree with Henning.
> > > > 
> > > > In nova.conf we should just use a set, something like
> > > > "rt_emulator_vcpu_pin_set" which would be used for running the
> > > > emulator/io threads of *only* realtime instances.
> > > 
> > > I'm not agree with you, we have a set of pCPUs and we want to
> > > substract some of them for the emulator threads. We need a mask.
> > > The only set we need is to selection which pCPUs Nova can use
> > > (vcpus_pin_set).  
> > 
> > At that point it does not really matter whether it is a set or a
> > mask. They can both express the same where a set is easier to
> > read/configure. With the same argument you could say that
> > vcpu_pin_set should be a mask over the hosts pcpus.
> > 
> > As i said before: vcpu_pin_set should be renamed because all sorts
> > of threads are put here (pcpu_pin_set?). But that would be a bigger
> > change and should be discussed as a seperate issue.
> > 
> > So far we talked about a compute-node for realtime only doing
> > realtime. In that case vcpu_pin_set + emulator_io_mask would work.
> > If you want to run regular VMs on the same host, you can run a
> > second nova, like we do.
> > 
> > We could also use vcpu_pin_set + rt_vcpu_pin_set(/mask). I think
> > that would allow modelling all cases in just one nova. Having all
> > in one nova, you could potentially repurpose rt cpus to best-effort
> > and back. Some day in the future ...  
> 
> That is not something we should allow

[openstack-dev] [hacking] Propose removal of cores

2017-06-27 Thread John Villalovos
I am proposing that the following people be removed as core reviewers from
the hacking project: https://review.openstack.org/#/admin/groups/153,members

Joe Gordon
James Carey


Joe Gordon:
Has not done a review in OpenStack since 16-Feb-2017
http://stackalytics.com/?release=all&user_id=jogo

Has not done a review in hacking since 23-Jan-2016:
http://stackalytics.com/?module=hacking&user_id=jogo&release=all


James Carey
Has not done a review in OpenStack since 9-Aug-2016
http://stackalytics.com/?release=all&user_id=jecarey

Has not done a review in hacking since 9-Aug-2016:
http://stackalytics.com/?module=hacking&release=all&user_id=jecarey


And maybe this project needs more core reviewers as there have been six
total reviews by four core reviewers so far in the Pike cycle:
http://stackalytics.com/?release=pike&module=hacking
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread Sean Dague
On 06/27/2017 09:42 AM, Thierry Carrez wrote:
> Sean Dague wrote:
>> I also think it's fine to rebrand WG to SIG, but we should also be
>> honest that it's mostly a rebrand to consolidate on terminology that k8s
>> and cncf have used that people find easier to understand so it's a way
>> in which openstack is not different than those. Consolidating on terms
>> isn't a bad thing, but it's really a minor part of the workflow issue.
> 
> It's both a consolidation and the signal of a change. If we continued to
> call them "workgroups" I suspect we'd carry some of the traditions
> around them (or would end up calling them new-style WG vs. old-style WG).

I still think I've missed, or not grasped, during this thread how a SIG
functions differently than a WG, besides name. Both in theory and practice.

The API WG doesn't seem like a great example, because it was honestly a
couple of people that were interested in API consumption, but mostly had
a historical view of how the API worked in OpenStack. The transition in
from developers was largely because some reality checking needed to be
put in place, and then people changed roles / jobs, and those showing up
stayed on the dev side.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread Thierry Carrez
Blair Bethwaite wrote:
> There is a not insignificant degree of irony in the fact that this
> conversation has splintered so that anyone only reading
> openstack-operators and/or user-committee is missing 90% of the
> picture Maybe I just need a new ML management strategy.

That irony is not lost on me, and no ML management strategy can help.
Currently for a ops+dev discussion we have 4 options: start it on -dev
(miss ops), start it on -ops (miss devs), cross-post (and miss random
messages that are lost as subscribers don't match, or people don't reply
to both), or try to post separate variants (but then you have to follow
both ends, and your replies miss half the audience). We tried the 4th
option this time -- was a fail but then there are no good option in the
current setup.

Setting up a common ML for common discussions (openstack-sigs) will
really help, even if there will be some pain setting them up and getting
the right readership to them :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] Nominate Yumeng Bao to the core team

2017-06-27 Thread Alexander Chadin
Hi watcher folks,

I’d like to nominate Yumeng Bao to the core team. She has made a lot of 
contributions including specifications,
features and bug fixes. Yumeng has attended the PTG and Summit with her 
presentation related to Watcher.
Yumeng is active on IRC channels and takes part in weekly meetings as well.

Please, vote with +1/-1.

Best Regards,
_
Alexander Chadin
OpenStack Developer
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread Thierry Carrez
Sean Dague wrote:
> On 06/21/2017 01:10 PM, Michał Jastrzębski wrote:
>> One of key components which, imho, made SIGs successful in k8s is
>> infrastructure behind it.
>>
>> When someone proposes an issue, they can tag SIG to it. Everyone in
>> this SIG will be notified that there is an issue they might be
>> interested it, they check it out and provide feedback. That also
>> creates additional familiarity with dev toolset for non-dev sig
>> members. I think what would be important for OpenStack SIGs to be
>> successful is connecting SIGs to both Launchpad and Gerrit.
> 
> I think this is a key point. The simpler tools that github has, which
> require that you build a workflow based on tags outside of the tools,
> actually enables the effectiveness here.
> 
> Does k8s community currently have the same level of operators that
> aren't developers participating as OpenStack?
> 
> I wonder if we're going down this path, if some kind of tooling like
> standard tags for issues/patches should be added to the mix to help gain
> the effectiveness that the k8s team seems to have here.

For Launchpad/Storyboard we could totally reuse tags. Absence of tags in
gerrit bites us here (and in other places too). I know that was a
planned feature, does anyone have updated status on it ?

> I also think it's fine to rebrand WG to SIG, but we should also be
> honest that it's mostly a rebrand to consolidate on terminology that k8s
> and cncf have used that people find easier to understand so it's a way
> in which openstack is not different than those. Consolidating on terms
> isn't a bad thing, but it's really a minor part of the workflow issue.

It's both a consolidation and the signal of a change. If we continued to
call them "workgroups" I suspect we'd carry some of the traditions
around them (or would end up calling them new-style WG vs. old-style WG).

> It might also be a good idea that any SIG that is going to be "official"
> has the requirement that they write up a state of the sig every month or
> two with what's done, what's happening, what's next, and what's
> challenging. At a project the scale of OpenStack one of the biggest
> issues is actually having a good grasp on the wide range of efforts, and
> these summaries by teams are pretty critical to increasing the shared
> understanding.

++

Previously in the thread, we mentioned the need to clean up if SIGs are
no longer alive -- that regular reporting could be a good indicator of
liveness.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fwd: [scientific] IRC Meeting (Tues 2100 UTC): Science app catalogues, network security of research computing on OpenStack

2017-06-27 Thread Blair Bethwaite
Resend for openstack-dev with proper list perms...

-- Forwarded message --
From: Blair Bethwaite 
Date: 27 June 2017 at 23:24
Subject: [scientific] IRC Meeting (Tues 2100 UTC): Science app catalogues,
network security of research computing on OpenStack
To: user-committee , "openstack-oper." <
openstack-operat...@lists.openstack.org>, "openstack-dev@lists.openstack.org"



Hi all,

Scientific-WG meeting in ~8 hours in #openstack-meeting. This week's agenda
is largely the same as last week, for alternate TZ.

Cheers,
Blair

-- Forwarded message --
From: Stig Telfer 
Date: 21 June 2017 at 02:51
Subject: [User-committee] [scientific] IRC Meeting: Science app catalogues,
security of research computing on OpenStack - Wednesday 0900 UTC
To: user-committee , "openstack-oper." <
openstack-operat...@lists.openstack.org>


Greetings!

We have an IRC meeting on Wednesday at 0900 UTC in channel
#openstack-meeting.

This week we’d like to hear people’s thoughts and experiences on providing
scientific application catalogues to users - in particular with a view to
gathering best practice for a new chapter for the Scientific OpenStack book.

Similarly, we’d like to discuss what people are doing for security of
research computing instances on OpenStack.

The agenda is available here: https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_June_21st_2017
Details of the IRC meeting are here: http://eavesdrop.openstack.org/#Scientific_Working_Group

Please come along with ideas, suggestions or requirements.  All are welcome.

Cheers,
Stig

___
User-committee mailing list
user-commit...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee




-- 
Blair Bethwaite
Senior HPC Consultant

Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk, Clayton Campus
Clayton VIC 3800
Australia
Mobile: 0439-545-002
Office: +61 3-9903-2800 <+61%203%209903%202800>



-- 
Blair Bethwaite
Senior HPC Consultant

Monash eResearch Centre
Monash University
Room G26, 15 Innovation Walk, Clayton Campus
Clayton VIC 3800
Australia
Mobile: 0439-545-002
Office: +61 3-9903-2800
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] deprecating pollster-list option

2017-06-27 Thread gordon chung
hi,

i'm proposing to deprecate/remove the pollster-list config option. if you're 
like me and wondering what this is, we apparently support the ability to 
filter the pollsters loaded by the polling agent[1].

the reason i'd like to deprecate is because we already have this 
functionality in the polling|pipeline.yaml where you define what meters 
you want to enable. having the ability to do it in two different places 
causes conflict/confusion and i think the yaml path is probably the one 
people know about and is the more logical path.
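
for anyone not familiar with it, a minimal polling.yaml sketch that does the 
same filtering (the meter names below are just examples):

  ---
  sources:
      - name: cpu_pollsters
        interval: 600
        meters:
            - cpu
            - cpu_util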

disclaimer: i also want to drop the pollster-list option because there's 
a bug[2] related to it and i personally don't think it's worth fixing it.

[1] 
https://github.com/openstack/ceilometer/blob/7eb37ace7012a203ed83bd27b57f3db6f59f4547/ceilometer/cmd/polling.py#L69-L73
[2] https://bugs.launchpad.net/ceilometer/+bug/1603320

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo-service] Multi main processes are started by using oslo_service

2017-06-27 Thread gordon chung


On 27/06/17 05:56 AM, zhi wrote:
> Everything goes well when the "api_workers " equals 0 or 1. But two main
> processes were started when the " api_workers " equals 2. The log shows
> below:
>
> 2017-06-27 17:42:18.864 1958058 INFO abc.common.wsgi [-] (1958058) wsgi
> starting up on http://0.0.0.0:9914/
>
> 2017-06-27 17:42:18.864 1958059 INFO abc.common.wsgi [-] (1958059) wsgi
> starting up on http://0.0.0.0:9914/

because you asked for 2 workers? workers in oslo.service are 
processes[1]. i have no idea how 0 workers doesn't throw an error.

[1] 
https://github.com/openstack/oslo.service/blob/master/oslo_service/service.py#L523-L526
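
in other words, with api_workers=2 the expected process layout is roughly 
(purely illustrative; the parent pid is not shown in your log):

  launcher (parent, only supervises the workers, no wsgi socket of its own)
  |-- worker 1: pid 1958058, wsgi starting up on http://0.0.0.0:9914/
  `-- worker 2: pid 1958059, wsgi starting up on http://0.0.0.0:9914/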

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread Blair Bethwaite
There is a not insignificant degree of irony in the fact that this
conversation has splintered so that anyone only reading openstack-operators
and/or user-committee is missing 90% of the picture... Maybe I just need a
new ML management strategy.

I'd like to add a +1 to Sean's suggestion about WG/SIG/team/whatever tags
on reviews etc. This is something I've also suggested in the past:
http://lists.openstack.org/pipermail/user-committee/2016-October/001328.html.
My thinking at the time was that it would provide a tractable basis for
chairs to build standing discussion items around and help get more user &
ops eyes on blueprints/reviews/etc.

On 27 June 2017 at 10:25, Melvin Hillsman  wrote:

>
>
> On Wed, Jun 21, 2017 at 11:55 AM, Matt Riedemann 
> wrote:
>
>> On 6/21/2017 11:17 AM, Shamail Tahir wrote:
>>
>>>
>>>
>>> On Wed, Jun 21, 2017 at 12:02 PM, Thierry Carrez >> > wrote:
>>>
>>> Shamail Tahir wrote:
>>> > In the past, governance has helped (on the UC WG side) to reduce
>>> > overlaps/duplication in WGs chartered for similar objectives. I
>>> would
>>> > like to understand how we will handle this (if at all) with the
>>> new SIG
>>> > proposa?
>>>
>>> I tend to think that any overlap/duplication would get solved
>>> naturally,
>>> without having to force everyone through an application process that
>>> may
>>> discourage natural emergence of such groups. I feel like an
>>> application
>>> process would be premature optimization. We can always encourage
>>> groups
>>> to merge (or clean them up) after the fact. How much
>>> overlaps/duplicative groups did you end up having ?
>>>
>>>
>>> Fair point, it wasn't many. The reason I recalled this effort was
>>> because we had to go through the exercise after the fact and that made the
>>> volume of WGs to review much larger than had we asked the purpose whenever
>>> they were created. As long as we check back periodically and not let the
>>> work for validation/clean up pile up then this is probably a non-issue.
>>>
>>>
>>> > Also, do we have to replace WGs as a concept or could SIG
>>> > augment them? One suggestion I have would be to keep projects on
>>> the TC
>>> > side and WGs on the UC side and then allow for spin-up/spin-down
>>> of SIGs
>>> > as needed for accomplishing specific goals/tasks (picture of a
>>> diagram
>>> > I created at the Forum[1]).
>>>
>>> I feel like most groups should be inclusive of all community, so I'd
>>> rather see the SIGs being the default, and ops-specific or
>>> dev-specific
>>> groups the exception. To come back to my Public Cloud WG example, you
>>> need to have devs and ops in the same group in the first place before
>>> you would spin-up a "address scalability" SIG. Why not just have a
>>> Public Cloud SIG in the first place?
>>>
>>>
>>> +1, I interpreted originally that each use-case would be a SIG versus
>>> the SIG being able to be segment oriented (in which multiple use-cases
>>> could be pursued)
>>>
>>>
>>>  > [...]
>>> > Finally, how will this change impact the ATC/AUC status of the SIG
>>> > members for voting rights in the TC/UC elections?
>>>
>>> There are various options. Currently you give UC WG leads the AUC
>>> status. We could give any SIG lead both statuses. Or only give the
>>> AUC
>>> status to a subset of SIGs that the UC deems appropriate. It's
>>> really an
>>> implementation detail imho. (Also I would expect any SIG lead to
>>> already
>>> be both AUC and ATC somehow anyway, so that may be a non-issue).
>>>
>>>
>>> We can discuss this later because it really is an implementation detail.
>>> Thanks for the answers.
>>>
>>>
>>> --
>>> Thierry Carrez (ttx)
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> >> subscribe>
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> 
>>>
>>>
>>>
>>>
>>> --
>>> Thanks,
>>> Shamail Tahir
>>> t: @ShamailXD
>>> tz: Eastern Time
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> I think a key point you're going to want to convey and repeat ad nauseum
>> with this SIG idea is that each SIG is focused on a specific use case and
>> they can be spun up and spun down. Assuming that's what you want them to be.
>>
>> One problem I've seen with the various work groups is they overlap in a
>>

Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-27 Thread Jay Pipes

From what I can tell, Keycloak is an Identity provider, not a secret store?

-jay

On 06/27/2017 05:35 AM, Adam Heczko wrote:
Barbican already supports multiple secret storage backends [1] and most 
likely adding Keycloak's one [2] should be possible.


[1] 
https://docs.openstack.org/project-install-guide/key-manager/draft/barbican-backend.html

[2] https://github.com/jpkrohling/secret-store

On Tue, Jun 27, 2017 at 10:42 AM, Thierry Carrez > wrote:


Mikhail Fedosin wrote:
> Does the above mean you are implementing a share secret 
storage
> solution or that you are going to use an existing solution 
like
> Barbican that does that?
>
> Sectets is a plugin for Glare we developed for Nokia CloudBand
> platform,   and they just decided to opensource it. It doesn't
> use Barbican, technically it is oslo.versionedobjects class.
>
> Sorry to hear that you opted not to use Barbican.
>
> I think it's only because Keycloak integration is required by Nokia's
> system and Barbican doesn't support it.

Any technical reason why it couldn't be added to Barbican ? Any chance
Keycloak integration could be added as a Castellan backend ? Secrets
management is really one of those things that should *not* be reinvented
in every project. It is easier to get wrong than people think, and you
end up having to do security audits on 10 repositories instead of one.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





--
Adam Heczko
Security Engineer @ Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][ALL] What tempest tests will go under tempest plugin for a Project?

2017-06-27 Thread Chandan kumar
++ openstack-dev

On Tue, Jun 27, 2017 at 3:57 PM, Ghanshyam Mann  wrote:
> On Tue, Jun 27, 2017 at 3:52 PM, Chandan kumar  wrote:
>> Hello Ghanshyam,
>>
>> On Sat, Jun 24, 2017 at 3:48 PM, Ghanshyam Mann  
>> wrote:
>>> On Fri, Jun 23, 2017 at 4:51 PM, Chandan kumar  wrote:
 Hello,

 In Queen OpenStack release, We have a community goal to split In-Tree
 tempest plugin to a separate repo[1.].

 I have a couple question regarding the tempest tests movement within
 tempest plugins.

 [1.] Since some of the core OpenStack projects like Nova, Glance and
 Swift does have tempest plugin currently.
  Their Tempest tests reside under tempest project repo.
  are we going to create tempest plugin for the same?
  If yes, what are the tempest tests (API/Scenario) tests moving
 under tempest plugins?

 [2.] And, like other core projects like neutron and cinder have their
 in-tree tempest plugins also.
  And those are also moving to a separate repo and currently, their
 tests also resides under tempest repo.
  How can we avoid the duplication of the tempest tests?
>>>
>>> Its same answer for 1 and 2. Tempest is a place to have integration
>>> tests and future tests also falls in same scope.
>>> Yes, we do have API tests negative as well as positive which are there
>>> because of defcore. Defcore need those for interop certification.
>>> Those will reside in Tempest as of now and so new tests can be added
>>> in Tempest if defcore require them.
>>>
>>> New or existing Tempest plugin for 6 core projects whose tests are
>>> present in Tempest, will target their functional/API/negative testing
>>> etc which are/should be out of scope of Tempest.
>>>
>>> Regarding the duplication of tests, we do take care of those while
>>> review of new tests addition. If there is any new tests proposed in
>>> Tempest, reviewers need to check whether same coverage is there on
>>> project side or not (either functional tests or in tempest plugin).
>>> Also if those are more appropriate to reside on project side. We will
>>> be continuing with the same process to avoid duplicate tests.
>>>
>>
>> Thanks got it, So basically if any tests needed by DefCore as well as
>> integration tests
>> needed for Core  Projects will go under Tempest.
>>
>>>

 [3.] For other projects while moving tests to a separate repo how we
 are going to collaborate together to avoid
  duplication and move common tests to Tempest?
>>>
>>> You mean tests in projects tree as functional tests etc and their
>>> tempest plugin ?
>>
>> Yes, For example, Swift have lots of functional tests with in tests
>> folder of swift project tree.
>> Does it go under tempest plugin?
>> This part i am confused.
>
> That depends on project to project. If they want to implement tempest
> like tests and does not fall under Tempest scope, then those tests
> goes in tempest plugin like done by Cinder. But it does not mean that
> all existing functional tests of projects needs to be moved/converted
> to tempest like tests.
> That's all depends on project team decision.
>
> -gmann
>
>>
Thanks, I got all the answers.

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Turning TC/UC workgroups into OpenStack SIGs

2017-06-27 Thread Sean Dague
On 06/21/2017 01:10 PM, Michał Jastrzębski wrote:
> One of key components which, imho, made SIGs successful in k8s is
> infrastructure behind it.
> 
> When someone proposes an issue, they can tag SIG to it. Everyone in
> this SIG will be notified that there is an issue they might be
> interested it, they check it out and provide feedback. That also
> creates additional familiarity with dev toolset for non-dev sig
> members. I think what would be important for OpenStack SIGs to be
> successful is connecting SIGs to both Launchpad and Gerrit.

I think this is a key point. The simpler tools that github has, which
require that you build a workflow based on tags outside of the tools,
actually enables the effectiveness here.

Does k8s community currently have the same level of operators that
aren't developers participating as OpenStack?

I wonder if we're going down this path, if some kind of tooling like
standard tags for issues/patches should be added to the mix to help gain
the effectiveness that the k8s team seems to have here.

I also think it's fine to rebrand WG to SIG, but we should also be
honest that it's mostly a rebrand to consolidate on terminology that k8s
and cncf have used that people find easier to understand so it's a way
in which openstack is not different than those. Consolidating on terms
isn't a bad thing, but it's really a minor part of the workflow issue.

It might also be a good idea that any SIG that is going to be "official"
has the requirement that they write up a state of the sig every month or
two with what's done, what's happening, what's next, and what's
challenging. At a project the scale of OpenStack one of the biggest
issues is actually having a good grasp on the wide range of efforts, and
these summaries by teams are pretty critical to increasing the shared
understanding.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-27 Thread Sean Dague
On 06/27/2017 04:50 AM, Thierry Carrez wrote:
> gordon chung wrote:
>> do we know why they're not being completed? indifference? lack of resources?
> 
> I would say it's a mix of reasons. Sometimes it's a resource issue, but
> most of the time it's a prioritization issue (everyone waiting for
> someone else to pick it up), and in remaining cases it's pure
> procrastination (it's not that much work, I'll do it tomorrow).
> 
>> i like the champion idea although i think its scope should be expanded. 
>> i didn't mention this in meeting and the following has no legit research 
>> behind it so feel free to disregard but i imagine some of the 
>> indifference towards the goals is because:
>>
>> - it's often trivial (but important) work
>> many projects are already flooded with a lot of non-trivial, 
>> self-interest goals AND a lot trivial (and unimportant) copy/paste 
>> patches already so it's hard to feel passionate and find motivation to 
>> do it. the champion stuff may help here.
>>
>> - there is a disconnect between the TC and the projects.
>> it seems there is a requirement for the projects to engage the TC but 
>> not necessarily the other way around. for many projects, i'm fairly 
>> certain nothing would change whether they actively engaged the TC or 
>> just left relationship as is and had minimal/no interaction. i apologise 
>> if that's blunt but just based on my own prior experience.
>>
>> i don't know if the TC wants to become PMs but having the goals i feel 
>> sort of requires the TC to be PMs and to actually interact with the PTLs 
>> regularly, not just about the goal itself but the project and it's role 
>> in openstack. maybe it's as designed, but if there's no relationship 
>> there, i don't think 'TC wants you to do this' will get something done. 
>> it's in the same vein as how it's easier to get a patch approved if 
>> you're engaged in a project for some time as oppose to a patch out of 
>> the blue (disclaimer: i did not study sociology).
> When we look at goals, the main issue is generally not writing the
> patches, it's more about getting that prioritized in code review and
> tracking completion. That's where I think champions will help. Sometimes
> teams will need help writing patches, sometimes they will just need
> reminders to prioritize code review up. Someone has to look at the big
> picture and care for the completion of the goal. Having champions will
> also make it look a lot less like 'TC wants you to do this' and more
> like 'we are in this together, completing this goal will make openstack
> better'.

++

Having worked on a number of things that have touched a bunch of
projects, it turns out that the needs of every project are different.
The reason that multi project efforts seem to take so long, or die out,
is they need a reasonable amount of project management to be effective.

There are lots of demands on teams, and having someone that can
represent a bigger goal, knows what it looks like when complete, and can
go to the affected teams with "here is the next one thing I need from
you to make this whole" really speeds up the process. At least 2 - 3x
(if not more).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][qa][glance] some recent tempest problems

2017-06-27 Thread Ghanshyam Mann
On Mon, Jun 26, 2017 at 11:58 PM, Eric Harney  wrote:
> On 06/19/2017 09:22 AM, Matt Riedemann wrote:
>> On 6/16/2017 8:58 AM, Eric Harney wrote:
>>> I'm not convinced yet that this failure is purely Ceph-specific, at a
>>> quick look.
>>>
>>> I think what happens here is, unshelve performs an asynchronous delete
>>> of a glance image, and returns as successful before the delete has
>>> necessarily completed.  The check in tempest then sees that the image
>>> still exists, and fails -- but this isn't valid, because the unshelve
>>> API doesn't guarantee that this image is no longer there at the time it
>>> returns.  This would fail on any image delete that isn't instantaneous.
>>>
>>> Is there a guarantee anywhere that the unshelve API behaves how this
>>> tempest test expects it to?
>>
>> There are no guarantees, no. The unshelve API reference is here [1]. The
>> asynchronous postconditions section just says:
>>
>> "After you successfully shelve a server, its status changes to ACTIVE.
>> The server appears on the compute node.
>>
>> The shelved image is deleted from the list of images returned by an API
>> call."
>>
>> It doesn't say the image is deleted immediately, or that it waits for
>> the image to be gone before changing the instance status to ACTIVE.
>>
>> I see there is also a typo in there, that should say after you
>> successfully *unshelve* a server.
>>
>> From an API user point of view, this is all asynchronous because it's an
>> RPC cast from the nova-api service to the nova-conductor and finally
>> nova-compute service when unshelving the instance.
>>
>> So I think the test is making some wrong assumptions on how fast the
>> image is going to be deleted when the instance is active.
>>
>> As Ken'ichi pointed out in the Tempest change, Glance returns a 204 when
>> deleting an image in the v2 API [2]. If the image delete is asynchronous
>> then that should probably be a 202.
>>
>> Either way the Tempest test should probably be in a wait loop for the
>> image to be gone if it's really going to assert this.
>>
>
> Thanks for confirming this.
>
> What do we need to do to get this fixed in Tempest?  Nobody from Tempest
> Core has responded to the revert patch [3] since this explanation was
> posted.
>
> IMO we should revert this for now and someone can implement a fixed
> version if this test is needed.

Sorry for delay. Let's fix this instead of revert  -
https://review.openstack.org/#/c/477821/
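
For anyone following along, what Matt suggested amounts to polling until the
image is gone instead of asserting right after unshelve returns. A minimal
illustrative sketch (not the actual patch in the review above; the client and
exception names follow tempest.lib conventions and may differ):

import time

from tempest.lib import exceptions as lib_exc

def wait_for_image_deleted(images_client, image_id, timeout=60, interval=1):
    # Poll the image API until the shelved snapshot disappears, rather
    # than asserting immediately after the unshelve call returns.
    start = time.time()
    while time.time() - start < timeout:
        try:
            images_client.show_image(image_id)
        except lib_exc.NotFound:
            return
        time.sleep(interval)
    raise lib_exc.TimeoutException(
        'Image %s still present after %s seconds' % (image_id, timeout))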

-gmann

>
> [3] https://review.openstack.org/#/c/471352/
>
>> [1]
>> https://developer.openstack.org/api-ref/compute/?expanded=unshelve-restore-shelved-server-unshelve-action-detail#unshelve-restore-shelved-server-unshelve-action
>>
>> [2]
>> https://developer.openstack.org/api-ref/image/v2/index.html?expanded=delete-an-image-detail#delete-an-image
>>
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo][oslo-service] Multi main processes are started by using oslo_service

2017-06-27 Thread zhi
Hi, all.

Here, I want to start a main process and multi subprocess by using
oslo_service. So I launch a service like this:

from oslo_service import service as common_service

def serve_wsgi(cls):
try:
service = cls.create()
except Exception:
with excutils.save_and_reraise_exception():
LOG.exception(''..')

return service

class ApiService(WsgiService):
"""Class for tacker-api service."""

@classmethod
def create(cls, app_name='abc'):
service = cls(app_name)
return service

api = service.serve_wsgi(service.ApiService)
launcher = common_service.launch(cfg.CONF, api,
 workers=cfg.CONF.api_workers or
None)
launcher.wait()

Everything goes well when the "api_workers " equals 0 or 1. But two main
processes were started when the " api_workers " equals 2. The log shows
below:

2017-06-27 17:42:18.864 1958058 INFO abc.common.wsgi [-] (1958058) wsgi
starting up on http://0.0.0.0:9914/

2017-06-27 17:42:18.864 1958059 INFO abc.common.wsgi [-] (1958059) wsgi
starting up on http://0.0.0.0:9914/


Could someone tell me the reason why two main processes were started ?


Thanks
Zhi Chang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-27 Thread Adam Heczko
Barbican already supports multiple secret storage backends [1] and most
likely adding Keycloak's one [2] should be possible.

[1]
https://docs.openstack.org/project-install-guide/key-manager/draft/barbican-backend.html
[2] https://github.com/jpkrohling/secret-store

On Tue, Jun 27, 2017 at 10:42 AM, Thierry Carrez 
wrote:

> Mikhail Fedosin wrote:
> > Does the above mean you are implementing a share secret
> storage
> > solution or that you are going to use an existing solution
> like
> > Barbican that does that?
> >
> > Sectets is a plugin for Glare we developed for Nokia CloudBand
> > platform,   and they just decided to opensource it. It doesn't
> > use Barbican, technically it is oslo.versionedobjects class.
> >
> > Sorry to hear that you opted not to use Barbican.
> >
> > I think it's only because Keycloak integration is required by Nokia's
> > system and Barbican doesn't support it.
>
> Any technical reason why it couldn't be added to Barbican ? Any chance
> Keycloak integration could be added as a Castellan backend ? Secrets
> management is really one of those things that should *not* be reinvented
> in every project. It is easier to get wrong than people think, and you
> end up having to do security audits on 10 repositories instead of one.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Jun.28

2017-06-27 Thread joehuang
Hello, team,

Agenda of Jun.28 weekly meeting:

  1.  feature implementation review
  2.  Community wide goals, PTG and Sydney summit topics
  3.  Open Discussion

How to join:

#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 1:00.



Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all][ptl] Most Supported Queens Goals and Improving Goal Completion

2017-06-27 Thread Thierry Carrez
gordon chung wrote:
> do we know why they're not being completed? indifference? lack of resources?

I would say it's a mix of reasons. Sometimes it's a resource issue, but
most of the time it's a prioritization issue (everyone waiting for
someone else to pick it up), and in remaining cases it's pure
procrastination (it's not that much work, I'll do it tomorrow).

> i like the champion idea although i think its scope should be expanded. 
> i didn't mention this in meeting and the following has no legit research 
> behind it so feel free to disregard but i imagine some of the 
> indifference towards the goals is because:
> 
> - it's often trivial (but important) work
> many projects are already flooded with a lot of non-trivial, 
> self-interest goals AND a lot trivial (and unimportant) copy/paste 
> patches already so it's hard to feel passionate and find motivation to 
> do it. the champion stuff may help here.
> 
> - there is a disconnect between the TC and the projects.
> it seems there is a requirement for the projects to engage the TC but 
> not necessarily the other way around. for many projects, i'm fairly 
> certain nothing would change whether they actively engaged the TC or 
> just left relationship as is and had minimal/no interaction. i apologise 
> if that's blunt but just based on my own prior experience.
> 
> i don't know if the TC wants to become PMs but having the goals i feel 
> sort of requires the TC to be PMs and to actually interact with the PTLs 
> regularly, not just about the goal itself but the project and it's role 
> in openstack. maybe it's as designed, but if there's no relationship 
> there, i don't think 'TC wants you to do this' will get something done. 
> it's in the same vein as how it's easier to get a patch approved if 
> you're engaged in a project for some time as oppose to a patch out of 
> the blue (disclaimer: i did not study sociology).
When we look at goals, the main issue is generally not writing the
patches, it's more about getting that prioritized in code review and
tracking completion. That's where I think champions will help. Sometimes
teams will need help writing patches, sometimes they will just need
reminders to prioritize code review up. Someone has to look at the big
picture and care for the completion of the goal. Having champions will
also make it look a lot less like 'TC wants you to do this' and more
like 'we are in this together, completing this goal will make openstack
better'.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-27 Thread Thierry Carrez
Mikhail Fedosin wrote:
> Does the above mean you are implementing a share secret storage
> solution or that you are going to use an existing solution like
> Barbican that does that?
> 
> Sectets is a plugin for Glare we developed for Nokia CloudBand
> platform,   and they just decided to opensource it. It doesn't
> use Barbican, technically it is oslo.versionedobjects class.
> 
> Sorry to hear that you opted not to use Barbican.
> 
> I think it's only because Keycloak integration is required by Nokia's
> system and Barbican doesn't support it. 

Any technical reason why it couldn't be added to Barbican ? Any chance
Keycloak integration could be added as a Castellan backend ? Secrets
management is really one of those things that should *not* be reinvented
in every project. It is easier to get wrong than people think, and you
end up having to do security audits on 10 repositories instead of one.
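
For context, Castellan is the abstraction point where such a backend would
plug in. A minimal sketch, assuming a configured backend and an existing
service context (CONF and context below are placeholders, and the Keycloak
backend discussed above does not exist today):

from castellan import key_manager
from castellan.common.objects import passphrase

km = key_manager.API(CONF)           # backend selected via [key_manager] config
secret = passphrase.Passphrase('not-a-real-secret')
ref = km.store(context, secret)      # returns a reference for later retrieval
retrieved = km.get(context, ref)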

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-upstream-institute] Meeting on the top of the hour

2017-06-27 Thread Ildiko Vancsa
Hi Training Team,

It is a friendly reminder that we’re having our Europe and Asia TZ friendly 
meeting slot at 0900 UTC today, on #openstack-meeting-3.

You can find the meeting agenda here: 
https://etherpad.openstack.org/p/openstack-upstream-institute-meetings

Thanks,
Ildikó
(IRC: ildikov)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Sahid Orentino Ferdjaoui
On Mon, Jun 26, 2017 at 12:12:49PM -0600, Chris Friesen wrote:
> On 06/25/2017 02:09 AM, Sahid Orentino Ferdjaoui wrote:
> > On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:
> > > On 06/23/2017 09:35 AM, Henning Schild wrote:
> > > > Am Fri, 23 Jun 2017 11:11:10 +0200
> > > > schrieb Sahid Orentino Ferdjaoui :
> > > 
> > > > > In Linux RT context, and as you mentioned, the non-RT vCPU can acquire
> > > > > some guest kernel lock, then be pre-empted by emulator thread while
> > > > > holding this lock. This situation blocks RT vCPUs from doing its
> > > > > work. So that is why we have implemented [2]. For DPDK I don't think
> > > > > we have such problems because it's running in userland.
> > > > > 
> > > > > So for DPDK context I think we could have a mask like we have for RT
> > > > > and basically considering vCPU0 to handle best effort works (emulator
> > > > > threads, SSH...). I think it's the current pattern used by DPDK users.
> > > > 
> > > > DPDK is just a library and one can imagine an application that has
> > > > cross-core communication/synchronisation needs where the emulator
> > > > slowing down vpu0 will also slow down vcpu1. You DPDK application would
> > > > have to know which of its cores did not get a full pcpu.
> > > > 
> > > > I am not sure what the DPDK-example is doing in this discussion, would
> > > > that not just be cpu_policy=dedicated? I guess normal behaviour of
> > > > dedicated is that emulators and io happily share pCPUs with vCPUs and
> > > > you are looking for a way to restrict emulators/io to a subset of pCPUs
> > > > because you can live with some of them beeing not 100%.
> > > 
> > > Yes.  A typical DPDK-using VM might look something like this:
> > > 
> > > vCPU0: non-realtime, housekeeping and I/O, handles all virtual interrupts
> > > and "normal" linux stuff, emulator runs on same pCPU
> > > vCPU1: realtime, runs in tight loop in userspace processing packets
> > > vCPU2: realtime, runs in tight loop in userspace processing packets
> > > vCPU3: realtime, runs in tight loop in userspace processing packets
> > > 
> > > In this context, vCPUs 1-3 don't really ever enter the kernel, and we've
> > > offloaded as much kernel work as possible from them onto vCPU0.  This 
> > > works
> > > pretty well with the current system.
> > > 
> > > > > For RT we have to isolate the emulator threads to an additional pCPU
> > > > > per guests or as your are suggesting to a set of pCPUs for all the
> > > > > guests running.
> > > > > 
> > > > > I think we should introduce a new option:
> > > > > 
> > > > > - hw:cpu_emulator_threads_mask=^1
> > > > > 
> > > > > If on 'nova.conf' - that mask will be applied to the set of all host
> > > > > CPUs (vcpu_pin_set) to basically pack the emulator threads of all VMs
> > > > > running here (useful for RT context).
> > > > 
> > > > That would allow modelling exactly what we need.
> > > > In nova.conf we are talking absolute known values, no need for a mask
> > > > and a set is much easier to read. Also using the same name does not
> > > > sound like a good idea.
> > > > And the name vcpu_pin_set clearly suggest what kind of load runs here,
> > > > if using a mask it should be called pin_set.
> > > 
> > > I agree with Henning.
> > > 
> > > In nova.conf we should just use a set, something like
> > > "rt_emulator_vcpu_pin_set" which would be used for running the emulator/io
> > > threads of *only* realtime instances.
> > 
> > I don't agree with you: we have a set of pCPUs and we want to
> > subtract some of them for the emulator threads. We need a mask. The
> > only set we need is the one selecting which pCPUs Nova can use
> > (vcpu_pin_set).
> > 
> > > We may also want to have "rt_emulator_overcommit_ratio" to control how
> > > many threads/instances we allow per pCPU.
> > 
> > Not really sure I understand this point. If it is to indicate
> > that for an isolated pCPU we want X guest emulator threads, the same
> > behavior is achieved by the mask. A host for realtime is dedicated to
> > realtime, with no overcommitment, and the operators know the number of host
> > CPUs, so they can easily deduce a ratio and thus the corresponding mask.
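> > 
> > Purely as an illustration (neither option name is final and the exact
> > semantics are what this thread is debating), the mask-based variant could
> > end up looking roughly like this in nova.conf on a host exposing pCPUs
> > 2-7 to Nova:
> > 
> >   [DEFAULT]
> >   # pCPUs Nova may use for guests (existing option)
> >   vcpu_pin_set = 2-7
> >   # hypothetical option: mask applied to vcpu_pin_set, with the excluded
> >   # pCPU(s) used to pack the emulator threads of the RT guests
> >   emulator_threads_mask = ^2
> > 
> > which would leave pCPUs 3-7 dedicated to RT vCPUs and pCPU 2 absorbing
> > the emulator threads of every RT guest on the host.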
> 
> Suppose I have a host with 64 CPUs.  I reserve three for host overhead and
> networking, leaving 61 for instances.  If I have instances with one non-RT
> vCPU and one RT vCPU then I can run 30 instances.  If instead my instances
> have one non-RT and 5 RT vCPUs then I can run 12 instances.  If I put all of
> my emulator threads on the same pCPU, it might make a difference whether I
> put 30 sets of emulator threads or 12 sets.

Oh, I understand your point now, but I'm not sure it is going to make any
difference. I would say the load on the isolated cores is probably
going to be the same. The only extra overhead is the number of
threads to handle, which will be slightly higher in your first scenario.

> The proposed "rt_emulator_overcommit_ratio" would simply say "nova is
> allowed to run X instances 

Re: [openstack-dev] realtime kvm cpu affinities

2017-06-27 Thread Sahid Orentino Ferdjaoui
On Mon, Jun 26, 2017 at 10:19:12AM +0200, Henning Schild wrote:
> Am Sun, 25 Jun 2017 10:09:10 +0200
> schrieb Sahid Orentino Ferdjaoui :
> 
> > On Fri, Jun 23, 2017 at 10:34:26AM -0600, Chris Friesen wrote:
> > > On 06/23/2017 09:35 AM, Henning Schild wrote:  
> > > > Am Fri, 23 Jun 2017 11:11:10 +0200
> > > > schrieb Sahid Orentino Ferdjaoui :  
> > >   
> > > > > In a Linux RT context, and as you mentioned, the non-RT vCPU can
> > > > > acquire some guest kernel lock and then be pre-empted by the emulator
> > > > > thread while holding this lock. This situation blocks the RT vCPUs
> > > > > from doing their work. That is why we have implemented [2].
> > > > > For DPDK I don't think we have such problems because it's
> > > > > running in userland.
> > > > > 
> > > > > So for the DPDK context I think we could have a mask like we have
> > > > > for RT and basically consider vCPU0 to handle best-effort
> > > > > work (emulator threads, SSH...). I think that is the current
> > > > > pattern used by DPDK users.
> > > > 
> > > > DPDK is just a library, and one can imagine an application with
> > > > cross-core communication/synchronisation needs where the emulator
> > > > slowing down vcpu0 will also slow down vcpu1. Your DPDK application
> > > > would have to know which of its cores did not get a full pcpu.
> > > > 
> > > > I am not sure what the DPDK example is doing in this discussion;
> > > > would that not just be cpu_policy=dedicated? I guess the normal
> > > > behaviour of dedicated is that emulators and io happily share
> > > > pCPUs with vCPUs, and you are looking for a way to restrict
> > > > emulators/io to a subset of pCPUs because you can live with some
> > > > of them not being at 100%.
> > > 
> > > Yes.  A typical DPDK-using VM might look something like this:
> > > 
> > > vCPU0: non-realtime, housekeeping and I/O, handles all virtual
> > > interrupts and "normal" linux stuff, emulator runs on same pCPU
> > > vCPU1: realtime, runs in tight loop in userspace processing packets
> > > vCPU2: realtime, runs in tight loop in userspace processing packets
> > > vCPU3: realtime, runs in tight loop in userspace processing packets
> > > 
> > > In this context, vCPUs 1-3 don't really ever enter the kernel, and
> > > we've offloaded as much kernel work as possible from them onto
> > > vCPU0.  This works pretty well with the current system.
> > >   
> > > > > For RT we have to isolate the emulator threads to an additional
> > > > > pCPU per guest or, as you are suggesting, to a set of pCPUs shared
> > > > > by all the running guests.
> > > > > 
> > > > > I think we should introduce a new option:
> > > > > 
> > > > >- hw:cpu_emulator_threads_mask=^1
> > > > > 
> > > > > If set in 'nova.conf', that mask will be applied to the set of all
> > > > > host CPUs (vcpu_pin_set) to basically pack the emulator threads
> > > > > of all VMs running there (useful in an RT context).
> > > > 
> > > > That would allow modelling exactly what we need.
> > > > In nova.conf we are talking about absolute, known values, so there is
> > > > no need for a mask, and a set is much easier to read. Also, reusing the
> > > > same name does not sound like a good idea.
> > > > And the name vcpu_pin_set clearly suggests what kind of load runs
> > > > here; if using a mask it should be called pin_set.
> > > 
> > > I agree with Henning.
> > > 
> > > In nova.conf we should just use a set, something like
> > > "rt_emulator_vcpu_pin_set" which would be used for running the
> > > emulator/io threads of *only* realtime instances.  
> > 
> > I don't agree with you: we have a set of pCPUs and we want to
> > subtract some of them for the emulator threads. We need a mask. The
> > only set we need is the one selecting which pCPUs Nova can use
> > (vcpu_pin_set).
> 
> At that point it does not really matter whether it is a set or a mask.
> They can both express the same thing, but a set is easier to read/configure.
> With the same argument you could say that vcpu_pin_set should be a mask
> over the host's pcpus.
> 
> As I said before: vcpu_pin_set should be renamed because all sorts of
> threads are put there (pcpu_pin_set?). But that would be a bigger change
> and should be discussed as a separate issue.
> 
> So far we have talked about a realtime compute node doing only realtime.
> In that case vcpu_pin_set + emulator_io_mask would work. If you want to
> run regular VMs on the same host, you can run a second nova, like we do.
> 
> We could also use vcpu_pin_set + rt_vcpu_pin_set(/mask). I think that
> would allow modelling all cases in just one nova. Having all in one
> nova, you could potentially repurpose rt cpus to best-effort and back.
> Some day in the future ...
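> 
> (Hypothetically, since rt_vcpu_pin_set does not exist today, that could
> look something like:
> 
>   [DEFAULT]
>   vcpu_pin_set = 2-7
>   rt_vcpu_pin_set = 4-7
> 
> where pCPUs 2-3 carry best-effort vCPUs plus emulator/io threads and
> pCPUs 4-7 are reserved for realtime vCPUs.)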

That is not something we should allow, or at least not something we should
advertise. A compute node can't run both RT and non-RT guests, because such
nodes should run an RT kernel. We can't guarantee RT if both are on the
same node.

The realtime nodes should be isolated via host aggregates, as you seem to do.
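
For reference, a minimal sketch of that kind of isolation (assuming the
usual AggregateInstanceExtraSpecsFilter setup; the names are made up):

  openstack aggregate create realtime-hosts
  openstack aggregate set --property realtime=true realtime-hosts
  openstack aggregate add host realtime-hosts compute-rt-0
  openstack flavor set rt.flavor \
    --property aggregate_instance_extra_specs:realtime=true

so that only flavors tagged for realtime land on the RT-kernel hosts.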

> > > We may also want to have "rt_emulator_overcommit_ratio

Re: [openstack-dev] [Glare][TC][All] Past, Present and Future of Glare project

2017-06-27 Thread Flavio Percoco

On 26/06/17 17:35 +0300, Mikhail Fedosin wrote:

2. We would like to become an official OpenStack project, and in general we
follow all the necessary rules and recommendations, from weekly
IRC meetings and our own channel to the Apache license and Keystone support.
For this reason, I want to file an application and hear objections and
recommendations on this matter.


Note that IRC meetings are not a requirement anymore: 
https://review.openstack.org/#/c/462077/

As far as the rest of the process goes, it looks like you are all good to go.
I'd recommend submitting the request to the governance repo and letting the
discussion begin:
https://governance.openstack.org/tc/reference/new-projects-requirements.html

Flavio

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev