Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-02 Thread Cédric Jeanneret


On 08/02/2018 11:41 PM, Steve Baker wrote:
> 
> 
> On 02/08/18 13:03, Alex Schultz wrote:
>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya 
>> wrote:
>>> On 7/6/18 7:02 PM, Ben Nemec wrote:


 On 07/05/2018 01:23 PM, Dan Prince wrote:
> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>
>> I would almost rather see us organize the directories by service
>> name/project instead of implementation.
>>
>> Instead of:
>>
>> puppet/services/nova-api.yaml
>> puppet/services/nova-conductor.yaml
>> docker/services/nova-api.yaml
>> docker/services/nova-conductor.yaml
>>
>> We'd have:
>>
>> services/nova/nova-api-puppet.yaml
>> services/nova/nova-conductor-puppet.yaml
>> services/nova/nova-api-docker.yaml
>> services/nova/nova-conductor-docker.yaml
>>
>> (or perhaps even another level of directories to indicate
>> puppet/docker/ansible?)
>
> I'd be open to this, but changes on this scale have a much larger
> developer and user impact than what I was thinking we would be willing
> to entertain for the issue that caused me to bring this up (i.e. how to
> identify services which get configured by Ansible).
>
> It's also worth noting that many projects keep these sorts of things in
> different repos too. Like Kolla fully separates kolla-ansible and
> kolla-kubernetes as they are quite divergent. We have been able to
> preserve some of our common service architectures but as things move
> towards kubernetes we may wish to change things structurally a bit
> too.

 True, but the current directory layout was from back when we
 intended to
 support multiple deployment tools in parallel (originally
 tripleo-image-elements and puppet).  Since I think it has become
 clear that
 it's impractical to maintain two different technologies to do
 essentially
 the same thing I'm not sure there's a need for it now.  It's also worth
 noting that kolla-kubernetes basically died because there weren't enough
 people to maintain both deployment methods, so we're not the only
 ones who
 have found that to be true.  If/when we move to kubernetes I would
 anticipate it going like the initial containers work did -
 development for a
 couple of cycles, then a switch to the new thing and deprecation of
 the old
 thing, then removal of support for the old thing.

 That being said, because of the fact that the service yamls are
 essentially an API for TripleO because they're referenced in user
>>>
>>> this ^^
>>>
 resource registries, I'm not sure it's worth the churn to move
 everything
 either.  I think that's going to be an issue either way though, it's
 just a
 question of the scope.  _Something_ is going to move around no
 matter how we
 reorganize so it's a problem that needs to be addressed anyway.
>>>
>>> [tl;dr] I can foresee reorganizing that API becoming a nightmare for
>>> maintainers doing backports for Queens (and the LTS downstream release
>>> based on it). Now imagine kubernetes support arriving within the next
>>> few years, before we can let the old API just go...
>>>
>>> I have an example [0] to share of all the pain brought by a simple move
>>> of 'API defaults' from environments/services-docker to
>>> environments/services plus environments/services-baremetal. Each time a
>>> file changed contents at its old location, like here [1], I had to run a
>>> lot of sanity checks to rebase it properly: checking whether the updated
>>> paths in resource registries were still valid or had been moved as well,
>>> then picking the source of truth for the diverged old vs. changed
>>> locations - all that to lose nothing important in the process.
>>>
>>> So I'd say please let's *not* change services' paths/namespaces in the
>>> t-h-t "API" without a real need to do so, when there are no alternatives
>>> left.
>>>
>> Ok so it's time to dig this thread back up. I'm currently looking at
>> the chrony support which will require a new service[0][1]. Rather than
>> add it under puppet, we'll likely want to leverage ansible. So I guess
>> the question is where do we put services going forward?  Additionally,
>> as we look at truly removing the baremetal deployment options and
>> puppet service deployment, it seems like we need to consolidate under
>> a single structure.  Given that we don't want to force too much churn,
>> does this mean that we should align to the docker/services/*.yaml
>> structure, or should we be proposing a new structure that we can try to
>> align on?
>>
>> There is outstanding tech debt around the nested stacks and references
>> within these services from when we added the container deployments, so
>> it's something that would be beneficial to start tackling sooner rather
>> than later.  Personally I think we're always going to have 

[openstack-dev] [all][docs] ACTION REQUIRED for projects using readthedocs

2018-08-02 Thread Ian Wienand
Hello,

tl;dr : any project using the "docs-on-readthedocs" job template
to trigger a build of its documentation in readthedocs needs to:

 1) add the "openstackci" user as a maintainer of the RTD project
 2) generate a webhook integration URL for the project via RTD
 3) provide the unique webhook ID value in the "rtd_webhook_id" project
variable

See

 
https://docs.openstack.org/infra/openstack-zuul-jobs/project-templates.html#project_template-docs-on-readthedocs

--

readthedocs has recently updated their API for triggering a
documentation build.  In the old API, anyone could POST to a known URL
for the project and it would trigger a build.  This end-point has
stopped responding and we now need to use an authenticated webhook to
trigger documentation builds.
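
For anyone curious what the trigger now looks like under the hood, it boils
down to an authenticated POST against the project's generic webhook.  A
minimal sketch (the URL layout and the 'token' form field are assumptions
based on readthedocs' generic webhook integration; the slug, ID and token
below are placeholders, not values the job template asks you for):

  # Hypothetical illustration of triggering an RTD build via the generic
  # webhook.  The project slug, webhook ID and token are placeholders.
  import requests

  RTD_PROJECT = 'my-project'        # RTD project slug
  RTD_WEBHOOK_ID = '12345'          # same value as the rtd_webhook_id variable
  RTD_TOKEN = 'secret-from-rtd-ui'  # shown when the integration is created

  url = 'https://readthedocs.org/api/v2/webhook/%s/%s/' % (RTD_PROJECT,
                                                           RTD_WEBHOOK_ID)
  resp = requests.post(url, data={'token': RTD_TOKEN})
  resp.raise_for_status()
  print(resp.json())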

Since this is only done in the post and release pipelines, projects
probably haven't had great feedback that current methods are failing
and this may be a surprise.  To check your publishing, you can go to
the zuul builds page [1] and filter by your project and the "post"
pipeline to find recent runs.

There is now some setup required which can only be undertaken by a
current maintainer of the RTD project.

In short: add the "openstackci" user as a maintainer, add a "generic
webhook" integration to the project, find the last bit of the URL from
that, and put it in the project variable "rtd_webhook_id".

Luckily OpenStack infra keeps a team of highly skilled digital artists
on retainer and they have produced a handy visual guide available at

  https://imgur.com/a/Pp4LH31

Once the RTD project is set up, you must provide the webhook ID value
in your project variables.  This will look something like:

 - project:
     templates:
       - docs-on-readthedocs
       - publish-to-pypi
     vars:
       rtd_webhook_id: '12345'
     check:
       jobs:
         ...

For actual examples; see pbrx [2] which keeps its config in tree, or
gerrit-dash-creator which has its configuration in project-config [3].

Happy to help if anyone is having issues, via mail or #openstack-infra

Thanks!

-i

p.s. You don't *have* to use the jobs from the docs-on-readthedocs
templates and hence add infra as a maintainer; you can set up your own
credentials with zuul secrets in tree and write your playbooks and
jobs to use the generic role [4].  We're always happy to discuss any
concerns.

[1] https://zuul.openstack.org/builds.html
[2] https://git.openstack.org/cgit/openstack/pbrx/tree/.zuul.yaml#n17
[3] 
https://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml
[4] https://zuul-ci.org/docs/zuul-jobs/roles.html#role-trigger-readthedocs



[openstack-dev] [openstack-infra][openstack-third-party-ci][nodepool][ironic] nodepool can't ssh to the VM it created

2018-08-02 Thread Pei Pei2 Jia
Hi all,

I'm now encountering a strange problem when using nodepool 0.5.0 to manage
an OpenStack cloud. It can create a VM successfully in the cloud, but
can't SSH to it. The nodepool.log is:
>nodepool.log << END
2018-08-02 22:25:36,152 ERROR nodepool.utils: Failed to negotiate SSH: Signature verification (ssh-rsa) failed.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nodepool/nodeutils.py", line 55, in ssh_connect
    client = SSHClient(ip, username, **connect_kwargs)
  File "/usr/local/lib/python2.7/dist-packages/nodepool/sshclient.py", line 33, in __init__
    allow_agent=allow_agent)
  File "/usr/local/lib/python2.7/dist-packages/paramiko/client.py", line 353, in connect
    t.start_client(timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/paramiko/transport.py", line 494, in start_client
    raise e
SSHException: Signature verification (ssh-rsa) failed.
END

And when I check the VM start log, I find it shows
A start job is running for unbound.service (3min 37s / 8min 28s)

And my nodepool.yml is:
providers:
  - name: cloud_183
    region-name: 'RegionOne'
    cloud: cloud_183
    max-servers: 2
    boot-timeout: 240
    launch-timeout: 600
    networks:
      - name: tenant
    clean-floating-ips: True
    images:
      - name: ubuntu-xenial
        min-ram: 2048
        diskimage: ubuntu-xenial
        username: jenkins
        key-name: nodepool
        private-key: '/home/nodepool/.ssh/id_rsa'

Does anyone happen to know what is going on here? Thank you in advance.
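
A rough way to reproduce the connection nodepool is attempting, outside of
nodepool itself (the IP below is a placeholder for the node's address; the
username and key come from the provider config above):

  # Debugging sketch only -- not part of nodepool.  It mirrors the paramiko
  # connection that nodepool.nodeutils.ssh_connect() makes in the traceback.
  import paramiko

  client = paramiko.SSHClient()
  client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
  client.connect('203.0.113.10',                  # IP of the failing node
                 username='jenkins',
                 key_filename='/home/nodepool/.ssh/id_rsa',
                 timeout=60)
  _, stdout, _ = client.exec_command('uptime')
  print(stdout.read())
  client.close()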

Jeremy Jia (贾培)
Software Developer, Lenovo Cloud Technology Center
5F, Zhangjiang Mansion, 560 SongTao Rd. Pudong, Shanghai


  jiap...@lenovo.com
  Ph: 8621-
  Mobile: 8618116119081







Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Jay Pipes

On 08/02/2018 06:18 PM, Michael Glasgow wrote:

On 08/02/18 15:04, Chris Friesen wrote:

On 08/02/2018 01:04 PM, melanie witt wrote:


The problem is an infamous one, which is, your users are trying to boot
instances and they get "No Valid Host" and an instance in ERROR 
state. They contact support, and now support is trying to determine 
why NoValidHost happened. In the past, they would turn on DEBUG log 
level on the nova-scheduler, try another request, and take a look at 
the scheduler logs.


At a previous Summit[1] there were some operators that said they just 
always ran nova-scheduler with debug logging enabled in order to deal 
with this issue, but that it was a pain [...]


I would go a bit further and say it's likely to be unacceptable on a 
large cluster.  It's expensive to deal with all those logs and to 
manually comb through them for troubleshooting this issue type, which 
can happen frequently with some setups.  Secondarily there are 
performance and security concerns with leaving debug on all the time.


As to "defining the problem", I think it's what Melanie said.  It's 
about asking for X and the system saying, "sorry, can't give you X" with 
no further detail or even means of discovering it.


More generally, any time a service fails to deliver a resource which it 
is primarily designed to deliver, it seems to me at this stage that 
should probably be taken a bit more seriously than just "check the log 
file, maybe there's something in there?"  From the user's perspective, 
if nova fails to produce an instance, or cinder fails to produce a 
volume, or neutron fails to build a subnet, that's kind of a big deal, 
right?


In such cases, would it be possible to generate a detailed exception 
object which contains all the necessary info to ascertain why that 
specific failure occurred?


It's not an exception. It's the normal course of events. NoValidHost means 
there were no compute nodes that met the requested resource amounts.


There's plenty of ways the operator can get usage and trait information 
and determine if there are providers that meet the requested amounts and 
required/forbidden traits.


What we're talking about here is debugging information, plain and simple.

If a SELECT statement against an Oracle DB returns 0 rows, is that an 
exception? No. Would an operator need to re-send the SELECT statement 
with an EXPLAIN SELECT in order to get information about what indexes 
were used to winnow the result set (to zero)? Yes. Either that, or the 
operator would need to gradually re-execute smaller SELECT statements 
containing fewer filters in order to determine which join or predicate 
caused a result set to contain zero rows.


That's exactly what we're talking about here. It's not an exception. 
It's debugging information.


Best,
-jay



[openstack-dev] [barbican][ara][helm][tempest] Removal of fedora-27 nodes

2018-08-02 Thread Paul Belanger
Greetings,

We've had fedora-28 nodes online for some time in openstack-infra, I'd like to
finish the migration process and remove fedora-27 images.

Please take a moment to review and approve the following patches[1]. We'll be
using the fedora-latest nodeset now, which makes it a little easier for
openstack-infra to migrate to newer versions of fedora.  Next time around, we'll
send out an email to the ML once fedora-29 is online to give projects some time
to test before we make the change.

Thanks
- Paul

[1] https://review.openstack.org/#/q/topic:fedora-latest



Re: [openstack-dev] [designate][stable] Stable Core Team Updates

2018-08-02 Thread Tony Breeds
On Tue, Jul 31, 2018 at 06:39:36PM +0100, Graham Hayes wrote:
> Hi Stable Team,
> 
> I would like to nominate 2 new stable core reviewers for Designate.
> 
> * Erik Olof Gunnar Andersson 
> * Jens Harbott (frickler) 
> 
> Erik has been doing a lot of stable reviews recently, and Jens has shown
> that he understands the policy in other reviews (and has stable rights
> on other repositories (like DevStack) already).

Done.  Jens doesn't seem to be doing active stable reviews but I've
added them anyway.

Yours Tony.




Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Michael Glasgow

On 08/02/18 15:04, Chris Friesen wrote:

On 08/02/2018 01:04 PM, melanie witt wrote:


The problem is an infamous one, which is, your users are trying to boot
instances and they get "No Valid Host" and an instance in ERROR state. 
They contact support, and now support is trying to determine why 
NoValidHost happened. In the past, they would turn on DEBUG log level 
on the nova-scheduler, try another request, and take a look at the 
scheduler logs.


At a previous Summit[1] there were some operators that said they just 
always ran nova-scheduler with debug logging enabled in order to deal 
with this issue, but that it was a pain [...]


I would go a bit further and say it's likely to be unacceptable on a 
large cluster.  It's expensive to deal with all those logs and to 
manually comb through them for troubleshooting this issue type, which 
can happen frequently with some setups.  Secondarily there are 
performance and security concerns with leaving debug on all the time.


As to "defining the problem", I think it's what Melanie said.  It's 
about asking for X and the system saying, "sorry, can't give you X" with 
no further detail or even means of discovering it.


More generally, any time a service fails to deliver a resource which it 
is primarily designed to deliver, it seems to me at this stage that 
should probably be taken a bit more seriously than just "check the log 
file, maybe there's something in there?"  From the user's perspective, 
if nova fails to produce an instance, or cinder fails to produce a 
volume, or neutron fails to build a subnet, that's kind of a big deal, 
right?


In such cases, would it be possible to generate a detailed exception 
object which contains all the necessary info to ascertain why that 
specific failure occurred?  Ideally the operator should be able to 
correlate those exceptions with associated objects, e.g. the instance in 
ERROR state in this case, so that given that failed instance ID they can 
quickly remedy the user's problem without reading megabytes of log 
files.  If there's a way to make this error handling generic across 
services to some extent, that seems like it would be great for operators.


Such a framework might eventually hook into internal ticketing systems, 
maintenance reporting, or provide a starting point for self healing 
mechanisms, but initially the aim would just be to provide the operator 
with the bare minimum info necessary for more efficient break-fix.


It could be a big investment, but it also doesn't seem like "optional" 
functionality from a large operator's perspective.  "Enable debug and 
try again" is just not good enough IMHO.


--
Michael Glasgow



[openstack-dev] [keystone] Prospective RC1 Bugs

2018-08-02 Thread Lance Bragstad
Hey all,

I went through all bugs opened during the Rocky release and came up with
a list of ones that might be good to fix before next week [0]. The good
news is that more than half are in progress and none of them are release
blockers, just ones that would be good to get in.

Let me know if you see anything reported this week that needs to get fixed.

[0] https://bit.ly/2MeXN0L





Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-02 Thread Steve Baker



On 02/08/18 13:03, Alex Schultz wrote:

On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya  wrote:

On 7/6/18 7:02 PM, Ben Nemec wrote:



On 07/05/2018 01:23 PM, Dan Prince wrote:

On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:


I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)


I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures but as things move
towards kubernetes we may wish to change things structurally a bit
too.


True, but the current directory layout was from back when we intended to
support multiple deployment tools in parallel (originally
tripleo-image-elements and puppet).  Since I think it has become clear that
it's impractical to maintain two different technologies to do essentially
the same thing I'm not sure there's a need for it now.  It's also worth
noting that kolla-kubernetes basically died because there weren't enough
people to maintain both deployment methods, so we're not the only ones who
have found that to be true.  If/when we move to kubernetes I would
anticipate it going like the initial containers work did - development for a
couple of cycles, then a switch to the new thing and deprecation of the old
thing, then removal of support for the old thing.

That being said, because of the fact that the service yamls are
essentially an API for TripleO because they're referenced in user


this ^^


resource registries, I'm not sure it's worth the churn to move everything
either.  I think that's going to be an issue either way though, it's just a
question of the scope.  _Something_ is going to move around no matter how we
reorganize so it's a problem that needs to be addressed anyway.


[tl;dr] I can foresee reorganizing that API becoming a nightmare for
maintainers doing backports for Queens (and the LTS downstream release based
on it). Now imagine kubernetes support arriving within the next few years,
before we can let the old API just go...

I have an example [0] to share of all the pain brought by a simple move of
'API defaults' from environments/services-docker to environments/services
plus environments/services-baremetal. Each time a file changed contents at
its old location, like here [1], I had to run a lot of sanity checks to
rebase it properly: checking whether the updated paths in resource
registries were still valid or had been moved as well, then picking the
source of truth for the diverged old vs. changed locations - all that to
lose nothing important in the process.

So I'd say please let's *not* change services' paths/namespaces in the t-h-t
"API" without a real need to do so, when there are no alternatives left.


Ok so it's time to dig this thread back up. I'm currently looking at
the chrony support which will require a new service[0][1]. Rather than
add it under puppet, we'll likely want to leverage ansible. So I guess
the question is where do we put services going forward?  Additionally,
as we look at truly removing the baremetal deployment options and
puppet service deployment, it seems like we need to consolidate under
a single structure.  Given that we don't want to force too much churn,
does this mean that we should align to the docker/services/*.yaml
structure, or should we be proposing a new structure that we can try to
align on?

There is outstanding tech debt around the nested stacks and references
within these services from when we added the container deployments, so it's
something that would be beneficial to start tackling sooner rather
than later.  Personally I think we're always going to have the issue
when we rename files that could have been referenced by custom
templates, but I don't think we can continue to carry the outstanding
tech debt around these static locations.  Should we be investing in
coming up with some sort of mapping that we can use to warn a user
when we move files?


When Stein development starts, the puppet services will have been 
deprecated for an entire cycle. Can I suggest we use this reorganization 
as the time we delete the puppet services files? This would relieve us 
of the burden of maintaining a deployment method that we no 

Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Jeremy Stanley
On 2018-08-02 14:16:10 -0500 (-0500), Sean McGinnis wrote:
[...]
> Interesting... I hadn't looked into Gerrit functionality enough to know about
> these. Looks like this is probably what you are referring to?
> 
> https://gerrit.googlesource.com/plugins/its-storyboard/

Yes, that. Khai Do (zaro) did the bulk of the work implementing it
for us but isn't around as much these days (we miss you!).

> It's been awhile since I did anything significant with Java, but that might be
> an option. Maybe a fun weekend project at least to see what it would take to
> create an its-launchpad plugin.
[...]

Careful; if you let anyone know you've touched a Gerrit plug-in the
requests for more help will never end.
-- 
Jeremy Stanley




Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Jeremy Stanley
On 2018-08-02 14:04:10 -0600 (-0600), Chris Friesen wrote:
[...]
> At a previous Summit[1] there were some operators that said they just always
> ran nova-scheduler with debug logging enabled in order to deal with this
> issue, but that it was a pain to isolate the useful logs from the not-useful
> ones.
[...]

Also, the OpenStack VMT doesn't prioritize information leaks which
are limited to debug-level logging[*], so leaving debug logging
enabled is perhaps more risky if you don't safeguard those logs.

[*] https://security.openstack.org/vmt-process.html#incident-report-taxonomy
-- 
Jeremy Stanley




Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Chris Friesen

On 08/02/2018 01:04 PM, melanie witt wrote:


The problem is an infamous one, which is, your users are trying to boot
instances and they get "No Valid Host" and an instance in ERROR state. They
contact support, and now support is trying to determine why NoValidHost
happened. In the past, they would turn on DEBUG log level on the nova-scheduler,
try another request, and take a look at the scheduler logs.


At a previous Summit[1] there were some operators that said they just always ran 
nova-scheduler with debug logging enabled in order to deal with this issue, but 
that it was a pain to isolate the useful logs from the not-useful ones.


Chris


[1] in a discussion related to 
https://blueprints.launchpad.net/nova/+spec/improve-sched-logging




Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Sean McGinnis
On Thu, Aug 02, 2018 at 05:56:23PM +, Jeremy Stanley wrote:
> On 2018-08-02 10:09:48 -0500 (-0500), Sean McGinnis wrote:
> [...]
> > I was able to find part of how that is implemented in jeepyb:
> > 
> > http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py
> [...]
> 
> As for the nuts and bolts here, the script you found is executed
> from a Gerrit hook every time a change merges:
> 
> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/gerrit/change-merged
> 

Thanks, that's at least a place I can start looking!

> Gerrit hooks are a bit fragile but also terribly opaque (the only
> way to troubleshoot a failure is a Gerrit admin poring over a noisy
> log file on the server looking for a Java backtrace). If you decide
> to do something automated to open bugs/stories when changes merge, I
> recommend a Zuul job. We don't currently have a pipeline definition
> which generates a distinct build set for every merged change (the
> post and promote pipelines do supercedent queuing rather than
> independent queuing these days) but it would be easy to add one that
> does.
> 
> It _could_ also be a candidate for a Gerrit ITS plug-in (there's one
> for SB but not for LP as far as I know), but implementing this would
> mean spending more time in Java than most of us care to experience.

Interesting... I hadn't looked into Gerrit functionality enough to know about
these. Looks like this is probably what you are referring to?

https://gerrit.googlesource.com/plugins/its-storyboard/

It's been awhile since I did anything significant with Java, but that might be
an option. Maybe a fun weekend project at least to see what it would take to
create an its-launchpad plugin.

Thanks for the pointers!



Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-08-02 Thread Jimmy McArthur
The Edge and Containers translations are now live.  As new translations 
become available, we will add them to the page.


https://www.openstack.org/containers/
https://www.openstack.org/edge-computing/

Note that the Chinese translation has not been added to Zanata at this 
time, so I've left the PDF download up on that page.


Thanks everyone and please let me know if you have questions or concerns!

Cheers!
Jimmy

Jimmy McArthur wrote:

Frank,

We expect to have these papers up this afternoon. I'll update this 
thread when we do.


Thanks!
Jimmy

Frank Kloeker wrote:

Hi Sebastian,

okay, it's translated now. In the Edge whitepaper there is a problem with 
XML parsing of the term AT (I don't know how to escape this). Maybe 
you will see the warning during import too.


kind regards

Frank

On 2018-07-30 20:09, Sebastian Marcet wrote:

Hi Frank,
I was double-checking the pot file and realized that the original pot missed
some parts of the original paper (subsections of the paper), apologies
for that.
I just re-uploaded an updated pot file with the missing subsections.

regards

On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker  wrote:


Hi Jimmy,

from the GUI I'll get this link:

https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center 


[1]

The 'paper' version is only in the container whitepaper:


https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack 


[2]

In general there is no group named papers

kind regards

Frank

On 2018-07-30 17:06, Jimmy McArthur wrote:
Frank,

We're getting a 404 when looking for the pot file on the Zanata API:

https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing 


[3]

As a result, we can't pull the po files.  Any idea what might be
happening?

Seeing the same thing with both papers...

Thank you,
Jimmy

Frank Kloeker wrote:
Hi Jimmy,

Korean and German versions are now done in the new format. Can you
check publishing?

thx

Frank

On 2018-07-19 16:47, Jimmy McArthur wrote:
Hi all -

Follow up on the Edge paper specifically:

https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 


[4] This is now available. As I mentioned on IRC this morning, it
should
be VERY close to the PDF.  Probably just needs a quick review.

Let me know if I can assist with anything.

Thank you to i18n team for all of your help!!!

Cheers,
Jimmy

Jimmy McArthur wrote:
Ian raises some great points :) I'll try to address below...

Ian Y. Choi wrote:
Hello,

When I looked at the overall translation source strings for the container
whitepaper, I inferred that the new edge computing whitepaper
source strings would include HTML markup tags.
One of the things I discussed with Ian and Frank in Vancouver is
the expense of recreating PDFs with new translations.  It's
prohibitively expensive for the Foundation as it requires design
resources which we just don't have.  As a result, we created the
Containers whitepaper in HTML, so that it could be easily updated
w/o working with outside design contractors.  I indicated that we
would also be moving the Edge paper to HTML so that we could prevent
that additional design resource cost.
On the other hand, the source strings of edge computing whitepaper
which I18n team previously translated do not include HTML markup
tags, since the source strings are based on just text format.
The version that Akihiro put together was based on the Edge PDF,
which we unfortunately didn't have the resources to implement in the
same format.

I really appreciate Akihiro's work on RST-based support on
publishing translated edge computing whitepapers, since
translators do not have to re-translate all the strings.
I would like to second this. It took a lot of initiative to work on
the RST-based translation.  At the moment, it's just not usable for
the reasons mentioned above.
On the other hand, it seems that the I18n team needs to investigate
translating similar strings in the HTML-based edge computing whitepaper
source strings, which would discourage translators.
Can you expand on this? I'm not entirely clear on why the HTML
based translation is more difficult.

That's my point of view on translating edge computing whitepaper.

For translating the container whitepaper, I want to further ask the
following, since *I18n-based tools*
would mean that translators can test and publish
translated whitepapers locally:

- How to build translated container whitepaper using original
Silverstripe-based repository?
https://docs.openstack.org/i18n/latest/tools.html [5] describes
well how to build translated artifacts for RST-based OpenStack
repositories
but I could not find the way how to build translated container
whitepaper with translated resources on Zanata.
This is a little tricky.  It's possible to set up a local version
of the OpenStack website


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread melanie witt

On Thu, 2 Aug 2018 13:20:43 -0500, Eric Fried wrote:

And we could do the same kind of approach with the non-granular request
groups by reducing the single large SQL statement that is used for all
resources and all traits (and all agg associations) into separate SELECT
statements.

It could be slightly less performance-optimized but more readable and
easier to output debug logs like those above.


Okay, but first we should define the actual problem(s) we're trying to
solve, as Chris says, so we can assert that it's worth the (possible)
perf hit and (definite) dev resources, not to mention the potential for
injecting bugs.


The problem is an infamous one, which is, your users are trying to boot 
instances and they get "No Valid Host" and an instance in ERROR state. 
They contact support, and now support is trying to determine why 
NoValidHost happened. In the past, they would turn on DEBUG log level on 
the nova-scheduler, try another request, and take a look at the 
scheduler logs. They'd see a message, for example, "DiskFilter [start: 
2, end: 0]" (there were 2 candidates before DiskFilter ran and there 
were 0 after it ran) when the scheduling fails, indicating that 
scheduling failed because no computes were reporting enough disk to 
fulfill the request. The key thing here is they could see which resource 
was not available in their cluster.


Now, with placement, all the resources are checked in one go and support 
can't tell which resource or trait was rejected, assuming it wasn't all 
of them. They want to know what resource or trait was rejected in order 
to help them find the problematic compute host, configuration, or other 
issue and fix it.


At present, I think the only approach support could take is to query a 
view of resource providers with their resource and trait availability 
and compare against the request flavor that failed, to figure out which 
resources or traits don't pass what's reported as available.
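
As a rough sketch of what that comparison could look like against the
placement REST API (the endpoint and token are placeholders read from
environment variables, the flavor amounts are just an example, and required
traits are left out for brevity):

  # Illustrative sketch only: list providers and flag any that can't satisfy
  # the requested amounts.  Traits would need a similar check against
  # /resource_providers/{uuid}/traits.
  import os
  import requests

  placement = os.environ['PLACEMENT_ENDPOINT']
  headers = {'X-Auth-Token': os.environ['OS_TOKEN'],
             'OpenStack-API-Version': 'placement 1.17'}
  wanted = {'VCPU': 2, 'MEMORY_MB': 4096, 'DISK_GB': 40}  # from the flavor

  rps = requests.get(placement + '/resource_providers',
                     headers=headers).json()['resource_providers']
  for rp in rps:
      base = placement + '/resource_providers/' + rp['uuid']
      inv = requests.get(base + '/inventories',
                         headers=headers).json()['inventories']
      used = requests.get(base + '/usages',
                          headers=headers).json()['usages']
      for rc, amount in wanted.items():
          if rc not in inv:
              print('%s: no %s inventory' % (rp['name'], rc))
              continue
          capacity = (inv[rc]['total'] - inv[rc].get('reserved', 0)) \
              * inv[rc].get('allocation_ratio', 1.0)
          free = capacity - used.get(rc, 0)
          if free < amount:
              print('%s: %s free %s, need %s' % (rp['name'], free, rc, amount))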


Hope that helps.

-melanie



Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Eric Fried
> And we could do the same kind of approach with the non-granular request
> groups by reducing the single large SQL statement that is used for all
> resources and all traits (and all agg associations) into separate SELECT
> statements.
> 
> It could be slightly less performance-optimized but more readable and
> easier to output debug logs like those above.

Okay, but first we should define the actual problem(s) we're trying to
solve, as Chris says, so we can assert that it's worth the (possible)
perf hit and (definite) dev resources, not to mention the potential for
injecting bugs.

That said, it might be worth doing what you suggest purely for the sake
of being able to read and understand the code...

efried



Re: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-08-02 Thread Jill Rouleau
On Thu, 2018-08-02 at 13:30 -0400, Pradeep Kilambi wrote:
> 
> 
> On Wed, Aug 1, 2018 at 6:06 PM Jill Rouleau  wrote:
> > On Tue, 2018-07-31 at 07:38 -0400, Pradeep Kilambi wrote:
> > > 
> > > 
> > > On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau 
> > wrote:
> > > > On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote:
> > > > > 
> > > > > 
> > > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz  > .com
> > > > >
> > > > > wrote:
> > > > > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr  > om>
> > > > > > wrote:
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi  > edha
> > > > t.co
> > > > > > m> wrote:
> > > > > > >>
> > > > > > >> Your fellow reporter took a break from writing, but is
> > now
> > > > back
> > > > > > on his
> > > > > > >> pen.
> > > > > > >>
> > > > > > >> Welcome to the twenty-fifth edition of a weekly update in
> > > > TripleO
> > > > > > world!
> > > > > > >> The goal is to provide a short reading (less than 5
> > minutes)
> > > > to
> > > > > > learn
> > > > > > >> what's new this week.
> > > > > > >> Any contributions and feedback are welcome.
> > > > > > >> Link to the previous version:
> > > > > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html
> > > > > > >>
> > > > > > >> +-+
> > > > > > >> | General announcements |
> > > > > > >> +-+
> > > > > > >>
> > > > > > >> +--> Rocky Milestone 3 is next week. After, any feature
> > code
> > > > will
> > > > > > require
> > > > > > >> Feature Freeze Exception (FFE), asked on the mailing-
> > list.
> > > > We'll
> > > > > > enter a
> > > > > > >> bug-fix only and stabilization period, until we can push
> > the
> > > > > > first stable
> > > > > > >> version of Rocky.
> > > > > > >
> > > > > > >
> > > > > > > Hey guys,
> > > > > > >
> > > > > > >   I would like to ask for FFE for backup and restore,
> > where we
> > > > > > ended up
> > > > > > > deciding where is the best place for the code base for
> > this
> > > > > > project (please
> > > > > > > see [1] for details). We believe that B support for
> > > > overcloud
> > > > > > control
> > > > > > > plane will be good addition to a rocky release, but we
> > started
> > > > > > with this
> > > > > > > initiative quite late indeed. The final result should the
> > > > support
> > > > > > in
> > > > > > > openstack client, where "openstack overcloud
> > (backup|restore)"
> > > > > > would work as
> > > > > > > a charm. Thanks in advance for considering this feature.
> > > > > > >
> > > > > > 
> > > > > > Was there a blueprint/spec for this effort?  Additionally do
> > we
> > > > have
> > > > > > a
> > > > > > list of the outstanding work required for this? If it's just
> > > > these
> > > > > > two
> > > > > > playbooks, it might be ok for an FFE. But if there's
> > additional
> > > > > > tripleoclient related changes, I wouldn't necessarily feel
> > > > > > comfortable
> > > > > > with these unless we have a complete list of work.  Just as
> > a
> > > > side
> > > > > > note, I'm not sure putting these in tripleo-common is going
> > to
> > > > be
> > > > > > the
> > > > > > ideal place for this.
> > > > 
> > > > Was it this review? https://review.openstack.org/#/c/582453/
> > > > 
> > > > For Stein we'll have an ansible role[0] and playbook repo[1]
> > where
> > > > these
> > > > types of tasks should live.
> > > > 
> > > > [0] https://github.com/openstack/ansible-role-openstack-operations
> > > > [1] https://review.openstack.org/#/c/583415/
> > > Thanks Jill! The issue is, we want to be able to backport this to
> > > Queens once merged. With the new repos you're mentioning would
> > this be
> > > possible? If no, then this wont work for us unfortunately.
> > > 
> > 
> > We wouldn't backport the new packages to Queens, however the repos
> > will
> > be on github and available to clone and use.  This would be far
> > preferable than adding them to tripleo-common so late in the rocky
> > cycle
> > then having to break them back out right away in stein.
> 
> Understood. To extend this further, we will need to integrate these
> into tripleoclient. That way a user can just run $ openstack overcloud
> backup - and get all the data backed up instead of running the
> playbooks manually. Would this be possible with keeping these in a
> separate tripleo ansible repo? How do we currently handle undercloud
> backup? Where do we currently keep those playbooks? 
>  

We're not currently providing backup playbooks; this is a new feature.
So it would be great if there were a spec we could organize around.

Cedric is working on a patch for running ansible playbooks via
tripleoclient that should help:  https://review.openstack.org/#/c/586538/



> > 
> > > 
> > >  
> > > > 
> > > > 
> > > > > 
> > > > > Thanks Alex. For Rocky, if we can ship the playbooks with
> > relevant
> > > > > docs we should be good. We will integrated with client in
> > 

Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Jeremy Stanley
On 2018-08-02 10:09:48 -0500 (-0500), Sean McGinnis wrote:
[...]
> I was able to find part of how that is implemented in jeepyb:
> 
> http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py
[...]

As for the nuts and bolts here, the script you found is executed
from a Gerrit hook every time a change merges:

https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/gerrit/change-merged

Gerrit hooks are a bit fragile but also terribly opaque (the only
way to troubleshoot a failure is a Gerrit admin poring over a noisy
log file on the server looking for a Java backtrace). If you decide
to do something automated to open bugs/stories when changes merge, I
recommend a Zuul job. We don't currently have a pipeline definition
which generates a distinct build set for every merged change (the
post and promote pipelines do supercedent queuing rather than
independent queuing these days) but it would be easy to add one that
does.

It _could_ also be a candidate for a Gerrit ITS plug-in (there's one
for SB but not for LP as far as I know), but implementing this would
mean spending more time in Java than most of us care to experience.
-- 
Jeremy Stanley




Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Jay Pipes

On 08/02/2018 01:40 PM, Eric Fried wrote:

Jay et al-


And what I'm referring to is doing a single query per "related
resource/trait placement request group" -- which is pretty much what
we're heading towards anyway.

If we had a request for:

GET /allocation_candidates?
  resources0=VCPU:1&
  required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX&
  resources1=MEMORY_MB:1024

and logged something like this:

DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU,
requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches

DEBUG: [placement request ID XXX] request group 2 of 2 for 1024
MEMORY_MB returned 3 matches

that would at least go a step towards being more friendly for debugging
a particular request's results.


Well, that's easy [1] (but I'm sure you knew that when you suggested
it). Produces logs like [2].

This won't be backportable, I'm afraid.

[1] https://review.openstack.org/#/c/588350/
[2] http://paste.openstack.org/raw/727165/


Yes.

And we could do the same kind of approach with the non-granular request 
groups by reducing the single large SQL statement that is used for all 
resources and all traits (and all agg associations) into separate SELECT 
statements.


It could be slightly less performance-optimized but more readable and 
easier to output debug logs like those above.


-jay



Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Eric Fried
I should have made it clear that this is a tiny incremental improvement,
to a code path that almost nobody is even going to see until Stein. In
no way was it intended to close this topic.

Thanks,
efried

On 08/02/2018 12:40 PM, Eric Fried wrote:
> Jay et al-
> 
>> And what I'm referring to is doing a single query per "related
>> resource/trait placement request group" -- which is pretty much what
>> we're heading towards anyway.
>>
>> If we had a request for:
>>
>> GET /allocation_candidates?
>>  resources0=VCPU:1&
>>  required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX&
>>  resources1=MEMORY_MB:1024
>>
>> and logged something like this:
>>
>> DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU,
>> requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches
>>
>> DEBUG: [placement request ID XXX] request group 2 of 2 for 1024
>> MEMORY_MB returned 3 matches
>>
>> that would at least go a step towards being more friendly for debugging
>> a particular request's results.
> 
> Well, that's easy [1] (but I'm sure you knew that when you suggested
> it). Produces logs like [2].
> 
> This won't be backportable, I'm afraid.
> 
> [1] https://review.openstack.org/#/c/588350/
> [2] http://paste.openstack.org/raw/727165/
> 
> 



Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Eric Fried
Jay et al-

> And what I'm referring to is doing a single query per "related
> resource/trait placement request group" -- which is pretty much what
> we're heading towards anyway.
> 
> If we had a request for:
> 
> GET /allocation_candidates?
>  resources0=VCPU:1&
>  required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX&
>  resources1=MEMORY_MB:1024
> 
> and logged something like this:
> 
> DEBUG: [placement request ID XXX] request group 1 of 2 for 1 PCPU,
> requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches
> 
> DEBUG: [placement request ID XXX] request group 2 of 2 for 1024
> MEMORY_MB returned 3 matches
> 
> that would at least go a step towards being more friendly for debugging
> a particular request's results.

Well, that's easy [1] (but I'm sure you knew that when you suggested
it). Produces logs like [2].

This won't be backportable, I'm afraid.

[1] https://review.openstack.org/#/c/588350/
[2] http://paste.openstack.org/raw/727165/



Re: [openstack-dev] [tripleo] The Weekly Owl - 25th Edition

2018-08-02 Thread Pradeep Kilambi
On Wed, Aug 1, 2018 at 6:06 PM Jill Rouleau  wrote:

> On Tue, 2018-07-31 at 07:38 -0400, Pradeep Kilambi wrote:
> >
> >
> > On Mon, Jul 30, 2018 at 2:17 PM Jill Rouleau  wrote:
> > > On Mon, 2018-07-30 at 11:35 -0400, Pradeep Kilambi wrote:
> > > >
> > > >
> > > > On Mon, Jul 30, 2018 at 10:42 AM Alex Schultz  > > >
> > > > wrote:
> > > > > On Mon, Jul 30, 2018 at 8:32 AM, Martin Magr 
> > > > > wrote:
> > > > > >
> > > > > >
> > > > > > On Tue, Jul 17, 2018 at 6:12 PM, Emilien Macchi  > > t.co
> > > > > m> wrote:
> > > > > >>
> > > > > >> Your fellow reporter took a break from writing, but is now
> > > back
> > > > > on his
> > > > > >> pen.
> > > > > >>
> > > > > >> Welcome to the twenty-fifth edition of a weekly update in
> > > TripleO
> > > > > world!
> > > > > >> The goal is to provide a short reading (less than 5 minutes)
> > > to
> > > > > learn
> > > > > >> what's new this week.
> > > > > >> Any contributions and feedback are welcome.
> > > > > >> Link to the previous version:
> > > > > >> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131426.html
> > > > > >>
> > > > > >> +-+
> > > > > >> | General announcements |
> > > > > >> +-+
> > > > > >>
> > > > > >> +--> Rocky Milestone 3 is next week. After, any feature code
> > > will
> > > > > require
> > > > > >> Feature Freeze Exception (FFE), asked on the mailing-list.
> > > We'll
> > > > > enter a
> > > > > >> bug-fix only and stabilization period, until we can push the
> > > > > first stable
> > > > > >> version of Rocky.
> > > > > >
> > > > > >
> > > > > > Hey guys,
> > > > > >
> > > > > >   I would like to ask for FFE for backup and restore, where we
> > > > > ended up
> > > > > > deciding where is the best place for the code base for this
> > > > > project (please
> > > > > > see [1] for details). We believe that B support for
> > > overcloud
> > > > > control
> > > > > > plane will be good addition to a rocky release, but we started
> > > > > with this
> > > > > > initiative quite late indeed. The final result should the
> > > support
> > > > > in
> > > > > > openstack client, where "openstack overcloud (backup|restore)"
> > > > > would work as
> > > > > > a charm. Thanks in advance for considering this feature.
> > > > > >
> > > > >
> > > > > Was there a blueprint/spec for this effort?  Additionally do we
> > > have
> > > > > a
> > > > > list of the outstanding work required for this? If it's just
> > > these
> > > > > two
> > > > > playbooks, it might be ok for an FFE. But if there's additional
> > > > > tripleoclient related changes, I wouldn't necessarily feel
> > > > > comfortable
> > > > > with these unless we have a complete list of work.  Just as a
> > > side
> > > > > note, I'm not sure putting these in tripleo-common is going to
> > > be
> > > > > the
> > > > > ideal place for this.
> > >
> > > Was it this review? https://review.openstack.org/#/c/582453/
> > >
> > > For Stein we'll have an ansible role[0] and playbook repo[1] where
> > > these
> > > types of tasks should live.
> > >
> > > [0] https://github.com/openstack/ansible-role-openstack-operations
> > > [1] https://review.openstack.org/#/c/583415/
> > Thanks Jill! The issue is, we want to be able to backport this to
> > Queens once merged. With the new repos you're mentioning would this be
> > possible? If no, then this wont work for us unfortunately.
> >
>
> We wouldn't backport the new packages to Queens, however the repos will
> be on github and available to clone and use.  This would be far
> preferable than adding them to tripleo-common so late in the rocky cycle
> then having to break them back out right away in stein.
>


Understood. To extend this further, we will need to integrate these into
tripleoclient. That way a user can just run $ openstack overcloud backup -
and get all the data backed up instead of running the playbooks manually.
Would this be possible with keeping these in a separate tripleo ansible
repo? How do we currently handle undercloud backup? Where do we currently
keep those playbooks?


>
> >
> >
> > >
> > >
> > > >
> > > > Thanks Alex. For Rocky, if we can ship the playbooks with relevant
> > > > docs we should be good. We will integrated with client in Stein
> > > > release with restore logic included. Regarding putting tripleo-
> > > common,
> > > > we're open to suggestions. I think Dan just submitted the review
> > > so we
> > > > can get some eyes on the playbooks. Where do you suggest is better
> > > > place for these instead?
> > > >
> > > > >
> > > > > Thanks,
> > > > > -Alex
> > > > >
> > > > > > Regards,
> > > > > > Martin
> > > > > >
> > > > > > [1] https://review.openstack.org/#/c/582453/
> > > > > >
> > > > > >>
> > > > > >> +--> Next PTG will be in Denver, please propose topics:
> > > > > >> https://etherpad.openstack.org/p/tripleoci-ptg-stein
> > > > > >> +--> Multiple squads are currently brainstorming a framework
> > > to
> > > > > 

Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Jay S Bryant



On 8/2/2018 10:59 AM, Radomir Dopieralski wrote:
To be honest, I don't see much point in automatically creating bugs 
that nobody is going to look at. When you implement a new feature, 
it's up to you to make it available in Horizon and CLI and wherever 
else, since the people working there simply don't have the time to 
work on it. Creating a ticket will not magically make someone do that 
work for you. We are happy to assist with this, but that's it. 
Anything else is going to get added whenever someone has any free 
cycles, or it becomes necessary for some reason (like breaking 
compatibility). That's the current reality, and no automation is going 
to help with it.


I disagree with this view.  In the past there have been companies that 
have had people working on Horizon to keep it implemented for their 
purposes.  Having these bugs available would have made their work easier.  
I also know that there are people on the OSC team that just work on 
keeping functions implemented and up to date.


At a minimum, having these bugs automatically opened would help when 
someone is trying to figure out why the new function they are looking 
for is not available in OSC or Horizon.  A search would turn up the fact 
that it hasn't been implemented yet.  Currently, we frequently have the 
discussion 'Has that been implemented in Horizon yet?'  This would 
reduce the confusion around that subject.


So, I support trying to make this happen as I feel it moves us towards a 
better UX for OpenStack.


On Thu, Aug 2, 2018 at 5:09 PM Sean McGinnis > wrote:


I'm wondering if someone on the infra team can give me some
pointers on how to
approach something, and looking for any general feedback as well.

Background
==
We've had things like the DocImpact tag that could be added to
commit messages
that would tie into some automation to create a launchpad bug when
that commit
merged. While we had a larger docs team and out-of-tree docs, I
think this
really helped us make sure we didn't lose track of needed
documentation
updates.

I was able to find part of how that is implemented in jeepyb:


http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py

Current Challenge
=
Similar to the need to follow up with documentation, I've seen a
lot of cases
where projects have added features or made other changes that
impact downstream
consumers of that project. Most often, I've seen cases where
something like
python-cinderclient adds some functionality, but it is on projects
like Horizon
or python-openstackclient to proactively go out and discover those
changes.

Not only just seeking out those changes, but also evaluating
whether a given
change should have any impact on their project. So we've ended up
in a lot of
cases where either new functionality isn't made available through
these
interfaces until a cycle or two later, or probably worse, cases
where something
is now broken with no one aware of it until an actual end user
hits a problem
and files a bug.

ClientImpact Plan
=
I've run this by a few people and it seems to have some support.
Or course I'm
open to any other suggestions.

What I would like to do is add a ClientImpact tag handling that
could be added
very similarly to DocImpact. The way I see it working is it would
work in much
the same way where projects can use this to add the tag to a
commit message
when they know it is something that will require additional work
in OSC or
Horizon (or others). Then when that commit merges, automation
would create a
launchpad bug and/or Storyboard story, including a default set of
client
projects. Perhaps we can find some way to make those impacted clients
configurable by source project, but that could be a follow-on
optimization.

I am concerned that this could create some extra overhead for
these projects.
But my hope is it would be a quick evaluation by a bug triager in
those
projects where they can, hopefully, quickly determine if a change
does not in
fact impact them and just close the ones they don't think require
any follow on
work.

I do hope that this will save some time and speed things up
overall for these
projects to be notified that there is something that needs their
attention
without needing someone to take the time to actively go out and
discover that.

Help Needed
===
From the bits I've found for the DocImpact handling, it looks like
it should
not be too much effort to implement the logic to handle a
ClientImpact flag.
But I have not been able to find all the moving parts that work
together to
perform that automation.

If anyone has 

[openstack-dev] [all][api] POST /api-sig/news

2018-08-02 Thread Michael McCune
Greetings OpenStack community,

Today's meeting was primarily focused around two topics: the IETF[7]
draft proposal for Best Practices when building HTTP protocols[8], and
the upcoming OpenStack Project Teams Gathering (PTG)[9].

The group had taken a collective action to read the aforementioned
draft[8], and as such we were well prepared to discuss its nuances.
For the most part, we agreed that the draft is a good preparatory text
when approaching HTTP APIs and that we should provide a link to it
from the guidelines. Although there are a few areas that we identified
as points of discussion regarding the text of the draft, on balance it
was seen as helpful to the OpenStack community and consistent with our
established guidelines.

On the topic of the PTG, the group has started planning for the event
and is in the early stages of gathering content. We will soon have an
etherpad available for topic collection and as an added bonus mordred
himself made a pronouncement about the API-SIG meeting being a
priority in his schedule for this PTG. We hope to see you all there!

The OpenStack infra team will be doing the final rename from API-WG to
API-SIG this Friday. Although there are not expected to be any issues
from this rename, we will be updating documentation references, and
appreciate any help in chasing down bugs.

There were no new guidelines to discuss, nor bugs that have arisen
since last week.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

* None

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://ietf.org/
[8] https://tools.ietf.org/html/draft-ietf-httpbis-bcp56bis-06
[9] https://www.openstack.org/ptg/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Guests not getting metadata in a Cellsv2 deploy

2018-08-02 Thread Liam Young
Hi,

I have a fresh Pike deployment and the guests are not getting metadata. To
investigate it further it would really help me to understand what the
metadata flow is supposed to look like.

In my deployment the guest receives a 404 when hitting
http://169.254.169.254/latest/meta-data. I have added some logging to
expose the messages passing via amqp and I see the nova-api-metadata
service making a call to the super-conductor asking for an InstanceMapping.
The super-conductor sends a reply detailing which cell the instance is in
and the urls for both mysql and rabbit. The nova-api-metadata service then
sends a second message to the super-conductor, this time asking for
an Instance obj. The super-conductor fails to find the instance and returns
a failure with an "InstanceNotFound: Instance  could not be found"
message, and the nova-api-metadata service then sends a 404 to the original
requester.

I think the super-conductor is looking in the wrong database for the
instance information. I believe it is looking in cell0 when it should
actually be connecting to an entirely different instance of mysql which is
associated with the cell that the instance is in.
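
For reference, here is how I have been sanity-checking the mappings so far. A
quick check is `nova-manage cell_v2 list_cells --verbose` on a node with the
[api_database] credentials; the rough Python sketch below does roughly the
same by hand. It is only an illustration, and the connection URL is made up:

    # Sketch only: dump the cell/instance mappings from the nova_api database.
    # The instance's cell should point at the real cell DB, not the cell0 one.
    from sqlalchemy import create_engine, text

    # Illustrative URL -- use the [api_database]/connection value from nova.conf.
    engine = create_engine("mysql+pymysql://nova:secret@controller/nova_api")

    with engine.connect() as conn:
        for row in conn.execute(text(
                "SELECT name, database_connection FROM cell_mappings")):
            print(row.name, row.database_connection)
        for row in conn.execute(text(
                "SELECT instance_uuid, cell_id FROM instance_mappings LIMIT 5")):
            print(row.instance_uuid, row.cell_id)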

Should the super-conductor even be trying to retrieve the instance
information or should the nova-api-metadata service actually be messaging
the conductor in the compute cell?

Any pointers gratefully received!
Thanks
Liam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Setting-up NoVNC 1.0.0 with nova

2018-08-02 Thread Stephen Finucane
On Sun, 2018-05-20 at 09:33 -0700, Matt Riedemann wrote:
> On 5/20/2018 6:37 AM, Thomas Goirand wrote:
> > The novnc package in Debian and Ubuntu is getting very old. So I thought
> > about upgrading to 1.0.0, which has lots of very nice newer features,
> > like the full screen mode, and so on.
> > 
> > All seemed to work, however, when trying to connect to the console of a
> > VM, NoVNC attempts to connect to https://example.com:6080/websockify and
> > then fails (with a 404).
> > 
> > So I was wondering: what's missing in my setup so that there's a
> > /websockify URL? Is there some missing code in the nova-novncproxy so
> > that it would forward this URL to /usr/bin/websockify? If so, has anyone
> > started working on it?
> > 
> > Also, what's the status of NoVNC with Python 3? I saw lots of print
> > statements which are easy to fix, though I even wonder if the code in
> > the python-novnc package is useful. Who's using it? Nova-novncproxy?
> > That's unlikely, since I didn't package a Python 3 version for it.
> 
> Stephen Finucane (stephenfin on irc) would know best at this point, but 
> I know he ran into some issues with configuring nova when using novnc 
> 1.0.0, so check your novncproxy_base_url config option value:
> 
> https://docs.openstack.org/nova/latest/configuration/config.html#vnc.novncproxy_base_url
> 
> Specifically:
> 
> "If using noVNC >= 1.0.0, you should use vnc_lite.html instead of 
> vnc_auto.html."

We've got a patch up to resolve this in DevStack [1]. As Matt notes,
the issue is because a path was renamed in noVNC 1.0.0. You could
resolve this by including a symlink to the path in your package but it
might be better long-term to simply ensure the deployment tools take
care of this. We can eventually change the default in nova once noVNC
1.0.0 gains enough momentum. There's a WIP patch up for this too [2].

Let me know if you need more info,
Stephen

[1] https://review.openstack.org/#/c/550172/6
[2] https://review.openstack.org/#/c/550173/4


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Joshua Harlow

Storage space is a concern; really?

If it really is, then keep X of them for some definition of X (days, 
number, hours, other)? Offload the snapshot asynchronously if 
snapshotting during requests is a problem.


We have the power! :)

Chris Friesen wrote:

On 08/01/2018 11:34 PM, Joshua Harlow wrote:


And I would be able to say request the explanation for a given request id
(historical even) so that analysis could be done post-change and
pre-change (say
I update the algorithm for selection) so that the effects of
alternations to
said decisions could be determined.


This would require storing a snapshot of all resources prior to
processing every request...seems like that could add overhead and
increase storage consumption.

Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-08-02 Thread Jimmy McArthur

Frank,

We expect to have these papers up this afternoon. I'll update this 
thread when we do.


Thanks!
Jimmy

Frank Kloeker wrote:

Hi Sebastian,

okay, it's translated now. In the Edge whitepaper there is a problem with 
the XML parsing of the term AT; I don't know how to escape this. Maybe you 
will see the warning during import too.


kind regards

Frank

On 2018-07-30 20:09, Sebastian Marcet wrote:

Hi Frank,
I was double-checking the pot file and realized that the original pot missed
some parts of the original paper (subsections of the paper); apologies
for that.
I just re-uploaded an updated pot file with the missing subsections.

regards

On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker  wrote:


Hi Jimmy,

from the GUI I'll get this link:

https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center 


[1]

The paper version is only in the container whitepaper:


https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack 


[2]

In general, there is no group named "papers".

kind regards

Frank

On 2018-07-30 17:06, Jimmy McArthur wrote:
Frank,

We're getting a 404 when looking for the pot file on the Zanata API:

https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing 


[3]

As a result, we can't pull the po files.  Any idea what might be
happening?

Seeing the same thing with both papers...

Thank you,
Jimmy

Frank Kloeker wrote:
Hi Jimmy,

Korean and German version are now done on the new format. Can you
check publishing?

thx

Frank

On 2018-07-19 16:47, Jimmy McArthur wrote:
Hi all -

Follow up on the Edge paper specifically:

https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192 


[4] This is now available. As I mentioned on IRC this morning, it
should
be VERY close to the PDF.  Probably just needs a quick review.

Let me know if I can assist with anything.

Thank you to i18n team for all of your help!!!

Cheers,
Jimmy

Jimmy McArthur wrote:
Ian raises some great points :) I'll try to address below...

Ian Y. Choi wrote:
Hello,

When I saw the overall translation source strings on the container
whitepaper, I inferred that the new edge computing whitepaper
source strings would include HTML markup tags.
One of the things I discussed with Ian and Frank in Vancouver is
the expense of recreating PDFs with new translations.  It's
prohibitively expensive for the Foundation as it requires design
resources which we just don't have.  As a result, we created the
Containers whitepaper in HTML, so that it could be easily updated
w/o working with outside design contractors.  I indicated that we
would also be moving the Edge paper to HTML so that we could prevent
that additional design resource cost.
On the other hand, the source strings of edge computing whitepaper
which I18n team previously translated do not include HTML markup
tags, since the source strings are based on just text format.
The version that Akihiro put together was based on the Edge PDF,
which we unfortunately didn't have the resources to implement in the
same format.

I really appreciate Akihiro's work on RST-based support on
publishing translated edge computing whitepapers, since
translators do not have to re-translate all the strings.
I would like to second this. It took a lot of initiative to work on
the RST-based translation.  At the moment, it's just not usable for
the reasons mentioned above.
On the other hand, it seems that I18n team needs to investigate on
translating similar strings of HTML-based edge computing whitepaper
source strings, which would discourage translators.
Can you expand on this? I'm not entirely clear on why the HTML
based translation is more difficult.

That's my point of view on translating edge computing whitepaper.

For translating container whitepaper, I want to further ask the
followings since *I18n-based tools*
would mean for translators that translators can test and publish
translated whitepapers locally:

- How to build translated container whitepaper using original
Silverstripe-based repository?
https://docs.openstack.org/i18n/latest/tools.html [5] describes
well how to build translated artifacts for RST-based OpenStack
repositories
but I could not find the way how to build translated container
whitepaper with translated resources on Zanata.
This is a little tricky.  It's possible to set up a local version
of the OpenStack website

(https://github.com/OpenStackweb/openstack-org/blob/master/installation.md 


[6]).  However, we have to manually ingest the po files as they are
completed and then push them out to production, so that wouldn't do
much to help with your local build.  I'm open to suggestions on how
we can make this process easier for the i18n team.

Thank you,
Jimmy

With many thanks,

/Ian

Jimmy McArthur wrote on 7/17/2018 11:01 PM:
Frank,

I'm sorry to hear about the displeasure around 

Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Sean McGinnis
On Thu, Aug 02, 2018 at 05:59:20PM +0200, Radomir Dopieralski wrote:
> To be honest, I don't see much point in automatically creating bugs that
> nobody is going to look at. When you implement a new feature, it's up to
> you to make it available in Horizon and CLI and wherever else, since the
> people working there simply don't have the time to work on it. Creating a
> ticket will not magically make someone do that work for you. We are happy
> to assist with this, but that's it. Anything else is going to get added
> whenever someone has any free cycles, or it becomes necessary for some
> reason (like breaking compatibility). That's the current reality, and no
> automation is going to help with it.
> 

I don't think that's universally true with these projects. There are some on
these teams that are interested in implementing support for new features and
keeping existing things working right.

The reality for most of this then is that new features won't be available and users
will move away from using something like Horizon for whatever else comes along
that will give them access to what they need. I know there are very few
developers focused on Cinder that also have the skillset to add functionality
to Horizon.

I agree ideally someone would work on things wherever they are needed, but I
think there is a barrier with skills and priorities to make that happen. And at
least in the case of Cinder, neither Horizon nor OpenStackClient is required.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova] [os-vif] [vif_plug_ovs] Support for OVS DB tcp socket communication.

2018-08-02 Thread Stephen Finucane
On Wed, 2018-07-25 at 15:22 +0530, pranab boruah wrote:
> Hello folks,
> I have filed a bug in os-vif: 
> https://bugs.launchpad.net/os-vif/+bug/1778724 and working on a
> patch. Any feedback/comments from you guys would be extremely
> helpful. 
> Bug details:
> OVS DB server has the feature of listening over a TCP socket for
> connections rather than just on the unix domain socket. [0]
> 
> If the OVS DB server is listening over a TCP socket, then the ovs-
> vsctl commands should include the ovsdb_connection parameter:
> # ovs-vsctl --db=tcp:IP:PORT ...
> eg:
> # ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0
> Neutron supports running the ovs-vsctl commands with the
> ovsdb_connection parameter. The ovsdb_connection parameter is
> configured in openvswitch_agent.ini file. [1]
> While adding a vif to the ovs bridge(br-int), Nova(os-vif) invokes
> the ovs-vsctl command. Today, there is no support to pass the
> ovsdb_connection parameter while invoking the ovs-vsctl command. The
> support should be added. This would enhance the functionality of os-
> vif, since it would support a scenario when OVS DB server is
> listening on a TCP socket connection and on functional parity with
> Neutron.
> [0] http://www.openvswitch.org/support/dist-docs/ovsdb-server.1.html
> [1] 
> https://docs.openstack.org/neutron/pike/configuration/openvswitch-agent.html
> 
> 
> TIA,Pranab

Perhaps not the same thing, but would the patches mentioned in the
below mail work for this too?

  
http://lists.openstack.org/pipermail/openstack-dev/2018-March/127907.html
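
In any case, the change being asked for looks small. As a rough sketch of the
idea only (made-up names and config handling, not the actual os-vif code),
it would be something like:

    # Sketch: prepend --db=<ovsdb_connection> to ovs-vsctl invocations when a
    # TCP endpoint is configured, otherwise keep the unix-socket default.
    import subprocess

    OVSDB_CONNECTION = "tcp:169.254.1.1:6640"  # would come from config, not hardcoded

    def ovs_vsctl(args, ovsdb_connection=None, timeout=120):
        cmd = ["ovs-vsctl", "--timeout=%d" % timeout]
        if ovsdb_connection:
            # e.g. ovs-vsctl --db=tcp:169.254.1.1:6640 add-port br-int eth0
            cmd.append("--db=%s" % ovsdb_connection)
        cmd.extend(args)
        return subprocess.check_output(cmd)

    # ovs_vsctl(["add-port", "br-int", "eth0"], ovsdb_connection=OVSDB_CONNECTION)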

Cheers,
Stephen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Radomir Dopieralski
To be honest, I don't see much point in automatically creating bugs that
nobody is going to look at. When you implement a new feature, it's up to
you to make it available in Horizon and CLI and wherever else, since the
people working there simply don't have the time to work on it. Creating a
ticket will not magically make someone do that work for you. We are happy
to assist with this, but that's it. Anything else is going to get added
whenever someone has any free cycles, or it becomes necessary for some
reason (like breaking compatibility). That's the current reality, and no
automation is going to help with it.

On Thu, Aug 2, 2018 at 5:09 PM Sean McGinnis  wrote:

> I'm wondering if someone on the infra team can give me some pointers on
> how to
> approach something, and looking for any general feedback as well.
>
> Background
> ==
> We've had things like the DocImpact tag that could be added to commit
> messages
> that would tie into some automation to create a launchpad bug when that
> commit
> merged. While we had a larger docs team and out-of-tree docs, I think this
> really helped us make sure we didn't lose track of needed documentation
> updates.
>
> I was able to find part of how that is implemented in jeepyb:
>
>
> http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py
>
> Current Challenge
> =
> Similar to the need to follow up with documentation, I've seen a lot of
> cases
> where projects have added features or made other changes that impact
> downstream
> consumers of that project. Most often, I've seen cases where something like
> python-cinderclient adds some functionality, but it is on projects like
> Horizon
> or python-openstackclient to proactively go out and discover those changes.
>
> Not only just seeking out those changes, but also evaluating whether a
> given
> change should have any impact on their project. So we've ended up in a lot
> of
> cases where either new functionality isn't made available through these
> interfaces until a cycle or two later, or probably worse, cases where
> something
> is now broken with no one aware of it until an actual end user hits a
> problem
> and files a bug.
>
> ClientImpact Plan
> =
> I've run this by a few people and it seems to have some support. Of course
> I'm
> open to any other suggestions.
>
> What I would like to do is add a ClientImpact tag handling that could be
> added
> very similarly to DocImpact. The way I see it working is it would work in
> much
> the same way where projects can use this to add the tag to a commit
> message
> when they know it is something that will require additional work in OSC or
> Horizon (or others). Then when that commit merges, automation would create
> a
> launchpad bug and/or Storyboard story, including a default set of client
> projects. Perhaps we can find some way to make those impacted clients
> configurable by source project, but that could be a follow-on optimization.
>
> I am concerned that this could create some extra overhead for these
> projects.
> But my hope is it would be a quick evaluation by a bug triager in those
> projects where they can, hopefully, quickly determine if a change does not
> in
> fact impact them and just close the ones they don't think require any
> follow on
> work.
>
> I do hope that this will save some time and speed things up overall for
> these
> projects to be notified that there is something that needs their attention
> without needing someone to take the time to actively go out and discover
> that.
>
> Help Needed
> ===
> From the bits I've found for the DocImpact handling, it looks like it
> should
> not be too much effort to implement the logic to handle a ClientImpact
> flag.
> But I have not been able to find all the moving parts that work together to
> perform that automation.
>
> If anyone has any background knowledge on how DocImpact is implemented and
> can
> give me a few pointers, I think I should be able to take it from there to
> get
> this implemented. Or if there is someone that knows this well and is
> interested
> in working on some of the implementation, that would be very welcome too!
>
> Sean
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Next bug day is Tuesday August 28th! Vote for timeslot!

2018-08-02 Thread Michael Turek

Hey all!

Bug day was pretty productive today and we decided to schedule another 
one for the end of this month, on Tuesday the 28th. For details see the 
etherpad for the event [0]


Also, since we're changing things up, we decided to put up a vote 
for the timeslot [1].


If you have any questions or suggestions on how to improve bug day, I am 
all ears! Hope to see you there!


Thanks,
Mike Turek 

[0] https://etherpad.openstack.org/p/ironic-bug-day-august-28-2018
[1] https://doodle.com/poll/ef4m9zmacm2ey7ce


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Add SRIOV mirroring support to Tap as a Service (https://review.openstack.org/#/c/584514/)

2018-08-02 Thread Deepak Tiwari
Hi TaaS Dev team,



This mail is regarding the comment to move the changes out of stable/ocata
branch. I would like to explain the reasons why we require these changes in
Ocata branch.



We intend to deploy TaaS-plugin with Openstack-helm (OSH) charts in our
labs. However OSH as of now supports only Ocata. So we need to put in the
changes to Ocata branch of TaaS to enable us to deploy and test it. Of
course in parallel we are working on a commit for master branch as well,
however we require this feature in ocata branch also.



Due to the fact that we are adding a new SRIOV driver, with no changes to the
existing OVS driver and no impact to the TaaS API or DB/data model,
the existing functionality shouldn't be impacted by this change.



Please provide your go ahead for the same



Br, Deepak
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][horizon][osc] ClientImpact tag automation

2018-08-02 Thread Sean McGinnis
I'm wondering if someone on the infra team can give me some pointers on how to
approach something, and looking for any general feedback as well.

Background
==
We've had things like the DocImpact tag that could be added to commit messages
that would tie into some automation to create a launchpad bug when that commit
merged. While we had a larger docs team and out-of-tree docs, I think this
really helped us make sure we didn't lose track of needed documentation
updates.

I was able to find part of how that is implemented in jeepyb:

http://git.openstack.org/cgit/openstack-infra/jeepyb/tree/jeepyb/cmd/notify_impact.py

Current Challenge
=
Similar to the need to follow up with documentation, I've seen a lot of cases
where projects have added features or made other changes that impact downstream
consumers of that project. Most often, I've seen cases where something like
python-cinderclient adds some functionality, but it is on projects like Horizon
or python-openstackclient to proactively go out and discover those changes.

Not only just seeking out those changes, but also evaluating whether a given
change should have any impact on their project. So we've ended up in a lot of
cases where either new functionality isn't made available through these
interfaces until a cycle or two later, or probably worse, cases where something
is now broken with no one aware of it until an actual end user hits a problem
and files a bug.

ClientImpact Plan
=
I've run this by a few people and it seems to have some support. Of course I'm
open to any other suggestions.

What I would like to do is add a ClientImpact tag handling that could be added
very similarly to DocImpact. The way I see it working is it would work in much
the same way where projects can use this to add the tag to a commit message
when they know it is something that will require additional work in OSC or
Horizon (or others). Then when that commit merges, automation would create a
launchpad bug and/or Storyboard story, including a default set of client
projects. Perhaps we can find some way to make those impacted clients
configurable by source project, but that could be a follow-on optimization.

I am concerned that this could create some extra overhead for these projects.
But my hope is it would be a quick evaluation by a bug triager in those
projects where they can, hopefully, quickly determine if a change does not in
fact impact them and just close the ones they don't think require any follow on
work.

I do hope that this will save some time and speed things up overall for these
projects to be notified that there is something that needs their attention
without needing someone to take the time to actively go out and discover that.

Help Needed
===
From the bits I've found for the DocImpact handling, it looks like it should
not be too much effort to implement the logic to handle a ClientImpact flag.
But I have not been able to find all the moving parts that work together to
perform that automation.

If anyone has any background knowledge on how DocImpact is implemented and can
give me a few pointers, I think I should be able to take it from there to get
this implemented. Or if there is someone that knows this well and is interested
in working on some of the implementation, that would be very welcome too!
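
For what it's worth, the tag-detection half of this looks tiny. Here is a rough
sketch of what I have in mind (plain Python, not the actual jeepyb code; the
bug/story creation is left as a stub and the client list is only illustrative):

    # Sketch: pull ClientImpact lines out of a merged commit message and decide
    # which client projects should get a bug/story filed against them.
    import re

    DEFAULT_CLIENT_PROJECTS = ["python-openstackclient", "horizon"]  # illustrative

    CLIENTIMPACT_RE = re.compile(r"^ClientImpact:?\s*(?P<note>.*)$",
                                 re.IGNORECASE | re.MULTILINE)

    def clientimpact_notes(commit_message):
        """Return the ClientImpact annotation(s) found in a commit message."""
        return [m.group("note").strip()
                for m in CLIENTIMPACT_RE.finditer(commit_message)]

    def handle_merged_change(project, change_url, commit_message):
        for note in clientimpact_notes(commit_message):
            for client in DEFAULT_CLIENT_PROJECTS:
                # Stub: the real automation would create a Launchpad bug or
                # Storyboard story against `client`, linking back to change_url.
                print("would file against %s: %s (%s / %s)"
                      % (client, note or "no details given", project, change_url))

    handle_merged_change(
        "python-cinderclient",
        "https://review.openstack.org/123456",   # illustrative
        "Add foo support\n\nClientImpact: new 'foo' API needs OSC/Horizon support\n")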

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Ben Nemec



On 08/01/2018 06:05 PM, Matt Riedemann wrote:

On 8/1/2018 3:55 PM, Ben Nemec wrote:
I changed disk_allocation_ratio to 2.0 in the config file and it had 
no effect on the existing resource provider.  I assume that is because 
I had initially deployed with it unset, so I got 1.0, and when I later 
wanted to change it the provider already existed with the default value. 


Yeah I think so, unless the inventory changes we don't mess with 
changing the allocation ratio.


That makes sense.  It would be nice if it were more explicitly stated in 
the option help, but I guess Jay's spec below would obsolete that 
behavior so maybe it's better to just pursue that.





  So in the past I could do the following:

1) Change disk_allocation_ratio in nova.conf
2) Restart nova-scheduler and/or nova-compute

Now it seems like I need to do:

1) Change disk_allocation_ratio in nova.conf
2) Restart nova-scheduler, nova-compute, and nova-placement (or some 
subset of those?)


Restarting the placement service wouldn't have any effect here.


Wouldn't I need to restart it if I wanted new resource providers to use 
the new default?




3) Use osc-placement to fix up the ratios on any existing resource 
providers


Yeah that's what you'd need to do in this case.

I believe Jay Pipes might have somewhere between 3 and 10 specs for the 
allocation ratio / nova conf / placement inventory / aggregates problems 
floating around, so he's probably best to weigh in here. Like: 
https://review.openstack.org/#/c/552105/
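
For anyone who finds this thread later, the fixup in step 3 boils down to a
single inventory update per provider. As a rough sketch only of the REST calls
involved (osc-placement wraps the same API; the endpoint, token and UUID below
are made up):

    # Sketch only: raise DISK_GB allocation_ratio on one existing resource
    # provider by re-PUTting its inventory record through the placement API.
    import requests

    PLACEMENT = "http://controller/placement"   # illustrative endpoint
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN"}    # illustrative token
    rp_uuid = "RP_UUID"                          # the existing provider's UUID

    base = "%s/resource_providers/%s/inventories" % (PLACEMENT, rp_uuid)
    data = requests.get(base, headers=HEADERS).json()

    disk = data["inventories"]["DISK_GB"]
    disk["allocation_ratio"] = 2.0               # the new ratio

    # The provider generation from the GET must be echoed back in the PUT.
    body = dict(disk,
                resource_provider_generation=data["resource_provider_generation"])
    resp = requests.put("%s/DISK_GB" % base, json=body, headers=HEADERS)
    resp.raise_for_status()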




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Jay Pipes

On 08/02/2018 01:12 AM, Alex Xu wrote:
2018-08-02 4:09 GMT+08:00 Jay Pipes:


On 08/01/2018 02:02 PM, Chris Friesen wrote:

On 08/01/2018 11:32 AM, melanie witt wrote:

I think it's definitely a significant issue that
troubleshooting "No allocation
candidates returned" from placement is so difficult.
However, it's not
straightforward to log detail in placement when the request
for allocation
candidates is essentially "SELECT * FROM nodes WHERE cpu
usage < needed and disk
usage < needed and memory usage < needed" and the result is
returned from the API.


I think the only way to get useful info on a failure would be to
break down the huge SQL statement into subclauses and store the
results of the intermediate queries.


This is a good idea and something that can be done.


That sounds like you need separate sql query for each resource to get 
the intermediate, will that be terrible performance than a single query 
to get the final result?


No, not necessarily.

And what I'm referring to is doing a single query per "related 
resource/trait placement request group" -- which is pretty much what 
we're heading towards anyway.


If we had a request for:

GET /allocation_candidates?
 resources0=VCPU:1&
 required0=HW_CPU_X86_AVX2,!HW_CPU_X86_VMX&
 resources1=MEMORY_MB:1024

and logged something like this:

DEBUG: [placement request ID XXX] request group 1 of 2 for 1 VCPU, 
requiring HW_CPU_X86_AVX2, forbidding HW_CPU_X86_VMX, returned 10 matches


DEBUG: [placement request ID XXX] request group 2 of 2 for 1024 
MEMORY_MB returned 3 matches


that would at least go a step towards being more friendly for debugging 
a particular request's results.
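
(For the sake of making the grouping concrete, a toy illustration -- emphatically
not the placement code -- of splitting those numbered groups apart so each one
can be queried, and logged, on its own:)

    # Toy sketch only: split numbered request groups out of the query params so
    # each group can be resolved -- and logged -- separately.
    import re
    from collections import defaultdict

    def split_request_groups(params):
        groups = defaultdict(dict)
        for key, value in params.items():
            match = re.match(r"^(resources|required)(\d+)$", key)
            if match:
                groups[match.group(2)][match.group(1)] = value
        return dict(groups)

    params = {
        "resources0": "VCPU:1",
        "required0": "HW_CPU_X86_AVX2,!HW_CPU_X86_VMX",
        "resources1": "MEMORY_MB:1024",
    }
    groups = split_request_groups(params)
    for num in sorted(groups):
        # each group would get its own query and its own DEBUG line
        print("request group %s of %d: %s" % (num, len(groups), groups[num]))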


-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Chris Friesen

On 08/02/2018 04:10 AM, Chris Dent wrote:


When people ask for something like what Chris mentioned:

 hosts with enough CPU: 
 hosts that also have enough disk: 
 hosts that also have enough memory: 
 hosts that also meet extra spec host aggregate keys: 
 hosts that also meet image properties host aggregate keys: 
 hosts that also have requested PCI devices: 

What are the operational questions that people are trying to answer
with those results? Is the idea to be able to have some insight into
the resource usage and reporting on and from the various hosts and
discover that things are being used differently than thought? Is
placement a resource monitoring tool, or is it more simple and
focused than that? Or is it that we might have flavors or other
resource requesting constraints that have bad logic and we want to
see at what stage the failure is?  I don't know and I haven't really
seen it stated explicitly here, and knowing it would help.

Do people want info like this for requests as they happen, or to be
able to go back later and try the same request again with some flag
on that says: "diagnose what happened"?

Or to put it another way: Before we design something that provides
the information above, which is a solution to an undescribed
problem, can we describe the problem more completely first to make
sure that what solution we get is the right one. The thing above,
that set of information, is context free.


The reason my organization added additional failure-case logging to the 
pre-placement scheduler was that we were enabling complex features (cpu pinning, 
hugepages, PCI, SRIOV, CPU model requests, NUMA topology, etc.) and we were 
running into scheduling failures, and people were asking the question "why did 
this scheduler request fail to find a valid host?".


There are a few reasons we might want to ask this question.  Some of them 
include:

1) double-checking the scheduler is working properly when first using additional 
features

2) weeding out images/flavors with excessive or mutually-contradictory 
constraints
3) determining whether the cluster needs to be reconfigured to meet user 
requirements


I suspect that something like "do the same request again with a debug flag" 
would cover many scenarios.  I suspect its main weakness would be dealing with 
contention between short-lived entities.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] PTL on PTO, no meeting next week

2018-08-02 Thread Ben Nemec
I'm out next week and I'm told Monday is a bank holiday in some places, 
so we're going to skip the Oslo meeting for August 6th.  Of course if 
you have issues you don't have to wait for a meeting to ask.  The Oslo 
team is pretty much always around in #openstack-oslo.


I should be back the following week so we'll resume the normal meeting 
schedule then.


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Paste unmaintained

2018-08-02 Thread Chris Dent

On Thu, 2 Aug 2018, Stephen Finucane wrote:


Given that multiple projects are using this, we may want to think about
reaching out to the author and seeing if there's anything we can do to
at least keep this maintained going forward. I've talked to cdent about
this already but if anyone else has ideas, please let me know.


I've sent some exploratory email to Ian, the original author, to get
a sense of where things are and whether there's an option for us (or
if for some reason us wasn't okay, me) to adopt it. If email doesn't
land, I'll try again with other media.

I agree with the idea of trying to move away from using it, as
mentioned elsewhere in this thread and in IRC, but it's not a simple
step as at least in some projects we are using paste files as
configuration that people are allowed (and do) change. Moving away
from that is the hard part, not figuring out how to load WSGI
middleware in a modern way.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Paste unmaintained

2018-08-02 Thread Jeremy Freudberg
On Thu, Aug 2, 2018 at 10:27 AM, Doug Hellmann  wrote:
> Excerpts from Stephen Finucane's message of 2018-08-02 15:11:25 +0100:
>> tl;dr: It seems Paste [1] may be entering unmaintained territory and we
>> may need to do something about it.
>>
>> I was cleaning up some warning messages that nova was issuing this
>> morning and noticed a few coming from Paste. I was going to draft a PR
>> to fix this, but a quick browse through the Bitbucket project [2]
>> suggests there has been little to no activity on that for well over a
>> year. One particular open PR - "Python 3.7 support" - is particularly
>> concerning, given the recent mailing list threads on the matter.
>>
>> Given that multiple projects are using this, we may want to think about
>> reaching out to the author and seeing if there's anything we can do to
>> at least keep this maintained going forward. I've talked to cdent about
>> this already but if anyone else has ideas, please let me know.
>>
>> Stephen
>>
>> [1] https://pypi.org/project/Paste/
>> [2] https://bitbucket.org/ianb/paste/
>> [3] https://bitbucket.org/ianb/paste/pull-requests/41
>>
>
> The last I heard, a few years ago Ian moved away from Python to
> JavaScript as part of his work at Mozilla. The support around
> paste.deploy has been sporadic since then, and was one of the reasons
> we discussed a goal of dropping paste.ini as a configuration file.
>
> Do we have a real sense of how many of the projects below, which
> list Paste in requirements.txt, actually use it directly or rely
> on it for configuration?
>
> Doug
>
> $ beagle search --ignore-case --file requirements.txt 'paste[><=! ]'
> +++--++
> | Repository | Filename   
> | Line | Text   |
> +++--++
> | airship-armada | requirements.txt   
> |8 | Paste>=2.0.3   |
> | airship-deckhand   | requirements.txt   
> |   12 | Paste # MIT|
> | anchor | requirements.txt   
> |9 | Paste # MIT|
> | apmec  | requirements.txt   
> |6 | Paste>=2.0.2 # MIT |
> | barbican   | requirements.txt   
> |   22 | Paste>=2.0.2 # MIT |
> | cinder | requirements.txt   
> |   37 | Paste>=2.0.2 # MIT |
> | congress   | requirements.txt   
> |   11 | Paste>=2.0.2 # MIT |
> | designate  | requirements.txt   
> |   25 | Paste>=2.0.2 # MIT |
> | ec2-api| requirements.txt   
> |   20 | Paste # MIT|
> | freezer-api| requirements.txt   
> |8 | Paste>=2.0.2 # MIT |
> | gce-api| requirements.txt   
> |   16 | Paste>=2.0.2 # MIT |
> | glance | requirements.txt   
> |   31 | Paste>=2.0.2 # MIT |
> | glare  | requirements.txt   
> |   29 | Paste>=2.0.2 # MIT |
> | karbor | requirements.txt   
> |   28 | Paste>=2.0.2 # MIT |
> | kingbird   | requirements.txt   
> |7 | Paste>=2.0.2 # MIT |
> | manila | requirements.txt   
> |   30 | Paste>=2.0.2 # MIT |
> | meteos | requirements.txt   
> |   29 | Paste # MIT|
> | monasca-events-api | requirements.txt   
> |6 | Paste # MIT|
> | monasca-log-api| requirements.txt   
> |6 | Paste>=2.0.2 # MIT |
> | murano | requirements.txt   
> |   28 | Paste>=2.0.2 # MIT |
> | neutron| requirements.txt   
> |6 | Paste>=2.0.2 # MIT |
> | nova   | requirements.txt   
> |   19 | Paste>=2.0.2 # MIT |
> | novajoin 

Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Sean McGinnis
> > 
> > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few 
> > teams per cycle that miss the election call, that would fall under that.
> > 
+1 for appointing Dirk as PTL.

> > Trove had a volunteer (Dariusz Krol), but that person did not fill the 
> > requirements for candidates. Given that the previous PTL (Zhao Chao) 
> > plans to stay around to help onboarding the new contributors, I'd 
> > support appointing Dariusz.
> > 

I would be fine with this. But I also wonder if it might make sense to move
Trove out of governance while they go through this transition so they have more
leeway to evolve the project how they need to, with the expectation that if
things get to a good and healthy point we can quickly re-accept the project as
official.

> > I suspect Freezer falls in the same bucket as Packaging_Rpm and we 
> > should get a candidate there. I would reach out to caoyuan to see if they 
> > would be interested in stepping up.
> > 
> > LOCI is also likely in the same bucket. However, given that it's a 
> > deployment project, if we can't get anyone to step up and guarantee some 
> > level of currentness, we should consider removing it from the "official" 
> > list.
> > 
> > Dragonflow is a bit in the LOCI case. It feels like a miss too, but if 
> > it's not, given that it's an add-on project that runs within Neutron, I 
> > would consider removing it from the "official" list if we can't find 
> > anyone to step up.
> > 

Omer has responded that the deadline was missed and he would like to continue
as PTL. I think that is acceptable. (though unfortunate that it was missed)

> > For Winstackers and Searchlight, those are low-activity teams (18 and 13 
> > commits), which brings the question of PTL workload for feature-complete 
> > projects.
> 
> Even for feature-complete projects we need to know how to reach the
> maintainers, otherwise I feel like we would consider the project
> unmaintained, wouldn't we?
> 

I agree with Doug, I think there needs to be someone designated as the contact
point for issues with the project. We've seen other "stable" things suddenly go
unstable due to library updates or other external factors.

I don't think Thierry was suggesting there not be a PTL for these, but for any
potential PTL candidates they can know that the demands on their time to fill
that role _should_ be pretty light.

> > 
> > Finally, RefStack: I feel like this should be wrapped into an 
> > Interoperability SIG, since that project team is not producing 
> > "OpenStack", but helping fostering OpenStack interoperability. Having 
> > separate groups (Interop WG, RefStack) sounds overkill anyway, and with 
> > the introduction of SIGs we have been recentering project teams on 
> > upstream code production.
> > 
> 

I agree this has gotten to the point where it probably now makes more sense to
be owned by a SIG rather than being a full project team.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Paste unmaintained

2018-08-02 Thread Doug Hellmann
Excerpts from Stephen Finucane's message of 2018-08-02 15:11:25 +0100:
> tl;dr: It seems Paste [1] may be entering unmaintained territory and we
> may need to do something about it.
> 
> I was cleaning up some warning messages that nova was issuing this
> morning and noticed a few coming from Paste. I was going to draft a PR
> to fix this, but a quick browse through the Bitbucket project [2]
> suggests there has been little to no activity on that for well over a
> year. One particular open PR - "Python 3.7 support" - is particularly
> concerning, given the recent mailing list threads on the matter.
> 
> Given that multiple projects are using this, we may want to think about
> reaching out to the author and seeing if there's anything we can do to
> at least keep this maintained going forward. I've talked to cdent about
> this already but if anyone else has ideas, please let me know.
> 
> Stephen
> 
> [1] https://pypi.org/project/Paste/
> [2] https://bitbucket.org/ianb/paste/
> [3] https://bitbucket.org/ianb/paste/pull-requests/41
> 

The last I heard, a few years ago Ian moved away from Python to
JavaScript as part of his work at Mozilla. The support around
paste.deploy has been sporadic since then, and was one of the reasons
we discussed a goal of dropping paste.ini as a configuration file.

Do we have a real sense of how many of the projects below, which
list Paste in requirements.txt, actually use it directly or rely
on it for configuration?

Doug

$ beagle search --ignore-case --file requirements.txt 'paste[><=! ]'
+--------------------+------------------+------+--------------------+
| Repository         | Filename         | Line | Text               |
+--------------------+------------------+------+--------------------+
| airship-armada     | requirements.txt |    8 | Paste>=2.0.3       |
| airship-deckhand   | requirements.txt |   12 | Paste # MIT        |
| anchor             | requirements.txt |    9 | Paste # MIT        |
| apmec              | requirements.txt |    6 | Paste>=2.0.2 # MIT |
| barbican           | requirements.txt |   22 | Paste>=2.0.2 # MIT |
| cinder             | requirements.txt |   37 | Paste>=2.0.2 # MIT |
| congress           | requirements.txt |   11 | Paste>=2.0.2 # MIT |
| designate          | requirements.txt |   25 | Paste>=2.0.2 # MIT |
| ec2-api            | requirements.txt |   20 | Paste # MIT        |
| freezer-api        | requirements.txt |    8 | Paste>=2.0.2 # MIT |
| gce-api            | requirements.txt |   16 | Paste>=2.0.2 # MIT |
| glance             | requirements.txt |   31 | Paste>=2.0.2 # MIT |
| glare              | requirements.txt |   29 | Paste>=2.0.2 # MIT |
| karbor             | requirements.txt |   28 | Paste>=2.0.2 # MIT |
| kingbird           | requirements.txt |    7 | Paste>=2.0.2 # MIT |
| manila             | requirements.txt |   30 | Paste>=2.0.2 # MIT |
| meteos             | requirements.txt |   29 | Paste # MIT        |
| monasca-events-api | requirements.txt |    6 | Paste # MIT        |
| monasca-log-api    | requirements.txt |    6 | Paste>=2.0.2 # MIT |
| murano             | requirements.txt |   28 | Paste>=2.0.2 # MIT |
| neutron            | requirements.txt |    6 | Paste>=2.0.2 # MIT |
| nova               | requirements.txt |   19 | Paste>=2.0.2 # MIT |
| novajoin           | requirements.txt |    6 | Paste>=2.0.2 # MIT |
| oslo.service       | requirements.txt |   

Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Chris Friesen

On 08/01/2018 11:34 PM, Joshua Harlow wrote:


And I would be able to say request the explanation for a given request id
(historical even) so that analysis could be done post-change and pre-change (say
I update the algorithm for selection) so that the effects of alternations to
said decisions could be determined.


This would require storing a snapshot of all resources prior to processing every 
request...seems like that could add overhead and increase storage consumption.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Doug Hellmann
Excerpts from Thierry Carrez's message of 2018-08-02 10:58:53 +0200:
> Tony Breeds wrote:
> > [...]
> > There are 8 projects without candidates, so according to this
> > resolution[1], the TC will have to decide how the following
> > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm,
> > RefStack, Searchlight, Trove and Winstackers.
> 
> Here is my take on that...
> 
> Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few 
> teams per cycle that miss the election call, that would fall under that.
> 
> Trove had a volunteer (Dariusz Krol), but that person did not fill the 
> requirements for candidates. Given that the previous PTL (Zhao Chao) 
> plans to stay around to help onboarding the new contributors, I'd 
> support appointing Dariusz.
> 
> I suspect Freezer falls in the same bucket as Packaging_Rpm and we 
> should get a candidate there. I would reach out to caoyuan to see if they 
> would be interested in stepping up.
> 
> LOCI is also likely in the same bucket. However, given that it's a 
> deployment project, if we can't get anyone to step up and guarantee some 
> level of currentness, we should consider removing it from the "official" 
> list.
> 
> Dragonflow is a bit in the LOCI case. It feels like a miss too, but if 
> it's not, given that it's an add-on project that runs within Neutron, I 
> would consider removing it from the "official" list if we can't find 
> anyone to step up.
> 
> For Winstackers and Searchlight, those are low-activity teams (18 and 13 
> commits), which brings the question of PTL workload for feature-complete 
> projects.

Even for feature-complete projects we need to know how to reach the
maintainers, otherwise I feel like we would consider the project
unmaintained, wouldn't we?

> 
> Finally, RefStack: I feel like this should be wrapped into an 
> Interoperability SIG, since that project team is not producing 
> "OpenStack", but helping fostering OpenStack interoperability. Having 
> separate groups (Interop WG, RefStack) sounds overkill anyway, and with 
> the introduction of SIGs we have been recentering project teams on 
> upstream code production.
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Doug Hellmann
Excerpts from Omer Anson's message of 2018-08-02 12:56:37 +0300:
> Hi,
> 
> I'm sorry for the inconvenience. I completely missed the nomination period.
> Is it possible to send in a late nomination for Dragonflow?

At this point the TC is going to be looking for a volunteer, so if there
is one please let us know.

Doug

> 
> Thanks,
> Omer Anson.
> 
> On Thu, 2 Aug 2018 at 11:59, Thierry Carrez  wrote:
> 
> > Tony Breeds wrote:
> > > [...]
> > > There are 8 projects without candidates, so according to this
> > > resolution[1], the TC will have to decide how the following
> > > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm,
> > > RefStack, Searchlight, Trove and Winstackers.
> >
> > Here is my take on that...
> >
> > Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few
> > teams per cycle that miss the election call, that would fall under that.
> >
> > Trove had a volunteer (Dariusz Krol), but that person did not fill the
> > requirements for candidates. Given that the previous PTL (Zhao Chao)
> > plans to stay around to help onboarding the new contributors, I'd
> > support appointing Dariusz.
> >
> > I suspect Freezer falls in the same bucket as Packaging_Rpm and we
> > should get a candidate there. I would reach out to caoyuan to see if they
> > would be interested in stepping up.
> >
> > LOCI is also likely in the same bucket. However, given that it's a
> > deployment project, if we can't get anyone to step up and guarantee some
> > level of currentness, we should consider removing it from the "official"
> > list.
> >
> > Dragonflow is a bit in the LOCI case. It feels like a miss too, but if
> > it's not, given that it's an add-on project that runs within Neutron, I
> > would consider removing it from the "official" list if we can't find
> > anyone to step up.
> >
> > For Winstackers and Searchlight, those are low-activity teams (18 and 13
> > commits), which brings the question of PTL workload for feature-complete
> > projects.
> >
> > Finally, RefStack: I feel like this should be wrapped into an
> > Interoperability SIG, since that project team is not producing
> > "OpenStack", but helping fostering OpenStack interoperability. Having
> > separate groups (Interop WG, RefStack) sounds overkill anyway, and with
> > the introduction of SIGs we have been recentering project teams on
> > upstream code production.
> >
> > --
> > Thierry Carrez (ttx)
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Paste unmaintained

2018-08-02 Thread Stephen Finucane
tl;dr: It seems Paste [1] may be entering unmaintained territory and we
may need to do something about it.

I was cleaning up some warning messages that nova was issuing this
morning and noticed a few coming from Paste. I was going to draft a PR
to fix this, but a quick browse through the Bitbucket project [2]
suggests there has been little to no activity on that for well over a
year. One open PR in particular - "Python 3.7 support" [3] - is
concerning, given the recent mailing list threads on the matter.

Given that multiple projects are using this, we may want to think about
reaching out to the author and seeing if there's anything we can do to
at least keep this maintained going forward. I've talked to cdent about
this already but if anyone else has ideas, please let me know.

Stephen

[1] https://pypi.org/project/Paste/
[2] https://bitbucket.org/ianb/paste/
[3] https://bitbucket.org/ianb/paste/pull-requests/41


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] proposing Moisés Guimarães for oslo.config core

2018-08-02 Thread Stephen Finucane
On Wed, 2018-08-01 at 09:27 -0400, Doug Hellmann wrote:
> Moisés Guimarães (moguimar) did quite a bit of work on oslo.config
> during the Rocky cycle to add driver support. Based on that work,
> and a discussion we have had since then about general cleanup needed
> in oslo.config, I think he would make a good addition to the
> oslo.config review team.
> 
> Please indicate your approval or concerns with +1/-1.
> 
> Doug

+1. The more the merrier.

Stephen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-08-02 13:13:21 +:
> On 2018-08-02 10:58:53 +0200 (+0200), Thierry Carrez wrote:
> [...]
> > Finally, RefStack: I feel like this should be wrapped into an
> > Interoperability SIG, since that project team is not producing
> > "OpenStack", but helping fostering OpenStack interoperability.
> > Having separate groups (Interop WG, RefStack) sounds overkill
> > anyway, and with the introduction of SIGs we have been recentering
> > project teams on upstream code production.
> 
> That was one of the possibilities I discussed with them during their
> meeting a month ago:
> 
> http://eavesdrop.openstack.org/irclogs/%23refstack/%23refstack.2018-07-03.log.html#t2018-07-03T17:05:43
> 
> Election official hat off and TC Refstack liaison hat on, I think if
> Chris Hoge doesn't volunteer to act as PTL this cycle to oversee
> shutting down the team and reassigning its deliverables, then we
> need to help them fast-track that nowish and not appoint a Stein
> cycle PTL.

This came up at a joint leadership meeting right after we created SIGs
and the Interop WG was reluctant to make any structural changes at the
time because they had just gone through a renaming process for the
working group. Changing "WG" to "SIG" feels much lighter weight, so
maybe we can move ahead with that now.

Doug



Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Jeremy Stanley
On 2018-08-02 10:58:53 +0200 (+0200), Thierry Carrez wrote:
[...]
> Finally, RefStack: I feel like this should be wrapped into an
> Interoperability SIG, since that project team is not producing
> "OpenStack", but helping foster OpenStack interoperability.
> Having separate groups (Interop WG, RefStack) sounds like overkill
> anyway, and with the introduction of SIGs we have been recentering
> project teams on upstream code production.

That was one of the possibilities I discussed with them during their
meeting a month ago:

http://eavesdrop.openstack.org/irclogs/%23refstack/%23refstack.2018-07-03.log.html#t2018-07-03T17:05:43

Election official hat off and TC Refstack liaison hat on, I think if
Chris Hoge doesn't volunteer to act as PTL this cycle to oversee
shutting down the team and reassigning its deliverables, then we
need to help them fast-track that nowish and not appoint a Stein
cycle PTL.
-- 
Jeremy Stanley




Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-08-02 Thread Sebastian Marcet
Hello Ian, due to the nature of the pot file format and mechanics,
we can't add the translators as msgid entries, because they will only exist
in the corresponding po file per language.
That said, I think we could create a solution using both [1] and [2]:
* adding a "TRANSLATORS" msgid to the pot file, so I could get that string
per language
* adding translators' names as stated in [2] as po file metadata, so I could
parse and display them per language
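
As a rough sketch of the parsing side (purely illustrative; it uses polib,
and the header line format it expects is an assumption based on how Zanata
writes translator credits):

    # Illustrative only: pull translator names out of a po file's header
    # comments. The "Name <email>, year. #zanata" line format is an assumption.
    import re
    import polib

    def translators_from_po(path):
        po = polib.pofile(path)
        names = []
        for line in po.header.splitlines():
            match = re.match(r'\s*(?P<name>[^<,]+?)\s*<[^>]+>,', line)
            if match:
                names.append(match.group('name'))
        return names

    # e.g. translators_from_po('doc/source/locale/de/LC_MESSAGES/doc.po')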


regards

On Wed, Aug 1, 2018 at 11:03 PM, Ian Y. Choi  wrote:

> Hello Sebastian,
>
> Korean has also currently 100% translation now.
> About two weeks ago, there was a discussion about how to include the list of
> translators per translated document.
>
> My proposal is mentioned in [1] - do you think it is a good idea (and is it
> under implementation),
> or would parsing the names of translators in the header lines of the po files
> (e.g., the four lines in [2]) be a better idea?
>
>
> With many thanks,
>
> /Ian
>
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-i18n/%
> 23openstack-i18n.2018-07-19.log.html#t2018-07-19T15:09:46
> [2] http://git.openstack.org/cgit/openstack/i18n/tree/doc/source
> /locale/de/LC_MESSAGES/doc.po#n1
>
>
> Frank Kloeker wrote on 7/31/2018 6:39 PM:
>
>> Hi Sebastian,
>>
>> okay, it's translated now. In the Edge whitepaper there is a problem with
>> XML parsing of the term AT. I don't know how to escape this. Maybe you will
>> see the warning during import too.
>>
>> kind regards
>>
>> Frank
>>
>> Am 2018-07-30 20:09, schrieb Sebastian Marcet:
>>
>>> Hi Frank,
>>> I was double-checking the pot file and realized that the original pot missed
>>> some parts of the original paper (subsections of the paper), apologies
>>> for that.
>>> I just re-uploaded an updated pot file with the missing subsections.
>>>
>>> regards
>>>
>>> On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker  wrote:
>>>
>>> Hi Jimmy,

 from the GUI I'll get this link:

 https://translate.openstack.org/rest/file/translation/edge-
>>> computing/pot-translation/de/po?docId=cloud-edge-computing-
>>> beyond-the-data-center
>>>
 [1]

 paper version  are only in container whitepaper:


 https://translate.openstack.org/rest/file/translation/levera
>>> ging-containers-openstack/paper/de/po?docId=leveraging-
>>> containers-and-openstack
>>>
 [2]

 In general there is no group named papers

 kind regards

 Frank

 Am 2018-07-30 17:06, schrieb Jimmy McArthur:
 Frank,

 We're getting a 404 when looking for the pot file on the Zanata API:

 https://translate.openstack.org/rest/file/translation/papers
>>> /papers/de/po?docId=edge-computing
>>>
 [3]

 As a result, we can't pull the po files.  Any idea what might be
 happening?

 Seeing the same thing with both papers...

 Thank you,
 Jimmy

 Frank Kloeker wrote:
 Hi Jimmy,

 Korean and German version are now done on the new format. Can you
 check publishing?

 thx

 Frank

 Am 2018-07-19 16:47, schrieb Jimmy McArthur:
 Hi all -

 Follow up on the Edge paper specifically:

 https://translate.openstack.org/iteration/view/edge-computin
>>> g/pot-translation/documents?dswid=-3192
>>>
 [4] This is now available. As I mentioned on IRC this morning, it
 should
 be VERY close to the PDF.  Probably just needs a quick review.

 Let me know if I can assist with anything.

 Thank you to i18n team for all of your help!!!

 Cheers,
 Jimmy

 Jimmy McArthur wrote:
 Ian raises some great points :) I'll try to address below...

 Ian Y. Choi wrote:
 Hello,

 When I saw overall translation source strings on container
 whitepaper, I would infer that new edge computing whitepaper
 source strings would include HTML markup tags.
 One of the things I discussed with Ian and Frank in Vancouver is
 the expense of recreating PDFs with new translations.  It's
 prohibitively expensive for the Foundation as it requires design
 resources which we just don't have.  As a result, we created the
 Containers whitepaper in HTML, so that it could be easily updated
 w/o working with outside design contractors.  I indicated that we
 would also be moving the Edge paper to HTML so that we could prevent
 that additional design resource cost.
 On the other hand, the source strings of edge computing whitepaper
 which I18n team previously translated do not include HTML markup
 tags, since the source strings are based on just text format.
 The version that Akihiro put together was based on the Edge PDF,
 which we unfortunately didn't have the resources to implement in the
 same format.

 I really appreciate Akihiro's work on RST-based support on
 publishing translated edge computing whitepapers, since
 translators do not have to re-translate all the strings.
 I would like to second this. It 

Re: [openstack-dev] [placement] #openstack-placement IRC channel requires registered nicks

2018-08-02 Thread Jim Rollenhagen
On Thu, Aug 2, 2018 at 5:18 AM, Chris Dent  wrote:

>
>
> I thought I should post a message here for visibility that yesterday
> we made the openstack-placement IRC channel +r so that the recent
> spate of spammers could be blocked.
>
> This means that you must have a registered nick to gain access to
> the channel. There's information on how to register at:
>
> https://freenode.net/kb/answer/registration
>
> Plenty of other channels have been doing the same thing, see:
>
> https://etherpad.openstack.org/p/freenode-plus-r-08-2018


In case you (or others) missed it, infra actually went through and made
all official OpenStack channels +r. They're also set to redirect to
#openstack-unregistered where there's a message about what's going on
and people there to help navigate registering a nick.

// jim


Re: [openstack-dev] [oslo] proposing Moisés Guimarães for oslo.config core

2018-08-02 Thread ChangBo Guo
+1

2018-08-01 23:38 GMT+08:00 John Dennis :

> On 08/01/2018 09:27 AM, Doug Hellmann wrote:
>
>> Moisés Guimarães (moguimar) did quite a bit of work on oslo.config
>> during the Rocky cycle to add driver support. Based on that work,
>> and a discussion we have had since then about general cleanup needed
>> in oslo.config, I think he would make a good addition to the
>> oslo.config review team.
>>
>> Please indicate your approval or concerns with +1/-1.
>>
>
> +1
>
>
> --
> John Dennis
>
>



-- 
ChangBo Guo(gcb)
Community Director @EasyStack


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-02 Thread Chris Dent


Responses to some of Jay's comments below, but first, to keep this
on track with the original goal of the thread ("How to debug no
valid host failures with placement") before I drag it to the side,
some questions.

When people ask for something like what Chris mentioned:

hosts with enough CPU: 
hosts that also have enough disk: 
hosts that also have enough memory: 
hosts that also meet extra spec host aggregate keys: 
hosts that also meet image properties host aggregate keys: 
hosts that also have requested PCI devices: 

What are the operational questions that people are trying to answer
with those results? Is the idea to be able to have some insight into
the resource usage and reporting on and from the various hosts and
discover that things are being used differently than thought? Is
placement a resource monitoring tool, or is it more simple and
focused than that? Or is it that we might have flavors or other
resource requesting constraints that have bad logic and we want to
see at what stage the failure is?  I don't know and I haven't really
seen it stated explicitly here, and knowing it would help.

Do people want info like this for requests as they happen, or to be
able to go back later and try the same request again with some flag
on that says: "diagnose what happened"?

Or to put it another way: Before we design something that provides
the information above, which is a solution to an undescribed
problem, can we describe the problem more completely first to make
sure that whatever solution we get is the right one? The thing above,
that set of information, is context-free.

On Wed, 1 Aug 2018, Jay Pipes wrote:

> On 08/01/2018 02:02 PM, Chris Friesen wrote:
>> I think the only way to get useful info on a failure would be to break down
>> the huge SQL statement into subclauses and store the results of the
>> intermediate queries.
>
> This is a good idea and something that can be done.


I can see how it would be a good idea from an explicit debugging
standpoint, but is it a good idea on all fronts? From the very early
days when placement was just a thing under your pen on whiteboards,
we were trying to achieve something that wasn't the FilterScheduler
but achieved efficiencies and some measure of black boxed-ness by
being as near as possible to a single giant SQL statement as we
could get it. Do we want to get too far away from that?

Another thing to consider is that in a large installation, logging
these intermediate results (if done in the listing-hosts way
indicated above) would be very large without some truncating or
"only if < N results" guards.

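To make that trade-off concrete, here is a toy sketch of stage-by-stage
filtering with guarded logging (purely illustrative; the stage names, data
shapes and guard value are invented, this is not placement's code):

    # Toy illustration only: the stage names, host data shapes and the
    # logging guard are all invented for this sketch.
    LOG_SURVIVORS_BELOW = 20   # don't dump huge host lists in big deployments

    def filter_hosts(hosts, stages, log):
        remaining = list(hosts)
        for name, predicate in stages:
            remaining = [h for h in remaining if predicate(h)]
            if len(remaining) < LOG_SURVIVORS_BELOW:
                log("%s: %d left: %s" % (
                    name, len(remaining), [h["name"] for h in remaining]))
            else:
                log("%s: %d left" % (name, len(remaining)))
            if not remaining:
                break
        return remaining

    # e.g. a request for 4 VCPU and 2048 MB of RAM
    stages = [
        ("enough CPU", lambda h: h["vcpus_free"] >= 4),
        ("enough memory", lambda h: h["ram_free"] >= 2048),
    ]
    hosts = [
        {"name": "cn1", "vcpus_free": 8, "ram_free": 4096},
        {"name": "cn2", "vcpus_free": 2, "ram_free": 8192},
    ]
    filter_hosts(hosts, stages, print)
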
Would another approach be to make it easy to replay a resource
request that incrementally retries the request with a less
constrained set of requirements (expanding by some heuristic we
design)? Something on a different URI where the response is neither
what /allocation_candidates nor /resource_providers returns, but
allows the caller to know where the boundary between results and no
results is.
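
Purely as a sketch of that idea (nothing like this exists in placement
today; get_candidates() and the drop order are invented for illustration):

    # Hypothetical sketch of "replay with relaxed requirements": drop one
    # constraint at a time until candidates appear, so the caller learns
    # where the boundary between results and no results sits.
    def find_boundary(request, drop_order, get_candidates):
        attempt = dict(request)
        dropped = []
        remaining = list(drop_order)
        while True:
            candidates = get_candidates(attempt)
            if candidates or not remaining:
                return dropped, candidates
            key = remaining.pop(0)
            attempt.pop(key, None)
            dropped.append(key)

    # e.g. find_boundary(req, ["required_traits", "member_of", "resources"],
    #                    fake_placement_call)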

One could also imagine a non-http interface to placement that
outputs something a bit like 'top': a regularly updating scan of
resource usage. But it's hard to know if that is even relevant
without more info as asked above.

It could very well be that explicit debugging of filtering stages is
the right way to go, but we should look closely at the costs of
doing so. Part of me is all: Please, yes, let's do it, it would make
the code _so_ much more comprehensible. But there were reasons we
made the complex SQL in the first place.

> Unfortunately, it's refactoring work and as a community, we tend to
> prioritize fancy features like NUMA topology and CPU pinning over refactoring
> work.


I think if we, as a community, said "no", that would be okay. That's
really all it would take. We effectively say "no" to features all the
time anyway, because we've generated software to which it takes 3
years to add something like placement, for very little
appreciable gain in that time (yes, there are many improvements under
the surface and with things like race conditions, but in terms of
what can be accomplished with the new tooling, we're still not
there).

If our labour is indeed valuable we can choose to exercise greater
control over its direction.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Omer Anson
Hi,

I'm sorry for the inconvenience. I completely missed the nomination period.
Is it possible to send in a late nomination for Dragonflow?

Thanks,
Omer Anson.

On Thu, 2 Aug 2018 at 11:59, Thierry Carrez  wrote:

> Tony Breeds wrote:
> > [...]
> > There are 8 projects without candidates, so according to this
> > resolution[1], the TC will have to decide how the following
> > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm,
> > RefStack, Searchlight, Trove and Winstackers.
>
> Here is my take on that...
>
> Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few
> teams per cycle that miss the election call; this would fall under that.
>
> Trove had a volunteer (Dariusz Krol), but that person did not meet the
> requirements for candidates. Given that the previous PTL (Zhao Chao)
> plans to stay around to help onboard the new contributors, I'd
> support appointing Dariusz.
>
> I suspect Freezer falls in the same bucket as Packaging_Rpm and we
> should get a candidate there. I would reach out to caoyuan to see if they
> would be interested in stepping up.
>
> LOCI is also likely in the same bucket. However, given that it's a
> deployment project, if we can't get anyone to step up and guarantee some
> level of currentness, we should consider removing it from the "official"
> list.
>
> Dragonflow is in a similar situation to LOCI. It feels like a miss too, but if
> it's not, given that it's an add-on project that runs within Neutron, I
> would consider removing it from the "official" list if we can't find
> anyone to step up.
>
> For Winstackers and Searchlight, those are low-activity teams (18 and 13
> commits), which raises the question of PTL workload for feature-complete
> projects.
>
> Finally, RefStack: I feel like this should be wrapped into an
> Interoperability SIG, since that project team is not producing
> "OpenStack", but helping foster OpenStack interoperability. Having
> separate groups (Interop WG, RefStack) sounds like overkill anyway, and with
> the introduction of SIGs we have been recentering project teams on
> upstream code production.
>
> --
> Thierry Carrez (ttx)
>


[openstack-dev] [placement] #openstack-placement IRC channel requires registered nicks

2018-08-02 Thread Chris Dent



I thought I should post a message here for visibility that yesterday
we made the openstack-placement IRC channel +r so that the recent
spate of spammers could be blocked.

This means that you must have a registered nick to gain access to
the channel. There's information on how to register at:

https://freenode.net/kb/answer/registration

Plenty of other channels have been doing the same thing, see:

https://etherpad.openstack.org/p/freenode-plus-r-08-2018

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all][election] PTL nominations are now closed

2018-08-02 Thread Thierry Carrez

Tony Breeds wrote:

[...]
There are 8 projects without candidates, so according to this
resolution[1], the TC will have to decide how the following
projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm,
RefStack, Searchlight, Trove and Winstackers.


Here is my take on that...

Packaging_Rpm has a late candidate (Dirk Mueller). We always have a few
teams per cycle that miss the election call; this would fall under that.


Trove had a volunteer (Dariusz Krol), but that person did not meet the
requirements for candidates. Given that the previous PTL (Zhao Chao)
plans to stay around to help onboard the new contributors, I'd
support appointing Dariusz.


I suspect Freezer falls in the same bucket as Packaging_Rpm and we
should get a candidate there. I would reach out to caoyuan to see if they
would be interested in stepping up.


LOCI is also likely in the same bucket. However, given that it's a 
deployment project, if we can't get anyone to step up and guarantee some 
level of currentness, we should consider removing it from the "official" 
list.


Dragonflow is in a similar situation to LOCI. It feels like a miss too, but if
it's not, given that it's an add-on project that runs within Neutron, I 
would consider removing it from the "official" list if we can't find 
anyone to step up.


For Winstackers and Searchlight, those are low-activity teams (18 and 13 
commits), which raises the question of PTL workload for feature-complete
projects.


Finally, RefStack: I feel like this should be wrapped into an
Interoperability SIG, since that project team is not producing
"OpenStack", but helping foster OpenStack interoperability. Having
separate groups (Interop WG, RefStack) sounds like overkill anyway, and with
the introduction of SIGs we have been recentering project teams on 
upstream code production.


--
Thierry Carrez (ttx)



[openstack-dev] [vitrage][ptg] Vitrage virtual PTG

2018-08-02 Thread Ifat Afek
Hi,

As discussed in our IRC meeting yesterday [1], we will hold the Vitrage
virtual PTG in the first week of October. If you would like to participate,
you are welcome to add your name, time zone and ideas for discussion in the
PTG etherpad [2].

[1]
http://eavesdrop.openstack.org/meetings/vitrage/2018/vitrage.2018-08-01-08.00.log.html

[2] https://etherpad.openstack.org/p/vitrage-stein-ptg

Br,
Ifat


Re: [openstack-dev] OpenStack lagging behind 2 major python versions: we need a Python 3.7 gate

2018-08-02 Thread Andrey Kurilin
Hi Thomas!

On Thu, 2 Aug 2018 at 06:13, Thomas Goirand  wrote:

> On 07/12/2018 10:38 PM, Thomas Goirand wrote:
> > Hi everyone!
> >
> > [...]
> Here are more examples that show why we should be gating earlier with
> newer Python versions:
>
> Nova:
> https://review.openstack.org/#/c/584365/
>
> Glance:
> https://review.openstack.org/#/c/586716/
>
> Murano:
> https://bugs.debian.org/904581
>
> Pyghmi:
> https://bugs.debian.org/905213
>
> There's also some "raise StopIteration" issues in:
> - ceilometer
> - cinder
> - designate
> - glance
> - glare
> - heat
> - karbor
> - manila
> - murano
> - networking-ovn
> - neutron-vpnaas
> - nova
> - rally


Can you provide a traceback or steps to reproduce the issue for the Rally
project?
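
(For context, the usual shape of these "raise StopIteration" problems under
PEP 479 / Python 3.7 is roughly the sketch below; it is only an illustration,
not code taken from any of the projects listed.)

    # Minimal illustration of the PEP 479 class of bug: under Python 3.7 a
    # bare StopIteration escaping a generator becomes a RuntimeError.
    def broken(items):
        it = iter(items)
        while True:
            yield next(it)   # StopIteration leaks -> RuntimeError on 3.7

    def fixed(items):
        it = iter(items)
        while True:
            try:
                yield next(it)
            except StopIteration:
                return       # end the generator cleanly instead

    list(fixed([1, 2, 3]))     # [1, 2, 3]
    # list(broken([1, 2, 3]))  # raises RuntimeError under Python 3.7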

>
> - zaqar
>
> It'd be nice to have these addressed ASAP.
>
> Cheers,
>
> Thomas Goirand (zigo)
>
-- 
Best regards,
Andrey Kurilin.


Re: [openstack-dev] [all][election][tc] Leaderless projects.

2018-08-02 Thread Javier Pena


- Original Message -
> On Wed, Aug 01, 2018 at 09:55:13AM +1000, Tony Breeds wrote:
> > 
> > Hello all,
> > The PTL Nomination period is now over. The official candidate list
> > is available on the election website[0].
> > 
> > There are 8 projects without candidates, so according to this
> > resolution[1], the TC will have to decide how the following
> > projects will proceed: Dragonflow, Freezer, Loci, Packaging_Rpm,

The Packaging RPM team had our weekly meeting yesterday. We are sorry for the
inconvenience caused by some miscommunication on our side.

We decided to propose Dirk Mueller as PTL for TC appointment for the Stein 
cycle [1], and we will make an effort to avoid this situation in the future.

Thanks,
Javier

[1] - 
http://eavesdrop.openstack.org/meetings/rpm_packaging/2018/rpm_packaging.2018-08-01-13.01.log.html#l-44

> > RefStack, Searchlight, Trove and Winstackers.
> 
> Hello TC,
> A few extra details[1]:
> 
> ---
> Projects[1]              : 65
> Projects with candidates : 57 ( 87.69%)
> Projects with election   :  2 (  3.08%)
> ---
> Need election            :  2 (Senlin Tacker)
> Need appointment         :  8 (Dragonflow Freezer Loci Packaging_Rpm RefStack
>                                Searchlight Trove Winstackers)
> ===
> Stats gathered @ 2018-08-01 00:11:59 UTC
> 
> Of the 8 projects that can be considered leaderless, Trove did have a
> candidate[2] that doesn't meet the ATC criteria in that they do not
> have a merged change.
> 
> I also excluded Security due to the governance review[3] to remove it as
> a project and the companion email discussion[4]
> 
> Yours Tony.
> 
> [1] http://paste.openstack.org/show/727002
> [2] https://review.openstack.org/587333
> [3] https://review.openstack.org/586896
> [4] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132595.html
> 
