[openstack-dev] [nova][neutron] numa aware vswitch

2018-08-24 Thread Guo, Ruijing
Hi, All,

I am verifying the NUMA-aware vswitch feature
(https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-vswitches.html),
but the result is not what I expected.

What am I missing?


Nova configuration:

[filter_scheduler]
track_instance_changes = False
enabled_filters = 
RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,NUMATopologyFilter

[neutron]
physnets = physnet0,physnet1

[neutron_physnet_physnet0]
numa_nodes = 0

[neutron_physnet_physnet1]
numa_nodes = 1


ml2 configuration:

[ml2_type_vlan]
network_vlan_ranges = physnet0,physnet1
[ovs]
vhostuser_socket_dir = /var/lib/libvirt/qemu
bridge_mappings = physnet0:br-physnet0,physnet1:br-physnet1


command list:

openstack network create net0 --external --provider-network-type=vlan 
--provider-physical-network=physnet0 --provider-segment=100
openstack network create net1 --external --provider-network-type=vlan 
--provider-physical-network=physnet1 --provider-segment=200
openstack subnet create --network=net0 --subnet-range=192.168.1.0/24 
--allocation-pool start=192.168.1.200,end=192.168.1.250 --gateway 192.168.1.1 
subnet0
openstack subnet create --network=net1 --subnet-range=192.168.2.0/24 
--allocation-pool start=192.168.2.200,end=192.168.2.250 --gateway 192.168.2.1 
subnet1
openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic 
net-id=net0 vm0
openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic 
net-id=net1 vm1

vm0 and vm1 are created but numa is not enabled:

[libvirt domain XML stripped of its tags by the list archive; only the
values "1" and "1024" survive, and no NUMA topology element is present]






Thanks,
-Ruijing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptg] Post-lunch presentations schedule

2018-08-24 Thread Thierry Carrez

Hi!

The PTG starts in two weeks in Denver! As in Dublin, we'll have some 
presentations running during the second half of the lunch break in the 
lunch room. Here is the schedule:


Monday: Welcome to the PTG
Welcome new teams / Ops meetup, Housekeeping, Community update, Set 
stage for the week, Present Stein goals (ttx, mnaser, kendallW)


Tuesday: Three demo presentations on tools
Gertty (corvus), Storyboard (diablo_rojo), and Simplifying backports 
with git-deps and git-explode (aspiers)


Wednesday: Three general talks
Release management (smcginnis), Project navigator (jimmymcarthur), and 
Tech vision statement intro (zaneb, cdent)


Thursday: PTG: present and future
Our traditional event feedback session, including a presentation of 
future PTG/summit co-location plans for 2019 (jbryce, ttx)


Friday: Lightning talks
Fast-paced 5-min segments to talk about anything... Summaries of team 
plans for Stein encouraged. A presentation of Sphinx in OpenStack by 
stephenfin will open the show.


Hopefully this time we won't have snow disrupting that schedule.
Cheers,

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Regarding cache-based cross-VM side channel attacks in OpenStack

2018-08-24 Thread Adam Heczko
Hi Darshan,
I believe you are referring to the recent Foreshadow / L1TF vulnerability?
If so, OpenStack compute workloads are protected by whatever mitigation
mechanisms are relevant to the specific hypervisor type.
AFAIK OpenStack at this moment supports the KVM/QEMU, Xen, vSphere/ESXi and
Hyper-V hypervisors, and all of them ship implementations of side-channel
protection mechanisms.
You can also consult the OpenStack Security Guide; the compute section
seems to be the most relevant to the question you raised:
https://docs.openstack.org/security-guide/compute.html
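
For example, on a Linux/KVM compute host the kernel's reported L1TF
mitigation status can be read from sysfs (sample output shown; the exact
wording varies by CPU, kernel and SMT settings):

$ cat /sys/devices/system/cpu/vulnerabilities/l1tf
Mitigation: PTE Inversion; VMX: conditional cache flushes, SMT vulnerable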

HTH,


On Fri, Aug 24, 2018 at 7:35 AM Darshan Tank  wrote:

> Dear Sir,
>
> I would like to know, whether cache-based cross-VM side channel attacks
> are possible in OpenStack VM or not ?
>
> If the answer of above question is no, then what are the mechanisms
> employed in OpenStack to prevent or to mitigate such types of security
> threats?
>
> I'm looking forward to hearing from you.
>
> Thanks in advance for your support.
>
> With Warm Regards,
> *Darshan Tank *
>
> [image: Please consider the environment before printing]
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-24 Thread Thierry Carrez

Matt Riedemann wrote:

On 8/23/2018 4:00 AM, Thierry Carrez wrote:
In the OpenStack governance model, contributors to a given piece of 
code control its destiny.


This is pretty damn fuzzy.


Yes, it's definitely not binary.

So if someone wants to split out nova-compute 
into a new repo/project/governance with a REST API and all that, 
nova-core has no say in the matter?


I'd consider the repository split to be a prerequisite.

Then if most people working on the nova-compute repository (not just 
"someone") feel like they are in a distinct group working on a distinct 
piece of code and that the larger group is not representative of them, 
then yes, IMHO they can make a case that a separate project team would 
be more healthy...


--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] keystone 14.0.0.0rc2 (rocky)

2018-08-24 Thread no-reply

Hello everyone,

A new release candidate for keystone for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/keystone/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/keystone/log/?h=stable/rocky

Release notes for keystone can be found at:

https://docs.openstack.org/releasenotes/keystone/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes

2018-08-24 Thread Sam Betts (sambetts)
+1

Sam

On 23/08/2018, 21:38, "Mark Goddard" <m...@stackhpc.com> wrote:

+1

On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen, <j...@jimrollenhagen.com> wrote:
++


// jim

On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger <juliaashleykre...@gmail.com> wrote:
Greetings everyone!

In our team meeting this week we stumbled across the subject of
promoting contributors to be sub-project's core reviewers.
Traditionally it is something we've only addressed as needed or
desired by consensus within those sub-projects, but we were past due
for a look at the entire picture, since not everything should
fall to ironic-core.

And so, I've taken a look at our various repositories and I'm
proposing the following additions:

For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya
Etingof[1]. Ilya has been actively involved with sushy, sushy-tools,
and virtualbmc this past cycle. I've found many of his reviews and
non-voting review comments insightful, showing a willingness to understand. He
has taken on some of the effort that is needed to maintain and keep
these tools usable for the community, and as such adding him to the
core group for these repositories makes lots of sense.

For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2].
Kaifeng has taken on some hard problems in ironic and
ironic-inspector, as well as brought up insightful feedback in
ironic-specs. They are demonstrating a solid understanding that I only
see growing as time goes on.

For sushy-core: Debayan Ray[3]. Debayan has been involved with the
community for some time and has worked on sushy from early on in its
life. He has indicated it is near and dear to him, and he has been
actively reviewing and engaging in discussion on patchsets as his time
has permitted.

With any addition it is good to look at inactivity as well. It saddens
me to say that we've had some contributors move on as priorities have
shifted to where they are no longer involved with the ironic
community. Each person listed below has been inactive for a year or
more and is no longer active in the ironic community. As such I've
removed their group membership from the sub-project core reviewer
groups. Should they return, we will welcome them back to the community
with open arms.

bifrost-core: Stephanie Miller[4]
ironic-inspector-core: Anton Arefivev[5]
ironic-ui-core: Peter Peila[6], Beth Elwell[7]

Thanks,

-Julia

[1]: http://stackalytics.com/?user_id=etingof=marks
[2]: http://stackalytics.com/?user_id=kaifeng=marks
[3]: http://stackalytics.com/?user_id=deray=marks=all
[4]: http://stackalytics.com/?metric=marks=all_id=stephan
[5]: http://stackalytics.com/?user_id=aarefiev=marks
[6]: http://stackalytics.com/?metric=marks=all_id=ppiela
[7]: 
http://stackalytics.com/?metric=marks=all_id=bethelwell=ironic-ui

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tripleo] fluentd logging status

2018-08-24 Thread Juan Badia Payno
Recently, I did a little test regarding fluentd logging on the gates for
master [1], queens [2] and pike [3]. I don't like its status. I'm still
working on the results, but basically there are quite a lot of
misconfigured logs and some services whose logging is not configured at
all.

I think we need to put some effort into logging; the purpose of this
email is to point that out.

First of all, I think we need to enable fluentd on all the scenarios, as
is done in the tests [1][2][3] mentioned at the beginning of this email.
Once everything is OK and some automated logging test is in place, they
can be disabled again.

I'd love not to create a new bug for every misconfigured/unconfigured
service, but if that would help the work get more attention, I will open
them.

The plan I have in mind is something like:
 * Make an initial picture of what the fluentd/log status is (from pike
upwards).
 * Fix all misconfigured services (designate, ...).
 * Add the non-configured services (manila, ...).
 * Add an automated check to catch unconfigured/misconfigured services
(see the sketch below).
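
A hedged sketch of what such a check might look like on a deployed node
(the config path and the service list here are assumptions for
illustration, not TripleO's actual layout):

$ for svc in designate manila; do grep -qr $svc /etc/fluentd/config.d/ || echo "$svc: no fluentd config"; done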

Any comments, doubts or questions are welcome

Cheers,
Juan

[1] https://review.openstack.org/594836
[2] https://review.openstack.org/594838
[3] https://review.openstack.org/594840
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tripleo-heat-templates 9.0.0.0rc1 (rocky)

2018-08-24 Thread no-reply

Hello everyone,

A new release candidate for tripleo-heat-templates for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/tripleo-heat-templates/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


https://git.openstack.org/cgit/openstack/tripleo-heat-templates/log/?h=stable/rocky

Release notes for tripleo-heat-templates can be found at:

https://docs.openstack.org/releasenotes/tripleo-heat-templates/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/tripleo

and tag it *rocky-rc-potential* to bring it to the tripleo-heat-templates
release crew's attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance][horizon] Issues we found when using Community Images

2018-08-24 Thread Jeremy Freudberg
Hi again Andy,

Thanks for the update. Sounds like there is some work to do in various
client libraries first.

I also just tried to launch a Sahara cluster against a community
image-- it failed, because our current validation wants the image ID
to actually appear in the image list. So there will have to be a
server side tweak to Sahara as well (not necessarily using your
desired "list all" mechanism, but it could be).

Anyway, the Sahara team is aware, and we'll keep an eye on this moving forward.

Cheers,
Jeremy


On Thu, Aug 23, 2018 at 8:43 PM, Andy Botting  wrote:
> Hi Jeremy,
>
>>
>> Can you comment more on what needs to be updated in Sahara? Are they
>> simply issues in the UI (sahara-dashboard) or is there a problem
>> consuming community images on the server side?
>
>
> We haven't looked into it much yet, so I couldn't tell you.
>
> I think it would be great to extend the Glance API to include a
> visibility=all filter, so we can actually get ALL available images in a
> single request, then projects could switch over to this.
>
> It might need some thought on how to manage the new API request when using
> an older version of Glance that didn't support visibility=all, but I'm sure
> that could be worked out.
>
> It would be great to hear from one of the Glance devs what they think about
> this approach.
>
> cheers,
> Andy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Chris Dent


Over the past few days a few of us have been experimenting with
extracting placement to its own repo, as has been discussed at
length on this list, and in some etherpads:

https://etherpad.openstack.org/p/placement-extract-stein
https://etherpad.openstack.org/p/placement-extraction-file-notes

As part of that, I've been doing some exploration to tease out the
issues we're going to hit as we do it. None of this is work that
will be merged, rather it is stuff to figure out what we need to
know to do the eventual merging correctly and efficiently.

Please note that doing that is just the near edge of a large
collection of changes that will cascade in many ways to many
projects, tools, distros, etc. The people doing this are aware of
that, and the relative simplicity (and fairly immediate success) of
these experiments is not misleading people into thinking "hey, no
big deal". It's a big deal.

There's a strategy now (described at the end of the first etherpad
listed above) for trimming the nova history to create a thing which
is placement. From the first run of that Ed created a github repo
and I branched that to eventually create:

https://github.com/EdLeafe/placement/pull/2

In that, all the placement unit and functional tests are now
passing, and my placecat [1] integration suite also passes.

That work has highlighted some gaps in the process for trimming
history which will be refined to create another interim repo. We'll
repeat this until the process is smooth, eventually resulting in an
openstack/placement.

To take things further, this morning I pip installed the placement
code represented by that pull request into a nova repo and made some
changes to remove placement from nova.

With some minor adjustments I got the remaining unit and functional
tests working.

That work is in gerrit at

https://review.openstack.org/#/c/596291/

with a hopefully clear commit message about what's going on. As with
the rest of this work, this is not something to merge, rather an
experiment to learn from. The hot spots in the changes are
relatively limited and about what you would expect so, with luck,
should be pretty easy to deal with, some of them even before we
actually do any extracting (to enhance the boundaries between the
two services).

If you're interested in this process please have a look at all the
links and leave comments there, in response to this email, or join
#openstack-placement on freenode to talk about it.

Thanks.

[1] https://github.com/cdent/placecat
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] fluentd logging status

2018-08-24 Thread Ben Nemec



On 08/24/2018 04:17 AM, Juan Badia Payno wrote:
Recently, I did a little test regarding fluentd logging on the gates 
master[1], queens[2], pike [3]. I don't like the status of it, I'm still 
working on them, but basically there are quite a lot of misconfigured 
logs and some services that they are not configured at all.


I think we need to put some effort on the logging. The purpose of this 
email is to point out that we need to do a little effort on the task.


First of all, I think we need to enable fluentd on all the scenarios, as 
it is on the tests [1][2][3] commented on the beginning of the email. 
Once everything is ok and some automatic test regarding logging is done 
they can be disabled.


I'd love not to create a new bug for every misconfigured/unconfigured 
service, but if requested to grab more attention on it, I will open it.


The plan I have in mind is something like:
  * Make an initial picture of what the fluentd/log status is (from pike 
upwards).

  * Fix all misconfigured services. (designate,...)


For the record, Designate in TripleO is not considered production-ready 
at this time.  There are a few other issues that need to be resolved 
too.  I'll add this to my todo list though.



  * Add the non-configured services. (manila,...)
  * Add an automated check to find a possible unconfigured/misconfigured 
problem.


This would be good.  I copy-pasted the log config from another service 
but had no idea whether it was correct (apparently it wasn't :-).




Any comments, doubts or questions are welcome

Cheers,
Juan

[1] https://review.openstack.org/594836
[2] https://review.openstack.org/594838
[3] https://review.openstack.org/594840



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova_powervm 7.0.0.0rc2 (rocky)

2018-08-24 Thread no-reply

Hello everyone,

A new release candidate for nova_powervm for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/nova-powervm/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/nova_powervm/log/?h=stable/rocky

Release notes for nova_powervm can be found at:

https://docs.openstack.org/releasenotes/nova_powervm/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Rocky RC1 released!

2018-08-24 Thread Emilien Macchi
We just released Rocky RC1 and branched stable/rocky for most of tripleo
repos, please let us know if we missed something.
Please don't forget to backport the patches that land in master and that
you want in Rocky.

We're currently investigating whether or not we'll need an RC2, so don't
be surprised if Launchpad bugs are moved around during the next few days.

Thanks,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tripleo-puppet-elements 9.0.0.0rc1 (rocky)

2018-08-24 Thread no-reply

Hello everyone,

A new release candidate for tripleo-puppet-elements for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/tripleo-puppet-elements/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


https://git.openstack.org/cgit/openstack/tripleo-puppet-elements/log/?h=stable/rocky

Release notes for tripleo-puppet-elements can be found at:

https://docs.openstack.org/releasenotes/tripleo-puppet-elements/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Keystone Team Update - Week of 20 August 2018

2018-08-24 Thread Colleen Murphy
# Keystone Team Update - Week of 20 August 2018

## News

We ended up releasing an RC2 after all in order to include placeholder 
sqlalchemy migrations for Rocky, thanks wxy for catching it!

## Open Specs

Search query: https://bit.ly/2Pi6dGj

Lance reproposed the auth receipts and application credentials specs that we 
punted on last cycle for Stein.

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 13 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 75 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

If that seems like a lot more than last week, it's because someone has 
helpfully proposed many patches supporting the python3-first community goal[1]. 
However, they haven't coordinated with the goal champions and have missed some 
steps[2], like proposing the removal of jobs from project-config and proposing 
jobs to the stable branches. I would recommend coordinating with the 
python3-first goal champions on merging these patches. The good news is that 
all of our projects seem to work with python 3.6!

[1] https://governance.openstack.org/tc/goals/stein/python3-first.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html

## Bugs

This week we opened 4 new bugs and closed 1.

Bugs opened (4) 
Bug #1788415 (keystone:High) opened by Lance Bragstad 
https://bugs.launchpad.net/keystone/+bug/1788415 
Bug #1788694 (keystone:High) opened by Lance Bragstad 
https://bugs.launchpad.net/keystone/+bug/1788694 
Bug #1787874 (keystone:Medium) opened by wangxiyuan 
https://bugs.launchpad.net/keystone/+bug/1787874 
Bug #1788183 (oslo.policy:Undecided) opened by Stephen Finucane 
https://bugs.launchpad.net/oslo.policy/+bug/1788183 

Bugs closed (1) 
Bug #1771203 (python-keystoneclient:Undecided) 
https://bugs.launchpad.net/python-keystoneclient/+bug/1771203 

Bugs fixed (0)

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

We're at the end of the RC period with the official release happening next week.

## Shout-outs

Thanks everyone for a great release!

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and 
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] numa aware vswitch

2018-08-24 Thread Stephen Finucane
On Fri, 2018-08-24 at 07:55 +, Guo, Ruijing wrote:
> Hi, All,
>  
> I am verifying the NUMA-aware vswitch feature
> (https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/numa-aware-vswitches.html),
> but the result is not what I expected.
>  
> What am I missing?
>  
>  
> Nova configuration:
>  
> [filter_scheduler]
> track_instance_changes = False
> enabled_filters = 
> RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,SameHostFilter,DifferentHostFilter,NUMATopologyFilter
>  
> [neutron]
> physnets = physnet0,physnet1
>  
> [neutron_physnet_physnet0]
> numa_nodes = 0
>  
> [neutron_physnet_physnet1]
> numa_nodes = 1
>  
>  
> ml2 configuration:
>  
> [ml2_type_vlan]
> network_vlan_ranges = physnet0,physnet1
> [ovs]
> vhostuser_socket_dir = /var/lib/libvirt/qemu
> bridge_mappings = physnet0:br-physnet0,physnet1:br-physnet1
>  
>  
> command list:
>  
> openstack network create net0 --external --provider-network-type=vlan 
> --provider-physical-network=physnet0 --provider-segment=100
> openstack network create net1 --external --provider-network-type=vlan 
> --provider-physical-network=physnet1 --provider-segment=200
> openstack subnet create --network=net0 --subnet-range=192.168.1.0/24 
> --allocation-pool start=192.168.1.200,end=192.168.1.250 --gateway 192.168.1.1 
> subnet0
> openstack subnet create --network=net1 --subnet-range=192.168.2.0/24 
> --allocation-pool start=192.168.2.200,end=192.168.2.250 --gateway 192.168.2.1 
> subnet1
> openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic 
> net-id=net0 vm0
> openstack server create --flavor 1 --image=cirros-0.3.5-x86_64-disk --nic 
> net-id=net1 vm1
>  
> vm0 and vm1 are created but numa is not enabled:
> [libvirt domain XML stripped of its tags by the list archive; only the
> values "1" and "1024" survive, and no NUMA topology element is present]
 
Using this won't add a NUMA topology - it'll just control how any
topology present will be mapped to the guest. You need to enable
dedicated CPUs or explicitly request a NUMA topology for this to
work.

openstack flavor set --property hw:numa_nodes=1 1



openstack flavor set --property hw:cpu_policy=dedicated 1


This is perhaps something that we could change in the future, though I
haven't given it much thought yet.
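
As a hedged way to verify (the domain name below is hypothetical; look it
up with 'virsh list'): once one of the properties above is set and the
guest is re-created, the libvirt XML should gain a NUMA pinning element,
e.g. for a guest on physnet1 (host NUMA node 1):

$ virsh dumpxml instance-00000001 | grep -A 2 numatune
  <numatune>
    <memnode cellid='0' mode='strict' nodeset='1'/>
  </numatune>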

Regards,
Stephen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-jenkins][Release-job-failures] Release of openstack/python-jenkins failed

2018-08-24 Thread Sean McGinnis
See below for links to a release job failure for python-jenkins.

This was a ReadTheDocs publishing job. It appears to have failed because
the necessary steps described in this earlier post were never completed:

http://lists.openstack.org/pipermail/openstack-dev/2018-August/132836.html


- Forwarded message from z...@openstack.org -

Date: Fri, 24 Aug 2018 14:33:25 +
From: z...@openstack.org
To: release-job-failu...@lists.openstack.org
Subject: [Release-job-failures] Release of openstack/python-jenkins failed
Reply-To: openstack-dev@lists.openstack.org

Build failed.

- trigger-readthedocs-webhook 
http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/trigger-readthedocs-webhook/cec87fd/
 : FAILURE in 1m 49s
- release-openstack-python 
http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/release-openstack-python/68b356f/
 : SUCCESS in 4m 03s
- announce-release 
http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/announce-release/04fd7c3/
 : SUCCESS in 4m 10s
- propose-update-constraints 
http://logs.openstack.org/c4/c473b0af94342b269593dd24e5093d33a94b5046/release/propose-update-constraints/3eaf094/
 : SUCCESS in 2m 08s

___
Release-job-failures mailing list
release-job-failu...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

- End forwarded message -

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] fluentd logging status

2018-08-24 Thread Remo Mattei
My co-worker has it working on OOO (Pike release, bare metal, not
containers). There was a plan to clean up the code and open it up, since
it's all Ansible playbooks doing the work.

Remo 

> On Aug 24, 2018, at 07:37, Ben Nemec  wrote:
> 
> 
> 
> On 08/24/2018 04:17 AM, Juan Badia Payno wrote:
>> Recently, I did a little test regarding fluentd logging on the gates 
>> master[1], queens[2], pike [3]. I don't like the status of it, I'm still 
>> working on them, but basically there are quite a lot of misconfigured logs 
>> and some services that they are not configured at all.
>> I think we need to put some effort on the logging. The purpose of this email 
>> is to point out that we need to do a little effort on the task.
>> First of all, I think we need to enable fluentd on all the scenarios, as it 
>> is on the tests [1][2][3] commented on the beginning of the email. Once 
>> everything is ok and some automatic test regarding logging is done they can 
>> be disabled.
>> I'd love not to create a new bug for every misconfigured/unconfigured 
>> service, but if requested to grab more attention on it, I will open it.
>> The plan I have in mind is something like:
>>  * Make an initial picture of what the fluentd/log status is (from pike 
>> upwards).
>>  * Fix all misconfigured services. (designate,...)
> 
> For the record, Designate in TripleO is not considered production-ready at 
> this time.  There are a few other issues that need to be resolved too.  I'll 
> add this to my todo list though.
> 
>>  * Add the non-configured services. (manila,...)
>>  * Add an automated check to find a possible unconfigured/misconfigured 
>> problem.
> 
> This would be good.  I copy-pasted the log config from another service but 
> had no idea whether it was correct (apparently it wasn't :-).
> 
>> Any comments, doubts or questions are welcome
>> Cheers,
>> Juan
>> [1] https://review.openstack.org/594836
>> [2] https://review.openstack.org/594838
>> [3] https://review.openstack.org/594840
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] tripleo-image-elements 9.0.0.0rc1 (rocky)

2018-08-24 Thread no-reply

Hello everyone,

A new release candidate for tripleo-image-elements for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/tripleo-image-elements/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:


https://git.openstack.org/cgit/openstack/tripleo-image-elements/log/?h=stable/rocky

Release notes for tripleo-image-elements can be found at:

https://docs.openstack.org/releasenotes/tripleo-image-elements/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] Puppet weekly recap - week 34

2018-08-24 Thread Tobias Urdin

Hello all Puppeteers!

Welcome to the weekly Puppet recap for week 34.
This is a weekly overview of what has changed in the Puppet OpenStack 
project the past week.


CHANGES
===

We haven't had much changes this week, mostly CI fixes due to changes in 
packaging.


* We've merged all stable/rocky related changes except for Keystone [1] [2]
** This is blocked by a packaging issue [3], because we don't update
packages before runs in the beaker tests.

** Please review [4] and let us know what you think.
** This is also blocking this [5]
* Fixed puppet-ovn to make sure OVS bridge is created before setting 
mac-table-size [6]


[1] https://review.openstack.org/#/c/593787/
[2] https://review.openstack.org/#/c/593786/
[3] https://bugzilla.redhat.com/show_bug.cgi?id=1620221
[4] https://review.openstack.org/#/c/595370/
[5] https://review.openstack.org/#/c/589877/
[6] https://review.openstack.org/#/c/594128/

REVIEWS
==

We have some open changes that needs reviews.

* Update packages after adding repos 
https://review.openstack.org/#/c/595370/
* Make vlan_transparent in neutron.conf configurable 
https://review.openstack.org/#/c/591899/
* neutron-dynamic-routing wrong package for Debian 
https://review.openstack.org/#/c/594058/ (and backports)
* Add workers to magnum api and conductor 
https://review.openstack.org/#/c/595228/

* Correct default number of threads https://review.openstack.org/#/c/591493/
* Deprecate unused notify_on_api_faults parameter 
https://review.openstack.org/#/c/593034/
* Resolve duplicate declaration with split of api / metadata wsgi 
https://review.openstack.org/#/c/595523/


SPECS
=
No new specs, only one open spec for review.

* Add parameter data types spec https://review.openstack.org/#/c/568929/

OTHER
=

* No new progress on the Storyboard migration, we will continue letting 
you know once we have more details about dates.
* Going to the PTG? We have some cores that will be there, make sure you 
say hi! [7]
** We don't have any planned talks or discussions and therefore don't
need any session or a moderator, but we are always available if you need
us on IRC at #puppet-openstack.


* Interested in the current status for Rocky? See [8]. Or maybe you want
to plan some awesome new cool thing?
** Start planning Stein now [9] and let us know! We would love any new 
contributors with new cool ideas!


* We should do a walk-through to abandon old open changes; if anybody is
interested in helping with such an effort, please let me know.


[7] https://etherpad.openstack.org/p/puppet-ptg-stein
[8] https://etherpad.openstack.org/p/puppet-openstack-rocky
[9] https://etherpad.openstack.org/p/puppet-openstack-stein

Wishing you all a great weekend!

Best regards
Tobias (tobias-urdin @ IRC)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] numa aware vswitch

2018-08-24 Thread Matt Riedemann

On 8/24/2018 8:58 AM, Stephen Finucane wrote:

Using this won't add a NUMA topology - it'll just control how any
topology present will be mapped to the guest. You need to enable
dedicated CPUs or explicitly request a NUMA topology for this to
work.

openstack flavor set --property hw:numa_nodes=1 1



openstack flavor set --property hw:cpu_policy=dedicated 1


This is perhaps something that we could change in the future, though I
haven't given it much thought yet.


Looks like the admin guide [1] should be updated to at least refer to 
the flavor user guide on setting up these types of flavors?


[1] 
https://docs.openstack.org/nova/latest/admin/networking.html#numa-affinity


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Keystone Team Update - Week of 20 August 2018

2018-08-24 Thread Lance Bragstad


On 08/24/2018 10:15 AM, Colleen Murphy wrote:
> # Keystone Team Update - Week of 20 August 2018
>
> ## News
>
> We ended up releasing an RC2 after all in order to include placeholder 
> sqlalchemy migrations for Rocky, thanks wxy for catching it!
>
> ## Open Specs
>
> Search query: https://bit.ly/2Pi6dGj
>
> Lance reproposed the auth receipts and application credentials specs that we 
> punted on last cycle for Stein.
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2IACk3F
>
> We merged 13 changes this week.
>
> ## Changes that need Attention
>
> Search query: https://bit.ly/2wv7QLK
>
> There are 75 changes that are passing CI, not in merge conflict, have no 
> negative reviews and aren't proposed by bots.
>
> If that seems like a lot more than last week, it's because someone has 
> helpfully proposed many patches supporting the python3-first community 
> goal[1]. However, they haven't coordinated with the goal champions and have 
> missed some steps[2], like proposing the removal of jobs from project-config 
> and proposing jobs to the stable branches. I would recommend coordinating 
> with the python3-first goal champions on merging these patches. The good news 
> is that all of our projects seem to work with python 3.6!
>
> [1] https://governance.openstack.org/tc/goals/stein/python3-first.html
> [2] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133610.html
>
> ## Bugs
>
> This week we opened 4 new bugs and closed 1.
>
> Bugs opened (4) 
> Bug #1788415 (keystone:High) opened by Lance Bragstad 
> https://bugs.launchpad.net/keystone/+bug/1788415 
> Bug #1788694 (keystone:High) opened by Lance Bragstad 
> https://bugs.launchpad.net/keystone/+bug/1788694 
> Bug #1787874 (keystone:Medium) opened by wangxiyuan 
> https://bugs.launchpad.net/keystone/+bug/1787874 
> Bug #1788183 (oslo.policy:Undecided) opened by Stephen Finucane 
> https://bugs.launchpad.net/oslo.policy/+bug/1788183 
>
> Bugs closed (1) 
> Bug #1771203 (python-keystoneclient:Undecided) 
> https://bugs.launchpad.net/python-keystoneclient/+bug/1771203 
>
> Bugs fixed (0)
>
> ## Milestone Outlook
>
> https://releases.openstack.org/rocky/schedule.html
>
> We're at the end of the RC period with the official release happening next 
> week.
>
> ## Shout-outs
>
> Thanks everyone for a great release!

++

I can't say thanks enough to everyone who contributes to this in some
way, shape, or form. I'm looking forward to Stein :)

>
> ## Help with this newsletter
>
> Help contribute to this newsletter by editing the etherpad: 
> https://etherpad.openstack.org/p/keystone-team-newsletter
> Dashboard generated using gerrit-dash-creator and 
> https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ansible roles in tripleo

2018-08-24 Thread Jill Rouleau
On Thu, 2018-08-23 at 10:42 -0400, Dan Prince wrote:
> On Tue, Aug 14, 2018 at 1:53 PM Jill Rouleau  wrote:
> > 
> > 
> > Hey folks,
> > 
> > Like Alex mentioned[0] earlier, we've created a bunch of ansible
> > roles
> > for tripleo specific bits.  The idea is to start putting some basic
> > cookiecutter type things in them to get things started, then move
> > some
> > low-hanging fruit out of tripleo-heat-templates and into the
> > appropriate
> > roles.  For example, docker/services/keystone.yaml could have
> > upgrade_tasks and fast_forward_upgrade_tasks moved into ansible-
> > role-
> > tripleo-keystone/tasks/(upgrade.yml|fast_forward_upgrade.yml), and
> > the
> > t-h-t updated to
> > include_role: ansible-role-tripleo-keystone
> >   tasks_from: upgrade.yml
> > without having to modify any puppet or heat directives.
> > 
> > This would let us define some patterns for implementing these
> > tripleo
> > roles during Stein while looking at how we can make use of ansible
> > for
> > things like core config.
> I like the idea of consolidating the Ansible stuff and getting out of
> the practice of inlining it into t-h-t. Especially the "core config"
> which I take to mean moving away from Puppet and towards Ansible for
> service level configuration. But presumably we are going to rely on
> the upstream Openstack ansible-os_* projects to do the heavy config
> lifting for us here though right? We won't have to do much on our side
> to leverage that I hope other than translating old hiera to equivalent
> settings for the config files to ensure some backwards compatibility.
> 

We'll hopefully be able to rely on the OSA roles for a lot of the
config, yes, but there will still be a fair bit of TripleO specific
stuff that will need to be handled, and that's what we plan to do in
these ansible-role-tripleo-* repos.  

> While I agree with the goals I do wonder if the shear number of git
> repos we've created here is needed. Like with puppet-tripleo we were
> able to combine a set of "small lightweight" manifests in a way to
> wrap them around the upstream Puppet modules. Why not do the same with
> ansible-role-tripleo? My concern is that we've created so many cookie
> cutter repos with boilerplate code in them that ends up being much
> heavier than the files which will actually reside in many of these
> repos. This in addition to the extra review work and RPM packages we
> need to constantly maintain.
> 
In theory it should be roughly the same amount of commits/review work,
just a question of which repo they go to: service-specific patches go to
the appropriate role, and shared plugins, libs, etc. go to the
tripleo-ansible project repo.

We want the roles to be modular rather than monolithic, so only the
roles that are being used in a given environment need to be pulled in.
Also, by having them separated, they should be easier to parse and
contribute to.  Yes, it's a higher number of repos that could be
contributed to, but a person won't have to mentally front-load how all
of the possible things work just to be able to add an upgrade task for
service $foo, like they do today with t-h-t.
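
As a concrete illustration, here is a hedged sketch of how t-h-t could
consume a service's upgrade tasks from one of these roles (the file name
is hypothetical and the role name is taken from the example earlier in
this thread; the exact wiring is still to be defined):

$ cat playbook-upgrade-keystone.yml   # hypothetical file
- hosts: Controller
  tasks:
    - include_role:
        name: ansible-role-tripleo-keystone
        tasks_from: upgrade.yml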

Unless there's a different breakdown/layout you're thinking of beyond
"dump everything in one place"?

I'm interested in other options if we have some to reduce packaging or
maintenance overhead.  With other deployers I've done stable branches
checked out straight from git, but I doubt that would fly for
downstream.  We could push the roles to Ansible Galaxy, but we would
need to think about how that would work for offline deploys, and they
would still need to be maintained there; it's just painting the problem
a different color.

- Jill


> Dan
> 
> > 
> > 
> > t-h-t and config-download will still drive the vast majority of
> > playbook
> > creation for now, but for new playbooks (such as for operations
> > tasks)
> > tripleo-ansible[1] would be our project directory.
> > 
> > So in addition to the larger conversation about how deployers can
> > start
> > to standardize how we're all using ansible, I'd like to also have a
> > tripleo-specific conversation at PTG on how we can break out some of
> > our
> > ansible that's currently embedded in t-h-t into more modular and
> > flexible roles.
> > 
> > Cheers,
> > Jill
> > 
> > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-August/133119.html
> > [1] https://git.openstack.org/cgit/openstack/tripleo-ansible/tree/
> > __
> > 
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsub
> > scribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubsc
> ribe
> 

Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Sean McGinnis
> 
> After some prompting from gibi, that code has now been adjusted so
> that requirements.txt and tox.ini [1] make sure that the extract
> placement branch is installed into the test virtualenvs. So in the
> gate the unit and functional tests pass. Other jobs do not because
> of [1].
> 
> In the intervening time I've taken that code, built a devstack that
> uses a nova-placement-api wsgi script that uses nova.conf and the
> extracted placement code. It runs against the nova-api database.
> 
> Created a few servers. Worked.
> 

Excellent!

> Then I switched the devstack@placement-unit unit file to point to
> the placement-api wsgi script, and configured
> /etc/placement/placement.conf to have a
> [placement_database]/connection of the nova-api db.
> 
> Created a few servers. Worked.
> 
> Thanks.
> 
> [1] As far as I can tell a requirements.txt entry of
> 
> -e 
> git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement
> 
> will install just fine with 'pip install -r requirements.txt', but
> if I do 'pip install nova' and that line is in requirements.txt it
> does not work. This means I had to change tox.ini to have a deps
> setting of:
> 
> deps = -r{toxinidir}/test-requirements.txt
>-r{toxinidir}/requirements.txt
> 
> to get the functional and unit tests to build working virtualenvs.
> That this is not happening in the dsvm-based zuul jobs mean that the
> tests can't run or pass. What's going on here? Ideas?

Just conjecture on my part, but I know we have it documented somewhere that URL
paths to requirements are not allowed. Maybe we do something to actively
prevent that?
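
As a further hedged illustration of the asymmetry being described (the
branch name comes from the quoted message; the pbr behavior is my
understanding and worth verifying): pip honors an editable VCS line when
it reads the requirements file directly, but such a line cannot be
carried into the sdist's install_requires metadata, so installing the
package itself falls back to resolving a bare 'placement' name:

$ pip install -r requirements.txt  # clones cd/make-it-work, installs placement
$ pip install nova                 # metadata names only 'placement'; not on PyPI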


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] The Weekly Owl - 29th Edition

2018-08-24 Thread Emilien Macchi
Welcome to the twenty-ninthest edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn what's
new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-August/133094.html

General announcements
=
+--> This week we released Rocky RC1, branched stable/rocky and unless
there are critical bugs we'll call it our final stable release.
+--> The team is preparing for the next PTG:
https://etherpad.openstack.org/p/tripleo-ptg-stein

CI status
=
+--> Sprint theme: Zuul v3 migration (
https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter=label:Sprint%2018%20CI
)
+--> The Ruck and Rover for this sprint are Marios and Wes. Please report
any CI issue to them.
+--> Promotion on master is 11 days, 1 day on Rocky, 3 days on Queens, 3
days on Pike and 1 day on Ocata.

Upgrades
=
+--> Adding support for upgrades when OpenShift is deployed.

Containers
=
+--> Efforts to support Podman tracked here:
https://trello.com/b/S8TmOU0u/tripleo-podman

config-download
=
+--> This squad is winding down and we are moving forward with the Edge squad.

Edge
=
+--> New squad created by James:
https://etherpad.openstack.org/p/tripleo-edge-squad-status (more to come)

Integration
=
+--> No updates this week.

UI/CLI
=
+--> No updates this week.

Validations
=
+--> No updates this week, reviews are needed:
https://etherpad.openstack.org/p/tripleo-validations-squad-status

Networking
=
+--> Good progress on Ansible ML2 driver

Workflows
=
+--> Planning Stein: better Ansible integration, UI convergence, etc.

Security
=
+--> Working on SElinux for containers (related to podman integration
mainly)

Owl fact
=
"One single Owl can go fast. Multiple owls, together, can go far."
Source: a mix of an African proverb and my Friday-afternoon imagination.


Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][bifrost][sushy][ironic-inspector][ironic-ui][virtualbmc] sub-project/repository core reviewer changes

2018-08-24 Thread Vladyslav Drok
+1 to all the changes.

On Fri, Aug 24, 2018 at 12:12 PM Sam Betts (sambetts) 
wrote:

> +1
>
>
>
> Sam
>
>
>
> On 23/08/2018, 21:38, "Mark Goddard"  wrote:
>
>
>
> +1
>
>
>
> On Thu, 23 Aug 2018, 20:43 Jim Rollenhagen, 
> wrote:
>
> ++
>
>
>
> // jim
>
>
>
> On Thu, Aug 23, 2018 at 2:24 PM, Julia Kreger 
> wrote:
>
> Greetings everyone!
>
> In our team meeting this week we stumbled across the subject of
> promoting contributors to be sub-project's core reviewers.
> Traditionally it is something we've only addressed as needed or
> desired by consensus with-in those sub-projects, but we were past due
> time to take a look at the entire picture since not everything should
> fall to ironic-core.
>
> And so, I've taken a look at our various repositories and I'm
> proposing the following additions:
>
> For sushy-core, sushy-tools-core, and virtualbmc-core: Ilya
> Etingof[1]. Ilya has been actively involved with sushy, sushy-tools,
> and virtualbmc this past cycle. I've found many of his reviews and
> non-voting review comments insightful and willing to understand. He
> has taken on some of the effort that is needed to maintain and keep
> these tools usable for the community, and as such adding him to the
> core group for these repositories makes lots of sense.
>
> For ironic-inspector-core and ironic-specs-core: Kaifeng Wang[2].
> Kaifeng has taken on some hard problems in ironic and
> ironic-inspector, as well as brought up insightful feedback in
> ironic-specs. They are demonstrating a solid understanding that I only
> see growing as time goes on.
>
> For sushy-core: Debayan Ray[3]. Debayan has been involved with the
> community for some time and has worked on sushy from early on in its
> life. He has indicated it is near and dear to him, and he has been
> actively reviewing and engaging in discussion on patchsets as his time
> has permitted.
>
> With any addition it is good to look at inactivity as well. It saddens
> me to say that we've had some contributors move on as priorities have
> shifted to where they are no longer involved with the ironic
> community. Each person listed below has been inactive for a year or
> more and is no longer active in the ironic community. As such I've
> removed their group membership from the sub-project core reviewer
> groups. Should they return, we will welcome them back to the community
> with open arms.
>
> bifrost-core: Stephanie Miller[4]
> ironic-inspector-core: Anton Arefivev[5]
> ironic-ui-core: Peter Peila[6], Beth Elwell[7]
>
> Thanks,
>
> -Julia
>
> [1]: http://stackalytics.com/?user_id=etingof=marks
> [2]: http://stackalytics.com/?user_id=kaifeng=marks
> [3]: http://stackalytics.com/?user_id=deray=marks=all
> [4]: http://stackalytics.com/?metric=marks=all_id=stephan
> [5]: http://stackalytics.com/?user_id=aarefiev=marks
> [6]: http://stackalytics.com/?metric=marks=all_id=ppiela
> [7]:
> http://stackalytics.com/?metric=marks=all_id=bethelwell=ironic-ui
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] numa aware vswitch

2018-08-24 Thread Stephen Finucane
On Fri, 2018-08-24 at 09:13 -0500, Matt Riedemann wrote:
> On 8/24/2018 8:58 AM, Stephen Finucane wrote:
> > Using this won't add a NUMA topology - it'll just control how any
> > topology present will be mapped to the guest. You need to enable
> > dedicated CPUs or explicitly request a NUMA topology for this to
> > work.
> > 
> > openstack flavor set --property hw:numa_nodes=1 1
> > 
> > 
> > 
> > openstack flavor set --property hw:cpu_policy=dedicated 1
> > 
> > 
> > This is perhaps something that we could change in the future, though I
> > haven't given it much thought yet.
> 
> Looks like the admin guide [1] should be updated to at least refer to 
> the flavor user guide on setting up these types of flavors?
> 
> [1] https://docs.openstack.org/nova/latest/admin/networking.html#numa-affinity

Good idea.

https://review.openstack.org/596393

Stephen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Chris Dent

On Fri, 24 Aug 2018, Doug Hellmann wrote:


I guess all of the people who complained so loudly about the global in 
oslo.config are gone?


It's a different context. In a testing environment where there is
already a well established pattern of use it's not a big deal.
Global in oslo.config is still horrible, but again: a well
established pattern of use.

This is part of why I think it is better positioned in oslotest as
that signals its limitations.

However, like I said in my other message, copying nova's thing has
proven fine.
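
For anyone who hasn't seen it, the pattern being discussed is tiny; here
is a minimal sketch of the shape of nova's helper (not a verbatim copy of
nova's module):

$ python - <<'EOF'
import uuid

class UUIDSentinels(object):
    def __init__(self):
        self._uuids = {}
    def __getattr__(self, name):
        # one memoized uuid per attribute name, stable within a run
        return self._uuids.setdefault(name, str(uuid.uuid4()))

uuids = UUIDSentinels()
print(uuids.instance1 == uuids.instance1)
EOF
True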

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Rocky RC1 released!

2018-08-24 Thread Alex Schultz
On Fri, Aug 24, 2018 at 9:09 AM, Emilien Macchi  wrote:
> We just released Rocky RC1 and branched stable/rocky for most of tripleo
> repos, please let us know if we missed something.
> Please don't forget to backport the patches that land in master and that you
> want in Rocky.
>
> We're currently investigating whether or not we'll need an RC2, so don't
> be surprised if Launchpad bugs are moved around during the next few days.
>

I've created a Rocky RC2 milestone in launchpad and moved the current
open critical bugs over to it. I would like to target August 31, 2018
(next Friday) as a date to identify any major blockers that would
require an RC2.  If none are found, I propose that we mark RC1 as the
final release for Rocky.

Please take a look at the current open Critical issues and move them
to Stein if appropriate.

https://bugs.launchpad.net/tripleo/?field.searchtext==-importance%3Alist=NEW%3Alist=CONFIRMED%3Alist=TRIAGED%3Alist=INPROGRESS_option=any=_reporter=_commenter==_subscriber=%3Alist=86388=_combinator=ANY_cve.used=_dupes.used=_dupes=on_me.used=_patch.used=_branches.used=_branches=on_no_branches.used=_no_branches=on_blueprints.used=_blueprints=on_no_blueprints.used=_no_blueprints=on=Search

Thanks,
-Alex


> Thanks,
> --
> Emilien Macchi
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][api][grapql] Proof of Concept

2018-08-24 Thread Miguel Lavalle
Gilles,

Ok. Added the patches in Gerrit to this coming Tuesday Neutron weekly
meeting agenda. I will highlight the patches during the meeting

Regards

On Thu, Aug 23, 2018 at 7:09 PM, Gilles Dubreuil 
wrote:

>
>
> On 24/08/18 04:58, Slawomir Kaplonski wrote:
>
>> Hi Miguel,
>>
>> I’m not sure but maybe You were looking for those patches:
>>
>> https://review.openstack.org/#/q/project:openstack/neutron+branch:feature/graphql
>>
>>
> Yes that's the one, it's under Tristan Cacqueray name as he helped getting
> started.
>
> Wiadomość napisana przez Miguel Lavalle  w dniu
>>> 23.08.2018, o godz. 18:57:
>>>
>>> Hi Gilles,
>>>
>>> Ed pinged me earlier today in IRC in regards to this topic. After
>>> reading your message, I assumed that you had patches up for review in
>>> Gerrit. I looked for them, with the intent to list them in the agenda of
>>> the next Neutron team meeting, to draw attention to them. I couldn't find
>>> any, though: https://review.openstack.org/#/q/owner:%22Gilles+Dubreuil+%253Cgdubreui%2540redhat.com%253E%22
>>>
>>> So, how can we help? This is our meetings schedule:
>>> http://eavesdrop.openstack.org/#Neutron_Team_Meeting. Given that you
>>> are Down Under at UTC+10, the most convenient meeting for you is the one on
>>> Monday (even weeks), which would be Tuesday at 7am for you. Please note
>>> that we have an on demand section in our agenda:
>>> https://wiki.openstack.org/wiki/Network/Meetings#On_Demand_Agenda. Feel
>>> free to add topics in that section when you have something to discuss with
>>> the Neutron team.
>>>
>>
> Now that we have a working base API serving GraphQL requests, we need to
> provide the data in accordance with Oslo Policy and such.
>
> Thanks for the pointers, I'll add the latter to the Agenda and will be at
> next meeting.
>
>
>
>
>>> Best regards
>>>
>>> Miguel
>>>
>>> On Sun, Aug 19, 2018 at 10:57 PM, Gilles Dubreuil 
>>> wrote:
>>>
>>>
>>> On 25/07/18 23:48, Ed Leafe wrote:
>>> On Jun 6, 2018, at 7:35 PM, Gilles Dubreuil  wrote:
>>> The branch is now available under feature/graphql on the neutron core
>>> repository [1].
>>> I wanted to follow up with you on this effort. I haven’t seen any
>>> activity on StoryBoard for several weeks now, and wanted to be sure that
>>> there was nothing blocking you that we could help with.
>>>
>>>
>>> -- Ed Leafe
>>>
>>>
>>>
>>> Hi Ed,
>>>
>>> Thanks for following up.
>>>
>>> There have been two essential counterproductive factors in this effort.
>>>
>>> The first is that I've been busy attending to issues in other parts of
>>> my job.
>>> The second one is the lack of response/follow-up from the Neutron core
>>> team.
>>>
>>> We have all the plumbing in place but we need to layer the data through
>>> oslo policies.
>>>
>>> Cheers,
>>> Gilles
>>>
>>>
>> —
>> Slawek Kaplonski
>> Senior software engineer
>> Red Hat
>>
>>
>
> --
> Gilles Dubreuil
> Senior Software Engineer - Red Hat - Openstack DFG Integration
> Email: gil...@redhat.com
> GitHub/IRC: gildub
> Mobile: +61 400 894 219
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] nova 18.0.0.0rc3 (rocky)

2018-08-24 Thread no-reply

Hello everyone,

A new release candidate for nova for the end of the Rocky
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/nova/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Rocky release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/rocky release
branch at:

https://git.openstack.org/cgit/openstack/nova/log/?h=stable/rocky

Release notes for nova can be found at:

https://docs.openstack.org/releasenotes/nova/




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-24 Thread Matt Riedemann

On 8/22/2018 9:14 PM, Sam Morrison wrote:

I think in our case we’d only migrate between cells if we know the network and 
storage is accessible and would never do it if not.
Thinking of moving from old to new hardware at a cell level.


If it's done via the resize API at the top, initiated by a non-admin 
user, how would you prevent it? We don't really know if we're going 
across cell boundaries until the scheduler picks a host, and today we 
restrict all move operations to within the same cell. But that's part of 
the problem that needs addressing - how to tell the scheduler when it's 
OK to get target hosts for a move from all cells rather than the cell 
that the server is currently in.




If storage and network aren’t available, ideally it would fail at the API request.


Not sure this is something we can really tell beforehand in the API, but 
maybe possible depending on whatever we come up with regarding volumes 
and ports. I expect this is a whole new orchestrated task in the 
(super)conductor when it happens. So while I think about using 
shelve/unshelve from a compute operation standpoint, I don't want to try 
and shoehorn this into existing conductor tasks.




There are also ceph-backed instances, so this is something to take into 
account which nova would be responsible for.


Not everyone is using ceph and it's not really something the API is 
aware of...at least not today - but long-term with shared storage 
providers in placement we might be able to leverage this for 
non-volume-backed instances, i.e. if we know the source and target host 
are on the same shared storage, regardless of cell boundary, we could 
just move rather than use snapshots (shelve). But I think phase1 is 
easiest universally if we are using snapshots to get from cell 1 to cell 2.




I’ll be in Denver so we can discuss more there too.


Awesome.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Chris Dent

On Fri, 24 Aug 2018, Chris Dent wrote:


That work is in gerrit at

   https://review.openstack.org/#/c/596291/

with a hopefully clear commit message about what's going on. As with
the rest of this work, this is not something to merge, rather an
experiment to learn from. The hot spots in the changes are
relatively limited and about what you would expect so, with luck,
should be pretty easy to deal with, some of them even before we
actually do any extracting (to enhance the boundaries between the
two services).


After some prompting from gibi, that code has now been adjusted so
that requirements.txt and tox.ini [1] make sure that the extracted
placement branch is installed into the test virtualenvs. So in the
gate the unit and functional tests pass. Other jobs do not because
of [1].

In the intervening time I've taken that code, built a devstack that
uses a nova-placement-api wsgi script that uses nova.conf and the
extracted placement code. It runs against the nova-api database.

Created a few servers. Worked.

Then I switched the devstack@placement-unit unit file to point to
the placement-api wsgi script, and configured
/etc/placement/placement.conf to have a
[placement_database]/connection of the nova-api db.

Created a few servers. Worked.

Thanks.

[1] As far as I can tell a requirements.txt entry of

-e git+https://github.com/cdent/placement-1.git@cd/make-it-work#egg=placement

will install just fine with 'pip install -r requirements.txt', but
if I do 'pip install nova' and that line is in requirements.txt it
does not work. This means I had to change tox.ini to have a deps
setting of:

deps = -r{toxinidir}/test-requirements.txt
   -r{toxinidir}/requirements.txt

to get the functional and unit tests to build working virtualenvs.
That this is not happening in the dsvm-based zuul jobs means that the
tests can't run or pass. What's going on here? Ideas?
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-24 Thread Jay S Bryant



On 8/23/2018 12:07 PM, Gorka Eguileor wrote:

On 23/08, Dan Smith wrote:

I think Nova should never have to rely on Cinder's hosts/backends
information to do migrations or any other operation.

In this case even if Nova had that info, it wouldn't be the solution.
Cinder would reject migrations if there's an incompatibility on the
Volume Type (AZ, Referenced backend, capabilities...)

I think I'm missing a bunch of cinder knowledge required to fully grok
this situation and probably need to do some reading. Is there some
reason that a volume type can't exist in multiple backends or something?
I guess I think of volume type as flavor, and the same definition in two
places would be interchangeable -- is that not the case?


Hi,

I just know the basics of flavors, and they are kind of similar, though
I'm sure there are quite a few differences.

Sure, multiple storage arrays can meet the requirements of a Volume
Type, but then when you create the volume you don't know where it's
going to land. If your volume type is too generic, your volume could land
somewhere your cell cannot reach.



I don't know anything about Nova cells, so I don't know the specifics of
how we could do the mapping between them and Cinder backends, but
considering the limited range of possibilities in Cinder I would say we
only have Volume Types and AZs to work out a solution.

I think the only mapping we need is affinity or distance. The point of
needing to migrate the volume would purely be because moving cells
likely means you moved physically farther away from where you were,
potentially with different storage connections and networking. It
doesn't *have* to mean that, but I think in reality it would. So the
question I think Matt is looking to answer here is "how do we move an
instance from a DC in building A to building C and make sure the
volume gets moved to some storage local in the new building so we're
not just transiting back to the original home for no reason?"

Does that explanation help or are you saying that's fundamentally hard
to do/orchestrate?

Fundamentally, the cells thing doesn't even need to be part of the
discussion, as the same rules would apply if we're just doing a normal
migration but need to make sure that storage remains affined to compute.


We could probably work something out using the affinity filter, but
right now we don't have a way of doing what you need.

We could probably rework the migration to accept scheduler hints to be
used with the affinity filter and to accept calls with the host or the
hints; that way it could migrate a volume without knowing the
destination host and decide it based on affinity.

We may have to do more modifications, but it could be a way to do it.




I don't know how Nova Placement works, but it could hold an
equivalency mapping of volume types to cells as in:

  Cell#1Cell#2

VolTypeA <--> VolTypeD
VolTypeB <--> VolTypeE
VolTypeC <--> VolTypeF

Then it could do volume retypes (allowing migration) and that would
properly move the volumes from one backend to another.

The only way I can think that we could do this in placement would be if
volume types were resource providers and we assigned them traits that
had special meaning to nova indicating equivalence. Several of the words
in that sentence are likely to freak out placement people, myself
included :)

So is the concern just that we need to know what volume types in one
backend map to those in another so that when we do the migration we know
what to ask for? Is "they are the same name" not enough? Going back to
the flavor analogy, you could kinda compare two flavor definitions and
have a good idea if they're equivalent or not...

--Dan

In Cinder you don't get that from Volume Types, unless all your backends
have the same hardware and are configured exactly the same.

There can be some storage specific information there, which doesn't
correlate to anything on other hardware.  Volume types may refer to a
specific pool that has been configured in the array to use specific type
of disks.  But even the info on the type of disks is unknown to the
volume type.

I haven't checked the PTG agenda yet, but is there a meeting on this?
Because we may want to have one to try to understand the requirements
and figure out if there's a way to do it with current Cinder
functionality or if we'd need something new.

Gorka,

I don't think this has been put on the agenda yet.  Might be good 
to add it.  I don't think we have cross-project time officially planned 
with Nova.  I will start that discussion with Melanie so that we can 
cover the couple of cross-project subjects we have.


Jay


Cheers,
Gorka.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] Berlin Community Contributor Awards

2018-08-24 Thread Kendall Nelson
Hello Everyone!

As we approach the Summit (still a ways away thankfully), I thought I would
kick off the Community Contributor Award nominations early this round.

For those of you that already know what they are, here is the form[1].

For those of you that have never heard of the CCA, I'll briefly explain
what they are :) We all know people in the community that do the dirty
jobs, we all know people that will bend over backwards trying to help
someone new, we all know someone that is a savant in some area of the code
we could never hope to understand. These people rarely get the thanks they
deserve and the Community Contributor Awards are a chance to make sure they
know that they are appreciated for the amazing work they do and skills they
have.

So go forth and nominate these amazing community members[1]! Nominations
will close on October 21st at 7:00 UTC and winners will be announced at the
OpenStack Summit in Berlin.

-Kendall (diablo_rojo)

[1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Eric Fried
So...

Restore the PS of the oslo_utils version that exposed the global [1]?

Or use the forced-singleton pattern from nova [2] to put it in its own
importable module, e.g. oslo_utils.uuidutils.uuidsentinel?

(FTR, "import only modules" is a thing for me too, but I've noticed it
doesn't seem to be a hard and fast rule in OpenStack; and in this case
it seemed most important to emulate the existing syntax+behavior for
consumers.)

-efried

[1] https://review.openstack.org/#/c/594179/2/oslo_utils/uuidutils.py
[2]
https://github.com/openstack/nova/blob/a421bd2a8c3b549c603df7860e6357738e79c7c3/nova/tests/uuidsentinel.py#L30
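
For reference, the forced-singleton trick in [2] boils down to something
like the following. This is a from-memory sketch, not the verbatim nova
code (nova generates its UUIDs via oslo.utils rather than calling the
uuid module directly), but it shows the interface we'd want to preserve:

    # uuidsentinel.py - the module replaces itself in sys.modules with an
    # instance, so consumers keep today's syntax:
    #     from nova.tests import uuidsentinel as uuids
    #     do_a_thing_with(uuids.foo)
    import sys
    import uuid


    class UUIDSentinels(object):
        def __init__(self):
            self._sentinels = {}

        def __getattr__(self, name):
            if name.startswith('_'):
                raise AttributeError('Sentinel names may not start with _')
            # The same name always maps to the same UUID in a process.
            if name not in self._sentinels:
                self._sentinels[name] = str(uuid.uuid4())
            return self._sentinels[name]


    sys.modules[__name__] = UUIDSentinels()

That keeps the mock.sentinel-style attribute access without any global
state beyond the sentinel map itself, which only ever grows in tests.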

On 08/23/2018 11:23 PM, Doug Hellmann wrote:
> 
> 
>> On Aug 23, 2018, at 4:01 PM, Ben Nemec  wrote:
>>
>>
>>
>>> On 08/23/2018 12:25 PM, Doug Hellmann wrote:
>>> Excerpts from Eric Fried's message of 2018-08-23 09:51:21 -0500:
 Do you mean an actual fixture, that would be used like:

  class MyTestCase(testtools.TestCase):
  def setUp(self):
  self.uuids = self.useFixture(oslofx.UUIDSentinelFixture()).uuids

  def test_foo(self):
  do_a_thing_with(self.uuids.foo)

 ?

 That's... okay I guess, but the refactoring necessary to cut over to it
 will now entail adding 'self.' to every reference. Is there any way
 around that?
>>> That is what I had envisioned, yes.  In the absence of a global,
>>> which we do not want, what other API would you propose?
>>
>> If we put it in oslotest instead, would the global still be a problem? 
>> Especially since mock has already established a pattern for this 
>> functionality?
> 
> I guess all of the people who complained so loudly about the global in 
> oslo.config are gone?
> 
> If we don’t care about the global then we could just put the code from Eric’s 
> threadsafe version in oslo.utils somewhere. 
> 
> Doug
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-24 Thread Matt Riedemann

On 8/21/2018 5:36 AM, Lee Yarwood wrote:

I'm definitely in favor of hiding this from users eventually but
wouldn't this require some form of deprecation cycle?

Warnings within the API documentation would also be useful and even
something we could backport to stable to highlight just how fragile this
API is ahead of any policy change.


The swap volume API in nova defaults to admin-only policy rules by 
default, so any users hitting it directly are either (1) admins 
knowingly shooting themselves, or their users, in the foot, or (2) 
non-admins (or some other role of user) for whom operators have opened 
up the policy. I would ask why that is.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] compute nodes use of placement

2018-08-24 Thread Matt Riedemann

On 7/30/2018 1:55 PM, Jay Pipes wrote:

ack. will review shortly. thanks, Chris.


For those on the edge of their seats at home, we have merged [1] in 
Stein and assuming things don't start failing in weird ways after some 
period of time, we'll probably backport it. OVH is already running with it.


[1] https://review.openstack.org/#/c/520024/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tripleo] fluentd logging status

2018-08-24 Thread Goutham Pacha Ravi
On Fri, Aug 24, 2018 at 2:17 AM Juan Badia Payno  wrote:
>
> Recently, I did a little test regarding fluentd logging on the gates 
> master[1], queens[2], pike [3]. I don't like the status of it, I'm still 
> working on them, but basically there are quite a lot of misconfigured logs 
> and some services that are not configured at all.
>
> I think we need to put some effort into the logging. The purpose of this email 
> is to point out that we need to make a little effort on this task.
>
> First of all, I think we need to enable fluentd on all the scenarios, as it 
> is on the tests [1][2][3] commented on the beginning of the email. Once 
> everything is ok and some automatic test regarding logging is done they can 
> be disabled.
>
> I'd love not to create a new bug for every misconfigured/unconfigured 
> service, but if that would help grab more attention, I will open them.
>
> The plan I have in mind is something like:
>  * Make an initial picture of what the fluentd/log status is (from pike 
> upwards).
>  * Fix all misconfigured services. (designate,...)
>  * Add the non-configured services. (manila,...)

Awesome, I noticed this with manila just yesterday, and added it to my
list of To-Do/cleanup. I'm glad you're taking note/working on it,
please add me to review (gouthamr) / let me know if you'd like me to
do something.


>  * Add an automated check to find a possible unconfigured/misconfigured 
> problem.
>
> Any comments, doubts or questions are welcome
>
> Cheers,
> Juan
>
> [1] https://review.openstack.org/594836
> [2] https://review.openstack.org/594838
> [3] https://review.openstack.org/594840
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Matt Riedemann

On 8/23/2018 2:05 PM, Chris Dent wrote:

On Thu, 23 Aug 2018, Dan Smith wrote:


...and it doesn't work like mock.sentinel does, which is part of the
value. I really think we should put this wherever it needs to be so that
it can continue to be as useful as is is today. Even if that means just
copying it into another project -- it's not that complicated of a thing.


Yeah, I agree. I had hoped that we could make something that was
generally useful, but its main value is its interface and if we
can't have that interface in a library, having it per codebase is no
biggie. For example it's been copied straight from nova into the
placement extractions experiments with no changes and, as one would
expect, works just fine.

Unless people are wed to doing something else, Dan's right, let's
just do that.


So just follow me here people, what if we had this common shared library 
where code could incubate and then we could write some tools to easily 
copy that common code into other projects...


I'm pretty sure I could get said project approved as a top-level program 
under The Foundation and might even get a talk or two out of this idea. 
I can see the Intel money rolling in now...


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-24 Thread Matt Riedemann

On 8/23/2018 12:07 PM, Gorka Eguileor wrote:

I haven't checked the PTG agenda yet, but is there a meeting on this?
Because we may want to have one to try to understand the requirements
and figure out if there's a way to do it with current Cinder
functionality of if we'd need something new.


I don't see any set schedule yet for topics like we've done in the past, 
I'll ask Mel since time is getting short (~2 weeks out now). But I have 
this as an item for discussion in the etherpad [1]. In previous PTGs, we 
usually have 3 days for (mostly) vertical team stuff with Wednesday 
being our big topics days split into morning and afternoon, e.g. cells 
and placement, then Thursday is split into 1-2 hour cross-project 
sessions, e.g. nova/cinder, nova/neutron, etc, and then Friday is the 
miscellaneous everything else day for stuff on the etherpad.


[1] https://etherpad.openstack.org/p/nova-ptg-stein

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] The Weekly Owl - 29th Edition

2018-08-24 Thread Wesley Hayutin
On Fri, Aug 24, 2018 at 2:40 PM Emilien Macchi  wrote:

> Welcome to the twenty-ninth edition of a weekly update in TripleO world!
> The goal is to provide a short reading (less than 5 minutes) to
> learn what's new this week.
> Any contributions and feedback are welcome.
> Link to the previous version:
> http://lists.openstack.org/pipermail/openstack-dev/2018-August/133094.html
>
> General announcements
> =
> +--> This week we released Rocky RC1, branched stable/rocky and unless
> there are critical bugs we'll call it our final stable release.
> +--> The team is preparing for the next PTG:
> https://etherpad.openstack.org/p/tripleo-ptg-stein
>
> CI status
> =
> +--> Sprint theme: Zuul v3 migration (
> https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter=label:Sprint%2018%20CI
> )
> +--> The Ruck and Rover for this sprint are Marios and Wes. Please tell
> them about any CI issue.
>

It's actually Sorin and myself while Marios is on PTO.
Might as well take the opportunity to welcome Sorin to the TripleO team :))




> +--> Promotion on master is 11 days, 1 day on Rocky, 3 days on Queens, 3
> days on Pike and 1 day on Ocata.
>
> Upgrades
> =
> +--> Adding support for upgrades when OpenShift is deployed.
>
> Containers
> =
> +--> Efforts to support Podman tracked here:
> https://trello.com/b/S8TmOU0u/tripleo-podman
>
> config-download
> =
> +--> This squad is winding down and we are moving forward with the Edge squad.
>
> Edge
> =
> +--> New squad created by James:
> https://etherpad.openstack.org/p/tripleo-edge-squad-status (more to come)
>
> Integration
> =
> +--> No updates this week.
>
> UI/CLI
> =
> +--> No updates this week.
>
> Validations
> =
> +--> No updates this week, reviews are needed:
> https://etherpad.openstack.org/p/tripleo-validations-squad-status
>
> Networking
> =
> +--> Good progress on Ansible ML2 driver
>
> Workflows
> =
> +--> Planning Stein: better Ansible integration, UI convergence, etc.
>
> Security
> =
> +--> Working on SElinux for containers (related to podman integration
> mainly)
>
> Owl fact
> =
> "One single Owl can go fast. Multiple owls, together, can go far."
> Source: a mix of an African proverb and my Friday-afternoon imagination.
>
>
> Thank you all for reading and stay tuned!
> --
> Your fellow reporter, Emilien Macchi
-- 

Wes Hayutin

Associate Manager

Red Hat

whayu...@redhat.com  T: +19197544114  IRC: weshay

View my calendar and check my availability for meetings HERE

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Keystone Team Update - Week of 6 August 2018

2018-08-24 Thread Lance Bragstad


On 08/22/2018 07:49 AM, Lance Bragstad wrote:
>
> On 08/22/2018 03:23 AM, Adrian Turjak wrote:
>> Bah! I saw this while on holiday and didn't get a chance to respond,
>> sorry for being late to the conversation.
>>
>> On 11/08/18 3:46 AM, Colleen Murphy wrote:
>>> ### Self-Service Keystone
>>>
>>> At the weekly meeting Adam suggested we make self-service keystone a focus 
>>> point of the PTG[9]. Currently, policy limitations make it difficult for an 
>>> unprivileged keystone user to get things done or to get information without 
>>> the help of an administrator. There are some other projects that have been 
>>> created to act as workflow proxies to mitigate keystone's limitations, such 
>>> as Adjutant[10] (now an official OpenStack project) and Ksproj[11] (written 
>>> by Kristi). The question is whether the primitives offered by keystone are 
>>> sufficient building blocks for these external tools to leverage, or if we 
>>> should be doing more of this logic within keystone. Certainly improving our 
>>> RBAC model is going to be a major part of improving the self-service user 
>>> experience.
>>>
>>> [9] 
>>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-121
>>> [10] https://adjutant.readthedocs.io/en/latest/
>>> [11] https://github.com/CCI-MOC/ksproj
>> As you can probably expect, I'd love to be a part of any of these
>> discussions. Anything I can nicely move to being logic directly
>> supported in Keystone, the less I need to do in Adjutant. The majority
>> of things though I think I can do reasonably well with the primitives
>> Keystone gives me, and what I can't I tend to try and work with upstream
>> to fill the gaps.
>>
>> System vs project scope helps a lot though, and I look forward to really
>> playing with that.
> Since it made sense to queue incorporating system scope after the flask
> work, I just started working with that on the credentials API*. There is
> a WIP series up for review that attempts to do a couple things [0].
> First it tries to incorporate system and project scope checking into the
> API. Second it tries to be more explicit about protection test cases,
> which I think is going to be important since we're adding another scope
> type. We also support three different roles now and it would be nice to
> clearly see who can do what in each case with tests.
>
> I'd be curious to get your feedback here if you have any.
>
> * Because the credentials API was already moved to flask and has room
> for self-service improvements [1]
>
> [0] https://review.openstack.org/#/c/594547/

This should be passing tests at least now, but there are still some
tests left to write. Most of what's in the patch is testing the new
authorization scope (e.g. system).

I'm currently taking advice on ways to extensively test six different
personas without duplication running rampant across test cases (project
admin, project member, project reader, system admin, system member,
system reader).
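
For illustration, one way to keep that duplication down is to write the
assertions once in a mixin and bind each persona in a tiny subclass.
This is only a sketch using stdlib unittest -- make_client() below is a
hypothetical stand-in for keystone's real test plumbing, not an existing
helper:

    import unittest


    def make_client(role, scope):
        """Hypothetical: return an HTTP client authenticated as a
        persona with the given role and authorization scope."""
        raise NotImplementedError


    class CredentialProtectionMixin(object):
        """Protection assertions written once, shared by all personas."""

        def test_can_list_own_credentials(self):
            resp = self.client.get('/v3/credentials')
            self.assertEqual(200, resp.status_code)


    class SystemReaderTests(CredentialProtectionMixin, unittest.TestCase):
        def setUp(self):
            super(SystemReaderTests, self).setUp()
            self.client = make_client(role='reader', scope='system')


    class ProjectMemberTests(CredentialProtectionMixin, unittest.TestCase):
        def setUp(self):
            super(ProjectMemberTests, self).setUp()
            self.client = make_client(role='member', scope='project')

Six such subclasses cover the matrix, and any persona whose expectations
differ can just override the relevant test or an expected-status
attribute.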

In summary, it does make the credential API much more self-service
oriented, which is something we should try and do everywhere (I picked
credentials first because it was already moved to flask).

> [1]
> https://git.openstack.org/cgit/openstack/keystone/tree/keystone/common/policies/credential.py#n21
>
>> I sadly won't be at the PTG, but will be at the Berlin summit. Plus I
>> have a lot of Adjutant work planned for Stein, a large chunk of which is
>> refactors and reshuffling blueprints and writing up a roadmap, plus some
>> better entry point tasks for new contributors.
>>
>>> ### Standalone Keystone
>>>
>>> Also at the meeting and during office hours, we revived the discussion of 
>>> what it would take to have a standalone keystone be a useful identity 
>>> provider for non-OpenStack projects[12][13]. First up we'd need to turn 
>>> keystone into a fully-fledged SAML IdP, which it's not at the moment (which 
>>> is a point of confusion in our documentation), or even add support for it 
>>> to act as an OpenID Connect IdP. This would be relatively easy to do (or at 
>>> least not impossible). Then the application would have to use 
>>> keystonemiddleware or its own middleware to route requests to keystone to 
>>> issue and validate tokens (this is one aspect where we've previously 
>>> discussed whether JWT could benefit us). Then the question is what should a 
>>> not-OpenStack application do with keystone's "scoped RBAC"? It would all 
>>> depend on how the resources of the application are grouped and whether they 
>>> care about multitenancy in some form. Likely each application would have 
>>> different needs and it would be difficult to find a one-size-fits-all 
>>> approach. We're interested to know whether anyone has a burning use case 
>>> for something like this.
>>>
>>> [12] 
>>> http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-08-07-16.00.log.html#l-192
>>> [13] 
>>> 

Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-24 Thread Matt Riedemann

On 8/20/2018 10:29 AM, Matthew Booth wrote:

Secondly, is there any reason why we shouldn't just document that you
have to delete snapshots before doing a volume migration? Hopefully
some cinder folks or operators can chime in to let me know how to back
them up or somehow make them independent before doing this, at which
point the volume itself should be migratable?


Coincidentally the volume migration API never had API reference 
documentation. I have that here now [1]. It clearly states the 
preconditions to migrate a volume based on code in the volume API. 
However, volume migration is admin-only by default and retype 
(essentially like resize) is admin-or-owner so non-admins can do it and 
specify to migrate. In general I think it's best to have preconditions 
for *any* API documented, so anything needed to perform a retype should 
be documented in the API, like that the volume can't have snapshots.


[1] https://review.openstack.org/#/c/595379/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-24 Thread Alex Xu
2018-08-18 20:25 GMT+08:00 Chris Dent :

> On Fri, 17 Aug 2018, Doug Hellmann wrote:
>
> If we ignore the political concerns in the short term, are there
>> other projects actually interested in using placement? With what
>> technical caveats? Perhaps with modifications of some sort to support
>> the needs of those projects?
>>
>
> I think ignoring the political concerns (in any term) is not
> possible. We are a group of interacting humans, politics are always
> present. Cordial but active debate to determine the best course of
> action is warranted.
>
> (tl;dr: Let's have existing and potential placement contributors
> decide its destiny.)
>
> Five topics I think are relevant here, in order of politics, least
> to most:
>
> 1. Placement has been designed from the outset to have a hard
> contract between it and the services that use it. Being embedded
> and/or deeply associated with one other single service means that
> that contract evolves in a way that is strongly coupled. We made
> placement have an HTTP API, not use RPC, and not produce or consume
> notifications because it is supposed to be bounded and independent.
> Sharing code and human management doesn't enable that. As you'll
> read below, placement's progress has been overly constrained by
> compute.
>
> 2. There are other projects actively using placement, not merely
> interested. If you search codesearch.o.o for terms like "resource
> provider" you can find them. But to rattle off those that I'm aware
> of (which I'm certain is an incomplete list):
>
> * Cyborg is actively working on using placement to track FPGAs,
>   e.g., https://review.openstack.org/#/c/577438/
>
> * Blazar is working on using them for reservations:
>   https://review.openstack.org/#/q/status:open+project:openstack/blazar+branch:master+topic:bp/placement-api
>
> * Neutron has been reporting to placement for some time and has work
>   in progress on minimum bandwidth handling with the help of
>   placement:
>   https://review.openstack.org/#/q/status:open+project:openstack/neutron-lib+branch:master+topic:minimum-bandwidth-allocation-placement-api
>
> * Ironic uses resource classes to describe types of nodes
>
> * Mogan (which may or may not be dead, not clear) was intending to
>   track nodes with placement:
>   http://git.openstack.org/cgit/openstack/mogan-specs/tree/specs/pike/approved/track-resources-using-placement.rst
>
> * Zun is working to use placement for "unified resource management":
>   https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
>
> * Cinder has had discussion about using placement to overcome race
>   conditions in its existing scheduling subsystem (a purpose to
>   which placement was explicitly designed).
>
> 3. Placement's direction and progress is heavily curtailed by the
> choices and priorities that compute wants or needs to make. That
> means that for the past year or more much of the effort in placement
> has been devoted to eventually satisfying NFV use cases driven by
> "enhanced platform awareness" to the detriment of the simple use
> case of "get me some resource providers". Compute is under a lot of
> pressure in this area, and is under-resourced, so placement's
> progress is delayed by being in the (necessarily) narrow engine of
> compute. Similarly, computes's overall progress is delayed because a
> lot of attention is devoted to placement.
>
> I think the relevance of that latter point has been under-estimated
> by the voices that are hoping to keep placement near to nova. The
> concern there has been that we need to continue iterating in concert
> and quickly. I disagree with that from two angles. One is that we
> _will_ continue to work in concert. We are OpenStack, and presumably
> all the same people working on placement now will continue to do so,
> and many of those are active contributors to nova. We will work
> together.
>
> The other angle is that, actually, placement is several months ahead
> of nova in terms of features and it would be to everyone's advantage if
> placement, from a feature standpoint, took a time out (to extract)
> while nova had a chance to catch up with fully implementing shared
> providers, nested resource providers, consumer generations, resource
> request groups, using the reshaper properly from the virt drivers,
> having a fast forward upgrade script talking to PlacementDirect, and
> other things that I'm not remembering right now. The placement side
> for those things is in place. The work that it needs now is a
> _diversity_ of callers (not just nova) so that the features can been
> fully exercised and bugs and performance problems found.
>
> The projects above, which might like to--and at various times have
> expressed desire to do so--work on features within placement that
> would benefit their projects, are forced to compete with existing
> priorities to get blueprint attention. Though runways seemed to help
> a bit on that front this just-ending cycle, it's 

Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Davanum Srinivas
On Fri, Aug 24, 2018 at 8:01 PM Jeremy Stanley  wrote:

> On 2018-08-24 18:51:08 -0500 (-0500), Matt Riedemann wrote:
> [...]
> > So just follow me here people, what if we had this common shared
> > library where code could incubate and then we could write some
> > tools to easily copy that common code into other projects...
>
> If we do this, can we at least put it in a consistent place in all
> projects? Maybe name the directory something like "openstack/common"
> just to make it obvious.
>
> > I'm pretty sure I could get said project approved as a top-level
> > program under The Foundation and might even get a talk or two out
> > of this idea. I can see the Intel money rolling in now...
>
> Seems like a sound idea. Can we call it "Nostalgia" for no
> particular reason? Though maybe "Recurring Nightmare" would be a
> more accurate choice.
>

/me wakes up screaming!!


> --
> Jeremy Stanley
>


-- 
Davanum Srinivas :: https://twitter.com/dims
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-24 Thread Matt Riedemann

On 8/23/2018 10:22 AM, Sean McGinnis wrote:

I haven't gone through the workflow, but I thought shelve/unshelve could detach
the volume on shelving and reattach it on unshelve. In that workflow, assuming
the networking is in place to provide the connectivity, the nova compute host
would be connecting to the volume just like any other attach and should work
fine. The unknown or tricky part is making sure that there is the network
connectivity or routing in place for the compute host to be able to log in to
the storage target.


Yeah that's also why I like shelve/unshelve as a start since it's doing 
volume detach from the source host in the source cell and volume attach 
to the target host in the target cell.
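
To make the sequence concrete, here's the rough orchestration being
discussed. Every name below is hypothetical -- this is not nova's actual
conductor API, just the proposed flow written down:

    def shelve_offload(instance):
        """Snapshot the instance and detach its volumes and ports."""


    def schedule(instance, cells):
        """Ask the scheduler for a target host, allowing the given cells."""


    def unshelve(instance, host):
        """Spawn from the snapshot on the target host and re-attach
        volumes and ports there."""


    def cross_cell_move(instance, target_cell):
        shelve_offload(instance)                # runs in the source cell
        host = schedule(instance, cells=[target_cell])
        unshelve(instance, host)                # runs in the target cell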


Host aggregates in Nova, as a grouping concept, are not restricted to 
cells at all, so you could have hosts in the same aggregate which span 
cells, so I'd think that's what operators would be doing if they have 
network/storage spanning multiple cells. Having said that, host 
aggregates are not exposed to non-admin end users, so again, if we rely 
on a normal user to do this move operation via resize, the only way we 
can restrict the instance to another host in the same aggregate is via 
availability zones, which is the user-facing aggregate construct in 
nova. I know Sam would care about this because NeCTAR sets 
[cinder]/cross_az_attach=False in nova.conf so servers/volumes are 
restricted to the same AZ, but that's not the default, and specifying an 
AZ when you create a server is not required (although there is a config 
option in nova which allows operators to define a default AZ for the 
instance if the user didn't specify one).


Anyway, my point is, there are a lot of "ifs" if it's not an 
operator/admin explicitly telling nova where to send the server if it's 
moving across cells.




If it's the other scenario mentioned where the volume needs to be migrated from
one storage backend to another storage backend, then that may require a little
more work. The volume would need to be retype'd or migrated (storage migration)
from the original backend to the new backend.


Yeah, the thing with retype/volume migration that isn't great is it 
triggers the swap_volume callback to the source host in nova, so if nova 
was orchestrating the volume retype/move, we'd need to wait for the swap 
volume to be done (not impossible) before proceeding, and only the 
libvirt driver implements the swap volume API. I've always wondered, 
what the hell do non-libvirt deployments do with respect to the volume 
retype/migration APIs in Cinder? Just disable them via policy?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Deprecating Core/Disk/RamFilter

2018-08-24 Thread Matt Riedemann
This is just an FYI that I have proposed that we deprecate the 
core/ram/disk filters [1]. We should have probably done this back in 
Pike when we removed them from the default enabled_filters list and also 
deprecated the CachingScheduler, which is the only in-tree scheduler 
driver that benefits from enabling these filters. With the 
heal_allocations CLI, added in Rocky, we can probably drop the 
CachingScheduler in Stein so the pieces are falling into place. As we 
saw in a recent bug [2], having these enabled in Stein now causes 
blatantly incorrect filtering on ironic nodes.
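
For anyone who hasn't looked at these filters in a while, the check they
perform is roughly the following. This is a simplified sketch, not the
exact upstream code -- the real RamFilter also applies
ram_allocation_ratio (and the aggregate variant applies per-aggregate
overrides):

    # Placement's GET /allocation_candidates already does this resource
    # accounting server-side before the filters ever run, which is why
    # the filter is redundant for anything but the CachingScheduler.
    class RamFilter(object):
        def host_passes(self, host_state, spec_obj):
            requested_ram_mb = spec_obj.memory_mb  # from the flavor
            return host_state.free_ram_mb >= requested_ram_mb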


Comments are welcome here, the review, or in IRC.

[1] https://review.openstack.org/#/c/596502/
[2] https://bugs.launchpad.net/tripleo/+bug/1787910

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] extraction (technical) update

2018-08-24 Thread Ed Leafe
On Aug 24, 2018, at 7:36 AM, Chris Dent  wrote:

> Over the past few days a few of us have been experimenting with
> extracting placement to its own repo, as has been discussed at
> length on this list, and in some etherpads:
> 
>https://etherpad.openstack.org/p/placement-extract-stein
>https://etherpad.openstack.org/p/placement-extraction-file-notes
> 
> As part of that, I've been doing some exploration to tease out the
> issues we're going to hit as we do it. None of this is work that
> will be merged, rather it is stuff to figure out what we need to
> know to do the eventual merging correctly and efficiently.

I’ve re-run the extraction, re-arranged the directories, and cleaned up most of 
the import pathing. The code is here: https://github.com/EdLeafe/placement.  I 
did a forced push to remove the first attempt.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

2018-08-24 Thread Matt Riedemann

+operators

On 8/24/2018 4:08 PM, Matt Riedemann wrote:

On 8/23/2018 10:22 AM, Sean McGinnis wrote:
I haven't gone through the workflow, but I thought shelve/unshelve 
could detach
the volume on shelving and reattach it on unshelve. In that workflow, 
assuming
the networking is in place to provide the connectivity, the nova 
compute host
would be connecting to the volume just like any other attach and 
should work

fine. The unknown or tricky part is making sure that there is the network
connectivity or routing in place for the compute host to be able to 
log in to

the storage target.


Yeah that's also why I like shelve/unshelve as a start since it's doing 
volume detach from the source host in the source cell and volume attach 
to the target host in the target cell.


Host aggregates in Nova, as a grouping concept, are not restricted to 
cells at all, so you could have hosts in the same aggregate which span 
cells, so I'd think that's what operators would be doing if they have 
network/storage spanning multiple cells. Having said that, host 
aggregates are not exposed to non-admin end users, so again, if we rely 
on a normal user to do this move operation via resize, the only way we 
can restrict the instance to another host in the same aggregate is via 
availability zones, which is the user-facing aggregate construct in 
nova. I know Sam would care about this because NeCTAR sets 
[cinder]/cross_az_attach=False in nova.conf so servers/volumes are 
restricted to the same AZ, but that's not the default, and specifying an 
AZ when you create a server is not required (although there is a config 
option in nova which allows operators to define a default AZ for the 
instance if the user didn't specify one).


Anyway, my point is, there are a lot of "ifs" if it's not an 
operator/admin explicitly telling nova where to send the server if it's 
moving across cells.




If it's the other scenario mentioned where the volume needs to be 
migrated from
one storage backend to another storage backend, then that may require 
a little
more work. The volume would need to be retype'd or migrated (storage 
migration)

from the original backend to the new backend.


Yeah, the thing with retype/volume migration that isn't great is it 
triggers the swap_volume callback to the source host in nova, so if nova 
was orchestrating the volume retype/move, we'd need to wait for the swap 
volume to be done (not impossible) before proceeding, and only the 
libvirt driver implements the swap volume API. I've always wondered, 
what the hell do non-libvirt deployments do with respect to the volume 
retype/migration APIs in Cinder? Just disable them via policy?





--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] UUID sentinel needs a home

2018-08-24 Thread Jeremy Stanley
On 2018-08-24 18:51:08 -0500 (-0500), Matt Riedemann wrote:
[...]
> So just follow me here people, what if we had this common shared
> library where code could incubate and then we could write some
> tools to easily copy that common code into other projects...

If we do this, can we at least put it in a consistent place in all
projects? Maybe name the directory something like "openstack/common"
just to make it obvious.

> I'm pretty sure I could get said project approved as a top-level
> program under The Foundation and might even get a talk or two out
> of this idea. I can see the Intel money rolling in now...

Seems like a sound idea. Can we call it "Nostalgia" for no
particular reason? Though maybe "Recurring Nightmare" would be a
more accurate choice.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Searchlight] Team meeting next week

2018-08-24 Thread Trinh Nguyen
Dear team,

I would like to organize a team meeting on Thursday next week:

   - Date: 30 August 2018
   - Time: 15:00 UTC
   - Channel: #openstack-meeting-4

All existing core members and new contributors are welcome.

Here is the Searchlight's Etherpad for Stein, all ideas are welcomed:

https://etherpad.openstack.org/p/searchlight-stein-ptg

Please reply or ping me on IRC (#openstack-searchlight, dangtrinhnt) if you
want to join.

Bests,

Trinh Nguyen | Founder & Chief Architect

E: dangtrin...@gmail.com | W: www.edlab.xyz
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Disabling nova volume-update (aka swap volume; aka cinder live migration)

2018-08-24 Thread Matt Riedemann

On 8/22/2018 4:46 AM, Gorka Eguileor wrote:

The solution is conceptually simple.  We add a new API microversion in
Cinder that adds an optional parameter called "generic_keep_source"
(defaults to False) to both migrate and retype operations.


But if the problem is that users are not using the retype API and are 
hitting the compute swap volume API instead, they won't use 
this new parameter anyway. Again, retype is admin-or-owner but volume 
migration (in cinder) and swap volume (in nova) are both admin-only, so 
are admins calling swap volume directly or are people easing up the 
policy restrictions so non-admins can use these migration APIs?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO]Addressing Edge/Multi-site/Multi-cloud deployment use cases (new squad)

2018-08-24 Thread James Slagle
On Wed, Aug 22, 2018 at 4:21 AM Csatari, Gergely (Nokia - HU/Budapest)
 wrote:
>
> Hi,
>
> This is good news. We could even have an hour session to discuss ideas about 
> TripleO's place in the edge cloud infrastructure. Would you be open to that?

Yes, that sounds good. I'll add something to the etherpad. Thanks.



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev