Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Tony Breeds
On Thu, Jul 26, 2018 at 10:54:52PM -0500, Matt Riedemann wrote:
> On 7/26/2018 4:37 PM, Sean McGinnis wrote:
> > I'd be curious to hear more about why you don't think that tag is 
> > maintained.
> 
> Are projects actively applying for the tag?
> 
> > 
> > For projects that assert they follow stable policy, in the release process we
> > have extra scrutiny that nothing is being released on stable branches that
> > would appear to violate the stable policy.
> 
> Is this automated somehow and takes the tag specifically into account, e.g.
> some kind of validation that for projects with the tag, a release on a
> stable branch doesn't have something like "blueprint" in the commit message?
> Or is that just manual code review of the change log?

Manual review of the changelog.  For projects that assert the tag, the
list-changes job prints a big banner to get the attention of the
release managers[1].  Those reviews need a +2 from me (or Alan) *and* a
+2 from a release manager.

I look at the commit messages, and where things look 'interesting' I go
do code reviews on the backport changes.  It isn't ideal, but IMO it's
far from unmaintained.

If you have ideas on automation we could put in place to make this more
robust without getting in the way, I'm all ears[2].
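For what it's worth, the kind of automation being asked about here could start as simply as a keyword scan over the commit messages in a proposed release. The sketch below is purely illustrative; the keyword list and the git invocation are assumptions, not an existing release-team job.

```python
# Hypothetical helper: flag stable-branch commit messages that hint at
# feature work, for a human reviewer to look at more closely.

SUSPECT_WORDS = ("blueprint", "implements", "apiimpact", "new feature")

def flag_messages(commit_messages):
    """Return the messages containing a suspect keyword (case-insensitive)."""
    return [msg for msg in commit_messages
            if any(word in msg.lower() for word in SUSPECT_WORDS)]

# The messages could come from something like:
#   git log --format=%s <last-tag>..<proposed-ref>
msgs = [
    "Fix race in volume detach on stable/queens",
    "Implements blueprint fancy-new-thing",
]
print(flag_messages(msgs))  # -> ['Implements blueprint fancy-new-thing']
```

This would not catch a deliberately misleading commit message, of course; it only raises the banner for a human to decide.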

Yours Tony.

[1] 
http://logs.openstack.org/42/586242/1/check/releases-tox-list-changes/af61e24/job-output.txt.gz#_2018-07-26_15_30_07_144206
[2] Well not literally but I am listening ;P


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Tony Breeds
On Thu, Jul 26, 2018 at 11:42:01AM -0500, Matt Riedemann wrote:
> On 7/25/2018 3:07 PM, Mohammed Naser wrote:
> > Hi everyone:
> > 
> > This email is just to notify everyone on the TC and the community that
> > the change to remove the stable branch maintenance as a project
> > team[1] has been fast-tracked[2].
> > 
> > The change should be approved on 2018-07-28 however it is beneficial
> > to remove the stable branch team (which has been moved into a SIG) in
> > order for `tonyb` to be able to act as an election official.
> > 
> > There seems to be no opposing votes however a revert is always
> > available if any members of the TC are opposed to the change[3].
> > 
> > Thanks to Tony for all of his help in the elections.
> > 
> > Regards,
> > Mohammed
> > 
> > [1]:https://review.openstack.org/#/c/584206/
> > [2]:https://governance.openstack.org/tc/reference/house-rules.html#other-project-team-updates
> > [3]:https://governance.openstack.org/tc/reference/house-rules.html#rolling-back-fast-tracked-changes
> 
> First time I've heard of it...

http://lists.openstack.org/pipermail/openstack-dev/2018-July/132369.html

> but thanks. I personally don't think calling
> something a SIG magically makes people appear to help out, like creating a
> stable maintenance official project team and PTL didn't really grow a
> contributor base either, but so it goes.

I'm not expecting magic to happen, but I think a SIG is a better fit.
Since Dublin we've had Elod Illes appear and do good things, so perhaps
there is hope[1]!
 
> Only question I have is will the stable:follows-policy governance tag [1]
> also be removed?

That wasn't on the cards; it's still the same Gerrit group that is
expected to approve (or not) new applications.

Yours Tony.
[1] Hope is not a strategy




Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Matt Riedemann

On 7/26/2018 4:37 PM, Sean McGinnis wrote:

I'd be curious to hear more about why you don't think that tag is maintained.


Are projects actively applying for the tag?



For projects that assert they follow stable policy, in the release process we
have extra scrutiny that nothing is being released on stable branches that
would appear to violate the stable policy.


Is this automated somehow and takes the tag specifically into account, 
e.g. some kind of validation that for projects with the tag, a release 
on a stable branch doesn't have something like "blueprint" in the commit 
message? Or is that just manual code review of the change log?


--

Thanks,

Matt



[openstack-dev] [cyborg] PTL Candidacy for Stein cycle

2018-07-26 Thread Li Liu
 I'd like to nominate myself for the Cyborg PTL role for the Stein cycle.

Thank you, Howard, for starting this new project in the community a couple of
years ago. He led the project from the beginning and helped it ramp up on the
right track.

Now, after a couple of releases of preparation, the project is in a fantastic
state. We had our first official release in Q and continue to deliver great
features in the R and S releases. Our team is growing fast, and people across
different domains of the industry are showing interest in the project.

We take pride in the fact that Cyborg is one of the few projects grown
entirely in the OpenStack community from the very beginning: no vendor code
dump, design discussions from scratch, and every bit of code written from
zero.

I joined the project not too long ago, but I am already so fascinated by
being in such a great team and knowing the code we write can help others
around the world.

In Rocky, we added further support for FPGAs, e.g. bitstream programming
APIs and bitstream metadata standardization. We also finalized the Nova-Cyborg
interaction spec and started working with Placement folks to make things
happen. In addition, we added more device driver support (GPUs, Intel/Xilinx
FPGAs, etc.).

Looking forward to the Stein cycle, here is a list of things we will try to
accomplish:
1. Finish and polish up the interaction with Nova through the Placement API
2. Finish implementing the os-acc library
3. Complete the E2E flow of acc scheduling, initialization, as well as
FPGA programming
4. Work with the k8s community to provide containerization support for the
Kubernetes device plugin
5. Work with the Berkeley RISC-V team to port their projects over to the
OpenStack ecosystem (e.g. FireSim)


-- 
Thank you

Regards

Li Liu


Re: [openstack-dev] [release] Release countdown for week R-4, July 30 - August 3

2018-07-26 Thread Sean McGinnis
As the client deadline and milestone 3 day winds down here, I wanted to do a
quick check on where things stand before calling it a day.

This is according to script output, so I haven't actually looked into any
details so far. But according to the script, the following
cycle-with-intermediary deliverables have not had a release done for rocky yet:

aodh
bifrost
ceilometer
cloudkitty-dashboard
cloudkitty
ec2-api
ironic-python-agent
karbor-dashboard
karbor
kuryr-kubernetes
kuryr-libnetwork
magnum-ui
magnum
masakari-dashboard
monasca-kibana-plugin
monasca-log-api
monasca-notification
networking-hyperv
panko
python-cloudkittyclient
python-designateclient
python-karborclient
python-magnumclient
python-pankoclient
python-searchlightclient
python-senlinclient
python-tricircleclient
sahara-tests
senlin-dashboard
tacker-horizon
tacker
zun-ui
zun

Just a reminder that we will need to force a release on these in order to get a
final point to branch stable/rocky.

Taking a look at ones that have done a release but have had more unreleased
commits since then, I'm also seeing several python-*client deliverables that
may be missing final releases.
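The shape of the check the script performs can be sketched roughly as follows; the dict layout loosely mirrors the deliverable files under deliverables/<series>/ in the openstack/releases repo, but the field names here are assumptions for illustration.

```python
def unreleased(deliverables):
    """Given a mapping of deliverable name -> parsed deliverable data,
    return the names with no release recorded yet for the cycle."""
    return sorted(name for name, data in deliverables.items()
                  if not data.get("releases"))

# Toy data showing both cases:
sample = {
    "aodh": {"release-model": "cycle-with-intermediary", "releases": []},
    "oslo.config": {"release-model": "cycle-with-intermediary",
                    "releases": [{"version": "6.4.0"}]},
}
print(unreleased(sample))  # -> ['aodh']
```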

Thanks,
Sean

On Thu, Jul 26, 2018 at 07:22:01AM -0500, Sean McGinnis wrote:
> 
> General Information
> ---
> 
> For deliverables following the cycle-with-milestones model, we are now (after
> the day I send this) past Feature Freeze. The focus should be on determining
> and fixing release-critical bugs. At this stage only bugfixes should be
> approved for merging in the master branches: feature work should only be
> considered if explicitly granted a Feature Freeze exception by the team PTL
> (after a public discussion on the mailing-list).
> 
> StringFreeze is now in effect, in order to let the I18N team do the 
> translation
> work in good conditions. The StringFreeze is currently soft (allowing
> exceptions as long as they are discussed on the mailing-list and deemed worth
> the effort). It will become a hard StringFreeze on 9th of August along with 
> the
> RC.
> 
> The requirements repository is also frozen, until all cycle-with-milestones
> deliverables have produced an RC1 and have their stable/rocky branches. If
> release critical library or client library releases are needed for Rocky past
> the freeze dates, you must request a Feature Freeze Exception (FFE) from the
> requirements team before we can do a new release to avoid having something
> released in Rocky that is not actually usable. This is done by posting to the
> openstack-dev mailing list with a subject line similar to:
> 
> [$PROJECT][requirements] FFE requested for $PROJECT_LIB
> 
> Include justification/reasoning for why an FFE is needed for this lib. If/when
> the requirements team OKs the post-freeze update, we can then process a new
> release. Including a link to the FFE in the release request is not required,
> but would be helpful in making sure we are clear to do a new release.
> 
> Note that deliverables that are not tagged for release by the appropriate
> deadline will be reviewed to see if they are still active enough to stay on 
> the
> official project list.
> 



Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Sean McGinnis
On Thu, Jul 26, 2018 at 02:06:01PM -0500, Matt Riedemann wrote:
> On 7/26/2018 12:00 PM, Sean McGinnis wrote:
> > I wouldn't think so. Nothing is changing with the policy, so it is still of
> > interest to see which projects are following that. I don't believe the 
> > policy
> > was tied in any way with stable being an actual project team vs a SIG.
> 
> OK, then maybe as a separate issue, I would argue the tag is not maintained
> and therefore useless at best, or misleading at worst (for those projects
> that don't have it) and therefore should be removed.
> 

I'd be curious to hear more about why you don't think that tag is maintained.

For projects that assert they follow stable policy, in the release process we
have extra scrutiny that nothing is being released on stable branches that
would appear to violate the stable policy.

Granted, we need to base most of that evaluation on the commit messages, so
it's certainly possible to phrase something in a misleading way that would not
raise any red flags for stable compliance, but if that happens, I would think
it would be unintentional and rare.



[openstack-dev] OpenStack Summit Berlin - Community Voting Closing Soon

2018-07-26 Thread Ashlee Ferguson
Hi everyone,

Session voting for the Berlin Summit closes in less than 8 hours! Submit your 
votes by July 26 at 11:59pm Pacific Time (Friday, July 27 at 6:59 UTC).

VOTE HERE 

The Programming Committees will ultimately determine the final schedule. 
Community votes are meant to help inform the decision, but are not considered 
to be the deciding factor. The Programming Committee members exercise judgment 
in their area of expertise and help ensure diversity. View full details of the 
session selection process here. 


Continue to visit https://www.openstack.org/summit/berlin-2018 for all
Summit-related information.

REGISTER
Register for the Summit for $699 before prices increase after August 21 at
11:59pm Pacific Time (August 22 at 6:59am UTC).

VISA APPLICATION PROCESS
Make sure to secure your visa soon. More information about the visa
application process.

TRAVEL SUPPORT PROGRAM
August 30 is the last day to submit applications. Please submit your
applications by 11:59pm Pacific Time (August 31 at 6:59am UTC).

If you have any questions, please email sum...@openstack.org.

Cheers,
Ashlee


Ashlee Ferguson
OpenStack Foundation
ash...@openstack.org






Re: [openstack-dev] [requirements][release] FFE for os-vif 1.11.1

2018-07-26 Thread melanie witt

On Thu, 26 Jul 2018 13:01:18 -0500, Matthew Thode wrote:

On 18-07-26 10:43:05, melanie witt wrote:

Hello,

I'd like to ask for an exception to add os-vif 1.11.1 to stable/rocky. The
current release for rocky, 1.11.0, added a new feature: the NoOp Plugin, but
it's not actually usable (it's not being loaded) because we missed adding a
file to the setup.cfg.

We have fixed the problem in a one liner add to setup.cfg [1] and we would
like to be able to do another release 1.11.1 for rocky to include this fix.
That way, the NoOp Plugin feature advertised in the release notes [2] for
rocky would be usable for consumers.

[1] https://review.openstack.org/585530
[2] 
https://docs.openstack.org/releasenotes/os-vif/unreleased.html#relnotes-1-11-0



Yep, we talked about it in the release channel.

+----------------------+-----------------------+------+------------------------------------+
| Repository           | Filename              | Line | Text                               |
+----------------------+-----------------------+------+------------------------------------+
| kuryr-kubernetes     | requirements.txt      |   18 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 |
| nova                 | requirements.txt      |   59 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 |
| nova-lxd             | requirements.txt      |    7 | os-vif!=1.8.0,>=1.9.0 # Apache-2.0 |
| networking-bigswitch | requirements.txt      |    6 | os-vif>=1.1.0 # Apache-2.0         |
| networking-bigswitch | test-requirements.txt |   25 | os-vif>=1.1.0 # Apache-2.0         |
| networking-midonet   | test-requirements.txt |   40 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 |
+----------------------+-----------------------+------+------------------------------------+

All these projects would need re-releases if you plan on raising the
minimum.  They would also need reviews submitted individually for that.
An upper-constraint-only fix would not need that, but would also still
allow consumers to encounter the bug; up to you to decide.
LGTM otherwise.


We don't need to raise the minimum -- this will just be a small update 
to fix the existing 1.11.0 release. Thanks!


-melanie



Re: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed

2018-07-26 Thread Slawomir Kaplonski
Thx :)

> Wiadomość napisana przez Matt Riedemann  w dniu 
> 26.07.2018, o godz. 18:32:
> 
> On 7/23/2018 4:20 AM, Slawomir Kaplonski wrote:
>> Thanks Artom for taking care of it. Did you make any progress?
>> I think that it might be quite important to fix, as it failed around 50 times
>> during the last 7 days:
>> http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20386%2C%20in%20test_tagged_attachment%5C%22
> 
> I've proposed a Tempest change to skip that part of the test for now:
> 
> https://review.openstack.org/#/c/586292/
> 
> We could revert that and link it to artom's debug patch to see if we can 
> recreate with proper debug.
> 
> -- 
> 
> Thanks,
> 
> Matt
> 

— 
Slawek Kaplonski
Senior software engineer
Red Hat




Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Matt Riedemann

On 7/26/2018 12:00 PM, Sean McGinnis wrote:

I wouldn't think so. Nothing is changing with the policy, so it is still of
interest to see which projects are following that. I don't believe the policy
was tied in any way with stable being an actual project team vs a SIG.


OK, then maybe as a separate issue, I would argue the tag is not 
maintained and therefore useless at best, or misleading at worst (for 
those projects that don't have it) and therefore should be removed.


Who doesn't agree with me?!

--

Thanks,

Matt



[openstack-dev] [keystone] PTL Candidacy for the Stein cycle

2018-07-26 Thread Lance Bragstad
Hey everyone,

I'm writing to submit my self-nomination as keystone's PTL for the Stein
release.

We've made significant progress tackling some of the major goals we set
for keystone in Pike. Now that we're getting close to wrapping up some
of those initiatives, I'd like to continue advocating for enhanced RBAC
and unified limits. I think we can do this specifically by using them in
keystone, where applicable, and finalize them in Stein.

While a lot of the work we tackled in Rocky was transparent to users, it
paved the way for us to make strides in other areas. We focused on
refactoring large chunks of code in order to reduce technical debt and
traded some hand-built solutions in favor of well-known frameworks. In
my opinion, these are major accomplishments that drastically simplified
keystone. Because of this, it'll be easier to implement new features we
originally slated for this release. We also took time to smooth out
usability issues with unified limits and implemented support across
clients and libraries. This is going to help services consume keystone's
unified limits implementation early next release.

Additionally, I'd like to take some time in Stein to focus on the next
set of challenges and where we'd like to take keystone in the future.
One area that we haven't really had the bandwidth to focus on is
federation. From Juno to Ocata there was a consistent development focus
on supporting federated deployments, resulting in a steady stream of
features or improvements. Conversely, I think having a break from
constant development will help us approach it with a fresh perspective.
In my opinion, federation improvements are a timely thing to work on
given the use-cases that have been cropping up in recent summits and
PTGs. Ideally, I think it would be great to come up with an actionable plan
for making federation easier to use and a first-class tested citizen of
keystone.

Finally, I'll continue to place utmost importance on assisting other
services in how they consume and leverage the work we do.

Thanks for taking a moment to read what I have to say and I look forward
to catching up in Denver.

Lance





[openstack-dev] [tripleo] Rocky milestone 3 was released!

2018-07-26 Thread Emilien Macchi
Kudos to the team, we just released our third Rocky milestone!

As usual, I prepared some numbers so you can see our project health:
https://docs.google.com/presentation/d/1RV30OVxmXv1y_z33LuXMVB56TA54Urp7oHIoTNwrtzA/edit#slide=id.p

Some comments:
1) More bugs were fixed in Rocky milestone 3 than before.
2) Milestone 2 and Milestone 3 delivered the same number of blueprints.
3) Our list of core reviewers keeps growing!
4) Commits and LOC are much higher than in Queens.

Now the focus should be on stabilization and bug fixing; we are in release
candidate mode, which means no more features unless you have an FFE granted.

Thanks everyone for this hard work!
-- 
Emilien Macchi


Re: [openstack-dev] [tripleo] [tripleo-validations] using top-level fact vars will be deprecated in future Ansible versions

2018-07-26 Thread Emilien Macchi
On Thu, Jul 26, 2018 at 12:30 PM John Fulton  wrote:

> Do we have a plan for which Ansible version might be the default in
> upcoming TripleO versions?
>
> If this is the thread to discuss it then, I want to point out that
> TripleO's been using ceph-ansible for Ceph integration on the client
> and server side since Pike and that ceph-ansible 3.1 (which TripleO
> master currently uses) fails on Ansible 2.6 and that this won't be
> addressed until ceph-ansible 3.2.
>

I think the last thing we want is to break TripleO + Ceph integration, so we
will maintain Ansible 2.5.x in TripleO Rocky and upgrade to 2.6.x in Stein,
when ceph-ansible 3.2 is used and working well.
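Expressed as a pip-style version specifier, staying on the 2.5 series while excluding 2.6 would look roughly like this (illustrative only; the actual pin in TripleO packaging may be expressed differently):

```text
# Stay on Ansible 2.5.x until ceph-ansible 3.2 supports 2.6
ansible>=2.5.0,<2.6.0
```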

Hope it's fine for everyone,
-- 
Emilien Macchi


Re: [openstack-dev] [requirements][release] FFE for os-vif 1.11.1

2018-07-26 Thread Matthew Thode
On 18-07-26 10:43:05, melanie witt wrote:
> Hello,
> 
> I'd like to ask for an exception to add os-vif 1.11.1 to stable/rocky. The
> current release for rocky, 1.11.0, added a new feature: the NoOp Plugin, but
> it's not actually usable (it's not being loaded) because we missed adding a
> file to the setup.cfg.
> 
> We have fixed the problem with a one-line addition to setup.cfg [1], and we would
> like to be able to do another release 1.11.1 for rocky to include this fix.
> That way, the NoOp Plugin feature advertised in the release notes [2] for
> rocky would be usable for consumers.
> 
> [1] https://review.openstack.org/585530
> [2] 
> https://docs.openstack.org/releasenotes/os-vif/unreleased.html#relnotes-1-11-0
> 

Yep, we talked about it in the release channel.

+----------------------+-----------------------+------+------------------------------------+
| Repository           | Filename              | Line | Text                               |
+----------------------+-----------------------+------+------------------------------------+
| kuryr-kubernetes     | requirements.txt      |   18 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 |
| nova                 | requirements.txt      |   59 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 |
| nova-lxd             | requirements.txt      |    7 | os-vif!=1.8.0,>=1.9.0 # Apache-2.0 |
| networking-bigswitch | requirements.txt      |    6 | os-vif>=1.1.0 # Apache-2.0         |
| networking-bigswitch | test-requirements.txt |   25 | os-vif>=1.1.0 # Apache-2.0         |
| networking-midonet   | test-requirements.txt |   40 | os-vif!=1.8.0,>=1.7.0 # Apache-2.0 |
+----------------------+-----------------------+------+------------------------------------+

All these projects would need re-releases if you plan on raising the
minimum.  They would also need reviews submitted individually for that.
An upper-constraint-only fix would not need that, but would also still
allow consumers to encounter the bug; up to you to decide.
LGTM otherwise.
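To make the two options concrete, here is a hedged sketch in openstack/requirements terms (illustrative lines, not actual patches):

```text
# Option 1: bump only the tested pin in upper-constraints.txt.
# No consuming project needs a re-release, but installs that resolve an
# older os-vif can still hit the bug.
os-vif===1.11.1

# Option 2: raise the minimum in each consumer's requirements.txt,
# forcing re-releases of every project in the table above.
os-vif!=1.8.0,>=1.11.1  # Apache-2.0
```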

-- 
Matthew Thode (prometheanfire)




[openstack-dev] [requirements][release] FFE for os-vif 1.11.1

2018-07-26 Thread melanie witt

Hello,

I'd like to ask for an exception to add os-vif 1.11.1 to stable/rocky. 
The current release for rocky, 1.11.0, added a new feature: the NoOp 
Plugin, but it's not actually usable (it's not being loaded) because we 
missed adding a file to the setup.cfg.


We have fixed the problem with a one-line addition to setup.cfg [1], and we 
would like to be able to do another release, 1.11.1, for rocky to include 
this fix. That way, the NoOp Plugin feature advertised in the release 
notes [2] for rocky would be usable for consumers.
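For background, os-vif plugins are discovered at runtime via setuptools entry points, so code that isn't wired up in setup.cfg never gets loaded even if it ships in the tree. The fragment below only illustrates that mechanism; the section, plugin, and module names are assumptions, and the real one-line fix is in the linked review, not reproduced here.

```ini
# Illustrative setup.cfg fragment -- names are assumptions, not the real fix.
[entry_points]
os_vif =
    noop = vif_plug_noop.noop:NoOpPlugin
```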


Cheers,
-melanie

[1] https://review.openstack.org/585530
[2] 
https://docs.openstack.org/releasenotes/os-vif/unreleased.html#relnotes-1-11-0







Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Sean McGinnis
> 
> Only question I have is will the stable:follows-policy governance tag [1]
> also be removed?
> 
> [1]
> https://governance.openstack.org/tc/reference/tags/stable_follows-policy.html
> 

I wouldn't think so. Nothing is changing with the policy, so it is still of
interest to see which projects are following that. I don't believe the policy
was tied in any way with stable being an actual project team vs a SIG.



Re: [openstack-dev] [tc] Fast-track: Remove Stable branch maintenance as a project team

2018-07-26 Thread Matt Riedemann

On 7/25/2018 3:07 PM, Mohammed Naser wrote:

Hi everyone:

This email is just to notify everyone on the TC and the community that
the change to remove the stable branch maintenance as a project
team[1] has been fast-tracked[2].

The change should be approved on 2018-07-28 however it is beneficial
to remove the stable branch team (which has been moved into a SIG) in
order for `tonyb` to be able to act as an election official.

There seems to be no opposing votes however a revert is always
available if any members of the TC are opposed to the change[3].

Thanks to Tony for all of his help in the elections.

Regards,
Mohammed

[1]:https://review.openstack.org/#/c/584206/
[2]:https://governance.openstack.org/tc/reference/house-rules.html#other-project-team-updates
[3]:https://governance.openstack.org/tc/reference/house-rules.html#rolling-back-fast-tracked-changes


First time I've heard of it... but thanks. I personally don't think 
calling something a SIG magically makes people appear to help out; 
creating an official stable maintenance project team and PTL didn't 
really grow a contributor base either, but so it goes.


Only question I have is will the stable:follows-policy governance tag 
[1] also be removed?


[1] 
https://governance.openstack.org/tc/reference/tags/stable_follows-policy.html


--

Thanks,

Matt



[openstack-dev] Subject: [all][api] POST /api-sig/news

2018-07-26 Thread Ed Leafe
Greetings OpenStack community,

We had a short but sweet meeting today, as all four core members were around 
for the first time in several weeks. The one action item from last week, 
reaching out to the people working on the GraphQL experiment, was done, but so 
far we have not heard back on their progress.

notmyname suggested that we investigate the IETF [7] draft proposal for Best
Practices when building HTTP protocols [8], which may be relevant to our work,
so we all agreed to review the document (all 30 pages of it!) by next week,
when we will discuss it further.

Finally, we merged two patches that had had universal approval (yes, the 
*entire* universe), sending cdent's stats through the roof.

As always if you're interested in helping out, in addition to coming to the 
meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for changes 
over time. If you find something that's not quite right, submit a patch [6] to 
fix it.
* Have you done something for which you think guidance would have made things 
easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

* Expand schema for error.codes to reflect reality
  https://review.openstack.org/#/c/580703/

* Add links to error-example.json
  https://review.openstack.org/#/c/578369/

# API Guidelines Proposed for Freeze

* None

# Guidelines that are ready for wider review by the whole community.

* None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the 
OpenStack developer mailing list[1] with the tag "[api]" in the subject. In 
your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://ietf.org/
[8] https://tools.ietf.org/html/draft-ietf-httpbis-bcp56bis-06

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

-- Ed Leafe








Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-26 Thread Matt Riedemann

On 7/25/2018 1:46 AM, Ghanshyam Mann wrote:

Based on avg time, I have voted (currently based on a 14-day avg) on the
ethercalc for which tests to mark as slow. I took the criterion of >120 sec
avg time.  Once more and more people have voted there, we can mark them slow.

[3]https://ethercalc.openstack.org/dorupfz6s9qt


I've made my votes for the compute-specific tests along with 
justification either way on each one.


--

Thanks,

Matt



Re: [openstack-dev] [Nova][Cinder][Tempest] Help with tempest.api.compute.servers.test_device_tagging.TaggedAttachmentsTest.test_tagged_attachment needed

2018-07-26 Thread Matt Riedemann

On 7/23/2018 4:20 AM, Slawomir Kaplonski wrote:

Thx Artom for taking care of it. Did you make any progress?
I think that it might be quite important to fix as it failed around 50 times 
during last 7 days:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20386%2C%20in%20test_tagged_attachment%5C%22


I've proposed a Tempest change to skip that part of the test for now:

https://review.openstack.org/#/c/586292/

We could revert that and link it to artom's debug patch to see if we can 
recreate with proper debug.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [tripleo-validations] using top-level fact vars will be deprecated in future Ansible versions

2018-07-26 Thread John Fulton
On Thu, Jul 26, 2018 at 1:48 AM Cédric Jeanneret  wrote:
>
> Hello Sam,
>
> Thanks for the clarifications.
>
> On 07/25/2018 07:46 PM, Sam Doran wrote:
> > I spoke with other Ansible Core devs to get some clarity on this change.
> >
> > This is not a change that is being made quickly, lightly, or without a
> > whole bunch of reservations. In fact, that PR created by agaffney may
> > not be merged any time soon. He just wanted to get something started and
> > there is still ongoing discussion on that PR. It is definitely a WIP at
> > this point.
> >
> > The main reason for this change is that pretty much all of the Ansible
> > CVEs to date came from "fact injection", meaning a fact that contains
> > executable Python code Jinja will merrily exec(). Vars, hostvars, and
> > facts are different in Ansible (yes, this is confusing — sorry). All
> > vars go through a templating step. By copying facts to vars, it means
> > facts get templated controller side which could lead to controller
> > compromise if malicious code exists in facts.
> >
> > We created an AnsibleUnsafe class to protect against this, but stopping
> > the practice of injecting facts into vars would close the door
> > completely. It also alleviates some name collisions if you set a hostvar
> > that has the same name as a var. We have some methods that filter out
> > certain variables, but keeping facts and vars in separate spaces is much
> > cleaner.
> >
> > This also does not change how hostvars set via set_fact are referenced.
> > (set_fact should really be called set_host_var). Variables set with
> > set_fact are not facts and are therefore not inside the ansible_facts
> > dict. They are in the hostvars dict, which you can reference as {{
> > my_var }} or {{ hostvars['some-host']['my_var'] }} if you need to look
> > it up from a different host.
>
> so if, for convenience, we do this:
> vars:
>   a_mounts: "{{ hostvars[inventory_hostname].ansible_facts.mounts }}"
>
> That's completely acceptable and correct, and won't create any security
> issue, right?
>
> >
> > All that being said, the setting to control this behavior as Emilien
> > pointed out is inject_facts_as_vars, which defaults to True and will
> > remain that way for the foreseeable future. I would not rush into
> > changing all the fact references in playbooks. It can be a gradual process.
> >
> > Setting inject_facts_as_vars to False means ansible_hostname becomes
> > ansible_facts.hostname. You do not have to use the hostvars dictionary —
> > that is for looking up facts about hosts other than the current host.
> >
> > If you wanted to be proactive, you could start using the ansible_facts
> > dictionary today since it is compatible with the default setting and
> > will not affect others trying to use playbooks that reference ansible_facts.
> >
> > In other words, with the default setting of True, you can use either
> > ansible_hostname or ansible_facts.hostname. Changing it to False means
> > only ansible_facts.hostname is defined.
> >
> >> Like, really. I know we can't really have a word about that kind of
> >> decision, but... damn, WHY ?!
> >
> > That is most certainly not the case. Ansible is developed in the open
> > and we encourage community members to attend meetings and add
> > topics to the agenda for discussion.
> > Ansible also goes through a proposal process for major changes, which
> > you can view here.
> >
> > You can always go to #ansible-devel on Freenode or start a discussion on
> > the mailing list to speak with
> > the Ansible Core devs about these things as well.
>
> And I also have the "Because" linked to my "why" :). big thanks!

Do we have a plan for which Ansible version might be the default in
upcoming TripleO versions?

If this is the thread to discuss it then, I want to point out that
TripleO's been using ceph-ansible for Ceph integration on the client
and server side since Pike and that ceph-ansible 3.1 (which TripleO
master currently uses) fails on Ansible 2.6 and that this won't be
addressed until ceph-ansible 3.2.

  John

>
> Bests,
>
> C.
>
> >
> > ---
> >
> > Respectfully,
> >
> > Sam Doran
> > Senior Software Engineer
> > Ansible by Red Hat
> > sdo...@redhat.com 
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> --
> Cédric Jeanneret
> Software Engineer
> DFG:DF
>
> __
> OpenStack Development Mailing List (not for usage questions)
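
To make the two reference styles from the thread above concrete, here is a
minimal illustrative playbook sketch (not taken from the thread itself). Both
tasks work with the default inject_facts_as_vars=True; only the second keeps
working if that setting is flipped to False:

```yaml
- hosts: all
  gather_facts: true
  tasks:
    # Works only while inject_facts_as_vars=True (the current default):
    # the fact is injected as a top-level variable.
    - name: Injected top-level fact variable
      debug:
        msg: "{{ ansible_hostname }}"

    # Forward-compatible form: reads from the ansible_facts dict and
    # keeps working when inject_facts_as_vars is set to False.
    - name: Namespaced fact reference
      debug:
        msg: "{{ ansible_facts.hostname }}"
```
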

Re: [openstack-dev] [tripleo] FFE request for config-download-ui

2018-07-26 Thread Alex Schultz
On Thu, Jul 26, 2018 at 2:31 AM, Jiri Tomasek  wrote:
> Hello,
>
> I would like to request a FFE for [1]. Current status of TripleO UI patches
> is here [2] there are last 2 patches pending review which currently depend
> on [3] which is close to land.
>
> [1] https://blueprints.launchpad.net/tripleo/+spec/config-download-ui/
> [2]
> https://review.openstack.org/#/q/project:openstack/tripleo-ui+branch:master+topic:bp/config-download-ui
> [3] https://review.openstack.org/#/c/583293/
>

Sounds good. Let's get those last two patches landed.

Thanks,
-Alex

> Thanks
> -- Jiri
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] compute nodes use of placement

2018-07-26 Thread Chris Dent


HTML: https://anticdent.org/novas-use-of-placement.html

A year and a half ago I did some analysis on how [nova uses
placement](http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html).

I've repeated some of that analysis today and here's a brief summary
of the results. Note that I don't present this because I'm concerned
about load on placement, we've demonstrated that placement scales
pretty well. Rather, this analysis indicates that the compute node
is doing redundant work which we'd prefer not to do. The compute
node can't scale horizontally in the same way placement does. If
offloading the work to placement and being redundant is the easiest
way to avoid work on the compute node, let's do that, but that
doesn't seem to be quite what's happening here.

Nova uses placement mainly from two places:

* The `nova-compute` nodes report resource provider and inventory to
  placement and make sure that the placement view of what hardware
  is present is accurate.

* The `nova-scheduler` processes request candidates for placement,
  and claim resources by writing allocations to placement.

There are some additional interactions, mostly associated with
migrations or fixing up unusual edge cases. Since those things are
rare they are sort of noise in this discussion, so left out.

When a basic (where basic means no nested resource providers)
compute node starts up it POSTs to create a resource provider and
then PUTs to set the inventory. After that a periodic job runs,
usually every 60 seconds. In that job we see the following 11
requests:

GET /placement/resource_providers?in_tree=82fffbc6-572b-4db0-b044-c47e34b27ec6
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/aggregates
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/traits
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/allocations
GET /placement/resource_providers?in_tree=82fffbc6-572b-4db0-b044-c47e34b27ec6
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/aggregates
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/traits
GET /placement/resource_providers/82fffbc6-572b-4db0-b044-c47e34b27ec6/inventories

A year and a half ago it was 5 requests per-cycle, but they were
different requests:

GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/aggregates
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/allocations
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/aggregates
GET /placement/resource_providers/0e33c6f5-62f3-4522-8f95-39b364aa02b4/inventories

The difference comes from two changes:

* We no longer confirm allocations on the compute node.
* We now have things called ProviderTrees, which are responsible
  for managing nested providers, aggregates and traits in a unified
  fashion.

It appears, however, that we have some redundancies. We get
inventories 4 times; aggregates, providers and traits 2 times, and
allocations once.

The `in_tree` calls happen from the report client method
`_get_providers_in_tree` which is called by
`_ensure_resource_provider` which can be called from multiple
places, but in this case is being called both times from
`get_provider_tree_and_ensure_root`, which is also responsible for
two of the inventory requests.

`get_provider_tree_and_ensure_root` is called by `_update` in the
resource tracker.

`_update` is called by both `_init_compute_node` and
`_update_available_resource`, every single periodic job iteration.
`_init_compute_node` is called from `_update_available_resource`
itself.

That accounts for the overall doubling.

The two inventory calls per group come from the following, in
`get_provider_tree_and_ensure_root`:

1. `_ensure_resource_provider` in the report client calls
   `_refresh_and_get_inventory` for every provider in the tree (the
   result of the `in_tree` query)

2. Immediately after the call to `_ensure_resource_provider`
   every provider in the provider tree (from
   `self._provider_tree.get_provider_uuids()`) then has a
   `_refresh_and_get_inventory` call made.

In a non-sharing, non-nested scenario (such as a single node
devstack, which is where I'm running this analysis) these are the
exact same one resource provider. I'm insufficiently aware of what
might be in the provider tree in more complex situations to be clear
on what could be done to limit redundancy here, but it's a place
worth looking.

The requests for aggregates and traits happen via
`_refresh_associations` in `_ensure_resource_provi
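
The redundancy above is easy to see in miniature. The sketch below is purely
illustrative (none of these classes exist in nova): it replays the 11-request
pattern from the periodic job and shows how a per-cycle memoization layer
would collapse it to the 5 unique resources actually fetched:

```python
# Hypothetical sketch: deduplicating identical placement GETs within one
# periodic-job iteration. All names here are illustrative, not nova's code.

class CountingPlacementClient:
    """Stands in for the placement REST client; counts real round trips."""
    def __init__(self):
        self.get_count = 0

    def get(self, url):
        self.get_count += 1
        return {"url": url}  # pretend JSON body


class PerCycleCache:
    """Memoizes GETs for the duration of a single update cycle."""
    def __init__(self, client):
        self.client = client
        self._cache = {}

    def get(self, url):
        if url not in self._cache:
            self._cache[url] = self.client.get(url)
        return self._cache[url]


def run_cycle(session, uuid):
    # Mirrors the 11-request pattern shown above: the same inventories URL
    # is fetched four times; in_tree, aggregates and traits twice each.
    urls = [
        f"/resource_providers?in_tree={uuid}",
        f"/resource_providers/{uuid}/inventories",
        f"/resource_providers/{uuid}/aggregates",
        f"/resource_providers/{uuid}/traits",
        f"/resource_providers/{uuid}/inventories",
        f"/resource_providers/{uuid}/allocations",
        f"/resource_providers?in_tree={uuid}",
        f"/resource_providers/{uuid}/inventories",
        f"/resource_providers/{uuid}/aggregates",
        f"/resource_providers/{uuid}/traits",
        f"/resource_providers/{uuid}/inventories",
    ]
    for u in urls:
        session.get(u)


client = CountingPlacementClient()
run_cycle(client, "82fffbc6")                   # uncached: 11 round trips
uncached = client.get_count

client2 = CountingPlacementClient()
run_cycle(PerCycleCache(client2), "82fffbc6")   # cached: 5 unique URLs
cached = client2.get_count
```

In a nested or sharing scenario the cache key would need to cover every
provider in the tree, but the shape of the saving is the same.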

Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-26 Thread melanie witt

On Thu, 26 Jul 2018 09:19:38 -0600, Chris Friesen wrote:

On 07/25/2018 06:21 PM, Alex Xu wrote:



2018-07-26 0:29 GMT+08:00 William M Edmonds <edmon...@us.ibm.com>:


 Ghanshyam Mann <gm...@ghanshyammann.com>
 wrote on 07/25/2018 05:44:46 AM:
 ... snip ...
 > 1. is it ok to show the keypair used info via API ? any original
 > rational not to do so or it was just like that from starting.

 keypairs aren't tied to a tenant/project, so how could nova track/report a
 quota for them on a given tenant/project? Which is how the API is
 constructed... note the "tenant_id" in GET 
/os-quota-sets/{tenant_id}/detail


Keypair usage is only valid for the API 'GET
/os-quota-sets/{tenant_id}/detail?user_id={user_id}'


The objection is that keypairs are tied to the user, not the tenant, so it
doesn't make sense to specify a tenant_id in the above query.

And for Pike at least I think the above command does not actually show how many
keypairs have been created by that user...it still shows zero.


Yes, for Pike during the re-architecting of quotas to count resources 
instead of tracking usage separately, we kept the "always zero" count 
for usage of keypairs, server group members, and security group rules, 
so as not to change the behavior. It's been my understanding that we 
would need a microversion to change any of those to actually return a 
count. It's true the counts would not make sense under the 'tenant_id' 
part of the URL though.


-melanie




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
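
A toy sketch of the counting problem discussed above (illustrative only, not
nova's code): instances carry a project_id, so a tenant-scoped count is well
defined, while keypairs carry only a user_id, which is why a tenant-scoped
keypair usage can only ever be meaningful with a user_id qualifier:

```python
# Illustrative data: instances are owned by a project (and a user),
# keypairs are owned by a user only -- they have no project_id at all.
instances = [
    {"project_id": "p1", "user_id": "u1"},
    {"project_id": "p1", "user_id": "u2"},
]
keypairs = [
    {"user_id": "u1", "name": "kp1"},
    {"user_id": "u1", "name": "kp2"},
]

def count_instances(project_id):
    # Tenant-scoped count: sensible, since instances have a project_id.
    return sum(1 for i in instances if i["project_id"] == project_id)

def count_keypairs(user_id):
    # The only meaningful key for keypairs is the user, which is why
    # GET /os-quota-sets/{tenant_id}/detail cannot report keypair usage
    # without a user_id query parameter.
    return sum(1 for k in keypairs if k["user_id"] == user_id)
```
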


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-26 Thread Chris Friesen

On 07/25/2018 06:22 PM, Alex Xu wrote:



2018-07-26 1:43 GMT+08:00 Chris Friesen <chris.frie...@windriver.com>:



Keypairs are weird in that they're owned by users, not projects.  This is
arguably wrong, since it can cause problems if a user boots an instance with
their keypair and then gets removed from a project.

Nova microversion 2.54 added support for modifying the keypair associated
with an instance when doing a rebuild.  Before that there was no clean way
to do it.


I don't understand this; we didn't count the keypair usage together with the
instance, we just count the keypair usage for a specific user.



I was giving an example of why it's strange that keypairs are owned by users 
rather than projects.  (When instances are owned by projects, and keypairs are 
used to access instances.)


Chris



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-26 Thread Chris Friesen

On 07/25/2018 06:21 PM, Alex Xu wrote:



2018-07-26 0:29 GMT+08:00 William M Edmonds <edmon...@us.ibm.com>:


Ghanshyam Mann <gm...@ghanshyammann.com>
wrote on 07/25/2018 05:44:46 AM:
... snip ...
> 1. is it ok to show the keypair used info via API ? any original
> rational not to do so or it was just like that from starting.

keypairs aren't tied to a tenant/project, so how could nova track/report a
quota for them on a given tenant/project? Which is how the API is
constructed... note the "tenant_id" in GET /os-quota-sets/{tenant_id}/detail


Keypair usage is only valid for the API 'GET
/os-quota-sets/{tenant_id}/detail?user_id={user_id}'


The objection is that keypairs are tied to the user, not the tenant, so it 
doesn't make sense to specify a tenant_id in the above query.


And for Pike at least I think the above command does not actually show how many 
keypairs have been created by that user...it still shows zero.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Should we add a tempest-slow job?

2018-07-26 Thread Matt Riedemann

On 5/13/2018 9:06 PM, Ghanshyam Mann wrote:

+1 on the idea. As of now the slow-marked tests are from nova, cinder and
neutron scenario tests plus 2 API swift tests only [4]. I agree that
making a generic job in tempest is better for maintainability. We can
use the existing job for that with the modifications below:
-  We can migrate the
"legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job
to zuulv3 in the tempest repo
-  We can see if we can move the migration tests out of it and use the
"nova-live-migration" job (in the tempest check pipeline), which is much
better at live migration env setup and is controlled by nova.
-  then it can be named something like
"tempest-scenario-multinode-lvm-multibackend".
-  run this job in the nova, cinder, neutron check pipelines instead of experimental.

Like this -
https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job

That makes the scenario job generic, running all scenario tests
including slow tests with concurrency 2. I made a few cleanups and moved
the live migration tests out of it, as they are already run by the
'nova-live-migration' job. The last patch makes this job voting on the
tempest side.

If it looks good, we can use this to run in the project-side pipelines as voting.

-gmann



I should have said something earlier, but I've said it on my original 
nova change now:


https://review.openstack.org/#/c/567697/

What was implemented in Tempest isn't really at all what I was going 
for, especially since it doesn't run the API tests marked 'slow'. All I 
want is a job like tempest-full (which excludes slow tests) except one
that *only* runs slow tests. They would run a mutually
exclusive set of tests so we have that coverage. I don't care if the 
scenario tests are run in parallel or serial (it's probably best to 
start in serial like tempest-full today and then change to parallel 
later if that settles down).


But I think it's especially important given:

https://review.openstack.org/#/c/567697/2

That we have a job which only runs slow tests because we're going to be 
marking more tests as "slow" pretty soon and we don't need the overlap 
with the existing tests that are run in tempest-full.


--
Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
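
For reference, the mutually exclusive split Matt describes can be expressed as
two test-selection regexes. The patterns below resemble the ones tempest's tox
environments use (treat the exact patterns as an assumption), relying on the
bracketed attribute list, e.g. "[id-...,slow]", that tempest appends to test
IDs:

```python
import re

# One filter excludes anything tagged slow, the other selects only the
# slow-tagged tests; together they partition the suite with no overlap.
exclude_slow = re.compile(r"(?!.*\[.*\bslow\b.*\])(^tempest\.(api|scenario))")
only_slow = re.compile(r"\[.*\bslow\b.*\]")

tests = [
    "tempest.api.compute.test_servers.TestServers.test_create[id-1]",
    "tempest.scenario.test_volume_migrate.TestMigrate.test_attach[id-2,slow]",
    "tempest.scenario.test_network_basic_ops.TestOps.test_ops[id-3]",
]

fast = [t for t in tests if exclude_slow.match(t)]
slow = [t for t in tests if only_slow.search(t)]
```

Applied to the full suite, every test lands in exactly one of the two jobs,
which is the coverage property being asked for.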


Re: [openstack-dev] [tripleo] Setting swift as glance backend

2018-07-26 Thread Ben Nemec
It looks like Glance defaults to Swift, so you shouldn't need to do 
anything: 
https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/glance-api.yaml#L96


On 07/26/2018 12:41 AM, Samuel Monderer wrote:

Hi,

I would like to deploy a small overcloud with just one controller and 
one compute for testing.

I want to use swift as the glance backend.
How do I configure the overcloud templates?

Samuel


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Considering the transfer of the project leadership

2018-07-26 Thread Sean McGinnis
> >
> > Good news: recently a team from Samsung R&D Center in Krakow, Poland
> > joined us. They're building a product on OpenStack, have done improvements
> > on Trove (internally), and are now interested in contributing to the community,
> > starting by migrating the integration tests to the tempest plugin. They're
> > also willing and ready to act in the PTL role. The only problem for their
> > nomination may be that none of them have a patch merged into the Trove
> > projects. There are some in the trove-tempest-plugin waiting for review, but
> > according to the activity of the project, these patches may need a long
> > time to merge (and we're at Rocky milestone-3; I think we could merge
> > patches in the trove-tempest-plugin, as they're all about testing).
> >
> > I also hope and welcome that the other current active team members of Trove
> > nominate themselves; that way, we could get more discussion about
> > how we think about the direction of Trove.
> >

Great to see another group getting involved!

It's too bad there hasn't been enough time to build up some experience working
upstream and getting at least a few more commits under their belt, but this
sounds like things are heading in the right direction.

Since the new folks are still so new - if this works for you - I would
recommend continuing on as the official PTL for one more release, but with the
understanding that you would just be around to answer questions and give advice
to help the new team get up to speed. That should hopefully be a small time
commitment for you while still easing that transition.

Then hopefully by the T release it would not be an issue at all for someone
else to step up as the new PTL. Or even if things progress well, you could step
down as PTL at some point during the Stein cycle if someone is ready to take
over for you.

Just a suggestion to help ease the process.

Sean


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] FFE for multi-backend

2018-07-26 Thread Abhishek Kekane
I'm asking for a Feature Freeze Exception for the Multiple backend support
(multi-store) feature [0]. The only remaining work is a versioning patch to
flag this feature as experimental, and it should be completed early next week.

[0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multi-store.html

Patches open for review:

https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:bp/multi-store



Thanks & Best Regards,

Abhishek Kekane
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] FFE for multihash

2018-07-26 Thread Brian Rosmaita
I'm asking for a Feature Freeze Exception for the glance-side work for
the Secure Hash Algorithm Support (multihash) feature [0].  The work
is underway and should be completed early next week.

cheers,
brian

[0] 
https://specs.openstack.org/openstack/glance-specs/specs/rocky/approved/glance/multihash.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Considering the transfer of the project leadership

2018-07-26 Thread Dariusz Krol
Hello All,


as a member of Samsung R&D Center in Krakow, I would like to confirm 
that we are very interested in Trove development.

We also noticed that the Trove project has a small team now and that the 
community around Trove becomes smaller with each release, which is a 
shame since it is a great project. That is why we would like to step up 
and help with development and leadership.

We started our contribution with code reviews and we also submitted our 
first contributions to trove-tempest-plugin. We intend to increase our 
involvement in the community, but we understand it will take some time 
and help from the community.


I would like to thank current Trove team for warm welcome, and I'm 
really looking forward to the future collaboration with the community.


Kind regards,

Dariusz Krol

On 07/25/2018 06:18 PM, 赵超 wrote:
> cc to the Trove team members and guys from Samsung R&D Center in 
> Krakow, Poland privately, so any of them who are not reading the ML 
> could also be notified.
>
> On Thu, Jul 26, 2018 at 12:09 AM, 赵超 wrote:
>
> Hi All,
>
> Trove currently has a really small team, and all the active team
> members are from China, we had some good discussions during the
> Rocky online PTG meetings [1], and the goals were arranged and
> prioritized [2][3]. But it's sad that none of us could focus on the
> project, and the number of patches and reviews fell a lot in this
> cycle compared to Queens.
>
> [1] https://etherpad.openstack.org/p/trove-ptg-rocky
> [2] https://etherpad.openstack.org/p/trove-priorities-and-specs-tracking
> [3] https://docs.google.com/spreadsheets/d/1Jz6TnmRHnhbg6J_tSBXv-SvYIrG4NLh4nWejupxqdeg/edit#gid=0
>
> And for me, it was a really great chance to play the PTL role for
> Trove, and I learned a lot during this cycle (from the Trove projects
> to the CI infrastructure, and more). However in this cycle, I have
> had no bandwidth to work on the project for months, and the
> situation seems unlikely to be better in the foreseeable future, so I think
> it's better to transfer the leadership, and look for opportunities
> for more participation in the project.
>
> Good news: recently a team from Samsung R&D Center in Krakow,
> Poland joined us. They're building a product on OpenStack, have
> done improvements on Trove (internally), and are now interested in
> contributing to the community, starting by migrating the
> integration tests to the tempest plugin. They're also willing and
> ready to act in the PTL role. The only problem for their
> nomination may be that none of them have a patch merged into the
> Trove projects. There are some in the trove-tempest-plugin waiting
> for review, but according to the activity of the project, these
> patches may need a long time to merge (and we're at Rocky
> milestone-3; I think we could merge patches in the
> trove-tempest-plugin, as they're all about testing).
>
> I also hope and welcome that the other current active team members of
> Trove nominate themselves; that way, we could get more
> discussion about how we think about the direction of Trove.
>
> I'll still be here, to help with the migration of the integration tests,
> CentOS guest image support, cluster improvements and all the other
> goals we discussed before, and code review.
>
> Thanks.
>
> -- 
> To be free as in freedom.
>
>
>
>
> -- 
> To be free as in freedom.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][release][election][adjutant] Welcome Adjutant as an official project!

2018-07-26 Thread Monty Taylor

On 07/17/2018 08:19 PM, Adrian Turjak wrote:

Thanks!

As the current project lead for Adjutant I welcome the news, and while I
know it wasn't an easy process would like to thank everyone involved in
the voting. All the feedback (good and bad) will be taken on board to
make the service as suited for OpenStack as possible in the space we've
decided it can fit.

Now to onboarding, choosing a suitable service type, and preparing for a
busy Stein cycle!


Welcome!

I believe you're already aware, but once you have chosen a service type, 
make sure to submit a patch to 
https://git.openstack.org/cgit/openstack/service-types-authority



On 18/07/18 05:52, Doug Hellmann wrote:

The Adjutant team's application [1] to become an official project
has been approved. Welcome!

As I said on the review, because it is past the deadline for Rocky
membership, Adjutant will not be considered part of the Rocky
release, but a future release can be part of Stein.

The team should complete the onboarding process for new projects,
including holding PTL elections for Stein, setting up deliverable
files in the openstack/releases repository, and adding meeting
information to eavesdrop.openstack.org.

I have left a comment on the patch setting up the Stein election
to ask that the Adjutant team be included.  We can also add Adjutant
to the list of projects on docs.openstack.org for Stein, after
updating your publishing job(s).

Doug

[1] https://review.openstack.org/553643

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sdk] PTL Candidacy for the Stein cycle

2018-07-26 Thread Monty Taylor

Hi everybody!

I'd like to run for PTL of OpenStackSDK again.

This last cycle was great. os-client-config is now just a thin wrapper 
around openstacksdk. shade still has a bunch of code, but the shade 
OpenStackCloud object is a subclass of openstack.connection.Connection,
so we're in a good position to turn shade into a thin wrapper.

Ansible and nodepool are now using openstacksdk directly rather than
shade and os-client-config. python-openstackclient is also now using
openstacksdk for config instead of os-client-config. We were able to 
push some of the special osc code down into keystoneauth so that it gets 
its session directly from openstacksdk now too.


We plumbed os-service-types in to the config layer so that people can
use any of the official aliases for a service in their config. 
Microversion discovery was added - and we actually even are using it for 
at least one method (way to be excited, right?)


I said last time that we needed to get a 1.0 out during this cycle and 
we did not accomplish that.


Moving forward my number one priority for the Stein cycle is to get the 
1.0 release cut, hopefully very early in the cycle. We need to finish 
plumbing discovery through everywhere, and we need to rationalize the 
Resource objects and the shade munch objects. As soon as those two are 
done, 1.0 here we come.


After we've got a 1.0, I think we should focus on getting 
python-openstackclient starting to use more of openstacksdk. I'd also 
like to
start getting services using openstacksdk so that we can start reducing 
the number of moving parts everywhere.


We have cross-testing with the upstream Ansible modules. We should move 
the test playbooks themselves out of the openstacksdk repo and into the 
Ansible repo.


The caching layer needs an overhaul. What's there was written with
nodepool in mind, and is **heavily** relied on in the gate. We can't 
break that, but it's not super friendly for people who are not nodepool 
(which is most people)


I'd like to start moving methods from the shade layer into the sdk
proxy layer and, where it makes sense, make the shade layer simple 
passthrough calls to the proxy layer. We really shouldn't have two 
different methods for uploading images to a cloud, for instance.


Finally, we have some AMAZING docs - but with the merging of shade and
os-client-config the overview leaves much to be desired in terms of 
leading people towards making the right choices. It would be great to 
get that cleaned up.


I'm sure there will be more things to do too. There always are.

In any case, I'd love to keep helping to pushing these rocks uphill.

Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-4, July 30 - August 3

2018-07-26 Thread Sean McGinnis
Hey, I thought you guys might be interested in this release countdown info. ;)

Development Focus
-----------------

The R-4 week is our one deadline-free week between the lib freezes and the Rocky-3
milestone and RC.

Work should be focused on fixing any requirements update issues, critical bugs,
and wrapping up feature work to prepare for the Release Candidate deadline (for
deliverables following the with-milestones model) or final Rocky releases (for
deliverables following the with-intermediary model) next Thursday, 9th of
August.

General Information
-------------------

For deliverables following the cycle-with-milestones model, we are now (after
the day I send this) past Feature Freeze. The focus should be on determining
and fixing release-critical bugs. At this stage only bugfixes should be
approved for merging in the master branches: feature work should only be
considered if explicitly granted a Feature Freeze exception by the team PTL
(after a public discussion on the mailing-list).

StringFreeze is now in effect, in order to let the I18N team do the translation
work in good conditions. The StringFreeze is currently soft (allowing
exceptions as long as they are discussed on the mailing-list and deemed worth
the effort). It will become a hard StringFreeze on 9th of August along with the
RC.

The requirements repository is also frozen, until all cycle-with-milestones
deliverables have produced an RC1 and have their stable/rocky branches. If
release critical library or client library releases are needed for Rocky past
the freeze dates, you must request a Feature Freeze Exception (FFE) from the
requirements team before we can do a new release to avoid having something
released in Rocky that is not actually usable. This is done by posting to the
openstack-dev mailing list with a subject line similar to:

[$PROJECT][requirements] FFE requested for $PROJECT_LIB

Include justification/reasoning for why a FFE is needed for this lib. If/when
the requirements team OKs the post-freeze update, we can then process a new
release. Including a link to the FFE in the release request is not required,
but would be helpful in making sure we are clear to do a new release.

Note that deliverables that are not tagged for release by the appropriate
deadline will be reviewed to see if they are still active enough to stay on the
official project list.

Actions
-------

stable/rocky branches should be created soon for all not-already-branched
libraries. You should expect 2-3 changes to be proposed for each: a .gitreview
update, a reno update (skipped for projects not using reno), and a tox.ini
constraints URL update*. Please review those in priority so that the branch can
be functional ASAP.

* The constraints update patches should not be approved until a stable/rocky
  branch has been created for openstack/requirements. Watch for an unfreeze
  announcement from the requirements team for this.
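
For reference, the tox.ini constraints URL update usually amounts to a
one-line change along these lines (a sketch only; the exact install_command
in any given project may differ):

```diff
 # tox.ini -- hypothetical constraints URL update for a new stable/rocky branch
-install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
+install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/rocky} {opts} {packages}
```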

For cycle-with-intermediary deliverables, release liaisons should consider
releasing their latest version, and creating stable/rocky branches from it
ASAP.

For cycle-with-milestones deliverables, release liaisons should wait until R-3
week to create RC1 (to avoid having an RC2 created quickly after). Review
release notes for any missing information, and start preparing "prelude"
release notes as summaries of the content of the release so that those are
merged before the first release candidate.

*Release Cycle Highlights*
Along with the prelude work, it is also a good time to start planning what
highlights you want for your project team in the cycle highlights:

Background on cycle-highlights: 
http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html
Project Team Guide, Cycle-Highlights: 
https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights
anne [at] openstack.org/annabelleB on IRC is available if you need help
selecting or writing your highlights.

For release-independent deliverables, release liaisons should check that their
deliverable file includes all the existing releases, so that they can be
properly accounted for in the releases.openstack.org website.

If your team has not done so, remember to file Rocky goal completion
information, as explained in:

https://governance.openstack.org/tc/goals/index.html#completing-goals


Upcoming Deadlines & Dates
--------------------------

PTL self-nomination ends: July 31
PTL election starts: August 1
RC1 deadline: August 9

--
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][PTL][Election] Quality Assurance PTL Candidacy for Stein

2018-07-26 Thread Ghanshyam Mann
Hi Everyone,

I would like to announce my candidacy to continue the Quality Assurance PTL 
role for Stein cycle.

I served as QA PTL in the Rocky cycle, and as my first time in the PTL role it 
was a great experience. I put in my best effort in Rocky to make sure that we 
continued serving the QA responsibilities well, while also improving many 
things in QA such as new feature test coverage, docs, and the tracking 
process.

In Rocky, the QA team successfully executed many of the targeted work items. 
A few items that went well are listed below:

* Zuul v3 migration, with base jobs available for cross-project use. 
* Running the volume v3 API as the default in gate testing, along with a 
single job running the v2 API for compatibility checks. 
* A Tempest plugin release process that maps plugin releases to Tempest 
releases. 
* Improving test coverage and the service clients.
* Releasing sub-projects like hacking, and fixing the version issues projects 
were facing on every hacking release. 
* Completing the compute microversion response schema gaps in Tempest.
* Moving Patrole further towards a stable release, with documentation, more 
coverage, etc. 
* Continuing to serve the community well despite the resource shortage in QA.
* Supporting projects with testing and fixes so they can continue their 
development. 

Apart from the above accomplishments, there are still a lot of improvements 
needed (listed below), and I will try my best to execute them in the Stein 
cycle.

* Tempest CLI unit test coverage, and switching the gate jobs to use all of 
the CLIs. This will help avoid regressions in the CLI.
* Refactoring the Tempest scenario manager, which is still in a messy state 
and hard to debug. 
* The QA SIG, which has seen no progress yet, and which would help us 
share/consume QA tooling across communities. 
* The destructive testing (Eris) project, which has also seen no progress yet. 
* Plugin cleanup to improve usage of the QA interfaces. 
* Bug triage: our target was to keep the count of new bugs low, which did 
not go well in Rocky. 

All this momentum and activity motivates me to continue for another term as 
QA PTL and take on more challenges. Let me summarize my goals and focus areas 
for the Stein cycle:

* Continue working on the backlog items listed above and finish them by 
priority.
* Help projects' development with test writing/improvement and gate 
stability.
* Plugin improvements, and helping plugin teams with everything they need 
from QA. This area needs more process and collaboration with the plugin 
teams. 
* Try my best to make progress on the Eris project.  
* Start the QA SIG to help cross-community collaboration.  
* Bring on more contributors and core reviewers.

Thanks for reading, and for considering my candidacy for the Stein cycle.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][nova] Running NFV tests in CI

2018-07-26 Thread Sean Mooney
On 24 July 2018 at 19:47, Clark Boylan  wrote:
>
> On Tue, Jul 24, 2018, at 10:21 AM, Artom Lifshitz wrote:
> > On Tue, Jul 24, 2018 at 12:30 PM, Clark Boylan  wrote:
> > > On Tue, Jul 24, 2018, at 9:23 AM, Artom Lifshitz wrote:
> > >> Hey all,
> > >>
> > >> tl;dr Humbly requesting a handful of nodes to run NFV tests in CI
> > >>
> > >> Intel has their NFV tests tempest plugin [1] and manages a third party
> > >> CI for Nova. Two of the cores on that project (Stephen Finucane and
> > >> Sean Mooney) have now moved to Red Hat, but the point still stands
> > >> that there's a need and a use case for testing things like NUMA
> > >> topologies, CPU pinning and hugepages.
> > >>
> > >> At Red Hat, we also have a similar tempest plugin project [2] that we
> > >> use for downstream whitebox testing. The scope is a bit bigger than
> > >> just NFV, but the main use case is still testing NFV code in an
> > >> automated way.
> > >>
> > >> Given that there's a clear need for this sort of whitebox testing, I
> > >> would like to humbly request a handful of nodes (in the 3 to 5 range)
> > >> from infra to run an "official" Nova NFV CI. The code doing the
> > >> testing would initially be the current Intel plugin, but we could have
> > >> a separate discussion about keeping "Intel" in the name or forking
> > >> and/or renaming it to something more vendor-neutral.
> > >
> > > The way you request nodes from Infra is through your Zuul configuration. 
> > > Add jobs to a project to run tests on the node labels that you want.
> >
> > Aha, thanks, I'll look into that. I was coming from a place of
> > complete ignorance about infra.
> > >
> > > I'm guessing this process doesn't work for NFV tests because you have 
> > > specific hardware requirements that are not met by our current VM 
> > > resources?
> > > If that is the case it would probably be best to start by documenting 
> > > what is required and where the existing VM resources fall
> > > short.
> >
> > Well, it should be possible to do most of what we'd like with nested
> > virt and virtual NUMA topologies, though things like hugepages will
> > need host configuration, specifically the kernel boot command [1]. Is
> > that possible with the nodes we have?
>
> https://docs.openstack.org/infra/manual/testing.html attempts to give you an 
> idea for what is currently available via the test environments.
>
>
> Nested virt has historically been painful because not all clouds support it 
> and those that do did not do so in a reliable way (VMs and possibly 
> hypervisors would crash). This has gotten better recently as nested virt is 
> something more people have an interest in getting working but it is still 
> hit and miss particularly as you use newer kernels in guests. I think if we 
> can continue to work together with our clouds (thank you limestone, OVH, and 
> vexxhost!) we may be able to work out nested virt that is redundant across 
> multiple clouds. We will likely need individuals willing to keep caring for 
> that though and debug problems when the next release of your favorite distro 
> shows up. Can you get by with qemu or is nested virt required?

For what it's worth, the Intel NFV CI has always run with nested virt,
since we first set it up on Ubuntu 12.04, through the time we ran it on
Fedora 20 and 21, and it continues to use nested virt on Ubuntu 16.04.
We have never had any issues with nested virt, but the key to using it
correctly is that you should always set the Nova CPU mode to
host-passthrough if you use nested virt.
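
As a concrete sketch, that setting lives in the [libvirt] section of
nova.conf on the nested-virt guest (assuming the libvirt driver; devstack
exposes the same knob as LIBVIRT_CPU_MODE in local.conf):

```ini
# nova.conf on the devstack guest -- a sketch, assuming the libvirt driver
[libvirt]
virt_type = kvm
cpu_mode = host-passthrough
```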

Because of how we currently do CPU pinning, hugepages, and NUMA affinity
in Nova today, this testing has a hard requirement on running KVM in
devstack, which means we have a hard requirement for nested virt.
There are ways around that, but the Nova core team has previously
expressed the view that adding the code changes required to allow the
use of QEMU is not warranted for CI, since we would then not be testing
the normal configuration: these features are normally only used when
performance matters, which means you will be using KVM, not QEMU.

I have tried to test OVS-DPDK in the upstream CI on three occasions in the
past (this being the most recent:
https://review.openstack.org/#/c/433491/), but without nested virt that
didn't get very far.

>
> As for hugepages, I've done a quick survey of cpuinfo across our clouds and 
> all seem to have pse available but not all have pdpe1gb available. Are you 
> using 1GB hugepages? Keep in mind that the test VMs only have 8GB of memory 
> total. As for booting with special kernel parameters you can have your job 
> make those modifications to the test environment then reboot the test 
> environment within the job. There is some Zuul specific housekeeping that 
> needs to be done post reboot, we can figure that out if we decide to go down 
> this route. Would your setup work with 2M hugepages?

The host VM does not need to be backed by hugepages at all. It does
need to have the CPU flags set to allow hugepages.
Re: [openstack-dev] [kolla] ptl non candidacy

2018-07-26 Thread zhubingbing


Thanks for your work as PTL during the Rocky cycle Jeffrey


Cheers,


zhubingbing




On 2018-07-25 11:48:24, "Jeffrey Zhang"  wrote:

Hi all,


I just want to say that I am not running for PTL for the Stein cycle. I have 
been involved in the Kolla project for almost 3 years, and recently my work 
has changed a little too, so I may not have much time for the community in the 
future. Kolla is a great project and the community is awesome. I would 
encourage everyone in the community to consider running. 


Thanks for your support :D.
--

Regards,
Jeffrey Zhang
Blog: http://xcodest.me
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] network isolation can't find files referred to on director

2018-07-26 Thread Samuel Monderer
Hi James,

I understand the network-environment.yaml will also be generated.
What do you mean by rendered path? Will it be
"usr/share/openstack-tripleo-heat-templates/network/ports/"?
By the way I didn't find any other place in my templates where I refer to
these files?
What about custom nic configs is there also a jinja2 process to create them?

Samuel

On Thu, Jul 26, 2018 at 12:02 AM James Slagle 
wrote:

> On Wed, Jul 25, 2018 at 11:56 AM, Samuel Monderer
>  wrote:
> > Hi,
> >
> > I'm trying to upgrade from OSP11(Ocata) to OSP13 (Queens)
> > In my network-isolation I refer to files that do not exist anymore on the
> > director such as
> >
> >   OS::TripleO::Compute::Ports::ExternalPort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
> >   OS::TripleO::Compute::Ports::InternalApiPort:
> >
> /usr/share/openstack-tripleo-heat-templates/network/ports/internal_api.yaml
> >   OS::TripleO::Compute::Ports::StoragePort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
> >   OS::TripleO::Compute::Ports::StorageMgmtPort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/noop.yaml
> >   OS::TripleO::Compute::Ports::TenantPort:
> > /usr/share/openstack-tripleo-heat-templates/network/ports/tenant.yaml
> >   OS::TripleO::Compute::Ports::ManagementPort:
> >
> /usr/share/openstack-tripleo-heat-templates/network/ports/management_from_pool.yaml
> >
> > Where have they gone?
>
> These files are now generated from network/ports/port.network.j2.yaml
> during the jinja2 template rendering process. They will be created
> automatically during the overcloud deployment based on the enabled
> networks from network_data.yaml.
>
> You still need to refer to the rendered path (as shown in your
> example) in the various resource_registry entries.
>
> This work was done to enable full customization of the created
> networks used for the deployment. See:
>
> https://docs.openstack.org/tripleo-docs/latest/install/advanced_deployment/custom_networks.html
>
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] FFE request for config-download-ui

2018-07-26 Thread Jiri Tomasek
Hello,

I would like to request a FFE for [1]. Current status of TripleO UI patches
is here [2] there are last 2 patches pending review which currently depend
on [3] which is close to land.

[1] https://blueprints.launchpad.net/tripleo/+spec/config-download-ui/
[2]
https://review.openstack.org/#/q/project:openstack/tripleo-ui+branch:master+topic:bp/config-download-ui
[3] https://review.openstack.org/#/c/583293/

Thanks
-- Jiri
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev