[openstack-dev] What's Up, Doc? 26 August

2016-08-25 Thread Lana Brindley
Hi everyone,

This week marks six weeks until the Newton release, so I've been busy getting 
everything prepared. We're now ready to start testing the core Installation 
Tutorial, so check out the wiki and get signed up: 
https://wiki.openstack.org/wiki/Documentation/NewtonDocTesting#Testers 
We've also selected two new release managers to assist with this release, so 
please welcome Olena and Alex! More on those topics in this newsletter.

== Progress towards Newton ==

40 days to go!

Bugs closed so far: 387

Release planning: https://etherpad.openstack.org/p/NewtonRelease

== Release Managers ==

There was an overwhelming response to my request for release managers, which 
was very cool! The first two responders were Olena Logvinova and Alexandra 
Settle, so please welcome them to the release team. 
Their main role during this process is to make sure that all the tasks we have 
to complete get done accurately and on schedule. It's an important job, and 
we're really glad to have you both on board.

== Install Tutorial Testing ==

Install Tutorial testing is just about ready to kick off. If you're interested 
in helping, make sure you've added your name on to the list on our testing wiki 
page: https://wiki.openstack.org/wiki/Documentation/NewtonDocTesting (if you 
don't have wiki access, just email me directly and I'll add you in). 

Thanks to our package maintainers, we now have b2 pre-release packages ready 
for Ubuntu and Debian, so those are ready to be tested. 

== Docs Tools ==

I'm very pleased to announce that we have now removed DocBook from our build 
tools. The end of an era! Thanks to Anne, Andreas, and David Cramer for keeping 
this project alive for so long, and eventually seeing it come to fruition.

Version 1.5.0 of openstackdocstheme was released this week.

In other tools news, there are plans afoot to move developer.openstack.org and 
docs.openstack.org away from Rackspace Cloudsites, and on to pages that 
OpenStack Infra control. This has been on the proverbial backburner for a long 
time, but the Infra team is now in a position to make it happen. You can read 
and comment on the Infra spec here: https://review.openstack.org/#/c/276482/

== Speciality Team Reports ==

'''HA Guide: Andrew Beekhof'''
No report this week.

'''Install Tutorials: Lana Brindley'''
Core Guide testing is underway! Check out progress and sign up here: 
https://wiki.openstack.org/wiki/Documentation/NewtonDocTesting Next meeting: 
30 August, 0600 UTC.

'''Networking Guide: Edgar Magana'''
Attending OpenStack East in NYC. No report this week.

'''Security Guide: Nathaniel Dillon'''
No report this week.

'''User Guides: Joseph Robinson'''
No report this week.

'''Ops Guide: Shilla Saebi, Darren Chan'''
No report this week.

'''API Guide: Anne Gentle'''
Releasing openstackdocstheme with new sidebar and integration with os-api-ref 
Sphinx Extension. Many thanks to Karen Bradshaw, Sean Dague, Graham Hayes, 
Andreas Jaeger (and anyone I may have missed!) for their tireless efforts on 
this release. WOW.

'''Config/CLI Ref: Tomoyuki Kato'''
Merged some patches, updated a few CLI references, nothing special.

'''Training labs: Pranav Salunke, Roger Luethi'''
Some minor changes, backports of features.
The parser is almost done: https://github.com/dguitarbite/rst2bash
Updated openstack-manuals and install-guides with the new parser-friendly 
syntax. I should start sending the required patches to the manuals repo soon 
after finalizing the syntax: 
https://github.com/dguitarbite/openstack-manuals/commit/c9e7e1bb2c7269f02a757768dd2dd309fa2233ac

'''Training Guides: Matjaz Pancur'''
Upstream training improvement 
(https://etherpad.openstack.org/p/upstream-university-improvements). Please 
consider participating in the Remote content sprint (proposed timeframe: first 
half of September, see etherpad for details).

'''Hypervisor Tuning Guide: Blair Bethwaite'''
No report this week.

'''UX/UI Guidelines: Michael Tullis, Rodrigo Caballero'''
No report this week.

== Site Stats ==

Keystone and Ceph are this week's most popular search terms.

== Doc team meeting ==

The APAC meeting was held this week; you can read the minutes here: 
https://wiki.openstack.org/wiki/Documentation/MeetingLogs#2016-08-24

Next meetings:
US: Wednesday 31 August, 19:00 UTC
APAC: Wednesday 7 September, 00:30 UTC

Please go ahead and add any agenda items to the meeting page here: 
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

--

Keep on doc'ing!

Lana

https://wiki.openstack.org/wiki/Documentation/WhatsUpDoc#26_August_2016

-- 
Lana Brindley
Technical Writer
Rackspace Cloud Builders Australia
http://lanabrindley.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova][keystone] auth for new metadata plugins

2016-08-25 Thread Adam Young

On 08/22/2016 11:11 AM, Rob Crittenden wrote:

Adam Young wrote:

On 08/15/2016 05:10 PM, Rob Crittenden wrote:

Review https://review.openstack.org/#/c/317739/ added a new dynamic
metadata handler to nova. The basic gist is that rather than serving
metadata statically, it can be done dynamically, so that certain values
aren't provided until they are needed, mostly for security purposes
(like credentials to enroll in an AD domain). The metadata is
configured as URLs to a REST service.

Very little is passed into the REST call, mostly UUIDs of the
instance, image, etc. to ensure a stable API. What this means though
is that the REST service may need to make calls into nova or glance to
get information, like looking up the image metadata in glance.

Currently the dynamic metadata handler _can_ generate auth headers if
an authenticated request is made to it, but consider that a common use
case is fetching metadata from within an instance using something like:

% curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json

This will come into the nova metadata service unauthenticated.

So a few questions:

1. Is it possible to configure paste (I'm a relative newbie) so that both
authenticated and unauthenticated requests are accepted, such that IF
an authenticated request comes in, those credentials can be used,
otherwise fall back to something else?



Only if they are on different URLs, I think.  It's auth_token middleware
for all services but Keystone.  For Keystone, the rules are similar, but the
implementation is a little different.


Ok. I'm fine with the unauthenticated path if we can just 
create a separate service user for it.



2. If an unauthenticated request comes in, how best to obtain a token
to use? Is it best to create a service user for the REST services
(perhaps several), use a shared user, something else?



No unauthenticated requests, please.  If the call is to Keystone, we
could use the X509 Tokenless approach, but if the call comes from the
new server, you won't have a cert by the time you need to make the call,
will you?


Not sure which cert you're referring to but yeah, the metadata 
service is unauthenticated. The requests can come in from the instance 
which has no credentials (via http://169.254.169.254/).



Shared service users are probably your best bet.  We can limit the roles
that they get.  What are these calls you need to make?


To glance for image metadata, Keystone for project information and 
nova for instance information. The REST call passes in various UUIDs 
for these so they need to be dereferenced. There is no guarantee that 
these would be called in all cases but it is a possibility.


rob



I guess if config_drive is True then this isn't really a problem as
the metadata will be there in the instance already.

thanks

rob


Sounded like you had this sorted.  True?
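
For the shared-service-user approach discussed above, here is a minimal
sketch using keystoneauth1; all names, passwords, and endpoints below are
placeholders, not a recommended configuration:

    # Build a session for a hypothetical 'metadata-rest' service user.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='http://keystone.example.com:5000/v3',
        username='metadata-rest',      # hypothetical shared service user
        password='secret',
        project_name='service',
        user_domain_id='default',
        project_domain_id='default',
    )
    sess = session.Session(auth=auth)

    # The REST service can then dereference the UUIDs nova passes in,
    # e.g. look up image metadata in glance with the service user's token:
    image_uuid = 'replace-with-image-uuid'
    resp = sess.get('http://glance.example.com:9292/v2/images/' + image_uuid)
    print(resp.json())

The roles granted to that user can then be limited, as suggested above.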


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture]Coordination between actions/WGs

2016-08-25 Thread joehuang
Hello, Ed,

Just as Peter mentioned, "BT's NFV use cases e.g. vCPE, vCDN, vEPC, vIMS, MEC, 
IoT, where we will have compute highly distributed around the network (from 
thousands to millions of sites)". vCPE is only one use case, not the whole 
picture. And the hardware to run "vCDN, vEPC, vIMS, MEC" is not a set-top box 
or a single machine; even in current non-cloud deployments it includes lots of 
blades, rack servers, chassis, or racks.

A whitepaper, "Accelerating NFV Delivery with OpenStack", was just published: 
https://www.openstack.org/telecoms-and-nfv/

So it is part of a cloud architecture; the challenge is how OpenStack can run 
"regardless of size" and in a "massively distributed" manner.

Best Regards
Chaoyi Huang (joehuang)

From: Ed Leafe [e...@leafe.com]
Sent: 25 August 2016 22:03
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][massively distributed][architecture] 
Coordination between actions/WGs

On Aug 24, 2016, at 8:42 PM, joehuang  wrote:
>
> Funny point of view. Let's look at the mission of OpenStack:
>
> "to produce the ubiquitous Open Source Cloud Computing platform that enables
> building interoperable public and private clouds regardless of size, by being
> simple to implement and massively scalable while serving the cloud users'
> needs."
>
> It mentioned that "regardless of size", and you also mentioned "cloud to me:
> lots of hardware consolidation".

If it isn’t part of a cloud architecture, then it isn’t part of OpenStack’s 
mission. The ‘size’ qualifier relates to everything from massive clouds like 
CERN and Walmart down to small private clouds. It doesn’t mean ‘any sort of 
computing platform’; the focus is clear that we are an "Open Source Cloud 
Computing platform”.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] relationship_type in static_datasources

2016-08-25 Thread Yujun Zhang
Lost in the code... It seems the datasource just constructs the entities and
sends them over the event bus to the entity graph processor. I need to dig
further to find out the exact point where the "backup" relationship is
filtered.

I think we should somehow keep the validation of relationship types. It is
so easy to make a typo when creating the template manually (I did this quite
often...).

My idea is to delegate the validation to the datasource instead of
enumerating all constants in the evaluator. I think this will introduce
better extensibility. Any comments?
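
To make the idea concrete, a rough sketch of datasource-delegated
validation (class and constant names here are hypothetical, not the actual
Vitrage code):

    # Each datasource declares the relationship types it understands, and
    # the template validator asks the datasource rather than one global
    # enumeration in the evaluator.
    class StaticPhysicalDriver(object):
        SUPPORTED_RELATIONSHIPS = frozenset(
            ['contains', 'attached', 'backup'])

        @classmethod
        def is_valid_relationship(cls, relationship_type):
            return relationship_type in cls.SUPPORTED_RELATIONSHIPS

    def validate_edge(driver, relationship_type):
        # Fail fast on typos instead of silently dropping the edge.
        if not driver.is_valid_relationship(relationship_type):
            raise ValueError(
                'unknown relationship_type: %s' % relationship_type)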

On Thu, Aug 25, 2016 at 1:32 PM Weyl, Alexey (Nokia - IL) <
alexey.w...@nokia.com> wrote:

> Hi Yujun,
>
>
>
> You can find the names of the labels in the constants.py file.
>
>
>
> In addition, the restriction on the physical_static datasource is done in
> its driver.py.
>
>
>
> Alexey
>
>
>
> *From:* Yujun Zhang [mailto:zhangyujun+...@gmail.com]
> *Sent:* Thursday, August 25, 2016 4:50 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [vitrage] relationship_type in
> static_datasources
>
>
>
> Hi, Ifat,
>
>
>
> I searched for edge_labels in the project. It seems it is validated only
> in `vitrage/evaluator/template_validation/template_syntax_validator.py`.
> Where is such a restriction applied in static_datasources?
>
>
>
> --
>
> Yujun
>
>
>
> On Wed, Aug 24, 2016 at 3:19 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
> Hi Yujun,
>
>
>
> Indeed, we have some restrictions on the relationship types that can be
> used in the static datasources. I think we should remove these
> restrictions, and allow any kind of relationship type.
>
>
>
> Best regards,
>
> Ifat.
>
>
>
> *From: *Yujun Zhang
> *Date: *Monday, 22 August 2016 at 08:37
>
> I'm following the sample configuration in docs [1] to verify how static
> datasources works.
>
>
>
> It seems the `backup` relationship is not displayed in the entity graph view
> and neither is it included in the topology show.
>
>
>
> There is an enumeration for edge labels [2]. Should relationships in static
> datasources be limited to it?
>
>
>
> [1]
> https://github.com/openstack/vitrage/blob/master/doc/source/static-physical-config.rst
>
> [2]
> https://github.com/openstack/vitrage/blob/master/vitrage/common/constants.py#L49
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Wrapping up Newton

2016-08-25 Thread Armando M.
Hi Neutrinos,

Newton-3 is almost upon us. We are now in non-client requirement freeze,
and a week away from client requirement/feature freeze. This is the time
when we switch gears... for real:

   - Start focusing on testing and documentation, if you have not done so
   already;
   - Apply for FFE on postmortem [1];
   - For pending efforts that get a FFE granted, there's time until [3].
   - Ocata opens up as soon as RC1 is cut [4], therefore those that get
   denied will have to be pushed back until then.

When in doubt, reach out!

Cheers,
Armando

[1] https://review.openstack.org/#/c/360207/
[2] https://releases.openstack.org/newton/schedule.html
[3] Newton 3 milestone, Sept 1.
[4] Newton RC-1, 15 Sept.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][neutron] neutron-lib 0.4.0 release (newton)

2016-08-25 Thread no-reply
We are grateful to announce the release of:

neutron-lib 0.4.0: Neutron shared routines and utilities

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/neutron-lib

With package available at:

https://pypi.python.org/pypi/neutron-lib

Please report issues through launchpad:

http://bugs.launchpad.net/neutron

For more details, please see below.

Changes in neutron-lib 0.3.0..0.4.0
---

705fd90 Remove new checks from hacking factory()
ab4e3de Correcting information in configuration
c73926b Updated from global requirements
5fbbfcc Add  docstrings for utils.net
0f80b98 Add  docstrings for utils.host
ee03dc1 Add  docstrings for utils.helpers
e86df3b Add  docstrings for utils.file
f54d49a Add  docstrings for hacking.translation_checks
4a9f24b Add  docstrings for hacking.checks
72c7cd2 Updated from global requirements
5cbb6f7 Add  docstrings for db.utils
d911d71 Get ready for os-api-ref sphinx theme change
610eb07 Add  docstrings for policy
8be88da Base DB: rehome model_base
1abf81a Start migration of utility methods
52d4875 Add  docstrings for exceptions
756e02e Add  docstrings for converters
effbe8b Enhance pyir tooling CLI
13fe2cf Support copy() in Sentinel
9b71f28 Don't run api-report during pep8
eb32fd4 Add a hacking rule for string interpolation at logging
ace1abc Correcting 'extention' parameter on Networking API v2.0
8e84984 Remove invalid depreaction warning
14352ef Generate API report tooling
b00c3c5 Updated from global requirements
72c9448 Add DeviceNotFoundError to neutron_lib exceptions
27cfad9 Revert "Update hacking check consumption"
1be35ff Enable DeprecationWarning in test environments
ebee801 Update the home-page in setup.cfg
b2ab133 Add Python 3.5 classifier and venv
fcc8ad2 Updated from global requirements
f5b7161 Don't pass argument sqlite_db in method set_defaults
9e17865 api-ref: Fix api-ref for routers
e7132af Updated from global requirements
31816d2 api-ref: Rename file names for consistency
e3cb5a4 api-ref: Move sample JSON files under v2 directory
d6d91b6 api-ref: Merge v2 and v2-ext into one directory
4dcf597 Sync neutron callbacks into lib
172918f Forbid eventlet hacking check
23e738f api-ref: Split LBaaS API reference into v1 and v2
4f318bc Update hacking check consumption
371915b translation_checks: Exclude rally plugins
0c29ef7 Add Neutron context module and some policy methods
48ae867 Updated from global requirements
dd60d1b Add DEVICE_OWNER_BAREMETAL_PREFIX const
980068f Fix api-ref response code formatting


Diffstat (except docs and test files)
-

HACKING.rst|6 +-
api-ref/source/conf.py |   37 +-
api-ref/source/index.rst   |1 -
.../extensions/extension-show-response.json|9 -
.../extensions/extensions-list-response.json   |  123 -
.../samples/firewalls/firewall-create-request.json |6 -
.../firewalls/firewall-create-response.json|   14 -
.../firewalls/firewall-policies-list-response.json |   15 -
.../firewalls/firewall-policy-create-request.json  |8 -
.../firewalls/firewall-policy-create-response.json |   13 -
.../firewall-policy-insert-rule-request.json   |5 -
.../firewall-policy-insert-rule-response.json  |   14 -
.../firewall-policy-remove-rule-request.json   |3 -
.../firewall-policy-remove-rule-response.json  |   13 -
.../firewalls/firewall-policy-show-response.json   |   13 -
.../firewalls/firewall-policy-update-request.json  |8 -
.../firewalls/firewall-policy-update-response.json |   14 -
.../firewalls/firewall-rule-create-request.json|9 -
.../firewalls/firewall-rule-create-response.json   |   19 -
.../firewalls/firewall-rule-show-response.json |   19 -
.../firewalls/firewall-rule-update-request.json|5 -
.../firewalls/firewall-rule-update-response.json   |   19 -
.../firewalls/firewall-rules-list-response.json|   21 -
.../samples/firewalls/firewall-show-response.json  |   14 -
.../samples/firewalls/firewall-update-request.json |5 -
.../firewalls/firewall-update-response.json|   14 -
.../samples/firewalls/firewalls-list-response.json |   16 -
.../samples/flavors/flavor-associate-request.json  |5 -
.../samples/flavors/flavor-associate-response.json |5 -
.../samples/flavors/flavor-create-request.json |8 -
.../samples/flavors/flavor-create-response.json|   10 -
.../samples/flavors/flavor-show-response.json  |   10 -
.../samples/flavors/flavor-update-request.json |7 -
.../samples/flavors/flavor-update-response.json|   10 -
.../samples/flavors/flavors-list-response.json |   12 -
.../flavors/service-profile-create-request.json|8 -
.../flavors/service-profile-create-response.json   |9 -
.../flavors/service-profile-show-response.json |9 -
.../flavors/service-profile-update-request.js

[openstack-dev] [new][openstack] osc-lib 1.1.0 release (newton)

2016-08-25 Thread no-reply
We are enthusiastic to announce the release of:

osc-lib 1.1.0: OpenStackClient Library

This release is part of the newton release series.

With source available at:

https://git.openstack.org/cgit/openstack/osc-lib

With package available at:

https://pypi.python.org/pypi/osc-lib

Please report issues through launchpad:

https://bugs.launchpad.net/python-openstackclient

For more details, please see below.

Changes in osc-lib 1.0.2..1.1.0
---

9bf62fd Fix default handling for verify option in ClientManager
79240ac Updated from global requirements
13cee0c Updated from global requirements


Diffstat (except docs and test files)
-

osc_lib/clientmanager.py|  5 -
requirements.txt|  2 +-
test-requirements.txt   |  4 ++--
4 files changed, 19 insertions(+), 4 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index c80a504..93fe65d 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ keystoneauth1>=2.10.0 # Apache-2.0
-os-client-config!=1.19.0,>=1.13.1 # Apache-2.0
+os-client-config!=1.19.0,!=1.19.1,!=1.20.0,>=1.13.1 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 136e574..bec18dc 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -15,2 +15,2 @@ testtools>=1.4.0 # MIT
-osprofiler>=1.3.0 # Apache-2.0
-bandit>=1.0.1 # Apache-2.0
+osprofiler>=1.4.0 # Apache-2.0
+bandit>=1.1.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Barcelona Design Summit space needs

2016-08-25 Thread Paul Belanger
On Tue, Aug 23, 2016 at 09:04:15AM -0400, Emilien Macchi wrote:
> Team,
> 
> Thierry sent an email to all PTLs about space needs for next Summit.
> 
> Here's what we can have:
> 
> * Fishbowl sessions (from Wednesday 4pm to Friday noon)
> Our traditional largish rooms organized in fishbowl style, with
> advertised session content on the summit schedule for increased external
> participation. Ideal for when wider feedback is essential.
> 
> * Workroom sessions (from Wednesday 4pm to Friday noon)
> Smaller rooms organized in boardroom style, with topic buried in the
> session description, in an effort to limit attendance and not overcrowd
> the room. Ideal to get work done and prioritize work in small teams.
> 
> * Contributors meetup (Friday afternoon)
> Half-day session on Friday afternoon to get into the Ocata action while
> decisions and plans are still hot, or to finish discussions started
> during the week, whatever works for you.
> 
> Note:
> - Ops summit on Tuesday morning until 4pm
> - Cross-project workshops from Tuesday 4pm to Wednesday 4pm
> 
> As a reminder, here's what we had for Austin:
> Fishbowl slots (Wed-Thu): 2
> Workroom slots (Tue-Thu): 3
> Contributors meetup (Fri): 1/2
> 
> Notes from Thierry:
> "We'll have less slots compared to Austin, and new teams to accommodate.
> So as a rule of thumb, you should probably require *less* slots than in
> Austin. It's also worth noting that the Ocata cycle will be a short
> cycle (likely only 15 weeks between the design summit and feature
> freeze, including thanksgiving and other end-of-year holidays), so there
> is no need to plan too much work."
> 
> I created an etherpad for topic ideas, feel free to start thinking about it:
> https://etherpad.openstack.org/p/ocata-tripleo
> 
> Once we gather some feedback on the topics, shardy or me will contact
> Thierry to ask for the fair number of rooms.
> Thanks for reading so far,

Thanks, I've left some comments specific to CI.  I was happy to see somebody
else raise the topic too.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Barcelona Design Summit space needs

2016-08-25 Thread James Slagle
On Thu, Aug 25, 2016 at 9:10 AM, Steven Hardy  wrote:
> On Tue, Aug 23, 2016 at 09:04:15AM -0400, Emilien Macchi wrote:
>> As a reminder, here's what we had for Austin:
>> Fishbowl slots (Wed-Thu): 2
>> Workroom slots (Tue-Thu): 3
>> Contributors meetup (Fri): 1/2
>
> I think this allocation worked well in Austin, so I'd suggest we ask for
> the same again.
>
> I know Thierry indicated we should request less, but we are asking for far
> fewer sessions than many other projects, so I'd like to aim for the same
> allocation and see if that can be accommodated.
>
> What do folks think, if I can get some acks on this plan I will go ahead
> and provide the feedback to Thierry.

+1, sounds good to me.



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] New bug tagging policy

2016-08-25 Thread Steve Baker

On 25/08/16 22:30, Julie Pichon wrote:

Hi folks,

The bug tagging proposal has merged, behold the new policy:

http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html

TL;DR The TripleO Launchpad tracker encompasses a lot of sub-projects,
let's use a consistent list of Launchpad tags where they make sense in
order to help understand which area(s) are affected. The tags get
autocompleted by Launchpad (or will be soon).


There is one remaining action to create the missing tags: I don't have
bug wrangling permissions on the TripleO project so, if someone with
the appropriate permissions could update the list [1] to match the
policy I would appreciate it. Should I be deemed trustworthy enough
I'm just as happy to do it myself and help out with the occasional
bout of triaging as well.

Thanks,

Julie

[1] https://bugs.launchpad.net/tripleo/+manage-official-tags

I'm not seeing any tag appropriate for the configuration agent projects 
os-collect-config, os-apply-config, os-refresh-config. Is it possible to 
add a tag like config-agent?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo][horizon][tripleo-ui] Parameter groups, tags and deprecation

2016-08-25 Thread Jason Rist
On 08/25/2016 03:55 PM, Steven Hardy wrote:
> On Thu, Aug 25, 2016 at 04:19:12PM -0400, Zane Bitter wrote:
> > On 25/08/16 14:02, Steven Hardy wrote:
> >> Hi all,
> >>
> >> So I'm following up on a discussion that started here:
> >>
> >> https://review.openstack.org/#/c/356240
> >>
> >> Basically I recently suggested[1] that relaxing the restriction where a
> >> parameter can only exist in exactly one parameter_group would be a way to
> >> help work around some pain-points we have in TripleO atm.
> >
> > I usually read the scrollback of meetings I'm not able to attend but I must
> > have missed that one, sorry.
> >
> >> Let me start with the problems we're trying to solve, because I think
> >> they are common to many Heat users, not just TripleO:
> >>
> >> 1. When doing nested validation to discover parameter schema, it's
> >> impossible to tell if a parameter will be provided by the parent
> >
> > IMHO they should all be provided by the parent, but that's another story ;)
> >
> >> This is a known issue from when nested validation was first implemented,
> >> and we never figured out a satisfactory fix, basically because you can't
> >> possibly tell without actually creating things whether an indirectly
> >> provided parameter (e.g a reference to another resource in a parent
> >> template) will resolve to a valid value.
> >>
> >> So when you build a bunch of templates where there are some parameters
> >> which form part of an internal interface (e.g they are always provided by
> >> the parent and thus should not be exposed to end users) and some which are
> >> extra (and should always be provided, or at least exposed to end users) you
> >> have no way to differentiate them.
> >
> > Heat has a way to differentiate them surely, because it has the templates?
> > If you implement it that sounds much more reliable than asking template
> > authors to annotate this stuff manually (for a start the same nested
> > template can be instantiated in different ways, so it's not even possible in
> > general to annotate it in such a way as to indicate which parameters will be
> > set by the parent and which should be set by modifying parameter defaults).
>
> The problem is that many (most?) intrinsic functions return None at
> validation time, which we can't easily distinguish from no value being
> passed into the nested stack.
>
> I'm not saying this can't be fixed, I'm just saying it's probably pretty
> hard from an implementation perspective (Jay and I both had a try at fixing
> this but I'd welcome some fresh eyes on it!).
>
> Some more context here (and Jay mentions a newton targeted bug although
> I'm currently failing to find it):
>
> https://bugs.launchpad.net/heat/+bug/1508857
>
> It's true that no general pattern for annotation is possible, but given a
> sufficiently constrained interface (such as we support in some parts of
> TripleO) it would be a workable alternative to the status-quo (if the
> template annotation was possible, but currently it is not).
>
> >> Example of this here:
> >>
> >> https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/heat-api.yaml#L7
> >>
> >> ServiceNetMap, DefaultPasswords and EndpointMap are internal interfaces,
> >> and the remainder are options for the service that should be exposed to the
> >> user.
> >>
> >> One option would be to add internal interfaces to a parameter_group
> >> "internal", but that then means we can never categorize a parameter as
> >> anything else (such as "deprecated" for example, see below).
> >>
> >> 2. When you ship a template containing parameters, it's impossible to ever
> >> deprecate the parameter names
> >>
> >> The problem here is twofold:
> >>  - We don't provide any means to tag a parameter as deprecated (so that,
> >>for example, a UI or CLI tool could output an annoying warning to
> >> encourage not using it)
> >>  - There's no way to map an old (deprecated) name to a new one unless you
> >>do hacks inside the template such as overwriting one parameter with
> >>another via str_replace (which won't work for all parameter types, and
> >>you still can't ever remove the old parameter because there's no channel
> >>to warn users)
> >>
> >> So, one option here is to add a parameter_group called "deprecated" that's
> >> then introspected by the client during the validation phase, and outputs a
> >> warning when deprecated parameters are used.
> >
> > This seems like a very general problem. I would definitely be in favour of
> > some native support for this in the HOT format.
>
> Sure, so would I - but I remember discussing this in Vancouver, so it'd be
> good to figure out some incremental steps to improve this, again a template
> annotation (parameter_groups or something else) provides a simple interim
> workaround for this non-trivial and long-standing feature gap.
>
> >> 3. No way to subcategorize more than once
> >>
> >> The assumption in the current parameter_group interface is that a UI w

Re: [openstack-dev] [tripleo] collaboration request with vendors

2016-08-25 Thread Qasim Sarfraz
Steven/Emilien,

PLUMgrid will be happy to collaborate on this effort; it is much needed for
healthy integration of vendors with TripleO.

What level of commitment would be expected from our side? As Steve mentioned,
each vendor will have some requirements, like customizing the overcloud
images, so let's list them to scope the effort.

Let me know if you want to discuss this in any TripleO meeting.


On Thu, Aug 25, 2016 at 6:20 PM, Steven Hardy  wrote:

> On Wed, Aug 24, 2016 at 03:11:38PM -0400, Emilien Macchi wrote:
> > TripleO does support multiple vendors for different type of backends.
> > Here are some examples:
> > Neutron networking: Cisco, Nuage, Opencontrail, Midonet, Plumgrid,
> Biswitch
> > Cinder: Dell, Netapp, Ceph
> >
> > TripleO developers are struggling to maintain the environment files
> > that allow to deploy those backends because it's very hard to test
> > them:
> > - not enough hardware
> > - zero knowledge at how to deploy the actual backend system
> > - no time to test all backends
> >
> > Recently, we made some changes in TripleO CI that will help us to
> > scale the way we test TripleO in the future.
> > One of those changes is that we can now deploy TripleO using nodepool
> > instances like devstack jobs.
> >
> > I wrote a prototype of TripleO job scenario:
> > https://review.openstack.org/#/c/360039/ that will allow us to have
> more CI jobs with fewer services installed on each, so we can preserve
> performance while increasing service coverage.
> > I would like to re-use those bits to test our vendors backends.
> >
> > Here's the proposal:
> > - for vendors backends that can be deployed using TripleO itself
> > (open-source backend systems like OpenContrail, Midonet, etc): we
> > could re-use the scenario approach by adding new scenarios for each
> > backend.
> The jobs would only be triggered if we touch environment files related
> to the backend in THT or the puppet profiles for the backend in
> > puppet-tripleo or the puppet backend class in puppet-neutron for the
> > backend (all thanks to Zuul magic).
>
> This sounds good, my only concern is how we handle things breaking when
> something outside of tripleo changes (e.g. triage of bugs related to the
> vendor backends).
>
> If we can get some commitment folks will show up to help with that then
> definitely +1 on doing this.
>
> There are some additional complexities around images we'll need to consider
> too, as some (all?) of these backends require customization of the
> overcloud images (e.g adding some additional pieces related to the enabled
> vendor backend).
>
> > - for vendors backends that can't be deployed using TripleO itself
> > (not implemented in the services and / or not open-source):
> > Like most of you probably did for devstack jobs in neutron/cinder's
> > gates, work with us to implement CI jobs that would deploy TripleO
> > with your backend. I don't have the exact technical solution right
> > now, but at least I would like to know who would be interested by this
> > collaboration.
>
> This also sounds good, but it's unclear to me atm if we have any folks
> willing to step up and do this work.  If people with bandwidth to do this
> can be identified then it would be good investigate.
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Regards,
Qasim Sarfraz
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][tripleo][horizon][tripleo-ui] Parameter groups, tags and deprecation

2016-08-25 Thread Steven Hardy
On Thu, Aug 25, 2016 at 04:19:12PM -0400, Zane Bitter wrote:
> On 25/08/16 14:02, Steven Hardy wrote:
> > Hi all,
> > 
> > So I'm following up on a discussion that started here:
> > 
> > https://review.openstack.org/#/c/356240
> > 
> > Basically I recently suggested[1] that relaxing the restriction where a
> > parameter can only exist in exactly one parameter_group would be a way to
> > help work around some pain-points we have in TripleO atm.
> 
> I usually read the scrollback of meetings I'm not able to attend but I must
> have missed that one, sorry.
> 
> > Let me start with the problems we're trying to solve, because I think
> > they are common to many Heat users, not just TripleO:
> > 
> > 1. When doing nested validation to discover parameter schema, it's
> > impossible to tell if a parameter will be provided by the parent
> 
> IMHO they should all be provided by the parent, but that's another story ;)
> 
> > This is a known issue from when nested validation was first implemented,
> > and we never figured out a satisfactory fix, basically because you can't
> > possibly tell without actually creating things whether an indirectly
> > provided parameter (e.g a reference to another resource in a parent
> > template) will resolve to a valid value.
> > 
> > So when you build a bunch of templates where there are some parameters
> > which form part of an internal interface (e.g they are always provided by
> > the parent and thus should not be exposed to end users) and some which are
> > extra (and should always be provided, or at least exposed to end users) you
> > have no way to differentiate them.
> 
> Heat has a way to differentiate them surely, because it has the templates?
> If you implement it that sounds much more reliable than asking template
> authors to annotate this stuff manually (for a start the same nested
> template can be instantiated in different ways, so it's not even possible in
> general to annotate it in such a way as to indicate which parameters will be
> set by the parent and which should be set by modifying parameter defaults).

The problem is that many (most?) intrinsic functions return None at
validation time, which we can't easily distinguish from no value being
passed into the nested stack.

I'm not saying this can't be fixed, I'm just saying it's probably pretty
hard from an implementation perspective (Jay and I both had a try at fixing
this but I'd welcome some fresh eyes on it!).

Some more context here (and Jay mentions a newton targeted bug although
I'm currently failing to find it):

https://bugs.launchpad.net/heat/+bug/1508857

It's true that no general pattern for annotation is possible, but given a
sufficiently constrained interface (such as we support in some parts of
TripleO) it would be a workable alternative to the status-quo (if the
template annotation was possible, but currently it is not).

> > Example of this here:
> > 
> > https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/heat-api.yaml#L7
> > 
> > ServiceNetMap, DefaultPasswords and EndpointMap are internal interfaces,
> > and the remainder are options for the service that should be exposed to the
> > user.
> > 
> > One option would be to add internal interfaces to a parameter_group
> > "internal", but that then means we can never categorize a parameter as
> > anything else (such as "deprecated" for example, see below).
> > 
> > 2. When you ship a template containing parameters, it's impossible to ever
> > deprecate the parameter names
> > 
> > The problem here is twofold:
> >  - We don't provide any means to tag a parameter as deprecated (so that,
> >for example, a UI or CLI tool could output an annoying warning to
> > encourage not using it)
> >  - There's no way to map an old (deprecated) name to a new one unless you
> >do hacks inside the template such as overwriting one parameter with
> >another via str_replace (which won't work for all parameter types, and
> >you still can't ever remove the old parameter because there's no channel
> >to warn users)
> > 
> > So, one option here is to add a parameter_group called "deprecated" that's
> > then introspected by the client during the validation phase, and outputs a
> > warning when deprecated parameters are used.
> 
> This seems like a very general problem. I would definitely be in favour of
> some native support for this in the HOT format.

Sure, so would I - but I remember discussing this in Vancouver, so it'd be
good to figure out some incremental steps to improve this, again a template
annotation (parameter_groups or something else) provides a simple interim
workaround for this non-trivial and long-standing feature gap.

> > 3. No way to subcategorize more than once
> > 
> > The assumption in the current parameter_group interface is that a UI will
> > always be built on the assumption that parameters should only ever be in
> > one group, which may be true in Horizon, but it's not the only UX design
> > 

Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread Sean Dague

On 08/25/2016 01:13 PM, Steve Martinelli wrote:

The keystone team is pursuing a trigger-based approach to support
rolling, zero-downtime upgrades. The proposed operator experience is
documented here:

  http://docs.openstack.org/developer/keystone/upgrading.html

This differs from Nova and Neutron's approaches to solve for rolling
upgrades (which use oslo.versionedobjects), however Keystone is one of
the few services that doesn't need to manage communication between
multiple releases of multiple service components talking over the
message bus (which is the original use case for oslo.versionedobjects,
and for which it is aptly suited). Keystone simply scales horizontally
and every node talks directly to the database.

Database triggers are obviously a new challenge for developers to write,
honestly challenging to debug (being side effects), and are made even
more difficult by having to hand write triggers for MySQL, PostgreSQL,
and SQLite independently (SQLAlchemy offers no assistance in this case),
as seen in this patch:

  https://review.openstack.org/#/c/355618/
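
For illustration, a minimal sketch of wiring hand-written, per-dialect
triggers through SQLAlchemy's DDL events (the table and trigger bodies are
illustrative only, not Keystone's actual schema):

    import sqlalchemy as sa

    metadata = sa.MetaData()
    local_user = sa.Table('local_user', metadata,
                          sa.Column('id', sa.String(64), primary_key=True),
                          sa.Column('extra', sa.Text()))

    # Each dialect needs its own hand-written body.
    mysql_trigger = sa.DDL("""
    CREATE TRIGGER local_user_insert BEFORE INSERT ON local_user
    FOR EACH ROW SET NEW.extra = COALESCE(NEW.extra, '{}')
    """)

    pg_function = sa.DDL("""
    CREATE OR REPLACE FUNCTION local_user_insert_fn() RETURNS trigger AS $$
    BEGIN
      NEW.extra := COALESCE(NEW.extra, '{}');
      RETURN NEW;
    END; $$ LANGUAGE plpgsql
    """)
    pg_trigger = sa.DDL("""
    CREATE TRIGGER local_user_insert BEFORE INSERT ON local_user
    FOR EACH ROW EXECUTE PROCEDURE local_user_insert_fn()
    """)

    # execute_if restricts each DDL to its own dialect at CREATE TABLE
    # time; SQLite would need a third, differently written body (or be
    # dropped from the test matrix).
    sa.event.listen(local_user, 'after_create',
                    mysql_trigger.execute_if(dialect='mysql'))
    sa.event.listen(local_user, 'after_create',
                    pg_function.execute_if(dialect='postgresql'))
    sa.event.listen(local_user, 'after_create',
                    pg_trigger.execute_if(dialect='postgresql'))

This only standardizes how the triggers get attached; the per-backend
testing question below still stands.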

However, implementing an application-layer solution with
oslo.versionedobjects is not an easy task either; refer to Neutron's
implementation:


https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db

Our primary concern at this point are how to effectively test the
triggers we write against our supported database systems, and their
various deployment variations. We might be able to easily drop SQLite
support (as it's only supported for our own test suite), but should we
expect variation in support and/or actual behavior of triggers across
the MySQLs, MariaDBs, Perconas, etc, of the world that would make it
necessary to test each of them independently? If you have operational
experience working with triggers at scale: are there landmines that we
need to be aware of? What is it going to take for us to say we support
*zero* downtime upgrades with confidence?


I would really hold off doing anything triggers-related until there is 
sufficient testing for that, especially with potentially dirty data.


Triggers also really bring in a whole new DSL that people need to learn 
and understand, not just across this boundary, but when debugging issues 
in the future. And it means that any errors happening here are now in 
a place outside of normal logging / recovery mechanisms.


There is a lot of value, in hard problem spaces like zero-downtime 
upgrades, in keeping to common patterns between projects, because there are 
limited folks with the domain knowledge, and splitting that even further 
makes it harder to make this universal among projects.


-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] Drivers team cancelled

2016-08-25 Thread Armando M.
Hi folks,

We have a few absences today, apologies for the short notice but I am going
to cancel the meeting.

If there is anything release related you'd like to discuss please reach out
to me on this thread or IRC.

Cheers,
Armando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-25 Thread Sylvain Bauza



Le 25/08/2016 16:19, Andrew Laski a écrit :

Cross posting to gather some operator feedback.

There have been a couple of contentious patches gathering attention
recently about how to handle the case where a block device mapping
supersedes flavor information. Before moving forward on either of those
I think we should have a discussion about how best to handle the general
case, and how to handle any changes in behavior that results from that.

There are two cases presented:

1. A user boots an instance using a Cinder volume as a root disk,
however the flavor specifies root_gb = x where x > 0. The current
behavior in Nova is that the scheduler is given the flavor root_gb info
to take into account during scheduling. This may disqualify some hosts
from receiving the instance even though that disk space  is not
necessary because the root disk is a remote volume.
https://review.openstack.org/#/c/200870/

2. A user boots an instance and uses the block device mapping parameters
to specify a swap or ephemeral disk size that is less than specified on
the flavor. This leads to the same problem as above, the scheduler is
provided information that doesn't match the actual disk space to be
consumed. https://review.openstack.org/#/c/352522/

Now the issue: while it's easy enough to provide proper information to
the scheduler on what the actual disk consumption will be when using
block device mappings that undermines one of the purposes of flavors
which is to control instance packing on hosts. So the outstanding
question is to what extent should users have the ability to use block
device mappings to bypass flavor constraints?

One other thing to note is that while a flavor constrains how much local
disk is used it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= to what the flavor provides but can
have an arbitrary sized root disk if it's a remote volume.

Some possibilities:

Completely allow block device mappings, when present, to determine
instance packing. This is what the patches above propose and there's a
strong desire for this behavior from some folks. But changes how many
instances may fit on a host which could be undesirable to some.


That would completely (as you mentioned) trample the fact that Nova 
uses flavors as the quantitative resource request, and would create some 
kind of conditional where we check whether a BDM is present and override 
the flavor disk values.


Please, I think we should only have one single source of truth for 
knowing the user disk request, which is flavors.
Of course, long-term, we could look at composite flavors to help users 
avoid creating a whole handful of flavors for nearly identical requests, 
but that would still be flavors (or whatever name we give a flavor 
composition).



Keep the status quo. It's clear that is undesirable based on the bug
reports and proposed patches above.


The status quo is not good either. Given that we contract on BDM sizes 
in the API, we should somehow respect that contract and either accept 
the request (and honor it) or refuse it gracefully (for example, when a 
flavor swap value doesn't match the swap BDM size you asked for).




Allow block device mappings, when present, to mostly determine instance
packing. By that I mean that the scheduler only takes into account local
disk that would be consumed, but we add additional configuration to Nova
which limits the number of instance that can be placed on a host. This
is a compromise solution but I fear that a single int value does not
meet the needs of deployers wishing to limit instances on a host. They
want it to take into account cpu allocations and ram and disk, in short
a flavor :)


If we consider that a flavor is the only source of truth, it means that 
another possibility would be to say that when a user requests both a 
flavor and a BDM, we would need to reconcile those two into one single 
flavor that would be part of the RequestSpec object. That wouldn't be 
the flavor the user asked for, sure, but we would respect the quantitative 
resource values he wanted.


-Sylvain



And of course there may be some other unconsidered solution. That's
where you, dear reader, come in.

Thoughts?

-Andrew


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-sfc] Unable to create openstack SFC

2016-08-25 Thread Alioune
Hi Mohan,

The packets are not going through the SFs after setting up the chain, and I
think this error is due to a misconfiguration of the pipelines in br-int.
I used the flow classifier [0], but only the network address "55.55.55.0/24"
appears in the pipeline flow entries (see [1] and [2]), not the explicit
source address "55.55.55.8/24" or destination address "55.55.55.7/24".

The source instance can successfully ping the destination before setting up
the port chain; after building the chain the ICMP packets leave the source
for the destination (see [3]), but it seems they are not correctly switched
in br-int.

Any suggestions to solve that?

[0] "neutron flow-classifier-create --ethertype IPv4 --source-ip-prefix
55.55.55.8/24  --logical-source-port 9ee874fc-aaec-477d-af41-0d0e872bb209
--destination-ip-prefix 55.55.55.7/24  --logical-destination-port
d2eea910-4e6c-4f30-947a-849fba7447a4  --protocol icmp FC1"
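
Note that a /24 source prefix masks the host bits by definition, so the
flows can only ever match the network address; matching a single host would
need a /32 prefix. A quick check with Python's ipaddress module:

    import ipaddress
    # strict=False masks the host bits, just as the dataplane does:
    print(ipaddress.ip_network(u'55.55.55.8/24', strict=False))
    # prints 55.55.55.0/24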

[1] sudo ovs-ofctl dump-flows br-int -O OpenFlow13 table=0
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x990756dc81846819, duration=1848.233s, table=0, n_packets=0,
n_bytes=0, priority=10,arp,in_port=4 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1833.939s, table=0, n_packets=0,
n_bytes=0, priority=10,arp,in_port=6 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1811.307s, table=0, n_packets=29,
n_bytes=1218, priority=10,arp,in_port=8 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1850.150s, table=0, n_packets=12,
n_bytes=504, priority=10,arp,in_port=3 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1837.405s, table=0, n_packets=11,
n_bytes=462, priority=10,arp,in_port=5 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1825.399s, table=0, n_packets=26,
n_bytes=1092, priority=10,arp,in_port=7 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=4244.694s, table=0, n_packets=329,
n_bytes=35178, priority=0 actions=NORMAL
 cookie=0x990756dc81846819, duration=4244.276s, table=0, n_packets=0,
n_bytes=0, priority=20,mpls actions=resubmit(,10)
 cookie=0x990756dc81846819, duration=1850.182s, table=0, n_packets=21,
n_bytes=2282, priority=9,in_port=3 actions=resubmit(,25)
 cookie=0x990756dc81846819, duration=1848.328s, table=0, n_packets=3,
n_bytes=230, priority=9,in_port=4 actions=resubmit(,25)
 cookie=0x990756dc81846819, duration=1837.480s, table=0, n_packets=21,
n_bytes=2282, priority=9,in_port=5 actions=resubmit(,25)
 cookie=0x990756dc81846819, duration=1834.008s, table=0, n_packets=2,
n_bytes=140, priority=9,in_port=6 actions=resubmit(,25)
 cookie=0x990756dc81846819, duration=1825.467s, table=0, n_packets=27,
n_bytes=2870, priority=9,in_port=7 actions=resubmit(,25)
 cookie=0x990756dc81846819, duration=1811.437s, table=0, n_packets=179,
n_bytes=24558, priority=9,in_port=8 actions=resubmit(,25)
 cookie=0x990756dc81846819, duration=1850.166s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=3,icmp_type=136 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1848.266s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=4,icmp_type=136 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1837.433s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=5,icmp_type=136 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1825.436s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=7,icmp_type=136 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1833.966s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=6,icmp_type=136 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=1811.353s, table=0, n_packets=0,
n_bytes=0, priority=10,icmp6,in_port=8,icmp_type=136 actions=resubmit(,24)
 cookie=0x990756dc81846819, duration=592.988s, table=0, n_packets=305,
n_bytes=29890, priority=30,icmp,in_port=8,nw_src=55.55.55.0/24,nw_dst=
55.55.55.0/24 actions=group:1
 cookie=0x990756dc81846819, duration=592.835s, table=0, n_packets=0,
n_bytes=0, priority=30,icmp,in_port=4,nw_src=55.55.55.0/24,nw_dst=
55.55.55.0/24 actions=group:2
 cookie=0x990756dc81846819, duration=592.750s, table=0, n_packets=0,
n_bytes=0, priority=30,icmp,in_port=6,nw_src=55.55.55.0/24,nw_dst=
55.55.55.0/24 actions=NORMAL

[2] sudo ovs-ofctl dump-flows br-int -O OpenFlow13 table=5
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x990756dc81846819, duration=660.337s, table=5, n_packets=0,
n_bytes=0, priority=1,ip,dl_dst=fa:16:3e:ee:ac:9a,nw_src=55.55.55.0/24
actions=push_mpls:0x8847,set_field:65791->mpls_label,set_mpls_ttl(255),push_vlan:0x8100,set_field:4097->vlan_vid,resubmit(,10)
 cookie=0x990756dc81846819, duration=660.104s, table=5, n_packets=0,
n_bytes=0, priority=1,ip,dl_dst=fa:16:3e:9b:2b:91,nw_src=55.55.55.0/24
actions=push_mpls:0x8847,set_field:65790->mpls_label,set_mpls_ttl(254),push_vlan:0x8100,set_field:4097->vlan_vid,resubmit(,10)
 cookie=0x990756dc81846819, duration=660.325s, table=5, n_packets=0,
n_bytes=0, priority=0,dl_dst=fa:16:3e:ee:ac:9a
actions=push_mpls:0x8847,set_field:65791->mpls_label,set_mpls_ttl(

Re: [openstack-dev] [heat][tripleo][horizon][tripleo-ui] Parameter groups, tags and deprecation

2016-08-25 Thread Zane Bitter

On 25/08/16 14:02, Steven Hardy wrote:

Hi all,

So I'm following up on a discussion that started here:

https://review.openstack.org/#/c/356240

Basically I recently suggested[1] that relaxing the restriction where a
parameter can only exist in exactly one parameter_group would be a way to
help work around some pain-points we have in TripleO atm.


I usually read the scrollback of meetings I'm not able to attend but I 
must have missed that one, sorry.



Let me start with the problems we're trying to solve, because I think
they are common to many Heat users, not just TripleO:

1. When doing nested validation to discover parameter schema, it's
impossible to tell if a parameter will be provided by the parent


IMHO they should all be provided by the parent, but that's another story ;)


This is a known issue from when nested validation was first implemented,
and we never figured out a satisfactory fix, basically because you can't
possibly tell without actually creating things whether an indirectly
provided parameter (e.g a reference to another resource in a parent
template) will resolve to a valid value.

So when you build a bunch of templates where there are some parameters
which form part of an internal interface (e.g they are always provided by
the parent and thus should not be exposed to end users) and some which are
extra (and should always be provided, or at least exposed to end users) you
have no way to differentiate them.


Heat has a way to differentiate them surely, because it has the 
templates? If you implement it that sounds much more reliable than 
asking template authors to annotate this stuff manually (for a start the 
same nested template can be instantiated in different ways, so it's not 
even possible in general to annotate it in such a way as to indicate 
which parameters will be set by the parent and which should be set by 
modifying parameter defaults).



Example of this here:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/heat-api.yaml#L7

ServiceNetMap, DefaultPasswords and EndpointMap are internal interfaces,
and the remainder are options for the service that should be exposed to the
user.

One option would be to add internal interfaces to a parameter_group
"internal", but that then means we can never categorize a parameter as
anything else (such as "deprecated" for example, see below).

2. When you ship a template containing parameters, it's impossible to ever
deprecate the parameter names

The problem here is twofold:
 - We don't provide any means to tag a parameter as deprecated (so that,
   for example, a UI or CLI tool could output an annoying warning to
encourage not using it)
 - There's no way to map an old (deprecated) name to a new one unless you
   do hacks inside the template such as overwriting one parameter with
   another via str_replace (which won't work for all parameter types, and
   you still can't ever remove the old parameter because there's no channel
   to warn users)

So, one option here is to add a parameter_group called "deprecated" that's
then introspected by the client during the validation phase, and outputs a
warning when deprecated parameters are used.


This seems like a very general problem. I would definitely be in favour 
of some native support for this in the HOT format.



3. No way to subcategorize more than once

The current parameter_group interface assumes that a UI will always be
built on the premise that parameters should only ever be in one group,
which may be true in Horizon, but it's not the only UX design pattern.


I guess that's true if you're willing to tweak the layout a bit:

http://i1.wp.com/www.jeffreythompson.org/blog/wp-content/uploads/2012/07/7-WayVennDiagram-web.png

(To be fair a carousel-type UI could actually work quite sanely, with a 
parameter in multiple groups just showing up on multiple different 
panels. But shouldn't we be going for lowest common denominator here? 
i.e. shouldn't the output of validate_template be guaranteed to be 
usable in all of the UIs out there, rather than saying that one UI knows 
how to display it so everything else has to just deal with it?)



Particularly when dealing with filtering lots of nested templates (which
all accept parameters which may exist in some subcategory, such as
"network", "passwords", "advanced", etc.), there's no way to subcategorize
parameters in the heat templates, so we're having to wire in hard-coded
translations outside of heat, because tripleo-ui doesn't work the same as
Horizon (it allows you to browse the nested parameters, and there are a lot,
so some subcategories are basically needed here; the Horizon flat-list
approach won't work).

Any ideas on the most acceptable path forward here would be appreciated -
Randall mentioned enabling per-parameter tags which is certainly an
option, and having some means to handle deprecation would also be very
good, I'm just not sure on the least impactful way to do this.

[openstack-dev] [Nova] Reconciling flavors and block device mappings

2016-08-25 Thread Andrew Laski
Cross posting to gather some operator feedback.

There have been a couple of contentious patches gathering attention
recently about how to handle the case where a block device mapping
supersedes flavor information. Before moving forward on either of those
I think we should have a discussion about how best to handle the general
case, and how to handle any changes in behavior that result from that.

There are two cases presented:

1. A user boots an instance using a Cinder volume as a root disk,
however the flavor specifies root_gb = x where x > 0. The current
behavior in Nova is that the scheduler is given the flavor root_gb info
to take into account during scheduling. This may disqualify some hosts
from receiving the instance even though that disk space is not
necessary because the root disk is a remote volume.
https://review.openstack.org/#/c/200870/

2. A user boots an instance and uses the block device mapping parameters
to specify a swap or ephemeral disk size that is less than specified on
the flavor. This leads to the same problem as above, the scheduler is
provided information that doesn't match the actual disk space to be
consumed. https://review.openstack.org/#/c/352522/

Now the issue: while it's easy enough to provide proper information to
the scheduler on what the actual disk consumption will be when using
block device mappings, that undermines one of the purposes of flavors,
which is to control instance packing on hosts. So the outstanding
question is to what extent should users have the ability to use block
device mappings to bypass flavor constraints?

One other thing to note is that while a flavor constrains how much local
disk is used, it does not constrain volume size at all. So a user can
specify an ephemeral/swap disk <= what the flavor provides, but can
have an arbitrary sized root disk if it's a remote volume.
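To make the disk accounting concrete, here is a minimal sketch of the
calculation in question (plain Python; the field names loosely follow
Nova's block-device-mapping v2 format, but this is illustrative, not
Nova's actual scheduler code):

    # Sketch only: derive the local disk a request will actually consume
    # when block device mappings supersede the flavor.
    def requested_local_disk_gb(flavor, bdms):
        # Case 1: the root disk is a remote volume, so root_gb is moot.
        boot_from_volume = any(bdm.get('destination_type') == 'volume' and
                               bdm.get('boot_index') == 0 for bdm in bdms)
        root_gb = 0 if boot_from_volume else flavor['root_gb']

        # Case 2: a BDM may request a smaller swap/ephemeral size than
        # the flavor allows; take the BDM value when one is given.
        swap_gb = flavor['swap_gb']
        eph_gb = flavor['ephemeral_gb']
        for bdm in bdms:
            if bdm.get('guest_format') == 'swap':
                swap_gb = bdm.get('volume_size', swap_gb)
            elif bdm.get('source_type') == 'blank':
                eph_gb = bdm.get('volume_size', eph_gb)
        return root_gb + swap_gb + eph_gb

The policy question below is then simply whether the scheduler consumes
this value or the flavor's.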

Some possibilities:

Completely allow block device mappings, when present, to determine
instance packing. This is what the patches above propose and there's a
strong desire for this behavior from some folks. But it changes how many
instances may fit on a host, which could be undesirable to some.

Keep the status quo. It's clear that this is undesirable based on the bug
reports and proposed patches above.

Allow block device mappings, when present, to mostly determine instance
packing. By that I mean that the scheduler only takes into account local
disk that would be consumed, but we add additional configuration to Nova
which limits the number of instances that can be placed on a host. This
is a compromise solution but I fear that a single int value does not
meet the needs of deployers wishing to limit instances on a host. They
want it to take into account cpu allocations and ram and disk, in short
a flavor :)

And of course there may be some other unconsidered solution. That's
where you, dear reader, come in.

Thoughts?

-Andrew




Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread gordon chung


On 25/08/16 01:13 PM, Steve Martinelli wrote:
The keystone team is pursuing a trigger-based approach to support rolling, 
zero-downtime upgrades. The proposed operator experience is documented here:

  http://docs.openstack.org/developer/keystone/upgrading.html

This differs from Nova and Neutron's approaches to solve for rolling upgrades 
(which use oslo.versionedobjects), however Keystone is one of the few services 
that doesn't need to manage communication between multiple releases of multiple 
service components talking over the message bus (which is the original use case 
for oslo.versionedobjects, and for which it is aptly suited). Keystone simply 
scales horizontally and every node talks directly to the database.


just curious, but does Keystone have any IPC, or is it still just a single
service interacting with the db? if the latter, you should be able to just
apply migrations with no downtime as long as you don't modify/delete existing
columns. similar experience as others: haven't really used stored procedures
in a while, but they're a pain wrt portability. considering OpenStack has a
habit of supporting every driver under the sun, i'm guessing driver-specific
solutions will get more difficult over time.
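To make the "expand only" point concrete, an additive migration of that
kind might look like the sketch below (assuming Alembic for brevity; the
table and column names are hypothetical):

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # New columns are nullable, so old code that never writes them
        # keeps working while the rollout is in progress.
        op.add_column('user', sa.Column('last_active_at',
                                        sa.DateTime(), nullable=True))

    def downgrade():
        op.drop_column('user', 'last_active_at')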

cheers,


--
gord


[openstack-dev] [new][cinder] os-brick 1.6.0 release (newton)

2016-08-25 Thread no-reply
We are gleeful to announce the release of:

os-brick 1.6.0: OpenStack Cinder brick library for managing local
volume attaches

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/os-brick

With package available at:

https://pypi.python.org/pypi/os-brick

Please report issues through launchpad:

http://bugs.launchpad.net/os-brick

For more details, please see below.

1.6.0
^^^^^

New Features

* Add Windows Fibre Channel connector support.

* Add Windows SMBFS connector support.

* Added initiator connector 'VmdkConnector' to support backup and
  restore of vmdk volumes by Cinder backup service.

Changes in os-brick 1.5.0..1.6.0


f8e4f3c Mask out passwords when tracing
ce0d9b3 RBD: Fix typo in rados timeout assignment
f0491db Fixes with customized ceph cluster name
b815232 Add connector for GPFS volumes
9daa20e Add missing %s in print message
91ac58f Fix linuxrbd to work with Python 3
552bcb2 Add tracing unit tests
2dbe45d Wrong param makes exception message throws inaccurate
8df2fe9 Fix the typo in the file
9f70ace Add connector for vmdk volumes
cef2880 Fix iSCSI discovery with ISER transport
075f12e RemoteFsClient extend Executor
25453f3 Add Windows Fibre Channel connector
4045300 Add Windows SMBFS connector
d937f64 Fix FC multipath cleanup
8900ce1 Fix weak test_vzstorage_with_mds_list
45184cb Fix the mocking mess
28a4d55 Fix FC multipath rescan
7a75b47 Update the home-page info with the developer documentation
c5e3d8a Splitting Out Connectors from connector.py
e1f9a54 Remove race condition from lvextend
53173f7 Fix iSCSI multipath cleanup


Diffstat (except docs and test files)
-------------------------------------

os_brick/encryptors/cryptsetup.py  |2 +-
os_brick/exception.py  |4 +
os_brick/initiator/__init__.py |   39 +
os_brick/initiator/connector.py| 3293 +---
os_brick/initiator/connectors/__init__.py  |0
os_brick/initiator/connectors/aoe.py   |  177 ++
os_brick/initiator/connectors/base.py  |  129 +
os_brick/initiator/connectors/base_iscsi.py|   42 +
os_brick/initiator/connectors/disco.py |  207 ++
os_brick/initiator/connectors/drbd.py  |  109 +
os_brick/initiator/connectors/fake.py  |   48 +
os_brick/initiator/connectors/fibre_channel.py |  301 ++
.../initiator/connectors/fibre_channel_s390x.py|   86 +
os_brick/initiator/connectors/gpfs.py  |   41 +
os_brick/initiator/connectors/hgst.py  |  182 ++
os_brick/initiator/connectors/huawei.py|  192 ++
os_brick/initiator/connectors/iscsi.py |  844 +
os_brick/initiator/connectors/local.py |   78 +
os_brick/initiator/connectors/rbd.py   |  197 ++
os_brick/initiator/connectors/remotefs.py  |  119 +
os_brick/initiator/connectors/scaleio.py   |  491 +++
os_brick/initiator/connectors/sheepdog.py  |  126 +
os_brick/initiator/connectors/vmware.py|  276 ++
os_brick/initiator/initiator_connector.py  |  193 ++
os_brick/initiator/linuxfc.py  |   46 +-
os_brick/initiator/linuxrbd.py |   16 +-
os_brick/initiator/linuxscsi.py|2 +-
os_brick/initiator/windows/__init__.py |   43 -
os_brick/initiator/windows/base.py |9 +-
os_brick/initiator/windows/fibre_channel.py|  127 +
os_brick/initiator/windows/iscsi.py|   12 +-
os_brick/initiator/windows/smbfs.py|   94 +
os_brick/local_dev/lvm.py  |   29 +
os_brick/remotefs/remotefs.py  |   42 +-
os_brick/remotefs/windows_remotefs.py  |  122 +
.../initiator/connectors/test_fibre_channel.py |  398 +++
.../connectors/test_fibre_channel_s390x.py |   71 +
os_brick/utils.py  |   33 +-
...add-windows-fibre-channel-030c095c149da321.yaml |3 +
.../notes/add-windows-smbfs-d86edaa003130a31.yaml  |3 +
.../vmware-vmdk-connector-19e6999e6cae43cd.yaml|4 +
setup.cfg  |2 +-
test-requirements.txt  |1 +
71 files changed, 8718 insertions(+), 6068 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index b0c1dd2..b3d465c 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -16,0 +17 @@ os-testr>=0.7.0 # Apache-2.0
+oslo.vmware>=2.11.0 # Apache-2.0





Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-25 Thread Stanislaw Bogatkin
+1

On Thu, Aug 25, 2016 at 12:08 PM, Aleksandr Didenko wrote:

> +1
>
> On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko wrote:
>
>> +1
>>
>>
>> /sv
>>
>>


-- 
with best regards,
Stan.


[openstack-dev] [release] Release countdown for week R-5, 29 Aug - 2 Sept

2016-08-25 Thread Doug Hellmann
Focus
-

The feature freeze deadline is 1 Sept. All project teams should be
putting the last minute touches on feature work.

General Notes
-

Non-client libraries are frozen and will only see updates for
release-critical bugs. We have not created stable branches, yet,
in case we do need those releases. Teams who haven't frozen master
will need to create the stable branch from their most recent release
and backport fixes before doing a patch release.

The third milestone, occurring this week, includes several freezes in
our release cycle to let us shift our focus to bug fixes and generally
hardening the release.

* We freeze releases of all libraries and changes to requirements
  between the third milestone and the final release to give downstream
  packagers time to vet the libraries. Only emergency bug fix updates
  are allowed during that period, not releases for FFEs.

* The overall feature freeze allows teams to wrap up Newton and start
  thinking about Ocata planning.

* We start a soft string freeze at the milestone to give translators
  time to catch up with the work that has already been done this cycle.
  A hard string freeze will follow two weeks later at R-3.

Release Actions
---

The last day for releases for client libraries will be 1 Sept. File your
release request in time to have the release done on the 1st.

The milestone deadline is also 1 Sept. Please file your 0b3 tag request
in time to have the release done on the 1st.

Review the members of your $project-release group in gerrit, based on
the instructions Thierry sent on 15 Aug.

Important Dates
---

Newton RC-1, 15 Sept.

Newton release schedule: http://releases.openstack.org/newton/schedule.html



Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread Dan Smith
> This differs from Nova and Neutron's approaches to solve for rolling
> upgrades (which use oslo.versionedobjects), however Keystone is one of
> the few services that doesn't need to manage communication between
> multiple releases of multiple service components talking over the
> message bus (which is the original use case for oslo.versionedobjects,
> and for which it is aptly suited). Keystone simply scales horizontally
> and every node talks directly to the database.

Yeah, o.vo gives you nothing really if all you want is a facade behind
which to hide the application-level migrations. That doesn't mean it
would be a bad thing to use, but maybe overkill vs. just writing a
couple wrappers.

> Database triggers are obviously a new challenge for developers to write,
> honestly challenging to debug (being side effects), and are made even
> more difficult by having to hand write triggers for MySQL, PostgreSQL,
> and SQLite independently (SQLAlchemy offers no assistance in this case),
> as seen in this patch:
> 
>   https://review.openstack.org/#/c/355618/
> 
> However, implementing an application-layer solution with
> oslo.versionedobjects is not an easy task either; refer to Neutron's
> implementation:

Yeah, it's not trivial at the application level either but at least it
is in python and write-once for any kind of compatible backend. My
(extremely limited) experience with stored procedures is that they are
very difficult to get right, even as an expert in the technology, which
almost none of us are. Application-level migrations are significantly
simpler and exist closer to the domain of the rest of the code for a
specific new feature.

I will offer one bit of anecdotal information that may be relevant:
Several of the migrations that nova has done in the past have required
things like parsing/generating JSON, and making other API calls to look
up information needed to translate from one format to another. That
would (AFAIK) be quite difficult to do in the database itself, and may
mean you end up with a combination of both approaches in the long run.
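A rough sketch of that kind of application-level translation, done as
rows are read (the record layout here is invented for the example):

    import json

    def load_instance_extra(row):
        # Translate an old JSON blob format to the new one on read; the
        # contract is that callers only ever see the new format.
        data = json.loads(row['extra'])
        if 'nics' not in data:  # old format detected
            data['nics'] = [{'mac': mac} for mac in data.pop('macs', [])]
        return data

Doing the same lookup-and-translate inside a stored procedure would mean
reimplementing the JSON handling in each database's procedural dialect.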

I don't think that keystone necessarily needs to adopt the same approach
as the other projects (especially in the absence of things like
cross-version RPC compatibility) and so if stored procedures are really
the best fit then that's cool. They will themselves be a landmine in
front of me should I ever have to debug such a problem, but if they are
significantly better for the most part then so be it.

--Dan



Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-25 Thread Thierry Carrez
lebre.adr...@free.fr wrote:
> [...]
> The goal of this email is to:
> 
> (i) understand whether the fog/edge computing use case is in the scope of 
> the Architecture WG. 
> 
> (ii) if not, whether it makes sense to create a working group that focuses
> on scalability and multi-site challenges (folks from Orange Labs and British
> Telecom, for instance, already told us that they are interested in such a
> use-case).
> 
> (iii) what is the best way to coordinate our efforts with the actions 
> performed in other WGs such as the Performance and Architecture ones (e.g., 
> actions performed/decisions taken in the Architecture WG can have impacts on 
> the massively distributed WG and thus drive the way we should perform 
> actions to progress to the Fog/Edge Computing target)

I think the two groups are complementary. The massively-distributed WG
needs to gather the parties interested in working in that, identify the
challenges and paint a picture of what the way forward could look like.

If only incremental changes or optional features are needed to achieve
the goal, I'd say the Arch WG doesn't really need to get involved. You
just need to push those features in the various impacted projects, with
some inter-project work coordination. But if the only way to achieve
those goals is to change the general architecture of OpenStack (for
example by needing Tricircle on a top cell in every OpenStack cloud),
then validation of the plan and assessment of how that could be rolled
out OpenStack-wide would involve the Arch WG (and ultimately probably
the TC).

The former approach is a lot easier than the latter :)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-25 Thread Slade Baumann
Ben,

In response to your comments, we are working on getting NFS up to date at
IBM as we are using it as our storage backend (long story, don't ask lol).
A few of us used to focus on Cinder development, so we have a good chance
of getting things working. So I agree wholeheartedly that this needs to be
resolved, and we are working towards that goal (as well as fixing clone,
etc).

P.S. Thanks for your history of the NFS driver. I didn't know a lot of that.

Slade

-Ben Swartzlander wrote: -
To: "OpenStack Development Mailing List (not for usage questions)", Erlon Cruz
From: Ben Swartzlander
Date: 08/25/2016 12:10PM
Subject: Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests



Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-25 Thread Thierry Carrez
Jay Pipes wrote:
> [...]
> How is vCPE a *cloud* use case?
> 
> From what I understand, the v[E]CPE use case is essentially that Telcos
> want to have the set-top boxen/routers that are running cable television
> apps (i.e. AT&T U-verse or Verizon FiOS-like things for US-based
> customers) and home networking systems (broadband connectivity to a
> local central office or point of presence, etc) be able to run on virtual
> machines to make deployment and management of new applications easier.
> Since all those home routers and set-top boxen are essentially just
> Linux boxes, the infrastructure seems to be there to make this a
> cost-savings reality for Telcos. [1]
> 
> The problem is that that isn't remotely a cloud use case. Or at least,
> it doesn't describe what I think of as cloud.
> [...]

My read on that is that they want to build a cloud using the computing
power in those set-top boxes and be able to distribute workloads to them
(in an API/cloudy manner). So yes, essentially nova-compute nodes on
those set-top boxes. It feels like that use case fits your description
of "cloud", only their datacenter ends up being distributed in their
customers homes (and conveniently using your own electricity/cooling) ?

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread Matt Fischer
On Thu, Aug 25, 2016 at 1:13 PM, Steve Martinelli wrote:

> The keystone team is pursuing a trigger-based approach to support rolling,
> zero-downtime upgrades. The proposed operator experience is documented here:
>
>   http://docs.openstack.org/developer/keystone/upgrading.html
>
> This differs from Nova and Neutron's approaches to solve for rolling
> upgrades (which use oslo.versionedobjects), however Keystone is one of the
> few services that doesn't need to manage communication between multiple
> releases of multiple service components talking over the message bus (which
> is the original use case for oslo.versionedobjects, and for which it is
> aptly suited). Keystone simply scales horizontally and every node talks
> directly to the database.
>
> Database triggers are obviously a new challenge for developers to write,
> honestly challenging to debug (being side effects), and are made even more
> difficult by having to hand write triggers for MySQL, PostgreSQL, and
> SQLite independently (SQLAlchemy offers no assistance in this case), as
> seen in this patch:
>
>   https://review.openstack.org/#/c/355618/
>
> However, implementing an application-layer solution with
> oslo.versionedobjects is not an easy task either; refer to Neutron's
> implementation:
>
> https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db
>
> Our primary concern at this point is how to effectively test the triggers
> we write against our supported database systems, and their various
> deployment variations. We might be able to easily drop SQLite support (as
> it's only supported for our own test suite), but should we expect variation
> in support and/or actual behavior of triggers across the MySQLs, MariaDBs,
> Perconas, etc, of the world that would make it necessary to test each of
> them independently? If you have operational experience working with
> triggers at scale: are there landmines that we need to be aware of? What is
> it going to take for us to say we support *zero* downtime upgrades with
> confidence?
>
> Steve & Dolph
>
>

No experience to add for triggers, but I'm happy to help test this on a
MySQL Galera cluster. I'd also like to add thanks for looking into this. A
keystone outage is a cloud outage and being able to eliminate them from
upgrades will be beneficial to everyone.


[openstack-dev] [heat][tripleo][horizon][tripleo-ui] Parameter groups, tags and deprecation

2016-08-25 Thread Steven Hardy
Hi all,

So I'm following up on a discussion that started here:

https://review.openstack.org/#/c/356240

Basically I recently suggested[1] that relaxing the restriction where a
parameter can only exist in exactly one parameter_group would be a way to
help work around some pain-points we have in TripleO atm.

Let me start with the problems we're trying to solve, because I think
they are common to many Heat users, not just TripleO:

1. When doing nested validation to discover parameter schema, it's
impossible to tell if a parameter will be provided by the parent

This is a known issue from when nested validation was first implemented,
and we never figured out a satisfactory fix, basically because you can't
possibly tell without actually creating things whether an indirectly
provided parameter (e.g. a reference to another resource in a parent
template) will resolve to a valid value.

So when you build a bunch of templates where there are some parameters
which form part of an internal interface (e.g. they are always provided by
the parent and thus should not be exposed to end users) and some which are
extra (and should always be provided, or at least exposed to end users), you
have no way to differentiate them.

Example of this here:

https://github.com/openstack/tripleo-heat-templates/blob/master/puppet/services/heat-api.yaml#L7

ServiceNetMap, DefaultPasswords and EndpointMap are internal interfaces,
and the remainder are options for the service that should be exposed to the
user.

One option would be to add internal interfaces to a parameter_group
"internal", but that then means we can never categorize a parameter as
anything else (such as "deprecated" for example, see below).

2. When you ship a template containing parameters, it's impossible to ever
deprecate the parameter names

The problem here is twofold:
 - We don't provide any means to tag a parameter as deprecated (so that,
   for example, a UI or CLI tool could output an annoying warning to
encourage not using it)
 - There's no way to map an old (deprecated) name to a new one unless you
   do hacks inside the template such as overwriting one parameter with
   another via str_replace (which won't work for all parameter types, and
   you still can't ever remove the old parameter because there's no channel
   to warn users)

So, one option here is to add a parameter_group called "deprecated" that's
then introspected by the client during the validation phase, and outputs a
warning when deprecated parameters are used.
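For illustration, a client-side check along those lines might look like
the sketch below (the "deprecated" group label and the introspection
point are exactly what is being proposed here, not anything Heat
supports today):

    import yaml

    def warn_deprecated_params(template_str, user_params):
        # Warn about user-supplied parameters that the template places
        # in a parameter_group labelled "deprecated".
        template = yaml.safe_load(template_str)
        deprecated = set()
        for group in template.get('parameter_groups') or []:
            if group.get('label') == 'deprecated':
                deprecated.update(group.get('parameters') or [])
        for name in sorted(deprecated.intersection(user_params)):
            print("WARNING: parameter %r is deprecated" % name)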

3. No way to subcategorize more than once

The current parameter_group interface assumes that a UI will always be
built on the premise that parameters should only ever be in one group,
which may be true in Horizon, but it's not the only UX design pattern.

Particularly when dealing with filtering lots of nested templates (which
all accept parameters which may exist in some subcategory, such as
"network", "passwords", "advanced", etc.), there's no way to subcategorize
parameters in the heat templates, so we're having to wire in hard-coded
translations outside of heat, because tripleo-ui doesn't work the same as
Horizon (it allows you to browse the nested parameters, and there are a lot,
so some subcategories are basically needed here; the Horizon flat-list
approach won't work).

Any ideas on the most acceptable path forward here would be appreciated -
Randall mentioned enabling per-parameter tags which is certainly an
option, and having some means to handle deprecation would also be very
good, I'm just not sure on the least impactful way to do this.

Thanks!

Steve


[1] 
http://eavesdrop.openstack.org/meetings/heat/2016/heat.2016-08-10-08.00.log.html



Re: [openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread Sean M. Collins
Personally I had very bad experiences with stored procedures and
triggers in previous jobs, where the amount of side effects that
occurred and the overall lack of maintainability of triggers and stored
procedures scared me off.

We handed off changes to stored procedures and
triggers to the DBAs, who had a tendency to not apply them correctly or
forget to apply them at a site. Then it was a total nightmare to try and
figure out why things wouldn't work, until we discovered that the
changes to an SP or Trigger wasn't actually applied.

Now, I don't think OpenStack as a project suffers the same
organizational dysfunction as my previous jobs, but just overall they're
hard to debug and maintain and I don't like to use them.

/rant

-- 
Sean M. Collins



Re: [openstack-dev] [TripleO] New bug tagging policy

2016-08-25 Thread Julie Pichon
On 25 August 2016 at 11:47, Swapnil Kulkarni  wrote:
> On Thu, Aug 25, 2016 at 4:00 PM, Julie Pichon  wrote:
>> Hi folks,
>>
>> The bug tagging proposal has merged, behold the new policy:
>>
>> http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html
>>
>> TL;DR The TripleO Launchpad tracker encompasses a lot of sub-projects,
>> let's use a consistent list of Launchpad tags where they make sense in
>> order to help understand which area(s) are affected. The tags get
>> autocompleted by Launchpad (or will be soon).
>>
>>
>> There is one remaining action to create the missing tags: I don't have
>> bug wrangling permissions on the TripleO project so, if someone with
>> the appropriate permissions could update the list [1] to match the
>> policy I would appreciate it. Should I be deemed trustworthy enough
>> I'm just as happy to do it myself and help out with the occasional
>> bout of triaging as well.
>>
>> Thanks,
>>
>> Julie
>>
>> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags
>>
>
> Done!

Awesome, thank you!!!

It seems a couple of the tags, like t-h-t, don't quite match the names
in the document, but well, it's all a start :)

Thanks,

Julie



Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-25 Thread Andrew Laski



On Thu, Aug 25, 2016, at 12:22 PM, Everett Toews wrote:
> Top posting with general comment...
>
> It sounds like there's some consensus in Nova-land around these traits
> (née "capabilities"). The API Working Group [4] is
> also aware of similar efforts in Cinder [1][2] and Glance [3].

To be clear, we're looking at exposing both traits and capabilities in
Nova. This puts us in a weird spot where I think our concept of traits
aligns with Cinder's capabilities, but I don't see any match for the Nova
concept of capabilities. So I'm still open to naming suggestions but I
think capabilities most accurately describes what it is. Dean has it
right, I think, that what we really have are 'api capabilities' and
'host capabilities'. But people will end up just using 'capabilities'
and cause confusion.


>
> If these are truly the same concepts being discussed across projects,
> it would be great to see consistency in the APIs and have the projects
> come together under a new guideline. I encourage the projects and
> people to propose such a guideline and for someone to step up and
> champion it. Seems like good fodder for a design session proposal at
> the upcoming summit.

Here's what all of these different things look like to me:

Cinder is looking to expose hardware capabilities. This pretty closely
aligns with what traits are intending to do in Nova. This answers the
question of "can I create a resource that needs/does X in this
deployment?" However in Nova we ultimately want users to be able to
specify which traits they want for their instance. That may be embedded
in a flavor or arbitrarily specified in the request but a trait is not
implicitly available to all resources like it seems it is in Cinder. We
assume there could be a heterogeneous environment, so without requesting
a trait there's no guarantee of getting it.

Nova capabilities are intended to answer the question of "as user Y with
resource X what can I do with it?" This is dependent on user
authorization, hardware "traits" where the resource lives, and service
version. I didn't see an analog to this in any of the proposals below.
And one major difference between this and the other proposals is that,
if possible, we would like the response to map to the API action that
will perform that capability. So if a user can perform a resize on their
instance the response might include 'POST .../servers//action -d
resize' or whatever form we come up with.
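Purely as an illustration of that last point (no response format has been
agreed, and every name and field here is invented), such a capability
entry might look something like:

    # Hypothetical response shape only; nothing like this exists yet.
    capability = {
        "action": "resize",
        "allowed": True,
        "method": "POST",
        "href": "/servers/{server_id}/action",
        "body": {"resize": {"flavorRef": "{flavor_id}"}},
    }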

The Glance concept of value discovery maps closely to what Nova
capabilities are in intent in that it answers the question of "what
can I do in this API request that will be valid?" But the scope is
completely different in that it doesn't answer the question of which
API requests can be made, just what values can be used in this
specific call.


Given the above I find that I don't have the imagination required to
consolidate those into a consistent API concept that can be shared
across projects. Cinder capabilities and Nova traits could potentially
work, but the rest seem too different to me. And if we change traits-
>capabilities then we should find another name for what is currently
Nova capabilities.

-Andrew

>
> Cheers,
> Everett
>
> [1] https://review.openstack.org/#/c/306930/
> [2] https://review.openstack.org/#/c/350310/
> [3]  
> https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html#value-discovery
> [4] http://specs.openstack.org/openstack/api-wg/
>
>
>> On Aug 16, 2016, at 3:16 AM, Sylvain Bauza  wrote:
>>
>>
>>
>> Le 15/08/2016 22:59, Andrew Laski a écrit :
>>
>>> On Mon, Aug 15, 2016, at 10:33 AM, Jay Pipes wrote:
>>>
 On 08/15/2016 09:27 AM, Andrew Laski wrote:

> Currently in Nova we're discussing adding a "capabilities" API to
> expose
>  to users what actions they're allowed to take, and having compute
>  hosts
>  expose "capabilities" for use by the scheduler. As much fun as it
>  would
>  be to have the same term mean two very different things in
>  Nova to
>  retain some semblance of sanity let's rename one or both of these
>  concepts.
>
>  An API "capability" is going to be an action, or URL, that a
>  user is
>  allowed to use. So "boot an instance" or "resize this instance"
>  are
>  capabilities from the API point of view. Whether or not a user
>  has this
>  capability will be determined by looking at policy rules in place
>  and
>  the capabilities of the host the instance is on. For instance an
>  upcoming volume multiattach feature may or may not be allowed
>  for an
>  instance depending on host support and the version of nova-
>  compute code
>  running on that host.
>
>  A host "capability" is a description of the hardware or software
>  on the
>  host that determines whether or not that host can fulfill the
>  needs of
>  an instance looking for a home. So SSD or x86 could be host
> 

[openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

2016-08-25 Thread Steve Martinelli
The keystone team is pursuing a trigger-based approach to support rolling,
zero-downtime upgrades. The proposed operator experience is documented here:

  http://docs.openstack.org/developer/keystone/upgrading.html

This differs from Nova and Neutron's approaches to solve for rolling
upgrades (which use oslo.versionedobjects), however Keystone is one of the
few services that doesn't need to manage communication between multiple
releases of multiple service components talking over the message bus (which
is the original use case for oslo.versionedobjects, and for which it is
aptly suited). Keystone simply scales horizontally and every node talks
directly to the database.

Database triggers are obviously a new challenge for developers to write,
honestly challenging to debug (being side effects), and are made even more
difficult by having to hand write triggers for MySQL, PostgreSQL, and
SQLite independently (SQLAlchemy offers no assistance in this case), as
seen in this patch:

  https://review.openstack.org/#/c/355618/
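For readers unfamiliar with the mechanics: SQLAlchemy will happily execute
DDL you attach to table events, but the trigger body itself has to be
hand-written once per dialect. A rough sketch (the trigger logic is
invented for the example and is not Keystone's actual migration):

    import sqlalchemy as sa
    from sqlalchemy import DDL, event

    metadata = sa.MetaData()
    user = sa.Table('user', metadata,
                    sa.Column('id', sa.String(64), primary_key=True),
                    sa.Column('extra', sa.Text()))

    # MySQL-specific body; PostgreSQL and SQLite would each need their
    # own hand-written variant of the same logic.
    mysql_trigger = DDL("""
    CREATE TRIGGER user_insert_compat BEFORE INSERT ON user
    FOR EACH ROW SET NEW.extra = COALESCE(NEW.extra, '{}')
    """).execute_if(dialect='mysql')

    event.listen(user, 'after_create', mysql_trigger)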

However, implementing an application-layer solution with
oslo.versionedobjects is not an easy task either; refer to Neutron's
implementation:


https://review.openstack.org/#/q/topic:bp/adopt-oslo-versioned-objects-for-db

Our primary concern at this point is how to effectively test the triggers
we write against our supported database systems, and their various
deployment variations. We might be able to easily drop SQLite support (as
it's only supported for our own test suite), but should we expect variation
in support and/or actual behavior of triggers across the MySQLs, MariaDBs,
Perconas, etc, of the world that would make it necessary to test each of
them independently? If you have operational experience working with
triggers at scale: are there landmines that we need to be aware of? What is
it going to take for us to say we support *zero* downtime upgrades with
confidence?

Steve & Dolph


Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-25 Thread Ben Swartzlander
Originally the NFS driver did support snapshots, but it was implemented by 
just 'cp'ing the file containing the raw bits. This works fine (if 
inefficiently) for unattached volumes, but if you do this on an attached 
volume the snapshot won't be crash consistent at all.


It was decided that we could do better for attached volumes by switching to 
qcow2 and relying on nova to perform the snapshots. Based on this, the bad 
snapshot implementation was removed.


However, for a variety of reasons the nova-assisted snapshot implementation 
has remained unmerged for 2+ years and the NFS driver has been an exception 
to the rules for that whole time.


I would like to see that exception end in the near future with either the 
removal of the driver or the completion of the Nova-assisted snapshot 
implementation, and it doesn't really matter to me which.


There is a 3rd alternative which would be to modify the NFS driver to 
require a specific filesystem that supports snapshots (there are a few 
choices here, but definitely NOT ext4). Unfortunately those of us who work 
for storage vendors aren't motivated to make such a modification because it 
would be effectively creating more competition for ourselves. The only way 
this could happen is if someone not working for a storage vendor takes this on.


-Ben


On August 25, 2016 10:39:35 AM Erlon Cruz wrote:


Hi Jordan, Slade,

Currently the NFS driver supports neither cloning nor snapshots (which are
the base for implementing cloning). AFAIC, the NFS driver was in Cinder
before the minimum requirements were discussed and set, so it just stood
there with the features it already supported.

There is currently this job
'gate-tempest-dsvm-full-devstack-plugin-nfs-nv'[1] that, by the way, is
failing in the same test you mentioned though passing the snapshot tests
(not sure how the configuration is doing that), and a work[2] in progress
to support the snapshot feature.

So, Jordan, I think it's OK to allow tempest to skip these tests, provided
that, at least in the NFS driver, tempest isn't being an enforcement of
Cinder's minimum feature requirements.

Erlon


[1]
http://logs.openstack.org/86/147186/25/experimental/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/b149960/
[2] https://review.openstack.org/#/c/147186/

On Wed, Aug 24, 2016 at 6:34 PM, Jordan Pittier wrote:



On Wed, Aug 24, 2016 at 6:06 PM, Slade Baumann wrote:


I am attempting to disable clone tests in tempest as they aren't
functioning in NFS. But the tests test_volumes_clone.py and
test_volumes_clone_negative.py don't have the "clone" feature
toggle in them. I thought it obvious that if clone is disabled
in tempest, the tests that simply clone should be disabled.

So I put up a bug and fix for it, but have been talking with
Jordan Pittier and he suggested I come to the mailing list to
get this figured out.

I'm not asking for reviews, unless you want to give them.
I'm simply asking if this is the right way to go about this
or if there is something else I need to do to get this into
Tempest.

Here are the bug and fix:
https://bugs.launchpad.net/tempest/+bug/1615770
https://review.openstack.org/#/c/358813/

I would appreciate any suggestion or direction in this problem.

For extra reference, the clone toggle flag was added here:
https://bugs.launchpad.net/tempest/+bug/1488274

Hi,

Thanks for starting this thread. My point about this patch is, as "volume
clone" is part of the core requirements [1] every Cinder drive must
support, I don't see a need for a feature flag. The feature flag already
exists, but that doesn't mean we should encourage its usage.

Now, if this really helps the NFS driver (although I don't know why we
couldn't support clone with NFS)... I don't have a strong opinion on this
patch.

I -1ed the patch for consistency: I agree that there should be a minimum
set of features expected from a Cinder driver.

[1] http://docs.openstack.org/developer/cinder/devref/drivers.html#core-
functionality

Cheers,
Jordan
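For context, the guard being debated is tempest's existing feature flag;
a minimal sketch of how the clone tests would consume it (the test body
is a placeholder, not the real tempest test):

    import testtools

    from tempest.api.volume import base
    from tempest import config

    CONF = config.CONF

    class VolumesCloneTest(base.BaseVolumeTest):
        @testtools.skipUnless(CONF.volume_feature_enabled.clone,
                              "Cinder volume clones are disabled")
        def test_create_volume_from_volume(self):
            src = self.create_volume()
            self.create_volume(source_volid=src['id'])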




[openstack-dev] [new][keystone] keystoneauth1 2.12.1 release (newton)

2016-08-25 Thread no-reply
We are satisfied to announce the release of:

keystoneauth1 2.12.1: Authentication Library for OpenStack Identity

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/keystoneauth

With package available at:

https://pypi.python.org/pypi/keystoneauth1

Please report issues through launchpad:

http://bugs.launchpad.net/keystoneauth

For more details, please see below.

Changes in keystoneauth1 2.12.0..2.12.1
---------------------------------------

b7b887c get_endpoint should return None when no version found


Diffstat (except docs and test files)
-------------------------------------

keystoneauth1/identity/base.py  |  6 --
2 files changed, 25 insertions(+), 2 deletions(-)






[openstack-dev] [all][api] POST /api-wg/news

2016-08-25 Thread Chris Dent


Greetings OpenStack community,

Just me (cdent) and etoews today. We merged one guideline, froze some others 
and worked on making some links to the links guideline from other guidelines:

[5:40pm] cdent: hypermedia everywhere!
[5:41pm] etoews: it's the HT in HTML and HTTP!
[5:41pm] cdent: ™

The meeting logs are in the usual place [5]. If you find an issue with an 
existing guideline [3] or think of one that needs to exist, make a bug [4].

# New guidelines

Nothing new this week.

# API guidelines that have been recently merged

* Clarify backslash usage for 'in' operator
  https://review.openstack.org/#/c/353396/

# API guidelines proposed for freeze

The following guidelines are available for broader review by interested 
parties. These will be merged in one week if there is no further feedback.

* Add the beginning of a set of guidelines for URIs
  https://review.openstack.org/#/c/322194/
* Clarify handling of bad microversion strings
  https://review.openstack.org/#/c/346846/
* A guideline for links
  https://review.openstack.org/354266

# Guidelines currently under review [7]

At the moment everything is either frozen or merged.

# API Impact reviews currently open

Reviews marked as APIImpact [1] are meant to help inform the working group 
about changes which would benefit from wider inspection by group members and 
liaisons. While the working group will attempt to address these reviews 
whenever possible, it is highly recommended that interested parties attend the 
API-WG meetings [2] to promote communication surrounding their reviews.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [3].

Thanks for reading and see you next week!

[1] 
https://review.openstack.org/#/q/status:open+AND+(message:ApiImpact+OR+message:APIImpact),n,z
[2] https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
[3] http://specs.openstack.org/openstack/api-wg/
[4] https://bugs.launchpad.net/openstack-api-wg
[5] http://eavesdrop.openstack.org/meetings/api_wg/
[7] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

--
Chris Dent   ┬─┬ノ( º _ ºノ)   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-25 Thread Everett Toews
Top posting with general comment...

It sounds like there's some consensus in Nova-land around these traits (née 
"capabilities"). The API Working Group [4] is also aware of similar efforts in 
Cinder [1][2] and Glance [3].

If these are truly the same concepts being discussed across projects, it would 
be great to see consistency in the APIs and have the projects come together 
under a new guideline. I encourage the projects and people to propose such a 
guideline and for someone to step up and champion it. Seems like good fodder 
for a design session proposal at the upcoming summit.

Cheers,
Everett

[1] https://review.openstack.org/#/c/306930/
[2] https://review.openstack.org/#/c/350310/
[3] 
https://specs.openstack.org/openstack/glance-specs/specs/mitaka/approved/image-import/image-import-refactor.html#value-discovery
[4] http://specs.openstack.org/openstack/api-wg/


On Aug 16, 2016, at 3:16 AM, Sylvain Bauza wrote:



Le 15/08/2016 22:59, Andrew Laski a écrit :
On Mon, Aug 15, 2016, at 10:33 AM, Jay Pipes wrote:
On 08/15/2016 09:27 AM, Andrew Laski wrote:
Currently in Nova we're discussing adding a "capabilities" API to expose
to users what actions they're allowed to take, and having compute hosts
expose "capabilities" for use by the scheduler. As much fun as it would
be to have the same term mean two very different things in Nova, to
retain some semblance of sanity let's rename one or both of these
concepts.

An API "capability" is going to be an action, or URL, that a user is
allowed to use. So "boot an instance" or "resize this instance" are
capabilities from the API point of view. Whether or not a user has this
capability will be determined by looking at policy rules in place and
the capabilities of the host the instance is on. For instance an
upcoming volume multiattach feature may or may not be allowed for an
instance depending on host support and the version of nova-compute code
running on that host.

A host "capability" is a description of the hardware or software on the
host that determines whether or not that host can fulfill the needs of
an instance looking for a home. So SSD or x86 could be host
capabilities.
https://github.com/jaypipes/os-capabilities/blob/master/os_capabilities/const.py
has a list of some examples.

Some possible replacement terms that have been thrown out in discussions
are features, policies(already used), grants, faculties. But none of
those seemed to clearly fit one concept or the other, except policies.

Any thoughts on this hard problem?
I know, naming is damn hard, right? :)

After some thought, I think I've changed my mind on referring to the
adjectives as "capabilities" and actually think that the term
"capabilities" is better left for the policy-like things.

My vote is the following:

GET /capabilities <-- returns a set of *actions* or *abilities* that the
user is capable of performing

GET /traits <-- returns a set of *adjectives* or *attributes* that may
describe a provider of some resource
Traits sounds good to me.

Yeah, it wouldn't be dire, trait.

I can rename os-capabilities to os-traits, which would make Sean Mooney
happy I think and also clear up the terminology mismatch.

Thoughts?
-jay



[openstack-dev] [glance] Priority reviews for the upcoming n-3 release next week

2016-08-25 Thread Nikhil Komawar
Hi,


In today's weekly meeting, I've listed some of the priority reviews that
the reviewers, especially glance-cores, should focus on in the coming few
days for the Newton-3 release next week [1]. We will be evaluating which
reviews are likely to merge in the next few days and should have the
final list by EOD Monday 29th. With all the constraints
in mind, I am planning to propose the n-3 hash to the release team by
EOD Wednesday Aug 31st unless any critical issues come up [2]. So,
please make sure you keep your patches up to date and ready for review
if you need them in Newton.


Also, please stop proposing trivial fixes at this stage in the cycle as
such things put load on the gate and are detrimental to development work
in the community, not just Glance. Besides, having trivial fixes in
review queue distracts reviewers and presents an awkward view of the
review dashboard, again this is something that hampers productivity.


If you have WIP patches and are not planning to target them to Glance for
Newton or otherwise, please make sure you take some time to abandon
those patches. Cleaning up the queue yourself will help the glance team
a long way -- kindly don't just rely on the PTL, release liaison, or the
cores to do the cleaning for you.


[1]
http://eavesdrop.openstack.org/meetings/glance/2016/glance.2016-08-25-14.00.html
[2] http://releases.openstack.org/newton/schedule.html

-- 

Thanks,
Nikhil




[openstack-dev] [glance] Barcelona summit space requirements and session planning etherpad

2016-08-25 Thread Nikhil Komawar
Hi,


Just wanted to point out to those who haven't been to Glance meetings in
the past couple of weeks that we have to submit space requirements for the
Barcelona design summit early next week. I've listed the constraints
posed in front of us in the planning etherpad [1]. Please see the top
portion of this etherpad under "Layout Proposal" to either propose or
vote on the layout proposal options to help us collaboratively determine
the space needs for Glance. Currently there are 2 proposals, and if you
don't have any other in mind, please cast your vote on one of those.


I need the votes by EOD on Monday 29th Aug and will be sending our final
space requirement request first thing on Tuesday 30th.


On another note, if you want to start proposing sessions for the summit,
feel free to scroll to the bottom of the etherpad for the template and
the slots for the topics.


Let me know if you have any questions.


[1] https://etherpad.openstack.org/p/ocata-glance-summit-planning


-- 

Thanks,
Nikhil


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Interface in PEER2PEER live migration

2016-08-25 Thread Alberto Planas Dominguez
On Thu, 2016-08-25 at 11:41 -0400, Daniel P. Berrange wrote:
> 
> I think where the confusion is coming is that libvirt will work in
> two different ways with P2P migration. If the TUNNELLED flag is set,
> then the migration data will go over the Libvirtd <-> libvirtd
> connection, which is influenced by the live_migration_inbound_addr
> parameter.

Right.

> If the TUNNELLED flag is not set, the data goes QEMU <-> QEMU directly,
> and that needs the extra URI set.

Exactly.

> What we need to do is fix the Nova code so that when the TUNNELLED
> flag is *not* set, we also provide the extra URI, using the
> hostname/IP addr listed in live_migration_inbound_addr, falling back
> to the compute hostname if live_migration_inbound_addr is not set.

This will work, but I have the feeling that it gives
live_migration_inbound_addr two meanings, depending on the kind of
live migration configured.

I will adapt the patch to follow this path!

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham
Norton, HRB 21284 (AG Nürnberg)
Maxfeldstraße 5, 90409 Nürnberg, Germany

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Interface in PEER2PEER live migration

2016-08-25 Thread Daniel P. Berrange
On Thu, Aug 25, 2016 at 12:01:40PM +0200, Alberto Planas Dominguez wrote:
> On Wed, 2016-08-24 at 11:18 -0400, Daniel P. Berrange wrote:
> > On Wed, Aug 24, 2016 at 05:07:50PM +0200, Alberto Planas Dominguez
> > wrote:
> 
> Daniel, thanks for the fast reply!!
> 
> > Unfortunately it was closed as invalid, and the solution provided is
> > completely unrelated. The suggested solution is based on
> > `live_migration_inbound_addr`, which is related to the libvirtd URI,
> > not the QEMU one. I tested several times and yes, this solution is
> > not related to the problem.
> > 
> > The "live_migration_inbound_addr" configuration parameters was
> > intended
> > to affect both libvirt & QEMU traffic. If that is not working
> > correctly,
> > then we should be fixing that, nto adding yet another parameter.
> 
> The code in libvirt is very clear: if uri_in is NULL, it will ask the
> other side for its hostname. I checked the code in 1.2.18:
>
> https://github.com/libvirt/libvirt/blob/v1.2.18-maint/src/qemu/qemu_migration.c#L3601
>
> https://github.com/libvirt/libvirt/blob/v1.2.18-maint/src/qemu/qemu_migration.c#L3615
>
> The same logic is in master:
>
> https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu_migration.c#L4013
>
> But we can go back to 0.9.12:
>
> https://github.com/libvirt/libvirt/blob/v0.9.12-maint/src/qemu/qemu_migration.c#L1472
>
> Nova sets the migration_uri parameter to None, which means that uri_in
> is NULL.
>
> How can I affect the QEMU part? The code path, AIUI, is: if we do not
> set miguri (migrateToURI2) or migrate_uri (migrateToURI3), uri_in is
> NULL.
>
> I am not familiar with the libvirt code; please help me find how I can
> give this uri_in parameter a value different from the hostname of the
> other node, without setting the correct value in migrateToURI[23] on
> the Nova side.

I think the confusion comes from the fact that libvirt works in two
different ways with P2P migration. If the TUNNELLED flag is set, then the
migration data will go over the libvirtd <-> libvirtd connection, which is
influenced by the live_migration_inbound_addr parameter. If the TUNNELLED
flag is not set, the data goes QEMU <-> QEMU directly, and that needs the
extra URI set.

What we need to do is fix the Nova code so that when the TUNNELLED flag
is *not* set, we also provide the extra URI, using the hostname/IP addr
listed in live_migration_inbound_addr, falling back to the compute
hostname if live_migration_inbound_addr is not set.
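
For illustration, the selection logic could look roughly like this (a
minimal sketch, not the actual Nova change; the function name is made
up, and VIR_MIGRATE_TUNNELLED is hardcoded to libvirt's flag value so
the snippet stands alone):

  # Illustrative sketch only -- not the actual Nova patch.
  VIR_MIGRATE_TUNNELLED = 0x4  # libvirt's virDomainMigrateFlags value

  def pick_migrate_uri(flags, inbound_addr, compute_hostname):
      """Return the QEMU-level migrate URI, or None when tunnelled.

      Tunnelled migrations flow over the libvirtd <-> libvirtd
      connection, so no extra URI is needed. Non-tunnelled migrations
      go QEMU <-> QEMU, so build the URI from
      live_migration_inbound_addr, falling back to the compute
      hostname.
      """
      if flags & VIR_MIGRATE_TUNNELLED:
          return None
      return 'tcp://%s' % (inbound_addr or compute_hostname)

  # e.g. pick_migrate_uri(0, None, 'compute-2.example.org')
  #      -> 'tcp://compute-2.example.org'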


Regards,
Daniel
-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [freezer] Core team updates

2016-08-25 Thread Ramirez Garcia, Guillermo
+1 on adding Yang Yapeng (yangyapeng) as core.




From: Mathieu, Pierre-Arthur
Sent: Thursday, August 25, 2016 4:33:08 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [freezer] Core team updates

Hello,

I would like to propose some modifications regarding the Freezer core team.

First, the removal of two inactive members:
  - Fabrizio Fresco: Inactive
  - Eldar Nugaev: Switched company and is now focusing on other projects.
Thank you very much for your contributions.


Secondly, I would like to propose that we promote Yang Yapeng (yangyapeng) to core.
He has been a highly valuable developer for the past few months, mainly working 
on integration with Nova and Cinder.
His work can be found here: [1]
And his stackalytics profile here: [2]


If you agree with all these changes, please approve with a +1 answer or explain 
your opinion on any of these individual modifications.
If there are no objections, I plan on applying these tomorrow evening.

Thanks
- Pierre, Freezer PTL

[1]  https://review.openstack.org/#/q/owner:%22yapeng+Yang%22
[2] http://stackalytics.com/?release=all&metric=loc&user_id=yang-yapeng

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] Core team updates

2016-08-25 Thread Mathieu, Pierre-Arthur
Hello, 

I would like to propose some modifications regarding the Freezer core team. 

First, the removal of two inactive members:
  - Fabrizio Fresco: Inactive
  - Eldar Nugaev: Switched company and is now focusing on other projects.
Thank you very much for your contributions.


Secondly, I would like to propose that we promote Yang Yapeng (yangyapeng) to core.
He has been a highly valuable developer for the past few months, mainly working 
on integration with Nova and Cinder.
His work can be found here: [1]
And his stackalytics profile here: [2]


If you agree with all these changes, please approve with a +1 answer or explain 
your opinion on any of these individual modifications.
If there are no objections, I plan on applying these tomorrow evening.

Thanks
- Pierre, Freezer PTL

[1]  https://review.openstack.org/#/q/owner:%22yapeng+Yang%22
[2] http://stackalytics.com/?release=all&metric=loc&user_id=yang-yapeng

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] VM console for VMware instances

2016-08-25 Thread Andrew Laski


On Thu, Aug 25, 2016, at 09:50 AM, Radoslav Gerganov wrote:
> Hi,
> 
> If you want to use the MKS console for VMware instances, it's now
> possible with the nova-mksproxy[1].
> There is a devstack plugin, so simply add this in your local.conf:
> 
>   [[local|localrc]]
>   enable_plugin nova-mksproxy https://github.com/openstack/nova-mksproxy
> 
> the CLI command for getting a console URL is:
> 
>   $ nova get-mks-console <server>
> 
> This is the preferred console type for VMware instances because it
> doesn't require any configuration
> changes on the hypervisor (whereas VNC requires opening network ports).
> 
> Any comments/feedback is welcome.
> 
> -Rado
> 
> [1] https://github.com/openstack/nova-mksproxy

Is there a reason this has not been proposed to the Nova project, or
have I missed that? I looked for a proposal and did not see one.

I see that there's support in Nova and python-novaclient for this
feature, but the actual proxy is not in the Nova tree. In situations
like this, where there's in-tree code to support an out of tree feature,
we typically deprecate and remove that code unless there's a plan to
move all of the components into the project. Is there a plan to move
this proxy into Nova?

> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Gerrit downtime on Friday 2016-09-02 at 18:00 UTC

2016-08-25 Thread Elizabeth K. Joseph
Hi everyone,

On Friday, September 2nd, from approximately 18:00 through 22:00 UTC,
Gerrit will be unavailable while we complete project renames.

Currently, we plan on renaming the following projects:

openstack/smaug -> openstack/karbor

openstack/higgins -> openstack/zun

Existing reviews, project watches, etc, for these projects will all be
carried over.

This list is subject to change. If you need a rename, please be sure
to get your project-config change in soon so we can review it and add
it to 
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Upcoming_Project_Renames

We'll also be doing some cleanup unrelated to these two renames.

If you have any questions about the maintenance, please reply here or
contact us in #openstack-infra on freenode.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][cinder] Clone feature toggle not in clone tests

2016-08-25 Thread Erlon Cruz
Hi Jordan, Slade,

Currently the NFS driver supports neither cloning nor snapshots (which
are the base for implementing cloning). AFAIK, the NFS driver was in
Cinder before the minimum requirements were discussed and set, so it
just stayed there with the features it already supported.

There is currently this job,
'gate-tempest-dsvm-full-devstack-plugin-nfs-nv' [1], which by the way is
failing on the same test you mentioned though passing the snapshot tests
(not sure how the configuration manages that), and a work in progress
[2] to support the snapshot feature.

So, Jordan, I think it's OK to allow Tempest to skip these tests, given
that, at least for the NFS driver, Tempest isn't enforcing Cinder's
minimum feature requirements.

Erlon


[1] http://logs.openstack.org/86/147186/25/experimental/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/b149960/
[2] https://review.openstack.org/#/c/147186/
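
For reference, the kind of guard being proposed looks roughly like this
(a sketch only, assuming the existing volume_feature_enabled.clone
option in Tempest's config; the test class name is illustrative):

  from tempest.api.volume import base
  from tempest import config

  CONF = config.CONF

  class VolumesCloneTest(base.BaseVolumeTest):  # illustrative name

      @classmethod
      def skip_checks(cls):
          super(VolumesCloneTest, cls).skip_checks()
          # Skip every test in the class when the deployment's Cinder
          # backend (e.g. the NFS driver) cannot clone volumes.
          if not CONF.volume_feature_enabled.clone:
              raise cls.skipException("Cinder volume clones are disabled")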

On Wed, Aug 24, 2016 at 6:34 PM, Jordan Pittier wrote:

>
> On Wed, Aug 24, 2016 at 6:06 PM, Slade Baumann  wrote:
>
>> I am attempting to disable clone tests in tempest as they aren't
>> functioning in NFS. But the tests test_volumes_clone.py and
>> test_volumes_clone_negative.py don't have the "clone" feature
>> toggle in them. I thought it obvious that if clone is disabled
>> in tempest, the tests that simply clone should be disabled.
>>
>> So I put up a bug and fix for it, but have been talking with
>> Jordan Pittier and he suggested I come to the mailing list to
>> get this figured out.
>>
>> I'm not asking for reviews, unless you want to give them.
>> I'm simply asking if this is the right way to go about this
>> or if there is something else I need to do to get this into
>> Tempest.
>>
>> Here are the bug and fix:
>> https://bugs.launchpad.net/tempest/+bug/1615770
>> https://review.openstack.org/#/c/358813/
>>
>> I would appreciate any suggestion or direction in this problem.
>>
>> For extra reference, the clone toggle flag was added here:
>> https://bugs.launchpad.net/tempest/+bug/1488274
>>
>> Hi,
> Thanks for starting this thread. My point about this patch is: as "volume
> clone" is part of the core requirements [1] that every Cinder driver must
> support, I don't see a need for a feature flag. The feature flag already
> exists, but that doesn't mean we should encourage its usage.
>
> Now, if this really helps the NFS driver (although I don't know why we
> couldn't support clone with NFS)... I don't have a strong opinion on this
> patch.
>
> I -1ed the patch for consistency: I agree that there should be a minimum
> set of features expected from a Cinder driver.
>
> [1] http://docs.openstack.org/developer/cinder/devref/drivers.html#core-
> functionality
>
> Cheers,
> Jordan
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] Topics for discussions at summit

2016-08-25 Thread Vitaly Gridnev
Hello team,

let's start adding topics for discussions at Ocata summit at [0].

[0] https://etherpad.openstack.org/p/sahara-ocata-summit


-- 
Best Regards,
Vitaly Gridnev,
Project Technical Lead of OpenStack DataProcessing Program (Sahara)
Mirantis, Inc
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][massively distributed][architecture] Coordination between actions/WGs

2016-08-25 Thread Ed Leafe
On Aug 24, 2016, at 8:42 PM, joehuang  wrote:
> 
> Funny point of view. Let's look at the mission of OpenStack:
> 
> "to produce the ubiquitous Open Source Cloud Computing platform that enables
> building interoperable public and private clouds regardless of size, by being
> simple to implement and massively scalable while serving the cloud users'
> needs."
> 
> It mentioned that "regardless of size", and you also mentioned "cloud to me:
> lots of hardware consolidation".

If it isn’t part of a cloud architecture, then it isn’t part of OpenStack’s 
mission. The ‘size’ qualifier relates to everything from massive clouds like 
CERN and Walmart down to small private clouds. It doesn’t mean ‘any sort of 
computing platform’; the focus is clear that we are an “Open Source Cloud 
Computing platform”.


-- Ed Leafe

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-25 Thread James Slagle
On Wed, Aug 24, 2016 at 9:56 PM, Paul Belanger  wrote:
> I actually believe these problems highlight how large tripleo-ci has
> grown, and that it is in need of a refactor. While we won't solve this
> problem today, I do think tripleo-ci is too monolithic today. I believe
> there is some discussion on breaking jobs into different scenarios, but
> I haven't had a chance to read up on that.
>
> I'm hoping in Barcelona we can have a topic on CI pipelines and how
> better to optimize our runs.

I've added a couple of topics about CI to the planning etherpad; you
might like to add this there as well:
https://etherpad.openstack.org/p/ocata-tripleo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][ironic] ironic-lib 2.1.0 release (newton)

2016-08-25 Thread no-reply
We are stoked to announce the release of:

ironic-lib 2.1.0: Ironic common library

This release is part of the newton release series.

With package available at:

https://pypi.python.org/pypi/ironic-lib

For more details, please see below.

Changes in ironic-lib 2.0.0..2.1.0
--

bb90b10 Correct reraising of exception
7aff4e1 Add developer documentation on metrics
65fbc0d Use constraints for all the things
84f8c30 Enforce doc8, make it pass, + fix inaccuracies
4f58317 Add framework for doc building in ironic-lib
f4da9e9 Updated from global requirements
64451bc Updated from global requirements
7e926fd Support configdrive in iscsi deploy for whole disk images
7aac631 Add parse_root_device_hints to utils.py
64dc8b6 Updated from global requirements


Diffstat (except docs and test files)
-

etc/rootwrap.d/ironic-lib.filters   |   2 +
ironic_lib/disk_utils.py| 230 ++-
ironic_lib/exception.py |  25 +-
ironic_lib/metrics.py   |  73 ++---
ironic_lib/utils.py |  62 
ironic_lib/version.py   |  18 ++
requirements.txt|   4 +-
setup.cfg   |   9 +
test-requirements.txt   |   5 +
tools/tox_install.sh|  55 
tox.ini |  13 +-
15 files changed, 1229 insertions(+), 52 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 1159ddb..9b1b50b 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ oslo.concurrency>=3.8.0 # Apache-2.0
-oslo.config>=3.10.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
@@ -10 +10 @@ oslo.service>=1.10.0 # Apache-2.0
-oslo.utils>=3.14.0 # Apache-2.0
+oslo.utils>=3.16.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 6a7c98a..b6f0e36 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12,0 +13,5 @@ testtools>=1.4.0 # MIT
+
+# Doc requirements
+doc8 # Apache-2.0
+sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
+oslosphinx!=3.4.0,>=2.5.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] VM console for VMware instances

2016-08-25 Thread Radoslav Gerganov
Hi,

If you want to use the MKS console for VMware instances, it's now possible with 
the nova-mksproxy[1].
There is a devstack plugin, so simply add this in your local.conf:

  [[local|localrc]]
  enable_plugin nova-mksproxy https://github.com/openstack/nova-mksproxy

the CLI command for getting a console URL is:

  $ nova get-mks-console <server>

This is the preferred console type for VMware instances because it doesn't 
require any configuration
changes on the hypervisor (whereas VNC requires opening network ports).

Any comments/feedback is welcome.

-Rado

[1] https://github.com/openstack/nova-mksproxy

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] neutronclient check queue is broken

2016-08-25 Thread Assaf Muller
On Thu, Aug 25, 2016 at 6:58 AM, Ihar Hrachyshka  wrote:
> Akihiro Motoki  wrote:
>
>> In the neutronclient check queue,
>> gate-neutronclient-test-dsvm-functional is broken now [1].
>> Please avoid issuing 'recheck'.
>>
>> [1] https://bugs.launchpad.net/python-neutronclient/+bug/1616749
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> The proposed fix (removing the lbaasv1 tests) made me wonder why we don’t
> gate on neutron stable branches in the client master branch. Isn’t it a test
> matrix gap that could allow a new client to introduce a regression that
> would break interactions with older clouds?

Absolutely. Feel free to send a project-config patch :)

>
> I see that some clients (nova) validate stable server branches against
> master client patches. Shouldn’t we do the same?
>
> Ihar
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-25 Thread James Slagle
On Thu, Aug 25, 2016 at 5:40 AM, Derek Higgins  wrote:
> On 25 August 2016 at 02:56, Paul Belanger  wrote:
>> On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
>>> The latest recurring problem that is failing a lot of the nonha ssl
>>> jobs in tripleo-ci is:
>>>
>>> https://bugs.launchpad.net/tripleo/+bug/1616144
>>> tripleo-ci: nonha jobs failing with Unable to establish connection to
>>> https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1b1671ced2ff/stacks/overcloud/f9f6f712-8e89-4ea9-a34b-6084dc74b5c1
>>>
>>> This error happens while polling for events from the overcloud stack
>>> by tripleoclient.
>>>
>>> I can reproduce this error very easily locally by deploying with an
>>> ssl undercloud with 6GB ram and 2 vcpus. If I don't enable swap,
>>> something gets OOM killed. If I do enable swap, swap gets used (< 1GB)
>>> and then I hit this error almost every time.
>>>
>>> The stack keeps deploying but the client has died, so the job fails.
>>> My investigation so far has only pointed out that it's the swap
>>> allocation that is delaying things enough to cause the failure.
>>>
>>> We do not see this error in the ha job even though it deploys more
>>> nodes. As of now, my only suspect is that it's the overhead of the
>>> initial SSL connections causing the error.
>>>
>>> If I test with 6GB ram and 4 vcpus I can't reproduce the error,
>>> although much more swap is used due to the increased number of default
>>> workers for each API service.
>>>
>>> However, I suggest we just raise the undercloud specs in our jobs to
>>> 8GB ram and 4 vcpus. These seem reasonable to me because those are the
>>> default specs used by infra in all of their devstack single and
>>> multinode jobs spawned on all their other cloud providers. Our own
>>> multinode job for the undercloud/overcloud and undercloud only job are
>>> running on instances of these sizes.
>>>
>> Close, our current flavors are 8vCPU, 8GB RAM, 80GB HDD. I'd recommend doing
>> that for the undercloud just to be consistent.
>
> The HDs on most of the compute nodes are 200GB so we've been trying
> really hard[1] to keep the disk usage for each instance down so that
> we can fit as many instances onto each compute node as possible
> without being restricted by the HDs. We've also allowed nova to
> overcommit on storage by a factor of 3. The assumption is that all of
> the instances are short lived and most of them never fully exhaust
> the storage allocated to them. Even the ones that do (the undercloud
> being the one that does) hit peak at different times, so everything is
> tickety boo.
>
> I'd strongly encourage against using a flavor with a 80GB HDD, if we
> increase the disk space available to the undercloud to 80GB then we
> will eventually be using it in CI. And 3 undercloud on the same
> compute node will end up filling up the disk on that host.

I've gone ahead and made the changes to the undercloud flavor in rh1
to use 8GB RAM and 4 vCPUs. I left the disk at 40GB. I'd like to see us
use the same flavor specs as the default infra flavor, but going up to
8 vCPUs would require configuring fewer workers per API service, I
think. That's something we can iterate towards.

We should start seeing new instances coming online using the specs
from the updated undercloud flavor.

FWIW, I tested Giulio's python-tripleoclient patch in my environment
where I can reproduce the failure, and it did not help with this
specific issue, although I think the patch is still a step in the
right direction.


-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2016-08-25 Thread Assaf Muller
On Thu, Aug 25, 2016 at 7:35 AM, Gary Kotton  wrote:
> Hi,
> At the moment it is still not clear to me what the upgrade process from V1 to
> V2 is. The migration script https://review.openstack.org/#/c/289595/ has yet
> to be approved. Does this support all drivers or is it just the default
> reference implementation driver?

The migration script doesn't have a test, so we really have no idea if
it's going to work.

> Are there people still using V1?
> Thanks
> Gary
>
> On 8/25/16, 4:25 AM, "Doug Wiegley"  wrote:
>
>
> > On Mar 23, 2016, at 4:17 PM, Doug Wiegley 
>  wrote:
> >
> > Migration script has been submitted, v1 is not going anywhere from 
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> >
> > I’m thinking in this order:
> >
> > - remove jenkins jobs
> > - wait for heat to remove their jenkins jobs ([heat] added to this 
> thread, so they see this coming before the job breaks)
> > - remove q-lbaas from devstack, and any references to lbaas v1 in 
> devstack-gate or infra defaults.
> > - remove v1 code from neutron-lbaas
>
> FYI, all of the above have completed, and the final removal is in the 
> merge queue: https://review.openstack.org/#/c/286381/
>
> Mitaka will be the last stable branch with lbaas v1.
>
> Thanks,
> doug
>
> >
> > Since newton is now open for commits, this process is going to get 
> started.
> >
> > Thanks,
> > doug
> >
> >
> >
> >> On Mar 8, 2016, at 11:36 AM, Eichberger, German 
>  wrote:
> >>
> >> Yes, it’s Database only — though we changed the agent driver in the DB 
> from V1 to V2 — so if you bring up a V2 with that database it should 
> reschedule all your load balancers on the V2 agent driver.
> >>
> >> German
> >>
> >>
> >>
> >>
> >> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
> >>
> >>> So this looks like only a database migration, right?
> >>>
> >>> -Original Message-
> >>> From: Eichberger, German [mailto:german.eichber...@hpe.com]
> >>> Sent: Tuesday, March 08, 2016 12:28 AM
> >>> To: OpenStack Development Mailing List (not for usage questions)
> >>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are 
> we ready?
> >>>
> >>> Ok, for what it’s worth we have contributed our migration script: 
> https://review.openstack.org/#/c/289595/ — please look at this as a starting 
> point and feel free to fix potential problems…
> >>>
> >>> Thanks,
> >>> German
> >>>
> >>>
> >>>
> >>>
> >>> On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:
> >>>
>  As far as I recall, you can specify the VIP in creating the LB so 
> you will end up with same IPs.
> 
>  -Original Message-
>  From: Eichberger, German [mailto:german.eichber...@hpe.com]
>  Sent: Monday, March 07, 2016 8:30 PM
>  To: OpenStack Development Mailing List (not for usage questions)
>  Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are 
> we ready?
> 
>  Hi Sam,
> 
>  So if you have some 3rd party hardware you only need to change the
>  database (your steps 1-5) since the 3rd party hardware will just keep
>  load balancing…
> 
>  Now for Kevin’s case with the namespace driver:
>  You would need a 6th step to reschedule the loadbalancers with the 
> V2 namespace driver — which can be done.
> 
>  If we want to migrate to Octavia or (from one LB provider to 
> another) it might be better to use the following steps:
> 
>  1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools, 
> Health
>  Monitors , Members) into some JSON format file(s) 2. Delete LBaaS v1 
> 3.
>  Uninstall LBaaS v1 4. Install LBaaS v2 5. Transform the JSON format
>  file into some scripts which recreate the load balancers with your
>  provider of choice —
> 
>  6. Run those scripts
> 
>  The problem I see is that we will probably end up with different VIPs
>  so the end user would need to change their IPs…
> 
>  Thanks,
>  German
> 
> 
> 
>  On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
> 
> > As for a migration tool.
> > Due to model changes and deployment changes between LBaaS v1 and 
> LBaaS v2, I am in favor for the following process:
> >
> > 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
> > Health Monitors , Members) into some JSON format file(s) 2. Delete 
> LBaaS v1 3.
> > Uninstall LBaaS v1 4. Install LBaaS v2 5. Import the data from 1 
> back
> > over LBaaS v2 (need to allow moving from falvor1-->flavor2, need to
> > make room to some custom modification for mapping between v1 and v2
> >>

[openstack-dev] [release] Last day for releases for non-client libraries (Re: [release] Release countdown for week R-6, 22-26 Aug)

2016-08-25 Thread Davanum Srinivas
Folks,

Please see below... deadline is today!

"The last day for releases for non-client libraries will be 25 Aug.
File your release request in time to have the release done on the
25th."

Thanks,
-- Dims

On Thu, Aug 18, 2016 at 1:22 PM, Doug Hellmann  wrote:
> Focus
> -
>
> We're approaching feature freeze deadline so teams should be wrapping
> up feature development as we approach the final milestone 29 Aug -
> 2 Sept.
>
> General Notes
> -
>
> The upcoming third milestone marks the start of several freezes in
> our release cycle to let us shift our focus to bug fixes and generally
> hardening the release.
>
> The general feature freeze allows teams to wrap up Newton and start
> thinking about Ocata planning.
>
> We freeze releases of all libraries and changes to requirements
> between the third milestone and the final release to give downstream
> packagers time to vet the libraries. Only emergency bug fix updates
> are allowed during that period, not releases for FFEs.
>
> We start a soft string freeze at the milestone to give translators
> time to catch up with the work that has already been done this
> cycle. A hard string freeze will follow two weeks later at R-3.
>
> Release Actions
> ---
>
> The last day for releases for non-client libraries will be 25 Aug.
> File your release request in time to have the release done on the
> 25th.
>
> Review the members of your $project-release group in gerrit, based
> on the instructions Thierry sent on 15 Aug. You may not be able to
> merge patches during the release candidate period if the group
> membership is not set correctly.
>
> Important Dates
> ---
>
> Library release freeze date Aug 25.
>
> Newton 3 milestone, Sept 1.
>
> Newton release schedule: http://releases.openstack.org/newton/schedule.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] release reminder

2016-08-25 Thread Emilien Macchi
On Thu, Aug 25, 2016 at 8:55 AM, Steven Hardy  wrote:
> On Tue, Aug 23, 2016 at 11:17:16AM -0400, Emilien Macchi wrote:
>> This is a quick release update:
>>
>> - We're currently R-6, "Final release for non-client libraries".
>> - Next week will be R-5, "Feature freeze" and "newton-3 milestone".
>> We'll proceed to a TripleO b3 release. If you need FFE, please submit
>> it using ML.
>
> Thanks for the update Emilien - I've been out for a few days but am now
> back and will try to get things ready so that we can release newton-3 next
> week.
>
> Currently there are a lot of features still incomplete (some just need
> reviews):
>
> https://bugs.launchpad.net/tripleo/+milestone/newton-3
>
> I expect there will be some FFE's, and I'll follow up with a separate
> thread about it, but please can everyone prioritize reviewing those
> features so that we can close as many as possible for n-3.
>
>> - Official Newton release for TripleO will be Oct 17-21 week (2 weeks
>> trailing release).
>
> Yes, although note we may release earlier than 2 weeks if we're ready (main
> constraint here is we can't release before the puppet modules declare their
> final release).

FTR we'll do our best to release as soon as we can, hopefully on Oct 3rd.

>> If you need more information about Newton release management, please look at:
>> https://releases.openstack.org/newton/schedule.html
>>
>> Also, Doug is currently working on Ocata release schedule, please have a 
>> look:
>> https://review.openstack.org/#/c/357214/ (you'll notice the release
>> cadence is a bit different).
>>
>> Thanks,
>
> Thanks!
>
> Steve
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] collaboration request with vendors

2016-08-25 Thread Steven Hardy
On Wed, Aug 24, 2016 at 03:11:38PM -0400, Emilien Macchi wrote:
> TripleO does support multiple vendors for different type of backends.
> Here are some examples:
> Neutron networking: Cisco, Nuage, OpenContrail, Midonet, Plumgrid, Big Switch
> Cinder: Dell, Netapp, Ceph
> 
> TripleO developers are struggling to maintain the environment files
> that allow deploying those backends, because they are very hard to test:
> - not enough hardware
> - zero knowledge of how to deploy the actual backend system
> - no time to test all backends
> 
> Recently, we made some changes in TripleO CI that will help us to
> scale the way we test TripleO in the future.
> One of those changes is that we can now deploy TripleO using nodepool
> instances, like devstack jobs do.
>
> I wrote a prototype of a TripleO job scenario:
> https://review.openstack.org/#/c/360039/ which will allow us to have
> more CI jobs with fewer services installed on each, so we can save
> performance while increasing service coverage.
> I would like to re-use those bits to test our vendor backends.
> 
> Here's the proposal:
> - for vendor backends that can be deployed using TripleO itself
> (open-source backend systems like OpenContrail, Midonet, etc.): we
> could re-use the scenario approach by adding new scenarios for each
> backend.
> The jobs would only be triggered if we touch the backend's environment
> files in THT, the backend's puppet profiles in puppet-tripleo, or the
> backend's puppet class in puppet-neutron (all thanks to Zuul magic).

This sounds good; my only concern is how we handle things breaking when
something outside of TripleO changes (e.g. triage of bugs related to the
vendor backends).

If we can get some commitment that folks will show up to help with that,
then definitely +1 on doing this.

There are some additional complexities around images we'll need to consider
too, as some (all?) of these backends require customization of the
overcloud images (e.g adding some additional pieces related to the enabled
vendor backend).

> - for vendor backends that can't be deployed using TripleO itself
> (not implemented in the services and/or not open-source):
> like most of you probably did for devstack jobs in neutron/cinder's
> gates, work with us to implement CI jobs that would deploy TripleO
> with your backend. I don't have the exact technical solution right
> now, but at least I would like to know who would be interested in this
> collaboration.

This also sounds good, but it's unclear to me atm if we have any folks
willing to step up and do this work. If people with bandwidth to do this
can be identified then it would be good to investigate.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-25 Thread Steven Hardy
On Wed, Aug 17, 2016 at 07:20:59AM -0400, James Slagle wrote:
> On Wed, Aug 17, 2016 at 4:04 AM, Dmitry Tantsur  wrote:
> > However, the current gate system allows jobs to be run based on the files
> > affected. So we can also run a scenario covering ironic in the THT
> > check/gate if puppet/services/*ironic* is affected, but not in the other
> > cases. This won't cover all potential failures, but it would be of great
> > help anyway. It should also run in the experimental pipeline, so that it
> > can be triggered on any patch.
> >
> > This is in addition to periodic jobs you're proposing, not replacing them.
> > WDYT?
> 
> Using the files affected to trigger a scenario test that uses the
> affected composable service sounds like a really good idea to me.

+1 I think this sounds like a really good idea.

Now that we're doing almost all per-service configuration in the respective
templates and puppet profiles, this should be much easier to implement, so
definitely +1 on giving it a go.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Barcelona Design Summit space needs

2016-08-25 Thread Steven Hardy
On Tue, Aug 23, 2016 at 09:04:15AM -0400, Emilien Macchi wrote:
> Team,
> 
> Thierry sent an email to all PTLs about space needs for next Summit.

Thanks for raising this while I was out Emilien - indeed we need to provide
some feedback (before the end of August, ideally sooner) about our needs
for summit space.

> Here's what we can have:
> 
> * Fishbowl sessions (from Wednesday 4pm to Friday noon)
> Our traditional largish rooms organized in fishbowl style, with
> advertised session content on the summit schedule for increased external
> participation. Ideal for when wider feedback is essential.
> 
> * Workroom sessions (from Wednesday 4pm to Friday noon)
> Smaller rooms organized in boardroom style, with topic buried in the
> session description, in an effort to limit attendance and not overcrowd
> the room. Ideal to get work done and prioritize work in small teams.
> 
> * Contributors meetup (Friday afternoon)
> Half-day session on Friday afternoon to get into the Ocata action while
> decisions and plans are still hot, or to finish discussions started
> during the week, whatever works for you.
> 
> Note:
> - Ops summit on Tuesday morning until 4pm
> - Cross-project workshops from Tuesday 4pm to Wednesday 4pm
> 
> As a reminder, here's what we had for Austin:
> Fishbowl slots (Wed-Thu): 2
> Workroom slots (Tue-Thu): 3
> Contributors meetup (Fri): 1/2

I think this allocation worked well in Austin, so I'd suggest we ask for
the same again.

I know Thierry indicated we should request less, but we are asking for far
fewer sessions than many other projects, so I'd like to aim for the same
allocation and see if that can be accommodated.

What do folks think, if I can get some acks on this plan I will go ahead
and provide the feedback to Thierry.

> Notes from Thierry:
> "We'll have less slots compared to Austin, and new teams to accommodate.
> So as a rule of thumb, you should probably require *less* slots than in
> Austin. It's also worth noting that the Ocata cycle will be a short
> cycle (likely only 15 weeks between the design summit and feature
> freeze, including thanksgiving and other end-of-year holidays), so there
> is no need to plan too much work."
> 
> I created an etherpad for topic ideas, feel free to start thinking about it:
> https://etherpad.openstack.org/p/ocata-tripleo

Thanks, I added a couple of topics, will add some more later.

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] entity graph layout

2016-08-25 Thread Afek, Ifat (Nokia - IL)
Hi Yujun,

Try setting COMPRESS_ENABLED = False in local_settings.py (add it if it doesn’t 
exist).
Let us know if it works; it sounds like a good solution.
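
A minimal sketch of the change, assuming a standard Horizon install
(the exact path may differ in your deployment):

  # openstack_dashboard/local/local_settings.py (path assumed)
  # Disabling django-compressor serves the original, uncompressed
  # JavaScript, so the force-layout parameters can be tweaked in the
  # browser's debugger.
  COMPRESS_ENABLED = False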

Thanks,
Ifat.


From: Yujun Zhang
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 25 August 2016 at 10:52
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [vitrage] entity graph layout

After the first investigation, I think Cytoscape might be too heavy. There 
would be a lot of refactoring work to migrate all functions to the new 
library, so I will suspend this proposal for now.

However, it seems the layout could be improved by adjusting the parameters 
applied to the force layout, e.g. charge, gravity, etc. When a larger charge 
is assigned to a cluster, it will push the other elements away to avoid 
overlapping.

But currently it is difficult to tune such parameters since the scripts are 
compressed. Any idea how to speed up the debug process?

--
Yujun

On Tue, Aug 23, 2016 at 9:29 AM Yujun Zhang wrote:
I'm considering to use Cytoscape.js [1] to improve the layout for entity graph 
view.

Cytoscape.js is a graph theory (a.k.a. network) library for analysis and 
visualisation that is under active maintenance (latest release 2.7.8 on Aug 18, 
2016) [2], while the current library d3-dagre [3] is declared as not being 
actively developed or maintained.

Meanwhile, I'm building a proof of concept for visualizing the entity graph 
with Cytoscape.

Could anybody give a list of the required features for this view? Any comments 
are welcome.

[1] http://js.cytoscape.org/
[2] https://github.com/cytoscape/cytoscape.js
[3] https://github.com/cpettitt/dagre-d3


On Mon, Aug 8, 2016 at 2:34 PM Afek, Ifat (Nokia - IL) wrote:
There is no such blueprint at the moment.
You are more than welcome to add one, in case you have some ideas for 
improvements.

Ifat.

From: Yujun Zhang
Date: Monday, 8 August 2016 at 09:21


Great, it works.
But it would be better if we could improve the default layout. Is there any 
blueprint in progress?
--
Yujun

On Sun, Aug 7, 2016 at 1:09 PM Afek, Ifat (Nokia - IL) wrote:
Hi,

It is possible to adjust the layout of the graph. You can double-click on a 
vertex and it will remain pinned to its place. You can then move the pinned 
vertices around to adjust the graph layout.

Hope it helped, and let us know if you need additional help with your demo.

Best Regards,
Ifat.


From: Yujun Zhang
Date: Friday, 5 August 2016 at 09:32
Hi, all,

I'm building a demo of vitrage. The dynamic entity graph looks interesting.

But when more entities are added, things become crowded and the links cross 
over each other. Dragging the items does not help much.

Is it possible to adjust the layout so I can get a more regular/stable tree 
view of the entities?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] release reminder

2016-08-25 Thread Steven Hardy
On Tue, Aug 23, 2016 at 11:17:16AM -0400, Emilien Macchi wrote:
> This is a quick release update:
> 
> - We're currently R-6, "Final release for non-client libraries".
> - Next week will be R-5, "Feature freeze" and "newton-3 milestone".
> We'll proceed to a TripleO b3 release. If you need FFE, please submit
> it using ML.

Thanks for the update Emilien - I've been out for a few days but am now
back and will try to get things ready so that we can release newton-3 next
week.

Currently there are a lot of features still incomplete (some just need
reviews):

https://bugs.launchpad.net/tripleo/+milestone/newton-3

I expect there will be some FFE's, and I'll follow up with a separate
thread about it, but please can everyone prioritize reviewing those
features so that we can close as many as possible for n-3.

> - Official Newton release for TripleO will be Oct 17-21 week (2 weeks
> trailing release).

Yes, although note we may release earlier than 2 weeks if we're ready (main
constraint here is we can't release before the puppet modules declare their
final release).

> If you need more information about Newton release management, please look at:
> https://releases.openstack.org/newton/schedule.html
> 
> Also, Doug is currently working on Ocata release schedule, please have a look:
> https://review.openstack.org/#/c/357214/ (you'll notice the release
> cadence is a bit different).
> 
> Thanks,

Thanks!

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.log 3.16.0 release (newton)

2016-08-25 Thread no-reply
We are overjoyed to announce the release of:

oslo.log 3.16.0: oslo.log library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.log

With package available at:

https://pypi.python.org/pypi/oslo.log

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.log

For more details, please see below.

Changes in oslo.log 3.15.0..3.16.0
--

573e049 Updated from global requirements


Diffstat (except docs and test files)
-

requirements.txt  | 2 +-
test-requirements.txt | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index e6a740d..7c598d7 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context>=2.4.0 # Apache-2.0
+oslo.context>=2.6.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index 673f993..d86ad0e 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -25 +25 @@ reno>=1.8.0 # Apache2
-bandit>=1.0.1 # Apache-2.0
+bandit>=1.1.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.versionedobjects 1.17.0 release (newton)

2016-08-25 Thread no-reply
We are gleeful to announce the release of:

oslo.versionedobjects 1.17.0: Oslo Versioned Objects library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.versionedobjects

With package available at:

https://pypi.python.org/pypi/oslo.versionedobjects

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.versionedobjects

For more details, please see below.

Changes in oslo.versionedobjects 1.16.0..1.17.0
---

45175c2 Add get_schema for IPV6Address FieldType class
39a057b Fix remotable object change tracking
86f4b03 JSON schema get_schema implementation for more complex fields
eb4d4a5 Adds new fields and field types


Diffstat (except docs and test files)
-

oslo_versionedobjects/base.py   |  12 ++-
oslo_versionedobjects/fields.py | 139 ++-
5 files changed, 314 insertions(+), 9 deletions(-)




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.config 3.17.0 release (newton)

2016-08-25 Thread no-reply
We are jazzed to announce the release of:

oslo.config 3.17.0: Oslo Configuration API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.config

With package available at:

https://pypi.python.org/pypi/oslo.config

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.config

For more details, please see below.

Changes in oslo.config 3.16.0..3.17.0
-

8db0b7c Updated from global requirements


Diffstat (except docs and test files)
-

test-requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index a11d8f2..7c3334a 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -31 +31 @@ mock>=2.0 # BSD
-bandit>=1.0.1 # Apache-2.0
+bandit>=1.1.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslotest 2.10.0 release (newton)

2016-08-25 Thread no-reply
We are eager to announce the release of:

oslotest 2.10.0: Oslo test framework

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslotest

With package available at:

https://pypi.python.org/pypi/oslotest

Please report issues through launchpad:

http://bugs.launchpad.net/oslotest

For more details, please see below.

Changes in oslotest 2.9.0..2.10.0
-

0609571 Updated from global requirements
f378674 A DisableModules fixture that removes modules from path


Diffstat (except docs and test files)
-

oslotest/modules.py | 50 +
requirements.txt|  2 +-
3 files changed, 79 insertions(+), 1 deletion(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 53b93d9..7e76ed4 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13 +13 @@ mox3>=0.7.0 # Apache-2.0
-os-client-config!=1.19.0,>=1.13.1 # Apache-2.0
+os-client-config!=1.19.0,!=1.19.1,!=1.20.0,>=1.13.1 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.service 1.16.0 release (newton)

2016-08-25 Thread no-reply
We are pumped to announce the release of:

oslo.service 1.16.0: oslo.service library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.service

With package available at:

https://pypi.python.org/pypi/oslo.service

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.service

For more details, please see below.

Changes in oslo.service 1.15.0..1.16.0
--

9df6088 Updated from global requirements


Diffstat (except docs and test files)
-

test-requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index de31b9a..aa6de5e 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -18 +18 @@ coverage>=3.6 # Apache-2.0
-bandit>=1.0.1 # Apache-2.0
+bandit>=1.1.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.privsep 1.13.0 release (newton)

2016-08-25 Thread no-reply
We are grateful to announce the release of:

oslo.privsep 1.13.0: OpenStack library for privilege separation

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.privsep

With package available at:

https://pypi.python.org/pypi/oslo.privsep

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.privsep

For more details, please see below.

1.13.0
^^

Other Notes

* Switch to reno for managing release notes.

Changes in oslo.privsep 1.12.0..1.13.0
--

a029855 Updated from global requirements
e3f0550 Use default value for undefined caps in fmt_caps
f0370ac Add reno for release notes management


Diffstat (except docs and test files)
-

.gitignore|   3 +
oslo_privsep/daemon.py|   6 +-
oslo_privsep/version.py   |  18 ++
releasenotes/notes/add_reno-3b4ae0789e9c45b4.yaml |   3 +
releasenotes/source/_static/.placeholder  |   0
releasenotes/source/_templates/.placeholder   |   0
releasenotes/source/conf.py   | 273 ++
releasenotes/source/index.rst |   8 +
releasenotes/source/unreleased.rst|   5 +
requirements.txt  |   2 +-
test-requirements.txt |   1 +
tox.ini   |   3 +
12 files changed, 319 insertions(+), 3 deletions(-)


Requirements updates


diff --git a/requirements.txt b/requirements.txt
index 34304cd..eb977e3 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -7 +7 @@ oslo.i18n>=2.1.0 # Apache-2.0
-oslo.config>=3.12.0 # Apache-2.0
+oslo.config>=3.14.0 # Apache-2.0
diff --git a/test-requirements.txt b/test-requirements.txt
index b3deaba..c79d4ef 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -12,0 +13 @@ sphinx!=1.3b1,<1.3,>=1.2.1 # BSD
+reno>=1.8.0 # Apache2



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.vmware 2.14.0 release (newton)

2016-08-25 Thread no-reply
We are chuffed to announce the release of:

oslo.vmware 2.14.0: Oslo VMware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.vmware

With package available at:

https://pypi.python.org/pypi/oslo.vmware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.vmware

For more details, please see below.

Changes in oslo.vmware 2.13.0..2.14.0
-

a2494e5 Updated from global requirements
9c998c3 Fix TypeError:six.StringIO(resp.content) must be str or None, not bytes.


Diffstat (except docs and test files)
-

oslo_vmware/service.py   | 2 +-
test-requirements.txt| 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)


Requirements updates


diff --git a/test-requirements.txt b/test-requirements.txt
index e9eac53..bfa1001 100644
--- a/test-requirements.txt
+++ b/test-requirements.txt
@@ -26 +26 @@ reno>=1.8.0 # Apache2
-bandit>=1.0.1 # Apache-2.0
+bandit>=1.1.0 # Apache-2.0



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [new][oslo] oslo.messaging 5.10.0 release (newton)

2016-08-25 Thread no-reply
We are mirthful to announce the release of:

oslo.messaging 5.10.0: Oslo Messaging API

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.messaging

With package available at:

https://pypi.python.org/pypi/oslo.messaging

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

For more details, please see below.

Changes in oslo.messaging 5.9.0..5.10.0
---

86df23d Updated from global requirements
59b3f4f [zmq] Host name and target in socket identity
83a07b1 [zmq] Make zmq_immediate configurable


Diffstat (except docs and test files)
-------------------------------------

.../client/publishers/dealer/zmq_dealer_publisher_proxy.py   | 10 +-
.../_drivers/zmq_driver/client/zmq_sockets_manager.py        | 12
oslo_messaging/_drivers/zmq_driver/proxy/zmq_queue_proxy.py  |  6 --
.../zmq_driver/server/consumers/zmq_dealer_consumer.py       | 12 +++-
.../_drivers/zmq_driver/server/consumers/zmq_sub_consumer.py |  3 ++-
oslo_messaging/_drivers/zmq_driver/zmq_address.py            |  5 +++--
oslo_messaging/_drivers/zmq_driver/zmq_options.py            |  9 -
oslo_messaging/_drivers/zmq_driver/zmq_socket.py             |  7 ---
requirements.txt                                             |  2 +-
9 files changed, 50 insertions(+), 16 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 7397aaf..3be9ded 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -9 +9 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context>=2.6.0 # Apache-2.0
+oslo.context>=2.9.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.db 4.13.0 release (newton)

2016-08-25 Thread no-reply
We are joyful to announce the release of:

oslo.db 4.13.0: Oslo Database library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.db

With package available at:

https://pypi.python.org/pypi/oslo.db

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.db

For more details, please see below.

Changes in oslo.db 4.12.0..4.13.0
---------------------------------

87fb9cc Updated from global requirements


Diffstat (except docs and test files)
-------------------------------------

requirements.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index 74c885d..e8dcc28 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -10 +10 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context>=2.6.0 # Apache-2.0
+oslo.context>=2.9.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.middleware 3.19.0 release (newton)

2016-08-25 Thread no-reply
We are exuberant to announce the release of:

oslo.middleware 3.19.0: Oslo Middleware library

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.middleware

With package available at:

https://pypi.python.org/pypi/oslo.middleware

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.middleware

For more details, please see below.

Changes in oslo.middleware 3.18.0..3.19.0
-----------------------------------------

c18ea68 Remove pot files
c29637e Fix inline docstring to use default path (not /status)
df740dc Updated from global requirements

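For context on c29637e: the middleware's default mount point is
/healthcheck, not /status, and the docstring now says so. A rough
plain-WSGI sketch of the documented behaviour (an illustration, not
oslo.middleware's actual implementation):

    def healthcheck_filter(app, path='/healthcheck'):
        # Answer health probes at the configured path; pass everything
        # else through to the wrapped application.
        def middleware(environ, start_response):
            if environ.get('PATH_INFO') == path:
                start_response('200 OK', [('Content-Type', 'text/plain')])
                return [b'OK']
            return app(environ, start_response)
        return middleware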

Diffstat (except docs and test files)
-------------------------------------

oslo_middleware/healthcheck/__init__.py    |  4 ++--
.../locale/oslo_middleware-log-error.pot   | 25 --
oslo_middleware/locale/oslo_middleware.pot | 25 --
requirements.txt                           |  2 +-
4 files changed, 3 insertions(+), 53 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index a51ac77..381e433 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8 +8 @@ oslo.config>=3.14.0 # Apache-2.0
-oslo.context>=2.6.0 # Apache-2.0
+oslo.context>=2.9.0 # Apache-2.0





[openstack-dev] [new][oslo] oslo.cache 1.14.0 release (newton)

2016-08-25 Thread no-reply
We are frolicsome to announce the release of:

oslo.cache 1.14.0: Cache storage for OpenStack projects.

This release is part of the newton release series.

With source available at:

http://git.openstack.org/cgit/openstack/oslo.cache

With package available at:

https://pypi.python.org/pypi/oslo.cache

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.cache

For more details, please see below.

Changes in oslo.cache 1.13.0..1.14.0
------------------------------------

fb1c71d Correct help text for backend option
1129b52 Fix OpenStack capitalization


Diffstat (except docs and test files)
-------------------------------------

oslo_cache/_opts.py | 22 ++
setup.cfg           |  2 +-
3 files changed, 12 insertions(+), 14 deletions(-)






[openstack-dev] [heat-translator] [tosca-parser] Reminder - no IRC meeting this week

2016-08-25 Thread Sahdev P Zala
Hello team,

FYI, as we discussed in our last meeting, there will be no meeting today,
Thursday August 25th, as I am travelling. Please take any discussion to IRC
or email.

Thanks! 

Regards, 
Sahdev Zala




Re: [openstack-dev] [TripleO] TripleO Deep dive sessions for CI topics.

2016-08-25 Thread Carlos Camacho Gonzalez
Yeah, I think debugging upstream submissions (on CI) is the same
process as debugging those errors locally; in that case, the only
missing piece may be pointing out where to look when reading CI logs.

What I think will be a bigger challenge is teaching other people to
understand and master how TripleO CI works, including how to define
new jobs, where to look when a package conflict breaks the build, or
how to detect infra issues, among other topics. (This won't be for
debugging submissions, but for debugging CI failures.)

I'm not sure a deep historical understanding of how CI was built and
how it actually works is needed, but I think this will make people
love our CI a little bit more.

I'll add more items to the Etherpad; let's see how many people
are interested.

Cheers,
Carlos.


On Wed, Aug 24, 2016 at 8:24 PM, James Slagle 
wrote:

> On Wed, Aug 24, 2016 at 12:17 PM, Carlos Camacho Gonzalez
>  wrote:
> > Hello guys!
> >
> > I would like to ask you a question related to future TripleO deep
> > dive sessions.
> >
> > What about having a specific series for CI? I read some people kind
> > of “complaining” on IRC when CI does not work properly, and assuming
> > that taking care of CI is part of everyone's work, let's try to have
> > more eyes on CI (including me).
> >
> > I believe if more people are actually able to debug “real” CI issues
> > we will be able to decrease the amount of work that these tasks take
> > from the team.
> >
> > I added to https://etherpad.openstack.org/p/tripleo-deep-dive-topics
> > a section with some topics, feel free to add/edit items, and let's
> > discuss it at the weekly meeting to see if in the mid-term we can
> > have some introduction to CI.
>
> I think this is a great idea. What I'd like to know before planning
> something like this is what specific things do people need help on
> when it comes to debugging failed jobs. How have folks tried to debug
> jobs that have failed and gotten stuck?
>
> Most of the time it is looking at logs and trying to reproduce
> locally. I'd be happy to show that, but I feel like we've already
> covered that to a large degree. So, I'd like to dig a little more into
> specific ways people get stuck with failures and then we can directly
> address those.
>
> Ideally, a root cause of a failure could always be found, but that is
> just not going to be the case given other constraints. It often comes
> down to what one is able to reproduce locally, and how to mitigate the
> issues as best we can (see email I just sent for an example).
>
> Let me know or add the specifics to the etherpad and I'll pull
> something together if there are no other volunteers :).
>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


[openstack-dev] [new][documentation] os-api-ref 1.0.0 release

2016-08-25 Thread no-reply
We are glowing to announce the release of:

os-api-ref 1.0.0: Sphinx Extensions to support API reference sites in
OpenStack

With source available at:

http://git.openstack.org/cgit/openstack/os-api-ref

For more details, please see below.

Changes in os-api-ref 0.4.0..1.0.0
----------------------------------

80480aa Update docs for openstackdocstheme
916db5d openstackdocstheme integration
b100d15 Change Layout of Path + Sub Title

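For projects consuming this release, the extension and theme are wired
together in the api-ref's Sphinx conf.py. A minimal sketch, assuming the
1.x-era setup where get_html_theme_path() locates the theme (check your
openstackdocstheme version):

    import openstackdocstheme

    extensions = ['os_api_ref']

    html_theme = 'openstackdocstheme'
    html_theme_path = [openstackdocstheme.get_html_theme_path()]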

Diffstat (except docs and test files)
-------------------------------------

README.rst                          |  2 +-
os_api_ref/__init__.py              | 38 +
os_api_ref/assets/api-site.css      | 76 +++--
os_api_ref/assets/bootstrap.min.css |  5 --
os_api_ref/assets/bootstrap.min.js  |  6 --
requirements.txt                    |  1 +
12 files changed, 104 insertions(+), 60 deletions(-)


Requirements updates
--------------------

diff --git a/requirements.txt b/requirements.txt
index db3eed3..e7e1e2c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -8,0 +9 @@ sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
+openstackdocstheme>=1.4.0  # Apache-2.0





Re: [openstack-dev] [Neutron][LBaaS][heat] Removing LBaaS v1 - are we ready?

2016-08-25 Thread Gary Kotton
Hi,
At the moment the upgrade process from V1 to V2 is still not clear to me.
The migration script https://review.openstack.org/#/c/289595/ has yet to be
approved. Does it support all drivers, or just the default reference
implementation driver?
Are there people still using V1?
Thanks
Gary

On 8/25/16, 4:25 AM, "Doug Wiegley"  wrote:


> On Mar 23, 2016, at 4:17 PM, Doug Wiegley  wrote:
> 
> Migration script has been submitted, v1 is not going anywhere from
> stable/liberty or stable/mitaka, so it’s about to disappear from master.
> 
> I’m thinking in this order:
> 
> - remove jenkins jobs
> - wait for heat to remove their jenkins jobs ([heat] added to this
>   thread, so they see this coming before the job breaks)
> - remove q-lbaas from devstack, and any references to lbaas v1 in
>   devstack-gate or infra defaults.
> - remove v1 code from neutron-lbaas

FYI, all of the above have completed, and the final removal is in the merge 
queue: https://review.openstack.org/#/c/286381/

Mitaka will be the last stable branch with lbaas v1.

Thanks,
doug

> 
> Since newton is now open for commits, this process is going to get
> started.
> 
> Thanks,
> doug
> 
> 
> 
>> On Mar 8, 2016, at 11:36 AM, Eichberger, German 
 wrote:
>> 
>> Yes, it’s Database only — though we changed the agent driver in the DB
>> from V1 to V2 — so if you bring up a V2 with that database it should
>> reschedule all your load balancers on the V2 agent driver.
>> 
>> German
>> 
>> 
>> 
>> 
>> On 3/8/16, 3:13 AM, "Samuel Bercovici"  wrote:
>> 
>>> So this looks like only a database migration, right?
>>> 
>>> -Original Message-
>>> From: Eichberger, German [mailto:german.eichber...@hpe.com] 
>>> Sent: Tuesday, March 08, 2016 12:28 AM
>>> To: OpenStack Development Mailing List (not for usage questions)
>>> Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are
>>> we ready?
>>> 
>>> Ok, for what it’s worth we have contributed our migration script:
>>> https://review.openstack.org/#/c/289595/ — please look at this as a
>>> starting point and feel free to fix potential problems…
>>> 
>>> Thanks,
>>> German
>>> 
>>> 
>>> 
>>> 
>>> On 3/7/16, 11:00 AM, "Samuel Bercovici"  wrote:
>>> 
 As far as I recall, you can specify the VIP in creating the LB so you 
will end up with same IPs.
 
 -Original Message-
 From: Eichberger, German [mailto:german.eichber...@hpe.com]
 Sent: Monday, March 07, 2016 8:30 PM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [Neutron][LBaaS] Removing LBaaS v1 - are
 we ready?
 
 Hi Sam,
 
 So if you have some 3rd party hardware you only need to change the 
 database (your steps 1-5) since the 3rd party hardware will just keep 
 load balancing…
 
 Now for Kevin’s case with the namespace driver:
 You would need a 6th step to reschedule the loadbalancers with the V2 
namespace driver — which can be done.
 
 If we want to migrate to Octavia (or from one LB provider to another)
 it might be better to use the following steps:
 
 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
    Health Monitors, Members) into some JSON format file(s)
 2. Delete LBaaS v1
 3. Uninstall LBaaS v1
 4. Install LBaaS v2
 5. Transform the JSON format file into some scripts which recreate
    the load balancers with your provider of choice
 6. Run those scripts
 
 The problem I see is that we will probably end up with different VIPs 
 so the end user would need to change their IPs…
 
 Thanks,
 German
 
 
 
 On 3/6/16, 5:35 AM, "Samuel Bercovici"  wrote:
 
> As for a migration tool.
> Due to model changes and deployment changes between LBaaS v1 and
> LBaaS v2, I am in favor of the following process:
> 
> 1. Download LBaaS v1 information (Tenants, Flavors, VIPs, Pools,
>    Health Monitors, Members) into some JSON format file(s)
> 2. Delete LBaaS v1
> 3. Uninstall LBaaS v1
> 4. Install LBaaS v2
> 5. Import the data from 1 back over LBaaS v2 (need to allow moving
>    from flavor1-->flavor2, need to make room for some custom
>    modification for mapping between v1 and v2 models)
> 
> What do you think?
> 
> -Sam.
> 
> 
> 
> 
> -Original Message-
> From: Fox, Kevin M [mailto:kevin@pnnl.gov]
> Sent: Friday, March 04, 2016 2:06 AM
> To: OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [neutron] neutronclient check queue is broken

2016-08-25 Thread Henry Gessau
Akihiro Motoki  wrote:
> In the neutronclient check queue,
> gate-neutronclient-test-dsvm-functional is broken now [1].
> Please avoid issuing 'recheck'.
> 
> [1] https://bugs.launchpad.net/python-neutronclient/+bug/1616749

The fix [2] has merged. Carry on.

[2] https://review.openstack.org/360291




Re: [openstack-dev] [neutron] neutronclient check queue is broken

2016-08-25 Thread Ihar Hrachyshka

Akihiro Motoki  wrote:


In the neutronclient check queue,
gate-neutronclient-test-dsvm-functional is broken now [1].
Please avoid issuing 'recheck'.

[1] https://bugs.launchpad.net/python-neutronclient/+bug/1616749



The proposed fix (removing tests for lbaasv1) made me wonder why we don’t
gate on neutron stable branches in the client master branch. Isn’t it a test
matrix gap that could allow a new client to introduce a regression that
would break interactions with older clouds?


I see that some clients (nova) validate stable server branches against  
master client patches. Shouldn’t we do the same?


Ihar



Re: [openstack-dev] [TripleO] New bug tagging policy

2016-08-25 Thread Swapnil Kulkarni
On Thu, Aug 25, 2016 at 4:00 PM, Julie Pichon  wrote:
> Hi folks,
>
> The bug tagging proposal has merged, behold the new policy:
>
> http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html
>
> TL;DR The TripleO Launchpad tracker encompasses a lot of sub-projects,
> let's use a consistent list of Launchpad tags where they make sense in
> order to help understand which area(s) are affected. The tags get
> autocompleted by Launchpad (or will be soon).
>
>
> There is one remaining action to create the missing tags: I don't have
> bug wrangling permissions on the TripleO project so, if someone with
> the appropriate permissions could update the list [1] to match the
> policy I would appreciate it. Should I be deemed trustworthy enough
> I'm just as happy to do it myself and help out with the occasional
> bout of triaging as well.
>
> Thanks,
>
> Julie
>
> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Done!



[openstack-dev] [TripleO] New bug tagging policy

2016-08-25 Thread Julie Pichon
Hi folks,

The bug tagging proposal has merged, behold the new policy:

http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html

TL;DR The TripleO Launchpad tracker encompasses a lot of sub-projects,
let's use a consistent list of Launchpad tags where they make sense in
order to help understand which area(s) are affected. The tags get
autocompleted by Launchpad (or will be soon).


There is one remaining action to create the missing tags: I don't have
bug wrangling permissions on the TripleO project so, if someone with
the appropriate permissions could update the list [1] to match the
policy I would appreciate it. Should I be deemed trustworthy enough
I'm just as happy to do it myself and help out with the occasional
bout of triaging as well.

Thanks,

Julie

[1] https://bugs.launchpad.net/tripleo/+manage-official-tags



Re: [openstack-dev] [nova] Creating VM error: Insufficient compute resources

2016-08-25 Thread Fawaz Mohammed
Have you enabled hugepages at the host level?
Do you have enough vm.nr_hugepages?
As per your requirements, you need a host with 512 hugepages (1 GB of RAM
in 2 MB pages).
Check your host's /etc/sysctl.conf file and see the vm.nr_hugepages value.
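
A quick way to read the same values from the host (standard Linux /proc
paths):

    # Host-wide hugepage pool, as set via vm.nr_hugepages.
    with open('/proc/sys/vm/nr_hugepages') as f:
        print('vm.nr_hugepages =', f.read().strip())

    # The HugePages_* counters show the total/free/reserved pages.
    with open('/proc/meminfo') as f:
        for line in f:
            if line.startswith('Huge'):
                print(line.rstrip())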

On Aug 25, 2016 1:15 PM, "zhi"  wrote:

> hi, all
>
> I plan to create VM with huge page. And I created a new flavor like
> this:
>
> $ nova flavor-show ed8dccd2-adbe-44ee-9e4f-391d045d3653
> +----------------------------+------------------------------------------------------------+
> | Property                   | Value                                                      |
> +----------------------------+------------------------------------------------------------+
> | OS-FLV-DISABLED:disabled   | False                                                      |
> | OS-FLV-EXT-DATA:ephemeral  | 0                                                          |
> | disk                       | 30                                                         |
> | extra_specs                | {"aggregate_instance_extra_specs:pinned": "true",          |
> |                            | "hw:cpu_policy": "dedicated", "hw:mem_page_size": "2048"}  |
> | id                         | ed8dccd2-adbe-44ee-9e4f-391d045d3653                       |
> | name                       | m1.vm_2                                                    |
> | os-flavor-access:is_public | True                                                       |
> | ram                        | 1024                                                       |
> | rxtx_factor                | 1.0                                                        |
> | swap                       |                                                            |
> | vcpus                      | 4                                                          |
> +----------------------------+------------------------------------------------------------+
>
> Then I create a VM by using this flavor and creating fail. The error
> message is :
> "
> {"message": "Build of instance ada7ac22-1052-44e1-b4a5-c21221dbab87 was
> re-scheduled: Insufficient compute resources: Requested instance NUMA
> topology cannot fit the given
>  host NUMA topology.", "code": 500, "details": "  File
> \"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 1905,
> in _do_build_and_run_instance
> "
>
> And, my compute node's numa info is:
>
> $ numactl --hardware
> available: 2 nodes (0-1)
> node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
> node 0 size: 32543 MB
> node 0 free: 28307 MB
> node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39
> node 1 size: 32768 MB
> node 1 free: 29970 MB
> node distances:
> node   0   1
>   0:  10  21
>   1:  21  10
>
> Qemu version is "QEMU emulator version 2.1.2
> (qemu-kvm-ev-2.1.2-23.el7.1)". And libvirtd version is "1.2.17".
>
>
> Did anyone meet the same error like me?
>
>
>
> B.R.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [Nova] Interface in PEER2PEER live migration

2016-08-25 Thread Alberto Planas Dominguez
On Wed, 2016-08-24 at 11:18 -0400, Daniel P. Berrange wrote:
> On Wed, Aug 24, 2016 at 05:07:50PM +0200, Alberto Planas Dominguez
> wrote:

Daniel, thanks for the fast reply!!

> > Unfortunately it was closed as invalid, and the solution provided is
> > completely unrelated. The suggested solution is based on
> > `live_migration_inbound_addr`, which is related to the libvirtd URI,
> > not the qemu one. I tested several times and yes, this solution is
> > not related to the problem.
> 
> The "live_migration_inbound_addr" configuration parameter was
> intended to affect both libvirt & QEMU traffic. If that is not
> working correctly, then we should be fixing that, not adding yet
> another parameter.

The code in libvirt is very clear: if uri_in is NULL, it will ask the
other side for its hostname. I checked the code in 1.2.18:

https://github.com/libvirt/libvirt/blob/v1.2.18-maint/src/qemu/qemu_migration.c#L3601

https://github.com/libvirt/libvirt/blob/v1.2.18-maint/src/qemu/qemu_migration.c#L3615

The same logic is in master:

https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu_migration.c#L4013

But we can go back to 0.9.12:

https://github.com/libvirt/libvirt/blob/v0.9.12-maint/src/qemu/qemu_migration.c#L1472

Nova sets the migration_uri parameter to None, which means that uri_in
is NULL.

How can I affect the QEMU part? The code path, AFAIU, is: if we do not
set miguri (migrateToURI2) or migrate_uri (migrateToURI3), uri_in is
NULL.

I am not familiar with the libvirt code; please help me find how I can
give this uri_in parameter a value different from the other node's
hostname, without setting the correct value in migrateToURI[23] on the
Nova side.
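
For illustration, this is the knob I mean: passing a non-NULL miguri to
migrateToURI2 is what stops libvirt from asking the destination for its
hostname. A sketch with made-up hostnames and domain name (not Nova's
actual driver code):

    import libvirt

    src = libvirt.open('qemu:///system')
    dom = src.lookupByName('instance-00000001')  # hypothetical domain

    # libvirtd control channel vs. QEMU data channel:
    dconnuri = 'qemu+tcp://dest-mgmt.example.com/system'
    miguri = 'tcp://dest-fast.example.com'  # gives uri_in a non-NULL value

    flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
    dom.migrateToURI2(dconnuri, miguri, None, flags, None, 0)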


> > this patch will create a second uri:
> > 
> > migrate_uri=tcp://fast.%s/
> 
> While you can do that hack, the fact that it works is simply luck -
> it certainly was not designed with this kind of usage in mind. We
> would in fact like to remove the live_migration_uri config parameter
> entirely and have the libvirt driver automatically use the correct URI.

This is a very interesting suggestion!

I can see that in /etc/libvirt/qemu.conf there is a 'migration_address'
parameter, which covers the -incoming side of the migration. What I am
not able to see is how I can change the hostname (to select the correct
interface) from the source side of the migration. I see the default
value in `migration_host`, but in my tests setting it did not set uri_in
properly.

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham
Norton, HRB 21284 (AG Nürnberg)
Maxfeldstraße 5, 90409 Nürnberg, Germany



Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-25 Thread Derek Higgins
On 25 August 2016 at 02:56, Paul Belanger  wrote:
> On Wed, Aug 24, 2016 at 02:11:32PM -0400, James Slagle wrote:
>> The latest recurring problem that is failing a lot of the nonha ssl
>> jobs in tripleo-ci is:
>>
>> https://bugs.launchpad.net/tripleo/+bug/1616144
>> tripleo-ci: nonha jobs failing with Unable to establish connection to
>> https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1b1671ced2ff/stacks/overcloud/f9f6f712-8e89-4ea9-a34b-6084dc74b5c1
>>
>> This error happens while polling for events from the overcloud stack
>> by tripleoclient.
>>
>> I can reproduce this error very easily locally by deploying with an
>> ssl undercloud with 6GB ram and 2 vcpus. If I don't enable swap,
>> something gets OOM killed. If I do enable swap, swap gets used (< 1GB)
>> and then I hit this error almost every time.
>>
>> The stack keeps deploying but the client has died, so the job fails.
>> My investigation so far has only pointed out that it's the swap
>> allocation that is delaying things enough to cause the failure.
>>
>> We do not see this error in the ha job even though it deploys more
>> nodes. As of now, my only suspect is that it's the overhead of the
>> initial SSL connections causing the error.
>>
>> If I test with 6GB ram and 4 vcpus I can't reproduce the error,
>> although much more swap is used due to the increased number of default
>> workers for each API service.
>>
>> However, I suggest we just raise the undercloud specs in our jobs to
>> 8GB ram and 4 vcpus. These seem reasonable to me because those are the
>> default specs used by infra in all of their devstack single and
>> multinode jobs spawned on all their other cloud providers. Our own
>> multinode job for the undercloud/overcloud and undercloud only job are
>> running on instances of these sizes.
>>
> Close, our current flavors are 8vCPU, 8GB RAM, 80GB HDD. I'd recommend doing
> that for the undercloud just to be consistent.

The HDs on most of the compute nodes are 200GB, so we've been trying
really hard[1] to keep the disk usage for each instance down so that
we can fit as many instances onto each compute node as possible
without being restricted by the HDs. We've also allowed nova to
overcommit on storage by a factor of 3. The assumption is that all of
the instances are short lived and most of them never fully exhaust
the storage allocated to them. Even the ones that do (the undercloud
being the one that does) hit peak at different times, so everything is
tickety boo.

I'd strongly advise against using a flavor with an 80GB HDD; if we
increase the disk space available to the undercloud to 80GB then we
will eventually be using it in CI, and 3 underclouds on the same
compute node will end up filling up the disk on that host.

[1] 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_gate_test.sh#n26
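
To put rough numbers on that concern (figures from this thread; the
instance count is illustrative):

    physical_gb = 200        # disk on most compute nodes
    overcommit = 3.0         # nova storage over-commit factor
    allocatable_gb = physical_gb * overcommit   # 600GB nova may hand out

    flavor_disk_gb = 80
    print(int(allocatable_gb // flavor_disk_gb))  # 7 instances schedulable
    # ...but just 3 underclouds that really fill their disks already
    # exceed the physical capacity:
    print(3 * flavor_disk_gb, '>', physical_gb)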

>
> [1] http://docs.openstack.org/infra/system-config/contribute-cloud.html
>
>> Yes, this is just sidestepping the problem by throwing more resources
>> at it. The reality is that we do not prioritize working on optimizing
>> for speed/performance/resources. We prioritize feature work that
>> indirectly (or maybe it's directly?) makes everything slower,
>> especially at this point in the development cycle.
>>
>> We should therefore expect to have to continue to provide more and
>> more resources to our CI jobs until we prioritize optimizing them to
>> run with less.
>>
> I actually believe this problem highlights how large tripleo-ci has
> grown, and that it is in need of a refactor. While we won't solve this
> problem today, I do think tripleo-ci is too monolithic. I believe
> there is some discussion on breaking jobs into different scenarios,
> but I haven't had a chance to read up on that.
>
> I'm hoping in Barcelona we can have a topic on CI pipelines and how better to
> optimize our runs.
>
>> Let me know if there is any disagreement on making these changes. If
>> there isn't, I'll apply them in the next day or so. If there are any
>> other ideas on how to address this particular bug for some immediate
>> short term relief, please let me know.
>>
>> --
>> -- James Slagle
>> --
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO][CI] Need more undercloud resources

2016-08-25 Thread Derek Higgins
On 24 August 2016 at 19:11, James Slagle  wrote:
> The latest recurring problem that is failing a lot of the nonha ssl
> jobs in tripleo-ci is:
>
> https://bugs.launchpad.net/tripleo/+bug/1616144
> tripleo-ci: nonha jobs failing with Unable to establish connection to
> https://192.0.2.2:13004/v1/a90407df1e7f4f80a38a1b1671ced2ff/stacks/overcloud/f9f6f712-8e89-4ea9-a34b-6084dc74b5c1
>
> This error happens while polling for events from the overcloud stack
> by tripleoclient.
>
> I can reproduce this error very easily locally by deploying with an
> ssl undercloud with 6GB ram and 2 vcpus. If I don't enable swap,
> something gets OOM killed. If I do enable swap, swap gets used (< 1GB)
> and then I hit this error almost every time.
>
> The stack keeps deploying but the client has died, so the job fails.
> My investigation so far has only pointed out that it's the swap
> allocation that is delaying things enough to cause the failure.
>
> We do not see this error in the ha job even though it deploys more
> nodes. As of now, my only suspect is that it's the overhead of the
> initial SSL connections causing the error.
>
> If I test with 6GB ram and 4 vcpus I can't reproduce the error,
> although much more swap is used due to the increased number of default
> workers for each API service.
>
> However, I suggest we just raise the undercloud specs in our jobs to
> 8GB ram and 4 vcpus. These seem reasonable to me because those are the
> default specs used by infra in all of their devstack single and
> multinode jobs spawned on all their other cloud providers. Our own
> multinode job for the undercloud/overcloud and undercloud only job are
> running on instances of these sizes.
>
> Yes, this is just sidestepping the problem by throwing more resources
> at it. The reality is that we do not prioritize working on optimizing
> for speed/performance/resources. We prioritize feature work that
> indirectly (or maybe it's directly?) makes everything slower,
> especially at this point in the development cycle.

Yup, I couldn't agree with this more; it is exactly what happens. And
as long as everybody remains driven by particular features, it's going
to be the case. Ideally we'd have somebody whose driving force is
simply to take what we have at any particular point in time, profile
certain pain points, make improvements where they can be made, tune
things, etc.

>
> We should therefore expect to have to continue to provide more and
> more resources to our CI jobs until we prioritize optimizing them to
> run with less.
>
> Let me know if there is any disagreement on making these changes. If
> there isn't, I'll apply them in the next day or so. If there are any
> other ideas on how to address this particular bug for some immediate
> short term relief, please let me know.

Not disagreeing, but just a reminder to double check quotas and
over-commit ratios (for vCPUs) so things will still fit where they
should be.

Also it's worth noting that the act of increasing the number of vCPUs
available to the undercloud will not only increase the memory
requirements of the undercloud (we know this happens), but the extra
service workers, even if unused, may cause additional CPU usage on the
host, so this is worth monitoring.

>
> --
> -- James Slagle
> --
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [nova] Creating VM error: Insufficient compute resources

2016-08-25 Thread zhi
hi, all

I plan to create a VM with huge pages, and I created a new flavor like
this:

$ nova flavor-show ed8dccd2-adbe-44ee-9e4f-391d045d3653
+----------------------------+------------------------------------------------------------+
| Property                   | Value                                                      |
+----------------------------+------------------------------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                                      |
| OS-FLV-EXT-DATA:ephemeral  | 0                                                          |
| disk                       | 30                                                         |
| extra_specs                | {"aggregate_instance_extra_specs:pinned": "true",          |
|                            | "hw:cpu_policy": "dedicated", "hw:mem_page_size": "2048"}  |
| id                         | ed8dccd2-adbe-44ee-9e4f-391d045d3653                       |
| name                       | m1.vm_2                                                    |
| os-flavor-access:is_public | True                                                       |
| ram                        | 1024                                                       |
| rxtx_factor                | 1.0                                                        |
| swap                       |                                                            |
| vcpus                      | 4                                                          |
+----------------------------+------------------------------------------------------------+

Then I created a VM using this flavor, and the creation failed. The error
message is:
"
{"message": "Build of instance ada7ac22-1052-44e1-b4a5-c21221dbab87 was
re-scheduled: Insufficient compute resources: Requested instance NUMA
topology cannot fit the given
 host NUMA topology.", "code": 500, "details": "  File
\"/usr/lib/python2.7/site-packages/nova/compute/manager.py\", line 1905, in
_do_build_and_run_instance
"

And, my compute node's numa info is:

$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38
node 0 size: 32543 MB
node 0 free: 28307 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39
node 1 size: 32768 MB
node 1 free: 29970 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10
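
Since the "cannot fit the given host NUMA topology" check is made per
NUMA node rather than host-wide, it is worth checking the per-node 2MB
hugepage supply; this flavor needs 1024 MB / 2 MB = 512 free pages on a
single node (plus 4 pinnable pCPUs there). A small check over the
standard sysfs paths:

    import glob

    pattern = ('/sys/devices/system/node/node*/hugepages/'
               'hugepages-2048kB/free_hugepages')
    for path in sorted(glob.glob(pattern)):
        node = path.split('/')[5]  # e.g. 'node0'
        with open(path) as f:
            print(node, 'free 2MB hugepages:', f.read().strip())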

Qemu version is "QEMU emulator version 2.1.2 (qemu-kvm-ev-2.1.2-23.el7.1)".
And libvirtd version is "1.2.17".


Has anyone met the same error?



B.R.


Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-25 Thread Aleksandr Didenko
+1

On Thu, Aug 25, 2016 at 9:35 AM, Sergey Vasilenko 
wrote:

> +1
>
>
> /sv
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


Re: [openstack-dev] [tripleo] Testing optional composable services in the CI

2016-08-25 Thread Dmitry Tantsur

Hi!

Looks great! Ironic currently requires a few manual steps; I wonder how
we will handle them, but I guess we can figure that out later.


On 08/24/2016 08:39 PM, Emilien Macchi wrote:

OK, I have a PoC ready for review and feedback:

- First iteration of scenario001 job in TripleO CI:
https://review.openstack.org/#/c/360039
I checked, and the job is not triggered if we don't touch Sahara files directly.

- Patch in THT that tries to modify Sahara files:
https://review.openstack.org/#/c/360040
I checked, and when running "check experimental", the job is triggered
because we modify puppet/services/sahara-base.yaml.

So the mechanism is in place (in experimental status for now) and ready for review.
Please give any feedback.

Once we have this mechanism in place, we'll be able to add more
services coverage, and run the jobs in a smart way thanks to Zuul.

Thanks,

On Wed, Aug 17, 2016 at 3:52 PM, Emilien Macchi  wrote:

On Wed, Aug 17, 2016 at 7:20 AM, James Slagle  wrote:

On Wed, Aug 17, 2016 at 4:04 AM, Dmitry Tantsur  wrote:

However, the current gate system allows to run jobs based on files affected.
So we can also run a scenario covering ironic on THT check/gate if
puppet/services/*ironic* is affected, but not in the other cases. This won't
cover all potential failures, but it would be of great help anyway. It
should also run in experimental pipeline, so that it can be triggered on any
patch.

This is in addition to periodic jobs you're proposing, not replacing them.
WDYT?


Using the files affected to trigger a scenario test that uses the
affected composable service sounds like a really good idea to me.



I have a PoC, everything is explained in commit message:
https://review.openstack.org/#/c/356675/

Please review it and give feedback !



--
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Emilien Macchi









Re: [openstack-dev] [vitrage] entity graph layout

2016-08-25 Thread Yujun Zhang
After a first investigation, I think Cytoscape might be too heavy: there
would be a lot of refactoring work to migrate all functions to the new
library. So I will suspend this proposal for now.

However, it seems the layout could be improved by adjusting the parameters
applied to the force layout, e.g. charge, gravity, etc. When a larger charge
is assigned to a cluster, it will push away the other elements to avoid
overlapping.

But currently it is difficult to tune such parameters since the scripts are
minified. *Any ideas on how to speed up the debug process?*

--
Yujun

On Tue, Aug 23, 2016 at 9:29 AM Yujun Zhang 
wrote:

> I'm considering to use Cytoscape.js [1] to improve the layout for entity
> graph view.
>
> Cytoscape.js is a graph theory (a.k.a. network) library for analysis and
> visualisation which under active maintenance (latest release 2.7.8 on Aug
> 18, 2016) [2], while the current library d3-dagre [3] is declared not being
> actively developed or maintained.
>
> Meanwhile, I'm building a proof of concept for visualizing the entity
> graph with Cytoscape.
>
> Could anybody give a list on the required features for this view? Any
> comments are welcome.
>
> [1] http://js.cytoscape.org/
> [2] https://github.com/cytoscape/cytoscape.js
> [3] https://github.com/cpettitt/dagre-d3
>
>
> On Mon, Aug 8, 2016 at 2:34 PM Afek, Ifat (Nokia - IL) <
> ifat.a...@nokia.com> wrote:
>
>> There is no such blueprint at the moment.
>> You are more than welcome to add one, in case you have some ideas for
>> improvements.
>>
>> Ifat.
>>
>> From: Yujun Zhang
>> Date: Monday, 8 August 2016 at 09:21
>>
>>
>> Great, it works.
>> But it would be better if we could improve the default layout. Is there
>> any blueprint in progress?
>> --
>> Yujun
>>
>> On Sun, Aug 7, 2016 at 1:09 PM Afek, Ifat (Nokia - IL) <
>> ifat.a...@nokia.com> wrote:
>>
>>> Hi,
>>>
>>> It is possible to adjust the layout of the graph. You can double-click
>>> on a vertex and it will remain pinned to its place. You can then move the
>>> pinned vertices around to adjust the graph layout.
>>>
>>> Hope it helped, and let us know if you need additional help with your
>>> demo.
>>>
>>> Best Regards,
>>> Ifat.
>>>
>>>
>>> From: Yujun Zhang
>>> Date: Friday, 5 August 2016 at 09:32
>>>
>>> Hi, all,
>>>
>>> I'm building a demo of vitrage. The dynamic entity graph looks
>>> interesting.
>>>
>>> But when more entities are added, things becomes crowded and the links
>>> screw over each other. Dragging the items will not help much.
>>>
>>> Is it possible to adjust the layout so I can get a more regular/stable
>>> tree view of the entities?
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>


[openstack-dev] [daisycloud-core] Agenda for IRC meeting Aug. 26 2016

2016-08-25 Thread hu . zhijiang
1) Roll Call
2) OPNFV CI Evolution Study Progress
3) Bare Metal Deployment(PXE/IPMI) Status Update
3) Bare Metal Deployment (PXE/IPMI) Status Update
5) Bifrost/Ironic Integration (does ironic provide all data we need)


B.R.,
Zhijiang




Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-25 Thread Sergey Vasilenko
+1


/sv


Re: [openstack-dev] [OpenStack-docs] [api-ref][ceilometer][nova][senlin][swift][zaqar] OpenStack Docs theme migration

2016-08-25 Thread Andreas Jaeger
On 2016-08-24 19:11, Graham Hayes wrote:
> Hi All,
> 
> We are nearly ready to release the new version of os-api-ref.
> 
> This required a temporary section of code to allow the docs to build
> with both oslosphinx and openstackdocstheme.
> 
> Currently only Nova, Ceilometer, Zaqar, Senlin and Swift are
> outstanding.
> 

All are merged now, check:
https://review.openstack.org/#/q/topic:os-api-ref-1.0.0-prep

We're ready for the 1.0 release of os-api-ref:
https://review.openstack.org/#/c/360038/2

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126




Re: [openstack-dev] [fuel] Propose Denis Egorenko for fuel-library core

2016-08-25 Thread Maksim Malchuk
My big +1

On Thu, Aug 25, 2016 at 4:13 AM, Emilien Macchi  wrote:

> I'm surprised Denis was not core before.
> He has been a tremendous core reviewer for the Puppet OpenStack modules.
>
> My vote doesn't count but I'm still encouraging this effort. Congrats
> Denis, it's well deserved!
>
> On Wed, Aug 24, 2016 at 5:33 PM, Sergii Golovatiuk
>  wrote:
> > +1
> >
> > --
> > Best regards,
> > Sergii Golovatiuk,
> > Skype #golserge
> > IRC #holser
> >
> > On Wed, Aug 24, 2016 at 10:49 PM, Matthew Mosesohn <
> mmoses...@mirantis.com>
> > wrote:
> >>
> >> +1 Denis is excellent at reviews and spotting CI failure root causes.
> >> I definitely support him.
> >>
> >> On Wed, Aug 24, 2016 at 11:36 PM, Alex Schultz 
> >> wrote:
> >> > Ahoy Fuel Cores,
> >> >
> >> > I would like to propose Denis Egorenko for fuel-library core.  Denis
> is
> >> > always providing great reviews[0] and continues to help keep Fuel
> moving
> >> > forward.  He's the #3 reviewer and committer of the last 90 days[1]
> and
> >> > 180
> >> > days[2].
> >> >
> >> > Please vote with a +1/-1. Voting will close on August 31st.
> >> >
> >> > Thanks,
> >> > -Alex
> >> >
> >> > [0] http://stackalytics.com/?user_id=degorenko
> >> > [1] http://stackalytics.com/report/contribution/fuel-library/90
> >> > [2] http://stackalytics.com/report/contribution/fuel-library/180
> >> >
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >> >
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Best Regards,
Maksim Malchuk,
Senior DevOps Engineer,
MOS: Product Engineering,
Mirantis, Inc
