[openstack-dev] [Craton] NFV planned host maintenance

2016-11-11 Thread Juvonen, Tomi (Nokia - FI/Espoo)
Over the past two OpenStack summits I have been looking into the changes needed
to fulfill the OPNFV Doctor use case for planned host maintenance, while at the
same time trying to find other Ops requirements to satisfy different needs. I
was just about to start a new project (Fenix), but Craton, which was proposed
to me at the Barcelona meetup, looks like a good alternative. Here are some
ideas, and I would welcome comments on whether Craton could be used here.

OPNFV Doctor / NFV requirements are described here:
http://artifacts.opnfv.org/doctor/docs/requirements/02-use_cases.html#nvfi-maintenance
http://artifacts.opnfv.org/doctor/docs/requirements/03-architecture.html#nfvi-maintenance
http://artifacts.opnfv.org/doctor/docs/requirements/05-implementation.html#nfvi-maintenance

My rough thoughts about what would be initially needed (as short as I can):

- There should be a database of all hosts matching what is known by Nova.
- There should be an API for the Cloud Admin to set a planned maintenance window
  for a host (maybe an aggregate or group of hosts) when it goes into maintenance,
  and unset it when finished. There might be some optional parameters, like a
  target host to move things currently running on the affected host. This could
  also be used for retirement of a host.
- There should be project (tenant) and host specific notifications that could:
- Trigger an alarm in Aodh so the application would be aware of maintenance state
  changes affecting its servers, so zero downtime of the application could
  be guaranteed.
- Be consumed by a workflow engine like Mistral, where
  application-server-specific action flows and admin action flows could
  be performed (to move servers away, disable the host, ...).
- Be consumed by host monitoring like Vitrage, to disable
  alarms for the host while planned maintenance is ongoing, since it is not
  down because of a fault.
- There should be admin and project level APIs to query maintenance session
  status.
- Workflow status should be queried or read as a notification to keep internal
  state and send further notifications.
- There was some more discussion in "BCN-ops-informal-meetup" that goes beyond this:
  https://etherpad.openstack.org/p/BCN-ops-informal-meetup
- Some more discussion also in "BCN-ops-informal-meetup" that goes beyond this:
  https://etherpad.openstack.org/p/BCN-ops-informal-meetup

What else, details, problems:

There is a problem with flow engine actions. Depending on how long maintenance
will take or what type of server is running, the application wants flows to
behave differently. Application-specific flows could surely be written, but the
problem is that they need to perform admin actions. It needs to be solved how
an application can choose action flows when only an admin can run them. Perhaps
the admin should create the flows and give the application the power to choose
via a hint in Nova metadata or in the notification going to the flow engine.

I started a discussion at the Austin summit about extending planned host
maintenance in Nova, but it was agreed there could just be a link to an external
tool. Now if this tool were to exist in OpenStack, I would suggest linking it
like this, though surely this is to be decided after the external tool
implementation exists:
- The Nova services API could have a way for an admin to set and unset a "base URL"
  pointing to the external tool for planned maintenance affecting a host.
- The admin should see a link to the external tool when querying services via the
  services API. This might be formed like: {base URL}/{host_name}
- The project should get a project-specific link to the external tool when querying
  via the Nova servers API. This might be: {base URL}/project/{hostId}.
  hostId can be exposed to the project because it does not reveal the exact host,
  but otherwise acts as a unique identifier for the host:
  hashlib.sha224(projectid + host_name).hexdigest()
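
For illustration, a minimal Python sketch of that derivation (the function name
and the .encode() call for Python 3 are my additions; only the sha224 formula
comes from above):

    import hashlib

    def project_host_id(project_id, host_name):
        # Unique per (project, host) pair, but does not reveal the host name.
        # hashlib requires bytes on Python 3, hence the encode().
        return hashlib.sha224(
            (project_id + host_name).encode('utf-8')).hexdigest()

    # The project-specific link would then be {base URL}/project/{hostId}.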

Br,
Tomi Juvonen
Senior SW Architect, Nokia









[openstack-dev] [Karbor] Karbor Ocata Design Summit Recap

2016-11-11 Thread edison xiang
Hi guys,

Karbor Ocata Accepted items:[1]

* Stabilizing Protect and Restore end-to-end

* Freezer integration (with in-guest agent)

* Multi-Tenancy [2]

* Support Cinder V3 API

* Triggers Support iCalendar RFC 2445

Please feel free to review them.

[1] https://etherpad.openstack.org/p/karbor-ocata-roadmap
[2] https://review.openstack.org/#/c/372846/


Best Regards

xiangxinyong


Re: [openstack-dev] [Senlin]Nominating Ruijie Yuan for Senlin core reviewer

2016-11-11 Thread Ethan Lynn
+1 Ruijie is a hard-working guy!

Best Regards,
Ethan Lynn
xuanlangj...@gmail.com




> On Nov 11, 2016, at 14:27, Yanyan Hu  wrote:
> 
> Hi Senlin core team, 
> 
> I'd like to nominate Ruijie Yuan(IRC name 'ruijie') for Senlin core reviewer.
> 
> Ruijie has been working on Senlin since the beginning of the Newton cycle and he
> has made significant contributions to the Senlin project in the last three months,
> including batch policy support, versioned API support, etc. He is
> also actively interacting with the team on bug fixes, blueprint discussions and
> code review. I have talked with him and he expressed strong enthusiasm and
> the will to contribute to the Senlin project, and I believe he will make a great
> addition to the core review team.
> 
> So please put your +1 or -1 here. Please note any -1 will be a veto for this 
> nomination. I will collect the result in seven days.
> 
> Thank you so much!
> 
> http://stackalytics.com/report/contribution/senlin-group/60 
> 
> 
> -- 
> Best regards,
> 
> Yanyan


Re: [openstack-dev] [nova] About doing the migration claim with Placement API

2016-11-11 Thread Alex Xu
2016-11-03 4:52 GMT+08:00 Jay Pipes :

> On 11/01/2016 10:14 AM, Alex Xu wrote:
>
>> Currently we only update the resource usage with the Placement API in the
>> instance claim and the available-resource update periodic task. But
>> there is no claim for migration with the placement API yet. This work is
>> tracked by https://bugs.launchpad.net/nova/+bug/1621709. In Newton, we
>> only fixed one bit, which makes the resource update periodic task work
>> correctly so that it will auto-heal everything. The migration claim
>> part wasn't a goal for the Newton release.
>>
>> So the first question is: do we want to fix it in this release? If the
>> answer is yes, there is a concern that needs discussion.
>>
>
> Yes, I believe we should fix the underlying problem in Ocata. The
> underlying problem is what Sylvain brought up: live migrations do not
> currently use any sort of claim operation. The periodic resource audit is
> relied upon to essentially clean up the state of claimed resources over
> time, and as Chris points out in review comments on
> https://review.openstack.org/#/c/244489/, this leads to the scheduler
> operating on stale data and can lead to an increase in retry operations.
>
> This needs to be fixed before even attempting to address the issue you
> bring up with the placement API calls from the resource tracker.


OK, let me see if I can help here.


>
>
>> In order to implement the drop of the migration claim, the RT needs to
>> remove allocation records on the specific RP (on the source/destination
>> compute node). But there isn't any API that can do that. The API for removing
>> allocation records is 'DELETE /allocations/{consumer_uuid}', but it will
>> delete all the allocation records for the consumer. So the initial
>> fix (https://review.openstack.org/#/c/369172/) adds a new API 'DELETE
>> /resource_providers/{rp_uuid}/allocations/{consumer_id}'. But Chris Dent
>> pointed out this is against the original design: all the allocations for
>> a specific consumer can only be dropped together.
>>
>
> Yes, and this is by design. Consumption of resources -- or the freeing
> thereof -- must be an atomic, transactional operation.
>
>> There is also a suggestion from Andrew: we can update all the allocation
>> records for the consumer each time. That means the RT will build the
>> original allocation records and the new allocation records for the claim
>> together, and put them into one API call. That API should be 'PUT
>> /allocations/{consumer_uuid}'. Unfortunately that API doesn't replace
>> all the allocation records for the consumer; it always appends the new
>> allocation records for the consumer.
>>
>
> I see no reason why we can't change the behaviour of the `PUT
> /allocations/{consumer_uuid}` call to allow changing either the amounts of
> the allocated resources (a resize operation) or the set of resource
> provider UUIDs referenced in the allocations list (a move operation).
>
> For instance, let's say we have an allocation for an instance "i1" that is
> consuming 2 VCPU and 2048 MEMORY_MB on compute node "rpA", 50 DISK_GB on a
> shared storage pool "rpC".
>
> The allocations table would have the following records in it:
>
> resource_provider resource_class consumer used
> - --  
> rpA   VCPU   i1  2
> rpA   MEMORY_MB  i1   2048
> rpC   DISK_GBi1 50
>
> Now, we need to migrate instance "i1" to compute node "rpB". The instance
> disk uses shared storage so the only allocation records we actually need to
> modify are the VCPU and MEMORY_MB records.
>

Yeah, thinking about it with shared storage, this makes a lot of sense. Thanks
for the detailed explanation here!


>
> We would create the following REST API call from the resource tracker on
> the destination node:
>
> PUT /allocations/i1
> {
>   "allocations": [
>   {
> "resource_provider": {
>   "uuid": "rpB",
> },
> "resources": {
>   "VCPU": 2,
>   "MEMORY_MB": 2048
> }
>   },
>   {
> "resource_provider": {
>   "uuid": "rpC",
> },
> "resources": {
>   "DISK_GB": 50
> }
>   }
>   ]
> }
>
> The placement service would receive that request payload and immediately
> grab any existing allocation records referencing consumer_uuid of "i1". It
> would notice that records referencing "rpA" (the source compute node) are
> no longer needed. It would notice that the DISK_GB allocation hasn't
> changed. And finally it would notice that there are new VCPU and MEMORY_MB
> records referring to a new resource provider "rpB" (the destination compute
> node).
>
> A single SQL transaction would be built that executes the following:
>
> BEGIN;
>
>   # Grab the source and destination compute node provider generations
>   # to protect against concurrent writes...
>   $RPA_GEN := SELECT generation FROM resource_providers
>   WHERE uuid = 'rpA';
>   $RPB_GEN := SELECT generation FROM resource_providers
>   WHERE uuid = 'rpB';
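
To make the replace semantics above concrete, here is a minimal Python sketch
(my illustration, not placement's actual code) of the diff the service would
compute between existing and requested allocations:

    def diff_allocations(existing, requested):
        """Both arguments map (rp_uuid, resource_class) -> amount."""
        # Records no longer referenced are deleted (the source node rpA).
        to_delete = [key for key in existing if key not in requested]
        # New or changed records are written (the destination node rpB);
        # unchanged ones (the shared DISK_GB on rpC) are left alone.
        to_upsert = {key: amount for key, amount in requested.items()
                     if existing.get(key) != amount}
        return to_delete, to_upsert

    existing = {('rpA', 'VCPU'): 2, ('rpA', 'MEMORY_MB'): 2048,
                ('rpC', 'DISK_GB'): 50}
    requested = {('rpB', 'VCPU'): 2, ('rpB', 'MEMORY_MB'): 2048,
                 ('rpC', 'DISK_GB'): 50}
    # -> deletes the rpA records, inserts the rpB records, leaves rpC alone.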

Re: [openstack-dev] [all][ptg] how are cross-project sessions scheduled?

2016-11-11 Thread Thierry Carrez
John Dickinson wrote:
> I've been fielding questions about the upcoming PTG, but one has come
> up that I don't know how to answer.
> 
> How will cross-project sessions at the PTG be scheduled?
> 
> From looking at the "Event Layout" section on
> https://www.openstack.org/ptg, it seems to imply that each team in the
> left column will likely set their own schedule. So if there's some
> cross-project topic someone wants to bring up, then that person should
> figure out which team it best fits under and petition that team to
> include the topic. Is that correct?

Yes, mostly. If you want to discuss changes in release management, that
would probably fall into the Release Management room.

But while the PTG is really team-oriented, we don't want to discourage
true inter-team discussions (like the interesting discussion we had on
scaling review teams in Barcelona) which do not fall under any specific
cross-project team's work.

The current thinking is that we could have one room dedicated to such
discussions, with an open schedule (like an etherpad with 30min slots
where you could book two in a row). If you want to schedule a specific
discussion between upstream contributors which does not fall into any
specific team territory (cross-project or not), we'd have that venue to
facilitate that. Attendees could keep an eye on that etherpad and learn
of discussions scheduled there, and arrange their own team discussions
around it.

We want to keep the whole event very dynamic and loosely scheduled,
since the midcycles taught us that is the most efficient way to proceed.

Discussions that have an impact beyond development issues, are more
outward-facing, and include all of the OpenStack community should be held
at the "Forum" at the Summit instead, which is where all of our
community will be represented.

Hope this helps,

-- 
Thierry Carrez (ttx)





[openstack-dev] [nova] When an instance uses a local disk, the instance's root_gb is 0 if its flavor's disk size is 0; I don't think this is appropriate.

2016-11-11 Thread han . rong3
I submitted a bug for this question:
https://bugs.launchpad.net/nova/+bug/1414947

This bug's status is Wishlist, but I have a different opinion.

I have a use case:
Users may create some virtual machines for the processing of their
business; these virtual machines use customized images which meet
their requirements. When users create virtual machines in batches using
these customized images, they don't need to know the size of the
instance's root_gb, so they can use a flavor whose disk size is 0.
In this case, the value of the root_gb field in the instances DB table is 0
for every virtual machine, but its real value is not 0. This will lead to
live migration failures, since the live-migration destination compute node's
disk_available_least is always less than 0.
We often enable the disk filter to control compute nodes' disk space, but
when the value of the root_gb field in the instances DB table is 0, this
filter doesn't work.

So, I think the value of the root_gb field in the instances DB table should
be equal to the image's virtual size in this use case.
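
A minimal sketch of the behaviour I am proposing (my illustration, not Nova
code): when the flavor's disk size is 0, fall back to the image's virtual
size so the stored value reflects the instance's real footprint:

    def effective_root_gb(flavor_root_gb, image_virtual_size_bytes):
        GiB = 1024 ** 3
        if flavor_root_gb > 0:
            return flavor_root_gb
        # Round the image's virtual size up to whole GiB.
        return (image_virtual_size_bytes + GiB - 1) // GiB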






[openstack-dev] [tacker] Tacker design sessions summary

2016-11-11 Thread Sripriya Seetharam
Hello Tackers,

Here is a high-level summary (delayed!) of the Tacker design sessions [1] at
the summit. Please feel free to add to or comment on the notes in case I have
missed anything.

Fishbowl session:

Topic 1: VMware / vCloud, Public Cloud
There is a need to support VIMs beyond OpenStack in Tacker. It was
suggested that native VMware deployment using vCloud is typical in customer
deployments. There was also discussion of VMware Integrated OpenStack as a
deployment choice. Both options were explored and the general agreement was to
use the vCloud API to talk to the VMware VIM. The open vCloud API is able to
support complex resource requests from Tacker with a rich set of REST APIs.
Action items include revisiting the current VIM API so that any VIM can be
registered in Tacker, with the reference OpenStack driver being part of the VIM
module; drivers, including VMware, will reside outside the Tacker project. CI
options: 3rd-party CI testing, similar to the Nova VMware driver tests, needs to
be integrated in Tacker. Tacker feature compatibility - such as scaling,
monitoring, mgmt driver, auto network, and auto flavor creation - needs to be
evaluated and implemented specifically for the vCloud API.

Topic 2: Containerized VNFs
The topic was around exploring the current options for launching VNFs as
containers. The Zun project provides an API for container life cycle management.
There are still questions around how to consume Zun's REST APIs and its
interaction with container networking projects such as Kuryr. Also, the role of
the VIM in container management - such as OpenStack using Zun, or VMware vCloud
using Photon - was discussed, to identify the support required from the VIM for
container management. It was agreed that a spec needs to be created to come up
with a plan for the 1st iteration of container support in Tacker.

Work sessions:

Topic 1: NSD - Mistral Integration
The role of a workflow composer such as Mistral was discussed for implementing
the NSD feature. Given that an NSD template describes a complex network service,
there are benefits in bringing in Mistral to process the template with dependency
requirements or to initiate resource requests in parallel. Also, any error
scenarios or forking of subtasks on certain matching VNF criteria can be handled
with Mistral support, especially for forwarding graphs. The action item
identified was to build a basic translator that can map simple NSD TOSCA
templates to Mistral workflow files in the 1st iteration for the Ocata cycle.
This can later be evolved to handle more complex NSD requirements.

Topic 2: EM Interface Evolution
The discussion was around defining the API interactions between Tacker and an
external EM interface for VNF management. It was agreed that there is no
standard way to define and implement this feature. The orchestrator and
framework should implement a basic version of this interaction and later evolve
it as more requirements are gathered. There needs to be a feedback loop from the
EM interface to Tacker to drive the policies and performance SLAs of VNFs. The
orchestrator needs to provide the calls to the EM interface for VNF configuration
management. Also, VNF change notifications should be monitored by the EM
interface and the data sent back to Tacker to trigger action requests.

Topic 3: Senlin Integration
The Senlin team provided a good overview of the project and described cluster
management for scaling and HA policies in Tacker. Heat-translator provides
support for Senlin resources. Also, Senlin integration generates multiple
templates, including profiles and clusters, that will have to be handled and
managed within Tacker. There needs to be support for specifying the target set
of nodes/VDUs to be scaled out/in by Senlin. There are challenges in translating
TOSCA scaling policies and VNF properties to Senlin resource types such as
profile, cluster and policy, as there is no 1:1 mapping of these resources
between TOSCA templates and Senlin Heat templates. It was agreed that initial
support can be brought in as an alternative to Heat autoscaling if Senlin is
deployed on the system.

Cross project discussions
Topic 4: tosca-parser/heat-translator
The TP/HT folks updated the team on the 2 features being brought in for the
Ocata cycle: 1. nested templates, which are required for target VDU scaling, and
2. substitution mappings, required for the NSD implementation. The team
discussed supporting forwarding graphs for NSD in the tosca-parser project. The
action item identified was to find the gaps between the NSD spec in the TOSCA
profile and tosca-parser support; the TP team will correspondingly follow up
with the required fixes.

Topic 5: networking-sfc
Tacker is now using networking-sfc to implement forwarding graphs. The ask was
to rely on a stable branch of networking-sfc to bring in the functional tests
for forwarding graphs in Tacker. The networking-sfc team will create a newton
release in December and will follow the regular release schedule going forward
from Ocata.

Re: [openstack-dev] [openstack-ansible] How do you haproxy?

2016-11-11 Thread Jesse Pretorius
On 11/10/16, 2:01 PM, "Ian Cordasco"  wrote:

There are numerous existing HAProxy Ansible roles on Galaxy

(https://galaxy.ansible.com/list#/roles?page=1&page_size=10&autocomplete=haproxy).
Ostensibly one or more of these are already flexible enough for what
users might need. Why not adopt one of those and contribute back to it
instead of creating yet another HAProxy role?

All good evaluations start with understanding the requirements. ☺





Re: [openstack-dev] [kolla] Propose removal of TrivialFix requirement

2016-11-11 Thread Vikram Hosakote (vhosakot)
Yes, I'm +1 for this.

I've seen TrivialFix used for a lot of non-trivial fixes.

Maybe we need a spec to define what is trivial?  ;)

Regards,
Vikram Hosakote
IRC:  vhosakot

From: Paul Bourke <paul.bou...@oracle.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, November 3, 2016 at 6:51 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [kolla] Propose removal of TrivialFix requirement

Kolleagues,

How do people feel about removing the requirement of having TrivialFix
in commit messages where a bug/bp is not required?

I'm seeing a lot of valid and important commits being held up because of
this, in my opinion, unnecessary requirement. It also causes friction
for new contributors to the project.

Are there any major benefits we're getting from the use of this tag that
I'm missing?

Cheers,
-Paul



Re: [openstack-dev] [charms] Default KeystoneV3 Creds

2016-11-11 Thread Liam Young
On Fri, Nov 11, 2016 at 12:41 AM, James Beedy  wrote:

> Concerning the Juju Keystone charm, and api v3, can someone shed some
> light on how to find the default admin creds needed to access the keystone
> api, or what the novarc file might look like? I'm setting the
> 'admin-password' config of the keystone charm to "openstack", and am using
> this -> http://paste.ubuntu.com/23458768/ for my .novarc, but am getting
> a 401 -> http://paste.ubuntu.com/23458782/
>

https://wiki.ubuntu.com/OpenStack/OpenStackCharms/ReleaseNotes1604#Keystone_v3_API_support
should point you in the right direction


>
> Is there something I'm missing here?
>
> thx
>


Re: [openstack-dev] [openstack-ansible] How do you haproxy?

2016-11-11 Thread David Moreau Simard
I feel like you might get valuable feedback by cross-posting this to
openstack-operators.

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

On Nov 10, 2016 7:40 AM, "Jean-Philippe Evrard"
<jean-philippe.evr...@rackspace.co.uk> wrote:

Hello,

In openstack-ansible, we are using haproxy (and keepalived) as the load
balancing mechanism.

We’ve been recommending hardware load balancers for a while, but I think
it’s time to improve haproxy configuration flexibility and testing.

I’m gathering requirements for what you’d like to see in haproxy.

I have an etherpad set up, and I’d be happy if you could fill in your
comments there:

https://etherpad.openstack.org/p/openstack-ansible-haproxy-improvements

Thank you in advance.

Best regards,

Jean-Philippe Evrard (evrardjp)



Re: [openstack-dev] [all] Embracing new languages in OpenStack

2016-11-11 Thread Thierry Carrez
Pete Zaitcev wrote:
> I'm disappointed that TC voted Golang
> down on August 2, but I can see where they come from.
> 
> The problem we're grappling with on the Swift side is (in my view) mainly
> that the Go reimplementation provides essential performance advantages
> which manifest at a certain scale (around 100 PB with current technology).
> For this reason, ignoring Hummingbird and prohibiting Go is not going to
> suppress them. As the operators deploy Hummingbird in preference to the
> Python implementation, the focus of the development is going to migrate,
> and the end result is going to be an effective exile of a founding
> project from OpenStack.

Yes, and those same performance advantages are why so many people (yes,
including TC members) are uncomfortable with the current state. There is
no question (I think) that Swift would benefit from being able to
include Go-rewritten parts. We just need to make sure that what is a
benefit for Swift doesn't turn out to be a disaster for "OpenStack".

This is in my opinion the real reason for the "not yet" answer that was
given so far -- lack of visibility into what a Go-enabled OpenStack
would look like in the future. The clearer we are with what that future
would look like (not just for Swift but for "OpenStack" as a whole), and
the more people we have lined up to do the necessary work to turn that
specific case into a general success, the more likely it is that a
majority of TC members (who have "OpenStack" interests at heart beyond
any specific project) will support that addition to our toolbelt.

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [kolla] Re: Your draft logo & sneak peek

2016-11-11 Thread Paul Bourke
Reminder Kolla, today is the deadline for feedback on this! Personally I 
think it could use a few tweaks so please don't forget to take a look.


Link: http://tinyurl.com/OSmascot

On 21/10/16 21:58, Steven Dake (stdake) wrote:

Forgot kolla tag in subject.

From: Steven Dake <std...@cisco.com>
Date: Friday, October 21, 2016 at 7:50 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Cc: "Jastrzebski, Michal" <michal.jastrzeb...@intel.com>
Subject: FW: Your draft logo & sneak peek

Folks,

Inline is the draft of the Kolla mascot and logo.  We will be
getting various types of swag at the first PTG related to our mascot.
 The deadline for feedback is November 11th, 2016.  See the email inside
from Heidi for more information on our project mascots.

Super exciting!

Regards
-steve


From: Heidi Joy Tretheway <heidi...@openstack.org>
Date: Friday, October 21, 2016 at 7:16 PM
To: Steven Dake <std...@cisco.com>
Subject: Your draft logo & sneak peek

Hi Steven,

We're excited to show you the draft version of your project logo,
attached. We want to give you and your team a chance to see the mascot
illustrations before we make them official, so we decided to make
Barcelona the draft target, with final logos ready by the Project Team
Gathering in Atlanta in February.

Our illustrators worked as fast as possible to draft nearly 60 logos,
and we're thrilled to see how they work as a family. Here's a 50-second
"sneak peek" at how they came together: https://youtu.be/JmMTCWyY8Y4

We welcome you to *share this logo with your team and discuss it in
Barcelona*. We're very happy to take feedback on it if we've missed the
mark. The style of the logos is consistent across projects, and we did
our best to incorporate any special requests, such as an element of an
animal that is especially important, or a reference to an old logo.



We ask that you don't start using this logo now since it's a draft.
Here's *what you can expect for the final product*:

  * A horizontal version of the logo, including your mascot, project
name and the words "An OpenStack Community project"
  * A square(ish) version of the logo, including all of the above
  * A mascot-only version of the logo
  * Stickers for all project teams distributed at the PTG
  * One piece of swag that incorporates all project mascots, such as a
deck of playing cards, distributed at the PTG
  * All digital files will be available through the website


We know this is a busy time for you, so to take some of the burden of
coordinating feedback off you, we made a feedback
form*:* http://tinyurl.com/OSmascot  You are also welcome to reach out
to Heidi Joy directly with questions or concerns. Please
provide *feedback by Friday, Nov. 11*, so that we can request revisions
from the illustrators if needed. Or, if this logo looks great, just
reply to this email and you don't need to take any further action.

Thank you!
Heidi Joy Tretheway - project lead
Todd Morey - creative lead

P.S. Here's an email that you can copy/paste to send to your team
(remember to attach your logo from my email):

Hi team,
I just received a draft version of our project logo, using the mascot we
selected together. A final version (and some cool swag) will be ready
for us before the Project Team Gathering in February. Before they make
our logo final, they want to be sure we're happy with our mascot.

We can discuss any concerns in Barcelona and you can also provide direct
feedback to the designers: http://tinyurl.com/OSmascot  Logo feedback is
due Friday, Nov. 11. To get a sense of how ours stacks up to others,
check out this sneak preview of several dozen draft logos from our
community: https://youtu.be/JmMTCWyY8Y4


*Heidi Joy Tretheway*
Senior Marketing Manager, OpenStack Foundation
503 816 9769 | Skype: heidi.tretheway








[openstack-dev] [nova] placement/resource providers update

2016-11-11 Thread Chris Dent


I thought I would share some updates on the state of things related
to placement and resource providers. There's almost certainly things
missing from this, so if you know something important that you think
should be mentioned please make a response including it. The point
of this message is simply so people can have some idea of what's in
play at the moment.

Since spec freeze is the 17th, the stuff that presumably matters
most in the list of reviews below are the specs. There are several.
There's quite a lot of pending code changes too. The sooner that
stuff merges the less conflicts it will cause later.

# Leftovers from Newton

These are the things which either should have been done in Newton
and weren't or cleanups of stuff that was done but need some
revision. There is an etherpad[1] tracking these things where if you
have interest and time you can pick up some things to do. There's
been some excellent contribution from outside the usual suspects.

Things that are ready to review:

* Improved 404 responses:
  https://review.openstack.org/#/c/387674/
  https://review.openstack.org/#/c/392321/

* Increased gabbi test coverage (stack of 3):
  https://review.openstack.org/#/c/396363/

* Proper handling of max_unit in resource tracker and placement API
  (stack of 2):
  https://review.openstack.org/#/c/395971/

* Aggregates support in placement api (stack of 3):
  https://review.openstack.org/#/c/362863/

* Demo inventory update script:
  https://review.openstack.org/#/c/382613/
  This one might be considered a WIP because how it chooses to do
  things (rather simply and dumbly) may not be in line with expectations.

* CORS support in placement API:
  https://review.openstack.org/#/c/392891/

* Cleaning up the newton resource providers spec to reflect reality:
  https://review.openstack.org/#/c/366888/

Except for the demo script, none of that should be particularly
controversial.

There are still quite a few things to pick up on the etherpad[1].

[1] https://etherpad.openstack.org/p/placement-newton-leftovers

# Filtering compute nodes with the placement API

Now that the placement API is tracking compute node inventory and
allocations against that inventory it becomes possible to use the
api to shrink the number of compute nodes that nova-scheduler
filters. Sylvain has started the work on that.

* Spec for modifying the scheduler to query the api:
  https://review.openstack.org/#/c/300178/

* Spec for modifying the placement api to return resource providers
  that match requirements:
  https://review.openstack.org/#/c/385618/

* Code that satisfies those two specs (stack of 2, POC):
  https://review.openstack.org/#/c/386242/

The main area of disagreement on this stuff has been how to form the
API for requesting and returning a list of resource providers.
That's in the second patch of the stack above.

# Custom Resource Classes

Custom resource classes provide a mechanism for tracking volumes of
inventory which are unique to a deployment. There's both spec and
code in flight for this:

* The spec
  https://review.openstack.org/#/c/312696/

* Code to make them work in the api (stack of 4):
  https://review.openstack.org/#/c/386844/
  There's a lot of already merged code that establish the
  ResourceClass and ResourceClassList objects

Custom resource classes are important for, amongst other things,
being able to effectively manage different bare metal resources.
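
As a concrete illustration, the in-flight spec proposes creating such a
class through the placement API roughly like this (a sketch based on the
spec under review, not a settled API; the CUSTOM_ prefix convention and
the class name here are illustrative):

    POST /resource_classes
    {
        "name": "CUSTOM_BAREMETAL_GOLD"
    }

A bare metal flavor could then request inventory of that class instead of
the standard VCPU/MEMORY_MB/DISK_GB classes.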

# Nested Resource Providers

Nested resource providers allow hierarchical or containing
relationships in resource providers so it is possible to say things
like "this portion of this device on this compute node" (think
NUMA).

* The spec
  https://review.openstack.org/#/c/386710/

* Code to implement the object and HTTP API changes (stack of 4):
  https://review.openstack.org/#/c/377138/

# Allocations for generic PCI devices

Changes to the resource tracker to allow simple PCI devices to be
tracked.

* Code (stack of 3):
  https://review.openstack.org/#/c/382000/

# Important stuff not in other categories

This section is for loose ends that don't fit in elsewhere. Stuff
we're going to need to figure out at some point.

## Placement DB

The long term plan is for there to be a separate placement database.
One way to ease the migration to that is to allow people to go ahead
and have a placement db from the outset (instead of using the API
db). There's an etherpad[2] that discusses this and some code[3]
that implements it but it's not clear that all the bases are covered
or even need to be. It may be that more discussion is required or it
may simply be that someone needs to untangle the mess and state the
decision clearly.

[2] https://etherpad.openstack.org/p/placement-optional-db-spec
[3] https://review.openstack.org/#/c/362766/ (this is -2d pending
resolution of the stuff on the etherpad)

## Placement Docs

They need to happen. We know. Just need to find the time and
resources. Listing it here so it is clear it is a known thing.

## Placement Upgrade

Re: [openstack-dev] [nova] Problem with Quota and servers spawned in groups

2016-11-11 Thread Sławek Kapłoński
Hello,

Could someone from the core team maybe take a look at this? Thanks in advance :)

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Mon, 07 Nov 2016, Sławek Kapłoński wrote:

> Hello,
> 
> Some time ago I found that there is a problem with inconsistent behaviour
> (IMHO) of the Nova API when a user wants to spawn a bunch of instances in a
> server group (affinity or anti-affinity). I described it in [1].
> I also made a summary of how Nova behaves in different cases of
> spawning instances. The results are below:
> 
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |              QUOTAS              |           |           |                                  |                 |
> +-----------+----------------------+ min_count | max_count |   number of spawned instances    | Expected result |
> | instances | server_group_members |           |           |                                  |                 |
> +===========+======================+===========+===========+==================================+=================+
> |    10     |          5           |     3     |     4     | 4                                | 4               |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     6     |     9     | Quota exceeded, too many servers | Group Quota     |
> |           |                      |           |           | in group (HTTP 403)              | exceeded        |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     3     |     6     | Quota exceeded, too many servers | 5               |
> |           |                      |           |           | in group (HTTP 403)              |                 |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     3     |    11     | Quota exceeded, too many servers | 5               |
> |           |                      |           |           | in group (HTTP 403)              |                 |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     6     |    11     | Quota exceeded, too many servers | Group Quota     |
> |           |                      |           |           | in group (HTTP 403)              | exceeded        |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     3     |     -     | 3                                | 3               |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     6     |     -     | Quota exceeded, too many servers | Group Quota     |
> |           |                      |           |           | in group (HTTP 403)              | exceeded        |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |    11     |     -     | Quota exceeded for instances:    | Servers Quota   |
> |           |                      |           |           | Requested 11, but already used   | exceeded        |
> |           |                      |           |           | 0 of 10 instances                |                 |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     -     |     3     | 3                                | 3               |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     -     |     6     | Quota exceeded, too many servers | Group Quota     |
> |           |                      |           |           | in group (HTTP 403)              | exceeded        |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> |    10     |          5           |     -     |    11     | Quota exceeded, too many servers | Group Quota     |
> |           |                      |           |           | in group (HTTP 403)              | exceeded        |
> +-----------+----------------------+-----------+-----------+----------------------------------+-----------------+
> 
> In "expected result" I described behaviour which I suppose that is should be
> there.
> Can You maybe check it and tell me if I'm right here and there is really bug 
> as
> described in [1]?
> 
> I also made some patch to fix it [2] so please review it if You think that it
> is worth to fix this behaviour.
> 
> [1] http

Re: [openstack-dev] [glance] [stable] New stable releases (Was: [release] Release countdown for week R-14, 14-18 Nov, Ocata-1 milestone)

2016-11-11 Thread Erno Kuvaja
On Thu, Nov 10, 2016 at 7:26 PM, Ian Cordasco  wrote:
> -Original Message-
> From: Doug Hellmann 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: November 10, 2016 at 13:13:02
> To: openstack-dev 
> Subject:  [openstack-dev] [release] Release countdown for week R-14,
> 14-18 Nov, Ocata-1 milestone
>
>> Release Actions
>> ---
>>
>> Release liaisons for projects following the cycle-with-milestones
>> release model should prepare a patch to add the tag for the first
>> milestone by Thursday. Milestones are considered betas, and should
>> have version numbers like X.0.0.0b1 where X is the next major version
>> number for your deliverables.
>>
>> This is a good time to coordinate with the stable maintenance team
>> on releases from stable branches.
>
> Hi all,
>
> As the Glance Release Liaison, I'm wondering if you all agree that we
> should also be proposing patches for new patch releases from our
> stable branches next week. I know we have a few priorities to wrap up,
> but I'm more hoping to get a sense of when we want to tag a new stable
> release of Glance.
>
> Cheers,
> --
> Ian Cordasco
>

Hi Ian,

We should be backporting bugfixes that are within stable scope across
the cycle and releasing when convenient. The general agreement has
been that if a patch is worth backporting, it's worth releasing;
stable branches do not need timed point releases and, unless
absolutely necessary, should not align with the milestone releases of
master or with the final release.

I'll have a look, likely next week, at what we have merged since the
release, what has been backported, and what we need to backport. If
there is something in our stable branches that has not been released
after that, let's get the releases out after the O-1 milestone week.

Thanks,
Erno -jokke- Kuvaja



Re: [openstack-dev] [keystone] Weekly Policy Meeting

2016-11-11 Thread Lance Bragstad
I've added some initial content to the etherpad [0], to get things rolling.
Since this is going to be a recurring thing, I'd like our first meeting to
level set the playing field for everyone. Let's spend some time getting
familiar with policy concepts, understand exactly how OpenStack policy
works today, then we can start working on writing down what we like and
don't like about the existing implementation. I'm sure most people
interested in this work will already be familiar with the problem, but I
want to make it easy for folks who aren't to ramp up quickly and get them
into the discussion.

Some have already started contributing to the etherpad! I've slowly started
massaging that information into our first agenda. I'll continue to do so
and send out another email on Tuesday as a reminder to familiarize
yourselves with the etherpad before the meeting.


Thanks!


[0] https://etherpad.openstack.org/p/keystone-policy-meeting

On Thu, Nov 10, 2016 at 2:36 PM, Steve Martinelli 
wrote:

> Thanks for taking the initiative Lance! It'll be great to hear some ideas
> that are capable of making policy more fine grained, and keeping things
> backwards compatible.
>
> On Thu, Nov 10, 2016 at 3:30 PM, Lance Bragstad 
> wrote:
>
>> Hi folks,
>>
>> After hearing the recaps from the summit, it sounds like policy was a hot
>> topic (per usual). This is also reinforced by the fact every release we
>> have specifications proposed to re-do policy in some way.
>>
>> It's no doubt policy in OpenStack needs work. Let's dedicate an hour a
>> week to policy, analyze what we have currently, design an ideal solution,
>> and aim for that. We can bring our progress to the PTG in Atlanta.
>>
>> We'll hold the meeting openly using Google Hangouts and record our notes
>> using etherpad.
>>
>> Our first meeting will be Wednesday, November 16th from 10:00 AM – 11:00
>> AM Central (16:00 - 17:00 UTC) and it will reoccur weekly.
>>
>> Hangout: https://hangouts.google.com/call/pd36j4qv5zfbldmhxeeatq6f7ae
>> Etherpad: https://etherpad.openstack.org/p/keystone-policy-meeting
>>
>> Let me know if you have any other questions, comments or concerns. I look
>> forward to the first meeting!
>>
>> Lance
>>
>> 


Re: [openstack-dev] [kolla][release] Version numbers for kolla-ansible repository.

2016-11-11 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2016-11-08 21:41:26 +:
> 
> On 11/8/16, 9:08 AM, "Doug Hellmann"  wrote:
> 
> >Excerpts from Steven Dake (stdake)'s message of 2016-11-08 13:08:11 +:
> >> Hey folks,
> >> 
> >> As we split out the repository per our unanimous vote several months ago, 
> >> we have a choice to make (I think, assuming we are given latitude of  the 
> >> release team who is in the cc list) as to which version to call 
> >> kolla-ansible.
> >> 
> >> My personal preference is to completely retag our newly split repo with 
> >> all old tags from kolla git revisions up to version 3.0.z.  The main 
> >> rationale I can think of is kolla-ansible 1 = liberty, 2 = mitaka, 3 = 
> >> newton.  I think calling kolla-ansible 1.0 = newton would be somewhat 
> >> confusing, but we could do that as well.
> >> 
> >> The reason the repository needs to be retagged in either case is to 
> >> generate release artifacts (tarballs, pypi, etc).
> >> 
> >> Would also like feedback from the release team on what they think is a 
> >> best practice here (we may be breaking new ground for the OpenStack 
> >> release team, maybe not – is there prior art here?)
> >> 
> >> For a diagram (mostly for the release team) of the repository split check 
> >> out:
> >> https://www.gliffy.com/go/share/sg9fc5eg9ktg9binvc89
> >> 
> >> Thanks!
> >> -steve
> >
> >When you say "split," I'm going to assume that you mean the
> >openstack/kolla repo has the full history but that openstack/kolla-ansible
> >only contains part of the files and their history.
> 
> Doug,
> 
> I’d like to maintain history for both the repos, and then selectively remove 
> the stuff not needed for each repo (so they will then diverge).

Sure, that's one way to do it. I recommend picking just one of the
repos to have the old tags. I'm not sure if it would be simpler to
keep them in the repo that is current (openstack/kolla, I think?)
because artifact names for the old versions won't change that way,
or to keep all of that history and the stable branches in the repo
where you'll be doing new work to make backporting simpler.

What's the difference between kolla and kolla-ansible?

> >Assuming the history is preserved in openstack/kolla, then I don't
> >think you want new tags. The way to reproduce the 1, 2, or 3 versions
> >is to check out the existing tag in openstack/kolla. Having similar
> >tags in openstack/kolla-ansible will be confusing about which is
> >the actual tag that produced the build artifacts that were shipped
> >with those version numbers.  New versions tagged on master in
> >openstack/kolla-ansible can still start from 4 (or 3.x, I suppose).
> 
> Ok that works.  I think the lesson there is we can’t change the past :)  I 
> think we would want kolla-ansible 
> 
> >
> >Do you maintain stable branches? Are those being kept in openstack/kolla
> >or openstack/kolla-ansible?
> Great question and something I hadn’t thought of.
> 
> Yes we maintain stable branches for liberty, mitaka, and newton.  I’m not 
> sure if a stable branch for liberty is in policy for OpenStack.  Could you 
> advise here?

Liberty is scheduled to be EOL-ed around 17 Nov, so if you have the
branch I would keep it for now and go through the EOL process normally.

> 
> I believe the result we want is to maintain the stable branches for 
> liberty/mitaka/newton in kolla and then tag kolla-ansible Ocata as 4.0.0.  I 
> don’t know if we need the 1/2/3 tags deleted in this case.  Could you advise?
> 
> Thanks for your help and contributions Doug :)
> 
> Regards
> -steve
> 
> >
> >Doug
> >


[openstack-dev] [new] nova_powervm 3.0.1 release (newton)

2016-11-11 Thread no-reply
We are frolicsome to announce the release of:

nova_powervm 3.0.1: PowerVM driver for OpenStack Nova.

This release is part of the newton stable release series.

Download the package from:

https://tarballs.openstack.org/nova_powervm/

For more details, please see below.






Re: [openstack-dev] [nova] placement/resource providers update

2016-11-11 Thread Matt Riedemann

On 11/11/2016 6:59 AM, Chris Dent wrote:


[snip - full quote of the "placement/resource providers update" message above]

Re: [openstack-dev] [tripleo] Is it time to reconsider how we configure OVS bridges in the overcloud?

2016-11-11 Thread Brent Eagles
Hi Dan,

On Thu, Nov 10, 2016 at 1:25 PM, Dan Sneddon  wrote:

> ​
>
> Brent,
>
> Thanks for taking the time to analyze this situation. I see a couple of
> potential issues with the topology you are suggesting.
>
> First of all, what about the scenario where a system has only 2x10Gb
> NICs, and the operator wishes to bond these together on a single
> bridge? If we require separate bridges for Neutron than we do for the
> control plane, then it would be impossible to configure a system with
> only 2 NICs in a fault-tolerant way.
>

Unless I'm missing something, I think this would be similar to the
single-NIC case. In the case where there is only one bond, it would be
bridged to the main overcloud bridge (e.g. br-ctl) and any bridges for
external traffic (e.g. br-ex) would be patched to that. The fault tolerance
of the bond would still be available to all. The cost would be whatever
extra overhead the br-ex->br-ctl patch ports incur.


>
> Second, there will be a large percentage of users who already have a
> shared br-ex that wish to upgrade. Do we tell them that due to an
> architectural change, they now must redeploy a new cloud with a new
> topology to use the latest version?
>

> So while I would be on-board with changing our default for new
> installations, I don't think that relieves us of the responsibility to
> figure out how to handle the edge cases where a separate bridge is not
> feasible.
> ​
>


> --
> Dan Sneddon |  Senior Principal OpenStack Engineer
>
>
This is a very good and important point. Migration could be a pain. If it
is a bridge that neutron uses, neutron manipulates it pretty quickly after
startup. However, if an update would only write the ifcfg changes and
require a reboot for them to take effect, then that might take care of
that. Probably more problematic are the other IPs that are assigned to the
"main" bridge. If anything is keying on the bridge name, then that becomes a
problem. If it's the IP, then it should follow the bridge. There is also
the danger of any number of other dependencies or tooling that might need
updating. The bright spot is that neutron's config wouldn't actually
change. In short, it would probably be doable, but I find myself feeling
that I wouldn't recommend it unless it was absolutely necessary and/or it
fit some kind of scenario that we had worked out to be extremely low risk
and tested heavily.

Cheers,

Brent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Fedora AArch64 (64-bit ARM) support in diskimage-builder

2016-11-11 Thread dmarlin


I have been looking at using diskimage-builder on Fedora AArch64. While
there is 64-bit ARM support for Ubuntu (arm64), there appear to be a few
things missing for Fedora. Is this the correct list to ask questions and
propose minor changes to diskimage-builder in support of this effort?



Thank you,

d.marlin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance] content-type always set to application/octet-stream

2016-11-11 Thread Carlos Konstanski

glance --version : 2.5.0
File: common/http.py
Class: _BaseHTTPClient
Method: _set_common_request_kwargs
Python version: 3.4

First problem
-

There is a line of code in this method that fetches the Content-Type
header from the passed-in headers dict:

content_type = headers.get('content-type', 'application/octet-stream')

However, the lookup fails because the keys in the headers dict are byte
strings rather than unicode:

(Pdb) print(headers)
{b'Content-Type': b'application/openstack-images-v2.1-json-patch'}

Therefore the lookup fails:

(Pdb) print(headers.get("Content-Type"))
None

But if we use a key that is coerced to the right type, it works:

(Pdb) print(headers.get(b"Content-Type"))
b'application/openstack-images-v2.1-json-patch'

The question I have yet to answer is: are these strings supposed to be
byte sequences, or did they get converted by mistake due to a different
bug? That's really the first thing to figure out.

Second problem
--

The headers.get() call gets the case wrong. It should be Content-Type,
but instead it uses content-type.
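
Purely as illustration, a lookup helper along these lines would tolerate
both problems at once (the names are mine, not glance's actual code):

    def get_header(headers, name, default=None):
        # Case- and type-insensitive header lookup (illustrative only).
        want = name.lower()
        for key, value in headers.items():
            if isinstance(key, bytes):
                # Normalize bytes keys back to str before comparing.
                key = key.decode('utf-8')
            if key.lower() == want:
                return value
        return default

    headers = {b'Content-Type': b'application/openstack-images-v2.1-json-patch'}
    print(get_header(headers, 'content-type'))
    # b'application/openstack-images-v2.1-json-patch'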

Path to fix
---

I would be happy to fix this. This would be my first upstream
contribution, and I know there is a process to even get to square
one. Should I start down that path, or does someone else wish to address
this bug?

Sincerely,
Carlos Konstanski

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Is it time to reconsider how we configure OVS bridges in the overcloud?

2016-11-11 Thread Russell Bryant
On Thu, Nov 10, 2016 at 10:22 AM, Brent Eagles  wrote:

> To something like:
> (triple configured)
> - eth0
>  - br-ctl - used as br-ex is currently used except neutron knows nothing
> about it.
> - br-ex -patched to br-ctl - ostensibly for external traffic and this is
> what neutron in the overcloud is configured to use
> (neutron configured)
> - br-int
> - br-tun
>
> (In all cases, neutron configures patches, etc. between bridges *it knows
> about* as needed. That is, in the second case, tripleo would configure the
> patch between br-ctl and br-ex)
>
+1.  I think this configuration makes more sense than the previous
overlapping usage of br-ex.

> At the cost of an extra bridge (ovs bridge to ovs bridge with patch ports
> is allegedly cheap btw) we get:
>
Yes, it is cheap.  I would not consider this a concern from a data path
performance point of view.  The separate bridges and patch ports only exist
in userspace.  They don't affect the fast path and the impact to the slow
path is negligible.


>  1. an independently configured bridge for overcloud traffic insulates
> non-tenant node traffic against changes to neutron, including upgrades,
> neutron bugs, etc.
>  2. insulates neutron from changes to the underlying network that it
> doesn't "care" about.
>  3. In OVS only environments, the difference between a single nic
> environment and one where there is a dedicated nic for external traffic is,
> instead of a patch port from br-ctl to br-ex, it is directly connected to
> the nic for the external traffic.
>
> Even without the issue that instigated this message, I think that this is
> a change worth considering.
>

Nice writeup, Brent.

-- 
Russell Bryant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][tripleo][ansible][puppet][all] changing default token format

2016-11-11 Thread Steve Martinelli
It's been a few days now, and I think we're OK. We are not seeing the
tempest failures related to race conditions like we saw before. No major
CI breaks; there seems to be an uptick in failures, but those look like
timeouts and not related (I hope!). Thanks to everyone who worked on this!

On Thu, Nov 3, 2016 at 10:11 AM, Steve Martinelli 
wrote:

> As a heads up to some of keystone's consuming projects, we will be
> changing the default token format from UUID to Fernet. Many patches have
> merged to make this possible [1]. The last 2 that you probably want to look
> at are [2] and [3]. The first flips a switch in devstack to make fernet the
> selected token format, the second makes it default in Keystone itself.
>
> [1] https://review.openstack.org/#/q/topic:make-fernet-default
> [2] DevStack patch: https://review.openstack.org/#/c/367052/
> [3] Keystone patch: https://review.openstack.org/#/c/345688/
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] content-type always set to application/octet-stream

2016-11-11 Thread Carlos Konstanski

On Fri, 11 Nov 2016, Carlos Konstanski wrote:


[quoted original message trimmed; see the full text above]


Found the issue: oslo_utils/encodeutils.py:66, method safe_encode()

It does this: return text.encode(encoding, errors)

encoding == "utf-8". This results in an object of type bytes, not str.

So glance should be fixed to work with bytes.
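
A minimal reproduction (safe_encode is the real oslo_utils helper; the
header value is the one from my report):

    from oslo_utils import encodeutils

    key = encodeutils.safe_encode('Content-Type')
    print(type(key))                     # <class 'bytes'> on Python 3

    headers = {key: b'application/openstack-images-v2.1-json-patch'}
    print(headers.get('Content-Type'))   # None: a bytes key never matches str
    print(headers.get(b'Content-Type'))  # b'application/openstack-images-v2.1-json-patch'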

Carlos

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [stable][release] Final releases for stable/liberty and liberty-eol

2016-11-11 Thread Matt Riedemann
Yesterday in the nova team meeting I asked tonyb about final liberty 
releases and EOL so I could get an idea of what needs to happen there. I 
haven't seen that communicated broadly yet so I wanted to pass along 
what Tony told me. As I understand it:


1. Final stable/liberty releases should happen next week, probably by 
Thursday 11/17.


2. The liberty-eol tag is going to happen the week after that (this 
always seems to get fudged a bit though when it actually happens). But 
teams should not plan on getting more releases and changes into liberty 
after the 17th.


Tony - please correct me if any of that is wrong.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement / resource providers ocata summit session recaps

2016-11-11 Thread Jay Pipes
Matt, thanks much for your excellent summary of the resource provider 
sessions in Barcelona. A couple minor corrections noted below.


On 11/02/2016 01:54 PM, Matt Riedemann wrote:

- Custom resource classes

The code for this is moving along and being reviewed. There will be
namespaces on the standard resource classes that nova provides.


Note that the opposite is what was decided: there will be no namespace 
on standard resource classes, but we will be using a CUSTOM_ string 
prefix for any resource classes that are added via the REST API (i.e. 
they are user-created "custom resource classes").


All other resource classes that exist now (or in the future) in the list 
of standard resource classes will have no string prefix.
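
For example, creating one through the API might look like this sketch
(the endpoint, port, and microversion here are placeholders of mine, not
settled API details):

    import requests

    TOKEN = '...'  # a valid keystone token

    # Standard classes (VCPU, MEMORY_MB, DISK_GB, ...) keep their bare
    # names; anything created via the API must carry the CUSTOM_ prefix.
    resp = requests.post(
        'http://controller:8778/resource_classes',
        headers={'X-Auth-Token': TOKEN,
                 'OpenStack-API-Version': 'placement 1.2'},
        json={'name': 'CUSTOM_BAREMETAL_GOLD'})
    print(resp.status_code)  # 201 on success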



The resource tracker will create inventory/allocation records for the
Ironic nodes. The Ironic inventory records will use the
node.resource_class value as the custom resource class.


node.node_class attribute, but yes.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila] Migration options

2016-11-11 Thread Ben Swartzlander
After the long and contentious discussion about migration options, 
Rodrigo and I have reached agreement about how we should handle them 
that works for both of us, and I will share it here before Rodrigo 
updates the spec. Discussion about the proposal can continue here on the 
ML or in the spec, but the final decision will be made through the spec 
approval process.


===

The main assumptions that drive this conclusion are:
* The API design must not violate our agreed-upon versioning 
(microversions) scheme as the API evolves over time.
* Actions requested by a client must result in behavior that is the same 
across server versions. It's not okay to get "bonus" behaviors due to a 
server upgrade if the client doesn't ask for them.


For the REST API, we propose that all the options are mandatory 
booleans. There will be no defaults at the API level. Values of false 
for options will always be compatible with the fallback/universal 
migration strategy. The options will be:

* writable
* preserve-metadata
* non-disruptive
* preserve-snapshots

Omitting any one of these is an error. This ensures safety by making 
clients send an explicit value of true or false, so there are no surprise 
downsides to performing a migration.
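
As a sketch, a migration request under this proposal might carry a body
like the following (the action name and field spellings are illustrative,
not the final API):

    body = {
        'migration_start': {
            'host': 'backend2@generic#pool1',
            # All four are mandatory booleans; false always stays
            # compatible with the fallback/universal strategy.
            'writable': True,
            'preserve-metadata': True,
            'non-disruptive': False,
            'preserve-snapshots': False,
        }
    }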


For future migration options, they will be added with a new microversion 
and they will also be required options (in the new microversion). Newer 
server versions will provide backwards compatibility by defaulting 
options to false when clients invoke older microversions where that 
option didn't exist.


For the python client, we propose that options are mandatory on the CLI. 
Again this provides safety by avoiding situations where users who don't 
read the docs are surprised by the behavior they get. This ensures that 
CLI scripts that invoke migrations will never break as long as the 
client version remains the same, and that they will get consistent 
behavior across all server versions that support that client.


Updating to newer python clients may introduce new required parameters 
though, which can break old scripts, so this will have the effect of 
tying CLI scripts to specific client versions.


The client will provide backwards compatibility with older servers by 
only allowing requests to be sent if the user specifies a value of false 
for any option that doesn't exist on the server. This ensures that users 
always get exactly what they ask for, or an error if what they asked for 
can't be provided. It again avoids surprise behavior.
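
An illustrative client-side guard (again a sketch of mine, not actual
manilaclient code):

    def require_known(name, value, server_supports):
        # An option the server doesn't know about can only ever be
        # honored as false; anything else must fail loudly rather
        # than silently degrade.
        if value and not server_supports:
            raise SystemExit('--%s true cannot be satisfied by this '
                             'server version' % name)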


For the UI, options will always be specified as checkboxes, which will 
default to checked (true).


===

This proposal was arrived at after thinking through a long list of 
hypothetical use cases involving newer and older clients and servers as 
well as use of apps which import the client, usage of the client's CLI 
interface, usage of the UI, and direct calls to the REST API without 
using the client.


Also we specifically considered hypothetical future migration options 
and how they would affect the API. I'm confident that this is as "safe" 
as the API and CLIs can be, and the only downside I can see to this 
approach is that it's more verbose than alternatives that include 
implicit defaults.


-Ben Swartzlander

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement / resource providers ocata summit session recaps

2016-11-11 Thread Jay Pipes

On 11/11/2016 03:19 PM, Matt Riedemann wrote:

On 11/11/2016 12:28 PM, Jay Pipes wrote:

Matt, thanks much for your excellent summary of the resource provider
sessions in Barcelona. A couple minor corrections noted below.

On 11/02/2016 01:54 PM, Matt Riedemann wrote:

- Custom resource classes

The code for this is moving along and being reviewed. There will be
namespaces on the standard resource classes that nova provides.


Note that the opposite is what was decided: there will be no namespace
on standard resource classes, but we will be using a CUSTOM_ string
prefix for any resource classes that are added via the REST API (i.e.
they are user-created "custom resource classes").

All other resource classes that exist now (or in the future) in the list
of standard resource classes will have no string prefix.


OK, I was going off what we discussed on Friday at the summit, but I
suspect that changed during code review. Let's make this clear in the spec.


++


The resource tracker will create inventory/allocation records for the
Ironic nodes. The Ironic inventory records will use the
node.resource_class value as the custom resource class.


node.node_class attribute, but yes.


Are you sure? Is it not this?

https://github.com/openstack/ironic/blob/541947f9060b146d25f97025eda6ca69b1c87226/ironic/db/sqlalchemy/models.py#L121


Hmm, apologies, checking the API you are correct. Sorry about that.

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement / resource providers ocata summit session recaps

2016-11-11 Thread Matt Riedemann

On 11/11/2016 12:28 PM, Jay Pipes wrote:

Matt, thanks much for your excellent summary of the resource provider
sessions in Barcelona. A couple minor corrections noted below.

On 11/02/2016 01:54 PM, Matt Riedemann wrote:

- Custom resource classes

The code for this is moving along and being reviewed. There will be
namespaces on the standard resource classes that nova provides.


Note that the opposite is what was decided: there will be no namespace
on standard resource classes, but we will be using a CUSTOM_ string
prefix for any resource classes that are added via the REST API (i.e.
they are user-created "custom resource classes").

All other resource classes that exist now (or in the future) in the list
of standard resource classes will have no string prefix.


OK, I was going off what we discussed on Friday at the summit, but I 
suspect that changed during code review. Let's make this clear in the spec.





The resource tracker will create inventory/allocation records for the
Ironic nodes. The Ironic inventory records will use the
node.resource_class value as the custom resource class.


node.node_class attribute, but yes.


Are you sure? Is it not this?

https://github.com/openstack/ironic/blob/541947f9060b146d25f97025eda6ca69b1c87226/ironic/db/sqlalchemy/models.py#L121



Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-11 Thread Alex Schultz
On Thu, Nov 3, 2016 at 11:31 PM, Sam Morrison  wrote:
>
> On 4 Nov. 2016, at 1:33 pm, Emilien Macchi  wrote:
>
> On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison  wrote:
>
> Wow I didn’t realise puppet3 was being deprecated. Is anyone actually using
> puppet4?
>
> I would hope that the openstack puppet modules would support puppet3 for a
> while still, at least until the next ubuntu LTS is out, else we would get to
> the stage where the openstack release supports Xenial but the corresponding
> puppet module would not? (Xenial has puppet3)
>
>
> I'm afraid we made a lot of communications around it but you might
> have missed it, no problem.
> I have 3 questions for you:
> - for what reasons would you not upgrade puppet?
>
>
> Because I’m a time poor operator with more important stuff to upgrade :-)
> Upgrading puppet *could* be a big task and something we haven’t had time to
> look into. Don’t follow along with puppetlabs so didn’t realise puppet3 was
> being deprecated. Now that this has come to my attention we’ll look into it
> for sure.
>
> - would it be possible for you to use puppetlabs packaging if you need
> puppet4 on Xenial? (that's what upstream CI is using, and it works
> quite well).
>
>
> OK thats promising, good to know that the CI is using puppet4. It’s all my
> other dodgy puppet code I’m worried about.
>
> - what version of the modules do you deploy? (and therefore what
> version of OpenStack)
>
>
> We’re using a mixture of newton/mitaka/liberty/kilo, sometimes the puppet
> module version is newer than the openstack version too depending on where
> we’re at in the upgrade process of the particular openstack project.
>
> I understand progress must go on, I am interested though in how many
> operators use puppet4. We may be in the minority and then I’ll be quiet :-)
>
> Maybe it should be deprecated in one release and then dropped in the next?
>

So this has been talked about for a while and we have attempted to
gauge puppet 3 vs. puppet 4 usage over the last year or so.
Unfortunately, with the upstream modules also dropping puppet 3
support, we're kind of stuck following their lead. We recently got
nailed when the puppetlabs-ntp module became puppet 3 incompatible
and we had to pin to an older version.  That being said, we can try
to hold off any possible incompatibilities in our modules until
either late in this cycle or maybe until the start of the next cycle.
We will have several milestone releases for Ocata that will still be
puppet 3 compatible (one being next week), so that might be an option
as well.  I understand the extra work this may cause, which is why
we're trying to give as much advance notice as possible.  In the
current forecast I don't see any work that will make our modules
puppet 3 incompatible, but we're also at the mercy of the community
at large.  We will definitely drop puppet 3 at the start of Pike if
we manage to make it through Ocata without any required changes.  I
think it'll be more evident early next year after the puppet 3 EOL
finally hits.

Thanks,
-Alex

>
> Cheers,
> Sam
>
>
>
>
>
>
> My guess is that this would also be the case for RedHat and other distros
> too.
>
>
> Fedora is shipping Puppet 4 and we're going to do the same for Red Hat
> and CentOS7.
>
> Thoughts?
>
>
>
> On 4 Nov. 2016, at 2:58 am, Alex Schultz  wrote:
>
> Hey everyone,
>
> Puppet 3 is reaching its end of life at the end of this year[0].
> Because of this we are planning on dropping official puppet 3 support
> as part of the Ocata cycle.  While we currently are not planning on
> doing any large scale conversion of code over to puppet 4 only syntax,
> we may allow some minor things in that could break backwards
> compatibility.  Based on feedback we've received, it seems that most
> people who may still be using puppet 3 are using older (< Newton)
> versions of the modules.  These modules will continue to be puppet 3.x
> compatible but we're using Ocata as the version where Puppet 4 should
> be the target version.
>
> If anyone has any concerns or issues around this, please let us know.
>
> Thanks,
> -Alex
>
> [0] https://puppet.com/misc/puppet-enterprise-lifecycle
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
>
>
> --
> Emilien Macchi
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update

2016-11-11 Thread Matt Riedemann

On 11/11/2016 10:24 AM, Matt Riedemann wrote:


This might be easier to swallow in chunks if we have some topics that
need discussing, such as:

- deploying it, what is needed for this? (maybe link to the devstack
patch that added it as a reference)
- when is it needed? answer: during newton, before ocata
- docs on the actual REST API
- docs on the microversions / history and how to make a request with a
microversion
- maybe a little high-level background on how nova is using this in the
resource tracker, and then links to reference specs

If we had a list of topics built up, we could start working the changes in
a series so it's easier to digest.



I got a start on docs today, patch is here:

https://review.openstack.org/#/c/396761/
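
As a concrete example for the microversion topic above, a request might
look like this sketch (the endpoint and token are placeholders; placement
takes the OpenStack-API-Version header):

    import requests

    TOKEN = '...'  # a valid keystone token

    resp = requests.get(
        'http://controller:8778/resource_providers',
        headers={
            'X-Auth-Token': TOKEN,
            # Omit this header and you get the minimum microversion.
            'OpenStack-API-Version': 'placement 1.1',
        })
    print(resp.status_code, resp.json())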

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] containerized compute CI status

2016-11-11 Thread Wesley Hayutin
Greetings,

I wanted to send a status update on the containerized compute CI; this is
still very much a work in progress and is not yet working end to end.  I'm
hoping that by sending this out early in the process we'll all benefit from
developing the CI as the feature is developed.

Everything you'll need to know to use this is documented here [1]; I am
filling out details on two known issues [2].

Any feedback with regard to the role, the documentation, or anything in
general is welcome.  I hope that if you try this, you find it easy to
understand and develop with.

Thank you!


[1]
https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-prep-containers/
[2]
https://github.com/redhat-openstack/ansible-role-tripleo-overcloud-prep-containers/issues
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][scheduler] Next Scheduler subteam meeting

2016-11-11 Thread Ed Leafe
The next meeting of the Nova Scheduler subteam will be on Monday, November 14 
at 1400 UTC in #openstack-meeting-alt
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20161114T14

As always, the agenda is here: 
https://wiki.openstack.org/wiki/Meetings/NovaScheduler

Please add any items you’d like to discuss to the agenda before the meeting.


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Registration open for the Project Teams Gathering, Atlanta Feb 20-24, 2017

2016-11-11 Thread Jay Pipes

On 11/08/2016 04:29 PM, Thierry Carrez wrote:

Hi everyone,

The registration is now open for the first OpenStack Project Teams
Gathering event (which will take place in Atlanta, GA the week of
February 20, 2017). This event is geared toward existing upstream team
members, and will provide a venue for those project teams to meet,
discuss and organize the development work for the Pike release.

To learn more about the event (and find the registration link), please
check out the event page at:

http://www.openstack.org/ptg

The event will happen in the Sheraton hotel in downtown Atlanta. You can
find a link to book a room at the event hotel at a discounted rate on
the "Travel" tab in that page. On that same tab you can also find
details about our expanded Travel Support program and a link to apply.


As an FYI, the "discounted rate" at the Sheraton is $185/night for a 
single room.


Go on hotels.com and enter "Atlanta, GA 30303" for 20th through 24th 
February and you can find plenty of hotels within walking distance for 
*less* than the "discounted rate" from Sheraton.


Alternately, go on Airbnb and book apartments for a *fraction* of the 
cost of any of the hotels around there:


https://www.airbnb.com/s/Atlanta--GA-30303?adults=2&checkin=02%2F20%2F2017&checkout=02%2F24%2F2017&guests=2&source=bb&ss_id=g9pq1wkj&page=1&s_tag=3noibCNy&allow_override%5B%5D=

There were 148 matching listings as of this afternoon.

One of the purposes of the PTG was to have a less expensive, less 
marketing/sales-oriented event. So, it would be great to encourage the 
management at your contributing company to get on the cost-savings wagon 
and use a less-expensive option for lodging.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] More file injection woes

2016-11-11 Thread Matt Riedemann
Chris Friesen reported a bug [1] where injected files on a server aren't 
in the guest after it's evacuated to another compute host. This is 
because the injected files aren't persisted in the nova database at all. 
Evacuate and rebuild use similar code paths, but rebuild is a user 
operation whose command line is similar to boot, while evacuate is an 
admin operation and the admin doesn't have the original injected files.


We've talked about issues with file injection before [2] - in that case, 
not being able to tell whether injection can be honored: the files 
silently don't get injected, but the server build doesn't fail. We could 
eventually resolve that with capabilities discovery in the API.


There are other issues with file injection, like potential security 
issues, and we've talked about getting rid of it for years because you 
can use the config drive.


The metadata service is not a replacement, as noted in the code [3], 
because the files aren't persisted in nova so they can't be served up later.


I'm sure we've talked about this before, but if we were to seriously 
consider deprecating file injection, what does that look like?  Thoughts 
off the top of my head are:


1. Add a microversion to the server create and rebuild REST APIs such 
that the personality files aren't accepted unless:


a) you're also building the server with a config drive
b) or CONF.force_config_drive is True
c) or the image has the 'img_config_drive=mandatory' property

2. Deprecate VFSLocalFS in Ocata for removal in Pike. That means 
libguestfs is required. We'd do this because I think VFSLocalFS is the 
one with potential security issues.
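
To make (1) concrete, here is a sketch of the check (names and structure
are mine, not nova's actual validation code):

    def validate_personality(server_body, image_props, force_config_drive):
        # Reject personality files unless a config drive is guaranteed
        # by one of the three conditions in (1).
        if not server_body.get('personality'):
            return
        guaranteed = (
            server_body.get('config_drive') in (True, 'True')
            or force_config_drive                # CONF.force_config_drive
            or image_props.get('img_config_drive') == 'mandatory')
        if not guaranteed:
            raise ValueError('personality files require a config drive '
                             'at this microversion')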




Am I missing anything? Does this sound like a reasonable path forward? 
Are there other use cases out there for file injection that we don't 
have alternatives for like config drive?


Note I'm cross-posting to the operators list for operator feedback there 
too.


[1] https://bugs.launchpad.net/nova/+bug/1638961
[2] http://lists.openstack.org/pipermail/openstack-dev/2016-July/098703.html
[3] 
https://github.com/openstack/nova/blob/b761ea47b97c6df09e21755f7fbaaa2061290fbb/nova/api/metadata/base.py#L179-L187


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] openstack/vmware-nsx installation instructions

2016-11-11 Thread Micheal B
What are the steps to install the vmware-nsx neutron drivers from
https://github.com/openstack/vmware-nsx?

 

pip install -r requirements.txt and python setup.py install I've got, but I
just cannot remember how to generate the nsx.ini file... tox?

 

And if there are any further documentation links or howtos on the nsx
drivers, that would be great as well.

 

thanks

 

Mike

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev