[openstack-dev] [gnocchi] Support for other drivers - influxdb

2016-08-01 Thread Sam Morrison
Hi Gnocchi Devs,

We have been using gnocchi for a while now with the InfluxDB driver and are 
keen to get the InfluxDB driver back into upstream.

However, looking into the code and how it’s arranged, it looks like there are a 
lot of assumptions that the backend storage driver is carbonara-based.

Is gnocchi an API for time series DBs or is it a time series DB itself? 

In saying that, we have resurrected the driver in the stable/2.1 branch and it 
works great. 

Running tox I get:

==
Totals
==
Ran: 2655 tests in 342 sec.
 - Passed: 2353
 - Skipped: 293
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 9
Sum of execute time for each test: 1135.6970 sec.

The tests that are failing are due to the way carbonara and influx handle 
retention and multiple granularities differently (which we can work around 
outside of gnocchi for now).

So I guess I’m wondering if there will be support for other drivers apart from 
carbonara?

We use influx because we already use it for other stuff within our organisation 
and don’t want to set up ceph or swift (which is quite an endeavour) to support 
another time series DB.
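
For context, writing and querying a sample from Python looks roughly like
this (a sketch using the influxdb-python client; the measurement and tag
names are illustrative, not the driver's actual schema):

    from influxdb import InfluxDBClient  # influxdb-python

    client = InfluxDBClient(host='localhost', port=8086, database='gnocchi')
    client.create_database('gnocchi')

    # One point per metric sample; tagging by metric id gives cheap
    # per-metric filtering, which is roughly what a storage driver needs.
    client.write_points([{
        'measurement': 'cpu_util',
        'tags': {'metric_id': 'METRIC_UUID'},  # placeholder id
        'time': '2016-08-01T00:00:00Z',
        'fields': {'value': 42.0},
    }])

    result = client.query('SELECT mean(value) FROM cpu_util '
                          'WHERE time > now() - 1h GROUP BY time(5m)')
    print(list(result.get_points()))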

Thanks,
Sam


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Shamail
Thanks Doug,

> On Aug 1, 2016, at 10:44 AM, Doug Hellmann  wrote:
> 
> Excerpts from Shamail Tahir's message of 2016-08-01 09:49:35 -0500:
>>> On Mon, Aug 1, 2016 at 7:58 AM, Doug Hellmann  wrote:
>>> 
>>> Excerpts from Sean Dague's message of 2016-08-01 08:33:06 -0400:
> On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> One of the outcomes of the discussion at the leadership training
> session earlier this year was the idea that the TC should set some
> community-wide goals for accomplishing specific technical tasks to
> get the projects synced up and moving in the same direction.
> 
> After several drafts via etherpad and input from other TC and SWG
> members, I've prepared the change for the governance repo [1] and
> am ready to open this discussion up to the broader community. Please
> read through the patch carefully, especially the "goals/index.rst"
> document which tries to lay out the expectations for what makes a
> good goal for this purpose and for how teams are meant to approach
> working on these goals.
> 
> I've also prepared two patches proposing specific goals for Ocata
> [2][3].  I've tried to keep these suggested goals for the first
> iteration limited to "finish what we've started" type items, so
> they are small and straightforward enough to be able to be completed.
> That will let us experiment with the process of managing goals this
> time around, and set us up for discussions that may need to happen
> at the Ocata summit about implementation.
> 
> For future cycles, we can iterate on making the goals "harder", and
> collecting suggestions for goals from the community during the forum
> discussions that will happen at summits starting in Boston.
> 
> Doug
> 
> [1] https://review.openstack.org/349068 describe a process for
>>> managing community-wide goals
> [2] https://review.openstack.org/349069 add ocata goal "support
>>> python 3.5"
> [3] https://review.openstack.org/349070 add ocata goal "switch to
>>> oslo libraries"
 
 I like the direction this is headed. And I think for the test items, it
 works pretty well.
 
 I'm trying to think about how we'd use a model like this to support
 something a little more abstract such as making upgrades easier. Where
 we've got a few things that we know get in the way (policy in files,
 rootwrap rules, paste ini changes), as well as validation, as well as
 configuration changes. And what it looks like for persistently important
 items which are going to take more than a cycle to get through.
>>> 
>>> If we think the goal can be completed in a single cycle, then those
>>> specific items can just be used to define "done" ("all policy
>>> definitions have defaults in code and the service works without a policy
>>> configuration file" or whatever). If the goal cannot be completed in a
>>> single cycle, then it would need to be broken up in to phases.
>>> 
 
 Definitely seems worth giving it a shot on the current set of items, and
 see how it fleshes out.
 
 My only concern at this point is that it seems like we're building nested
 data structures that people are going to want to parse into some kind of
 visualization in RST, which is a sub-optimal parsing format. If we know
 that people want to parse this in advance, yamling it up might be in
 order. Because this mostly looks like it would reduce to one of those
 green/yellow/red checker boards by project and task.
>>> 
>>> That's a good idea. How about if I commit to translate what we end
>>> up with to YAML during Ocata, but we evolve the first version using
>>> the RST since that's simpler to review for now?
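
For illustration, a machine-readable goals file and a trivial status report
over it might look something like this (a sketch in Python with PyYAML; the
schema is hypothetical, not the governance repo's actual format):

    import yaml  # PyYAML

    # Hypothetical schema, for illustration only.
    GOALS_YAML = """
    goals:
      - name: support-python-3.5
        release: ocata
        projects:
          nova:
            status: in-progress
            artifact: https://review.openstack.org/349069
          heat:
            status: complete
    """

    for goal in yaml.safe_load(GOALS_YAML)['goals']:
        for project, info in sorted(goal['projects'].items()):
            print('%s / %s: %s' % (goal['name'], project, info['status']))
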
>> 
>> We have created a tracker file[1][2] for user stories (minor changes
>> pending based on feedback) in the Product WG repo.  We are currently
>> working with the infra team to get a visualization tool deployed that shows
>> the status for each artifact and provides links so that people can get more
>> details as necessary.  Could something similar be (re)used here?
> 
> Possibly. I don't want to tie the governance part of the process
> too tightly to any project management tools, since those tend to
> change, but if the project-specific tracking artifacts exist in
> that form then linking to them would be appropriate.
The purpose of the tracking is to link existing project-level artifacts 
including cross-project specs and service-level specs/blueprints.  Once the 
tool is deployed, we can see if it fits this need.
> 
>> 
>> I also have a general question about whether goals could be documented as
>> user stories[3]?
> 
> I would expect some of the goals to come from user stories, and in
> those cases references to those stories would be appropriate.
> However, we need much more specific detail to describe "done" than
> is typically found in a user story.

Re: [openstack-dev] [puppet] Propose Sofer Athlan-Guyot (chem) part of Puppet OpenStack core

2016-08-01 Thread Emilien Macchi
A great number of positive votes, thanks for your feedback.
Thanks Sofer, and keep rocking!

On Mon, Aug 1, 2016 at 5:05 AM, Sofer Athlan-Guyot  wrote:
> Hi,
>
> Thanks everyone for your support, it's appreciated.  Now, let's +2
> something :)
>
> Emilien Macchi  writes:
>
>> You might not know who Sofer is but he's actually "chem" on IRC.
>> He's the guy who will find the root cause of insane bugs, in OpenStack
>> in general but also in Puppet OpenStack modules.
>> Sofer has been working on Puppet OpenStack modules for a while now,
>> and is already core in puppet-keystone. Many times he brought his
>> expertise to make our modules better.
>> He's always here on IRC to help folks and has an excellent understanding
>> of how our project works.
>>
>> If you want stats:
>> http://stackalytics.com/?user_id=sofer-athlan-guyot=commits
>> I'm quite sure Sofer will make more reviews over time but I have
>> no doubt he fully deserves to be part of the core reviewers now, with his
>> technical experience and involvement.
>>
>> As usual, it's an open decision, please vote +1/-1 about this proposal.
>>
>> Thanks,
>
> --
> Sofer Athlan-Guyot
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][requirements] Re: [Openstack-stable-maint] Stable check of openstack/heat failed

2016-08-01 Thread Ethan Lynn
Hi Tony,
  the patch for the master branch, https://review.openstack.org/#/c/347634/,
is merged; the patches for mitaka (https://review.openstack.org/#/c/347637/)
and liberty (https://review.openstack.org/#/c/347639/) are being reviewed.

Best Regards,
Ethan Lynn
xuanlangj...@gmail.com




> On Jul 28, 2016, at 02:19, Tony Breeds  wrote:
> 
> On Wed, Jul 27, 2016 at 02:20:38PM +0800, Ethan Lynn wrote:
>> Hi Tony,
>>  I submitted a patch to use upper-constraints for review,
>>  https://review.openstack.org/#/c/347639/ . Let's wait for the feedback
>>  and results.
> 
> Thanks.  I see that you have reviews for master, mitaka and liberty.  Thanks 
> for doing that.
> 
> Once the master patch merges, let me know and I'll help approve the stable 
> patches
> 
> Yours Tony.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Belated nova newton midcycle recap (part 2)

2016-08-01 Thread Matt Riedemann

Starting from where I accidentally left off:

* Vendor metadata reboot

We agreed that we still wanted mikal to keep working on this so we can 
keep a timeline for removing the deprecated dynamic vendor data classloader.


The API change was merged last week:

https://review.openstack.org/#/c/317739/

There are still some testing and documentation changes left.

* Microversion API testing in Tempest

We talked about the current state of getting changes into Tempest for 
Nova microversions and how a bunch of changes were up for review at the 
same time to backfill some schema responses, like for 2.3 and 2.26.


I already posted what I thought we had agreed on at the midcycle:

http://lists.openstack.org/pipermail/openstack-dev/2016-July/099860.html

But there is some disagreement about how I tried to write that up in the 
Tempest docs so we're still trying to hash out this policy:


https://review.openstack.org/#/c/346092/

* Placement API for resource providers

Jay's personal goal for Newton is for the resource tracker to be writing 
inventory and allocation data via the placement API. We want to get the 
data writing into the placement API in Newton so we can start using it 
in Ocata.


There are some spec amendments up for resource providers, at least one 
has merged, and the initial placement API change merged today:


https://review.openstack.org/#/c/329149/

We talked about supporting dynamic resource classes for Ironic use cases, 
which is a stretch goal for Nova in Newton. Jay has a spec for that here:


https://review.openstack.org/#/c/312696/

There is a lot more detail in the etherpad and honestly Jay Pipes or Jim 
Rollenhagen would be better placed to summarize what came out of this at the 
midcycle and what's being worked on for dynamic resource classes right now.


We talked about a separate placement API database but decided this 
should be optional to avoid forcing yet another nova database on 
deployers in a couple of releases. This would be available for deployers 
to use to avoid some future upgrade pain when the placement service is 
split out from Nova, but if not configured it will default to the API 
database for the placement API. There are a bunch more details and 
discussion on that in this thread that Chris Dent started after the 
midcycle:


http://lists.openstack.org/pipermail/openstack-dev/2016-July/100302.html

* nova/cinder interlock

A few of us called into the cinder midcycle hangout to talk through a 
few ongoing efforts between projects.


John Griffith has some POC code up that adds a new set of APIs to Cinder 
which consolidates the os-reserve, os-initialize_connection and 
os-attach APIs into a single API along with changes to cinderclient and 
nova to use the APIs. This is to close the gap on some long-standing 
race issues between nova and cinder volume attach operations and will 
feed into the volume multi-attach work as cinder will be storing the 
attachment information differently so we can detach properly. I need to 
fix up my devstack(-gate) change to test the entire stack, and John 
Griffith was going to write a spec for Cinder for the new APIs.


John Garbutt and I had a TODO to review Walter Boring's bug fix / 
cleanup nova-api change to stop doing state checking from the API and 
let Cinder handle that in os-reserve:


https://review.openstack.org/#/c/315789/

There is some other related work to that but it gets a bit more 
complicated in the boot from volume and live migration cases. John 
Griffith was also going to check with some other Cinder storage 
providers like Ceph/FC to make sure these changes would be OK for them, 
and to check on live migration testing (PureStorage is working on 
multinode live migration test coverage for their third party CI using 
iscsi/fibrechannel).


Matt Treinish also helped sort out some issues with getting a cinder 
multi-backend job in the gate that Scott D'Angelo has been working on. 
There is a series of changes to project-config, devstack and Tempest to 
get this testing working so we can test things like volume 
retype/migration and swap-volume in the gate with an LVM backend. The 
devstack change should just be a pass-through config to Tempest from the 
job rather than the existing approach.


* nova/neutron interlock

Carl Baldwin was at the meetup so we mostly talked about routed networks 
and the deferred IP allocation change he's been working on:


https://review.openstack.org/#/c/299591/

This is a step toward using routed networks before we have the full 
proper scheduling in place with resource providers and the placement 
API. We talked through some issues with build failures and rescheduling 
which right now will just reschedule until failure, but Carl has some 
changes to detect fixed IP allocation on the wrong host and fail, which 
Nova can then handle and abort the build rather than reschedule. We can 
work that in as a bug fix though once we get the initial change in to 
support deferred IP allocation.


We 

[openstack-dev] [tacker] Weekly meeting Aug, 2, 2016 cancelled

2016-08-01 Thread Sridhar Ramaswamy
Tackers,

Since we just met last Wed & Thurs for our midcycle meetup and there
is no solid agenda item, I'm cancelling tomorrow's Tacker weekly
meeting.

Please use the extra time to continue to review and push patchsets for
Newton features like:

VNF-FFG
VNF Scaling
Event Audit Support
Alarm Monitoring

Of course, if you have any questions please reach out in #tacker channel.

thanks,
Sridhar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-01 Thread Carl Baldwin
On Mon, Aug 1, 2016 at 2:29 PM, Kevin Benton  wrote:

> >We still want the exception to rollback the entire API operation and
> stopping it with a nested operation I think would mess that up.
>
> Well I think you would want to start a nested transaction, capture the
> duplicate, call the ipam delete methods, then throw a retryrequest. The
> exception will still trigger a rollback of the entire operation.
>

This is kind of where I was headed when I decided to solicit some feedback.
It is a possibility that should still be considered.
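
In code, that suggestion might look roughly like this (a sketch only:
try_allocate is a hypothetical helper, the allocate/deallocate calls stand
in for the pluggable IPAM interface, and oslo.db provides the exceptions):

    from oslo_db import exception as db_exc

    def try_allocate(context, ipam_driver, subnet_id, ip_request):
        """Attempt one allocation; turn a duplicate into a clean retry."""
        session = context.session
        ipam_subnet = ipam_driver.get_subnet(subnet_id)
        try:
            # SAVEPOINT: a duplicate here rolls back only this block and
            # leaves the outer transaction (and the session) usable.
            with session.begin(nested=True):
                return ipam_subnet.allocate(ip_request)
        except db_exc.DBDuplicateEntry as dup:
            # Another worker won the race for this address.  Let the IPAM
            # driver undo its own bookkeeping, then ask the API layer to
            # retry the whole operation from the top.
            ipam_subnet.deallocate(ip_request.address)
            raise db_exc.RetryRequest(dup)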


> >Second, I've been throwing around the idea of not sharing the session
> with the IPAM driver.
>
> If the IPAM driver does not have access to the session, it can't see any
> of the uncommitted data. Would that be a problem? In particular, doesn't
> the IPAM driver's DB table have foreign key constraints with the data
> waiting to be committed in the other session? I'm hesitant to take this
> approach because it means other (if the in-tree doesn't already) IPAM
> drivers cannot have any relational integrity with the objects in question.
>

The in-tree driver doesn't have any FK constraints back to the neutron db
schema for IPAM [1]. I don't think that would make sense since it is
supposed to work like an external driver.


> A related question is, why does the in-tree IPAM driver have to do
> anything at all on a rollback? It currently does share a session which is
> automatically going to roll back all of its DB operations for it. If it's
> because the driver cannot distinguish a delete call from a rollback and a
> normal delete, I suggest we change the delete call to pass a flag
> indicating that it's for a rollback. That would allow any DB-based drivers
> to just do nothing at this step.
>

Given that it shares the session, it wouldn't have to do anything. But,
again, it wouldn't behave like an external driver. I'd like to not have
special drivers that behave differently than drivers that are really
external; we end up finding things that the in-tree driver does in our
testing that don't work right for other drivers.

Drivers might need to access uncommitted data from the neutron DB. I think
even external drivers do this. However, there is a hard line between the
Neutron tables (even IPAM related ones) and the pluggable IPAM driver
database schema. I should have been a little more explicit that I wasn't
suggesting that we hide the Neutron session from the driver. What I meant
to suggest is that we use a different session for the part of the database
schema that belongs solely to the driver. All of the changes would be
inside the driver implementation and the interface to the driver wouldn't
change at all.
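
A minimal sketch of that idea with SQLAlchemy, using a hypothetical
driver-owned table (not the reference driver's actual models):

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class DriverAllocation(Base):
        # Hypothetical table owned solely by the IPAM driver.
        __tablename__ = 'driver_ip_allocations'
        ip_address = Column(String(64), primary_key=True)
        subnet_id = Column(String(36), primary_key=True)

    # A separate engine/session: commits and rollbacks here are
    # independent of the Neutron API transaction.
    engine = create_engine('sqlite:///ipam_driver.db')
    Base.metadata.create_all(engine)
    DriverSession = sessionmaker(bind=engine)

    def record_allocation(ip, subnet_id):
        session = DriverSession()
        try:
            session.add(DriverAllocation(ip_address=ip, subnet_id=subnet_id))
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()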

Carl

[1]
https://github.com/openstack/neutron/blob/2b1c143ca9/neutron/ipam/drivers/neutrondb_ipam/db_models.py
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Networking-vSphere]

2016-08-01 Thread Jay Pipes

On 07/14/2016 09:28 AM, Igor Gajsin wrote:

Thanks for the quick reply.


Likewise, apologies for the delayed response! :(


Let's restore the context. I develop a plugin for Fuel that uses
Networking-vSphere as the network driver. The next release of Fuel, 9.1 and
maybe 9.X, will be based on mitaka.


Understood.


For me it means that I have to have a place to commit my changes to the
driver. Now I'm working on changing devstack/plugin.sh to make it
possible to install either the OVSvApp or the VMware DVS driver. But my
ambitions go much further.


Land any feature changes in the master branch. If you need to backport 
anything to a Mitaka-based source repository due to Mirantis OpenStack 
being based on stable/mitaka code, then you will need to create a 
Mirantis internal branch of the upstream stable/mitaka 
networking-vsphere branch and apply your changes to that internal branch.



I have several ideas for how to improve the VMware DVS driver, and my
question is whether I can develop it fully or not?


Of course you can! You just need to propose these improvements to the 
upstream master branch first.


Best,
-jay


On Thu, Jul 14, 2016 at 3:21 PM, Jay Pipes wrote:

On 07/14/2016 02:41 AM, Igor Gajsin wrote:

Hi all.

I'm going to add some improvements to Networking-vSphere. There is a
problem because I'm working with mitaka, which has already been released.
But I see some commits in the stable/mitaka branch that were made after
the release date.

Does it mean that I can commit new functionality to the stable/mitaka
branch and work with it as usual?


Hello Igor!

Generally, no patches should land in a stable branch that provide a
new feature or change existing behaviour. Only bug fixes should be
applied to a stable branch. Can you point to specific patches that
you are referring to?

Thanks!
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ironic] A couple feature freeze exception requests

2016-08-01 Thread Jay Pipes

On 08/01/2016 05:20 PM, Jim Rollenhagen wrote:

Yes, I know this is stupid late for these.

I'd like to request two exceptions to the non-priority feature freeze,
for a couple of features in the Ironic driver.  These were not requested
at the normal time as I thought they were nowhere near ready.

Multitenant networking
==

Ironic's top feature request for around 2 years now has been to make
networking safe for multitenant use, as opposed to a flat network
(including control plane access!) for all tenants. We've been working on
a solution for 3 cycles now, and finally have the Ironic pieces of it
done, after a heroic effort to finish things up this cycle.

There's just one patch left to make it work, in the virt driver in Nova.
That is here: https://review.openstack.org/#/c/297895/


Reviewed. +2 from me, under the assumption that Ironic must always be 
upgraded before Nova per our discussion on IRC on the same topic today.



It's important to note that this actually fixes some dead code we pushed
on before this feature was done, and is only ~50 lines, half of which
are comments/reno.

Reviewers on this unearthed a problem on the ironic side, which I expect
to be fixed in the next couple of days:
https://review.openstack.org/#/q/topic:bug/1608511

We also have CI for this feature in ironic, and I have a depends-on
testing all of this as a whole: https://review.openstack.org/#/c/347004/

Per Matt's request, I'm also adding that job to Nova's experimental
queue: https://review.openstack.org/#/c/349595/

A couple folks from the ironic team have also done some manual testing
of this feature, with the nova code in, using real switches.

Merging this patch would bring a *huge* win for deployers and operators,
and I don't think it's very risky. It'll be ready to go sometime this
week, once that ironic chain is merged.


++


Multi-compute usage via a hash ring
===

One of the major problems with the ironic virt driver today is that we
don't support running multiple nova-compute daemons with the ironic driver
loaded, because each compute service manages all ironic nodes and stomps
on each other.

There's currently a hack in the ironic virt driver to
kind of make this work, but instance locking still isn't done:
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py

That is also holding back removing the pluggable compute manager in nova:
https://github.com/openstack/nova/blob/master/nova/conf/service.py#L64-L69

And as someone that runs a deployment using this hack, I can tell you
first-hand that it doesn't work well.

We (the ironic and nova community) have been working on fixing this for
2-3 cycles now, trying to find a solution that isn't terrible and
doesn't break existing use cases. We've been conflating it with how we
schedule ironic instances and keep managing to find a big wedge with
each approach. The best approach we've found involves duplicating the
compute capabilities and affinity filters in ironic.

Some of us were talking at the nova midcycle and decided we should try
the hash ring approach (like ironic uses to shard nodes between
conductors) and see how it works out, even though people have said in
the past that wouldn't work. I did a proof of concept last week, and
started playing with five compute daemons in a devstack environment.
Two nerd-snipey days later and I had a fully working solution, with unit
tests, passing CI. That is here:
https://review.openstack.org/#/c/348443/


w00t :)


We'll need to work on CI for this with multiple compute services. That
shouldn't be crazy difficult, but I'm not sure we'll have it done this
cycle (and it might get interesting trying to test computes joining and
leaving the cluster).

It also needs some testing at scale, which is hard to do in the upstream
gate, but I'll be doing my best to ship this downstream as soon as I
can, and iterating on any problems we see there.

It's a huge win for operators, for only a few hundred lines (some of
which will be pulled out to oslo next cycle, as it's copied from
ironic). The single compute mode would still be recommended while we
iron out any issues here, and that mode is well-understood (as this will
behave the same in that case). We have a couple of nova cores on board
with helping get this through, and I think it's totally doable.

Thanks for hearing me out,

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [octavia]redirection and barbican config

2016-08-01 Thread Akshay Kumar Sanghai
Hi Michael,
Thanks. I have few more queries:
- Is it possible to create multiple VIPs on one amphora?

-I created a LB 2 days back. I created all the objects: loadbalancer,
listener, pool and members. The curl was successful for the vip. Today I
added one more listener listening on port 443 (terminated https) and added a
pool for it and members for the pool. I have barbican installed and I have
tried ssl offloading with barbican with the haproxy namespace driver.  Curl
for both http and https was giving me a 503, but when I did a curl to the
member directly, it returned 200 OK. I tried to figure out where it was going
wrong, but could not. I could not find any errors in octavia-api.log or
octavia-worker.log. So, I deleted everything and recreated it again. Now it
was working. But for a similar future scenario, how should I figure out
where things went wrong or where the packet was dropped? Is it possible to
log in to the amphora vm?

Thanks
Akshay

On Sat, Jul 30, 2016 at 11:45 PM, Michael Johnson 
wrote:

> Hi Akshay,
>
> For 80 to 443 redirection, you can accomplish this using the new L7
> rules capability.  You would set up a listener on port 80 that has a
> redirect rule to the 443 URL.
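
For reference, creating such a policy against the neutron-lbaas v2 API could
look roughly like this (a sketch over plain HTTP; the endpoint, token and
UUIDs are placeholders):

    import requests

    neutron = 'http://controller:9696/v2.0'  # assumed endpoint
    headers = {'X-Auth-Token': 'TOKEN_FROM_KEYSTONE',  # placeholder
               'Content-Type': 'application/json'}

    # Any request hitting the port-80 listener gets redirected to the
    # https:// URL via an L7 policy.
    body = {'l7policy': {
        'listener_id': 'PORT_80_LISTENER_UUID',  # placeholder
        'action': 'REDIRECT_TO_URL',
        'redirect_url': 'https://www.example.com/',
    }}

    resp = requests.post(neutron + '/lbaas/l7policies',
                         headers=headers, json=body)
    resp.raise_for_status()
    print(resp.json())
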
>
> On the barbican question, if you are using the octavia driver, you
> will need to set the required settings in the octavia.conf file for
> proper barbican access.
> Those settings are called out here:
>
> http://docs.openstack.org/developer/octavia/config-reference/octavia-config-table.html
>
> Michael
>
>
> On Thu, Jul 28, 2016 at 1:02 PM, Akshay Kumar Sanghai
>  wrote:
> > Hi,
> > I have a couple of questions on octavia. Please answer or redirect me to
> > relevant documentation:
> > - Assume the listener is listening on 443 and a client hits the vip on port
> > 80, the connection will be refused.  Is it possible to configure http to
> > https redirection?
> >
> > - For the barbican config, the only config item I can find is
> > cert_manager_type in neutron_lbaas.conf. How do we configure the barbican
> > access for lbaas? I assume since we do the access config for nova and
> > keystone in neutron.conf, there should be some config file where we define
> > the barbican access (username, password, auth_url).
> >
> > The community has been very helpful to me. Thanks a lot Guys. Appreciate
> > your efforts.
> >
> > Thanks
> > Akshay Sanghai
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread James E. Blair
Mikhail Medvedev  writes:

>
> In theory it is possible to split diff and syntax spatially, so there
> would be no need to mix diff and syntax colors. Mockup
> http://i.imgur.com/gAD9x9v.png

True, though I should have clarified my comments as applying
particularly to the intra-line diff, where not only are changed lines
indicated (by dark red/green) but also changed characters (by bright
red/green).  As someone who could spend an hour staring at a line and
not seeing the addition of a single letter, I find that very useful.  :)

Perhaps in your approach some compromise could be obtained by indicating
a changed line as you suggest, and indicating which characters are changed
via an alteration to either the foreground (perhaps making them bold) or
the background color of the characters.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.4.0

2016-08-01 Thread James E. Blair
"Sean M. Collins"  writes:

> For some reason I installed the newer version but still the version
> string reports
>
> Gertty version: 1.1.1.dev24

When I install it from pypi via pip in a new virtualenv, I see:

  Gertty version: 1.4.0

Maybe you have an older copy installed from a git repo as editable or
something?  Perhaps try creating a new virtualenv for it, or
uninstalling it and re-installing?

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread Jeremy Stanley
On 2016-08-01 17:38:59 -0500 (-0500), Mikhail Medvedev wrote:
> In theory it is possible to split diff and syntax spatially, so there
> would be no need to mix diff and syntax colors. Mockup
> http://i.imgur.com/gAD9x9v.png

That's not too bad on the eyes, though as an avid user of the
unified diff view (rather than side-by-side as depicted in your
screenshot) it would amount to just color-coding the - and + at the
start of changed lines, which probably won't jump out much visually.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread Mikhail Medvedev
On Mon, Aug 1, 2016 at 4:00 PM, James E. Blair  wrote:
> Masayuki Igawa  writes:
>
>> Hi!
>>
>> On Wed, Jul 27, 2016 at 11:50 PM, James E. Blair  wrote:
>>> Michał Dulko  writes:
>>>
 Just wondering - were there any attempts to implement syntax highlighting in
 diff view? I think that's the only thing that keeps me from switching to
 Gertty.
>>>
>>> I don't know of anyone working on that, but I suspect it could be done
>>> using the pygments library.
>>
>> Oh, it's an interesting feature to me :) I'll try to investigate and
>> implement it in the next couple of days :)
>
> As I think about this, one challenge in particular comes to mind: Gerrit
> uses background color (green and pink) to distinguish old and new
> text when displaying diffs.  In Gertty, I avoided that and used
> foreground colors instead because text with green and red backgrounds is
> difficult to read on a terminal.
>
> We essentially have two channels of information that we want to
> represent with color -- the diff, and the syntax.  They can sometimes
> overlap.
>
> Perhaps we could use a 256 color (or even RGB) terminal for this
> feature.  Then we may be able to get just the right shade of background
> color for the diff channel, and use the foreground colors for syntax
> highlighting.
>
> At any rate, it may be worth trying to solve *this* problem first with a
> mockup to see if there is any way of doing this without making our eyes
> bleed before working on the code to implement it.

In theory it is possible to split diff and syntax spatially, so there
would be no need to mix diff and syntax colors. Mockup
http://i.imgur.com/gAD9x9v.png

>
> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Announcing Gertty 1.4.0

2016-08-01 Thread Sean M. Collins
For some reason I installed the newer version but still the version
string reports

Gertty version: 1.1.1.dev24
-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][heat][infra] breakage of Sahara gate and images from openstack.org

2016-08-01 Thread Steve Baker

On 02/08/16 03:11, Luigi Toscano wrote:

On Monday, 1 August 2016 10:56:21 CEST Zane Bitter wrote:

On 29/07/16 13:12, Luigi Toscano wrote:

Hi all,
the Sahara jobs on the gate run the scenario tests (from sahara-tests)
using the fake plugin, so no real Hadoop/Spark/BigData operations are
performed, but other the other expected operations are executed on the
image. In order to do this we used for long time this image:
http://tarballs.openstack.org/heat-test-image/fedora-heat-test-image.qcow2

which was updated early on this Friday (July 29th) from Fedora 22 to
Fedora 24 breaking our jobs with some cryptic error, maybe something
related to the repositories:
http://logs.openstack.org/46/335946/12/check/gate-sahara-tests-dsvm-scenar
io-nova-heat/5eeff52/logs/screen-sahara-eng.txt.gz?level=WARNING

So AFAICT from the log:

"rpm -q xfsprogs" prints "package xfsprogs is not installed" which is
expected if xfsprogs is not installed.

"yum install -y xfsprogs" redirects to "/usr/bin/dnf install -y
xfsprogs" which is expected on F24.

dnf fails with "Error: Failed to synchronize cache for repo 'fedora'"
which means it couldn't download the Fedora repository data.

"sudo mount -o data=writeback,noatime,nodiratime /dev/vdb
/volumes/disk1" then fails, doubtlessly because xfsprogs in not installed.

The absence of "sudo" in the yum command (when it does appear in the
mount command) is suspicious, but unlikely to be the reason it can't
sync the cache.

This is why I mentioned the repositories, yes.


It's not obvious why this change of image would suddenly result in not
being able to install packages. It seems more likely that you've never
been able to install packages, but the previous image had xfsprogs
preinstalled and the new one doesn't. I don't know the specifics of how
that image is built, but certainly Fedora has been making an ongoing
effort to strip the cloud image back to basics.

But this is not a normal Fedora image. If I read project-config correctly,
this is generated by this job:

http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/
jobs/heat.yaml#n34

From a brief chat on #heat on Friday it seems that the image is not gated or
checked or even used right now. Is that the case? The image is almost a simple
Fedora with a few extra packages:
http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/test-image/build-heat-test-image.sh

We've stopped using this image recently because the download failure 
rate from tarballs.openstack.org was impacting heat's gate job 
reliability. We've switched to a vanilla Fedora for now because none of 
our tests actually require a customized image. When we do have such 
tests we'll likely do boot-time install of packages from an AFS infra 
mirror.


We had no idea that Sahara was using this image in their gate, and it 
was certainly never intended for broader consumption.


Sahara would have a few options for an alternative:

- changing the test to work on a vanilla image

- do boot-time installation of the required packages (see the sketch below)

- work with infra on creating and hosting a custom image
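
As a sketch of the boot-time installation option, the required packages can
be pulled in via cloud-init user data when booting the test VM (assuming
python-novaclient; all credentials and IDs below are placeholders):

    from novaclient import client

    nova = client.Client('2', 'USER', 'PASSWORD', 'PROJECT_ID',
                         'http://controller:5000/v2.0')  # placeholders

    # cloud-init installs the packages on first boot, so the image
    # itself can stay a vanilla Fedora.
    cloud_config = '#cloud-config\npackages:\n  - xfsprogs\n'

    server = nova.servers.create(name='sahara-smoke',
                                 image='VANILLA_FEDORA_IMAGE_UUID',
                                 flavor='FLAVOR_ID',
                                 userdata=cloud_config)
    print(server.id)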

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Belated nova newton midcycle recap

2016-08-01 Thread Matt Riedemann

On 8/1/2016 4:19 PM, Matt Riedemann wrote:

It's a little late but I wanted to get a high level recap of the nova
newton midcycle written up for those that didn't make it.

First off, thanks again to Intel for hosting and especially to Cindy
Sirianni at Intel for making sure we were taken care of. We had about 40
people each day in a single room so it was a little cramped but being
the champions we are we survived.

The full etherpad is here:

https://etherpad.openstack.org/p/nova-newton-midcycle

I won't go into all of the details about every topic because (a) there
was a lot of discussion and a lot of topics and (b) I honestly didn't
catch everything, so I'm going to go over the highlights/decisions/todos
(in no particular order).

* cells v2 progress/status check

The aggregates and server group data migration changes are underway and
being reviewed. Migrating quotas to the API DB needs work though and
someone besides Mark Doffman (doffm) will probably need to pick that up.

For cell0, only scheduler failures live there, so we talked about how
those fit into the server list response. We decided that servers without
a host will be sorted at the front of the list response, and servers
with a host will be sorted after that. This will need to be documented
behavior in the API and could be improved later with Searchlight. We
would like someone to be a point person for interlocking with the
Searchlight team and we thought Balazs Gibizer (gibi) would be a good
person for this.

Andrew Laski has a change up for migrating from non-cells to cells v2.
We want to force people to upgrade to cells v2 in Newton so that we can
land a breaking change in Ocata to block people that aren't on cells v2
yet. Doing this is contingent on grenade testing. Dan Smith has the TODO
to look at the grenade changes. We don't plan on grenade testing cells
v1 to cells v2. We'll need to get docs changes for upgrades for the
process of migrating to cells v2. Michael Still (mikal) said we needed
to open bugs against the docs team for this.

The goal for Newton with cells v2 is that an instance record will not be
created until we pick a cell and we'll use the BuildRequest until that
point, and listing/deleting instances during that window will still work
as normal. For listing instances, we will prepend BuildRequests to the
front of the list (unsorted). We'll also limit the sort_keys in the API,
at least to excluded fields on joined tables - that can be fixed as a
bug fix.

For RPC/DB context switching, the infrastructure is in place but we
probably won't use this in Newton. There is a problem with version caps
and sending a new object to an old cell. There are a few proposed
solutions and Dan Smith was looking at testing a solution for this, but
we'll most likely end up documenting it for upgrades.

* API policy in code

Claudiu Belu has a patch up for a nova-manage command to check what APIs
a given user can perform. This is a first step to eventually getting to
a discoverable policy CLI and it also provides a debug tool for
operators when API users get policy errors.

We also said that any command for determining the effective policy of a
deployment or checking duplicates should live in oslo.policy, not nova,
since other projects are looking for the same thing, like Ironic. Nova
wouldn't have a nova-manage command for this but would have an
entrypoint. We also need to prioritize anything that needs to get into
oslo.policy so we're not caught by the final non-client library release
the week of 8/22.

* API docs in tree

Things are slow but that's mostly OK, we'll continue working on this
past feature freeze since it's docs. And we'll probably schedule an
api-ref docs review sprint early in September after feature freeze hits.

* Proxy API deprecations

We talked quite a bit about how to land the proxy API deprecation and
network API changes in a single microversion, which actually happened
with 2.36 last week.

Most of the discussion was around how to handle the network API
deprecation since if you're using nova-network it's not a proxy. We
didn't really want to special-case the network APIs though, and we wanted the
additional signaling mechanism that the network APIs, and nova-network,
are deprecated, so we ultimately decided to include nova-network and all
network APIs in the 2.36 microversion for deprecation. The sticky thing
is that today you can request <2.36 and the API still works. After
nova-network is deleted from code, that will no longer work. Yes this is
a backward incompatible change, but we wanted the further signaling of
the removal rather than just yank it outright when the time comes.

To ease some of the client experience, Dan Smith is working on a change in
python-novaclient to deprecate the network CLIs, and if requesting
microversion>=2.36 we'll fall back to 2.35 (or the latest available that
still makes this work). So the network CLIs will be deprecated and emit
a warning but continue to work even though API users will not be 

[openstack-dev] [nova] Belated nova newton midcycle recap

2016-08-01 Thread Matt Riedemann
It's a little late but I wanted to get a high level recap of the nova 
newton midcycle written up for those that didn't make it.


First off, thanks again to Intel for hosting and especially to Cindy 
Sirianni at Intel for making sure we were taken care of. We had about 40 
people each day in a single room so it was a little cramped but being 
the champions we are we survived.


The full etherpad is here:

https://etherpad.openstack.org/p/nova-newton-midcycle

I won't go into all of the details about every topic because (a) there 
was a lot of discussion and a lot of topics and (b) I honestly didn't 
catch everything, so I'm going to go over the highlights/decisions/todos 
(in no particular order).


* cells v2 progress/status check

The aggregates and server group data migration changes are underway and 
being reviewed. Migrating quotas to the API DB needs work though and 
someone besides Mark Doffman (doffm) will probably need to pick that up.


For cell0, only scheduler failures live there, so we talked about how 
those fit into the server list response. We decided that servers without 
a host will be sorted at the front of the list response, and servers 
with a host will be sorted after that. This will need to be documented 
behavior in the API and could be improved later with Searchlight. We 
would like someone to be a point person for interlocking with the 
Searchlight team and we thought Balazs Gibizer (gibi) would be a good 
person for this.


Andrew Laski has a change up for migrating from non-cells to cells v2. 
We want to force people to upgrade to cells v2 in Newton so that we can 
land a breaking change in Ocata to block people that aren't on cells v2 
yet. Doing this is contingent on grenade testing. Dan Smith has the TODO 
to look at the grenade changes. We don't plan on grenade testing cells 
v1 to cells v2. We'll need to get docs changes for upgrades for the 
process of migrating to cells v2. Michael Still (mikal) said we needed 
to open bugs against the docs team for this.


The goal for Newton with cells v2 is that an instance record will not be 
created until we pick a cell and we'll use the BuildRequest until that 
point, and listing/deleting instances during that window will still work 
as normal. For listing instances, we will prepend BuildRequests to the 
front of the list (unsorted). We'll also limit the sort_keys in the API, 
at least to excluded fields on joined tables - that can be fixed as a 
bug fix.


For RPC/DB context switching, the infrastructure is in place but we 
probably won't use this in Newton. There is a problem with version caps 
and sending a new object to an old cell. There are a few proposed 
solutions and Dan Smith was looking at testing a solution for this, but 
we'll most likely end up documenting it for upgrades.


* API policy in code

Claudiu Belu has a patch up for a nova-manage command to check what APIs 
a given user can perform. This is a first step to eventually getting to 
a discoverable policy CLI and it also provides a debug tool for 
operators when API users get policy errors.


We also said that any command for determining the effective policy of a 
deployment or checking duplicates should live in oslo.policy, not nova, 
since other projects are looking for the same thing, like Ironic. Nova 
wouldn't have a nova-manage command for this but would have an 
entrypoint. We also need to prioritize anything that needs to get into 
oslo.policy so we're not caught by the final non-client library release 
the week of 8/22.
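
For reference, checking a single rule with oslo.policy looks roughly like
this (a sketch; the rule name and credentials are illustrative):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF, use_conf=False)
    # Rules normally come from policy.json; set them directly here so
    # the sketch is self-contained.
    enforcer.set_rules(policy.Rules.from_dict(
        {'os_compute_api:servers:create': 'role:member'}))

    creds = {'user_id': 'abc', 'project_id': 'xyz', 'roles': ['member']}
    target = {'project_id': 'xyz'}

    # enforce() returns a bool by default -- the building block for a
    # "which APIs can this user call?" report.
    print(enforcer.enforce('os_compute_api:servers:create', target, creds))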


* API docs in tree

Things are slow but that's mostly OK, we'll continue working on this 
past feature freeze since it's docs. And we'll probably schedule an 
api-ref docs review sprint early in September after feature freeze hits.


* Proxy API deprecations

We talked quite a bit about how to land the proxy API deprecation and 
network API changes in a single microversion, which actually happened 
with 2.36 last week.


Most of the discussion was around how to handle the network API 
deprecation since if you're using nova-network it's not a proxy. We 
didn't really want to special-case the network APIs though, and we wanted the 
additional signaling mechanism that the network APIs, and nova-network, 
are deprecated, so we ultimately decided to include nova-network and all 
network APIs in the 2.36 microversion for deprecation. The sticky thing 
is that today you can request <2.36 and the API still works. After 
nova-network is deleted from code, that will no longer work. Yes this is 
a backward incompatible change, but we wanted the further signaling of 
the removal rather than just yank it outright when the time comes.


To ease some of the client experience, Dan Smith is working on a change in 
python-novaclient to deprecate the network CLIs, and if requesting 
microversion>=2.36 we'll fall back to 2.35 (or the latest available that 
still makes this work). So the network CLIs will be deprecated and emit 
a warning but continue to work even 

[openstack-dev] [nova][ironic] A couple feature freeze exception requests

2016-08-01 Thread Jim Rollenhagen
Yes, I know this is stupid late for these.

I'd like to request two exceptions to the non-priority feature freeze,
for a couple of features in the Ironic driver.  These were not requested
at the normal time as I thought they were nowhere near ready.

Multitenant networking
==

Ironic's top feature request for around 2 years now has been to make
networking safe for multitenant use, as opposed to a flat network
(including control plane access!) for all tenants. We've been working on
a solution for 3 cycles now, and finally have the Ironic pieces of it
done, after a heroic effort to finish things up this cycle.

There's just one patch left to make it work, in the virt driver in Nova.
That is here: https://review.openstack.org/#/c/297895/

It's important to note that this actually fixes some dead code we pushed
on before this feature was done, and is only ~50 lines, half of which
are comments/reno.

Reviewers on this unearthed a problem on the ironic side, which I expect
to be fixed in the next couple of days:
https://review.openstack.org/#/q/topic:bug/1608511

We also have CI for this feature in ironic, and I have a depends-on
testing all of this as a whole: https://review.openstack.org/#/c/347004/

Per Matt's request, I'm also adding that job to Nova's experimental
queue: https://review.openstack.org/#/c/349595/

A couple folks from the ironic team have also done some manual testing
of this feature, with the nova code in, using real switches.

Merging this patch would bring a *huge* win for deployers and operators,
and I don't think it's very risky. It'll be ready to go sometime this
week, once that ironic chain is merged.

Multi-compute usage via a hash ring
===

One of the major problems with the ironic virt driver today is that we
don't support running multiple nova-compute daemons with the ironic driver
loaded, because each compute service manages all ironic nodes and stomps
on each other.

There's currently a hack in the ironic virt driver to
kind of make this work, but instance locking still isn't done:
https://github.com/openstack/ironic/blob/master/ironic/nova/compute/manager.py

That is also holding back removing the pluggable compute manager in nova:
https://github.com/openstack/nova/blob/master/nova/conf/service.py#L64-L69

And as someone that runs a deployment using this hack, I can tell you
first-hand that it doesn't work well.

We (the ironic and nova community) have been working on fixing this for
2-3 cycles now, trying to find a solution that isn't terrible and
doesn't break existing use cases. We've been conflating it with how we
schedule ironic instances and keep managing to find a big wedge with
each approach. The best approach we've found involves duplicating the
compute capabilities and affinity filters in ironic.

Some of us were talking at the nova midcycle and decided we should try
the hash ring approach (like ironic uses to shard nodes between
conductors) and see how it works out, even though people have said in
the past that wouldn't work. I did a proof of concept last week, and
started playing with five compute daemons in a devstack environment.
Two nerd-snipey days later and I had a fully working solution, with unit
tests, passing CI. That is here:
https://review.openstack.org/#/c/348443/
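
For anyone unfamiliar with the technique, a minimal consistent hash ring
looks roughly like this (illustrative only, not the actual patch):

    import bisect
    import hashlib

    class HashRing(object):
        def __init__(self, hosts, partitions=100):
            # Each host gets several points on the ring to even out
            # the distribution.
            self.partitions = partitions
            self._ring = {}          # hash value -> host
            self._sorted_keys = []
            for host in hosts:
                self.add_host(host)

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16)

        def add_host(self, host):
            for i in range(self.partitions):
                h = self._hash('%s-%d' % (host, i))
                self._ring[h] = host
                bisect.insort(self._sorted_keys, h)

        def get_host(self, key):
            # Map a key (e.g. an ironic node UUID) to a host; only keys
            # near an added/removed host's points move to a new host.
            h = self._hash(key)
            idx = bisect.bisect(self._sorted_keys, h) % len(self._sorted_keys)
            return self._ring[self._sorted_keys[idx]]

    ring = HashRing(['compute-1', 'compute-2', 'compute-3'])
    print(ring.get_host('9f1a2b3c-node-uuid'))  # stable unless hosts change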

We'll need to work on CI for this with multiple compute services. That
shouldn't be crazy difficult, but I'm not sure we'll have it done this
cycle (and it might get interesting trying to test computes joining and
leaving the cluster).

It also needs some testing at scale, which is hard to do in the upstream
gate, but I'll be doing my best to ship this downstream as soon as I
can, and iterating on any problems we see there.

It's a huge win for operators, for only a few hundred lines (some of
which will be pulled out to oslo next cycle, as it's copied from
ironic). The single compute mode would still be recommended while we
iron out any issues here, and that mode is well-understood (as this will
behave the same in that case). We have a couple of nova cores on board
with helping get this through, and I think it's totally doable.

Thanks for hearing me out,

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread Jeremy Stanley
On 2016-08-01 14:00:12 -0700 (-0700), James E. Blair wrote:
[...]
> We essentially have two channels of information that we want to
> represent with color -- the diff, and the syntax.  They can sometimes
> overlap.
[...]

One option that probably wouldn't bug me too much is if it could be
toggled on/off fairly instantaneously while in diff view, so that if
it becomes hard to read one way (syntax highlight colors too noisy)
you just switch to the other (diff highlighting only) and keep
reading.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] Announcing Gertty 1.4.0

2016-08-01 Thread James E. Blair
Masayuki Igawa  writes:

> Hi!
>
> On Wed, Jul 27, 2016 at 11:50 PM, James E. Blair  wrote:
>> Michał Dulko  writes:
>>
>>> Just wondering - were there any attempts to implement syntax highlighting in
>>> diff view? I think that's the only thing that keeps me from switching to
>>> Gertty.
>>
>> I don't know of anyone working on that, but I suspect it could be done
>> using the pygments library.
>
> Oh, it's an interesting feature to me :) I'll try to investigate and
> implement it in the next couple of days :)

As I think about this, one challenge in particular comes to mind: Gerrit
uses background color (green and pink) to distinguish old and new
text when displaying diffs.  In Gertty, I avoided that and used
foreground colors instead because text with green and red backgrounds is
difficult to read on a terminal.

We essentially have two channels of information that we want to
represent with color -- the diff, and the syntax.  They can sometimes
overlap.

Perhaps we could use a 256 color (or even RGB) terminal for this
feature.  Then we may be able to get just the right shade of background
color for the diff channel, and use the foreground colors for syntax
highlighting.

At any rate, it may be worth trying to solve *this* problem first with a
mockup to see if there is any way of doing this without making our eyes
bleed before working on the code to implement it.
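
For the tokenizing half, pygments makes that part easy (a sketch; mapping
token types onto urwid attributes is where the color-channel question above
comes in):

    from pygments import lex
    from pygments.lexers import get_lexer_for_filename
    from pygments.token import Token

    # Tokenize one line of a file being diffed; each token type could
    # then be mapped to a foreground attribute while the diff keeps
    # its own channel (e.g. the background).
    lexer = get_lexer_for_filename('example.py')
    for token_type, text in lex('def foo(bar):\n', lexer):
        print('%-25s %r (keyword: %s)'
              % (token_type, text, token_type in Token.Keyword))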

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-08-01 Thread Liz Blanchard
On Mon, Aug 1, 2016 at 8:36 AM, Jiri Tomasek  wrote:

>
>
> On 27.7.2016 15:18, Steven Hardy wrote:
>
>> On Wed, Jul 27, 2016 at 08:41:32AM -0300, Honza Pokorny wrote:
>>
>>> Hello folks,
>>>
>>> As the tripleo-ui project is quickly maturing, it might be time to start
>>> versioning our code.  As of now, the version is set to 0.0.1 and that
>>> hardly reflects the state of the project.
>>>
>>> What do you think?
>>>
>> I would like to see it released as part of the coordinated tripleo
>> release,
>> e.g tagged each milestone along with all other projects where we assert
>> the
>> release:cycle-with-intermediary tag:
>>
>>
>> https://github.com/openstack/governance/blob/master/reference/projects.yaml#L4448
>>
>> Because tripleo-ui isn't yet fully integrated with TripleO (e.g packaging,
>> undercloud installation and CI testing), we've not tagged it in the last
>> two milestone releases, but perhaps we can for the n-3 release?
>>
>> https://review.openstack.org/#/c/324489/
>>
>> https://review.openstack.org/#/c/340350/
>>
>> When we do that, the versioning will align with all other TripleO
>> deliverables, solving the problem of the 0.0.1 version?
>>
>> The steps to achieve this are:
>>
>> 1. Get per-commit builds of tripleo-ui working via delorean-current:
>>
>> https://trunk.rdoproject.org/centos7-master/current/
>>
>> 2. Get the tripleo-ui package installed and configured as part of the
>> undercloud install (via puppet) - we might want to add a conditional to
>> the
>> undercloud.conf so it's configurable (enabled by default?)
>>
>>
>> https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.pp
>>
>> 3. Get the remaining Mistral API pieces landed so it's fully functional
>>
>> 4. Implement some basic CI smoke tests to ensure the UI is at least
>> accessible.
>>
>> Does that sequence make sense, or have I missed something?
>>
> Makes perfect sense. Here is the launchpad link that tracks undercloud
> integration of GUI
> https://blueprints.launchpad.net/tripleo-ui/+spec/instack-undercloud-ui-config


It would be great to work this into the informational menu where the
Service Status lives. I put together some quick mockups on what this could
look like:
https://invis.io/247VLM4SB#/178151973_2016-8-1_TripleO_UI51

What do you all think?

Thanks,
Liz


>
> Jirka
>
>
>
>> Steve
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-01 Thread Kevin Benton
>We still want the exception to rollback the entire API operation and
stopping it with a nested operation I think would mess that up.

Well I think you would want to start a nested transaction, capture the
duplicate, call the ipam delete methods, then throw a retryrequest. The
exception will still trigger a rollback of the entire operation.
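
Roughly, this sketch is what I mean (illustrative names, not the actual
Neutron code paths):

    from oslo_db import exception as db_exc

    def store_allocation_with_retry(context, ipam_driver, store,
                                    ip_address, subnet_id, port_id):
        # Sketch: isolate the risky write in a nested transaction so a
        # duplicate doesn't poison the outer session, undo the IPAM
        # allocation, then ask for the whole API operation to be retried.
        try:
            with context.session.begin(nested=True):
                store(context, ip_address, subnet_id, port_id)
        except db_exc.DBDuplicateEntry as exc:
            ipam_driver.deallocate(context, ip_address)  # illustrative call
            raise db_exc.RetryRequest(exc)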


>Second, I've been throwing around the idea of not sharing the session with
the IPAM driver.

If the IPAM driver does not have access to the session, it can't see any of
the uncommitted data. Would that be a problem? In particular, doesn't the
IPAM driver's DB table have foreign key constraints with the data waiting
to be committed in the other session? I'm hesitant to take this approach
because it means other IPAM drivers (if not the in-tree one already) cannot
have any relational integrity with the objects in question.

A related question is, why does the in-tree IPAM driver have to do anything
at all on a rollback? It currently does share a session which is
automatically going to roll back all of its DB operations for it. If it's
because the driver cannot distinguish a delete call from a rollback and a
normal delete, I suggest we change the delete call to pass a flag
indicating that it's for a rollback. That would allow any DB-based drivers
to just do nothing at this step.



On Mon, Aug 1, 2016 at 12:28 PM, Carl Baldwin  wrote:

> Hi all,
>
> Last Thursday, I spent the afternoon looking into a bug with pluggable
> IPAM [1] which is preventing me from deciding to pull the trigger on
> finally switching from the old non-pluggable reference implementation. I'd
> *really* like to get this in shape for Newton but time is running out.
>
> I've written a unit test [2] which manages to tickle the issue with
> rollback. It is a bit convoluted but basically it hijacks the call to
> _store_ip_allocation to reliably simulate another process racing for the
> same address and writes an allocation to the DB for the same ip address it
> is trying to allocate. The unit test also has to have something to rollback
> so it allocates two fixed ips to the port. The first one succeeds and then
> the second one fails (hence the initial call to "skip_one").
>
> I think the issue stems from the fact that the reference driver for
> pluggable IPAM shares the session with the rest of the API call. Writing
> the allocation fails with a DBDuplicate in the main part of the API call
> but then tries to rollback. Rollback fails because the session has already
> been "broken" by the duplicate exception.
>
> To fix this, I've thrown around a couple of ideas. First, I thought of
> maybe adding nested transactions in key places to isolate the part that is
> going to break. I could continue to pursue this but it is not as
> straightforward as it first seemed to me. We still want the exception to
> rollback the entire API operation and stopping it with a nested operation I
> think would mess that up.
>
> Second, I've been throwing around the idea of not sharing the session with
> the IPAM driver. I recall we had some discussion about this a long time ago
> but I cannot remember the conclusion. To me, it seems that the IPAM driver
> should not be sharing a session with the main API call. This would put it
> on par with other external IPAM drivers which would certainly not be
> sharing a DB context, let alone the DB itself, with the neutron API.
>
> Do you have any thoughts on this?
>
> Carl
>
> [1] https://bugs.launchpad.net/neutron/+bug/1603162
> [2] https://review.openstack.org/#/c/348956/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-01 Thread Matt Riedemann

On 8/1/2016 1:39 PM, Ken'ichi Ohmichi wrote:

2016-07-29 10:32 GMT-07:00 Sean Dague :

On 07/28/2016 05:38 PM, Matt Riedemann wrote:

On 7/28/2016 3:55 PM, Matt Riedemann wrote:

For os-attach-interfaces, we need that to attach/detach interfaces to a
server, so those actions don't go away with 2.36. We can also list and
show interfaces (ports) which is a proxy to neutron, but in this case it
seems a tad bit necessary, else to list ports for a given server you
have to know to list ports via neutron CLI and filter on
device_id=server.uuid.


On second thought, we could drop the proxy APIs to list/show ports for a
given server. python-openstackclient could have a convenience CLI for
listing ports for a server. And the show in os-attach-interfaces takes a
server id but it's not used, so it's basically pointless and should just
be replaced with neutron.

The question is, as these are proxies and the 2.36 microversion was for
proxy API deprecation, can we still do those in 2.36 even though it's
already merged? Or do they need to be 2.37? That seems like the more
accurate thing to do, but then we really have some weird "which is the
REAL proxy API microversion?" logic going on.

I think we could move forward with deprecation in novaclient either way.


We should definitely move forward with novaclient CLI deprecations.

We've said that microversions are idempotent, so fixing one in this case
isn't really what we want to do, it should just be another bump, with
things we apparently missed. I'm not sure it's super important that
there is a REAL proxy API microversion. We got most of it in one go, and
as long as we catch the stragglers in 2.39 (let's make that the last
merged one before the release so that we can figure out anything else we
missed, and keep get me a network as 2.37).


Yeah, I agree with another bump.
Even if we miss something like this, the microversion mechanism gives us
another chance.

Thanks
Ken Omichi

---

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



OK I'll take a stab at writing a spec for this either tonight or 
tomorrow. We'll deprecate the proxy APIs for os-attach-interfaces show 
and list methods. We'll need to take into account that the create 
(attach interface) action uses the show method which proxies to 
neutron's show_port method, but I think we will have enough information 
to avoid that proxy (that could probably just be a separate bug fix 
actually).
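
For reference, the non-proxied equivalent is a couple of lines against
Neutron (a sketch; the keystone session and server UUID are placeholders):

    from neutronclient.v2_0 import client as neutron_client

    # Sketch: list a server's ports straight from Neutron by filtering on
    # device_id, which is all the Nova proxy API does under the hood.
    neutron = neutron_client.Client(session=keystone_session)
    ports = neutron.list_ports(device_id=server_uuid)['ports']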


As for os-virtual-interfaces, I don't plan on deprecating that, and 
actually plan on enhancing it with a microversion in Ocata to return the 
vif tags from the model (useful for microversion 2.32). Plus we'll be 
able to use that for both nova-net and neutron since the data isn't 
proxied from anywhere, it just comes from the nova DB.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docker] [magnum] Magnum account on Docker Hub

2016-08-01 Thread Ton Ngo

Hi everyone,
 At the last IRC meeting, the team discussed the need for hosting some
container images on Docker Hub
to facilitate development.  There is currently a Magnum account on Docker
Hub, but this is not owned by anyone
on the team, so we would like to find who the owner is and whether this
account was set up for OpenStack Magnum.
Thanks in advance!
Ton Ngo,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [puppet] weekly meeting #89

2016-08-01 Thread Emilien Macchi
Hi Puppeteers!

We'll have our weekly meeting tomorrow at 3pm UTC on #openstack-meeting-4.

Here's a first agenda:
https://etherpad.openstack.org/p/puppet-openstack-weekly-meeting-20160802

Feel free to add topics, and any outstanding bug and patch.

See you tomorrow!
Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][IPAM] Pluggable IPAM rollback issue

2016-08-01 Thread Carl Baldwin
Hi all,

Last Thursday, I spent the afternoon looking into a bug with pluggable
IPAM [1] which is preventing me from deciding to pull the trigger on
finally switching from the old non-pluggable reference implementation. I'd
*really* like to get this in shape for Newton but time is running out.

I've written a unit test [2] which manages to tickle the issue with
rollback. It is a bit convoluted but basically it hijacks the call to
_store_ip_allocation to reliably simulate another process racing for the
same address and writes an allocation to the DB for the same ip address it
is trying to allocate. The unit test also has to have something to rollback
so it allocates two fixed ips to the port. The first one succeeds and then
the second one fails (hence the initial call to "skip_one").
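
(For the curious, the hijack looks roughly like the following sketch; the
class and helper names are illustrative, simplified from the actual test.)

    import mock

    # Sketch: wrap _store_ip_allocation so the second call first writes a
    # competing allocation for the same address, simulating another
    # process winning the race just before we commit.
    real_store = PluginUnderTest._store_ip_allocation
    state = {'skip_one': True}

    def racing_store(self, context, ip_address, *args, **kwargs):
        if state['skip_one']:
            state['skip_one'] = False  # let the first allocation through
        else:
            write_competing_allocation(context, ip_address)  # hypothetical
        return real_store(self, context, ip_address, *args, **kwargs)

    mock.patch.object(PluginUnderTest, '_store_ip_allocation',
                      racing_store).start()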

I think the issue stems from the fact that the reference driver for
pluggable IPAM shares the session with the rest of the API call. Writing
the allocation fails with a DBDuplicate in the main part of the API call
but then tries to rollback. Rollback fails because the session has already
been "broken" by the duplicate exception.

To fix this, I've thrown around a couple of ideas. First, I thought of
maybe adding nested transactions in key places to isolate the part that is
going to break. I could continue to pursue this but it is not as
straightforward as it first seemed to me. We still want the exception to
rollback the entire API operation and stopping it with a nested operation I
think would mess that up.

Second, I've been throwing around the idea of not sharing the session with
the IPAM driver. I recall we had some discussion about this a long time ago
but I cannot remember the conclusion. To me, it seems that the IPAM driver
should not be sharing a session with the main API call. This would put it
on par with other external IPAM drivers which would certainly not be
sharing a DB context, let alone the DB itself, with the neutron API.

Do you have any thoughts on this?

Carl

[1] https://bugs.launchpad.net/neutron/+bug/1603162
[2] https://review.openstack.org/#/c/348956/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Driver composition defaults call

2016-08-01 Thread Julia Kreger
Greetings!

As discussed in our meeting today[0], we would like to try and schedule a
time for a VoIP call so we can discuss driver composition[1] defaults with
the goal of reaching a consensus on defaults.

Given that there are several facets, and multiple people in multiple
timezones who need to be included, I've proposed times from 2 PM to 5 PM
GMT on August 9th, 10th, or 11th. The link to the doodle poll is below[2].

If none of the times work, please let me know and I'll gladly update the
poll.

-Julia

[0]
http://eavesdrop.openstack.org/meetings/ironic/2016/ironic.2016-01-04-17.00.log.html
[1]
https://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/driver-composition-reform.html
[2] http://doodle.com/poll/ayga9ppc6d2mrd9n
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] NOT_REGISTERED to be or not to be

2016-08-01 Thread Paul Belanger
On Mon, Aug 01, 2016 at 03:53:12PM +, Lenny Verkhovsky wrote:
> Hi,
> 
> Currently in some cases[1] CI sets a comment on the patch set as 
> NOT_REGISTERED.
> 
> Those comments are very hard for CI operators to monitor and are mostly noise
> for the developers.
> 
> Maybe a better solution is not commenting in such cases at all as discussed 
> in [2].
> 
> If a developer is missing some important CI comments, the change can be
> rechecked later or an email can be sent to the CI owner.
> 
> [1] No valid slaves
>Not all jobs are registered due to zuul restart for instance
> 
> [2] 
> http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-08-01.log.html
> 
We actually disabled this functionality in openstack-infra by adding a new
setting to zuul called check_job_registration[3]. While it doesn't stop an
invalid job configuration from landing in zuul, it does prevent
NOT_REGISTERED failures in the gate now.

We, openstack-infra, make every effort in our gate testing to ensure each change
to jenkins/jobs and zuul/layout.yaml is actually valid; if not, zuul will -1 the
change as invalid. In our case, having zuul then check for registered jobs was
a little redundant.

Now, changes land, zuul skips checking whether the jobs are registered, and
waits for new nodes to come online. It is also less disruptive to testing when
we have to restart zuul, since jobs no longer fail with NOT_REGISTERED.

The downside, of course, is that if a job is not configured properly, it will
linger in the queue until somebody addresses the issue.

[3] 
http://git.openstack.org/cgit/openstack-infra/zuul/commit/?id=9208dc1c0a859642deece4f4be5f43fae065c945

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread James Bottomley
On Mon, 2016-08-01 at 13:43 -0400, Sean Dague wrote:
> On 08/01/2016 12:24 PM, James Bottomley wrote:
> > Making no judgments about the particular exemplars here, I would 
> > just like to point out that one reason why projects exist with very
> > little diversity is that they "just work".  Usually people get 
> > involved when something doesn't work or they need something changed 
> > to work for them.  However, people do have a high tolerance for 
> > "works well enough" meaning that a project can be functional, 
> > widely used and not attracting diverse contributors.  A case in 
> > point for this type of project in the non-openstack world would be 
> > openssl but there are many others.
> 
> I think openssl is a good example of what we are actually trying to
> avoid. Over time that project boiled down to just a couple of people.
> Which seemed ok, because everything seemed to be working fine, but 
> only because no one was pushing on it too hard. Then folks did, and 
> we realized that there was kind of a house of cards here that required
> special intervention to address some of the issues found.

The original problem was lack of security audits leading to heartbleed
mistakes.  Now that that's been remedied by investment from the CII,
the project is still very monoclonal and run by a small group ... and
still just as essential.

> Keeping a diverse community up front helps mitigate some of this. 
> It's not a silver bullet by any means, but it does help ensure that 
> the goals of the project aren't only the goals of a single product 
> team inside a single entity.

The point I'm making is that company-led projects tend to be much
better connected with the end user base (because companies want
customers) which, ipso facto, means they tend to fall into the "good
enough" bucket and fail to attract many more outside contributions.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-virtual-interfaces isn't deprecated in 2.36

2016-08-01 Thread Ken'ichi Ohmichi
2016-07-29 10:32 GMT-07:00 Sean Dague :
> On 07/28/2016 05:38 PM, Matt Riedemann wrote:
>> On 7/28/2016 3:55 PM, Matt Riedemann wrote:
>>> For os-attach-interfaces, we need that to attach/detach interfaces to a
>>> server, so those actions don't go away with 2.36. We can also list and
>>> show interfaces (ports) which is a proxy to neutron, but in this case it
>>> seems a tad bit necessary, else to list ports for a given server you
>>> have to know to list ports via neutron CLI and filter on
>>> device_id=server.uuid.
>>
>> On second thought, we could drop the proxy APIs to list/show ports for a
>> given server. python-openstackclient could have a convenience CLI for
>> listing ports for a server. And the show in os-attach-interfaces takes a
>> server id but it's not used, so it's basically pointless and should just
>> be replaced with neutron.
>>
>> The question is, as these are proxies and the 2.36 microversion was for
>> proxy API deprecation, can we still do those in 2.36 even though it's
>> already merged? Or do they need to be 2.37? That seems like the more
>> accurate thing to do, but then we really have some weird "which is the
>> REAL proxy API microversion?" logic going on.
>>
>> I think we could move forward with deprecation in novaclient either way.
>
> We should definitely move forward with novaclient CLI deprecations.
>
> We've said that microversions are idempotent, so fixing one in this case
> isn't really what we want to do, it should just be another bump, with
> things we apparently missed. I'm not sure it's super important that
> there is a REAL proxy API microversion. We got most of it in one go, and
> as long as we catch the stragglers in 2.39 (let's make that the last
> merged one before the release so that we can figure out anything else we
> missed, and keep get me a network as 2.37).

Yeah, I agree with another bump.
Even if we miss something like this, the microversion mechanism gives us
another chance.

Thanks
Ken Omichi

---

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] HA NG status

2016-08-01 Thread Michele Baldessari
Hi all,

just wanted to give a short status on the work around the HA
architecture described in the SPEC here:
https://review.openstack.org/299628 (blueprint here [1])

The last chunk of the work is currently contained in these two reviews:
https://review.openstack.org/#/c/342650/ - cinder volume constraint move
https://review.openstack.org/#/c/314208/ - Main Tripleo Heat Templates changes

The patches are very small at this point, because after a chat with
Giulio we made sure that the initial big patches were split off in
smaller, more easily consumable bits. We also made it so that even
openstack-core is a profile that can be turned on and off. This work
still depends on the Aodh profiles [2] work landing in master. Once Aodh
is committed the change is rather trivial. Testing has been successful
so far. Given that it can be rolled back with a simple tweak to
environments/puppet-pacemaker.yaml we have a solid non-intrusive plan B,
should we encounter any unforeseen major issues.

For the long-term (so Ocata at least), we can work to remove all the
tripleo::profile::pacemaker classes that won't be needed any longer.
I think it makes sense to keep them around for at least one release.

I hope we can land these changes sometime this week or beginning of the
next (depending on how the aodh reviews go). In any case either Chris or
I will be around for any question/issues.

Thanks for reading so far ;)
Michele

[1] https://blueprints.launchpad.net/tripleo/+spec/ha-lightweight-architecture
[2] https://review.openstack.org/#/c/333556/
-- 
Michele Baldessari
C2A5 9DA3 9961 4FFB E01B  D0BC DDD4 DCCB 7515 5C6D

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission

2016-08-01 Thread Znoinski, Waldemar

 >-Original Message-
 >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >Sent: Friday, July 29, 2016 6:37 PM
 >To: openstack-dev@lists.openstack.org
 >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >
 >On 7/29/2016 10:47 AM, Znoinski, Waldemar wrote:
 >> Hi Matt et al,
 >> Thanks for taking the time to have a chat about it in Nova meeting
 >> yesterday.
 >> In relation to your two points below...
 >>
 >> 1. tempest-dsvm-ovsdpdk-nfv-networking job in our Intel NFV CI was
 >> broken for about a day till we troubleshooted the issue, to find out that
 >> the merge of this [1] change started to cause our troubles.
 >> We set Q_USE_PROVIDERNET_FOR_PUBLIC back to False to let the job get
 >> green again and test what it should be testing - nova/neutron changes -
 >> and not give false negatives because of that devstack change.
 >> We saw a REVERT [2] of the above change shortly after, as it was breaking
 >> Jenkins' neutron linuxbridge tempest job too [3].
 >>
 >> 2. Our aim is to have two things tested when a new change is proposed to
 >> devstack: NFV and OVS+DPDK. For better clarity we'll run two separate jobs
 >> instead of having NFV+OVSDPDK together.
 >> Currently we run OVSDPDK+ODL on devstack changes to discover potential
 >> issues with configuring these two together with each devstack change
 >> proposed. We've discussed this internally and we can add/(replace the
 >> OVSDPDK+ODL job with) a 'tempest-dsvm-full-nfv' one (currently running on
 >> Nova changes) that does devstack + runs the full tempest test suite (1100+
 >> tests) on NFV enabled flavors. It should properly test proposed devstack
 >> changes with the NFV features (as per wiki [4]) we have enabled in
 >> Openstack.
 >>
 >> Let me know if there are other questions, concerns, asks or suggestions.
 >>
 >> Thanks
 >> Waldek
 >>
 >>
 >> [1] https://review.openstack.org/#/c/343072/
 >> [2] https://review.openstack.org/#/c/345820/
 >> [3] https://bugs.launchpad.net/devstack/+bug/1605423
 >> [4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI
 >>
 >>
 >>  >-Original Message-
 >>  >From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com]
 >>  >Sent: Thursday, July 28, 2016 4:14 PM
 >>  >To: openstack-dev@lists.openstack.org
 >>  >Subject: Re: [openstack-dev] [nova] [infra] Intel NFV CI voting permission
 >>  >
 >>  >On 7/21/2016 5:38 AM, Znoinski, Waldemar wrote:
 >>  >> Hi Nova cores et al,
 >>  >>
 >>  >>
 >>  >>
 >>  >> I would like to acquire voting (+/-1 Verified) permission for our
 >> >> Intel NFV CI.
 >>  >>
 >>  >>
 >>  >>
 >>  >> 1.   It's running since Q1'2015.
 >>  >>
 >>  >> 2.   Wiki [1].
 >>  >>
 >>  >> 3.   It's using openstack-infra/puppet-openstackci with Zuul
 >>  >> 2.1.1 for last 4 months: zuul, gearman, Jenkins, nodepool, local
 >>  >> Openstack cloud.
 >>  >>
 >>  >> 4.   We have a team of 2 people + me + Nagios looking after it. Its
 >>  >> problems are fixed promptly and rechecks triggered after non-code
 >>  >> related issues. It's being reconciled against ci-watch [2].
 >>  >>
 >>  >> 5.   Reviews [3].
 >>  >>
 >>  >>
 >>  >>
 >>  >> Let me know if further questions.
 >>  >>
 >>  >>
 >>  >>
 >>  >> 1.   https://wiki.openstack.org/wiki/ThirdPartySystems/Intel_NFV_CI
 >>  >>
 >>  >> 2.   http://ci-watch.tintri.com/project?project=nova
 >>  >>
 >>  >> 3.
 >>  >> https://review.openstack.org/#/q/reviewer:%22Intel+NFV-CI+%253Copenstack-nfv-ci%2540intel.com%253E%22
 >>  >>
 >>  >>
 >>  >>
 >>  >>
 >>  >>
 >>  >>
 >>  >> *Waldek*
 >>  >>
 >>  >>
 >>  >>
 >>  >> --
 >>  >> Intel Research and Development Ireland Limited Registered in Ireland
 >>  >> Registered Office: Collinstown Industrial Park, Leixlip, County
 >>  >> Kildare Registered Number: 308263
 >>  >>
 >>  >> This e-mail and any attachments may contain confidential material for
 >>  >> the sole use of the intended recipient(s). Any review or distribution
 >>  >> by others is strictly prohibited. If you are not the intended
 >>  >> recipient, please contact the sender and delete all copies.
 >>  >>
 >>  >> __
 >>  >> OpenStack Development Mailing List (not for usage questions)
 >>  >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >>  >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >>  >
 >>  >We talked about this in the nova meeting today. I don't have a great
 >>  >grasp on how the Intel NFV CI has been performing, but making it
 >>  >voting will help with that. Looking at the 7 day results:
 >>  >
 >>  >http://ci-watch.tintri.com/project?project=nova&time=7+days
 >>  >
 >>  >Everything looks pretty good except for tempest-dsvm-ovsdpdk-nfv-
 >>  >networking but Waldemar pointed out there was a change in devstack
 >>  >that broke the CI for a 

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Ed Leafe
On Aug 1, 2016, at 10:14 AM, Adrian Otto  wrote:

> I am struggling to understand why we would want to remove projects from our 
> big tent at all, as long as they are being actively developed under the 
> principles of "four opens". It seems to me that working to disqualify such 
> projects sends an alarming signal to our ecosystem. The reason we made the 
> big tent to begin with was to set a tone of inclusion. This whole discussion 
> seems like a step backward. What problem are we trying to solve, exactly?

Many projects that are largely single-vendor are approved for the big tent with 
the understanding that they need to diversify. I believe that it is these types 
of projects that we are discussing.

-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [HA] RFC: High Availability track at future Design Summits

2016-08-01 Thread Adam Spiers
Hi all,

I doubt anyone would dispute that High Availability is a really
important topic within OpenStack, yet none of the OpenStack
conferences or Design Summits so far have provided an "official" track
or similar dedicated space for discussion on HA topics.

This is becoming increasingly problematic as the number of HA topics
increase.  For example, in Austin a group of us spent something like
15 hours together over 3-4 days for design sessions around the future
of HA for the compute plane.

This is not by any means the only HA topic which needs discussing.
Other possible topics:

  - Input from operators on their experiences of deployment,
maintenance, and effectiveness of highly available OpenStack
infrastructure

  - Adding or improving HA support in existing projects, e.g.

  - cinder-volume active/active work is currently ongoing

  - neutron always has ongoing HA topics - the hot one in
Austin seemed to be HA+DVR+SNAT.

  - We had some great discussions with the Congress team in
Austin, which may need follow-up.

  - mistral is involved in ongoing HA work.

  - The various projects playing on the HA scene (Senlin is
another example) need the opportunity to sync up with each
other to become aware of any opportunities for integration or
potential overlap.

  - Documentation (the official HA guide)

  - Different / new approaches to HA of the control plane
(e.g. Pacemaker vs. systemd vs. other clustering technologies)

  - Testing and hardening of existing HA architectures (e.g. via
projects such as cloud99)

Whilst we do have the #openstack-ha IRC channel, weekly IRC meetings,
and of course the mailing lists, I think it would be helpful to have
an official space in the design summits for continuation of those
technical discussions face-to-face.

Granted, some of the above topics could be discussed in the related
project track (cinder, neutron, congress, documentation etc.).  But
this does not provide a forum for detailed technical discussion on
cross-project initiatives such as compute HA, or architectural debates
which don't relate to a single project, or work on HA projects which
don't have their own dedicated track in the Design Summit.

Therefore I would like to propose that future Design Summits adopt an
official HA "mini-track" (I guess one day might be sufficient), and
I'd really appreciate hearing opinions on this proposal.

Also, if the idea meets enough favour, it would be useful to find out
whether it's already too late to arrange this for Barcelona :-)

Thanks a lot!
Adam

P.S. Maybe a similar proposal on a smaller scale would be valid for
some of the operator and regional meetups too?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Sean Dague
On 08/01/2016 12:24 PM, James Bottomley wrote:
> On Mon, 2016-08-01 at 11:38 -0400, Doug Hellmann wrote:
>> Excerpts from Adrian Otto's message of 2016-08-01 15:14:48 +:
>>> I am struggling to understand why we would want to remove projects
>>> from our big tent at all, as long as they are being actively
>>> developed under the principles of "four opens". It seems to me that
>>> working to disqualify such projects sends an alarming signal to our
>>> ecosystem. The reason we made the big tent to begin with was to set
>>> a tone of inclusion. This whole discussion seems like a step
>>> backward. What problem are we trying to solve, exactly?
>>>
>>> If we want to have tags to signal team diversity, that's fine. We
>>> do that now. But setting arbitrary requirements for big tent
>>> inclusion based on who participates definitely sounds like a
>>> mistake.
>>
>> Membership in the big tent comes with benefits that have a real
>> cost born by the rest of the community. Space at PTG and summit
>> forum events is probably the one that's easiest to quantify and to
>> point to as something limited that we want to use as productively
>> as possible. If 90% of the work of a project is being done by a
>> single company or organization (our current definition for
>> single-vendor), and that doesn't change after 18 months, then I
>> would take that as a signal that the community isn't interested
>> enough in the project to bear the associated costs.
>>
>> I'm interested in hearing other reasons that we should keep these
>> sorts of projects, though. I'm not yet ready to propose the change
>> to the policy myself.
> 
> Making no judgments about the particular exemplars here, I would just
> like to point out that one reason why projects exist with very little
> diversity is that they "just work".  Usually people get involved when
> something doesn't work or they need something changed to work for them.
>  However, people do have a high tolerance for "works well enough"
> meaning that a project can be functional, widely used and not
> attracting diverse contributors.  A case in point for this type of
> project in the non-openstack world would be openssl but there are many
> others.

I think openssl is a good example of what we are actually trying to
avoid. Over time that project boiled down to just a couple of people.
Which seemed ok, because everything seemed to be working fine, but only
because no one was pushing on it too hard. Then folks did, and we
realized that there was kind of a house of cards here that required
special intervention to address some of the issues found.

Keeping a diverse community up front helps mitigate some of this. It's
not a silver bullet by any means, but it does help ensure that the goals
of the project aren't only the goals of a single product team inside a
single entity.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] NOT_REGISTERED to be or not to be

2016-08-01 Thread Mikhail Medvedev
Thanks for starting the discussion, Lenny. I thought about a way to
accomplish this, and came up with a few options:

 - Pre-register all possible jenkins jobs with gearman. This only
avoids NOT_REGISTERED errors due to a gearman server restart, not ones
caused by a misconfigured system;

 - Add an option to zuul to treat builds as effectively canceled if there
is no corresponding gearman worker; this would avoid the
NOT_REGISTERED comment in all cases. This can be done in one line of
code if you just hack it [1] (assuming my patch is correct).

 - Automatically register all jobs that zuul tries to start with
gearman. That is, check whether the job's function exists on gearman,
and register a dummy function if not, for each attempted build. This
would avoid missing any patches (see the sketch after the link below).

[1] 
https://review.openstack.org/#q,Ie6d5ea35c6eeed465168f24921b04442df8f5744,n,z
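
For the first and third options, a minimal sketch of the registration
side, using the gear library (server address and job names are
placeholders):

    import gear

    # Sketch: register every known job name as a dummy gearman function
    # so zuul never sees a job as unregistered after a scheduler restart.
    worker = gear.Worker('dummy-registrar')
    worker.addServer('127.0.0.1', 4730)
    worker.waitForServer()
    for job in ('build:tempest-dsvm-full', 'build:gate-noop'):
        worker.registerFunction(job)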


On Mon, Aug 1, 2016 at 10:53 AM, Lenny Verkhovsky  wrote:
> Hi,
>
>
>
> Currently in some cases[1] CI sets a comment on the patch set as
> NOT_REGISTERED.
>
>
>
> Those comments are very hard for CI operators to monitor and are mostly
> noise for the developers.
>
>
>
> Maybe a better solution is not commenting in such cases at all as discussed
> in [2].
>
>
>
> If a developer is missing some important CI comments, the change can be
> rechecked later or an email can be sent to the CI owner.
>
>
>
> [1] No valid slaves
>
>Not all jobs are registered due to zuul restart for instance
>
>
>
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-08-01.log.html
>
>
>
>
>
> Thanks.
>
> Lenny
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-01 Thread Jim Rollenhagen
On Mon, Aug 01, 2016 at 08:10:18AM -0400, Jim Rollenhagen wrote:
> Hey all,
> 
> Our nova patch for networking[0] got stuck for a bit, because Nova needs
> to know which network interface is in use for the node, in order to
> properly set up the port.
> 
> The code landed for network_interface follows the following order for
> what is actually used for the node:
> 1) node.network_interface, if that is None:
> 2) CONF.default_network_interface, if that is None:
> 3) flat, if using neutron DHCP
> 4) noop, if not using neutron DHCP
> 
> The API will return None for node.network_interface in the API (GET
> /v1/nodes/uuid). This won't work for Nova, because Nova can't know what
> CONF.default_network_interface is.
> 
> I propose that if a network_interface is not sent in the node-create
> call, we write whatever the current default is, so that it is always set
> and not using an implicit value that could change.
> 
> For nodes that exist before the upgrade, we do a database migration to
> set network_interface to CONF.default_network_interface (or if that's
> None, set to flat/noop depending on the DHCP provider).
> 
> An alternative is to keep the existing behavior, but have the API return
> whatever interface is actually being used. This keeps the implicit
> behavior (which I don't think is good), and also doesn't provide a way
> to find out from the API if the interface is actually set, or if it's
> using the configurable default.
> 
> I'm going to go ahead and execute on that plan now, do speak up if you
> have major objections to it.
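
For reference, the resolution order quoted above boils down to something
like this sketch (illustrative, not the actual ironic code; I'm assuming
the DHCP check reads ironic's [dhcp]/dhcp_provider option via CONF):

    def effective_network_interface(node):
        # Sketch of the fallback order described above.
        if node.network_interface is not None:
            return node.network_interface
        if CONF.default_network_interface is not None:
            return CONF.default_network_interface
        if CONF.dhcp.dhcp_provider == 'neutron':
            return 'flat'
        return 'noop'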

By the way, the patch chain to do this is here:
https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1608511

// jim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [HA] weekly High Availability meetings on IRC: change of time

2016-08-01 Thread Adam Spiers
Hi everyone,

I have proposed moving the weekly High Availability IRC meetings one
hour later, back to the original time of 0900 UTC every Monday.

  https://review.openstack.org/#/c/349601/

Everyone is welcome to attend these meetings, so if you think you are
likely to regularly attend, feel free to vote on that review.

Thanks!
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Infra] Meeting Tuesday August 2nd at 19:00 UTC

2016-08-01 Thread Elizabeth K. Joseph
Hi everyone,

The OpenStack Infrastructure (Infra) team is having our next weekly
meeting on Tuesday August 2nd, at 19:00 UTC in #openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting

Anyone is welcome to add agenda items and everyone interested in
the project infrastructure and process surrounding automated testing
and deployment is encouraged to attend.

In case you missed it or would like a refresher, the meeting minutes
and full logs from our last meeting are available:

Minutes: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-07-26-19.03.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-07-26-19.03.txt
Log: 
http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-07-26-19.03.log.html

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread James Bottomley
On Mon, 2016-08-01 at 11:38 -0400, Doug Hellmann wrote:
> Excerpts from Adrian Otto's message of 2016-08-01 15:14:48 +:
> > I am struggling to understand why we would want to remove projects
> > from our big tent at all, as long as they are being actively
> > developed under the principles of "four opens". It seems to me that
> > working to disqualify such projects sends an alarming signal to our
> > ecosystem. The reason we made the big tent to begin with was to set
> > a tone of inclusion. This whole discussion seems like a step
> > backward. What problem are we trying to solve, exactly?
> > 
> > If we want to have tags to signal team diversity, that's fine. We
> > do that now. But setting arbitrary requirements for big tent
> > inclusion based on who participates definitely sounds like a
> > mistake.
> 
> Membership in the big tent comes with benefits that have a real
> cost born by the rest of the community. Space at PTG and summit
> forum events is probably the one that's easiest to quantify and to
> point to as something limited that we want to use as productively
> as possible. If 90% of the work of a project is being done by a
> single company or organization (our current definition for
> single-vendor), and that doesn't change after 18 months, then I
> would take that as a signal that the community isn't interested
> enough in the project to bear the associated costs.
> 
> I'm interested in hearing other reasons that we should keep these
> sorts of projects, though. I'm not yet ready to propose the change
> to the policy myself.

Making no judgments about the particular exemplars here, I would just
like to point out that one reason why projects exist with very little
diversity is that they "just work".  Usually people get involved when
something doesn't work or they need something changed to work for them.
 However, people do have a high tolerance for "works well enough"
meaning that a project can be functional, widely used and not
attracting diverse contributors.  A case in point for this type of
project in the non-openstack world would be openssl but there are many
others.

James



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Doug Hellmann
Excerpts from Michael Krotscheck's message of 2016-08-01 16:06:45 +:
> FYI- I'm totally in favor of eviction. But...
> 
> On Mon, Aug 1, 2016 at 8:42 AM Doug Hellmann  wrote:
> 
> >
> > I'm interested in hearing other reasons that we should keep these
> > sorts of projects, though. I'm not yet ready to propose the change
> > to the policy myself.
> 
> 
> ...if the social consequences result in that entire team's development
> staff effectively exiting OpenStack altogether? This in particular is
> pertinent to myself - if Fuel is evicted from the big tent, then it's very
> likely that the JavaScript SDK collaboration (which includes several
> Fuel-UI developers and has _finally_ taken off) will grind to a halt.
> 
> There's a halo effect to having a project under the big tent - contributors
> are already familiar with infra and procedure, and thus the barriers to
> cross-project bugfixes are way lower. Perhaps (using Fuel as an example)
> the "should this be in the big tent" metric is based on how many
> contributors contribute _only_ to Fuel, as opposed to
> Fuel-and-other-projects.

Remember that the big tent is the set of projects governed by the TC. Projects can
still use gerrit, CI, etc. even if they are not in the big tent.

> As a countersuggestion - perhaps the solution to increasing project
> diversity is to reduce barriers to cross-project contributions. If the
> learning curve of project-shifting was reduced (by agreeing on common web
> frameworks, etc), it'd certainly make cross-project bug fixes way easier.

I certainly support that, though as Jay points out in his thread on the
goals proposal we still want to leave room for experimentation.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Changing the repo descriptions in the github mirrors

2016-08-01 Thread Monty Taylor
Hey everybody!

Recently some of us in Infra land were browsing through the upstream git
source code, and happened to notice their repo description in github:

https://github.com/git/git

"Git Source Code Mirror - This is a publish-only repository and all pull
requests are ignored. Please follow Documentation/SubmittingPatches
procedure for any of your improvements."

It seemed both clear and informative, and made us think that we could
provide the same clarity to people who might come to our source code via
github.

For those who are not aware, we do currently run a Pull Request Closer
that closes any pull requests we get with a note about how to submit
things to gerrit. In many repos we also have a CONTRIBUTING.rst file
that also includes instructions on how to submit code - but ultimately I
think the more upfront we can be with people the less far down the road
of trying to contribute to OpenStack via a pull request they'll get.
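
As a rough illustration (not the actual infra script; these are the stock
GitHub v3 API endpoints, and the token and message are placeholders), the
closer boils down to something like:

    import requests

    API = 'https://api.github.com'
    HEADERS = {'Authorization': 'token <github-token>'}
    MESSAGE = ('Thank you for the pull request! This repository is a '
               'mirror; please submit changes via Gerrit instead.')

    def close_pull_requests(owner, repo):
        url = '%s/repos/%s/%s/pulls?state=open' % (API, owner, repo)
        for pull in requests.get(url, headers=HEADERS).json():
            number = pull['number']
            # Leave a comment explaining the workflow, then close the PR.
            requests.post('%s/repos/%s/%s/issues/%d/comments'
                          % (API, owner, repo, number),
                          headers=HEADERS, json={'body': MESSAGE})
            requests.patch('%s/repos/%s/%s/pulls/%d'
                           % (API, owner, repo, number),
                           headers=HEADERS, json={'state': 'closed'})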

We also have a mild issue over time of github descriptions going stale,
as we currently only set them at GH mirror creation time (turns out
asserting a bunch of metadata on thousands of repositories is a LOT of
API calls)

In any case, I've got some proposed patches up to implement this:

https://review.openstack.org/#/q/status:open+topic:global-gh-desc

But it seemed like one of those things where giving folks a heads up
and/or a time to give feedback before pulling the trigger would be friendly.

Thoughts?

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Michael Krotscheck
FYI- I'm totally in favor of eviction. But...

On Mon, Aug 1, 2016 at 8:42 AM Doug Hellmann  wrote:

>
> I'm interested in hearing other reasons that we should keep these
> sorts of projects, though. I'm not yet ready to propose the change
> to the policy myself.


...if the social consequences result in that entire team's development
staff effectively exiting OpenStack altogether? This in particular is
pertinent to myself - if Fuel is evicted from the big tent, then it's very
likely that the JavaScript SDK collaboration (which includes several
Fuel-UI developers and has _finally_ taken off) will grind to a halt.

There's a halo effect to having a project under the big tent - contributors
are already familiar with infra and procedure, and thus the barriers to
cross-project bugfixes are way lower. Perhaps (using Fuel as an example)
the "should this be in the big tent" metric is based on how many
contributors contribute _only_ to Fuel, as opposed to
Fuel-and-other-projects.

As a countersuggestion - perhaps the solution to increasing project
diversity is to reduce barriers to cross-project contributions. If the
learning curve of project-shifting was reduced (by agreeing on common web
frameworks, etc), it'd certainly make cross-project bug fixes way easier.

Michael
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Doug Hellmann
Excerpts from Jay Pipes's message of 2016-08-01 10:23:57 -0400:
> On 08/01/2016 08:33 AM, Sean Dague wrote:
> > On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> >> One of the outcomes of the discussion at the leadership training
> >> session earlier this year was the idea that the TC should set some
> >> community-wide goals for accomplishing specific technical tasks to
> >> get the projects synced up and moving in the same direction.
> >>
> >> After several drafts via etherpad and input from other TC and SWG
> >> members, I've prepared the change for the governance repo [1] and
> >> am ready to open this discussion up to the broader community. Please
> >> read through the patch carefully, especially the "goals/index.rst"
> >> document which tries to lay out the expectations for what makes a
> >> good goal for this purpose and for how teams are meant to approach
> >> working on these goals.
> >>
> >> I've also prepared two patches proposing specific goals for Ocata
> >> [2][3].  I've tried to keep these suggested goals for the first
> >> iteration limited to "finish what we've started" type items, so
> >> they are small and straightforward enough to be able to be completed.
> >> That will let us experiment with the process of managing goals this
> >> time around, and set us up for discussions that may need to happen
> >> at the Ocata summit about implementation.
> >>
> >> For future cycles, we can iterate on making the goals "harder", and
> >> collecting suggestions for goals from the community during the forum
> >> discussions that will happen at summits starting in Boston.
> >>
> >> Doug
> >>
> >> [1] https://review.openstack.org/349068 describe a process for managing 
> >> community-wide goals
> >> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> >> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> >> libraries"
> >
> > I like the direction this is headed. And I think for the test items, it
> > works pretty well.
> 
> I commented on the reviews, but I disagree with both the direction and 
> the proposed implementation of this.
> 
> In short, I think there's too much stick and not enough carrot. We 
> should create natural incentives for projects to achieve desired 
> alignment in certain areas, but placing mandates on project teams in a 
> diverse community like OpenStack is not useful.
> 
> The consequences of a project team *not* meeting these proposed mandates 
> has yet to be decided (and I made that point on the governance patch 
> review). But let's say that the consequences are that a project is 
> removed from the OpenStack big tent if they fail to complete these 
> "shared objectives".
> 
> What will we do when Swift decides that they have no intention of using 
> oslo.messaging or oslo.config because they can't stand fundamentals 
> about those libraries? Are we going to kick Swift, a founding project of 
> OpenStack, out of the OpenStack big tent?

Yes, your point about the title of that specific proposal is well
made.  I'll be renaming it to "remove obsolete incubated version
of Oslo code" or something similar in the next draft to avoid
confusion.

> Likewise, what if the Manila project team decides they aren't interested 
> in supporting Python 3.5 or a particular greenlet library du jour that 
> has been mandated upon them? Is the only filesystem-as-a-service project 
> going to be booted from the tent?

I hardly think "move off of the EOL-ed version of our language" and
"use a library du jour" are in the same class.  All of the topics
discussed so far are either focused on eliminating technical debt
that project teams have not prioritized consistently or adding
features that, again for consistency, are deemed important by the
overall community (API microversioning falls in that category,
though that's an example and not in any way an approved goal right
now).

> When it comes to the internal implementation of projects, my strong 
> belief is that we should let the project teams be laboratories of 
> innovation and avoid placing mandates on them.
> 
> Let projects choose from a set of vetted options for important libraries 
> or frameworks and allow a project to pave its own road if the project 
> team can justify a reason for that which outweighs any vetted choice 
> (Zaqar's choice to use Falcon fits this kind of thing).

We might have a goal that says projects should drop unapproved
tools. I don't think so far we've considered any that say they
should all use a specific tool that isn't already widely used. I'm
having trouble thinking of an example of that sort of thing that I
would support, but at the same time I'm not prepared to say we would
never do something like that just because of my lack of imagination
this morning. How about if we argue the merits of actual goal
proposals when they're made, instead of posing hypothetical scenarios?

> Finally, instead of these shared OpenStack-wide goals being a different 
> stick-thing for 

[openstack-dev] [all][infra] NOT_REGISTERED to be or not to be

2016-08-01 Thread Lenny Verkhovsky
Hi,

Currently in some cases[1] CI sets a comment on the patch set as NOT_REGISTERED.

Those comments are very hard for CI operators to monitor and are mostly noise
for the developers.

Maybe a better solution is not commenting in such cases at all as discussed in 
[2].

If a developer is missing some important CI comments, the change can be
rechecked later or an email can be sent to the CI owner.

[1] No valid slaves
   Not all jobs are registered due to zuul restart for instance

[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2016-08-01.log.html


Thanks.
Lenny
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Doug Hellmann
Excerpts from Shamail Tahir's message of 2016-08-01 09:49:35 -0500:
> On Mon, Aug 1, 2016 at 7:58 AM, Doug Hellmann  wrote:
> 
> > Excerpts from Sean Dague's message of 2016-08-01 08:33:06 -0400:
> > > On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> > > > One of the outcomes of the discussion at the leadership training
> > > > session earlier this year was the idea that the TC should set some
> > > > community-wide goals for accomplishing specific technical tasks to
> > > > get the projects synced up and moving in the same direction.
> > > >
> > > > After several drafts via etherpad and input from other TC and SWG
> > > > members, I've prepared the change for the governance repo [1] and
> > > > am ready to open this discussion up to the broader community. Please
> > > > read through the patch carefully, especially the "goals/index.rst"
> > > > document which tries to lay out the expectations for what makes a
> > > > good goal for this purpose and for how teams are meant to approach
> > > > working on these goals.
> > > >
> > > > I've also prepared two patches proposing specific goals for Ocata
> > > > [2][3].  I've tried to keep these suggested goals for the first
> > > > iteration limited to "finish what we've started" type items, so
> > > > they are small and straightforward enough to be able to be completed.
> > > > That will let us experiment with the process of managing goals this
> > > > time around, and set us up for discussions that may need to happen
> > > > at the Ocata summit about implementation.
> > > >
> > > > For future cycles, we can iterate on making the goals "harder", and
> > > > collecting suggestions for goals from the community during the forum
> > > > discussions that will happen at summits starting in Boston.
> > > >
> > > > Doug
> > > >
> > > > [1] https://review.openstack.org/349068 describe a process for
> > managing community-wide goals
> > > > [2] https://review.openstack.org/349069 add ocata goal "support
> > python 3.5"
> > > > [3] https://review.openstack.org/349070 add ocata goal "switch to
> > oslo libraries"
> > >
> > > I like the direction this is headed. And I think for the test items, it
> > > works pretty well.
> > >
> > > I'm trying to think about how we'd use a model like this to support
> > > something a little more abstract such as making upgrades easier. Where
> > > we've got a few things that we know get in the way (policy in files,
> > > rootwrap rules, paste ini changes), as well as validation, as well as
> > > configuration changes. And what it looks like for persistently important
> > > items which are going to take more than a cycle to get through.
> >
> > If we think the goal can be completed in a single cycle, then those
> > specific items can just be used to define "done" ("all policy
> > definitions have defaults in code and the service works without a policy
> > configuration file" or whatever). If the goal cannot be completed in a
> > single cycle, then it would need to be broken up in to phases.
> >
> > >
> > > Definitely seems worth giving it a shot on the current set of items, and
> > > see how it fleshes out.
> > >
> > > My only concern at this point is it seems like we're building nested
> > > data structures that people are going to want to parse into some kind of
> > > visualization in RST, which is a sub optimal parsing format. If we know
> > > that people want to parse this in advance, yamling it up might be in
> > > order. Because this mostly looks like it would reduce to one of those
> > > green/yellow/red checker boards by project and task.
> >
> > That's a good idea. How about if I commit to translate what we end
> > up with to YAML during Ocata, but we evolve the first version using
> > the RST since that's simpler to review for now?
> 
> We have created a tracker file[1][2] for user stories (minor changes
> pending based on feedback) in the Product WG repo.  We are currently
> working with the infra team to get a visualization tool deployed that shows
> the status for each artifact and provides links so that people can get more
> details as necessary.  Could something similar be (re)used here?

Possibly. I don't want to tie the governance part of the process
too tightly to any project management tools, since those tend to
change, but if the project-specific tracking artifacts exist in
that form then linking to them would be appropriate.

> 
> I also have a general question about whether goals could be documented as
> user stories[3]?

I would expect some of the goals to come from user stories, and in
those cases references to those stories would be appropriate.
However, we need much more specific detail to describe "done" than
is typically found in a user story, so just having a story won't
be sufficient. It's the difference between "As a deployer, I can
run OpenStack on Python 3.5" and "There are voting gate jobs running
all of the integration tests for a project under Python 3.5."

Doug

> 
> >
> 
> > Doug
> >
> 

Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-01 Thread John Davidge
Yes, as Brian says this will be covered by the follow-up patch to [2]
which I'm currently working on. Thanks for the question.

John


On 8/1/16, 3:17 PM, "Brian Haley"  wrote:

>On 07/31/2016 06:27 AM, huangdenghui wrote:
>> Hi
>>    Now we have a spec named subnet service types, which provides the
>> capability of allowing different ports of a network to allocate IP
>> addresses from different subnets. In the current implementation of DVR,
>> the fip is also distributed on every compute node, and floating IPs and
>> fg devices' IPs are both allocated from the external network's subnets.
>> In a large public cloud deployment, the current implementation consumes
>> a lot of public IP addresses. Do we need an RFE to apply the subnet
>> service types spec to resolve this problem? Any thoughts?
>
>Hi,
>
>This is going to be covered in the existing RFE for subnet service types
>[1].
>We currently have two reviews in progress for CRUD [2] and CLI [3]; the
>IPAM changes are next.
>
>-Brian
>
>[1] https://review.openstack.org/#/c/300207/
>[2] https://review.openstack.org/#/c/337851/
>[3] https://review.openstack.org/#/c/342976/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Doug Hellmann
Excerpts from Adrian Otto's message of 2016-08-01 15:14:48 +0000:
> I am struggling to understand why we would want to remove projects from our 
> big tent at all, as long as they are being actively developed under the 
> principles of "four opens". It seems to me that working to disqualify such 
> projects sends an alarming signal to our ecosystem. The reason we made the 
> big tent to begin with was to set a tone of inclusion. This whole discussion 
> seems like a step backward. What problem are we trying to solve, exactly?
> 
> If we want to have tags to signal team diversity, that's fine. We do that 
> now. But setting arbitrary requirements for big tent inclusion based on who 
> participates definitely sounds like a mistake.

Membership in the big tent comes with benefits that have a real
cost born by the rest of the community. Space at PTG and summit
forum events is probably the one that's easiest to quantify and to
point to as something limited that we want to use as productively
as possible. If 90% of the work of a project is being done by a
single company or organization (our current definition for
single-vendor), and that doesn't change after 18 months, then I
would take that as a signal that the community isn't interested
enough in the project to bear the associated costs.

I'm interested in hearing other reasons that we should keep these
sorts of projects, though. I'm not yet ready to propose the change
to the policy myself.

Doug

> 
> --
> Adrian
> 
> > On Aug 1, 2016, at 5:11 AM, Sean Dague  wrote:
> > 
> >> On 07/31/2016 02:29 PM, Doug Hellmann wrote:
> >> Excerpts from Steven Dake (stdake)'s message of 2016-07-31 18:17:28 +0000:
> >>> Kevin,
> >>> 
> >>> Just assessing your numbers, the team:diverse-affiliation tag covers what
> >>> is required to maintain that tag.  It covers more than core reviewers -
> >>> also covers commits and reviews.
> >>> 
> >>> See:
> >>> https://github.com/openstack/governance/blob/master/reference/tags/team_div
> >>> erse-affiliation.rst
> >>> 
> >>> 
> >>> I can tell you from founding 3 projects with the team:diverse-affiliation
> >>> tag (Heat, Magnum, Kolla) team:diverse-affiliation is a very high bar to
> >>> meet.  I don't think it's wise to have such strict requirements on single
> >>> vendor projects as those objectively defined in team:diverse-affiliation.
> >>> 
> >>> But Doug's suggestion of timelines could make sense if the timelines gave
> >>> plenty of time to meet whatever requirements make sense and the
> >>> requirements led to some increase in diverse affiliation.
> >> 
> >> To be clear, I'm suggesting that projects with team:single-vendor be
> >> given enough time to lose that tag. That does not require them to grow
> >> diverse enough to get team:diverse-affiliation.
> > 
> > The idea of 3 cycles to lose the single-vendor tag sounds very
> > reasonable to me. This also is very much along the spirit of the tag in
> > that it should be one of the top priorities of the team to work on this.
> > I'd be in favor.
> > 
> >-Sean
> > 
> > -- 
> > Sean Dague
> > http://dague.net
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-08-01 10:31:44 -0400:
> On 08/01/2016 10:28 AM, Davanum Srinivas wrote:
> > Sean,
> > 
> > So we will programmatically test the metrics (if we are not doing that
> > already) to apply/remove "team:single-vendor" tag:
> > 
> > https://governance.openstack.org/reference/tags/team_single-vendor.html
> > 
> > And trigger exit when the tag is present for more than 3 cycles in a
> > row (say as of release date?)
> > 
> > Thanks,
> > -- Dims
> 
> An approach like that would be fine with me. I'm not sure we have a
> formal proposal yet, but 3 cycles seems like a reasonable time frame.
> I'm happy to debate if people think there are better timeframes instead.
> 
> -Sean
> 

Yes, I think 3 cycles works and we are supposed to be reviewing that tag
periodically anyway.

I also agree that there's no need to differentiate between the reasons
for not being able to drop the tag (lack of trying or lack of success),
for the reasons Sean gave.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Adrian Otto
I am struggling to understand why we would want to remove projects from our big 
tent at all, as long as they are being actively developed under the principles 
of "four opens". It seems to me that working to disqualify such projects sends 
an alarming signal to our ecosystem. The reason we made the big tent to begin 
with was to set a tone of inclusion. This whole discussion seems like a step 
backward. What problem are we trying to solve, exactly?

If we want to have tags to signal team diversity, that's fine. We do that now. 
But setting arbitrary requirements for big tent inclusion based on who 
participates definitely sounds like a mistake.

--
Adrian

> On Aug 1, 2016, at 5:11 AM, Sean Dague  wrote:
> 
>> On 07/31/2016 02:29 PM, Doug Hellmann wrote:
>> Excerpts from Steven Dake (stdake)'s message of 2016-07-31 18:17:28 +0000:
>>> Kevin,
>>> 
>>> Just assessing your numbers, the team:diverse-affiliation tag covers what
>>> is required to maintain that tag.  It covers more than core reviewers -
>>> also covers commits and reviews.
>>> 
>>> See:
>>> https://github.com/openstack/governance/blob/master/reference/tags/team_div
>>> erse-affiliation.rst
>>> 
>>> 
>>> I can tell you from founding 3 projects with the team:diverse-affiliation
>>> tag (Heat, Magnum, Kolla) team:diverse-affiliation is a very high bar to
>>> meet.  I don't think it's wise to have such strict requirements on single
>>> vendor projects as those objectively defined in team:diverse-affiliation.
>>> 
>>> But Doug's suggestion of timelines could make sense if the timelines gave
>>> plenty of time to meet whatever requirements make sense and the
>>> requirements led to some increase in diverse affiliation.
>> 
>> To be clear, I'm suggesting that projects with team:single-vendor be
>> given enough time to lose that tag. That does not require them to grow
>> diverse enough to get team:diverse-affiliation.
> 
> The idea of 3 cycles to lose the single-vendor tag sounds very
> reasonable to me. This also is very much along the spirit of the tag in
> that it should be one of the top priorities of the team to work on this.
> I'd be in favor.
> 
>-Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][heat][infra] breakage of Sahara gate and images from openstack.org

2016-08-01 Thread Luigi Toscano
On Monday, 1 August 2016 10:56:21 CEST Zane Bitter wrote:
> On 29/07/16 13:12, Luigi Toscano wrote:
> > Hi all,
> > the Sahara jobs on the gate run the scenario tests (from sahara-tests)
> > using the fake plugin, so no real Hadoop/Spark/BigData operations are
> > performed, but the other expected operations are executed on the
> > image. In order to do this we have long used this image:
> > http://tarballs.openstack.org/heat-test-image/fedora-heat-test-image.qcow2
> > 
> > which was updated early on this Friday (July 29th) from Fedora 22 to
> > Fedora 24 breaking our jobs with some cryptic error, maybe something
> > related to the repositories:
> > http://logs.openstack.org/46/335946/12/check/gate-sahara-tests-dsvm-scenar
> > io-nova-heat/5eeff52/logs/screen-sahara-eng.txt.gz?level=WARNING
> So AFAICT from the log:
> 
> "rpm -q xfsprogs" prints "package xfsprogs is not installed" which is
> expected if xfsprogs is not installed.
> 
> "yum install -y xfsprogs" redirects to "/usr/bin/dnf install -y
> xfsprogs" which is expected on F24.
> 
> dnf fails with "Error: Failed to synchronize cache for repo 'fedora'"
> which means it couldn't download the Fedora repository data.
> 
> "sudo mount -o data=writeback,noatime,nodiratime /dev/vdb
> /volumes/disk1" then fails, doubtlessly because xfsprogs is not installed.
> 
> The absence of "sudo" in the yum command (when it does appear in the
> mount command) is suspicious, but unlikely to be the reason it can't
> sync the cache.

This is why I mentioned the repositories, yes. 

> It's not obvious why this change of image would suddenly result in not
> being able to install packages. It seems more likely that you've never
> been able to install packages, but the previous image had xfsprogs
> preinstalled and the new one doesn't. I don't know the specifics of how
> that image is built, but certainly Fedora has been making an ongoing
> effort to strip the cloud image back to basics.

But this is not a normal Fedora image. If I read project-config correctly, 
this is generated by this job:

http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/
jobs/heat.yaml#n34

From a brief chat on #heat on Friday it seems that the image is not gated or
checked or even used right now. Is that the case? The image is almost a simple
Fedora with a few extra packages:
http://git.openstack.org/cgit/openstack/heat-templates/tree/hot/software-config/test-image/build-heat-test-image.sh

-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara][heat][infra] breakage of Sahara gate and images from openstack.org

2016-08-01 Thread Zane Bitter

On 29/07/16 13:12, Luigi Toscano wrote:

Hi all,
the Sahara jobs on the gate run the scenario tests (from sahara-tests) using
the fake plugin, so no real Hadoop/Spark/BigData operations are performed, but
the other expected operations are executed on the image. In order to do
this we have long used this image:
http://tarballs.openstack.org/heat-test-image/fedora-heat-test-image.qcow2

which was updated early on this Friday (July 29th) from Fedora 22 to Fedora 24
breaking our jobs with some cryptic error, maybe something related to the
repositories:
http://logs.openstack.org/46/335946/12/check/gate-sahara-tests-dsvm-scenario-nova-heat/5eeff52/logs/screen-sahara-eng.txt.gz?level=WARNING


So AFAICT from the log:

"rpm -q xfsprogs" prints "package xfsprogs is not installed" which is 
expected if xfsprogs is not installed.


"yum install -y xfsprogs" redirects to "/usr/bin/dnf install -y 
xfsprogs" which is expected on F24.


dnf fails with "Error: Failed to synchronize cache for repo 'fedora'" 
which means it couldn't download the Fedora repository data.


"sudo mount -o data=writeback,noatime,nodiratime /dev/vdb 
/volumes/disk1" then fails, doubtlessly because xfsprogs is not installed.


The absence of "sudo" in the yum command (when it does appear in the 
mount command) is suspicious, but unlikely to be the reason it can't 
sync the cache.
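
If the job scripts do need to install the package themselves, the fix would
be a sketch like this (hypothetical helper, not the actual job or Sahara
code; assumes passwordless sudo on the test node):

    import subprocess

    def ensure_package(name):
        # rpm -q exits non-zero when the package is absent
        if subprocess.call(["rpm", "-q", name]) != 0:
            # run the install as root, which the log suggests was missing
            subprocess.check_call(["sudo", "dnf", "install", "-y", name])

    ensure_package("xfsprogs")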


It's not obvious why this change of image would suddenly result in not 
being able to install packages. It seems more likely that you've never 
been able to install packages, but the previous image had xfsprogs 
preinstalled and the new one doesn't. I don't know the specifics of how 
that image is built, but certainly Fedora has been making an ongoing 
effort to strip the cloud image back to basics.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Shamail Tahir
On Mon, Aug 1, 2016 at 7:58 AM, Doug Hellmann  wrote:

> Excerpts from Sean Dague's message of 2016-08-01 08:33:06 -0400:
> > On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> > > One of the outcomes of the discussion at the leadership training
> > > session earlier this year was the idea that the TC should set some
> > > community-wide goals for accomplishing specific technical tasks to
> > > get the projects synced up and moving in the same direction.
> > >
> > > After several drafts via etherpad and input from other TC and SWG
> > > members, I've prepared the change for the governance repo [1] and
> > > am ready to open this discussion up to the broader community. Please
> > > read through the patch carefully, especially the "goals/index.rst"
> > > document which tries to lay out the expectations for what makes a
> > > good goal for this purpose and for how teams are meant to approach
> > > working on these goals.
> > >
> > > I've also prepared two patches proposing specific goals for Ocata
> > > [2][3].  I've tried to keep these suggested goals for the first
> > > iteration limited to "finish what we've started" type items, so
> > > they are small and straightforward enough to be able to be completed.
> > > That will let us experiment with the process of managing goals this
> > > time around, and set us up for discussions that may need to happen
> > > at the Ocata summit about implementation.
> > >
> > > For future cycles, we can iterate on making the goals "harder", and
> > > collecting suggestions for goals from the community during the forum
> > > discussions that will happen at summits starting in Boston.
> > >
> > > Doug
> > >
> > > [1] https://review.openstack.org/349068 describe a process for
> managing community-wide goals
> > > [2] https://review.openstack.org/349069 add ocata goal "support
> python 3.5"
> > > [3] https://review.openstack.org/349070 add ocata goal "switch to
> oslo libraries"
> >
> > I like the direction this is headed. And I think for the test items, it
> > works pretty well.
> >
> > I'm trying to think about how we'd use a model like this to support
> > something a little more abstract such as making upgrades easier. Where
> > we've got a few things that we know get in the way (policy in files,
> > rootwrap rules, paste ini changes), as well as validation, as well as
> > configuration changes. And what it looks like for persistently important
> > items which are going to take more than a cycle to get through.
>
> If we think the goal can be completed in a single cycle, then those
> specific items can just be used to define "done" ("all policy
> definitions have defaults in code and the service works without a policy
> configuration file" or whatever). If the goal cannot be completed in a
> single cycle, then it would need to be broken up into phases.
>
> >
> > Definitely seems worth giving it a shot on the current set of items, and
> > see how it fleshes out.
> >
> > My only concern at this point is it seems like we're building nested
> > data structures that people are going to want to parse into some kind of
> > visualization in RST, which is a suboptimal parsing format. If we know
> > that people want to parse this in advance, yamling it up might be in
> > order. Because this mostly looks like it would reduce to one of those
> > green/yellow/red checkerboards by project and task.
>
> That's a good idea. How about if I commit to translate what we end
> up with to YAML during Ocata, but we evolve the first version using
> the RST since that's simpler to review for now?

We have created a tracker file[1][2] for user stories (minor changes
pending based on feedback) in the Product WG repo.  We are currently
working with the infra team to get a visualization tool deployed that shows
the status for each artifact and provides links so that people can get more
details as necessary.  Could something similar be (re)used here?

I also have a general question about whether goals could be documented as
user stories[3]?


>


> Doug
>
> >
> > -Sean
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time

[1]
https://github.com/openstack/openstack-user-stories/blob/master/doc/source/tracker_overview.rst
[2]
https://github.com/openstack/openstack-user-stories/blob/master/user-story-tracker.json
[3]
https://github.com/openstack/openstack-user-stories/blob/master/user-story-template.rst
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Sean Dague
On 08/01/2016 10:28 AM, Davanum Srinivas wrote:
> Sean,
> 
> > So we will programmatically test the metrics (if we are not doing that
> already) to apply/remove "team:single-vendor" tag:
> 
> https://governance.openstack.org/reference/tags/team_single-vendor.html
> 
> And trigger exit when the tag is present for more than 3 cycles in a
> row (say as of release date?)
> 
> Thanks,
> -- Dims

An approach like that would be fine with me. I'm not sure we have a
formal proposal yet, but 3 cycles seems like a reasonable time frame.
I'm happy to debate if people think there are better timeframes instead.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [searchlight] What do we need in notification payload?

2016-08-01 Thread McLellan, Steven
In our (Searchlight's) ideal world, every notification about a resource
would contain the full representation of that resource (for instance,
equivalent to the API response for a resource), because it means that each
notification on its own can be treated as the current state at that time
without having to potentially handle multiple incremental updates to a
resource. That isn't the case at the moment in lots of places either for
historic reasons or because the implementation would be complex or
expensive. 

With tags as an example, while I understand why that's the case (the API
treats tags as a separate entity and it's implemented as a separate
database table), it doesn't make a lot of logical sense to me to treat
adding a tag to a network as a separate event from (for instance) renaming
it. In both cases as far as a consumer of notifications is concerned, some
piece of information about the network changed. That said, it's obviously
up to each project how they generate notifications for events (and thanks
for taking this one on), and I understand why you don't want to add a huge
amount of complexity to the plugin code.

One thing that would be useful is if adding a tag updated the resource's
'updated_at', and that timestamp were included in the notification. That
allows us to determine whether a notification is more up to date than the
response to an API request made shortly before it. I guess, though, that
this will also be difficult in terms of how the plugin interacts with the
core code?
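
As a sketch of how we would use that on the consuming side (hypothetical
names and timestamp format, not Searchlight's actual code):

    from datetime import datetime

    FMT = "%Y-%m-%dT%H:%M:%SZ"  # assumed timestamp format

    def should_apply(notification, indexed_doc):
        # Only apply a notification if it is newer than the state we have
        # already indexed, e.g. because a full API sync ran after the
        # event was emitted.
        n_ts = datetime.strptime(notification["payload"]["updated_at"], FMT)
        i_ts = datetime.strptime(indexed_doc["updated_at"], FMT)
        return n_ts > i_ts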

Thanks,

Steve

On 8/1/16, 3:33 AM, "Hirofumi Ichihara" 
wrote:

>Hi,
>
>I'm trying to solve an issue[1, 2] where no notification is sent when a
>tag is updated. I'm worried about the payload. My solution just outputs
>the added tag, resource type, and resource id as the payload. However,
>there was a comment which mentioned that the payload should have more
>information. I guess that means, for instance, when we add a tag to a
>network, we could include the network's name, status, description, shared
>flag, and so on in the notification payload.
>
>If the Tag plugin already had such information, I might not disagree with
>that opinion, but the plugin doesn't have it now. So we would need to add
>a DB read to each Tag API call just for notifications. I wouldn't go as
>far as adding such an extra step.
>
>Does my current solution provide enough information for searchlight or
>other notification systems?
>
>[1]: https://bugs.launchpad.net/neutron/+bug/1560226
>[2]: https://review.openstack.org/#/c/298133/
>
>Thanks,
>Hirofumi
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Davanum Srinivas
Sean,

So we will programmatically test the metrics (if we are not doing that
already) to apply/remove "team:single-vendor" tag:

https://governance.openstack.org/reference/tags/team_single-vendor.html

And trigger exit when the tag is present for more than 3 cycles in a
row (say as of release date?)
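
For illustration, the core commit check from the tag definition reduces to
something like this sketch (input data hypothetical; the real evaluation
also looks at reviews and core team membership):

    # Sketch only: flag a deliverable as single-vendor when one company
    # accounts for more than 90% of commits, per the tag definition.
    def is_single_vendor(commits_by_company, threshold=0.9):
        total = sum(commits_by_company.values())
        if total == 0:
            return False
        return max(commits_by_company.values()) / float(total) > threshold

    print(is_single_vendor({"CompanyA": 95, "CompanyB": 5}))  # True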

Thanks,
-- Dims

On Mon, Aug 1, 2016 at 10:19 AM, Sean Dague  wrote:
> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
>> Thierry, Ben, Doug,
>>
>> How can we distinguish between "Project is doing the right thing, but
>> others are not joining" vs "Project is actively trying to keep people
>> out"?
>
> I think at some level, it's not really that different. If we treat them
> as different, everyone will always believe they did all the right
> things, but got no results. 3 cycles should be plenty of time to drop
> single entity contributions below 90%. That means prioritizing bugs /
> patches from outside groups (to drop below 90% on code commits),
> mentoring every outside member that provides feedback (to drop below 90%
> on reviews), shifting development resources towards mentoring / docs /
> on ramp exercises for others in the community (to drop below 90% on core
> team).
>
> Digging out of a single vendor status is hard, and requires making that
> your top priority. If teams aren't interested in putting that ahead of
> development work, that's fine, but that doesn't make it a sustainable
> OpenStack project.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Jay Pipes

On 08/01/2016 08:33 AM, Sean Dague wrote:

On 07/29/2016 04:55 PM, Doug Hellmann wrote:

One of the outcomes of the discussion at the leadership training
session earlier this year was the idea that the TC should set some
community-wide goals for accomplishing specific technical tasks to
get the projects synced up and moving in the same direction.

After several drafts via etherpad and input from other TC and SWG
members, I've prepared the change for the governance repo [1] and
am ready to open this discussion up to the broader community. Please
read through the patch carefully, especially the "goals/index.rst"
document which tries to lay out the expectations for what makes a
good goal for this purpose and for how teams are meant to approach
working on these goals.

I've also prepared two patches proposing specific goals for Ocata
[2][3].  I've tried to keep these suggested goals for the first
iteration limited to "finish what we've started" type items, so
they are small and straightforward enough to be able to be completed.
That will let us experiment with the process of managing goals this
time around, and set us up for discussions that may need to happen
at the Ocata summit about implementation.

For future cycles, we can iterate on making the goals "harder", and
collecting suggestions for goals from the community during the forum
discussions that will happen at summits starting in Boston.

Doug

[1] https://review.openstack.org/349068 describe a process for managing 
community-wide goals
[2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
[3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
libraries"


I like the direction this is headed. And I think for the test items, it
works pretty well.


I commented on the reviews, but I disagree with both the direction and 
the proposed implementation of this.


In short, I think there's too much stick and not enough carrot. We 
should create natural incentives for projects to achieve desired 
alignment in certain areas, but placing mandates on project teams in a 
diverse community like OpenStack is not useful.


The consequences of a project team *not* meeting these proposed mandates 
has yet to be decided (and I made that point on the governance patch 
review). But let's say that the consequences are that a project is 
removed from the OpenStack big tent if they fail to complete these 
"shared objectives".


What will we do when Swift decides that they have no intention of using 
oslo.messaging or oslo.config because they can't stand fundamentals 
about those libraries? Are we going to kick Swift, a founding project of 
OpenStack, out of the OpenStack big tent?


Likewise, what if the Manila project team decides they aren't interested 
in supporting Python 3.5 or a particular greenlet library du jour that 
has been mandated upon them? Is the only filesystem-as-a-service project 
going to be booted from the tent?


When it comes to the internal implementation of projects, my strong 
belief is that we should let the project teams be laboratories of 
innovation and avoid placing mandates on them.


Let projects choose from a set of vetted options for important libraries 
or frameworks and allow a project to pave its own road if the project 
team can justify a reason for that which outweighs any vetted choice 
(Zaqar's choice to use Falcon fits this kind of thing).


Finally, instead of these shared OpenStack-wide goals being a different 
stick-thing for the TC to use, why not just make tags that projects can 
*choose* to pursue, therefore building in the incentive (as opposed to 
the punishment) to align with a direction the TC feels is a good one.


You could have tags like:

 supports:python-3.5

or

 supports:oslo-only

or things like that. Project teams could then endeavour to achieve said 
tags if they feel that such a tag absolutely aligns with the team's goals.


Just my two cents,
-jay


I'm trying to think about how we'd use a model like this to support
something a little more abstract such as making upgrades easier. Where
we've got a few things that we know get in the way (policy in files,
rootwrap rules, paste ini changes), as well as validation, as well as
configuration changes. And what it looks like for persistently important
items which are going to take more than a cycle to get through.

Definitely seems worth giving it a shot on the current set of items, and
see how it fleshes out.

My only concern at this point is it seems like we're building nested
data structures that people are going to want to parse into some kind of
visualization in RST, which is a sub optimal parsing format. If we know
that people want to parse this in advance, yamling it up might be in
order. Because this mostly looks like it would reduce to one of those
green/yellow/red checker boards by project and task.

-Sean



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Sean Dague
On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
> Thierry, Ben, Doug,
> 
> How can we distinguish between "Project is doing the right thing, but
> others are not joining" vs "Project is actively trying to keep people
> out"?

I think at some level, it's not really that different. If we treat them
as different, everyone will always believe they did all the right
things, but got no results. 3 cycles should be plenty of time to drop
single entity contributions below 90%. That means prioritizing bugs /
patches from outside groups (to drop below 90% on code commits),
mentoring every outside member that provides feedback (to drop below 90%
on reviews), shifting development resources towards mentoring / docs /
on ramp exercises for others in the community (to drop below 90% on core
team).

Digging out of a single vendor status is hard, and requires making that
your top priority. If teams aren't interested in putting that ahead of
development work, that's fine, but that doesn't make it a sustainable
OpenStack project.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][dvr][fip] fg device allocated private ip address

2016-08-01 Thread Brian Haley

On 07/31/2016 06:27 AM, huangdenghui wrote:

Hi
   Now we have a spec named subnet service types, which provides the
capability of allowing different ports of a network to allocate IP addresses
from different subnets. In the current implementation of DVR, the fip is also
distributed on every compute node, and floating IPs and fg devices' IPs are
both allocated from the external network's subnets. In a large public cloud
deployment, the current implementation consumes a lot of public IP addresses.
Do we need an RFE to apply the subnet service types spec to resolve this
problem? Any thoughts?


Hi,

This is going to be covered in the existing RFE for subnet service types [1]. 
We currently have two reviews in progress for CRUD [2] and CLI [3]; the IPAM
changes are next.


-Brian

[1] https://review.openstack.org/#/c/300207/
[2] https://review.openstack.org/#/c/337851/
[3] https://review.openstack.org/#/c/342976/
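
For illustration, the core idea of subnet service types is a filter along
these lines (a sketch with hypothetical names, not the implementation under
review): a subnet with service_types only serves ports whose device_owner
matches, while a subnet with no service_types serves any port.

    def candidate_subnets(subnets, device_owner):
        # IPAM would allocate only from subnets whose service_types allow
        # this port's device_owner (empty service_types == no restriction).
        return [s for s in subnets
                if not s.get("service_types")
                or device_owner in s["service_types"]]

    subnets = [
        {"cidr": "203.0.113.0/24", "service_types": []},
        {"cidr": "198.51.100.0/24",
         "service_types": ["network:floatingip_agent_gateway"]},
    ]
    print(candidate_subnets(subnets, "network:floatingip_agent_gateway"))

In a real deployment both subnets would carry explicit service_types, so fg
ports could be confined to a private range while floating IPs keep using
public addresses.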

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Davanum Srinivas
Thierry, Ben, Doug,

How can we distinguish between "Project is doing the right thing, but
others are not joining" vs "Project is actively trying to keep people
out"?

Thanks,
Dims

On Mon, Aug 1, 2016 at 9:32 AM, Ben Swartzlander  wrote:
> On 08/01/2016 03:39 AM, Thierry Carrez wrote:
>>
>> Steven Dake (stdake) wrote:
>>>
>>> On 7/31/16, 11:29 AM, "Doug Hellmann"  wrote:

 [...]
 To be clear, I'm suggesting that projects with team:single-vendor be
 given enough time to lose that tag. That does not require them to grow
 diverse enough to get team:diverse-affiliation.
>>>
>>>
>>> That makes sense and doesn't send the wrong message.  I wasn't trying to
>>> suggest that either; was just pointing out Kevin's numbers are more in
>>> line with diverse-affiliation than single vendor.  My personal thoughts
>>> are single vendor projects are ok in OpenStack if they are undertaking
>>> community-building activities to increase their diversity of
>>> contributors.
>>
>>
>> Basically my position on this is: OpenStack is about providing open
>> collaboration spaces so that multiple organizations and individuals can
>> collaborate (on a level playing ground) to solve a set of issues. It's
>> difficult to have a requirement of a project having a diversity of
>> affiliation before it can join, because of the chicken-and-egg issue
>> between visibility and affiliation-diversity. So we totally accept
>> single-vendor projects as official OpenStack projects.
>>
>> But if a project is persistently single-vendor after some time and
>> nobody seems interested to join it, the technical value of that project
>> being "in" OpenStack rather than a separate project in the OpenStack
>> ecosystem of projects is limited. It's limited for OpenStack (why
>> provide resources to support a project that is obviously only beneficial
>> to one organization?), and it's limited to the organization itself (why
>> go through the OpenStack-specific open processes when you could shortcut
>> it with internal tools and meetings? why accept the oversight of the
>> Technical Committee?).
>
>
> Thierry, I think you underestimate the value organizations perceive they get
> from projects being in the tent. Even if a project is single vendor, the
> halo effect of OpenStack and the access to free resources (the infra cloud,
> and more importantly the world-class infra TEAM) more than make up for any
> downsides associated with following established processes.
>
> I strongly doubt any organization would choose to remove a project from
> OpenStack for the reasons you mention. If the community doesn't want these
> kinds of projects in the big tent then the community probably needs to push
> them out.
>
> -Ben Swartzlander
>
>
>> So the idea is to find a way for projects that realize that they won't
>> attract a significant share of external contributions to move to an
>> externally-governed project. I'm not sure we can use a strict deadline
>> -- some projects might still be single-vendor after a year but without
>> structurally resisting contributions. But being able to trigger a review
>> after some time, to assess if we have reasons to think it will improve
>> in the future (or not), sounds like a good idea.
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Ben Swartzlander

On 08/01/2016 03:39 AM, Thierry Carrez wrote:

Steven Dake (stdake) wrote:

On 7/31/16, 11:29 AM, "Doug Hellmann"  wrote:

[...]
To be clear, I'm suggesting that projects with team:single-vendor be
given enough time to lose that tag. That does not require them to grow
diverse enough to get team:diverse-affiliation.


That makes sense and doesn't send the wrong message.  I wasn't trying to
suggest that either; was just pointing out Kevin's numbers are more in
line with diverse-affiliation than single vendor.  My personal thoughts
are single vendor projects are ok in OpenStack if they are undertaking
community-building activities to increase their diversity of contributors.


Basically my position on this is: OpenStack is about providing open
collaboration spaces so that multiple organizations and individuals can
collaborate (on a level playing ground) to solve a set of issues. It's
difficult to have a requirement of a project having a diversity of
affiliation before it can join, because of the chicken-and-egg issue
between visibility and affiliation-diversity. So we totally accept
single-vendor projects as official OpenStack projects.

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings? why accept the oversight of the
Technical Committee?).


Thierry, I think you underestimate the value organizations perceive they
get from projects being in the tent. Even if a project is single vendor, 
the halo effect of OpenStack and the access to free resources (the infra 
cloud, and more importantly the world-class infra TEAM) more than make 
up for any downsides associated with following established processes.


I strongly doubt any organization would choose to remove a project from 
OpenStack for the reasons you mention. If the community doesn't want 
these kinds of projects in the big tent then the community probably 
needs to push them out.


-Ben Swartzlander



So the idea is to find a way for projects that realize that they won't
attract a significant share of external contributions to move to an
externally-governed project. I'm not sure we can use a strict deadline
-- some projects might still be single-vendor after a year but without
structurally resisting contributions. But being able to trigger a review
after some time, to assess if we have reasons to think it will improve
in the future (or not), sounds like a good idea.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Andrew Laski


On Mon, Aug 1, 2016, at 08:08 AM, Jay Pipes wrote:
> On 07/31/2016 10:03 PM, Alex Xu wrote:
> > 2016-07-28 22:31 GMT+08:00 Jay Pipes:
> >
> > On 07/20/2016 11:25 PM, Alex Xu wrote:
> >
> > One more for end users: Capabilities Discovery API, it should be
> > 'GET
> > /resource_providers/tags'. Or a proxy API from nova to the placement
> > API?
> >
> >
> > I would imagine that it should be a `GET
> > /resource-providers/{uuid}/capabilities` call on the placement API,
> > only visible to cloud administrators.
> >
> > When the end-user requests a capability which isn't supported by the
> > cloud, the end-user needs to wait for a moment after sending the boot
> > request (since we use an async call in nova), and then gets an instance
> > with error status. The error info is "no valid host". If this is the
> > only way for users to discover the capabilities in the cloud, that
> > sounds bad. So we need an API for the end-user to discover the
> > capabilities which are supported in the cloud; the end-user can query
> > this API before sending the boot request.
> 
> Ah, yes, totally agreed. I'm not sure if that is something that we'd 
> want to put as a normal-end-user-callable API endpoint in the placement 
> API, but certainly we could do something like this in the placement API:
> 
>   GET /capabilities
> 
> Would return a list of capability strings representing the distinct set 
> of capabilities that any resource provider in the system exposed. It 
> would not give the user any counts of resource providers that expose the 
> capabilities, nor would it provide any information regarding which 
> resource providers had any available inventory for a consumer to use.

This is what I had imagined based on the midcycle discussion of this
topic. Just information about what is possible to request, and no
information about what is available.
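
Something like this sketch, say (endpoint and payload are hypothetical,
since the API is still under discussion):

    import requests

    # A client would learn only which capability strings exist, with no
    # counts and no availability information:
    resp = requests.get("http://placement.example.com/capabilities",
                        headers={"X-Auth-Token": "..."})
    # e.g. {"capabilities": ["hw:cpu:avx2", "storage:ssd", "compute:gpu"]}
    print(resp.json()["capabilities"])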

> 
> Nova could then either have a proxy API call that would add the normal 
> end-user interface to that information or completely hide it from end 
> users via the existing flavors interface?

Please no more proxy APIs :)

> 
> Thoughts?
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2016-07-31 15:59:56 +0000:
> This sounds good to me.
> 
> What about making it iterative but with a delayed start. Something like:
> 
> There is a grace period of 1 year for projects that newly join the big tent,
> after which the following criteria are evaluated at the end of each OpenStack
> release cycle to keep the project for the next cycle. The project should not
> have active cores from one company amounting to more than 45% of the active
> core membership. If that number is higher, the project is given notice that
> it is under-diverse and has 6 months remaining in the big tent to show it is
> attempting to increase diversity by shifting the ratio to a more diverse
> active core membership. The active core membership percentage of the
> over-represented company, called X%, must be shown to be reduced by 25% or
> to reach 45%, i.e. the new ceiling is max(X% * 75%, 45%). If the criterion
> is met, the project can remain in the big tent and a new cycle begins
> (another notification and 6 months if still out of compliance).
> 
> This should give projects that are, or become, under-diverse a path towards
> working on project membership diversity. It gives projects that are very far
> out of whack a while to fix it. It basically gives over-represented projects:
>  * (80%, 100%] -  gets 18 months to fix it
>  * (60%, 80%] - gets 12 months
>  * (45%, 60%] - gets 6 months
> 
> Thoughts? The numbers should be fairly easy to change to make for different 
> amounts of grace period.
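
For concreteness, the proposed reduction rule reproduces the buckets above;
a quick sketch (numbers hypothetical):

    # Each 6-month evaluation requires the over-represented company's
    # share X to drop to max(0.75 * X, 45).
    def months_to_compliance(x):
        months = 0
        while x > 45:
            x = max(x * 0.75, 45.0)
            months += 6
        return months

    for share in (100, 80, 60):
        print(share, "->", months_to_compliance(share), "months")
    # 100 -> 18, 80 -> 12, 60 -> 6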

I think I understand the motivation behind a progressive deadline like
this, but I'd rather keep the implementation simple with a single
deadline, even if that means we give some teams what appears to be a
more generous amount of time than they need.

Doug

> 
> Thanks,
> Kevin
> 
> From: Doug Hellmann [d...@doughellmann.com]
> Sent: Sunday, July 31, 2016 7:16 AM
> To: openstack-dev
> Subject: [openstack-dev] [tc] persistently single-vendor projects
> 
> Starting a new thread from "Re: [openstack-dev] [Kolla] [Fuel] [tc]
> Looks like Mirantis is getting Fuel CCP (docker/k8s) kicked off"
> 
> Excerpts from Thierry Carrez's message of 2016-07-31 11:37:44 +0200:
> > Doug Hellmann wrote:
> > > There is only one way for a repository's contents to be considered
> > > part of the big tent: It needs to be listed in the projects.yaml
> > > file in the openstack/governance repository, associated with a
> > > deliverable from a team that has been accepted as a big tent member.
> > >
> > > The Fuel team has stated that they are not ready to include the
> > > work in these new repositories under governance, and indeed the
> > > repositories are not listed in the set of deliverables for the Fuel
> > > team [1].
> > >
> > > Therefore, the situation is clear, to me: They are not part of the
> > > big tent.
> >
> > Reading this thread after a week off, I'd like to +1 Doug's
> > interpretation since it was referenced to describe the status quo.
> >
> > As others have said, we wouldn't even have that discussion if the new
> > repositories didn't use "fuel" as part of the naming. We probably
> > wouldn't have that discussion either if the Fuel team affiliation was
> > more diverse and the new repositories were an experiment of a specific
> > subgroup of that team.
> >
> > NB: I *do* have some concerns about single-vendor OpenStack projects
> > that don't grow more diverse affiliations over time, but that's a
> > completely separate topic.
> 
> I'm starting to think that perhaps we should add some sort of
> expectation of a time-frame for projects that join the big tent as
> single-vendor to attract other contributors.
> 
> We removed the requirement that new projects need to have some
> minimal level of diversity when they join because projects asserted
> that they would have a better chance of attracting other contributors
> after becoming official. It might focus the team's efforts on that
> priority if we said that after a year or 18 months without any
> increased diversity, the project would be removed from the big tent.
> 
> Doug
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2016-08-01 08:33:06 -0400:
> On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> > One of the outcomes of the discussion at the leadership training
> > session earlier this year was the idea that the TC should set some
> > community-wide goals for accomplishing specific technical tasks to
> > get the projects synced up and moving in the same direction.
> > 
> > After several drafts via etherpad and input from other TC and SWG
> > members, I've prepared the change for the governance repo [1] and
> > am ready to open this discussion up to the broader community. Please
> > read through the patch carefully, especially the "goals/index.rst"
> > document which tries to lay out the expectations for what makes a
> > good goal for this purpose and for how teams are meant to approach
> > working on these goals.
> > 
> > I've also prepared two patches proposing specific goals for Ocata
> > [2][3].  I've tried to keep these suggested goals for the first
> > iteration limited to "finish what we've started" type items, so
> > they are small and straightforward enough to be able to be completed.
> > That will let us experiment with the process of managing goals this
> > time around, and set us up for discussions that may need to happen
> > at the Ocata summit about implementation.
> > 
> > For future cycles, we can iterate on making the goals "harder", and
> > collecting suggestions for goals from the community during the forum
> > discussions that will happen at summits starting in Boston.
> > 
> > Doug
> > 
> > [1] https://review.openstack.org/349068 describe a process for managing 
> > community-wide goals
> > [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> > [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> > libraries"
> 
> I like the direction this is headed. And I think for the test items, it
> works pretty well.
> 
> I'm trying to think about how we'd use a model like this to support
> something a little more abstract such as making upgrades easier. Where
> we've got a few things that we know get in the way (policy in files,
> rootwrap rules, paste ini changes), as well as validation, as well as
> configuration changes. And what it looks like for persistently important
> items which are going to take more than a cycle to get through.

If we think the goal can be completed in a single cycle, then those
specific items can just be used to define "done" ("all policy
definitions have defaults in code and the service works without a policy
configuration file" or whatever). If the goal cannot be completed in a
single cycle, then it would need to be broken up into phases.
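
As an example of the "done" definition for that first item, policy defaults
in code with oslo.policy look roughly like this (a sketch only; the rule
names are hypothetical):

    from oslo_config import cfg
    from oslo_policy import policy

    rules = [
        policy.RuleDefault("admin_required", "role:admin"),
        policy.RuleDefault("servers:create", "rule:admin_required"),
    ]

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(rules)
    # The service can now enforce policy with no policy file on disk:
    print(enforcer.enforce("servers:create", {}, {"roles": ["admin"]}))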

> 
> Definitely seems worth giving it a shot on the current set of items, and
> see how it fleshes out.
> 
> My only concern at this point is it seems like we're building nested
> data structures that people are going to want to parse into some kind of
> visualization in RST, which is a suboptimal parsing format. If we know
> that people want to parse this in advance, yamling it up might be in
> order. Because this mostly looks like it would reduce to one of those
> green/yellow/red checkerboards by project and task.

That's a good idea. How about if I commit to translate what we end
up with to YAML during Ocata, but we evolve the first version using
the RST since that's simpler to review for now?
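
To make that concrete, the YAML version could look something like this
(structure hypothetical; shown loaded from Python so a tool could render
the status board Sean describes):

    import yaml

    doc = yaml.safe_load("""
    goal: support-python-3.5
    projects:
      nova:   {status: green,  note: "py35 unit test jobs voting"}
      glance: {status: yellow, note: "functional tests in progress"}
    """)

    # One row per project on the green/yellow/red board:
    for name, info in sorted(doc["projects"].items()):
        print(name, info["status"], "-", info["note"])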

Doug

> 
> -Sean
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 1, 2016 1:09 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage
> Capabilities with ResourceProvider
> 
> On 07/31/2016 10:03 PM, Alex Xu wrote:
> > 2016-07-28 22:31 GMT+08:00 Jay Pipes:
> >
> > On 07/20/2016 11:25 PM, Alex Xu wrote:
> >
> > One more for end users: Capabilities Discovery API, it should be
> > 'GET
> > /resource_providers/tags'. Or a proxy API from nova to the placement
> > API?
> >
> >
> > I would imagine that it should be a `GET
> > /resource-providers/{uuid}/capabilities` call on the placement API,
> > only visible to cloud administrators.
> >
> > When the end-user requests a capability which isn't supported by the
> > cloud, the end-user needs to wait for a moment after sending the boot
> > request (since we use an async call in nova), and then gets an instance
> > with error status. The error info is "no valid host". If this is the
> > only way for users to discover the capabilities in the cloud, that
> > sounds bad. So we need an API for the end-user to discover the
> > capabilities which are supported in the cloud; the end-user can query
> > this API before sending the boot request.
> 
> Ah, yes, totally agreed. I'm not sure if that is something that we'd want to 
> put as a
> normal-end-user-callable API endpoint in the placement API, but certainly we
> could do something like this in the placement API:
> 
>   GET /capabilities
> 
> Would return a list of capability strings representing the distinct set of 
> capabilities
> that any resource provider in the system exposed. It would not give the user 
> any
> counts of resource providers that expose the capabilities, nor would it 
> provide
> any information regarding which resource providers had any available inventory
> for a consumer to use.
> 
> Nova could then either have a proxy API call that would add the normal 
> end-user
> interface to that information or completely hide it from end users via the 
> existing
> flavors interface?
[Mooney, Sean K] The main drawback with that, as an end user, is that you
cannot tell what combinations of capabilities will work together. For
example, a cloud might provide SSDs and GPUs, but they may not be provided
on the same host, or may no longer be available on the same host, though in
the latter case "no valid host" would be the expected behavior. That said,
this can be somewhat mitigated by operators creating flavors that will work
with their infra, which is a reasonable requirement for us to ask them to
fulfill, but tenants could still upload images with capability requests, or
craft boot requests, that would still fail. You would basically need to
return a list of capability adjacency lists so that the end user could
build the matrix of which features can be requested together. That would
potentially be computationally intensive in the API, but MySQL should be
able to compute it efficiently.
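
For illustration, a sketch of that adjacency computation (data
hypothetical):

    from itertools import combinations

    providers = {
        "host1": {"storage:ssd", "hw:cpu:avx2"},
        "host2": {"compute:gpu", "hw:cpu:avx2"},
    }

    # Record which capability pairs co-occur on at least one provider,
    # i.e. which features can be requested together.
    adjacency = {}
    for caps in providers.values():
        for a, b in combinations(sorted(caps), 2):
            adjacency.setdefault(a, set()).add(b)
            adjacency.setdefault(b, set()).add(a)

    # ssd+gpu never co-occur here, so that combination could never schedule:
    print("compute:gpu" in adjacency.get("storage:ssd", set()))  # False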
> 
> Thoughts?
> 
> Best,
> -jay
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [UI] Version

2016-08-01 Thread Jiri Tomasek



On 27.7.2016 15:18, Steven Hardy wrote:

On Wed, Jul 27, 2016 at 08:41:32AM -0300, Honza Pokorny wrote:

Hello folks,

As the tripleo-ui project is quickly maturing, it might be time to start
versioning our code.  As of now, the version is set to 0.0.1 and that
hardly reflects the state of the project.

What do you think?

I would like to see it released as part of the coordinated tripleo release,
e.g tagged each milestone along with all other projects where we assert the
release:cycle-with-intermediary tag:

https://github.com/openstack/governance/blob/master/reference/projects.yaml#L4448

Because tripleo-ui isn't yet fully integrated with TripleO (e.g. packaging,
undercloud installation and CI testing), we've not tagged it in the last
two milestone releases, but perhaps we can for the n-3 release?

https://review.openstack.org/#/c/324489/

https://review.openstack.org/#/c/340350/

When we do that, the versioning will align with all other TripleO
deliverables, solving the problem of the 0.0.1 version?

The steps to achieve this are:

1. Get per-commit builds of tripleo-ui working via delorean-current:

https://trunk.rdoproject.org/centos7-master/current/

2. Get the tripleo-ui package installed and configured as part of the
undercloud install (via puppet) - we might want to add a conditional to the
undercloud.conf so it's configurable (enabled by default?)

https://github.com/openstack/instack-undercloud/blob/master/elements/puppet-stack-config/puppet-stack-config.pp
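
As a sketch, such a conditional might surface in undercloud.conf roughly
like this (the option name is hypothetical):

  [DEFAULT]
  # Install and configure the TripleO UI on the undercloud
  enable_ui = true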

3. Get the remaining Mistral API pieces landed so it's fully functional

4. Implement some basic CI smoke tests to ensure the UI is at least
accessible.

Does that sequence make sense, or have I missed something?
Makes perfect sense. Here is the launchpad link that tracks the undercloud 
integration of the GUI:
https://blueprints.launchpad.net/tripleo-ui/+spec/instack-undercloud-ui-config


Jirka



Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] establishing project-wide goals

2016-08-01 Thread Sean Dague
On 07/29/2016 04:55 PM, Doug Hellmann wrote:
> One of the outcomes of the discussion at the leadership training
> session earlier this year was the idea that the TC should set some
> community-wide goals for accomplishing specific technical tasks to
> get the projects synced up and moving in the same direction.
> 
> After several drafts via etherpad and input from other TC and SWG
> members, I've prepared the change for the governance repo [1] and
> am ready to open this discussion up to the broader community. Please
> read through the patch carefully, especially the "goals/index.rst"
> document which tries to lay out the expectations for what makes a
> good goal for this purpose and for how teams are meant to approach
> working on these goals.
> 
> I've also prepared two patches proposing specific goals for Ocata
> [2][3].  I've tried to keep these suggested goals for the first
> iteration limited to "finish what we've started" type items, so
> they are small and straightforward enough to be able to be completed.
> That will let us experiment with the process of managing goals this
> time around, and set us up for discussions that may need to happen
> at the Ocata summit about implementation.
> 
> For future cycles, we can iterate on making the goals "harder", and
> collecting suggestions for goals from the community during the forum
> discussions that will happen at summits starting in Boston.
> 
> Doug
> 
> [1] https://review.openstack.org/349068 describe a process for managing 
> community-wide goals
> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
> libraries"

I like the direction this is headed. And I think for the test items, it
works pretty well.

I'm trying to think about how we'd use a model like this to support
something a little more abstract such as making upgrades easier. Where
we've got a few things that we know get in the way (policy in files,
rootwrap rules, paste ini changes), as well as validation, as well as
configuration changes. And what it looks like for persistently important
items which are going to take more than a cycle to get through.

Definitely seems worth giving it a shot on the current set of items, and
see how it fleshes out.

My only concern at this point is that it seems like we're building nested
data structures that people are going to want to parse into some kind of
visualization, in RST, which is a suboptimal parsing format. If we know in
advance that people want to parse this, yamling it up might be in order,
because this mostly looks like it would reduce to one of those
green/yellow/red checker boards by project and task.
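
For example (purely a hypothetical structure; the governance repo defines
no such file today), a YAML status file plus a trivial renderer could look
like:

    import yaml  # PyYAML, assumed available

    doc = yaml.safe_load("""
    goal: support-python-3.5
    projects:
      nova: green
      glance: yellow
      trove: red
    """)

    # Print the project/status board the RST tables would otherwise encode.
    for project, status in sorted(doc['projects'].items()):
        print('{:10s} {}'.format(project, status))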

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-01 Thread Sam Betts (sambetts)
On 01/08/2016 13:10, "Jim Rollenhagen"  wrote:

>Hey all,
>
>Our nova patch for networking[0] got stuck for a bit, because Nova needs
>to know which network interface is in use for the node, in order to
>properly set up the port.
>
>The code landed for network_interface follows the following order for
>what is actually used for the node:
>1) node.network_interface, if that is None:
>2) CONF.default_network_interface, if that is None:
>3) flat, if using neutron DHCP
>4) noop, if not using neutron DHCP
>
>The API will return None for node.network_interface (GET
>/v1/nodes/uuid). This won't work for Nova, because Nova can't know what
>CONF.default_network_interface is.
>
>I propose that if a network_interface is not sent in the node-create
>call, we write whatever the current default is, so that it is always set
>and not using an implicit value that could change.

+1 from me

>
>For nodes that exist before the upgrade, we do a database migration to
>set network_interface to CONF.default_network_interface (or if that's
>None, set to flat/noop depending on the DHCP provider).
>
>An alternative is to keep the existing behavior, but have the API return
>whatever interface is actually being used. This keeps the implicit
>behavior (which I don't think is good), and also doesn't provide a way
>to find out from the API if the interface is actually set, or if it's
>using the configurable default.
>
>I'm going to go ahead and execute on that plan now, do speak up if you
>have major objections to it.
>
>// jim
>
>[0] https://review.openstack.org/#/c/297895/
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-01 Thread Mathieu Mitchell



On 2016-08-01 8:10 AM, Jim Rollenhagen wrote:

Hey all,






I propose that if a network_interface is not sent in the node-create
call, we write whatever the current default is, so that it is always set
and not using an implicit value that could change.


Works for me and ensures an easier path down the road if that value were 
to change in a deployment.


Mathieu

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] network_interface, defaults, and explicitness

2016-08-01 Thread Jim Rollenhagen
Hey all,

Our nova patch for networking[0] got stuck for a bit, because Nova needs
to know which network interface is in use for the node, in order to
properly set up the port.

The code landed for network_interface follows the following order for
what is actually used for the node:
1) node.network_interface, if that is None:
2) CONF.default_network_interface, if that is None:
3) flat, if using neutron DHCP
4) noop, if not using neutron DHCP
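
As a sketch in Python (the helper itself is made up; it just restates the
order above):

    def effective_network_interface(node, conf, using_neutron_dhcp):
        # 1) an explicit value on the node wins
        if node.network_interface is not None:
            return node.network_interface
        # 2) then the configured default
        if conf.default_network_interface is not None:
            return conf.default_network_interface
        # 3) / 4) finally, fall back based on the DHCP provider
        return 'flat' if using_neutron_dhcp else 'noop'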

The API will return None for node.network_interface (GET
/v1/nodes/uuid). This won't work for Nova, because Nova can't know what
CONF.default_network_interface is.

I propose that if a network_interface is not sent in the node-create
call, we write whatever the current default is, so that it is always set
and not using an implicit value that could change.

For nodes that exist before the upgrade, we do a database migration to
set network_interface to CONF.default_network_interface (or if that's
None, set to flat/noop depending on the DHCP provider).

An alternative is to keep the existing behavior, but have the API return
whatever interface is actually being used. This keeps the implicit
behavior (which I don't think is good), and also doesn't provide a way
to find out from the API if the interface is actually set, or if it's
using the configurable default.

I'm going to go ahead and execute on that plan now, do speak up if you
have major objections to it.

// jim

[0] https://review.openstack.org/#/c/297895/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Sean Dague
On 07/31/2016 02:29 PM, Doug Hellmann wrote:
> Excerpts from Steven Dake (stdake)'s message of 2016-07-31 18:17:28 +:
>> Kevin,
>>
>> Just assessing your numbers, the team:diverse-affiliation tag covers what
>> is required to maintain that tag.  It covers more then core reviewers -
>> also covers commits and reviews.
>>
>> See:
>> https://github.com/openstack/governance/blob/master/reference/tags/team_div
>> erse-affiliation.rst
>>
>>
>> I can tell you from founding 3 projects with the team:diverse-affiliation
>> tag (Heat, Magnum, Kolla) team:deverse-affiliation is a very high bar to
>> meet.  I don't think its wise to have such strict requirements on single
>> vendor projects as those objectively defined in team:diverse-affiliation.
>>
>> But Doug's suggestion of timelines could make sense if the timelines gave
>> plenty of time to meet whatever requirements make sense and the
>> requirements led to some increase in diverse affiliation.
> 
> To be clear, I'm suggesting that projects with team:single-vendor be
> given enough time to lose that tag. That does not require them to grow
> diverse enough to get team:diverse-affiliation.

The idea of 3 cycles to loose the single-vendor tag sounds very
reasonable to me. This also is very much along the spirit of the tag in
that it should be one of the top priorities of the team to work on this.
I'd be in favor.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Jay Pipes

On 07/31/2016 10:03 PM, Alex Xu wrote:

2016-07-28 22:31 GMT+08:00 Jay Pipes:

On 07/20/2016 11:25 PM, Alex Xu wrote:

One more for end users: Capabilities Discovery API, it should be
'GET
/resource_providers/tags'. Or a proxy API from nova to the placement
API?


I would imagine that it should be a `GET
/resource-providers/{uuid}/capabilities` call on the placement API,
only visible to cloud administrators.

When the end-user requests a capability which isn't supported by the
cloud, the end-user has to wait for a while after sending the boot request
(since we use async calls in nova), and then gets an instance in error
status. The error info is "no valid host". If this is the only way for a
user to discover the capabilities of the cloud, that sounds bad. So we
need an API for the end-user to discover the Capabilities which are
supported in the cloud; the end-user can query this API before sending a
boot request.


Ah, yes, totally agreed. I'm not sure if that is something that we'd 
want to put as a normal-end-user-callable API endpoint in the placement 
API, but certainly we could do something like this in the placement API:


 GET /capabilities

Would return a list of capability strings representing the distinct set 
of capabilities that any resource provider in the system exposed. It 
would not give the user any counts of resource providers that expose the 
capabilities, nor would it provide any information regarding which 
resource providers had any available inventory for a consumer to use.
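
For illustration only (no such endpoint exists yet, and the response
format is just a guess), that might look like:

  GET /capabilities

  {
      "capabilities": [
          "COMPUTE_HW_CAP_CPU_AVX",
          "COMPUTE_HV_CAP_LIVE_MIGRATION",
          "STORAGE_WITH_SSD"
      ]
  }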


Nova could then either have a proxy API call that would add the normal 
end-user interface to that information or completely hide it from end 
users via the existing flavors interface?


Thoughts?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance][Glare] External locations design

2016-08-01 Thread Kairat Kushaev
Hello all,

I would like to start describing some design decisions we made in the Glare
code (https://review.openstack.org/#/q/topic:bp/glare-api+status:open). If
you are not familiar with Glare, I suggest you read the following spec:

https://github.com/openstack/glance-specs/blob/master/specs/newton/approved/glance/glare-api.rst

I hope it will help other folks to understand the Glare approach and provide
some constructive feedback for Glare. I think that we can also use the Glare
solution for Glance in the near future to address some drawbacks we have in
Glance.

Glare locations

Glance and Glare have the possibility to set some external url as an
image (artifact) location. This feature is quite useful for users who would
like to refer to some external image or artifact (for example, a Fedora image
on the official Fedora site) rather than storing this image or artifact in
the cloud.

External locations in Glance have several specialities:

1. It is possible to set up multiple locations for an image. Glance uses a
   special location strategy to define which location to use. This strategy
   is defined in the glance codebase and can be configured in the glance
   conf.

2. Glance doesn't distinguish image locations specified by url from image
   locations uploaded to the Glance backend. Glance has some restrictions
   on which urls to use for locations (see the Glance docs for more info).


Glare external locations are designed in a different way, to address some
drawbacks we have in Glance. The approach is the following:

1. Glare doesn't support multiple locations; you can specify a dict of blobs
   in the artifact type and add a url for each blob in the dict. The user
   must define a name (e.g. region name or priority) for each blob in the
   dict, and this name can be used to retrieve that blob from the artifact.
   So the decision about which location to use is made outside of Glare.

2. Glare adds a special flag to the database for external locations, so they
   are treated differently when an artifact is deleted. If a blob value is
   an external url, then we don't need to pass this url to the backend and
   can just delete the record in the DB. For now, Glare allows only http(s)
   locations to be set, but this may be extended in the future; the idea
   stays the same: external locations are just records in the DB.

3. Glare saves the blob size and checksum when an external url is specified.
   When a user specifies a url, Glare downloads the blob from that url and
   calculates its size and checksum (see the sketch below). Of course, this
   leads to some performance degradation, but it lets us ensure that the
   external blob is immutable. We made this choice because security seems
   more important for Glare than performance. There are also plans to extend
   this approach to support subscriptions for external locations, so we can
   increase the secureness of that operation.

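As a rough illustration of point 3 (a minimal sketch only; the real Glare
code will differ in details such as the hash algorithm and error handling):

    import hashlib

    import requests  # assumed available for this sketch


    def checksum_and_size(url, chunk_size=65536):
        """Stream an external blob once, recording its size and checksum."""
        md5 = hashlib.md5()
        size = 0
        resp = requests.get(url, stream=True)
        resp.raise_for_status()
        for chunk in resp.iter_content(chunk_size):
            md5.update(chunk)
            size += len(chunk)
        return md5.hexdigest(), size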

I think that some of the features above can be implemented in Glance. For
example, we can treat our locations as read-only links once the external
flag is implemented. That would allow us to ensure that only blobs
uploaded through Glance are managed.

Additionally, if we calculate the checksum and size for external urls, we
can ensure that all of the multiple locations refer to the same blob, so
management of multiple locations (deletion/creation) can be more secure.
We can also ensure that the external url blob has not changed.

I understand that we need a spec for that, but I would like to discuss this
at a high level first. Here is an etherpad for discussion:
https://etherpad.openstack.org/p/glare-locations


Best regards,
Kairat Kushaev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] How should ironic and related project names be written?

2016-08-01 Thread Sam Betts (sambetts)
It's official OpenStack policy that project names be written in lower case; for 
example, Ironic must always be written as ironic. However, I was recently 
writing a spec for IPA, and was unsure how to approach writing IPA's name in 
full. Discussing this with Dmitry on IRC, we decided it would be best brought 
to a wider audience on the ML, because this affects any project that includes 
ironic's name in its name.

Ironic Python Agent
ironic Python Agent
ironic python agent
ironic-python-agent

I prefer a capitalised Ironic Python Agent name, because it lines up with the 
way we write the acronym, IPA, and makes it obvious it's a name; however, I'm 
unsure if this aligns with the OpenStack policy. If we need to lower-case the 
whole of IPA's name, then I prefer we refer to it using the dashes, so that it 
is obviously a project name.

A couple of other projects that also use ironic in the name:

Ironic Inspector
ironic Inspector
ironic inspector
ironic-inspector

Ironic Lib
ironic Lib
ironic lib
ironic-lib

I would like to hear some opinions on whether we should always refer to the 
projects as they are written on git.openstack.org (with dashes), or which of 
the above styles are allowed, and which we prefer.

Sam
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel]Nominating Vitalii Kulanov for python-fuelclient-core

2016-08-01 Thread Sergii Golovatiuk
Congratulations Vitalii!

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Mon, Aug 1, 2016 at 11:54 AM, Roman Prykhodchenko  wrote:

> The entire core team has voted for the nomination so I’m putting it to
> power. Let’s all welcome Vitalii as a new core reviewer. Congratulations!
>
> On 1 Aug 2016, at 08:55, Aleksey Kasatkin
> wrote:
>
> +1
>
>
> Aleksey Kasatkin
>
>
> On Mon, Jul 25, 2016 at 7:46 PM, Igor Kalnitsky 
> wrote:
>
>> Vitaly's doing a great job. +2, no doubts!
>>
>> On Mon, Jul 25, 2016 at 6:27 PM, Tatyana Leontovich
>>  wrote:
>> > A huge +1
>> >
>> > On Mon, Jul 25, 2016 at 4:33 PM, Yegor Kotko 
>> wrote:
>> >>
>> >> +1
>> >>
>> >> On Mon, Jul 25, 2016 at 3:19 PM, Roman Prykhodchenko 
>> >> wrote:
>> >>>
>> >>> Hi Fuelers,
>> >>>
>> >>> Vitalii has been providing great code reviews and patches for some
>> time.
>> >>> His recent commitment to help consolidating both old and new fuel
>> clients
>> >>> and his bug-squashing activities show his willingness to step up and
>> take
>> >>> responsibilities within the community. He can often be found in #fuel
>> as
>> >>> vkulanov.
>> >>>
>> >>>
>> >>>
>> http://stackalytics.com/?module=python-fuelclient&user_id=vitaliy-t&release=mitaka
>> >>> http://stackalytics.com/?module=python-fuelclient&user_id=vitaliy-t
>> >>>
>> >>>
>> >>> P. S. Sorry for sending this email twice — I realized I didn’t put a
>> >>> topic to the subject.
>> >>>
>> >>>
>> >>> - romcheg
>> >>>
>> >>>
>> >>>
>> __
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>>
>> >>
>> >>
>> >>
>> __
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >
>> >
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel]Nominating Vitalii Kulanov for python-fuelclient-core

2016-08-01 Thread Roman Prykhodchenko
The entire core team has voted for the nomination so I’m putting it to power. 
Let’s all welcome Vitalii as a new core reviewer. Congratulations!

> On 1 Aug 2016, at 08:55, Aleksey Kasatkin wrote:
> 
> +1
> 
> 
> Aleksey Kasatkin
> 
> 
> On Mon, Jul 25, 2016 at 7:46 PM, Igor Kalnitsky wrote:
> Vitaly's doing a great job. +2, no doubts!
> 
> On Mon, Jul 25, 2016 at 6:27 PM, Tatyana Leontovich wrote:
> > A huge +1
> >
> >> On Mon, Jul 25, 2016 at 4:33 PM, Yegor Kotko wrote:
> >>
> >> +1
> >>
> >>> On Mon, Jul 25, 2016 at 3:19 PM, Roman Prykhodchenko wrote:
> >>>
> >>> Hi Fuelers,
> >>>
> >>> Vitalii has been providing great code reviews and patches for some time.
> >>> His recent commitment to help consolidating both old and new fuel clients
> >>> and his bug-squashing activities show his willingness to step up and take
> >>> responsibilities within the community. He can often be found in #fuel as
> >>> vkulanov.
> >>>
> >>>
> >>> http://stackalytics.com/?module=python-fuelclient&user_id=vitaliy-t&release=mitaka
> >>> http://stackalytics.com/?module=python-fuelclient&user_id=vitaliy-t
> >>>
> >>>
> >>> P. S. Sorry for sending this email twice — I realized I didn’t put a
> >>> topic to the subject.
> >>>
> >>>
> >>> - romcheg
> >>>
> >>>
> >>> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] Vitrage Mascot Selection

2016-08-01 Thread Afek, Ifat (Nokia - IL)
Hi,

According to the poll we had, the Vitrage mascot will be a giraffe - its skin 
looks like a vitrage, and it sees everything from above. 

Now we should think of how we would like to illustrate it. Should we color the 
entire giraffe like a vitrage? Maybe just part of it? Other ideas? (An external 
designer will illustrate all projects, but I guess we can have some 
requests/guidelines.)

Thanks,
Ifat.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Promoting Dawid Deja to core reviewers

2016-08-01 Thread Deja, Dawid
Thank you all! I'll do my best to provide good reviews and make Mistral better.

Regards,
Dawid Deja

On Mon, 2016-08-01 at 11:05 +0700, Renat Akhmerov wrote:
Team, thank you for your support!

Dawid, welcome to the Mistral core team :)

You now can vote +2 and approve patches. Use them wisely!

Renat Akhmerov
@Nokia

On 01 Aug 2016, at 10:59, Hardik wrote:

+1 , Nice work Dawid !

Thanks and Regards,
Hardik Parekh

On Sunday 31 July 2016 06:56 AM, Lingxian Kong wrote:
+1, good job, Dawid!

Regards!
---
Lingxian Kong


On Sat, Jul 30, 2016 at 10:59 PM, Elisha, Moshe (Nokia - IL) wrote:
Hi,

I am not a core reviewer but having met Dawid in person and working closely
with him on some important bug fixes – I fully support the idea.

From: Anastasia Kuznetsova
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Friday, 29 July 2016 at 15:53
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [mistral] Promoting Dawid Deja to core
reviewers

Renat,

I fully support Dawid's promotion! Here is my +1 for Dawid.

Dawid,

I will be glad to see you in the Mistral core team.

On Fri, Jul 29, 2016 at 2:39 PM, Renat Akhmerov wrote:
Hi,

I’d like to promote Dawid Deja working at Intel (ddeja in IRC) to Mistral
core reviewers.

The reason why I want to see Dawid in the core team is that he provides
amazing, very thorough reviews.
Just by looking at a few of them I was able to conclude that he
knows the system architecture very well,
although he started contributing actively not so long ago. He always sees
things deeply, can examine a problem
from different angles, and demonstrates a solid technical background in
general. He is in the top 5 reviewers now by number
of reviews and the only one of them who still doesn't have core status. He
also implemented several very important changes
during the Newton cycle. Some of them were in progress for more than a year
(flexible RPC), but Dawid helped to knock
them down elegantly.

Besides purely professional skills that I just mentioned I also want to
say that it’s a great pleasure to work with
Dawid. He’s a bright cheerful guy and a good team player.

Dawid’s statistics are here:
http://stackalytics.com/?module=mistral-group&metric=commits&user_id=dawid-deja-0


I’m hoping for your support in making this promotion.

Thanks

Renat Akhmerov
@Nokia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Best regards,
Anastasia Kuznetsova

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Propose Sofer Athlan-Guyot (chem) part of Puppet OpenStack core

2016-08-01 Thread Sofer Athlan-Guyot
Hi,

Thanks everyone for you support, it's appreciated.  Now, let's +2
something :)

Emilien Macchi  writes:

> You might not know who Sofer is but he's actually "chem" on IRC.
> He's the guy who will find the root cause of insane bugs, in OpenStack
> in general but also in Puppet OpenStack modules.
> Sofer has been working on Puppet OpenStack modules for a while now,
> and is already core in puppet-keystone. Many times he brought his
> expertise to make our modules better.
> He's always here on IRC to help folks and has an excellent understanding
> of how our project works.
>
> If you want stats:
> http://stackalytics.com/?user_id=sofer-athlan-guyot=commits
> I'm quite sure Sofer will make more reviews over the time but I have
> no doubt he fully deserves to be part of core reviewers now, with his
> technical experience and involvement.
>
> As usual, it's an open decision, please vote +1/-1 about this proposal.
>
> Thanks,

-- 
Sofer Athlan-Guyot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [searchlight] What do we need in notification payload?

2016-08-01 Thread Hirofumi Ichihara

Hi,

I'm trying to solve an issue [1, 2] where no notification is sent when a
tag is updated. I'm worried about the payload. My solution just outputs the
added tag, resource type, and resource id as the payload. However, there was
a comment which suggested the payload should have more information. I
guess that means, for instance, when we add a tag to a network, we
could include the network's name, status, description, shared flag, and so
on in the notification payload.
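
For concreteness, the payload of my current solution is roughly this (a
sketch only; the exact field names are illustrative):

    payload = {'tag': 'red',
               'parent_resource': 'network',
               'parent_resource_id': network_id}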


If the Tag plugin already had such information, I might not disagree with
the opinion, but the plugin doesn't have it now. So we would need to add a
DB-read step to each Tag API call for notification purposes only. I wouldn't
go as far as to add such an extra step.


Is my current solution enough information for searchlight or other
notification consumers?


[1]: https://bugs.launchpad.net/neutron/+bug/1560226
[2]: https://review.openstack.org/#/c/298133/

Thanks,
Hirofumi



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Swapnil Kulkarni (coolsvap)
On Mon, Aug 1, 2016 at 1:09 PM, Thierry Carrez  wrote:
> Steven Dake (stdake) wrote:
>> On 7/31/16, 11:29 AM, "Doug Hellmann"  wrote:
>>> [...]
>>> To be clear, I'm suggesting that projects with team:single-vendor be
>>> given enough time to lose that tag. That does not require them to grow
>>> diverse enough to get team:diverse-affiliation.
>>
>> That makes sense and doesn't send the wrong message.  I wasn't trying to
>> suggest that either; was just pointing out Kevin's numbers are more in
>> line with diverse-affiliation than single vendor.  My personal thoughts
>> are single vendor projects are ok in OpenStack if they are undertaking
>> community-building activities to increase their diversity of contributors.
>
> Basically my position on this is: OpenStack is about providing open
> collaboration spaces so that multiple organizations and individuals can
> collaborate (on a level playing ground) to solve a set of issues. It's
> difficult to have a requirement of a project having a diversity of
> affiliation before it can join, because of the chicken-and-egg issue
> between visibility and affiliation-diversity. So we totally accept
> single-vendor projects as official OpenStack projects.
>
> But if a project is persistently single-vendor after some time and
> nobody seems interested to join it, the technical value of that project
> being "in" OpenStack rather than a separate project in the OpenStack
> ecosystem of projects is limited. It's limited for OpenStack (why
> provide resources to support a project that is obviously only beneficial
> to one organization ?), and it's limited to the organization itself (why
> go through the OpenStack-specific open processes when you could shortcut
> it with internal tools and meetings ? why accept the oversight of the
> Technical Committee ?).

+1 to track this.

>
> So the idea is to find a way for projects who realize that they won't
> attract a significant share of external contributions to move to an
> externally-governed project. I'm not sure we can use a strict deadline
> -- some projects might still be single-vendor after a year but without
> structurally resisting contributions. But being able to trigger a review
> after some time, to assess if we have reasons to think it will improve
> in the future (or not), sounds like a good idea.
>
> --
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The idea of externally-governed projects is very good, since there are
and will be projects which want the status of being part of the
"OpenStack" community but cannot have diverse affiliation due to the
inherent nature of their development/testing/CI or other requirements.
If a project remains, or is known to be likely to remain, single-vendor
in the future, it does not need to depend on any of the community
resources, be it contributors or infrastructure.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-01 Thread Thierry Carrez
Steven Dake (stdake) wrote:
> On 7/31/16, 11:29 AM, "Doug Hellmann"  wrote:
>> [...]
>> To be clear, I'm suggesting that projects with team:single-vendor be
>> given enough time to lose that tag. That does not require them to grow
>> diverse enough to get team:diverse-affiliation.
> 
> That makes sense and doesn't send the wrong message.  I wasn't trying to
> suggest that either; was just pointing out Kevin's numbers are more in
> line with diverse-affiliation than single vendor.  My personal thoughts
> are single vendor projects are ok in OpenStack if they are undertaking
> community-building activities to increase their diversity of contributors.

Basically my position on this is: OpenStack is about providing open
collaboration spaces so that multiple organizations and individuals can
collaborate (on a level playing ground) to solve a set of issues. It's
difficult to have a requirement of a project having a diversity of
affiliation before it can join, because of the chicken-and-egg issue
between visibility and affiliation-diversity. So we totally accept
single-vendor projects as official OpenStack projects.

But if a project is persistently single-vendor after some time and
nobody seems interested to join it, the technical value of that project
being "in" OpenStack rather than a separate project in the OpenStack
ecosystem of projects is limited. It's limited for OpenStack (why
provide resources to support a project that is obviously only beneficial
to one organization ?), and it's limited to the organization itself (why
go through the OpenStack-specific open processes when you could shortcut
it with internal tools and meetings ? why accept the oversight of the
Technical Committee ?).

So the idea is to find a way for projects who realize that they won't
attract a significant share of external contributions to move to an
externally-governed project. I'm not sure we can use a strict deadline
-- some projects might still be single-vendor after a year but without
structurally resisting contributions. But being able to trigger a review
after some time, to assess if we have reasons to think it will improve
in the future (or not), sounds like a good idea.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Priority Spec for Libvirt Storage Pools

2016-08-01 Thread Carlton, Paul (Cloud Services)
 Matt,


could you review https://review.openstack.org/#/c/310505 and 
https://review.openstack.org/#/c/310538/ please? I'm hoping to get them 
approved by the end-of-week deadline.


Thanks


Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ

Office: +44 (0) 1173 162189
Mobile:+44 (0)7768 994283
Email:paul.carl...@hpe.com
Hewlett-Packard Enterprise Limited registered Office: Cain Road, Bracknell, 
Berks RG12 1HN Registered No: 690597 England.
The contents of this message and any attachments to it are confidential and may 
be legally privileged. If you have received this message in error, you should 
delete it from your system immediately and advise the sender. To any recipient 
of this message within HP, unless otherwise stated you should consider this 
message and attachments as "HP CONFIDENTIAL".


From: Carlton, Paul (Cloud Services)
Sent: 25 July 2016 08:21:41
To: Matthew Booth
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Priority Spec for Libvirt Storage Pools

Matt


With help from Maxim Nestratov of Virtuozzo I made some progress with the 
issues relating to my libvirt storage pools spec at the mid cycle last week, 
could you take another look at https://review.openstack.org/#/c/310505/ please, 
I'd like to get this approved so I can land some changes in Newton


Thanks




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] nova-docker alternatives ?

2016-08-01 Thread wangli...@chinac.com
mark



wangli...@chinac.com
 
From: yasemin
Date: 2016-08-01 01:40
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] 
nova-docker alternatives ?
Is that a docker swarm bay?
On Jul 31, 2016, at 8:40 AM, Ton Ngo  wrote:

for a second I thought that would be a great life cycle operation for bays .. :)
Ton,

Adrian Otto ---07/29/2016 11:31:53 AM---s/mentally/centrally/ 
Autocorrect is not my friend.

From: Adrian Otto 
To: "OpenStack Development Mailing List (not for usage questions)" 

Date: 07/29/2016 11:31 AM
Subject: Re: [openstack-dev] [nova] [container] [docker] [magnum] [zun] 
nova-docker alternatives ?





s/mentally/centrally/ 

Autocorrect is not my friend.
On Jul 29, 2016, at 11:26 AM, Adrian Otto  wrote:

Yasmin, 

One option you have is to use the libvirt-lxc nova virt driver, and use an 
image that has a docker daemon installed on it. That would give you a way to 
place docker containers on a data plane that uses no virtualization, but you 
need to individually manage each instance. Another option is to add Magnum to 
your cloud (with or without a libvirt-lxc nova virt driver) and use Magnum to 
mentally manage each container cluster. We refer to such clusters as bays. 

Adrian
On Jul 29, 2016, at 11:01 AM, Yasemin DEMİRAL (BİLGEM BTE) 
 wrote:


nova-docker is a dead project, as I learned on the IRC channel. 
I need the hypervisor for nova, and I can't install nova-docker on physical 
openstack systems. In devstack, I could deploy nova-docker.
What can I do? Is openstack-magnum or openstack-zun useful for me? I 
don't know.
Do you have any ideas?

Yasemin Demiral
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Alex Xu
Nova-spec is submitted: https://review.openstack.org/345138, welcome review
and comments!
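
For those who want a quick feel for the proposed endpoint before reading
the spec, a request might look like this (illustrative only, loosely
following the API-WG tags guideline):

    PUT /resource_providers/{uuid}/tags

    {
        "tags": ["COMPUTE_HW_CAP_CPU_AVX", "STORAGE_WITH_SSD"]
    }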

2016-07-11 19:08 GMT+08:00 Alex Xu :

> This proposal is about using ResourceProviderTags as a solution to manage
> Capabilities (Qualitative) in ResourceProvider.
> The ResourceProviderTags describe the capabilities which are defined
> by an OpenStack Service (Compute Service,
> Storage Service, Network Service etc.) or by users. A ResourceProvider
> provides resources exposed by a single
> compute node, some shared resource pool or an external resource-providing
> service of some sort. As such,
> ResourceProviderTags are also expected to describe the capabilities of a
> single ResourceProvider or the capabilities of a
> ResourcePool.
>
> The ResourceProviderTags is similar to the ServersTags [0] which are
> implemented in Nova. The only difference is
> that the tags are attached to the ResourceProvider. The API endpoint will
> be "/ResourceProvider/{uuid}/tags", and it
> will follow the API-WG guideline about Tags [1].
>
> As the Tags are just strings, the meaning of a Tag isn't defined by the
> Scheduler. The meaning of a Tag is defined by
> OpenStack services or users. The ResourceProviderTags will only be used
> for scheduling with a ResourceProviderTags
> filter.
>
> It is very easy for ResourceProviderTags to cover the cases of a single
> ResourceProvider, a ResourcePool and
> DynamicResources. Let's see those cases one by one.
>
> For the single ResourceProvider case, just see how Nova reports a
> ComputeNode's Capabilities. Firstly, Nova is expected
> to define a standard way to describe the Capabilities which are provided
> by the Hypervisor or Hardware. Then those descriptions
> of Capabilities can be used across the OpenStack deployment. So Nova will
> define a set of Tags. Those Tags should
> include a prefix to indicate that they are coming from Nova. Also
> the naming rule of the prefix can be used to catalog
> the Capabilities. For example, the capabilities can be defined as:
>
> COMPUTE_HW_CAP_CPU_AVX
> COMPUTE_HW_CAP_CPU_SSE
> 
> COMPUTE_HV_CAP_LIVE_MIGRATION
> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> 
>
> ( The COMPUTE means this is coming from Nova. HW means this is hardware
> related Capabilities. HV means these are
>  capabilities of the Hypervisor. But the catalog of Capabilities can be
> discussed separately. This proposal focuses on the
>  ResourceTags. We also have another idea about not using a 'PREFIX' to
> manage the Tags. We can add attributes to the
>  Tags. Then we have more control over the Tags. This will be described
> separately at the bottom. )
>
> Nova will create a ResourceProvider for the compute node, report the
> quantitative stuff, and report capabilities
> by adding those defined tags to the ResourceProvider at the same time.
> Then those Capabilities are exposed by Nova
> automatically.
>
> The capabilities of ComputeNode can be queried through the API "GET
> /ResourceProviders/{uuid}/tags".
>
> For the ResourcePool case, let us use a Shared Storage Pool as an example.
> Different Storage Pools may have
> different capabilities. Maybe one of the Pools is using SSDs. To expose
> that Capability, an admin user can do as below:
>
> 1. Define the aggregates
>   $AGG_UUID=`openstack aggregate create r1rck0610`
>
> 2. Create resource pool for shared storage
>   $RP_UUID=`openstack resource-provider create "/mnt/nfs/row1racks0610/" \
> --aggregate-uuid=$AGG_UUID`
>
> 3. Update the capacity of shared storage
>   openstack resource-provider set inventory $RP_UUID \
> --resource-class=DISK_GB \
> --total=10 --reserved=1000 \
> --min-unit=50 --max-unit=1 --step-size=10 \
> --allocation-ratio=1.0
>
> 4. Add the Capabilities of shared storage
>   openstack resource-provider add tags $RP_UUID --tag STORAGE_WITH_SSD
>
> In this case, 'STORAGE_WITH_SSD' is defined by the admin user. This is the
> same as on the Quantitative side, where there
> isn't an agent to report the Quantitative data, nor the Qualitative.
>
> This also easily covers the DynamicResource case. Thinking of Ironic, an
> admin will create a ResourcePool for
> bare-metal machines with the same hardware configuration. Those machines
> will have the same set of capabilities. So
> those capabilities will be added to the ResourcePool as tags; this is
> pretty much the same as the SharedStoragePool case.
>
> To expose cloud capabilities to users, there is one more API endpoint,
> 'GET /ResourceProviders/Tags'. Users can
> get all the tags, and then know what kind of Capabilities the cloud
> provides. A query parameter
> will allow users to filter the Tags by the prefix rules.
>
> This proposal is intended to be a solution for managing Capabilities in
> the scheduler with ResourceProvider. But yes,
> looking at how Nova implements the management of Capabilities, this is
> just part of the solution. The whole solution still needs
> other proposals (like [2]) to describe how to model capabilities inside
> the compute node and proposals (like [3]) to
> describe how 

Re: [openstack-dev] I want to consult ironic problems

2016-08-01 Thread paul schlacter
The mirror has a ramdisk, a kernel and an image;
I want to know what these images are used for.
The ironic-python-agent is on the ramdisk - how does it get run on the
machine, and after the deployment is complete, will the ironic-python-agent
still run?

On Fri, Jul 29, 2016 at 6:00 PM, paul schlacter  wrote:

> I want to ask about some ironic problems.
> Is there anyone here who works on bare metal management?
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel]Nominating Vitalii Kulanov for python-fuelclient-core

2016-08-01 Thread Aleksey Kasatkin
+1


Aleksey Kasatkin


On Mon, Jul 25, 2016 at 7:46 PM, Igor Kalnitsky 
wrote:

> Vitaly's doing a great job. +2, no doubts!
>
> On Mon, Jul 25, 2016 at 6:27 PM, Tatyana Leontovich
>  wrote:
> > A huge +1
> >
> > On Mon, Jul 25, 2016 at 4:33 PM, Yegor Kotko 
> wrote:
> >>
> >> +1
> >>
> >> On Mon, Jul 25, 2016 at 3:19 PM, Roman Prykhodchenko 
> >> wrote:
> >>>
> >>> Hi Fuelers,
> >>>
> >>> Vitalii has been providing great code reviews and patches for some
> time.
> >>> His recent commitment to help consolidating both old and new fuel
> clients
> >>> and his bug-squashing activities show his willingness to step up and
> take
> >>> responsibilities within the community. He can often be found in #fuel
> as
> >>> vkulanov.
> >>>
> >>>
> >>>
> http://stackalytics.com/?module=python-fuelclient&user_id=vitaliy-t&release=mitaka
> >>> http://stackalytics.com/?module=python-fuelclient&user_id=vitaliy-t
> >>>
> >>>
> >>> P. S. Sorry for sending this email twice — I realized I didn’t put a
> >>> topic to the subject.
> >>>
> >>>
> >>> - romcheg
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>>
> >>
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all]Architecture Working Group

2016-08-01 Thread joehuang
Hello, Clint,



How are you? I am eager to know the plan and agenda for the Architecture 
Working Group, and would like to join the group. Thanks.



I have also left a comment on the "Propose creation of Architecture Working 
Group" review: https://review.openstack.org/#/c/335141/ .



Best Regards

Chaoyi Huang (joehuang)





From: joehuang
Sent: 25 July 2016 7:15
To: openstack-dev
Cc: thie...@openstack.org
Subject: re: [openstack-dev] [all] Proposal: Architecture Working Group


Hi, all,



Thanks for initiating the architecture working group; I would be glad to join 
the group if there is still a place available.



According to the comment from Thierry in the Tricircle big-tent application 
https://review.openstack.org/#/c/338796/: "From an OpenStack community 
standpoint, we need more agreement and consensus on how we want to tackle the 
"massive scaling" issues. Tactical solutions like Nova cells only work for one 
project. I hope that the newly-founded Architecture WG can openly discuss an 
architecture for scaling OpenStack clouds beyond their current limits, a vision 
we can all get behind.





This is not a technical reflection on the quality of the Tricircle work. I 
think it's very interesting, I think the project should continue experimenting 
with the solution and I definitely want the Architecture WG to consider the 
Tricircle approach to scaling of using a top cell, API proxies and helpers. If 
anything, it forces us into having a discussion we should have had a long time 
ago, and for that I'm grateful that it was proposed."



So kindly please put scaling OpenStack clouds on the agenda, and also take 
these use cases into consideration: 
https://docs.google.com/presentation/d/1Zkoi4vMOGN713Vv_YO0GP6YLyjLpQ7fRbHlirpq6ZK4/edit?usp=sharing,
 i.e., let's discuss how to scale OpenStack clouds within one site or across 
multiple sites.



Thank you all in advance.



Best Regards

Chaoyi Huang ( joehuang )







http://lists.openstack.org/pipermail/openstack-dev/2016-June/097668.html



[openstack-dev] [all] Proposal: Architecture Working Group

Clint Byrum clint at fewbar.com 

Fri Jun 17 21:52:43 UTC 2016


ar·chi·tec·ture
ˈärkəˌtek(t)SHər/
noun
noun: architecture

1.

the art or practice of designing and constructing buildings.

synonyms:building design, building style, planning, building, construction;

formalarchitectonics

"modern architecture"

the style in which a building is designed or constructed, especially with 
regard to a specific period, place, or culture.

plural noun: architectures

"Victorian architecture"

2.

the complex or carefully designed structure of something.

"the chemical architecture of the human brain"

the conceptual structure and logical organization of a computer or 
computer-based system.

"a client/server architecture"

synonyms:structure, construction, organization, layout, design, build, 
anatomy, makeup;

informalsetup

"the architecture of a computer system"


Introduction
============

OpenStack is a big system. We have debated what it actually is [1],
and there are even t-shirts to poke fun at the fact that we don't have
good answers.

But this isn't what any of us wants. We'd like to be able to point
at something and proudly tell people "This is what we designed and
implemented."

And for each individual project, that is a possibility. Neutron can
tell you they designed how their agents and drivers work. Nova can
tell you that they designed the way conductors handle communication
with API nodes and compute nodes. But when we start talking about how
they interact with each other, it's clearly just a coincidental mash of
de-facto standards and specs that don't help anyone make decisions when
refactoring or adding on to the system.

Oslo and cross-project initiatives have brought some peace and order
to the implementation and engineering processes, but not to the design
process. New ideas still start largely in the project where they are
needed most, and often conflict with similar decisions 

[openstack-dev] [Dragonflow] - No IRC Meeting Today (1/8/2016)

2016-08-01 Thread Gal Sagie
Hello All,

Due to some members not being able to attend today's meeting, we will cancel
it and continue the meetings the week after.

If you need an update regarding anything, feel free to stop by
#openstack-dragonflow

Thanks
Gal.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][rfc] Booting docker images using nova libvirt

2016-08-01 Thread Sudipta Biswas

Thanks Devdatta/Maxime for your comments.



I am definitely not rigid about implementing the workflow in Nova, and 
it's well known that there can be multiple integration points for this 
work, including in docker itself. However, there are two prime 
reasons why we chose Nova as the integration point in OpenStack:


1. Minimal changes to a VM boot workflow. No need to depend on Swift or 
any other service.
2. Faster boot-up times - since the download of the virtual machine 
image is avoided. Downloading the docker filesystems should be 
comparatively cheap.



Some comments inline.


Thanks,

Sudipto


On 27/07/16 11:59 PM, Maxime Belanger wrote:


+1 on this,


Still, you lose all the great stuff about containers, but it is a 
first step towards a native container orchestration platform.


IMHO, it is not about just losing stuff. We are not emulating a docker 
workflow. The expectation is to have the ability to run a container 
inside a virtual machine and then take that filesystem out and run it 
natively on the hardware as desired. You can debate on whether it's 
really needed in Nova or elsewhere and I think that's a fair debate.


I am sure there are further technical challenges to overcome, if we want 
to think in this direction.


*From:* Devdatta Kulkarni 
*Sent:* July 27, 2016 12:21:30 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [nova][rfc] Booting docker images using 
nova libvirt

Hi Sudipta,

There is another approach you can consider which does not need any 
changes to Nova.


The approach works as follows:
- Save the container image tar in Swift
- Generate a Swift tempURL for the container file
- Boot a Nova VM and pass instructions for the following steps through
cloud-init / user_data:

  - download the container file from Swift (wget)
I believe this has to be carried out for every docker image? That is, if
I have an nginx image and it's provisioned twice, a fresh copy has to be
wget'ed every time? If the nova workflow is acceptable, then
optimizations can be thought through around this. At the moment, my
implementation copies the cached image for each of the containers, at
least making further boots faster.

Also, how do you tackle the problem of snapshotting a container?

  - load it (docker load)
  - run it (docker run)
Do you run the docker-native commands inside the virtual machine? In
that case, do you actually install docker as part of the cloud-init
scripts? Do you have numbers w.r.t. the boot time of the container image
in this case?
We have implemented this approach in Solum, where we use Heat to deploy
a VM and then run the application container on it by providing the above
instructions through the user_data of the HOT; a rough sketch of that
user_data flow follows below.
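
For illustration, here is a minimal sketch of that flow using
python-swiftclient and python-novaclient. Everything specific in it (the
Swift object path, temp-url key, hostnames, and the nginx image) is an
assumption made for the example, not the actual Solum code:

    # Sketch only; requires python-swiftclient and python-novaclient.
    from swiftclient.utils import generate_temp_url

    # 1. Generate a tempURL for the docker image tar saved in Swift.
    #    The object path and key below are placeholders.
    temp_url_path = generate_temp_url(
        '/v1/AUTH_demo/container-images/nginx.tar',  # Swift object path
        3600,                                        # validity in seconds
        'MY_TEMP_URL_KEY',                           # account temp-url key
        'GET')

    # 2. cloud-init user_data that downloads, loads, and runs the image.
    #    Assumes docker is preinstalled in (or installed by) the guest.
    user_data = """#!/bin/bash
    wget -O /tmp/image.tar "https://swift.example.com%s"
    docker load -i /tmp/image.tar
    docker run -d nginx
    """ % temp_url_path

    # 3. Boot the VM with that user_data (client/auth setup elided):
    # nova.servers.create(name='nginx-vm', image=vm_image_id,
    #                     flavor=flavor_id, userdata=user_data)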


Thanks,
Devdatta


-


From: Sudipta Biswas 
Sent: Wednesday, July 27, 2016 9:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][rfc] Booting docker images using nova 
libvirt


Premise:

While working with customers, we have realized:

- They want to use containers but are wary of using the same host 
kernel for multiple containers.
- They already have a significant investment (including skills) in 
OpenStack's Virtual Machine workflow and would like to re-use it as 
much as possible.

- They are very interested in using docker images.

There are some existing approaches, like the Hyper and Secure Containers
workflows, which already try to address the first point. But we wanted
to arrive at an approach that addresses all three of the above in the
context of OpenStack Nova with minimal changes.



Design Considerations:

We tried a few experiments with the present libvirt driver in nova to
build a workflow that deploys containers inside virtual machines in
OpenStack via Nova.


The fundamental premise of our approach is to run a single container
encapsulated in a single VM. This VM image has just the bare minimum
operating system required to run it.

The container filesystem comes from the docker image.

We would like to get feedback from the community on the approaches below
before proposing this as a spec or blueprint.



Approach 1

User workflow:

1. The docker image is obtained in the form of a tar file.
2. Upload this tar file to glance. This support is already there in
glance, where a container format of docker is supported (a short sketch
of steps 1-2 follows below).
3. Use this image along with the nova libvirt driver to deploy a virtual
machine.
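
For illustration, a minimal sketch of steps 1 and 2 using the docker CLI
and python-glanceclient; the image name, endpoint, and token below are
placeholders for the example, not the code we are proposing:

    # Sketch only; requires the docker CLI and python-glanceclient.
    import subprocess
    from glanceclient import Client

    # 1. Export a docker image as a tar file.
    subprocess.check_call(['docker', 'save', '-o', 'nginx.tar',
                           'nginx:latest'])

    # 2. Upload it to glance with the existing docker container format.
    glance = Client('2', 'http://glance.example.com:9292',
                    token='PLACEHOLDER_TOKEN')
    image = glance.images.create(name='nginx',
                                 container_format='docker',
                                 disk_format='raw')
    with open('nginx.tar', 'rb') as tar:
        glance.images.upload(image.id, tar)

The resulting image id can then be passed to nova boot as in step 3.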


Following are some of the changes to the OpenStack code that implement
this approach:


1. Define a new conf parameter in nova:
base_vm_image=/var/lib/libvirt/images/baseimage.qcow2

This option is used to specify the base VM image.

2. Define a new sub_virt_type = container in nova.conf. Setting this
parameter will ensure mounting of the container filesystem inside the VM.
Unless qemu and
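
To make the above concrete, here is a rough sketch of how these two
options might be registered with oslo.config; the option names come from
the proposal, while the group, default, and help text are assumptions:

    # Sketch only; option names from the proposal, the rest assumed.
    from oslo_config import cfg

    docker_vm_opts = [
        cfg.StrOpt('base_vm_image',
                   default='/var/lib/libvirt/images/baseimage.qcow2',
                   help='Base VM image used to encapsulate the '
                        'container.'),
        cfg.StrOpt('sub_virt_type',
                   help='When set to "container", mount the container '
                        'filesystem inside the VM.'),
    ]

    cfg.CONF.register_opts(docker_vm_opts, group='libvirt')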