[openstack-dev] [qa] Update: Nova API List for Missing Tempest Tests

2013-10-07 Thread Masayuki Igawa
Hi,

I have updated the Nova API List for Missing Tempest Tests.
 
https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc

The summary of this list:
Tested or not          # of APIs   ratio   change since last time
-----------------------------------------------------------------
Tested API                122      48.8%           +5
Not Tested API             68      27.2%           -5
No Need to Test (*1)       60      24.0%            0
-----------------------------------------------------------------
Total (*2)                250     100.0%            0

(*1) These are deprecated APIs, such as nova-network and volume.
(*2) v3 APIs are not included.
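As a quick sanity check, the ratios in the table can be recomputed from the raw counts (a small illustrative script; only the counts come from the table above, the rest is just arithmetic):

```python
# Recompute the coverage ratios from the raw API counts in the table above.
counts = {
    "Tested API": 122,
    "Not Tested API": 68,
    "No Need to Test": 60,
}
total = sum(counts.values())  # 250 (v3 APIs not included)

for name, n in counts.items():
    ratio = 100.0 * n / total
    print(f"{name:<20} {n:>4}  {ratio:5.1f}%")
print(f"{'Total':<20} {total:>4}  100.0%")
```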

I hope this information is helpful for creating Tempest tests.
Any comments and questions are welcome.

Best Regards,
-- Masayuki Igawa


 Hi, Tempest developers
 
 I have made:
  Nova API List for Missing Tempest Tests.
  
 https://docs.google.com/spreadsheet/ccc?key=0AmYuZ6T4IJETdEVNTWlYVUVOWURmOERSZ0VGc1BBQWc
 
 This list shows what we should test. That is:
  * Nova has 250 APIs (not including v3 APIs).
  * 117 APIs are executed (and presumably tested).
  * 73 APIs are not executed.
  * Another 60 APIs are not executed, but they probably do not need to be tested,
  - because they are deprecated APIs such as nova-network and volume.
 
 So I think we need more Tempest test cases.
 If this idea is acceptable, please put your name in the 'assignee' column for your
 favorites and implement the Tempest tests.
 
 Any comments are welcome.
 
 Additional information:
 I made this API list by modifying Nova's code, based on
 https://review.openstack.org/#/c/25882/ (Abandoned).
 
 Best Regards,
 -- Masayuki Igawa
 
 

-
 [Handling of this e-mail] Classification: Confidential / Disclosure: distribution list only
  Restrictions: handle as important / Removal: prohibited / Retention: indefinite / After use: destroy
-

-- 
IT Systems Division, Cloud Infrastructure Service Development Group
Masayuki Igawa







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] API Improvements, concepts

2013-10-07 Thread Jaromir Coufal

Hey,

based on Friday's call, I put down some notes, added a few other ideas, and 
I am sending a draft here. Let's start on top of that, get into the details, 
figure out the scope, and we are half finished (just the implementation part, 
but that's the minority) :).


https://etherpad.openstack.org/tuskar-concepts

Cheers
-- Jarda


[openstack-dev] [Neutron] Tenant isolation gate failures?

2013-10-07 Thread Maru Newby
The tenant isolation gates that have been failing so frequently seem to be 
passing all of a sudden.  I didn't see any merges that claimed to fix the 
issue, so maybe this is just a lull due to a lower volume of gate jobs.  If it 
was intentional, though, I would appreciate knowing which patch or patches 
resolved the problem.

Thanks in advance,


Maru


Re: [openstack-dev] [Tuskar] API Improvements, concepts

2013-10-07 Thread Robert Collins
With the merge of Tuskar into TripleO, you might want to use [TripleO]
rather than [Tuskar]. (In that everyone in TripleO should be
interested in your notes :)).

Cheers,
Rob

On 7 October 2013 20:30, Jaromir Coufal jcou...@redhat.com wrote:
 Hey,

 based on Friday's call, I put down some notes, added few other ideas and I
 am sending a draft here. Let's start on top of that, get to details, figure
 out the scope and we are half finished (just implementation part, but that's
 minority) :).

 https://etherpad.openstack.org/tuskar-concepts

 Cheers
 -- Jarda





-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



[openstack-dev] TC Candidacy

2013-10-07 Thread Flavio Percoco

I would like to propose my candidacy for the OpenStack Technical
Committee.


== What I've done ==

I've been involved with OpenStack for almost a year now and, although
that may not seem like a very long time, I've been able to contribute to
most areas of OpenStack.

My contributions to OpenStack started with Glance, where I dedicated a
lot of time and effort to making it better and aligning it with
other OpenStack models. This led me to contribute to Oslo, and then to
become a core contributor to it. Contributing to Oslo has given me a
wider view of what other projects' needs are - w.r.t. common code,
libs, and ways of doing things - and this also helped me to understand how
projects integrate with each other and gave me a more pragmatic idea
of how things work and how they could be improved.

While all this was happening, I was also contributing to one of the
recently incubated projects, Marconi. Working on Marconi gave me a
complete view of the path a new project takes, what it needs to
accomplish before reaching incubation, what may or may not make sense
to have as an incubated project, and the community and
development standards that all projects should respect throughout
OpenStack. I served as co-PTL of Marconi before it got incubated and,
FWIW, I still do, although it's not an official role in OpenStack.

In addition to the above, I'm also part of the stable-maint team and I
obviously work full-time on OpenStack.

== Plans as TC member ==

My main goal as a TC member is to provide an objective, OpenStack-wide
opinion, thanks to the broad view I've gained during my last year
of contributions and through the teams I'm a member of.

With that in mind, I'd like to help new projects fit into OpenStack
and provide guidance and support to existing projects, either to
help them get out of incubation or to stay on track with what OpenStack
is.

As a TC member, I'll always put OpenStack's interests above
anything else and use that to help other projects grow.

As part of helping OpenStack overall, I will emphasize the need
for stability and make sure we do our best to preserve it and perhaps
improve the process.

Thanks for reading this email. I'd be very honored to serve as a TC
member and help this project grow with the same dedication and passion
I've had in the time I've contributed to it.

Cheers,
FF

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [Tuskar] Resource Classes and Racks (Wireframes + Concept Discussion)

2013-10-07 Thread Jaromir Coufal

Hi Liz,

thanks a lot for your feedback, I'll add some inline notes:


On 2013/02/10 20:04, Liz Blanchard wrote:

[snip]


Hi Jarda,

I just wanted to send along my feedback on the current state of the 
Rack and Resource Class creation wireframes…


*Rack Creation*
1) Even though it looks fine with the edit icon, I think for 
consistency's sake we could just label the Name of the L-Group 
similarly to the other fields.
Yes, I agree; I already got this feedback from others and have it fixed. 
I am just working on other improvements before sending the updated version 
out. Anyway, good point.


2) Should the Provisioning Image field be in line with the rest of 
the form? It feels a bit out of place being under the add node. 
Especially if you are adding one image for the entire L-Group.
You are adding the image only for the Management Node, which you are going to 
provision directly to the list of Nodes; that's why it is under the list 
of Nodes: it relates to that.


3) What would you expect the ? icon to give the user in the upper 
right corner of the modal?
It's just a quick sketch for help; I don't know yet how it will behave. I 
need to design the whole help system in the forms.



4) I think we should denote any required fields.
Yes, I was not thinking about this at the moment; I was more focused on 
the concepts. I need to add it there.


5) Maybe we should add some sort of edit icon or link to let the users 
know that they can edit Node settings after it's been added.

Good point.

6) We should allow users to remove nodes quickly after they've been 
added. Rather than having to click on the node and then choose the 
delete option.

Agree, I already added Edit & Remove icons to the list.

7) I'm not sure we need the extra Choose Image: text in the drop 
down for provisioning image. Maybe you could replace the Provisioning 
image… with Choose Image?

OK.

8) I think we should specify the type of node we are adding for the 
Management Nodes. So rather than Add Node… it should read Add 
Management Node…

For Management Node - yes, this will be very helpful.

9) I think the second button should read Create and Provision rather 
than Create and go to Provisioning.
It should not, because at the moment of clicking the button you are 
creating the l-group but not yet provisioning; you then continue 
to the provisioning part.



*Resource Class Creation*
1) I think the Choose Type: should read Class Type:.

Agree, already fixed.

2) I think it would be best to auto-select one of the Class types by 
default. It feels weird to me that we wouldn't default to the first, 
for example.
I am not sure about this, because there is no default option for the user; 
there is no expectation of what the most used option would be. But I will 
try to think about that and, if possible, incorporate it.


3) We should think about how the Class Type selection scales. As users 
are allowed to add more resource classes, maybe this selection should 
live in a drop down to support 5-8 different classes.
This shouldn't happen. There are no other services which you can provide 
with the cloud - you can provide only compute or storage services, so I don't 
expect this scaling.


4) I think the icons that you used for linked vs. not linked are 
backwards. Also, I think the lines could be removed if the values are 
unlinked. Here is an example of what Photoshop uses when the values 
are constrained:
http://people.redhat.com/~lsurette/OpenStack/PS%20Link
I don't think they are backwards; by default everything is unlinked. 
The dotted line helps to link the icon to the desired selection, which will 
then be linked.
Anyway, this is for the further future and we don't have to deal with it 
now; it won't be part of v1 or v2. We will see if users really need this 
feature and, based on that, we can implement it.


5) Similar to #7 in Rack feedback, I don't think we need the labeling 
Choose Property in the drop down.

Agree.

6) Even though Units doesn't make sense in the example you give, I 
think it should still be labeled Units and be greyed out.

Maybe. I am not very sure about this, because if I leave

7) How is the hardware assignment step different from the Optional 
Provisioning step other than assigning images to the nodes? I wonder 
if these two steps could be combined?
They shouldn't, because then you go into the provisioning process. It is 
a different activity to assign nodes to the class than to go to node 
provisioning; it was decided to split them.


8) Do you think users will want to define different types of nodes 
within one resource class? Or do you think they will want to break 
these out into different resource classes? For example, there could be 
two types of m1.compute classes based on the hardware profile. I 
wonder if this would make it easier to monitor and alert on these two 
specific types of hardware? This might be something we can ask early 
adopter users about to get a feeling on how they would 

[openstack-dev] [TripleO] reminder - register summit design sessions

2013-10-07 Thread Robert Collins
There are no tuskar sessions proposed at the moment, for instance.

While we only have 5 slots, I would like to ensure we get as much as
possible out of them.

Cheers,
Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-07 Thread Ladislav Smola

Hello Clint,

thank you for your feedback.

On 10/04/2013 06:08 PM, Clint Byrum wrote:

Excerpts from Ladislav Smola's message of 2013-10-04 08:28:22 -0700:

Hello,

just a few words about the role of Ceilometer in the Undercloud and the work
in progress.

Why we need Ceilometer in the Undercloud:
---

In Tuskar-UI, we will display a number of statistics that will show
Undercloud metrics.
Later also a number of alerts and notifications that will come from
Ceilometer.

But I do suspect that Heat will use the Ceilometer Alarms, in a similar
way to how it uses them for
auto-scaling in the Overcloud. Can anybody confirm?

I have not heard of anyone wanting to auto-scale baremetal for the
purpose of scaling out OpenStack itself. There is certainly a use case
for it when we run out of compute resources and happen to have spare
hardware around. But unlike on a cloud where you have several
applications all contending for the same hardware, in the undercloud we
have only one application, so it seems less likely that auto-scaling
will be needed. We definitely need scaling, but I suspect it will not
be extremely elastic.


Yeah, that's probably true. What I had in mind was something like
suspending hardware that is not used at the time and has, e.g., no
VMs running on it, for energy saving, and starting it again when
we run out of compute resources, as you say.


What will be needed, however, is metrics for the rolling updates feature
we plan to add to Heat. We want to make sure that a rolling update does
not adversely affect the service level of the running cloud. If we're
early in the process with our canary-based deploy and suddenly CPU load is
shooting up on all of the completed nodes, something, perhaps Ceilometer,
should be able to send a signal to Heat, and trigger a rollback.


That is how Alarms should work now; you just define the Alarm
inside the Heat template. Check the example:
https://github.com/openstack/heat-templates/blob/master/cfn/F17/AutoScalingCeilometer.yaml
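As a rough illustration of the mechanism, the alarm resource in such a template looks roughly like the sketch below. This is a hedged sketch, not an excerpt from the linked template: the resource name, meter, thresholds, and attribute reference are assumptions modeled on the Havana-era `OS::Ceilometer::Alarm` resource type, so verify against the linked example before relying on them.

```yaml
# Hypothetical sketch of a Ceilometer alarm wired to a Heat scaling
# policy; all names and values here are illustrative assumptions.
cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    description: Fire if average CPU utilisation stays high
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 3
    threshold: 80
    comparison_operator: gt
    alarm_actions:
      - {get_attr: [scale_down_policy, alarm_url]}
```

When the alarm fires, Ceilometer POSTs to the policy's signed URL, which is the same signalling path a rollback trigger could use.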


What is planned in near future
---

The Hardware Agent capable of obtaining statistics:
https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices
It uses SNMP inspector for obtaining the stats. I have tested that with
the Devtest tripleo setup
and it works.

The planned architecture is to have one Hardware Agent (it will be merged
into the central agent code)
placed on the Control Node (or basically anywhere). That agent will poll
SNMP daemons placed on
hardware in the Undercloud (baremetals, network devices). Any objections,
or reasons why this is a bad idea?
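The polling model described above can be sketched in a few lines (everything here is hypothetical stand-in code, not Ceilometer's actual agent: a real deployment would use an SNMP inspector querying snmpd on each node):

```python
# Illustrative sketch of the "one central agent polls many nodes" model.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Sample:
    node: str
    meter: str
    value: float

def poll_nodes(nodes: List[str],
               inspect: Callable[[str], Dict[str, float]]) -> List[Sample]:
    """One polling cycle: query each node's daemon and collect samples."""
    samples = []
    for node in nodes:
        for meter, value in inspect(node).items():
            samples.append(Sample(node=node, meter=meter, value=value))
    # In the real architecture these would be published to the message
    # bus, where a collector persists them to the database.
    return samples

# Fake inspector standing in for an SNMP query against one node.
def fake_snmp_inspect(node: str) -> Dict[str, float]:
    return {"hardware.cpu.load.1min": 0.5, "hardware.memory.used": 2048.0}

if __name__ == "__main__":
    result = poll_nodes(["bm-0", "bm-1"], fake_snmp_inspect)
    print(len(result))  # 2 nodes x 2 meters = 4 samples
```

The scaling concern raised below follows directly from this shape: one agent's polling loop grows linearly with the number of nodes.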

We will have to create a Ceilometer image element; the snmpd element is
already there, but we should
test it. Anybody volunteer for this task? There will be a hard part:
getting the configuration right
(firewall, keystone, snmpd.conf) so it's all configured in a clean and
secure way. That would
require a seasoned sysadmin to at least observe the thing. Any
volunteers here? :-)

The IPMI inspector for the Hardware Agent has just been started:
https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices
It seems it should query the Ironic API, which would provide the data
samples. Any objections?
Any volunteers for implementing this on the Ironic side?

devananda and lifeless had the greatest concern about the scalability of
the central agent. Ceilometer
is not doing any scaling right now, but horizontal scaling of the central
agent is planned
for the future. So this is a very important task for us for larger
deployments. Any feedback about
scaling? Or about changing the architecture for better scalability?


I share their concerns. For < 100 nodes it is no big deal. But centralized
monitoring has a higher cost than distributed monitoring. I'd rather see
agents on the machines themselves do a bit more than respond to polling
so that load is distributed as much as possible and non-essential
network chatter is reduced.


Right now, for the central agent, it should be a matter of configuration.
So you can set up one central agent fetching all baremetals from Nova. Or
you can bake the central agent into each baremetal and set it to poll only
from localhost. Or, in one of the distributed architectures planned as
a configuration option, there is a node (a Management Leaf node) that
manages a bunch of hardware, so the central agent could be baked into it.

What the agent does then is process the data, pack it into a message,
and send it to the OpenStack message bus (which should be heavily
scalable), where it is collected by a Collector (which should be able to
have many workers) and saved to the database.



I'm extremely interested in the novel approach that Assimilation
Monitoring [1] is taking to this problem, which is to have each node
monitor itself and two of its immediate neighbors on a switch and
some nodes monitor an additional node on a different switch. Failures
are reported to an API server which uses graph database queries to

Re: [openstack-dev] [TripleO] Undercloud Ceilometer

2013-10-07 Thread Ladislav Smola

Hello Chris,

That would be much appreciated, thank you. :-)

Kind Regards,
Ladislav

On 10/05/2013 12:12 AM, Chris Jones wrote:

Hi

On 4 October 2013 16:28, Ladislav Smola lsm...@redhat.com 
mailto:lsm...@redhat.com wrote:


test it. Anybody volunteers for this task? There will be a hard
part: doing the right configurations.
(firewall, keystone, snmpd.conf) So it's all configured in a clean
and a secured way. That would
require a seasoned sysadmin to at least observe the thing. Any
volunteers here? :-)


I'm not familiar at all with Ceilometer, but I'd be happy to discuss 
how/where things like snmpd are going to be exposed, and look over the 
resulting bits in tripleo :)


--
Cheers,

Chris




Re: [openstack-dev] [Climate] Questions and comments

2013-10-07 Thread Sylvain Bauza

Hi Mike,

Dina and you outlined some differences in terms of seeing what is 
dependent on what.
As Dina explained, Climate plans to be integrated into Nova and Heat 
logic, where Heat and Nova would request the Climate API by asking for a 
lease and would tag the resources as 'RESERVED' on their own.
On your point, and correct me if I'm wrong, you would rather see Climate 
on top of Heat and Nova, scheduling resources on its own, and only sending 
creation requests to Heat and Nova.

I'm happy to say both of you are right: Climate aims both to be called 
by Nova and *also* to call Nova. That's just a matter of what Climate 
*is*. And here is the confusion.


That's why Climate is not only one API endpoint. It actually has two 
distinct endpoints: one called the Lease API endpoint, and one called 
the Resource Reservation API endpoint.


As a Climate developer working on physical host reservations (and not 
Heat stacks), my concern is to be able to guarantee to a REST client 
(either a user or another service) that if this user wants to provision 
X hosts in a specific timeframe in the future (immediately or in 10 
years), Climate will be able to provision them. By "being able" and 
"guarantee", I do use strong words, stating that we commit ourselves 
to being able to plan what the resource capacity state will be in 
the future.
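A minimal sketch of what such a guarantee requires (all names here are hypothetical; Climate's actual scheduler is more involved): before granting a lease for N hosts over a future window, check that already-granted reservations never push peak usage past total capacity.

```python
# Hypothetical capacity check for a future time window: a lease for
# `count` hosts over [start, end) can only be granted if, at every
# instant in that window, already-reserved hosts plus `count` fit in
# the total pool. Usage is piecewise constant and only rises at
# reservation start points, so checking those instants suffices.
from typing import List, Tuple

Reservation = Tuple[float, float, int]  # (start, end, hosts)

def can_reserve(total_hosts: int,
                existing: List[Reservation],
                start: float, end: float, count: int) -> bool:
    overlapping = [(s, e, n) for (s, e, n) in existing
                   if s < end and start < e]
    # Candidate instants: the new window's start, plus each overlapping
    # reservation's start that falls inside the window.
    instants = [start] + [s for (s, _, _) in overlapping if start <= s < end]
    for t in instants:
        in_use = sum(n for (s, e, n) in overlapping if s <= t < e)
        if in_use + count > total_hosts:
            return False
    return True

if __name__ == "__main__":
    existing = [(0, 10, 3)]          # 3 hosts reserved over [0, 10)
    print(can_reserve(5, existing, 5, 15, 2))   # True: peak 3 + 2 = 5
    print(can_reserve(5, existing, 5, 15, 3))   # False: peak 3 + 3 = 6
```

Granting a lease then means recording the reservation so later requests see it, which is why the scheduler needs its own persistence layer, as described below.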


This decision-making process (i.e. this Climate scheduler) will be 
implemented as an RPC service for the Reservation API, and thus will need 
to keep its own persistence layer in Climate. Of course, it will request 
the Lease API for actually creating the lease and managing lease start/end 
hooks; that's the Lease API's job.



Provided you would want to use the Reservation API for reserving Heat 
stacks, you would have to implement it, though.



Thanks,
-Sylvain

Le 06/10/2013 20:41, Mike Spreitzer a écrit :
Thanks, Dina.  Yes, we do not understand each other; can I ask some 
more questions?


You outlined a two-step reservation process (We assume the following 
reservation process for the OpenStack services...), and right after 
that talked about changing your mind to use Heat instead of individual 
services.  So I am confused, I am not sure which of your remarks 
reflect your current thinking and which reflect old thinking.  Can you 
just state your current thinking?


On what basis would Climate decide to start or stop a lease?  What 
sort of event notifications would Climate be sending, and when and 
why, and what would subscribers do upon receipt of such notifications?


If the individual resource services continue to make independent 
scheduling decisions as they do today, what value does Climate add?


Maybe a little more detailed outline of what happens in your current 
thinking, in support of an explicitly stated use case that shows the 
value, would help here.


Thanks,
Mike




Re: [openstack-dev] [Neutron] Tenant isolation gate failures?

2013-10-07 Thread Gary Kotton
https://review.openstack.org/#/c/46900/


On 10/7/13 10:36 AM, Maru Newby ma...@redhat.com wrote:

The tenant isolation gates that have been failing so frequently seem to
be passing all of a sudden.  I didn't see any merges that claimed to fix
the issue, so maybe this is just a lull due to a lower volume of gate
jobs.  If it was intentional, though, I would appreciate knowing which
patch or patches resolved the problem.

Thanks in advance,


Maru


[openstack-dev] TC Candidacy

2013-10-07 Thread Sean Dague

I'd like to announce my candidacy for the TC.

About me

I've been involved in OpenStack since early 2012. I'm currently the PTL 
for the OpenStack QA program, and a core reviewer on Tempest, Devstack, 
Nova, Grenade, and a myriad of smaller pieces of OpenStack 
infrastructure (including hacking and elastic-recheck). [1][2]


My focus in OpenStack has been on making OpenStack work as a 
consistent whole, both from a runtime and a development perspective. I 
believe this consistency, and the idea of OpenStack being one project 
(with many moving parts), is important to the long-term health of the 
community. This has led me to focus on the projects that integrate us: 
the QA Program, Devstack, parts of the gate infrastructure, and 
things like the wsgi log filter on logs.openstack.org.


Beyond OpenStack I've had a long history of contributing to Open Source 
projects both as part of my day job and on my own time. [3][4]


I've been involved in organizing communities for over a decade, creating 
and leading our local Linux & Open Source users group back in 2003 and 
running it ever since. [5]



Platform
-
This view of OpenStack as a single whole is the reason I've focused on 
the QA Program, as I feel that our gate infrastructure, and the 
integration tests that we choose to run there, are an incredibly 
important lens that ensures OpenStack hangs together as a whole.


This one-OpenStack POV has also manifested itself in efforts like 
global-requirements testing, one of my top projects this summer, where 
we now ensure all our projects are gating with a shared global 
list of requirements, so we know they all work together in a consistent way.


I'm excited by the growth of projects applying for incubation, but as 
the global-requirements exercise showed, the more moving parts 
OpenStack has, the more important it is that we prove they integrate well 
with each other before they are graduated to integrated status. I think 
it's important that this remains expressed in code, which has always been 
the currency of OpenStack. Today that implementation lens for integration 
is devstack/tempest; tomorrow it may be something different, to meet the 
growing needs of the projects, but I still think it's important that 
we've got a single lens that brings all of Integrated OpenStack 
together, and that we can demonstrate it really is integrated.


Integration is important, and ensuring that existing integrated projects 
remain integrated, and future ones really are integrated before we 
promote them, is my primary concern.


I'm incredibly excited by OpenStack's growth (in people, code, scope), 
which I attribute to an incredibly welcoming and constructive community, 
and the velocity we get out of our preemptive integration system. As a 
TC member I'd do my best to ensure those conditions remain. I think 
we've only just begun to see what OpenStack will become, and I'd be 
honored to be elected to the TC to help in all ways I can with it.


-Sean

[1] - contribution list to OpenStack - 
https://review.openstack.org/#/q/status:merged+owner:sean%2540dague.net,n,z
[2] - review list for OpenStack - 
https://review.openstack.org/#/q/reviewer:sean%2540dague.net,n,z

[3] - https://www.ohloh.net/accounts/sdague
[4] - https://github.com/sdague/
[5] - http://mhvlug.org

--
Sean Dague
http://dague.net



Re: [openstack-dev] [Tuskar] Resource Classes and Racks (Wireframes + Concept Discussion)

2013-10-07 Thread Jaromir Coufal

Hey Ju,

thanks for your feedback. I don't think there is anything to hide and I 
would love to have everything discussed in an open way. I hope you don't 
mind me cc'ing the answer to the upstream list.


Furthermore, what I need to stress is:
* The purpose of these wireframes was to start a discussion around the 
underlying concepts
* It is a very first draft of the wireframes; the work is in progress 
and not finished. I want to get feedback from the community before 
handing out next versions. I believe that's broadly understood.
* I was not focusing much on the details, more on what the user can see 
on the screen in each step
* So yes, it needs to be polished, but I am very happy to gather all 
notes from anybody about missing details so I can reflect them


On 2013/04/10 22:32, Ju Lim wrote:


Jarda:

I'm putting my comments / feedback here in internal email and not 
upstream, as there are so many comments, including comments related to 
general UX best practices that should have been in the design but are 
not there right now.  I'll explain my reasons more when we talk on 
Monday / next week.


In the meantime, here are my comments/feedback for the Logical Group 
and Resource Class creation wireframes:


*Logical Group Creation*

Logical Group (aka Rack) Creation: 
http://people.redhat.com/~jcoufal/openstack/tuskar/2013-09-30_tuskar_l-group_creation_wireframes.pdf 
http://people.redhat.com/%7Ejcoufal/openstack/tuskar/2013-09-30_tuskar_l-group_creation_wireframes.pdf


Page / Slide 2:

(1) L-Group term is confusing.  Why not just call it out as Logical Group?

The L-Group term is a working name and nobody has agreed on it; it is just 
something general that I used for this purpose by shortening 'logical 
group'. But based on the feedback so far, the community really likes it.


(2) Terminology inconsistency.  Specifically, I'm referring to the 
word 'Setup'.  I've also seen other words used in other places, such as 
'Create', 'Add', 'Configure', 'Setup'.  In this particular slide, the 
header/label reads 'Create L-Group' but Step 1 in the workflow says 
'L-Group Setup'.


I agree with being consistent. But I don't see the argument with 
'Create', 'Add', 'Configure' and 'Setup'. 'Create' is very different from 
'Setup': 'Setup' means configuring, and 'Create' means creating the object 
in the database. The whole workflow is about creating the class. In the 
first step, you do the general setup. Can you please be more specific 
about that inconsistency? Or what would you suggest?

Furthermore, 'Add' and 'Create' are also very different: 'Add' means 
adding something that already exists, by importing it to the list; 
'Create' means creating new objects.


(3) Should IP Subnet be an open text field?  Should we try to reduce 
erroneous typing by providing a clearer text entry field?

Can you give an example? Because the best user experience for operators 
is to have an open field for that. You can find it in any advanced 
network setup.


(4) We need to indicate which fields are mandatory, e.g. IP Subnet, 
Management Node.  Note: This comment applies all the other pages / 
slides (so I don't have to call out each item that is mandatory).



Yes, I agree, I was not focusing on it yet.

(5) It should include an L-Group Name field for entry as the first 
field, where all the other fields are.  I didn't notice the L-Group 
Name field at all until I got to the next page/slide, as the visuals 
(light grey) made it hard for me to see it, and the inline editing, 
while nice, was not consistent with the other fields to be entered.  
Would we consider suggesting a default name if a user does not type a 
label / display name, e.g. LogicalGroup01, LogicalGroup02, etc.?  Also, 
why do you need … at all?

Agree with a regular label and text field; it's already fixed in a newer 
version, but I am still working on it so it's not sent yet.
No default names, please; this doesn't make any sense and you can't 
predict anything here.


(6) The order of the buttons at the bottom is strange.  I would think 
Nodes Definition should be on the left, and Back and Cancel on the 
right, or perhaps together.

I strongly disagree on this point. In any workflow, the right side means 
forward and the left side means back.


(7) Why show a Back button if it's disabled?  Specifically, have we 
decided whether, when a function / action is not allowed, to grey it out 
vs. hiding it in the UI?

Because of the consistent position of the 'Cancel' button.

(8) Can the user save this task if he/she gets interrupted while 
trying to create the logical group [Save button]?

Nope.


(9) Why does Add Node… need a …?

The indication was there to show that there are more options hidden 
under the label. We can find a better visualization for this.


(10) Choosing the Provisioning image does not belong in the main 
section of Step  It should be part of adding/editing the Management 
Node.  In its current location, it breaks the user's mental model of 
the flow of setting up a management node.  I think you were trying to 
find something that would allow the 

Re: [openstack-dev] [Tuskar] [TripleO] API Improvements, concepts

2013-10-07 Thread Jaromir Coufal
That's good to know; I was asking and didn't get a clear answer back then. 
Thanks for letting me know - I added that into this reply and will use 
[TripleO] instead of [Tuskar] in the future.


Cheers
-- Jarda

On 2013/07/10 09:43, Robert Collins wrote:

With the merge of Tuskar into TripleO, you might want to use [TripleO]
rather than [Tuskar]. (In that everyone in TripleO should be
interested in your notes :)).

Cheers,
Rob

On 7 October 2013 20:30, Jaromir Coufal jcou...@redhat.com wrote:

Hey,

based on Friday's call, I put down some notes, added few other ideas and I
am sending a draft here. Let's start on top of that, get to details, figure
out the scope and we are half finished (just implementation part, but that's
minority) :).

https://etherpad.openstack.org/tuskar-concepts

Cheers
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev








Re: [openstack-dev] [Climate] Questions and comments

2013-10-07 Thread Patrick Petit

Hi Mike,

There are actually more facets to this. Sorry if it's a little confusing 
:-( Climate's original blueprint 
https://wiki.openstack.org/wiki/Blueprint-nova-planned-resource-reservation-api 
was about physical host reservation only. The typical use case is: I 
want to reserve x hosts that match the capabilities expressed in the 
reservation request. The lease is populated with reservations, which at 
this point are only capacity descriptors. The reservation becomes active 
only when the lease starts, at a specified time and for a specified 
duration. The lease manager plugin in charge of the physical reservation 
keeps a planning of reservations that allows Climate to grant a lease 
only if the requested capacity is available at that time. Once the lease 
becomes active, the user can request instances to be created on the 
reserved hosts by passing a lease handle as a Nova scheduler hint. 
That's basically it. We do not assume or enforce how, and by whom (Nova, 
Heat, ...), a resource instantiation is performed. In other words, a host 
reservation is like a whole host allocation 
https://wiki.openstack.org/wiki/WholeHostAllocation that is reserved 
ahead of time by a tenant in anticipation of some workload that is 
bound to happen in the future. Note that while we are primarily 
targeting host reservations, the same service should be offered for 
storage.

Now, Mirantis brought in a slew of new use cases that are targeted 
toward virtual resource reservation, as explained earlier by Dina. While 
architecturally both reservation schemes (physical vs. virtual) leverage 
common components, it is important to understand that they behave 
differently. For example, Climate exposes an API for physical resource 
reservation that the virtual resource reservation doesn't. That's 
because virtual resources are supposed to be already reserved (through 
some yet-to-be-created Nova, Heat, Cinder, ... extensions) when the 
lease is created. Things work differently for physical resource 
reservation in that the actual reservation is performed by the lease 
manager plugin not before the lease is created but when the lease 
becomes active (or some time before, depending on the provisioning lead 
time), and released when the lease ends.
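The "grant a lease only if the requested capacity is available at that time" check can be illustrated with a small interval-accounting sketch over the reservation planning. This is purely illustrative and not Climate code; all names (`Reservation`, `can_grant`) are hypothetical.

```python
# Hypothetical sketch of the lease manager plugin's planning check:
# grant a lease only if enough hosts are free for the whole window.

from dataclasses import dataclass

@dataclass
class Reservation:
    start: int   # lease start (e.g. epoch seconds)
    end: int     # lease end
    hosts: int   # number of hosts held

def can_grant(planning, total_hosts, start, end, wanted):
    """Check that `wanted` hosts are free at every point in [start, end)."""
    # Usage only increases at reservation starts, so it is enough to test
    # the window start plus each reservation start inside the window.
    points = {start} | {r.start for r in planning if start <= r.start < end}
    for t in points:
        used = sum(r.hosts for r in planning if r.start <= t < r.end)
        if used + wanted > total_hosts:
            return False
    return True

planning = [Reservation(0, 10, 3), Reservation(5, 15, 2)]
print(can_grant(planning, 6, 4, 8, 1))   # True: fits alongside both
print(can_grant(planning, 6, 4, 8, 2))   # False: exceeds 6 hosts at t=5
```

A real planner would also account for provisioning lead time and persist the planning, but the grant decision reduces to this kind of capacity test.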

HTH clarifying things.
BR,
Patrick

On 10/6/13 8:36 AM, Mike Spreitzer wrote:
I looked at the blueprint 
(https://blueprints.launchpad.net/heat/+spec/stacks-reservation) and 
associated wiki page 
(https://wiki.openstack.org/wiki/Heat/Reservation), and I have a few 
comments/questions.


The wiki page has some remarks that are Nova-centric, and some other 
remarks that emphasize that Climate is not just about Nova, and I do 
not understand the relationship between these remarks.  Is this a 
roadmapping thought (start with just Nova, expand to other services 
later), or the inclusion of some details (related to Nova) and 
omission of other similar details (related to the other services), or 
what?


Will Climate be an independent service, or part of Nova, or what?

What will be the atomic operations?  I presume the primary interesting 
operation will be something like reserving a bag of resources, where 
that bag is allowed to contain any mixture of resources from any 
services.  Have I got that right?


What exactly does reserving a resource mean?  Does this atomic 
reservation operation include some atomic cooperation from the 
resources' services' schedulers (e.g., Nova scheduler, Cinder 
scheduler, etc)?  Or is this reservation service logically independent 
of the resources' primary schedulers?  Overall I am getting the 
suggestion that reservation is an independent service.  The flow is 
something like first reserve a bag of resources, and then proceed to 
use them at your leisure.  But I also suppose that the important thing 
about a reservation is that it includes the result of scheduling 
(placement) --- the point of a reservation is that it is holding 
capacity to host the reserved resources.  You do not want an atomic 
operation to take a long time; do the scheduling decisions get made 
(tentatively, of course) before the real atomic section, with 
re-schedule and re-try on conflict detection, or is scheduling 
included in the atomic section?
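The tentative-schedule-then-retry flow sketched in the question can be expressed as optimistic concurrency: compute placement outside the lock, then commit in a short atomic section and retry if a concurrent change invalidated the plan. A hypothetical sketch, not actual Climate or Nova code:

```python
import threading

class Host:
    def __init__(self, name, free):
        self.name, self.free, self.version = name, free, 0

def schedule(hosts, need):
    """Tentative placement: pick a host with capacity (no lock held)."""
    for h in hosts:
        if h.free >= need:
            return h, h.version
    return None, None

def reserve(hosts, need, lock, retries=3):
    """Optimistic reservation: re-check the version inside the atomic section."""
    for _ in range(retries):
        host, seen = schedule(hosts, need)
        if host is None:
            return None                 # no capacity anywhere
        with lock:                      # short atomic section
            if host.version == seen and host.free >= need:
                host.free -= need
                host.version += 1       # invalidate concurrent tentative plans
                return host.name
        # conflict: the host changed meanwhile; re-schedule and retry
    return None

lock = threading.Lock()
hosts = [Host("h1", 4), Host("h2", 8)]
print(reserve(hosts, 3, lock))   # "h1"
print(reserve(hosts, 3, lock))   # "h2" (h1 now has only 1 free)
```

The point is that the expensive scheduling decision stays outside the atomic section; only the cheap validate-and-commit step is serialized, with re-schedule and retry on conflict.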


For how long does a reservation last?  What sort of thing owns a 
reservation?  I suppose one important use case is that a heat engine, 
or the heat service (in the multi-engine world), could own a 
reservation; in this use case, the reservation would last until heat 
releases it.  Hopefully heat will persist its reservation information, 
so that a process crash will not cause a dangling reservation; how 
will you enable complete avoidance of timing splinters (e.g., heat 
engine crashes after making reservation but before persisting 
information about it)?  Presumably other things besides heat could own 
reservations.


Is this primarily about planning for the distant future, or focused on 
the immediate future?  If a bag of 

[openstack-dev] Nominating Zhi Yan Liu for glance-core

2013-10-07 Thread stuart . mclaren

+1


Hey,

I would like to nominate Zhi Yan Liu (lzydev) for glance core. I think Zhi has 
been an active reviewer/contributor to the glance community [1] and has
always been on top of reviews.

Thanks for the good work Zhi!

Iccha

[1] http://russellbryant.net/openstack-stats/glance-reviewers-30.txt




[openstack-dev] Nominating Fei Long Wang for glance core

2013-10-07 Thread stuart . mclaren

+1

Hey,

I would like to nominate Fei Long Wang (flwang) for glance core. I think Fei has 
been an active reviewer/contributor to the glance community [1] and has
always been on top of reviews.

Thanks for the good work Fei!

Iccha

[1] http://russellbryant.net/openstack-stats/glance-reviewers-30.txt




Re: [openstack-dev] [Climate] Questions and comments

2013-10-07 Thread Mike Spreitzer
Do not worry about what I want, right now I am just trying to understand 
the Climate proposal, wrt virtual resources (Patrick helped a lot on the 
physical side).  Can you please walk through a scenario involving Climate 
reservations on virtual resources?  I mean from start to finish, outlining

which party makes which decision, based on what.

Thanks,
Mike



From:   Sylvain Bauza sylvain.ba...@bull.net
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Cc: Mike Spreitzer/Watson/IBM@IBMUS
Date:   10/07/2013 05:07 AM
Subject:Re: [openstack-dev] [Climate] Questions and comments



Hi Mike,

Dina and you outlined some differences in terms of seeing what is 
dependent on what. 
As Dina explained, Climate plans to be integrated into Nova and Heat 
logics, where Heat and Nova would request Climate API by asking for a 
lease and would tag on their own the resources as 'RESERVED'.
On your point, and correct me if I'm wrong, you would rather see Climate 
on top of Heat and Nova, scheduling resources on its own and only sending 
creation requests to Heat and Nova.

I'm happy to say both of you are right: Climate aims to be both called by 
Nova and *also* calling Nova. That's just a matter of what Climate *is*. 
And here is the confusion.

That's why Climate is not only one API endpoint. It actually has two 
distinct endpoints: one called the Lease API endpoint, and one called the 
Resource Reservation API endpoint.

As a Climate developer working on physical host reservations (and not 
Heat stacks), my concern is to be able to guarantee to a REST client 
(either a user or another service) that if this user wants to provision X 
hosts in a specific timeframe in the future (immediate or in 10 years), 
Climate will be able to provision them. By "being able" and "guarantee", 
I do use strong words: we commit to being able to plan what the resource 
capacity state will be in the future.

This decision-making process (i.e. this Climate scheduler) will be 
implemented as an RPC service for the Reservation API, and thus will need 
to keep its own persistence layer in Climate. Of course, it will request 
the Lease API for actually creating the lease and managing lease 
start/end hooks; that's the Lease API's job.


Provided you would want to use the Reservation API for reserving Heat 
stacks, you would have to implement it, though.


Thanks,
-Sylvain

Le 06/10/2013 20:41, Mike Spreitzer a écrit :
Thanks, Dina.  Yes, we do not understand each other; can I ask some more 
questions? 

You outlined a two-step reservation process ("We assume the following 
reservation process for the OpenStack services..."), and right after that 
talked about changing your mind to use Heat instead of the individual 
services. So I am confused; I am not sure which of your remarks reflect 
your current thinking and which reflect old thinking. Can you just state 
your current thinking?

On what basis would Climate decide to start or stop a lease?  What sort of 
event notifications would Climate be sending, and when and why, and what 
would subscribers do upon receipt of such notifications? 

If the individual resource services continue to make independent 
scheduling decisions as they do today, what value does Climate add? 

Maybe a little more detailed outline of what happens in your current 
thinking, in support of an explicitly stated use case that shows the 
value, would help here. 

Thanks, 
Mike 



Re: [openstack-dev] [Neutron] Tenant isolation gate failures?

2013-10-07 Thread Matt Riedemann
These tempest patches were directly related to tenant isolation also:

https://review.openstack.org/#/c/49431/ 

https://review.openstack.org/#/c/49447/ 



Thanks,

MATT RIEDEMANN
Advisory Software Engineer
Cloud Solutions and OpenStack Development

Phone: 1-507-253-7622 | Mobile: 1-507-990-1889
E-mail: mrie...@us.ibm.com


3605 Hwy 52 N
Rochester, MN 55901-1407
United States




From:   Gary Kotton gkot...@vmware.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   10/07/2013 05:18 AM
Subject:Re: [openstack-dev] [Neutron] Tenant isolation gate 
failures?



https://review.openstack.org/#/c/46900/


On 10/7/13 10:36 AM, Maru Newby ma...@redhat.com wrote:

The tenant isolation gates that have been failing so frequently seem to
be passing all of a sudden.  I didn't see any merges that claimed to fix
the issue, so maybe this is just a lull due to a lower volume of gate
jobs.  If it was intentional, though, I would appreciate knowing which
patch or patches resolved the problem.

Thanks in advance,


Maru


Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 04:30 AM, Flavio Percoco wrote:

I would like to propose my candidacy for the OpenStack Technical
Committee.


== What I've done ==

I've been involved with OpenStack for almost a year now and, although
it may not seem a very long time, I've been able to contribute to
most areas of OpenStack.

My contributions to OpenStack started with Glance, where I dedicated a
lot of time and effort trying to make it better and to align it with
other OpenStack models. This led me to contribute to Oslo, and then
become a core contributor to it. Contributing to Oslo has given me a
wider view of what other projects' needs are with respect to common
code, libs and ways to do things, and this also helped me understand how
projects integrate with each other and gave me a more pragmatic idea
of how things work and how they could be improved.

While all this was happening, I was also contributing to one of the
recently incubated projects, Marconi. Working on Marconi gave me a
complete view of what a new project's path is: what it needs to
accomplish before reaching incubation, what may or may not make sense
to have as an incubated project, and which community and development
standards all projects should respect throughout OpenStack. I served as
co-PTL of Marconi before it got incubated and, FWIW, I still do,
although it's not an official role in OpenStack.

In addition to the above, I'm also part of the stable-maint team and I
obviously work full-time on OpenStack.

== Plans as TC member ==

My main goal as a TC member is to provide an objective, OpenStack-wide
opinion, thanks to the wide view I've gained during my last year
of contributions and the teams I'm a member of.

With that in mind, I'd like to help new projects fit into OpenStack
and provide guidance and support to existing projects, either to
help them get out of incubation or stay on track with what OpenStack
is.

As a TC member, I'll always put OpenStack's interests on top of
anything else and use that to help other projects to grow.

As part of helping OpenStack overall, I will emphasize the need
for stability and make sure we do our best to preserve it, and perhaps
improve the process.

Thanks for reading this email. I'd be very honored to serve as a TC
member and help this project grow with the same dedication and passion
I've had in the time I've contributed to it.

Cheers,
FF

--
@flaper87
Flavio Percoco



Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 05:18 AM, Julien Danjou wrote:

Hi there,

I'd like to announce my candidacy for the TC.

About me

I've been working on OpenStack for 2 years now. I'm one of the early
contributors to the Ceilometer project, and worked towards its
incubation and integration. I have been the PTL for Ceilometer since the
Havana cycle.

Nowadays I work on various areas of OpenStack, while still focusing on
Ceilometer. I've also recently been nominated and became a core
contributor for Oslo.

Outside OpenStack, I've been a FOSS contributor for the last 15 years in
various other projects (Debian, Freedesktop, Emacs...). I like to think
that I know a lot about the dynamics that make FOSS work and how to
build and organize successful open source projects.


Platform

I've been a member of the TC for the last 6 months. I've seen the
enthusiasm that drives people towards OpenStack, and the number of
incubation proposals we received. Dealing with these requests has
been a major feature of the TC these last months, and I expect it to
continue this way, especially because the TC chose to incubate a few
projects already, and judging whether they are ready to go out of
incubation will be its duty. I hope that my experience can help in this
regard.

On this issue, I've been really open-minded, and I think OpenStack should
generally welcome the incubation of projects, while setting the right
criteria for projects to become integrated. Such standards should
include integration with other projects, reusability of work done, and
reduction of overlaps.

I don't envision OpenStack as an IaaS-only platform, but as an ecosystem
of coherent projects plugged together, providing services and solutions
to end users.

More generally, as a TC member I will continue to work to make OpenStack
the success it is and to be sure it continues to grow, connect with
friendly projects, and remain a welcoming technical community.

Cheers,




Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 06:19 AM, Sean Dague wrote:

I'd like to announce my candidacy for the TC.

About me

I've been involved in OpenStack since early 2012. I'm currently the 
PTL for the OpenStack QA program, and a core reviewer on Tempest, 
Devstack, Nova, Grenade, and a myriad of smaller pieces of OpenStack 
infrastructure (including hacking and elastic-recheck). [1][2]


My focus in OpenStack has been on making OpenStack work as a 
consistent whole, both from a runtime and a development perspective. I 
believe this consistency, and the idea of OpenStack being one project 
(with many moving parts), is important to the long-term health of the 
community. This has led me to focus on the projects that integrate us: 
the QA Program, Devstack, parts of the gate infrastructure, and 
things like the wsgi log filter on logs.openstack.org.


Beyond OpenStack I've had a long history of contributing to Open 
Source projects both as part of my day job and on my own time. [3][4]


I've been involved in organizing communities for over a decade, 
creating and leading our local Linux & Open Source users group back in 
2003 and running it ever since. [5]



Platform
-
This view of OpenStack as a single whole is the reason I've focussed 
on the QA Program, as I feel that our gate infrastructure, and the 
integration tests that we choose to run there, is an incredibly 
important lens that ensures OpenStack hangs together as a whole.


This one OpenStack POV has also manifested itself in efforts like the 
global-requirements testing, one of my top projects this summer, where 
we now ensure all our projects actually are gating with a shared 
global list of requirements, so we know they all work together in a 
consistent way.


I'm excited by the growth of projects applying for incubation, but as 
the global-requirements exercise showed, the more moving parts 
OpenStack has, the more important it is that we prove they integrate 
well with each other before they are graduated to integrated status. I 
think it's important that this remains expressed in code, which has 
always been the currency of OpenStack. Today that implementation lens 
for integration is devstack/tempest; tomorrow it may be something 
different, to meet the growing needs of the projects. But I still 
think it's important that we have a single lens that brings all of 
integrated OpenStack together, and that we can demonstrate it really is.


Integration is important, and ensuring that existing integrated 
projects remain integrated, and future ones really are integrated 
before we promote them, is my primary concern.


I'm incredibly excited by OpenStack's growth (in people, code, scope), 
which I attribute to an incredibly welcoming and constructive 
community, and the velocity we get out of our preemptive integration 
system. As a TC member I'd do my best to ensure those conditions 
remain. I think we've only just begun to see what OpenStack will 
become, and I'd be honored to be elected to the TC to help in all ways 
I can with it.


-Sean

[1] - contribution list to OpenStack - 
https://review.openstack.org/#/q/status:merged+owner:sean%2540dague.net,n,z
[2] - review list for OpenStack - 
https://review.openstack.org/#/q/reviewer:sean%2540dague.net,n,z

[3] - https://www.ohloh.net/accounts/sdague
[4] - https://github.com/sdague/
[5] - http://mhvlug.org






[openstack-dev] Reporting framework in OpenStack ...

2013-10-07 Thread Sandy Walsh
Hey y'all,

I've been looking at the reporting framework being put together in oslo [1] and 
was wondering about using it for purposes other than guru meditation [2]. 

In StackTach we generate a number of critical reports around usage, billing, 
performance and errors. Currently we store these reports as json blobs in the 
database and expose it through the web API. Clients can fetch, format and 
distribute the json blobs as they like. Some clients convert them to HTML, some 
plain text, some are only viewed through a cmdline tool. Different departments 
have different needs so we just make the summarized data available. We are 
going to need the same functionality in Ceilometer. 

Some of these reports are very computationally expensive. Our biggest 
requirement is a cron-like service/API that can trigger new events when 
previous ones have finished (batch window management). We could model 
this in Ceilometer pretty easily, I think; I'm just not sure that's the 
correct application of Ceilometer. 
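The batch-window requirement described here (run a report only once the reports it depends on have finished) amounts to a small dependency-driven runner. A minimal hypothetical sketch, not StackTach or oslo code; all names are made up:

```python
# Minimal dependency-driven batch runner: a report runs only once all of
# its upstream reports have finished (batch window management).

def run_batch(reports, deps, run):
    """reports: names; deps: name -> upstream names; run: name -> json blob."""
    done, results = set(), {}
    pending = list(reports)
    while pending:
        progressed = False
        for name in list(pending):
            if all(up in done for up in deps.get(name, ())):
                results[name] = run(name)   # e.g. store blob, expose via API
                done.add(name)
                pending.remove(name)
                progressed = True
        if not progressed:
            raise RuntimeError("dependency cycle: %s" % pending)
    return results

deps = {"billing": ["usage"], "summary": ["billing", "errors"]}
order = []
run_batch(["summary", "billing", "usage", "errors"],
          deps, lambda n: order.append(n) or {"report": n})
print(order)   # ['usage', 'errors', 'billing', 'summary']
```

In practice the "run" step would be the expensive report computation and the results would land in the database as json blobs, but the trigger-when-upstream-finished logic is just this topological walk.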

We certainly don't want to build a full-blown reporting platform, but I was 
wondering whether using this oslo library would be appropriate. 

I'd also like to hear what others are doing for their reporting needs 
currently. 

Thanks
-S

[1] 
https://github.com/openstack/oslo-incubator/tree/master/openstack/common/report
[2] https://blueprints.launchpad.net/oslo/+spec/guru-meditation-report


[openstack-dev] TC candidacy

2013-10-07 Thread Mark McLoughlin
Hi

I'd like to offer myself as a candidate for the Technical Committee
election.


About me

I've been working on OpenStack for over two years now and have
particularly focused my contributions on Nova and Oslo, but have also
contributed in smaller ways to most other OpenStack projects.

For the past year or more, I've been a member of the Technical Committee
as Oslo PTL.

I'm a proud Red Hatter and am also a member of the OpenStack Foundation
Board of Directors.


The Past Year

I'm very happy with some of the progress and decisions the TC made over
the past year.

We welcomed Heat, Ceilometer, Trove, Savanna and Marconi into the
OpenStack family, either as integrated or incubating projects. The TC
carefully considered each of these applications, and my own rule of
thumb was "does it have a healthy contributor community and is it a
sensible growth of OpenStack's scope?". I love to see this sustainable
growth in our project and community.

In a similar vein, I'm really excited that TripleO has been added as an
official OpenStack program. One of OpenStack's biggest criticisms has
always been that it is difficult to deploy and manage. TripleO is an
awesome idea but, more importantly, is a way for all of us to work
together to build tools and processes for deploying and managing our
software.

In terms of more meta changes, I'm really happy that the TC has moved to
a model where all seats are directly elected. This removed the concern
that adding new projects would make the TC unmanageably large, so that
can no longer be used as an excuse not to add new projects. I also hope
that this election model will result in more members who are interested
in cross-project concerns.

I'm proud of the work we did with the foundation board to adopt the term
"integrated" as a way to separate the TC-controlled "accepted into the
OpenStack integrated release process" status from the board-controlled
"allowed to use associated OpenStack trademark" status. This is really
important because it allows the TC to evaluate new project applications
on a purely technical basis.

I think it's really positive that we adopted the concept of "programs"
as a recognition that not all important efforts and contributor groups
are focused around a particular server project. Our community has a very
diverse set of contributor groups and all of them play an important
role.

Finally, I'm happy to see us starting to use gerrit to track TC
decisions in a way that is easily referenceable. Looking back over the
last year of TC decisions would have been a lot easier with 'git log'!
See https://review.openstack.org/50066 :)


Next Year

I want to see the TC continue to be welcoming to new projects and
contributor groups. That said, I'd like us to continue to improve how we
deliberate over these applications. For example, maybe we assign
specific TC members with aspects of the project to report back on - e.g.
architecture, development process, contributor diversity, test coverage,
etc.

I'm also really eager to encourage any experiments with evolving our
project governance model. I think we're seeing several projects with
multiple leaders who are essentially peers and having to choose a PTL
can be an artificial elevation of one person over their peers. I stepped
down as Oslo PTL because I want Oslo to have a bunch of strong leaders,
rather than be dominated by one person.

Finally, I'd like the TC to be used more often as a forum for people to
develop their ideas about the project. We should view the TC as a group
of project leaders who are happy, as a group, to help people out with
advice and mentorship.


Thanks,
Mark.




[openstack-dev] TC Candidacy

2013-10-07 Thread Anne Gentle
Hi, I'd like to propose myself to serve on the Technical Committee for the
upcoming election term.

== Who Am I? ==

I volunteer my best for OpenStack every day, and will continue to do so in
the coming term. I currently serve as Documentation Program Lead and have
been working on OpenStack at Rackspace since September 2010. My PTL
candidacy statement is available at [1]. We continually improve the
documentation by applying better processes and resources all the time,
treating the documentation like the fast-moving code itself. We also serve
the many audiences interested in OpenStack: deployers, operators,
administrators, architects, cloud consumers, users and developers. I
believe that participating companies who employ OpenStack coders should
also dedicate technical documentation resources, and am pleased to see
member companies are getting the message and creating and recruiting for
those positions.

== Why Am I Technical Enough for the TC? ==

I try to provide support through the docs in any way I can, by answering
questions on IRC, Disqus doc comments, and on ask.openstack.org. My breadth
and depth of knowledge is across OpenStack because of my role as Doc PTL.

I learn quickly and gather facts before passing judgement. In the last
year, I have given fair attention to each incubation request and have tried
to evaluate from the point of view of the audiences the docs serve.

I have built relationships across all of the OpenStack projects, giving doc
platform and tooling support. I review doc patches from the perspective of
all audiences, constantly asking for improvement.

During TC meetings, I ask the right questions and listen to the answers,
asking further questions when my experience or vocabulary is lacking in an
area. I admit when I don't know something. I work hard to earn trust and
respect.

My goal is to provide an inclusive and supportive environment for projects
while making OpenStack better for users and admins all the time. We are so
fortunate to have the explosive growth and interest in OpenStack, and I
want it to continue. We have built upon incredible ideas and I want us to
be empowered to innovate.

I believe by serving on the Technical Committee I can continue to support
OpenStack in meaningful ways.

Thanks for your attention, and thanks for all the important work that YOU,
the members of our community, bring every day to this project.

Anne

1.
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015465.html


Re: [openstack-dev] TC candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 10:58 AM, Mark McLoughlin wrote:



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 10:57 AM, Anne Gentle wrote:
Hi, I'd like to propose myself to serve on the Technical Committee for 
the upcoming election term.


== Who Am I? ==

I volunteer my best for OpenStack every day, and will continue to do 
so in the coming term. I currently serve as Documentation Program Lead 
and have been working on OpenStack at Rackspace since September 2010. 
My PTL candidacy statement is available at [1]. We continually improve 
the documentation by applying better processes and resources, treating 
the documentation like the fast-moving code itself. We also serve the 
many audiences interested in OpenStack: deployers, operators, 
administrators, architects, cloud consumers, users, and developers. I 
believe that participating companies who employ OpenStack coders should 
also dedicate technical documentation resources, and I am pleased to see 
member companies getting the message and creating and recruiting for 
those positions.


== Why Am I Technical Enough for the TC? ==

I try to provide support through the docs in any way I can: by 
answering questions on IRC, in Disqus doc comments, and on 
ask.openstack.org. My breadth and depth of knowledge extend across 
OpenStack because of my role as Docs PTL.


I learn quickly and gather facts before passing judgement. In the last 
year, I have given fair attention to each incubation request and have 
tried to evaluate from the point of view of the audiences the docs serve.


I have built relationships across all of the OpenStack projects, 
giving doc platform and tooling support. I review doc patches from the 
perspective of all audiences, constantly asking for improvement.


During TC meetings, I ask the right questions and listen to the 
answers, asking further questions when my experience or vocabulary is 
lacking in an area. I admit when I don't know something. I work hard 
to earn trust and respect.


My goal is to provide an inclusive and supportive environment for 
projects while making OpenStack better for users and admins all the 
time. We are so fortunate to have the explosive growth and interest in 
OpenStack, and I want it to continue. We have built upon incredible 
ideas and I want us to be empowered to innovate.


I believe by serving on the Technical Committee I can continue to 
support OpenStack in meaningful ways.


Thanks for your attention, and thanks for all the important work that 
YOU, the members of our community, bring every day to this project.


Anne

1. 
http://lists.openstack.org/pipermail/openstack-dev/2013-September/015465.html



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [marconi] API Finalization starting in 10 minutes

2013-10-07 Thread Kurt Griffiths
Hi folks,

Sorry for the late notice. During today's regular team meeting we will be 
reviewing and freezing the v1 API. I hope to see you there!

1600 UTC @ #openstack-meeting-alt

Cheers,
Kurt G. (kgriffs)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][Client] How do I list interfaces on a router?

2013-10-07 Thread Jay Pipes

Hi all,

I've got code that is checking to see if a particular router has a 
public gateway set up, and if not, it wires up the gateway:


print "Checking %s router setup ... " % zone.upper(),
try:
    router = qc.list_routers(name="demorouter")['routers']
    router = router[0]
    print "OK"
except IndexError:
    print "MISSING"
    print "-- Creating missing router ... ",
    router_data = dict(name="demorouter", admin_state_up=True)
    router = qc.create_router(dict(router=router_data))['router']
    print "OK"

print "Checking %s router gateway ... " % zone.upper(),
if router['external_gateway_info'] is None:
    print "NOT SET"
    print "-- Setting external gateway for router ... ",
    pub_net_id = qc.list_networks(name="public")['networks'][0]['id']
    net_dict = dict(network_id=pub_net_id)
    qc.add_gateway_router(router['id'], net_dict)
    print "OK"
else:
    print "OK"

The above code works just fine. The next thing I need to check is 
whether the private subnet is wired into the router. I cannot seem to 
determine how to list interfaces for a particular router.


I checked Horizon and it doesn't seem to have any idea how to do this 
either [1]. Is this just something that is missing from the Neutron API? 
If so, how do you suggest I determine if the demo router has been 
connected to the private subnet?


Thanks in advance for any help,
-jay

[1] https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py 
lines 660-708


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Client] How do I list interfaces on a router?

2013-10-07 Thread Jay Pipes

On 10/07/2013 12:00 PM, Jay Pipes wrote:

Hi all,

I've got code that is checking to see if a particular router has a
public gateway set up, and if not, it wires up the gateway:

print "Checking %s router setup ... " % zone.upper(),
try:
    router = qc.list_routers(name="demorouter")['routers']
    router = router[0]
    print "OK"
except IndexError:
    print "MISSING"
    print "-- Creating missing router ... ",
    router_data = dict(name="demorouter", admin_state_up=True)
    router = qc.create_router(dict(router=router_data))['router']
    print "OK"

print "Checking %s router gateway ... " % zone.upper(),
if router['external_gateway_info'] is None:
    print "NOT SET"
    print "-- Setting external gateway for router ... ",
    pub_net_id = qc.list_networks(name="public")['networks'][0]['id']
    net_dict = dict(network_id=pub_net_id)
    qc.add_gateway_router(router['id'], net_dict)
    print "OK"
else:
    print "OK"

The above code works just fine. The next thing I need to check is
whether the private subnet is wired into the router. I cannot seem to
determine how to list interfaces for a particular router.

I checked Horizon and it doesn't seem to have any idea how to do this
either [1]. Is this just something that is missing from the Neutron API?
If so, how do you suggest I determine if the demo router has been
connected to the private subnet?

Thanks in advance for any help,
-jay

[1] https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py
lines 660-708


OK, I figured it out. For the benefit of folks looking for how to do 
this, you need to use the neutronclient.list_ports() method passing in 
the router ID as the device_id kwarg, like so:


router_ports = qc.list_ports(device_id=router['id'])['ports']
if len(router_ports) == 0:
    print "NOT SET"
    print "-- Adding interface to subnet 192.168.1.0/24 to router ... ",
    iface_dict = dict(subnet_id=sn['id'])
    qc.add_interface_router(router['id'], iface_dict)
    print "OK"
else:
    print "OK"

Where sn is the subnet dict that you've previously added or gotten.
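
One caveat worth noting, sketched below with a hypothetical helper (not part of the original script): list_ports(device_id=...) also returns the router's external gateway port, so filtering on the standard device_owner field separates subnet interfaces from the gateway:

```python
# Hypothetical helper: keep only the router's subnet interfaces from a
# list_ports() result, dropping the external gateway port. Assumes the
# standard Neutron port schema, where 'device_owner' is
# 'network:router_interface' for subnet interfaces and
# 'network:router_gateway' for the external gateway port.
def router_interface_ports(ports):
    return [p for p in ports
            if p.get('device_owner') == 'network:router_interface']

# Usage with the code above (qc being the neutronclient instance):
#   ports = qc.list_ports(device_id=router['id'])['ports']
#   ifaces = router_interface_ports(ports)
```

This keeps the "is the private subnet wired in" check from counting the gateway port as an interface.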

Best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AUTO: Dani Katz is prepared for DELETION (FREEZE) (returning 02/01/2013)

2013-10-07 Thread Dani Katz

I am out of the office until 02/01/2013.

Dani Katz is prepared for DELETION (FREEZE)


Note: This is an automated response to your message "OpenStack-dev Digest,
Vol 18, Issue 11" sent on 07/10/2013 15:00:02.

This is the only notification you will receive while this person is away.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon]Template for mobile browsers

2013-10-07 Thread Jaromir Coufal

Hi Max,

It looks like a student started looking at this as a project, and some 
wireframes appeared, but then the effort disappeared. So if you want to 
jump into it, feel free! There was already some discussion in the WB, 
but I believe you've read that one.


The things you want to start fit exactly into the UX part. We will be 
launching a new discussion tool soon; would you mind starting the whole 
discussion thread there with your proposals? I believe you will get a 
better-targeted audience there and a lot of supportive design ideas. I 
will be happy to help in this area as well.


Thanks a lot for starting the effort on this; I will let you know once 
the UX tool is running (it should be fairly soon).


-- Jarda

On 2013/04/10 14:48, Maxime Vidori wrote:

Hi,

I have to work on this blueprint: 
https://blueprints.launchpad.net/horizon/+spec/horizon-mobile-ui, and I am 
wondering whether anything has ever been done on it. There has been no activity 
on it since mid-August. I will soon upload some specifications, features, and 
designs I need, so if anyone is interested, I will be happy to hear their ideas.

Thanks

Max

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-07 Thread Thomas Maddox
On 10/3/13 4:09 PM, Thomas Maddox thomas.mad...@rackspace.com wrote:

On 10/3/13 8:53 AM, Julien Danjou jul...@danjou.info wrote:

On Thu, Oct 03 2013, Thomas Maddox wrote:

 Interesting point, Doug and Julien. I'm thinking out loud, but if we
wanted
 to use pipeline.yaml, we could have an 'enabled' attribute for each
 pipeline?

That would be an option, for sure. But just removing all of them should
also work.

 I'm curious, does the pipeline dictate whether its resulting
 sample is stored, or if no pipeline is configured, will it just store
the
 sample according to the plugins in */notifications.py? I will test this
out.

If there's no pipeline, there's no sample, so nothing's stored.

 For additional context, the intent of the feature is to allow a
deployer
 more flexibility. Like, say we wanted to only enable storing
white-listed
 event traits and using trigger pipelines (to come) for notification
based
 alerting/monitoring?

This is already supported by the pipeline as you can list the meters you
want or not.

I poked around a bunch today; yep, you're right - we can just drop samples
on the floor by negating all meters in pipeline.yaml. I didn't have much
luck just removing all pipeline definitions or using a blank one (it
puked, and anything other than negating all samples felt too hacky to be
viable with trusted behavior).

I had my semantics and understanding of the workflow from the collector to
the pipeline to the dispatcher all muddled and was set straight today. =]
I will think on this some more.

I was also made aware of some additional Stevedore functionality, like
NamedExtensionManager, that should allow us to completely enable/disable
any handlers we don't want to load and the pipelines with just config
changes, and easily (thanks, Dragon!).

I really appreciate the time you all take to help us less experienced
developers learn on a daily basis! =]

I tried two approaches from this:

1. Using NamedExtensionManager and passing in an empty list of names, I
get the same RuntimeError[1]
2. Using EnabledExtensionManager (my preference since the use case for
disabling is lesser than enabling) and passing in a black list check, with
which I received the same Runtime error when an empty list of extensions
was the result.

I was thinking that, with the white-list/black-list capability of [Named,
Enabled]ExtensionManager, it would behave more like an iterator. If the
manager didn't load any Extensions, then it would just no op on operations
on said extensions it owns and the application would carry on as always.

Is this something that we could change in Stevedore? I wanted to get your
thoughts before opening an issue there, in case this was intended behavior
for some benefit I'm not aware of.

-Thomas

[1]:'RuntimeError: No ceilometer.collector extensions found'



Cheers!

-Thomas


-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 01:23 PM, Brad Topol wrote:

Hi Everyone,

I would like propose my candidacy for the OpenStack Technical Committee.

A bit about me

I have been an Active Technical Contributor to Keystone and DevStack 
since the Grizzly release. I have also co-authored articles on 
OpenStack [1], and more information about me can be found in a recent 
OpenStack Open Mic article [2]. In addition, I lead and give 
direction to a large team of OpenStack developers at my company who 
contribute to a variety of OpenStack projects, including Keystone, 
Heat, Ceilometer, Swift, Cinder, and Internationalization (i18n). This 
role gives me an opportunity to obtain a broad view of what is 
happening across the OpenStack projects, as opposed to being focused on 
a specific core project. I also spend a tremendous amount of time 
traveling to customers and business partners to evangelize OpenStack, 
listening to their OpenStack requirements and adoption impediments, 
and assisting their development teams in becoming contributors to 
OpenStack. I am incredibly passionate about making sure OpenStack is 
highly consumable for our users and that the OpenStack ecosystem 
grows and thrives.



Platform
===
As a technical team lead, developer, and evangelist for OpenStack, I 
am deeply committed to OpenStack being incredibly valuable and 
consumable for all our users, and to making sure the OpenStack 
ecosystem grows and thrives. I hear from customers all the time about 
how OpenStack can be improved: easier to install, more consumable and 
serviceable, more easily integrated into enterprise customer 
environments, and able to have the interoperability and scalability of 
OpenStack environments certified. I would be honored to have an 
opportunity to serve as a member of the TC and to help drive the 
technical direction of the OpenStack project, based on the outstanding 
feedback I receive from customers, business partners, and analysts, and 
on the knowledge I have gained both as a personal contributor to 
OpenStack and by leading a large development team of strong and core 
contributors working across several of OpenStack's core projects.


[1] - http://www.ibm.com/developerworks/cloud/library/cl-ldap-keystone/
[2] - 
https://www.openstack.org/blog/2013/08/open-mic-spotlight-brad-topol/


Thanks,

Brad

Brad Topol, Ph.D.
IBM Distinguished Engineer
OpenStack
(919) 543-0646
Internet:  bto...@us.ibm.com
Assistant: Cindy Willman (919) 268-5296


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] TC Candidacy

2013-10-07 Thread stever
+1 for Brad
-Original Message-
From: Brad Topol bto...@us.ibm.com
Date: Mon, 7 Oct 2013 13:23:30 
To: OpenStack Development Mailing Listopenstack-dev@lists.openstack.org
Reply-To: OpenStack Development Mailing List
 openstack-dev@lists.openstack.org
Subject: [openstack-dev]  TC Candidacy

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-07 Thread Doug Hellmann
On Mon, Oct 7, 2013 at 1:44 PM, Thomas Maddox
thomas.mad...@rackspace.comwrote:

 On 10/3/13 4:09 PM, Thomas Maddox thomas.mad...@rackspace.com wrote:

 On 10/3/13 8:53 AM, Julien Danjou jul...@danjou.info wrote:
 
 On Thu, Oct 03 2013, Thomas Maddox wrote:
 
  Interesting point, Doug and Julien. I'm thinking out loud, but if we
 wanted
  to use pipeline.yaml, we could have an 'enabled' attribute for each
  pipeline?
 
 That would be an option, for sure. But just removing all of them should
 also work.
 
  I'm curious, does the pipeline dictate whether its resulting
  sample is stored, or if no pipeline is configured, will it just store
 the
  sample according to the plugins in */notifications.py? I will test this
 out.
 
 If there's no pipeline, there's no sample, so nothing's stored.
 
  For additional context, the intent of the feature is to allow a
 deployer
  more flexibility. Like, say we wanted to only enable storing
 white-listed
  event traits and using trigger pipelines (to come) for notification
 based
  alerting/monitoring?
 
 This is already supported by the pipeline as you can list the meters you
 want or not.
 
 I poked around a bunch today; yep, you're right - we can just drop samples
 on the floor by negating all meters in pipeline.yaml. I didn't have much
 luck just removing all pipeline definitions or using a blank one (it
 puked, and anything other than negating all samples felt too hacky to be
 viable with trusted behavior).
 
 I had my semantics and understanding of the workflow from the collector to
 the pipeline to the dispatcher all muddled and was set straight today. =]
 I will think on this some more.
 
 I was also made aware of some additional Stevedore functionality, like
 NamedExtensionManager, that should allow us to completely enable/disable
 any handlers we don't want to load and the pipelines with just config
 changes, and easily (thanks, Dragon!).
 
 I really appreciate the time you all take to help us less experienced
 developers learn on a daily basis! =]

 I tried two approaches from this:

 1. Using NamedExtensionManager and passing in an empty list of names, I
 get the same RuntimeError[1]
 2. Using EnabledExtensionManager (my preference since the use case for
 disabling is lesser than enabling) and passing in a black list check, with
 which I received the same Runtime error when an empty list of extensions
 was the result.

 I was thinking that, with the white-list/black-list capability of [Named,
 Enabled]ExtensionManager, it would behave more like an iterator. If the
 manager didn't load any Extensions, then it would just no op on operations
 on said extensions it owns and the application would carry on as always.

 Is this something that we could change in Stevedore? I wanted to get your
 thoughts before opening an issue there, in case this was intended behavior
 for some benefit I'm not aware of.


The exception is intended to prevent the app from failing silently if it
cannot load any plugins for some reason, but stevedore should throw a
different exception for the "could not load any plugins" and "I was told
not to use any plugins and then told to do some work" cases.

In this particular case, though, the thing calling the extension manager
knows what the pipeline configuration is, and could just skip the call if
there are no publishers in the pipeline.
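
A minimal sketch of that guard (the helper name is hypothetical; the only stevedore behavior assumed is that map() raises RuntimeError when the manager loaded no extensions):

```python
# Hypothetical guard: skip manager.map() entirely when the manager
# loaded no plugins, returning an empty result instead of letting
# stevedore raise RuntimeError.
def safe_map(manager, func, *args, **kwargs):
    # stevedore managers expose their loaded plugins via '.extensions';
    # an empty list means map() would raise RuntimeError.
    if not getattr(manager, 'extensions', None):
        return []
    return manager.map(func, *args, **kwargs)
```

Equivalently, the caller could inspect the parsed pipeline configuration and skip constructing the manager at all, which is the approach described above.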

Doug




 -Thomas

 [1]:'RuntimeError: No ceilometer.collector extensions found'


 
 Cheers!
 
 -Thomas
 
 
 --
 Julien Danjou
 -- Free Software hacker - independent consultant
 -- http://julien.danjou.info
 
 
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TRIPLEO] tripleo-core update october

2013-10-07 Thread Robert Collins
Hi, like most OpenStack projects we need to keep the core team up to
date: folk who are not regularly reviewing will lose context over
time, and new folk who have been reviewing regularly should be trusted
with -core responsibilities.

Please see Russell's excellent stats:
http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt

For joining and retaining core I look at the 90 day statistics; folk
who are particularly low in the 30 day stats get a heads up: it's not
a purely mechanical process :).

As we've just merged review teams with Tuskar devs, we need to allow
some time for everyone to get up to speed; so folk who are core as
a result of the merge will be retained as core, but by November I expect
the stats will have normalised somewhat and that special handling
won't be needed.

IMO these are the reviewers doing enough over 90 days to meet the
requirements for core:

|   lifeless **    | 349   8 140   2 199   57.6% |    2 (  1.0%)  |
| clint-fewbar **  | 329   2  54   1 272   83.0% |    7 (  2.6%)  |
| cmsj **          | 248   1  25   1 221   89.5% |   13 (  5.9%)  |
|    derekh **     |  88   0  28  23  37   68.2% |    6 ( 10.0%)  |

Who are already core, so thats easy.

If you are core, and not on that list, that may be because you're
coming from tuskar, which doesn't have 90 days of history, or you need
to get stuck into some more reviews :).

Now, 30 day history - this is the heads up for folk:

| clint-fewbar **  | 179   2  27   0 150   83.8% |    6 (  4.0%)  |
| cmsj **          | 179   1  15   0 163   91.1% |   11 (  6.7%)  |
|   lifeless **    | 129   3  39   2  85   67.4% |    2 (  2.3%)  |
|    derekh **     |  41   0  11   0  30   73.2% |    0 (  0.0%)  |
|  slagle          |  37   0  11  26   0   70.3% |    3 ( 11.5%)  |
|ghe.rivero        |  28   0   4  24   0   85.7% |    2 (  8.3%)  |


I'm using the fairly simple metric of 'average at least one review a
day' as a proxy for 'sees enough of the code and enough discussion of
the code to be an effective reviewer'. James and Ghe, good stuff -
you're well on your way to core. If you're not in that list, please
treat this as a heads-up that you need to do more reviews to keep on
top of what's going on, whether that's to become core or to stay core.

In next month's update I'll review whether to remove some folk that
aren't keeping on top of things, as it won't be a surprise :).

Cheers,
Rob






-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Oslo meeting this week

2013-10-07 Thread Doug Hellmann
The Oslo team will be meeting this week to discuss delayed message
translation.

Please refer to https://wiki.openstack.org/wiki/Meetings/Oslo for a few
links relevant to the conversation.

Date: 11 Oct 2013
Time: 1400 UTC
Location: #openstack-meeting on freenode

See you there!
Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Dev] TC candidacy

2013-10-07 Thread John Griffith
Hi,

I'd like to propose my candidacy for a seat on the OpenStack Technical
Committee.

I've been an ATC working full time on OpenStack for about a year and a half
now.  I was recently re-elected as PTL for the Cinder project, which I
started back in the Folsom release.  I've also had the privilege of serving
on the TC as a result of my role as PTL.  My goal over the past year and a
half has been focused on building the Cinder project and getting it on its
way to being a healthy, diverse, and active community-driven project.
 During that time I've taken an active interest in all things OpenStack,
and over the next year I'd like to continue growing that interest and
participating more in OpenStack and its future as a whole.

As far as my background, I'm not associated with a specific OpenStack
Distribution or a Service Provider, but I am employed by a storage startup
(SolidFire Inc) specifically to contribute to OpenStack as a whole.  I
believe that I have a slightly different (and valuable) perspective on
OpenStack.  Coming from a device vendor, and a company that implements an
OpenStack private cloud in house, I have a strong interest in the
user experience, whether that user is the dev-ops or sys-admins deploying
OpenStack or the end user actually consuming the resources made available.
 My emphasis is on compatibility, regardless of distribution, hardware
devices deployed, virtualization technologies used etc. I spend a lot of my
time talking and more importantly, listening to a variety of folks about
OpenStack, including vendors and most of all folks that are implementing
OpenStack.  I like to hear their feedback regarding what's working, what's
not and how we can do better.  I'd like the opportunity to take that
feedback and help drive towards an ever improving OpenStack.

I believe that the TC (as well as the role of PTL) serves an
important function in the community.  In both cases these roles, in my
opinion, should include acting as an advocate for the overall well-being
of OpenStack and its technical direction.  It has nothing to do with
titles or special benefits; it's just a lot of extra hard work that
needs to be done, and not everybody is willing to do it, as well as
providing a point of contact for folks that are looking for technical
answers or explanations.

To me, this means much more than just voting on proposed new projects.  New
projects and growth are important to OpenStack; however, I don't think that
uncontrolled and disjointed growth in the form of new projects is a good
thing. In fact, I think it's detrimental to OpenStack as a whole.  I
personally would like to see the TC have more involvement in
recommending and investigating new projects before they're proposed or started
by others.  By the same token, I'd also like to see the TC take a more
active role in the projects we currently have and how they all tie
together.  I personally believe that having 10 or so individual projects
operating in their own silos is not the right direction.  My opinion here
does NOT equate to more control; instead it should equate to being more
helpful.  With the continued growth of OpenStack I believe it's critical to
have some sort of vision and some resources that have a deep understanding
of the entire ecosystem.

If you have any questions about my views, opinions or anything feel free to
drop me an email or hit me up on irc.

Thanks,
John

OpenStack code contributions:
https://review.openstack.org/#/q/status:merged+owner:%2522John+Griffith%2522,n,z
OpenStack code reviews:
https://review.openstack.org/#/q/reviewer:%2522John+Griffith%2522,n,z
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-07 Thread Chris Friesen

On 10/07/2013 12:44 PM, Russell Bryant wrote:

On 10/07/2013 02:28 PM, Chris Friesen wrote:


I've been doing a lot of instance creation/deletion/evacuate and I've
noticed that if I

1)create an instance
2) power off the compute node it was running on
3) delete the instance
4) boot up the compute node

then the instance rootfs stays around in /var/lib/nova/instances/.
Eventually this could add up to significant amounts of space.


Is this expected behaviour?  (This is on grizzly, so maybe havana is
different.)  If not, should I file a bug for it?

I think it would make sense for the compute node to come up, query all
the instances in /var/lib/nova/instances/, and delete the ones for
instances that aren't in the database.


How long are you waiting after starting up the compute node?  I would
expect it to get cleaned up by a periodic task, so you might have to
wait roughly 10 minutes (by default).


This is nearly 50 minutes after booting up the compute node:

cfriesen@compute2:/var/lib/nova/instances$ ls -1
39e459b1-3878-41db-aaaf-7c7d0dfa2b19
41a60975-d6b8-468e-90bc-d7de58c2124d
46aec2ae-b6de-4503-a238-af736f81f1a4
50ec3d89-1c9d-4c28-adaf-26c924dfa3ed
_base
c6ec71a3-658c-4c7c-aa42-cc26296ce7fb
c72845e9-0d34-459f-b602-bb2ee409728b
compute_nodes
locks

Of these, only two show up in nova list.

Chris
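
A hedged sketch of the cleanup Chris proposes (a hypothetical helper, not nova's actual periodic task, which would also need to guard against races such as instances mid-migration):

```python
import re

# Directory names under /var/lib/nova/instances that look like instance
# UUIDs; non-instance entries such as '_base', 'locks', and
# 'compute_nodes' never match this pattern.
UUID_RE = re.compile(r'^[0-9a-f]{8}-(?:[0-9a-f]{4}-){3}[0-9a-f]{12}$')

def orphaned_instance_dirs(dir_names, known_uuids):
    """Return UUID-named directories with no matching instance in the DB."""
    known = set(known_uuids)
    return [d for d in dir_names
            if UUID_RE.match(d) and d not in known]
```

On startup (or in a periodic task), the compute node could compare os.listdir('/var/lib/nova/instances') against the instance UUIDs the database reports for this host and reclaim the rest.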


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-07 Thread Thomas Maddox
On 10/7/13 1:55 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




On Mon, Oct 7, 2013 at 1:44 PM, Thomas Maddox thomas.mad...@rackspace.com wrote:
On 10/3/13 4:09 PM, Thomas Maddox thomas.mad...@rackspace.com wrote:

On 10/3/13 8:53 AM, Julien Danjou jul...@danjou.info wrote:

On Thu, Oct 03 2013, Thomas Maddox wrote:

 Interesting point, Doug and Julien. I'm thinking out loud, but if we
wanted
 to use pipeline.yaml, we could have an 'enabled' attribute for each
 pipeline?

That would be an option, for sure. But just removing all of them should
also work.

 I'm curious, does the pipeline dictate whether its resulting
 sample is stored, or if no pipeline is configured, will it just store
the
 sample according to the plugins in */notifications.py? I will test this
out.

If there's no pipeline, there's no sample, so nothing's stored.

 For additional context, the intent of the feature is to allow a
deployer
 more flexibility. Like, say we wanted to only enable storing
white-listed
 event traits and using trigger pipelines (to come) for notification
based
 alerting/monitoring?

This is already supported by the pipeline as you can list the meters you
want or not.

I poked around a bunch today; yep, you're right - we can just drop samples
on the floor by negating all meters in pipeline.yaml. I didn't have much
luck just removing all pipeline definitions or using a blank one (it
puked, and anything other than negating all samples felt too hacky to be
viable with trusted behavior).

I had my semantics and understanding of the workflow from the collector to
the pipeline to the dispatcher all muddled and was set straight today. =]
I will think on this some more.

I was also made aware of some additional Stevedore functionality, like
NamedExtensionManager, that should allow us to completely enable/disable
any handlers we don't want to load and the pipelines with just config
changes, and easily (thanks, Dragon!).

I really appreciate the time you all take to help us less experienced
developers learn on a daily basis! =]

I tried two approaches from this:

1. Using NamedExtensionManager and passing in an empty list of names, I
get the same RuntimeError[1]
2. Using EnabledExtensionManager (my preference, since the use case for
disabling is less common than the one for enabling) and passing in a blacklist
check, with which I received the same RuntimeError when an empty list of
extensions was the result.

I was thinking that, with the white-list/black-list capability of [Named,
Enabled]ExtensionManager, it would behave more like an iterator: if the
manager didn't load any extensions, it would just no-op on operations
over the extensions it owns, and the application would carry on as always.

Is this something that we could change in Stevedore? I wanted to get your
thoughts before opening an issue there, in case this was intended behavior
for some benefit I'm not aware of.

The exception is intended to prevent the app from failing silently if it cannot 
load any plugins for some reason, but stevedore should throw a different 
exception to distinguish the "could not load any plugins" case from the "I was 
told not to use any plugins and then told to do some work" case.

Thanks, Doug!

I poked around a bit more. This is being raised in the map function: 
https://github.com/dreamhost/stevedore/blob/master/stevedore/extension.py#L135-L137,
 not at load time. I see a separate try/except block for a failure to load, it 
looks like: 
https://github.com/dreamhost/stevedore/blob/master/stevedore/extension.py#L85-L97.
 Is that what you're referring to?


In this particular case, though, the thing calling the extension manager knows 
what the pipeline configuration is, and could just skip the call if there are 
no publishers in the pipeline.

This seems like it'd have the desired end result, but then this logic would 
have to live in two places for the collector service - both at collector 
initialization as well as each time a notification is processed. If Stevedore's 
ExtensionManager behaved like an iterator map, where extensions = [] and 
map(func, extensions) just returns an empty list, we would handle the empty 
case in one place (but silently, indeed). Otherwise, we'd have to check for 
publishers in the init function and, since that causes no notification managers 
to load, we'd also have to check for the existence of notification managers in 
the callback for each notification. I realize I'm speaking specifically about 
Ceilometer's collector service, so generally speaking, what I'm suggesting is 
to fall back to how Python handles this naturally, since Stevedore is using a 
map function as syntactic sugar over an iterator of available extensions; that 
seems the simplest approach to me.

Another suggestion (thanks, Dragon!) is that we could just subclass the 
ExtensionManager for a different way of handling the various iterator 
operations?
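
A minimal, purely illustrative sketch (this is not stevedore's actual code) of the two behaviors under discussion: the current "raise when no extensions loaded" map() versus the proposed iterator-style no-op:

```python
class RaisingManager:
    """Mimics the current behavior: map() with zero extensions raises."""

    def __init__(self, extensions):
        self.extensions = list(extensions)

    def map(self, func, *args, **kwds):
        if not self.extensions:
            # Stevedore raises here so a misconfigured app cannot fail silently.
            raise RuntimeError('No extensions found')
        return [func(ext, *args, **kwds) for ext in self.extensions]


class IteratorStyleManager(RaisingManager):
    """The proposed behavior: an empty manager no-ops, like map() over []."""

    def map(self, func, *args, **kwds):
        return [func(ext, *args, **kwds) for ext in self.extensions]


if __name__ == '__main__':
    # The proposed behavior: the caller just gets an empty result back.
    print(IteratorStyleManager([]).map(lambda ext: ext))  # -> []
    # The current behavior: the empty case raises.
    try:
        RaisingManager([]).map(lambda ext: ext)
    except RuntimeError as e:
        print('raised: %s' % e)
```

The trade-off is exactly the one Doug describes: the no-op version handles the empty case in one place, but silently.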

So, 

Re: [openstack-dev] [OpenStack-Dev] TC candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 03:45 PM, John Griffith wrote:

Hi,

I'd like to propose my candidacy for a seat on the OpenStack Technical 
Committee.


I've been an ATC working full time on OpenStack for about a year and a 
half now.  I was recently re-elected as PTL for the Cinder project, 
which I started back in the Folsom release.  I've also had the 
privilege of serving on the TC as a result of my role as PTL.  My goal 
over the past year and a half has been focused on building the Cinder 
project and getting it on its way to being a healthy, diverse, and 
active community-driven project.  During that time I've taken an 
active interest in all things OpenStack, and over the next year I'd 
like to continue growing that interest and participating more in 
OpenStack and its future as a whole.


As far as my background, I'm not associated with a specific OpenStack 
Distribution or a Service Provider, but I am employed by a storage 
startup (SolidFire Inc) specifically to contribute to OpenStack as a 
whole.  I believe that I have a slightly different (and valuable) 
perspective on OpenStack.  Coming from a device vendor, and a company 
that implements an OpenStack private cloud in house, I have a strong 
interest in the user-experience, whether that user be the dev-ops or 
sys-admin's deploying OpenStack or the end-user actually consuming the 
resources made available.  My emphasis is on compatibility, regardless 
of distribution, hardware devices deployed, virtualization 
technologies used etc. I spend a lot of my time talking and more 
importantly, listening to a variety of folks about OpenStack, 
including vendors and most of all folks that are implementing 
OpenStack.  I like to hear their feedback regarding what's working, 
what's not and how we can do better.  I'd like the opportunity to take 
that feedback and help drive towards an ever improving OpenStack.


I believe that the TC (as well as the role of PTL) serves an 
important function in the community.  In both cases these roles, in my 
opinion, should act as an advocate for the overall well-being of 
OpenStack and its technical direction.  It has nothing to do with 
titles or special benefits; it's just a lot of extra hard work that 
needs to be done that not everybody is willing to do, as well as 
providing a point of contact for folks that are looking for technical 
answers or explanations.


To me, this means much more than just voting on proposed new projects. 
 New projects and growth are important to OpenStack however I don't 
think that uncontrolled and disjointed growth in the form of new 
projects is a good thing, in fact I think it's detrimental to 
OpenStack as a whole.  I personally would like to see the TC have more 
involvement in terms of recommending/investigating new projects before 
they're proposed or started by others.  By the same token, I'd also 
like to see the TC take a more active role in the projects we 
currently have and how they all tie together.  I personally believe 
that having 10 or so individual projects operating in their own silos 
is not the right direction.  My opinion here does NOT equate to more 
control, but instead should equate to being more helpful.  With the 
continued growth of OpenStack, I believe it's critical to have some 
sort of vision and some resources that have a deep understanding of 
the entire ecosystem.


If you have any questions about my views, opinions or anything feel 
free to drop me an email or hit me up on irc.


Thanks,
John

OpenStack code contributions: 
https://review.openstack.org/#/q/status:merged+owner:%2522John+Griffith%2522,n,z
OpenStack code reviews: 
https://review.openstack.org/#/q/reviewer:%2522John+Griffith%2522,n,z




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [Ceilometer] Looking for input on optional sample pipelines branch

2013-10-07 Thread Doug Hellmann
On Mon, Oct 7, 2013 at 4:23 PM, Thomas Maddox
thomas.mad...@rackspace.com wrote:

   On 10/7/13 1:55 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote:




 On Mon, Oct 7, 2013 at 1:44 PM, Thomas Maddox thomas.mad...@rackspace.com
  wrote:

  On 10/3/13 4:09 PM, Thomas Maddox thomas.mad...@rackspace.com wrote:

 On 10/3/13 8:53 AM, Julien Danjou jul...@danjou.info wrote:
 
 On Thu, Oct 03 2013, Thomas Maddox wrote:
 
  Interesting point, Doug and Julien. I'm thinking out loud, but if we
 wanted
  to use pipeline.yaml, we could have an 'enabled' attribute for each
  pipeline?
 
 That would be an option, for sure. But just removing all of them should
 also work.
 
  I'm curious, does the pipeline dictate whether its resulting
  sample is stored, or if no pipeline is configured, will it just store
 the
  sample according to the plugins in */notifications.py? I will test
 this
 out.
 
 If there's no pipeline, there's no sample, so nothing's stored.
 
  For additional context, the intent of the feature is to allow a
 deployer
  more flexibility. Like, say we wanted to only enable storing
 white-listed
  event traits and using trigger pipelines (to come) for notification
 based
  alerting/monitoring?
 
 This is already supported by the pipeline as you can list the meters you
 want or not.
 
 I poked around a bunch today; yep, you're right - we can just drop
 samples
 on the floor by negating all meters in pipeline.yaml. I didn't have much
 luck just removing all pipeline definitions or using a blank one (it
 puked, and anything other than negating all samples felt too hacky to be
 viable with trusted behavior).
 
 I had my semantics and understanding of the workflow from the collector
 to
 the pipeline to the dispatcher all muddled and was set straight today. =]
 I will think on this some more.
 
 I was also made aware of some additional Stevedore functionality, like
 NamedExtensionManager, that should allow us to completely enable/disable
 any handlers we don't want to load and the pipelines with just config
 changes, and easily (thanks, Dragon!).
 
 I really appreciate the time you all take to help us less experienced
 developers learn on a daily basis! =]

  I tried two approaches from this:

 1. Using NamedExtensionManager and passing in an empty list of names, I
 get the same RuntimeError[1]
 2. Using EnabledExtensionManager (my preference since the use case for
 disabling is lesser than enabling) and passing in a black list check, with
 which I received the same Runtime error when an empty list of extensions
 was the result.

 I was thinking that, with the white-list/black-list capability of [Named,
 Enabled]ExtensionManager, it would behave more like an iterator. If the
 manager didn't load any Extensions, then it would just no op on operations
 on said extensions it owns and the application would carry on as always.

 Is this something that we could change in Stevedore? I wanted to get your
 thoughts before opening an issue there, in case this was intended behavior
 for some benefit I'm not aware of.


  The exception is intended to prevent the app from failing silently if it
 cannot load any plugins for some reason, but
 stevedore should throw a different exception for the could not load any
 plugins and I was told not to use any plugins and then told to do some
 work cases.


  Thanks, Doug!

  I poked around a bit more. This is being raised in the map function:
 https://github.com/dreamhost/stevedore/blob/master/stevedore/extension.py#L135-L137,
 not at load time. I see a separate try/except block for a failure to load,
 it looks like:
 https://github.com/dreamhost/stevedore/blob/master/stevedore/extension.py#L85-L97.
 Is that what you're referring to?


The exception is raised when the manager is used, because the manager might
have been created as a module or application global object in a place where
the traceback wouldn't have been logged properly.




  In this particular case, though, the thing calling the extension manager
 knows what the pipeline configuration is, and could just skip the call if
 there are no publishers in the pipeline.


  This seems like it'd have the desired end result, but then this logic
 would have to live in two places for the collector service - both at
 collector initialization as well as each time a notification is processed.
 If Stevedore ExtensionManager behaved like an iterator map, where
 extensions = [] and map(func, extensions) just returns None, we would
 handle the empty case in one place (but silently, indeed). Otherwise, We'd
 have to check for publishers in the init function and, since that causes no
 notification managers to load,  we also have to check for the existence of
 notification managers in the callback for each notification. I realize I'm
 speaking specifically to Ceilometer's collector service, so generally
 speaking, what I'm suggesting is to fall back to how Python handles this
 naturally, since Stevedore is using a map function as 

[openstack-dev] [Doc] Doc Team meeting on IRC 10/8/13 13:00 UTC

2013-10-07 Thread Anne Gentle
Hi all,

The OpenStack Doc team meeting will take place tomorrow in
#openstack-meeting on IRC at 13:00 UTC. The Agenda is here:
https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting#Agenda_for_next_meeting

Please add items of interest to you and join us.
Thanks,
Anne


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-07 Thread Vishvananda Ishaya
There is a configuration option stating what to do with instances that are 
still in the hypervisor but have been deleted from the database. I think you 
want:

running_deleted_instance_action=reap

You probably also want

resume_guests_state_on_host_boot=true

to bring back the instances that were running before the node was powered off. 
We should definitely consider changing the default of these two values since I 
think the default values are probably not what most people would want.
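
As a nova.conf fragment, the two options Vish mentions would look roughly like this (a sketch; the usual [DEFAULT] section placement is assumed):

```ini
[DEFAULT]
# Reap instances still present on the hypervisor whose DB records
# were deleted while nova-compute was down.
running_deleted_instance_action = reap

# On host boot, resume guests that were running before the node
# was powered off.
resume_guests_state_on_host_boot = true
```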

Vish
On Oct 7, 2013, at 1:24 PM, Chris Friesen chris.frie...@windriver.com wrote:

 On 10/07/2013 12:44 PM, Russell Bryant wrote:
 On 10/07/2013 02:28 PM, Chris Friesen wrote:
 
 I've been doing a lot of instance creation/deletion/evacuate and I've
 noticed that if I
 
 1)create an instance
 2) power off the compute node it was running on
 3) delete the instance
 4) boot up the compute node
 
 then the instance rootfs stays around in /var/lib/nova/instances/.
 Eventually this could add up to significant amounts of space.
 
 
 Is this expected behaviour?  (This is on grizzly, so maybe havana is
 different.)  If not, should I file a bug for it?
 
 I think it would make sense for the compute node to come up, query all
 the instances in /var/lib/nova/instances/, and delete the ones for
 instances that aren't in the database.
 
 How long are you waiting after starting up the compute node?  I would
 expect it to get cleaned up by a periodic task, so you might have to
 wait roughly 10 minutes (by default).
 
 This is nearly 50 minutes after booting up the compute node:
 
 cfriesen@compute2:/var/lib/nova/instances$ ls -1
 39e459b1-3878-41db-aaaf-7c7d0dfa2b19
 41a60975-d6b8-468e-90bc-d7de58c2124d
 46aec2ae-b6de-4503-a238-af736f81f1a4
 50ec3d89-1c9d-4c28-adaf-26c924dfa3ed
 _base
 c6ec71a3-658c-4c7c-aa42-cc26296ce7fb
 c72845e9-0d34-459f-b602-bb2ee409728b
 compute_nodes
 locks
 
 Of these, only two show up in nova list.
 
 Chris
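
The clean-up Chris suggests could be sketched as follows: on compute start-up, compare the directories under the instances path with the instance UUIDs the database still knows about, and flag the orphans. The names here (find_orphans, NON_INSTANCE_ENTRIES) are illustrative, not nova's actual API:

```python
# Entries under /var/lib/nova/instances/ that are not instance directories.
NON_INSTANCE_ENTRIES = {'_base', 'compute_nodes', 'locks'}


def find_orphans(instance_dirs, db_instance_uuids):
    """Return instance directories with no corresponding DB record."""
    candidates = set(instance_dirs) - NON_INSTANCE_ENTRIES
    return sorted(candidates - set(db_instance_uuids))


if __name__ == '__main__':
    # Mirrors Chris's situation: disk has more directories than the DB knows.
    on_disk = ['39e459b1', '41a60975', '_base', 'c6ec71a3',
               'compute_nodes', 'locks']
    in_db = ['39e459b1', 'c6ec71a3']
    print(find_orphans(on_disk, in_db))  # -> ['41a60975']
```

In nova itself this reconciliation is what the running_deleted_instance periodic task is meant to cover; the sketch only illustrates the set difference involved.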
 
 


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-07 Thread Joshua Harlow
This brings up another question: do people actually like/use the
'local_delete' feature in nova?

In general it seems to free resources that are not actually capable of
being freed, and this has been problematic for y! (Yahoo) usage.

Deleting from the DB allows another request to actually take those
resources over, yet the previous VM (+network, volumes, ...) that wasn't
deleted still has those resources (likely attached to it in the case of a
volume; in the case of a hypervisor, the VM resource is still active,
but maybe nova-compute is down), so you end up in a conflict. How are others
using this code? Has it been working out?

-Josh

On 10/7/13 3:34 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:

There is a configuration option stating what to do with instances that
are still in the hypervisor but have been deleted from the database. I
think you want:

running_deleted_instance_action=reap

You probably also want

resume_guests_state_on_host_boot=true

to bring back the instances that were running before the node was powered
off. We should definitely consider changing the default of these two
values since I think the default values are probably not what most people
would want.

Vish
On Oct 7, 2013, at 1:24 PM, Chris Friesen chris.frie...@windriver.com
wrote:

 On 10/07/2013 12:44 PM, Russell Bryant wrote:
 On 10/07/2013 02:28 PM, Chris Friesen wrote:
 
 I've been doing a lot of instance creation/deletion/evacuate and I've
 noticed that if I
 
 1)create an instance
 2) power off the compute node it was running on
 3) delete the instance
 4) boot up the compute node
 
 then the instance rootfs stays around in /var/lib/nova/instances/.
 Eventually this could add up to significant amounts of space.
 
 
 Is this expected behaviour?  (This is on grizzly, so maybe havana is
 different.)  If not, should I file a bug for it?
 
 I think it would make sense for the compute node to come up, query all
 the instances in /var/lib/nova/instances/, and delete the ones for
 instances that aren't in the database.
 
 How long are you waiting after starting up the compute node?  I would
 expect it to get cleaned up by a periodic task, so you might have to
 wait roughly 10 minutes (by default).
 
 This is nearly 50 minutes after booting up the compute node:
 
 cfriesen@compute2:/var/lib/nova/instances$ ls -1
 39e459b1-3878-41db-aaaf-7c7d0dfa2b19
 41a60975-d6b8-468e-90bc-d7de58c2124d
 46aec2ae-b6de-4503-a238-af736f81f1a4
 50ec3d89-1c9d-4c28-adaf-26c924dfa3ed
 _base
 c6ec71a3-658c-4c7c-aa42-cc26296ce7fb
 c72845e9-0d34-459f-b602-bb2ee409728b
 compute_nodes
 locks
 
 Of these, only two show up in nova list.
 
 Chris
 
 


[openstack-dev] [savanna] using keystone client

2013-10-07 Thread Jon Maron
Hi,

  I'm trying to use the keystone client code in savanna/utils/openstack, but 
my attempt to use it yields:

 'Api v2.0 endpoint not found in service identity'

  A code sample:

from savanna.utils.openstack import keystone

. . .
service_id = next((service.id for service in
                   keystone.client().services.list()
                   if 'quantum' == service.name), None)
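
The lookup pattern itself can be exercised against stub data. The Service namedtuple here is just a stand-in for the objects the keystone client returns; next() with a default of None avoids a StopIteration when no service matches:

```python
from collections import namedtuple

# Hypothetical stand-in for what keystone.client().services.list() returns.
Service = namedtuple('Service', ['id', 'name'])


def find_service_id(services, name):
    # Generator expression + next() with a default: returns None on no match.
    return next((s.id for s in services if s.name == name), None)


services = [Service('6f2b', 'nova'), Service('a01c', 'quantum')]
print(find_service_id(services, 'quantum'))  # -> a01c
print(find_service_id(services, 'swift'))    # -> None
```

The endpoint error itself is separate from this pattern; it suggests the client's service catalog lookup is failing before services.list() ever runs.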

  Thanks for the help!

-- Jon





Re: [openstack-dev] Blueprint for IPAM and Policy extensions in Neutron

2013-10-07 Thread Rudra Rugge
Hi Nachi,

I have split out the spec for policy; the VPN wiki served as a good reference 
point. Please review and provide comments:
https://wiki.openstack.org/wiki/Blueprint-policy-extensions-for-neutron

Thanks,
Rudra

On Oct 4, 2013, at 4:56 PM, Nachi Ueno na...@ntti3.com wrote:

 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,
 
 Inline response
 
 On 10/4/13 12:54 PM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Rudra
 
 inline responded
 
 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi Nachi,
 
 Thanks for reviewing the BP. Please see inline:
 
 On 10/4/13 11:30 AM, Nachi Ueno na...@ntti3.com wrote:
 
 Hi Rudra
 
 Two comment from me
 
 (1) IPAM and Network policy extension looks like independent extension.
 so IPAM part and Network policy should be divided for two blueprints.
 
 [Rudra] I agree that these need to be split into two blueprints. I will
 create another BP.
 
 Thanks
 
 
  (2) The term IPAM is too general a word. IMO we should use a more specific
  word.
 How about SubnetGroup?
 
 [Rudra] IPAM holds more information.
- All DHCP attributes for this IPAM subnet
- DNS server configuration
- In future address allocation schemes
 
  Actually, Neutron Subnet has DHCP, DNS, and IP allocation schemes.
  If I understand your proposal correctly, an IPAM is a group of subnets
  which share common parameters.
  Also, you could propose to extend the existing subnet.
 
  [Rudra] Neutron subnet requires a network, as I understand it. IPAM info
  should not have such a dependency. Similar to the Amazon VPC model, where
  all IPAM information can be stored even if a network is not created.
  Association to networks can happen at a later time.
 
  OK, I got it. However, IPAM is still too general a word.
 Don't you have any alternatives?
 
 Best
 Nachi
 
 Rudra
 
 
 
 
 
 (3) Network Policy Resource
 I would like to know more details of this api
 
 I would like to know resource definition and
 sample API request and response json.
 
 (This is one example
 https://wiki.openstack.org/wiki/Quantum/VPNaaS )
 
 Especially, I'm interested in src-addresses, dst-addresses, action-list
 properties.
 Also, how can we express any port in your API?
 
 [Rudra] Will add the details of the resources and APIs after separating
 the blueprint.
 
 Thanks!
 
 Best
 Nachi
 
 Regards,
 Rudra
 
 
 Best
 Nachi
 
 
 2013/10/4 Rudra Rugge rru...@juniper.net:
 Hi All,
 
 The link in the email was incorrect. Please follow the following link:
 
 
 https://blueprints.launchpad.net/neutron/+spec/ipam-policy-extensions-f
 or
 -neutron
 
 Thanks,
 Rudra
 
 On Oct 3, 2013, at 11:38 AM, Rudra Rugge rru...@juniper.net wrote:
 
 Hi All,
 
 A blueprint has been registered to add IPAM and Policy
 extensions to Neutron. Please review the blueprint and
 the attached specification.
 
 
 https://blueprints.launchpad.net/neutron/+spec/juniper-contrail-ipam-po
 li
 cy-extensions-for-neutron
 
 All comments are welcome.
 
 Thanks,
 Rudra


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-07 Thread Russell Bryant
On 10/07/2013 06:34 PM, Vishvananda Ishaya wrote:
 There is a configuration option stating what to do with instances that are 
 still in the hypervisor but have been deleted from the database. I think you 
 want:
 
 running_deleted_instance_action=reap
 
 You probably also want
 
 resume_guests_state_on_host_boot=true
 
 to bring back the instances that were running before the node was powered 
 off. We should definitely consider changing the default of these two values 
 since I think the default values are probably not what most people would want.

https://review.openstack.org/50188

-- 
Russell Bryant



Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-07 Thread Vishvananda Ishaya

On Oct 7, 2013, at 3:49 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 This brings up another question, do people actually like/use the
 'local_delete' feature in nova?
 
 In general it seems to free resources that are not actually capable of
 being freed and has been problematic for y! usage.
 
 Deleting from the DB allows another request to actually take those
 resources over, yet the previous VM (+network,volumes...) that wasn't
 deleted still has those resources (likely attached to it in the case of a
 volume, or in the case of a hypervisor the VM resource is still active,
 but maybe nova-compute is down) so u end up in a conflict. How are others
 using this code? Has it been working out?

We haven't had any trouble with the two settings set as below. Users seem
to get far more frustrated when they have things that they cannot delete,
especially when it is using up their precious quota.

Vish

 
 -Josh
 
 On 10/7/13 3:34 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:
 
 There is a configuration option stating what to do with instances that
 are still in the hypervisor but have been deleted from the database. I
 think you want:
 
 running_deleted_instance_action=reap
 
 You probably also want
 
 resume_guests_state_on_host_boot=true
 
 to bring back the instances that were running before the node was powered
 off. We should definitely consider changing the default of these two
 values since I think the default values are probably not what most people
 would want.
 
 Vish
 On Oct 7, 2013, at 1:24 PM, Chris Friesen chris.frie...@windriver.com
 wrote:
 
 On 10/07/2013 12:44 PM, Russell Bryant wrote:
 On 10/07/2013 02:28 PM, Chris Friesen wrote:
 
 I've been doing a lot of instance creation/deletion/evacuate and I've
 noticed that if I
 
 1)create an instance
 2) power off the compute node it was running on
 3) delete the instance
 4) boot up the compute node
 
 then the instance rootfs stays around in /var/lib/nova/instances/.
 Eventually this could add up to significant amounts of space.
 
 
 Is this expected behaviour?  (This is on grizzly, so maybe havana is
 different.)  If not, should I file a bug for it?
 
 I think it would make sense for the compute node to come up, query all
 the instances in /var/lib/nova/instances/, and delete the ones for
 instances that aren't in the database.
 
 How long are you waiting after starting up the compute node?  I would
 expect it to get cleaned up by a periodic task, so you might have to
 wait roughly 10 minutes (by default).
 
 This is nearly 50 minutes after booting up the compute node:
 
 cfriesen@compute2:/var/lib/nova/instances$ ls -1
 39e459b1-3878-41db-aaaf-7c7d0dfa2b19
 41a60975-d6b8-468e-90bc-d7de58c2124d
 46aec2ae-b6de-4503-a238-af736f81f1a4
 50ec3d89-1c9d-4c28-adaf-26c924dfa3ed
 _base
 c6ec71a3-658c-4c7c-aa42-cc26296ce7fb
 c72845e9-0d34-459f-b602-bb2ee409728b
 compute_nodes
 locks
 
 Of these, only two show up in nova list.
 
 Chris
 
 


Re: [openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

2013-10-07 Thread Joshua Harlow
A scenario that I've seen:

Take 'nova-compute' down for software upgrade, API still accessible since
you want to provide API uptime (aka not taking the whole cluster offline).

User Y deletes a VM ('A') on the hypervisor where nova-compute is currently
down; the DB record is locally deleted. At this point VM 'A' is still active,
but nova thinks it's not.

User X requests a VM and gets allocated the 'A' hostname, IP (or other
resource). Uh oh, you now have two of the same IP/hostname in the same
network (the second 'A' is in a damaged state since it likely can't even
'ifup' its network).

Upgrade the software on that hypervisor (yum install xyz...), service
nova-compute restart (back to normal). Now the first 'A' can get deleted,
but you still have the second, broken 'A'.

Now what?

On 10/7/13 4:21 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:


On Oct 7, 2013, at 3:49 PM, Joshua Harlow harlo...@yahoo-inc.com wrote:

 This brings up another question, do people actually like/use the
 'local_delete' feature in nova?
 
 In general it seems to free resources that are not actually capable of
 being freed and has been problematic for y! usage.
 
 Deleting from the DB allows another request to actually take those
 resources over, yet the previous VM (+network,volumes...) that wasn't
 deleted still has those resources (likely attached to it in the case of
a
 volume, or in the case of a hypervisor the VM resource is still active,
 but maybe nova-compute is down) so u end up in a conflict. How are
others
 using this code? Has it been working out?

We haven't had any trouble with the two settings set as below. Users seem
to get far more frustrated when they have things that they cannot delete,
especially when it is using up their precious quota.

Vish

 
 -Josh
 
 On 10/7/13 3:34 PM, Vishvananda Ishaya vishvana...@gmail.com wrote:
 
 There is a configuration option stating what to do with instances that
 are still in the hypervisor but have been deleted from the database. I
 think you want:
 
 running_deleted_instance_action=reap
 
 You probably also want
 
 resume_guests_state_on_host_boot=true
 
 to bring back the instances that were running before the node was
powered
 off. We should definitely consider changing the default of these two
 values since I think the default values are probably not what most
people
 would want.
 
 Vish
 On Oct 7, 2013, at 1:24 PM, Chris Friesen chris.frie...@windriver.com
 wrote:
 
 On 10/07/2013 12:44 PM, Russell Bryant wrote:
 On 10/07/2013 02:28 PM, Chris Friesen wrote:
 
 I've been doing a lot of instance creation/deletion/evacuate and
I've
 noticed that if I
 
 1)create an instance
 2) power off the compute node it was running on
 3) delete the instance
 4) boot up the compute node
 
 then the instance rootfs stays around in /var/lib/nova/instances/.
 Eventually this could add up to significant amounts of space.
 
 
 Is this expected behaviour?  (This is on grizzly, so maybe havana is
 different.)  If not, should I file a bug for it?
 
 I think it would make sense for the compute node to come up, query
all
 the instances in /var/lib/nova/instances/, and delete the ones for
 instances that aren't in the database.
 
 How long are you waiting after starting up the compute node?  I would
 expect it to get cleaned up by a periodic task, so you might have to
 wait roughly 10 minutes (by default).
 
 This is nearly 50 minutes after booting up the compute node:
 
 cfriesen@compute2:/var/lib/nova/instances$ ls -1
 39e459b1-3878-41db-aaaf-7c7d0dfa2b19
 41a60975-d6b8-468e-90bc-d7de58c2124d
 46aec2ae-b6de-4503-a238-af736f81f1a4
 50ec3d89-1c9d-4c28-adaf-26c924dfa3ed
 _base
 c6ec71a3-658c-4c7c-aa42-cc26296ce7fb
 c72845e9-0d34-459f-b602-bb2ee409728b
 compute_nodes
 locks
 
 Of these, only two show up in nova list.
 
 Chris
 
 


[openstack-dev] TC Candidacy

2013-10-07 Thread Gabriel Hurley
I hereby announce my candidacy to continue serving on the Technical Committee.

My current qualifications:

  * Two prior terms actively engaged on the TC.
  * Horizon PTL for the Grizzly and Havana cycles.
  * Core Horizon developer since Essex, and Keystone Core since Folsom.
  * Author of the Core Values of Horizon: 
http://docs.openstack.org/developer/horizon/intro.html
  * Extensive depth and breadth of knowledge of all of the OpenStack projects 
and service APIs.
  * Aided both the OpenStack Translation team and the OpenStack UX Group in 
their initial formations and growth phases.
  * Ongoing advocate for OpenStack to provide a unified and sensible experience 
for end users.
  * Highly involved in discussions around the future of OpenStack.

Here are what I see as the two most important issues facing OpenStack:

1. Presenting a unified experience for OpenStack, across all programs/projects, 
all APIs, the dashboard, and the documentation would do a HUGE amount to 
improve what we offer to our users. The level of frustration that OpenStack's 
users endure on a regular basis due to our fragmentation is a true pain. I see 
it as the job of the Technical Committee to provide leadership across projects 
and to bear in mind the needs of the broader community in ways that individual 
projects may not have insights into.

2. The ongoing question of the scope of OpenStack is of critical importance. 
Recent TC discussion and votes on Savanna, Trove, etc. have shown just how 
unclear people are on where the edges of OpenStack are. This question has 
no easy answers and I want to continue to try and define those boundaries in 
the way that best supports the broader ecosystem around OpenStack.

There are myriad other issues such as the engagement between the Foundation 
Board and the TC, coordination within OpenStack, and more, but I won't go into 
those now.

Hopefully I've proven myself thus far to be a considered and well-reasoned 
member of the TC. It would be my honor to continue doing the good work of 
OpenStack.

Thank you,

- Gabriel Hurley



Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 08:03 PM, Gabriel Hurley wrote:







[openstack-dev] TC Candidacy

2013-10-07 Thread Doug Hellmann
I am announcing my candidacy for a position on the OpenStack Technical
Committee.

I have been programming in Python professionally for 15 years, in a variety
of application areas, and am currently the development lead for DreamHost's
OpenStack-based public cloud project, DreamCompute. I am a member of the
Python Software Foundation, have been on the PyCon Program Committee, and
was Editor in Chief of Python Magazine. In June of 2011, I published The
Python Standard Library by Example.

I started contributing to OpenStack in 2012, just before the Folsom summit.
I am a core reviewer and one of the founding members of the Ceilometer
project, and a core reviewer for the requirements and unified command line
interface projects. I am on the stable release maintenance team for
Grizzly, am part of the team working on the Python 3 transition, and have
contributed to several of the infrastructure projects. I will be the PTL
for the Oslo project starting with the Icehouse release.

These development activities, combined with our deployment work at
DreamHost, have given me a unique cross-project perspective into OpenStack,
and reinforced for me the importance of consistency across our components.
One of the roles of the technical committee is to encourage projects to
find commonalities and adopt consistent approaches or tools to make the
project run more smoothly for all contributors and users. Using consistent
libraries, coding style, and implementation patterns helps integrate new
developers with our community more quickly and encourages existing
developers to participate in more than one project. Using consistent tools
helps our infrastructure team create and maintain the automated systems
that have made OpenStack's impressive development velocity possible.
Consistent configuration tools also help packagers and deployers consume
what we are producing, making adoption easier. Consistent APIs and UIs make
it easier for end users to choose OpenStack clouds, either public or
private, over other options.

In addition to my code contributions, I am especially proud of the work
over the last year that went into bringing Ceilometer through incubation to
become an integrated project. Because we were one of the earliest projects
to go through formal incubation, much of the process was still being
developed as we were navigating it. I learned a lot while contributing to
that discussion. There are still some open questions about how mature a
project must be to enter incubation, and what level of integration is
needed before graduation. I look forward to addressing those questions as
we continue to grow as a community.

I share the view of many of the other candidates that OpenStack should not
limit itself to today's definition of IaaS. The history of computing is a
progression of different levels of abstraction, and what we consider
platform today may become infrastructure tomorrow.

I have found the OpenStack community to be the most welcoming group I have
interacted with in more than 20 years of contributing to open source. I'm
excited to be a part of OpenStack, and look forward to continuing to
contribute in whatever way I am able.

Doug


My commit history:
https://review.openstack.org/#/q/owner:doug.hellmann%2540dreamhost.com,n,z

My review history:
https://review.openstack.org/#/q/reviewer:doug.hellmann%2540dreamhost.com,n,z

My Ohloh account: https://www.ohloh.net/accounts/doughellmann

My blog: http://doughellmann.com/


Re: [openstack-dev] TC Candidacy

2013-10-07 Thread Anita Kuno

Confirmed.

On 10/07/2013 08:24 PM, Doug Hellmann wrote:




Re: [openstack-dev] [savanna] using keystone client

2013-10-07 Thread Dolph Mathews
On Mon, Oct 7, 2013 at 5:57 PM, Jon Maron jma...@hortonworks.com wrote:

 Hi,

   I'm trying to use the keystone client code in savanna/utils/openstack
  but my attempt to use it yields:

   'Api v2.0 endpoint not found in service identity'


This sounds like the service catalog for keystone itself either isn't
configured, or isn't configured properly (with /v2.0/ endpoints). What does
your `keystone service-list` and `keystone endpoint-list` look like?


   A code sample:

 from savanna.utils.openstack import keystone

 . . .
    service_id = next((service.id for service in
                       keystone.client().services.list()
                       if 'quantum' == service.name), None)


I don't really know what the context of this code is, but be aware that it
requires admin access to keystone and is not interacting with the
representation of the catalog that normal users see. It's also not
particularly reliable to identify services by name -- instead, use their
type (type=network for quantum, I believe), since a deployer could name a
network service Quantum or Neutron or neutron or My Awesome
Neutron... but in any case, the type should still be network.
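Dolph's suggestion, sketched in Python (`Service` here is just a stand-in for the objects `keystone.client().services.list()` yields; the helper name is made up):

```python
from collections import namedtuple

# Stand-in for the service objects returned by the keystone client;
# the real objects expose at least id, name, and type attributes.
Service = namedtuple("Service", "id name type")

def service_id_by_type(services, service_type):
    """Pick a service by its type, which is stable across deployments,
    rather than by its deployer-chosen name."""
    return next((s.id for s in services if s.type == service_type), None)
```

So even a deployer-renamed catalog entry is found as long as its type is "network".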



   Thanks for the help!

 -- Jon






-- 

-Dolph


[openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-07 Thread Yathiraj Udupi (yudupi)
Hi,

Based on the discussions we have had in the past few scheduler sub-team 
meetings,  I am sharing a document that proposes an updated Instance Group 
Model and API extension model.
This is a work-in-progress draft version, but sharing it for early feedback.
https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing

This model supports generic instance types, where an instance can represent a 
virtual node of any resource type.  But in the context of Nova, an instance 
refers to the VM instance.

This builds on the existing proposal for Instance Group Extension as documented 
here in this blueprint:  
https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension

Thanks,
Yathi.








Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-07 Thread Mike Spreitzer
Thanks.  I have a few questions.  First, I am a bit stymied by the style 
of API documentation used in that document and many others: it shows the 
first line of an HTTP request but says nothing about all the other 
details.  I am sure some of those requests must have interesting bodies, 
but I am not always sure which ones have a body at all, let alone what 
goes in it.  I suspect there may be some headers that are important too. 
Am I missing something?

That draft says the VMs are created before the group.  Is there a way 
today to create a VM without scheduling it?

As I understand your draft, it lays out a three-phase process for a client 
to follow: create resources without scheduling or activating them, then 
arrange them into groups, then schedule and activate them.  By activate I 
mean, for a VM instance, to start running it.  That ordering must hold 
independently for each resource.  Activations are invoked by the client in 
an order that is consistent with (a) runtime dependencies that are 
mediated directly by the client (e.g., string slinging in the heat engine) 
and (b) the nature of the resources (for example, you can not attach a 
volume to a VM instance until after both have been created).  Other than 
those considerations, the ordering and/or parallelism is a degree of 
freedom available to the client.  Have I got this right?

Couldn't we simplify this into a two-phase process: create groups and 
resources with scheduling, then activate the resources in an acceptable 
order?
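The activation step of such a two-phase flow reduces to a topological sort of the dependency graph. A minimal sketch, with hypothetical resource names (Python 3.9+ for `graphlib`):

```python
from graphlib import TopologicalSorter

def activation_order(depends_on):
    """depends_on maps each resource to the set of resources that must be
    active before it.  Returns one acceptable activation order; resources
    with no mutual dependencies could also be activated in parallel."""
    return list(TopologicalSorter(depends_on).static_order())

# Example: attach a volume to a VM only after both exist and are active.
order = activation_order({"attachment": {"vm", "volume"},
                          "vm": set(), "volume": set()})
```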

FYI: my group is using Weaver as the software orchestration technique, so 
there are no runtime dependencies that are mediated directly by the 
client.  The client sees a very simple API: the client presents a 
definition of all the groups and resources, and the service first 
schedules it all then activates in an acceptable order.  (We already have 
something in OpenStack that can do activations in an acceptable order, 
right?)  Weaver is not the only software orchestration technique with this 
property.  The simplicity of this API is one reason I recommend software 
orchestration techniques that take dependency mediation out of the 
client's hands.  I hope that with coming work on HOT we can get OpenStack 
to this level of API simplicity.  But that struggle lies farther down the 
roadmap...

Thanks,
Mike

Yathiraj Udupi (yudupi) yud...@cisco.com wrote on 10/07/2013 11:10:20 
PM:
 
 Hi, 
 
 Based on the discussions we have had in the past few scheduler sub-team
 meetings, I am sharing a document that proposes an updated 
 Instance Group Model and API extension model. 
 This is a work-in-progress draft version, but sharing it for early 
feedback. 
 https://docs.google.com/document/d/17OIiBoIavih-1y4zzK0oXyI66529f-7JTCVj-BcXURA/edit?usp=sharing 
 
 This model supports generic instance types, where an instance can 
 represent a virtual node of any resource type.  But in the context 
 of Nova, an instance refers to the VM instance. 
 
 This builds on the existing proposal for Instance Group Extension as
 documented here in this blueprint:  https://
 blueprints.launchpad.net/nova/+spec/instance-group-api-extension 
 
 Thanks,
 Yathi. 


Re: [openstack-dev] [scheduler] APIs for Smart Resource Placement - Updated Instance Group Model and API extension model - WIP Draft

2013-10-07 Thread Mike Spreitzer
In addition to the other questions below, I was wondering if you could 
explain why you included all those integer IDs; aren't the UUIDs 
sufficient?

Thanks,
Mike



From:   Mike Spreitzer/Watson/IBM@IBMUS
To: Yathiraj Udupi (yudupi) yud...@cisco.com, 
Cc: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org
Date:   10/08/2013 12:41 AM
Subject:Re: [openstack-dev] [scheduler] APIs for Smart Resource 
Placement - Updated Instance Group Model and API extension model - WIP 
Draft





Re: [openstack-dev] What should be Neutron behavior with scoped token?

2013-10-07 Thread Ravi Chunduru
I raised a bug with my findings
https://bugs.launchpad.net/neutron/+bug/1236704


On Fri, Oct 4, 2013 at 10:16 AM, Ravi Chunduru ravi...@gmail.com wrote:

 Does the described behavior qualify as a bug?

 Thanks,
 -Ravi.


 On Thu, Oct 3, 2013 at 5:21 PM, Ravi Chunduru ravi...@gmail.com wrote:

 Hi,
  In my tests, I observed that when an admin of a tenant runs 'nova list'
 to list all the servers of the tenant, nova-api makes a call to
 quantum to get_ports with a filter set to device owner. This operation
 takes about 1m 30s in our setup (almost 100 VMs, i.e. 100 ports).

 While a user of a tenant runs the same command, the response is immediate.

 Going into details - the only difference between those two operations is
 the 'role'.

 Looking into the code, I have the following questions
 1) A scoped admin token returned all entries of a resource. Any reason it is
 not filtered per tenant?
 Comparing with Nova: it always honored the tenant from the scoped token and
 returned values specific to that tenant.

 2) In the above described test, the DB access should not take much time
 with or without a tenant-id in the filter. Why the change in response time
 between a tenant admin and a member user?
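The behavior Ravi expects -- honoring the tenant in the scoped token unless an admin explicitly asks for everything -- might be sketched like this (illustrative only, not Neutron's actual code; the function and parameter names are made up):

```python
def visible_ports(ports, token_tenant_id, is_admin, all_tenants=False):
    """Filter a port list down to the token's tenant unless an admin has
    explicitly requested all tenants.  Each port is a dict with at least
    a 'tenant_id' key."""
    if is_admin and all_tenants:
        return list(ports)
    return [p for p in ports if p["tenant_id"] == token_tenant_id]
```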

 Thanks,
 -Ravi.







 --
 Ravi




-- 
Ravi