[openstack-dev] Error during puppet run

2013-09-19 Thread Peeyush Gupta
Hi all,

I have been trying to install OpenStack using packstack on
Fedora 19. Now, when I run packstack --allinone, I get the 
following error:

 ERROR : Error during puppet run : Error: mysqladmin -u root  password 
'b6ca73d0f2ce' returned 1 instead of one of [0]

Can anyone help me figure out why I am facing this issue?
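Puppet's Exec resource only reports the exit status ("returned 1 instead of one of [0]"), not the failing command's stderr, so re-running the command by hand usually reveals the real error. A rough sketch of that check (the CONFIG_MYSQL_PW answer-file key is an assumption and may differ between packstack versions):

```python
# Sketch: re-run the command Puppet reported as failing, to capture the
# stderr that the Puppet log omits. CONFIG_MYSQL_PW below is an assumed
# packstack answer-file key; check your generated answer file.
import shutil
import subprocess

def run_and_report(cmd):
    """Run a command the way Puppet's Exec does; return (exit status, stderr)."""
    if shutil.which(cmd[0]) is None:
        return None, "%s not installed" % cmd[0]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stderr.strip()

# 'b6ca73d0f2ce' is the generated password quoted in the error above.
status, detail = run_and_report(
    ["mysqladmin", "-u", "root", "password", "b6ca73d0f2ce"])
print(status, detail)
```

An exit status of 1 here most often means the MySQL root account already has a different password (for example, from an earlier packstack run), in which case supplying the existing password to packstack (e.g. via CONFIG_MYSQL_PW in the answer file) gets past this step.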

Thanks,
~Peeyush Gupta
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Tomas Sedovic

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO 
developers face to face and discuss the visions and goals of our projects.


Tuskar's ultimate goal is to have a full OpenStack management 
solution: letting the cloud operators try OpenStack, install it, keep it 
running throughout the entire lifecycle (including bringing in new 
hardware, burning it in, decommissioning it), help to scale it, secure the 
setup, monitor for failures, project the need for growth and so on.


And to provide a good user interface and API to let the operators 
control and script this easily.


Now, the scope of the OpenStack Deployment program (TripleO) includes 
not just installation, but the entire lifecycle management (from racking 
it up to decommissioning). Among the other things they're considering are 
issue tracker integration and inventory management, but these could 
potentially be split into a separate program.


That means we do have a lot of goals in common and we've just been going 
at them from different angles: TripleO building the fundamental 
infrastructure while Tuskar focuses more on the end user experience.


We've come to the conclusion that it would be a great opportunity for both 
teams to join forces and build this thing together.


The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus's Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make 
it into the *I* one)


TripleO would get a UI and more developers trying it out and helping 
with setup and integration.


This shouldn't even need to derail us much from the rough roadmap we 
planned to follow in the upcoming months:


1. get things stable and robust enough to demo in Hong Kong on real hardware
2. include metrics and monitoring
3. security

What do you think?

Tomas



[openstack-dev] [Ceilometer] Meeting agenda for Thu Sep 19th at 1500 UTC

2013-09-19 Thread Julien Danjou
The Ceilometer project team holds a meeting in #openstack-meeting, see
https://wiki.openstack.org/wiki/Meetings/MeteringAgenda for more details.

Next meeting is on Thu Sep 19th at 1500 UTC 

Please add your name with the agenda item, so we know who to call on during
the meeting.
* Review Havana RC1 milestone
  * https://launchpad.net/ceilometer/+milestone/havana-rc1
* Release python-ceilometerclient? 
* Open discussion

If you are not able to attend or have additional topic(s) you would like
to add, please update the agenda on the wiki.

Cheers,
-- 
Julien Danjou
// Free Software hacker / independent consultant
// http://julien.danjou.info




Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread mar...@redhat.com
On 19/09/13 11:08, Tomas Sedovic wrote:
 [...]
 What do you think?
 

this is fantastic news!

marios

 Tomas
 




Re: [openstack-dev] [Tuskar] AI How does tuskar fit in with TripleO

2013-09-19 Thread mar...@redhat.com
On 18/09/13 19:44, Robert Collins wrote:
 On 18 September 2013 20:59, mar...@redhat.com <mandr...@redhat.com> wrote:
 I have an AI from the tuskar community meeting to come up with a
 description of how TripleO 'differs from' Tuskar. I have no idea where
 this will be used/placed and in fact I don't know where to send it:
 should we paste it into the naming etherpad, open a launchpad docs
 blueprint (seems a bit much, especially as I don't know which doc it's
 going into). Alternatively please feel free to change and use as you see
 fit wherever:


 

  How does tuskar fit in with TripleO?


 TripleO [1] is a blanket term for a number of subprojects - but the
 
 Huh? TripleO is the OpenStack Deployment project codename: we're a
 program focused on production deployment of OpenStack at scale. The
 fact we have a number of specific projects to facilitate that is just
 good engineering, exactly the same as nova having the server API and
 client in different projects.

indeed ^^^ and this is how I intended it - the same way that 'nova' is a
collection of related but distinct services (compute, scheduler, api,
message bus/broker, etc). tbh that's the first time I've seen
'OpenStack Deployment project', so my apologies for not using that.
 
 What you've written below is correct, but it's implementation detail :)
 
 
 Tuskar [2] is actually a perfect fit for TripleO and entirely depends on
 the TripleO concept and services to do all of the heavy lifting.
 Actually, Tuskar may in part be defined as a *design* tool. With Tuskar,
 you get a UI and API with which you can tell the undercloud
 nova-baremetal service exactly which OpenStack services (i.e. baremetal
 images) to deploy onto which machines in the datacenter. The UI
 integrates into the default OpenStack Horizon dashboard and allows you
 to define your datacenter in terms of Racks (groups of physical machines
 registered by id/mac_address) and ResourceClasses (groups of Racks that
 all provide the same Overcloud service 'compute' vs 'block_storage').


 In the simplest terms, Tuskar translates your definition into the
 undercloud machine Heat template, allowing you to then provision your
 datacenter at the push of a button. Beyond this planning/design, Tuskar
 also monitors the datacenter, allowing operators to make most efficient
 use of capacity. Ultimately, Tuskar aims to allow you to plan, define,
 deploy and monitor your datacenter in an accessible, scalable,
 repeatable, highly available and secure way.
 
 
 FWIW I see keeping the deployed OpenStack up to date, performing well,
 scaling it up and down, replacing hardware etc as all part of the
 production deployment problem : we'd be delighted to have those
 facilities be part of the TripleO program - but we have to walk before
 we run :).
 
I just read Tomas' e-mail - this is great news :)

marios
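The Rack/ResourceClass-to-Heat translation described above can be made concrete with a toy sketch (class names and the emitted resource shape are invented for illustration; this is not Tuskar's actual data model or output):

```python
# Toy model of the translation: Racks group machines by MAC address,
# ResourceClasses group Racks by overcloud service, and the output is a
# Heat-style resources mapping with one entry per registered machine.
# All names here are invented for illustration.
class Rack:
    def __init__(self, rack_id, mac_addresses):
        self.rack_id = rack_id
        self.mac_addresses = mac_addresses

class ResourceClass:
    """Group of Racks that all provide the same overcloud service."""
    def __init__(self, service, racks):
        self.service = service          # e.g. 'compute' or 'block_storage'
        self.racks = racks

def to_heat_resources(resource_classes):
    resources = {}
    for rc in resource_classes:
        for rack in rc.racks:
            for i, mac in enumerate(rack.mac_addresses):
                name = "%s_%s_%d" % (rc.service, rack.rack_id, i)
                resources[name] = {
                    "Type": "OS::Nova::Server",   # deployed via nova-baremetal
                    "Properties": {"image": rc.service, "mac": mac},
                }
    return resources

compute = ResourceClass("compute", [Rack("r1", ["aa:bb", "aa:cc"])])
print(sorted(to_heat_resources([compute])))
# ['compute_r1_0', 'compute_r1_1']
```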


 Cheers,
 Rob
 
 




Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Jiří Stránský

On 19.9.2013 10:08, Tomas Sedovic wrote:

[...]

What do you think?


I think this is a good idea, given that we heavily depend on TripleO 
already.


J.



Tomas







Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Petr Blaho
On Thu, Sep 19, 2013 at 10:08:28AM +0200, Tomas Sedovic wrote:
 [...]
 
 What do you think?

+1

 
 Tomas
 

-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Imre Farkas

On 09/19/2013 10:08 AM, Tomas Sedovic wrote:

[...]

What do you think?


That is an excellent idea!

Does it mean, from a practical point of view, that the Tuskar code will 
be merged into the TripleO repos and the project will be deleted from 
StackForge and Launchpad?


Imre



Tomas






[openstack-dev] [Horizon] Ceilometer Alarm management page

2013-09-19 Thread Ladislav Smola

Hello everyone,

I am in the process of implementing the Ceilometer Alarm API. Here are the 
blueprints:


https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-api
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page


While I am waiting for some Ceilometer patches to complete, I would like 
to start a discussion about the bp. 
https://blueprints.launchpad.net/horizon/+spec/ceilometer-alarm-management-page:


1. Points 1-4 are some sort of simple version of the page that 
uses all the basic alarm-api features. Do you think we need them all? Any 
feedback on them? Enhancements?


2. There is a thought that we should maybe divide alarms into two groups 
(system, user-defined). The only system alarms now are those set up by 
Heat and used for auto-scaling.


3. There is a thought about watching the correlation of multiple alarm 
histories in one chart (either the alarm histories, or the real statistics 
the alarms are defined by). Do you think it will be needed? Any real-life 
examples you have in mind?


4. There is a thought about tagging alarms with a user-defined tag, so 
users can easily group alarms together and then watch them together based 
on their tag.


5. There is a thought about generating default alarms that could 
observe the most important things (verifying good behaviour, surfacing bad 
behaviour). Does anybody have an idea which alarms would be the most 
important and usable for everybody?


6. There is a thought about making the overview pages customizable by 
users, so they can observe exactly what they need (this includes 
Ceilometer statistics and alarms).


Could you please give me some feedback on the points above (or anything 
else related)? Once we have collected what we need, I would push this to 
the UX folks so they can prepare some wireframes of how it could look, and 
we can start discussing the UX.
E.g. even the alarm management from point 1 could be pretty challenging, as 
we have to come up with a sane UI for defining the general statistics 
query that defines an alarm.


Thank you very much for any feedback.

--Ladislav




Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Ladislav Smola

On 09/19/2013 10:08 AM, Tomas Sedovic wrote:

[...]

What do you think?

Tomas



That certainly makes a lot of sense to me.
+1

-- Ladislav





Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-19 Thread Mike Spreitzer
I'd like to try to summarize this discussion, if nothing else than to see 
whether I have correctly understood it.  There is a lot of consensus, but 
I haven't heard from Adrian Otto since he wrote some objections.  I'll 
focus on trying to describe the consensus; Adrian's concerns are already 
collected in a single message.  Or maybe this is already written in some 
one place?

The consensus is that there should be an autoscaling (AS) service that is 
accessible via its own API.  This autoscaling service can scale anything 
describable by a snippet of Heat template (it's not clear to me exactly 
what sort of syntax this is; is it written up anywhere?).  The autoscaling 
service is stimulated into action by a webhook call.  The user has the 
freedom to arrange calls on that webhook in any way she wants.  It is 
anticipated that a common case will be alarms raised by Ceilometer.  For 
more specialized or complicated logic, the user is free to wire up 
anything she wants to call the webhook.

An instance of the autoscaling service maintains an integer variable, 
which is the current number of copies of the thing being autoscaled.  Does 
the webhook call provide a new number, or +1/-1 signal, or ...?

There was some discussion of a way to indicate which individuals to 
remove, in the case of decreasing the multiplier.  I suppose that would be 
an option in the webhook, and one that will not be exercised by Ceilometer 
alarms.

(It seems to me that there is not much auto in this autoscaling service 
--- it is really a scaling service driven by an external controller.  This 
is not a criticism, I think this is a good factoring --- but maybe not the 
best naming.)

The autoscaling service does its job by multiplying the heat template 
snippet (the thing to be autoscaled) by the current number of copies and 
passing this derived template to Heat to make it so.  As the desired 
number of copies changes, the AS service changes the derived template that 
it hands to Heat.  Most commentators argue that the consistency and 
non-redundancy of making the AS service use Heat outweigh the extra 
path-length compared to a more direct solution.
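That counter-plus-multiplication mechanism reads, in miniature, something like the sketch below (my reading of the consensus, not an actual implementation; whether the webhook carries a delta or an absolute number is one of the open questions above, so a delta is assumed here):

```python
# Miniature sketch of the proposed autoscaling service: one integer (the
# current copy count) plus a derived template produced by multiplying the
# Heat snippet. Assumes the webhook carries a +/- delta, which is still an
# open question in the thread.
class AutoScaleGroup:
    def __init__(self, snippet, initial=1):
        self.snippet = snippet          # Heat template fragment for one copy
        self.count = initial            # current number of copies

    def webhook(self, delta):
        """Handle a scaling signal and return the new derived template."""
        self.count = max(0, self.count + delta)
        return self.derived_template()

    def derived_template(self):
        # Multiply the snippet: one named copy per current count; this is
        # what would be handed to Heat to "make it so".
        return {"Resources": {
            "copy_%d" % i: self.snippet for i in range(self.count)}}

group = AutoScaleGroup({"Type": "OS::Nova::Server"}, initial=2)
tmpl = group.webhook(+1)                # scale up by one
print(sorted(tmpl["Resources"]))
# ['copy_0', 'copy_1', 'copy_2']
```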

Heat will have a resource type, analogous to 
AWS::AutoScaling::AutoScalingGroup, through which the template author can 
request usage of the AS service.

OpenStack in general, and Heat in particular, need to be much better at 
traceability and debuggability; the AS service should be good at these 
too.

Have I got this right?

Thanks,
Mike


[openstack-dev] [Openstack-dev] [Ceilometer] Adding Alarms gives error

2013-09-19 Thread Somanchi Trinath-B39208
Hi Stackers!

I have been trying to understand Ceilometer.

I get the following error when I create an alarm.

From the client I issue the command :

root@openstack:~# ceilometer alarm-create --name alm1 --description test 
--project-id 28296b3cb50f403c80e3c47a025398cc --user-id 
d7f7bb7a07504c29bd6cd2151fb7f2e3 --period 10 --state ok --meter-name m22 
--statistic min

From the Ceilometer API side, I get this error log,

2013-09-19 17:51:12.250 8385 ERROR wsme.api [-] Server-side error: coercing to 
Unicode: need string or buffer, Message found. Detail:
Traceback (most recent call last):

  File "/usr/local/lib/python2.7/dist-packages/wsmeext/pecan.py", line 72, in callfunction
    result = f(self, *args, **kwargs)

  File "/usr/local/lib/python2.7/dist-packages/ceilometer/api/controllers/v2.py", line 1144, in post
    raise wsme.exc.ClientSideError(error)

  File "/usr/local/lib/python2.7/dist-packages/wsme/exc.py", line 9, in __init__
    super(ClientSideError, self).__init__(self.faultstring)

  File "/usr/local/lib/python2.7/dist-packages/wsme/exc.py", line 18, in faultstring
    return six.u(self.msg)

  File "/usr/lib/python2.7/dist-packages/six.py", line 262, in u
    return unicode(s, "unicode_escape")

TypeError: coercing to Unicode: need string or buffer, Message found
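The last two frames point at the failure mode: on Python 2, six.u(s) calls unicode(s, "unicode_escape"), which requires a byte string, but the handler passed it a Message object, so the server is crashing while trying to format an error rather than on the request itself. A stand-alone reproduction of that failure mode (a sketch; Message here is a stand-in for the real wsme/ceilometer object):

```python
# Stand-alone reproduction of the TypeError in the traceback above.
# 'Message' is a stand-in for the real wsme/ceilometer message object, and
# faultstring() mimics the check that unicode(s, "unicode_escape") enforces.
class Message:
    def __init__(self, text):
        self.text = text

def faultstring(msg):
    if not isinstance(msg, str):
        raise TypeError(
            "coercing to Unicode: need string or buffer, %s found"
            % type(msg).__name__)
    return msg

try:
    faultstring(Message("alarm validation failed"))   # object, not a string
except TypeError as e:
    print(e)
# coercing to Unicode: need string or buffer, Message found

# Coercing the message to a plain string first avoids the crash:
print(faultstring(str("alarm validation failed")))
```

If that is what is happening, the alarm-create input may well have tripped a legitimate validation error whose text is then lost in this formatting bug.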

Is there anything wrong with the input I'm sending to the API?

Kindly help me resolve this issue.

Thanking you..


--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048



[openstack-dev] [Climate] Use Taskflow for leases start/end ?

2013-09-19 Thread Sylvain Bauza

Hi Climate team,

I just went through https://wiki.openstack.org/wiki/TaskFlow
Do you think Taskflow could help us in providing ressources defined in 
the lease on an atomic way ?


We could leave the implementation up to the plugins, but maybe the Lease 
Manager could also benefit of it.
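The property TaskFlow advertises can be sketched without the library (hand-rolled here; TaskFlow's real API is different): run a lease's provisioning steps as one unit, reverting the completed steps if a later one fails.

```python
# Hand-rolled sketch of atomic provisioning with rollback (illustrates the
# idea only; TaskFlow's actual API uses Task/Flow objects and engines).
def run_atomically(tasks):
    """tasks: list of (apply, revert) callables; revert completed steps on failure."""
    done = []
    try:
        for apply, revert in tasks:
            apply()
            done.append(revert)
    except Exception:
        for revert in reversed(done):   # roll back in reverse order
            revert()
        raise

log = []
def fail():
    raise RuntimeError("no quota")

tasks = [
    (lambda: log.append("reserve host"), lambda: log.append("release host")),
    (fail, lambda: None),
]
try:
    run_atomically(tasks)
except RuntimeError:
    pass
print(log)
# ['reserve host', 'release host']
```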


-Sylvain


Re: [openstack-dev] What's Up Doc? Sep 18 2013

2013-09-19 Thread Mike Asthalter
Just a couple of notes about #5 (linking to WADL files in separate repositories):


  1.  This feature is supported in version 1.10.0 of the clouddocs maven 
plugin, which has now been reverted back to 1.9.3 due to request/response 
issues. You should be able to test using 1.9.4-SNAPSHOT until version 1.10.0 is 
available again.
  2.  You must use a plain URL to access the WADL. See the following example.

For example, to get the plain version for the following URL:

http://git.openstack.org/cgit/openstack/api-site/tree/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

Browse to the above location in Firefox, and then click "plain" 
(http://git.openstack.org/cgit/openstack/api-site/plain/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl) 
on the "blob:" line, just above where the WADL text appears.

This will display the plain version of the URL:

http://git.openstack.org/cgit/openstack/api-site/plain/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl

Then use the plain version of the URL to access the wadl resources in the API 
Devguide:


<wadl:resources
    href="http://git.openstack.org/cgit/openstack/api-site/plain/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl"
    xmlns:wadl="http://wadl.dev.java.net/2009/02"/>

Mike

From: Anne Gentle <a...@openstack.org>
Reply-To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Date: Wednesday, September 18, 2013 8:36 PM
To: openstack-d...@lists.openstack.org, OpenStack Development Mailing List 
<openstack-dev@lists.openstack.org>
Cc: Glen Campbell <glen.campb...@rackspace.com>
Subject: [openstack-dev] What's Up Doc? Sep 18 2013


Less than a month until the Oct. 17th timed release! We're in better shape than 
in past releases but the install guide consolidation is crucial and there are 
many doc bugs. It's crunch time!

1. In review and merged this past week:

Congrats to Michael Still for his first doc patches to openstack-manuals! He 
got his new skillz at boot camp. Thanks for the patches!

I see good doc bug fixes going in, and Tom put in an update to the 
auto-generated tables.

We had a lot of cleanup to do for the flattening of directories but I think it 
has mostly settled now. The last step is to move the Networking Admin Guide's 
content into the correct locations (install or config or admin). Nermina's on 
it, assigned to bug  https://bugs.launchpad.net/openstack-manuals/+bug/1223542.

Andreas has a good Etherpad going to talk about continuous publishing: 
https://etherpad.openstack.org/Continous_Publishing. It's possible we'll need 
to release the Cloud Admin Guide as well due to many references to 
configuration files. Please join in the discussion, as we have less than a 
month until release!

2. High priority doc work:

The install guide remains a high priority as do doc bugs. Shaun is working on a 
patch this week to consolidate the basic-install to create one single-sourced 
install-guide (which will document for yum, apt, and zypper). There were over 
300 doc bugs in openstack-manuals earlier this week as some docimpact bugs got 
logged, but we are making decent progress.

We also need to have someone re-run the autodoc tables and try to match up with 
some of the storage patches in this week: 
https://review.openstack.org/#/c/47044/ and 
https://review.openstack.org/#/c/47118/.

3. Doc work going on that I know of:

Kersten Richter is working on the Compute API docs again and getting an 
environment set up.

4. New incoming doc requests:

At the Doc Boot Camp, Joe Gordon asked me to write up a checklist of sorts for 
what info should go along with a DocImpact flag. I wrote up my ideas in a blog 
post and would love more input before putting it on the wiki.

http://justwriteclick.com/2013/09/17/openstack-docimpact-flag-walk-through/

5. Doc tools updates:

We had a 1.10.0 release today but had to revert back to 1.9.3 due to the 
request/responses not appearing in the API reference page. David Cramer's aware 
of the problem and will work through it.

We also had a breakthrough where we can now link to WADL files in separate 
repositories so we have a single source of truth. Thanks to Mike Asthalter for 
testing! He reported that you can use this syntax in the xml doc where you want 
to refer to the WADL:

<wadl:resources
    href="http://git.openstack.org/cgit/openstack/api-site/plain/api-ref/src/wadls/orchestration-api/src/v1/orchestration-api.wadl"
    xmlns:wadl="http://wadl.dev.java.net/2009/02"/>

6. Other doc news:

We had a great time at Docs Boot Camp. I wrote up a blog entry and posted some 
pictures:

http://justwriteclick.com/2013/09/13/openstack-docs-boot-camp-wrap-up/


Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Lucas Alvares Gomes
 We've come to a conclusion that it would be a great opportunity for both
 teams to join forces and build this thing together.

+1



Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Adam Young

On 09/19/2013 05:19 AM, Imre Farkas wrote:

On 09/19/2013 10:08 AM, Tomas Sedovic wrote:

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO
developers face to face and discuss the visions and goals of our 
projects.


Tuskar's ultimate goal is to have a full OpenStack management
solution: letting the cloud operators try OpenStack, install it, keep it
running throughout the entire lifecycle (including bringing in new
hardware, burning it in, decommissioning), help to scale it, secure the
setup, monitor for failures, project the need for growth and so on.

And to provide a good user interface and API to let the operators
control and script this easily.

Now, the scope of the OpenStack Deployment program (TripleO) includes
not just installation, but the entire lifecycle management (from racking
it up to decommissioning). Among other things they're thinking of are
issue tracker integration and inventory management, but these could
potentially be split into a separate program.

That means we do have a lot of goals in common and we've just been going
at them from different angles: TripleO building the fundamental
infrastructure while Tuskar focusing more on the end user experience.

We've come to a conclusion that it would be a great opportunity for both
teams to join forces and build this thing together.

The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make
it into the *I* one)

TripleO would get a UI and more developers trying it out and helping
with setup and integration.

This shouldn't even need to derail us much from the rough roadmap we
planned to follow in the upcoming months:

1. get things stable and robust enough to demo in Hong Kong on real
hardware
2. include metrics and monitoring
3. security

What do you think?


That is an excellent idea!

Does it mean from the practical point of view that the Tuskar code 
will be merged into the TripleO repos and the project will be deleted 
from StackForge and Launchpad?


I would recommend against that, and instead have the unified team merge, 
but maintain both repos.  Think of how Keystone manages both 
python-keystoneclient and keystone server.


And let me be the first to suggest that the unified team be called 
Tuskarooo!




Imre



Tomas



Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-19 Thread Christopher Armstrong
Hi Michael! Thanks for this summary. There were some minor
inaccuracies, but I appreciate you at least trying when I should have
summarized it earlier. I'll give some feedback inline.

First, though, I have recently worked a lot on the wiki page for the
blueprint. It's available here:

https://wiki.openstack.org/wiki/Heat/AutoScaling

It still might need a little bit more cleaning up and probably a more
holistic example, but it should be pretty close now. I will say that I
changed it to specify the Heat resources for using autoscale instead
of the AS API's operations, mostly for convenience, because they're
easily specifiable. The AS API should be derived pretty obviously from
the resources.

On Thu, Sep 19, 2013 at 6:35 AM, Mike Spreitzer mspre...@us.ibm.com wrote:
 I'd like to try to summarize this discussion, if nothing else than to see
 whether I have correctly understood it.  There is a lot of consensus, but I
 haven't heard from Adrian Otto since he wrote some objections.  I'll focus
 on trying to describe the consensus; Adrian's concerns are already collected
 in a single message.  Or maybe this is already written in some one place?

Yeah. Sorry I didn't link that wiki page earlier; it was in a pretty
raw and chaotic form.

 The consensus is that there should be an autoscaling (AS) service that is
 accessible via its own API.  This autoscaling service can scale anything
 describable by a snippet of Heat template (it's not clear to me exactly what
 sort of syntax this is; is it written up anywhere?).

Yes. See the wiki page above; it's basically just a mapping exactly
like the Resources section in a typical Heat template. e.g.

{..., Resources: {mywebserver: {Type: OS::Nova::Server}, ...}}

 The autoscaling
 service is stimulated into action by a webhook call.  The user has the
 freedom to arrange calls on that webhook in any way she wants.  It is
 anticipated that a common case will be alarms raised by Ceilometer.  For
 more specialized or complicated logic, the user is free to wire up anything
 she wants to call the webhook.

This is accurate.

 An instance of the autoscaling service maintains an integer variable, which
 is the current number of copies of the thing being autoscaled.  Does the
 webhook call provide a new number, or +1/-1 signal, or ...?

The webhook provides no parameters. The amount of change is encoded
into the policy that the webhook is associated with. Policies can
change it the same way they can in current AWS-based autoscaling: +/-
fixed number, or +/- percent, or setting it to a specific number
directly.
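The policy arithmetic described here is simple enough to sketch. The function and adjustment-type names below are illustrative only, not the final AS API:

```python
import math

def apply_policy(current, adjustment_type, amount, min_size=0, max_size=None):
    """Return the new group size after a scaling policy fires.

    Mirrors the three AWS-style options mentioned above: +/- a fixed
    number, +/- a percentage, or setting the size directly.
    """
    if adjustment_type == "change_in_capacity":
        new = current + amount
    elif adjustment_type == "percent_change_in_capacity":
        # Round the magnitude up so e.g. +10% of 5 servers still adds one.
        delta = int(math.ceil(abs(current * amount / 100.0)))
        new = current + (delta if amount >= 0 else -delta)
    elif adjustment_type == "exact_capacity":
        new = amount
    else:
        raise ValueError("unknown adjustment type: %r" % adjustment_type)
    # Clamp to the group's configured bounds.
    new = max(new, min_size)
    if max_size is not None:
        new = min(new, max_size)
    return new
```

This is also why the webhook itself carries no parameters: it merely selects a policy, and the policy supplies the adjustment type and amount.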


 There was some discussion of a way to indicate which individuals to remove,
 in the case of decreasing the multiplier.  I suppose that would be an option
 in the webhook, and one that will not be exercised by Ceilometer alarms.

I don't think the webhook is the right place to do that. That should
probably be a specific thing in the AS API.

 (It seems to me that there is not much auto in this autoscaling service
 --- it is really a scaling service driven by an external controller.  This
 is not a criticism, I think this is a good factoring --- but maybe not the
 best naming.)

I think the policies are what qualify it for the auto term. You can
have webhook policies or schedule-based policies (and maybe more
policies in the future). The policies determine how to change the
group.

 The autoscaling service does its job by multiplying the heat template
 snippet (the thing to be autoscaled) by the current number of copies and
 passing this derived template to Heat to make it so.  As the desired
 number of copies changes, the AS service changes the derived template that
 it hands to Heat.  Most commentators argue that the consistency and
 non-redundancy of making the AS service use Heat outweigh the extra
 path-length compared to a more direct solution.

Agreed.

 Heat will have a resource type, analogous to
 AWS::AutoScaling::AutoScalingGroup, through which the template author can
 request usage of the AS service.

Yes.

 OpenStack in general, and Heat in particular, need to be much better at
 traceability and debuggability; the AS service should be good at these too.

Agreed.

 Have I got this right?

Pretty much! Thanks for the summary :-)

-- 
IRC: radix
Christopher Armstrong
Rackspace



Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Tomas Sedovic

On 09/19/2013 04:00 PM, Adam Young wrote:

On 09/19/2013 05:19 AM, Imre Farkas wrote:

On 09/19/2013 10:08 AM, Tomas Sedovic wrote:

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO
developers face to face and discuss the visions and goals of our
projects.

Tuskar's ultimate goal is to have a full OpenStack management
solution: letting the cloud operators try OpenStack, install it, keep it
running throughout the entire lifecycle (including bringing in new
hardware, burning it in, decommissioning), help to scale it, secure the
setup, monitor for failures, project the need for growth and so on.

And to provide a good user interface and API to let the operators
control and script this easily.

Now, the scope of the OpenStack Deployment program (TripleO) includes
not just installation, but the entire lifecycle management (from racking
it up to decommissioning). Among other things they're thinking of are
issue tracker integration and inventory management, but these could
potentially be split into a separate program.

That means we do have a lot of goals in common and we've just been going
at them from different angles: TripleO building the fundamental
infrastructure while Tuskar focusing more on the end user experience.

We've come to a conclusion that it would be a great opportunity for both
teams to join forces and build this thing together.

The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make
it into the *I* one)

TripleO would get a UI and more developers trying it out and helping
with setup and integration.

This shouldn't even need to derail us much from the rough roadmap we
planned to follow in the upcoming months:

1. get things stable and robust enough to demo in Hong Kong on real
hardware
2. include metrics and monitoring
3. security

What do you think?


That is an excellent idea!

Does it mean from the practical point of view that the Tuskar code
will be merged into the TripleO repos and the project will be deleted
from StackForge and Launchpad?


I would recommend against that, and instead have the unified team merge,
but maintain both repos.  Think of how Keystone manages both
python-keystoneclient and keystone server.

And let me be the first to suggest that the unified team be called
Tuskarooo!


My understanding is: we'd mostly keep our repos (tuskar, tuskar-ui, 
python-tuskarclient) for now but probably move them from stackforge to 
openstack (since that's where all the TripleO repos live).


The Tuskar code would probably be integrated a bit later than the 
current TripleO stuff (API in I) and we'll need to meet some integration 
requirements, but I believe that eventually tuskar-ui would merge with 
Horizon just like all the other UIs do. (provided that ends up making sense)


There is some code we currently have in Tuskar that will make more sense 
to move to another project, but that's a detail.


And yeah, Tuskarooo's great but I wouldn't say no to Tripletusk 
(Triceratops!) or say Gablerstaplerfahrer.






Imre



Tomas



Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Robert Collins
+1 :).

On 19 September 2013 20:08, Tomas Sedovic tsedo...@redhat.com wrote:
 Hi everyone,

 Some of us Tuskar developers have had the chance to meet the TripleO
 developers face to face and discuss the visions and goals of our projects.

 Tuskar's ultimate goal is to have a full OpenStack management solution:
 letting the cloud operators try OpenStack, install it, keep it running
 throughout the entire lifecycle (including bringing in new hardware, burning
 it in, decommissioning), help to scale it, secure the setup, monitor for
 failures, project the need for growth and so on.

 And to provide a good user interface and API to let the operators control
 and script this easily.

 Now, the scope of the OpenStack Deployment program (TripleO) includes not
 just installation, but the entire lifecycle management (from racking it up
 to decommissioning). Among other things they're thinking of are issue
 tracker integration and inventory management, but these could potentially be
 split into a separate program.

Indeed. To offer a little nuance here - if you could just install the
base OpenStack once on some hardware, it would be pretty useless ;).
We have to look holistically at a deployment as something that takes
place over some years, and may scale up and down. There are lots of
things that can be added onto the very core of that problem - such as
in-depth historical performance analysis - that don't change how you
deliver the core solution - and there are other things - such as the
Tuskar resource class abstraction which do change how the core
solution is implemented/delivered.

So there is a big problem space here, and we may find down the track
that there need to be more programs focusing on higher-order problems
- like full-blown CMDBs with fault correlation over time series -
but those things are big and complex enough to take on a life of their
own : we should integrate with them but not build them :)

 That means we do have a lot of goals in common and we've just been going at
 them from different angles: TripleO building the fundamental infrastructure
 while Tuskar focusing more on the end user experience.

 We've come to a conclusion that it would be a great opportunity for both
 teams to join forces and build this thing together.

 The benefits for Tuskar would be huge:

 * being a part of an incubated project
 * more eyeballs (see Linus' Law (the ESR one))
 * better information flow between the current Tuskar and TripleO teams
 * better chance at attracting early users and feedback
 * chance to integrate earlier into an OpenStack release (we could make it
 into the *I* one)

We'll want to talk with Thierry about the process for existing
Programs adding new API server projects; that hasn't been done before
- I suspect the normal checklist will apply. E.g. stable API etc.

Cheers!
-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] Error during puppet run

2013-09-19 Thread Craig E. Ward
The mysqladmin command returned an error status. It could be that your mysql 
root password is different from the one used by packstack, which passes it down 
to puppet. You could run the command in an interactive shell and see what error 
message gets produced.
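A minimal sketch of that diagnosis, reusing the generated password from the error message (substitute your own values; OLD_PASSWORD is a placeholder):

```shell
# Re-run the exact command puppet ran and capture the error it prints.
mysqladmin -u root password 'b6ca73d0f2ce'
echo "exit status: $?"

# If root already has a password, the bare command fails with access denied;
# supply the existing password instead:
mysqladmin -u root -p'OLD_PASSWORD' password 'b6ca73d0f2ce'

# Confirm you can actually log in with whatever password is currently set:
mysql -u root -p -e 'SELECT 1;'
```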


If the system already had mysql installed, the root account password likely was 
already set. If you don't need that installation, uninstall mysql, remove its 
directories, and let packstack install it again. The command that failed 
probably was issued because packstack/puppet assumed a fresh install and tried 
to set the root password to something other than null. (A good thing to do with 
a fresh install of mysql.)
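If you go the wipe-and-reinstall route, one possible sequence on Fedora 19 (where the mysql packages are actually MariaDB; package and service names are assumptions to check against your system). This destroys all databases, so only do it if nothing under /var/lib/mysql matters:

```shell
# Stop whichever service name is in use and remove the packages.
sudo systemctl stop mariadb mysqld 2>/dev/null || true
sudo yum remove -y mariadb-server mariadb mysql-server mysql

# Wipe the data directory so the next install starts with no root password set.
sudo rm -rf /var/lib/mysql

# Let packstack install and configure mysql from scratch.
packstack --allinone
```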


If you need any of the databases already in mysql, then you'll need to find a 
way to tell packstack to not have puppet do the install. I've only performed 
some simple experimental installations using packstack; I don't know how to 
tell it not to install mysql.


Craig


On 9/18/13 11:44 PM, Peeyush Gupta wrote:

Hi all,

I have been trying to install openstack using packstack on
Fedora 19. Now, when I run packstack --allinone, I get the
following error:

  ERROR : Error during puppet run : Error: mysqladmin -u root  password 
'b6ca73d0f2ce' returned 1 instead of one of [0]

Can anyone help me figure out why I am facing this issue?

Thanks,
~Peeyush Gupta






--
Craig E. Ward
University of Southern California
Information Sciences Institute
cw...@isi.edu



Re: [openstack-dev] unable to run tox due to the '--pre' argument

2013-09-19 Thread Jeremy Stanley
On 2013-09-19 09:24:08 +0800 (+0800), Yongsheng Gong wrote:
[...]
 I upgrade the pip into 1.4.1:
 
 gongysh@gongysh-ThinkPad-T530:/opt/stack/python-neutronclient$
 .tox/py27/bin/pip install -U pip
 gongysh@gongysh-ThinkPad-T530:/opt/stack/python-neutronclient$
 .tox/py27/bin/pip  --version
 pip 1.4.1 from /mnt/data/opt/stack/python-neutronclient/.tox/py27
 /lib/python2.7/site-packages (python 2.7)
[...]
 Then I run tox -e py27 and it failed:
[...]
 I check the pip version in .tox:
 gongysh@gongysh-ThinkPad-T530:/mnt/data/opt/stack/
 python-neutronclient$  .tox/py27/bin/pip --version
 pip 1.3.1 from /mnt/data/opt/stack/python-neutronclient/.tox/py27
 /lib/python2.7/site-packages/pip-1.3.1-py2.7.egg (python 2.7)
 
 It is changed back!!!

I've tried to reproduce this and it doesn't seem to happen for me.
Using tox 1.6.1 to run 'tox -e py27' in a current checkout of
python-neutronclient's master branch automatically installs pip
1.4.1 in the virtualenv it creates. Can you try this on another
machine/vm just for confirmation? Clark also suggested on IRC just
now that maybe you have some global tox configuration specifying to
always recreate the virtualenv (essentially -r) and that it's
populating it with your system-installed version or perhaps an older
cached download.
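One concrete thing to try, on the theory that a stale cached virtualenv is being reused: force tox to recreate the environment and then check which pip it contains:

```shell
# -r throws away .tox/py27 and rebuilds it from scratch.
tox -r -e py27

# The recreated environment should now report the newer pip.
.tox/py27/bin/pip --version
```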
-- 
Jeremy Stanley



Re: [openstack-dev] [Tuskar] Results of voting for Glossary (1st round)

2013-09-19 Thread Tzu-Mainn Chen
Hey all,

To assist with the naming and revoting, Matt and I put together a glossary and 
a diagram with our understanding of the terms:

https://wiki.openstack.org/wiki/Tuskar/Glossary
http://ma.ttwagner.com/tuskar-diagram-draft/

Thanks,
Tzu-Mainn Chen

- Original Message -
 Hey buddies,
 
 1st round of voting has happened during the weekly meetings, you can see the
 log here:
 http://eavesdrop.openstack.org/meetings/tuskar/2013/tuskar.2013-09-17-19.00.html
 
 There are few options which needs to revote, so I updated the etherpad with
 suggested names: https://etherpad.openstack.org/tuskar-naming
 
 Please think through that, throw another suggestions so that next week we can
 close the naming topic completely.
 
 Thanks a lot for participation
 -- Jarda
 


Re: [openstack-dev] [Heat] How the autoscale API should control scaling in Heat

2013-09-19 Thread Mike Spreitzer
radix, thanks.  How exactly does the cooldown work?

Thanks,
Mike


Re: [openstack-dev] Error during puppet run

2013-09-19 Thread Peeyush Gupta
Hi Craig,

Thanks for your answer. I removed everything and it eventually (after resolving
several errors!) worked. But I am stuck at one point. Now the error says:

ERROR : Error during puppet run : Error: Could not start Service[httpd]: 
Execution of '/sbin/service httpd start' returned 1:


Any idea about this?
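The exit status alone doesn't say why httpd failed; some first diagnostic steps on Fedora 19 (log paths are the usual Fedora ones — adjust if yours differ):

```shell
# Start it directly and read the unit status and journal for the real error.
sudo systemctl start httpd
sudo systemctl status httpd
sudo journalctl -u httpd --no-pager | tail -n 50

# Apache's own error log often names the culprit (bad vhost config, SELinux,
# or a port conflict).
sudo tail -n 50 /var/log/httpd/error_log

# Check whether something else is already bound to port 80.
sudo ss -tlnp | grep ':80 '
```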

Thanks,
 
~Peeyush Gupta



 From: Craig E. Ward cw...@isi.edu
To: Peeyush Gupta gpeey...@ymail.com; OpenStack Development Mailing List 
openstack-dev@lists.openstack.org 
Sent: Thursday, 19 September 2013 9:18 PM
Subject: Re: [openstack-dev] Error during puppet run
 

The mysqladmin command returned an error status. It could be that your mysql 
root password is different from the one used by packstack, which passes it down 
to puppet. You could run the command in an interactive shell and see what error 
message gets produced.

If the system already had mysql installed, the root account password likely was 
already set. If you don't need that installation, uninstall mysql, remove its 
directories, and let packstack install it again. The command that failed 
probably was issued because packstack/puppet assumed a fresh install and tried 
to set the root password to something other than null. (A good thing to do with 
a fresh install of mysql.)

If you need any of the databases already in mysql, then you'll need to find a 
way to tell packstack to not have puppet do the install. I've only performed 
some simple experimental installations using packstack; I don't know how to 
tell it not to install mysql.

Craig


On 9/18/13 11:44 PM, Peeyush Gupta wrote:
 Hi all,
 
 I have been trying to install openstack using packstack on
 Fedora 19. Now, when I run packstack --allinone, I get the
 following error:
 
   ERROR : Error during puppet run : Error: mysqladmin -u root  password 
'b6ca73d0f2ce' returned 1 instead of one of [0]
 
 Can anyone help me figure out why I am facing this issue?
 
 Thanks,
 ~Peeyush Gupta
 
 
 

--
Craig E. Ward
University of Southern California
Information Sciences Institute
cw...@isi.edu


[openstack-dev] [Savanna] Weekly team meeting minutes Sep 19, 2013

2013-09-19 Thread Alexander Ignatov
Thanks everyone who have joined Savanna meeting.

Here are the logs from the meeting:
Minutes:
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-19-18.05.html
Minutes (text):
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-19-18.05.txt
Log:
http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-09-19-18.05.log.html

--
Regards,
Alexander Ignatov


[openstack-dev] Custom Kernel Parameters passed to VM

2013-09-19 Thread Alvise Dorigo
Hello,
I'd like to know if this is supposed to work:

https://wiki.openstack.org/wiki/LibvirtCustomKernelArgs

I've tried it, passing a custom root partition (root=/dev/vda2), but cat-ing 
the file /proc/cmdline in the instantiated virtual machine (Debian 7.1, and SL 
6.4) I still see root=/dev/vda.

Is there a bug or something? Is there a workaround to get the custom kernel args 
correctly passed to the vmlinuz AKI image?
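For what it's worth, the usual mechanism behind that wiki page is an image property that the libvirt driver appends to the guest's kernel command line. A hedged sketch — the property name `os_command_line` and the image id are assumptions to verify against your setup:

```shell
# Attach a custom kernel command line to the kernel (AKI) image.
glance image-update --property os_command_line='root=/dev/vda2' <image-id>

# Boot a new instance from the image, then inside the guest:
cat /proc/cmdline   # check whether root=/dev/vda2 now appears
```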

thanks,

Alvise


[openstack-dev] [Nova][vmware] VMwareAPI sub-team reviews update 2013-09-19

2013-09-19 Thread Shawn Hartsock
Greetings stackers!

A quick mid-week update on the patches we're tracking for Havana-rc1. There was 
a bug in my vote counting code that I use to query votes. Some of the older 
patches were getting their votes counted wrong. Tracking the age of a 
submitted patchset (number of days since a patchset was posted) and the 
revision number helps spot these problems. I try to validate these reports by 
hand, but I do miss things on occasion. Let me know if I need to add or edit 
something.

Ordered by priority:
* High/Critical https://bugs.launchpad.net/bugs/1223709 
https://review.openstack.org/46027 readiness:ready for core
* High/Critical https://bugs.launchpad.net/bugs/1216510 
https://review.openstack.org/43616 readiness:needs one more +2/approval
* High/Critical https://bugs.launchpad.net/bugs/1226211 
https://review.openstack.org/46789 readiness:ready for core
* High/Critical https://bugs.launchpad.net/bugs/1217541 
https://review.openstack.org/43621 readiness:needs review
* High/High https://bugs.launchpad.net/bugs/1187853 
https://review.openstack.org/45349 readiness:ready for core
* Medium/High https://bugs.launchpad.net/bugs/1190515 
https://review.openstack.org/33100 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1184807 
https://review.openstack.org/40298 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1214850 
https://review.openstack.org/43270 readiness:needs review
* High https://bugs.launchpad.net/bugs/1226052 
https://review.openstack.org/46730 readiness:needs review
* High https://bugs.launchpad.net/bugs/1226826 
https://review.openstack.org/47030 readiness:needs review
* High https://bugs.launchpad.net/bugs/1225002 
https://review.openstack.org/41977 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1194018 
https://review.openstack.org/43641 readiness:ready for core
* High https://bugs.launchpad.net/bugs/1171226 
https://review.openstack.org/43994 readiness:ready for core
* Medium https://bugs.launchpad.net/bugs/1183654 
https://review.openstack.org/45203 readiness:needs revision
* Medium https://bugs.launchpad.net/bugs/1223074 
https://review.openstack.org/45864 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1199954 
https://review.openstack.org/46231 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1222349 
https://review.openstack.org/45570 readiness:needs one more +2/approval
* Medium https://bugs.launchpad.net/bugs/1216961 
https://review.openstack.org/43721 readiness:needs one more +2/approval
* Medium https://bugs.launchpad.net/bugs/1215352 
https://review.openstack.org/43268 readiness:needs one more +2/approval
* Medium https://bugs.launchpad.net/bugs/1197041 
https://review.openstack.org/43621 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1222948 
https://review.openstack.org/46400 readiness:needs revision
* Medium https://bugs.launchpad.net/bugs/1226238 
https://review.openstack.org/46824 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1224479 
https://review.openstack.org/46277 readiness:ready for core
* Medium https://bugs.launchpad.net/bugs/1207064 
https://review.openstack.org/42024 readiness:needs revision
* Medium https://bugs.launchpad.net/bugs/1180044 
https://review.openstack.org/43270 readiness:needs review
* Medium https://bugs.launchpad.net/bugs/1226425 
https://review.openstack.org/46895 readiness:needs revision
* Low https://bugs.launchpad.net/bugs/1215958 
https://review.openstack.org/43665 readiness:needs review
* Low https://bugs.launchpad.net/bugs/1226450 
https://review.openstack.org/46896 readiness:ready for core


Ordered by fitness for review:

== needs one more +2/approval ==
* Medium https://bugs.launchpad.net/bugs/1222349 review: 
https://review.openstack.org/45570
title: 'VMware: datastore_regex is not honoured'
votes: +2:1, +1:5, -1:0, -2:0  age: 11 days revision: 4
* Medium https://bugs.launchpad.net/bugs/1216961 review: 
https://review.openstack.org/43721
title: 'VMware: exceptions for RetrievePropertiesEx incorrectly handled'
votes: +2:1, +1:5, -1:0, -2:0  age: 1 days revision: 2
* Medium https://bugs.launchpad.net/bugs/1215352 review: 
https://review.openstack.org/43268
title: 'VMware: unable to access VNC console if password is not 
configured'
votes: +2:1, +1:3, -1:0, -2:0  age: 1 days revision: 12
* High/Critical https://bugs.launchpad.net/bugs/1216510 review: 
https://review.openstack.org/43616
title: 'VMware: exception when accessing invalid nodename'
votes: +2:1, +1:5, -1:0, -2:0  age: 3 days revision: 8

== ready for core ==
* High/Critical https://bugs.launchpad.net/bugs/1223709 review: 
https://review.openstack.org/46027
title: 'VMware: boot from volume exception'
votes: +2:0, +1:5, -1:0, -2:0  age: 1 days revision: 3
* High https://bugs.launchpad.net/bugs/1184807 review: 

Re: [openstack-dev] Client and Policy

2013-09-19 Thread Dolph Mathews
On Thu, Sep 19, 2013 at 2:59 PM, Adam Young ayo...@redhat.com wrote:

  I can submit a summit proposal.  I was thinking of making it more
 general than just the Policy piece.  Here is my proposed session.  Let me
 know if it rings true:


 Title: Extracting Shared Libraries from incubator

 Some of the security-sensitive code in OpenStack is copied into various
 projects from Oslo-Incubator.  If there is a CVE identified in one of these
 pieces, there is no rapid way to update them short of syncing code to all
 projects.  This meeting is to identify the pieces of Oslo-incubator that
 should be extracted into stand alone libraries.


I believe the goal of oslo-incubator IS to spin out common code into
standalone libraries in the long run, as appropriate.



 Some of the code would be best reviewed by members of other projects:
 Network specific code by Neutron, Policy by Keystone, and so forth.  As
 part of the discussion, we will identify a code review process that gets
 the right reviewers for those subprojects.


It sounds like the real goal is how do we get relevant/interested
reviewers in front of oslo reviews without overloading them with noise?
I'm sure that's a topic that Mark already has an opinion on, so I've opened
this thread to openstack-dev.




 On 09/19/2013 12:22 PM, Brant L Knudson wrote:

 What's different between policy and anything else in oslo-incubator? If
 a CVE is found in the oslo-incubator code base, we have no clean way of
 deploying a fix.

 I'm concerned about anything in oslo-incubator that we wind up pulling
 into Keystone, which is quite a bit of code. I tried adding oslo-incubator
 to my gerrit watch list but then my list got too long, and I spend enough
 time just doing Keystone reviews.

 The oslo-incubator projects do have maintainers:
 https://github.com/openstack/oslo-incubator/blob/master/MAINTAINERS

 Maybe we could have an extra field in MAINTAINERS with other groups to
 require review of (like keystone-core for policy). Or change the maintainer
 for policy to keystone-core.

 Brant Knudson, OpenStack Development - Keystone core member
 Phone:   507-253-8621 T/L:553-8621



 From: Adam Young ayo...@redhat.com
 To: Dolph Mathews dolph.math...@gmail.com, Yee, Guang guang@hp.com, 
 Henry Nash hen...@linux.vnet.ibm.com, Morgan Fainberg m...@metacloud.com, 
 Brant L Knudson/Rochester/IBM@IBMUS, Morgan Fainberg m...@metacloud.com,
 Cc: Jamie Lennox jamielen...@gmail.com
 Date: 09/19/2013 10:33 AM
 Subject: Client and Policy
  --



 Policy is part of Oslo.  But it is copied into the various projects.
  If a CVE is found in the policy code base, we have no clean way of
 deploying a fix.

 There are a couple of alternatives:

 1.  Spin it off into a separate gitrepo and build it as a standalone
 package:  python-oslopolicy  or something.
 2.  Merge it in with an existing project.  The obvious one would be
 python-keystoneclient.

 While the second is the obvious solution, the problem is that it crosses
 project team boundaries.  Policy was originally part of Keystone, but
 was deemed a common component and moved to Oslo.

 Both teams have a claim to gatekeeping this code.  Fortunately, it does
 not have to be either/or.

 Potential solution #1:  suggest that the policy code move to the
 keystone client, and invite a subset of the Oslo core devs over to
 Keystone as core devs.

 The  people that have commits to policy.py in Oslo are:

 Author: Andrew Bogott abog...@wikimedia.org
 Author: Ann Kamyshnikova akamyshnik...@mirantis.com
 Author: Chuck Short chuck.sh...@canonical.com
 Author: Dina Belova dbel...@mirantis.com
 Author: Eric Windisch e...@cloudscaling.com
 Author: Flaper Fesp flape...@gmail.com
 Author: guohliu guoh...@cn.ibm.com
 Author: Jenkins jenk...@review.openstack.org
 Author: Kevin L. Mitchell kevin.mitch...@rackspace.com
 Author: Mark McClain mark.mccl...@dreamhost.com
 Author: Mark McLoughlin mar...@redhat.com
 Author: Monty Taylor mord...@inaugust.com
 Author: Sergey Lukjanov slukja...@mirantis.com
 Author: Victor Sergeyev vserge...@mirantis.com
 Author: Vishvananda Ishaya vishvana...@gmail.com
 Author: Zhongyue Luo zhongyue@intel.com

 

Re: [openstack-dev] [nova] [pci passthrough] how to fill instance_type_extra_specs for a pci passthrough?

2013-09-19 Thread David Kang
 From: Yunhong Jiang yunhong.ji...@intel.com
 To: OpenStack Development Mailing List openstack-dev@lists.openstack.org
 Sent: Saturday, September 14, 2013 1:37:43 AM
 Subject: Re: [openstack-dev] [nova] [pci passthrough] how to fill 
 instance_type_extra_specs for a pci passthrough?
 I created a wiki page at
 https://wiki.openstack.org/wiki/Pci_passthrough , and I think Irena
 has updated it also.
 
 Thanks

 Could you give me more information or an example of how to use extra_info in
pci_passthrough_whitelist?
I tried to put some information in the extra_info field, but it causes
nova-compute to crash.
I hacked the pci_white.py file to get past it, but extra_info is not stored in the DB.
I want to use it to store the path to the device file that
corresponds to the PCI device.
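For reference, a minimal sketch of the whitelist syntax being discussed; the vendor/product IDs are illustrative, and the extra_info entry is hypothetical (per this message it was not yet supported and crashed nova-compute):

```ini
# nova.conf (illustrative IDs)
pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"1520"}]

# Hypothetical shape of the extra_info David describes (not supported at
# the time of this thread):
# pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"1520",
#   "extra_info":{"device_path":"/dev/example0"}}]
```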

 Thanks,
 David


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Medium Availability VMs

2013-09-19 Thread Tim Bell
 

Discussing with various people in the community, there seems to be interest in 
a way to

 

- Identify when a hypervisor is being drained or is down, and inventory its VMs
- Find the best practise way of restarting those VMs on hypervisors still available
  o Live migration
  o Cold migration
- Defining policies for the remaining cases
  o Restart from base image
  o Suspend
  o Delete
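One way the per-VM choice among the remaining-case policies could be expressed is a small dispatch on instance metadata. This is purely a hypothetical sketch with invented tag names; nothing here is an existing Nova API:

```python
# Hypothetical illustration: pick an action for each VM left on a dead or
# draining hypervisor, based on a policy tag attached to the VM. The tag
# name and the policy values are invented for this sketch.
POLICIES = ('restart-from-base', 'suspend', 'delete')

def action_for(vm_tags, default='suspend'):
    """Return the recovery policy for a VM from its metadata tags."""
    policy = vm_tags.get('recovery-policy', default)
    if policy not in POLICIES:
        raise ValueError('unknown recovery policy: %s' % policy)
    return policy

print(action_for({'recovery-policy': 'delete'}))  # explicit tag wins
print(action_for({}))                             # falls back to the default
```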

 

This touches multiple components from Nova/Cinder/Quantum (at minimum).

 

It also touches some cloud architecture questions if OpenStack can start to 
move into the low hanging fruit parts of service
consolidation.

 

I'd like to have some form of summit discussion in Hong Kong around these 
topics but it is not clear where it fits.

 

Are there others who feel similarly ? How can we fit it in ?

 

Tim

 





Re: [openstack-dev] Client and Policy

2013-09-19 Thread Adam Young

On 09/19/2013 04:30 PM, Mark McLoughlin wrote:

On Thu, 2013-09-19 at 15:22 -0500, Dolph Mathews wrote:

On Thu, Sep 19, 2013 at 2:59 PM, Adam Young ayo...@redhat.com wrote:
 I can submit a summit proposal.  I was thinking of making it
 more general than just the Policy piece.  Here is my proposed
 session.  Let me know if it rings true:
 
 
 Title: Extracting Shared Libraries from incubator
 
 Some of the security-sensitive code in OpenStack is copied into
 various projects from Oslo-Incubator.  If there is a CVE
 identified in one of these pieces, there is no rapid way to
 update them short of syncing code to all projects.  This
 meeting is to identify the pieces of Oslo-incubator that
 should be extracted into standalone libraries.
 



I believe the goal of oslo-incubator IS to spin out common code into
standalone libraries in the long run, as appropriate.

Indeed.

https://wiki.openstack.org/wiki/Oslo

   Mission Statement:

 To produce a set of python libraries containing code shared by
 OpenStack projects

https://wiki.openstack.org/wiki/Oslo#Incubation

   Incubation shouldn't be seen as a long term option for any API - it
   is merely a stepping stone to inclusion into a published Oslo
   library.
Thanks for the link.  In Keystone, we've identified policy.py 
specifically as a candidate.





 Some of the code would be best reviewed by members of other
 projects:  Network specific code by Neutron, Policy by
 Keystone, and so forth.  As part of the discussion, we will
 identify a code review process that gets the right reviewers
 for those subprojects.


It sounds like the real goal is: how do we get relevant/interested
reviewers in front of oslo reviews without overloading them with
noise? I'm sure that's a topic that Mark already has an opinion on,
so I've opened this thread to openstack-dev.

To take the specific example of the policy API, anyone who actively
wants to help move it into a standalone library should
volunteer to help Flavio out as a maintainer:

   https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

   == policy ==

   M: Flavio Percoco fla...@redhat.com
   S: Maintained
   F: policy.py


Would it make sense to explicitly add Keystone developers, or can we 
include the launchpad keystone-core group for this module?
If we want to keep it per user, I'm willing to do so, and I think we 
have a couple of other likely candidates from Keystone: I'll let them 
speak up for themselves.


Should we submit the names as review requests against the MAINTAINERS 
file in that repo?






Another aspect is how someone would go about helping do reviews on a
specific API in oslo-incubator. That's a common need - e.g. for
maintainers of virt drivers in Nova - and AIUI, these folks just
subscribe to all gerrit notifications for the module and then use mail
filters to make sure they see changes to the files they're interested
in.

Thanks,
Mark.







Re: [openstack-dev] [requirements] Review request for adding ordereddict

2013-09-19 Thread Mark Washenberger
I respect the desires of packagers to have a stable environment, but I'm
also very sad about having to copy the OrderedDict code directly into
Glance. Can we actually verify that this is a problem for packagers? (I.e.
not already in their repos?)

It also may be possible that packagers who do not support python2.6 could
completely avoid this problem if we change how the code is written. Does it
seem possible to only depend on ordereddict if collections.OrderedDict does
not exist?
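The conditional dependency Mark suggests has a standard spelling; a minimal sketch:

```python
# Prefer the stdlib OrderedDict (Python >= 2.7) and fall back to the
# third-party "ordereddict" package only on Python 2.6.
try:
    from collections import OrderedDict
except ImportError:
    # Python 2.6: provided by the external "ordereddict" package
    from ordereddict import OrderedDict

d = OrderedDict()
d['glance'] = 1
d['nova'] = 2
d['keystone'] = 3
print(list(d.keys()))  # insertion order is preserved
```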


On Mon, Sep 16, 2013 at 11:27 AM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Mon, Sep 16, 2013 at 11:34 AM, Paul Bourke pauldbou...@gmail.com wrote:

 Hi all,

 I've submitted https://review.openstack.org/#/c/46474/ to add
 ordereddict to openstack/requirements.


 Related thread:
 http://lists.openstack.org/pipermail/openstack-dev/2013-September/015121.html


 The reasoning behind the change is that we want ConfigParser to store
 sections in the order they're read, which is the default behavior in
 py2.7[1], but it must be specified in py2.6.
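The ordering that must be requested explicitly on py2.6 can be sketched as follows (shown here with the Python 3 `configparser` spelling, where file order is the default anyway; on Python 2.6 passing `dict_type=OrderedDict` is what makes section order deterministic):

```python
from collections import OrderedDict

try:
    from configparser import RawConfigParser   # Python 3
except ImportError:
    from ConfigParser import RawConfigParser   # Python 2

# dict_type controls how sections/options are stored internally.
parser = RawConfigParser(dict_type=OrderedDict)
# (Python 2 would use readfp() instead of read_string().)
parser.read_string(u"[first]\na = 1\n\n[second]\nb = 2\n\n[third]\nc = 3\n")
print(parser.sections())  # sections come back in file order
```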

 The following two Glance features depend on this:

 https://review.openstack.org/#/c/46268/
 https://review.openstack.org/#/c/46283/

 Can someone take a look at this change?

 Thanks,
 -Paul

 [1]
 http://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser





 --

 -Dolph





Re: [openstack-dev] Client and Policy

2013-09-19 Thread Mark McLoughlin
On Thu, 2013-09-19 at 15:22 -0500, Dolph Mathews wrote:
 
 On Thu, Sep 19, 2013 at 2:59 PM, Adam Young ayo...@redhat.com wrote:
 I can submit a summit proposal.  I was thinking of making it
 more general than just the Policy piece.  Here is my proposed
 session.  Let me know if it rings true:
 
 
 Title: Extracting Shared Libraries from incubator
 
 Some of the security-sensitive code in OpenStack is copied into
 various projects from Oslo-Incubator.  If there is a CVE
 identified in one of these pieces, there is no rapid way to
 update them short of syncing code to all projects.  This
 meeting is to identify the pieces of Oslo-incubator that
 should be extracted into standalone libraries.
 
 
 
 I believe the goal of oslo-incubator IS to spin out common code into
 standalone libraries in the long run, as appropriate.

Indeed.

https://wiki.openstack.org/wiki/Oslo

  Mission Statement:

To produce a set of python libraries containing code shared by 
OpenStack projects

https://wiki.openstack.org/wiki/Oslo#Incubation

  Incubation shouldn't be seen as a long term option for any API - it 
  is merely a stepping stone to inclusion into a published Oslo
  library. 

 Some of the code would be best reviewed by members of other
 projects:  Network specific code by Neutron, Policy by
 Keystone, and so forth.  As part of the discussion, we will
 identify a code review process that gets the right reviewers
 for those subprojects.
 
 
 It sounds like the real goal is: how do we get relevant/interested
 reviewers in front of oslo reviews without overloading them with
 noise? I'm sure that's a topic that Mark already has an opinion on,
 so I've opened this thread to openstack-dev.

To take the specific example of the policy API, anyone who actively
wants to help move it into a standalone library should
volunteer to help Flavio out as a maintainer:

  https://git.openstack.org/cgit/openstack/oslo-incubator/tree/MAINTAINERS

  == policy ==

  M: Flavio Percoco fla...@redhat.com
  S: Maintained
  F: policy.py


Another aspect is how someone would go about helping do reviews on a
specific API in oslo-incubator. That's a common need - e.g. for
maintainers of virt drivers in Nova - and AIUI, these folks just
subscribe to all gerrit notifications for the module and then use mail
filters to make sure they see changes to the files they're interested
in.

Thanks,
Mark.





Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Jaromir Coufal


On 2013/19/09 04:08, Tomas Sedovic wrote:

Hi everyone,

Some of us Tuskar developers have had the chance to meet the TripleO 
developers face to face and discuss the visions and goals of our 
projects.


Tuskar's ultimate goal is to have a full OpenStack management 
solution: letting the cloud operators try OpenStack, install it, keep 
it running throughout the entire lifecycle (including bringing in new 
hardware, burning it in, decommissioning), help to scale it, secure 
the setup, monitor for failures, project the need for growth and so on.


And to provide a good user interface and API to let the operators 
control and script this easily.


Now, the scope of the OpenStack Deployment program (TripleO) includes 
not just installation, but the entire lifecycle management (from 
racking it up to decommissioning). Among other things they're thinking 
of are issue tracker integration and inventory management, but these 
could potentially be split into a separate program.


That means we do have a lot of goals in common and we've just been 
going at them from different angles: TripleO building the fundamental 
infrastructure while Tuskar focusing more on the end user experience.


We've come to a conclusion that it would be a great opportunity for 
both teams to join forces and build this thing together.


The benefits for Tuskar would be huge:

* being a part of an incubated project
* more eyeballs (see Linus' Law (the ESR one))
* better information flow between the current Tuskar and TripleO teams
* better chance at attracting early users and feedback
* chance to integrate earlier into an OpenStack release (we could make 
it into the *I* one)


TripleO would get a UI and more developers trying it out and helping 
with setup and integration.


This shouldn't even need to derail us much from the rough roadmap we 
planned to follow in the upcoming months:


1. get things stable and robust enough to demo in Hong Kong on real 
hardware

2. include metrics and monitoring
3. security

What do you think?

Sounds like a good idea to me.

-- Jarda


Tomas



Re: [openstack-dev] [Tuskar] [TripleO] The vision and looking ahead

2013-09-19 Thread Tzu-Mainn Chen
 Hi everyone,
 
 Some of us Tuskar developers have had the chance to meet the TripleO
 developers face to face and discuss the visions and goals of our projects.
 
 Tuskar's ultimate goal is to have a full OpenStack management
 solution: letting the cloud operators try OpenStack, install it, keep it
 running throughout the entire lifecycle (including bringing in new
 hardware, burning it in, decommissioning), help to scale it, secure the
 setup, monitor for failures, project the need for growth and so on.
 
 And to provide a good user interface and API to let the operators
 control and script this easily.
 
 Now, the scope of the OpenStack Deployment program (TripleO) includes
 not just installation, but the entire lifecycle management (from racking
 it up to decommissioning). Among other things they're thinking of are
 issue tracker integration and inventory management, but these could
 potentially be split into a separate program.
 
 That means we do have a lot of goals in common and we've just been going
 at them from different angles: TripleO building the fundamental
 infrastructure while Tuskar focusing more on the end user experience.
 
 We've come to a conclusion that it would be a great opportunity for both
 teams to join forces and build this thing together.
 
 The benefits for Tuskar would be huge:
 
 * being a part of an incubated project
 * more eyeballs (see Linus' Law (the ESR one))
 * better information flow between the current Tuskar and TripleO teams
 * better chance at attracting early users and feedback
 * chance to integrate earlier into an OpenStack release (we could make
 it into the *I* one)
 
 TripleO would get a UI and more developers trying it out and helping
 with setup and integration.
 
 This shouldn't even need to derail us much from the rough roadmap we
 planned to follow in the upcoming months:
 
 1. get things stable and robust enough to demo in Hong Kong on real hardware
 2. include metrics and monitoring
 3. security
 
 What do you think?
 
 Tomas

I think this is great.  I would like to understand the organization of the 
teams and the code,
but I assume that is forthcoming?

Mainn



Re: [openstack-dev] Medium Availability VMs

2013-09-19 Thread Mike Spreitzer
 From: Tim Bell tim.b...@cern.ch
 ...
 Discussing with various people in the community, there seems to be 
 interest in a way to
 
 -  Identify when a hypervisor is being drained or is down 
 and inventory its VMs
 -  Find the best practise way of restarting that VM for 
 hypervisors still available
 o   Live migration
 o   Cold migration
 -  Defining policies for the remaining cases
 o   Restart from base image
 o   Suspend
 o   Delete
 
 This touches multiple components from Nova/Cinder/Quantum (at minimum).
 
 It also touches some cloud architecture questions if OpenStack can 
 start to move into the low hanging fruit parts of service consolidation.
 
 I’d like to have some form of summit discussion in Hong Kong around 
 these topics but it is not clear where it fits.
 
 Are there others who feel similarly ? How can we fit it in ?

When there are multiple viable choices, I think direction should be taken 
from higher layers.  The operation of draining a hypervisor can be 
parameterized, and the VMs themselves can be tagged, with an indication of 
which action to take.

I myself am working primarily on holistic infrastructure scheduling, which 
includes quiescing and draining hypervisors among the things it can do. 
Holistic scheduling works under the direction of a 
template/pattern/topology that describes a set of interacting resources 
and their relationships, and so is able to make a good decision about 
where VMs should move to.

Re-starting a VM can require software coordination.

I think holistic infrastructure scheduling is logically downstream from 
software coordination and upstream from infrastructure orchestration.  I 
think the ambitions for Heat are expanding to include the latter two, and 
so must also have something to do with holistic infrastructure scheduling.

Regards,
Mike


[openstack-dev] configapplier licensing

2013-09-19 Thread Thomas Goirand
Hi,

While trying to package diskimage-builder for Debian, I saw that in some
files it's written that "this file is released under the same license as
configapplier". However, I haven't been able to find the license of
configapplier anywhere.

So, under which license is configapplier released? I need this
information to populate the debian/copyright file before uploading to
Sid (to pass the NEW queue).

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] TC Meeting / Savanna Incubation Follow-Up

2013-09-19 Thread Michael Basnight
 On Sep 18, 2013, at 4:53 PM, Sergey Lukjanov slukja...@mirantis.com wrote:
 
 Hi folks,
 
 I have few comments on Hadoop cluster provisioning in Savanna.
 
 Now Savanna provisions instances, installs a management console (like Apache 
 Ambari) on one of them, and communicates with it using the REST API of the 
 installed console to prepare and run all requested services on all instances. 
 So, the only provisioning that we're doing in Savanna is instance and volume 
 creation and their initial configuration, like /etc/hosts generation for all 
 instances. Most of these operations, or even all of them, should eventually 
 be replaced by Heat integration during the potential incubation in the 
 Icehouse cycle; after that we'll concentrate on EDP (Elastic Data 
 Processing) operations.
 
 I was surprised how much time was spent on the clustering discussion at the 
 last TC meeting, and how few other questions there were. So, I think it'll be 
 better to separate the clustering discussion, which is a long-term activity 
 with plans to be discussed during the design summit, from the Savanna 
 incubation request, which should be finally discussed at the next TC meeting. 
 Of course, I think it's right for Savanna to participate in clustering 
 discussions. From our perspective, clustering should be implemented as 
 additional functionality in underlying services like Nova, Cinder and Heat, 
 and in libraries like Oslo and Taskflow, to help projects like Savanna, 
 Trove, etc. provision resources for clusters, scale them and terminate them. 
 So, our role is to collaborate on the implementation of such features. One 
 more interesting idea is clustering API standardization; it sounds 
 interesting, but such APIs could be very different; compare, for example, our 
 current working API [0] and Trove's draft Cluster API [1].

Draft APIs are subject to change :) Until we put the code in place I would be 
OK modifying the API. +1 to working together at the summit to bring the API 
differences together. We have a Trove clustering session and I'd LOVE to have 
Savanna folks at it. Let's unify ideas!

 
 I also would like to ensure that Savanna team is 100% behind the idea of 
 doing full integration with all applicable OpenStack projects during 
 incubation.
 
 Thanks.
 
 [0] 
 https://savanna.readthedocs.org/en/latest/userdoc/rest_api_v1.0.html#node-group-templates
 [1] https://wiki.openstack.org/wiki/Trove-Replication-And-Clustering-API
 
 Sincerely yours,
 Sergey Lukjanov
 Savanna Technical Lead
 Mirantis Inc.
 
 On Sep 13, 2013, at 22:35, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Michael Basnight's message of 2013-09-13 08:26:07 -0700:
 On Sep 13, 2013, at 6:56 AM, Alexander Kuznetsov wrote:
 On Thu, Sep 12, 2013 at 7:30 PM, Michael Basnight mbasni...@gmail.com 
 wrote:
 On Sep 12, 2013, at 2:39 AM, Thierry Carrez wrote:
 
 Sergey Lukjanov wrote:
 
 [...]
 As you can see, resource provisioning is just one of the features, and 
 the implementation details are not critical for the overall architecture. It 
 performs only the first step of the cluster setup. We’ve been considering 
 Heat for a while, but ended up with direct API calls in favor of speed and 
 simplicity. Going forward, Heat integration will be done by implementing the 
 extension mechanism [3] and [4] as part of the Icehouse release.
 
 The next part, Hadoop cluster configuration, is already extensible, and we 
 have several plugins: Vanilla and Hortonworks Data Platform, with a Cloudera 
 plugin started too. This allows unifying the management of different Hadoop 
 distributions under a single control plane. The plugins are responsible for 
 correct Hadoop ecosystem configuration on already provisioned resources and 
 use different Hadoop management tools like Ambari to set up and configure 
 all cluster services, so there are no actual provisioning configs on the 
 Savanna side in this case. Savanna and its plugins encapsulate the knowledge 
 of Hadoop internals and the default configuration for Hadoop services.
 
 My main gripe with Savanna is that it combines (in its upcoming release)
 what sounds like to me two very different services: Hadoop cluster
 provisioning service (like what Trove does for databases) and a
 MapReduce+ data API service (like what Marconi does for queues).
 
 My main gripe with Savanna is that it combines (in its upcoming release) what 
 sounds like, to me, two very different services: a Hadoop cluster 
 provisioning service (like what Trove does for databases) and a MapReduce+ 
 data API service (like what Marconi does for queues).
 
 Could you explain the benefit of having them within the same service,
 rather than two services with one consuming the other ?
 
 And for the record, I don't think that Trove is the perfect fit for it 
 today. We are still working on a clustering API. But when we create it, I 
 would love the Savanna team's input, so we can try to make a pluggable API 
 that's usable for people who want MySQL or Cassandra or even Hadoop. I'm 
 less a fan 

Re: [openstack-dev] Medium Availability VMs

2013-09-19 Thread Tim Bell
 

Mike,

 

Is this something that will be added into OpenStack or made available as open 
source through something like stackforge ?

 

Tim

 

From: Mike Spreitzer [mailto:mspre...@us.ibm.com] 
Sent: 20 September 2013 03:27
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Medium Availability VMs

 

 From: Tim Bell tim.b...@cern.ch 
 ... 
 Discussing with various people in the community, there seems to be 
 interest in a way to 
   
 -  Identify when a hypervisor is being drained or is down 
 and inventory its VMs 
 -  Find the best practise way of restarting that VM for 
 hypervisors still available 
 o   Live migration 
 o   Cold migration 
 -  Defining policies for the remaining cases 
 o   Restart from base image 
 o   Suspend 
 o   Delete 
   
 This touches multiple components from Nova/Cinder/Quantum (at minimum). 
   
 It also touches some cloud architecture questions if OpenStack can 
 start to move into the low hanging fruit parts of service consolidation. 
   
 I’d like to have some form of summit discussion in Hong Kong around 
 these topics but it is not clear where it fits. 
   
 Are there others who feel similarly ? How can we fit it in ? 

When there are multiple viable choices, I think direction should be taken from 
higher layers.  The operation of draining a hypervisor can be parameterized, 
and the VMs themselves can be tagged, with an indication of which action to 
take. 

I myself am working primarily on holistic infrastructure scheduling, which 
includes quiescing and draining hypervisors among the things it can do.  
Holistic scheduling works under the direction of a template/pattern/topology 
that describes a set of interacting resources and their relationships, and so 
is able to make a good decision about where VMs should move to. 

Re-starting a VM can require software coordination. 

I think holistic infrastructure scheduling is logically downstream from 
software coordination and upstream from infrastructure orchestration.  I think 
the ambitions for Heat are expanding to include the latter two, and so must 
also have something to do with holistic infrastructure scheduling. 

Regards, 
Mike 





Re: [openstack-dev] [requirements] Review request for adding ordereddict

2013-09-19 Thread Morgan Fainberg
I would venture that this easily falls into the same category as any FFE
(and likewise should be treated as such).  From a quick cursory glance, it
seems that this is likely packaged up (at least for RHEL in the EPEL
repository).  But I don't have a completely inclusive view of what is out
there.  Also, is there a minimum version that you require?  I think getting
the packagers involved (and knowing if there is a minimum version requirement)
is the best way to know if it should be accepted.

Cheers,
Morgan Fainberg

IRC: morganfainberg


On Thu, Sep 19, 2013 at 2:08 PM, Mark Washenberger 
mark.washenber...@markwash.net wrote:

 I respect the desires of packagers to have a stable environment, but I'm
 also very sad about having to copy the OrderedDict code directly into
 Glance. Can we actually verify that this is a problem for packagers? (I.e.
 not already in their repos?)

  It also may be possible that packagers who do not support python2.6 could
  completely avoid this problem if we change how the code is written. Does it
  seem possible to only depend on ordereddict if collections.OrderedDict does
  not exist?


 On Mon, Sep 16, 2013 at 11:27 AM, Dolph Mathews 
  dolph.math...@gmail.com wrote:


  On Mon, Sep 16, 2013 at 11:34 AM, Paul Bourke pauldbou...@gmail.com wrote:

 Hi all,

 I've submitted https://review.openstack.org/#/c/46474/ to add
 ordereddict to openstack/requirements.


 Related thread:
 http://lists.openstack.org/pipermail/openstack-dev/2013-September/015121.html


 The reasoning behind the change is that we want ConfigParser to store
 sections in the order they're read, which is the default behavior in
 py2.7[1], but it must be specified in py2.6.

 The following two Glance features depend on this:

 https://review.openstack.org/#/c/46268/
 https://review.openstack.org/#/c/46283/

 Can someone take a look at this change?

 Thanks,
 -Paul

 [1]
 http://docs.python.org/2/library/configparser.html#ConfigParser.RawConfigParser





 --

 -Dolph





