Re: [openstack-dev] [Openstack] baremetal provisioning

2013-11-05 Thread Ravikanth Samprathi
Thank you Sandeep.
Ravi



On Tue, Nov 5, 2013 at 12:08 AM, Sandeep Raman sandeep.ra...@gmail.com wrote:

 Hello,

 Check https://bugs.launchpad.net/nova/+bug/1226170 and
 https://blueprints.launchpad.net/tripleo/+spec/bittorrent-for-image-deployments

 Sandeep.


 On Tue, Nov 5, 2013 at 12:41 PM, Ravikanth Samprathi rsamp...@gmail.com wrote:

 Hi
 I have noticed that if I generate a baremetal image of 8G, it takes
 around 20-25 minutes to deploy.  The entire disk image is built on the
 OpenStack server and then copied over.  Isn't this a waste of time and
 space?  Must the whole image (8G, or 30G, or 100G, whatever I want) really
 be created on the OpenStack server and copied across?  Is there any other
 way, or is there a fix for this?  Can we not just specify the image size
 and have /dev/sdx created at boot time, instead of building the whole
 image up front?

 Thanks
 Ravi
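
[Editorial aside: the waste Ravi describes comes from shipping a fully
allocated raw image. A hedged Python sketch of one mitigation -- create the
image in a format that only stores written blocks (qcow2 here, via
qemu-img), so an "8G" image that is mostly empty stays small on disk and on
the wire. Filenames are illustrative; this is not the nova-baremetal code
path, which the blueprint Sandeep links discusses:]

    import subprocess

    # Create an 8G *virtual* disk; the qcow2 file itself starts at a few
    # hundred KB and only grows as blocks are actually written.
    subprocess.check_call(
        ['qemu-img', 'create', '-f', 'qcow2', 'baremetal.qcow2', '8G'])

    # For comparison, a raw image can be kept sparse on local disk, but a
    # naive copy will still expand and transfer all 8G of it.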


 ___
 Mailing list:
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
 Post to : openst...@lists.openstack.org
 Unsubscribe :
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [summit] Youtube stream from Design Summit?

2013-11-05 Thread Christopher Yeoh
On Sat, Nov 2, 2013 at 1:42 AM, Stefano Maffulli stef...@openstack.org wrote:

 On 11/01/2013 05:33 AM, Jaromir Coufal wrote:
  I was wondering, since there are a lot of people who cannot attend Design
  Sessions, whether we can help them to be present at least in some way.

 We tried in the past to set up systems to enable remote participation in
 a generic way (give a URL for each session and hope somebody joins it
 remotely) but never had enough return to justify the effort put into
 the production.

 I'd be interested to learn about your experiments: are you thinking of
 some specific set of people that you need to get involved remotely, or do
 you just want to provide the remote URL for anybody who wants to join?


I attended a few Nova sessions remotely at the Havana summit, actively
participating in one. I was at the other end of a Google Hangout on a
colleague's laptop.

IMO video doesn't really matter, as the most critical thing to see is the
etherpad, which can be viewed easily. Most of the time I ended up typing
directly into the etherpad to communicate back to the session (and people
were looking out for comments).

I think the most critical component is the audio. An audio stream with
multiple mic pickups throughout the room may be sufficient, since you can't
rely on people remembering to speak into microphones when the conversation
is moving around the room a lot. And it's never going to be as good as
attending in person.

Chris



 /stef

 --
 Ask and answer questions on https://ask.openstack.org

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack][Neutron] Server restart fails when configured with ML2 (BugID: 1210236)

2013-11-05 Thread Trinath Somanchi
Hi -

I configured Neutron with the ML2 plugin. When I restart the service, the
Neutron server fails with the following error.

2013-11-05 15:37:08.572 14048 INFO neutron.common.config [-] Config paste file: /etc/neutron/api-paste.ini
2013-11-05 15:37:08.574 14048 ERROR neutron.common.config [-] Unable to load quantum from configuration file /etc/neutron/api-paste.ini.
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config Traceback (most recent call last):
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/neutron/common/config.py", line 144, in load_paste_app
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     app = deploy.loadapp("config:%s" % config_path, name=app_name)
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     return loadobj(APP, uri, name=name, **kw)
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 271, in loadobj
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     global_conf=global_conf)
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     global_conf=global_conf)
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     return loader.get_context(object_type, name, global_conf)
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 408, in get_context
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     object_type, name=name)
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config   File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 587, in find_config_section
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config     self.filename))
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config LookupError: No section 'quantum' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /etc/neutron/api-paste.ini
2013-11-05 15:37:08.574 14048 TRACE neutron.common.config
2013-11-05 15:37:08.575 14048 ERROR neutron.service [-] Unrecoverable error: please check log for details.
2013-11-05 15:37:08.575 14048 TRACE neutron.service Traceback (most recent call last):
2013-11-05 15:37:08.575 14048 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 100, in serve_wsgi
2013-11-05 15:37:08.575 14048 TRACE neutron.service     service.start()
2013-11-05 15:37:08.575 14048 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 65, in start
2013-11-05 15:37:08.575 14048 TRACE neutron.service     self.wsgi_app = _run_wsgi(self.app_name)
2013-11-05 15:37:08.575 14048 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/service.py", line 109, in _run_wsgi
2013-11-05 15:37:08.575 14048 TRACE neutron.service     app = config.load_paste_app(app_name)
2013-11-05 15:37:08.575 14048 TRACE neutron.service   File "/usr/lib/python2.7/dist-packages/neutron/common/config.py", line 151, in load_paste_app
2013-11-05 15:37:08.575 14048 TRACE neutron.service     raise RuntimeError(msg)
2013-11-05 15:37:08.575 14048 TRACE neutron.service RuntimeError: Unable to load quantum from configuration file /etc/neutron/api-paste.ini.
2013-11-05 15:37:08.575 14048 TRACE neutron.service


From Launchpad, I see this has already been reported as a bug:
https://bugs.launchpad.net/neutron/+bug/1210236

Kindly help me fix this issue.

Thanking you,

--
Trinath Somanchi - B39208
trinath.soman...@freescale.com | extn: 4048
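
[Editorial aside: the LookupError itself narrows this down -- paste.deploy
was asked for an app named 'quantum', but /etc/neutron/api-paste.ini only
defines neutron-named sections. A hedged illustration of the kind of
section the loader wants to find (a Havana-era api-paste.ini may differ in
detail; treat this as a sketch, not the fix the bug report settled on):

    [composite:quantum]
    use = egg:Paste#urlmap
    /: neutronversions
    /v2.0: neutronapi_v2_0

Eugene Nikanorov's reply later in this digest points at the more likely
root cause: an ML2 driver or plugin failing to load earlier in the log.]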

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread Chris Friesen

On 11/05/2013 01:27 AM, Avishay Traeger wrote:


I think the proper fix is to make sure that Cinder is moving the volume
into 'error' state in all cases where there is an error.  Nova can then
poll as long as it's in the 'downloading' state, until it's 'available' or
'error'.  Is there a reason why Cinder would legitimately get stuck in
'downloading'?


There's always the "cinder service crashed and couldn't restart" case. :)

Chris


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread John Griffith
On Nov 5, 2013 3:33 PM, Avishay Traeger avis...@il.ibm.com wrote:

 So while doubling the timeout will fix some cases, there will be cases
with
 larger volumes and/or slower systems where the bug will still hit.  Even
 timing out on the download progress can lead to unnecessary timeouts (if
 it's really slow, or volume is really big, it can stay at 5% for some
 time).

 I think the proper fix is to make sure that Cinder is moving the volume
 into 'error' state in all cases where there is an error.  Nova can then
 poll as long as it's in the 'downloading' state, until it's 'available' or
 'error'.

Agree

 Is there a reason why Cinder would legitimately get stuck in
 'downloading'?

 Thanks,
 Avishay



 From:   John Griffith john.griff...@solidfire.com
 To: OpenStack Development Mailing List (not for usage questions)
 openstack-dev@lists.openstack.org,
 Date:   11/05/2013 07:41 AM
 Subject:    Re: [openstack-dev] Improvement of Cinder API wrt
 https://bugs.launchpad.net/nova/+bug/1213953



 On Tue, Nov 5, 2013 at 7:27 AM, John Griffith
 john.griff...@solidfire.com wrote:
  On Tue, Nov 5, 2013 at 6:29 AM, Chris Friesen
  chris.frie...@windriver.com wrote:
  On 11/04/2013 03:49 PM, Solly Ross wrote:
 
  So, There's currently an outstanding issue with regards to a Nova
  shortcut command that creates a volume from an image and then boots
  from it in one fell swoop.  The gist of the issue is that there is
  currently a set timeout which can time out before the volume creation
  has finished (it's designed to time out in case there is an error),
  in cases where the image download or volume creation takes an
  extended period of time (e.g. under a Gluster backend for Cinder with
  certain network conditions).
 
  The proposed solution is a modification to the Cinder API to provide
  more detail on what exactly is going on, so that we could
  programmatically tune the timeout.  My initial thought is to create a
  new column in the Volume table called 'status_detail' to provide more
  detailed information about the current status.  For instance, for the
  'downloading' status, we could have 'status_detail' be the completion
  percentage or JSON containing the total size and the current amount
  copied.  This way, at each interval we could check to see if the
  amount copied had changed, and trigger the timeout if it had not,
  instead of blindly assuming that the operation will complete within a
  given amount of time.
 
  What do people think?  Would there be a better way to do this?
 
 
  The only other option I can think of would be some kind of callback
that
  cinder could explicitly call to drive updates and/or notifications of
 faults
  rather than needing to wait for a timeout.  Possibly a combination of
 both
  would be best, that way you could add a --poll option to the create
 volume
  and boot CLI command.
 
  I come from the kernel-hacking world and most things there involve
  event-driven callbacks.  Looking at the openstack code I was kind of
  surprised to see hardcoded timeouts and RPC casts with no callbacks to
  indicate completion.
 
  Chris
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 I believe you're referring to [1], which was closed after a patch was
 added to nova to double the timeout length.  Based on the comments it
 sounds like you're still seeing issues on some Gluster (maybe other)
 setups?

 Rather than mess with the API just for debugging, why don't you use
 the info in the cinder logs?

 [1] https://bugs.launchpad.net/nova/+bug/1213953

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread Chris Friesen
Wouldn't you still need variable timeouts?  I'm assuming that copying 
multi-gig cinder volumes might take a while, even if it's local.  (Or 
are you assuming copy-on-write?)


Chris

On 11/05/2013 01:43 AM, Caitlin Bestler wrote:

Replication of snapshots is one solution to this.

You create a Cinder volume once and snapshot it. Then you replicate to the
hosts that need it (this is the piece currently missing), and clone
there.

I will be giving a conference session on this and other uses of snapshots
in the last time slot on Wednesday.

On Nov 5, 2013 5:58 AM, Solly Ross sr...@redhat.com wrote:

So,
There's currently an outstanding issue with regards to a Nova
shortcut command that creates a volume from an image and then boots
from it in one fell swoop.  The gist of the issue is that there is
currently a set timeout which can time out before the volume
creation has finished (it's designed to time out in case there is an
error), in cases where the image download or volume creation takes
an extended period of time (e.g. under a Gluster backend for Cinder
with certain network conditions).

The proposed solution is a modification to the Cinder API to provide
more detail on what exactly is going on, so that we could
programmatically tune the timeout.  My initial thought is to create
a new column in the Volume table called 'status_detail' to provide
more detailed information about the current status.  For instance,
for the 'downloading' status, we could have 'status_detail' be the
completion percentage or JSON containing the total size and the
current amount copied.  This way, at each interval we could check to
see if the amount copied had changed, and trigger the timeout if it
had not, instead of blindly assuming that the operation will
complete within a given amount of time.

What do people think?  Would there be a better way to do this?

Best Regards,
Solly Ross

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
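
[Editorial aside: a minimal sketch of the two ideas in this thread
combined -- Avishay's "poll until 'available' or 'error'" and Solly's
progress-aware timeout via a hypothetical 'status_detail' field. The
client API and field layout are illustrative, not actual Nova/Cinder
code:]

    import time

    def wait_for_volume(client, vol_id, idle_timeout=60, poll=2):
        """Poll until a terminal state; time out only when progress stalls."""
        last_copied, last_change = -1, time.time()
        while True:
            vol = client.volumes.get(vol_id)
            if vol.status in ('available', 'error'):
                return vol.status
            # Hypothetical: {'total': ..., 'copied': ...} per the proposal.
            detail = getattr(vol, 'status_detail', None) or {}
            copied = detail.get('copied', 0)
            if copied != last_copied:
                # Progress was made, so reset the stall clock instead of
                # enforcing a fixed wall-clock deadline.
                last_copied, last_change = copied, time.time()
            elif time.time() - last_change > idle_timeout:
                raise RuntimeError('volume %s stalled in state %s'
                                   % (vol_id, vol.status))
            time.sleep(poll)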




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread Solly Ross
Also, that's still an overly complicated process for one or two VMs.  The idea 
behind the Nova command was to minimize the steps in the image-to-volume-to-VM 
process for a single VM.

- Original Message -
From: Chris Friesen chris.frie...@windriver.com
To: openstack-dev@lists.openstack.org
Sent: Tuesday, November 5, 2013 9:23:39 AM
Subject: Re: [openstack-dev] Improvement of Cinder API wrt  
https://bugs.launchpad.net/nova/+bug/1213953

Wouldn't you still need variable timeouts?  I'm assuming that copying 
multi-gig cinder volumes might take a while, even if it's local.  (Or 
are you assuming copy-on-write?)

Chris

On 11/05/2013 01:43 AM, Caitlin Bestler wrote:
 [snipped: quoted text duplicated verbatim from Chris's message earlier in
 this digest]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][Neutron] Server restart fails when configured with ML2 (BugID: 1210236)

2013-11-05 Thread Eugene Nikanorov
That kind of error indicates that an ML2 driver or the ML2 plugin itself
failed to load.
You need to inspect the neutron server log prior to the trace you posted;
it should contain specifics about the issue.

Thanks,
Eugene.
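
[Editorial aside: a hedged sketch of what "inspect the log prior to the
trace" can look like in practice -- scan for the first ERROR before the
paste failure. The log path and logger names here are illustrative:]

    import re

    # Hypothetical path; adjust for your distro's packaging.
    with open('/var/log/neutron/server.log') as f:
        for line in f:
            if re.search(r' ERROR (neutron\.plugins\.ml2|stevedore)', line):
                print(line.rstrip())
                break  # the first failure is usually the root cause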



On Tue, Nov 5, 2013 at 8:31 PM, Ben Nemec openst...@nemebean.com wrote:

  Please don't cross-post between openstack and openstack-dev.  Based on
 the bug you linked, this sounds like a probable configuration issue, so
 openstack would be the place for this.

 Thanks.

 -Ben



 On 2013-11-05 04:06, Trinath Somanchi wrote:

 [snipped: quoted message duplicated verbatim from Trinath's original post
 earlier in this digest]




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Solum]: Welcome to the community site for Solum !!

2013-11-05 Thread Brent Smithurst
This is great! I was waiting for the site to go live so I'd have a nice
place to link our "Why Would ActiveState Join Project Solum?" blog post:

http://www.activestate.com/blog/2013/11/why-would-activestate-join-openstacks-project-solum

We're looking forward to this as it gains momentum.

Brent


On Mon, Nov 4, 2013 at 5:25 PM, Roshan Agrawal roshan.agra...@rackspace.com wrote:

  The community site for Solum has gone live! www.Solum.io  - this is a
 landing page for all things Solum related.

  Also check out the blog section on the site.

  The logo is a placeholder for now. We are working on a cool new logo -
 but the placeholder right now isn't too bad either, is it?



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum]: Welcome to the community site for Solum !!

2013-11-05 Thread Rajesh Ramchandani
Looks great, Roshan. Please let us know if you need any help with content for 
the site.

Raj



From: Roshan Agrawal roshan.agra...@rackspace.com
Sent: Monday, November 04, 2013 5:25 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [Solum]: Welcome to the community site for Solum !!

The community site for Solum has gone live! www.Solum.io
- this is a landing page for all things Solum related.

Also check out the blog section on the site.

The logo is a placeholder for now. We are working on a cool new logo - but the 
placeholder right now isn't too bad either, is it?


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Design Summit Etherpads

2013-11-05 Thread McReynolds, Auston
Since I'm not able to be there in person, I've included my
thoughts on the clustering blueprint @
https://etherpad.openstack.org/p/TroveReplicationClustering

Thanks,
Auston


On 10/29/13 12:38 AM, Nikhil Manchanda nik...@manchanda.me wrote:


The list of Etherpads for the design summit sessions for Trove is now
posted at:
https://wiki.openstack.org/wiki/Summit/Icehouse/Etherpads#Trove

Feel free to add any relevant notes to the Etherpads.

Thanks,

Cheers,
-Nikhil

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] A Tool for Making OpenStack Log Reading a Bit Easier

2013-11-05 Thread Solly Ross
Hello All,
The other day, I was reading through a debug-level OpenStack log, and came to 
the realization that reading OpenStack debug-level logs was difficult, to say 
the least -- they can be very busy, and it is hard to quickly filter out 
relevant information.  Thus, I wrote a little Perl script to make reading dense 
debug-level logs a bit easier: 
https://github.com/DirectXMan12/os_log_prettifier.  I figured that I'd share it 
with other people.  Basically, the script highlights certain key details using 
color and bolding (via ANSI control codes), and can filter lines based on 
subject (in the form of 'x.y.z') or message type, using regular expressions.  I 
hope people find it useful!

Best Regards,
Solly Ross
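
[Editorial aside: Solly's script is Perl; purely as an illustration of the
technique he describes (ANSI highlighting plus regex filtering by subject),
here is a hedged Python analogue -- not his code:]

    import re
    import sys

    COLORS = {'ERROR': '\033[31m', 'WARNING': '\033[33m', 'DEBUG': '\033[2m'}
    RESET = '\033[0m'

    def prettify(stream, subject=None):
        """Colorize level names; optionally keep only lines whose
        subject (the 'x.y.z' logger name) matches a regex."""
        pat = re.compile(subject) if subject else None
        for line in stream:
            if pat and not pat.search(line):
                continue
            for level, color in COLORS.items():
                if ' %s ' % level in line:
                    line = line.replace(level, color + level + RESET, 1)
                    break
            sys.stdout.write(line)

    if __name__ == '__main__':
        prettify(sys.stdin,
                 subject=sys.argv[1] if len(sys.argv) > 1 else None)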

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][keystone] APIs, roles, request scope and admin-ness

2013-11-05 Thread Clint Byrum
Excerpts from Steven Hardy's message of 2013-11-03 00:06:39 +0800:
 Hi all,
 
 Looking to start a wider discussion, prompted by:
 https://review.openstack.org/#/c/54651/
 https://blueprints.launchpad.net/heat/+spec/management-api
 https://etherpad.openstack.org/p/heat-management-api
 
 Summary - it has been proposed to add a management API to Heat, similar in
 concept to the admin/public API topology used in keystone.
 
 I'm concerned that this may not be a pattern we want to propagate throughout
 OpenStack, and that for most services, we should have one API to access data,
 with the scope of the data returned/accessible defined by the roles held by
 the user (ie making proper use of the RBAC facilities afforded to us via
 keystone).
 
 In the current PoC patch, a user's admin-ness is derived from the fact that
 they are accessing a specific endpoint, and that policy did not deny them
 access to that endpoint.  I think this is wrong, and we should use keystone
 roles to decide the scope of the request.
 
 The proposal seems to consider tenants as the top level of abstraction, with
 the next level up being a global service provider admin, but this does not
 consider the keystone v3 concept of domains [1], or that you may wish to
 provide some of these admin-ish features to domain-admin users (who will
 administer data across multiple tenants, just as has been proposed), via the
 public-facing API.
 
 It seems like we need a way of scoping the request (via data in the context),
 based on a hierarchy of admin-ness, like:
 
 1. Normal user
 2. Tenant Admin (has admin role in a tenant)
 3. Domain Admin (has admin role in all tenants in the domain)
 4. Service Admin (has admin role everywhere, like admin_token for keystone)
 
 The current is_admin flag which is being used in the PoC patch won't allow
 this granularity of administrative boundaries to be represented, and splitting
 admin actions into a separate API will prevent us from providing tenant- and
 domain-level admin functionality to customers in a public cloud environment.
 
 It has been mentioned that in keystone, if you have admin in one tenant, you
 are admin everywhere, which is a pattern I think we should not follow -
 keystone folks, what are your thoughts in terms of a roadmap to make role
 assignment (at the request level) scoped to tenants rather than globally
 applied?  E.g. what data can we add to move from X-Roles in auth_token, to
 expressing roles in multiple tenants and domains?
 

Right, roles should be tenant and domain scoped, and the roles that we
consume in our policy definitions should not need to know anything about
the hierarchy. It seems very broken to me that there is no way in Keystone
to make a user who can only create more users within their own tenant.
Likewise, I would consider Heat broken if I had to use a special API
for doing things with a role I already have that is just scoped more
broadly than a single tenant or domain.

 Basically, I'm very concerned that we discuss this, get a clear roadmap which
 will work with future keystone admin/role models, and is not a short-term hack
 which we won't want to maintain long-term.
 
 What are peoples thoughts on this?

Let's try and find a keystone dev or two in the hallway at the summit
and get some clarity on the way Keystone is intended to work, which may
help us decide if we want to follow their admin-specific-API paradigm or
not.
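
[Editorial aside: a hedged sketch of the four-level admin-ness hierarchy
Steven lists, as plain Python. The role names and context fields are
illustrative only -- not Keystone defaults and not the PoC patch:]

    def scope_of(context):
        """Map a request context onto the hierarchy from the thread."""
        roles = set(context.get('roles', []))
        if 'service_admin' in roles:
            return 'service'   # 4. admin everywhere
        if 'domain_admin' in roles:
            return 'domain'    # 3. admin across all tenants in the domain
        if 'admin' in roles:
            return 'tenant'    # 2. admin within one tenant
        return 'user'          # 1. normal user

    # e.g. scope_of({'roles': ['admin'], 'tenant_id': 't1'}) == 'tenant'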

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] IPv6 sub-team?

2013-11-05 Thread Collins, Sean (Contractor)
Hi,

Is there any interest in organizing an IPv6 sub-team, similar to how
there are sub-teams for FWaaS, VPNaaS, ML2, etc.?

-- 
Sean M. Collins
AIM: seanwdp
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPv6 sub-team?

2013-11-05 Thread Kyle Mestery (kmestery)
On Nov 5, 2013, at 5:50 PM, Collins, Sean (Contractor) 
sean_colli...@cable.comcast.com wrote:
 Hi,
 
 Is there any interest in organizing an IPv6 sub-team, similar to how
 there are sub-teams for FWaaS, VPNaaS, ML2, etc.?


+1, let's do it!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPv6 sub-team?

2013-11-05 Thread Brian Haley

On 11/06/2013 07:50 AM, Collins, Sean (Contractor) wrote:

Hi,

Is there any interest in organizing an IPv6 sub-team, similar to how
there are sub-teams for FWaaS, VPNaaS, ML2, etc.?



+1 from me.

-Brian

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPv6 sub-team?

2013-11-05 Thread Edgar Magana
Absolutely!

+1

Edgar

On 11/5/13 3:50 PM, Collins, Sean (Contractor)
sean_colli...@cable.comcast.com wrote:

Hi,

Is there any interest in organizing an IPv6 sub-team, similar to how
there are sub-teams for FWaaS, VPNaaS, ML2, etc.?

-- 
Sean M. Collins
AIM: seanwdp
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A Tool for Making OpenStack Log Reading a Bit Easier

2013-11-05 Thread Clint Byrum
I often use ccze to look at logs. It has some built-in things like
coloring the words "warn" or "warning" yellow and "error" red. It would
be great to have this filter added as another ccze plugin.

Excerpts from Solly Ross's message of 2013-11-06 05:58:02 +0800:
 Hello All,
 The other day, I was reading through a debug-level OpenStack log, and came to 
 the realization that reading OpenStack debug-level logs was difficult, to say 
 the least -- they can be very busy, and it is hard to quickly filter out 
 relevant information.  Thus, I wrote a little Perl script to make reading 
 dense debug-level logs a bit easier: 
 https://github.com/DirectXMan12/os_log_prettifier.  I figured that I'd share 
 it with other people.  Basically, the script highlights certain key details 
 using color and bolding (via ANSI control codes), and can filter lines based 
 on subject (in the form of 'x.y.z') or message type, using regular 
 expressions.  I hope people find it useful!
 
 Best Regards,
 Solly Ross
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] IPv6 sub-team?

2013-11-05 Thread Collins, Sean (Contractor)
Cool!

I've put a placeholder on the meetings wiki page, until we can find a
time that works for everyone.

-- 
Sean M. Collins
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] A Tool for Making OpenStack Log Reading a Bit Easier

2013-11-05 Thread Joe Gordon
On Wed, Nov 6, 2013 at 9:10 AM, Clint Byrum cl...@fewbar.com wrote:

 I often use ccze to look at logs. It has some built in things like
 coloring the words warn or warning yellow and error red. It would
 be great to have this filter added as another ccze plugin.


Sean Dague wrote a really slick tool for logs.openstack.org that is
similar:

http://git.openstack.org/cgit/openstack-infra/os-loganalyze/tree/README.rst

http://logs.openstack.org/98/54198/7/check/check-tempest-devstack-vm-neutron/a153156/logs/screen-q-svc.txt.gz?level=TRACE


 Excerpts from Solly Ross's message of 2013-11-06 05:58:02 +0800:
  Hello All,
  The other day, I was reading through a debug-level OpenStack log, and
 came to the realization that reading OpenStack debug-level logs was
 difficult, to say the least -- they can be very busy, and it is hard to
 quickly filter out relevant information.  Thus, I wrote a little Perl
 script to make reading dense debug-level logs a bit easier:
 https://github.com/DirectXMan12/os_log_prettifier.  I figured that I'd
 share it with other people.  Basically, the script highlights certain key
 details using color and bolding (via ANSI control codes), and can filter
 lines based on subject (in the form of 'x.y.z') or message type, using
 regular expressions.  I hope people find it useful!
 
  Best Regards,
  Solly Ross
 

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum] Nova task API, task flow, and actions from design summit

2013-11-05 Thread Clayton Coleman
Quick summary of interesting discussions yesterday at the summit that relate to 
things we will face in Solum wrt async flows.

The two nova sessions on async work [1] and the task API [2] had a lot of good 
back and forth.  The problem space is how to model and convey long running 
tasks in the nova API, and then how to start moving long running tasks into a 
consistent place in the nova code base.  There appeared to be broad consensus 
that this move should and would happen in icehouse for a few important tasks 
(snapshot) and the rough shape of an API, but that there are a lot of open 
questions about how to best handle the hard problems (flow state persistence, 
read/write access patterns into a persistent store, how to make tasks 
idempotent across retries and in the face of partitions and distributed 
transactions).

A highlight for me was that it almost exactly (down to a very low level) 
matched a set of discussions we've been having in OpenShift.  The problem space 
is the same - you have a virtual resource (application) that manifests as a 
distributed set of servers that must be coordinated.  You want to create (but 
create can be long running and can fail very late in the flow), you can restart 
and start these resources (usually in parallel), delete needs to be able to cut 
across a deep queue of work, and (although this isn't yet a nova problem, but 
it will be a heat/Solum problem) you need to allow multiple operations to 
execute in parallel.  These are all application life cycle problems that Heat 
and Solum will have to deal with - with Solum potentially providing a thin 
layer on top of the Heat calls (or no layer).

The other session was glance and taskflow [3] - they had general consensus to 
move ahead with their task API on top of a task flow implementation for a few 
of their existing long-running tasks.  Someone from cinder talked about their 
experience - some of the known gaps in task flow include restart of a job at a 
previous checkpoint (there are other domain problems on top of that of course) 
as well as the distributed execution engine for task flow (that would allow 
work to be more easily distributed across a cluster).  Some follow up 
discussion included the need for there to be general collaboration across the 
teams on demonstrating patterns of use around the harder problems (restart of 
flows, different types of distributed retry and failure recovery, idempotent 
calls).

For Solum, I think we need to be seriously prototyping a few relevant 
long-running tasks (create, build, deploy) using task flow and getting 
familiar with the model.  And likewise, we need to be following the task API 
work in nova and glance closely, and working with heat and others to track 
this work.
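
[Editorial aside: a hedged sketch of what prototyping one Solum flow with
taskflow might look like. The taskflow calls used here (linear_flow,
task.Task, engines.run) are real, but the tasks and what they do are
purely illustrative:]

    import taskflow.engines
    from taskflow import task
    from taskflow.patterns import linear_flow

    class AllocateServers(task.Task):
        default_provides = 'server_ids'

        def execute(self, app_name):
            return ['srv-1', 'srv-2']   # stand-in for real provisioning

        def revert(self, *args, **kwargs):
            pass                        # tear down whatever got built

    class WireNetwork(task.Task):
        def execute(self, server_ids):
            pass                        # stand-in for network plumbing

    # execute/revert pairs are what make "create fails very late" and
    # idempotent retries tractable, per the sessions above.
    flow = linear_flow.Flow('solum-create').add(AllocateServers(),
                                                WireNetwork())
    taskflow.engines.run(flow, store={'app_name': 'demo'})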

[1] https://etherpad.openstack.org/p/IcehouseConductorTasksNextSteps
[2] https://etherpad.openstack.org/p/IcehouseTaskAPI
[3] https://etherpad.openstack.org/p/icehouse-summit-taskflow-and-glance
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Improvement of Cinder API wrt https://bugs.launchpad.net/nova/+bug/1213953

2013-11-05 Thread Avishay Traeger
Chris Friesen chris.frie...@windriver.com wrote on 11/05/2013 10:21:07
PM:
  I think the proper fix is to make sure that Cinder is moving the volume
  into 'error' state in all cases where there is an error.  Nova can then
  poll as long as it's in the 'downloading' state, until it's 'available' or
  'error'.  Is there a reason why Cinder would legitimately get stuck in
  'downloading'?

 There's always the "cinder service crashed and couldn't restart" case. :)

Well, we should fix that too :)
Your Cinder processes should be properly HA'ed, and yes, Cinder needs to be
robust enough to resume operations.
I don't see how adding a callback would help - wouldn't you still need to
time out if you don't get a callback?

Thanks,
Avishay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Releases of this week

2013-11-05 Thread Robert Collins
Awesome work - thank you!!!

Can you please add to your docs though, that we need to go and close
the bugs in the project (either via a script or by hand) - gerrit
leaves them as Fix Committed today.

Cheers,
Rob

On 2 November 2013 04:38, Roman Podoliaka rpodoly...@mirantis.com wrote:
 Hi all,

 This week I've been doing releases of all projects, which belong to
 TripleO program. Here are release notes you might be interested in:

 os-collect-config  - 0.1.5 (was 0.1.4):
 - default polling interval was reduced to 30 seconds
 - requirements were updated to use the new iso8601 version
 fixing important bugs

 diskimage-builder - 0.0.9 (was 0.0.8)
  - added support for bad Fedora image mirrors (retry the
 request once on 404)
  - removed dependency on dracut-network from fedora element
  - fixed the bug with removing the lost+found dir if it's not found

 tripleo-image-elements  - 0.1.0 (was 0.0.4)
  - switched to tftpd-hpa on Fedora and Ubuntu
  - made it possible to disable file injection in Nova
  - switched seed vm to Neutron native PXE
  - added Fedora support to apache2 element
  - fixed processing of routes in init-neutron-ovs
  - fixed Heat watch server url key name in seed vm metadata

 tripleo-heat-templates - 0.1.0 (was 0.0.1)
  - disabled Nova Baremetal file injection (undercloud)
  - made LaunchConfiguration resources mergeable
  - made neutron public interface configurable (overcloud)
  - made it possible to set public interface IP (overcloud)
  - allowed making the public interface a VLAN (overcloud)
  - added a wait condition for signalling that overcloud is ready
  - added metadata for Nova floating-ip extension
  - added tuskar API service configuration
  - hid AdminToken in Heat templates
  - added Ironic service configuration

  tuskar - 0.0.2 (was 0.0.1)
  - made it possible to pass Glance image id
  - fixed the bug with duplicated Resource Class names

  tuskar-ui - 0.0.2 (was 0.0.1)
   - resource class creation form no longer ignores the image selection
   - separated flavors creation step
   - fail gracefully on node detail page when no overcloud
   - added validation of MAC addresses and CIDR values
   - stopped appending Resource Class name to Resource Class flavors
   - fixed JS warnings when $ is not available
   - fixed links and naming in Readme
   - various code and test fixes (pep8, refactoring)

   python-tuskarclient - 0.0.2 (was 0.0.1)
   - fixed processing of 301 response code

   os-apply-config and os-refresh-config haven't had new commits
 since the last release

 This also means that:
 1. We are now releasing all the projects we have.
 2. *tuskar* projects have got PyPi entries.

 Last but not least.

 I'd like to say a big thank you to Chris Jones who taught me 'Release
 Management 101' and provided patches to openstack/infra-config to make
 all our projects 'releasable'; Robert Collins for his advice on
 version numbering; Clark Boylan and Jeremy Stanley for landing of
 Gerrit ACL patches and debugging PyPi uploads issues; Radomir
 Dopieralski and Tomas Sedovic for landing a quick fix to tuskar-ui.

 Thank you all guys, you've helped me a lot!

 Roman

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Releases of this week

2013-11-05 Thread Roman Podoliaka
Hey,

Oh, that's a pity. I didn't know that. Sure, I'll update the doc and look
for a way to automate the process.

Roman

On Wednesday, November 6, 2013, Robert Collins wrote:

 Awesome work - thank you!!!

 Can you please add to your docs though, that we need to go and close
 the bugs in the project (either via a script or by hand) - gerrit
 leaves them as Fix Committed today.

 Cheers,
 Rob

  [snipped: quoted release notes duplicated verbatim from Roman's original
  message earlier in this digest]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Releases of this week

2013-11-05 Thread Sergey Lukjanov
Here is the script for processing bugs while releasing:
https://github.com/ttx/openstack-releasing/blob/master/process_bugs.py
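
[Editorial aside: the core of what such a script does, as a hedged
launchpadlib sketch -- see ttx's process_bugs.py itself for the real
behaviour and safety checks. Project and login names are illustrative:]

    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('tripleo-release', 'production')
    project = lp.projects['diskimage-builder']  # illustrative project
    for bug_task in project.searchTasks(status='Fix Committed'):
        # gerrit leaves bugs at Fix Committed; flip them on release.
        bug_task.status = 'Fix Released'
        bug_task.lp_save()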

Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.

On Nov 6, 2013, at 1:42 PM, Roman Podoliaka rpodoly...@mirantis.com wrote:

 [snipped: quoted thread duplicated verbatim from the messages above]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev