Re: [Openstack] which SDK to use?

2018-04-18 Thread Joshua Hesketh
There is also nothing stopping you from using both. For example, you could
use the OpenStack SDK for most things but if you hit an edge case where you
need something specific you can then import the particular client lib.
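A minimal sketch of that "use both" approach: try the broader library
first and fall back to a narrower one. The module names passed in are
illustrative (the real packages would be e.g. "openstack" for the SDK
and "novaclient.client"); the helper itself only uses the stdlib.

```python
import importlib


def first_importable(module_names):
    """Return the first module in the list that can be imported."""
    for name in module_names:
        try:
            return importlib.import_module(name)
        except ImportError:
            # Not installed here; fall through to the next candidate.
            continue
    raise ImportError('none of %s are installed' % (module_names,))


# e.g. first_importable(['openstack', 'novaclient.client'])
```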

Cheers,
Josh

On Thu, Apr 19, 2018 at 1:05 AM, Chris Friesen 
wrote:

> I should preface this with the fact that I don't use OpenStack SDK, so you
> may want to check with the project developers.
>
> One example is that a bit over a year ago nova added a microversion to
> include the flavor information directly in the server information rather
> than returning a link to a flavor (that may have been modified or deleted
> in the meantime).
>
> To my knowledge, the OpenStack SDK does not yet support this functionality.
>
> Chris
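[For reference, the microversion opt-in Chris describes looks roughly like
this at the HTTP level. The endpoint, token handling, and the exact
microversion (2.47 to my recollection) are assumptions; check the compute
API microversion history for your cloud.]

```python
import urllib.request


def server_detail_request(endpoint, server_uuid, token,
                          microversion='2.47'):
    """Build a GET request for a server, opting in to a Nova microversion."""
    req = urllib.request.Request('%s/servers/%s' % (endpoint, server_uuid))
    req.add_header('X-Auth-Token', token)
    # With a high enough microversion the response embeds the flavor
    # details instead of returning a link to a (possibly since-modified
    # or deleted) flavor.
    req.add_header('OpenStack-API-Version', 'compute %s' % microversion)
    return req
```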
>
>
>
> On 04/17/2018 02:24 PM, Volodymyr Litovka wrote:
>
>> Hi Chris and colleagues,
>>
>> based on your experience, can you estimate the average delay between a new
>> OpenStack release / new feature introduction and the appearance of
>> corresponding support in the unified OpenStack SDK, if you have
>> experienced such issues?
>>
>> Thanks.
>>
>> On 4/17/18 7:23 PM, Chris Friesen wrote:
>>
>>> On 04/17/2018 07:13 AM, Jeremy Stanley wrote:
>>>
>>>> The various "client libraries" (e.g. python-novaclient,
>>>> python-cinderclient, et cetera) can also be used to that end, but
>>>> are mostly for service-to-service communication these days, aren't
>>>> extremely consistent with each other, and tend to eventually drop
>>>> support for older OpenStack APIs so if you're going to be
>>>> interacting with a variety of different OpenStack deployments built
>>>> on different releases you may need multiple versions of the client
>>>> libraries (depending on what it is you're trying to do).
>>>
>>> The above is all good information.
>>>
>>> I'd like to add that if you need bleeding-edge functionality in nova it
>>> will
>>> often be implemented first in python-novaclient.
>>>
>>> Chris
>>>
>>> ___
>>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>


Re: [Openstack] [openstack-dev] Naming polls - and some issues

2016-07-17 Thread Joshua Hesketh
Hello Ian,

My understanding is that you need to receive a new link. Monty is
slowly sending those out in batches and hasn't finished yet. I expect he'll
email the list to confirm once he has finished, so I'd suggest holding off
on checking until then.

Cheers,
Josh

On Mon, Jul 18, 2016 at 1:58 PM, Ian Y. Choi  wrote:

> Hello,
>
> Today I tried to vote in the naming polls for the P and Q releases;
> unfortunately, I am experiencing some issues.
>
> I used the link for the P release in the e-mail titled "Poll: OpenStack P
> Release Naming" on July 11 22:42 UTC.
> When I click my URL, the poll site says:
> "Error / Your voter key is invalid. You should have received a correct URL
> by email."
> and the poll has not ended yet. However, I have not received any other
> correct URL by e-mail.
>
> For Q release, I followed the link in the e-mail: "Poll: OpenStack Q
> Release Naming" on July 12 02:15 UTC.
> When I go to my vote URL, the site says "Poll already ended".
> But strangely, when I see the poll result (
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_06e681ae091ad657 ),
> all the poll candidates are 'tied'.
>
> So my questions are:
>
> 1) Is anybody else having trouble voting in the P and Q release naming polls?
>
> 2) Has the Q release naming poll already finished?
> I suppose it could have finished early, although the original due date is
> July 20th. However, it is so strange that the result I am seeing now is "tied".
>
>
> With many thanks,
>
> /Ian
>
>
> Monty Taylor wrote on 7/18/2016 10:03 AM:
>
>> Any time is a good time.
>>
>> On 07/17/2016 04:54 PM, Michael Still wrote:
>>
>>> So, is now a good time to mention that "Quamby" is the name of a local
>>> prison?
>>>
>>> Michael
>>>
>>>
>>>
>>> On Fri, Jul 15, 2016 at 7:50 PM, Eoghan Glynn wrote:
>>>
>>>
>>>  > (top posting on purpose)
>>>  >
>>>  > I have re-started the Q poll and am slowly adding all of you fine
>>> folks
>>>  > to it. Let's keep our fingers crossed that it works this time.
>>>  >
>>>  > I also removed Quay. Somehow my brain didn't process the "it
>>> would be
>>>  > like naming the S release "Street"" when reading the original
>>> names.
>>>  > Based on the naming criteria, "Circular Quay" would be a great
>>> option for
>>>  > "Circular" - but sadly we already named the C release Cactus. It's
>>>  > possible this choice will make someone unhappy, and if it does,
>>> I'm
>>>  > certainly sorry. On the other hand, there are _so_ many awesome
>>> names
>>>  > possible in this list, I don't think we'll miss it.
>>>
>>>  Excellent, thanks Monty for fixing this ... agreed that the
>>> remaining
>>>  Q* choices are more than enough.
>>>
>>>  Cheers,
>>>  Eoghan
>>>
>>>  > I'll fire out new emails for P once Q is up and going.
>>>  >
>>>  > On 07/15/2016 11:02 AM, Jamie Lennox wrote:
>>>  > > Partially because its name is Circular Quay, so it would be like
>>>  calling
>>>  > > the S release Street for  Street.
>>>  > >
>>>  > > Having said that there are not that many of them and Sydney
>>>  people know
>>>  > > what you mean when you are going to the Quay.
>>>  > >
>>>  > >
>>>  > > On 14 July 2016 at 21:35, Neil Jerram wrote:
>>>  > >
>>>  > > Not sure what the problem would be with 'Quay' or 'Street' -
>>>  they
>>>  > > both sound like good options to me.
>>>  > >
>>>  > >
>>>  > > On Thu, Jul 14, 2016 at 11:29 AM Eoghan Glynn wrote:
>>>  > >
>>>  > >
>>>  > >
>>>  > > > >> Hey all!
>>>  > > > >>
>>>  > > > >> The poll emails for the P and Q naming have started
>>>  to go
>>>  > > out - and
>>>  > > > >> we're experiencing some difficulties. Not sure at
>>> the
>>>  > > moment what's
>>>  > > > >> going on ... but we'll keep working on the issues
>>>  and get
>>>  > > ballots to
>>>  > > > >> everyone as soon as we can.
>>>  > > > >
>>>  > > > > You'll need to re-send at least some emails,
>>> because the
>>>  > > link I received
>>>  > > > > is wrong - the site just reports
>>>  > > > >
>>>  > > > >   "Your voter key is invalid. You should have
>>> received a
>>>  > > correct URL by
>>>  > > > >   email."
>>>  > > >
>>>  > > > Yup. That would be a key symptom of the problems. One
>>>  of the
>>>  > > others is
>>>  > > > that I just uploaded 3000 of the emails to the Q poll
>>>  and it
>>>  > > shows 0
>>>  > > > active voters.
>>>  > > >
>>> 

Re: [Openstack] Nova migrate-flavor-data woes

2015-07-27 Thread Joshua Hesketh
Yep, and that patch was backported into Kilo last month [0]. If others are
experiencing pain, they could try the head of stable/kilo.

Given this is possibly causing some operator pain, it might be worth
cutting a bug-fix release, or updating the installation instructions.

[0] https://review.openstack.org/#/c/195762/

On Tue, Jul 28, 2015 at 1:09 AM, Mike Dorman  wrote:

> I had this frustration, too, when doing this the first time.
>
> FYI (and for the Googlers who stumble across this in the future), this
> patch [1] fixes the --max_number thing.
>
> [1] https://review.openstack.org/#/c/175890/
>
>
>
>
>
>
> On 7/27/15, 8:45 AM, "Jay Pipes"  wrote:
>
> >On 07/26/2015 01:15 PM, Lars Kellogg-Stedman wrote:
> >> So, the Kilo release notes say:
> >>
> >>  nova-manage migrate-flavor-data
> >>
> >> But nova-manage says:
> >>
> >>  nova-manage db migrate_flavor_data
> >>
> >> But that says:
> >>
> >>  Missing arguments: max_number
> >>
> >> And the help says:
> >>
> >>  usage: nova-manage db migrate_flavor_data [-h]
> >>[--max-number ]
> >>
> >> Which indicates that --max-number is optional, but whatever, so you
> >> try:
> >>
> >>  nova-manage db migrate_flavor_data --max-number 100
> >>
> >> And that says:
> >>
> >>  Missing arguments: max_number
> >>
> >> So just for kicks you try:
> >>
> >>  nova-manage db migrate_flavor_data --max_number 100
> >>
> >> And that says:
> >>
> >>  nova-manage: error: unrecognized arguments: --max_number
> >>
> >> So finally you try:
> >>
> >>  nova-manage db migrate_flavor_data 100
> >>
> >> And holy poorly implemented client, Batman, it works.
> >
> >LOL. Well, the important thing is that the thing eventually worked. ;P
> >
> >In all seriousness, though, yeah, the nova-manage CLI tool is entirely
> >different from the main python-novaclient CLI tool. It's not been a
> >priority whatsoever to clean it up, but I think it would be some pretty
> >low-hanging fruit to make the CLI consistent with the design of, say,
> >python-openstackclient...
> >
> >Perhaps something we should develop a backlog spec for.
> >
> >Best,
> >-jay
> >
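For the archives: the behaviour Lars hit is a classic argparse pitfall, a
*positional* argument whose metavar makes it look like an optional flag in
the help output. A minimal reproduction (a hypothetical parser, not nova's
actual code):

```python
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog='demo db migrate_flavor_data')
    # Registered positionally under dest "max_number", but displayed as
    # "[--max-number <number>]" in the usage/help text:
    parser.add_argument('max_number', metavar='--max-number <number>',
                        type=int, nargs='?')
    return parser


parser = build_parser()
print(parser.format_usage())       # advertises [--max-number <number>]
print(parser.parse_args(['100']))  # the bare positional parses fine
# parser.parse_args(['--max-number', '100']) would exit with
# "unrecognized arguments", much like the transcript above.
```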


Re: [Openstack] Grizzly -> Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2013-10-30 Thread Joshua Hesketh

Hi Blair,

No trouble, glad I could help. I'll be working on backporting this as
appropriate.


Cheers,
Josh

Rackspace Australia

On 10/30/13 5:06 PM, Blair Zajac wrote:

Hi Joshua,

Thanks for the quick fix, I appreciate it.

I took your fixed 185_rename_unique_constraints.py, copied it into 
/usr/share/pyshared/nova/db/sqlalchemy/migrate_repo/versions/185_rename_unique_constraints.py, 
restored MySQL back to 1 week ago and reran all the migrations and 
everything worked fine.


I've launched my instances, so I'm a happy openstacker ;)

Thanks!
Blair

On 10/29/2013 09:05 PM, Joshua Hesketh wrote:

Hi Blair,

I have proposed a new fix which I believe should work for you. However,
I've been unable to determine where exactly the duplicate keys were
introduced. Looking at which versions were available in Ubuntu 12.10, it
appears you have been running since Folsom. This means you did not have
the 133_folsom.py migration and should have skipped it when upgrading to
Grizzly. I can't see anywhere else that would have changed the fkeys.

So what I have proposed is to check which keys exist in both of the
problematic tables (as virtual_interfaces is also affected) and remove
them, bringing the databases in line with those of 133_folsom users.

I'll propose this for backporting into Havana after it is merged.

Cheers,
Josh

Rackspace Australia

On 10/30/13 11:42 AM, Blair Zajac wrote:

On 10/29/2013 05:16 PM, Joshua Hesketh wrote:

Hi Blair,

Thanks for the clarifications, that helps.

At the moment I'm trying to determine how you ended up with both fkeys
so I can ensure the problem is properly fixed. What version did you
first start deploying OpenStack from? Essex? And have you been
upgrading with releases, RCs, upstream, etc.?


I don't remember exactly, but here's my best shot. I keep my systems
up to date on Ubuntu, so:

1) Somewhere during 12.10 I installed OpenStack, I don't have notes on
which one.

2) By the time I upgraded to 13.04 I had 2012.2.1-0ubuntu1.3 installed
as I have the 'dpkg -l' output saved just before the upgrade. I was
using the packages from ubuntu-cloud.archive.canonical.com for 12.04
even though I was on 12.10.  Upgrading to 13.04 upgraded OpenStack to
1:2013.1-0ubuntu1.

3) When I upgraded to 13.10 I switched from
ubuntu-cloud.archive.canonical.com back to 13.10's native OpenStack
packages. This brought me to 1:2013.2~rc4-0ubuntu1, which has since been
updated to 1:2013.2-0ubuntu1.

Hope that helps.

Blair










Re: [Openstack] Grizzly -> Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2013-10-29 Thread Joshua Hesketh

Hi Blair,

I have proposed a new fix which I believe should work for you. However,
I've been unable to determine where exactly the duplicate keys were
introduced. Looking at which versions were available in Ubuntu 12.10, it
appears you have been running since Folsom. This means you did not have
the 133_folsom.py migration and should have skipped it when upgrading to
Grizzly. I can't see anywhere else that would have changed the fkeys.


So what I have proposed is to check which keys exist in both of the
problematic tables (as virtual_interfaces is also affected) and remove
them, bringing the databases in line with those of 133_folsom users.


I'll propose this for backporting into Havana after it is merged.

Cheers,
Josh

Rackspace Australia
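[For operators following along: a rough sketch of what the check involves at the SQL level. Table and constraint names are taken from the dumps posted in this thread; verify them against SHOW CREATE TABLE on your own database, and prefer the reviewed migration over running statements by hand.]

```sql
-- List the foreign keys on the two affected tables; a pre-fix database
-- shows both instance_info_caches_ibfk_1 and
-- instance_info_caches_instance_uuid_fkey on the same column.
SELECT table_name, constraint_name
  FROM information_schema.table_constraints
 WHERE table_schema = 'nova'
   AND table_name IN ('instance_info_caches', 'virtual_interfaces')
   AND constraint_type = 'FOREIGN KEY';

-- The proposed fix then removes the duplicate, e.g.:
-- ALTER TABLE instance_info_caches
--   DROP FOREIGN KEY instance_info_caches_ibfk_1;
```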

On 10/30/13 11:42 AM, Blair Zajac wrote:

On 10/29/2013 05:16 PM, Joshua Hesketh wrote:

Hi Blair,

Thanks for the clarifications, that helps.

At the moment I'm trying to determine how you ended up with both fkeys
so I can ensure the problem is properly fixed. What version did you
first start deploying OpenStack from? Essex? And have you been
upgrading with releases, RCs, upstream, etc.?


I don't remember exactly, but here's my best shot. I keep my systems
up to date on Ubuntu, so:


1) Somewhere during 12.10 I installed OpenStack, I don't have notes on 
which one.


2) By the time I upgraded to 13.04 I had 2012.2.1-0ubuntu1.3 installed 
as I have the 'dpkg -l' output saved just before the upgrade.  I was 
using the packages from ubuntu-cloud.archive.canonical.com for 12.04 
even though I was on 12.10.  Upgrading to 13.04 upgraded OpenStack to 
1:2013.1-0ubuntu1.


3) When I upgraded to 13.10 I switched from 
ubuntu-cloud.archive.canonical.com back to 13.10's native OpenStack 
packages. This brought me to 1:2013.2~rc4-0ubuntu1, which has since been
updated to 1:2013.2-0ubuntu1.


Hope that helps.

Blair






Re: [Openstack] Grizzly -> Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2013-10-29 Thread Joshua Hesketh

Hi Blair,

Thanks for the clarifications, that helps.

At the moment I'm trying to determine how you ended up with both fkeys
so I can ensure the problem is properly fixed. What version did you
first start deploying OpenStack from? Essex? And have you been
upgrading with releases, RCs, upstream, etc.?


Thanks,
Josh

Rackspace Australia

On 10/30/13 10:10 AM, Blair Zajac wrote:

On 10/29/2013 04:04 PM, Joshua Hesketh wrote:

Hi Blair,

So from your dump a few weeks ago you have both fkeys
'instance_info_caches_ibfk_1' and
'instance_info_caches_instance_uuid_fkey'?


Hi Joshua,

Yes, that's correct.

Yet your original bug report
only lists the former key. Was the original bug report made after attempting
to run migration 185 and failing (i.e., no rollback)? If so I suspect
migration 185 dropped the 'instance_info_caches_instance_uuid_fkey' fkey
before failing. Perhaps the patch needs to check whether either exists.


Yes, my first email showing the table schema was after 185 and failing 
with no rollback.


Yes, the original bug report was generated by showing the table 
creation after 185 failed.



Similar question: was the dump you have just shown taken after 185 failed
with my patch? That would also explain why you can't see the old fkey.


Yes, that's correct.

And the dump of my db from a week ago shows both FKs:

--
-- Table structure for table `instance_info_caches`
--

DROP TABLE IF EXISTS `instance_info_caches`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `instance_info_caches` (
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `network_info` mediumtext,
  `instance_uuid` varchar(36) NOT NULL,
  `deleted` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `instance_uuid` (`instance_uuid`),
  CONSTRAINT `instance_info_caches_ibfk_1` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`),
  CONSTRAINT `instance_info_caches_instance_uuid_fkey` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`)

) ENGINE=InnoDB AUTO_INCREMENT=202 DEFAULT CHARSET=utf8;
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `instance_info_caches`
--


I didn't go looking to see what was going on until the patched 185
migration also failed.


Regards,
Blair



Thanks,
Josh

Rackspace Australia

On 10/29/13 3:50 PM, Blair Zajac wrote:

On 10/28/13 6:17 PM, Joshua Hesketh wrote:

Hi guys,

I have a patch in the works against this that will hopefully fix your
problems:
https://review.openstack.org/#/c/54212/

One of the gotchas, though, is that if you have already run migration
185 you can't run it again (even if it failed, because it'll try to do
operations that it got partway through before).

Once the patch is merged you may have to restore your database to a
previous backup and run the upgrade again.


Hi Josh,

Thanks for the patch.  I went back to a mysqldump from a week ago and
loaded it into MySQL and dropped the new
shadow_security_group_default_rules table. However, I'm getting the
same failure, even though that constraint is gone:

mysql> show create table instance_info_caches \G
*** 1. row ***
   Table: instance_info_caches
Create Table: CREATE TABLE `instance_info_caches` (
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `network_info` mediumtext,
  `instance_uuid` varchar(36) NOT NULL,
  `deleted` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `instance_uuid` (`instance_uuid`),
  CONSTRAINT `instance_info_caches_instance_uuid_fkey` FOREIGN KEY
(`instance_uuid`) REFERENCES `instances` (`uuid`)
) ENGINE=InnoDB AUTO_INCREMENT=202 DEFAULT CHARSET=utf8


mysql> alter table instance_info_caches drop index instance_uuid;
ERROR 1553 (HY000): Cannot drop index 'instance_uuid': needed in a
foreign key constraint


Looking at my dump from a week ago, it appears there are two identical
constraints with different names:


  CONSTRAINT `instance_info_caches_ibfk_1` FOREIGN KEY
(`instance_uuid`) REFERENCES `instances` (`uuid`),
  CONSTRAINT `instance_info_caches_instance_uuid_fkey` FOREIGN KEY
(`instance_uuid`) REFERENCES `instances` (`uuid`)


Blair










Re: [Openstack] Grizzly -> Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2013-10-29 Thread Joshua Hesketh

Hi Blair,

So from your dump a few weeks ago you have both fkeys
'instance_info_caches_ibfk_1' and
'instance_info_caches_instance_uuid_fkey'? Yet your original bug report
only lists the former key. Was the original bug report made after attempting
to run migration 185 and failing (i.e., no rollback)? If so I suspect
migration 185 dropped the 'instance_info_caches_instance_uuid_fkey' fkey
before failing. Perhaps the patch needs to check whether either exists.


Similar question: was the dump you have just shown taken after 185 failed
with my patch? That would also explain why you can't see the old fkey.


Thanks,
Josh

Rackspace Australia

On 10/29/13 3:50 PM, Blair Zajac wrote:

On 10/28/13 6:17 PM, Joshua Hesketh wrote:

Hi guys,

I have a patch in the works against this that will hopefully fix your 
problems:

https://review.openstack.org/#/c/54212/

One of the gotchas, though, is that if you have already run migration
185 you can't run it again (even if it failed, because it'll try to do
operations that it got partway through before).

Once the patch is merged you may have to restore your database to a
previous backup and run the upgrade again.


Hi Josh,

Thanks for the patch.  I went back to a mysqldump from a week ago and 
loaded it into MySQL and dropped the new 
shadow_security_group_default_rules table. However, I'm getting the 
same failure, even though that constraint is gone:


mysql> show create table instance_info_caches \G
*** 1. row ***
   Table: instance_info_caches
Create Table: CREATE TABLE `instance_info_caches` (
  `created_at` datetime DEFAULT NULL,
  `updated_at` datetime DEFAULT NULL,
  `deleted_at` datetime DEFAULT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `network_info` mediumtext,
  `instance_uuid` varchar(36) NOT NULL,
  `deleted` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `instance_uuid` (`instance_uuid`),
  CONSTRAINT `instance_info_caches_instance_uuid_fkey` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`)

) ENGINE=InnoDB AUTO_INCREMENT=202 DEFAULT CHARSET=utf8


mysql> alter table instance_info_caches drop index instance_uuid;
ERROR 1553 (HY000): Cannot drop index 'instance_uuid': needed in a 
foreign key constraint



Looking at my dump from a week ago, it appears there are two identical
constraints with different names:



  CONSTRAINT `instance_info_caches_ibfk_1` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`),
  CONSTRAINT `instance_info_caches_instance_uuid_fkey` FOREIGN KEY 
(`instance_uuid`) REFERENCES `instances` (`uuid`)



Blair






Re: [Openstack] Grizzly -> Havana nova upgrade failure: Cannot drop index 'instance_uuid'

2013-10-28 Thread Joshua Hesketh

Hi guys,

I have a patch in the works against this that will hopefully fix your 
problems:

https://review.openstack.org/#/c/54212/

One of the gotchas, though, is that if you have already run migration 185
you can't run it again (even if it failed, because it'll try to do
operations that it got partway through before).

Once the patch is merged you may have to restore your database to a
previous backup and run the upgrade again.


Cheers,
Josh

Rackspace Australia

On 10/29/13 1:01 AM, Blair Zajac wrote:

On 10/27/13 4:39 PM, Michael Still wrote:

These sound like bugs worth filing...


I opened https://bugs.launchpad.net/nova/+bug/1245502

Blair



