Re: [Openstack] Incremental Backup of Instances

2012-07-24 Thread Wolfgang Hennerbichler
You could solve something like this with Bacula incremental backups. I'm
doing this (though not with OpenStack) as follows: make an LVM snapshot (or
qcow2 snapshot) of the instance and write a script that mounts the
filesystem so Bacula can back it up incrementally. Bacula can drive the
snapshotting and mounting itself, so you don't need to coordinate when to
create a snapshot and so on. This works very well, but it is also hard to
configure. To restore a machine, you create an LV (in my case) and restore
the whole filesystem there. This works very well for Linux VMs; it might be
harder with Windows VMs.
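
A minimal sketch of the pre-backup step described above, with hypothetical
volume and mount names; Bacula can run something like this from a
ClientRunBeforeJob directive, with a matching post-job script to unmount
and remove the snapshot:

    #!/usr/bin/env python
    # Hypothetical pre-backup hook: snapshot an LV and mount it read-only
    # so Bacula can back the filesystem up incrementally.
    import subprocess

    VG, LV = 'vg0', 'instance-disk'            # illustrative names
    SNAP, MNT = 'instance-disk-snap', '/mnt/backup-snap'

    def run(*cmd):
        subprocess.check_call(cmd)

    run('lvcreate', '--snapshot', '--size', '1G',
        '--name', SNAP, '/dev/%s/%s' % (VG, LV))
    run('mount', '-o', 'ro', '/dev/%s/%s' % (VG, SNAP), MNT)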


Wolfgang

On 07/23/2012 05:22 AM, Kobagana Kumar wrote:

Hi All,

I am working on delta changes of an instance. Can you please tell me

the procedure for taking incremental backups (delta changes) of VMs,

instead of taking a snapshot of the entire instance?

And also, please tell me how to apply those delta changes to an instance.

Thanks & Regards,

Bharath Kumar Kobagana





--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at



Re: [Openstack] High Available queues in rabbitmq

2012-07-24 Thread Alessandro Tagliapietra
Sorry for the delay, I was out of the office.
Awesome work Eugene, I don't need the patch immediately as I'm still building
the infrastructure. Will it take a lot of time to reach the Ubuntu
repositories?

Why did you say you need load balancing? You could use only the master node
and, in case rabbitmq-server dies, switch the IP to the new master with
Pacemaker; that's how I would do it.

Best Regards

Alessandro


On 23 July 2012, at 21:49, Eugene Kirpichov wrote:

 +openstack-dev@
 
 To openstack-dev: this is a discussion of an upcoming patch adding
 native RabbitMQ H/A support to nova. I'll post the patch for
 code review today.
 
 On Mon, Jul 23, 2012 at 12:46 PM, Eugene Kirpichov ekirpic...@gmail.com 
 wrote:
 Yup, that's basically the same thing that Jay suggested :) Obvious in
 retrospect...
 
 On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh ogelb...@mirantis.com wrote:
 Eugene,
 
 I suggest just adding an option 'rabbit_servers' that overrides the
 'rabbit_host'/'rabbit_port' pair if present. This won't break anything, in
 my understanding.
 
 --
 Best regards,
 Oleg Gelbukh
 Mirantis, Inc.
 
 
 On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
 
 Hi,
 
 I'm working on a RabbitMQ H/A patch right now.
 
 It actually involves more than just using H/A queues (unless you're
 willing to add a TCP load balancer on top of your RMQ cluster).
 You also need to add support for multiple RabbitMQ's directly to nova.
 This is not hard at all, and I have the patch ready and tested in
 production.
 
 Alessandro, if you need this urgently, I can send you the patch right
 now, before the discussion and code review for inclusion in core nova.
 
 The only problem is that it breaks backward compatibility a bit: my patch
 assumes you have a flag rabbit_addresses, which should look like
 rmq-host1:5672,rmq-host2:5672, instead of the prior rabbit_host and
 rabbit_port flags.
 
 Guys, can you advise on a way to do this without being ugly and
 without breaking compatibility?
 Maybe make rabbit_host and rabbit_port ListOpts? But that sounds
 weird, as their names are singular.
 Maybe have rabbit_host, rabbit_port and also rabbit_host2,
 rabbit_port2 (assuming we only have clusters of 2 nodes)?
 Something else?
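
For illustration only, a minimal sketch of the ListOpt variant being
debated, assuming the cfg module carried in openstack-common at the time;
the option name, default, and helper below are hypothetical:

    # Sketch: one list-valued option instead of rabbit_host/rabbit_port.
    from openstack.common import cfg  # assumed import path, circa 2012

    rabbit_opts = [
        cfg.ListOpt('rabbit_addresses',
                    default=['localhost:5672'],
                    help='host:port pairs of the RabbitMQ cluster nodes'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(rabbit_opts)

    def broker_addresses():
        """Yield (host, port) tuples parsed from the flag."""
        for addr in CONF.rabbit_addresses:
            host, _, port = addr.partition(':')
            yield host, int(port or 5672)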
 
 On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes jaypi...@gmail.com wrote:
 On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
 Hi guys,
 
 Just an idea: I'm deploying OpenStack and trying to make it HA.
 The missing piece is RabbitMQ, which can easily be started in
 active/active mode, but it needs the queues to be declared with an
 x-ha-policy entry.
 http://www.rabbitmq.com/ha.html
 It would be nice to add a config entry to be able to declare the queues
 that way.
 If someone knows where to edit the OpenStack code, please point me at it;
 otherwise I'll try to do that myself in the next few weeks.
 
 
 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py
 
 You'll need to add the config options there and the queue is declared
 here with the options supplied to the ConsumerBase constructor:
 
 
 https://github.com/openstack/openstack-common/blob/master/openstack/common/rpc/impl_kombu.py#L114
 
 Best,
 -jay
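
Not from the thread, but to make the pointer concrete: a minimal kombu
sketch of declaring a queue with the x-ha-policy argument mentioned above
(broker URL, exchange, and queue names are placeholders):

    import kombu

    conn = kombu.BrokerConnection('amqp://guest:guest@rmq-host1:5672//')
    channel = conn.channel()

    exchange = kombu.Exchange('nova', type='topic', durable=False)
    # queue_arguments carries broker-specific arguments such as RabbitMQ's
    # x-ha-policy, which mirrors the queue across the cluster.
    queue = kombu.Queue('compute.host1', exchange,
                        routing_key='compute.host1',
                        queue_arguments={'x-ha-policy': 'all'})
    queue(channel).declare()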
 
 
 
 
 -- 
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov
 


Re: [Openstack] [HPC] BoF at SC12

2012-07-24 Thread Patrick Petit
Hi Lorin,
As every year, there will be a Bull delegation participating in this event,
so we will surely attend your session if you organize one.
It would be a good opportunity to meet and discuss joining efforts.
Best regards,
Patrick


2012/7/23 Lorin Hochstein lo...@nimbisservices.com

 On Jul 6, 2012, at 1:28 PM, John Paul Walters wrote:

 I'm strongly considering putting together a proposal for a BoF (birds of a
 feather) session at this year's Supercomputing in Salt Lake City. For
 those of you who are likely to attend, is anyone else interested? It's not
 a huge amount of time invested on my end to put together the proposal, but
 I'd like to gauge the community's interest before doing so. I would likely
 broaden things a bit from being exclusively OpenStack and instead turn it
 into more of an HPC in the Cloud session so that we could, perhaps, take
 some input from other HPC cloud projects. The submissions are due July
 31, so we've got a little bit of time, but not too much. Anyone else
 interested?

 best,
 JP


 JP:

 I think this is a great idea; we were thinking about proposing it if
 nobody else did. I would suggest making it OpenStack-specific, since there
 was an HPC in the Cloud BoF last year (
 http://sc11.supercomputing.org/schedule/event_detail.php?evid=bof140),
 and they'll probably re-apply this year as well. I think we can get
 critical mass for an OpenStack BoF.

 Along these lines: Chris Hoge from U. Oregon gave a talk last week at
 OSCON about their use of OpenStack for HPC:
 http://www.oscon.com/oscon2012/public/schedule/detail/24261

 (There are some good slides attached to that web page)

 Take care,

 Lorin
 --
 Lorin Hochstein
 Lead Architect - Cloud Services
 Nimbis Services, Inc.
 www.nimbisservices.com








-- 
Give me a place to stand, and I shall move the earth with a lever


Re: [Openstack] Looking for an openstack developer job

2012-07-24 Thread Zhongyue Luo
Hey, Hengqing

http://www.openstack.org/community/jobs/

See you at the APEC conference.

Cheers,
LZY

On Tue, Jul 24, 2012 at 11:39 AM, Hengqing Hu huda...@hotmail.com wrote:

 Hi,

 Sorry for the disturbance.

 This is Hengqing Hu from Shanghai, China, 29 years old, male,
 looking for an OpenStack developer job.
 I prefer to work from home and am legally allowed
 to work in China; I would also accept overseas jobs if offered.
 If you are seeking an OpenStack developer,
 have a look at my resume here:
 https://www.dropbox.com/s/41gzc974s6ay9uy/ResumeOfHengqingHuDetailed.pdf

 Anyone kind enough to refer me to their employer would also be
 appreciated.

 I may not be good at expressing myself, but I am good at solving technical
 problems.

 Best Regards, Hengqing Hu



[Openstack] VM High Availability and Floating IP

2012-07-24 Thread Alessandro Tagliapietra
Hi guys,

I have two missing pieces in my HA OpenStack install. Currently all OpenStack
services are managed by Pacemaker, and I can successfully start/stop VMs etc.
even when the cloud controller is down (I have only two servers at the moment).

1 - How can I make a VM HA? Live migration works fine, but if a host goes
down, how can I restart the VM on the other host? Should I edit the 'host'
column in the database and issue a restart of the VM? Is there any other way?

2 - The servers are hosted at Hetzner; for floating IPs we've bought failover
IPs, which are assigned to each host and can be changed via the API. So I have
to make sure that if a VM is on host1, the floating IP associated with that VM
is routed to host1. My idea was to run a job that checks the floating IPs
already associated with any VM, queries the VM info, checks which host it is
running on and, if that differs from the previous check, calls the Hetzner API
to switch the IP to the other server (see the sketch below). Any other ideas?
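
A rough sketch of that reconciliation job; the Hetzner calls and the nova
lookup are reduced to placeholders here, since nothing in this thread
specifies their real APIs:

    import time

    def floating_ip_assignments():
        """Placeholder: yield (floating_ip, hypervisor_host) pairs, e.g.
        from nova's floating_ips table joined against the instance host."""
        return []

    def current_route(ip):
        """Placeholder: ask the Hetzner failover API where ip points now."""
        raise NotImplementedError

    def switch_route(ip, host):
        """Placeholder: repoint the failover IP at host via the Hetzner API."""
        raise NotImplementedError

    while True:
        for ip, host in floating_ip_assignments():
            if current_route(ip) != host:
                switch_route(ip, host)  # VM moved; follow it with the IP
        time.sleep(30)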

Thanks in advance

Best Regards

-- 
Alessandro Tagliapietra | VISup srl
piazza 4 novembre 7
20124 Milano

http://www.visup.it



Re: [Openstack] Re: [openstack] nova-compute always dead on RHEL6.1

2012-07-24 Thread Pádraig Brady
On 07/24/2012 02:05 AM, 延生 付 wrote:
 Dear Padraig,
  
 Thanks for the help. I can see the log from the console; it is really
 strange that there is no log generated under /var/log/nova, as the
 permissions are open to nova.

Ensure there are no root-owned files under /var/log/nova.

 Another question: Apache Qpid cannot be connected to, even from the same
 server. I set it up with defaults, no further config.
 I can see that port 5672 is being listened on, and I also turned iptables
 off. Is there any Qpid log I can refer to?

Try setting auth=no in /etc/qpidd.conf

Note these instructions have been used successfully by many:
https://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL

cheers,
Pádraig.



[Openstack] public-interface ignored

2012-07-24 Thread Wolfgang Hennerbichler

Hi,

I'm running OpenStack Essex with VLAN networking:
--network_manager=nova.network.manager.VlanManager
--vlan_interface=bond0
--public_interface=bridge_130
--auto_assign_floating_ip=True

All works great except for floating IPs. The problem is that the
floating IPs are assigned to eth0 instead of bridge_130. I can't get a
clue out of the logs:


2012-07-24 11:24:02 DEBUG nova.service [-] public_interface : bridge_130 
from (pid=31192) wait /usr/lib/python2.7/dist-packages/nova/service.py:411


=> So during startup it realizes that bridge_130 is the
public interface. Great. But then the floating IP is assigned:


2012-07-24 11:24:26 DEBUG nova.utils [-] Running cmd (subprocess): sudo 
nova-rootwrap ip addr add xx.xx.xx.129/32 dev eth0 from (pid=31192) 
execute /usr/lib/python2.7/dist-packages/nova/utils.py:219


This process finishes without errors; the problem is that eth0 is just
not my public network interface. What's wrong? Is there something in the
database that I should change?


mysql> select * from networks \G
*************************** 1. row ***************************
         created_at: 2012-07-24 08:17:18
         updated_at: 2012-07-24 08:18:48
         deleted_at: NULL
            deleted: 0
                 id: 1
           injected: 0
               cidr: 10.131.0.0/24
            netmask: 255.255.255.0
             bridge: novabr131
            gateway: 10.131.0.1
          broadcast: 10.131.0.255
               dns1: 8.8.8.8
               vlan: 131
 vpn_public_address: xx.xx.xx.xx
    vpn_public_port: NULL
vpn_private_address: 10.131.0.3
         dhcp_start: 10.131.0.2
         project_id: 7edc3284f53f4e02b92d498db41b842d
               host: NULL
            cidr_v6: NULL
         gateway_v6: NULL
              label: RISCSW
         netmask_v6: NULL
   bridge_interface: bond0
         multi_host: 1
               dns2: NULL
               uuid: 96ad4088-b9bb-436f-b381-2edefa8d5567
           priority: NULL
          rxtx_base: NULL
1 row in set (0.00 sec)

Help needed... thanks in advance.
Wolfgang

--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at



Re: [Openstack] VM High Availability and Floating IP

2012-07-24 Thread Jay Pipes
On 07/24/2012 04:29 AM, Alessandro Tagliapietra wrote:
 Hi guys,
 
 I have two missing pieces in my HA OpenStack install. Currently all
 OpenStack services are managed by Pacemaker, and I can successfully
 start/stop VMs etc. even when the cloud controller is down (I have only
 two servers at the moment).

 1 - How can I make a VM HA? Live migration works fine, but if a host goes
 down, how can I restart the VM on the other host? Should I edit the 'host'
 column in the database and issue a restart of the VM? Is there any other
 way?

Check out the Heat API:

https://github.com/heat-api/heat/wiki/

 2 - The servers are hosted at Hetzner; for floating IPs we've bought
 failover IPs, which are assigned to each host and can be changed via the
 API. So I have to make sure that if a VM is on host1, the floating IP
 associated with that VM is routed to host1. My idea was to run a job that
 checks the floating IPs already associated with any VM, queries the VM
 info, checks which host it is running on and, if that differs from the
 previous check, calls the Hetzner API to switch the IP to the other
 server. Any other ideas?

See above :)

Best,
-jay

 Thanks in advance
 
 Best Regards
 
 -- 
 Alessandro Tagliapietra | VISup srl
 piazza 4 novembre 7
 20124 Milano
 
 http://www.visup.it
 
 
 
 



Re: [Openstack] public-interface ignored

2012-07-24 Thread Wolfgang Hennerbichler



On 07/24/2012 02:51 PM, Jay Pipes wrote:


Looking through the code, in the
nova.network.manager.FloatingIP.associate_floating_ip() call, I see this:

 interface = FLAGS.public_interface or floating_ip['interface']

on line 511 (in the current HEAD of the branch).

The only thing I could think of is that you created your floating IPs
before setting your FLAGS.public_interface?


True. Hahaha. Thanks a bunch for the hint.

mysql> select * from floating_ips limit 1 \G
*************************** 1. row ***************************
   created_at: 2012-07-24 08:17:19
   updated_at: 2012-07-24 10:55:45
   deleted_at: NULL
      deleted: 0
           id: 1
      address: 193.170.32.129
  fixed_ip_id: 3
   project_id: 7edc3284f53f4e02b92d498db41b842d
         host: computeNode1
auto_assigned: 1
         pool: nova
    interface: eth0
1 row in set (0.00 sec)



Best,
-jay





--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at



Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Jay Pipes
Thanks Matt, comments inline...

On 07/23/2012 05:25 PM, Matt Joyce wrote:
 I wish to add some data to the metadata server that can be found
 somewhere else. That a user could jump through a hoop or two to add to
 their instances. Esteemed personages are concerned that I would be
 crossing the rubicon in terms of opening up the metadata api for wanton
 abuse. They are not without a right or reason to be concerned. And
 that is why I am going to attempt to explicitly classify a new category
 of data that we might wish to allow into the metadata server. If we can
 be clear about what we are allowing we can avoid abuse.
 
 I want to provide a uniform (standardized?) way for instances in the
 openstack cloud to communicate back to the OpenStack APIs without having
 to be provided data by the users of the cloud services.
 
 Let's be clear here... are you talking about the OpenStack Compute API
 or are you talking about the OpenStack Metadata service which is merely
 the EC2 Metadata API? We already have the config-drive extension [1]
 that allows information and files to be injected into the instance and
 loaded as a readonly device. The information in the config-drive can
 include things like the Keystone URI for the cell/AZ in which an
 instance resides.
 
 I mean the OpenStack Metadata service.

Sorry, I'm still confused. The only actual stand-alone service in
OpenStack for metadata is the OpenStack EC2 Metadata service that runs
on the fixed AWS 169.254.169.254 address. Are you referring to this, or
are you referring to the /servers/SERVER_ID/metadata call in the
OpenStack Compute API v2?

 The config drive extension does
 not as far as I am aware produce a uniform path for data like this. 

Absolutely correct. The community would need to come to a consensus on
this uniformity, just as Amazon came to the decision to hard-code the
169.254.169.254 address.

 This API query should be the same from openstack deployment to openstack
 deployment to ensure portability of instances relying on this API query
 to figure out where the catalog service is.  By uniform I mean it has
 all the love care and backwards versioning support as a traditional API
 query.

Agree completely.

 The config-drive seems more intended to be user customized
 rather than considered a community supported API query.

Well, that may be the case, but as mentioned above, I think we could
*use* config-drive along with a community consensus on a uniform place
to store lookup information for a real OpenStack metadata service --
things like a private key, info file containing the IP of the nearest
metadata service, etc...

 
 Today the mechanism by which this is done is catastrophically difficult
 for a new user.
 
 Are you specifically referring here to the calls that, say, cloud-init
 makes to the (assumed to be running) EC2 metadata API service at
 http://169.254.169.254/latest/? Or something different? Just want to
 make sure I'm understanding what you are referring to as difficult.
 
 I am referring to the whole new user experience.  Anything custom to a
 deployment of openstack is now outside of our control and is not
 portable. 

Sure, completely agree!

 Also a new user will not be prepared to inject user data
 properly. 

Well, I'm actually not suggesting having the user really be involved at
all with the injection of keys/information into the config-drive :) That
would be done by Nova.

When a user currently launches an image in OpenStack, that image
connects to the EC2 metadata service automatically if cloud-init is
installed in the image. I am picturing a similar scenario for this
config-drive stuff -- only instead of cloud-init needing to be installed
on the image, I'm suggesting Nova create a standard (uniform)
config-drive (or part of a config-drive) that contained upstart/startup
scripts, keys, and info for connecting to some OpenStack Metadata service.
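
To make that concrete, a sketch of what an image-side consumer might look
like, assuming a (hypothetical, not yet agreed) convention of a JSON info
file on the mounted config-drive:

    import json

    # Hypothetical layout: Nova would have written this file onto the
    # config-drive, which the image mounts read-only at boot.
    INFO_PATH = '/mnt/config/openstack/endpoints.json'

    with open(INFO_PATH) as f:
        info = json.load(f)

    keystone_uri = info['keystone-endpoint']  # key name is illustrative
    print 'authenticating against', keystone_uri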

 Going further, and a bit onto an irate tangent: Horizon has a
 really roundabout and completely non-intuitive way of providing users
 with info on where the API servers are. I.e., you have to generate an
 OpenStack credentials file, download it, look at it in a text
 editor, and then know what it is you are looking at. To find your
 tenant_name you have to guess in the dark that Horizon is referring to
 your tenant name as a project.

Heh, well, you know where I stand on the whole tenant vs. project theme
:) That said, it's a bit of a tangent, as you admit to above :)

 The whole thing is insane. What I am
 talking about here is a first step in allowing image builders to
 integrate into OpenStack in a uniform way across all installations (or
 most). And that will allow people to reduce the overall pain on new
 users of cloud at their pleasure. I am asking for this based on my
 experience trying to do this outside of 

[Openstack] Weekly Hyper-V meeting on Freenode/#openstack-hyper-v

2012-07-24 Thread Peter Pouliot
Hello Everyone,

This is for anyone interested in Hyper-V/Openstack integration efforts.

Our weekly meeting is at 11:00 AM EST on IRC (Freenode): #openstack-hyper-v

Today we will be discussing the following:


* Finalizing Code cleanup

* ProxyMock WMI adoption:  Mock interface for dynamically collecting 
method data to be used in building dummy drivers for unit testing. (more to 
come from Alessandro Pilotti).

* Status of the CI infrastructure to support Hyper-V integration

* AD integration


Peter J. Pouliot, CISSP
Senior SDET, OpenStack

Microsoft
New England Research & Development Center
One Memorial Drive,Cambridge, MA 02142
ppoul...@microsoft.com | Tel: +1 (857) 453 6436



Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Scott Moser
On Sat, 21 Jul 2012, Matt Joyce wrote:

 Preamble:

 Until now, all data that is made available by the metadata server has been
 data that cannot be found anywhere else at the time it may be needed.

 In short, an instance can't be passed its instance id before its instance
 id has been allocated, so a user cannot pass it to an instance that is
 being started up. So whether a user wants to jump through a few hoops or
 not to pass their instance its own instance id... they simply cannot
 without the metadata api being there to provide it at creation time.

 This means that the metadata server holds an uneasy place as a necessary
 clearing house (evil?) of data that just doesn't have another place to
 be. It's not secure, it's not authenticated, and it's a little scary that
 it exists at all.

 I wish to add some data to the metadata server that can be found somewhere
 else.  That a user could jump through a hoop or two to add to their
 instances.  Esteemed personages are concerned that I would be crossing the

Thank you for using 'Esteemed' rather than 'no good @#@$! %!$@$ %#%#%#%',
which I'm sure you considered.

 rubicon in terms of opening up the metadata api for wanton abuse.  They are
 not without a right or reason to be concerned.  And that is why I am going
 to attempt to explicitly classify a new category of data that we might wish
 to allow into the metadata server.  If we can be clear about what we are
 allowing we can avoid abuse.

As Jay was confused, I think we really need to separate the notion of EC2
API and EC2 Instance Metadata API.  Right now, nova is *very* usable
without use of the external EC2 API.  It is much more dependent on the
Instance Metadata API.

Config drive is one alternative to the EC2 metadata service.

Matt's mail discusses only the Instance Metadata API

I had a blueprint at the last summit that covered making config-drive and
the ec2 metadata service basically contain the same data.
  http://etherpad.openstack.org/FolsomNovaConfigDriveImprovements

The general idea there was to replace the current config-drive format with
basically a rendering of the data found in the metadata service.  This
proposal met with surprisingly little complaint.  I planned on then also
extending that by adding to both the metadata service and the config-drive
openstack metadata content.

This bit of information that Matt is interested in would fit under the
openstack metadata content.  But would not be available under the EC2
data.

My plan for backwards compatibility was basically that we would make:
  * http://169.254.169.254/ : look just like the ec2 metadata like thing
that we have now in nova.  A request here would dump the same content
it currently does, which is a list of -MM-DD versions.
  * http://169.254.169.254/ec2 point to http://169.254.169.254/
  * http://169.254.169.254/openstack/ be the well known location of the
openstack metadata service.

The config drive basically then looks just like the metadata service.

We then have to define the content that exists at
/openstack, basically creating the first version of the openstack metadata
service api. I personally see nothing wrong with YYYY-MM-DD for a
version, as it sorts so well, and it is generally immediately understood
what it means.

I planned to simply have the top-level entry list the versions available,
just like the EC2 one does. I.e., it would have something like:
  2012-07-25
  latest
A request to http://169.254.169.254/openstack/2012-07-25 would return
JSON formatted data.  That data would have things like:
  instance-id
  ami-id
  keystone-endpoint -- for Matt
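
A sketch of what a client of this proposed layout might do; the path and
key names below are the proposal above, not an existing service:

    import json
    import urllib2

    BASE = 'http://169.254.169.254/openstack'

    versions = urllib2.urlopen(BASE + '/').read().split()
    latest = sorted(v for v in versions if v[0].isdigit())[-1]  # YYYY-MM-DD sorts

    md = json.load(urllib2.urlopen('%s/%s' % (BASE, latest)))
    print md['instance-id']
    print md.get('keystone-endpoint')  # proposed key for Matt's use case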

What I said to Matt was that I think we need to consider (and scrutinize)
changes to the data available in the openstack metadata service the same
way we would an API change.

My choice of the keyname 'keystone-endpoint' is probably bad, but in
making a bad choice, it shows the value in making good choices, which
aren't made without thought and deliberation.

I simply haven't gotten around to doing the work above.  If someone else
wants to take a stab at it, I'd love to help and work together on it.

 My current work effort in regards to this is related to passing keystone
 credentials to instances via PAM authentication. So I can do a number of
 API-related queries into openstack because I have credentials available to
 the OS that are dynamically allocated. But to make my image portable I
 need to avoid baking in the keystone API URI.

I promised Matt I would not complain, so my comments here have been deleted. :)



Re: [Openstack] High Available queues in rabbitmq

2012-07-24 Thread Eugene Kirpichov
Hi Alessandro,

My patch is about removing the need for Pacemaker (and it's Pacemaker
that I meant by the term "TCP load balancer").

I didn't submit the patch yesterday because I underestimated the
effort to write unit tests for it and found a few issues along the way. I
hope I'll finish today.

On Tue, Jul 24, 2012 at 12:00 AM, Alessandro Tagliapietra
tagliapietra.alessan...@gmail.com wrote:
 Sorry for the delay, I was out of the office.
 Awesome work Eugene, I don't need the patch immediately as I'm still
 building the infrastructure. Will it take a lot of time to reach the
 Ubuntu repositories?

 Why did you say you need load balancing? You could use only the master
 node and, in case rabbitmq-server dies, switch the IP to the new master
 with Pacemaker; that's how I would do it.

 Best Regards

 Alessandro






-- 
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov



[Openstack] AWS CreateImage compat

2012-07-24 Thread Jorge Niedbalski
Hello guys,

Can you point me to any effort (patch or proposal) to provide a
CreateImage API endpoint fully compatible with the Amazon EC2 API?

Boto's create_image method is broken with the current API.

-- 
Jorge Niedbalski R.
--



Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Matt Joyce
Scott, thanks =P


Re: [Openstack] Stackato and OPenStack as PAAS

2012-07-24 Thread Diane Mueller

Frans, Michael, and Matt,

Stackato is from ActiveState and you can find more information here 
about running Stackato on OpenStack:


http://docs.stackato.com/server/openstack.html?highlight=openstack

Also, we're no longer in beta: we have been GA since 02/2012 and are now
at v2.0 of a commercially supported release of Stackato.


We've been happily deploying and auto-scaling on multiple versions of
OpenStack since last October, including HP Cloud's OpenStack.

You can get access and test it out here:

http://www.activestate.com/stackato/get_stackato

I also just did a presentation at OSCON on adventures deploying PaaS on
the open cloud, with Jeff Hobbs, our CTO.

Slides here:

http://www.slideshare.net/OReillyOSCON/oscon-2012-adventures-in-deploying-paa-s-in-the-open-cloud-the-activestate-stackato-story



Diane Mueller
Cloud Evangelist
ActiveState - Code to Cloud: Smarter, Safer, Faster™
C: 1.604.765.3635

E: dia...@activestate.com
Web: http://www.activestate.com

Learn about Stackato for Private PaaS: http://www.activestate.com/stackato

On 14/07/2012 6:51 PM, Michael Basnight wrote:

Hey Frans,

There are a few that I can think of: Reddwarf (my project) [1], platformlayer
[2], and heat [3]. They are all a bit different and I encourage you to look
around. I bet $$ there are more that I've forgotten, which I apologize for in
advance :)

Sent from my digital shackles.

[1] https://github.com/hub-cap/reddwarf_lite
[2] https://github.com/platformlayer/platformlayer
[3] http://heat-api.org/

On Jul 14, 2012, at 8:15 PM, Frans Thamura fr...@meruvian.org wrote:


Hi all,

One of the companies here has set up Stackato on OpenStack, and it is in the
beta phase. Any ideas or comments regarding PaaS implementations on
OpenStack?

F






Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Martin Packman
On 23/07/2012, Jay Pipes jaypi...@gmail.com wrote:

 This is only due to the asinine EC2 API -- or rather the asinine
 implementation in EC2 that doesn't create an instance ID before the
 instance is launched.

So, I'm curious, how do you allocate a server id in advance using the
openstack api so you can pass it in rather than relying on an external
metadata service? I've not seen anything in the documentation
describing how to do that.

Martin



Re: [Openstack] Stackato and OPenStack as PAAS

2012-07-24 Thread Frans Thamura
Hi Diane, you can see your customer contact, Dondy from Infinsy; we worked
together to test his cloud. Yes, 16 blades, powered by OpenStack and
Stackato :)

This is the app that runs on their cloud, our Yama from Meruvian:

yama.atisicloud.net

Yes, they use Stackato.

So we have two production OpenStack deployments in this country, one using
Nova and one using Swift.

F




Re: [Openstack] VM High Availability and Floating IP

2012-07-24 Thread Alessandro Tagliapietra
Thank you Jay, I'd never read about that.
It seems something like Scalr/Chef, which handles the application layer and
keeps a minimum number of VMs running?

Best

Alessandro

On 24 July 2012, at 14:34, Jay Pipes wrote:

 Check out the Heat API:

 https://github.com/heat-api/heat/wiki/

 Best,
 -jay




Re: [Openstack] VM High Availability and Floating IP

2012-07-24 Thread Jay Pipes
On 07/24/2012 12:52 PM, Alessandro Tagliapietra wrote:
 Thank you Jay, I'd never read about that.
 It seems something like Scalr/Chef, which handles the application layer
 and keeps a minimum number of VMs running?

Yeah, kinda.. just one more way of doing things... :)
-jay



Re: [Openstack] High Available queues in rabbitmq

2012-07-24 Thread Alessandro Tagliapietra
Oh, so without the need to float an IP between hosts.
Good job, thanks for the help.

Best

Alessandro

On 24 July 2012, at 17:49, Eugene Kirpichov wrote:

 Hi Alessandro,

 My patch is about removing the need for Pacemaker (and it's Pacemaker
 that I meant by the term "TCP load balancer").

 I didn't submit the patch yesterday because I underestimated the
 effort to write unit tests for it and found a few issues along the way. I
 hope I'll finish today.
 
Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Jay Pipes
On 07/24/2012 12:47 PM, Martin Packman wrote:
 On 23/07/2012, Jay Pipes jaypi...@gmail.com wrote:

 This is only due to the asinine EC2 API -- or rather the asinine
 implementation in EC2 that doesn't create an instance ID before the
 instance is launched.
 
 So, I'm curious, how do you allocate a server id in advance using the
 openstack api so you can pass it in rather than relying on an external
 metadata service? I've not seen anything in the documentation
 describing how to do that.

The OpenStack Compute API POST /servers command creates a server UUID
that is passed back in the initial response and allows the user to query
the status of the server throughout its launch sequence.

http://docs.openstack.org/api/openstack-compute/2/content/CreateServers.html

-jay
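
As a concrete illustration (endpoint, token, and image/flavor refs are
placeholders), the UUID is available from the very first response, before
the instance has booted:

    import json
    import urllib2

    body = {'server': {'name': 'demo',
                       'imageRef': 'IMAGE_UUID',
                       'flavorRef': '1'}}
    req = urllib2.Request(
        'http://compute.example.com:8774/v2/TENANT_ID/servers',
        json.dumps(body),
        {'Content-Type': 'application/json', 'X-Auth-Token': 'TOKEN'})
    server = json.load(urllib2.urlopen(req))['server']
    print server['id']  # known immediately, long before the instance is ACTIVE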



Re: [Openstack] VM High Availability and Floating IP

2012-07-24 Thread Alessandro Tagliapietra
But I don't see any part (except the future plans) talking about HA at the
instance level; it seems aimed more at the application level.

On 24 July 2012, at 18:56, Jay Pipes wrote:

 On 07/24/2012 12:52 PM, Alessandro Tagliapietra wrote:
 Thank you Jay, I'd never read about that.
 It seems something like Scalr/Chef, which handles the application layer
 and keeps a minimum number of VMs running?
 
 Yeah, kinda.. just one more way of doing things... :)
 -jay




Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Martin Packman
On 24/07/2012, Jay Pipes jaypi...@gmail.com wrote:

 The OpenStack Compute API POST /servers command creates a server UUID
 that is passed back in the initial response and allows the user to query
 the status of the server throughout its launch sequence.

I'm not really seeing how that improves on the situation compared to
the EC2 api. If a server needs to know its own id, it must either
communicate with an external service or be able to use the compute
api, which means putting credentials on the instance. Or am I missing
a trick?

Martin



Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Jay Pipes
On 07/24/2012 01:10 PM, Martin Packman wrote:
 On 24/07/2012, Jay Pipes jaypi...@gmail.com wrote:

 The OpenStack Compute API POST /servers command creates a server UUID
 that is passed back in the initial response and allows the user to query
 the status of the server throughout its launch sequence.
 
 I'm not really seeing how that improves on the situation compared to
 the EC2 api. If a server needs to know its own id, it must either
 communicate with an external service or be able to use the compute
 api, which means putting credentials on the instance. Or am I missing
 a trick?

All I am saying is that Nova knows the instance's ID at the time that a
config-drive can be created and installed into the instance. You can't
do that with the EC2 API user-data mechanism, but you can with
config-drive, which is why I was recommending it.

Best,
-jay



Re: [Openstack] [nova] a proposal to change metadata API data

2012-07-24 Thread Matt Joyce
What I am doing is putting credentials into user sessions on the instance.

-Matt

On Tue, Jul 24, 2012 at 10:10 AM, Martin Packman 
martin.pack...@canonical.com wrote:

 On 24/07/2012, Jay Pipes jaypi...@gmail.com wrote:
 
  The OpenStack Compute API POST /servers command creates a server UUID
  that is passed back in the initial response and allows the user to query
  the status of the server throughout its launch sequence.

 I'm not really seeing how that improves on the situation compared to
 the EC2 api. If a server needs to know its own id, it must either
 communicate with an external service or be able to use the compute
 api, which means putting credentials on the instance. Or am I missing
 a trick?

 Martin




Re: [Openstack] VM High Availability and Floating IP

2012-07-24 Thread Steven Dake
On 07/24/2012 09:52 AM, Alessandro Tagliapietra wrote:
 Thank you Jay, I'd never read about that.
 It seems something like Scalr/Chef, which handles the application layer
 and keeps a minimum number of VMs running?
 

The idea of keeping a minimum number of VMs running based upon VM load
is called auto-scaling.  We have added auto-scaling to our v5 release
(which is targeted for July 30th).

As far as puppet/chef integration goes, CloudFormation integrates well
with both.

For chef read:
http://www.full360.com/blogs/integrating-aws-cloudformation-and-chef

For puppet read:
https://s3.amazonaws.com/cloudformation-examples/IntegratingAWSCloudFormationWithPuppet.pdf

Note while these links talk about AWS CloudFormation, Heat is
essentially an AWS CloudFormation implementation for OpenStack.

If you want to get started and give heat a try on Fedora 17+ or Ubuntu
Precise, our getting started guides are in our wiki:

https://github.com/heat-api/heat/wiki

Let me know if you have follow-up questions.

Regards
-steve

 Best
 
 Alessandro
 


[Openstack] Ceph + Live Migration

2012-07-24 Thread Mark Moseley
This is more of a sanity check than anything else:

Does the RBDDriver in Diablo support live migration?

I was playing with this yesterday and couldn't get live migration to
succeed. The errors I was getting seem to trace back to the fact that
the RBDDriver doesn't override VolumeDriver's check_for_export call,
which just defaults to NotImplementedError. Looking at the latest
Folsom code, there's still no check_for_export in the
RBDDriver, which makes me think that in my PoC install (which I've
switched between qcow2, iSCSI and ceph repeatedly) I've somehow made
something unhappy.

2012-07-24 13:44:29 TRACE nova.compute.manager [instance:
79c8c14f-43bf-4eaf-af94-bc578c82f921]
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
    rval = node_func(context=ctxt, **node_args)
  File "/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 294, in check_for_export
    self.driver.check_for_export(context, volume['id'])
  File "/usr/lib/python2.7/dist-packages/nova/volume/driver.py", line 459, in check_for_export
    tid = self.db.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 983, in volume_get_iscsi_target_num
    return IMPL.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 102, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 2455, in volume_get_iscsi_target_num
    raise exception.ISCSITargetNotFoundForVolume(volume_id=volume_id)
ISCSITargetNotFoundForVolume: No target id found for volume 130.

What's interesting here is that this KVM instance is only running RBD
volumes, no iSCSI volumes in sight. Here's the drives as they're set
up by openstack in the kvm command:

-drive file=rbd:nova/volume-0082:id=rbd:key=...deleted my
key...==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=rbd:nova/volume-0083:id=rbd:key=...deleted my
key...==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk1,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1

But before banging my head against it more (and chastised after
banging my head against the fiemap issue, which turned out to be an
actual bug), I figured I'd see if it was even possible. In Sebastien
Han's (completely fantastic) Ceph+Openstack article, it doesn't sound
like he was able to get RBD-based migration working either. Again, I
don't mind debugging further but wanted to make sure I wasn't
debugging something that wasn't actually there. Incidentally, if I put
something like this in
/usr/lib/python2.7/dist-packages/nova/volume/manager.py in the
for-loop

if not volume['iscsi_target']: continue

before the call to self.driver.check_for_export(), then live migration
*seems* to work, but obviously I might be setting up some ugly corner
case.
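
In context, the guarded loop in check_for_export() comes out roughly like
this (a sketch; the surrounding Diablo code iterates over the instance's
volumes):

for volume in volumes:
    if not volume['iscsi_target']:   # skip non-iSCSI (e.g. RBD) volumes
        continue
    self.driver.check_for_export(context, volume['id'])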

If live migration isn't currently possible for RBD, anyone know if
it's in a roadmap?

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Weekly Hyper-V meeting on Freenode/#openstack-hyper-v

2012-07-24 Thread Stefano Maffulli
[posting to openstack-dev, the best list for this sort of announcement]

Hi Peter,

I wanted to share here what we discussed on IRC so more people are aware
of what's going on.

We agreed that the Hyper-V team will try to use #openstack-meeting to
hold its official meeting. The time and date is on
http://wiki.openstack.org/Meetings/ and on the iCal feed already
(Tuesdays at 1600 UTC, 9am PDT).

Anybody interested in joining the effort to bring back Hyper-V support
in OpenStack, join IRC:Freenode: #opestack-hyper-v and talk to
@primeministerp :)

Cheers,
stef

On 07/24/2012 07:34 AM, Peter Pouliot wrote:
 Hello Everyone,
 
  
 
 This is for anyone interested in Hyper-V/Openstack integration efforts.
 
  
 
 Our weekly meeting is at 11:00AM EST on IRC:Freenode: #opestack-hyper-v
 
  
 
 Today we will be discussing the following:
 
  
 
 · Finalizing Code cleanup
 
 · ProxyMock WMI adoption:  Mock interface for dynamically
 collecting method data to be used in building dummy drivers for unit
 testing. (more to come from Alessandro Pilotti).
 
 · Status of the CI infrastructure to support Hyper-V integration
 
 · AD integration
 
  
 
  
 
 Peter J. Pouliot, CISSP
 
 Senior SDET, OpenStack
 
  
 
 Microsoft
 
 New England Research & Development Center
 
 One Memorial Drive,Cambridge, MA 02142
 
 ppoul...@microsoft.com mailto:ppoul...@microsoft.com | Tel: +1(857)
 453 6436
 
  
 
 
 
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Weekly Hyper-V meeting on Freenode/#openstack-hyper-v

2012-07-24 Thread Khairul Aizat Kamarudzzaman
Thanks for the announcement. BTW, should #opestack-hyper-v
be #openstack-hyper-v? I think there's a typo there, no?

On Wed, Jul 25, 2012 at 5:39 AM, Stefano Maffulli stef...@openstack.orgwrote:

 [posting to openstack-dev, the best list for this sort of announcement]

 Hi Peter,

 I wanted to share here what we discussed on IRC so more people are aware
 of what's going on.

 We agreed that the Hyper-V team will try to use #openstack-meeting to
 hold its official meeting. The time and date is on
 http://wiki.openstack.org/Meetings/ and on the iCal feed already
 (Tuesdays at 1600 UTC, 9am PDT).

 Anybody interested in joining the effort to bring back Hyper-V support
 in OpenStack, join IRC:Freenode: #opestack-hyper-v and talk to
 @primeministerp :)

 Cheers,
 stef

 On 07/24/2012 07:34 AM, Peter Pouliot wrote:
  Hello Everyone,
 
 
 
  This is for anyone interested in Hyper-V/Openstack integration efforts.
 
 
 
  Our weekly meeting is at 11:00AM EST on IRC:Freenode: #opestack-hyper-v
 
 
 
  Today we will be discussing the following:
 
 
 
  · Finalizing Code cleanup
 
  · ProxyMock WMI adoption:  Mock interface for dynamically
  collecting method data to be used in building dummy drivers for unit
  testing. (more to come from Alessandro Pilotti).
 
  · Status of the CI infrastructure to support Hyper-V integration
 
  · AD integration
 
 
 
 
 
  Peter J. Pouliot, CISSP
 
  Senior SDET, OpenStack
 
 
 
  Microsoft
 
  New England Research & Development Center
 
  One Memorial Drive,Cambridge, MA 02142
 
  ppoul...@microsoft.com mailto:ppoul...@microsoft.com | Tel: +1(857)
  453 6436
 
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Josh Durgin

On 07/23/2012 08:24 PM, Jonathan Proulx wrote:

Hi All,

I've been looking at Ceph as a storage back end.  I'm running a
research cluster and while people need to use it and want it 24x7 I
don't need as many nines as a commercial customer facing service does
so I think I'm OK with the current maturity level as far as that goes,
but I have less of a sense of how far along performance is.

My OpenStack deployment is 768 cores across 64 physical hosts, which
I'd like to double in the next 12 months.  What it's used for is
widely varying and hard to classify: some uses are hundreds of tiny
nodes, others are looking to monopolize the biggest physical system
they can get.  I think most really heavy IO currently goes to our NAS
servers rather than through nova-volumes but that could change.

Anyone using ceph at that scale (or preferably larger)?  Does it keep
up if you keep throwing hardware at it?  My proof of concept ceph
cluster on crappy salvaged hardware has proved the concept to me but
has (unsurprisingly) crappy salvaged performance. Trying to get a
sense of what performance expectations I should have given decent
hardware before I decide if I should buy decent hardware for it...

Thanks,
-Jon


Hi Jon,

You might be interested in Jim Schutt's numbers on better hardware:

http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487

You'll probably get more response on the ceph mailing list though.
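
(For a quick baseline while you evaluate hardware, rados bench against a
scratch pool is handy -- e.g., assuming a pool named 'test':

rados bench -p test 60 write

gives you raw cluster write throughput to compare boxes with.)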

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph + Live Migration

2012-07-24 Thread Josh Durgin

On 07/24/2012 01:04 PM, Mark Moseley wrote:

This is more of a sanity check than anything else:

Does the RBDDriver in Diablo support live migration?


Live migration has always been possible with RBD. Management layers
like libvirt or OpenStack may have bugs that make it fail. This sounds
like one of those bugs.


I was playing with this yesterday and couldn't get live migration to
succeed. The errors I was getting seem to trace back to the fact that
the RBDDriver doesn't override VolumeDriver's check_for_export call,
which just defaults to NotImplementedError. Looking at the latest
Folsom code, there's still no check_for_export in the
RBDDriver, which makes me think that in my PoC install (which I've
switched between qcow2, iSCSI and ceph repeatedly) I've somehow made
something unhappy.

2012-07-24 13:44:29 TRACE nova.compute.manager [instance:
79c8c14f-43bf-4eaf-af94-bc578c82f921]
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 253, in _process_data
    rval = node_func(context=ctxt, **node_args)
  File "/usr/lib/python2.7/dist-packages/nova/volume/manager.py", line 294, in check_for_export
    self.driver.check_for_export(context, volume['id'])
  File "/usr/lib/python2.7/dist-packages/nova/volume/driver.py", line 459, in check_for_export
    tid = self.db.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/dist-packages/nova/db/api.py", line 983, in volume_get_iscsi_target_num
    return IMPL.volume_get_iscsi_target_num(context, volume_id)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 102, in wrapper
    return f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.py", line 2455, in volume_get_iscsi_target_num
    raise exception.ISCSITargetNotFoundForVolume(volume_id=volume_id)
ISCSITargetNotFoundForVolume: No target id found for volume 130.

What's interesting here is that this KVM instance is only running RBD
volumes, no iSCSI volumes in sight. Here's the drives as they're set
up by openstack in the kvm command:

-drive file=rbd:nova/volume-0082:id=rbd:key=...deleted my
key...==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive file=rbd:nova/volume-0083:id=rbd:key=...deleted my
key...==:auth_supported=cephx\;none,if=none,id=drive-virtio-disk1,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1

But before banging my head against it more (and chastised after
banging my head against the fiemap issue, which turned out to be an
actual bug), I figured I'd see if it was even possible. In Sebastien
Han's (completely fantastic) Ceph+Openstack article, it doesn't sound
like he was able to get RBD-based migration working either. Again, I
don't mind debugging further but wanted to make sure I wasn't
debugging something that wasn't actually there. Incidentally, if I put
something like this in
/usr/lib/python2.7/dist-packages/nova/volume/manager.py in the
for-loop

if not volume['iscsi_target']: continue

before the call to self.driver.check_for_export(), then live migration
*seems* to work, but obviously I might be setting up some ugly corner
case.


It should work, and if that workaround works, you could instead add

def check_for_export(self, context, volume_id):
    pass

to the RBDDriver. It looks like the check_for_export method was
added and relied upon without modifying all VolumeDriver subclasses, so
e.g. sheepdog would have the same problem.
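
In other words, a sketch of the fix in nova/volume/driver.py (class name as
in the existing code; only the new method is shown):

class RBDDriver(VolumeDriver):
    # ... existing methods ...

    def check_for_export(self, context, volume_id):
        """qemu attaches RBD volumes directly; there is no export to check."""
        pass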


If live migration isn't currently possible for RBD, anyone know if
it's in a roadmap?


If this is still a problem in trunk I'll make sure it's fixed before
Folsom is released.

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph + Live Migration

2012-07-24 Thread Mark Moseley
 It should work, and if that workaround works, you could instead add

 def check_for_export(self, context, volume_id):
 pass

I'll try that out. That's a heck of a lot cleaner; plus, I just picked
that if not volume['iscsi_target'] test because it was the only
attribute I could find, but that was just for messing around. I wasn't
able to find any attribute of the volume object that said it
was an RBD volume.

My real concern is that the check_for_export is somehow important and
I might be missing something else down the road. That said, I've been
able to migrate back and forth just fine after I put that initial hack
in.


 to the RBDDriver. It looks like the check_for_export method was
 added and relied upon without modifying all VolumeDriver subclasses, so
 e.g. sheepdog would have the same problem.


 If live migration isn't currently possible for RBD, anyone know if
 it's in a roadmap?


 If this is still a problem in trunk I'll make sure it's fixed before
 Folsom is released.

Cool, thanks!

Incidentally (and I can open up a new thread if you like), I was also
going to post here about your quote in Sebastien's article:

quote
What’s missing is that OpenStack doesn’t yet have the ability to
initialize a volume from an image. You have to put an image on one
yourself before you can boot from it currently. This should be fixed
in the next version of OpenStack. Booting off of RBD is nice because
you can do live migration, although I haven’t tested that with
OpenStack, just with libvirt. For Folsom, we hope to have
copy-on-write cloning of images as well, so you can store images in
RBD with glance, and provision instances booting off cloned RBD
volumes in very little time.
/quote

I just wanted to confirm I'm reading it right:

With the above features, we'll be able to use --block_device_mapping
vda mapped to an RBD volume *and* a glance-based image on the same
instance and it'll clone that image onto the RBD volume? Is that a
correct interpretation? That'd be indeed sweet.

Right now I'm either booting with an image, which forces me to use
qcow2 and a second RBD-based volume (with the benefit of immediately
running), or booting with both vda and vdb on RBD volumes (but then
have to install the vm from scratch, OS-wise). Of course, I might be
missing a beautiful, already-existing 3rd option that I'm just not
aware of :)

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph + Live Migration

2012-07-24 Thread Josh Durgin

On 07/24/2012 05:10 PM, Mark Moseley wrote:

It should work, and if that workaround works, you could instead add

def check_for_export(self, context, volume_id):
    pass


I'll try that out. That's a heck of a lot cleaner; plus, I just picked
that if not volume['iscsi_target'] test because it was the only
attribute I could find, but that was just for messing around. I wasn't
able to find any attribute of the volume object that said it
was an RBD volume.

My real concern is that the check_for_export is somehow important and
I might be missing something else down the road. That said, I've been
able to migrate back and forth just fine after I put that initial hack
in.


All the export-related functions only matter for iscsi or similar
drivers that require a block device on the host. Since qemu talks to
rbd directly, there's no need for those methods to do anything. If
someone wanted to make a volume driver for the kernel rbd module, for 
example, then the export methods would make sense.
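
A hypothetical sketch of what that might look like (names illustrative;
'rbd map' exposes an image as a host block device):

class KernelRBDDriver(RBDDriver):
    def create_export(self, context, volume):
        # map the image so it shows up under /dev/rbd*
        self._execute('rbd', 'map', volume['name'],
                      '--pool', FLAGS.rbd_pool)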



to the RBDDriver. It looks like the check_for_export method was
added and relied upon without modifying all VolumeDriver subclasses, so
e.g. sheepdog would have the same problem.



If live migration isn't currently possible for RBD, anyone know if
it's in a roadmap?



If this is still a problem in trunk I'll make sure it's fixed before
Folsom is released.


Cool, thanks!

Incidentally (and I can open up a new thread if you like), I was also
going to post here about your quote in Sebastien's article:

quote
What’s missing is that OpenStack doesn’t yet have the ability to
initialize a volume from an image. You have to put an image on one
yourself before you can boot from it currently. This should be fixed
in the next version of OpenStack. Booting off of RBD is nice because
you can do live migration, although I haven’t tested that with
OpenStack, just with libvirt. For Folsom, we hope to have
copy-on-write cloning of images as well, so you can store images in
RBD with glance, and provision instances booting off cloned RBD
volumes in very little time.
/quote

I just wanted to confirm I'm reading it right:

With the above features, we'll be able to use --block_device_mapping
vda mapped to an RBD volume *and* a glance-based image on the same
instance and it'll clone that image onto the RBD volume? Is that a
correct interpretation? That'd be indeed sweet.


I'm not sure that'll be the exact interface, but that's the idea.


Right now I'm either booting with an image, which forces me to use
qcow2 and a second RBD-based volume (with the benefit of immediately
running), or booting with both vda and vdb on RBD volumes (but then
have to install the vm from scratch, OS-wise). Of course, I might be
missing a beautiful, already-existing 3rd option that I'm just not
aware of :)


Another option is to do some custom hack with image files and
'rbd rm vol-foo && rbd import file vol-foo' on the backend so you don't
need to copy the data within a VM.
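
For example (pool/volume/image names illustrative):

rbd rm -p nova volume-0099
rbd import -p nova precise-root.img volume-0099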

Josh

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Re: Re: [openstack] nova-compute always dead on RHEL6.1

2012-07-24 Thread 延生 付
Dear Padraig,
 
auth=no is already set in qpidd.conf, and qpid-stat -c works well after this change.
 
[root@kapi-r11 nova]# qpid-stat -c
Connections
  client-addr             cproc      cpid   auth       connected  idle  msgIn  msgOut
  ===================================================================================
  [::1]:5672-[::1]:45060  qpid-stat  28059  anonymous  0s         0s    208    263

[root@kapi-r11 nova]# qpid-queue-stats
Queue Name                   Sec    Depth  Enq Rate  Deq Rate
=============================================================
qmfc-v2-ui-kapi-r11.28077.1  10.00  0      0.10      0.10
topic-kapi-r11.28077.1       10.00  0      0.20      0.20

 
Meanwhile, the nova-related components still cannot connect. I plan to test
with another queue implementation.
Please share your ideas. Thanks as always.
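
For reference, the qpid-related bits of my nova.conf look roughly like this
(a sketch; flag names as used by nova's impl_qpid, values from my local setup):

rpc_backend = nova.rpc.impl_qpid
qpid_hostname = 127.0.0.1
qpid_port = 5672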
 
Regards,

Will



From: Pádraig Brady p...@draigbrady.com
To: 延生 付 willfly0...@yahoo.com.cn 
Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net 
Sent: Tuesday, 24 July 2012, 5:44 PM
Subject: Re: Re: [Openstack] [openstack] nova-compute always dead on RHEL6.1

On 07/24/2012 02:05 AM, 延生 付 wrote:
 Dear Padraig,
  
 Thanks for the help. I can see the log from the console; really strange that
 there is no log generated under /var/log/nova, as the permissions are open to nova.

Ensure there are no root owned files under /var/log/nova

 Another question: Apache Qpid cannot be connected to, even from the same
 server. I set it up with the defaults, no further config.
 I can see that port 5672 is listening, and I also turned iptables off. Is there
 any Qpid log I can refer to?

Try making sure auth=no is set in /etc/qpidd.conf

Note these instructions have been used successfully by many:
https://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL

cheers,
Pádraig.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Anne Gentle
I don't know if it will confirm or correlate with your findings, but
do take a look at this blog post with benchmarks in one of the last
sections:

http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/

I'm trying to determine what parts should go into the OpenStack
documentation, please let me know if the post is useful to you in your
setting and what sections are most valuable.
Thanks,
Anne


On Tue, Jul 24, 2012 at 6:08 PM, Josh Durgin josh.dur...@inktank.com wrote:
 On 07/23/2012 08:24 PM, Jonathan Proulx wrote:

 Hi All,

 I've been looking at Ceph as a storage back end.  I'm running a
 research cluster and while people need to use it and want it 24x7 I
 don't need as many nines as a commercial customer facing service does
 so I think I'm OK with the current maturity level as far as that goes,
 but I have less of a sense of how far along performance is.

 My OpenStack deployment is 768 cores across 64 physical hosts, which
 I'd like to double in the next 12 months.  What it's used for is
 widely varying and hard to classify: some uses are hundreds of tiny
 nodes, others are looking to monopolize the biggest physical system
 they can get.  I think most really heavy IO currently goes to our NAS
 servers rather than through nova-volumes but that could change.

 Anyone using ceph at that scale (or preferably larger)?  Does it keep
 up if you keep throwing hardware at it?  My proof of concept ceph
 cluster on crappy salvaged hardware has proved the concept to me but
 has (unsurprisingly) crappy salvaged performance. Trying to get a
 sense of what performance expectations I should have given decent
 hardware before I decide if I should buy decent hardware for it...

 Thanks,
 -Jon


 Hi Jon,

 You might be interested in Jim Schutt's numbers on better hardware:

 http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487

 You'll probably get more response on the ceph mailing list though.

 Josh


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Announcing proof-of-concept Load Balancing as a Service project

2012-07-24 Thread Eugene Kirpichov
Hello community,

We at Mirantis have had a number of clients request functionality to
control various load balancer devices (software and hardware) via an
OpenStack API and horizon. So, in collaboration with Cisco OpenStack
team and a number of other community members, we’ve started
socializing the blueprints for an elastic load balancer API service.
At this point we’d like to share where we are and would very much
appreciate anyone participating and providing input.

The current vision is to allow cloud tenants to request and
provision virtual load balancers on demand and allow cloud
administrators to manage a pool of available LB devices. Access is
provided under a unified interface to different kinds of load
balancers, both software and hardware. It means that the API for tenants
is abstracted away from the actual API of the underlying hardware or
software load balancers, and LBaaS effectively bridges this gap.
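
To make the driver boundary concrete, here is a rough sketch of the shape a
device driver takes behind the unified API (method names are illustrative,
loosely following the Atlas-LB model, not the final interface):

class LBDeviceDriver(object):
    """Base class each device plug-in ('driver') implements."""

    def create_loadbalancer(self, lb_config):
        raise NotImplementedError()

    def delete_loadbalancer(self, lb_id):
        raise NotImplementedError()

    def add_nodes(self, lb_id, nodes):
        raise NotImplementedError()

    def set_vip(self, lb_id, vip):
        raise NotImplementedError()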

POC-level support for Cisco ACE and HAproxy is currently implemented
in the form of plug-ins to LBaaS called “drivers”. We also started some
work on F5 drivers. Would appreciate hearing input on what other
drivers may be important at this point…nginx?

Another question we have is whether this should be a standalone module or a
Quantum plugin… Dan – any feedback on this (and BTW congrats on the
acquisition =).

In order not to reinvent the wheel, we decided to base our API on
Atlas-LB (http://wiki.openstack.org/Atlas-LB).

Here are all the pointers:
 * Project overview: http://goo.gl/vZdei
 * Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
 * API draft: http://goo.gl/gFcWT
 * Roadmap: http://goo.gl/EZAhf
 * Github repo: https://github.com/Mirantis/openstack-lbaas

The code is written in Python and based on the OpenStack service
template. We’ll be happy to give a walkthrough over what we have to
anyone who may be interested in contributing (for example, creating a
driver to support a particular LB device).

All of the documents and code are not set in stone and we’re writing
here specifically to ask for feedback and collaboration from the
community.

We would like to start holding weekly IRC meetings at
#openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

--
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceph performance as volume image store?

2012-07-24 Thread Leandro Reox
We're pretty interested too in large-scale performance benchmarks. Anyone?

regards
On Jul 24, 2012 10:22 PM, Anne Gentle a...@openstack.org wrote:

 I don't know if it will confirm or correlate with your findings, but
 do take a look at this blog post with benchmarks in one of the last
 sections:

 http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/

 I'm trying to determine what parts should go into the OpenStack
 documentation, please let me know if the post is useful to you in your
 setting and what sections are most valuable.
 Thanks,
 Anne


 On Tue, Jul 24, 2012 at 6:08 PM, Josh Durgin josh.dur...@inktank.com
 wrote:
  On 07/23/2012 08:24 PM, Jonathan Proulx wrote:
 
  Hi All,
 
  I've been looking at Ceph as a storage back end.  I'm running a
  research cluster and while people need to use it and want it 24x7 I
  don't need as many nines as a commercial customer facing service does
  so I think I'm OK with the current maturity level as far as that goes,
  but I have less of a sense of how far along performance is.
 
  My OpenStack deployment is 768 cores across 64 physical hosts, which
  I'd like to double in the next 12 months.  What it's used for is
  widely varying and hard to classify: some uses are hundreds of tiny
  nodes, others are looking to monopolize the biggest physical system
  they can get.  I think most really heavy IO currently goes to our NAS
  servers rather than through nova-volumes but that could change.
 
  Anyone using ceph at that scale (or preferably larger)?  Does it keep
  up if you keep throwing hardware at it?  My proof of concept ceph
  cluster on crappy salvaged hardware has proved the concept to me but
  has (unsurprisingly) crappy salvaged performance. Trying to get a
  sense of what performance expectations I should have given decent
  hardware before I decide if I should buy decent hardware for it...
 
  Thanks,
  -Jon
 
 
  Hi Jon,
 
  You might be interested in Jim Schutt's numbers on better hardware:
 
  http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/7487
 
  You'll probably get more response on the ceph mailing list though.
 
  Josh
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Announcing proof-of-concept Load Balancing as a Service project

2012-07-24 Thread Angus Salkeld

On 24/07/12 18:33 -0700, Eugene Kirpichov wrote:

Hello community,

We at Mirantis have had a number of clients request functionality to
control various load balancer devices (software and hardware) via an
OpenStack API and horizon. So, in collaboration with Cisco OpenStack
team and a number of other community members, we’ve started
socializing the blueprints for an elastic load balancer API service.
At this point we’d like to share where we are and would very much
appreciate anyone participating and providing input.

The current vision is to allow cloud tenants to request and
provision virtual load balancers on demand and allow cloud
administrators to manage a pool of available LB devices. Access is
provided under a unified interface to different kinds of load
balancers, both software and hardware. It means that the API for tenants
is abstracted away from the actual API of the underlying hardware or
software load balancers, and LBaaS effectively bridges this gap.

POC level support for Cisco ACE and HAproxy is currently implemented
in the form of plug-ins to LBaaS called “drivers”. We also started some
work on F5 drivers. Would appreciate hearing input on what other
drivers may be important at this point…nginx?

Another question we have is whether this should be a standalone module or a
Quantum plugin… Dan – any feedback on this (and BTW congrats on the
acquisition =).

In order not to reinvent the wheel, we decided to base our API on
Atlas-LB (http://wiki.openstack.org/Atlas-LB).

Here are all the pointers:
* Project overview: http://goo.gl/vZdei
* Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
* API draft: http://goo.gl/gFcWT
* Roadmap: http://goo.gl/EZAhf
* Github repo: https://github.com/Mirantis/openstack-lbaas

The code is written in Python and based on the OpenStack service
template. We’ll be happy to give a walkthrough over what we have to
anyone who may be interested in contributing (for example, creating a
driver to support a particular LB device).


I made a really simple load balancer (using HAproxy) in Heat
(https://github.com/heat-api/heat/blob/master/heat/engine/loadbalancer.py)
to implement AWS::ElasticLoadBalancing::LoadBalancer, but
it would be nice to use a more complete load balancer solution.
When I get a moment I'll see if I can integrate. One issue is that
I need latency statistics to trigger autoscaling events.
See the statistics types here:
http://docs.amazonwebservices.com/ElasticLoadBalancing/latest/DeveloperGuide/US_MonitoringLoadBalancerWithCW.html

Anyways, nice project.

Regards
Angus Salkeld



All of the documents and code are not set in stone and we’re writing
here specifically to ask for feedback and collaboration from the
community.

We would like to start holding weekly IRC meetings at
#openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

--
Eugene Kirpichov
http://www.linkedin.com/in/eugenekirpichov

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Possibility to have non python sub project in openstack

2012-07-24 Thread Hengqing Hu

Hi,

These days I have been working on a nova-scheduler translated to Erlang
(database updates and RPC messaging all in Erlang).
Currently it responds to the following commands the same as the
nova-scheduler in Python:

euca-run-instances -n N -t m1.tiny ami-0003
euca-run-instances -t m1.tiny ami-0003

It is still a work in progress: the code needs to be beautified, functions
need to be completed, and bugs need to be fixed.

Since what I see in OpenStack now is a complete Python stack, what I'd like
to know is:

Is it possible to have a non-Python sub-project in OpenStack?
How would one make a non-Python project a sub-project of OpenStack?
Will mixed-language projects in OpenStack be helpful or messy?

Best Regards, Hengqing Hu

Attached is a code snippet to show some of the ideas:

-module(compute_filter).

%% api
-export([host_passes/2]).

%%%-------------------------------------------------------------------
%%% api
%%%-------------------------------------------------------------------

host_passes({_, State}, FilterProperties) ->
    InstanceType = proplists:get_value(instance_type, FilterProperties),
    Topic = proplists:get_value(topic, State),
    ((Topic =/= compute) or
     (InstanceType =:= undefined) or
     (InstanceType =:= null)) orelse
    begin
        Capabilities = proplists:get_value(capabilities, State),
        Service = proplists:get_value(service, State),
        CapabilitiesEnabled = proplists:get_value(enabled, Capabilities, 1),
        is_service_up(Service) and
        is_capabilities_enabled(CapabilitiesEnabled) and
        satisfies_extra_specs(Capabilities, InstanceType)
    end.

%%%-------------------------------------------------------------------
%%% internal functions
%%%-------------------------------------------------------------------

is_service_up(Service) ->
    Disabled = proplists:get_value(disabled, Service),
    is_service_enabled(Disabled) and
    ensched_lib:service_is_up(Service).

is_capabilities_enabled(1) ->
    true;
is_capabilities_enabled(0) ->
    false.

is_service_enabled(1) ->
    false;
is_service_enabled(0) ->
    true.

satisfies_extra_specs(Capabilities, InstanceType) ->
    (not lists:keymember(extra_specs, 1, InstanceType)) orelse
    begin
        ExtraSpecs = proplists:get_value(extra_specs, InstanceType),
        lists:all(
            fun({Key, Value}) ->
                Value =:= proplists:get_value(Key, Capabilities)
            end,
            ExtraSpecs)
    end.
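
For comparison, the Python logic this mirrors looks roughly like the
following (reconstructed from the Erlang above, not verbatim nova source;
service_is_up() stands in for the scheduler's liveness check):

def service_is_up(service):
    # stand-in for nova.utils.service_is_up (heartbeat recency check)
    return service.get('alive', True)

def host_passes(state, filter_properties):
    instance_type = filter_properties.get('instance_type')
    if state.get('topic') != 'compute' or not instance_type:
        return True
    capabilities = state.get('capabilities', {})
    service = state.get('service', {})
    if service.get('disabled') or not service_is_up(service):
        return False
    if not capabilities.get('enabled', True):
        return False
    extra_specs = instance_type.get('extra_specs')
    if not extra_specs:
        return True
    return all(capabilities.get(k) == v for k, v in extra_specs.items())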


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] Announcing proof-of-concept Load Balancing as a Service project

2012-07-24 Thread Eugene Kirpichov
Hi Dan,

Thanks for the feedback. I will answer in detail tomorrow; for now
just providing a working link to the project overview:

http://goo.gl/LrRik

On Tue, Jul 24, 2012 at 8:30 PM, Dan Wendlandt d...@nicira.com wrote:
 Hi Eugene, Angus,

 Adding openstack-dev (probably the more appropriate mailing list for
 discussion a new openstack feature) and some folks from Radware and F5 who
 had previously also contacted me about Quantum + Load-balancing as a
 service.  I'm probably leaving out some other people who have contacted me
 about this as well, but hopefully they are on the ML and can speak up.

 On Tue, Jul 24, 2012 at 7:51 PM, Angus Salkeld asalk...@redhat.com wrote:

 On 24/07/12 18:33 -0700, Eugene Kirpichov wrote:

 Hello community,

 We at Mirantis have had a number of clients request functionality to
 control various load balancer devices (software and hardware) via an
 OpenStack API and horizon. So, in collaboration with Cisco OpenStack
 team and a number of other community members, we’ve started
 socializing the blueprints for an elastic load balancer API service.
 At this point we’d like to share where we are and would very much
 appreciate anyone participating and providing input.


 Yes, I definitely think LB is one of the key items that we'll want to tackle
 during Grizzly in terms of L4-L7 services.



 The current vision is to allow cloud tenants to request and
 provision virtual load balancers on demand and allow cloud
 administrators to manage a pool of available LB devices. Access is
 provided under a unified interface to different kinds of load
 balancers, both software and hardware. It means that the API for tenants
 is abstracted away from the actual API of the underlying hardware or
 software load balancers, and LBaaS effectively bridges this gap.


 That's the openstack way, no arguments there :)



 POC level support for Cisco ACE and HAproxy is currently implemented
 in the form of plug-ins to LBaaS called “drivers”. We also started some
 work on F5 drivers. Would appreciate hearing input on what other
 drivers may be important at this point…nginx?


 haproxy is the most common non-vendor solution I hear mentioned.



 Another question we have is whether this should be a standalone module or a
 Quantum plugin…


 Based on discussions during the PPB meeting about quantum becoming core,
 there was a push for having a single network service and API, which would
 tend to suggest it being a sub-component of Quantum that is independently
 loadable.  I also tend to think that it's likely to be a common set of
 developers working across all such networking functionality, so keeping
 different core-dev teams, repos, tarballs, docs, etc.
 probably doesn't make sense.  I think this is generally in line with the plan
 of allowing Quantum to load additional portions of the API as needed for
 additional services like LB, WAN-bridging, but this is probably a call for
 the PPB in general.



 In order not to reinvent the wheel, we decided to base our API on
 Atlas-LB (http://wiki.openstack.org/Atlas-LB).


 Seems like a good place to start.



 Here are all the pointers:
 * Project overview: http://goo.gl/vZdei


 * Screencast: http://www.youtube.com/watch?v=NgAL-kfdbtE
 * API draft: http://goo.gl/gFcWT
 * Roadmap: http://goo.gl/EZAhf
 * Github repo: https://github.com/Mirantis/openstack-lbaas


 Will take a look.. I'm getting a permission error on the overview.





 The code is written in Python and based on the OpenStack service
 template. We’ll be happy to give a walkthrough over what we have to
 anyone who may be interested in contributing (for example, creating a
 driver to support a particular LB device).


 I made a really simple load balancer (using HAproxy) in Heat
 (https://github.com/heat-api/heat/blob/master/heat/engine/loadbalancer.py)
 to implement AWS::ElasticLoadBalancing::LoadBalancer, but
 it would be nice to use a more complete load balancer solution.
 When I get a moment I'll see if I can integrate. One issue is that
 I need latency statistics to trigger autoscaling events.
 See the statistics types here:

 http://docs.amazonwebservices.com/ElasticLoadBalancing/latest/DeveloperGuide/US_MonitoringLoadBalancerWithCW.html

 Anyways, nice project.


 Integration with Heat would be great regardless of the above decisions.

 dan





 Regards
 Angus Salkeld



 All of the documents and code are not set in stone and we’re writing
 here specifically to ask for feedback and collaboration from the
 community.

 We would like to start holding weekly IRC meetings at
 #openstack-meeting; we propose 19:00 UTC on Thursdays (this time seems
 free according to http://wiki.openstack.org/Meetings/ ), starting Aug 2.

 --
 Eugene Kirpichov
 http://www.linkedin.com/in/eugenekirpichov

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

[Openstack] Glance image add error

2012-07-24 Thread Miguel Alejandro González
Hello

I'm getting this error from glance when I try to add an image; please help.

Failed to add image. Got error:

The response body:
{"badMethod": {"message": "The server could not comply with the
request since it is either malformed or otherwise incorrect.", "code":
405}}
Note: Your image metadata may still be in the registry, but the
image's status will likely be 'killed'.

I try to add the image with a command like this one:
glance add name=ubuntu-oneiric is_public=true container_format=ovf
disk_format=qcow2 < ubuntu-11.10-server-cloudimg-amd64-disk1.img


I'm using Ubuntu Server 12.04 as my OS and I'm using the Ubuntu
packages to install OpenStack (by hand, not using scripts).

Thanks in advance!
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Failure: precise_folsom_deploy #90

2012-07-24 Thread openstack-testing-bot
Title: precise_folsom_deploy
General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_deploy/90/
  Project: precise_folsom_deploy
  Date of build: Tue, 24 Jul 2012 02:43:33 -0400
  Build duration: 1 min 3 sec
  Build cause: Started by command line
  Built on: master
Health Report
  Build stability: 2 out of the last 5 builds failed (score: 60)
Changes
  No Changes
Build Artifacts
  logs/syslog.tar.gz
  logs/test-02.os.magners.qa.lexington-log.tar.gz
  logs/test-03.os.magners.qa.lexington-log.tar.gz
  logs/test-04.os.magners.qa.lexington-log.tar.gz
  logs/test-06.os.magners.qa.lexington-log.tar.gz
  logs/test-07.os.magners.qa.lexington-log.tar.gz
  logs/test-08.os.magners.qa.lexington-log.tar.gz
  logs/test-09.os.magners.qa.lexington-log.tar.gz
  logs/test-10.os.magners.qa.lexington-log.tar.gz
  logs/test-11.os.magners.qa.lexington-log.tar.gz
Console Output
  [...truncated 424 lines...]
  u'nova-release': u'qa-precise-folsom'},
  u'nova-compute': {u'config-flags': u'auto_assign_floating_ip=True',
                    u'nova-release': u'qa-precise-folsom'},
  u'nova-volume': {u'nova-release': u'qa-precise-folsom',
                   u'overwrite': u'true'}}
  DEBUG: Calling 'juju status'...
  DEBUG: Deploying with timeout 2700 sec.
  DEBUG: Calling: juju deploy --config=/tmp/tmpNKHerG --repository=/var/lib/jenkins/jobs/precise_folsom_deploy/workspace local:nova-compute nova-compute
  DEBUG: Calling: juju deploy --config=/tmp/tmpNKHerG --repository=/var/lib/jenkins/jobs/precise_folsom_deploy/workspace local:nova-volume nova-volume
  DEBUG: Calling: juju deploy --config=/tmp/tmpNKHerG --repository=/var/lib/jenkins/jobs/precise_folsom_deploy/workspace local:nova-cloud-controller nova-cloud-controller
  DEBUG: Calling: juju deploy --config=/tmp/tmpNKHerG --repository=/var/lib/jenkins/jobs/precise_folsom_deploy/workspace local:keystone keystone
  DEBUG: Calling: juju deploy --repository=/var/lib/jenkins/jobs/precise_folsom_deploy/workspace local:rabbitmq rabbitmq
  ERROR: Juju command returned non-zero: juju deploy --repository=/var/lib/jenkins/jobs/precise_folsom_deploy/workspace local:rabbitmq rabbitmq
  - Deploying nova-compute in group 1/1
  - Deploying nova-volume in group 1/1
  - Deploying nova-cloud-controller in group 1/1
  - Deploying keystone in group 1/1
  - Deploying rabbitmq in group 1/1
  Traceback (most recent call last):
    File "/var/lib/jenkins/tools/juju-deployer/deployer.py", line 202, in <module>
      juju_call(cmd)
    File "/var/lib/jenkins/tools/juju-deployer/utils.py", line 142, in juju_call
      raise Exception
  Exception
  + rc=1
  + echo 'Deployer returned: 1'
  Deployer returned: 1
  + [[ 1 != 0 ]]
  + echo 'Collating logs...'
  Collating logs...
  + /var/lib/jenkins/tools/jenkins-scripts/collate-test-logs.py -o logs
  2012-07-24 02:44:29,118 INFO Connecting to environment...
  2012-07-24 02:44:30,020 INFO Connected to environment.
  2012-07-24 02:44:30,805 INFO 'status' command finished successfully
  INFO:root:Setting up connection to test-02.os.magners.qa.lexington
  INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_5.9p1)
  INFO:paramiko.transport:Authentication (publickey) successful!
  INFO:paramiko.transport:Secsh channel 1 opened.
  INFO:paramiko.transport.sftp:[chan 1] Opened sftp connection (server version 3)
  INFO:root:Archiving logs on test-02.os.magners.qa.lexington
  INFO:paramiko.transport:Secsh channel 2 opened.
  INFO:root:Grabbing information from test-02.os.magners.qa.lexington
  INFO:paramiko.transport.sftp:[chan 1] sftp session closed.
  tar: Removing leading `/' from member names
  + exit 1
  Build step 'Execute shell' marked build as failure
  Archiving artifacts
  Email was triggered for: Failure
  Sending email for trigger: Failure
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Fixed: precise_folsom_python-keystoneclient_trunk #30

2012-07-24 Thread openstack-testing-bot
Title: precise_folsom_python-keystoneclient_trunk
General Information
  BUILD SUCCESS
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_python-keystoneclient_trunk/30/
  Project: precise_folsom_python-keystoneclient_trunk
  Date of build: Tue, 24 Jul 2012 20:51:22 -0400
  Build duration: 3 min 5 sec
  Build cause: Started by user adam
  Built on: pkg-builder
Health Report
  Build stability: 2 out of the last 5 builds failed (score: 60)
Changes
  No Changes
Console Output
  [...truncated 1667 lines...]
  Finished at 20120724-2054
  Build needed 00:00:55, 1220k disc space
  INFO:root:Uploading package to ppa:openstack-ubuntu-testing/folsom-trunk-testing
  DEBUG:root:['dput', 'ppa:openstack-ubuntu-testing/folsom-trunk-testing', 'python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_source.changes']
  gpg: Signature made Tue Jul 24 20:53:22 2012 EDT using RSA key ID 9935ACDC
  gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) <ja...@shingle-house.org.uk>"
  gpg: Signature made Tue Jul 24 20:53:21 2012 EDT using RSA key ID 9935ACDC
  gpg: Good signature from "Openstack Ubuntu Testing Bot (Jenkins Key) <ja...@shingle-house.org.uk>"
  Checking signature on .changes
  Good signature on /tmp/tmpWP9Yic/python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_source.changes.
  Checking signature on .dsc
  Good signature on /tmp/tmpWP9Yic/python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1.dsc.
  Uploading to ppa (via ftp to ppa.launchpad.net):
    Uploading python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1.dsc: done.
    Uploading python-keystoneclient_0.1.1.15+git201207242051~precise.orig.tar.gz: done.
    Uploading python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1.debian.tar.gz: done.
    Uploading python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_source.changes: done.
  Successfully uploaded packages.
  INFO:root:Installing build artifacts into /var/lib/jenkins/www/apt
  DEBUG:root:['reprepro', '--waitforlock', '10', '-Vb', '/var/lib/jenkins/www/apt', 'include', 'precise-folsom', 'python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_amd64.changes']
  Exporting indices...
  Successfully created '/var/lib/jenkins/www/apt/dists/precise-folsom/Release.gpg.new'
  Successfully created '/var/lib/jenkins/www/apt/dists/precise-folsom/InRelease.new'
  Deleting files no longer referenced...
  deleting and forgetting pool/main/p/python-keystoneclient/python-keystoneclient_0.1.1.15+git201207242020~precise-0ubuntu1_all.deb
  INFO:root:Pushing changes back to bzr testing branch
  DEBUG:root:['bzr', 'push', 'lp:~openstack-ubuntu-testing/python-keystoneclient/precise-folsom']
  Pushed up to revision 33.
  INFO:root:Storing current commit for next build: e77234bd3e9f49de509bd1ff776966e58be79904
  INFO:root:Complete command log:
  INFO:root:Destroying schroot.
  bzr branch lp:~openstack-ubuntu-testing/python-keystoneclient/precise-folsom-proposed /tmp/tmpWP9Yic/python-keystoneclient
  mk-build-deps -i -r -t apt-get -y /tmp/tmpWP9Yic/python-keystoneclient/debian/control
  python setup.py sdist
  git log -n1 --no-merges --pretty=format:%H
  bzr merge lp:~openstack-ubuntu-testing/python-keystoneclient/precise-folsom --force
  dch -b -D precise --newversion 1:0.1.1.15+git201207242051~precise-0ubuntu1 Automated Ubuntu testing build:
  dch -b -D precise --newversion 1:0.1.1.15+git201207242051~precise-0ubuntu1 Automated Ubuntu testing build:
  debcommit
  bzr builddeb -S -- -sa -us -uc
  bzr builddeb -S -- -sa -us -uc
  debsign -k9935ACDC python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_source.changes
  sbuild -d precise-folsom -n -A python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1.dsc
  dput ppa:openstack-ubuntu-testing/folsom-trunk-testing python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_source.changes
  reprepro --waitforlock 10 -Vb /var/lib/jenkins/www/apt include precise-folsom python-keystoneclient_0.1.1.15+git201207242051~precise-0ubuntu1_amd64.changes
  bzr push lp:~openstack-ubuntu-testing/python-keystoneclient/precise-folsom
  Email was triggered for: Fixed
  Trigger Success was overridden by another trigger and will not send an email.
  Sending email for trigger: Fixed
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp