Hi all,
I am trying to start nova-api on my compute node to use the metadata service.
I haven't succeeded yet. I found this in the nova-api log:
2012-04-03 15:18:43,908 CRITICAL nova [-] Could not load paste app
'metadata' from /etc/nova/api-paste.ini
(nova): TRACE: Traceback (most recent call last):
Hello everyone,
Our weekly project release status meeting will take place at 21:00
UTC this Tuesday in #openstack-meeting on IRC. PTLs, if you can't make
it, please name a substitute on [2].
Two days before the Essex final release, we will concentrate on the pending
RC2 publications for Horizon and
Yes - it's more generic than hypervisor capabilities. My main problem with
Host Aggregates is that it limits things to specific 1:1 groupings based on
hypervisor functionality.
Use cases I want to be able to cover include:
- Rolling new hardware through an existing cluster, and
+1 for move to nova.common
I remember discussion about versioning these messages to aid rolling /
zero-downtime upgrades.
Might be worth considering those when doing the decoupling?
Cheers,
John
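A versioned envelope along those lines could look something like this. The field names and the version-matching rule below are illustrative assumptions, not the actual nova.rpc wire format:

```python
# Illustrative sketch of versioning rpc messages to aid rolling upgrades.
# The envelope fields and compatibility rule are assumptions for discussion.

def make_msg(method, version="1.0", **kwargs):
    """Wrap a call in an envelope carrying an interface version."""
    return {"method": method, "version": version, "args": kwargs}

def is_compatible(supported, received):
    """Accept a message if the major version matches and the minor
    version is no newer than what this node supports."""
    sup_major, sup_minor = (int(p) for p in supported.split("."))
    rcv_major, rcv_minor = (int(p) for p in received.split("."))
    return sup_major == rcv_major and rcv_minor <= sup_minor

msg = make_msg("run_instance", version="1.2", instance_id="i-0001")
print(is_compatible("1.3", msg["version"]))  # True: older minor is fine
print(is_compatible("2.0", msg["version"]))  # False: major mismatch
```

With a rule like this, an upgraded node can keep talking to a not-yet-upgraded peer as long as only minor-version changes were made.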
-Original Message-
From:
Dom0: 192.168.100.251
DomU: 192.168.100.238
nova.conf: http://pastebin.com/B0PVVWiv
ifconfig (dom0 and domU): http://pastebin.com/iCLX91RS
nova network table: http://pastebin.com/k5XcXHee
Please let me know if you need any other information; I think I have
provided everything I have.
Russell Bryant wrote:
I proposed a session to discuss this a bit at the summit:
http://summit.openstack.org/sessions/view/95
There are a lot of ways this could be approached. I'm going to try to
write up a proposal at some point to get a discussion moving.
Note that there is also the
Thanks for the info. The one thing I am missing is the ifconfig info from
inside your VM instance (I would personally use XenCenter to access the console
and see what is going on). I am assuming that it is not getting the correct IP
address from the DHCP server in nova-network. And I am
Hi,
I'm actively working on the notification part. I did some analysis on the
code and dependencies and was planning to submit a blueprint by end of the
week. We can use that to finalize the interface for the notification. The
rpc implementation is rich (compared to just what we need for
On 04/02/2012 08:44 PM, Xin Zhao wrote:
On 4/2/2012 6:35 PM, Russell Bryant wrote:
On 04/02/2012 03:09 PM, Xin Zhao wrote:
Hello,
I am new to OpenStack and trying to install the diablo release on a
RHEL6 cluster. I follow instructions here:
Hi,
I'm actively working on the notification part. I did some analysis on the
code and dependencies and was planning to submit a blueprint by end of the
week. We can use that to finalize the interface for the notification. The
rpc implementation is rich (compared to just what we need for
Thanks for sharing this information. For the future, I think this type
of analysis and discussion is something that is great to have on the
mailing list instead of just a private group. I wish I had seen it sooner.
The code in nova.rpc seems useful enough that it very well may be used
I haven't had a chance to test with Essex yet, but I did get a session
accepted at the Developer Summit to get everyone on the same page as
far as Fog goes. Hopefully I'll get some cycles in on Essex before the
Developer Summit and get it fixed and working before then. Feel free
to email me if you
Hey Russell,
On Mon, 2012-04-02 at 16:26 -0400, Russell Bryant wrote:
Greetings,
There was a thread on this list a little while ago about moving the
notification drivers that are in nova and glance into openstack.common
since they provide very similar functionality, but have implementations
With the pending release of Essex, I'm making plans to upgrade our internal
cloud infrastructure. My question is what will be the best approach?
Our cloud is being used to support internal research activities and thus needs
to be 'relatively' stable, however as new features become available
On 04/03/2012 09:36 AM, Russell Bryant wrote:
Thanks for sharing this information. For the future, I think this type
of analysis and discussion is something that is great to have on the
mailing list instead of just a private group. I wish I had seen it sooner.
In Venkat's defense, I believe
On 04/03/2012 11:16 AM, Mark McLoughlin wrote:
4) nova.exception
nova.rpc defines two exceptions based on NovaException. They could be
based on OpenstackException from openstack-common, instead. There's
also an RPC exception defined in nova.exception, but that can be moved
into nova.rpc with
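A minimal sketch of that rebasing follows. The %-formatting message pattern mirrors how these exception bases work, but the exact attribute names and signatures here are assumptions, not the real openstack-common code:

```python
# Sketch: base the rpc exceptions on an OpenstackException-style class
# from openstack-common instead of NovaException. Names are illustrative.

class OpenstackException(Exception):
    message = "An unknown exception occurred"

    def __init__(self, **kwargs):
        try:
            self._error_string = self.message % kwargs
        except KeyError:
            # Missing format args: fall back to the raw template
            self._error_string = self.message
        super().__init__(self._error_string)

    def __str__(self):
        return self._error_string

class RemoteError(OpenstackException):
    message = "Remote error: %(exc_type)s %(value)s"

err = RemoteError(exc_type="ValueError", value="bad flavor")
print(err)  # Remote error: ValueError bad flavor
```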
Included one answer for you below :)
-Dolph
On Tue, Apr 3, 2012 at 9:53 AM, Pierre Amadio
pierre.ama...@canonical.comwrote:
The ubuntu user is associated with the admin role (I know I did it with
keystone user-role-add, although I'm not sure how to list the roles of a
given user to double
On 04/03/2012 12:23 PM, Jay Pipes wrote:
On 04/03/2012 09:36 AM, Russell Bryant wrote:
Thanks for sharing this information. For the future, I think this type
of analysis and discussion is something that is great to have on the
mailing list instead of just a private group. I wish I had seen
However, I'm not sure how people would feel about having both
openstack.common.exception and nova.exception in the tree since they
overlap quite a bit. I like being able to do work in pieces, but
having them both in the tree leaves the code in an odd state, so we
need some end goal in
On 04/03/2012 01:40 PM, John Garbutt wrote:
However, I'm not sure how people would feel about having both
openstack.common.exception and nova.exception in the tree since they
overlap quite a bit. I like being able to do work in pieces, but
having them both in the tree leaves the code in an odd
Your api-paste.ini is very out of date. Here is the section from the current
version:
# Metadata #
[composite:metadata]
use = egg:Paste#urlmap
/: metaversions
/latest: meta
/1.0: meta
/2007-01-19: meta
/2007-03-01: meta
/2007-08-29: meta
/2007-10-10: meta
/2007-12-15:
We use NFS-backed instances a lot, and this problem is normally due to
wrong permission management on your filer and/or client.
Check that users other than root can write to the NFS share (especially the libvirt user).
Diego
--
Diego Parrilla
CEO | www.stackops.com
http://www.stackops.com/
On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:
Hi John,
Maybe the problem with host aggregates is that it too quickly became
something that was linked to hypervisor capability, rather than being the
more general mechanism of which one form of aggregate could be linked to
hypervisor
It is working!
You are in the BIOS screen, so you probably just need to wait (software-mode
booting can take a while).
If the VM doesn't ever actually boot, you may be attempting to boot a
non-bootable image.
On 04/03/2012 08:20 AM, Lillie Ross-CDSR11 wrote:
My question is, should I base our new installation directly off the Essex
branch in the git repository, or use the packages that will be deployed as part
of the associated Ubuntu 12.04LTS release? With Diablo, I was forced to use
packages
+1
It is certainly worth a session to decide how to modify the scheduler.
I suspect all you would need to do on the compute side is add stub
implementations for the add/remove host operations in the compute manager base
class (rather than throwing NotImplemented exceptions), and maybe an extra
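The stub idea might look roughly like this. The class and method names are guesses for illustration, not the actual compute manager API:

```python
# Sketch: replace NotImplementedError with no-op stubs in a compute base
# class, so the scheduler can call add/remove host against any driver.
# Class and method names are illustrative assumptions.

class ComputeDriverBase:
    def add_aggregate_host(self, aggregate_id, host):
        # Default: no-op, so drivers without aggregate support still work
        pass

    def remove_aggregate_host(self, aggregate_id, host):
        pass

class XenDriver(ComputeDriverBase):
    """Only drivers that actually support aggregates override the stubs."""
    def __init__(self):
        self.hosts = set()

    def add_aggregate_host(self, aggregate_id, host):
        self.hosts.add(host)

driver = XenDriver()
driver.add_aggregate_host("agg-1", "compute-01")
print(sorted(driver.hosts))  # ['compute-01']

# A driver with no aggregate support simply does nothing instead of raising:
ComputeDriverBase().add_aggregate_host("agg-1", "compute-02")
```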
Hi Adam,
Thanks for the update. Actually, I'm in the process of reading about your
testing and integration framework for Openstack
(http://ubuntuserver.wordpress.com/2012/02/08/704/) as I write this.
Yes, Keystone integration seemed to be the big bugaboo in the Ubuntu/Diablo
release. I've
Hi all,
I was looking into how hostnames are selected for an instance in OpenStack.
There seem to be several ways; I have discovered a couple and just wanted to
check whether I am correct.
1. Use the metadata api, and send in a user-data section that specifies the
hostname, then use say the
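For option 1, a minimal sketch of pulling a hostname out of a cloud-config style user-data blob. The format and key name here are assumptions; inside a real instance you would first fetch the blob from the metadata service rather than inline it:

```python
# Sketch: extract a "hostname" key from cloud-config style user-data.
# In a real instance this blob would come from the metadata service
# (e.g. http://169.254.169.254/latest/user-data); here it is inlined.

def parse_hostname(user_data):
    for line in user_data.splitlines():
        if line.strip().startswith("hostname:"):
            return line.split(":", 1)[1].strip()
    return None

blob = "#cloud-config\nhostname: my-instance\n"
print(parse_hostname(blob))  # my-instance
```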
+1.
Interesting scenarios open up if we can have the scheduler intelligently
direct workloads based on config/metadata.
johnpur
From: openstack-bounces+john=openstack@lists.launchpad.net
[mailto:openstack-bounces+john=openstack@lists.launchpad.net] On Behalf
Of Vishvananda
Try setting admin_token = 012345SECRET99TOKEN012345 in
/etc/keystone/keystone.conf
-Joshua
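That is, an excerpt like this in /etc/keystone/keystone.conf (the section name is an assumption based on the usual layout):

```ini
# /etc/keystone/keystone.conf (excerpt); token value from the message above
[DEFAULT]
admin_token = 012345SECRET99TOKEN012345
```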
On Apr 2, 2012, at 6:26 PM, Vijay wrote:
Hello,
Installed keystone-2012.1~rc1.tar.gz.
Following this url to configure:
I had a problem like this when the umask was locked down. Setting the
umask to 022 in the init script for nova-compute solved my problem.
On Tue, Apr 3, 2012 at 1:56 PM, Diego Parrilla Santamaría
diego.parrilla.santama...@gmail.com wrote:
We use nfs backed instances a lot, and this problem
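One way to script the check Diego describes: inspect whether anyone besides the owner can write to the directory nova uses for instance files. The real path (/var/lib/nova/instances) and the libvirt user name vary by distro, so the demo below runs against a temp directory standing in for the share:

```python
# Sketch of the NFS permission check: is the instances directory writable
# by more than just root/the owner? The real target path is an assumption
# (commonly /var/lib/nova/instances); a temp dir stands in for it here.
import os
import stat
import tempfile

def writable_by_others(path):
    """True if group or other users have write permission on path."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

demo = tempfile.mkdtemp()          # stand-in for the NFS-backed share
os.chmod(demo, 0o700)
print(writable_by_others(demo))    # False: only the owner can write
os.chmod(demo, 0o775)
print(writable_by_others(demo))    # True: group members can write too
```

If the check comes back False on the real share, the libvirt/qemu user on the compute node won't be able to touch the instance files, which matches the failure mode in this thread.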
I accidentally posted this to openstack-operat...@lists.openstack.org.
-- Forwarded message --
Everyone,
After googling around I cannot find any docs on how to set up OpenStack
with ESXi as a hypervisor.
This official link is dead:
Just confirming what Sandy said; I am playing around with SpiffWorkflow.
I'll post my findings when I'm done on the wiki under the Nova
Orchestration page.
So far I've found some of the documentation lacking and concepts
confusing, which has resulted in a steep learning curve and made it
Hello everyone,
The tarball for the last (this time we mean it) release candidate for
OpenStack Image Service (Glance) 2012.1 is now available at:
https://launchpad.net/glance/essex/essex-rc3
This RC3 will be formally released as the 2012.1 (Essex) final version
next week, unless a critical
Can't wait to hear about it Ziad!
Very cool!
-S
From: Ziad Sawalha
Sent: Tuesday, April 03, 2012 6:56 PM
To: Sriram Subramanian; Dugger, Donald D; Sandy Walsh
Cc: nova-orchestrat...@lists.launchpad.net; openstack@lists.launchpad.net
Subject: Re:
On Tue, Apr 3, 2012 at 7:10 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
On Apr 3, 2012, at 6:45 AM, Day, Phil wrote:
Hi John,
Maybe the problem with host aggregates is that it too quickly became
something that was linked to hypervisor capability, rather than being the
more general
In an effort to further align OpenStack API clients, Jay Pipes, Monty Taylor
and I have set up the python-glanceclient project. It is not intended to
be a drop-in replacement for the existing client that lives in Glance, but a
complete rewrite with a shiny new interface that maintains
Hi Ziad,
Thanks for taking the effort. Do you know which of the 43 workflow
patterns are relevant to us? I'm slightly concerned that
SpiffWorkflow might be overkill and bring unnecessary complexity
into the game. There was a discussion a while ago suggesting that
relatively simple
On Tue, 03 Apr 2012 16:53:05 +0200
Pierre Amadio pierre.ama...@canonical.com wrote:
[filter:tokenauth]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
service_port = 5000
service_host = 192.168.122.102
auth_port = 35357
auth_host = 192.168.122.102
auth_protocol = http
On Tue, Apr 3, 2012 at 4:53 PM, Pierre Amadio
pierre.ama...@canonical.com wrote:
I am trying to use swift and keystone together (on Ubuntu precise), and
failing to do so.
roles: [{"id": "60a1783c2f05437d91f2e1f369320c49", "name": "Admin"},
[...]
[filter:keystone]
paste.filter_factory
Fix proposed to branch: master
Review: https://review.openstack.org/6191
** Changed in: openstack-common
Status: New => In Progress
--
You received this bug notification because you are a member of OpenStack
Common Drivers, which is the registrant for openstack-common.
Title: precise-openstack-essex-python-quantumclient-trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise-openstack-essex-python-quantumclient-trunk/28/
Project: precise-openstack-essex-python-quantumclient-trunk
Date of build: Tue, 03 Apr 2012 02:01:00 -0400
Build
Title: precise-openstack-essex-glance-trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise-openstack-essex-glance-trunk/161/
Project: precise-openstack-essex-glance-trunk
Date of build: Tue, 03 Apr 2012 13:01:00 -0400
Build duration: 4 min 9 sec
Build cause: Started by