Hello everyone,
So I have started a VM and then allocated a 5GB volume. Everything goes
fine until I try to attach the volume to the VM, when I get this:
BadRequest: The supplied device path (/dev/vol_1) is invalid (HTTP 400)!
Does anyone know what it means or how to fix it? I am using Folsom.
Thanks for the clarification!
On Dec 12, 2012, at 2:59 AM, ZhiQiang Fan wrote:
What Andy said is right: you cannot list another tenant's instances,
but if you are an administrator you can use 'nova list --all_tenants'
to list all instances in all tenants.
Use 'nova help list' to get more help.
Hi,
2012-12-12 12:04:48 DEBUG nova.utils [-] backend module
'nova.db.sqlalchemy.migration' from
'/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc' from
(pid=14756) __get_backend /usr/lib/python2.6/site-packages/nova/utils.py:494
I get this error a lot when using the command
Hello,
the problem is related to the device name you used for the attachment
(/dev/vol_1).
It should be something like /dev/vdX, where X is a letter like 'c', 'd',
'e', etc.
Hope it helps.
MCo.
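For reference, a minimal sketch of the fix (the validation function is purely illustrative, and the server/volume IDs in the comment are placeholders):

```shell
# Sketch: check the device name before attaching. Valid virtio guest device
# paths look like /dev/vdb, /dev/vdc, ... - not arbitrary names like /dev/vol_1.
is_valid_device() {
  case "$1" in
    /dev/vd[a-z]) return 0 ;;
    *) return 1 ;;
  esac
}
is_valid_device /dev/vdc   && echo "/dev/vdc: ok"
is_valid_device /dev/vol_1 || echo "/dev/vol_1: rejected"
# Then attach with a valid name (IDs below are placeholders):
# nova volume-attach <server-id> <volume-id> /dev/vdc
```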
On Wed, Dec 12, 2012 at 10:03 AM, Skible OpenStack
skible.openst...@gmail.com wrote:
Hello
Hi all,
Can anybody help me with this? When I type 'nova-manage service list', I
cannot see nova-compute among the services, and nova-network shows
XXX. What should I do?
Thank you all.
2012-12-12 10:36:53 DEBUG nova.utils
[req-6e9f672a-c484-40be-9796-a9b8b000358f None None] backend
1. Compute service - Looks like your Nova Compute service did not
register itself. If the Compute service is not running on the same host as the
other Nova services, check the 'sql_connection' string in nova.conf.
2. Network service - The network service hasn't updated its heartbeat;
be sure you have configured NTP on all servers.
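The XXX marker just means the service's last heartbeat looks stale, which is why NTP matters. A sketch of the idea (the 60-second threshold is an assumption; the real value is the service_down_time option in nova.conf):

```shell
# service_alive LAST NOW: both as epoch seconds; the service counts as alive
# if its heartbeat is at most 60s old (assumed service_down_time default).
service_alive() {
  [ $(( $2 - $1 )) -le 60 ]
}
service_alive 1000 1030 && echo ":-)"   # fresh heartbeat
service_alive 1000 1200 || echo "XXX"   # stale: dead service or clock skew
```

If the clocks on your hosts drift, a healthy service can still show XXX, which is why the advice above is to configure NTP everywhere.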
2012/12/12 Gurjar, Unmesh unmesh.gur...@nttdata.com
1. Compute service – Looks like your Nova Compute service did
not register itself. If Compute service is not running on the same host as
other Nova services, check the
Hello,
If I understand it correctly, multi-host network mode is not supported
(yet) in quantum in Folsom.
I wonder what the recommended way is of running multiple network nodes
(for load balancing and bandwidth reasons) in quantum. Any documentation
links would be appreciated.
Thanks,
Xin
On 12/12/2012 05:58 PM, Xin Zhao wrote:
Hello,
If I understand it correctly, multi-host network mode is not supported
(yet) in quantum in Folsom.
I wonder what's the recommended way of running multiple network nodes
(for load balancing and
bandwidth concerns) in quantum? Any documentation
Dear all,
I am running FlatDHCPNetwork. I have two interfaces, em1 and em2.
- em1 is my flat_interface for fixed (192.168.15.0/24) and node
(192.168.14.0/24) ips.
- em2 is my public_interface for floating ips (192.168.16.0/24).
When I create an instance, I notice that the following iptables rule
On Tue, 11 Dec 2012, João Soares wrote:
Hi,
I have set up a CentOS VM according to this tutorial:
https://www.ibm.com/developerworks/mydeveloperworks/wikis/home/wiki/OpenStack/page/Creating%20qcow2%20CentOS%20Image%20for%20OpenStack?lang=en
I uploaded it to glance and can boot up a
Hello,
I have just reinstalled Folsom on CentOS 6.3.
I have a very slow and nearly inoperative dashboard. I think it might be
related to qpidd...?
I didn't see anything in the HTTP error log.
Thanks,
Andrew
some logs from api.log:
2012-12-12 17:51:51 INFO nova.api.openstack.wsgi
P.S. It is especially slow when clicking on Overview, when switching between
Project and Admin, and when logging in.
On Dec 12, 2012, at 5:57 PM, Andrew Holway wrote:
Hello,
I have just reinstalled folsom on centos 6.3
I have a very slow and nearly inoperative dashboard. I think it might be
related to
Greetings all!
I'm looking into using keyring as a way to (optionally) remove clear-text
passwords from the various config files. (See
https://blueprints.launchpad.net/oslo/+spec/pw-keyrings for details.)
One of the comments I got back is that I should have the oslo build
dependency on keyring
Hi guys, we just released an installer for Cinder Volume on Windows Server 2012:
http://www.cloudbase.it/cinder-volume-on-windows-storage-server-2012/
One of the great advantages of integrating Windows solutions in the OpenStack
ecosystem is the ease of management and deployment, and Cinder is
On Wed, Dec 12, 2012 at 11:02 AM, Alessandro Pilotti a...@pilotti.it wrote:
Hi guys, we just released an installer for Cinder Volume on Windows
Server 2012:
http://www.cloudbase.it/cinder-volume-on-windows-storage-server-2012/
One of the great advantages of integrating Windows solutions in
I did not have the nova-network process running.
Thanks,
Andrew
On Dec 12, 2012, at 6:01 PM, Andrew Holway wrote:
P.S. It is especially slow clicking on Overview, between Project and Admin
and Logging in.
On Dec 12, 2012, at 5:57 PM, Andrew Holway wrote:
Hello,
I have just reinstalled
Hi,
I have two hosts in my OpenStack setup: blade03 and blade04. I have set up my
OpenStack with VLAN networking. The instances are being created on the
specified VLANs correctly.
The problem is that I cannot ping instances on blade03 from blade04. I can ping
blade04 instances from blade04.
Check your switch.
Make sure the ports are trunked. Make sure they have access to the vlans
desired. All ports.
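A quick way to verify the trunk from the hosts themselves is to watch for tagged frames while pinging between instances; a sketch (the interface name is an assumption, substitute your physical trunk NIC):

```shell
# Sketch: watch for 802.1Q-tagged ICMP on the physical trunk port. If tagged
# frames leave blade04 but never arrive on blade03, the switch is dropping
# that VLAN on one of the ports.
watch_vlan_icmp() {
  iface=${1:-eth0}   # physical trunk interface; eth0 is a placeholder
  sudo tcpdump -e -n -i "$iface" vlan and icmp
}
```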
On Wed, Dec 12, 2012 at 10:33 AM, Andrew Holway a.hol...@syseleven.dewrote:
Hi,
I have two hosts in my openstack setup: blade03 and blade04. I have set up
my openstack with
I wanted to let everyone know that we have two new Stackers joining the
OpenStack Foundation staff. Claire Massey is joining us as Marketing
Coordinator, and Jim Blair has joined us as an Infrastructure Engineer. Welcome,
Claire and Jim!
We still have openings for another Infrastructure Engineer,
Hi,
Yes, it appears I misconfigured that VLAN.
Thanks,
Andrew
On Dec 12, 2012, at 8:06 PM, Matt Joyce wrote:
Check your switch.
Make sure the ports are trunked. Make sure they have access to the vlans
desired. All ports.
On Wed, Dec 12, 2012 at 10:33 AM, Andrew Holway
You can ignore this.
On 12/12/2012 06:06 AM, Andrew Holway wrote:
Hi,
2012-12-12 12:04:48 DEBUG nova.utils [-] backend module
'nova.db.sqlalchemy.migration' from
'/usr/lib/python2.6/site-packages/nova/db/sqlalchemy/migration.pyc' from
(pid=14756) __get_backend
Congrats :)
On Dec 12, 2012, at 10:24 PM, Mark Collier m...@openstack.org wrote:
I wanted to let everyone know that we have two new Stackers joining the
OpenStack Foundation staff. Claire Massey is joining us as Marketing
Coordinator, and Jim Blair has joined us as an Infrastructure
My question is: what does this extra dependency give us apart from extra
complexity?
I can't see any enhancement in security with this method.
Cheers,
Sam
On 13/12/2012, at 4:44 AM, Ken Thomas k...@yahoo-inc.com wrote:
Greetings all!
I'm looking into using keyring as a way to (optionally)
Hi.
https://review.openstack.org/#/c/17980/ adds psutil (packaged on
Ubuntu as python-psutil) as a dependency for nova. This (or something
very much like it) is required to traverse the process tree in the
is_parent_process() method in nova.utils. I need that functionality to
eliminate a race
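For context, the process-tree traversal psutil provides can also be done by walking parent PIDs with standard tools; a portable sketch (the function name is hypothetical, not nova's actual helper):

```shell
# is_ancestor ANC PID: succeed if process ANC appears on PID's parent chain.
# Uses ps(1) instead of psutil; Linux/Unix only.
is_ancestor() {
  anc=$1 pid=$2
  while [ "$pid" -gt 1 ] 2>/dev/null; do
    pid=$(ps -o ppid= -p "$pid" | tr -d ' ')   # step up to the parent
    [ -z "$pid" ] && return 1                  # process vanished mid-walk
    [ "$pid" = "$anc" ] && return 0
  done
  return 1
}
is_ancestor 1 $$ && echo "PID 1 is an ancestor of this shell"
```

The race Michael mentions is inherent to any such walk: a PID on the chain can exit and be reused between reads, which is part of why a library-level solution was being discussed.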
The short answer is that it gives you extra security... if you wish to
use it.
If you're fine with relying on the file permissions of nova.conf,
glance.conf, etc. to keep any baddies from seeing the clear-text
passwords in there, then you're right, it doesn't give you anything.
If, on the
Hi, list.
I observed something very strange: the disk usage and the partition size differ.
I made an instance using cirros.img, and it has 2GB RAM, 1 VCPU, and a 20GB disk.
But I can only use 9.2M of the 20GB.
Could anyone explain this? How can I use the whole 20GB root disk?
$ mount
/dev/vda1 on / type ext3
$
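This is expected with small images: the flavor sizes the block device (20GB), but the filesystem baked into the image keeps its original size. A sketch of growing it, assuming the ext3 root on /dev/vda1 shown in the mount output (run inside the guest):

```shell
# grow_root: grow the root filesystem to fill the backing device.
# Assumes ext3 on /dev/vda1 (as in the mount output above). If the partition
# itself is smaller than the disk, it must be extended first (e.g. with fdisk
# or cloud-utils' growpart) before resize2fs can do anything.
grow_root() {
  sudo resize2fs /dev/vda1   # grow ext3 online to the device size
  df -h /                    # confirm the new size
}
```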
On Thu, Dec 13, 2012 at 12:14 PM, 이창만 cm224@samsung.com wrote:
Hi, list.
I observed very strange things. Disk usage and partition size is different.
I made a instance using cirros.img and it has 2GB RAM, 1VCPU, 20GB Disk.
But I could just use 9.2M in 20GB.
Could anyone explain this? How
Don't worry. It's just a debug message.
You can disable debug mode in /etc/nova.conf with the option debug=false.
Regards,
JuanFra.
2012/12/12 Jay Pipes jaypi...@gmail.com
You can ignore this.
On 12/12/2012 06:06 AM, Andrew Holway wrote:
Hi,
2012-12-12 12:04:48 DEBUG nova.utils [-]
Hi Stackers,
What is the difference between ports 5000 and 35357?
When I run the glance command, I get the error message below. I googled the
message, but found nothing that addresses this issue.
root@Controller:~# glance index
ID Name Disk
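On the ports: 5000 is keystone's public service API and 35357 its admin API (the Folsom defaults). A sketch of probing both from the controller (the 127.0.0.1 host is an assumption; adjust to your endpoint):

```shell
# Both keystone endpoints answer an unauthenticated version request, so a
# quick probe distinguishes "wrong port" from "service down".
probe_keystone() {
  curl -s "http://127.0.0.1:5000/v2.0/"  >/dev/null && echo "service API up"
  curl -s "http://127.0.0.1:35357/v2.0/" >/dev/null && echo "admin API up"
}
```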
Stand down. Padraig has suggested a better way.
Michael
On Thu, Dec 13, 2012 at 10:02 AM, Michael Still mi...@stillhq.com wrote:
Hi.
https://review.openstack.org/#/c/17980/ adds psutil (packaged on
Ubuntu as python-psutil) as a dependency for nova. This (or something
very much like it) is
Hi Ken,
Yeah, OK, I agree it doesn't make it that much more complex, as long as the
dependency is packaged in the distros, which it is.
I'm still a little confused though.
If nova needs a clear-text password to be able to talk to the DB, for example,
then it's going to need to access this
Congrats to Claire and Jim !
Best Regards
Moe///
On Dec 12, 2012, at 16:24, Mark Collier m...@openstack.org wrote:
I wanted to let everyone know that we have two new Stackers joining the
OpenStack Foundation staff. Claire Massey is joining us as Marketing
Coordinator, and Jim Blair has
Hi guys,
Please ignore the second question; it was my mistake to miss some steps. I'd
like to raise another question if you have any ideas.
I know ports 5000 and 9292 are the default ports used by the services. What
about 8774 below? Could I use other ports?
keystone --os-token
Title: precise_essex_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_essex_deploy/18352/
Project: precise_essex_deploy
Date of build: Wed, 12 Dec 2012 04:41:19 -0500
Build duration: 16 sec
Build cause: Started by command line
Built on: master
Health
at 20121212
Title: raring_grizzly_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/282/
Project: raring_grizzly_nova_trunk
Date of build: Wed, 12 Dec 2012 15:01:12 -0500
Build duration: 17 min
Build cause: Started by an SCM change
Built
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/1993/
--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe
/tmp/hudson7758138291355497489.sh
+ OS_RELEASE=folsom
+