You're right. The 301 is returned by my HTTP proxy server. The reason is that
the httplib2 Python module used by the keystone client picks up the proxy
server from the http_proxy environment variable, but the contents of the
no_proxy environment variable are not actually consulted when establishing
the connection.
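One application-level workaround is to check no_proxy yourself before letting the client go through the proxy. A minimal sketch of that check, assuming the usual comma-separated host-suffix semantics for no_proxy; `should_bypass_proxy` is a hypothetical helper, not part of httplib2 or keystoneclient:

```python
import os

def should_bypass_proxy(host):
    """Return True if `host` matches an entry in the no_proxy
    environment variable (a comma-separated list of hostnames or
    domain suffixes) -- the check httplib2 does not perform."""
    no_proxy = os.environ.get("no_proxy") or os.environ.get("NO_PROXY") or ""
    for entry in no_proxy.split(","):
        entry = entry.strip()
        if not entry:
            continue
        # Exact hostname match, or suffix match for domain entries.
        if host == entry or host.endswith("." + entry.lstrip(".")):
            return True
    return False

# Example (endpoint names are assumptions for illustration):
os.environ["no_proxy"] = "localhost,127.0.0.1,.internal.example.com"
print(should_bypass_proxy("keystone.internal.example.com"))  # True
print(should_bypass_proxy("example.org"))                    # False
```

When this returns True, one could drop the proxy (e.g. by passing no proxy_info to httplib2) for that particular endpoint instead of relying on the environment.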
No, not that I'm aware of -- at least not on the same compute node...
You can only specify public_interface=XXX for a single interface (or
bridge), which is then used for all floating IPs for the VMs on a compute node.
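In nova.conf that looks like the fragment below (a minimal sketch; eth1 is an assumed interface name for this deployment):

```ini
# nova.conf -- all floating IPs on this compute node are bound to
# this one interface (or bridge); per-floating-IP interfaces are
# not supported by this option.
public_interface=eth1
```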
Best,
-jay
On 08/20/2012 12:13 PM, Juris wrote:
Greetings everyone,
Just a quick
Btw, using
nova image-create --poll 4dcd5bb6-c65b-47dd-9c87-ba3fed624e22 Instance
works fine (the nova command is run on the 1st node), it just creates a new
image and not a snapshot.
Best
--
Alessandro Tagliapietra | VISup srl
piazza 4 novembre 7
20124 Milano
http://www.visup.it
Il giorno
Hello,
I have two questions regarding OpenStack Keystone:
Q1) The Folsom release supports domains. A domain can contain multiple tenants,
and a tenant cannot be shared between domains. Is this right? I think so, but
want to be sure.
Q2) Is it possible to have a cluster of Keystones to avoid
* rob_hirschf...@dell.com (rob_hirschf...@dell.com) wrote:
We've been discussing using Open vSwitch as the basis for non-Quantum Nova
Networking deployments in Folsom. While not Quantum, it feels like we're
bringing Nova Networking a step closer to some of the core technologies that
Vish,
I think I don't understand your statement fully.
Unless we use different hostnames, (hostname, hypervisor_hostname) must be the
same for all bare-metal nodes under a bare-metal nova-compute.
Could you elaborate on the following statement a little bit more?
You would just have to use a
Hi Pat,
On Aug 27, 2012, at 8:09 AM, pat p...@xvalheru.org wrote:
I have two questions regarding OpenStack Keystone:
Q1) The Folsom release supports domains. A domain can contain multiple tenants,
and a tenant cannot be shared between domains. Is this right? I think so, but
want to be sure.
I'm
Hi David,
I just checked out the code more extensively and I don't see why you need to
create a new service entry for each compute_node entry. The code in
host_manager that gets all host states explicitly fetches all compute_node
entries. I don't see any reason why multiple compute_node entries
A snapshot and an image are the same thing. The only difference is a piece of
metadata saying what instance the snapshot came from.
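That distinction can be seen in the image metadata itself. A minimal sketch, assuming the image record carries an `instance_uuid` property (and an `image_type` property) when it was created by `nova image-create`; the property names reflect what nova sets at snapshot time, but treat them as assumptions here:

```python
def is_snapshot(image):
    """A snapshot is just an image whose metadata records the
    instance it came from (property names as nova is assumed to
    set them when snapshotting)."""
    props = image.get("properties", {})
    return "instance_uuid" in props or props.get("image_type") == "snapshot"

# A plain uploaded image vs. one produced by `nova image-create`:
plain = {"name": "ubuntu-12.04", "properties": {}}
snap = {"name": "Instance",
        "properties": {"instance_uuid": "4dcd5bb6-c65b-47dd-9c87-ba3fed624e22",
                       "image_type": "snapshot"}}
print(is_snapshot(plain), is_snapshot(snap))  # False True
```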
Vish
On Aug 27, 2012, at 6:06 AM, Alessandro Tagliapietra
tagliapietra.alessan...@gmail.com wrote:
Btw, using
nova image-create --poll
On Sun, Aug 26, 2012 at 12:39 PM, rob_hirschf...@dell.com wrote:
Stackers,
I think this is a reasonable approach and appreciate the clarification of
use-cases.
We've been discussing using Open vSwitch as the basis for non-Quantum Nova
Networking deployments in Folsom. While not Quantum,
Hi Vish,
I think I understand your idea.
One service entry with multiple bare-metal compute_node entries is registered
at the start of the bare-metal nova-compute.
'hypervisor_hostname' must be different for each bare-metal machine, such as
'bare-metal-0001.xxx.com', 'bare-metal-0002.xxx.com',
-- Forwarded message --
From: andi abes andi.a...@gmail.com
Date: Mon, Aug 27, 2012 at 1:54 PM
Subject: nova-compute on VirtualBox with qemu
To: openstack-operat...@lists.openstack.org
I'm using Essex on VirtualBox, and am having some issues getting
nova-compute to not hate me
openstack-bounces+mjfork=us.ibm@lists.launchpad.net wrote on 08/27/2012
02:58:56 PM:
From: David Kang dk...@isi.edu
To: Vishvananda Ishaya vishvana...@gmail.com,
Cc: OpenStack Development Mailing List openstack-
d...@lists.openstack.org, openstack@lists.launchpad.net \
Hello all,
It seems that the only requirement for keys of HostManager.service_states is
that they be unique; they do not have to be valid hostnames or queues
(existing code already casts messages to topic.service-hostname, doesn't it,
Michael?). So, I tried 'host/bm_node_id' as the 'host' of
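The composite-key idea above can be sketched as follows (names are illustrative, not the actual HostManager code; the point is only that the keys are unique strings, not real hostnames):

```python
def register_bare_metal_nodes(service_states, host, bm_node_ids, capabilities):
    """Key service_states by a composite "host/bm_node_id" string so
    one bare-metal nova-compute can report N distinct bare-metal
    nodes under a single service."""
    for node_id in bm_node_ids:
        key = "%s/%s" % (host, node_id)  # unique, not a valid hostname
        service_states[key] = capabilities

states = {}
register_bare_metal_nodes(states, "bm-compute-1", [1, 2, 3], {"vcpus": 8})
print(sorted(states))  # ['bm-compute-1/1', 'bm-compute-1/2', 'bm-compute-1/3']
```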
David Kang dk...@isi.edu wrote on 08/27/2012 05:22:37 PM:
From: David Kang dk...@isi.edu
To: Michael J Fork/Rochester/IBM@IBMUS,
Cc: openstack@lists.launchpad.net (openstack@lists.launchpad.net)
openstack@lists.launchpad.net, openstack-bounces+mjfork=us ibm com
Michael,
It is a little confusing without knowing the assumptions behind your suggestions.
First of all, I want to make sure that we agree on the following:
1. one entry for each bare-metal machine in the 'compute_node' table.
2. one entry for the bare-metal nova-compute that manages N bare-metal
Hello folks
picking up this comment on the Development mailing list:
On Mon 27 Aug 2012 02:08:48 PM PDT, Jason Kölker wrote:
I've noticed that both this list and the old launchpad lists are being
used. Which is the correct list?
I sent the following message, with questions at the end that are
Hi,
Thank you so much for your help.
I replaced the file /usr/share/pyshared/nova/virt/libvirt/connection.py
with yours, but it looks like it did not work for me.
Does it need any additional steps?
Thanks,
Sam
On Sat, Aug 25, 2012 at 7:03 PM, heut2008 heut2...@gmail.com wrote:
for stable/essex
Stef,
It's pretty obvious to me that there should be a general list at
openst...@lists.openstack.org. The operators list is intended for operations
people who host OpenStack deployments, not a general OpenStack user audience.
I'd create the general openstack list, and set up a daily post to
Hi,
I have an Essex cluster with 6 compute nodes and one control node. All
compute nodes were working without any interruption, but for some reason all
instances in my cluster automatically rebooted. I have been trying for a
couple of days but have not figured out why this happened.
It's much appreciated if
Hey Sam,
Is it possible your hypervisors restarted? I see this entry in the logs:
2012-08-23 06:35:02 INFO nova.compute.manager
[req-f1598257-3f35-40e6-b5aa-d47a0e93bfba None None] [instance:
ce00ff1d-cf46-44de-9557-c5a0f91c8d67] Rebooting instance after nova-compute
restart.
Gabe
From:
There are at least end users, operators and developers in the OpenStack
technical ecosystem.
There are also 'OpenStack' related discussions that aren't technical in
nature.
It doesn't seem right to make the operators list a catchall.
For exporting from Launchpad, surely someone at Canonical
VTJ NOTSU Arata no...@virtualtech.jp wrote on 08/27/2012 07:30:40 PM:
From: VTJ NOTSU Arata no...@virtualtech.jp
To: Michael J Fork/Rochester/IBM@IBMUS,
Cc: David Kang dk...@isi.edu, openstack@lists.launchpad.net
(openstack@lists.launchpad.net) openstack@lists.launchpad.net,
One of the things I don't like in Essex is
that the autostart flag in nova.conf with KVM doesn't work with the
autostart feature of libvirt/kvm, so if for some reason you need to
restart nova-compute to apply some kind of modification, the instances get
soft/hard rebooted because now nova-compute
Thanks for your help, guys!
I guess this problem may have been caused by an automatic upgrade of the nova packages.
I just found these two lines in the file /var/log/kern.log:
Aug 23 06:34:33 cnode-01 kernel: [4955691.256036] init: nova-network main
process (9191) terminated with status 143
Aug 23 06:34:35 cnode-01