I think my glance-api.conf and glance-registry.conf are right
[keystone_authtoken]
auth_host = 10.1.82.40
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = glance
auth_uri = http://10.1.82.40:5000
[paste_deploy]
flavor = keystone
debug info
> [keystone_authtoken]
> auth_host = 10.1.82.40
> auth_port = 35357
> auth_protocol = http
> admin_tenant_name = service
> admin_user = glance
> admin_password = glance
> auth_uri = http://10.1.82.40:5000
>
> debug info
>
> curl -i -X GET -H 'X-Auth-Token: b15d34f87160452e8e8bcc7a1d851c43' -H
>
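The quoted curl command is cut off after the token header. For what it's worth, a token-authenticated request against the Glance API usually just needs the `X-Auth-Token` header; the sketch below builds such a request in Python against Glance's default API port 9292 (the port and the `/v1/images` path are assumptions, not from the thread — check your endpoint list):

```python
from urllib.request import Request

# Hypothetical completion of the truncated curl above: the token from the
# thread plus a Content-Type header, aimed at Glance's default API port
# 9292 (an assumption; verify against your service catalog).
token = "b15d34f87160452e8e8bcc7a1d851c43"
req = Request(
    "http://10.1.82.40:9292/v1/images",
    headers={"X-Auth-Token": token, "Content-Type": "application/json"},
)
# Only building the request here; urlopen(req) would need the live endpoint.
print(req.full_url, sorted(req.headers))
```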
Hi Don,
Thanks for your answer.
I'm extending OpenStack (in the context of my Bachelor's thesis) to track
hardware information such as CPU temperature. Among other things I extended
resource_tracker.py. I only wanted to increase this interval for testing
purposes. I'm aware of the costs and maybe have
> The object servers should only be talking to each other during replication.
> They should not talk to the proxy, and probably not the load balancer. Can
> you provide the output of "swift-ring-builder /etc/swift/object.builder" and
> more details on the network configuration of this system.
Hi Everyone,
The Call for Speakers is OPEN for the November OpenStack Summit in Paris!
Submit your talks here:
https://www.openstack.org/summit/openstack-paris-summit-2014/call-for-speakers/.
There are a few new speaking tracks in the Summit lineup this year so please
review the below list bef
Hello Everyone,
I am looking for evacuation support in openstack.
From what I could find, there are the following two commands:
1. nova evacuate evacuated_server_name host_b --on-shared-storage
2. nova host-evacuate command
However, both seem to have some downtime.
What I'm looking for is somet
This seems apropos given something I ran into today with
/var/lib/libvirt/qemu/save not having enough capacity to store my suspended
instances. If the RAM is freed up (and state saved to disk), then the
required size of the qemu save partition is pretty big. Based on a rough
back-of-napkin calcula
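The back-of-napkin arithmetic here comes down to summing the RAM of every instance that might be managed-saved at once, since a save image is roughly the size of the guest's RAM. A minimal sketch — all figures below are illustrative assumptions, not numbers from the thread:

```python
# Rough sizing for /var/lib/libvirt/qemu/save: a managed-save image is on
# the order of the guest's RAM, so the partition must hold the total RAM of
# every instance that could be suspended at the same time.
# All figures are illustrative assumptions.
instances = 20              # instances that might be suspended concurrently
ram_gb_per_instance = 8     # flavor RAM
headroom = 1.1              # ~10% slack for save-image metadata

required_gb = instances * ram_gb_per_instance * headroom
print(f"provision at least {required_gb:.0f} GB")  # ~176 GB
```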
On Tue, 1 Jul 2014 10:50:15 +0100
Diogo Vieira wrote:
> Can you tell me if this is normal behaviour? If so, how will
> this scale when I add more objects? Will it keep getting more
> and more CPU usage?
Dunno if it's normal or not, but clusters installed with default
parameters do that. My nodes
Ah, your part power is too high for that few devices. I would have
recommended 14 rather than 18, which is good enough to scale up to ~500
devices before you might worry about maybe running into balancing issues
that *could* make it difficult to run your cluster more than 70-80% full,
but at that p
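The arithmetic behind that advice: with part power P the ring has 2**P partitions spread across all devices, and you want enough partitions per device for the builder to balance the ring. A quick sketch using the numbers quoted above:

```python
# 2**part_power partitions are divided among the devices; very roughly,
# tens of partitions per device is the floor below which balancing gets hard.
def partitions_per_device(part_power: int, devices: int) -> float:
    return 2 ** part_power / devices

# Part power 14 at the quoted ~500-device ceiling:
print(partitions_per_device(14, 500))   # ~33 partitions per device
# Part power 18 with only a handful of devices is massively oversized:
print(partitions_per_device(18, 4))     # 65536 partitions per device
```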
Greetings,
This is my first post. Sorry for this beginner's question.
I have read some docs on how to configure OpenStack to use highly available
RabbitMQ. It looks like simply configuring services with all queue nodes in
that HA cluster would work.
--Do the services that use HA queues ensure the
An oslo.messaging Target can be created with fanout set to true to allow a
message to be consumed by multiple consumers (targets); however, it does
not really work. Looking through the oslo.messaging test cases, there is no
test case that actually tests fanout=True. Is this a bug, or am I not using it
right
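For what it's worth, the intended semantics of fanout=True on a Target are that every listening server gets its own copy of a cast, instead of each message being load-balanced to exactly one server. A library-free sketch of that distinction (conceptual only; this is not oslo.messaging code):

```python
# Conceptual illustration of fanout vs. round-robin delivery; not
# oslo.messaging itself. With fanout, every consumer receives each message;
# without it, each message goes to exactly one consumer.
import itertools

def deliver(messages, consumers, fanout):
    received = {c: [] for c in consumers}
    if fanout:
        for m in messages:
            for c in consumers:
                received[c].append(m)   # every consumer gets a copy
    else:
        rr = itertools.cycle(consumers)
        for m in messages:
            received[next(rr)].append(m)  # one consumer per message
    return received

fan = deliver(["update"], ["worker-1", "worker-2"], fanout=True)
one = deliver(["update"], ["worker-1", "worker-2"], fanout=False)
print(fan)  # both workers receive the message
print(one)  # only worker-1 receives it
```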