Re: [Openstack] [nova-network] question about nova-network

2012-11-20 Thread Ahmed Al-Mehdi
Hi Darren,

I have moved along a bit, but am now running into a different issue related to 
launching a VM.  The latest issue is an "RPC message timeout" in 
nova-network.  If you have a few minutes, I would really appreciate your help.

I have a two-node setup:
 - controller-node (hostname: bodega; running nova-* services, including 
nova-network, but no nova-compute)
 - compute-node (hostname: sonoma; running nova-compute)

Two issues:
 - I see nova-network on the controller-node sending a message to sonoma which is 
timing out.  Did I not set up a consumer of the message on the compute-node 
properly?  I am guessing rabbitmq-server is dropping the message on the floor 
(but there is no log entry about a dropped message; I probably need to adjust the 
logging config).  A quick check is sketched just below this list.
 - Why is nova-network assigning a floating IP to the VM instance?
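
One way to check the first point is to ask RabbitMQ directly whether anything is 
consuming the per-host network topic (queue name taken from the log below; run 
rabbitmqctl on the broker host):

rabbitmqctl list_queues name consumers | grep network
# "network.sonoma" should be listed with at least one consumer; if it is
# missing or shows 0 consumers, nothing on sonoma is listening on that topic.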

2012-11-18 15:50:29 DEBUG nova.network.manager 
[req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
ce1e819636744dc680fa5515f6475e87] [instance: 
4e80964e-5bd1-4df4-a517-223c79d55517] floating IP allocation for instance |%s| 
from (pid=1375) allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:315
2012-11-18 15:50:29 DEBUG nova.network.manager 
[req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
ce1e819636744dc680fa5515f6475e87] [instance: 
4e80964e-5bd1-4df4-a517-223c79d55517] network allocations from (pid=1375) 
allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:977
2012-11-18 15:50:29 DEBUG nova.network.manager 
[req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
ce1e819636744dc680fa5515f6475e87] [instance: 
4e80964e-5bd1-4df4-a517-223c79d55517] networks retrieved for instance: 
|[]| from (pid=1375) 
allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:982

2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making 
asynchronous call on network.sonoma ... from (pid=1375) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
d73be9ea76b3412493d0752abb9d5a02 from (pid=1375) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [], u'_msg_id': u'b2bc0715982846cd916a8ff61b2513af', 
u'_context_quota_class': None, u'_context_request_id': 
u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', u'_context_service_catalog': None, 
u'_context_user_name': None, u'_context_auth_token': '', u'args': 
{u'instance_id': 5, u'instance_uuid': u'4e80964e-5bd1-4df4-a517-223c79d55517', 
u'host': u'sonoma', u'project_id': u'ce1e819636744dc680fa5515f6475e87', 
u'rxtx_factor': 1.0}, u'_context_instance_lock_checked': False, 
u'_context_project_name': None, u'_context_is_admin': True, 
u'_context_project_id': None, u'_context_timestamp': 
u'2012-11-18T23:50:47.233052', u'_context_read_deleted': u'no', 
u'_context_user_id': None, u'method': u'get_instance_nw_info', 
u'_context_remote_address': None} from (pid=1375) _safe_log 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: 
{'project_name': None, 'user_id': None, 'roles': [], 'timestamp': 
u'2012-11-18T23:50:47.233052', 'auth_token': '', 'remote_address': 
None, 'quota_class': None, 'is_admin': True, 'service_catalog': None, 
'request_id': u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', 
'instance_lock_checked': False, 'project_id': None, 'user_name': None, 
'read_deleted': u'no'} from (pid=1375) _safe_log 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2012-11-18 15:50:52 DEBUG nova.utils [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 
None None] Got semaphore "get_dhcp" for method "_get_dhcp_ip"... from 
(pid=1375) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2012-11-18 15:50:52 DEBUG nova.utils [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 
None None] Got semaphore "get_dhcp" for method "_get_dhcp_ip"... from 
(pid=1375) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
FlatDHCPManager._publish_service_capabilities from (pid=1375) periodic_tasks 
/usr/lib/python2.7/dist-packages/nova/manager.py:172
2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
FlatDHCPManager._disassociate_stale_fixed_ips from (pid=1375) periodic_tasks 
/usr/lib/python2.7/dist-packages/nova/manager.py:172
2012-11-18 15:51:29 ERROR nova.openstack.common.rpc.common [-] Timed out 
waiting for RPC response: timed out
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common Traceback (most 
recent call last):
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", 
line 513, in ensure
2012-11-18 15:51:29 TRACE nova.openst

[Openstack] DB Migrations - switch to Alembic

2012-11-20 Thread Endre Karlson
Hi, I was wondering if anyone has tested using Alembic as a migration tool for
OpenStack projects instead of SQLAlchemy Migrate?

I want to use Alembic for new projects (Moniker and Bufunfa) instead
of Migrate if possible.

 https://etherpad.openstack.org/nova-backportable-db-migrations

Endre.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] python-swiftclient is missing from epel repo

2012-11-20 Thread Pádraig Brady

On 11/18/2012 09:01 PM, George Lekatsas wrote:

Hello,

After a yum update on CentOS 6.3 I get the following error:

--> Processing Dependency: python-swiftclient for package: 
python-glance-2012.2-3.el6.noarch
--> Finished Dependency Resolution
Error: Package: python-glance-2012.2-3.el6.noarch (epel)
Requires: python-swiftclient
  You could try using --skip-broken to work around the problem
  You could try running: rpm -Va --nofiles --nodigest

It seems that python-swiftclient is missing from the EPEL repo.


Note that's an update from Essex to Folsom.
If you want to stay on Essex please use:

http://repos.fedorapeople.org/repos/openstack/openstack-essex/epel-6/
http://repos.fedorapeople.org/repos/openstack/openstack-essex/README

If you do want to upgrade then please consider:

https://fedoraproject.org/wiki/Talk:Getting_started_with_OpenStack_EPEL

thanks,
Pádraig.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] snapshots not working in folsom

2012-11-20 Thread Wolfgang Hennerbichler
Hi,

after I upgraded to Folsom, snapshotting just stopped working. Horizon
doesn't complain, the log doesn't complain, but the snapshot is
reportedly not taken (the snapshot file isn't created, and I don't see a
qemu process creating it).

As I said, nova-compute on the server doesn't complain:

Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf
qemu-img snapshot -c fd5ab4d51e12435884c8a90f190f1f15
/var/lib/nova/instances/instance-00f3/disk from (pid=13330) execute
/usr/lib/python2.7/dist-packages/nova/utils.py:176
Result was 0 from (pid=13330) execute
/usr/lib/python2.7/dist-packages/nova/utils.py:191
Running cmd (subprocess): qemu-img convert -f qcow2 -O raw -s
fd5ab4d51e12435884c8a90f190f1f15
/var/lib/nova/instances/instance-00f3/disk
/var/lib/nova/instances/snapshots/tmpclLTZA/fd5ab4d51e12435884c8a90f190f1f15
from (pid=13330) execute /usr/lib/python2.7/dist-packages/nova/utils.py:176

Any hints or suggestions? Our customers really want this feature.

Wolfgang

-- 
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz

IT-Center
Softwarepark 35
4232 Hagenberg
Austria

Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerbich...@risc-software.at
http://www.risc-software.at

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Swift installation verification fails

2012-11-20 Thread Shashank Sahni
Hi,

I'm trying to install Swift 1.7.4 on Ubuntu 12.04. The installation is
multi-node, with keystone and swift (proxy+storage) running on separate
systems. Keystone is up and running perfectly fine. The Swift user and service
endpoints are created correctly to point to the swift_node. Swift is
configured and all its services are up. But during swift installation
verification, the following command hangs with no output.

swift -V 2 -A http://keystone_server:5000/v2.0
-U admin:admin -K admin_pass stat

I'm sure it's able to contact the keystone server, because if I
change admin_pass it throws an authentication failure error. It probably
fails at a later step which I'm unaware of.

Here is my proxy-server.conf file.

[DEFAULT]
# Enter these next two values if using SSL certifications
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 
user = swift

[pipeline:main]
#pipeline = healthcheck cache swift3 authtoken keystone proxy-server
pipeline = healthcheck cache swift3 authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:swift3]
use=egg:swift3#swift3

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = Member,admin, swiftoperator

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = 10
service_port = 5000
service_host = keystone_server
auth_port = 35357
auth_host = keystone_server
auth_protocol = http
auth_uri = http://keystone_server:5000/
auth_token = 
admin_token = 
admin_tenant_name = service
admin_user = swift
admin_password = 
signing_dir = /etc/swift

[filter:cache]
use = egg:swift#memcache
set log_name = cache

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

Any suggestions?

--
Shashank Sahni
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] nova: instance states and commands

2012-11-20 Thread Marco CONSONNI
Hello,



I’m playing with nova CLI and I found that there are several commands for
managing the current state of instances running in the cloud.



These are the states I found:



   - initial
   - build
   - active
   - shutoff
   - suspended
   - rescue
   - paused
   - reboot



And these are the nova commands for changing the current state:



   - boot
   - pause
   - reboot
   - rescue
   - resume
   - start
   - stop
   - suspend
   - unpause
   - unrescue



QUESTION: Is there any explanation on the meaning of the states? In
particular, what’s the difference between SUSPENDED, PAUSED and RESCUE?



Another command changes the status and, from what I found, it always
changes that to ERROR.

The command is “nova reset-state”.



QUESTION: what’s the semantics of this command?





Then, I found some other commands that sound like status management commands
but actually they are not:



   - Lock
   - Unlock



QUESTION: What’s the meaning of these?





Marco.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Folsom] [Cinder] Error when creating volumes

2012-11-20 Thread Nikola Pajtic
Hi all.

I have a problem with Cinder and Horizon, I was hoping someone could help
me.

When I try to create a new volume from Horizon, it does create it, but with
status "Error". When I check the cinder database, I can see the new volume entry in
cinder.volumes.

Attached below you will find all of the cinder-* logs, as well as apache
error_log from Horizon server. That's what happens when I click "Create
volume".

Besides Horizon, I can also create a volume from the CLI, using:

#> cinder --os-username nova --os-password openstack --os-tenant-name
service --os-auth-url http://localhost:5000/v2.0 create --display_name test
1

But with the same errors in the log files. As far as I can tell, the error which
causes this problem is from cinder-scheduler:

TRACE cinder.openstack.common.rpc.amqp NoValidHost: No valid host was
found. Is the appropriate service running?
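
(A sanity check that often narrows this down: make sure a cinder-volume service 
is actually up and registered with the scheduler, roughly:

ps aux | grep cinder-volume              # on the volume node
tail /var/log/cinder/cinder-volume.log   # log path as packaged on most distros
cinder-manage host list                  # assuming Folsom's cinder-manage kept
                                         # the host commands inherited from nova

NoValidHost from the scheduler typically means no cinder-volume service is 
running and reporting.)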

On the other hand, Apache log throws:

[error] unable to retrieve service catalog with token


Is it possible that the problem lies in the Keystone DB? This is how I created
the endpoint for Cinder:

{"adminurl": "http://192.168.0.12:8776/v1/$(tenant_id)s", "internalurl": "
http://192.168.0.12:8776/v1/$(tenant_id)s", "publicurl": "
http://192.168.0.12:8776/v1/$(tenant_id)s"}

If you need more info, I'd be happy to provide.

Thank you all in advance!
==> cinder-api.log <==
2012-11-20 11:02:18 INFO cinder.api.openstack.wsgi 
[req-63d985ca-e789-457d-a4ab-1e786f43ffa0 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] GET 
http://192.168.0.12:8776/v1/934fb5ed6e9a4accac8a24531b8238f1/snapshots/detail
2012-11-20 11:02:18 DEBUG cinder.api.openstack.wsgi 
[req-63d985ca-e789-457d-a4ab-1e786f43ffa0 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] Unrecognized Content-Type provided in request 
get_body /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:695
2012-11-20 11:02:18 DEBUG cinder.api.openstack.volume.volumes 
[req-63d985ca-e789-457d-a4ab-1e786f43ffa0 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] Removing options '' from query 
remove_invalid_options 
/usr/lib/python2.7/dist-packages/cinder/api/openstack/volume/volumes.py:356
2012-11-20 11:02:18 INFO cinder.api.openstack.wsgi 
[req-63d985ca-e789-457d-a4ab-1e786f43ffa0 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] 
http://192.168.0.12:8776/v1/934fb5ed6e9a4accac8a24531b8238f1/snapshots/detail 
returned with HTTP 200
2012-11-20 11:02:18 INFO cinder.api.openstack.wsgi 
[req-ccb91184-7a54-43db-8589-e0c470ae9e33 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] GET 
http://192.168.0.12:8776/v1/934fb5ed6e9a4accac8a24531b8238f1/volumes/detail
2012-11-20 11:02:18 DEBUG cinder.api.openstack.wsgi 
[req-ccb91184-7a54-43db-8589-e0c470ae9e33 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] Unrecognized Content-Type provided in request 
get_body /usr/lib/python2.7/dist-packages/cinder/api/openstack/wsgi.py:695
2012-11-20 11:02:18 DEBUG cinder.api.openstack.volume.volumes 
[req-ccb91184-7a54-43db-8589-e0c470ae9e33 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] Removing options '' from query 
remove_invalid_options 
/usr/lib/python2.7/dist-packages/cinder/api/openstack/volume/volumes.py:356
2012-11-20 11:02:18 INFO cinder.api.openstack.wsgi 
[req-ccb91184-7a54-43db-8589-e0c470ae9e33 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] 
http://192.168.0.12:8776/v1/934fb5ed6e9a4accac8a24531b8238f1/volumes/detail 
returned with HTTP 200
2012-11-20 11:02:18 INFO cinder.api.openstack.wsgi 
[req-5c8303ce-54a5-419a-afc4-b8f06add9489 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] POST 
http://192.168.0.12:8776/v1/934fb5ed6e9a4accac8a24531b8238f1/volumes
2012-11-20 11:02:18 AUDIT cinder.api.openstack.volume.volumes 
[req-5c8303ce-54a5-419a-afc4-b8f06add9489 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] Create volume of 5 GB
2012-11-20 11:02:18 DEBUG cinder.quota 
[req-5c8303ce-54a5-419a-afc4-b8f06add9489 ed04d6e1628c4f43a6b20829f7910720 
934fb5ed6e9a4accac8a24531b8238f1] Created reservations 
['f16c93bb-4e0e-4279-a315-fc22708bc544', 
'c423f91b-9ea4-4d6f-a2a1-b101c784c083'] reserve 
/usr/lib/python2.7/dist-packages/cinder/quota.py:663
2012-11-20 11:02:18 1472 DEBUG cinder.openstack.common.rpc.amqp [-] Making 
asynchronous cast on cinder-scheduler... cast 
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py:377
2012-11-20 11:02:18 1472 DEBUG cinder.openstack.common.rpc.amqp [-] Pool 
creating new connection create 
/usr/lib/python2.7/dist-packages/cinder/openstack/common/rpc/amqp.py:58
2012-11-20 11:02:18 1472 INFO cinder.openstack.common.rpc.common [-] Connected 
to AMQP server on localhost:5672

==> cinder-scheduler.log <==
2012-11-20 11:02:18 1473 DEBUG cinder.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [u'Member', u'anotherrole', u'admin'], 
u'_context_request_id': u'req-5c8303ce-54a5-419a-afc4-b8f06add9489', 
u'_cont

Re: [Openstack] GRE tunneling Quantum, Openvswitch, traffic not allowed

2012-11-20 Thread Robert van Leeuwen
> The GRE tunnel starts to work if I manually set following:
> "ovs-ofctl add-flow br-tun action=normal"

Found the issue: the provider:segmentation_id was not set up for the network.
(I created the network through the Dashboard; at that moment it did not create 
the seg_id. Probably the free range was not in the config file yet, because 
creating it through the dashboard works now.)
Because of this missing info the flows were not created. 
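
(For anyone hitting the same thing: assuming admin credentials and the Folsom 
quantum client, the field can be checked with

quantum net-show <network-id>

which should list provider:network_type and provider:segmentation_id when the 
OVS plugin is in use.)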

Everything seems to be working now.
So I finally got GRE / Openvswitch (kmod) / Quantum on Scientific Linux 6.3 up 
and running in the testlab :)

Cheers,
Robert van Leeuwen

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [ceilometer] How does ceilometer work with multi-DC scenario

2012-11-20 Thread Shengjie_Min
Hi,

Has anybody come across a scenario where you need to deploy two or more openstack 
swift or nova clusters, for whatever DR or HA reasons? How is Ceilometer going to 
cope with that? Just wondering whether there are any plans or blueprints addressing 
usage data replication/distinction/isolation among multiple DCs?

Thanks,
Shengjie
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Help testing auth 2.0 enabled FTP service

2012-11-20 Thread Juan J. Martinez
Hello list,

I'm the maintainer of (s)ftp-cloudfs, a FTP proxy to access Rackspace
Cloud Files and OpenStack Object Storage (Swift).

I've created a branch of the FTP server to support Auth 2.0 with
python-keystoneclient:

https://github.com/chmouel/ftp-cloudfs/tree/auth-2.0

I'm not using Swift with Keystone myself, so I'd appreciate some help
testing the new code.

There's an open issue, just add a comment if you give the new code a go:

https://github.com/chmouel/ftp-cloudfs/issues/29

Thanks in advance!

Regards,

Juan

-- 
Juan J. Martinez
Development, MEMSET

mail: j...@memset.com
 web: http://www.memset.com/

Memset Ltd., registration number 4504980. 25 Frederick Sanger Road,
Guildford, Surrey, GU2 7YD, UK.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [ceilometer] How does ceilometer work with multi-DC scenario

2012-11-20 Thread Julien Danjou
On Tue, Nov 20 2012, shengjie_...@dell.com wrote:

> Has anybody come across the scenario you need to deploy two or more
> openstack swift or nova clusters for whatever DR or HA reasons. How
> Ceilometer is going to cope with that? Just wondering is there any plans or
> blueprints addressing the usage data replication/distinguish/isolation among
> multi-DCs?

You can deploy several Ceilometer instances and use several databases, or just one
and use a different 'source' field for each of your regions/clusters to
differentiate where meters come from.

-- 
Julien Danjou
# Free Software hacker & freelance
# http://julien.danjou.info


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [devstack] How to purge, remove and wipe out devstack?

2012-11-20 Thread Gui Maluf
I'll try this Joshua, thanks!
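
For anyone else searching for this later, a fuller cleanup would be roughly the 
following (which /etc directories and databases exist depends on the services 
you enabled, so treat it as a sketch):

# stop the devstack screen session (devstack's default session name is "stack")
screen -X -S stack quit
# remove the code plus the config directories Tong mentioned
sudo rm -rf /opt/stack /etc/nova /etc/glance /etc/keystone /etc/cinder
# drop the databases devstack created
mysql -u root -p -e 'DROP DATABASE nova; DROP DATABASE glance; DROP DATABASE keystone;'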


On Mon, Nov 19, 2012 at 4:25 PM, Joshua Harlow wrote:

> You can try anvil also; it's
> similar, but I built it with the goal to do uninstalls/starts/stops from the
> start.
>
> Your mileage may vary though :-)
>
> -Josh
>
> From: Tong Li 
> Date: Monday, November 19, 2012 6:19 AM
> To: Gui Maluf 
> Cc: "openstack-bounces+litong01=us.ibm@lists.launchpad.net" <
> openstack-bounces+litong01=us.ibm@lists.launchpad.net>, "
> openstack@lists.launchpad.net" 
> Subject: Re: [Openstack] [devstack] How to purge, remove and wipe out
> devstack?
>
> There are configuration files under /etc for components that you enabled
> when you installed devstack, such as /etc/nova, /etc/glance, etc.
>
> Tong Li
> Emerging Technologies & Standards
>
>
>
> From: Gui Maluf 
> To: "openstack@lists.launchpad.net" ,
> Date: 11/19/2012 05:10 AM
> Subject: [Openstack] [devstack] How to purge, remove and wipe out
> devstack?
> Sent by: openstack-bounces+litong01=us.ibm@lists.launchpad.net
> --
>
>
>
> Hello,
> if I would like to wipe, remove and purge everything devstack installed
> and configured what should I do?
>
> rm -rf /opt/stack
> rm -rf /usr/local/bin/
>
> what else?
>
> thanks in advance!
> :)
>
> --
> guilherme maluf
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>
>


-- 
guilherme maluf
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Plans for Trusted Computing in OpenStack

2012-11-20 Thread Nicolae Paladi
Looking at it from an IaaS provider's point of view, it is indeed simpler to
approach the attestation as
an additional service that would ensure that the host has not been
corrupted in any way -- the client
then trusts the IaaS service provider, its software deployment processes
and its people.

And I do agree that when a malicious IaaS provider redeploys the stack, it
can, as you said, insert holes
at any layer of the stack, and with the current state of the TPM that would be, if not
impossible, then practically very hard to
identify during an attestation.

However, I would like to point out that I am considering the use case where it
is both needed and feasible to make a dedicated deployment for a customer,
or a set of customers, that require
such special features. In this case the deployment codebase could first be
verified and measured. Let's say that would
be the TTP doing it. Now when this same stack is deployed in production,
the TTP would just have to ensure
that the trusted stack has not been changed. That matches very closely what
is written in the second paragraph:
"
This is because IaaS provider builds and trusts its own deployed stack.  So
the remote attestation service is built to help IaaS provider check whether
the trusted stack has been changed on the hosts. Here the trust criteria is
clear;
"
the only difference being that the TTP knows and has measured the stack
prior to deployment.

Yes, it's not a fully encompassing solution, because the IaaS provider
could later silently migrate the VM to a different host with
a leaky hypervisor and all the trust is gone, but there are solutions to
that as well.

The point I made earlier about trusting the IaaS provider, its codebase
and its employees fits nicely with what is mentioned in the last
paragraph -- bugs and negligence can occur as well, and this approach aims
to prevent such things too.

Cheers,
/Nico.





On 16 November 2012 08:35, Tian, Kevin  wrote:

>  The assessment criteria is clear, that IaaS provider measures the whole
> cloud stack every time when it creates a new release, and then is compared
> to the run-time measurements on the running stack on the hosts in so-called
> attestation process. A failed attestation implicates the host is untrusted
> because the stack has been changed.
>
>
> This is because IaaS provider builds and trusts its own deployed stack.
>  So the remote attestation service is built to help IaaS provider check
> whether the trusted stack has been changed on the hosts. Here the trust
> criteria is clear;
>
>
> However, an external CA doesn’t build the stack, and I don’t know how a CA
> can judge whether a specific cloud stack is trusted or not, even when the IaaS
> provider is asked to share. A malicious IaaS provider can provide an evil
> stack with holes at any point. Here the ‘trust’ criteria is not clear.
>
>
> BTW, I do think your proposal might be a good complement to the existing
> remote attestation service from another angle. It’s possible that the remote
> attestation framework, or scheduler, or other involved components contain a
> bug, which leads to a VM running on an untrusted host even when the attestation
> fails. In that case, giving the VM the capability to detect its own secret sounds
> like an acknowledgement of a successful attestation, since a failed
> attestation can never inject the secret. :)
>
>
> Thanks
>
> Kevin
>
>
> *From:* Nicolae Paladi [mailto:n.pal...@gmail.com]
> *Sent:* Thursday, November 15, 2012 12:19 AM
> *To:* Tian, Kevin
> *Cc:* Dugger, Donald D; openstack; Li, Susie; Wei, Gang; Maliszewski,
> Richard L
>
> *Subject:* Re: [Openstack] Plans for Trusted Computing in OpenStack
>
>
> That is correct, the variety of versions, components and patches is the
> first thing
>
> that comes to everyone's mind with this approach. 
>
> But the idea is not to have a trusted third party/CA that would be able to
> assess _all_ combinations.
>
> With both approaches, the 'assessment' is left out as a stub or
> "assumption" (whichever you prefer).
>
> In this case it doesn't actually matter who does the assessment - the IaaS
> provider or a CA,
>
> since the assessment criteria are the unsolved issue.
>
>
> The use case we're examining here is when a certain IaaS provider is
> contracted to supply
>
> IaaS to a client with "special requirements", let's call it C. That would
> mean, e.g:
>
> 1. Potentially not all hosts would be used to deploy VMs for C
>
> 2. Potentially patches and versioning might go through a separate upgrade
> flow for those hosts
>
>
> To summarize and address your concerns here:
>
> * IMO once a client is concerned enough to require "trusted hosts", the
> case for using external assessment becomes valid
>
> (and assessment by the IaaS provider becomes useless)
>
> * wrt to variation of the software stack, the big issue is the assessment
> criteria rather than t

Re: [Openstack] nova.virt.xenapi.driver [-] Got exception: ['XENAPI_MISSING_PLUGIN', 'xenhost']

2012-11-20 Thread John Garbutt
DevStack is able to automate this whole process for you, if you follow the 
XenServer Readme:
https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md
It basically installs all the plugins, and creates the VM that runs the nova 
services for you. Understanding the networking can be a little tricky (because 
of some assumptions the scripts make) if you don’t follow the example 
networking exactly; but I am willing to help you through that.

If you are going more manually, the docs have some more info on the issues you 
might be hitting (and some info on possible networking options):
http://docs.openstack.org/folsom/openstack-compute/admin/content/introduction-to-xen.html#xenapi-install

The above will point you towards the readme about the required XenAPI plugins:
https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/README

I hope that helps.

John

From: openstack-bounces+john.garbutt=citrix@lists.launchpad.net 
[mailto:openstack-bounces+john.garbutt=citrix@lists.launchpad.net] On 
Behalf Of Mohammed Naser
Sent: 18 November 2012 3:27 PM
To: Afef MDHAFFAR
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] nova.virt.xenapi.driver [-] Got exception: 
['XENAPI_MISSING_PLUGIN', 'xenhost']

Hi there,

2012-11-15 19:57:05 DEBUG nova.virt.xenapi.driver [-] Got exception: 
['XENAPI_MISSING_PLUGIN', 'xenhost'] from (pid=25140) _unwrap_plugin_exceptions 
/opt/stack/nova/nova/virt/xenapi/driver.py:754

Please log in to your main XenServer dom0 and make sure you have the plugins 
installed from source:

https://github.com/openstack/nova/tree/master/plugins/xenserver/xenapi
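
Roughly (paths per the plugins README; adjust to where you checked out nova):

# on the XenServer/XCP dom0
cp nova/plugins/xenserver/xenapi/etc/xapi.d/plugins/* /etc/xapi.d/plugins/
chmod a+x /etc/xapi.d/plugins/*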

Good luck

On Thu, Nov 15, 2012 at 3:22 PM, Afef MDHAFFAR  wrote:
The first problem has been solved by synchronizing the 2 nodes.
But, I could not find a solution to the second one -- nova-compute does not 
want to start

2012/11/15 Afef MDHAFFAR 
Hi all,

I tried to add a third compute node to a running openstack platform (composed 
of a head node and a compute node).
I use Ubuntu 12.04 + XCP + folsom (stable release). And, I use devstack to 
install the compute node.
The first problem is that the 2nd (new) compute node is not able to see that 
the head node nova services are enabled, however it is still able to detect 
that nova services of the first compute node are running and available.
nova-manage service list
2012-11-15 20:02:46 DEBUG nova.utils [req-44c68f9f-b9f4-498b-a081-c0273e23e05f 
None None] backend  from (pid=25406) __get_backend 
/opt/stack/nova/nova/utils.py:502
Binary   Host Zone Status   
  State Updated_At
nova-consoleauth DevStackOSDomU   nova enabled  
  XXX   2012-11-15 20:01:37
nova-certcomputeDomU01nova enabled  
  :-)   2012-11-15 20:02:30
nova-network computeDomU01nova enabled  
  :-)   2012-11-15 20:02:33
nova-scheduler   computeDomU01nova enabled  
  :-)   2012-11-15 20:02:33
nova-compute computeDomU01nova enabled  
  :-)   2012-11-15 20:02:30
nova-network DevStackOSDomU   nova enabled  
  XXX   2012-11-15 20:01:40
nova-scheduler   DevStackOSDomU   nova enabled  
  XXX   2012-11-15 20:01:40
nova-certDevStackOSDomU   nova enabled  
  XXX   2012-11-15 20:01:35
nova-compute DevStackOSDomU   nova enabled  
  XXX   2012-11-15 20:01:40
nova-network computeDomU02nova enabled  
  :-)   2012-11-15 20:02:43
nova-compute computeDomU02nova enabled  
  XXX   None
nova-scheduler   computeDomU02nova enabled  
  :-)   2012-11-15 20:02:43
nova-certcomputeDomU02nova enabled  
  :-)   2012-11-15 20:02:43

The nova-services are of course running on the head node. Actually, launching 
"nova-manage service list" on the head node or the first compute node returns 
the following output:
nova-manage service list
2012-11-15 20:05:08 DEBUG nova.utils [req-47839159-dc05-4947-a070-66b9f0db36e7 
None None] backend  from (pid=6912) __get_backend 
/opt/stack/nova/nova/utils.py:502
Binary   Host Zone Status   
  State Updated_At
nova-consoleauth DevStackOSDomU   nova enabled  
  :-)   2012-11-15 20:04:10
nova-certcomputeDomU01nova enabled  
  :-)   2012-11-15 20:05:03
nova-network computeDomU01nova enabled  
  :-)   2012-11-15 20:05:07
nova-scheduler   computeDomU01nova enabled  
  :-)   2012-11-15 

Re: [Openstack] Networking issues for openstack on XCP

2012-11-20 Thread John Garbutt
There are docs here:
http://docs.openstack.org/folsom/openstack-compute/admin/content/introduction-to-xen.html
And networking info here:
http://docs.openstack.org/folsom/openstack-compute/admin/content/xenapi-flat-dhcp-networking.html

Let me know what bits are confusing, and I will make an effort to improve the 
docs around that area, and help you through things.

Firstly, quantum support in XCP (with OVS) is still a work in progress. Help to 
review the changes in Gerrit are very welcome!

Now, lets look at your questions...

 > 1 Controller node: no Xen, only Ubuntu 12.04, everything for openstack 
 > service except for nova-compute
> 2 Compute node: XCP 1.6 beta, with nova-compute in special domU (Ubuntu 
> 12.04), xenapi plugin installed in dom0
> each node has two NIC, one with public IP (Only limited floating IP), and 
> another in private network (Any IP is OK)
> flat network or flat dhcp network
> I want to use eth0 for public traffic and service request, and eth1 for 
> inter-vm traffic.

OK, but where is your management traffic, like Rabbit and MySQL, going? I would 
have expected one nic for the management network and one nic for public traffic, 
maybe with a separate VLAN for instance traffic?

1.) Does each domU need nova-network running? My understanding is it's OK to 
run nova-network individually, but then how to manage the floating IPs globally?

Correct. If multi_host=true, you are running nova-network on every node, if 
false you just run one. It depends if you don't mind the extra overhead or not. 
Others can describe the trade-offs more clearly.
http://docs.openstack.org/folsom/openstack-compute/admin/content/xenapi-flat-dhcp-networking.html
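
For reference, a minimal multi-host FlatDHCP sketch of the relevant nova.conf 
flags might look like this (interface and bridge names are assumptions; adjust 
them to your layout):

network_manager=nova.network.manager.FlatDHCPManager
multi_host=True
flat_interface=eth1         # inter-VM / guest traffic
public_interface=eth0       # public and floating IP traffic
flat_network_bridge=xenbr1  # the XenServer bridge the guest VIFs attach to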

2.) In the document for FlatDHCP networking, I saw four interfaces for each management 
domU. Is it OK to have only two interfaces? Say, eth0-xenbr0 for public IPs and 
services, and eth1-xenbr1 for the VM network?

As above, you need to think about where your management traffic like MySQL and 
Rabbit traffic is going. Three is certainly fine. 

3.) Are the network isolation rules a must for a test install? I found the patch 
to vif is still for XenServer 5.6_p2, and cannot be applied to the vif of XCP 1.6 
or XenServer 6.1, which might be trouble.

It is not required for a test setup. Do bug me if you are having problems with 
that stuff on XCP, and I can ask the XCP guys to help me get that working for 
you.

Sorry for the slow response. I was on holiday.

Thanks,
John

> From: openstack-bounces+john.garbutt=citrix@lists.launchpad.net
> [mailto:openstack-bounces+john.garbutt=citrix@lists.launchpad.net]
> On Behalf Of Yan Zhai
> Sent: 20 November 2012 7:52 AM
> To: Afef MDHAFFAR
> Cc: openstack@lists.launchpad.net
> Subject: Re: [Openstack] Networking issues for openstack on XCP
> 
> I have even more trouble setting up quantum itself. The plugin support in
> XCP looks not as good as on KVM: the first problem is python 2.6, and it's solved
> by adding a new repository. But when I installed the openvswitch agent, it
> cannot start, requiring some quantum python module, which then seems to
> have a dependency on libudev. And this means I have to update almost the
> whole dom0, including libc, udev, and so on, to get that library. That's
> too risky. Maybe I shall stay with the bridge plugin and see if things can be a
> little better...
> 
> On Sat, Nov 17, 2012 at 1:18 PM, Afef MDHAFFAR
>  wrote:
> Hi all,
> 
> I am also trying to install openstack, with xcp. I used devstack to do that,
> since it is more simple.
> However, I am still facing a network problem.
> Actually, I got a private network per physical node. I am able to access the
> tenant VMs from the corresponding openstack DomU. But these tenant
> VMs are not accessible from any other machines.
> Is that normal (i.e. a private network per physical node)? The created tenant
> VMs can access external machines, but they are invisible to other machines?
> Is there any way to make my VMs accessible, at least from other tenant VMs
> (created on other physical nodes)?
> How can Quantum solve this problem?
> 
> Thank you
> 
> Regards,
> Afef
> 
> 2012/11/17 Yan Zhai :
> Hi Robert,
> 
>  thanks for reply. Currently I am just looking for a way to bring it up for
> internal trial, so if Quantum is better I will move to that. The only reason
> that I am still asking for questions about nova-network is because of the
> document order: I am setting things following the install document, but
> when it comes to the network part I encountered above confusions. I will
> check the quantum document to see if anything can be simplified. Thanks
> again!
> 
> best
> Yan
> 
> On Fri, Nov 16, 2012 at 11:54 PM, Robert Garron
>  wrote:
> Yan,
> 
> In my opinion, if you are going to spend all the time learning a new product
> -- i.e. nova network vs quantum.  And if you are only testing a concept, I
> would spend it upon Folsom and move from Essex and/or nova network and
> nova storage.  Quantum eases many of the issues Nova network has  or

[Openstack] [OpenStack] Remove unsed network on host with nova-network

2012-11-20 Thread Édouard Thuleau
Hi all,

I use nova-network with VLAN manager.

Why doesn't nova-network remove unused network interfaces on a host?

i.e., if no VM on a host has a fixed IP attached to network X, the VLAN
and bridge of this network stay up and unused, and the 'dnsmasq' process
keeps listening and running.

The number of unused network interfaces will grow over time.
In VLAN mode, this number could be 4000 x 2 unused interfaces and
4000 unused 'dnsmasq' processes (in the worst case).

Can it lead to decreased kernel performance?
Is it a bug? Or a deliberate implementation choice?

Regards,
Édouard.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Ahmed Al-Mehdi
Hello,

When I launch a VM instance, I see the following message regarding floating IP 
in nova-network.log:

2012-11-18 15:50:29 DEBUG nova.network.manager 
[req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
ce1e819636744dc680fa5515f6475e87] [instance: 
4e80964e-5bd1-4df4-a517-223c79d55517] floating IP allocation for instance |%s| 
from (pid=1375) allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:315
2012-11-18 15:50:29 DEBUG nova.network.manager 
[req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
ce1e819636744dc680fa5515f6475e87] [instance: 
4e80964e-5bd1-4df4-a517-223c79d55517] network allocations from (pid=1375) 
allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:977
2012-11-18 15:50:29 DEBUG nova.network.manager 
[req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
ce1e819636744dc680fa5515f6475e87] [instance: 
4e80964e-5bd1-4df4-a517-223c79d55517] networks retrieved for instance: 
|[]| from (pid=1375) 
allocate_for_instance 
/usr/lib/python2.7/dist-packages/nova/network/manager.py:982

I did not configure nova to assign floating IPs to VMs.  Can someone please help 
me understand why nova-network is assigning a floating IP.

Thank you,
Ahmed.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift installation verification fails

2012-11-20 Thread Hugo
My suggestion: use curl to verify keystone first, and then use curl to access 
the swift proxy with the token and service endpoint returned by the previous 
keystone operation.

That should give you clearer clues.
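
Something along these lines (hostnames and credentials are placeholders from 
your setup):

# 1. get a token and the service catalog from keystone
curl -s -H 'Content-Type: application/json' \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "admin_pass"}}}' \
  http://keystone_server:5000/v2.0/tokens

# 2. hit the swift proxy with the returned token and the object-store
#    endpoint (storage URL) from the catalog
curl -I -H 'X-Auth-Token: <token>' <storage_url>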



Sent from my iPhone

On 2012/11/20, at 6:40 PM, Shashank Sahni wrote:

> Hi,
> 
> I'm trying to install Swift 1.7.4 on Ubuntu 12.04. The installation is 
> multi-node with keystone and swift(proxy+storage) running on separate 
> systems. Keystone is up and running perfectly fine. Swift user and service 
> endpoints are created correctly to point to the swift_node. Swift is 
> configured and all its services are up. But during swift installation 
> verification, the following commands hangs with no output.
> 
> swift -V 2 -A http://keystone_server:5000/v2.0 -U admin:admin -K admin_pass 
> stat
> 
> I'm sure its able to contact the keystone server. This is because if I change 
> admin_pass, it throws authentication failure error. It probably fails in the 
> next step which I'm unaware of.
> 
> Here is my proxy-server.conf file.
> 
> [DEFAULT]
> # Enter these next two values if using SSL certifications
> cert_file = /etc/swift/cert.crt
> key_file = /etc/swift/cert.key
> bind_port = 
> user = swift
> 
> [pipeline:main]
> #pipeline = healthcheck cache swift3 authtoken keystone proxy-server
> pipeline = healthcheck cache swift3 authtoken keystone proxy-server
> 
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
> 
> [filter:swift3]
> use=egg:swift3#swift3
> 
> [filter:keystone]
> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
> operator_roles = Member,admin, swiftoperator
> 
> [filter:authtoken]
> paste.filter_factory = keystone.middleware.auth_token:filter_factory
> # Delaying the auth decision is required to support token-less
> # usage for anonymous referrers ('.r:*').
> delay_auth_decision = 10
> service_port = 5000
> service_host = keystone_server
> auth_port = 35357
> auth_host = keystone_server
> auth_protocol = http
> auth_uri = http://keystone_server:5000/
> auth_token = 
> admin_token = 
> admin_tenant_name = service
> admin_user = swift
> admin_password = 
> signing_dir = /etc/swift
> 
> [filter:cache]
> use = egg:swift#memcache
> set log_name = cache
> 
> [filter:catch_errors]
> use = egg:swift#catch_errors
> 
> [filter:healthcheck]
> use = egg:swift#healthcheck
> 
> Any suggestion?
> 
> --
> Shashank Sahni
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Floating ip addresses take forever to display

2012-11-20 Thread Lars Kellogg-Stedman
We've been having a persistent problem with our OpenStack (Essex)
cluster.  We are automatically assigning floating IPs when systems are
created (auto_assign_floating_ip = True).  When a system boots,
neither the command line tools nor Horizon seem to know about the
automatically assigned IP address for several minutes (possibly more
than 10 or 15) after the system boots.

The system demonstrably has a floating ip address assigned (if you
initiate an outbound connection from the system, or inspect the
iptables nat rules, you can determine that address and use it to
connect to the system).

Manually assigning a floating ip address will force things to update
(so after manually assigning a floating address you'll see the fixed
address, the automatically assigned address, and the manually assigned
address).

We're running the 2012.1.3 release of things; I've read at least one
bug report that seems to describe this issue that implies the fix
should already be in this release...but we're still having this
problem.

Has anyone else encountered this problem?  Were you able to solve it?
A fix would be great, because right now our documentation is basically
"start an instance...then go do something else for 30 minutes."

-- 
Lars Kellogg-Stedman   |
Senior Technologist   | http://ac.seas.harvard.edu/
Academic Computing| http://code.seas.harvard.edu/
Harvard School of Engineering |
  and Applied Sciences|


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Help with debug in RPC message timeout

2012-11-20 Thread Ahmed Al-Mehdi
Hello,

I am getting an "RPC message timeout" in nova-network.

2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making 
asynchronous call on network.sonoma ... from (pid=1375) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
d73be9ea76b3412493d0752abb9d5a02 from (pid=1375) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
……..
2012-11-18 15:50:52 DEBUG nova.utils [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 
None None] Got semaphore "get_dhcp" for method "_get_dhcp_ip"... from 
(pid=1375) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2012-11-18 15:50:52 DEBUG nova.utils [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 
None None] Got semaphore "get_dhcp" for method "_get_dhcp_ip"... from 
(pid=1375) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
FlatDHCPManager._publish_service_capabilities from (pid=1375) periodic_tasks 
/usr/lib/python2.7/dist-packages/nova/manager.py:172
2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
FlatDHCPManager._disassociate_stale_fixed_ips from (pid=1375) periodic_tasks 
/usr/lib/python2.7/dist-packages/nova/manager.py:172
2012-11-18 15:51:29 ERROR nova.openstack.common.rpc.common [-] Timed out 
waiting for RPC response: timed out

Is there a way I can enable further logging to find out which queue the message 
is being sent to (put on)?  Also, the contents of the message?
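
One option, independent of nova's own logging, is RabbitMQ's firehose tracer, 
which republishes every message to the amq.rabbitmq.trace exchange so the 
routing keys and message bodies can be inspected (run on the broker host; a 
sketch only):

rabbitmqctl trace_on
rabbitmqctl list_queues name messages consumers   # e.g. is network.sonoma there,
                                                  # and does it have a consumer?
# bind a scratch queue to the amq.rabbitmq.trace exchange (for example via the
# management plugin) to capture the traced messages, then:
rabbitmqctl trace_off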

Thank you,
Ahmed.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] nova: instance states and commands

2012-11-20 Thread Anne Gentle
Hi Marco -
Great questions, and I won't tell you to read the fine manual because
I know you do and log the most excellent bugs. :) But more commentary
below for how to do some detective work. I would still like for
someone to answer these questions in detail.

On Tue, Nov 20, 2012 at 4:46 AM, Marco CONSONNI  wrote:
> Hello,
>
>
>
> I’m playing with nova CLI and I found that there are several commands for
> managing the current state of instances running in the cloud.
>
>
>
> These are the states I found:
>
>
>
> · initial
> · build
> · active
> · shutoff
> · suspended
> · rescue
> · paused
> · reboot
>
>
>
> And these are the nova commands for changing the current state:
>
>
>
> · boot
> · pause
> · reboot
> · rescue
> · resume
> · start
> · stop
> · suspend
> · unpause
> · unrescue
>
>
>
> QUESTION: Is there any explanation on the meaning of the states? In
> particular, what’s the difference between SUSPENDED, PAUSED and RESCUE?

The document for this info is the Compute API spec, specifically
http://docs.openstack.org/api/openstack-compute/2/content/List_Servers-d1e2078.html.
However since it is a spec it is likely that there are undocumented
things going on here.

Also the API definitions don't tell the whole story because the nova
CLI parameter "pause" is doing something with the API that you can
find out using the nova --debug command.

nova --debug pause 

It seems like it's doing something with extensions, such as
"OS-EXT-STS:vm_state" and "OS-EXT-STS:power_state", so dig further there.
You can search for Extended Server status on
http://api.openstack.org/api-ref.html.
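
For example (assuming the extended status extension is enabled in your cloud),
those fields show up in the regular server details:

nova show <server> | grep OS-EXT-STS
# lists OS-EXT-STS:vm_state, OS-EXT-STS:task_state and OS-EXT-STS:power_state;
# vm_state is the value that pause/suspend/rescue actually change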

Boy, trying to find this info myself shows what a nightmare this info
is to dig into, and I apologize for that. Sure shows a need for more
API documentation for real usage.

Thanks,
Anne

>
>
> Another command changes the status and, from what I found, it always changes
> that to ERROR.
>
> The command is “nova reset-state”.
>
>
>
> QUESTION: what’s the semantics of this command?
>
>
>
>
>
> Then, I found some other commands that sound like status management command
> but, actually they are not:
>
>
>
> · Lock
> · Unlock
>
>
>
> QUESTION: What’s the meaning of these?
>
>
>
>
>
> Marco.
>
>
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp
>

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Swift] Can i create a public container with write access?

2012-11-20 Thread Blair Bethwaite
Hi Sujay,

On 19 November 2012 14:52, Sujay M  wrote:

> I was wondering if I can create a public container such that I need not
> authenticate to upload files to it. I know how to create one with read
> access for all (post -r '.r:*' permissions), but how do I create write-for-all
> containers? Thanks in advance.


I think you might want TempURL -
http://docs.openstack.org/developer/swift/misc.html?highlight=tempurl#module-swift.common.middleware.tempurl
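
For an upload without credentials, the usual approach is to set an
X-Account-Meta-Temp-URL-Key on the account and hand out a signed URL; a minimal
Python sketch of the signature (key, host and object path are placeholders):

import hmac
import time
from hashlib import sha1

key = 'mysecretkey'                        # the Temp-URL-Key set on the account
method = 'PUT'                             # allow an unauthenticated upload
expires = int(time.time() + 3600)          # valid for one hour
path = '/v1/AUTH_account/container/object'
sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path), sha1).hexdigest()
print 'https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%s' % (
    path, sig, expires)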

-- 
Cheers,
~Blairo
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Call for testing : 2012.2.1 tarballs

2012-11-20 Thread Mark McLoughlin
Hey,

We're hoping to publish Nova, Glance, Keystone, Quantum, Cinder and
Horizon 2012.2.1 next week (Nov 29).

The list of issues fixed so far can be seen here:

  https://launchpad.net/nova/+milestone/2012.2.1
  https://launchpad.net/glance/+milestone/2012.2.1
  https://launchpad.net/keystone/+milestone/2012.2.1
  https://launchpad.net/quantum/+milestone/2012.2.1
  https://launchpad.net/cinder/+milestone/2012.2.1
  https://launchpad.net/horizon/+milestone/2012.2.1

That's roughly 80 bugs.

We'd appreciate anyone who could give the candidate tarballs a whirl:

  http://tarballs.openstack.org/nova/nova-stable-folsom.tar.gz
  http://tarballs.openstack.org/glance/glance-stable-folsom.tar.gz
  http://tarballs.openstack.org/keystone/keystone-stable-folsom.tar.gz
  http://tarballs.openstack.org/quantum/quantum-stable-folsom.tar.gz
  http://tarballs.openstack.org/cinder/cinder-stable-folsom.tar.gz
  http://tarballs.openstack.org/horizon/horizon-stable-folsom.tar.gz
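
For a quick smoke test of one of them, something roughly like this should work 
(nova shown; run_tests.sh is the usual in-tree test script):

curl -O http://tarballs.openstack.org/nova/nova-stable-folsom.tar.gz
tar xzf nova-stable-folsom.tar.gz
cd nova-*/
./run_tests.sh -V   # -V builds a virtualenv with the test dependencies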

We've also started drafting release notes here:

  http://wiki.openstack.org/ReleaseNotes/2012.2.1

Contributions to those release notes are very welcome.

Thanks!
Mark.


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Folsom with VlanManager

2012-11-20 Thread Vishvananda Ishaya
The vlans and bridges are not created until you run an instance in a project. 
The network is only assigned to a project when it is first needed, and the vlans 
and bridges are only created when an instance is launched on a host in that 
project.
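
So a quick way to see it in action (IDs are placeholders):

nova boot --image <image-id> --flavor <flavor-id> testvm
# then, on the compute host where it landed:
brctl show            # the project's bridge should now appear
ip addr | grep vlan   # along with its vlan interface (e.g. vlan200 for the
                      # network created above)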

Vish

On Nov 19, 2012, at 10:05 AM, Juris  wrote:

> Hi all,
> 
> I'm trying to configure Folsom to work with nova-networks and VlanManager and 
> it doesn't work.
> 
> I can create networks with:
> nova-manage network create --label testnet --fixed_range_v4 10.0.2.0/24 
> --num_networks 1 --network_size 256 --vlan 200
> 
> list them later:
> nova-manage network list
> 1 10.0.2.0/24 None10.0.2.3None
> None200 None
> 32692942-8965-4174-a45a-18cda4c7d183
> 
> and there are no errors in /var/log/nova/*
> 
> However, I can't see any bridges and vlan interfaces if I run ip addr or 
> brctl show.
> 
> networking section of nova.conf looks like this:
> network_manager=nova.network.manager.VlanManager
> force_dhcp_release=True
> dhcpbridge_flagfile=/etc/nova/nova.conf
> dhcpbridge=/usr/bin/nova-dhcpbridge
> firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
> public_interface=eth0
> vlan_interface=eth1
> fixed_range=10.0.0.0/24
> flat_network_dhcp_start=10.0.0.10
> network_size=256
> flat_injected=False
> multi_host=True
> send_arp_for_ha=True
> connection_type=libvirt
> 
> it's a pretty standard config and I can't see why it is not working.
> 
> Any help will be much appreciated.
> 
> Thank you in advance,
> Juris
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Remove unsed network on host with nova-network

2012-11-20 Thread Vishvananda Ishaya
The only reason this is not done is that it makes the setup simpler. We
don't have to worry about potential races between setting up and tearing
down interfaces. It probably wouldn't be incredibly difficult to make a
patch that would remove them, but you will likely have to do some creative
locking to make sure that you don't run into issues.
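
If the leftover interfaces become a problem before such a patch exists, a manual 
cleanup sketch for one network that no instance on the host uses any more might 
look like this (names are examples for VLAN 200 with nova's default br+vlan 
naming; double-check nothing is attached first):

kill <pid-of-the-dnsmasq-bound-to-that-bridge>
ip link set br200 down
brctl delbr br200
vconfig rem vlan200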

Vish

On Nov 20, 2012, at 9:25 AM, Édouard Thuleau  wrote:

> Hi all,
> 
> I use nova-network with VLAN manager.
> 
> Why nova-network doesn't remove unused network interfaces on a host ?
> 
> ie, if none VM on a host have a fixed IP attach to network X, the VLAN
> and bridge of this network still up and unused. And 'dnsmasq' process
> still listen and running.
> 
> The number of unused network interfaces will grow over time.
> In the VLAN mode, this number could be 4000 x 2 unused interfaces and
> 4000 unused 'dnsmasq' processes (in worth case).
> 
> Can it lead to decrease the kernel performance ?
> Is it a bug ? Or a voluntary implementation ?
> 
> Regards,
> Édouard.
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Vishvananda Ishaya
That is a confusing log message. That log is from the allocation code in the 
floating ip mixin class. It is not actually allocating a floating ip unless you 
have auto_assign_floating_ip set to True.
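
A quick way to confirm: unless nova.conf on the network host contains

auto_assign_floating_ip=True

no floating IP is actually allocated at boot, and the message is just noisy 
logging from that mixin.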

Vish

On Nov 20, 2012, at 10:08 AM, Ahmed Al-Mehdi  wrote:

> Hello,
> 
> When I launch a VM instance, I see the following message regarding floating 
> IP in nova-network.log:
> 
> 2012-11-18 15:50:29 DEBUG nova.network.manager 
> [req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
> ce1e819636744dc680fa5515f6475e87] [instance: 
> 4e80964e-5bd1-4df4-a517-223c79d55517] floating IP allocation for instance 
> |%s| from (pid=1375) allocate_for_instance 
> /usr/lib/python2.7/dist-packages/nova/network/manager.py:315
> 2012-11-18 15:50:29 DEBUG nova.network.manager 
> [req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
> ce1e819636744dc680fa5515f6475e87] [instance: 
> 4e80964e-5bd1-4df4-a517-223c79d55517] network allocations from (pid=1375) 
> allocate_for_instance 
> /usr/lib/python2.7/dist-packages/nova/network/manager.py:977
> 2012-11-18 15:50:29 DEBUG nova.network.manager 
> [req-f2b4df1f-6c29-48dc-991b-93fb5eb29d08 ce016bb05df949ebbafcc7c165359d7c 
> ce1e819636744dc680fa5515f6475e87] [instance: 
> 4e80964e-5bd1-4df4-a517-223c79d55517] networks retrieved for instance: 
> |[]| from (pid=1375) 
> allocate_for_instance 
> /usr/lib/python2.7/dist-packages/nova/network/manager.py:982  
> 
> I did not configure nova to assign floating IP to VMs.  Can someone please 
> help me understand why nova-network is assigning floating IP.
> 
> Thank you,
> Ahmed.
> 
> ___
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Help setting up network in Openstack on Vagrant

2012-11-20 Thread Vishvananda Ishaya
vagrant doesn't like having its natted IPs moved around. Generally with vagrant 
I go ahead and create a host-only network on eth1 (which it looks like you have) and 
set up a localrc (in the devstack dir) like the following:

FLAT_INTERFACE=eth1 # this tells nova to use eth1 for br100 instead of eth0
HOST_IP=192.168.33.11 # your address from below

FYI I recently switched to vmware fusion 5 as it allows you to run hardware 
virt in the guest, so you can actually have a devstack install that can run 
real vms. It also seems better about keeping internet access when changing 
networks. I regularly have to do sudo /etc/init.d/networking restart in 
virtualbox if I switch wifi networks.

Also, you are much better off accessing the cirros instance via ssh:
ssh cirros@10.0.0.2 # password is 'cubswin:)' without the quotes

Vish

On Nov 20, 2012, at 11:10 AM, "Winsor, Daniel"  wrote:

> Hi,
> 
> I apologize in advance for the log spam.  I have installed Openstack onto 
> Ubuntu 12.04 as per devstack.org.  The Ubuntu system is a vagrant box residing on my 
> MacBook, so in the Vagrantfile I have given it a host-only network and a 
> bridged network, in addition to the default NAT.  Once the vagrant box is up, 
> I run devstack/stack.sh and everything gets set up correctly.  I can start a 
> cirros instance no problem, though it is a little tricky to log into the vnc: 
> instead of http://10.0.2.15:6080/vnc_auto.html I replace 10.0.2.15 with 
> either the host-only ip address, 192.168.33.11, or the bridged ip address, 
> 10.21.80.255, and it will work from the MacBook's browser (if I turn my proxy 
> off or work from home without a proxy; if I have the proxy on, the browser 
> spams me to log in and I can't access it through the browser).
> 
> My problem is once I have logged onto the cirros instance I can't access the 
> internet, or even seem to access the horizon page.  I am a programmer by 
> trade, and don't know much about networks, so please be gentle when telling 
> me how easy it is to fix this :)  I was thinking maybe I'd need to bridge the 
> networks, but brctl doesn't work on cirros: not a command.  All I want to do 
> is be able to access the outside internet from cirros — assume proxy is a non 
> issue because I can always do it from home.  Also, is the suggested solution 
> any different for, say, an Ubuntu 10.04 vagrant disk.vmdk image uploaded via 
> glance?
> 
> This is my info on the Openstack installation on Ubuntu in Vagrant.  This is 
> with proxy on so that devstack/stack.sh runs ok
> vagrant@precise64:~$ ifconfig
> br100 Link encap:Ethernet  HWaddr 08:00:27:88:0c:a6
>  inet addr:10.0.0.1  Bcast:10.0.0.255  Mask:255.255.255.0
>  inet6 addr: fe80::60cd:afff:fefd:ecb7/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:64 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:98 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0
>  RX bytes:4404 (4.4 KB)  TX bytes:9482 (9.4 KB)
> 
> eth0  Link encap:Ethernet  HWaddr 08:00:27:88:0c:a6
>  inet6 addr: fe80::a00:27ff:fe88:ca6/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:9505 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:9551 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:1000
>  RX bytes:603824 (603.8 KB)  TX bytes:1474451 (1.4 MB)
> 
> eth1  Link encap:Ethernet  HWaddr 08:00:27:7d:7a:1a
>  inet addr:192.168.33.11  Bcast:192.168.33.255  Mask:255.255.255.0
>  inet6 addr: fe80::a00:27ff:fe7d:7a1a/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:407 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:471 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:1000
>  RX bytes:55749 (55.7 KB)  TX bytes:480481 (480.4 KB)
> 
> eth2  Link encap:Ethernet  HWaddr 08:00:27:e9:8e:0f
>  inet addr:10.21.80.255  Bcast:10.21.83.255  Mask:255.255.252.0
>  inet6 addr: fe80::a00:27ff:fee9:8e0f/64 Scope:Link
>  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>  RX packets:8742 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:1000
>  RX bytes:837310 (837.3 KB)  TX bytes:1836 (1.8 KB)
> 
> loLink encap:Local Loopback
>  inet addr:127.0.0.1  Mask:255.0.0.0
>  inet6 addr: ::1/128 Scope:Host
>  UP LOOPBACK RUNNING  MTU:16436  Metric:1
>  RX packets:10411 errors:0 dropped:0 overruns:0 frame:0
>  TX packets:10411 errors:0 dropped:0 overruns:0 carrier:0
>  collisions:0 txqueuelen:0
>  RX bytes:66209073 (66.2 MB)  TX bytes:66209073 (66.2 MB)
> 
> virbr0Link encap:Ethernet  HWaddr 9e:9e:73:56:6a:a8
>  inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.2

Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Ahmed Al-Mehdi
Hi Vish,

I do not have auto_assign_floating_ip set.  So, I can safely assume 
nova-network is assigning fixed-IP, right?



Sorry to impose, but do you have a few minutes to help me understand why I am 
getting an RPC message timeout issue which is preventing me from launching a 
VM?  I looked through the logs extensively, but I can't figure out who the RPC 
msg is destined for, and why there is no response.

2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous call on network.sonoma ... from (pid=1375) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:351
2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is d73be9ea76b3412493d0752abb9d5a02 from (pid=1375) multicall /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:354
2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] received {u'_context_roles': [], u'_msg_id': u'b2bc0715982846cd916a8ff61b2513af', u'_context_quota_class': None, u'_context_request_id': u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', u'_context_service_catalog': None, u'_context_user_name': None, u'_context_auth_token': '', u'args': {u'instance_id': 5, u'instance_uuid': u'4e80964e-5bd1-4df4-a517-223c79d55517', u'host': u'sonoma', u'project_id': u'ce1e819636744dc680fa5515f6475e87', u'rxtx_factor': 1.0}, u'_context_instance_lock_checked': False, u'_context_project_name': None, u'_context_is_admin': True, u'_context_project_id': None, u'_context_timestamp': u'2012-11-18T23:50:47.233052', u'_context_read_deleted': u'no', u'_context_user_id': None, u'method': u'get_instance_nw_info', u'_context_remote_address': None} from (pid=1375) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: {'project_name': None, 'user_id': None, 'roles': [], 'timestamp': u'2012-11-18T23:50:47.233052', 'auth_token': '', 'remote_address': None, 'quota_class': None, 'is_admin': True, 'service_catalog': None, 'request_id': u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', 'instance_lock_checked': False, 'project_id': None, 'user_name': None, 'read_deleted': u'no'} from (pid=1375) _safe_log /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2012-11-18 15:50:52 DEBUG nova.utils [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 None None] Got semaphore "get_dhcp" for method "_get_dhcp_ip"... from (pid=1375) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2012-11-18 15:50:52 DEBUG nova.utils [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 None None] Got semaphore "get_dhcp" for method "_get_dhcp_ip"... from (pid=1375) inner /usr/lib/python2.7/dist-packages/nova/utils.py:713
2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
FlatDHCPManager._publish_service_capabilities from (pid=1375) periodic_tasks 
/usr/lib/python2.7/dist-packages/nova/manager.py:172
2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
FlatDHCPManager._disassociate_stale_fixed_ips from (pid=1375) periodic_tasks 
/usr/lib/python2.7/dist-packages/nova/manager.py:172
2012-11-18 15:51:29 ERROR nova.openstack.common.rpc.common [-] Timed out 
waiting for RPC response: timed out
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common Traceback (most 
recent call last):
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", 
line 513, in ensure
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
method(*args, **kwargs)
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", 
line 590, in _consume
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
self.connection.drain_events(timeout=timeout)
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/kombu/connection.py", line 175, in 
drain_events
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
self.transport.drain_events(self.connection, **kwargs)
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py", line 238, in 
drain_events
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
connection.drain_events(**kwargs)
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py", line 57, in 
drain_events
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
self.wait_multi(self.channels.values(), timeout=timeout)
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
"/usr/lib/python2.7/dist-packages/kombu/transport/pyamqplib.py", line 63, in 
wait_multi
2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common chanmap.keys(), 
allowed_methods, timeout=timeout)
2012-11-18 15:51:29

Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Vishvananda Ishaya

On Nov 20, 2012, at 4:02 PM, Ahmed Al-Mehdi  wrote:

> Hi Vish,
> 
> I do not have auto_assign_floating_ip set.  So, I can safely assume 
> nova-network is assigning fixed-IP, right?

you always get a fixed ip
> 
> Sorry to impose, but do you have a few minutes to help me understand why I am 
> getting an RPC message timeout issue which is preventing me from launching a 
> VM?  I looked through the logs extensively, but I can't figure out who the 
> RPC msg is destined for, and why there is no response.

do you have a machine with the hostname sonoma? Perhaps you did at one point 
and the hostname has changed?

I suspect either:

a) you have a network or fixed ip with an old hostname assigned:
mysql nova -e 'select * from fixed_ips where host="sonoma"'
mysql nova -e 'select * from networks where host="sonoma"'

or (more likely)
b) you are running multi_host mode and you have nova-compute running on the 
host 'sonoma' and you don't have nova-network running on the host like you 
should.
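
As a quick check for (b), the multi_host setting is stored on the network record itself -- a sketch, with column names per the Folsom nova schema:

mysql nova -e 'select id, label, multi_host, host from networks'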

Vish

> 
> 2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making 
> asynchronous call on network.sonoma ... from (pid=1375) multicall 
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp
> .py:351
> 2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
> d73be9ea76b3412493d0752abb9d5a02 from (pid=1375) multicall 
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:
> 354
> 2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] received 
> {u'_context_roles': [], u'_msg_id': u'b2bc0715982846cd916a8ff61b2513af', 
> u'_context_quota_class': None, u'_context_request_id':
>  u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', u'_context_service_catalog': 
> None, u'_context_user_name': None, u'_context_auth_token': '', 
> u'args': {u'instance_id': 5, u'instance_uuid': u
> '4e80964e-5bd1-4df4-a517-223c79d55517', u'host': u'sonoma', u'project_id': 
> u'ce1e819636744dc680fa5515f6475e87', u'rxtx_factor': 1.0}, 
> u'_context_instance_lock_checked': False, u'_context_project_na
> me': None, u'_context_is_admin': True, u'_context_project_id': None, 
> u'_context_timestamp': u'2012-11-18T23:50:47.233052', 
> u'_context_read_deleted': u'no', u'_context_user_id': None, u'method': u'g
> et_instance_nw_info', u'_context_remote_address': None} from (pid=1375) 
> _safe_log 
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
> 2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] unpacked 
> context: {'project_name': None, 'user_id': None, 'roles': [], 'timestamp': 
> u'2012-11-18T23:50:47.233052', 'auth_token': ' IZED>', 'remote_address': None, 'quota_class': None, 'is_admin': True, 
> 'service_catalog': None, 'request_id': 
> u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', 'instance_lock_checked': False, 
> 'project_i
> d': None, 'user_name': None, 'read_deleted': u'no'} from (pid=1375) _safe_log 
> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
> 2012-11-18 15:50:52 DEBUG nova.utils 
> [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 None None] Got semaphore "get_dhcp" 
> for method "_get_dhcp_ip"... from (pid=1375) inner 
> /usr/lib/python2.7/dist-package
> s/nova/utils.py:713
> 2012-11-18 15:50:52 DEBUG nova.utils 
> [req-22e6e99a-c582-449c-8d61-d4ee57f1ac57 None None] Got semaphore "get_dhcp" 
> for method "_get_dhcp_ip"... from (pid=1375) inner 
> /usr/lib/python2.7/dist-package
> s/nova/utils.py:713
> 2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
> FlatDHCPManager._publish_service_capabilities from (pid=1375) periodic_tasks 
> /usr/lib/python2.7/dist-packages/nova/manager.py:172
> 2012-11-18 15:51:09 DEBUG nova.manager [-] Running periodic task 
> FlatDHCPManager._disassociate_stale_fixed_ips from (pid=1375) periodic_tasks 
> /usr/lib/python2.7/dist-packages/nova/manager.py:172
> 2012-11-18 15:51:29 ERROR nova.openstack.common.rpc.common [-] Timed out 
> waiting for RPC response: timed out
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common Traceback (most 
> recent call last):
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", 
> line 513, in ensure
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
> method(*args, **kwargs)
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
> "/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_kombu.py", 
> line 590, in _consume
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
> self.connection.drain_events(timeout=timeout)
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
> "/usr/lib/python2.7/dist-packages/kombu/connection.py", line 175, in 
> drain_events
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common return 
> self.transport.drain_events(self.connection, **kwargs)
> 2012-11-18 15:51:29 TRACE nova.openstack.common.rpc.common   File 
> "/usr/lib/python2.7/dist-packa

Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Ahmed Al-Mehdi
Hi Vish,

Thank you very much for your help, I really appreciate it.

My setup has two nodes:

controller-node (hostname: bodega;  nova-network running;  no nova-compute 
running)
eth0: 10.176.20.158
eth1: No IP assigned (VM network)

compute-node (hostname: sonoma;  nova-compute running only).
eth0: 10.176.20.4
eth1: No IP assigned (VM network)

My network configuration is in single-host mode.  Neither host's hostname nor 
IP address has changed.

I believe my setup is affected by issue (a), "have a network or fixed ip with 
an old hostname assigned".

root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from fixed_ips where host="sonoma"'
+---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
| created_at          | updated_at          | deleted_at | deleted | id | address       | network_id | allocated | leased | reserved | virtual_interface_id | host   | instance_uuid |
+---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
| 2012-11-13 18:49:37 | 2012-11-16 21:45:32 | NULL       |       0 |  3 | 192.168.100.2 |          1 |         0 |      0 |        0 |                 NULL | sonoma | NULL          |
+---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
root@bodega:/etc/nova#
root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from networks where host="sonoma"'
root@bodega:/etc/nova#  (NO OUTPUT)

I am a bit confused, why is "192.168.100.2" assigned to sonoma?  Isn't that IP 
range reserved for VMs?

Should sonoma have the IP address "10.176.20.4"?  How can I clear the issue, so 
I don't get the RPC message timeout.


Regards,
Ahmed.


From: Vishvananda Ishaya mailto:vishvana...@gmail.com>>
Date: Tuesday, November 20, 2012 4:08 PM
To: Ahmed Al-Mehdi mailto:ah...@coraid.com>>
Cc: "openstack@lists.launchpad.net" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] Floating IP vs. Fixed IP in nova-network


On Nov 20, 2012, at 4:02 PM, Ahmed Al-Mehdi 
mailto:ah...@coraid.com>> wrote:

Hi Vish,

I do not have auto_assign_floating_ip set.  So, I can safely assume 
nova-network is assigning fixed-IP, right?

you always get a fixed ip

Sorry to impose, but do you have a few minutes to help me understand why I am 
getting an RPC message timeout issue which is preventing me from launching a 
VM?  I looked through the logs extensively, but I can't figure out who the RPC 
msg is destined for, and why there is no response.

do you have a machine with the hostname sonoma? Perhaps you did at one point 
and the hostname has changed?

I suspect either:

a) you have a network or fixed ip with an old hostname assigned:
mysql nova -e 'select * from fixed_ips where host="sonoma"'
mysql nova -e 'select * from networks where host="sonoma"'

or (more likely)
b) you are running multi_host mode and you have nova-compute running on the 
host 'sonoma' and you don't have nova-network running on the host like you 
should.

Vish


2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making 
asynchronous call on network.sonoma ... from (pid=1375) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp
.py:351
2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
d73be9ea76b3412493d0752abb9d5a02 from (pid=1375) multicall 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:
354
2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] received 
{u'_context_roles': [], u'_msg_id': u'b2bc0715982846cd916a8ff61b2513af', 
u'_context_quota_class': None, u'_context_request_id':
 u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', u'_context_service_catalog': 
None, u'_context_user_name': None, u'_context_auth_token': '', 
u'args': {u'instance_id': 5, u'instance_uuid': u
'4e80964e-5bd1-4df4-a517-223c79d55517', u'host': u'sonoma', u'project_id': 
u'ce1e819636744dc680fa5515f6475e87', u'rxtx_factor': 1.0}, 
u'_context_instance_lock_checked': False, u'_context_project_na
me': None, u'_context_is_admin': True, u'_context_project_id': None, 
u'_context_timestamp': u'2012-11-18T23:50:47.233052', u'_context_read_deleted': 
u'no', u'_context_user_id': None, u'method': u'g
et_instance_nw_info', u'_context_remote_address': None} from (pid=1375) 
_safe_log 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/common.py:195
2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] unpacked context: 
{'project_name': None, 'user_id': None, 'roles': [], 'timestamp': 
u'2012-11-18T23:50:47.233052', 'auth_token': '', 'remote_address': None, 'quota_class': None, 'is_admin': True, 
'service_catalog': None, 'request_id': 
u'req-2

Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Vishvananda Ishaya
Your hosts get ips from the fixed range in multi_host mode. It looks like your 
network is multi_host=True; this is the only reason sonoma would have been 
assigned an ip. So you can either run nova-network on sonoma or set 
multi_host=0 on your network in the database. If you want to switch back to 
multi_host=0 you will likely have to clean out the db tables, so you might do 
best dropping all records from the networks and fixed_ips tables and recreating 
the network.
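
In case it helps, a rough sketch of both options (back up the nova database first; the network label and the nova-manage flags below are assumptions, so check them against your own setup):

# option 1: flip the existing network to single-host
mysql nova -e 'update networks set multi_host=0 where label="private";'

# option 2 (as suggested above): drop the records and recreate the network
mysql nova -e 'delete from fixed_ips; delete from networks;'
nova-manage network create private --fixed_range_v4=192.168.100.0/24 \
    --bridge_interface=br100 --num_networks=1 --network_size=256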

Vish

On Nov 20, 2012, at 4:52 PM, Ahmed Al-Mehdi  wrote:

> Hi Vish,
> 
> Thank you very much for your help, I really appreciate it.
> 
> My setup has two nodes:
> 
> controller-node (hostname: bodega;  nova-network running;  no nova-compute 
> running)
>   eth0:   10.176.20.158
>   eth1:   No IP assigned (VM network)
>  
> compute-node (hostname: sonoma;  nova-compute running only).  
>   eth0:   10.176.20.4
>   eth1:   No IP assigned (VM network)
> 
> My network configuration is in single-host mode.  Both the host's hostname 
> and their IPs has not changed.
> 
> I believe my setup is affected with issue (a), "have a network or fixed ip 
> with an old hostname assigned".
> 
> root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from fixed_ips where host="sonoma"'
> +---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
> | created_at          | updated_at          | deleted_at | deleted | id | address       | network_id | allocated | leased | reserved | virtual_interface_id | host   | instance_uuid |
> +---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
> | 2012-11-13 18:49:37 | 2012-11-16 21:45:32 | NULL       |       0 |  3 | 192.168.100.2 |          1 |         0 |      0 |        0 |                 NULL | sonoma | NULL          |
> +---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
> root@bodega:/etc/nova# 
> root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from networks where host="sonoma"'
> root@bodega:/etc/nova#  (NO OUTPUT)
> 
> I am a bit confused, why is "192.168.100.2" assigned to sonoma?  Isn't that 
> IP range reserved for VMs?
> 
> Should sonoma have the IP address "10.176.20.4"?  How can I clear the issue, 
> so I don't get the RPC message timeout.
> 
> 
> Regards,
> Ahmed.
> 
> 
> From: Vishvananda Ishaya 
> Date: Tuesday, November 20, 2012 4:08 PM
> To: Ahmed Al-Mehdi 
> Cc: "openstack@lists.launchpad.net" 
> Subject: Re: [Openstack] Floating IP vs. Fixed IP in nova-network
> 
>> 
>> On Nov 20, 2012, at 4:02 PM, Ahmed Al-Mehdi  wrote:
>> 
>>> Hi Vish,
>>> 
>>> I do not have auto_assign_floating_ip set.  So, I can safely assume 
>>> nova-network is assigning fixed-IP, right?
>> 
>> you always get a fixed ip
>>> 
>>> Sorry to impose, but do you have a few minutes to help me understand by I 
>>> am getting a RPC message timeout issue which is prohibiting me from 
>>> launching a VM.  I looked through the logs extensively, but I can't figure 
>>> out, who the RPC msg is destine for, and why no response.
>> 
>> do you have a machine with the hostname sonoma? Perhaps you did at one point 
>> and the hostname has changed?
>> 
>> I suspect either:
>> 
>> a) you have a network or fixed ip with an old hostname assigned:
>> mysql nova -e 'select * from fixed_ips where host="sonoma"'
>> mysql nova -e 'select * from networks where host="sonoma"'
>> 
>> or (more likely)
>> b) you are running multi_host mode and you have nova-compute running on the 
>> host 'sonoma' and you don't have nova-network running on the host like you 
>> should.
>> 
>> Vish
>> 
>>> 
>>> 2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] Making 
>>> asynchronous call on network.sonoma ... from (pid=1375) multicall 
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp
>>> .py:351
>>> 2012-11-18 15:50:29 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is 
>>> d73be9ea76b3412493d0752abb9d5a02 from (pid=1375) multicall 
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:
>>> 354
>>> 2012-11-18 15:50:52 DEBUG nova.openstack.common.rpc.amqp [-] received 
>>> {u'_context_roles': [], u'_msg_id': u'b2bc0715982846cd916a8ff61b2513af', 
>>> u'_context_quota_class': None, u'_context_request_id':
>>>  u'req-22e6e99a-c582-449c-8d61-d4ee57f1ac57', u'_context_service_catalog': 
>>> None, u'_context_user_name': None, u'_context_auth_token': '', 
>>> u'args': {u'instance_id': 5, u'instance_uuid': u
>>> '4e80964e-5bd1-4df4-a517-223c79d55517', u'host': u'sonoma', u'project_id': 
>>> u'ce1e819636744dc680fa5515f6475e87', u'rxtx_factor': 1.0}, 
>>> u'_context_instance_lock_checked

Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Ahmed Al-Mehdi
Hi Vish,

Thank you very much again.  I never set multi_host=True in my nova.conf file.  
But I think I know how it could have gotten set.  In the "OpenStack Install and 
Deploy Manual", section "Creating the Network for Compute VMs" ( 
http://docs.openstack.org/folsom/openstack-compute/install/apt/content/compute-create-network.html
 ), I used the following command from that section to create the network:

nova-manage network create private --multi_host=T 
--fixed_range_v4=192.168.100.0/24 --bridge_interface=br100 --num_networks=1 
--network_size=256


Is that a typo in the document?

I would really prefer to run in "multi_host=0" mode, so I don't run into other 
issues.  I am not familiar with the database schema, so I feel I might end up 
doing more damage than good trying to muck with the db.  What would you suggest: 
run nova-network on sonoma, which would pretty much solve my issues, or just 
re-do the setup from scratch, which would not be too bad for me as it will take 
an hour or two?

Regards,
Ahmed.


From: Vishvananda Ishaya mailto:vishvana...@gmail.com>>
Date: Tuesday, November 20, 2012 4:59 PM
To: Ahmed Al-Mehdi mailto:ah...@coraid.com>>
Cc: "openstack@lists.launchpad.net" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] Floating IP vs. Fixed IP in nova-network

Your hosts get ips from the fixed range in multi_host mode. It looks like your 
network is multi_host=True. this is the only reason  sonoma would have been 
assigned an ip. So you can either run nova-network on sonoma or set 
multi_host=0 on your network in the database. If you want to switch back to 
multi_host=0 you will likely have to clean out the db tables, so you might do 
best dropping all records from the networks and fixed_ips tables and recreating 
the network.

Vish

On Nov 20, 2012, at 4:52 PM, Ahmed Al-Mehdi 
mailto:ah...@coraid.com>> wrote:

Hi Vish,

Thank you very much for your help, I really appreciate it.

My setup has two nodes:

controller-node (hostname: bodega;  nova-network running;  no nova-compute 
running)
eth0:10.176.20.158
eth1:No IP assigned (VM network)

compute-node (hostname: sonoma;  nova-compute running only).
eth0:10.176.20.4
eth1:No IP assigned (VM network)

My network configuration is in single-host mode.  Neither host's hostname nor 
IP address has changed.

I believe my setup is affected by issue (a), "have a network or fixed ip with 
an old hostname assigned".

root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from fixed_ips where host="sonoma"'
+---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
| created_at          | updated_at          | deleted_at | deleted | id | address       | network_id | allocated | leased | reserved | virtual_interface_id | host   | instance_uuid |
+---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
| 2012-11-13 18:49:37 | 2012-11-16 21:45:32 | NULL       |       0 |  3 | 192.168.100.2 |          1 |         0 |      0 |        0 |                 NULL | sonoma | NULL          |
+---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
root@bodega:/etc/nova#
root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from networks where host="sonoma"'
root@bodega:/etc/nova#  (NO OUTPUT)

I am a bit confused, why is "192.168.100.2" assigned to sonoma?  Isn't that IP 
range reserved for VMs?

Should sonoma have the IP address "10.176.20.4"?  How can I clear the issue, so 
I don't get the RPC message timeout.


Regards,
Ahmed.


From: Vishvananda Ishaya mailto:vishvana...@gmail.com>>
Date: Tuesday, November 20, 2012 4:08 PM
To: Ahmed Al-Mehdi mailto:ah...@coraid.com>>
Cc: "openstack@lists.launchpad.net" 
mailto:openstack@lists.launchpad.net>>
Subject: Re: [Openstack] Floating IP vs. Fixed IP in nova-network


On Nov 20, 2012, at 4:02 PM, Ahmed Al-Mehdi 
mailto:ah...@coraid.com>> wrote:

Hi Vish,

I do not have auto_assign_floating_ip set.  So, I can safely assume 
nova-network is assigning fixed-IP, right?

you always get a fixed ip

Sorry to impose, but do you have a few minutes to help me understand why I am 
getting an RPC message timeout issue which is preventing me from launching a 
VM?  I looked through the logs extensively, but I can't figure out who the RPC 
msg is destined for, and why there is no response.

do you have a machine with the hostname sonoma? Perhaps you did at one point 
and the hostname has changed?

I suspect either:

a) you have a network or fixed ip with an old hostname assigned:
mysql nova -e 'select * from fixed_ips where host="sonoma"'
mysql n

[Openstack] [[[[BUG]]]] In the Folsom EPEL source version of the Quantum packet error

2012-11-20 Thread yz
hi, all:

  Today I tested the Folsom version of the Quantum package provided by the EPEL
repository on CentOS 6.3, and starting quantum-server reports the following:


Traceback (most recent call last):
  File "/usr/bin/quantum-server", line 26, in <module>
    server()
  File "/usr/lib/python2.6/site-packages/quantum/server/__init__.py", line 33, in main
    config.parse(sys.argv)
  File "/usr/lib/python2.6/site-packages/quantum/common/config.py", line 62, in parse
    version='%%prog %s' % version_string())
  File "/usr/lib/python2.6/site-packages/quantum/openstack/common/cfg.py", line 1026, in __call__
    self._parse_config_files()
  File "/usr/lib/python2.6/site-packages/quantum/openstack/common/cfg.py", line 1496, in _parse_config_files
    raise ConfigFilesNotFoundError(not_read_ok)
quantum.openstack.common.cfg.ConfigFilesNotFoundError: Failed to read some config files: /etc/quantum/plugin.ini



From related posts, it appears the plugin.ini contents were already merged
into quantum.conf before the official Folsom release:

https://bugs.launchpad.net/quantum/+bug/803086
http://lists.openstack.org/pipermail/openstack-dev/2012-July/000167.html
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [[[[BUG]]]] In the Folsom EPEL source version of the Quantum packet error

2012-11-20 Thread yz
Found that the /etc/init.d/quantum-server startup script starts quantum-server
with the config file /etc/$prog/plugin.ini; that reference can be removed.
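
For anyone else hitting this, the simpler workaround (the plugin path is an assumption -- point it at whichever plugin config you actually use) is to give the init script the file it expects instead of editing it:

ln -s /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini /etc/quantum/plugin.ini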



2012/11/21 yz 

> hi,all:
>
>   Today in the CentOS 6.3 test epel source provides the Folsom version of
> the Quantum bag, found in start Quantum-server time tip:
>
>
> Traceback (most recent call last):
>   File "/usr/bin/quantum-server", line 26, in 
> server()
>   File "/usr/lib/python2.6/site-packages/quantum/server/__init__.py", line
> 33, in main
> config.parse(sys.argv)
>   File "/usr/lib/python2.6/site-packages/quantum/common/config.py", line
> 62, in parse
> version='%%prog %s' % version_string())
>   File "/usr/lib/python2.6/site-packages/quantum/openstack/common/cfg.py",
> line 1026, in __call__
> self._parse_config_files()
>   File "/usr/lib/python2.6/site-packages/quantum/openstack/common/cfg.py",
> line 1496, in _parse_config_files
> raise ConfigFilesNotFoundError(not_read_ok)
> quantum.openstack.common.cfg.ConfigFilesNotFoundError: Failed to read some
> config files: /etc/quantum/plugin.ini
>
>
>
> Through consulting relevant posts, found that the plugin. Ini content
> should have already been Folsom officially released version before they
> have been combined to quantum. The conf
>
> https://bugs.launchpad.net/quantum/+bug/803086
>
> http://lists.openstack.org/pipermail/openstack-dev/2012-July/000167.html
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Floating IP vs. Fixed IP in nova-network

2012-11-20 Thread Vishvananda Ishaya
Setting up from scratch will be the safest, but just so you know, multi_host = 1
is probably the most common deployment these days. To make it work, just run
nova-compute, nova-network and nova-api-metadata on each compute node.
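
(On Ubuntu/Folsom that is roughly the following on each compute node -- package names are an assumption, adjust for your distro:)

apt-get install nova-network nova-api-metadata
service nova-network restart && service nova-api-metadata restart
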
On Nov 20, 2012 5:23 PM, "Ahmed Al-Mehdi"  wrote:

> Hi Vish,
>
> Thank you very much again.  I never set multi_host=True in my nova.conf
> file.  But I think I know how it could have got set.  In the "OpenStack
> Install and Deploy Manual", section "Creating the Network for Compute VMs"
> (
> http://docs.openstack.org/folsom/openstack-compute/install/apt/content/compute-create-network.html
>  ),
> I used the following command from the section to create the network:
>
>   nova-manage network create private --multi_host=T 
> --fixed_range_v4=192.168.100.0/24 --bridge_interface=br100 --num_networks=1 
> --network_size=256
>
> Is that a typo in the document?
>
> I would really prefer to run "multi_host=0" mode, so I don't run into
> other issues.  I am not familiar with, so I feel I might end up doing more
> damage than good trying to muck with the db.  What would you suggest, run
> nova-network on sonoma, and that will pretty much solve my issues, or
> should I just re-do the setup from scratch, which would not be too bad for
> me as it will take me an hour or two.
>
> Regards,
> Ahmed.
>
>
> From: Vishvananda Ishaya 
> Date: Tuesday, November 20, 2012 4:59 PM
> To: Ahmed Al-Mehdi 
> Cc: "openstack@lists.launchpad.net" 
> Subject: Re: [Openstack] Floating IP vs. Fixed IP in nova-network
>
> Your hosts get ips from the fixed range in multi_host mode. It looks like
> your network is multi_host=True. this is the only reason  sonoma would have
> been assigned an ip. So you can either run nova-network on sonoma or set
> multi_host=0 on your network in the database. If you want to switch back to
> multi_host=0 you will likely have to clean out the db tables, so you might
> do best dropping all records from the networks and fixed_ips tables and
> recreating the network.
>
> Vish
>
> On Nov 20, 2012, at 4:52 PM, Ahmed Al-Mehdi  wrote:
>
> Hi Vish,
>
> Thank you very much for your help, I really appreciate it.
>
> My setup has two nodes:
>
> controller-node (hostname: bodega;  nova-network running;  no nova-compute
> running)
> eth0:10.176.20.158
> eth1:No IP assigned (VM network)
>
> compute-node (hostname: sonoma;  nova-compute running only).
> eth0:10.176.20.4
> eth1:No IP assigned (VM network)
>
> My network configuration is in single-host mode.  Both the host's hostname
> and their IPs has not changed.
>
> I believe my setup is affected with issue (a), "have a network or fixed ip
> with an old hostname assigned".
>
> root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from fixed_ips where host="sonoma"'
> +---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
> | created_at          | updated_at          | deleted_at | deleted | id | address       | network_id | allocated | leased | reserved | virtual_interface_id | host   | instance_uuid |
> +---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
> | 2012-11-13 18:49:37 | 2012-11-16 21:45:32 | NULL       |       0 |  3 | 192.168.100.2 |          1 |         0 |      0 |        0 |                 NULL | sonoma | NULL          |
> +---------------------+---------------------+------------+---------+----+---------------+------------+-----------+--------+----------+----------------------+--------+---------------+
> root@bodega:/etc/nova#
> root@bodega:/etc/nova# mysql -u root -pmysqlsecret nova -e 'select * from networks where host="sonoma"'
> root@bodega:/etc/nova#  (NO OUTPUT)
>
> I am a bit confused, why is "192.168.100.2" assigned to sonoma?  Isn't
> that IP range reserved for VMs?
>
> Should sonoma have the IP address "10.176.20.4"?  How can I clear the
> issue, so I don't get the RPC message timeout.
>
>
> Regards,
> Ahmed.
>
>
> From: Vishvananda Ishaya 
> Date: Tuesday, November 20, 2012 4:08 PM
> To: Ahmed Al-Mehdi 
> Cc: "openstack@lists.launchpad.net" 
> Subject: Re: [Openstack] Floating IP vs. Fixed IP in nova-network
>
>
> On Nov 20, 2012, at 4:02 PM, Ahmed Al-Mehdi  wrote:
>
> Hi Vish,
>
> I do not have auto_assign_floating_ip set.  So, I can safely assume
> nova-network is assigning fixed-IP, right?
>
>
> you always get a fixed ip
>
>
> Sorry to impose, but do you have a few minutes to help me understand by I
> am getting a RPC message timeout issue which is prohibiting me from
> launching a VM.  I looked through the logs extensively, but I can't figure
> out, who the RPC msg is destine for, and why no response.
>
>
> do you have a machine with the hostname sonoma? Perhaps you did at one
> point and the hostname has changed?
>
> I suspect either:
>
>

Re: [Openstack] [Swift] Can i create a public container with write access?

2012-11-20 Thread Sujay M
Thank you Blair,

I came to know that if anyone wants to upload files to swift he can use
tempURL along with FormPost. Could you please give me an example for doing
this (using both tempURL and FormPost for uploads).
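
Not a full FormPost walk-through, but in the meantime here is a minimal TempURL sketch for an upload, assuming a key has already been set on the account via X-Account-Meta-Temp-Url-Key (the account, container, key and host below are placeholders):

key=mysecretkey
expires=$(( $(date +%s) + 3600 ))
path=/v1/AUTH_account/container/object
sig=$(printf 'PUT\n%s\n%s' "$expires" "$path" | openssl dgst -sha1 -hmac "$key" | awk '{print $NF}')
curl -X PUT --data-binary @localfile "https://swift.example.com${path}?temp_url_sig=${sig}&temp_url_expires=${expires}"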


On 21 November 2012 02:54, Blair Bethwaite wrote:

> Hi Sujay,
>
> On 19 November 2012 14:52, Sujay M  wrote:
>
>> I was wondering if I can create a public container such that I need not
>> authenticate to upload files to it. I know how to create one with read-for-all
>> permissions using post -r '.r:*', but how do I create write-for-all
>> containers? Thanks in advance.
>
>
> I think you might want TempURL -
> http://docs.openstack.org/developer/swift/misc.html?highlight=tempurl#module-swift.common.middleware.tempurl
>
> --
> Cheers,
> ~Blairo
>



-- 
Best Regards,

Sujay M
Final year B.Tech
Computer Engineering
NITK Surathkal

contact: +918971897571
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift installation verification fails

2012-11-20 Thread Shashank Sahni
Hi,

Thanks for the response. I went ahead and verified using curl, and ran:

$ curl -k -v -H 'X-Storage-User: admin:admin' -H 'X-Storage-Pass: '
http://10.2.4.115:5000/v2.0

Here is the output. I don't see the token or storage-url anywhere. Note
that, 10.2.4.115 is the keystone server.

* About to connect() to 10.2.4.115 port 5000 (#0)
*   Trying 10.2.4.115... connected
> GET /v2.0 HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0
OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: 10.2.4.115:5000
> Accept: */*
> X-Storage-User: admin:admin
> X-Storage-Pass: x
>
< HTTP/1.1 200 OK
< Vary: X-Auth-Token
< Content-Type: application/json
< Date: Wed, 21 Nov 2012 05:46:25 GMT
< Transfer-Encoding: chunked
<
* Connection #0 to host 10.2.4.115 left intact
* Closing connection #0
{"version": {"status": "beta", "updated": "2011-11-19T00:00:00Z",
"media-types": [{"base": "application/json", "type":
"application/vnd.openstack.identity-v2.0+json"}, {"base":
"application/xml", "type": "application/vnd.openstack.identity-v2.0+xml"}],
"id": "v2.0", "links": [{"href": "http://10.2.4.115:5000/v2.0/";, "rel":
"self"}, {"href": "
http://docs.openstack.org/api/openstack-identity-service/2.0/content/";,
"type": "text/html", "rel": "describedby"}, {"href": "
http://docs.openstack.org/api/openstack-identity-service/2.0/identity-dev-guide-2.0.pdf";,
"type": "application/pdf", "rel": "describedby"}]}}

--
Shashank Sahni



On Wed, Nov 21, 2012 at 12:48 AM, Hugo  wrote:

> My suggestion is to use curl to verify keystone first, and then use
> curl to access the swift proxy with the token and service endpoint returned
> by the previous keystone call.
>
> That should give you clearer clues.
>
>
>
> Sent from my iPhone
>
> On 2012/11/20 at 6:40 PM, Shashank Sahni wrote:
>
> Hi,
>
> I'm trying to install Swift 1.7.4 on Ubuntu 12.04. The installation is
> multi-node with keystone and swift(proxy+storage) running on separate
> systems. Keystone is up and running perfectly fine. Swift user and service
> endpoints are created correctly to point to the swift_node. Swift is
> configured and all its services are up. But during swift installation
> verification, the following commands hangs with no output.
>
> swift -V 2 -A http://keystone_server:5000/v2.0 -U admin:admin -K admin_pass stat
>
> I'm sure it's able to contact the keystone server. This is because if I
> change admin_pass, it throws an authentication failure error. It probably
> fails in the next step, which I'm unaware of.
>
> Here is my proxy-server.conf file.
>
> [DEFAULT]
> # Enter these next two values if using SSL certifications
> cert_file = /etc/swift/cert.crt
> key_file = /etc/swift/cert.key
> bind_port = 
> user = swift
>
> [pipeline:main]
> #pipeline = healthcheck cache swift3 authtoken keystone proxy-server
> pipeline = healthcheck cache swift3 authtoken keystone proxy-server
>
> [app:proxy-server]
> use = egg:swift#proxy
> allow_account_management = true
> account_autocreate = true
>
> [filter:swift3]
> use=egg:swift3#swift3
>
> [filter:keystone]
> paste.filter_factory = keystone.middleware.swift_auth:filter_factory
> operator_roles = Member,admin, swiftoperator
>
> [filter:authtoken]
> paste.filter_factory = keystone.middleware.auth_token:filter_factory
> # Delaying the auth decision is required to support token-less
> # usage for anonymous referrers ('.r:*').
> delay_auth_decision = 10
> service_port = 5000
> service_host = keystone_server
> auth_port = 35357
> auth_host = keystone_server
> auth_protocol = http
> auth_uri = http://keystone_server:5000/
> auth_token = 
> admin_token = 
> admin_tenant_name = service
> admin_user = swift
> admin_password = 
> signing_dir = /etc/swift
>
> [filter:cache]
> use = egg:swift#memcache
> set log_name = cache
>
> [filter:catch_errors]
> use = egg:swift#catch_errors
>
> [filter:healthcheck]
> use = egg:swift#healthcheck
>
> Any suggestion?
>
> --
> Shashank Sahni
>
>
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Citrix Xen Server + XenApi Failed to start nova-compute service

2012-11-20 Thread Lei Zhang
Hi All,

I am trying to install OpenStack with Xen Server, following the guide at
http://wiki.openstack.org/XenServer/XenXCPAndXenServer, but I run into an
error when starting the nova-compute service. Here is the error message. Can
anyone figure out why it happened and how to fix this issue?

Error Message:

root@ubuntu:/etc/nova# nova-compute
2012-11-21 01:33:47 CRITICAL nova [-] 'get_connection'
2012-11-21 01:33:47 TRACE nova Traceback (most recent call last):
2012-11-21 01:33:47 TRACE nova   File "/usr/bin/nova-compute", line 47, in <module>
2012-11-21 01:33:47 TRACE nova     server = service.Service.create(binary='nova-compute')
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 241, in create
2012-11-21 01:33:47 TRACE nova     report_interval, periodic_interval)
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 150, in __init__
2012-11-21 01:33:47 TRACE nova     self.manager = manager_class(host=self.host, *args, **kwargs)
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 204, in __init__
2012-11-21 01:33:47 TRACE nova     utils.import_object(compute_driver),
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 89, in import_object
2012-11-21 01:33:47 TRACE nova     return cls()
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/virt/connection.py", line 76, in get_connection
2012-11-21 01:33:47 TRACE nova     conn = xenapi_conn.get_connection(read_only)
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py", line 144, in get_connection
2012-11-21 01:33:47 TRACE nova     return XenAPIConnection(url, username, password)
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi_conn.py", line 157, in __init__
2012-11-21 01:33:47 TRACE nova     self._vmops = vmops.VMOps(self._session, self._product_version)
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 160, in __init__
2012-11-21 01:33:47 TRACE nova     self.firewall_driver = fw_class(xenapi_session=self._session)
2012-11-21 01:33:47 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/firewall.py", line 227, in __init__
2012-11-21 01:33:47 TRACE nova     self.nwfilter = NWFilterFirewall(kwargs['get_connection'])
2012-11-21 01:33:47 TRACE nova KeyError: 'get_connection'
2012-11-21 01:33:47 TRACE nova

nova.conf:

connection_type=xenapi
xenapi_connection_password=***
xenapi_connection_url=http://192.168.0.98
xenapi_connection_username=root
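
From the traceback it looks like the default libvirt-based firewall driver is being loaded together with the XenAPI driver; a likely fix -- an assumption based on the traceback, not verified here -- is to point nova-compute at the XenAPI firewall driver and restart it:

echo 'firewall_driver=nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver' >> /etc/nova/nova.conf
service nova-compute restart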

-- 
Lei Zhang

Blog: http://jeffrey4l.github.com
twitter/weibo: @jeffrey4l
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [devstack] How to purge, remove and wipe out devstack?

2012-11-20 Thread Matthias Runge


>
> Hello,
> if I would like to wipe, remove and purge everything devstack
> installed and configured, what should I do?
>
> rm -rf /opt/stack
> rm -rf /usr/local/bin/
>
> what else?
>
> thanks in advance!
> :)
>
> --
> guilherme maluf

You might take a look at the extensive list Daniel Berrange posted on 
his blog:


http://berrange.com/posts/2012/11/20/what-devstack-does-to-your-host-when-setting-up-openstack-on-fedora-17/
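
As a rough sketch of the usual manual steps on top of that (paths and database names are assumptions; the post above has the authoritative list):

cd ~/devstack && ./unstack.sh        # stop the services devstack started
sudo rm -rf /opt/stack               # code checkouts, data and logs
sudo rm -rf /etc/nova /etc/glance /etc/keystone /etc/cinder /etc/quantum
mysql -uroot -p -e 'drop database nova; drop database glance; drop database keystone;'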

Matthias

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] OpenStack Controller failover

2012-11-20 Thread Edward_Doong
Dear all,
I found the HA options for openstack nova-network here:
http://docs.openstack.org/trunk/openstack-compute/admin/content/existing-ha-networking-options.html

OpenStack instances support migration via NFS shared across all compute nodes.
Swift has zones and replicated objects.
MySQL has clustering.

But what about HA or failover for OpenStack as a whole?
If the OpenStack controller's physical machine crashes
(Dashboard, nova-api, nova information, glance, keystone, etc...),
all of those services will disappear.
Could we recover the OpenStack controller automatically?

Thanks all.

Edward
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp