[Openstack] [Neutron] General dev queries regarding neutron ovs agent

2014-03-25 Thread Ageeleshwar Kandavelu
Hi,

I have two queries regarding neutron ovs agent.

1. Correct me if I am wrong: the OVS agent polls the Neutron database for 
changes before creating resources. Why does the agent have to register with 
neutron-server? When I run 'neutron agent-list' I can see all the agents (l3, 
dhcp, ovs-plugin). What is the communication interface between the 
neutron-server and the agents? A link to the documentation would do.

2. I reckon that the OVS agent uses subprocess calls to create interfaces on 
Open vSwitch. What API does it use to handle namespaces, i.e., to create 
interfaces inside a non-default network namespace?

Thank you,
Ageeleshwar K
http://www.csscorp.com/common/email-disclaimer.php
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] QoS solutions for Neutron?

2014-03-25 Thread Kai
Hi Li,

We tried your suggestion, but we found that if we use nova-network (in order
to use flavor-based limits), we cannot use Neutron. Is that right?


On 20 March 2014 23:38, Li Ma  wrote:

> There's a blueprint working on a Neutron QoS framework:
> https://wiki.openstack.org/wiki/Neutron/QoS
> You can check it, but it seems that this function will not be accepted for
> the coming release.
>
> By the way, you can use a nova flavor to restrict bandwidth for instances.
> On 20 March 2014 at 02:21, "Kai" wrote:
>
>> Hi,
>>
>> We are working on an OpenStack deployment inside my company. We need a QoS
>> solution to control the quality of network traffic between VMs and network
>> nodes, but it seems to be a missing feature in the Havana release, doesn't
>> it? So, is there any alternative or work-around solution for our problem?
>>
>> We are using ML2 in Neutron component.
>>
>> --
>> Best regards,
>>
>> Duong Pham
>>


-- 
Best regards,

Duong Pham


Re: [Openstack] [Neutron] General dev queries regarding neutron ovs agent

2014-03-25 Thread Salvatore Orlando
Comments inline.

Salvatore

On 25 March 2014 07:03, Ageeleshwar Kandavelu <
ageeleshwar.kandav...@csscorp.com> wrote:

>  Hi,
>
> I have two queries regarding neutron ovs agent.
>
> 1. Correct me if I am wrong: the OVS agent polls the Neutron database for
> changes before creating resources. Why does the agent have to register with
> neutron-server? When I run 'neutron agent-list' I can see all the agents
> (l3, dhcp, ovs-plugin). What is the communication interface between the
> neutron-server and the agents? A link to the documentation would do.
>

The last release in which the Neutron agent had direct access to the
database was Essex. Since Folsom, there is an RPC interface, which is the
one the agent uses to report the state you see with neutron agent-list.

Also, the agent configures iptables rules for implementing security groups,
and creates GRE tunnels if you're using that transport mode; it does not,
however, create tap interfaces, but merely wires them to the appropriate
network.


> 2. I reckon that the OVS agent uses subprocess calls to create interfaces on
> Open vSwitch. What API does it use to handle namespaces, i.e., to create
> interfaces inside a non-default network namespace?
>

As stated earlier, the OVS agent does not create interfaces. Other agents,
such as the DHCP and L3 agents, do that. This, along with namespace
management, is achieved with a purpose-built library: neutron.agent.linux.ip_lib
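As a rough illustration of the pattern such wrappers follow (a hypothetical sketch, not Neutron's actual ip_lib code): a command that must run inside a non-default namespace is simply wrapped in 'ip netns exec <namespace>':

```python
# Illustrative sketch (not Neutron's actual code) of the approach ip_lib-style
# wrappers take: every "ip" command is prefixed with "ip netns exec <ns>" when
# it must run inside a non-default network namespace.

def build_ip_cmd(args, namespace=None):
    """Return the argv list for an 'ip' command, optionally namespaced."""
    cmd = ["ip"] + list(args)
    if namespace:
        # Running inside a namespace is just a wrapped invocation of the
        # same command.
        cmd = ["ip", "netns", "exec", namespace] + cmd
    return cmd

# Default namespace: plain 'ip link show'.
print(build_ip_cmd(["link", "show"]))
# Inside a DHCP namespace (name made up), as the DHCP agent would do.
print(build_ip_cmd(["link", "set", "tap1234", "up"], namespace="qdhcp-net1"))
```

The argv lists built this way would ultimately be handed to a subprocess call executed with root privileges.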

>
> Thank you,
> Ageeleshwar K


Re: [Openstack] [Neutron] General dev queries regarding neutron ovs agent

2014-03-25 Thread Ageeleshwar Kandavelu
That was very informative.
Can you also give me links to the documentation for the RPC interface? Is 
status reporting the only purpose of this interface, or is it also used by 
neutron-server to notify agents about user-generated events such as net-create, 
subnet-create, etc.?

Thank you,
Ageeleshwar K


From: Salvatore Orlando [sorla...@nicira.com]
Sent: Tuesday, March 25, 2014 3:31 PM
To: Ageeleshwar Kandavelu
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] [Neutron] General dev queries regarding neutron ovs 
agent





Re: [Openstack] Not able to deploy an instance from controller node

2014-03-25 Thread Nick Maslov
Enable debug and verbose logs on nova and check the scheduler; I have seen a 
few cases where there was not enough RAM on the compute nodes to spin up a VM.

Cheers,
NM
-- 
Nick Maslov
Sent with Airmail

On March 21, 2014 at 9:34:31 AM, Mahardhika Gilang 
(mahardika.gil...@andalabs.com) wrote:

Hi, please paste the output of `tail -f /var/log/nova/nova-compute.log`
from your compute node.

On 3/21/2014 1:55 PM, Manoj K wrote:
Hello OpenStack,

I have a simple dual-node OpenStack setup. On both nodes, all services work fine.

#nova-manage service list

Binary           Host        Zone      Status   State  Updated_At
nova-cert        controller  internal  enabled  :-)    2014-03-21 06:49:18
nova-consoleauth controller  internal  enabled  :-)    2014-03-21 06:49:18
nova-scheduler   controller  internal  enabled  :-)    2014-03-21 06:49:18
nova-conductor   controller  internal  enabled  :-)    2014-03-21 06:49:17
nova-compute     compute1    nova      enabled  :-)    2014-03-21 06:49:17
nova-network     compute1    internal  enabled  :-)    2014-03-21 06:49:17


When I try to launch an instance from the controller, I get this error:
"2014-03-21 00:38:58.677 865 WARNING nova.scheduler.driver 
[req-acb89cd9-e435-45bd-9e3d-ec5763448139 08b7690d9a434f09a9617e5a3da9b1dd 
17663b2a7f7447348d04b0fa2d370b21] [instance: 
f24b6f13-9233-4db3-be25-703d88eaf32c] Setting instance to ERROR state. 
".

I am not able to solve this; please guide me.

Thanks in advance.

My setup:

Controller - 192.168.0.10
Compute1 - 192.168.0.11

Controller:

nova.conf

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata


rpc_backend = nova.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS

my_ip=192.168.0.10
vncserver_listen=192.168.0.10
vncserver_proxyclient_address=192.168.0.10

auth_strategy=keystone

[database]
connection = mysql://nova:NOVA_DBPASS@controller/nova

[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = NOVA_PASS

Compute1:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata




rpc_backend = nova.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS

auth_strategy=keystone

my_ip=192.168.0.11
vnc_enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.0.11
novncproxy_base_url=http://controller:6080/vnc_auto.html

glance_host=controller



network_manager=nova.network.manager.FlatDHCPManager
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
network_size=254
allow_same_net_traffic=False
multi_host=True
send_arp_for_ha=True
share_dhcp_address=True
force_dhcp_release=True
flat_network_bridge=br100
flat_interface=eth0
public_interface=eth0


[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:NOVA_DBPASS@controller/nova






--
Regards,
Mahardhika Gilang

PT. Andalabs Technology
Gedung Gravira
Jl. Cideng Barat no. 54
Jakarta Pusat 10150

Mobile : 0852 139 55861
Email : mahardika.gil...@andalabs.com


[Openstack] (no subject)

2014-03-25 Thread Ageeleshwar Kandavelu
Hi,
That is right.

This, however, is more convincing.

@skywalker.nick
Thank you

---
Message: 20
Date: Tue, 25 Mar 2014 14:17:01 +0700
From: Kai 
To: Li Ma 
Cc: Openstack Milis 
Subject: Re: [Openstack] QoS solutions for Neutron?

Hi Li,

We tried your suggestion, but we found that if we use nova-network (in order
to use flavor-based limits), we cannot use Neutron. Is that right?


[Openstack] floating IPs are not created

2014-03-25 Thread Ageeleshwar Kandavelu
If you are using GRE mode, you have to create br-tun and restart your 
neutron OVS agent.

If you are using VLAN mode, you have to create all the bridges mentioned in 
bridge_mappings inside '/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini'.

You cannot expect your floating IP to work until you can ping the external 
router from your instance (using the VNC console).

Thank you,
Ageeleshwar K


Re: [Openstack] floating IPs are not created

2014-03-25 Thread Ageeleshwar Kandavelu
You got it wrong.

br-int (integration bridge) - This is like a point of presence for instances 
to connect to and send network traffic through.

br-tun (tunnel bridge) - This bridge serves as the tunnel endpoint. It is 
part of your data network and is also used by VM traffic. The intent is to 
carry each tenant's traffic in a separate tunnel. Packets leaving the 
instance carry no VLAN ID. In br-int, flow rules add a VLAN ID to the packets 
from each instance (the VLAN ID depends on the network). In br-tun there is a 
flow rule to translate that VLAN ID to a particular tunnel ID, so packets 
leaving br-tun carry a tunnel ID according to the tenant.

The br-tun bridges of the various nodes (compute and network nodes) form a 
mesh of tunnels through which the VM data flows.

Once you create br-tun and restart the neutron OVS plugin, you can see the 
flow rules using 'ovs-ofctl dump-flows br-int' or 'ovs-ofctl dump-flows br-tun'.

If you run 'ovs-vsctl show' you will see that the br-tun bridges of the 
various nodes have formed a mesh of tunnels.

You do not need any bridge for management.
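To illustrate the mesh idea (a hypothetical sketch, not the agent's code; the IP addresses are made up): with N nodes, every pair of tunnel-endpoint IPs gets its own tunnel, so each node ends up with N-1 GRE ports on br-tun:

```python
# Illustrative sketch of the full mesh of GRE tunnels the OVS agents build:
# one tunnel per pair of nodes, i.e. one GRE port on each side's br-tun for
# every other node's tunnel IP.
from itertools import combinations

def mesh_tunnels(node_ips):
    """Return the sorted list of (endpoint_a, endpoint_b) tunnel pairs."""
    return sorted(combinations(sorted(node_ips), 2))

# Made-up data-network IPs: one network node plus two compute nodes.
nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
for a, b in mesh_tunnels(nodes):
    # Each pair corresponds to a GRE port on each side's br-tun.
    print(f"tunnel between {a} and {b}")
```

For 3 nodes this yields 3 tunnels; in general N nodes give N*(N-1)/2 tunnels, which is what 'ovs-vsctl show' reflects on each node.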

Thank you,
Ageeleshwar K




From: cheniour ghassen [ghacheni...@gmail.com]
Sent: Tuesday, March 25, 2014 6:15 PM
To: Ageeleshwar Kandavelu
Subject: Re: [Openstack] floating IPs are not created

Hi Ageeleshwar,
I want to thank you first for your answer. I am using GRE mode. As far as I 
knew, br-tun was used for management and br-int was used for data forwarding 
between the VMs. As documented in the OpenStack docs, I created br-int and 
assumed that br-tun was for management.
I think the problem is that Neutron doesn't detect the agents. The attached 
PDF file contains some configurations.
Thank you; I am looking forward to your answer.
Sincerely,
Ghassen Cheniour.






[Openstack] Error occured while making volume backed live migration based on Ceph Block; Ask for assistance.

2014-03-25 Thread Li Zhuran
Hi all,

I'm trying volume-backed live migration via Ceph block storage and am stuck
on libvirt issues.
The environment of my cluster is as follows:
Hosts in cluster: 
   Havana (controller & compute) and Compute1 (pure compute node)
   Ceph1, ceph2, ceph3: Ceph cluster

1. The Ceph cluster: checked with the command 'ceph health'.
2. The OpenStack cluster: 
  The Ceph client is installed on each node;
  Configuration is done for Ceph in Glance, Cinder, and Nova;
  qemu-kvm, qemu-img, and qemu-kvm-tools are installed from the Ceph source.
3. Images and volumes are created correctly; an instance is launched from a
volume on host Havana.
4. Unexpectedly, the instance migration from Havana to Compute1 failed with
the following error (I'm struggling hard with this issue and any clue would
be much appreciated!):

Compute.log on host compute1:
2014-03-25 18:46:44.346 16948 AUDIT nova.compute.manager
[req-defe320c-bae6-4d57-817e-1cca779e204a 714bab91932043e98ad2d855a81f19b0
888df5c4bc47459485b96ffa03c671e6] [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Detach volume
f5a34a51-b662-43d4-a7d5-4199de8b1d4b from mountpoint vda
2014-03-25 18:46:44.364 16948 WARNING nova.compute.manager
[req-defe320c-bae6-4d57-817e-1cca779e204a 714bab91932043e98ad2d855a81f19b0
888df5c4bc47459485b96ffa03c671e6] [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Detaching volume from unknown instance
2014-03-25 18:46:44.375 16948 ERROR nova.compute.manager
[req-defe320c-bae6-4d57-817e-1cca779e204a 714bab91932043e98ad2d855a81f19b0
888df5c4bc47459485b96ffa03c671e6] [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Failed to detach volume
f5a34a51-b662-43d4-a7d5-4199de8b1d4b from vda
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Traceback (most recent call last):
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]   File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3737, in
_detach_volume
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] encryption=encryption)
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]   File
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1202,
in detach_volume
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] virt_dom =
self._lookup_by_name(instance_name)
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]   File
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3101,
in _lookup_by_name
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] raise
exception.InstanceNotFound(instance_id=instance_name)
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] InstanceNotFound: Instance
instance-000c could not be found.
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]

Libvirtd.log on host compute1:
2014-03-25 10:32:05.952+: 1891: warning : qemuDomainObjTaint:1377 :
Domain id=1 name='instance-000b'
uuid=4ddb08dc-c7ef-4cdf-8108-80a296eaf457 is tainted: high-privileges
2014-03-25 10:32:07.188+: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.411+: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.414+: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.416+: 1891: warning : qemuSetupCgroupForVcpu:566 :
Unable to get vcpus' pids.
2014-03-25 10:33:34.419+: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.419+: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous

[root@compute1 ~(keystone_admin)]# rpm -qa |grep qemu
qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-guest-agent-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
[root@compute1 ~(keystone_admin)]# rpm -qa |grep libvirt
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
libvirt-0.10.2-29.el6_5.3.x86_64



[Openstack] OpenStack Havana with ZMQ

2014-03-25 Thread Antonio Messina
Hi all,

I am testing Havana with ZeroMQ but I'm unable to make it work.

First of all, I have a few questions:

* I gather that the nova-rpc-zmq-receiver *must* run on *all* nodes
  (including compute nodes), is that correct?
* the nova-rpc-zmq-receiver is part (in Ubuntu) of the nova-scheduler
  package, should I install the package and disable nova-scheduler on
  the compute nodes? Is there an init script available for it or
  should I create my own?
* How does the communication work? Do the nova-compute services talk to the
  nova-rpc-zmq-receiver via `tcp://`, while services on the same node as
  the nova-rpc-zmq-receiver talk via `ipc://`?

I am currently using two nodes: a controller node and a compute node.
On both nodes I added to nova.conf:

rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = cloud2.gc3
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing
rpc_zmq_port = 9501

started nova-rpc-zmq-receiver with:

nova-rpc-zmq-receiver --config-file /etc/nova/nova.conf

and created a /etc/oslo/matchmaker_ring.json file containing:

{
  "scheduler": ["cloud2"],
  "conductor": ["cloud2"],
  "cert": ["cloud2"],
  "consoleauth": ["cloud2"],
  "network": ["cloud2"],
  "compute": ["node-08-01-01"]

}

where `cloud2` is my controller and `node-08-01-01` is my compute node.
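As a sketch of how a ring file like the one above gets consumed (illustrative only, not the actual oslo/nova matchmaker code): the bare topic selects the host list, and each message is then directed to a per-host topic of the form "topic.host":

```python
# Illustrative sketch of MatchMakerRing-style topic resolution: the ring file
# maps a bare RPC topic to the hosts that consume it, and a directed topic
# like "compute.node-08-01-01" pins the message to a single host.
import json

RING = json.loads("""{
  "scheduler": ["cloud2"],
  "conductor": ["cloud2"],
  "compute": ["node-08-01-01"]
}""")

def lookup(topic):
    """Resolve a topic to a list of (host, directed_topic) pairs."""
    base = topic.split(".", 1)[0]
    hosts = RING.get(base, [])
    return [(h, f"{base}.{h}") for h in hosts]

print(lookup("conductor"))  # conductor traffic goes to cloud2
print(lookup("compute"))    # compute traffic goes to node-08-01-01
```

This is also why a missing or wrong host name in the ring file makes a service silently unreachable: the lookup simply returns an empty list.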

However, when I run `nova service-list` (or `nova-manage service
list`) I don't see the compute node:


+------------------+--------+----------+---------+-------+------------------------+-----------------+
| Binary           | Host   | Zone     | Status  | State | Updated_at             | Disabled Reason |
+------------------+--------+----------+---------+-------+------------------------+-----------------+
| nova-consoleauth | cloud2 | internal | enabled | up    | 2014-03-25T15:23:49.00 | None            |
| nova-cert        | cloud2 | internal | enabled | up    | 2014-03-25T15:24:13.00 | None            |
| nova-scheduler   | cloud2 | internal | enabled | up    | 2014-03-25T15:23:36.00 | None            |
| nova-conductor   | cloud2 | internal | enabled | up    | 2014-03-25T15:23:56.00 | None            |
+------------------+--------+----------+---------+-------+------------------------+-----------------+

When I start the nova-compute service I see only the following lines
in the nova-compute.log:

2014-03-25 16:27:05.158 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Sending
message(s) to: [(u'conductor.cloud2', u'cloud2')] _multi_send
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:725
2014-03-25 16:27:05.159 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Creating payload
_call /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:650
2014-03-25 16:27:05.159 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Creating queue
socket for reply waiter _call
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:663
2014-03-25 16:27:05.162 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Subscribing to
0a9babcb7ba14b1180621583031a4223 subscribe
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:158
2014-03-25 16:27:05.162 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Connecting to
ipc:///var/run/openstack/zmq_topic_zmq_replies.node-08-01-01.gc3 with
SUB __init__ 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:136
2014-03-25 16:27:05.162 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] -> Subscribed to
0a9babcb7ba14b1180621583031a4223 __init__
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:137
2014-03-25 16:27:05.162 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] -> bind: False
__init__ 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:138
2014-03-25 16:27:05.163 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Sending cast
_call /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:676
2014-03-25 16:27:05.163 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Connecting to
tcp://cloud2:9501 with PUSH __init__
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:136
2014-03-25 16:27:05.163 7791 DEBUG
nova.openstack.common.rpc.common
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] -> Subscribed to
None __init__ 
/usr/lib/python2.7/dist-packages/

[Openstack] Access VM console from a script

2014-03-25 Thread Nagaraj Mandya
Hello,
  If I start a VM (on Havana), is there a way I can get into the console
through a script? Is the console accessible over telnet to a port or
something like that? Or is the only supported access through VNC? Thanks.
--
Regards,
Nagaraj


Re: [Openstack] Access VM console from a script

2014-03-25 Thread Clint Byrum
Excerpts from Nagaraj Mandya's message of 2014-03-25 08:58:14 -0700:
> Hello,
>   If I start a VM (on Havana), is there a way I can get into the console
> through a script? Is the console accessible over telnet to a port or
> something like that? Or is the only supported access through VNC? Thanks.

nova console-log instance_id

Enjoy. :)



[Openstack] re-scope the keystone token

2014-03-25 Thread Vinod Kumar Boppanna
Hi,

I am using keystoneclient to get the authentication token, like as below

keystone = client.Client(username=username,
 password=password,
 auth_url=endpoint)
token = keystone.auth_token

Now, I want to re-scope this token to different project IDs, something like:

keystone = client.Client(tenant_id=tenant,
                         auth_url=endpoint,
                         token=token)

but this is not working. Can anybody tell me how to re-scope an 
authentication token obtained from Keystone?
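(For context: under the v2.0 API, re-scoping amounts to POSTing the existing token together with the target tenant to the tokens endpoint. A minimal sketch of building that request body — the token and tenant values below are placeholders, not real credentials:)

```python
# Sketch of the Keystone v2.0 re-scope request body: present the existing
# token plus the target tenant to POST <auth_url>/tokens, and Keystone
# returns a new token scoped to that tenant.
import json

def rescope_body(token, tenant_id):
    """Build the v2.0 auth payload that exchanges a token for one scoped
    to tenant_id."""
    return {"auth": {"token": {"id": token}, "tenantId": tenant_id}}

# Placeholder values for illustration only.
body = rescope_body("0123456789abcdef", "tenant-a")
print(json.dumps(body))
# This JSON would be POSTed to <auth_url>/tokens with
# Content-Type: application/json.
```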

Thanks & Regards,
Vinod Kumar Boppanna


Re: [Openstack] Access VM console from a script

2014-03-25 Thread Christian Berendt
On 03/25/2014 04:58 PM, Nagaraj Mandya wrote:
>   If I start a VM (on Havana), is there a way I can get into the console
> through a script? Is the console accessible over telnet to a port or
> something like that? Or is the only supported access through VNC? Thanks.

You can access the instances using SSH; that's possible without interaction.

Why do you need direct access to the console? This should only be
necessary if your network is not working as expected.

HTH, Christian.

-- 
Christian Berendt
Cloud Computing Solution Architect
Mail: bere...@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de
GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt,HRB 3537



[Openstack] Fwd: Swift Functional Tests Errors

2014-03-25 Thread Ashish Dobhal
-- Forwarded message --
From: Ashish Dobhal 
Date: Tue, Mar 25, 2014 at 9:52 PM
Subject: Swift Functional Tests Errors
To: openstack@lists.openstack.org


Dear Sir/Madam,

1. I am getting a lot of errors in the functional tests
($HOME/swift/.functests) of my Swift deployment. How can I rectify them? I
am following the Swift All In One tutorial:
http://docs.openstack.org/developer/swift/development_saio.html

2. I am also getting the following error while creating a container:

rock@rohan-Inspiron-5521:/$ swift --os-auth-token \
    AUTH_tkd43461ec44004438818da6e61a55f2ee \
    --os-storage-url http://127.0.0.1/v1/AUTH_test \
    post cont
Container PUT failed: http://127.0.0.1/v1/AUTH_test/cont 404 Not Found
  [first 60 chars of response] (garbled binary data)

Please reply.
Thank you


[Openstack] [Horizon] Manage panel permissions based on user or tenant

2014-03-25 Thread Andrii L
The customization docs describe how to limit access to users with the Keystone admin role:

http://docs.openstack.org/developer/horizon/topics/customizing.html

Example from the tutorial:

permissions = list(getattr(instances_panel, 'permissions', []))
permissions.append('openstack.roles.admin')
instances_panel.permissions = tuple(permissions)

Is it possible to manage panel permissions on a user or a tenant basis?

We can get such details from HttpRequest like so:

tenant_name = request.user.tenant_name

The problem is that the 'request' object is not available at the time the
"my_project.overrides" file is parsed.

Thank you.


Re: [Openstack] I can't ping the floating IP

2014-03-25 Thread Michaël Van de Borne

Try this on the compute node:
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
then deploy an instance and associate a floating IP.

Tell me if it works.

yours,

m.
On 19/03/14 14:35, cheniour ghassen wrote:


Hi everyone,

I have configured OpenStack Havana on Ubuntu Server 12.04. I have three 
servers: a controller, a compute node, and a network node. I used Neutron 
for networking with Open vSwitch. I can deploy and run an instance, but I 
can't ping the floating IP associated with the instance. I added ICMP and 
SSH rules. I also added GRE to the INPUT and OUTPUT iptables chains. I have 
also verified my OVS and Neutron configurations; they are as described in 
the OpenStack Havana docs. This is my network configuration for the 
compute node:


root@compute:~# ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback
00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8
 scope host lo inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether
00:0c:29:b2:cf:03 brd ff:ff:ff:ff:ff:ff inet 192.168.10.62/24
 brd 192.168.10.255 scope global eth0
inet6 fe80::20c:29ff:feb2:cf03/64 scope link valid_lft forever
preferred_lft forever

3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether
00:0c:29:b2:cf:0d brd ff:ff:ff:ff:ff:ff inet 192.168.80.129/24
 brd 192.168.80.255 scope global eth1
inet6 fe80::20c:29ff:feb2:cf0d/64 scope link valid_lft forever
preferred_lft forever

4: ovs-system: mtu 1500 qdisc noop state DOWN link/ether
ee:3e:3e:1e:c8:97 brd ff:ff:ff:ff:ff:ff

5: br-int: mtu 1500 qdisc noqueue state UNKNOWN link/ether
fa:5b:ad:88:55:43 brd ff:ff:ff:ff:ff:ff inet6
fe80::408:21ff:fe70:3215/64 scope link valid_lft forever
preferred_lft forever

6: virbr0: mtu 1500 qdisc noqueue state DOWN link/ether
4e:88:a0:8f:44:12 brd ff:ff:ff:ff:ff:ff inet 192.168.122.1/24
 brd 192.168.122.255 scope global virbr0

19: tapd44d4f63-0b: mtu 1500 qdisc pfifo_fast state UNKNOWN qlen
500 link/ether fe:16:3e:62:d8:2d brd ff:ff:ff:ff:ff:ff inet6
fe80::fc16:3eff:fe62:d82d/64 scope link valid_lft forever
preferred_lft forever

root@compute:~# ovs-vsctl show 346cb3ff-efd6-445d-a71f-6e14496de500

Bridge br-int

 Port br-int

 Interface br-int

 type: internal

 Port "tapd44d4f63-0b"

 Interface "tapd44d4f63-0b"

ovs_version: "1.10.2"

for the network node:

root@network:~# ip a

1: lo: mtu 65536 qdisc noqueue state UNKNOWN link/loopback
00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8
 scope host lo inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether
00:0c:29:e1:15:8b brd ff:ff:ff:ff:ff:ff inet6
fe80::20c:29ff:fee1:158b/64 scope link valid_lft forever
preferred_lft forever

3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether
00:0c:29:e1:15:95 brd ff:ff:ff:ff:ff:ff inet 192.168.10.53/24
 brd 192.168.10.255 scope global eth1
inet6 fe80::20c:29ff:fee1:1595/64 scope link valid_lft forever
preferred_lft forever

4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000 link/ether
00:0c:29:e1:15:9f brd ff:ff:ff:ff:ff:ff inet 192.168.80.131/24
 brd 192.168.80.255 scope global eth2
inet6 fe80::20c:29ff:fee1:159f/64 scope link valid_lft forever
preferred_lft forever

5: ovs-system: mtu 1500 qdisc noop state DOWN link/ether
6a:35:28:8d:9b:04 brd ff:ff:ff:ff:ff:ff

6: br-ex: mtu 1500 qdisc noqueue state UNKNOWN link/ether
00:0c:29:e1:15:8b brd ff:ff:ff:ff:ff:ff inet 192.168.10.52/24
 brd 192.168.10.255 scope global br-ex
inet6 fe80::884f:c0ff:fe0d:9f82/64 scope link valid_lft forever
preferred_lft forever 7: br-int: mtu 1500 qdisc noqueue state
UNKNOWN link/ether 76:7c:c0:3b:72:49 brd ff:ff:ff:ff:ff:ff inet6
fe80::5022:13ff:febc:4fcd/64 scope link valid_lft forever
preferred_lft forever

root@network:~# ovs-vsctl show 7d3d6422-b107-489c-b9a3-4ec65629b6de

Bridge br-int

 Port br-int

 Interface br-int

 type: internal

Bridge br-ex

 Port br-ex

 Interface br-ex

 type: internal

 Port "eth0"

 Interface "eth0"

ovs_version: "1.10.2"

Any help would be appreciated.

Thanks.





--
Michaël Van de Borne
R&D Engineer, SOA tea

Re: [Openstack] OpenStack Havana with ZMQ

2014-03-25 Thread Nick Maslov
hi,

not related to your problem in particular - but why are you trying to setup 
ZMQ? RabbitMQ is not sufficient for you?

cheers,
NM
-- 
Nick Maslov
Sent with Airmail

On March 25, 2014 at 5:47:14 PM, Antonio Messina (antonio.s.mess...@gmail.com) 
wrote:

Hi all,  

I am testing Havana with ZeroMQ but I'm unable to make it work.  

First of all, I have a few questions:  

* I gather that the nova-rpc-zmq-receiver *must* run on *all* nodes  
(including compute nodes), is that correct?  
* the nova-rpc-zmq-receiver is part (in Ubuntu) of the nova-scheduler  
package, should I install the package and disable nova-scheduler on  
the compute nodes? Is there an init script available for it or  
should I create my own?  
* How does the communication work? Do the nova-compute services talk to the
nova-rpc-zmq-receiver via `tcp://`, while the services on the same node
as the nova-rpc-zmq-receiver talk using `ipc://`?

I am currently using two nodes: a controller node and a compute node.  
On both nodes I added to nova.conf:  

rpc_zmq_bind_address = *  
rpc_zmq_contexts = 1  
rpc_zmq_host = cloud2.gc3  
rpc_zmq_ipc_dir = /var/run/openstack  
rpc_zmq_matchmaker =  
nova.openstack.common.rpc.matchmaker_ring.MatchMakerRing  
rpc_zmq_port = 9501  

started nova-rpc-zmq-receiver with:  

nova-rpc-zmq-receiver --config-file /etc/nova/nova.conf  

and created a /etc/oslo/matchmaker_ring.json file containing:  

{  
"scheduler": ["cloud2"],  
"conductor": ["cloud2"],  
"cert": ["cloud2"],  
"consoleauth": ["cloud2"],  
"network": ["cloud2"],  
"compute": ["node-08-01-01"]  

}  

where `cloud2` is my controller and `node-08-01-01` is my compute node.  
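
One thing that may be worth double-checking (a guess, not a confirmed diagnosis): the host names in the ring must match exactly what the services put on the wire, and in the configuration above `rpc_zmq_host` is set to the FQDN `cloud2.gc3` while the ring entries use the short name `cloud2`. A minimal sketch of a sanity check over the ring file — `check_ring` is a hypothetical helper, not part of Nova:

```python
import json

def check_ring(ring, expected_hosts):
    # Flag empty topics and host names not in the expected set.
    problems = []
    for topic, hosts in ring.items():
        if not hosts:
            problems.append("topic %r has no hosts" % topic)
        for host in hosts:
            if host not in expected_hosts:
                problems.append("topic %r lists unknown host %r" % (topic, host))
    return problems

# The ring from the message above, as it would be read from
# /etc/oslo/matchmaker_ring.json:
ring = json.loads("""{
    "scheduler": ["cloud2"],
    "conductor": ["cloud2"],
    "cert": ["cloud2"],
    "consoleauth": ["cloud2"],
    "network": ["cloud2"],
    "compute": ["node-08-01-01"]
}""")

print(check_ring(ring, {"cloud2", "node-08-01-01"}))  # -> [] (consistent)
# If only the FQDN were considered valid, every "cloud2" entry is flagged:
print(check_ring(ring, {"cloud2.gc3", "node-08-01-01"}))
```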

However, when I run `nova service-list` (or `nova-manage service  
list`) I don't see the compute node:  

+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | cloud2 | internal | enabled | up    | 2014-03-25T15:23:49.00     | None            |
| nova-cert        | cloud2 | internal | enabled | up    | 2014-03-25T15:24:13.00     | None            |
| nova-scheduler   | cloud2 | internal | enabled | up    | 2014-03-25T15:23:36.00     | None            |
| nova-conductor   | cloud2 | internal | enabled | up    | 2014-03-25T15:23:56.00     | None            |
+------------------+--------+----------+---------+-------+----------------------------+-----------------+

When I start the nova-compute service I see only the following lines  
in the nova-compute.log:  

2014-03-25 16:27:05.158 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Sending  
message(s) to: [(u'conductor.cloud2', u'cloud2')] _multi_send  
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:725  
2014-03-25 16:27:05.159 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Creating payload  
_call 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:650  
2014-03-25 16:27:05.159 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Creating queue  
socket for reply waiter _call  
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:663  
2014-03-25 16:27:05.162 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Subscribing to  
0a9babcb7ba14b1180621583031a4223 subscribe  
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:158  
2014-03-25 16:27:05.162 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Connecting to  
ipc:///var/run/openstack/zmq_topic_zmq_replies.node-08-01-01.gc3 with  
SUB __init__ 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:136  
2014-03-25 16:27:05.162 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] -> Subscribed to  
0a9babcb7ba14b1180621583031a4223 __init__  
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:137  
2014-03-25 16:27:05.162 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] -> bind: False  
__init__ 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:138  
2014-03-25 16:27:05.163 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Sending cast  
_call 
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:676  
2014-03-25 16:27:05.163 7791 DEBUG  
nova.openstack.common.rpc.common  
[req-9bf7fc72-6419-4b62-aaa9-b1a7fac964ac None None] Connecting to  
tcp://cloud2:9501 with PUSH __init__  
/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/impl_zmq.py:136  
2014-03-25 16:27:05.163 7791 DEBU

[Openstack] Pip Error In Swift Installation (Swift All in One tutorial for a single node)

2014-03-25 Thread Ashish Dobhal
Dear Sir/Madam,
I am following the Swift All In One tutorial.
I got the following error while running the
sudo pip install -r swift/test-requirements.txt  command:

ERROR:

pkg_resources.VersionConflict: (pip 1.0 (/usr/lib/python2.7/dist-packages), Requirement.parse('pip>=1.4'))
Command python setup.py egg_info failed with error code 1
Storing complete log in /home/yashika/.pip/pip.log
Please Reply.
Thank You.
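
For what it's worth, the conflict above reduces to a version comparison: the test requirements declare pip>=1.4 while the system has pip 1.0, so upgrading pip first (e.g. `sudo pip install --upgrade pip`, or working inside a fresh virtualenv — the exact commands depend on the distro and are only a suggestion) should clear it. A sketch of the comparison that fails:

```python
from pkg_resources import parse_version

installed = "1.0"  # pip version from the traceback
required = "1.4"   # minimum from Requirement.parse('pip>=1.4')

# pkg_resources raises VersionConflict when this check fails:
satisfies = parse_version(installed) >= parse_version(required)
print(satisfies)  # -> False
```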


Re: [Openstack] OpenStack Havana with ZMQ

2014-03-25 Thread Antonio Messina
On Tue, Mar 25, 2014 at 6:58 PM, Nick Maslov  wrote:
> hi,
>
> not related to your problem in particular - but why are you trying to setup
> ZMQ? RabbitMQ is not sufficient for you?

Well, we don't know yet. We are planning a mid-size installation
(around 600 nodes) and I'm looking for options. As far as I understand
ZMQ should scale better, and allows a more distributed deployment than
RabbitMQ.

I also like ZMQ more than RabbitMQ personally, but I still want to
test both solutions before making any decision :)

Antonio

-- 
antonio.s.mess...@gmail.com
antonio.mess...@uzh.ch +41 (0)44 635 42 22
GC3: Grid Computing Competence Center  http://www.gc3.uzh.ch/
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich Switzerland



Re: [Openstack] Keystone support for simultaneous AD/LDAP domain

2014-03-25 Thread Mohammed, Allauddin
Thanks Adam for the information. Just curious: is it the
'domain_specific_driver' implementation that existed in Havana which will
enable multiple identity provider backends?
I had installed Icehouse-2 and was trying to configure multiple
identity provider backends, but in keystone.log I saw the warning below.

WARNING keystone.identity.core [-] Running an experimental and unsupported 
configuration (domain_specific_drivers_enabled = True); this will result in 
known issues.
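
For context, the experimental setup that the warning refers to is enabled roughly like this (a sketch pieced together from the option names above; the per-domain file naming pattern and the LDAP values are illustrative assumptions, not a tested configuration):

```
# /etc/keystone/keystone.conf
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

# /etc/keystone/domains/keystone.<domain_name>.conf -- one file per domain
[ldap]
url = ldap://ad.example.com
[identity]
driver = keystone.identity.backends.ldap.Identity
```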

Thanks in advance...
Regards,
Allauddin

From: Adam Lawson [mailto:alaw...@aqorn.com]
Sent: Tuesday, March 25, 2014 1:50 AM
To: Mohammed, Allauddin
Cc: openstack@lists.openstack.org
Subject: Re: [Openstack] Keystone support for simultaneous AD/LDAP domain

For now you have to pick one. With Icehouse Release 3, federation of multiple 
IdP's (multiple Identity Provider back-ends) will be supported with Keystone.

Hope this helps.

Mahalo,
Adam


Adam Lawson
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (888) 406-7620

On Mon, Mar 24, 2014 at 1:36 AM, Mohammed, Allauddin wrote:
Hi All,
   I have a generic deployment where I would like to configure Keystone with
AD, LDAP and SQL domains as my identity backends, such that users from AD, LDAP
and SQL can have access to my Swift resources. Let me know if this is feasible in
the current Icehouse release, or whether there are plans for a future release.

Regards,
Allauddin

