Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Yup, if your host supports namespaces this can be done via the
quantum-metadata-agent. The following setting is also required in your
nova.conf: service_quantum_metadata_proxy=True
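For reference, a minimal sketch of the settings that have to agree on both sides; the option names match the configs quoted later in this thread, and the secret value is only a placeholder:

/etc/nova/nova.conf
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = some_shared_secret

/etc/quantum/metadata_agent.ini
metadata_proxy_shared_secret = some_shared_secret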


On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does the metadata service
 work? This wasn't working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Hi,

I am able to get file injection to work during a CentOS or Ubuntu VM
instance creation, but it doesn't work for a Windows VM. Is there a way to
get it to work for a Windows VM, or is it going to be a limitation we have to
live with, perhaps due to filesystem differences?

Regards,
Balu
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Thanks Aaron.

I am perhaps not configuring it right then. I am using an Ubuntu 12.04 host
and even my guest (VM) is Ubuntu 12.04, but metadata is not working. I see that
the VM's routing table has an entry for 169.254.0.0/16 but I can't ping
169.254.169.254 from the VM. I am using a single-node setup with two
NICs: 10.5.12.20 is the public IP, 10.5.3.230 is the management IP.

These are my metadata related configurations.

*/etc/nova/nova.conf *
metadata_host = 10.5.12.20
metadata_listen = 127.0.0.1
metadata_listen_port = 8775
metadata_manager=nova.api.manager.MetadataManager
service_quantum_metadata_proxy = true
quantum_metadata_proxy_shared_secret = metasecret123

*/etc/quantum/quantum.conf*
allow_overlapping_ips = True

*/etc/quantum/l3_agent.ini*
use_namespaces = True
auth_url = http://10.5.3.230:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
metadata_ip = 10.5.12.20

*/etc/quantum/metadata_agent.ini*
auth_url = http://10.5.3.230:35357/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = quantum
admin_password = service_pass
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775
metadata_proxy_shared_secret = metasecret123
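A related check is whether the l3 agent installed the metadata redirect rule inside the router namespace; a sketch, with the namespace name matching the one shown below (the exact rule text may differ):

ip netns exec qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 iptables -t nat -S | grep 169.254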


I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
request but no response.

root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
qg-193bb8ee-f5
10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
qg-193bb8ee-f5
192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
qr-59e69986-6e
root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
65535 bytes
^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1, length
28
23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
Unknown), length 28
23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28
23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28
23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28
23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
length 28

6 packets captured
6 packets received by filter
0 packets dropped by kernel
root@openstack-dev:~#
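A quick end-to-end check from inside the guest is usually more telling than ping, since the metadata address does not have to answer ICMP; a sketch using the usual EC2-style path, which is an assumption here rather than something shown in this thread:

$ curl http://169.254.169.254/latest/meta-data/instance-id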


Any help will be greatly appreciated.

Thanks,
Balu


On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G balamuruga...@gmail.com
  wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata service
 work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Wangpan
Hi Balamurugan,
Which edition of nova are you running? Is there any trace log in
nova-compute.log (default path: /var/log/nova/nova-compute.log)?
And which edition is your Windows VM (WinXP/Win7/Win8)? If it is Win7 or
Win8, the injected files may exist in the system reserved partition; you can
google how to open it and check whether the injected files are there. (This may
be a bug we need to fix.)


2013-04-24



Wangpan



From: Balamurugan V G
Sent: 2013-04-24 14:19
Subject: [Openstack] [OpenStack] Files Injection in to Windows VMs
To: openstack@lists.launchpad.net
Cc:

Hi,


I am able to get File Injection to work during a CentOS or Ubuntu VM instance 
creation. But it doesnt work for a Windows VM. Is there a way to get it to work 
for windows VM or it going to be a limitation we have to live with, perhaps due 
to filesystem differences?


Regards,
Balu
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
The VM should not have a routing table entry for 169.254.0.0/16. If it does,
I'm not sure how it got there unless it was added by something other than
DHCP. It seems like that is your problem, as the VM is ARPing directly for
that address rather than going via the default gateway.


On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04 host
 and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see that
 the VM's routing table has an entry for 169.254.0.0/16 but I cant ping
 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
 ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
 request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata
 service work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Hi Wangpan,

Thanks for the response. The file injection is actually working; sorry, my
bad, I was setting the dst-path incorrectly. I am using Nova 2013.1 (Grizzly)
and a Windows XP 32-bit VM.

When I used the following command, it worked:

nova boot --flavor f43c36f9-de6a-42f4--edcedafe371a --image
3872c4c9-d8f7-4a18-a2cc-0406765d9379 --file balu.txt=balu.txt VM2

The file balu.txt ended up in C: drive.
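For anyone else hitting this, the destination path comes first in the --file argument; a sketch of the general form (flavor and image IDs are placeholders):

nova boot --flavor <flavor-id> --image <image-id> \
  --file <dst-path>=<src-path> VM2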

Thanks again.

Regards,
Balu








On Wed, Apr 24, 2013 at 12:07 PM, Wangpan hzwang...@corp.netease.com wrote:

 **
  Hi Balamurugan,
 What the edition of nova you are running? is there any trace log in
 nova-compute.log(default path: /var/log/nova/nova-compute.log)?
 and what the edition of your windows VM(winxp/win7/win8)? if it is win7 or
 win8, the injected files may exist in the system reserved partition, you
 can google to open and check the injected files is there.(this may be a bug
 we need to fix)


 2013-04-24
  --
  Wangpan
  --
 *From:* Balamurugan V G
 *Sent:* 2013-04-24 14:19
 *Subject:* [Openstack] [OpenStack] Files Injection in to Windows VMs
 *To:* openstack@lists.launchpad.net
 *Cc:*

   Hi,

 I am able to get File Injection to work during a CentOS or Ubuntu VM
 instance creation. But it doesnt work for a Windows VM. Is there a way to
 get it to work for windows VM or it going to be a limitation we have to
 live with, perhaps due to filesystem differences?

 Regards,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Thanks for the hint, Aaron. When I deleted the route for 169.254.0.0/16 from
the VM's routing table, I could access the metadata service!

The route for 169.254.0.0/16 is added automatically when the instance boots
up, so I assume it's coming from DHCP. Any idea how this can be
suppressed?

Strangely though, I do not see this route in a Windows XP VM booted in the
same network as the earlier Ubuntu VM, and the Windows VM can reach the
metadata service without me doing anything. The issue is with the Ubuntu
VM.

Thanks,
Balu



On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G balamuruga...@gmail.com
  wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04 host
 and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see that
 the VM's routing table has an entry for 169.254.0.0/16 but I cant ping
 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
 ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
 request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata
 service work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Hrm, I'd run quantum subnet-list and see whether you happened to create a
subnet 169.254.0.0/16. Otherwise, I think there is probably some software in
your VM image that is adding this route. One thing to test: delete this route
and then rerun dhclient to see if it's added again via DHCP.
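A sketch of that test from inside the guest, assuming the interface is eth0:

# delete the link-local route, renew the lease, then re-check the table
ip route del 169.254.0.0/16 dev eth0
dhclient -r eth0 && dhclient eth0
route -n | grep 169.254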


On Wed, Apr 24, 2013 at 12:00 AM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in the
 same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04 host
 and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see that
 the VM's routing table has an entry for 169.254.0.0/16 but I cant ping
 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
 ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
 request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata
 service work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp






___
Mailing list: 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Salvatore Orlando
The DHCP agent will set a route to 169.254.0.0/16 if
enable_isolated_metadata_proxy=True.
In that case the DHCP port IP will be the next hop for that route.

Otherwise, it might be that your image has a 'builtin' route to such a
CIDR.
What's your next hop for the link-local address?

Salvatore
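For reference, a minimal sketch of where that knob lives; the option name below is the one that appears later in this thread under /etc/quantum/dhcp_agent.ini, and may differ from the name used above:

/etc/quantum/dhcp_agent.ini
enable_isolated_metadata = True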


On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in the
 same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04 host
 and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see that
 the VM's routing table has an entry for 169.254.0.0/16 but I cant ping
 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
 ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
 request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata
 service work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp






 ___
 Mailing list: https://launchpad.net/~openstack
 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Yup, that's only if your subnet does not have a default gateway set.
Providing the output of route -n would be helpful.


On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando sorla...@nicira.com wrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be your image might have a 'builtin' route to such
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in the
 same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see
 that the VM's routing table has an entry for 169.254.0.0/16 but I cant
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.comwrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata
 service work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Hi Salvatore,

Thanks for the response. I do not have enable_isolated_metadata_proxy
anywhere under /etc/quantum and /etc/nova. The closest I see is
'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
commented out. What do you mean by link-local address?

Like you said, I suspect that the image has the route. This was a
snapshot taken in a Folsom setup, so it's possible that Folsom injected
this route and, when I took the snapshot, it became part of the snapshot. I
then copied over this snapshot to a new Grizzly setup. Let me check the
image and remove the route from it if it is there. Thanks for the hint
again.

Regards,
Balu



On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando sorla...@nicira.com wrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be your image might have a 'builtin' route to such
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in the
 same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see
 that the VM's routing table has an entry for 169.254.0.0/16 but I cant
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
I do not have anything running in the VM which could add this route. With
the route removed, when I disable and enable networking, so that it gets
the details back from the DHCP server, I see that the route is getting added
again.

So DHCP seems to be my issue. I guess this rules out any pre-existing route
in the image as well.

Regards,
Balu


On Wed, Apr 24, 2013 at 12:39 PM, Aaron Rosen aro...@nicira.com wrote:

 Hrm, I'd do quantum subnet-list and see if you happened to create a subnet
 169.254.0.0/16? Otherwise I think there is probably some software in your
 vm image that is adding this route. One thing to test is if you delete this
 route and then rerun dhclient to see if it's added again via dhcp.


 On Wed, Apr 24, 2013 at 12:00 AM, Balamurugan V G balamuruga...@gmail.com
  wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in the
 same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see
 that the VM's routing table has an entry for 169.254.0.0/16 but I cant
 ping 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.comwrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
The routing table in the VM is:

root@vm:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 192.168.2.1 0.0.0.0 UG0  00 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000   00 eth0
192.168.2.0 0.0.0.0 255.255.255.0   U 1  00 eth0
root@vm:~#

And the routing table in the OpenStack node(single node host) is:

root@openstack-dev:~# ip netns exec
qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse
Iface
0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
qg-193bb8ee-f5
10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
qg-193bb8ee-f5
192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
qr-59e69986-6e
root@openstack-dev:~#

Regards,
Balu




On Wed, Apr 24, 2013 at 12:41 PM, Aaron Rosen aro...@nicira.com wrote:

 Yup,  That's only if your subnet does not have a default gateway set.
 Providing the output of route -n would be helpful .


 On Wed, Apr 24, 2013 at 12:08 AM, Salvatore Orlando 
 sorla...@nicira.comwrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be your image might have a 'builtin' route to such
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in
 the same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see
 that the VM's routing table has an entry for 169.254.0.0/16 but I
 cant ping 169.254.169.254 from the VM. I am using a single node setup with
 two NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref
 Use Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
 0 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
 0 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
 0 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
I booted an Ubuntu image in which I had made sure that there was no
pre-existing route for 169.254.0.0/16, but it's getting the route from DHCP
once it boots up. So it's the DHCP server which is sending this route to
the VM.
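If the route really does come back on every DHCP renew, one possible workaround is to drop it again after the interface comes up; a sketch for a Debian/Ubuntu ifupdown guest, entirely an assumption and not something suggested in this thread:

# /etc/network/interfaces in the guest (interface name assumed)
auto eth0
iface eth0 inet dhcp
    post-up ip route del 169.254.0.0/16 dev eth0 || true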

Regards,
Balu


On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Hi Salvatore,

 Thanks for the response. I do not have enable_isolated_metadata_proxy
 anywhere under /etc/quantum and /etc/nova. The closest I see is
 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
 commented out. What do you mean by link-local address?

 Like you said, I suspect that the image has the route. This was was a
 snapshot taken in a Folsom setup. So its possible that Folsom has injected
 this route and when I took the snapshot, it became part of the snapshot. I
 then copied over this snapshot to a new Grizzly setup. Let me check the
 image and remove it from the image if it has the route. Thanks for the hint
 again.

 Regards,
 Balu



 On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando 
 sorla...@nicira.comwrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be your image might have a 'builtin' route to such
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16from 
 the VMs routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume its coming from the DHCP. Any idea how this can be
 suppressed?

 Strangely though, I do not see this route in a WindowsXP VM booted in
 the same network as the earlier Ubuntu VM and the Windows VM can reach the
 metadata service with out me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.com wrote:

 The vm should not have a routing table entry for 169.254.0.0/16  if it
 does i'm not sure how it got there unless it was added by something other
 than dhcp. It seems like that is your problem as the vm is arping directly
 for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see
 that the VM's routing table has an entry for 169.254.0.0/16 but I
 cant ping 169.254.169.254 from the VM. I am using a single node setup with
 two NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When
 I ping 169.254.169.254 from VM, in the host's router namespace, I see the
 ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref
 Use Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
 0 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
 0 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
 0 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Razique Mahroua
Hi Balu,
check this out: http://www.cloudbase.it/cloud-init-for-windows-instances/
It's a great tool, I just had issues myself with the Admin. password changing.
Regards,
Razique

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 24 Apr 2013, at 08:17, Balamurugan V G balamuruga...@gmail.com wrote:
Hi,
I am able to get file injection to work during a CentOS or Ubuntu VM instance
creation, but it doesn't work for a Windows VM. Is there a way to get it to work
for a Windows VM, or is it going to be a limitation we have to live with, perhaps
due to filesystem differences?
Regards,
Balu
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Thanks Razique, I'll try this as well. I am also trying for out of the box
options like file injection and meta-data service.

Regards,
Balu


On Wed, Apr 24, 2013 at 1:57 PM, Razique Mahroua
razique.mahr...@gmail.com wrote:

 Hi Balu,
 check this out
 http://www.cloudbase.it/cloud-init-for-windows-instances/

 It's a great tool, I just had issues myself with the Admin. password
 changing

 Regards,
 Razique

 *Razique Mahroua* - *Nuage & Co*
 razique.mahr...@gmail.com
 Tel : +33 9 72 37 94 15


 On 24 Apr 2013, at 08:17, Balamurugan V G balamuruga...@gmail.com wrote:

 Hi,

 I am able to get File Injection to work during a CentOS or Ubuntu VM
 instance creation. But it doesnt work for a Windows VM. Is there a way to
 get it to work for windows VM or it going to be a limitation we have to
 live with, perhaps due to filesystem differences?

 Regards,
 Balu
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] error in nova-network start-up

2013-04-24 Thread Arindam Choudhury
Hi,

When I try to start the nova-network, I am getting this error:

2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 7472
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 18
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 8
2013-04-24 11:12:31.183 10327 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for aopcso1:aopcso1.uab.es
root@aopcso1:/etc/nova# cat /var/log/nova/nova-network.log 
2013-04-24 11:12:22.140 11502 INFO nova.manager [-] Skipping periodic task 
_periodic_update_dns because its interval is negative
2013-04-24 11:12:22.141 11502 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
2013-04-24 11:12:22.147 11502 AUDIT nova.service [-] Starting network node 
(version 2013.1)
2013-04-24 11:12:23.590 11502 CRITICAL nova [-] Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env 
CONFIG_FILE=[/etc/nova/nova.conf] NETWORK_ID=1 dnsmasq --strict-order 
--bind-interfaces --conf-file= --domain='novalocal' 
--pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1 
--except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Exit code: 2
Stdout: ''
Stderr: 2013-04-24 11:12:23.481 INFO nova.manager 
[req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] Skipping periodic task 
_periodic_update_dns because its interval is negative\n2013-04-24 11:12:23.482 
INFO nova.network.driver [req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] 
Loading network driver 'nova.network.linux_net'\n\ndnsmasq: failed to create 
listening socket for 192.168.100.1: La direcci\xc3\xb3n ya se est\xc3\xa1 
usando\n
2013-04-24 11:12:23.590 11502 TRACE nova Traceback (most recent call last):
2013-04-24 11:12:23.590 11502 TRACE nova   File /usr/bin/nova-network, line 
54, in module
2013-04-24 11:12:23.590 11502 TRACE nova service.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 689, in wait
2013-04-24 11:12:23.590 11502 TRACE nova _launcher.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 209, in wait
2013-04-24 11:12:23.590 11502 TRACE nova super(ServiceLauncher, self).wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 179, in wait
2013-04-24 11:12:23.590 11502 TRACE nova service.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 168, in wait
2013-04-24 11:12:23.590 11502 TRACE nova return self._exit_event.wait()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/event.py, line 116, in wait
2013-04-24 11:12:23.590 11502 TRACE nova return hubs.get_hub().switch()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 187, in switch
2013-04-24 11:12:23.590 11502 TRACE nova return self.greenlet.switch()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 194, in main
2013-04-24 11:12:23.590 11502 TRACE nova result = function(*args, **kwargs)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 147, in run_server
2013-04-24 11:12:23.590 11502 TRACE nova server.start()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/service.py, line 429, in start
2013-04-24 11:12:23.590 11502 TRACE nova self.manager.init_host()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 1602, in 
init_host
2013-04-24 11:12:23.590 11502 TRACE nova super(FlatDHCPManager, 
self).init_host()
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 345, in 
init_host
2013-04-24 11:12:23.590 11502 TRACE nova self._setup_network_on_host(ctxt, 
network)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/network/manager.py, line 1617, in 
_setup_network_on_host
2013-04-24 11:12:23.590 11502 TRACE nova self.driver.update_dhcp(elevated, 
dev, network)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/network/linux_net.py, line 938, in 
update_dhcp
2013-04-24 11:12:23.590 11502 TRACE nova restart_dhcp(context, dev, 
network_ref)
2013-04-24 11:12:23.590 11502 TRACE nova   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py, line 
242, in 

Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Razique Mahroua
Hi Arindam,
looks like the port you are trying to bind the process to is already used, can you run:
$ netstat -tanpu | grep LISTEN
and paste the output?
thanks!
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 24 Apr 2013 at 11:13, Arindam Choudhury arin...@live.com wrote:
 Hi,
 When I try to start the nova-network, I am getting this error:
 [...]

[Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Hi,

It seems that, due to an OVS quantum bug, we need to run the utility
quantum-ovs-cleanup before any of the quantum services start upon a
server reboot.

Where is the best place to put this utility to run automatically when
a server reboots so that the OVS issue is automatically addressed? A
script in /etc/init.d or just plugging in a call for
quantum-ovs-cleanup in an existing script?

Thanks,
Balu

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Arindam Choudhury
Hi,
Thanks for your reply,
Here is the output:

netstat -tanpu | grep LISTEN
tcp        0      0 0.0.0.0:4369            0.0.0.0:*               LISTEN      13837/epmd
tcp        0      0 0.0.0.0:45746           0.0.0.0:*               LISTEN      2104/rpc.statd
tcp        0      0 0.0.0.0:756             0.0.0.0:*               LISTEN      3123/ypbind
tcp        0      0 0.0.0.0:53              0.0.0.0:*               LISTEN      9033/dnsmasq
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      16165/sshd
tcp        0      0 0.0.0.0:16509           0.0.0.0:*               LISTEN      4267/libvirtd
tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      4577/glusterfs
tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      4577/glusterfs
tcp        0      0 0.0.0.0:38467           0.0.0.0:*               LISTEN      4577/glusterfs
tcp        0      0 0.0.0.0:56196           0.0.0.0:*               LISTEN      15134/beam.smp
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      3053/glusterd
tcp        0      0 0.0.0.0:50503           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:8649            0.0.0.0:*               LISTEN      4081/gmond
tcp        0      0 0.0.0.0:24009           0.0.0.0:*               LISTEN      4572/glusterfsd
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      4916/mysqld
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      2093/rpcbind
tcp6       0      0 :::53                   :::*                    LISTEN      9033/dnsmasq
tcp6       0      0 :::22                   :::*                    LISTEN      16165/sshd
tcp6       0      0 :::51129                :::*                    LISTEN      2104/rpc.statd
tcp6       0      0 :::16509                :::*                    LISTEN      4267/libvirtd
tcp6       0      0 :::5672                 :::*                    LISTEN      15134/beam.smp
tcp6       0      0 :::54121                :::*                    LISTEN      -
tcp6       0      0 :::111                  :::*                    LISTEN      2093/rpcbind

Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 11:37:15 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com

Hi Arindam, looks like the port you are trying to bind the process to is 
already used, can you run : $ netstat -tanpu | grep LISTENand paste the 
output?thanks!


Razique Mahroua - Nuage  Corazique.mahroua@gmail.comTel : +33 9 72 37 94 15


On 24 Apr 2013 at 11:13, Arindam Choudhury arin...@live.com wrote: Hi,

When I try to start the nova-network, I am getting this error:

2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 7472
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 18
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 8
2013-04-24 11:12:31.183 10327 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for aopcso1:aopcso1.uab.es
root@aopcso1:/etc/nova# cat /var/log/nova/nova-network.log 
2013-04-24 11:12:22.140 11502 INFO nova.manager [-] Skipping periodic task 
_periodic_update_dns because its interval is negative
2013-04-24 11:12:22.141 11502 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
2013-04-24 11:12:22.147 11502 AUDIT nova.service [-] Starting network node 
(version 2013.1)
2013-04-24 11:12:23.590 11502 CRITICAL nova [-] Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env 
CONFIG_FILE=[/etc/nova/nova.conf] NETWORK_ID=1 dnsmasq --strict-order 
--bind-interfaces --conf-file= --domain='novalocal' 
--pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1 
--except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Exit code: 2
Stdout: ''
Stderr: 2013-04-24 11:12:23.481 INFO nova.manager 
[req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] Skipping periodic task 
_periodic_update_dns because its interval is negative\n2013-04-24 11:12:23.482 
INFO nova.network.driver [req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None None] 
Loading network driver 'nova.network.linux_net'\n\ndnsmasq: failed to create 
listening socket for 192.168.100.1: La direcci\xc3\xb3n ya se est\xc3\xa1 
usando\n
2013-04-24 11:12:23.590 11502 TRACE nova Traceback (most recent call last):
2013-04-24 11:12:23.590 11502 TRACE nova   File /usr/bin/nova-network, line 
54, 

Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Razique Mahroua
Ok, that's process 9033 - try a $ kill 9033 and you should be good!
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

On 24 Apr 2013 at 11:52, Arindam Choudhury arin...@live.com wrote:
 Hi,
 Thanks for your reply,
 Here is the output:
 [...]

Re: [Openstack] error in nova-network start-up

2013-04-24 Thread Arindam Choudhury

Hi Razique,

Thanks a lot. So, lesson learned: a system-wide dnsmasq should not already be running, since nova-network launches its own dnsmasq for each network.
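
For anyone hitting the same thing, a minimal sequence to confirm and clear the
conflict (a sketch; it assumes the stray listener is the distribution's own
dnsmasq service rather than one spawned by libvirt or by nova-network itself):

  # see what is already bound to the DNS/DHCP ports nova-network wants
  $ sudo netstat -tanpu | grep -E ':53 |:67 '
  # stop the system-wide dnsmasq and keep it from returning on reboot
  $ sudo service dnsmasq stop
  $ sudo update-rc.d dnsmasq disable
  # then restart nova-network, which spawns its own dnsmasq per network
  $ sudo service nova-network restart    # service name varies with packaging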
Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 12:01:16 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com

Ok that's the Process 9033 - try a $ kill 9033 and you should be good!

Razique Mahroua - Nuage  Corazique.mahroua@gmail.comTel : +33 9 72 37 94 15


On 24 Apr 2013 at 11:52, Arindam Choudhury arin...@live.com wrote: Hi,
Thanks for your reply,
Here is the output:

netstat -tanpu | grep LISTEN
tcp0  0 0.0.0.0:43690.0.0.0:*   LISTEN  
13837/epmd  
tcp0  0 0.0.0.0:45746   0.0.0.0:*   LISTEN  
2104/rpc.statd  
tcp0  0 0.0.0.0:756 0.0.0.0:*   LISTEN  
3123/ypbind 
tcp0  0 0.0.0.0:53  0.0.0.0:*   LISTEN  
9033/dnsmasq
tcp0  0 0.0.0.0:22  0.0.0.0:*   LISTEN  
16165/sshd  
tcp0  0 0.0.0.0:16509   0.0.0.0:*   LISTEN  
4267/libvirtd   
tcp0  0 0.0.0.0:38465   0.0.0.0:*   LISTEN  
4577/glusterfs  
tcp0  0 0.0.0.0:38466   0.0.0.0:*   LISTEN  
4577/glusterfs  
tcp0  0 0.0.0.0:38467   0.0.0.0:*   LISTEN  
4577/glusterfs  
tcp0  0 0.0.0.0:56196   0.0.0.0:*   LISTEN  
15134/beam.smp  
tcp0  0 0.0.0.0:24007   0.0.0.0:*   LISTEN  
3053/glusterd   
tcp0  0 0.0.0.0:50503   0.0.0.0:*   LISTEN  
-   
tcp0  0 0.0.0.0:86490.0.0.0:*   LISTEN  
4081/gmond  
tcp0  0 0.0.0.0:24009   0.0.0.0:*   LISTEN  
4572/glusterfsd 
tcp0  0 0.0.0.0:33060.0.0.0:*   LISTEN  
4916/mysqld 
tcp0  0 0.0.0.0:111 0.0.0.0:*   LISTEN  
2093/rpcbind
tcp6   0  0 :::53   :::*LISTEN  
9033/dnsmasq
tcp6   0  0 :::22   :::*LISTEN  
16165/sshd  
tcp6   0  0 :::51129:::*LISTEN  
2104/rpc.statd  
tcp6   0  0 :::16509:::*LISTEN  
4267/libvirtd   
tcp6   0  0 :::5672 :::*LISTEN  
15134/beam.smp  
tcp6   0  0 :::54121:::*LISTEN  
-   
tcp6   0  0 :::111  :::*LISTEN  
2093/rpcbind

Subject: Re: [Openstack] error in nova-network start-up
From: razique.mahr...@gmail.com
Date: Wed, 24 Apr 2013 11:37:15 +0200
CC: openstack@lists.launchpad.net
To: arin...@live.com

Hi Arindam, looks like the port you are trying to bind the process to is 
already used, can you run : $ netstat -tanpu | grep LISTENand paste the 
output?thanks!

Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel : +33 9 72 37 94 15
On 24 Apr 2013 at 11:13, Arindam Choudhury arin...@live.com wrote: Hi,

When I try to start the nova-network, I am getting this error:

2013-04-24 11:12:30.926 10327 AUDIT nova.compute.resource_tracker [-] Auditing 
locally available compute resources
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free ram 
(MB): 7472
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free disk 
(GB): 18
2013-04-24 11:12:31.064 10327 AUDIT nova.compute.resource_tracker [-] Free 
VCPUS: 8
2013-04-24 11:12:31.183 10327 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for aopcso1:aopcso1.uab.es
root@aopcso1:/etc/nova# cat /var/log/nova/nova-network.log 
2013-04-24 11:12:22.140 11502 INFO nova.manager [-] Skipping periodic task 
_periodic_update_dns because its interval is negative
2013-04-24 11:12:22.141 11502 INFO nova.network.driver [-] Loading network 
driver 'nova.network.linux_net'
2013-04-24 11:12:22.147 11502 AUDIT nova.service [-] Starting network node 
(version 2013.1)
2013-04-24 11:12:23.590 11502 CRITICAL nova [-] Unexpected error while running 
command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env 
CONFIG_FILE=[/etc/nova/nova.conf] NETWORK_ID=1 dnsmasq --strict-order 
--bind-interfaces --conf-file= --domain='novalocal' 
--pid-file=/var/lib/nova/networks/nova-br100.pid --listen-address=192.168.100.1 
--except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
Exit code: 2
Stdout: ''
Stderr: 2013-04-24 11:12:23.481 INFO nova.manager 
[req-fb46a0ad-b4fa-41d9-8b1b-f1eb0170a93a None 

Re: [Openstack] [Quantum] Query regarding floating IP configuration

2013-04-24 Thread Daniels Cai
Anil

It is not strictly necessary to leave the l3 agent's external NIC without an
IP address; 2 NICs can work in this scenario. Configure an IP address as you like.

Daniels Cai

http://dnscai.com

On 2013-4-24, 1:48, Edgar Magana emag...@plumgrid.com wrote:

Anil,

If you are testing multiple vNICs I will recommend you to use the following
image:
IMAGE_URLS=http://www.openvswitch.org/tty-quantum.tgz

In your localrc add the above string and you are all set up!

Thanks,

Edgar

From: Anil Vishnoi vishnoia...@gmail.com
Date: Wednesday, April 17, 2013 1:29 PM
To: openstack@lists.launchpad.net openstack@lists.launchpad.net
Subject: [Openstack] [Quantum] Query regarding floating IP configuration


Hi All,

I am trying to setup openstack in my lab, where i have a plan to run
Controller+Network node on one physical machine and two compute node.
Controller/Network physical machine has 2 NIc, one connected to externet
network (internet) and second nic is on private network.

OS Network Administrator Guide says The node running quantum-l3-agent
should not have an IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of IP addresses from
the external network that can be used by OpenStack Networking for routers
that uplink to the external network.. So my confusion is, if i want to
send any REST API call to my controller/network node from external network,
i obviously need public IP address. But instruction i quoted says that we
should not have manual IP address on the NIC.

Does it mean we can't create floating IP pool in this kind of setup? Or we
need 3 NIC, 1 for private network, 1 for floating ip pool creation and 1
for external access to the machine?

OR is it that we can assign the public ip address to the br-ex, and remove
it from physical NIC? Please let me know if my query is not clear.
-- 
Thanks
Anil
___ Mailing list:
https://launchpad.net/~openstack Post to :
openstack@lists.launchpad.netUnsubscribe :
https://launchpad.net/~openstack More help :
https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Ceilometer Install

2013-04-24 Thread Riki Arslan
Hi,

We are trying to install ceilometer-2013.1~g2.tar.gz which presumably has
Folsom compatibility.

The requirement is python-keystoneclient>=0.2,<0.3 and we have version
0.2.3 installed.

But, still, setup quits with the following message:

error: Installed distribution python-keystoneclient 0.2.3 conflicts with
requirement python-keystoneclient>=0.1.2,<0.2

The funny thing is, although pip-requires states
python-keystoneclient>=0.2,<0.3, the error message complains about a
requirement of python-keystoneclient>=0.1.2,<0.2.

Your help is greatly appreciated.

Thank you in advance.
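
One way to see which installed distribution is still pinning the older client
range (a sketch; the paths assume a Debian/Ubuntu Python 2.7 layout and will
differ elsewhere):

  $ pip freeze | grep -i keystone
  # find the package whose egg-info declares the conflicting requirement
  $ grep -l python-keystoneclient \
      /usr/lib/python2.7/dist-packages/*.egg-info/requires.txt \
      /usr/local/lib/python2.7/dist-packages/*.egg-info/requires.txt

Upgrading or patching whichever package that turns out to be (or relaxing its
pin) usually clears the conflict.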
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Fwd: [Quantum] Query regarding floating IP configuration

2013-04-24 Thread Sylvain Bauza

Hi Anil,

What you quoted is about L3 management and bridging and the need for 
flexibility. It means that the physical NIC will have a whole bunch of 
IP addresses, one per Quantum router you define.


Should you want to deploy a Controller on that node, you would need to 
have a second NIC with external access (what is called the API Network in 
the doc page Simon quoted).


There is also a need for a Data Network, ideally with a third NIC (if you 
want to provide separate IP ranges for the API and data networks), but you can 
bypass that in a lab environment by assuming that your data network IP 
range is externally reachable, and consequently that the management IP of the 
controller/network node is the public IP (for the API purpose) (here, the 
NIC2 IP address).


Is it clearer ?

-Sylvain
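
To the last question below: yes, a common pattern is to move the public address
off the physical NIC and onto br-ex. A sketch for Ubuntu's
/etc/network/interfaces (interface name and addresses are made-up placeholders):

  auto eth1
  iface eth1 inet manual
      up ip link set dev $IFACE up
      down ip link set dev $IFACE down

  auto br-ex
  iface br-ex inet static
      address 203.0.113.10
      netmask 255.255.255.0
      gateway 203.0.113.1

  # eth1 must also be plugged into the bridge, e.g.:
  #   ovs-vsctl add-port br-ex eth1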



On 18/04/2013 21:00, Anil Vishnoi wrote:

Re-sending it, with the hope of response :-)

-- Forwarded message --
From: *Anil Vishnoi* vishnoia...@gmail.com 
mailto:vishnoia...@gmail.com

Date: Thu, Apr 18, 2013 at 1:59 AM
Subject: [Openstack][Quantum] Query regarding floating IP configuration
To: openstack@lists.launchpad.net 
mailto:openstack@lists.launchpad.net openstack@lists.launchpad.net 
mailto:openstack@lists.launchpad.net




Hi All,

I am trying to setup openstack in my lab, where i have a plan to run 
Controller+Network node on one physical machine and two compute node. 
Controller/Network physical machine has 2 NIc, one connected to 
externet network (internet) and second nic is on private network.


OS Network Administrator Guide says The node running quantum-l3-agent 
should not have an IP address manually configured on the NIC connected 
to the external network. Rather, you must have a range of IP addresses 
from the external network that can be used by OpenStack Networking for 
routers that uplink to the external network.. So my confusion is, if 
i want to send any REST API call to my controller/network node from 
external network, i obviously need public IP address. But instruction 
i quoted says that we should not have manual IP address on the NIC.


Does it mean we can't create floating IP pool in this kind of setup? 
Or we need 3 NIC, 1 for private network, 1 for floating ip pool 
creation and 1 for external access to the machine?


OR is it that we can assign the public ip address to the br-ex, and 
remove it from physical NIC? Please let me know if my query is not clear.

--
Thanks
Anil



--
Thanks
Anil


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury
Hi,

I am having a problem with the metadata service. I am using nova-network. The console 
log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
+----+---------+------------------+
| ID | Label   | Cidr             |
+----+---------+------------------+
| 1  | private | 192.168.100.0/24 |
+----+---------+------------------+
[(keystone_user)]$ nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+
[(keystone_user)]$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
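
Before digging further into security groups, it can help to confirm that
nova-network's dnsmasq is actually alive on the bridge the instance is plugged
into (a sketch; br100 is the default FlatDHCP bridge name and may differ in
nova.conf):

  $ ps aux | grep [d]nsmasq
  $ brctl show br100
  $ ip addr show br100          # should carry the gateway, e.g. 192.168.100.1
  $ sudo tcpdump -n -i br100 port 67 or port 68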


  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I put it in the file /etc/init/quantum-server.conf:

post-start script
/usr/bin/quantum-ovs-cleanup
exit 1
end script


On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,
 
 It seems due to an OVS quantum bug, we need to run the utility 
 quantum-ovs-cleanup
 before any of the quantum services start, upon a server reboot.
 
 Where is the best place to put this utility to run automatically when a server
 reboots so that the OVS issue is automatically addressed? A script in 
 /etc/init.d or
 just plugging in a call for quantum-ovs-cleanup in an existing script?
 
 Thanks, Balu
 
 ___ Mailing list:
 https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net 
 Unsubscribe : https://launchpad.net/~openstack More help   :
 https://help.launchpad.net/ListHelp
 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific  HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF36C8ACgkQoBCTJSAkVrGqMACg3Jm7tTBwx08oOSaiTVux7sRl
cNMAn0OMrAElV2CZgqZFaayoeOitQMUn
=TGy3
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury

hi,

I was misled by this:

[(keystone_user)]$ nova list
+--------------------------------------+--------+--------+-----------------------+
| ID                                   | Name   | Status | Networks              |
+--------------------------------------+--------+--------+-----------------------+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | private=192.168.100.2 |
+--------------------------------------+--------+--------+-----------------------+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping




Hi,

I having problem with metadata service. I am using nova-network. The console 
log says:

Starting network...udhcpc (v1.18.5) startedSending discover...Sending 
discover...Sending discover...No lease, failingWARN: /etc/rc3.d/S40network 
failedcloudsetup: checking 
http://169.254.169.254/20090404/metadata/instanceidwget: can't connect to 
remote host (169.254.169.254): Network is unreachablecloudsetup: failed 1/30: 
up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+


  

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp   
  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Quantum] traffic routes question

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

I'm having trouble getting the floating IPs on the external network
accessible from the outside world. From the network node they work
fine, but somehow I doubt that means anything.

So my network node (also the controller node) has 4 ethernets: 1 for management,
1 for VM traffic, 1 that is an extra external connection in case I screw
things up, and 1 external connection that is bridged for the normal br-ex interface.
The two internal ones are 10.X and 172.X type subnets; the two external ones are
on the same subnet, .100 for the extra one and .101 for the one qrouter set up
at the moment. The external subnet was created with a gateway pointing at the
next hop upstream from the network node.

Setting up a VM with an external IP (.102) works OK: from the network node I can
ssh into it and all, and from other VMs I can get to the external floating IP.
But I doubt any of that means it's really set up correctly.

My question is this: the upstream routers all think the next hop for
the floating IPs is the .101 qrouter IP address. The router that gets
set up will route packets coming in from this side of things, will it not?

I've tried turning off iptables in case something was blocking traffic,
but that didn't help anything.

I've tried tcpdump on the gateway router device (qg-9dd1a800-c5/.101) and see
stuff going to the .102 IP when it's coming from the VMs, but nothing when
I try to connect to it from the outside world. I don't see any traffic for
.102 on the other external network either.

I don't have any access to the next hop upstream to see what packets
are going where.

But this should all work, correct?
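
One way to narrow this down (a sketch; the qg-9dd1a800-c5 device and the
.101/.102 addresses come from the description above and will differ in other
deployments) is to check that the router namespace owns the floating IP and
answers for it on the external bridge:

  $ ip netns list
  $ ip netns exec qrouter-<uuid> ip addr show qg-9dd1a800-c5
  $ ip netns exec qrouter-<uuid> iptables -t nat -S | grep <floating-ip>
  # watch ARP and traffic on the external bridge while probing from outside
  $ sudo tcpdump -n -e -i br-ex arp or host <floating-ip>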

thanks

s

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific  HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF38CcACgkQoBCTJSAkVrF16QCfXGes9kYSqi0jS3x5Es5Asrs+
fUUAnAkvRJXLY2eMN5N6+RuxZaWmzZe5
=/qEk
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Ceilometer does not collect metrics like vcpus and memory

2013-04-24 Thread Giuseppe Civitella
Hi all,

I'm trying to collect Ceilometer's metrics from my test install of
Openstack Grizzly.
I'm able to collect most of the metrics from the central collector and the
nova-compute agents.
But I'm still missing some values like memory and vcpus.
This is an extract from ceilometer's log on a nova-compute node:

http://paste.openstack.org/show/36567/

vcpus and memory_mb are empty values.
Any idea about how to get them?
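
A quick way to see what has actually been recorded (a sketch; it assumes the
Grizzly python-ceilometerclient with admin credentials sourced, and that the
meter names match the default pipeline):

  $ ceilometer meter-list
  $ ceilometer sample-list -m vcpus
  $ ceilometer sample-list -m memory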

Thanks a lot
Giuseppe
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Thanks Steve.

I came across another way at
https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems
to work as well. But your solution is simpler :)

Regards,
Balu


On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand steve.heist...@nasa.gov wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 I put it in the file:/etc/init/quantum-server.conf

 post-start script
 /usr/bin/quantum-ovs-cleanup
 exit 1
 end script


 On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,

 It seems due to an OVS quantum bug, we need to run the utility 
 quantum-ovs-cleanup
 before any of the quantum services start, upon a server reboot.

 Where is the best place to put this utility to run automatically when a 
 server
 reboots so that the OVS issue is automatically addressed? A script in 
 /etc/init.d or
 just plugging in a call for quantum-ovs-cleanup in an existing script?

 Thanks, Balu

 ___ Mailing list:
 https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack More help   :
 https://help.launchpad.net/ListHelp


 - --
 
  Steve Heistand  NASA Ames Research Center
  email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
  ph: (650) 604-4369  Bldg. 258, Rm. 232-5
  Scientific  HPC ApplicationP.O. Box 1
  Development/OptimizationMoffett Field, CA 94035-0001
 
  Any opinions expressed are those of our alien overlords, not my own.

 # For Remedy#
 #Action: Resolve#
 #Resolution: Resolved   #
 #Reason: No Further Action Required #
 #Tier1: User Code   #
 #Tier2: Other   #
 #Tier3: Assistance  #
 #Notification: None #
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2.0.14 (GNU/Linux)

 iEYEARECAAYFAlF36C8ACgkQoBCTJSAkVrGqMACg3Jm7tTBwx08oOSaiTVux7sRl
 cNMAn0OMrAElV2CZgqZFaayoeOitQMUn
 =TGy3
 -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

it was mentioned to me (by Mr Mihaiescu) that this only works if the controller
and network node are on the same machine. For the compute nodes I had forgotten
it's in a different place: on them I am doing it in a pre-start script in
quantum-plugin-openvswitch-agent.conf. If the controller and network node are on
different machines, put it in quantum-server.conf on whichever one of them is
actually running quantum-server; if neither is, the command will have to go in a
different startup script.

It was also mentioned that putting things in /etc/rc.local and then restarting
all the quantum related services might work too.

steve

On 04/24/2013 08:15 AM, Balamurugan V G wrote:
 Thanks Steve.
 
 I came across another way at 
 https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to work 
 as
 well. But your solution is simpler :)
 
 Regards, Balu
 
 
 On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand steve.heist...@nasa.gov 
 wrote: I
 put it in the file:/etc/init/quantum-server.conf
 
 post-start script /usr/bin/quantum-ovs-cleanup exit 1 end script
 
 
 On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,
 
 It seems due to an OVS quantum bug, we need to run the utility
 quantum-ovs-cleanup before any of the quantum services start, upon a server
 reboot.
 
 Where is the best place to put this utility to run automatically when a 
 server 
 reboots so that the OVS issue is automatically addressed? A script in
 /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an 
 existing
 script?
 
 Thanks, Balu
 
 ___ Mailing list: 
 https://launchpad.net/~openstack Post to : 
 openstack@lists.launchpad.net 
 Unsubscribe : https://launchpad.net/~openstack More help   : 
 https://help.launchpad.net/ListHelp
 
 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific  HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF3+K8ACgkQoBCTJSAkVrFfRACgjiiRXjyRGfc2fGPJWTmJTjnK
89cAnRnstn0e/GiYz0Go13R2B+lBUWWw
=HmUJ
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Right now, I have a single node setup on which I am qualifying my use
cases, but eventually I will have a controller node, a network node and
several compute nodes. In that case, do you mean it should be something
like this?

Controller : post-start of quantum-server.conf
Network    : post-start of quantum-server.conf
Compute    : pre-start of quantum-plugin-openvswitch-agent.conf

Thanks,
Balu

On Wed, Apr 24, 2013 at 8:52 PM, Steve Heistand steve.heist...@nasa.gov wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 it was mentioned to me (by Mr Mihaiescu) that this only works if controller 
 and network node
 are on the same machine. For the compute nodes I had forgotten its in a 
 different
 place. On them I am doing it in a pre-start script in 
 quantum-plugin-openvswitch-agent.conf.
 if the controller/network are on different machines certainly in the 
 quantum-server.conf
 work on which ever one of them is actually using it, if it doesnt the command 
 will have
 to be in a different startup script.

 It was also mentioned that putting things in /etc/rc.local and then restarting
 all the quantum related services might work too.

 steve

 On 04/24/2013 08:15 AM, Balamurugan V G wrote:
 Thanks Steve.

 I came across another way at
 https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to 
 work as
 well. But your solution is simpler :)

 Regards, Balu


 On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand steve.heist...@nasa.gov 
 wrote: I
 put it in the file:/etc/init/quantum-server.conf

 post-start script /usr/bin/quantum-ovs-cleanup exit 1 end script


 On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,

 It seems due to an OVS quantum bug, we need to run the utility
 quantum-ovs-cleanup before any of the quantum services start, upon a 
 server
 reboot.

 Where is the best place to put this utility to run automatically when a 
 server
 reboots so that the OVS issue is automatically addressed? A script in
 /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an 
 existing
 script?

 Thanks, Balu

 ___ Mailing list:
 https://launchpad.net/~openstack Post to : 
 openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack More help   :
 https://help.launchpad.net/ListHelp



 - --
 
  Steve Heistand  NASA Ames Research Center
  email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
  ph: (650) 604-4369  Bldg. 258, Rm. 232-5
  Scientific  HPC ApplicationP.O. Box 1
  Development/OptimizationMoffett Field, CA 94035-0001
 
  Any opinions expressed are those of our alien overlords, not my own.

 # For Remedy#
 #Action: Resolve#
 #Resolution: Resolved   #
 #Reason: No Further Action Required #
 #Tier1: User Code   #
 #Tier2: Other   #
 #Tier3: Assistance  #
 #Notification: None #
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2.0.14 (GNU/Linux)

 iEYEARECAAYFAlF3+K8ACgkQoBCTJSAkVrFfRACgjiiRXjyRGfc2fGPJWTmJTjnK
 89cAnRnstn0e/GiYz0Go13R2B+lBUWWw
 =HmUJ
 -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury
Hi,

Thanks for your reply.

The dnsmasq is running properly.

when I tried to run:

  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT

it says:

  #  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
  Bad argument `67:68'

Do I have to do this iptables configuration on the controller or also on the
compute nodes?

To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:17:41 -0500

Arindam,

I saw a similar problem with quantum. If you have iptables running on the
hosting system you may need to update the rules to allow the DHCP Discover
packet through:  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT

Also ensure that dnsmasq is running properly.

Jay S. Bryant
Linux Developer - OpenStack Enterprise Edition
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

"All the world's a stage and most of us are desperately unrehearsed."
   -- Sean O'Casey
From:    Arindam Choudhury arin...@live.com
To:      openstack openstack@lists.launchpad.net
Date:    04/24/2013 10:12 AM
Subject: Re: [Openstack] problem with metadata and ping
Sent by: Openstack openstack-bounces+jsbryant=us.ibm@lists.launchpad.net

hi,

I was misled by this:

[(keystone_user)]$ nova list
+--------------------------------------+--------+--------+-----------------------+
| ID                                   | Name   | Status | Networks              |
+--------------------------------------+--------+--------+-----------------------+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | private=192.168.100.2 |
+--------------------------------------+--------+--------+-----------------------+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping

Hi,

I having problem with metadata service. I am using nova-network. The console
log says:

Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): Network is unreachable
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410
[(keystone_user)]$ nova network-list
+----+---------+------------------+
| ID | Label   | Cidr             |
+----+---------+------------------+
| 1  | private | 192.168.100.0/24 |
+----+---------+------------------+
[(keystone_user)]$ nova secgroup-list
+---------+-------------+
| Name    | Description |
+---------+-------------+
| default | default     |
+---------+-------------+
[(keystone_user)]$ nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Steve Heistand
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

The network node probably won't be running quantum-server, just one
of the agents, so you put the command in one of those configs, not
quantum-server.

That is what I'm doing currently and it is working for me.
At some point, if you have running VMs with active network
connections and need to restart quantum for some reason, this
'may' interrupt their connections; something to keep in mind.

steve
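
Putting the pieces together, a minimal upstart sketch for a compute node (it
assumes Ubuntu's packaged /etc/init/quantum-plugin-openvswitch-agent.conf; only
the pre-start stanza is added, the rest of the packaged job is unchanged):

  # /etc/init/quantum-plugin-openvswitch-agent.conf (excerpt)
  pre-start script
      # clear stale OVS ports left over from before the reboot
      /usr/bin/quantum-ovs-cleanup || true
  end script

The "|| true" keeps a cleanup failure from aborting the agent start.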


On 04/24/2013 08:32 AM, Balamurugan V G wrote:
 Right now, I have a single node setup on which I am qualifying my use cases 
 but
 eventually I will have a controller node, network node and several compute 
 nodes. In
 that case, do you mean it should something like this?
 
 Controller : post-start of quantum-server.cong Network :   post-start of
 quantum-server.cong Compute:  pre-start of  
 quantum-plugin-openvswitch-agent.conf
 
 Thanks, Balu
 
 On Wed, Apr 24, 2013 at 8:52 PM, Steve Heistand steve.heist...@nasa.gov 
 wrote: it
 was mentioned to me (by Mr Mihaiescu) that this only works if controller and 
 network
 node are on the same machine. For the compute nodes I had forgotten its in a
 different place. On them I am doing it in a pre-start script in
 quantum-plugin-openvswitch-agent.conf. if the controller/network are on 
 different
 machines certainly in the quantum-server.conf work on which ever one of them 
 is
 actually using it, if it doesnt the command will have to be in a different 
 startup
 script.
 
 It was also mentioned that putting things in /etc/rc.local and then 
 restarting all
 the quantum related services might work too.
 
 steve
 
 On 04/24/2013 08:15 AM, Balamurugan V G wrote:
 Thanks Steve.
 
 I came across another way at 
 https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to 
 work
 as well. But your solution is simpler :)
 
 Regards, Balu
 
 
 On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand steve.heist...@nasa.gov
 wrote: I put it in the file:/etc/init/quantum-server.conf
 
 post-start script /usr/bin/quantum-ovs-cleanup exit 1 end script
 
 
 On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,
 
 It seems due to an OVS quantum bug, we need to run the utility 
 quantum-ovs-cleanup before any of the quantum services start, upon a
 server reboot.
 
 Where is the best place to put this utility to run automatically when a
 server reboots so that the OVS issue is automatically addressed? A 
 script
 in /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an
 existing script?
 
 Thanks, Balu
 
 ___ Mailing list: 
 https://launchpad.net/~openstack Post to :
 openstack@lists.launchpad.net Unsubscribe :
 https://launchpad.net/~openstack More help   : 
 https://help.launchpad.net/ListHelp
 
 
 

- -- 

 Steve Heistand  NASA Ames Research Center
 email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
 ph: (650) 604-4369  Bldg. 258, Rm. 232-5
 Scientific  HPC ApplicationP.O. Box 1
 Development/OptimizationMoffett Field, CA 94035-0001

 Any opinions expressed are those of our alien overlords, not my own.

# For Remedy#
#Action: Resolve#   
#Resolution: Resolved   #
#Reason: No Further Action Required #
#Tier1: User Code   #
#Tier2: Other   #
#Tier3: Assistance  #
#Notification: None #
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.14 (GNU/Linux)

iEYEARECAAYFAlF3/W0ACgkQoBCTJSAkVrHdnwCgrnCfjN1NKCml+jFPtHk0s4iA
Nx0An3g6abwQons0jMXkJLu4oBhiZ4ot
=zh9U
-END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Lorin Hochstein
In the docs, we describe how to configure KVM block-based live migration,
and it has the advantage of avoiding the need for shared storage of
instances.

However, there's this email from Daniel Berrangé from back in Aug 2012:
http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html

Block migration is a part of the KVM that none of the upstream developers
really like, is not entirely reliable, and most distros typically do not
want to support it due to its poor design (eg not supported in RHEL).

It is quite likely that it will be removed in favour of an alternative
implementation. What that alternative impl will be, and when I will
arrive, I can't say right now.

Based on this info, the OpenStack Ops guide currently recommends against
using block-based live migration, but the Compute Admin guide has no
warnings about this.

I wanted to sanity-check against the mailing list to verify that this was
still the case. What's the state of block-based live migration with KVM?
Should we be dissuading people from using it, or is it reasonable for
people to use it?
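
For reference, the knobs being discussed (flag and option names as of the
Grizzly-era client and libvirt driver; worth re-checking against your packaged
nova.conf before relying on them):

  # client side: request a block-based live migration (no shared instance storage)
  $ nova live-migration --block-migrate <instance-uuid> <target-host>

  # nova.conf, libvirt driver: flags applied when block migration is requested
  block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC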

Lorin
-- 
Lorin Hochstein
Lead Architect - Cloud Services
Nimbis Services, Inc.
www.nimbisservices.com
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] What is the best place to run quantum-ovs-cleanup

2013-04-24 Thread Balamurugan V G
Ok thanks, this helps a lot. But isn't this being done to avoid those
disruptions/issues with networking after a restart? Do you mean that
doing this will result in disruptions after a restart?

Regards,
Balu

On Wed, Apr 24, 2013 at 9:12 PM, Steve Heistand steve.heist...@nasa.gov wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 The network node probably wont be running quantum server just one
 of the agents, so you put the command in one of those configs not
 quantum-server.

 That is what Im doing currently and it is working for me.
 at some point if you have running VMs with active network
 connections and need to restart quantum for some reason this
 'may' interrupt their connections. something to keep in mind.

 steve


 On 04/24/2013 08:32 AM, Balamurugan V G wrote:
 Right now, I have a single node setup on which I am qualifying my use cases 
 but
 eventually I will have a controller node, network node and several compute 
 nodes. In
 that case, do you mean it should something like this?

 Controller : post-start of quantum-server.cong Network :   post-start of
 quantum-server.cong Compute:  pre-start of  
 quantum-plugin-openvswitch-agent.conf

 Thanks, Balu

 On Wed, Apr 24, 2013 at 8:52 PM, Steve Heistand steve.heist...@nasa.gov 
 wrote: it
 was mentioned to me (by Mr Mihaiescu) that this only works if controller and 
 network
 node are on the same machine. For the compute nodes I had forgotten its in a
 different place. On them I am doing it in a pre-start script in
 quantum-plugin-openvswitch-agent.conf. if the controller/network are on 
 different
 machines certainly in the quantum-server.conf work on which ever one of them 
 is
 actually using it, if it doesnt the command will have to be in a different 
 startup
 script.

 It was also mentioned that putting things in /etc/rc.local and then 
 restarting all
 the quantum related services might work too.

 steve

 On 04/24/2013 08:15 AM, Balamurugan V G wrote:
 Thanks Steve.

 I came across another way at
 https://bugs.launchpad.net/quantum/+bug/1084355/comments/15. It seems to 
 work
 as well. But your solution is simpler :)

 Regards, Balu


 On Wed, Apr 24, 2013 at 7:41 PM, Steve Heistand steve.heist...@nasa.gov
 wrote: I put it in the file:/etc/init/quantum-server.conf

 post-start script /usr/bin/quantum-ovs-cleanup exit 1 end script


 On 04/24/2013 02:45 AM, Balamurugan V G wrote:
 Hi,

 It seems due to an OVS quantum bug, we need to run the utility
 quantum-ovs-cleanup before any of the quantum services start, upon a
 server reboot.

 Where is the best place to put this utility to run automatically when a
 server reboots so that the OVS issue is automatically addressed? A 
 script
 in /etc/init.d or just plugging in a call for quantum-ovs-cleanup in an
 existing script?

 Thanks, Balu

 ___ Mailing list:
 https://launchpad.net/~openstack Post to :
 openstack@lists.launchpad.net Unsubscribe :
 https://launchpad.net/~openstack More help   :
 https://help.launchpad.net/ListHelp




 - --
 
  Steve Heistand  NASA Ames Research Center
  email: steve.heist...@nasa.gov  Steve Heistand/Mail Stop 258-6
  ph: (650) 604-4369  Bldg. 258, Rm. 232-5
  Scientific  HPC ApplicationP.O. Box 1
  Development/OptimizationMoffett Field, CA 94035-0001
 
  Any opinions expressed are those of our alien overlords, not my own.

 # For Remedy#
 #Action: Resolve#
 #Resolution: Resolved   #
 #Reason: No Further Action Required #
 #Tier1: User Code   #
 #Tier2: Other   #
 #Tier3: Assistance  #
 #Notification: None #
 -BEGIN PGP SIGNATURE-
 Version: GnuPG v2.0.14 (GNU/Linux)

 iEYEARECAAYFAlF3/W0ACgkQoBCTJSAkVrHdnwCgrnCfjN1NKCml+jFPtHk0s4iA
 Nx0An3g6abwQons0jMXkJLu4oBhiZ4ot
 =zh9U
 -END PGP SIGNATURE-

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Jay S Bryant
Arindam,

Ooops, I had a typo.   The command should have been:  iptables -I input -i 
tap+ -p udp -dport 67:68 --sport 67:68 -j ACCEPT

You need the iptables configuration on the system where dnsmasq is 
running.  It shouldn't be necessary in the compute nodes that are being 
booted.
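
A quick way to sanity-check the rule and the DHCP listener on the dnsmasq host 
(just a sketch; bridge and interface names will differ per setup):

# confirm the ACCEPT rule for ports 67/68 is actually in the INPUT chain
iptables -L INPUT -n -v --line-numbers | grep -E '67|68'
# confirm dnsmasq is listening on port 67 on the expected address
netstat -lnup | grep ':67 '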


Jay S. Bryant
Linux Developer - 
OpenStack Enterprise Edition
   
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Arindam Choudhury arin...@live.com
To: Jay S Bryant/Rochester/IBM@IBMUS, openstack 
openstack@lists.launchpad.net, 
Date:   04/24/2013 10:47 AM
Subject:RE: [Openstack] problem with metadata and ping



Hi,

Thanks for your reply.

The dnsmasq is running properly.

when I tried to run iptables -I input -i tap+ -p udp 67:68 --sport 67:68 
-j ACCEPT 
it says, 
#  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
Bad argument `67:68'

Do I have to do this iptables configuration on the controller or on the compute 
nodes as well?

To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:17:41 -0500

Arindam, 

I saw a similar problem with quantum.  If you have iptables running on the 
hosting system you may need to update the rules to allow the DHCP Discover 
packet through:  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j 
ACCEPT 

Also ensure that dnsmasq is running properly. 



Jay S. Bryant
Linux Developer - 
   OpenStack Enterprise Edition
  
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
  -- Sean O'Casey
 



From:Arindam Choudhury arin...@live.com 
To:openstack openstack@lists.launchpad.net, 
Date:04/24/2013 10:12 AM 
Subject:Re: [Openstack] problem with metadata and ping 
Sent by:Openstack 
openstack-bounces+jsbryant=us.ibm@lists.launchpad.net 




hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID   | Name   | Status | Networks  |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | 
private=192.168.100.2 |
+--+++---+

This is a nova-network problem.

From: arin...@live.com
To: openstack@lists.launchpad.net
Date: Wed, 24 Apr 2013 16:12:47 +0200
Subject: [Openstack] problem with metadata and ping

Hi,

I'm having a problem with the metadata service. I am using nova-network. The 
console log says:

Starting network... 
udhcpc (v1.18.5) started 
Sending discover... 
Sending discover... 
Sending discover... 
No lease, failing 
WARN: /etc/rc3.d/S40network failed 
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id 
wget: can't connect to remote host (169.254.169.254): Network is 
unreachable 
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: 
https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+



___ Mailing list: 
https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net 
Unsubscribe : https://launchpad.net/~openstack More help : 
https://help.launchpad.net/ListHelp
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net

Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Daniel P. Berrange
On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
 In the docs, we describe how to configure KVM block-based live migration,
 and it has the advantage of avoiding the need for shared storage of
 instances.
 
 However, there's this email from Daniel Berrangé from back in Aug 2012:
 http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
 
 Block migration is a part of the KVM that none of the upstream developers
 really like, is not entirely reliable, and most distros typically do not
 want to support it due to its poor design (eg not supported in RHEL).
 
 It is quite likely that it will be removed in favour of an alternative
 implementation. What that alternative impl will be, and when I will
 arrive, I can't say right now.
 
 Based on this info, the OpenStack Ops guide currently recommends against
 using block-based live migration, but the Compute Admin guide has no
 warnings about this.
 
 I wanted to sanity-check against the mailing list to verify that this was
 still the case. What's the state of block-based live migration with KVM?
 Should we say be dissuading people from using it, or is it reasonable for
 people to use it?

What I wrote above about the existing impl is still accurate. The new
block migration code is now merged into libvirt and makes use of an
NBD server built in to the QEMU process to do block migration. API-wise
it should actually work in the same way as the existing deprecated
block migration code.  So if you have a new enough libvirt and a new enough
KVM, it probably ought to 'just work' with openstack without needing
any code changes in nova. I have not actually tested this myself
though.

So we can probably update the docs - but we'd want to check out just
what precise versions of libvirt + qemu are needed, and have someone
check that it does in fact work.
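
For anyone who wants to verify this, a rough sketch of the checks on a compute 
node (the exact minimum versions are what still needs confirming):

# userspace versions
libvirtd --version
kvm --version          # or qemu-system-x86_64 --version, depending on packaging
virsh version
# then try a block-based live migration of a throwaway instance
nova live-migration --block-migrate <instance-id> <target-host>
# (older novaclients spell the flag --block_migrate)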

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] probelm with nova-network

2013-04-24 Thread Arindam Choudhury
Hi,

I'm having a problem with the nova-network service. 
Though
[(keystone_user)]$ nova list
+--+++---+
| ID   | Name   | Status | Networks 
 |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | 
private=192.168.100.2 |
+--+++---+

says that the VM is active and has the IP 192.168.100.2, it doesn't actually get it.

The console log says:

Starting network... 
udhcpc (v1.18.5) started 
Sending discover... 
Sending discover... 
Sending discover... 
No lease, failing 
WARN: /etc/rc3.d/S40network failed 
cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id 
wget: can't connect to remote host (169.254.169.254): Network is unreachable 
cloud-setup: failed 1/30: up 10.06. request failed.

the whole console log is here: https://gist.github.com/arindamchoudhury/5452385
my nova.conf is here: https://gist.github.com/arindamchoudhury/5452410

[(keystone_user)]$ nova network-list 
++-+--+
| ID | Label   | Cidr |
++-+--+
| 1  | private | 192.168.100.0/24 |
++-+--+
[(keystone_user)]$ nova secgroup-list
+-+-+
| Name| Description |
+-+-+
| default | default |
+-+-+
[(keystone_user)]$ nova secgroup-list-rules default
+-+---+-+---+--+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-+---+-+---+--+
| icmp| -1| -1  | 0.0.0.0/0 |  |
| tcp | 22| 22  | 0.0.0.0/0 |  |
+-+---+-+---+--+

  ___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury

Hi,

So I added that rule:
iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

but still the same problem.

There is another thing:
# nova-manage service list
Binary   Host Zone Status   
  State Updated_At
nova-network aopcach  internal enabled  
  :-)   2013-04-24 16:07:37
nova-certaopcach  internal enabled  
  :-)   2013-04-24 16:07:36
nova-conductor   aopcach  internal enabled  
  :-)   2013-04-24 16:07:36
nova-consoleauth aopcach  internal enabled  
  :-)   2013-04-24 16:07:36
nova-scheduler   aopcach  internal enabled  
  :-)   2013-04-24 16:07:36
nova-network aopcso1  internal enabled  
  :-)   2013-04-24 16:07:36
nova-compute aopcso1  nova enabled  
  :-)   2013-04-24 16:07:37

shows all the hosts and services. But the dashboard only shows the services 
running on aopcach.

screenshot: http://imgur.com/ED9nbxU






Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Razique Mahroua
Thanks for the clarification Daniel
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] ANNOUNCE: Ultimate OpenStack Grizzly Guide, with super easy Quantum!

2013-04-24 Thread Martinx - ジェームズ
Hi!

The `Ultimate OpenStack Grizzly Guide' has been updated a bit more!

There are two new scripts, keystone_basic.sh and
keystone_endpoints_basic.sh, with preliminary support for Swift and
Ceilometer.

Check it out! https://gist.github.com/tmartinx/d36536b7b62a48f859c2

Best!
Thiago



On 20 March 2013 19:51, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Hi!

  I'm working with Grizzly G3+RC1 on top of Ubuntu 12.04.2 and here is the
 guide I wrote:

  Ultimate OpenStack Grizzly Guide:
  https://gist.github.com/tmartinx/d36536b7b62a48f859c2

  It covers:

  * Ubuntu 12.04.2
  * Basic Ubuntu setup
  * KVM
  * OpenvSwitch
  * Name Resolution for OpenStack components;
  * LVM for Instances
  * Keystone
  * Glance
  * Quantum - Single Flat, Super Green!!
  * Nova
  * Cinder / tgt
  * Dashboard

  It is still a draft but, every time I deploy Ubuntu and Grizzly, I follow
 this little guide...

  I would like some help to improve this guide... If I'm doing something
 wrong, tell me! Please!

  Probably I'm doing something wrong, I don't know yet, but I'm seeing some
 errors in the logs, already reported here on this list. For example:
 nova-novncproxy conflicts with novnc (no VNC console for now), and
 dhcp-agent.log / auth.log point to some problems with `sudo' or the
 `rootwrap' subsystem when dealing with metadata (so it isn't working)...

  But in general, it works great!!

 Best!
 Thiago

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Jay S Bryant
Can you provide the output of 'ifconfig' on the hosting node?  Also 'ps 
aux | grep dnsmasq' .



Jay S. Bryant
Linux Developer - 
OpenStack Enterprise Edition
   
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   Arindam Choudhury arin...@live.com
To: Jay S Bryant/Rochester/IBM@IBMUS, openstack 
openstack@lists.launchpad.net, 
Date:   04/24/2013 11:16 AM
Subject:RE: [Openstack] problem with metadata and ping




Hi,

So I added that rule:
iptables -I INPUT -i tap+ -p udp --dport 67:68 --sport 67:68 -j ACCEPT

but still the same problem.

There is another thing:
# nova-manage service list
Binary   Host Zone Status State 
Updated_At
nova-network aopcach  internal enabled :-) 
  2013-04-24 16:07:37
nova-certaopcach  internal enabled :-) 
  2013-04-24 16:07:36
nova-conductor   aopcach  internal enabled :-) 
  2013-04-24 16:07:36
nova-consoleauth aopcach  internal enabled :-) 
  2013-04-24 16:07:36
nova-scheduler   aopcach  internal enabled :-) 
  2013-04-24 16:07:36
nova-network aopcso1  internal enabled :-) 
  2013-04-24 16:07:36
nova-compute aopcso1  nova enabled:-)  
2013-04-24 16:07:37

shows all the host and services. But in dashboard it only shows the 
services running in aopcach.

screenshot: http://imgur.com/ED9nbxU



To: arin...@live.com
CC: openstack@lists.launchpad.net
Subject: RE: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:55:12 -0500

Arindam, 

Ooops, I had a typo.   The command should have been:  iptables -I input -i 
tap+ -p udp -dport 67:68 --sport 67:68 -j ACCEPT

You need the iptables configuration on the system where dnsmasq is 
running.  It shouldn't be necessary in the compute nodes that are being 
booted. 


Jay S. Bryant
Linux Developer - 
   OpenStack Enterprise Edition
  
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
  -- Sean O'Casey
 



From:Arindam Choudhury arin...@live.com 
To:Jay S Bryant/Rochester/IBM@IBMUS, openstack 
openstack@lists.launchpad.net, 
Date:04/24/2013 10:47 AM 
Subject:RE: [Openstack] problem with metadata and ping 



Hi,

Thanks for your reply.

The dnsmasq is running properly.

when I tried to run iptables -I input -i tap+ -p udp 67:68 --sport 67:68 
-j ACCEPT 
it says, 
#  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j ACCEPT
Bad argument `67:68'

Do I have to do this iptables configuration in controller or in compute 
nodes also.

To: arin...@live.com
Subject: Re: [Openstack] problem with metadata and ping
From: jsbry...@us.ibm.com
Date: Wed, 24 Apr 2013 10:17:41 -0500

Arindam, 

I saw a similar problem with quantum.  If you have iptables running on the 
hosting system you may need to update the rules to allow the DHCP Discover 
packet through:  iptables -I input -i tap+ -p udp 67:68 --sport 67:68 -j 
ACCEPT 

Also ensure that dnsmasq is running properly. 



Jay S. Bryant
Linux Developer - 
  OpenStack Enterprise Edition
 
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

All the world's a stage and most of us are desperately unrehearsed.
 -- Sean O'Casey
 



From:Arindam Choudhury arin...@live.com 
To:openstack openstack@lists.launchpad.net, 
Date:04/24/2013 10:12 AM 
Subject:Re: [Openstack] problem with metadata and ping 
Sent by:Openstack 
openstack-bounces+jsbryant=us.ibm@lists.launchpad.net 




hi,

I was misled by this:

[(keystone_user)]$ nova list
+--+++---+
| ID   | Name   | Status | Networks  |
+--+++---+
| 122ceb44-0b2d-442f-bb4b-c5a8cdbcb757 | cirros | ACTIVE | 
private=192.168.100.2 |

Re: [Openstack] problem with metadata and ping

2013-04-24 Thread Arindam Choudhury

Hi,

Output from the controller node:
root@aopcach:~# ifconfig
eth0  Link encap:Ethernet  HWaddr 1c:c1:de:65:6f:ee  
  inet addr:158.109.65.21  Bcast:158.109.79.255  Mask:255.255.240.0
  inet6 addr: fe80::1ec1:deff:fe65:6fee/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:111595 errors:0 dropped:0 overruns:0 frame:0
  TX packets:10941 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:18253579 (17.4 MiB)  TX bytes:1747833 (1.6 MiB)
  Interrupt:19 Memory:f300-f302 

loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:45623 errors:0 dropped:0 overruns:0 frame:0
  TX packets:45623 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:9862970 (9.4 MiB)  TX bytes:9862970 (9.4 MiB)

root@aopcach:~# ps aux | grep dnsmasq
root  6355  0.0  0.0   7828   880 pts/0S+   19:23   0:00 grep dnsmasq


output from the compute node:

root@aopcso1:~# ifconfig
br100 Link encap:Ethernet  HWaddr 38:60:77:0d:31:87  
  inet addr:192.168.100.1  Bcast:192.168.100.255  Mask:255.255.255.0
  inet6 addr: fe80::3a60:77ff:fe0d:3187/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:689504 errors:0 dropped:4251 overruns:0 frame:0
  TX packets:76508 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:89086934 (84.9 MiB)  TX bytes:22405496 (21.3 MiB)

eth0  Link encap:Ethernet  HWaddr 38:60:77:0d:31:87  
  inet6 addr: fe80::3a60:77ff:fe0d:3187/64 Scope:Link
  UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
  RX packets:16329913 errors:0 dropped:0 overruns:0 frame:0
  TX packets:927591 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000 
  RX bytes:2596968421 (2.4 GiB)  TX bytes:448796862 (428.0 MiB)
  Interrupt:20 Memory:f710-f712 

loLink encap:Local Loopback  
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:313717 errors:0 dropped:0 overruns:0 frame:0
  TX packets:313717 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0 
  RX bytes:16434128 (15.6 MiB)  TX bytes:16434128 (15.6 MiB)

root@aopcso1:~# ps aux | grep dnsmasq
nobody   12485  0.0  0.0  25124  1104 ?S12:05   0:00 
/usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= 
--domain='novalocal' --pid-file=/var/lib/nova/networks/nova-br100.pid 
--listen-address=192.168.100.1 --except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 12486  0.0  0.0  25124   472 ?S12:05   0:00 
/usr/sbin/dnsmasq --strict-order --bind-interfaces --conf-file= 
--domain='novalocal' --pid-file=/var/lib/nova/networks/nova-br100.pid 
--listen-address=192.168.100.1 --except-interface=lo 
--dhcp-range=set:private,192.168.100.2,static,255.255.255.0,120s 
--dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf 
--dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
root 32616  0.0  0.0   7848   876 pts/0S+   19:27   0:00 grep dnsmasq
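
One way to see whether the VM's DHCP Discover actually reaches this dnsmasq is 
to watch the bridge while the instance boots (a sketch; run on the compute node, 
tap device name comes from 'virsh dumpxml <instance>'):

tcpdump -n -e -i br100 port 67 or port 68
# if nothing shows up on br100, watch the instance's tap device directly
tcpdump -n -e -i <tap-device> port 67 or port 68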




Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
Can you show us a quantum subnet-show for the subnet your VM has an IP on?
Is it possible that you added a host_route to the subnet for 169.254/16?

Or could you try this image:
http://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img
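
For reference, a sketch of what to look at (subnet ID from 'quantum subnet-list'; 
the interesting field is host_routes):

quantum subnet-show <subnet-id>
# a host_routes entry like {"destination": "169.254.0.0/16", "nexthop": ...}
# would be pushed to instances via DHCP option 121; depending on the client
# version it may be possible to clear it with something like:
quantum subnet-update <subnet-id> --host_routes action=clear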


On Wed, Apr 24, 2013 at 1:06 AM, Balamurugan V G balamuruga...@gmail.comwrote:

 I booted an Ubuntu image in which I had made sure that there was no
 pre-existing route for 169.254.0.0/16. But it's getting the route from DHCP
 once it boots up. So it's the DHCP server which is sending this route to
 the VM.

 Regards,
 Balu


 On Wed, Apr 24, 2013 at 12:47 PM, Balamurugan V G balamuruga...@gmail.com
  wrote:

 Hi Salvatore,

 Thanks for the response. I do not have enable_isolated_metadata_proxy
 anywhere under /etc/quantum and /etc/nova. The closest I see is
 'enable_isolated_metadata' in /etc/quantum/dhcp_agent.ini and even that is
 commented out. What do you mean by link-local address?

 Like you said, I suspect that the image has the route. This was a
 snapshot taken in a Folsom setup. So it's possible that Folsom injected
 this route and, when I took the snapshot, it became part of the snapshot. I
 then copied over this snapshot to a new Grizzly setup. Let me check the
 image and remove the route from it if it is there. Thanks for the hint
 again.

 Regards,
 Balu



 On Wed, Apr 24, 2013 at 12:38 PM, Salvatore Orlando 
 sorla...@nicira.comwrote:

 The dhcp agent will set a route to 169.254.0.0/16 if
 enable_isolated_metadata_proxy=True.
 In that case the dhcp port ip will be the nexthop for that route.

 Otherwise, it might be that your image has a 'builtin' route to such a
 cidr.
 What's your nexthop for the link-local address?

 Salvatore


 On 24 April 2013 08:00, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks for the hint Aaron. When I deleted the route for 169.254.0.0/16 from
 the VM's routing table, I could access the metadata service!

 The route for 169.254.0.0/16 is added automatically when the instance
 boots up, so I assume it's coming from the DHCP. Any idea how this can be
 suppressed?
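
 (A sketch of a guest-side workaround, assuming the route really is coming in
 via the DHCP reply; paths are Ubuntu 12.04 defaults:)

 # one-off: drop the offending route
 sudo route del -net 169.254.0.0 netmask 255.255.0.0
 # persistent: stop dhclient from requesting classless static routes by
 # removing 'rfc3442-classless-static-routes' from the 'request' list in
 # /etc/dhcp/dhclient.conf, then renew the lease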

 Strangely though, I do not see this route in a WindowsXP VM booted in
 the same network as the earlier Ubuntu VM, and the Windows VM can reach the
 metadata service without me doing anything. The issue is with the Ubuntu
 VM.

 Thanks,
 Balu



 On Wed, Apr 24, 2013 at 12:18 PM, Aaron Rosen aro...@nicira.comwrote:

 The VM should not have a routing table entry for 169.254.0.0/16; if
 it does, I'm not sure how it got there unless it was added by something
 other than DHCP. It seems like that is your problem, as the VM is ARPing
 directly for that address rather than the default gw.


 On Tue, Apr 23, 2013 at 11:34 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04
 host and even my guest(VM) is Ubuntu 12.04 but metadata not working. I 
 see
 that the VM's routing table has an entry for 169.254.0.0/16 but I
 cant ping 169.254.169.254 from the VM. I am using a single node setup 
 with
 two NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running.
 When I ping 169.254.169.254 from VM, in the host's router namespace, I 
 see
 the ARP request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric Ref
 Use Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  0
 0 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  0
 0 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  0
 0 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol
 decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture
 size 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Martinx - ジェームズ
Hi Balu!

Listen, is your metadata service up and running?!

If yes, which guide did you use?

I'm trying everything I can to enable metadata without L3 with a Quantum
Single Flat topology for my own guide:
https://gist.github.com/tmartinx/d36536b7b62a48f859c2

I really appreciate any feedback!

Tks!
Thiago


On 24 April 2013 03:34, Balamurugan V G balamuruga...@gmail.com wrote:

 Thanks Aaron.

 I am perhaps not configuring it right then. I am using Ubuntu 12.04 host
 and even my guest(VM) is Ubuntu 12.04 but metadata not working. I see that
 the VM's routing table has an entry for 169.254.0.0/16 but I cant ping
 169.254.169.254 from the VM. I am using a single node setup with two
 NICs.10.5.12.20 is the public IP, 10.5.3.230 is the management IP

 These are my metadata related configurations.

 */etc/nova/nova.conf *
 metadata_host = 10.5.12.20
 metadata_listen = 127.0.0.1
 metadata_listen_port = 8775
 metadata_manager=nova.api.manager.MetadataManager
 service_quantum_metadata_proxy = true
 quantum_metadata_proxy_shared_secret = metasecret123

 */etc/quantum/quantum.conf*
 allow_overlapping_ips = True

 */etc/quantum/l3_agent.ini*
 use_namespaces = True
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 metadata_ip = 10.5.12.20

 */etc/quantum/metadata_agent.ini*
 auth_url = http://10.5.3.230:35357/v2.0
 auth_region = RegionOne
 admin_tenant_name = service
 admin_user = quantum
 admin_password = service_pass
 nova_metadata_ip = 127.0.0.1
 nova_metadata_port = 8775
 metadata_proxy_shared_secret = metasecret123


 I see that /usr/bin/quantum-ns-metadata-proxy process is running. When I
 ping 169.254.169.254 from VM, in the host's router namespace, I see the ARP
 request but no response.

 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 route -n
 Kernel IP routing table
 Destination Gateway Genmask Flags Metric RefUse
 Iface
 0.0.0.0 10.5.12.1   0.0.0.0 UG0  00
 qg-193bb8ee-f5
 10.5.12.0   0.0.0.0 255.255.255.0   U 0  00
 qg-193bb8ee-f5
 192.168.2.0 0.0.0.0 255.255.255.0   U 0  00
 qr-59e69986-6e
 root@openstack-dev:~# ip netns exec
 qrouter-d9e87e85-8410-4398-9ddd-2dbc36f4b593 tcpdump -i qr-59e69986-6e
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on qr-59e69986-6e, link-type EN10MB (Ethernet), capture size
 65535 bytes
 ^C23:32:09.638289 ARP, Request who-has 192.168.2.3 tell 192.168.2.1,
 length 28
 23:32:09.650043 ARP, Reply 192.168.2.3 is-at fa:16:3e:4f:ad:df (oui
 Unknown), length 28
 23:32:15.768942 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:16.766896 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:17.766712 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28
 23:32:18.784195 ARP, Request who-has 169.254.169.254 tell 192.168.2.3,
 length 28

 6 packets captured
 6 packets received by filter
 0 packets dropped by kernel
 root@openstack-dev:~#


 Any help will be greatly appreciated.

 Thanks,
 Balu


 On Wed, Apr 24, 2013 at 11:48 AM, Aaron Rosen aro...@nicira.com wrote:

 Yup, If your host supports namespaces this can be done via the
 quantum-metadata-agent.  The following setting is also required in your
  nova.conf: service_quantum_metadata_proxy=True


 On Tue, Apr 23, 2013 at 10:44 PM, Balamurugan V G 
 balamuruga...@gmail.com wrote:

 Hi,

 In Grizzly, when using quantum and overlapping IPs, does metadata
 service work? This wasnt working in Folsom.

 Thanks,
 Balu

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-04-24 Thread Doug Hellmann
On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan riki.ars...@cloudturk.netwrote:

 Hi,

 We are trying to install ceilometer-2013.1~g2.tar.gz which presumably
 has Folsom compatibility.

 The requirement is python-keystoneclient>=0.2,<0.3 and we have
 version 0.2.3.

 But, still, setup quits with the following message:

 error: Installed distribution python-keystoneclient 0.2.3 conflicts with
 requirement python-keystoneclient>=0.1.2,<0.2

 The funny thing is, although pip-requires states
 python-keystoneclient>=0.2,<0.3, the error message complains that it is
 not python-keystoneclient>=0.1.2,<0.2.


Something else you have installed already wants an older version of the
keystone client, so the installation of ceilometer is not able to upgrade
to the version we need.
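
A rough way to find the package that is pinning the old client (a sketch, using 
setuptools metadata rather than any particular distro's packaging):

python -c "
import pkg_resources
for dist in pkg_resources.working_set:
    for req in dist.requires():
        if req.project_name == 'python-keystoneclient':
            print dist.project_name, dist.version, '->', str(req)
"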

Doug



 Your help is greatly appreciated.

 Thank you in advance.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-04-24 Thread Riki Arslan
Hi Doug,

Thank you for the reply. I have previously installed Ceilometer version
0.1. Do you think that could be the reason?

Thanks.


On Wed, Apr 24, 2013 at 11:49 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan riki.ars...@cloudturk.netwrote:

 Hi,

 We are trying to install ceilometer-2013.1~g2.tar.gz which presumably
 has Folsom compatibility.

 The requirment is python-keystoneclient=0.2,0.3 and we have
 the version 2.3.

 But, still, setup quits with the following message:

 error: Installed distribution python-keystoneclient 0.2.3 conflicts with
 requirement python-keystoneclient=0.1.2,0.2

 The funny thing is, although pip-requires states
 python-keystoneclient=0.2,0.3, the error message complains that it is
 not python-keystoneclient=0.1.2,0.2.


 Something else you have installed already wants an older version of the
 keystone client, so the installation of ceilometer is not able to upgrade
 to the version we need.

 Doug



 Your help is greatly appreciated.

 Thank you in advance.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Keystone Grizzly install

2013-04-24 Thread Viktor Viking
Community,

I am trying to install Keystone Grizzly following these instructions:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html

When I try to start the service (before db sync), I get the following error
message: Starting keystone... startproc: exit status of parent of
/usr/bin/keystone-all: 1 failed.

/var/log/keystone/keystone.log is not giving me any clue wrt what is wrong.

Could anyone let me know where to look?
Viktor
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone Grizzly install

2013-04-24 Thread Dolph Mathews
What happens when you run keystone-all directly?
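
For example (a sketch; paths, user and flags depend on how keystone was 
packaged):

sudo -u keystone /usr/bin/keystone-all --config-file /etc/keystone/keystone.conf --debug
# running it in the foreground like this usually prints the real traceback,
# which may never make it into /var/log/keystone/keystone.log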


-Dolph


On Wed, Apr 24, 2013 at 4:23 PM, Viktor Viking
viktor.viking...@gmail.comwrote:

 Community,

 I am trying to install Keystone Grizzly following these instructions:
 http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html

 When I try to start the service (before db sync), I get the following
 error message: Starting keytonestartproc: exit status of parent of
 /usr/bin/keystone-all  1 failed.

 /var/log/keystone/keystone.log is not giving me any clue wrt what is wrong.

 Could anyone let me know where to look?
 Viktor

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Ceilometer Install

2013-04-24 Thread Riki Arslan
Hi Doug,

Your email helped me. It was actually glanceclient version 0.5.1 that was
causing the conflict. After updating it, the conflict error disappeared.

I hope this would help someone else too.
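
For anyone hitting the same thing, the fix was roughly (a sketch; the versions 
are the ones from this thread, check what your other clients require):

pip install --upgrade python-glanceclient   # pick up a release that allows keystoneclient >= 0.2
# then re-run the ceilometer install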

Thanks again.


On Wed, Apr 24, 2013 at 11:49 PM, Doug Hellmann doug.hellm...@dreamhost.com
 wrote:




 On Wed, Apr 24, 2013 at 9:17 AM, Riki Arslan riki.ars...@cloudturk.netwrote:

 Hi,

 We are trying to install ceilometer-2013.1~g2.tar.gz which presumably
 has Folsom compatibility.

 The requirment is python-keystoneclient=0.2,0.3 and we have
 the version 2.3.

 But, still, setup quits with the following message:

 error: Installed distribution python-keystoneclient 0.2.3 conflicts with
 requirement python-keystoneclient=0.1.2,0.2

 The funny thing is, although pip-requires states
 python-keystoneclient=0.2,0.3, the error message complains that it is
 not python-keystoneclient=0.1.2,0.2.


 Something else you have installed already wants an older version of the
 keystone client, so the installation of ceilometer is not able to upgrade
 to the version we need.

 Doug



 Your help is greatly appreciated.

 Thank you in advance.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Should we discourage KVM block-based live migration?

2013-04-24 Thread Lorin Hochstein
On Wed, Apr 24, 2013 at 11:59 AM, Daniel P. Berrange d...@berrange.comwrote:

 On Wed, Apr 24, 2013 at 11:48:35AM -0400, Lorin Hochstein wrote:
  In the docs, we describe how to configure KVM block-based live migration,
  and it has the advantage of avoiding the need for shared storage of
  instances.
 
  However, there's this email from Daniel Berrangé from back in Aug 2012:
  http://osdir.com/ml/openstack-cloud-computing/2012-08/msg00293.html
 
  Block migration is a part of the KVM that none of the upstream
 developers
  really like, is not entirely reliable, and most distros typically do not
  want to support it due to its poor design (eg not supported in RHEL).
 
  It is quite likely that it will be removed in favour of an alternative
  implementation. What that alternative impl will be, and when I will
  arrive, I can't say right now.
 
  Based on this info, the OpenStack Ops guide currently recommends against
  using block-based live migration, but the Compute Admin guide has no
  warnings about this.
 
  I wanted to sanity-check against the mailing list to verify that this was
  still the case. What's the state of block-based live migration with KVM?
  Should we say be dissuading people from using it, or is it reasonable for
  people to use it?

 What I wrote above about the existing impl is still accurate. The new
 block migration code is now merged into libvirt and makes use of an
 NBD server built-in to the QMEU process todo block migration. API
 wise it should actually work in the same way as the existing deprecated
 block migration code.  So if you have new enough libvirt and new enough
 KVM, it probably ought to 'just work' with openstack without needing
 any code changes in nova. I have not actually tested this myself
 though.

 So we can probably update the docs - but we'd want to checkout just
 what precise versions of libvirt + qemu are needed, and have someone
 check that it does in fact work.


Thanks, Daniel. I can update the docs accordingly. How can I find out what
minimum versions of libvirt and qemu are needed?

Also, I noticed you said qemu and not kvm, and I see that
http://wiki.qemu.org/KVM says that qemu-kvm fork for x86 is deprecated,
use upstream QEMU now.  Is it the case now that when using KVM as the
hypervisor for a host, an admin will just install a qemu package instead
of a qemu-kvm package to get the userspace stuff?
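
(A quick way to check which userspace package a given host is actually using, as 
a sketch for Debian/Ubuntu and RHEL-family systems respectively:)

dpkg -S "$(readlink -f "$(which kvm)")"
rpm -qf /usr/libexec/qemu-kvm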

Lorin
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Keystone Grizzly install

2013-04-24 Thread Viktor Viking
Hi Dolph,

Now I got an exception. It seems like I am missing repoze.lru. I will
download and install it. I will let you know if it works.

Thank you,
Viktor



On Wed, Apr 24, 2013 at 11:25 PM, Dolph Mathews dolph.math...@gmail.comwrote:

 What happens when you run keystone-all directly?


 -Dolph


 On Wed, Apr 24, 2013 at 4:23 PM, Viktor Viking viktor.viking...@gmail.com
  wrote:

 Community,

 I am trying to install Keystone Grizzly following these instructions:
 http://docs.openstack.org/trunk/openstack-compute/install/yum/content/install-keystone.html

 When I try to start the service (before db sync), I get the following
 error message: Starting keytonestartproc: exit status of parent of
 /usr/bin/keystone-all  1 failed.

 /var/log/keystone/keystone.log is not giving me any clue wrt what is
 wrong.

 Could anyone let me know where to look?
 Viktor

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Balamurugan V G
Hi Wanpan,

While I am able to inject files into WindowsXP, CentOS 5.9 and
Ubuntu 12.04, I am unable to do it for a Windows 8 Enterprise OS. I did
search the entire drive for the file I injected but couldn't find it.
Below is the log from nova-compute.log.


2013-04-24 01:41:27.973 AUDIT nova.compute.manager
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance...
2013-04-24 01:41:28.170 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim:
memory 1024 MB, disk 10 GB, VCPUs 1
2013-04-24 01:41:28.171 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953
MB, used: 2048 MB
2013-04-24 01:41:28.171 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929
MB, free: 3881 MB
2013-04-24 01:41:28.172 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB,
used: 15 GB
2013-04-24 01:41:28.172 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not
specified, defaulting to unlimited
2013-04-24 01:41:28.173 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs,
used: 2 VCPUs
2013-04-24 01:41:28.173 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not
specified, defaulting to unlimited
2013-04-24 01:41:28.174 AUDIT nova.compute.claims
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful
2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image
2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into
image 65eaa160-d0e7-403e-a52c-90bea3c22cf7
2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called
setup_basic_filtering in nwfilter
2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static
filters
2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle
event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648
2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-]
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned
successfully.
2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance:
aa46445e-1f86-4a5a-8002-a7703ff98648] During sync_power_state the
instance has a pending task. Skip.
2013-04-24 01:41:59.494 13316 AUDIT nova.compute.resource_tracker [-]
Auditing locally available compute resources
2013-04-24 01:42:00.345 13316 AUDIT nova.compute.resource_tracker [-]
Free ram (MB): 881
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-]
Free disk (GB): 200
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-]
Free VCPUS: -1
2013-04-24 01:42:00.509 13316 INFO nova.compute.resource_tracker [-]
Compute_service record updated for openstack-dev:openstack-dev.com
2013-04-24 01:42:00.514 13316 INFO nova.compute.manager [-] Updating
bandwidth usage cache
2013-04-24 01:43:06.442 13316 AUDIT nova.compute.resource_tracker [-]
Auditing locally available compute resources
2013-04-24 01:43:07.041 13316 AUDIT nova.compute.resource_tracker [-]
Free ram (MB): 881
2013-04-24 01:43:07.042 13316 AUDIT nova.compute.resource_tracker [-]
Free disk (GB): 200
2013-04-24 01:43:07.042 13316 AUDIT nova.compute.resource_tracker [-]
Free VCPUS: -1
2013-04-24 01:43:07.266 13316 INFO nova.compute.resource_tracker [-]
Compute_service record updated for 

Re: [Openstack] [OpenStack] Files Injection in to Windows VMs

2013-04-24 Thread Wangpan
Have you opened and checked the 'system reserved partition'? See the reference 
below:
http://www.techfeb.com/how-to-open-windows-7-hidden-system-reserved-partition/
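
For what it's worth, a sketch of the two usual ways to hand a file to the guest 
at boot (flags as in the Grizzly-era nova client; file names here are made up):

# plain file injection, as in the log above; on Windows 8 the injection code may
# end up mounting the hidden System Reserved partition rather than C:
nova boot --image <win8-image> --flavor <flavor> \
  --file /injected.txt=/path/on/host/injected.txt win8-test
# alternative: expose the same file on a config drive (a CD-ROM inside the guest)
# instead of writing into the image
nova boot --image <win8-image> --flavor <flavor> --config-drive=true \
  --file /injected.txt=/path/on/host/injected.txt win8-test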

2013-04-25



Wangpan



From: Balamurugan V G
Sent: 2013-04-25 12:34
Subject: Re: [Openstack] [OpenStack] Files Injection in to Windows VMs
To: Wangpan hzwang...@corp.netease.com
Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net

Hi Wanpan, 

While I am able to inject files in to WindowsXP, CentOS5.9 and 
Ubuntu12.04. I am unable to do it for Windows8Entrprise OS. I did 
search the entire drive for the file I injected but couldnt file. 
Below is the log from nova-compute.log. 


2013-04-24 01:41:27.973 AUDIT nova.compute.manager 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Starting instance... 
2013-04-24 01:41:28.170 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Attempting claim: 
memory 1024 MB, disk 10 GB, VCPUs 1 
2013-04-24 01:41:28.171 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Memory: 3953 
MB, used: 2048 MB 
2013-04-24 01:41:28.171 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Memory limit: 5929 
MB, free: 3881 MB 
2013-04-24 01:41:28.172 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total Disk: 225 GB, 
used: 15 GB 
2013-04-24 01:41:28.172 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Disk limit not 
specified, defaulting to unlimited 
2013-04-24 01:41:28.173 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Total CPU: 2 VCPUs, 
used: 2 VCPUs 
2013-04-24 01:41:28.173 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] CPU limit not 
specified, defaulting to unlimited 
2013-04-24 01:41:28.174 AUDIT nova.compute.claims 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Claim successful 
2013-04-24 01:41:33.998 INFO nova.virt.libvirt.driver 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Creating image 
2013-04-24 01:41:34.281 INFO nova.virt.libvirt.driver 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Injecting files into 
image 65eaa160-d0e7-403e-a52c-90bea3c22cf7 
2013-04-24 01:41:36.534 INFO nova.virt.libvirt.firewall 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Called 
setup_basic_filtering in nwfilter 
2013-04-24 01:41:36.535 INFO nova.virt.libvirt.firewall 
[req-6b571df0-9608-4bc5-93a7-afb3a2f17ba5 
117e0142ab40418eafc56955f0ab2ba3 7a416e3eaa814734bda41ffca7c2d01e] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Ensuring static 
filters 
2013-04-24 01:41:38.555 13316 INFO nova.compute.manager [-] Lifecycle 
event 0 on VM aa46445e-1f86-4a5a-8002-a7703ff98648 
2013-04-24 01:41:38.763 13316 INFO nova.virt.libvirt.driver [-] 
[instance: aa46445e-1f86-4a5a-8002-a7703ff98648] Instance spawned 
successfully. 
2013-04-24 01:41:38.996 13316 INFO nova.compute.manager [-] [instance: 
aa46445e-1f86-4a5a-8002-a7703ff98648] During sync_power_state the 
instance has a pending task. Skip. 
2013-04-24 01:41:59.494 13316 AUDIT nova.compute.resource_tracker [-] 
Auditing locally available compute resources 
2013-04-24 01:42:00.345 13316 AUDIT nova.compute.resource_tracker [-] 
Free ram (MB): 881 
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-] 
Free disk (GB): 200 
2013-04-24 01:42:00.346 13316 AUDIT nova.compute.resource_tracker [-] 
Free VCPUS: -1 
2013-04-24 01:42:00.509 13316 INFO nova.compute.resource_tracker [-] 
Compute_service record updated for openstack-dev:openstack-dev.com 
2013-04-24 01:42:00.514 13316 INFO nova.compute.manager [-] Updating 
bandwidth usage cache 
2013-04-24 01:43:06.442 

[Openstack] Call for speakers for the 2nd OpenStack User Group Nordics meetup in Stockholm, Sweden

2013-04-24 Thread Nicolae Paladi
Hi,

Following the positive feedback after the 1st OpenStack User Group Nordics
(OSUGN) meetup in Stockholm
(http://www.meetup.com/OpenStack-User-Group-Nordics/events/95258382/),
we thought it's time to schedule the next meetup!

This is a call for speakers for the 2nd OSUGN meetup in Stockholm
(http://www.meetup.com/OpenStack-User-Group-Nordics/events/112862882/),
scheduled for Wednesday, September 11, 2013.

The focus is on technical talks about on-going projects, OpenStack
deployment war stories, projects in incubation, etc. Security-related
topics get bonus points.

Cheers,
/Nicolae.
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Balamurugan V G
Hi Aaron,

I tried the image you pointed to and it worked fine out of the box. That is, it
did not get the route to 169.254.0.0/16 on boot and I am able to retrieve
info from the metadata service. The image I was using earlier is an Ubuntu 12.04
LTS desktop image. What do you think could be wrong with my image? It's
almost the vanilla Ubuntu image; I have not installed much on it.

Here are the quantum details you asked for, and more. This was taken before I
tried the image you pointed to. And by the way, I have not added any host
route either.

root@openstack-dev:~# quantum router-list
+--+-++
| id   | name| external_gateway_info
   |
+--+-++
| d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {network_id:
e8862e1c-0233-481f-b284-b027039feef7} |
+--+-++
root@openstack-dev:~# quantum net-list
+--+-+-+
| id   | name| subnets
|
+--+-+-+
| c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1|
ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
| e8862e1c-0233-481f-b284-b027039feef7 | ext_net |
783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |
+--+-+-+
root@openstack-dev:~# quantum subnet-list
+--+--++--+
| id   | name | cidr   |
allocation_pools |
+--+--++--+
| 783e6a47-d7e0-46ba-9c2a-55a92406b23b |  | 10.5.12.20/24  | {start:
10.5.12.21, end: 10.5.12.25} |
| ecdfe002-658e-4174-a33c-934ba09179b7 |  | 192.168.2.0/24 | {start:
192.168.2.2, end: 192.168.2.254} |
+--+--++--+
root@openstack-dev:~# quantum port-list
+--+--+---++
| id   | name | mac_address   |
fixed_ips
   |
+--+--+---++
| 193bb8ee-f50d-4b1f-87ae-e033c1730953 |  | fa:16:3e:91:3d:c0 |
{subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
10.5.12.21}  |
| 19bce882-c746-497b-b401-dedf5ab605b2 |  | fa:16:3e:97:89:f6 |
{subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
10.5.12.23}  |
| 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |  | fa:16:3e:45:58:03 |
{subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
192.168.2.2} |
| 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |  | fa:16:3e:83:a7:e4 |
{subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
192.168.2.3} |
| 59e69986-6e8a-4f1e-a754-a1d421cdebde |  | fa:16:3e:91:ee:76 |
{subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
192.168.2.1} |
| 65167653-f6ff-438b-b465-f5dcc8974549 |  | fa:16:3e:a7:77:0b |
{subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
10.5.12.24}  |
+--+--+---++
root@openstack-dev:~# quantum floatingip-list
+--+--+-+--+
| id   | fixed_ip_address |
floating_ip_address | port_id  |
+--+--+-+--+
| 1a5dfbf3-0986-461d-854e-f4f8ebb58f8d | 192.168.2.3  | 10.5.12.23
 | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |
| f9d6e7f4-b251-4a2d-9310-532d8ee376f6 |  | 10.5.12.24
 |  |
+--+--+-+--+
root@openstack-dev:~# quantum subnet-show
ecdfe002-658e-4174-a33c-934ba09179b7
+--+--+
| Field| Value|
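
For readers following along, a quick way to see why that link-local route
matters is to check, from inside the guest, which route a metadata request
would actually use and whether the proxy answers. This is only a sketch: it
assumes a Linux guest with iproute2 and curl installed and the EC2-style
metadata path; none of it is taken from the thread.

# Which route would a packet to the metadata IP take? If a 169.254.0.0/16
# scope-link route is present, the request goes straight out the interface
# and never reaches the quantum router namespace where
# quantum-ns-metadata-proxy is listening.
ip route get 169.254.169.254

# If the lookup resolves via the default gateway (the quantum router),
# the proxy should answer:
curl -s http://169.254.169.254/latest/meta-data/instance-id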

Re: [Openstack] [OpenStack] Grizzly: Does metadata service work when overlapping IPs is enabled

2013-04-24 Thread Aaron Rosen
I'm not sure, but if it works fine with the Ubuntu cloud image and not with
your Ubuntu image, then there is something in your image adding that route.
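
A rough way to confirm that inside the desktop image is sketched below. This
is an assumption, not a verified fix: the avahi/zeroconf ifup hook and the
eth0 interface name are guesses about this particular image.

# Look for whatever installs the zeroconf route at ifup time; on Ubuntu
# desktop installs this is often the avahi-autoipd hook.
grep -rl "169.254" /etc/network/if-up.d/ /etc/network/interfaces 2>/dev/null

# Drop the route on the running instance and re-test the metadata service.
ip route del 169.254.0.0/16 dev eth0
curl -s http://169.254.169.254/latest/meta-data/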


On Wed, Apr 24, 2013 at 10:06 PM, Balamurugan V G
balamuruga...@gmail.com wrote:

 Hi Aaron,

 I tried the image you pointed to and it worked fine out of the box. That is,
 it did not get the route to 169.254.0.0/16 on boot, and I am able to
 retrieve info from the metadata service. The image I was using earlier is an
 Ubuntu 12.04 LTS desktop image. What do you think could be wrong with my
 image? It's almost the vanilla Ubuntu image; I have not installed much on
 it.

 Here are the quantum details you asked for, and more. This was taken before I
 tried the image you pointed to. And by the way, I have not added any host
 routes either.

 root@openstack-dev:~# quantum router-list

 +--+-++
 | id   | name| external_gateway_info
|

 +--+-++
 | d9e87e85-8410-4398-9ddd-2dbc36f4b593 | router1 | {network_id:
 e8862e1c-0233-481f-b284-b027039feef7} |

 +--+-++
 root@openstack-dev:~# quantum net-list

 +--+-+-+
 | id   | name| subnets
 |

 +--+-+-+
 | c4a7475e-e33f-47d0-a6ff-d0cf50c012d7 | net1|
 ecdfe002-658e-4174-a33c-934ba09179b7 192.168.2.0/24 |
 | e8862e1c-0233-481f-b284-b027039feef7 | ext_net |
 783e6a47-d7e0-46ba-9c2a-55a92406b23b 10.5.12.20/24  |

 +--+-+-+
 root@openstack-dev:~# quantum subnet-list

 +--+--++--+
 | id   | name | cidr   |
 allocation_pools |

 +--+--++--+
 | 783e6a47-d7e0-46ba-9c2a-55a92406b23b |  | 10.5.12.20/24  |
 {start: 10.5.12.21, end: 10.5.12.25} |
 | ecdfe002-658e-4174-a33c-934ba09179b7 |  | 192.168.2.0/24 |
 {start: 192.168.2.2, end: 192.168.2.254} |

 +--+--++--+
 root@openstack-dev:~# quantum port-list

 +--+--+---++
 | id   | name | mac_address   |
 fixed_ips
|

 +--+--+---++
 | 193bb8ee-f50d-4b1f-87ae-e033c1730953 |  | fa:16:3e:91:3d:c0 |
 {subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
 10.5.12.21}  |
 | 19bce882-c746-497b-b401-dedf5ab605b2 |  | fa:16:3e:97:89:f6 |
 {subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
 10.5.12.23}  |
 | 41ab9b15-ddc9-4a00-9a34-2e3f14e7e92f |  | fa:16:3e:45:58:03 |
 {subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
 192.168.2.2} |
 | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |  | fa:16:3e:83:a7:e4 |
 {subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
 192.168.2.3} |
 | 59e69986-6e8a-4f1e-a754-a1d421cdebde |  | fa:16:3e:91:ee:76 |
 {subnet_id: ecdfe002-658e-4174-a33c-934ba09179b7, ip_address:
 192.168.2.1} |
 | 65167653-f6ff-438b-b465-f5dcc8974549 |  | fa:16:3e:a7:77:0b |
 {subnet_id: 783e6a47-d7e0-46ba-9c2a-55a92406b23b, ip_address:
 10.5.12.24}  |

 +--+--+---++
 root@openstack-dev:~# quantum floatingip-list

 +--+--+-+--+
 | id   | fixed_ip_address |
 floating_ip_address | port_id  |

 +--+--+-+--+
 | 1a5dfbf3-0986-461d-854e-f4f8ebb58f8d | 192.168.2.3  | 10.5.12.23
  | 4dbc3c55-5763-4cfa-a7c1-81b254693e87 |
 | f9d6e7f4-b251-4a2d-9310-532d8ee376f6 |  | 10.5.12.24
  |  |

 

[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #26

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_glance_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/26/Project:precise_havana_glance_trunkDate of build:Wed, 24 Apr 2013 06:01:36 -0400Build duration:3 min 3 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesCall monkey_patch before other modules are loadedby flaper87editbin/glance-registryeditbin/glance-apieditglance/common/wsgi.pyeditglance/tests/unit/test_wsgi.pyConsole Output[...truncated 2673 lines...]Version: 1:2013.2+git201304240601~precise-0ubuntu1Finished at 20130424-0604Build needed 00:01:42, 12072k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304240601~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304240601~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/glance/grizzly /tmp/tmpzezUsW/glancemk-build-deps -i -r -t apt-get -y /tmp/tmpzezUsW/glance/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log e6e02cd147b3b22dc39344df48d3e40abf024240..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304240601~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [6335fdb] Eliminate the race when selecting a port for tests.dch -a [7d341de] Raise 404 while deleting a deleted imagedch -a [459e3e6] Sync with oslo-incubator copy of setup.py and version.pydch -a [6780571] Fix Qpid test casesdch -a [cd00848] Fix the deletion of a pending_delete image.dch -a [1e49329] Fix functional test 'test_scrubber_with_metadata_enc'dch -a [1c5a4d2] Call monkey_patch before other modules are loadeddch -a [6eaf42a] Improve unit tests for glance.api.middleware.cache moduledch -a [ae0f904] Add GridFS storedch -a [28b1129] Verify SSL certificates at boot timedch -a [b1ac90f] Add a policy handler to control copy-from functionalitydch -a [7155134] Add unit tests for glance.api.cached_images moduledebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC glance_2013.2+git201304240601~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A glance_2013.2+git201304240601~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304240601~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304240601~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_glance_trunk #27

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_glance_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_glance_trunk/27/Project:precise_havana_glance_trunkDate of build:Wed, 24 Apr 2013 12:01:36 -0400Build duration:2 min 54 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesFixes for mis-use of various exceptionsby john.lenihaneditglance/store/gridfs.pyeditglance/common/auth.pyeditglance/registry/__init__.pyeditglance/store/location.pyConsole Output[...truncated 2645 lines...]Finished at 20130424-1204Build needed 00:01:32, 12072k disc spaceERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304241201~precise-0ubuntu1.dsc']' returned non-zero exit status 2ERROR:root:Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304241201~precise-0ubuntu1.dsc']' returned non-zero exit status 2INFO:root:Complete command log:INFO:root:Destroying schroot.bzr branch lp:~openstack-ubuntu-testing/glance/grizzly /tmp/tmpab0izL/glancemk-build-deps -i -r -t apt-get -y /tmp/tmpab0izL/glance/debian/controlpython setup.py sdistgit log -n1 --no-merges --pretty=format:%Hgit log e6e02cd147b3b22dc39344df48d3e40abf024240..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304241201~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [d3c5a6c] Fixes for mis-use of various exceptionsdch -a [6335fdb] Eliminate the race when selecting a port for tests.dch -a [7d341de] Raise 404 while deleting a deleted imagedch -a [459e3e6] Sync with oslo-incubator copy of setup.py and version.pydch -a [6780571] Fix Qpid test casesdch -a [cd00848] Fix the deletion of a pending_delete image.dch -a [1e49329] Fix functional test 'test_scrubber_with_metadata_enc'dch -a [1c5a4d2] Call monkey_patch before other modules are loadeddch -a [6eaf42a] Improve unit tests for glance.api.middleware.cache moduledch -a [ae0f904] Add GridFS storedch -a [28b1129] Verify SSL certificates at boot timedch -a [b1ac90f] Add a policy handler to control copy-from functionalitydch -a [7155134] Add unit tests for glance.api.cached_images moduledebcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC glance_2013.2+git201304241201~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A glance_2013.2+git201304241201~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304241201~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'glance_2013.2+git201304241201~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #32

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/32/Project:precise_havana_keystone_trunkDate of build:Wed, 24 Apr 2013 13:01:38 -0400Build duration:2 min 30 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesRemove new constraint from migration downgrade.by jlennoxeditkeystone/common/sql/migrate_repo/versions/020_migrate_metadata_table_roles.pyConsole Output[...truncated 2550 lines...]dch -a [335470d] Removed unused importsdch -a [9f7b370] Remove non-production middleware from sample pipelinesdch -a [fccfa39] Fixed logging usage instead of LOGdch -a [2eab5fd] Remove new constraint from migration downgrade.dch -a [8c67341] Sync with oslo-incubator copy of setup.pydch -a [9b9a3d5] Set empty element to ""dch -a [78dcfc6] Fixed unicode username user creation errordch -a [a62d3af] Fix token ids for memcacheddch -a [61629c3] Use is_enabled() in folsom->grizzly upgrade (bug 1167421)dch -a [28ef9cd] Generate HTTPS certificates with ssl_setup.dch -a [cbac771] Fix for configuring non-default auth plugins properlydch -a [23bd9fa] test duplicate namedch -a [e4ec12e] Add TLS Support for LDAPdch -a [97d5624] fix undefined variabledch -a [6f4096b] clean up invalid variable referencedch -a [f846e28] Clean up duplicate methodsdch -a [3f296e0] don't migrate as oftendch -a [5c217fd] use the openstack test runnerdch -a [b033538] Fix 401 status responsedch -a [a65f737] Add missing colon for documentation build steps.dch -a [9467a66] close db migration sessiondch -a [b94f62a] Use string for port in default endpoints (bug 1160573)dch -a [1121b8d] bug 1159888 broken links in rst docdch -a [6f88699] Remove un-needed LimitingReader read() function.dch -a [e16742b] residual grants after delete action (bug1125637)dch -a [0b4ee31] catch errors in wsgi.Middleware.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.2+git201304241301~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A keystone_2013.2+git201304241301~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304241301~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304241301~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #33

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/33/Project:precise_havana_keystone_trunkDate of build:Wed, 24 Apr 2013 13:31:37 -0400Build duration:2 min 17 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesMake migration tests postgres  mysql friendly.by jlennoxeditkeystone/common/sql/migrate_repo/versions/009_normalize_identity.pyedittests/test_sql_upgrade.pyConsole Output[...truncated 2553 lines...]dch -a [335470d] Removed unused importsdch -a [9f7b370] Remove non-production middleware from sample pipelinesdch -a [fccfa39] Fixed logging usage instead of LOGdch -a [2eab5fd] Remove new constraint from migration downgrade.dch -a [8c67341] Sync with oslo-incubator copy of setup.pydch -a [9b9a3d5] Set empty element to ""dch -a [78dcfc6] Fixed unicode username user creation errordch -a [a62d3af] Fix token ids for memcacheddch -a [61629c3] Use is_enabled() in folsom->grizzly upgrade (bug 1167421)dch -a [28ef9cd] Generate HTTPS certificates with ssl_setup.dch -a [cbac771] Fix for configuring non-default auth plugins properlydch -a [23bd9fa] test duplicate namedch -a [e4ec12e] Add TLS Support for LDAPdch -a [97d5624] fix undefined variabledch -a [6f4096b] clean up invalid variable referencedch -a [f846e28] Clean up duplicate methodsdch -a [3f296e0] don't migrate as oftendch -a [5c217fd] use the openstack test runnerdch -a [b033538] Fix 401 status responsedch -a [a65f737] Add missing colon for documentation build steps.dch -a [9467a66] close db migration sessiondch -a [b94f62a] Use string for port in default endpoints (bug 1160573)dch -a [1121b8d] bug 1159888 broken links in rst docdch -a [6f88699] Remove un-needed LimitingReader read() function.dch -a [e16742b] residual grants after delete action (bug1125637)dch -a [0b4ee31] catch errors in wsgi.Middleware.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.2+git201304241331~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A keystone_2013.2+git201304241331~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304241331~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304241331~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_keystone_trunk #34

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_keystone_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_keystone_trunk/34/Project:precise_havana_keystone_trunkDate of build:Wed, 24 Apr 2013 15:31:36 -0400Build duration:2 min 27 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesDelete extra dict in token controller.by jiyou09editkeystone/token/controllers.pyConsole Output[...truncated 2556 lines...]dch -a [335470d] Removed unused importsdch -a [9f7b370] Remove non-production middleware from sample pipelinesdch -a [fccfa39] Fixed logging usage instead of LOGdch -a [2eab5fd] Remove new constraint from migration downgrade.dch -a [8c67341] Sync with oslo-incubator copy of setup.pydch -a [9b9a3d5] Set empty element to ""dch -a [78dcfc6] Fixed unicode username user creation errordch -a [a62d3af] Fix token ids for memcacheddch -a [61629c3] Use is_enabled() in folsom->grizzly upgrade (bug 1167421)dch -a [28ef9cd] Generate HTTPS certificates with ssl_setup.dch -a [cbac771] Fix for configuring non-default auth plugins properlydch -a [23bd9fa] test duplicate namedch -a [e4ec12e] Add TLS Support for LDAPdch -a [97d5624] fix undefined variabledch -a [6f4096b] clean up invalid variable referencedch -a [f846e28] Clean up duplicate methodsdch -a [3f296e0] don't migrate as oftendch -a [5c217fd] use the openstack test runnerdch -a [b033538] Fix 401 status responsedch -a [a65f737] Add missing colon for documentation build steps.dch -a [9467a66] close db migration sessiondch -a [b94f62a] Use string for port in default endpoints (bug 1160573)dch -a [1121b8d] bug 1159888 broken links in rst docdch -a [6f88699] Remove un-needed LimitingReader read() function.dch -a [e16742b] residual grants after delete action (bug1125637)dch -a [0b4ee31] catch errors in wsgi.Middleware.debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC keystone_2013.2+git201304241531~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A keystone_2013.2+git201304241531~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304241531~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'keystone_2013.2+git201304241531~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #55

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/55/Project:precise_havana_quantum_trunkDate of build:Wed, 24 Apr 2013 22:31:36 -0400Build duration:2 min 3 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesImported Translations from Transifexby Jenkinseditquantum/locale/ka_GE/LC_MESSAGES/quantum.poeditquantum/locale/quantum.poteditquantum/locale/ja/LC_MESSAGES/quantum.poConsole Output[...truncated 3062 lines...]git log de5c1e4f281f59d550b476919b27ac4e2aae14ac..HEAD --no-merges --pretty=format:[%h] %sdch -b -D precise --newversion 1:2013.2+git201304242231~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [9c21592] Imported Translations from Transifexdch -a [62017cd] Imported Translations from Transifexdch -a [26b98b7] lbaas: check object state before update for pools, members, health monitorsdch -a [49c1c98] Metadata agent: reuse authentication info across eventlet threadsdch -a [11639a2] Imported Translations from Transifexdch -a [35988f1] Make the 'admin' role configurabledch -a [765baf8] Imported Translations from Transifexdch -a [343ca18] Imported Translations from Transifexdch -a [c117074] Remove locals() from strings substitutionsdch -a [fb66e24] Imported Translations from Transifexdch -a [e001a8d] Add string 'quantum'/ version to scope/tag in NVPdch -a [5896322] Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6.dch -a [80ffdde] Imported Translations from Transifexdch -a [929cbab] Imported Translations from Transifexdch -a [2a24058] Imported Translations from Transifexdch -a [b6f0f68] Imported Translations from Transifexdch -a [1e1c513] Imported Translations from Transifexdch -a [6bbcc38] Imported Translations from Transifexdch -a [bd702cb] Imported Translations from Transifexdch -a [a13295b] Enable automatic validation of many HACKING rules.dch -a [91bed75] Ensure unit tests work with all interface typesdch -a [0446eac] Shorten the path of the nicira nvp plugin.dch -a [8354133] Implement LB plugin delete_pool_health_monitor().dch -a [147038a] Parallelize quantum unit testing:debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.2+git201304242231~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A quantum_2013.2+git201304242231~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304242231~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304242231~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_quantum_trunk #56

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_quantum_trunk
General InformationBUILD FAILUREBuild URL:https://jenkins.qa.ubuntu.com/job/precise_havana_quantum_trunk/56/Project:precise_havana_quantum_trunkDate of build:Wed, 24 Apr 2013 23:31:36 -0400Build duration:1 min 54 secBuild cause:Started by an SCM changeBuilt on:pkg-builderHealth ReportWDescriptionScoreBuild stability: All recent builds failed.0ChangesSend 400 error if device specification contains unexpected attributesby salv.orlandoeditquantum/plugins/nicira/extensions/nvp_networkgw.pyeditquantum/tests/unit/nicira/test_networkgw.pyConsole Output[...truncated 3066 lines...]dch -b -D precise --newversion 1:2013.2+git201304242331~precise-0ubuntu1 Automated Ubuntu testing build:dch -a [9c21592] Imported Translations from Transifexdch -a [01a977b] Send 400 error if device specification contains unexpected attributesdch -a [62017cd] Imported Translations from Transifexdch -a [26b98b7] lbaas: check object state before update for pools, members, health monitorsdch -a [49c1c98] Metadata agent: reuse authentication info across eventlet threadsdch -a [11639a2] Imported Translations from Transifexdch -a [35988f1] Make the 'admin' role configurabledch -a [765baf8] Imported Translations from Transifexdch -a [343ca18] Imported Translations from Transifexdch -a [c117074] Remove locals() from strings substitutionsdch -a [fb66e24] Imported Translations from Transifexdch -a [e001a8d] Add string 'quantum'/ version to scope/tag in NVPdch -a [5896322] Changed DHCPV6_PORT from 467 to 547, the correct port for DHCPv6.dch -a [80ffdde] Imported Translations from Transifexdch -a [929cbab] Imported Translations from Transifexdch -a [2a24058] Imported Translations from Transifexdch -a [b6f0f68] Imported Translations from Transifexdch -a [1e1c513] Imported Translations from Transifexdch -a [6bbcc38] Imported Translations from Transifexdch -a [bd702cb] Imported Translations from Transifexdch -a [a13295b] Enable automatic validation of many HACKING rules.dch -a [91bed75] Ensure unit tests work with all interface typesdch -a [0446eac] Shorten the path of the nicira nvp plugin.dch -a [8354133] Implement LB plugin delete_pool_health_monitor().dch -a [147038a] Parallelize quantum unit testing:debcommitbzr builddeb -S -- -sa -us -ucbzr builddeb -S -- -sa -us -ucdebsign -k9935ACDC quantum_2013.2+git201304242331~precise-0ubuntu1_source.changessbuild -d precise-havana -n -A quantum_2013.2+git201304242331~precise-0ubuntu1.dscTraceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304242331~precise-0ubuntu1.dsc']' returned non-zero exit status 2Error in sys.excepthook:Traceback (most recent call last):  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthookbinary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))OSError: [Errno 2] No such file or directoryOriginal exception was:Traceback (most recent call last):  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in raise esubprocess.CalledProcessError: Command '['sbuild', '-d', 'precise-havana', '-n', '-A', 'quantum_2013.2+git201304242331~precise-0ubuntu1.dsc']' returned non-zero exit status 2Build step 'Execute shell' marked build as failureEmail was triggered for: FailureSending email for trigger: Failure-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Jenkins build is back to normal : cloud-archive_grizzly_version-drift #3

2013-04-24 Thread openstack-testing-bot
See http://10.189.74.7:8080/job/cloud-archive_grizzly_version-drift/3/


-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp


[Openstack-ubuntu-testing-notifications] Build Still Failing: precise_havana_cinder_trunk #28

2013-04-24 Thread openstack-testing-bot
Title: precise_havana_cinder_trunk
General Information
  BUILD FAILURE
  Build URL: https://jenkins.qa.ubuntu.com/job/precise_havana_cinder_trunk/28/
  Project: precise_havana_cinder_trunk
  Date of build: Thu, 25 Apr 2013 01:31:35 -0400
  Build duration: 1 min 5 sec
  Build cause: Started by an SCM change
  Built on: pkg-builder
Health Report
  Build stability: All recent builds failed. (score: 0)
Changes
  Remove duplicate method definition (by dirk)
    edit cinder/tests/test_hp3par.py
  Add stats reporting to Nexenta Driver (by john.griffith)
    edit cinder/volume/drivers/nexenta/volume.py
    edit cinder/tests/test_nexenta.py
Console Output
[...truncated 1381 lines...]
DEBUG:root:['bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']
Building using working tree
Building package in merge mode
Looking for a way to retrieve the upstream tarball
Using the upstream tarball that is present in /tmp/tmp1NWCl0
bzr: ERROR: An error (1) occurred running quilt: Applying patch fix_cinder_dependencies.patch
patching file tools/pip-requires
Hunk #1 FAILED at 18.
1 out of 1 hunk FAILED -- rejects in file tools/pip-requires
Patch fix_cinder_dependencies.patch does not apply (enforce with -f)
ERROR:root:Error occurred during package creation/build: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-67f066a6-d40d-47b6-922d-accf5a2080b6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
ERROR:root:Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-67f066a6-d40d-47b6-922d-accf5a2080b6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
INFO:root:Complete command log:
INFO:root:Destroying schroot.
bzr branch lp:~openstack-ubuntu-testing/cinder/grizzly /tmp/tmp1NWCl0/cinder
mk-build-deps -i -r -t apt-get -y /tmp/tmp1NWCl0/cinder/debian/control
python setup.py sdist
git log -n1 --no-merges --pretty=format:%H
git log -n5 --no-merges --pretty=format:[%h] %s
dch -b -D precise --newversion 1:2013.2+git201304250131~precise-0ubuntu1 Automated Ubuntu testing build:
dch -a [781055b] Add stats reporting to Nexenta Driver
dch -a [c03fcae] Remove duplicate method definition
dch -a [7d5787d] iscsi: Add ability to specify or autodetect block vs fileio
dch -a [3727324] Rename duplicate test method
dch -a [cc7fe54] Add missing space to "volumes already consumed" message
debcommit
bzr builddeb -S -- -sa -us -uc
bzr builddeb -S -- -sa -us -uc
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-67f066a6-d40d-47b6-922d-accf5a2080b6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Error in sys.excepthook:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 68, in apport_excepthook
    binary = os.path.realpath(os.path.join(os.getcwd(), sys.argv[0]))
OSError: [Errno 2] No such file or directory
Original exception was:
Traceback (most recent call last):
  File "/var/lib/jenkins/tools/openstack-ubuntu-testing/bin/build-package", line 139, in
    raise e
subprocess.CalledProcessError: Command '['/usr/bin/schroot', '-p', '-r', '-c', 'precise-amd64-67f066a6-d40d-47b6-922d-accf5a2080b6', '-u', 'jenkins', '--', 'bzr', 'builddeb', '-S', '--', '-sa', '-us', '-uc']' returned non-zero exit status 3
Build step 'Execute shell' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure
-- 
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help   : https://help.launchpad.net/ListHelp
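
The cinder failure above comes down to the packaging patch no longer applying
to tools/pip-requires. A typical way to refresh such a patch is sketched
below; it assumes the packaging branch keeps its patches under debian/patches
with quilt, which the log does not confirm.

# Inside the unpacked source tree of the packaging branch (assumption).
export QUILT_PATCHES=debian/patches
quilt push -f        # force-apply fix_cinder_dependencies.patch, leaving .rej files
# Resolve the rejected hunk in tools/pip-requires by hand, then:
quilt refresh        # regenerate the patch against the new upstream file
quilt pop -a         # unapply everything before committing the refreshed patch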