Re: [Openstack] Can I create a VM with 2 NICs while there is only one network?
Rosen: I want to implement a virtual IPS (intrusion prevention system) at the L2 layer, so the input interface and the output interface should be on the same network. For now I manually rewrite the packet VLAN with the OpenFlow protocol on the two NICs, so that a loop cannot form.

On Thu, May 30, 2013 at 2:11 PM, Aaron Rosen aro...@nicira.com wrote: I still don't see why you want to have two NICs on the same L2 segment. We don't allow this because we don't want to let a tenant bridge them and create a loop in the network. Aaron

On Thu, May 23, 2013 at 8:18 PM, Liu Wenmao marvel...@gmail.com wrote: Hello: I have a network with a subnet. I want to create a VM with one NIC connected to this subnet and one or two extra NICs left over, because I want to do some more things such as intrusion protection. I wonder: is it possible to create a VM with more NICs than its connected networks?

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
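The VLAN-rewrite workaround described above could look roughly like the following OpenFlow rules pushed with ovs-ofctl. This is only a sketch: the OpenFlow port numbers (5, 6) and VLAN IDs (100, 101) are hypothetical, and the real rules depend on how br-int tags the tenant network.

```shell
# Hypothetical sketch: traffic entering the IPS VM on port 5 is moved to
# a separate VLAN before it re-enters the network on port 6, so the two
# NICs never look like a plain bridge loop to the switch.
ovs-ofctl add-flow br-int "in_port=5,dl_vlan=100,actions=mod_vlan_vid:101,output:6"
ovs-ofctl add-flow br-int "in_port=6,dl_vlan=101,actions=mod_vlan_vid:100,normal"
```

These commands configure a live Open vSwitch bridge, so they are shown as an operational fragment rather than a runnable example.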
Re: [Openstack] Can I create a VM with 2 NICs while there is only one network?
Hi Salvatore: Thanks, I will try that. Liu Wenmao

On Wed, May 29, 2013 at 6:07 PM, Salvatore Orlando sorla...@nicira.com wrote: I am afraid there is no way of having two NICs on the same network at the moment. If you are trying to deploy a VM which provides some form of network service, like packet filtering, you might think about implementing it as a Quantum service plugin. The 'agent' for this plugin would plug two tap interfaces connected to the same network and do the processing as you suggest. Clearly, this is not very easy, as it would require you to implement a plugin, possibly an API for it, as well as an agent which uses Quantum library functions for plugging/unplugging interfaces. The agent would also be responsible for starting/stopping the service you're providing. Salvatore

On 28 May 2013 07:34, Liu Wenmao marvel...@gmail.com wrote: Thanks Salvatore. I can create two ports with admin-state down, which are in the same network, but Nova says that the two NICs of the VM cannot be in the same network. Actually I want to redirect all the packets of the network to the VM's eth0; after some processing the VM sends the packets back out through eth1, so the two NICs have to be on the same network. Is it possible?
root@node1:/usr/src/python-quantumclient# quantum port-create --admin-state-down net1
Created a new port:
+----------------------+-----------------------------------------------------------------------------------+
| Field                | Value                                                                             |
+----------------------+-----------------------------------------------------------------------------------+
| admin_state_up       | False                                                                             |
| binding:capabilities | {"port_filter": false}                                                            |
| binding:vif_type     | ovs                                                                               |
| device_id            |                                                                                   |
| device_owner         |                                                                                   |
| fixed_ips            | {"subnet_id": "c11eaa0d-3aff-41a8-909a-1dfdfdf20f48", "ip_address": "100.0.0.12"} |
| id                   | ca48bce7-7e42-4263-8832-cffb6e99ac0a                                              |
| mac_address          | fa:16:3e:0e:08:e1                                                                 |
| name                 |                                                                                   |
| network_id           | 17d31ea4-4473-4da0-9493-9a04fa5aff33                                              |
| status               | DOWN                                                                              |
| tenant_id            | 53707d290204404dbff625378969c25c                                                  |
+----------------------+-----------------------------------------------------------------------------------+
root@node1:/usr/src/python-quantumclient# quantum port-create --admin-state-down net1
Created a new port:
+----------------------+-----------------------------------------------------------------------------------+
| Field                | Value                                                                             |
+----------------------+-----------------------------------------------------------------------------------+
| admin_state_up       | False                                                                             |
| binding:capabilities | {"port_filter": false}                                                            |
| binding:vif_type     | ovs                                                                               |
| device_id            |                                                                                   |
| device_owner         |                                                                                   |
| fixed_ips            | {"subnet_id": "c11eaa0d-3aff-41a8-909a-1dfdfdf20f48", "ip_address": "100.0.0.13"} |
| id                   | 8a320aae-4a16-4a78-acba-1ec505cfe914                                              |
| mac_address          | fa:16:3e:db:c5:15                                                                 |
| name                 |                                                                                   |
| network_id           | 17d31ea4-4473-4da0-9493-9a04fa5aff33                                              |
| status               | DOWN                                                                              |
| tenant_id            | 53707d290204404dbff625378969c25c                                                  |
+----------------------+-----------------------------------------------------------------------------------+
root@node1:/usr/src/python-quantumclient# nova boot --image cirros --flavor m1.tiny --nic port-id=ca48bce7-7e42-4263-8832-cffb6e99ac0a --nic port-id=8a320aae-4a16-4a78-acba-1ec505cfe914 testips
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-ac85648c-4e9b-4624-bf88-a6ceeb8e79aa)

nova-api.log:
3028 2013-05-28 11:50:06.007 3232 TRACE nova.api.openstack   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
3029 2013-05-28 11:50:06.007 3232 TRACE nova.api.openstack     self.gen.next()
3030 2013-05-28 11:50:06.007 3232 TRACE nova.api.openstack   File "/usr/local/lib/python2.7/dist-packages/nova-2013.1-py2.7.egg/nova/compute/api.py", line 522, in _validate_and_provision_instance
3031 2013-05-28 11:50:06.007 3232
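The agent Salvatore describes above would plug its tap interfaces the way Quantum's OVS agent wires a port: by adding the device to br-int with the port's identifiers in external-ids. The commands below are a sketch only; the device name is made up and the <port-uuid>/<mac> placeholders stand for values from a real port-create.

```shell
# Hypothetical sketch: create a tap device and plug it into br-int with
# the external-ids the OVS agent uses to bind it to a Quantum port.
ip tuntap add dev tap-ips-in mode tap
ip link set tap-ips-in up
ovs-vsctl add-port br-int tap-ips-in \
    -- set Interface tap-ips-in \
       external-ids:iface-id=<port-uuid> \
       external-ids:attached-mac=<mac> \
       external-ids:iface-status=active
```

The same sequence would be repeated for the second tap; unplugging reverses it with ovs-vsctl del-port.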
Re: [Openstack] Can I create a VM with 2 NICs while there is only one network?
', which would be tantamount to unplugged network cards, if this is what you want to achieve. Using Quantum you can create a few ports on some network, set these ports administratively down, and boot the VM with these ports (--nic port-id). Even if Quantum does not allow you to move these ports to another network, perhaps it might still satisfy your requirements. Salvatore

On 24 May 2013 04:23, Istimsak Abdulbasir saqman2...@gmail.com wrote: Are you saying that the VM sees two virtual NICs or two physical NICs? Istimsak Abdulbasir change is good

On Thu, May 23, 2013 at 11:18 PM, Liu Wenmao marvel...@gmail.com wrote: Hello: I have a network with a subnet. I want to create a VM with one NIC connected to this subnet and one or two extra NICs left over, because I want to do some more things such as intrusion protection. I wonder: is it possible to create a VM with more NICs than its connected networks?
[Openstack] Can I create a VM with 2 NICs while there is only one network?
Hello: I have a network with a subnet. I want to create a VM with one NIC connected to this subnet and one or two extra NICs left over, because I want to do some more things such as intrusion protection. I wonder: is it possible to create a VM with more NICs than its connected networks?
Re: [Openstack] can two tenants create two identical network?
Yes, I use namespaces and Quantum. After I set the allow_overlapping_ips option it works, thanks guys. To Kumaran: do you mean that namespace support should be disabled in the DHCP agent? But why? If I disable namespaces in dhcp_agent.ini, is it possible that something will stop working?

On Fri, May 17, 2013 at 6:07 PM, Ashok Kumaran ashokkumara...@gmail.com wrote: In addition to the changes below, set use_namespaces=False in dhcp_agent.ini too. Make sure that your operating system supports network namespaces. Sent from my iPhone

On 17-May-2013, at 3:32 PM, Balamurugan V G balamuruga...@gmail.com wrote: I am assuming you are using Quantum for networking, in which case make sure you have enabled overlapping IPs by setting allow_overlapping_ips = True in /etc/quantum/quantum.conf. You also need to enable namespaces: use_namespaces = True in /etc/quantum/l3_agent.ini. Regards, Balu

On Fri, May 17, 2013 at 3:22 PM, Liu Wenmao marvel...@gmail.com wrote: Hi: Suppose there are two tenants, A and B. They can create their own networks, but the networks are both 100.0.0.0/24. I think this should be possible in a multi-tenant scenario, since the networks of different tenants are isolated. But in OpenStack I get a network creation error which says the network already exists. I wonder: is it possible to create two identical networks for two tenants? Liu Wenmao
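The two settings above live in different files. The sketch below shows one way to flip them; the paths in the comments are the stock Grizzly ones from the thread, but the demo edits scratch copies under /tmp so it is safe to try anywhere — point the variables at /etc/quantum to apply it for real.

```shell
# Scratch copies of /etc/quantum/quantum.conf and /etc/quantum/l3_agent.ini;
# seeded with the defaults so the sed edits have something to change.
CONF=/tmp/quantum.conf
AGENT=/tmp/l3_agent.ini
printf '[DEFAULT]\nallow_overlapping_ips = False\n' > "$CONF"
printf '[DEFAULT]\nuse_namespaces = False\n' > "$AGENT"

# Enable overlapping IPs (quantum.conf) and namespaces (l3_agent.ini).
sed -i 's/^allow_overlapping_ips.*/allow_overlapping_ips = True/' "$CONF"
sed -i 's/^use_namespaces.*/use_namespaces = True/' "$AGENT"

grep '^allow_overlapping_ips' "$CONF"
grep '^use_namespaces' "$AGENT"
```

After changing the real files, restart quantum-server, quantum-l3-agent and quantum-dhcp-agent so the options take effect.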
[Openstack] openflow FLOOD data can not go through br-int to br-tun
hi all: I have set up quantum + floodlight. There are a compute node and a controller; I create a VM on the compute node, but the VM (100.0.0.4) cannot ping its gateway (100.0.0.1) on the controller node. When the VM sends an ARP request to the OVS of the compute node, a packet_in request is sent to the controller, then the controller sends a packet_out response to the OVS, telling it to flood the ARP request. I run tcpdump at both the br-int and br-tun interfaces; packets are captured at br-int, but no packets are captured at br-tun:

root@node1:/var/log/openvswitch# tcpdump -i br-int -nn
tcpdump: WARNING: br-int: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes
14:26:45.485978 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
14:26:46.482442 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
14:26:47.482416 ARP, Request who-has 100.0.0.1 tell 100.0.0.4, length 28
^C
3 packets captured
3 packets received by filter
0 packets dropped by kernel

root@node1:/var/log/openvswitch# tcpdump -i br-tun -nn
tcpdump: WARNING: br-tun: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-tun, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

root@node1:/var/log/openvswitch# ovs-ofctl snoop br-int
OFPT_PACKET_IN (xid=0x0): total_len=42 in_port=6 data_len=42 buffer=0x044d
priority0:tunnel0:in_port0006:tci(0) mac fa:16:3e:9f:5b:2c->ff:ff:ff:ff:ff:ff type0806 proto1 tos0 ttl0 ip 100.0.0.4->100.0.0.1 arp_ha fa:16:3e:9f:5b:2c->00:00:00:00:00:00
fa:16:3e:9f:5b:2c > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 100.0.0.1 tell 100.0.0.4, length 28
OFPT_PACKET_OUT (xid=0x0): in_port=6 actions_len=8 actions=FLOOD data_len=42
fa:16:3e:9f:5b:2c > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 42: Request who-has 100.0.0.1 tell 100.0.0.4, length 28

I guess it is because the gateway is on another node, so the ARP request should go through br-int -> br-tun -> eth2 [compute node side] --> [controller side] eth2 -> br-tun -> br-int, but the ARP request seems to be blocked between br-int and br-tun. I don't know why the ARP request is not sent to br-tun. It seems that the ARP request is sent out the normal ports of the OVS, because VM 100.0.0.4 can ping other VMs (100.0.0.2) on the same OVS.

root@node1:/var/log/openvswitch# ovs-vsctl show
afaf59ee-48cc-4f5b-9a1d-4311b509a6c5
    *Bridge br-int*
        Controller "tcp:30.0.0.1:6633"
            is_connected: true
        Port qvoe06ea8d8-d7
            tag: 1
            Interface qvoe06ea8d8-d7
        Port qvoa96762cb-f3
            tag: 4095
            Interface qvoa96762cb-f3
        Port qvo38f23ca0-59
            tag: 1
            Interface qvo38f23ca0-59
        Port qvofc3fe9ed-fb
            tag: 4095
            Interface qvofc3fe9ed-fb
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port eth3
            Interface eth3
        Port qvo1021fd99-eb
            tag: 4095
            Interface qvo1021fd99-eb
        Port qvo329db52d-81
            tag: 4095
            Interface qvo329db52d-81
    Bridge qbre06ea8d8-d7
        Port qbre06ea8d8-d7
            Interface qbre06ea8d8-d7
                type: internal
        Port qvbe06ea8d8-d7
            Interface qvbe06ea8d8-d7
        Port tape06ea8d8-d7
            Interface tape06ea8d8-d7
    Bridge qbr329db52d-81
        Port qbr329db52d-81
            Interface qbr329db52d-81
                type: internal
        Port qvb329db52d-81
            Interface qvb329db52d-81
    Bridge qbrc8ec86f4-3a
        Port qbrc8ec86f4-3a
            Interface qbrc8ec86f4-3a
                type: internal
        Port qvbc8ec86f4-3a
            Interface qvbc8ec86f4-3a
    *Bridge br-tun*
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port gre-1
            Interface gre-1
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="30.0.0.1"}
    Bridge qbr31c6e35b-81
        Port qbr31c6e35b-81
            Interface qbr31c6e35b-81
                type: internal
        Port qvb31c6e35b-81
            Interface qvb31c6e35b-81
    Bridge qbr38f23ca0-59
        Port qbr38f23ca0-59
            Interface qbr38f23ca0-59
                type: internal
        Port tap38f23ca0-59
            Interface tap38f23ca0-59
        Port qvb38f23ca0-59
            Interface qvb38f23ca0-59
    Bridge qbr28117358-50
        Port qvb28117358-50
            Interface qvb28117358-50
        Port qbr28117358-50
[Openstack] floodlight ignore subnet gateway due to PORT_DOWN and LINK_DOWN
hi: I use quantum grizzly with namespaces and floodlight, but VMs cannot ping their gateway. It seems that floodlight ignores devices whose status is PORT_DOWN or LINK_DOWN, and somehow the subnet gateway really is PORT_DOWN and LINK_DOWN. Is this normal? Or how can I change its status to normal?

root@controller:~# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e2ed9e9b6942
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(qr-c5496165-c7): addr:5e:67:22:5b:d5:0e
     config:     PORT_DOWN
     state:      LINK_DOWN
 *2(qr-8af2e01f-bb): addr:e4:00:00:00:00:00    this is the gateway*
 *   config:     PORT_DOWN*
 *   state:      LINK_DOWN*
 3(qr-48c69382-4f): addr:22:64:6f:3a:9f:cd
     config:     PORT_DOWN
     state:      LINK_DOWN
 4(patch-tun): addr:8e:90:4c:aa:d2:06
     config:     0
     state:      0
 5(tap5b5891ac-94): addr:6e:52:f7:c1:ef:f4
     config:     PORT_DOWN
     state:      LINK_DOWN
 6(tap09a002af-66): addr:c6:cb:01:60:3f:8a
     config:     PORT_DOWN
     state:      LINK_DOWN
 7(tap160480aa-84): addr:96:43:cc:05:71:d5
     config:     PORT_DOWN
     state:      LINK_DOWN
 8(tapf6040ba0-b5): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 9(tap0ded1c0f-df): addr:12:c8:b3:5c:fb:6a
     config:     PORT_DOWN
     state:      LINK_DOWN
 10(tapaebb6140-31): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 11(tapddc3ce63-2b): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 12(qr-9b9a3229-19): addr:e4:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 LOCAL(br-int): addr:e2:ed:9e:9b:69:42
     config:     PORT_DOWN
     state:      LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

floodlight code:

if (entity.hasSwitchPort() &&
        !topology.isAttachmentPointPort(entity.getSwitchDPID(),
                                        entity.getSwitchPort().shortValue())) {
    if (logger.isDebugEnabled()) {
        logger.debug("Not learning new device on internal link: {}", entity);
    }

public boolean portEnabled(OFPhysicalPort port) {
    if (port == null)
        return false;
    if ((port.getConfig() & OFPortConfig.OFPPC_PORT_DOWN.getValue()) > 0)
        return false;
Re: [Openstack] openflow FLOOD data can not go through br-int to br-tun
It seems OK after I set the controller for both br-tun and br-int. But the official floodlight installation only sets br-int's controller; am I correct?

On Tue, May 7, 2013 at 2:33 PM, Liu Wenmao marvel...@gmail.com wrote: [original message quoted above]
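For reference, pointing both bridges at the controller amounts to the following. The controller address is the one from the thread; this is a sketch of the workaround described above, not an officially documented floodlight setup.

```shell
# Workaround from the thread (sketch): give br-tun the same OpenFlow
# controller as br-int, so flooded packets also traverse the tunnel bridge.
ovs-vsctl set-controller br-int tcp:30.0.0.1:6633
ovs-vsctl set-controller br-tun tcp:30.0.0.1:6633

# Verify both bridges now point at the controller.
ovs-vsctl get-controller br-int
ovs-vsctl get-controller br-tun
```

These commands target a live Open vSwitch installation, so they are an operational fragment rather than a runnable example.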
Re: [Openstack] floodlight ignore subnet gateway due to PORT_DOWN and LINK_DOWN
I just commented out some of the floodlight code and VMs can ping the gateway. But I do not know why the gateway port and link are down; they are up from the namespace's point of view:

root@controller:/usr/src/eclipse# ip netns exec qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c ip link
14: qr-8af2e01f-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:f7:3d:5e brd ff:ff:ff:ff:ff:ff
root@controller:/usr/src/eclipse# ip netns exec qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c ip addr
14: qr-8af2e01f-bb: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:f7:3d:5e brd ff:ff:ff:ff:ff:ff
    inet 100.0.0.1/24 brd 100.0.0.255 scope global qr-8af2e01f-bb
    inet6 fe80::f816:3eff:fef7:3d5e/64 scope link
       valid_lft forever preferred_lft forever

On Tue, May 7, 2013 at 5:01 PM, Liu Wenmao marvel...@gmail.com wrote: [original message quoted above]
[Openstack] quantum: no gateways in network node
Hi list: I set up quantum without namespace support; quantum-server, the l3 agent and the dhcp agent are running on the same node, and besides there is a compute node. I create a router connecting two networks (100.0.0.0/24 and 200.0.0.0/24), so there should be two gateways (100.0.0.1 and 200.0.0.1) on the controller. However, I can see the two DHCP servers (100.0.0.3 and 200.0.0.2), but no gateways:

root@controller:~# ifconfig
br-ex ...
br-int ...
eth0 ...
eth1 ...
eth2 ...
lo ...
tap09a002af-66 Link encap:Ethernet HWaddr fa:16:3e:9e:11:e0
    inet addr:192.168.19.129 Bcast:192.168.19.255 Mask:255.255.255.128
    inet6 addr: fe80::f816:3eff:fe9e:11e0/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:146 errors:0 dropped:146 overruns:0 frame:0
    TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:9490 (9.4 KB) TX bytes:594 (594.0 B)
tap160480aa-84 Link encap:Ethernet HWaddr fa:16:3e:54:77:83
    inet addr:100.0.0.3 Bcast:100.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::f816:3eff:fe54:7783/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:1110 errors:0 dropped:156 overruns:0 frame:0
    TX packets:514 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:121029 (121.0 KB) TX bytes:66549 (66.5 KB)
tap5b5891ac-94 Link encap:Ethernet HWaddr fa:16:3e:ae:35:d3
    inet addr:200.0.0.2 Bcast:200.0.0.255 Mask:255.255.255.0
    inet6 addr: fe80::f816:3eff:feae:35d3/64 Scope:Link
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:147 errors:0 dropped:146 overruns:0 frame:0
    TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:9816 (9.8 KB) TX bytes:468 (468.0 B)

root@controller:~# quantum subnet-show subnet1
+------------------+----------------------------------------------+
| Field            | Value                                        |
+------------------+----------------------------------------------+
| allocation_pools | {"start": "100.0.0.2", "end": "100.0.0.254"} |
| cidr             | 100.0.0.0/24                                 |
| dns_nameservers  |                                              |
| enable_dhcp      | True                                         |
| gateway_ip       | 100.0.0.1                                    |
| host_routes      |                                              |
| id               | 25b34a57-db92-4a4f-a1f5-a550d5b8e1e6         |
| ip_version       | 4                                            |
| name             | subnet1                                      |
| network_id       | eccf5627-a6c6-4007-82a0-f6b85bd2b4ce         |
| tenant_id        | 53707d290204404dbff625378969c25c             |
+------------------+----------------------------------------------+

The VMs cannot ping the gateways, but they can ping the DHCP servers. Why can't I find the gateway? Wenmao Liu
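A couple of checks that may help here, offered as a sketch (the router name router1 stands in for whatever the router is actually called): the l3 agent creates the gateway interfaces as qr-* internal ports on br-int, and if such an interface is down, plain ifconfig (without -a) will not list it at all.

```shell
# Show the router's ports as Quantum sees them (router name assumed).
quantum router-port-list router1

# The gateway interfaces live on br-int as qr-* internal ports.
ovs-vsctl list-ports br-int | grep '^qr-'

# ifconfig without -a hides interfaces that are down; -a shows them all.
ifconfig -a | grep '^qr-'
```

If the qr-* ports appear with -a but not without it, the gateway interfaces exist but are down rather than missing.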
[Openstack] no packets captured at br-int or br-tun
hi all: I set up quantum without namespace support. Now VMs can ping the gateway (100.0.0.1), but I cannot capture any packets at the gateway interface, br-int or br-tun:

root@controller:/var/log/openvswitch# tcpdump -i qr-c5496165-c7 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qr-c5496165-c7, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

root@controller:/var/log/openvswitch# tcpdump -i br-int -nn
tcpdump: WARNING: br-int: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

root@controller:/var/log/openvswitch# tcpdump -i br-tun -nn
tcpdump: WARNING: br-tun: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-tun, link-type EN10MB (Ethernet), capture size 65535 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

When I ping the DHCP server, packets are captured at the DHCP server interface, but not at br-int or br-tun:

root@controller:/var/log/openvswitch# tcpdump -i tap160480aa-84 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap160480aa-84, link-type EN10MB (Ethernet), capture size 65535 bytes
09:41:58.922771 IP 100.0.0.4 > 100.0.0.3: ICMP echo request, id 62726, seq 26, length 64
09:41:58.922885 IP 100.0.0.3 > 100.0.0.4: ICMP echo reply, id 62726, seq 26, length 64

It is strange: 1) why is the packet captured at the DHCP server interface but not at the gateway interface, and 2) why is the packet not captured at br-int or br-tun, while GRE packets are captured at eth2?

root@controller:/var/log/openvswitch# tcpdump -i eth2 proto gre -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
09:50:06.430375 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x2, length 110: IP 100.0.0.4 > 100.0.0.1: ICMP echo request, id 64774, seq 2, length 64
09:50:06.430452 IP 30.0.0.1 > 30.0.0.11: GREv0, key=0x2, length 110: IP 100.0.0.1 > 100.0.0.4: ICMP echo reply, id 64774, seq 2, length 64
09:50:07.430514 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x2, length 110: IP 100.0.0.4 > 100.0.0.1: ICMP echo request, id 64774, seq 3, length 64
09:50:07.430616 IP 30.0.0.1 > 30.0.0.11: GREv0, key=0x2, length 110: IP 100.0.0.1 > 100.0.0.4: ICMP echo reply, id 64774, seq 3, length 64

root@controller:/var/log/openvswitch# ovs-vsctl show
ca3b3f14-7564-4a88-b0f2-417ccc6d60bf
    Bridge br-tun
        Port gre-2
            Interface gre-2
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip="30.0.0.11"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge virbr0
        Port virbr0
            Interface virbr0
                type: internal
    Bridge br-int
        Controller "tcp:30.0.0.1:6633"
        Port qr-c5496165-c7
            tag: 3
            Interface qr-c5496165-c7
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port tapaebb6140-31
            tag: 4095
            Interface tapaebb6140-31
                type: internal
        Port tap5b5891ac-94
            tag: 2
            Interface tap5b5891ac-94
                type: internal
        Port tap160480aa-84
            tag: 3
            Interface tap160480aa-84
                type: internal
        Port qr-48c69382-4f
            tag: 2
            Interface qr-48c69382-4f
                type: internal
        Port tapf6040ba0-b5
            tag: 4095
            Interface tapf6040ba0-b5
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port tap09a002af-66
            tag: 4095
            Interface tap09a002af-66
                type: internal
        Port tapddc3ce63-2b
            tag: 4095
            Interface tapddc3ce63-2b
                type: internal
        Port tap0ded1c0f-df
            tag: 1
            Interface tap0ded1c0f-df
                type: internal
        Port qr-9b9a3229-19
            tag: 4095
            Interface qr-9b9a3229-19
                type: internal
        Port qr-8af2e01f-bb
            tag: 4095
            Interface qr-8af2e01f-bb
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port qg-0eda5152-09
            Interface qg-0eda5152-09
                type: internal
        Port eth0
            Interface eth0
    ovs_version: "1.4.0+build0"

root@controller:~# ifconfig
br-ex Link encap:Ethernet HWaddr
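A likely explanation, offered as a hedged note rather than from the thread itself: tcpdump on the br-int or br-tun device only sees traffic that reaches the bridge's own local port, while traffic crossing the patch-tun/patch-int pair stays inside Open vSwitch and never touches a kernel device that tcpdump can attach to. Flow statistics show such traffic instead:

```shell
# Patch-port traffic bypasses the kernel taps, so inspect OVS directly:
ovs-ofctl dump-flows br-tun   # per-flow packet/byte counters on the tunnel bridge
ovs-dpctl dump-flows          # datapath-level flows (OVS 1.4-era tool)
ovs-ofctl snoop br-int        # watch OpenFlow messages exchanged with the controller
```

If the dump-flows counters on br-tun increase while pinging, the packets are crossing the patch ports even though tcpdump shows nothing.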
[Openstack] subnet gateway's arp ack not sent back
Hi all: I set up openstack with quantum successfully, but I use floodlight as the network controller, and VMs cannot ping their gateway. I use one host as the compute/network controller (30.0.0.1) and another host as a compute node (30.0.0.11). The VM X address is 100.0.0.7 and the subnet gateway G is 100.0.0.1. I use namespaces to isolate networks (the floodlight restproxy seems not to support namespaces, but I use floodlight standalone). When X is pinging G, I can see the gateway responds with an ARP reply:

root@controller:/usr/src/floodlight# ip netns exec qrouter-7bde1209-e8ed-4ae6-a627-efaa148c743c tcpdump -nn -i qr-8af2e01f-bb
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on qr-8af2e01f-bb, link-type EN10MB (Ethernet), capture size 65535 bytes
18:52:32.769334 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:52:32.769371 ARP, Reply 100.0.0.1 is-at fa:16:3e:f7:3d:5e, length 28
18:52:33.769049 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:52:33.769082 ARP, Reply 100.0.0.1 is-at fa:16:3e:f7:3d:5e, length 28
18:52:34.769117 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:52:34.769149 ARP, Reply 100.0.0.1 is-at fa:16:3e:f7:3d:5e, length 28

But when I listen on the bridge br-int or the physical interface eth2, no ARP reply is seen:

root@controller:/usr/src/floodlight# tcpdump -i br-int -nn
tcpdump: WARNING: br-int: no IPv4 address assigned
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on br-int, link-type EN10MB (Ethernet), capture size 65535 bytes
18:50:31.405691 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:1c:65:d0, length 286
18:50:31.749137 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:50:32.749232 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:50:33.749575 ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28

root@controller:/usr/src/floodlight# tcpdump -i eth2 proto gre -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth2, link-type EN10MB (Ethernet), capture size 65535 bytes
18:54:28.784500 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x0, length 50: ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:54:29.784430 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x0, length 50: ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28
18:54:30.784317 IP 30.0.0.11 > 30.0.0.1: GREv0, key=0x0, length 50: ARP, Request who-has 100.0.0.1 tell 100.0.0.7, length 28

After I delete the controller from openvswitch and restart the openvswitches, VMs can ping their gateway. I do not know what causes the problem. Can anyone point me to some resources on how namespaces and the bridges work together?
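The step that made ping work above amounts to the following; a sketch only. Note that removing the controller returns the bridge to plain MAC-learning (NORMAL) forwarding, so floodlight is no longer in the path at all.

```shell
# Detach the OpenFlow controller so OVS falls back to standalone
# MAC-learning forwarding, then restart the switch daemon.
ovs-vsctl del-controller br-int
service openvswitch-switch restart
```

This is a workaround for debugging, not a fix: it confirms the controller's flow decisions were dropping the ARP replies rather than the bridge wiring being wrong.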
[Openstack] which network controller is the best for quantum grizzly?
I have tried floodlight, but it does not support namespaces, so I wonder: is there a better network controller to support quantum (NOX, Ryu, ...)? Wenmao Liu
Re: [Openstack] which network controller is the best for quantum grizzly?
hi Heiko: My network topology is very simple: a router connecting two subnets, and VMs in the two subnets can ping each other. So it needs L3 routing, and I also need namespaces for the quantum configuration. So is there a controller suitable for such a scenario? Thanks. On Wed, Apr 17, 2013 at 8:16 PM, Heiko Krämer i...@honeybutcher.de wrote: Hi Wenmao, I think you should plan your network topology first, and after that you can decide which controller is the best choice for you. Greetings Heiko On 17.04.2013 14:01, Liu Wenmao wrote: I have tried floodlight, but it does not support namespaces, so I wonder whether there is a better network controller that supports quantum (NOX, Ryu, ...)? Wenmao Liu
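For reference, a topology like the one described (one router, two subnets) can be built with the grizzly quantum CLI roughly as follows — a sketch with illustrative names and CIDRs (net1, net2, r1 are made up):

```shell
# Two networks, each with a subnet (names and CIDRs are illustrative)
quantum net-create net1
quantum subnet-create --name sub1 net1 10.1.0.0/24
quantum net-create net2
quantum subnet-create --name sub2 net2 10.2.0.0/24

# One router attached to both subnets provides the L3 path between them;
# the l3-agent realizes it as a qrouter-<id> namespace on the network node
quantum router-create r1
quantum router-interface-add r1 sub1
quantum router-interface-add r1 sub2
```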
[Openstack] is namespace implemented only in quantum-l3-agent?
hi all: I can use namespaces to build and isolate virtual networks with quantum and its l3-agent. Everything works fine. I need the floodlight controller, but the Big Switch restproxy cannot work with the l3-agent, so I have to disable the l3-agent. Still, the quantum server outputs errors: AttributeError: No such RPC function 'tunnel_sync' So I disabled quantum-ovs-agent and quantum-ovs-plugin as well. The quantum server starts fine and net/subnet creation and deletion are OK. But I find no namespaces using ip netns on the controller; besides, VMs cannot ping the virtual subnet gateway or DHCP server. I do not know how to solve this problem. Do I have to enable quantum-ovs-agent again?
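As a quick check: the namespaces in question are created by the DHCP and L3 agents, not by the quantum server itself, so they only appear while those agents are running. A sketch of the inspection commands (the UUID below is a placeholder):

```shell
# List all network namespaces; quantum names them qdhcp-<network-id>
# and qrouter-<router-id>
ip netns list

# Inspect the interfaces inside a DHCP namespace (UUID is illustrative)
ip netns exec qdhcp-7bde1209-e8ed-4ae6-a627-efaa148c743c ip addr
```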
Re: [Openstack] ceilometer-agent-central starting fail
Thanks. Ceilometer seems to lack some default options in its configuration files and in the official guidance (http://docs.openstack.org/developer/ceilometer/configuration.html). So maybe it is not ready for users yet? On Wed, Apr 10, 2013 at 8:28 PM, Doug Hellmann doug.hellm...@dreamhost.com wrote: On Wed, Apr 10, 2013 at 6:10 AM, Liu Wenmao marvel...@gmail.com wrote: Actually this is not over. The main reason for the service failure is that central/manager.py and service.py use different variables:

central/manager.py:

def interval_task(self, task):
    self.keystone = ksclient.Client(
        username=cfg.CONF.os_username,
        password=cfg.CONF.os_password,
        tenant_id=cfg.CONF.os_tenant_id,
        tenant_name=cfg.CONF.os_tenant_name,
        auth_url=cfg.CONF.os_auth_url)

CLI_OPTIONS = [
    cfg.StrOpt('os-username',
               default=os.environ.get('OS_USERNAME', 'ceilometer'),
               help='Username to use for openstack service access'),
    cfg.StrOpt('os-password',
               default=os.environ.get('OS_PASSWORD', 'admin'),
               help='Password to use for openstack service access'),
    cfg.StrOpt('os-tenant-id',
               default=os.environ.get('OS_TENANT_ID', ''),
               help='Tenant ID to use for openstack service access'),
    cfg.StrOpt('os-tenant-name',
               default=os.environ.get('OS_TENANT_NAME', 'admin'),
               help='Tenant name to use for openstack service access'),
    cfg.StrOpt('os_auth_url',
               default=os.environ.get('OS_AUTH_URL',
                                      'http://localhost:5000/v2.0'),

So after I changed all - to _ and modified all options in /etc/ceilometer/ceilometer.conf, the service starts OK. The thing that fixed it was changing - to _ in your configuration file. The options library allows option names to have - in them so they look nice as command line switches, but the option name uses the _.
Doug On Wed, Apr 10, 2013 at 2:02 PM, Liu Wenmao marvel...@gmail.com wrote: I solved this problem in two steps: 1. modify /etc/init/ceilometer-agent-central.conf: exec start-stop-daemon --start --chuid ceilometer --exec /usr/local/bin/ceilometer-agent-central -- --config-file=/etc/ceilometer/ceilometer.conf 2. add some lines to /etc/ceilometer/ceilometer.conf: os-username=ceilometer os-password=nsfocus os-tenant-name=service os-auth-url=http://controller:5000/v2.0 On Wed, Apr 10, 2013 at 1:36 PM, Liu Wenmao marvel...@gmail.com wrote: Hi all: I have just installed the ceilometer grizzly github version, but fail to start the ceilometer-agent-central service. I think it is because I didn't set up the keystone user/password as for the other projects. I followed the instructions (http://docs.openstack.org/developer/ceilometer/install/manual.html#configuring-keystone-to-work-with-api) but they do not include the ceilometer configuration.

# service ceilometer-agent-central start
ceilometer-agent-central start/running, process 5679
# cat /etc/init/ceilometer-agent-central.conf
description ceilometer-agent-compute
author Chuck Short zul...@ubuntu.com
start on runlevel [2345]
stop on runlevel [!2345]
chdir /var/run
pre-start script
  mkdir -p /var/run/ceilometer
  chown ceilometer:ceilometer /var/run/ceilometer
  mkdir -p /var/lock/ceilometer
  chown ceilometer:ceilometer /var/lock/ceilometer
end script
exec start-stop-daemon --start --chuid ceilometer --exec /usr/local/bin/ceilometer-agent-central

/var/log/ceilometer/ceilometer-agent-central.log:
2013-04-10 13:01:39 ERROR [ceilometer.openstack.common.loopingcall] in looping call
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/openstack/common/loopingcall.py, line 67, in _inner
    self.f(*self.args, **self.kw)
  File /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/central/manager.py, line 76, in interval_task
    auth_url=cfg.CONF.os_auth_url)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 134, in __init__
    self.authenticate()
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py, line 205, in authenticate
    token)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 174, in get_raw_token_from_identity_service
    token=token)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 202, in _base_authN
    resp, body = self.request(url, 'POST', body=params, headers=headers)
  File /usr/local/lib/python2.7/dist-packages
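The dash/underscore mismatch Doug describes is a general convention of the options library: an option may be registered with dashes so the CLI switch looks nice, but its attribute on the conf object (and its key in the config file) uses underscores. A minimal illustration of the mapping, done with plain shell string substitution (this is a sketch of the naming rule, not the real library):

```shell
# Option names as registered (CLI form) -> config-file / attribute form
for opt in os-username os-password os-tenant-id os-tenant-name os-auth-url; do
  echo "$opt" | tr - _
done
```

So a config file must say os_username=..., even though the command line accepts --os-username.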
Re: [Openstack] ceilometer-agent-central starting fail
I solved this problem in two steps: 1. modify /etc/init/ceilometer-agent-central.conf: exec start-stop-daemon --start --chuid ceilometer --exec /usr/local/bin/ceilometer-agent-central -- --config-file=/etc/ceilometer/ceilometer.conf 2. add some lines to /etc/ceilometer/ceilometer.conf: os-username=ceilometer os-password=nsfocus os-tenant-name=service os-auth-url=http://controller:5000/v2.0 On Wed, Apr 10, 2013 at 1:36 PM, Liu Wenmao marvel...@gmail.com wrote: Hi all: I have just installed the ceilometer grizzly github version, but fail to start the ceilometer-agent-central service. I think it is because I didn't set up the keystone user/password as for the other projects. I followed the instructions (http://docs.openstack.org/developer/ceilometer/install/manual.html#configuring-keystone-to-work-with-api) but they do not include the ceilometer configuration.

# service ceilometer-agent-central start
ceilometer-agent-central start/running, process 5679
# cat /etc/init/ceilometer-agent-central.conf
description ceilometer-agent-compute
author Chuck Short zul...@ubuntu.com
start on runlevel [2345]
stop on runlevel [!2345]
chdir /var/run
pre-start script
  mkdir -p /var/run/ceilometer
  chown ceilometer:ceilometer /var/run/ceilometer
  mkdir -p /var/lock/ceilometer
  chown ceilometer:ceilometer /var/lock/ceilometer
end script
exec start-stop-daemon --start --chuid ceilometer --exec /usr/local/bin/ceilometer-agent-central

/var/log/ceilometer/ceilometer-agent-central.log:
2013-04-10 13:01:39 ERROR [ceilometer.openstack.common.loopingcall] in looping call
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/openstack/common/loopingcall.py, line 67, in _inner
    self.f(*self.args, **self.kw)
  File /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/central/manager.py, line 76, in interval_task
    auth_url=cfg.CONF.os_auth_url)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 134, in __init__
    self.authenticate()
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py, line 205, in authenticate
    token)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 174, in get_raw_token_from_identity_service
    token=token)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 202, in _base_authN
    resp, body = self.request(url, 'POST', body=params, headers=headers)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py, line 366, in request
    raise exceptions.from_response(resp, resp.text)
Unauthorized: Unable to communicate with identity service: {"error": {"message": "Invalid user / password", "code": 401, "title": "Not Authorized"}}. (HTTP 401)
2013-04-10 13:01:39 ERROR [ceilometer.openstack.common.threadgroup] Unable to communicate with identity service: {"error": {"message": "Invalid user / password", "code": 401, "title": "Not Authorized"}}. (HTTP 401)
[Openstack] root_helper deprecated?
Hi all: In the quantum grizzly DHCP agent log, I find the following warning:

2013-04-09 15:12:48 WARNING [quantum.agent.common.config] Deprecated: DEFAULT.root_helper is deprecated!

I do set root_helper in the ini file: root_helper = sudo /usr/local/bin/quantum-rootwrap /etc/quantum/rootwrap.conf

After I remove this line, it gives the following error:

Stderr: 'sudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: no tty present and no askpass program specified\nSorry, try again.\nsudo: 3 incorrect password attempts\n'
2013-04-09 15:07:56 DEBUG [quantum.agent.linux.utils] Running command: ['sudo', 'cat', '/proc/5609/cmdline']
2013-04-09 15:07:56 DEBUG [quantum.agent.linux.utils] Command: ['sudo', 'cat', '/proc/5609/cmdline']

So it seems that I still have to use root_helper; then how can I get the warning removed?
Re: [Openstack] swift: Account not found[grizzly]
thanks Kuo, setting the two options to true really solved my problem :-) On Tue, Apr 9, 2013 at 3:36 PM, Kuo Hugo tonyt...@gmail.com wrote: 1) No minimal limitation currently. 2) Did you set the above options to true? allow_account_management https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L61 account_autocreate https://github.com/openstack/swift/blob/master/etc/proxy-server.conf-sample#L69 +Hugo Kuo+ h...@swiftstack.com tonyt...@gmail.com +886 935004793 2013/4/9 Liu Wenmao marvel...@gmail.com Hi all: I just installed swift from github; after I configured a proxy node and a storage node and ran the stat command, it fails:

# swift -v -V 2.0 -A http://controller:5000/v2.0 -U service:swift -K nsfocus stat
Account not found

Keystone and disk configuration seem OK; syslog gives:

Apr 9 13:45:21 node1 account-server AUTH_2755db390fcd4c9bb504242617d5f6a0 (txn: tx6919d8c66d454e50a9b03deded9b2ec8)
Apr 9 13:45:21 node1 account-server 20.0.0.1 - - [09/Apr/2013:05:45:21 +0000] HEAD /swr/27113/AUTH_2755db390fcd4c9bb504242617d5f6a0 404 - tx6919d8c66d454e50a9b03deded9b2ec8 - - 0.0020

I read the code and found that the server tries to visit the db file /srv/node/swr/accounts/27113/e03/1a7a753448a645fdf2b6bcc7223e5e03, but my directory /srv/node/swr/accounts/ is empty, so the server returns a 404 error. I find that the db file is only created when the server receives a REPLICATE request, but I do not know how to generate such a request, or why it is not generated automatically. Moreover, what is the minimal number of storage nodes? Thanks Wenmao Liu
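For reference, the two options Hugo points at live in the [app:proxy-server] section of the proxy config; a sketch of the change and restart (paths per a standard install, adjust to yours):

```shell
# In /etc/swift/proxy-server.conf, [app:proxy-server] section:
#   allow_account_management = true   # permit account PUT/DELETE through the proxy
#   account_autocreate = true         # auto-create the account db on first request
# After editing, restart the proxy so the settings take effect:
swift-init proxy-server restart
```

With account_autocreate enabled, the first authenticated request creates the account db, so no manual REPLICATE request is needed.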
[Openstack] vm unable to reach 169.254.169.254
hi all: I set up quantum and nova grizzly, but VMs cannot get the public key from 169.254.169.254:

### debug end ##
cloud-setup: failed to read iid from metadata. tried 30
WARN: /etc/rc3.d/S45-cloud-setup failed
Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== cloud-final: system completely up in 39.98 seconds ===
wget: can't connect to remote host (169.254.169.254): Connection refused
wget: can't connect to remote host (169.254.169.254): Connection refused
wget: can't connect to remote host (169.254.169.254): Connection refused
instance-id:
public-ipv4:

I have configured nova.conf:

enabled_apis=ec2,osapi_compute,metadata
metadata_manager=nova.api.manager.MetadataManager
metadata_listen=0.0.0.0
metadata_listen_port=8775
service_quantum_metadata_proxy=true
metadata_host=20.0.0.1
metadata_port=8775

quantum l3_agent.ini:

metadata_ip = 20.0.0.1
metadata_port = 8775

metadata_agent.ini:

nova_metadata_ip = 20.0.0.1
nova_metadata_port = 8775

20.0.0.1 is my controller IP. P.S. I cannot see anything like 169.254.169.254 in the iptables rules of the controller or compute nodes.
Re: [Openstack] vm unable to reach 169.254.169.254
Thanks Mouad. After I installed the latest grizzly quantum and removed metadata_port from l3_agent.ini, I can connect to 169.254.169.254:80, but the server returns a 404 Not Found error:

Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== cloud-final: system completely up in 5.03 seconds ===
instance-id: i005b
public-ipv4:
local-ipv4 : 100.0.0.4
wget: server returned error: HTTP/1.1 404 Not Found
cloud-userdata: failed to read user data url: http://169.254.169.254/2009-04-04/user-data
WARN: /etc/rc3.d/S99-cloud-userdata failed

I use cirros-0.3.0-x86_64-disk.img. Is it a problem of the cirros image, or of quantum?

On Tue, Apr 9, 2013 at 6:18 PM, Mouad Benchchaoui m.benchcha...@cloudbau.de wrote: Hi, Are you using namespaces? Because I think this is related to https://bugs.launchpad.net/quantum/+bug/1160955; a fix was just committed in the stable grizzly branch, so upgrade if you want to use another port than the default one, or I think removing the option metadata_port from l3_agent.ini should also make it work for you. HTH, -- Mouad On Tue, Apr 9, 2013 at 11:48 AM, Liu Wenmao marvel...@gmail.com wrote: hi all: I set up quantum and nova grizzly, but VMs cannot get the public key from 169.254.169.254:

### debug end ##
cloud-setup: failed to read iid from metadata. tried 30
WARN: /etc/rc3.d/S45-cloud-setup failed
Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== cloud-final: system completely up in 39.98 seconds ===
wget: can't connect to remote host (169.254.169.254): Connection refused
wget: can't connect to remote host (169.254.169.254): Connection refused
wget: can't connect to remote host (169.254.169.254): Connection refused
instance-id:
public-ipv4:

I have configured nova.conf:

enabled_apis=ec2,osapi_compute,metadata
metadata_manager=nova.api.manager.MetadataManager
metadata_listen=0.0.0.0
metadata_listen_port=8775
service_quantum_metadata_proxy=true
metadata_host=20.0.0.1
metadata_port=8775

quantum l3_agent.ini:

metadata_ip = 20.0.0.1
metadata_port = 8775

metadata_agent.ini:

nova_metadata_ip = 20.0.0.1
nova_metadata_port = 8775

20.0.0.1 is my controller IP. P.S. I cannot see anything like 169.254.169.254 in the iptables rules of the controller or compute nodes.
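When debugging a failure like this, it helps to test each hop of the metadata path by hand. A hedged sketch, assuming the addresses from the posts above and the usual grizzly layout (VM → proxy in the router namespace → nova metadata API); the UUID is a placeholder:

```shell
# From inside the VM: the proxy should answer on the link-local address
# wget -qO- http://169.254.169.254/latest/meta-data/instance-id

# On the network node: is the quantum metadata proxy listening in the
# router namespace? (9697 is the proxy's usual redirect target port)
ip netns exec qrouter-<router-uuid> netstat -lntp

# On the controller: does nova's metadata API answer directly?
curl -s http://20.0.0.1:8775/latest/meta-data/
```

Whichever hop fails first narrows the problem to the VM route, the proxy/NAT rule in the namespace, or nova's metadata service.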
Re: [Openstack] root_helper deprecated?
Thanks Thierry, it seems to make sense. On Tue, Apr 9, 2013 at 4:53 PM, Thierry Carrez thie...@openstack.org wrote: Rahul Upadhyaya wrote: I think you should use: rootwrap_config=/etc/quantum/rootwrap.conf Found this at the below-mentioned wiki page. I think this should hold true for Quantum too. No, Quantum still uses root_helper and has not transitioned to using rootwrap_config yet. Looking at the code, the message seems to point to configuration sections. The [DEFAULT] root_helper configuration option is now deprecated; it needs to be specified in the [AGENT] section of quantum.conf. See https://github.com/openstack/quantum/blob/master/etc/quantum.conf for an example. -- Thierry Carrez (ttx) Release Manager, OpenStack
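Concretely, the deprecation warning goes away once root_helper moves from [DEFAULT] into the [AGENT] section. A sketch of the change, using the rootwrap command line from the original post (file names per a from-source install):

```shell
# In /etc/quantum/quantum.conf (and the agent ini files), move the setting:
#
#   [DEFAULT]
#   # root_helper = ...   <- remove from here (deprecated location)
#
#   [AGENT]
#   root_helper = sudo /usr/local/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
#
# Then restart the affected agents, e.g.:
service quantum-dhcp-agent restart
```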
[Openstack] ceilometer-agent-central starting fail
Hi all: I have just installed the ceilometer grizzly github version, but fail to start the ceilometer-agent-central service. I think it is because I didn't set up the keystone user/password as for the other projects. I followed the instructions (http://docs.openstack.org/developer/ceilometer/install/manual.html#configuring-keystone-to-work-with-api) but they do not include the ceilometer configuration.

# service ceilometer-agent-central start
ceilometer-agent-central start/running, process 5679
# cat /etc/init/ceilometer-agent-central.conf
description ceilometer-agent-compute
author Chuck Short zul...@ubuntu.com
start on runlevel [2345]
stop on runlevel [!2345]
chdir /var/run
pre-start script
  mkdir -p /var/run/ceilometer
  chown ceilometer:ceilometer /var/run/ceilometer
  mkdir -p /var/lock/ceilometer
  chown ceilometer:ceilometer /var/lock/ceilometer
end script
exec start-stop-daemon --start --chuid ceilometer --exec /usr/local/bin/ceilometer-agent-central

/var/log/ceilometer/ceilometer-agent-central.log:
2013-04-10 13:01:39 ERROR [ceilometer.openstack.common.loopingcall] in looping call
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/openstack/common/loopingcall.py, line 67, in _inner
    self.f(*self.args, **self.kw)
  File /usr/local/lib/python2.7/dist-packages/ceilometer-2013.1-py2.7.egg/ceilometer/central/manager.py, line 76, in interval_task
    auth_url=cfg.CONF.os_auth_url)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 134, in __init__
    self.authenticate()
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py, line 205, in authenticate
    token)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 174, in get_raw_token_from_identity_service
    token=token)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/v2_0/client.py, line 202, in _base_authN
    resp, body = self.request(url, 'POST', body=params, headers=headers)
  File /usr/local/lib/python2.7/dist-packages/python_keystoneclient-0.2.3.1.g3a3e254-py2.7.egg/keystoneclient/client.py, line 366, in request
    raise exceptions.from_response(resp, resp.text)
Unauthorized: Unable to communicate with identity service: {"error": {"message": "Invalid user / password", "code": 401, "title": "Not Authorized"}}. (HTTP 401)
2013-04-10 13:01:39 ERROR [ceilometer.openstack.common.threadgroup] Unable to communicate with identity service: {"error": {"message": "Invalid user / password", "code": 401, "title": "Not Authorized"}}. (HTTP 401)
[Openstack] swift: Account not found[grizzly]
Hi all: I just installed swift from github; after I configured a proxy node and a storage node and ran the stat command, it fails:

# swift -v -V 2.0 -A http://controller:5000/v2.0 -U service:swift -K nsfocus stat
Account not found

Keystone and disk configuration seem OK; syslog gives:

Apr 9 13:45:21 node1 account-server AUTH_2755db390fcd4c9bb504242617d5f6a0 (txn: tx6919d8c66d454e50a9b03deded9b2ec8)
Apr 9 13:45:21 node1 account-server 20.0.0.1 - - [09/Apr/2013:05:45:21 +0000] HEAD /swr/27113/AUTH_2755db390fcd4c9bb504242617d5f6a0 404 - tx6919d8c66d454e50a9b03deded9b2ec8 - - 0.0020

I read the code and found that the server tries to visit the db file /srv/node/swr/accounts/27113/e03/1a7a753448a645fdf2b6bcc7223e5e03, but my directory /srv/node/swr/accounts/ is empty, so the server returns a 404 error. I find that the db file is only created when the server receives a REPLICATE request, but I do not know how to generate such a request, or why it is not generated automatically. Moreover, what is the minimal number of storage nodes? Thanks Wenmao Liu
Re: [Openstack] nova calls libvirt but failed:Operation not supported
I solved this problem by blacklisting the bridge module. I referred to this post: http://dev.opennebula.org/issues/1688, which suggests that on Ubuntu 12.04 I can only use brcompat; but the default bridge module is the built-in one, so I blacklisted it and restarted openvswitch, and it works. On Tue, Apr 2, 2013 at 2:46 PM, Liu Wenmao marvel...@gmail.com wrote: hi Aaron: thanks anyway.. it's really a weird problem On Tue, Apr 2, 2013 at 2:29 PM, Aaron Rosen aro...@nicira.com wrote: I've not encountered these errors. If you didn't drop the list, perhaps someone else could help you. On Mon, Apr 1, 2013 at 11:15 PM, Liu Wenmao marvel...@gmail.com wrote: After I uncommented user and group in qemu.conf, the error remains the same. My Linux release is Ubuntu 12.04 LTS:

root@node1:~# dpkg -l|grep libvirt
ii libvirt-bin 0.9.8-2ubuntu17.7 programs for the libvirt library
ii libvirt0 0.9.8-2ubuntu17.7 library for interfacing with different virtualization systems
ii python-libvirt 0.9.8-2ubuntu17.7 libvirt Python bindings

root@node1:~# tail /var/log/libvirt/libvirtd.log
2013-04-02 03:47:59.159+0000: 12796: info : libvirt version: 0.9.8
2013-04-02 03:47:59.159+0000: 12796: error : virNetDevBridgeAddPort:309 : Unable to add bridge br-int port tapcc380352-8b: Operation not supported
2013-04-02 03:49:26.103+0000: 12793: error : virNetDevBridgeAddPort:309 : Unable to add bridge br-int port tapd941ef5a-f1: Operation not supported
2013-04-02 05:01:12.223+0000: 30917: info : libvirt version: 0.9.8
2013-04-02 05:01:12.223+0000: 30917: error : virNetSocketReadWire:996 : End of file while reading data: Input/output error
2013-04-02 05:01:38.735+0000: 30921: error : virNetDevBridgeAddPort:309 : Unable to add bridge br-int port tap8317bc2f-53: Operation not supported
2013-04-02 05:03:08.780+0000: 30917: error : virNetSocketReadWire:996 : End of file while reading data: Input/output error
2013-04-02 06:07:12.279+0000: 18978: info : libvirt version: 0.9.8
2013-04-02 06:07:12.279+0000: 18978: error : virNetSocketReadWire:996 : End of file while reading data: Input/output error
2013-04-02 06:07:42.519+0000: 18980: error : virNetDevBridgeAddPort:309 : Unable to add bridge br-int port tapfcbc59a4-96: Operation not supported

On Tue, Apr 2, 2013 at 1:05 PM, Aaron Rosen aro...@nicira.com wrote: I believe that with older versions of libvirt you need to uncomment the following lines in /etc/libvirt/qemu.conf:

# The user ID for QEMU processes run by the system instance.
user = root
# The group ID for QEMU processes run by the system instance.
group = root

I'd also check what's in /var/log/libvirt/libvirtd.log
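The fix described above can be sketched as the following commands — an outline under stated assumptions (Ubuntu 12.04 with the brcompat setup; the modprobe file name is illustrative), not a definitive recipe:

```shell
# Prevent the in-kernel bridge module from loading at boot, so Open vSwitch's
# brcompat handles bridge operations instead (file name is illustrative)
echo "blacklist bridge" >> /etc/modprobe.d/blacklist-bridge.conf

# Unload it now (this fails if something still uses the module) and restart OVS
rmmod bridge
service openvswitch-switch restart
```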
Re: [Openstack] what is the difference between 2013.1 and grizzly?
Thanks Oleg and Thierry, it's really helpful. On Wed, Mar 27, 2013 at 5:20 PM, Thierry Carrez thie...@openstack.org wrote: Oleg Gelbukh wrote: Generally, grizzly-X is a milestone tag inside the release cycle codenamed 'Grizzly'. Note that the tagging scheme changed between milestones 2 and 3 of the 'Grizzly' release cycle, so you see 'grizzly-1' and 'grizzly-2' tags but no 'grizzly-3'. Milestone 3 of 'Grizzly' is tagged '2013.1.g3' instead. It looks like we won't see codenames in tags anymore in following development cycles. '2013.1.rc1' is a tag referring to release candidate 1, and you can expect 2013.1.rc2 and so on as well. Finally, '2013.1' is the official release version, and it reflects that it is the first release made during the year 2013. Hope this helps, and if I'm mistaken, someone will correct me. That's correct. We recently changed the format of our tags (from grizzly-3 to 2013.1.g3), now that versioning is more closely related to tag names. Grizzly release candidates are therefore tagged 2013.1.rcX. Cheers, -- Thierry Carrez (ttx) Release Manager, OpenStack
[Openstack] nova calls libvirt but failed:Operation not supported
Hi all: I use github to install nova and quantum, but when I launch an instance, nova-compute fails:

2013-04-02 11:00:15 DEBUG [nova.openstack.common.lockutils] Released file lock iptables at /var/lock/nova/nova-iptables for method _apply...
2013-04-02 11:00:17 ERROR [nova.compute.manager] Instance failed to spawn
Traceback (most recent call last):
  File /usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/compute/manager.py, line 1069, in _spawn
    block_device_info)
  File /usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/virt/libvirt/driver.py, line 1520, in spawn
    block_device_info)
  File /usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/virt/libvirt/driver.py, line 2435, in _create_domain_and_network
    domain = self._create_domain(xml, instance=instance)
  File /usr/local/lib/python2.7/dist-packages/nova-2013.2.a89.ge9912c6-py2.7.egg/nova/virt/libvirt/driver.py, line 2396, in _create_domain
    domain.createWithFlags(launch_flags)
  File /usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 187, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File /usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 147, in proxy_call
    rv = execute(f,*args,**kwargs)
  File /usr/local/lib/python2.7/dist-packages/eventlet/tpool.py, line 76, in tworker
    rv = meth(*args,**kwargs)
  File /usr/lib/python2.7/dist-packages/libvirt.py, line 581, in createWithFlags
    if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
libvirtError: Unable to add bridge br-int port tap89ed2dc0-2e: Operation not supported
2013-04-02 11:00:17 DEBUG [nova.openstack.common.lockutils] Got semaphore compute_resources for method abort...
2013-04-02 11:00:17 DEBUG [nova.compute.claims] Aborting claim: [Claim: 512 MB memory, 0 GB disk, 1 VCPUS]

Is it because the nova user calls libvirt to create a port and does not have enough permission?

Note 1: I set up sudoers:

root@node1:~# cat /etc/sudoers.d/nova_sudoers
Defaults:nova !requiretty
nova ALL = (root) NOPASSWD: /usr/local/bin/nova-rootwrap

Note 2: when I log in as root and execute ovs-vsctl add-port, it succeeds:

root@node1:~# ovs-vsctl add-port br-int tap89ed2dc0-2e
root@node1:~# ovs-vsctl show
f3f4cdc0-1391-45fd-a535-1947d5aea488
    Bridge br0
        Port eth0
            Interface eth0
        Port br0
            Interface br0
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port tap89ed2dc0-2e
            Interface tap89ed2dc0-2e
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port gre-1
            Interface gre-1
                type: gre
                options: {in_key=flow, out_key=flow, remote_ip=192.168.19.1}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: 1.4.0+build0
[Openstack] bigswitch plugin start failure
Hi: I am integrating floodlight with openstack quantum, but the Big Switch restproxy plugin seems to have some errors in the latest git version. The quantum-server and l3-agent logs give the following errors:

File /usr/local/lib/python2.7/dist-packages/quantum-2013.2.a331.g9ac82ee-py2.7.egg/quantum/common/rpc.py, line 43, in dispatch
    quantum_ctxt, version, method, **kwargs)
File /usr/local/lib/python2.7/dist-packages/quantum-2013.2.a331.g9ac82ee-py2.7.egg/quantum/openstack/common/rpc/dispatcher.py, line 136, in dispatch
    raise AttributeError("No such RPC function '%s'" % method)
AttributeError: No such RPC function 'sync_routers'

I found a similar answer here (I also have the missing report_state error): https://bugs.launchpad.net/quantum/+bug/1159581 I have two questions: 1. Is github (https://github.com/openstack) the newest openstack repo? All files in github are fixed except rpc.py: https://review.openstack.org/#/c/25024/3/quantum/agent/rpc.py shows that it should be "return self.cast(context," (fixed version) rather than "return self.call(context," (the current latest github version). 2. How can I solve the sync_routers problem? There seems to be no solution right now.
[Openstack] what is the difference between 2013.1 and grizzly?
I notice that openstack components have two different development code names; for example, openstack grizzly has 2013.1 and grizzly, so what is the difference between the two? There is an rc version of 2013.1 but none of grizzly, so I think they are not equivalent to the developers.

root@controller:/usr/src/nova# git tag
0.9.0
2011.1rc1
2011.2
2011.2gamma1
2011.2rc1
2011.3
2011.3.1
2012.1
2012.1.1
2012.1.2
2012.1.3
2012.2
2012.2.1
2012.2.2
2012.2.3
2013.1.g3
2013.1.rc1
diablo-1
diablo-2
diablo-3
diablo-4
essex-1
essex-2
essex-3
essex-4
essex-rc1
essex-rc2
essex-rc3
essex-rc4
folsom-1
folsom-2
folsom-3
folsom-rc1
folsom-rc2
folsom-rc3
grizzly-1
grizzly-2
[Openstack] Error: Upgrade DB using Essex release first
hi all: I have set up a basic OpenStack environment, and I recently tried to install Cinder. While following the OpenStack install guide, I got stuck after running cinder-manage db sync:

root@controller:~# cinder-manage db sync
2013-02-05 11:39:39 15121 DEBUG cinder.utils [-] backend module 'cinder.db.sqlalchemy.migration' from '/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.pyc' __get_backend /usr/lib/python2.7/dist-packages/cinder/utils.py:477
Command failed, please check log for more info
2013-02-05 11:39:39 15121 CRITICAL cinder [-] Upgrade DB using Essex release first.
2013-02-05 11:39:39 15121 TRACE cinder Traceback (most recent call last):
2013-02-05 11:39:39 15121 TRACE cinder   File "/usr/bin/cinder-manage", line 757, in <module>
2013-02-05 11:39:39 15121 TRACE cinder     main()
2013-02-05 11:39:39 15121 TRACE cinder   File "/usr/bin/cinder-manage", line 744, in main
2013-02-05 11:39:39 15121 TRACE cinder     fn(*fn_args, **fn_kwargs)
2013-02-05 11:39:39 15121 TRACE cinder   File "/usr/bin/cinder-manage", line 212, in sync
2013-02-05 11:39:39 15121 TRACE cinder     return migration.db_sync(version)
2013-02-05 11:39:39 15121 TRACE cinder   File "/usr/lib/python2.7/dist-packages/cinder/db/migration.py", line 33, in db_sync
2013-02-05 11:39:39 15121 TRACE cinder     return IMPL.db_sync(version=version)
2013-02-05 11:39:39 15121 TRACE cinder   File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.py", line 76, in db_sync
2013-02-05 11:39:39 15121 TRACE cinder     current_version = db_version()
2013-02-05 11:39:39 15121 TRACE cinder   File "/usr/lib/python2.7/dist-packages/cinder/db/sqlalchemy/migration.py", line 101, in db_version
2013-02-05 11:39:39 15121 TRACE cinder     raise exception.Error(_("Upgrade DB using Essex release first."))
2013-02-05 11:39:39 15121 TRACE cinder Error: Upgrade DB using Essex release first.
2013-02-05 11:39:39 15121 TRACE cinder
-- end of the error output --

I use Ubuntu 12.10 and apt-get to install OpenStack Folsom, so it is strange that it asks me to upgrade using the Essex release. Any hints? Thanks, Liu Wenmao
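A plausible reading of the error (a sketch of the decision logic, not the actual cinder code): `db_version()` looks for the migration-tracking table; an empty database can simply be stamped with the initial version and synced, but if tables already exist with no version record, the code refuses to guess and tells you to upgrade through Essex first, whose migrations it knows how to recognize.

```python
# Simplified, hypothetical sketch of the version-detection decision behind
# the "Upgrade DB using Essex release first" error. The real
# cinder.db.sqlalchemy.migration module is more involved.

INIT_VERSION = 0


def db_version(tables):
    """Decide the schema version from a mapping of existing table names."""
    if "migrate_version" in tables:
        # Normal case: the tracking table records the current version.
        return tables["migrate_version"]
    if not tables:
        # Empty database: safe to stamp the initial version and sync up.
        return INIT_VERSION
    # Tables exist but nothing tracks their version: refuse to guess.
    raise RuntimeError("Upgrade DB using Essex release first.")


assert db_version({"migrate_version": 5}) == 5
assert db_version({}) == INIT_VERSION
```

On that reading, a stale or partially created cinder database (e.g. left over from an earlier attempt) could trigger the message even on a fresh Folsom install; dropping and recreating the cinder database before `cinder-manage db sync` would then be worth trying.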
[Openstack] Fwd: using Win AD authentication as keystone backend
-- Forwarded message -- From: Liu Wenmao marvel...@gmail.com Date: Tue, Jan 22, 2013 at 4:55 PM Subject: Re: [Openstack] using Win AD authentication as keystone backend To: Tim Bell tim.b...@cern.ch

Thanks, Tim. Is it possible to use Active Directory and a MySQL database at the same time? For example, Keystone first queries for the user in AD, and if nothing is found, it then queries the MySQL database. The motivation is that I want to store service users (glance, nova) in MySQL and use the existing AD database for employee login.

Wenmao

On Tue, Jan 22, 2013 at 3:51 PM, Tim Bell tim.b...@cern.ch wrote: We run Active Directory with Keystone at CERN. The configuration is documented by Jose in the Wiki at http://wiki.openstack.org/HowtoIntegrateKeystonewithAD. Not sure if all the patches made it into Folsom though. Tim

From: openstack-bounces+tim.bell=cern...@lists.launchpad.net [mailto:openstack-bounces+tim.bell=cern...@lists.launchpad.net] On Behalf Of Liu Wenmao Sent: 22 January 2013 04:23 To: openstack@lists.launchpad.net Subject: [Openstack] using Win AD authentication as keystone backend

hello all: My company uses Windows AD (Active Directory) authentication for internal user login. Is it possible to integrate this existing authentication as a Keystone backend, so that we avoid maintaining an extra set of users and passwords? I hope OpenStack Folsom has an easy and stable solution. Thanks. Wenmao Liu, NSFOCUS
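Keystone of that era had no built-in "AD first, SQL fallback" driver, so this would need a custom identity backend that chains two drivers. A sketch of the chaining idea with stub backends (all class and method names here are hypothetical, not Keystone's actual driver interface):

```python
# Sketch of a chained identity lookup: try the AD/LDAP backend first and
# fall back to SQL. DictBackend and HybridIdentity are illustrative stubs,
# not keystone classes.

class NotFound(Exception):
    pass


class DictBackend:
    """Stub backend holding users in a dict, standing in for AD or SQL."""

    def __init__(self, users):
        self.users = users

    def get_user(self, name):
        try:
            return self.users[name]
        except KeyError:
            raise NotFound(name)


class HybridIdentity:
    def __init__(self, primary, fallback):
        self.primary = primary    # e.g. the AD/LDAP backend
        self.fallback = fallback  # e.g. the SQL backend

    def get_user(self, name):
        try:
            return self.primary.get_user(name)
        except NotFound:
            return self.fallback.get_user(name)


ad = DictBackend({"alice": {"source": "AD"}})
sql = DictBackend({"glance": {"source": "SQL"}, "nova": {"source": "SQL"}})
identity = HybridIdentity(ad, sql)

assert identity.get_user("alice")["source"] == "AD"    # employee from AD
assert identity.get_user("glance")["source"] == "SQL"  # service user from SQL
```

This matches the motivation in the mail: employees resolve against AD, while service users (glance, nova) live only in SQL and are found by the fallback lookup.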
[Openstack] using Win AD authentication as keystone backend
hello all: My company uses Windows AD (Active Directory) authentication for internal user login. Is it possible to integrate this existing authentication as a Keystone backend, so that we avoid maintaining an extra set of users and passwords? I hope OpenStack Folsom has an easy and stable solution. Thanks. Wenmao Liu, NSFOCUS
[Openstack] kvm dump core continuously
hi all: I set up an OpenStack scenario on a physical computer. The computer runs Ubuntu 12.10 with the KVM hypervisor, and on it I emulate a virtualized controller node and two virtualized compute nodes with qemu-system-x86_64/kvm. Everything seems OK, but after some time, when I ssh to a VM on a compute node or open a VNC client, something crashes and a lot of core dumps appear on the physical computer's screen. I cannot capture them as text, so I took some photos. Does anyone know what happened?
[Openstack] mirror internal flow to external physical switch
hi all: I want to detect internal network flow with a physical IDS (intrusion detection system) device, and a possible approach is a switch SPAN. First, I create a mirror on the Open vSwitch and redirect all data to a physical interface eth1:

ovs-vsctl -- --id=@m create mirror name=mirror0 -- add bridge br-int mirrors @m
ovs-vsctl set mirror mirror0 output_port=4d5ed382-a0c3-4453-ab3c-58e1e7f603b0    (uuid of eth1)
ovs-vsctl set mirror mirror0 select_src_port=d624f5b1-f5e3-4f85-a907-bd209b5463aa    (uuid of br-int)
ovs-vsctl set mirror mirror0 select_dst_port=d624f5b1-f5e3-4f85-a907-bd209b5463aa    (uuid of br-int)

This way the internally transferred traffic is copied to eth1, and if eth1 and the IDS device are in the same VLAN, the IDS can inspect the internal traffic of the Open vSwitch. The problem is that every compute node would then need an extra physical interface so that traffic inside that node can be inspected, which is really wasteful. So I wonder whether it is possible to mirror the traffic to a VLAN rather than to a port (i.e. output_vlan instead of output_port), but I find there are few documents about the output_vlan argument. After I created VLAN tag 998 on both a compute node and a network node, the system hung and I could not ssh to the nodes. Can anyone tell me how to mirror the traffic to a VLAN, please?
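For what it's worth, the OVS Mirror table does have an `output_vlan` column alongside `output_port`, so RSPAN-style mirroring to a VLAN is at least expressible. A small helper that assembles the corresponding `ovs-vsctl` command line (only a sketch: it builds the command but does not run it, and whether VLAN-output mirroring behaves well in a given deployment needs testing, as the hang described above suggests):

```python
# Assemble an ovs-vsctl command creating a mirror whose output goes to a
# VLAN (output_vlan) instead of an output port. This only builds the
# argument list; run it via subprocess on a host with Open vSwitch.

def mirror_to_vlan_cmd(bridge, mirror_name, select_port_uuid, vlan_id):
    if not 0 < vlan_id < 4095:
        raise ValueError("invalid VLAN id: %d" % vlan_id)
    return [
        "ovs-vsctl",
        "--", "--id=@m", "create", "mirror",
        "name=%s" % mirror_name,
        "select_src_port=%s" % select_port_uuid,
        "select_dst_port=%s" % select_port_uuid,
        "output_vlan=%d" % vlan_id,
        "--", "add", "bridge", bridge, "mirrors", "@m",
    ]


cmd = mirror_to_vlan_cmd("br-int", "mirror0",
                         "d624f5b1-f5e3-4f85-a907-bd209b5463aa", 998)
assert cmd[0] == "ovs-vsctl"
assert "output_vlan=998" in cmd
```

One caution with RSPAN-style mirroring: if the mirror VLAN can loop back into a port the mirror also selects, the mirrored traffic gets re-mirrored, which can flood the network; that kind of amplification is one possible explanation for the hang observed with VLAN 998.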
[Openstack] bug of quantum-server
I have recently been failing to restart the quantum-server, without any error log output:

root@controller:~# service quantum-server restart
stop: Unknown instance: quantum-server
start/running, process 3763
root@controller:~# service quantum-server restart
stop: Unknown instance: quantum-server
start/running, process 4231

I finally traced it to the code below:

    # remove from table unallocated vlans for any unconfigured physical
    # networks
    for allocs in allocations.itervalues():
        for alloc in allocs:
            if not alloc.allocated:
                LOG.debug("removing vlan %s on physical network %s from pool" %
                          (alloc.vlan_id, physical_network))
                session.delete(alloc)

The local variable physical_network should be alloc.physical_network. I also suggest adding some error output to the logger.

root@controller:~# quantum-server --config-file /etc/quantum/quantum.conf --log-file /var/log/quantum/server.log --config-file /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini --config-file /etc/quantum/plugins/restproxy/restproxy.ini
Traceback (most recent call last):
  File "/usr/bin/quantum-server", line 26, in <module>
    server()
  File "/usr/lib/python2.7/dist-packages/quantum/server/__init__.py", line 40, in main
    quantum_service = service.serve_wsgi(service.QuantumApiService)
  File "/usr/lib/python2.7/dist-packages/quantum/service.py", line 83, in serve_wsgi
    service.start()
  File "/usr/lib/python2.7/dist-packages/quantum/service.py", line 42, in start
    self.wsgi_app = _run_wsgi(self.app_name)
  File "/usr/lib/python2.7/dist-packages/quantum/service.py", line 89, in _run_wsgi
    app = config.load_paste_app(app_name)
  File "/usr/lib/python2.7/dist-packages/quantum/common/config.py", line 133, in load_paste_app
    app = deploy.loadapp("config:%s" % config_path, name=app_name)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
    return loadobj(APP, uri, name=name, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
    return context.create()
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
    **context.local_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/usr/lib/python2.7/dist-packages/paste/urlmap.py", line 25, in urlmap_factory
    app = loader.get_app(app_name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
    name=name, global_conf=global_conf).create()
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 144, in invoke
    **context.local_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/usr/lib/python2.7/dist-packages/quantum/auth.py", line 61, in pipeline_factory
    app = loader.get_app(pipeline[-1])
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 350, in get_app
    name=name, global_conf=global_conf).create()
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 710, in create
    return self.object_type.invoke(self)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 146, in invoke
    return fix_call(context.object, context.global_conf, **context.local_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/util.py", line 56, in fix_call
    val = callable(*args, **kw)
  File "/usr/lib/python2.7/dist-packages/quantum/api/v2/router.py", line 67, in factory
    return cls(**local_config)
  File "/usr/lib/python2.7/dist-packages/quantum/api/v2/router.py", line 71, in __init__
    plugin = manager.QuantumManager.get_plugin()
  File "/usr/lib/python2.7/dist-packages/quantum/manager.py", line 65, in get_plugin
    cls._instance = cls()
  File "/usr/lib/python2.7/dist-packages/quantum/manager.py", line 60, in __init__
    self.plugin = plugin_klass()
  File "/usr/lib/python2.7/dist-packages/quantum/plugins/openvswitch/ovs_quantum_plugin.py", line 197, in __init__
    ovs_db_v2.sync_vlan_allocations(self.network_vlan_ranges)
  File "/usr/lib/python2.7/dist-packages/quantum/plugins/openvswitch/ovs_db_v2.py", line 112, in sync_vlan_allocations
    (alloc.vlan_id, physical_network))
UnboundLocalError: local variable 'physical_network' referenced before assignment
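The fix described above, demonstrated on a toy version of the loop (stand-in classes, not the actual ovs_db_v2 module): reference `alloc.physical_network` on the allocation record instead of the never-assigned local name `physical_network`.

```python
# Toy reproduction of the sync_vlan_allocations loop with the fix applied:
# the physical network comes from the allocation record itself, not from
# an unbound local variable.

class Alloc:
    def __init__(self, vlan_id, physical_network, allocated):
        self.vlan_id = vlan_id
        self.physical_network = physical_network
        self.allocated = allocated


def prune_unallocated(allocations):
    """Return log messages for the allocations that would be deleted."""
    removed = []
    for allocs in allocations.values():
        for alloc in allocs:
            if not alloc.allocated:
                # Fixed: alloc.physical_network, not the bare name
                # physical_network (which is unbound at this point).
                removed.append("removing vlan %s on physical network %s"
                               % (alloc.vlan_id, alloc.physical_network))
    return removed


allocations = {"physnet1": [Alloc(100, "physnet1", False),
                            Alloc(101, "physnet1", True)]}
msgs = prune_unallocated(allocations)
assert msgs == ["removing vlan 100 on physical network physnet1"]
```

With the original code, merely reaching this debug statement raises UnboundLocalError and kills the server on startup, which is why the service restart fails with no useful log output.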
[Openstack] is it possible to connect to real public network in quantum in tunnel mode?
I followed the OpenStack Network (Quantum) Administration Guide and built an internal network, and I want VMs on the private network to access the Internet. So I followed the instructions and created an external network, and the internal VM has a floating IP, but it cannot connect to the physical Internet. I guess the external network is still a logical concept, not connected to the physical one.

root@controller:~# quantum floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| f2148ab7-02f8-465a-a23c-fbdb77c8e2bd | 10.0.50.4        | 192.168.3.164       | 1f0dce9f-1ada-4b39-96b9-4285c111afba |
+--------------------------------------+------------------+---------------------+--------------------------------------+

root@controller:~# quantum net-list -- --router:external=True
+--------------------------------------+--------+--------------------------------------+
| id                                   | name   | subnets                              |
+--------------------------------------+--------+--------------------------------------+
| bbe28ed0-fff2-4944-84ba-c410a5bdd164 | public | 1695bffe-460a-4cf7-8c47-4c7fa39d4041 |
+--------------------------------------+--------+--------------------------------------+

So is it possible to connect to the Internet in tunnel mode, or should I use VLAN mode? What configuration should I change for VLAN mode? Thanks, all.
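Regarding VLAN mode: the OVS plugin's VLAN configuration lives in ovs_quantum_plugin.ini. A minimal sketch, assuming a physical network label `physnet1` mapped to a provider bridge `br-eth1` (both names, and the VLAN range, are placeholders for this example, not values from the mail):

```ini
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (sketch).
# physnet1, br-eth1, and the 1000:2999 range are example values only.
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1
```

The external network would then be created with matching provider attributes on `physnet1`. That said, floating IPs can also work in GRE tunnel mode when the L3 agent's external bridge (commonly br-ex) is attached to a physical interface on the network node, so switching to VLAN mode is not the only option to investigate.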
[Openstack] instance launched in a wrong compute node
hi all: I have 3 compute nodes, but one (node3) is down:

root@controller:~/vms# nova-manage service list
2012-12-11 15:10:50 DEBUG nova.utils [req-a103d7d9-265c-4ef4-a11d-1dba1ccbc9e2 None None] backend module 'nova.db.sqlalchemy.api' from '/usr/lib/python2.7/dist-packages/nova/db/sqlalchemy/api.pyc' from (pid=14904) __get_backend /usr/lib/python2.7/dist-packages/nova/utils.py:494
Binary            Host        Zone  Status   State  Updated_At
nova-scheduler    controller  nova  enabled  :-)    2012-12-11 07:10:48
nova-network      controller  nova  enabled  XXX    2012-11-22 02:05:14
nova-compute      controller  nova  enabled  XXX    2012-11-21 03:44:34
nova-cert         controller  nova  enabled  :-)    2012-12-11 07:10:46
nova-consoleauth  controller  nova  enabled  :-)    2012-12-11 07:10:41
nova-volume       controller  nova  enabled  :-)    2012-12-11 07:10:48
nova-compute      node1       nova  enabled  :-)    2012-12-11 07:10:42
nova-network      node1       nova  enabled  XXX    2012-11-22 02:05:31
nova-compute      node2       nova  enabled  :-)    2012-12-11 07:10:41
nova-network      node2       nova  enabled  XXX    2012-11-22 02:05:31
nova-compute      node3       nova  enabled  XXX    2012-12-07 09:03:06

When I launch an instance, it often turns out that the task is scheduled on the unavailable node, so the launch fails:

2012-12-11 15:20:10 DEBUG nova.scheduler.filters.retry_filter [req-5b648097-33c8-426d-aea8-74be196f8f25 5eb9644d73544e04b347666d1156a002 e6621dd241764ddbaf9cd556882c5aa7] Previously tried hosts: [u'node1', u'node2']. (host=node3) from (pid=24208) host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/retry_filter.py:39
2012-12-11 15:20:10 DEBUG nova.scheduler.filters.ram_filter [req-5b648097-33c8-426d-aea8-74be196f8f25 5eb9644d73544e04b347666d1156a002 e6621dd241764ddbaf9cd556882c5aa7] host 'node3': free_ram_mb:476 free_disk_mb:22528 does not have 4096 MB usable ram, it only has 970.0 MB usable ram. from (pid=24208) host_passes /usr/lib/python2.7/dist-packages/nova/scheduler/filters/ram_filter.py:48
2012-12-11 15:20:10 DEBUG nova.scheduler.host_manager [req-5b648097-33c8-426d-aea8-74be196f8f25 5eb9644d73544e04b347666d1156a002 e6621dd241764ddbaf9cd556882c5aa7] Host filter function <bound method RamFilter.host_passes of <nova.scheduler.filters.ram_filter.RamFilter object at 0x43be250>> failed for node3 from (pid=24208) passes_filters /usr/lib/python2.7/dist-packages/nova/scheduler/host_manager.py:166
2012-12-11 15:20:10 WARNING nova.scheduler.driver [req-5b648097-33c8-426d-aea8-74be196f8f25 5eb9644d73544e04b347666d1156a002 e6621dd241764ddbaf9cd556882c5aa7] [instance: 557ade83-151c-425c-bf38-2770e25d0450] Setting instance to ERROR state.

I do not know why; any suggestions? p.s. I use Ubuntu 12.10 with OpenStack 2012.2.
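The RamFilter decision in the log is simple arithmetic: the host's memory limit is its total RAM scaled by an overcommit ratio, already-consumed RAM is subtracted, and the host is rejected when the request exceeds what remains. A simplified sketch of that check (assuming the then-default ram_allocation_ratio of 1.5; the exact host totals behind the 970.0 MB figure in the log are not shown, so the numbers below are illustrative):

```python
# Simplified version of the scheduler's RamFilter check: a host passes
# only when requested RAM fits under (total * ratio) - used.

def ram_filter_passes(total_mb, free_mb, requested_mb, ratio=1.5):
    limit_mb = total_mb * ratio      # overcommit raises the ceiling
    used_mb = total_mb - free_mb     # RAM already claimed by instances
    usable_mb = limit_mb - used_mb
    return requested_mb <= usable_mb


# A roomy host easily accepts a 4096 MB instance.
assert ram_filter_passes(total_mb=8192, free_mb=8192, requested_mb=4096)

# A small host with 476 MB free cannot fit 4096 MB, as with node3 in
# the log (illustrative total; the log does not show node3's total RAM).
assert not ram_filter_passes(total_mb=1024, free_mb=476, requested_mb=4096)
```

Note that RamFilter only explains why node3 was rejected after being tried; the underlying issue is that node3 was offered at all while its service state is XXX, which is what the scheduler's liveness checks (service_down_time) are supposed to prevent.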