[openstack-dev] ml2 and vxlan configurations, neutron-server fails to start
Hi,
Thank you for the info. I downgraded SQLAlchemy accordingly, but there were a lot of other dependencies I had to take care of (listed below). The error that still persists in my environment is:

RuntimeError: Unable to load quantum from configuration file /etc/neutron/api-paste.ini.

Are there any other dependencies to take care of to resolve this error?

pip uninstall sqlalchemy       (uninstalled 0.8.3)
pip install sqlalchemy==0.7.9
pip install jsonrpclib
pip uninstall eventlet         (uninstalled 0.12.0)
pip install eventlet           (installed 0.14.0)
pip install pyudev

For the error Requirement.parse('amqp>=1.0.10,<1.1.0'):

pip uninstall amqp
pip install amqp               -- but this installs 1.3.3

So instead, download the source of amqp 1.0.10 (https://pypi.python.org/pypi/amqp/1.0.10) and install it manually:

python setup.py build
python setup.py install

Regards
Gopi Krishna

Yongsheng Gong (gongysh at unitedstack.com) wrote:
> VersionConflict: (SQLAlchemy 0.8.3 (/usr/lib64/python2.7/site-packages), Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
>
> It seems your SQLAlchemy is newer than required, so pip uninstall sqlalchemy and then install the older one:
>
> sudo pip install sqlalchemy==0.7.9

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
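The VersionConflict in the quoted reply is just a bounds check: the installed 0.8.3 falls outside the declared range >=0.7.8,<=0.7.99. A minimal pure-Python sketch of that check (version parsing simplified to numeric tuples, not the full pkg_resources/PEP 440 rules):

```python
# Sketch of the version-bounds check behind pkg_resources' VersionConflict.
# Versions are compared as numeric tuples, a simplification of the real
# pkg_resources parsing.

def parse(version):
    """Turn '0.7.9' into (0, 7, 9) for tuple comparison."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed, lower, upper):
    """True if lower <= installed <= upper."""
    return parse(lower) <= parse(installed) <= parse(upper)

# The requirement from the traceback: SQLAlchemy>=0.7.8,<=0.7.99
print(satisfies("0.8.3", "0.7.8", "0.7.99"))   # False -> VersionConflict
print(satisfies("0.7.9", "0.7.8", "0.7.99"))   # True  -> downgrade resolves it
```

Note that 0.7.99 parses as (0, 7, 99), so the upper bound really does admit the whole 0.7.x series while rejecting anything in 0.8.x.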
[openstack-dev] ml2 and vxlan configurations, neutron-server fails to start
Hi Trinath,
Please find the server.log and neutron.conf.

server.log:

2013-11-29 11:21:45.276 13505 INFO neutron.common.config [-] Logging enabled!
2013-11-29 11:21:45.277 13505 WARNING neutron.common.legacy [-] Old class module path in use. Please change 'quantum.openstack.common.rpc.impl_qpid' to 'neutron.openstack.common.rpc.impl_qpid'.
2013-11-29 11:21:45.277 13505 ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
2013-11-29 11:21:45.277 13505 DEBUG neutron.service [-] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1890
2013-11-29 11:21:45.277 13505 DEBUG neutron.service [-] Configuration options gathered from: log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1891
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] command line args: ['--config-file', '/usr/share/neutron/neutron-dist.conf', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugin.ini', '--log-file', '/var/log/neutron/server.log'] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1892
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] config files: ['/usr/share/neutron/neutron-dist.conf', '/etc/neutron/neutron.conf', '/etc/neutron/plugin.ini'] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1893
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1894
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] allow_bulk = True log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.278 13505 DEBUG neutron.service [-] allow_overlapping_ips = True log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-] allow_pagination = False log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-] allow_sorting = False log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-] allowed_rpc_exception_modules = ['neutron.openstack.common.exception', 'nova.exception', 'cinder.exception', 'exceptions'] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.279 13505 DEBUG neutron.service [-] api_extensions_path = log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] api_paste_config = /etc/neutron/api-paste.ini log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] auth_strategy = keystone log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] backdoor_port = None log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] backlog = 4096 log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.280 13505 DEBUG neutron.service [-] base_mac = fa:16:3e:00:00:00 log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] bind_host = 0.0.0.0 log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] bind_port = 9696 log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] config_dir = None log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.281 13505 DEBUG neutron.service [-] config_file = ['/usr/share/neutron/neutron-dist.conf', '/etc/neutron/neutron.conf', '/etc/neutron/plugin.ini'] log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-] control_exchange = rabbit log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-] core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-] debug = True log_opt_values /usr/lib/python2.7/site-packages/oslo/config/cfg.py:1903
2013-11-29 11:21:45.282 13505 DEBUG neutron.service [-] default_log_levels = ['amqplib=WARN', 'sqlalchemy
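Those DEBUG lines are oslo.config dumping every registered option at service startup, one "name = value" line per option. A simplified pure-Python sketch of that pattern (this is not the real oslo.config API; the option registry is reduced to a plain dict and the logger to any callable taking one string):

```python
# Simplified sketch of the option dump seen in the DEBUG lines above.
# NOT the real oslo.config API: the registry is a plain dict and the
# "logger" is any callable taking one string.

def log_opt_values(opts, logger):
    """Emit one 'name = value' line per registered option, sorted by name."""
    logger("Configuration options gathered from:")
    for name in sorted(opts):
        logger("%s = %s" % (name, opts[name]))

lines = []
log_opt_values({"allow_bulk": True,
                "bind_port": 9696,
                "core_plugin": "neutron.plugins.ml2.plugin.Ml2Plugin"},
               lines.append)
print("\n".join(lines))
```

The point of the real dump is the same as here: at DEBUG level you can read back the fully merged configuration (dist defaults, neutron.conf, plugin.ini) exactly as the server saw it, which is the first thing to check when a config option seems to be ignored.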
[openstack-dev] ml2 and vxlan configurations, neutron-server fails to start
Hi,
I am configuring Havana on Fedora 19 and observing the errors below with Neutron. Please help me resolve this issue. I have copied only a few lines from server.log; if the full log is required, let me know.

/etc/neutron/plugins/ml2/ml2_conf.ini:

type_drivers = vxlan,local
tenant_network_types = vxlan
mechanism_drivers = neutron.plugins.ml2.drivers.OpenvswitchMechanismDriver
network_vlan_ranges = physnet1:1000:2999
vni_ranges = 5000:6000
vxlan_group = 239.10.10.1

ERROR neutron.common.legacy [-] Skipping unknown group key: firewall_driver
ERROR stevedore.extension [-] Could not load 'local': (SQLAlchemy 0.8.3 (/usr/lib64/python2.7/site-packages), Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
ERROR stevedore.extension [-] (SQLAlchemy 0.8.3 (/usr/lib64/python2.7/site-packages), Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
ERROR stevedore.extension [-] Could not load 'vxlan': (SQLAlchemy 0.8.3 (/usr/lib64/python2.7/site-packages), Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
ERROR stevedore.extension [-] (SQLAlchemy 0.8.3 (/usr/lib64/python2.7/site-packages), Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
TRACE stevedore.extension VersionConflict: (SQLAlchemy 0.8.3 (/usr/lib64/python2.7/site-packages), Requirement.parse('SQLAlchemy>=0.7.8,<=0.7.99'))
ERROR neutron.common.config [-] Unable to load neutron from configuration file /etc/neutron/api-paste.ini.
TRACE neutron.common.config LookupError: No section 'quantum' (prefixed by 'app' or 'application' or 'composite' or 'composit' or 'pipeline' or 'filter-app') found in config /etc/neutron/api-paste.ini
ERROR neutron.service [-] In serve_wsgi()
TRACE neutron.service RuntimeError: Unable to load quantum from configuration file /etc/neutron/api-paste.ini.

Regards
Gopi Krishna
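The LookupError above means PasteDeploy went looking for a section defining an app named 'quantum' and the Havana api-paste.ini only defines 'neutron' — i.e., something in the configuration still asks for the old name. A minimal sketch of that lookup, using a trimmed hypothetical api-paste.ini and Python's configparser (the real PasteDeploy matching is richer; only the prefix scan from the error message is modeled here):

```python
# Sketch of the section lookup PasteDeploy performs when neutron-server
# loads its API pipeline. API_PASTE is a trimmed, illustrative
# api-paste.ini, and the matching is simplified to the section prefixes
# listed in the LookupError.
import configparser

API_PASTE = """
[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions
/v2.0: neutronapi_v2_0
"""

PREFIXES = ("app", "application", "composite", "composit",
            "pipeline", "filter-app")

def find_app(ini_text, name):
    """Return the section defining `name`, or None (-> the LookupError)."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    for section in parser.sections():
        prefix, _, app = section.partition(":")
        if prefix in PREFIXES and app == name:
            return section
    return None

print(find_app(API_PASTE, "neutron"))   # found: 'composite:neutron'
print(find_app(API_PASTE, "quantum"))   # None -> LookupError in the log
```

This is why the error survives the SQLAlchemy downgrade: it is a naming leftover from the quantum-to-neutron rename, not a dependency problem.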
Re: [openstack-dev] Merge multiple OVS bridges ?
Are the below observations on Havana? My observations are w.r.t. Grizzly, but I did not see any changes in Havana regarding the same.

-- Regards
Gopi Krishna
[openstack-dev] Merge multiple OVS bridges ?
Hi,
Currently in Neutron, the integration bridge (br-int) and additional bridges configured over each physical NIC (e.g. br-eth0, br-eth1, etc.) are created on the compute and network nodes. What is the logic behind, or the advantage of, having two OVS bridges in the physical host?

Can we have only one bridge per physical NIC, similar to the Linux bridge setup, and configure/modify the flows such that VLAN conversion is appropriately set up for ingress and egress traffic within the single bridge? This would also eliminate the veth pairs used to connect the bridges together.

Is this a feasible proposal, and can it be worked on?

-- Regards
Gopi Krishna
Re: [openstack-dev] reg. Multihost dhcp feature in Havana ?
On Tue, Sep 10, 2013 at 1:46 PM, Gopi Krishna B wrote:
> Hi
> I was looking at the link below and checking whether the feature to support multihost networking is part of Havana.
>
> https://blueprints.launchpad.net/neutron/+spec/quantum-multihost
>
> I could not find the feature in the Havana blueprints. Could you let me know details regarding the feature?
>
> https://blueprints.launchpad.net/neutron/havana/+specs?show=all
>
> -- Regards
> Gopi Krishna

Hi,
Does anyone have info on whether this feature is part of Havana? Thanks.
[openstack-dev] reg. Multihost dhcp feature in Havana ?
Hi,
I was looking at the link below and checking whether the feature to support multihost networking is part of Havana.

https://blueprints.launchpad.net/neutron/+spec/quantum-multihost

I could not find the feature in the Havana blueprints. Could you let me know details regarding the feature?

https://blueprints.launchpad.net/neutron/havana/+specs?show=all

-- Regards
Gopi Krishna
Re: [openstack-dev] Quantum (Grizzly) setup on Fedora 18 - VM does not receive an IP address
Resending the mail, as the wrong dashboard logs were posted in the previous question.

We are unable to get an IP address when a VM is launched, and the DHCP error below is observed in the dashboard logs. The setup is done on Fedora 18 using OpenStack Grizzly. It is a 2-node setup, with the network + controller node having 3 NICs and the compute node having 2 NICs. Networking is configured in VLAN mode.

em1 - mgmt network
em2 - external/public network (br-ex is created on top of this iface)
eth1 - internal/data network (br-eth1 is created on top of this iface)

plugin.ini:

enable_tunneling = False
tenant_network_type = vlan
network_vlan_ranges = eth1:100:1000
integration_bridge = br-int
bridge_mappings = eth1:br-eth1,em2:br-ex

Console log:

Starting logging: OK
Initializing random number generator... done.
Starting acpid: OK
cirros-ds 'local' up at 1.48
no results found for mode=local. up 1.53. searched: nocloud configdrive ec2
Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cirros-ds 'net' up at 181.79
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 181.83. request failed
failed 2/20: up 184.04. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 222.36. searched: nocloud configdrive ec2
failed to get instance-id of datasource
Starting dropbear sshd: generating rsa key... generating dsa key... OK

=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,,8,fe80::f816:3eff:fede:dbc8
=== datasource: None None ===
=== cirros: current=0.3.1 uptime=222.60 ===
route: fscanf
=== pinging gateway failed, debugging connection ===
### route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
route: fscanf
### cat /etc/resolv.conf
cat: can't open '/etc/resolv.conf': No such file or directory
### gateway not found
/sbin/cirros-status: line 1: can't open /etc/resolv.conf: no such file
### pinging nameservers
### uname -a
Linux cirros 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 GNU/Linux

Could anyone help us resolve this issue? We have tried following the different links and options available on the internet, but could not resolve this error. Let us know if further information is required to identify the root cause.

Some more info, in case someone can identify the root cause; it would be of great help to me. From the tcpdump output, I could track the DHCP discover packet at the tapXXX, qbrXXX, qvbXXX, qvoXXX, int-br-eth0, and phy-br-eth0 interfaces, but not after that.

As per my understanding, the flow of packets should be:
tapXX -> qbrXX -> qvbXX -> qvoXX -> br-int -> int-br-eth0 -> phy-br-eth0 -> br-eth0 -> eth0

So in this case, is there a missing security group rule which possibly drops the packet? I am not familiar with the iptables rules, so if I need to add any rules, could you please help me in adding the rule?

-- Regards
Gopi Krishna

On Wed, Jul 10, 2013 at 10:21 AM, Gopi Krishna B wrote:
> We are unable to get an IP address when a VM is launched, and the DHCP error below is observed in the dashboard logs.
>
> The setup is done on Fedora 18 using OpenStack Grizzly. It is a 2-node setup, with the network + controller node having 3 NICs and the compute node having 2 NICs. Networking is configured in VLAN mode.
>
> em1 - mgmt network
> em2 - external/public network (br-ex is created on top of this iface)
> eth1 - internal/data network (br-eth1 is created on top of this iface)
>
> plugin.ini:
> enable_tunneling = False
> tenant_network_type = vlan
> network_vlan_ranges = eth1:100:1000
> integration_bridge = br-int
> bridge_mappings = eth1:br-eth1,em2:br-ex
>
> The console log from the dashboard is as below.
>
> Initializing random number generator... done.
> Starting network...
> udhcpc (v1.18.5) started
> Sending discover...
> Sending select for 192.168.120.2...
> Sending select for 192.168.120.2...
> Sending select for 192.168.120.2...
> No lease, failing
> WARN: /etc/rc3.d/S40-network failed
> cirros-ds 'net' up at 182.11
> checking http://169.254.169.254/2009-04-04/instance-id
> failed 1/20: up 182.13. request failed
> failed 2/20: up 184.34. request failed
> failed 3/20: up 186.36. request failed
>
> Could anyone help us resolve this issue? We have tried following the different links and options available on the internet, but could not resolve this error. Let us know if further infor
[openstack-dev] Quantum (Grizzly) setup on Fedora 18 - VM does not receive an IP address
We are unable to get an IP address when a VM is launched, and the DHCP error below is observed in the dashboard logs.

The setup is done on Fedora 18 using OpenStack Grizzly. It is a 2-node setup, with the network + controller node having 3 NICs and the compute node having 2 NICs. Networking is configured in VLAN mode.

em1 - mgmt network
em2 - external/public network (br-ex is created on top of this iface)
eth1 - internal/data network (br-eth1 is created on top of this iface)

plugin.ini:

enable_tunneling = False
tenant_network_type = vlan
network_vlan_ranges = eth1:100:1000
integration_bridge = br-int
bridge_mappings = eth1:br-eth1,em2:br-ex

The console log from the dashboard is as below.

Initializing random number generator... done.
Starting network...
udhcpc (v1.18.5) started
Sending discover...
Sending select for 192.168.120.2...
Sending select for 192.168.120.2...
Sending select for 192.168.120.2...
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cirros-ds 'net' up at 182.11
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 182.13. request failed
failed 2/20: up 184.34. request failed
failed 3/20: up 186.36. request failed

Could anyone help us resolve this issue? We have tried following the different links and options available on the internet, but could not resolve this error. Let us know if further information is required to identify the root cause.

Some more info, in case someone can identify the root cause; it would be of great help to me. From the tcpdump output, I could track the DHCP discover packet at the tapXXX, qbrXXX, qvbXXX, qvoXXX, int-br-eth0, and phy-br-eth0 interfaces, but not after that.

As per my understanding, the flow of packets should be:
tapXX -> qbrXX -> qvbXX -> qvoXX -> br-int -> int-br-eth0 -> phy-br-eth0 -> br-eth0 -> eth0

So in this case, is there a missing security group rule which possibly drops the packet? I am not familiar with the iptables rules, so if I need to add any rules, could you please help me in adding the rule?

-- Regards
Gopi Krishna
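The tcpdump procedure described above can be turned into a mechanical check: compare the set of interfaces where the discover was captured against the expected chain and report the first hop past the last sighting. A small pure-Python sketch (the tapXX/qbrXX names are the placeholder names from the post, not real device IDs):

```python
# Sketch: given the interfaces where tcpdump saw the DHCP discover,
# report the first hop in the expected path where it disappeared.
# Interface names are the placeholder tapXX/qbrXX names used in the
# post, not real device IDs.

EXPECTED_PATH = [
    "tapXX", "qbrXX", "qvbXX", "qvoXX",
    "br-int", "int-br-eth0", "phy-br-eth0", "br-eth0", "eth0",
]

def where_dropped(seen):
    """Return the first hop past the last interface where the packet was seen."""
    indices = [EXPECTED_PATH.index(i) for i in seen if i in EXPECTED_PATH]
    if not indices:
        return EXPECTED_PATH[0]           # never seen at all
    last = max(indices)
    if last + 1 == len(EXPECTED_PATH):
        return None                       # seen at the final hop
    return EXPECTED_PATH[last + 1]

# Interfaces where the discover was captured, per the tcpdump runs above:
seen = {"tapXX", "qbrXX", "qvbXX", "qvoXX", "int-br-eth0", "phy-br-eth0"}
print(where_dropped(seen))   # -> 'br-eth0'
```

With the sightings reported in the post, the packet vanishes at the phy-br-eth0 to br-eth0 boundary, which points at the OVS flow rules or VLAN tagging on the physical bridge rather than at the iptables security group rules (those are applied much earlier, at the qbrXX Linux bridge).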