Is the instance scheduled to a hypervisor? Check this with
openstack server show <uuid>
(admin credentials)

If yes, check nova-compute.log on that hypervisor; you may find some useful
information there for debugging.
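As a rough sketch, these two steps could look like the following (the UUID is the one from the log message quoted later in the thread; the log path assumes a typical package-based install and may differ on your systems):

```shell
# 1. Was the instance scheduled to a hypervisor? With admin credentials,
#    the OS-EXT-SRV-ATTR:host field shows the compute host (empty if
#    scheduling never happened).
openstack server show 6892bb9e-4256-4fc9-a313-331f0c576a03 \
    -c OS-EXT-SRV-ATTR:host -c status

# 2. If it was scheduled, look for errors in nova-compute.log on that
#    host around the time of the failure:
grep -iE 'error|trace' /var/log/nova/nova-compute.log | tail -n 50
```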

Saverio

On 3 May 2017 at 2:16 AM, "Steve Powell" <spow...@silotechgroup.com>
wrote:

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security



[ml2_type_flat]

flat_networks = provider









*From:* Neil Jerram [mailto:n...@tigera.io]
*Sent:* Tuesday, May 2, 2017 7:58 PM
*To:* Steve Powell <spow...@silotechgroup.com>; Chris Sarginson <csarg...@gmail.com>; openstack-operators@lists.openstack.org

*Subject:* Re: [Openstack-operators] Neutron Issues



I think you probably need to say what ML2 mechanism driver(s) you are using.





On Tue, May 2, 2017 at 10:29 PM Steve Powell <spow...@silotechgroup.com>
wrote:

No OVS here. Thanks though! This one has me stumped!



*From:* Chris Sarginson [mailto:csarg...@gmail.com]
*Sent:* Tuesday, May 2, 2017 5:24 PM
*To:* Steve Powell <spow...@silotechgroup.com>; openstack-operators@lists.openstack.org
*Subject:* Re: [Openstack-operators] Neutron Issues



If you're using openvswitch: in Newton the default driver the openvswitch
agent uses to configure OpenFlow changed to the native python ryu library.
I think it's been mentioned on here recently, so it's probably worth having
a poke through the archives for more information. I'd check your neutron
openvswitch agent logs for errors pertaining to openflow configuration
specifically, and if you see anything, it's probably worth applying the
following config to your ml2 ini file under the [OVS] section:



of_interface = ovs-ofctl



https://docs.openstack.org/mitaka/config-reference/networking/networking_options_reference.html



Then restart the neutron openvswitch agent and watch the logs; hopefully
this is of some use to you.
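As a rough sequence, the suggestion above amounts to something like this (log path and service name assume a typical package-based install; adjust for your distro):

```shell
# Check the OVS agent log for OpenFlow-related errors:
grep -iE 'openflow|ofctl|ryu' /var/log/neutron/neutron-openvswitch-agent.log

# If errors show up, fall back to the ovs-ofctl driver in the [OVS]
# section of the ML2/agent ini file, i.e.:
#     [OVS]
#     of_interface = ovs-ofctl

# Then restart the agent and watch the log:
systemctl restart neutron-openvswitch-agent
tail -f /var/log/neutron/neutron-openvswitch-agent.log
```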



On Tue, 2 May 2017 at 21:30 Steve Powell <spow...@silotechgroup.com> wrote:

I forgot to mention that I'm running Newton (with haproxy); my neutron.conf
file is below.



    [DEFAULT]

    core_plugin = ml2

    service_plugins = router

    allow_overlapping_ips = True

    notify_nova_on_port_status_changes = True

    notify_nova_on_port_data_changes = True

    transport_url = rabbit://openstack:#############@x.x.x.x

    auth_strategy = keystone



    [agent]

    root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf



    [cors]



    [cors.subdomain]



    [database]

    connection = mysql+pymysql://neutron:###########################@10.10.6.220/neutron



    [keystone_authtoken]

    auth_url = http://x.x.x.x:35357/v3

    auth_uri = https://xxx.xxxx.xxx:5000/v3

    memcached_servers = x.x.x.x:11211

    auth_type = password

    project_domain_name = Default

    user_domain_name = Default

    project_name = service

    username = neutron

    password = ##################################################





    [matchmaker_redis]



    [nova]



    auth_url = http://x.x.x.x:35357/v3

    auth_type = password

    project_domain_name = Default

    user_domain_name = Default

    region_name = RegionOne

    project_name = service

    username = nova

    password = ###################################################



    [oslo_concurrency]



    [oslo_messaging_amqp]



    [oslo_messaging_notifications]



    [oslo_messaging_rabbit]



    [oslo_messaging_zmq]



    [oslo_middleware]

    enable_proxy_headers_parsing = True

    enable_http_proxy_to_wsgi = True



    [oslo_policy]



    [qos]



    [quotas]



    [ssl]



*From:* Steve Powell [mailto:spow...@silotechgroup.com]
*Sent:* Tuesday, May 2, 2017 4:16 PM
*To:* openstack-operators@lists.openstack.org
*Subject:* [Openstack-operators] Neutron Issues




Hello Ops!



I have a major issue slapping me in the face and seek any assistance
possible. When trying to spin up an instance, whether from the command
line, manually in Horizon, or with a Heat template, I receive the following
error in the nova and, where applicable, heat logs:



Failed to allocate the network(s), not rescheduling.



I see in the neutron logs that the request makes it through to completion,
but that info is obviously not making it back to nova.



INFO neutron.notifiers.nova [-] Nova event response: {u'status':
u'completed', u'code': 200, u'name': u'network-changed', u'server_uuid':
u'6892bb9e-4256-4fc9-a313-331f0c576a03'}



What am I missing? Why would the response from neutron not make it back to
nova?







_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
