** Also affects: neutron
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624494
Title:
[master] metadata is not working on multi-node setup
Public bug reported:
Setup:
1 controller
2 network nodes (q-dhcp)
1 ESXi nova-compute node
3 KVM Ubuntu compute nodes
CirrOS instance launch is failing on both ESXi and KVM.
vmware@cntr1:~$ nova service-list
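For reference, metadata reachability is usually verified from inside the guest against the standard metadata endpoint; a minimal check looks like:

$ curl http://169.254.169.254/latest/meta-data/instance-id

On a multi-node setup the request is proxied through the network node hosting the subnet's DHCP or router namespace, which is the usual suspect when metadata breaks only in multi-node topologies.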
Public bug reported:
Setup:
2 KVM hosts
2 ESX Hosts
1 Controller
1 Network node
Devstack (master)
Steps:
1. I don't have the cinder service configured
2. Horizon by default imposes a 1 GB volume (can't set it to 0 from the UI). Snapshot
attached.
3. VM launch fails, as the Cinder service catalog is empty, with
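As a possible workaround sketch (flavor and image names are placeholders), booting straight from an image via the CLI bypasses Horizon's volume field and never consults the Cinder catalog:

$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64-uec test-vm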
Thanks, Brandon, for looking into it. I did a couple of experiments to confirm the
algorithm works fine.
I suspect it was an "nc" issue when I first reported this. Marking the bug as
Invalid.
I moved away from "nc" and deployed web servers using "nginx". Tried the
experiments below, and it seems to be working as
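The experiments were along these lines; assuming each nginx backend returns its own address, the distribution can be checked with a loop against the VIP (address is a placeholder):

$ for i in $(seq 1 8); do curl -s http://10.0.0.100/; done | sort | uniq -c

With ROUND_ROBIN and 4 healthy members, each backend should answer twice.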
Public bug reported:
Setup:
Single controller
1 KVM compute
Steps:
1. Create subnet vn 20.0.0.0/24
2. Launch 4 instances on the above network
3. Create LB1 for subnet vn.
4. Create listener and pool; add the instances as members to the pool
5. Try to delete the LBaaS port using the neutron command (CLI equivalents are sketched below)
As
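For reference, the steps above map to roughly the following lbaasv2 CLI calls (names, subnet, and addresses are placeholders):

$ neutron lbaas-loadbalancer-create --name LB1 vn-subnet
$ neutron lbaas-listener-create --name listener1 --loadbalancer LB1 --protocol HTTP --protocol-port 80
$ neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
$ neutron lbaas-member-create --subnet vn-subnet --address 20.0.0.5 --protocol-port 80 pool1
$ neutron port-delete <lbaas-vip-port-id>   # step 5: deleting the VIP port directly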
Public bug reported:
Just noticed that the collect_stats task runs periodically every 10 seconds on an LB
which doesn't have any pool/members.
It's flooding the logs; maybe we can check the LB for pools/members before
collecting stats.
Logs:
root@runner:/home/stack/nsbu_cqe_openstack# neutron
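For context, the same stats can be pulled on demand (LB name is a placeholder); on an LB with no pool/members this presumably just returns zeroed counters:

$ neutron lbaas-loadbalancer-stats lb1

The periodic collect_stats task does the equivalent of this every 10 seconds per LB, hence the flood on idle load balancers.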
Thanks, Brandon, for looking into this. A couple of points:
1. This is the HAProxy driver implementation, not Octavia.
2. VMs were spawned by hand, and the LB & health monitor were given enough time
to provision the pool members.
I am assigning this bug back. Please let me know what you think.
Public bug reported:
Setup:
1. Load balancer with one listener and pool.
2. 4 web servers as members of the pool.
3. Create a health monitor for the pool.
Test:
1. Send 4 curl requests; requests are distributed equally among the members, as
the ROUND_ROBIN lb_algorithm is selected.
2. Using "nova stop <>"
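The request pattern in step 1 was along these lines (VIP address is a placeholder):

$ for i in 1 2 3 4; do curl -s http://10.0.0.100/; done

With ROUND_ROBIN each of the 4 members should serve exactly one request; after "nova stop" on a member, the health monitor is expected to mark it down so the remaining members absorb its share.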
Public bug reported:
Currently we don't have an option to check the status of a listener. Below is the
output for a listener, with no status field.
root@runner:~# neutron lbaas-listener-show 8c0e0289-f85d-4539-8970-467a45a5c191
+------------------------------+
| Field
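For comparison, newer neutron clients do expose listener state through the load balancer status tree (LB name is a placeholder):

$ neutron lbaas-loadbalancer-status lb1

but lbaas-listener-show itself prints no status field, which is what this report is asking for.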
Public bug reported:
I am trying to test ROUND_ROBIN algorithm with lbaasv2 haproxy on
stable/liberty.
With 4 members (web servers) in the pool, I see some problem with
scheduling in round-robin fashion.
The web server runs on a CirrOS VM using the Unix utility "netcat".
MYIP=$(ifconfig
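The full snippet is the commonly used CirrOS one-liner web server (eth0 assumed as the guest interface):

MYIP=$(ifconfig eth0 | grep 'inet addr' | awk -F: '{print $2}' | awk '{print $1}')
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80; done

Each response carries the member's own IP, which makes the scheduling order directly observable with repeated curl calls against the VIP.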
Public bug reported:
After the steps below, I see the config got cleaned up but a q-lbaas namespace
was left behind on the controller.
Steps:
Create LBaaS loadbalancer for subnet
Create listener
Create pool and add instances as members
Delete pool
Delete listener
Delete LB
stack@controller:/opt/stack/logs$ sudo ip
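The truncated command was presumably listing namespaces; the leftover can be confirmed and cleaned up like this (the suffix is the load balancer UUID):

stack@controller:~$ sudo ip netns list | grep qlbaas
stack@controller:~$ sudo ip netns delete qlbaas-<loadbalancer-id>

Deleting the LB should have torn the namespace down along with the haproxy config.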
Public bug reported:
Setup:
Single controller [48 GB RAM, 16 vCPU, 120 GB disk]
3 Network Nodes
100 ESX hypervisors distributed across 10 nova-compute nodes
Test:
1. Create /16 network
2. Heat template which will launch 100 instances on the network created in step 1
3. Create 10 stacks back to back so
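The back-to-back creation in step 3 was of this shape (template and stack names are placeholders):

$ for i in $(seq 1 10); do heat stack-create -f scale_100.yaml stack$i; done

i.e. a target load of 10 stacks x 100 instances = 1000 instances on the /16 network.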
Public bug reported:
Single controller [48 GB RAM, 16 vCPU, 120 GB disk]
1 Network Node
100 ESX hypervisors distributed across 10 nova-compute nodes
100 KVM hypervisors
Test:
1. Create /16 network
2. Template to spawn 500 instances on network created in step 1
Observation:
1. Tried spinning 500
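A minimal HOT template of the kind described in step 2 (flavor, image, and network ID are placeholders) can be written and launched like so:

$ cat > scale_500.yaml <<'EOF'
heat_template_version: 2013-05-23
resources:
  servers:
    type: OS::Heat::ResourceGroup
    properties:
      count: 500
      resource_def:
        type: OS::Nova::Server
        properties:
          flavor: m1.tiny
          image: cirros-0.3.4-x86_64-uec
          networks:
            - network: <network-id>
EOF
$ heat stack-create -f scale_500.yaml scale-stack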
Public bug reported:
The system was running 2k CirrOS VMs on 100 KVM hypervisors, and I am seeing
the DB exception below while trying to delete them using the nova API.
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and
attach the Nova API log if possible.
(HTTP 500) (Request-ID:
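The deletes were issued through the nova API; a bulk pattern like the following (assuming the VM names contain "cirros") reproduces the load:

$ nova list | grep cirros | awk '{print $2}' | xargs -n1 nova delete

Each nova delete is an independent API call, so 2k of them in quick succession is what surfaces the DB exception.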
Public bug reported:
I had 12 ESX nova-compute clusters with 100 ESX hypervisors. For some reason one
of the nova-compute nodes went down.
After a couple of attempts nova-compute came up fine. But:
1. Nova deleted all the instances running on that particular node (esx-compute11)
from its DB
2. All the
Public bug reported:
By mistake I gave the parameter with a wrong extra spec,
"quota:quota:vif_shares_share=10" instead of
"quota:vif_shares_share=10", and execution of the command was successful. I
didn't see the expected Shares value being set on the VM NIC.
It might be a good idea to validate the parameters
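For reference, the extra spec was set with something like this (flavor name is a placeholder):

$ nova flavor-key m1.small set quota:quota:vif_shares_share=10

The malformed "quota:quota:" key is accepted silently; validating keys against the known quota:* spec names at set time would catch this class of typo.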