[Yahoo-eng-team] [Bug 1628670] Re: Duplicate designate record error shown when floating ip attached to two VMs with same name

2016-09-28 Thread Abhishek Chanda
Imran confirmed that this should not affect designate, so I removed it.

** No longer affects: designate

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628670

Title:
  Duplicate designate record error shown when floating ip attached to
  two VMs with same name

Status in neutron:
  New

Bug description:
  We have observed this error in the designate-neutron integration for
  mitaka.

  When we create a network, add a designate DNS domain to it using
  "neutron net-update --dns-domain my-domain.com", and later assign a
  floating IP, the designate record for the floating IP gets created
  fine.

  However, when we create a second network, add the SAME designate
  domain to it with "neutron net-update --dns-domain my-domain.com", and
  then create a VM with the SAME name (this time in the second network),
  we see this error in the neutron logs upon floating IP association:

  """

  /var/log/neutron/neutron-server.log:9698:2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db [req-4c6ff4f4-a542-49e2-8051-9e16404142c3 1de3aeb554644fccb20bbee5c9f41c9b e6fcd8295a5349e8bb96ec47c30b9cd7 - - -] Error publishing floating IP data in external DNS service. Name: 'nova-test'. Domain: 'my-domain.com.'. DNS service driver message 'Name nova-test is duplicated in the external DNS service'
  /var/log/neutron/neutron-server.log-9699-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db Traceback (most recent call last):
  /var/log/neutron/neutron-server.log-9700-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db   File "/usr/lib/python2.7/site-packages/neutron/db/dns_db.py", line 316, in _add_ips_to_external_dns_service
  /var/log/neutron/neutron-server.log-9701-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db records)
  /var/log/neutron/neutron-server.log-9702-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db   File "/usr/lib/python2.7/site-packages/neutron/services/externaldns/drivers/designate/driver.py", line 142, in create_record_set
  /var/log/neutron/neutron-server.log-9703-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db raise dns.DuplicateRecordSet(dns_name=dns_name)
  /var/log/neutron/neutron-server.log:9704:2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db DuplicateRecordSet: Name nova-test is duplicated in the external DNS service
  /var/log/neutron/neutron-server.log-9705-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db

  """

  Is this due to a lack of multi-network support in the
  designate-neutron integration?

  The scenario is very simple: create two networks, update both with the
  same designate DNS domain, and create two VMs with the same name, one
  in each network. When the floating IP is associated with the second
  VM, we see the duplicate record error.
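
  To illustrate the collision, a minimal sketch (not the actual neutron or
  designate code): the recordset key is effectively derived from the port's
  dns_name plus the network's dns_domain only, so two ports named
  'nova-test' under 'my-domain.com.' collide no matter which network they
  belong to.

    # Illustrative sketch only; the in-memory "existing" set and the helper
    # names are assumptions, not the designate driver's real data structures.
    class DuplicateRecordSet(Exception):
        pass

    existing = set()

    def recordset_key(dns_name, dns_domain):
        # The key is built from name + domain only; the network ID plays no part.
        return '%s.%s' % (dns_name, dns_domain)

    def create_record_set(dns_name, dns_domain, records):
        key = recordset_key(dns_name, dns_domain)
        if key in existing:
            # Mirrors the DuplicateRecordSet error seen in the log above.
            raise DuplicateRecordSet('Name %s is duplicated in the external '
                                     'DNS service' % dns_name)
        existing.add(key)

    create_record_set('nova-test', 'my-domain.com.', ['198.51.100.5'])      # first VM: ok
    try:
        create_record_set('nova-test', 'my-domain.com.', ['198.51.100.6'])  # second VM, other network
    except DuplicateRecordSet as exc:
        print(exc)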

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1628670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1628670] Re: Duplicate designate record error shown when floating ip attached to two VMs with same name

2016-09-28 Thread Abhishek Chanda
** Description changed:

  We have seen observed this error in designate-neutron integration for
  mitaka.
  
-  when we create a network and add a designate dns domain to it using
- "neutron net-update __dns_domain my-domain.com" , and later assign a
+  when we create a network and add a designate dns domain to it using
+ "neutron net-update --dns-domain my-domain.com" , and later assign a
  floating IP, the designate record for floating IP gets created fine.
  
  However, when we create a second network, and add SAME designate domain
  like "neutron net-update __dns_domain my-domain.com", later create the
  VM with SAME name too, but this time the VM is created in this second
  network, upon floating IP association, we see this error in neutron
  logs:
  
  """
  
  /var/log/neutron/neutron-server.log:9698:2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db [req-4c6ff4f4-a542-49e2-8051-9e16404142c3 
1de3aeb554644fccb20bbee5c9f41c9b e6fcd8295a5349e8bb96ec47c30b9cd7 - - -] Error 
publishing floating IP data in external DNS service. Name: 'nova-test'. Domain: 
'my-domain.com.'. DNS service driver message 'Name nova-test is duplicated in 
the external DNS service'
  /var/log/neutron/neutron-server.log-9699-2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db Traceback (most recent call last):
  /var/log/neutron/neutron-server.log-9700-2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db   File 
"/usr/lib/python2.7/site-packages/neutron/db/dns_db.py", line 316, in 
_add_ips_to_external_dns_service
  /var/log/neutron/neutron-server.log-9701-2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db records)
  /var/log/neutron/neutron-server.log-9702-2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db   File 
"/usr/lib/python2.7/site-packages/neutron/services/externaldns/drivers/designate/driver.py",
 line 142, in create_record_set
  /var/log/neutron/neutron-server.log-9703-2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db raise dns.DuplicateRecordSet(dns_name=dns_name)
  /var/log/neutron/neutron-server.log:9704:2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db DuplicateRecordSet: Name nova-test is duplicated in the 
external DNS service
  /var/log/neutron/neutron-server.log-9705-2016-09-28 19:43:15.731 502 ERROR 
neutron.db.dns_db
  
  """
  
- 
- is this because lack of multi-network support in designate neutron 
integration? 
+ is this because lack of multi-network support in designate neutron
+ integration?
  
  the scenario is very simple, create two networks, update both networks
  with same dns designate domain, create two VMs with same name but in
  each separate network, when floating ip is associated to second vm , we
  see duplicate record error

** Also affects: designate
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1628670

Title:
  Duplicate designate record error shown when floating ip attached to
  two VMs with same name

Status in neutron:
  New

Bug description:
  We have observed this error in the designate-neutron integration for
  mitaka.

  When we create a network, add a designate DNS domain to it using
  "neutron net-update --dns-domain my-domain.com", and later assign a
  floating IP, the designate record for the floating IP gets created
  fine.

  However, when we create a second network, add the SAME designate
  domain to it with "neutron net-update --dns-domain my-domain.com", and
  then create a VM with the SAME name (this time in the second network),
  we see this error in the neutron logs upon floating IP association:

  """

  /var/log/neutron/neutron-server.log:9698:2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db [req-4c6ff4f4-a542-49e2-8051-9e16404142c3 1de3aeb554644fccb20bbee5c9f41c9b e6fcd8295a5349e8bb96ec47c30b9cd7 - - -] Error publishing floating IP data in external DNS service. Name: 'nova-test'. Domain: 'my-domain.com.'. DNS service driver message 'Name nova-test is duplicated in the external DNS service'
  /var/log/neutron/neutron-server.log-9699-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db Traceback (most recent call last):
  /var/log/neutron/neutron-server.log-9700-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db   File "/usr/lib/python2.7/site-packages/neutron/db/dns_db.py", line 316, in _add_ips_to_external_dns_service
  /var/log/neutron/neutron-server.log-9701-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db records)
  /var/log/neutron/neutron-server.log-9702-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db   File "/usr/lib/python2.7/site-packages/neutron/services/externaldns/drivers/designate/driver.py", line 142, in create_record_set
  /var/log/neutron/neutron-server.log-9703-2016-09-28 19:43:15.731 502 ERROR neutron.db.dns_db raise dns.DuplicateRecordSet(dns_name=dns_name)
  /var/log/neutron/neutron-server.log:9704:2016-09-28 19:43:15.731 502 

[Yahoo-eng-team] [Bug 1488619] Re: Neutron API reports both routers in active state for L3 HA

2015-09-17 Thread Abhishek Chanda
This turned out to be a split brain problem due to intermittence in the
underlying physical network. Rebooting the boxes fixed it.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488619

Title:
  Neutron API reports both routers in active state for L3 HA

Status in neutron:
  Invalid

Bug description:
  I am running Kilo with L3 HA. Here is what I see:

  # neutron --insecure --os-project-domain-name default --os-user-domain-name default l3-agent-list-hosting-router test-router
  +--------------------------------------+------+----------------+-------+----------+
  | id                                   | host | admin_state_up | alive | ha_state |
  +--------------------------------------+------+----------------+-------+----------+
  | 7dc44513-256a-4d51-b77d-8da6125928ca | one  | True           | :-)   | active   |
  | c91b437a-e300-4b08-8118-b226ae68cc04 | two  | True           | :-)   | active   |
  +--------------------------------------+------+----------------+-------+----------+

  My relevant neutron config on both nodes is
  l3_ha = True
  max_l3_agents_per_router = 2
  min_l3_agents_per_router = 2

  We checked the following:
  1. IP monitor is running on both nodes
  2. Keepalived can talk between the nodes, we see packets on the HA interface

  What are we missing?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488619] [NEW] Neutron API reports both routers in active state for L3 HA

2015-08-25 Thread Abhishek Chanda
Public bug reported:

I am running Kilo with L3 HA. Here is what I see:

# neutron --insecure --os-project-domain-name default --os-user-domain-name default l3-agent-list-hosting-router test-router
+--------------------------------------+------+----------------+-------+----------+
| id                                   | host | admin_state_up | alive | ha_state |
+--------------------------------------+------+----------------+-------+----------+
| 7dc44513-256a-4d51-b77d-8da6125928ca | one  | True           | :-)   | active   |
| c91b437a-e300-4b08-8118-b226ae68cc04 | two  | True           | :-)   | active   |
+--------------------------------------+------+----------------+-------+----------+

My relevant neutron config on both nodes is
l3_ha = True
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2

We checked the following:
1. IP monitor is running on both nodes
2. Keepalived can talk between the nodes, we see packets on the HA interface

What are we missing?
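
For anyone hitting the same thing, a minimal sketch for spotting the
condition from the API, assuming python-neutronclient, admin credentials in
the environment, and that the agent scheduler extension returns an ha_state
field for HA routers:

    # Sketch: flag L3 HA "split brain" by counting agents that report "active".
    import os
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        tenant_name=os.environ['OS_TENANT_NAME'],
        auth_url=os.environ['OS_AUTH_URL'])

    router = neutron.list_routers(name='test-router')['routers'][0]
    agents = neutron.list_l3_agent_hosting_routers(router['id'])['agents']
    masters = [a['host'] for a in agents if a.get('ha_state') == 'active']

    if len(masters) > 1:
        print('split brain: multiple masters reported: %s' % ', '.join(masters))
    else:
        print('master: %s' % (masters[0] if masters else 'none reported'))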

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488619

Title:
  Neutron API reports both routers in active state for L3 HA

Status in neutron:
  New

Bug description:
  I am running Kilo with L3 HA. Here is what I see:

  # neutron --insecure --os-project-domain-name default --os-user-domain-name default l3-agent-list-hosting-router test-router
  +--------------------------------------+------+----------------+-------+----------+
  | id                                   | host | admin_state_up | alive | ha_state |
  +--------------------------------------+------+----------------+-------+----------+
  | 7dc44513-256a-4d51-b77d-8da6125928ca | one  | True           | :-)   | active   |
  | c91b437a-e300-4b08-8118-b226ae68cc04 | two  | True           | :-)   | active   |
  +--------------------------------------+------+----------------+-------+----------+

  My relevant neutron config on both nodes is
  l3_ha = True
  max_l3_agents_per_router = 2
  min_l3_agents_per_router = 2

  We checked the following:
  1. IP monitor is running on both nodes
  2. Keepalived can talk between the nodes, we see packets on the HA interface

  What are we missing?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488619] Re: Neutron API reports both routers in active state for L3 HA

2015-08-25 Thread Abhishek Chanda
Sorry for the miscommunication. This isn't confirmed to be a SELinux
issue right now. We will keep debugging with the pointers you provided.

** Changed in: neutron
   Status: Invalid => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488619

Title:
  Neutron API reports both routers in active state for L3 HA

Status in neutron:
  New

Bug description:
  I am running Kilo with L3 HA. Here is what I see:

  # neutron --insecure --os-project-domain-name default --os-user-domain-name default l3-agent-list-hosting-router test-router
  +--------------------------------------+------+----------------+-------+----------+
  | id                                   | host | admin_state_up | alive | ha_state |
  +--------------------------------------+------+----------------+-------+----------+
  | 7dc44513-256a-4d51-b77d-8da6125928ca | one  | True           | :-)   | active   |
  | c91b437a-e300-4b08-8118-b226ae68cc04 | two  | True           | :-)   | active   |
  +--------------------------------------+------+----------------+-------+----------+

  My relevant neutron config on both nodes is
  l3_ha = True
  max_l3_agents_per_router = 2
  min_l3_agents_per_router = 2

  We checked the following:
  1. IP monitor is running on both nodes
  2. Keepalived can talk between the nodes, we see packets on the HA interface

  What are we missing?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1488619/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460681] [NEW] Neutron DHCP namespaces are not created properly on reboot

2015-06-01 Thread Abhishek Chanda
Public bug reported:

I am running OpenStack Juno on a bunch of docker containers. When my
neutron-network container reboots, the neutron DHCP logs have a bunch
of

2015-05-28 17:49:14.629 475 TRACE neutron.agent.dhcp_agent Stderr:
'RTNETLINK answers: Invalid argument\n' 2015-05-28 17:49:14.629 475
TRACE neutron.agent.dhcp_agent

I noticed that this is because the namespace behaves oddly when the
container comes back up:

# ip netns exec qdhcp-474bd6da-e74f-436a-8408-e10fe5925220 ip a
setting the network namespace "qdhcp-474bd6da-e74f-436a-8408-e10fe5925220" failed: Invalid argument
# ls -la /var/run/netns/
total 8
drwxr-xr-x 2 root root 4096 May 29 14:43 .
drwxr-xr-x 9 root root 4096 May 29 14:43 ..
-- 1 root root 0 May 29 14:43 qdhcp-474bd6da-e74f-436a-8408-e10fe5925220

So, the namespace does exist, but the kernel does not seem to recognize
it.

Note that neutron-dhcp is running in a docker container, and the reboot
here is a `docker restart`.
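
One possible explanation (not confirmed): the bind-mounted files under
/var/run/netns survive the `docker restart`, but the namespaces they
referred to are gone, so setns() fails with EINVAL. A rough sketch, run as
root, for spotting such stale entries:

    # Sketch: list namespaces in /var/run/netns and flag entries whose file
    # still exists but can no longer be entered (stale after the restart).
    # Illustrative only.
    import os
    import subprocess

    NETNS_DIR = '/var/run/netns'

    for name in sorted(os.listdir(NETNS_DIR)):
        # "ip netns exec <ns> true" fails with "Invalid argument" when the
        # mount point exists but no longer refers to a live namespace.
        with open(os.devnull, 'w') as devnull:
            rc = subprocess.call(['ip', 'netns', 'exec', name, 'true'],
                                 stderr=devnull)
        print('%s: %s' % (name, 'ok' if rc == 0 else 'stale (setns failed)'))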

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460681

Title:
  Neutron DHCP namespaces are not created properly on reboot

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am running OpenStack Juno on a bunch of docker containers. When my
  neutron-network container reboots, the neutron DHCP logs have a bunch
  of

  2015-05-28 17:49:14.629 475 TRACE neutron.agent.dhcp_agent Stderr:
  'RTNETLINK answers: Invalid argument\n' 2015-05-28 17:49:14.629 475
  TRACE neutron.agent.dhcp_agent

  I noticed that this is because the namespace behaves oddly when the
  container comes back up:

  # ip netns exec qdhcp-474bd6da-e74f-436a-8408-e10fe5925220 ip a
  setting the network namespace "qdhcp-474bd6da-e74f-436a-8408-e10fe5925220" failed: Invalid argument
  # ls -la /var/run/netns/
  total 8
  drwxr-xr-x 2 root root 4096 May 29 14:43 .
  drwxr-xr-x 9 root root 4096 May 29 14:43 ..
  -- 1 root root 0 May 29 14:43 qdhcp-474bd6da-e74f-436a-8408-e10fe5925220

  So, the namespace does exist, but the kernel does not seem to
  recognize it.

  Note that neutron-dhcp is running in a docker container, and the
  reboot here is a `docker restart`.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460116] [NEW] keepalived should have snmp support enabled

2015-05-29 Thread Abhishek Chanda
Public bug reported:

Keepalived should be started with the `-x` switch to enable snmp integration. 
keepalived will connect to a local snmp daemon, as described in:
http://vincent.bernat.im/en/blog/2011-keepalived-snmp-ipv6.html
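
For illustration, the change boils down to adding '-x' when the agent builds
the keepalived command line; the helper below is a simplified sketch, not
neutron's actual KeepalivedManager code:

    # Sketch: build a keepalived command line with SNMP (AgentX) support enabled.
    def keepalived_cmd(config_path, pid_file, enable_snmp=True):
        cmd = ['keepalived',
               '-P',                 # run the VRRP subsystem only
               '-f', config_path,
               '-p', pid_file]
        if enable_snmp:
            cmd.append('-x')         # connect to the local snmpd via AgentX
        return cmd

    print(' '.join(keepalived_cmd('/etc/keepalived/keepalived.conf',
                                  '/var/run/keepalived.pid')))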

** Affects: neutron
 Importance: Undecided
 Assignee: Abhishek Chanda (abhishek-i)
 Status: In Progress


** Tags: l3-ha

** Tags added: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460116

Title:
  keepalived should have snmp support enabled

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Keepalived should be started with the `-x` switch to enable snmp integration. 
keepalived will connect to a local snmp daemon, as described in:
  http://vincent.bernat.im/en/blog/2011-keepalived-snmp-ipv6.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452039] [NEW] HAProxy LBaaS driver does not work with L3 HA

2015-05-05 Thread Abhishek Chanda
Public bug reported:

We have a deployment with L3 HA. When we deployed the HAProxy LBaaS
driver on it, the haproxy instance sometimes landed on a network node
that was not the master. When this happens, there is no way to access
the load-balanced instances over a floating IP. Here are the steps:

1. Deploy neutron with L3 HA and HAProxy LBaaS driver.
2. Set up a tenant and a public network with an HA router for the public network.
3. Boot three VMs in the tenant network.
4. Create a lb pool.
5. Add two VMs to the pool.
6. Create a health monitor and associate to the pool.
7. Create a VIP.
8. Start servers on the two VMs.
9. Create a floating IP in neutron.
10. Associate the floating IP to the VIP.

At this point, the servers should be accessible from outside the cloud
using the floating IP. But that does not happen if the haproxy instance
is scheduled on a node that is not the master in L3 HA.
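
A quick way to confirm the mismatch on a suspect node (sketch only; assumes
python-neutronclient, admin credentials in the environment, that ha_state is
reported for HA routers, and that the script runs on the node hosting the
haproxy namespace):

    # Sketch: is this node the L3 HA master for the router owning the VIP?
    import os
    import socket
    from neutronclient.v2_0 import client

    neutron = client.Client(
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        tenant_name=os.environ['OS_TENANT_NAME'],
        auth_url=os.environ['OS_AUTH_URL'])

    router_id = 'ROUTER-ID'   # placeholder: the HA router for the public network
    agents = neutron.list_l3_agent_hosting_routers(router_id)['agents']
    masters = [a['host'] for a in agents if a.get('ha_state') == 'active']

    if socket.gethostname() in masters:
        print('this node is the HA master; haproxy here is reachable via the FIP')
    else:
        print('this node is NOT the HA master (master: %s); an haproxy instance '
              'scheduled here cannot be reached via the floating IP' % masters)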

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1452039

Title:
  HAProxy LBaaS driver does not work with L3 HA

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  We have a deployment with L3 HA. When we deployed the HAProxy LBaaS
  driver on it, the haproxy instance sometimes landed on a network node
  that was not the master. When this happens, there is no way to access
  the load-balanced instances over a floating IP. Here are the steps:

  1. Deploy neutron with L3 HA and HAProxy LBaaS driver.
  2. Set up a tenant and a public network with an HA router for the public network.
  3. Boot three VMs in the tenant network.
  4. Create a lb pool.
  5. Add two VMs to the pool.
  6. Create a health monitor and associate to the pool.
  7. Create a VIP.
  8. Start servers on the two VMs.
  9. Create a floating IP in neutron.
  10. Associate the floating IP to the VIP.

  At this point, the servers should be accessible from outside the cloud
  using the floating IP. But that does not happen if the haproxy
  instance is scheduled on a node that is not the master in L3 HA.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1452039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450882] Re: Cannot expose VIP for neutron L3 HA

2015-05-05 Thread Abhishek Chanda
Operator error on my part. Closing this bug and the PR for now. Thanks
for the clarification Assaf.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450882

Title:
  Cannot expose VIP for neutron L3 HA

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  I am running two neutron L3 agents on two physical machines. Once
  neutron is up and running, and HA routers are created, I was hoping to
  get the allocated VIP and route to it from another network (external
  to my cloud). This would allow my VMs to reach the external network.
  However, I have two problems:

  1. The parent VIP range seems hardcoded in
  
https://github.com/openstack/neutron/blob/6bd91060f1d6db1d33c810deb9feefcb0258bde5/neutron/agent/linux/keepalived.py#L160
  and this is a link local range. Thus, there is no easy way to route
  the link local VIP out to another network.

  2. There is no API that exposes the VIP

  It seems the VIP range should be configurable.
  Is there an easy way to do what I need to do here? Or is it not a supported 
use case?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450882] [NEW] Cannot expose VIP for neutron L3 HA

2015-05-01 Thread Abhishek Chanda
Public bug reported:

I am running two neutron L3 agents on two physical machines. Once
neutron is up and running, and HA routers are created, I was hoping to
get the allocated VIP and route to it from another network (external to
my cloud). This would allow my VMs to reach the external network.
However, I have two problems:

1. The parent VIP range seems hardcoded in
https://github.com/openstack/neutron/blob/6bd91060f1d6db1d33c810deb9feefcb0258bde5/neutron/agent/linux/keepalived.py#L160
and this is a link local range. Thus, there is no easy way to route the
link local VIP out to another network.

2. There is no API that exposes the VIP

It seems the VIP range should be configurable.
Is there an easy way to do what I need to do here? Or is it not a supported use 
case?
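
Absent an API, one workaround is to read the addresses straight out of the
router namespace on the active node; a rough sketch (run as root; the
qrouter-<id> naming is the usual convention):

    # Sketch: list the addresses (including the keepalived VIP) configured
    # inside an HA router's namespace. Run as root on the master node.
    import subprocess

    def router_namespace_addresses(router_id):
        ns = 'qrouter-%s' % router_id
        out = subprocess.check_output(['ip', 'netns', 'exec', ns, 'ip', '-o', 'addr'])
        # "ip -o addr" prints: <idx>: <ifname> <family> <address/prefix> ...
        return [line.split()[3] for line in out.decode().splitlines()]

    # print(router_namespace_addresses('<router-id>'))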

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- Cannot expose VIP from neutron L3 HA
+ Cannot expose VIP for neutron L3 HA

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450882

Title:
  Cannot expose VIP for neutron L3 HA

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I am running two neutron L3 agents on two physical machines. Once
  neutron is up and running, and HA routers are created, I was hoping to
  get the allocated VIP and route to it from another network (external
  to my cloud). This would allow my VMs to reach the external network.
  However, I have two problems:

  1. The parent VIP range seems hardcoded in
  
https://github.com/openstack/neutron/blob/6bd91060f1d6db1d33c810deb9feefcb0258bde5/neutron/agent/linux/keepalived.py#L160
  and this is a link local range. Thus, there is no easy way to route
  the link local VIP out to another network.

  2. There is no API that exposes the VIP

  It seems the VIP range should be configurable.
  Is there an easy way to do what I need to do here? Or is it not a supported 
use case?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1402933] [NEW] Neutron l3 agent should have GARP delay configurable

2014-12-15 Thread Abhishek Chanda
Public bug reported:

The GARP delay defaults to 5 seconds in keepalived. A value larger than
the advert interval will cause the router to send GARP packets more
slowly than it sends advert packets. This can increase failover time,
because the passive router will not learn that the master is offline as
quickly.
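
For illustration, this is the kind of vrrp_instance stanza the agent would
need to be able to emit, with the GARP delay driven by configuration instead
of keepalived's 5 second default (a sketch, not the agent's actual template;
values and the interface name are examples):

    # Sketch: render a vrrp_instance block with a configurable garp_master_delay.
    def render_vrrp_instance(name, interface, vrid, advert_int=2, garp_master_delay=1):
        return ('vrrp_instance %s {\n'
                '    interface %s\n'
                '    virtual_router_id %d\n'
                '    advert_int %d\n'
                '    garp_master_delay %d\n'
                '}' % (name, interface, vrid, advert_int, garp_master_delay))

    print(render_vrrp_instance('VR_1', 'ha-example0', 1))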

** Affects: neutron
 Importance: Undecided
 Assignee: Abhishek Chanda (abhishek-i)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Abhishek Chanda (abhishek-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1402933

Title:
  Neutron l3 agent should have GARP delay configurable

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The GARP delay defaults to 5 seconds in keepalived. A value larger
  than the advert interval will cause the router to send GARP packets
  more slowly than it sends advert packets. This can increase failover
  time, because the passive router will not learn that the master is
  offline as quickly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1402933/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395188] [NEW] Neutron has unused code from oslo

2014-11-21 Thread Abhishek Chanda
Public bug reported:

timeutils and importutils have graduated and should be removed from 
openstack-common.conf
strutils is not being used anywhere

** Affects: neutron
 Importance: Undecided
 Assignee: Abhishek Chanda (abhishek-i)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Abhishek Chanda (abhishek-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1395188

Title:
  Neutron has unused code from oslo

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  timeutils and importutils have graduated and should be removed from 
openstack-common.conf
  strutils is not being used anywhere

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1395188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1395188] Re: Neutron has unused code from oslo

2014-11-21 Thread Abhishek Chanda
Duplicate of https://launchpad.net/bugs/1385355

** Changed in: neutron
 Assignee: Abhishek Chanda (abhishek-i) => (unassigned)

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1395188

Title:
  Neutron has unused code from oslo

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  timeutils and importutils have graduated and should be removed from 
openstack-common.conf
  strutils is not being used anywhere

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1395188/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1381575] Re: In a Scale setup nova list returns only the 1000 odd vm's at one shot and not the whole list

2014-10-15 Thread Abhishek Chanda
Added novaclient since this might be a client issue

** Also affects: python-novaclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1381575

Title:
  In a Scale setup nova list returns only the 1000 odd vm's at one shot
  and not the whole list

Status in OpenStack Compute (Nova):
  New
Status in Python client library for Nova:
  New

Bug description:
  In scale tests it is usually seen that nova list returns only about
  1000 VMs and not the whole list, even though more than 2000 instances
  have been provisioned.
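
  For context, the compute API caps each response at osapi_max_limit (1000
  by default), so callers have to page with a marker. A sketch (assuming a
  novaclient version whose servers.list() exposes marker/limit, and an
  already authenticated client object):

    # Sketch: fetch all servers by following marker-based pagination instead
    # of stopping at the first (1000-item) page. "nova" is an authenticated
    # novaclient Client instance.
    def list_all_servers(nova):
        servers, marker = [], None
        while True:
            page = nova.servers.list(limit=1000, marker=marker)
            if not page:
                break
            servers.extend(page)
            marker = page[-1].id
        return servers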

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1381575/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260915] Re: keystone:411 keystone did not start

2013-12-15 Thread Abhishek Chanda
** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1260915

Title:
  keystone:411 keystone did not start

Status in devstack - openstack dev environments:
  New
Status in OpenStack Identity (Keystone):
  New

Bug description:
  Keystone fails to initialize under devstack and prints the following
  message. It appears to be some initialization issue; I am not sure
  whether it is an SSL or PKI cert issue, so a review would be
  appreciated. The MySQL back end seems fine. Here is the call-trace
  failure from devstack ... it is also not clear whether this should be
  filed against devstack or openstack, so any feedback is welcome.

  + screen -S stack -p key -X stuff 'cd /opt/stack/keystone && /opt/stack/keystone/bin/keystone-all --config-file /etc/keystone/keystone.conf --log-config /etc/keystone/logging.conf -d --debug || echo "key failed to start" | tee "/opt/stack/status/stack/key.failure"'
  + echo 'Waiting for keystone to start...'
  Waiting for keystone to start...
  + timeout 60 sh -c 'while ! curl --noproxy '\''*'\'' -s http://10.145.90.61:5000/v2.0/ >/dev/null; do sleep 1; done'
  + die 411 'keystone did not start'
  + local exitcode=0
  + set +o xtrace
  [Call Trace]
  ./stack.sh:874:start_keystone
  /home/stack/devstack/lib/keystone:411:die
  [ERROR] /home/stack/devstack/lib/keystone:411 keystone did not start
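
  For reference, the wait devstack performs is just a poll of the endpoint
  above; the same check as a small Python 2 sketch (URL and 60 second
  timeout taken from the log):

    # Sketch: poll the keystone endpoint until it answers or 60 seconds pass.
    import time
    import urllib2

    def wait_for_keystone(url='http://10.145.90.61:5000/v2.0/', timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                urllib2.urlopen(url, timeout=5)
                return True
            except Exception:
                time.sleep(1)
        return False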

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1260915/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261194] [NEW] The wsgi server should support tcp_keepalive

2013-12-15 Thread Abhishek Chanda
Public bug reported:

This will be useful in deployments with a load balancer
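
A minimal sketch of what this means in practice for an eventlet-based WSGI
server like neutron's (how the option would be named and wired into
neutron's config is an assumption; only the socket option itself is
standard):

    # Sketch: enable TCP keepalive on the listening socket of an eventlet
    # WSGI server, so idle connections held open by a load balancer get probed.
    import socket
    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok\n']

    sock = eventlet.listen(('0.0.0.0', 9696))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    eventlet.wsgi.server(sock, app)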

** Affects: neutron
 Importance: Undecided
 Assignee: Abhishek Chanda (abhishek-i)
 Status: New


** Tags: api

** Changed in: neutron
 Assignee: (unassigned) => Abhishek Chanda (abhishek-i)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1261194

Title:
  The wsgi server should support tcp_keepalive

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This will be useful in deployments with a load balancer

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1261194/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260310] [NEW] gate-tempest-dsvm-full failure with An error occurred while enabling hairpin mode on domain with xml

2013-12-12 Thread Abhishek Chanda
Public bug reported:

Kibana search

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkFuIGVycm9yIG9jY3VycmVkIHdoaWxlIGVuYWJsaW5nIGhhaXJwaW4gbW9kZSBvbiBkb21haW4gd2l0aCB4bWxcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4Njg1MjgzMTk3M30=

This is pretty infrequent

** Affects: nova
 Importance: High
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260310

Title:
  gate-tempest-dsvm-full failure with An error occurred while enabling
  hairpin mode on domain with xml

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Kibana search

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOiBcIkFuIGVycm9yIG9jY3VycmVkIHdoaWxlIGVuYWJsaW5nIGhhaXJwaW4gbW9kZSBvbiBkb21haW4gd2l0aCB4bWxcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM4Njg1MjgzMTk3M30=

  This is pretty infrequent

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260310/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260138] Re: VMWARE: Unable to spawn instances from sparse/ide images

2013-12-12 Thread Abhishek Chanda
Duplicate of #1260139

** Changed in: nova
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260138

Title:
  VMWARE: Unable to spawn instances from sparse/ide images

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  Branch: stable/havana

  Traceback: http://paste.openstack.org/show/54855/

  Steps to reproduce:
  Upload an ide/sparse type image to glance.
  Spawn an instance from that image.

  Actual Result:
  Failed to spawn an image

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1260138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1260489] Re: --debug flag not working in neutron

2013-12-12 Thread Abhishek Chanda
From the description, this looks like a neutronclient issue. I've added
it and marked the bug in nova as invalid. Please assign the bug to
yourself in neutronclient.

** Also affects: python-neutronclient
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1260489

Title:
  --debug flag not working in neutron

Status in OpenStack Compute (Nova):
  Invalid
Status in Python client library for Neutron:
  New

Bug description:
  This is with the neutron master branch, in a single node devstack
  setup. The branch is at commit
  3b4233873539bad62d202025529678a5b0add412.

  If I use the --debug flag in a neutron CLI, for example, port-list, I
  don't see any debug output:

  cloud@controllernode:/opt/stack/neutron$ neutron --debug port-list
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | id                                   | name | mac_address       | fixed_ips                                                                            |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  | 6c26cdc1-acc1-439c-bb47-d343085b7b78 |      | fa:16:3e:32:2c:eb | {"subnet_id": "37f15352-e816-4a03-b58c-b4d5c1fa8e2a", "ip_address": "10.0.0.2"}      |
  | f09b14b2-3162-4212-9d91-f97b22c95f31 |      | fa:16:3e:99:08:6b | {"subnet_id": "d4717b67-fd64-45ed-b22c-dedbd23afff3", "ip_address": "172.24.4.226"}  |
  | f0ba4efd-12ca-4d56-8c7d-e879e4150a63 |      | fa:16:3e:02:41:47 | {"subnet_id": "37f15352-e816-4a03-b58c-b4d5c1fa8e2a", "ip_address": "10.0.0.1"}      |
  +--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
  cloud@controllernode:/opt/stack/neutron$ 

  
  On the other hand, if I use the --debug flag for nova, for example, nova 
list, I see the curl request and response showing up:

  
  cloud@controllernode:/opt/stack/neutron$ nova --debug list

  REQ: curl -i 'http://192.168.52.85:5000/v2.0/tokens' -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "password"}}}'

  RESP: [200] CaseInsensitiveDict({'date': 'Thu, 05 Dec 2013 23:41:07 GMT', 
'vary': 'X-Auth-Token', 'content-length': '8255', 'content-type': 
'application/json'})
  RESP BODY: {access: {token: {issued_at: 2013-12-05T23:41:07.307915, 
expires: 2013-12-06T23:41:07Z, id: 
MIIOkwYJKoZIhvcNAQcCoIIOhDCCDoACAQExCTAHBgUrDgMCGjCCDOkGCSqGSIb3DQEHAaCCDNoEggzWeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMi0wNVQyMzo0MTowNy4zMDc5MTUiLCAiZXhwaXJlcyI6ICIyMDEzLTEyLTA2VDIzOjQxOjA3WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVsbCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAibmFtZSI6ICJhZG1pbiJ9fSwgInNlcnZpY2VDYXRhbG9nIjogW3siZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92Mi9hN2IzOTYwYjk3OTI0YmFiOWE1NWE5ZjlmNjg0YTg3MCIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAiaWQiOiAiMDQyMzVjMmE1ODNlNDAwZDg1NTBkYTI0NmNiZDI1YWEiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzQvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOi
 
AiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojk2OTYvIiwgImlkIjogIjYyNWI1YzM3ZDJlYzQ4ZGRhMTRmZGZmZmMyZjBhMTY0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo5Njk2LyJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJuZXR3b3JrIiwgIm5hbWUiOiAibmV1dHJvbiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjUyLjg1Ojg3NzYvdjIvYTdiMzk2MGI5NzkyNGJhYjlhNTVhOWY5ZjY4NGE4NzAiLCAicmVnaW9uIjogIlJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIiwgImlkIjogIjNmODVjN2ZmZjNjMzRmNWNiMzlmMTZiMzQ2ZmY1Mjc0IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC41Mi44NTo4Nzc2L3YyL2E3YjM5NjBiOTc5MjRiYWI5YTU1YTlmOWY2ODRhODcwIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInZvbHVtZXYyIiwgIm5hbWUiOiAiY2luZGVyIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovLzE5Mi4xNjguNTIuODU6ODc3NC92MyIsICJyZWd