[Yahoo-eng-team] [Bug 1666446] [NEW] Create directory

2017-02-21 Thread Vincent Legoll
Public bug reported:

Cloud-init has write_files, but it looks like it does not have the
capability to create directories.

How is mixing

runcmd:
 - mkdir /home/toto/bin

and

write_files:
 - content: |
     #!/bin/sh
     echo "Hello world!"
   path: /home/toto/bin/hw.sh
   permissions: '0755'

going to work?

Is runcmd guaranteed to happen before write_files?
What about a "users" section?

This is the same kind of problem that is reported here:
https://bugs.launchpad.net/cloud-init/+bug/1486113
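
Whether that combination works depends on module ordering, which
cloud-init takes from cloud.cfg; runcmd only writes a script that the
scripts-user module executes in the final stage, so on a stock
configuration the mkdir would run after write_files, not before. A quick
way to inspect the ordering on a given image (a minimal sketch, assuming
a standard /etc/cloud/cloud.cfg layout):

    # Sketch: print the three module stages to see where write_files and
    # the scripts-user module (which runs runcmd's script) fall.
    import yaml

    with open("/etc/cloud/cloud.cfg") as f:
        cfg = yaml.safe_load(f)

    for stage in ("cloud_init_modules", "cloud_config_modules",
                  "cloud_final_modules"):
        print(stage, cfg.get(stage, []))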

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1666446

Title:
  Create directory

Status in cloud-init:
  New

Bug description:
  Cloud-init has write_files, but it looks like it does not have the
  capability to create directories.

  How is mixing

  runcmd:
   - mkdir /home/toto/bin

  and

  write_files:
   - content: |
       #!/bin/sh
       echo "Hello world!"
     path: /home/toto/bin/hw.sh
     permissions: '0755'

  going to work?

  Is runcmd guaranteed to happen before write_files?
  What about a "users" section?

  This is the same kind of problem that is reported here:
  https://bugs.launchpad.net/cloud-init/+bug/1486113

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1666446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666493] [NEW] auto address allocation conflicting with non-subnet addresses

2017-02-21 Thread Kevin Benton
Public bug reported:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22already%20allocated%20in%20subnet%5C%22

This is happening in v6 tests, and the error seems to indicate that the
ipam manager is getting confused and is using the wrong subnet to check
for overlapping addresses.

** Affects: neutron
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666493

Title:
  auto address allocation conflicting with non-subnet addresses

Status in neutron:
  New

Bug description:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22already%20allocated%20in%20subnet%5C%22

  This is happening in v6 tests, and the error seems to indicate that
  the ipam manager is getting confused and is using the wrong subnet to
  check for overlapping addresses.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666493/+subscriptions



[Yahoo-eng-team] [Bug 1666501] [NEW] add_router_interface does not export port_id

2017-02-21 Thread Maurice Schreiber
Public bug reported:

In order to be able to control, via the policy
"add_router_interface:port_id", who can add a router interface by
providing a port (instead of a subnet), the port_id field needs to be
exported.
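
For context, a toy sketch of neutron-style per-attribute policy
enforcement (assumed names, not the actual neutron code): a rule keyed
"<action>:<attribute>" can only be consulted when that attribute is
present in the validated request body, so an unexported port_id makes
the rule dead weight.

    # Toy model: per-attribute rules fire only for attributes that the
    # API layer actually exports into the request body.
    RULES = {"add_router_interface:port_id": "admin_only"}

    def enforce(action, body, is_admin):
        for attr in body:  # only exported attributes ever appear here
            rule = RULES.get("%s:%s" % (action, attr))
            if rule == "admin_only" and not is_admin:
                raise RuntimeError("policy forbids %s:%s" % (action, attr))

    enforce("add_router_interface", {"subnet_id": "s1"}, is_admin=False)
    try:
        enforce("add_router_interface", {"port_id": "p1"}, is_admin=False)
    except RuntimeError as e:
        print(e)  # policy forbids add_router_interface:port_id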

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666501

Title:
  add_router_interface does not export port_id

Status in neutron:
  New

Bug description:
  In order to be able to control, via the policy
  "add_router_interface:port_id", who can add a router interface by
  providing a port (instead of a subnet), the port_id field needs to be
  exported.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666501/+subscriptions



[Yahoo-eng-team] [Bug 1656010] Re: Incorrect notification to nova about ironic baremetal port (for nodes in 'cleaning' state)

2017-02-21 Thread Chuck Short
** Changed in: ironic (Ubuntu)
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1656010

Title:
  Incorrect notification to nova about ironic baremetal port (for nodes
  in 'cleaning' state)

Status in Ironic:
  Fix Released
Status in neutron:
  In Progress
Status in ironic package in Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  version: newton (2:9.0.0-0ubuntu1~cloud0)

  When neutron tries to bind a port for an Ironic baremetal node, it
  sends a wrong notification to nova about the port being ready. Neutron
  sends it with 'device_id' == ironic-node-id, and nova rejects it as
  'not found' (there is no nova instance with such an id).
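
  In short (a restatement of the log below as data; the uuid is the one
  from the log):

      # Sketch of the mismatch: neutron fills server_uuid from the
      # port's device_id, which for an ironic port in cleaning is the
      # ironic node uuid rather than a nova instance uuid, so nova's
      # os-server-external-events API answers 404 "No instances found".
      event = {"name": "network-changed",
               "server_uuid": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc"}
      payload = {"events": [event]}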

  Log:
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 completed by entity DHCP. 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:147
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153
  neutron.callbacks.manager[22265]: DEBUG Notify callbacks 
[('neutron.plugins.ml2.plugin.Ml2Plugin._port_provisioned--9223372036854150578',
 >)] for port, 
provisioning_complete [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] 
_notify_loop /usr/lib/python2.7/dist-packages/neutron/callbacks/manager.py:142
  neutron.plugins.ml2.plugin[22265]: DEBUG Port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 cannot update to ACTIVE because it is not 
bound. [req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _port_provisioned 
/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py:224
  oslo_messaging._drivers.amqpdriver[22265]: DEBUG sending reply msg_id: 
254703530cd3440584c980d72ed93011 reply queue: 
reply_8b6e70ad5191401a9512147c4e94ca71 time elapsed: 0.0452275519492s 
[req-49434e88-4952-4e9d-a1c4-41dbf6c0091a - - - - -] _send_reply 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
  neutron.notifiers.nova[22263]: DEBUG Sending events: [{'name': 
'network-changed', 'server_uuid': u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] 
send_events /usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:257
  novaclient.v2.client[22263]: DEBUG REQ: curl -g -i --insecure -X POST 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H 
"Accept: application/json" -H "X-Auth-Token: 
{SHA1}592539c9fcd820d7e369ea58454ee17fe7084d5e" -d '{"events": [{"name": 
"network-changed", "server_uuid": "d02c7361-5e3a-4fdf-89b5-f29b3901f0fc"}]}' 
_http_log_request /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:337
  novaclient.v2.client[22263]: DEBUG RESP: [404] Content-Type: 
application/json; charset=UTF-8 Content-Length: 78 X-Compute-Request-Id: 
req-a029af9e-e460-476f-9993-4551f3b210d6 Date: Thu, 12 Jan 2017 15:43:37 GMT 
Connection: keep-alive 
  RESP BODY: {"itemNotFound": {"message": "No instances found for any event", 
"code": 404}}
   _http_log_response 
/usr/lib/python2.7/dist-packages/keystoneauth1/session.py:366
  novaclient.v2.client[22263]: DEBUG POST call to compute for 
http://nova-api.p.ironic-dal-1.servers.com:28774/v2/93c697ef6c2649eb9966900a8d6a73d8/os-server-external-events
 used request id req-a029af9e-e460-476f-9993-4551f3b210d6 _log_request_id 
/usr/lib/python2.7/dist-packages/novaclient/client.py:85
  neutron.notifiers.nova[22263]: DEBUG Nova returned NotFound for event: 
[{'name': 'network-changed', 'server_uuid': 
u'd02c7361-5e3a-4fdf-89b5-f29b3901f0fc'}] send_events 
/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py:263
  oslo_messaging._drivers.amqpdriver[22265]: DEBUG received message msg_id: 
0bf04ac8fedd4234bd6cd6c04547beca reply to 
reply_8b6e70ad5191401a9512147c4e94ca71 __call__ 
/usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
  neutron.db.provisioning_blocks[22265]: DEBUG Provisioning complete for port 
db3766ad-f82b-437d-b8b2-4133a92b1b86 [req-47c505d7-4eb5-4c71-9656-9e0927408822 
- - - - -] provisioning_complete 
/usr/lib/python2.7/dist-packages/neutron/db/provisioning_blocks.py:153

  
  Port info:
  +----------------+-------+
  | Field          | Value |
  +----------------+-------+
  | admin_state_up | True  |

[Yahoo-eng-team] [Bug 1449606] Re: Firewall status changed to Error if a rule was inserted or removed

2017-02-21 Thread Chuck Short
** Changed in: neutron-fwaas (Ubuntu)
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1449606

Title:
  Firewall status changed to Error if a rule was inserted or removed

Status in neutron:
  Incomplete
Status in neutron-fwaas package in Ubuntu:
  Won't Fix

Bug description:
  Inserting/removing a firewall rule is not handled properly.
  Inserting/removing a rule results in:
  - The firewall is removed from the router; the filter table is cleared.
  Old rules were removed and no new rules were created in the iptables
  filter table.
  - The firewall has ERROR status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1449606/+subscriptions



[Yahoo-eng-team] [Bug 1666549] [NEW] Infinite router update in neutron L3 agent (HA)

2017-02-21 Thread Roman Klimenko
Public bug reported:

After a fresh deployment of the environment and a run of OSTF tests (or
rally), the neutron l3-agent logs on the nodes fill (with timestamps
every .003 seconds) with traces like these:
http://paste.openstack.org/show/599851/
which will bring the cluster down when the log partition fills up.


Environment: Fuel 9.0 upgraded to 9.2, fresh install
3 controllers/kafka + 3 computes + 4 storage ceph-osd + 1 LMA nodes

neutron agents 8.3.0:
neutron-dhcp-agent        2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - DHCP agent
neutron-l3-agent          2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - l3 agent
neutron-lbaasv2-agent     2:8.3.0-2~u14.04+mos1   all  Neutron is a virtual network service for Openstack - LBaaSv2 agent
neutron-metadata-agent    2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - metadata agent
neutron-openvswitch-agent 2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - Open vSwitch agent

Steps to reproduce:
1. Deploy OpenStack with Fuel 9.2.
2. Create a rally venv and run the scenario
NeutronNetworks.create_and_delete_routers (concurrency 100 and times 100,
or more).
3. /var/log/neutron/l3-agent.log fills with these traces.

** Affects: mos
 Importance: Undecided
 Status: New


** Tags: area-neutron l3 l3-ha neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666549

Title:
  Infinite router update in neutron L3 agent (HA)

Status in Mirantis OpenStack:
  New

Bug description:
  After a fresh deployment of the environment and a run of OSTF tests
  (or rally), the neutron l3-agent logs on the nodes fill (with
  timestamps every .003 seconds) with traces like these:
  http://paste.openstack.org/show/599851/
  which will bring the cluster down when the log partition fills up.

  
  Environment: Fuel 9.0 upgraded to 9.2, fresh install
  3 controllers/kafka + 3 computes + 4 storage ceph-osd + 1 LMA nodes

  neutron agents 8.3.0:
  neutron-dhcp-agent        2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - DHCP agent
  neutron-l3-agent          2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - l3 agent
  neutron-lbaasv2-agent     2:8.3.0-2~u14.04+mos1   all  Neutron is a virtual network service for Openstack - LBaaSv2 agent
  neutron-metadata-agent    2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - metadata agent
  neutron-openvswitch-agent 2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - Open vSwitch agent

  Steps to reproduce:
  1. Deploy OpenStack with Fuel 9.2.
  2. Create a rally venv and run the scenario
  NeutronNetworks.create_and_delete_routers (concurrency 100 and times
  100, or more).
  3. /var/log/neutron/l3-agent.log fills with these traces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1666549/+subscriptions



[Yahoo-eng-team] [Bug 1666549] Re: Infinite router update in neutron L3 agent (HA)

2017-02-21 Thread Oleg Bondarev
** Project changed: neutron => mos

** Tags added: area-neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666549

Title:
  Infinite router update in neutron L3 agent (HA)

Status in Mirantis OpenStack:
  New

Bug description:
  After a fresh deployment of the environment and a run of OSTF tests
  (or rally), the neutron l3-agent logs on the nodes fill (with
  timestamps every .003 seconds) with traces like these:
  http://paste.openstack.org/show/599851/
  which will bring the cluster down when the log partition fills up.

  
  Environment: Fuel 9.0 upgraded to 9.2, fresh install
  3 controllers/kafka + 3 computes + 4 storage ceph-osd + 1 LMA nodes

  neutron agents 8.3.0:
  neutron-dhcp-agent        2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - DHCP agent
  neutron-l3-agent          2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - l3 agent
  neutron-lbaasv2-agent     2:8.3.0-2~u14.04+mos1   all  Neutron is a virtual network service for Openstack - LBaaSv2 agent
  neutron-metadata-agent    2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - metadata agent
  neutron-openvswitch-agent 2:8.3.0-1~u14.04+mos30  all  OpenStack virtual network service - Open vSwitch agent

  Steps to reproduce:
  1. Deploy OpenStack with Fuel 9.2.
  2. Create a rally venv and run the scenario
  NeutronNetworks.create_and_delete_routers (concurrency 100 and times
  100, or more).
  3. /var/log/neutron/l3-agent.log fills with these traces.

To manage notifications about this bug go to:
https://bugs.launchpad.net/mos/+bug/1666549/+subscriptions



[Yahoo-eng-team] [Bug 1666029] Re: simple_cell_setup should not exit with 1 when there are no compute hosts

2017-02-21 Thread Sylvain Bauza
We already discussed that, and a change was provided
(https://review.openstack.org/#/c/420079/), but as you can see from its
status, it was abandoned.

It was abandoned because we reached a consensus on leaving
simple_cell_setup() as simple and opinionated as possible, and instead
providing a step-by-step experience for operators deploying CellsV2,
described in
https://docs.openstack.org/developer/nova/cells.html#first-time-setup


** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1666029

Title:
  simple_cell_setup should not exit with 1 when there are no compute
  hosts

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When deploying an ironic environment, even though nova-compute is
  running, there will be no compute nodes until ironic nodes are added.
  The user may also not start nova-compute for some reason during a
  fresh deployment.

  So simple_cell_setup should stop returning 1 when there are no compute
  hosts.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1666029/+subscriptions



[Yahoo-eng-team] [Bug 1666606] [NEW] Swift create container modal making calls to api on input "key press" event

2017-02-21 Thread Daniel Castellanos
Public bug reported:

The Create container modal in the swift UI is making calls to the API on
each key press when typing the container name.

Steps to reproduce:

1. Go to Object Store -> Containers
2. Click on Create container
3. Open the developer tools window (Chrome on Mac: Command+Alt+I)
4. Go to the Network tab
5. Type the name of the container

Result:

See how the calls to "/api/swift/containers/<container>/metadata/" stack
up with each key press.

Expected Result:
Not sure if this is desired behavior in order to check the existence of
a container, so this might not be a bug after all.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: angularjs swift

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666606

Title:
  Swift create container modal making calls to api on input "key press"
  event

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Create container modal in swift UI is making calls to the API on
  each Key press when typing in the container name.

  Steps to reproduce:

  1. Go to Object Store -> Containers
  2. Click on Create container
  3. Open the developer tools window (Chrome on Mac: Command+Alt+I)
  4. Go to the Network tab
  5. Type the name of the container

  Result:

  See how the calls to "/api/swift/containers/<container>/metadata/"
  stack up with each key press.

  Expected Result:
  Not sure if this is desired behavior in order to check the existence
  of a container, so this might not be a bug after all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1666606/+subscriptions



[Yahoo-eng-team] [Bug 1444841] Re: Resize instance fails after creating host aggregate

2017-02-21 Thread Sylvain Bauza
Marking the bug as invalid, as it was explained in comment #12 and the
documentation describes the problem, which is a configuration issue:
https://docs.openstack.org/developer/nova/aggregates.html#availability-zones-azs

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444841

Title:
  Resize instance fails after creating host aggregate

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Latest Kilo code

  
  Reproduce steps:

  1. Do not define any host aggregate. AZ of host is 'nova'. Boot one
  instance named 'zhaoqin-nova' whose AZ is 'nova'

  2. Create host aggregate 'zhaoqin' whose AZ is 'zhaoqin-az'. Add the
  host to the 'zhaoqin' aggregate.  Now the AZ of instance 'zhaoqin-nova'
  in the db is still 'nova', and 'nova list' displays the AZ of
  'zhaoqin-nova' as 'zhaoqin-az'.

  3. Resizing 'zhaoqin-nova' fails, no valid host.

  4. Boot another instance 'zhaoqin-my-az' whose AZ is 'zhaoqin-az'.
  Resizing 'zhaoqin-my-az' succeeds.

  5. Remove the host from aggregate 'zhaoqin'.

  6. Resizing 'zhaoqin-nova' succeeds.  Resizing 'zhaoqin-my-az' fails,
  no valid host.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444841/+subscriptions



[Yahoo-eng-team] [Bug 1316621] Re: ebtables calls can race with libvirt

2017-02-21 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/431773
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=486e2f4eb5a02c98958582e366a4d6081ea897e0
Submitter: Jenkins
Branch: master

commit 486e2f4eb5a02c98958582e366a4d6081ea897e0
Author: Kevin Benton 
Date:   Thu Feb 9 15:10:20 2017 -0800

Pass --concurrent flag to ebtables calls

This flag will force ebtables to acquire a lock so we don't
have to worry about ebtables errors occurring if something else
on the system is trying to use ebtables as well.

Closes-Bug: #1316621
Change-Id: I695c01e015fdc201df8f23d9b48f9d3678240266
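
As a rough illustration of what the flag changes (a minimal sketch of an
ebtables wrapper, not neutron's actual rootwrap plumbing):

    # Sketch: run ebtables with --concurrent so simultaneous callers
    # (e.g. libvirt and neutron) serialize on ebtables' lock instead of
    # failing with "Multiple ebtables programs were executing
    # simultaneously".
    import subprocess

    def ebtables(args):
        return subprocess.check_output(["ebtables", "--concurrent"] + list(args))

    # e.g. the rule from the bug report (needs root):
    # ebtables(["-t", "nat", "-I", "PREROUTING", "--logical-in", "br100",
    #           "-p", "ipv4", "--ip-src", "192.168.32.10",
    #           "!", "--ip-dst", "192.168.32.0/22",
    #           "-j", "redirect", "--redirect-target", "ACCEPT"])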


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1316621

Title:
  ebtables calls can race with libvirt

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Sometimes a request to associate a floating IP may fail when using
  nova-network with libvirt, like:

  > 
http://192.168.1.12:8774/v2/258a4b20c77240bf9b386411430683fa/servers/a9e734e4-5310-4191-a7f0-78fca4b367e7/action
  > 
  > BadRequest: Bad request
  > Details: {'message': 'Error. Unable to associate floating ip', 'code': 
'400'}

  The real issue is that the ebtables rootwrap call fails:
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ebtables -t nat -I 
PREROUTING --logical-in br100 -p ipv4 --ip-src 192.168.32.10 ! --ip-dst 
192.168.32.0/22 -j redirect --redirect-target ACCEPT
  Exit code: 255
  Stdout: ''
  Stderr: "Unable to update the kernel. Two possible causes:\n1. Multiple 
ebtables programs were executing simultaneously. The ebtables\n   userspace 
tool doesn't by default support multiple ebtables programs running\n   
concurrently. The ebtables option --concurrent or a tool like flock can be\n   
used to support concurrent scripts that update the ebtables kernel tables.\n2. 
The kernel doesn't support a certain ebtables extension, consider\n   
recompiling your kernel or insmod the extension.\n.\n"

  It happens about once in a whole tempest run, and not always, so kernel
  support and other causes should not apply here.
  Probably already mentioned in
  https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg23422.html.

  As that call in nova is synchronized (locked), could it be that nova
  is actually racing with libvirt itself calling ebtables?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1316621/+subscriptions



[Yahoo-eng-team] [Bug 1666606] Re: Swift create container modal making calls to api on input "key press" event

2017-02-21 Thread Richard Jones
This is intended, to check whether the container name is already in use.
Note that it shouldn't check on every key press if you type quickly;
there is a delay before the check fires (it might be very short, but it
is also configurable).
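
For illustration, a sketch of that debounce idea (the Horizon
implementation is AngularJS; the names below are made up):

    # Trailing-edge debounce: the existence check fires once input has
    # been quiet for `delay` seconds, so fast typing causes one request
    # rather than one per key press.
    import threading

    class Debouncer(object):
        def __init__(self, delay, fn):
            self.delay, self.fn = delay, fn
            self._timer = None

        def call(self, *args):
            if self._timer is not None:
                self._timer.cancel()  # drop the still-pending check
            self._timer = threading.Timer(self.delay, self.fn, args)
            self._timer.start()

    def check_name(name):
        print("GET /api/swift/containers/%s/metadata/" % name)

    check = Debouncer(0.3, check_name)
    for i in range(1, len("mycontainer") + 1):
        check.call("mycontainer"[:i])  # simulated key presses
    # one GET fires, roughly 0.3s after the last "key press"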

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
   Status: Invalid => Won't Fix

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1666606

Title:
  Swift create container modal making calls to api on input "key press"
  event

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  The Create container modal in the swift UI is making calls to the API
  on each key press when typing the container name.

  Steps to reproduce:

  1. Go to Object Store -> Containers
  2. Click on Create container
  3. Open the developer tools window (Chrome on Mac: Command+Alt+I)
  4. Go to the Network tab
  5. Type the name of the container

  Result:

  See how the calls to "/api/swift/containers/<container>/metadata/"
  stack up with each key press.

  Expected Result:
  Not sure if this is desired behavior in order to check the existence
  of a container, so this might not be a bug after all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1666606/+subscriptions



[Yahoo-eng-team] [Bug 1592396] Re: [RFE] Specifying which floatingip to create should not be a restricted operation

2017-02-21 Thread Armando Migliaccio
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1592396

Title:
  [RFE] Specifying which floatingip to create should not be a restricted
  operation

Status in neutron:
  Won't Fix

Bug description:
  Hello

  In my opinion, the --floating-ip-address option to "neutron
  floatingip-create" should by default not be restricted to the admin
  user.

  I notice that we now have the option, when creating a floating ip, to
  specify which IP to create as opposed to only getting a (semi-)random
  IP from the pool.

  neutron floatingip-create --floating-ip-address xx.xx.xx.xx external

  Which is very nice. But I also noticed that this option is, by default,
  limited to the admin user. Why is this? If a user really wants a
  particular free IP, they can likely get it by creating and deleting
  addresses until the one they want comes up.

  In my opinion, we should therefore relax the default policy, allowing
  ordinary users to specify which floating IP to use.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1592396/+subscriptions



[Yahoo-eng-team] [Bug 1666731] [NEW] ofctl timeout error in gate functional tests

2017-02-21 Thread Kevin Benton
Public bug reported:

Functional test failure in test_install_flood_to_tun

RuntimeError: ofctl request
version=0x4,msg_type=0xe,msg_len=None,xid=0xc073a211,OFPFlowMod(buffer_id=4294967295,command=0,cookie=10613959233739808590L,cookie_mask=0,flags=0,hard_timeout=0,idle_timeout=0,instructions=[OFPInstructionActions(actions=[OFPActionPopVlan(len=8,type=18),
OFPActionSetField(tunnel_id=888),
OFPActionOutput(len=16,max_len=0,port=-1,type=0)],type=4)],match=OFPMatch(oxm_fields={'vlan_vid':
4873}),out_group=0,out_port=0,priority=1,table_id=22) timed out

** Affects: neutron
 Importance: High
 Status: New


** Tags: gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666731

Title:
  ofctl timeout error in gate functional tests

Status in neutron:
  New

Bug description:
  Functional test failure in test_install_flood_to_tun

  RuntimeError: ofctl request
  
version=0x4,msg_type=0xe,msg_len=None,xid=0xc073a211,OFPFlowMod(buffer_id=4294967295,command=0,cookie=10613959233739808590L,cookie_mask=0,flags=0,hard_timeout=0,idle_timeout=0,instructions=[OFPInstructionActions(actions=[OFPActionPopVlan(len=8,type=18),
  OFPActionSetField(tunnel_id=888),
  
OFPActionOutput(len=16,max_len=0,port=-1,type=0)],type=4)],match=OFPMatch(oxm_fields={'vlan_vid':
  4873}),out_group=0,out_port=0,priority=1,table_id=22) timed out

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666731/+subscriptions



[Yahoo-eng-team] [Bug 1647910] Re: hostname is set incorrectly if localhostname is fully qualified

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.10.1

---
cloud-init (0.7.9-0ubuntu1~16.10.1) yakkety; urgency=medium

  * debian/copyright: update License field to include Apache-2.0
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/update-grub-legacy-ec2: detect kernels ending in -aws as
ec2 bootable (LP: #1655934).
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]

 -- Scott Moser   Tue, 31 Jan 2017 21:02:28 -0500

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1647910

Title:
  hostname is set incorrectly if localhostname is fully qualified

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  If no data source is available and the local hostname is set to
  "localhost.localdomain", and /etc/hosts looks like:

127.0.0.1   localhost localhost.localdomain localhost4
  localhost4.localdomain4

  Then in sources/__init__.py in get_hostname:

  - util.get_hostname() will return 'localhost.localdomain'
  - util.get_fqdn_from_hosts(hostname) will return 'localhost'
  - 'toks' will be set to [ 'localhost.localdomain', 'localdomain'

  And ultimately the system hostname will be set to
  'localhost.localdomain.localdomain', which isn't useful to anybody.
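
  To make the failure concrete, a hedged reconstruction of those steps
  (names follow the bug text, not the exact code in sources/__init__.py):

      # The buggy path re-appends the domain derived from the local
      # hostname, ignoring that the fqdn lookup returned no domain.
      hostname = "localhost.localdomain"   # util.get_hostname()
      fqdn = "localhost"                   # util.get_fqdn_from_hosts(hostname)

      domain = hostname.partition(".")[2]  # "localdomain"
      toks = [hostname, domain]
      print(".".join(toks))                # localhost.localdomain.localdomain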

  Also reported in:

  https://bugzilla.redhat.com/show_bug.cgi?id=1389048

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1647910/+subscriptions



[Yahoo-eng-team] [Bug 1647910] Re: hostname is set incorrectly if localhostname is fully qualified

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.04.2

---
cloud-init (0.7.9-0ubuntu1~16.04.2) xenial-proposed; urgency=medium

  * debian/update-grub-legacy-ec2: fix shell syntax error. (LP:
#1662221)

cloud-init (0.7.9-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  * debian/copyright: update License field to include Apache.
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/patches/azure-use-walinux-agent.patch: continue relying on
walinux agent in stable release.
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix [Scott Moser]
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]
- user-groups: fix bug when groups was provided as string and had spaces
  [Scott Moser] (LP: #1354694)
- when adding a user, strip whitespace from group list
  [Lars Kellogg-Stedman] (LP: #1354694)
- fix decoding of utf-8 chars in yaml test
- Replace usage of sys_netdev_info with read_sys_net
  [Joshua Harlow] (LP: #1625766)
- fix problems found in python2.6 test. [Joshua Harlow]
- Just use file logging by default [Joshua Harlow] (LP: #1643990)
- Improve formatting for ProcessExecutionError [Wesley Wiedenmeier]
- flake8: fix trailing white space
- Doc: various documentation fixes [Sean Bright]
- cloudinit/config/cc_rh_subscription.py: Remove repos before adding
  [Brent Baude]
- packages/redhat: fix rpm spec file.
- main: set TZ in environment if not already set. [Ryan Harper]

 -- Scott Moser   Mon, 06 Feb 2017 16:18:28 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1647910

Title:
  hostname is set incorrectly if localhostname is fully qualified

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  If no data source is available and the local hostname is set to
  "localhost.localdomain", and /etc/hosts looks like:

127.0.0.1   localhost localhost.localdomain localhost4
  localhost4.localdomain4

  Then in sources/__init__.py in get_hostname:

  - util.get_hostname() will return 'localhost.localdomain'
  - util.get_fqdn_from_hosts(hostname) will return 'localhost'
  - 'toks' will be set to ['localhost.localdomain', 'localdomain']

  And ultimately the system hostname will be set to
  'localhost.localdomain.localdomain', which isn't useful to anybody.

  Also reported in:

  https://bugzilla.redhat.com/show_bug.cgi?id=1389048

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1647910/+subscriptions



[Yahoo-eng-team] [Bug 1643990] Re: cloud-init-local.service messages not written to /var/log/cloud-init.log in systemd

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.04.2

---
cloud-init (0.7.9-0ubuntu1~16.04.2) xenial-proposed; urgency=medium

  * debian/update-grub-legacy-ec2: fix shell syntax error. (LP:
#1662221)

cloud-init (0.7.9-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  * debian/copyright: update License field to include Apache.
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/patches/azure-use-walinux-agent.patch: continue relying on
walinux agent in stable release.
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix [Scott Moser]
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]
- user-groups: fix bug when groups was provided as string and had spaces
  [Scott Moser] (LP: #1354694)
- when adding a user, strip whitespace from group list
  [Lars Kellogg-Stedman] (LP: #1354694)
- fix decoding of utf-8 chars in yaml test
- Replace usage of sys_netdev_info with read_sys_net
  [Joshua Harlow] (LP: #1625766)
- fix problems found in python2.6 test. [Joshua Harlow]
- Just use file logging by default [Joshua Harlow] (LP: #1643990)
- Improve formatting for ProcessExecutionError [Wesley Wiedenmeier]
- flake8: fix trailing white space
- Doc: various documentation fixes [Sean Bright]
- cloudinit/config/cc_rh_subscription.py: Remove repos before adding
  [Brent Baude]
- packages/redhat: fix rpm spec file.
- main: set TZ in environment if not already set. [Ryan Harper]

 -- Scott Moser   Mon, 06 Feb 2017 16:18:28 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1643990

Title:
  cloud-init-local.service messages not written to /var/log/cloud-
  init.log in systemd

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  === Begin SRU Template ===
  [Impact] 
  Cloud-init's logging is inconsistent due to the availability of syslog
  during boot.

  Cloud-init logs to /var/log/cloud-init.log by default.  It does this in
  a way that was originally designed to prefer to use syslog if it was
  available, and then fall back to writing directly to that file.

  Over time this has been shown to be problematic.
  a.) it relied on syslog during boot, and on some distros it wasn't
  present.
  b.) sometimes it would not be available during cloud-init-local.service
  and then would be during cloud-init.service.  The result was that
  the log would have two different time stamp formats (one written
  by rsyslog and one by python logging).
  c.) if rsyslog was used, microseconds were not included in the log.
  d.) since the move to systemd, there have even been times when
  cloud-init's attempt to determine if syslog was available would
  false-positive; that would result in logging not being written to the
  file at all.

  Overall, the complexity was just not found to be worth the benefit.
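
  The simplification ("just use file logging by default") amounts to
  something like the following minimal sketch (format string approximated
  from the log lines below; needs write access to /var/log):

      # Sketch: log straight to /var/log/cloud-init.log via python
      # logging, instead of probing for a syslog socket that may not be
      # up during early boot.
      import logging

      LOG = logging.getLogger("cloudinit")
      handler = logging.FileHandler("/var/log/cloud-init.log")
      handler.setFormatter(logging.Formatter(
          "%(asctime)s - %(filename)s[%(levelname)s]: %(message)s"))
      LOG.addHandler(handler)
      LOG.setLevel(logging.DEBUG)
      LOG.debug("Cloud-init v. 0.7.9 running 'init-local'")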

  [Test Case]
* Launch an instance.
* Look at /var/log/cloud-init.log.
  on start, the cloud-init process logs a message like
  'Cloud-init v 0.7.8 running'
  Look at those messages specifically.  In the example here (lxd), neither
  cloud-init.service nor cloud-init-local.service logged successfully at
  all.

  # grep Cloud-init /var/log/cloud-init.log 
  Dec  2 18:06:56 y2 [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.8 
running 'modules:config' at Fri, 02 Dec 2016 18:06:56 +. Up 5.0 seconds.
  Dec  2 18:06:58 y2 [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.8 
running 'modules:final' at Fri, 02 Dec 2016 18:06:58 +. Up 7.0 seconds.
  Dec  2 18:06:58 y2 [CLOUDINIT] util.py[DEBUG]: Cloud-init v. 0.7.8 
finished at Fri, 02 Dec 2016 18:06:58 +. Datasource DataSourceNoCloud 
[seed=/var/lib/cloud/seed/nocloud-net][dsmode=net].  Up 7.0 seconds

* update to proposed, cleanup reboot
  # enable propose and update
  # c

[Yahoo-eng-team] [Bug 1625766] Re: Fallback networking doesn't handle IOError when reading sys/net/<device>/carrier

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.04.2

---
cloud-init (0.7.9-0ubuntu1~16.04.2) xenial-proposed; urgency=medium

  * debian/update-grub-legacy-ec2: fix shell syntax error. (LP:
#1662221)

cloud-init (0.7.9-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  * debian/copyright: update License field to include Apache.
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/patches/azure-use-walinux-agent.patch: continue relying on
walinux agent in stable release.
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix [Scott Moser]
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]
- user-groups: fix bug when groups was provided as string and had spaces
  [Scott Moser] (LP: #1354694)
- when adding a user, strip whitespace from group list
  [Lars Kellogg-Stedman] (LP: #1354694)
- fix decoding of utf-8 chars in yaml test
- Replace usage of sys_netdev_info with read_sys_net
  [Joshua Harlow] (LP: #1625766)
- fix problems found in python2.6 test. [Joshua Harlow]
- Just use file logging by default [Joshua Harlow] (LP: #1643990)
- Improve formatting for ProcessExecutionError [Wesley Wiedenmeier]
- flake8: fix trailing white space
- Doc: various documentation fixes [Sean Bright]
- cloudinit/config/cc_rh_subscription.py: Remove repos before adding
  [Brent Baude]
- packages/redhat: fix rpm spec file.
- main: set TZ in environment if not already set. [Ryan Harper]

 -- Scott Moser   Mon, 06 Feb 2017 16:18:28 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1625766

Title:
  Fallback networking doesn't handle IOError when reading
  sys/net/<device>/carrier

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  Sometimes reading from /sys/class/net/<device>/carrier returns an
  error which is unhandled, causing fallback networking to not bring
  anything up.
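
  A minimal sketch of the failure mode (the device name and the errno
  choice are illustrative; a downed interface commonly raises EINVAL on
  this read):

      # Sketch: read an interface's carrier flag, tolerating the IOError
      # a downed interface raises instead of aborting fallback networking.
      import errno

      def read_carrier(devname):
          path = "/sys/class/net/%s/carrier" % devname
          try:
              with open(path) as f:
                  return f.read().strip() == "1"
          except IOError as e:
              if e.errno == errno.EINVAL:
                  return False  # interface down: carrier unavailable
              raise

      print(read_carrier("lo"))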

  
  [Original Description]

  I am running Arch on a KVM vps provider. I installed using this
  template: Arch Linux 2016.03 64-bit (template). Everything was working
  fine until I decided to upgrade. I did pacman -Syu and everything
  upgraded without error until it restarted.

  I had to manually install certain python packages. But, I kept getting
  more errors so I joined IRC.

  Here's the log: https://irclogs.ubuntu.com/2016/09/20/%23cloud-
  init.html

  Was told to post it here to sum up everything.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1625766/+subscriptions



[Yahoo-eng-team] [Bug 1582323] Re: Commissioning fails when competing cloud metadata resides on disk

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.04.2

---
cloud-init (0.7.9-0ubuntu1~16.04.2) xenial-proposed; urgency=medium

  * debian/update-grub-legacy-ec2: fix shell syntax error. (LP:
#1662221)

cloud-init (0.7.9-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  * debian/copyright: update License field to include Apache.
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/patches/azure-use-walinux-agent.patch: continue relying on
walinux agent in stable release.
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix [Scott Moser]
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]
- user-groups: fix bug when groups was provided as string and had spaces
  [Scott Moser] (LP: #1354694)
- when adding a user, strip whitespace from group list
  [Lars Kellogg-Stedman] (LP: #1354694)
- fix decoding of utf-8 chars in yaml test
- Replace usage of sys_netdev_info with read_sys_net
  [Joshua Harlow] (LP: #1625766)
- fix problems found in python2.6 test. [Joshua Harlow]
- Just use file logging by default [Joshua Harlow] (LP: #1643990)
- Improve formatting for ProcessExecutionError [Wesley Wiedenmeier]
- flake8: fix trailing white space
- Doc: various documentation fixes [Sean Bright]
- cloudinit/config/cc_rh_subscription.py: Remove repos before adding
  [Brent Baude]
- packages/redhat: fix rpm spec file.
- main: set TZ in environment if not already set. [Ryan Harper]

 -- Scott Moser   Mon, 06 Feb 2017 16:18:28 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1582323

Title:
  Commissioning fails when competing cloud metadata resides on disk

Status in cloud-init:
  Fix Released
Status in MAAS:
  Fix Committed
Status in MAAS 2.1 series:
  Fix Released
Status in MAAS trunk series:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  === Begin SRU Information ===
  [Impact]
  The issue originally reported was that when MAAS attempted to enlist a
  system by booting it with a remote iscsi disk with intent to have cloud-init
  utilize the MAAS metadata service, cloud-init found some metadata from a
  previous use of the system on the local disk.  cloud-init then went on
  to use that data and did not respond to maas.

  The impact in this case was that cloud-init failed to enlist.  The same 
problem
  could occur in any other case where there was data on the local disk that
  provided a datasource for cloud-init.

  The fix provided for this scenario was for MAAS to provide
  configuration on the MAAS-provided kernel command line that tells
  cloud-init it should only attempt to use the MAAS datasource.

  Specifically, as mentioned in comment 7, this looked like:
 root=iscsi: cc:{'datasource_list': ['MAAS']}end_cc \
 cloud-config-url=http://maas/path/to/config ...

  cloud-init then reads that information on boot and it overrides the settings
  found inside the iscsi root device.
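
  Roughly, the override amounts to extracting and parsing the
  cc:...end_cc payload from the kernel command line before any on-disk
  configuration is consulted (a hedged sketch, not cloud-init's exact
  parser):

      # Sketch: cloud-config found on the kernel command line takes
      # precedence over configuration discovered on local disks.
      import re
      import yaml

      def cmdline_config(cmdline):
          m = re.search(r"cc:\s*(.*?)\s*end_cc", cmdline)
          return yaml.safe_load(m.group(1)) if m else {}

      cfg = cmdline_config(
          "root=... cc:{'datasource_list': ['MAAS']}end_cc quiet")
      print(cfg["datasource_list"])  # ['MAAS']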

  [Test Case]
  A test case lives in unit tests now that ensures kernel config overrides
  system config.

  To further test this we could
  a.) cause this situation by
1.) installing a node in maas
2.) putting config drive or nocloud data onto one of the partitions
3.) returning the system to maas
4.) attempt re-deploy.

  b.) use a cloud image, kernel and initramfs and web server
1.) download image, update cloud-init to -proposed.
2.) set up a web service to serve files like MAAS described at
https://maas.ubuntu.com/docs/development/metadata.html
3.) boot image with kernel command line including the cc: and 
cloud-config-url  referencing that web service.
4.) have provided a config drive or nocloud seed disk to the vm.

  The 'b' test above is easier to reproduce in that it does not rely on
  MAAS.

  [Regression Potential]
  Reg

[Yahoo-eng-team] [Bug 1582323] Re: Commissioning fails when competing cloud metadata resides on disk

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.10.1

---
cloud-init (0.7.9-0ubuntu1~16.10.1) yakkety; urgency=medium

  * debian/copyright: update License field to include Apache-2.0
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/update-grub-legacy-ec2: detect kernels ending in -aws as
ec2 bootable (LP: #1655934).
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]

 -- Scott Moser   Tue, 31 Jan 2017 21:02:28 -0500

** Changed in: cloud-init (Ubuntu Yakkety)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1582323

Title:
  Commissioning fails when competing cloud metadata resides on disk

Status in cloud-init:
  Fix Released
Status in MAAS:
  Fix Committed
Status in MAAS 2.1 series:
  Fix Released
Status in MAAS trunk series:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  === Begin SRU Information ===
  [Impact]
  The issue originally reported was that when MAAS attempted to enlist a
  system by booting it with a remote iscsi disk with intent to have cloud-init
  utilize the MAAS metadata service, cloud-init found some metadata from a
  previous use of the system on the local disk.  cloud-init then went on
  to use that data and did not respond to maas.

  The impact in this case was that cloud-init failed to enlist.  The same 
problem
  could occur in any other case where there was data on the local disk that
  provided a datasource for cloud-init.

  The fix provided for this scenario was for MAAS to provide
  configuration on the MAAS-provided kernel command line that tells
  cloud-init it should only attempt to use the MAAS datasource.

  Specifically, as mentioned in comment 7, this looked like:
 root=iscsi: cc:{'datasource_list': ['MAAS']}end_cc \
 cloud-config-url=http://maas/path/to/config ...

  cloud-init then reads that information on boot and it overrides the settings
  found inside the iscsi root device.

  [Test Case]
  A test case lives in unit tests now that ensures kernel config overrides
  system config.

  To further test this we could
  a.) cause this situation by
1.) installing a node in maas
2.) putting config drive or nocloud data onto one of the partitions
3.) returning the system to maas
4.) attempt re-deploy.

  b.) use a cloud image, kernel and initramfs and web server
1.) download image, update cloud-init to -proposed.
2.) set up a web service to serve files like MAAS described at
https://maas.ubuntu.com/docs/development/metadata.html
3.) boot image with kernel command line including the cc: and 
cloud-config-url  referencing that web service.
4.) have provided a config drive or nocloud seed disk to the vm.

  The 'b' test above is easier to reproduce in that it does not rely on
  MAAS.

  [Regression Potential]
  Regression potential is low, in that this feature worked for some time
  in previous releases.  A bad reading of the code made me (smoser) change
  the code intending to fix the problem, but in fact regressed it.  So this
  change is actually reverting a previous change in behavior.

  This was first broken in 16.04 in 0.7.7~bzr1245-0ubuntu1~16.04.1 .

  [Other Info]
  The upstream commit that fixed this behavior (including the added tests)
  is 0b0f254a [1]

  --
  [1] 
https://git.launchpad.net/cloud-init/commit/?id=0b0f254a6935a1b1fff128fa177152dd519e1a3d

  === End SRU Information ===

  A customer reused hardware that had previously deployed a RHEL
  Overcloud-controller, which places metadata on the disk as a
  legitimate source that cloud-init looks at by default.  When the newly
  enlisted node appeared it had the name "overcloud-controller-0"
  instead of maas-enlist, pulled from the disk metadata which had
  overridden MAAS' metadata.  Commissioning continually failed on all of
  the nodes until the di

[Yahoo-eng-team] [Bug 1354694] Re: useradd crashes if group list contains whitespace

2017-02-21 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.9-0ubuntu1~16.04.2

---
cloud-init (0.7.9-0ubuntu1~16.04.2) xenial-proposed; urgency=medium

  * debian/update-grub-legacy-ec2: fix shell syntax error. (LP:
#1662221)

cloud-init (0.7.9-0ubuntu1~16.04.1) xenial-proposed; urgency=medium

  * debian/copyright: update License field to include Apache.
  * debian/update-grub-legacy-ec2: fix to include kernels whose config
has CONFIG_XEN=y (LP: #1379080).
  * debian/patches/azure-use-walinux-agent.patch: continue relying on
walinux agent in stable release.
  * New upstream release.
- doc: adjust headers in tests documentation for consistency.
- pep8: fix issue found in zesty build with pycodestyle.
- integration test: initial commit of integration test framework
  [Wesley Wiedenmeier]
- LICENSE: Allow dual licensing GPL-3 or Apache 2.0 [Jon Grimm]
- Fix config order of precedence, putting kernel command line over system.
  [Wesley Wiedenmeier] (LP: #1582323)
- pep8: whitespace fix [Scott Moser]
- Update the list of valid ssh keys. [Michael Felt]
- network: add ENI unit test for statically rendered routes.
- set_hostname: avoid erroneously appending domain to fqdn
  [Lars Kellogg-Stedman] (LP: #1647910)
- doc: change 'nobootwait' to 'nofail' in docs [Anhad Jai Singh]
- Replace an expired bit.ly link in code comment. [Joshua Harlow]
- user-groups: fix bug when groups was provided as string and had spaces
  [Scott Moser] (LP: #1354694)
- when adding a user, strip whitespace from group list
  [Lars Kellogg-Stedman] (LP: #1354694)
- fix decoding of utf-8 chars in yaml test
- Replace usage of sys_netdev_info with read_sys_net
  [Joshua Harlow] (LP: #1625766)
- fix problems found in python2.6 test. [Joshua Harlow]
- Just use file logging by default [Joshua Harlow] (LP: #1643990)
- Improve formatting for ProcessExecutionError [Wesley Wiedenmeier]
- flake8: fix trailing white space
- Doc: various documentation fixes [Sean Bright]
- cloudinit/config/cc_rh_subscription.py: Remove repos before adding
  [Brent Baude]
- packages/redhat: fix rpm spec file.
- main: set TZ in environment if not already set. [Ryan Harper]

 -- Scott Moser   Mon, 06 Feb 2017 16:18:28 -0500

** Changed in: cloud-init (Ubuntu Xenial)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1354694

Title:
  useradd crashes if group list contains whitespace

Status in cloud-init:
  Fix Released
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Fix Released
Status in cloud-init source package in Yakkety:
  Fix Released

Bug description:
  === Begin SRU Template ===
  [Impact]
  A specific usage of user data to cloud-init will fail to add a user.

  This cloud-config:
    #cloud-config
    users:
  - default
  - name: foobar
    gecos: "My User"
    groups: sudo, adm

  Will fail with information in the cloud-init log showing:
  2016-12-19 21:39:32,713 - util.py[WARNING]: Failed to create group  adm
  2016-12-19 21:39:32,713 - util.py[DEBUG]: Failed to create group  adm
  Traceback (most recent call last):
  ...
  cloudinit.util.ProcessExecutionError: Unexpected error while running command.
  Command: ['groupadd', ' adm']
  Exit code: 3
  Reason: -
  Stdout: ''
  Stderr: "groupadd: ' adm' is not a valid group name\n"

  While changing the last line to the following would work:
    groups: [sudo, adm]
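  The fix boils down to normalizing the string form before creating any
  groups. A minimal sketch of the idea (the helper name is hypothetical;
  this is not the actual cloud-init code):

      def normalize_groups(groups):
          """Accept 'sudo, adm', 'sudo,adm' or ['sudo', 'adm'] and
          always return a clean list of group names."""
          if isinstance(groups, str):
              groups = groups.split(',')
          return [g.strip() for g in groups if g.strip()]

      assert normalize_groups('sudo, adm') == ['sudo', 'adm']
      assert normalize_groups(['sudo', 'adm']) == ['sudo', 'adm']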

  [Test Case]
  $ cat > user-data <<"EOF"
  #cloud-config
  users:
    - default
    - name: foobar
  gecos: "My User"
  groups: sudo, adm
    - name: wark
  groups: [sudo, adm]
  EOF

  $ release=yakkety
  $ name="$release-1354694"

  $ lxc launch "ubuntu-daily:$release" "$name" \
   "--config=user.user-data=$(cat user-data)"

  $ sleep 10

  ## Check foobar is in expected groups
  $ lxc exec $name -- groups foobar
  foobar : foobar adm sudo

  $ lxc exec $name -- groups wark
  wark : wark adm sudo

  $ lxc exec $name -- grep WARN /var/log/cloud-init.log || echo "no warn"
  no warn

  [Regression Potential]
  There are 3 changes in this commit
  a.) if 'groups' entry is a string, split on "," and strip pieces
  The most likely path to failure here is if previously a non-string
  (possibly bytes) was being passed in and now will be ignored.
  That seems unlikely and clearly wrong input.

  b.) fix and unit tests to explicitly set system=False or no_create_home=True.
  Previously those paths did not test the value of the entry, only the
  presence of the entry.
  This meant that these 2 configs were the same:
    users: {name: bob, system: True}
  and
    users: {name: bob, system: False}

  That bug is fixed here so that 'system: False' is just 
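  For item (b), the distinction is between testing that the key exists
  and testing its value; roughly (a hypothetical sketch, not the actual
  cloud-init code):

      kwargs = {'name': 'bob', 'system': False}

      # Old, buggy behaviour: mere presence of the key was enough.
      if 'system' in kwargs:
          pass  # 'system: False' was treated like 'system: True'

      # Fixed behaviour: the value itself is consulted.
      if kwargs.get('system', False):
          pass  # only taken when 'system' is truthy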

[Yahoo-eng-team] [Bug 1643767] Re: horizon jenkins failed:tempest_horizon.tests.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario

2017-02-21 Thread Richard Jones
The log mentioned no longer exists and the bug is not present in recent
tempest runs.

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1643767

Title:
  horizon jenkins
  
failed:tempest_horizon.tests.scenario.test_dashboard_basic_ops.TestDashboardBasicOps.test_basic_scenario

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  You can see the full failure at
  http://logs.openstack.org/12/399812/4/check/gate-horizon-dsvm-tempest-
  plugin-ubuntu-xenial/e827ee0/console.html

  Captured traceback:
  2016-11-22 03:49:21.316333 | ~~~
  2016-11-22 03:49:21.316348 | Traceback (most recent call last):
  2016-11-22 03:49:21.316392 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/tempest_horizon/tests/scenario/test_dashboard_basic_ops.py",
 line 139, in test_basic_scenario
  2016-11-22 03:49:21.316405 | self.check_login_page()
  2016-11-22 03:49:21.316447 |   File 
"/opt/stack/new/tempest/.tox/tempest/local/lib/python2.7/site-packages/tempest_horizon/tests/scenario/test_dashboard_basic_ops.py",
 line 85, in check_login_page
  2016-11-22 03:49:21.316470 | response = 
self._get_opener().open(CONF.dashboard.dashboard_url).read()
  2016-11-22 03:49:21.316490 |   File "/usr/lib/python2.7/urllib2.py", line 
435, in open
  2016-11-22 03:49:21.316504 | response = meth(req, response)
  2016-11-22 03:49:21.316524 |   File "/usr/lib/python2.7/urllib2.py", line 
548, in http_response
  2016-11-22 03:49:21.316541 | 'http', request, response, code, msg, 
hdrs)
  2016-11-22 03:49:21.316572 |   File "/usr/lib/python2.7/urllib2.py", line 
467, in error
  2016-11-22 03:49:21.316586 | result = self._call_chain(*args)
  2016-11-22 03:49:21.316607 |   File "/usr/lib/python2.7/urllib2.py", line 
407, in _call_chain
  2016-11-22 03:49:21.316619 | result = func(*args)
  2016-11-22 03:49:21.316640 |   File "/usr/lib/python2.7/urllib2.py", line 
654, in http_error_302
  2016-11-22 03:49:21.316658 | return self.parent.open(new, 
timeout=req.timeout)
  2016-11-22 03:49:21.316676 |   File "/usr/lib/python2.7/urllib2.py", line 
435, in open
  2016-11-22 03:49:21.316690 | response = meth(req, response)
  2016-11-22 03:49:21.316711 |   File "/usr/lib/python2.7/urllib2.py", line 
548, in http_response
  2016-11-22 03:49:21.316728 | 'http', request, response, code, msg, 
hdrs)
  2016-11-22 03:49:21.316747 |   File "/usr/lib/python2.7/urllib2.py", line 
473, in error
  2016-11-22 03:49:21.316761 | return self._call_chain(*args)
  2016-11-22 03:49:21.316781 |   File "/usr/lib/python2.7/urllib2.py", line 
407, in _call_chain
  2016-11-22 03:49:21.316793 | result = func(*args)
  2016-11-22 03:49:21.316815 |   File "/usr/lib/python2.7/urllib2.py", line 
556, in http_error_default
  2016-11-22 03:49:21.316835 | raise HTTPError(req.get_full_url(), 
code, msg, hdrs, fp)
  2016-11-22 03:49:21.316853 | urllib2.HTTPError: HTTP Error 500: Internal 
Server Error
  2016-11-22 03:49:21.316860 | 

  
  ---

  Below is horizon_error.log,

  2016-11-22 03:49:20.957173 Invalid HTTP_HOST header: '127.0.0.1'. You may 
need to add u'127.0.0.1' to ALLOWED_HOSTS.
  2016-11-22 03:49:20.973949 mod_wsgi (pid=719): Exception occurred processing 
WSGI script '/opt/stack/new/horizon/openstack_dashboard/wsgi/django.wsgi'.
  2016-11-22 03:49:20.973969 Traceback (most recent call last):
  2016-11-22 03:49:20.973979   File 
"/usr/local/lib/python2.7/dist-packages/django/core/handlers/wsgi.py", line 
189, in __call__
  2016-11-22 03:49:20.974013 response = self.get_response(request)
  2016-11-22 03:49:20.974018   File 
"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 
207, in get_response
  2016-11-22 03:49:20.974027 return debug.technical_500_response(request, 
*sys.exc_info(), status_code=400)
  2016-11-22 03:49:20.974032   File 
"/usr/local/lib/python2.7/dist-packages/django/views/debug.py", line 97, in 
technical_500_response
  2016-11-22 03:49:20.974245 html = reporter.get_traceback_html()
  2016-11-22 03:49:20.974253   File 
"/usr/local/lib/python2.7/dist-packages/django/views/debug.py", line 384, in 
get_traceback_html
  2016-11-22 03:49:20.974262 return t.render(c)
  2016-11-22 03:49:20.974267   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 210, in 
render
  2016-11-22 03:49:20.974505 return self._render(context)
  2016-11-22 03:49:20.974512   File 
"/usr/local/lib/python2.7/dist-packages/django/template/base.py", line 202, in 
_render
  2016-11-22 03:49:20.974520 return self.nodelist.render(context)
  2016-

[Yahoo-eng-team] [Bug 1649275] Re: Instance table status polling fails in Firefox

2017-02-21 Thread Richard Jones
Can't reproduce with current master and Firefox 48

** Changed in: horizon
   Status: New => Invalid

** Changed in: horizon
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1649275

Title:
  Instance table status polling fails in Firefox

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I launch a new instance through launch instance wizard, the page 
redirects me to the instances page but the status is not automatically updated 
in the table. The status gets updated when I refresh the page. Strangely, this 
behavior is observed only in firefox. In firefox the row doesn't get 
automatically updated.
  Firefox version: 48
  Openstack version: Mitaka
  OS version: Ubuntu 14.04

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1649275/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666749] [NEW] Update subnet will not bump network revision sometimes

2017-02-21 Thread Hong Hui Xiao
Public bug reported:

dragonflow relies on neutron revision to judge whether an object has
updates. So dragonflow has a series of test cases to verify that neutron
revision works well.

After a recent change[1], an intermittent failure with a high rate can be
observed in the dragonflow jenkins job [2]. The test updates the name of a
subnet and then verifies that the revision of the network increases.

This issue can't be reproduced from the user interface. So, I created 2
unit tests [3] to verify the issue. It looks like the issue does not
reproduce with a new context object.


[1] https://review.openstack.org/#/c/435748
[2] 
http://logs.openstack.org/08/435208/1/check/gate-dragonflow-python35/d6ca0d0/testr_results.html.gz
[3]

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666749

Title:
  Update subnet will not bump network revision sometimes

Status in neutron:
  New

Bug description:
  dragonflow relies on neutron revision to judge whether an object has
  updates. So dragonflow has a series of test cases to verify that
  neutron revision works well.

  After a recent change[1], an intermittent failure with a high rate can
  be observed in the dragonflow jenkins job [2]. The test updates the
  name of a subnet and then verifies that the revision of the network
  increases.

  This issue can't be reproduced from the user interface. So, I created
  2 unit tests [3] to verify the issue. It looks like the issue does not
  reproduce with a new context object.

  
  [1] https://review.openstack.org/#/c/435748
  [2] 
http://logs.openstack.org/08/435208/1/check/gate-dragonflow-python35/d6ca0d0/testr_results.html.gz
  [3]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666749/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1622624] Re: Message missing for Not Implemented IPv6 Combos

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622624

Title:
  Message missing for Not Implemented IPv6 Combos

Status in neutron:
  Expired

Bug description:
  As per [1] , there are certain combinations for IPv6 Address Mode and
  IPv6 Router Advertisement which are not currently implemented.

  Expected output:
  A proper message should be raised to inform the API consumers about the
  missing/incorrect combination of IPv6 addressing options.

  Actual output:
  Currently no message is raised for them to the API consumers.


  [1]: http://docs.openstack.org/mitaka/networking-guide/config-
  ipv6.html#ipv6-ra-mode-and-ipv6-address-mode-combinations

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1622624/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1532106] Re: while bringing up the neutron network on the openstack kilo, virtual switch internal bridges values might be greater than 4094

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1532106

Title:
  while bringing up the neutron network on the openstack kilo, virtual
  switch internal bridges values might be greater than 4094

Status in neutron:
  Expired

Bug description:
  On Kilo, we have the following issue.
  When neutron comes up, the bridges come up with a tag value of 4095, which is
  out of the valid range of 0-4094.
  Due to this the instance never comes up.
  Once we delete and re-create it, it comes up fine.

  # ovs-vsctl show
  0b80f15d-ee1d-48b7-90b5-9192ab423f20
      Bridge br-int
          fail_mode: secure
          Port "qr-e1269850-cf"
              tag: 4095
              Interface "qr-e1269850-cf"
                  type: internal
          Port "qg-7a2410cc-3e"
              tag: 4095
              Interface "qg-7a2410cc-3e"
                  type: internal
          Port "qr-f41e6139-7e"
              tag: 4095
              Interface "qr-f41e6139-7e"
                  type: internal
          Port "tap30314ee9-be"
              tag: 4095
              Interface "tap30314ee9-be"
                  type: internal
          Port br-int
              Interface br-int
                  type: internal
          Port "tap24e08b6f-be"
              tag: 4095
              Interface "tap24e08b6f-be"
                  type: internal

  We deleted the existing instance, route and external network and re-created
  the neutron network again; it did not fall into the same tag values again.
  Here is the info...

  [root@cassini tmp]# ovs-vsctl show
  0b80f15d-ee1d-48b7-90b5-9192ab423f20
      Bridge br-int
          fail_mode: secure
          Port "qg-9698414d-56"
              tag: 5
              Interface "qg-9698414d-56"
                  type: internal
          Port patch-tun
              Interface patch-tun
                  type: patch
                  options: {peer=patch-int}
          Port "qvo86f86280-d0"
              tag: 4
              Interface "qvo86f86280-d0"
          Port "qr-b643dac3-69"
              tag: 4
              Interface "qr-b643dac3-69"
                  type: internal
          Port int-br-ex
              Interface int-br-ex
                  type: patch
                  options: {peer=phy-br-ex}
          Port "qvo9ceab7b9-12"
              tag: 3
              Interface "qvo9ceab7b9-12"
          Port "qr-5210e32f-a1"
              tag: 3
              Interface "qr-5210e32f-a1"
                  type: internal
          Port br-int
              Interface br-int
                  type: internal
          Port "tap4d4815bb-9f"
              tag: 4
              Interface "tap4d4815bb-9f"
                  type: internal
          Port "tape03b5305-94"
              tag: 3
              Interface "tape03b5305-94"
                  type: internal
      Bridge br-tun
          fail_mode: secure
          Port br-tun
              Interface br-tun
                  type: internal
          Port patch-int
              Interface patch-int
                  type: patch
                  options: {peer=patch-tun}
      Bridge br-ex
          Port br-ex
              Interface br-ex
                  type: internal
          Port phy-br-ex
              Interface phy-br-ex
                  type: patch
                  options: {peer=int-br-ex}
          Port "eno1"
              Interface "eno1"
      ovs_version: "2.3.1"
  [root@cassini tmp]#

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1532106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535218] Re: NotFound may be returned by keystone

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535218

Title:
  NotFound may be returned by keystone

Status in neutron:
  Expired

Bug description:
  How to reproduce:
     1. Remove nova_admin_tenant_id from /etc/neutron/neutron.conf
     2. Boot an instance

  Expected log:
  Keystone returned NotFound for event:...

  Actual log:
     Nova returned NotFound for event:...

  Related log (we have not fixed bug 1309187 yet) :

   2016-01-15 13:38:17.449 3509717 ERROR neutron.notifiers.nova [-] Failed to 
notify nova on events: [{'status': 'completed', 'tag': 
u'55524e69-9bf8-408e-bb9c-4239084837e9', 'name': 'network-vif-unplugged', 
'server_uuid': u'a7466a5f-ca73-4ded-b0c3-77374f676b26'}]
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova Traceback (most 
recent call last):
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/neutron/notifiers/nova.py", line 223, in 
send_events
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova 
batched_events)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/v1_1/contrib/server_external_events.py",
 line 39, in create
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova 
return_raw=True)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/base.py", line 152, in _create
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova _resp, body 
= self.api.client.post(url, body=body)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 312, in post
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova return 
self._cs_request(url, 'POST', **kwargs)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 275, in 
_cs_request
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova 
self.authenticate()
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 408, in 
authenticate
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova auth_url = 
self._v2_auth(auth_url)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 495, in _v2_auth
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova return 
self._authenticate(url, body)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 508, in 
_authenticate
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova **kwargs)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 268, in 
_time_request
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova resp, body = 
self.request(url, method, **kwargs)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova   File 
"/usr/lib/python2.6/site-packages/novaclient/client.py", line 262, in request
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova raise 
exceptions.from_response(resp, body, url, method)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova NotFound: Not 
found (HTTP 404)
  2016-01-15 13:38:17.449 3509717 TRACE neutron.notifiers.nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535218/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533704] Re: networking doesn't work for VMs on xen

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533704

Title:
  networking doesn't work for VMs on xen

Status in neutron:
  Expired

Bug description:
  I didn't experience it myself, I got an email from Tom Carroll
  explaining this problem with lots of details. I thought I'd file a bug
  so that other people can benefit.

  This is the report:

  "I've been attempting to use liberty neutron on XenServer and I've
  noticed some changes that make it difficult to do so.  These changes
  begin with commit 3543d8858691c1a709127e25fc0838e054bd34ef, the
  delegating of is_active() to AsyncProcess.

  The root cause of the problem is that the root helper, in this case
  neutron-rootwrap-xen-dom0, runs in a domU, but executes commands in
  dom0.

  In this scenario, AsyncProcess.pid returns None. This is due to trying
  to traverse from the root helper down to its leaf children. And again, the
  children are running in a different dom. As a consequence,
  AsyncProcess.is_active() returns false, causing the ovsdb client to be
  eventually respawned.

  Another complicating scenario, is neutron-rootwrap-xen-dom0
  communicates with dom0 using an XMLRPC style protocol. It reads the
  entire stdin, launches the command in dom0 providing the buffer to
  stdin, reads the entire stdout, and responds back. If the command
  never ends, a response will never be returned.

  The end result is that new interfaces are never annotated with the
  proper 1Q tag, which means that the network is inoperable for the VM.

  A complete restart of the neutron agent fixes up the networking."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533704/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1517628] Re: DBreferenceError raised when port isn't found in portbindings_db

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1517628

Title:
  DBreferenceError raised when port isn't found in portbindings_db

Status in neutron:
  Expired

Bug description:
  We have been seeing this trace in the OVN logs

  http://logs.openstack.org/66/244866/1/check/gate-tempest-dsvm-
  networking-
  ovn/35ed090/logs/screen-q-svc.txt.gz?level=TRACE#_2015-11-12_21_51_21_591

  it looks like the rpc interface is updating the port binding info on a
  port after it has been deleted, and thus this db error is being raised.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1517628/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531440] Re: update sanity check to manipulate VF instead of parsing the usage

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531440

Title:
  update sanity check to manipulate VF instead of parsing the usage

Status in neutron:
  Expired

Bug description:
  we should refactor all the VF management checks in
  
https://github.com/openstack/neutron/blob/master/neutron/cmd/sanity/checks.py#L147-L165
  to look for a VF and run some ip link commands instead of parsing the
  usage text. I will do it.

  This is because some features may not be supported at the driver level.
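  A sketch of what manipulating a VF instead of parsing the usage text
  could look like (hypothetical; the device argument and the 'state'
  option are assumptions, not the actual sanity-check code):

      import subprocess

      def vf_state_setting_supported(device):
          # Probe support by attempting a real 'ip link' operation
          # rather than grepping the command's usage output.
          cmd = ['ip', 'link', 'set', device, 'vf', '0', 'state', 'auto']
          try:
              subprocess.check_output(cmd, stderr=subprocess.STDOUT)
              return True
          except (OSError, subprocess.CalledProcessError):
              return False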

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531440/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527420] Re: Neutron does not log UserID or TenantID

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527420

Title:
  Neutron does not log UserID or TenantID

Status in neutron:
  Expired

Bug description:
  Neutron shows the same issues as this devstack bug:
  https://bugs.launchpad.net/devstack/+bug/1399788

  We are using user_identity, which is showing 'none' in the logs.

  from log.py/options.py:

  log_opts = [
  cfg.StrOpt('logging_context_format_string',
     default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
     '%(name)s [%(request_id)s %(user_identity)s] '
     '%(instance)s%(message)s',

  Log Snippets:

   | success | rc=0 >>
  2015-12-15 13:26:48.298 43398 INFO neutron.wsgi 
[req-f408d470-2b16-49e4-8f93-3b4c3ef284e7 None] 172.29.236.10 - - [15/Dec/2015 
13:26:48] "DELETE /v2.0/routers/27354e70-87d7-4266-a5f6-0f09827f6b42.json 
HTTP/1.1" 204 149 0.198291

   | success | rc=0 >>
  2015-12-15 13:26:50.278 43009 INFO neutron.wsgi 
[req-a07e31a6-b85f-4a83-baf2-c3c18c259877 None] 172.29.236.10 - - [15/Dec/2015 
13:26:50] "DELETE /v2.0/routers/698f8934-a85d-455c-aa4f-eec7fde36dd7.json 
HTTP/1.1" 204 149 0.115795

  I propose changing the default to the fix used in devstack:

  log_opts = [
  cfg.StrOpt('logging_context_format_string',
     default='%(asctime)s.%(msecs)03d %(process)d %(levelname)s '
     '%(name)s [%(request_id)s %(user_name)s 
%(project_name)s] '
     '%(instance)s%(message)s',

  This applies to both Juno and Kilo, I have not checked anything
  earlier.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527420/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527049] Re: is_dvr_serviced in unbind_router_servicenode is duplicated and unnecessary

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527049

Title:
  is_dvr_serviced in unbind_router_servicenode is duplicated and
  unnecessary

Status in neutron:
  Expired

Bug description:
  In method unbind_router_servicenode, it checks whether any dvr-serviced
  port still exists on the node, by getting all ports on the host that are
  related to the given router:

  for subnet in subnet_ids:
      ports = (
          self._core_plugin.get_ports_on_host_by_subnet(
              context, host, subnet))
      for port in ports:
          if (n_utils.is_dvr_serviced(port['device_owner'])):
              port_found = True
              LOG.debug('One or more ports exist on the snat '
                        'enabled l3_agent host %(host)s and '
                        'router_id %(id)s',
                        {'host': host, 'id': router_id})
              break
  
https://github.com/openstack/neutron/blob/master/neutron/db/l3_dvrscheduler_db.py#L293-L303

  but the logic in the inner for loop here is duplicated and unnecessary, since
  get_ports_on_host_by_subnet already returns only dvr-serviced ports, so it
  does not need to check again.
  in get_ports_on_host_by_subnet:

  for port in ports:
      device_owner = port['device_owner']
      if (utils.is_dvr_serviced(device_owner)):
          if port[portbindings.HOST_ID] == host:
              port_dict = self.plugin._make_port_dict(
                  port, process_extensions=False)
              ports_by_host.append(port_dict)

  
https://github.com/openstack/neutron/blob/master/neutron/db/dvr_mac_db.py#L128-L156

  ### update ###
  Per 
https://review.openstack.org/#/c/255374/4/neutron/db/l3_agentschedulers_db.py, 
only method check_ports_exist_on_l3agent(or its new version 
check_dvr_serviceable_ports_on_host) is necessary to be used in 
unbind_router_servicenode.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527049/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1519912] Re: Releasenotes have broken links

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1519912

Title:
  Releasenotes have broken links

Status in neutron:
  Expired

Bug description:
  If you run "tox -e releasenotes", you get these warnings:
  
/home/aj/Software/vcs/OpenStack/openstack/neutron/releasenotes/source/liberty.rst:3:
 WARNING: Duplicate explicit target name: "this".
  
/home/aj/Software/vcs/OpenStack/openstack/neutron/releasenotes/source/liberty.rst:3:
 WARNING: Duplicate explicit target name: "this".
  
/home/aj/Software/vcs/OpenStack/openstack/neutron/releasenotes/source/liberty.rst:3:
 WARNING: Duplicate explicit target name: "here".

  The problem is that several release notes use a link of the form
  `here <...>` - but they all end up in one file, and the RST specification
  treats "here" as an explicit target name, hence the duplicate-target warnings.
  I suggest reworking the links completely to avoid these (for instance with
  anonymous hyperlink references, which use a trailing double underscore).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1519912/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523835] Re: egress sg_rule use 'dest_ip_prefix' but not 'source_ip_prefix'

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1523835

Title:
  egress sg_rule use 'dest_ip_prefix' but not 'source_ip_prefix'

Status in neutron:
  Expired

Bug description:
  When we add an sg_rule, if it is an ingress rule, the remote CIDR
  x.x.x.x/x means traffic whose source IP belongs to the CIDR satisfies
  this rule; if it is an egress rule, the remote CIDR x.x.x.x/x means
  traffic whose destination IP belongs to the CIDR satisfies this rule.

  But the test cases for sg egress rules in
  neutron/tests/unit/agent/linux/test_iptables_firewall.py use the wrong
  prefix for add_rule, which should be fixed. Take one for example:

  def test_filter_ipv4_egress_prefix(self):
      prefix = FAKE_PREFIX['IPv4']
      rule = {'ethertype': 'IPv4',
              'direction': 'egress',
              'source_ip_prefix': prefix}
      egress = mock.call.add_rule(
          'ofake_dev', '-s %s -j RETURN' % prefix, comment=None)
      ingress = None
      self._test_prepare_port_filter(rule, ingress, egress)

'source_ip_prefix' should change to 'dest_ip_prefix'.
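  Assuming the fix is just swapping the rule key and the matching
  iptables direction flag (-s to -d), the corrected test would look
  roughly like this (same test-class context as the snippet above):

      def test_filter_ipv4_egress_prefix(self):
          prefix = FAKE_PREFIX['IPv4']
          rule = {'ethertype': 'IPv4',
                  'direction': 'egress',
                  'dest_ip_prefix': prefix}  # egress matches destination
          egress = mock.call.add_rule(
              'ofake_dev', '-d %s -j RETURN' % prefix, comment=None)
          ingress = None
          self._test_prepare_port_filter(rule, ingress, egress)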

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1523835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1531273] Re: AsyncProcess.pid is None when root helper executes command on another system

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1531273

Title:
  AsyncProcess.pid is None when root helper executes command on another
  system

Status in neutron:
  Expired

Bug description:
  Version: Liberty
  Compute hypervisor: XenServer 6.5SP1
  Compute vm: Ubuntu 14.04.3

  When a root helper executes commands in another system context,
  AsyncProcess.pid returns None. This affects the execution of certain
  AsyncProcess's including ovsdb_monitor.

  When running with XenServer compute nodes, root_helper = /usr/bin
  /neutron-rootwrap-xen-dom0. The wrapper runs in domU while the
  commands are executed in dom0. AsyncProcess.pid returns None and
  AsyncProcess.is_active() returns false. This is due to
  utils.get_root_helper_child_pid() returning None, as the local system
  cannot observe the dom0 child processes.
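  A minimal sketch of the failing pattern (hypothetical, not the actual
  neutron code): a child-pid lookup that can only see the local /proc.

      import os

      def find_local_child(parent_pid):
          # When the rootwrap helper forwards the real command to dom0,
          # that process has no entry in the local (domU) /proc, so this
          # scan finds nothing and callers conclude the process is dead.
          for pid in os.listdir('/proc'):
              if not pid.isdigit():
                  continue
              try:
                  with open('/proc/%s/status' % pid) as f:
                      for line in f:
                          if line.startswith('PPid:'):
                              if int(line.split()[1]) == int(parent_pid):
                                  return int(pid)
                              break
              except (IOError, ValueError):
                  continue
          return None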

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1531273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530070] Re: Neutron Netns Cleanup script fails to delete namespaces after reboot

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1530070

Title:
  Neutron Netns Cleanup script fails to delete namespaces after reboot

Status in neutron:
  Expired

Bug description:
  After rebooting a node which held an active VRRP router, DHCP, and metadata
  agent, the neutron-netns-cleanup utility failed to delete stale namespaces.
  The utility fails with:

  seting the network namespace "qrouter-3d4e5634-59f0-401e-
  9f28-6c8daaec311c" failed: Invalid argument

  The reason is a bug in iproute which fails to do any operation on a stale 
namespaces which appear in /var/run/netns like this:
  root@stratonode66 ~# ls -l /var/run/netns/ 
  total 0
  -r--r--r-- 1 root root 0 Dec 24 13:38 qdhcp-0a348422-97e2-4ab6-bb22-55994a125823
  -r--r--r-- 1 root root 0 Dec 24 11:54 qdhcp-2258aa3f-d256-4c9f-9e48-16811fc57981
  -r--r--r-- 1 root root 0 Dec 24 13:38 qdhcp-3ceb1f27-e3fc-413a-a184-567041f073e2
  -r--r--r-- 1 root root 0 Dec 24 11:54 qdhcp-62a51b66-d0e2-42fc-bdf2-2d622a889e75
  -r--r--r-- 1 root root 0 Dec 24 11:54 qdhcp-81b550a2-c483-4280-a83a-b560ecdc416b
  ---------- 1 root root 0 Dec 23 13:54 qrouter-3d4e5634-59f0-401e-9f28-6c8daaec311c
  ---------- 1 root root 0 Dec 24 11:25 qrouter-69d20923-da78-4c6b-bb24-967dd67acb1d
  ---------- 1 root root 0 Dec 23 13:54 qrouter-cc649801-96ec-4d59-90de-1004fc026024

  This bug s related, but doesn't solve the issue after reboot:
  https://bugs.launchpad.net/neutron/+bug/1052535.

  I solved it by fixing the neutron-netns-cleanup --force code, with
  this patch:

  diff --git a/neutron/agent/netns_cleanup_util.py b/neutron/agent/netns_cleanup_util.py
  index 771a77f..3c43480 100644
  --- a/neutron/agent/netns_cleanup_util.py
  +++ b/neutron/agent/netns_cleanup_util.py
  @@ -132,8 +132,13 @@ def destroy_namespace(conf, namespace, force=False):
           # NOTE: The dhcp driver will remove the namespace if is it empty,
           # so a second check is required here.
           if ip.netns.exists(namespace):
  -            for device in ip.get_devices(exclude_loopback=True):
  -                unplug_device(conf, device)
  +            try:
  +                for device in ip.get_devices(exclude_loopback=True):
  +                    unplug_device(conf, device)
  +            except RuntimeError:
  +                LOG.info(_('Keep calm, and destroy namespace: %s'), namespace)
  +                ip.netns.delete(namespace)
  +                return
   
           ip.garbage_collect_namespace()
       except Exception:

  When I run the cleanup after reboot with this patch, the namespaces are
  cleaned up, and when starting neutron-openvswitch-agent.service, neutron-
  dhcp-agent.service and neutron-l3-agent.service they are recreated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1530070/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524531] Re: ip_lib_force_root doesn't force root, breaking neutron on XenServer compute nodes

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524531

Title:
  ip_lib_force_root doesn't force root, breaking neutron on XenServer
  compute nodes

Status in neutron:
  Expired

Bug description:
  Version: Liberty
  Compute hypervisor: XenServer 6.5
  Compute vm: Ubuntu 14.04.3

  With this option set, it is documented that all ip_lib commands should
  be executed with the assistance of the defined root_helper. This does
  not occur. This is necessary as root_helper executes commands in the
  Dom0 context.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1524531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518995] Re: l3-agent router sync unbinds port on down ovs-agent

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518995

Title:
  l3-agent router sync unbinds port on down ovs-agent

Status in neutron:
  Expired

Bug description:
  Using ha-routers, we've found that taking network node agents down
  for T > agent_down_time, and then bringing them up, fires a race
  condition during ovs-agent and l3-agent boot.

  Even if you set a constraint on ovs-agent being up before l3-agent,
  that won't work, because still the l3-agent will start and sync
  routers before the ovs-agent is able to report himself active back to
  neutron server.

  A possible solution:

  Using systemd notify from the openvswitch-agent, so it won't notify
  systemd about readiness until the first heartbeat has been sent to
  neutron-server.
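  A minimal sketch of that sd_notify handshake (assuming the standard
  NOTIFY_SOCKET protocol; illustrative only, not actual agent code):

      import os
      import socket

      def notify_ready():
          # Tell systemd READY=1; call this only after the agent has
          # sent its first heartbeat to neutron-server.
          addr = os.environ.get('NOTIFY_SOCKET')
          if not addr:
              return  # not started by systemd with Type=notify
          if addr.startswith('@'):
              addr = '\0' + addr[1:]  # abstract socket namespace
          sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
          try:
              sock.sendto(b'READY=1', addr)
          finally:
              sock.close()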

  Other solutions could be detecting the unbound state ("Device x
  not defined on plugin") from the ovs-agent and trying to rebind it if
  the port is present.

  
  How it reproduces:

  1) neutron-servers and agents are taken down by being set to standby for
  some time, T > agent_down_time

  2) the cluster is taken out of standby, neutron-servers and agents are
  started again, first neutron-server, then agents

  3) neutron servers consider the agents to be down (last heartbeat >
  agent_down_time) and, when checking ports in the database, reset them
  to binding failed...

  4) agents start, try to fetch info from the ports, as those are in
  status binding failed... they get no info from the server (or the
  interpretation is wrong "Device xxx not defined on plugin")

  5) agents mark ports as dead vlan (4095), as the port "does not exist"
  on the server

  
  Analysis over the logs:


  -- SRV2 tries to bind port on dead agent (same happens for SRV1
  and SRV3; omitted) ---

  2015-08-25 09:37:11.687000 [SRV2] 16522 DEBUG
  neutron.plugins.ml2.managers [req-fd4c17c2-0acc-4bb6-a652-91359eda496b
  None] Attempting to bind port 6adcffbf-09d5-4a85-9339-9d6beb2bf82c on
  host neutron-n-0 for vnic_type normal with profile  bind_port
  /usr/lib/python2.7/site-packages/neutron/plugins/ml2/managers.py:574

  2015-08-25 09:37:11.687000 [SRV2] 16522 DEBUG
  neutron.plugins.ml2.drivers.mech_agent [req-fd4c17c2-0acc-
  4bb6-a652-91359eda496b None] Attempting to bind port 6adcffbf-
  09d5-4a85-9339-9d6beb2bf82c on network 2c52ca73-b11f-426f-
  9b78-185ef15809e3 bind_port /usr/lib/python2.7/site-
  packages/neutron/plugins/ml2/drivers/mech_agent.py:57

  2015-08-25 09:37:11.689000 [SRV2] 16522 DEBUG
  neutron.plugins.ml2.drivers.mech_agent [req-fd4c17c2-0acc-
  4bb6-a652-91359eda496b None] Checking agent: {'binary': u'neutron-
  openvswitch-agent', 'description': None, 'admin_state_up': True,
  'heartbeat_timestamp': datetime.datetime(2015, 8, 25, 1, 35, 5),
  'alive': False, 'id': u'196101f4-9680-43bb-8310-9fca49cd4930',
  'topic': u'N/A', 'host': u'neutron-n-0', 'agent_type': u'Open vSwitch
  agent', 'started_at': datetime.datetime(2015, 8, 22, 6, 55, 59),
  'created_at': datetime.datetime(2015, 8, 12, 10, 23, 16),
  'configurations': {u'arp_responder_enabled': False, u'tunneling_ip':
  u'', u'devices': 28, u'l2_population': False, u'tunnel_types': [],
  u'enable_distributed_routing': False, u'bridge_mappings': {u'physnet-
  external': u'br-ex', u'physnet-tenants': u'br-enp7s0'}}} bind_port
  /usr/lib/python2.7/site-
  packages/neutron/plugins/ml2/drivers/mech_agent.py:65

  2015-08-25 09:37:11.69 [SRV2] 16522 WARNING
  neutron.plugins.ml2.drivers.mech_agent [req-fd4c17c2-0acc-
  4bb6-a652-91359eda496b None] Attempting to bind with dead agent:
  {'binary': u'neutron-openvswitch-agent', 'description': None,
  'admin_state_up': True, 'heartbeat_timestamp': datetime.datetime(2015,
  8, 25, 1, 35, 5), 'alive': False, 'id':
  u'196101f4-9680-43bb-8310-9fca49cd4930', 'topic': u'N/A', 'host':
  u'neutron-n-0', 'agent_type': u'Open vSwitch agent', 'started_at':
  datetime.datetime(2015, 8, 22, 6, 55, 59), 'created_at':
  datetime.datetime(2015, 8, 12, 10, 23,16), 'configurations':
  {u'arp_responder_enabled': False, u'tunneling_ip': u'', u'devices':
  28, u'l2_population': False, u'tunnel_types': [],
  u'enable_distributed_routing': False, u'bridge_mappings': {u'physnet-
  external': u'br-ex', u'physnet-tenants': u'br-enp7s0'}}}

  2015-08-25 09:37:11.69 [SRV2] 16522 WARNING
  neutron.plugins.ml2.managers [req-fd4c17c2-0acc-4bb6-a652-91359eda496b
  None] Failed to bind port 6adcffbf-09d5-4a85-9339-9d6beb2bf82c on host
  neutron-n-0

  2015-08-25 09:37:11.719000 [SRV2] 16522 WARNING
  neutron.plugins.ml2.plugin [req-fd4c17c2-0acc-4bb6-a652-91359eda496b
  None] In _notify_port_updated(), no bound segment for port 6adcffbf-
  09d5-4a85-9339-9d6beb2bf82c on network 2c52ca73-b11f-426f-
  9b78-185ef15809e3

  
  -- OVS agent 1 request

[Yahoo-eng-team] [Bug 1518581] Re: [RFE] sriov vxlan network support

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518581

Title:
  [RFE] sriov vxlan network support

Status in neutron:
  Expired

Bug description:
  1 problem

  Currently VXLAN only supports OVS VXLAN networks and does not support
  SR-IOV VXLAN networks.

  To support SR-IOV VXLAN networks, it needs to be possible to create a
  network with the parameter [--provider:physical_network]; the network
  creation should look like:

  neutron net-create ext_net --provider:network_type vxlan
  --provider:physical_network physnet1 --provider:segmentation_id 1000

  The current neutron DB does not store a physical network for VXLAN
  networks.

  For OVS VXLAN networks the parameter [--provider:physical_network] is
  not needed, so it would be an optional parameter when creating a VXLAN
  network.

  
  2 how we found this problem
  While working on a project that needed to deploy an SR-IOV VXLAN network, we
  found that a physical network cannot be assigned to a VXLAN network.
  It seems that neutron does not support SR-IOV VXLAN networks and only
  supports OVS VXLAN networks.

  3 how to support sr-iov vxlan network
  (1) first, it needs to be possible to create a VXLAN network associated with a physical network

  (2) second, it needs to obtain the mapping relationship between VNI and VLAN

  4 how is this problem going?
  We have modified the neutron code to support this, and we hope to share
  our code and commit it to the neutron project.

  5 significance
  As everyone knows, SR-IOV performance is better than OVS. If SR-IOV
  supported VXLAN networks, it would have wide potential for VXLAN network
  applications.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1115999] Re: quantum-netns-cleanup does not stop metadata proxies

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1115999

Title:
  quantum-netns-cleanup does not stop metadata proxies

Status in neutron:
  Expired

Bug description:
  The quantum-netns-cleanup does not properly stop metadata proxies when
  cleaning up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1115999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496586] Re: Avoid the pattern sql select/delete

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496586

Title:
  Avoid the pattern sql select/delete

Status in neutron:
  Expired

Bug description:
  The following pattern:

   obj = query.filter(...).one()
   context.session.delete(obj)

   is racy because obj can be deleted between the select and the delete,
  we should prefer when possible the pattern:

   count = query.filter(...).delete(synchronize_session=False)
   if not count:
       raise NotFound
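  A self-contained sketch of the non-racy pattern (SQLAlchemy 1.4+; the
  model is hypothetical):

      from sqlalchemy import create_engine, Column, Integer, String
      from sqlalchemy.orm import Session, declarative_base

      Base = declarative_base()

      class Port(Base):
          __tablename__ = 'ports'
          id = Column(Integer, primary_key=True)
          name = Column(String)

      engine = create_engine('sqlite://')
      Base.metadata.create_all(engine)

      with Session(engine) as session:
          session.add(Port(id=1, name='p1'))
          session.commit()
          # Atomic delete: no window between a SELECT and the DELETE.
          count = session.query(Port).filter(Port.id == 1).delete(
              synchronize_session=False)
          session.commit()
          if not count:
              raise LookupError('Port not found')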

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509008] Re: stable/kilo FixedIntervalLoopingCall error message not useful

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509008

Title:
  stable/kilo FixedIntervalLoopingCall error message not useful

Status in neutron:
  Expired

Bug description:
  The error message printed when a subclass of FixedIntervalLoopingCall exceeds 
its scheduled interval
  shows information about the python object rather than the name of the 
function that is exceeding its
  schedule.  Showing python object information is not useful to operators.

  example from log:

  2015-10-22 02:18:14.005 37767 WARNING
  neutron.openstack.common.loopingcall [req-4f447ecc-0ea4-4651-883b-
  1f7dab14beba ] task > run outlasted interval
  by 20.02 sec

  This class is not present in stable/liberty or master, so this applies
  to only stable/kilo.
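  What the report asks for amounts to logging the wrapped function's
  name instead of its repr; a hypothetical sketch:

      import logging

      LOG = logging.getLogger(__name__)

      def warn_outlasted(f, delay):
          # Log module.function rather than repr(f); the repr of a
          # bound method tells an operator nothing about which task
          # is slow.
          LOG.warning('task %(name)s run outlasted interval by '
                      '%(delay).2f sec',
                      {'name': '%s.%s' % (f.__module__, f.__name__),
                       'delay': delay})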

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470076] Re: Security Group Attributes that are documented as UUIDs require validation

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470076

Title:
  Security Group Attributes that are documented as UUIDs require
  validation

Status in neutron:
  Expired

Bug description:
  In the API, security_group_id and remote_group_id are documented
  as requiring UUIDs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470076/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1508801] Re: fix treat_vif_port logic

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508801

Title:
  fix treat_vif_port logic

Status in neutron:
  Expired

Bug description:
  Now it is:
  ...
  if not vif_port.ofport:
      LOG.warn(_LW("VIF port: %s has no ofport configured, "
                   "and might not be able to transmit"),
               vif_port.vif_id)
  if vif_port:
      if admin_state_up:
          self.port_bound(vif_port, network_id, network_type,
  ...
  The logic:
  if vif_port:
      if not vif_port.ofport:
          ...
  should be better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508801/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483957] Re: icmp rule type should be in [0, 255]

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483957

Title:
  icmp rule type should be in [0,255]

Status in neutron:
  Expired

Bug description:
  When I enter Access & Security to create a security group in the dashboard,
  then manage rules and add a rule using "custom icmp rule",
  the type and code items show "Enter a value for ICMP type in the range
  (-1:255)".
  I think "Enter a value for ICMP type in the range [0:255]" would be better.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1403411] Re: Can't get the list of router with the filter of "distributed" or "ha"

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1403411

Title:
  Can't get the list of router with the filter of "distributed" or "ha"

Status in neutron:
  Expired

Bug description:
  SYMPTOM:
  GET /v2.0/routers?distributed=false
  response:
  {
      "routers": [
          {
              "status": "ACTIVE",
              "external_gateway_info": {
                  "network_id": "bb935f5c-c72e-4abc-9550-2ce7b90c14c8",
                  "enable_snat": true,
                  "external_fixed_ips": [
                      {
                          "subnet_id": "7b49431e-f1c2-473e-b919-135ead0274e0",
                          "ip_address": "172.24.4.102"
                      }
                  ]
              },
              "name": "router2",
              "admin_state_up": true,
              "tenant_id": "475660789a404a0e9294eb92a0cecb0e",
              "distributed": true,
              "routes": [],
              "ha": false,
              "id": "f77809c6-05ae-446a-b497-1b4392f29bc8"
          }
      ]
  }

  and the same happens with "ha". (Note the router returned above has
  "distributed": true even though the filter asked for distributed=false.)

  In some cases we need to get the list of routers filtered by
  "distributed" or "ha", so we need to fix this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1403411/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223369] Re: Metadata ns proxy didn't start - pid already exist. Daemon already running?

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223369

Title:
  Metadata ns proxy didn't start - pid already exist. Daemon already
  running?

Status in neutron:
  Expired

Bug description:
  This failure happened just once. Levels are Ubuntu Raring 13.04, Grizzly 
Quantum packages at 1:2013.1.2-0ubuntu1.
  I noticed the metadata namespace proxy hadn't started after the network node 
was booted. The l3-agent.log (was only at INFO) has:

  2013-09-04 15:53:16 INFO [quantum.openstack.common.rpc.common] Connected 
to AMQP server on 10.0.10.10:5672
  2013-09-04 15:53:16 INFO [quantum.agent.l3_agent] L3 agent started
  2013-09-04 15:53:28ERROR [quantum.agent.l3_agent] Failed synchronizing 
routers
  Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py", line 
638, in _sync_routers_task
  self._process_routers(routers, all_routers=True)
File "/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py", line 
618, in _process_routers
  self._router_added(r['id'], r)
File "/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py", line 
236, in _router_added
  self._spawn_metadata_proxy(ri)
File "/usr/lib/python2.7/dist-packages/quantum/agent/l3_agent.py", line 
270, in _spawn_metadata_proxy
  pm.enable(callback)
File 
"/usr/lib/python2.7/dist-packages/quantum/agent/linux/external_process.py", 
line 55, in enable
  ip_wrapper.netns.execute(cmd)
File "/usr/lib/python2.7/dist-packages/quantum/agent/linux/ip_lib.py", line 
414, in execute
  check_exit_code=check_exit_code)
File "/usr/lib/python2.7/dist-packages/quantum/agent/linux/utils.py", line 
61, in execute
  raise RuntimeError(m)
  RuntimeError: 
  Command: ['sudo', 'quantum-rootwrap', '/etc/quantum/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-fa2ec96d-d1f9-4af2-a022-cac171646aa7', 
'quantum-ns-metadata-proxy', 
'--pid_file=/var/lib/quantum/external/pids/fa2ec96d-d1f9-4af2-a022-cac171646aa7.pid',
 '--router_id=fa2ec96d-d1f9-4af2-a022-cac171646aa7', 
'--state_path=/var/lib/quantum', '--metadata_port=9697', '--verbose', 
'--log-file=quantum-ns-metadata-proxyfa2ec96d-d1f9-4af2-a022-cac171646aa7.log', 
'--log-dir=/var/log/quantum']
  Exit code: 1
  Stdout: ''
  Stderr: '2013-09-04 15:53:28 INFO [quantum.common.config] Logging 
enabled!\n2013-09-04 15:53:28ERROR [quantum.agent.linux.daemon] Pidfile /var
  /lib/quantum/external/pids/fa2ec96d-d1f9-4af2-a022-cac171646aa7.pid already 
exist. Daemon already running?\n'

  
  And quantum-ns-metadata-proxyfa2ec96d-d1f9-4af2-a022-cac171646aa7.log has:

  2013-08-29 19:04:04 INFO [quantum.common.config] Logging enabled!
  2013-09-04 15:53:28 INFO [quantum.common.config] Logging enabled!
  2013-09-04 15:53:28ERROR [quantum.agent.linux.daemon] Pidfile 
/var/lib/quantum/external/pids/fa2ec96d-d1f9-4af2-a022-cac171646aa7.pid already 
exist. Daemon already running?

  
  It is the same error message as 
https://bugs.launchpad.net/neutron/+bug/1177416 - but the patch from that bug 
was applied.

  The file /var/lib/quantum/external/pids/fa2ec96d-
  d1f9-4af2-a022-cac171646aa7.pid had 2045 in it - but no process with
  pid 2045 was running when I checked - /proc/2045/ did not exist. The
  pid file was stale as its date was that of the previous launch.

  The process call chain in short-hand is like this:
  l3-agent --> sudo rootwrap... --> python rootwrap ip netns exec
  qrouter-uuid quantum-ns-metadata-proxy router_id=uuid... --> python
  quantum-ns-metadata-proxy router_id=uuid...

  Now the code in external_process.py either didn't find a
  /proc/2045/cmdline, or if it did then that file did not have the
  strings 'python' and 'fa2ec96d-d1f9-4af2-a022-cac171646aa7'. But the
  code in daemon.py must have found a /proc/2045/cmdline and it must
  have had those strings. The only explanation I can give for this is
  that the python rootwrap process started by sudo just happened to get
  pid 2045 that time, and this is what daemon.py is_running() found. Its
  full command line would have looked like:

  /usr/bin/python /usr/bin/quantum-rootwrap /etc/quantum/rootwrap.conf
  ip netns exec qrouter-fa2ec96d-d1f9-4af2-a022-cac171646aa7 quantum-ns-
  metadata-proxy --pid_file=/var/lib/quantum/external/pids/fa2ec96d-
  d1f9-4af2-a022-cac171646aa7.pid --router_id=fa2ec96d-
  d1f9-4af2-a022-cac171646aa7 --state_path=/var/lib/quantum
  --metadata_port=9697 --verbose --log-file=quantum-ns-metadata-
  proxyfa2ec96d-d1f9-4af2-a022-cac171646aa7.log --log-
  dir=/var/log/quantum

  It has the strings 'python' and the router's uuid, so it would have
  matched. If my theory is right, then a possible fix would be to change
  the checks to not report cmdlines with 'ip\x00netns\x00
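
  If that theory holds, a stricter match might look like the following
  sketch (a hypothetical helper, not the actual external_process.py code;
  the wrapper detection is an assumption):

  def proxy_is_running(pid, router_uuid):
      """Best-effort check that <pid> is the ns-metadata-proxy itself,
      not a rootwrap / 'ip netns exec' wrapper that reused the pid."""
      try:
          with open('/proc/%d/cmdline' % pid) as f:
              args = f.read().split('\x00')
      except IOError:
          return False  # no such process: the pid file is stale
      cmdline = ' '.join(args)
      # A wrapper process also carries 'python' and the router uuid, so
      # explicitly exclude the 'ip netns exec ...' form first.
      if 'ip' in args and 'netns' in args:
          return False
      return 'python' in cmdline and router_uuid in cmdline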

[Yahoo-eng-team] [Bug 1508530] Re: _kill_process return value are not used

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1508530

Title:
  _kill_process return value are not used

Status in neutron:
  Expired

Bug description:
  As a "private" method, class AsyncProcess method _kill_process return
  value seems useless. And currently, they are only used in UT.

  Maybe we should remove useless return value.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1508530/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591111] Re: schedulers deprecated warning in unit tests

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591111

Title:
  schedulers deprecated warning in unit tests

Status in neutron:
  Expired

Bug description:
  Captured stderr:
  
  
/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
self.add_agent_status_check(self.remove_networks_from_down_agents)
  
  {0} 
vmware_nsx.tests.unit.nsx_v3.test_plugin.TestL3NatTestCase.test_router_add_interface_port_bad_tenant_returns_404
 [3.499239s] ... ok

  Captured stderr:
  
  
/home/gkotton/vmware-nsx/.tox/py27/src/neutron/neutron/db/agentschedulers_db.py:229:
 DeprecationWarning: Using function/method 
'NsxV3Plugin.add_agent_status_check()' is deprecated: This will be removed in 
the N cycle. Please use 'add_agent_status_check_worker' instead.
self.add_agent_status_check(self.remove_networks_from_down_agents)
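
  A minimal sketch of the change the warning asks for, assuming the
  worker-based method accepts the same callable (the name is taken from
  the deprecation message, not verified against the neutron source):

  # Before (deprecated, per the warning above):
  # self.add_agent_status_check(self.remove_networks_from_down_agents)
  # After:
  self.add_agent_status_check_worker(self.remove_networks_from_down_agents)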

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591111/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585770] Re: [RFE] DVR-aware fixed IP announcements for with BGP

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585770

Title:
  [RFE] DVR-aware fixed IP announcements for with BGP

Status in neutron:
  Expired

Bug description:
  Enable BGP to announce the next-hop for fixed IP host route when using
  DVR. The next-hop when using DVR is the IP address of the FIP agent
  gateway. This would allow an operator to toggle whether to enable
  announcement of host routes for each fixed IP or just rely on the
  prefix announcement for the subnet that sends traffic through the
  central router.

  Depends on https://bugs.launchpad.net/neutron/+bug/1557290. Fast-exit
  DVR not required, but would be a nice companion feature.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588326] Re: Eliminate all use of id

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588326

Title:
  Eliminate all use of id

Status in neutron:
  Expired

Bug description:
  id is a Python built-in function. Too often we encounter bugs due to
  id being used as a variable, for example
  https://launchpad.net/bugs/1588281

  We should eliminate all use of id in the code base.
  The hardest step will be to change neutron.db.model_base.HasId

  Once all uses have been eliminated, introduce a hacking check to
  prevent new occurrences.
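
  For illustration, a minimal example of the hazard (not taken from the
  neutron tree):

  def lookup(id):           # the parameter shadows the builtin id()
      record = {'id': id}
      # Calling the builtin later in this scope now fails, e.g.:
      # key = id(record)    # TypeError: 'str' object is not callable
      return record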

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1588326/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1591048] Re: VM in self Service Networks aren't getting IP

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1591048

Title:
  VM in self Service Networks aren't getting IP

Status in neutron:
  Expired

Bug description:
  Hello Team,

  I have setup openstack mitaka distribution on RHEL 7 box. I setup One
  Controller node and one compute node with Networking 2 option (Self
  Service Networks). I can spin up VM in both subnets but VM in private
  self-service network is not getting IP assigned where as VM in
  provider networks are getting IP . Is this kind of bug in Mitaka
  version.

  I had setup openstack-liberty also where VM's in self service networks
  are getting IP's.

  I found i am not the only one who coming across this issue.
  http://stackoverflow.com/questions/37426821/why-the-vm-in-selfservice-
  network-can-not-get-ip

  Thanks,
  Rajiv Sharma

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1591048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583739] Re: dns_name was reset, when delete VM or detach-interface

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583739

Title:
  dns_name was reset,when delete VM or detach-interface

Status in neutron:
  Expired

Bug description:
  When deleting a VM or detaching a port, the dns_name of the port
  should be retained when it was assigned by the user.

  The fix at https://bugs.launchpad.net/nova/mitaka/+bug/1572593
  resets dns_name directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583739/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577861] Re: error when Populate the database Neutron on controller node

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1577861

Title:
  error when Populate the database Neutron on controller node

Status in neutron:
  Expired

Bug description:
  I am following the Openstack documentation
  http://docs.openstack.org/mitaka/install-guide-ubuntu/neutron-
  controller-install.html for installing Openstack on Ubuntu server
  14.04. there is an instruction to finalize the installation of Neutron
  on the controller node with the following command;

  # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf 
\
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

  However, I get the following error;

  root@controller:/home/controller # su -s /bin/sh -c "neutron-db-manage 
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  No handlers could be found for logger "oslo_config.cfg"
  INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
  INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  INFO  [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> 
c3a73f615e4, Add ip_version to AddressScope
  Traceback (most recent call last):
File "/usr/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
753, in main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
226, in do_upgrade
  desc=branch, sql=CONF.command.sql)
File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 
128, in do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 174, in 
upgrade
  script.run_env()
File "/usr/lib/python2.7/dist-packages/alembic/script/base.py", line 397, 
in run_env
  util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 81, 
in load_python_file
  module = load_module_py(module_id, path)
File "/usr/lib/python2.7/dist-packages/alembic/util/compat.py", line 79, in 
load_module_py
  mod = imp.load_source(module_id, path, fp)
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 126, in 
  run_migrations_online()
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in run_migrations_online
  context.run_migrations()
File "", line 8, in run_migrations
File "/usr/lib/python2.7/dist-packages/alembic/runtime/environment.py", 
line 797, in run_migrations
  self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 
312, in run_migrations
  step.migration_fn(**kw)
File 
"/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/mitaka/expand/c3a73f615e4_add_ip_version_to_address_scope.py",
 line 33, in upgrade
  sa.Column('ip_version', sa.Integer(), nullable=False))
File "", line 8, in add_column
File "", line 3, in add_column
File "/usr/lib/python2.7/dist-packages/alembic/operations/ops.py", line 
1535, in add_column
  return operations.invoke(op)
File "/usr/lib/python2.7/dist-packages/alembic/operations/base.py", line 
318, in invoke
  return fn(self, operation)
File "/usr/lib/python2.7/dist-packages/alembic/operations/toimpl.py", line 
123, in add_column
  schema=schema
File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 172, in 
add_column
  self._exec(base.AddColumn(table_name, column, schema=schema))
File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 118, in 
_exec
  return conn.execute(construct, *multiparams, **params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
914, in execute
  return meth(self, multiparams, params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 68, in 
_execute_on_connection
  return connection._execute_ddl(self, multiparams, params)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
968, in _execute_ddl
  compiled
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1146, in _execute_context
  context)
File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 
1337, in _handle_dbapi_exception
  util.raise_from_cause(newraise, e
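
  Note the "Context impl SQLiteImpl." lines above: the migration is
  running against SQLite rather than the MySQL database the guide sets
  up, which suggests the [database] connection option was not picked up.
  A hedged thing to check, using the value from the install guide:

  [database]
  # /etc/neutron/neutron.conf -- should point at MariaDB, not SQLite:
  connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron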

[Yahoo-eng-team] [Bug 1583519] Re: Add tempest to test-requirements.txt

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583519

Title:
  Add tempest to test-requirements.txt

Status in neutron:
  Expired

Bug description:
  Neutron is updated to use tempest instead of tempest-lib with the
  below commit

  
https://github.com/openstack/neutron/commit/e3210bc880c1cda8a883cb5da05b279cd87aecd4

  With this, neutron's tempest tests depend on the "tempest" package,
  but this package is not added to test-requirements.txt.

  We need to add tempest to test-requirements.txt.
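
  The fix would be a one-line addition along these lines (the version
  pin is omitted here; it should follow global-requirements):

  # test-requirements.txt
  tempest  # Apache-2.0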

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527061] Re: Nova should not throw exception when port binding fails for Ironic

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527061

Title:
  Nova should not throw exception when port binding fails for Ironic

Status in neutron:
  Expired

Bug description:
  Neutron tries to bind the port on the compute node where the instance
  is launched. This doesn't make sense when the hypervisor_type is
  ironic, since the VM does not live on that hypervisor. Furthermore it
  leads to failed provisioning of the baremetal node when neutron is not
  configured on the ironic compute node.

  When binding fails, nova starts cleanup and then throws an exception.
  As part of cleanup the ironic port is deleted. This should not be the
  case when the virt driver being used is Ironic.

  Nova versions: liberty on-wards.

  Setup:
  node-1: controller with neutron
  node-2: ironic-compute without neutron/neutron agents

  Register a BM node and perform nova boot.

  From Nova:
  =
  nova-compute.log:21147:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
  nova-compute.log:21148:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager Traceback (most recent call last):
  nova-compute.log:21149:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/nova/compute/manager.py",
 line 1564, in _allocate_network_async
  nova-compute.log:21150:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager dhcp_options=dhcp_options)
  nova-compute.log:21151:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 727, in allocate_for_instance
  nova-compute.log:21152:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager self._delete_ports(neutron, instance, created_port_ids)
  nova-compute.log:21153:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 204, in __exit__
  nova-compute.log:21154:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager six.reraise(self.type_, self.value, self.tb)
  nova-compute.log:21155:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 719, in allocate_for_instance
  nova-compute.log:21156:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager security_group_ids, available_macs, dhcp_opts)
  nova-compute.log:21157:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
 line 342, in _create_port
  nova-compute.log:21158:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager raise exception.PortBindingFailed(port_id=port_id)
  nova-compute.log:21159:2015-12-05 02:57:38.542 7649 ERROR 
nova.compute.manager PortBindingFailed: Binding failed for port 
0514b7e8-0408-4d5e-9b48-8292e686494f, please check neutron logs for more 
information.
  nova-compute.log:21160:2015-12-05 02:57:38.542 7649 ERROR nova.compute.manager
  2015-12-05 02:57:43.375 7649 DEBUG nova.virt.ironic.driver 
[req-25ba4c81-4333-43e4-b8d9-e460827435e0 a989fe7fc89e4825b98d7e6584cc 
1308f1a382ef43c79eeed0ebf8a9db3b] unplug: 
instance_uuid=d9295d4d-1104-47ab-8e75-5b30d0a3838b vif=[] _unp
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager 
[req-25ba4c81-4333-43e4-b8d9-e460827435e0 a989fe7fc89e4825b98d7e6584cc 
1308f1a382ef43c79eeed0ebf8a9db3b] [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b] Instance failed to spawn
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b] Traceback (most recent call last):
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b]   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2157, in _build_resources
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b] yield resources
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b]   File 
"/opt/stack/venv/nova-20151204T044743Z/lib/python2.7/site-packages/nova/compute/manager.py",
 line 2011, in _build_and_run_inst
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b] block_device_info=block_device_info)
  2015-12-05 02:57:43.376 7649 ERROR nova.compute.manager [instance: 
d9295d4d-1104-47ab-8e75-5b30d0a3838b]   File 
"/opt/stack/venv/nova-20151204T0

[Yahoo-eng-team] [Bug 1535392] Re: Simplify l2pop driver code

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535392

Title:
  Simplify l2pop driver code

Status in neutron:
  Expired

Bug description:
  Simplify l2pop driver code.

  
  The l2pop driver is unable to properly decide about port add/remove
  notifications when it gets called with different port statuses from
  nova and L2 agents through different workers during instance
  migrations. There were many suggestions to modify the port status in
  the plugin during migration (some suggested changing it to BUILD, some
  reviewers to DOWN, etc.; see the comments in
  https://review.openstack.org/#/c/215467/), so that l2pop can properly
  decide about the port's binding host based on status. Deciding which
  status to set on the port in the plugin and using the same in the
  l2pop driver is confusing, and reviewers gave differing opinions
  (comments in https://review.openstack.org/#/c/215467/).

  Kevin suggested that l2pop shouldn't depend on port status for
  deciding about host. Instead it should use binding info in db, which
  reviewers of the patch( https://review.openstack.org/#/c/215467/)
  agreed.

  note: There was a separate bug
  https://bugs.launchpad.net/neutron/+bug/1555600  reported for "agents
  fail to create flood flows with multiple workers"

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1544458] Re: SCTP packets from VM are not NATed

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1544458

Title:
  SCTP packets from VM are not NATed

Status in neutron:
  Expired

Bug description:
  We should add a sanity check for the SCTP modules, as mentioned by Balaji:
  https://highon.coffee/blog/security-harden-centos-7/

  ==
  Disable Uncommon Protocols

  The following Protocols will be disabled:

  Datagram Congestion Control Protocol (DCCP)
  Stream Control Transmission Protocol (SCTP)
  Reliable Datagram Sockets (RDS)
  Transparent Inter-Process Communication (TIPC)
  ==
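
  A sketch of the kind of sanity check suggested above (the function
  name and its placement are assumptions, in the spirit of neutron's
  existing sanity checks):

  import subprocess

  def sctp_conntrack_supported():
      """Best-effort probe for the SCTP conntrack kernel module."""
      try:
          lsmod = subprocess.check_output(['lsmod'])
      except (OSError, subprocess.CalledProcessError):
          return False
      return b'nf_conntrack_proto_sctp' in lsmod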

  Details below:
  

  We have installed kilo release

  [root@sienna ~]# uname -a
  Linux sienna 3.10.0-327.4.5.el7.x86_64 #1 SMP Mon Jan 25 22:07:14 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux
  [root@sienna ~]# cat /etc/os-release
  NAME="CentOS Linux"
  VERSION="7 (Core)"
  ID="centos"
  ID_LIKE="rhel fedora"
  VERSION_ID="7"

  [root@sienna ~]# openstack --version
  openstack 1.0.3
  [root@sienna ~]# neutron --version
  2.4.0
  [root@sienna ~]# nova --version
  2.23.0

  After installing the kilo release, we found that SCTP packets from the
  VM were being dropped at the host. Found that this was a known issue
  (https://bugs.launchpad.net/neutron/+bug/1460741), so we downloaded
  the neutron 2015.1.2 patch and applied it.

  After that, the SCTP packets from the VM were transmitted from the
  host, but with the private IP address (192.168.x.x), i.e. without SNAT
  being performed.

  SNAT is being done for UDP packets though.

  Only SCTP packets are sent out with private IP addresses.

  Please confirm whether this is a known issue and any fix/patch
  available for this in Neutron for Kilo release.

  Thank you
  Balaji Srinivasan

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1544458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1350852] Re: REST API should allow router filtering by network_id

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1350852

Title:
  REST API should allow router filtering by network_id

Status in neutron:
  Expired

Bug description:
  There is currently no way to display all routers that are connected to
  a certain network. This makes it hard for large deployments to figure
  out which networks are connected to which routers. The proposed change
  adds this functionality to the REST API, which should also give the
  end-user the ability to apply this filter using the neutronclient.
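
  With the proposed change, the filter would presumably be usable from
  python-neutronclient like this (illustrative only; the network_id
  filter is exactly what does not exist yet):

  from neutronclient.v2_0 import client

  neutron = client.Client(username='admin', password='secret',
                          tenant_name='admin',
                          auth_url='http://controller:5000/v2.0')
  # Proposed: list only the routers attached to the given network.
  routers = neutron.list_routers(network_id='NETWORK-UUID')['routers']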

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1350852/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556013] Re: Dropping a rule from security group rules don't drop the connection in the IptablesFirewallDriver (they do for Hybrid)

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1556013

Title:
  Dropping a rule from security group rules don't drop the connection in
  the IptablesFirewallDriver (they do for Hybrid)

Status in neutron:
  Expired

Bug description:
  This happens because connection tracking zones don't work in the
  IptablesFirewallDriver (they do for Hybrid).

  
  The subclass for the hybrid driver is the one introducing the zone
  rules [1]

  I remember it was discussed during this review [2], but I cannot see if
  there was any technical detail why we could not do the same thing on
  the plain IptablesFirewallDriver itself.

  [1]
  
https://github.com/openstack/neutron/blob/01a5d9a3c088e54ae78c068408d419ccc53f8ca8/neutron/agent/linux/iptables_firewall.py#L905

  [2] https://review.openstack.org/#/c/118274/
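
  Until zones (or an equivalent) work for the plain driver, a possible
  operator-side workaround is to flush the tracked connections by hand,
  e.g. (assuming conntrack-tools is installed):

  # Drop tracked connections sourced from the affected instance:
  conntrack -D -s <instance-ip>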

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1556013/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541738] Re: Rule on the tun bridge is not updated in time while migrating the vm

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541738

Title:
  Rule on the tun bridge is not updated in time while migrating the vm

Status in neutron:
  Expired

Bug description:
  ENV:neutron/master, vxlan

  After the vm live migration, we can observe that the vm is active
  using command "nova show". However, the vm network is not ready. When
  processing vm live migration, nova invokes neutron update_port. It
  only updates the host ID attribute of the port, but doesn't update the
  rules on the tun bridge. This means the output port in the rule below
  is not updated to the vxlan port, which should be connected to the
  host node that the vm is migrated to.

  ovs-ofctl dump-flows br-tun | grep 1ef
  cookie=0x0, duration=194.884s, table=20, n_packets=0, n_bytes=0, 
hard_timeout=300, idle_age=194, 
priority=1,vlan_tci=0x0306/0x0fff,dl_dst=5a:c6:4f:34:61:06 
actions=load:0->NXM_OF_VLAN_TCI[],load:0x1ef->NXM_NX_TUN_ID[],output:24

  Due to the reason explained above, the time  for vm migration is
  increased. By monitoring the rule status on the tun bridge and the
  network connectivity, the network connectivity is restored after the
  rule of tun bridge is updated.

  Therefore, the time for vm migration can be reduced by updating the
  rule immediately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541738/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548633] Re: One of the Network node out of five is over utilized (has more dhcp namespaces) than the others.

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548633

Title:
  One of the Network node out of five is over utilized (has more dhcp
  namespaces) than the others.

Status in neutron:
  Expired

Bug description:
  I have 5 Network nodes (NN's) on my setup each with 32 GB RAM. All
  have same configuration (Created from the same ubuntu template).

  I am running a scale scenario in which I am creating 4K networks, each
  with one subnet, using rally with 100 concurrency. Ideally all the
  network namespaces should have been divided equally among all 5 NNs,
  but one NN is over-utilized, which in turn creates a resource crunch,
  and future requests start failing on it.

  The number of namespace on the faulty NN is 1175 while other NN's have
  650 to 750 namespaces. I ran the scenario twice and both the times the
  result was same.

  Please note if I create networks one by one without any concurrency
  the namespace distribution is even and the problem is not seen.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548633/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543656] Re: Cannot connect a router to overlapping subnets with address scopes

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543656

Title:
  Cannot connect a router to overlapping subnets with address scopes

Status in neutron:
  Expired

Bug description:
  This is a known limitation in the reference implementation of address
  scopes [1] in the L3 agent that a router cannot be connected to
  subnets with overlapping IPs even when the subnets are in different
  address scopes and, in theory, there should be no ambiguity.  This was
  documented in the devref [2].  I'm filing this bug to capture ideas
  for possibly eliminating this limitation in the future.

  [1] https://review.openstack.org/#/c/270001/
  [2] 
http://docs.openstack.org/developer/neutron/devref/address_scopes.html#address-scopes

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1559978] Re: log segmentation_id over threshold for monitoring

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1559978

Title:
  log segmentation_id over threshold for monitoring

Status in neutron:
  Expired

Bug description:
  Use case
  ===
  Monitoring of the "segmentation resources".

  Logging the status of such resources as we go (or when they pass a
  certain threshold) would allow monitoring solutions to identify
  tripping over certain levels, and warn the administrator to take
  action: cleaning up unused tenant networks, changing configuration,
  changing segmentation technologies, etc.

  Description
  =
  Depending on configuration, and underlaying technologies, the segmentation
  ids can be exhausted (vlan/vni/tunnel keys, etc..), making it a consumable
  resource.

  External monitoring solutions have no easy way to determine the amount of
  "segmentation resources" available on the underlaying resource technology.

  Alternatives
  ==
  One alternative could be providing a generic API to retrieve the usage of
  resources. That would require the monitoring solution to make API calls
  and therefore use credentials, making it harder to leverage standard
  deployments and monitoring tools. This could also be considered as a second
  step of this RFE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1559978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471200] Re: Resource update history is not clear

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471200

Title:
  Resource update history is not clear

Status in neutron:
  Expired

Bug description:
  The following are needed to confirm the resource update history:
   * Latest value
   * Updated value (point)
   * Default value

  The latest value is obtained via the "GET" API.
  **GET API**
  $ curl -X GET -H "X-Auth-Token: $TOKEN"  
http://192.168.122.141:9696/v2.0/networks/e6e81e5a-9706-4e33-a6e4-4e63cc152a3a 
| jq .
% Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100   374  100   3740 0  22034  0 --:--:-- --:--:-- --:--:-- 24933
  {
"network": {
  "provider:segmentation_id": 1013,
  "id": "e6e81e5a-9706-4e33-a6e4-4e63cc152a3a",
  "mtu": 0,
  "port_security_enabled": true,
  "shared": false,
  "status": "ACTIVE",
  "subnets": [],
  "name": "test-002",
  "provider:physical_network": null,
  "router:external": false,
  "tenant_id": "0862ba8c3497455a8fdf40c49f0f2644",
  "admin_state_up": true,
  "provider:network_type": "vxlan"
}
  }
  $

  And the updated value is logged as "Request body:" when a resource is updated.
  **PUT API**
  $ curl -X PUT -d '{"network":{"name":"test-002"}}' -H "X-Auth-Token: $TOKEN"  
-H "Content-Type: application/json" 
http://192.168.122.141:9696/v2.0/networks/e6e81e5a-9706-4e33-a6e4-4e63cc152a3a

  **log**
  2015-07-03 15:35:43.870 DEBUG neutron.api.v2.base 
[req-a45f50ab-2606-4f00-9e35-bcdec16ae3a7 admin 
0862ba8c3497455a8fdf40c49f0f2644] Request body: {u'network': {u'name': 
u'test-002'}} from (pid=2084) prepare_request_body 
/opt/stack/neutron/neutron/api/v2/base.py:598

  But the default value is not logged.

  If many resources have been updated many times, the user cannot
  determine the following:
   * What is the original value of a changed value?
  That is not convenient in any project.

  This patch will log the default value when a new resource is created.

  The following is a sample where the default values are logged as "Response body:"
  **POST API**
  $ curl -X POST  -d '{"network":{"name":"test-001"}}' -H "X-Auth-Token: 
$TOKEN"  -H "Content-Type: application/json" 
http://192.168.122.141:9696/v2.0/networks

  **log**
  2015-07-03 15:30:48.835 INFO neutron.api.v2.resource 
[req-356f5432-a525-4d60-92f8-e760febe2865 admin 
0862ba8c3497455a8fdf40c49f0f2644] Response body: {u'network': {u'status': 
u'ACTIVE', u'subnets': [], u'name': u'test-001', u'provider:physical_network': 
None, u'router:external': False, u'tenant_id': 
u'0862ba8c3497455a8fdf40c49f0f2644', u'admin_state_up': True, 
u'provider:network_type': u'vxlan', u'port_security_enabled': True, u'shared': 
False, u'mtu': 0, u'id': u'e6e81e5a-9706-4e33-a6e4-4e63cc152a3a', 
u'provider:segmentation_id': 1013}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471200/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551088] Re: nova detach a wrong port

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551088

Title:
  nova detach a  wrong port

Status in neutron:
  Expired

Bug description:
  When we detach a port that does not exist ("nova detach port-id
  server"), nothing is returned. It seems to be successful, but in fact
  it is not.
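
  The reported behaviour looks like the following (assuming the reporter
  means the interface-detach subcommand):

  $ nova interface-detach myserver 11111111-2222-3333-4444-555555555555
  $ echo $?
  0

  i.e. the command exits successfully with no output even though no such
  port exists.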

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551088/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1546490] Re: Security groups don't work with fullstack

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1546490

Title:
  Security groups don't work with fullstack

Status in neutron:
  Expired

Bug description:
  Iptables doesn't work properly with fullstack, as can be observed in
  [1].

  The gist is that since all ovs-agents are running in the same
  namespace, they try to override each other's iptables rules, causing
  the failures. This will obviously cause security groups to fail.
  Also, Assaf Muller mentioned that since FakeMachines are directly
  connected to br-int, security groups will also not work properly on
  them. Instead, they should be connected through an intermediary
  linuxbridge.

  [1]: http://logs.openstack.org/71/270971/3/check/gate-neutron-dsvm-
  
fullstack/c913b51/logs/TestConnectivitySameNetwork.test_connectivity_VLANs,Ofctl_
  /neutron-openvswitch-agent--2016-02-14--
  11-40-19-078390.log.txt.gz#_2016-02-14_11_41_03_165

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1546490/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1566972] Re: Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes the issue

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1566972

Title:
  Neutron unit tests are failing with SQLAlchemy 1.0.11, 1.0.12 fixes
  the issue

Status in neutron:
  Expired

Bug description:
  When running the unit tests (when building the Debian package for
  Neutron Mitaka RC3), Neutron fails more than 500 unit tests. Upgrading
  from SQLAlchemy 1.0.11 to 1.0.12 fixed the issue.

  Example of failed run:
  https://mitaka-jessie.pkgs.mirantis.com/job/neutron/37/consoleFull

  Moving forward, upgrading the global-requirements.txt to SQLAlchemy
  1.0.12 may not be possible, so probably it'd be nice to fix the issue
  in Neutron.

  FYI, in Debian, I don't really mind, as Debian Sid has version 1.0.12,
  and that's where I upload. For the (non-official) backports to Debian
  Jessie and Ubuntu Trusty, I did a backport of 1.0.12, and that is
  fixed.
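
  A hedged local workaround, per the description above, is to pin the
  working version in the build environment:

  $ pip install --upgrade 'SQLAlchemy==1.0.12'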

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1566972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1530293] Re: demo tenant can use admin's unshared private network to create a vm successful

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1530293

Title:
  demo tenant can use admin's unshared private network to create a vm
  successful

Status in neutron:
  Expired

Bug description:
  Hi all,
  I encountered a problem as follows:

  Environment:
  neutron version is the latest version downloading from github(m), same with 
other openstack components.
  operating system: ubuntu 14.04

  Procedure (testing the isolation of different tenants on a private
  network like 192.168.50.0/24):
  step 1: create a private network like 192.168.50.0/24 under the admin
  tenant; the network is not shared.
  step 2: create vm1 using the admin tenant together with the private
  network created in step 1.
  step 3: create vm2 using the demo tenant together with the private
  network created in step 1.

  PS: I create the VMs through the REST client.

  Expected output
  vm1 is created successfully while vm2 is destroyed with an error.

  Actual output
  vm1 and vm2 are both created successfully.

  The demo tenant should not be able to use the private network created
  by admin, so when the demo tenant creates a vm on that network, the vm
  should be destroyed. How did the above result come about? Has anyone
  ever encountered such a situation? Thank you.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1530293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1548274] Re: rabbitmq message-queue not be dropped which not binded with consumers

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1548274

Title:
  rabbitmq message-queue not be dropped which not binded with consumers

Status in neutron:
  Expired

Bug description:
  Queue q-agent-notifier-port-update_fanout_91e5c8311b1b47a2b39ede94dad9a56b
  is blocked (please refer to the attached picture).

  Version: RabbitMQ 3.6.0 release, openstack kilo

  This phenomenon sometimes comes up in large-scale environments: when a
  rabbitmq message queue is created with no consumer bound to it, but
  producers keep publishing messages to it continuously, the queue is
  never dropped. How can I make a queue that has no consumers or
  producers bound to it be dropped?
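
  One common operator-side answer (a sketch, assuming policy-based queue
  expiry is acceptable in the deployment) is to let idle queues expire
  via a RabbitMQ policy:

  # Expire any queue that has been unused for 30 minutes:
  rabbitmqctl set_policy expiry ".*" '{"expires":1800000}' --apply-to queues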

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1548274/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463831] Re: neutron DVR poor performance

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463831

Title:
  neutron DVR poor performance

Status in neutron:
  Expired

Bug description:
  Scenario:
  2 VMs of same tenant but in different subnets talk to each other. The traffic 
flow is ...

  Traffic VM1 to VM2:
  = CN1     CN2 ===
  VM1--->br-int--->Router--->br-int--->br-tun-->br-tun--->br-int--->VM2

  Traffic VM2 to VM1:
  = CN2    CN1 ===
  VM2--->br-int--->Router--->br-int---br-tun--->br-tun--->br-int--->VM1

  This works as designed; however obviously br-int of CN1 never gets
  traffic from Router of CN1 (except the very first ARP response), same
  for br-int of CN2. This might lead to flow (or mac?) timeout on br-int
  after 300 secs and degrades performance massively because traffic is
  flooded.

  Changing the mac-addr aging timer influences the issue; change to 30 (default 
300) and the issue occurs after 30 seconds (instead 300) 
  #ovs-vsctl set bridge br-int other_config:mac-aging-time=30

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539226] Re: Stable branch version number needs to be updated in stable/kilo

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1539226

Title:
  Stable branch version number needs to be updated in stable/kilo

Status in neutron:
  Expired

Bug description:
  Jenkins fails after complaining about the version number

  2016-01-27 23:16:47.695 | You are using pip version 7.1.0, however version 
8.0.2 is available.
  2016-01-27 23:16:47.695 | You should consider upgrading via the 'pip install 
--upgrade pip' command.
  2016-01-27 23:16:47.946 | Collecting pbr
  2016-01-27 23:16:48.051 |   Downloading 
http://pypi.IAD.openstack.org/packages/py2.py3/p/pbr/pbr-1.8.1-py2.py3-none-any.whl
 (89kB)
  2016-01-27 23:16:48.085 | Installing collected packages: pbr
  2016-01-27 23:16:48.158 | Successfully installed pbr-1.8.1
  2016-01-27 23:16:48.179 | + sdist_check/bin/python setup.py sdist
  2016-01-27 23:16:48.802 | ERROR:root:Error parsing
  2016-01-27 23:16:48.802 | Traceback (most recent call last):
  2016-01-27 23:16:48.802 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/core.py",
 line 109, in pbr
  2016-01-27 23:16:48.802 | attrs = util.cfg_to_args(path)
  2016-01-27 23:16:48.802 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/util.py",
 line 243, in cfg_to_args
  2016-01-27 23:16:48.802 | pbr.hooks.setup_hook(config)
  2016-01-27 23:16:48.802 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/hooks/__init__.py",
 line 25, in setup_hook
  2016-01-27 23:16:48.802 | metadata_config.run()
  2016-01-27 23:16:48.803 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/hooks/base.py",
 line 27, in run
  2016-01-27 23:16:48.803 | self.hook()
  2016-01-27 23:16:48.803 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/hooks/metadata.py",
 line 26, in hook
  2016-01-27 23:16:48.803 | self.config['name'], self.config.get('version', 
None))
  2016-01-27 23:16:48.803 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/packaging.py",
 line 659, in get_version
  2016-01-27 23:16:48.803 | version = _get_version_from_git(pre_version)
  2016-01-27 23:16:48.803 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/packaging.py",
 line 601, in _get_version_from_git
  2016-01-27 23:16:48.803 | result = _get_version_from_git_target(git_dir, 
target_version)
  2016-01-27 23:16:48.803 |   File 
"/home/jenkins/workspace/gate-neutron-pep8/sdist_check/local/lib/python2.7/site-packages/pbr/packaging.py",
 line 563, in _get_version_from_git_target
  2016-01-27 23:16:48.803 | dict(new=new_version, target=target_version))
  2016-01-27 23:16:48.803 | ValueError: git history requires a target version 
of pbr.version.SemanticVersion(2015.1.4), but target version is 
pbr.version.SemanticVersion(2015.1.3)
  2016-01-27 23:16:48.804 | error in setup command: Error parsing 
/home/jenkins/workspace/gate-neutron-pep8/setup.cfg: ValueError: git history 
requires a target version of pbr.version.SemanticVersion(2015.1.4), but target 
version is pbr.version.SemanticVersion(2015.1.3)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1539226/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1483690] Re: if no subnet enable DHCP, DHCP agent should be disable

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1483690

Title:
  if no subnet enable DHCP, DHCP agent should be disable

Status in neutron:
  Expired

Bug description:
  I create a network, then create a subnet with gateway and DHCP
  enabled; a port and a DHCP agent are created.
  Then I disable the gateway and DHCP: the port is deleted, but there is
  no change in the DHCP agent's status.

  It makes no sense for a DHCP agent to keep running if no subnet has
  DHCP enabled. I think the DHCP agent should be removed too if no
  subnet has DHCP enabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1483690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535554] Re: Multiple dhcp agents are scheduled to host one network automatically if multiple subnets are created at the same time

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535554

Title:
  Multiple dhcp agents are scheduled to host one network automatically
  if multiple subnets are created at the same time

Status in neutron:
  Expired

Bug description:
  I have three all-in-one controller nodes deployed by DevStack with the
  latest codes. Neutron servers on these controllers are set behind
  Pacemaker and HAProxy to realize active/active HA. MariaDB Galera
  cluster is used as my database backend.

  In neutron.conf, I have made the following changes:
  dhcp_agents_per_network = 1
  network_scheduler_driver = 
neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

  Since I only allow one dhcp agent per tenant on each controller, now I
  have three dhcp agents in total for a given tenant. After I created
  one network within this given tenant, before I add any subnets to this
  network, no dhcp agents would be scheduled to host this network. If I
  run multiple commands at the same time to add subnets to the network,
  we may end up with more than one dhcp agent hosting the network.

  It is not easy to reproduce the bug. You might need to repeat the
  following steps multiple times.

  How to reproduce:

  Prerequisite
  make the following changes in neutron.conf
  [DEFAULT]
  dhcp_agents_per_network = 1
  network_scheduler_driver = 
neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

  Step 1: Confirm multiple dhcp agents are running
  $ neutron agent-list --agent_type='DHCP agent'
  my result is shown http://paste.openstack.org/show/483956/

  Step 2: Create a network
  $ neutron net-create net-dhcptest

  Step 3: Create multiple subnets on the network at the same time
  On controller1:
  $ neutron subnet-create --name subnet-dhcptest-1 net-dhcptest 192.162.101.0/24
  On controller2:
  $ neutron subnet-create --name subnet-dhcptest-2 net-dhcptest 192.162.102.0/24

  Step 4: Check which dhcp agent(s) is/are hosting the network
  $ neutron dhcp-agent-list-hosting-net net-dhcptest
  my result is shown http://paste.openstack.org/show/483958/

  If you end up with only one dhcp agent, please delete the subnets and
  network. Then repeat Step 1-4 several times.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1524394] Re: neutron-openvswitch-agent fails to start when root_helper operates in a different context

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1524394

Title:
  neutron-openvswitch-agent fails to start when root_helper operates in
  a different context

Status in neutron:
  Expired

Bug description:
  Version: Liberty
  Compute hypervisor: XenServer 6.5
  Compute vm: Ubuntu 14.04.3

  This issue appears in Liberty--and not before--when running the
  XenServer hypervisor. In this environment, root_helper is set to
  /usr/bin/neutron-rootwrap-xen-dom0, which executes commands in the
  hypervisor's Dom0 context. This problem keeps the neutron-openvswitch-
  agent from starting and thus breaks networking on the compute nodes.

  A backtrace is appended. The gist of the problem is that
  ip_lib.get_devices() does not use the root helper to obtain the list
  of network interfaces when the network namespace is the global
  namespace. Thus, it obtains the interfaces of the compute virtual
  machine environment and not the Dom0 environment.

  I've appended two patches: one for ip_lib that corrects the listing,
  and one for netwrap to allow find. There are security implications in
  permitting the execution of `find' in netwrap.
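
  A minimal, self-contained sketch of the idea (hypothetical helper
  names, not the appended patch): route the `ip' invocation through the
  configured root helper even for the global namespace, so it executes
  in Dom0 rather than inside the compute VM.

    import subprocess

    # Sketch only: run "ip" through the root helper whenever one is
    # configured, so the command executes in XenServer's Dom0.
    def get_devices(root_helper=None):
        cmd = ['ip', '-o', 'link', 'show']
        if root_helper:
            cmd = root_helper.split() + cmd
        out = subprocess.check_output(cmd).decode()
        # each line looks like "2: eth0: <BROADCAST,...>"
        return [line.split(':', 2)[1].strip()
                for line in out.splitlines() if line]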

  Backtrace from openvswitch-agent.log:

  2015-12-09 07:44:10.274 11884 CRITICAL neutron [-] RuntimeError:
  Command: ['/usr/bin/neutron-rootwrap-xen-dom0', '/etc/neutron/rootwrap.conf', 'ip', 'addr', 'show', 'br-int', 'to', '192.168.1.26']
  Exit code: 96
  Stdin:
  Stdout:
  Stderr: Traceback (most recent call last):
    File "/usr/bin/neutron-rootwrap-xen-dom0", line 119, in run_command
      {'cmd': json.dumps(user_args), 'cmd_input': json.dumps(cmd_input)})
    File "/usr/lib/python2.7/dist-packages/XenAPI.py", line 245, in __call__
      return self.__send(self.__name, args)
    File "/usr/lib/python2.7/dist-packages/XenAPI.py", line 149, in xenapi_request
      result = _parse_result(getattr(self, methodname)(*full_params))
    File "/usr/lib/python2.7/dist-packages/XenAPI.py", line 219, in _parse_result
      raise Failure(result['ErrorDescription'])
  Failure: ['XENAPI_PLUGIN_FAILURE', 'run_command', 'PluginError', 'Device "br-int" does not exist.\n']
  2015-12-09 07:44:10.274 11884 ERROR neutron Traceback (most recent call last):
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
  2015-12-09 07:44:10.274 11884 ERROR neutron     sys.exit(main())
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 20, in main
  2015-12-09 07:44:10.274 11884 ERROR neutron     agent_main.main()
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", line 49, in main
  2015-12-09 07:44:10.274 11884 ERROR neutron     mod.main()
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/main.py", line 36, in main
  2015-12-09 07:44:10.274 11884 ERROR neutron     ovs_neutron_agent.main(bridge_classes)
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1899, in main
  2015-12-09 07:44:10.274 11884 ERROR neutron     validate_local_ip(agent_config['local_ip'])
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1876, in validate_local_ip
  2015-12-09 07:44:10.274 11884 ERROR neutron     if not ip_lib.IPWrapper().get_device_by_ip(local_ip):
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 131, in get_device_by_ip
  2015-12-09 07:44:10.274 11884 ERROR neutron     if device.addr.list(to=ip):
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 514, in list
  2015-12-09 07:44:10.274 11884 ERROR neutron     for line in self._run(options, tuple(args)).split('\n'):
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 274, in _run
  2015-12-09 07:44:10.274 11884 ERROR neutron     return self._parent._run(options, self.COMMAND, args)
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/agent/linux/ip_lib.py", line 70, in _run
  2015-12-09 07:44:10.274 11884 ERROR neutron     log_fail_as_error=self.log_fail_as_error)
  2015-12-09 07:44:10.274 11884 ERROR neutron   File "/usr/lib/python2.7/dist-

[Yahoo-eng-team] [Bug 1549442] Re: Require a schema update on databases created with the nuage plugin to work with DB2

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549442

Title:
  Require a schema update on databases created with the nuage plugin to
  work with DB2

Status in neutron:
  Expired

Bug description:
  A schema change to the foreign key names is required to be able to
  perform subnet operations with DB2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549442/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1510814] Re: Some URLs in neutron gerrit dashboards redirect to a new URL

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1510814

Title:
  Some URLs in neutron gerrit dashboards redirect to a new URL

Status in neutron:
  Expired

Bug description:
  Some URLs in the neutron gerrit dashboards redirect to a new URL.
  It's not a big problem, but I think they should be corrected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1510814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1505571] Re: FIP disassociation takes longer in non DVR test scenario

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505571

Title:
  FIP disassociation takes longer in non DVR test scenario

Status in neutron:
  Expired

Bug description:
  Problem description:
  With a series of VM delete operations in OpenStack (4000 VMs) on KVM
  compute nodes, VM instances go into ERROR state.
  The error shown in the Horizon UI is
  "ConnectionFailed: Connection to neutron failed:
  HTTPConnectionPool(host='192.168.0.1', port=9696): Read timed out.
  (read timeout=30)"

  This happens because neutron takes more than 30 seconds (actually
  around 80 seconds) to delete one port, and nova sets the instance
  into ERROR state because the default timeout of all neutron API calls
  in nova is set to 30 seconds.
  This can be worked around by increasing the timeout to 120 in
  nova.conf, but that cannot be recommended as the solution.

  cat /etc/nova/nova.conf | grep url_timeout
  url_timeout = 120

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504941] Re: RBAC-RFE: neutron net-show command should display all tenants using the network

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504941

Title:
  RBAC-RFE: neutron net-show command should display all tenants using
  the network

Status in neutron:
  Expired

Bug description:
  On RDO Liberty I tested the neutron RBAC feature.
  When a network is assigned to more than one tenant, we still see only
  one tenant in neutron net-show:
  [root@cougar16 ~(keystone_admin)]# neutron net-show 590ca7b9-1682-4c40-8213-02feaa7a96cc
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  | mtu                       | 0                                    |
  | name                      | internal_ipv4_a                      |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 70                                   |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   | 9a1a387e-88cf-484a-8b12-5a1834be0233 |
  | tenant_id                 | fa4add4659704239b771b0bccb8b6829     |
  +---------------------------+--------------------------------------+

  This network is shared with two tenants:
  [root@cougar16 ~(keystone_admin)]# neutron rbac-list
  +--------------------------------------+--------------------------------------+
  | id                                   | object_id                            |
  +--------------------------------------+--------------------------------------+
  | 4f1a9c9d-e820-46e4-b431-b3142c6bb245 | 818dd42f-f627-45d4-a578-dd475b9e19e4 |
  | 8c995ab1-dea6-411b-854c-a405cf5365fa | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  | abb375b9-95d0-4297-80f1-3f22f0f84a9e | b071a769-0d50-4d25-8730-fed3dea13a2f |
  | f3122b92-f47a-4a0f-a422-c9f7ed482341 | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  +--------------------------------------+--------------------------------------+

  
  [root@cougar16 ~(keystone_admin)]# rpm -qa |grep neutron 
  python-neutronclient-3.1.1-dev1.el7.centos.noarch
  python-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-ml2-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-common-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-openvswitch-7.0.0.0-rc2.dev21.el7.centos.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1552147] Re: Change of force_metadata (dhcp agent) from True to False is not applied on existing resources

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1552147

Title:
  Change of force_metadata (dhcp agent)  from True to False  is not
  applied on existing resources

Status in neutron:
  Expired

Bug description:
  Reproduction steps:
  1. Set force_metadata = True in the dhcp agent config and restart the
     dhcp agent.
  2. Create an internal network and subnet, create a router, and add
     the subnet to the router. Result: we have a static route in the
     options file (/var/lib/neutron/dhcp/ID/opts).
  3. Set force_metadata = False and restart the dhcp agent. Result: we
     still have the static route in the options file
     (/var/lib/neutron/dhcp/ID/opts), which we should not, because
     force_metadata is False and the subnet has a router.

  The issue affects only the network and subnet created before the
  change. If we create a new one with the new config, it is fine.

  Liberty.

  Reproducible.
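
  For reference, the stale entry left behind in
  /var/lib/neutron/dhcp/ID/opts is roughly of this shape (the addresses
  are illustrative and the exact option layout may differ between
  releases):

    tag:tag0,option:classless-static-route,169.254.169.254/32,10.0.0.2
    tag:tag0,option:router,10.0.0.1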

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1552147/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1485537] Re: adopt oslo_config.fixture.Config

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1485537

Title:
  adopt oslo_config.fixture.Config

Status in neutron:
  Expired

Bug description:
  Currently, the base test class tries to mock out oslo.config
  autodiscovery to isolate tests from external configuration files, but
  does not manage it completely. For example, the policy.d directory is
  still accessed by oslo.policy code, leading to bugs like bug 1484553.

  Adopting the fixture should isolate us from all configuration files
  that are external to the unit test tree.
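
  For illustration, a minimal use of the fixture could look like the
  sketch below (ExampleTestCase and the fancy_mode option are made up
  for the example; neutron's actual base classes differ):

    import testtools

    from oslo_config import cfg
    from oslo_config import fixture as config_fixture


    class ExampleTestCase(testtools.TestCase):
        def setUp(self):
            super(ExampleTestCase, self).setUp()
            # The fixture snapshots the config object and restores it
            # on cleanup, so overrides cannot leak between tests and no
            # external configuration files are consulted.
            self.cfg = self.useFixture(config_fixture.Config(cfg.CONF))
            # 'fancy_mode' is a made-up option, registered just for
            # this example.
            self.cfg.register_opt(cfg.BoolOpt('fancy_mode', default=False))
            self.cfg.config(fancy_mode=True)

        def test_override_visible(self):
            self.assertTrue(cfg.CONF.fancy_mode)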

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1485537/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1487978] Re: Performance of L2 population

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1487978

Title:
  Performance of L2 population

Status in neutron:
  Expired

Bug description:
  When a compute node restarts, all ports on that host trigger l2 pop
  again, even if the ports have not changed. If many compute nodes
  restart at the same time, the resulting l2 pop load is considerable.
  I think that when an l2 agent restarts, the other l2 agents should
  not flush their tunnel info to the restarted agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1487978/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1471957] Re: Invalid subnet cidr cause dhcp runtimerror

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1471957

Title:
  Invalid subnet cidr cause dhcp runtimerror

Status in neutron:
  Expired

Bug description:
  Trace:
  ERROR neutron.agent.linux.utils [req-26ce0148-4bc4-40bd-96ac-e9d484f37b61 demo 12b3399d1cb64da488e20f6a7c355d10]
  Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qdhcp-6cdefebf-ab88-4f55-b2b9-719286a7b75b', 'ip', 'route', 'replace', 'default', 'via', '0.0.0.1', 'dev', 'tapb81e677c-8c']
  Exit code: 2
  Stdout: ''
  Stderr: 'RTNETLINK answers: Network is unreachable\n'
  ERROR neutron.agent.dhcp_agent [req-26ce0148-4bc4-40bd-96ac-e9d484f37b61 demo 12b3399d1cb64da488e20f6a7c355d10] Unable to enable dhcp for 6cdefebf-ab88-4f55-b2b9-719286a7b75b.
  Traceback (most recent call last):
    File "/opt/stack/neutron/neutron/agent/dhcp_agent.py", line 128, in call_driver
      getattr(driver, action)(**action_kwargs)
    File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 205, in enable
      interface_name = self.device_manager.setup(self.network)
    File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 1056, in setup
      self._set_default_route(network, interface_name)
    File "/opt/stack/neutron/neutron/agent/linux/dhcp.py", line 928, in _set_default_route
      device.route.add_gateway(subnet.gateway_ip)
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 395, in add_gateway
      self._as_root(*args)
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 242, in _as_root
      kwargs.get('use_root_namespace', False))
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 74, in _as_root
      log_fail_as_error=self.log_fail_as_error)
    File "/opt/stack/neutron/neutron/agent/linux/ip_lib.py", line 86, in _execute
      log_fail_as_error=log_fail_as_error)
    File "/opt/stack/neutron/neutron/agent/linux/utils.py", line 84, in execute
      raise RuntimeError(m)
  TRACE neutron.agent.dhcp_agent RuntimeError:
  TRACE neutron.agent.dhcp_agent Command: ['sudo', '/usr/local/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qdhcp-6cdefebf-ab88-4f55-b2b9-719286a7b75b', 'ip', 'route', 'replace', 'default', 'via', '0.0.0.1', 'dev', 'tapb81e677c-8c']
  TRACE neutron.agent.dhcp_agent Exit code: 2
  TRACE neutron.agent.dhcp_agent Stdout: ''
  TRACE neutron.agent.dhcp_agent Stderr: 'RTNETLINK answers: Network is unreachable\n'
  TRACE neutron.agent.dhcp_agent

  
  Steps to reproduce:
  NET_NAME=test-ip
  neutron net-create ${NET_NAME}
  neutron port-create ${NET_NAME}
  neutron subnet-create ${NET_NAME} 0.0.0.0/8

  
  Impact:
  Causes logs to grow more than necessary.
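
  A guard of roughly the following shape (an illustration, not actual
  neutron code) would let the agent skip the doomed `ip route replace
  default' call:

    import ipaddress

    # Illustrative guard: refuse to program a default route via a
    # gateway drawn from the reserved 0.0.0.0/8 block, which is what
    # "subnet-create ... 0.0.0.0/8" hands the dhcp agent (gateway
    # 0.0.0.1 in the trace above).
    def usable_gateway(gateway_ip):
        return (ipaddress.ip_address(gateway_ip)
                not in ipaddress.ip_network('0.0.0.0/8'))

    print(usable_gateway('0.0.0.1'))   # False
    print(usable_gateway('10.0.0.1'))  # True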

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1471957/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1535554] Re: Multiple dhcp agents are scheduled to host one network automatically if multiple subnets are created at the same time

2017-02-21 Thread Lujin Luo
I will fix this one. Sorry for the late notice.

** Changed in: neutron
 Assignee: (unassigned) => Lujin Luo (luo-lujin)

** Changed in: neutron
   Status: Expired => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1535554

Title:
  Multiple dhcp agents are scheduled to host one network automatically
  if multiple subnets are created at the same time

Status in neutron:
  In Progress

Bug description:
  I have three all-in-one controller nodes deployed by DevStack with
  the latest code. Neutron servers on these controllers are set behind
  Pacemaker and HAProxy to realize active/active HA. MariaDB Galera
  cluster is used as my database backend.

  In neutron.conf, I have made the following changes:
  dhcp_agents_per_network = 1
  network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

  Since I only allow one dhcp agent per tenant on each controller, I
  now have three dhcp agents in total for a given tenant. After I
  create one network within this tenant, no dhcp agent is scheduled to
  host it before I add any subnets. If I run multiple commands at the
  same time to add subnets to the network, we may end up with more than
  one dhcp agent hosting the network.

  It is not easy to reproduce the bug. You might need to repeat the
  following steps multiple times.

  How to reproduce:

  Prerequisite
  make the following changes in neutron.conf
  [DEFAULT]
  dhcp_agents_per_network = 1
  network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler

  Step 1: Confirm multiple dhcp agents are running
  $ neutron agent-list --agent_type='DHCP agent'
  my result is shown http://paste.openstack.org/show/483956/

  Step 2: Create a network
  $ neutron net-create net-dhcptest

  Step 3: Create multiple subnets on the network at the same time
  On controller1:
  $ neutron subnet-create --name subnet-dhcptest-1 net-dhcptest 192.162.101.0/24
  On controller2:
  $ neutron subnet-create --name subnet-dhcptest-2 net-dhcptest 192.162.102.0/24

  Step 4: Check which dhcp agent(s) is/are hosting the network
  $ neutron dhcp-agent-list-hosting-net net-dhcptest
  my result is shown http://paste.openstack.org/show/483958/

  If you end up with only one dhcp agent, please delete the subnets and
  network. Then repeat Step 1-4 several times.
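
  As a single-host approximation of Step 3, the two calls can be fired
  concurrently to widen the race window (a rough helper that assumes
  admin credentials are already exported in the environment):

    import subprocess
    import threading

    # Run both subnet-create commands at once to provoke the
    # concurrent-scheduling race described above.
    def create_subnet(name, cidr):
        subprocess.check_call(['neutron', 'subnet-create', '--name',
                               name, 'net-dhcptest', cidr])

    threads = [
        threading.Thread(target=create_subnet,
                         args=('subnet-dhcptest-1', '192.162.101.0/24')),
        threading.Thread(target=create_subnet,
                         args=('subnet-dhcptest-2', '192.162.102.0/24')),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()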

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1535554/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450548] Re: Some VMs get a bad metadata route

2017-02-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1450548

Title:
  Some VMs get a bad metadata route

Status in neutron:
  Expired

Bug description:
  In a configuration using the dhcp_agent.ini setting

 enable_isolated_metadata = True

  When creating a network configuration that is *not* isolated, it has
  been observed that the dnsmasq processes are configured with static
  routes for the metadata service (169.254.169.254) that point to the
  local dhcp server:

  ci-info: +-------+-----------------+------------+-----------------+-----------+-------+
  ci-info: | Route |   Destination   |  Gateway   |     Genmask     | Interface | Flags |
  ci-info: +-------+-----------------+------------+-----------------+-----------+-------+
  ci-info: |   0   |     0.0.0.0     | 71.0.0.161 |     0.0.0.0     |    eth0   |   UG  |
  ci-info: |   1   |    71.0.0.160   |  0.0.0.0   | 255.255.255.240 |    eth0   |   U   |
  ci-info: |   2   | 169.254.169.254 | 71.0.0.163 | 255.255.255.255 |    eth0   |  UGH  |
  ci-info: +-------+-----------------+------------+-----------------+-----------+-------+

  
  However, in this particular scenario the dnsmasq processes have no
  metadata-proxy processes.

  When a VM boots it gets the static route via DHCP and is unable to
  access the metadata service.

  This issue seems to have appeared due to patch #116832 "Don't spawn
  metadata-proxy for non-isolated nets".

  Is it possible that the basis for that optimisation is flawed?

  The optimisation implements checks of whether a subnet is considered
  isolated. These checks include whether the subnet has a neutron
  router port available. However, it appears that the decision can
  change during network construction or manipulation, which would
  defeat the optimisation.
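
  For concreteness, the isolation check in question is roughly of the
  following shape (a simplified illustration, not the dhcp agent's
  actual code); its answer flips as router interface ports appear or
  disappear:

    # Simplified illustration of an "is this subnet isolated?" check;
    # hypothetical helper, not the actual neutron implementation.
    def subnet_is_isolated(subnet, ports):
        router_ports = [p for p in ports
                        if p['device_owner'] == 'network:router_interface']
        for port in router_ports:
            if any(ip['subnet_id'] == subnet['id']
                   for ip in port['fixed_ips']):
                return False  # a router serves this subnet
        return True  # no router port yet -> considered isolated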

  Once it has been decided that a network is isolated, the static route
  for the metadata service may be passed to VMs. At that point we
  cannot run without metadata-proxies on the dhcp servers, even if a
  neutron router becomes available and the network becomes
  non-isolated.

  A proposal would be to remove the optimisation of not launching
  metadata-proxy agents on dhcp servers, which means we return to
  carrying the metadata-proxy processes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1450548/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1666779] [NEW] Expose neutron API via a WSGI script

2017-02-21 Thread Ihar Hrachyshka
Public bug reported:

As per the Pike goal [1], we should expose the neutron API via a WSGI
script and make the devstack installation use a web server for the
default deployment. This bug is an RFE/tracker for the feature.

[1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-
wsgi.html
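
A minimal sketch of such a script, assuming the stock api-paste.ini and
the `neutron' composite app it defines (illustrative only; the merged
change must also initialize configuration and may differ):

    # Hypothetical WSGI entry point (a sketch, not the merged change);
    # a web server such as mod_wsgi or uWSGI imports this module and
    # serves "application". A real script would parse neutron.conf and
    # set up logging before loading the app.
    from paste import deploy

    application = deploy.loadapp('config:/etc/neutron/api-paste.ini',
                                 name='neutron')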

** Affects: neutron
 Importance: Wishlist
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress


** Tags: api

** Changed in: neutron
 Assignee: (unassigned) => Ihar Hrachyshka (ihar-hrachyshka)

** Changed in: neutron
   Importance: Undecided => Wishlist

** Changed in: neutron
   Status: New => In Progress

** Tags added: api

** Changed in: neutron
Milestone: None => pike-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666779

Title:
  Expose neutron API via a WSGI script

Status in neutron:
  In Progress

Bug description:
  As per the Pike goal [1], we should expose the neutron API via a WSGI
  script and make the devstack installation use a web server for the
  default deployment. This bug is an RFE/tracker for the feature.

  [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-
  wsgi.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666779/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1504941] Re: RBAC-RFE: neutron net-show command should display all tenants using the network

2017-02-21 Thread Eran Kuris
** Changed in: neutron
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1504941

Title:
  RBAC-RFE: neutron net-show command should display all tenants using
  the network

Status in neutron:
  New

Bug description:
  On RDO Liberty I tested the neutron RBAC feature.
  When a network is assigned to more than one tenant, we still see only
  one tenant in neutron net-show:
  [root@cougar16 ~(keystone_admin)]# neutron net-show 590ca7b9-1682-4c40-8213-02feaa7a96cc
  +---------------------------+--------------------------------------+
  | Field                     | Value                                |
  +---------------------------+--------------------------------------+
  | admin_state_up            | True                                 |
  | id                        | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  | mtu                       | 0                                    |
  | name                      | internal_ipv4_a                      |
  | provider:network_type     | vxlan                                |
  | provider:physical_network |                                      |
  | provider:segmentation_id  | 70                                   |
  | router:external           | False                                |
  | shared                    | False                                |
  | status                    | ACTIVE                               |
  | subnets                   | 9a1a387e-88cf-484a-8b12-5a1834be0233 |
  | tenant_id                 | fa4add4659704239b771b0bccb8b6829     |
  +---------------------------+--------------------------------------+

  This network is shared with two tenants:
  [root@cougar16 ~(keystone_admin)]# neutron rbac-list
  +--------------------------------------+--------------------------------------+
  | id                                   | object_id                            |
  +--------------------------------------+--------------------------------------+
  | 4f1a9c9d-e820-46e4-b431-b3142c6bb245 | 818dd42f-f627-45d4-a578-dd475b9e19e4 |
  | 8c995ab1-dea6-411b-854c-a405cf5365fa | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  | abb375b9-95d0-4297-80f1-3f22f0f84a9e | b071a769-0d50-4d25-8730-fed3dea13a2f |
  | f3122b92-f47a-4a0f-a422-c9f7ed482341 | 590ca7b9-1682-4c40-8213-02feaa7a96cc |
  +--------------------------------------+--------------------------------------+

  
  [root@cougar16 ~(keystone_admin)]# rpm -qa |grep neutron 
  python-neutronclient-3.1.1-dev1.el7.centos.noarch
  python-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-ml2-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-common-7.0.0.0-rc2.dev21.el7.centos.noarch
  openstack-neutron-openvswitch-7.0.0.0-rc2.dev21.el7.centos.noarch
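
  Until net-show learns to display this, a workaround sketch with the
  v2.0 client (the credentials and endpoint below are illustrative)
  lists the RBAC entries for the network to see every tenant it is
  shared with:

    from neutronclient.v2_0 import client

    # Workaround sketch (assumes admin credentials): print each tenant
    # the network is shared with via its RBAC policies.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')
    net_id = '590ca7b9-1682-4c40-8213-02feaa7a96cc'
    rbacs = neutron.list_rbac_policies(object_id=net_id)['rbac_policies']
    for rbac in rbacs:
        print(rbac['target_tenant'], rbac['action'])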

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1504941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp