[Yahoo-eng-team] [Bug 1493414] Re: OVS Neutron agent is marking port as dead before they are deleted

2016-03-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/248908
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=5289d9494984b7c95407ad2f9b761b2e647953b2
Submitter: Jenkins
Branch: master

commit 5289d9494984b7c95407ad2f9b761b2e647953b2
Author: Ramu Ramamurthy 
Date:   Mon Nov 23 15:21:46 2015 -0500

Remove stale ofport drop-rule upon port-delete

When a port is deleted, that port is set to a dead-vlan, and
an ofport drop-flow is added in port_dead().

The ofport drop-flow gets removed only in some cases
in _bind_devices() - depending on the timing of the
concurrent port-deletion. In other cases, the drop-flow
never gets removed, and such garbage drop-flow rules
accumulate forever until the ovs-agent restarts.

The fix is to use the function update_stale_ofport_rules, which
solves this problem of tracking stale ofport flows
on deleted ports but currently applies only to
prevent_arp_spoofing.

Change-Id: I0d1dbe3918cc7d7b3d0cdc49d7b6ff85f9b02a17
Closes-Bug: #1493414


** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1493414

Title:
  OVS Neutron agent is marking port as dead before they are deleted

Status in neutron:
  Fix Released

Bug description:
  The situation is happening on Liberty-3.

  When clearing the gateway port or deleting the tenant network
  interface on a router, the OVS agent marks the ports as dead instead
  of treating them as removed (removing the security group and calling
  port_unbound()).

  This leaves stale OVS flows in br-int, and it may affect the
  port_unbound() logic in ovs_neutron_agent.py.

  In one iteration of rpc_loop, ovs_neutron_agent processes the deleted
  port via the process_deleted_ports() method, marking the qg- port as
  dead (an OVS flow rule to drop the traffic), and in another iteration
  it processes the removed port via the treat_devices_removed() method.

  In the first iteration, the port deletion is triggered by the port_delete() method:
  2015-09-04 14:16:20.337 DEBUG 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-e43234b1-633b-404d-92d0-0f844dadb586 admin 
0f6c0469ea6e4d95a27782c46021243a] port_delete message processed for port 
1c749258-74fb-498b-9a08-1fec6725a1cf from (pid=136030) port_delete 
/opt/openstack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:410

  and in the second iteration, the device removal is triggered by ovsdb:
  2015-09-04 14:16:20.848 DEBUG neutron.agent.linux.ovsdb_monitor [-] Output 
received from ovsdb monitor: 
{"data":[["bab86f35-d004-4df6-95c2-0f7432338edb","delete","qg-1c749258-74",49,["map",[["attached-mac","fa:16:3e:99:37:68"],["iface-id","1c749258-74fb-498b-9a08-1fec6725a1cf"],["iface-status","active"],"headings":["row","action","name","ofport","external_ids"]}
   from (pid=136030) _read_stdout 
/opt/openstack/neutron/neutron/agent/linux/ovsdb_monitor.py:50

  Log from ovs neutron agent:
  http://paste.openstack.org/show/445479/

  Steps to reproduce:
  1. Create router
  2. Add tenant network interface to the router
  3. Launch a VM
  4. Add external network gateway to created router
  5. Check the br-int for current port numbers
  6. Remove external network gateway
  7. Check the br-int for dead port flows (removed port qg-)
  8. Remove the network interface from tenant network
  9. Check the br-int for dead port flows.

  Repeat steps 4-9 a few times to see whether dead port flows appear
  in br-int.

  This affects legacy, DVR, and HA routers.
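
  The gist of the fix, as a hedged sketch (the map names and the bridge
  call here are illustrative, not the exact agent code): snapshot the
  vif-to-ofport map on each rpc_loop iteration and purge drop-flows for
  vifs that vanished since the last snapshot.

    # Sketch: any VIF present last iteration but gone now was deleted;
    # its port_dead() drop-flow, keyed on the old ofport, must be
    # removed explicitly.
    def update_stale_ofport_rules(previous_map, current_map, int_br):
        for vif_id, ofport in previous_map.items():
            if vif_id not in current_map:
                int_br.delete_flows(in_port=ofport)
        return dict(current_map)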

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1493414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1454455] Re: nova doesn't log to syslog

2016-03-27 Thread Launchpad Bug Tracker
[Expired for oslo.log because there has been no activity for 60 days.]

** Changed in: oslo.log
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1454455

Title:
  nova doesn't log to syslog

Status in OpenStack Compute (nova):
  Invalid
Status in oslo.log:
  Expired

Bug description:
  Logs from nova are not recorded when using syslog. Neutron logging
  works fine using the same rsyslog service. I've tried with debug and
  verbose enabled and disabled.

  
  1) Nova version:
   1:2014.2.2-0ubuntu1~cloud0 on Ubuntu 14.04

  2) Relevant log files:
  No relevant log files, as that is the problem

  3) Reproduction steps:
a) Set the following in nova.conf 
 logdir=/var/log/nova
b) Restart nova services
c) Confirm that logs are created in /var/log/nova
d) Remove logdir and add the following to nova.conf
use_syslog=true
syslog_log_facility=LOG_LOCAL0
e) Restart nova services
f) Nova's logs are not showing up in /var/log/syslog
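
  As a quick check of whether rsyslog itself accepts LOCAL0 messages,
  independent of nova/oslo.log, a minimal sketch:

    import logging
    from logging.handlers import SysLogHandler

    # Send a test message to the local syslog socket on facility
    # LOCAL0; if it does not appear in /var/log/syslog, the rsyslog
    # configuration (not nova) is the first thing to fix.
    logger = logging.getLogger('syslog-check')
    logger.addHandler(SysLogHandler(address='/dev/log',
                                    facility=SysLogHandler.LOG_LOCAL0))
    logger.warning('LOCAL0 test message from syslog-check')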

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1454455/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527668] Re: Service plugins need to be notified if a port belonging to a service instance they manage is updated

2016-03-27 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1527668

Title:
  Service plugins need to be notified if a port belonging to a service
  instance they manage is updated

Status in neutron:
  Expired

Bug description:
  Service plugins need to know if a port belonging to a service instance
  they manage is updated.

  For example:
  If an interface on a router managed by a service plugin is
  disabled/enabled by setting the corresponding port's admin_state_up to
  false/true respectively, then the service plugin should be notified.
  Currently the modification is done only in the update_port function
  defined in ml2/plugin.py; no further notification is sent to any
  service plugin.
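
  For context, a hedged sketch of what the requested notification could
  look like with the callbacks registry of this era (the emission from
  ml2 is exactly what is missing here; the handler below is
  hypothetical):

    from neutron.callbacks import events, registry, resources

    # Hypothetical service-plugin hook, reacting to admin_state_up
    # flips on ports the plugin manages.
    def port_updated(resource, event, trigger, **kwargs):
        port = kwargs.get('port') or {}
        print('port %s admin_state_up=%s'
              % (port.get('id'), port.get('admin_state_up')))

    registry.subscribe(port_updated, resources.PORT, events.AFTER_UPDATE)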

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1527668/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1512443] Re: config drive on RBD leaves orphaned loopback device mounts

2016-03-27 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1512443

Title:
  config drive on RBD leaves orphaned loopback device mounts

Status in OpenStack Compute (nova):
  Expired

Bug description:
  Version: Juno (with https://review.openstack.org/#/c/123073/
  backported)

  Reproduce:
  Create a VM with config-drive on ceph/rbd.

  Expected result:
  When the VM is created, part of the above patch copies the config
  drive onto RBD by using a loopback device to copy the contents. It is
  expected that once this is completed, the loopback device will be
  properly cleaned up, the end result being no loopback devices left
  open and the config drive properly stored on RBD.

  Actual result:
  After the config drive is copied to RBD, a loopback device is left
  attached to a deleted file:
  /dev/loop0: [0807]:33682451 
(/var/lib/nova/instances/0925ed53-16f1-48f2-a190-ab19706b80c6_del/disk.config 
(deleted))
  The config drive is successfully copied to RBD, but eventually all the 
loopback devices are consumed, causing subsequent VM creations (on the 
hypervisor in question) to fail and requiring the hypervisor to be rebooted to 
clean them up.
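
  The shape of the missing cleanup, as a hedged sketch (the helper name
  and the plain losetup/rbd calls are illustrative; nova uses its own
  utilities for these steps):

    import subprocess

    def copy_config_drive_via_loop(image_path, rbd_spec):
        # Attach a loop device for the config drive image, and make
        # sure it is detached even if the copy to RBD fails partway.
        loop_dev = subprocess.check_output(
            ['losetup', '--find', '--show', image_path]).decode().strip()
        try:
            # Illustrative: stream the block device into an RBD image.
            subprocess.check_call(['rbd', 'import', loop_dev, rbd_spec])
        finally:
            subprocess.check_call(['losetup', '-d', loop_dev])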

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1512443/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562681] [NEW] instance evacuate without image's metadata

2016-03-27 Thread guo.lei
Public bug reported:

 When I boot an instance from an image that has metadata, e.g.
hw_qemu_guest_agent=yes (meaning the qemu-guest-agent is enabled for
the instance), the qemu-guest-agent disappeared after the instance was
evacuated.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562681

Title:
  instance evacuate without image's metadata

Status in OpenStack Compute (nova):
  New

Bug description:
  When I boot an instance from an image that has metadata, e.g.
  hw_qemu_guest_agent=yes (meaning the qemu-guest-agent is enabled for
  the instance), the qemu-guest-agent disappeared after the instance
  was evacuated.
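
  A quick, hedged way to check whether the guest-agent channel survived
  on the destination host, using the libvirt bindings (the domain name
  is a placeholder):

    import libvirt

    # hw_qemu_guest_agent=yes should produce an
    # 'org.qemu.guest_agent.0' channel in the instance's libvirt XML.
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')  # placeholder name
    if 'org.qemu.guest_agent.0' not in dom.XMLDesc(0):
        print('guest agent channel missing after evacuate')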

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562681/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562670] [NEW] nova set-password return 409

2016-03-27 Thread zhaozhilong
Public bug reported:

1) I had created an instance:
# nova list
+--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                                                |
+--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+
| 40c65eed-7339-4a3a-a838-0c564bc78bcd | test1 | ACTIVE | -          | Running     | private=10.0.0.25, fd28:29fc:e927:0:f816:3eff:fe36:3666 |
+--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+

2) I forgot the password, so I have to reset the password for my instance:
# nova set-password 40c65eed-7339-4a3a-a838-0c564bc78bcd
New password: 
Again: 
ERROR (Conflict): Failed to set admin password on 
40c65eed-7339-4a3a-a838-0c564bc78bcd because error setting admin password (HTTP 
409) (Request-ID: req-4bfdc3c7-058d-4742-a414-ee1f99698f68)

In the end, I get the 409 return code.
Should I do something else before changing the password, or is this a bug?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562670

Title:
  nova set-password return 409

Status in OpenStack Compute (nova):
  New

Bug description:
  1) I had created an instance:
  # nova list
  +--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+
  | ID                                   | Name  | Status | Task State | Power State | Networks                                                |
  +--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+
  | 40c65eed-7339-4a3a-a838-0c564bc78bcd | test1 | ACTIVE | -          | Running     | private=10.0.0.25, fd28:29fc:e927:0:f816:3eff:fe36:3666 |
  +--------------------------------------+-------+--------+------------+-------------+---------------------------------------------------------+

  2) I forgot the password, so I have to reset the password for my instance:
  # nova set-password 40c65eed-7339-4a3a-a838-0c564bc78bcd
  New password: 
  Again: 
  ERROR (Conflict): Failed to set admin password on 
40c65eed-7339-4a3a-a838-0c564bc78bcd because error setting admin password (HTTP 
409) (Request-ID: req-4bfdc3c7-058d-4742-a414-ee1f99698f68)

  In the end, I get the 409 return code.
  Should I do something else before changing the password, or is this a bug?
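
  For reference, the same operation through python-novaclient of this
  era (the credentials and endpoint below are placeholders); one common
  cause of this 409 is the hypervisor being unable to set the password,
  e.g. no qemu guest agent running inside the instance:

    from novaclient import client

    nova = client.Client('2', 'admin', 'ADMIN_PASS', 'admin',
                         'http://controller:5000/v2.0')  # placeholders
    server = nova.servers.get('40c65eed-7339-4a3a-a838-0c564bc78bcd')
    server.change_password('new-secret')  # raises Conflict on HTTP 409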

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562665] [NEW] Nova-compute connects to all of the iscsi target in Kilo

2016-03-27 Thread Hiroyuki Eguchi
Public bug reported:

Nova-compute tries to connect to all of the iSCSI targets discovered
when using multipath in Kilo.
As a result, a lot of unnecessary iSCSI sessions occur on the
nova-compute node.

We have to choose the correct targets from the output of iscsiadm
discovery using the iscsi_properties of the volume we are trying to
connect.

The current steps for attaching a multipath device are:

1. discover iscsi targets
2. get all active iscsi sessions
3. compare 1. with 2. and connect to all iscsi targets not connected

It should be like this:

1. discover iscsi targets
2. choose correct targets from output of discovery
3. get all active iscsi sessions
4. compare 2. with 3. and connect to the iscsi target if it is not connected
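
A minimal sketch of the proposed step 2, assuming iscsiadm discovery
lines of the form 'portal,tpgt iqn' and the volume's iscsi_properties
carrying target_iqn/target_iqns (the helper name is illustrative):

    def choose_targets(discovery_output, iscsi_properties):
        # Keep only the discovered (portal, iqn) pairs matching the
        # volume being attached, instead of logging in to everything.
        wanted = set(iscsi_properties.get('target_iqns') or
                     [iscsi_properties['target_iqn']])
        targets = []
        for line in discovery_output.splitlines():
            portal, iqn = line.split()
            targets.append((portal.split(',')[0], iqn))
        return [t for t in targets if t[1] in wanted]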

** Affects: nova
 Importance: Undecided
 Assignee: Hiroyuki Eguchi (h-eguchi)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Hiroyuki Eguchi (h-eguchi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562665

Title:
  Nova-compute connects to all of the iscsi target in Kilo

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova-compute tries to connect to all of the iSCSI targets discovered
  when using multipath in Kilo.
  As a result, a lot of unnecessary iSCSI sessions occur on the
  nova-compute node.

  We have to choose the correct targets from the output of iscsiadm
  discovery using the iscsi_properties of the volume we are trying to
  connect.

  The current steps for attaching a multipath device are:

  1. discover iscsi targets
  2. get all active iscsi sessions
  3. compare 1. with 2. and connect to all iscsi targets not connected

  It should be like this:

  1. discover iscsi targets
  2. choose correct targets from output of discovery
  3. get all active iscsi sessions
  4. compare 2. with 3. and connect to the iscsi target if it is not connected

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1526138] Re: xenserver driver lacks of linux bridge qbrXXX

2016-03-27 Thread OpenStack Infra
** Changed in: nova
   Status: Opinion => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1526138

Title:
  xenserver driver lacks of linux bridge qbrXXX

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  1. Nova latest master branch, should be Mitaka with next release

  2. XenServer as a compute driver in OpenStack lacks a Linux bridge
  when using Neutron networking, and thus it cannot support Neutron
  security groups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1526138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562323] Re: test_server_basic_ops rename in tempest is breaking cells job

2016-03-27 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/298065
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e86b7fbb99de0c4cff6a9569f748bae1bde693a6
Submitter: Jenkins
Branch: master

commit e86b7fbb99de0c4cff6a9569f748bae1bde693a6
Author: Matt Riedemann 
Date:   Sun Mar 27 19:31:32 2016 -0400

Update cells blacklist regex for test_server_basic_ops

Tempest change 9bee3b92f1559cb604c8bd74dcca57805a85a97a
renamed a test in our blacklist, so update the filter to
handle both the old and new names.

The Tempest team is hesitant to revert the change so we
should handle it ourselves and eventually move to using
test uuids for our blacklist, but there might need to
be work in devstack-gate for that first.

Change-Id: Ibab3958044c21568d7fbbe0a298bb40bbbc20df3
Closes-Bug: #1562323


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562323

Title:
  test_server_basic_ops rename in tempest is breaking cells job

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) liberty series:
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  In Progress
Status in tempest:
  Won't Fix

Bug description:
  Nova has a blacklist on tempest tests that won't work in the cells v1
  job, including:

  https://github.com/openstack/nova/blob/master/devstack/tempest-dsvm-
  cells-rc#L77

  With change https://review.openstack.org/#/c/296842/, a blacklisted
  test was renamed so it is no longer skipped, and the cells job is now
  failing on all nova changes.

  We can either revert the tempest change or update the blacklist in
  nova, but since tempest is branchless we'd have to backport that nova
  change to stable/mitaka, stable/liberty and stable/kilo.
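
  Since the blacklist is a list of regexes, covering both names is a
  one-line alternation, e.g. (the exact method names here should be
  treated as illustrative):

    import re

    # Match the scenario test under either its old or new method name.
    blacklist_entry = re.compile(
        r'tempest\.scenario\.test_server_basic_ops\.TestServerBasicOps'
        r'\.(test_server_basicops|test_server_basic_ops)')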

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562649] [NEW] It prompt:Error:Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.

2016-03-27 Thread pkw1155402
Public bug reported:

When I started a host from the image tab in the web UI, it failed and prompted:
Error:Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.  (HTTP 500) (Request-ID: 
req-c26c3d65-dd95-470b-892e-d83b30291da3).

I checked the logs on the servers.
keystone.log:
2016-03-28 09:06:14.221 11430 INFO keystone.common.wsgi 
[req-8e062673-a5a5-443b-95b5-4cd1a3e82a5b - - - - -] POST 
http://controller:35357/v3/auth/tokens
2016-03-28 09:06:14.234 11430 WARNING keystone.common.wsgi 
[req-8e062673-a5a5-443b-95b5-4cd1a3e82a5b - - - - -] Authorization failed. The 
request you have made requires authentication. from 192.168.20.101

The environment was built on CentOS 7 according to the process from
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-public.html

Can anyone help me?

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562649

Title:
  It prompt:Error:Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if
  possible.  (HTTP 500)
  (Request-ID: req-c26c3d65-dd95-470b-892e-d83b30291da3)  when i started
  host from image.

Status in OpenStack Compute (nova):
  New

Bug description:
  When I started a host from the image tab in the web UI, it failed and prompted:
  Error:Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.  (HTTP 500) (Request-ID: 
req-c26c3d65-dd95-470b-892e-d83b30291da3).

  I checked the logs on the servers.
  keystone.log:
  2016-03-28 09:06:14.221 11430 INFO keystone.common.wsgi 
[req-8e062673-a5a5-443b-95b5-4cd1a3e82a5b - - - - -] POST 
http://controller:35357/v3/auth/tokens
  2016-03-28 09:06:14.234 11430 WARNING keystone.common.wsgi 
[req-8e062673-a5a5-443b-95b5-4cd1a3e82a5b - - - - -] Authorization failed. The 
request you have made requires authentication. from 192.168.20.101

  The environment was built on CentOS 7 according to the process from
  http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-public.html

  Can anyone help me?
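
  Given the 'Authorization failed' entries in keystone.log, a hedged
  first step is to verify the nova service user's credentials directly
  against keystone (the endpoint, user name, and password below are
  placeholders from the install guide):

    import json
    import requests

    # POST the same v3 password authentication nova would perform;
    # 201 means the credentials are good.
    body = {'auth': {'identity': {'methods': ['password'], 'password': {
        'user': {'name': 'nova', 'domain': {'id': 'default'},
                 'password': 'NOVA_PASS'}}}}}
    resp = requests.post('http://controller:35357/v3/auth/tokens',
                         data=json.dumps(body),
                         headers={'Content-Type': 'application/json'})
    print(resp.status_code)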

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562649/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562323] Re: test_server_basic_ops rename in tempest is breaking cells job

2016-03-27 Thread Matt Riedemann
** Also affects: nova/liberty
   Importance: Undecided
   Status: New

** Also affects: nova/mitaka
   Importance: Undecided
   Status: New

** Changed in: nova/liberty
   Status: New => Confirmed

** Changed in: nova/mitaka
   Status: New => Confirmed

** Changed in: nova
   Importance: Undecided => Critical

** Changed in: nova/mitaka
   Importance: Undecided => Critical

** Changed in: nova/liberty
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1562323

Title:
  test_server_basic_ops rename in tempest is breaking cells job

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) liberty series:
  Confirmed
Status in OpenStack Compute (nova) mitaka series:
  Confirmed
Status in tempest:
  Opinion

Bug description:
  Nova has a blacklist on tempest tests that won't work in the cells v1
  job, including:

  https://github.com/openstack/nova/blob/master/devstack/tempest-dsvm-
  cells-rc#L77

  With change https://review.openstack.org/#/c/296842/ there was a
  blacklisted test that was renamed so it's no longer skipped and the
  cells job is now failing on all nova changes.

  We can either revert the tempest change or update the blacklist in
  nova, but since tempest is branchless we'd have to backport that nova
  change to stable/mitaka, stable/liberty and stable/kilo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1562323/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1551288] Re: Fullstack native tests sometimes fail with an OVS agent failing to start with 'Address already in use' error

2016-03-27 Thread Assaf Muller
Still seeing instances of this bug. I have a deterministic solution
coming up.

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551288

Title:
  Fullstack native tests sometimes fail with an OVS agent failing to
  start with 'Address already in use' error

Status in neutron:
  Confirmed

Bug description:
  Example failure:
  test_connectivity(VLANs,Native) fails with this error:

  http://paste.openstack.org/show/488585/

  wait_until_env_is_up is timing out, which typically means that the
  expected number of agents failed to start. Indeed in this particular
  example I saw this line being output repeatedly in neutron-server.log:

  [29/Feb/2016 04:16:31] "GET /v2.0/agents.json HTTP/1.1" 200 1870
  0.005458

  Fullstack calls GET on agents to determine whether the expected
  number of agents started and are successfully reporting back to
  neutron-server.

  We then see that one of the three OVS agents crashed with this TRACE:
  http://paste.openstack.org/show/488586/

  This happens only with the native tests using the Ryu library.
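
  One deterministic direction (a sketch of the general technique, not
  necessarily the fix referenced above) is to let the kernel pick a
  free OF listen port for each agent before it starts:

    import socket

    def get_free_port():
        # Bind to port 0 so the kernel assigns an unused TCP port,
        # then release it and hand the number to the agent's
        # of_listen_port option.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(('127.0.0.1', 0))
        port = s.getsockname()[1]
        s.close()
        return port

  There is still a small window in which another process can grab the
  port before the agent binds it, so this is typically paired with a
  retry on startup failure.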

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562495] [NEW] A part of description written on the top of each step in ng-Launch Instance modal does not show

2016-03-27 Thread Kenji Ishii
Public bug reported:

At the top of each step in the ng-Launch Instance modal, a description
of that step page is displayed.
Due to the help element, users cannot see part of the description.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Attachment added: "img2.png"
   https://bugs.launchpad.net/bugs/1562495/+attachment/4613368/+files/img2.png

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562495

Title:
  A part of description written on the top of each step in ng-Launch
  Instance modal does not show

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  At the top of each step in the ng-Launch Instance modal, a
  description of that step page is displayed.
  Due to the help element, users cannot see part of the description.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562495/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562488] [NEW] "Set as Active Project" menu is shown even though the project is disabled

2016-03-27 Thread Kenji Ishii
Public bug reported:

The project selection pulldown in the header is controlled to display
only enabled projects.
However, on the project list page, the "Set as Active Project" menu is
displayed even though a project is disabled,
and the error message "Project switch failed for user xxx" is displayed.
It should be improved not to show the menu if the project is disabled.
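
The fix direction is straightforward filtering, e.g. this hedged
sketch (keystone project objects carry an 'enabled' flag):

    def switchable_projects(projects):
        # Only offer "Set as Active Project" for enabled projects;
        # disabled ones cannot be switched to and only produce the
        # "Project switch failed" error.
        return [p for p in projects if getattr(p, 'enabled', False)]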

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562488

Title:
  "Set as Active Project" menu is shown even though the project is
  disabled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The project selection pulldown in the header is controlled to
  display only enabled projects.
  However, on the project list page, the "Set as Active Project" menu
  is displayed even though a project is disabled,
  and the error message "Project switch failed for user xxx" is
  displayed.
  It should be improved not to show the menu if the project is disabled.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562483] [NEW] mitaka: needs to update language list based on translation progress

2016-03-27 Thread Akihiro Motoki
Public bug reported:

The language list in openstack_dashboard/settings.py needs to be updated based 
on the progress of translations in Mitaka.
It was discussed on the i18n list [1], and the criterion is 66% in
Horizon translations [2].
This helps users see well-translated languages.

The deadline of translations is set to Mar 28.
I will propose an update on Mar 28.

[1] http://lists.openstack.org/pipermail/openstack-i18n/2016-March/002036.html
[2] 
https://translate.openstack.org/version-group/view/mitaka-translation/projects/horizon/stable-mitaka

** Affects: horizon
 Importance: High
 Assignee: Akihiro Motoki (amotoki)
 Status: New


** Tags: mitaka-rc-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1562483

Title:
  mitaka: needs to update language list based on translation progress

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The language list in openstack_dashboard/settings.py needs to be updated 
based on the progress of translations in Mitaka.
  It was discussed on the i18n list [1], and the criterion is 66% in
  Horizon translations [2].
  This helps users see well-translated languages.

  The deadline of translations is set to Mar 28.
  I will propose an update on Mar 28.

  [1] http://lists.openstack.org/pipermail/openstack-i18n/2016-March/002036.html
  [2] 
https://translate.openstack.org/version-group/view/mitaka-translation/projects/horizon/stable-mitaka

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1562483/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1562467] [NEW] DVR logic in OVS doesn't handle CSNAT ofport change

2016-03-27 Thread Kevin Benton
Public bug reported:

If the ofport of a port changes due to it being quickly
unplugged/plugged (i.e. within a polling interval), the OVS agent will
not update the ofport in its DVR cache of local port info, so the port
will fail to be wired correctly.
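
A hedged sketch of the kind of refresh that would avoid the stale
cache entry (the structure names are illustrative, not the agent's
actual DVR bookkeeping):

    def refresh_cached_ofport(dvr_local_ports, vif_id, int_br):
        # Re-read the current ofport from OVS and refresh the cache
        # when the port was re-plugged with a new ofport within one
        # polling interval.
        current = int_br.get_vif_port_by_id(vif_id)
        cached = dvr_local_ports.get(vif_id)
        if current and cached and cached.ofport != current.ofport:
            dvr_local_ports[vif_id] = current
        return dvr_local_ports.get(vif_id)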

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1562467

Title:
  DVR logic in OVS doesn't handle CSNAT ofport change

Status in neutron:
  New

Bug description:
  If the ofport of a port changes due to it being quickly
  unplugged/plugged (i.e. within a polling interval), the OVS agent
  will not update the ofport in its DVR cache of local port info, so
  the port will fail to be wired correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1562467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp