[Yahoo-eng-team] [Bug 1479452] [NEW] Changing resource's domain_id should not be possible

2015-07-29 Thread Henrique Truta
Public bug reported:

Changing a resource's domain_id, especially a project's, is not
something we want, as discussed in the last topic of:
http://eavesdrop.openstack.org/meetings/keystone/2015/keystone.2015-07-21-18.01.log.html

This could cause security problems as well as hierarchy
inconsistency, since changing a parent project's domain_id would
require the whole hierarchy to be changed.

We shall deprecate the 'domain_id_immutable' property
(https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L66),
remove it in the future, and, for now, show a warning if it is set to
False, for example as sketched below.
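
A minimal sketch of what that deprecation could look like, assuming
oslo.config's deprecated_for_removal flag; check_domain_id_immutable is
a hypothetical helper, not the actual keystone patch:

    from oslo_config import cfg
    from oslo_log import log

    CONF = cfg.CONF
    LOG = log.getLogger(__name__)

    CONF.register_opts([
        cfg.BoolOpt('domain_id_immutable',
                    default=True,
                    deprecated_for_removal=True,
                    help='Set to False to allow changing domain_id. '
                         'Deprecated, to be removed in a future release.'),
    ])

    def check_domain_id_immutable():
        # Warn loudly while the deprecated behavior is still enabled.
        if not CONF.domain_id_immutable:
            LOG.warning('domain_id_immutable=False is deprecated; '
                        "changing a resource's domain_id will not be "
                        'possible in a future release.')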

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1479452

Title:
  Changing resource's domain_id should not be possible

Status in Keystone:
  New

Bug description:
  Changing a resource's domain_id, especially a project's, is not
  something we want, as discussed in the last topic of:
  http://eavesdrop.openstack.org/meetings/keystone/2015/keystone.2015-07-21-18.01.log.html

  This could cause security problems as well as hierarchy
  inconsistency, since changing a parent project's domain_id would
  require the whole hierarchy to be changed.

  We shall deprecate the 'domain_id_immutable' property
  (https://github.com/openstack/keystone/blob/master/etc/keystone.conf.sample#L66),
  remove it in the future, and, for now, show a warning if it is set to
  False.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479406] [NEW] LBaaS v2 Radware driver bugs

2015-07-29 Thread Evgeny Fedoruk
Public bug reported:


Problems:

1.
The LBaaS v2 Radware driver creates a proxy port on the member's network
if the member is located on a different network than the LB.
It should remove that port when the LB is deleted, but it does not
always delete the proxy port.
It also tries to allocate a proxy port for every other member located on
another network, which is unnecessary.

2.
Adding another listener to an existing LB is not reflected in the
Radware back-end system.
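
Regarding problem 1, a rough sketch of the expected cleanup, with
hypothetical helper names (_get_proxy_port_ids, _delete_port,
_remove_workload_from_backend); this is not the actual Radware driver
code:

    def delete_loadbalancer(self, context, lb):
        # The driver created at most one proxy port per member network
        # that differs from the LB network; all of them should go away
        # together with the LB.
        for port_id in self._get_proxy_port_ids(context, lb.id):
            self._delete_port(context, port_id)
        self._remove_workload_from_backend(lb)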

** Affects: neutron
 Importance: Undecided
 Assignee: Evgeny Fedoruk (evgenyf)
 Status: New


** Tags: lbaas radware

** Changed in: neutron
 Assignee: (unassigned) => Evgeny Fedoruk (evgenyf)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479406

Title:
  LBaaS v2 Radware driver bugs

Status in neutron:
  New

Bug description:
  
  Problems:

  1.
  The LBaaS v2 Radware driver creates a proxy port on the member's
  network if the member is located on a different network than the LB.
  It should remove that port when the LB is deleted, but it does not
  always delete the proxy port.
  It also tries to allocate a proxy port for every other member located
  on another network, which is unnecessary.

  2.
  Adding another listener to an existing LB is not reflected in the
  Radware back-end system.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479406/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464239] Re: mount: special device /dev/sdb does not exist

2015-07-29 Thread Matt Riedemann
The workaround in tripleo is here:
https://review.openstack.org/#/c/190629/

** Changed in: tripleo
   Status: In Progress => Fix Committed

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464239

Title:
  mount: special device /dev/sdb does not exist

Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  Fix Committed

Bug description:
  As of today it looks like all jobs fail due to a missing Ephemeral
  partition:

  mount: special device /dev/sdb does not exist

  
  

  This Nova commit looks suspicious: 7f8128f87f5a2fa93c857295fb7e4163986eda25
  Add the swap and ephemeral BDMs if needed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466851] Re: Move to graduated oslo.service

2015-07-29 Thread Tim Hinrichs
** Changed in: congress
   Status: Fix Committed => Fix Released

** Changed in: congress
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466851

Title:
  Move to graduated oslo.service

Status in Ceilometer:
  Fix Committed
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in Ironic:
  Fix Committed
Status in Keystone:
  Fix Released
Status in Magnum:
  Fix Committed
Status in Manila:
  Fix Released
Status in murano:
  Fix Committed
Status in neutron:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Committed
Status in Trove:
  Fix Released

Bug description:
  oslo.service library has graduated so all OpenStack projects should
  port to it instead of using oslo-incubator code.
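
  The typical change, shown here for illustration with nova's module
  paths (other projects are analogous): imports move from the
  per-project oslo-incubator copy to the graduated library.

      # Before: oslo-incubator code synced into the project tree
      # from nova.openstack.common import loopingcall
      # from nova.openstack.common import service

      # After: the graduated oslo.service library
      from oslo_service import loopingcall
      from oslo_service import service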

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1466851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479377] Re: Error while installing keystone

2015-07-29 Thread Dolph Mathews
** Project changed: keystone => keystone (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1479377

Title:
  Error while installing keystone

Status in keystone package in Ubuntu:
  New

Bug description:
  While I am installing keystone and python client for it I am facing
  the following error:

  Traceback (most recent call last):
    File "/usr/local/bin/keystone-manage", line 6, in <module>
  from keystone.cmd.manage import main
  ImportError: No module named cmd.manage
  dpkg: error processing package keystone (--configure):
   subprocess installed post-installation script returned error exit status 1
  Errors were encountered while processing:
   keystone

  
  The same happens when I do sudo -s /bin/sh -c "keystone-manage db_sync" keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/keystone/+bug/1479377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464239] Re: mount: special device /dev/sdb does not exist

2015-07-29 Thread Matt Riedemann
** Changed in: nova
   Status: Invalid => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1464239

Title:
  mount: special device /dev/sdb does not exist

Status in OpenStack Compute (nova):
  In Progress
Status in tripleo:
  Fix Committed

Bug description:
  As of today it looks like all jobs fail due to a missing Ephemeral
  partition:

  mount: special device /dev/sdb does not exist

  
  

  This Nova commit looks suspicious: 7f8128f87f5a2fa93c857295fb7e4163986eda25
  Add the swap and ephemeral BDMs if needed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1464239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1323766] Re: Incorrect Floating IP behavior in dual stack or ipv6 only network

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1323766

Title:
  Incorrect Floating IP behavior in dual stack or ipv6 only network

Status in neutron:
  Fix Released

Bug description:
  Investigation of the floatingip API revealed a few issues:
-- if the external network is associated with multiple subnets, one IP
address from each of the subnets is allocated, but the IP address used for
the floating IP is picked randomly. For example, a network could have both
an ipv6 and an ipv4 subnet. In my tests, for one such network it picked
the ipv4 address; for the other, it picked the ipv6 address.
-- given that one IP is allocated from each subnet, the addresses not used
for the floating IP are wasted.
-- most likely, ipv6 floating IPs (with the same mechanism as ipv4) won't
be supported, but the API allows ipv6 addresses to be used as floating
IPs.
-- it allows an IPv4 floating IP to be associated with a port's ipv6
addresses, and vice versa.

  In general, the behavior/semantics involved in the floating IP API
  needs to be looked at in the context of dual stack or ipv6 only
  network.
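
  A sketch of one possible direction, assuming subnets are dicts with an
  'ip_version' key as in neutron's DB layer; allocate the floating IP
  address only from the external network's IPv4 subnets instead of a
  randomly picked subnet:

      def select_floating_ip_subnets(external_subnets):
          # Only IPv4 subnets can back a floating IP under the current
          # NAT-based mechanism; skip IPv6 subnets entirely so their
          # addresses are neither misused nor wasted.
          v4_subnets = [s for s in external_subnets
                        if s['ip_version'] == 4]
          if not v4_subnets:
              raise ValueError('No IPv4 subnet available for floating IPs')
          return v4_subnets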

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1323766/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1469080] Re: LBaaS Message of DeviceNotFoundOnAgent is still the default one

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1469080

Title:
  LBaaS Message of DeviceNotFoundOnAgent is still the default one

Status in neutron:
  Fix Released

Bug description:
  The default message of an exception is "An unknown exception occurred."
  We need to use 'message' instead of 'msg' for the following exception,
  otherwise the default message is logged.

  class DeviceNotFoundOnAgent(n_exc.NotFound):
      msg = _('Unknown device with loadbalancer_id %(loadbalancer_id)s')
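
  Presumably the fix is just renaming the attribute, since the neutron
  exception base class formats its 'message' attribute:

      class DeviceNotFoundOnAgent(n_exc.NotFound):
          message = _('Unknown device with loadbalancer_id '
                      '%(loadbalancer_id)s')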

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1469080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473278] Re: add vhost user constants to portbinding extention

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473278

Title:
  add vhost user constants to portbinding extention

Status in neutron:
  Fix Released

Bug description:
  during the kilo development cycle

  a new vif type VIF_TYPE_VHOST_USER was added to the nova libvirt
  driver.

  in parallel an ovs dpdk mechanism driver was also created which
  consumes this new VIF type.

  as humans are error prone when multitasking, I forgot to introduce the
  patchset to add vhost_user constants for port binding to
  https://github.com/openstack/neutron/blob/master/neutron/extensions/portbindings.py

  this rfe bug captures the addition of the following binding constants
  for the vhost-user interface.

  #  - vhost_user_ovs_plug: Boolean used to inform Nova that the ovs plug
  # method should be used when binding the
  # vhost user vif.
  VHOST_USER_OVS_PLUG = 'vhostuser_ovs_plug'

  #  - vhost_user_mode: String value used to declare the mode of a
  # vhost-user socket
  VHOST_USER_MODE = 'vhostuser_mode'
  #  - server: socket created by hypervisor
  VHOST_USER_MODE_SERVER = 'server'
  #  - client: socket created by vswitch
  VHOST_USER_MODE_CLIENT = 'client'

  #  - vhostuser_socket String value used to declare the vhostuser socket name
  VHOST_USER_SOCKET = 'vhostuser_socket'

  #  - vif_type_vhost_user: vif type to enable use of the qemu vhost-user vif
  VIF_TYPE_VHOST_USER = 'vhostuser'

  # default location for vhostuser sockets
  VHOSTUSER_SOCKET_DIR = '/var/run/openvswitch'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1473278/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467531] Re: ip_lib get_gateway() for default route through an interface returns wrong output

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1467531

Title:
  ip_lib get_gateway() for default route through an interface returns
  wrong output

Status in neutron:
  Fix Released

Bug description:
  When the routing table has a default route through an interface, as
  shown below, get_gateway() of the IpRouteCommand class returns wrong
  output.

  # ip -4 route list dev qg-595abc24-c8
  default  scope link 
  8.8.8.0/24 via 19.4.4.4 
  19.4.4.0/24  proto kernel  scope link  src 19.4.4.4 

  # ip -6 route list dev qg-595abc24-c8 
  fd00::/64  proto kernel  metric 256 
  fe80::/64  proto kernel  metric 256 
  default  metric 1024 

  Output:
  gateway_dev.route.get_gateway(ip_version=4)
  {'gateway': 'link'}

  gateway_dev.route.get_gateway(ip_version=6)
  {'metric': 1024, 'gateway': '1024'}
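
  A simplified sketch of more careful parsing, assuming each line of the
  'ip route list' output is tokenized: only the token following 'via' is
  a gateway, so interface routes like 'default scope link' no longer
  yield a bogus gateway value.

      def parse_default_route(line):
          fields = line.split()
          if not fields or fields[0] != 'default':
              return None
          route = {}
          if 'via' in fields:
              route['gateway'] = fields[fields.index('via') + 1]
          if 'metric' in fields:
              route['metric'] = int(fields[fields.index('metric') + 1])
          return route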

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1467531/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468259] Re: native ovsdb iface should make sure manager that is used for connections is available

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468259

Title:
  native ovsdb iface should make sure manager that is used for
  connections is available

Status in neutron:
  Fix Released

Bug description:
  At the moment, if the native ovsdb iface is used, the user is required
  to set, in advance, a manager that allows such connections. Instead,
  the native implementation should just set that manager itself, once on
  startup, before any connection attempts.

  That would allow the new implementation to be used without requiring
  support for a set-manager call in all deployment tools.
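
  A minimal sketch of the idea; 'ensure_manager' and the ptcp address
  are illustrative, and the real code would use neutron's ovs_lib
  helpers rather than subprocess directly:

      import subprocess

      def ensure_manager(address='ptcp:6640:127.0.0.1'):
          # Register the manager idempotently before the native OVSDB
          # connection is attempted, so deployment tools need no extra
          # set-manager step.
          current = subprocess.check_output(
              ['ovs-vsctl', 'get-manager']).decode()
          if address not in current.split():
              # Note: set-manager replaces the manager list; a production
              # fix would append rather than overwrite.
              subprocess.check_call(['ovs-vsctl', 'set-manager', address])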

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478925] Re: Instances on the same compute node unable to connect to each other's ports

2015-07-29 Thread Davanum Srinivas (DIMS)
Is this a Nova issue or Neutron issue? :)

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478925

Title:
  Instances on the same compute node unable to connect to each other's
  ports

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  Openstack version: Icehouse 2014.1.5
  Nova version: 2.17.0

  I have two instances created on the same compute node, connected to a
  virtual network. I am trying to connect via the virtual network from
  one instance to the other, to a port on which no process is listening,
  and I expect to get a 'Connection refused' message from the kernel.

  This works as expected with any two instances on the same virtual
  network that are located on different compute nodes; however, if the
  instances are created on the same compute node, the connection times
  out.

  I have noticed that a temporary fix is to reorder the input iptables
  rules by moving the rule which drops packets in an INVALID state so
  that it comes after the RETURN rules for the other instances, as such:

  From:
  -A neutron-openvswi-ic05bb97b-2 -m state --state INVALID -j DROP
  -A neutron-openvswi-ic05bb97b-2 -m state --state RELATED,ESTABLISHED -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p tcp -m tcp --dport 22 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p tcp -m tcp --dport 35357 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p tcp -m tcp --dport 80 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p tcp -m tcp --dport 5000 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p tcp -m tcp -m multiport --dports 9000: 
-j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.41/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.25/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.45/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.17/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 4.0.0.12/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.36/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.43/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.40/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.35/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 6.0.0.3/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.28/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 4.0.0.10/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.22/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.44/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.47/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.44/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.39/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.20/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.26/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.38/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.29/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.48/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 6.0.0.6/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.15/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.24/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 4.0.0.11/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.45/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.54/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.13/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.43/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.33/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.42/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.46/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.42/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.23/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.50/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.12/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.16/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.14/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.37/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 5.0.0.7/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.41/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 3.0.0.46/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.48/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.30/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.21/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.27/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 5.0.0.8/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 6.0.0.5/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 5.0.0.6/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -s 9.0.0.49/32 -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p icmp -j RETURN

  To:
  -A neutron-openvswi-ic05bb97b-2 -m state --state RELATED,ESTABLISHED -j RETURN
  -A neutron-openvswi-ic05bb97b-2 -p tcp -m tcp --dport 22 -j RETURN
  -A 

[Yahoo-eng-team] [Bug 1464461] Re: delete action always causes an error (in kilo)

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464461

Title:
  delete action always causes an error (in kilo)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  When i did any delete actions (delete router, delete network etc...)
  in japanese environment , always get a error page.

  horizon error logs:
  -
  Traceback (most recent call last):
File /usr/lib/python2.7/site-packages/django/core/handlers/base.py, line 
132, in get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 52, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 36, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/decorators.py, line 84, in 
dec
  return view_func(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
71, in view
  return self.dispatch(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/django/views/generic/base.py, line 
89, in dispatch
  return handler(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 223, 
in post
  return self.get(request, *args, **kwargs)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 159, 
in get
  handled = self.construct_tables()
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 150, 
in construct_tables
  handled = self.handle_table(table)
File /usr/lib/python2.7/site-packages/horizon/tables/views.py, line 125, 
in handle_table
  handled = self._tables[name].maybe_handle()
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1640, 
in maybe_handle
  return self.take_action(action_name, obj_id)
File /usr/lib/python2.7/site-packages/horizon/tables/base.py, line 1482, 
in take_action
  response = action.multiple(self, self.request, obj_ids)
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
302, in multiple
  return self.handle(data_table, request, object_ids)
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
828, in handle
  exceptions.handle(request, ignore=ignore)
File /usr/lib/python2.7/site-packages/horizon/exceptions.py, line 364, in 
handle
  six.reraise(exc_type, exc_value, exc_traceback)
File /usr/lib/python2.7/site-packages/horizon/tables/actions.py, line 
817, in handle
  (self._get_action_name(past=True), datum_display))
  UnicodeDecodeError: 'ascii' codec can't decode byte 0xe3 in position 0: 
ordinal not in range(128)
  -

  It occurs in Japanese, Korean, Chinese, French and German, but does not
  occur in English and Spanish.
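
  A sketch of the failure mode and the usual fix under Python 2; the
  names are illustrative, not the exact horizon patch:

      # -*- coding: utf-8 -*-
      from django.utils.encoding import force_text

      past = u'削除しました'.encode('utf-8')  # translated verb, UTF-8 bytes
      name = u'ルーター1'                     # the object's display name

      # u'%s %s' % (past, name) implicitly decodes the byte string as
      # ASCII and raises UnicodeDecodeError ('ascii' codec can't decode
      # byte ...), which is exactly the error above.
      message = u'%s %s' % (force_text(past), force_text(name))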

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464461/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453074] Re: [OSSA 2015-010] help_text parameter of fields is vulnerable to arbitrary html injection (CVE-2015-3219)

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1453074

Title:
  [OSSA 2015-010] help_text parameter of fields is vulnerable to
  arbitrary html injection (CVE-2015-3219)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  The Field class help_text attribute is vulnerable to code injection if
  the text is somehow taken from user input.

  The Heat UI allows creating stacks from user input which defines
  parameters. Those parameters are then converted to input fields, which
  are vulnerable.

  The heat stack example exploit:

  description: Does not matter
  heat_template_version: '2013-05-23'
  outputs: {}
  parameters:
    param1:
      type: string
      label: normal_label
      description: hack=<script>alert('YOUR HORIZON IS PWNED')</script>
  resources: {}
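
  A sketch of the defensive fix, assuming Django form fields: escape
  user-controlled text before it becomes help_text, so the markup is
  rendered inert instead of executed.

      from django import forms
      from django.utils.html import escape

      def make_parameter_field(label, description):
          # 'description' may come straight from a user-supplied Heat
          # template, so it must not reach the page unescaped.
          return forms.CharField(label=escape(label),
                                 help_text=escape(description))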

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1453074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1448245] Re: VPNaaS UTs broken in neutron/tests/unit/extensions/test_l3.py

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1448245

Title:
  VPNaaS UTs broken in neutron/tests/unit/extensions/test_l3.py

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Recently, VPNaaS repo UTs have been failing in tests that inherit from
  Neutron tests. The tests worked on 4/22/2015 and were broken on
  4/24/2015. Will try to bisect to find the change in Neutron that
  affects the tests.

  Example failure:

  2015-04-24 06:40:39.838 | Captured pythonlogging:
  2015-04-24 06:40:39.838 | ~~~
  2015-04-24 06:40:39.838 | 2015-04-24 06:40:38,704ERROR 
[neutron.api.extensions] Extension path 'neutron/tests/unit/extensions' doesn't 
exist!
  2015-04-24 06:40:39.838 | 
  2015-04-24 06:40:39.838 | 
  2015-04-24 06:40:39.838 | Captured traceback:
  2015-04-24 06:40:39.838 | ~~~
  2015-04-24 06:40:39.838 | Traceback (most recent call last):
  2015-04-24 06:40:39.838 |   File 
neutron_vpnaas/tests/unit/db/vpn/test_vpn_db.py, line 886, in 
test_delete_router_interface_in_use_by_vpnservice
  2015-04-24 06:40:39.839 | expected_code=webob.exc.
  2015-04-24 06:40:39.839 |   File 
/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/src/neutron/neutron/tests/unit/extensions/test_l3.py,
 line 401, in _router_interface_action
  2015-04-24 06:40:39.839 | self.assertEqual(res.status_int, 
expected_code, msg)
  2015-04-24 06:40:39.839 |   File 
/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
  2015-04-24 06:40:39.839 | self.assertThat(observed, matcher, message)
  2015-04-24 06:40:39.839 |   File 
/home/jenkins/workspace/gate-neutron-vpnaas-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  2015-04-24 06:40:39.839 | raise mismatch_error
  2015-04-24 06:40:39.839 | testtools.matchers._impl.MismatchError: 200 != 
409
  2015-04-24 06:40:39.839 | 
  2015-04-24 06:40:39.839 | 
  2015-04-24 06:40:39.839 | Captured pythonlogging:
  2015-04-24 06:40:39.839 | ~~~
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,694 INFO 
[neutron.manager] Loading core plugin: 
neutron_vpnaas.tests.unit.db.vpn.test_vpn_db.TestVpnCorePlugin
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,694 INFO 
[neutron.manager] Service L3_ROUTER_NAT is supported by the core plugin
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,694 INFO 
[neutron.manager] Loading Plugin: neutron_vpnaas.services.vpn.plugin.VPNPlugin
  2015-04-24 06:40:39.840 | 2015-04-24 06:40:38,704ERROR 
[neutron.api.extensions] Extension path 'neutron/tests/unit/extensions' doesn't 
exist!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1448245/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447884] Re: Boot from volume, block device allocate timeout causes VM error, but volume would be available later

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447884

Title:
  Boot from volume, block device allocate timeout causes VM error, but
  volume would be available later

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When we try to boot multiple instances from volume (with a large image
  source) at the same time, we usually get a block device allocation
  error, as shown in nova-compute.log:

  2015-03-30 23:22:46.920 6445 WARNING nova.compute.manager [-] Volume id: 
551ea616-e1c4-4ef2-9bf3-b0ca6d4474dc finished being created but was not set as 
'available'
  2015-03-30 23:22:47.131 6445 ERROR nova.compute.manager [-] [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] Instance failed block device setup
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] Traceback (most recent call last):
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/compute/manager.py, line 1829, in 
_prep_block_device
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] do_check_attach=do_check_attach) +
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 407, in 
attach_block_devices
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] map(_log_and_attach, 
block_device_mapping)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 405, in 
_log_and_attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] bdm.attach(*attach_args, 
**attach_kwargs)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 339, in 
attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] do_check_attach=do_check_attach)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 46, in 
wrapped
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] ret_val = method(obj, context, *args, 
**kwargs)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/virt/block_device.py, line 229, in 
attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] volume_api.check_attach(context, 
volume, instance=instance)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]   File 
/usr/lib/python2.6/site-packages/nova/volume/cinder.py, line 305, in 
check_attach
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] raise 
exception.InvalidVolume(reason=msg)
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6] InvalidVolume: Invalid volume: status 
must be 'available'
  2015-03-30 23:22:47.131 6445 TRACE nova.compute.manager [instance: 
483472b2-61b3-4574-95e2-8cd0304f90f6]

  This error causes the VM to end up in error status:
  
+--+++--+-+--+
  | ID   | Name   | Status | Task State 
  | Power State | Networks |
  
+--+++--+-+--+
  | 1fa2d7aa-8bd9-4a22-8538-0a07d9dae8aa | inst02 | ERROR  | 
block_device_mapping | NOSTATE |  |
  
+--+++--+-+--+
  But the volume was in available status:
  ---+
  |  ID  |   Status  | Name | Size | Volume 
Type | Bootable | Attached to  |
  
+--+---+--+--+-+--+--+
  | a9ab2dc2-b117-44ef-8678-f71067a9e770 | available | None |  2   | 

[Yahoo-eng-team] [Bug 1451860] Re: Attached volume migration failed, due to incorrect argument order passed to swap_volume

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451860

Title:
  Attached volume migration failed, due to incorrect argument order
  passed to swap_volume

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in openstack-ansible:
  Fix Released

Bug description:
  Steps to reproduce:
  1. create a volume in cinder
  2. boot a server from image in nova
  3. attach this volume to server
  4. use ' cinder migrate  --force-host-copy True  
3fa956b6-ba59-46df-8a26-97fcbc18fc82 openstack-wangp11-02@pool_backend_1#Pool_1'

  log from nova compute (see attachment for detailed info):

  2015-05-05 00:33:31.768 ERROR root [req-b8424cde-e126-41b0-a27a-ef675e0c207f
  admin admin] Original exception being dropped: ['Traceback (most recent
  call last):\n', '  File /opt/stack/nova/nova/compute/manager.py, line 351,
  in decorated_function\n    return function(self, context, *args,
  **kwargs)\n', '  File /opt/stack/nova/nova/compute/manager.py, line 4982,
  in swap_volume\n    context, old_volume_id, instance_uuid=instance.uuid)\n',
  "AttributeError: 'unicode' object has no attribute 'uuid'\n"]

  
  according to my debug result:
  # here the parameters passed to swap_volume
  def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
      return self.manager.swap_volume(ctxt, instance, old_volume_id,
                                      new_volume_id)

  # the swap_volume function definition
  @wrap_exception()
  @reverts_task_state
  @wrap_instance_fault
  def swap_volume(self, context, old_volume_id, new_volume_id, instance):
      """Swap volume for an instance."""
      context = context.elevated()

      bdm = objects.BlockDeviceMapping.get_by_volume_id(
          context, old_volume_id, instance_uuid=instance.uuid)
      connector = self.driver.get_volume_connector(instance)

  
  You can see that the arguments are passed in the order (self, ctxt,
  instance, old_volume_id, new_volume_id) while the function definition
  is (self, context, old_volume_id, new_volume_id, instance).

  This causes the 'unicode' object has no attribute 'uuid' error when
  the code tries to access instance['uuid'].
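
  Made explicit, the RPC-side wrapper presumably needs to pass keyword
  arguments (or match the manager's parameter order) so that 'instance'
  does not land in 'old_volume_id':

      def swap_volume(self, ctxt, instance, old_volume_id, new_volume_id):
          return self.manager.swap_volume(ctxt,
                                          old_volume_id=old_volume_id,
                                          new_volume_id=new_volume_id,
                                          instance=instance)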


  BTW: this problem was introduced in
  https://review.openstack.org/#/c/172152

  affects both Kilo and master

  Thanks
  Peter

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451860/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450682] Re: nova unit tests failing with pbr 0.11

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450682

Title:
  nova unit tests failing with pbr 0.11

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  test_version_string_with_package_is_good breaks with the release of
  pbr 0.11

  
nova.tests.unit.test_versions.VersionTestCase.test_version_string_with_package_is_good
  
--

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File nova/tests/unit/test_versions.py, line 33, in 
test_version_string_with_package_is_good
  version.version_string_with_package())
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py,
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: '5.5.5.5-g9ec3421' != 
'2015.2.0-g9ec3421'

  
  
http://logs.openstack.org/27/169827/8/check/gate-nova-python27/2009c78/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1450682/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451931] Re: ironic password config not marked as secret

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451931

Title:
  ironic password config not marked as secret

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  The ironic config options for the password and auth token are not
  marked as secret, so the values get logged during startup in debug
  mode.
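
  The usual remedy, sketched with oslo.config: options registered with
  secret=True have their values masked when the configuration is dumped
  to the debug log. The option names here are illustrative.

      from oslo_config import cfg

      ironic_opts = [
          cfg.StrOpt('admin_password',
                     secret=True,  # masked as '***' in config dumps
                     help='Ironic keystone admin password.'),
          cfg.StrOpt('admin_auth_token',
                     secret=True,
                     help='Ironic keystone auth token.'),
      ]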

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451931/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450624] Re: Nova waits for events from neutron on resize-revert that aren't coming

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1450624

Title:
  Nova waits for events from neutron on resize-revert that aren't coming

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  On resize-revert, the original host was waiting for plug events from
  neutron before restarting the instance. These aren't sent since we
  don't ever unplug the vifs. Thus, we'll always fail like this:

  
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 134, in _dispatch_and_reply
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 177, in _dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 123, in _do_dispatch
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py,
 line 88, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher payload)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/exception.py,
 line 71, in wrapped
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 298, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher pass
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 284, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 348, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 326, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
kwargs['instance'], e, sys.exc_info())
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py,
 line 82, in __exit__
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/bbc/openstack-10.0-bbc40/nova/local/lib/python2.7/site-packages/nova/compute/manager.py,
 line 314, in decorated_function
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2015-04-30 19:45:42.602 23513 TRACE oslo.messaging.rpc.dispatcher   File 

[Yahoo-eng-team] [Bug 1456963] Re: VNC Console failed to load with IPv6 Addresses

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1456963

Title:
  VNC Console failed to load with IPv6 Addresses

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Description of problem:
  After installing openstack with packstack over IPv6 addresses (all
  components using IPv6), the VNC console is unreachable

  Version-Release number of selected component (if applicable):
  Packstack version-
  packstack Kilo 2015.1.dev1537.gba5183c
  RHEL version -
  Red Hat Enterprise Linux Server release 7.1 (Maipo)
  openstack versions:
  2015.1.0
  novnc-0.5.1-2.el7.noarch
  openstack-nova-cert-2015.1.0-3.el7.noarch
  openstack-nova-compute-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  python-novaclient-2.23.0-1.el7.noarch

  
  How reproducible:
  Try to open noVNC console via the web browser with IPv6 address

  Steps to Reproduce:
  1. Install openstack with IPv6 addresses for all components
  2. Login to the horizon dashboard using IPv6
  3. Launch an instance
  4. try to activate console

  Actual results:
  Console failed to connect - error 1006

  Expected results:
  Console should connect successfully

  Additional info:

  nova novnc log:
  2015-05-12 10:25:33.961 15936 INFO nova.console.websocketproxy [-] WebSocket 
server settings:
  2015-05-12 10:25:33.962 15936 INFO nova.console.websocketproxy [-]   - Listen 
on ::0:6080
  2015-05-12 10:25:33.962 15936 INFO nova.console.websocketproxy [-]   - Flash 
security policy server
  2015-05-12 10:25:33.962 15936 INFO nova.console.websocketproxy [-]   - Web 
server. Web root: /usr/share/novnc
  2015-05-12 10:25:33.963 15936 INFO nova.console.websocketproxy [-]   - No 
SSL/TLS support (no cert file)
  2015-05-12 10:25:33.965 15936 INFO nova.console.websocketproxy [-]   - 
proxying from ::0:6080 to None:None
  2015-05-13 10:33:12.084 15936 CRITICAL nova [-] UnboundLocalError: local 
variable 'exc' referenced before assignment
  2015-05-13 10:33:12.084 15936 TRACE nova Traceback (most recent call last):
  2015-05-13 10:33:12.084 15936 TRACE nova   File /usr/bin/nova-novncproxy, 
line 10, in <module>
  2015-05-13 10:33:12.084 15936 TRACE nova sys.exit(main())
  2015-05-13 10:33:12.084 15936 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py, line 49, in main
  2015-05-13 10:33:12.084 15936 TRACE nova port=CONF.novncproxy_port)
  2015-05-13 10:33:12.084 15936 TRACE nova   File 
/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py, line 72, in proxy
  2015-05-13 10:33:12.084 15936 TRACE nova 
RequestHandlerClass=websocketproxy.NovaProxyRequestHandler
  2015-05-13 10:33:12.084 15936 TRACE nova   File 
/usr/lib/python2.7/site-packages/websockify/websocket.py, line 1018, in 
start_server
  2015-05-13 10:33:12.084 15936 TRACE nova self.msg("handler exception: 
%s", str(exc))
  2015-05-13 10:33:12.084 15936 TRACE nova UnboundLocalError: local variable 
'exc' referenced before assignment
  2015-05-13 10:33:12.084 15936 TRACE nova 
  2015-05-13 10:52:41.893 3696 INFO nova.console.websocketproxy [-] WebSocket 
server settings:
  2015-05-13 10:52:41.893 3696 INFO nova.console.websocketproxy [-]   - Listen 
on ::0:6080
  2015-05-13 10:52:41.894 3696 INFO nova.console.websocketproxy [-]   - Flash 
security policy server
  2015-05-13 10:52:41.894 3696 INFO nova.console.websocketproxy [-]   - Web 
server. Web root: /usr/share/novnc
  2015-05-13 10:52:41.894 3696 INFO nova.console.websocketproxy [-]   - No 
SSL/TLS support (no cert file)
  2015-05-13 10:52:41.920 3696 INFO nova.console.websocketproxy [-]   - 
proxying from ::0:6080 to None:None
  2015-05-13 10:54:04.345 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connecting to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.380 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connected to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.388 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connecting to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.408 3979 INFO oslo_messaging._drivers.impl_rabbit 
[req-e47dae76-1c51-4ce8-9100-d98022fc6e34 - - - - -] Connected to AMQP server 
on 2001:77:77:77:f816:3eff:fe95:8683:5672
  2015-05-13 10:54:04.554 3979 INFO nova.console.websocketproxy 

[Yahoo-eng-team] [Bug 1465922] Re: Password visible in clear text in keystone.log when a user is created and keystone debug logging is enabled

2015-07-29 Thread Alan Pevec
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1465922

Title:
  Password visible in clear text in keystone.log when a user is created
  and keystone debug logging is enabled

Status in Keystone:
  Fix Released
Status in Keystone juno series:
  In Progress
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  grep CLEARTEXTPASSWORD keystone.log

  2015-06-16 06:44:39.770 20986 DEBUG keystone.common.controller [-]
  RBAC: Authorizing identity:create_user(user={u'domain_id': u'default',
  u'password': u'CLEARTEXTPASSWORD', u'enabled': True,
  u'default_project_id': u'0175b43419064ae38c4b74006baaeb8d', u'name':
  u'DermotJ'}) _build_policy_check_credentials /usr/lib/python2.7/site-
  packages/keystone/common/controller.py:57

  Issue code:
  
https://github.com/openstack/keystone/blob/master/keystone/common/controller.py#L57

  LOG.debug('RBAC: Authorizing %(action)s(%(kwargs)s)', {
      'action': action,
      'kwargs': ', '.join(['%s=%s' % (k, kwargs[k]) for k in kwargs])})

  Shadowing the values of sensitive fields like 'password' with some
  meaningless placeholder text is one way to fix this, for example as
  sketched below.
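
  One way to do that shadowing, assuming oslo.utils is available:
  strutils.mask_password() replaces password values in a formatted
  string before it reaches the log (LOG, action and kwargs as in the
  issue code above).

      from oslo_utils import strutils

      LOG.debug('RBAC: Authorizing %(action)s(%(kwargs)s)', {
          'action': action,
          'kwargs': strutils.mask_password(
              ', '.join(['%s=%s' % (k, kwargs[k]) for k in kwargs]))})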

  Well, in addition to this, I think we should never pass the 'password'
  along the code in its original value or save it in any persistent
  store; instead we should convert it to a strong hash value as early as
  possible. With the help of a good hash system, we never need the
  original value of the password, right?

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1465922/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453666] Re: libvirt: guestfs api makes nova-compute hang

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453666

Title:
  libvirt: guestfs api makes nova-compute hang

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Latest Kilo code.

  In inspect_capabilities() of nova/virt/disk/vfs/guestfs.py, the
  guestfs API, which is a C extension, hangs the nova-compute process
  when it is invoked. This problem results in message queue timeout
  errors and instance boot failures.
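
  One common mitigation, sketched with eventlet: dispatch the blocking
  C-extension call to a native OS thread via tpool so the service's
  green threads keep running. This is not necessarily the exact upstream
  fix.

      from eventlet import tpool
      import guestfs

      def probe_guestfs_capabilities():
          g = guestfs.GuestFS()
          try:
              # launch() can block for a long time inside C code, which
              # would freeze the eventlet hub if called directly.
              tpool.execute(g.launch)
          finally:
              g.close()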

  An example of this problem is:

  2015-05-09 17:07:08.393 4449 DEBUG nova.virt.disk.vfs.api 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Using primary VFSGuestFS 
instance_for_image /usr/lib/python2.7/site-packages/nova/virt/disk/vfs/api.py:50
  2015-05-09 17:08:35.443 4449 DEBUG nova.virt.disk.vfs.guestfs 
[req-1f7c1104-2679-43a5-bbcb-f73114ce9103 - - - - -] Setting up appliance for 
/var/lib/nova/instances/0517e2a9-469c-43f4-a129-f489fc1c8356/disk qcow2 setup 
/usr/lib/python2.7/site-packages/nova/virt/disk/vfs/guestfs.py:169
  2015-05-09 17:08:35.457 4449 DEBUG nova.openstack.common.periodic_task 
[req-bb78b74b-bed7-450f-bd40-19686aab2c3e - - - - -] Running periodic task 
ComputeManager._instance_usage_audit run_periodic_tasks 
/usr/lib/python2.7/site-packages/nova/openstack/common/periodic_task.py:219
  2015-05-09 17:08:35.461 4449 INFO oslo_messaging._drivers.impl_rabbit 
[req-bb78b74b-bed7-450f-bd40-19686aab2c3e - - - - -] Connecting to AMQP server 
on 127.0.0.1:5671
  2015-05-09 17:08:35.472 4449 ERROR nova.compute.manager [-] Instance failed 
network setup after 1 attempt(s)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager Traceback (most 
recent call last):
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 1783, in 
_allocate_network_async
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager 
system_metadata=sys_meta)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 739, in 
_instance_update
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager **kwargs)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/conductor/api.py, line 308, in 
instance_update
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager updates, 
'conductor')
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/nova/conductor/rpcapi.py, line 194, in 
instance_update
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager service=service)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py, line 156, in 
call
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager retry=self.retry)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/transport.py, line 90, in 
_send
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager timeout=timeout, 
retry=retry)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
350, in send
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager retry=retry)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
339, in _send
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager result = 
self._waiter.wait(msg_id, timeout)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
243, in wait
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager message = 
self.waiters.get(msg_id, timeout=timeout)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager   File 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py, line 
149, in get
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager 'to message ID 
%s' % msg_id)
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager MessagingTimeout: 
Timed out waiting for a reply to message ID 8ff07520ea8743c997b5017f6638a0df
  2015-05-09 17:08:35.472 4449 TRACE nova.compute.manager
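
  For reference, the usual mitigation for this class of bug is to push the
  blocking C call into a native OS thread so the eventlet hub keeps running.
  A minimal sketch, assuming eventlet's tpool (this is not nova's actual
  patch):

  import time
  from eventlet import tpool

  def blocking_probe():
      # Stands in for a guestfs C call that never yields to the hub.
      time.sleep(5)
      return 'capabilities'

  # tpool.execute() runs the call in a real OS thread, so green threads
  # (RPC heartbeats, periodic tasks) keep being serviced and no
  # MessagingTimeout occurs while the probe runs.
  result = tpool.execute(blocking_probe)
  print(result)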

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453666/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447084] Re: view hypervisor details should be controlled by policy.json

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447084

Title:
  view hypervisor details should be controlled by policy.json

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When a user with non-admin permissions attempts to view the hypervisor
  details (/v2/2f8728e1c3214d8bb59903ba654ed6c1/os-hypervisors/1), we
  see the following error:

  2015-04-19 21:34:22.194 23179 ERROR 
nova.api.openstack.compute.contrib.hypervisors 
[req-5caab0db-31aa-4a24-9263-750af6555ef5 
605c378ebded02d6a2deebe138c0ef9d6a0ddf39447297105dcc4eb18c7cc062 
9b0d73e660af434481a0a9b6d6a3bab7 - - -] User does not have admin privileges
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors Traceback (most recent call 
last):
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/api/openstack/compute/contrib/hypervisors.py,
 line 147, in show
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors service = 
self.host_api.service_get_by_compute_host(context, hyp.host)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/compute/api.py, line 3451, in 
service_get_by_compute_host
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors return 
objects.Service.get_by_compute_host(context, host_name)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/objects/base.py, line 163, in wrapper
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors result = fn(cls, context, 
*args, **kwargs)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/objects/service.py, line 151, in 
get_by_compute_host
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors db_service = 
db.service_get_by_compute_host(context, host)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/db/api.py, line 139, in 
service_get_by_compute_host
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors use_slave=use_slave)
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line 214, in 
wrapper
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors 
nova.context.require_admin_context(args[0])
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors   File 
/usr/lib/python2.7/site-packages/nova/context.py, line 235, in 
require_admin_context
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors raise 
exception.AdminRequired()
  2015-04-19 21:34:22.194 23179 TRACE 
nova.api.openstack.compute.contrib.hypervisors AdminRequired: User does not 
have admin privileges

  
  This is caused because the
/usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api layer mandates that
only an admin can perform this operation. This should not be the case;
instead, the permissions should be controlled by the rules defined in nova's
policy.json. This used to work for non-admins until a few days/weeks ago.
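
  A sketch of the policy-driven check the report asks for, modeled on the
  nova v2 extension pattern (names follow the hypervisors extension but are
  illustrative, not the exact patch):

  from nova.api.openstack import extensions

  authorize = extensions.extension_authorizer('compute', 'hypervisors')

  # Controller method: authorization is governed by the
  # "compute_extension:hypervisors" rule in policy.json instead of a
  # hard-coded admin check in the DB layer.
  def show(self, req, id):
      context = req.environ['nova.context']
      authorize(context)
      # ... perform the lookup with an elevated context if the rule allows ...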

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465927] Re: variable 'version' is undefined in function '_has_cpu_policy_support'

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1465927

Title:
  variable 'version' is undefined in function '_has_cpu_policy_support'

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  My running environment is
  openstack-nova-compute-2015.1.0-3.el7.noarch
  python-nova-2015.1.0-3.el7.noarch
  openstack-nova-novncproxy-2015.1.0-3.el7.noarch
  openstack-nova-conductor-2015.1.0-3.el7.noarch
  openstack-nova-api-2015.1.0-3.el7.noarch
  openstack-nova-console-2015.1.0-3.el7.noarch
  openstack-nova-scheduler-2015.1.0-3.el7.noarch
  openstack-nova-serialproxy-2015.1.0-3.el7.noarch
  openstack-nova-common-2015.1.0-3.el7.noarch

  When booting an instance on a host with libvirt version 1.2.10 and a flavor
  with the key hw:cpu_policy=dedicated set, the following appears in the log:

  File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py,
  line 3404, in _has_cpu_policy_support

  File /usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py,
  line 524, in _version_to_string

  TypeError: 'module'  object is not iterable  in
  nova/virt/libvirt/driver.py

  Source code of the Kilo version is below:

      def _has_cpu_policy_support(self):
          for ver in BAD_LIBVIRT_CPU_POLICY_VERSIONS:
              if self._host.has_version(ver):
                  ver_ = self._version_to_string(version)
                  raise exception.CPUPinningNotSupported(reason=_(
                      'Invalid libvirt version %(version)s') % {'version': ver_})
          return True

  I believe this function contains a writing mistake:

      ver_ = self._version_to_string(version)

  So when the libvirt version is in BAD_LIBVIRT_CPU_POLICY_VERSIONS, there
  will be a TypeError.

  It should be: ver_ = self._version_to_string(ver)
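
  For clarity, a sketch of the corrected function (the snippet above with the
  one-line fix applied):

      def _has_cpu_policy_support(self):
          for ver in BAD_LIBVIRT_CPU_POLICY_VERSIONS:
              if self._host.has_version(ver):
                  ver_ = self._version_to_string(ver)  # use the loop variable
                  raise exception.CPUPinningNotSupported(reason=_(
                      'Invalid libvirt version %(version)s') % {'version': ver_})
          return True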

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1465927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468828] Re: HA router-create breaks ML2 drivers that implement create_network such as Arista

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468828

Title:
  HA router-create  breaks ML2 drivers that implement create_network
  such as Arista

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  This issue was discovered with Arista ML2 driver, when an HA router
  was created. However, this will impact any ML2 driver that implements
  create_network.

  When an admin creates an HA router (neutron router-create --ha), the HA
framework invokes network_create() and sets the tenant-id to '' (the empty string).
  The network_create() ML2 mech driver API expects the tenant-id to be set to a
valid ID.
  Any ML2 driver that relies on the tenant-id will fail/reject the
network_create() request, resulting in router-create failing.
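
  A sketch (driver and backend names hypothetical) of how a mechanism driver
  can tolerate the empty tenant_id that HA-router networks carry:

  def create_network_postcommit(self, context):
      network = context.current
      # tenant_id is '' for HA-router internal networks; fall back to a
      # well-known owner instead of rejecting the request.
      tenant_id = network['tenant_id'] or 'ha-internal'
      self.backend.create_network(tenant_id, network['id'])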

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459758] Re: inline flavor migration fails with pre-kilo instances

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459758

Title:
  inline flavor migration fails with pre-kilo instances

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  If instance.save() is used to migrate the flavor from sys_meta to
  instance_extra, this only works if the instance already has an
  instance_extra row.

  In the case where it is missing, the update call silently fails to make
  any changes to the database, and you get lots of
  OrphanedInstanceErrors when listing instances, because the instance no
  longer has any flavor.
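
  A sketch (model and session names hypothetical) of the insert-or-update the
  migration needs; an UPDATE against a missing row matches nothing and
  reports success, silently dropping the flavor:

  def migrate_flavor(session, instance_uuid, flavor_json):
      rows = session.query(InstanceExtra).\
          filter_by(instance_uuid=instance_uuid).\
          update({'flavor': flavor_json})
      if rows == 0:
          # Pre-Kilo instance with no instance_extra row: create one
          # instead of assuming the update took effect.
          session.add(InstanceExtra(instance_uuid=instance_uuid,
                                    flavor=flavor_json))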

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1444497] Re: Instance doesn't get an address via DHCP (nova-network) because of issue with live migration

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1444497

Title:
  Instance doesn't get an address via DHCP (nova-network) because of
  issue with live migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When an instance is migrated to another compute node, its DHCP lease is not
removed from the first compute node, even after the instance is terminated.
  If a new instance gets the same IP that was present on a previous instance
created on the first compute node (where the DHCP lease for this IP remains),
then dnsmasq refuses the DHCP request for that IP address from the new
instance with a different MAC.

  Steps to reproduce:
  Scenario:
  1. Create a cluster (CentOS, nova-network with Flat-DHCP, Ceph for 
images and volumes)
  2. Add 1 node with controller and ceph OSD roles
  3. Add 2 nodes with compute and ceph OSD roles
  4. Deploy the cluster

  5. Create a VM
  6. Wait until the VM gets an IP address via DHCP (check the VM console log)
  7. Migrate the VM to another compute node.
  8. Terminate the VM.

  9. Repeat steps 5 to 8 several times (in my case, 4-6 times was 
enough) until a new instance stops receiving an IP address via DHCP.
  10. Check dnsmasq-dhcp.log (/var/log/daemon.log on the compute 
node) for messages like:
  =
  2014-11-09T20:28:29.671344+00:00 warning: not using configured address 
10.0.0.2 because it is leased to fa:16:3e:65:70:be

  This means that:
 I. An instance was created on compute node-1 and got a dhcp lease:
   nova-dhcpbridge.log
  2014-11-09 20:12:03.811 27360 DEBUG nova.dhcpbridge [-] Called 'add' for mac 
'fa:16:3e:65:70:be' with ip '10.0.0.2' main 
/usr/lib/python2.6/site-packages/nova/cmd/dhcpbridge.py:135

II. When the instance was migrated from compute node-1 to node-3, 
'dhcp_release' was not performed on compute node-1; see the time range 
in the logs: 2014-11-09 20:14:36-37
   Running.log (node-1)
  2014-11-09T20:14:36.647588+00:00 debug: cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf conntrack -D -r 10.0.0.2
  ### But there is missing a command like: sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.0.0.2 fa:16:3e:65:70:be

III. On the compute node-3, DHCP lease was added and it was successfully 
removed when the instance was terminated:
   Running.log (node-3)
  2014-11-09T20:15:17.250243+00:00 debug: cmd (subprocess): sudo nova-rootwrap 
/etc/nova/rootwrap.conf dhcp_release br100 10.0.0.2 fa:16:3e:65:70:be

IV. When another instance got the same address '10.0.0.2' and was 
created on node-1, it did not get an IP address via DHCP:
   Running.log (node-1)
  2014-11-09T20:28:29.671344+00:00 warning: not using configured address 
10.0.0.2 because it is leased to fa:16:3e:65:70:be
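
  The missing cleanup on the source node, sketched with nova's utils wrapper
  (this mirrors the dhcp_release call node-3 issues on termination; the
  helper placement is illustrative, not the actual patch):

  from nova import utils

  def release_dhcp(dev, address, mac_address):
      # Sends a DHCPRELEASE to the local dnsmasq so the (IP, MAC) lease is
      # dropped and the address can be re-leased to a new MAC.
      utils.execute('dhcp_release', dev, address, mac_address,
                    run_as_root=True)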

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1444497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446082] Re: Instance without extra data crashes nova-compute

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446082

Title:
  Instance without extra data crashes nova-compute

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  I'm upgrading from Icehouse to Kilo. I have a single instance that was
  created in Icehouse. After the upgrade, nova-compute crashes because
  it's looking for instance extra data that is not there.

  To fix this, we need to check if there is any extra data for the
  instance before trying to read properties such as numa_topology.
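
  A minimal sketch of that defensive read: treat a missing instance_extra row
  (db_inst['extra'] is None) as "no data" instead of calling .get() on None.

  extra = db_inst.get('extra') or {}
  numa_topology = extra.get('numa_topology')  # None for pre-Kilo instances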

  # dpkg -l | grep nova
  ii  nova-common 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - common files
  ii  nova-compute1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - compute node base
  ii  nova-compute-kvm1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - compute node (KVM)
  ii  nova-compute-libvirt1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute - compute node libvirt support
  ii  python-nova 1:2015.1~rc1-0ubuntu1~cloud0  
all  OpenStack Compute Python libraries
  ii  python-novaclient   1:2.22.0-0ubuntu1~cloud0  
all  client library for OpenStack Compute API

  nova-compute.log:
  2015-04-20 17:35:09.214 15508 DEBUG oslo_concurrency.lockutils 
[req-43d3110a-cac7-425e-842c-f725bda91c10 - - - - -] Lock compute_resources 
acquired by _update_available_resource :: waited 0.000s inner 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:444
  2015-04-20 17:35:09.299 15508 DEBUG oslo_concurrency.lockutils 
[req-43d3110a-cac7-425e-842c-f725bda91c10 - - - - -] Lock compute_resources 
released by _update_available_resource :: held 0.085s inner 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:456
  Traceback (most recent call last):
File /usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py, line 457, in 
fire_timers
  timer()
File /usr/lib/python2.7/dist-packages/eventlet/hubs/timer.py, line 58, in 
__call__
  cb(*args, **kw)
File /usr/lib/python2.7/dist-packages/eventlet/greenthread.py, line 214, 
in main
  result = function(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/openstack/common/service.py, 
line 497, in run_service
  service.start()
File /usr/lib/python2.7/dist-packages/nova/service.py, line 183, in start
  self.manager.pre_start_hook()
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1287, 
in pre_start_hook
  self.update_available_resource(nova.context.get_admin_context())
File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 6236, 
in update_available_resource
  rt.update_available_resource(context)
File /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py, 
line 402, in update_available_resource
  self._update_available_resource(context, resources)
File /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py, line 
445, in inner
  return f(*args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py, 
line 436, in _update_available_resource
  'numa_topology'])
File /usr/lib/python2.7/dist-packages/nova/objects/base.py, line 163, in 
wrapper
  result = fn(cls, context, *args, **kwargs)
File /usr/lib/python2.7/dist-packages/nova/objects/instance.py, line 
1152, in get_by_host_and_node
  expected_attrs)
File /usr/lib/python2.7/dist-packages/nova/objects/instance.py, line 
1068, in _make_instance_list
  expected_attrs=expected_attrs)
File /usr/lib/python2.7/dist-packages/nova/objects/instance.py, line 501, 
in _from_db_object
  db_inst.get('extra').get('numa_topology'))
  AttributeError: 'NoneType' object has no attribute 'get'
  2015-04-20 17:35:09.301 15508 ERROR nova.openstack.common.threadgroup 
[req-12483464-12a6-4b74-a671-bc6bb943b265 - - - - -] 'NoneType' object has no 
attribute 'get'
  2015-04-20 17:35:09.301 15508 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2015-04-20 17:35:09.301 15508 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
145, in wait
  2015-04-20 17:35:09.301 15508 TRACE nova.openstack.common.threadgroup 
x.wait()
  2015-04-20 17:35:09.301 15508 TRACE nova.openstack.common.threadgroup   File 
/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py, line 
47, in wait
  2015-04-20 17:35:09.301 15508 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2015-04-20 17:35:09.301 15508 TRACE 

[Yahoo-eng-team] [Bug 1474074] Re: PciDeviceList is not versioned properly in liberty and kilo

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1474074

Title:
  PciDeviceList is not versioned properly in liberty and kilo

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The following commit:

  https://review.openstack.org/#/c/140289/4/nova/objects/pci_device.py

  failed to bump the PciDeviceList version.

  We should do it now (master @ 4bfb094) and backport this to stable
kilo as well.
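
  A sketch of the kind of bump the report asks for (version numbers
  illustrative, not the actual patch): when the child object grows a field,
  the list's VERSION and child_versions map must change too, or older
  services will deserialize the list against the wrong schema.

  class PciDeviceList(base.ObjectListBase, base.NovaObject):
      # 1.1: PciDevice version 1.3 added a new field
      VERSION = '1.1'
      child_versions = {
          '1.0': '1.2',
          '1.1': '1.3',
      }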

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1474074/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453274] Re: libvirt: resume instance with utf-8 name results in UnicodeDecodeError

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453274

Title:
  libvirt: resume instance with utf-8 name results in UnicodeDecodeError

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This bug is very similar to
  https://bugs.launchpad.net/nova/+bug/1388386.

  Resuming a server that has a unicode name after suspending it results
  in:

  2015-05-08 15:22:30.148 4370 INFO nova.compute.manager 
[req-ac919325-aa2d-422c-b679-5f05ecca5d42 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
6dfced8dd0df4d4d98e4a0db60526c8d - - -] [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Resuming
  2015-05-08 15:22:31.651 4370 ERROR nova.compute.manager 
[req-ac919325-aa2d-422c-b679-5f05ecca5d42 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
6dfced8dd0df4d4d98e4a0db60526c8d - - -] [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Setting instance vm_state to ERROR
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] Traceback (most recent call last):
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 6427, in 
_error_out_instance_on_exception
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] yield
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/nova/compute/manager.py, line 4371, in 
resume_instance
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] block_device_info)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 2234, in 
resume
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] vifs_already_plugged=True)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/powervc_nova/virt/powerkvm/driver.py, line 
2061, in _create_domain_and_network
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] disk_info=disk_info)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4391, in 
_create_domain_and_network
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] power_on=power_on)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4322, in 
_create_domain
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] LOG.error(err)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/oslo_utils/excutils.py, line 85, in __exit__
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] six.reraise(self.type_, self.value, 
self.tb)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]   File 
/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py, line 4305, in 
_create_domain
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] err = _LE('Error defining a domain 
with XML: %s') % xml
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547] UnicodeDecodeError: 'ascii' codec can't 
decode byte 0xc3 in position 297: ordinal not in range(128)
  2015-05-08 15:22:31.651 4370 TRACE nova.compute.manager [instance: 
12371aa8-889d-4333-8fab-61a13f87a547]

  The _create_domain() method has the following line:

  err = _LE('Error defining a domain with XML: %s') % xml

  which fails with a UnicodeDecodeError because the xml object is
  utf-8 encoded.  The fix is to wrap the xml object in
  oslo.utils.encodeutils.safe_decode for the error message.
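
  The stated fix, sketched: decode the utf-8 bytes before '%'-interpolating
  them into the (unicode) message so the formatting cannot raise:

  from oslo_utils import encodeutils

  err = (_LE('Error defining a domain with XML: %s') %
         encodeutils.safe_decode(xml, errors='ignore'))
  LOG.error(err)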

  I'm seeing the issue on Kilo, but it is also likely an issue on Juno
  as well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453274/+subscriptions

-- 

[Yahoo-eng-team] [Bug 1460673] Re: nova-manage flavor convert fails if instance has no flavor in sys_meta

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1460673

Title:
  nova-manage flavor convert fails if instance has no flavor in sys_meta

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  nova-manage fails if an instance has no flavor in sys_meta when trying to
  move them all to instance_extra.

  In most cases, however, the instance_type table includes the correct
  information, so it should be possible to copy it from there.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1460673/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461777] Re: Random NUMA cell selection can leave NUMA cells unused

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1461777

Title:
  Random NUMA cell selection can leave NUMA cells unused

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  NUMA cell overcommit can leave NUMA cells unused.

  When no NUMA configuration is defined for the guest (no flavor extra specs),
  nova identifies the NUMA topology of the host and tries to match the cpu 
  placement to a NUMA cell (cpuset). 

  The cpuset is selected randomly:
  pin_cpuset = random.choice(viable_cells_cpus)  # nova/virt/libvirt/driver.py

  However, this can leave NUMA cells unused.
  This is particularly noticeable when the flavor has the same number of vcpus 
  as a host NUMA cell and the host CPUs are not overcommitted 
(cpu_allocation_ratio = 1)
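
  A self-contained sketch of the failure mode: with two viable cells and no
  usage tracking, random.choice can pin both 8-vcpu guests to the same cell,
  while picking the least-used cell keeps both cells busy (the alternative
  shown is illustrative, not the actual fix):

  import random

  viable_cells_cpus = ['0-3,8-11', '4-7,12-15']
  usage = {cpus: 0 for cpus in viable_cells_cpus}

  for _ in range(2):  # two guests fill the host at cpu_allocation_ratio=1
      lucky = random.choice(viable_cells_cpus)      # current behaviour
      fair = min(viable_cells_cpus, key=usage.get)  # least-used alternative
      usage[fair] += 1
  # 'lucky' repeats for both guests about half the time, leaving one NUMA
  # cell idle; 'fair' never does.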

  ###
  Particular use case:

  Compute nodes with the NUMA topology:
  VirtNUMAHostTopology: {'cells': [{'mem': {'total': 12279, 'used': 0}, 
'cpu_usage': 0, 'cpus': '0,1,2,3,8,9,10,11', 'id': 0}, {'mem': {'total': 12288, 
'used': 0}, 'cpu_usage': 0, 'cpus': '4,5,6,7,12,13,14,15', 'id': 1}]}

  No CPU overcommit: cpu_allocation_ratio = 1
  Boot instances using a flavor with 8 vcpus. 
  (No NUMA topology defined for the guest in the flavor)

  In this particular case the host can hold 2 instances (no cpu overcommit).
  Both instances can (randomly) be allocated the same cpuset from the 2 options:
  <vcpu placement='static' cpuset='4-7,12-15'>8</vcpu>
  <vcpu placement='static' cpuset='0-3,8-11'>8</vcpu>

  As a consequence, half of the host CPUs are not used.

  
  ###
  How to reproduce:

  Using: nova 2014.2.2
  (not tested in trunk however the code path looks similar)

  1. set cpu_allocation_ratio = 1
  2. Identify the NUMA topology of the compute node
  3. Using a flavor with a number of vcpus that matches a NUMA cell in the 
compute node, boot instances until the compute node is full.
  4. Check the cpu placement cpuset used by each instance.

  Notes: 
  - at this point instances can use the same cpuset leaving NUMA cells unused.
  - the selection of the cpuset is random. Different tries may be needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1461777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453835] Re: Hyper-V: Nova cold resize / migration fails

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1453835

Title:
  Hyper-V: Nova cold resize / migration fails

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Commit https://review.openstack.org/#/c/162999/ changed where the
  Hyper-V VM configuration files are stored. The files are being stored
  in the same folder as the instance. Performing a cold resize /
  migration will cause a os.rename call on the instance's folder, which
  fails as long as there are configuration files used by Hyper-V in that
  folder, thus resulting in a failed migration and the instance ending
  up in ERROR state.

  Logs: http://paste.openstack.org/show/219887/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1453835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466144] Re: dhcp fails if extra_dhcp_opts for stateless subnet enabled

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1466144

Title:
  dhcp fails if extra_dhcp_opts for stateless subnet enabled

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  A VM on a network having IPv4 and IPv6 (DHCPv6 stateless) subnets
  fails to get an IPv4 address when the VM uses a port with extra_dhcp_opts.

  Neutron creates entries in the dhcp host file for each subnet of a port.
  Each of these entries has the same MAC address as its first field,
  and may have client_id, fqdn, an ipv4/ipv6 address for DHCP/DHCPv6 stateful,
  or a tag as other fields.
  For a DHCPv6 stateless subnet with extra_dhcp_opts,
  the host file has only the MAC address and a tag.

  If the last entry in the host file for the port with extra_dhcp_opts
  is for the DHCPv6 stateless subnet, then dnsmasq tries to use this entry
  to resolve DHCP requests even for IPv4, treats it as 'no address found',
  and fails to send a DHCPOFFER.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1466144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1451389] Re: Nova gate broke due to failed unit test

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1451389

Title:
  Nova gate broke due to failed unit test

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  
  ft1.13172: 
nova.tests.unit.virt.vmwareapi.test_read_write_util.ReadWriteUtilTestCase.test_ipv6_host_read_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 1201, in patched
  return func(*args, **keywargs)
File nova/tests/unit/virt/vmwareapi/test_read_write_util.py, line 49, in 
test_ipv6_host_read
  verify=False)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 846, in assert_called_once_with
  return self.assert_called_with(*args, **kwargs)
File 
/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/mock.py,
 line 835, in assert_called_with
  raise AssertionError(msg)
  AssertionError: Expected call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, verify=False)
  Actual call: request('get', 
'https://[fd8c:215d:178e:c51e:200:c9ff:fed1:584c]:7443/folder/tmp/fake.txt?dcPath=fake_dc&dsName=fake_ds',
 stream=True, headers={'User-Agent': 'OpenStack-ESX-Adapter'}, 
allow_redirects=True, params=None, verify=False)
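
  The shape of the fix, sketched (mock and variable names illustrative):
  requests now forwards params=None, so the expected call must include it.

  mock_request.assert_called_once_with(
      'get', url, stream=True,
      headers={'User-Agent': 'OpenStack-ESX-Adapter'},
      allow_redirects=True, params=None, verify=False)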

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1451389/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472458] Re: Arista ML2 VLAN driver should ignore non-VLAN network types

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472458

Title:
  Arista ML2 VLAN driver should ignore non-VLAN network types

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in neutron kilo series:
  Fix Released

Bug description:
  Arista ML2 VLAN driver should process only VLAN based networks. Any
  other network type (e.g. vxlan) should be ignored.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446405] Re: Test discovery is broken for the api and functional paths

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1446405

Title:
  Test discovery is broken for the api and functional paths

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The following failures in test discovery were noted:

  https://review.openstack.org/#/c/169962/
  https://bugs.launchpad.net/neutron/+bug/1443480

  It was eventually determined that the use of the unittest discovery
  mechanism to perform manual discovery in package init for the api and
  functional subtrees was to blame.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1446405/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426324] Re: VFS blkid calls need to handle 0 or 2 return codes

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1426324

Title:
  VFS blkid calls need to handle 0 or 2 return codes

Status in ubuntu-cloud-archive:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released

Bug description:
  kilo-2 introduced blkid calls for fs detection on all new instances; if
  the specified key is not found on the block device, blkid will return
  2 instead of 0 - nova needs to handle this (a sketch of the tolerant call
  follows the trace below):

  2015-02-27 10:48:51.270 3062 INFO nova.virt.disk.vfs.api [-] Unable to import 
guestfs, falling back to VFSLocalFS
  2015-02-27 10:48:51.476 3062 ERROR nova.compute.manager [-] [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] Instance failed to spawn
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] Traceback (most recent call last):
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2328, in 
_build_resources
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] yield resources
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 2198, in 
_build_and_run_instance
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] flavor=flavor)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2329, in 
spawn
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] admin_pass=admin_password)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 2728, in 
_create_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] project_id=instance['project_id'])
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 230, 
in cache
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] *args, **kwargs)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 507, 
in create_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] copy_qcow2_image(base, self.path, 
size)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py, line 431, in 
inner
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] return f(*args, **kwargs)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py, line 473, 
in copy_qcow2_image
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] disk.extend(target, size, 
use_cow=True)
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py, line 183, in extend
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] if not is_image_extendable(image, 
use_cow):
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/api.py, line 235, in 
is_image_extendable
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] if fs.get_image_fs() in 
SUPPORTED_FS_TO_EXTEND:
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b]   File 
/usr/lib/python2.7/dist-packages/nova/virt/disk/vfs/localfs.py, line 167, in 
get_image_fs
  2015-02-27 10:48:51.476 3062 TRACE nova.compute.manager [instance: 
1aa12a52-c91b-49b4-9636-63c39f7ba72b] run_as_root=True)
  2015-02-27 10:48:51.476 3062 TRACE 
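
  A sketch of the tolerant call (argument list illustrative): blkid exits 2
  when the probed key is absent, so both 0 and 2 must be accepted as
  successful runs.

  out, err = utils.execute('blkid', '-o', 'value', '-s', 'TYPE', device,
                           run_as_root=True, check_exit_code=[0, 2])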

[Yahoo-eng-team] [Bug 1249065] Re: Nova throws 400 when attempting to add floating ip (instance.info_cache.network_info is empty)

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1249065

Title:
  Nova throws 400 when attempting to add floating ip
  (instance.info_cache.network_info is empty)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Ran into this problem in check-tempest-devstack-vm-neutron

   Traceback (most recent call last):
 File tempest/scenario/test_snapshot_pattern.py, line 74, in 
test_snapshot_pattern
   self._set_floating_ip_to_server(server, fip_for_server)
 File tempest/scenario/test_snapshot_pattern.py, line 62, in 
_set_floating_ip_to_server
   server.add_floating_ip(floating_ip)
 File /opt/stack/new/python-novaclient/novaclient/v1_1/servers.py, line 
108, in add_floating_ip
   self.manager.add_floating_ip(self, address, fixed_address)
 File /opt/stack/new/python-novaclient/novaclient/v1_1/servers.py, line 
465, in add_floating_ip
   self._action('addFloatingIp', server, {'address': address})
 File /opt/stack/new/python-novaclient/novaclient/v1_1/servers.py, line 
993, in _action
   return self.api.client.post(url, body=body)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 234, in 
post
   return self._cs_request(url, 'POST', **kwargs)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 213, in 
_cs_request
   **kwargs)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 195, in 
_time_request
   resp, body = self.request(url, method, **kwargs)
 File /opt/stack/new/python-novaclient/novaclient/client.py, line 189, in 
request
   raise exceptions.from_response(resp, body, url, method)
   BadRequest: No nw_info cache associated with instance (HTTP 400) 
(Request-ID: req-9fea0363-4532-4ad1-af89-114cff68bd89)

  Full console logs here: http://logs.openstack.org/27/55327/3/check
  /check-tempest-devstack-vm-neutron/8d26d3c/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1249065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361360] Re: Eventlet green threads not released back to the pool leading to choking of new requests

2015-07-29 Thread Alan Pevec
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361360

Title:
  Eventlet green threads not released back to the pool leading to
  choking of new requests

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance icehouse series:
  Fix Committed
Status in heat:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone icehouse series:
  Confirmed
Status in Keystone juno series:
  Fix Committed
Status in Keystone kilo series:
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron icehouse series:
  Fix Released
Status in neutron juno series:
  Fix Committed
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New
Status in Sahara:
  Confirmed

Bug description:
  Currently reproduced on Juno milestone 2, but this issue should be
  reproducible in all releases since its inception.

  It is possible to choke OpenStack API controller services using the
  wsgi+eventlet library by simply not closing the client socket
  connection. Whenever a request is received by any OpenStack API
  service, for example the nova api service, the eventlet library creates a
  green thread from the pool and starts processing the request. Even after
  the response is sent to the caller, the green thread is not returned to
  the pool until the client socket connection is closed. This way, any
  malicious user can send many API requests to the API controller node,
  determine the wsgi pool size configured for the given service, send that
  many requests to the service, and after receiving the responses simply
  wait there indefinitely, doing nothing, disrupting services for other
  tenants. Even when service providers have enabled the rate limiting
  feature, it is possible to choke the API services with a group (many
  tenants) attack.

  The following program illustrates choking of nova-api services (but this
  problem is omnipresent in all other OpenStack API services using
  wsgi+eventlet).

  Note: I have explicitly set the wsgi_default_pool_size default value to 10
  in nova/wsgi.py in order to reproduce this problem.
  After you run the program below, you should try to invoke the API.
  

  import time
  import requests
  from multiprocessing import Process

  def request(number):
      # Port is important here
      path = 'http://127.0.0.1:8774/servers'
      try:
          response = requests.get(path)
          print "RESPONSE %s-%d" % (response.status_code, number)
          # During this sleep, check whether the client socket connection
          # is released on the API controller node.
          time.sleep(1000)
          print "Thread %d complete" % number
      except requests.exceptions.RequestException as ex:
          print "Exception occurred %d-%s" % (number, str(ex))

  if __name__ == '__main__':
      processes = []
      for number in range(40):
          p = Process(target=request, args=(number,))
          p.start()
          processes.append(p)
      for p in processes:
          p.join()

  


  Presently, the wsgi server allows persistent connections if you configure
  keepalive to True, which is the default.
  In order to close the client socket connection explicitly after the response
  is sent and read successfully by the client, you simply have to set keepalive
  to False when you create the wsgi server.
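
  A sketch of that mitigation with eventlet's wsgi server (standalone, not
  nova's actual startup code):

  import eventlet
  from eventlet import wsgi

  def app(environ, start_response):
      start_response('200 OK', [('Content-Type', 'text/plain')])
      return [b'ok']

  # keepalive=False makes the worker green thread return to the pool as
  # soon as the response has been read, instead of waiting for the client
  # to close the socket.
  sock = eventlet.listen(('0.0.0.0', 8774))
  wsgi.server(sock, app, custom_pool=eventlet.GreenPool(10), keepalive=False)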

  Additional information: By default eventlet passes "Connection: keep-alive"
  if keepalive is set to True when a response is sent to the client. But it
  does not have the capability to set the timeout and max parameters.
  For example:
  Keep-Alive: timeout=10, max=5

  Note: After we disable keepalive in all the OpenStack API services
  using the wsgi library, it might impact existing applications built
  with the assumption that OpenStack API services use persistent
  connections. They might need to modify their applications if
  reconnection logic is not in place, and they might also find that
  performance has slowed down, as the http connection needs to be
  re-established for every request.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1361360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1296414] Re: quotas not updated when periodic tasks or startup finish deletes

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1296414

Title:
  quotas not updated when periodic tasks or startup finish deletes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  There are a couple of cases in the compute manager where we don't pass
  reservations to _delete_instance().  For example, one of them is
  cleaning up when we see a delete that is stuck in DELETING.

  The only place we ever update quotas as part of delete should be when
  the instance DB record is removed. If something is stuck in DELETING,
  it means that the quota was not updated.  We should make sure we're
  always updating the quota when the instance DB record is removed.
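
  A sketch (kilo-era object names, flow illustrative) of threading the
  reservations through so the quota commit coincides with the DB record
  removal:

  quotas = objects.Quotas.from_reservations(context, reservations,
                                            instance=instance)
  self._delete_instance(context, instance, bdms, quotas)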

  Soft delete kind of throws a wrench in this, though, because I think you
  want soft-deleted instances to not count against quotas -- yet their
  DB records will still exist. In this case, it seems we may have a race
  condition in _delete_instance() -> _complete_deletion() where, if the
  instance somehow was SOFT_DELETED, quotas would have been updated twice
  (once in soft_delete and once in _complete_deletion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1296414/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1407664] Re: Race: instance nw_info cache is updated to empty list because of nova/neutron event mechanism

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1407664

Title:
  Race: instance nw_info cache is updated to empty list because of
  nova/neutron event mechanism

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  Fix Committed
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This applies only when the nova/neutron event reporting mechanism is
  enabled.

  Boot an instance, like this:
  nova boot --image xxx --flavor xxx --nic port-id=xxx test_vm

  Booting the instance succeeds, but the instance nw_info cache is empty.
  This is a probabilistic problem; it cannot always be reproduced.

  After analyzing the boot-instance and nova/neutron event mechanism
  workflows, I get the reproduction timeline (a sketch of a guard against
  this race follows the timeline):

  1. neutronv2.api.allocate_for_instance when booting the instance
  2. neutronclient.update_port triggers a neutron network_change event
  3. nova gets the port change event and starts to process it
  4. instance.get_by_uuid in external_instance_event; at this time
  instance.nw_info_cache is empty, because the nw_info cache had not yet
  been saved to the db by the boot thread.
  5. The boot thread saves the instance nw_info cache to the db.
  6. The event-processing thread updates the instance nw_info cache to empty.
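
  A sketch (attribute names hypothetical) of a guard for the event thread:
  skip the destructive empty-list write while the instance is still
  building, letting the boot thread's save win.

  def refresh_cache(instance, new_nw_info):
      if not new_nw_info and instance.vm_state == 'building':
          # Lost the race with the boot thread (steps 4-6 above); a later
          # refresh will repopulate the cache from neutron.
          return
      instance.info_cache.network_info = new_nw_info
      instance.info_cache.save()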

  I faced this issue in Juno.
  I added some breakpoints in order to reproduce this bug in my devstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1407664/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1417379] Re: KeyError returned when subnet-update enable_dhcp to False

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1417379

Title:
  KeyError returned when subnet-update enable_dhcp to False

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  A KeyError is output in the trace log after setting enable_dhcp of a subnet
  to False.

  [reproduce]
   neutron net-create test
   neutron dhcp-agent-network-add ID_of_DHCP_agent test
   neutron subnet-create test 192.168.100.0/24 --name test1
   neutron subnet-create test 192.168.101.0/24 --name test2
   neutron subnet-update test2 --enable_dhcp False
   tailf /opt/stack/logs/q-dhcp.log

  [Trace log]
  
  2015-02-14 01:01:08.556 5436 DEBUG neutron.agent.dhcp.agent [-] resync 
(536ef879-baf5-405b-8402-303ff5e2e905): 
[KeyError(u'37f0b628-22e6-4446-8bb9-2c2176c5a646',)] _periodic_resync_helper 
/opt/stack/neutron/neutron/agent/dhcp/agent.py:189
  2015-02-14 01:01:08.557 5436 DEBUG oslo_concurrency.lockutils [-] Lock 
dhcp-agent acquired by sync_state :: waited 0.000s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:430
  2015-02-14 01:01:08.558 5436 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state
  2015-02-14 01:01:08.559 5436 DEBUG oslo_messaging._drivers.amqpdriver [-] 
MSG_ID is a0f460425e904cc0b045336351d961d5 _send 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:378
  2015-02-14 01:01:08.559 5436 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID 
is d3aff7b1f8744f5b909ef5bc6eded8d2. _add_unique_id 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqp.py:224
  2015-02-14 01:01:08.632 5436 DEBUG neutron.agent.dhcp.agent [-] Calling 
driver for network: 536ef879-baf5-405b-8402-303ff5e2e905 action: enable 
call_driver /opt/stack/neutron/neutron/agent/dhcp/agent.py:106
  2015-02-14 01:01:08.633 5436 DEBUG neutron.agent.linux.utils [-] Unable to 
access /opt/stack/data/neutron/dhcp/536ef879-baf5-405b-8402-303ff5e2e905/pid 
get_value_from_file /opt/stack/neutron/neutron/agent/linux/utils.py:168
  2015-02-14 01:01:08.633 5436 ERROR neutron.agent.dhcp.agent [-] Unable to 
enable dhcp for 536ef879-baf5-405b-8402-303ff5e2e905.
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/dhcp/agent.py, line 116, in call_driver
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent 
getattr(driver, action)(**action_kwargs)
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 207, in enable
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 934, in setup
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent port = 
self.setup_dhcp_port(network)
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent   File 
/opt/stack/neutron/neutron/agent/linux/dhcp.py, line 924, in setup_dhcp_port
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent for fixed_ip 
in dhcp_port.fixed_ips]
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent KeyError: 
u'37f0b628-22e6-4446-8bb9-2c2176c5a646'
  2015-02-14 01:01:08.633 5436 TRACE neutron.agent.dhcp.agent
  2015-02-14 01:01:08.634 5436 INFO neutron.agent.dhcp.agent [-] Synchronizing 
state complete
  2015-02-14 01:01:08.635 5436 DEBUG oslo_concurrency.lockutils [-] Lock 
dhcp-agent released by sync_state :: held 0.078s inner 
/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:442
  

  ・All DHCP agents look fine :-)
  ・Restarting changes nothing. :-(
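
  A sketch of the defensive handling the trace points at (names follow the
  agent code but are illustrative): build the DHCP port's fixed IPs only
  from subnets that still have DHCP enabled, so a stale subnet id cannot
  raise KeyError.

  wanted_fixed_ips = [fixed_ip for fixed_ip in dhcp_port.fixed_ips
                      if fixed_ip.subnet_id in dhcp_enabled_subnet_ids]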

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1417379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1404268] Re: Missing nova context during spawn

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1404268

Title:
  Missing nova context during spawn

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The nova request context tracks a security context and other request
  information, including a request id that is added to log entries
  associated with this request.  The request context is passed around
  explicitly in many chunks of OpenStack code.  But nova/context.py also
  stores the RequestContext in the thread's local store (when the
  RequestContext is created, or when it is explicitly stored through a
  call to update_store).  The nova logger will use an explicitly passed
  context, or look for it in the local.store.

  A recent change in community openstack code has resulted in the
  context not being set for many nova log messages during spawn:

  https://bugs.launchpad.net/neutron/+bug/1372049

  This change spawns a new thread in nova/compute/manager.py
  build_and_run_instance, and the spawn runs in that new thread.  When
  the original RPC thread created the nova RequestContext, the context
  was set in the thread's local store.  But the context does not get set
  in the newly-spawned thread.

  Example of log messages with missing req id during spawn:

  2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] 
Acquired semaphore 87c7fc32-042e-40b7-af46-44bff50fa1b4 lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
  2014-12-13 22:20:30.987 18219 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock _locked_do_build_and_run_instance inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
  2014-12-13 22:20:31.012 18219 AUDIT nova.compute.manager 
[req-bd959d69-86de-4eea-ae1d-a066843ca317 None] [instance: 
87c7fc32-042e-40b7-af46-44bff50fa1b4] Starting instance...
  ...
  2014-12-13 22:20:31.280 18219 DEBUG nova.openstack.common.lockutils [-] 
Created new semaphore compute_resources internal_lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:206
  2014-12-13 22:20:31.281 18219 DEBUG nova.openstack.common.lockutils [-] 
Acquired semaphore compute_resources lock 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:229
  2014-12-13 22:20:31.282 18219 DEBUG nova.openstack.common.lockutils [-] Got 
semaphore / lock instance_claim inner 
/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py:271
  2014-12-13 22:20:31.284 18219 DEBUG nova.compute.resource_tracker [-] Memory overhead for 512 MB instance; 0 MB instance_claim /usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py:127
  2014-12-13 22:20:31.290 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Attempting claim: memory 512 MB, disk 10 GB
  2014-12-13 22:20:31.292 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total memory: 131072 MB, used: 12288.00 MB
  2014-12-13 22:20:31.296 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] memory limit not specified, defaulting to unlimited
  2014-12-13 22:20:31.300 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] Total disk: 2097152 GB, used: 60.00 GB
  2014-12-13 22:20:31.304 18219 AUDIT nova.compute.claims [-] [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4] disk limit not specified, defaulting to unlimited
  ...

  2014-12-13 22:20:32.850 18219 DEBUG nova.network.neutronv2.api [-]
  [instance: 87c7fc32-042e-40b7-af46-44bff50fa1b4]
  get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.6/site-
  packages/nova/network/neutronv2/api.py:611

  Proposed patch:

  one new line of code at the beginning of nova/compute/manager.py
  _do_build_and_run_instance:

  context.update_store()
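
  As an illustration of why that one-liner helps, here is a minimal sketch
  (assuming the nova.context semantics described above; the function names
  are illustrative, not the exact nova code):

      import eventlet

      def build_and_run_instance(context, instance):
          # the RPC worker stored `context` in *its* thread-local store when
          # the RequestContext was created; the spawned thread starts with an
          # empty store, so log records fall back to [-]
          eventlet.spawn_n(_do_build_and_run_instance, context, instance)

      def _do_build_and_run_instance(context, instance):
          context.update_store()  # republish into this thread's local store
          # ... build and run the instance; log records now carry the req id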

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1404268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1300265] Re: some tests call assert_called_once() into a mock, this function doesn't exist, and gets auto-mocked, falsely passing tests

2015-07-29 Thread Alan Pevec
** Changed in: sahara/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1300265

Title:
  some tests call assert_called_once() into a mock, this function
  doesn't exist, and gets auto-mocked, falsely passing tests

Status in neutron:
  Fix Released
Status in Sahara:
  Fix Released
Status in Sahara kilo series:
  Fix Released

Bug description:
  neutron/tests/unit/agent/linux/test_async_process.py:
spawn.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
func.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
mock_start.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
mock_kill_event.send.assert_called_once()
  neutron/tests/unit/agent/linux/test_async_process.py:
mock_kill_process.assert_called_once(pid)
  neutron/tests/unit/test_dhcp_agent.py:
log.error.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_dhcp_agent.py:
device.route.get_gateway.assert_called_once()
  neutron/tests/unit/test_post_mortem_debug.py:
mock_post_mortem.assert_called_once()
  neutron/tests/unit/test_linux_interface.py:
log.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/test_l3_agent.py:self.send_arp.assert_called_once()
  neutron/tests/unit/cisco/test_nexus_plugin.py:
mock_db.assert_called_once()
  neutron/tests/unit/linuxbridge/test_lb_neutron_agent.py:
exec_fn.assert_called_once()
  
neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py:
mock_driver_update_firewall.assert_called_once(
  
neutron/tests/unit/services/firewall/agents/l3reference/test_firewall_l3_agent.py:
mock_driver_delete_firewall.assert_called_once(
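
  For reference, a minimal demonstration of the pitfall (on the mock
  library versions of that era, assert_called_once() was not a real
  assertion method; attribute access simply auto-creates a child mock):

      import mock

      m = mock.Mock()
      # never called, yet this "assertion" silently passes:
      m.assert_called_once()
      # the real assertion method fails as expected:
      m.assert_called_once_with()  # raises AssertionError (never called)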

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1300265/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1353939] Re: Rescue fails with 'Failed to terminate process: Device or resource busy' in the n-cpu log

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1353939

Title:
  Rescue fails with 'Failed to terminate process: Device or resource
  busy' in the n-cpu log

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in nova package in Ubuntu:
  New

Bug description:
  [Impact]

   * Users may sometimes fail to shut down an instance if the associated qemu
 process is in uninterruptible sleep (typically IO).

  [Test Case]

   * 1. create some IO load in a VM
 2. look at the associated qemu, make sure it has STAT D in ps output
 3. shutdown the instance
 4. with the patch in place, nova will retry calling libvirt to shutdown
the instance 3 times to wait for the signal to be delivered to the 
qemu process.

  [Regression Potential]

   * None
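
  For reference, a minimal sketch of the retry behaviour described in the
  test case above (names are illustrative; the real change lives in nova's
  libvirt driver):

      import time

      def destroy_with_retries(guest, attempts=3, delay=1):
          for attempt in range(attempts):
              try:
                  guest.destroy()  # may fail while qemu sleeps in state D
                  return
              except Exception:    # nova matches the specific libvirt error
                  if attempt == attempts - 1:
                      raise
                  time.sleep(delay)  # give the signal time to be delivered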


  message: "Failed to terminate process" AND
  message:'InstanceNotRescuable' AND message: 'Exception during message
  handling' AND tags:"screen-n-cpu.txt"

  The above logstash query reports back only the failed jobs; the 'Failed to
  terminate process' message also appears close to other failed rescue tests,
  but tempest does not always report them as an error at the end.

  message: "Failed to terminate process" AND tags:"screen-n-cpu.txt"

  Usual console log:
  Details: (ServerRescueTestJSON:test_rescue_unrescue_instance) Server 
0573094d-53da-40a5-948a-747d181462f5 failed to reach RESCUE status and task 
state None within the required time (196 s). Current status: SHUTOFF. Current 
task state: None.

  http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-
  full/90726cb/console.html#_2014-08-07_03_50_26_520

  Usual n-cpu exception:
  
http://logs.openstack.org/82/107982/2/gate/gate-tempest-dsvm-postgres-full/90726cb/logs/screen-n-cpu.txt.gz#_2014-08-07_03_32_02_855

  2014-08-07 03:32:02.855 ERROR oslo.messaging.rpc.dispatcher 
[req-39ce7a3d-5ceb-41f5-8f9f-face7e608bd1 ServerRescueTestJSON-2035684545 
ServerRescueTestJSON-1017508309] Exception during message handling: Instance 
0573094d-53da-40a5-948a-747d181462f5 cannot be rescued: Driver Error: Failed to 
terminate process 26425 with SIGKILL: Device or resource busy
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
134, in _dispatch_and_reply
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
177, in _dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py, line 
123, in _do_dispatch
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher result 
= getattr(endpoint, method)(ctxt, **new_args)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 408, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 88, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher payload)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/exception.py, line 71, in wrapped
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 292, in decorated_function
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher pass
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/openstack/common/excutils.py, line 82, in __exit__
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  2014-08-07 03:32:02.855 22829 TRACE oslo.messaging.rpc.dispatcher   File 
/opt/stack/new/nova/nova/compute/manager.py, line 278, in 

[Yahoo-eng-team] [Bug 1478503] Re: test_admin_version_v3 actually tests public app

2015-07-29 Thread Dolph Mathews
*** This bug is a duplicate of bug 1478504 ***
https://bugs.launchpad.net/bugs/1478504

** This bug has been marked a duplicate of bug 1478504
   test_admin_version_v3 actually  tests public app

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478503

Title:
  test_admin_version_v3 actually  tests public app

Status in Keystone:
  Invalid

Bug description:
  VersionTestCase.test_admin_version_v3
  (keystone/tests/unit/test_versions.py) in fact tests public app:

      def test_admin_version_v3(self):
          client = tests.TestClient(self.public_app)

  It makes sense only in the case of a V3 eventlet setup where the public app
  handles both endpoints, but I believe it should be tested by a separate
  test like test_admin_version_v3_eventlets, which will be introduced as
  part of the fix for bug #1381961. Also, this behavior was introduced when
  the 2-apps setup was used for eventlet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478503/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456157] Re: Updating image with invalid operation type raises 500

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1456157

Title:
  Updating image with invalid operation type raises 500

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released

Bug description:
  When trying to update an image using the PATCH method, I got an Internal
  Server Error when 'op' (operation) has an invalid value. We could
  consider changing this behavior to return some kind of 40x message
  (probably 400).

  Steps to reproduce:
  URL: http://localhost:9292/v2/images/image_id
  Headers:
X-Auth-Token: XYZ
Content-Type: application/openstack-images-v2.1-json-patch
  Data:
  [{"path": "/description", "value": "", "op": "axdd"}]

  Result: 500 Internal Server Error

  I'm opening it as `Opinion`, as there can be a reason why we are
  returning 500 instead of some 40x code.
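
  For illustration, a minimal sketch of the kind of validation that would
  turn this into a 400 (not glance's actual code; webob is already a glance
  dependency, the names here are assumptions):

      import webob.exc

      ALLOWED_OPS = ('add', 'remove', 'replace')

      def validate_change(change):
          # answer 400 Bad Request instead of tracing into a 500
          if change.get('op') not in ALLOWED_OPS:
              raise webob.exc.HTTPBadRequest(
                  explanation="Invalid JSON patch op: %r" % change.get('op'))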

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1456157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462544] Re: Create Image: Uncaught TypeError: $form.on is not a function

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462544

Title:
  Create Image: Uncaught TypeError: $form.on is not a function

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  on master June 5 2015

  Project - Images - Create Image dialog

  Stacktrace
  
  horizon.datatables.disable_actions_on_submit  @   horizon.tables.js:185
  (anonymous function)  @   horizon.modals.js:26
  jQuery.extend.each@   jquery.js:657
  jQuery.fn.jQuery.each @   jquery.js:266
  horizon.modals.initModal  @   horizon.modals.js:25
  (anonymous function)  @   horizon.modals.js:177
  jQuery.event.dispatch @   jquery.js:5095
  jQuery.event.add.elemData.handle  @   jquery.js:4766
  jQuery.event.trigger  @   jquery.js:5007
  jQuery.event.trigger  @   jquery-migrate.js:493
  (anonymous function)  @   jquery.js:5691
  jQuery.extend.each@   jquery.js:657
  jQuery.fn.jQuery.each @   jquery.js:266
  jQuery.fn.extend.trigger  @   jquery.js:5690
  horizon.modals.success@   horizon.modals.js:48
  horizon.modals._request.$.ajax.success@   horizon.modals.js:342
  jQuery.Callbacks.fire @   jquery.js:3048
  jQuery.Callbacks.self.fireWith@   jquery.js:3160
  done  @   jquery.js:8235
  jQuery.ajaxTransport.send.callback@   jquery.js:8778

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462544/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459446] Re: can't update dns for an ipv6 subnet

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459446

Title:
  can't update dns for an ipv6 subnet

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  It's not possible to update ipv6 subnet info using Horizon. To
  recreate:

  Setup: create a new network (Admin->System->Networks->Create Network)
  create an ipv6 subnet in that network
  (new network Detail->Create Subnet)
  Network Address: fdc5:f49e:fe9e::/64 
  IP Version IPv6
  Gateway IP: fdc5:f49e:fe9e::1
  click create

  To view the problem: Edit the subnet
  (Admin->System->Networks[detail]->Edit Subnet->Subnet Details
  attempt to add a DNS name server
  fdc5:f49e:fe9e::3

  An error is returned: Error: Failed to update subnet
  fdc5:f49e:fe9e::/64: Cannot update read-only attribute ipv6_ra_mode

  however, it's possible to make the update using
  neutron subnet-update --dns-nameserver [ip] [id]

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1461776] Re: Error messages are encoded to HTML entity

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1461776

Title:
  Error messages are encoded to HTML entity

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released

Bug description:
  If you pass min_disk or min_ram as -1 to the image create command,
  then it shows the following error message on the command prompt.

  $ glance image-create --name test --container-format bare --disk-format raw 
--file filename --min-disk -1
  400 Bad Request: Invalid value '-1' for parameter 'min_disk': Image min_disk 
must be >= 0 ('-1' specified). (HTTP 400)

  The above error message will be rendered correctly in the browser but
  it is not readable on the command prompt.

  This issue belongs to the v1 API; the v2 API returns a proper error
  message:
  400 Bad Request: Invalid value '-1' for parameter 'min_disk': Cannot be a 
negative value (HTTP 400)

  So we can make this error message consistent for both the APIs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1461776/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1456171] Re: [sahara] relaunch job fail if job is created by saharaclient and no input args

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1456171

Title:
  [sahara] relaunch job fail if job is created by saharaclient and no
  input args

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  If we launch a job from a job_template without input args via saharaclient
  directly (not in Horizon) and then relaunch it in Horizon, you will get an
  error. This is because Horizon assumes job_configs always has an 'args'
  element. You can refer to
  horizon/openstack_dashboard/dashboards/project/data_processing/jobs/workflows/launch.py,
  line 190:
  job_args = json.dumps(job_configs['args'])
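
  A minimal one-line fix sketch (the empty-list default is an assumption
  about what the launch form expects):

      import json
      job_args = json.dumps(job_configs.get('args', []))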

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1456171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1452750] Re: dest_file in task convert is wrong

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1452750

Title:
  dest_file in task convert is wrong

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released

Bug description:
  
https://github.com/openstack/glance/commit/1b144f4c12fd6da58d7c48348bf7bab5873388e9
  #diff-f02c20aafcce326b4d31c938376f6c2aR78 - head - desk

  The dest_path is not formatted correctly, which ends up converting
  the image to a path that is completely ignored by other tasks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1452750/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458928] Re: jshint failing on angular js in stable/kilo

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458928

Title:
  jshint failing on angular js in stable/kilo

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  http://logs.openstack.org/56/183656/1/check/gate-horizon-
  jshint/cd75430/console.html.gz#_2015-05-15_19_27_08_073

  Looks like this started after 5/14, since there was a passing job
  before that:

  http://logs.openstack.org/21/183321/1/check/gate-horizon-
  jshint/90ca4dd/console.html.gz#_2015-05-14_22_50_02_203

  The only difference I see in the external libraries used is that tox went
  from 1.9.2 (passing) to tox 2.0.1 (failing). So I'm thinking there is
  something off with how the environment is defined for the jshint runs,
  because it appears that .jshintrc isn't getting used; see the
  workaround fix here:

  https://review.openstack.org/#/c/185172/

  From the tox 2.0 changelog:

  https://testrun.org/tox/latest/changelog.html

  (new) introduce environment variable isolation: tox now only passes
  the PATH and PIP_INDEX_URL variable from the tox invocation
  environment to the test environment and on Windows also SYSTEMROOT,
  PATHEXT, TEMP and TMP whereas on unix additionally TMPDIR is passed.
  If you need to pass through further environment variables you can use
  the new passenv setting, a space-separated list of environment
  variable names. Each name can make use of fnmatch-style glob patterns.
  All environment variables which exist in the tox-invocation
  environment will be copied to the test environment.
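
  For illustration, a tox.ini entry along these lines would opt back in to
  the variables the jshint run needs (the exact variable names are
  assumptions, not a verified fix):

      [testenv:jshint]
      passenv = HOME NODE_PATH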

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458928/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459625] Re: TemplateDoesNotExist when click manage/unmanage volume

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1459625

Title:
  TemplateDoesNotExist when click manage/unmanage volume

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  manage/unmanage template includes wrong templates:

  project/volumes/volumes/_unmanage_volume.html
  project/volumes/volumes/_manage_volume.html

  it should be admin/volumes/... instead of project/volumes/...

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1459625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1453068] Re: task: Image's locations empty after importing to store

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1453068

Title:
  task: Image's locations empty after importing to store

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released

Bug description:
  The ImportToStore task is setting the image data correctly but not
  saving it after it's been imported. This causes the image's location
  to be lost, and therefore the image is completely useless because there
  are no locations associated with it.
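
  A minimal sketch of the missing step (image_repo and the data-setting
  call are assumptions based on the description, not the exact glance
  task-flow code):

      def import_to_store(image_repo, image_id, file_path):
          image = image_repo.get(image_id)
          with open(file_path, 'rb') as data:
              image.set_data(data)   # imports the bits, sets the new location
          image_repo.save(image)     # the persist step the task was missing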

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1453068/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1263665] Re: Number of GET requests grows exponentially when multiple rows are being updated in the table

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1263665

Title:
  Number of GET requests grows exponentially when multiple rows are
  being updated in the table

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  1. In the Launch instance dialog, select number of instances 10.
  2. Create 10 instances.
  3. While instances are being created and table rows are being updated, the
  number of row update requests grows exponentially, and a queue of pending
  requests still exists after all rows have been updated.

  There is a request type:
  Request
  URL:http://donkey017/project/instances/?action=row_update&table=instances&obj_id=7c4eaf35-ebc0-4ea3-a702-7554c8c36cf2
  Request Method:GET

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1263665/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447288] Re: create volume from snapshot using horizon error

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1447288

Title:
  create volume from snapshot using horizon error

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  When I try to create a volume from snapshot using the OpenStack UI it
  creates a new raw volume with correct size, but it's not created from
  a snapshot.

  $ cinder show 9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2015-04-22T18:08:53.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
9d5d0ca1-3dd0-47b4-b9f4-86f97d65724e |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name | v2s2
 |
  | os-vol-host-attr:host | ubuntu@ns_nfs-1#nfs 
 |
  | os-vol-mig-status-attr:migstat| None
 |
  | os-vol-mig-status-attr:name_id| None
 |
  |  os-vol-tenant-attr:tenant_id |   4968203f183641b283e111a2f2db  
 |
  |   os-volume-replication:driver_data   | None
 |
  | os-volume-replication:extended_status | None
 |
  |   replication_status  |   disabled  
 |
  |  size |  2  
 |
  |  snapshot_id  | None
 |
  |  source_volid | None
 |
  | status|  available  
 |
  |user_id|   c8163c5313504306b40377a0775e9ffa  
 |
  |  volume_type  | None
 |
  
+---+--+

  But when I use cinder command line everything seems to be fine.

  $ cinder create --snapshot-id 382a0e1d-168b-4cf6-a9ff-715d8ad385eb 1
  
+---+--+
  |Property   |Value
 |
  
+---+--+
  |  attachments  |  [] 
 |
  |   availability_zone   | nova
 |
  |bootable   |false
 |
  |  consistencygroup_id  | None
 |
  |   created_at  |  2015-04-22T18:15:08.00 
 |
  |  description  | None
 |
  |   encrypted   |False
 |
  |   id  | 
b33ec1ef-9d29-4231-8d15-8cf22ca3c502 |
  |metadata   |  {} 
 |
  |  multiattach  |False
 |
  |  name | None
 |
  | os-vol-host-attr:host | ubuntu@ns_nfs-1#nfs 
 |
  | os-vol-mig-status-attr:migstat| None
 |
  | os-vol-mig-status-attr:name_id| None
 |
  |  os-vol-tenant-attr:tenant_id |   4968203f183641b283e111a2f2db  
 |
  |   os-volume-replication:driver_data   | None
 |
  | os-volume-replication:extended_status | None
 |
  |   replication_status  |   disabled  
 |
  |  size |  1  

[Yahoo-eng-team] [Bug 1433849] Re: Horizon crashes when the user clicks the Confirm resize action multiple times while an instance is resizing

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1433849

Title:
  Horizon crashes when the user clicks the Confirm resize action multiple
  times while an instance is resizing

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Steps to reproduce:

  1. Boot an instance
  2. Resize that instance
  3. When the instance is in Confirm resize state, click the Confirm resize 
action.
  4. After the action is clicked once, the Confirm resize action still shows
  up; click it a couple of times.
  5. You will see Horizon crashes with the following error:

  Cannot 'confirmResize' instance d1ba0033-4ce7-431d-a9dc-754fe0631fef
  while it is in vm_state active (HTTP 409)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1433849/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427756] Re: Event listeners are not being attached to some modal windows

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427756

Title:
  Event listeners are not being attached to some modal windows

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  In some modal windows, form event listeners are not being attached correctly.
  For example, the date picker that should be triggered on a focus event in the
  text box in [dashboard/admin/metering - Modify Usage Report Parameters -
  Other] is not. This might be a problem with the modal loading or a django
  issue. This issue is not present if the form is not in a modal window (like it
  was in the juno release): the event listeners are attached regularly and work
  as expected if the form is in a page of its own.

  See attached files for reference.

  Steps to reproduce:
  1) Open the dashboard and go to admin/metering
  2) Press the 'Modify Usage Report Parameters' button.
  3) When the modal window appears, select 'Other' from the spinner.
  4) Click the 'from' or 'to' text inputs, a datepicker should pop out from 
there but it does not.
  5) If you repeat these steps in juno, the datepicker will appear since the 
admin/metering/create url is in a page of its own.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427756/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1419823] Re: Nullable image description crashes v2 client

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1419823

Title:
  Nullable image description crashes v2 client

Status in ubuntu-cloud-archive:
  Confirmed
Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in glance package in Ubuntu:
  Confirmed

Bug description:
  When you somehow set the image description to None, the glanceclient v2
  image-list crashes (as well as image-show and image-update for this
  particular image). The only way to show all images now is to use
  client v1, because it's more stable in this case.

  Steps to reproduce:

  1. Open Horizon and go to the edit page of any image.
  2. Set description to anything eg. 123 and save.
  3. Open image edit page again, remove description and save it.
  4. List all images using glanceclient v2: glance --os-image-api-version 2 
image-list
  5. Be sad, because of raised exception:

  None is not of type u'string'

  Failed validating u'type' in schema[u'additionalProperties']:
  {u'type': u'string'}

  On instance[u'description']:
  None

  During investigating the issue I've found that the
  additionalProperties schema is set to accept only string values, so it
  should be expanded to allow for null values as well.
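
  For illustration, a minimal sketch of that schema relaxation in
  jsonschema terms (not the exact glance schema code):

      import jsonschema

      # before: {'additionalProperties': {'type': 'string'}}
      schema = {'additionalProperties': {'type': ['null', 'string']}}
      jsonschema.validate({'description': None}, schema)  # no longer raises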

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1419823/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1450535] Re: [data processing] Create node group and cluster templates can fail

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1450535

Title:
  [data processing] Create node group and cluster templates can fail

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  * Probably a kilo-backport candidate *

  In an environment that uses a rewrite from / to /dashboard (or whatever),
  trying to create a node group, cluster template or job will fail when
  we try to do a urlresolver.resolve on the path.  That operation isn't
  even necessary since the required kwargs are already available in
  request.resolver_match.kwargs.
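
  A minimal sketch of the change in Django terms (names illustrative):

      def get_template_kwargs(request):
          # before (breaks behind a / -> /dashboard rewrite):
          #   kwargs = urlresolvers.resolve(request.path).kwargs
          # after: Django has already resolved this request for us
          return request.resolver_match.kwargs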

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1450535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1415087] Re: [OSSA 2015-011] Format-guessing and file disclosure in image convert (CVE-2015-1850, CVE-2015-1851)

2015-07-29 Thread Alan Pevec
** Changed in: cinder/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1415087

Title:
  [OSSA 2015-011] Format-guessing and file disclosure in image convert
  (CVE-2015-1850, CVE-2015-1851)

Status in Cinder:
  Fix Released
Status in Cinder icehouse series:
  Fix Released
Status in Cinder juno series:
  Fix Committed
Status in Cinder kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Security Advisory:
  Fix Committed

Bug description:
  Cinder does not provide the input format to several calls of qemu-img
  convert. This allows an attacker to play the format-guessing game by
  providing a volume with a qcow2 signature. If this signature contains
  a base file, that file will be read by a process running as root and
  embedded in the output. This bug is similar to CVE-2013-1922.

  Tested with: lvm backed volume storage, it may apply to others as well
  Steps to reproduce:
  - create volume and attach to vm,
  - create a qcow2 signature with base-file[1] from within the vm and
  - trigger upload to glance with cinder upload-to-image --disk-type qcow2[2].
  The image uploaded to glance will have /etc/passwd from the cinder-volume 
host embedded.
  Affected versions: tested on 2014.1.3, found while reading 2014.2.1

  Fix: Always specify both input -f and output format -O to qemu-
  img convert. The code is in module cinder.image.image_utils.

  Bastian Blank

  [1]: qemu-img create -f qcow2 -b /etc/passwd /dev/vdb
  [2]: The disk-type != raw triggers the use of qemu-img convert
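
  For illustration, a minimal sketch of the fix direction (not the exact
  patched cinder code; utils.execute is cinder's process helper):

      from cinder import utils

      def convert_image(source, dest, src_format, out_format):
          # pinning -f stops qemu-img from trusting a qcow2 signature the
          # tenant wrote into a raw volume
          utils.execute('qemu-img', 'convert',
                        '-f', src_format,   # input format: never guessed
                        '-O', out_format,   # output format
                        source, dest, run_as_root=True)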

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1415087/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1449260] Re: [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1449260

Title:
  [OSSA 2015-009] Sanitation of metadata label (CVE-2015-3988)

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released

Bug description:
  1) Start up Horizon
  2) Go to Images
  3) Next to an image, pick Update Metadata
  4) From the dropdown button, select Update Metadata
  5) In the Custom box, enter a value with some HTML like
  '</script><script>alert(1)</script>//', click +
  6) On the right-hand side, give it a value, like ee
  7) Click Save
  8) Pick Update Metadata for the image again, the page will fail to load, 
and the JavaScript console says:

  SyntaxError: invalid property id
  var existing_metadata = {

  An alternative is if you change the URL to update_metadata for the
  image (for example,
  
http://192.168.122.239/admin/images/fa62ba27-e731-4ab9-8487-f31bac355b4c/update_metadata/),
  it will actually display the alert box and a bunch of junk.

  I'm not sure if update_metadata is actually a page, though... can't
  figure out how to get to it other than typing it in.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1449260/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371022] Re: Idle client connections can persist indefinitely

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1371022

Title:
  Idle client connections can persist indefinitely

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance juno series:
  New
Status in Glance kilo series:
  Fix Released
Status in Manila:
  Fix Released

Bug description:
  Idle client socket connections can persist forever, eg:

  
  $ nc localhost 8776
  [never returns]
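
  A minimal sketch of the mitigation idea with plain sockets (the real fix
  belongs in the eventlet/wsgi server setup; the timeout value is an
  assumption):

      import socket

      server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      server.bind(('127.0.0.1', 8776))
      server.listen(1)
      client, _addr = server.accept()
      client.settimeout(900)  # an idle peer like the `nc` above now times
                              # out instead of holding the connection forever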

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1371022/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1445827] Re: unit test failures: Glance insist on ordereddict

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1445827

Title:
  unit test failures: Glance insist on ordereddict

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in Glance liberty series:
  Fix Released
Status in taskflow:
  Fix Committed

Bug description:
  There's no python-ordereddict package anymore in Debian, as this is
  normally included in Python 2.7. I have therefore patched
  requirements.txt to remove ordereddict. However, even after this, I
  get some bad unit test errors about it. This must be fixed upstream,
  because there's no way (modern) downstream distributions can fix it
  (as the ordereddict Python package will *not* come back).
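
  For reference, the usual compatibility idiom that avoids the external
  distribution entirely on Python 2.7+:

      try:
          from collections import OrderedDict  # Python >= 2.7
      except ImportError:
          from ordereddict import OrderedDict  # older interpreters only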

  Below are the tracebacks for the 4 failed unit tests.

  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_api_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 143, in 
test_list_api_opts
  expected_opt_groups, expected_opt_names)
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
  list_fn = ep.load()
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
  self.require(env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_cache_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 288, in 
test_list_cache_opts
  expected_opt_groups, expected_opt_names)
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
  list_fn = ep.load()
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
  self.require(env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_manage_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 301, in 
test_list_manage_opts
  expected_opt_groups, expected_opt_names)
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
  list_fn = ep.load()
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
  self.require(env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_registry_opts
  --
  Traceback (most recent call last):
  _StringException: Traceback (most recent call last):
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 192, in 
test_list_registry_opts
  expected_opt_groups, expected_opt_names)
File /«PKGBUILDDIR»/glance/tests/unit/test_opts.py, line 45, in 
_test_entry_point
  list_fn = ep.load()
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2188, in load
  self.require(env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 2202, in 
require
  items = working_set.resolve(reqs, env, installer)
File /usr/lib/python2.7/dist-packages/pkg_resources.py, line 639, in 
resolve
  raise DistributionNotFound(req)
  DistributionNotFound: ordereddict

  
  ==
  FAIL: glance.tests.unit.test_opts.OptsTestCase.test_list_scrubber_opts
  --
  Traceback 

[Yahoo-eng-team] [Bug 1458769] Re: horizon can't update subnet ip pool

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1458769

Title:
  horizon can't update subnet ip pool

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The update of the IP pool in a subnet reports success, but a refresh shows
  the data is not changed.

  steps to recreate:
  1. admin/network/subnet
  2. edit subnet/details/allocation pools
  3. save the changes
  4. check the subnet detail after success message shows

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1458769/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1430042] Re: Virtual Machine could not be evacuated because virtual interface creation failed

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1430042

Title:
  Virtual Machine could not be evacuated because virtual interface
  creation failed

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  I believe this issue is related to Question 257358
  (https://answers.launchpad.net/ubuntu/+source/nova/+question/257358).

  On the source host we see the successful vif plug:

  2015-03-09 01:22:12.363 629 DEBUG neutron.plugins.ml2.rpc 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d None] Device 
14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipx 
update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
  2015-03-09 01:22:12.392 629 DEBUG oslo_concurrency.lockutils 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Acquired semaphore db-access lock 
/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:377
  2015-03-09 01:22:12.436 629 DEBUG oslo_concurrency.lockutils 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] Releasing semaphore db-access 
lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:390
  2015-03-09 01:22:12.437 629 DEBUG oslo_messaging._drivers.amqp 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 
740634ca8c7a49418a39c429669f2f27. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
  2015-03-09 01:22:12.439 629 DEBUG oslo_messaging._drivers.amqp 
[req-5de70341-d64b-4a3a-bc05-54eb2802f25d ] UNIQUE_ID is 
3264e8d7dd7c492d9aa17d3e9892b1fc. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:224
  2015-03-09 01:22:14.436 629 DEBUG neutron.notifiers.nova [-] Sending events: 
[{'status': 'completed', 'tag': u'14ac5edd-269f-4808-9a34-c4cc93e9ab70', 
'name': 'network-vif-plugged', 'server_uuid': 
u'2790be4a-5285-46aa-8ee2-c68f5b936c1d'}] send_events 
/usr/lib/python2.7/site-packages/neutron/notifiers/nova.py:237

  Later, the destination host of the evacuation attempts to plug the vif
  but can't:

  2015-03-09 02:15:41.441 629 DEBUG neutron.plugins.ml2.rpc 
[req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 
14ac5edd-269f-4808-9a34-c4cc93e9ab70 up at agent ovs-agent-ipxx 
update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:156
  2015-03-09 02:15:41.485 629 DEBUG neutron.plugins.ml2.rpc 
[req-5ea6625c-a60c-48fb-9264-e2a5a3ed0d26 None] Device 
14ac5edd-269f-4808-9a34-c4cc93e9ab70 not bound to the agent host ipx 
update_device_up /usr/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:163

  The cause of the problem seems to be that the neutron port does not
  have its binding:host_id properly updated on evacuation; the answer to
  question 257358 looks like the fix.
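
  A minimal sketch of that fix in python-neutronclient terms (credentials
  are placeholders; the port id is taken from the logs above):

      from neutronclient.v2_0 import client as neutron_client

      neutron = neutron_client.Client(
          username='admin', password='secret',
          tenant_name='admin', auth_url='http://keystone:5000/v2.0')
      # rebind the port to the evacuation destination host
      neutron.update_port(
          '14ac5edd-269f-4808-9a34-c4cc93e9ab70',
          {'port': {'binding:host_id': 'destination-host'}})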

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1430042/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427056] Re: shelved_image_id is deleted before completing unshelving instance on compute node

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1427056

Title:
  shelved_image_id is deleted before completing unshelving instance on
  compute node

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Steps to reproduce:

  1. Boot an instance from image.
  2. Call Shelve instance; the instance goes to SHELVED_OFFLOADED or SHELVED
  state depending on the 'shelved_offload_time' configured in nova.conf.
  3. Call unshelve instance.

 For shelved_offload_time = 0:
     3.1 nova-conductor calls RPC.Cast to nova-compute
     If some failure happens in nova-compute, e.g. an Instance failed to spawn
     error from libvirt.
     3.2 nova-conductor deletes instance_system_metadata.shelved_image_id after 
RPC.cast for unshelving the instance.
     3.3 Instance becomes SHELVED_OFFLOADED again by revert_task_state, but 
instance_system_metadata.shelved_image_id is already deleted for this instance

  For shelved_offload_time = -1:
     3.1 nova-conductor calls RPC.Cast to nova-compute
     If some failure happens in nova-compute, e.g. an InstanceNotFound error
     while starting the instance.
     3.2 nova-conductor deletes the snapshot and
     instance_system_metadata.shelved_image_id after the RPC.cast to start the instance.
     3.3 The instance becomes SHELVED again by revert_task_state, but the snapshot and
     instance_system_metadata.shelved_image_id are already deleted for this instance.

  Problems:
  1. As there is no shelved_image_id, unshelving the instance again gives an
     error while getting image-meta in the libvirt driver, and the instance
     remains in SHELVED_OFFLOADED state.

  2. As there is no shelved_image_id, deleting the instance will try to
  delete the image_id=None image from glance; a 404 error will be
  returned from glance, the instance will be deleted successfully, and the
  shelved image remains.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1427056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1437855] Re: Floating IPs should be associated with the first fixed IPv4 address

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1437855

Title:
  Floating IPs should be associated with the first fixed IPv4 address

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  If a port attached to an instance has multiple fixed IPs and a
  floating IP is associated without specifying a fixed ip to associate,
  the behavior in Neutron is to reject the associate request. The
  behavior in Nova in the absence of a specified fixed ip, however, is
  to pick the first one from the list of fixed ips on the port.

  This is a problem if an IPv6 address is the first on the port because
  the floating IP will be NAT'ed to the IPv6 fixed address, which is not
  supported. Any attempts to reach the instance through its floating
  address will fail. This causes failures in certain scenario tests that
  use the Nova floating IP API when dual-stack IPv4+IPv6 is enabled,
  such as test_baremetal_basic_ops  in check-tempest-dsvm-ironic-pxe_ssh
  in https://review.openstack.org/#/c/168063
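
  A minimal sketch of the intended selection (not nova's exact code):

      import netaddr

      def first_v4_fixed_ip(fixed_ips):
          # prefer the first IPv4 fixed IP instead of whatever is first
          for fixed_ip in fixed_ips:
              if netaddr.IPAddress(fixed_ip['ip_address']).version == 4:
                  return fixed_ip['ip_address']
          return None  # mirror neutron and refuse to associate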

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1437855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479532] [NEW] Ironic driver needs to handle nodes in CLEANWAIT state

2015-07-29 Thread Ruby Loo
Public bug reported:

Ironic recently added a new state 'CLEANWAIT' [1] so nodes that
previously had their provision state as CLEANING will now be in either
CLEANING or CLEANWAIT.

The ironic driver in nova needs to be updated to know about this new
CLEANWAIT state. In particular, when destroying an instance, from nova's
perspective, the instance has been removed when a node is in CLEANWAIT
state.

[1] Ic2bc4f147f68947f53d341fda5e0c8d7b594a553

** Affects: nova
 Importance: Undecided
 Assignee: Ruby Loo (rloo)
 Status: New


** Tags: ironic

** Tags added: ironic

** Changed in: nova
 Assignee: (unassigned) => Ruby Loo (rloo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479532

Title:
  Ironic driver needs to handle nodes in CLEANWAIT state

Status in OpenStack Compute (nova):
  New

Bug description:
  Ironic recently added a new state 'CLEANWAIT' [1] so nodes that
  previously had their provision state as CLEANING will now be in either
  CLEANING or CLEANWAIT.

  The ironic driver in nova needs to be updated to know about this new
  CLEANWAIT state. In particular, when destroying an instance, nova
  should treat the instance as removed once the node is in the
  CLEANWAIT state.

  [1] Ic2bc4f147f68947f53d341fda5e0c8d7b594a553
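
  A hedged sketch of the change, assuming ironic's state strings; the
  helper and tuple below are illustrative, not the driver's actual code:

    # Provision states after which, from nova's perspective, the
    # instance is gone and destroy() can stop waiting.
    CLEANING = 'cleaning'
    CLEANWAIT = 'clean wait'
    AVAILABLE = 'available'
    NOSTATE = None

    _UNPROVISION_DONE_STATES = (CLEANING, CLEANWAIT, AVAILABLE, NOSTATE)

    def unprovision_complete(node_provision_state):
        return node_provision_state in _UNPROVISION_DONE_STATES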

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479532/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477296] Re: docs: admin guide doesn't mention SSH settings for migration/resize

2015-07-29 Thread Davanum Srinivas (DIMS)
** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477296

Title:
  docs: admin guide doesn't mention SSH settings for migration/resize

Status in OpenStack Compute (nova):
  New
Status in openstack-manuals:
  New

Bug description:
  When doing a migration/resize to another host without having shared
  storage, the compute nodes need SSH access to each other [1], which is
  not documented in the admin guide [2]. Right now one has to search
  for blogs which describe how to solve this [3]. I think this should
  either be documented in the OpenStack admin guide in the compute
  section, or there could be a solution where SSH access to another
  compute host is no longer necessary.

  References:
  
  [1] OpenStack; Nova; libvirt driver; mkdir via SSH;
  
https://github.com/openstack/nova/blob/2015.1.0/nova/virt/libvirt/driver.py#L6242
  [2] OpenStack Cloud Admin Guide; Configure migrations;
  
http://docs.openstack.org/admin-guide-cloud/content/section_configuring-compute-migrations.html
  [3] Sebastien Han; 2015-01-06; OpenStack Configure VM Migrate Nova SSH;
  
http://www.sebastien-han.fr/blog/2015/01/06/openstack-configure-vm-migrate-nova-ssh/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1477296/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479514] [NEW] subnet-update of allocation_pool does not prevent overlap of existing gateway IP

2015-07-29 Thread John Kasperski
Public bug reported:

There are three variations here with the subnet-update command:

1) subnet-update of the gateway_ip such that the new gateway is placed
in the existing subnet allocation pool. This results in
GatewayConflictWithAllocationPools.

2) subnet-update of both the gateway_ip and the allocation_pool. If
the new gateway IP is located in the new allocation pool, then
GatewayConflictWithAllocationPools is returned.

3) subnet-update of just the allocation_pool. If the new allocation
pool includes the existing gateway_ip, no error is returned.

Scenario 3 needs to be fixed to return the same exception as (1) and
(2); a sketch of the missing check follows.
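
A minimal sketch of the missing validation for scenario 3, using
netaddr; the exception name matches the one quoted above, everything
else is illustrative:

  import netaddr

  class GatewayConflictWithAllocationPools(Exception):
      pass

  def validate_pools_against_gateway(gateway_ip, allocation_pools):
      # An allocation-pool-only update must still consider the
      # subnet's existing gateway_ip.
      gw = netaddr.IPAddress(gateway_ip)
      for pool in allocation_pools:
          if gw in netaddr.IPRange(pool['start'], pool['end']):
              raise GatewayConflictWithAllocationPools(
                  'Gateway ip %s conflicts with allocation pool %s-%s'
                  % (gateway_ip, pool['start'], pool['end']))

  # Raises: the new pool swallows the existing gateway 10.0.0.1.
  try:
      validate_pools_against_gateway(
          '10.0.0.1', [{'start': '10.0.0.1', 'end': '10.0.0.200'}])
  except GatewayConflictWithAllocationPools as exc:
      print(exc)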

** Affects: neutron
 Importance: Undecided
 Assignee: John Kasperski (jckasper)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => John Kasperski (jckasper)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1479514

Title:
  subnet-update of allocation_pool does not prevent overlap of existing
  gateway IP

Status in neutron:
  In Progress

Bug description:
  There are three variations here with the subnet-update command:

  1) subnet-update of the gateway_ip such that the new gateway is
  placed in the existing subnet allocation pool. This results in
  GatewayConflictWithAllocationPools.

  2) subnet-update of both the gateway_ip and the allocation_pool. If
  the new gateway IP is located in the new allocation pool, then
  GatewayConflictWithAllocationPools is returned.

  3) subnet-update of just the allocation_pool. If the new allocation
  pool includes the existing gateway_ip, no error is returned.

  Scenario 3 needs to be fixed to return the same exception as (1) and
  (2).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1479514/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479523] [NEW] Stop using debug for insecure responses

2015-07-29 Thread Brant Knudson
Public bug reported:


If you set debug=true in keystone.conf the server 1) logs at debug level, and 
2) sends out insecure responses. Deployers might think that debug=true only 
does 1, not knowing about 2 since it's not documented in the sample config. The 
behaviors should be decoupled to improve security a bit.
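
A hedged sketch of the decoupling (keystone later grew an
'insecure_debug' option along these lines; the snippet is illustrative,
not keystone's actual code):

  from oslo_config import cfg

  CONF = cfg.CONF
  CONF.register_opts([
      cfg.BoolOpt('insecure_debug', default=False,
                  help='Include exception detail in error responses. '
                       'Never enable this in production.'),
  ])

  def error_message(exc):
      # Debug *logging* no longer implies detail-leaking *responses*;
      # only the dedicated flag exposes the underlying exception.
      if CONF.insecure_debug:
          return str(exc)
      return ('An unexpected error prevented the server from '
              'fulfilling your request.')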

** Affects: keystone
 Importance: Undecided
 Assignee: Brant Knudson (blk-u)
 Status: New


** Tags: security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1479523

Title:
  Stop using debug for insecure responses

Status in Keystone:
  New

Bug description:
  
  If you set debug=true in keystone.conf the server 1) logs at debug level, and 
2) sends out insecure responses. Deployers might think that debug=true only 
does 1, not knowing about 2 since it's not documented in the sample config. The 
behaviors should be decoupled to improve security a bit.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1479523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1466744] Re: Integration test test_image_register_unregister failing gate

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1466744

Title:
  Integration test test_image_register_unregister failing gate

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Following test is failing on gate.

  Traceback (most recent call last):
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_sahara_image_registry.py", line 34, in test_image_register_unregister
      "Image was not registered.")
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py", line 678, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true : Image was not registered.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1466744/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464991] Re: Errors are not handled correctly during image updates

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1464991

Title:
  Errors are not handled correctly during image updates

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  To reproduce:

  Log in to horizon as unprivileged user. Navigate to image editing, try
  to mark an image as public.

  Observed result: an error message stating: "Danger: There was an
  error submitting the form. Please try again."
  Logs indicate that an UnboundLocalError occurs:

  File "/Users/teferi/murano/horizon/openstack_dashboard/api/glance.py", line 129, in image_update
    return image
  UnboundLocalError: local variable 'image' referenced before assignment

  This is because the image variable is not handled correctly in the
  image_update function.
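
  The failure mode reduces to returning a name that is bound only on
  the success path; a self-contained sketch of the pattern and the fix
  (illustrative, not horizon's exact code):

    def image_update(images, image_id, **kwargs):
        # Buggy shape: 'image' is bound only when update() succeeds.
        try:
            image = images.update(image_id, **kwargs)
        except Exception as exc:
            print('update failed: %s' % exc)  # error swallowed...
        return image  # ...UnboundLocalError when update() raised

    def image_update_fixed(images, image_id, **kwargs):
        # Fix: re-raise, so the return is only reached when bound.
        try:
            image = images.update(image_id, **kwargs)
        except Exception as exc:
            print('update failed: %s' % exc)
            raise
        return image

    class _DenyingImages(object):
        def update(self, image_id, **kwargs):
            raise RuntimeError('not allowed to publicize image')

    try:
        image_update(_DenyingImages(), 'abc', is_public=True)
    except UnboundLocalError as exc:
        print(exc)  # local variable 'image' referenced before assignment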

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1464991/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467950] Re: test_floatingip, test_keypair tests, test_create_delete_user are consistently failing

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1467950

Title:
  test_floatingip, test_keypair tests,  test_create_delete_user are
  consistently failing

Status in OpenStack Dashboard (Horizon):
  New
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  The test above are failing with different reasons. Please note that
  they were working until yesterday (2015-06-22).

  test_floatingip:

  Traceback (most recent call last):
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_floatingip.py", line 25, in test_floatingip
      floating_ip = floatingip_page.allocate_floatingip()
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/pages/project/compute/access_and_security/floatingipspage.py", line 62, in allocate_floatingip
      self.floatingips_table.allocate_ip_to_project.click()
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/baseregion.py", line 59, in __getattr__
      return self._dynamic_properties[name]()
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/regions/baseregion.py", line 78, in __call__
      return result if self.index is None else result[self.index]
  IndexError: list index out of range

  Most likely the page was changed.


  test_keypair:
  Traceback (most recent call last):
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_keypair.py", line 28, in test_keypair
      self.assertTrue(keypair_page.is_keypair_present(self.KEYPAIR_NAME))
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py", line 678, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true

  Either the page changed, or active polling of the ready state is
  needed.

  
  See for example
  http://logs.openstack.org/72/193072/1/gate/gate-horizon-dsvm-integration/dd294c6/console.html#_2015-06-23_12_30_20_817

  
  This bug tracks the skip patch which will follow, and the future
  investigation of the real issues that need to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1467950/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473303] Re: horizon gate failing due to latest release of mock

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1473303

Title:
  horizon gate failing due to latest release of mock

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:

  the latest release of mock exposed some bad tests in horizon and the
  gate jobs are now failing

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1473303/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465221] Re: Horizon running in newer django, the fields is now not sorted correctly.

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1465221

Title:
  Horizon running in newer django, the fields is now not sorted
  correctly.

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Create User form has wrong order of fields.

  Correct order: name, email, password, confirm_password,
  project, role.

  Current order: password, confirm_password, name, email,
  project, role.

  This causes the integration test (test_create_delete_user) to
  fail.

  Traceback (most recent call last):
    File "/opt/stack/new/horizon/openstack_dashboard/test/integration_tests/tests/test_user_create_delete.py", line 26, in test_create_delete_user
      self.assertTrue(users_page.is_user_present(self.USER_NAME))
    File "/opt/stack/new/horizon/.tox/py27integration/local/lib/python2.7/site-packages/unittest2/case.py", line 678, in assertTrue
      raise self.failureException(msg)
  AssertionError: False is not true

  Can be seen both on latest devstack and in the
  gate-horizon-dsvm-integration gate job.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1465221/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1473369] Re: new mock release broke a bunch of unit tests

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1473369

Title:
  new mock release broke a bunch of unit tests

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in glance_store:
  Fix Released
Status in murano:
  Fix Committed
Status in murano kilo series:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  In Progress
Status in python-muranoclient:
  Fix Committed
Status in python-muranoclient kilo series:
  In Progress
Status in OpenStack Object Storage (swift):
  Fix Committed

Bug description:
  http://lists.openstack.org/pipermail/openstack-dev/2015-July/069156.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1473369/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468300] Re: changing user's email from user list deletes user password

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1468300

Title:
  changing user's email from user list deletes user password

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  OS: Ubuntu Server 14.04.2 LTS
  Openstack: Kilo
  Openstack-dashboard package: 1:2015.1.0-0ubuntu1~cloud0

  robcresswell: Seems to also occur on master as of 2015-06-24

  While logged in as an admin user in the Dashboard (horizon), if you
  change another user's email address directly from the users list, it
  updates the email address properly but sets that user's password to
  NULL. This behaviour does not occur when changing the email address
  via the Edit form.

  Before changing email address:

  mysql> select * from user where name="demo";
  id:                 651261afa8654ed1a6431ed2b7405bd3
  name:               demo
  extra:              {"email": ""}
  password:           $6$rounds=4$mXk6yBRZo.00pnoU$rRfNvGXVW15gHq8k6p9caT9bDQwIaNgpN29dLE0aR8wSisIN56xvbdbiQRGs/2S6qmIrrKaTUAm3uso8jMIr61
  enabled:            1
  domain_id:          default
  default_project_id: 7dd667e26b2e4169bb74cf3306eac352

  After:

  mysql> select * from user where name="demo";
  id:                 651261afa8654ed1a6431ed2b7405bd3
  name:               demo
  extra:              {"email": "notrealin...@gti.uvigo.es"}
  password:           NULL
  enabled:            1
  domain_id:          default
  default_project_id: 7dd667e26b2e4169bb74cf3306eac352

  Security impact: with no password the user can't log in through the
  dashboard, and I also tried logging in via the CLI without a password
  and it doesn't seem to work. So, I guess it's not a security
  vulnerability.
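
  A hedged sketch of the defensive fix on the horizon side — when only
  the email changes, send no password field at all, so an empty value
  can never overwrite the stored hash (field names are illustrative):

    def build_user_update(form_data):
        # Only forward fields the admin actually set; an absent key
        # means "leave the current value alone".
        update = {}
        for field in ('name', 'email', 'enabled'):
            if field in form_data:
                update[field] = form_data[field]
        if form_data.get('password'):  # skip None and ''
            update['password'] = form_data['password']
        return update

    # An inline email edit carries no password key, so the stored
    # password hash is preserved:
    assert 'password' not in build_user_update({'email': 'u@example.org'})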

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1468300/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467982] Re: Profiler raises an error when it is enabled

2015-07-29 Thread Alan Pevec
** Changed in: glance/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1467982

Title:
  Profiler raises an error when it is enabled

Status in Glance:
  Fix Released
Status in Glance kilo series:
  Fix Released
Status in Glance liberty series:
  Fix Released

Bug description:
  Description:

  When OSProfiler is enabled in Glance API and Registry, they raise the
  following exception:

  CRITICAL glance [-] AttributeError: 'module' object has no attribute 'messaging'
  TRACE glance Traceback (most recent call last):
  TRACE glance   File "/usr/bin/glance-api", line 10, in <module>
  TRACE glance     sys.exit(main())
  TRACE glance   File "/usr/lib/python2.7/site-packages/glance/cmd/api.py", line 80, in main
  TRACE glance     notifier.messaging, {},
  TRACE glance AttributeError: 'module' object has no attribute 'messaging'
  TRACE glance

  
  Steps to reproduce:
  1. Enable profiler in glance-api.conf and glance-registry.conf

  [profiler]
  enabled = True

  2. Restart API and Registry Services

  
  Expected Behavior:
  Start services with profiler enabled

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1467982/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443186] Re: rebooted instances are shutdown by libvirt lifecycle event handling

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443186

Title:
  rebooted instances are shutdown by libvirt lifecycle event handling

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  This is a continuation of bug 1293480 (which created bug 1433049).
  Those were reported against xen domains with the libvirt driver but we
  have a recreate with CONF.libvirt.virt_type=kvm, see the attached logs
  and reference the instance with uuid
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78.

  In this case, we're running a stress test of soft rebooting 30 active
  instances at once. Because of a delay in the libvirt lifecycle event
  handling, they are all shut down after the reboot operation is
  complete and the instances go from ACTIVE to SHUTDOWN.

  This was reported to me against Icehouse code but the recreate is
  against Juno code with patch:

  https://review.openstack.org/#/c/169782/

  For better logging.

  Snippets from the log:

  2015-04-10 21:02:38.234 11195 AUDIT nova.compute.manager [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Rebooting instance

  2015-04-10 21:03:47.703 11195 DEBUG nova.compute.manager [req-
  8219e6cf-dce8-44e7-a5c1-bf1879e155b2 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  unplugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:49.299 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown successfully.

  2015-04-10 21:03:53.251 11195 DEBUG nova.compute.manager [req-
  521a6bdb-172f-4c0c-9bef-855087d7dff0 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Received event network-vif-
  plugged-0b2c7633-a5bc-4150-86b2-c8ba58ffa785 external_instance_event
  /usr/lib/python2.6/site-packages/nova/compute/manager.py:6285

  2015-04-10 21:03:53.259 11195 INFO nova.virt.libvirt.driver [-]
  [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance running
  successfully.

  2015-04-10 21:03:53.261 11195 INFO nova.virt.libvirt.driver [req-
  b24d4f8d-4a10-44c8-81d7-f79f27e3a3e7 None] [instance:
  9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance soft rebooted
  successfully.

  **
  At this point we have successfully soft rebooted the instance
  **

  now we get a lifecycle event from libvirt that the instance is
  stopped; since we're no longer running a task, we assume the
  hypervisor is correct and we call the stop API

  2015-04-10 21:04:01.133 11195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1428699829.13, 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78 => Stopped> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:01.134 11195 INFO nova.compute.manager [-] [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Stopped (Lifecycle Event)
  2015-04-10 21:04:01.245 11195 INFO nova.compute.manager [-] [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 4
  2015-04-10 21:04:01.334 11195 INFO nova.compute.manager [-] [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
  2015-04-10 21:04:01.463 11195 WARNING nova.compute.manager [-] [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4

  **
  now we get a lifecycle event from libvirt that the instance is started, but 
since the instance already has a task_state of 'powering-off' because of the 
previous stop API call from _sync_instance_power_state, we ignore it.
  **

  
  2015-04-10 21:04:02.085 11195 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1428699831.45, 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78 => Started> emit_event /usr/lib/python2.6/site-packages/nova/virt/driver.py:1298
  2015-04-10 21:04:02.086 11195 INFO nova.compute.manager [-] [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] VM Started (Lifecycle Event)
  2015-04-10 21:04:02.190 11195 INFO nova.compute.manager [-] [instance: 9ad8f6c5-a5dc-4820-9ea5-fa081e74ec78] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: powering-off, current DB power_state: 4, VM power_state: 1
  2015-04-10 21:04:02.414 11195 INFO 

[Yahoo-eng-team] [Bug 1442343] Re: Mapping openstack_project attribute in k2k assertions with different domains

2015-07-29 Thread Alan Pevec
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1442343

Title:
  Mapping openstack_project attribute in k2k assertions with different
  domains

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  We can have two projects with the same name in different domains. So
  if we have a "Project A" in "Domain X" and a "Project A" in "Domain
  Y", there is no way to tell which "Project A" is being used in a SAML
  assertion generated by this IdP (we have only the openstack_project
  attribute in the SAML assertion).

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1442343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1440834] Re: Unit test tree does not match the structure of the code tree

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1440834

Title:
  Unit test tree does not match the structure of the code tree

Status in networking-brocade:
  Fix Released
Status in networking-cisco:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The structure of the unit test tree does not currently correspond to
  the structure of the code under test.  This makes it difficult for a
  developer to find the unit tests for a given module and complicates
  non-mechanical evaluation of coverage.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-brocade/+bug/1440834/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443765] Re: Delete endpoint_group should remove project_endpoint_group at first

2015-07-29 Thread Alan Pevec
** Changed in: keystone/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1443765

Title:
  Delete endpoint_group should remove project_endpoint_group at first

Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released

Bug description:
  The endpoint_group_id column of the project_endpoint_group table has
  a foreign key reference to the id of endpoint_group, so if we want to
  delete an endpoint_group we need to delete the associated
  project_endpoint_group rows first; otherwise it hits the following
  exception:

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: (IntegrityError) (1451, 'Cannot delete or
  update a parent row: a foreign key constraint fails
  (`keystone`.`project_endpoint_group`, CONSTRAINT
  `project_endpoint_group_ibfk_1` FOREIGN KEY (`endpoint_group_id`)
  REFERENCES `endpoint_group` (`id`))') 'DELETE FROM endpoint_group
  WHERE endpoint_group.id = %s' ('d5c86622fea04c43b0c0e3b540417e1f',)
  (Disable debug mode to suppress these details.)", "code": 500,
  "title": "Internal Server Error"}}
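
  A self-contained SQLAlchemy-flavoured sketch of the ordering fix —
  delete the child rows before the parent so the constraint cannot fire
  (the model definitions are illustrative, not keystone's):

    from sqlalchemy import Column, ForeignKey, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class EndpointGroup(Base):
        __tablename__ = 'endpoint_group'
        id = Column(String(64), primary_key=True)

    class ProjectEndpointGroup(Base):
        __tablename__ = 'project_endpoint_group'
        endpoint_group_id = Column(
            String(64), ForeignKey('endpoint_group.id'), primary_key=True)
        project_id = Column(String(64), primary_key=True)

    def delete_endpoint_group(session, endpoint_group_id):
        # Child rows first: they reference endpoint_group.id.
        session.query(ProjectEndpointGroup).filter_by(
            endpoint_group_id=endpoint_group_id).delete()
        session.query(EndpointGroup).filter_by(
            id=endpoint_group_id).delete()
        session.commit()

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(EndpointGroup(id='eg1'))
    session.add(ProjectEndpointGroup(endpoint_group_id='eg1',
                                     project_id='p1'))
    session.commit()
    delete_endpoint_group(session, 'eg1')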

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1443765/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443480] Re: Some of the neutron functional tests are failing with import error after unit test tree reorganization

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443480

Title:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  Some of the neutron functional tests are failing with import error
  after unit test tree reorganization

  
  Traceback (most recent call last):
  ImportError: Failed to import test module: neutron.tests.functional.scheduler.test_dhcp_agent_scheduler
  Traceback (most recent call last):
    File "/usr/lib64/python2.7/unittest/loader.py", line 254, in _find_tests
      module = self._get_module_from_name(name)
    File "/usr/lib64/python2.7/unittest/loader.py", line 232, in _get_module_from_name
      __import__(name)
    File "neutron/tests/functional/scheduler/test_dhcp_agent_scheduler.py", line 24, in <module>
      from neutron.tests.unit import test_dhcp_scheduler as test_dhcp_sch
  ImportError: cannot import name test_dhcp_scheduler

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443480/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1441400] Re: Move N1kv section from neutron tree's ml2_conf_cisco.ini to stackforge repo

2015-07-29 Thread Alan Pevec
** Changed in: neutron/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1441400

Title:
  Move N1kv section from neutron tree's ml2_conf_cisco.ini to stackforge
  repo

Status in networking-cisco:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron kilo series:
  Fix Released

Bug description:
  The change includes moving the N1kv section from the neutron tree's
  ml2_conf_cisco.ini file to the stackforge/networking-cisco project.

  The change will also include addition of a new parameter --
  'sync_interval' to the N1kv section, after the section is moved to the
  stackforge repo.

  sync_interval: configurable parameter for Neutron - VSM (controller)
  sync duration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1441400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1446583] Re: services no longer reliably stop in stable/kilo

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1446583

Title:
  services no longer reliably stop in stable/kilo

Status in Cinder:
  Fix Released
Status in Cinder kilo series:
  Fix Released
Status in Keystone:
  Fix Released
Status in Keystone kilo series:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in oslo-incubator:
  Fix Released

Bug description:
  In attempting to upgrade the upgrade branch structure to support
  stable/kilo -> master in devstack gate, we found the project could no
  longer pass Grenade testing. The reason is that pkill -g is no
  longer reliably killing off the services:
  http://logs.openstack.org/91/175391/5/gate/gate-grenade-
  dsvm/0ad4a94/logs/grenade.sh.txt.gz#_2015-04-21_03_15_31_436

  It has been seen with keystone-all and cinder-api on this patch
  series:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVGhlIGZvbGxvd2luZyBzZXJ2aWNlcyBhcmUgc3RpbGwgcnVubmluZ1wiIEFORCBtZXNzYWdlOlwiZGllXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mjk2MTU0NTQ2MzB9

  There were a number of changes to the oslo-incubator service.py code
  during kilo; it's unclear at this point which one is the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1446583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438638] Re: Hyper-V: Compute Driver doesn't start if there are instances with no VM Notes

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438638

Title:
  Hyper-V: Compute Driver doesn't start if there are instances with no
  VM Notes

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The Nova Hyper-V Compute Driver cannot start if there are instances
  with Notes = None. This can be caused by users manually altering the
  VM Notes, or by VMs created directly by users.

  Logs: http://paste.openstack.org/show/197681/
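
  A hedged sketch of the guard the driver needs: treat an empty or
  non-UUID Notes field as "not a nova instance" instead of crashing
  (the helper name is illustrative):

    import uuid

    def instance_uuid_from_notes(notes):
        # Returns the nova instance UUID stored in the VM notes, or
        # None for user-created or manually edited VMs.
        if not notes:
            return None
        candidate = notes[0] if isinstance(notes, list) else notes
        try:
            return str(uuid.UUID(candidate.strip()))
        except (ValueError, AttributeError):
            return None

    assert instance_uuid_from_notes(None) is None          # altered VM
    assert instance_uuid_from_notes('not a uuid') is None  # user VM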

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1438638/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1443970] Re: nova-manage create networks with wrong dhcp_server in DB(nova)

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1443970

Title:
  nova-manage create networks with wrong dhcp_server in DB(nova)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  When using nova-network, creating a new network stores wrong
  'dhcp_server' values in the database.

  Command (example):
  /usr/bin/nova-manage network create novanetwork 10.0.0.0/16 3 8 --vlan_start 103 --dns1 8.8.8.8 --dns2 8.8.4.4

  This happens because in the file nova/network/manager.py, in the
  method _do_create_networks(), the variable 'enable_dhcp' is
  incorrectly used in a loop, as sketched below.
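
  A hedged sketch of the bug's shape (illustrative, not nova's exact
  code): a per-network value evaluated once outside the loop is reused
  for every created network, so later networks get the wrong
  dhcp_server.

    # Buggy shape: computed once, stale for every later network.
    def create_networks_buggy(subnets):
        nets = []
        enable_dhcp = subnets[0]['dhcp']
        for subnet in subnets:
            dhcp_server = subnet['gateway'] if enable_dhcp else None
            nets.append({'cidr': subnet['cidr'],
                         'dhcp_server': dhcp_server})
        return nets

    # Fixed shape: recompute inside the loop, per network.
    def create_networks_fixed(subnets):
        nets = []
        for subnet in subnets:
            enable_dhcp = subnet['dhcp']
            dhcp_server = subnet['gateway'] if enable_dhcp else None
            nets.append({'cidr': subnet['cidr'],
                         'dhcp_server': dhcp_server})
        return nets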

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1443970/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1434429] Re: libvirt: _compare_cpu doesn't consider NotSupportedError

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1434429

Title:
  libvirt: _compare_cpu doesn't consider NotSupportedError

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  Issue
  =====

  The libvirt driver method _compare_cpu doesn't consider that the
  underlying libvirt function could throw a NotSupportedError (like the
  baselineCPU call in the host.py module [1]).

  
  Steps to reproduce
  ==================

  * Create setup with at least 2 compute nodes
  * Create cinder volume with bootable image
  * Launch instance from that volume
  * Start live migration of instance to another host

  Expected behavior
  =================

  If the target host has the same CPU architecture as the source host,
  the live migration should be triggered.

  Actual behavior
  ===============

  The live migration gets aborted and rolled back because all libvirt
  errors get treated equally.

  Logs & Env.
  ===========

  section libvirt in /etc/nova/nova.conf in both nodes:

  [libvirt]
  live_migration_flag = 
VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE, 
VIR_MIGRATE_TUNNELLED
  disk_cachemodes = block=none
  vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
  inject_partition = -2
  live_migration_uri = qemu+tcp://stack@%s/system
  use_usb_tablet = False
  cpu_mode = none
  virt_type = kvm

  
  Nova version
  ============

  /opt/stack/nova$ git log --oneline -n5
  90ee915 Merge "Add api microvesion unit test case for wsgi.action"
  7885b74 Merge "Remove db layer hard-code permission checks for flavor-manager"
  416f310 Merge "Remove db layer hard-code permission checks for migrations_get*"
  ecb306b Merge "Remove db layer hard-code permission checks for migration_create/update"
  6efc8ad Merge "libvirt: don't allow to resize down the default ephemeral disk"

  
  References
  ==========

  [1] baselineCPU call to libvirt catches NotSupportedError; 
  
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/host.py#L753
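
  A hedged sketch of the tolerant handling, mirroring how the
  baselineCPU call is treated [1]: a "not supported" error from libvirt
  means "cannot compare here", not "abort the migration". Constant and
  exception names follow libvirt-python; the wrapper itself is
  illustrative:

    import libvirt

    def compare_cpu(conn, cpu_xml):
        # True if the host can run the guest CPU, None if the
        # hypervisor cannot answer; real errors still propagate.
        try:
            ret = conn.compareCPU(cpu_xml, 0)
            return ret > libvirt.VIR_CPU_COMPARE_INCOMPATIBLE
        except libvirt.libvirtError as exc:
            if exc.get_error_code() == libvirt.VIR_ERR_NO_SUPPORT:
                return None  # not a migration blocker
            raise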

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1434429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1447249] Re: Ironic: injected files not passed through to configdrive

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1447249

Title:
  Ironic: injected files not passed through to configdrive

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released

Bug description:
  The ironic driver's code to generate a configdrive does not pass
  injected_files through to the configdrive builder, resulting in
  injected files not being in the resulting configdrive.
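
  A hedged sketch of the plumbing fix — hand injected_files to the
  configdrive builder instead of dropping them. InstanceMetadata and
  ConfigDriveBuilder mirror nova's metadata/configdrive APIs, while the
  surrounding function is illustrative, not the ironic driver's actual
  code:

    import tempfile

    from nova.api.metadata import base as instance_metadata
    from nova.virt import configdrive

    def build_configdrive(instance, injected_files, network_info,
                          extra_md):
        i_meta = instance_metadata.InstanceMetadata(
            instance,
            content=injected_files,  # the part the bug says was dropped
            extra_md=extra_md,
            network_info=network_info)
        with tempfile.NamedTemporaryFile() as f:
            with configdrive.ConfigDriveBuilder(instance_md=i_meta) as cdb:
                cdb.make_drive(f.name)
            f.seek(0)
            return f.read()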

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1447249/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1398754] Re: LBaas v1 Associate Monitor to Pool Fails

2015-07-29 Thread Alan Pevec
** Changed in: horizon/kilo
   Status: Fix Committed = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1398754

Title:
  LBaas v1 Associate Monitor to Pool Fails

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) kilo series:
  Fix Released

Bug description:
  Trying to associate a health monitor to a pool in horizon, there's no
  monitor listed on the "Associate Monitor" page.

  Reproduce Procedure: 
  1.  Create Pool 
  2.  Add two members
  3.  Create Health Monitor
  4.  Click Associate Monitor button of pool resource
  5.  There's no monitor listed.

  ***
  At this point, use CLI to:
  1.  show pool, there's no monitor associated yet.
  +------------------------+-------+
  | Field                  | Value |
  +------------------------+-------+
  | health_monitors        |       |
  | health_monitors_status |       |
  +------------------------+-------+

  2. list monitor, there's an available monitor.
  $ neutron lb-healthmonitor-list
  +--------------------------------------+------+----------------+
  | id                                   | type | admin_state_up |
  +--------------------------------------+------+----------------+
  | f5e764f0-eceb-4516-9919-7806f409c1ae | HTTP | True           |
  +--------------------------------------+------+----------------+

  3. Associate monitor to pool. Succeeded.
  $ neutron lb-healthmonitor-associate f5e764f0-eceb-4516-9919-7806f409c1ae mypool
  Associated health monitor f5e764f0-eceb-4516-9919-7806f409c1ae

  *

  Based on the above info, it should be a horizon bug. Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1398754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1416496] Re: nova.conf - configuration options icehouse compat flag is not right

2015-07-29 Thread Alan Pevec
** Changed in: nova/kilo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1416496

Title:
  nova.conf - configuration options icehouse compat flag is not right

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) kilo series:
  Fix Released
Status in openstack-manuals:
  Fix Released

Bug description:
  Table 2.57. Description of upgrade levels configuration options has
  the wrong information for setting icehouse/juno compat flags during
  upgrades.

  Specifically this section:

  compute = None (StrOpt) Set a version cap for messages sent to
  compute services. If you plan to do a live upgrade from havana to
  icehouse, you should set this option to icehouse-compat before
  beginning the live upgrade procedure.

  This should be compute = <old release>, for example compute =
  icehouse when doing an upgrade from I to J, as shown below.
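
  For example, pinning computes while upgrading from Icehouse to Juno
  would look like this in nova.conf (values illustrative):

    [upgrade_levels]
    compute = icehouse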

  ---
  Built: 2015-01-29T19:27:05+00:00
  git SHA: 3e80c2419cfe03f86057f3229044cd0d495e0295
  URL: 
http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html
  source File: 
file:/home/jenkins/workspace/openstack-manuals-tox-doc-publishdocs/doc/config-reference/compute/section_compute-options-reference.xml
  xml:id: list-of-compute-config-options

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1416496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470234] Re: test_arp_spoof_allowed_address_pairs_0cidr sporadically failing functional job

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470234

Title:
  test_arp_spoof_allowed_address_pairs_0cidr sporadically failing
  functional job

Status in neutron:
  Fix Released

Bug description:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwidGVzdF9hcnBfc3Bvb2ZfYWxsb3dlZF9hZGRyZXNzX3BhaXJzXzBjaWRyXCIgQU5EIHRhZ3M6XCJjb25zb2xlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU2OTU1NTk3MTB9

  18 hits in last 7 days.

  Here's a failure trace of
  test_arp_spoof_allowed_address_pairs_0cidr(native):

  ft1.78: neutron.tests.functional.agent.test_ovs_flows.ARPSpoofOFCtlTestCase.test_arp_spoof_allowed_address_pairs_0cidr(native)_StringException: Empty attachments:
    pythonlogging:'neutron.api.extensions'
    stderr
    stdout

  pythonlogging:'': {{{
  2015-06-30 19:36:25,695 ERROR [neutron.agent.linux.utils]
  Command: ['ip', 'netns', 'exec', 'func-8f787d60-208d-4524-b4d4-e79b5cd19eae', 'ping', '-c', 1, '-W', 1, '192.168.0.2']
  Exit code: 1
  Stdin:
  Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.

  --- 192.168.0.2 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  Stderr:
  }}}

  Traceback (most recent call last):
    File "neutron/tests/functional/agent/test_ovs_flows.py", line 169, in test_arp_spoof_allowed_address_pairs_0cidr
      net_helpers.assert_ping(self.src_namespace, self.dst_addr)
    File "neutron/tests/common/net_helpers.py", line 69, in assert_ping
      dst_ip])
    File "neutron/agent/linux/ip_lib.py", line 676, in execute
      extra_ok_codes=extra_ok_codes, **kwargs)
    File "neutron/agent/linux/utils.py", line 138, in execute
      raise RuntimeError(m)
  RuntimeError:
  Command: ['ip', 'netns', 'exec', 'func-8f787d60-208d-4524-b4d4-e79b5cd19eae', 'ping', '-c', 1, '-W', 1, '192.168.0.2']
  Exit code: 1
  Stdin:
  Stdout: PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.

  --- 192.168.0.2 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470186] Re: Pylint 1.4.1 broken due to new logilab.common 1.0.0 release

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470186

Title:
  Pylint 1.4.1 broken due to new logilab.common 1.0.0 release

Status in neutron:
  Fix Released

Bug description:
  Pylint 1.4.1 is using logilab-common, which had a release on the 30th,
  breaking pylint. Pylint developers are planning a logilab common
  release tomorrow which should unbreak pylint once again, at which
  point I'll re-enable pylint.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470186/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470622] Re: Devref documentation for client command extension support

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470622

Title:
  Devref documentation for client command extension support

Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Committed

Bug description:
  The only documentation for client extensibility is the commit message
  in https://review.openstack.org/148318

  The information should be in an official devref document.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470622/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1470894] Re: test_port_creation_and_deletion KeyError: UUID

2015-07-29 Thread Doug Hellmann
** Changed in: neutron
   Status: Fix Committed => Fix Released

** Changed in: neutron
 Milestone: None => liberty-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470894

Title:
  test_port_creation_and_deletion KeyError: UUID

Status in neutron:
  Fix Released

Bug description:
  Functional test test_port_creation_and_deletion sometimes fails with
  this stacktrace:

  Traceback (most recent call last):
    File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1526, in rpc_loop
      port_info = self.scan_ports(reg_ports, updated_ports_copy)
    File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 1065, in scan_ports
      cur_ports = self.int_br.get_vif_port_set()
    File "neutron/agent/common/ovs_lib.py", line 365, in get_vif_port_set
      results = cmd.execute(check_error=True)
    File "neutron/agent/ovsdb/native/commands.py", line 42, in execute
      ctx.reraise = False
    File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 119, in __exit__
      six.reraise(self.type_, self.value, self.tb)
    File "neutron/agent/ovsdb/native/commands.py", line 35, in execute
      txn.add(self)
    File "neutron/agent/ovsdb/api.py", line 70, in __exit__
      self.result = self.commit()
    File "neutron/agent/ovsdb/impl_idl.py", line 70, in commit
      raise result.ex
  KeyError: UUID('7a90521e-6b79-444f-a63f-0973b71b8018')

  see query

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiS2V5RXJyb3I6IFVVSURcIiBBTkQgYnVpbGRfbmFtZTpcImNoZWNrLW5ldXRyb24tZHN2bS1mdW5jdGlvbmFsXCIgIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzU4NDk3OTU1MjIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  Example of console log:
  http://logs.openstack.org/27/197727/4/check/check-neutron-dsvm-functional/7fd5c5c/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1470894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

