[Yahoo-eng-team] [Bug 1500716] Re: Start up an instance with attached volume

2015-09-29 Thread Markus Zoeller (markus_z)
This is not a bug. If you need data at boot time, you can use the
"config drive" [1] or pass in a "block device mapping" [2].

[1] http://docs.openstack.org/user-guide/cli_config_drive.html
[2] http://docs.openstack.org/openstack-ops/content/attach_block_storage.html
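For example (flavor, image and volume IDs below are placeholders), a block
device mapping attaches the volume before the guest is powered on:

  $ nova boot --flavor m1.small --image IMAGE_ID \
      --block-device-mapping vdb=VOLUME_ID:::0 my-instance

The trailing 0 keeps the volume on instance termination; an auto-start
program in the guest then finds /dev/vdb present at first boot.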

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500716

Title:
  Start up an instance with attached volume

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  When an instance is created, it is automatically powered on and no
  volume is attached.

  However, in some cases an auto-start program in the instance requires
  some data which is stored in a persistent volume. There must therefore
  be some way to attach that volume before the instance is powered on,
  or an option in the instance creation command which stops the instance
  from automatically powering on.

  It seems neither of the above workarounds is implemented...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500716] [NEW] Start up an instance with attached volume

2015-09-29 Thread Lingfeng Xiong
Public bug reported:

When an instance is created, it is automatically powered on and no
volume is attached.

However, in some cases an auto-start program in the instance requires
some data which is stored in a persistent volume. There must therefore
be some way to attach that volume before the instance is powered on, or
an option in the instance creation command which stops the instance
from automatically powering on.

It seems neither of the above workarounds is implemented...

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500716

Title:
  Start up an instance with attached volume

Status in OpenStack Compute (nova):
  New

Bug description:
  When an instance is created, it is automatically powered on and no
  volume is attached.

  However, in some cases an auto-start program in the instance requires
  some data which is stored in a persistent volume. There must therefore
  be some way to attach that volume before the instance is powered on,
  or an option in the instance creation command which stops the instance
  from automatically powering on.

  It seems neither of the above workarounds is implemented...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467719] Re: image-create returns wrong error

2015-09-29 Thread Flavio Percoco
** Changed in: glance
   Importance: Undecided => Low

** Also affects: glance/liberty
   Importance: Low
 Assignee: takmatsu (takeaki-matsumoto)
   Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1467719

Title:
  image-create returns wrong error

Status in Glance:
  In Progress
Status in Glance liberty series:
  In Progress

Bug description:
  When wrong credentials are set in glance-api.conf and
  ~/.glanceclient/image_schema.json does not exist, image-create returns
  "unrecognized arguments".

  ex)
  $ vim /etc/glance/glance-api.conf
     [keystone_authtoken]
     ...
     password = wrongpassword  # set wrong password
     ...
  $ sudo service glance-api restart
  $ rm ~/.glanceclient/image_schema.json
  $ export OS_IMAGE_API_VERSION=2
  $ wget -P /tmp/images http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
  $ glance image-create --name "cirros-0.3.4-x86_64" --file /tmp/images/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress

  usage: glance [--version] [-d] [-v] [--get-schema] [--timeout TIMEOUT]
    [--no-ssl-compression] [-f] [--os-image-url OS_IMAGE_URL]
    [--os-image-api-version OS_IMAGE_API_VERSION]
    [--profile HMAC_KEY] [-k] [--os-cert OS_CERT]
    [--cert-file OS_CERT] [--os-key OS_KEY] [--key-file OS_KEY]
    [--os-cacert ] [--ca-file OS_CACERT]
    [--os-username OS_USERNAME] [--os-user-id OS_USER_ID]
    [--os-user-domain-id OS_USER_DOMAIN_ID]
    [--os-user-domain-name OS_USER_DOMAIN_NAME]
    [--os-project-id OS_PROJECT_ID]
    [--os-project-name OS_PROJECT_NAME]
    [--os-project-domain-id OS_PROJECT_DOMAIN_ID]
    [--os-project-domain-name OS_PROJECT_DOMAIN_NAME]
    [--os-password OS_PASSWORD] [--os-tenant-id OS_TENANT_ID]
    [--os-tenant-name OS_TENANT_NAME] [--os-auth-url OS_AUTH_URL]
    [--os-region-name OS_REGION_NAME]
    [--os-auth-token OS_AUTH_TOKEN]
    [--os-service-type OS_SERVICE_TYPE]
    [--os-endpoint-type OS_ENDPOINT_TYPE]
     ...
  glance: error: unrecognized arguments: --name --disk-format qcow2 --container-format bare --visibility public

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1467719/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2015-09-29 Thread Pradeep Kumar Singh
** Also affects: barbican
   Importance: Undecided
   Status: New

** Changed in: barbican
 Assignee: (unassigned) => Pradeep Kumar Singh (pradeep-singh-u)

** Changed in: barbican
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Barbican:
  In Progress
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  In Progress
Status in Designate:
  In Progress
Status in Glance:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in Keystone:
  In Progress
Status in Manila:
  In Progress
Status in Mistral:
  In Progress
Status in murano:
  Confirmed
Status in OpenStack Compute (nova):
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  New
Status in python-mistralclient:
  New
Status in Python client library for Zaqar:
  In Progress
Status in Sahara:
  Fix Released
Status in zaqar:
  In Progress

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
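  For illustration, two hypothetical assertion lines; testtools/unittest
  print the first argument as the expected value, so swapping the
  arguments mislabels the values in the failure output:

    self.assertEqual(200, resp.status_code)  # correct: (expected, observed)
    self.assertEqual(resp.status_code, 200)  # wrong order: confusing failure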

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241577] Re: sqlalchemy.exc.OperationalError: (OperationalError) Cannot add a NOT NULL column with default value NULL

2015-09-29 Thread Ann Kamyshnikova
This problem is invalid for the master branch, and therefore also for
the juno, kilo and liberty branches.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1241577

Title:
  sqlalchemy.exc.OperationalError: (OperationalError) Cannot add a NOT
  NULL column with default value NULL

Status in neutron:
  Invalid
Status in neutron icehouse series:
  Fix Released

Bug description:
  As per this piuparts failure report:
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726719

  there's a problem with alembic migration with SQLite.

  INFO  [alembic.migration] Context impl SQLiteImpl.
  INFO  [alembic.migration] Will assume transactional DDL.
  INFO  [alembic.migration] Running upgrade None -> folsom
  INFO  [alembic.migration] Running upgrade folsom -> 2c4af419145b
  INFO  [alembic.migration] Running upgrade 2c4af419145b -> 5a875d0e5c
  INFO  [alembic.migration] Running upgrade 5a875d0e5c -> 48b6f43f7471
  INFO  [alembic.migration] Running upgrade 48b6f43f7471 -> 3cb5d900c5de
  INFO  [alembic.migration] Running upgrade 3cb5d900c5de -> 1d76643bcec4
  INFO  [alembic.migration] Running upgrade 1d76643bcec4 -> 2a6d0b51f4bb
  INFO  [alembic.migration] Running upgrade 2a6d0b51f4bb -> 1b693c095aa3
  INFO  [alembic.migration] Running upgrade 1b693c095aa3 -> 1149d7de0cfa
  INFO  [alembic.migration] Running upgrade 1149d7de0cfa -> 49332180ca96
  INFO  [alembic.migration] Running upgrade 49332180ca96 -> 38335592a0dc
  INFO  [alembic.migration] Running upgrade 38335592a0dc -> 54c2c487e913
  INFO  [alembic.migration] Running upgrade 54c2c487e913 -> 45680af419f9
  INFO  [alembic.migration] Running upgrade 45680af419f9 -> 1c33fa3cd1a1
  INFO  [alembic.migration] Running upgrade 1c33fa3cd1a1 -> 363468ac592c
  INFO  [alembic.migration] Running upgrade 363468ac592c -> 511471cc46b
  INFO  [alembic.migration] Running upgrade 511471cc46b -> 3b54bf9e29f7
  INFO  [alembic.migration] Running upgrade 3b54bf9e29f7 -> 4692d074d587
  INFO  [alembic.migration] Running upgrade 4692d074d587 -> 1341ed32cc1e
  INFO  [alembic.migration] Running upgrade 1341ed32cc1e -> grizzly
  INFO  [alembic.migration] Running upgrade grizzly -> f489cf14a79c
  INFO  [alembic.migration] Running upgrade f489cf14a79c -> 176a85fc7d79
  INFO  [alembic.migration] Running upgrade 176a85fc7d79 -> 32b517556ec9
  INFO  [alembic.migration] Running upgrade 32b517556ec9 -> 128e042a2b68
  Traceback (most recent call last):
    File "/usr/bin/neutron-db-manage", line 10, in 
      sys.exit(main())
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 143, in main
      CONF.command.func(config, CONF.command.name)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 80, in do_upgrade_downgrade
      do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 59, in do_alembic_command
      getattr(alembic_command, cmd)(config, *args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 124, in upgrade
      script.run_env()
    File "/usr/lib/python2.7/dist-packages/alembic/script.py", line 191, in run_env
      util.load_python_file(self.dir, 'env.py')
    File "/usr/lib/python2.7/dist-packages/alembic/util.py", line 186, in load_python_file
      module = imp.load_source(module_id, path, open(path, 'rb'))
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 105, in 
      run_migrations_online()
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 89, in run_migrations_online
      options=build_options())
    File "", line 7, in run_migrations
    File "/usr/lib/python2.7/dist-packages/alembic/environment.py", line 494, in run_migrations
      self.get_context().run_migrations(**kw)
    File "/usr/lib/python2.7/dist-packages/alembic/migration.py", line 211, in run_migrations
      change(**kw)
    File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/128e042a2b68_ext_gw_mode.py", line 55, in upgrade
      nullable=False, default=True))
    File "", line 7, in add_column
    File "/usr/lib/python2.7/dist-packages/alembic/operations.py", line 342, in add_column
      schema=schema
    File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 126, in add_column
      self._exec(base.AddColumn(table_name, column, schema=schema))
    File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 75, in _exec
      conn.execute(construct, *multiparams, **params)
    File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in execute
      params)
    File 
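  (The traceback above is truncated in the archive.) The usual workaround
  for this SQLite limitation, sketched here against the
  128e042a2b68_ext_gw_mode migration named in the traceback, is to give
  the column a server_default so existing rows can be filled at DDL time,
  instead of relying on a Python-side default:

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        # server_default populates existing rows during the ALTER, which
        # SQLite requires before it accepts a NOT NULL column
        op.add_column('routers', sa.Column(
            'enable_snat', sa.Boolean(), nullable=False,
            server_default=sa.sql.true()))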

[Yahoo-eng-team] [Bug 1485792] Re: Glance creates an image with an incorrect location

2015-09-29 Thread Flavio Percoco
** Project changed: glance => glance-store

** Also affects: glance-store/liberty
   Importance: High
 Assignee: Kairat Kushaev (kkushaev)
   Status: In Progress

** Also affects: glance-store/kilo
   Importance: Undecided
   Status: New

** Changed in: glance-store/kilo
   Importance: Undecided => High

** Changed in: glance-store/kilo
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1485792

Title:
  Glance creates an image with an incorrect location

Status in glance_store:
  In Progress
Status in glance_store kilo series:
  Triaged
Status in glance_store liberty series:
  In Progress

Bug description:
  I tried to upload images from a location, specified in an SCP-like way:

  # glance image-create --name LINUX-64 --is-public True --disk-format
  iso --container-format bare --progress --location
  http://:~/ubuntu-14.04.2-server-amd64.iso

  # glance image-create --name LINUX-64-2 --is-public True --disk-format
  iso --container-format bare --progress --copy-from
  http://:~/ubuntu-14.04.2-server-amd64.iso

  The Glance client accepted the wrong location, and as a result I got
  images in Glance with Active status and 0 size.
  The same behavior was noticed with aki and ari images.

  It is expected that the Glance client will prevent creation of an
  image from a malformed source.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance-store/+bug/1485792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1493576] Re: Incorrect usage of python-novaclient

2015-09-29 Thread Doug Shelley
** Also affects: trove
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1493576

Title:
  Incorrect usage of python-novaclient

Status in Cinder:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Trove:
  New

Bug description:
  All projects should use only `novaclient.client` as the entry point.
  It is designed with version checks and backward compatibility in mind.
  Direct import of a versioned client object (i.e. novaclient.v2.client)
  is a way to "shoot yourself in the foot".

  Python-novaclient's doc:
  http://docs.openstack.org/developer/python-novaclient/api.html
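  For illustration (credentials are placeholders and the exact
  constructor arguments vary between novaclient versions), the supported
  pattern versus the fragile one:

    from novaclient import client as nova_client

    # Supported: version negotiation and compatibility checks happen here.
    nova = nova_client.Client('2', username, password, project, auth_url)

    # Fragile: bypasses the versioned entry point entirely.
    # from novaclient.v2 import client as v2_client
    # nova = v2_client.Client(...)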

  Affected projects:
   - Horizon - 
https://github.com/openstack/horizon/blob/69d6d50ef4a26e2629643ed35ebd661e82e10586/openstack_dashboard/api/nova.py#L31

   - Manila -
  
https://github.com/openstack/manila/blob/473b46f6edc511deaba88b48392b62bfbb979787/manila/compute/nova.py#L23

   - Cinder-
  
https://github.com/openstack/cinder/blob/de64f5ad716676b7180365798efc3ea69a4fef0e/cinder/compute/nova.py#L23

   - Mistral -
  
https://github.com/openstack/mistral/blob/f42b7f5f5e4bcbce8db7e7340b4cac12de3eec4d/mistral/actions/openstack/actions.py#L23

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1493576/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500830] [NEW] setting COMPRESS_ENABLED = False and restarting Apache leads to every xstatic library being NOT FOUND

2015-09-29 Thread Thomas Goirand
Public bug reported:

Hi,

Trying to see if it is possible to debug Horizon in production, one of
my colleagues tried to disable compression. The result isn't nice at
all: setting COMPRESS_ENABLED = False and restarting Apache leads to
every xstatic library being NOT FOUND, and pages taking forever to
load.
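A hedged illustration of the knobs involved in local_settings.py
(assuming Horizon's standard django-compressor setup): with compression
disabled, the raw xstatic files must still be collected and served by
the webserver, e.g. via "./manage.py collectstatic", otherwise every
library 404s.

  COMPRESS_ENABLED = False
  COMPRESS_OFFLINE = False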

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500830

Title:
  setting COMPRESS_ENABLED = False and restarting Apache leads to every
  xstatic library being NOT FOUND

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi,

  Trying to see if it is possible to debug Horizon in production, one of
  my colleagues tried to disable compression. The result isn't nice at
  all: setting COMPRESS_ENABLED = False and restarting Apache leads to
  every xstatic library being NOT FOUND, and pages taking forever to
  load.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500830/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500812] [NEW] With pagination implemented for Project->Images FixedFilter needs to be applied serverside

2015-09-29 Thread Timur Sufiev
Public bug reported:

Once bug 1252649 is resolved, the existing OwnerFilter, which splits
images into 3 categories, won't work nicely with pagination, so we need
to apply this categorization server-side.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500812

Title:
  With pagination implemented for Project->Images FixedFilter needs to
  be applied serverside

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Once bug 1252649 is resolved, the existing OwnerFilter, which splits
  images into 3 categories, won't work nicely with pagination, so we
  need to apply this categorization server-side.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500812/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464361] Re: Support for multiple gateways in Neutron subnets

2015-09-29 Thread Kevin Benton
For now I don't think we are going to go forward with this because it's
a very invasive change and there isn't really strong demand for it.
Let's revisit this if necessary after the routed networks and other
model changes planned for mitaka, which are designed to help with large
operator use cases like this.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464361

Title:
  Support for multiple gateways in Neutron subnets

Status in neutron:
  Won't Fix

Bug description:
  Currently, subnets in Neutron only support one gateway. For provider
  networks in large data centers, quite often the architecture is such
  that multiple gateways are configured per subnet. These gateways are
  typically spread across backplanes so that production traffic can be
  load-balanced between backplanes.

  This is just my use case for supporting multiple gateways, but other
  folks might have more use cases as well.

  I want to open up a discussion on this topic and figure out the best
  way to handle this. Should this be done in the same way as
  dns-nameservers, with a separate table with two columns (gateway_ip,
  subnet_id)?
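  A minimal sketch of such a model (hypothetical, patterned after
  neutron's DNSNameServer table; this was never merged):

    import sqlalchemy as sa
    from neutron.db import model_base

    class SubnetGateway(model_base.BASEV2):
        """One row per additional gateway of a subnet (hypothetical)."""
        __tablename__ = 'subnet_gateways'
        gateway_ip = sa.Column(sa.String(64), primary_key=True)
        subnet_id = sa.Column(sa.String(36),
                              sa.ForeignKey('subnets.id',
                                            ondelete='CASCADE'),
                              primary_key=True)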

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1390421] Re: /nova/api/openstack/compute/servers.py:773:1: C901 'Controller.create' is too complex (46)

2015-09-29 Thread Markus Zoeller (markus_z)
As Joel wrote in comment #4, the comments in review [1] tend to see this
bug as invalid.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1390421

Title:
  /nova/api/openstack/compute/servers.py:773:1: C901 'Controller.create'
  is too complex (46)

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Noticed while reviewing this:

  https://review.openstack.org/#/c/131859/

  Marking as low-hanging-fruit since it just requires some refactoring
  (break out some of the guts into private methods).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1390421/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500896] [NEW] It's not necessary to pass context as kwarg to oslo.log in most cases

2015-09-29 Thread Matt Riedemann
Public bug reported:

Nova is using oslo.context's RequestContext which means the context
object is in scope when doing logging using oslo.log:

http://docs.openstack.org/developer/oslo.log/usage.html#passing-context

But there are a lot of places in nova where we do something like:

context = context.elevated()
LOG.info(_LI("Rebooting instance"), context=context, instance=instance)

This is confusing because it makes you wonder (1) whether you should be
passing context to the logging method and (2) whether it's OK to pass
the elevated context in this case or if you should be passing the
original context.

It turns out that in this case neither is necessary.  The elevated
context just has the admin flag set, the request / user / project IDs in
the context are left unchanged, which is what we want for logging.  And
the context is already in scope because of:

http://git.openstack.org/cgit/openstack/oslo.context/tree/oslo_context/context.py#n71

So we don't need to pass it as a kwarg.
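For example, a hedged sketch of what the scrub would leave behind (the
method is hypothetical; the elevated context is still picked up from
thread-local state):

  from oslo_log import log as logging
  from nova.i18n import _LI

  LOG = logging.getLogger(__name__)

  def reboot_instance(self, context, instance):
      context = context.elevated()  # only sets the admin flag
      # No context kwarg: oslo.log finds the in-scope RequestContext.
      LOG.info(_LI("Rebooting instance"), instance=instance)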

This bug is meant to scrub through nova and remove any unnecessary
passing of the context object to oslo.log methods.

** Affects: nova
 Importance: Low
 Status: Triaged


** Tags: logging low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500896

Title:
  It's not necessary to pass context as kwarg to oslo.log in most cases

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  Nova is using oslo.context's RequestContext which means the context
  object is in scope when doing logging using oslo.log:

  http://docs.openstack.org/developer/oslo.log/usage.html#passing-context

  But there are a lot of places in nova where we do something like:

  context = context.elevated()
  LOG.info(_LI("Rebooting instance"), context=context, instance=instance)

  This is confusing because it makes you wonder (1) whether you should
  be passing context to the logging method and (2) whether it's OK to
  pass the elevated context in this case or if you should be passing
  the original context.

  It turns out that in this case neither is necessary.  The elevated
  context just has the admin flag set, the request / user / project IDs
  in the context are left unchanged, which is what we want for logging.
  And the context is already in scope because of:

  http://git.openstack.org/cgit/openstack/oslo.context/tree/oslo_context/context.py#n71

  So we don't need to pass it as a kwarg.

  This bug is meant to scrub through nova and remove any unnecessary
  passing of the context object to oslo.log methods.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500896/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500920] [NEW] SameHostFilter should fail if no instances on host

2015-09-29 Thread Alvaro Lopez
Public bug reported:

According to the docs, the SameHostFilter "schedules the instance on the
same host as another instance in a set of instances", so it should only
pass if the host is executing any of the instances passed as the
scheduler hint. However, the filter also passes if the host does not
have any instances.
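A hedged sketch of the intended semantics (attribute and hint names are
approximate, not nova's actual filter code):

  def host_passes(host_state, filter_properties):
      """Pass only if the host runs at least one hinted instance."""
      hints = filter_properties.get('scheduler_hints') or {}
      hinted = set(hints.get('same_host', []))
      on_host = set(host_state.instances)  # assumed set of instance UUIDs
      return bool(hinted & on_host)        # host with no instances -> fail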

** Affects: nova
 Importance: Undecided
 Assignee: Alvaro Lopez (aloga)
 Status: In Progress


** Tags: scheduler

** Tags added: scheduler

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Alvaro Lopez (aloga)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500920

Title:
  SameHostFilter should fail if no instances on host

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  According to the docs, the SameHostFilter "schedules the instance on
  the same host as another instance in a set of instances", so it should
  only pass if the host is executing any of the instances passed as the
  scheduler hint. However, the filter also passes if the host does not
  have any instances.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500920/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500962] [NEW] Heat Stacks BatchActions missing icons

2015-09-29 Thread Cindy Lu
Public bug reported:

Preview, Check, Suspend, and Resume actions are missing icons.

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500962

Title:
  Heat Stacks BatchActions missing icons

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Preview, Check, Suspend, and Resume actions are missing icons.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500962/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1457034] Re: BGPVPN extension

2015-09-29 Thread Kyle Mestery
I believe the external networking BGP project covers this. If not,
please re-open.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1457034

Title:
  BGPVPN extension

Status in neutron:
  Invalid

Bug description:
  We propose to extend the neutron API to allow tenants to stretch their
  BGP-based IP VPNs to their OpenStack projects.

  This extension would allow creating BGPVPN connection objects based
  on route target information. A BGPVPN object would be assigned to a
  tenant. The tenant could then easily stretch its IP VPNs to its
  OpenStack project by attaching networks to the corresponding BGPVPN
  connection.

  This RFE is related to the BGPVPN spec.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1457034/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500960] [NEW] Decouple FwaaS from L3 Agent

2015-09-29 Thread Sean M. Collins
Public bug reported:

The FwaaS code is tightly coupled to the L3 agent, which is a concern
because in CI we need to eliminate circular dependencies.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500960

Title:
  Decouple FwaaS from L3 Agent

Status in neutron:
  New

Bug description:
  The FwaaS code is tightly coupled to the L3 agent, which is a concern
  because in CI we need to eliminate circular dependencies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500960/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499856] Re: latest doa breaks with new db layout

2015-09-29 Thread David Lyle
** Also affects: horizon
   Importance: Undecided
   Status: New

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
   Importance: High => Critical

** Tags added: liberty-rc-potential

** Tags removed: liberty-rc-potential
** Tags added: liberty-rc2-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1499856

Title:
  latest doa breaks with new db layout

Status in django-openstack-auth:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When upgrading to new horizon and doa, a MySQL-backed session engine
  sees this error:

  ERRORS:
  openstack_auth.User.keystone_user_id: (mysql.E001) MySQL does not allow 
unique CharFields to have a max_length > 255.
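  A hedged sketch of the model change the error calls for, capping the
  field at MySQL's unique-index limit:

    from django.db import models

    # sketch: cap the unique field at MySQL's 255-char index limit
    keystone_user_id = models.CharField(max_length=255, unique=True)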

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1499856/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1119119] Re: firewall driver uses abstracemethod without metaclass set

2015-09-29 Thread Cedric Brandily
That's the opposite: FirewallDriver is an abstract class using the
pattern:

  def method(...):
      raise NotImplementedError

instead of:

  @abstractmethod
  def method(...):
      pass
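For contrast, a minimal sketch of the abstractmethod pattern (method
name borrowed from FirewallDriver; six.add_metaclass assumed for
py2/py3 compatibility):

  import abc

  import six

  @six.add_metaclass(abc.ABCMeta)
  class FirewallDriver(object):

      @abc.abstractmethod
      def prepare_port_filter(self, port):
          """Subclasses must implement; instantiating without it fails."""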

** Changed in: neutron
   Status: Won't Fix => Confirmed

** Summary changed:

- firewall driver uses abstracemethod without metaclass set
+ firewall driver is an abstract class not using abstractmethod

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1119119

Title:
  firewall driver is an abstract class not using abstractmethod

Status in neutron:
  Confirmed

Bug description:
  FirewallDriver uses the @abstractmethod decorator, but it doesn't set
  its metaclass to ABCMeta.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1119119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501091] [NEW] Launch Instance Model file is incorrectly named

2015-09-29 Thread Rajat Vig
Public bug reported:

launch-instance-model.js should be renamed to launch-instance-
model.service.js.

** Affects: horizon
 Importance: Undecided
 Assignee: Rajat Vig (rajatv)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501091

Title:
  Launch Instance Model file is incorrectly named

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  launch-instance-model.js should be renamed to launch-instance-
  model.service.js.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 fails

2015-09-29 Thread Craig Magina
Cloud-init is getting the wrong time because of this error:

[   14.726283] hctosys: unable to open rtc device (rtc0)

What this means is that the RTC_DRV_XGENE kernel config option was
changed to 'm'; it needs to be built in for the device to be available
for hctosys.
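In kernel config terms (hedged illustration of the change described
above):

  CONFIG_RTC_DRV_XGENE=y    # built-in: rtc0 exists when hctosys runs
  # CONFIG_RTC_DRV_XGENE=m  # module: loads too late for hctosys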

** Also affects: wily (Ubuntu)
   Importance: Undecided
   Status: New

** Package changed: wily (Ubuntu) => linux (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 fails

Status in cloud-init:
  New
Status in curtin:
  New
Status in MAAS:
  New
Status in linux package in Ubuntu:
  New

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501090] [NEW] OVSDB wait_for_change waits for a change that has already happened

2015-09-29 Thread Terry Wilson
Public bug reported:

The idlutils wait_for_change() function calls idl.run(), but doesn't
check whether that call caused a change before calling poller.block.
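A hedged sketch of the fix, simplified from idlutils (the ovs poller
API is assumed): check whether idl.run() or the seqno already changed
before blocking:

  import time

  from ovs import poller

  def wait_for_change(idl, timeout, seqno=None):
      if seqno is None:
          seqno = idl.change_seqno
      stop = time.time() + timeout
      # idl.run() may itself apply the change; only block if nothing moved.
      while idl.change_seqno == seqno and not idl.run():
          ovs_poller = poller.Poller()
          idl.wait(ovs_poller)
          ovs_poller.timer_wait(timeout * 1000)
          ovs_poller.block()
          if time.time() > stop:
              raise RuntimeError('OVSDB change not seen before timeout')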

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501090

Title:
  OVSDB wait_for_change waits for a change that has already happened

Status in neutron:
  In Progress

Bug description:
  The idlutils wait_for_change() function calls idl.run(), but doesn't
  check to see if it caused a change before calling poller.block.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501090/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500688] [NEW] VNC URL of instance unavailable in CLI

2015-09-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

I use a heat template to build an autoscaling group with
'OS::Heat::AutoScalingGroup' and 'OS::Nova::Server', and it works fine.
I can see the instance running both via the CLI and the dashboard.
However, I can only get to the console through the dashboard. When
using the command 'nova get-vnc-console instance_ID novnc', I got an
error, 'ERROR (NotFound): The resource could not be found. (HTTP 404)
(Request-ID: req-6f260624-56ad-45fd-aa21-f86fb2c541d1)', instead of its
URL.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
VNC URL of instance unavailable in CLI
https://bugs.launchpad.net/bugs/1500688
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-09-29 Thread Tim Gardner
** Summary changed:

- maas wily deployment to HP Proliant m400 fails
+ maas wily deployment to HP Proliant m400 arm64 server cartridge fails

** Changed in: linux (Ubuntu)
 Assignee: (unassigned) => Tim Gardner (timg-tpi)

** Also affects: linux (Ubuntu Wily)
   Importance: Undecided
 Assignee: Tim Gardner (timg-tpi)
   Status: Incomplete

** Also affects: linux (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Changed in: linux (Ubuntu Vivid)
   Status: New => In Progress

** Changed in: linux (Ubuntu Vivid)
 Assignee: (unassigned) => Tim Gardner (timg-tpi)

** Changed in: linux (Ubuntu Wily)
   Status: Incomplete => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Confirmed
Status in curtin:
  New
Status in MAAS:
  New
Status in cloud-init package in Ubuntu:
  New
Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in cloud-init source package in Wily:
  New
Status in linux source package in Wily:
  In Progress

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1037562] Re: Different secgroups can't be applied to different interfaces

2015-09-29 Thread Cedric Brandily
This bug concerns at most nova, as neutron is able to handle a secgroup
per port.

Moreover, nova is able to boot a VM using existing neutron ports (with
specific secgroups), so this bug seems invalid for nova as well.
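For example (era CLI, IDs are placeholders), per-interface secgroups
can already be expressed by pre-creating the ports:

  $ neutron port-create INTERNAL_NET --security-group SG_INTERNAL
  $ neutron port-create EXTERNAL_NET --security-group SG_EXTERNAL
  $ nova boot --image IMAGE --flavor m1.small \
      --nic port-id=INTERNAL_PORT_ID --nic port-id=EXTERNAL_PORT_ID my-vm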

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1037562

Title:
  Different secgroups can't be applied to different interfaces

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  With the coming of quantum it's quite reasonable to start an instance
  attached to multiple network segments with different purposes.  But
  there's only one security group for the whole machine, so I can't
  define different firewalling for the internal and external interfaces
  of a VM, for example.

  secgroups should apply to interfaces, not machines as a whole.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1037562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500688] Re: VNC URL of instance unavailable in CLI

2015-09-29 Thread Maru Newby
** Project changed: heat => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1500688

Title:
  VNC URL of instance unavailable in CLI

Status in OpenStack Compute (nova):
  New

Bug description:
  I use a heat template to build an autoscaling group with
  'OS::Heat::AutoScalingGroup' and 'OS::Nova::Server', and it works
  fine. I can see the instance running both via the CLI and the
  dashboard. However, I can only get to the console through the
  dashboard. When using the command 'nova get-vnc-console instance_ID
  novnc', I got an error, 'ERROR (NotFound): The resource could not be
  found. (HTTP 404) (Request-ID: req-6f260624-56ad-45fd-aa21-f86fb2c541d1)',
  instead of its URL.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1500688/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501046] Re: v2 API: Not possible to "retire" disk formats

2015-09-29 Thread Kairat Kushaev
So this behavior happens when glanceclient has already received the image
information. As a workaround, you can at least get the info through the
API directly. The root cause is in the schema validation performed after
receiving this info. Need to think more about this case.

** Project changed: glance => python-glanceclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1501046

Title:
  v2 API: Not possible to "retire" disk formats

Status in python-glanceclient:
  New

Bug description:
  Use case:

  We tried to remove QCOW2 from the list of supported disk_formats in
  some of our deployments, because the conversion overhead of using them
  was making for a sub-wonderful user experience.

  After removing "qcow2" from the disk_formats parameter, the Glance v2
  API started returning 404 errors on extant QCOW2 images.  This means
  that we effectively cannot "retire" this disk format and block new
  image uploads, since Glance will immediately disavow all knowledge of
  any QCOW2 images in its purview.

  Steps to Reproduce:

  1.  Stand up a devstack
  2.  Upload a QCOW2 image
  3.  glance --os-image-api-version 2 image-show IMAGE_ID
  4.  Reconfigure glance-api to only allow e.g., "raw" disk_formats:
  [image_format]
  disk_formats = raw,ami,ari,aki,iso
  5.  Restart glance-api
  6. glance --os-image-api-version 2 image-show IMAGE_ID

  Expected results:

  The "glance image-show" command works both times

  Actual results:

  The second "glance image-show" command fails.

  Errata:

  The "glance image-show" using the v1 API does work as expected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/python-glanceclient/+bug/1501046/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501080] [NEW] VMwareVMOpsTestCase.test_get_datacenter_ref_and_name can fail with KeyError

2015-09-29 Thread Matt Riedemann
Public bug reported:

This is actually being seen on a very old version of nova but still
applies to the code on master:

FAIL: nova.tests.virt.vmwareapi.test_vmwareapi_vmops.VMwareVMOpsTestCase.test_get_datacenter_ref_and_name_with_no_datastore
tags: worker-0
--
Traceback (most recent call last):
  File "nova/tests/virt/vmwareapi/test_vmwareapi_vmops.py", line 172, in test_get_datacenter_ref_and_name_with_no_datastore
    self._test_get_datacenter_ref_and_name()
  File "nova/tests/virt/vmwareapi/test_vmwareapi_vmops.py", line 153, in _test_get_datacenter_ref_and_name
    dc_info = _vcvmops.get_datacenter_ref_and_name(instance_ds_ref)
  File "nova/virt/vmwareapi/vmops.py", line 1704, in get_datacenter_ref_and_name
    "Datacenter", ["name", "datastore", "vmFolder"])
  File "/usr/local/lib/python2.7/dist-packages/mock.py", line 955, in __call__
    return _mock_self._mock_call(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1018, in _mock_call
    ret_val = effect(*args, **kwargs)
  File "nova/tests/virt/vmwareapi/test_vmwareapi_vmops.py", line 139, in fake_call_method
    ds_ref=ds_ref))
  File "nova/virt/vmwareapi/fake.py", line 692, in __init__
    create_network()
  File "nova/virt/vmwareapi/fake.py", line 750, in create_network
    _create_object('Network', network)
  File "nova/virt/vmwareapi/fake.py", line 84, in _create_object
    _db_content[table][table_obj.obj] = table_obj
KeyError: 'Network'


We should make sure that _db_content has a default value in it before we
try putting entries into it:

https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/vmwareapi/fake.py#L77
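A hedged one-line sketch of such a guard, mirroring the _create_object
shown in the traceback:

  def _create_object(table, table_obj):
      # setdefault creates the table dict on first use, avoiding KeyError
      _db_content.setdefault(table, {})[table_obj.obj] = table_obj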

** Affects: nova
 Importance: Low
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: testing vmware

** Changed in: nova
   Status: New => Triaged

** Changed in: nova
   Importance: Undecided => Low

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501080

Title:
  VMwareVMOpsTestCase.test_get_datacenter_ref_and_name can fail with
  KeyError

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  This is actually being seen on a very old version of nova but still
  applies to the code on master:

  FAIL: nova.tests.virt.vmwareapi.test_vmwareapi_vmops.VMwareVMOpsTestCase.test_get_datacenter_ref_and_name_with_no_datastore
  tags: worker-0
  --
  Traceback (most recent call last):
    File "nova/tests/virt/vmwareapi/test_vmwareapi_vmops.py", line 172, in test_get_datacenter_ref_and_name_with_no_datastore
      self._test_get_datacenter_ref_and_name()
    File "nova/tests/virt/vmwareapi/test_vmwareapi_vmops.py", line 153, in _test_get_datacenter_ref_and_name
      dc_info = _vcvmops.get_datacenter_ref_and_name(instance_ds_ref)
    File "nova/virt/vmwareapi/vmops.py", line 1704, in get_datacenter_ref_and_name
      "Datacenter", ["name", "datastore", "vmFolder"])
    File "/usr/local/lib/python2.7/dist-packages/mock.py", line 955, in __call__
      return _mock_self._mock_call(*args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/mock.py", line 1018, in _mock_call
      ret_val = effect(*args, **kwargs)
    File "nova/tests/virt/vmwareapi/test_vmwareapi_vmops.py", line 139, in fake_call_method
      ds_ref=ds_ref))
    File "nova/virt/vmwareapi/fake.py", line 692, in __init__
      create_network()
    File "nova/virt/vmwareapi/fake.py", line 750, in create_network
      _create_object('Network', network)
    File "nova/virt/vmwareapi/fake.py", line 84, in _create_object
      _db_content[table][table_obj.obj] = table_obj
  KeyError: 'Network'

  
  We should make sure that _db_content has a default value in it before
  we try putting entries into it:

  https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/vmwareapi/fake.py#L77

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1501080/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1119119] Re: firewall driver uses abstracemethod without metaclass set

2015-09-29 Thread Cedric Brandily
** Changed in: neutron
 Assignee: Isaku Yamahata (yamahata) => (unassigned)

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1119119

Title:
  firewall driver is an abstract class not using abstractmethod

Status in neutron:
  Confirmed

Bug description:
  FirewallDriver uses the @abstractmethod decorator, but it doesn't set
  its metaclass to ABCMeta.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1119119/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-09-29 Thread Scott Moser
Cloud-init changes caused issues when "fixing" the timestamp for oauth on
systems with a bad clock.
The change here is to fix that. I'll upload later.

** Patch added: "cloud-init fix diff"
   
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1499869/+attachment/4479305/+files/out.diff

** Changed in: cloud-init
   Importance: Undecided => High

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
 Assignee: (unassigned) => Scott Moser (smoser)

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: cloud-init (Ubuntu Vivid)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Confirmed
Status in curtin:
  New
Status in MAAS:
  New
Status in cloud-init package in Ubuntu:
  New
Status in linux package in Ubuntu:
  In Progress
Status in linux source package in Vivid:
  In Progress
Status in cloud-init source package in Wily:
  New
Status in linux source package in Wily:
  In Progress

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1282374] Re: neutron tests require pysqlite

2015-09-29 Thread Cedric Brandily
** Changed in: neutron
 Assignee: YAMAMOTO Takashi (yamamoto) => (unassigned)

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1282374

Title:
  neutron tests require pysqlite

Status in neutron:
  Invalid

Bug description:
  neutron's test-requirements.txt lacks pysqlite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1282374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501086] [NEW] ARP entries dropped by DVR routers when the qr device is not ready or present

2015-09-29 Thread Swaminathan Vasudevan
Public bug reported:

The ARP entries are dropped by DVR routers when the 'qr' device does not
exist in the namespace.

There are two ways ARP entries are updated in the L3 agent.
First, when an internal csnat port is created, ARP entries are added
from the 'dvr_local_router' by calling "set_subnet_arp_info", which in
turn calls "_update_arp_entry".

Second, when an ARP update "rpc" message comes from the server to the
agent as "add_arp_entry" or "delete_arp_entry", which in turn calls
"_update_arp_entry".

We have seen log traces showing that the ARP update message can arrive
before the "qr" device is ready, so we end up dropping those ARP
messages.

We need to cache those ARP messages and update the router namespace
when the "qr" device is ready.

As the message below shows, we check for the device and log a warning
that it is not ready, but the ARP entries are not saved anywhere; they
are dropped.

2015-09-24 18:45:30.150 WARNING neutron.agent.l3.dvr_local_router [req-0565ce3a-905d-43fa-a6f3-1a07df6c6c2b None None] Arp operation add failed for device qr-b672ffde-cd, since the device does not exist anymore. The device might have been concurrently deleted or not created yet.

As you can see here, the internal network 'qr' device is added later:

2015-09-24 18:45:30.367 DEBUG neutron.agent.l3.router_info [req-7e5722e4-5fef-4889-9372-8cf1218522a2 None None] adding internal network: prefix(qr-), port(b672ffde-cd80-49eb-9817-58436fa8e8fd) _internal_network_added /opt/stack/new/neutron/neutron/agent/l3/router_info.py:300
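A hedged sketch of the caching idea (helper names hypothetical; this is
not the merged fix):

  import collections

  class PendingArpCache(object):
      """Queue ARP updates that arrive before the qr device exists."""

      def __init__(self):
          self._pending = collections.defaultdict(list)

      def add_or_defer(self, subnet_id, entry, device_ready, apply_fn):
          if not device_ready:
              self._pending[subnet_id].append(entry)
          else:
              apply_fn(entry)

      def replay(self, subnet_id, apply_fn):
          # Called from the path that adds the internal 'qr' device.
          for entry in self._pending.pop(subnet_id, []):
              apply_fn(entry)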

** Affects: neutron
 Importance: Undecided
 Assignee: Swaminathan Vasudevan (swaminathan-vasudevan)
 Status: In Progress


** Tags: l3-dvr-backlog

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Swaminathan Vasudevan (swaminathan-vasudevan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501086

Title:
  ARP entries dropped by DVR routers when the qr device is not ready or
  present

Status in neutron:
  In Progress

Bug description:
  The ARP entries are dropped by DVR routers when the 'qr' device does
  not exist in the namespace.

  There are two ways the ARP entries are updated in the L3 agent.
  First, when an internal csnat port is created, ARP entries are added
  from the 'dvr_local_router' by calling "set_subnet_arp_info", which in
  turn calls "_update_arp_entry".

  Second, when an ARP update RPC message comes from the server to the
  agent as "add_arp_entry" or "delete_arp_entry", it likewise ends up
  calling "_update_arp_entry".

  We have seen log traces showing that the ARP update message can arrive
  before the "qr" device is ready, so those ARP messages are dropped.

  We need to cache those ARP messages and apply them in the router
  namespace once the "qr" device is ready.

  As the message below shows, we check for the device and log a warning
  that it is not ready, but the ARP entries are not saved anywhere; they
  are dropped.

  2015-09-24 18:45:30.150 WARNING neutron.agent.l3.dvr_local_router
  [req-0565ce3a-905d-43fa-a6f3-1a07df6c6c2b None None] Arp operation add
  failed for device qr-b672ffde-cd, since the device does not exist
  anymore. The device might have been concurrently deleted or not
  created yet.

  As you can see here, the internal network 'qr' device is added later.

  2015-09-24 18:45:30.367 DEBUG neutron.agent.l3.router_info [req-
  7e5722e4-5fef-4889-9372-8cf1218522a2 None None] adding internal
  network: prefix(qr-), port(b672ffde-cd80-49eb-9817-58436fa8e8fd)
  _internal_network_added
  /opt/stack/new/neutron/neutron/agent/l3/router_info.py:300

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1179713] Re: too many subnet-create cause q-dhcp failure

2015-09-29 Thread Armando Migliaccio
Well, the bug is valid: we should make sure that after
max_fixed_ips_per_port or quota_subnet subnets (whichever is smaller),
the server tells you that you have run out of subnets you can create.
Quota enforcement already works, but we don't validate the number of
subnets against max_fixed_ips_per_port.
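
A rough sketch of that validation, with illustrative names (not the
actual plugin code):

  # Sketch: refuse to create a subnet once the network already has
  # min(max_fixed_ips_per_port, quota_subnet) subnets, since the DHCP
  # port cannot carry more fixed IPs than that.
  def check_subnet_count(current_subnet_count, max_fixed_ips_per_port,
                         quota_subnet):
      limit = min(max_fixed_ips_per_port, quota_subnet)
      if current_subnet_count >= limit:
          raise ValueError(
              "cannot create more than %d subnets on this network; the "
              "DHCP port would exceed max_fixed_ips_per_port" % limit)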

** No longer affects: python-neutronclient

** Changed in: neutron
 Assignee: jagan kumar kotipatruni (jagankumar-k) => (unassigned)

** Tags added: low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1179713

Title:
  too many subnet-create cause q-dhcp failure

Status in neutron:
  In Progress

Bug description:
  I create several subnets in my 'private' network:

  quantum subnet-create private  CIDR_{i} --verbose

  the client keeps returning success. However, after the fifth attempt,
  I see my dhcp agent spitting out nasty errors, like the one below:

  2013-05-13 15:29:46    ERROR [quantum.agent.dhcp_agent] Unable to restart dhcp.
  Traceback (most recent call last):
    File "/opt/stack/quantum/quantum/agent/dhcp_agent.py", line 130, in call_driver
      getattr(driver, action)()
    File "/opt/stack/quantum/quantum/agent/linux/dhcp.py", line 87, in restart
      self.enable()
    File "/opt/stack/quantum/quantum/agent/linux/dhcp.py", line 123, in enable
      reuse_existing=True)
    File "/opt/stack/quantum/quantum/agent/dhcp_agent.py", line 530, in setup
      port = self.plugin.get_dhcp_port(network.id, device_id)
    File "/opt/stack/quantum/quantum/agent/dhcp_agent.py", line 379, in get_dhcp_port
      topic=self.topic))
    File "/opt/stack/quantum/quantum/openstack/common/rpc/proxy.py", line 86, in call
      return rpc.call(context, real_topic, msg, timeout)
    File "/opt/stack/quantum/quantum/openstack/common/rpc/__init__.py", line 140, in call
      return _get_impl().call(CONF, context, topic, msg, timeout)
    File "/opt/stack/quantum/quantum/openstack/common/rpc/impl_kombu.py", line 798, in call
      rpc_amqp.get_connection_pool(conf, Connection))
    File "/opt/stack/quantum/quantum/openstack/common/rpc/amqp.py", line 615, in call
      rv = list(rv)
    File "/opt/stack/quantum/quantum/openstack/common/rpc/amqp.py", line 564, in __iter__
      raise result
  RemoteError: Remote error: InvalidInput Invalid input for operation: Exceeded maximim amount of fixed ips per port.

  This happens with the ovs plugin, on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1179713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-09-29 Thread Andres Rodriguez
** No longer affects: maas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Confirmed
Status in curtin:
  New
Status in cloud-init package in Ubuntu:
  New
Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Vivid:
  In Progress
Status in cloud-init source package in Wily:
  New
Status in linux source package in Wily:
  Fix Committed

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: 
Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
[2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', 
port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id 
(Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - 
DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - 
url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501118] [NEW] Delete Security Groups action displays two messages in German

2015-09-29 Thread Yuko Katabami
Public bug reported:

Project > Compute > Access and Security > Delete Security Groups

When deleting a security group, the English UI shows a pop-up saying that
the security group was successfully deleted; in the German version,
however, TWO notes pop up: "Security Group successfully deleted" AND
"Error, Security Group cannot be deleted" - but it is deleted.

Screenshot attached.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "HorizonBug2.png"
   
https://bugs.launchpad.net/bugs/1501118/+attachment/4479455/+files/HorizonBug2.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501118

Title:
  Delete Security Groups action displays two messages in German

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Project > Compute > Access and Security > Delete Security Groups

  When deleting a security group, the English UI shows a pop-up saying
  that the security group was successfully deleted; in the German
  version, however, TWO notes pop up: "Security Group successfully
  deleted" AND "Error, Security Group cannot be deleted" - but it is
  deleted.

  Screenshot attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501150] [NEW] Reorganize and improve L3 agent functional tests

2015-09-29 Thread Assaf Muller
Public bug reported:

This bug is to track the following work:
1) neutron/tests/functional/agent/test_l3_agent is enormous. When I created 
that file it was 300 lines of code. It's now nearly 1,500 lines of code. It's 
very difficult to find what you're looking for. I propose splitting it up so 
that the common helper functions and base class is in 
neutron/tests/functional/agent/l3/framework. The tests themselves are then to 
be split up to the following four modules: legacy, HA, metadata_proxy and DVR. 
It would also be an opportunity to make further cosmetic clean ups, finding 
common code and extracting it out to the framework class.

2) The tests focus on the creation of a router with complete data: A
router with internal interfaces, an external interface, floating IPs,
extra routes and so on. The existing 'lifecycle' style test is useful:
Create a router, make assertions, delete it, make some more assertions.
However, I'd like to see improved coverage for update operations, for
all three router types (Legacy, HA, DVR): Create a router without
interfaces or floating IPs, add an internal interface, make assertions.
Add an external gateway, make assertions, and so on. The existing
coverage essentially covers the case of an existing router, and
restarting an agent so that a complete router is built. The latter (and
missing) coverage is for the case of a new router being created, with API
calls coming in to gradually attach the router to existing networks and
floating IPs. Both are important cases to cover, and they exercise
different code paths at times (see the sketch after this list).

3) Are there L3 agent or router unit tests that are superseded entirely
and could be deleted?
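
As a rough illustration of the incremental coverage proposed in (2) -
helper names below mirror the style of the existing functional framework
but are hypothetical:

  def test_legacy_router_gradual_attachment(self):
      # Start from a bare router: no interfaces, no gateway, no FIPs.
      router = self.manage_router(self.agent, self.generate_router_info(
          enable_ha=False, num_internal_ports=0))
      self.assertFalse(router.internal_ports)

      # Attach pieces one API call at a time, asserting after each step.
      self._add_internal_interface(router, cidr='10.0.0.0/24')
      self.assertTrue(self._namespace_has_device(router, prefix='qr-'))

      self._add_external_gateway(router)
      self.assertTrue(self._namespace_has_device(router, prefix='qg-'))

      self._add_floating_ip(router, '192.168.10.5')
      self.assertIn('192.168.10.5', self._configured_floating_ips(router))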

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: functional-tests l3-dvr-backlog l3-ha l3-ipam-dhcp

** Tags added: l3-dvr-backlog l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501150

Title:
  Reorganize and improve L3 agent functional tests

Status in neutron:
  New

Bug description:
  This bug is to track the following work:
  1) neutron/tests/functional/agent/test_l3_agent is enormous. When I
  created that file it was 300 lines of code. It's now nearly 1,500 lines
  of code, and it's very difficult to find what you're looking for. I
  propose splitting it up so that the common helper functions and base
  class are in neutron/tests/functional/agent/l3/framework. The tests
  themselves are then to be split into the following four modules: legacy,
  HA, metadata_proxy and DVR. It would also be an opportunity to make
  further cosmetic cleanups, finding common code and extracting it out to
  the framework class.

  2) The tests focus on the creation of a router with complete data: A
  router with internal interfaces, an external interface, floating IPs,
  extra routes and so on. The existing 'lifecycle' style test is useful:
  Create a router, make assertions, delete it, make some more
  assertions. However, I'd like to see improved coverage for update
  operations, for all three router types (Legacy, HA, DVR): Create a
  router without interfaces or floating IPs, add an internal interface,
  make assertions. Add an external gateway, make assertions, and so on.
  The existing coverage essentially covers the case of an existing
  router, and restarting an agent so that a complete router is built.
  The latter (and missing) coverage is for the case of a new router
  being created, with API calls coming in to gradually attach the router
  to existing networks and floating IPs. Both are important cases to
  cover, and they exercise different code paths at times.

  3) Are there L3 agent or router unit tests that are superseded
  entirely and could be deleted?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501150/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1499869] Re: maas wily deployment to HP Proliant m400 arm64 server cartridge fails

2015-09-29 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init - 0.7.7~bzr1147-0ubuntu1

---
cloud-init (0.7.7~bzr1147-0ubuntu1) wily; urgency=medium

  * New upstream snapshot.
    * MAAS: fix oauth when system clock is bad (LP: #1499869)

 -- Scott Moser   Tue, 29 Sep 2015 20:16:57 -0400
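
For context, the fix is along these lines: derive the OAuth timestamp
from the skew between the MAAS server's Date header and the local clock.
A simplified sketch, not cloud-init's actual code:

  import email.utils
  import time

  _skew = 0

  def record_skew(date_header):
      # date_header is the HTTP 'Date' value from a rejected response.
      global _skew
      server_time = time.mktime(email.utils.parsedate(date_header))
      _skew = int(server_time - time.time())

  def oauth_timestamp():
      # Offset the (possibly wrong) local clock by the observed skew.
      return int(time.time()) + _skew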

** Changed in: cloud-init (Ubuntu Wily)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1499869

Title:
  maas wily deployment to HP Proliant m400 arm64 server cartridge fails

Status in cloud-init:
  Confirmed
Status in curtin:
  New
Status in cloud-init package in Ubuntu:
  Fix Released
Status in linux package in Ubuntu:
  Fix Committed
Status in linux source package in Vivid:
  In Progress
Status in cloud-init source package in Wily:
  Fix Released
Status in linux source package in Wily:
  Fix Committed

Bug description:
  This is the error seen on the console:

  [   64.149080] cloud-init[834]: 2015-08-27 15:03:29,289 - util.py[WARNING]: 
Failed fetching metadata from url http://10.229.32.21/MAAS/metadata/curtin
  [  124.513212] cloud-init[834]: 2015-09-24 17:23:10,006 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed 
[2427570/120s]: request error [HTTPConnectionPool(host='169.254.169.254', 
port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id 
(Caused by 
ConnectTimeoutError(, 'Connection to 169.254.169.254 timed out. (connect 
timeout=50.0)'))]
  [  124.515570] cloud-init[834]: 2015-09-24 17:23:10,007 - 
DataSourceEc2.py[CRITICAL]: Giving up on md from 
['http://169.254.169.25/2009-04-04/meta-data/instance-id'] after 2427570 seconds
  [  124.531624] cloud-init[834]: 2015-09-24 17:23:10,024 - 
url_helper.py[WARNING]: Calling 'http:///latest/meta-data/instance-id' failed [0/120s]: bad status code [404]

  This times out eventually and the node is left at the login prompt. I
  can install wily via netboot without issue and some time back, wily
  was deployable to this node from MAAS.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1499869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501158] [NEW] Missing MEDIA_URL config in test settings

2015-09-29 Thread Lin Hua Cheng
Public bug reported:

Steps to reproduce:
1. Set DEBUG=True in the test settings.
2. run the test

Actual output:

Test failure.

======================================================================
ERROR: Failure: ImproperlyConfigured (Empty static prefix not permitted)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
    mod = load_module(part_fqname, fh, filename, desc)
  File "/workspace/horizon/openstack_dashboard/dashboards/project/volumes/backups/tests.py", line 22, in <module>
    INDEX_URL = reverse('horizon:project:volumes:index')
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 550, in reverse
    app_list = resolver.app_dict[ns]
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 352, in app_dict
    self._populate()
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 285, in _populate
    for pattern in reversed(self.url_patterns):
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 402, in url_patterns
    patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 396, in urlconf_module
    self._urlconf_module = import_module(self.urlconf_name)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/workspace/horizon/openstack_dashboard/test/urls.py", line 46, in <module>
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
  File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/conf/urls/static.py", line 25, in static
    raise ImproperlyConfigured("Empty static prefix not permitted")
ImproperlyConfigured: Empty static prefix not permitted
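
A likely fix is to give the test settings an explicit media prefix so
that static() never receives an empty string; for example (values are
illustrative):

  import os

  # Example values only; any non-empty prefix avoids the error.
  MEDIA_ROOT = os.path.join(os.path.dirname(__file__), 'media')
  MEDIA_URL = '/media/'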

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501158

Title:
  Missing MEDIA_URL config in test settings

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Steps to reproduce:
  1. Set DEBUG=True in the test settings.
  2. run the test

  Actual output:

  Test failure.

  ======================================================================
  ERROR: Failure: ImproperlyConfigured (Empty static prefix not permitted)
  ----------------------------------------------------------------------
  Traceback (most recent call last):
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/nose/loader.py", line 418, in loadTestsFromName
      addr.filename, addr.module)
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/nose/importer.py", line 47, in importFromPath
      return self.importFromDir(dir_path, fqname)
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/nose/importer.py", line 94, in importFromDir
      mod = load_module(part_fqname, fh, filename, desc)
    File "/workspace/horizon/openstack_dashboard/dashboards/project/volumes/backups/tests.py", line 22, in <module>
      INDEX_URL = reverse('horizon:project:volumes:index')
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 550, in reverse
      app_list = resolver.app_dict[ns]
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 352, in app_dict
      self._populate()
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 285, in _populate
      for pattern in reversed(self.url_patterns):
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 402, in url_patterns
      patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
    File "/workspace/horizon/.venv/local/lib/python2.7/site-packages/django/core/urlresolvers.py", line 396, in urlconf_module
      self._urlconf_module = import_module(self.urlconf_name)
    File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
      __import__(name)
    File "/workspace/horizon/openstack_dashboard/test/urls.py", line 46, in <module>
      urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

[Yahoo-eng-team] [Bug 1501163] [NEW] Selenium suite under horizon/ breaks under Django 1.8

2015-09-29 Thread Richard Jones
Public bug reported:

The dummy user model used in the horizon/ test suite breaks under Django
1.8 with an error pretty much the same as what we see for django-
openstack-auth.

** Affects: horizon
 Importance: Undecided
 Assignee: Richard Jones (r1chardj0n3s)
 Status: In Progress


** Tags: liberty-rc2-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501163

Title:
  Selenium suite under horizon/ breaks under Django 1.8

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  The dummy user model used in the horizon/ test suite breaks under
  Django 1.8 with an error pretty much the same as what we see for
  django-openstack-auth.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501163/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501116] [NEW] "Image Registry" window contains a text box which overlaps with buttons (Japanese and German)

2015-09-29 Thread Yuko Katabami
Public bug reported:

Project > Data Processing > Image Registry > Register Image

The "Register Image" window, "Image Registry tool" text box becomes
larger when it contains translated text (only confirmed in German and
Japanese, but could be affecting more languages). It overlaps with
buttons and other text.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "HorizonBug1-de.png"
   
https://bugs.launchpad.net/bugs/1501116/+attachment/4479446/+files/HorizonBug1-de.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1501116

Title:
  "Image Registry" window contains a text box which overlaps with
  buttons (Japanese and German)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Project > Data Processing > Image Registry > Register Image

  The "Register Image" window, "Image Registry tool" text box becomes
  larger when it contains translated text (only confirmed in German and
  Japanese, but could be affecting more languages). It overlaps with
  buttons and other text.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1501116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501152] [NEW] Firewall Rule can be created with IPv4 Source address and IPv6 Destination address

2015-09-29 Thread Reedip
Public bug reported:

reedip@reedip-VirtualBox:/opt/stack/logs$ neutron firewall-rule-create 
--source-ip-address 1.1.1.1 --destination-ip-address 1::1 --protocol tcp 
--action allow
Created a new firewall_rule:
+------------------------+--------------------------------------+
| Field                  | Value                                |
+------------------------+--------------------------------------+
| action                 | allow                                |
| description            |                                      |
| destination_ip_address | 1::1                                 |
| destination_port       |                                      |
| enabled                | True                                 |
| firewall_policy_id     |                                      |
| id                     | 92abcef7-56ac-4730-bf06-cde88a2b84e8 |
| ip_version             | 4                                    |
| name                   |                                      |
| position               |                                      |
| protocol               | tcp                                  |
| shared                 | False                                |
| source_ip_address      | 1.1.1.1                              |
| source_port            |                                      |
| tenant_id              | f0e01e9a74684ed68e2f95565873c6fe     |
+------------------------+--------------------------------------+
reedip@reedip-VirtualBox:/opt/stack/logs$
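
A sketch of the missing check, with illustrative names (not the actual
FWaaS validator):

  import netaddr

  # Sketch: reject a rule whose source and destination addresses are of
  # different IP versions.
  def check_rule_ip_versions(source_ip, dest_ip):
      if source_ip and dest_ip:
          src_ver = netaddr.IPNetwork(source_ip).version
          dst_ver = netaddr.IPNetwork(dest_ip).version
          if src_ver != dst_ver:
              raise ValueError(
                  "source (IPv%d) and destination (IPv%d) address "
                  "versions must match" % (src_ver, dst_ver))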

** Affects: neutron
 Importance: Undecided
 Assignee: Reedip (reedip-banerjee)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Reedip (reedip-banerjee)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1501152

Title:
  Firewall Rule can be created with IPv4 Source address and IPv6
  Destination address

Status in neutron:
  New

Bug description:
  reedip@reedip-VirtualBox:/opt/stack/logs$ neutron firewall-rule-create 
--source-ip-address 1.1.1.1 --destination-ip-address 1::1 --protocol tcp 
--action allow
  Created a new firewall_rule:
  +------------------------+--------------------------------------+
  | Field                  | Value                                |
  +------------------------+--------------------------------------+
  | action                 | allow                                |
  | description            |                                      |
  | destination_ip_address | 1::1                                 |
  | destination_port       |                                      |
  | enabled                | True                                 |
  | firewall_policy_id     |                                      |
  | id                     | 92abcef7-56ac-4730-bf06-cde88a2b84e8 |
  | ip_version             | 4                                    |
  | name                   |                                      |
  | position               |                                      |
  | protocol               | tcp                                  |
  | shared                 | False                                |
  | source_ip_address      | 1.1.1.1                              |
  | source_port            |                                      |
  | tenant_id              | f0e01e9a74684ed68e2f95565873c6fe     |
  +------------------------+--------------------------------------+
  reedip@reedip-VirtualBox:/opt/stack/logs$

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1501152/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500994] [NEW] Json parameter is converted from double quotes to single quotes - Error on launch

2015-09-29 Thread s...@us.ibm.com
Public bug reported:

I created a json parameter with a default value in my template:
{"somekey" : "somevalue"}

When I launch the stack in Horizon, it changes the json value to be:
{u'somekey': u'somevalue'}

But this is invalid, because when I try to launch it tells me:  Error:
ERROR: Value must be valid JSON: Expecting property name enclosed in
double quotes: line 1 column 2 (char 1)

Resolution: don't rewrite the quotes in the JSON parameter default value
at launch; pass it through as valid JSON.
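
A minimal illustration of the difference; the default value is
presumably being stringified with str() somewhere instead of being
serialized:

  import json

  default = {u"somekey": u"somevalue"}   # as parsed from the template

  str(default)         # "{u'somekey': u'somevalue'}" on Python 2: not JSON
  json.dumps(default)  # '{"somekey": "somevalue"}': what Heat expects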

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: json parameter

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1500994

Title:
  Json parameter is converted from double quotes to single quotes -
  Error on launch

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I created a json parameter with a default value in my template:
  {"somekey" : "somevalue"}

  When I launch the stack in Horizon, it changes the json value to be:
  {u'somekey': u'somevalue'}

  But this is invalid, because when I try to launch it tells me:  Error:
  ERROR: Value must be valid JSON: Expecting property name enclosed in
  double quotes: line 1 column 2 (char 1)

  Resolution: don't rewrite the quotes in the JSON parameter default
  value at launch; pass it through as valid JSON.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1500994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1427032] Re: Enable neutron network integration testing via horizon.conf

2015-09-29 Thread Timur Sufiev
*** This bug is a duplicate of bug 1425882 ***
https://bugs.launchpad.net/bugs/1425882

** This bug has been marked a duplicate of bug 1425882
   Use Neutron in Horizon integration tests job

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1427032

Title:
  Enable neutron network integration testing via horizon.conf

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  First, have a test devstack with neutron enabled, referring to
  https://wiki.openstack.org/wiki/NeutronDevstack

  Add the following to local.conf:

  [[local|localrc]]
  disable_service n-net
  enable_service q-svc
  enable_service q-agt
  enable_service q-dhcp
  enable_service q-l3
  enable_service q-meta


  Then, have horizon.conf include something like this:

  # Set to neutron to test the network dashboard.
  # There is no network testing by default; the default
  # is nova-network.
  #network=nova-network
  network=neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1427032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500993] [NEW] Define a new vNIC type for exposing complete physical functions (SR-IOV)

2015-09-29 Thread Miguel Angel Ajo
Public bug reported:

One of the telco working group requirements is being able to expose a
whole PF (physical function) on an SR-IOV card.

To indicate that to nova, we need to specify that we want a
"physicalfunction" type of port.

It's different from the Ironic baremetal ports in the sense that those
PFs will be memory-mapped into the guest.
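
A hedged sketch of what requesting such a port could look like; the
'physicalfunction' value is the proposal here, not an existing
portbindings constant:

  # Sketch only: the vnic_type below is proposed, not yet defined in
  # neutron.extensions.portbindings (which today has values such as
  # 'normal', 'direct' and 'macvtap').
  port_request = {
      'port': {
          'network_id': 'NETWORK_UUID',             # placeholder
          'binding:vnic_type': 'physicalfunction',  # proposed new type
      }
  }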

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500993

Title:
  Define a new vNIC type for exposing complete physical functions (SR-
  IOV)

Status in neutron:
  New

Bug description:
  One of the telco working group requirements is being able to expose a
  whole PF (physical function) on an SR-IOV card.

  To indicate that to nova, we need to specify that we want a
  "physicalfunction" type of port.

  It's different from the Ironic baremetal ports in the sense that those
  PFs will be memory-mapped into the guest.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1500990] [NEW] dnsmasq responds with NACKs to requests from unknown hosts

2015-09-29 Thread Michael Turek
Public bug reported:

When a request comes in from a host not managed by neutron, dnsmasq
responds with a NACK. This causes a race condition: if the wrong DHCP
server responds to the request, your request will not be honored. This
can be inconvenient if you are sharing a subnet with other DHCP servers.

Our team recently ran into this in our Ironic development environment
and were stepping on each other's DHCP requests. A solution is to
provide an option that ignores unknown hosts rather than NACKing them.

The symptom of this was the repeated DISCOVER, OFFER, REQUEST, ACK cycle
with no acceptance from the host. (Sorry for all the omissions; this may
be overly cautious.)

Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) .205  
Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) .205  
Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205 
Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) .205   

...And so on

I did a dhcpdump and saw NACKs coming from my two teammates' machines.

Of course, multiple DHCP servers on a subnet are not a standard or common
case, but we've needed this in our Ironic development environment and
have found the fix to be useful.

** Affects: neutron
 Importance: Undecided
 Assignee: Michael Turek (mjturek)
 Status: In Progress


** Tags: dhcp dnsmasq

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500990

Title:
  dnsmasq responds with NACKs to requests from unknown hosts

Status in neutron:
  In Progress

Bug description:
  When a request comes in from a host not managed by neutron, dnsmasq
  responds with a NACK. This causes a race condition: if the wrong DHCP
  server responds to the request, your request will not be honored.
  This can be inconvenient if you are sharing a subnet with other DHCP
  servers.

  Our team recently ran into this in our Ironic development environment
  and were stepping on each other's DHCP requests. A solution is to
  provide an option that ignores unknown hosts rather than NACKing them.

  The symptom of this was the repeated DISCOVER, OFFER, REQUEST, ACK
  cycle with no acceptance from the host. (Sorry for all the omissions;
  this may be overly cautious.)

  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:18 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205  
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205  
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:21 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205  
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPDISCOVER(tapf1244648-f5) 

  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPOFFER(tapf1244648-f5) 
.205 
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPREQUEST(tapf1244648-f5) 
.205 
  Sep 16 09:58:25 localhost dnsmasq-dhcp[30340]: DHCPACK(tapf1244648-f5) 
.205   

  ...And so on

  I did a dhcpdump and saw NACKs coming from my two teammates'
  machines.

  Of course, multiple DHCP servers on a subnet are not a standard or
  common case, but we've needed this in our Ironic development
  environment and have found the fix to be useful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1501032] [NEW] incorrect method list is returned when scoping tokens with federation

2015-09-29 Thread Lance Bragstad
Public bug reported:

In keystone, when a user gets an unscoped token using a password and
their username, the unscoped token response contains a method list. This
method list will consist of ['password'], since it was the method used
to obtain the token. When the user goes to scope their unscoped token to
a project, the project scoped response will contain a method list of
['password', 'token'], since a password was used initially, and the
unscoped token was also used as a form of authentication.

In federation, when a user gets an unscoped token from a valid SAML
assertion, the unscoped response's method list will consist of
['saml2']. When the user goes to get a project scoped token, the project
scoped response's method list will only contain ['saml2']. The 'token'
entry is missing from the method list for rescoped federated tokens,
despite using an unscoped token as a method of authentication.


This seems to be an inconsistency between the authentication API and the 
federated authentication API.

I've pushed a patch that exposes this bug here -
https://review.openstack.org/#/c/229125/
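
Abbreviated, hand-written examples of the 'methods' field to illustrate
the inconsistency (not actual server output):

  # Non-federated: password auth, then rescope with the unscoped token.
  scoped_token = {"token": {"methods": ["password", "token"]}}

  # Federated: saml2 auth, then rescope; 'token' is missing.
  federated_scoped_token = {"token": {"methods": ["saml2"]}}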

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: federation

** Tags added: federation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1501032

Title:
  incorrect method list is returned when scoping tokens with federation

Status in Keystone:
  New

Bug description:
  In keystone, when a user gets an unscoped token using a password and
  their username, the unscoped token response contains a method list.
  This method list will consist of ['password'], since it was the method
  used to obtain the token. When the user goes to scope their unscoped
  token to a project, the project scoped response will contain a method
  list of ['password', 'token'], since a password was used initially,
  and the unscoped token was also used as a form of authentication.

  In federation, when a user gets an unscoped token from a valid SAML
  assertion, the unscoped response's method list will consist of
  ['saml2']. When the user goes to get a project scoped token, the
  project scoped response's method list will only contain ['saml2']. The
  'token' entry is missing from the method list for rescoped federated
  tokens, despite using an unscoped token as a method of authentication.

  
  This seems to be an inconsistency between the authentication API and the 
federated authentication API.

  I've pushed a patch that exposes this bug here -
  https://review.openstack.org/#/c/229125/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1501032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1330199] Re: rpc_workers does not work with Qpid

2015-09-29 Thread Ryan Moats
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1330199

Title:
  rpc_workers does not work with Qpid

Status in oslo.messaging:
  Fix Released

Bug description:
  After setting rpc_workers to a value other than 0 and restarting
  neutron-server, we found that no consumers are ever created for
  q-plugin within Qpid.

  It does appear that all sub-processes of neutron-server hang at the
  self.connection.open() step in the impl_qpid.py reconnect method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo.messaging/+bug/1330199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1315138] Re: stable backports failing with "sub_unit.log was > 50 MB of uncompressed data!!!"

2015-09-29 Thread Armando Migliaccio
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1315138

Title:
  stable backports failing with "sub_unit.log was > 50 MB of
  uncompressed data!!!"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) icehouse series:
  Fix Released
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  Since this merged today: https://review.openstack.org/#/c/85797/2

  We have jobs failing in the stable branches which are backports:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiKyBlY2hvICdzdWJfdW5pdC5sb2cgd2FzID4gNTAgTUIgb2YgdW5jb21wcmVzc2VkIGRhdGEhISEnXCIgQU5EIHRhZ3M6Y29uc29sZSIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5ODk3NTA0Njc4MH0=

  Seems this should only be enforced on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1315138/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp