[Yahoo-eng-team] [Bug 1803498] [NEW] node_staging_uri needs the file store to be configured

2018-11-15 Thread Thomas Herve
Public bug reported:

If you set node_staging_uri=file:///var/lib/glance/staging and try to do
a web-download import without the file store enabled, this is logged:


 Traceback (most recent call last):
   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
     result = task.execute(**arguments)
   File "/usr/lib/python2.7/site-packages/glance/async_/flows/api_image_import.py", line 92, in execute
     store_api.delete_from_backend(file_path)
   File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 409, in delete_from_backend
     loc = location.get_location_from_uri(uri, conf=CONF)
   File "/usr/lib/python2.7/site-packages/glance_store/location.py", line 75, in get_location_from_uri
     raise exceptions.UnknownScheme(scheme=pieces.scheme)
 UnknownScheme: Unknown scheme 'file' found in URI


The consequence is that the staging area is not cleaned up, and files pile
up. The file store should probably be configured automatically.
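
A hedged illustration of the cleanup side of the problem (not Glance's actual
fix): when the 'file' scheme is not registered because the file store is
disabled, delete_from_backend() raises UnknownScheme, so the import flow could
fall back to removing the staging file directly instead of leaving it behind.
The helper name below is hypothetical.

import os

from glance_store import exceptions as store_exceptions


def cleanup_staging(store_api, file_path):
    """Best-effort removal of staged image data (illustrative sketch)."""
    try:
        store_api.delete_from_backend(file_path)
    except store_exceptions.UnknownScheme:
        # The file store is not enabled, so glance_store cannot parse the
        # file:// URI; unlink the staging file ourselves.
        staging_path = file_path[len('file://'):]
        if os.path.exists(staging_path):
            os.unlink(staging_path)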

** Affects: glance
 Importance: High
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1803498

Title:
  node_staging_uri needs the file store to be configured

Status in Glance:
  Confirmed

Bug description:
  If you set node_staging_uri=file:///var/lib/glance/staging and try to
  do a web-download import without the file store enabled, this is
  logged:

  
   Traceback (most recent call last):
     File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
       result = task.execute(**arguments)
     File "/usr/lib/python2.7/site-packages/glance/async_/flows/api_image_import.py", line 92, in execute
       store_api.delete_from_backend(file_path)
     File "/usr/lib/python2.7/site-packages/glance_store/backend.py", line 409, in delete_from_backend
       loc = location.get_location_from_uri(uri, conf=CONF)
     File "/usr/lib/python2.7/site-packages/glance_store/location.py", line 75, in get_location_from_uri
       raise exceptions.UnknownScheme(scheme=pieces.scheme)
   UnknownScheme: Unknown scheme 'file' found in URI

  
  The consequence is that the staging area is not cleaned up, and files pile
  up. The file store should probably be configured automatically.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1803498/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1803299] [NEW] Failure in web-download doesn't put the image into error

2018-11-13 Thread Thomas Herve
Public bug reported:

I tried to do an image import with the web-download method, but I put
the wrong URI. In the glance logs I have:

HTTPError: HTTP Error 404: Not Found

The associated task is in a failure state, but my image isn't:

$ openstack image show b631db79-e542-4c62-86f3-39bf50fec22
...
status   | importing
...

It would be good if the image status were synced correctly.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1803299

Title:
  Failure in web-download doesn't put the image into error

Status in Glance:
  New

Bug description:
  I tried to do an image import with the web-download method, but I put
  the wrong URI. In the glance logs I have:

  HTTPError: HTTP Error 404: Not Found

  The associated task is in a failure state, but my image isn't:

  $ openstack image show b631db79-e542-4c62-86f3-39bf50fec22
  ...
  status   | importing
  ...

  It would be good if the image status were synced correctly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1803299/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771293] [NEW] Deadlock with quota when deleting port

2018-05-15 Thread Thomas Herve
Public bug reported:

Found here: http://logs.openstack.org/38/567238/3/check/heat-functional-
convg-mysql-
lbaasv2-py35/295509a/logs/screen-q-svc.txt.gz#_May_14_09_25_12_996826

The following query fails:

oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
'Deadlock found when trying to get lock; try restarting transaction')
[SQL: 'UPDATE quotausages SET dirty=%(dirty)s WHERE
quotausages.project_id = %(quotausages_project_id)s AND
quotausages.resource = %(quotausages_resource)s'] [parameters: {'dirty':
1, 'quotausages_project_id': '9774958ed24f4e28b5d2f5d72863861d',
'quotausages_resource': 'network'}

The incoming DELETE API call fails with a 500.
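
One common mitigation in OpenStack projects is to let oslo.db retry the
transaction on deadlock rather than surfacing a 500. A minimal sketch,
assuming oslo.db's wrap_db_retry decorator (the function name and body are
hypothetical):

from oslo_db import api as oslo_db_api


@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def mark_quota_usage_dirty(context, project_id, resource):
    # Re-run the whole transaction when MySQL reports
    # "Deadlock found when trying to get lock; try restarting transaction".
    pass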

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1771293

Title:
  Deadlock with quota when deleting port

Status in neutron:
  New

Bug description:
  Found here: http://logs.openstack.org/38/567238/3/check/heat-
  functional-convg-mysql-
  lbaasv2-py35/295509a/logs/screen-q-svc.txt.gz#_May_14_09_25_12_996826

  The following query fails:

  oslo_db.exception.DBDeadlock: (pymysql.err.InternalError) (1213,
  'Deadlock found when trying to get lock; try restarting transaction')
  [SQL: 'UPDATE quotausages SET dirty=%(dirty)s WHERE
  quotausages.project_id = %(quotausages_project_id)s AND
  quotausages.resource = %(quotausages_resource)s'] [parameters:
  {'dirty': 1, 'quotausages_project_id':
  '9774958ed24f4e28b5d2f5d72863861d', 'quotausages_resource': 'network'}

  The incoming DELETE API call fails with a 500.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1771293/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734698] [NEW] Squash database patches

2017-11-27 Thread Thomas Herve
Public bug reported:

With Newton being EOL, we should be able to drastically reduce the
number of patches in the migration repository: we currently start at
Havana/216, so squashing at least up to 313 would be a nice improvement
and would save a lot of time in CI and in new deployments.
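
For reference, the usual pattern when compacting sqlalchemy-migrate based
repositories is to leave no-op placeholders behind so version numbers stay
contiguous; a sketch of such a placeholder (the file name and range are
illustrative, e.g. 217_placeholder.py):

def upgrade(migrate_engine):
    # Schema changes formerly in this migration are folded into the new
    # squashed baseline; nothing left to do here.
    pass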

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1734698

Title:
  Squash database patches

Status in OpenStack Compute (nova):
  New

Bug description:
  With Newton being EOL, we should be able to drastically reduce the
  number of patches in the migration repository: we currently start at
  Havana/216, so squashing at least up to 313 would be a nice improvement
  and would save a lot of time in CI and in new deployments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1734698/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1697349] Re: The resulting fields of Event List in CLI and in Dashboard are different.

2017-06-12 Thread Thomas Herve
** Project changed: heat => horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1697349

Title:
  The resulting fields of Event List in CLI and in Dashboard are
  different.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The field event_id is missing in the Dashboard; instead, a redundant field
  (Stack Resource, Resource) is shown.
  Below is the list of fields for an event in the CLI and the Dashboard
  respectively:
  CLI -> (Resource_Name, Id, Resource_status_reason, resource_status, event_time)
  Dashboard -> (Stack Resource, Resource, Time Spent, Status, Status Reason)

  The information displayed in the CLI and the Dashboard should be similar,
  and the 'Event Id' column should be displayed in the Dashboard instead of
  the 'Resource' column.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1697349/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1424728] Re: Remove old rpc alias(es) from code

2017-03-20 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424728

Title:
  Remove old rpc alias(es) from code

Status in Cinder:
  In Progress
Status in Designate:
  In Progress
Status in grenade:
  New
Status in Ironic:
  Fix Released
Status in Magnum:
  In Progress
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.messaging:
  Confirmed
Status in Sahara:
  In Progress
Status in OpenStack Search (Searchlight):
  New
Status in watcher:
  New

Bug description:
  We have several TRANSPORT_ALIASES entries from way back (Essex, Havana)
  http://git.openstack.org/cgit/openstack/nova/tree/nova/rpc.py#n48

  We need a way to warn end users that they need to fix their nova.conf,
  so these aliases can be removed in a later release (a full cycle later?).
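
  A hedged sketch of the kind of warning the description asks for; the alias
  entries and helper are illustrative, not nova's actual implementation:

  import warnings

  TRANSPORT_ALIASES = {
      'nova.rpc.impl_kombu': 'rabbit',
      'nova.rpc.impl_qpid': 'qpid',
  }


  def resolve_transport(name):
      if name in TRANSPORT_ALIASES:
          warnings.warn(
              "Transport alias '%s' is deprecated; set '%s' in nova.conf "
              "instead" % (name, TRANSPORT_ALIASES[name]),
              DeprecationWarning)
          return TRANSPORT_ALIASES[name]
      return name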

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1424728/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1588171] Re: Should update nova api version to 2.1

2017-02-24 Thread Thomas Herve
** Changed in: heat
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1588171

Title:
  Should update nova api version to 2.1

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in heat:
  Fix Released
Status in neutron:
  Confirmed
Status in octavia:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in OpenStack Search (Searchlight):
  Fix Released

Bug description:
  The nova team has decided to remove the nova v2 API code completely, and it
  will be merged very soon: https://review.openstack.org/#/c/311653/

  We should bump to use v2.1 ASAP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1588171/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1467589] Re: Remove Cinder V1 support

2017-02-24 Thread Thomas Herve
** Changed in: heat
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1467589

Title:
  Remove Cinder V1 support

Status in Cinder:
  Won't Fix
Status in devstack:
  Fix Released
Status in grenade:
  In Progress
Status in heat:
  Fix Released
Status in OpenStack Compute (nova):
  Opinion
Status in os-client-config:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in Rally:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  Cinder created v2 support in the Grizzly release. This is to track
  progress in removing v1 support in other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1467589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls

2017-02-14 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596829

Title:
  String interpolation should be delayed at logging calls

Status in congress:
  Fix Released
Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in Ironic:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in Glance Client:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in python-troveclient:
  In Progress
Status in senlin:
  Invalid

Bug description:
  String interpolation should be delayed to be handled by the logging
  code, rather than being done at the point of the logging call.

  Wrong: LOG.debug('Example: %s' % 'bad')
  Right: LOG.debug('Example: %s', 'good')

  See the following guideline.

  * http://docs.openstack.org/developer/oslo.i18n/guidelines.html
  #adding-variables-to-log-messages

  The rule for it should be added to hacking checks.
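
  A slightly fuller sketch of why the delayed form is preferred (illustrative
  only): with eager interpolation the string is always built, even when the
  log level filters the record out, and formatting errors surface at the call
  site instead of in the logging machinery.

  import logging

  LOG = logging.getLogger(__name__)
  port_id = 'example-port-id'

  LOG.debug('Created port %s' % port_id)   # wrong: formats even if DEBUG is off
  LOG.debug('Created port %s', port_id)    # right: logging interpolates lazily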

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1596829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596829] Re: String interpolation should be delayed at logging calls

2017-02-13 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596829

Title:
  String interpolation should be delayed at logging calls

Status in congress:
  Fix Released
Status in Glance:
  In Progress
Status in glance_store:
  In Progress
Status in Ironic:
  Fix Released
Status in masakari:
  Fix Released
Status in networking-vsphere:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released
Status in os-vif:
  Fix Released
Status in python-cinderclient:
  Fix Released
Status in Glance Client:
  Fix Released
Status in python-manilaclient:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in python-openstackclient:
  New
Status in python-troveclient:
  In Progress

Bug description:
  String interpolation should be delayed to be handled by the logging
  code, rather than being done at the point of the logging call.

  Wrong: LOG.debug('Example: %s' % 'bad')
  Right: LOG.debug('Example: %s', 'good')

  See the following guideline.

  * http://docs.openstack.org/developer/oslo.i18n/guidelines.html
  #adding-variables-to-log-messages

  The rule for it should be added to hacking checks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/congress/+bug/1596829/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1650330] [NEW] Support relative files when creating a Heat stack from a URL

2016-12-15 Thread Thomas Herve
Public bug reported:

Since bug 1512564, Horizon supports using get_file on URLs when creating a
Heat stack. If you specify the template as a URL, it still expects the files
to be full URLs. It would be nice to handle relative file paths so that
get_file works seamlessly. For example, if you have in the template

resources:
  foo:
    type: dir/2.yaml

and you create the stack with http://example.com/templates/1.yaml pointing
to that, it would automatically fetch
http://example.com/templates/dir/2.yaml

Additionally, it would be useful if the environment file could be provided
as a URL, but it may be better to do that separately.
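
A minimal sketch of the resolution this asks for, using standard urljoin
behaviour (six.moves keeps it Python 2/3 compatible); the names are
illustrative:

from six.moves.urllib.parse import urljoin

template_url = 'http://example.com/templates/1.yaml'

# A non-absolute get_file/type reference is resolved against the template URL:
resolved = urljoin(template_url, 'dir/2.yaml')
assert resolved == 'http://example.com/templates/dir/2.yaml'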

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1650330

Title:
  Support relative files when creating a Heat stack from a URL

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Since bug 1512564, Horizon supports using get_file on URLs when
  creating a Heat stack. If you specify the template as a URL, it still
  expects the files to be full URLs. It would be nice to handle relative
  file paths so that get_file works seamlessly. For example, if you have
  in the template

  resources:
    foo:
      type: dir/2.yaml

  and you create the stack with http://example.com/templates/1.yaml
  pointing to that, it would automatically fetch
  http://example.com/templates/dir/2.yaml

  Additionally, it would be useful if the environment file could be
  provided as a URL, but it may be better to do that separately.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1650330/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1643268] Re: test_cancel_update_server_with_port failing with non-convergence intermittently

2016-11-20 Thread Thomas Herve
It seems it started happening with the move to xenial, so it may be a
libvirt issue. https://bugs.launchpad.net/tacker/+bug/1515768 is a
previous occurrence of it.

Adding nova as there is a 500, so there ought to be a bug.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1643268

Title:
  test_cancel_update_server_with_port failing with non-convergence
  intermittently

Status in heat:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  test_cancel_update_server_with_port seems to be failing[1]
  intermittently with non-convergence with

  2016-11-20 04:36:45.142985 | 2016-11-20 04:36:45.141 |
  heat_integrationtests.common.exceptions.StackBuildErrorException:
  Stack CancelUpdateTest-297766966/b0208a66-631e-4716-8785-91819afaf1a5
  is in ROLLBACK_FAILED status due to 'ClientException:
  resources.Server: Failed to attach network adapter device to 01ce1b55
  -167d-4b16-a265-35661f4c1b48 (HTTP 500) (Request-ID: req-
  c7044e7e-b245-4d78-9709-b92c544673b9)'

  From the nova logs it looks like a libvirt error[2]. Not checked if
  this is a nova issue or not. Filing this bug to track it.

  
  [1] 
http://logs.openstack.org/76/398476/1/gate/gate-heat-dsvm-functional-orig-mysql-lbaasv2-ubuntu-xenial/1e38d87/console.html#_2016-11-20_04_36_45_142985

  [2] http://logs.openstack.org/76/398476/1/gate/gate-heat-dsvm-
  functional-orig-mysql-lbaasv2-ubuntu-
  xenial/1e38d87/logs/screen-n-cpu.txt.gz?level=ERROR#_2016-11-20_04_26_56_121

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1643268/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-17 Thread Thomas Herve
** No longer affects: heat

** No longer affects: python-heatclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in kuryr:
  In Progress
Status in kuryr-libnetwork:
  In Progress
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in Murano:
  In Progress
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.
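
  A minimal sketch of the substitution being asked for, assuming the wrapper
  now lives in oslo.utils:

  import uuid

  from oslo_utils import uuidutils

  legacy = str(uuid.uuid4())            # scattered ad-hoc generation
  new = uuidutils.generate_uuid()       # common wrapper, consistent everywhere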

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475722] Re: Never use MagicMock

2016-09-21 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1475722

Title:
  Never use MagicMock

Status in Aodh:
  New
Status in Barbican:
  In Progress
Status in Ceilometer:
  New
Status in Cinder:
  New
Status in Designate:
  New
Status in Glance:
  New
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  In Progress
Status in keystonemiddleware:
  In Progress
Status in Mistral:
  In Progress
Status in Murano:
  In Progress
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in Panko:
  New
Status in python-barbicanclient:
  In Progress
Status in python-ceilometerclient:
  New
Status in python-heatclient:
  In Progress
Status in python-mistralclient:
  In Progress
Status in python-muranoclient:
  In Progress
Status in python-neutronclient:
  In Progress
Status in python-novaclient:
  New
Status in python-openstackclient:
  Fix Released
Status in OpenStack SDK:
  Fix Committed
Status in python-swiftclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in Rally:
  New
Status in OpenStack Object Storage (swift):
  New
Status in tempest:
  New
Status in OpenStack DBaaS (Trove):
  New

Bug description:
  They magically allow things to pass. This is bad.

  Any usage should be replaced with the Mock class and explicit
  attributes should be set on it.
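
  An illustrative sketch of the difference (the fake class is hypothetical):
  MagicMock happily accepts any attribute or call, so typos pass silently,
  while a plain Mock with an explicit spec and explicit attributes fails
  loudly when used incorrectly.

  import mock


  class FakeClient(object):
      def create(self, name):
          raise NotImplementedError()


  bad = mock.MagicMock()
  bad.craete('x')                       # typo, yet the test keeps passing

  good = mock.Mock(spec=FakeClient)
  good.create.return_value = 'fake-id'
  good.create('x')                      # ok
  # good.craete('x')                    # AttributeError, as it should be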

To manage notifications about this bug go to:
https://bugs.launchpad.net/aodh/+bug/1475722/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1274523] Re: connection_trace does not work with DB2 backend

2016-07-08 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1274523

Title:
  connection_trace does not work with DB2 backend

Status in Cinder:
  Invalid
Status in Glance:
  Triaged
Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in oslo.db:
  Fix Released

Bug description:
  When setting connection_trace=True, the stack trace does not get
  printed for DB2 (ibm_db).

  I have a patch that we've been using internally for this fix that I
  plan to upstream soon, and with that we can get output like this:

  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] SELECT 
services.created_at AS services_created_at, services.updated_at AS 
services_updated_at, services.deleted_at AS services_deleted_at, 
services.deleted AS services_deleted, services.id AS services_id, services.host 
AS services_host, services."binary" AS services_binary, services.topic AS 
services_topic, services.report_count AS services_report_count, 
services.disabled AS services_disabled, services.disabled_reason AS 
services_disabled_reason
  FROM services WHERE services.deleted = ? AND services.id = ? FETCH FIRST 1 
ROWS ONLY
  2013-09-11 13:07:51,985 INFO sqlalchemy.engine.base.Engine (0, 3)
  2013-09-11 13:07:51.985 28151 INFO sqlalchemy.engine.base.Engine [-] (0, 3)
  File 
/usr/lib/python2.6/site-packages/nova/servicegroup/drivers/db.py:92 
_report_state() service.service_ref, state_catalog)
  File /usr/lib/python2.6/site-packages/nova/conductor/api.py:270 
service_update() return self._manager.service_update(context, service, values)
  File 
/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py:420 
catch_client_exception() return func(*args, **kwargs)
  File /usr/lib/python2.6/site-packages/nova/conductor/manager.py:461 
service_update() svc = self.db.service_update(context, service['id'], values)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:505 
service_update() with_compute_node=False, session=session)
  File /usr/lib/python2.6/site-packages/nova/db/sqlalchemy/api.py:388 
_service_get() result = query.first()

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1274523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596135] Re: Make raw_input py3 compatible

2016-06-25 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1596135

Title:
  Make raw_input py3 compatible

Status in anvil:
  New
Status in Aodh:
  New
Status in Bandit:
  New
Status in daisycloud-core:
  New
Status in Freezer:
  New
Status in KloudBuster:
  New
Status in Murano:
  New
Status in Packstack:
  New
Status in Poppy:
  New
Status in python-solumclient:
  New
Status in vmtp:
  New
Status in vmware-nsx:
  New

Bug description:
  In py3, raw_input was renamed to input, so the code needs to be modified to
  make it compatible.


  https://github.com/openstack/python-solumclient/blob/ea37d226a6ba55d7ad4024233b9d8001aab92ca5/contrib/setup-tools/solum-app-setup.py#L76
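
  A minimal sketch of the compatible form, assuming six is available:

  from six.moves import input  # raw_input on Python 2, input on Python 3

  name = input('Application name: ')
  print(name)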

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1596135/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583419] Re: Make dict.keys() PY3 compatible

2016-06-22 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583419

Title:
  Make dict.keys() PY3 compatible

Status in Ceilometer:
  In Progress
Status in Cinder:
  In Progress
Status in neutron:
  New
Status in OpenStack Compute (nova):
  New
Status in python-ceilometerclient:
  New
Status in python-cinderclient:
  Fix Released
Status in python-glanceclient:
  New
Status in python-heatclient:
  New
Status in python-manilaclient:
  In Progress
Status in python-troveclient:
  In Progress
Status in Rally:
  In Progress
Status in tempest:
  New

Bug description:
  In PY3, dict.keys() returns a view object rather than a list, i.e.
  $ python3.4
  Python 3.4.3 (default, Mar 31 2016, 20:42:37) 
  >>> body={"11":"22"}
  >>> body[body.keys()[0]]
  Traceback (most recent call last):
File "", line 1, in 
  TypeError: 'dict_keys' object does not support indexing

  so for py3 compatibility we should change it as follows:
  >>> body[list(body.keys())[0]]
  '22'
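
  The same fix as it would look in code (illustrative):

  body = {"11": "22"}
  first_key = list(body.keys())[0]      # works on both Python 2 and 3
  assert body[first_key] == "22"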

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1583419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1259292] Re: Some tests use assertEqual(observed, expected) , the argument order is wrong

2016-06-22 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259292

Title:
  Some tests use assertEqual(observed, expected) , the argument order is
  wrong

Status in Barbican:
  In Progress
Status in Ceilometer:
  Invalid
Status in Cinder:
  Fix Released
Status in congress:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in glance_store:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in Manila:
  Fix Released
Status in Mistral:
  Fix Released
Status in Murano:
  Fix Released
Status in OpenStack Compute (nova):
  Won't Fix
Status in os-brick:
  In Progress
Status in python-ceilometerclient:
  Invalid
Status in python-cinderclient:
  Fix Released
Status in python-designateclient:
  Fix Committed
Status in python-glanceclient:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in Python client library for Zaqar:
  Fix Released
Status in Sahara:
  Fix Released
Status in zaqar:
  Fix Released

Bug description:
  The test cases will produce a confusing error message if the tests
  ever fail, so this is worth fixing.
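
  A short sketch of the convention being asked for (the stand-in function is
  hypothetical): testtools documents the signature as
  assertEqual(expected, observed), so the expected value goes first.

  import unittest


  def observed_value():
      return 'expected-value'           # stand-in for the code under test


  class ExampleTest(unittest.TestCase):
      def test_argument_order(self):
          # Swapping the arguments makes the failure message report the
          # actual value as if it were the expected one.
          self.assertEqual('expected-value', observed_value())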

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1259292/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1348447] Re: Enable metadata when create server groups

2016-06-02 Thread Thomas Herve
** Changed in: heat
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1348447

Title:
  Enable metadata when create server groups

Status in heat:
  Won't Fix
Status in OpenStack Compute (nova):
  Invalid
Status in python-novaclient:
  In Progress

Bug description:
  instance_group object already support instance group metadata but the
  api extension do not support this.

  We should enable this by default.

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1348447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1190149] Re: Token auth fails when token is larger than 8k

2016-05-25 Thread Thomas Herve
** No longer affects: heat/havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190149

Title:
  Token auth fails when token is larger than 8k

Status in Cinder:
  Fix Released
Status in Cinder havana series:
  Fix Released
Status in Glance:
  Fix Released
Status in Glance havana series:
  Fix Committed
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Object Storage (swift):
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released

Bug description:
  The following tests fail when there are 8 or more endpoints registered with 
keystone 
  tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token 
  tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token

  Steps to reproduce:
  - run devstack with the following services (the heat h-* apis push the
  endpoint count over the threshold)

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-obj,n-cpu,n-sch,horizon,mysql,rabbit,sysstat,tempest,s-proxy,s-account,s-container,s-object,cinder,c-api,c-vol,c-sch,n-cond,heat,h-api,h-api-cfn,h-api-cw,h-eng,n-net
  - run the failing tempest tests, eg
testr run test_v3_token
  - results in the following errors:
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestJSON.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/json/servers_client.py", line 138, in 
list_servers
  resp, body = self.get(url)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 327, in _parse_resp
  return json.loads(body)
File "/usr/lib64/python2.7/json/__init__.py", line 326, in loads
  return _default_decoder.decode(s)
File "/usr/lib64/python2.7/json/decoder.py", line 366, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib64/python2.7/json/decoder.py", line 384, in raw_decode
  raise ValueError("No JSON object could be decoded")
  ValueError: No JSON object could be decoded
  ==
  ERROR: tempest.api.compute.test_auth_token.AuthTokenTestXML.test_v3_token
  tags: worker-0
  --
  Traceback (most recent call last):
File "tempest/api/compute/test_auth_token.py", line 48, in test_v3_token
  self.servers_v3.list_servers()
File "tempest/services/compute/xml/servers_client.py", line 181, in 
list_servers
  resp, body = self.get(url, self.headers)
File "tempest/common/rest_client.py", line 269, in get
  return self.request('GET', url, headers)
File "tempest/common/rest_client.py", line 394, in request
  resp, resp_body)
File "tempest/common/rest_client.py", line 443, in _error_checker
  resp_body = self._parse_resp(resp_body)
File "tempest/common/rest_client.py", line 519, in _parse_resp
  return xml_to_json(etree.fromstring(body))
File "lxml.etree.pyx", line 2993, in lxml.etree.fromstring 
(src/lxml/lxml.etree.c:63285)
File "parser.pxi", line 1617, in lxml.etree._parseMemoryDocument 
(src/lxml/lxml.etree.c:93571)
File "parser.pxi", line 1495, in lxml.etree._parseDoc 
(src/lxml/lxml.etree.c:92370)
File "parser.pxi", line 1011, in lxml.etree._BaseParser._parseDoc 
(src/lxml/lxml.etree.c:89010)
File "parser.pxi", line 577, in 
lxml.etree._ParserContext._handleParseResultDoc (src/lxml/lxml.etree.c:84711)
File "parser.pxi", line 676, in lxml.etree._handleParseResult 
(src/lxml/lxml.etree.c:85816)
File "parser.pxi", line 627, in lxml.etree._raiseParseError 
(src/lxml/lxml.etree.c:85308)
  XMLSyntaxError: None
  Ran 2 tests in 2.497s (+0.278s)
  FAILED (id=214, failures=2)

  - run keystone endpoint-delete on endpoints until there are 7 endpoints
  - failing tests should now pass

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1190149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340596] Re: Tests fail due to novaclient 2.18 update

2016-05-25 Thread Thomas Herve
** No longer affects: heat/havana

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1340596

Title:
  Tests fail due to novaclient 2.18 update

Status in heat:
  Invalid
Status in heat icehouse series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in python-novaclient:
  Fix Released

Bug description:
  tests currently fail on stable branches:
  2014-07-11 07:14:28.737 | 
==
  2014-07-11 07:14:28.738 | ERROR: test_index 
(openstack_dashboard.dashboards.admin.aggregates.tests.AggregatesViewTests)
  2014-07-11 07:14:28.774 | 
--
  2014-07-11 07:14:28.775 | Traceback (most recent call last):
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/helpers.py",
 line 124, in setUp
  2014-07-11 07:14:28.775 | test_utils.load_test_data(self)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/utils.py",
 line 43, in load_test_data
  2014-07-11 07:14:28.775 | data_func(load_onto)
  2014-07-11 07:14:28.775 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 60, in data
  2014-07-11 07:14:28.776 | TEST.exceptions.nova_unauthorized = 
create_stubbed_exception(nova_unauth)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 44, in create_stubbed_exception
  2014-07-11 07:14:28.776 | return cls(status_code, msg)
  2014-07-11 07:14:28.776 |   File 
"/home/jenkins/workspace/gate-horizon-python26/openstack_dashboard/test/test_data/exceptions.py",
 line 31, in fake_init_exception
  2014-07-11 07:14:28.776 | self.code = code
  2014-07-11 07:14:28.776 | AttributeError: can't set attribute
  2014-07-11 07:14:28.777 |
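
  A hedged reproduction of the failure mode (not novaclient's actual code):
  the stubbed exception assigns to 'code', which fails once that attribute is
  backed by a read-only property in the newer client.

  class FakeClientException(Exception):
      @property
      def code(self):
          return 401


  exc = FakeClientException()
  try:
      exc.code = 500                    # mirrors "self.code = code"
  except AttributeError as err:
      print(err)                        # "can't set attribute"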

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1340596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339273] Re: Sphinx documentation build failed in stable/havana: source_dir is not a directory

2016-05-25 Thread Thomas Herve
** Changed in: heat/havana
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339273

Title:
  Sphinx documentation build failed in stable/havana: source_dir is not
  a directory

Status in Glance:
  Invalid
Status in Glance havana series:
  New
Status in heat:
  Invalid
Status in heat havana series:
  Won't Fix
Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Dashboard (Horizon) havana series:
  Fix Released

Bug description:
  Documentation is not building in stable/havana:

  $ tox -evenv -- python setup.py build_sphinx
  venv inst: /opt/stack/horizon/.tox/dist/horizon-2013.2.4.dev9.g19634d6.zip
  venv runtests: PYTHONHASHSEED='1422458638'
  venv runtests: commands[0] | python setup.py build_sphinx
  running build_sphinx
  error: 'source_dir' must be a directory name (got 
`/opt/stack/horizon/doc/source`)
  ERROR: InvocationError: '/opt/stack/horizon/.tox/venv/bin/python setup.py 
build_sphinx'

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339273/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503501] Re: oslo.db no longer requires testresources and testscenarios packages

2016-05-25 Thread Thomas Herve
** Changed in: heat/liberty
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1503501

Title:
  oslo.db no longer requires testresources and testscenarios packages

Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in heat liberty series:
  Fix Released
Status in Ironic:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in Sahara:
  Fix Released
Status in Sahara liberty series:
  Fix Committed
Status in Sahara mitaka series:
  Fix Released

Bug description:
  As of https://review.openstack.org/#/c/217347/ oslo.db no longer has
  testresources or testscenarios in its requirements, so the next release of
  oslo.db will break several projects. Projects that use fixtures from
  oslo.db should add these packages to their own requirements if they need
  them.

  Example from Nova:
  ${PYTHON:-python} -m subunit.run discover -t ./ ${OS_TEST_PATH:-./nova/tests} 
--list 
  ---Non-zero exit code (2) from test listing.
  error: testr failed (3) 
  import errors ---
  Failed to import test module: nova.tests.unit.db.test_db_api
  Traceback (most recent call last):
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 456, in _find_test_path
  module = self._get_module_from_name(name)
File 
"/home/travis/build/dims/nova/.tox/py27/lib/python2.7/site-packages/unittest2/loader.py",
 line 395, in _get_module_from_name
  __import__(name)
File "nova/tests/unit/db/test_db_api.py", line 31, in 
  from oslo_db.sqlalchemy import test_base
File 
"/home/travis/build/dims/nova/.tox/py27/src/oslo.db/oslo_db/sqlalchemy/test_base.py",
 line 17, in 
  import testresources
  ImportError: No module named testresources

  https://travis-ci.org/dims/nova/jobs/83992423

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1503501/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1584558] Re: Some config options in codes use double quotes but most of the options use single quotes and that need to be normalized

2016-05-24 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1584558

Title:
  Some config options in codes use double quotes but most of the options
  use single quotes and that need to be normalized

Status in Ceilometer:
  Won't Fix
Status in Cinder:
  In Progress
Status in Glance:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  description:

  most options like this use '':
  volume_manager_opts = [
  cfg.StrOpt('volume_driver',
 default='cinder.volume.drivers.lvm.LVMISCSIDriver',
 help='Driver to use for volume creation'),
  cfg.IntOpt('migration_create_volume_timeout_secs',
 default=300,
 help='Timeout for creating the volume to migrate to '
  'when performing volume migration (seconds)'),
  cfg.BoolOpt('volume_service_inithost_offload',
  default=False,
  help='Offload pending volume delete during '
   'volume service startup'),
  cfg.StrOpt('zoning_mode',
 default='none',
 help='FC Zoning mode configured'),
  cfg.StrOpt('extra_capabilities',
 default='{}',
 help='User defined capabilities, a JSON formatted string '
  'specifying key/value pairs. The key/value pairs can '
  'be used by the CapabilitiesFilter to select between '
  'backends when requests specify volume types. For '
  'example, specifying a service level or the geographical '
  'location of a backend, then creating a volume type to '
  'allow the user to select by these different '
  'properties.'),
  ]

  
  but some options like this use "":
  store_type_opts = [
  cfg.ListOpt("store_type_preference",
  default=[],
  help=_("The store names to use to get store preference order. 
"
 "The name must be registered by one of the stores "
 "defined by the 'stores' config option. "
 "This option will be applied when you using "
 "'store_type' option as image location strategy "
 "defined by the 'location_strategy' config option."))
  ]
  profiler_opts = [
  cfg.BoolOpt("enabled", default=False,
  help=_('If False fully disable profiling feature.')),
  cfg.BoolOpt("trace_sqlalchemy", default=False,
  help=_("If False doesn't trace SQL requests."))
  ]

  I think this needs to be normalized.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1584558/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290234] Re: do not use __builtin__ in Python3

2016-04-19 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290234

Title:
  do not use __builtin__ in Python3

Status in Bandit:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Magnum:
  Fix Released
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in octavia:
  Fix Released
Status in Swift Authentication:
  Fix Released
Status in OpenStack DBaaS (Trove):
  In Progress

Bug description:
  __builtin__ does not exist in Python 3, use six.moves.builtins
  instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/bandit/+bug/1290234/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 856764] Re: RabbitMQ connections lack heartbeat or TCP keepalives

2016-04-19 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/856764

Title:
  RabbitMQ connections lack heartbeat or TCP keepalives

Status in Ceilometer:
  Invalid
Status in Ceilometer icehouse series:
  Fix Released
Status in Cinder:
  Confirmed
Status in Mirantis OpenStack:
  Fix Committed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Invalid
Status in oslo.messaging:
  Fix Released
Status in oslo.messaging package in Ubuntu:
  Fix Released

Bug description:
  There is currently no method built into Nova to keep connections from
  various components into RabbitMQ alive.  As a result, placing a
  stateful firewall (such as a Cisco ASA) between the connection
  can/does result in idle connections being terminated without either
  endpoint being aware.

  This issue can be mitigated a few different ways:

  1. Connections to RabbitMQ set socket options to enable TCP keepalives
  (see the sketch after this list).

  2. Rabbit has heartbeat functionality. If the client requests heartbeats
  on connection, the rabbit server will regularly send messages to each
  connection with the expectation of a response.

  3. Other?
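
  A minimal sketch of mitigation 1, enabling TCP keepalives on the AMQP
  socket (the idle/interval values are illustrative and the TCP_KEEP*
  options are Linux-specific):

  import socket

  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
  sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)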

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/856764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 904307] Re: Application/server name not available in service logs

2016-04-19 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/904307

Title:
  Application/server name not available in service logs

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Fix Released
Status in Sahara:
  Fix Released

Bug description:
  If Nova is configured to use syslog based logging, and there are
  multiple services running on a system, it becomes difficult to
  identify the service that emitted the log. This can be resolved if the
  log record also contains the name of the service/binary that generated
  the log. This will also be useful with an OpenStack system using a
  centralized syslog based logging mechanism.
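
  A minimal sketch of what the fix amounts to, using only the standard
  library (the binary name and format are illustrative):

  import logging
  import logging.handlers

  binary = 'nova-compute'
  handler = logging.handlers.SysLogHandler(address='/dev/log')
  handler.setFormatter(
      logging.Formatter(binary + ': %(levelname)s %(name)s %(message)s'))

  LOG = logging.getLogger(__name__)
  LOG.addHandler(handler)
  LOG.setLevel(logging.INFO)
  LOG.info('service started')           # now attributable in the syslog stream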

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/904307/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1556758] Re: Instance create error because of timeout

2016-03-14 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1556758

Title:
  Instance create error because of  timeout

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  2016-03-14 14:19:47.620 TRACE heat.engine.resource Traceback (most recent 
call last):
  2016-03-14 14:19:47.620 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 688, in _action_recorder
  2016-03-14 14:19:47.620 TRACE heat.engine.resource yield
  1) My YAML file is as below:

  heat_template_version: 2015-10-15

  description: HOT template for one interconnected VM with floating ips.

  parameters:
    image_id:
      type: string
      description: Image Name
      default: cirros-0.3.4-x86_64-uec

    public_net:
      type: string
      description: public network id
      default: a059ae2f-0eed-468f-b2a2-a0427f621da1

  resources:
    private_net:
      type: OS::Neutron::Net
      properties:
        name: private-net

    private_subnet:
      type: OS::Neutron::Subnet
      properties:
        network_id: { get_resource: private_net }
        cidr: 172.16.2.0/24
        gateway_ip: 172.16.2.1

    server1:
      type: OS::Nova::Server
      properties:
        name: Server1
        image: { get_param: image_id }
        flavor: m1.tiny
        networks:
          - network: { get_resource: private_net }

  
  2) When I try to create an instance, I get an error:
  2016-03-14 14:28:36.237 TRACE heat.engine.resource Traceback (most recent 
call last):
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 688, in _action_recorder
  2016-03-14 14:28:36.237 TRACE heat.engine.resource yield
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 759, in _do_action
  2016-03-14 14:28:36.237 TRACE heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/scheduler.py", line 297, in wrapper
  2016-03-14 14:28:36.237 TRACE heat.engine.resource step = next(subtask)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resource.py", line 730, in action_handler_task
  2016-03-14 14:28:36.237 TRACE heat.engine.resource handler_data = 
handler(*args)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/opt/stack/heat/heat/engine/resources/openstack/nova/server.py", line 822, in 
handle_create
  2016-03-14 14:28:36.237 TRACE heat.engine.resource admin_pass=admin_pass)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 1038, in 
create
  2016-03-14 14:28:36.237 TRACE heat.engine.resource **boot_kwargs)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 555, in _boot
  2016-03-14 14:28:36.237 TRACE heat.engine.resource return_raw=return_raw, 
**kwargs)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/base.py", line 302, in _create
  2016-03-14 14:28:36.237 TRACE heat.engine.resource _resp, body = 
self.api.client.post(url, body=body)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 451, in post
  2016-03-14 14:28:36.237 TRACE heat.engine.resource return 
self._cs_request(url, 'POST', **kwargs)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 426, in 
_cs_request
  2016-03-14 14:28:36.237 TRACE heat.engine.resource resp, body = 
self._time_request(url, method, **kwargs)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 399, in 
_time_request
  2016-03-14 14:28:36.237 TRACE heat.engine.resource resp, body = 
self.request(url, method, **kwargs)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/novaclient/client.py", line 393, in request
  2016-03-14 14:28:36.237 TRACE heat.engine.resource raise 
exceptions.from_response(resp, body, url, method)
  2016-03-14 14:28:36.237 TRACE heat.engine.resource ClientException: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.
  2016-03-14 14:28:36.237 TRACE heat.engine.resource  (HTTP 500) (Request-ID: 
req-d1fb3f88-9623-47b6-b3cd-74703e2ae89e)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1556758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net

[Yahoo-eng-team] [Bug 1538497] [NEW] Race condition: returns 404 when listing ports

2016-01-27 Thread Thomas Herve
Public bug reported:

I've observed this behavior in heat integration tests:
http://logs.openstack.org/51/271451/1/gate/gate-heat-dsvm-functional-
orig-mysql/9f09db7/logs/

When creating a server, nova calls list_ports and it fails because a
network can't be found: http://logs.openstack.org/51/271451/1/gate/gate-
heat-dsvm-functional-orig-
mysql/9f09db7/logs/screen-n-api.txt.gz?level=ERROR

As it turns out, the network was removed after the port listing started,
thus making it fail.

It should handle that NotFound and ignore the error, as there is a
transaction issue somewhere.

http://logs.openstack.org/51/271451/1/gate/gate-heat-dsvm-functional-
orig-mysql/9f09db7/logs/screen-q-svc.txt.gz?level=ERROR is the neutron
side failure.
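
A hedged sketch of the kind of tolerance suggested above; the helper and the
exact exception class are illustrative, not nova's actual code:

from neutronclient.common import exceptions as n_exc


def list_ports_for_networks(neutron, network_ids):
    ports = []
    for net_id in network_ids:
        try:
            ports.extend(neutron.list_ports(network_id=net_id)['ports'])
        except n_exc.NotFound:
            # The network was deleted while we were listing; skip it.
            continue
    return ports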

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1538497

Title:
  Race condition: returns 404 when listing ports

Status in neutron:
  New

Bug description:
  I've observed this behavior in heat integration tests:
  http://logs.openstack.org/51/271451/1/gate/gate-heat-dsvm-functional-
  orig-mysql/9f09db7/logs/

  When creating a server, nova calls list_ports and it fails because a
  network can't be found: http://logs.openstack.org/51/271451/1/gate
  /gate-heat-dsvm-functional-orig-
  mysql/9f09db7/logs/screen-n-api.txt.gz?level=ERROR

  As it turns out, the network was removed after the port listing started,
  thus making it fail.

  It should handle that NotFound and ignore the error, as there is a
  transaction issue somewhere.

  http://logs.openstack.org/51/271451/1/gate/gate-heat-dsvm-functional-
  orig-mysql/9f09db7/logs/screen-q-svc.txt.gz?level=ERROR is the neutron
  side failure.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1538497/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538204] [NEW] Failed to stop nova-api in grenade tests

2016-01-26 Thread Thomas Herve
Public bug reported:

Saw this during a grenade run:

2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 143, in 
clear
2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup for sig in 
self._signal_handlers:
2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup RuntimeError: 
dictionary changed size during iteration

(From http://logs.openstack.org/25/272425/1/gate/gate-grenade-dsvm-
heat/b32eda2/).

This may be due to a change in oslo.service, but the failure happens in
the old (pre-upgrade) process, so I'm not sure it should even be using
the new code.
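
For reference, the usual fix for this class of error is to iterate over a
snapshot of the dict; a minimal sketch of the pattern (not the actual
oslo.service code):

class SignalHandlers(object):
    # Minimal illustration of the pattern, not the oslo.service class.
    def __init__(self):
        self._signal_handlers = {}

    def clear(self):
        # list() takes a snapshot of the keys, so handlers added or removed
        # concurrently can no longer raise "dictionary changed size during
        # iteration".
        for sig in list(self._signal_handlers):
            self._signal_handlers.pop(sig, None)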

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538204

Title:
  Failed to stop nova-api in grenade tests

Status in OpenStack Compute (nova):
  New

Bug description:
  Saw this during a grenade run:

  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 143, in 
clear
  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup for sig in 
self._signal_handlers:
  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup RuntimeError: 
dictionary changed size during iteration

  (From http://logs.openstack.org/25/272425/1/gate/gate-grenade-dsvm-
  heat/b32eda2/).

  This may be due to a change in oslo.service, but the failure happens
  in the old (pre-upgrade) process, so I'm not sure it should even be
  using the new code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538204] Re: Failed to stop nova-api in grenade tests

2016-01-26 Thread Thomas Herve
https://review.openstack.org/#/c/260386/ looks suspicious.

** Also affects: oslo.service
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538204

Title:
  Failed to stop nova-api in grenade tests

Status in OpenStack Compute (nova):
  New
Status in oslo.service:
  New

Bug description:
  Saw this during a grenade run:

  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 143, in 
clear
  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup for sig in 
self._signal_handlers:
  2016-01-26 16:12:58.553 22016 ERROR oslo_service.threadgroup RuntimeError: 
dictionary changed size during iteration

  (From http://logs.openstack.org/25/272425/1/gate/gate-grenade-dsvm-
  heat/b32eda2/).

  This may be due to a change in oslo.service, but the failure happens
  in the old (pre-upgrade) process, so I'm not sure it should even be
  using the new code.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1538204/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1303802] Re: qemu image convert fails in snapshot

2016-01-14 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1303802

Title:
  qemu image convert fails in snapshot

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Periodically in the gate we see a qemu-img convert failure during
  snapshot:

  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/openstack/common/processutils.py", line 193, in 
execute
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher cmd=' 
'.join(cmd))
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher 
ProcessExecutionError: Unexpected error while running command.
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Command: 
qemu-img convert -f qcow2 -O qcow2 
/opt/stack/data/nova/instances/4ff6dc10-eac8-41d2-a645-3a0e0ba07c8a/disk 
/opt/stack/data/nova/instances/snapshots/tmpcVpCxJ/33eb0bb2b49648c69770b47db3211a86
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Exit code: 1
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Stdout: ''
  2014-04-07 01:31:29.470 29554 TRACE oslo.messaging.rpc.dispatcher Stderr: 
'qemu-img: error while reading sector 0: Input/output error\n'

  qemu-img is very obtuse about what the actual issue is, so it's
  unclear whether this is a corrupt disk or a completely missing disk.

  The user-visible face of this is on operations like shelve, where the
  instance will believe that it's still in the active state, even
  though everything is actually broken -
  http://logs.openstack.org/02/85602/1/gate/gate-tempest-dsvm-
  full/20ed964/console.html#_2014-04-07_01_44_29_309

  Logstash query:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcInFlbXUtaW1nOiBlcnJvclwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxMzk2ODc2MTQ4NDc3fQ==
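
  Since qemu-img is so terse here, a small diagnostic wrapper could at least
  distinguish a missing disk from a corrupt one; a hedged sketch (this is not
  Nova code, and the helper name is made up):

import os
import subprocess


def diagnose_qemu_convert_failure(path):
    # Sketch only: gather "qemu-img info" and "qemu-img check" output so the
    # eventual error report says whether the source disk is absent or
    # present-but-unreadable.
    if not os.path.exists(path):
        return "source disk %s does not exist" % path
    report = []
    for args in (["qemu-img", "info", path], ["qemu-img", "check", path]):
        proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True)
        out, _ = proc.communicate()
        report.append("%s (rc=%d):\n%s" % (" ".join(args),
                                           proc.returncode, out))
    return "\n".join(report)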

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1303802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1484086] [NEW] ec2tokens authentication is failing during Heat tests

2015-08-12 Thread Thomas Herve
Public bug reported:

As seen here for example: http://logs.openstack.org/54/194054/37/check
/gate-heat-dsvm-functional-orig-mysql/a812f55/

We're getting the error "Non-default domain is not supported", which
seems to have been introduced here:
https://review.openstack.org/#/c/208069/

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1484086

Title:
  ec2tokens authentication is failing during Heat tests

Status in Keystone:
  New

Bug description:
  As seen here for example: http://logs.openstack.org/54/194054/37/check
  /gate-heat-dsvm-functional-orig-mysql/a812f55/

  We're getting the error "Non-default domain is not supported", which
  seems to have been introduced here:
  https://review.openstack.org/#/c/208069/

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1484086/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1479869] [NEW] Creating a server fails with an error about checksum field

2015-07-30 Thread Thomas Herve
Public bug reported:

As seen here for example: http://logs.openstack.org/18/205018/2/gate
/gate-heat-dsvm-functional-orig-mysql/aa761d5/

It gets the error: Caught error: Field `checksum' cannot be None

On logstash it's been happening for some hours:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVmFsdWVFcnJvcjogRmllbGQgYGNoZWNrc3VtJyBjYW5ub3QgYmUgTm9uZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzgyNzEwNjEyMTJ9

From bug https://bugs.launchpad.net/cinder/+bug/1308058 it seems that
checksum ought to be allowed to be None, but apparently that is not
always the case, so I suppose there is a race condition somewhere. We
ought to get a better error if it's transient.
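
If checksum really can legitimately be unset, one fix direction is making the
field nullable in the versioned object; a hedged sketch of what that looks
like (the object and field here are illustrative, not Nova's actual
definitions):

from oslo_versionedobjects import base
from oslo_versionedobjects import fields


class ImageMetaSketch(base.VersionedObject):
    # Illustrative only: nullable=True lets the object accept checksum=None
    # for images whose data has not been hashed yet, instead of raising
    # "Field `checksum' cannot be None".
    fields = {
        'checksum': fields.StringField(nullable=True),
    }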

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1479869

Title:
  Creating a server fails with an error about checksum field

Status in OpenStack Compute (nova):
  New

Bug description:
  As seen here for example: http://logs.openstack.org/18/205018/2/gate
  /gate-heat-dsvm-functional-orig-mysql/aa761d5/

  It gets the error: Caught error: Field `checksum' cannot be None

  On logstash it's been happening for some hours:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiVmFsdWVFcnJvcjogRmllbGQgYGNoZWNrc3VtJyBjYW5ub3QgYmUgTm9uZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI4NjQwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MzgyNzEwNjEyMTJ9

  From bug https://bugs.launchpad.net/cinder/+bug/1308058 it seems that
  checksum ought to be allowed to be None, but apparently that is not
  always the case, so I suppose there is a race condition somewhere. We
  ought to get a better error if it's transient.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1479869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-05-26 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in OpenStack Key Management (Barbican):
  New
Status in OpenStack Telemetry (Ceilometer):
  New
Status in Cinder:
  New
Status in OpenStack Congress:
  New
Status in Designate:
  New
Status in OpenStack Dashboard (Horizon):
  New
Status in MagnetoDB - key-value storage service for OpenStack:
  New
Status in Magnum - Containers for OpenStack:
  New
Status in Manila:
  New
Status in Mistral:
  New
Status in Murano:
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  New
Status in Rally:
  Invalid
Status in OpenStack Data Processing (Sahara):
  New
Status in Openstack Database (Trove):
  New

Bug description:
  The Policy code is now managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.
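
  As a rough illustration of what consuming the graduated library looks like
  (the rule name and credential keys are placeholders, not any particular
  project's policy):

from oslo_config import cfg
from oslo_policy import policy

CONF = cfg.CONF
# The Enforcer loads the policy file named by oslo.config options, replacing
# the copied openstack/common/policy.py module from oslo-incubator.
_enforcer = policy.Enforcer(CONF)


def is_authorized(action, target, credentials):
    # do_raise=False returns a boolean instead of raising
    # PolicyNotAuthorized, mirroring the old incubator helper style.
    return _enforcer.enforce(action, target, credentials, do_raise=False)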

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382118] [NEW] Cloud-init doesn't support SSH ed25519 keys

2014-10-16 Thread Thomas Herve
Public bug reported:

Recent (?) OpenSSH versions support ed25519 keys, and Ubuntu specifies
the following in its sshd_config:

HostKey /etc/ssh/ssh_host_ed25519_key

Unfortunately, cloud-init deletes all the keys in /etc/ssh and doesn't
recreate that specific one. It should regenerate a key of that type, or
remove the option from the configuration file.

The main side effect currently is an ugly error message during
authentication. In /var/log/auth.log:

sshd: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
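
A sketch of the missing regeneration step (this is not cloud-init's actual
code; the key type list is an assumption mirroring a typical sshd_config):

import os
import subprocess

KEY_TYPES = ("rsa", "ecdsa", "ed25519")


def regenerate_host_keys(ssh_dir="/etc/ssh"):
    # Recreate every host key type referenced by sshd_config, including
    # ed25519, after the old keys have been deleted.
    for key_type in KEY_TYPES:
        path = os.path.join(ssh_dir, "ssh_host_%s_key" % key_type)
        if os.path.exists(path):
            continue
        subprocess.check_call(
            ["ssh-keygen", "-q", "-t", key_type, "-N", "", "-f", path])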

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1382118

Title:
  Cloud-init doesn't support SSH ed25519 keys

Status in Init scripts for use on cloud images:
  New

Bug description:
  Recent (?) OpenSSH versions support ed25519 keys, and Ubuntu specifies
  the following in its sshd_config:

  HostKey /etc/ssh/ssh_host_ed25519_key

  Unfortunately, cloud-init deletes all the keys in /etc/ssh and doesn't
  recreate that specific one. It should regenerate a key of that type,
  or remove the option from the configuration file.

  The main side effect currently is an ugly error message during
  authentication. In /var/log/auth.log:

  sshd: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1382118/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376586] [NEW] pre_live_migration is missing some disk information in case of block migration

2014-10-02 Thread Thomas Herve
Public bug reported:

The pre_live_migration API is called with disk information retrieved by
a call to driver.get_instance_disk_info when doing a block migration.
Unfortunately the block device information is not passed, so Nova calls
LibvirtDriver._create_images_and_backing with partial disk_info.

As a result, for example when migrating an instance with an NFS volume
attached, a useless file is created in the instance directory.
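
The shape of the fix would be to drop attached-volume entries from disk_info
before creating backing files; a very rough sketch under assumed data layouts
(the key names and helper below are illustrative, not Nova's exact
structures):

import json


def strip_volume_disks(disk_info_json, block_device_info):
    # Sketch only: filter the serialized disk_info so that
    # _create_images_and_backing only prepares local/ephemeral disks,
    # not paths that belong to attached volumes.
    volume_paths = set()
    for bdm in (block_device_info or {}).get('block_device_mapping', []):
        data = bdm.get('connection_info', {}).get('data', {})
        if data.get('device_path'):
            volume_paths.add(data['device_path'])
    local_disks = [d for d in json.loads(disk_info_json)
                   if d.get('path') not in volume_paths]
    return json.dumps(local_disks)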

** Affects: nova
 Importance: Undecided
 Assignee: Thomas Herve (therve)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Thomas Herve (therve)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376586

Title:
  pre_live_migration is missing some disk information in case of block
  migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  The pre_live_migration API is called with disk information retrieved
  by a call to driver.get_instance_disk_info when doing a block
  migration. Unfortunately the block device information is not passed,
  so Nova calls LibvirtDriver._create_images_and_backing with partial
  disk_info.

  As a result, for example when migrating an instance with an NFS
  volume attached, a useless file is created in the instance directory.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1376615] [NEW] Libvirt volumes are copied in case of block live migration

2014-10-02 Thread Thomas Herve
Public bug reported:

When doing a block live migration with volumes attached, the libvirt
drive-mirror operation tries to sync all the disks attached to the
domain, even if they are external volumes. It should only do so for the
ephemeral storage, so we need to be able to pass that information.
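
Newer libvirt lets the caller name the disks to copy; a hedged sketch
assuming libvirt >= 1.2.17 (which exposes VIR_MIGRATE_PARAM_MIGRATE_DISKS),
with placeholder device names:

import libvirt


def block_migrate_local_disks_only(dom, dest_uri, local_disk_targets):
    # local_disk_targets is e.g. ['vda']: the device names of the
    # ephemeral/local disks. Attached volumes are left out, so drive-mirror
    # no longer copies them to the destination.
    params = {libvirt.VIR_MIGRATE_PARAM_MIGRATE_DISKS: local_disk_targets}
    flags = (libvirt.VIR_MIGRATE_LIVE |
             libvirt.VIR_MIGRATE_PEER2PEER |
             libvirt.VIR_MIGRATE_NON_SHARED_INC)
    dom.migrateToURI3(dest_uri, params, flags)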

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1376615

Title:
  Libvirt volumes are copied in case of block live migration

Status in OpenStack Compute (Nova):
  New

Bug description:
  When doing a block live migration with volumes attached, the libvirt
  drive-mirror operation tries to sync all the disks attached to the
  domain, even if they are external volumes. It should only do so for
  the ephemeral storage, so we need to be able to pass that information.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1376615/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1334496] Re: Stacks panel shows multiple copies of a stack

2014-07-11 Thread Thomas Herve
*** This bug is a duplicate of bug 1332611 ***
https://bugs.launchpad.net/bugs/1332611

** This bug is no longer a duplicate of bug 1322097
   Stack list pagination is broken
** This bug has been marked a duplicate of bug 1332611
   --marker behavior broken for stack-list

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1334496

Title:
  Stacks panel shows multiple copies of a stack

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The stacks panel shows multiple copies of a stack. In this case the
  Heat stack was a failed stack, on Fedora 30. The stack is an auto-
  scaling-group. The OpenStack deployment is a devstack (master git)
  installed on 2014/06/25.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1334496/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1333410] Re: sqla 0.9.5 breaks the world

2014-06-25 Thread Thomas Herve
** No longer affects: heat

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1333410

Title:
  sqla 0.9.5 breaks the world

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  There are several different tests failing here:

  http://logs.openstack.org/79/101579/4/check/gate-nova-
  python26/990cd05/console.html

  Checking on the ec2 failure shows it started on 6/23:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5hcGkuZWMyLnRlc3RfY2xvdWQuQ2xvdWRUZXN0Q2FzZS50ZXN0X2Fzc29jaWF0ZV9kaXNhc3NvY2lhdGVfYWRkcmVzc1wiIEFORCB0YWdzOlwiY29uc29sZVwiIiwiZmllbGRzIjpbXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI2MDQ4MDAiLCJncmFwaG1vZGUiOiJjb3VudCIsInRpbWUiOnsidXNlcl9pbnRlcnZhbCI6MH0sInN0YW1wIjoxNDAzNTU0ODE0OTQ2fQ==

  I'm guessing this is the change that caused the problem:

  
https://github.com/openstack/nova/commit/077e3c770ebeebd037ce882863a6b5dcefd644cf

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1333410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290344] [NEW] Password field should be optional during Heat stack creation

2014-03-10 Thread Thomas Herve
Public bug reported:

Currently a password is required when creating a stack using Horizon.
This requirement was added in bug #1199549 after a change in Heat that
is no longer up to date. Depending on the configuration, the password
is actually not needed, so the field should probably be optional in the
UI. An error will still be returned if a password was needed but not
provided.

Ideally, there would be a configuration switch so that the field doesn't
appear at all when it isn't needed (when trusts are enabled in Heat).
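
A minimal sketch of the Horizon-side change (an illustrative Django form
only, not the actual Horizon class):

from django import forms


class CreateStackFormSketch(forms.Form):
    # required=False makes the password optional in the UI; Heat will still
    # return an error if a password turns out to be necessary for the
    # deployment's configuration.
    password = forms.CharField(
        label="Password for user",
        required=False,
        widget=forms.PasswordInput(render_value=False))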

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1290344

Title:
  Password field should be optional during Heat stack creation

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Currently a password is required when creating a stack using Horizon.
  This requirement was added in bug #1199549 after a change in Heat
  that is no longer up to date. Depending on the configuration, the
  password is actually not needed, so the field should probably be
  optional in the UI. An error will still be returned if a password was
  needed but not provided.

  Ideally, there would be a configuration switch so that the field
  doesn't appear at all when it isn't needed (when trusts are enabled
  in Heat).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1290344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1261442] [NEW] Spurious error from ComputeManager._run_image_cache_manager_pass

2013-12-16 Thread Thomas Herve
Public bug reported:

I got the following errors in a tempest test at
http://logs.openstack.org/97/62397/1/check/check-tempest-dsvm-
full/3f7b8c3:


2013-12-16 16:07:52.189 27621 DEBUG nova.openstack.common.processutils [-] 
Result was 1 execute 
/opt/stack/new/nova/nova/openstack/common/processutils.py:172
2013-12-16 16:07:52.189 27621 ERROR nova.openstack.common.periodic_task [-] 
Error during ComputeManager._run_image_cache_manager_pass: Unexpected error 
while running command.
Command: env LC_ALL=C LANG=C qemu-img info 
/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk
Exit code: 1
Stdout: ''
Stderr: qemu-img: Could not open 
'/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk': No 
such file or directory\n
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
Traceback (most recent call last):
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/openstack/common/periodic_task.py, line 180, in 
run_periodic_tasks
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
task(self, context)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/compute/manager.py, line 5210, in 
_run_image_cache_manager_pass
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
self.driver.manage_image_cache(context, filtered_instances)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/driver.py, line 4650, in 
manage_image_cache
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
self.image_cache_manager.verify_base_images(context, all_instances)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/imagecache.py, line 603, in 
verify_base_images
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
inuse_backing_images = self._list_backing_images()
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/imagecache.py, line 345, in 
_list_backing_images
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
backing_file = virtutils.get_disk_backing_file(disk_path)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/libvirt/utils.py, line 442, in 
get_disk_backing_file
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
backing_file = images.qemu_img_info(path).backing_file
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/virt/images.py, line 56, in qemu_img_info
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
'qemu-img', 'info', path)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/utils.py, line 175, in execute
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
return processutils.execute(*cmd, **kwargs)
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task   File 
/opt/stack/new/nova/nova/openstack/common/processutils.py, line 178, in 
execute
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
cmd=' '.join(cmd))
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
ProcessExecutionError: Unexpected error while running command.
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task 
Command: env LC_ALL=C LANG=C qemu-img info 
/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task Exit 
code: 1
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task Stdout: 
''
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task Stderr: 
qemu-img: Could not open 
'/opt/stack/data/nova/instances/95dbf14f-3abb-42c7-94c5-dd7355ecd78a/disk': No 
such file or directory\n
2013-12-16 16:07:52.189 27621 TRACE nova.openstack.common.periodic_task
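
A hedged sketch of how the periodic task could tolerate the race
(illustrative helper, not the actual imagecache code): the instance
directory can be deleted between listing it and running qemu-img info, so a
vanished disk should count as having no backing file rather than aborting
the whole pass.

import os


def backing_file_or_none(disk_path, get_disk_backing_file):
    # Sketch only: treat a disk that disappeared mid-pass as having no
    # backing file instead of letting the ProcessExecutionError kill
    # _run_image_cache_manager_pass.
    if not os.path.exists(disk_path):
        return None
    try:
        return get_disk_backing_file(disk_path)
    except Exception:
        # Still racy: the file may vanish after the exists() check above.
        if not os.path.exists(disk_path):
            return None
        raise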

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1261442

Title:
  Spurious error from ComputeManager._run_image_cache_manager_pass

Status in OpenStack Compute (Nova):
  New

Bug description:
  I got the following errors in a tempest test at
  http://logs.openstack.org/97/62397/1/check/check-tempest-dsvm-
  full/3f7b8c3:


  2013-12-16 16:07:52.189 27621 DEBUG nova.openstack.common.processutils [-] 
Result was 1 execute 
/opt/stack/new/nova/nova/openstack/common/processutils.py:172
  2013-12-16 16:07:52.189 27621 ERROR nova.openstack.common.periodic_task [-] 
Error during