[Yahoo-eng-team] [Bug 1863534] [NEW] [openstacksdk] Create image doesn't validate checksum correctly using sha256 algorithm

2020-02-16 Thread Tushar Patil
Public bug reported:

I have set the ``hashing_algorithm`` config option to sha256 in
glance.

Now, I'm trying to create an image using openstacksdk.

I passed the image's hash value as the sha256 parameter of the
create_image method, but it fails with the error "Image checksum
verification failed".

Reason: glance_store calculates the image ``checksum`` using the md5
algorithm, while it calculates owner_specified.openstack.sha256 /
os_hash_value of an image using the algorithm set in
``hashing_algorithm``. openstacksdk, however, compares both
user-supplied values against the md5-based checksum, as shown below:


  checksum = data.get('checksum')
  if checksum:
      valid = (checksum == md5 or checksum == sha256)
      if not valid:
          raise Exception('Image checksum verification failed')

IMO, the md5 value can still be compared with the checksum, but the
sha256 value should be compared with the os_hash_value that glance
calculates and sets for the image.
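
A minimal sketch of the comparison I would expect instead (illustrative
only; md5 and sha256 here are the user-supplied values and data is the
image record returned by glance):

  checksum = data.get('checksum')
  os_hash_value = data.get('os_hash_value')
  valid = True
  if checksum and md5:
      # glance's checksum field is always md5, so only compare it with md5
      valid = (checksum == md5)
  if sha256 and os_hash_value:
      # compare the user-supplied sha256 against the hash glance computed
      # with its configured hashing_algorithm
      valid = valid and (sha256 == os_hash_value)
  if not valid:
      raise Exception('Image checksum verification failed')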


For the cirros-0.4.0-x86_64-disk.img image:
md5 checksum is 443b7623e27ecf03dc9e01ee93f67afe
sha256 checksum is a8dd75ecffd4cdd96072d60c2237b448e0c8b2bc94d57f10fdbc8c481d9005b8

If I pass a8dd75ecffd4cdd96072d60c2237b448e0c8b2bc94d57f10fdbc8c481d9005b8
as the sha256 parameter to create_image, image creation fails.

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- [openstacksdk] Create image doesn't validate checksum using sha256 algorithm
+ [openstacksdk] Create image doesn't validate checksum correctly using sha256 algorithm

https://bugs.launchpad.net/bugs/1863534



[Yahoo-eng-team] [Bug 1863209] [NEW] [openstacksdk] image name is not set if filename is not passed to create_image method

2020-02-13 Thread Tushar Patil
Public bug reported:

I want to create an image without uploading image data using
openstacksdk create_image method.

sdkconnection.image.create_image(name, allow_duplicates=True, **fields)

fields = {"min_disk": min_disk, "min_ram": min_ram,
          "disk_format": "qcow2",
          "container_format": "bare",
          "sha256": ,
          "visibility": "private"}

The image is created successfully, but no name is set on it.
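
A minimal repro sketch (the cloud name and field values here are
illustrative assumptions, not taken from my environment):

  import openstack

  conn = openstack.connect(cloud='mycloud')  # assumed clouds.yaml entry
  fields = {"min_disk": 0, "min_ram": 0,
            "disk_format": "qcow2",
            "container_format": "bare",
            "visibility": "private"}
  # no filename/data is passed, so no image data is uploaded
  image = conn.image.create_image("my-image", allow_duplicates=True,
                                  **fields)
  print(image.name)  # expected "my-image"; the created image has no name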

** Affects: glance
 Importance: Undecided
 Status: New

** Summary changed:

- [openstacksdk] image name is not set if filename is not used during create_image
+ [openstacksdk] image name is not set if filename is not passed to create_image method

https://bugs.launchpad.net/bugs/1863209



[Yahoo-eng-team] [Bug 1804062] Re: test_hacking fails for python 3.6.7 and newer

2019-02-25 Thread Tushar Patil
** Also affects: masakari
   Importance: Undecided
   Status: New

https://bugs.launchpad.net/bugs/1804062

Title:
  test_hacking fails for python 3.6.7 and newer

Status in masakari:
  New
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  The check for double words in test_hacking is failing in python 3.6.7
  (released in ubuntu 18.04 within the last few days) and in new
  versions of 3.7.x. This is because of this change to python:
  https://bugs.python.org/issue33899 .

  This is causing failures in the python 3.6 unit tests for nova.

  The fix ought to be adding a newline to the code sample. Maybe.
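
  A minimal sketch of the underlying tokenize change (assumes CPython >=
  3.6.7; the sample source string is illustrative):

    import io
    import tokenize

    src = "a = 1"  # note: no trailing newline, like the hacking code sample
    tokens = [tokenize.tok_name[t.type]
              for t in tokenize.generate_tokens(io.StringIO(src).readline)]
    # On newer interpreters a NEWLINE token is implicitly emitted before
    # ENDMARKER (https://bugs.python.org/issue33899), which changes the
    # token stream the double-word check iterates over.
    print(tokens)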




[Yahoo-eng-team] [Bug 1726237] [NEW] Installing stable/ocata using expand/migrate/contract for the first time raises TypeError

2017-10-22 Thread Tushar Patil
Public bug reported:

If you install the stable/ocata branch for the first time using the
latest zero-downtime database upgrade method, the "db migrate" command
fails with the following error.

$ glance-manage db migrate

Output:
2017-10-19 02:31:26.550 CRITICAL glance [-] TypeError: 'function' object has no attribute '__getitem__'
   2017-10-19 02:31:26.550 TRACE glance Traceback (most recent call last):
   2017-10-19 02:31:26.550 TRACE glance   File "./glance-manage", line 10, in <module>
   2017-10-19 02:31:26.550 TRACE glance     sys.exit(main())
   2017-10-19 02:31:26.550 TRACE glance   File "/opt/stack/glance/glance/cmd/manage.py", line 460, in main
   2017-10-19 02:31:26.550 TRACE glance     return CONF.command.action_fn(*func_args, **func_kwargs)
   2017-10-19 02:31:26.550 TRACE glance   File "/opt/stack/glance/glance/cmd/manage.py", line 175, in contract
   2017-10-19 02:31:26.550 TRACE glance     if data_migrations.has_pending_migrations(db_api.get_engine()):
   2017-10-19 02:31:26.550 TRACE glance   File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/__init__.py", line 61, in has_pending_migrations
   2017-10-19 02:31:26.550 TRACE glance     return any([x.has_migrations(engine) for x in migrations])
   2017-10-19 02:31:26.550 TRACE glance   File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/ocata_migrate01_community_images.py", line 43, in has_migrations
   2017-10-19 02:31:26.550 TRACE glance     rows_with_pending_shared = (select[images.c.id]
   2017-10-19 02:31:26.550 TRACE glance TypeError: 'function' object has no attribute '__getitem__'
   2017-10-19 02:31:26.550 TRACE glance

Reason: 
https://github.com/openstack/glance/blob/stable/ocata/glance/db/sqlalchemy/alembic_migrations/data_migrations/ocata_migrate01_community_images.py#L43
Actual line: rows_with_pending_shared = (select[images.c.id]
Expected line: rows_with_pending_shared = (select([images.c.id])
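
A minimal sketch of the buggy vs. corrected call (SQLAlchemy 1.x style,
as used by Ocata glance; the table definition here is illustrative):

  from sqlalchemy import Column, Integer, MetaData, Table, select

  meta = MetaData()
  images = Table('images', meta, Column('id', Integer, primary_key=True))

  # Buggy: subscripting the select function itself raises
  # TypeError: 'function' object has no attribute '__getitem__'
  #     rows_with_pending_shared = (select[images.c.id] ...)

  # Fixed: call select() with a list of columns
  rows_with_pending_shared = select([images.c.id])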


Steps to reproduce:
1. Clone glance stable/ocata branch.
2. CREATE DATABASE glance CHARACTER SET utf8;
3. glance-manage db expand
4. glance-manage db migrate

** Affects: glance
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1726237


[Yahoo-eng-team] [Bug 1726209] [NEW] Db expand fails if the db is previously synced using 'db sync' command

2017-10-22 Thread Tushar Patil
Public bug reported:

If you have synced the database using the "db sync" command and then
attempt to run the "db expand" command, it fails with the following
error.

Actual Result:
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade mitaka02 -> ocata_expand01, add visibility to images
CRITI [glance] Unhandled error
Traceback (most recent call last):
  File "/usr/local/bin/glance-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/glance/glance/cmd/manage.py", line 464, in main
    return CONF.command.action_fn(*func_args, **func_kwargs)
  File "/opt/stack/glance/glance/cmd/manage.py", line 144, in expand
    self.sync(version=expand_head)
  File "/opt/stack/glance/glance/cmd/manage.py", line 120, in sync
    alembic_command.upgrade(a_config, version)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 254, in upgrade
    script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script/base.py", line 425, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/pyfiles.py", line 93, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/util/compat.py", line 75, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/env.py", line 88, in <module>
    run_migrations_online()
  File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/env.py", line 83, in run_migrations_online
    context.run_migrations()
  File "<string>", line 8, in run_migrations
  File "/usr/local/lib/python2.7/dist-packages/alembic/runtime/environment.py", line 836, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/runtime/migration.py", line 330, in run_migrations
    step.migration_fn(**kw)
  File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/versions/ocata_expand01_add_visibility.py", line 149, in upgrade
    _add_visibility_column(meta)
  File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/versions/ocata_expand01_add_visibility.py", line 126, in _add_visibility_column
    op.add_column('images', v_col)
  File "<string>", line 8, in add_column
  File "<string>", line 3, in add_column
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations/ops.py", line 1565, in add_column
    return operations.invoke(op)
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations/base.py", line 318, in invoke
    return fn(self, operation)
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations/toimpl.py", line 123, in add_column
    schema=schema
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 172, in add_column
    self._exec(base.AddColumn(table_name, column, schema=schema))
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 118, in _exec
    return conn.execute(construct, *multiparams, **params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
    return connection._execute_ddl(self, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1002, in _execute_ddl
    compiled
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1398, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 166, in execute
    result = self._query(query)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 322, in _query
    conn.query(q)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 856, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1057, in _read_query_result
    result.read()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1340, in read
    first_packet = self.connection._read_packet()
  File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line

[Yahoo-eng-team] [Bug 1719269] Re: Unable to run individual test

2017-09-25 Thread Tushar Patil
As per the ostestr documentation [1], the -n option is used to run a
single test. So it's not a workaround; it is working as expected.

Marking it as invalid.


[1]: https://docs.openstack.org/os-testr/latest/user/ostestr.html

** Changed in: glance
   Status: New => Invalid

https://bugs.launchpad.net/bugs/1719269

Title:
  Unable to run individual test

Status in Glance:
  Invalid

Bug description:
  If you try to run an individual test using tox, it runs the entire
  test suite.

  Steps to reproduce:
  Run below command,

  tox -e py27
  
glance.tests.unit.test_auth.TestKeystoneAuthPlugin.test_get_plugin_from_strategy_keystone

  or

  tox -e py35
  
glance.tests.unit.test_auth.TestKeystoneAuthPlugin.test_get_plugin_from_strategy_keystone

  Instead of running a single unit test, it runs the entire test suite.

  Workaround:

  The workaround so far is to pass -- -n when running a single test,
  like below:

  tox -e py27 -- -n
  
glance.tests.unit.test_auth.TestKeystoneAuthPlugin.test_get_plugin_from_strategy_keystone




[Yahoo-eng-team] [Bug 1430553] [NEW] Volume remains in detaching status when a user calls a detach request immediately after an attach request

2015-03-10 Thread Tushar Patil
 oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 419, in decorated_function
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 88, in wrapped
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 71, in wrapped
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 303, in decorated_function
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     pass
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 289, in decorated_function
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 331, in decorated_function
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     kwargs['instance'], e, sys.exc_info())
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 319, in decorated_function
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     return function(self, context, *args, **kwargs)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4642, in detach_volume
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     self._detach_volume(context, instance, bdm)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4588, in _detach_volume
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     connection_info = jsonutils.loads(bdm.connection_info)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/jsonutils.py", line 188, in loads
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     return json.loads(strutils.safe_decode(s, encoding), **kwargs)
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/strutils.py", line 145, in safe_decode
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher     raise TypeError("%s can't be decoded" % type(text))
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher TypeError: <type 'NoneType'> can't be decoded
2015-02-28 01:09:20,000.256 7026 TRACE oslo.messaging.rpc.dispatcher

### Attach completed

2015-02-28 01:09:20,000.980 7026 DEBUG nova.openstack.common.lockutils [req-f5978fea-cfa7-4bef-af37-d32d816fe78c ] Releasing semaphore "7695a528-c73b-4b44-b4de-2c26974ac471" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2015-02-28 01:09:20,000.980 7026 DEBUG nova.openstack.common.lockutils [req-f5978fea-cfa7-4bef-af37-d32d816fe78c ] Semaphore / lock released "do_attach_volume" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275
2015-02-28 01:09:20,000.981 7026 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "7695a528-c73b-4b44-b4de-2c26974ac471" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:229
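
The traceback shows _detach_volume calling jsonutils.loads on
bdm.connection_info while it is still None, because the attach started
just before has not yet recorded the connection info. A minimal sketch of
the failure mode and a guard against it (illustrative only, not nova's
actual fix):

  import json

  def load_connection_info(bdm):
      # If detach races ahead of attach, connection_info has not been
      # written to the block device mapping yet and is still None.
      if bdm.connection_info is None:
          raise ValueError('volume attach still in progress; '
                           'connection_info is not recorded yet')
      return json.loads(bdm.connection_info)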

** Affects: nova
 Importance: Undecided
 Assignee: Tushar Patil (tpatil)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Tushar Patil (tpatil)


[Yahoo-eng-team] [Bug 1379526] [NEW] Rebuild instance doesn't work when instance is stopped

2014-10-09 Thread Tushar Patil
Public bug reported:

By design, rebuilding an instance is allowed when the vm_state is in the
ERROR, ACTIVE, or STOPPED state.
But rebuild fails to bring the vm_state back to ACTIVE when it is in the
STOPPED state.

Steps to reproduce:
1. Create a new instance and wait until it becomes ACTIVE.
2. Stop the instance
3. Rebuild the instance

Expected Results: vm_state should become active
Actual Results: vm_state is stopped

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1379526



[Yahoo-eng-team] [Bug 1339564] Re: glance image-delete on an image with the status "saving" doesn't delete the image's file from store
2014-09-23 Thread Tushar Patil
*** This bug is a duplicate of bug 1243127 ***
https://bugs.launchpad.net/bugs/1243127

** This bug is no longer a duplicate of bug 1329319
   Restart glance when a image is uploading, then delete the image. The data of the image is not deleted
** This bug has been marked a duplicate of bug 1243127
   Image is not clean when uploading image kill glance process

https://bugs.launchpad.net/bugs/1339564

Title:
  glance image-delete on an image with the status "saving" doesn't
  delete the image's file from store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  After running the scenario described in
  bugs.launchpad.net/cinder/+bug/1339545, I've deleted two images that
  were stuck in "saving" status with
  # glance image-delete <image-id> <image-id>

  both of the images' files were still in the store:
  # ls -l /var/lib/glance/images
  -rw-r-. 1 glance glance  2158362624 Jul  9 10:18 d4da7dea-c94d-4c9e-a987-955a905a7fed
  -rw-r-. 1 glance glance  1630994432 Jul  9 10:09 8532ef07-3dfa-4d63-8537-033c31b16814

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.12.0-1.el7ost.noarch
  python-glance-2014.1-4.el7ost.noarch
  openstack-glance-2014.1-4.el7ost.noarch

  
  How reproducible:

  
  Steps to Reproduce:
  1. Run the scenario from bugs.launchpad.net/cinder/+bug/1339545
  2. Delete the image:
  # glance image-delete <image-id>

  
  Actual results:
  The file is still in the store.

  Expected results:
  The file has been deleted from the store.

  Additional info:
  The logs are attached -
  image uuids:
  d4da7dea-c94d-4c9e-a987-955a905a7fed
  8532ef07-3dfa-4d63-8537-033c31b16814




[Yahoo-eng-team] [Bug 1337367] Re: The add method of swift.py has a problem. When a large image is uploading and the glance-api is restarted, then we can not delete the image content that has been uploaded in swift
2014-09-23 Thread Tushar Patil
*** This bug is a duplicate of bug 1243127 ***
https://bugs.launchpad.net/bugs/1243127

** This bug is no longer a duplicate of bug 1329319
   Restart glance when a image is uploading, then delete the image. The data of the image is not deleted
** This bug has been marked a duplicate of bug 1243127
   Image is not clean when uploading image kill glance process

https://bugs.launchpad.net/bugs/1337367

Title:
  The add method of swift.py has a problem. When a large image is
  uploading and the glance-api is restarted, then we can not delete the
  image content that has been uploaded in swift

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  1. upload a large image, for example 50G
  2. kill glance-api when the image status is "saving"
  3. restart glance-api
  4. delete the image

  The image content that has been uploaded cannot be deleted. I think the
  add method of glance/swift/BaseStore should put the object manifest
  onto swift first, before we upload the content, when we upload a large
  image in chunks:

   manifest = "%s/%s-" % (location.container, location.obj)
   headers = {'ETag': hashlib.md5().hexdigest(), 'X-Object-Manifest': manifest}
   connection.put_object(location.container, location.obj, None, headers=headers)

  The code above should be placed before the code that uploads the image
  chunks.




[Yahoo-eng-team] [Bug 1368844] [NEW] Block migration fails with volume size 2 GB or more

2014-09-12 Thread Tushar Patil
Public bug reported:

Tested on master code with commit id:
fd72c308fc6adc1f5d07c5287c1db5bfc12328fc

volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver

Case 1: Instance is booted using a volume

Steps to reproduce:
1. Create a bootable volume of size 2 GB using an image.
2. Boot an instance with this volume on host 1.
3. Block migrate the instance to host 2.
4. The instance is not migrated; migration fails with the error log message below on the source compute node.

Case 2: Instance is booted using an image, then a volume is attached to the newly booted instance

Steps to reproduce:
1. Create a volume of size 2 GB.
2. Boot an instance using an image on host 1.
3. Attach the 2 GB volume to this instance.
4. Block migrate the instance to host 2.
5. The instance is not migrated; migration fails with the error log message below on the source compute node.
 

Error Log message on the source compute node:
{{{
2014-09-11 02:42:41.884 ERROR nova.virt.libvirt.driver [-] [instance: ca59bee5-bae5-4c61-9e01-f76a1df3d324]
Live Migration failure: operation failed: migration job: unexpectedly failed
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 115, in wait
    listener.cb(fileno)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5128, in _live_migration
    recover_method(context, instance, dest, block_migration)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5122, in _live_migration
    CONF.libvirt.live_migration_bandwidth)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 183, in doit
    result = proxy_call(self._autowrap, f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 141, in proxy_call
    rv = execute(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 122, in execute
    six.reraise(c, e, tb)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
    rv = meth(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1582, in migrateToURI2
    if ret == -1: raise libvirtError('virDomainMigrateToURI2() failed', dom=self)
libvirtError: operation failed: migration job: unexpectedly failed
Removing descriptor: 19
}}}

** Affects: nova
 Importance: Undecided
 Status: New

https://bugs.launchpad.net/bugs/1368844


[Yahoo-eng-team] [Bug 1367944] [NEW] tenant usage information api is consuming a lot of memory

2014-09-10 Thread Tushar Patil
Public bug reported:

I have noticed that when the tenant usage information API is invoked for
a particular tenant owning a large number of instances (both active and
terminated), there is a sudden increase in nova-api process memory
consumption, from 500 MB up to 2.3 GB.

It is caused by a SQL query retrieving a large number of
instance_system_metadata records for those instances using a
WHERE ... IN clause (a sketch of the pattern follows the list below).

At the time of getting the tenant usage information, I had approx.
120,000 instances in the db for a particular tenant (a few active, the
rest terminated).

Also, this plugin unnecessarily fetches the following information about
the instances from the db, further degrading the performance of the API:
1. metadata
2. info_cache
3. security_groups
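
An illustrative sketch of the query pattern described above (assumed
names, not nova's actual code); all matching rows are materialized in one
shot:

  from sqlalchemy import select

  def fetch_all_sysmeta(conn, sysmeta_table, instance_uuids):
      # instance_uuids can hold >100,000 entries for a large tenant, so
      # this single IN-clause query loads every instance_system_metadata
      # row for the tenant into memory at once.
      query = select([sysmeta_table.c.instance_uuid,
                      sysmeta_table.c.key,
                      sysmeta_table.c.value]).where(
          sysmeta_table.c.instance_uuid.in_(instance_uuids))
      return conn.execute(query).fetchall()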

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: ntt

** Tags added: ntt

https://bugs.launchpad.net/bugs/1367944



[Yahoo-eng-team] [Bug 1202690] Re: Incorrect HTTP response while creating flavor without name

2013-10-09 Thread Tushar Patil
*** This bug is a duplicate of bug 1220087 ***
https://bugs.launchpad.net/bugs/1220087

Alex: You are correct, this issue is fixed.

If a user passes only white spaces as the flavor name, it should give an
error:
{
    "flavor": {
        "name": " ",
        "ram": 1024,
        "vcpus": 2,
        "disk": 10,
        "id": 10,
        "os-flavor-access:is_public": false
    }
}
Currently it creates the flavor successfully. I will file a new bug to
fix this issue. A sketch of the check I have in mind follows.
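
A minimal sketch of the whitespace check (hypothetical helper, not
nova's actual validator):

  def validate_flavor_name(name):
      # reject missing, empty, or whitespace-only names with a 400-style
      # error instead of silently creating the flavor
      if name is None or not name.strip():
          raise ValueError("Invalid input received: 'name' argument is "
                           "mandatory")
      return name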

** This bug has been marked a duplicate of bug 1220087
   nova api should raise badrequest when create flavor with incorrect format of data in request body

https://bugs.launchpad.net/bugs/1202690

Title:
  Incorrect HTTP response while creating flavor without name

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  Bug reproduced on nova/master commit ID:
  80ccf6dd956ce1b2754db276e8b40f058a7e32eb

  When creating a new flavor, if the "name" key is not specified in the
  JSON request body, it returns a 500 error response. Ideally it should
  return badRequest (400).

  Test data:
  {
      "flavor": {
          "ram": 1024,
          "vcpus": 2,
          "disk": 10,
          "id": 12354
      }
  }

  
  Current error response:
  {
      "computeFault": {
          "message": "The server has either erred or is incapable of performing the requested operation.",
          "code": 500
      }
  }

  
  Expected output should return an error response as shown below:
  {
      "badRequest": {
          "message": "Invalid input received: 'name' argument is mandatory",
          "code": 400
      }
  }

